What CIOs Need to Know About the Technical Aspects of AI Integration

An AI integration modifies a business process and how employees work, but it also requires integration with IT infrastructure and systems. This is where some of IT's most technically savvy staff will be working, and they will want to discuss technology integration approaches and ideas. Most CIOs aren't software engineers, but they are responsible for having a working knowledge of all things IT so they can hold meaningful dialogues with their most technical employees and help define technology direction. What do CIOs need to know about the technical side of AI integration?

1. AI technical integration is about embedding AI in systems and workflows

The assumption here is that by the time your staff gets into technical design and tooling decisions, the business case and application for AI have already been decided. Now the task is deciding how to technically embed and integrate the AI into the IT infrastructure and applications that will support the business process.

2. Modeling is first and foremost

AI systems are built around models that use data stores, algorithms for query, and machine learning that expands the AI's body of knowledge as the AI recognizes common logic patterns in data and assimilates knowledge from them. There are many different AI models to choose from. In most cases, companies use predefined AI models from vendors and then expand on them. In other cases, companies elect to build their own models "from scratch."

Building from scratch usually means that the organization has an on-board data science group with expertise in AI model building. Common AI model frameworks (e.g., TensorFlow, PyTorch, and Keras) provide the software resources and tools. These model-building technologies are not familiar to most IT staffs. They use dataflow graphs and structures that define how data will move through the graph, and operational flows must be defined for the logic that operates on that data. The model-building software also provides for algorithm development, model training, business rule definitions, and the machine learning that the model executes on its own as it "learns" from the data it ingests. IT might not know this material, but it can't afford to ignore it. IT and CIOs need at least a working knowledge of how these open-source model-building technologies work, because these models must inevitably interface with IT infrastructure and data.
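For CIOs who want a concrete picture of what model-building frameworks do, the following is a minimal sketch in PyTorch, one of the frameworks named above. The model shape, feature count, and synthetic training data are illustrative assumptions, not a recommended design; a real project would train on data drawn from the enterprise repositories discussed later in this article.

```python
# A minimal sketch of the model-building workflow described above, using PyTorch.
# The feature count, layer sizes, and synthetic data are illustrative assumptions.
import torch
import torch.nn as nn

# Define the dataflow: feature vectors in, a single score between 0 and 1 out.
class ScoringModel(nn.Module):
    def __init__(self, n_features: int = 10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

# Synthetic stand-in for data pulled from an enterprise data repository.
features = torch.randn(256, 10)
labels = (features.sum(dim=1, keepdim=True) > 0).float()

model = ScoringModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Training loop: the "learning from ingested data" the article refers to.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```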
3. IT infrastructure comes next

How to integrate an AI system with existing IT infrastructure is where CIOs can expect significant dialogue with their technical staffs. The AI has to be integrated seamlessly with the top-to-bottom tech stack if it is going to work. This means discussing how and where data from the AI will be stored, with SQL and NoSQL databases being the early favorites. Middleware that enables the AI to interoperate with other IT systems must also be interfaced with. Most AI models are open source, which can simplify integration, but integration still requires middleware APIs (application programming interfaces) such as REST (representational state transfer), which integrates the AI system with internet-based resources, or GraphQL (graph query language), which facilitates the integration of data from multiple sources.

It's IT that decides how to deploy the optimal data stores, infrastructure storage, and connectors needed to support the AI, and there are likely to be different options (and costs) for deployment. This is where the CIO needs to dialogue with technical staff.
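As one illustration of the middleware discussion above, here is a minimal sketch of a REST-style call from an enterprise application to an AI scoring service. The endpoint URL, payload fields, and credential header are hypothetical placeholders rather than any particular vendor's API; a GraphQL integration would follow a similar pattern, with queries posted to a single endpoint.

```python
# A minimal sketch of a REST-style middleware call to an AI service.
# The endpoint, payload fields, and token are hypothetical placeholders.
import requests

AI_SCORING_URL = "https://ai-gateway.example.internal/v1/score"  # assumed internal endpoint

def get_ai_score(record: dict) -> float:
    """Send one business record to the AI service and return its score."""
    response = requests.post(
        AI_SCORING_URL,
        json={"features": record},
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["score"]

if __name__ == "__main__":
    print(get_ai_score({"customer_id": "12345", "order_total": 149.99}))
```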
4. Data quality

The AI group will rely on IT to provide quality data for the AI. This is accomplished in two ways: first, by ensuring that all data coming into the AI data repository is "clean" (i.e., the data has been transformed by software such as ETL (extract, transform, load), is accurate, and can interact with the other data in the AI data repository); and second, by ensuring that the data is secure (i.e., encrypted between transfer points or checked at the edges of each resource the data must traverse). Whether it is working with outside vendors, vetting vendors for clean, secure data and periodically auditing them, or defining the data transformations and security technology and operations that must be put in place internally, it is all IT's responsibility. The CIO will need to dialogue on technical levels with vendors and with the IT database, storage, security, systems, applications, and networking groups.
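The cleansing half of that responsibility is easiest to picture as a small transform step. The sketch below shows the idea; the column names and validity rules are illustrative assumptions about what "clean" means for a particular repository, and production pipelines would typically rely on a dedicated ETL tool rather than hand-written scripts.

```python
# A minimal sketch of an extract-transform-load (ETL) cleansing step.
# The column names and validity rules below are illustrative assumptions.
import csv
from typing import Optional

def transform(row: dict) -> Optional[dict]:
    """Normalize one raw record; return None if it fails basic quality checks."""
    try:
        amount = float(row["amount"])
    except (KeyError, ValueError):
        return None  # reject records with missing or malformed amounts
    if amount < 0:
        return None  # reject out-of-range values
    return {
        "customer_id": row.get("customer_id", "").strip().upper(),
        "amount": round(amount, 2),
        "region": row.get("region", "UNKNOWN").strip().upper(),
    }

def etl(source_path: str, target_path: str) -> None:
    """Extract rows from a CSV export, transform each one, load clean rows to a new file."""
    with open(source_path, newline="") as src, open(target_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=["customer_id", "amount", "region"])
        writer.writeheader()
        for row in csv.DictReader(src):
            clean = transform(row)
            if clean is not None:
                writer.writerow(clean)
```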
5. AI security

The data in (and access to) AI must be secure at all times. To arrive at this point, security must be enacted on multiple levels, and it will entail technical discussions and decision-making to get there.

First and foremost is data security. Much of this has already been discussed under data quality, and it will involve most IT departmental teams.

Second is user access authorities and activity monitoring. Who gets access to what, and how will you monitor user activities? The users can define their own authorization lists, and IT can implement them, but complications arise when it comes to monitoring user activities. If, for example, user activities occur only with onsite data repositories, sites can use a technology like IAM (identity and access management), which gives IT granular visibility into every user activity. However, if cloud-based access is involved, IAM won't be able to monitor this activity at any level of detail. It might become necessary to use CIEM (cloud infrastructure entitlement management) software instead to gain granular observation of user activity in the cloud. Then there are "umbrella" technologies like IGA (identity governance and administration) that can serve as an overarching framework for both IAM and CIEM. The IT security group (and their CIO) must decide which strategy to adopt for comprehensive protection of AI.

Finally, there are malware threats that are unique to AI. Yes, you can use standard malware detection to ward off attacks from bad actors on AI data, just as you would on standard data and applications, but the plot thickens from there. For example, there are attacks on AI systems that inject inaccurate data or change the labels and features of data. These skew the results derived from that data and produce erroneous recommendations and decisions. The practice is known as "data poisoning." IT is expected to come up with a data validation technique for incoming data that can detect possible poisoning attempts and stop them. This could involve data sanitization technologies or data source verifications, and it is possible that inserting these technologies could slow data transport. The technical staff needs to weigh these options, and CIOs should insert themselves into the discussions. A sketch of one such validation check follows.
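To make the validation idea concrete, the following is a minimal sketch of one possible check: compare an incoming batch's label mix and feature average against a trusted baseline and quarantine batches that drift too far. The field names, baseline figures, and thresholds are illustrative assumptions; real defenses would combine several sanitization and source-verification techniques.

```python
# A minimal sketch of a data-validation check for possible poisoning.
# Baseline figures, field names, and thresholds are illustrative assumptions.
from statistics import mean

BASELINE_POSITIVE_RATE = 0.12   # assumed label rate from trusted historical data
BASELINE_AMOUNT_MEAN = 250.0    # assumed feature average from trusted historical data
MAX_LABEL_DRIFT = 0.10
MAX_AMOUNT_DRIFT = 200.0

def batch_looks_poisoned(records: list) -> bool:
    """Flag a batch whose label mix or feature average drifts far from the baseline."""
    if not records:
        return False
    label_drift = abs(mean(r["label"] for r in records) - BASELINE_POSITIVE_RATE)
    amount_drift = abs(mean(r["amount"] for r in records) - BASELINE_AMOUNT_MEAN)
    return label_drift > MAX_LABEL_DRIFT or amount_drift > MAX_AMOUNT_DRIFT

incoming = [{"label": 1, "amount": 240.0}, {"label": 1, "amount": 2600.0}]
if batch_looks_poisoned(incoming):
    print("Quarantine the batch for review before it reaches the AI repository.")
```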
The Bottom Line

The bottom line is clear: CIOs must be able to dialogue and participate in decisions at multiple AI levels: the strategic, the operational, and the technical. Even if companies have dedicated data science groups, both data scientists and users will ultimately wend their way to IT, which still must make the whole thing happen. CIOs can help both their staffs and their companies if they develop a working knowledge of how AI works, in addition to understanding the strategic and operational aspects of AI, because companies, employees, and business partners all need to hear the CIO's voice.