News and Analysis Tech Leaders Trust
Recent Updates
-
What CIOs Need to Know About the Technical Aspects of AI Integration
An AI integration modifies a business process and how employees work, but it also requires integration with IT infrastructure and systems. This is where some of IT’s most technically savvy staff will be working, and they will want to discuss technology integration approaches and ideas. Most CIOs aren’t software engineers, but they are responsible for having a working knowledge of all things IT so they can hold meaningful dialogues with their most technical employees and help define technology direction. What do CIOs need to know about the technical side of AI integration?

1. AI technical integration is about embedding AI in systems and workflows
The assumption here is that by the time your staff is getting into technical design and tooling decisions, the business case and application for AI have already been decided. Now the task is deciding how to embed and integrate the AI into the IT infrastructure and applications that will support the business process.

2. Modeling is first and foremost
AI systems are built around models that utilize data stores, algorithms for query, and machine learning that expands the AI’s body of knowledge as the AI recognizes common logic patterns in data and assimilates knowledge from them. There are many different AI models to choose from. In most cases, companies use predefined AI models from vendors and then expand on them. In other cases, companies elect to build their own models “from scratch.” Building from scratch usually means that the organization has an on-board data science group with expertise in AI model building. Common AI model frameworks (e.g., TensorFlow, PyTorch, Keras, and others) provide the software resources and tools.

These AI model-building technologies are not familiar to most IT staffs. The technologies use data graphs to build dataflows and structures that define how the data will move through the graph. Operational flows for the logic that operates on data must be defined. The model-building software also provides for algorithm development, model training, business rule definitions, and the machine learning that the model executes on its own as it “learns” from the data it ingests. IT might not know this stuff, but it can’t afford to ignore it. IT and CIOs need at least a working knowledge of how these open-source model-building technologies work, because inevitably, these models must interface with IT infrastructure and data.
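To ground that discussion, here is a minimal, hypothetical sketch of what model definition and training look like in one of the frameworks named above (Keras on TensorFlow, in Python). The data file, column names, and layer sizes are illustrative assumptions, not a recommended architecture.

```python
# Minimal sketch of model building with Keras (TensorFlow).
# Assumptions: a CSV of numeric tabular features with a binary "label" column;
# the file name, columns, and layer sizes are illustrative only.
import pandas as pd
import tensorflow as tf

df = pd.read_csv("customer_events.csv")            # hypothetical data extract
features = df.drop(columns=["label"]).to_numpy()
labels = df["label"].to_numpy()

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(features.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training is where the "learning from ingested data" the article describes
# happens: the model adjusts its weights over repeated passes through the data.
model.fit(features, labels, epochs=5, validation_split=0.2)
model.save("model_v1.keras")  # the artifact IT will later have to host and integrate
```

Even a toy example like this shows why the models cannot stay inside the data science group: the saved artifact has to live somewhere in IT’s infrastructure and be fed data IT controls.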
3. IT infrastructure comes next
How to integrate an AI system with existing IT infrastructure is where CIOs can expect significant dialogue with their technical staffs. The AI has to be integrated seamlessly with the tech stack, top to bottom, if it is going to work. This means discussing how and where data from the AI will be stored, with SQL and NoSQL databases being the early favorites. Middleware that enables the AI to interoperate with other IT systems must be interfaced with. Most AI models are open source, which can simplify integration -- but integration still requires using middleware APIs (application programming interfaces) such as REST (representational state transfer), which integrates the AI system with internet-based resources, or GraphQL (graph query language), which facilitates the integration of data from multiple sources. It’s IT that decides how to deploy the optimal data stores, infrastructure, storage, and connectors needed to support the AI, and there are likely to be different options (and costs) for deployment. This is where the CIO needs to dialogue with technical staff.
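To make the middleware point concrete, here is a minimal, hypothetical sketch of a line-of-business application calling an AI model served behind a REST endpoint. The URL, authentication header, and response field are assumptions for illustration, not a specific product’s API.

```python
# Sketch of integrating an AI model behind a REST API into an existing application.
# The endpoint URL, credential, and JSON field names are hypothetical.
import requests

def score_customer(customer: dict) -> float:
    """Send one record to the model-serving endpoint and return its score."""
    resp = requests.post(
        "https://ai-gateway.internal.example.com/v1/score",  # assumed internal endpoint
        json={"features": customer},
        headers={"Authorization": "Bearer <service-token>"},  # placeholder credential
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["score"]  # assumed response field

if __name__ == "__main__":
    print(score_customer({"tenure_months": 18, "monthly_spend": 42.5}))
```

The same pattern applies whether the model is hosted by a vendor or served from internal infrastructure; what changes is where IT places the data stores and connectors around it.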
4. Data quality
The AI group will rely on IT to provide quality data for the AI. This is accomplished in two ways: first, by ensuring that all data coming into the AI data repository is “clean” (i.e., the data has been transformed by software like ETL (extract, transform, load), is accurate, and is able to interact with other data in the AI data repository); and second, by ensuring the data is secure (i.e., encrypted between transfer points or checked at the edges of each resource the data must traverse). Whether it is working with outside vendors -- vetting them for clean, secure data and periodically auditing them -- or defining the data transformations and security technology and operations that must be put in place internally, it is all IT’s responsibility. The CIO will need to dialogue on technical levels with vendors, and with the IT database, storage, security, systems, applications, and networking groups.
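As a hedged sketch of the “clean” side of that responsibility, the snippet below shows a small extract-transform-load pass that normalizes and plausibility-checks vendor data before loading it into a repository. The file names, column names, and cleaning rules are assumptions for illustration.

```python
# Sketch of a small ETL (extract, transform, load) step that cleans incoming records
# before they reach the AI data repository. File names, columns, and rules are hypothetical.
import sqlite3
import pandas as pd

# Extract
raw = pd.read_csv("vendor_feed.csv")

# Transform: normalize text, drop obviously bad rows, standardize types
raw["email"] = raw["email"].str.strip().str.lower()
clean = raw.dropna(subset=["customer_id", "email"])
clean = clean[clean["age"].between(0, 120)]          # simple plausibility check
clean["customer_id"] = clean["customer_id"].astype(str)

# Load into the repository (SQLite stands in for the real data store)
with sqlite3.connect("ai_repository.db") as conn:
    clean.to_sql("customer_events", conn, if_exists="append", index=False)
```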
5. AI security
The data (and data access) in and to AI must be secure at all times. To arrive at this point, security must be enacted on multiple levels, and it will entail technical discussions and decision-making to get there.

First and foremost is data security. Much of this has already been discussed under data quality, and it will involve most IT departmental teams.

Second is user access authorities and activity monitoring. Who gets access to what, and how will you monitor user activities? The users can define their own authorization lists and IT can implement them -- but complication occurs when it comes to monitoring user activities. If, for example, user activities occur only against onsite data repositories, sites can use a technology like IAM (identity and access management), which gives IT granular visibility into every user activity. However, if cloud-based access is involved, IAM won’t be able to monitor this activity at any level of detail. It might become necessary to use CIEM (cloud infrastructure entitlement management) software instead to gain granular observation of user activity in the cloud. Then there are “umbrella” technologies like IGA (identity governance and administration) that can serve as an overarching framework for both IAM and CIEM. The IT security group (and their CIO) must decide which strategy to adopt for comprehensive protection of AI.

Finally, there are malware threats that are unique to AI. Yes, you can use standard malware detection to ward off attacks from bad actors on AI data, just as you would on standard data and applications -- but the plot thickens from there. For example, there are malware injections into AI systems that can inject inaccurate data or change the labels and features of data. These skew the results derived from that data and result in erroneous recommendations and decisions. The practice is known as “data poisoning.” IT is expected to come up with a data validation technique for incoming data that can detect possible poisoning attempts and stop them. This could involve data sanitization technologies or data source verifications, and it is possible that inserting these technologies could slow down data transport. The technical staff needs to weigh these options, and CIOs should insert themselves into the discussions.
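As a hedged sketch of the kind of incoming-data validation described above, the function below compares a new data batch against an accepted baseline and flags schema changes, sudden label-distribution shifts, and out-of-range values -- crude signals that may indicate poisoning. The thresholds and column names are assumptions, and this is not a complete defense.

```python
# Sketch of a simple validation gate for incoming training data.
# It flags batches that deviate sharply from an accepted baseline -- one crude
# signal of possible data poisoning. Thresholds and column names are hypothetical.
import pandas as pd

def validate_batch(batch: pd.DataFrame, baseline: pd.DataFrame,
                   label_col: str = "label", max_label_shift: float = 0.10) -> list[str]:
    issues = []

    # 1. Schema check: unexpected or missing columns are flagged outright.
    if set(batch.columns) != set(baseline.columns):
        issues.append("schema mismatch with accepted baseline")

    # 2. Label-distribution check: a sudden swing in label frequencies can
    #    indicate label flipping.
    shift = (batch[label_col].value_counts(normalize=True)
             .sub(baseline[label_col].value_counts(normalize=True), fill_value=0)
             .abs().max())
    if shift > max_label_shift:
        issues.append(f"label distribution shifted by {shift:.0%}")

    # 3. Range check: numeric features far outside the historical range.
    for col in baseline.select_dtypes("number").columns:
        lo, hi = baseline[col].min(), baseline[col].max()
        if ((batch[col] < lo) | (batch[col] > hi)).mean() > 0.05:
            issues.append(f"{col}: >5% of values outside historical range")

    return issues  # an empty list means the batch may proceed to the repository
```

Checks like these add latency to data transport, which is exactly the trade-off the article says technical staff and the CIO need to weigh together.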
The Bottom Line
The bottom line is clear: CIOs must be able to dialogue and participate in decisions at multiple AI levels: the strategic, the operational, and the technical. Even if companies have dedicated data science groups, both data scientists and users will ultimately wend their way to IT, which still must make the whole thing happen. CIOs can help both their staffs and their companies if they develop a working knowledge of how AI works, in addition to understanding the strategic and operational aspects of AI -- because companies, employees, and business partners all need to hear the CIO’s voice.
-
Top 5 Decision-Making Frameworks for Effective Leadership
Sandeep Kashyap, CEO, ProofHub | May 21, 2025 | 4 Min Read | Eugene Sergeev via Alamy Stock

It’s normal to feel nervous when you have to make big decisions at work. After all, you never know how things will turn out. Fortunately, decision-making frameworks can help lessen those nerves and boost your confidence. They bring practical, proven methods that turn chaos into clarity. For IT leaders, these frameworks support critical thinking, confident action, and smarter choices -- even under pressure. Most importantly, they help you cut through the noise and ensure every decision stays aligned with your long-term business goals. This post walks through five frameworks for effective decision-making that can help IT leaders make more informed decisions. Each one is designed to help you simplify complexity and lead with greater impact.

Importance of Decision-Making Frameworks
Decision-making frameworks bring consistency and logic to the decision-making process. They help you break things down and focus on the essentials. Here are the benefits of using these frameworks:
Make your objectives clear: Structured decision-making frameworks help you cut through the noise and focus on what matters most, ensuring every decision aligns with your objectives.
Bring teams together: The frameworks allow you to involve the right people and ensure everyone is on the same page.
Avoid costly mistakes: IT decisions often involve significant investments, such as new software and infrastructure upgrades. A framework helps you assess potential risk upfront and make deliberate choices.

5 Decision-Making Frameworks Every Leader Should Know
A decision-making framework provides clarity and consistency to make better decisions. Here are five frameworks that can sharpen your thinking and strengthen your leadership.

1. RAPID (recommend, agree, perform, input, decide)
RAPID is a decision-making framework that helps clarify who is responsible for what when multiple stakeholders are involved. Each letter in RAPID represents a key role in the decision-making process:
Recommend: The person in this role leads the effort by gathering data, analyzing options, and proposing a well-informed recommendation.
Agree: These stakeholders work closely with the recommender to shape the best possible decision.
Perform: This is the individual or team responsible for executing the decision once it's made.
Input: These contributors offer valuable insights, expertise, or context that inform the recommendation.
Decide: The final authority who makes the call and commits the organization to move forward. This role carries accountability for the outcome.

2. SPADE (setting, people, alternatives, decide, explain)
The SPADE framework breaks down each step of the structured decision-making process so that you can reach an informed and critical conclusion. It’s especially helpful when decisions involve multiple teams, limited time, and high visibility. Each letter in SPADE represents a crucial phase in the decision-making process:
Setting: Define the decision’s scope, goal, and constraints.
People: Identify and engage relevant stakeholders such as decision-makers, influencers, and executors.
Alternatives: Generate options related to the decision based on criteria like cost, security, and scalability.
Decide: Evaluate all options and select the best course of action. You can avoid negative consequences and bias through objective methods like private voting.
Explain: Clearly document and explain the rationale behind the decision to ensure alignment across teams and maintain accountability for outcomes.
3. OODA loop (observe, orient, decide, act)
The OODA loop is a four-step approach to decision-making that focuses on filtering available information, putting it in context, and quickly making the most appropriate decision. OODA stands for:
Observe: Monitor system performance, team dynamics, and industry trends to gather relevant and timely data.
Orient: Analyze the information you have collected to understand the context, challenges, and opportunities.
Decide: Based on your analysis, choose the most effective course of action.
Act: Implement the decision quickly and efficiently. Once action is taken, the loop restarts -- each decision and outcome creates new conditions to observe and evaluate.

4. Eisenhower Matrix
The Eisenhower Matrix is a task prioritization technique that helps with decisions about tasks. It organizes tasks into four quadrants based on urgency and importance, and suggests an appropriate action for tasks in each quadrant. It ensures that essential tasks are completed first, contributing to the success of projects and goals. Here is what the Eisenhower Matrix includes (a minimal classification sketch follows the list):
Do (important and urgent): Handle these immediately.
Schedule (important but not urgent): Schedule these for later.
Delegate (urgent but not important): Assign these to others if possible.
Delete (neither urgent nor important): Consider removing these altogether.
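Because the quadrant logic is mechanical, it can be expressed in a few lines of code. The sketch below is a minimal, hypothetical triage of a task list; the Task fields and sample tasks are assumptions for illustration only.

```python
# Minimal sketch of Eisenhower Matrix triage over a task list.
# The task structure and sample tasks are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    important: bool
    urgent: bool

def eisenhower_action(task: Task) -> str:
    """Map a task to one of the four quadrant actions."""
    if task.important and task.urgent:
        return "Do"        # handle immediately
    if task.important:
        return "Schedule"  # important but not urgent
    if task.urgent:
        return "Delegate"  # urgent but not important
    return "Delete"        # neither urgent nor important

tasks = [
    Task("Patch critical vulnerability", important=True, urgent=True),
    Task("Draft next year's platform roadmap", important=True, urgent=False),
    Task("Answer routine vendor questionnaire", important=False, urgent=True),
    Task("Sort old meeting notes", important=False, urgent=False),
]

for t in tasks:
    print(f"{eisenhower_action(t):8s} -> {t.name}")
```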
5. Decision Tree
A decision tree is a graphical representation that helps IT leaders map out the possible outcomes of different decisions. It helps leaders assess risks, rewards, and the potential consequences of each choice before committing to a path. Decision trees are most useful in complex decision-making processes where multiple scenarios are involved. A small expected-value sketch follows.
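As a hedged illustration of weighing risks and rewards along each branch, the sketch below scores a toy decision tree by expected value. The options, probabilities, and payoff figures are invented for illustration and do not come from the article.

```python
# Sketch of evaluating a small decision tree by expected value.
# Options, probabilities, and payoffs are hypothetical.
options = {
    "Adopt new tool": [
        # (probability, outcome value in $K over one year)
        (0.6, 400),   # smooth rollout
        (0.4, -150),  # migration overruns
    ],
    "Improve existing tool": [
        (0.8, 180),
        (0.2, -20),
    ],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

best = max(options, key=lambda name: expected_value(options[name]))
for name, outcomes in options.items():
    print(f"{name}: expected value = {expected_value(outcomes):.0f}K")
print("Recommended path:", best)
```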
Conclusion
IT leaders deal with tough decisions every day. Which project should be prioritized? Should we adopt new tools or improve the existing ones? Who should get what tasks? To handle these challenges, leaders can use frameworks for effective decision-making like RAPID, SPADE, OODA, the Eisenhower Matrix, and decision trees. These tools help bring structure and clarity to tough decisions, making it easier to move forward with confidence in a fast-changing business world.

About the Author
Sandeep Kashyap, CEO, ProofHub. Sandeep Kashyap, the visionary CEO of ProofHub, has over 25 years of IT industry experience. He's a recognized luminary known for innovation and agility. His contributions extend to project management insights, leadership, growth, and entrepreneurship, and his practical expertise is evident in ProofHub's success. Recognized as a Top Leadership Voice on LinkedIn, Sandeep's contributions provide invaluable insight for leaders and professionals seeking to create thriving workplaces.
-
How CIOs Can Prepare Their Successors
Guiding future CIOs to eventually take the reins has evolved with the increasing depth of technology, says Michael Zastrocky, executive director of the Leadership Board for CIOs in Higher Education.
Source: https://www.informationweek.com/it-leadership/how-cios-can-prepare-their-successors
-
Ask a CIO Recruiter: Where Is the ‘I’ in the Modern CIO Role?
Ben Cole, Senior Executive Editor, InformationWeek | May 13, 2025 | 6 Min Read | Nataly Turjeman via Alamy Stock

As technology has evolved, so too has the chief information officer role. As artificial intelligence and advancing tech continue to be vital to business success, the CIO has moved away from being the behind-the-scenes exec who keeps IT running smoothly and into a vital voice in the C-suite.

Identifying how to best incorporate rapidly advancing tech into business processes has always been a big part of the CIO’s role, says IT leadership recruiter Tarun Inuganti, a senior client partner and global managing partner responsible for Korn Ferry’s Global Technology Officers Practice across North America, EMEA, Asia Pacific, and Latin America. In this interview with InformationWeek, Inuganti discusses the changing CIO role and how advancing technology has always influenced how IT executives approach the day-to-day.

This interview has been edited for clarity and length.

What do CIOs need to know about the job and the current CIO job market?
The role of the CIO has dramatically changed in the last five to 10 years, moving away from the back-office job that keeps our Zooms working and our bills being paid. I’m not dismissing that; it’s important, but it’s a lot more than that today, particularly given data analytics and AI. The ability of CIOs to use new technologies to enable the digital transformation journeys most organizations are on has been accelerating in the last eight to 10 years, I would say. It started with digital transformation; now it’s data analytics and AI. CIOs need to be well-versed not just in technologies, but [in] how you apply those technologies for business enablement and growth. Differentiating themselves from anyone else is going to be important.

How is the CIO’s day-to-day role evolving, especially as tech like AI continues to evolve and influence the business?
There are multiple dimensions to that. First, there are obviously huge opportunities AI can provide the business, whether it’s cost optimization or efficiencies, so there is a lot of pressure from boards and sometimes CEOs themselves saying, ‘What are we doing in AI?’ The second side is that there are significant opportunities where AI can enable the business in decision-making. The third leg is that AI is not fully leveraged today; it’s not in a very easy-to-use space.

That is coming, and CIOs need to be able to prepare the organization for that change. CIOs need to prepare their teams, as well as business users, and say, ‘Hey, this is coming, we’ve already experimented with a few things. There are a lot of use cases applied in certain industries; how are we prepared for that?’ The CIO is part of that evolution.

A lot of organizations are trying to get ahead of that. One healthcare organization recently hired a chief AI officer, and when we asked the CEO why they were doing this now, when the organization may not be ready for it, he said he wanted everyone to start thinking about it because it is coming, and it could be so impactful on the business. Anything from a better patient-care environment to better use cases to better enablement of the patient experience -- and that is just healthcare. AI is going to change everything you and I do, and it is going to affect business as well.

What are companies looking for in a modern CIO?
Are things like an MBA important, or are there any specific certifications that are proving more valuable?
It doesn’t hurt to have an MBA, but it is not something specific that our clients ask us for, because by the time you get to the senior executive levels, most of those deep technical skills are not quite necessary. You need to know enough to call ‘bs’ when you have to, but you don’t need to get into the weeds. We don’t look for deep technologists, but if you have technology heritage or pedigree in some way, maybe it’s a bachelor’s in engineering or a bachelor’s in computer science, that is certainly helpful. An MBA, or a master’s degree in some applied math, is nice to have, but it is not necessarily a requirement. At this time, it’s all about experience: You’ve had multiple roles; you’ve learned from those; you’ve applied them; you’re a leader and you’ve used technologies to impact change. Those are the kind of traits companies are looking for in a CIO.

It’s more about fit and culture. We have a lot of assessment tools to help clients make better decisions on that front.

Are there certain CIO-specific skills that companies have a hard time hiring?
Yes. AI is certainly right on top of the list. I try to remind clients that AI has been around in a variety of forms. It’s called AI today; a few years ago it was machine learning; and before that it was behavior and analytics. This is the acceleration of the journey we are seeing with AI. I sometimes have to educate clients that what we talk about as AI has been around in different forms for a long time. You don’t have to know it all; you just have to go sometimes by that previous experience.

Tarun Inuganti, Korn Ferry

Just having that vision to see where technology is going and trying to stay ahead of it is important. Not necessarily chasing the shiny new toy, the new technology, but just being ahead of it is the most important skill set. Look around the corner and prepare the organization for the change that will come.

Also, if you retrained some of the people, you have to be more analytical, more business-minded. Those are good skills. That’s not easy to find. A lot of people [who] move into the CIO role are very technical, whether it is coding or heavily on the infrastructure side. That is a commodity today; you need to be beyond that.

What are CIOs looking for in employees and the organization when they are considering taking on a job?
This would be their wish list: Reporting to the CEO is going to be important. Being at the leadership table, being part of the executive team, is important. Most good technology leaders want to be in the conversation when decisions are being made so they can actually help with that decision and influence that decision, rather than being told, ‘Here’s what we decided; you guys go and implement it.’ That’s what they don’t want to do. Third and very important is budget, an appetite for investment, and having the resources.

The fourth is all about how you fit culturally in the organization. Can I work with this team, do I believe in its vision, do I know where they are heading? All of those are fair and important questions to ask.

Any other trends you are seeing in relation to the CIO role?
We all talk about the CIO, but that title in and of itself is becoming less desirable for a lot of technology leaders. They want to be called chief digital and technology officer or chief technology officer; I’ve seen a variety of titles.
The CIO title just sends a signal that it is more back office. Years ago, CTOs were more infrastructure people, those who ran the back-office infrastructure. Now ‘CTO’ sends a signal that the company is thinking about technology differently.

Analytical business minds are needed. Finding that balance, somebody who leans into business and connects better with the business, is the skill set that is getting more important. We still have a lot of people who get enamored by the technology and forget who their audience is.

About the Author
Ben Cole, Senior Executive Editor, InformationWeek. Ben Cole is a senior executive editor for InformationWeek. He has more than 25 years of editorial experience and guided award-winning technology coverage as editor for TechTarget sites covering CIO strategy, regulatory compliance, data science, security, data management, business intelligence, and AI. Earlier in his career, Ben worked in healthcare media and as a reporter with Massachusetts-based daily newspapers.
-
How to Turn Your IT Team Into an Idea Generator
John Edwards, Technology Journalist & Author | May 7, 2025 | 5 Min Read | Wavebreak Media Premium via Alamy Stock Photo

Your IT team is most likely smart, productive, and efficient. These staffers are on the front line, relentlessly addressing challenges and solving problems. They may also have many good ideas for improving performance or taking a new approach to a persistent problem. Unfortunately, these ideas may never be heard by IT decision-makers for various reasons, such as a fear of ruffling feathers or being ignored.

The key to great idea generation is fostering an enterprise culture that encourages curiosity, continuous learning, and operating with a high sense of urgency, says John Kreul, CIO at insurance firm Jewelers Mutual. "Encourage team members to develop a deep understanding of the customer, plus end-to-end business knowledge, which triggers continuous improvement and disruptive ideas," he explains in an online discussion. "We promote cross-functional collaboration so technology professionals can gain fresh perspectives from business teams while also sharing and proposing innovative solutions that reshape how we work together and advance the customer experience," he says.

Encouraging team members to propose valuable ideas starts with building a culture of open communication and trust, says John Russo, vice president of healthcare technology solutions at healthcare software development firm OSP Labs. "As an IT leader, creating a space where brainstorming is welcomed -- without fear of criticism -- is key," he advises in an email interview. Regular innovation sessions, where team members feel comfortable sharing ideas, can make a huge difference, Russo notes. "It's also important to actively listen and acknowledge every contribution, reinforcing that all voices matter."

Russo says this approach works because it fosters a sense of ownership and belonging. "When people feel heard and respected, they're more likely to share creative solutions that drive real innovation," he explains. "It also brings in diverse perspectives, which can lead to more effective problem-solving."

Recognition and showing appreciation are the key to developing continual forward-thinking actions, Kreul says. "At Jewelers Mutual, we celebrate accomplishments through personal connections, team, and company-wide recognition of achievements, as well as professional development and award opportunities."

Compassionate Rejection Practices
Not every idea can be implemented, but rejections should never discourage future proposals, Russo advises. "The key is constructive feedback -- explaining why an idea isn't feasible at the moment while appreciating the effort behind it." Offering suggestions for refining the idea or keeping it on the radar for future needs keeps the conversation positive and productive, he adds.

When brainstorming collectively, IT leaders must ensure that their team knows that an idea generation exercise is a safe place for them to propose their thoughts and that it's a judgment-free zone, says Karishma Bhatnagar, product manager at freelance talent provider Upwork. "Without this basic understanding and environment, it's really hard for team members to open up and share their thoughts," she explains in an email interview. Once a session has been completed, the leader should be clear on why a particular idea has been rejected or deferred.
"Budget, resources, and time constraints can be used to reject ideas so that the rejection isn’t taken personally by the team member," Bhatnagar notes. "The leader should also explicitly share that they appreciate the effort all team members have put into proposing and generating ideas although, at this time, only a handful will be explored further."

Avoiding Leadership Mistakes
Leaders can unintentionally discourage idea-sharing by shutting down proposals too quickly, being overly critical, or simply by failing to follow through on good ideas. "If team members feel their ideas disappear into a void, they’ll eventually stop sharing them," Russo says.

IT leaders can also inadvertently discourage idea proposals if they aren't careful in the way they deal with the rejection of ideas, or if they don't provide ... a respectful time, place and platform for the team members to share their ideas, Bhatnagar says. "Not ensuring that team members have the opportunity to share and be heard will lead to the team feeling too discouraged to share their idea proposals."

Last Thoughts: Empower the Team
Technology innovation is not just about technology -- it’s about people, culture, and fostering a mindset of continuous improvement and disruptive ideas that create personalized experiences and value for our customers, Kreul says. "Technology leaders can enhance team performance and achieve business goals by empowering cross-discipline teams to generate ideas and empower them to act -- trust your team to do what is best."

Above all, IT leaders should remind their teams that innovation is an ongoing process, Russo says. "Even ideas that don’t take off immediately can spark breakthroughs down the road," he explains. "By celebrating experimentation, learning from failures, and consistently reinforcing the value of new ideas, leaders can create an environment where creativity thrives."

About the Author
John Edwards, Technology Journalist & Author. John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
-
WWW.INFORMATIONWEEK.COM
MIT Sloan CIO Symposium
May 20, 2025 | Royal Sonesta Boston / Cambridge, MA
Join MIT Sloan at the Royal Sonesta Hotel for their 22nd annual CIO Symposium in Cambridge, MA on Tuesday, May 20, 2025. As we enter an AI-driven era, CIOs embark on an unpredictable journey filled with both opportunity and challenge. Many enterprises are actively applying AI to product development, marketing, operations, and HR. Join other CIOs, technology executives, and MIT faculty for interactive learning, thought-provoking panels, networking, and more. Don't miss out -- secure your tickets today.
-
WWW.INFORMATIONWEEK.COMEssential Tools to Secure Software Supply ChainsMax Belov, Chief Technology Officer, Coherent SolutionsApril 24, 20254 Min Readnipiphon na chiangmai via Alamy StockAttacks on software supply chains to hijack sensitive data and source code occur almost daily. According to the Identity Theft Resource Center (ITRC), over 10 million individuals were affected by supply chain attacks in 2022. Those attacks targeted more than 1,700 institutions and compromised vast amounts of data. Software supply chains have grown increasingly complex, and threats have become more sophisticated. Meanwhile, AI is working in favor of hackers, supporting malicious attempts more than strengthening defenses. The larger the organization, the harder CTOs have to work to enhance supply chain security without sacrificing development velocity and time to value. More Dependencies, More Vulnerabilities Modern applications rely more on pre-built frameworks and libraries than they did just a few years ago, each coming with its own ecosystem. Security practices like DevSecOps and third-party integrations also multiply dependencies. While they deliver speed, scalability, and cost-efficiency, dependencies create more weak spots for hackers to target. Such practices are meant to reinforce security, yet they may lead to fragmented oversight that complicates vulnerability tracking. Attackers can slip through the pathways of widely used components and exploit known flaws. A single compromised package that ripples through multiple applications may be enough to result in severe damage. Related:Supply chain breaches cause devastating financial, operational, and reputational consequences. For business owners, it’s crucial to choose digital engineering partners who place paramount importance on robust security measures. Service vendors must also understand that guarantees of strong cybersecurity are becoming a decisive factor in forming new partnerships. Misplaced Trust in Third-Party Components Most supply chain attacks originate on the vendor side, which is a serious concern for the vendors. As mentioned earlier, complex ecosystems and open-source components are easy targets. CTOs and security teams shouldn't place blind trust in vendors. Instead, they need clear visibility into the development process. Creating and maintaining a software bill of materials (SBOM) for your solution can help mitigate risks by revealing a list of software components. However, SBOMs provide no insight into how these components function and what hidden risks they carry. For large-scale enterprise systems, reviewing SBOMs can be overwhelming and doesn’t fully guarantee adequate supply chain security. Continuous monitoring and a proactive security mindset -- one that assumes breaches exist and actively mitigates them -- make the situation better controllable, but they are no silver bullet. Related:Software supply chains consist of many layers, including open-source libraries, third-party APIs, cloud services and others. As they add more complexity to the chains, effectively managing these layers becomes pivotal. Without the right visibility tools in place, each layer introduces potential risk, especially when developers have little control over the origins of each component integrated into a solution. Such tools as Snyk, Black Duck, and WhiteSource (now Mend.io) help analyze software composition, by scanning components for vulnerabilities and identifying outdated or insecure ones. 
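For teams that want a feel for what SBOM-driven visibility looks like in practice, here is a minimal, illustrative sketch in Python. It assumes a CycloneDX-style JSON SBOM and a small, hypothetical advisory list; a real SCA product such as Snyk, Black Duck, or Mend draws on curated vulnerability feeds and does far more than this.

```python
import json

# Hypothetical advisory list: package name -> versions with known issues.
# Real SCA tools pull this from continuously updated vulnerability feeds.
KNOWN_BAD = {
    "log4j-core": {"2.14.1", "2.15.0"},
    "requests": {"2.5.0"},
}

def flag_risky_components(sbom_path: str) -> list[dict]:
    """Scan a CycloneDX-style SBOM (JSON) and return components with known-bad versions."""
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)

    findings = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "")
        if version in KNOWN_BAD.get(name, set()):
            findings.append({"name": name, "version": version, "purl": component.get("purl")})
    return findings

if __name__ == "__main__":
    for hit in flag_risky_components("sbom.cyclonedx.json"):
        print(f"Review dependency: {hit['name']}=={hit['version']} ({hit['purl']})")
```

The point of the sketch is the workflow, not the lookup: an SBOM tells you what you are running, and only then can a scanner, or a script like this, tell you which of it deserves attention.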
Risks of Automatic Updates Automatic updates are a double-edged sword; they significantly reduce the time needed to roll out patches and fixes while also exposing weak spots. When trusted vendors push well-structured automatic updates, they can also quickly deploy patches as soon as flaws are detected and before attackers exploit them. However, automatic updates can become a delivery mechanism for attacks. In the SolarWinds incident, malicious code was inserted into an automated update, which made massive data theft possible before it was detected. Blind trust in vendors and the updates they deliver increases risks. Instead, the focus should shift to integrating efficient tools to build sustainable supply chain security strategies. Related:Building Better Defenses CTOs must take a proactive stance to strengthen defenses against supply chain attacks. Hence the necessity of SBOM and software composition analysis (SCA), automated dependency tracking, and regular pruning of unused components. Several other approaches and tools can help further bolster security: Threat modeling and risk assessment help identify potential weaknesses and prioritize risks within the supply chain. Code quality ensures the code is secure and well-maintained and minimizes the risk of vulnerabilities. SAST (static application security testing) scans code for security flaws during development, allowing teams to detect and address issues earlier. Security testing validates that every system component functions as intended and is protected. Relying on vendors alone is insufficient -- CTOs must prioritize stronger, smarter security controls. They should integrate robust tools for tracking SBOM and SCA and should involve SAST and threat modeling in the software development lifecycle. Equally important are maintaining core engineering standards and performance metrics like DORA to ensure high delivery quality and velocity. By taking this route, CTOs can build and buy software confidently, staying one step ahead of hackers and protecting their brands and customer trust. Read more about:Supply ChainAbout the AuthorMax BelovChief Technology Officer, Coherent SolutionsMax Belov joined Coherent Solutions in 1998 and assumed the role of CTO two years later. He is a seasoned software architect with deep expertise in designing and implementing distributed systems, cybersecurity, cloud technology, and AI. He also leads Coherent’s R&D Lab, focusing on IoT, blockchain, and AI innovations. His commentary and bylines appeared in CIO, Silicon UK Tech News, Business Reporter, and TechRadar Pro. See more from Max BelovReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 0 Reviews
-
WWW.INFORMATIONWEEK.COMLow-Cost AI Projects -- A Great Way to Get StartedJohn Edwards, Technology Journalist & AuthorApril 24, 20254 Min Readheliography / Stockimo via Alamy Stock PhotoOne of the great things about AI is that getting started with the technology doesn't have to be a time or money drain. Understanding AI and its long-term business value can be achieved simply by experimenting with a few inexpensive deployments. To help you get started, here are six low-budget AI projects that require only a modest financial commitment yet offer powerful insights into the technology's potential business worth. 1. Chatbot Before attempting a complex AI application, many experts advise beginning with something very simple, such as an internal chatbot. "Starting slow enables application architects and developers to consider the intricacies AI introduces to application threat models and ‘skill-up’ in low-sensitivity environments," says David Brauchler, technical director and head of AI and ML security at cybersecurity consultancy NCC Group, one of several experts interviewed online. External chatbots are just as easy to deploy. "Many small businesses struggle with responding to customer inquiries quickly, and an AI chatbot can handle frequently asked questions, provide product recommendations, and even assist with appointment bookings," says Anbang Xu, founder of JoggAI, an AI-driven video automation platform, agrees. He notes that tools like ChatGPT, DialogFlow, or ManyChat offer easy integrations with websites and social media. Related:2. Web scraper Consider building a custom web scraper to automatically monitor competitors' websites and other relevant sites, suggests Elisa Montanari, head of organic growth at work management platform provider Wrike. The scraper will summarize relevant content and deliver it in a daily or weekly digest. "In the marketing department alone, that intelligence can help you spend more time strategizing and creating content or campaigns rather than trying to piece together the competitive landscape." Montanari adds that Web scrapers are relatively simple to design, easily scalable, and relatively inexpensive. 3. Intelligent virtual assistant A great low-cost starter project, particularly for smaller businesses, is an AI-powered intelligent virtual assistant (IVA) dedicated to customer service, says Frank Schneider, AI evangelist at AI analytics firm Verint. "IVAs can handle routine customer inquiries, provide information, and even assist with basic troubleshooting." Many IVA solutions are affordable or even free, making them easily accessible to any small business, Schneider says. They're also relatively simple to create and can integrate with existing systems, requiring minimal technical expertise. Related:4. Internal knowledge base An initial AI project should be internal-facing, low risk, and useful, says Loren Absher, a director and lead analyst with technology research and advisory firm ISG. An AI-powered internal knowledge base meets all of those goals. "It lets employees quickly access company policies, training materials, and process documentation, using natural language." "This type of project is a perfect introduction to AI because it’s practical, low cost, and reduces risk by staying internal," Absher says. "It gives the company hands-on experience with AI fundamentals -- data management, model training, and user interaction -- without disrupting external operations," he notes. 
"Plus, it’s easy to experiment with open-source tools and pay-as-you-go AI services, so there’s no big upfront investment." The best approach to creating an AI-driven internal knowledge base is to assign a cross-functional team to the project, Absher advises. An IT or a data specialist can handle the technical side, a business process owner will ensure its usefulness, and someone from compliance or knowledge management will help keep the information accurate and secure, he says. Related:5. Ad builder Anmol Agarwal, founder of corporate training firm Alora Tech, believes that a great low-cost way to get your feet wet is using generative AI tools to enhance business productivity. "For example, use GenAI to create ads for your company, create email templates, even revise emails." Agarwal is bullish on GenAI. She notes that only minimum effort is required, since the code is already there and doesn't require programming experience. 6. Sales lead scoring An AI-powered lead scoring program is a low-cost, yet highly practical, AI starter project, says Egor Belenkov, founder and CEO of digital signage solutions provider Kitcast. With the help of historical data and behaviors, the program will help users find leads based on their likelihood of conversion into customers. "This tool will help the sales team to focus on high-potential leads and improve conversion rates significantly." This project makes a great starting point due to its ease in implementation and the value it provides, Belenkov says. "Sales teams will be able to personalize their outreach based on their needs and requirements," he explains. "It will also help the marketing team by adjusting their campaigns based on which leads are identified as most valuable." Another important benefit is the ability to analyze patterns across multiple points, such as website activity or email engagement, to predict which leads will be most likely to convert. "This eliminates the guessing game about which clients would decide to buy and which wouldn't," Belenkov says. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 0 Reviews
-
WWW.INFORMATIONWEEK.COMEdge AI: Is it Right for Your Business?John Edwards, Technology Journalist & AuthorApril 22, 20255 Min ReadDragos Condrea via Alamy Stock PhotoIf you haven't yet heard about edge AI, you no doubt soon will. To listen to its many supporters, the technology is poised to streamline AI processing. Edge AI presents an exciting shift, says Baris Sarer, global leader of Deloitte's AI practice for technology, media, and telecom. "Instead of relying on cloud servers -- which require data to be transmitted back and forth -- we're seeing a strategic deployment of artificial intelligence models directly onto the user’s device, including smartphones, personal computers, IoT devices, and other local hardware," he explains via email. "Data is therefore both generated and processed locally, allowing for real-time processing and decision-making without the latency, cost, and privacy considerations associated with public cloud connections." Multiple Benefits By reducing latency and improving response times -- since data is processed close to where it's collected -- edge AI offers significant advantages, says Mat Gilbert, head of AI and data at Synapse, a unit of management consulting firm Capgemini Invent. It also minimizes data transmission over networks, improving privacy and security, he notes via email. "This makes edge AI crucial for applications that require rapid response times, or that operate in environments with limited or high-cost connectivity." This is particularly true when large amounts of data are collected, or when there's a need for privacy and/or keeping critical data on-premises. Related:Initial Adopters Edge AI is a foundational technology that can drive future growth, transform operations, and enhance efficiencies across industries. "It enables devices to handle complex tasks independently, transforming data processing and reducing cloud dependency," Sarer says. Examples include: Healthcare. Enhancing portable diagnostic devices and real-time health monitoring, delivering immediate insights and potentially lifesaving alerts. Autonomous vehicles. Allowing real-time decision-making and navigation, ensuring safety and operational efficiency. Industrial IoT systems. Facilitating on-site data processing, streamlining operations and boosting productivity. Retail. Enhancing customer experiences and optimizing inventory management. Consumer electronics. Elevating user engagement by improving photography, voice assistants, and personalized recommendations. Smart cities. Edge AI can play a pivotal role in managing traffic flow and urban infrastructure in real-time, contributing to improved city planning. First Steps Related:Organizations considering edge AI adoption should start with a concrete business use case, advises Debojyoti Dutta, vice president of engineering AI at cloud computing firm Nutanix. "For example, in retail, one needs to analyze visual data using computer vision for restocking, theft detection, and checkout optimization, he says in an online interview. KPIs could include increased revenue due to restocking (quicker restocking leads to more revenue and reduced cart abandonment), and theft detection. The next step, Dutta says, should be choosing the appropriate AI models and workflows, ensuring they meet each use case's needs. Finally, when implementing edge AI, it's important to define an edge-based combination data/AI architecture and stack, Dutta says. The architecture/stack may be hierarchical due to the business structure. 
"In retail, we can have a lower cost/power AI infrastructure at each store and more powerful edge devices at the distribution centers." Adoption Challenges While edge AI promises numerous benefits, there are also several important drawbacks. "One of the primary challenges is the complexity of deploying and managing AI models on edge devices, which often have limited computational resources compared to centralized cloud servers," Sarer says. "This can necessitate significant optimization efforts to ensure that models run efficiently on these devices." Related:Another potential sticking point is the initial cost of building an edge infrastructure and the need for specialized talent to develop and maintain edge AI solutions. "Security considerations should also be taken into account, since edge AI requires additional end-point security measures as the workloads are distributed," Sarer says. Despite these challenges, edge AI's benefits of real-time data processing, reduced latency, and enhanced data privacy, usually outweigh the drawbacks, Sarer says. "By carefully planning and addressing these potential issues, organizations can successfully leverage edge AI to drive innovation and achieve their strategic objectives." Perhaps the biggest challenge facing potential adopters are the computational constraints inherent in edge devices. By definition, edge AI models run on resource-constrained hardware, so deployed models generally require tuning to specific use cases and environments, Gilbert says. "These models can require significant power to operate effectively, which can be challenging for battery-powered devices, for example." Additionally, balancing response time needs with a need for high accuracy demands careful management. Looking Ahead Edge AI is evolving rapidly, with hardware becoming increasingly capable as software advances continue to reduce AI models' complexity and size, Gilbert says. "These developments are lowering the barriers to entry, suggesting an increasingly expansive array of applications in the near future and beyond." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 0 Reviews
-
WWW.INFORMATIONWEEK.COMBreaking Down the Walls Between IT and OTIT and OT systems can seem worlds apart, and historically, they have been treated that way. Different teams and departments managed their operations, often with little or no communication. But over time OT systems have become increasingly networked, and those two worlds are bleeding into one another. And threat actors are taking advantage. Organizations that have IT and OT systems -- oftentimes critical infrastructure organizations -- the risk to both of these environments is present and pressing. CISOs and other security leaders are tasked with the challenge of breaking down the barriers between the two to create a comprehensive cybersecurity strategy. The Gulf Between IT and OT Why are IT and OT treated as such separate spheres when both face cybersecurity threats? “Even though there's cyber on both sides, they are fundamentally different in concept,” Ian Bramson, vice president of global industrial cybersecurity at Black & Veatch, an engineering, procurement, consulting, and construction company, tells InformationWeek. “It's one of the things that have kept them more apart traditionally.” Age is one of the most prominent differences. In a Fortinet survey of OT organizations, 74% of respondents shared that the average age of their industrial control systems is between six and 10 years old. Related:OT technology is built to last for years, if not decades, and it is deeply embedded in an organization’s operations. The lifespan of IT, on the other hand, looks quite different. “OT is looked at as having a much longer lifespan, 30 to 50 years in some cases. An IT asset, the typical laptop these days that's issued to an individual in a company, three years is about when most organization start to think about issuing a replacement,” says Chris Hallenbeck, CISO for the Americas at endpoint management company Tanium. Maintaining IT and OT systems looks very different, too. IT teams can have regular patching schedules. OT teams have to plan far in advance for maintenance windows, if the equipment can even be updated. Downtime in OT environments is complicated and costly. The skillsets required of the teams to operate IT and OT systems are also quite different. On one side, you likely have people skilled in traditional systems engineering. They may have no idea how to manage the programmable logic controllers (PLC) commonly used in OT systems. The divide between IT and OT has been, in some ways, purposeful. The Purdue model, for example, provides a framework for segmenting ICS networks, keeping them separate from corporate networks and the internet. Related:But over time, more and more occasions to cross the gulf between IT and OT systems -- intentionally and unintentionally -- have arisen. People working on the OT side want the ability to monitor and control industrial processes remotely. “If I want to do that remotely, I need to facilitate that connectivity. I need to get data out of these systems to review it and analyze it in a remote location. And then send commands back down to that system,” Sonu Shankar, CPO at Phosphorus, an enterprise xIoT cybersecurity company, explains. The very real possibility that OT and IT systems intersect accidentally is another consideration for CISOs. Hallenbeck has seen an industrial arc welder plugged into the IT side of an environment, unbeknownst to the people working at the company. 
“Somehow that system was even added to the IT active directory, and they just were operating it as if it was a regular Windows server, which in every way it was, except for the part where it was directly attached to an industrial system,” he shares. “It happens far too often.” Cyberattack vectors on IT and OT environments look different and result in different consequences. “On the IT side, the impact is primarily data loss and all of the second order effects of your data getting stolen or your data getting held for ransom,” says Shankar. “Disrupt the manufacturing process, disrupt food production, disrupt oil and gas production, disrupt power distribution … the effects are more obvious to us in the physical world.” Related:While the differences between IT and OT are apparent, enterprises ignore the reality of the two worlds’ convergence at their peril. As the connectivity between these systems grows, so do their dependencies and the potential consequences of an attack. Ultimately, a business does not care if a threat actor compromised an IT system or an OT system. They care about the impact. Has the attack resulted in data theft? Has it impacted physical safety? Can the business operate and generate revenue? “You have to start thinking of that holistically as one system against those consequences,” urges Bramson. Integrating IT and OT Cybersecurity How can CISOs create a cybersecurity strategy that effectively manages IT and OT? The first step is gaining a comprehensive understanding of what devices and systems are a part of both the IT and OT spheres of a business. Without that information, CISOs cannot quantify and mitigate risk. “You need to know that the systems exist. There’s this tendency to just put them on the other side of a wall, physical or virtual, and no one knows what number of them exist, what state they're in, what versions they're in,” says Hallenbeck. In one of his CISO roles, Christos Tulumba, CISO at data security and management company Cohesity, worked with a company that had multiple manufacturing plants and distribution centers. The IT and OT sides of the house operated quite separately. “I walked in there … I did my first network map, and I saw all this exposure all over,” he tells InformationWeek. “It raised a lot of alarms.” Once CISOs have that network map on the IT and OT side, they can begin to assess risk and build a strategy for mitigation. Are there devices running on default passwords? Are there devices running suboptimal configurations or vulnerable firmware? Are there unnecessary IT and OT connections? “You start prioritizing and scheduling remediation actions. You may not be able to patch every device at the same time. You may have to schedule it, and there needs to be a strategy for that,” Shankar points out. The cybersecurity world is filled with noise. The latest threats. The latest tools to thwart those threats. It can be easy to get swept up and confused. But Shankar recommends taking a step back. “The basic security hygiene is what I would start with before exploring anything more complex or advanced,” he says. “Most CISOs, most operators continue to ignore the basic security hygiene best practices and instead get distracted by all the noise out there.” And as all cybersecurity leaders know, their work is ongoing. Environments and threats are not static. CISOs need to continuously monitor IT and OT systems in the context of risk and the business’ objectives. That requires consistent engagement with IT and OT teams. 
“There needs to be an ongoing dialogue and ongoing reminder prompting them and challenging them to be creative on achieving those same security objectives but doing it in context of their … world,” says Hallenbeck. CISOs are going to need resources to achieve those goals. And that means communicating with other executive leaders and their boards. To be effective, those ongoing conversations are not going to be deep, technical dives into the worlds of IT and OT. They are going to be driven by business objectives and risks: dollars and cents. “Once you have your plan, be able to put it in that context that your executives will understand so that you can get the resources [and] authorities to take action,” says Bramson. “At the end of the day, [this] is a business problem and when you touch OT, you're touching the lifeline, the life’s breath of how that business operates, how it generates revenue.” Building an IT/OT Skillset IT and OT security require different skillsets in many ways, and CISOs may not have all of those skills readily at their fingertips. The digital realm is a far cry from that of industrial technology. It is important to recognize the knowledge gaps and find ways to fill them. “That can be from hiring, that can be from outside consultants’ expertise, key partnerships,” says Bramson. An outside partner with expertise in the OT space can be an asset when CISOs visit OT sites -- and they should make that in-person trip. But if someone without site-specific knowledge shows up and starts rattling off instructions, conflict with the site manager is more likely than improved cybersecurity. “I would offer that they go with a partner or with someone who's done it before; people who have the creditability, people who have been practitioners in this area, who have walked sites,” says Bramson. That can help facilitate better communication. Security leaders and OT leaders can share their perspectives and priorities to establish a shared plan that fits into the flow of business. CISOs also need internal talent on the IT and OT sides to maintain and strengthen cybersecurity. Hiring is a possibility, but the well-known talent constraints in the wider cybersecurity pool become even more pronounced when you set out to find OT security talent. “There aren't a lot of OT-specific security practitioners in general and having people within these businesses that are in the OT side that have security specific training, that's vanishingly rare,” says Hallenbeck. But CISOs needn’t despair. That talent can be developed internally through upskilling. Tulumba actually advocates for upskilling over hiring from the outside. “I've been like that my entire career. I think the best performing teams by and large are the ones that get promoted from within,” he shares. As IT and OT systems inevitability interact with one another, upskilling is important on both sides. “Ultimately cross-train your folks … to understand the IT side and the OT side,” says Tulumba.0 Comments 0 Shares 0 Reviews
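As a toy illustration of the "map it, then prioritize remediation" step discussed above, the sketch below reads a merged IT/OT asset inventory and flags default credentials and stale firmware. The CSV columns, credential labels, and thresholds are hypothetical; a real program would pull this data from discovery tooling and set thresholds from its own risk assessment.

```python
import csv
from datetime import date, datetime

# Illustrative policy values -- real programs derive these from risk assessments.
DEFAULT_CREDENTIAL_FLAGS = {"admin/admin", "root/root", "factory-default"}
MAX_FIRMWARE_AGE_DAYS = 365 * 3

def audit_inventory(path: str) -> list[dict]:
    """Flag assets from a merged IT/OT inventory that need remediation first.

    Expected (hypothetical) CSV columns: asset_id, zone, credential_state, firmware_date.
    """
    findings = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            issues = []
            if row["credential_state"] in DEFAULT_CREDENTIAL_FLAGS:
                issues.append("default credentials")
            fw_age = (date.today() - datetime.strptime(row["firmware_date"], "%Y-%m-%d").date()).days
            if fw_age > MAX_FIRMWARE_AGE_DAYS:
                issues.append(f"firmware {fw_age} days old")
            if issues:
                findings.append({"asset": row["asset_id"], "zone": row["zone"], "issues": issues})
    # OT assets often cannot be patched immediately; surface them first so fixes
    # can be scheduled into planned maintenance windows.
    return sorted(findings, key=lambda item: item["zone"] == "OT", reverse=True)

if __name__ == "__main__":
    for finding in audit_inventory("assets.csv"):
        print(finding)
```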
-
WWW.INFORMATIONWEEK.COM
Why Cybersecurity Needs More Business-Minded Leaders
The question is no longer "Are we compliant?" but "Are we truly resilient?"
-
WWW.INFORMATIONWEEK.COM
Asia's Top Integrated Security Exhibition Is Underway
SECON & eGISEC 2025 is underway, showcasing a large and diverse array of both physical and cybersecurity innovations and products. While there is a comprehensive display of advancements in traditional physical security measures and cybersecurity products, it's the integration in the converged security realm that's arguably gaining the most attention, particularly in critical sectors. By all accounts, the depth of information in the latest security developments at the exhibition is complemented by the breadth of diversity in products and innovations.
"In particular, we plan to focus on AI-driven privacy protection, enhancing security in cloud environments, and the latest trends in pseudonymization and anonymization technologies. Additionally, understanding how privacy protection solutions are applied across various industries is a key objective, as real-world case studies provide valuable practical insights," said Lee Hyejun, Associate at EASYCERTI, a provider of AI, big data, and cloud-based privacy protection and privacy data solutions and an exhibitor at the event.
Beyond its booth at the exhibition, EASYCERTI's Senior Researcher, Seunghoon Yeom, will be delivering presentations at the conference within. The first, on March 20, covers the latest trends and countermeasures in privacy protection. The second, on the following day, covers standards pertaining to securing personal information and verification processes.
There are over 400 exhibitors taking part in the exhibition, and the displays are enticing and informative. The security issues addressed by this year's new product offerings run the gamut of known vulnerabilities, including some with no previously known countermeasures.
"There are no known solutions to prevent paper document leaks around the world. Through exhibitions like this, we hope to show people that such a solution exists and that an increasing number of companies and organizations are using our solution: docuBLOCK," said Myungshin Lee, CEO of ANYSELL Co., Ltd. and an exhibitor at the event.
The event spreads over 28,000 square meters of space and is expecting more than 30,000 visitors from around the world. The products and innovations on display cover the gamut of security sectors, including edge devices with on-device AI, converged security, cloud and IoT security, smart city security, automotive security, and maritime security, among others.
"One of the key solutions we will be presenting at SECON & eGISEC 2025 is real-time log and file encryption. This technology encrypts data the moment it is generated, making it essential for industries such as finance, the public sector, and medical fields where both security and real-time processing are critical," said Haeun JI at iNeb Inc, a provider of encryption and data security and an exhibitor.
A comprehensive conference and seminar program is happening inside SECON & eGISEC 2025. The program was developed in collaboration with prominent institutions and industry leaders. It features over 100 sessions across 30+ tracks.
Attendees can join discussions on critical topics such as industrial security, advanced CCTV management, aviation protection, counterterrorism tactics, personal data privacy, and other pressing security concerns.
This year's key security issues featured at the exhibition include:
Edge devices with on-device AI (local AI)
Convergence of cybersecurity and physical security
The evolving zero-trust security model
Intensifying software supply chain security threats
Cyber fraud as a service (Qshing)
Cybercrime targeting youth and social media restrictions
Concerns over whether cloud security platforms and cloud service platforms can continue to coexist independently
Hidden risks in old "new" technologies
Event organizers cited as examples of hidden risks: cloud services have suffered from human errors, leading to unintended data leaks. Similarly, ChatGPT has raised serious concerns, as users often unintentionally expose sensitive information through interactions with the AI. These risks have prompted ChatGPT bans in several countries.
However, risks are growing in other areas, too, even across entire industries. For example, the financial industry is intensely attractive to thieves and fraudsters. YH Database Co., Ltd. has introduced newly released products to buyers every year since 2013, mainly introducing financial security and informatization solutions.
"This year, the company is showcasing AI-specialized products, including y-SmartChat, y-SmartData, and y-MobileMonitorSDK3.0," said Kim JungWon, senior executive director of YH Database.
"For example, y-SmartData can be used not only as an abnormal transaction detection system (FDS) that can prevent financial accidents through voice phishing and fake bank accounts, but also as an internal audit control system (ADS) and an anti-money laundering (AML) system that can detect and prevent illegal money laundering," Kim added.
Attendees appear universally eager to check out possible solutions for these and the other top security issues of the day. Exhibitors are just as eager to demonstrate their technological breakthroughs and check out the competition.
"Aircode expects many customers to look for an alternative solution in response to the relaxation of network separation regulations," said Yunsang Kim, presales vice president at Aircode and also an exhibitor at the event.
"AirCode will be launching a browser-based virtualized web isolation product (AirRBI) that we believe is competitive in terms of functionality and efficiency compared to other solutions. And also, Aircode wants to check and learn what web isolation solutions are available on the market and what features they have," added Yunsang. Aircode is also presenting a talk on Secure Web Browsing in Network-Separated Environments at the conference program within the exhibition.
It can be difficult to choose which exhibits to visit and which presentations and keynotes to attend. That's because there is such diversity in security topics and products.
"At SECON & eGISEC 2025, AhnLab will showcase its latest security solutions built upon 30 years of comprehensive security expertise. Additionally, we are hosting booths for our subsidiaries -- NAONWORKS, Jason, and AhnLab CloudMate -- where participants can explore each company's specialized technologies in OT/ICS (industrial control system), AI, and MSP (managed service provider), as well as their synergy with AhnLab," said Junghyun Kim, Marketing Director at AhnLab, Inc. and an exhibitor at the event.
The exhibition runs from March 19 to 21 and is held at Hall 3-5 in Kintex, Korea.
-
WWW.INFORMATIONWEEK.COM
From AI Fling to the Real Thing
Lindsay Phillips, COO and Co-Founder, SkyPhi Studios
March 18, 2025
4 Min Read
Li Ding via Alamy Stock
When AI is successfully implemented, it fundamentally changes your team. Much like a romantic relationship, a new partnership is formed -- greater than the sum of its parts. You must approach AI not as a side piece but as a full-fledged partner, ready to work differently and ultimately -- better together.
The question isn't whether to adopt AI, but how to ensure it leads to meaningful use. Just like picking a mate, companies must evaluate tools carefully and integrate them thoughtfully in order for the relationship to work. Think of AI adoption in terms of relationship stages: honeymoon, conflict, commitment, and thriving.
Phase 1: The Honeymoon
You've identified a need, purchased an AI tool, and are excited to get started -- swoon. You're daydreaming about what this new team member will bring to the table.
Emotions at this stage: Excitement runs high, but engagement is sporadic as the team adjusts to the new tool. Optimism might blind you to the inevitable complexities of long-term integration.
Risks at this stage:
Choosing the wrong (adoption) partner or going forth without an adoption plan at all.
Not having crucial conversations or setting unreasonable expectations for the tool or your team, and overwhelming both.
Action to be taken at this stage:
Define business objectives and how the tool should support those. Define clear goals.
Create an adoption plan or find an adoption partner. How do you expect people to change to use the tool, and are your expectations realistic?
Phase 2: Conflict Arises
A heart-sinking moment; you've had your first fight. As you start working with the tool, conflicts emerge: misaligned workflows, unclear responsibilities, or differing interpretations of the tool's value.
Emotions at this stage: Frustration and confusion dominate. The excitement of the honeymoon gives way to chaos as the team struggles to integrate the tool into daily operations.
Risks at this stage: Disengagement can tank adoption, leading to distrust in leadership and abandonment of the tool altogether.
Action to be taken at this stage:
Clarify roles and responsibilities. Identify which tasks AI will take over and how your team must adjust to make room for this.
Redesign workflows. Map how data flows through the system. Define who handles each step and how the AI's outputs are utilized.
Set expectations for both team and tool. Training is important, but it's more critical to align on when and why to use the tool than how.
Phase 3: Commitment to Working Through the Kinks
Now comes the commitment phase. You've decided to put in the effort to make the relationship work. This is where your team begins to norm -- finding ways to resolve conflicts, clarify roles, and build trust.
Emotions at this stage: Calm and determined. The team is less reactive, focused on solving problems, and unified in working toward shared goals.
Risks at this stage: Complacency can derail momentum, pushing your team back into conflict or leading to abandonment if vigilance wanes.
Action to be taken at this stage:
Assign owners and incentives. Designate individuals responsible for AI implementation and incentivize their success.
Hold regular check-ins. Create opportunities to address challenges and refine processes.
Celebrate wins.
Acknowledge progress to keep morale high and reinforce positive behaviors.
Phase 4: Thriving Together
Your team and AI tool are in sync, working seamlessly together. The partnership has matured into something greater than the sum of its parts -- a thriving relationship. You're no longer focused on making it work; you're discovering new ways to grow together and achieve shared goals.
Emotions at this stage: Excitement and pride. Your team feels empowered by what you've built together and evangelizes the success -- confident in its ability to work, evolve, and last.
Risks at this stage: Even in a thriving relationship, there's a risk of falling into complacency. If you stop nurturing the partnership, you may achieve some success, but you'll miss out on its full potential. Staying curious and engaged ensures your (AI) partnership continues to grow stronger and more meaningful.
Action to be taken at this stage:
Expand responsibilities. Just as in a strong relationship, trust allows you to take on new challenges together. Build on initial success by exploring new use cases for the tool.
Stay curious. Keep the spark alive by asking: What else can this tool do? What's next for us?
Foster a community of practice. Identify super-users who act as ambassadors, sharing insights and helping others deepen their connection with the tool.
In Perfect Harmony
Adopting AI is not a one-and-done affair. It's a process that requires intentionality, flexibility, and commitment at every stage. By treating AI as a valued partner -- one that requires clear communication, defined roles, and ongoing support -- you can move beyond the initial honeymoon phase and build a lasting, thriving relationship.
With the right approach, AI can transform your organization, allowing your team to achieve more. The key is ensuring that both sides -- human and machine -- are willing to work differently to work better together.
About the Author
Lindsay Phillips, COO and Co-Founder, SkyPhi Studios
Lindsay Phillips is the co-founder and chief operating officer of SkyPhi Studios, a change firm that delivers transformative success by empowering organizations to realize the full value of their digital investments. She specializes in guiding organizations through change, fostering collaboration, and enhancing engagement. Her expertise in leadership coaching, sales process support, and culture change initiatives helps organizations not just adopt new tools but embrace a holistic approach to transformation.
-
WWW.INFORMATIONWEEK.COM
AI's Impact on Cloud Spending: The Hunger for Capacity
Spending on artificial intelligence applications, particularly generative AI, is driving up the cost of enterprise cloud computing. These costs climbed an average of 30%, according to a 2024 report commissioned in October by Tangoe, a technology expense management solution provider, and conducted by Vanson Bourne. In addition, 72% of IT and financial leaders believed that GenAI-led cloud spending had become unmanageable.
"GenAI is creating a cloud boom that will take IT expenditures to new heights," Chris Ortbals, chief product officer at Tangoe, said in a statement. "With year-over-year cloud spending up 30%, we are seeing the financial fallout of AI demands. Left unmanaged, GenAI has the potential to make innovation financially unsustainable."
Ortbals even described cloud costs as lethal to GenAI. "The cloud's hidden costs and unpredictable invoices can become the silent killer of GenAI," he added. "The more urgently companies adopt comprehensive cost management FinOps strategies, the easier it is for them to turn GenAI's promise into lasting innovation instead of runaway expenses and technical debt."
Cloud costs are rising amid inflation and technical debt, Ortbals wrote in Forbes. He noted that it is the role of CIOs to pay for shared services as the habitual corporate financier, even when costs increase. As these cloud costs climb, tensions rise between IT and finance, Ortbals wrote.
How AI Is Impacting the Cloud Landscape
Cloud spending is indeed increasing because of the demands of AI, explains Matt Hobbs, cloud, engineering, data and AI leader at PwC.
"If you look at the resource intensity of the very specific workloads you're using it for, in combination with the fact that those resources are super constrained, it is keeping those costs really high right now," Hobbs tells InformationWeek.
AI workloads are costly because organizations are hungry for capacity and are using cloud resources to unify their data environment, he says. "Speed matters a lot here, and so if you're in the cloud, you have the ability to go a lot faster than if you're running on prem," Hobbs says.
In addition, as organizations shut down data centers and move from on-prem infrastructure to the cloud, companies' cloud costs were increasing anyway, apart from what AI adds, Hobbs suggests. He also notes the duplicative costs that occur as AI companies offer their own direct LLM services and cloud providers integrate them as well.
"If you look at AI as a driver toward cloud costs, that's a question of, is it actually more expensive, or is it a shift toward cloud that's happening because of AI?" Hobbs says.
As the life cycle of infrastructure gets shorter and GPUs get more powerful, cloud costs go up, explains Dmitry Panenkov, CEO and founder at cloud-management platform Emma.
"So basically, the life cycle is getting shorter, and each and every accelerator they release is more powerful, but on the other hand, is also more expensive, and this automatically drives up your costs," Panenkov explains. "So, you need to pay more if you want to get these GPUs, and the providers need to pay more."
"And then if you train the models on top-notch accelerators, you pay more per hour to ramp up this capacity."
Although cloud costs are increasing due to AI, organizations are not slowing down in spending on cloud or AI, according to Hobbs. Nic Benders, chief technical strategist for New Relic, agrees that spending on infrastructure such as cloud will remain robust amid AI's growth.
"I believe IT spend is actually constrained by the amount of money in IT, not by the things to spend it on," Benders says. "So, I believe that we will continue to see rapid growth in spending on infrastructure."
How AI Tools Help Forecast Cloud Spending
Although AI may make cloud costs climb, AI tools can also help manage these costs and alleviate cloud spending. Organizations can use predictive analytics to study past usage patterns. In addition, machine learning can train models on past usage patterns and auto-scale the use of cloud resources.
Emma uses AI to analyze the behavior of cloud workloads and allow organizations to adjust their environments to reduce their cloud bills, Panenkov says. He predicts that AI costs, and thus cloud costs, will go down as the price of GPU accelerators drops.
"We have a networking backbone that interconnects the clouds, and we have AI algorithms to define the best and most optimal route from one service provider to another, which is associated with a smaller cost," Panenkov says.
Benders also sees the move to expensive infrastructure such as GPU accelerators as short-term. Just as the tech industry moved from three nodes in a cluster to thousands of nodes in a cluster and hardware got less expensive, Benders sees a similar pattern with AI.
"I suspect that we're going to see the same thing in the AI-driven load that, if it matures, will move away from those kinds of cutting-edge experimental systems, but that's not going to be for some years now. So, I think we're in a phase right now where people are going to be spending their money on those cutting-edge systems," he says, referring to GPU accelerators.
How CIOs Should Approach AI and Cloud Spending Going Forward
Panenkov recommends a hybrid model of on-premises and cloud infrastructure to manage cloud costs. "The best model to work with is a hybrid model, where you have your on-premises environment where you can train your models," Panenkov says. "But in case you need to scale and pick up more GPU instances to continue the training of your model, you can scale the workloads up into the cloud, and for a short period of time, you can rent certain instances with the cloud service provider. So that, we think, is the right approach."
Hobbs advises that organizations assess what they are using AI services for when choosing their infrastructure. By deploying workloads -- whether cloud or AI -- at the edge as part of a hybrid cloud setup, organizations can drive down overall cloud costs. "When enterprise data is connected, companies naturally leverage the centralized cloud," Hobbs explains. "However, when data becomes disconnected at the edge, placing computing power locally can significantly lower costs."
For example, Hobbs notes that a telco company might serve its customers through both private and public clouds. In this arrangement, the private cloud delivers direct value to end users, while the public cloud offers operational efficiencies for enterprises.
"I think it matters more where an organization is on its cloud journey -- that's what truly drives the architectural decision -- than merely following a fixed pattern of delivering an end service to a customer," Hobbs says.
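To illustrate the kind of predictive analysis of past usage patterns described above, here is a small sketch that fits a linear trend to monthly cloud spend and projects the next quarter. The spend figures are invented, and a real FinOps forecast would also model seasonality, committed-use discounts, and planned AI workloads.

```python
import numpy as np

# Invented monthly cloud spend (USD thousands) -- in practice this comes from
# billing exports or a cost-management platform.
monthly_spend = np.array([310, 325, 344, 360, 381, 402, 428, 455, 470, 498, 520, 548])

# Fit a simple linear trend: month index -> spend.
months = np.arange(len(monthly_spend))
slope, intercept = np.polyfit(months, monthly_spend, deg=1)

horizon = 3  # project the next quarter
future_months = np.arange(len(monthly_spend), len(monthly_spend) + horizon)
forecast = slope * future_months + intercept

for m, cost in zip(future_months, forecast):
    print(f"Month {m + 1}: projected spend ~${cost:,.0f}k")

growth = (monthly_spend[-1] - monthly_spend[0]) / monthly_spend[0]
print(f"Trailing growth over the window: {growth:.0%}")
```

Even a crude trend line like this gives finance and IT a shared, numbers-based starting point for the budget conversations the article describes.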
-
WWW.INFORMATIONWEEK.COM
Signs Your Organization's Culture Is Hurting Your Cybersecurity
High turnover, burnout, and blame-heavy environments do more than hurt morale. They also weaken security and put the organization at risk.
Dark Reading Staff & Contributors | March 6, 2025 | 1 Min Read | Image: Brain light via Alamy Stock
These days, the word "toxic" gets thrown around a lot in many contexts, but when used to describe organizational culture, it poses an actual threat. When employees are constantly overworked, undervalued, or forced to operate in high-stress, blame-heavy environments, mistakes are inevitable. Fatigue leads to oversight, disengagement breeds carelessness, and a lack of psychological safety prevents people from speaking up about vulnerabilities or potential risks. In an industry where even the smallest errors can have massive consequences, this kind of dysfunction can be dangerous.
Rob Lee, chief of research and head of faculty at SANS Institute, says a toxic cybersecurity culture manifests when professionals feel undervalued, unsupported, or actively undermined. The warning signs are often evident long before they take a toll. High turnover, for example, is a flashing red light.
"When skilled professionals keep leaving, it's usually because they're burned out or their concerns are ignored," he says.
Another major indicator is how organizations treat training and development. Many companies claim to support ongoing education, but when budgets for training are cut, the message is clear: Growth and expertise aren't priorities. Lee says that some businesses also make the critical mistake of investing in security tools while neglecting the people who operate them.
Read the Full Article on Dark Reading
-
WWW.INFORMATIONWEEK.COM
How to Create a Winning AI Strategy
Lisa Morgan, Freelance Writer | March 3, 2025 | 8 Min Read | Image: Brain light via Alamy Stock
Artificial intelligence continues to become more pervasive as organizations adopt it to gain a competitive advantage, reduce costs, and deliver better customer experiences. All organizations have an AI strategy, whether by design or by default. The former helps ensure the company is realizing greater value, simply because its leaders are putting more thought into it and working cross-functionally to make it happen, both strategically and tactically.
"It's very much back to the business, so what are the business objectives? And then within that, how can AI best help me achieve those objectives?" says Anand Rao, distinguished service professor, applied data science and artificial intelligence at Carnegie Mellon University. "From there, [it] pretty much breaks down into two things: AI automates tasks so that you can be more efficient, and it helps you make better decisions, and with that comes a better customer experience, more revenue, or more consistent quality."
Elements of a Winning AI Strategy
Kevin Surace, CEO at autonomous testing platform Appvance, says the three elements of an effective AI strategy are clarity, alignment, and agility.
"A winning AI strategy starts with a clear vision of what problems you're solving and why," says Surace. "It aligns AI initiatives with business goals, ensuring every project delivers measurable value. And it builds in agility, allowing the organization to adapt as technology and market conditions evolve."
Will Rowlands-Rees, chief AI officer at eLearning, AI services, and translation and localization solution provider Lionbridge, agrees.
"It is critical to align your AI strategy and investments with your overall business strategy -- they cannot be divorced from each other," says Rowlands-Rees. "When applied correctly, AI is a powerful tool that can accelerate your organization's ability to solve customer problems and streamline operations, and therefore drive revenue growth. This offensive approach will organically lead to cost optimization as efficiencies emerge from streamlined processes and improved outcomes."
Brad O'Brien, partner at global consultancy Baringa's US Financial Services practice, advocates having a clear governance framework, including the definition of roles and responsibilities, setting guiding principles, and ensuring accountability at all levels.
"Comprehensive risk management practices are essential to identify, assess, and mitigate AI-related risks, including regular audits, bias assessments, and robust data governance," says O'Brien. "Staying informed about, and compliant with, evolving AI regulations, such as the EU AI Act and emerging US regulations, is vital. Maintaining transparency and thorough documentation of the entire AI lifecycle builds trust with stakeholders. Engaging key stakeholders, including board members, employees, and external partners, ensures alignment and support for AI initiatives. Continuous improvement, based on feedback, new data, and technological advancements, is also a critical component."
Ashwin Rajeeva, co-founder and CTO at enterprise data observability company Acceldata, believes a successful AI strategy blends a clear business vision with technical excellence.
"It starts with a strong data foundation; reliable, high-quality data is non-negotiable. Scalability and adaptability are also critical as AI technologies evolve rapidly," says Rajeeva.
"Ethical considerations must be embedded early, ensuring transparency and fairness in AI outcomes. Most importantly, it should create tangible business value while maintaining the flexibility to adapt to future innovations."
How to Avoid Common Mistakes
One mistake is assuming that generative AI replaces other forms of AI. That's incorrect, because traditional types of AI -- such as computer vision, predictions, and recommendations -- use different types of models.
"You still need to look at your use cases and standard methods. Look across the organization, look at the value chain elements, and then look at where traditional AI works, where generative AI would work, and where some of the more agent-style approaches would work," says CMU's Rao. "Then, essentially start pulling all of the use cases together and have some method of prioritizing."
The accelerating rate at which AI technology is advancing is also having an effect, because companies can't keep up, so organizations are questioning whether they should buy, build, or wait.
"Change with respect to AI, and especially GenAI, is moving very fast. It's moving so much faster that not even the technology companies can keep pace," says Rao.
AI is also not a solution to all problems. Like any other technology, it's simply a tool that needs to be understood and managed.
"Proper AI strategy adoption will require iteration, experimentation, and, inevitably, failure to end up at real solutions that move the needle. This is a process that will require a lot of patience," says Lionbridge's Rowlands-Rees. "[E]veryone in the organization needs to understand and buy in to the fact that AI is not just a passing fad -- it's the modern approach to running a business. The companies that don't embrace AI in some capacity will not be around in the future to prove everyone else wrong."
Organizations face several challenges when implementing AI strategies. For example, regulatory uncertainty is a significant hurdle, and navigating the complex and evolving landscape of AI regulations across different jurisdictions can be daunting.
"Ensuring data privacy and security is another major challenge, as organizations must protect sensitive data used by AI systems and comply with privacy laws. Mitigating biases in AI models to prevent unfair treatment and ensure compliance with anti-discrimination laws is also critical," says Baringa's O'Brien. "Additionally, the 'black box' nature of AI systems poses challenges in providing clear explanations of AI decisions to stakeholders and regulators. Allocating sufficient resources, including skilled personnel and financial investment, is necessary to support AI initiatives."
In his view, common mistakes in AI strategy implementation include:
- A lack of clear governance frameworks and accountability structures.
- Insufficient risk management practices, such as overlooking comprehensive risk assessments and bias mitigation.
- Poor data management, including neglecting data privacy and security, which can lead to potential breaches and regulatory non-compliance.
- Inadequate transparency in documenting and explaining AI processes, which results in a lack of trust among stakeholders.
- Underestimating resource needs, such as not allocating sufficient skilled personnel and financial investment, which can hinder AI initiatives.
- Encountering resistance from employees and stakeholders who hesitate to embrace AI technologies.
"[P]rioritize governance by establishing clear frameworks and ensuring accountability at all levels.
Stay informed about evolving AI regulations and ensure compliance with all relevant standards," says O'Brien. "Focus on transparency by maintaining thorough documentation of AI processes and decisions to build trust with stakeholders. Invest in regular training for employees on AI policies, risk management, and ethical considerations. Engage key stakeholders in the design and implementation of AI initiatives to ensure alignment and support. Finally, embrace continuous improvement by regularly updating and refining AI models and strategies based on feedback, new data, and technological advancements."
One of the biggest mistakes Shobhit Varshney, VP and senior partner, Americas AI leader at IBM Consulting, has observed is organizations selecting AI use cases based on speed of implementation rather than properly articulated business impact.
"Many organizations adopt AI because they want to stay competitive, but they fail to realize that they aren't focusing on the use cases that will create significant long-term value. It's common to start with simple, easy-to-automate tasks, but this approach can be limiting," says Varshney. "Instead, organizations should focus on areas where AI can have the greatest impact and have enough instrumentation to capture metrics and continuously iterate and evolve the solution. The best starting point for AI use cases is unique to each business, and it's important to identify areas within the organization that could benefit from improvement."
He also says an all-too-common mistake is automating an existing process.
"We need to rethink workflows to truly unlock the power of these exponential technologies. As we evolve to agentic AI, we need to ensure that we rethink the optimal way to delegate specific tasks to agents and play to the strengths of humans and AI," says Varshney.
Jim Palmer, chief AI officer at AI-native business and customer communications platform Dialpad, says a common challenge is ensuring AI models have access to accurate, up-to-date data and can seamlessly integrate with existing workflows.
"There's a gap between AI's theoretical potential and its practical business application. Companies invest millions in AI initiatives that prioritize speed to market over actual utility," Palmer says.
Bhadresh Patel, COO of global professional services firm RGP, thinks one of the biggest challenges organizations face is the significant gap between ideation and execution.
"We often see organizations set up an AI function and expect miracles, but this approach simply doesn't work. This is why it's important to prioritize the pockets of use cases where AI can have the biggest impact on the business," says Patel. "Another challenge organizations often face is when functional people do not take the time to understand the capabilities and limitations of the tools they have at their disposal. Leaders must understand why they're making new AI investments and what the overlap is in terms of existing capabilities, training, and user knowledge."
Acceldata's Rajeeva says organizations often grapple with fragmented or poor-quality data, which undermines AI outcomes.
"Scaling AI initiatives from proof of concept to enterprise-wide deployment can be daunting, especially without robust operational frameworks. Additionally, balancing innovation with regulatory and ethical standards is challenging. A lack of skilled talent and clear success metrics further complicates these efforts," says Rajeeva.
"One significant misstep is treating AI as a technology-first initiative, ignoring the importance of data quality and infrastructure. Organizations sometimes over-invest in sophisticated models without aligning them with practical business goals. Another common mistake is failing to plan for scaling AI, leading to operational bottlenecks. Finally, insufficient monitoring often results in biased or unreliable AI systems."
And remember, foresight and agility are more valuable than 20/20 hindsight.
"Start with the end in mind. Define success metrics before you write a single line of code. Build cross-functional teams that can bridge the gap between business and technology," says Appvance's Surace. "And remember, an AI strategy isn't static -- it's a living, evolving framework that should grow with your organization and its goals."
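As an aside on Rao's advice to pull the use cases together and apply "some method of prioritizing," the following is one minimal way to score and rank candidates. The criteria, weights, and example use cases are assumptions for illustration, not a prescribed framework.

```python
# Minimal sketch of a weighted-scoring approach to ranking AI use cases.
# Criteria, weights, and scores are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_impact: int   # 1-5: revenue, cost, or customer-experience upside
    data_readiness: int    # 1-5: quality and availability of required data
    feasibility: int       # 1-5: technical and regulatory feasibility
    time_to_value: int     # 1-5: 5 = fast payback

WEIGHTS = {"business_impact": 0.4, "data_readiness": 0.2,
           "feasibility": 0.2, "time_to_value": 0.2}

def score(uc: UseCase) -> float:
    # Weighted sum of the four criteria.
    return sum(getattr(uc, criterion) * weight
               for criterion, weight in WEIGHTS.items())

candidates = [
    UseCase("Invoice processing automation", 3, 5, 5, 5),
    UseCase("Churn prediction", 5, 3, 4, 3),
    UseCase("GenAI customer support copilot", 4, 2, 3, 2),
]

for uc in sorted(candidates, key=score, reverse=True):
    print(f"{score(uc):.1f}  {uc.name}")
```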
-
WWW.INFORMATIONWEEK.COM
Driving Innovation and Efficiency Through Automation
Brandon Taylor, Digital Editorial Program Manager | February 21, 2025 | 5 Min View
Investing in substantial automation that enables agile and strategic business operations is vital to compete and grow in today's digital landscape.
In this archived keynote session, Rachel Lockett, vice president of business technology solutions and operations at Surescripts, and Jason Kikta, CISO and senior vice president of product at Automox, discuss how organizations are utilizing automation to find value and regroup to meet challenges.
This segment was part of our live virtual event titled "The CIO's Guide to IT Automation in 2025: Enabling Innovation & Efficiency." The event was presented by InformationWeek on February 6, 2025.
A transcript of the video follows below. Minor edits have been made for clarity.
Rachel Lockett: So, the outcomes and consequences of alert fatigue in all its different forms can include ignored alerts, slowed response times, and ultimately not reacting with urgency when something is due. They can also result in burnout. Since joining the healthcare field, I have heard more now about provider burnout.
There have been news stories about alert fatigue resulting in things being missed and ignored that resulted in patient deaths. So again, let's make a correlation to the technology field. What have you seen in your experience? What have been the direst consequences and costly mistakes that you've seen because of alert fatigue and lack of automation?
Jason Kikta: I think one of the best and easiest examples for people to orient on when they think about it, especially at the intersection of IT and security, is the number of vulnerabilities. So, this is the slide that you and I showed the audience when we met last year. This was the projection for the number of CVEs.
The number of security vulnerabilities in software was growing at an alarming rate and becoming a lot to process. We talked about this, and we said by the time we get to 2025 it's going to be up to 32,000 a year, and it's going to be bad. We had 28,000 in 2023, but then in 2024 we had 40,000! It totally blew out the curve.
Now, there is some nuance here, right? This is not necessarily a bad thing in terms of cybersecurity, because part of this is vendors have gotten better, as well as security researchers. They've gotten better at finding these vulnerabilities, and vendors have become more disciplined in reporting these vulnerabilities.
So, there is some healthiness to those numbers being high, but it still doesn't change the base condition. I spoke to a company late last year, and their security team was trying to manually read through every CVE that was released by every vendor and match it up with their environment to see if they had it somewhere in their tech stack.
Then, they would make a manual determination about how they were going to proceed. Were they going to patch it? If so, how quickly were they going to patch it? It was mind-boggling. I thought to myself, how do you keep up? The gentleman I spoke to chuckled and said, well, we keep up poorly. Poorly is the answer.
RL: Right, because first, that's intensive labor based on the cost involved. But how can you catch up on time? There's going to be a delayed response because there's just too much volume.
JK: Another great example is the National Vulnerability Database, where they can't even keep up.
They are the ones charged with maintaining the global authoritative database, and they've had trouble keeping up. And this was as of last summer.
They don't have newer numbers out, but their last announcement in November was that we've added a lot of external contractor support and paid a lot of money to bring on this extra capacity. We are now keeping up with all the new ones, but we're still behind in the backlog. We don't have an effective way to burn that down.
These problems are not getting better; in fact, they're getting worse on the demand side. So, we must fix the supplies, or maybe it's backwards. Maybe it's the supply side, right? The amount that needs to be dealt with is just going to keep rising, and the ability to keep up with it manually is going to be overwhelming. So, you must fix it through better automation and thinking through these processes more holistically.
RL: You brought up exactly what I wanted to talk about next. Again, always coming at these things from the human impact perspective. A common solution, which you just described, is to throw more people at the problem, right? Hire more contractors and let's just keep throwing more people at the problem.
Things like rotating responsibilities between team members can help to reduce the impact of alert fatigue for a while, but it's just not a sustainable long-term solution. There's also another industry trend that's making this harder and harder to do, and that's the shortage of technology resources. We talked about this last summer.
What's happened since then? Is the problem of scarce technology resources getting better? Is it getting worse? Is it remaining the same? Where are we at?
Watch the archived "CIO's Guide to IT Automation in 2025: Enabling Innovation & Efficiency" live webinar on-demand today.
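The manual CVE-matching exercise Kikta describes is a natural candidate for automation. Below is a minimal sketch of matching a vulnerability feed against a software inventory; the CVE identifiers, products, and version numbers are hypothetical, and in practice the feed would come from a source such as the NVD and the inventory from an asset-management system.

```python
# Minimal sketch of automated CVE-to-inventory matching.
# The CVE records and inventory below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Cve:
    cve_id: str
    vendor: str
    product: str
    affected_before: tuple  # versions strictly below this are affected
    severity: str

# Installed software, keyed by (vendor, product), value is the version tuple.
inventory = {
    ("apache", "httpd"): (2, 4, 57),
    ("openssl", "openssl"): (3, 0, 11),
}

# A small, made-up slice of a vulnerability feed.
feed = [
    Cve("CVE-2025-0001", "apache", "httpd", (2, 4, 58), "HIGH"),
    Cve("CVE-2025-0002", "openssl", "openssl", (3, 0, 10), "CRITICAL"),
]

def matches(cve: Cve) -> bool:
    # True when the product is installed and the installed version is affected.
    installed = inventory.get((cve.vendor, cve.product))
    return installed is not None and installed < cve.affected_before

for cve in feed:
    if matches(cve):
        installed = inventory[(cve.vendor, cve.product)]
        print(f"{cve.cve_id} ({cve.severity}): {cve.product} "
              f"{'.'.join(map(str, installed))} is affected -- queue for patching")
```

A real pipeline would also have to handle version ranges, CPE naming quirks, and prioritization rules, which is exactly the volume problem that makes manual review untenable.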
-
WWW.INFORMATIONWEEK.COM
Quick Study: The IT Hiring/Talent Challenge
James M. Connolly, Contributing Editor and Writer | February 19, 2025 | 7 Min Read | Image: Ink Drop via Alamy Stock
So, you told a friend that you need to hire more IT folks. The friend replied, "Hah, good luck!"
Circumstances dealt IT leaders a challenging hand over the past few years. From the great resignation to executive demands for digital transformation, and onward to corporate fascination with artificial intelligence, hiring and keeping IT talent requires new strategies.
There was no single cause of today's hiring challenges, and there's no single, easy answer short of hitting the lottery and retiring. However, contributors to InformationWeek have shared their experiences and advice to IT leaders on ways to staff up and skill up, all while staying under budget and keeping IT's operational lights on.
In this guide to today's IT hiring and talent challenges, we have compiled a collection of advice and news articles focused on finding, hiring, and retaining IT talent. We hope it helps you succeed this year.
A World of Change
Help Wanted: IT Hiring Trends in 2025
IT's role is becoming more strategic. Increasingly, it is expected to drive business value as organizations focus on digital transformation.
IT Security Hiring Must Adapt to Skills Shortages
Diverse recruitment strategies, expanded training, and incentivized development programs can all help organizations narrow the skills gap in an era of rapidly evolving threat landscapes.
Top IT Skills and Certifications in 2025
In 2025, top IT certifications in cloud security and data will offer high salaries as businesses prioritize multi-cloud and AI.
How To Be Competitive in a Tight IT Employment Market
A slumping economy, emerging technologies, and over-hiring have led to a tight IT jobs market. Yet positions are still abundant for individuals possessing the right skills and attitude.
The Soft Side of IT: How Non-Technical Skills Shape Career Success
Here's why soft skills matter in IT careers and how to effectively highlight them on a resume. Show that you are a good human.
Salary Report: IT in Choppy Economic Seas and Roaring Winds of Change
Last year brought a sustained adrenaline rush for IT. Everything changed. Some of it with a whimper and some of it with a bang. Through it all IT pros held steady, but is it enough to sail safely through the end of 2024?
Quick Study: The Future of Work Is Here
The workplace of the future isn't off in the future. It's been here for a few years -- even pre-pandemic.
10 Unexpected, Under the Radar Predictions for 2025
From looming energy shortages and forced AI confessions to the rising ranks of AI-faked employees and a glimmer of a new cyber-iron curtain, here's what's happening that may require you to change your company's course.
Finding Talent
AI Speeds IT Team Hiring
Can AI help your organization find top IT job candidates quickly and easily? A growing number of hiring experts are convinced it can.
Skills-Based Hiring in IT: How to Do it Right
By focusing directly on skills instead of more subjective criteria, IT leaders can build highly capable teams.
Here's what you need to know to get started.
The Evolution of IT Job Interviews: Preparing for Skills-Based Hiring
The traditional tech job interview process is undergoing a significant shift as companies increasingly focus on skills-based hiring and move away from the traditional emphasis on academic degrees.
IT Careers: Does Skills-Based Hiring Really Work?
More organizations are moving toward skills-based hiring and getting mixed results. Here's how to avoid some of the pitfalls.
Jumping the IT Talent Gap: Cyber, Cloud, and Software Devs
Businesses must first determine where their IT skill sets need bolstering and then develop an upskilling strategy or focus on strategic new hires.
Top Career Paths for New IT Candidates
More organizations are moving from roles-based staffing to skills-based staffing. In IT, flexibility is key.
Why IT Leaders Should Hire Veterans for Cybersecurity Roles
Maintaining cybersecurity requires the effort of a team. Veterans are uniquely skilled to operate in this role and bring strengths that meet key industry needs.
How to Find a Qualified IT Intern Among Candidates
IT organizations offering intern programs often find themselves swamped with applicants. Here's how to find the most knowledgeable and prepared candidates.
The Search for Solid Hires Between AI Screening and GenAI Resumes
Do AI-generated job applications gum up the recruitment process for hiring managers by filling inboxes with dubiously written CVs?
3 Things You Should Look for When Hiring New Graduates
Each year, entry-level applicants in IT look a little different. Here's what you need to be looking for as the class of 2023 infiltrates the workforce.
Why a College Degree is No Longer Necessary for IT Success
Who needs student debt? A growing number of employers are hiring IT pros with little or no college experience.
Recruiting Talent
In Global Contest for Tech Talent, US Skills Draw Top Pay
After several years of economic uncertainty and layoffs, US talent is once again attracting good pay in the global competition for tech skills. But gender disparity continues in many job categories.
Hiring Hi-Tech Talent by Kickin' It Old School
Using elements of a traditional approach to recruiting IT professionals can attract and grow the modern workforce, but it's the soft skills shown during an interview that make a big difference.
The Impact of AI Skills on Hiring and Career Advancement
Demand is high for professionals with knowledge of AI, but do such talents really get implemented on the job?
How to Channel a World's Fair Culture to Engage IT Talent
Even the most well-funded and innovative companies will fail if they lack one thing: a diverse, united team. A CEO shares his experience and advice.
Bridging IT Skills Gap in the Age of Digital Transformation
Innovations in automation, cloud computing, big data analytics, and AI have not only changed the way businesses operate but have intensified the demand for specialized skills.
5 Traits To Look for When Hiring Business and IT Innovators
Hiring resilient and forward-thinking employees is the cornerstone of innovation.
If you're looking to hire a trailblazer, here are five traits to seek, as well as questions to ask.
CIOs Can Build a Resilient IT Workforce with AI and Unconventional Talent
As the IT talent crunch continues, chief information officers can embrace new strategies to combine traditional IT staff with nontraditional workers and AI to augment the workforce.
Pursuing Nontraditional IT Candidates: Methods to Expand Talent Pipelines
Employers winning in this labor market know how to look at adjacent skills and invest in upskilling their internal candidates while creating alternative candidate pools.
Hiring with AI: Get It Right from the Start
As organizations increasingly adopt artificial intelligence in hiring, it's essential that they understand how to use the technology to reduce bias rather than exacerbate it.
Secrets to Hiring Top Tech Talent
To hire best-in-class IT talent, your company must have interesting technical problems to solve.
Keeping Talent
Meaningful Ways to Reward Your IT Team and Its Achievements
A job well done deserves a significant reward. Here's how to show appreciation to a diligent staff without busting your budget.
Recognize the Contributions of Average IT Performers
Every IT department has its marginal performers. How do you get the most out of them?
How to Manage a Rapidly Growing IT Team
Maintaining IT staff performance and efficiency during rapid growth requires careful planning and structure. Here's how to expand your team without missing a beat.
Do Women IT Leaders Face a Glass Cliff?
Are organizations more likely to promote women to top IT management posts during hopeless crisis situations? Apparently, yes.
Skills Gap in Cloud Tools: Why It Exists and Ways to Address
As enterprises shift to modernize applications, a company's most important asset is the talent performance to back it up.
Addressing the Skills Gap to Keep Up with the Evolution of the Cloud
As cloud adoption increases, companies must focus on upskilling employees through continuous learning to maximize cloud and AI potential.
The AI Skills Gap and How to Address It
Workers are struggling to integrate AI into their skill sets. Where are we falling short in helping them leverage AI to their own benefit and the benefit of their employers?
-
WWW.INFORMATIONWEEK.COM
AI Upskilling: How to Train Your Employees to Be Better Prompt Engineers
Lisa Morgan, Freelance Writer | February 19, 2025 | 10 Min Read | Image: Tithi Luadthong via Alamy Stock
Generative AI's use has exploded across industries, helping people to write, code, brainstorm, and more. While the interface couldn't be simpler -- just type some text in the box -- mastery of it involves continued use and constant iteration.
GenAI is considered a game-changer, which is why enterprises want to scale it. While users have various resources available, like OpenAI and Gemini, proprietary LLMs, and GenAI embedded in applications, companies want to ensure that employees are not compromising sensitive data.
GenAI's unprecedented rate of adoption has inspired many individuals to seek training on their own, often online at sites such as Coursera, EdX, and Udemy, but employers shouldn't depend on that. Given the strategic nature of the technology, companies should invest in training for their employees.
A Fast Track to Improving Prompt Engineering Efficacy
Andreas Welsch, founder and chief AI strategist at boutique AI strategy consultancy Intelligence Briefing, advocates starting with a "Community of Multipliers" -- early tech adopters who are eager to learn about the latest technology and how to make it useful. These multipliers can teach others in their departments, helping leadership scale the training. Next, he suggests piloting training formats in one business area, gathering feedback, and iterating on the concept and delivery. Then, roll it out to the entire organization to maximize utility and impact.
"Despite ChatGPT being available for two years, generative AI tools are still a new type of application for most business users," says Welsch. "Prompt engineering training should inspire learners to think and dream big."
He also believes different kinds of learning environments benefit different types of users. For example, cohort-based online sessions have proven successful for introductory levels of AI literacy, while executive training expands the scope from basic prompting to GenAI products.
Advanced training is best conducted in a workshop because the content requires more context and interaction, and the value comes from networking with others and having access to an expert trainer. Advanced training goes deeper into the fundamentals, including LLMs, retrieval-augmented generation, vector databases, and security risks, for example.
Andreas Welsch, Intelligence Briefing
"Function-specific, tailored workshops and trainings can provide an additional level of relevance to learners when the content and examples are put into the audience's context, for example, using GenAI in marketing," says Welsch. "Prompting is an important skill to learn at this early stage of GenAI maturity."
Digital agency Create & Grow initiated its prompt engineering training with a focus on the basics of generative AI and its applications. Recognizing the diverse skill levels within its team, the company implemented stratified training sessions, beginning with foundational concepts for novices and advancing to complex techniques for experienced members.
"This approach ensures that each team member receives the appropriate level of training, maximizing learning efficiency and application effectiveness," says Georgi Todorov, founder and CEO of Create & Grow, in an email interview. "Our AI specialists, in collaboration with the HR department, lead the training initiatives.
This dual leadership ensures that the technical depth of AI is well-integrated with our overarching employee training programs, aligning with broader company goals and individual development plans."
The company's training covers:
- The basics of AI and language models
- Principles of prompt design and response analysis
- Use cases specific to its industry and client requirements
- Ethical considerations and best practices in AI usage
- Educational resources, including online courses, in-person workshops, and peer-led sessions, plus resources from leading AI platforms and collaborations with AI experts that keep training up-to-date and relevant
To gauge individuals' level of prompt engineering mastery, Create & Grow conducts regular assessments and chooses practical projects that reflect real-world scenarios. These assessments help the company tailor ongoing training and provide targeted support where needed.
"It's crucial to foster a culture of continuous learning and curiosity. Encouraging team members to experiment with AI tools and share their findings helps demystify the technology and integrate it more deeply into everyday workflows," says Todorov. "Our commitment to developing prompt engineering expertise is not just about staying competitive; it's about harnessing the full potential of AI to innovate and improve our client offerings."
A Different Take
Kelwin Fernandes, cofounder and CEO at AI strategy consulting firm NILG.AI, says good prompts are not ambiguous.
"A quick way to improve prompts is to ask the AI model if there's any ambiguity in the prompt. Then, adjust it accordingly," says Fernandes in an email interview.
His company defined a basic six-part template for efficient prompting that covers (a short code sketch of this template appears at the end of this article):
- The role the AI should play (e.g., summarizing, drafting, etc.)
- The human role or position the AI should imitate
- A description of the task, being specific and removing any ambiguity
- A negative prompt stating what the AI cannot do (e.g., don't answer if you're unsure)
- Any context you have that the AI doesn't know (e.g., information about the company)
- The specific task details the AI should solve at this time
"[W]e do sharing sessions and role plays where team members bring their prompts, with examples that worked and examples that didn't, and we brainstorm how to improve them," says Fernandes.
At video production company Bonfire Labs, prompt training includes a communal think tank on Google Chat, making knowledge accessible to all. The company also holds staff meetings in which different departments learn foundational skills, such as prompt structure or tool identification.
"This ensures we are constantly cross-skilling and upskilling our people to stay ahead of the game. Our head of emerging technologies also plays an integral role in training and any creative process that requires AI, further improving our best practices," says Jim Bartel, partner, managing director at Bonfire Labs, in an email interview. "We have found that the best people to spearhead prompt training are those who are already masters at what they do, such as our designers and VFX artists. Their expertise in refinement and attention to detail is perfect for prompting."
Why Developers May Have an Edge
Edward Tian, CEO at GPTZero, believes prompt engineering begins with gaining an understanding of the various language models, including ChatGPT, GPT-2, GPT-3, GPT-4, and LLaMA.
"It's also important to have a background in coding and an understanding of NLP, but people often have minimal knowledge about the different language models," says Tian.
"Understanding how their learning concepts work and how they are structured can help significantly with prompt engineering. Working with pre-trained models can also help prompt engineers really hone their skills and [gain] a further understanding of how it all works."
Chris Beavis, partner and AI specialist at design-led consultancy The Frameworks, suggests using the OpenAI development portal versus ChatGPT or Gemini, for example.
"It offers a greater level of control and access to different models. The temperature of a model is particularly important, allowing you to flex the randomness [and] creativity of answers over determinism [or] repeatability," says Beavis in an email interview.
Chris Beavis, The Frameworks
"[The user] should start by identifying an idea or a challenge they are facing to see what impact AI can have. Try out different approaches, remember to give specific instructions, provide examples, and be clear about the format of the result you are expecting. Some other tips include breaking problems down into steps, including relevant data sets for context, and prompting the AI to ask you questions about your request if it's not clear."
Most employees are experimenting with AI at The Frameworks in different ways, from image generation and summarization to more advanced techniques like augmented information retrieval and model training.
"I certainly think there is an initial barrier to overcome [when] familiarizing yourself with how to prompt, which may suggest the need for a beginner level of training. Beyond that, I think it's a learning journey that will depend on your area of interest. A developer may want to explore how to connect AI prompting to data sets via APIs, copywriters may want to use it for brainstorming or drafting, and strategists may want to use it to interrogate complex data sets. It's a digital literacy question."
His company is finding the most useful applications are those where it uses code to combine prompts with data sets, like mail merging. That way, AI can be treated as a step in a repeatable problem-solving process.
"As with most companies, we started by simply seeing what the technology could do," says Beavis. "As we become more familiar with the capabilities, we are finding interesting uses within client projects and our own internal processes."
Intelligence Briefing's Welsch says that for software developers, mastery is a cost function, such as getting the optimal output with the shortest possible prompt (to consume the least amount of tokens). For business users, he says, proficiency could be measured by awareness of common prompting techniques and frameworks.
"Prompting is often portrayed as a glorified science. While teaching techniques is a good start for laying a foundation, generative AI requires users to think differently and use software differently," says Welsch. "[Trainees] can learn about examples of what these tools can be used for, but it is experimenting and iterating over an open-ended conversation that they should take away from it."
Engage Specialized Trainers
Brendan Gutierrez McDonnell, a partner at K&L Gates in the law firm's AI solutions group, says his company uses a multifaceted approach to prompt engineering training.
"We have relied on experiential training provider AltaClaro's prompt engineering course as an introduction for our lawyers and allied professionals to the world of prompt engineering.
We have supplemented that foundational training with prompt engineering courses tailored to the GenAI [and other] AI solutions that our firm has licensed," says McDonnell in an email interview. "These more tailored programs have been conducted in tandem by the vendor providing the solution and by our internal community of power users familiar with the specific solution."
At present, the firm is building its own internal database of prompt engineering questions that work well with the various GenAI solutions. Over time, he expects the solutions themselves will recommend the best prompt engineering guidance to solve a particular problem.
"The best way to develop a degree of mastery is through education from outside educational vendors like AltaClaro, solution vendors like Thomson Reuters, and by learning from your colleagues," says McDonnell. "Prompt engineering is best approached as a team sport. Most importantly, you must dive in and use the program. Be creative and push your own limits and the program's limits."
Brendan Gutierrez McDonnell, K&L Gates
K&L Gates has training programs for beginners that cover the basics and nuanced programs for advanced users, but before jumping into prompt engineering, he believes the user should have a fundamental understanding of how a GenAI solution works and whether the information input into the program will remain confidential or not.
"The user [should] understand that the output needs to be verified, as large language models can make mistakes. Finally, the user needs to know how to vet the output. Once the user has these basics in order, she or he can start to learn how to prompt," says McDonnell. "The user should be given problems to solve so that the user can put his or her prompting to the test and then review the results with peers. Having a training partner like AltaClaro can make sure that the training experience is effective, as they are experts in building programs tailored to the way lawyers learn best."
Bottom Line
Organizations are approaching GenAI training differently, but they tend to agree it's necessary to jumpstart better prompting.
Where to get that training varies, and the sources are not mutually exclusive. One can hire expert help on-site, create their own programs, and invest in GenAI online courses, depending on the level of existing knowledge and the need to provide training that advances GenAI proficiency at varying levels of mastery.
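For readers who want to see Fernandes' six-part template in practice, here is a minimal sketch that assembles the six parts into a single prompt string. The function name, field names, and example values are illustrative assumptions rather than NILG.AI's actual implementation.

```python
# Minimal sketch of the six-part prompt template described above.
# Field names and example values are assumptions, for illustration only.
def build_prompt(ai_role: str, persona: str, task: str,
                 negative: str, context: str, details: str) -> str:
    return "\n".join([
        f"You are acting as: {ai_role}.",            # 1. role the AI should play
        f"Imitate the perspective of: {persona}.",   # 2. human role to imitate
        f"Task: {task}",                             # 3. specific, unambiguous task
        f"Do not: {negative}",                       # 4. negative prompt
        f"Context: {context}",                       # 5. context the AI lacks
        f"Details for this request: {details}",      # 6. task-specific details
    ])

prompt = build_prompt(
    ai_role="a summarizer",
    persona="a senior financial analyst",
    task="summarize the attached quarterly report in five bullet points",
    negative="answer if you are unsure; say 'unknown' instead",
    context="the company sells B2B networking hardware in EMEA",
    details="focus on revenue drivers and inventory risk",
)
print(prompt)
# When this string is sent to a model API, a lower temperature setting
# (as Beavis notes) trades creativity for more repeatable output.
```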
-
WWW.INFORMATIONWEEK.COM
Key Ways to Measure AI Project ROI
John Edwards, Technology Journalist & Author | February 18, 2025 | 7 Min Read | Image: Tithi Luadthong via Alamy Stock
Businesses of all types and sizes are launching AI projects, fearing that failing to embrace the powerful new technology will place them at a competitive disadvantage. Yet in their haste to jump on the AI bandwagon, many enterprises fail to consider one critical point: Will the project meet its expected efficiency or profitability goal?
Enterprises should consider several criteria to assess the ROI of individual AI projects, including alignment with strategic business goals, potential cost savings, revenue generation, and improvements in operational efficiencies, says Munir Hafez, senior vice president and CIO with credit monitoring firm TransUnion, in an email interview.
Besides relying on the standard criteria used for typical software projects -- such as scalability, technology sustainability, and talent -- AI projects must also account for the costs associated with maintaining accuracy and handling model drift over time, says Narendra Narukulla, vice president of quant analytics at JPMorganChase.
In an online interview, Narukulla points to the example of a retailer deploying a forecasting model designed to predict sales for a specific clothing brand. "After three months, the retailer notices that sales haven't increased and has launched a new sub-brand targeting Gen Z customers instead of millennials," he says. To improve the AI model's performance, an extra variable could be added to account for the new generation of customers purchasing at the store.
Effective Approaches
Assessing an AI project's ROI should start by ensuring that the initiative aligns with core business objectives. "Whether the goal is operational efficiency, enhanced customer engagement, or new revenue streams, the project must clearly tie into the organization's strategic priorities," says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte, in an online interview.
David Lindenbaum, head of Accenture Federal Services' GenAI center of excellence, recommends starting with a business assessment to identify and understand the AI project's end-user as well as the initiative's desired effect. "This will help refocus from a pure technical implementation into business impact," he says via email. Lindenbaum also advises continued AI project evaluation, focusing on a custom test case that will allow developers to accurately measure success and quantitatively understand how well the system is operating at any given time.
Ammanath believes that a comprehensive cost-benefit analysis is also essential, balancing tangible outcomes such as increased productivity with intangible ones, like improved customer satisfaction or brand perception. "Scalability and sustainability should be central considerations to ensure that AI initiatives deliver long-term value and can grow with organizational needs," she says. "Additionally, a robust risk management framework is vital to address challenges related to data quality, privacy, and ethical concerns, ensuring that projects are both resilient and adaptable."
Metrics Matter
Potential project ROI can be measured with metrics including projected cost savings, expected revenue increases, hours of productivity saved, and anticipated improvements in key performance indicators (KPIs) such as customer satisfaction scores, Hafez says.
Additionally, metrics such as time-to-market for new products or services, as well as any expected reduction in bugs or vulnerabilities revealed by a tool such as Amazon Q Developer, can provide insights into an AI project's potential benefits.
Leaders need to look past the technology to determine how investing in generative AI aligns with their overall strategy, Ammanath says. She notes that the metrics required to measure AI project ROI vary depending on the implementation stage. For example, to measure potential ROI, organizations should evaluate projected efficiency gains, estimated revenue growth, and strategic benefits, like improved customer loyalty or reduced downtime. "These forward-looking metrics offer insights into the initiative's promise and help leaders determine if they align with the business goals." Additionally, for current ROI, leaders should consider using metrics that look at realized outcomes, such as actual cost savings, revenue increases tied directly to AI initiatives, and improvements in key performance indicators like customer satisfaction or throughput.
Pulling the Plug
If an AI project consistently fails to meet expectations, terminate it in a calculated manner, Hafez recommends. "Document the lessons learned and the reasons for failure, reallocate resources to more promising initiatives, and leverage the knowledge gained to improve future projects."
Once a decision has been made to end a project, yet prior to officially announcing the venture's termination, Narukulla advises identifying alternative projects or roles for the now-idled AI team talent. "In light of the ongoing shortage of skilled professionals, ensuring a smooth transition for the team to new initiatives should be a priority," he says.
Narukulla adds that capturing key learnings from the terminated project should be a priority. "A thorough post-mortem analysis should be conducted to assess which strategies were successful, which aspects fell short, and what improvements can be made for future endeavors."
Narukulla believes that thoroughly documenting post-mortem insights can be invaluable for future reference. "By the time a similar issue arises, new models and additional data sources may offer innovative solutions," he explains. At that point, the project may be revived in a new and useful form.
Parting Thoughts
Establishing a strong governance framework for all ongoing AI projects is essential, Hafez says. "Further, a strong partnership with legal, compliance, and privacy teams can enhance success, particularly in regulated industries." He also suggests collaborating with external partners. "Leveraging their expertise can provide valuable insights and accelerate the AI journey."
When implemented and scaled properly, AI is far more than a technological tool; it's a strategic enabler of innovation and competitive advantage, Ammanath says. However, long-term success requires more than sophisticated algorithms -- it demands cultural transformation, emphasizing human collaboration, agility, and ethical foresight, she warns. "Organizations that thrive with AI establish clear governance frameworks, align business and technical teams, and prioritize long-term value creation over short-term gains."
As AI continues to advance and evolve, IT leaders have an unprecedented opportunity to align investments with enterprise-wide goals, Ammanath says.
"By approaching AI as a strategic lever rather than a standalone solution, organizations can position themselves at the forefront of innovation and value creation."Read more about:Cost of AIAbout the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Comments 0 Shares 0 Reviews