The Role of the 3-2-1 Backup Rule in Cybersecurity
Daniel Pearson, CEO, KnownHost | June 12, 2025

Cyber incidents are expected to cost the US $639 billion in 2025, and the latest estimates suggest the figure will keep climbing, reaching approximately $1.82 trillion in cybercrime costs by 2028. These numbers underline why businesses must build strong cybersecurity strategies to reduce their exposure. As technology evolves at a dramatic pace, businesses are increasingly dependent on digital infrastructure, exposing themselves to threats such as ransomware, accidental data loss, and corruption. Although the 3-2-1 backup rule dates back to 2009, the strategy has stayed relevant for businesses over the years, minimizing data loss under threat, and it will remain a crucial safeguard against major data loss in the years ahead.

What Is the 3-2-1 Backup Rule?

The 3-2-1 backup rule is a popular backup strategy that builds resilience against data loss. Keep three copies of your data: the original plus two backups. Store those copies on two different types of storage, such as the cloud and a local drive. The "one" in the rule stands for keeping one copy of your data off site, which completes the setup. This setup has been considered a gold standard in IT security, as it minimizes single points of failure and increases the chance of successful data recovery in the event of a cyber-attack.
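In practice, the rule amounts to disciplined copying. Below is a minimal sketch of a nightly 3-2-1 job in Python; it is not from the article, and the paths, the bucket name, and the use of the AWS CLI are illustrative assumptions, not a prescription.

```python
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

# Illustrative locations -- adjust to your environment.
SOURCE = Path("/data/business-critical")        # copy 1: the live data
EXTERNAL_DRIVE = Path("/mnt/external/backups")  # copy 2: a second medium on site
OFFSITE_BUCKET = "s3://example-backups"         # copy 3: the off-site "1"

stamp = datetime.now().strftime("%Y-%m-%d")

# Second copy, on a different medium than the source disk.
local_snapshot = EXTERNAL_DRIVE / stamp
shutil.copytree(SOURCE, local_snapshot, dirs_exist_ok=True)

# Third copy, off site (assumes the AWS CLI is installed and configured).
subprocess.run(
    ["aws", "s3", "sync", str(local_snapshot), f"{OFFSITE_BUCKET}/{stamp}"],
    check=True,
)
```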
Why Is This Rule Relevant in the Modern Cyber Threat Landscape?

Statistics show that in 2024, 80% of companies saw an increase in the frequency of cloud attacks. Many businesses assume that storing data in the cloud is enough, but it is certainly not failsafe: attackers can manipulate rapidly developing technology and AI capabilities, and because cloud infrastructure has grown just as quickly, cyber criminals actively target it, leaving businesses with no clear recovery option. More than ever, businesses need to invest in immutable backup solutions.

Common Backup Mistakes Businesses Make

A common misstep is keeping all backups on the same physical network. If malware gets in, it can quickly spread and encrypt both the primary data and the backups, wiping out everything in one go. Another issue is the lack of offline or air-gapped backups. Many businesses rely entirely on cloud-based or on-premises storage that's always connected, which means their recovery options could be compromised during an attack. Finally, one of the most overlooked yet crucial steps is testing backup restoration. A backup is only useful if it can actually be restored. Too often, companies skip regular testing, which leads to a harsh reality check when they discover, too late, that their backup data is corrupted or completely inaccessible after a breach.
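Restore testing can be automated cheaply. Here is a hedged sketch of one way to do it, assuming a manifest of checksums recorded at backup time; the helper names are invented for illustration.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 without loading it whole."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(backup_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Restore into a scratch directory and re-hash every file.

    `manifest` maps relative paths to checksums recorded at backup time.
    Returns the files that came back missing or corrupted.
    """
    failures = []
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restore"
        # Stand-in for your real restore procedure (tape, snapshot, cloud).
        shutil.copytree(backup_dir, restored)
        for rel_path, expected in manifest.items():
            candidate = restored / rel_path
            if not candidate.exists() or sha256(candidate) != expected:
                failures.append(rel_path)
    return failures
```

Run on a schedule, an empty return list is the evidence that the backup is actually restorable, not just present.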
How to Implement the 3-2-1 Backup Rule

To implement the 3-2-1 backup strategy as part of a robust cybersecurity framework, organizations should start by diversifying their storage methods. A resilient approach typically includes a mix of local storage, cloud-based solutions, and physical media such as external hard drives. From there, it's essential to incorporate technologies that support write-once, read-many (WORM) functionality. This means backups cannot be modified or deleted, even by administrators, providing an extra layer of protection against threats.
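The article names no product, but one common way to get WORM semantics is Amazon S3 Object Lock in compliance mode. A sketch using boto3, with an illustrative bucket name and assuming the default us-east-1 region:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-immutable-backups"  # illustrative name

# Object Lock must be enabled when the bucket is created; it cannot be
# bolted onto an existing bucket afterwards.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# COMPLIANCE mode means nobody -- administrators and the root account
# included -- can delete or overwrite locked object versions until the
# retention period expires, which is exactly the WORM property above.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```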
To further enhance resilience, organizations should make use of automation and AI-driven tools. These technologies can offer real-time monitoring, detect anomalies, and apply predictive analytics to maintain the integrity of backup data and flag unusual activity or failures in the process.
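Anomaly detection here can start far simpler than full AI. As a toy illustration, not the author's method, flagging a backup whose size strays sharply from recent history catches failed jobs and ransomware-inflated increments alike:

```python
from statistics import mean, stdev

def backup_size_suspicious(history: list[int], latest: int, z: float = 3.0) -> bool:
    """Flag a backup whose byte count strays far from recent history.

    A sudden shrink often means a job silently failed; a sudden jump can
    mean ransomware-encrypted files bloating the incremental diff.
    """
    if len(history) < 5:
        return False  # too little history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z

# Nightly sizes (GiB) for a week, then a suspiciously tiny run:
print(backup_size_suspicious([102, 104, 101, 105, 103, 104], 12))  # True
```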
Lastly, it's crucial to ensure your backup strategy aligns with relevant regulatory requirements, such as the UK GDPR or the CCPA in California. Compliance not only mitigates legal risk but also reinforces your commitment to data protection and operational continuity.

By blending the time-tested 3-2-1 rule with modern advances like immutable storage and intelligent monitoring, organizations can build a highly resilient backup architecture that strengthens their overall cybersecurity posture.

About the Author

Daniel Pearson is the CEO of KnownHost, a managed web hosting service provider. Pearson also serves as a dedicated board member and supporter of the AlmaLinux OS Foundation, a non-profit organization focused on advancing AlmaLinux OS, an open-source operating system derived from RHEL. His passion for technology extends beyond his professional endeavors, as he actively promotes digital literacy and empowerment. Pearson's entrepreneurial drive and extensive industry knowledge have solidified his reputation as a respected figure in the tech community.
Why Companies Need to Reimagine Their AI Approach
Ivy Grant, SVP of Strategy & Operations, Twilio | June 13, 2025

Ask technologists and enterprise leaders what they hope AI will deliver, and most will land on some iteration of the "T" word: transformation. No surprise: AI and its "cooler than you" cousin, generative AI (GenAI), have been hyped nonstop for the past 24 months. But therein lies the problem. Many organizations are rushing to implement AI without a grasp on the return on investment (ROI), leading to high spend and low impact. Without anchoring AI to clear friction points and acceleration opportunities, companies invite fatigue, anxiety, and competitive risk. Two-thirds of C-suite execs say GenAI has created tension and division within their organizations; nearly half say it's "tearing their company apart." Most (71%) report adoption challenges; more than a third call it a massive disappointment. While AI's potential is irrefutable, companies need to reject the narrative of AI as a standalone strategy or transformational savior. Its true power is as a catalyst to amplify what already works and surface what could. Here are three principles to make that happen.

1. Start with friction, not function

Many enterprises struggle with where to start when integrating AI. My advice: Start where the pain is greatest. Identify the processes that create the most friction and work backward from there. AI is a tool, not a solution. By mapping real pain points to AI use cases, you can steer investments toward the ripest fruit rather than simply the lowest-hanging.

For example, one of our top sources of customer pain was troubleshooting undeliverable messages, which forced users to sift through error code documentation. To solve this, an AI assistant was introduced to detect anomalies, explain causes in natural language, and guide customers toward resolution. We achieved a 97% real-time resolution rate through a blend of conversational AI and live support. Most companies have long-standing friction points that support teams routinely explain, or that you've developed organizational calluses over: problems considered "just the cost of doing business." GenAI allows leaders to revisit these areas and reimagine what's possible.
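Twilio's actual assistant isn't described in detail, but the core pattern, translating opaque delivery error codes into plain-language causes and next steps, is easy to sketch. The mapping below is illustrative, not Twilio's real error catalogue:

```python
# Hypothetical mapping -- illustrative codes and guidance only.
ERROR_GUIDE = {
    30003: ("Unreachable destination handset",
            "Ask the customer to confirm the device is powered on and in coverage."),
    30006: ("Landline or unreachable carrier",
            "Verify the number can receive SMS at all before retrying."),
    30007: ("Message filtered by the carrier",
            "Review content for spam triggers and register the sending number."),
}

def explain_failure(error_code: int) -> str:
    """Turn a raw delivery error code into a cause plus a suggested next step."""
    cause, next_step = ERROR_GUIDE.get(
        error_code, ("Unknown failure", "Escalate to live support.")
    )
    return f"Delivery failed: {cause}. Suggested fix: {next_step}"

print(explain_failure(30007))
```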
2. The need for (dual) speed

We hear stories of leaders pushing an "all or nothing" version of AI transformation: Use AI to cut functional headcount or die. Rather than leading with a "stick" through wholesale transformation mandates or threats to budgets, we must recognize AI implementation as a fundamental culture change. Just as you wouldn't expect to transform your company culture overnight by edict, it's unreasonable to expect something different from your AI transformation.

Some leaders have a tendency to move faster than the innovation ability or comfort level of their people. Most functional leads aren't being obstinate in their slower adoption of AI tools or in their long-held beliefs about how to run a process or assess risks. We hired these leaders for their decades of experience in "what good looks like" and deep expertise in incremental improvements; then we expect them to suddenly define a futuristic vision that challenges their own beliefs. As executive leaders, we must give grace, space, and plenty of "carrots" -- incentives, training, and support resources -- to help them reimagine complex workflows with AI. And we must recognize that AI can make progress in ways that may not immediately create cost efficiencies, such as operational improvements that require data cleansing, deep analytics, forecasting, dynamic pricing, and signal sensing. These aren't the sexy parts of AI, but they're the types of issues that require the superhuman intelligence and complex problem-solving AI was made for.
3. A flywheel of acceleration

The other transformation AI should support is creating faster and broader "test and learn" cycles. AI implementation is not a linear process with a start here and an end there. Organizations that want to leverage AI as a competitive advantage should establish use cases where AI can break down company silos and act as a catalyst to identify the next opportunity -- each win pointing to the next, as a flywheel of acceleration. This flywheel builds on accumulated learnings, turning small successes into larger wins while avoiding the costly AI disasters that come from rushed implementation.

For example, at Twilio we are building a customer intelligence platform that analyzes thousands of conversations to identify patterns and drive insights. If we see multiple customers mention a competitor's pricing, it could signal a take-out campaign. What once took weeks to recognize and escalate can now be done in near real-time and used for highly coordinated activations across marketing, product, sales, and other teams. With every AI acceleration win, we uncover more places to improve hand-offs, activation speed, and business decision-making. That flywheel of innovation is how true AI transformation begins to drive impactful business outcomes.
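The conversation-analysis example reduces to a simple signal. As a hedged sketch with a hypothetical record shape, counting how many distinct accounts mention a competitor inside a recent window approximates the "take-out campaign" alarm described above:

```python
from datetime import datetime, timedelta

def competitor_mention_spike(conversations, competitor: str,
                             window_days: int = 7, threshold: int = 3) -> bool:
    """True when several *distinct* accounts mention a competitor recently.

    `conversations` is an iterable of (timestamp, account_id, text) tuples;
    the record shape is a hypothetical stand-in for a real platform's data.
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    accounts = {acct for ts, acct, text in conversations
                if ts >= cutoff and competitor.lower() in text.lower()}
    return len(accounts) >= threshold

# Hypothetical records: three accounts raise the same name this week.
now = datetime.now()
convos = [(now, "acct-1", "AcmeComms quoted us 20% less"),
          (now, "acct-2", "we are evaluating AcmeComms pricing"),
          (now, "acct-3", "AcmeComms reached out again")]
print(competitor_mention_spike(convos, "AcmeComms"))  # True
```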
Ideas to Fuel Your AI Strategy

Organizations can accelerate their AI implementations through these simple shifts in approach:

- Revisit your long-standing friction points, both customer-facing and internal, across your organization -- particularly the ones you thought were "the cost of doing business."
- Don't just look for where AI can reduce manual processes; find the highly complex problems and start experimenting.
- Support your functional experts with AI-driven training, resources, tools, and incentives to help them challenge their long-held beliefs about what works for the future.
- Treat AI implementation as a cultural change that requires time, experimentation, learning, and carrots (not just sticks).
- Recognize that transformation starts with a flywheel of acceleration, where each new experiment can lead to the next big discovery.

The most impactful AI implementations don't rush transformation; they strategically accelerate core capabilities and unlock new ones to drive measurable change.

About the Author

Ivy Grant is Senior Vice President of Strategy & Operations at Twilio, where she leads strategic planning, enterprise analytics, and M&A integration, and is responsible for driving transformational initiatives that enable Twilio to continuously improve its operations. Prior to Twilio, Ivy's career balanced senior roles in strategy consulting at McKinsey & Company, Edelman, and PwC with customer-centric operational roles at Walmart, Polo Ralph Lauren, and tech startup Eversight Labs. She loves solo international travel, hugging exotic animals, and boxing. Ivy has an MBA from NYU's Stern School of Business and a BS in Applied Economics from Cornell University.
CERT Director Greg Touhill: To Lead Is to Serve
Greg Touhill, director of the Software Engineering Institute's (SEI's) Computer Emergency Response Team (CERT) division, is an atypical technology leader. For one thing, he's held tech and other leadership positions spanning the US Air Force, the US government, the private sector, and now SEI's CERT. More importantly, he's been a major force in the cybersecurity realm, making the world a safer place and even saving lives.

Touhill earned a bachelor's degree from the Pennsylvania State University, a master's degree from the University of Southern California, and a master's degree from the Air War College; he was a senior executive fellow at the Harvard University Kennedy School of Government and completed executive education studies at the University of North Carolina.

"I was a student intern at Carnegie Mellon, but I was going to college at Penn State and studying chemical engineering. As an Air Force ROTC scholarship recipient, I knew I was going to become an Air Force officer but soon realized that I didn't necessarily want to be a chemical engineer in the Air Force," says Touhill. "Because I passed all the mathematics, physics, and engineering courses, I ended up becoming a communications, electronics, and computer systems officer in the Air Force. I spent 30 years, one month and three days on active duty in the United States Air Force, eventually retiring as a brigadier general and having done many different types of jobs that were available to me within and even beyond my career field."

Specifically, he was an operational commander at the squadron, group, and wing levels. For example, as a colonel, Touhill served as director of command, control, communications and computers (C4) for the United States Central Command Forces; then he was appointed chief information officer and director, communications and information at Air Mobility Command. Later, he served as commander, 81st Training Wing at Keesler Air Force Base, where he was promoted to brigadier general and commanded over 12,500 personnel. After that, he served as the senior defense officer and US defense attaché at the US Embassy in Kuwait, before concluding his military career as the chief information officer and director, C4 systems at the US Transportation Command, one of 10 US combatant commands, where he and his team were awarded the NSA Rowlett Award for the best cybersecurity program in the government.

While in the Air Force, Touhill received numerous awards and decorations, including the Bronze Star medal and the Air Force Science and Engineering Award. He is the only three-time recipient of the USAF C4 Professionalism Award.

"I got to serve at major combatant commands, work with coalition partners from many different countries and represented the US as part of a diplomatic mission to Kuwait for two years as the senior defense official at a time when America was withdrawing forces out of Iraq. I also led the negotiation of a new bilateral defense agreement with the Kuwaitis," says Touhill. "Then I was recruited to continue my service and was asked to serve as the deputy assistant secretary of cybersecurity and communications at the Department of Homeland Security, where I ran the operations of what is now known as the Cybersecurity and Infrastructure Security Agency. I was there at a pivotal moment because we were building up the capacity of that organization and setting the stage for it to become its own agency."

While Touhill was at DHS, there were many noteworthy breaches, including the infamous US Office of Personnel Management (OPM) breach.
Those events led to President Obama's visit to the National Cybersecurity and Communications Integration Center. "I got to brief the president on the state of cybersecurity, what we had seen with the OPM breach and some other deficiencies," says Touhill. "I was on the federal CIO council as the cybersecurity advisor, since I'd been a federal CIO before, and I got to conclude my federal career by being the first United States government chief information security officer. From there, I pivoted to industry, but I also got to return to Carnegie Mellon as a faculty member at Carnegie Mellon's Heinz College, where I've been teaching since January 2017."

Touhill has been involved in three startups, two of which were successfully acquired. He also served on three Fortune 100 advisory boards and on the Information Systems Audit and Control Association (ISACA) board, eventually becoming its chair for a term during the seven years he served there. Touhill just celebrated his fourth year at CERT, which he considers the pinnacle of the cybersecurity profession and of everything he's done to date.

"Over my career I've led teams that have done major software builds in the national security space. I've also been the guy who's pulled cables and set up routers, hubs and switches, and I've been a system administrator. I've done everything that I could do from the keyboard up all the way up to the White House," says Touhill. "For 40 years, the Software Engineering Institute has been leading the world in secure by design, cybersecurity, software engineering, artificial intelligence and engineering, pioneering best practices, and figuring out how to make the world a safer, more secure and trustworthy place. I've had a hand in the making of today's modern military and government information technology environment, beginning as a 22-year-old lieutenant, and hope to inspire the next generation to do even better."

What 'Success' Means

Many people would be satisfied with a career as a brigadier general, a tech leader, the White House's first anything, or working at CERT, let alone running it. Touhill has spent his entire career making the world a safer place, so it's not surprising that he considers his greatest achievement saving lives.

"In the Middle East and Iraq, convoys were being attacked with improvised explosive devices. There were also 'direct fire' attacks where people are firing weapons at you and indirect fire attacks where you could be in the line of fire," says Touhill. "The convoys were using SINCGARS line-of-sight walkie-talkies for communications that are most effective when the ground is flat, and Iraq is not flat. As a result, our troops were at risk of not having reliable communications while under attack. As my team brainstormed options to remedy the situation, one of my guys found some technology, about the size of an iPhone, that could convert a radio signal, which is basically a waveform, into a digital pulse I could put on a dedicated network to support the convoy missions."

For $11 million, Touhill and his team quickly architected, tested, and fielded the Radio over IP network (aka "Ripper Net") that had a 99% reliability rate anywhere in Iraq. Better still, convoys could communicate over the network using any radios. That solution saved a minimum of six lives. In one case, the hospital doctor said if the patient had arrived five minutes later, he would have died.

Sage Advice

Anyone who has ever spent time in the military or in a military family knows that soldiers are very well disciplined, or they wash out.
Other traits include being physically fit, mentally fit, and achieving balance in life. That balance is difficult to achieve in combat, but it's still a necessity.

"I served three and a half years down range in combat operations. My experience taught me you could be doing 20-hour days for a year or two on end. If you haven't built a good foundation of being disciplined and fit, it impacts your ability to maintain presence in times of stress, and CISOs work in stressful situations," says Touhill. "Staying fit also fortifies you for the long haul, so you don't get burned out as fast."

Another necessary skill is the ability to work well with others. "Cybersecurity is an interdisciplinary practice. One of the great joys I have as CERT director is the wide range of experts in many different fields that include software engineers, computer engineers, computer scientists, data scientists, mathematicians and physicists," says Touhill. "I have folks who have business degrees and others who have philosophy degrees. It's really a rich community of interests all coming together toward that common goal of making the world a safer, more secure and more trusted place in the cyber domain. We're kind of like the cyber neighborhood watch for the whole world."

He also says that money isn't everything, having taken a pay cut to go from being an Air Force brigadier general to the deputy assistant secretary at the Department of Homeland Security. "You'll always do well if you pick the job that matters most. That's what I did, and I've been rewarded every step," says Touhill.

The biggest challenge he sees is the complexity of cyber systems and software, which can have second-, third-, and fourth-order effects. "Complexity raises the cost of the attack surface, increases the attack surface, raises the number of vulnerabilities and exploits human weaknesses," says Touhill. "The No. 1 thing we need to be paying attention to is privacy when it comes to AI, because AI can unearth and discover knowledge from data we already have. While it gives us greater insights at greater velocities, we need to be careful that we take precautions to better protect our privacy, civil rights and civil liberties."
#cert #director #greg #touhill #leadCERT Director Greg Touhill: To Lead Is to ServeGreg Touhill, director of the Software Engineering’s Institute’sComputer Emergency Response Teamdivision is an atypical technology leader. For one thing, he’s been in tech and other leadership positions that span the US Air Force, the US government, the private sector and now SEI’s CERT. More importantly, he’s been a major force in the cybersecurity realm, making the world a safer place and even saving lives. Touhill earned a bachelor’s degree from the Pennsylvania State University, a master’s degree from the University of Southern California, a master’s degree from the Air War College, was a senior executive fellow at the Harvard University Kennedy School of Government and completed executive education studies at the University of North Carolina. “I was a student intern at Carnegie Mellon, but I was going to college at Penn State and studying chemical engineering. As an Air Force ROTC scholarship recipient, I knew I was going to become an Air Force officer but soon realized that I didn’t necessarily want to be a chemical engineer in the Air Force,” says Touhill. “Because I passed all the mathematics, physics, and engineering courses, I ended up becoming a communications, electronics, and computer systems officer in the Air Force. I spent 30 years, one month and three days on active duty in the United States Air Force, eventually retiring as a brigadier general and having done many different types of jobs that were available to me within and even beyond my career field.” Related:Specifically, he was an operational commander at the squadron, group, and wing levels. For example, as a colonel, Touhill served as director of command, control, communications and computersfor the United States Central Command Forces, then he was appointed chief information officer and director, communications and information at Air Mobility Command. Later, he served as commander, 81st Training Wing at Kessler Air Force Base where he was promoted to brigadier general and commanded over 12,500 personnel. After that, he served as the senior defense officer and US defense attaché at the US Embassy in Kuwait, before concluding his military career as the chief information officer and director, C4 systems at the US Transportation Command, one of 10 US combatant commands, where he and his team were awarded the NSA Rowlett Award for the best cybersecurity program in the government. While in the Air Force, Touhill received numerous awards and decorations including the Bronze Star medal and the Air Force Science and Engineering Award. He is the only three-time recipient of the USAF C4 Professionalism Award. Related:Greg Touhill“I got to serve at major combatant commands, work with coalition partners from many different countries and represented the US as part of a diplomatic mission to Kuwait for two years as the senior defense official at a time when America was withdrawing forces out of Iraq. I also led the negotiation of a new bilateral defense agreement with the Kuwaitis,” says Touhill. “Then I was recruited to continue my service and was asked to serve as the deputy assistant secretary of cybersecurity and communications at the Department of Homeland Security, where I ran the operations of what is now known as the Cybersecurity and Infrastructure Security Agency. 
I was there at a pivotal moment because we were building up the capacity of that organization and setting the stage for it to become its own agency.” While at DHS, there were many noteworthy breaches including the infamous US Office of People Managementbreach. Those events led to Obama’s visit to the National Cybersecurity and Communications Integration Center. “I got to brief the president on the state of cybersecurity, what we had seen with the OPM breach and some other deficiencies,” says Touhill. “I was on the federal CIO council as the cybersecurity advisor to that since I’d been a federal CIO before and I got to conclude my federal career by being the first United States government chief information security officer. From there, I pivoted to industry, but I also got to return to Carnegie Mellon as a faculty member at Carnegie Mellon’s Heinz College, where I've been teaching since January 2017.” Related:Touhill has been involved in three startups, two of which were successfully acquired. He also served on three Fortune 100 advisory boards and on the Information Systems Audit and Control Association board, eventually becoming its chair for a term during the seven years he served there. Touhill just celebrated his fourth year at CERT, which he considers the pinnacle of the cybersecurity profession and everything he’s done to date. “Over my career I've led teams that have done major software builds in the national security space. I've also been the guy who's pulled cables and set up routers, hubs and switches, and I've been a system administrator. I've done everything that I could do from the keyboard up all the way up to the White House,” says Touhill. “For 40 years, the Software Engineering Institute has been leading the world in secure by design, cybersecurity, software engineering, artificial intelligence and engineering, pioneering best practices, and figuring out how to make the world a safer more secure and trustworthy place. I’ve had a hand in the making of today’s modern military and government information technology environment, beginning as a 22-year-old lieutenant, and hope to inspire the next generation to do even better.” What ‘Success’ Means Many people would be satisfied with their careers as a brigadier general, a tech leader, the White House’s first anything, or working at CERT, let alone running it. Touhill has spent his entire career making the world a safer place, so it’s not surprising that he considers his greatest achievement saving lives. “In the Middle East and Iraq, convoys were being attacked with improvised explosive devices. There were also ‘direct fire’ attacks where people are firing weapons at you and indirect fire attacks where you could be in the line of fire,” says Touhill. “The convoys were using SINCGARS line-of-site walkie-talkies for communications that are most effective when the ground is flat, and Iraq is not flat. As a result, our troops were at risk of not having reliable communications while under attack. As my team brainstormed options to remedy the situation, one of my guys found some technology, about the size of an iPhone, that could covert a radio signal, which is basically a waveform, into a digital pulse I could put on a dedicated network to support the convoy missions.” For million, Touhill and his team quickly architected, tested, and fielded the Radio over IP networkthat had a 99% reliability rate anywhere in Iraq. Better still, convoys could communicate over the network using any radios. That solution saved a minimum of six lives. 
In one case, the hospital doctor said if the patient had arrived five minutes later, he would have died. Sage Advice Anyone who has ever spent time in the military or in a military family knows that soldiers are very well disciplined, or they wash out. Other traits include being physically fit, mentally fit, and achieving balance in life, though that’s difficult to achieve in combat. Still, it’s a necessity. “I served three and a half years down range in combat operations. My experience taught me you could be doing 20-hour days for a year or two on end. If you haven’t built a good foundation of being disciplined and fit, it impacts your ability to maintain presence in times of stress, and CISOs work in stressful situations,” says Touhill. “Staying fit also fortifies you for the long haul, so you don’t get burned out as fast.” Another necessary skill is the ability to work well with others. “Cybersecurity is an interdisciplinary practice. One of the great joys I have as CERT director is the wide range of experts in many different fields that include software engineers, computer engineers, computer scientists, data scientists, mathematicians and physicists,” says Touhill. “I have folks who have business degrees and others who have philosophy degrees. It's really a rich community of interests all coming together towards that common goal of making the world a safer, more secure and more trusted place in the cyber domain. We’re are kind of like the cyber neighborhood watch for the whole world.” He also says that money isn’t everything, having taken a pay cut to go from being an Air Force brigadier general to the deputy assistant secretary of the Department of Homeland Security . “You’ll always do well if you pick the job that matters most. That’s what I did, and I’ve been rewarded every step,” says Touhill. The biggest challenge he sees is the complexity of cyber systems and software, which can have second, third, and fourth order effects. “Complexity raises the cost of the attack surface, increases the attack surface, raises the number of vulnerabilities and exploits human weaknesses,” says Touhill. “The No. 1 thing we need to be paying attention to is privacy when it comes to AI because AI can unearth and discover knowledge from data we already have. While it gives us greater insights at greater velocities, we need to be careful that we take precautions to better protect our privacy, civil rights and civil liberties.” #cert #director #greg #touhill #leadWWW.INFORMATIONWEEK.COMCERT Director Greg Touhill: To Lead Is to ServeGreg Touhill, director of the Software Engineering’s Institute’s (SEI’s) Computer Emergency Response Team (CERT) division is an atypical technology leader. For one thing, he’s been in tech and other leadership positions that span the US Air Force, the US government, the private sector and now SEI’s CERT. More importantly, he’s been a major force in the cybersecurity realm, making the world a safer place and even saving lives. Touhill earned a bachelor’s degree from the Pennsylvania State University, a master’s degree from the University of Southern California, a master’s degree from the Air War College, was a senior executive fellow at the Harvard University Kennedy School of Government and completed executive education studies at the University of North Carolina. “I was a student intern at Carnegie Mellon, but I was going to college at Penn State and studying chemical engineering. 
-
CIO Chaos Mastery: Lessons from Vertiv's Bhavik Rao
Few roles evolve as quickly as that of the modern CIO. A great way to prepare for a future that is largely unknown is to build your adaptability skills through diverse work experiences, says Bhavik Rao, CIO for the Americas at Vertiv. Learn from your wins and your losses and carry on. Stay free of comfort zones and run toward the chaos: leaders are born of challenges, not of comfort.

Bhavik shares what he's facing now, how he's navigating it, and the hard-won lessons that helped shape his approach to IT leadership. Here's what he had to say:

What has your career path looked like so far?

I actually started my career as a techno-functional consultant working with the public sector. That early experience gave me a solid grounding in both the technical and process sides of enterprise systems. From there, I moved into consulting, which really opened up my world. I had the opportunity to work across multiple industries, leading everything from mobile app development and eCommerce deployments to omnichannel initiatives, data platforms, ERP rollouts, and ultimately large-scale digital transformation and IT strategy programs. It was fast-paced, challenging, and incredibly rewarding. That diversity shaped the way I think today. I learned how to adapt quickly, connect dots across domains, and communicate with everyone from developers to CXOs. Eventually, that path led me to Vertiv, where I now serve as the CIO for the Americas, in addition to leading a couple of global towers, such as data/AI and engineering systems. I've been fortunate to lead initiatives that drive operational efficiency, scale GenAI adoption, and turn technology into a true business enabler.

What are the highlights along your career path?

There have been several defining moments, both wins and challenges, that have shaped how I lead today. One of the most pivotal chapters has been my time at Vertiv. I joined when the company was still owned by private equity. It was an intense, roll-up-your-sleeves kind of environment. Then, in 2020, we went public -- a huge milestone. But just as we were ramping up our digital transformation, COVID hit, and with it came massive supply chain disruptions. In the middle of all that chaos, I was asked to take over a large-scale transformation program that was struggling.

It wasn't easy. There were legacy challenges, resistance to change, and real execution pressure. But we rallied, restructured the program, and launched it. That experience taught me a lot about leading under pressure, aligning teams around outcomes, and staying focused even when everything feels like it's shifting.

Another major learning moment was earlier in my career, when I lost a large national account I'd spent over seven years building. That was a tough one, but it taught me resilience. I learned not to attach my identity to any one outcome and to keep moving forward with purpose. Then, there are the moments of creation, like launching VeGA, our internal GenAI platform at Vertiv. Seeing it go from idea to impact, with thousands of users and 100+ applications, has been incredibly energizing. It reminded me how powerful it is when innovation meets execution. I've also learned the power of being a "player-coach." I don't believe in leading from a distance. I get involved, understand the challenges on the ground, and then help teams move forward together.

What's your vision for the future of sovereign AI?

For me, sovereign AI isn't just a regulatory checkbox; it's about strategic autonomy.
At our company, we are trying to be very intentional about how we scale AI responsibly across our global footprint. So, when I think about sovereign AI, I define it as the ability to control how, where, and why AI is built and deployed, with full alignment to your business needs, risk posture, and data boundaries.

I've seen firsthand how AI becomes a competitive advantage only when you have governance, infrastructure flexibility, and contextual intelligence built in. Our work with VeGA, for example, has shown that employees adopt AI much faster when it's embedded into secure, business-aligned workflows and not just bolted on from the outside.

For CIOs, the shift to sovereign AI means:
- Designing AI infrastructure that can flex, whether it's hosted internally, cloud-based, or hybrid
- Building internal AI fluency so your teams aren't fully reliant on black-box solutions
- Creating a framework for trust and explainability, especially as AI touches regulated and legal processes

It's not about doing everything in-house, but it is about knowing what's mission-critical to control. In my view, sovereign AI is less about isolation and more about intentional ownership.
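Rao's list of what sovereign AI means for CIOs can be read as a deployment policy: workloads that touch sensitive data stay on infrastructure you control, while lower-risk workloads can flex outward. The sketch below is a minimal illustration of that pattern under stated assumptions; the classifications, target names, and policy table are hypothetical examples, not Vertiv's or VeGA's actual architecture.

```python
# Hypothetical sketch: routing AI workloads by data sensitivity, so that
# what is "mission-critical to control" stays in-house. Classifications,
# targets, and the policy table are invented for illustration.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_classification: str  # e.g., "public", "internal", or "regulated"

# The "mix" of control: the most sensitive tiers map to the most controlled targets.
DEPLOYMENT_POLICY = {
    "regulated": "on_premises",   # full ownership of data boundaries
    "internal": "private_cloud",  # flexible, but within our own tenancy
    "public": "managed_cloud",    # external providers are acceptable here
}

def deployment_target(workload: Workload) -> str:
    """Pick where a workload may run; unknown tiers default to the most controlled."""
    return DEPLOYMENT_POLICY.get(workload.data_classification, "on_premises")

if __name__ == "__main__":
    print(deployment_target(Workload("contract-review-assistant", "regulated")))
    # -> on_premises
```

The design point is the default: anything unclassified falls back to the most controlled tier, which is roughly what "intentional ownership" implies in practice.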
What do you do for fun or to relax?

Golf is my go-to. It keeps me grounded and humble! It's one of those games that's as much about mindset as it is about mechanics. I try to work out regularly when I am not traveling for work. I also enjoy traveling with my family and listening to podcasts.

What advice would you give to young people considering a leadership path in IT?

Be curious, stay hands-on, don't rush the title, and focus on impact. Learn the business, not just the tech. Some of the best technologists I've worked with are the ones who understand how a supply chain works or how a sale actually closes. Also, don't be afraid to take on messy, undefined problems. Run toward the chaos. That's where leadership is born. And finally, surround yourself with people smarter than you. Build teams that challenge you. That's where real growth happens.
-
Racing Yacht CTO Sails to Success
John Edwards, Technology Journalist & Author -- June 5, 2025 -- 4 Min Read
SailGP Australia, USA, and Great Britain racing on San Francisco Bay, California (Dannaphotos via Alamy Stock)

Warren Jones is CTO at SailGP, the organizer of what he describes as the world's most exciting race on water. The event features high-tech F50 boats that speed across the waves at 100 kilometers per hour (62 miles per hour). Working in cooperation with Oracle, Jones focuses on innovative solutions for remote broadcast production, data management and distribution, and a newly introduced fan engagement platform. He also leads the team that won an IBC Innovation Award for its ambitious and ground-breaking remote production strategy.

Among the races Jones organizes is the Rolex SailGP Championship, a global competition featuring national teams battling each other in identical high-tech, high-speed 50-foot foiling catamarans at celebrated venues around the world. The event attracts the sport's top athletes, with national pride, personal glory, and bonus prize money of $12.8 million at stake. Jones also supports event and office infrastructures in London and New York, and at each of the global grand prix events over the course of the season. Prior to joining SailGP, he was IT leader at the America's Cup Event Authority and Oracle Racing. In an online interview, Jones discusses the challenges he faces in bringing reliable data services to event vessels, as well as to onshore officials and fans.

What's the biggest challenge you've faced during your tenure?

One of the biggest challenges I faced was ensuring real-time data transmission from our high-performance F50 foiling catamarans to teams, broadcasters, and fans worldwide. SailGP relies heavily on technology to deliver high-speed racing insights, but ensuring seamless connectivity across different venues with variable conditions was a significant hurdle.

What caused the problem?

The challenge arose due to a combination of factors. The high speeds and dynamic nature of the boats made data capture and transmission difficult. Varying network infrastructure at different race locations created connectivity issues. The need to process and visualize massive amounts of data in real time placed immense pressure on our systems.

How did you resolve the problem?

We tackled the issue by working with T-Mobile and Ericsson on a robust and adaptive telemetry system capable of transmitting data with minimal latency over 5G. Deploying custom-built race management software that could process and distribute data efficiently [was also important]. Working closely with our global partner Oracle, we optimized Cloud Compute with the Oracle Cloud.
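As a rough illustration of the telemetry pattern Jones describes -- small, frequent readings pushed over the network with latency kept measurable -- the sketch below timestamps each sample at the sender so the receiver can estimate transport delay. The field names, JSON-over-UDP encoding, and port are assumptions for illustration only; SailGP's actual system is not documented here.

```python
# Illustrative telemetry sketch (hypothetical fields and transport, not
# SailGP's actual system): timestamped samples over UDP so receivers can
# both consume the data and estimate delivery latency.
import json
import socket
import time

TELEMETRY_ADDR = ("127.0.0.1", 9999)  # hypothetical collector endpoint

def send_reading(sock, boat_id, speed_knots, heading_deg):
    """Send one telemetry sample; the send timestamp makes latency visible."""
    msg = {
        "boat": boat_id,
        "speed_knots": speed_knots,
        "heading_deg": heading_deg,
        "sent_at": time.time(),
    }
    sock.sendto(json.dumps(msg).encode("utf-8"), TELEMETRY_ADDR)

def receive_reading(sock):
    """Receive one sample and compute one-way delay (assumes synced clocks)."""
    data, _ = sock.recvfrom(4096)
    msg = json.loads(data)
    msg["latency_ms"] = (time.time() - msg["sent_at"]) * 1000
    return msg

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(TELEMETRY_ADDR)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_reading(tx, boat_id="AUS", speed_knots=52.4, heading_deg=214.0)
    print(receive_reading(rx))  # includes a latency_ms estimate
```

UDP suits this pattern because a late telemetry sample is worthless: it is better to drop a reading than to stall the stream retransmitting it.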
What would have happened if the problem wasn't quickly resolved?

Spectator experience would have suffered. Teams rely on real-time analytics for performance optimization, and broadcasters need accurate telemetry for storytelling. A failure here could have resulted in delays, miscommunication, and a diminished fan experience.

How long did it take to resolve the problem?

It was an ongoing challenge that required continuous innovation. The initial solution took several months to implement, but we've refined and improved it over multiple seasons as technology advances and new challenges emerge.

Who supported you during this challenge?

This was a team effort -- our partners Oracle, T-Mobile, and Ericsson, along with our in-house engineers, data scientists, and IT specialists, all working closely. The support from SailGP's leadership was also crucial in securing the necessary resources.

Did anyone let you down?

Rather than seeing it as being let down, I'd say there were unexpected challenges with some technology providers who underestimated the complexity of what we needed. However, we adapted by seeking alternative solutions and working collaboratively to overcome the hurdles.

What advice do you have for other leaders who may face a similar challenge?

- Embrace adaptability. No matter how well you plan, unforeseen challenges will arise, so build flexible solutions.
- Leverage partnerships. Collaborate with the best in the industry to ensure you have the expertise needed.
- Stay ahead of technology trends. The landscape is constantly evolving; being proactive rather than reactive is key.
- Prioritize resilience. Build redundancy into critical systems to ensure continuity even in the face of disruptions.

Is there anything else you would like to add?

SailGP is as much a technology company as it is a sports league. The intersection of innovation and competition drives us forward, and solving challenges like these is what makes this role both demanding and incredibly rewarding.

About the Author
John Edwards, Technology Journalist & Author
John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
-
Pay for Performance -- How Do You Measure It?
More enterprises have moved to pay-for-performance salary and promotion models that measure progress toward goals -- but how do you measure goals for a maintenance programmer who barrels through a request backlog but delivers marginal value for the business, or for a business analyst whose success is predicated on forging intangibles like trust and cooperation with users so things can get done? It's an age-old question facing companies, now that 77% of them use some type of pay-for-performance model.

What are some popular pay-for-performance use cases?
- A factory doing piece work that pays employees based upon the number of items they assemble.
- A call center that pays agents based on how many calls they complete per day.
- A bank teller who gets rewarded for how many customers they sign up for credit cards.
- An IT project team that gets a bonus for completing a major project ahead of schedule.

The IT example differs from the others because it depends on team rather than individual execution, but there is nevertheless something tangible to measure. The other use cases are more clear-cut -- although they don't account for pieces in the plant that were poorly assembled in haste to make quota and had to be reworked, or a call center agent who pushes calls off to someone else so they can end their calls in six minutes or less, or the teller who signs up X number of customers for credit cards even though two-thirds of them never use the card they signed up for.

In short, there are flaws in pay-for-performance models, just as there are in the other types of compensation models that organizations use. So, what's the best path for CIOs who want to implement pay for performance in IT? One approach is to measure pay for performance based upon four key elements: hard results, effort, skill, and communications. The mix of these elements will vary depending on the type of position each IT staff member performs. Here are two examples of pay for performance by position:

1. Computer maintenance programmers and help desk specialists

Historically, IT departments have used hard numbers like how many open requests a computer maintenance programmer has closed, or how many calls a help desk employee has solved. There is merit in using hard results, and hard results should be factored into performance reviews for these individuals -- but hard numbers don't tell the whole story. For example, how many times has a help desk agent gone the extra mile with a difficult user or software bug, taking the time to see the entire process through until it is thoroughly solved? If the issue was of a global nature, did the help desk agent follow up by letting others who use the application know that a bug was fixed? For the maintenance programmer who has completed the most open requests, which of those requests really solved a major business pain point? For both help desk and maintenance programming employees, were the changes and fixes properly documented and communicated to everyone with a need to know? And did these employees demonstrate the skills needed to solve their issues?

It's difficult to capture hard results on elements like effort, communication, and skills, but one way to go about it is to survey user departments on individuals' levels of service and effectiveness. From there, it's up to IT managers to determine the "mix" of hard results, effort, communication, and skills on which the employee will be evaluated, and to communicate upfront to the employee what the pay-for-performance assessment will be based on. A weighted blend like this is straightforward to make concrete, as the sketch below shows.
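Here is a minimal sketch of the four-element blend, assuming hypothetical roles, weights, and 0-100 element scores; the intangible inputs would presumably come from the user-department surveys described above.

```python
# Minimal sketch of blending hard results, effort, skill, and communications
# into one pay-for-performance score, with a per-role "mix" of weights.
# The roles, weights, and 0-100 scales are hypothetical, not a prescribed formula.
ROLE_WEIGHTS = {
    # Weights for each role must sum to 1.0.
    "maintenance_programmer": {"hard_results": 0.5, "effort": 0.2, "skill": 0.2, "communications": 0.1},
    "business_analyst":       {"hard_results": 0.2, "effort": 0.3, "skill": 0.2, "communications": 0.3},
}

def performance_score(role: str, scores: dict) -> float:
    """Weighted blend of 0-100 element scores for the given role."""
    weights = ROLE_WEIGHTS[role]
    return sum(weights[element] * scores[element] for element in weights)

# Example: a programmer who closes many tickets but communicates poorly.
print(performance_score("maintenance_programmer",
                        {"hard_results": 90, "effort": 70, "skill": 80, "communications": 40}))
# -> 79.0
```

The per-role weight table is exactly the "mix" the article describes: hard results dominate for a maintenance programmer, while communications carry far more weight for a business analyst.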
2. Business analysts and trainers

Business analysts and trainers are difficult to quantify in pay-for-performance models because so much of their success depends upon other people. A business analyst can know everything there is to know about a particular business area and its systems, but if the analyst is working with unresponsive users, or lacks the soft skills needed to communicate with users, pay for performance can't be based upon the technology skillset alone.

IT trainers face a somewhat different dilemma when it comes to performance evaluation: they can produce the training that new staff members need before staff is deployed on key projects, but if a project gets delayed and this causes trainees to lose the knowledge they learned, there is little the trainer can do aside from offering a refresher course.

Can pay for performance be used for positions like these? It's a mixed answer. Yes, pay for performance can be used for trainers, based upon how many individuals the trainer trains and how many new courses the trainer obtains or develops. These are the hard results. However, since so much of training's execution depends upon other people downstream, like project managers who must start projects on time so new skills aren't lost, managers of training should also consider pay-for-performance elements such as effort (has the trainer consistently gone the extra mile to make things work?), skills, and communication.

In sum, for both business analysts and trainers, there are hard results that can be factored into a pay-for-performance formula, but there is also a need to survey each position's "customers" -- those individuals (and their managers) who utilized the business analyst's or trainer's skills and products to accomplish their respective objectives in projects and training. Were these user-customers satisfied?

Summary Remarks

The value that IT employees contribute to overall IT and to the business at large is a combination of tangible and intangible results. Pay-for-performance models are well suited to gauge tangible outcomes, but they fall short when it comes to intangibles that can be just as important.

Many years ago, when Pat Riley was coaching the Los Angeles Lakers, an interviewer asked what type of metrics he used to measure the effectiveness of individual players on the basketball court. Was it the number of points, rebounds, or assists? Riley said he used an "effort" index. For example, how many times did a player go up to get a rebound, even if he didn't end up with the ball? Riley said the effort individual players exhibited mattered because, even if they didn't get the rebound, they were creating situations so someone else on the team could.

IT is similar. It's why OKR International, a performance consultancy, stated: "Intangibles often create or destroy value quietly -- until their impact is too big to ignore.
In the long run, they are the unseen levers that determine whether strategy thrives or withers." What CIOs and IT leadership can do when they use pay for performance is to ensure that hard results, effort, communications, and skills are appropriately blended for each IT staff position and its responsibilities and realities. You can't attach a numerical measurement to everything, but you can observe the visible changes that begin to manifest when, say, a business analyst turns around what had been a hostile relationship with a user department and things begin to get done.
-
#pay #performance #how #you #measurePay for Performance -- How Do You Measure It?More enterprises have moved to pay-for-performance salary and promotion models that measure progress toward goals -- but how do you measure goals for a maintenance programmer who barrels through a request backlog but delivers marginal value for the business, or for a business analyst whose success is predicated on forging intangibles like trust and cooperation with users so things can get done? It’s an age-old question facing companies, now that 77% of them use some type of pay-for-performance model. What are some popular pay-for-performance use cases? A factory doing piece work that pays employees based upon the number of items they assemble. A call center that pays agents based on how many calls they complete per day. A bank teller who gets rewarded for how many customers they sign up for credit cards. An IT project team that gets a bonus for completing a major project ahead of schedule. The IT example differs from the others, because it depends on team and not individual execution, but there nevertheless is something tangible to measure. The other use cases are more clearcut -- although they don’t account for pieces in the plant that were poorly assembled in haste to make quota and had to be reworked, or a call center agent who pushes calls off to someone else so they can end their calls in six minutes or less, or the teller who signs up X number of customers for credit cards, although two-thirds of them never use the credit card they signed up for. Related:In short, there are flaws in pay-for-performance models just as there are in other types of compensation models that organizations use. So, what’s the best path for IT for CIOs who want to implement pay for performance? One approach is to measure pay for performance based upon four key elements: hard results, effort, skill, and communications. The mix of these elements will vary, depending on the type of position each IT staff member performs. Here are two examples of pay per performance by position: 1. Computer maintenance programmers and help desk specialists Historically, IT departments have used hard numbers like how many open requests a computer maintenance programmer has closed, or how many calls a help desk employee has solved. There is merit in using hard results, and hard results should be factored into performance reviews for these individuals -- but hard numbers don’t tell the whole story. For example, how many times has a help desk agent gone the extra mile with a difficult user or software bug, taking the time to see the entire process through until it is thoroughly solved? lf the issue was of a global nature, did the Help Desk agent follow up by letting others who use the application know that a bug was fixed? For the maintenance programmer who has completed the most open requests, which of these requests really solved a major business pain point? For both help desk and maintenance programming employees, were the changes and fixes properly documented and communicated to everyone with a need to know? And did these employees demonstrate the skills needed to solve their issues? Related:It’s difficult to capture hard results on elements like effort, communication and skills, but one way to go about it is to survey user departments on individual levels of service and effectiveness. 
From there, it’s up to IT managers to determinate the “mix” of hard results, effort, communication and skills on which the employee will be evaluated, and to communicate upfront to the employee what the pay for performance assessment will be based on. 2. Business analysts and trainers Business analysts and trainers are difficult to quantify in pay for performance models because so much of their success depends upon other people. A business analyst can know everything there is to know about a particular business area and its systems, but if the analyst is working with unresponsive users, or lacks the soft skills needed to communicate with users, the pay for performance can’t be based upon the technology skillset alone. Related:IT trainers face a somewhat different dilemma when it comes to performance evaluation: they can produce the training that new staff members need before staff is deployed on key projects, but if a project gets delayed and this causes trainees to lose the knowledge that they learned, there is little the trainer can do aside from offering a refresher course. Can pay for performance be used for positions like these? It’s a mixed answer. Yes, pay per performance can be used for trainers, based upon how many individuals the trainer trains and how many new courses the trainer obtains or develops. These are the hard results. However, since so much of training’s execution depends upon other people downstream, like project managers who must start projects on time so new skills aren’t lost, managers of training should also consider pay for performance elements such as effort, skills and communication. In sum, for both business analysts and trainers, there are hard results that can be factored into a pay for performance formula, but there is also a need to survey each position’s “customers” -- those individualswho utilized the business analyst’s or trainer’s skills and products to accomplish their respective objectives in projects and training. Were these user-customers satisfied? Summary Remarks The value that IT employees contribute to overall IT and to the business at large is a combination of tangible and intangible results. Pay for performance models are well suited to gauge tangible outcomes, but they fall short when it comes to the intangibles that could be just as important. Many years ago, when Pat Riley was coaching the Los Angeles Lakers, an interviewer asked what type of metrics he used when he measured the effectiveness of individual players on the basketball court. Was it the number of points, rebounds, or assists? Riley said he used an “effort" index. For example, how many times did a player go up to get a rebound, even if he didn’t end up with the ball? Riley said the effort individual players exhibited mattered, because even if they didn’t get the rebound, they were creating situations so someone else on the team could. IT is similar. It’s why OKR International, a performance consultancy, stated “Intangibles often create or destroy value quietly -- until their impact is too big to ignore. 
In the long run, they are the unseen levers that determine whether strategy thrives or withers.” What CIOs and IT leadership can do when they use pay for performance is to assure that hard results, effort, communications and skills are appropriately blended for each IT staff position, and its responsibilities and realities -- because you can’t attach a numerical measurement to everything -- but you can observe visible changes that begin to manifest when a business analyst turns around what has been a hostile relationship with a user department and you begin to get things done. #pay #performance #how #you #measureWWW.INFORMATIONWEEK.COMPay for Performance -- How Do You Measure It?More enterprises have moved to pay-for-performance salary and promotion models that measure progress toward goals -- but how do you measure goals for a maintenance programmer who barrels through a request backlog but delivers marginal value for the business, or for a business analyst whose success is predicated on forging intangibles like trust and cooperation with users so things can get done? It’s an age-old question facing companies, now that 77% of them use some type of pay-for-performance model. What are some popular pay-for-performance use cases? A factory doing piece work that pays employees based upon the number of items they assemble. A call center that pays agents based on how many calls they complete per day. A bank teller who gets rewarded for how many customers they sign up for credit cards. An IT project team that gets a bonus for completing a major project ahead of schedule. The IT example differs from the others, because it depends on team and not individual execution, but there nevertheless is something tangible to measure. The other use cases are more clearcut -- although they don’t account for pieces in the plant that were poorly assembled in haste to make quota and had to be reworked, or a call center agent who pushes calls off to someone else so they can end their calls in six minutes or less, or the teller who signs up X number of customers for credit cards, although two-thirds of them never use the credit card they signed up for. Related:In short, there are flaws in pay-for-performance models just as there are in other types of compensation models that organizations use. So, what’s the best path for IT for CIOs who want to implement pay for performance? One approach is to measure pay for performance based upon four key elements: hard results, effort, skill, and communications. The mix of these elements will vary, depending on the type of position each IT staff member performs. Here are two examples of pay per performance by position: 1. Computer maintenance programmers and help desk specialists Historically, IT departments have used hard numbers like how many open requests a computer maintenance programmer has closed, or how many calls a help desk employee has solved. There is merit in using hard results, and hard results should be factored into performance reviews for these individuals -- but hard numbers don’t tell the whole story. For example, how many times has a help desk agent gone the extra mile with a difficult user or software bug, taking the time to see the entire process through until it is thoroughly solved? lf the issue was of a global nature, did the Help Desk agent follow up by letting others who use the application know that a bug was fixed? 
For the maintenance programmer who has completed the most open requests, which of these requests really solved a major business pain point? For both help desk and maintenance programming employees, were the changes and fixes properly documented and communicated to everyone with a need to know? And did these employees demonstrate the skills needed to solve their issues? Related:It’s difficult to capture hard results on elements like effort, communication and skills, but one way to go about it is to survey user departments on individual levels of service and effectiveness. From there, it’s up to IT managers to determinate the “mix” of hard results, effort, communication and skills on which the employee will be evaluated, and to communicate upfront to the employee what the pay for performance assessment will be based on. 2. Business analysts and trainers Business analysts and trainers are difficult to quantify in pay for performance models because so much of their success depends upon other people. A business analyst can know everything there is to know about a particular business area and its systems, but if the analyst is working with unresponsive users, or lacks the soft skills needed to communicate with users, the pay for performance can’t be based upon the technology skillset alone. Related:IT trainers face a somewhat different dilemma when it comes to performance evaluation: they can produce the training that new staff members need before staff is deployed on key projects, but if a project gets delayed and this causes trainees to lose the knowledge that they learned, there is little the trainer can do aside from offering a refresher course. Can pay for performance be used for positions like these? It’s a mixed answer. Yes, pay per performance can be used for trainers, based upon how many individuals the trainer trains and how many new courses the trainer obtains or develops. These are the hard results. However, since so much of training’s execution depends upon other people downstream, like project managers who must start projects on time so new skills aren’t lost, managers of training should also consider pay for performance elements such as effort (has the trainer consistently gone the extra mile to make things work?), skills and communication. In sum, for both business analysts and trainers, there are hard results that can be factored into a pay for performance formula, but there is also a need to survey each position’s “customers” -- those individuals (and their managers) who utilized the business analyst’s or trainer’s skills and products to accomplish their respective objectives in projects and training. Were these user-customers satisfied? Summary Remarks The value that IT employees contribute to overall IT and to the business at large is a combination of tangible and intangible results. Pay for performance models are well suited to gauge tangible outcomes, but they fall short when it comes to the intangibles that could be just as important. Many years ago, when Pat Riley was coaching the Los Angeles Lakers, an interviewer asked what type of metrics he used when he measured the effectiveness of individual players on the basketball court. Was it the number of points, rebounds, or assists? Riley said he used an “effort" index. For example, how many times did a player go up to get a rebound, even if he didn’t end up with the ball? 
2. Business analysts and trainers
Business analysts and trainers are difficult to quantify in pay-for-performance models because so much of their success depends on other people. A business analyst can know everything there is to know about a particular business area and its systems, but if the analyst is working with unresponsive users, or lacks the soft skills needed to communicate with them, pay for performance can't be based on the technology skillset alone.
IT trainers face a somewhat different dilemma: they can deliver the training that new staff members need before those staff are deployed on key projects, but if a project gets delayed and the trainees lose the knowledge they learned, there is little the trainer can do aside from offering a refresher course.
Can pay for performance be used for positions like these? The answer is mixed. Yes, it can be used for trainers, based on how many individuals the trainer trains and how many new courses the trainer obtains or develops -- these are the hard results. However, since so much of training's execution depends on people downstream, such as project managers who must start projects on time so new skills aren't lost, managers of trainers should also weigh elements such as effort (has the trainer consistently gone the extra mile to make things work?), skill, and communication.
In sum, for both business analysts and trainers there are hard results that can be factored into a pay-for-performance formula, but there is also a need to survey each position's "customers" -- the individuals (and their managers) who relied on the business analyst's or trainer's skills and products to accomplish their objectives in projects and training. Were these user-customers satisfied?
Summary Remarks
The value IT employees contribute to IT overall and to the business at large is a combination of tangible and intangible results. Pay-for-performance models are well suited to gauging tangible outcomes, but they fall short on intangibles that can be just as important. Many years ago, when Pat Riley was coaching the Los Angeles Lakers, an interviewer asked what metrics he used to measure the effectiveness of individual players. Was it points, rebounds, or assists? Riley said he used an "effort" index -- for example, how many times a player went up for a rebound, even if he didn't come down with the ball. The effort mattered, Riley said, because even when players didn't get the rebound, they were creating situations in which a teammate could. IT is similar. It's why OKR International, a performance consultancy, stated: "Intangibles often create or destroy value quietly -- until their impact is too big to ignore. In the long run, they are the unseen levers that determine whether strategy thrives or withers." When CIOs and IT leaders use pay for performance, their job is to ensure that hard results, effort, communication, and skill are blended appropriately for each IT position and its responsibilities and realities. You can't attach a numerical measurement to everything, but you can see the change when a business analyst turns around a hostile relationship with a user department and things start getting done.
-
How to Convince Management Colleagues That AI Isn't a Passing Fad
John Edwards, Technology Journalist & Author | June 4, 2025 | 4 Min Read
Rancz Andrei via Alamy Stock Photo
It may be hard to believe, but some senior executives genuinely believe that AI's arrival isn't a ground-shaking event. These individuals tend to be convinced that while AI may be a useful tool in certain situations, it isn't going to change business in any truly meaningful way. Call them skeptics or call them realists, but such individuals really do exist, and it falls to the enterprise's CIOs and other IT leaders to gently guide them toward reality. AI adoption tends to fall into three mindsets: early adopters who recognize its benefits, skeptics who fear its risks, and a large middle group -- those who are curious but uncertain, observes Dave McQuarrie, HP's chief commercial officer, in an online interview. "The key to closing the AI adoption gap lies in engaging this middle group, equipping them with knowledge, and guiding them through practical implementation."
Effective Approaches
The most important move is simply getting started. Establish a group of advocates in your company to serve as your early AI adopters, McQuarrie says. "Pick two or three processes to completely automate rather than casting a wide net, and use these as case studies to learn from," he advises. "By beginning with a subset of users, leaders can develop a solid foundation as they roll out the tool more widely across their business." Start small, gather data, and present your use case, demonstrating how AI can help you and your colleagues do your jobs better and faster, recommends Nicola Cain, CEO and principal consultant at Handley Gill Limited, a UK-based legal, regulatory and compliance consultancy. "This could be by analyzing customer interactions to demonstrate how the introduction of a chatbot to give customers prompt answers to easily addressed questions ... or showing how vast volumes of network log data could be analyzed by AI to identify potentially malign incidents that warrant further investigation," she says in an email interview.
Changing Mindsets
Question the skeptical leader about their biggest business bottleneck, suggests Jeff Mains, CEO of business consulting firm Champion Leadership Group. "Whether it's slow decision-making, inconsistent customer experiences, or operational inefficiencies, there's a strategic AI-driven solution for nearly every major business challenge," he explains in an online interview. "The key is showing leaders how AI directly solves their most pressing problems today." When dealing with a reluctant executive, start by identifying an AI use case, Cain says. "AI functionality already performs strongly in areas like forecasting, recognition, event detection, personalization, interaction support, recommendations, and goal-driven optimization," she states. "Good business areas to identify a potential use case could therefore be in finance, customer service, marketing, cyber security, or stock control."
Strengthening Your Case
Executives respond to proof, not promises, Mains says. "Instead of leading with research reports, I've found that real, industry-specific case studies are far more impactful," he observes. "If a direct competitor has successfully integrated AI into sales, marketing, or operations, use that example, because it creates urgency." Instead of just citing AI-driven efficiency gains, Mains recommends framing AI as a way to free up leadership to focus on high-level strategy rather than day-to-day operations.
Instead of pitching AI in broad terms, Mains advises aligning the technology with the company's stated goals. "If the company is struggling with customer retention, talk about how AI can improve personalization," he suggests. "If operational inefficiencies are a problem, highlight AI-driven automation." The moment AI is framed as a business enabler rather than a technology trend, the conversation shifts from resistance to curiosity.
When All Else Fails
If leadership refuses to embrace AI, it's important to document the cost of inaction, Mains says. "Keep track of inefficiencies, missed opportunities, and competitor advancements," he recommends. Sometimes leadership shifts only when the perceived risks of staying stagnant outweigh the risks of change. "If a company refuses to innovate despite clear benefits, that's a red flag for long-term growth."
Final Thoughts
For enterprises that have so far done little or nothing with AI, the technology may appear optional, McQuarrie observes. Yet soon, operating without AI will become as unthinkable as running a business without the internet, and enterprise leaders who delay adoption risk falling behind the competition. "The best approach is to embrace a mindset of humility and curiosity -- actively seek out knowledge, ask questions, and learn from peers who are already seeing AI's impact," he says. "To stay competitive in this rapidly evolving landscape, leaders should start now." The best companies aren't just using AI to improve; they're using the technology to redefine how they do business, Mains says. Leaders who recognize AI as a business accelerator will be the ones leading their industries in the next decade. "Those who hesitate? They'll be playing catch-up," he concludes.
About the Author: John Edwards, Technology Journalist & Author
John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
-
From Steel to the Cloud: Phoenix Global’s CIO/CTO Talks Transformation
Joao-Pierre S. Ruth, Senior Editor | June 2, 2025 | 5 Min Read
JG Photography via Alamy Stock Photo
Providing services to steel mills and mines around the world can call for real-world heavy lifting. So when Phoenix Global decided it was time for digital transformation as new leadership took the helm, it needed a fresh plan to embrace the cloud. Jeff Suellentrop, chief information and technology officer for Phoenix Global, says the company works with some 17 steel mill sites in the US and abroad, offering slag remediation and metals recovery. "We operate all the heavy equipment, large loaders, dump trucks and basically all the heavy equipment in the steel mill," he says. "We help process the byproduct of slags." Slag is the byproduct of steelmaking, in which impurities are removed from the metal. Suellentrop says his company works with slag to help reclaim valuable metals that can be returned to the steelmaking process. The part of the byproduct not returned to steelmaking is crushed and sold to construction and other industrial buyers. "It's a very renewable process," he says.
Removing the Weight of Legacy Tech
Servicing steel mills is Phoenix Global's main business, Suellentrop says, with contracts that can last from five to 20 years. "It's a fairly unique business, fairly complex compared to traditional order-to-cash type of process." That includes very large asset purchases at the outset, with tens of millions of dollars spent on equipment to initiate a contract, he says. "We manage all that equipment, all the personnel, and we also maintenance all that equipment." All of this requires a fairly long selling process, with each site built independently. Phoenix Global had a legacy ERP system in place, Suellentrop says, but the unique needs of the sites led to fragmented data that was poorly integrated. "Our goal was to get what we call activity-based management, near real-time activity-based management," he says. The company wanted to start fresh, jettisoning its tech debt and process debt to become a fully integrated, modernized organization. "We've replaced every technology in the company in the last two years," Suellentrop says.
Unfettered by the Cloud
The core of that change, he says, was cloud-based SAP ERP software covering all of Phoenix Global's finance, purchasing, processing tech, supply chain, contract management, plans, and telematics. Phoenix Global tapped Syntax Systems to transition to SAP S/4HANA Cloud. Suellentrop says his company is still deploying SAP at its sites, working toward 100% deployment, which will include mobile assets such as connected loaders and dump trucks. "You can imagine all the telematics data, hours, fuel consumption," he says. The system connects some 1,700 associates around the world, integrating data and inventory and managing maintenance shops through SAP. "We've taken out all of the hand offs; it's all automated," Suellentrop says. "We've literally taken days out of the turn-around time and driven up utilization of the equipment, saved millions of dollars on inventory." Maintenance, for example, has been streamlined to let technicians work directly through SAP to order repair parts that would be available the same day. "It's a fairly high volume of data when it comes to all the information around the assets, asset maintenance, and then obviously tracking of all the different activities and resources," he says.
"Our goal with activity-based management basically is to see near real-time P&L by site, to allow us to make near real-time decisions which help us service our customer better." What such a per-site roll-up could look like is sketched below.
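This is purely illustrative: the sketch aggregates hypothetical activity records by site, and the field names, figures, and revenue/cost logic are invented -- they do not describe Phoenix Global's actual SAP data model.

# Hypothetical sketch: rolling activity records up into per-site P&L.
# Field names and figures are invented; they do not reflect Phoenix
# Global's actual SAP data model.

from collections import defaultdict

activity_records = [
    # site, revenue from recovered metal/aggregate sales, operating cost
    {"site": "Mill A", "revenue": 12_500.0, "cost": 7_800.0},
    {"site": "Mill A", "revenue": 9_300.0, "cost": 6_100.0},
    {"site": "Mill B", "revenue": 15_000.0, "cost": 11_200.0},
]

def pnl_by_site(records):
    """Aggregate revenue minus cost per site, the way a near-real-time
    feed of activity records would be summarized for activity-based
    management."""
    pnl = defaultdict(float)
    for record in records:
        pnl[record["site"]] += record["revenue"] - record["cost"]
    return dict(pnl)

print(pnl_by_site(activity_records))
# {'Mill A': 7900.0, 'Mill B': 3800.0}

In production, an aggregation like this would run continuously against the integrated data set rather than an in-memory list, but the shape of the computation is the same.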
In prior years, Phoenix Global saw spot implementations of new solutions for specific needs. After Suellentrop joined the company in March 2023, he was asked to architect the company's complete digital overhaul and transformation. "I'm responsible, from the executive team, for that digital transformation and beginning this activity-based management," he says. "Digital to me is really delivering it at the speed of business and creating a force multiplier. We literally changed every technology, jettisoned almost all of the legacy processes, and displaced them with best practices." That allowed Phoenix Global to get rid of unintegrated, poor processes, Suellentrop says, and leapfrog to best practices. "More importantly, it allowed us to standardize the whole data set," he says, which meant little data grooming was needed, speeding its use with AI. "We basically took out a whole challenge with deploying AI."
New Leadership, New Strategy
Suellentrop joined Phoenix Global as it emerged from a reorganization, which he says gave the company the chance to start fresh with a new leadership team intent on driving improvements across the board. That included the adoption of AI and a reimagining of the business model. "The steel industry has not embraced digital quite at the pace of some other industries," he says. The transformation plan aimed for increased safety, profitability, and efficiency. "We went into this with very distinct outcomes, how we saw this business running in the future, and then we basically aligned the technology to deliver those outcomes," Suellentrop says. "I can't stress the importance of that enough, because we had a very clear vision from the leadership team … it takes immense sponsorship, obviously, to jettison all old processes and go to best practices." That type of change management, he says, included telling staff that processes they had followed day to day, perhaps for as long as 15 years, would change. It also meant digitizing everything, Suellentrop says, including analog records, even for operators driving trucks. "We got rid of all the paper and pencils," he says. "We've deployed tablets; we automated so they didn't have to enter some things. We want to minimize the human data entry." Maintenance technicians now use tablets, he says, allowing them to manage work orders, order parts, and plan their workloads. Phoenix Global plans to finish deploying the new system and operating model in the US this year, with international sites to follow in 2026, Suellentrop says. "We're doing financial planning. We have several new AI value-adders that we're layering on this year in the plants that are deployed … we can hyper tune our processes and our profitability because we've got a much higher level of detail."
About the Author: Joao-Pierre S. Ruth, Senior Editor
Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.
-
Preparing Your IT Infrastructure for the AI Era
Brandon Taylor, Digital Editorial Program Manager | May 31, 2025 | 5 Min View
This session highlights AI's unique requirements and why a traditional IT infrastructure won't be enough to support it. With AI's unique capabilities come unique IT infrastructure and data-management needs. In this archived keynote session, Eve Logunova-Parker, founder and CEO of Evenness, reveals strategies and best practices for preparing your systems and scaling your infrastructure to support advanced AI applications. This segment was part of our live webinar, "How to Prepare Your IT Infrastructure For AI," presented by InformationWeek on May 20, 2025. Watch the archived "How to Prepare Your IT Infrastructure For AI" webinar on demand today.
About the Author: Brandon Taylor, Digital Editorial Program Manager
Brandon Taylor enables successful delivery of sponsored content programs across Enterprise IT media brands: Data Center Knowledge, InformationWeek, ITPro Today and Network Computing.
-
Securing Data Centers Against Cyber Risks
Michael Giannou, Global General Manager, Honeywell | May 29, 2025 | 4 Min Read
Andriy Popov via Alamy Stock
Data centers are quickly becoming the backbone of our information-driven world. At the same time, the increasing sophistication of cyberattacks, combined with the growing frequency of extreme climate events, means there is greater operational risk than ever before, as bad actors have begun targeting cooling systems to purposefully compromise equipment, causing irreversible loss and damage. The best defense against these threats is an integrated system centered on situational awareness and security. By taking steps to safeguard key areas, data center operators can better protect their facility and data, helping prevent costly threats and downtime.
Seeing the Big Picture
Developing a comprehensive awareness and monitoring system is a critical first step in protecting data centers. This is especially important as data centers welcome more tenants into shared space, requiring vendors to consider each tenant both individually and as part of the broader system: a threat to one tenant can quickly become a threat to all tenants. Centralizing all information in one system gives operators a single place to view and analyze real-time data, letting them instantly access critical information, monitor incidents, and respond quickly with pre-defined incident workflows. An intelligent system will integrate all security events -- including video recordings, access point clearances, and data reporting -- in one place to reduce coverage gaps and information silos. Another benefit of one comprehensive system is the ability to integrate its separate parts to improve response time. For example, a centralized security system could be configured so that any fire or intruder alarm immediately triggers the CCTV cameras in the vicinity of the alarm, letting the security team respond quickly and efficiently (a minimal sketch of this kind of event-driven workflow follows below). Close partnering between systems that transcend departments -- such as security, IT, and the management of employees, contractors, and visitors -- is key to protecting the facility and its data, in both low-friction (e.g., office space) and high-friction (e.g., server space) areas.
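Here is that sketch: an event handler that activates the cameras nearest an alarm. The zone and camera names, and the handler itself, are invented for illustration; no specific vendor system is implied.

# Hypothetical sketch of a pre-defined incident workflow: any fire or
# intruder alarm triggers the CCTV cameras nearest the alarm. Zone and
# camera names are invented for illustration.

CAMERAS_BY_ZONE = {
    "server-hall-1": ["cam-101", "cam-102"],
    "loading-dock": ["cam-201"],
}

def on_alarm(event):
    """Handle an alarm event by activating the cameras covering its zone,
    so security can immediately see the vicinity of the alarm."""
    if event["type"] not in {"fire", "intruder"}:
        return []
    cameras = CAMERAS_BY_ZONE.get(event["zone"], [])
    for camera in cameras:
        print(f"activating {camera} for {event['type']} alarm in {event['zone']}")
    return cameras

on_alarm({"type": "fire", "zone": "server-hall-1"})
# activating cam-101 for fire alarm in server-hall-1
# activating cam-102 for fire alarm in server-hall-1

In a real deployment, a handler like this would subscribe to the alarm feed of the integrated security platform rather than being called directly; the point is that the workflow is defined once, in advance, rather than improvised during an incident.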
Addressing the Gaps
Once a centralized security system is in place, operators can address the cybersecurity gaps where the data center is most vulnerable to bad actors. A strong, always-on cybersecurity program should be tailored to the specific facility and its compliance needs, and often includes:
Data encryption: Whether data is stored in the system or just passing through, encryption is key to preventing unauthorized access. A strong encryption process goes beyond thwarting attacks -- it is critical for establishing trust, ensuring the authenticity of data exchanged, guaranteeing the integrity of commands to smart devices, and maintaining secrecy where it is needed most (see the sketch after this list).
Network security: Data center operators can help prevent unauthorized access and cyberattacks with strong intrusion detection and prevention systems, firewalls, and network segmentation.
Facility protection: By integrating technologies such as electronic access control, biometrics, CCTV, and perimeter detection, operators can secure the physical facility. Security also requires vendors to adhere to standard operating procedures -- often overlooked in today's technology-focused environment -- such as enforcing visitor security policies and requiring visitors to have escorts.
Regular audits and updates: This may seem a lower priority than the often-urgent concerns above, but out-of-date firmware carries significant cybersecurity risk. Proactive attention and system maintenance can reduce operating costs in the long run and help avoid costly downtime.
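As one illustration of encryption that also provides the authenticity and integrity named in the encryption item above, the sketch below uses authenticated encryption (AES-GCM, via the widely used Python cryptography package) to protect a command sent to a smart device: tampering with the ciphertext, or with the device it is bound to, makes decryption fail rather than return bad data. The command, device names, and key handling are invented examples, not a description of any particular vendor's system.

# Sketch: authenticated encryption (AES-GCM) for a command to a smart
# device. If an attacker alters the ciphertext or the device binding,
# decryption raises InvalidTag instead of yielding corrupted data.
# Requires the 'cryptography' package; names and values are invented.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # provisioned per device in practice
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must never repeat for a given key
command = b'{"action": "set_fan_speed", "value": 80}'
device_id = b"chiller-07"  # bound to the ciphertext as associated data

ciphertext = aesgcm.encrypt(nonce, command, device_id)

# The receiver verifies integrity and authenticity while decrypting:
assert aesgcm.decrypt(nonce, ciphertext, device_id) == command

# Any tampering -- with the ciphertext or the device binding -- fails loudly:
try:
    aesgcm.decrypt(nonce, ciphertext, b"chiller-08")
except Exception as err:  # cryptography.exceptions.InvalidTag
    print("rejected tampered command:", type(err).__name__)

That fail-loudly property -- decryption refusing to succeed on any modification -- is what lets operators trust both stored data and commands in flight.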
Looking Ahead
With so many current considerations in focus, data center operators must also look ahead to future-proof their facilities. As quickly as the industry has grown in recent years, the momentum will likely continue to accelerate. One emerging frontier is quantum security: using quantum-enhanced randomness to deliver truly unpredictable key generation and safeguard sensitive information, so the system's defenses can evolve as quickly as the threats do. As the largest companies make significant investments in data centers -- for example, Microsoft's plan to invest approximately $80 billion in AI-enabled datacenters in FY25 -- many in the industry are watching to see how these companies' actions and investments shape the future of both data centers and building security overall. Another forward-looking trend is military-grade solutions entering the commercial and industrial marketplace. It is easy to see how a system hardened for integrated perimeter security in harsh environments can also meet the security and resilience needs of a data center; such solutions have often been certified through rigorous testing and evaluation, giving operators confidence that their systems can withstand almost all third-party attacks. Finally, the industry will begin to prioritize modularity -- systems that can be extended in the future, work with third-party solutions, and are both user-friendly and energy-efficient. Modularity lets operators expand their facilities with the latest technology without a costly overhaul of existing infrastructure, and by integrating modular systems with their business systems and wider stakeholders, organizations can monitor and manage their facilities more effectively. Cybersecurity risk can never truly be considered resolved -- it is constantly evolving. But by continuously revisiting the areas detailed above, data center operators can strengthen their facility and system protections, helping to protect their data now and in the future.
About the Author: Michael Giannou, Global General Manager, Honeywell
Michael Giannou is a global sales executive with over 15 years of experience leading high-performing teams and driving growth in the data center and technology sectors. As Global General Manager of Data Centers at Honeywell, he built and led a global sales team, delivering double-digit growth, and now leads the company's global data center vertical. Previously, at Schneider Electric, he substantially grew division sales over six years. Known for transforming underperforming programs and developing trusted customer relationships, Michael is a strategic, growth-focused leader passionate about mentoring enterprise sales professionals.
As the largest companies make significant investments in data centers -- for example, Microsoft's plan to invest approximately $80 billion in AI-enabled data centers in FY25 -- many in the industry are watching to see how these companies' actions and investments shape the future of both data centers and building security overall.

Another forward-looking trend is military-grade solutions entering the commercial and industrial marketplace. It is easy to see how a system hardened for integrated perimeter security in harsh environments can also fit the security and resilience needs of a data center. In addition, those solutions have often been certified through rigorous testing and evaluation, giving operators confidence their systems can withstand almost all third-party attacks.

Finally, the industry will begin to prioritize modularity -- systems that can be expanded later, work with third-party solutions, and are both user-friendly and energy-efficient. This allows operators to bring the latest technology into their facilities without a costly overhaul of existing infrastructure. By integrating modular systems with their business systems and leaning into wider stakeholder influence, organizations can more effectively monitor and manage their facilities.

Cybersecurity risk can never truly be considered resolved -- it is constantly evolving. But by continuously revisiting the areas detailed above, data center operators can enhance their facility and system protections, helping to protect their data now and in the future.

About the Author: Michael Giannou, Global General Manager, Honeywell. Michael Giannou is a global sales executive with over 15 years of experience leading high-performing teams and driving growth in the data center and technology sectors. As Global General Manager of Data Centers at Honeywell, he built and led a global sales team, delivering double-digit growth, and now leads the company's global data center vertical. Previously, at Schneider Electric, he grew division sales from $70M to $350M over six years. Known for transforming underperforming programs and developing trusted customer relationships, Michael is a strategic, growth-focused leader passionate about mentoring enterprise sales professionals.
How To Measure AI Efficiency and Productivity Gains
John Edwards, Technology Journalist & Author. May 30, 2025. 4 Min Read. Image: Tanapong Sungkaew via Alamy Stock Photo.

AI adoption can help enterprises function more efficiently and productively in many internal and external areas. Yet to get the most value out of AI, CIOs and IT leaders need to find a way to measure their current and future gains.

Measuring AI efficiency and productivity gains isn't always a straightforward process, however, observes Matt Sanchez, vice president of product for IBM's watsonx Orchestrate, a tool designed to automate tasks, focusing on the orchestration of AI assistants and AI agents. "There are many factors to consider in order to gain an accurate picture of AI's impact on your organization," Sanchez says in an email interview. He believes the key to measuring AI effectiveness starts with setting clear, data-driven goals. "What outcomes are you trying to achieve?" he asks. "Identifying the right key performance indicators -- KPIs -- that align with your overall strategy is a great place to start."

Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption's success," he advises in an online interview. Still, with the number of organizations adopting AI rapidly increasing, C-suites and boards are now prioritizing measurable ROI. "We're seeing this firsthand while working with clients in the manufacturing space specifically who are aiming to make manufacturing processes smarter and increasingly software-defined," Gaus says.

Measuring AI Efficiency: The Challenge

The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained."

AI Project Measurement Methods

Once AI projects are underway, Gaus says, measuring real-world results is key. "This includes studying factors such as actual cost reductions, revenue boosts tied directly to AI, and progress in KPIs such as customer satisfaction or operational output. This method allows organizations to track both the anticipated and actual benefits of their AI investments over time." (A worked sketch of this kind of tracking appears at the end of this article.)

To effectively assess AI's impact on efficiency and productivity, it's important to connect AI initiatives with broader business goals and evaluate their progress at different stages, Gaus says. "In the early stages, companies should focus on estimating the potential benefits, such as enhanced efficiency, revenue growth, or strategic advantages like stronger customer loyalty or reduced operational downtime." These projections can provide a clear understanding of how AI aligns with long-term objectives, Gaus adds.

Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata. "Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. "Metrics should be set prior to any investment to maximize benefits and mitigate biases, such as sunk cost fallacies, confirmation bias, anchoring bias, and the like."

Key AI Value Metrics

Metrics can vary depending on the industry and technology being used, Gaus says. "In sectors like manufacturing, AI value metrics include improvements in efficiency, productivity, and cost reduction." Yet specific metrics depend on the type of AI technology implemented, such as machine learning.

Beyond tracking metrics, it's important to ensure high-quality data is used to minimize biases in AI decision-making, Sanchez says. The end goal is for AI to support the human workforce, freeing users to focus on strategic and creative work and removing potential bottlenecks. "It's also important to remember that AI isn't a one-and-done deal. It's an ongoing process that needs regular evaluation and process adjustment as the organization transforms."

Spurling recommends beginning by studying three key metrics:

Worker productivity: Understanding the value of increased task completion or reduced effort by measuring the effect on day-to-day activities like faster issue resolution, more efficient collaboration, reduced process waste, or increased output quality.

Ability to scale: Operationalizing AI-based self-service tools, typically with natural language capabilities, across the entire organization beyond IT to enable task or job completion in real time, with no need for external support or augmentation.

User friendliness: Expanding organizational effectiveness with data-driven insights, as measured by the ability of non-technical business users to leverage AI via no-code, low-code platforms.

Final Note: Aligning Business and Technology

Deloitte's digital transformation research reveals that misalignment between business and technology leaders often leads to inaccurate ROI assessments, Gaus says. "To address this, it's crucial for both sides to agree on key value priorities and success metrics." He adds that it's also important to look beyond immediate financial returns and to incorporate innovation-driven KPIs, such as experimentation tolerance and agile team adoption. "Without this broader perspective, up to 20% of digital investment returns may not yield their full potential," Gaus warns. "By addressing these alignment issues and tracking a comprehensive set of metrics, organizations can maximize the value from AI initiatives while fostering long-term innovation."

About the Author: John Edwards, Technology Journalist & Author. John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
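To make the anticipated-versus-actual tracking described above concrete, here is a small illustrative Python sketch. The initiative names, dollar figures, and field names are hypothetical, not drawn from the article or any vendor's framework.

```python
# Minimal sketch: comparing projected vs. realized AI benefits over time.
# All figures and field names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AIInitiative:
    name: str
    investment: float          # total cost to date
    projected_savings: float   # estimated annual benefit at approval
    realized_savings: float    # measured annual benefit so far

    def roi(self) -> float:
        """Simple ROI: net realized benefit relative to investment."""
        return (self.realized_savings - self.investment) / self.investment

    def attainment(self) -> float:
        """How much of the projected benefit has actually materialized."""
        return self.realized_savings / self.projected_savings

portfolio = [
    AIInitiative("predictive maintenance", 500_000, 900_000, 750_000),
    AIInitiative("GenAI training assistant", 200_000, 300_000, 120_000),
]
for item in portfolio:
    print(f"{item.name}: ROI {item.roi():+.0%}, "
          f"{item.attainment():.0%} of projected benefit realized")
```

Even a simple attainment ratio like this gives leaders a defensible way to compare the projections made at approval time with measured results.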
NCSU CIO Marc Hoit Talks Fed Funding Limbo, AI’s Role in Shrinking Talent Pool
Shane Snider, Senior Writer, InformationWeek. May 30, 2025. 4 Min Read. Image: Paul Hamilton via Alamy Stock.

When Marc Hoit came to North Carolina State University (NCSU) in 2008 to take on the role of vice chancellor for information technology and chief information officer, the school and its IT operation looked much different. Hoit had left his role as interim CIO at the University of Florida, where he was also a professional in structural engineering. At the time, NC State had an IT staff of about 210 and an annual budget of $34 million. Fast forward to 2025, and Hoit now oversees a bigger department with a budget of $72 million. NCSU has a total of 39,603 students.

Aside from taking care of the university's massive IT needs, Hoit's department must also lend a hand to research initiatives and academic computing needs. Before Hoit's arrival, those functions were handled by separate departments. The administration decided to merge the functions under one CIO. "They wanted a lot of the IT to be centralized," Hoit says in a live interview with InformationWeek. "We had a lot of pieces and had to decide how much we could centralize … It balanced out nicely."

That unified approach would prove to be beneficial, especially as technology was advancing at an unprecedented pace. While many find the pace of innovation dizzying, Hoit has a different viewpoint. "Really, the pace of the fundamentals has not rapidly changed," he says. "Networking is networking. You have to ask: Do I need fiber instead of copper? Do I need bigger servers? Do I need to change routing protocols? Those are the operational pieces that make it work. You have to change, but the high-level strategy stays the same. We need to register students … we need to make that easier. We need to give them classes. We need to give them grades … those needs are consistent."

The Trump Effect

The Trump administration's rapid cost-cutting measures hit research universities especially hard. Just this week, the attorneys general of 16 states filed a lawsuit to block the administration from making massive federal funding cuts for research. And earlier this month, 13 US universities sued to block Trump's cuts to research funding by the National Science Foundation. Cuts from the National Institutes of Health (NIH) and the US Department of Energy also sought to cap funds for research.

Hoit says people may want to see less government spending but may not realize that the university already picks up a substantial share of the costs for those research projects. "We'll have to adjust and figure out what to do, and that may mean that grants that paid for some expensive equipment … the university will have to pick up those on its own. And that might be difficult to accomplish."

Hoit says NCSU is in a somewhat better position because its research funding is more spread out than at some public institutions. "If you were a big NIH grant recipient with a medical school and a lot of money coming from grants, you probably got hit harder. In our case, we have a very interesting portfolio with a broader mix of funding. And we have a lot of industry funding and partnerships."

The Trump administration's aggressive tariff policies have also impacted universities, which must attempt to budget for hardware needs without knowing the ultimate impact of the trade war. On Wednesday, the US Court of International Trade halted the administration's sweeping tariffs on goods imported from foreign nations.
But legal experts warn that the block may be temporary, as the administration is expected to appeal and use other potential workarounds.

Hoit says the university learned lessons from the first Trump administration. "The writing was kind of on the wall then," he says. "But a lot of the vendors are trying their best to manufacture in the US or to manufacture in lower-tariff countries and to move out of the problematic ones." He said the COVID-19 pandemic was also a learning opportunity for dealing with massive supply chain disruptions. "[The pandemic] taught us that the supply chain that we relied on to be super-fast, integrated and efficient … you can't really rely on that."

Shrinking Talent Pool and an AI Solution

According to the National Center for Education Statistics (NCES), colleges and universities saw a 15% drop in enrollment between 2010 and 2021. NCSU has largely bucked that trend because of explosive growth in the state's Research Triangle Park area. But the drop in higher education ambition has created another problem for IT leaders in general: a shrinking talent pool. That's true at the university level as well. AI could help bridge the talent gap but could cause interest to dwindle in certain tech careers.

"I keep telling my civil engineering peers that the world is changing," Hoit says. "If you can write code that gives you the formulas and process steps in order to build a bridge, why do I need an engineer? Why don't I just feed that to AI and let it build it? When I started teaching, I would tell people, go be a civil engineer … you'll have a career for life. In the last three years, I've started thinking, 'Hmm … how many civil engineers are we really going to need?'"

About the Author: Shane Snider, Senior Writer, InformationWeek. Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal and the Raleigh News & Observer, and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
Data Center Asia - Brought to you by Informa Markets
John Deere’s CISO Is Always Thinking About Cyber Talent
Carrie Pallardy, Contributing Reporter. May 29, 2025. 5 Min Read. Image: William Mullins via Alamy Stock.

John Deere hired its first CISO in 2014, and James Johnson has remained in that role at the agricultural equipment company to this day. Johnson sat down with InformationWeek to talk about how he got started in his career, why working through a nation-state attack was pivotal to his love of security, and how John Deere is building a talent pipeline in the time of the cybersecurity skills gap.

From Network Engineer to Chief Information Security Officer

Johnson started his career as a network engineer at windows and doors company Pella. He loved working in the network space but soon realized that he might grow bored there given enough time. Derek Benz, a friend of Johnson's and now CISO of Coca-Cola, suggested looking into security. Johnson went out and got a Certified Information Systems Security Professional (CISSP) certification, which helped him land a job as a pen tester at manufacturing and technology company Honeywell. During his time at Honeywell, the company was hit by Titan Rain, a series of coordinated cyberattacks carried out by a Chinese APT.

"Getting a chance to see how nation states target companies and what they're capable of doing, I think, really made the mission even more important to me at that point," Johnson shares. "When you do have the nation-state attack early in your career, it's kind of a game changer … just thinking about the value of the work that you're doing and why it matters." He spent 11 years at Honeywell, steadily working up the ranks to become a CISO overseeing various divisions within the company. And then, a call came from John Deere.

John Deere's First CISO

That call came at the right time. Johnson had reached a point at Honeywell where his growth would likely be limited for a period of time. "I was pleasantly surprised by the opportunity," says Johnson. "I had a great connection to John Deere coming out of Iowa, growing up in the farming community, seeing a lot of that … great brand and an opportunity to really build something from scratch again."

While building a security program as a first-time CISO is an exciting opportunity, it comes with its challenges. When Johnson arrived, he noticed how trusting the culture was at John Deere. "It's a great value that John Deere has … they really try to strive to do the right thing with integrity, but that's not the way the world operates on the digital front," he says. One of his mentors early in his tenure at John Deere told him that he was going to have to work on shifting the entire company culture as he built his security organization.

And he has made strides. When he first got there, everyone was using relatively simple passwords. Yet the process to change those passwords was cumbersome and time-consuming. "Today, MFA is deployed across the company. We have complex passwords," he says. "We're trying to find ways to use biometrics more."

An Evolving Role

His responsibilities in the CISO role have grown over time. When he first joined, he was overseeing IT security and operations. Financial product security, data security and governance: his team has taken on more and more over time. "We built the program from about 32 people to … 220 people strong now in our organization," he says.

Johnson has been with John Deere for more than a decade. Not every CISO or CIO sticks with the same company for that long, but Johnson has found that longevity has its benefits. He has built relationships with the board and his C-suite peers. "It's pretty hard to get good at something in two or three years," he explains. "You're there longer. You've got the relationships. You've got the ability to influence things and really make a bigger difference." Today, he is working alongside John Deere's leadership to navigate the thrilling possibilities and security concerns of AI.

Building a Talent Pipeline

While the possibility of a security incident always looms in a CISO's mind, Johnson is thinking about talent, too. "We will not succeed without the right people in our organization driving the right change," he says. John Deere is taking multiple approaches to bringing the right people to his team. First, he looks to other teams for people who are experts but not necessarily in security. He looks for promising talent and asks, "Can I teach that person security?" And the answer to that question in many cases has been "yes." "We've got folks who used to be lead engineers on the product side who now are running our product security department, and they were never interested in security at all," he says.

John Deere also makes use of cyber talent through its bug bounty program, which has paid out more than million since 2022. Having been a pen tester, Johnson knows how frustrating it can be for someone to discover a vulnerability only for a company to do nothing to fix it. "We have service-level agreements to get certain vulnerabilities that are critical, high, medium, low, fixed within a certain period of time, and in most cases, we beat those numbers," he says.
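Johnson doesn't describe John Deere's tooling, and the article doesn't disclose the actual SLA windows, so the following minimal Python sketch is purely illustrative of severity-based remediation tracking; the deadlines, IDs, and field names are hypothetical.

```python
# Minimal sketch: flagging vulnerabilities that have exceeded a
# severity-based remediation SLA. The deadline values are hypothetical;
# the article does not disclose John Deere's actual SLA windows.
from datetime import date

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue(findings: list[dict], today: date) -> list[dict]:
    """Return findings whose age exceeds the SLA for their severity."""
    return [
        f for f in findings
        if (today - f["reported"]).days > SLA_DAYS[f["severity"]]
    ]

findings = [
    {"id": "VULN-101", "severity": "critical", "reported": date(2025, 5, 1)},
    {"id": "VULN-102", "severity": "low", "reported": date(2025, 5, 1)},
]
for f in overdue(findings, today=date(2025, 5, 20)):
    print(f"{f['id']} ({f['severity']}) is past its SLA window")
```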
John Deere also works with Iowa State University to cultivate talent. "We put some services on campus, part of their tech center, that are services you probably would never get a chance to really work with or learn in college," says Johnson. He knows it would be difficult to find cloud security experts, for example, so they are helping develop those experts at Iowa State. "We've built a pipeline of talent out of Iowa State University because they know our brand," says Johnson.

About the Author: Carrie Pallardy, Contributing Reporter. Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
#john #deeres #ciso #always #thinkingJohn Deere’s CISO Is Always Thinking About Cyber TalentCarrie Pallardy, Contributing ReporterMay 29, 20255 Min ReadWilliam Mullins via Alamy StockJohn Deere hired its first CISO in 2014, and James Johnson has remained in that role at the agricultural equipment company to this day. Johnson sat down with InformationWeek to talk about how he got started in his career, why working through a nation state attack was pivotal to his love of security, and how John Deere is building a talent of pipeline in the time of the cybersecurity skills gap. From Network Engineer to Chief Information Security Officer Johnson started his career as a network engineer at windows and doors company Pella. He loved working in the network space but soon realized that he might grow bored there given enough time. Derek Benz, a friend of Johnson’s and now CISO of Coca-Cola, suggested looking into security. Johnson went out and got a Certified Information Systems Security Professionalcertification, which helped him land a job as a pen tester at manufacturing and technology company Honeywell. During his time at Honeywell, the company was hit by Titan Rain, a series of coordinated cyberattacks carried out by a Chinese APT. James Johnson, CISO“Getting a chance to see how nation states target companies and what they’re capable of doing, I think really made the mission even more important to me at that point,” Johnson shares. “When you do have the nation-state attack early on your career, it’s kind of a game changer … just thinking about the value of the work that you're doing and why it matters.” Related:He spent 11 years at Honeywell, steadily working up the ranks to become a CISO overseeing various divisions within the company. And then, a call came from John Deere. John Deere’s First CISO That call came at the right time. Johnson had reached a point at Honeywell where his growth would likely be limited for a period of time. “I was pleasantly surprised by the opportunity,” says Johnson. “I had a great connection to John Deere coming out of Iowa, growing up in the farming community, seeing a lot of that … great brand and an opportunity to really build something that from scratch again.” While building a security program as a first-time CISO is an exciting opportunity, it comes with its challenges. When Johnson arrived, he noticed how trusting the culture was at John Deere. “It’s a great value that John Deere has … they really try to strive to do the right thing with integrity, but that’s not the way the world operates on the digital front,” he says. One of his mentors early on in his tenure at John Deere told him that he was going to have work on shifting the entire company culture as he built his security organization. Related:And he has made strides. When he first got there, everyone was using relatively simple passwords. Yet, the process to change those passwords was cumbersome and time-consuming. “Today, MFA is deployed across the company. We have complex passwords,” he says. “We're trying to find ways to use biometrics more.” An Evolving Role His responsibilities in the CISO role have grown over time. When he first joined, he was overseeing IT security and operations. Financial product security, data security and governance; his team have taken on more and more over time. “We built the program from about 32 people to … 220 people strong now in our organization,” he says. Johnson has been with John Deere for more than a decade. 
Not every CISO or CIO sticks with the same company for that long, but Johnson has found that longevity has its benefits. He has built relationships with the board and his C-suite peers. “It's pretty hard to get good at something in two or three years,” he explains. “You’re there longer. You’ve got the relationships. You’ve got the ability to influence things and really make a bigger difference.” Today, he is working alongside John Deere’s leadership to navigate the thrilling possibilities and security concerns of AI. Building a Talent Pipeline While the possibility of a security incident always looms in a CISO’s mind, Johnson is thinking about talent, too. “We will not succeed without the right people in our organization driving the right change,” he says. John Deere is taking multiple approaches to bringing the right people to his team. First, he looks to other teams for people who are experts, though not necessarily in security. He looks for promising talent and asks, “Can I teach that person security?” And the answer to that question in many cases has been “yes.” “We’ve got folks who used to be lead engineers on the product side who now are running our product security department, and they were never interested in security at all,” he says. John Deere also makes use of cyber talent through its bug bounty program, which has paid out more than $1.5 million since 2022. Having been a pen tester, Johnson knows how frustrating it can be for someone to discover a vulnerability only for a company to do nothing to fix it. “We have service-level agreements to get certain vulnerabilities that are critical, high, medium, low, fixed within a certain period of time, and in most cases, we beat those numbers,” he says. John Deere also works with Iowa State University to cultivate talent. “We put some services on campus, part of their tech center, that are services you probably would never get a chance to really work with or learn in college,” says Johnson. He knows it would be difficult to find cloud security experts, for example, so they are helping develop those experts at Iowa State. “We’ve built a pipeline of talent out of Iowa State University because they know our brand,” says Johnson. About the Author: Carrie Pallardy, Contributing Reporter. Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance. 
Juniper Networks CIO Analyzes Career Options for Leaders at the Top
John Edwards, Technology Journalist & Author. May 27, 2025. 4 Min Read. Sharon Mandell, CIO of Juniper Networks. Sharon Mandell, chief information officer of Juniper Networks, has held various CIO and CTO roles over the past 25 years, each building upon her previous experience and matching her current life phase. In her current role, Mandell is in charge of IT strategy and implementation for an enterprise that offers high-performance networking and cybersecurity products to service providers, businesses, and public sector organizations. In a recent email interview, Mandell discussed the options available for CIOs looking to further advance their careers. What should be the next logical career step for a current CIO? That really depends on the individual CIO, their time in the role, the scope of their responsibilities, the scale of the companies they've worked for, and most importantly, their personal interests and aspirations. The next step could be another CIO role at a larger company in the same industry, or a shift to a different industry -- smaller, same-sized, or even larger -- if the challenge is compelling. It could be a smaller organization with a mission or opportunity that you’ve always wanted to take on. Some CIOs take on adjacent or additional functions -- customer support, engineering, marketing, HR. I haven't yet seen a CIO move into CFO or chief counsel, but with the right background, it's not out of the question. You could step into a COO or even CEO role. Some [CIOs] move into venture capital or advisory roles. There's no single "right" next step -- it's about what makes sense for your unique path and purpose. When is the best time to make a career move? When you feel like you're no longer having a significant impact or adding meaningful value in your current role. I've often felt that taking on a new role is like being thrown into the deep end of the pool -- completely overwhelmed at first, but eventually you develop a vision and begin driving change. When those changes start to feel incremental instead of transformative, it may be time to move on. Sometimes, opportunities show up when you're not actively looking -- something that fills a gap in your background, stretches you in a big way, or offers a challenge you’ve always wanted to take on. Even if you're happy where you are -- not that the CIO role is ever truly comfortable -- you’ve got to be open to those moments. When is the best time to stay in place? I don't like leaving a role when I've taken a risk on a project, a technology, or a transformation and haven't yet seen it through to a solid or stable outcome. I also don't want to leave a leadership team holding the bag, especially if I've been pushing them outside their comfort zones. I want my peers to understand why I've made certain decisions, and that usually means staying long enough to deliver real results. That said, sometimes opportunities won't wait. You’ll have to weigh whether staying to finish something or making a move offers more long-term value. At the end of the day, I want the people I leave behind to want to work with me again should the right opportunity arise. What's the biggest mistake CIOs make when planning a career move? Chasing title, prestige, or compensation as the sole driver of the decision, or assuming that "bigger" is always better. At the end of the day, what matters most is the people you surround yourself with, the impact you're able to make, and what you learn along the way. 
The right role should stretch you, challenge you, and allow you to contribute meaningfully to the organization's success. That’s what makes a career move truly worth it. Is there anything else you would like to add? I’ve never been someone who obsessively mapped out a career trajectory. I've made decisions based on what felt right for my life at the time. There were points when I took smaller roles because they gave me the balance I needed as a single mom. I passed on big opportunities because I didn't want to relocate. I've moved back and forth between CIO and CTO roles more than once. The common thread has been this: I look for roles that challenge me, expand my perspective, and give me big new experiences to grow from -- even if I know they won't be easy or fun. Those are the ones that build grit, resilience, and passion. The money, title, and scope tend to follow if you execute well. I may not end up with the biggest job or the highest salary, but I’ve had one heck of a ride, and that’s what makes it all worth it. Read more about: Network Computing. About the Author: John Edwards, Technology Journalist & Author. John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
Future-Proofing Enterprise Transformation: A CIO’s Guide to AI-Driven Innovation
Enterprises today face a critical imperative. Artificial intelligence is not just a technological evolution; it is a strategic driver of business transformation. AI is reshaping competitive advantage, redefining business models, and unlocking new revenue streams. Yet many organizations continue to treat AI as a technical add-on rather than a core business capability. According to a 2023 McKinsey report, organizations that integrate AI across their operations are 1.5 times more likely to experience double-digit revenue growth. For CIOs and enterprise architects, AI is no longer an IT issue -- it’s a boardroom priority. AI’s role extends far beyond driving efficiencies in operational processes. It is the key to unlocking agility, reshaping decision-making, and transforming business models. CIOs must lead the charge in embedding AI as a strategic enabler, aligning technology with the enterprise’s broader objectives to drive sustainable growth. The Strategic Impact of AI: Drive Business Outcomes Many companies are in the midst of digital transformation efforts to modernize their IT landscapes, yet AI is often viewed as a mere innovation metric, added to enhance existing systems. Instead, AI should be embedded when designing core business processes, aligning business architecture with IT architecture. AI’s true potential lies in its ability to drive enterprise-wide agility, transform decision-making, and accelerate business outcomes. AI empowers leadership teams by providing real-time, data-driven insights that enable smarter, faster decisions. CIOs and enterprise architects must work together to ensure that AI adoption is seamless and strategically integrated, avoiding the temptation of tactical, short-term solutions that fail to scale. The CIO Playbook: Leading AI-Driven Transformation To harness AI’s full potential, CIOs need a structured roadmap for its integration into the business. Here’s a playbook for AI-first enterprise transformation: Establish a clear AI vision aligned with business goals. AI should not be siloed within the IT department; it must be part of a broader strategic vision. CIOs should collaborate closely with the C-suite to align AI initiatives with the organization’s core objectives -- whether that’s driving customer engagement, enhancing operational efficiency, or unlocking new revenue streams. Invest in enterprise-wide AI integration. AI must be embedded across all facets of the organization: business, technology, data, and applications. This requires a holistic approach to AI architecture, integrating AI at all levels to ensure scalability and flexibility as the business evolves. Leadership Imperative: Govern AI for Long-Term Success As AI continues to evolve, CIOs must play a critical role in shaping AI governance to ensure long-term business success. This includes managing risk, ensuring ethical use, and embedding AI into the enterprise’s culture. For CIOs, AI governance is more than compliance; it’s about creating a framework that promotes innovation while safeguarding the organization’s values and business priorities. The Road Ahead For CIOs and enterprise architects, the question is no longer if they should adopt AI, but how to embed it at the heart of enterprise transformation. AI-first organizations -- those that embrace AI as a core driver of business change -- are outpacing competitors in digital transformation maturity. AI has the power to redefine leadership decision-making, enhance operational performance, and fuel new business models. 
But to reap the full benefits, CIOs must lead with vision, govern strategically, and integrate AI seamlessly into the organization’s DNA. Those who do will position their enterprises for long-term, sustainable growth.
Unstructured Data Management Tips
John Edwards, Technology Journalist & Author. May 26, 2025. 5 Min Read. Luis Moreira via Alamy Stock Photo. Structured data, such as names and phone numbers, fits neatly into rows and columns. Unstructured data, however, has no fixed schema and may come in highly complex formats such as audio files or web pages. Unfortunately, there's no single best way to manage unstructured data effectively. On the bright side, there are several approaches that can be used to tackle this critical, yet persistently elusive, challenge. Here are five tested ways to achieve effective unstructured data management from experts who participated in online interviews. Tip 1. Use AI-powered vector databases combined with retrieval-augmented generation "One of the most effective methods I've seen is using AI-powered vector databases combined with retrieval-augmented generation," says Anbang Xu, founder of AI video generator firm Jogg.AI. A former senior software engineer at Google, Xu suggests that instead of forcing unstructured data into rigid schemas, using vector databases allows enterprises to store and retrieve data based on contextual meaning rather than exact keyword matches. "This is especially powerful for text, audio, video, and image data, where traditional search methods fall short," he notes. For example, Xu says, organizations using AI-powered embeddings can organize and query vast amounts of unstructured data by meaning rather than syntax. "This is what powers advanced AI applications like intelligent search, chatbots, and recommendation systems," he explains. "At Jogg.AI, we’ve seen first-hand how AI-driven indexing and retrieval make it significantly easier to turn raw, unstructured data into actionable insights." 
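To make Tip 1 concrete, here is a minimal sketch of the embed-index-query pattern the experts describe. The embed() function below is a toy stand-in -- a real deployment would call a learned embedding model or a hosted API, and the documents and names are hypothetical -- but the retrieval logic (rank stored vectors by cosine similarity to the query vector) is the same idea a vector database implements at scale.

```python
# Toy semantic search over unstructured text: embed documents as vectors,
# then retrieve by meaning rather than keyword match. embed() is a toy
# stand-in for a real embedding model; all documents are hypothetical.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy 'embedding': hash character trigrams into a fixed-size vector
    (a stand-in for a learned embedding model), unit-normalized."""
    vec = np.zeros(dim)
    padded = f"  {text.lower()}  "
    for i in range(len(padded) - 2):
        vec[hash(padded[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Index a handful of unstructured snippets (the "vector database").
documents = [
    "Customer reported the mobile app crashes when uploading photos",
    "Quarterly revenue grew on strong cloud subscription renewals",
    "Server logs show repeated failed logins from one IP range",
]
index = np.stack([embed(d) for d in documents])

def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    """Rank stored documents by cosine similarity to the query."""
    scores = index @ embed(query)  # vectors are already unit-normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), documents[i]) for i in best]

for score, doc in search("suspicious login attempts"):
    print(f"{score:.2f}  {doc}")
```

In a retrieval-augmented generation pipeline, the top-ranked snippets returned by search() would then be handed to a language model as grounding context for its answer.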
Tip 2. Take a schema-on-read approach Another innovative approach to managing unstructured data is schema-on-read. "Unlike traditional databases, which define the schema -- the data's structure -- before it's stored, schema-on-read defers this process until the data is actually read or queried," says Kamal Hathi, senior vice president and general manager at Splunk, a Cisco company, which makes machine-generated data monitoring and analysis software. This approach is particularly effective for unstructured and semi-structured data, where the schema is not predefined or rigid, Hathi says. "Traditional databases require a predefined schema, which makes working with unstructured data challenging and less flexible." The key advantage of schema-on-read is that it enables users to work with raw data without needing to apply traditional extract-transform-load (ETL) processes, Hathi states. "This, in turn, allows for working with the diversity typically seen in machine-generated data, such as system and application telemetry logs." Tip 3. Look to the cloud Manage unstructured data by integrating it with structured data in a cloud environment using metadata tagging and AI-driven classifications, suggests Cam Ogden, a senior vice president at data integrity firm Precisely. "Traditionally, structured data -- like customer databases or financial records -- resides in well-organized systems such as relational databases or data warehouses," he says. However, to fully leverage all of their data, organizations need to break down the silos that separate structured data from other forms of data, including unstructured data such as text, images, or log files. This is where the cloud comes into play. Integrating structured and unstructured data in the cloud allows for more comprehensive analytics, enabling organizations to extract deeper insights from previously siloed information, Ogden says. AI-powered tools can classify and enrich both structured and unstructured data, making it easier to discover, analyze, and govern in a central platform, he notes. "The cloud offers the scalability and flexibility required to handle large volumes of data while supporting dynamic analytics workloads." Additionally, cloud platforms offer advanced data governance capabilities, ensuring that both structured and unstructured data remain secure, compliant, and aligned with business objectives. "This approach not only optimizes data management but also positions organizations to make more informed and effective data-driven decisions in real time." Tip 4. Use AI-powered classification and indexing One of the best ways to get a grip on unstructured data is to use AI-powered classification and indexing, says Adhiran Thirmal, a senior solutions engineer at cybersecurity firm Security Compass. "With machine learning (ML) and natural language processing (NLP), you can automatically sort, tag, and organize data based on its content and context," he explains. "Pairing this approach with a scalable data storage system, like a data lake or object storage, makes it easier to find and use information when you need it." AI takes the manual work out of organizing data, Thirmal says. "No more wasting time digging through files or struggling to keep things in order," he states. "AI can quickly surface the information you need, reducing human error and improving efficiency. It's also excellent for compliance, ensuring sensitive data -- like personal or financial information -- is properly handled and protected." 
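As a rough illustration of Tip 4, the sketch below trains a deliberately tiny text classifier to auto-tag documents and runs a simple regex pass to flag likely PII for compliance handling. The labels, training examples, and pattern are all hypothetical, and a production system would rely on far more training data or a managed NLP service; this only shows the shape of the workflow.

```python
# Toy auto-tagging for unstructured documents: a small ML classifier for
# topic tags plus a regex check for likely PII. All labels, examples, and
# patterns are hypothetical. Requires scikit-learn.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    ("Invoice attached for March cloud hosting charges", "finance"),
    ("Payment overdue, please remit by end of month", "finance"),
    ("Stack trace shows null pointer in checkout service", "engineering"),
    ("Deploy failed after the database migration step", "engineering"),
    ("Candidate accepted the offer, onboarding next week", "hr"),
    ("Annual benefits enrollment window opens Monday", "hr"),
]
texts, labels = zip(*train_docs)
tagger = make_pipeline(TfidfVectorizer(), MultinomialNB())
tagger.fit(texts, labels)  # tiny corpus; real systems need far more data

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US-SSN-like strings

def classify(doc: str) -> dict:
    """Return an auto-assigned topic tag and a PII flag for one document."""
    return {
        "tag": tagger.predict([doc])[0],
        "possible_pii": bool(SSN_PATTERN.search(doc)),
    }

print(classify("Refund the duplicate hosting payment, SSN 123-45-6789 on file"))
```

The same predicted tags and PII flags would typically be written back as metadata on the object store or data lake entries, which is what makes the downstream search, governance, and retention policies Thirmal describes possible.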
Tip 5. Create a unified, sovereign data platform An innovative approach to managing unstructured data goes beyond outdated data lake methods, says Benjamin Anderson, senior vice president of technology at database services provider EnterpriseDB. A unified, sovereign data platform integrates unstructured, semi-structured, and structured data in a single system, eliminating the need for separate solutions. "This approach delivers quality-of-service features previously available only for structured data," he explains. "With a hybrid control plane, organizations can centrally manage their data across multiple environments, including various cloud platforms and on-premises infrastructure." When it comes to managing diverse forms of data, whether structured, unstructured, or semi-structured, the traditional approach required multiple databases and storage solutions, adding operational complexity, cost, and compliance risk, Anderson notes. "Consolidating structured and unstructured data into a single multi-model data platform will help accelerate transactional, analytical, and AI workloads." About the Author: John Edwards, Technology Journalist & Author. John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
#unstructured #data #management #tipsUnstructured Data Management TipsJohn Edwards, Technology Journalist & AuthorMay 26, 20255 Min ReadLuis Moreira via Alamy Stock PhotoStructured data, such as names and phone numbers, fits neatly into rows and columns. Unstructured data, however, has no fixed scheme, and may have a highly complex format such as audio files or web pages. Unfortunately, there's no single best way to effectively manage unstructured data. On the bright side, there are several approaches that can be used to successfully tackle this critical, yet persistently elusive challenge. Here are five tested ways to achieve effective unstructured data management from experts who participated in online interviews. Tip 1. Use AI-powered vector databases combined with retrieval-augmented generation "One of the most effective methods I've seen is using AI-powered vector databases combined with retrieval augmented generation," says Anbang Xu, founder of AI video generator firm Jogg.AI. A former senior software engineer at Google, Xu suggests that instead of forcing unstructured data into rigid schemas, using vector databases will allow enterprises to store and retrieve data based on contextual meaning rather than exact keyword matches. "This is especially powerful for text, audio, video, and image data, where traditional search methods fall short," he notes. For example, Xu says, organizations using AI-powered embeddings can organize and query vast amounts of unstructured data by meaning rather than syntax. "This is what powers advanced AI applications like intelligent search, chatbots, and recommendation systems," he explains. "At Jogg.AI, we’ve seen first-hand how AI-driven indexing and retrieval make it significantly easier to turn raw, unstructured data into actionable insights." Related:Tip 2. Take a schema-on-read approach Another innovative approach to managing unstructured data is schema-on-read. "Unlike traditional databases, which define the schema -- the data's structure -- before it's stored, schema-on-read defers this process until the data is actually read or queried," says Kamal Hathi, senior vice president and general manager of machine-generated data monitoring and analysis software firm at Splunk, a Cisco company. This approach is particularly effective for unstructured and semi-structured data, where the schema is not predefined or rigid, Hathi says. "Traditional databases require a predefined schema, which makes working with unstructured data challenging and less flexible." The key advantage of schema-on-read is that it enables users to work with raw data without needing to apply traditional extract-transform-loadprocesses, Hathi states. "This, in turn, allows for working with the diversity typically seen in machine-generated data, such as system and application telemetry logs." Related:Tip 3. Look to the cloud Manage unstructured data by integrating it with structured data in a cloud environment using metadata tagging and AI-driven classifications, suggests Cam Ogden, a senior vice president at data integrity firm Precisely. "Traditionally, structured data -- like customer databases or financial records -- reside in well-organized systems such as relational databases or data warehouses," he says. However, to fully leverage all of their data, organizations need to break down the silos that separate structured data from other forms of data, including unstructured data such as text, images, or log files. This is where the cloud comes into play. 
Integrating structured and unstructured data in the cloud allows for more comprehensive analytics, enabling organizations to extract deeper insights from previously siloed information, Ogden says. AI-powered tools can classify and enrich both structured and unstructured data, making it easier to discover, analyze, and govern in a central platform, he notes. "The cloud offers the scalability and flexibility required to handle large volumes of data while supporting dynamic analytics workloads." Additionally, cloud platforms offer advanced data governance capabilities, ensuring that both structured and unstructured data remain secure, compliant, and aligned with business objectives. "This approach not only optimizes data management but also positions organizations to make more informed and effective data-driven decisions in real-time." Related:Tip 4. Use AI-powered classification and indexing One of the best ways to get a grip on unstructured data is to use AI-powered classification and indexing, says Adhiran Thirmal, a senior solutions engineer at cybersecurity firm Security Compass. "With machine learningand natural language processing, you can automatically sort, tag, and organize data based on its content and context," he explains. "Pairing this approach with a scalable data storage system, like a data lake or object storage, makes it easier to find and use information when you need it." AI takes the manual work out of organizing data, Thirmal says. "No more wasting time digging through files or struggling to keep things in order," he states. "AI can quickly surface the information you need, reducing human error and improving efficiency. It's also excellent for compliance, ensuring sensitive data -- like personal or financial information -- is properly handled and protected." Tip 5. Create a unified, sovereign data platform An innovative approach to managing unstructured data goes beyond outdated data lake methods, says Benjamin Anderson, senior vice president of technology at database services provider EnterpriseDB. A unified, sovereign data platform integrates unstructured, semi-structured, and structured data in a single system, eliminating the need for separate solutions. "This approach delivers quality-of-service features previously available only for structured data," he explains. "With a hybrid control plane, organizations can centrally manage their data across multiple environments, including various cloud platforms and on-premises infrastructure." When it comes to managing diverse forms of data, whether structured, unstructured, or semi-structured, the traditional approach required multiple databases and storage solutions, adding operational complexity, cost, and compliance risk, Anderson notes. "Consolidating structured and unstructured data into a single multi-model data platform will help accelerate transactional, analytical, and AI workloads." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. 
Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like #unstructured #data #management #tipsWWW.INFORMATIONWEEK.COMUnstructured Data Management TipsJohn Edwards, Technology Journalist & AuthorMay 26, 20255 Min ReadLuis Moreira via Alamy Stock PhotoStructured data, such as names and phone numbers, fits neatly into rows and columns. Unstructured data, however, has no fixed scheme, and may have a highly complex format such as audio files or web pages. Unfortunately, there's no single best way to effectively manage unstructured data. On the bright side, there are several approaches that can be used to successfully tackle this critical, yet persistently elusive challenge. Here are five tested ways to achieve effective unstructured data management from experts who participated in online interviews. Tip 1. Use AI-powered vector databases combined with retrieval-augmented generation "One of the most effective methods I've seen is using AI-powered vector databases combined with retrieval augmented generation," says Anbang Xu, founder of AI video generator firm Jogg.AI. A former senior software engineer at Google, Xu suggests that instead of forcing unstructured data into rigid schemas, using vector databases will allow enterprises to store and retrieve data based on contextual meaning rather than exact keyword matches. "This is especially powerful for text, audio, video, and image data, where traditional search methods fall short," he notes. For example, Xu says, organizations using AI-powered embeddings can organize and query vast amounts of unstructured data by meaning rather than syntax. "This is what powers advanced AI applications like intelligent search, chatbots, and recommendation systems," he explains. "At Jogg.AI, we’ve seen first-hand how AI-driven indexing and retrieval make it significantly easier to turn raw, unstructured data into actionable insights." Related:Tip 2. Take a schema-on-read approach Another innovative approach to managing unstructured data is schema-on-read. "Unlike traditional databases, which define the schema -- the data's structure -- before it's stored, schema-on-read defers this process until the data is actually read or queried," says Kamal Hathi, senior vice president and general manager of machine-generated data monitoring and analysis software firm at Splunk, a Cisco company. This approach is particularly effective for unstructured and semi-structured data, where the schema is not predefined or rigid, Hathi says. "Traditional databases require a predefined schema, which makes working with unstructured data challenging and less flexible." The key advantage of schema-on-read is that it enables users to work with raw data without needing to apply traditional extract-transform-load (ETL) processes, Hathi states. "This, in turn, allows for working with the diversity typically seen in machine-generated data, such as system and application telemetry logs." Related:Tip 3. Look to the cloud Manage unstructured data by integrating it with structured data in a cloud environment using metadata tagging and AI-driven classifications, suggests Cam Ogden, a senior vice president at data integrity firm Precisely. 
"Traditionally, structured data -- like customer databases or financial records -- reside in well-organized systems such as relational databases or data warehouses," he says. However, to fully leverage all of their data, organizations need to break down the silos that separate structured data from other forms of data, including unstructured data such as text, images, or log files. This is where the cloud comes into play. Integrating structured and unstructured data in the cloud allows for more comprehensive analytics, enabling organizations to extract deeper insights from previously siloed information, Ogden says. AI-powered tools can classify and enrich both structured and unstructured data, making it easier to discover, analyze, and govern in a central platform, he notes. "The cloud offers the scalability and flexibility required to handle large volumes of data while supporting dynamic analytics workloads." Additionally, cloud platforms offer advanced data governance capabilities, ensuring that both structured and unstructured data remain secure, compliant, and aligned with business objectives. "This approach not only optimizes data management but also positions organizations to make more informed and effective data-driven decisions in real-time." Related:Tip 4. Use AI-powered classification and indexing One of the best ways to get a grip on unstructured data is to use AI-powered classification and indexing, says Adhiran Thirmal, a senior solutions engineer at cybersecurity firm Security Compass. "With machine learning (ML) and natural language processing (NLP), you can automatically sort, tag, and organize data based on its content and context," he explains. "Pairing this approach with a scalable data storage system, like a data lake or object storage, makes it easier to find and use information when you need it." AI takes the manual work out of organizing data, Thirmal says. "No more wasting time digging through files or struggling to keep things in order," he states. "AI can quickly surface the information you need, reducing human error and improving efficiency. It's also excellent for compliance, ensuring sensitive data -- like personal or financial information -- is properly handled and protected." Tip 5. Create a unified, sovereign data platform An innovative approach to managing unstructured data goes beyond outdated data lake methods, says Benjamin Anderson, senior vice president of technology at database services provider EnterpriseDB. A unified, sovereign data platform integrates unstructured, semi-structured, and structured data in a single system, eliminating the need for separate solutions. "This approach delivers quality-of-service features previously available only for structured data," he explains. "With a hybrid control plane, organizations can centrally manage their data across multiple environments, including various cloud platforms and on-premises infrastructure." When it comes to managing diverse forms of data, whether structured, unstructured, or semi-structured, the traditional approach required multiple databases and storage solutions, adding operational complexity, cost, and compliance risk, Anderson notes. "Consolidating structured and unstructured data into a single multi-model data platform will help accelerate transactional, analytical, and AI workloads." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. 
Tip 3. Look to the cloud

Manage unstructured data by integrating it with structured data in a cloud environment using metadata tagging and AI-driven classifications, suggests Cam Ogden, a senior vice president at data integrity firm Precisely. "Traditionally, structured data -- like customer databases or financial records -- resides in well-organized systems such as relational databases or data warehouses," he says. However, to fully leverage all of their data, organizations need to break down the silos that separate structured data from other forms, including unstructured data such as text, images, and log files.

This is where the cloud comes into play. Integrating structured and unstructured data in the cloud allows for more comprehensive analytics, enabling organizations to extract deeper insights from previously siloed information, Ogden says. AI-powered tools can classify and enrich both structured and unstructured data, making it easier to discover, analyze, and govern in a central platform, he notes. "The cloud offers the scalability and flexibility required to handle large volumes of data while supporting dynamic analytics workloads." Additionally, cloud platforms offer advanced data governance capabilities, ensuring that both structured and unstructured data remain secure, compliant, and aligned with business objectives. "This approach not only optimizes data management but also positions organizations to make more informed and effective data-driven decisions in real-time."

Tip 4. Use AI-powered classification and indexing

One of the best ways to get a grip on unstructured data is to use AI-powered classification and indexing, says Adhiran Thirmal, a senior solutions engineer at cybersecurity firm Security Compass. "With machine learning (ML) and natural language processing (NLP), you can automatically sort, tag, and organize data based on its content and context," he explains. "Pairing this approach with a scalable data storage system, like a data lake or object storage, makes it easier to find and use information when you need it."

AI takes the manual work out of organizing data, Thirmal says. "No more wasting time digging through files or struggling to keep things in order," he states. "AI can quickly surface the information you need, reducing human error and improving efficiency. It's also excellent for compliance, ensuring sensitive data -- like personal or financial information -- is properly handled and protected."
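As a simplified stand-in for the ML/NLP classifiers Thirmal describes, the sketch below tags documents for compliance handling using rule-based patterns; the patterns and tag names are invented for illustration, and a real deployment would pair such rules with trained models.

```python
# Rule-based compliance tagging sketch. Patterns and tags are illustrative;
# production systems combine rules like these with trained NLP classifiers.
import re

PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(document: str) -> set[str]:
    """Tag a document with the kinds of sensitive data it appears to contain;
    tagged documents can then be routed to stricter storage and access policies."""
    return {tag for tag, pat in PATTERNS.items() if pat.search(document)}

doc = "Contact jane.doe@example.com, card 4111 1111 1111 1111 on file."
print(classify(doc))  # expected: {'email', 'credit_card'}
```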
Tip 5. Create a unified, sovereign data platform

An innovative approach to managing unstructured data goes beyond outdated data lake methods, says Benjamin Anderson, senior vice president of technology at database services provider EnterpriseDB. A unified, sovereign data platform integrates unstructured, semi-structured, and structured data in a single system, eliminating the need for separate solutions. "This approach delivers quality-of-service features previously available only for structured data," he explains. "With a hybrid control plane, organizations can centrally manage their data across multiple environments, including various cloud platforms and on-premises infrastructure."

When it comes to managing diverse forms of data -- structured, unstructured, or semi-structured -- the traditional approach required multiple databases and storage solutions, adding operational complexity, cost, and compliance risk, Anderson notes. "Consolidating structured and unstructured data into a single multi-model data platform will help accelerate transactional, analytical, and AI workloads."

About the Author: John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.

Let the AI Security War Games Begin
In February 2024, CNN reported, "A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company's chief financial officer in a video conference call." In Europe, a second firm experienced a multimillion-dollar fraud when a deepfake emulated a board member in a video allegedly approving a fraudulent transfer of funds.

"Banks and financial institutions are particularly at risk," said The Hack Academy. "A study by Deloitte found that over 50% of senior executives expect deepfake scams to target their organizations soon. These attacks can undermine trust and lead to significant financial loss." Hack Academy went on to say that AI-inspired security attacks weren't confined to deepfakes. These attacks were also beginning to occur with increased regularity in the form of corporate espionage and misinformation campaigns.

AI brings new, more dangerous tactics to traditional security attack methods like phishing, social engineering, and the insertion of malware into systems. For CIOs, enterprise AI system developers, data scientists, and IT network professionals, AI changes the rules and the tactics for security, given AI's limitless potential for both good and bad. This is forcing a reset in how IT thinks about security against malicious actors and intruders.

How Bad Actors Are Exploiting AI

What exactly is IT up against? The AI tools available on the dark web and in public cyber marketplaces give security perpetrators a wide choice of AI weaponry. Also, IoT and edge networks now present much broader enterprise attack surfaces. Security threats can come in videos, phone calls, social media sites, corporate systems and networks, vendor clouds, IoT devices, network endpoints, and virtually any entry point into a corporate IT environment that electronic communications can penetrate. Here are some of the current AI-embellished security attacks that companies are seeing:

Convincing deepfake videos of corporate executives and stakeholders that are intended to dupe companies into pursuing certain actions or transferring certain assets or funds. This deep faking also extends to voice simulations of key personnel that are left as voicemails in corporate phone systems.

Phishing and spear-phishing attacks that send convincing emails (some with malicious attachments) to employees, who mistakenly open them because they think the sender is their boss, the CEO, or someone else they perceive as trusted. AI supercharges these attacks because it can automate and send out a large volume of emails that hit many employee email accounts. That AI continues to "learn" with the help of machine learning so it can discover new trusted-sender candidates for future attacks.

Adaptive messaging that uses generative AI to craft messages to users that correct grammar and "learn" from corporate communication styles, so they can more closely emulate legitimate corporate communications.

Mutating code that uses AI to change malware signatures on the fly so antivirus detection mechanisms can be evaded.

Data poisoning that occurs when a corporate or cloud provider's AI data repository is injected with malware that alters ("poisons") the data, so it produces erroneous and misleading results.

Fighting Back With Tech

To combat these supercharged AI-based security threats, IT has a number of tools, techniques, and strategies it can consider.

Fighting deepfakes. Deepfakes can come in the form of videos, voicemails, and photos. Since deepfakes are unstructured data objects that can't be parsed in their native forms like regular data, there are new tools on the market that can convert these objects into graphical representations that can be analyzed to evaluate whether there is something in an object that should or shouldn't be there. The goal is to confirm authenticity.

Fighting phishing and spear phishing. A combination of policy and practice works best to combat phishing and spear-phishing attacks. Both types of attacks are predicated on users being tricked into opening an email attachment that they believe is from a trusted sender, so the first line of defense is educating (and repeat-educating) users on how to handle their email. For instance, a user should notify IT if they receive an email that seems unusual or unexpected, and they should never open it.

IT should also review its current security tools. Is it still using older security monitoring software that doesn't include more modern technologies like observability, which can check for security intrusions or malware at more atomic levels? Is IT still using IAM (identity access management) software to track user identities and activities at a top level in the cloud and at top and atomic levels on premises, or has it also added cloud identity entitlements management (CIEM), which gives it an atomic-level view of user accesses and activities in the cloud? Better yet, has IT moved to identity governance administration (IGA), which can serve as an overarching umbrella for IAM and CIEM plugins, plus provide detailed audit reports and automated compliance across all platforms?

Fighting embedded malware code. Malware can lie dormant in systems for months, giving a bad actor the option to activate it whenever the timing is right. It's all the more reason for IT to augment its security staff with new skill sets, such as that of the "threat hunter," whose job is to examine networks, data, and systems on a daily basis, hunting down malware that might be lurking within and destroying it before it activates.

Fighting with zero-trust networks. Internet of Things (IoT) devices come into companies with little or no security because IoT suppliers don't pay much attention to it, and there is a general expectation that corporate IT will configure devices to the appropriate security settings. The problem is, IT often forgets to do this. There are also times when users purchase their own IoT gear and IT doesn't know about it. Zero-trust networks help manage this because they detect and report on everything that is added, subtracted, or modified on the network. This gives IT visibility into new, potential security breach points. A second step is to formalize IT procedures for IoT devices so that no IoT device is deployed without the device's security first being set to corporate standards.

Fighting AI data poisoning. AI models, systems, and data should be continuously monitored for accuracy. As soon as they show lowered levels of accuracy or produce unusual conclusions, the data repository, inflows, and outflows should be examined for quality and non-bias of data. If contamination is found, the system should be taken down, the data sanitized, and the sources of the contamination traced, tracked, and disabled.
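One way to operationalize that monitoring, assuming accuracy on a trusted holdout set is measured regularly, is a simple tripwire that alarms when a rolling window of scores sags below the deployment-time baseline. The class and thresholds below are an illustrative sketch, not any vendor's method.

```python
# Minimal data-poisoning tripwire: track model accuracy on a trusted holdout
# set and alarm when a rolling window drops well below the baseline.
from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline: float, window: int = 20, tolerance: float = 0.05):
        self.baseline = baseline            # accuracy measured at deployment time
        self.scores = deque(maxlen=window)  # most recent holdout scores
        self.tolerance = tolerance          # allowed drop before alarming

    def record(self, accuracy: float) -> bool:
        """Return True when the rolling average has degraded enough to warrant
        quarantining the data pipeline and inspecting inflows for poisoning."""
        self.scores.append(accuracy)
        if len(self.scores) < self.scores.maxlen:
            return False                    # not enough evidence yet
        rolling = sum(self.scores) / len(self.scores)
        return rolling < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.92)
for daily_accuracy in [0.91, 0.93, 0.92] + [0.80] * 20:
    if monitor.record(daily_accuracy):
        print("accuracy degraded -- inspect data inflows for poisoning")
        break
```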
Fighting AI with AI. Almost every security tool on the market today contains AI functionality to detect anomalies, abnormal data patterns, and unusual user activities. Additionally, forensics AI can dissect a security breach that does occur, isolating how it happened, where it originated, and what caused it. Since most sites don't have on-staff forensics experts, IT will have to train staff in forensics skills.
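As a small example of that defensive use of AI, the sketch below applies scikit-learn's IsolationForest to flag anomalous user-activity patterns; the features and numbers are invented for illustration, and commercial tools layer far more signal, context, and tuning on top of this basic idea.

```python
# "Fighting AI with AI" sketch: an unsupervised anomaly detector over simple
# per-user activity features. Feature columns and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: logins per day, MB downloaded, distinct hosts touched.
normal_activity = rng.normal(loc=[5, 200, 3], scale=[1, 50, 1], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

today = np.array([
    [5, 210, 3],      # typical user
    [40, 9000, 60],   # mass download across many hosts: possible exfiltration
])
print(model.predict(today))  # 1 = normal, -1 = flagged anomaly
```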
Fighting with regular audits and vulnerability testing. Minimally, IT vulnerability testing should be performed on a quarterly basis, and full security audits on an annual basis. If sites use cloud providers, they should request each provider's latest security audit for review. An outside auditor can also help sites prepare for future AI-driven security threats, because auditors stay on top of the industry, visit many different companies, and see many different situations. Advance knowledge of looming threats helps sites prepare for new battles.

Summary

AI technology is moving faster than legal rulings and regulations. This leaves most IT departments "on their own" to develop security defenses against bad actors who use AI against them. The good news is that IT already has insights into how bad actors intend to use AI, and there are tools on the market that can help defensive efforts. What's been missing is a proactive and aggressive battle plan from IT. That has to start now.
Horizon3.ai Co-Founder Talks Transition From CTO to CEO
Snehal Antani has been tinkering with technology since childhood. His father, an electrical engineer, would give him broken devices and task him with fixing them. He moved into computer science as an undergraduate, eventually earning his master's degree. He then worked for IBM and later served as CIO for GE Capital and CTO for Splunk. In 2018, he joined Joint Special Operations Command (JSOC), a division of the United States Special Operations Command, as CTO. He started Horizon3.ai, an AI pen-testing company, with JSOC colleague Anthony Pillitiere in 2019. Here, he describes his unusual career path and how he deploys the skills he learned along the way to facilitate innovation.

Can you tell me about your early tech education?

When I went to undergrad at Purdue, I knew I was going to do computer science. What I love about computer science is that it's horizontal -- so I can apply that to any vertical that I'm interested in. I was interested in stock trading while I was an undergrad, so I was able to write code to learn how to trade stocks. The software programming and systems architecture skills that I picked up could be applied to solve any job.

What did the early portion of your career teach you?

I optimized for learning. I used to sit in the hallway in front of my team lead's office at IBM. He couldn't see me, but I could see his whiteboard. I would try to understand something he had explained to me. I was too afraid to go in and ask for more information, so I would literally sit on the floor and just stare at it, trying to make sure I understood it in detail.

I wanted to be an expert in distributed systems and enterprise software. The first few jobs I took were all about learning as much as I could in that domain. I was an awful speaker. I forced myself to become a better communicator. I then moved over to learn how to launch products in product management. I was an awful product manager the first year. But there was no way I was going to get better except by throwing myself into that arena and trying to figure it out.

In 2012 I got recruited to be a CIO at GE Capital. I had never managed anyone before. GE made a bet on me. I learned a lot and I was able to impact the organization as well. Having a solid technical foundation and being able to communicate well were probably the two most important skills I developed early in my career.

Can you describe a scenario in which you felt out of your depth?

When I was at IBM, there was a customer in Germany struggling with their tech. Their banking system kept crashing. Steve Mills, who was a legendary senior vice president, sent out a message that said, "This customer is struggling. No one can figure out what's wrong. Who here knows how to fix this problem?" I was a nobody at IBM. I replied directly to Mills and said, "I think I can fix this problem. Send me."

Once I got there, they were explaining their problem. I had no idea what they were talking about. All I could think was, "I'm going to get fired. I just embarrassed myself and my company." Suddenly, everything in my brain clicked: every single aspect of enterprise software technology, operating systems, distributed systems. We ended up solving the problem about 90 minutes later.

How has life in the C-suite changed for tech folks?

I remember going into meetings at GE Capital. People thought I was there to manage the projector. Some of those teams struggled to understand the role technology played in creating a competitive edge.
GE had just come off gutting and outsourcing the bulk of their technology DNA. Throughout the 2000s it didn't seem that there was a belief that technology was a competitive advantage. I think there was a realization that they had gone too far. They started to try to bring in more technical talent. In the mid-2000s through 2015, tech was a back-office function. I believe that's shifted dramatically, especially now when you think about AI and the advantage you can create using technology. There are certainly CIOs in my network who still view themselves as a back-office function. They don't want to learn the business. But I believe that type of CIO is in the minority now.

Why did you join Joint Special Operations Command in 2018?

I was 21 when 9/11 happened. I remember this feeling of both helplessness and the desire to do something about it. Was there a multiplier way to effect change -- one calorie in causing 10 calories of impact? There wasn't an obvious way for me to do that. I remember in 2014 watching the rise of ISIS. The desire to make a difference came back at a much more intense level.

The Special Operations community had invited me to do some planning sessions with them. How could they increase the velocity of innovation in order to keep up with the adversary? Terrorist organizations were able to use off-the-shelf technology -- open-source software, cloud computing, drones -- to innovate lethal capabilities that were otherwise only available to armies. And so, the question was, how do we accelerate the innovation velocity? A lot of that experience was drawn from my time at GE Capital. I was able to join as the first-ever CTO. For me, it was about purpose and impact. There's no clearer mission than looking at human beings putting themselves in danger to help others. Anything that we could do using technology to reduce risk to them was an incredible opportunity.

How did you come to found Horizon3.ai?

I met Tony, my co-founder, at JSOC. We saw a challenge: We have no idea we're secure until the bad guys show up. Are we fixing the right vulnerabilities? Are security tools actually working? We wanted to find a way to build an autonomous system that allows you to hack yourself as often as you want. Fiercely prioritizing problems that mattered was the first thing that we were able to do, because our autonomous agent was able to hack organizations, tell you exactly how it hacked them, and then tell you exactly what to fix and how to fix it. Once you fix it, you can run a retest to verify that you're good to go. Find, fix, verify is the primary experience within the product.
The CIO's Playbook for IT Automation in 2025
This session highlights a practical, step-by-step approach to achieving automation success and hazards to avoid.
Brandon Taylor, Digital Editorial Program Manager
May 22, 2025 | 5 Min View

With digital transformations everywhere all at once, the importance of automation for IT teams shouldn't come as a surprise. Determining which processes to automate first and which tools to deploy is equally important, but so are the obstacles that will waste money and slow deployment. In this archived keynote session, Sidney Madison Prescott, director of innovation, artificial intelligence at NRI, explores a practical, step-by-step approach to achieving automation success. This segment was part of our live webinar, "Perfecting IT Automation in 2025: A Roadmap for CIOs," presented by InformationWeek on April 17, 2025. The archived webinar is available on demand.

About the Author: Brandon Taylor, Digital Editorial Program Manager, enables successful delivery of sponsored content programs across Enterprise IT media brands: Data Center Knowledge, InformationWeek, ITPro Today, and Network Computing.