-
- EXPLORE
-
-
-
-
News and Analysis Tech Leaders Trust
Recent Updates
-
WWW.INFORMATIONWEEK.COMCultivating Para-IT’ers and Super UsersA CIO and the IT team can extend the reach of a lean staff and build trust with user departments by cultivating “super users” and “para-IT” support people in user departments. Why do this, and what’s the best way for CIOs to go about it? At first glance, it seems that IT budgets are really growing! Gartner reports that budgets for IT in 2025 will increase a whopping 9.8% over what they were in 2024, which seems substantial. But, what’s behind this growth? “A significant portion will merely offset price increases within their recurrent spending,” said Gartner Distinguished VP Analyst, John David Lovelock. “This means that, in 2025, nominal spending versus real IT spending will be skewed, with price hikes absorbing some or all of budget growth. All major categories are reflecting higher-than-expected prices, prompting CIOs to defer and scale back their true budget expectations.” That means IT staffs will remain lean. Gartner also projects that spending will top $644 billion in 2025 for generative AI alone. The robotics market is projected to grow by 9.49% year over year through 2029. Also, edge computing is estimated to grow by 15.6% through 2030, according to Mordor Intelligence. And, estimates are that there will be four citizen developers for every IT developer in organizations, says Kissflow in citing Gartner data. Related:Additionally, there is an expanding need for business knowledge in IT, which is being asked to transition to a more service-oriented culture. In sum, more is expected of IT; IT staffs aren’t growing while user department tech deployments and requests for help are. In this environment, it makes sense for IT to cultivate super user and para-IT help. Exploring the User Tech Work Force Super users and para-IT users are the two types of users that IT works with, but they differ in the technology skills that they provide. Here is the rundown: Super users. Super users are individuals who have a detailed knowledge of the various systems that their departments run. If a typical user in a department gets confused or encounters a problem with a system, they call over the department super user to help them. This super user is likely to be upskilled in writing macros for spreadsheets and developing low code programs for reports and simple applications. He or she is also likely to be the main user contact for IT. Para-IT users. Unlike super users, who tend to naturally like and gravitate toward IT work, para-IT users are often reluctant IT recruits. A majority are asked to maintain remote networks and IT equipment in edge locations where immediate IT support isn’t available. Para-IT users perform tasks like monitoring networks for uptime, responding to elementary troubleshooting needs, securing equipment, maintaining router health, and doing backups. They also serve as a “front line” of IT network support. When an issue becomes too challenging or technical, para-IT’ers hand the issue over to IT so IT professionals can take over. Related:It should be noted that like super users, para-IT personnel are not full-time IT employees. They continue to do their respective jobs in their user departments, with the understanding that they also have several IT duties as “side tasks.” Blending User-IT Support Teams What’s the best way to build a hybrid IT support team that includes both IT and end users? Here are five key steps. 1. Survey the user opportunities. 
Before pursuing an IT-user support collaboration, IT leadership should assess the IT skills that exist in user departments. A small company might not have the bandwidth or the end user desire or skills to team with IT. In other cases, there is some IT savvy in user departments, and it is likely that IT already knows who the talented users are. Related:If IT leadership determines that there are technology-savvy individuals in user departments, it could make sense to consider building a hybrid IT support team. 2. Obtain management buy-in. Often, the most tech-savvy users in departments are also these departments’ most valued employees. In my own experience, I’ve encountered user area managers who really didn’t want to commit any of their employees’ time to IT “side” work. In other cases, managers were enthusiastic. User-IT hybrid support teams should only be pursued if the managers of user departments that would participate are enthusiastic about the idea. 3. Clearly define and demarcate duties. At what point does a super user in a user department turn an issue over to IT? And, if an employee is trained for para-IT duties in a remote manufacturing plant, what are the daily tech duties this employee is expected to perform, and when does IT take over? Clear responsibilities and lines of demarcation between duties should be enumerated and agreed to by IT and the end users and managers before any hybrid IT support team is started. 4. Train and support. Even department IT super users will need training in topics such as security guidelines and application workflows when they work with IT. Para-IT’ers might need even more, because many may have only limited knowledge of IT networks, apps and accountabilities that they will be expected to assume daily responsibilities for, such as rebooting routers, seeing that equipment like robots are secured in cages at ends of shifts. It’s IT’s responsibility to train these personnel, and to maintain open communication lines so users can get IT coaching and help when they need it. 5. Regularly visit with department heads. How is the user-IT support team working out? Is it helping overall productivity in the end user area or facility? CIOs should factor in some “management –by-walking-around” to the various managers of user departments engaged in hybrid IT support with IT. The goal of all these efforts is ensuring that this strategy is working for everyone.0 Comments 0 Shares 14 ViewsPlease log in to like, share and comment!
-
WWW.INFORMATIONWEEK.COMA Primer for CTOs: Taming Technical DebtJohn Edwards, Technology Journalist & AuthorMay 6, 20255 Min ReadBorka Kiss via Alamy Stock PhotoLike a hangover, technical debt is a headache that plagues many IT organizations. Technical debt accumulates when software development decisions aren't up to recommended or necessary standards when moved into production. Like financial debt, technical debt may not be a bad thing when used to drive a critical project forward, particularly if the initiative promises some type of immediate value. Unfortunately, technical debt is frequently misused as a compromise that places speed above good practices. Technical debt is a collection of design or implementation constructs that are expedient in the short term butcreate a context that can make future changes more costly or impossible, says Ipek Ozkaya, technical director of engineering, intelligent software systems, at the Carnegie Mellon University Software Engineering Institute, in an online interview. Technical debt is often created by well-intended and sometimes justified trade-offs, such as looming deadlines, uncoordinated teams unintentionally developing competing solutions, or even patterns and solutions that were at one time elegant but haven't aged well, says Deloitte CTO Bill Briggs. "There’s usually a commitment to come back and fix it in the next release or the next budgeting cycle, but priorities shift while the interest from technical debt grows," he notes in an email interview. Related:Facing Costs and Delays For many public and private sector enterprises, paying down technical debt represents a large percentage of their annual technology investment, Briggs says. "As a result, new projects that depend on aging tech have a high probability of delays and ballooning costs." Perhaps most ominously, by siphoning funds away from critical cybersecurity updates and initiatives, technical debt can play a significant negative role in breaches and service outages, potentially leading to financial, operational, and reputational risks, Briggs says. Technical debt can also make it hard, sometimes even impossible, to harness and scale promising new technologies. "Transformational impact typically requires emerging tech to be embedded in business process systems, where technical debt is likely to run rampant." Regaining Control in Software Ecosystems There's no one-size-fits-all approach to controlling technical debt, since the priority and the impact of short-term gains and long-term system, resource, and quality impacts are often context specific, says Ozkaya, co-author of the book "Managing Technical Debt: Reducing Friction in Software Development". However, teams can get ahead of unintentional technical debt by incorporating modern software development practices and investing in automated quality analysis, unit and regression testing, and continuous integration and deployment tools and practices, she notes. Related:Technical debt is a reality in today's software ecosystems, Ozkaya states. "They evolve fast, have to adjust to changing technology, new requirements need to be incorporated, and competition is rough," she observes. Virtually all organizations have some level of technical debt. "The right question to ask is not whether it's useful or not, but how it can be continuously and intentionally managed." Still, organizations don't want to find themselves drowning in unintentional technical debt. 
"Instead, they want to make the right tradeoffs and strategically decide when to accept technical debt and when to resolve it," Ozkaya says. A Strategy for Debt Taming Taking a head-on approach is the most effective way to address technical debt, since it gets to the core of the problem instead of slapping a new coat of paint over it, Briggs says. The first step is for leaders to work with their engineering teams to determine the current state of data management. "From there, they can create a realistic plan of action that factors in their unique strengths and weaknesses, and leaders can then make more strategic decisions around core modernization and preventative measures." Related:Managing technical debt requires a long-term view. Leaders must avoid the temptation of thinking that technical debt only applies to legacy or decades old investments, Briggs warns. "Every single technology project has the potential to add to or remove technical debt." He advises leaders to take a cue from medicine's Hippocratic Oath: "Do no harm." In other words, stop piling new debt on top of the old. Technical debt can be reduced or eliminated by outsourcing, says Nigel Gibbons, a director and senior advisor at cybersecurity advisory firm NCC Group. Focus on what you do best and outsource the rest, he recommends in an email interview. "Cloud computing and managed security services are the panacea for most organizations, offering a freedom from the ball and chain of IT infrastructure." Coming to Terms with Tech Debt Technical debt can be useful when it's a conscious, short-term trade-off that serves a larger strategic purpose, such as speed, education, or market/first-mover advantage, Gibbons says. "The crucial part is recognizing it as debt, monitoring it, and paying it down before it becomes a more serious liability," he notes. Many organizations treat technical debt as something they're resigned to live with, as inevitable as the laws of physics, Briggs observes. Some leaders vilify technical debt by blaming predecessors for allowing debt to pile up on their watch. Such attitudes are useless, however. "Leaders should be driving conversations to shine a light on the impact, implications, and potential path forward," he advises. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 10 Views
-
WWW.INFORMATIONWEEK.COMMIT Sloan CIO SymposiumTechTarget and Informa Tech’s Digital Business Combine.TechTarget and InformaTechTarget and Informa Tech’s Digital Business Combine.Together, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.MIT Sloan CIO SymposiumMay 20, 2025|Royal Sonesta Boston / Cambridge, MAJoin MIT Sloan at the Royal Sonesta Hotel for their 22nd annual CIO Symposium, Cambridge, MA on Tuesday, May 20, 2025. As we enter an AI-driven era, CIOs embark on an unpredictable journey filled with both opportunity and challenge. Many enterprises are actively applying AI to product development, marketing, operations, and HR. Join other CIOs, technology executives and MIT faculty for interactive learning, thought-provoking panels, networking, and more. Don’t miss out—secure your tickets today.Never Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UP0 Comments 0 Shares 18 Views
-
WWW.INFORMATIONWEEK.COMCloud Governance in the Age of AIEveryone is focused on the latest AI innovations, from multimodal GenAI advancements to specialized applications and agentic AI. But few are paying attention to one critical gap that puts all this innovation -- and its potential -- at risk: outdated cloud governance models. These models weren’t built for the pace we’re moving at today. Most organizations are still trying to govern AI infrastructure with static policies, tagging rules, and post-mortem budget alerts. That’s like trying to control a Formula 1 car with a bicycle manual -- it simply isn’t a match for the speed or complexity. And the strain is starting to show. Governance often gets treated as an insurance policy, a risk-mitigation layer, a box to check. But in today’s environment, it must be more than that. When it works, it makes the right thing the easy thing. If it slows teams down or gets bypassed altogether, it stops being governance -- and becomes a liability. Why Traditional Governance Can’t Keep Up Legacy governance models were built for more predictable environments -- where infrastructure was provisioned manually, by centralized teams, with time to review and react. That reality is gone. AI workloads are: Dynamic: Infrastructure is provisioned automatically and scales in real-time. Related:Decentralized: Workloads are launched by teams operating outside traditional IT channels. Expensive: High-powered compute jobs accumulate costs fast -- often to the tune of $10 to $100 million per model -- without clear ownership or oversight. In environments like this, reactive governance doesn’t just slow things down -- it fails. According to Gartner, only 48% of AI projects make it into production, and the average time to get there is eight months -- delays often rooted in fractured workflows, unclear ownership, or policy bottlenecks. I’ve seen it firsthand: a data team bypasses provisioning delays by using a shadow account; an AI pipeline scales unexpectedly over a weekend; cost and compliance issues surface weeks later, when it's too late to do anything but clean up the mess. These aren’t isolated events. They’re symptoms of a broader disconnect between how organizations say they want to govern the cloud -- and how their systems actually operate. When Governance Breaks, Culture Follows The deeper risk isn’t just operational. It’s cultural. When governance is built around delays, gatekeeping, or reactive controls, it sends a clear message: compliance and velocity can’t coexist. And when teams are forced to choose, they’ll choose speed -- every time. Related:I’ve seen this turn into shadow infrastructure, fragmented decision-making, and team-level workarounds that leave finance and security in the dark. It’s not that people don’t care about governance. But instead of governance being a built-in, preemptive step, it’s become something they just “work around.” And when that happens, three outcomes typically follow: Cloud sprawl: Teams stand up infrastructure wherever and however they want, with no unified oversight. Unpredictable spend: AI workloads scale unexpectedly, and finance teams are left reacting to invoices instead of managing impact. Compliance gaps: Sensitive data is processed without appropriate controls, exposing the organization to avoidable risk. By the time any of these issues are visible, policy isn’t enough to solve them. You need structural change. 
What AI-Era Governance Demands To support AI -- and future-proof operations in general -- governance has to shift from a reactive process to a preventive capability. It has to be built into the infrastructure, not layered on after the fact. That starts with four core principles: Platform-embedded policies: Governance logic must live where infrastructure is created. Automated controls on provisioning, access, and resource types prevent problems before they start. Related:Paved roads, not detours: The easiest path forward should also be the most compliant. When self-service tools and templates include built-in guardrails, teams stay aligned without slowing down. Real-time visibility with business context: Spend and usage data need to be transparent and visible as they happen -- tied to actual workloads, teams, and business goals. Not just cloud accounts and billing codes. Shift-left FinOps: Cost accountability can’t be a month-end task. When finance and engineering align during planning and development, governance becomes part of delivery -- not something bolted on after launch. This approach changes governance from something people avoid to something they rely on. Not a blocker; a foundation. Governance as a Strategic Advantage Done right, governance accelerates innovation. It gives teams confidence to move fast, scaling within a framework that protects the business. It connects technical decisions to business outcomes and ROI. The old model -- manual approvals, siloed oversight, static policy documents -- wasn’t built for this era of innovation. It created blind spots, and AI’s rapid acceleration only magnifies them. It’s imperative to embed AI governance into the systems, workflows, and infrastructure your teams already use. Make it automatic. Make it contextual. Make it native to how people build. Because when governance works that way -- when the right thing is also the easiest thing, the natural thing -- teams don’t resist it. They depend on it. And that’s when governance becomes strategic.0 Comments 0 Shares 16 Views
-
WWW.INFORMATIONWEEK.COMBest Practices for Managing Distributed Systems and AppsApplications and systems exist at the edge, in the data center, in a bevy of clouds, and in user departments. IT is called upon to manage all of them. But is this management doable, and how can IT approach it for the best results? First, let’s look at why enterprises use distributed systems and applications. Distributed systems and apps can deliver better performance when they are placed in the specific areas and departments of the company they serve. In such cases, they can use dedicated networks, storage and processing. If the systems are cloud-based, they can be readily scaled upward or downward in both cost and use, and there are cloud providers that maintain them. At the same time, these distributed assets become interrelated in areas of data, processing, and governance. Systems and applications must be centrally managed when they come together. Following are three challenges distributed systems and applications present for IT, and some ways IT can best address them. 1. Security and updates Challenge: The IT attack surface is expanding exponentially as enterprises augment centralized IT systems with edge and cloud-based systems and applications. This compounds in difficulty when edge and mobile devices come in the door with lax or nonexistent security settings, making them easy attack targets. The promulgation of edge networks, mobile devices and cloud-based systems also increases the need for IT to apply software security and patch updates in a timely and consistent way. Related:Best practice: It’s no longer enough to use a standard monitoring system to track network and user activities throughout the enterprise. Tools like identity access management (IAM) can track a user’s activities and permission clearances across internal IT assets, but they provide limited visibility of what might be going on in the cloud. Tools like cloud infrastructure entitlement management (CIEM) can microscopically track user activities and permissions in the cloud, but not on prem. Identity governance and administration (IGA) can bring together both IAM and CIEM under one software umbrella, but its focus is still on the user and what the user does. CIEM can’t track malware in an embedded software routine that activates, or any other anomaly that could arise as data is moved among systems. For this, observability software is needed. Observability tools can track every detail of what happens within each transaction as it moves through systems, and mobile device management (MDM) software can track the whereabouts of mobile devices. Meanwhile, security update software can be automated to push out software updates to all common computing platforms. Finally, there is the need to know when any addition, deletion or modification occurs to a network. Zero-trust networks are the best way to detect these changes. Related:The takeaway for IT is that it’s time to evaluate these different security and monitoring tools and defenses, and to create an architectural framework that identifies which tools are needed, how they fit with each other, and how they can end-to-end manage a distributed system and application environment. Best of breed IT departments are doing this today. 2. Data consistency Challenge: Data across the enterprise must be accurate and consistent if everyone is to use a single version of the truth. 
When accuracy and consistency measures fail, different department managers get disparate information, which generates dissonance and delays in corporate decision making. Most enterprises report issues with data accuracy, consistency, and synchronization, often brought on by disparate, distributed systems and applications. Best practice: The good news is that most organizations have installed tools such as ETL (extract, transform, and load) that have normalized and unified data that flows from variegated sources into data repositories. This has resulted in higher quality data for enterprise users. Related:Interestingly, a persistent problem when it comes to managing distributed systems is actually an “old school” problem. It’s how to manage intra-day batch and nightly batch processing. Let’s say a company is in the US but has remote manufacturing facilities in Brazil and Singapore. At some point, the finished goods, inventory, work in process, and cost information from all these systems must coalesce into a consolidated corporate “view” of the data. It is also understood that these various facilities operate in different time zones and on different schedules. Typically, you would batch together major system transaction updates during a normal nightly batch process, but nighttime in say, Philadelphia, is daytime in Singapore. When and how do you schedule the batch processing? It’s cost prohibitive and, in some cases, impossible to perform transaction updating of all data in real time, so central IT must decide how to update. Does it do periodic intra-day “data bursts” of transactions between distributed systems, and then night process the other batches of transactions that are in less dissimilar time zones? Which batch update processes will deliver the highest degree of timely and quality data to users? Optimized and orchestrated intra-day and nightly batch processing updates are a 60-year-old problem for IT. One reason it’s an old problem is that revising and streamlining batch processing schedules is one of IT’s least favorite projects. Nevertheless, best-of-class IT departments are paying attention to how and when they do their batch processing. Their ultimate goal is putting out the most useful, timely, highest quality information to users across the enterprise. 3. Waste management Challenge: With the growth of citizen IT, there are numerous applications, systems, servers, and cloud services that users have signed up for or installed, but that end up either seldom or never used. The same can be said for IT, given its history of shelfware and boneyards. In other cases, there are system and application overlaps or vendor contracts for unused services that self-renew, and that nobody pays attention to. This waste is exacerbated with distributed systems and applications that may not have a central point of control. Best practice: More IT departments are using IT asset management software and zero-trust networks to identify and track usage of IT assets across the enterprise and in the cloud. This helps them identify unused, seldom used or replicated assets that should be sunset or removed, with a corresponding cost savings. Vendor contract management is a more complicated issue, because it is possible that individual user departments have contracts for IT products and services that IT may not know about. In this case, the matter should be raised to upper management. 
One possible solution is to have IT or an internal contract management or audit group go out to various departments in the enterprise to collect and review contracts. Inevitably, some contracts will be found missing. In these cases, the vendor should be contacted so a copy of the contract can be obtained. The cost savings goal is to eliminate all services and products that the company isn’t actively using. Final Remarks Managing IT in a highly distributed array of physical and virtual facilities is a significant challenge for IT, but there are tools and methods that are fit for the task. In some cases, such as batch processing, even an old school approach to batch management can work. In other cases, the tools for managing security, user access and data consistency are already in-house. They just have to be orchestrated into an overall architecture that everyone in IT can understand and work toward. Because the tools and methods for distributed system and application management so clearly fall within IT’s wheelhouse, it’s the job of the CIO to educate others about IT’s security, governance, data consistency, and management needs to reduce corporate risk.0 Comments 0 Shares 21 Views
-
WWW.INFORMATIONWEEK.COMWill New HHS Leadership Lead to HIPAA Changes?John Edwards, Technology Journalist & AuthorMay 2, 20255 Min Readpalatiaphoto via Alamy Stock PhotoAlmost 30 years ago, the Health Insurance Portability and Accountability Act of 1996 went into effect to protect the use and disclosure of personal health information. But with a new regime in town, companies are watching closely to see what changes could be in the works under US Department of Health and Human Services (HSS) Secretary Robert F. Kennedy, Jr.HIPAA's primary goal is assuring that individuals' health information is properly protected, while allowing the flow of health information needed to provide high-quality healthcare to remain safe and securely accessible. The act strikes a balance that permits important uses of patient information while protecting the privacy of people who seek care. Kennedy became HHS secretary in February and is responsible for administering and overseeing all HHS programs, operating divisions, and activities. Kennedy has yet to make any formal announcements about HIPAA's future course, but that hasn't stopped healthcare industry observers from speculating about possible future moves, especially as the agency plans to cut as many as 20,000 jobs as part of the Trump Administration’s efficiency efforts.Early Signs of Changes to Come?So far, no communication has come from HHS about HIPAA specifically, says John Zimmerer, vice president, healthcare, for wireless services provider Smart Communications. "Secretary Kennedy has put the agency's initial focus on understanding the causes of and improving the treatment of chronic diseases, as part of his 'Make America Healthy Again' movement," he observes in an email interview. Related:Nonetheless, a few policy announcements could impact HIPAA specifically and health privacy in general, Zimmerer says. Most importantly, HHS has reversed a policy regarding the federal rulemaking process that requires getting input from the public."Previously, HHS would notify the public about proposed rules and seek input on proposals before finalizing them," he explains. "By rescinding the Richardson Waiver at the end of February, that appears to no longer be the case." The waiver guaranteeing public participation in federal rulemaking has been in use since 1971, but following Kennedy’s announcement in February, exemptions for public input could be won more easily.In late December, prior to the new administration and Kennedy's appointment, HHS issued a Notice of Proposed Rulemaking (NPRM) to modify the HIPAA Security Rule "to strengthen cybersecurity protections for electronic protected health information (ePHI)." Public comments were filed by March 7 and currently are being considered.Related:Industry groups sent President Trump and Kennedy a letter asking them to rescind updates to the HIPAA security rule. Zimmerer says it's unclear what the outcome of the proposed rule changes will be.David White, president of Axio, a cyber risk management provider, believes the healthcare industry is facing a crisis it's not prepared for. "The proposed updates to the HIPAA Security Rule are a direct response to a problem that’s been growing unchecked for years," he warns in an online interview."Healthcare organizations aren't prepared for the sophistication or scale of today’s cyber threats," White says. "While compliance frameworks like HIPAA set a foundation, they have historically been reactive, evolving only after a crisis." 
He points to the recent Change Healthcare breach in February as the latest example of how fragile the current system really is.Making Changes "Considering his libertarian leanings, and that the process to update HIPAA actually started during the first Trump administration, I suspect that Secretary Kennedy would be in favor of strengthening privacy protections," Zimmerer says. Under the proposed HIPAA Security rules, healthcare organizations would be held to a higher standard of cybersecurity, unless the final rules are changed. New HHS leaders will probably promote more robust HIPAA protections, particularly regarding online health data and patient privacy, says Bill Hall, CEO of OurRecords, a provider of compliance and quality-assurance offerings for businesses in highly regulated industries. He anticipates the arrival of AI-powered tools and deeper regulations on companies' collection, storage, and data sharing.Related:"Patients will probably get more control over their information, and businesses will face tougher compliance standards," Hall says in an online interview. The upcoming changes will affect marketers, insurers, hospitals, and entrepreneurs, he adds. "Consumers will gain more privacy protection, but companies will have to change," he predicts. The hardest aspect will be maintaining security without stifling tech innovation. "If the rules are clear and practical, they will help build trust in digital health without slowing progress.Cybersecurity Mandates Needed Stronger mandates are necessary, but they shouldn't be viewed as a silver bullet, White warns. Cybersecurity isn't about checking boxes -- it's about understanding the full attack surface. "Threat actors don't care whether an organization is a covered entity or a business associate -- they exploit the weakest link. That’s why these regulations finally address third-party risk, requiring vendors to verify their security controls annually," he states. Yet, even with new requirements, many healthcare organizations will still find themselves playing catch-up. Implementation will come through updated regulations, more enforcement actions, and possibly new guidance for healthcare providers and tech companies, Hall says. "HHS can [also] tighten restrictions on data sharing with third parties, increase audits, and fortify consent regulations," he observes. "Businesses handling health data -- whether in healthcare, insurance, or IT -- must evaluate their processes to ensure compliance." Going Beyond Compliance Compliance should be the floor -- not the ceiling, White says. "Organizations need to go beyond what's required by focusing on continuous risk analysis, rapid response capabilities, and a security culture that prioritizes resilience," he advises. "Because in healthcare, a cyberattack isn’t just an IT issue -- it’s a patient safety crisis waiting to happen." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. 
Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 55 Views
-
WWW.INFORMATIONWEEK.COMHow to Choose the Right LLMLisa Morgan, Freelance WriterMay 2, 202511 Min Readformatoriginal via Alamy StockMany enterprises are realizing impressive productivity gains from large language models, but some are struggling with their choices because the compute is expensive, there are issues with the training data, or they’re chasing the latest and greatest LLM based on performance. CIOs are now feeling the pain. “One of the most common mistakes companies make is failing to align the LLM selection with their specific business objectives. Many organizations get caught up in the hype of the latest technology without considering how it will serve their unique use cases,” says Beatriz Sanz Saiz, global AI sector leader at global professional services organization EY. “Additionally, overlooking the importance of data quality and relevance can lead to suboptimal performance. Companies often underestimate the complexity of integrating LLMs into existing systems, which can create significant challenges down the line.”The consequences of such mistakes can be profound. Choosing an LLM that doesn’t fit the intended use case can result in wasted resources. It may also lead to poor user experience, as the model may not perform as expected. Ultimately, this can damage trust in AI initiatives within the organization and hinder the broader adoption of AI technologies. Related:“Companies may find themselves in a position where they need to re-evaluate their choices and start over, which can be both costly and demoralizing. The best approach is to start with a clear understanding of your business objectives and the specific problems you aim to solve,” says Saiz. “Conducting thorough research on available LLMs, with comprehensive analysis of their strengths and weaknesses is crucial.” She also recommends engaging with stakeholders across the organization because they can provide valuable insights into the requirements and expectations. Additionally, enterprises should be running pilot programs with a few selected models that can help evaluate their performance in real-world scenarios before making a full commitment. “A key consideration is whether you need a generalist LLM, a domain-specific language model (DSLM), or a hybrid approach. DSLMs, which are becoming more common in sectors like indirect tax or insurance underwriting, offer greater accuracy and efficiency for specialized tasks,” says Saiz. Beatriz Sanz Saiz, EYRegardless, the chosen model should be able to scale as the organization’s needs evolve. It’s also important to evaluate how the LLM adheres to relevant regulations and ethical standards. “My best advice is to approach LLM selection with a strategic mindset. Don’t rush the process. Take the time to understand your needs and the capabilities of the models available,” says Saiz. “Collaborate with cross-functional teams to gather diverse perspectives and insights. Lastly, maintain a commitment to continuous learning and adaptation. The AI landscape is rapidly evolving, and staying informed about new developments will empower your organization to make the best choices moving forward.” Related:It's also important not to get caught up in the latest benchmarks because it tends to skew perspectives and results. “Companies that obsess over benchmarks or the latest release risk overlooking what really matters for scale over experimentation. 
Benchmarks are obviously important, but the real test is how well an LLM fits in with your existing infrastructure so that you can tailor it to your use case using your own proprietary data or prompts,” says Kelly Uphoff, CTO of global financial infrastructure company Tala. “If a company is only focused on baseline performance, they might struggle to scale later for their specific use case. The real value comes from finding a model that can evolve with your existing infrastructure and data.” Clearly Define the Use Case Related:Maitreya Natu, senior scientist at AIOps solution provider Digitate, warns that choosing the right large language model is a tough decision as it impacts the company’s entire AI initiatives. “One of the most common missteps is selecting an LLM without clearly defining the use case. Organizations often start with a model and then try to fit it into their workflow rather than beginning with the problem and identifying the best AI to solve it,” says Natu. “This leads to inefficiencies, where businesses either overinvest in large, expensive models for simple tasks or deploy generic models that lack domain specificity.” Another frequent mistake is relying entirely on off-the-shelf models without fine-tuning them for industry-specific needs. Organizations are also falling short when it comes to security. Many companies use LLMs without fully understanding how their data is being processed, stored or used for retraining. “The consequences of these mistakes can be significant, resulting in irrelevant insights, wasted costs or security lapses,” says Natu. “Using a large model unnecessarily drives up computational expenses, while an underpowered model will require frequent human intervention, negating the automation benefits. To avoid these pitfalls, organizations should start with a clear understanding of their objectives.” Naveen Kumar Ramakrishna, principal software engineer at Dell Technologies, says common pitfalls include prioritizing the LLM hype over practical needs, neglecting key factors and underestimating the data and integration challenges. “There’s so much buzz around LLMs that companies jump in without fully understanding whether they actually need one,” says Ramakrishna. “Sometimes, a much simpler approach, like a rule-based system or a lightweight ML model, could solve the problem more efficiently. But people get excited about AI, and suddenly everything becomes an LLM use case, even when it’s overkill.” Companies often forget to take things like cost, latency, and model size into account. “I’ve seen situations where simpler tools could’ve saved a ton of time and resources, but people went straight for the flashiest solution,” says Ramakrishna. “They also underestimate the data and integration challenges. Companies often don’t have a clear understanding of their own data quality, size and how it moves through their systems. Integration challenges, platform compatibility and deployment logistics often get discovered way too late in the process, and by then it’s a mess to untangle. I’ve seen [a late decision on a platform] slow projects down so much that some never even make it to production.” Those situations are particularly dire when the C-suite is demanding dollar value ROI proof. “When the wrong model is chosen, projects often get dropped halfway through development. Sometimes they make it to user testing, but then poor performance or usability issues surface and the whole thing just falls apart,” says Ramakrishna. 
“Other times, there’s this rush to get something into production without proper validation, and that’s a recipe for failure.” Performance issues and user dissatisfaction are common. If the model’s too slow or the results aren’t accurate, end-users will lose trust and stop using the system. When an LLM gives inaccurate or incomplete results, users tend to keep re-prompting or asking more follow-up questions. That drives up the number of transactions, increasing the load on the infrastructure. It also results in higher costs without improving the outcomes. “Cost often takes a backseat at first because companies are willing to invest heavily in AI, but when the results don’t justify the expense, that changes,” says Ramakrishna. “For example, a year ago at [Dell], pretty much anyone could access our internally hosted models. But now, because of rising costs and traffic issues, getting access even to base models has become a challenge. That’s a clear sign of how quickly things can get unsustainable.” How To Choose the Right Model Like with anything tech, it’s important to define the business problems and desired outcomes before choosing an LLM. “It’s surprising how often the problem isn’t well-defined, or the expected outcomes aren’t clear. Without that foundation, it’s almost impossible to choose the right model and you end up building for the wrong goals,” says Dell’s Ramakrishna. “The right model depends on your timelines, the complexity of the task and the resources available. If speed to market is critical and the task is straightforward, an out-of-the-box model makes sense. But for more nuanced use cases, where long-term accuracy and customization matter, fine-tuning a model could be worth the effort.” Some of the criteria organizations should consider are performance, scalability, and total cost of ownership (TCO). Also, because LLMs are becoming increasingly commoditized, open-source models may be the best option because they provide more control over customization, deployment, and cost. They also help to avoid vendor lock-in. Data quality, privacy and security are also tantamount. Naveen Kumar Ramakrishna, Dell“[Data privacy and security are] non-negotiable. No company wants sensitive data leaving its environment, which is why on-premises deployments or private hosting options are often the safest bet”, says Dell’s Ramakrishna. “Bigger models aren’t always better. Choose the smallest model that meets your needs [because] it’ll save on costs and improve performance without sacrificing quality. Start small and scale thoughtfully [as] it’s tempting to go big right away, but you’ll learn much more by starting with a small, well-defined use case. Prove value first, then scale.” Max Belov, chief technology officer at digital product engineering company Coherent Solutions, says in addition to aligning the model with the use case, one should also consider how much to customize the model. “Some models excel at conversational AI, such as chatbots and virtual assistants [while] others are better for content creation. There are also multi-modal models that can handle text, images and code,” says Belov. “Models like OpenAI's GPT-4, Cohere's Command R, and Anthropic's Claude v3.5 Sonnet support cloud APIs and offer easy integration with existing systems. [They also] provide enough scalability to meet evolving business needs. These platforms provide enhanced security, compliance controls, and the ability to integrate LLMs into private cloud environments. 
Models like Meta's LLaMA 2 and 3, Google's Gemma and Mistral [AI LLMs] can be set up and customized in different environments, depending on specific business needs. Running an LLM on-premises offers the highest level of data control and security but requires a license.” While on-premises solutions offer greater control and security, they also require dedicated infrastructure and maintenance. “Be watchful about cybersecurity since you share sensitive data with a third-party provider using LLMs. Cloud-based models might pose higher data privacy and control risks,” says Belov. “LLMs work better for multi-step tasks, such as open-ended reasoning tasks, situations where world knowledge is needed, or unstructured and novel problems. AI applications for business in general, and LLMs in particular, don't have to be revolutionary -- they need to be practical. Establish realistic goals and evaluate where AI can enhance your business processes. Identify who and at what scale will use LLM capabilities and how will measure the success of implementing an LLM. Build your AI-driven solution iteratively with ongoing optimization.” Ken Ringdahl, chief technology officer at spend management SaaS firm Emburse says managing costs of LLMs is an acquired skill, like moving to cloud. “The use of an LLM is very similar and many are learning as they go that costs can quickly rise based on actual usage and usage patterns,” says Ringdahl. “Test as many LLMs as realistically possible within your given timeline to see which model performs the best for your specific use case. Be sure the model is well documented and understand each model's specific prompting requirements for certain tasks. Specifically, use methods like zero, one and few shot prompting to see which model consistently provides the best results.” [To] control costs, he believes organizations should understand both current and future use cases along with their usage and growth patterns,” “The larger the model size, the larger and more expensive serving the model becomes due to computational resources required. For third-party LLMs, be sure that you understand token costs,” says Ringdahl. “To ensure the highest levels of data privacy, understand and be sensitive regarding the data no matter if internal or external LLMs. Remove personal or private information that could lead to individuals. For third-party systems especially, be sure to read through the privacy policy thoroughly and understand how the organization uses the data you feed it.” About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 67 Views
-
WWW.INFORMATIONWEEK.COMConfidential Computing: CIOs Move to Secure Data in UseNathan Eddy, Freelance WriterMay 1, 20254 Min ReadBrain light via Alamy StockAs cyber threats grow more sophisticated and data privacy regulations grow sharper teeth, chief CIOs are under increasing pressure to secure enterprise data at every stage -- at rest, in motion, and now, increasingly, in use. Confidential computing, a technology that protects data while it is being processed, is becoming an essential component of enterprise security strategies. While the promise is clear, the path to implementation is complex and demands strategic coordination across business, IT, and compliance teams. Itai Schwartz, co-founder and CTO at Mind, explains confidential computing enables secure data processing even in decentralized environments, which is particularly important for AI workloads and collaborative applications. “Remote attestation capabilities further support a zero-trust approach by allowing systems to verify the integrity of workloads before granting access,” he says via email. CIOs Turning to Confidential Computing At its core, confidential computing uses trusted execution environments (TEEs) to isolate sensitive workloads from the broader computing environment. This ensures that sensitive data remains encrypted even while in use -- something traditional security methods cannot fully achieve. “CIOs should treat confidential computing as an augmentation of their existing security stack, not a replacement,” says Heath Renfrow, CISO and co-founder at Fenix24. Related:He says a balanced approach enables CIOs to enhance security posture while meeting regulatory requirements, without sacrificing business continuity. The technology is especially valuable in sectors like finance, healthcare, and the public sector, where regulatory compliance and secure multi-party data collaboration are top priorities. Confidential computing is particularly valuable in industries handling highly sensitive data, explains Anant Adya, executive vice president and head of Americas at Infosys. “It enables secure collaboration without exposing raw data, helping banks detect fraud across institutions while preserving privacy,” he explains via email. Implementation Without Disruption Despite its potential, implementing confidential computing can be disruptive if not handled carefully. This means CIOs must start with a phased and layered strategy. “Begin by identifying the most sensitive workloads, such as those involving regulated data or cross-border collaboration, and isolate them within TEEs,” Renfrow says. “Then integrate confidential computing with existing IAM, DLP, and encryption frameworks to reduce operational friction.” Related:Adya echoes that sentiment, noting organizations can integrate confidential computing by adopting a phased approach that aligns with their existing security architecture. He recommends starting with high-risk workloads like financial transactions or health data before expanding deployment. Schwartz emphasizes the importance of setting long-term expectations for deployment. “Introducing confidential computing is a big change for organizations,” he says. “A common approach is to define a policy where every new data-sensitive component will be created using confidential computing, and existing components will be migrated over time.” Jason Soroko, senior fellow at Sectigo, stresses the importance of integrating confidential computing into the broader enterprise architecture. 
“CIOs should consider the value of separating ‘user space’ from a ‘secure space,’” he says. Enclaves are ideal for storing secrets like PKI key pairs and digital certificates, allowing sensitive workloads to be isolated from their authentication functions. Addressing Performance and Scalability One of the main challenges CIOs face when deploying confidential computing is performance overhead. TEEs can introduce latency and may not scale easily without optimization. Related:“To address performance and scalability while maintaining business value, CIOs can prioritize high-impact workloads,” Renfrow says. “Focus TEEs on workloads with the highest confidentiality requirements, like financial modeling or AI/ML pipelines that rely on sensitive data.” Adya suggests keeping fewer sensitive computations outside TEEs to reduce the load. “Offload only the most sensitive computations, and leverage hardware acceleration and cloud-managed confidential computing services to improve efficiency,” he recommends. Soroko adds that hardware selection is critical, suggesting CIOs should be choosing TEE hardware that has an appropriate level of acceleration. “Combine TEEs with hybrid cryptographic techniques like homomorphic encryption to reduce overhead while maintaining data security,” he says. For scalability, Renfrow recommends infrastructure automation, for example adopting infrastructure-as-code and DevSecOps pipelines to dynamically provision TEE resources as needed. “This improves scalability while maintaining security controls,” he says. Aligning with Zero Trust and Compliance Confidential computing also supports zero-trust architecture by enforcing the principle of “never trust, always verify.” TEEs and remote attestation create a secure foundation for workload verification, especially in decentralized or cloud-native environments. “Confidential computing extends zero-trust into the data application layer,” Schwartz says. “This is a powerful way to ensure that sensitive operations are only performed under verified conditions.” Compliance is another major driver for adoption, with regulations such as GDPR, HIPAA, and CPRA increasingly demand data protection throughout the entire lifecycle -- including while data is in use. The growing list of regulations and compliance issues will require CIOs to demonstrate stronger safeguards during audits. “Map confidential computing capabilities directly to emerging data privacy regulations,” Renfrow says. “This approach can reduce audit complexity and strengthen the enterprise’s overall compliance posture.” Adya stresses the value of collaboration across internal teams, pointing out successful deployment requires coordination between IT security, cloud architects, data governance leaders, and compliance officers. As confidential computing matures, CIOs will play a pivotal role in shaping how enterprises adopt and scale the technology. For organizations handling large volumes of sensitive data or operating under stringent regulatory environments, confidential computing is no longer a fringe solution -- it’s becoming foundational. Success will depend on CIOs guiding adoption through a focus on integration, continuous collaboration across their enterprise, and by aligning security strategies with business objectives. “By aligning confidential computing with measurable outcomes -- like reduced risk exposure, faster partner onboarding, or simplified audit readiness -- CIOs can clearly demonstrate its business value,” Renfrow says. 
About the AuthorNathan EddyFreelance WriterNathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.See more from Nathan EddyReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 68 Views
-
WWW.INFORMATIONWEEK.COMCISOs Take Note: Is Needless Cybersecurity Strangling Your Business?John Edwards, Technology Journalist & AuthorMay 1, 20254 Min Readnarong yuenyongkanokkul via Alamy Stock PhotoThere can never be too much cybersecurity, right? Wrong, says Jason Keirstead, vice president of security strategy at AI security developer Simbian. "Cybersecurity is not always a place where more is better," he observes in an online interview. "Having redundant tools in your security stack, duplicating functions, can create increased churn and workloads, causing security operations center analysts to deal with superfluous, unnecessary alerts and information." The problem can grow even more serious if a tool is redundant because it's outdated. "In that scenario, the outdated tool might not be keeping pace with the latest tactics and techniques being used by adversaries, causing blind spots," Keirstead warns. Additionally, outdated tools can directly affect employees, hampering organizational productivity. Aaron Shilts, president and CEO of security technology firm NetSPI, agrees. "For IT and security teams, redundant and obsolete security tools or measures increase workflows, hurt efficiency, and extend incident response and patch time," he explains via email. "When there's excessive or ineffective tools in the security stack, teams waste valuable time sifting through redundant and low-value alerts, hampering them from focusing on real threats." Related:Obsolete security tools can also falsely flag safe behaviors or, worse yet, not flag unsafe ones, says Sourya Biswas, technical director, risk management and governance, at security consulting firm NCC Group. "The world of security is ever-changing, and attackers with their dynamic tactics, techniques, and procedures need to be countered with up-to-date information and tooling," he states in an online interview. Additionally, even best-of-breed tools can cause harm when used incorrectly. "Some organizations spend money buying the best security tools the market has to offer, but not on deploying them optimally, such as by fine-tuning alert rules for their specific environments." Other organizations may add tools that perform a duplicate function, resulting in inefficiencies. "In time, when business sees security is not delivering the intended results, the buy-in collapses and the security posture degrades." Prime Offenders Most obsolete or redundant tools reside in the detection space, Keirstead says. A prime example is endpoint security agents. "Some enterprises have up to three or four different security tools deployed on the endpoint, each one consuming resources and reducing employee productivity," he notes. Additionally, excessive security controls, such as overly intrusive multi-factor authentication, can create employee friction, slowing down and challenging collaboration with partners, vendors, and customers, Shilts says. "This often results in employees finding workarounds, such as using their personal emails, which introduces security risks that are difficult to track and manage." Related:Another headache are firewalls or security gateways offering features, such as IPS/IDS capabilities, that overlap with other tools but may not be able to perform the task as well as a purpose-built system, says Erich Kron, security awareness advocate for KnowBe4, a security training firm. Unified threat management (UTM) devices, for example, can be great for small or medium businesses, but tend to be far less scalable than purpose-built equipment. 
"Larger organizations with complex networks and higher bandwidth throughput, or more stringent security needs, may find themselves in a situation where these all-in-one devices can't keep up with the demand, or fail to perform as needed," he observes in an online interview. Weed Control Conducting occasional audits of network equipment and the capabilities they provide, along with their limitations, can help organizations avoid unpleasant surprises created by overcomplicated configurations, underpowered devices, or outdated gear, Kron says. "Many organizations fail to audit their network devices networks on a regular basis, feeling that the effort required may not be worth the rewards," he observes. "However, when organizations do take this step, they often find devices they weren't aware of, or are vulnerable, on the network." Related:In general, an organizational security posture, including tools and procedures, should be assessed annually or even earlier if a major change is implemented, Biswas says. Ideally, to prevent conflicts of interest, such assessments should be performed by independent, expert third parties. "After all, it’s difficult for an implementor or operator to be a truly impartial assessor of their own work," he explains. "While some organizations may be able to do so via internal audit, for most it makes sense to hire an outsider to play devil’s advocate." "Having good relationships with your vendors can be very helpful when trying to make sense of new or improved capabilities, old or outdated equipment, or potential incompatibilities,” Kron says. "A good sales engineer will have the experience and knowledge to point out potential issues before they get out of hand, and a good vendor will be willing to help organizations manage the world of security devices." Keeping Pace Security tooling is not the problem -- misalignment between tools and business needs is, Shilts says. "A well-implemented security strategy supports the pace of development rather than hindering it," he explains. "By carefully selecting, configuring, and integrating tools, organizations can enhance security without sacrificing speed or efficiency." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 73 Views
-
WWW.INFORMATIONWEEK.COMThe CIO's Guide to Managing Agentic AI SystemsAs chief information officers, you've likely spent the past few years integrating various forms of artificial intelligence into your enterprise architecture. Perhaps you've implemented machine learning models for predictive analytics, deployed large language models (LLMs) for content generation, or automated routine processes with robotic process automation (RPA). But a fundamental shift is underway that will transform how we think about AI governance: the emergence of AI agents with autonomous decision-making capabilities. The Evolution of AI: From Robotic to Decision-Making The AI landscape has evolved through distinct phases, each progressively automating more complex cognitive labor: Robotic AI: Expert systems, RPAs, and workflow tools that follow rigid, predefined rules Suggestive AI: Machine learning and deep learning systems that provide recommendations based on patterns Instructive AI: Large language models that generate content and insights based on prompts Decision-making AI: Autonomous agents that take action based on their understanding of environments This most recent phase, AI agents with decision-making authority, introduces governance challenges of an entirely different magnitude. Understanding AI Agents: Architecture and Agency Related:At their core, AI agents are systems conferred with agency, the capacity to act independently in a given environment. Their architecture typically includes: Reasoning capabilities: Processing multi-modal information to plan activities Memory systems: Persisting short-term or long-term information from the environment Tool integration: Accessing backend systems to orchestrate workflows and effect change Reflection mechanisms: Assessing performance pre/post-action for self-improvement Action generators: Creating instructions for actions based on requests and environmental context The critical difference between agents and previous AI systems lies in their agency. This is either explicitly provided through access to tools and resources or implicitly coded through roles and responsibilities. The Autonomy Spectrum: A Lesson from Self-Driving Cars The concept of varying levels of agency is well-illustrated by the autonomy classification used for self-driving vehicles: Level 0: No autonomous features Level 1: Single automated tasks (e.g., automatic braking) Level 2: Multiple automated functions working in concert Level 3: "Dynamic driving tasks" with potential human intervention Level 4: Fully driverless operation in certain environments Related:Level 5: Complete autonomy without human presence This framework provides a useful mental model for CIOs considering how much agency to grant AI systems within their organizations. The AI Agency Trade-Off: Opportunities vs Risks Setting the appropriate level of agency is the key governance challenge facing technology leaders. It requires balancing two opposing forces: Higher agency creates greater possibilities for optimal solutions, compared to lower agency when the AI agent is reduced to a mere RPA solution. Higher agency increases the probability of unintended consequences This isn't merely theoretical. Even simple AI agents with limited agency can cause significant disruption if governance controls aren't properly calibrated. As Thomas Jefferson aptly noted, "The price of freedom is eternal vigilance." This applies equally to AI agents with decision-making freedom in your enterprise systems. 
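The architecture outlined above can be pictured as a simple control loop: reason about a request, act through a tool, persist the outcome to memory, then reflect on the result. The Python sketch below is a deliberately minimal illustration of that loop; the class, method, and tool names are invented for this example and are not drawn from any specific agent framework.

```python
class MinimalAgent:
    """Illustrative skeleton of the agent components described above."""

    def __init__(self, tools):
        self.tools = tools      # tool integration: callables the agent is allowed to invoke
        self.memory = []        # memory system: persisted observations and outcomes

    def reason(self, request):
        # Reasoning capability: choose a tool and arguments for the request.
        # A real agent would use an LLM or planner here; this is a stand-in rule.
        tool_name = "lookup" if "status" in request else "ticket"
        return tool_name, {"query": request}

    def act(self, request):
        tool_name, args = self.reason(request)              # plan
        result = self.tools[tool_name](**args)              # action generation + tool call
        self.memory.append((request, tool_name, result))    # persist to memory
        self.reflect(result)                                # reflection mechanism
        return result

    def reflect(self, result):
        # Reflection: assess the outcome and note anything to do differently next time.
        if result.get("error"):
            self.memory.append(("note", "last action failed; escalate next time"))

# Usage with hypothetical tools
agent = MinimalAgent(tools={
    "lookup": lambda query: {"answer": f"status OK for '{query}'"},
    "ticket": lambda query: {"ticket_id": 1234, "error": None},
})
print(agent.act("what is the status of the payroll job?"))
```

Even at this toy scale, the agency question is visible: what the agent can affect is bounded entirely by the tools handed to it, which is exactly where governance controls attach.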
The Fantasia Parable: A Warning for Modern CIOs Disney's "Fantasia" offers a surprisingly relevant cautionary tale for today's AI governance challenges. In the film, Mickey Mouse enchants a broom to fill buckets with water. Without proper constraints, the broom multiplies endlessly, flooding the workshop in a cascading disaster. Related:This allegorical scenario mirrors the risk of deployed AI agents: they follow their programming without comprehension of consequences, potentially creating cascading effects beyond human control. Looking to the real world and modern times, last year Air Canada's chatbot provided incorrect information about bereavement fares, leading to a lawsuit. Air Canada initially tried to defend itself by claiming the chatbot was a "separate legal entity," but was ultimately held responsible. Also, Tesla experienced several AI-driven autopilot incidents where the system failed to recognize obstacles or misinterpreted road conditions, leading to accidents. The Alignment Problem: Five Critical Risk Categories Alignment -- ensuring AI systems act in accordance with human intentions -- becomes increasingly difficult as agency increases. CIOs must address five interconnected risk categories: Negative side effects: Preventing agents from causing collateral damage while fulfilling tasks Reward hacking: Ensuring agents don't manipulate their internal reward functions Scalable oversight: Monitoring agent behavior without prohibitive costs Safe exploration: Allowing agents to make exploratory moves without damaging systems Distributional shift robustness: Maintaining optimal behavior as environments evolve There is currently a lot of promising work being done by researchers to address alignment challenges that involves algorithms, machine learning frameworks, and tools for data augmentation and adversarial training. Some of these include constrained optimization, inverse reward design, robust generalization, interpretable AI, reinforcement learning from human feedback (RLHF), contrastive fine-tuning (CFT), and synthetic data approaches. The goal is to create AI systems that are better aligned with human values and intentions, requiring ongoing human oversight and refinement of the techniques as AI capabilities advance. Solving the Trade-Off: A Framework for Engendering Trust in AI To capitalize on the transformative potential of agentic AI while mitigating risks, CIOs must enhance their organization's people, processes, and tools: People Re-skill the workforce to appropriately calibrate AI agency levels Redesign organizational structures and metrics to accommodate an agentic workforce. Agents are capable of more advanced workflows, so human capital can progress to higher-value roles. Identifying this early will save companies time and money. 
Develop new roles focused on agent oversight and governance Processes Map enterprise functions where AI agents can be deployed, with appropriate agency levels Establish governance controls and risk appetites across departments Implement continuous monitoring protocols with clear escalation paths Create sandbox environments for safe testing of increasingly autonomous systems Tools Deploy "governance agents" that monitor enterprise agents Implement real-time analytics for agent behavior patterns Develop automated circuit breakers that can suspend agent activities Build comprehensive audit trails of agent decisions and actions The Governance Imperative: Why CIOs Must Act Now The shift from suggestion-based AI to agentic AI represents a quantum leap in complexity. Unlike LLMs that merely offer recommendations for human consideration, agents execute workflows in real-time, often without direct oversight. This fundamental difference demands an evolution in governance strategies. If AI governance doesn't evolve at the speed of AI capabilities, organizations risk creating systems that operate beyond their ability to control. Governance solutions for the agentic era should have the following capabilities: Visual dashboards: Providing real-time updates on AI systems across the enterprise, their health and status for quick assessments. Health and risk score metrics: Implementing intuitive overall health and risk scores for AI models to simplify monitoring for both availability and assurance purposes. Automated monitoring: Employing systems for automatic detection of bias, drift, performance issues, and anomalies. Performance alerts: Setting up alerts for when models deviate from predefined performance parameters. Custom business metrics: Defining metrics aligned with organizational KPIs, ROI, and other thresholds. Audit trails: Maintaining easily accessible logs for accountability, security, and decision review. Conclusion: Navigating the Agency Frontier As CIOs, your challenge is to harness the transformative potential of AI agents while implementing governance frameworks robust enough to prevent the Fantasia scenario. This requires: A clear understanding of agency levels appropriate for different enterprise functions Governance structures that scale with increasing agent autonomy Technical safeguards that prevent cascading failures Organizational adaptations that enable effective human-agent collaboration The organizations that thrive in the agentic AI era will be those that strike the optimal balance between agency and governance -- empowering AI systems to drive innovation while maintaining appropriate human oversight. Those that ignore this governance imperative may find themselves, like Mickey Mouse, watching helplessly as their creations take on unintended lives of their own.0 Comments 0 Shares 108 Views
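The "automated circuit breakers" and audit trails listed under Tools above can be prototyped with very little code. Here is a minimal sketch, assuming a wrapper sits between an agent and the systems it touches; the thresholds, class name, and agent identifier are hypothetical, and a production version would persist the audit log durably and feed it into alerting.

```python
import time
from collections import deque

class CircuitBreaker:
    """Suspends an agent whose recent actions look anomalous; keeps an audit trail."""

    def __init__(self, max_actions_per_minute=30, max_error_rate=0.2, window=50):
        self.max_rate = max_actions_per_minute
        self.max_error_rate = max_error_rate
        self.recent = deque(maxlen=window)   # (timestamp, succeeded) pairs
        self.audit_log = []                  # would be a durable store in production
        self.tripped = False

    def allow(self, agent_id, action):
        now = time.time()
        per_minute = sum(1 for ts, _ in self.recent if now - ts < 60)
        errors = sum(1 for _, ok in self.recent if not ok)
        error_rate = errors / len(self.recent) if self.recent else 0.0
        if per_minute >= self.max_rate or error_rate > self.max_error_rate:
            self.tripped = True               # suspend further agent activity
        self.audit_log.append({"ts": now, "agent": agent_id,
                               "action": action, "allowed": not self.tripped})
        return not self.tripped

    def record(self, succeeded):
        # Called after the real action completes, so the breaker sees outcomes too.
        self.recent.append((time.time(), succeeded))

breaker = CircuitBreaker()
if breaker.allow("procurement-agent", "issue_purchase_order"):
    breaker.record(succeeded=True)   # in practice, the outcome of the real action
```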
-
WWW.INFORMATIONWEEK.COM Preparing Your Tech Business for a Possible Recession. John Edwards, Technology Journalist & Author. April 30, 2025. 4 Min Read. Over the past weeks, the odds of a recession increased significantly. While financial experts are divided on the likelihood of a recession happening anytime soon, prudent C-suite leaders are already taking tentative steps designed to help their enterprise survive and perhaps even prosper during an economic downturn. To prepare for a recession, tech companies should focus on cutting unnecessary costs without compromising innovation, advises Trevor Young, chief product officer at cybersecurity firm Security Compass. "This means automating processes, streamlining operations, and using resources like cloud services more efficiently," he says in an online interview. Young also suggests diversifying revenue streams. "Don’t put all of your eggs in one basket." Max Shak, CEO of team services firm Zapiy.com, stresses the importance of driving innovation during economic downturns. "Companies that maintain or even accelerate their innovation efforts during a recession can emerge stronger when the economy recovers," he says in an online discussion. "The key is to focus on products or services that meet the evolving needs of customers, whether that’s in response to changing market conditions or advancements in technology." Most Vulnerable Some tech businesses are more vulnerable to a recession than others, Shak says. Startups and smaller tech firms that are reliant on venture capital funding, or have limited cash reserves, are often at greater risk, he observes. "These businesses may struggle to secure necessary funding during a recession, and their growth could stall if their investors tighten their belts." Tech firms that depend heavily on consumer discretionary spending are also likely to suffer during an economic downturn, says Rose Jimenez, CFO at culture.org, a cultural news platform. "Think e-commerce platforms selling non-essential goods or subscription services that people tend to cancel first when tightening their budgets," she states in an email interview. Also endangered are ad-supported tech companies, Jimenez says. She notes that smaller social media or content platforms are particularly vulnerable, since advertising is often one of the first areas cut during economic uncertainty. "Startups that are still pre-revenue or burning through capital fast without a clear path to profitability are also at risk, especially if they're relying on new funding rounds," Jimenez adds. "When capital tightens, investors tend to get cautious, and that can put a strain on early-stage companies without strong fundamentals." Firms that market non-essential products or services, such as luxury tech or expensive software solutions, are likely to suffer, Young says. Startups that aren't well-funded or companies that depend heavily on outside investment can also be at risk, as investors might pull back in tough times, he states. Especially vulnerable during a downturn are businesses that rely heavily on enterprise clients with long sales cycles, particularly those in sectors such as B2B, SaaS, or enterprise software solutions. When a recession hits or even when economic uncertainty rises, large corporations tend to slow down their purchasing decisions, says Wes Lewins, CFO at financial advisory firm Networth.
"Budget approvals take longer, IT investments get delayed, and non-essential upgrades are put on hold," he explains. "For tech companies whose revenue depends on landing big-ticket clients or closing long, complex deals, that slowdown can significantly impact cash flow and forecasting." Warning Signs Young sees warning signs that a recession could be on the horizon. "Things like inflation, rising interest rates, and global instability often point to tough economic times," he notes. "However, the beauty of the tech industry is its ability to innovate and pivot, so companies that stay agile and forward-thinking can actually find opportunities even when the economy is struggling." Related:While there's no recession yet, there are signals suggesting a higher risk of an economic slowdown in the near future, Jimenez says. "The next few quarters will be key, especially as businesses react to global trade tensions and consumer confidence shifts." Final Thoughts Recessions, although challenging, can also be a time to rethink and innovate, Young says. He notes that a recession can provide an opportunity to focus on digital transformation, explore new markets, or refine products. "For example, businesses in cybersecurity may see even more growth, as security threats often spike during downturns," Young explains. "The key is to be proactive, flexible, and to always stay connected to what your customers need." Shak says that preparing for a recession involves maintaining financial flexibility, focusing on customer value, and staying agile in the face of changing market conditions. "Tech companies that are proactive, innovative, and resilient are more likely to weather the storm and come out stronger on the other side." Young agrees. "If you embrace change and stay ahead of the curve, you cannot only survive a recession but come out stronger on the other side." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 65 Views
-
WWW.INFORMATIONWEEK.COMFranciscan Health’s Pursuit of Observability and AutomationJoao-Pierre S. Ruth, Senior EditorApril 30, 20254 Min Readeverything possible via Alamy Stock PhotoThe layers of tech and data used by an institution such as Franciscan Health, a 12-hospital system in Indiana that also has a presence in suburban Chicago, can need a bit of decluttering for sake of efficiency.The path to sort out data and other aspects of observability led the health system to observability platform Pantomath.Sarang Deshpande, vice president of data and analytics for Franciscan Health, says when he joined three years ago, he saw that -- much like other healthcare providers -- they operated with a combination of tools and technologies stacked together. That approach may have served in the moment, choosing the best tools available at the time, he says. It also piled up a bit of confusion.Diagnosing the ProblemAs with other types of long-running institutions, hospitals might not move swiftly when it comes to technology adoption. “The maturity typically you’ll see on the provider side around technology … digital adoption is lower than you would find in manufacturing or even on the healthcare side if you think of pharmaceuticals or medical devices,” Deshpande says.On the nonprofit side, he says, the main focus is patient care with most capital investments going into buildings, hospitals, and clinics rather than new tech. At least that may have been the case until the pandemic put the world on different footing. “Technology tends to lag a little bit, but after COVID that has changed quite a bit,” Deshpande says.Related:Prior to COVID, Franciscan Health tended to purchase technology tools based on what was needed at the time, he says, and largely on-premises. Compounding the complexity, Deshpande says there is a plethora of ways data is collected and ingested in the hospital system. “Our electronic medical record system is the biggest of all where most of our patient data comes from,” he says.On top of that, he says there are also billing and ERP systems, ITSM ticketing systems, and time-keeping systems to account for. Further, there are regulatory requirements around the hospital system’s reporting, he says.Assessing the Tech AilmentInformation that Franciscan’s system ingests, Deshpande says, includes flat file datasets, as well as data from a CMS, third-party payers, or ancillary third parties. With so many formats and inputs, he says there was not a very clear-cut way to access data. Franciscan Health must also be accountable for sending information out, Deshpande says.The varied tech tools Franciscan Health collected over the years meant there was no standardized data pipeline. “That problem was very obvious to me from the get-go,” Deshpande says. “We have tried to solve it through people and process to a large extent, but there’s only so much you can do when there are siloed teams that are accountable for one piece of the data flow.”Related:With so many pieces and layers in play, tech challenges were inevitable. “Whenever there was a failure or a data quality issue or a job didn’t run on time or got delayed, the downstream impact of that was very localized,” Deshpande says.Being accountable for accuracy, timeliness of the data, he says the issues became apparent to him. 
“That’s where we realized we had a big problem where the non-standardized set of tools, processes, and people in their jobs were making it very difficult for us to have any level of accuracy that our leadership demands of us,” he says. In the digital transformation era, with migrations to the cloud and more automation, Deshpande says post-COVID resources were extremely limited and most every health system seeks to do more with less. “Labor costs are off the charts,” he says. “I think that’s where most people are realizing that we need to leverage not just technology at the frontlines for our patients, but also for optimal work internally.” Prescribing a Strategy That’s where the observability platform Pantomath came into play to help transform Franciscan Health’s data operations. Deshpande says use of the platform introduced automation with the intent to reduce human error and dependency in the equation. “We will always need eyeballs on things to validate, verify, and to fix,” he says, “but basic monitoring, observation, alerting and things of that nature should be very easy to automate. Things are never as easy as they seem.” Use of the platform let Franciscan Health repurpose their labor force to work smarter through AI and LLMs, Deshpande says. “We wanted a more consistent way of monitoring and solving the problem of data accuracy, data currency, and data validation.” Franciscan Health’s system comprises five different regions, he says, that historically were separate entities that came together through mergers and acquisitions. They still operate relatively independently from a daily workflow perspective, says Deshpande. That includes management of staff and patient population. Deshpande says one measurement for success of the observability effort is whether his team can conduct business, grow, and transform at the same time without additional labor -- and still deliver. He says the work continues, with at least two years out in terms of migrating all on-prem infrastructure while also building new use cases on the data platform. “The next couple of years will be all about migration, consolidation, and how can we get to a point where this modern data platform in the cloud will be up and running and we can reduce our footprint in the data center and the cost that comes with it,” Deshpande says.
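The "basic monitoring, observation, alerting" Deshpande describes as easy to automate often reduces to a couple of checks per inbound feed: is the data fresh, and does the volume look plausible? The sketch below is generic and is not tied to Pantomath or to Franciscan Health's environment; the feed names, thresholds, and load metadata are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical expectations for each inbound data feed.
expectations = {
    "emr_daily_extract": {"max_age_hours": 26, "min_rows": 100_000},
    "payer_claims_feed": {"max_age_hours": 50, "min_rows": 5_000},
}

def check_feed(name, last_loaded_at, row_count, now=None):
    """Return a list of alert strings for one feed (an empty list means healthy)."""
    now = now or datetime.utcnow()
    rules = expectations[name]
    alerts = []
    if now - last_loaded_at > timedelta(hours=rules["max_age_hours"]):
        alerts.append(f"{name}: data is stale (last load {last_loaded_at:%Y-%m-%d %H:%M})")
    if row_count < rules["min_rows"]:
        alerts.append(f"{name}: row count {row_count} below expected {rules['min_rows']}")
    return alerts

# Example run with made-up load metadata: fresh enough in volume, but 30 hours old.
print(check_feed("emr_daily_extract",
                 last_loaded_at=datetime.utcnow() - timedelta(hours=30),
                 row_count=120_000))
```

A commercial observability platform adds lineage, correlation, and automated root-cause work on top, but the checks it alerts on are conceptually this simple, which is why Deshpande calls them easy to automate.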
-
WWW.INFORMATIONWEEK.COM Common Pitfalls and New Challenges in IT Automation. Automation is moving from a routine IT task to a race to cross an ill-defined finish line. AI tends to be the bug smearing the windshield and making it hard to see where you’re headed. Road hazards are further complicating the drive to increased efficiency. "For some, automation is a buzzword and an uphill battle, but for most technical folks out there, it's as simple as ABC. However, many technical leads and CIOs find themselves in trouble at the starting line,” says Muhammad Nabeel, chief technology officer at Begin, an entertainment streaming service in Pakistan. At issue from the start are the usual company politics and AI -- which can be more difficult to negotiate than bean counters and C-suite heavyweights combined. “Nowadays, AI has a drastic influence on every walk of life, especially technology. Therefore, any CIO or head of technology must incorporate the AI factor,” Nabeel adds. Although AI is a dominant force, it isn’t the only play in automation. Some established tools and rules still apply. Unfortunately, so do the previous pitfalls and challenges. Heaped on top of that are all the AI problems, too. “This year, hidden costs and regulatory curveballs will bite if ignored. Beyond licensing fees, watch for integration spaghetti -- systems that don’t “talk” smoothly -- and training gaps that stall adoption. New data privacy regulations, like evolving GDPR [the European Union’s General Data Protection Regulation] and AI transparency laws, mean CIOs must vet tools for compliance and ethical design,” says Dawson Whitfield, CEO and co-founder of Looka, an AI platform for designing logos. All told, there’s a lot for IT to manage all at once. For the sake of sanity and strategy, perhaps it's best to first consider the pitfalls and challenges before trying to map out a strategy. Pitfall 1: Running into obstacles you can’t see In the process of implementing automation and getting all the moving parts right, sometimes people forget to first evaluate the process they are automating. “You don’t know what you don’t know and can’t improve what you can’t see. Without process visibility, automation efforts may lead to automating flawed processes. In effect, accelerating problems while wasting both time and resources and leading to diminished goodwill by skeptics,” says Kerry Brown, transformation evangelist at Celonis, a process mining and process intelligence provider. The aim of automating processes is to improve how the business performs. That means drawing a direct line from the automation effort to a well-defined ROI. "When evaluating AI and automation opportunities for the organization, there are often gaps in understanding the business implications beyond just the technology. CIOs need to ensure that they can translate AI capabilities into concrete business strategies to demonstrate strong ROI potential for stakeholders,” says Eric Johnson, CIO at PagerDuty, an AI-first operations platform. Pitfall 2: Underestimating data quality issues Data is arguably the most boring issue on IT’s plate. That’s because it requires a ton of effort to update, label, manage and store massive amounts of data and the job is never quite done. It may be boring work, but it is essential and can be fatal if left for later. “One of the most significant mistakes CIOs make when approaching automation is underestimating the importance of data quality.
Automation tools are designed to process and analyze data at scale, but they rely entirely on the quality of the input data,” says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. “If the data is incomplete, inconsistent, or inaccurate, automation will not only fail to deliver meaningful results but may also exacerbate existing issues. For example, flawed customer data fed into an automated marketing system could lead to incorrect targeting, wasted resources, and even reputational damage,” Guan adds. Related:Pitfall 3: Mistaking the task for the purpose A typical approach is to automate the easy, repetitive processes without giving thought to a problem that lurks beneath. Ignoring or overlooking the cause now may prove highly damaging in the end. "CIOs often fall into the trap of thinking automation is just about suppressing noise and reducing ticket volumes. While that’s one fairly common use case, automation can offer much more value when done strategically,” says Erik Gaston, CIO of Tanium, an autonomous endpoint management and security platform. “If CIOs focus solely on suppressing low-level tickets without addressing the root causes or understanding the broader patterns, they risk allowing those issues to snowball into more severe problems that can eventually lead to bigger risks down the road. It is often the suppressed Severity 3-4 issue that when left unattended, becomes the S1 or 2 overtime!” Gaston says. Remember also that business goals and technologies change over time and so too must processes. “Focus on high-impact areas, leverage the power of open-source tools initially, and monitor the outcome. Change when and where necessary. Do not adopt the ‘fire and forget" principle,’" says Nabeel. Pitfall 4: Failing to plan for integration Integration becomes a necessity at some point. With AI, integrating with human overseers is an immediate need. Often it must be integrated with other software as well. “One mistake is assuming AI-driven automation can run without human oversight. AI is a powerful tool, but it still requires human checks to catch errors, bias, or security risks,” says Mason Goshorn, senior security solutions engineer at Blink Ops, an AI-powered cybersecurity automation platform. However, even traditional automation tools require integration. Most in IT are aware of this but it doesn’t mean that planning for it made it into the final strategy. “Another challenge is failing to plan for integration, which can lead to vendor lock-in and disconnected systems. CIOs should choose automation tools that work with existing infrastructure and support open standards to avoid being trapped in a single provider’s ecosystem,” says Goshorn. Pitfall 5: Not allowing the data to drive decisions in what to automate Often the plan isn’t really a plan but rather a rush to automate the low-hanging fruit to show a fast win. Unfortunately, a fast win isn’t necessarily the same as a big win. A cost-benefit analysis will steer you true whereas a quick pick might lead you astray. “For pipelines that occur less frequently or require little time, automation provides lesser value. Like most business processes, a cost can be associated with automation, and the cost savings should exceed the cost of implementation and maintenance,” says David Brauchler, technical director & head of AI and ML security at cybersecurity consultancy, NCC Group, a cybersecurity company. Identifying what processes should not be automated early on is another way to save effort, time and wasted cost. 
“Any process that requires complex human reasoning, emotion, or interaction, or does not follow established rules and structures, are not suitable for automation. Of course, AI is blurring that distinction and getting better at simulating complex human behaviors and establishing structures where none seem to exist. However, considering the current state of development and possible legal and moral ramifications, such processes should be deprioritized for automation,” says Sourya Biswas, technical director, risk management and governance, NCC Group. “Also, considering the lead time to analyze, implement and integrate automation, any process subject to major changes in operating conditions in the near future should not be considered for automation as it is likely that the ROI won’t be positive before the process itself becomes obsolete,” Biswas adds. Pitfall 6: Focusing solely on cost Given that economies are uncertain around the world from inflation, political upheaval, and other factors, it’s understandable that cost concerns are elevated now. But that narrow focus can leave you blind to other budget impacts. “CIOs risk choosing the wrong technology, leading to integration challenges, unnecessary complexity, or vendor lock-in. A common pitfall is focusing solely on cost savings rather than broader benefits like agility, innovation, and customer experience, which can limit the actual value of automation,” says Derek Ashmore, application transformation principal at Asperitas, an IT consultancy. Rising New Challenges 2025 is ushering in a lot of new challenges for IT to surmount in automation implementations. Although changes in regulation and associated compliance costs are ongoing issues, they are even more so now. “This year, CIOs should be particularly vigilant about emerging regulatory requirements that could impact their automation strategies. Staying informed about industry-specific regulations and compliance standards is essential, especially regarding how automated systems handle data,” says Chris Drumgoole, EVP, Global Infrastructure Services at DXC Technology, a global technology services provider. It isn’t just federal regulations you must watch closely, but regional and state regulations too. “The integration of AI into IT automation is accelerating, with technologies like generative AI and agentic AI playing pivotal roles. State legislatures in the US are actively introducing AI-related bills, with hundreds proposed in 2025,” says Ashmore. Ashmore warns that these legislative efforts include comprehensive consumer protection, sector-specific regulations on automated decision-making, chatbot oversight, generative AI transparency, data center energy usage, and public safety concerning advanced AI models. “This surge in state-level regulation adds complexity to compliance for organizations implementing IT automation,” Ashmore adds. Some of the rising challenges are more directly attached to automation implementations. Unexpected expenses in operationalizing AI, increasing complexity in multi-cloud integration, and integration requirements across growing ecosystems are all putting pressure on IT, according to Deepak Singh, president and chief technology officer at Adeptia, an AI and self-service platform. Also lurking in the background, but soon to raise its ugly head, is the problem of a growing shadow AI. Business users are routinely turning to free and low-cost AI subscription models to get their work done without corporate oversight or interference. 
On top of that is the growing number of AI models integrated or embedded in enterprise software and hardware, as well as in private devices like smartphones. That is a lot of unattended and potentially unsecured AI wandering around in the organization. For example, that is a lot of AI that can be gathering data to train future AI models on, and some of that data may be proprietary. Last, but certainly not least, is the dearth of talent necessary to remake business processes in AI’s image and fit for automation tools of all kinds. “Leaders should focus on upskilling the talent they already have and investing in communities to build strong talent pipelines. This way, as automation increases, the workforce can take an oversight role and enjoy more capacity to focus on innovation that can improve the bottom line,” said Tim Gaus, Smart Manufacturing business leader at Deloitte, a consulting firm. The key to success lies in training domain experts to use AI and other technologies, and to accurately evaluate what processes can and can’t be successfully automated. “Key to this for manufacturers is ensuring talent can span the manufacturing and IT disciplines. This can take the form of educating production staff in IT but must also focus on ensuring that IT staff and partners understand the real challenges and data environment on the production floor. IT and OT (Operational Technology) can no longer have walls between them and must operate toward common goals with sufficient understanding of each other’s domains,” Gaus says.0 Comments 0 Shares 81 Views
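The cost-benefit test Brauchler describes in Pitfall 5 can be made explicit with a few lines of arithmetic: the savings accumulated over the period the process is expected to stay stable must exceed implementation plus ongoing maintenance. The figures in this sketch are placeholders, not benchmarks, and the function name is invented for illustration.

```python
def automation_payback(runs_per_month, minutes_saved_per_run, hourly_rate,
                       implementation_cost, monthly_maintenance, horizon_months):
    """Rough net value of automating one process over a fixed horizon."""
    monthly_savings = runs_per_month * (minutes_saved_per_run / 60) * hourly_rate
    net = (monthly_savings - monthly_maintenance) * horizon_months - implementation_cost
    return monthly_savings, net

# Illustrative numbers only: a report produced 200 times a month, saving 15 minutes each time.
monthly, net = automation_payback(runs_per_month=200, minutes_saved_per_run=15,
                                  hourly_rate=60, implementation_cost=40_000,
                                  monthly_maintenance=500, horizon_months=24)
print(f"Monthly savings: ${monthly:,.0f}, net over horizon: ${net:,.0f}")
# If the process is likely to change before the horizon ends, shorten the horizon --
# a negative net value is Biswas's case for deprioritizing the automation.
```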
-
WWW.INFORMATIONWEEK.COM Principal Financial Group CIO on Being a Technologist and Business Leader. Carrie Pallardy, Contributing Reporter. April 29, 2025. 5 Min Read. Early on in her career, Kathleen Kay never pictured herself in the C-suite. She went to a public high school in Detroit and initially started on a pre-med track in her undergraduate studies. But she discovered computer science instead. She started her career at General Motors, where a mentor recognized her potential. Today, she is executive vice president and CIO of Principal Financial Group, an investment management and insurance company. In a conversation with InformationWeek, Kay traces her career trajectory, sharing how she grew into CTO and CIO leadership roles at multiple companies. The Path to the C-Suite Kay has come a long way since her high school days. “It would never have occurred to me I would be in a position like this,” she shares. She wasn’t even sure if she was going to go to college, but an organization that worked with kids at her high school helped her recognize that opportunity. She got a scholarship and initially decided to pursue a pre-med track, only to discover she hated biology. “In Detroit ... we were always looking for good, stable, well-paying careers. Computer science was paying well. So I thought, ‘I'm going to take a computer science class and see if I like it,’” she shares. “And I ended up loving it.” Like many of her peers in Detroit at the time, Kay got her start in the automotive industry. Her first job was in a research laboratory at GM. She worked alongside various researchers in specialties like social sciences, anthropology, psychology, and operations science. The role afforded her a great deal of flexibility to explore her interests, and there she was drawn to learning systems. Kay then worked with the Chevrolet division of the company, where she caught the eye of an internal leadership program that selected high-potential employees. “For me, just getting through college and into a company like General Motors was the dream,” Kay recalls. “I became a part of that group and that's when I started realizing I could be in leadership positions, that I could do bigger things.” GM hired Ralph Szygenda, its first CIO, in the 1990s. “That's when I thought, ‘Wow, I would love to be in a role like that,’ because it's a combination of having to understand business as well as technology. And it was what I really liked doing,” says Kay. Leadership in Different Industries Kay worked with Maryann Goebel, who was the CIO at GM North America between 2003 and 2006, and Goebel became a lifetime mentor and the first person to recommend Kay for roles outside of GM. Kay’s first role outside of GM and the automotive industry was as senior vice president and CTO at Comerica Bank. The next steppingstone in her career was as enterprise CTO at SunTrust Bank. From there, she moved to Pacific Gas and Electric Company, where she worked her way up to senior vice president and CIO. In each successive role, Kay’s responsibilities grew and broadened. As her career unfolded, she learned how to be a good technologist and business leader. “I think [for] many of us, and I did it early in my career, we get enamored with a technology and then try and hunt for a place to put it,” says Kay.
“What would often happen is you’d have this mismatch of technology that wouldn't really solve the business problem at hand.” In each of her different roles, Kay found that she needed the ability to empower people and remove blockers with technology, but she needed a keen understanding of the business and its specific industry to accomplish that. “Being a good technologist is really understanding the business problem at hand,” she explains. The CIO Role at Principal Financial Group Principal Financial Group approached Kay during its hunt for a new CIO. “This company at the time ... had recognized that what got them here to 140 years wasn't going to be what gets them to another 140 years. So, having this humility to recognize that and willingness to address it and pivot was really appealing to me,” Kay shares. Related:She accepted the job and started in May 2020, just a few months into the global COVID pandemic. Kay aspires to be an accessible leader, a management style challenged by a 100% remote workforce. Stopping by someone’s desk or leaving her door open wasn’t an option. Even informal meetings would need to be scheduled via a virtual meeting platform. She started hosting a weekly, open virtual meeting, Coffee with Kathy, as an experiment. Anybody in her organization could join for an unstructured conversation about work or anything else happening in the wider world. That meeting drew hundreds of participants, and it has outlived the pandemic. Today, it is a monthly meeting that remains virtual so team members working remotely and in other offices can still join. “It's really broken down barriers. I have so many people on my team who say, ‘I would have never thought I would be talking directly with the CIO of a company. I never thought I would feel comfortable sending a message or getting time on the calendar,’” says Kay. Today, this meeting is one among many in Kay’s busy schedule. “In the CIO role, I am front and center facing off with my business colleagues really understanding strategy, helping define it, challenging ways of doing things,” she says. On any given day, she could be having financial discussions, checking on the progress of different initiatives, and spending time mentoring people on her team. Kay notes her pride in how well-aligned technology and strategy have become at Principal Financial Group. When she first started, each of the company’s lines of business had a strategy and related technology plans. But there were a lot of shared technologies that weren’t necessarily coordinated across these different lines of business. Kay and her team took a step back to look for ways to integrate technology into strategies across the entire business. “As a result, there are capabilities that we're delivering much more quickly and iteratively than we could in the past because of how we've designed this. So, we've built things that have really improved the customer experience,” Kay explains. Over the course of her career, Kay has found that technology leadership can transcend industry lines. “Being a technologist gives you an opportunity to really go across different industries,” says Kay. “I've had that luxury. You'll often find that what you learned in one industry can be leveraged in another industry. And then you learn even new things. 
You bring new ideas, and I think having this background and the ability to move to different industries has been super gratifying.”
-
WWW.INFORMATIONWEEK.COMAgentic AI May Find the Best Business Partner in ItselfAI has undoubtedly revolutionized work for many industries, but in some areas, it’s reaching its limits. Human-AI interactions, namely with GenAI, have been the base of AI’s integration recently. We’ve finessed the “we ask, it performs” relationship. But this introduces a simple problem: Our limits are its limits. We can ask it all we want (and get good answers), but AI “lives” a life of follow-the-leader. Enter agentic AI. It’s been the talk of the town for good reason, as it fills in many of the shortcomings of its predecessor. But it demands major alterations to our relationship with AI, as its best functions are working with other AI agents, not us. We step back, lose control and become less important. So, is it worth it? It promises to be the next great tool for businesses; what are the most important roles it can play in IT, how does security factor in, and what should leaders keep in mind? Scaling Down and Dialing In Large language models (LLMs), though very useful in other ways, aren’t the key for AI agents. The niche is AI agents’ game: they work best in specialized areas. Agents should be trained on niche tasks, with the ability to interact with other agents to complete more complex functions. AI agents can be key in DevOps, for example, equipped to carry out software testing, deployment, pipeline optimization, incident management and more. Naturally, a lot of work in this area requires collaboration, which is AI agents’ superpower. Agents can, for instance, use proactive testing to predict a possible failure and contact other bots to make a patch to resolve it. It can then connect with other agents, such as a UX agent, to check for possible side effects of the fix, and ensure a smooth deployment. Related:Cybersecurity experience finds similar benefits. A specialized agent can monitor for threats with a refined set of skills, while another can focus on response and clean up. IT compliance, cloud computing, and almost any sector in IT can benefit from the level of finesse and collaboration that these agents can bring to the table. This is all done within the parameters that we give it, so any link along the chain can be filled with a human agent when needed. Beauty lies in its flexibility. The Elephant in the Room The big question is about accuracy. With great power comes great responsibility. As AI agents feed data to other agents and make dual decisions, accuracy and reliability of the output become some of the most important duties. It all comes down to training. Quality data is everything in AI, and the same is true (and it may be even more important) when it comes to specialized agents. Beyond initial training, models need to be continuously fine-tuned with real-world data. Related:Professionals and teams who have mastered data quality will find themselves a step ahead when implementing. With proper data and proper modularity, a high level of accuracy and consistency can be achieved. Data issues are not so much a limit as they are an element of preparation and maintenance. A Word for Change Leaders ROI is a precious thing, and we always ask ourselves if any given program or software is really worth it. When implementing and mastering agentic AI, we need to leave that mindset behind. Long-term thinking doesn’t really have a place here; instead, we need to make intuitive, tactical decisions on where we can implement now without compromising security. 
Short-term use cases allow for immediate action and quicker learning to build a strong foundation for a strategic future. There is no best time or perfect model. AI agents bring a new level of collaboration to the increasingly interconnected IT industry. In the end, it's up to us to learn where to be involved and when to step back. With all the industry changes and tumult, I personally don’t mind taking a back seat for this one.
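The monitor-then-hand-off pattern described in this piece -- one agent detects a likely failure, another proposes a fix, and a human stays in the loop for risky changes -- can be expressed as a small pipeline. The sketch below is schematic rather than a real agent framework; every class, threshold, and action name is invented for illustration.

```python
class MonitorAgent:
    """Watches deployment metrics and raises findings for a remediation agent."""
    def scan(self, metrics):
        if metrics["error_rate"] > 0.05:
            return {"issue": "elevated error rate", "severity": "high"}
        return None

class RemediationAgent:
    """Proposes a fix; defers to a human when the blast radius is large."""
    def propose(self, finding):
        if finding["severity"] == "high":
            return {"action": "rollback_last_release", "needs_human_approval": True}
        return {"action": "restart_service", "needs_human_approval": False}

def run_pipeline(metrics, approver):
    finding = MonitorAgent().scan(metrics)
    if not finding:
        return "healthy"
    plan = RemediationAgent().propose(finding)
    if plan["needs_human_approval"] and not approver(plan):
        return "escalated to on-call engineer"
    return f"executed {plan['action']}"

# Example: the human approver declines an automatic rollback, so the issue is escalated.
print(run_pipeline({"error_rate": 0.09}, approver=lambda plan: False))
```

The flexibility the article describes lives in that `approver` slot: any link in the chain can be a person instead of an agent, which is how control is retained while stepping back.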
-
WWW.INFORMATIONWEEK.COM How AI is Transforming Data Centers. John Edwards, Technology Journalist & Author. April 28, 2025. 4 Min Read. AI is rapidly transforming data centers, as the massive computational workloads required to support generative AI, autonomous systems, and numerous other advanced technologies are pressing current facilities to their limits. By 2030, data centers are expected to reach 35 gigawatts of power consumption annually, up from 17 gigawatts in 2022, according to management consulting firm McKinsey & Company. AI is fundamentally reshaping the data center landscape, not just in scale but also in purpose, says Vivian Lee, a managing director and partner with Boston Consulting Group. "What used to be infrastructure built to support enterprise IT is now being retooled to meet the massive and growing demands of AI, particularly large language models," she notes in an email interview. Rapid Growth AI is driving major changes in how data centers are designed and built, especially in terms of density, says Graham Merriman, leader of Rogers-O'Brien Construction's data center projects. "We're seeing more computing and more power packed into tighter footprints," he observes in an online discussion. "That shift is also reshaping the supporting infrastructure, particularly cooling." AI is accelerating data center industry growth beyond any previous market expectations, says Gordon Bell, a principal at professional services firm Ernst & Young. "This dynamic not only results in higher power, capital, and resource requirements to develop new data centers, but it also changes the ways large data center users approach lease versus buy, market selection, and data center design decisions," he explains in an online interview. "The need to train large frontier models has driven significant increases in aggregate data center demand, as well as the size of individual hyperscale data center campuses." Operational Impact Bell points out that AI runs on graphics processing units (GPUs), which are more power-consumptive than traditional central processing units (CPUs). This shift requires more power, as well as more cooling throughout the data center, he notes. "Traditionally, data centers were air-cooled, but the market is shifting toward liquid-cooling technologies given the increased power density of AI workloads." AI won't increase data center staff size, but it will change the maintenance playbook, Merriman says. "With advanced cooling systems comes more specialized maintenance requirements," he explains. "The industry is also adjusting to new protocols around liquid cooling and environmental controls that are more sensitive to performance fluctuations." Traditional data centers will face significant challenges in adapting to AI-powered operations and supporting AI-driven workloads, predicts Steve Carlini, chief data center and AI advocate at digital automation and energy management firm Schneider Electric. "Many legacy facilities weren't designed to support the high-power densities and cooling requirements needed for AI applications," he observes in an email interview. Carlini notes that modernization efforts -- such as upgrading the electrical infrastructure, deploying liquid cooling, and enhancing energy efficiency -- while costly, can extend the lifespan of older data centers. "Those unable to adapt may struggle to remain viable in a rapidly evolving, AI-dominated landscape."
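As a quick sanity check on the McKinsey figures cited above, growing from 17 gigawatts in 2022 to 35 gigawatts in 2030 implies a compound annual growth rate of roughly 9 to 10 percent; the short calculation below simply restates that arithmetic and is not an additional forecast.

```python
start_gw, end_gw = 17, 35          # McKinsey estimates cited above
years = 2030 - 2022

cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")   # roughly 9.4% per year

# Applying the same rate year by year reproduces the trajectory:
power = start_gw
for year in range(2023, 2031):
    power *= 1 + cagr
print(f"Projected 2030 consumption: {power:.1f} GW")   # about 35 GW, by construction
```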
Operations are also being challenged by supply chain constraints, Lee says. "Critical components like transformers, cooling systems, and backup generators now have lead times measured in years rather than months," she explains. "In response, operators are shifting to bulk procurement strategies and centralized logistics to keep project timelines on track." Cost Impact AI workloads require significantly more electricity, so operating costs will go up, Merriman says. "To manage these challenges, facilities are moving toward closed-loop cooling systems that help reduce water usage and improve thermal efficiency." Related:While investing in AI-capable data centers will be costly, it also has the potential to significantly reduce operating expenses, says David Hunt, senior director of development operations at credit reporting firm TransUnion. "AI optimizes energy consumption, reduces cooling expenses, and minimizes the need for manual intervention, leading to lower operational costs," he observes in an online interview. "However, the increased power demand for AI workloads can also drive-up energy costs." Carlini notes that AI-driven workloads are expected to more than triple by 2030. "Strategic investments in AI-ready infrastructure, energy efficiency, and collaboration between industry leaders and policymakers will be essential for building a resilient, high-performance data center ecosystem capable of supporting AI's continued growth." Final Thoughts AI will continue driving record-setting levels of data center development over the next several years, Bell predicts. "At the same time, GPU manufacturers have announced product roadmaps that include even more power-hungry chips," he says. "These dynamics will continue to shape industry growth." Integrating AI into data centers isn't just technology, it's also about strategic planning and investment, Hunt says. "Organizations need to consider the long-term benefits and challenges of AI adoption, including the environmental impact and the need for skilled personnel to manage these advanced systems consistent with internal governance requirements," he states. "Collaboration between AI developers, data center operators, and policymakers will be crucial in shaping the future of data centers." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 49 Views
-
WWW.INFORMATIONWEEK.COM Babak Hodjat Talks Groundbreaking Work on Natural Language Tech. The CTO of AI for Cognizant previously developed patented, natural language tech that found its way into Apple’s Siri. He discusses the need for CTOs to research constantly. Joao-Pierre S. Ruth, Senior Editor. April 28, 2025. A desire for consistency in how AI performs, and with the results it delivers, is shared among many companies and their customers, says Babak Hodjat, CTO of AI for Cognizant. Delivering such consistency may require internal and external effort to mature the technology. Hodjat’s experience with AI runs from the early days of his career, through the development of natural language technology found in Apple’s Siri digital assistant, to his current role at Cognizant. In addition to discussing some of his pioneering work, he shared his perspective on the need CTOs have to research innovations in development outside of their organizations, ways CTOs can set the future they want to see in motion, and how a book of all things can be his go-to “device” that offers him inspiration in his role.
-
WWW.INFORMATIONWEEK.COMQuick Study: AI Power Consumption and SustainabilityJames M. Connolly, Contributing Editor and WriterApril 25, 20254 Min ReadInk Drop via Alamy StockIn the adoption of new technology, there has always been the need for tradeoffs. For every opportunity, there has been a cost, a need to achieve the good by understanding the bad. The early computers offered efficiency, but they raised fears of depersonalization; PCs threatened IT security; the internet connected the world but enabled cybercrime. In the end, nobody wanted to go back to typewriters, rotary dial phones, and adding machines. Somehow humanity has found ways to balance the negative and positive side of each emerging technology. Today, artificial intelligence offers a prime example of the good/bad, plus/minus equation. AI promises to make work easier, cure diseases and solve global problems. Yet, we know about the risks: Biased data, lack of transparency, loss of jobs, and more. Recently, a new concern has emerged: AI models require so much electrical power that the energy demand seems ready to bring down the global grids. The irony is that some experts say AI actually has the potential to help achieve greater sustainability by finding answers to multiple energy issues. We recently launched a special report on this topic in which we investigated the thorny issues surrounding the true cost of AI. For example, what’s the price tag CIOs have to pay in the short term and what’s the cost to their business -- and to society -- in the long-term? And over the past year or so, InformationWeek writers have examined the energy dilemma that AI raises. The articles in this Quick Study share the thoughts of key experts in the AI and energy fields. They can help you and your IT organization understand the two sides of the AI energy issues. The Problem Confronting the AI Energy Drain Artificial intelligence technology is working its way into nearly every aspect of modern life. But what are the energy costs? Can they be reduced? The AI Power Paradox The energy needed to train AI models is draining the power grid. But AI may also be key to sustainable energy management. AI Driving Data Center Energy Appetite As organizations scramble to integrate AI platforms into their businesses, the impact on data center energy requirements is skyrocketing, with future demand certain to rise. Will Future AI Demands Derail Sustainable Energy Initiatives? As AI use grows, so will its energy demands. How will power-hungry AI deployments affect sustainable energy initiatives? Pulling Back the Curtain: The Infrastructure Behind the Magic of AI Here’s a look at the “magic” behind artificial intelligence development, which requires density in design, strategic land selection, and power availability. Possible Solutions AI, Data Centers, and Energy Use: The Path to Sustainability The increasing use of AI and data centers is leading to a surge in energy consumption, posing risks for energy, tech, and data companies. It also presents an opportunity for these companies to decarbonize, build trust, and reduce long-term costs. Accenture Makes $1B AI Power Play with Udacity Purchase The company will use Udacity to build out its LearnVantage business to focus on AI-fueled technology training. Supercharging AI With the Power of Quantum Computing How can we supercharge artificial intelligence? Through the power of quantum computing and its potential to pave the way for a more sustainable and efficient future with AI. 
Clean, Lean Data Is the Cornerstone of AI Sustainability Messy data is making AI inefficient and hampering its sustainability. So why aren’t more organizations doing a better job of optimizing their data for AI? Sustainable AI: Wishful Thinking or Corporate Imperative? With the increasingly popular use of AI in the enterprise, it becomes crucial to ensure that these technologies are harnessed in a climate-neutral way. Infrastructure Sustainability and the Data Center Power Dilemma Microsoft’s plan to tap into a reactor at Three Mile Island to power data centers fuels questions about how far our voracious appetite for energy might go. A Sustainability Spin to the Issue How AI Impacts Sustainability Opportunities and Risks AI can be used to drive sustainability initiatives, yet the technology itself has an environmental cost. How can we strike a balance? AI and the War Against Plastic Waste Plastic waste is one of today’s most complex environmental challenges, and people are putting AI to work to understand it and solve it. Embracing AI for Competitive Edge and Social Impact AI is revolutionizing business by enhancing operational efficiency, innovation, and competitiveness, while also addressing global challenges like healthcare and sustainability. It enables companies to thrive while contributing to broader societal impact. AI Will Make Cars and Trucks Smarter, Faster, and Safer AI promises to revolutionize driving. Here's a look at what's coming down the road, and how AI can help make cars safer and more energy efficient. SAP’s Sophia Mendelsohn on Using AI to Scale Sustainability How GenAI can be put to work to free up ESG professionals and connect the dots on resource planning within the enterprise. About the AuthorJames M. ConnollyContributing Editor and WriterJim Connolly is a versatile and experienced freelance technology journalist who has reported on IT trends for more than three decades. He was previously editorial director of InformationWeek and Network Computing, where he oversaw the day-to-day planning and editing on the sites. He has written about enterprise computing, data analytics, the PC revolution, the evolution of the Internet, networking, IT management, and the ongoing shift to cloud-based services and mobility. He has covered breaking industry news and has led teams focused on product reviews and technology trends. He has concentrated on serving the information needs of IT decision-makers in large organizations and has worked with those managers to help them learn from their peers and share their experiences in implementing leading-edge technologies through such publications as Computerworld. Jim also has helped to launch a technology-focused startup, as one of the founding editors at TechTarget, and has served as editor of an established news organization focused on technology startups at MassHighTech.See more from James M. ConnollyReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 114 Views
-
WWW.INFORMATIONWEEK.COM

How Your Organization Can Benefit from Platform Engineering

John Edwards, Technology Journalist & Author, April 25, 2025

Platform engineering is a discipline that's designed to improve software developer productivity, application cycle time, and speed to market by providing common, reusable tools and capabilities via an internal developer platform. The platform creates a bridge between developers and infrastructure, speeding complex tasks that would normally be challenging, and perhaps even impossible, for individual developers to manage independently.

Platform engineering is also the practice of building and maintaining an internal developer platform that provides a set of tools and services to help development teams build, test, and deploy software more efficiently, explains Brett Smith, a distinguished software developer with analytics software and services firm SAS, in an online interview. "Ideally, the platform is self-service, freeing the team to focus on updates and improvements."

Platform engineering advocates the continuous application of practices that provide an improved, more productive developer experience by delivering tools and capabilities to standardize the software development process and make it more efficient, says Faruk Muratovic, engineering leader at Deloitte Consulting, in an online interview.

A core platform engineering component is a cloud-native services catalog that allows development teams to seamlessly provision infrastructure, configure pipelines, and integrate DevOps tooling, Muratovic says. "With platform engineering, development teams are empowered to create a development environment that optimizes performance and drives successful deployment."

A Helping Hand

Platform engineering significantly improves development team productivity by streamlining workflows, automating tasks, and removing infrastructure-related obstacles, observes Vinod Chavan, cloud platform engineering services leader at IBM Consulting. "By reducing manual effort in deploying and managing applications, developers can focus on writing code and innovating rather than managing infrastructure," he notes in an email interview.

Process automation and standardization minimize human error and enhance consistency and speed across the development lifecycle, Muratovic says. Additionally, by providing self-service development models, platform engineering significantly reduces dependency on traditional IT services teams since it allows full-stack product pods to deploy and manage their own environments, he adds.

Embedded monitoring, security, and compliance policies ensure that enterprise policies are followed without adding overhead, Muratovic says. "Platform engineering also supports Infrastructure as Code (IaC) capabilities, which provide development teams with pre-configured networking, storage, compute, and CI/CD (continuous integration/continuous delivery) pipelines."

An often-overlooked platform engineering benefit is regular tool updates, Smith notes.

Enterprise Benefits

Platform engineering gives enterprises the structure and automation needed to scale efficiently while lowering costs and strengthening operational resilience, Chavan says. "By eliminating inefficiencies and reducing manual labor, it optimizes resource usage and enables business growth without unnecessary complexity or costs."
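To make the self-service idea concrete, here is a minimal Python sketch of what a "golden path" scaffolder on an internal developer platform might look like: developers supply only what is unique to their service, and the platform stamps in pre-approved CI/CD, monitoring, and security defaults. The field names, template names, and defaults are hypothetical, for illustration only.

# Minimal sketch of a "golden path" scaffolder an internal developer platform
# might expose. Names, defaults, and fields are hypothetical examples.
import json

PLATFORM_DEFAULTS = {
    "ci_pipeline": "standard-build-test-deploy",   # pre-approved CI/CD template
    "monitoring": {"metrics": True, "tracing": True, "alerts": "default-oncall"},
    "security": {"sast": True, "dependency_scan": True, "secrets_manager": True},
}

def scaffold_service(name: str, team: str, runtime: str = "python3.12") -> dict:
    """Return a service descriptor with platform guardrails pre-wired,
    so developers only declare what is specific to their service."""
    return {
        "service": name,
        "owner": team,
        "runtime": runtime,
        **PLATFORM_DEFAULTS,
    }

if __name__ == "__main__":
    descriptor = scaffold_service("order-history-api", team="payments")
    print(json.dumps(descriptor, indent=2))

The point of the sketch is the design choice, not the code itself: developers never hand-assemble pipelines or monitoring, so the standardized pieces stay consistent across every service.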
He adds that by providing a stable environment that can support the seamless integration of advanced tools, platform engineering can also play a key role in helping organizations leverage AI and other emerging technologies. Platform engineering can reduce operational friction, increase monitoring ability, and enhance flexibility when deploying workloads into hybrid cloud environments. "Overhead costs can be reduced by automating repetitive, manual tasks, and access controls and compliance protocols can be standardized," Muratovic says. Related:By taking advantage of reusable platform services, such as API gateways, monitoring, orchestration, and shared authentication, platform engineering can also build a strong foundation for application and systems scalability. "Additionally, organizations that develop product-oriented and cloud-first models can pre-define reference architectures and develop best practices to encourage adoption and enhance system reliability and security," Muratovic says. A centralized and structured platform also helps organizations strengthen security and compliance by providing better visibility into infrastructure, applications and workflows, Chavan says. "With real-time monitoring and automated governance, businesses can quickly detect risks, address security issues before they escalate, and stay up to date with evolving compliance regulations." Potential Pitfalls When building a platform, a common pitfall is creating a system that's too complex and doesn't address the specific problems facing development and operations teams, Chavan says. Additionally, failing to build strong governance and oversight can also lead to control issues, which can lead to security or compliance problems. Muratovic warns against over-engineering and failing to align with developer culture. "Over-engineering is simply creating systems that are too complex for the problems they were intended to solve, which increases maintenance costs and slows productivity -- both of which can erode value," he says. "Also, if the shift to platform engineering isn't aligned with developer needs, developers may become resistant to the effort, which can significantly delay adoption." Another pitfall is overly rigid implementation. "It's crucial to find a balance between standardization across the enterprise and providing too many choices for developers," Muratovic says. "Too much rigidity and developers won’t like the experience; too much flexibility leads to chaos and inefficiency." Final Thoughts Platform engineering isn't just about the technology, Chavan observes. It's also about creating a collaborative and continuously improving work culture. "By equipping developers and operators with the right tools and well-designed processes, organizations can streamline workflows and increase space for innovation." Platform engineering isn’t simply about technology; its value lies in creating a development operating model that empowers developers while aligning with business needs, Muratovic says. He believes that the discipline will constantly evolve as needs and goals change, so it's crucial to create a culture of openness and collaboration between platform engineers, operations teams, and developers. Muratovic notes that by focusing on the developer experience -- particularly self-service, automation, governance, compliance, and security -- platform engineering can provide organizations with a flexible, scalable, resilient ecosystem that fuels the agility and innovation that drives sustained growth. 
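Continuing the hypothetical service descriptor from the earlier sketch, the snippet below illustrates the kind of automated governance gate described above: a deployment is blocked unless required platform controls are present. The list of required controls is illustrative, not a standard, and a real platform would enforce this in its pipeline tooling rather than in a stand-alone script.

# Sketch of an automated governance gate: reject deployments whose service
# descriptor is missing required platform controls. Field names are hypothetical
# and match the scaffolder sketch shown earlier.

REQUIRED_CONTROLS = ("owner", "ci_pipeline", "monitoring", "security")

def compliance_gaps(descriptor: dict) -> list[str]:
    """Return missing or disabled controls; an empty list means compliant."""
    gaps = [field for field in REQUIRED_CONTROLS if not descriptor.get(field)]
    security = descriptor.get("security", {})
    if not security.get("dependency_scan"):
        gaps.append("security.dependency_scan")
    return gaps

def can_deploy(descriptor: dict) -> bool:
    gaps = compliance_gaps(descriptor)
    if gaps:
        print("Deployment blocked; missing controls:", ", ".join(gaps))
        return False
    return True

# Example: can_deploy(scaffold_service("order-history-api", team="payments")) -> True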
"Platform engineering is how you herd the cats, eliminate the unicorns, and eradicate the chaos from your software supply chain," Smith concludes. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 88 Views
-
WWW.INFORMATIONWEEK.COMEssential Tools to Secure Software Supply ChainsMax Belov, Chief Technology Officer, Coherent SolutionsApril 24, 20254 Min Readnipiphon na chiangmai via Alamy StockAttacks on software supply chains to hijack sensitive data and source code occur almost daily. According to the Identity Theft Resource Center (ITRC), over 10 million individuals were affected by supply chain attacks in 2022. Those attacks targeted more than 1,700 institutions and compromised vast amounts of data. Software supply chains have grown increasingly complex, and threats have become more sophisticated. Meanwhile, AI is working in favor of hackers, supporting malicious attempts more than strengthening defenses. The larger the organization, the harder CTOs have to work to enhance supply chain security without sacrificing development velocity and time to value. More Dependencies, More Vulnerabilities Modern applications rely more on pre-built frameworks and libraries than they did just a few years ago, each coming with its own ecosystem. Security practices like DevSecOps and third-party integrations also multiply dependencies. While they deliver speed, scalability, and cost-efficiency, dependencies create more weak spots for hackers to target. Such practices are meant to reinforce security, yet they may lead to fragmented oversight that complicates vulnerability tracking. Attackers can slip through the pathways of widely used components and exploit known flaws. A single compromised package that ripples through multiple applications may be enough to result in severe damage. Related:Supply chain breaches cause devastating financial, operational, and reputational consequences. For business owners, it’s crucial to choose digital engineering partners who place paramount importance on robust security measures. Service vendors must also understand that guarantees of strong cybersecurity are becoming a decisive factor in forming new partnerships. Misplaced Trust in Third-Party Components Most supply chain attacks originate on the vendor side, which is a serious concern for the vendors. As mentioned earlier, complex ecosystems and open-source components are easy targets. CTOs and security teams shouldn't place blind trust in vendors. Instead, they need clear visibility into the development process. Creating and maintaining a software bill of materials (SBOM) for your solution can help mitigate risks by revealing a list of software components. However, SBOMs provide no insight into how these components function and what hidden risks they carry. For large-scale enterprise systems, reviewing SBOMs can be overwhelming and doesn’t fully guarantee adequate supply chain security. Continuous monitoring and a proactive security mindset -- one that assumes breaches exist and actively mitigates them -- make the situation better controllable, but they are no silver bullet. Related:Software supply chains consist of many layers, including open-source libraries, third-party APIs, cloud services and others. As they add more complexity to the chains, effectively managing these layers becomes pivotal. Without the right visibility tools in place, each layer introduces potential risk, especially when developers have little control over the origins of each component integrated into a solution. Such tools as Snyk, Black Duck, and WhiteSource (now Mend.io) help analyze software composition, by scanning components for vulnerabilities and identifying outdated or insecure ones. 
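As a minimal illustration of the SBOM guidance above, the sketch below scans a CycloneDX-style JSON SBOM (a top-level "components" array with name and version fields) against an internal blocklist of known-vulnerable releases. The blocklist entries are made-up examples; a production workflow would query a vulnerability database or rely on the commercial tools named above rather than a hard-coded list.

# Minimal sketch: flag SBOM components against a known-vulnerable list.
# Assumes a CycloneDX-style JSON SBOM with a top-level "components" array;
# the blocklist entries here are hypothetical examples.
import json

KNOWN_VULNERABLE = {
    ("example-logging-lib", "2.14.0"),   # hypothetical vulnerable release
    ("example-http-client", "1.2.3"),
}

def flag_vulnerable_components(sbom_path: str) -> list[str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in KNOWN_VULNERABLE:
            findings.append(f"{key[0]}=={key[1]}")
    return findings

if __name__ == "__main__":
    for hit in flag_vulnerable_components("sbom.json"):
        print("Vulnerable dependency found:", hit)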
Risks of Automatic Updates

Automatic updates are a double-edged sword; they significantly reduce the time needed to roll out patches and fixes while also exposing weak spots. When trusted vendors push well-structured automatic updates, they can also quickly deploy patches as soon as flaws are detected and before attackers exploit them. However, automatic updates can become a delivery mechanism for attacks. In the SolarWinds incident, malicious code was inserted into an automated update, which made massive data theft possible before it was detected. Blind trust in vendors and the updates they deliver increases risks. Instead, the focus should shift to integrating efficient tools to build sustainable supply chain security strategies.

Building Better Defenses

CTOs must take a proactive stance to strengthen defenses against supply chain attacks. Hence the necessity of SBOM and software composition analysis (SCA), automated dependency tracking, and regular pruning of unused components. Several other approaches and tools can help further bolster security:

Threat modeling and risk assessment help identify potential weaknesses and prioritize risks within the supply chain.

Code quality ensures the code is secure and well-maintained and minimizes the risk of vulnerabilities.

SAST (static application security testing) scans code for security flaws during development, allowing teams to detect and address issues earlier.

Security testing validates that every system component functions as intended and is protected.

Relying on vendors alone is insufficient -- CTOs must prioritize stronger, smarter security controls. They should integrate robust tools for tracking SBOM and SCA and should involve SAST and threat modeling in the software development lifecycle. Equally important is maintaining core engineering standards and performance metrics like DORA to ensure high delivery quality and velocity. By taking this route, CTOs can build and buy software confidently, staying one step ahead of hackers and protecting their brands and customer trust.

About the Author: Max Belov, Chief Technology Officer, Coherent Solutions. Max Belov joined Coherent Solutions in 1998 and assumed the role of CTO two years later. He is a seasoned software architect with deep expertise in designing and implementing distributed systems, cybersecurity, cloud technology, and AI. He also leads Coherent's R&D Lab, focusing on IoT, blockchain, and AI innovations. His commentary and bylines appeared in CIO, Silicon UK Tech News, Business Reporter, and TechRadar Pro.
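Returning to the poisoned-update risk described above, one basic control is refusing to install an update whose contents do not match a vendor-published digest. The standard-library sketch below shows the idea; the file path and digest are placeholders, the digest must arrive over a trusted channel separate from the download itself, and in practice vendor code-signing verification should be layered on top.

# Sketch: verify a downloaded update against a vendor-published SHA-256 digest
# before allowing installation. Paths and digests are placeholders; the digest
# should come from a trusted, separately secured channel, and signature
# verification (vendor code signing) should be layered on top of this check.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(update_path: str, published_digest: str) -> bool:
    """Return True only if the local file matches the published digest."""
    return sha256_of(update_path) == published_digest.lower().strip()

# Example (placeholder values):
# if not verify_update("agent-update-5.2.1.bin", "<digest from vendor advisory>"):
#     raise SystemExit("Digest mismatch; do not install this update.")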
-
WWW.INFORMATIONWEEK.COM

Low-Cost AI Projects -- A Great Way to Get Started

John Edwards, Technology Journalist & Author, April 24, 2025

One of the great things about AI is that getting started with the technology doesn't have to be a time or money drain. Understanding AI and its long-term business value can be achieved simply by experimenting with a few inexpensive deployments. To help you get started, here are six low-budget AI projects that require only a modest financial commitment yet offer powerful insights into the technology's potential business worth.

1. Chatbot

Before attempting a complex AI application, many experts advise beginning with something very simple, such as an internal chatbot. "Starting slow enables application architects and developers to consider the intricacies AI introduces to application threat models and 'skill-up' in low-sensitivity environments," says David Brauchler, technical director and head of AI and ML security at cybersecurity consultancy NCC Group, one of several experts interviewed online.

External chatbots are just as easy to deploy. "Many small businesses struggle with responding to customer inquiries quickly, and an AI chatbot can handle frequently asked questions, provide product recommendations, and even assist with appointment bookings," agrees Anbang Xu, founder of JoggAI, an AI-driven video automation platform. He notes that tools like ChatGPT, Dialogflow, or ManyChat offer easy integrations with websites and social media.

2. Web scraper

Consider building a custom web scraper to automatically monitor competitors' websites and other relevant sites, suggests Elisa Montanari, head of organic growth at work management platform provider Wrike. The scraper will summarize relevant content and deliver it in a daily or weekly digest. "In the marketing department alone, that intelligence can help you spend more time strategizing and creating content or campaigns rather than trying to piece together the competitive landscape." Montanari adds that web scrapers are relatively simple to design, easily scalable, and relatively inexpensive.

3. Intelligent virtual assistant

A great low-cost starter project, particularly for smaller businesses, is an AI-powered intelligent virtual assistant (IVA) dedicated to customer service, says Frank Schneider, AI evangelist at AI analytics firm Verint. "IVAs can handle routine customer inquiries, provide information, and even assist with basic troubleshooting." Many IVA solutions are affordable or even free, making them easily accessible to any small business, Schneider says. They're also relatively simple to create and can integrate with existing systems, requiring minimal technical expertise.

4. Internal knowledge base

An initial AI project should be internal-facing, low risk, and useful, says Loren Absher, a director and lead analyst with technology research and advisory firm ISG. An AI-powered internal knowledge base meets all of those goals. "It lets employees quickly access company policies, training materials, and process documentation, using natural language."

"This type of project is a perfect introduction to AI because it's practical, low cost, and reduces risk by staying internal," Absher says. "It gives the company hands-on experience with AI fundamentals -- data management, model training, and user interaction -- without disrupting external operations," he notes.
"Plus, it’s easy to experiment with open-source tools and pay-as-you-go AI services, so there’s no big upfront investment." The best approach to creating an AI-driven internal knowledge base is to assign a cross-functional team to the project, Absher advises. An IT or a data specialist can handle the technical side, a business process owner will ensure its usefulness, and someone from compliance or knowledge management will help keep the information accurate and secure, he says. Related:5. Ad builder Anmol Agarwal, founder of corporate training firm Alora Tech, believes that a great low-cost way to get your feet wet is using generative AI tools to enhance business productivity. "For example, use GenAI to create ads for your company, create email templates, even revise emails." Agarwal is bullish on GenAI. She notes that only minimum effort is required, since the code is already there and doesn't require programming experience. 6. Sales lead scoring An AI-powered lead scoring program is a low-cost, yet highly practical, AI starter project, says Egor Belenkov, founder and CEO of digital signage solutions provider Kitcast. With the help of historical data and behaviors, the program will help users find leads based on their likelihood of conversion into customers. "This tool will help the sales team to focus on high-potential leads and improve conversion rates significantly." This project makes a great starting point due to its ease in implementation and the value it provides, Belenkov says. "Sales teams will be able to personalize their outreach based on their needs and requirements," he explains. "It will also help the marketing team by adjusting their campaigns based on which leads are identified as most valuable." Another important benefit is the ability to analyze patterns across multiple points, such as website activity or email engagement, to predict which leads will be most likely to convert. "This eliminates the guessing game about which clients would decide to buy and which wouldn't," Belenkov says. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 127 Views
-
WWW.INFORMATIONWEEK.COM

Strategies for Navigating a Multiday Outage

IT outages are a nightmare scenario for a business. Operations grind to a halt. Internal teams and customers, possibly thousands of them, are thrown into confusion. Lost revenue piles up by the minute. Each year, businesses lose $400 billion to unplanned downtime, according to Oxford Economics. While enterprises can do their best to prevent this scenario, we have seen multiple examples of outages that stretch out over days. Businesses may not be able to control when an outage happens, but they can control how they respond.

What Causes Multiday Outages?

Outages can stem from all manner of causes. In 2023, we saw Scattered Spider and ALPHV hit MGM Resorts International with a ransomware attack that caused widespread disruption at its hotels and casinos. Slot machines were down. Guests couldn't use the digital keys for their rooms.

But malicious attacks aren't the only causes behind outages. The culprit can be something as seemingly innocuous as an update. In July 2024, a faulty sensor software update caused the CrowdStrike outage, resulting in global disruption that lasted for days. The ubiquitous reliance on third parties means that a company may not be directly responsible for the incident; it might suffer an outage due to an issue that originates with one of its vendors, like CrowdStrike. Last year, fast food behemoth McDonald's, too, had a global outage caused by a configuration change made by one of its third parties.

At the beginning of this year, Capital One and several banks had to weather a multiday outage. In this case, the vendor Fidelity Information Services (FIS) experienced power loss and hardware failure that kicked off outages for its customers.

Regardless of the cause, enterprise teams need to know how to work through outages. "We all understand that it's not if a breach happens or an outage occurs, it's when that occurs. [It's] how you respond. That's what everybody looks at," says Eric Schmitt, global CISO at claims management company Sedgwick. The right response can minimize the long-term damage and give a company the opportunity to rebuild trust in its brand.

How Can Companies Prepare for One?

A multiday outage is a scenario that should be thoroughly covered by incident response and business continuity planning. A business should know its risks and build a plan around them. And often, that means using your imagination for the worst-case scenarios. "The black swan. It's the things that you don't think of. The things that you don't know can happen really, you have to plan for this," says Sebastian Straub, principal solutions architect at N2WS, an AWS and Azure backup and recovery company.

Planning for those unforeseeable events is a multidisciplinary exercise. Different teams need to weigh in and participate in tabletop exercises to best prepare a company for the possibility of a lengthy outage. "It should never be a single team in a vacuum trying to identify all the risks that may impact the company," says Schmitt.

What Happens During the Response?

So, an outage happens. What now? It is time to take that incident response plan off the shelf and put it into action. "There should be an incident commander or someone who's designated within the organization to take [the] lead in these types of incidents," says Quentin Rhoads-Herrera, senior director of cybersecurity platforms at cybersecurity company Stratascale.
However the incident is discovered, employees need to be ready to alert the teams involved in incident response and all of the stakeholders being impacted by the outage. "You need to alert all of the different departments to the fact that, yes, we are experiencing an outage, and sometimes people are just too reluctant to do that," says Straub.

Once the right people are alerted, they can work through remediation and attribution. Communication is one of the most important aspects of working through an outage that drags on, and it is one of the toughest pieces to get right. "You see in many, many outages that communications are one of the weakest things," says Schmitt.

It is hard to find the balance between transparency, accuracy, and risk management when information about an outage is flooding in and changing so quickly. "You don't want to pass along incorrect information but being transparent and crisp in your communication outbound helps build trust with your end users, your investors, your clients, whoever it may be," says Rhoads-Herrera.

Finding that balance is made easier when you include your communications and legal teams in incident response planning, rather than waiting until you're in the thick of a real-life incident. While a specific outage and the timeline for recovery are going to dictate what information a business is able to share, committing to a regular cadence of communication, every few hours or once a day, goes a long way. "Long-term, if you're providing quality services and you're not letting your customers or stakeholders down in your communications during the event, I think your brand can recover from that," Schmitt encourages.

The pressure to get operations back up and running is immense. And that goal is paramount, but it is important to not lose sight of the human element. People are going to be working long days not only during the initial response but beyond that. "These events are not eight hours and done. They're going to be multiday initial response, and the long-term remediation could stretch out over months or even years," Schmitt points out. People are going to be tired and stressed. Emotions are going to run high. If leaders don't pay attention to their people, they risk more mistakes being made and burnout that leads to employee churn in the long term.

One of the most important safeguards for the people responsible for working through a lengthy outage is culture. People need to know that mistakes happen. It is OK to speak up and get everyone on the same page to work through recovery. "[Make] sure people understand that you don't need to be updating your resume on one screen while you're responding to an event on the other," says Schmitt.

Getting lost in the trenches of the response can be easy. But there should be a leader who keeps an eye on people and their hours worked. When someone is hitting 10- and 12-hour days, enforce breaks. "I saw a firm … put all of their employees up in very close hotel rooms. They made sure lunch, breakfast, and dinner was catered. They had rotating teams going in and out so that people had downtime. They had rest," Rhoads-Herrera shares.

How Can Companies Learn from Experience?

An outage, like any other major incident, needs to undergo a thorough postmortem. What went well in the response? What didn't? How can the incident response plan be updated? As much temptation as there may be to forget about an outage, taking the time to answer these questions is valuable.
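One lightweight way to make sure those postmortem questions actually get answered is to capture them in a structured record that feeds back into the incident response plan. The sketch below shows one possible shape; the fields and the example values, which loosely echo the vendor power-loss scenario described earlier, are illustrative rather than a standard template.

# One possible shape for a blameless postmortem record, so answers to the
# questions above are captured consistently and feed back into the IR plan.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Postmortem:
    incident: str
    started: date
    resolved: date
    root_cause: str
    went_well: list[str] = field(default_factory=list)
    went_poorly: list[str] = field(default_factory=list)
    plan_updates: list[str] = field(default_factory=list)   # changes to the IR plan

    def duration_days(self) -> int:
        return (self.resolved - self.started).days

pm = Postmortem(
    incident="Vendor power loss and hardware failure",   # placeholder example
    started=date(2025, 1, 2), resolved=date(2025, 1, 5),
    root_cause="Third-party data center outage",
    went_well=["Communication cadence held at twice daily"],
    went_poorly=["Stakeholder alerting delayed by several hours"],
    plan_updates=["Add a vendor-outage escalation path"],
)
print(f"{pm.incident}: {pm.duration_days()} days to resolve")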
"If you're trying to hide what the actual issue was, you're trying to downplay it, well then you're robbing yourself of the opportunity to grow and become stronger and more versatile," says Straub.

Breaking down the cause of an outage and an enterprise's response is constructive, but playing the blame game rarely is. "It's all about listing the facts and digging into what exactly happened. Being open and transparent about it leads to a better outcome versus passing blame or walking in trying to deflect," says Rhoads-Herrera.

Are We Going to See More Multiday Outages?

Reliance on third parties is only growing, and the concomitant risk of that interconnectedness is growing along with it. Cyberattacks are in no way slowing down. Natural disasters are happening more often and becoming more destructive. Any of these can cause outages, and it is certainly possible that we will see more of them.

"The companies that are going to be most successful in the future are those that are looking at: what are my risks and making the investment to address those so that when the next event happens, regardless of root cause, they're able to quickly pivot and recover more quickly," says Schmitt.
-
WWW.INFORMATIONWEEK.COM

Knowledge Gaps Influence CEO IT Decisions

Richard Pallardy, Freelance Writer, April 23, 2025

CEOs are increasingly honest about their IT knowledge deficiencies. Anyone who has worked in tech in the past several decades has a story or two about the imperious and dismissive attitude taken by the C-suite toward tech issues. It is a cost center, a gamble, an unworthy investment. There are plenty of CEOs and other executives who still refuse to engage with the tech side of the business. But they are now viewed as dinosaurs -- relics of an age where tech was a novelty.

Now, CEOs and their cohorts have been compelled to acknowledge these errors. Many are attempting to correct them -- both personally and on an organizational level. A recent Istari survey found that 72% of CEOs felt uncomfortable making cybersecurity decisions. Respondents to the survey acknowledged the need to trust the knowledge of their tech counterparts -- an encouraging finding for CIOs.

The difficulty of this shift is understandable. CEOs were initially only responsible for industrial operations and the money they produced. Following the Industrial Revolution, their responsibilities became largely financial. Now they must juggle both fiscal and technological aspects to remain competitive. Strategic implementation of technology, both in expanding business and defending it against attackers, is increasingly essential. Doing so requires a working knowledge of tech trends and how they can be leveraged across the organization. This may be a difficult ask for people who come from strictly business backgrounds. Thus, it is incumbent upon them to both educate themselves and consult with their CIOs to ensure that informed decisions are made.

According to a 2021 MIT Sloan Management Review article, organizations whose leadership was savvy to new tech developments saw 48% more revenue growth. Now, when organizations seek a CEO, they increasingly ask whether their candidates possess the knowledge necessary to manage the risks and benefits of implementing new technologies such as AI while maintaining a strong security posture.

Here, InformationWeek explores the knowledge gaps that CEOs need to be aware of -- and how they can fill them -- with insights from Ashish Nagar, CEO of customer service AI company Level AI, and Susie Wee, CEO of DevAI, an AI company working on optimizing IT workflows.

What CEOs Don't Know

Business-trained CEOs may lack many technological skills -- an understanding of AI, how to best manage cybersecurity, and the ability to determine what infrastructure is a worthwhile investment. The narrow parameters of their training and the responsibilities of their previous roles leave many of them in the dark on how to manage the integration of technological aspects into the businesses they manage.

"Technology is not their business. The technology is used to fortify their offer," Wee says. "The question is, how can they use technology to compete while thinking first about their customers?"

A 2025 report issued by Cisco offers intriguing findings about the feelings of CEOs on IT knowledge gaps. Of the CEOs surveyed, some 73% were concerned that they had lost competitive advantage due to IT knowledge gaps in their organization. And 74% felt that their deficiencies in knowledge of AI were holding them back from making informed business decisions regarding the technology.
“The arc of what is possible right now with these modern technologies, especially with how fast things are changing, is what I see as the biggest gap,” Nagar says. “That’s where it creates friction between technical leaders and the CEO.” CEOs who cannot connect the dots between the capabilities of nascent tech and what it may offer in the future do a disservice to their organizations. According to Cisco, around 84% of respondents believed that CEOs will need to be increasingly informed about new technologies in coming years in order to operate effectively. However, other data from the report suggests that some CEOs view IT deficiencies as the responsibility of their teams -- only 26% saw problems with their own lack of knowledge. Related:“Some are very scared -- and actually frozen and not moving forward. They're deciding to allow legal and compliance to put up gates everywhere,” Wee observes. Other research, however, indicates that CEOs are taking ownership of their personal knowledge gaps -- 64% of respondents to an AND Digital survey felt that they were “analogue leaders.” That is, they were concerned that their skill sets did not match the increasing integration of digital into all aspects of business. And some 34% said that their digital knowledge was insufficient to lead their companies to the next growth phase. The survey found that female CEOs were more nervous about their knowledge gaps -- 46% thought they lacked the necessary technological skills. “The buck stops with me. If anything goes wrong in cyber for whatever reason, customers will not excuse me because it is in an area I can say somebody else is looking after,” said one CEO who spoke with Istari. One of their main complaints is the lack of usable data and how to obtain it. If they have structured data, many of them can adapt their existing skill sets around it and make effective decisions. But obtaining that information requires at least a general understanding of the landscape. If they can direct their subordinates to capture that data and massage it into a usable format, they can make more informed choices for their organizations. How CEOs Can Bridge the Gap CEOs are increasingly seeking tech training -- 78% were enrolled in digital upskilling courses according to the AND Digital survey. Some CEOs are even engaging in reverse mentoring, where they form partnerships in which their subordinates share their skill sets in a semi-structured environment, allowing them to leverage that knowledge. Advisory boards and other programs that put CEOs in contact with their tech teams are also useful in facilitating upward knowledge transfer. Digital immersion programs in which executives are embedded with their tech teams give them on-the-ground experience and allow them to integrate their experiences into the decisions that will ultimately influence the daily work of these groups. “In our organization we have weekly technology days where people share best practices on what people are learning in their lines of work,” Nagar says. There are even simulation programs, which allow CEOs to test their tech knowledge against real-life scenarios and then view the results within the safety of an artificial environment -- thus gaining useful feedback at no cost to their actual business. Wee thinks that they should be encouraging their teams to learn along with them. “When the Internet was formed, there were companies that did not allow people to use the Internet,” she says. They fell behind. 
The same may be true when CEOs do not see the benefits of encouraging their employees to experiment with AI, for example, and doing the same themselves. The tech side can play its part in getting CEOs on board too. "The question is: how to meet them where they're at," Wee says. To her, that means showing them the more pragmatic sides of new technology such as AI -- the tasks it can perform and how that can benefit the business. "Because of the recent technology changes, there's much more space on the CEO agenda for technology," Nagar adds.

How It Pays Off

A 2024 article in Nature describes the correlation between CEOs who have a background in scientific research and how the enterprises they run digitalize. The correlation is a positive one -- companies run by CEOs who know tech tend to be more aggressive on innovation and reap its benefits more rapidly. CEOs with scientific and technological knowledge bases are uniquely positioned to see the benefits of implementing new technology, investing in technological infrastructure, and supporting cybersecurity safeguards. While plenty of CEOs who come from other backgrounds can do the same, they may be more hesitant given their lack of understanding of the underlying principles.

A heightened awareness of the influence of tech, even on businesses without a strict technological focus, allows leadership to capitalize on developments and trends as they emerge rather than after they have been proven by peer organizations -- often saving on costs and offering a competitive advantage. Novel technology may be secured at bargain rates when it first becomes available -- and at the same time, the talent required to run it may be more available as well. Leadership that is able to discern these trends as they emerge can uniquely position an organization to capitalize on them.

"What CEOs are finding is that customers want to have an experience that is extremely technology forward: frictionless, faster, better, cheaper. If that is the case, the CEO has to know about the technology changes, because the decisions they're making right now are not just for today," Nagar imparts. "I think the motivation comes from working backwards from the customer."

CEOs who emphasize digital strategy -- and remain on the cutting edge by refining their own knowledge -- are far more likely to pursue it aggressively and reap the resulting revenue. Technologically literate CEOs are more attuned to risk management, too. They are more likely to solicit and examine data on the risks particular to their business and allocate resources accordingly. Rather than viewing cybersecurity as merely a cost center, they are able to discern the long-term benefits of a healthy security program and to understand that their cyber team adds immeasurable value even during periods where attacks do not occur. When an incident does occur, they are also more proficient at managing the situation in concert with their CIO and other tech staff -- no one in cybersecurity wants to work under a CEO who panics under fire.

Leveraging CIOs

Cisco reports this year that almost 80% of CEOs are now leaning more than ever on their CIOs and CTOs for vital tech knowledge. And 83% acknowledge that CTOs now play a key role in their business. Istari has found additional support for this notion -- its surveys find that CEOs now view their CIOs and CTOs as invaluable collaborators. Still, CIOs remain nervous about these collaborations.
Some 30% of American CIOs and 50% of their European counterparts did not think that their CEOs were equally accountable for tech problems. The tension cuts both ways. As one CEO told Istari, “At that moment of an attack, you put the company into the hands of supply chain people and IT people. And those are not groups you would normally, or intuitively, give that kind of confidence and trust to.” Participants in the survey -- both CEOs and CIOs -- urged a greater move toward both shared accountability and responsibility. Not only should both parties face the music when something goes wrong; they should also be equally involved in preparing for and obviating these crises in the first place. CIO to CEO Pipeline Cisco suggests that some 82% of CEOs anticipate a growing number of CTOs entering their ranks soon. Indeed, many of the world’s top CEOs don’t come from traditional business backgrounds. Sam Altman, Jeff Bezos, Demis Hassabis, and Mark Zuckerberg rose to their positions through their knowledge of engineering and tech. The trend has been observed for nearly a decade -- Harvard Business Review flagged it in 2018. People with this sort of mindset appear to flourish in modern business, with its ever-growing reliance on technology. They are perfectly positioned to both reap the benefits and manage the multitude of problems that ensue. The traditional business mindset, while not obsolete, is not as easily adaptable to such a volatile, multi-dimensional ecosystem. CIOs are already more involved in business strategy, with some 55% claiming they have been proactive in this regard according to one report. “A CTO role is a business role compared to being a pure technologist,” Wee says of her own experience. “You’re linking the needs of the business and its customers together with technology advancements -- and the technical teams who can deliver it.” This forward-facing mindset has been a fundamental shift in the C-suite -- CIOs, CTOs, and CISOs are no longer in the background. Their strategic capabilities and ability to forecast coming tech trends are increasingly valuable. And they may ultimately lead those who hold these positions to even more prominent leadership roles. About the AuthorRichard PallardyFreelance WriterRichard Pallardy is a freelance writer based in Chicago. He has written for such publications as Vice, Discover, Science Magazine, and the Encyclopedia Britannica.See more from Richard PallardyReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 132 Views
-
WWW.INFORMATIONWEEK.COM

Surgical Center CIO Builds an IT Department

John Edwards, Technology Journalist & Author, April 23, 2025

Since 2001, Regent Surgical Health has developed and managed surgery center partnerships between hospitals and physicians. The firm, based in Franklin, Tennessee, works to improve and evolve the ambulatory surgical center (ASC) model. Rusty Strange, Regent's CIO, is used to facing challenges in a field where lives are at stake. He joined Regent after a 17-year stint at ambulatory surgery center operations firm Amsurg, where he served as vice president of IT infrastructure and operations. In an online interview, Strange discusses the challenge he faced in building an entire IT department.

What is the biggest challenge you ever faced?

The biggest challenge I faced when I came to Regent was building an IT department from the ground up. As background, I was the first IT employee. At the time, we had no centralized IT structure -- each ASC operated with fragmented, non-standard systems managed by local staff or unvetted third parties. There was no cohesive strategy for clinical applications, data management, cybersecurity, or operational support.

What caused the problem?

The issue arose from rapid growth. The company was acquired, transforming into a high-growth organization overnight. Multiple ASCs were added to our portfolio over a short period, but we lacked the infrastructure to have sustainable success. There was no dedicated IT budget, no standardized software or hardware, and no staff trained to handle the increasing complexity of healthcare technology. This left us vulnerable to inefficiencies, security risks, and a lack of data to inform important decisions.

How did you resolve the problem?

I started by conducting a full assessment of existing systems across all locations to identify gaps and risks. I developed a multi-year plan to address foundational needs and capabilities, secured buy-in for an initial budget to hire our first functional area leaders, and partnered with a few firms that could provide us with the additional people resources to execute on multiple fronts. We standardized hardware and software, implementing cloud-based systems and a scalable network architecture. We also established policies for cybersecurity, business continuity, and staff training, while gradually scaling the team and outsourcing specialized tasks like penetration testing to additional trusted partners.

What would have happened if the problem wasn't swiftly resolved?

Without a stable IT department, the company would have been unable to grow effectively. Important data would have been at risk and unutilized, potentially leading to violations and missed insights. Operational inefficiencies, like mismatched scheduling systems or billing errors, would have eroded profitability and frustrated surgeons and patients alike. Over time, our reputation as a first-class ASC management partner would have suffered, potentially stalling further growth or even losing existing centers to competitors.

How long did it take to resolve the problem?

It took about 18 months to establish a fully operational IT department. The first six months were spent laying the foundation, hiring the core team, standardizing systems, and addressing immediate risks. The next year focused on refining processes, expanding the team, and rolling out core capabilities.
It was a phased approach, but we hit key milestones early to stabilize operations and gain organizational buy-in and trust.

Who supported you during this challenge?

The entire leadership team was a critical ally, trusting the vision and advocating for the investments needed to achieve it. My initial hires were integral; they adopted an entrepreneurial mindset, often setting direction while also being responsible for tactical execution. Our ASC administrators also stepped up, providing insights into their workflows and championing the changes with their staff. External partners helped accelerate implementation once we had the resources and process to engage them properly.

Did anyone let you down?

Not everyone was the right fit, and not everyone in the organization was ready for the accelerated pace of change, but those were not personal failures, just circumstances that provided learning opportunities for me and others in the company.

What advice do you have for other leaders?

Start with a clear vision and get buy-in from fellow executives early -- without it, you're facing a steep uphill climb. Prioritize quick wins, like fixing the most glaring risks and user pain points, to build momentum and credibility. Hire a small, versatile team you can trust -- quality beats quantity when you're starting out. Be patient but persistent; building something from scratch takes time, but cutting corners will haunt you later. Communicate constantly -- stakeholders need to understand why the change matters. Lastly, build a "team first" mindset so that individuals know they are supported and can go to others to brainstorm or for assistance.

Is there anything else you would like to add?

This experience reinforced the critical role technology plays in ASCs, where efficiency and patient safety are non-negotiable. It also taught me that resilience isn't just about systems -- it's about people. It's proof that even the toughest challenges can transform an organization if you tackle them head-on with the right team and strategy.
-
WWW.INFORMATIONWEEK.COMEdge AI: Is it Right for Your Business?John Edwards, Technology Journalist & AuthorApril 22, 20255 Min ReadDragos Condrea via Alamy Stock PhotoIf you haven't yet heard about edge AI, you no doubt soon will. To listen to its many supporters, the technology is poised to streamline AI processing. Edge AI presents an exciting shift, says Baris Sarer, global leader of Deloitte's AI practice for technology, media, and telecom. "Instead of relying on cloud servers -- which require data to be transmitted back and forth -- we're seeing a strategic deployment of artificial intelligence models directly onto the user’s device, including smartphones, personal computers, IoT devices, and other local hardware," he explains via email. "Data is therefore both generated and processed locally, allowing for real-time processing and decision-making without the latency, cost, and privacy considerations associated with public cloud connections." Multiple Benefits By reducing latency and improving response times -- since data is processed close to where it's collected -- edge AI offers significant advantages, says Mat Gilbert, head of AI and data at Synapse, a unit of management consulting firm Capgemini Invent. It also minimizes data transmission over networks, improving privacy and security, he notes via email. "This makes edge AI crucial for applications that require rapid response times, or that operate in environments with limited or high-cost connectivity." This is particularly true when large amounts of data are collected, or when there's a need for privacy and/or keeping critical data on-premises. Related:Initial Adopters Edge AI is a foundational technology that can drive future growth, transform operations, and enhance efficiencies across industries. "It enables devices to handle complex tasks independently, transforming data processing and reducing cloud dependency," Sarer says. Examples include: Healthcare. Enhancing portable diagnostic devices and real-time health monitoring, delivering immediate insights and potentially lifesaving alerts. Autonomous vehicles. Allowing real-time decision-making and navigation, ensuring safety and operational efficiency. Industrial IoT systems. Facilitating on-site data processing, streamlining operations and boosting productivity. Retail. Enhancing customer experiences and optimizing inventory management. Consumer electronics. Elevating user engagement by improving photography, voice assistants, and personalized recommendations. Smart cities. Edge AI can play a pivotal role in managing traffic flow and urban infrastructure in real-time, contributing to improved city planning. First Steps Related:Organizations considering edge AI adoption should start with a concrete business use case, advises Debojyoti Dutta, vice president of engineering AI at cloud computing firm Nutanix. "For example, in retail, one needs to analyze visual data using computer vision for restocking, theft detection, and checkout optimization, he says in an online interview. KPIs could include increased revenue due to restocking (quicker restocking leads to more revenue and reduced cart abandonment), and theft detection. The next step, Dutta says, should be choosing the appropriate AI models and workflows, ensuring they meet each use case's needs. Finally, when implementing edge AI, it's important to define an edge-based combination data/AI architecture and stack, Dutta says. The architecture/stack may be hierarchical due to the business structure. 
"In retail, we can have a lower cost/power AI infrastructure at each store and more powerful edge devices at the distribution centers." Adoption Challenges While edge AI promises numerous benefits, there are also several important drawbacks. "One of the primary challenges is the complexity of deploying and managing AI models on edge devices, which often have limited computational resources compared to centralized cloud servers," Sarer says. "This can necessitate significant optimization efforts to ensure that models run efficiently on these devices." Related:Another potential sticking point is the initial cost of building an edge infrastructure and the need for specialized talent to develop and maintain edge AI solutions. "Security considerations should also be taken into account, since edge AI requires additional end-point security measures as the workloads are distributed," Sarer says. Despite these challenges, edge AI's benefits of real-time data processing, reduced latency, and enhanced data privacy, usually outweigh the drawbacks, Sarer says. "By carefully planning and addressing these potential issues, organizations can successfully leverage edge AI to drive innovation and achieve their strategic objectives." Perhaps the biggest challenge facing potential adopters are the computational constraints inherent in edge devices. By definition, edge AI models run on resource-constrained hardware, so deployed models generally require tuning to specific use cases and environments, Gilbert says. "These models can require significant power to operate effectively, which can be challenging for battery-powered devices, for example." Additionally, balancing response time needs with a need for high accuracy demands careful management. Looking Ahead Edge AI is evolving rapidly, with hardware becoming increasingly capable as software advances continue to reduce AI models' complexity and size, Gilbert says. "These developments are lowering the barriers to entry, suggesting an increasingly expansive array of applications in the near future and beyond." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like0 Comments 0 Shares 140 Views
-
WWW.INFORMATIONWEEK.COM
Will Cuts at NOAA and FEMA Impact Disaster Recovery for CIOs?
Carrie Pallardy, Contributing Reporter
April 22, 2025
4 Min Read
November 1, 2019: Flooding in the village of Dolgeville, Herkimer County, New York
Philip Scalia via Alamy Stock Photo

Natural disasters are indiscriminate. Businesses and critical infrastructure are all vulnerable. In the wake of a disaster, public and private organizations face the responsibility of recovery and resilience. That typically requires public-private coordination, but sweeping staff cuts at the federal level could significantly reshape what those partnerships look like.

More than 600 workers were laid off, and total job cuts may exceed 1,000, at the National Oceanic and Atmospheric Administration (NOAA), of which the National Weather Service is a part. More than 200 employees at the Federal Emergency Management Agency (FEMA) have lost their jobs as well. Legal pushback resulted in some employees being reinstated across various federal agencies, but confusion still abounds, NBC News reports.

InformationWeek spoke with a local emergency manager and a cybersecurity leader to better understand the role these federal agencies play in disaster response and how their tenuous future could impact recovery and resilience.

Public-Private Partnership and Disaster Recovery

CIOs at enterprises need plans for operational continuity, disaster recovery, and cyber resilience. When a natural disaster hits, they can face major service disruptions and heightened vulnerability to cyber threats.

“Hurricane Sandy in New York or floods in New Orleans or fires in LA, they may create opportunities for folks to be a little more vulnerable to cyberattacks,” says Matthew DeChant, CEO of Security Counsel, a cybersecurity management consulting firm. “The disaster itself [creates] an opportunity for bad actors to step in.”

Speed is essential, whether responding to a weather-related incident or a cyberattack. “What we typically say to our clients is that in order to run a really good information security program you have to be very good at intelligence gathering,” says DeChant.

For weather-related disasters, the National Weather Service is a critical source of intelligence. “The National Weather Service in particular is a huge partner of emergency managers at the local, state, and federal level. Any time that we are expecting a weather-based incident, we are in constant communication with the National Weather Service,” Josh Morton, first vice president of the International Association of Emergency Managers and director of the Saluda County Emergency Management Division in South Carolina, tells InformationWeek.

FEMA plays a pivotal role in disaster recovery by facilitating access to federal resources, such as the Army Corps of Engineers. “Without FEMA or some other entity that allows us to access those resources through some type of centralized agency … you would have local jurisdictions and state governments attempting to navigate the complexities of the federal government without assistance,” Morton points out.

FEMA's other role in disaster recovery comes in the form of federal funding. “All disasters begin and end locally. The local emergency management office is really who is driving the train whenever it comes to the response. Once the local government becomes overwhelmed, then we move on to the state government,” Morton explains.
“Once we get to a point where the state becomes overwhelmed, that's when FEMA gets involved.”

The Cuts

The Department of Government Efficiency (DOGE) is orchestrating job cuts in the name of efficiency. In theory, greater efficiency would be a positive. “I don't think you will find anybody in [emergency] management that doesn't feel like there is reform needed,” Morton shares. “Following a disaster, most of us end up having to hire contractors just to help us get through the federal paperwork. There's a lot of barriers to accessing federal funding and federal resources.”

But are these mass job cuts achieving the goal of greater efficiency? In the case of FEMA and NOAA, cuts could compound preexisting staff shortages. In 2023, the US Government Accountability Office reported that action needed to be taken to address staffing shortages at FEMA as disasters increase in frequency and complexity.

When Hurricane Helene hit last year, Saluda County, where Morton works, was one of the affected areas. “A slower, more intricate reform is what is needed. What we really need right now is a scalpel and not a hacksaw,” says Morton. “If we simply go in and start just throwing everything out without taking a hard look at these programs, we're going to do a lot more damage than good.”

Rethinking Disaster Recovery Plans

“All business is generally run on good intelligence about their marketplace and various other factors here. So, if you can't get it from the government today then you're going to need to replace it,” says DeChant.

“Not every local emergency management office has the resources to be able to have commercial products available,” says Morton. “So, really having that resource in the National Weather Service is very beneficial to public safety.”

With the shifts in the federal government, Morton says it is more vital than ever for organizations to make sure they have insurance resources available. Enterprise leadership may also have to adapt in unexpected ways should calamity strike under these circumstances. “There's going to be a lot of uncertainty and that hurts the ability to make decisions with confidence,” says DeChant.
-
WWW.INFORMATIONWEEK.COM
CIO Angelic Gibson: Quell AI Fears by Making Learning Fun
Lisa Morgan, Freelance Writer
April 22, 2025
7 Min Read
Firn via Alamy Stock

Effective technology leadership today prioritizes people as much as technology. Just ask Angelic Gibson, CIO at accounts payable software provider AvidXchange.

Gibson began her career in 1999 as a software engineer and used her programming and people skills to consistently climb the corporate ladder, working for various companies including mattress company Sleepy's and cosmetics company Estee Lauder. By the time she landed at Stony Brook University, she had worked her way up to technology strategist and senior software engineer/architect, before becoming director of IT operations for American Tire Distributors. By 2013, she was SVP of information technology for technology solutions provider TKXS, and for the past seven years she's been CIO at AvidXchange.

“I moved from running large enterprise IT departments to SaaS companies, so building SaaS platforms and taking them to market while also running internal IT delivery is what I've been doing for the past 13 years. I love building world-class technology that scales,” says Gibson. “It's exciting to me because technology is hard work and you're always facing a plethora of problems, so you wake up every day knowing you get to solve difficult, complex problems. Very few people handle complex transformations well, so getting to do complex transformations with really smart people is invigorating. It inspires me to come to work every day.”

Angelic Gibson

One thing Gibson and her peers realized is that AI is anything but static. Its capabilities continue to expand as it becomes more sophisticated, so human-machine partnerships necessarily evolve. Many organizations have experienced significant pushback from workers who think AI is an existential threat. Organizations downsizing through intelligent automation, and the resulting headlines, aren't helping to ease AI-related fears. Bottom line, it's a change management issue that needs to be addressed thoughtfully.

“Technology has always been about increasing automation to ensure quality and increase speed to market, so to me, it's just another tool to do that,” says Gibson. “You've got to meet people where they're at, so we do a lot of talking about fears and constraints. Let's put it on the table, let's talk about it, and then let's shift to the art of the possible. What if [AI] doesn't take your job? What could you be doing?”

The point is to get employees to reimagine their roles. To facilitate this, Gibson identified people who could be AI champions, such as principal senior engineers who would love to automate lower-level thinking so they can spend more time thinking critically.

“What we have found is we've met resistance from more senior-level talent versus new talent, such as individuals working in business units who have learned AI to increasingly automate their roles,” says Gibson. “We have tons of use cases like that. Many employees have automated their traditional business operations role and now they're helping us increase automation throughout the enterprise.”

Making AI Fun to Learn

Today's engineers are constantly learning to keep pace with technology changes. Gibson has gamified learning by showcasing who's leveraging AI in interesting ways, which has increased productivity and quality while impacting AvidXchange customers in a positive way.
“We gamify it through hackathons and showcase it to the whole company at an all-hands meeting, just taking a moment to recognize awesome work,” says Gibson. “And then there are the brass tacks: We've got to get work done and have real productivity gains that we're accountable for driving.”

Over the last five years, Gibson has been creating a learning environment that curates the kinds of classes she wants every technologist to learn and understand, such as a prompt engineering certification course. Technologists' progress is also tracked.

“We certify compliance and security annually. We do the same thing with any new tech skill that we need our teammates to learn,” says Gibson. “We have them go through certification and compliance training on that skill set to show that they're participating in the training. It doesn't matter if you're a business analyst or an engineer, everyone's required to do it, because AI can have a positive impact in any role.”

Establish a Strong Foundation for Learning

Gibson has also established an AI Center of Excellence (CoE), made up of 22 internal AI thought leaders who are tasked with keeping up with all the trends. The group is responsible for bringing in different GenAI tools and deep learning technologies. They're also responsible for running proofs of concept (POCs). When a project is ready for production, the CoE ensures it has passed all AvidXchange cybersecurity requirements.

“Any POC must prove that it's going to add value,” says Gibson. “We're not just throwing a slew of technology out there for technology's sake, so we need to make sure that it's fit for purpose and that it works in our environment.”

To help ensure the success of projects, Gibson has established a hub-and-spoke operating model, so every business unit has an AI champion that works in partnership with the CoE. In addition, AvidXchange made AI training mandatory as of January 2024, because AI is central to its accounts payable solution. In fact, the largest customer use cases have achieved 99% payment processing accuracy using AI to extract data from PDFs and do quality checks, though humans do a final review to ensure that level of accuracy.

“What we've done is to take our customer-facing tool sets or internal business operations and hook it up to that data model. It can answer questions like, ‘What's the status of my payment?' We are now turning the lights on for AI agents to be available to our internal and external customer bases.”

Some employees working in different business units have transitioned to Gibson's team specifically to work on AI. While they don't have the STEM background traditional IT candidates have, they have deep domain expertise. AvidXchange upskills these employees on STEM so they can understand how AI works.

“If you don't understand how an AI agent works, it's hard for you to understand if it's hallucinating or if you're going to have quality issues,” says Gibson. “So, we need to make sure the answers are sound and accurate by making the agents quote their sources, so it's easier for people to validate outputs.”

Focus on Optimization and Acceleration

Instead of looking at AI as a human replacement, Gibson believes it's wiser to harness AI-assisted ways of working to increase productivity and efficiency across the board. For example, AvidXchange specifically tracks KPIs designed to drive improvement. In addition, its success targets are broken down from the year to quarters and months to ensure the KPIs are being met.
If not, the status updates enable the company to course-correct as necessary.

“We have three core mindsets: connected as people, growth-minded, and customer-obsessed. Meanwhile, we're constantly thinking about how we can go faster and deliver higher quality for our customers and nurture positive relationships across the organization so we can achieve a culture of candor and care,” says Gibson. “We have the data so we can see who's adopting tools and who isn't, and for those who aren't, we have a conversation about any fear they may have and how we can work through that together. We [also] want a good ecosystem of proven technologies that are easy to use. It's also important that people know they can come to us because it's a trusted partnership.”

She also believes success is a matter of balance.

“Any time you make a sweeping change that feels urgent, the human component can get lost, so it's important to bring people along,” says Gibson. “There's this art right now of how fast you can go safely while not losing people in the process. You need to constantly look at that to make sure you're in balance.”
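One lightweight way to operationalize Gibson's "make the agents quote their sources" rule is to validate agent output before it reaches a user. The sketch below is purely illustrative and is not AvidXchange's implementation; the [source: ID] citation convention and the document allow-list are hypothetical.

import re

# Hypothetical allow-list of internal document IDs an agent may cite.
KNOWN_SOURCES = {"AP-POLICY-001", "AP-FAQ-017", "VENDOR-CONTRACT-442"}

def validate_answer(answer: str) -> tuple[bool, list[str]]:
    """Accept an answer only if every citation resolves to a known source."""
    cited = re.findall(r"\[source:\s*([A-Z0-9-]+)\]", answer)
    is_valid = bool(cited) and all(doc_id in KNOWN_SOURCES for doc_id in cited)
    return is_valid, cited

ok, sources = validate_answer("The payment was released on the 14th [source: AP-FAQ-017].")
print(ok, sources)  # True ['AP-FAQ-017']

A check like this makes unsourced or fabricated citations easy to flag automatically, while human reviewers can spot-check the cited documents themselves.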
-
WWW.INFORMATIONWEEK.COM
The Kraft Group CIO Talks Gillette Stadium Updates and FIFA World Cup Prep
Joao-Pierre S. Ruth, Senior Editor
April 18, 2025
9 Min Read
Elevated view of Gillette Stadium, home of the New England Patriots NFL team, playing against the Dallas Cowboys, October 16, 2011, Foxborough, MA
Visions of America LLC via Alamy Stock Photo

The gridiron action of the New England Patriots naturally takes center stage in the public eye, but when the team's owner, holding company The Kraft Group, wanted to update certain tech resources, the plan encompassed its extensive operations.

Michael Israel, CIO for The Kraft Group, discussed with InformationWeek the plan for networking upgrades -- facilitated through NWN -- at Gillette Stadium, home field for the Patriots, as well as the holding company's other business lines, which include paper and packaging, real estate development, and the New England Revolution Major League Soccer club.

Talk us through not only the update effort for the stadium, but also the initial thoughts, initial plans, and pain points that got the process started for the company.

The roots of the business are in the paper manufacturing side. We have a paper and cardboard recycling mill in Montville, Conn. I have 10 cardboard box manufacturing plants from Red Lion, Pa. up through Dover, N.H., in the Northeast. There's also International Forest Products, a large commodities business that moves paper-based products all over the world. When we talk about our network, we have a standardized platform across all of our enterprise businesses, and my team is responsible for maintaining and securing all of the businesses.

We have a life cycle attached to everything that we buy, and when we look at what the next five years bring us, we have a host of networking projects coming up. It will be the largest set of networking upgrades that we do, from a strategic standpoint, over that period. The first of these, which NWN is currently working on, is a migration to a new voice-over-IP platform. Our existing platform was end-of-life; we're moving to a new cloud-based Cisco platform. They are managing that transition for us, and that again covers our entire enterprise.

[We're] building a new facility for the New England Patriots, their practice facility, which will be ready next April. Behind that we have the FIFA World Cup coming in next June-July [in 2026], and we have essentially seven matches here. It's the equivalent of seven Super Bowls over a six-week period.

Behind that comes a refresh of our Wi-Fi environment and a refresh of our overall core networking environment. Then it's time for a refresh of our firewalls. I have over 80 firewalls in my environment, whether virtual or physical. And to add insult to injury, on top of all of that, we may have a new stadium that we're building up in Everett for our soccer team, which is potentially scheduled to open in 2029 or 2030.

So as we were looking at all of this, the goal here is to create one strategic focus for all of these projects and not think about them individually. We sat down with NWN, saying, “Hey, typically I will be managing two to three years in advance. We need to take a look at what we're going to do over the next five years to make sure that we're planning for growth.
We're planning to manage all of this from standards and from a central location.”

Putting together what that strategic plan looks like over that period of time, and building a relationship with NWN to be able to support it and augment the staff that I have -- I don't have enough resources internally to handle all of this myself. And that's a large endeavor, so that's where this partnership started to form.

Can you describe the scale of your operations further? You mentioned hosting the equivalent of several Super Bowls in terms of operations at the stadium.

If you take the stadium as a whole, and we focus there for a second, for a Taylor Swift concert or a FIFA event coming in -- for Taylor Swift, we had 62,000 unique visitors on our Wi-Fi network at one time. There are 1,800 WAPs (wireless access points) supporting the stadium and our campus here now.

I got a note on my radio during one of the evenings saying there's 62,000 people. I said, “How can that be? There's only 52,000 guests.” Well, it turns out there was a TikTok challenge in one of our parking lots and there were 10,000 teenagers on the network doing TikTok. These are the things that we don't plan for, and FIFA is going to be a similar situation where typically we're planning for how many people are physically sitting in the stadium for a FIFA event. Our parking lots are becoming activation zones, so we're going to have to plan to support not just who's physically entering and scanning tickets and sitting in the bowl, but who's on the grounds as a whole.

And that's something that we haven't had to do in the past. It's something that some of the warmer stadiums down in the South or on the West Coast that host Super Bowls are used to, but there are 16 venues throughout North America that are supporting FIFA, and many of them, like us, are not used to having that large a crowd, and planning to support that is critical for us as we start to do this. We are now 15 months away, 14 months away. We're in high gear right now.

What led the push to make changes? The interests of the guests at the stadium? The team's needs? Or was it to meet the latest standards and expectations in technology and networking?

If you think about the networks, and it's kind of irrelevant whether it's here at the stadium or in our manufacturing plants, the networks have physically been -- if it's plugged in, if it's a Wi-Fi attachment, etcetera, you can track what is going on and what your average bandwidth utilization is.

What we were seeing over the last year, with the increased adoption of AI and the increased adoption of IoT in these environments, is that you're having more devices that are mission-critical, for example, on a Wi-Fi network, whereas in the past -- OK, there's 50,000 people in my bowl and they're on TikTok; they're on Instagram; they're doing whatever. We want them to have a good experience, but it's not mission-critical in my eyes. But now, if you're coming to the gate and we're adopting systems that are doing facial recognition for you to enter and touching a digital wallet and shredding your ticket and hitting your credit card and doing all these things -- they need to be lightning fast.

Michael Israel

If I'm doing transactions on mobile point-of-sale terminals -- half of my point-of-sale terminals are now mobile devices hanging off of Wi-Fi. There are almost 500 mobile point-of-sale terminals going around. If they are spinning and waiting to connect, you're going to lose business.
Same thing in my manufacturing plants, where my forklifts are now connected to Wi-Fi. We're tracking the trailers as they come in and watching for demurrage charges and looking at all of these pieces. These are IoT devices that weren't on the network in the past, and if the forklift isn't connecting, the operators are not being told where to put the materials that they're grabbing. Basically, they stop until they can reconnect. I can't have that.

The focus and the importance of the network continue to outpace what we think it's going to do, so what I did last year is kind of irrelevant, because the applications and the needs are inherently changing, and we as a society don't like to wait.

If someone's looking to buy something and that point-of-sale terminal is processing and processing -- we did a project last year with autonomous purchasing, where you enter a concession stand and you pick things off the shelf, and it knows what you're taking. Most stadiums have it at this point in time. But when we started that project, the vendor -- their merchant was actually processing in Europe, and the time to get an approval was 11 seconds. If you walked up to one of my regular point-of-sale, belly-up concession stands, the approval was coming in two and a half seconds. We turned around and said you can't wait nine seconds. People are in a queue line to get an approval on a credit card. We dug into it and found, well, we're hopping here, here, and here, and it's coming from Europe.

We had to get with that vendor and say, “You need to change how you're processing.” It's a question we hadn't asked before, but we had to get it back in line because -- this is not necessarily just a technology piece here -- if you're holding up a queue line, that's not a satisfactory relationship. If you think about every person going into that concession stand -- 11 seconds, 11 seconds, 11 seconds -- for every six people, you're delaying a minute. These are the things that, as we're going through planning sessions, it's not necessarily, “Oh, it's the latest technology,” but what's the speed of transaction, what's the speed of throughput? We have to be very diligent throughout that process.

How far out do you typically plan your IT budget? How often do you reassess to see what the ROI has been for a project such as this?

Typically, I am looking 18 months into the future. This is one of the rare times where I'm actually looking 36 to 48 months into the future because of everything that's kind of stacked up one after another, and I don't have the latitude if one starts to slip -- I can't take a 5-year set of projects and make it 9 years. I've got to have the depth to say, “Hey, we're going to finish this, but be ready, because while we're finishing up this voice-over-IP project, we're now in FIFA planning. We're now in network consolidation planning.” They're just stacked up one after another behind that, and the decisions we make now are going to impact what we're doing in 12 months, 24 months, etcetera.

Where do things stand right now in terms of this project? What's on the road map ahead?

Right now, we are in the heart of our voice-over-IP migration, which is the first major project we've set forth with NWN. We're expecting that to be finished before football season starts.
And then we'll have a couple of months of overlap while planning out what our core network upgrades are going to look like -- we'll be in the planning phases, and they'll start in late fall, early winter, right before football season ends.
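The queue math Israel cites works out directly from the two approval times he quotes: each guest on the slower path adds about 8.5 seconds of wait, so six guests add roughly 51 seconds, which is the "minute for every six people." A quick check in Python:

# Figures quoted by Israel above.
slow_approval_s = 11.0   # approval routed through the overseas processor
fast_approval_s = 2.5    # approval at a conventional point-of-sale stand

extra_per_guest_s = slow_approval_s - fast_approval_s
guests = 6
print(f"Added wait for {guests} guests: {guests * extra_per_guest_s:.0f} seconds")
# Added wait for 6 guests: 51 seconds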
-
WWW.INFORMATIONWEEK.COM
Lunar Data Centers Loom on the Near Horizon
Carrie Pallardy, Contributing Reporter
April 21, 2025
8 Min Read
tdbp via Alamy Stock Photo

We are looking far afield for the future of data centers: in deserts, under the sea, and of course, in space. Data centers in strange places are steadily moving from the realm of imagination to reality. Lonestar Data Holdings, for one, recently achieved milestones in testing its commercial lunar data center in orbit.

How does Lonestar's most recent mission push us forward on the path to commercial data centers around and on the Moon? What are the unique challenges that must be solved for launching and maintaining these data centers? As more governments and enterprises look to space, what lies ahead for competition and cooperation on the Moon and beyond?

The Mission

On Feb. 26, Lonestar launched its Freedom data center payload onboard the Athena Lunar Lander, a commercial Moon lander sent by American space exploration company Intuitive Machines. The landing did not go exactly as planned. The system landed on its side and powered down days earlier than anticipated, CNN reports. But Lonestar achieved several testing milestones prior to the landing. The company's technology demonstrated its ability to operate in the harsh environment of space. Lonestar was able to test its data storage capabilities and execute edge processing functions.

Lunar Opportunities and Challenges

Lunar data centers offer a number of advantages over their terrestrial counterparts. Ready access to solar power and natural cooling are useful, and their remote location is key to their appeal. “Throw in all the problems with climate change, natural disasters, human error, wars, nation states going after immutable data that's held in data centers,” says Chris Stott, CEO of Lonestar.

Data center customers want to put their data somewhere that is secure, accessible, and in compliance with data sovereignty laws. And space beckons.

While the promise of lunar data centers as a core piece of resiliency and disaster recovery strategy is clear, there is a lot of work being poured into making them a tangible, commercial option. Cost is an obvious hurdle for any space-based project. But given the appetite for space exploration and commercialization, there is certainly money to be found. Lonestar raised $5 million in seed funding in 2023, and the company is working on finishing its Series A funding, according to Stott. Other companies with celestial data center ambitions are attracting millions, too. Starcloud, previously Lumen Orbit, has raised more than $20 million, according to GeekWire. Starcloud is focused on space-based data centers not on the Moon but in low Earth orbit.

Companies need that kind of funding because it is expensive to launch these data centers and to design them. A lunar data center isn't going to look like one you would see on Earth. “When you take something into space, you have to redesign everything,” Stott acknowledges. The data center needs to operate in the vacuum of space. It needs to be built with space-qualified material; it must meet low outgassing criteria. It needs to be able to operate in an environment of extremes. On the lunar surface, a data center would be faced with two weeks of day and two weeks of night. “You've got 250 degrees Celsius in the sun,” says Stott. “But when it gets to lunar night, it goes … instantly to minus 200 degrees Celsius. It gets really cold.
So cold it fractures silicon.”

Lonestar is focusing its near-term efforts on placing its data centers at Lagrange points, specific spots between the Earth and Moon at which objects remain stable. With this approach, the data center will only experience four hours of shade every 90 days, and it will have batteries to power it during that time, Stott explains. “That changed everything for us because it means we don't have to wait for a ride to the Moon. We don't have to use a lunar lander. We can solve the day-night issue,” he adds.

Terrestrial data centers have white space and gray space. The former includes the servers and racks, while the latter supports those: communication, cooling, power. The same concept applies to space-based data centers, but the white space is referred to as a payload. “It's the load that pays … whether it be a camera or whether it be an astronaut or whether it be a data center,” says Stott. “Then our gray space: power, thermal, and communications. It's the satellite, it's the solar panels, the batteries for power, and satellite antennas for communications.”

When something fails or breaks in a terrestrial data center, it is a relatively simple matter to have someone walk in the door and fix it. Those boots on the ground aren't exactly a readily available option for lunar data centers. Gregory Ratcliff is chief innovation officer at Vertiv, a company that provides critical infrastructure solutions, including data centers. Vertiv is not directly involved in lunar data center projects, but it has plenty of experience here on Earth. Ratcliff tells InformationWeek, “Fault tolerance is really going to matter. [You'll] have a redundancy of systems, redundancy of those servers, and in some cases, you might just let it fail until you do the upgrade and work around it, which is a little different than we do in modern data centers on Earth.”

And then, of course, there are the logistical demands of arranging to launch anything into space. “They always say the hardest thing about getting to space is getting permission,” says Stott.

A Commercial Offering

Caddis Cloud Solutions, an advisory firm that specializes in data center development, is working with Lonestar. “We're really the … organization helping vet customers, understand the technical solutions that customers are looking for, presenting those solutions, helping them build out the physical infrastructure on ground,” Caddis Cloud Solutions CEO Scott Jarnagin tells InformationWeek.

Lonestar's lunar data center aims to provide resiliency as a service, disaster recovery, and edge processing services. And already there are government and enterprise customers on board. It is working with the state of Florida to provide data storage, for example. On the edge processing side, Lonestar counts Vint Cerf, one of the trailblazers behind the architecture of the internet, among its customers. Lonestar is also working with other data center operators. “They can provide the solutions to their customers as an extension of disaster recovery services,” Jarnagin explains.

Lonestar is planning to launch six data storage spacecraft between 2027 and 2030. They will orbit the Moon at the Lunar L1 Lagrange Point. “Each one carrying multiple petabytes' worth of storage and doing a ton of edge processing as well. Think of it like a smart device up in orbit around the Moon,” says Stott. “And they are precursors to what we'll put in the Moon later on.” It is booking capacity for those upcoming missions.
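To make the shade constraint concrete, the battery question reduces to simple energy arithmetic: capacity must cover the payload's draw for the four-hour eclipse Stott describes, plus margin. The numbers below are purely hypothetical placeholders (Lonestar has not published its payload power draw in this article); only the four-hour figure comes from Stott.

# Back-of-the-envelope battery sizing for the four-hour shade window.
payload_power_kw = 1.0   # hypothetical average draw of the data payload
shade_hours = 4.0        # shade per 90-day cycle at the Lagrange point, per Stott
design_margin = 1.5      # illustrative allowance for degradation and peak loads

required_kwh = payload_power_kw * shade_hours * design_margin
print(f"Battery capacity to ride through shade: {required_kwh:.1f} kWh")
# With these assumptions: 6.0 kWh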
While Lonestar is gearing up for those next missions, it is not alone in the world of space-based data centers. Plenty of companies, like Starcloud, are working on low Earth orbit data centers. Stott considers Lonestar to be a “different flavor” of space-based data center. “We are a very niche, premium, high-latency, high-security application. We don't want to be close to the planet. We want to be far enough away that we can still operate safely and have line-of-sight communications without any of the other complications that come with that,” he says.

The Future of Data Centers

While Lonestar is starting its commercial data centers in lunar orbit, it still plans to return to the surface of the Moon. And, of course, there is plenty of interest focused on launching a plethora of lunar technology. NASA's Artemis program is focused on establishing a long-term presence on the Moon. The Lunar Surface Technology Research (LuSTR) program and the Lunar Surface Innovation Initiative are driving the development of technologies to support Artemis missions to the Moon, as well as exploration on Mars.

As Lonestar and other space-based data center initiatives advance, what of terrestrial data centers? Ratcliff anticipates that advances made in lunar data centers will be useful here on Earth as well. “It'll feed backwards … power routing, sensor optimization, digital twins,” he says. “So, this is going to push us to be better both on Earth and on the Moon.”

For now, the Moon feels almost like a blank slate. But as more and more public and private enterprises launch lunar satellites and establish technology on its surface, competition for real estate -- for data centers and otherwise -- will heat up. While wealthy governments and enterprises will have a leg up in the competition, it isn't going to be a complete free-for-all. Plenty of space law exists today. Any initiative that goes to the Moon is subject to the laws of its country of origin. “If you're an American company and you're flying in space, American law applies to you. You don't get to skip anything,” says Stott.

Even within the bounds of law, there is an element of racing. Companies and countries want to reap the benefits of lunar initiatives. “Back in the '60s, it was flags and footprints. Today, it's resources and revenue,” says Stott. “When we're looking at the Moon, it is now just part of Earth's economic sphere. It's just another place we go to do business.”

But there is also a history of collaboration in space. “If you think back just not too long ago, the ISS [International Space Station] was built by a whole bunch of different countries … it was completely outside of politics and seems to work pretty well,” Ratcliff points out. The groups developing and launching lunar technology will have to figure out how to do so without compromising safety, and that will require at least some level of cooperation with one another.

Success on the Moon is likely just the beginning for the data center industry. “One day we will have Martian data centers. We will have Jovian-based data centers. Anywhere that humanity goes, we now take two things with us: the law and data,” says Stott. In all likelihood, we will have something else with us: cybercriminals. Space may be far more remote than any corner we could find here on Earth, but that doesn't mean threat actors won't seek and find vulnerabilities that enable cyberattacks in space.
“We are a hedge against terrestrial problems, but, of course, we have to stay one step ahead in terms of cybersecurity,” Stott recognizes.
-
WWW.INFORMATIONWEEK.COM
Building Secure Cloud Infrastructure for Agentic AI

Research and advisory firm Gartner predicts that agentic AI will be in 33% of enterprise software applications and enable autonomous decision-making for 15% of day-to-day work by 2028. As enterprises work toward that future, leaders must consider whether existing cloud infrastructure is ready for that influx of AI agents.

“Ultimately, they are run, hosted, and accessed across hybrid cloud environments,” says Nataraj Nagaratnam, IBM fellow and CTO of cloud security at technology and consulting company IBM. “You can protect your agentic [AI], but if you leave your front door open at the infrastructure level, whether it is on-prem, private cloud, or public cloud … the threat and risk increases.”

InformationWeek spoke with Nagaratnam and two other experts in cloud security and AI to understand why a secure cloud infrastructure matters and what enterprises can be doing to ensure they have that foundation in place as agentic AI use cases ramp up.

Security and Risk Considerations

The security and risk concerns of adopting agentic AI are not entirely unfamiliar to organizations. When organizations first looked at moving to the cloud, security, legacy tech debt, and potential data leakage were big pieces of the puzzle. “All the same principles end up being true, just when you move to an agentic-based environment, every possible exposure or weakness in that infrastructure becomes more vivid,” Matt Hobbs, cloud, engineering, data, and AI leader at professional services network PwC, tells InformationWeek.

For as novel and exciting as agentic AI feels, security and risk management of this technology starts with the basics. “Have you done the basic hygiene?” Nagaratnam asks. “Do you have enough authentication in place?”

Data is everything in the world of AI. It fuels AI agents, and it is a precious enterprise resource that carries a lot of risk. That risk isn't new, but it does grow with agentic AI. “It's not only the structured data that traditionally we have dealt with but [also] the explosion of unstructured data and content that GenAI, and therefore the agentic era, is able to tap into,” Nagaratnam points out.

AI agents add not only the risk of exposing that data, but also the potential for malicious action. “Can I get this agent to reveal information it's not supposed to reveal? Can I compromise it? Can I take advantage or inject malicious code?” Nagaratnam asks.

Enterprise leaders also need to think about the compliance dimensions of introducing agentic AI. “The agents and the system need to be compliant, but you inherit the compliance of that underlying … cloud infrastructure,” Nagaratnam says.

The Right Stakeholders

Any organization that has embarked on its AI journey likely already realizes the necessity of involving multiple stakeholders from across the business. CIOs, CTOs, and CISOs -- people already immersed in cloud security -- are natural leaders for the adoption of agentic AI. Legal and regulatory experts also have a place in these internal conversations around cloud infrastructure and embracing AI.

With the advent of agentic AI, it can also be helpful to involve the people who would be working with AI agents. “I would actually grab the people that are in the weeds right now doing the job that you're trying to create some automation around,” says Alexander Hogancamp, director of AI and automation at RTS Labs, an enterprise AI consulting company.
Involving these people can help enterprises identify use cases, recognize potential risks, and better understand how agentic AI can improve and automate workflows.

The AI space moves at a rapid clip -- as fast as a tidal wave, racehorse, or rocket ship, choose your simile -- and just keeping up with the onslaught of developments is its own challenge. Setting up an AI working group can empower organizations to stay abreast of everything happening in AI. They can dedicate working hours to exploring advancements in AI and regularly meet to talk about what this means for their teams, their infrastructure, and their business overall.

“These are hobbyists, people with passion,” says Hogancamp. “Identifying those resources early is really, really valuable.”

Building an internal team is critical, but no enterprise is an island in the world of agentic AI. Almost certainly, companies will be working with external vendors that need to be a part of the conversation. Cloud providers, AI model providers, and AI platform providers are all involved in an enterprise's agentic AI journey. Each of these players needs to undergo third-party risk assessment. What data do they have access to? How are their models trained? What security protocols and frameworks are in place? What potential compliance risks do they introduce?

Getting Ready for Agentic AI

The speed at which AI is moving is challenging for businesses. How can they keep up while still managing the security risks? Striking that balance is hard, but Hobbs encourages businesses to find a path forward rather than waiting indefinitely. “If you froze all innovation right now and said, ‘What we have is what we're going to have for the next 10 years,' you'd still spend the next 10 years ingesting, adopting, retrofitting your business,” he says. Rather than waiting indefinitely, organizations can accept that there will be a learning curve for agentic AI.

Each company will have to determine its own level of readiness for agentic AI. And cloud-native organizations may have a leg up. “If you think of cloud-native organizations that started with a modern infrastructure for how they host things, they then built a modern data environment on top of it. They built role-based security in and around API access,” Hobbs explains. “You're in a lot more prepared spot because you know how to extend that modern infrastructure into an agentic infrastructure.”

Organizations that are largely operating with an on-prem infrastructure and haven't tackled modernizing cloud infrastructure likely have more work ahead of adopting agentic AI. As enterprise teams assess their infrastructure ahead of agentic AI deployment, technical debt will be an important consideration. “If you haven't addressed the technical debt that exists within the environment, you're going to be moving very, very slow in comparison,” Hobbs warns.

So, you feel that you are ready to start capturing the value of agentic AI. Where do you begin? “Don't start with a multi-agent network on your first use case,” Hogancamp recommends. “If you try to jump right into agents doing everything now and not do anything different, then you're probably going to have a bad time.”

Enterprises need to develop the ability to observe and audit AI agents. “The more you allow the agent to do, the more substantially complex the decision tree can really be,” says Hogancamp. As AI agents become more capable, enterprise leaders need to think of them like they would an employee.
“You'd have to look at it as just the same as if you had an employee in your organization without the appropriate guidance, parameters, policy approaches, good judgment considerations,” says Hobbs. “If you have things that are exposed internally and you start to build agents that go and interrogate within your environment and leverage data that they should not be, you could be violating regulation. You're certainly violating your own policies. You could be violating the agreement that you have with your customers.”

Once enterprises find success with monitoring, testing, and validating a single agent, they can begin to add more. Robust logging, tracing, and monitoring are essential as AI agents act autonomously, making decisions that impact business outcomes. And as more and more agents are integrated into enterprise workflows -- ingesting sensitive data as they work -- enterprise leaders will need increasingly automated security to continuously monitor them in their cloud infrastructure. “Gone are the days where a CISO gives us a set of policies and controls and says [you] should do it. Because it becomes hard for developers to even understand and interpret. So, security automation is at the core of solving this,” says Nagaratnam.

As agentic AI use cases take off, executives and boards are going to want to see its value, and Hobbs is seeing a spike in conversations around measuring that ROI. “Is it efficiency in a process and reducing cost and pushing it to more AI? That's a different set of measurements. Is it general productivity? That's a different set of measurements,” he says.

Without a secure cloud foundation, enterprises will likely struggle to capture the ROI they are chasing. “We need to modernize data platforms. We need to modernize our security landscape. We need to understand how we're doing master data management better so that [we] can take advantage and drive faster speed in the adoption of an agentic workforce or any AI trajectory,” says Hobbs.
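The logging and tracing requirement above lends itself to a simple pattern: wrap every tool an agent can call so that each invocation leaves a structured, machine-readable audit record. The sketch below is a generic Python illustration, not a framework recommended by IBM, PwC, or RTS Labs; the lookup_invoice tool and its fields are hypothetical.

import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent-audit")

def audited(tool_name):
    """Wrap an agent tool so every call emits a structured audit record."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            record = {
                "event": "tool_call",
                "tool": tool_name,
                "call_id": str(uuid.uuid4()),
                "args": repr(args),
                "kwargs": repr(kwargs),
                "ts": time.time(),
            }
            try:
                result = fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                audit_log.info(json.dumps(record))
        return wrapper
    return decorator

@audited("lookup_invoice")
def lookup_invoice(invoice_id: str) -> dict:
    # Placeholder body; a real agent tool would call an internal system here.
    return {"invoice_id": invoice_id, "status": "paid"}

print(lookup_invoice("INV-1001"))

Records like these can feed the same SIEM and anomaly-detection pipelines that already monitor human users, which is one way the "treat agents like employees" framing becomes operational.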