InformationWeek
InformationWeek
News and Analysis Tech Leaders Trust
  • 1 persone piace questo elemento
  • 403 Articoli
  • 2 Foto
  • 0 Video
  • 0 Anteprima
  • Science &Technology
Cerca
Aggiornamenti recenti
  • WWW.INFORMATIONWEEK.COM
    Knowledge Gaps Influence CEO IT Decisions
    Richard Pallardy, Freelance WriterApril 23, 202510 Min ReadFeodora Chiosea via Alamy StockCEOs are increasingly honest about their IT knowledge deficiencies. Anyone who has worked in tech in the past several decades has a story or two about the imperious and dismissive attitude taken by the C-suite toward tech issues. It is a cost center, a gamble, an unworthy investment. There are plenty of CEOs and other executives who still refuse to engage with the tech side of the business. But they are now viewed as dinosaurs -- relics of an age where tech was a novelty. Now, CEOs and their cohorts have been compelled to acknowledge these errors. Many are attempting to correct them -- both personally and on an organizational level. A recent Istari survey found that 72% of CEOs felt uncomfortable making cybersecurity decisions. Respondents to the survey acknowledged the need to trust the knowledge of their tech counterparts -- an encouraging finding for CIOs.  The difficulty of this shift is understandable. CEOs were initially only responsible for industrial operations and the money they produced. Following the Industrial Revolution, their responsibilities became largely financial. Now they must juggle both fiscal and technological aspects to remain competitive.  Strategic implementation of technology, both in expanding business and defending it against attackers, is increasingly essential. Doing so requires a working knowledge of tech trends and how they can be leveraged across the organization. This may be a difficult ask for people who come from strictly business backgrounds. Thus, it is incumbent upon them to both educate themselves and consult with their CIOs to ensure that informed decisions are made.  Related:According to a 2021 MIT Sloan Management review, organizations whose leadership was savvy to new tech developments saw 48% more revenue growth. Now, when organizations seek a CEO, they increasingly ask whether their candidates possess the knowledge necessary to manage the risks and benefits of implementing new technologies such as AI while maintaining a strong security posture.  Here, InformationWeek explores the knowledge gaps that CEOs need to be aware of -- and how they can fill them -- with insights from Ashish Nagar, CEO of customer service AI company Level AI, and Susie Wee, CEO of DevAI, an AI company working on optimizing IT workflows. What CEOs Don’t Know Business-trained CEOs may lack many technological skills -- an understanding of AI, how to best manage cybersecurity, and the ability to determine what infrastructure is a worthwhile investment. The narrow parameters of their training and the responsibilities of their previous roles leave many of them in the dark on how to manage the integration of technological aspects into the businesses they manage.  Related:“Technology is not their business. The technology is used to fortify their offer,” Wee says. “The question is, how can they use technology to compete while thinking first about their customers?” Susie Wee, DevAIA 2025 report issued by Cisco offers intriguing findings about the feelings of CEOs on IT knowledge gaps. Of the CEOs surveyed, some 73% were concerned that they had lost competitive advantage due to IT knowledge gaps in their organization. And 74% felt that their deficiencies in knowledge of AI were holding them back from making informed business decisions regarding the technology.  “The arc of what is possible right now with these modern technologies, especially with how fast things are changing, is what I see as the biggest gap,” Nagar says. “That’s where it creates friction between technical leaders and the CEO.” CEOs who cannot connect the dots between the capabilities of nascent tech and what it may offer in the future do a disservice to their organizations. According to Cisco, around 84% of respondents believed that CEOs will need to be increasingly informed about new technologies in coming years in order to operate effectively. However, other data from the report suggests that some CEOs view IT deficiencies as the responsibility of their teams -- only 26% saw problems with their own lack of knowledge. Related:“Some are very scared -- and actually frozen and not moving forward. They're deciding to allow legal and compliance to put up gates everywhere,” Wee observes. Other research, however, indicates that CEOs are taking ownership of their personal knowledge gaps -- 64% of respondents to an AND Digital survey felt that they were “analogue leaders.” That is, they were concerned that their skill sets did not match the increasing integration of digital into all aspects of business. And some 34% said that their digital knowledge was insufficient to lead their companies to the next growth phase. The survey found that female CEOs were more nervous about their knowledge gaps -- 46% thought they lacked the necessary technological skills. “The buck stops with me. If anything goes wrong in cyber for whatever reason, customers will not excuse me because it is in an area I can say somebody else is looking after,” said one CEO who spoke with Istari. One of their main complaints is the lack of usable data and how to obtain it. If they have structured data, many of them can adapt their existing skill sets around it and make effective decisions. But obtaining that information requires at least a general understanding of the landscape. If they can direct their subordinates to capture that data and massage it into a usable format, they can make more informed choices for their organizations. How CEOs Can Bridge the Gap CEOs are increasingly seeking tech training -- 78% were enrolled in digital upskilling courses according to the AND Digital survey. Some CEOs are even engaging in reverse mentoring, where they form partnerships in which their subordinates share their skill sets in a semi-structured environment, allowing them to leverage that knowledge. Advisory boards and other programs that put CEOs in contact with their tech teams are also useful in facilitating upward knowledge transfer.  Digital immersion programs in which executives are embedded with their tech teams give them on-the-ground experience and allow them to integrate their experiences into the decisions that will ultimately influence the daily work of these groups.  “In our organization we have weekly technology days where people share best practices on what people are learning in their lines of work,” Nagar says. There are even simulation programs, which allow CEOs to test their tech knowledge against real-life scenarios and then view the results within the safety of an artificial environment -- thus gaining useful feedback at no cost to their actual business. Wee thinks that they should be encouraging their teams to learn along with them. “When the Internet was formed, there were companies that did not allow people to use the Internet,” she says. They fell behind. The same may be true when CEOs do not see the benefits of encouraging their employees to experiment with AI, for example, and doing the same themselves. The tech side can play its part in getting CEOs on board too. “The question is: how to meet them where they're at,” Wee says. To her, that means showing them the more pragmatic sides of new technology such as AI -- the tasks it can perform and how that can benefit the business. “Because of the recent technology changes, there’s much more space on the CEO agenda for technology,” Nagar adds. How It Pays Off A 2024 article in Nature describes the correlation between CEOs who have a background in scientific research and how the enterprises they run digitalize. The correlation is a positive one -- companies run by CEOs who know tech tend to be more aggressive on innovation and reap its benefits more rapidly.  CEOs with scientific and technological knowledge bases are uniquely positioned to see the benefits of implementing new technology, investing in technological infrastructure and supporting cybersecurity safeguarding. While plenty of CEOs who come from other backgrounds can do the same, they may be more reticent given their lack of understanding of the underlying principles.  A heightened awareness of the influence of tech, even on businesses without a strict technological focus, allows leadership to capitalize on developments and trends as they emerge rather than after they have been proven by peer organizations -- often saving on costs and offering a competitive advantage.  Novel technology may be secured at bargain rates when it first becomes available -- and at the same time, the talent required to run it may be more available as well. Leadership that is able to discern these trends as they emerge can uniquely position an organization to capitalize on them.  “What CEOs are finding is that customers want to have an experience that is extremely technology forward: frictionless, faster, better, cheaper. If that is the case, the CEO has to know about the technology changes, because the decisions they're making right now are not just for today,” Nagar imparts. “I think the motivation comes from working backwards from the customer.” Ashish Nagar, Level AICEOs who emphasize digital strategy -- and remain on the cutting edge by refining their own knowledge -- are far more likely to be hawkish on digital strategy and reap the resulting revenue. Technologically literate CEOs are more attuned to risk management, too. They are more likely to solicit and examine data on the risks particular to their business and allocate resources accordingly.  Rather than viewing cybersecurity as merely a cost center, they are able to discern the long-term benefits of a healthy security program and to understand that their cyber team adds immeasurable value even during periods where attacks do not occur. When an incident does occur, they are also more proficient at managing the situation in concert with their CIO and other tech staff -- no one in cybersecurity wants to work under a CEO who panics under fire.  Leveraging CIOs Cisco reports this year that almost 80% of CEOs are now leaning more than ever on their CIOs and CTOs for vital tech knowledge. And 83% acknowledge that CTOs now play a key role in their business. Istari has found additional support for this notion -- their surveys find that CEOs now view their CIOs and CTOs as invaluable collaborators. Still, CIOs remain nervous about these collaborations. Some 30% of American CIOs and 50% of their European counterparts did not think that their CEOs were equally accountable for tech problems.  The tension cuts both ways. As one CEO told Istari, “At that moment of an attack, you put the company into the hands of supply chain people and IT people. And those are not groups you would normally, or intuitively, give that kind of confidence and trust to.” Participants in the survey -- both CEOs and CIOs -- urged a greater move toward both shared accountability and responsibility. Not only should both parties face the music when something goes wrong; they should also be equally involved in preparing for and obviating these crises in the first place.  CIO to CEO Pipeline Cisco suggests that some 82% of CEOs anticipate a growing number of CTOs entering their ranks soon.  Indeed, many of the world’s top CEOs don’t come from traditional business backgrounds. Sam Altman, Jeff Bezos, Demis Hassabis, and Mark Zuckerberg rose to their positions through their knowledge of engineering and tech. The trend has been observed for nearly a decade -- Harvard Business Review flagged it in 2018. People with this sort of mindset appear to flourish in modern business, with its ever-growing reliance on technology. They are perfectly positioned to both reap the benefits and manage the multitude of problems that ensue.  The traditional business mindset, while not obsolete, is not as easily adaptable to such a volatile, multi-dimensional ecosystem. CIOs are already more involved in business strategy, with some 55% claiming they have been proactive in this regard according to one report. “A CTO role is a business role compared to being a pure technologist,” Wee says of her own experience. “You’re linking the needs of the business and its customers together with technology advancements -- and the technical teams who can deliver it.” This forward-facing mindset has been a fundamental shift in the C-suite -- CIOs, CTOs, and CISOs are no longer in the background. Their strategic capabilities and ability to forecast coming tech trends are increasingly valuable. And they may ultimately lead those who hold these positions to even more prominent leadership roles. About the AuthorRichard PallardyFreelance WriterRichard Pallardy is a freelance writer based in Chicago. He has written for such publications as Vice, Discover, Science Magazine, and the Encyclopedia Britannica.See more from Richard PallardyReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 50 Views
  • WWW.INFORMATIONWEEK.COM
    Surgical Center CIO Builds an IT Department
    John Edwards, Technology Journalist & AuthorApril 23, 20255 Min ReadRusty Strange, Regent Surgical HealthSince 2001, Regent Surgical Health has developed and managed surgery center partnerships between hospitals and physicians. The firm, based in Franklin, Tennessee, works to improve and evolve the ambulatory surgical center (ASC) model. Rusty Strange, Regent's CIO, is used to facing challenges in a field where lives are at stake. He joined Regent after a 17-year stint at ambulatory surgery center operations firm Amsurg, where he served as vice president of IT infrastructure and operations. In an online interview, Strange discusses the challenge he faced in building an entire IT department. What is the biggest challenge you ever faced? The biggest challenge I faced when I came to Regent was building an IT department from the ground up. As background, I was the first IT employee. At the time, we had no centralized IT structure -- each ambulatory surgical center ASC operated with fragmented, non-standard systems managed by local staff or unvetted third parties. There was no cohesive strategy for clinical applications, data management, cybersecurity, or operational support. What caused the problem? The issue arose from rapid growth. The company was acquired, transforming into a high-growth organization overnight. Multiple ASCs were added to our portfolio over a short period, but we lacked the infrastructure to have sustainable success. There was no dedicated IT budget, no standardized software or hardware, and no staff trained to handle the increasing complexity of healthcare technology. This left us vulnerable to inefficiencies, security risks, and a lack of data to inform important decisions. Related:How did you resolve the problem? I started by conducting a full assessment of existing systems across all locations to identify gaps and risks. I developed a multi-year plan to address foundational needs/capabilities, secured buy-in for an initial budget to hire our first functional area leaders, and partnered with a few firms that could provide us with the additional people resources to execute on multiple fronts. We standardized hardware and software, implementing cloud-based systems and a scalable network architecture. We also established policies for cybersecurity, business continuity, and staff training, while gradually scaling the team and outsourcing specialized tasks like penetration testing to additional trusted partners. What would have happened if the problem wasn't swiftly resolved? Without a stable IT department, the company would have been unable to grow effectively. Important data would have been at risk and unutilized, potentially leading to violations and missed insights. Operational inefficiencies, like mismatched scheduling systems or billing errors, would have eroded profitability and frustrated surgeons and patients alike. Over time, our reputation as a first-class ASC management partner would have suffered, potentially stalling further growth or even losing existing centers to competitors. Related:How long did it take to resolve the problem? It took about 18 months to establish a fully operational IT department. The first six months were spent laying the foundation, hiring the core team, standardizing systems, and addressing immediate risks. The next year focused on refining processes, expanding the team, and rolling out core capabilities. It was a phased approach, but we hit key milestones early to stabilize operations and gain organizational buy-in/trust. Who supported you during this challenge? The entire leadership team was a critical ally, trusting the vision and advocating for the investments needed to achieve it. My initial hires were integral, they were able to adopt an entrepreneurial mindset, often setting direction while also being responsible for tactical execution. Our ASC administrators also stepped up, providing insights into their workflows and championing the changes with their staff. External partners helped accelerate implementation once we had the resources and process to engage them properly. Related:Did anyone let you down? Not everyone was the right fit and not everyone in the organization was ready for the accelerated pace of change, but those were not personal failures, just circumstantial and provided learning opportunities for me and others in the company. What advice do you have for other leaders? Start with a clear vision and get fellow-executive buy-in early -- without it, you're facing a steep uphill climb. Prioritize quick wins, like fixing the most glaring risks and user pain points to build momentum and credibility. Hire a small, versatile team you can trust -- quality beats quantity when you’re starting out. Be patient but persistent; building something from scratch takes time, but cutting corners will haunt you later. Communicate constantly -- stakeholders need to understand why the change matters. Lastly, build a “team first” mindset so that individuals know they are supported and can go to others to brainstorm or for assistance. Is there anything else you would like to add? This experience reinforced the critical role technology plays in ASCs, where efficiency and patient safety are non-negotiable. It also taught me that resilience isn’t just about systems -- it’s about people. It’s proof that even the toughest challenges can transform an organization if you tackle them head-on with the right team and strategy. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 52 Views
  • WWW.INFORMATIONWEEK.COM
    Will Cuts at NOAA and FEMA Impact Disaster Recovery for CIOs?
    Carrie Pallardy, Contributing ReporterApril 22, 20254 Min ReadNovember 1, 2019: Flooding in the village of Dolgeville, Herkimer County, New YorkPhilip Scalia via Alamy Stock PhotoNatural disasters are indiscriminate. Businesses and critical infrastructure are all vulnerable. In the wake of a disaster, public and private organizations face the responsibility of recovery and resilience. That typically requires public-private coordination, but sweeping staff cuts at the federal level could significantly reshape what those partnerships look like.  More than 600 workers were laid off and the total job cuts may exceed 1,000 at the National Oceanographic and Atmospheric Administration (NOAA), of which the National Weather Service is a part. More than 200 employees at the Federal Emergency Management Agency (FEMA) have lost their jobs as well.  Legal pushback resulted in some employees being reinstated across various federal agencies, but confusion still abounds, NBC News reports.  InformationWeek spoke with a local emergency manager and a cybersecurity leader to better understand the role these federal agencies play in disaster response and how their tenuous future could impact recovery and resilience.  Public-Private Partnership and Disaster Recovery  CIOs at enterprises need plans for operational continuity, disaster recovery, and cyber resilience. When a natural disaster hits, they can face major service disruptions and a heightened vulnerability to cyber threats. Related:“Hurricane Sandy in New York or floods in New Orleans or fires in LA, they may create opportunities for folks to be a little more vulnerable to cyberattacks,” says Matthew DeChant is CEO of Security Counsel, a cybersecurity management consulting firm. “The disaster itself [creates] an opportunity for bad actors to step in.” Speed is essential, whether responding to a weather-related incident or a cyberattack. “What we typically say to our clients is that in order to run a really good information security program you have to be very good at intelligence gathering,” says DeChant.  For weather-related disasters, the National Weather Service is a critical source of intelligence. “The National Weather Service in particular is a huge partner of emergency managers at the local, state and federal level. Any time that we are expecting a weather-based incident, we are in constant communication with the national weather service,” Josh Morton, first vice president of the International Association for Emergency Managers and director of the Saluda County Emergency Management Division in South Carolina, tells InformationWeek.  FEMA plays a pivotal role in disaster recovery by facilitating access to federal resources, such as the Army Corps of Engineers. “Without FEMA or some other entity that allows us to access those resources through some type of centralized agency … you would have local jurisdictions and state governments attempting to navigate the complexities of the federal government without assistance,” Morton points out. Related:FEMA’s other role in disaster recovery comes in the form of federal funding.  “All disasters begin and end locally. The local emergency management office is really who is driving the train whenever it comes to the response. Once the local government becomes overwhelmed, then we move on to the state government,” Morton explains. “Once we get to a point where the state becomes overwhelmed, that's when FEMA gets involved.”  The Cuts The Department of Government Efficiency (DOGE) is orchestrating job cuts in the name of efficiency. In theory, greater efficiency would be a positive.  “I don't think you will find anybody in [emergency] management that doesn't feel like that there is reform needed,” Morton shares. “Following a disaster most of us end up having the higher contractors just to help us get through the federal paperwork. There's a lot of barriers to accessing federal funding and federal resources.” Related:But are these mass job cuts achieving the goal of greater efficiency? In the case of FEMA and NOAA, cuts could compound preexisting staff shortages. In 2023, the US Government Accountability Office reported that action needed to be taken to address staffing shortages at FEMA as disasters increase in frequency and complexity.  When Hurricane Helene hit last year, Saluda County, where Morton works, was one of the affected areas.  “A slower more intricate reform is what is needed. What we really need right now is a scalpel and not a hacksaw,” says Morton. “If we simply go in and start just throwing everything out without taking a hard look at these programs, we're going to do a lot more damage than good.” Rethinking Disaster Recovery Plans  “All business is generally run on good intelligence about their marketplace and various other factors here. So, if you can't get it from the government today then you're going to need to replace it,” says DeChant. “Not every local emergency management office has the resources to be able to have commercial products available,” says Morton. “So, really having that resource in the national weather service is very beneficial to public safety.” With the shifts in the federal government, Mortan says it is more vital than ever for organizations to make sure they have insurance resources available. Enterprise leadership may also have to adapt in unexpected ways should calamity strike under these circumstances. “There's going to be a lot of uncertainty and that hurts the ability to make decisions with confidence,” says DeChant. About the AuthorCarrie PallardyContributing ReporterCarrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.See more from Carrie PallardyReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 46 Views
  • WWW.INFORMATIONWEEK.COM
    CIO Angelic Gibson: Quell AI Fears by Making Learning Fun
    Lisa Morgan, Freelance WriterApril 22, 20257 Min ReadFirn via Alamy StockEffective technology leadership today prioritizes people as much as technology. Just ask Angelic Gibson, CIO at accounts payable software provider AvidXchange. Gibson began her career in 1999 as a software engineer and used her programming and people skills to consistently climb the corporate ladder, working for various companies including mattress company Sleepy’s and cosmetic company Estee Lauder. By the time she landed at Stony Brook University, she had worked her way up to technology strategist and senior software engineer/architect before becoming director, IT operations for American Tire distributors. By 2013, she was SVP, information technology for technology solutions provider TKXS and for the past seven years she’s been CIO at AvidXchange. “I moved from running large enterprise IT departments to SaaS companies, so building SaaS platforms and taking them to market while also running internal IT delivery is what I’ve been doing for the past 13 years. I love building world class technology that scales,” says Gibson. “It's exciting to me because technology is hard work and you’re always a plethora of problems, so you wake up every day, knowing you get to solve difficult, complex problems. Very few people handle complex transformations well, so getting to do complex transformations with really smart people is invigorating. It inspires me to come to work every day.” Related:Angelic GibsonOne thing Gibson and her peers realized is that AI is anything but static. Its capabilities continue to expand as it becomes more sophisticated, so human-machine partnerships necessarily evolve. Many organizations have experienced significant pushback from workers who think AI is an existential threat. Organizations downsizing through intelligent automation, and the resulting headlines, aren’t helping to ease AI-related fears. Bottom line, it’s a change management issue that needs to be addressed thoughtfully. “Technology has always been about increasing automation to ensure quality and increase speed to market, so to me, it's just another tool to do that,” says Gibson. “You’ve got to meet people where they're at, so we do a lot of talking about fears and constraints. Let’s put it on the table, let’s talk about it, and then let’s shift to the art of the possible. What if [AI] doesn't take your job? What could you be doing?” The point is to get employees to reimagine their roles. To facilitate this, Gibson identified people who could be AI champions, such as principal senior engineers who would love to automate lower level thinking so they can spend more time thinking critically. Related:“What we have found is we’ve met resistance from more senior level talent versus new talent, such as individuals working in business units who have learned AI to increasingly automate their roles,” says Gibson. “We have tons of use cases like that. Many employees have automated their traditional business operations role and now they're helping us increase automation throughout the enterprise.” Making AI Fun to Learn Today’s engineers are constantly learning to keep pace with technology changes. Gibson has gamified learning by showcasing who’s leveraging AI in interesting ways, which has increased productivity and quality while impacting AvidXchange customers in a positive way. “We gamify it through hackathons and showcase it to the whole company at an all-hands meeting, just taking a moment to recognize awesome work,” says Gibson. “And then there are the brass tacks: We’ve got to get work done and have real productivity gains that we're accountable for driving.” Over the last five years, Gibson has been creating a learning environment that curates the kinds of classes she wants every technologist to learn and understand, such as a prompt engineering certification course. Their progress is also tracked. Related:“We certify compliance and security annually. We do the same thing, with any new tech skill that we need our teammates to learn,” says Gibson. “We have them go through certification and compliance training on that skill set to show that they’re participating in the training. It doesn't matter if you’re a business analyst or an engineer, everyone's required to do it, because AI can have a positive impact in any role.” Establish a Strong Foundation for Learning Gibson has also established an AI Center of Excellence (CoE), made up of 22 internal AI thought leaders who are tasked with keeping up with all the trends. The group is responsible for bringing in different GenAI tools and deep learning technologies. They’re also responsible for running proofs of concept (POC). When the project is ready for production, the CoE ensures it has passed all AvidXchange cybersecurity requirements. “Any POC must prove that it's going to add value,” says Gibson. “We’re not just throwing a slew of technology out there for technology’s sake, so we need to make sure that it’s fit for purpose and that it works in our environment.” To help ensure the success of projects, Gibson has established a hub and spoke operating model, so every business unit has an AI champion that works in partnership with the CoE. In addition, AvidXchange made AI training mandatory as of January 2024, because AI is central to its account payables solution. In fact, the largest customer use cases have achieved 99% payment processing accuracy using AI to extract data from PDFs and do quality checks, though humans do a final review to ensure that level of accuracy.  “What we’ve done is to take our customer-facing tool sets or internal business operations and hook it up to that data model. It can answer questions like, ‘What’s the status of my payment?’ We are now turning the lights on for AI agents to be available to our internal and external customer bases.” Some employees working in different business units have transitioned to Gibson’s team specifically to work on AI. While they don’t have the STEM background traditional IT candidates have, they have deep domain expertise. AvidXchange upskills these employees on STEM so they can understand how AI works. “If you don't understand how an AI agent works, it’s hard for you to understand if it’s hallucinating or if you're going to have quality issues,” says Gibson. “So, we need to make sure the answers are sound and accurate by making the agents quote their sources, so it’s easier for people to validate outputs.” Focus on Optimization and Acceleration Instead of looking at AI as a human replacement, Gibson believes it’s wiser to harness an AI-assisted ways of working to increase productivity and efficiency across the board. For example, AvidXchange specifically tracks KPIs designed to drive improvement. In addition, its success targets are broken down from the year to quarters and months to ensure the KPIs are being met. If not, the status updates enable the company to course correct as necessary. “We have three core mindsets: Connected as people, growth-minded, and customer-obsessed. Meanwhile, we’re constantly thinking about how we can go faster and deliver higher quality for our customers and nurture positive relationships across the organization so we can achieve a culture of candor and care,” says Gibson. “We have the data so we can see who’s adopting tools and who isn’t, and for those who aren’t, we have a conversation about any fear they may have and how we can work through that together. We [also] want a good ecosystem of proven technologies that are easy to use. It’s also important that people know they can come to us because it’s a trusted partnership.” She also believes success is a matter of balance.  “Any time you make a sweeping change that feels urgent, the human component can get lost, so it’s important to bring people along,” says Gibson. “There’s this art right now of how fast you can go safely while not losing people in the process. You need to constantly look at that to make sure you’re in balance.” About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 43 Views
  • WWW.INFORMATIONWEEK.COM
    Edge AI: Is it Right for Your Business?
    John Edwards, Technology Journalist & AuthorApril 22, 20255 Min ReadDragos Condrea via Alamy Stock PhotoIf you haven't yet heard about edge AI, you no doubt soon will. To listen to its many supporters, the technology is poised to streamline AI processing. Edge AI presents an exciting shift, says Baris Sarer, global leader of Deloitte's AI practice for technology, media, and telecom. "Instead of relying on cloud servers -- which require data to be transmitted back and forth -- we're seeing a strategic deployment of artificial intelligence models directly onto the user’s device, including smartphones, personal computers, IoT devices, and other local hardware," he explains via email. "Data is therefore both generated and processed locally, allowing for real-time processing and decision-making without the latency, cost, and privacy considerations associated with public cloud connections." Multiple Benefits By reducing latency and improving response times -- since data is processed close to where it's collected -- edge AI offers significant advantages, says Mat Gilbert, head of AI and data at Synapse, a unit of management consulting firm Capgemini Invent. It also minimizes data transmission over networks, improving privacy and security, he notes via email. "This makes edge AI crucial for applications that require rapid response times, or that operate in environments with limited or high-cost connectivity." This is particularly true when large amounts of data are collected, or when there's a need for privacy and/or keeping critical data on-premises. Related:Initial Adopters Edge AI is a foundational technology that can drive future growth, transform operations, and enhance efficiencies across industries. "It enables devices to handle complex tasks independently, transforming data processing and reducing cloud dependency," Sarer says. Examples include: Healthcare. Enhancing portable diagnostic devices and real-time health monitoring, delivering immediate insights and potentially lifesaving alerts. Autonomous vehicles. Allowing real-time decision-making and navigation, ensuring safety and operational efficiency. Industrial IoT systems. Facilitating on-site data processing, streamlining operations and boosting productivity. Retail. Enhancing customer experiences and optimizing inventory management. Consumer electronics. Elevating user engagement by improving photography, voice assistants, and personalized recommendations. Smart cities. Edge AI can play a pivotal role in managing traffic flow and urban infrastructure in real-time, contributing to improved city planning. First Steps Related:Organizations considering edge AI adoption should start with a concrete business use case, advises Debojyoti Dutta, vice president of engineering AI at cloud computing firm Nutanix. "For example, in retail, one needs to analyze visual data using computer vision for restocking, theft detection, and checkout optimization, he says in an online interview. KPIs could include increased revenue due to restocking (quicker restocking leads to more revenue and reduced cart abandonment), and theft detection. The next step, Dutta says, should be choosing the appropriate AI models and workflows, ensuring they meet each use case's needs. Finally, when implementing edge AI, it's important to define an edge-based combination data/AI architecture and stack, Dutta says. The architecture/stack may be hierarchical due to the business structure. "In retail, we can have a lower cost/power AI infrastructure at each store and more powerful edge devices at the distribution centers." Adoption Challenges While edge AI promises numerous benefits, there are also several important drawbacks. "One of the primary challenges is the complexity of deploying and managing AI models on edge devices, which often have limited computational resources compared to centralized cloud servers," Sarer says. "This can necessitate significant optimization efforts to ensure that models run efficiently on these devices." Related:Another potential sticking point is the initial cost of building an edge infrastructure and the need for specialized talent to develop and maintain edge AI solutions. "Security considerations should also be taken into account, since edge AI requires additional end-point security measures as the workloads are distributed," Sarer says. Despite these challenges, edge AI's benefits of real-time data processing, reduced latency, and enhanced data privacy, usually outweigh the drawbacks, Sarer says. "By carefully planning and addressing these potential issues, organizations can successfully leverage edge AI to drive innovation and achieve their strategic objectives." Perhaps the biggest challenge facing potential adopters are the computational constraints inherent in edge devices. By definition, edge AI models run on resource-constrained hardware, so deployed models generally require tuning to specific use cases and environments, Gilbert says. "These models can require significant power to operate effectively, which can be challenging for battery-powered devices, for example." Additionally, balancing response time needs with a need for high accuracy demands careful management. Looking Ahead Edge AI is evolving rapidly, with hardware becoming increasingly capable as software advances continue to reduce AI models' complexity and size, Gilbert says. "These developments are lowering the barriers to entry, suggesting an increasingly expansive array of applications in the near future and beyond." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 42 Views
  • WWW.INFORMATIONWEEK.COM
    The Kraft Group CIO Talks Gillette Stadium Updates and FIFA World Cup Prep
    Joao-Pierre S. Ruth, Senior EditorApril 18, 20259 Min ReadElevated view of Gillette Stadium, home New England Patriots, NFL Team. Playing against Dallas Cowboys, October 16, 2011, Foxborough, Boston, MAVisions of America LLC via Alamy Stock PhotoThe gridiron action of the New England Patriots naturally takes center stage in the public eye, but when the team’s owner, holding company The Kraft Group, wanted to update certain tech resources, the plan encompassed its extensive operations.Michael Israel, CIO for The Kraft Group, discussed with InformationWeek the plan for networking upgrades -- facilitated through NWN -- at Gillette Stadium, home field for the Patriots, as well as the holding company’s other business lines, which include paper and packaging, real estate development, and the New England Revolution Major League Soccer club.Talk us through not only the update effort for the stadium, but what were the initial thoughts, initial plans, and pain points that got the process started for the company.The roots of the business are in the paper manufacturing side. We have a paper, cardboard recycling mill in Montville, Conn. I have 10 cardboard box manufacturing plants from Red Lion, Pa. up through Dover, N.H., in the northeast. International Forest Products, which is a large commodities business which moves paper-based products all over the world. When we talk about our network, we have a standardized platform across all of our enterprise businesses and my team is responsible for maintaining and securing all of the businesses.Related:We have a life cycle attached to everything that we buy and when we look at what the next five years brings to us, we were looking and saying we have the host of networking projects coming up. It will be the largest networking set of upgrades that we do from a strategic point over that period. So, the first of which NWN is currently working on is a migration to a new voice over IP platform. Our existing platform was end-of-life, moving to a new cloud-based platform, new Cisco platform. They are managing that transition for us and that again covers our entire enterprise.[We're] building a new facility for the New England Patriots, their practice facility, which will be ready next April. Behind that we have FIFA World Cup coming in next June-July [in 2026] and we have essentially seven matches here. It’s the equivalent of seven Super Bowls over a six-week period.Behind that comes a refresh of our Wi-Fi environment, refresh of our overall core networking environment. Then it’s time for a refresh of our firewalls. I have over 80 firewalls in my environment, whether virtual or physical. And to add insult to injury, on top of all of that, we may have a new stadium that we’re building up in Everett for our soccer team, which is potentially scheduled to open in 2029 or 2030.Related:So as we were looking at all of this, the goal here is to create one strategic focus for all of these projects and not think about them individually. Sat down with NWN saying, “Hey, typically I will be managing two to three years in advance. We need to take a look at what we’re going to do over the next five years to make sure that we’re planning for growth. We’re planning to manage all of this from standards and from a central location.”Putting together what that strategic plan looks like over that period of time and building a relationship with NWN to be able to support it, augment the staff that I have. I don’t have enough resources internal to handle all of this myself. And that’s a large endeavor, so that’s where this partnership started to form.Can you describe the scale of your operations further? You mentioned hosting the equivalent of several Super Bowls in terms of operations at the stadium.If you take the stadium as a whole and we focus there for a second, for Taylor Swift concert or a FIFA event coming in -- for Taylor Swift, we had 62,000 unique visitors on our Wi-Fi network at one time. There’s 1,800 WAPs (wireless access points) supporting the stadium and our campus here now.Related:I got a note on my radio during one of the evenings saying there’s 62,000 people. I said, “How can that be? There’s only 52,000 guests.” Well, it turns out there was a TikTok challenge in one of our parking lots and there were 10,000 teenagers on the network doing TikTok. These are the things that we don’t plan for, and FIFA is going to be a similar situation where typically we’re planning for how many people are physically sitting in the stadium for a FIFA event. Our parking lots are becoming activation zones, so we’re going to have to plan to support not just who's physically entering and scanning tickets and sitting in the bowl, but who’s on the grounds as a whole.And that’s something that we haven’t had to do in the past. It’s something that some of the warmer stadiums down in the South or in the in the West Coast who host Super Bowls, they're used to that type of scenario, but there are 16 venues throughout North America that are supporting FIFA and many of them, like us, we’re not used to having that large-size crowd and your planning to support that is critical for us as we start to do this. We are now 15 months away, 14 months away. We’re in high gear right now.What led the push to make changes? The interests are of the guests to the stadium? The team’s needs? Or was it to meet the latest standards and expectations in technology and networking?If you think about the networks, and it’s kind of irrelevant whether it’s here at the stadium or in our manufacturing plants, the networks have physically been -- if it’s plugged in, if it’s a Wi-Fi attachment, etcetera, you can track what is going on and what your average bandwidth utilization is.What we were seeing over the last year with the increased adoption of AI, with the increased adoption of IoT in these environments, you’re having more devices that are missio- critical, for example, on a Wi-Fi network, whereas in the past -- OK, there’s 50,000 people in my bowl and they’re on TikTok; they’re on Instagram; they’re doing whatever. We want them to have a good experience, but it’s not mission critical in my eyes. But now, if you’re coming to the gate and we’re adopting systems that are doing facial recognition for you to enter and touching a digital wallet and shredding your ticket and hitting your credit card and doing all these things -- they need to be lightning fast.Michael IsraelIf I’m doing transactions on mobile point of sale terminals, half of my point-of-sale terminals are now mobile devices hanging off of Wi-Fi. There’s all almost 500 mobile point of sale terminals going around. If they are spinning and waiting to connect, you’re going to lose business. Same thing in my manufacturing plants where my forklifts are now connected to Wi-Fi. We’re tracking the trailers as they come in and watching for demurrage charges and looking at all of these pieces. These are these are IoT devices that weren’t on the network in the past and if the forklift isn’t connecting, the operators are not being told where to put the materials that they’re grabbing.Basically, they stop until they can reconnect. I can’t have that.The focus and the importance of the network continues to outpace what we think it’s going to do, so what I did last year is kind of irrelevant because as the applications and as the needs are inherently changing, we are society doesn’t like to wait.If someone’s looking to buy something and that point-of-sale terminal is processing and processing -- we did a project last year with autonomous purchasing, where you enter a concession stand and you pick things off the shelf, and it knows what you’re taking. Most stadiums have it at this point in time. But when we started that project, the vendor -- their merchant was actually processing in Europe and the time to get an approval was 11 seconds. If you walked up to one of my regular point-of-sale, belly-up concession stands, the approval was coming in two and a half seconds. We turned around and said you can’t wait nine seconds. People are in a queue line to get an approval on a credit card. We dug into it and found well, we’re hopping here, here, here and it’s coming from Europe.We had to get with that vendor and say, “You need to change how you’re processing.” It’s a question we hadn’t asked before, but had to get it back in line because, this is not necessarily just a technology piece here, but if you’re holding up a queue line, that’s not a satisfactory relationship. If you think about every person going into that concession stand -- 11 seconds, 11 seconds, 11 seconds -- for every six people, you’re delaying a minute. These are the things that as we’re going through planning sessions, it’s not necessarily, “Oh, it’s the latest technology, but what’s the speed of transaction, what’s the speed of throughput?”  We have to be very diligent throughout that process.How far out do you typically plan your IT budget? How often do you reassess to see what the ROI has been for a project such as this?Typically, I am looking 18 months into the future. This is one of the rare times where I'm actually looking 36 to 48 months into the future because of everything that’s kind of stacked up one after another, and I don’t have the latitude if one starts to slip that -- I can't take a 5-year set of projects and make it 9 years. I got to have the depth to say, “Hey, we’re going to finish this, but be ready because while we’re finishing up this voice over IP project, we’re now in FIFA planning. We’re now in network consolidation planning.” They’re just stacked up one after another behind that and the decisions we make now are going to impact what we’re doing in 12 months, 24 months, etcetera.Where do things stand right now in terms of this project? What’s on the road map ahead?Right now, we are in the heart of our voice over IP migration, which is the first major project we’ve set forth with NWN. We’re expecting that to be finished before football season starts. And then we’ll have an overlap of a couple of months and planning out what our core network upgrades are going to look like -- we’ll be in the planning phases, and they’ll start in late fall, early winter, right before football season ends.About the AuthorJoao-Pierre S. RuthSenior EditorJoao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.See more from Joao-Pierre S. RuthReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 70 Views
  • WWW.INFORMATIONWEEK.COM
    Lunar Data Centers Loom on the Near Horizon
    Carrie Pallardy, Contributing ReporterApril 21, 20258 Min Readtdbp via Alamy Stock PhotoWe are looking far afield for the future of data centers: in deserts, under the sea, and of course, in space. Data centers in strange places are steadily moving from the realm of imagination to reality. Lonestar Data Holdings, for one, recently achieved milestones in testing its commercial lunar data center in orbit.  How does Lonestar’s most recent mission push us forward on the path to commercial data centers around and on the Moon? What are the unique challenges that must be solved for launching and maintaining these data centers? As more governments and enterprises look to space, what lies ahead for competition and cooperation on the Moon and beyond?  The Mission  On Feb. 26, Lonestar launched its Freedom data center payload onboard the Athena Lunar Lander, a commercial Moon lander sent by American space exploration company Intuitive Machines.  The landing did not go exactly as planned. The system landed on its side and powered down days earlier than anticipated, CNN reports. But Lonestar achieved several testing milestones prior to the landing. The company’s technology demonstrated its ability to operate in the harsh environment of space. Lonestar was able to test its data storage capabilities and execute edge processing functions.   Related:Lunar Opportunities and Challenges Lunar data centers offer a number of advantages over their terrestrial counterparts. Ready access to solar power and natural cooling are useful, and their remote location is key to their appeal.  “Throw in all the problems with climate change, natural disasters, human error, wars, nation states going after immutable data that's held in data centers,” says Chris Stott, CEO of Lonestar. Data center customers want to put their data somewhere that is secure, accessible, and in compliance with data sovereignty laws. And space beckons.  While the promise of lunar data centers as a core piece of resiliency and disaster recovery strategy is clear, there is a lot of work being poured into making them a tangible, commercial option.  Cost is an obvious hurdle for any space-based project. But given the appetite for space exploration and commercialization, there is certainly money to be found. Lonestar raised $5 million in seed funding in 2023, and the company is working on finishing its Series A funding, according to Stott.  Other companies with celestial data center ambitions are attracting millions, too. Starcloud, previously Lumen Orbit, has raised more than $20 million, according to GeekWire. Starcloud is focused on space-based data centers not on the Moon but in low Earth orbit.  Related:Companies need that kind of funding because it is expensive to launch these data centers and to design them. A lunar data center isn’t going to look like one you would see on Earth.  “When you take something into space, you have to redesign everything,” Stott acknowledges.  The data center needs to operate in the vacuum of space. It needs to be built with space-qualified material; it must meet low outgassing criteria. It needs to be able to operate in an environment of extremes.  On the lunar surface, a data center would be faced with two weeks of day and two weeks of night.  “You’ve got 250 degrees Celsius in the sun,” says Stott. “But when it gets to lunar night it goes … instantly to minus 200 degree Celsius. It gets really cold. So cold it fractures silicone.” Lonestar is focusing its near-term efforts on placing its data centers at Lagrange points, specific spots between the Earth and Moon in which objects remain stable. With this approach, the data center will only experience four hours of shade every 90 days, and it will have batteries to power it during that time, Stott explains.  “That changed everything for us because it means we don't have to wait for a ride to the Moon. We don't have to use a lunar lander. We can solve the day-night issue,” he adds. Related:Terrestrial data centers have white space and grey space. The former includes the servers and racks, while the latter supports those: communication, cooling, power. The same concept applies to space-based data centers, but the white space is referred to as a payload. “It's the load that pays … whether it be a camera or whether it be an astronaut or whether it be a data center,” says Stott. “Then our gray space: power, thermal and communications. It’s the satellite, it's the solar panels, the batteries for power, and satellite antennas for communications.” When something in a data center fails or breaks in a terrestrial data center, it is a relatively simple matter to have someone walk in the door and fix it. Those boots on the ground aren’t exactly a readily available option for lunar data centers.  Gregory Ratcliff is chief innovation officer at Vertiv, a company that provides critical infrastructure solutions, including data centers. Vertiv is not directly involved in lunar data center projects, but it has plenty of experience here on Earth.  Ratcliff tells InformationWeek, “Fault tolerance is really going to matter. [You’ll] have a redundancy of systems, redundancy of those servers and in some cases, you might just let it fail until you do the upgrade and work around it, which is a little different than we do in modern data centers on Earth.”   And then, of course, there are the logistical demands of arranging to launch anything into space. “They always say the hardest thing about getting to space is getting permission,” says Stott.  A Commercial Offering Caddis Cloud Solutions, an advisory firm that specializes in data center development, is working with Lonestar. “We're really the … organization helping vet customers, understand the technical solutions that customers are looking for, presenting those solutions, helping them build out the physical infrastructure on ground,” Caddis Cloud Solutions CEO Scott Jarnagin tells InformationWeek.  Lonestar’s lunar data center aims to provide resiliency as a service and disaster recovery and edge processing services. And already there are government and enterprise customers on board. It is working with the state of Florida to provide data storage, for example. On the edge processing side, Lonestar counts Vint Cerf, one of the trailblazers behind the architecture of internet, among its customers.  Lonestar is also working with other data center operators. “They can provide the solutions to their customers as an extension of disaster recovery services,” Jarnigan explains.  Lonestar is planning to launch six data storage spacecrafts between 2027 and 2030. They will orbit the Moon at the Lunar L1 Lagrange Point.  “Each one carrying multi petabytes worth of storage and doing a ton of edge processing as well. Think of it like a smart device up in orbit around the moon,” says Stott. “And they are precursors to what we'll put in the moon later on.” It is booking capacity for those upcoming missions.  While Lonestar is gearing up for those next missions, it is not alone in the world of space-based data centers. Plenty of companies, like Starcloud, are working on low Earth orbit data centers. Stott considers Lonestar to be a “different flavor” of space-based data center.  “We are a very niche, premium, high-latency, high-security application. We don't want to be close to the planet. We want to be far enough away that we can still operate safely and have line of sight communications without any of the other complications that come with that,” he says.  The Future of Data Centers While Lonestar is starting its commercial data centers in lunar orbit, it still plans to return to the surface of the Moon.  And, of course, there is plenty of interest focused on launching a plethora of lunar technology. NASA’s Artemis program is focused on establishing long-term presence on the Moon. The Lunar Surface Technology Research (LuSTR) program and Lunar Surface Innovation Initiative are driving the development of technologies to support Artemis missions to the Moon, as well as exploration on Mars.  As Lonestar and other space-based data center initiatives advance, what of terrestrial data centers?  Ratcliff anticipates that advances made in lunar data centers will be useful here on Earth as well. “It'll feed backwards … power routing, sensor optimization, digital twins,” he says. “So, this is going to push us to be better both on Earth and on the Moon.” For now, the Moon feels almost like a blank slate. But as more and more public and private enterprises launch lunar satellites and establish technology on its surface, competition for real estate -- for data centers and otherwise -- will heat up.  While wealthy governments and enterprises will have a leg up in the competition, it isn’t going to be a complete free-for-all. Plenty of space law exists today. Any initiative that goes to the Moon is subject to the laws of its country of origin. “If you're an American company and you're flying in space, American law applies to you. You don't get to skip anything,” says Stott.  Even within the bounds of law, there is an element of racing. Companies and countries want to reap the benefits of lunar initiatives. “Back in the 60s, it was flags and footprints. Today, it's resources and revenue,” says Stott. “When we're looking at the Moon, it is now just part of Earth’s economics sphere. It's just another place we go to do business.” But there is also a history of collaboration in space. “If you think back just not too long ago, the ISS [International Space Station] was built by a whole bunch of different countries … it was completely outside of politics and seems to work pretty well,” Ratcliff points out.  The groups developing and launching lunar technology will have to figure out how to do so without compromising safety, and that will require at least some level of cooperation with one another.  Success on the Moon is likely just the beginning for the data center industry. “One day we will have Martian data centers. We will have Jovian based data centers. Anywhere that humanity goes, we now take two things with us: the law and data,” says Stott.  In all likelihood, we will have something else with us: cybercriminals. Space may be far more remote than any corner we could find here on Earth, but that doesn’t mean threat actors won’t seek and find vulnerabilities that enable cyberattacks in space .  “We are a hedge against terrestrial problems, but, of course, we have to stay one step ahead in terms of cybersecurity,” Stott recognizes. About the AuthorCarrie PallardyContributing ReporterCarrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.See more from Carrie PallardyReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 55 Views
  • WWW.INFORMATIONWEEK.COM
    Building Secure Cloud Infrastructure for Agentic AI
    Research and advisory firm Gartner predicts that agentic AI will be in 33% of enterprise software applications and enable autonomous decision making for 15% of day-to-day work by 2028. As enterprises work toward that future, leaders must consider whether existing cloud infrastructure is ready for that influx of AI agents.  “Ultimately, they are run, hosted, and are accessed across hybrid cloud environments,” says Nataraj Nagaratnam, IBM fellow and CTO of cloud security at technology and consulting company IBM. “You can protect your agentic [AI], but if you leave your front door open at the infrastructure level, whether it is on-prem, private cloud, or public cloud … the threat and risk increases.” InformationWeek spoke with Nagaratnam and two other experts in cloud security and AI to understand why a secure cloud infrastructure matters and what enterprises can be doing to ensure they have that foundation in place as agentic AI use cases ramp up.  Security and Risk Considerations  The security and risk concerns of adopting agentic AI are not entirely unfamiliar to organizations. When organizations first looked at moving to the cloud, security, legacy tech debt, and potential data leakage were big pieces of the puzzle.  “All the same principles end up being true, just when you move to an agentic-based environment, every possible exposure or weakness in that infrastructure becomes more vivid,” Matt Hobbs, cloud, engineering, data, and AI leader at professional services network PwC, tells InformationWeek.  Related:For as novel and exciting as agentic AI feels, security and risk management of this technology starts with the basics. “Have you done the basic hygiene?” Nagaratnam asks. “Do you have enough authentication in place?” Data is everything in the world of AI. It fuels AI agents, and it is a precious enterprise resource that carries a lot of risk. That risk isn’t new, but it does grow with agentic AI.  “It's not only the structured data that traditionally we have dealt with but [also] the explosion of unstructured data and content that GenAI and therefore the agentic era is able to tap into,” Nagaratnam points out.  AI agents add not only the risk of exposing that data, but also the potential for malicious action. “Can I get this agent to reveal information it's not supposed to reveal? Can I compromise it? Can I take advantage or inject malicious code?” Nagaratnam asks. Enterprise leaders also need to think about the compliance dimensions of introducing agentic AI. “The agents and the system need to be compliant, but you inherit the compliance of that underlying … cloud infrastructure,” Nagaratnam says.  Related:The Right Stakeholders Any organization that has embarked on its AI journey likely already realizes the necessity of involving multiple stakeholders from across the business. CIOs, CTOs, and CISOs -- people already immersed in cloud security -- are natural leaders for the adoption of agentic AI. Legal and regulatory experts also have a place in these internal conversations around cloud infrastructure and embracing AI.  With the advent of agentic AI, it can also be helpful to involve the people who would be working with AI agents. “I would actually grab the people that are in the weeds right now doing the job that you're trying to create some automation around,” says Alexander Hogancamp, director of AI and automation at RTS Labs, an enterprise AI consulting company.  Involving these people can help enterprises identify use cases, recognize potential risks, and better understand how agentic AI can improve and automate workflows.  The AI space moves at a rapid clip -- as fast as a tidal wave, racehorse, rocket ship, choose your simile -- and just keeping up with the onslaught of developments is its own challenge. Setting up an AI working group can empower organizations to stay abreast of everything happening in AI. They can dedicate working hours to exploring advancements in AI and regularly meet to talk about what this means for their teams, their infrastructure, and their business overall.  Related:“These are hobbyists, people with passion,” says Hogancamp. “Identifying those resources early is really, really valuable.” Building an internal team is critical, but no enterprise is an island in the world of agentic AI. Almost certainly, companies will be working with external vendors that need to be a part of the conversation.  Cloud providers, AI model providers, and AI platform providers are all involved in an enterprise’s agentic AI journey. Each of these players needs to undergo third-party risk assessment. What data do they have access to? How are their models trained? What security protocols and frameworks are in place? What potential compliance risks do they introduce?  Getting Ready for Agentic AI  The speed at which AI is moving is challenging for businesses. How can they keep up while still managing the security risks? Striking that balance is hard, but Hobbs encourages businesses to find a path forward rather than waiting indefinitely. “If you froze all innovation right now and said, ‘What we have is what we're going to have for the next 10 years,’ you'd still spend the next 10 years ingesting, adopting, retrofitting your business, he says.  Rather than waiting indefinitely, organizations can accept that there will be a learning curve for agentic AI.  Each company will have to determine its own level of readiness for agentic AI. And cloud native organizations may have a leg up.  “If you think of cloud native organizations that started with a modern infrastructure for how they host things, they then built a modern data environment on top of it. They built role-based security in and around API access,” Hobbs explains. “You're in a lot more prepared spot because you know how to extend that modern infrastructure into an agentic infrastructure. Organizations that are largely operating with an on-prem infrastructure and haven’t tackled modernizing cloud infrastructure likely have more work ahead of adopting agentic AI.  As enterprise teams assess their infrastructure ahead of agentic AI deployment, technical debt will be an important consideration. “If you haven’t addressed the technical debt that exists within the environment you're going to be moving very, very slow in comparison,” Hobbs warns.  So, you feel that you are ready to start capturing the value of agentic AI. Where do you begin?  “Don't start with a multi-agent network on your first use case,” Hogancamp recommends. “If you try to jump right into agents do everything now and not do anything different, then you're probably going to have a bad time.” Enterprises need to develop the ability to observe and audit AI agents. “The more you allow the agent to do, the more substantially complex the decision tree can really be,” says Hogancamp.  As AI agents become more capable, enterprise leaders need to think of them like they would an employee.  “You'd have to look at it as just the same as if you had an employee in your organization without the appropriate guidance, parameters, policy approaches, good judgment considerations,” says Hobbs. “If you have things that are exposed internally and you start to build agents that go and interrogate within your environment and leverage data that they should not be, you could be violating regulation. You're certainly violating your own policies. You could be violating the agreement that you have with your customers.” Once enterprises find success with monitoring, testing, and validating a single agent, they can begin to add more.  Robust logging, tracing, and monitoring are essential as AI agents act autonomously, making decisions that impact business outcomes. And as more and more agents are integrated into enterprise workflows -- ingesting sensitive data as they work -- enterprise leaders will need increasingly automated security to continuously monitor them in their cloud infrastructure.  “Gone are the days where a CISO gives us a set of policies and controls and says [you] should do it. Because it becomes hard for developers to even understand and interpret. So, security automation is at the core of solving this,” says Nagaratnam.  As agentic AI use cases take off, executives and boards are going to want to see its value, and Hobbs is seeing a spike in conversations around measuring that ROI.  “Is it efficiency in a process and reducing cost and pushing it to more AI? That's a different set of measurements. Is it general productivity? That's a different set of measurement,” he says.  Without a secure cloud foundation, enterprises will likely struggle to capture the ROI they are chasing. “We need to modernize data platforms. We need to modernize our security landscape. We need understand how we're doing master data management better so that [we] can take advantage and drive faster speed in the adoption of an agentic workforce or any AI trajectory,” says Hobbs.  
    0 Commenti 0 condivisioni 104 Views
  • WWW.INFORMATIONWEEK.COM
    Nailing the Initiative: LexisNexis Leverages Agentic AI
    Jeff Reihl, CTO for the legal and professional side of LexisNexis, discusses how the introduction of AI changed their project plans.
    0 Commenti 0 condivisioni 63 Views
  • WWW.INFORMATIONWEEK.COM
    Why Polyfunctional Robots Are Gaining Momentum
    John Edwards, Technology Journalist & AuthorApril 18, 20255 Min ReadA multi-purpose robodog called Spot at a new technology fair in Turin, Italy, 2021Wirestock, Inc. via Alamy Stock PhotoAs technology advances, attention is rapidly turning toward polyfunctional robots, which incorporate a design and intelligent software that enables them to handle more than one task. Some models are adaptable enough to learn on the job, allowing them to fulfill tasks they weren't originally designed to handle. Liz James, a managing consultant with advisory firm NCC Group, describes polyfunctional robots as robotics systems designed for a wide range of different assignments rather than the single, highly optimized task. "Behind the technology is a desire to increase automation and reduce labor costs," she explains in an email interview. Growth Drivers The future of polyfunctional robots lies in their adaptability and ability to seamlessly integrate into connected systems, says Rodger Desai, CEO of secure identity verification provider Prove. "These robots are no longer limited to a single task," he says in an online interview. "They are evolving into generalists, capable of performing a wide range of functions, from assembly lines to medical assistance." In logistics environments, for example, robots are evolving from task-specific pick-and-place units to adaptive systems capable of sorting, packing, and inspecting while responding to real-time operational changes. Related:At Work Polyfunctional robots are already revolutionizing data center management, particularly in hardware maintenance and environmental monitoring, says Nick Esposito, founder of NYCServers, an IT infrastructure and hosting provider. In an email interview, he points to a colleague who manages a 50,000-square-foot facility that uses polyfunctional robots equipped with sensors and modular tools to perform various essential tasks, such as replacing faulty drives and checking server temperatures. "These robots quickly identify hot spots that could cause hardware failures and then replace components, saving hours compared to manual processes," he explains. Previously, separate teams handled hardware and environmental monitoring, resulting in delays and inefficiencies. "Now, a single robot performs both roles resulting in faster response times and fewer disruptions." Evolving AI and machine learning technologies will further accelerate polyfunctional robot trends, allowing adopters to autonomously analyze and improve workflows, Desai says. "This will make them indispensable in industries with high variability, such as e-commerce and agriculture, where conditions change on a daily basis," he says. "Just as cloud-based systems reduce programming complexity, polyfunctional robot adoption will spread to smaller businesses, which are currently falling behind large enterprises in robotics integration." Related:Market Players Boston Dynamics is among several leading polyfunctional robot manufacturers. One of the firm's mobile robots is Spot, which is targeted at construction and oil industries where it's used to conduct inspections and make data-driven decisions aimed at reducing manual labor costs while improving worker safety. "Additionally, Boston Dynamics' Stretch robot is transforming logistics, allowing companies, such as DHL, to automate warehouse unloading, increasing efficiency by as much as 25%," says Stanislav Khilobochenko, a vice president at medical device manufacturer at Clario in an online interview. On the industrial side, ABB Robotics offers YuMi, a robot that works on assembly lines, supporting human-robot collaboration in electronics and automotive manufacturing. Khilobochenko notes that YuMi recently assisted a major European manufacturer by reducing production time while maintaining precision in complex assembly tasks. Innovative robotics makers succeed because they invest in versatility and integration, Khilobochenko observes. "Boston Dynamics focuses on adaptability, making their robots useful across multiple industries," he says. "ABB thrives on precision and scalability, having formed partnerships with major corporations such as BMW and Nestlé." Related:Future Outlook With advancements in modular design and interoperability, polyfunctional robots have the potential to reshape industries by increasing efficiency, flexibility, and scalability across a wide range of applications, Desai says. James, meanwhile, expects polyfunctional robot adoption to grow steadily in settings where relatively low-skill and low-complexity tasks are currently handled by humans. "This is especially true in logistics and freight tasks, where there has already been significant investment in specialized robotic solutions." Remote facility monitoring is likely to gain widespread adoption in the near future, James says. "This is already being trialed by some infrastructure operators, using mobile robotics platforms and computer vision to take periodic measurements and/or samples around key areas." She also anticipates the arrival of "porter robots" delivering food and beverages to tables in restaurants. "I can also see this potentially being applied to ... porter functions in hospitals and care facilities, too." The Human Element It's important to ensure that the human element isn’t lost within the polyfunctional hype, James says. "As with automation, there's a potential for large parts of the economy to be impacted by this technology, and that could harm people who currently survive performing basic tasks," she explains. "These polyfunctional technologies should be rolled out in a very considered way, both at a societal level and at an individual organizational level." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 75 Views
  • WWW.INFORMATIONWEEK.COM
    How Will the Role of Chief AI Officer Evolve in 2025?
    Given the outsized role AI has taken in discussions about the future of work, not to mention humanity, it is no surprise that a C-level role focused on this technology has emerged.  “There's this trend line when something is massive, important, game-changing from an industry perspective, and people don't know how to react to it, they name a C-Level title who is ultimately responsible and accountable for incubating new ideas, trying new ways of working, and pivoting an organization culturally,” Casey Foss, chief commercial officer at West Monroe, a business and tech consulting firm, tells InformationWeek.  West Monroe conducted a survey of 1,000 professionals at the director, vice president, and senior vice president levels to get an idea of what they expect the C-suite to look like in five years. The chief AI officer (CAIO) role played a prominent part in the responses; 40% believe that this position will grow in influence and importance over the next five years.  What exactly does the CAIO role look like today, and how will it have to change to keep up with the breakneck development of AI technology and its capabilities?  What Does a Chief AI Officer Do? When a new leadership role begins its rise to prominence, there is a lot of room for individuals and companies to define what it looks like. A CAIO’s job at one company might look quite different from another.  Related:“Some AI officers are identifying use cases. Some are heavily focused on the technology. Some are heavily focused on upskilling the people and delivering value through how they do the work,” says Foss.  For Ivalua, a cloud-based procurement software company, AI was so important that the company’s founder David Khuat-Duy shifted from his position as CEO to CAIO at the beginning of this year.  His first objective in his new role is to deploy AI internally at the company. Then, he wants to take those lessons learned to customers.  LinkedIn appointed its CAIO, Deepak Agarwal, at the beginning of this year as well. “To help LinkedIn use the best AI technology available for our purpose and goals, my team and I focus on developing and deploying cutting-edge AI solutions that enhance how members and customers connect, learn, and grow on the platform,” he tells InformationWeek via email.  Given just how quickly AI is advancing, a primary responsibility of CAIO could be keeping up with those changes and understanding what that means for their enterprises.  Vivek Mohindra, senior vice president, corporate strategy at Dell Technologies, a technology solutions company, works closely with John Roese, Dell’s CTO and CAIO. “John and I collaborated to set up what we call AI radar. We really track on a daily basis the changes in our landscape and think about what the implications of that could be,” he shares.  Related:CAIOs could be heading up efforts to build models internally or finding ways to leverage externally built models. And managing data is intrinsic to that task.” There’s a lot of data categorization, storage, cleaning that needs to happen,” says Khuat-Duy.  As CAIOs identify use-cases for AI and champion their implementation, they are likely to be spearheading the accompanying changes in process and culture. “Chief AI officers must also serve as internal advocates for AI while guiding teams through emerging regulations, ethical considerations, and increasing stakeholder expectations for what AI can achieve,” says Agarwal.  The regulatory and ethical dimensions of the job are no small piece. AI governance is integral to the CAIO’s responsibilities.  No matter how a CAOI is tasked with doing their job, the overarching goal is almost certainly going to be delivering value from AI to their enterprise. How Does the Role Fit into the C-Suite? AI is poised to touch every aspect of business operations, if it isn’t already. That puts the CAIO in a position that requires communication and coordination with other executives and their teams.  Related:Roles like CTO, CIO, and chief data officer are natural complements to the CAIO. Indeed, Dell’s CAIO is also its CTO.  “My weekly meetings with the CTO are extremely important both because the CTO's office builds out a lot of the architecture that we have to fit into but also we have a big impact on with that architecture has to look like in order to get the data to the right place,” says Craig Martell, chief AI officer at Cohesity, an AI-powered data security company.  They might find themselves in regular conversations with a chief people officer or chief human resources officer about sourcing talent and how AI is reshaping the day-to-day for existing talent.  Interaction with the CFO is inevitable. How much of the budget can a CAIO secure for their AI strategy? AI comes with cybersecurity concerns. Naturally, the CISO is going to want face time with a CAIO to understand how to mitigate those concerns. Of course, CEOs and boards are going to want to know how AI can drive an enterprise toward its business goals.  Martell also finds himself spending a good deal of time on compliance issues, particularly around data usage. “The chief AI officers are going to have to become much more legally adept,” he notes. That is going to mean coordination with chief legal and compliance officers.  How Could the Role Change? The AI landscape is no stranger to shakeups. DeepSeek came onto the scene, sparking an avalanche of discussion around the possibility of a cheaper model undercutting the more entrenched players.  The enticing possibilities of AGI and quantum computing hover in the future, albeit one of uncertain timing. Big questions about how to regulate AI are still open. What do all of these potential changes mean for the position that is meant to shepherd organizations’ AI strategies?  For now, the role is less about exploring the possibilities of AI and more about delivering on its immediate, concrete value.  “This year, the role of the chief AI officer will shift from piloting AI initiatives to operationalizing AI at scale across the organization,” says Agarwal.  And as for those potential upheavals down the road? CAIO officers will no doubt have to be nimble, but Martell doesn’t see their fundamental responsibilities changing.  “You still have to gather the data within your company to be able to use with that model and then you still have to evaluate whether or not that model that you built is delivering against your business goals. That has never changed,” says Martell. Will Chief AI Officers Face Pressure to Deliver? AI is at the inflection point between hype and strategic value. “I think there's going to be a ton of pressure to find the right use cases and deploy AI at scale to make sure that we're getting companies to value,” says Foss.  CAIOs could feel that pressure keenly this year as boards and other executive leaders increasingly ask to see ROI on massive AI investments.  “Companies who have set these roles up appropriately, and more importantly the underlying work correctly, will see the ROI measurements, and I don't think that chief AI officers [at those] organizations should feel any pressure,” says Mohindra.  Will Chief AI Officers Last in the C-Suite? AI is certainly not going anywhere, but what about the CAIO?  Khuat-Duy argues that there will continue to be the need for a central team that manages this technology. “Managing data and the architecture around LLMs is clearly something that needs to be thought [about] in a central, global way for a company,” he says.  Mohindra envisions the CAIO role at Dell as a temporary one.  “This role is finite by design. It is to launch and integrate AI until it becomes inseparable from how our company operates and it is embedded in the DNA of the company, at which point you really don't need a separate role to capitalize the momentum that one needs for an AI-powered enterprise,” he says.  That could mean the CAIO simply steps into a different position. Or, the role gets folded into another. “I think the most likely path is sort of a combination of data and AI,” says Martell. The fate of the role, like its current form, is likely to be dictated by the needs of individual companies.  
    0 Commenti 0 condivisioni 58 Views
  • WWW.INFORMATIONWEEK.COM
    State-Led Security: Offensive Strategies and Immutable Storage
    The lack of nationwide security and privacy ordinance means that data governance is placed in the hands of states to develop their own regulations and requirements, yet less than half of all states have passed data privacy regulations as of February 2025. States such as California, Colorado, Indiana, and Maryland have comprehensive privacy laws whereas states such as Nevada, Vermont, and Washington have narrow privacy laws in effect. Some states enact strict policies and penalties in the face of a cyber-attack or breach. Other states offer the ability to correct security flaws without facing punishments or consequences. Recently, the Electronic Privacy Information Center (EPIC) issued a report outlining how state security laws fail to protect privacy and ways to improve.  With the onset of emerging technologies such as AI and quantum computing, it’s never been more critical to ensure that data is protected. This means that in the near future, businesses need to reevaluate their policies and procedures to meet evolving standards. Security teams who do not have the proper resources or knowledge are left vulnerable to attacks like ransomware.  During this turbulent time, it is important for business and security team leaders to equip themselves with a robust cyber resilience plan and strategy. The main concern is the ability for threat actors to take advantage of evolving legislation causing weaknesses in networks and systems.  Related:Threat Actors will Take AdvantageBad actors are aware of how vulnerable businesses currently are with changing policies and regulations and may try to capitalize on the current landscape. Threat actors will take advantage of the fact that security teams are not getting the most up-to-date threat information and analytics from national researchers. Recent cuts to the Multi-State Information Sharing and Analysis Center for example, means that organizations no longer have access to intelligence briefings on emerging cybersecurity threats, notices on the latest security patches, incident response support and penetration testing.  IT teams cannot equally fight blind spots in their networks such as misconfigurations and exposures while also staying ahead of advanced sophisticated attack strategies. The only way to combat this is to ensure a proactive offensive cybersecurity strategy that is prepared ahead of inevitable attacks. Adopting an Offensive Cybersecurity Strategy The key to mitigating fines, reputational damage, and operational loss lies in being on the offense and having a well-documented remediation strategy. This approach includes strong security controls, regular software and system updates, network monitoring and visibility, frequent employee training, incident response planning and ensuring immutable backup and segmentation of storage for your data.  Related:Strong access controls mean granting only the necessary access to employees so that they may perform their specific job function without viewing other data or information. This can be done using multifactor authentication, requiring multiple forms of verification. On top of this, conducting regular system and software updates that can patch vulnerabilities and scan for any rectifiable weaknesses in the system is a must. However, once these updates are made it is also important to have a granular view of the network and ecosystem. A robust employee training program should also be incorporated. Employees who have strong cyber maturity are less likely to leave a backdoor open for bad actors to break through.  No offensive security approach is complete without incidence response planning. If roles and responsibilities are outlined prior to an attack, then operational downtime may be minimized if a plan is put in motion at the first sign of malicious behavior.  Related:Deploying Immutable StorageIt is important to highlight that one of the best ways to ensure your data is protected and secured is to employ immutable storage. This is because it stores a backup copy of unalterable and undeletable data, offering strong protection against data tampering or loss. Applying facets of zero trust to your immutable storage (as mentioned in ZTDR best practices) completely segments the backup software from the backup storage and adheres to the 3-2-1 backup rule as well as the extended 3-2-1-1-0 backup rule. Employing a 3-2-1-1-0 backup strategy effectively leverages the strengths of both immutable and traditional backups, optimizing security and resource allocation. Immutable backups can be established through various infrastructures and stored across diverse platforms, including on-premise and cloud environments. Unlike conventional backups that may be susceptible to changes, immutable backups create unchangeable copies of your valuable data, offering an ironclad defense against accidental or malicious modifications. Another benefit of immutable backup is its ability to help companies maintain data integrity and comply with legal and regulatory data retention requirements, ensuring that original data copies are preserved accurately. Overall, with less federal oversight of security and privacy regulations, these requirements are now in states' hands. Some states offer a window to rectify security flaws without further penalty, while others enact stiff penalties for a customer breach along with requiring direct engagement from a state regulator. Therefore, business leaders need to keep their data safe to mitigate monetary loss and reputational damage by adopting an offensive cybersecurity strategy and deploying truly immutable storage to ensure compliance and resiliency.  
    0 Commenti 0 condivisioni 83 Views
  • WWW.INFORMATIONWEEK.COM
    Disinformation Security: Protection and Tactics
    John Edwards, Technology Journalist & AuthorApril 17, 20255 Min ReadTanapong Sungkaew via Alamy Stock PhotoDisinformation is on the rise as various media platforms make it easy for anyone to smear an enterprise for fun, strategic advantage, political gain, or even outright blackmail. Coping with this trend is proving to be both challenging and expensive. Disinformation is the deliberate spreading of false information with the intent to deceive or manipulate a target audience, often for political, economic or social gain, states Craig Watt, a threat intelligence consultant with cybersecurity firm Quorum Cyber. "This is different from misinformation, which is the sharing of false information without ill intent," he observes in an online interview. Disinformation can arrive in various forms, including propaganda, industrial sabotage, and conspiracy theories, says George Vlasto, head of trust and safety at Resolver, a unit of Kroll, a risk and financial advisory services firm. "The common theme is a narrative-based attack on a specific issue, entity or person," he notes via email. Disinformation Damage Disinformation can hurt an enterprise in several ways. Perhaps the most pernicious harm is reputational damage resulting from the spread of false information. "This can lead to a loss of trust among clients and partners," Watt says. "Erosion of trust can also manifest within the organization itself, affecting employee morale and productivity." Related:Direct financial losses can occur if false information is spread about a company’s financial stability, resulting in plummeting stock prices, Watt says. "Disinformation can also disrupt business operations if false information is disseminated regarding things such as supply chain issues." Specific disinformation can quickly metastasize into widespread misinformation, Vlasto warns. "If a particular piece of disinformation is widely shared by unwitting Internet users, it can rapidly become difficult to contain and may have a significant impact on brand reputation," he says. "Widely shared false allegations, even when disproved, can linger in the public imagination for a long time." A Growing Threat Disinformation is definitely on the rise, Watt says. "Technology advancements within social media and other digital platforms have made it easier to spread disinformation quickly and to a widespread demographic," he explains. "Additionally, advancements in artificial intelligence have enabled the creation of more sophisticated and convincing false content." Most ominously, disinformation is increasingly being weaponized as a tool for political and social manipulation, often by state-sponsored campaigns that aim to influence elections, destabilize societies, and undermine democratic institutions, Watt warns. Related:Protection Strategies The most effective way to protect against disinformation is to own the narrative, Vlasto states. "Monitor disinformation trends relevant to your sector and preempt these [falsehoods] with clear factual updates about your business," he says. Having a well-understood playbook in place to counter false narratives is also important, especially during significant political or business events, Vlasto says. "For example, if you're engaged in a sensitive M&A process, consider how you would respond to false information about the potential transaction," he explains. Protecting against disinformation involves a combination of awareness, critical thinking, and proactive measures, Watt says. Verify sources by checking their credibility and reputation before believing or sharing information, he suggests. "Information should also be cross-referenced across multiple reliable sources to ensure its accuracy." "Verify, verify, verify, and make sure the information is coming from the best and highest source," recommends Lisa Silverman, a senior managing director at risk and financial crimes advisory firm K2 Integrity. "If someone sends you something, ask where they got their information and, ideally, verify it through another -- hopefully an unbiased and trusted -- source." Related:If information seems truly wacky, double- and triple-check it, Silverman suggests. Yet also understand that seemingly preposterous information can sometimes be true. "We recently had a situation where a retired and very senior military officer had been reporting a piece of information about his career for about 10 years," she says. "When we undertook what we thought would be a routine verification as part of a larger project, that information turned out to be completely inaccurate." This revelation caused significant concern for the client, Silverman says, "yet the matter was eventually addressed without the public scandal that would have occurred if the facts had come out in a different way." Critical Thinking Watt advises individuals and teams to embrace critical thinking and to always be skeptical of sensational claims and clickbait headlines. "Before sharing any information, take a moment to verify its authenticity," he recommends. Sharing false information, even unintentionally, can contribute to the problem. Watt also recommends disinformation targets to report the fabrication to the operator of the platform where it was found. Vlasto believes that maintaining situational awareness is essential for spotting the migration of a narrative from the margins to the mainstream. "Like any risk mitigation strategy, the best way to deal with disinformation is at the greatest distance from your core interests," he suggests. "Don't wait until the digital barbarians are at the gate -- plan your response options in advance and ensure you have early visibility of emerging risks." Looking Forward "We can't control the intent of disinformation actors or the capabilities at their disposal," Watt acknowledges. "However, by gaining awareness of how disinformation tactics are employed, we can begin to halt the progress of these campaigns and contribute to the free sharing of legitimate content." About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 73 Views
  • WWW.INFORMATIONWEEK.COM
    Breaking Down the Walls Between IT and OT
    IT and OT systems can seem worlds apart, and historically, they have been treated that way. Different teams and departments managed their operations, often with little or no communication. But over time OT systems have become increasingly networked, and those two worlds are bleeding into one another. And threat actors are taking advantage.  Organizations that have IT and OT systems -- oftentimes critical infrastructure organizations -- the risk to both of these environments is present and pressing. CISOs and other security leaders are tasked with the challenge of breaking down the barriers between the two to create a comprehensive cybersecurity strategy.  The Gulf Between IT and OT  Why are IT and OT treated as such separate spheres when both face cybersecurity threats? “Even though there's cyber on both sides, they are fundamentally different in concept,” Ian Bramson, vice president of global industrial cybersecurity at Black & Veatch, an engineering, procurement, consulting, and construction company, tells InformationWeek. “It's one of the things that have kept them more apart traditionally.” Age is one of the most prominent differences. In a Fortinet survey of OT organizations, 74% of respondents shared that the average age of their industrial control systems is between six and 10 years old.  Related:OT technology is built to last for years, if not decades, and it is deeply embedded in an organization’s operations. The lifespan of IT, on the other hand, looks quite different. “OT is looked at as having a much longer lifespan, 30 to 50 years in some cases. An IT asset, the typical laptop these days that's issued to an individual in a company, three years is about when most organization start to think about issuing a replacement,” says Chris Hallenbeck, CISO for the Americas at endpoint management company Tanium.  Maintaining IT and OT systems looks very different, too. IT teams can have regular patching schedules. OT teams have to plan far in advance for maintenance windows, if the equipment can even be updated. Downtime in OT environments is complicated and costly.  The skillsets required of the teams to operate IT and OT systems are also quite different. On one side, you likely have people skilled in traditional systems engineering. They may have no idea how to manage the programmable logic controllers (PLC) commonly used in OT systems.  The divide between IT and OT has been, in some ways, purposeful. The Purdue model, for example, provides a framework for segmenting ICS networks, keeping them separate from corporate networks and the internet.  Related:But over time, more and more occasions to cross the gulf between IT and OT systems -- intentionally and unintentionally -- have arisen.  People working on the OT side want the ability to monitor and control industrial processes remotely. “If I want to do that remotely, I need to facilitate that connectivity. I need to get data out of these systems to review it and analyze it in a remote location. And then send commands back down to that system,” Sonu Shankar, CPO at Phosphorus, an enterprise xIoT cybersecurity company, explains.  The very real possibility that OT and IT systems intersect accidentally is another consideration for CISOs. Hallenbeck has seen an industrial arc welder plugged into the IT side of an environment, unbeknownst to the people working at the company.  “Somehow that system was even added to the IT active directory, and they just were operating it as if it was a regular Windows server, which in every way it was, except for the part where it was directly attached to an industrial system,” he shares. “It happens far too often.” Cyberattack vectors on IT and OT environments look different and result in different consequences.  “On the IT side, the impact is primarily data loss and all of the second order effects of your data getting stolen or your data getting held for ransom,” says Shankar. “Disrupt the manufacturing process, disrupt food production, disrupt oil and gas production, disrupt power distribution … the effects are more obvious to us in the physical world.” Related:While the differences between IT and OT are apparent, enterprises ignore the reality of the two worlds’ convergence at their peril. As the connectivity between these systems grows, so do their dependencies and the potential consequences of an attack.  Ultimately, a business does not care if a threat actor compromised an IT system or an OT system. They care about the impact. Has the attack resulted in data theft? Has it impacted physical safety? Can the business operate and generate revenue?  “You have to start thinking of that holistically as one system against those consequences,” urges Bramson.  Integrating IT and OT Cybersecurity How can CISOs create a cybersecurity strategy that effectively manages IT and OT? The first step is gaining a comprehensive understanding of what devices and systems are a part of both the IT and OT spheres of a business. Without that information, CISOs cannot quantify and mitigate risk. “You need to know that the systems exist. There’s this tendency to just put them on the other side of a wall, physical or virtual, and no one knows what number of them exist, what state they're in, what versions they're in,” says Hallenbeck.  In one of his CISO roles, Christos Tulumba, CISO at data security and management company Cohesity, worked with a company that had multiple manufacturing plants and distribution centers. The IT and OT sides of the house operated quite separately.  “I walked in there … I did my first network map, and I saw all this exposure all over,” he tells InformationWeek. “It raised a lot of alarms.” Once CISOs have that network map on the IT and OT side, they can begin to assess risk and build a strategy for mitigation. Are there devices running on default passwords? Are there devices running suboptimal configurations or vulnerable firmware? Are there unnecessary IT and OT connections?  “You start prioritizing and scheduling remediation actions. You may not be able to patch every device at the same time. You may have to schedule it, and there needs to be a strategy for that,” Shankar points out.  The cybersecurity world is filled with noise. The latest threats. The latest tools to thwart those threats. It can be easy to get swept up and confused. But Shankar recommends taking a step back.  “The basic security hygiene is what I would start with before exploring anything more complex or advanced,” he says. “Most CISOs, most operators continue to ignore the basic security hygiene best practices and instead get distracted by all the noise out there.” And as all cybersecurity leaders know, their work is ongoing. Environments and threats are not static. CISOs need to continuously monitor IT and OT systems in the context of risk and the business’ objectives. That requires consistent engagement with IT and OT teams.  “There needs to be an ongoing dialogue and ongoing reminder prompting them and challenging them to be creative on achieving those same security objectives but doing it in context of their … world,” says Hallenbeck.  CISOs are going to need resources to achieve those goals. And that means communicating with other executive leaders and their boards. To be effective, those ongoing conversations are not going to be deep, technical dives into the worlds of IT and OT. They are going to be driven by business objectives and risks: dollars and cents.  “Once you have your plan, be able to put it in that context that your executives will understand so that you can get the resources [and] authorities to take action,” says Bramson. “At the end of the day, [this] is a business problem and when you touch OT, you're touching the lifeline, the life’s breath of how that business operates, how it generates revenue.” Building an IT/OT Skillset IT and OT security require different skillsets in many ways, and CISOs may not have all of those skills readily at their fingertips. The digital realm is a far cry from that of industrial technology. It is important to recognize the knowledge gaps and find ways to fill them.  “That can be from hiring, that can be from outside consultants’ expertise, key partnerships,” says Bramson.  An outside partner with expertise in the OT space can be an asset when CISOs visit OT sites -- and they should make that in-person trip. But if someone without site-specific knowledge shows up and starts rattling off instructions, conflict with the site manager is more likely than improved cybersecurity. “I would offer that they go with a partner or with someone who's done it before; people who have the creditability, people who have been practitioners in this area, who have walked sites,” says Bramson. That can help facilitate better communication. Security leaders and OT leaders can share their perspectives and priorities to establish a shared plan that fits into the flow of business.  CISOs also need internal talent on the IT and OT sides to maintain and strengthen cybersecurity. Hiring is a possibility, but the well-known talent constraints in the wider cybersecurity pool become even more pronounced when you set out to find OT security talent.  “There aren't a lot of OT-specific security practitioners in general and having people within these businesses that are in the OT side that have security specific training, that's vanishingly rare,” says Hallenbeck.  But CISOs needn’t despair. That talent can be developed internally through upskilling. Tulumba actually advocates for upskilling over hiring from the outside. “I've been like that my entire career. I think the best performing teams by and large are the ones that get promoted from within,” he shares. As IT and OT systems inevitability interact with one another, upskilling is important on both sides. “Ultimately cross-train your folks … to understand the IT side and the OT side,” says Tulumba.  
    0 Commenti 0 condivisioni 80 Views
  • WWW.INFORMATIONWEEK.COM
    3 Ways to Build a Culture of Experimentation to Fuel Innovation
    Tameem Iftikhar, CTO, GrowthLoopApril 16, 20254 Min ReadBrain light via Alamy StockBuilding a thriving tech company isn’t all about better code or faster product launches -- you have to foster an environment where experimentation is the norm. Establishing a culture where employees can safely push boundaries encourages adaptability, drives long-term innovation, and leads to more engaged teams. These are critical advantages in the face of high turnover and intense competition.   Through my own process of trial and error, I’ve learned three key strategies engineering leaders can use to make fearless experimentation part of their team’s DNA. Strategy #1: Normalize failure and crazy ideasA few months into my first job, I took down several production servers while trying to improve performance. Instead of blaming me, my manager focused on what we could learn from the experience. That moment gave me the confidence to push boundaries again. From then on, I knew that failure was not an end, but a steppingstone to future wins. It’s now a mindset that I encourage every leader to adopt. Innovation is messy and risky -- here’s how leaders can embrace the chaos and bold thinking: Build a "no-judgment zone": Before every brainstorming and feedback session, re-establish that there are no bad ideas. This might seem straightforward, but it can make the team feel safe, suggesting radical solutions and voicing their opinions.  Related:Encourage "what if?" questions: Out-there ideas like “What would this look like if we had no technical constraints?” or “What would it take to make this 10x better instead of just 10%?” encourage teams to consider problems and solutions from a new perspective. Leaders should walk the walk by asking these same types of questions in meetings. Celebrate the process, not just the outcome: Acknowledge smart risks – even if they don’t succeed. Whether it’s a shoutout in a team meeting or a more detailed discussion in Slack, take the time to highlight the idea and why it is worth pursuing.    Use failure to fuel future successes: If a project falls short of its goals, don’t bury it and move on right away. Instead, hold a session to discuss the positives and what can be done differently next time. This turns missteps into momentum and helps the team get more savvy with every experiment.  Strategy #2: Give experimentation a frameworkFor experimentation to flourish, leaders must provide teams with the guidelines and resources they need to turn bold thoughts into tangible products. I suggest the following: Allow for proof-of-concept testing: Dedicate space for testing in the product development lifecycle, especially when designing technical specifications. Related:Make room for wild ideas: One of my favorite approaches is adding a "Crazy Ideas" section to our product or technical spec templates. Just having it there inspires the team to push boundaries and propose unconventional solutions. Establish hackathons with purpose: At our company, we encourage hackathons that step outside our product roadmap to broaden our thinking. And don’t forget to make them fun! Let teams pitch and vote on ideas, adding some excitement to the process. Use AI to unlock creativity: AI allows developers to build faster and focus on higher-order thinking. Provide the team with AI tools that automate repetitive tasks, speed up iteration cycles, and generate quick proofs-of-concept, allowing them to spend more time innovating and less on process-heavy tasks. AI also helps teams prototype multiple versions of a new solution, letting them test and adjust at speed. I’ve seen these strategies produce incredible results from my teams. Our hackathons have led to some of our most important breakthroughs, including our first AI feature and the implementation of internal tools that have significantly improved our workflows. Related:Strategy #3: Test, learn, and refineHigh-performing teams know that experimentation isn't failure -- it’s insight in disguise. Here’s how to maintain a strong understanding of how each project is progressing: Set clear success metrics: Experimentation works best when teams know what they’re testing for. The key is setting a clear purpose for each experiment and determining quickly whether it’s heading in the right direction. Regularly ask internal teams or customers for feedback to get fresh perspectives. Share what works (and what doesn’t): Prioritize open knowledge-sharing across teams, breaking down communication silos in the process. Whether through Slack check-ins or full-company meetings, the more teams learn from each other, the faster innovation compounds.  Run micro-pilots: Leverage these small-scale, real-world tests with a subset of users. Instead of waiting to perfect a feature internally, my team launches a basic version to 5-10% of our customers. This controlled rollout lets us quickly gather feedback and usage data without the risk of a full product launch missing the mark. Make experimentation visible: For example, host weekly “demo days” where every team presents its latest experiments, including wins, failures, and lessons learned. Moments like this foster cross-team collaboration, which is key to staying agile. Most transformative technologies -- from email to generative AI -- probably sounded off the wall at first. But because the engineers behind them were allowed to push boundaries, we have tools that have changed our lives. Leaders must create an environment where engineering teams can take risks, even if they sometimes fail. The companies that experiment today will be the ones leading innovation tomorrow. About the AuthorTameem IftikharCTO, GrowthLoopTameem Iftikhar is the chief technology officer at GrowthLoop, a seasoned entrepreneur, and a technology leader specializing in AI and machine learning. He co-founded Rucksack and Divebox, and has worked as an engineer and developer with Symantec and IBM. Tameem holds a Bachelor of Applied Science in Electrical and Computer Engineering from the University of Toronto. See more from Tameem IftikharWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 99 Views
  • WWW.INFORMATIONWEEK.COM
    How to Tell When You're Working Your IT Team Too Hard
    John Edwards, Technology Journalist & AuthorApril 16, 20255 Min ReadDmitriy Shironosov via Alamy Stock PhotoIn an era of unprecedented technological advancement, IT teams are expected to embrace new tasks and achieve fresh goals without missing a beat. All too often, however, the result is an overburdened IT workforce that's frustrated and burned out. It doesn't have to be that way, says Ravindra Patil, a vice president at data science solutions provider Tredence. "Overwork tends to come from an 'always-on' culture, where remote work and digital tools make people feel they must be available all the time," he explains in an online interview. Warning Signs One of the earliest signs that a team is reaching its breaking point is an increasing number of errors, missed steps, or just plain sloppy work, says Archie Payne, president CalTek Staffing, a machine learning recruitment and staffing firm. "These are indications that the team is trying to work faster than is realistic, which is likely to happen when they have too much work on their to-do lists," he explains in an email interview. "This is likely to be paired with a general decline in morale, which can come across as more complaints, more cynical or frustrated comments, a lack of enthusiasm for the work, or increased emotional volatility." IT leaders can also detect overwork through various warning signs, such as a mounting number of sick leaves, high turnover rates, increasing mistakes, and overall lower work quality, Patil says. He adds that beleaguered team members may also look tired, act emotionally, or seem unengaged during meetings. "Keeping an eye on things like overtime, slower progress, or falling performance despite long hours can also show that the team is under too much pressure." Related:John Russo, vice president of technology solutions at healthcare software provider OSP Labs, says that a sudden drop in creativity and problem-solving are also strong signs indicating team weariness. In an email interview, he states that an IT team that's stretched too thin will stop generating innovative ideas, opting instead to complete tasks mechanically. Another strong unrest indicator is a change in communication patterns. "If the team members delay responses, or seem disengaged during discussions, it's worth digging deeper," Russo recommends. Working under unrelenting high pressure is a recipe for burnout, and that's the greatest risk if you keep pushing your IT team too hard, Payne says. "Burnout could drive employees to quit, forcing you to waste resources on recruiting replacements," he warns. "Even if they stay, burned-out employees are less productive and more likely to make mistakes, so your overall team productivity and work quality will likely suffer." Related:Pressure Release The simplest and most effective answer to burnout is reducing the team’s workload. This can be accomplished in several ways, Payne says. Review the IT team's current assignments, then consider whether some of the tasks could be assigned to another team or department, which may be more adequately staffed. "If all of the work must be done by IT, that may mean it's time to expand the team," he advises. Meanwhile, adding temporary freelance talent during workload spikes can relieve IT team pressure during peak times without committing to adding new hires who may not be needed over the long-term. Careful planning, focusing on important tasks, and delaying or skipping less critical ones, can also make workloads more manageable, Patil says. Setting realistic deadlines can help, too, preventing the dread that can, over time, lead to burnout. He also advises using automation tools whenever possible to cut down on repetitive tasks, making work easier and less stressful. Patil says that Tredence reduces team pressure with initiatives, such as "No-Meeting Fridays," which gives team members uninterrupted time to focus and recharge. "Flexible schedules and open communication also help our teams stay balanced," he adds. Related:IT leaders should schedule regular check-ins with their teams to identify stress points as soon as possible, Russo advises. "When employees feel heard and validated, they're more likely to share their concerns before burnout sets in," he explains. At OSP Labs, Russo introduced flexible work models into well-being initiatives. "This policy allows my team members to set their own hours, with more freedom to balance work and personal time." Russo says he also makes a concerted effort to celebrate his team's accomplishments with "thoughtful goodies and high-fives". Such small initiatives, he notes, eventually make a huge difference. Parting Thoughts Long-term excessive pressure can lead to burnout, leaving team members feeling completely drained, Patil says. "This lowers productivity and may cause employees to leave, leading to more stress for those who stay." Health issues may also arise. including anxiety, depression, or even physical problems. Russo recommends setting realistic expectations and encouraging a culture in which asking for help isn't seen as a weakness. "Create an environment where open communication about workloads is the norm, not the exception," he advises. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 97 Views
  • WWW.INFORMATIONWEEK.COM
    Former CTIO of US Space Force Talks DeepSeek Security
    Lisa Costa, the former chief technology and innovation officer for the U.S. Space Force and current advisor to Seekr, discusses building on big ideas with limited resources, and addressing security challenges emerging from AI. On DeepSeek, she cautions, 'Don’t trust a black box from a gray zone.'
    0 Commenti 0 condivisioni 98 Views
  • WWW.INFORMATIONWEEK.COM
    Today’s Technology Should Be Designed By and For All Minds
    For an industry that encourages and rewards learning and thinking differently, it’s disappointing that the tech world continues to lag in incorporating neurodivergent perspectives into product design and development. When you consider that one in five people have learning and thinking differences, omitting their perspectives -- particularly in AI development -- is not only problematic, but also limiting. How can AI scale its impact if those creating it overlook the 70 million people in the US who learn and think differently?  This was one of my takeaways from the inaugural conference of the International Association for Safe and Ethical AI, which I attended last month. Experts in academia, civil society, industry, media, and government discussed and debated the latest developments in AI safety and ethics. But the value of neurodiversity in design and development was not on the agenda. This worries me for two reasons. First, it means that AI models are being brought to market without issues around bias, fairness, and equity having been considered. And second, global experts have not accounted for the long-term consequences of excluding millions of perspectives from a technology that’s being developed at an unprecedented rate.  As the conversation around inclusivity and diversity evolves, it’s vital that tech experts understand the value of authentic intelligence. That means training and developing tech by people with a broad range of experiences, including diversity in how they think and process information, to authentically account for all user experiences. AI should account for neurodivergence. For that to happen, it must be built by neurodivergent minds. And you have to start at the development stage.  Related:AI Accessibility Is a Necessity  While AI has come a long way, greater accessibility through the development of ethical and inclusive AI has not. Big tech has made strides with mobile accessibility offerings like Apple’s Live Speech and Eye Tracking as well as Google’s Guided Frame and Lookout. This is still widely regarded as niche, but it shouldn’t be.  As a nonprofit that supports the millions of people in the US who learn and think differently, Understood.org designs and develops resources that help all minds, while prioritizing inputs from experts and the one-third of our workforce who identify as neurodivergent. We’re constantly evolving with the goal of making our vast content library more accessible for everyone. For instance, our AI-powered assistant now includes a voice-to-text feature for asking questions. It generates clear, concise responses written at an eighth grade reading level.  Related:All organizations must prioritize and respect that brains are wired differently and tap into the unique and diverse perspectives that they bring to the table. Here’s how to do that: Start with cognitively diverse data and teams. You know the popular phrase “garbage in, garbage out”? That’s where authenticity can play a role. Ensuring that datasets are trustworthy, inclusive, and unbiased will have a valuable ripple: You’ll have a wider range of use cases and you’ll be able to better identify risks. That’s a win for all users.Understand that a diverse and inclusive culture leads to enhanced productivity, innovation, and positive financial outcomes. According to Accenture, the economic output of the US could be improved by almost $25 billion if 1% more persons with disabilities entered the workforce. What’s more, Gartner found that 75% of organizations whose decision-making teams reflect a diverse and inclusive culture -- with a particular emphasis on cognitive diversity -- see enhanced productivity, innovation, and positive financial outcomes. Companies can and should hire from the growing diverse talent pool. Use AI to boost confidence and help people thrive. An EY report found that because of generative AI, 65% of respondents felt confident about their work. A slightly smaller percentage (61%) said they were relieved that AI could help remove distressing obstacles at work. The same report found that many neurodivergent employees (85%) think generative AI creates a more inclusive workplace. The time for companies to level the playing field is long overdue. In 2025, it’s not just about providing employees with the tools they need to perform “simple” tasks like being more productive. It’s about designing tools in a way that helps employees thrive in all aspects of their lives. Related:AI is changing the way we live and work. Its evolution is faster than any of us could have predicted. As we get closer to the time of artificial general intelligence (AGI) -- which experts predict we’ll achieve by 2027 -- we need to be strategic and smart about shaping the AI landscape to benefit all. One thing is for certain: AI will never serve all unless it is developed by all. Let’s work together today so it’s possible tomorrow. The millions of Americans who learn and think differently deserve that. 
    0 Commenti 0 condivisioni 97 Views
  • WWW.INFORMATIONWEEK.COM
    Late to AI? Here's How CIOs Can Catch Up Quickly
    John Edwards, Technology Journalist & AuthorApril 15, 20255 Min Readeverything possible via Alamy Stock PhotoFew IT leaders dispute the fact that AI is this decade's breakthrough technology. Yet this wasn't always the case. In fact, until relatively recently, many AI cynics failed to recognize the technology's potential and, therefore, fell behind more astute competitors. As they begin to make up for lost time, business and technology leaders should focus on key readiness areas: data infrastructure, governance, regulatory compliance, risk management, and workforce training, says Jim Rowan, head of AI at Deloitte Consulting. "These foundational steps are essential for success in an AI-driven future," he notes in an email interview. Rowan cites Deloitte's most recent State of Generative AI in the Enterprise report, in which 78% of respondents stated they expect to increase their overall AI spending in the next fiscal year. However, the majority of organizations anticipate it will take at least a year to overcome adoption challenges. "These findings underscore the importance of a deliberate yet agile approach to AI readiness that addresses both regulation and talent challenges to AI adoption." Getting Ready The key to getting up to speed in AI lies in hiring the best advisor you can find, someone who has expertise in your company's area, advises Melissa Ruzzi, AI director at SaaS security firm AppOmni. "Some companies think the best way is to hire grad students fresh out of college," she notes via email. Yet nothing beats domain expertise and implementation experience. "This is the fastest way to catch up." Related:Many organizations underestimate the amount of cultural change needed to help team members adopt and effectively use AI technologies, Rowan says. Workforce training and education early in the AI journey is essential. To foster familiarity and innovation, team members need access to AI tools as well as hands-on experience. "Talent and training gaps can't be overlooked if organizations aim to achieve sustained growth and maximize ROI," he says. Every company has multiple projects that can benefit from AI, Ruzzi says. "It's best to have an in-house AI expert who understands the technology and its applications," she advises. "If not, hire consultants and contractors with domain experience to help decide where to get started." Many new AI adopters begin by focusing on internal projects tied to customer delivery timelines, Ruzzi says. Others decide to start with a small customer-facing project so they can prove AI's added value. The decision depends very much on the ROI goal, she notes. "Small projects of short duration can be a good starting point, so the success can be more quickly measured." Related:Security Matters AI security must always be addressed and ensured, regardless of the project's size or scope, Ruzzi advises. View developing an initial AI project as being similar to installing a new SaaS application, she suggests. "It's crucial to make sure that configurations, such as accessibility and access to data aren't posing a risk of public data exposure or, worse yet, are vulnerable to data injection that could poison your models." To minimize the security risk created by novice AI teams, start with simple implementations and proofs of concepts, such as internal chatbots, recommends David Brauchler, technical director and head of AI and ML security at cybersecurity consulting firm NCC Group. "Starting slow enables application architects and developers to consider the intricacies AI introduces to application threat models," he explains in an email interview. AI also creates new data risk concerns, including the technology's inability to reliably distinguish between trusted and untrusted content. "Application designers need to consider risks that they might not be used to addressing in traditional software stacks," Brauchler says. Related:Organizations should already be training their employees on the risks associated with AI as part of their standard security training, Brauchler advises. "Training programs help address common pitfalls organizations encounter that lead to shadow AI and data leakage," he says. Organizations that aren't already providing guidance on security issues should incorporate these risks into their training programs as quickly as they can. "For employees who contribute to the software development lifecycle, technical training should begin before developing AI applications." Final Thoughts As organizations gain experience with GenAI, they will begin to understand both the rewards and challenges of deploying the technology at scale, Rowan says. "The need for disciplined action has grown," he observes. As technical preparedness has improved, regulatory uncertainty and risk management have emerged as significant barriers to AI progress, particularly for newcomers, Rowan says. "Talent and workforce issues remain important, yet access to specialized technical talent no longer seems to be a dire emergency." Although tempting, Brauchler warns against rushing into AI. "AI will still be here in a few years [and] taking a thoughtful, measured approach to AI business strategy and security is the best way to avoid unnecessary risks," he concludes.About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 86 Views
  • WWW.INFORMATIONWEEK.COM
    CTOs Watch to See If Stargate Propels US to Global AI Dominance
    What will $500 billion poured into AI infrastructure over the next four years in the United States accomplish? CIOs and CTOs will have to watch the Stargate Project to find out. The initiative -- a collaboration between several high-profile players in the AI space -- has been plugged by President Trump. Billions are already being invested, and construction on several data centers has already begun, AP news reports.  With competition for AI dominance at a fever pitch, how much of a role could Stargate have in tipping the scales in favor of the US?  Stargate Partners  Stargate has a lot of AI star power behind it.  “The players that are part of the project are all people who are very invested in building out a computing infrastructure for AI already and building frontier AI systems,” says Peter N. Salib, law and policy advisor to the nonprofit Center for AI Safety and codirector of the Center for Law & AI Risk, an organization focused on establishing law and AI safety as a scholarly field.  OpenAI, software company Oracle, AI investment firm MGX, and investment holding company SoftBank are the four initial equity funders powering the project. SoftBank is taking the lead on financial responsibility, while OpenAI is tackling operations.  Related:The project has also attracted technology partners, including semiconductor company Arm, tech giant Microsoft, and chip company Nvidia  That is a lot of cooks in the kitchen, all very motivated to push the field of AI forward, ultimately achieving AGI.  “What I hope is that this becomes a model or an example of how titans of industry and government and ultimately and eventually the community are able to work together for the benefit of mankind,” says Jason Hardy, CTO for AI at data infrastructure company Hitachi Vantara. Time will tell if each partner delivers on their promises and ultimately plays well with others. “I would effectively call this a moonshot. So, it'll be interesting to see over the next year or so how it progresses,” Hardy adds.  The Goals The Stargate Project is focused on the “development and construction of large-scale AI data centers,” according to its request for proposals. And there is no doubt about AI’s voracious appetites. Advances in the field -- feeding those appetites -- will require more infrastructure, but is that alone the answer to capturing the lead? Randall Hunt, CTO at Caylent, an AWS cloud consulting and engineering company, thinks not. “Infrastructure alone is a very brute force approach to solving artificial general intelligence or artificial super intelligence,” he says.  Related:He goes on to voice some additional goals that could be beneficial in the pursuit of AI advancements.  “I think that we need significant efficiency and architectural improvements in the underlying implementation of these networks,” Hunt argues. “And I think a broader initiative that focuses not just on pure infrastructure but also investment in theoretical work and investment in academia and investment and the software side would be pretty valuable.” Funding In January, Elon Musk took to X, claiming Stargate lacked the funding. OpenAI’s CEO Sam Altman refuted the claim, and a source told Forbes that the initial $100 billion in equity is “ready to go.” That still leaves $400 billion to be gathered over the next four years.  “Sometimes … in these large agreements, you'll have pledges of capital and then you have investor underperformance. They don't ever actually send the desired capital,” Hunt points out.  Stargate is in its early days, and while a shortfall in funding for this project is possible, overall spending on AI and its requisite infrastructure is likely to soar well beyond the $500 billion point in the coming years.  “I will never underestimate the private sector’s ability and desire to put up money towards this race,” says Caroline Winnett, executive director of startup accelerator Berkeley SkyDeck.  Related:Apple, for example, is pumping $500 billion into US facilities, including a server production facility for its AI products, CNN reports.  Performance Metrics The initiative already has data centers under construction in Texas and several other states lined up as possibilities for its campuses, Reuters reports. How will we know if Stargate is delivering on its goals and putting those billions to good use?  At the most basic level, we can look at data center capacity. How many megawatts have resulted from the buildout of Stargate’s data centers? There are, of course, more nuanced questions about efficiency and energy usage.  “Does putting more data through more compute continue to get you ever more capable systems? All the … evidence we have seems to point towards the prediction that it will, and if that turns out not to be true, that will be a big surprise and a big knock against this Stargate approach of doing extremely large-scale compute and power clusters,” says Salib. The ultimate goal of AGI looms large. Will Stargate and its participants be the first to achieve it? Tracking the Global AI Race The emergence of DeepSeek from a Chinese startup threw fuel on the competitive fires. And OpenAI is certainly cognizant of the flames. "As news emerged about DeepSeek, it makes it clear this is a very real competition and the stakes could not be bigger,” said Chris Lehane, OpenAI’s chief global affairs officer, Reuters reports.  Among the slew of executive orders Trump signed upon taking offices is one aimed at American leadership in AI.  “I think it's true to say that the US and China do understand themselves as racing towards something like artificial general intelligence and that this project might make help them to race faster,” says Salib.  Stargate could propel the US forward in this breakneck sprint, but it needn’t be the initiative on which the country pins all of its hopes. Moonshots like this are not guaranteed successes. But it is hardly a solo shot.  “Stargate is emblematic of the scale of which frontier AI is going to be developed in the next two to five years, but it's not the only project that is going to look the way that Stargate looks,” says Salib.  Even if Stargate fizzles out for one reason or another, it is highly unlikely that the US will find itself falling completely behind. There will still likely be plenty to learn from the endeavor, and there will be other projects and players with skin in the game.  “Whether this initiative moves forward or not, no matter what happens with it, everybody's going after this golden carrot known as AGI,” Winnett points out.  Predictions on the arrival of AGI vary, but it seems all but certain that it is coming. And the road there is hardly written in the stars. There is still plenty of room for surprises and disruption.  “People think these entrenched players like OpenAI and Anthropic and AWS, that they've got a moat that can't be overcome, but we're still in the wild west days,” says Hunt. “The model that's winning today is not necessarily the model that's winning tomorrow. As tech companies and governments pound the pavement in this ongoing race, there are some big, open questions.  “A lot of regulation is going to need to be looked at and evaluated to see how we can improve on power generation. Does nuclear need to be a part of it, for example?” says Hardy.  And then, there are other thorny concepts to grapple with: What is the cost of racing in the first place? Is the world ready for what it means when we reach a point where a winner can be declared?  “As with say the missile gap of the Cold War era, racing has its own dangers,” Salib points out. “Both sides would really like to have the most powerful systems as quickly as possible and seem willing to risk losing control of their own systems for the sake of winning that capabilities race.” Along the way, the environmental strain and energy usage associated with AI has costs.  “We're hoping that AI can produce solutions that will actually make very significant progress [on] how these tools end up interacting with the environment [and] solve their own issues,” says Winnett.  For now, it seems that the race is still on, whether or not those solutions materialize.  As AGI grows closers, Salib hopes we will spend more time thinking not only about its value but its risks. “The risks of misuse of these extremely powerful systems, arms races around these extremely powerful systems, and also loss of control the systems themselves as they become very capable. It is time for all of us to take all of that very seriously in a way that I think most of the policy world is not yet,” he urges.  
    0 Commenti 0 condivisioni 73 Views
  • WWW.INFORMATIONWEEK.COM
    How CIOs Can Prepare for Tariffs, Recession Fears
    Shane Snider, Senior Writer, InformationWeekApril 14, 20254 Min ReadIvan Marc Sanchez via Alamy Stock President Donald Trump’s trade policies -- particularly with major tech exporter China -- stand to have a big impact on IT department budgets. While the saga of back-and-forth tariffs seems far from over, experts say there are ways CIOs can manage budgets to brace for outcomes.CIOs are under tremendous pressure with digital transformation needs rising with demand for GenAI at a fever pitch. With a volatile geopolitical and economic landscape, IT leaders face a real headache when it comes to planning.The ongoing trade saga has many economists warning of a coming recession. Last week, JP Morgan increased their prediction on the likelihood of recession from 40% to 60%, while S&P Global pegged recession probability at 35%.The Trump administration tariff saga began in February, starting with new tariffs on goods from Mexico, Canada and China -- those tariffs were paused for 30 days and reinstated with some exemptions. Earlier this month, the administration announced a new package of “reciprocal” tariffs on dozens of nations, which tariffs on China’s goods rocketing to 34%. After a severe US stock market rout, Trump paused the new tariffs (except) for those on China, sending stocks soaring back.The back-and-forth saw China retaliate, with Trump raising the total import levy for China’s goods to 145%; China shot back with 125% retaliatory tariffs on US imports. Late last week, Trump announced that certain electronics, semiconductors, phones, computers and flat screens would be exempted. However, on Sunday he wavered on semiconductor exemptions, and said that semiconductor tariffs would come soon. It’s unclear how long any exemptions would apply.Related:The trade war seems far from over, as China has so far refused direct negotiations with US leaders.Tech leaders are forced to try to keep up with a fluid situation with budgets that were already tight.The Cost of Trade Chaos“IT infrastructure will likely see significant price increases as major manufacturing nations face high tariff rates, especially in the US,” says Mark Moccia, vice president and research director for Forrester’s CIO practice. “The rising costs could balloon budgets and force CIOs to delay or prioritize the most important projects.”But with uncertainty about where the tariffs will land, IT leaders face a difficult task in adapting for increased costs. “Nobody has a clue where this is going to go,” Moccia tells InformationWeek in a live chat. “And it will change day-to-day. It’s really hard for CIOs to have to adjust in real time like that.”Related:According to Deloitte, IT budgets for companies average 5.49% of revenue. With new AI projects taking a bite out of that spend, increasing hardware costs could be a significant drain on tight budgets. In March, China’s exports jumped 12.4% from a year earlier as businesses stockpile tech and other goods to get ahead of tariff increases, according to Reuters.Large businesses with more cash on hand were in a better position to stock up, Moccia says.What Can CIOs Do?Jim DuBois, consultant, author and former Microsoft CIO, thinks there may be a silver lining.“The willingness to pause tariffs seems to indicate that the tariffs are more a negotiating tactic than something planned to continue,” he tells InformationWeek in an email interview. “CIOs should be opportunistic about needed purchases in the current uncertainty, thoughtful about how they can influence their own company’s pricing, and double down on using AI to drive efficiency and cost savings.”Forrester’s Moccia, co-author of the firm’s report, “Technology Leaders: How to Thrive Through Volatility,” cautions against knee-jerk cuts that could impact the company’s prospects.“CIOs and other tech leaders will need to proactively analyze costs, diversify sourcing, optimize inventory and prioritize the projects that don’t sacrifice critical AI ambitions,” Moccia says, adding that staff reduction should be the last resort. “We urge CIOs to lean more heavily into other methods of spend optimization before drastically reducing labor expenses. Minimizing cuts to IT staff will allow for existing personnel to buy down more technical debt [and] improve data management capabilities to set up AI deployments for success.”Related:Moccia says IT leaders can use lessons learned during the COVID-19 pandemic.“We were in kind of a similar situation where we just didn’t really know where it was going -- with economic chaos in the markets and supply chain constraints,” he says. “And those persisted for a while. So, you did see some similar behaviors where organizations that were thinking ahead and had the capital went out and bought a ton immediately and brought it in-house. They had what they needed to execute. And others just sort of paused, or maybe they didn’t have the capital to take advantage. It’s a similar scenario.”About the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 99 Views
  • WWW.INFORMATIONWEEK.COM
    Balancing AI’s Promise and Complexities: 6 Takeaways for Tech Leaders
    Murali Swaminathan, Chief Technology Officer, Freshworks April 14, 20254 Min ReadBonaventura via Alamy StockAs tech pros, the question isn’t just whether AI will disrupt our industries -- it’s how we can leverage its power in a responsible, sustainable way. At SXSW 2025, I had the privilege of serving on the “Innovation Unbridled: Balancing the Promise and Peril of AI” panel. The session provided a thought-provoking exploration of AI’s transformative potential, and the challenges tech leaders face when integrating AI into their operations. What made this panel particularly engaging was the diverse audience -- attendees from all walks of life, asking tough questions that forced us to consider AI from ethical, practical, and social perspectives. Here are six key takeaways to guide your AI journey: 1. AI should solve problems, not create them AI must address real business challenges, not introduce new ones. Too often, organizations rush to adopt the latest tools and technologies without fully understanding their impact on existing processes. The result? More complexity, confusion, and inefficiency. It’s crucial to ensure that any AI implementation directly addresses specific pain points within your organization. Whether it’s automating tasks, improving customer personalization, or enhancing decision-making, AI should add measurable value. When deploying AI, start by asking: How will this improve our business outcomes? What specific problem does it solve? Related:Actionable insight: Prioritize AI tools that seamlessly integrate with existing systems and processes. Use AI as a strategic asset to enhance productivity and deliver tangible results. 2. AI should upskill people, not replace them It’s no secret that many fear AI will lead to widespread job displacement. While this concern is valid, the reality is that AI is designed to augment human abilities, not replace them. It can handle repetitive tasks, analyze vast amounts of data, and provide real-time insights, allowing employees to focus on higher-value activities that require creativity, empathy, and complex problem-solving. The key is understanding that AI’s real value lies in enabling your team to work smarter, not harder. AI can help streamline operations and improve efficiency, but it should never be seen as a substitute for human ingenuity. Actionable insight: Invest in upskilling and reskilling your workforce to ensure employees are ready for a future where AI complements their work. Offer training programs or collaborate with educational institutions for continuous learning opportunities. 3. Balancing open-sourcing AI with ethics Related:While open-sourcing AI has the potential to democratize access and drive innovation, it also raises important ethical concerns. How can we ensure that AI tools are used responsibly and safely? What measures need to be put in place to prevent misuse or unintended harm? It’s vital to ensure that any AI system deployed in your organization follows strict ethical guidelines. Whether you’re using open-source models or proprietary tools, transparency, accountability, and safety should always be top priorities. Actionable insight: Establish a robust AI governance framework within your organization, including security protocols, ethical guidelines, and regular audits. Collaborate with legal and compliance teams to create policies that protect both your business and customers. 4. AI’s role in reshaping industries AI is transforming industries, from precision healthcare to environmental sustainability, by driving value, personalization, and innovation. To fully leverage AI’s potential, businesses must adapt their operating models and become more agile. The challenge lies not only in adopting AI but also in fostering an environment where innovation thrives. This requires rethinking organizational structures, embracing cross-functional collaboration, and cultivating a culture of continuous improvement. Related:Actionable insight: Build an agile organization that adapts quickly to AI advancements. Encourage cross-functional collaboration, experimentation, and view AI as an enabler of ongoing business transformation, not a one-off project. 5. Fostering a culture of support and growth Workforce burnout is an increasing concern as businesses push employees to adopt new technologies and work longer hours to stay competitive. While AI can alleviate some repetitive tasks, leaders must prioritize creating an environment that nurtures employee growth and well-being. Actionable insight: As you implement AI to boost efficiency, foster a culture of support and growth. Encourage flexibility, invest in employee development, and set realistic productivity expectations. Innovation should empower your team, driving both business and personal growth without compromising employee satisfaction. 6. AI regulation -- balancing innovation with responsibility AI is evolving rapidly, and the need for regulation is becoming more pressing. Strong guardrails are essential to ensure AI is developed responsibly and ethically. As tech pros, it’s our responsibility to stay ahead of the regulatory curve, ensuring that your AI initiatives align with emerging ethical standards. While regulation may evolve over time, embedding ethical considerations into your AI strategy now will help future-proof your business. Actionable insight: Stay informed about AI regulation and collaborate with industry bodies to help shape the future of AI governance. This proactive approach will protect your organization from legal challenges and demonstrate your commitment to responsible innovation. Closing Thoughts AI must be used thoughtfully, serving both business goals and societal well-being. As tech pros, it is our responsibility to harness AI in ways that solve real problems, empower employees, and drive ethical innovation. By embracing these takeaways, you can position your organization to thrive in the AI era while staying true to your values and responsibilities. About the AuthorMurali SwaminathanChief Technology Officer, Freshworks Murali Swaminathan serves as chief technology officer at Freshworks, responsible for the company’s technology roadmap and strategy, and leading global engineering and architecture teams. With 30+ years of experience, he has held leadership roles at ServiceNow, Recommind (now OpenText), and CA Technologies (now Broadcom), delivering scalable, secure solutions that drive digital transformation. Murali holds a master’s in software engineering management from Carnegie Mellon University and a bachelor’s in electronics and instrumentation from Annamalai University in India.See more from Murali SwaminathanWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 101 Views
  • WWW.INFORMATIONWEEK.COM
    Trends in Neuromorphic Computing CIOs Should Know
    John Edwards, Technology Journalist & AuthorApril 14, 20255 Min ReadScience Photo Library via Alamy Stock PhotoNeuromorphic computing is the term applied to computer elements that emulate the way the human brain and nervous system function. Proponents believe that the approach will take artificial intelligence to new heights while reducing computing platform energy requirements. "Unlike traditional computing, which incorporates separate memory and processors, neuromorphic systems rely on parallel networks of artificial neurons and synapses, similar to biological neural networks," observes Nigel Gibbons, director and senior advisor at consulting firm NCC Group in an online interview. Potential Applications The current neuromorphic computing application landscape is largely research-based, says Doug Saylors, a partner and cybersecurity co-lead with technology research and advisory firm ISG. "It's being used in multiple areas for pattern and anomaly detection, including cybersecurity, healthcare, edge AI, and defense applications," he explains via email. Potential applications will generally fall into the same areas as artificial intelligence or robotics, says Derek Gobin, a researcher in the AI division of Carnegie Mellon University's Software Engineering Institute. "The ideal is you could apply neuromorphic intelligence systems anywhere you would need or want a human brain," he notes in an online interview. Related:"Most current research is focused on edge-computing applications in places where traditional AI systems would be difficult to deploy, Gobin observes. Many neuromorphic techniques also intrinsically incorporate temporal aspects, similar to how the human brain operates in continuous time, as opposed to the discrete input-output cycles that artificial neural networks utilize." He believes that this attribute could eventually lead to the development of time-series-focused applications, such as audio processing and computer vision-based control systems. Current Development As with quantum computing research, there are multiple approaches to both neuromorphic hardware and algorithm development, Saylors says. The best-known platforms, he states, are BrainScaleS and SpiNNaker. Other players include GrAI Matter labs and BrainChip. Neuromorphic strategies are a very active area of research, Gobin says. "There are a lot of exciting findings happening every day, and you can see them starting to take shape in various public and commercial projects." He reports that both Intel and IBM are developing neuromorphic hardware for deploying neural models with extreme efficiency. "There are also quite a few startups and government proposals looking at bringing neuromorphic capabilities to the forefront, particularly for extreme environments, such as space, and places where current machine learning techniques have fallen short of expectations, such as autonomous driving." Related:Next Steps Over the short term, neuromorphic computing will likely be focused on adding AI capabilities to specialty edge devices in healthcare and defense applications, Saylors says. "AI-enabled chips for sensory use cases are a leading research area for brain/spinal trauma, remote sensors, and AI enabled platforms in aerospace and defense," he notes. An important next step for neuromorphic computing will be maturing a technology that has already proven successful in academic settings, particularly when it comes to scaling, Gobin says. "As we're beginning to see a plateau in performance from GPUs, there's interest in neuromorphic hardware that can better run artificial intelligence models -- some companies have already begun developing and prototyping chips for this purpose." Another promising use case is event-based camera technology, which shows promise as a practical and effective medium for satellite and other computer vision applications, Gobin says. "However, we have yet to see any of these technologies get wide-scale deployment," he observes. "While research is still very active with exciting developments, the next step for the neuromorphic community is really proving that this tech can live up to the hype and be a real competitor to the traditional hardware and generative AI models that are currently dominating the market." Related:Looking Ahead Given the technology's cost and complexity, coupled with the lack of skilled resources, it's likely to take another seven to 10 years before widespread usage of complex neuromorphic computing occurs, Saylors says. "However, recent research in combining neuromorphic computing with GenAI and emerging quantum computing capabilities could accelerate this by a year or two in biomedical and defense applications." Mainstream adoption hinges on hardware maturity, cost reduction, and robust software, Gibbons says. "We may see initial regular usage within the next five to 10 years in specialized low-power applications," he predicts. "Some of this will be dictated by the maturation of quantum computing." Gibbons believes that neuromorphic computing's next phase will focus on scaling integrated chips, refining and spiking neural network algorithms, and commercializing low-power systems for applications in robotics, edge AI, and real-time decision-making. Gibbons notes that neuromorphic computing may soon play an important role in advancing cybersecurity. The technology promises to offer improved anomaly detection and secure authentication, thanks to event-driven intelligence, he explains. Yet novel hardware vulnerabilities, unknown exploit vectors, and data confidentiality remain critical concerns that may hamper widespread adoption. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 100 Views
  • WWW.INFORMATIONWEEK.COM
    FICO CAO Scott Zoldi: Innovation Helps Operationalize AI
    Lisa Morgan, Freelance WriterApril 14, 20259 Min ReadBrain light via Alamy StockFICO Chief Analytics Officer Scott Zoldi has spent the last 25 years at HNC and FICO (which merged) leading analytics and AI at HNC FICO is well known in the consumer sector for credit scoring, while the FICO Platform helps businesses understand their customers better so they can provide hyper-personalized customer experiences.  “From a FICO perspective, it’s making sure that we continue to develop AI in a responsible way,” says Zoldi. “There’s a lot of [hype] about generative AI now and our focus has been around operationalizing it effectively so we can realize this concept of ‘the golden age of AI’ in terms of deploying technologies that actually work and solve business problems.” While today’s AI platforms make model governance and efficient deployment easier, and provide greater model development control, organizations still need to select an AI technique that best fits the use case. A lot of the model hallucinations and unethical behavior are based on the data on which the models are built, Zoldi says. “I see companies, including FICO, building their own data sets for specific domain problems that we want to address with generative AI. We’re also building our own foundational models, which is fully within the grasp of almost all organizations now,” he says.  Related:He says their biggest challenge is that you can never totally get rid of hallucinations. “What we need to do is basically have a risk-based approach for who’s allowed to use the outputs, when they’re allowed to use the outputs, and then maybe a secondary score, such as a AI risk score or AI trust score, that basically says this answer is consistent with the data on which it was built and the AI is likely not hallucinating.” Some reasons for building one’s own models include full control of how the model is built, and reducing the probability of bias and hallucinations based on the data quality.   “If you build a model and it produces an output, it could be hallucination or not. You won’t know unless you know the answer, and that’s really the problem. We produce AI trust scores at the same time as we produce the language models because they’re built on the same data,” says Zoldi. “[The trust score algorithms] understand what the large language models are supposed to do. They understand the knowledge anchors -- the knowledge base that the model has been trained on -- so when a user asks a question, it will look at the prompts, what the response was, and provide a trust score that indicates how well aligned the model’s response is aligned with the knowledge anchors on which the model was built. It’s basically a risk-based approach.” Related:FICO has spent considerable time focused on how to best incorporate small or focused language models as opposed to simply connecting to a generic GenAI model via an API. These “smaller” models may have eight to 10 billion parameters versus 20 billion or more than 100 billion, for example. He adds that you can take a small language model and achieve the same performance of a much larger model, because you can allow that small language model to spend more time reasoning out an answer. “And it’s powerful because it means that organizations that can only afford a smaller set of hardware can build a smaller model and deploy it in such a way that it’s less costly to use and just as performant as a large language model for a lot less cost, both in model development and in the inference costs of actually using it in a production sense.” Scott ZoldiThe company has also been using agentic AI. “Agentic AI is not new, but we now have frameworks that assign decision authority to independent AI operators. I’m okay with agentic AI, because you decompose problems into much simpler problems, and those simpler problems [require] much simpler models,” says Zoldi. “The next area is a combination of agentic AI and large language models, though building small language models and solving problems in a safe way is probably top of mind for most of our customers.” Related:For now, FICO’s primary use case for agentic AI is generating synthetic data to help counter and stay ahead of threat actors’ evolving methods. Meanwhile, FICO has been building focused language models that address financial fraud and scams, credit risks, originations, collections, behavior scoring and how to enable customer journeys. In fact, Zoldi recently created a focused model in only 31 days using a very small GPU. “I think we’ve all seen the headlines about how these humongous models with billions of parameters and thousands of GPUs, but you can go pretty far with a single GPU,” says Zoldi.  Challenges Zoldi Sees in 2025 One of the biggest challenges CIOs faces is anticipating the shifting nature of the US regulatory environment. However, Zoldi believes regulation and innovation go hand in hand. “I firmly believe that regulation and innovation inspire each other, but others are wondering how to develop their AI applications appropriately when [they’re not prescriptive],” says Zoldi. “If they don't tell you how to meet the regulation, then you're guessing how the regulations might change and how to meet them.”  Many organizations consider regulation a barrier to innovation rather than an inspiration for it.  “The innovation is basically a challenge statement like, ‘What does that innovation need to look like?’ so that I can meet my business objective, get a prediction, and have an interpretable model while also having ethical AI. That means better models,” says Zoldi. “Some people believe there shouldn’t be any constraints, but if you don’t have them, people will continue to ask for more data and ignore copyrights. You can also go down a deep learning path where models are uninterpretable, unexplainable, and often unethical.” What Innovation at FICO Looks Like At FICO, innovation and operationalization are synonymous. “We just built our first focused model last year. We’ve been demonstrating how small models on task specific domain problems perform just as well as large language models you can get commercially, and then we operationalize it,” says Zoldi. “That means I’m coming up with the most efficient way to embed AI in my software. We’re looking at unique software designs within our FICO Platform to enable the execution of these technologies efficiently.” Some time ago, Zoldi and his team wanted to add audit capabilities to the FICO Platform. To do it, they used AI blockchains. “An AI blockchain codifies how the model was developed, what needs to be monitored, and when you pull the model. Those are really important concepts to incorporate from an innovation perspective when we operationalize, so a big part of innovation is around operationalization. It’s around the sensible use of generative AI to solve very specific problems in the pockets of our business that would benefit most. We’re certainly playing with things like agentic AI and other concepts to see whether that would be the attractive direction for us in the future.” The audit capabilities FICO built can track every decision made on the platform, what decisions or configurations have changed, why they changed, when they changed and who changed them. “This is about software and the components, how strategies change, and how that model works. One of the main things is ensuring that there is auditing of all the steps that occur when an AI or machine learning model gets deployed in a platform, and how it’s being operated so you can understand things like who’s changing the model or strategy, who made that decision, whether it was tested prior to deployment and what the data is to support the solution. For us, that validation would belong in a blockchain so there is the immutable record of those configurations.” FICO uses AI blockchains when it develops and executes models, and to memorialize every decision made.  “Observability is a huge concept in AI platforms today. When we develop models, we have a blockchain that explains how we develop it so we can meet governance and regulatory requirements. On the same blockchain, are exactly what you need for real-time monitoring of AI models, and that wouldn't be possible if observability was not such a core concept in today's software,” says Zoldi. “Innovation in operationalization really comes from the fact that the software on which organizations build and deploy their decision solutions are changing as software and cloud computing advance, so the way we would have done it 25, 20, or 10 years ago is not the way that we do it most efficiently today. And that changes the way that we must operationalize. It changes the way we deploy and the way we even look at basic things like data.” Why Zoldi Has His Own Software Development Team Most software development organizations fall under a CIO or CTO, which is also true at FICO, though Zoldi also has his own software development team and works in partnership with FICO’s CTO.  “If a FICO innovation has to be operationalized, there must be a near term view to how it can be deployed. Our software development team makes sure that we come up with the right software architectures to deploy because we need the right throughput and latency,” says Zoldi. “Our CTO, Bill Waid, and I both focus a lot of our time on what are those new software designs so that we can make sure that all that value can be operationalized.” A specialized software team has been reporting to Zoldi for nearly 17 years, and one benefit is that it allows Zoldi to explore how he wants to operationalize, so he can make recommendations to the CTO and platform teams and ensure that new ideas can be operationalized responsibly. “If I want to take one of these focus language models and understand the most efficient way to deploy it and do inferencing, I'm not dependent on another team. It allows me to innovate rapidly, because everything that we develop in my team needs to be operationalized and be able to be deployed.  That way, I don't come with just an interesting algorithm and a business case. I come with an interesting algorithm, a business case and a piece of software so I can say these are the operating parameters of it. It allows me to make sure that I essentially have my own ability to prioritize where I need software talent focused from my types of problems for my AI solutions. And that's important because, I may be looking three years, four, or five years ahead, and need to know what we will need.” The other benefit is that the CTO and the larger software organization don’t have to be AI experts. “I think most high performing AI machine learning research teams like the one that I run, really need to have that software component so they have some control, and they're not in some sort of prioritization queue for getting some software attention,” says Zoldi. “Unless those people are specialized in AI, machine learning and MLOps, it’s going to be a poor experience. That’s why FICO is taking this approach and why we have the division of concerns.” About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 100 Views
  • WWW.INFORMATIONWEEK.COM
    What Top 3 Principles Define Your Role as a CIO and a CTO?
    TechTarget and Informa Tech’s Digital Business Combine.TechTarget and InformaTechTarget and Informa Tech’s Digital Business Combine.Together, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.What Top 3 Principles Define Your Role as a CIO and a CTO?What Top 3 Principles Define Your Role as a CIO and a CTO?The CIO of IBM and the CIO of NMI discuss some foundational elements that help them navigate the shifting demands of providing leadership on tech.Joao-Pierre S. Ruth, Senior EditorApril 14, 2025The duties of C-suite tech leadership at enterprises are changing rapidly of late. AI shook up strategies at many companies and can lead to new demands on CIOs, CTOs, and others responsible for technology plans and use.The core principles that guide CIOs and CTOs can be essential for navigating such times, especially when organizations look to them for direction.In this episode, Matt Lyteson, CIO of IBM, and Phillip Goericke, CTO of NMI, share some key principles that define their respective roles at their organizations. They also discuss where they picked up some of the lessons that shaped those principles, how their jobs have changed since they got their starts, and whom they look to for inspiration as leaders -- as well as what they wish they knew when they got started. Listen to the full episode here.About the AuthorJoao-Pierre S. RuthSenior EditorJoao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.See more from Joao-Pierre S. RuthWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 100 Views
  • WWW.INFORMATIONWEEK.COM
    The End of Business as Usual: How AI-Native Companies Win
    As AI continues to evolve, the question becomes whether companies can transform their businesses while adapting their workforce strategies at the same pace. An executive mindset shift -- or mindshift -- is needed to not only reimagine businesses forward, but to also prepare workers for roles that don’t yet exist. Seismic shifts lie ahead: artificial intelligence will reshape 86% of businesses by 2030, according to a new World Economic Forum (WEF) report. That same report also predicts that AI and automation will create 170 million jobs, while displacing 92 million roles as companies adapt to technological change; 39% of existing skill sets will become outdated between 2025-2030.  Business, Not Digital, Transformation Is the Way Forward  Companies now face a new chapter in the evolution of digital transformation, one that challenges organizations to think beyond the digitization of legacy processes and workflows they prioritized over the past decade. In reality, BCG research uncovered that 70% of digital transformations still fall short of their objectives.  Before the dawn of ChatGPT, it could be argued that most digital transformation efforts focused on the digitization and optimization of legacy processes. The pursuit of efficiency, scale, and cost-cutting limited or impaired the prospect of any meaningful transformation desired business outcomes. The same may already be happening in an era of AI. Companies are prioritizing the automation of the processes and workflows digitized over the past decade, which is important, but without exploring the potential for new opportunities in an era of AI, automation may not be enough to evolve.  Related:If digital transformation was the defining strategy in the 2000s, AI-native business transformation represents a potentially better, and more adaptable way forward. Unlike digital transformation, AI represents an opportunity for business transformation. It’s an inflection point to reimagine organizations and work in a world where AI becomes inherently attached to almost every technology, action, and outcome.  The Next Chapter of AI-Native Businesses 2025 is set to be the year that not just AI, but AI agents, start to reshape the enterprise. While organizations are just beginning to recognize the possibilities of AI, they are not yet exploring the implications of businesses that accelerate AI-first transformation. Now is the time for organizations to embrace AI beyond tools and as a core component of their strategic mindset and operational framework. Related:But what does it mean to be an AI-first enterprise?  To help, let’s substitute AI-first with AI-native: AI as being native to the core of the business itself, strategy, operations, culture, and value creation. It’s also more than the implementation of AI tools across the enterprise. It's about redefining roles, work, and operations, fostering innovation, and creating a culture that embraces change.  An AI-native enterprise is characterized by the strategic integration of artificial intelligence at the core of its operations and decision-making.  An AI-native approach will fundamentally redefine how businesses operate, innovate, and engage with customers, employees, and their ecosystem. AI becomes not just a tool, but the central driver of decision-making, operational efficiency, and customer interaction. Lead in the AI Revolution or Be Left Behind AI-first is not just about using AI, it’s about making AI native to business architecture, foundationally. Make AI core to decision-making: AI is not just a tool for efficiency; it plays a central role in strategic decision-making, forecasting, and autonomous execution. Use AI to drive exponential thinking, not incremental optimization: Instead of improving traditional business processes, AI-native companies reimagine workflows, value chains, and customer experiences from scratch. Automate adaptability: AI-first companies build systems that can sense, analyze, and act autonomously in real-time across supply chains, operations, and customer engagement. Integrate AI to spur network effects and self-learning models: Continuously improve via feedback loops, fine-tune AI models, and leverage collective intelligence rather than relying solely on human input. Make data and compute as a core asset: Unlike traditional companies that prioritize physical assets or human capital, AI-first organizations treat data, compute power, and algorithmic capabilities as their primary competitive advantage. Drive workflow transformation with AI agents: AI agents are the next major evolution in AI-native businesses. They don’t just enhance workflows; they autonomously execute tasks, make decisions, and optimize operations at a scale and speed impossible for human-led organizations. You need to make sure you are designing and enhancing workflows of the future, not the past. Why? AI-native businesses will rely on agentic systems to manage core functions, drive efficiency, and create new competitive advantages. Redefine leadership for an AI-native era: C-Suites are not immune. Train executives and managers to think strategically about AI adoption, guiding their teams in AI-first decision-making and workflow transformation. Invest in reskilling programs for emerging roles: As AI automates repetitive tasks, new roles will emerge that require human creativity, problem-solving, and oversight. Companies must proactively explore and identify future job needs and provide pathways for employees to transition into high-value roles. This includes preparing for an agentic enterprise and beyond. Related:The shift from digital transformation to AI-native business transformation is not just an evolution -- it is a foundational reinvention of how organizations operate, compete, and create value. AI-native enterprises are architecting their businesses around it, making AI the backbone of strategy, decision-making, and execution. It’s about designing businesses where AI is intrinsic to every function, continuously learning, adapting, and driving innovation. AI-native leaders are also preparing for workforce evolution for the agentic enterprise, imagining new roles, and upskilling and reskilling in preparation, especially as the agentic enterprise takes shape. As AI agents become more capable, businesses must simultaneously prepare for the inevitable rise of an Agentic Enterprise. AI-native pacesetters will prepare their architecture for embedding AI agents into workflows across the enterprise to augment decision-making, operations, and customer engagement. The future won’t favor companies that use AI; it will reward those that architected for it and AI’s evolution. 
    0 Commenti 0 condivisioni 88 Views
  • WWW.INFORMATIONWEEK.COM
    How to Handle a Talented, Yet Quirky, IT Team Member
    John Edwards, Technology Journalist & AuthorApril 11, 20255 Min ReadMikhail Reshetnikov via Alamy Stock PhotoEvery IT team seems to have one -- the member who's highly dedicated and talented, yet also something of a free spirit. Knowing how to tolerate and cater to this individual's unique needs without alienating other team members isn't a task generally covered in Management 101 courses for CIOs and IT leaders, yet it's essential in order to keep your team happy and productive. Instead of trying to fit a quirky team member into a rigid mold, work to understand what makes them tick and leverage that unique perspective, suggests Anbang Xu, founder of JoggAI, an AI-powered video platform, and a former senior product manager at Apple and senior software engineer at Google. It’s important to give these individuals space to thrive in their own way, while maintaining clear communication and setting expectations, he observes in an email interview. "By focusing on their strengths, I’ve found that they can bring innovative solutions and fresh ideas that would otherwise be overlooked." Embracing Uniqueness Embrace uniqueness while setting clear expectations, recommends Chetan Honnenahalli, engineering lead at software firm Hubspot and a former team leader at Meta, Zoom, and American Express. "Focus on their strengths and the value they bring to the team but establish boundaries to ensure their behavior doesn’t disrupt team dynamics or project goals," he says in an online interview. "Frequent one-on-one check-ins can help address potential concerns while reinforcing their contributions." Related:Balance respect for individuality with the needs of the team and organization. By valuing their quirks as part of their creative process, you'll foster a sense of belonging and loyalty, Honnenahalli says. "Clear boundaries and open communication will prevent potential misunderstandings, ensuring harmony within the team." Tolerance should depend on the impact of their behavior on team dynamics and project outcomes, Honnenahalli says. "Quirks that enhance creativity or problem-solving should be celebrated, but behaviors that cause disruptions, undermine morale, or create inefficiencies should be addressed promptly." Toleration Techniques Quirky behavior can become an issue if it interferes with the employee's ability to perform their work or if it disrupts fellow team members, says Matt Erhard, managing partner with professional search firm Summit Search Group, via email. "In these cases, the best approach is to have a one-on-one conversation with that employee," he advises. "Address the specific behaviors of concern and establish some expectations and boundaries about what is and isn't acceptable within the workplace." Related:Give the quirky team member strategies and guidelines to adapt their behavior within the workplace setting, Erhard recommends. "It should be made clear that you aren't criticizing or trying to change their personality but rather establishing rules about how they're expected to interact with their colleagues or customers when they're at work." As long as a maverick's behavior doesn't impede team collaboration, project deadlines, or morale, there’s room for individuality, Xu says. "The level of quirkiness you’re willing to tolerate is really a matter of balance," he states. "If their personality adds value without disrupting the team's harmony or performance, then it’s worth embracing." Team Impact Set team norms that allow for individuality while ensuring mutual respect and collaboration, Honnenahalli recommends. Address issues directly and constructively, ensuring open dialogue and fair resolutions. "Highlight how the individual’s quirks contribute positively to the team’s success, encouraging a culture of acceptance." Open communication is vital, Erhard says. "Talk to other team members about the issues they're having and why it's a concern for them." Facilitating a dialogue between the individuals can help both parties see each other’s perspectives. Related:When to Clamp Down Leaders should aim to channel quirkiness constructively rather than working to eliminate it. For instance, if a quirky habit is distracting or counterproductive, the team leader can guide the individual toward alternatives that achieve similar results without causing friction, Honnenahalli says. Avoid suppressing individuality unless it directly conflicts with professional responsibilities or team cohesion. Help the unconventional team member channel their quirks productively rather than trying to reduce them, Xu suggests. "This means offering support and guidance in ways that allow them to thrive within the structure of the team." Remember that quirks can often be a unique asset in problem-solving and innovation. Diverse Perspectives In IT, where innovation thrives on diverse perspectives, quirky team members often deliver creative solutions and unconventional thinking, Honnenahalli says. "Leaders who manage such individuals effectively can cultivate a culture of innovation and inclusivity, boosting morale and productivity." Every team needs a mix of personalities to excel, Xu observes. "The most innovative teams I’ve worked with had a variety of thinkers -- some more conventional, others quirky in their approach." It's the diversity in thinking that drives creativity and breakthroughs. "As leaders, it’s our responsibility to cultivate an environment where these differences are not only accepted but celebrated."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 96 Views
  • WWW.INFORMATIONWEEK.COM
    Transforming Government Cyber Operations with AI
    Government cybersecurity teams face an overwhelming challenge of perpetually having too many priorities but too few resources to address them all. Instead of focusing on strategic threat mitigation, cybersecurity teams are spending their time deconflicting alerts, chasing false positives, and struggling with visibility gaps. This can lead to higher costs, inefficiencies, alert fatigue, and a dangerous lack of visibility into potential risks. Artificial intelligence has the power to help government cybersecurity teams overcome these challenges. AI can make cybersecurity processes more efficient across the entire agency, from providing remediation recommendations to automating compliance.  A great example of the benefits of AI for cybersecurity operations is user behavioral analytics (UBA), where the technology can help evaluate user traffic patterns to create a baseline of known behaviors and flag unexpected or suspicious behavior that may indicate compromise for the security team to investigate. In the area of identity and access management, automated entitlement reviews ensure users have the appropriate level of access based on their role, while AI-driven role mining strengthens security principles such as least privilege and separation of duties.  Related:Government cybersecurity teams must lean on AI to stay ahead of sophisticated adversaries and the ever-expanding attack surface. To successfully integrate AI into their workflow, these teams must understand how to best use the technology before, during, and after an incident.    Pre-Incident: Predicting and Preventing Attacks Government cybersecurity teams can leverage AI before an incident occurs to help accomplish one of their biggest goals -- becoming more predictive. While agencies have access to a lot of these tools now, AI can augment existing capabilities by providing the ideal level of unified visibility across the enterprise.  AI-enabled risk analysis should be used to identify which systems are potentially most vulnerable and where sensitive data is located. Automated penetration testing that uses AI and machine learning capabilities can then help teams identify vulnerabilities.  AI can also help cybersecurity teams determine the likelihood of a potential threat by correlating data, including real-world attack data, deep web chatter, and government alerts. AI can then provide teams with real-time risk scoring. Additionally, AI can right size the risk scoring for the organization by automating the recognition of mitigating factors and compensating controls.  Related:Once risks are established, these tools can offer prioritized recommendations and develop comprehensive response plans that consider factors humans often overlook, such as application interoperability and even personnel familiarity with tools and processes. This allows the AI to make prioritized recommendations for remediation while minimizing the potential for negative impact to the organization. Incident Response: Speed and Accuracy with AI When an incident does occur, AI should be used to support overwhelmed cybersecurity teams by creating more meaningful and accurate alerts. Once the alert goes out, automating actions like incident triage and system quarantine as much as possible can help decrease the mean time to resolution. This can occur before or after human review, depending on agencies’ operational requirements.   Cybersecurity teams can then leverage AI to tweak response plans based on environmental context and the specific threat. The machine learning solutions used to create these plans should be trained by humans to include simplified steps for faster containment, eradication, and recovery, as well as provide recommendations to lower the risk of re-occurrence.  Related:One of the biggest challenges government cybersecurity teams face during incident response is the high volume of data associated with each event. AI should be used to identify and correlate the most useful events across larger data sets, reducing the time cyber professionals need to start remediation. Generative AI simplifies investigations even further by translating analysis and answering questions in natural language, cross-correlating activity, and generating hypotheses to support informed decision-making.  To maximize AI for incident response, the technology must have access to all the data related to the event. This ensures the tools can successfully correlate threat activity that may not be apparent to the human eye -- such as events that took place days apart or on disparate parts of the network. However, this can create a challenge with existing security information and event management (SIEM) tools, which often require teams to cultivate data before ingesting to minimize false positives and reduce the cost associated with higher data volume. Cybersecurity teams should keep this in mind when developing their AI strategies for incident response.  Post-Incident: Learning and Adapting With AI Once an attack has been addressed, AI’s role doesn’t end. Post-event investigations are critical in understanding what happened during an attack and training the AI to better detect threats and prepare for the future.  AI should be used to generate an after-action report during the triage and remediation process to help inform agency leadership on next steps, including how to notify the public of the incident if needed, and better understand the cause of the event. Automated reports also help capture a more accurate representation of the event and save analysts’ time, allowing them to focus on more important tasks.  To preserve forensic evidence for potential legal investigations and avoid human error, cybersecurity teams should automate tasks such as data recovery and creation of hash calculations on information to show forensic proof of any digital evidence tampering. Cybersecurity teams should also use AI to help law enforcement identify and analyze digital evidence that can help identify the malicious actor(s). As cyber adversaries become more sophisticated in their attacks, AI is no longer just an advantage -- its potential capabilities are a necessity. The future of government cybersecurity relies on AI and human expertise working in tandem to stay ahead of threats and protect mission-critical systems.  
    0 Commenti 0 condivisioni 86 Views
  • WWW.INFORMATIONWEEK.COM
    What Are the Biggest Blind Spots for CIOs in AI Security?
    Tension between innovation and security is a tale as old as time. Innovators and CIOs want to blaze trails with new technology. CISOs and other security leaders want to take a more measured approach that mitigates risk. With the rise of AI in recent years regularly being characterized as an arms race, there is a real sense of urgency. But that risk that the security-minded worry about is still there.  Data leakage. Shadow AI. Hallucinations. Bias. Model poisoning. Prompt injection, direct and indirect. These are known risks associated with the use of AI, but that doesn’t mean business leaders are aware of all the ways they could manifest within their organizations and specific use cases. And now agentic AI is getting thrown into the mix. “Organizations are moving very, very quickly down the agentic path,” Oliver Friedrichs, founder and CEO of Pangea, a company that provides security guardrails for AI applications, tells InformationWeek. “It's eerily similar to the internet in the 1990s when it was somewhat like the Wild West and networks were wide open. Agentic applications really in most cases aren't taking security seriously because there aren't really a well-established set of security guardrails in place or available.” What are some of the security issues that enterprises might overlook as they rush to grasp the power of AI solutions? Related:Visibility  How many AI models are deployed in your organization? The answer to that question may not be as easy to answer as you think.  “I don't think people understand how pervasively AI is already deployed within large enterprises,” says Ian Swanson, CEO and founder of Protect AI, an AI and machine learning security company. “AI is not just new in the last two years. Generative AI and this influx of large language models that we’ve seen created a lot of tailwinds, but we also need to take stock an account of what we've had deployed.” Not only do you need to know what models are in use, you also need visibility into how those models arrive at decisions.  “If they're denying, let's say an insurance claim on a life insurance policy, there needs to be some history for compliance reasons and also the ability to diagnose if something goes wrong,” says Friedrichs.  If enterprise leaders do not know what AI models are in use and how those models are behaving, they can’t even begin to analyze and mitigate the associated security risks.  Auditability Swanson gave testimony before Congress during a hearing on AI security. He offers a simple metaphor: AI as cake. Would you eat a slice of cake if you didn’t know the recipe, the ingredients, the baker? As tempting as that delicious dessert might be, most people would say no.  Related:“AI is something that you can't, and you shouldn't just consume. You should understand how it's built. You should understand and make sure that it doesn't include things that are malicious,” says Swanson.  Has an AI model been secured throughout the development process? Do security teams have the ability to conduct continuous monitoring?  “It's clear that security isn't a onetime check. This is an ongoing process, and these are new muscles a lot of organizations are currently building,” Swanson adds.  Third Parties and Data Usage Third party risk is a perennial concern for security teams, and that risk balloons along with AI. AI models often have third-party components, and each additional party is another potential exposure point for enterprise data.  “The work is really on us to go through and understand then what are those third parties doing with our data for our organization,” says Harman Kaur, vice president of AI at Tanium, a cybersecurity and systems management company. Do third parties have access to your enterprise data? Are they moving that data to regions you don’t want? Are they using that data to train AI models? Enterprise teams need to dig into the terms of any agreement they make to use an AI model to answer these questions and decide how to move forward, depending on risk tolerance.   Related:Legal Risk  The legal landscape for AI is still very nascent. Regulations are still being contemplated, but that doesn’t negate the presence of legal risk. Already there are plenty of examples of lawsuits and class actions filed in response to AI use.  “When something bad happens, everybody's going to get sued. And they'll point the fingers at each other,” says Robert W. Taylor, of counsel at Carstens, Allen & Gourley, a technology and IP law firm. Developers of AI models and their customers could find themselves liable for outcomes that cause harm.  And many enterprises are exposed to that kind of risk. “When companies contemplate building or deploying these AI solutions, they don't do a holistic legal risk assessment,” Taylor observes.  Now, predicting how the legality around AI will ultimately settle, and when that will even happen, is no easy task. There is no roadmap, but that doesn’t mean enterprise teams should throw up their collective hands and plow ahead with no thought for the legal implications. “It's all about making sure you understand at a deep level where all the risk lies in whatever technologies you're using and then doing all you can [by] following reasonable practice best practices on how you mitigate those harms and documenting everything,” says Taylor.  Responsible AI Many frameworks for responsible AI use are available today, but the devil is in the details.  “One of the things that I think a lot of companies struggle with, my own clients included, is basically taking these principles of responsible AI and applying them to specific use cases,” Taylor shares.  Enterprise teams have to do the legwork to determine the risks specific to their use cases and how they can apply principles of responsible AI to mitigate them.  Security vs. Innovation  Embracing security and innovation can feel like balancing on the edge of knife. Slip one way and you feel the cut of falling behind in the AI race. Slip the other way and you might be facing the sting of overlooking security pitfalls. But doing nothing ensures you will fall behind. “We've seen it paralyzes some organizations. They have no idea how to create a framework to say is this a risk that we're willing to accept,” says Kaur.  Adopting AI with a security mindset is not to say that risk is completely avoidable. Of course it isn’t. “The reality is this is such a fast-moving space that it's like drinking from a firehose,” says Friedrichs.  Enterprise teams can take some intentional steps to better understand the risks of AI specific to their organizations while moving toward realizing the value of this technology.  Looking at all of the AI tools available in the market today is akin to being in a cakeshop, to use Swanson’s metaphor. Each one looks more delicious than the next. But enterprises can narrow the decision process down by starting with vendors that they already know and trust. It’s easier to know where that cake comes from and the risks of ingesting it.  “Who do I already trust and already exists in my organization? What can I leverage from those vendors to make me more productive today?” says Kaur. “And generally, what we've seen is with those organizations, our legal team, our security teams have already done extensive reviews. So, there's just an incremental piece that we need to do.” Leverage risk frameworks that are available, such as the AI Risk Management Framework from the National Institute of Standards and Technology (NIST). “Start figuring out what pieces are more important to you and what's really critical to you and start putting all of these tools that are coming in through that filter,” says Kaur.  Taking that approach requires a multidisciplinary effort. AI is being used across entire enterprises. Different teams will define and understand risk in different ways.  “Pull in your security teams, pull in your development teams, pull in your business teams, and have a line of sight [on] a process that wants to be improved and work backwards from that,” Swanson recommends.  AI represents staggering opportunities for enterprise, and we have just begun to work through the learning curve. But security risks, whether or not you see them, will always have to be a part of the conversation.  “There should be no AI in the enterprise without security of AI. AI has to be safe, trusted, and secure in order for it to deliver on its value,” says Swanson.
    0 Commenti 0 condivisioni 104 Views
  • WWW.INFORMATIONWEEK.COM
    Security Consulting Firm CIO Tackles Platform Consolidation
    John Edwards, Technology Journalist & AuthorApril 10, 20254 Min ReadRebecca Fox, NCC GroupRebecca Fox is group chief information officer at cybersecurity consulting firm NCC Group. Responsible for technology and application strategy and delivery, she has over 15 years of experience leading technology functions, sales, and commercial teams. During her career, Fox has led digital transformations, system implementations, organization design, and complex and diverse technical and development teams on a global scale. Fox has a technical development background, yet her experiences include large-scale project/program/portfolio management, data management and strategy, and service operations. In an online interview, Fox relates her experience trying to successfully assemble a high-stakes puzzle that was critical to her enterprise's long-term success. She notes that the project, while immensely challenging, would ultimately benefit both the organization as well as her personal expertise and confidence. What's the biggest challenge you ever faced during your tenure? A post-M&A integration -- specifically, trying to consolidate CRM platforms across multiple businesses with different cultures, processes, and emotional states. I was tasked with delivering one system, fast. On paper, it looked like a straightforward strategic priority. In reality, it pushed me and my leadership to the edge. Related:What caused the problem? I tried to move faster than the business could absorb. I had the solution, I had the plan, but I hadn’t built enough of the runway. I underestimated the emotional impact of M&A and overestimated the readiness for change. I hadn’t done the people work first. It’s like giving a child bitter medicine -- it may be the right thing, but if you don’t wrap it in understanding, empathy, and communication, they're going to spit it out. How did you resolve the problem? I had to hit pause and reframe the whole project. I focused on outcomes, not process. I also became a lot clearer on the outcome and why. But above all I prioritized relationships, because without trust, there’s no traction. What would have happened if the problem wasn't swiftly resolved? We would have launched a platform no one used. Worse, I would have burned out the team, damaged relationships, and lost momentum at a time when unity was non-negotiable. Change would have stalled, and cynicism would have grown. How long did it take to resolve the problem? The platform landed within months and was received better because of the tension and disagreement that forced us to get aligned. But the leadership lessons? That evolution has taken a career. That M&A moment was just one chapter -- a pivotal one -- but part of a much longer journey in learning how to lead through people, not just through plans. Related:Who supported you during this challenge? My team, even when I didn’t get it right the first time, and a few brave peers who gave me the kind of feedback that stings in the moment but sticks because it's true. Did anyone let you down? Yes -- me. I let myself down by pushing too hard, too fast. I let my team down by not giving them the space to speak up sooner. I’ve had to own that, grow from it, and lead differently since. What advice do you have for other leaders who may face a similar challenge? Build the relationships before you need them. The role of CIO today isn't just about technology, it’s about influence, resilience, and focus. You are the negotiator, the connector, the cheerleader, and you must anchor everything to the big three: grow revenue, increase margins, and reduce risk. That clarity makes it easier for everyone to understand the ‘why’ behind the ‘what.’ Is there anything else you would like to add? It took me too long to realize that relentless focus on the customer is what cuts through the noise. We’re not here to launch platforms. We’re here to make the business better, and that starts by aligning every decision to the outcomes that matter. Progress is messy, tension is necessary, and leadership is about showing up -- especially when it’s hard. Related:About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
    0 Commenti 0 condivisioni 102 Views
Altre storie