www.informationweek.com
Carrie Pallardy, Contributing Reporter
January 13, 2025 | 7 Min Read
(Photo: Kittipong Jirasukhanont via Alamy Stock Photo)

The majority of organizations -- 89% of them, according to the 2024 State of the Cloud Report from Flexera -- have adopted a multicloud strategy. Now they are riding the wave of the next big technology: AI. The opportunities seem boundless: chatbots, AI-assisted development, cognitive cloud computing, and the list goes on. But the power of AI in the cloud is not without risk.

While enterprises are eager to put AI to use, many of them still grapple with data governance as they accumulate more and more information. AI has the potential to amplify existing enterprise risks and introduce entirely new ones. How can enterprise leaders define these risks, both internal and external, and safeguard their organizations while capturing the benefits of cloud and AI?

Defining the Risks

Data is the lifeblood of cloud computing and AI. And where there is data, there is security risk and privacy risk. Misconfigurations, insider threats, external threat actors, compliance requirements, and third parties are among the pressing concerns enterprise leaders must address.

Risk assessment is not a new concept for enterprise leadership teams, and many of the same strategies apply when evaluating the risks associated with AI. "You do threat modeling and your planning phase and risk assessment. You do security requirement definitions [and] policy enforcement," says Rick Clark, global head of cloud advisory at UST, a digital transformation solutions company.

As AI tools flood the market and various business functions clamor to adopt them, the risk of exposing sensitive data grows and the attack surface expands.

For many enterprises, it makes sense to consolidate data to take advantage of internal AI, but that is not without risk. "Whether it's for security or development or anything, [you're] going to have to start consolidating data, and once you start consolidating data you create a single attack point," Clark points out.

And those are just the risks security leaders can more easily identify. The abundance of cheap and even free GenAI tools available to employees adds another layer of complexity.

"It's [like] how we used to have the shadow IT. It's repeating again with this," says Amrit Jassal, CTO at Egnyte, an enterprise content management company.

AI comes with novel risks as well.

"Poisoning of the LLMs, that I think is one of my biggest concerns right now," Clark shares with InformationWeek. "Enterprises aren't watching them carefully as they're starting to build these language models."

How can enterprises ensure the data feeding the LLMs they use hasn't been manipulated?

This early in the AI game, enterprise teams face the challenge of managing the behavior of, and testing, systems and tools that they may not yet fully understand.

"What's new and difficult and challenging in some ways for our industry is that the systems have a kind of nondeterministic behavior," explains Mark Ryland, director of the Office of the CISO at cloud computing services company Amazon Web Services (AWS). "You can't comprehensively test a system because it's designed in part to be critical, creative, meaning that the very same input doesn't result in the same output."
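That nondeterminism changes what a useful test even looks like: rather than asserting an exact expected string, teams typically assert properties that every acceptable output must satisfy. The sketch below is a rough illustration of that approach only; the generate() function is a hypothetical stand-in for whatever model or API an enterprise actually calls, and the specific checks are examples, not a standard.

```python
import re

def generate(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM or API the team actually uses."""
    raise NotImplementedError

# Example patterns for credential-like material that should never appear in output.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID format
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
]

def test_ticket_summary_properties():
    prompt = "Summarize this support ticket for an internal dashboard: ..."
    # Identical input can produce different output on every run, so run the
    # call several times and check properties instead of exact strings.
    for _ in range(5):
        output = generate(prompt)
        assert len(output) < 2000, "summary unexpectedly long"
        assert not any(p.search(output) for p in SECRET_PATTERNS), \
            "output contains credential-like material"
```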
The risks of AI and cloud can multiply with the complexity of an enterprise's tech stack. With a multicloud strategy and an often-growing supply chain, security teams have to think about a sprawling attack surface and myriad points of risk.

"As an example, we have had to take a close look at least privilege things, not just for our customers but for our own employees as well. And then that has to be extended not to just one provider but to multiple providers," says Jassal. "It definitely becomes much more complex."

AI Against the Cloud

Widely available AI tools will be leveraged not only by enterprises but also by the attackers that target them. At this point, the threat of AI-fueled attacks on cloud environments is moderately low, according to IBM's X-Force Cloud Threat Landscape Report 2024. But the escalation of that threat is easy to imagine.

AI could exponentially increase threat actors' capabilities via coding assistance, increasingly sophisticated campaigns, and automated attacks.

"We're going to start seeing that AI can gather information to start making personalized phishing attacks," says Clark. "There's going to be adversarial AI attacks, where they exploit weaknesses in your AI models even by feeding data to bypass security systems."

AI model developers will, naturally, attempt to curtail this activity, but potential victims cannot assume this risk goes away. "The providers of GenAI systems obviously have capabilities in place to try to detect abusive use of their systems, and I'm sure those controls are reasonably effective but not perfect," says Ryland.

Even if enterprises opt to eschew AI for now, threat actors are going to use that technology against them. "AI is going to be used in attacks against you. You're going to need AI to combat it, but you need to secure your AI. It's a bit of a vicious circle," says Clark.

The Role of Cloud Providers

Enterprises still have responsibility for their data in the cloud, while cloud providers play their part by securing the infrastructure of the cloud.

"The shared responsibility still stays," says Jassal. "Ultimately if something happens, a breach etcetera, in Egnyte's systems, Egnyte is responsible for it, whether it was due to a Google problem or Amazon problem. The customer doesn't really care."

While that fundamental shared responsibility model remains, does AI change the conversation at all?

Model providers are now part of the equation. "Model providers have a distinct set of responsibilities," says Ryland. "Those entities [take] on some responsibility to ensure that the models are behaving according to the commitments that are made around responsible AI."

While different parties -- users, cloud providers, and model providers -- have different responsibilities, AI is giving them new ways to meet those responsibilities. AI-driven security, for example, is going to be essential for enterprises to protect their data in the cloud, for cloud providers to protect their infrastructure, and for AI companies to protect their models.

Clark sees cloud providers playing a pivotal role here. "The hyperscalers are the only ones that are going to have enough GPUs to actually automate processing threat models and the attacks. I think that they're going to have to provide services for their clients to use," he says. "They're not going to give you these things for free. So, these are other services they're going to sell you."
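Some of that groundwork does not have to wait for new paid services, though. Jassal's earlier point about extending least privilege to multiple providers, for instance, translates into repeatable audits against APIs the providers already expose. The following is a minimal, AWS-only sketch using boto3, assuming credentials are already configured; it flags customer-managed IAM policies that allow wildcard actions, and equivalent checks would be needed for each additional cloud.

```python
import json
import urllib.parse

import boto3

iam = boto3.client("iam")

def wildcard_findings():
    """Flag attached customer-managed policies whose statements allow wildcard actions."""
    findings = []
    for page in iam.get_paginator("list_policies").paginate(Scope="Local", OnlyAttached=True):
        for policy in page["Policies"]:
            doc = iam.get_policy_version(
                PolicyArn=policy["Arn"],
                VersionId=policy["DefaultVersionId"],
            )["PolicyVersion"]["Document"]
            if isinstance(doc, str):  # some SDK versions return URL-encoded JSON
                doc = json.loads(urllib.parse.unquote(doc))
            statements = doc.get("Statement", [])
            if isinstance(statements, dict):
                statements = [statements]
            for stmt in statements:
                if stmt.get("Effect") != "Allow":
                    continue
                actions = stmt.get("Action", [])
                if isinstance(actions, str):
                    actions = [actions]
                # Only checks Action wildcards; a real audit would also weigh Resource scope.
                broad = [a for a in actions if a == "*" or a.endswith(":*")]
                if broad:
                    findings.append((policy["PolicyName"], broad))
    return findings

if __name__ == "__main__":
    for name, actions in wildcard_findings():
        print(f"{name}: overly broad actions {actions}")
```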
AWS, Microsoft, and Google each offer a host of tools designed to help customers secure GenAI applications, and more of those tools are likely to come. "We're definitely interested in increasing the capabilities that we provide for customers for risk management, risk mitigation, things like more powerful automated testing tools," Ryland shares.

Managing Risk

While the risks of AI and cloud are complex, enterprises are not without resources to manage them.

Security best practices that existed before the explosion of GenAI are still relevant today. "Building and operation of an IT system with the right kinds of access controls, least privilege, making sure that the data's carefully guarded and all these things that we would have done traditionally, we can now apply to a GenAI system," says Ryland.

Governance policies, and controls that ensure those policies are followed, will also be an important strategy for managing risk, particularly as it relates to employee use of this technology.

"The smart CISOs [don't] try to completely block that activity but rather quickly create the right policies around that," says Ryland. "Make sure employees are informed and can use the systems when appropriate, but also get proper warnings and guardrails around using external systems."

And experts are developing tools specific to the use of AI.

"There're a lot of good frameworks in the industry, things like the OWASP top 10 risks for LLMs, that have significant adoption," Ryland adds. "Security and governance teams now have some good industry practices codified with input from a lot of experts, which help them to have a set of concepts and a set of practices that help them to define and manage the risks that arise from a new technology."

The AI industry is maturing, but it is still relatively nascent and quickly evolving. There is going to be a learning curve for enterprises using cloud and AI technology. "I don't see how it can be avoided. There will be data leakages," says Jassal.

Enterprise teams will have to work through this learning curve, and its accompanying growing pains, with continuous risk assessment and management and new tools built to help them.

About the Author

Carrie Pallardy, Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.