The Cost of AI Security
www.informationweek.com
Carrie Pallardy, Contributing ReporterFebruary 6, 20256 Min ReadTithi Luadthong via Alamy StockWeve been here before. A new, exciting technology emerges with the promise of transforming business. Enterprises race to adopt it. Vendors clamor to create the most enticing use cases. Business first, security second. We saw this with the cloud, and now we are in the early stages with a new technology: AI. A survey conducted by IBM found that just 24% of GenAI projects include a security element.Now, boards are much more savvy about the necessity of cybersecurity. CEOs understand the reputational risk, says Akiba Saeedi, vice president of product management at global technology company IBM Security.That awareness means more enterprise leaders are thinking about AI in the context of security, even if the business case is winning out over security at the moment. What security costs does AI introduce into the enterprise environment? How do budgets need to adapt to handle these costs?Data SecurityData security is not a new concept, or cost, for enterprises. But it is essential to maintaining AI security.Before you can really do good AI security you really have to have good data security because at the heart of the AI is really the data, and a lot of the companies and folks that we talked to are still having trouble with the basic data layer, John Giglio, director of cloud security at cloud solutions provider SADA, an Insight company, tells InformationWeek.Related:For organizations that have not prioritized data security already, the budgeting conversation around AI security can be a difficult one. There can be very hidden costs. It can be very difficult to understand how to go about fixing those problems and identifying those hidden costs, says Giglio.Model SecurityAI models themselves need to be secured. A lot of these generative AI platforms are really just black boxes. So, were having to create new paradigms as we look at, How do we pen test these types of solutions? says Matti Pearce, vice president of information security, risk, and compliance at cybersecurity company Absolute Security.Model manipulation is also a concern. It is possible to trick the models into giving information that they shouldn't, divulging sensitive data [getting] the model to do something that [its] not necessarily meant to do, says Saeedi.What tools and processes do an enterprise need to invest in to prevent that from happening?Shadow AIAI is readily available to employees, and enterprise leaders might not know what tools are already in use throughout their organization. Shadow IT is not a new challenge; shadow AI simply compounds it.Related:If employees are feeding enterprise data to various unknown AI tools, the risk of exposure increases. Breaches that involve shadow data can be more difficult to identify and contain, ultimately resulting in more cost. Breaches involving shadow data cost an average of $5.27 million, according to IBM.Employee TrainingAny time an enterprise introduces a new technology, it comes with a learning curve. Do the employees building new AI capabilities understand the security implications?If you think about the people who are building the AI models, they are data scientists. They are researchers. Their expertise is not necessarily security, Saeedi points out.They need the time and resources to learn how to secure AI models. Enterprises also need to invest in education for end users. How can they use AI tools with security in mind? You can't secure something if you dont understand how it works, says Giglio.Employee education also needs to address the new attack capabilities AI gives to threat actors. Our awareness programs have to start really focusing on the fact that attackers can now impersonate people, says Pearce. Weve got deep fakes that are actually, really scary and can be done on video calls. We need to make sure that our staff and our organizations are ready for that.Related:Governance and ComplianceEnterprise leaders need strong governance and policies to reduce the risk of potentially costly consequences of AI use: data exposure, shadow AI, model manipulation, AI-fueled attacks, safety lapses, model discrimination.While there are not yet detailed regulations on exactly how you have to prove to auditors your compliance around the security controls you have around data or your AI models, we know that will come, says Saeedi. That will drive spending.Cyber InsuranceGenAI introduces new security capabilities and risks for enterprises, which could mean changes in the cyber insurance space. Could the right defensive tools actually reduce an enterprises risk profile and premiums? Could more sophisticated threats drive up insurance costs?It may be a little early to understand what the actual implications of GenAI are going to be on the insurance risk profile, says Giglio. It may be early, but insurance costs are an important part of the security costs conversation.Building a BudgetThe cost of AI and its security needs is going to be an ongoing conversation for enterprise leaders.Its still so early in the cycle that most security organizations are trying to get their arms around what they need to protect, whats actually different. What do [they] already have in place that can be leveraged? says Saeedi.Who is a part of these evolving conversations? CISOs, naturally, have a leading role in defining the security controls applied to an enterprises AI tools, but given the growing ubiquity of AI a multistakeholder approach is necessary. Other C-suite leaders, the legal team, and the compliance team often have a voice. Saeedi is seeing cross-functional committees forming to assess AI risks, implementation, governance, and budgeting.As these teams within enterprises begin to wrap their heads around various AI security costs, the conversation needs to include AI vendors.The really key part for any security or IT organization, when [were] talking with the vendor is to understand, Were going to use your AI platform but what are you going to do with our data?Is that vendor going to use an enterprises data for model training? How is that enterprises data secured? How does an AI vendor address the potential security risks associated with the implementation of its tool?AI vendors are increasingly prepared to have these security conversations with their customers. Major players like Microsoft and Google theyre starting to lead with those security answers in their pitch as opposed to just the GenAI capabilities because they know its coming, says Giglio.The budgeting conversation for AI features a familiar tug-of-war: innovation versus security. Allocating those dollars isnt easy, and it is early enough in the implementation process that there is plenty of room for mistakes. But there are new frameworks designed to help enterprises understand their risk, like the OWASP Top 10 for Large Language Model Applications and the AI Risk Management Framework from the National Institute of Standards and Technology (NIST). A clearer picture of risk helps enterprise leaders determine where dollars need to go.About the AuthorCarrie PallardyContributing ReporterCarrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.See more from Carrie PallardyNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
0 Reacties ·0 aandelen ·61 Views