What CISOs Think About GenAI
Lisa Morgan, Freelance Writer
January 8, 2025 | 7 Min Read
(Image: Milan Surkala via Alamy Stock)

GenAI is everywhere -- available as a standalone tool, as proprietary LLMs, or embedded in applications. Because anyone can easily access it, it also presents security and privacy risks, so CISOs are doing what they can to stay current while protecting their companies with policies.

"As a CISO who has to approve an organization's usage of GenAI, I need to have a centralized governance framework in place," says Sammy Basu, CEO and founder of cybersecurity solution provider Careful Security. "We need to educate employees about what information they can enter into AI tools, and they should refrain from uploading client-confidential or restricted information because we don't have clarity on where the data may end up."

Specifically, Basu created security policies and simple AI dos and don'ts addressing AI usage for Careful Security clients. As is typical these days, people are uploading information into AI models to stay competitive. However, Basu says a regular user would need security gateways built into their AI tools to identify and redact sensitive information. In addition, GenAI IP laws are ambiguous, so it's not always clear who owns the copyright of AI-generated content that has been altered by a human.
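The article does not describe how such a gateway would be built, but as a rough illustration, a pre-prompt redaction filter along these lines could sit between users and an LLM. This is a minimal Python sketch; the regex rules and the redact_prompt helper are hypothetical stand-ins for a real DLP or PII-detection engine.

```python
import re

# Hypothetical redaction rules: a production gateway would use a dedicated
# PII/DLP engine rather than a handful of regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders before the
    prompt ever leaves the organization's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact_prompt("Email jane.doe@client.com about card 4111 1111 1111 1111."))
# -> Email [REDACTED_EMAIL] about card [REDACTED_CREDIT_CARD].
```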
From Cautious Curiosity to Risk-Aware Adoption

Ed Gaudet, CEO and founder of healthcare risk management solution provider Censinet, says that over the years, as a user and as a CISO, his experience has moved from cautious curiosity to a more structured, risk-aware adoption of GenAI capabilities.

"It is undeniable that GenAI opens a vast array of opportunities, though careful planning and continuous learning remain critical to contain the risks that it brings," says Gaudet. "I was initially cautious about GenAI because of data privacy, IP protection, and misuse. Early versions of GenAI tools, for instance, highlighted how input data was stored or used for further training. But as the technology has improved and providers have put better safeguards in place -- data opt-outs and secure APIs -- I have come to see what it can do when used responsibly."

Gaudet believes sensitive or proprietary data should never be input into GenAI systems, such as OpenAI's or other proprietary LLMs. He has also made it mandatory for teams to use only vetted and authorized tools, preferably those that run in secure, on-premises environments, to reduce data exposure.

"One of the significant challenges has been educating non-technical teams on these policies," says Gaudet. "GenAI is considered a black-box solution by many users, and they do not always understand all the potential risks associated with data leaks or the creation of misinformation."

Patricia Thaine, co-founder and CEO of data privacy solution provider Private AI, says curating data for machine learning is complicated enough without also having to think about access controls, purpose limitation, and the security of personal and confidential company information going to third parties.

"This was never going to be an easy task, no matter when it happened," says Thaine. "The success of this gargantuan endeavor depends almost entirely on whether organizations can maintain trust with proper AI governance in place, and whether we have finally understood just how fundamentally important meticulous data curation and quality annotations are, regardless of how large a model we throw at a task."

The Risks Can Outweigh the Benefits

More workers are using GenAI for brainstorming, generating content, writing code, research, and analysis. While it has the potential to provide valuable contributions to various workflows as it matures, too much can go wrong without the proper safeguards.

"As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards," says Harold Rivas, CISO at global cybersecurity company Trellix. "Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution."

However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents, as they did when first adopting cloud.

Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year.

"The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary," says Nag. "We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty."

The standards landscape has also evolved. The release of NIST's AI Risk Management Framework, along with concrete guidance from major cloud providers on AI governance, provides clear frameworks to audit against.

"We've implemented these controls within our existing security architecture, treating AI much like any other data-processing capability that requires appropriate safeguards. From a practical standpoint, we're now running different AI workloads based on data sensitivity," says Nag. "Public-facing functions might leverage cloud APIs with appropriate controls, while sensitive data processing happens exclusively on private infrastructure using our own models. This tiered approach lets us maximize utility while maintaining strict control over sensitive data."
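Nag does not detail QueryPal's implementation, but a minimal sketch of sensitivity-tiered routing might look like the following. The Sensitivity labels, endpoint URLs, and MODEL_ROUTES table are illustrative assumptions, with restricted traffic pinned to a hypothetical on-premises server hosting a local quantized model.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"          # e.g., marketing copy, public docs
    INTERNAL = "internal"      # semi-sensitive; vetted platform, no retention
    RESTRICTED = "restricted"  # regulated or client-confidential data

# Hypothetical endpoints: a cloud API for public work, a compliant private
# instance for semi-sensitive work, and a self-hosted quantized model for
# data that must never leave the control boundary.
MODEL_ROUTES = {
    Sensitivity.PUBLIC: "https://api.cloud-llm.example/v1/chat",
    Sensitivity.INTERNAL: "https://private-instance.example/v1/chat",
    Sensitivity.RESTRICTED: "http://localhost:8080/v1/chat",
}

def route_request(sensitivity: Sensitivity) -> str:
    """Return the only endpoint permitted for this data classification.
    A real router would also enforce authentication, redaction, and
    audit logging before any prompt is sent."""
    return MODEL_ROUTES[sensitivity]

print(route_request(Sensitivity.RESTRICTED))
# -> http://localhost:8080/v1/chat (stays on private infrastructure)
```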
The rise of enterprise-grade AI platforms with SOC 2 compliance, private instances, and no-data-retention policies has also expanded QueryPal's options for semi-sensitive workloads.

"When combined with proper data classification and access controls, these platforms can be safely integrated into many business processes. That said, we maintain rigorous monitoring and access controls around all AI systems," says Nag. "We treat model inputs and outputs as sensitive data streams that need to be tracked, logged, and audited. Our incident response procedures specifically account for AI-related data exposure scenarios, and we regularly test these procedures."
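As a loose illustration of treating model inputs and outputs as audited data streams, the wrapper below logs hashed prompts and responses around any model call. The audited_completion function and its log fields are assumptions for this sketch, not QueryPal's actual tooling; hashing rather than storing raw text keeps the audit trail itself from becoming a new sensitive data store.

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai.audit")

def audited_completion(call_model, user_id: str, prompt: str) -> str:
    """Wrap any model call so inputs and outputs are tracked and auditable.
    call_model is a placeholder for the client function that actually
    invokes an LLM; only hashes and sizes are logged, because prompts and
    responses are themselves treated as sensitive data."""
    started = time.time()
    response = call_model(prompt)
    audit_log.info(json.dumps({
        "event": "genai_call",
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "latency_s": round(time.time() - started, 3),
    }))
    return response

# Usage with a stub model; a real deployment would pass the actual client.
print(audited_completion(lambda p: p.upper(), "analyst-42", "triage alert 1234"))
```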
GenAI Is Improving Cybersecurity Detection and Response

Greg Notch, CISO at managed detection and response service provider Expel, says GenAI's ability to quickly explain what happened during a security incident to both SOC analysts and impacted parties goes a long way toward improving efficiency and increasing accountability in the SOC.

"[GenAI] is already proving to be a game-changer for security operations," says Notch. "As AI technologies flood the market, companies face the dual challenge of evaluating these tools' potential and managing risks effectively. CISOs must cut through the noise of various GenAI technologies to identify actual risks and align security programs accordingly, investing significant time and effort into crafting policies, assessing new tools, and helping the business understand tradeoffs. Plus, training cybersecurity teams to assess and use these tools is essential, albeit costly. It's simply the cost of doing business with GenAI."

Adopting AI tools can also inadvertently shift a company's security perimeter, making it crucial to educate employees about the risks of sharing sensitive information with GenAI tools, both in their professional and personal lives. Clear acceptable use policies or guardrails should be in place to guide them.

"The real game-changer is outcome-based planning," says Notch. "Leaders should ask: What results do we need to support our business goals? What security investments are required to support these goals? And do these align with our budget constraints and business objectives? This might involve scenario planning -- imagining the costs of potential data loss, legal costs, and other negative business impacts, as well as prevention measures -- to ensure budgets cover both immediate and future security needs."

Scenario-based budgets help organizations allocate resources thoughtfully and proactively, maximizing long-term value from AI investments and minimizing waste. "It's about being prepared, not panicked," he says.

Concentrating on basic security hygiene is the best way to protect your organization, says Notch. The No. 1 danger is letting unfounded AI threats distract organizations from hardening their standard security practices. Craft a plan for when an attack is successful, whether AI was a factor or not. Having visibility and a way to remediate is crucial for when, not if, an attacker succeeds.

About the Author

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.