How Are Threat Actors Using Adversarial GenAI?
www.informationweek.com
As the GenAI hype cycle continues, there is a parallel discussion about the ways this technology will be misused and weaponized by threat actors. Initially, much of that discussion was speculation, some of it dire. As time went on, real-world examples emerged: threat actors are leveraging deepfakes, and threat analysts are sounding the alarm over more sophisticated phishing campaigns honed by GenAI.

How is this technology being abused today, and what can enterprise leaders do as threat actors continue to leverage GenAI?

Threat Actors and GenAI Use Cases

It's hard not to get swept up in GenAI fever. Leaders in nearly every industry continue to hear about the alluring possibilities of innovation and productivity gains. But GenAI is a tool like any other, one that can be used for good or ill.

"Attackers are just as curious as we are. They want to see how far they can go with an LLM, just like we can. Which GenAI models will allow them to produce malicious code? Which ones are going to let them do more? Which ones won't?" Crystal Morin, cybersecurity strategist at Sysdig, a cloud-native application protection platform (CNAPP), tells InformationWeek.

Just as business use cases are in their early days, the same appears true for malicious ones. While AI can be a useful tool for threat actors, it is not yet the game-changer it is sometimes portrayed to be, according to a new report from the Google Threat Intelligence Group (GTIG).

GTIG noted that advanced persistent threat (APT) groups and information operations (IO) actors are both putting GenAI to work. It observed groups associated with China, Iran, North Korea, and Russia using Gemini.

Threat actors use large language models (LLMs) in two ways, according to the report. They either use LLMs to drive more efficient attacks, or they give AI models instructions to take malicious action.

GTIG saw threat actors using AI to conduct various types of research and reconnaissance, create content, and troubleshoot code. Threat actors also attempted to use Gemini to abuse Google products and tried their hand at AI jailbreaks to bypass safety controls. Gemini restricted content that would enhance attackers' malicious aims and generated safety responses to attempted jailbreaks, according to the report.

One way threat actors are looking to misuse LLMs is by gaining unauthorized access via stolen credentials. The Sysdig Threat Research Team refers to this threat as LLMjacking. Attackers may simply want free access to an otherwise paid resource for relatively benign purposes, or they may be gaining access for more malicious reasons, like stealing information or using the LLM to enhance their campaigns.

"This isn't like other abuse cases where [they] trigger an alert, and you can find the attacker and shut it down. It's not that simple," says Morin. "There's not one detection analytic for LLMjacking. There are multiple things that you have to look for to trigger an alert."
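Morin's point about multiple signals lends itself to a simple illustration. The sketch below scores hypothetical LLM API access-log records against a few indicators -- an unusual spike in token consumption, a source region the organization does not normally operate from, and invocation of a model the organization never uses -- and alerts only when several coincide. The log schema, field names, and thresholds are assumptions made for illustration, not any particular vendor's telemetry or Sysdig's actual detections.

```python
# Hypothetical sketch: combine several weak signals from LLM API access logs
# to flag possible LLMjacking. The record schema, field names, and thresholds
# below are illustrative assumptions, not a specific product's telemetry.
from dataclasses import dataclass

@dataclass
class LLMUsageRecord:
    identity: str        # API key or cloud principal seen in the log
    source_region: str   # geolocated source of the request
    model: str           # model invoked (e.g., a paid hosted LLM)
    tokens: int          # tokens consumed by the request

def llmjacking_signals(record: LLMUsageRecord,
                       baseline_tokens: float,
                       known_regions: set[str],
                       known_models: set[str]) -> list[str]:
    """Return the list of anomaly signals this record trips."""
    signals = []
    if record.tokens > 5 * baseline_tokens:           # usage spike vs. baseline
        signals.append("token_spike")
    if record.source_region not in known_regions:     # unfamiliar location
        signals.append("unusual_region")
    if record.model not in known_models:              # model the org never uses
        signals.append("new_model_invocation")
    return signals

def should_alert(record: LLMUsageRecord, baseline_tokens: float,
                 known_regions: set[str], known_models: set[str],
                 min_signals: int = 2) -> bool:
    """No single analytic is decisive; alert when multiple signals coincide."""
    hits = llmjacking_signals(record, baseline_tokens, known_regions, known_models)
    return len(hits) >= min_signals

# Example: a sudden burst of usage from an unfamiliar region on a stolen key.
suspect = LLMUsageRecord("api-key-123", "ZZ", "big-paid-model", tokens=250_000)
print(should_alert(suspect, baseline_tokens=10_000,
                   known_regions={"us-east", "eu-west"},
                   known_models={"big-paid-model"}))  # True -> investigate
```

The specific checks matter less than the structure: each signal is noisy on its own, so the alert fires only when several line up, which is why there is no single detection analytic for LLMjacking.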
Counteracting GenAI Misuse

As threat actors continue to use GenAI, whether to improve tried-and-true tactics or, eventually, in more novel ways, what can be done in response?

Threat actors are going to try to use any and all available platforms. What responsibility do companies offering GenAI platforms have to monitor and counteract misuse and weaponization of their technology?

Google, for example, has AI principles and policy guidelines that aim to address secure and safe use of its Gemini app. In its recent report, Google outlines how Gemini responded to various threat actor attempts to jailbreak the model and use it for nefarious purposes. Similarly, AWS has automated abuse detection mechanisms in place for Amazon Bedrock, and Microsoft is taking legal action to disrupt malicious use of its Copilot AI.

"From a consumer point of view, I think we'll find that there'll be a growing impetus for people to expect them to have secure applications, and rightly so," says Carl Wearn, head of threat intelligence analysis and future ops at Mimecast.

As time goes on, attackers will continue to probe these LLMs for vulnerabilities and ways to bypass their guardrails. Of course, there is a plethora of other GenAI platforms and tools available, and most threat actors look for the easiest means to their ends.

DeepSeek has been dominating headlines not only for toppling OpenAI from its leadership position but also for its security risks. Enkrypt AI, an AI security platform, conducted red-teaming research on the Chinese startup's LLM and found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material.

As enterprise leaders continue to utilize AI tools in their organizations, they will be tasked with recognizing and combating potential misuse and weaponization. That will mean considering which platforms to use -- is the risk worth the benefit? -- and monitoring the GenAI tools they do use for misuse.

To spot LLMjacking, Morin recommends looking for "spikes in usage that are out of the ordinary, IPs from strange regions, or locations that are out of the ordinary for your organization." She adds, "Your security team will recognize what's normal and what's not normal for you."

Business leaders will also have to consider the use of shadow AI.

"I think the biggest threat at the moment is going to be that potential insider threat from individuals searching unauthorized applications or even authorized ones but inputting potentially PII or personal data or confidential data that really shouldn't be entered into these models," says Wearn.

Even businesses that abjure AI use internally will still face the prospect of attackers using GenAI to target them.

Advancing GenAI Capabilities

Threat actors may not yet be wielding GenAI for novel attacks, but that doesn't mean such a future isn't coming. As they continue to experiment, their proficiency with the technology will grow, and so will the possibility of adversarial innovation.

"I think attackers will be able to start customizing their own GenAI, weaponizing it a little bit more. So, we're at the point now where I think we will start to see a little bit more of those scary attacks that we've been talking about for the last year or two," says Morin. "But I think we're ready to combat those, too."