
Gmail, Outlook, Apple Mail WarningAI Attack Nightmare Is Coming True
www.forbes.com
We are not ready for this.gettyRepublished on March 16th with additional security industry analysis on the threat from semi-autonomous AI attacks and a new GenAI attack warning.Email users have been warned for some time that AI attacks and hacks will ramp up this year, becoming ever harder to detect. And while this will include frightening levels of deepfake sophistication, it will also enable more attackers to conduct more attacks, with AI operating largely independently, "carrying out attacks." That has always been the nightmare scenario and it is suddenly coming true, putting millions of you at risk.We know this, but seeing is believing. A new video and blog from Symantec has just shown how a new AI agent ot operator can be deployed to conduct a phishing attack. Agents have more functionality and can actually perform tasks such as interacting with web pages. While an agents legitimate use case may be the automation of routine tasks, attackers could potentially leverage them to create infrastructure and mount attacks.The security team has warned of this before, that while existing Large Language Model (LLM) AIs are already being put to use by attackers, they are largely passive and could only assist in performing tasks such as creating phishing materials or even writing code. At the time, we predicted that agents would eventually be added to LLM AIs and that they would become more powerful as a result, increasing the potential risk.Now theres a proof of concept. Its rudimentary but will quickly become more advanced. The sight of an AI agent hunting the internet and LinkedIn to find a targets email address and then for website advice on crafting malicious scripts, before writing its own lure should put fear into all of us. Theres no limit to how far this will go.Weve been monitoring usage of AI by attackers for a while now, Symantecs Dick OBrien explained to me. While we know theyre being used by some actors, weve been predicting that the advent of AI agents could be the moment that AI-assisted attacks start to pose a greater threat, because an agent isn't passive, it can do things as opposed to generate text or code. Our goal was to see if an agent could carry out an an attack end to end with no intervention from us other than the initial prompt.As SlashNexts J Stephen Kowskis told me, the rise of AI agents like Operator shows the dual nature of technology tools built for productivity can be weaponized by determined attackers with minimal effort. This research highlights how AI systems can be manipulated through simple prompt engineering to bypass ethical guardrails and execute complex attack chains that gather intelligence, create malicious code, and deliver convincing social engineering lures.Even the inbuilt security is ludicrously lightweight. Our first attempt failed quickly as Operator told us that it was unable to proceed as it involves sending unsolicited emails and potentially sensitive information," Symantec said, with its POC showing how this was easily overcome. "This could violate privacy and security policies. However, tweaking the prompt to state that the target had authorized us to send emails bypassed this restriction, and Operator began performing the assigned tasks.The agent used was from OpenAI, but this will be a level playing field and its the nature of the capability that matters not the identity of the AI developer. Perhaps the most notable aspect of this attack is that when the Operator fails to find the targets email address online, and so successfully deduces what the address would likely be from others within the same organization that it could find.Black Ducks Andrew Bolster warned me that as AI-driven tools are given more capabilities via systems such as OpenAI Operator or Anthropics Computer Use, the challenge of constraining LLMs comes into clearer focus, adding that "examples like this demonstrate the trust-gap in underlying LLMs guardrails that supposedly prevent bad behavior, whether established through reinforcement, system prompts, distillation or other methods; LLMs can be tricked into bad behavior. In fact, one could consider this demonstration as a standard example of social engineering, rather than exploiting a vulnerability. The researchers simply put on a virtual hi-vis jacket and acted to the LLM like they were supposed" to be there."Agents such as Operator demonstrate both the potential of AI and some of the possible risks, Symantec warns. The technology is still in its infancy, and the malicious tasks it can perform are still relatively straightforward compared to what may be done by a skilled attacker. However, the pace of advancements in this field means it may not be long before agents become a lot more powerful. It is easy to imagine a scenario where an attacker could simply instruct one to breach Acme Corp and the agent will determine the optimal steps before carrying them out.And that really is the nightmare scenario. We were a little surprised that it actually worked for us on day one, OBrien told me, given its the first agent to launch.Guy Feinberg from Oasis Security agrees, telling me AI agents, like human employees, can be manipulated. Just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions."Organizations need to implement robust security controls that assume AI will be used against them, warns Kowski. The best defense combines advanced threat detection technologies that can identify behavioral anomalies with proactive security measures that limit what information is accessible to potential attackers in the first place.This week we have also seen a report into Microsoft Copilot Spoofing as a new phishing vector, with users not yet trained on how to detect these new attacks. Thats one of the reasons AI fueled attacks are much more likely to hit their targets. You can expect to see continuous reports as this new threat landscape shapes up.And that warning has been reinforced by a second report this weekend into AI-fueled attacks, painting a frightening picture as to whats coming next. While most traditional GenAI tools have various guardrails in place to combat attempts to use them for malicious purposes, says the research team at Tenable, cybercriminal usage of tools like OpenAIs ChatGPT and Googles Gemini have been documented by both OpenAI (Disrupting malicious uses of AI by state-affiliated threat actors) and Google (Adversarial Misuse of Generative AI). OpenAI recently removed accounts of Chinese and North Korean users caught using ChatGPT for malicious purposes.Notwithstanding Symantecs POC highlighting how easily some of these mainstream GenAI guardrails can be bypassed, Tenable warns that with the recent open source release of DeepSeeks local LLMs, like DeepSeek V3 and DeepSeek R1, we anticipate cybercriminals will seek to utilize these freely accessible models.As regards DeepSeeks R1, the team says it wanted to evaluate its malicious software, or malware generation capability, under two scenarios: a keylogger and a simple ransomware Our initial test focused on creating a Windows keylogger: a compact, C++-based implementation compatible with the latest Windows version. The ideal outcome would include features such as evasion from Windows Task Manager and mechanisms to conceal or encrypt the keylogging file, making detection more difficult. We also evaluated DeepSeeks ability to generate a simple ransomware sample.The team then tasked the tool with helping develop a ransomware attack. DeepSeek was able to identify potential issues when planning the development of this simple ransomware, such as file permissions, handling large files, performance and anti-debugging techniques. Additionally, DeepSeek was able to identify some potential challenges in implementation, including the need for testing and debugging.The bottom line, says Feinberg, is that you cant stop attackers from manipulating AI, just like you cant stop them from phishing employees. The solution is better governance and security for all identitieshuman and non-human alike.The answer, he says is to assign permission to AI in the same way as you would do people treat them the same. And that included identity-based governance and security, and an assumption that AI will be tricked into making mistakes.Manipulation Is inevitable, Feinberg warns. Just as we cant prevent attackers from tricking people, we cant stop them from manipulating AI agents. The key is limiting what these agents can do without oversight. AI agents need identity governance. They must be managed like human identities, with least privilege access, monitoring, and clear policies to prevent abuse. Security teams need visibility.Frightening enough for now, but Tenable warns that we believe that DeepSeek is likely to fuel further development of malicious AI-generated code by cybercriminals in the near future, given that they have already used the tool to create a keylogger that could hide an encrypted log file on disk as well as develop a simple ransomware executable.Two very different uses of AI tools to either craft or even execute attacks. One thing is already clear though we are not yet ready for this.
0 Reacties
·0 aandelen
·40 Views