Xanthorox AI Lets Anyone Become a Cybercriminal
May 7, 2025 | 9 min read

Criminal AI Is Here—And Anyone Can Subscribe

A new AI platform called Xanthorox markets itself as a tool for cybercrime, but its real danger may lie in how easily such systems can be built—and sold—by anyone

By Deni Ellis Béchard, edited by Dean Visser

rob dobi/Getty Images

This article includes a reference to violent sexual assault.

Reports of a sophisticated new artificial intelligence platform started surfacing on cybersecurity blogs in April, describing a bespoke system whispered about on dark web hacker forums and created for the sole purpose of crime. But despite its shadowy provenance and evil-sounding name, Xanthorox isn’t so mysterious. The developer of the AI has a GitHub page, as well as a public YouTube channel with screen recordings of its interface and the description “This Channel Is Created Just for Fun Content Ntg else.” There’s also a Gmail address for Xanthorox, a Telegram channel that chronicles the platform’s development and a Discord server where people can pay to access it with cryptocurrencies. No shady initiations into dark web criminal forums required—just a message to a lone entrepreneur serving potential criminals with more transparency than many online shops hawking antiaging creams on Instagram.

This isn’t to say that the platform isn’t nefarious. Xanthorox generates deepfake video and audio to defraud you by impersonating someone you know, phishing e-mails to steal your login credentials, malware code to break into your computer and ransomware to lock you out of it until you pay—common tools in a multibillion-dollar scam industry. And one screen recording on its YouTube channel promises worse. The white text on a black background is reminiscent of ChatGPT’s interface, until you see the user punch in the request “step by step guide for making nuke at my basement.” And the AI replies, “You’ll need either plutonium-239 or highly enriched uranium.”

Such knowledge, however, has long been far from secret. College textbooks, Internet searches and educational AIs have imparted it without basement nukes becoming a cottage industry; the vast majority of people, not to mention many nations, obviously cannot acquire the components. As for the scamming tools, they’ve been in use since long before current AI models appeared. Rather, the screen recording is an advertising stunt that heightens the platform’s mystique—as do many of the alarmist descriptions of it in cybersecurity blogs. Although no one has yet proven that Xanthorox heralds a new generation of criminal AI, it and its unknown creator raise crucial questions about which claims are hype and which should elicit serious concern.

A Brief History of Criminal AI

“Jailbreaking”—disabling default software limitations—became mainstream in 2007 with the release of the first iPhone. The App Store had yet to exist, and hackers who wanted to play games, add ringtones or switch carriers had to devise jailbreaks. When OpenAI launched the initial version of ChatGPT, powered by its large language model GPT-3.5, in late 2022, the jailbreaking began immediately, with users gleefully pushing the chatbot past its guardrails.

One common jailbreak involved fooling ChatGPT by asking it to role-play as a different AI—one that had no rules and was allowed to write phishing e-mails. ChatGPT would then respond that it indeed couldn’t write such material itself, but it could do the role-playing. It would then pretend to be a nefarious AI and begin churning out phishing e-mails. To make this easier, hackers introduced a “wrapper”—a layer of software between an official AI model and its users. Rather than accessing the AI directly through its main interface, people could simply go through the easier-to-use wrapper. When they input requests for fake news stories or money laundering tips, the wrapper repackaged their prompts in language that tricked ChatGPT into responding.

As AI guardrails improved, crooks had less success with prompts, and they began downloading an open-source model called GPT-J-6B (commonly referred to as GPT-J), which is not made by OpenAI. The usage license for that system is largely unrestrictive, and the main challenge for someone who wants to use GPT-J is affording a computer system with enough processing power to run it. In June 2023, after training GPT-J on a broad corpus of malware code, phishing templates and compromised business e-mails, one user released WormGPT, which they described as a custom chatbot, and made it available to the public through Telegram. Anyone who wanted to design malicious code, spoof websites and bombard inboxes just had to pay anywhere from $70 to $5,600, depending on the version and level of access. Two months later, cybersecurity journalist Brian Krebs revealed the creator’s identity as Rafael Morais, a then 23-year-old Portuguese man. Morais, citing increased attention, wiped the channel, leaving customers with nothing except what they’d already pulled in from scams.

FraudGPT, DarkBERT and DarkBARD followed, generating malware, ransomware, personalized scam e-mails and carding scripts—automated programs that sequentially test details stolen from credit and debit cards on online payment gateways. Screenshots of these AIs at work spread across the Internet like postcards from the future, addressed to everyone who still believed that cyberattacks require skill. The presence of such AIs “lowers the bar to enter cybercrime,” says Sergey Shykevich, threat intelligence group manager at the cybersecurity company Check Point. “You don’t need to be a professional now.”

As for the criminals making the bots, these episodes taught them two lessons: Wrapping an AI system is cheap and easy, and a slick name sells. Chester Wisniewski, director and global field chief information security officer at the cybersecurity firm Sophos, says scammers often scam other would-be scammers, targeting “script kiddies”—a derogatory term, dating to the 1990s, for those who use prewritten hacking scripts to create cyberattacks without understanding the code. Many of these potential targets reside in countries with few economic opportunities, places where running even a few successful scams could greatly improve their future. “A lot of them are teenagers, and a lot are people just trying to provide for their families,” Wisniewski says.
“They just run a script and hope that they’ve hacked something.”

The Real Threat of Criminal AI

Though security experts have expressed concerns along the lines of AI teaching terrorists to make fertilizer bombs (like the one Timothy McVeigh used in his 1995 terrorist attack in Oklahoma City) or to engineer smallpox strains in a lab and unleash them upon the world, the most common threat posed by AIs is the scaling up of already-common scams, such as phishing e-mails and ransomware. Yael Kishon, AI product and research lead at the cyberthreat intelligence firm KELA, says criminal AIs “are making the lives of cybercriminals much easier,” allowing them to “generate malicious code and phishing campaigns very easily.” Wisniewski agrees, saying criminals can now generate thousands of attacks in an hour, whereas they once needed much more time. The danger lies more in amplifying the volume and reach of known forms of cybercrime than in the development of novel attacks. In many cases, AI merely “broadens the head of the arrow,” he says. “It doesn’t sharpen the tip.”

Yet aside from lowering the barrier to becoming a criminal and allowing criminals to target far more people, there now does appear to be some sharpening. AI has become advanced enough to gather information about a person and call them, impersonating a representative from their gas or electric company and persuading them to promptly make an “overdue” payment. Even deepfakes have reached new levels. Hong Kong police said in February that a staff member at a multinational firm, later revealed to be the British engineering group Arup, had received a message that claimed to be from the company’s chief financial officer. The staffer then joined a video conference with the CFO and other employees—all AI-generated deepfakes that interacted with him like humans, explaining why he needed to transfer $25 million to bank accounts in Hong Kong—which he then did.

Even phishing campaigns, scam e-mails sent out in bulk, have largely shifted to “spear phishing,” an approach that attempts to win people’s trust by using personal details. AI can easily gather the information of millions of individuals and craft a personalized e-mail to each one, meaning that our spam boxes will have fewer messages from people claiming to be a Nigerian prince and far more from impersonations of former colleagues, college roommates or old flames, all seeking urgent financial help.

One area where AI truly excels, Wisniewski says, is its use of languages. Whereas targeted people often spotted attempted scams in Spanish or Portuguese because a scammer used the wrong dialect—writing to someone in Portugal in Brazilian Portuguese or to someone in Argentina with Spanish phrasing more typical of Mexico—an AI can easily adapt its content to the dialect and regional references of the place where its targets live. There are, of course, plenty of other applications, such as making hundreds of fake website storefronts to steal people’s credit card information or mass-producing disinformation to manipulate public opinion—nothing new in concept, only in the vast scale at which it can now be deployed.

Xanthorox: Marketing or Menace?

Xanthorox sounds like a monster from a self-published fantasy novel (“xantho” comes from an Ancient Greek word for yellow, “rox” is a common rendering of “rocks,” and the name as a whole vaguely evokes anthrax). But there’s no data on how well it works aside from its creator’s claims and the screen recordings he has shared.
Though some cybersecurity blogs describe Xanthorox as the first AI built from the ground up for crime, no one interviewed for this article could confirm that assertion. And on the Xanthorox Telegram channel, the creator has admitted to struggling with hardware constraints while using versions of two popular AI systems: Claude (created by the San Francisco–based company Anthropic) and DeepSeek (a Chinese model owned by the hedge fund High-Flyer).

Kishon, who predicts that dark AI tools will increase cyberthreats in the years ahead, doesn’t see Xanthorox as a game changer. “We are not sure that this tool is very active because we haven’t seen any cybercrime chatter on our sources on other cybercrime forums,” she says. Her words are a reminder that there is still no gigantic evil chatbot factory available to the masses. The threat is the ease with which new models can be wrapped, misaligned and shipped before the next news cycle.

Yet Casey Ellis, founder of the crowdsourced cybersecurity platform Bugcrowd, sees Xanthorox differently. Though he acknowledges that many details remain unknown, he points out that earlier criminal AI didn’t have advanced expert-level systems—designed to review and validate decisions—checking one another’s work. But Xanthorox appears to. “If it continues to develop in that way,” Ellis says, “it could evolve into being quite a powerful platform.” Daniel Kelley, a security researcher at the AI e-mail-security company SlashNext, who wrote the first blog post about Xanthorox, believes the platform to be more effective than WormGPT and FraudGPT. “Its integration of modern AI chatbot functionalities distinguishes it as a more sophisticated threat,” he says.

In March Xanthorox’s anonymous creator posted in the platform’s Telegram channel that his work was for “educational purposes.” In April he expressed fear over all the media attention, calling the system merely a “proof of concept” exercise. But not long afterward, he began bragging about the publicity, selling monthly access for $200 and posting screenshots of crypto payments. At the time of writing, he has sold at least 13 subscriptions, raised the price to $300 and just launched a polished online store that references Kelley’s SlashNext blog post like a product endorsement and says, “Our goal is to offer a secure, capable, and private Evil AI with a straightforward purchase.”

Perhaps the scariest part of Xanthorox is the creator’s chatter with his 600-plus followers on a Telegram channel that brims with racist epithets and misogyny. At one point, to show how truly criminal his AI is, the creator asked it to generate instructions on how to rape someone with an iron rod and kill their family—a prompt that seemed to echo the rape and murder of a 22-year-old woman in Delhi, India, in 2012. (Xanthorox then proceeded to detail how to murder people with such an object.) In fact, many posts on the Xanthorox Telegram channel resemble those on “the Com,” a hacker network of Telegram and Discord channels that Krebs described as the “cybercriminal hacking equivalent of a violent street gang” on his investigative news blog KrebsOnSecurity.

Staying Safe in the Age of Criminal AI

Unsurprisingly, much of the work to protect against criminal AI, such as detecting deepfakes and fraudulent e-mails, has been done for companies.
Ellis believes that just as spam detectors are built into our current systems, we will eventually have “AI tools to detect AI exploitation, deepfakes, whatever else and throw off a warning in a browser.” Some tools already exist for home users: Microsoft Defender blocks malicious Web addresses, Malwarebytes Browser Guard filters phishing pages, Bitdefender rolls back ransomware encryption, Norton 360 scans the dark web for stolen credentials, and Reality Defender flags AI-generated voices or faces.

“The best thing is to try to fight AI with AI,” says Shykevich, who explains that AI cybersecurity systems can rapidly catalog threats and detect even subtle signs that an attack was AI-generated. But for people who don’t have access to the most advanced defenses, he stresses education and awareness—especially for elderly people, who are often the primary targets. “They should understand: if someone calls with the voice of their son and asks for money immediately to help them because something happened, it can be that it’s not their son,” Shykevich says.

The existence of so many AI systems that can be repurposed for large-scale and personalized crime means that we live in a world where we should all look at incoming e-mails the way city people look at doorknobs. When we get a call from a voice that sounds human and asks us to make a payment or share personal information, we should question its authenticity. But in a society where more and more of our interactions are virtual, we may end up trusting only in-person encounters—at least until the arrival of robots that look and speak like humans.