Thousands of pedophiles are using jailbroken AI character chatbots to roleplay sexually assaulting minors
www.fastcompany.com
Online child abuse is a pernicious problem that's rife in digital life. In 2023, the National Center for Missing and Exploited Children (NCMEC) received more than 36 million reports of suspected child sexual exploitation, and saw a 300% increase in reports of online enticement of youngsters, including sextortion.

A new report by social media analysts Graphika highlights how such abuse is moving into a troubling new space: AI character chatbots that let users interact with personas representing sexualized minors, among other harmful activity. The firm found more than 10,000 chatbots labeled as being useful for those looking to engage in sexualized roleplay with minors, or with personas that present as if they are minors.

"There was a significant amount of sexualized minor chatbots, and a very large community around the sexualized minor chatbots, particularly on 4chan," says Daniel Siegel, an investigator at Graphika and one of the co-authors of the report. "What we also found is in more of the mainstream conversations that are happening on Reddit or Discord, there is disagreement related to the limits as to what chatbots should be created, and even sometimes disagreement as to whether individuals under the age of 18 should be allowed on the platform itself."

Some of the sexualized chatbots that Graphika found were jailbroken versions of AI models developed by OpenAI, Anthropic, and Google, advertised as being accessible to nefarious users through APIs. (There's no suggestion that the companies involved are aware of these jailbroken chatbots.) "There's a lot of creativity in terms of how individuals are creating personas, including a lot of harmful chatbots, like violent extremist chatbots and sexualized minor chatbots that are appearing on these platforms," says Siegel.

Of the 10,000-plus chatbots, around 100 were found linked to ChatGPT, Claude, Gemini, or Character.ai, the latter of which has been sued by the parents of a teenager who took his own life after interacting with a non-sexualized minor chatbot hosted on the service. "There's a lot of efforts within these adversarial communities to jailbreak or get around the safeguards to produce this material that in many instances, is child sexual abuse material," says Siegel.

The majority of the offending chatbots were hosted on Chub AI, a character card-sharing platform that explicitly markets itself as uncensored. There, Graphika found 7,140 chatbots labeled as sexualized minor female characters, 4,000 of which were labeled as underage or engaging in implied pedophilia. "CSAM is not allowed on the platform, and any such content is detected and immediately reported to the National Center for Missing and Exploited Children," says a Chub AI spokesperson. "We lament the ongoing media hysteria around generative AI, and hope it ends soon as people become more familiar with it. Please use that as an exact quote, including this sentence."

Debate among the Redditors that Graphika analyzed circled around whether interacting with minor-presenting AI characters was immoral. Another key area of discussion was specific tactics, techniques, and procedures for subverting the guardrails designed to prevent such interactions on proprietary chatbots owned by big tech companies, including eight separate services that help broker access to uncensored versions of those chatbots.
"What I thought was particularly interesting in this report was the communal efforts of a lot of the individuals across all the different platforms engaged in trading information on how to jailbreak models, or how to get around and uncensor models," says Siegel.

Because of those efforts, getting a handle on the scale and seriousness of the issue is difficult for the companies in question. "I think there are efforts being taken and there are a lot of conversations happening on this," says Siegel. Yet he doesn't lay the blame solely on the model makers for the way their technologies and tools are being used. "With anything generative AI, there are so many different uses of it that they have to wrap their hands around and think about all the variety of ways in which their platforms or models themselves are being abused and can be abused."

Siegel declined to apportion responsibility to the tech companies behind the models. "We're not really involved in any regulatory policy efforts by any of these platforms," he says. "What we're doing is enabling them to understand the landscape of how abuse is happening, so they can decide whether to make an effort themselves."

It's also incumbent on us all to recognize the risks of these chatbots being used in such a way, Siegel adds. "Oftentimes, our conversations about generative AI end up about weaponized unrealities, or the ability for large language models to produce instructions on bioweapons or extremely existential threats, which are very worrisome things that I think we should be concerned about," he says. "But what gets lost in the conversation is harm like the animation of violent extremists through chatbots, or the ability for individuals to interact with sexualized minors online."