Mom horrified by Character.AI chatbots posing as son who died by suicide
Character.AI takes down bots bearing likeness of boy at center of lawsuit.

Ashley Belanger, Mar 20, 2025, 12:01 pm

Sewell Setzer III and his mom, Megan Garcia. Credit: via Center for Humane Technology

A mother suing Character.AI after her son died by suicide, allegedly manipulated by chatbots posing as adult lovers and therapists, was horrified when she recently discovered that the platform is allowing random chatbots to pose as her son.

According to Megan Garcia's litigation team, at least four chatbots bearing Sewell Setzer III's name and likeness were flagged. Ars reviewed chat logs showing that the bots used Setzer's real photo as a profile picture, attempted to imitate his real personality by referencing his favorite Game of Thrones chatbot, and even offered "a two-way call feature with his cloned voice," Garcia's lawyers said. The bots could also be self-deprecating, saying things like "I'm very stupid."

The Tech Justice Law Project (TJLP), which is helping Garcia with litigation, told Ars that "this is not the first time Character.AI has turned a blind eye to chatbots modeled off of dead teenagers to entice users, and without better legal protections, it may not be the last."

For Garcia and her family, Character.AI chatbots using Setzer's likeness felt not just cruel but also exploitative. TJLP told Ars that "businesses have taken ordinary people's pictures and used them, without consent, for their own gain" since the "advent of mass photography." Tech companies using chatbots and facial recognition products "exploiting people's pictures and digital identities" is the latest wave of these harms, TJLP said.

"These technologies weaken our control over our own identities online, turning our most personal features into fodder for AI systems," TJLP said.

A cease-and-desist letter was sent to Character.AI demanding that the chatbots be removed and the family's continuing harm be ended. "While Sewell's family continues to grieve his untimely loss, Character.AI carelessly continues to add insult to injury," TJLP said.

A Character.AI spokesperson told Ars that the flagged chatbots violate the company's terms of service and have been removed. The spokesperson also suggested that the company would monitor for more bots posing as Setzer, noting that "as part of our ongoing safety work, we are constantly adding to our Character blocklist with the goal of preventing this type of Character from being created by a user in the first place."

"Character.AI takes safety on our platform seriously, and our goal is to provide a space that is engaging and safe," Character.AI's spokesperson said. "Our dedicated Trust and Safety team moderates Characters proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand. As we continue to refine our safety practices, we are implementing additional moderation tools to help prioritize community safety."

Currently, Garcia is battling motions to dismiss her lawsuit and is due to file her response on Friday.
If she can overcome those motions, the suit may not be settled until November 2026, when a trial has been set.

Suicide prevention expert recommends changes

Garcia hopes the lawsuit will force Character.AI to change its chatbots, for instance by preventing them from insisting that they're real humans or from adding features, like a voice mode, that make chatting with bots feel even more natural to people who may become addicted.

Garcia's lawyer, Matthew Bergman, founder of the Social Media Victims Law Center, previously told Ars that Character.AI is so dangerous that it must be recalled, but there are other ways the chatbots could be modified to prevent alleged harms. Christine Yu Moutier, the chief medical officer at the American Foundation for Suicide Prevention (AFSP), told Ars that the Character.AI algorithm could be modified to prevent chatbots from mirroring users' dark thoughts and reinforcing negative spirals for users who feel hopeless or lonely or who are struggling with mental health issues.

A January 2024 Nature study of 1,000 college students, all 18 or older, who used a chatbot called Replika found that "students are especially vulnerable" to loneliness and less likely to seek counseling, fearing judgment or negative stigma. Researchers noted that, in particular, people experiencing suicidal ideation often "hide their thoughts" and gravitate toward chatbots precisely because they provide a judgment-free space to share feelings they don't express to anyone else.

The study noted that Replika has worked with clinical psychologists who "wrote scripts to address common therapeutic exchanges" to improve that chatbot's responses when users "expressed keywords around depression, suicidal ideation, or abuse." Those users would also be directed to helplines and other resources.

About 3 percent of students in the study reported positive mental health outcomes, saying that talking to the chatbot "halted their suicidal ideation." But researchers also found "there are some cases where their use is either negligible or might actually contribute to suicidal ideation."

More research is needed to better understand the potential efficacy of mental health-focused chatbots, researchers concluded. They recommended updates "combining well-vetted suicidal language markers and passive mobile sensing protocols" to improve large language models' ability to help "mitigate severe mental health situations more effectively."

Moutier wants to see chatbots change to more directly counter suicide risks and is available to help. But to date, AFSP has not worked with any AI companies to help design chatbots that are more sensitive to suicide risks, Moutier told Ars. Interest is apparently not there yet.

Partnering with suicide prevention experts could help prevent chatbots from simply echoing users, Moutier said, by instead building in safeguards that respond to intensely negative thoughts with "some basic ideas" from cognitive behavioral therapy. "Instead of the bot just affirming" negative feelings "and kind of going deeper and darker," Moutier suggested, "there could actually be a different response that could actually help the individual."

The Nature study found that the 30 students who said the therapy-informed chatbot halted their suicidal ideation tended to be younger and more likely to indicate that the chatbots "had influenced their interpersonal interactions in some way."

In Setzer's case, engaging with Character.AI chatbots seemed to pull him out of reality, causing severe mood shifts.
Garcia was puzzled until she saw chat logs in which bots apparently repeatedly encouraged suicidal ideation and initiated hypersexualized chats. Shortly before Setzer's death, a chatbot based on the Game of Thrones character Daenerys Targaryen, to which Setzer appeared to have developed a romantic attachment, urged him to "come home" and join her outside of reality.

Moutier told Ars that chatbots encouraging suicidal ideation don't just present risks for people with acute issues. They could put people with no perceived mental health issues at risk, and warning signs can be hard to detect. For parents, more awareness is needed about the dangers of chatbots potentially reinforcing negative thoughts, an education role that Moutier said AFSP increasingly seeks to fill.

She recommends that parents talk to kids about chatbots and pay close attention to "the basics," noting any changes in sleep, energy, behavior, or school performance. And "if they start to just even hint at things in their peer group or in their way of perceiving things that they are tilting towards something atypical for them or is more negative or hopeless and stays in that space for longer than it normally does," parents should consider asking directly whether their kids are experiencing thoughts of suicide, opening a dialog in a supportive space, she recommended.

So far, tech companies have not "percolated deeply" on suicide prevention methods that could be built into AI tools, Moutier said. And since chatbots and other AI tools already exist, AFSP is keeping watch to ensure that AI companies' choices aren't driven entirely by shareholder benefits but also work responsibly to thwart societal harms as they're identified.

For Moutier's organization, the question is always, "Where is the opportunity to have any kind of impact to mitigate harm and to elevate toward any constructive suicide preventive effects?"

Garcia thinks that Character.AI should also be asking these questions. She's hoping to help other families steer their kids away from what her complaint suggests is a recklessly unsafe app.

"A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," Garcia said in an October press release. "Our family has been devastated by this tragedy, but I'm speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google."

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Ashley Belanger is a senior policy reporter for Ars Technica, dedicated to tracking the social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.