Character.AI steps up teen safety after bots allegedly caused suicide, self-harm
AI teenage wasteland?

Character.AI's new model for teens doesn't resolve all of parents' concerns.

Ashley Belanger | Dec 12, 2024, 4:15 pm

Credit: Marina Demidiuk | iStock / Getty Images Plus

Following a pair of lawsuits alleging that chatbots caused a teen boy's suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that's supposed to make their experiences with bots safer.

In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

C.AI said that "evolving the model experience" to reduce the likelihood of kids engaging in harmful chats, including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to all the kids whose families are suing, required tweaking both model inputs and outputs.

To stop chatbots from initiating and responding to harmful dialogs, C.AI added classifiers that should help it identify and filter out sensitive content from outputs. And to prevent kids from pushing bots to discuss sensitive topics, C.AI said that it had improved "detection, response, and intervention related to inputs from all users." That ideally includes blocking any sensitive content from appearing in the chat.

Perhaps most significantly, C.AI will now link kids to resources if they try to discuss suicide or self-harm, which C.AI had not done previously, frustrating the parents suing, who argue that this practice, common on social media platforms, should extend to chatbots.

Other teen safety features

In addition to creating the model just for teens, C.AI announced other safety features, including more robust parental controls rolling out early next year. Those controls would allow parents to track how much time kids are spending on C.AI and which bots they're interacting with most frequently, the blog said.

C.AI will also notify teens when they've spent an hour on the platform, which could help prevent kids from becoming addicted to the app, as parents suing have alleged. In one case, parents had to lock their son's iPad in a safe to keep him from using the app after bots allegedly repeatedly encouraged him to self-harm and even suggested murdering his parents. That teen has vowed to start using the app whenever he next has access, while his parents fear the bots' seeming influence may continue causing harm if he follows through on threats to run away.

Finally, C.AI has bowed to pressure from parents to make disclaimers more prominent on its platform, reminding users that bots are not real people and that "what the model says should be treated as fiction." That's likely a significant change for Megan Garcia, the mother whose son died by suicide after allegedly believing bots that made him feel it was the only way to join the chatbot world that had apparently estranged him from the real world.
New disclaimers will also make it clearer that any chatbots marked as "psychologist," "therapist," "doctor," or "other similar terms in their names" should not be relied on to give "any type of professional advice."

Some of the changes C.AI has made will impact all users, including improved detection, response, and intervention following sensitive user inputs. Adults can also customize the "time spent" notification feature to manage their own experience on the platform.

Teen safety updates don't resolve all parents' concerns

Parents suing are likely frustrated to see how fast C.AI could work to make the platform safer when it wanted to, rather than testing and rolling out a safer product from the start.

Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case, told Ars that "this is the second time that Character.AI has announced new safety features within 24 hours of a devastating story about the dangerous design of their product, underscoring their lack of seriousness in addressing these fundamental problems."

"Product safety shouldn't be a knee-jerk response to negative press; it should be built into the design and operation of a product, especially one marketed to young users," Carlton said. "Character.AI's proposed safety solutions are wholly insufficient for the problem at hand, and they fail to address the underlying design choices causing harm, such as the use of inappropriate training data or optimizing for anthropomorphic interactions."

In both lawsuits filed against C.AI, parents want to see the model destroyed, not evolved. That's because they not only consider the chats their kids experienced to be harmful but also believe it was unacceptable for C.AI to train its model on their kids' chats.

Because the model could never be fully cleansed of their data, and because C.AI allegedly fails to adequately age-gate (so it's currently unclear how many kids' data was used to train the AI model), they have asked courts to order C.AI to delete the model.

It's also likely that parents won't be satisfied by the separate teen model because they consider C.AI's age-verification method flawed.

Currently, the only way that C.AI age-gates the platform is by asking users to self-report their ages. For some kids on devices with strict parental controls, accessing the app might be more challenging, but other kids with fewer rules could seemingly access the adult model by lying about their ages. That's what happened in the case of one girl whose mother is suing after the girl started using C.AI when she was only 9, when it was supposedly only offered to users age 12 and up.

Ars was able to use the same email address to attempt to register as a 13-year-old, a 16-year-old, and an adult without any issue blocking re-tries.

C.AI's spokesperson told Ars that it's not supposed to work that way and reassured Ars that C.AI's trust and safety team would be notified.

"You must be 13 or older to create an account on Character.AI," C.AI's spokesperson said in a statement provided to Ars. "Users under 18 receive a different experience on the platform, including a more conservative model to reduce the likelihood of encountering sensitive or suggestive content. Age is self-reported, as is industry-standard across other platforms.
We have tools on the web and in the app preventing re-tries if someone fails the age gate."

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Ashley Belanger, Senior Policy Reporter

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.