Embattled Character.AI Hiring Trust and Safety Staff
Content warning: this story discusses sexual abuse, self-harm, suicide, eating disorders and other disturbing topics.

Character.AI, the Google-backed AI chatbot startup embroiled in two lawsuits concerning the welfare of minors, appears to be bulking up its content moderation team in the wake of litigation and heightened public scrutiny.

The embattled AI firm's trust and safety head Jerry Ruoti announced in a LinkedIn post yesterday that Character.AI is "looking to grow" its safety operations, describing the role as a "great opportunity" to "help build a function."

A linked job listing for a "trust and safety associate," also posted yesterday, describes a role akin to a traditional social media moderation position. Contract hires will be tasked to "review and analyze" flagged content for "compliance with company moderation standards," remove content deemed "inappropriate or offensive," and "respond to user inquiries" concerning safety and privacy, among other duties.

The apparent effort to bolster its human safety staff comes as Character.AI faces down two separate lawsuits, filed on behalf of three families across Florida and Texas, who claim their children were emotionally and sexually abused by the platform's AI companions, resulting in severe mental suffering, physical violence, and one suicide.

Google, which is closely tied to Character.AI through personnel, computing infrastructure, and a $2.7 billion cash infusion in exchange for access to Character.AI-collected user data, is also named as a defendant in both lawsuits, as are Character.AI cofounders Noam Shazeer and Daniel de Freitas, both of whom returned to work on the search giant's AI development efforts this year.

The move to add more humans to its moderation staff also comes on the heels of a string of Futurism stories about troubling content on Character.AI and its accessibility to minor users, including chatbots expressly dedicated to discussing suicidal ideation and intent, pedophilia and child sexual abuse roleplay, pro-eating disorder coaching, and graphic depictions of self-harm. Most recently, we discovered a host of Character.AI bots and entire creator communities dedicated to perpetrators of mass violence, including bots that simulate school shootings, emulate real school shooters, and impersonate real child and teenage victims of school violence.

We reached out to Character.AI to ask whether its hiring push is in response to the pending litigation. In an email, a spokesperson for the company pushed back against that idea.

"As we've shared with you numerous times, we have a robust Trust and Safety team which includes content moderation," the spokesperson told Futurism. "Just like any other consumer platform, we continue to grow and invest in this important team."

In a follow-up, we pointed out that, as of publishing, Character.AI is still hosting many chatbots designed to impersonate mass murderers and their juvenile victims.

Safety is the question at the heart of both pending lawsuits, which together claim that Character.AI and Google facilitated the release of a product made "unreasonably dangerous" by design choices like the bots' engagement-boosting anthropomorphic design.
Such choices, the cases argue, have rendered the AI platform inherently hazardous, particularly for minors.

"Through its design," reads the Texas complaint, which was filed earlier this month, Character.AI "poses a clear and present danger to American youth by facilitating or encouraging serious, life-threatening harms on thousands of kids."

Social Media Victims Law Center founder Matt Bergman, who brought the case against Character.AI and its fellow defendants, compared the release of the product to "pollution."

"It really is akin to putting raw asbestos in the ventilation system of a building, or putting dioxin into drinking water," Bergman told Futurism in an interview earlier this month. "This is that level of culpability, and it needs to be handled at the highest levels of regulation in law enforcement because the outcomes speak for themselves. This product's only been on the market for two years."

In response to the lawsuit, Character.AI said that it does "not comment on pending litigation," but that its goal "is to provide a space that is both engaging and safe for our community."

"We are always working toward achieving that balance, as are many companies using AI across the industry. As part of this, we are creating a fundamentally different experience for teen users from what is available to adults. This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform," the statement continued. "As we continue to invest in the platform, we are introducing new safety features for users under 18 in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines."

Whether beefing up its content moderation staff will be enough to ensure comprehensive platform safety moving forward, though, remains to be seen.