In motion to dismiss, chatbot platform Character AI claims it is protected by the First Amendment
Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide, allegedly after becoming hooked on the company's technology.

In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son developed an emotional attachment to a chatbot on Character AI, "Dany," which he texted constantly, to the point where he began to pull away from the real world.

Following Setzer's death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.

In the motion to dismiss, counsel for Character AI asserts that the platform is protected against liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI's legal justifications may change as the case proceeds. But the motion possibly hints at early elements of Character AI's defense.

"The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide," the filing reads. "The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech, whether a conversation with an AI chatbot or an interaction with a video game character, does not change the First Amendment analysis."

To be clear, Character AI's counsel isn't asserting the company's First Amendment rights. Rather, the motion argues that Character AI's users would have their First Amendment rights violated should the lawsuit against the platform succeed.

The motion doesn't address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law's authors have implied that Section 230 doesn't protect output from AI like Character AI's chatbots, but it's far from a settled legal matter.

Counsel for Character AI also claims that Garcia's real intention is to "shut down" Character AI and prompt legislation regulating technologies like it. Should the plaintiffs be successful, it would have a "chilling effect" on both Character AI and the entire nascent generative AI industry, counsel for the platform says.

"Apart from counsel's stated intention to 'shut down' Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform," the filing reads. "These changes would radically restrict the ability of Character AI's millions of users to generate and participate in conversations with characters."

The lawsuit, which also names Character AI's corporate benefactor Alphabet as a defendant, is but one of several lawsuits that Character AI is facing relating to how minors interact with the AI-generated content on its platform.
Other suits allege that Character AI exposed a 9-year-old to "hypersexualized content" and promoted self-harm to a 17-year-old user.

In December, Texas Attorney General Ken Paxton announced he was launching an investigation into Character AI and 14 other tech firms over alleged violations of the state's online privacy and safety laws for children. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm," said Paxton in a press release.

Character AI is part of a booming industry of AI companionship apps, the mental health effects of which are largely unstudied. Some experts have expressed concerns that these apps could exacerbate feelings of loneliness and anxiety.

Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to "reverse acquihire," has claimed that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.

Character AI has gone through a number of personnel changes after Shazeer and the company's other co-founder, Daniel De Freitas, left for Google. The platform hired a former YouTube exec, Erin Teague, as chief product officer, and named Dominic Perella, who was Character AI's general counsel, interim CEO.

Character AI recently began testing games on the web in an effort to boost user engagement and retention.