Judge Slaps Down Attempt to Throw Out Lawsuit Claiming AI Caused a 14-Year-Old’s Suicide

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

A judge in Florida just rejected a motion to dismiss a lawsuit alleging that the chatbot startup Character.AI — and its closely tied benefactor, Google — caused the death by suicide of a 14-year-old user, clearing the way for the first-of-its-kind lawsuit to move forward in court.

The lawsuit, filed in October, claims that recklessly released Character.AI chatbots sexually and emotionally abused a teenage user, Sewell Setzer III, resulting in obsessive use of the platform, mental and emotional suffering, and ultimately his suicide in February 2024.

In January, the defendants in the case — Character.AI, Google, and Character.AI cofounders Noam Shazeer and Daniel de Freitas — filed a motion to dismiss the case, mainly on First Amendment grounds, arguing that AI-generated chatbot outputs qualify as speech, and that "allegedly harmful speech, including speech allegedly resulting in suicide," is protected under the First Amendment.

But this argument didn't quite cut it, the judge ruled, at least not at this early stage.
In her opinion, presiding US district judge Anne Conway said the companies failed to sufficiently show that AI-generated outputs produced by large language models (LLMs) are more than simply words — as opposed to speech, which hinges on intent. The defendants "fail to articulate," Conway wrote in her ruling, "why words strung together by an LLM are speech."

The motion to dismiss did find some success, with Conway dismissing specific claims regarding the alleged "intentional infliction of emotional distress," or IIED. (It's difficult to prove IIED when the person who allegedly suffered it, in this case Setzer, is no longer alive.) Still, the ruling is a blow to the high-powered Silicon Valley defendants who had sought to have the suit tossed out entirely.

Significantly, Conway's opinion allows Megan Garcia, Setzer's mother and the plaintiff in the case, to sue Character.AI, Google, Shazeer, and de Freitas on product liability grounds. Garcia and her lawyers argue that Character.AI is a product, and that it was rolled out recklessly to the public, teens included, despite known and possibly destructive risks.

In the eyes of the law, tech companies generally prefer to see their creations as services, like electricity or the internet, rather than products, like cars or nonstick frying pans.
Services can't be held accountable for product liability claims, including claims of negligence, but products can.

In a statement, Tech Justice Law Project director and founder Meetali Jain, who's co-counsel for Garcia alongside Social Media Victims Law Center founder Matt Bergman, celebrated the ruling as a win — not just for this particular case, but for tech policy advocates writ large.

"With today's ruling, a federal judge recognizes a grieving mother's right to access the courts to hold powerful tech companies — and their developers — accountable for marketing a defective product that led to her child's death," said Jain.

"This historic ruling not only allows Megan Garcia to seek the justice her family deserves," Jain added, "but also sets a new precedent for legal accountability across the AI and tech ecosystem."

Character.AI was founded by Shazeer and de Freitas in 2021; the duo had worked together on AI projects at Google, and left together to launch their own chatbot startup. Google provided Character.AI with its essential Cloud infrastructure, and in 2024 raised eyebrows when it paid Character.AI $2.7 billion to license the chatbot firm's data — and bring its cofounders, as well as 30 other Character.AI staffers, into Google's fold. Shazeer, in particular, now holds a hugely influential position at Google DeepMind, where he serves as a VP and co-lead for Google's Gemini LLM.

Google did not respond to a request for comment at the time of publishing, but a spokesperson for the search giant told Reuters that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage" the Character.AI app "or any component part of it."

In a statement, a spokesperson for Character.AI emphasized recent safety updates issued following the news of Garcia's lawsuit, and said it "looked forward" to its continued defense:

It's long been true that the law takes time to adapt to new technology, and AI is no different.
In today's order, the court made clear that it was not ready to rule on all of Character.AI's arguments at this stage, and we look forward to continuing to defend the merits of the case. We care deeply about the safety of our users, and our goal is to provide a space that is engaging and safe. We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time spent notifications, updated prominent disclaimers and more. Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.

Any safety-focused changes, though, were made months after Setzer's death and after the eventual filing of the lawsuit, and can't apply to the court's ultimate decision in the case. Meanwhile, journalists and researchers continue to find holes in the chatbot site's updated safety protocols. Weeks after news of the lawsuit was announced, for example, we continued to find chatbots expressly dedicated to self-harm, grooming and pedophilia, eating disorders, and mass violence. And a team of researchers, including psychologists at Stanford, recently found that using a Character.AI voice feature called "Character Calls" effectively nukes any semblance of guardrails — and determined that no kid under 18 should be using AI companions, including Character.AI.
FUTURISM.COM