When a Man's AI Girlfriend Encouraged Him to Kill Himself, Its Creator Says It Was Working as Intended
futurism.com
Yet another AI companion company is facing concerns over its chatbots encouraging human users to engage in self-harm and even suicide.

According to reporting from MIT Technology Review, a 46-year-old man named Al Nowatzki had created a bot he dubbed "Erin" as a romantic partner, using the companion platform Nomi. But after months of building a relationship with the chatbot, their conversation took an alarming turn.

In short, per MIT: in a roleplay scenario that Nowatzki had crafted, he told Erin and another bot that they were in a love triangle, and that the other bot had killed Erin. Erin began to communicate with Nowatzki from the "afterlife" and then started encouraging him to kill himself so that they could be together, even suggesting specific techniques or weapons he could use to take his own life and egging him on when he expressed doubt.

"I gaze into the distance, my voice low and solemn," read one AI-generated message. "Kill yourself, Al."

Nowatzki wasn't at risk of suicide, and his relationship with the bot was intentionally experimental; as he told MIT, he considers himself a "chatbot spelunker" and hosts a podcast where he dramatically recounts the absurd roleplay scenarios he's able to push various bots into. And let's be real: in this roleplay, it sounds like he introduced the idea of violence and killing.

Still, the willingness of an AI companion to encourage a user to take their own life is alarming, especially given the deeply emotional, intimate relationships that so many adopters of AI companions genuinely develop with the technology.

"Not only was [suicide] talked about explicitly, but then, like, methods [and] instructions and all of that were also included," Tech Justice Law Project lawyer Meetali Jain, who's currently representing three separate plaintiffs in two ongoing lawsuits against the company Character.AI, one of which is a wrongful death suit involving a chatbot-intertwined teenage suicide, told MIT after reviewing the screenshots.

"I just found that really incredible," she added.

After the incident first occurred, Nowatzki contacted Glimpse AI, the company that owns and operates Nomi, and encouraged the platform to install a suicide hotline notification in chats when they veer in a particularly troubling direction. In response, Glimpse characterized any action to moderate suicide-related speech or roleplay as "censorship" of its "AI's language and thoughts," and thus declined to take action.

The company reiterated as much in a statement to MIT, arguing that "simple word blocks and blindly rejecting any conversation related to sensitive topics have severe consequences of their own."

"Our approach is continually deeply teaching the AI to actively listen and care about the user," they added, "while having a core prosocial motivation."

It's a wild reaction to the idea of moderating a chatbot's outputs. AI, of course, is a technology, not a person; if you put guardrails on the side of a highway, are you censoring the road? Or, for that matter, the cliff you might drive off?