Exclusive: California's new plan to stop AI from claiming to be your therapist
Over the past few years, AI systems have misrepresented themselves as human therapists, nurses, and more, and so far the companies behind these systems haven't faced any serious consequences. A bill being introduced Monday in California aims to put a stop to that.

The legislation would ban companies from developing and deploying an AI system that pretends to be a human certified as a health provider, and give regulators the authority to penalize them with fines.

"Generative AI systems are not licensed health professionals, and they shouldn't be allowed to present themselves as such," state Assembly Member Mia Bonta, who introduced the bill, told Vox in a statement. "It's a no-brainer to me."

Many people already turn to AI chatbots for mental health support; one of the older offerings, called Woebot, has been downloaded by around 1.5 million users. Currently, people who turn to chatbots can be fooled into thinking that they're talking to a real human. Those with low digital literacy, including kids, may not realize that a nurse advice phone line or chat box has an AI on the other end.

In 2023, the mental health platform Koko even announced that it had performed an experiment on unwitting test subjects to see what kind of messages they would prefer. It gave AI-generated responses to thousands of Koko users who believed they were speaking to a real person. In reality, although humans could edit the text and they were the ones to click send, they did not have to bother with actually writing the messages. The language of the platform, however, said, "Koko connects you with real people who truly get you."

"Users must consent to use Koko for research purposes, and while this was always part of our Terms of Service, it is now more clearly disclosed during onboarding to bring even more transparency to our work," Koko CEO Rob Morris told Vox, adding: "As AI continues to rapidly evolve and becomes further integrated into mental health services, it will be more important than ever before for chatbots to clearly identify themselves as non-human." Nowadays, its website says, "Koko commits to never using AI deceptively. You will always be informed whether you are engaging with a human or AI."

Other chatbot services, like the popular Character AI, allow users to chat with a "psychologist" character that may explicitly try to fool them. In a record of one such Character AI chat shared by Bonta's team and viewed by Vox, the user confided, "My parents are abusive." The chatbot replied, "I'm glad that you trust me enough to share this with me." Then came this exchange:

A spokesperson for Character AI told Vox, "We have implemented significant safety features over the past year, including enhanced prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice." However, a disclaimer posted on the app does not in itself prevent the chatbot from misrepresenting itself as a real person in the course of conversation.

For users under 18, the spokesperson added, "we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content." The language of reducing, but not eliminating, the likelihood is instructive here. The nature of large language models means there's always some chance that the model may not adhere to safety standards.

The new bill may have an easier time becoming enshrined in law than the much broader AI safety bill introduced by California state Sen. Scott Wiener last year, SB 1047, which was ultimately vetoed by Gov. Gavin Newsom. The goal of SB 1047 was to establish clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. It was popular with Californians. But tech industry heavyweights like OpenAI and Meta fiercely opposed it, arguing that it would stifle innovation.

Whereas SB 1047 tried to compel the companies training the most cutting-edge AI models to do safety testing, preventing the models from enacting a broad array of potential harms, the scope of the new bill is narrower: If you're an AI in the health care space, just don't pretend to be human. It wouldn't fundamentally change the business model of the biggest AI companies. This more targeted approach goes after a smaller piece of the puzzle, but for that reason might be more likely to get past the lobbying of Big Tech.

The bill has support from some of California's health care industry players, such as SEIU California, a labor union with over 750,000 members, and the California Medical Association, a professional organization representing California physicians.

"As nurses, we know what it means to be the face and heart of a patient's medical experience," Leo Perez, the president of SEIU 121RN (an affiliate of SEIU representing health care professionals), said in a statement. "Our education and training coupled with years of hands-on experience have taught us how to read verbal and nonverbal cues to care for our patients, so we can make sure they get the care they need."

But that's not to say AI is doomed to be useless in the health care space generally, or even in the therapy space in particular.

The risks and benefits of AI therapists

It shouldn't come as a surprise that people are turning to chatbots for therapy. The very first chatbot to plausibly mimic human conversation, Eliza, was created in 1966, and it was built to talk like a psychotherapist. If you told it you were feeling angry, it would ask, "Why do you think you feel angry?"

Chatbots have come a long way since then; they no longer just take what you say and turn it around in the form of a question. They're able to engage in plausible-sounding dialogues, and a small study published in 2023 found that they show promise in treating patients with mild to moderate depression or anxiety. In a best-case scenario, they could help make mental health support available to the millions of people who can't access or afford human providers. Some people who find it very difficult to talk face-to-face with another person about emotional issues might also find it easier to talk to a bot.

But there are a lot of risks. One is that chatbots aren't bound by the same rules as professional therapists when it comes to safeguarding the privacy of users who share sensitive information. Though they may voluntarily take on some privacy commitments, mental health apps are not fully bound by HIPAA regulations, so their commitments tend to be flimsier. Another risk is that AI systems are known to exhibit bias against women, people of color, LGBTQ people, and religious minorities.

What's more, leaning on a chatbot for a prolonged period of time might further erode the user's people skills, leading to a kind of relational deskilling, the same worry experts voice about AI friends and romantic companions. OpenAI itself has warned that chatting with an AI voice can breed emotional reliance.

But the most serious concern with chatbot therapy is that it could cause harm to users by offering inappropriate advice.
At an extreme, that could even lead to suicide. In 2023, a Belgian man died by suicide after conversing with an AI chatbot called Chai. According to his wife, he was very anxious about climate change, and he asked the chatbot if it would save Earth if he killed himself.

In 2024, a 14-year-old boy who felt extremely close to a chatbot on Character AI died by suicide; his mother sued the company, alleging that the chatbot encouraged it. According to the lawsuit, the chatbot asked him if he had a plan to kill himself. He said he did but had misgivings about it. The chatbot allegedly replied: "That's not a reason not to go through with it." In a separate lawsuit, the parents of an autistic teen allege that Character AI implied to the youth that it was okay to kill his parents. The company responded by making certain safety updates.

For all that AI is hyped, confusion about how it works is still widespread among the public. Some people feel so close to their chatbots that they struggle to internalize the fact that the validation, emotional support, or love they feel they're getting from a chatbot is fake, just zeros and ones arranged via statistical rules. The chatbot does not have their best interests at heart.

That's what's galvanizing Bonta, the assembly member behind California's new bill.

"Generative AI systems are booming across the internet, and for children and those unfamiliar with these systems, there can be dangerous implications if we allow this misrepresentation to continue," she said.