
Dad demands OpenAI delete ChatGPT's false claim that he murdered his kids
arstechnica.com
"Made-up horror story"

Blocking outputs isn't enough; dad wants OpenAI to delete the false information.

Ashley Belanger | Mar 20, 2025 12:01 am

Credit: AlexLinch | iStock / Getty Images Plus

A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.

According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as "a convicted criminal who murdered two of his children and attempted to murder his third son," a Noyb press release said.

ChatGPT's "made-up horror story" not only hallucinated events that never happened, but it also mixed "clearly identifiable personal data," such as the actual number and gender of Holmen's children and the name of his hometown, with the "fake information," Noyb's press release said.

ChatGPT hallucinating a "fake murderer and imprisonment" while including "real elements" of the Norwegian man's "personal life" allegedly violated the "data accuracy" requirements of the General Data Protection Regulation (GDPR), because Holmen allegedly could not easily correct the information, as the GDPR requires.

As Holmen saw it, his reputation remained on the line the longer the information was there, and, despite "tiny" disclaimers reminding ChatGPT users to verify outputs, there was no way to know how many people might have been exposed to the fake story and believed the information was accurate.

"Some think that there is no smoke without fire," Holmen said in the press release.
"The fact that someone could read this output and believe it is true, is what scares me the most."

Currently, ChatGPT does not repeat these horrible false claims about Holmen in outputs. A more recent update apparently fixed the issue, as "ChatGPT now also searches the Internet for information about people, when it is asked who they are," Noyb said. But because OpenAI had previously argued that it cannot correct information, only block it, the fake child murderer story is likely still included in ChatGPT's internal data. And unless Holmen can correct it, that's a violation of the GDPR, Noyb claims.

"While the damage done may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data," Noyb says.

OpenAI may not be able to easily delete the data

Holmen isn't the only ChatGPT user who has worried that the chatbot's hallucinations might ruin lives. Months after ChatGPT launched in late 2022, an Australian mayor threatened to sue for defamation after the chatbot falsely claimed he went to prison. Around the same time, ChatGPT linked a real law professor to a fake sexual harassment scandal, The Washington Post reported. A few months later, a radio host sued OpenAI over ChatGPT outputs describing fake embezzlement charges.

In some cases, OpenAI filtered the model to avoid generating harmful outputs but likely didn't delete the false information from the training data, Noyb suggested. But filtering outputs and throwing up disclaimers aren't enough to prevent reputational harm, Noyb data protection lawyer Kleanthi Sardeli alleged.

"Adding a disclaimer that you do not comply with the law does not make the law go away," Sardeli said. "AI companies can also not just 'hide' false information from users while they internally still process false information. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does.
If hallucinations are not stopped, people can easily suffer reputational damage."

Noyb thinks OpenAI must face pressure to try harder to prevent defamatory outputs. Filing a complaint with the Norwegian data authority Datatilsynet, Noyb is seeking an order requiring OpenAI "to delete the defamatory output and fine-tune its model to eliminate inaccurate results." Noyb also suggested imposing "an administrative fine to prevent similar violations in the future."

It's Noyb's second complaint challenging OpenAI's ChatGPT, following a complaint filed with an Austrian data protection authority last April. Increasingly, EU member states are scrutinizing AI companies, and OpenAI has remained a popular target. In 2023, the European Data Protection Board launched a ChatGPT task force investigating data privacy concerns and possible enforcement actions soon after ChatGPT began spouting falsehoods that users alleged were defamatory.

So far, OpenAI has faced consequences in at least one member state, where the outcome might bode well for Noyb's claims. In 2024, it was hit with a $16 million fine and a temporary ban in Italy following a data breach that leaked user conversations and payment information. To restore ChatGPT, OpenAI was ordered to make changes, including providing "a tool through which" users "can request and obtain the correction of their personal data if processed inaccurately in the generation of content."

If Norwegian data authorities similarly find that OpenAI doesn't allow users to correct their information, OpenAI could be forced to make more changes in the EU. The company might even need to overhaul ChatGPT's algorithm. According to Noyb, if ChatGPT feeds user data like the false child murderer claim "back into the system for training purposes," then there may be "no way for the individual to be absolutely sure [that problematic outputs] can be completely erased...
unless the entire AI model is retrained."

Ashley Belanger, Senior Policy Reporter

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.