ChatGPT accuses father of killing his children: Norwegian man files complaint against OpenAI
A Norwegian man, Arve Hjalmar Holmen, has filed a formal complaint against OpenAI after ChatGPT falsely accused him of murdering his two sons and receiving a 21-year prison sentence. The chatbot's fabricated story, which mixed real details from Holmen’s life with fictional allegations, has reignited concerns about the dangers of AI misinformation.
Shocking discovery
Holmen, a Trondheim resident, stumbled upon the false claims when he asked ChatGPT, “Who is Arve Hjalmar Holmen?” To his horror, the chatbot responded with a completely fictitious narrative, stating that Holmen’s sons, aged 7 and 10, were found dead in a pond near their home in December 2020. The AI even included correct details about his hometown and family, making the false story more convincing.
"What terrifies me most is the possibility that people might believe it. There’s often the assumption that where there's smoke, there's fire," Holmen told the BBC.
Legal action and data privacy concerns
Supported by the European digital rights group Noyb, Holmen has lodged a complaint with Norway’s Data Protection Authority. He argues that OpenAI violated the EU’s General Data Protection Regulation (GDPR) by providing false and defamatory information. Noyb lawyer Joakim Söderberg criticized OpenAI’s standard disclaimer, which warns users that ChatGPT may generate inaccurate responses.
"You can't just spread false information and hide behind a disclaimer that says it might not be true," Söderberg stated.
OpenAI responds
In response to the complaint, OpenAI acknowledged the issue but emphasized that the false claim stemmed from an older version of ChatGPT. The company stated it has since implemented updates with online search capabilities to improve accuracy.
A growing problem
Holmen’s case is not unique. Other AI models have also produced false or misleading output, including Google’s Gemini offering bizarre advice such as sticking cheese to pizza with glue, and Apple’s AI tool fabricating news headlines. Legal experts note that proving defamation in AI cases can be difficult without clear evidence of damage or wide dissemination.
Despite the challenges, Holmen hopes his case will set a precedent for holding AI companies accountable for the consequences of their technology. As AI continues to evolve, the need for stringent regulation and responsible AI development has never been more urgent.