Complaint Filed Against ChatGPT for False Murder Story
OpenAI is facing a complaint over its chatbot, ChatGPT, which fabricated a "horror story" falsely claiming that a Norwegian man had murdered his children, a privacy campaign group said on 20 March.
This latest incident adds to a growing list of complaints against the US tech company, with users reporting that ChatGPT has disseminated false information that can harm individuals’ reputations.
Vienna-based Noyb ("None of Your Business") said in a press release:
"OpenAI's highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it."
The group highlighted several instances where the chatbot has falsely accused people of serious crimes, including corruption, child abuse, and even murder, as in the case of Norwegian user Arve Hjalmar Holmen.
Norwegian Man Murdered His Children According to ChatGPT
When Holmen sought to determine what information ChatGPT had about him, he was faced with a fabricated and disturbing narrative, according to privacy advocacy group Noyb.
The chatbot falsely portrayed him as a convicted criminal, alleging that he had murdered two of his children and attempted to kill his third son.
Noyb noted:
"To make matters worse, the fake story included real elements of his personal life."
Holmen was quoted as saying:
"Some think that 'there is no smoke without fire'. The fact that someone could read this output and believe it is true, is what scares me the most."
Norwegian Man Wants Fake Murder Story’s Output to Be Deleted
In a complaint lodged with the Norwegian Data Protection Authority (Datatilsynet), privacy group Noyb is calling for OpenAI to delete the defamatory content generated by ChatGPT and to fine-tune its model to prevent future inaccuracies.
Noyb’s data protection lawyer, Joakim Soederberg, emphasized that under EU data protection regulations, personal data must be accurate.
"And if it's not, users have the right to have it changed to reflect the truth," he said, adding that showing ChatGPT users a "tiny" disclaimer that the chatbot can make mistakes "clearly isn't enough".
While an update to ChatGPT now prevents the chatbot from identifying Holmen as a murderer, Noyb asserts that the false information remains embedded in the system.
The case underscores the difficulty of addressing AI-generated misinformation and the need for robust data-accuracy safeguards.
This follows a previous complaint filed by Noyb in Austria, in which the group argued that ChatGPT frequently "hallucinates" erroneous answers that OpenAI has yet to correct.