A Norwegian citizen has accused the AI chatbot ChatGPT of spreading false and damaging information about him, the BBC reports. The man, Arne Holmen, says ChatGPT falsely stated that he murdered his two children and served a 21-year prison sentence, a claim that is entirely fabricated and harmful.

Holmen, who has never been involved in any criminal activity, described the false output as deeply damaging and defamatory. He pointed out that ChatGPT's response appeared to mix accurate details with false statements, creating a misleading story.
For instance, the chatbot correctly gave his children's approximate ages, indicating it had access to some genuine data about him, yet it then built those details into a completely fabricated narrative.
The digital rights organization Noyb has filed a complaint against OpenAI over the incident and called for an investigation. The group argues that the model violated Holmen's privacy by generating and spreading false personal information. His case is not unique: ChatGPT has reportedly produced inaccurate personal details in other instances, raising concerns about compliance with European data protection laws.
Prof Simone Stumpf, an AI researcher at the University of Glasgow, says that even the people who work behind the scenes on these types of models often cannot explain their output.
“Even if you are more involved in the development of these systems quite often, you do not know how they actually work, why they’re coming up with this particular information that they came up with,” she told the BBC.
Experts point to AI "hallucination", in which a model confidently generates plausible-sounding but false information, as a key factor behind such errors. Major tech companies such as Apple and Google have faced similar problems with their own AI tools.
Specialists warn that because large language models generate text through complex statistical processes, their decision-making remains difficult to interpret, leaving ongoing room for misinformation.