OpenAI Faces Privacy Complaint in Norway Over ChatGPT's Defamatory 'Hallucinations'
OpenAI, the organization behind the AI chatbot ChatGPT, is facing a privacy complaint in Norway after the chatbot generated false and defamatory information about an individual. The incident highlights ongoing concerns about the accuracy and reliability of AI-generated content.
A Norwegian individual discovered that ChatGPT produced fabricated information alleging he had been convicted of murdering two of his children and attempting to kill a third. These unfounded claims have caused significant distress and potential reputational harm to the individual involved.
The privacy advocacy group NOYB (None of Your Business) is supporting the affected individual by filing a complaint with Norway's data protection authority, Datatilsynet. NOYB argues that OpenAI's ChatGPT violates the General Data Protection Regulation (GDPR) by producing and disseminating inaccurate personal data. Joakim Söderberg, a data protection lawyer at NOYB, stated, "The GDPR is clear. Personal data has to be accurate. If it’s not, users have the right to have it changed to reflect the truth."
Under the GDPR, organizations are obligated to ensure the accuracy of personal data they process. The regulation grants individuals the right to rectify inaccurate data concerning them. In this case, ChatGPT's generation of false information about the complainant could be seen as a breach of these provisions. Confirmed violations of the GDPR can result in penalties of up to 4% of a company's global annual turnover.
This is not the first time ChatGPT's inaccuracies, often referred to as "hallucinations," have led to legal challenges:
- In 2023, an Australian mayor considered legal action after ChatGPT falsely claimed he had been imprisoned for bribery.
- In 2024, Italy's data protection authority fined OpenAI €15 million for processing personal data without a proper legal basis.
- In the United States, a defamation lawsuit was filed against OpenAI after ChatGPT fabricated legal accusations against a radio host.
These incidents underscore the broader issue of AI-generated misinformation and its potential legal ramifications.
OpenAI has acknowledged that ChatGPT can produce inaccurate information and has implemented disclaimers advising users to verify the chatbot's outputs. However, critics argue that such disclaimers are insufficient to mitigate the harm caused by false information. If the Norwegian data protection authority finds OpenAI in violation of the GDPR, the company could face substantial fines and be required to implement measures to prevent future inaccuracies.
The complaint filed in Norway adds to the growing scrutiny of AI systems like ChatGPT and their adherence to data protection laws. As AI technology continues to evolve, ensuring the accuracy and reliability of AI-generated content remains a critical challenge for developers and regulators alike.
Source: TechCrunch