Privacy activists have filed a complaint against OpenAI, the maker of ChatGPT, alleging that the company’s chatbot regularly “hallucinates” false information about individuals, in violation of Europe’s General Data Protection Regulation (GDPR). The complaint, filed by the Vienna-based nonprofit noyb, accuses OpenAI of refusing to correct or erase false statements made about an unnamed public figure, and of instead offering only to block or filter results triggered by prompts such as the figure’s name.

The complaint increases pressure on tech firms to address the well-known but difficult-to-fix problem of AI hallucinations as they roll out AI tools to more customers. It also highlights the serious consequences of fabricating information about individuals: noyb argues that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing personal data.

AI’s Dark Side
Making up false information is problematic in itself, but when that information concerns individuals, the consequences can be serious.