OpenAI’s GDPR Compliance Faces New Challenge in Europe

It’s hardly shocking that LLM chatbots like OpenAI’s ChatGPT struggle with GDPR compliance, particularly with data subject requests such as rectification or erasure. Why? Because these models generate responses from patterns learned during training rather than by retrieving stored personal data, meaning they can invent details whenever those details seem statistically plausible to the algorithm. 💡🤖
Despite hopes that the Garante investigation in Italy and the EDPB task force might bring clarity, a year has passed with little progress. ⏳

🆕 However, noyb.eu just announced they filed a complaint in Austria, championing a public figure’s privacy rights. Here are some details (full info here):

  1. 📄 noyb accuses OpenAI of GDPR breaches for providing inaccurate information about individuals.
  2. 🚫 The complaint cites ChatGPT’s inaccurate processing of personal data, as well as OpenAI’s admission that it can neither correct false data nor disclose the sources of its training data.
  3. 🛑 The GDPR requires that personal data be accurate and grants data subjects rights to rectification and access, requirements OpenAI reportedly fails to meet. This is compounded by ChatGPT’s design: according to OpenAI itself, the model cannot verify the factual correctness of its generated outputs, which often leads to ‘hallucinated’ data.
  4. ❌ While OpenAI can filter or block data for certain prompts, it cannot rectify false personal data without restricting access to other information as well, an approach that falls short of the GDPR’s transparency and accuracy requirements.

🤔 Long gone are the days when “this is how it’s supposed to work” is a valid excuse—especially since this technology launched under existing GDPR regulations.

Will this be the turning point for regulatory action against non-compliance? Stay tuned! 🔍📅
