Irish DPC Highlights GDPR Challenges with AI and Data Protection

The Irish Data Protection Commission (DPC) has raised concerns about the growing use of Artificial Intelligence (AI), particularly Generative AI (Gen-AI) and Large Language Models (LLMs). These systems, now popular and widely accessible, use Natural Language Processing (NLP) to generate human-like language. They serve many purposes, from internet search and creative writing to assisting with software development and solving academic problems. AI systems also aid in document summarization, keyword extraction, and numerous industrial, financial, legal, educational, and medical tasks.

Data Processing and AI

The DPC emphasizes that the development and use of AI systems involve significant personal data processing, introducing associated risks. Key concerns include:

  • Training Data Use: Large datasets, often containing personal data, are used during AI training without user knowledge or consent.
  • Accuracy and Retention: Inaccurate or excessively retained personal data can lead to biased outputs and flawed decision-making.
  • Data Sharing: When AI models are shared with third parties, personal data embedded in them can be misused.
  • Bias in AI: Incomplete training data may introduce biases, affecting individuals’ rights.
  • Re-training Risks: Incorporating new personal data into models can expose users to new risks.

GDPR Compliance for Organizations

The DPC advises organizations to assess AI system risks and ensure GDPR compliance:

  • Risk Assessments: Organizations need to evaluate the risks associated with processing personal data in AI systems.
  • Data Flow Understanding: Understanding how personal data is used, processed, and stored by AI systems is crucial.
  • Safeguarding Data Subject Rights: Organizations must implement processes to facilitate data subject rights, including access, rectification, and erasure of personal data.
  • Third-Party Risks: When using third-party AI products, organizations must ensure that personal data is protected and understand the data processing practices of the AI provider.
  • Automated Decision-Making Risks: Purely automated decisions can have significant consequences for individuals, so human oversight is necessary to mitigate these risks.
  • Storage Limitation: Organizations must establish retention schedules to comply with the principle of ‘storage limitation.’
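To make the 'storage limitation' point above concrete, a retention schedule can be enforced in code. The following Python sketch is illustrative only (the data categories, field names, and retention periods are hypothetical, not taken from the DPC guidance); it flags records whose retention period has elapsed so they can be erased:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data category.
RETENTION = {
    "support_tickets": timedelta(days=365),
    "model_training_logs": timedelta(days=90),
}

def records_to_erase(records, now=None):
    """Return the records whose retention period has elapsed."""
    now = now or datetime.now(timezone.utc)
    expired = []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        if limit is not None and now - rec["created_at"] > limit:
            expired.append(rec)
    return expired
```

In practice such a check would run on a schedule and trigger actual deletion, with the retention periods documented in the organization's data retention policy.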

Responsibilities of AI Product Designers and Providers

AI product designers and providers must adhere to GDPR obligations as highlighted by the DPC:

  • Purpose and Goals Assessment: Evaluate if AI is the best method for processing personal data and consider less risky alternatives.
  • Data Collection Considerations: Even publicly accessible personal data falls under GDPR; ensure a legal basis for data processing.
  • Transparency: Inform data subjects about data processing practices and their rights.
  • Impact Assessments: Perform data protection impact assessments, especially for new technologies, combined datasets, or data related to minors.
  • Legal Agreements: Ensure a legal basis for data sharing agreements and fair processing.
  • Storage Limitation: Implement processes to meet the principle of ‘storage limitation.’
  • Security Measures: Protect AI models and personal data from unauthorized use and malicious activities.
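One concrete safeguard relevant to the data collection and security points above is redacting obvious personal identifiers before text enters a training corpus. This minimal regex-based Python sketch is illustrative only; the patterns are simple assumptions and real deployments would use dedicated PII-detection tooling:

```python
import re

# Illustrative patterns only; not a complete PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
}

def redact(text):
    """Replace matched identifiers with placeholder tokens like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redaction of this kind supports data minimization but does not by itself establish a legal basis for processing; it is one layer among the obligations listed above.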

The DPC emphasizes the importance of understanding the implications of using AI systems and ensuring GDPR compliance to protect personal data and uphold individuals’ rights.

♻️ Share this if you found it useful.
💥 Follow me on LinkedIn for updates and discussions on privacy, digital and AI education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.