CNIL’s Recommendations on GDPR Compliance for AI Systems

On 7 June 2024, the CNIL published the English translation of its recommendations on aligning AI system development with GDPR requirements, first adopted in April following a public consultation. These guidelines aim to help developers and designers navigate the complexities of the GDPR while fostering innovation in AI.

Context and Scope

The recommendations address AI systems whose development involves the processing of personal data, typically for model training. This covers machine learning systems, general-purpose AI, and systems with continuous learning capabilities. The focus is on the development phase: system design, dataset creation, and model training.

Key Recommendations (“AI how-to Sheets” 1 to 7)

  1. Define an Objective:
    • AI systems must have a well-defined, legitimate objective from the start to limit unnecessary data processing. For example, an AI designed to analyze train occupancy must explicitly state this purpose.

  2. Determine Responsibilities:
    • Developers must identify their role under the GDPR, whether as data controllers, joint controllers, or processors. This classification determines their obligations and responsibilities.

  3. Establish a Legal Basis:
    • AI development involving personal data must have a clear legal basis such as consent, legitimate interest, or public interest. Each basis carries specific obligations and impacts individuals’ rights.

  4. Ensure Lawful Data Reuse:
    • Reusing personal data requires a legality check. For data the developer originally collected itself, a test of compatibility between the initial and new purposes is required. Publicly available data and third-party datasets must be scrutinized for lawful collection and GDPR compliance.

  5. Minimize Data Usage:
    • Only essential personal data should be collected and used, adhering to the principle of data minimization. This involves data cleaning, selecting relevant data, and implementing privacy-by-design measures.

  6. Set Data Retention Periods:
    • Personal data must not be kept indefinitely. Developers need to define retention periods aligned with the data’s purpose and inform data subjects accordingly. Data necessary for audits or bias checks may be retained longer with appropriate security measures.

  7. Conduct DPIAs:
    • Data Protection Impact Assessments (DPIAs) are crucial for identifying and mitigating the risks of AI systems that process personal data. High-risk systems, in particular those covered by the AI Act, must undergo a DPIA.

Alignment with the AI Act

The recommendations integrate requirements from the European AI Act, ensuring consistency in data protection. This includes defining roles within AI system development and addressing high-risk AI applications.

Practical Implementation

Developers are encouraged to:

  • Conduct pilot studies to validate design choices.
  • Consult ethics committees for guidance on privacy and ethical issues.
  • Regularly update and document datasets to ensure ongoing compliance.

👉 Read the press release here and the AI how-to sheets 1-7 (in English) here

👉 A new public consultation has been initiated with regard to further guidelines on AI (“AI how-to Sheets” 8 to 12), see my post here.

♻️ Share this if you found it useful.
💥 Follow me on LinkedIn for updates and discussions on privacy education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
