CNIL’s Guidance on Deploying Generative AI

On 18 July 2024, the CNIL (France’s data protection authority) released initial guidelines to help organizations deploy generative AI systems responsibly, with a focus on data protection compliance. Generative AI creates content such as text, images, and code, and it often relies on large datasets that include personal data, which calls for measures to protect individuals’ rights.

Key Recommendations:

  1. Start with Specific Needs: Deploy AI systems only to address identified uses, avoiding deployments without a concrete purpose.
  2. Supervise Use: Define and enforce authorized and prohibited uses to mitigate risks, such as prohibiting the input of personal data and limiting the system’s role in decision-making (a minimal guardrail sketch follows this list).
  3. Acknowledge System Limitations: Understand the probabilistic nature of AI, which may produce plausible but incorrect results, and the need for critical verification of outputs.
  4. Choose Secure Deployment: Prefer local, secure systems or evaluate the security of external service providers. Ensure service providers do not misuse data provided to the AI.
  5. User Training and Awareness: Educate end users on the risks and limitations of AI, emphasizing the importance of verifying AI-generated content and prohibiting the input of sensitive data.
  6. Implement Governance: Ensure GDPR compliance by involving stakeholders like data protection officers and information systems managers from the beginning. Regularly review and update policies to address new risks and ethical concerns.
  7. Select Appropriate Systems: Match the AI system with specific needs, ensuring it is secure, robust, and free from biases. Evaluate system documentation and, if needed, request external evaluations.
  8. Deployment Mode: For non-confidential uses, cloud-based or API services may be acceptable with proper safeguards. For personal or sensitive data, “on-premise” solutions are preferred to minimize the risk of data extraction by third parties (a routing sketch follows this list).
  9. Continuous Monitoring: Establish an ethics committee or referent to oversee AI deployment, conduct regular audits, and collect user feedback to adapt to emerging risks and best practices.
  10. GDPR Compliance: Follow CNIL’s recommendations for AI development, including data protection impact assessments (DPIAs), and involve the Data Protection Officer in overseeing compliance. Ensure transparency with users regarding AI use.
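
To make the “supervise use” recommendation concrete, here is a minimal guardrail sketch: it screens a prompt for obvious personal-data patterns before the prompt reaches a generative AI service, and refuses to forward it otherwise. The regex patterns, function names, and the `call_generative_ai` placeholder are illustrative assumptions, not anything prescribed by the CNIL; a real deployment would rely on dedicated PII-detection and data loss prevention tooling, applied under the organization’s own usage policy.

```python
import re

# Illustrative patterns only (assumption): a production deployment should use
# dedicated PII-detection / DLP tooling rather than a short regex list.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone_fr": re.compile(r"(?:\+33[ .-]?|0)[1-9](?:[ .-]?\d{2}){4}"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{4}){2,7}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of personal-data patterns detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    """Block prompts that appear to contain personal data before they leave the organization."""
    findings = check_prompt(prompt)
    if findings:
        # Refuse and tell the user why, rather than silently forwarding the data.
        raise ValueError(f"Prompt blocked: possible personal data detected ({', '.join(findings)})")
    return call_generative_ai(prompt)

def call_generative_ai(prompt: str) -> str:
    # Hypothetical placeholder for the organization's approved AI endpoint.
    return "model response"
```

The “deployment mode” recommendation can likewise be expressed as a simple routing rule: requests that involve personal or confidential data stay on infrastructure the organization controls, while non-confidential requests may go to a vetted external API. The endpoints, labels, and classification hook below are hypothetical placeholders for illustration, not part of the CNIL guidance.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    endpoint: str  # hypothetical endpoint URLs, for illustration only

# Assumed deployment targets: an on-premise model and an external provider.
ON_PREMISE = Deployment("on-premise", "https://llm.internal.example/v1/generate")
EXTERNAL_API = Deployment("external-api", "https://api.provider.example/v1/generate")

def involves_personal_or_confidential_data(request: dict) -> bool:
    """Classification hook: in practice this would combine data-classification
    labels, the guardrail check above, and the organization's usage policy."""
    return request.get("data_classification") in {"personal", "confidential"}

def select_deployment(request: dict) -> Deployment:
    """Keep personal/confidential data in-house; allow vetted external services
    for non-confidential uses only."""
    if involves_personal_or_confidential_data(request):
        return ON_PREMISE
    return EXTERNAL_API
```

For instance, a request tagged `"personal"` would be routed to the on-premise deployment, and everything else to the external service.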
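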

The full guidance is available on the CNIL website.

โ™ป๏ธ Share this if you found it useful.
๐Ÿ’ฅ Follow me on Linkedin for updates and discussions on privacy, digital and AI education.
๐Ÿ“ Subscribe to my newsletter for weekly updates and insights โ€“ subscribers get an integrated view of the week and more information than on the blog.
