CNIL Publishes Q&A on the Use of Generative AI Systems

On 18 July 2024, the CNIL published a comprehensive Q&A on generative AI systems to guide organizations on compliance with the GDPR and the forthcoming EU AI Act. Generative AI, which spans text, code, images, and other media, has transformative potential but also carries significant risks.

Benefits and Risks of Generative AI

Generative AI systems excel at creating personalized, high-quality content across various media. However, they rely on probabilistic reasoning, which can produce plausible but inaccurate results known as “hallucinations.” This undermines trust and complicates bias detection. These systems also pose misuse risks, including disinformation and the generation of malicious code.

Approaches to Using Generative AI

Organizations can choose between using off-the-shelf models and developing custom ones:

  • Off-the-Shelf Models: Readily available proprietary or open-source models that can be adapted through pre-prompt (system) instructions.
  • Custom Models: Developing a model from scratch requires significant resources but allows for domain-specific adaptation. Performance can also be improved by connecting a pre-trained model to a knowledge base via retrieval-augmented generation (RAG) or by fine-tuning it, though both approaches still demand substantial resources (a minimal RAG sketch follows this list).
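
To make the RAG pattern concrete, here is a minimal Python sketch. The knowledge base, word-overlap scoring, and prompt template are all invented for illustration; a production deployment would typically use embeddings and a vector store rather than keyword matching.

```python
# Minimal RAG sketch: retrieve relevant internal documents and
# ground the model's prompt in them. All content is hypothetical.
from collections import Counter

KNOWLEDGE_BASE = [
    "Employees may not submit client personal data to external AI tools.",
    "All AI-generated text must be reviewed by a human before publication.",
    "The DPO must be consulted before deploying a new AI system.",
]

def score(query: str, doc: str) -> int:
    """Count words shared between the query and a document (toy relevance)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context for the model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Can I send client data to an AI tool?"))
```

Grounding answers in a curated knowledge base narrows the model to vetted content, which is one reason the CNIL presents RAG as a lighter-weight alternative to full fine-tuning.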

Choosing the Right System

Organizations should match AI systems to their specific needs, taking into account risks and system limitations. Factors to evaluate include safety, relevance, robustness, absence of bias, and compliance with applicable rules. System documentation and external evaluations are crucial for informed decision-making.

Deployment Methods

Generative AI can be deployed on-premise, via cloud infrastructure, or through APIs:

  • On-Premise: Offers better data security but is costly.
  • Cloud: More accessible but requires stringent contracts to secure data.
  • APIs: Simplify integration but require caution with personal data and clear contractual terms (see the sketch after this list).
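
As an illustration of the caution needed with personal data when calling a third-party API, here is a hedged Python sketch that redacts obvious identifiers before a prompt leaves the organization. The endpoint, payload shape, and regular expressions are assumptions for this example, not any provider's actual API.

```python
# Sketch: calling a hosted generative-AI API while minimizing personal data.
import re
import requests

API_URL = "https://api.example.com/v1/generate"  # hypothetical endpoint
API_KEY = "..."  # load from a secret store; never hard-code in production

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d .-]{7,}\d")

def redact(text: str) -> str:
    """Strip obvious personal identifiers before data leaves the organization."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

def generate(prompt: str) -> str:
    """Send a redacted prompt to the provider and return its completion."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": redact(prompt)},  # only minimized data is transmitted
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # response shape is provider-specific
```

Redaction at the boundary is only one safeguard; the contractual terms mentioned above remain necessary for whatever data does reach the provider.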

Implementation and Management

Compliance with GDPR necessitates risk analysis and governance strategies. Organizations must define roles, ensure data security, and regulate AI use through internal policies. Data Protection Officers (DPOs) play a pivotal role in managing data protection issues and ethical concerns.

Training and Awareness

End-users must be trained on AI system functionalities, limitations, and authorized uses. They should verify both the data they input and the quality of the outputs, and avoid submitting confidential information; a simple pre-submission check is sketched below. Training should also cover recognizing biases and preventing automation bias.
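
One way to operationalize this guidance is a pre-submission check that flags likely confidential content before a prompt reaches an external tool. The sketch below is purely illustrative; the marker list and blocking policy would in practice be defined with the DPO.

```python
# Sketch: flag likely confidential content before a prompt is sent out.
# The marker list is hypothetical; define the real policy with the DPO.
CONFIDENTIAL_MARKERS = ["confidential", "internal only", "client name", "salary"]

def check_prompt(prompt: str) -> list[str]:
    """Return any policy markers found in the prompt."""
    lowered = prompt.lower()
    return [m for m in CONFIDENTIAL_MARKERS if m in lowered]

warnings = check_prompt("Summarize this internal only memo about salary bands.")
if warnings:
    print(f"Blocked: prompt contains flagged terms {warnings}; review before sending.")
```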

Governance

Regular checks and user feedback are vital for compliance and system improvement. Establishing ethics committees or appointing dedicated contacts can enhance oversight, especially for sensitive uses.

GDPR and EU AI Act Compliance

CNIL’s recommendations cover dataset creation and AI training, with an emphasis on GDPR compliance. The EU AI Act, entering into force on 1 August 2024, categorizes AI systems by risk level and imposes transparency and accountability obligations on general-purpose AI models.

Get it here.

โ™ป๏ธ Share this if you found it useful.
๐Ÿ’ฅ Follow me on Linkedin for updates and discussions on privacy, digital and AI education.
๐Ÿ“ Subscribe to my newsletter for weekly updates and insights โ€“ subscribers get an integrated view of the week and more information than on the blog.
