CNIL’s Guidance on Deploying Generative AI

On 18 July 2024, CNIL published guidelines for organizations planning to deploy generative AI systems, emphasizing the importance of responsible and secure deployment to protect personal data. Generative AI, which can create many types of content, relies on large training datasets that often include personal data. CNIL advises organizations to start from a clearly defined need, supervise usage, acknowledge the systems' limitations, choose secure and robust deployment methods, and put governance in place to ensure GDPR compliance. Training and awareness-raising for end users on risks and prohibited uses are crucial to mitigating potential harms and ensuring proper handling of data.

CNIL Publishes Q&A on the Use of Generative AI Systems

On 18 July 2024, CNIL issued a Q&A document addressing the use of generative AI systems. The Q&A outlines the benefits, limitations, and compliance measures for deploying such systems, emphasizing adherence to the GDPR. Generative AI systems can produce diverse content but pose risks such as inaccuracies and potential biases. CNIL highlights the need for careful system selection, appropriate deployment methods, risk analysis, and end-user training. The Q&A also provides guidance on compliance with the upcoming EU AI Act, which mandates transparency and accountability in the use of AI systems.

Irish DPC Highlights GDPR Challenges with AI and Data Protection

On 18 July 2024, the Irish Data Protection Commission (DPC) highlighted important data protection issues with the growing use of Generative AI (Gen-AI) and Large Language Models (LLMs). These AI systems, which often process personal data during training and usage, raise concerns about data accuracy, retention, and potential biases. The DPC advises organizations using AI to ensure GDPR compliance by conducting risk assessments, understanding data flow, and safeguarding data subject rights. AI product designers must also consider GDPR obligations, transparency, and security to prevent misuse and protect personal data throughout the AI lifecycle.

Hamburg DPA Launches GDPR Discussion Paper on Personal Data in LLMs

The Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) has issued a discussion paper on the application of the GDPR to Large Language Models (LLMs). It takes the position that LLMs do not store personal data, so the mere storage of an LLM does not in itself constitute processing within the meaning of GDPR Article 4(2). However, any personal data processed within LLM-supported AI systems must comply with the GDPR, particularly with regard to outputs. The paper stresses that training LLMs on personal data must comply with data protection law, although violations during training do not render the model’s subsequent use in AI systems unlawful.
