The AI & Privacy Explorer | Weeks 28-29

Welcome to the AI & Privacy Explorer recap of digital and privacy news for weeks 28 and 29 of 2024 (8–21 July)!

This edition at a glance:

👈 Swipe left for a quick overview, then find 🔍 more details on each topic below.

🇪🇺 EU AI Act was published in the Official Journal

The European Union AI Act, by its full name Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence and amending [several other regulations] (Artificial Intelligence Act), was published in the Official Journal of the EU on 12 July 2024 (see here). For easier reading of the AI Act, however, I recommend the AI Act Explorer webpage published by the Future of Life Institute.


🎛️ EDPB issued a statement on the role of data protection authorities (DPAs) in enforcing the AI Act

The European Data Protection Board (EDPB) has adopted a statement emphasizing the need for Data Protection Authorities (DPAs) to act as Market Surveillance Authorities (MSAs) under the EU AI Act. DPAs, with their expertise in AI’s impact on fundamental rights, are well-suited for supervising AI systems, particularly high-risk ones related to law enforcement and democratic processes. The EDPB highlighted the importance of DPAs’ independence and existing experience in handling personal data protection, which would ensure better coordination, legal certainty, and enforcement of AI regulations. Member States must appoint MSAs by 2 August 2025.

Read more here.


🇫🇷 CNIL Issues Guidance on Interplay between the AI Act and GDPR

The CNIL has released a FAQ on the interplay between the newly enacted European AI Act and the GDPR. The AI Act, effective from 1 August 2024, complements, but does not replace, the GDPR, which continues to govern personal data processing. Both regulations overlap in scenarios where AI systems involve personal data, requiring compliance with both sets of rules.

Read more here.


🤖 Hamburg DPA Launches GDPR Discussion Paper on Personal Data in LLMs

The Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) has issued a discussion paper on the application of GDPR to Large Language Models (LLMs). It asserts that LLMs do not store personal data and thus do not constitute data processing under GDPR Article 4(2). However, any personal data processed within LLM-supported AI systems must comply with GDPR, particularly regarding output. The paper stresses that training LLMs with personal data must adhere to data protection laws, though violations during training do not impact the model’s lawful use in AI systems.

Read more here.


🤖 Irish DPC Highlights GDPR Challenges with AI and Data Protection

On 18 July 2024, the Irish Data Protection Commission (DPC) highlighted important data protection issues with the growing use of Generative AI (Gen-AI) and Large Language Models (LLMs). These AI systems, which often process personal data during training and usage, raise concerns about data accuracy, retention, and potential biases. The DPC advises organizations using AI to ensure GDPR compliance by conducting risk assessments, understanding data flow, and safeguarding data subject rights. AI product designers must also consider GDPR obligations, transparency, and security to prevent misuse and protect personal data throughout the AI lifecycle.

Read more here.


🇫🇷 CNIL publishes Q&A on the Use of Generative AI Systems

On 18 July 2024, CNIL issued a Q&A document addressing the use of generative AI systems. The FAQ outlines the benefits, limitations, and compliance measures for deploying these systems, emphasizing GDPR adherence. Generative AI systems generate diverse content but pose risks such as inaccuracies and potential biases. CNIL highlights the need for proper system selection, deployment methods, risk analysis, and end-user training. The Q&A also provides guidance on compliance with the upcoming EU AI Act, which mandates transparency and accountability in AI system usage.

Read more here.


🧠 CNIL’s Guidance on Deploying Generative AI

On 18 July 2024, CNIL published guidelines for organizations planning to deploy generative AI systems, emphasizing the importance of responsible and secure deployment to protect personal data. Generative AI, capable of creating various types of content, requires large datasets, often including personal data, for training. CNIL advises starting with specific needs, supervising usage, acknowledging system limitations, choosing secure and robust deployment methods, and implementing governance for GDPR compliance. Training and awareness for end users on risks and prohibited uses are crucial to mitigate potential harms and ensure proper handling of data.

Read more here.


📜 Baden-Württemberg DPA published a navigator for AI & data protection guidance (ONKIDA)

The LfDI Baden-Württemberg has published the ‘Orientation Aids Navigator AI & Data Protection’ (ONKIDA) on 19 July 2024. This tool helps organize and provide quick access to key regulatory documents on artificial intelligence and data protection, making it easier for authorities and companies to navigate GDPR compliance. ONKIDA includes a matrix that links central data protection requirements with guidance from various supervisory authorities, facilitating the understanding and application of these regulations in AI contexts.

Read more and get a copy here.


📣 Open Rights Group Complains to ICO About Meta’s AI Data Use

The Open Rights Group (ORG) has filed a complaint with the Information Commissioner’s Office (ICO) against Meta’s plans to use user data from Facebook and Instagram to develop AI, claiming “legitimate interests” as the legal basis. ORG argues that Meta’s approach violates UK GDPR by failing to guarantee users’ right to object and not providing clear, specific purposes for the data use. Despite Meta pausing these plans on 14 June after a GDPR complaint by noyb, ORG insists on a binding decision to protect the data of over 50 million UK residents.

Read more here.


⚗️ Singapore’s PDPC Issues Guide on Synthetic Data Generation

On 15 July 2024, the Personal Data Protection Commission (PDPC) of Singapore released a guide on generating synthetic data, emphasizing privacy-enhancing technologies. The guide outlines the benefits and applications of synthetic data, such as improving AI model training, data sharing, and software testing, while highlighting the importance of mitigating re-identification risks. It includes practical recommendations and case studies demonstrating synthetic data’s effectiveness in various sectors, including finance and healthcare.

Read more here.


🧩 Regulatory Mapping on AI in Latin America

Access Now has published the “Regulatory Mapping on Artificial Intelligence in Latin America,” a comprehensive report outlining AI governance across the region. This report, developed with TrustLaw’s pro bono legal network and supported by the Patrick J. McGovern Foundation, provides an in-depth analysis of AI definitions, soft law instruments, national strategies, and draft legislation in countries like Argentina, Brazil, and Mexico. It emphasizes human rights, transparency, and the need for region-specific AI policies, aiming to guide public policymakers towards effective AI regulation while promoting technical development and ethical standards.

Read more here.


📸 Polish DPA Publishes Guide On Protecting Children’s Privacy Online

On 8 July 2024, the Polish Data Protection Authority (UODO) and the Orange Foundation released a guide to help institutions and adults protect children’s privacy online. The guide highlights the dangers of sharing children’s images, such as cyberbullying, identity theft, and pedophilia. It emphasizes the ethical and legal responsibilities of adults in handling children’s images and offers practical advice on obtaining consent and mitigating risks. The guide aims to raise awareness and promote safer practices in the digital age.

Read more here.


📌 India’s Supreme Court Finds That Google Pin Sharing as Bail Condition Violates Privacy

The Supreme Court of India ruled that any bail condition allowing police or investigative agencies to track an accused’s movements using technology violates the constitutional right to privacy. The Court also emphasized that tracking through a pin drop on Google Maps is ineffective for real-time monitoring and, therefore, redundant.

Read more here.


📘 The German Federal Financial Supervisory Authority Issues Guidelines for DORA Implementation

On 8 July 2024, the German Federal Financial Supervisory Authority (BaFin) published guidelines for implementing the Digital Operational Resilience Act (DORA). These guidelines are intended to help supervised financial companies meet DORA requirements for ICT risk management and third-party ICT risk management. The guidelines cover governance, information risk management, IT operations, business continuity, project management, and operational security. They are aimed at banks and insurers under BaFin’s supervision and include minimum contract contents with ICT service providers. Effective 17 January 2025, these companies must comply with DORA’s comprehensive ICT risk management framework.

Read more here.


🏢 CNIL Launches Public Consultation on Workplace Diversity Measurement

On 9 July 2024, the French data protection authority (CNIL) launched a public consultation on a draft recommendation for conducting diversity measurement surveys in workplaces, open until 13 September 2024. The draft emphasizes the need for anonymity, voluntary participation, and data minimization. It recommends involving a trusted third party to manage sensitive data and ensure compliance with GDPR. The goal is to help organizations measure diversity while protecting individual privacy and ensuring adherence to legal standards, including the 2007 Constitutional Council decision prohibiting ethno-racial data collection.

Read more here.


🤥 Deceptive design under global spotlight – GPEN and country reports

On 9 July 2024, the Global Privacy Enforcement Network (GPEN) published a global report spotlighting deceptive design practices influencing privacy choices. In a comprehensive sweep of 1,000 websites and apps, involving 26 international data protection authorities, the GPEN found that 89% of privacy policies were excessively complex, and 42% of sites used manipulative language. Country-specific reports from Canada, Bermuda, Hong Kong, Germany, Guernsey, and Malta reveal widespread issues and underscore the need for transparent, user-friendly privacy practices worldwide.

Read more here.


🕵️‍♂️ AEPD Reports on Addictive Internet Patterns Impacting Minors

The Spanish Data Protection Agency (AEPD) has released a report on the influence of addictive internet patterns, focusing on the significant impact on minors. The report highlights how service providers often use deceptive and addictive design strategies to extend user engagement and collect more personal data. These practices particularly affect vulnerable groups like children and adolescents, influencing their preferences and development. The AEPD aims to have these patterns included in the EDPB guidelines, emphasizing the high risk to data protection rights in digital environments.

Read more here.


📉 Dutch Authority for Consumers and Markets Reports Widespread Non-Compliance with EU Digital Laws

The Dutch Authority for Consumers and Markets (ACM) revealed that many online service providers are not compliant with the European Digital Services Act (DSA) and the Platform-to-Business Regulation (P2B). A recent survey of 50 businesses found significant gaps in adherence to these laws, which include requirements for complaint handling, transparency, and content moderation. The ACM clarified that it is the designated regulator for both the DSA and the P2B Regulation, but that it can only enforce their provisions once the Dutch implementation laws enter into force, expected in 2025. Guidelines have been published to assist organizations in complying with these regulations.

Read more here.


🛑 EU Commission Sends Preliminary Findings to X for DSA Violations

The EU Commission has notified X of its preliminary findings regarding breaches of the Digital Services Act (DSA), focusing on dark patterns, advertising transparency, and data access for researchers. X’s practices around “verified accounts” mislead users, its ad repository lacks transparency, and it restricts researcher access to public data. These findings result from an in-depth investigation involving internal documents and expert interviews. If confirmed, X could face fines up to 6% of its global annual turnover and enhanced supervision to ensure compliance.

Read more here.


💰 Nigeria’s FCCPC Fines Meta and WhatsApp USD 220M for Privacy Violations

On 18 July 2024, the Federal Competition and Consumer Protection Commission (FCCPC) of Nigeria imposed a $220 million fine on Meta Platforms Inc. and WhatsApp LLC for breaching data privacy laws. The breaches include enforcing an updated privacy policy that violated Nigerian consumer rights, coercing users into accepting terms without proper consent, and sharing user data with third parties. Additionally, Meta was found to have discriminated against Nigerian users compared to European users by offering fewer protections. The fine must be paid within 60 days, along with a $35,000 reimbursement for the investigation costs.

Read more here.


🇮🇹 Italian Competition Authority Initiates Investigation into Google for Unfair Practices

The Italian Competition Authority has initiated an investigation against Google and its parent company Alphabet for potential misleading and aggressive commercial practices regarding user consent. The Authority alleges that Google’s consent requests for linking services lack adequate, complete, and clear information, potentially influencing users’ decisions on data usage. These practices might condition consumers’ freedom of choice, leading them to consent to data combination and cross-use across multiple services without full understanding.

Read more here.


📝 Turkish SCCs and BCRs Now in Effect

On 10 July 2024, the Personal Data Protection Authority (KVKK) in Turkey released documents on standard contractual clauses (SCCs) and Binding Corporate Rules (BCRs) to ensure appropriate safeguards for international data transfers. These documents follow the public consultation on draft versions and align with the amended Article 9 of the Law on Protection of Personal Data No. 6698. The publications include standard contractual clauses for four scenarios, similar to the EU ones, as well as binding corporate rules application forms and companion guides detailing essential issues for both data controllers and processors.

Read more here.


🧑‍💼 New DPO Regulation in Brazil

On 17 July 2024, the Brazilian data protection authority (ANPD) issued Resolution CD/ANPD No. 18, detailing the roles and responsibilities of Data Protection Officers (DPOs). This regulation outlines the formal appointment process, required qualifications, and the specific duties of DPOs. It mandates public disclosure of the DPO’s identity and contact information. Additionally, it specifies the duties of data controllers and processors in supporting DPOs and ensuring compliance with data protection laws.

Read more here.


📚 Danish DPA Reports Municipalities’ Steps Toward Compliance in the Google Chromebook Case

The Danish Data Protection Authority (Datatilsynet) reports that municipalities are taking steps to comply with the orders issued in January 2024. KL (Local Government Denmark), representing 52 municipalities, announced that from 1 August 2024 municipalities will stop sharing personal data with Google for purposes deemed unlawful by the Authority. Datatilsynet noted contract adjustments ensuring that data processing strictly follows municipal instructions, except as required by EU law. Allan Frank, IT security specialist at Datatilsynet, highlighted remaining issues. The Authority awaits an opinion from the European Data Protection Board on documentation of subprocessors before making a final assessment.

Read more here.


📢 noyb Files Complaint Against Xandr for GDPR Violations in RTB

On 9 July 2024, noyb (None Of Your Business, noyb.eu) filed a complaint with the Italian Data Protection Authority (Garante) against Xandr Inc. for alleged GDPR violations. noyb claims Xandr violated several GDPR articles by mishandling user data and failing to comply with access and erasure requests. Xandr, which operates a Real-Time Bidding platform, is accused of collecting sensitive personal information and failing to adequately identify and respond to data subjects. The complaint requests that the Garante enforce data protection rights, correct inaccurate profiles, and impose fines on Xandr.

Read more here.

👇 That’s it for this edition. Thanks for reading, and subscribe to get the full text in a single email in your inbox! 👇

♻️ Share this if you found it useful.
💥 Follow me on Linkedin for updates and discussions on privacy education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
