The Privacy Explorer | Week 27

Welcome to The Privacy Explorer recap of privacy, digital and AI news for week 27 of 2024 (1-6 July)! 


🇪🇺 EU Commission’s and Microsoft’s lawsuits against the EDPS were published

The European Commission and Microsoft Ireland Operations Ltd have both initiated legal actions against the European Data Protection Supervisor (EDPS) following an investigation into the Commission’s use of Microsoft 365. This investigation resulted in a decision dated 8 March 2024, highlighting several breaches of EU data protection regulations.


EDPS Findings

In March 2024, the EDPS concluded that the European Commission violated various provisions of Regulation (EU) 2018/1725, the data protection law for EU institutions. The key findings included:

  • Inadequate Safeguards: The Commission did not ensure that personal data transferred outside the EU/EEA had protection equivalent to EU standards.
  • Lack of Specificity: The contract with Microsoft failed to specify the types of personal data collected and their explicit purposes.
  • Data Processing Issues: The Commission’s role as a data controller was questioned, especially regarding managing data processing operations and international transfers.


EDPS Orders

The EDPS imposed several corrective measures to address these issues:

  • Suspension of Data Flows: By 9 December 2024, the Commission must suspend all data flows resulting from its use of Microsoft 365 to recipients in countries outside the EU/EEA that are not covered by an adequacy decision.
  • Compliance Requirements: The Commission must align its data processing activities with EU data protection laws by the same deadline.


Legal Actions by the Commission and Microsoft

Both the Commission and Microsoft have challenged the EDPS’s decision:

  • European Commission Lawsuit: Filed on 17 May 2024 and published on 1 July (Case T-262/24);
  • Microsoft Ireland Operations Lawsuit: Filed on 21 May 2024 and published on 1 July (Case T-265/24).

Both lawsuits make similar arguments:

  • Misinterpretation of Regulations: The plaintiffs allege that the EDPS misapplied various provisions of Regulation (EU) 2018/1725.
  • Proportionality: They argue that the corrective measures imposed are disproportionate to the alleged violations.
  • Assumptions About Data Transfers: The Commission and Microsoft dispute the EDPS’s assumption that direct data transfers occurred between the Commission and Microsoft in the US.

The outcome of these legal battles will have significant implications for the use of cloud-based services (at least) by EU institutions. If the Court upholds the EDPS’s decision, it could result in substantial changes in how these institutions manage data processing and international data transfers.

👉Relevant links:

➡️Full EDPS decision here;

➡️EDPS press release here;

➡️Commission’s action to annul the decision here;

➡️Microsoft’s action to annul the decision here.

🇳🇱 Dutch Court Orders Criteo to Cease Tracking Cookies or Face Penalty

On 22 April 2024, the Amsterdam District Court ruled in favor of a data subject against Criteo SA, a Paris-based digital marketing company. The court found Criteo in violation of GDPR, specifically Articles 5(1)(a) and 6(1), for placing tracking cookies on the data subject’s devices without consent. Despite Criteo’s argument that third-party websites should obtain consent, the court held Criteo responsible.

Background and Previous Rulings

Criteo uses Real-Time Bidding (RTB) technology to place targeted ads by setting tracking cookies on user devices. In June 2023, the French Data Protection Authority (CNIL) fined Criteo €40 million for GDPR violations, primarily for failing to verify user consent. On 8 August 2023, the data subject notified Criteo of the unlawful tracking, citing violations of Dutch telecommunications law and the GDPR. Criteo claimed that third-party websites were responsible for obtaining consent.


Initial Court Decision and Appeal

In an October 2023 urgency procedure (“kort geding”), the Amsterdam District Court ordered Criteo to stop placing tracking cookies without consent, imposing a €250 daily penalty, up to €25,000. Criteo appealed, but the Amsterdam Court of Appeal upheld the decision in December 2023, confirming Criteo’s responsibility for ensuring consent.

Compliance and Further Legal Actions

In January 2024, Criteo voluntarily paid the €25,000 penalty but continued its tracking practices. On 17 April 2024, Criteo initiated a full trial to contest the urgency procedure’s judgment, arguing that third-party websites were responsible for consent and that compliance was technically impossible without halting all cookie placements.

Latest Ruling and Penalty

The court ruled that paying the maximum penalty did not absolve Criteo from compliance. It noted that Criteo’s business model seemed lucrative enough to make previous penalties ineffective. Therefore, the court imposed a new penalty of €500 per day, with a maximum of €50,000, until Criteo stops the unlawful cookie placements. The court dismissed Criteo’s arguments about technical impossibility, emphasizing the company’s duty to comply with data protection laws.

👉 Read more on GDPRhub here.

🇳🇴 Norwegian Court Upholds €6.4 Million Fine Against Grindr for GDPR Violations

On 1 July 2024, the Oslo District Court upheld the Norwegian Data Protection Authority’s decision to fine Grindr €6.4 million (NOK 65 million) for violations of the GDPR.

Context and Background

Grindr, a location-based social networking app for the LGBTQ community, was found to have shared personal data with third-party advertisers without a valid legal basis, breaching Article 6(1) GDPR. The shared data included GPS location, IP addresses, and advertising IDs, which could potentially identify users and expose their sexual orientation, a special category of data protected under Article 9(1) GDPR.

The Norwegian Consumer Council and noyb.eu lodged a complaint with the DPA in January 2020, alleging that Grindr’s data sharing practices for marketing purposes were unlawful. Following an investigation, the DPA imposed a fine of €6.4 million on Grindr in December 2021.


Legal Findings

The court confirmed that Grindr’s consent mechanism was invalid under GDPR. Users were forced to accept the privacy policy to use the app, without a genuine choice to refuse data sharing. The court ruled that:

  • Consent was not freely given, as users had no real option but to accept data sharing to use the app.
  • Grindr failed to use clear and accessible language to inform users about the extent and purpose of data sharing.
  • The app disclosed special categories of personal data, like sexual orientation, to advertisers without meeting the required exceptions under Article 9(2) GDPR.


Court’s Decision

The court found that Grindr’s actions constituted a serious GDPR violation, impacting a significant number of users. It determined that Grindr’s consent practices were inadequate and misleading. The court noted that Grindr continued these practices despite knowing they were non-compliant with GDPR, demonstrating a deliberate breach.


Fine and Implications

The €6.4 million fine represents about 30% of the maximum possible penalty under GDPR. The court emphasized the need for the fine to be dissuasive and preventive, ensuring Grindr’s future compliance with data protection regulations. The court also dismissed Grindr’s arguments that users could opt-out through phone settings or switch to a paid version of the app as insufficient for meeting GDPR consent requirements.

👉 Read more here.

🪙 EU Commission publishes preliminary findings on Meta’s ‘pay or consent’ model


The European Commission published its preliminary findings on Meta’s ‘pay or consent’ advertising model on 1 July 2024. This follows a formal investigation opened on 25 March 2024 to examine Meta’s compliance with Article 5(2) of the Digital Markets Act (DMA).

Findings of the Commission

The Commission’s preliminary view is that Meta’s model does not comply with Article 5(2) of the DMA. Specifically, the findings indicate:

  • Lack of Equivalence: Meta’s model does not offer an equivalent service that uses less personal data compared to the personalized ads-based experience.
  • Infringement on Consent Rights: The model does not allow users to exercise their rights to freely consent to the combination of their personal data.

Article 5(2) of the DMA mandates that gatekeepers must seek users’ consent before combining their personal data across different services. If users refuse consent, they must still be given access to a less personalized but equivalent alternative.

Next Steps

Meta must now respond in writing to the Commission’s preliminary findings. The Commission aims to conclude the investigation within 12 months of its opening in March 2024. Potential consequences for non-compliance include:

  • Fines: Up to 10% of Meta’s total worldwide turnover, increasing to 20% for repeated infringements.
  • Additional Remedies: In cases of systematic non-compliance, the Commission may enforce further measures such as selling parts of the business or banning acquisitions related to systemic breaches.

👉 Read the press release here.

🕵️ Spanish Public Prosecutor Investigates Meta’s AI Data Practices

On 4 July 2024, the Spanish Public Prosecutor’s Office opened an investigation into Meta’s potential misuse of Facebook and Instagram users’ personal data for artificial intelligence (AI) training. This move follows multiple legal complaints and criticisms from privacy experts regarding Meta’s data processing practices.

Context and Background 

Meta had planned to use user data to train unspecified AI technologies and informed users of this change, sparking widespread concern. In response to criticism, Meta announced on 14 June 2024 that it would delay the launch of its AI in the EU, initially set for mid-June.

Investigation Details 

The investigation, reported by Diario de Sevilla, aims to determine whether Meta violated users’ privacy rights. The Public Prosecutor’s Office, led by prosecutor Manuel Campoy, points to the difficulty users faced in opting out of data collection: Meta required users to find a hidden opt-out form and explain their reasons for objecting, a process criticized by the Spanish consumer organization OCU and highlighted in 11 complaints filed by the privacy organization noyb in early June.

The source article is here but it is paywalled.

Legal Framework and Potential Outcomes 

Meta claims a legitimate interest in using user data for AI training without explicit consent. However, the Spanish investigation will assess if this practice aligns with national and EU privacy laws. The Public Prosecutor’s Office suggests that Spanish law may permit restrictive measures to prevent privacy violations.

Broader Context 

Meta is facing multiple investigations in Europe over its data practices. The Commission is investigating Meta under both the Digital Services Act and the Digital Markets Act (links point to press releases). Preliminary findings in the DMA investigation were published on 1 July – see above in this newsletter. In edition #23 I covered Meta halting its AI rollout in Europe following the Irish DPC’s request. Meta is also facing backlash from data protection authorities over its ‘pay or consent’ model.

Meta is also under fire in Brazil – see the next story below.

🚫 Brazil DPA Orders Temporary Suspension Regarding Meta’s AI Data Use

On 2 July 2024, the Brazilian National Data Protection Authority (ANPD) announced Decision No. 20/2024/PR/ANPD, imposing a temporary ban on Meta Platforms Inc. from processing personal data for AI training. This move follows an ex officio investigation into Meta’s updated privacy policy, effective from 26 June 2024.

Background

Meta’s new privacy policy allows the use of publicly available information on Facebook, Messenger, and Instagram to train its AI systems. The ANPD initiated an investigation to examine compliance with the General Personal Data Protection Law (LGPD).

Findings

The ANPD’s preliminary findings indicated several potential LGPD violations:

  • Inadequate Legal Basis: Meta’s reliance on “legitimate interest” for data processing was deemed inappropriate.
  • Transparency Issues: Insufficient disclosure of the policy change and its implications, failing to adequately inform data subjects.
  • Rights Limitation: Unjustified obstacles to exercising data subjects’ rights, especially the right to object.
  • Children’s Data: Inadequate safeguards for processing personal data of children and adolescents.


Outcomes

Given these findings, the ANPD ordered the immediate suspension of:

  • Meta’s privacy policy regarding personal data use for AI training.
  • Meta’s data processing activities for AI training, enforceable by a daily fine of BRL 50,000 (approx. $8,870) for non-compliance.

The suspension remains in effect until further notice. The ANPD emphasized the preventive measure’s role in protecting data subjects’ rights and preventing serious, irreparable harm. Meta must provide documentation and a signed statement to the ANPD within five working days, confirming compliance with the suspension order.

On 10 July 2024, the ANPD confirmed its initial decision after receiving Meta’s request for reconsideration. The ANPD granted Meta an additional five days to demonstrate compliance, considering Meta’s technical difficulties in halting data processing activities.

🛑 Online Marketplace Vinted Fined for Shadow Banning Practices

The Lithuanian State Data Protection Inspectorate (SDPI) fined Vinted UAB €2,385,276 on 2 July 2024 for multiple GDPR violations, focusing on the company’s shadow banning practices and improper handling of data subject requests. This investigation originated from complaints in 2021 and 2022, forwarded by the French and Polish DPAs.

Context

Vinted UAB operates a popular online platform for second-hand clothing. Complaints alleged Vinted mishandled requests for data erasure under Article 17 GDPR and data access under Article 15 GDPR.

Key Findings

  • Shadow Banning Practices: Vinted excluded users without informing them, breaching GDPR principles of fairness and transparency.
  • Failure to Act on Erasure Requests: Vinted rejected requests without specifying the required grounds under Article 17(1) GDPR.
  • Lack of Transparency: Vinted did not provide reasons for inaction or information on data processing purposes.
  • Insufficient Technical and Organizational Measures: The company failed to demonstrate accountability or reasonable actions regarding data access requests.


Violations and Penalty

The SDPI identified violations of Articles 5(1)(a), 5(2), 12(1), and 12(4) GDPR. The fine was calculated considering the cross-border scope, the large number of affected data subjects, and the duration of infringements.


Interesting implications and the Digital Services Act (DSA)

To my knowledge, this is the first time shadow banning has been sanctioned under the GDPR, making it an important precedent: social media platforms like LinkedIn, Instagram and others have engaged in shadow banning over the years, seemingly without consequence. Notably, the new EU Digital Services Act (DSA) also addresses shadow banning, particularly for large online platforms. The DSA mandates transparency and due process for content moderation decisions, including shadow banning: platforms must provide clear reasons for visibility restrictions or account suspensions. In other words, if you are subject to shadow banning in the EU, you can complain to two authorities: the data protection authority under the GDPR, and the Digital Services Coordinator under the DSA.

👉 Read more about the Vinted decision here.

🍪 CNIL publishes study on the Future of Digital Advertising

On 4 July 2024, the CNIL published an economic study to anticipate the impact of the end of third-party cookies in Chrome, slated for early 2025, and other significant changes in digital advertising. Conducted by Télécom Paris researchers, the study involved interviews with 25 industry experts and focused on understanding market shifts and privacy implications.

Key Insights

  1. Market Shifts and Growth:
    • Digital advertising is projected to represent 65% of the ad market by 2030.
    • The introduction of Apple’s App Tracking Transparency (ATT) and the end of third-party cookies are significant disruptors.
  2. Advertising Alternatives:
    • The study identifies seven key alternatives:
      • Privacy Sandbox: Targets users by cohorts and interests based on browsing data.
      • Substitution Identifiers: Deterministic or probabilistic IDs.
      • Contextual Targeting: Uses keywords and natural language processing.
      • Cohort Targeting: Creates audience segments.
      • Retail Media: Ads within a distributor’s domain.
      • User-Account-Based Environments: Rely on first-party data.
      • Paywall Trackers: Monetize through subscription models.
    • None fully replicate third-party cookies’ functionality for targeting and measurement.
  3. Data Privacy and Market Impact:
    • The study notes that regulatory measures like GDPR did not significantly reduce tracking; industry-led changes are the main drivers.
    • Proprietary data will become more valuable, benefiting major platforms and “walled gardens.”
  4. Challenges for Open Internet Actors:
    • They must adopt multiple complementary solutions, leading to technical complexity and interoperability issues.
    • Smaller players need to pool data to compete effectively, highlighting the importance of cooperation and data sharing.
  5. Future Market Dynamics:
    • New entrants like ISPs will increase competition.
    • The advantage will shift to closed ecosystems and retail media.
    • Mutual dependence between large platforms and open internet content creators will persist, although data sharing will become more challenging.
  6. Regulatory and Cooperative Efforts:
    • CNIL and the Competition Authority must continue to monitor and adapt to market changes.
    • A joint declaration in December 2023 emphasizes their cooperative approach.
  7. Evolving Data Use:
    • The Privacy Sandbox aims to enhance privacy but won’t drastically reduce tracking.
    • Proprietary data and new data-sharing models will reshape market practices.
    • Retail media exemplifies the shift toward monetizing customer data for advertising.

The study underscores that current uncertainties stem from the economic strategies of digital giants “who have their own conception of privacy” and from evolving business models, rather than from regulatory changes. The requirement to ask for consent before placing trackers appears well integrated into the market: “the refusal rate of targeted advertising in the French Internet had stabilized at less than 40% in June 2022, and did not reveal any particular phenomena of consent fatigue”.

👉 Read the study insights here.

🇳🇴 Norwegian Government Reviews Personal Data Act After Meta Fine Case

On 2 July 2024, the Norwegian government announced a review of the Personal Data Act, following the Privacy Appeals Board’s 18 June decision invalidating a DPA fine against Meta for lack of a legal basis to impose daily fines on foreign companies. The Act, in effect since 20 July 2018, aims to protect individual privacy and incorporates the EU GDPR.

Minister of Justice and Public Security Emilie Enger Mehl emphasized the need to update the law to ensure the Norwegian Data Protection Authority (DPA) has adequate enforcement tools. The government seeks to address deficiencies and enhance privacy protections against major tech companies.

The Ministry has invited written submissions from the public, with a deadline of 1 October 2024, to gather a broad range of insights. The review will focus on Norwegian regulations but will also consider GDPR developments.

👉 Read more here.

🇺🇸 Florida, Oregon, and Texas privacy laws become effective

On 1 July 2024, the Florida Digital Bill of Rights (FDBR), the Oregon Consumer Privacy Act (OCPA), and the Texas Data Privacy and Security Act (TDPSA) took effect, joining state privacy legislation already in force in California, Colorado, Connecticut, Utah, and Virginia.

🇳🇱 Dutch DPA Publishes 2023 Annual Report

On 1 July 2024, the Dutch Data Protection Authority (AP) published its annual report for 2023. The report underscores the AP’s critical role in overseeing algorithmic and AI usage, enforcement actions, and collaborations with other regulatory bodies.

Supervision of Algorithms and AI 

The AP focused on the supervision of algorithms and AI, emphasizing the risks associated with their misuse. Government agencies, such as the Education Executive Agency and the Employee Insurance Agency, were found using algorithms for fraud detection without proper substantiation, leading to potential discrimination. The AP has been designated as the coordinating supervisor for algorithms and AI, and the Algorithm Coordination Directorate (DCA) was established within the AP in January 2023. The AP also requested clarification from OpenAI regarding ChatGPT’s data protection measures.

Enforcement Actions 

The AP highlighted key enforcement actions from 2023:

  • TikTok Fine: The Irish Data Protection Commission (DPC) imposed a fine of €345 million on TikTok, initiated by an AP request in 2021.
  • Uber Fine: A €10 million fine was levied against Uber in collaboration with the French data protection authority (CNIL) for unclear data retention practices.

The AP also reported on sector-specific enforcement measures, including personal data processing for police purposes, facial recognition technology, drone usage, and airline passenger data processing.

Algorithm and Discrimination Issues 

The AP’s report revealed persistent issues with algorithm and discrimination practices:

  • Childcare Benefits Scandal: Continued hearings by the parliamentary committee of inquiry into fraud policy and services highlighted ongoing issues, despite past findings of unlawful and discriminatory practices by the Tax and Customs Administration.
  • Government Algorithm Misuse: Instances of unsubstantiated and discriminatory algorithm usage by various government bodies, including the Education Executive Agency and several municipalities, were noted.

Independent vs. Hired Investigations 

The AP expressed concerns about the government’s increasing reliance on private investigation agencies for privacy violations. AP Chairman Aleid Wolfsen emphasized the need for strengthening the AP’s budget to at least €100 million to enhance its watchdog function, reduce reliance on private agencies, and ensure adequate investigation of privacy violations.

👉 Read the report here.

🇮🇹 Italian DPA (Garante) Publishes 2023 Annual Report

On 3 July 2024, the Italian Data Protection Authority, Garante Privacy, presented its 2023 activity report. This document outlines Garante’s significant interventions across various sectors, focusing on innovative areas such as digitalization and artificial intelligence (AI), while also maintaining efforts in ongoing issues like telemarketing and health data protection.

Key Interventions

  • Artificial Intelligence: Garante blocked ChatGPT due to unlawful personal data collection and lack of age verification, leading to subsequent transparency improvements. Investigations into the Replika chatbot and Sora AI model were also notable actions.
  • Digital Identity and Public Administration: Numerous interventions were made concerning centralized management of digital identity credentials, the Single Digital Gateway, and the National Registry of Residents.
  • Health Sector: Focused on Electronic Health Record 2.0 reforms and implementing a national telemedicine system.
  • Online Protection of Minors: Continued vigilance on social network age verification and combating revenge porn and cyberbullying through new protocols with other authorities.
  • Cybersecurity: Developed guidelines for password storage with the National Cybersecurity Agency (ACN) and addressed 2,037 data breaches, particularly in public administration and the private sector.

Statistics and Enforcement

  • Complaints and Inspections: Garante responded to 9,281 complaints and conducted 144 inspections.
  • Sanctions: Issued 394 corrective measures and imposed fines totalling about €8 million.
  • Engagement: Over 4.2 million visits to the DPA’s website (an interesting metric!) and significant international activity, including 235 meetings with global privacy bodies.


Broader Themes

  • Ethical Implications of Technology: Addressed ethical issues in technology use, particularly AI, and its impact on fundamental rights.
  • Digital Economy and Big Data: Focused on the data-driven economy, large platforms, and personal information monetization.
  • Consumer Protection: Targeted aggressive telemarketing practices with substantial sanctions and approved a new Code of Conduct for telemarketing.

👉 Read the report and press release here.

👇 That’s it for this edition. Thanks for reading, and subscribe to get the full text in a single email in your inbox! 👇

♻️ Share this if you found it useful.
💥 Follow me on Linkedin for updates and discussions on privacy, digital and AI education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
