This edition at a glance:
US Federal Communications Commission (FCC) Imposes Heavy Fines on Major US Carriers for Illegal Data Sharing Practices
The Federal Communications Commission (FCC) has finalized hefty fines against AT&T, Verizon, Sprint, and T-Mobile for their improper handling and unauthorized sharing of customer location data. The fines total nearly $200 million, concluding an investigation initiated during the Trump administration.
- The investigation revealed that the carriers had shared user geolocation data with third-party aggregators without obtaining customer consent, violating privacy regulations.
- Although the carriers had committed to ending these practices following public exposure and congressional inquiries in 2018, the FCC found that they had continued to share data for up to a year after their public declarations.
- The specific fines imposed are $57 million for AT&T, nearly $47 million for Verizon, $12 million for Sprint, and $80 million for T-Mobile, which has since merged with Sprint.
- All carriers intend to appeal, challenging the FCC’s findings on both legal and factual grounds. AT&T cites the immediate actions it took to rectify third-party breaches and the beneficial nature of some location services, while Verizon and T-Mobile point to their quick responses to isolated abuses and the discontinuation of their location data-sharing programs.
- Senator Ron Wyden, who initially prompted the investigation, praised the FCC’s decision, emphasizing the importance of consumer privacy and the need for accountability.
🇦🇹 OpenAI’s GDPR Compliance Faces New Challenge In Europe
noyb.eu has filed a formal complaint against OpenAI with the Austrian Data Protection Authority on behalf of a public figure, alleging multiple breaches of the GDPR by ChatGPT. Key elements of the complaint include:
- Inherent Inaccuracies: ChatGPT’s design leads to frequent inaccuracies in personal data, which OpenAI has not been able to address effectively.
- Rectification and Access Rights: GDPR demands accurate personal data handling, including rights to rectification and access, which OpenAI reportedly fails to meet. This is compounded by ChatGPT’s design, which, according to OpenAI, lacks the ability to verify the factual correctness of its generated outputs, often leading to ‘hallucinated’ data. Additionally, despite filtering options, OpenAI cannot rectify false personal data without limiting overall data accessibility, an approach which does not meet the GDPR’s transparency and accuracy requirements.
- Lack of Data Transparency: The inability to trace the origins of the data or to provide data access as required under GDPR further complicates compliance.
A year ago the Italian Garante opened an investigation, and as other EU data protection authorities began looking into ChatGPT on their own, the EDPB set up a dedicated task force – however, none of these efforts has yet produced a notable result.
🇪🇺 EU Commission Initiates Formal Proceedings Against Meta for Possible Digital Services Act Violations
On 30 April 2024 the European Commission opened formal proceedings against Meta to investigate whether Facebook and Instagram, both designated as Very Large Online Platforms, have failed to comply with the Digital Services Act (DSA). The inquiry covers several aspects:
- Deceptive Practices and Disinformation: Meta is scrutinized for potentially deceptive advertising practices, mishandling of political content, and inadequate mechanisms for civic discourse and election monitoring.
- Content Moderation Failures: Concerns have been raised about Meta’s notice-and-action system for flagging illegal content and the internal mechanisms for handling complaints, which might not meet DSA standards.
- Election Integrity Risks: The phase-out of CrowdTangle, a tool crucial for monitoring elections, is particularly troubling given the upcoming European Parliament elections (6-9 June). This action by Meta may compromise the ability to track misinformation and voter interference effectively.
- Access to Data: The Commission is also evaluating Meta’s compliance in providing necessary data access to researchers, which is crucial for public transparency and research.
As the proceedings unfold, the Commission may impose interim measures or accept commitments from Meta to address the identified issues. This investigation could have broad implications for how digital platforms operate within the EU, aiming to ensure they contribute positively to public discourse and uphold democratic values.
The press release is available here.
🇳🇴 Norwegian Data Protection Authority publishes annual report for 2023
Datatilsynet’s 2023 annual report reflects a transformative year under the new leadership, focusing on significant privacy cases and enhancing organizational capabilities.
- Strategic Litigations and Guidance: The agency successfully managed several high-profile cases, including a landmark decision banning Meta from behavioral advertising – as I wrote in the week 16 edition, that decision was upheld in court.
- Organizational Growth and Strategy: Significant efforts in organizational development and strategic planning were made to align with future privacy challenges up to 2030.
- Increased Inspections and Guidance: The focus has been on extensive inspections and providing detailed guidance, particularly in the public sector, to ensure compliance and enhance privacy standards.
- Focus on AI and Technology: Emphasizing the importance of ethical AI use and the protection of personal data in technological advancements to maintain public trust.
- Enhanced International Collaboration: Strengthened international partnerships to improve privacy practices both nationally and globally.
You can read the press release here and the report here (in Norwegian).
⚖️ CJEU Rules on Data Retention and Privacy in Copyright Enforcement Case (C-470/21)
In its judgment of 30 April 2024 in Case C-470/21 La Quadrature du Net and Others, the Court of Justice of the European Union (CJEU) assessed whether French national legislation authorizing the general and indiscriminate retention of IP addresses by internet service providers (ISPs), used to identify and pursue copyright infringements, complies with EU data protection law.
Here are the key points:
- Legal Context: The French Decree n. 2010-236 mandates that ISPs assist Hadopi, a French authority, by providing civil identity data linked to IP addresses for initiating actions against individuals distributing copyrighted works without authorization.
- CJEU’s Evaluation: The CJEU clarified that while IP addresses are considered traffic data, they do not inherently reveal detailed personal information unless combined with other data. The court acknowledged that such data collection is a form of processing under GDPR but emphasized that it does not necessarily lead to a severe invasion of privacy.
- Conditions for Compliance: Retention must be limited and data access should not enable detailed profiling of individuals’ private lives. Data use must focus solely on identifying potential infringers without tracking their broader online activity.
- Safeguards and Oversight: Access to retained data must be subject to a prior review by a judicial or independent administrative body, especially when used to link an individual to specific online activities. The court also highlighted the necessity of robust safeguards against data misuse, including effective measures to prevent unlawful access and ensuring data integrity.
⚖️ CJEU Ruling on Data Access for Crime Investigation Under the ePrivacy Directive (C‑178/22)
In its judgment of 30 April 2024 in Case C-178/22 Procura della Repubblica presso il Tribunale di Bolzano, the Court of Justice of the European Union (CJEU) addressed whether Italian legislation governing judicial authorization for access to communication data in criminal investigations is compatible with the ePrivacy Directive 2002/58/EC.
Here are the main takeaways:
- Legal Framework: The case concerned Italian national law requiring judicial authorization before the public prosecutor may acquire traffic and location data in investigations of crimes punishable by imprisonment of at least three years.
- Judicial Oversight: The CJEU upheld that such judicial oversight is necessary, allowing courts to assess the seriousness of the offense before granting access to data, thereby ensuring that the interference with privacy is proportionate.
- Scope of Serious Crimes: The judgment stressed that serious crimes justify such data access. However, it emphasized that not all offenses meeting the three-year penalty threshold automatically qualify as serious crimes.
- Data Protection and Privacy Rights: Reflecting on fundamental rights protection, the court underscored that any access to communication data must be strictly necessary and proportionate, respecting privacy and data protection principles.
- Implications for Member States: The ruling confirms that while access to data for serious crimes is permissible, it should not extend to all criminal offenses indiscriminately, thus EU member states should limit this to genuinely serious situations affecting public security.
🇺🇸 NIST publishes Cybersecurity Framework 2.0 small business guide
On 1 May the US National Institute of Standards and Technology (NIST) launched the Cybersecurity Framework 2.0 Small Business Quick Start Guide, a strategic resource designed to help small and medium-sized businesses (SMBs) develop their cybersecurity programs.
First developed in 2014 and revised most recently in February 2024, the NIST Cybersecurity Framework offers flexible guidance to help organizations manage cybersecurity risks effectively.
Key Features of the Guide:
- Guide Structure: Organized by Function, it breaks down essential cybersecurity outcomes into Govern, Identify, Protect, Detect, Respond, and Recover categories.
- Practical Tools: Each page of the Guide offers actions to consider, getting started tips, critical questions, and additional resources to facilitate businesses in enhancing their cybersecurity.
- The guide helps SMBs establish and communicate their cybersecurity strategies and policies, assists in identifying current risks and implementing protective measures, and provides guidance on detecting, responding to, and recovering from cybersecurity incidents (a minimal sketch of the Function-based structure follows this list).
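To make the Function-based structure concrete, here is a minimal sketch (my own illustration in Python, not something NIST publishes) of how an SMB might track its progress against the six CSF 2.0 Functions. The Function names come from the framework; the example actions and the simple completion metric are assumptions made purely for illustration.

```python
# Illustrative self-assessment tracker built around the six CSF 2.0 Functions.
# The Function names come from NIST CSF 2.0; the example actions and the
# "done / not done" scoring are assumptions, not part of the Quick Start Guide.

from dataclasses import dataclass, field


@dataclass
class FunctionChecklist:
    name: str                                    # CSF 2.0 Function, e.g. "Govern"
    actions: dict = field(default_factory=dict)  # action description -> completed?

    def completion(self) -> float:
        """Return the fraction of tracked actions marked as completed."""
        if not self.actions:
            return 0.0
        return sum(self.actions.values()) / len(self.actions)


checklist = [
    FunctionChecklist("Govern",   {"Assign cybersecurity roles": True,
                                   "Adopt an acceptable-use policy": False}),
    FunctionChecklist("Identify", {"Inventory hardware and software": True}),
    FunctionChecklist("Protect",  {"Enable multi-factor authentication": True,
                                   "Back up critical data": False}),
    FunctionChecklist("Detect",   {"Turn on logging and alerting": False}),
    FunctionChecklist("Respond",  {"Maintain an incident contact list": False}),
    FunctionChecklist("Recover",  {"Test restoring from backups": False}),
]

for fn in checklist:
    print(f"{fn.name:<8} {fn.completion():.0%} of tracked actions completed")
```

The guide itself goes further, pairing each Function with actions to consider, getting-started tips and questions to ask; the point here is simply that the six Functions give an SMB a natural skeleton for a lightweight checklist like this.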
🇳🇱 Dutch DPA publishes guidelines on data scraping and considers it “mostly illegal”
The Dutch Data Protection Authority (AP) recently emphasized that data scraping by private organizations and individuals generally constitutes a violation of the General Data Protection Regulation (GDPR). Scraping refers to the automated collection and storage of online information, often involving sensitive personal data; a minimal illustration of what this looks like in practice follows further below. Here are the key points from the guidance:
- Scraping personal data usually constitutes a GDPR violation. Scraping activities such as profiling for resale, harvesting data from private or secured areas, and using public social media information for insurance assessments are examples of GDPR breaches.
- Common prohibited practices include scraping to create sellable profiles, accessing private social media, and using public profile data for insurance eligibility.
- A common misconception is that public data can be freely scraped; however, AP clarifies that visibility does not imply consent.
- Legally, scraping will rarely satisfy the strict conditions attached to the legitimate interest basis under the GDPR.
Read the press release here and the guidelines here (in Dutch, but you can download an automated translation into English here).
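For readers who do not deal with scraping day to day, here is a minimal sketch of what the AP’s definition covers in practice: an automated script that fetches publicly visible pages and stores the personal data it finds. The URL, markup and fields are hypothetical placeholders of my own; the point is only to show how little is needed before the GDPR is engaged.

```python
# Minimal illustration of "scraping" in the AP's sense: automated collection
# and storage of online information. The URL and CSS selectors below are
# hypothetical placeholders, not a real website.

import csv

import requests
from bs4 import BeautifulSoup


def scrape_public_profiles(listing_url: str, out_path: str) -> None:
    """Fetch a (hypothetical) public member listing and store names and cities."""
    html = requests.get(listing_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    rows = []
    for card in soup.select(".member-card"):        # hypothetical markup
        name = card.select_one(".name")
        city = card.select_one(".city")
        rows.append({
            "name": name.get_text(strip=True) if name else "",
            "city": city.get_text(strip=True) if city else "",
        })

    # Storing the results is itself "processing" of personal data under the GDPR.
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "city"])
        writer.writeheader()
        writer.writerows(rows)


# scrape_public_profiles("https://example.com/members", "members.csv")
```

Even a script this small collects and stores personal data, so whoever runs it needs a legal basis; as the AP stresses, the fact that the profiles are publicly visible does not provide one, which is why the discussion below turns on legitimate interest.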
While this guidance makes many good points, it unfortunately doubles down on the AP’s view that “if you only have a purely commercial interest in processing personal data, then you cannot successfully rely on the legitimate interest basis.” This is wrong, and you don’t need to take my word or the EDPB’s word on it – the VoetbalTV precedent already tested the AP’s restrictive view before the District Court of Central Netherlands, which ruled against the AP’s decision and held that commercial activities could constitute a legitimate interest for processing personal data under the GDPR (source) – and that decision was upheld on appeal. It seems that even though the case drove VoetbalTV into bankruptcy while it challenged the AP’s decision, the AP won’t refrain from taking the same position again. A strange hill to die on and a questionable practice given res judicata authority, but perhaps the CJEU’s decision on this will be decisive – preliminary questions on this exact topic have been referred to the CJEU by the Amsterdam District Court in a case concerning the Royal Dutch Lawn Tennis Association (KNLTB), now Case C-621/22.
See also the Privacy Explorer #17 where I wrote about the AG Opinion in Case C-446/21, which concerns processing of publicly disclosed special category personal data for advertising – the gist of it being that no, that’s not allowed.
🇳🇱 Dutch DPA publishes general framework for facial recognition
On 2 May 2024 the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) published a comprehensive legal framework clarifying when facial recognition technology may be used. The guidance is intended for privacy professionals and for organizations considering facial recognition applications.
- Security Exceptions: Highlighting specific scenarios under which facial recognition can lawfully be used, the framework points to its necessity in high-security contexts such as nuclear facilities or where hazardous substances are involved. These applications are permissible only when a data protection impact assessment demonstrates a compelling public interest and stringent security requirements are met.
- Identity Verification Ban: The AP confirms that facial recognition used to confirm identity involves processing special categories of personal data and is therefore subject to the general prohibition of Article 9 GDPR.
- Personal Use Guidelines: The framework defines conditions under which facial recognition technology can be considered for personal or household use, such as unlocking personal devices. This use is outside the scope of GDPR provided the biometric data is stored locally and managed solely by the user.
Read the press release here. The framework is available here.
🇬🇧 UK ICO launches tool to help create privacy notices
The UK Information Commissioner’s Office (ICO) has launched a new privacy notice generator tool, replacing the previous template approach. Here’s what you need to know:
- Purpose: Assists in the creation of privacy notices for various stakeholders.
- User Groups: Aimed at sole traders, start-ups, charities, and small to medium enterprises.
- Future Developments: Sector-specific versions for professional services, education, health care, and the charity sector are expected by summer 2024.
- Versatility: Suitable for generating notices related to customer, supplier, staff, and volunteer information.
- Feedback Encouraged: Users are urged to provide feedback via a survey, helping the ICO to refine the tool further.
Find it here.
🏛️ OECD Updates AI Principles
On 5 May 2024 the Organisation for Economic Co-operation and Development (OECD) updated its AI Principles, originally established in 2019, to address the rapidly evolving landscape of artificial intelligence, with a specific focus on general-purpose and generative AI. Main points:
- Addressing Safety Concerns: The revised principles incorporate stringent safety protocols to mitigate risks associated with AI systems, including mechanisms to override or decommission malfunctioning AI.
- Misinformation and Integrity: There is a heightened emphasis on preventing the spread of misinformation and ensuring the integrity of information processed by AI systems.
- Responsible Business Practices: The OECD calls for enhanced cooperation throughout the AI system lifecycle, involving various stakeholders from suppliers to users.
- Transparency and Accountability: Definitions and standards for transparency have been clarified, ensuring that AI systems are understandable and that entities are accountable for their functioning.
- Environmental and Interoperable Governance: Explicit references to environmental sustainability reflect its growing importance. Furthermore, the principles promote global cooperation to develop compatible AI governance frameworks.
The press release is available here and the Recommendation here.