Welcome to the AI, digital, and privacy news recap for week 30 of 2024 (22-28 July)!
This edition at a glance:
Oracle Agrees to $115 Million Settlement in Consumer Privacy Lawsuit
Oracle has agreed to a $115 million settlement in a consumer privacy lawsuit dating back to 2018. The suit alleged that Oracle generated $42.5 billion annually by secretly creating and selling detailed dossiers on millions of people, including non-users of its products. These dossiers, compiled through direct tracking and data purchases, included personal and sensitive information. The settlement, which covers data collected from 19 August 2018 onwards, requires Oracle to cease specific data collection practices. Approximately 220 million people are affected, and the law firm involved is seeking $28 million in fees.
Read more here.
Texas Reaches $1.4 Billion Settlement with Meta Over Unauthorized Capture of Biometric Data
Texas Attorney General Ken Paxton has secured a $1.4 billion settlement with Meta over the unauthorized capture and use of biometric data from millions of Texans. It is the largest settlement obtained by a single US state, surpassing the $390 million Google settlement of 2022. The case was brought under Texas’s Capture or Use of Biometric Identifier Act (CUBI) and marks the first lawsuit and settlement under that statute. Paxton’s office sued Meta in February 2022 for using facial recognition software without proper consent, in violation of CUBI and the Deceptive Trade Practices Act. The settlement will be paid over five years.
Read more here.
TikTok Fined £1.875 Million by UK’s Ofcom for Data Inaccuracies
Ofcom has fined TikTok £1.875 million for providing inaccurate information about its parental controls feature, Family Pairing. TikTok failed to deliver accurate data by the requested deadline, disrupting Ofcom’s plans to publish a child safety transparency report. Despite being aware of the inaccuracies, TikTok delayed informing Ofcom, causing significant regulatory and operational setbacks. The fine reflects TikTok’s responsibility to ensure data accuracy and to cooperate with regulatory requests in a timely manner. This is TikTok’s first penalty under the Communications Act 2003; it was reduced by 25% in recognition of TikTok’s cooperation in settling the case.
Read more here.
Korean PIPC Fines AliExpress $1.43M for Data Protection Violations
The Korean Personal Information Protection Commission (PIPC) has fined AliExpress, operated by Alibaba.com Singapore E-Commerce Private Limited, 1.978 billion KRW ($1.43 million), and imposed a separate administrative fine of 7.8 million KRW ($5,631), for violations of the Korean Personal Information Protection Act (PIPA). The penalties follow an investigation triggered by privacy concerns over Korean users’ data being transferred to sellers in China. The PIPC also issued corrective orders requiring AliExpress to improve its data protection measures and enhance transparency for Korean users.
Read more here.
FCC and TracFone Reach $16 Million Settlement Over Data Breaches
The Federal Communications Commission (FCC) has announced a settlement with TracFone Wireless Inc., resolving investigations into three significant data breaches. The breaches, occurring between January 2021 and January 2023, exposed customers’ personal information due to vulnerabilities in application programming interfaces (APIs). The settlement includes a $16 million penalty and mandates comprehensive security measures, such as improved API security, SIM change and port-out protections, and regular security assessments.
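The settlement does not publish the vulnerable code, but breaches of this kind commonly stem from broken object-level authorization (BOLA, also called IDOR), where an API trusts a client-supplied identifier without checking ownership. Here is a minimal, hypothetical Python sketch of that class of flaw and its fix; all names and data are invented for illustration and do not reflect TracFone’s actual systems:

```python
# Hypothetical illustration of a BOLA/IDOR flaw (invented names and data,
# not TracFone's actual code).

CUSTOMERS = {
    "1001": {"name": "A. Doe", "ssn_last4": "1234"},
    "1002": {"name": "B. Roe", "ssn_last4": "5678"},
}

def get_account_vulnerable(requested_id: str) -> dict:
    # Flaw: any caller who can guess or enumerate account IDs gets the record,
    # because the endpoint never checks who is asking.
    return CUSTOMERS[requested_id]

def get_account_fixed(requested_id: str, authenticated_id: str) -> dict:
    # Fix: verify that the authenticated caller actually owns the record
    # before returning it.
    if requested_id != authenticated_id:
        raise PermissionError("caller is not authorized for this account")
    return CUSTOMERS[requested_id]
```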
Read more here.
Irish DPC to Investigate X’s Grok AI Training on User Data Without Consent
The Irish Data Protection Commission (DPC) is investigating X’s practice of sharing user data with Elon Musk’s AI startup, xAI, without explicit consent. X rolled out a setting that opts users into this data sharing by default, without prior notice. Users can change the setting only in the desktop version, with a mobile app option still in development. The DPC had been questioning X for months and expressed surprise at the sudden rollout, which may lead to a GDPR investigation and possible fines. The situation mirrors Meta’s recent, GDPR-driven pause of similar AI training on user data in Europe.
Read more here.
DOJ Accuses TikTok of Data Misuse on Sensitive Topics
The U.S. Department of Justice accused TikTok of gathering and sharing U.S. user data on contentious issues like abortion and gun control with its Chinese parent company, ByteDance. Court documents reveal that TikTok used an internal system called Lark to communicate and transfer sensitive data to ByteDance employees in China. The DOJ argues that TikTok’s data handling practices pose significant national security risks and could allow for covert content manipulation by the Chinese government. TikTok disputes these claims, asserting that the potential ban would violate the First Amendment. This case is part of a broader legal battle over TikTok’s future in the U.S.
Read more here.
📈 FTC Investigates Surveillance Pricing Practices
The Federal Trade Commission (FTC) issued orders to eight companies to gather detailed information on surveillance pricing practices. These companies, including Mastercard and JPMorgan Chase, use personal data such as browsing history and credit scores to set individualized prices for goods and services. The FTC aims to understand the impact of these practices on privacy, competition, and consumer protection. Chair Lina M. Khan emphasized the risks to consumer privacy and the potential for price exploitation. The inquiry relies on the FTC’s 6(b) authority, which permits comprehensive studies without a specific law enforcement purpose.
Read more here.
📢 European Commission Coordinates Action Against Meta on ‘Pay or Consent’ Model
The European Commission, in collaboration with the Consumer Protection Cooperation Network, has taken coordinated action against Meta regarding its ‘pay or consent’ model. This model requires users to either pay for access to Facebook and Instagram or consent to personalized ads by Meta using their personal data. The CPC initiated this move after determining that the model may breach EU consumer laws. Key concerns include misleading information, undue pressure on consumers, and potential violations of the Unfair Commercial Practices Directive and the Unfair Contract Terms Directive. Meta must respond by 1 September 2024, or face potential sanctions.
Read more here.
🚗 US Senators Urge FTC Crackdown on Automakers’ Data Sharing
On 26 July 2024, U.S. Senators Ron Wyden and Edward J. Markey called on the FTC to investigate automakers’ unauthorized sharing of driver data with data brokers. Wyden’s investigation revealed that GM, Honda, and Hyundai shared driving and location data with Verisk Analytics without drivers’ informed consent. Hyundai received over $1 million from Verisk for data from 1.7 million cars, and Honda and GM engaged in similar practices. The senators condemned these actions, urging the FTC to hold automakers and data brokers accountable for privacy violations and deceptive practices.
Read more here.
📝 EU Commission Publishes Biennial Report on Consumer Protection
On 25 July 2024, the European Commission released its biennial report on actions carried out under the Consumer Protection Cooperation (CPC) Regulation. The report highlights a 41% increase in mutual assistance requests, amounting to 440 exchanges. Key enforcement actions focused on dark patterns, misleading price reductions, and influencer marketing. The report also emphasized the importance of digital market regulation, the green transition, and consumer resilience amid multiple crises. Future market trends to watch include the impact of AI, greenwashing, and online fraud.
Read more here.
Second GDPR Report Highlights Progress and Challenges
The European Commission has released its second report on the application of the General Data Protection Regulation (GDPR). Since the first report in 2020, the GDPR has significantly shaped outcomes for individuals and businesses, while the EU has introduced numerous initiatives to advance digital transformation. The report highlights key achievements, including increased enforcement activity and cooperation among data protection authorities. It also identifies areas needing improvement, such as support for SMEs, clearer guidance, and consistent GDPR enforcement across the EU. Enforcement actions to date have produced fines totaling around EUR 4.2 billion.
Read more here.
🤖 U.S. Department of State Releases Risk Management Profile for AI and Human Rights
The U.S. Department of State has published the “Risk Management Profile for Artificial Intelligence and Human Rights” to guide organizations in AI design, development, and deployment with a focus on human rights. This Profile aims to bridge the gap between AI risk management and human rights considerations by integrating principles from the U.S. National Institute of Standards and Technology’s AI Risk Management Framework. It underscores the importance of international human rights in AI governance and provides actionable recommendations for organizations to follow throughout the AI lifecycle to ensure AI systems are used responsibly and ethically.
Read more here.
📜 NIST Releases Guidance on Mitigating Generative AI Risks
NIST has introduced the AI RMF Generative AI Profile (NIST AI 600-1), a framework to help organizations manage risks specific to generative AI. The profile identifies 12 risks that are unique to or exacerbated by generative AI, such as heightened cybersecurity threats, the spread of misinformation, and “hallucinations” in AI outputs. The document details over 200 actions organizations can implement to mitigate these risks, mapped to the functions of NIST’s AI RMF. It provides a comprehensive approach to addressing the potential harms of generative AI and supports safer, more responsible use.
Read more here.
📢 FCC Proposes New AI Disclosure Rules for Political Ads
The FCC has introduced a proposal to standardize the disclosure of AI-generated content in political advertisements. This Notice of Proposed Rulemaking aims to bring uniformity to state laws regulating AI use in political campaigns. The proposal requires broadcasters, cable operators, Direct Broadcast Satellite (DBS) providers, and Satellite Digital Audio Radio Services (SDARS) licensees to disclose AI-generated content in political ads through on-air announcements and notices in political files. The initiative seeks to enhance transparency and help voters make informed decisions, amid concerns about AI-generated “deepfakes” misleading the public.
Read more here.
🔄 Google Halts Plans to Phase Out Third-Party Cookies
Google has reversed its decision to phase out third-party cookies in its Chrome browser, opting instead to give users the ability to block these cookies across their web browsing. The change, announced on 22 July 2024, reportedly follows feedback from regulators and stakeholders, including the UK’s Competition and Markets Authority. Google says the new approach aims to enhance user privacy while preserving a vibrant ad-supported internet. Concerns remain, however, that it could still constrain third-party advertising and shift ad spending towards Google’s own search ads, raising potential antitrust issues.
Read more here.
🔍 FTC Warns Against Misleading Anonymization Claims Through Hashing
The Federal Trade Commission (FTC) emphasizes that hashing does not render data anonymous and cautions companies against making misleading privacy claims. Hashing transforms data such as email addresses into fixed-length values; because the same input always produces the same output, a hash obscures the original data but still functions as a persistent identifier. The FTC has taken action against companies such as Nomi and BetterHelp for improperly treating hashed data as anonymous, demonstrating that hashed identifiers can still cause privacy harms. The agency calls on companies to adhere to truthful privacy practices, citing recent cases where user tracking persisted through such unique identifiers.
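A minimal Python sketch of the FTC’s point (the addresses are invented for the example): hashing is deterministic, so a hashed email is a stable pseudonym that links a user’s activity across datasets, and it can often be reversed simply by hashing a list of candidate addresses:

```python
import hashlib

def hash_email(email: str) -> str:
    """Hash a normalized email with SHA-256 (a common 'anonymization' claim)."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The same input always yields the same digest, so the hash still works
# as a stable identifier that links a user's activity across datasets.
h1 = hash_email("alice@example.com")
h2 = hash_email("Alice@Example.com ")
assert h1 == h2  # deterministic: a persistent pseudonym, not anonymity

# Worse, anyone holding a list of candidate emails can reverse the hash
# by hashing each candidate and comparing digests.
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
recovered = next((e for e in candidates if hash_email(e) == h1), None)
print(recovered)  # -> alice@example.com
```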
Read more here.
Ofcom Publishes Paper on Mitigating Deepfake Harms
Ofcom has published a discussion paper on the impacts and mitigation of deepfakes: AI-generated or AI-manipulated audio-visual content that misrepresents people or events. Deepfakes have harmed public figures and ordinary people alike, most often through non-consensual sexual content, financial scams, and political disinformation. Ofcom’s recent polling found that nearly half of respondents in both groups surveyed (those aged 8-15 and those aged 16+) had encountered a deepfake in the past six months. Under the Online Safety Act, online services must address harmful content, including deepfakes, through measures such as watermarking, detection classifiers, and media literacy efforts.
Read more here.
🧠 Neurotechnologies and Mental Privacy: Societal and Ethical Challenges
The European Parliament has published a report addressing the implications of neurotechnologies (NT) for mental privacy. Originally developed for clinical use, NT devices are now widely available for cognitive and physical enhancement, raising privacy, security, and ethical concerns. The Neurorights Foundation (NRF), established in 2017, proposes ‘neurorights’ such as mental privacy and protection from algorithmic bias. The report evaluates these proposals and recommends a balanced regulatory framework that protects users while fostering responsible NT development.
Read more here.
👇 That’s it for this edition. Thanks for reading, and subscribe to get the full text in a single email in your inbox! 👇
♻️ Share this if you found it useful.
💥 Follow me on LinkedIn for updates and discussions on privacy education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.