The AI & Privacy Explorer #38 (16-22 September 2024)

Welcome to the AI, digital and privacy news recap for week 38 of 2024 (16-22 September)!

 

🚀 Accelerate your privacy career this fall: my privacy management program that brings together a course, coaching, community and workshops starts on 14 October! Get 20% off with code INTRO20 at checkout. 👉 Register here.

🤖 My response to the Big Tech plea for less regulation

In just a short span of time this September, three major developments have reignited the debate over digital regulation in Europe and beyond. On September 9, Mario Draghi released his long-anticipated report “The Future of European Competitiveness”, commissioned by the European Commission, critiquing the EU’s regulatory environment. The report describes it as complex, fragmented, and burdensome, particularly for small and medium-sized enterprises. Draghi’s analysis argues that Europe’s stringent rules, such as the GDPR and the AI Act, may be stifling innovation and competitiveness, raising concerns among EU policymakers and businesses.

Just ten days later, on September 19, some of the world’s biggest tech companies – including Meta and Google – issued an open letter to EU leaders. The letter argues that Europe’s current regulatory approach to AI could stifle innovation and drive investment away from the continent. It calls for a major regulatory overhaul, pushing Europe to align with the lighter regulatory touch seen in other regions, warning that the EU’s ambitions for digital sovereignty could backfire.

On the same day, September 19, the U.S. Federal Trade Commission (FTC) issued a staff report (A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services) that paints a stark picture of the consequences of America’s relatively lax approach to privacy regulation. The report details rampant data collection and exploitation by major tech firms, often without consumers’ knowledge or consent. It warns that without significant regulatory intervention, the “commercial surveillance ecosystem will only get worse”,  highlighting the very abuses that Europe’s laws are designed to prevent.

These three developments highlight an important moment for Europe’s regulatory future. The narrative that regulations are stifling innovation is gaining momentum, but it misrepresents the real stakes. This article is a call to reconsider the path forward: Europe must not compromise on its values or weaken the protections that define its digital landscape. Better implementation of regulations is essential, but the solution is not to lower standards – it’s to ensure that technological progress serves people, not just profits.

In my article, I respond to this wave of pressure and make that case in more depth: Europe’s values must remain non-negotiable, and we should fix how the rules are implemented rather than trading away the protections themselves.

Read it here.


🍪 UK DSIT Releases Study on Cookie Settings and Privacy Decisions

The UK Department for Science, Innovation, and Technology (DSIT) published a report on 4 September 2024 examining the impact of various website cookie-setting designs on user privacy decisions. The study, conducted between August and December 2023 with 5,019 participants, focused on how different cookie consent interfaces affect user engagement and choices, with the goal of optimizing privacy outcomes and reducing the burden of website-by-website settings.

Key Findings

  1. Acceptance Bias: Despite privacy concerns, most users (up to 80%) tend to accept all cookies, even when defaults encourage declining cookies.
  2. Design Matters: Detailed and interactive cookie-setting designs, such as those offering more granularity (e.g., “Detailed” and “Scale” settings), increased user engagement. These options led to a higher rate of customization (14%) compared to simpler or opt-in/out models.
  3. Privacy Preferences: 53% of users expressed comfort with data sharing, but 42% preferred to customize settings. A small portion (5%) consistently declined all cookies.
  4. Higher Satisfaction: Participants were more satisfied when cookie settings aligned with their privacy preferences, especially when using detailed, privacy-protective options.
  5. User Misunderstanding: There remains significant confusion about what cookies collect, highlighting the need for clearer information.

Recommendations

The report advocates for more interactive and detailed cookie management systems to improve alignment between user choices and privacy preferences. It suggests browser-based cookie settings as an alternative to the current website-by-website model, which could reduce cognitive burden and increase informed decision-making.
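
To make the “detailed settings” idea concrete, here is a minimal sketch (my own illustration, not from the DSIT report) of how granular, per-purpose consent choices can be recorded and checked before any non-essential cookie is set; the category names are assumptions.

```python
# Illustrative only: a minimal data model for recording granular ("Detailed"-style)
# cookie consent choices instead of a single accept/reject flag. Category names
# are assumptions, not taken from the DSIT study.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """Per-purpose cookie choices captured from a granular consent interface."""
    strictly_necessary: bool = True   # always on; not subject to consent
    functional: bool = False
    analytics: bool = False
    advertising: bool = False
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def allows(self, purpose: str) -> bool:
        """Return True only if the user opted in to the given purpose."""
        return bool(getattr(self, purpose, False))


# Usage: consult the record before setting any non-essential cookie.
consent = ConsentRecord(analytics=True)
if not consent.allows("advertising"):
    print("Advertising cookies withheld; the user did not opt in.")
```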

You can find it here.

🛡️ Instagram Introduces Teen Accounts with AI Age Verification

On 17 September 2024, Instagram launched Teen Accounts, implementing privacy settings, content restrictions, and AI-driven age verification to enhance safety for users under 18. This update follows intense scrutiny from a 2021 Wall Street Journal investigation that linked Instagram use to increased depression, eating disorders, and self-harm among teens.

Teen Accounts are private by default, restrict sensitive content, limit messaging to known contacts, and enforce screen time limits. Teens under 16 need parental permission to modify settings, while parents can supervise older teens through additional controls.

Meta is deploying AI technology to identify users who might be lying about their age by analyzing behavioral patterns, ensuring that all teens are placed in appropriate settings.
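
Meta has not published how this age-estimation technology works, so the following is a purely illustrative toy sketch of the general idea: score behavioral signals and route doubtful accounts into teen settings or review. Every feature name and threshold here is invented.

```python
# Purely illustrative toy: Meta has not disclosed how its age-estimation AI works.
# This only sketches the idea of scoring behavioral signals and applying teen
# defaults (or routing to review) when the score is high. All feature names and
# thresholds are invented.
def likely_underage(signals: dict) -> bool:
    score = 0.0
    if signals.get("stated_age", 99) < 20:
        score += 0.3          # self-declared age close to the teen range
    if signals.get("share_of_teen_contacts", 0.0) > 0.5:
        score += 0.4          # network made up mostly of teen accounts
    if signals.get("follows_school_accounts", False):
        score += 0.3
    return score >= 0.6       # above threshold: apply teen settings pending verification


account = {"stated_age": 19, "share_of_teen_contacts": 0.7, "follows_school_accounts": True}
print(likely_underage(account))  # True
```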

These changes began rolling out in the US, UK, Canada, and Australia, with global implementation expected by January 2025.

Read the press release here.

The UK ICO’s Executive Director of Regulatory Risk, Stephen Almond, said “We welcome Instagram’s new protections for its younger users following our engagement with them. Our Children’s code is clear that kids’ accounts must be set at ‘high privacy’ by default unless there is a compelling reason not to do so. We’ll keep pushing where we think industry can go further, and take action where companies are not doing the right thing.”


🤖 Governing AI for Humanity: UN’s Proposal for Global AI Governance

The United Nations Secretary-General’s High-level Advisory Body on Artificial Intelligence has released its final report, “Governing AI for Humanity,” addressing the urgent need for a global governance framework for AI. Informed by extensive consultations with over 2,000 participants from diverse fields and regions, the report highlights the transformative potential and significant risks of AI, emphasizing the critical need for coordinated global action.

Key Takeaways

  • Global Governance Deficit: The report identifies a significant global governance deficit in AI, noting that current norms, laws, and institutions are inadequate to fully harness AI’s transformative potential while mitigating its risks. The fragmented nature of existing AI governance efforts leaves many gaps, particularly in ensuring equitable access, accountability, and risk management.
  • Unequal Opportunities and Risks: AI offers vast benefits across sectors, from healthcare to climate action, but the report warns of a growing digital divide that threatens to confine these benefits to a limited number of states, companies, and individuals. It highlights the disproportionate impact of AI risks on vulnerable communities, urging more inclusive governance to prevent deepening existing inequalities.
  • Need for International Cooperation: The report calls for robust international cooperation to ensure equitable distribution of AI benefits and effective risk mitigation. Without coordinated action, AI development could exacerbate existing disparities, create new challenges, and further marginalize vulnerable groups, particularly in developing nations.

Recommendations

The report proposes seven key actions to address the global governance gap, emphasizing the need for common understanding, cooperation, and equitable sharing of AI benefits:

  1. International Scientific Panel on AI: Establishing an impartial body to provide independent, reliable knowledge on AI developments, risks, and best practices, akin to the IPCC for climate change.
  2. Policy Dialogues on AI Governance: Launching global policy dialogues to foster interoperability and alignment of AI governance approaches across regions and sectors, ensuring that governance efforts are harmonized and mutually reinforcing.
  3. AI Standards Exchange: Creating a global platform to harmonize definitions, standards, and benchmarks for AI, addressing disparities in current regulatory and technical frameworks.
  4. Capacity Development Network: Developing a global network to build AI capacity, particularly in underrepresented regions, ensuring equitable access to AI resources, training, and expertise.
  5. Global AI Data Framework: Promoting shared data governance and supporting the development of data commons, including provisions for hosting data trusts relevant to the Sustainable Development Goals (SDGs).
  6. Global AI Fund: Establishing a dedicated fund to support the capacity development and resource needs of underrepresented countries, enhancing global AI inclusivity and access.
  7. AI Coordination Office within the UN Secretariat: Establishing an office to coordinate global AI governance initiatives, drive coherence among international efforts, and provide support for implementation, reporting directly to the Secretary-General.

Addressing AI Risks

  • The report acknowledges numerous risks associated with AI, including algorithmic bias, threats to privacy and security, job displacement, disinformation, autonomous weapons, and the concentration of power among a few tech companies.
  • It emphasizes the importance of a vulnerability-based approach to risk management, focusing on how AI impacts different communities, particularly those that are most vulnerable. Enhanced monitoring, adaptive governance, and targeted risk mitigation strategies are essential to manage these challenges proactively.

The report serves as a call to action for the international community to engage in proactive and comprehensive efforts to govern AI responsibly. It also highlights the UN’s unique role in facilitating global AI governance and its capacity to provide a neutral platform for dialogue, develop shared norms, and ensure that AI governance efforts reflect the diverse needs and values of global communities.

You can find the report here, including its interim version from December 2023.

📊 Report on AI in the Legal Profession

The International Bar Association (IBA) and the Center for AI and Digital Policy (CAIDP) launched the report “The Future is Now: Artificial Intelligence and the Legal Profession” on 19 September 2024 at the IBA Annual Conference in Mexico City. The report highlights the widespread but uneven adoption of AI among law firms, with larger firms significantly ahead in leveraging AI for client services. Smaller firms and solo practitioners face substantial challenges in AI governance, data security, and resource allocation. The report underscores that AI’s impact on law firms will extend to their structure, hiring practices, and business models, necessitating a shift towards AI-competent attorneys and new fee structures.

Key Findings

  • AI Adoption and Regulation: 48% of legal respondents support regulation of AI in the legal profession, with 57% emphasizing the importance of consistent regulations globally. Despite widespread support, only 43% of law firms have AI policies in place.
  • Widespread AI Adoption with Disparities: Larger firms have integrated AI more extensively and effectively, especially in client-facing roles such as legal research, contract drafting, and due diligence. Smaller firms primarily use AI for internal tasks like back-office administration, marketing, and business development but face challenges in scaling up these efforts. Only 69% of respondents were aware of how AI regulations might impact their firms.
  • Training and Skill Development: The report underscores the critical need for training programs to ensure the successful adoption of AI in legal practices. Firms are urged to focus on building comprehensive training and development programs to equip lawyers, especially younger associates, with skills to work alongside AI, ensuring they develop expertise that will be crucial as they advance in their careers.
  • Challenges in AI Governance: Data governance, privacy, intellectual property, and security remain major concerns across the board, with smaller firms lacking comprehensive AI policies and dedicated resources. Many firms, regardless of size, are still in the early stages of developing governance frameworks.
  • Structural and Business Model Changes: AI is expected to drive significant changes in law firm structures, hiring policies, and business models. Firms are moving towards value-based or fixed fees and are prioritizing hiring AI-competent attorneys. This shift demands firms adapt culturally and operationally to integrate AI effectively.

Recommendations for Law Firms

  • Promote AI Adoption with Targeted Support: Encourage AI integration across all firm sizes, with special support for smaller firms to bridge gaps in resources and expertise.
  • Enhance AI Governance: Develop comprehensive AI governance frameworks that address key challenges like data security, privacy, and intellectual property.
  • Support Structural and Cultural Changes: Provide guidance on adapting firm structures, hiring practices, and operational models to integrate AI technologies effectively. Emphasize the need for a cultural shift towards innovation.
  • Facilitate Comprehensive Training Programs: Implement extensive training programs that build AI literacy and competence among legal professionals, ensuring ongoing education to keep pace with technological advancements.
  • Update Ethical Standards: Revise ethical guidelines to reflect AI use in legal practice, emphasizing proper supervision, disclosure, and maintaining professional standards in AI-generated work.
  • Foster Collaboration and Knowledge Sharing: Encourage collaboration among law firms to share best practices, knowledge, and resources related to AI adoption, helping firms learn from each other’s experiences.

You can read the report here.

🏥 Texas Settles with AI Healthcare Company Over Deceptive Claims

On 18 September 2024, the Texas Attorney General (AG) announced a groundbreaking settlement with Pieces Technologies, a Dallas-based generative AI company providing healthcare products. The settlement addresses allegations that Pieces made false and misleading claims about its AI products’ accuracy, violating the Texas Deceptive Trade Practices Consumer Protection Act (DTPA). The AI tools, marketed as enhancing clinical care in hospitals, allegedly misled healthcare providers by falsely advertising extremely low error rates.

Key Allegations

  • Misleading Metrics: Pieces claimed its AI products had hallucination rates as low as 0.001%, which the AG found likely inaccurate and deceptive.
  • Data Handling in Healthcare: Pieces integrated its AI products into hospitals, accessing real-time patient data from at least four major Texas hospitals. The AG highlighted that false accuracy claims could mislead healthcare providers, potentially risking patient safety.
  • B2B Focus: Despite operating in a business-to-business context, the deceptive practices put the public interest at risk, according to the AG.

Settlement Provisions

  1. Clear Marketing Disclosures: Pieces must clearly explain any metrics or benchmarks used in advertising, including the methodology behind them.
  2. Prohibition on Misrepresentations: Pieces is barred from making any unsubstantiated claims about its products’ accuracy, functionality, or purpose.
  3. Customer Transparency: The company must provide documentation that discloses any known harmful uses or risks associated with its AI products, detailing the data and training models used, intended purposes, and known limitations.
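
To illustrate the first provision: substantiating an accuracy figure means documenting how it was measured. The hypothetical sketch below (not from the settlement) computes an observed hallucination rate with an approximate 95% upper confidence bound, which also shows why a claim as low as 0.001% implies an unusually large, error-free evaluation sample.

```python
# Hypothetical illustration (not from the settlement): substantiating an error-rate
# claim means reporting how many outputs were reviewed, how many errors were found,
# and an upper confidence bound, not a bare marketing figure.
import math

def wilson_upper_bound(errors: int, reviewed: int, z: float = 1.96) -> float:
    """Approximate 95% upper bound on the true error rate."""
    p = errors / reviewed
    denom = 1 + z * z / reviewed
    center = p + z * z / (2 * reviewed)
    margin = z * math.sqrt(p * (1 - p) / reviewed + z * z / (4 * reviewed * reviewed))
    return (center + margin) / denom

# Even with zero hallucinations found, 10,000 reviewed outputs only support an
# upper bound around 0.04%; supporting a 0.001% claim with this bound takes an
# error-free sample on the order of 400,000 outputs.
print(f"{wilson_upper_bound(0, 10_000):.4%}")
print(f"{wilson_upper_bound(0, 400_000):.4%}")
```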

This settlement marks an important moment for AI oversight, showing that exaggerated accuracy claims in marketing and inadequate disclosures can lead to significant enforcement actions. You can find it here.

📘 Belgium Data Protection Authority Publishes Brochure on AI and Data Protection 

On 19 September 2024, the Belgian Data Protection Authority (APD) published an informative brochure on artificial intelligence from a data protection perspective, and launched a dedicated AI section on its website to further support stakeholders in navigating the intersection of AI and data protection regulations.

The brochure is targeted at a diverse audience, including legal professionals, data protection officers, AI system developers, and technical stakeholders, offering insight into how the GDPR and the AI Act together shape the design, deployment, and management of AI systems.

Key Takeaways for AI Developers and Stakeholders

1. Alignment of GDPR and AI Act

The brochure emphasizes the alignment between the GDPR and the AI Act, particularly how both sets of regulations focus on the lawful, fair, and transparent processing of personal data. AI systems must be developed with these principles in mind to ensure compliance. Developers are reminded that the AI Act introduces new obligations, especially for high-risk systems, and these must be integrated alongside existing GDPR requirements.

2. Lawful, Fair, and Transparent Processing

AI systems that process personal data must be lawful under one of the six legal bases specified in the GDPR (e.g., consent, contractual necessity). In addition, fairness and transparency are central, requiring developers to ensure that data subjects understand how their data is being processed. Transparency involves making AI interactions clear to users, for instance, by notifying them when they are engaging with an AI system.

3. Data Minimization and Purpose Limitation

One of the most critical GDPR principles is data minimization, which mandates that personal data collected by AI systems must be strictly necessary for the stated purpose. Similarly, the purpose limitation principle ensures that data is only used for the specific reasons it was collected. These principles are reinforced in the AI Act, particularly for high-risk AI systems, which must document and justify their intended purposes.
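
As an illustration of what data minimization can look like in a preprocessing step, here is my own sketch, with assumed field names and an assumed purpose (churn prediction); it is not an example from the brochure.

```python
# Sketch of data minimization in a preprocessing step: keep only the fields needed
# for the stated purpose and replace the direct identifier with a keyed hash.
# Field names and the purpose (churn prediction) are assumptions for the example.
import hashlib

ALLOWED_FEATURES = {"tenure_months", "monthly_usage", "support_tickets"}

def minimize(record: dict, secret_salt: str) -> dict:
    """Drop everything not needed for the purpose and pseudonymize the user id."""
    pseudo_id = hashlib.sha256((secret_salt + record["user_id"]).encode()).hexdigest()[:16]
    kept = {k: v for k, v in record.items() if k in ALLOWED_FEATURES}
    return {"pseudo_id": pseudo_id, **kept}

raw = {"user_id": "u123", "full_name": "A. Person", "email": "a@example.com",
       "tenure_months": 14, "monthly_usage": 310, "support_tickets": 2}
print(minimize(raw, secret_salt="rotate-and-store-separately"))
# name and e-mail never enter the training set; the raw id is replaced by a keyed hash
```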

4. Security of Processing

The APD highlights the importance of robust security measures to protect personal data handled by AI systems. AI introduces specific risks—such as manipulation of training data, biases, and potential vulnerabilities—that require developers to implement strong technical and organizational measures (TOMs), including encryption, access controls, and regular monitoring.

5. Human Oversight and Accountability

Both the GDPR and the AI Act emphasize the importance of accountability. Developers must document their compliance processes and ensure human involvement in critical decisions, especially in high-risk AI systems like medical diagnostics or financial decision-making. Human oversight is crucial to prevent unintended consequences from fully automated systems.

6. Real-life Examples and Case Studies

The brochure includes several examples, such as AI systems in spam filtering, streaming recommendations, virtual assistants, and medical imaging analysis. These help to illustrate the application of GDPR and AI Act principles in real-world AI contexts, guiding developers on how to implement data minimization, transparency, and bias mitigation.

Where the brochure falls short

While the brochure gets many things right, and its real-life examples help AI developers translate abstract regulatory requirements into real-world scenarios, it also has limitations. Here are some that stood out to me:

1. Lack of Technical Depth

While the brochure provides a strong regulatory framework, it lacks the technical depth AI developers need to fully operationalize these principles. For instance, while it mentions the need for data minimization and fair processing, it does not go into detail about how developers can technically achieve these within AI workflows. Developers need more guidance on things like:

  • How to implement privacy by design at the system architecture level: how to think in a privacy-friendly way, what questions to ask before and during the most common development phases, examples of good and bad outcomes, etc.
  • Best practices for data minimization in machine learning pipelines (e.g., how to preprocess data to ensure compliance without compromising model accuracy).
  • Specific technical approaches to mitigate bias in machine learning models (e.g., algorithms or techniques for bias detection, handling imbalanced datasets).
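
As a small example of the kind of technical guidance that is missing, here is one basic bias check (a “four-fifths”-style disparity ratio across groups on a validation set); the data and group labels are invented.

```python
# One basic bias check: compare positive-outcome rates across groups on a held-out
# set and flag large disparities (a "four-fifths"-style ratio). Data is invented.
from collections import defaultdict

predictions = [  # (group, model_decision) pairs from a validation set
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio = {ratio:.2f}")  # a ratio well below 0.8 warrants a closer look
```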

2. No Practical Tools

The brochure would be significantly more useful if it provided practical tools such as templates or checklists, or referred to work by other authorities that provide them. For example, templates for data protection impact assessments (DPIAs) would simplify the compliance process, and the brochure could have at least linked to the Danish DPA template.

3. Insufficient Focus on the Development Lifecycle

The examples and advice provided by the APD focus heavily on high-level principles, but they don’t break down how these principles should be integrated at various stages of the AI development lifecycle. For instance, developers could benefit from step-by-step guidance on:

  • How to embed transparency and fairness in the early stages of model development (e.g., during data collection and preprocessing).
  • How to document compliance throughout the lifecycle of an AI system, from design to deployment and monitoring.
  • Concrete examples of how to implement human oversight in automated decision-making processes, especially for high-risk AI systems that affect individuals’ rights (e.g., setting up alerts or review mechanisms) – see the sketch below for one way to do this.

This kind of guidance would help developers integrate the data protection principles into their daily workflows rather than viewing compliance as an afterthought.
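
As a minimal sketch of the review mechanism mentioned in the last bullet (an assumed design, not something the brochure prescribes): decisions below a confidence threshold are queued for a human instead of being applied automatically.

```python
# Assumed design, not from the brochure: automated decisions below a confidence
# threshold are queued for a human reviewer instead of being applied automatically.
REVIEW_QUEUE: list[dict] = []

def decide(application_id: str, score: float, threshold: float = 0.9) -> str:
    """Auto-apply only high-confidence outcomes; route the rest to a person."""
    if score >= threshold:
        return "approved_automatically"
    REVIEW_QUEUE.append({"id": application_id, "score": score})
    return "pending_human_review"

print(decide("app-001", 0.95))   # approved_automatically
print(decide("app-002", 0.62))   # pending_human_review
print(REVIEW_QUEUE)              # [{'id': 'app-002', 'score': 0.62}]
```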

4. Limited Guidance on Transparency

While transparency is a core GDPR principle emphasized in the brochure, there is limited practical advice on how developers can make AI decision-making processes understandable to non-technical users. Developers working on complex AI systems, especially those using deep learning or neural networks, need concrete strategies for how to:

  • Explain algorithmic decisions (e.g., how an insurance premium was calculated) to users.
  • Build user-friendly interfaces that allow individuals to access information about how their data was used.
  • Address data subject rights (such as access, rectification, and erasure) in the context of AI, where decisions may be based on vast and interconnected datasets.
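
For instance, with a simple additive pricing model, the premium explanation mentioned in the first bullet can be itemized factor by factor. The sketch below uses invented factors and weights; more complex models would need dedicated feature-attribution techniques instead.

```python
# Invented factors and weights, for illustration only: an additive pricing model
# lets each factor's contribution be itemized for the customer. Complex models
# would need dedicated feature-attribution techniques instead.
BASE_PREMIUM = 300.0
SURCHARGES = {"driver_under_25": 180.0, "urban_postcode": 60.0, "prior_claims": 90.0}

def explain_premium(profile: dict) -> float:
    total = BASE_PREMIUM
    print(f"Base premium: EUR {BASE_PREMIUM:.2f}")
    for factor, surcharge in SURCHARGES.items():
        if profile.get(factor):
            total += surcharge
            print(f"+ EUR {surcharge:.2f} because of '{factor}'")
    print(f"Total: EUR {total:.2f}")
    return total

explain_premium({"driver_under_25": True, "urban_postcode": False, "prior_claims": True})
```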

You can find the dedicated web section here (French or Dutch), and the brochure here in English.

📊 ENISA Publishes Threat Landscape 2024 Report

On 19 September 2024, ENISA released its Threat Landscape 2024 report, covering cybersecurity threats observed from July 2023 to June 2024. Based on analysis of thousands of publicly reported incidents, the report identifies seven primary threats and offers insights into evolving tactics and vulnerabilities.

Key Threats Identified

  • Threats Against Availability: Denial-of-Service (DoS) attacks top the list, particularly targeting public administration (33%).
  • Ransomware: Remains widespread, impacting sectors like business services (18%) and manufacturing (17%). The report highlights the ongoing use of extortion techniques.
  • Threats Against Data: Includes data breaches and leaks, often driven by motives of financial gain or espionage.
  • Malware and Social Engineering: Tactics such as phishing and human manipulation continue to pose significant risks.
  • Information Manipulation and Interference: Reflects geopolitical tensions, with misinformation campaigns on the rise.
  • Supply Chain Attacks: Highlighted by incidents involving backdoors in open-source projects, stressing the need for secure software maintenance.
  • State-Nexus and Cybercrime Actors: Increasingly employ advanced stealth techniques like Living Off Trusted Sites (LOTS) to avoid detection.

Key Trends and Tactics

  • Living Off Trusted Sites (LOTS): Threat actors abuse legitimate platforms such as Microsoft cloud services and Slack to disguise command-and-control activity.
  • Exploitation of Vulnerabilities: Focus on edge devices and zero-day exploits by state actors, particularly from China and Russia.
  • Emerging Techniques: Adoption of AI tools by cybercriminals for cyberattacks and propaganda efforts signals future trends in cyber threats.

You can find it here.

 

📢 EU Parliament Releases Critical Report Urging a Shift from the AI Liability Directive to a Software Liability Regulation

On 19 September 2024, the European Parliament published a complementary impact assessment for the proposed AI Liability Directive (AILD). The European Commission initially proposed this directive in 2022 to address liability rules for AI systems, with an accompanying impact assessment. The Parliament’s complementary study was requested by the Committee on Legal Affairs (JURI) and authored by Professor Philipp Hacker. It critiques the Commission’s impact assessment, pointing out several key shortcomings and providing recommendations for improvement.

Key Critiques of the Initial Impact Assessment:

  • Incomplete Exploration of Regulatory Options: The original assessment inadequately explored alternative regulatory policies, focusing narrowly on fault-based liability and neglecting mixed or strict liability frameworks.
  • Abridged Cost-Benefit Analysis: The assessment’s evaluation of the strict liability regime was insufficient, particularly in its analysis of potential costs and benefits.

Major Recommendations:

  • Expansion of Scope: The complementary study recommends that AILD should cover not just AI systems but also general-purpose and high-impact AI technologies, as well as traditional software, to create a cohesive framework.
  • Mixed Liability Framework: A proposed mixed liability model balances fault-based and strict liability, which could provide better coverage for diverse AI systems.
  • Transition to Software Liability Regulation: The study suggests shifting from an AI-focused directive to a broader software liability regulation. This move aims to prevent fragmentation within the EU market and enhance legal clarity across jurisdictions.
  • Inclusion of Generative AI and High-Impact Systems: It emphasizes the need to include generative AI, such as ChatGPT, under the liability framework to address their unique risks and ensure evidence disclosure and liability presumptions.

The study’s recommendations could influence the EU’s approach to AI regulation by broadening the scope of liability and promoting consistency across member states. It underscores the importance of aligning the directive with existing laws, such as the AI Act and the Product Liability Directive, to ensure comprehensive coverage of AI-related risks.

The study encourages the European Parliament to consider these recommendations during the legislative process. It calls for a shift towards a regulation that would provide harmonized rules across the EU, enhancing both legal certainty and consumer protection.

You can find it here.

🤖 NIST Launches New Program to Address Cybersecurity and Privacy Risks in AI

NIST has introduced a new program to address cybersecurity and privacy risks posed by the growing use of artificial intelligence. NIST’s program aims to create standards and guidelines to ensure AI systems are secure, transparent, and trustworthy, while also enabling the use of AI to improve cybersecurity tools. A key focus will be adapting existing frameworks, such as the Cybersecurity Framework and AI Risk Management Framework, to the unique challenges AI introduces.

You can read the press release here.

🛑 California Governor Vetoes Browser Opt-Out Bill 

On 20 September 2024, California Governor Gavin Newsom vetoed AB 3048, a bill designed to require web browser developers to implement an opt-out preference signal (OOPS) that would enhance consumer privacy under the California Consumer Privacy Act (CCPA). This bill aimed to standardize privacy options across browsers, but Newsom expressed concerns about mandating technical requirements, especially on mobile operating systems.

Key Details

  • AB 3048’s Purpose: The bill sought to ensure that all browsers on desktop and mobile platforms would automatically send an OOPS, enabling consumers to signal their choice to opt out of the sale or sharing of their personal information, as defined by the CCPA.
  • Current Landscape: The CCPA requires websites to respect OOPS signals when presented. However, few browsers natively support this feature, and those that do are mostly desktop-focused; mobile operating systems generally do not offer this functionality. (A sketch of what honoring such a signal can look like server-side follows this list.)
  • Governor’s Concerns: Newsom argued that forcing operating system developers to incorporate such signals might disrupt the usability of mobile devices. He suggested that technical solutions should emerge from developers rather than through regulatory mandates.
  • Implications for Consumers: Without a legal requirement, consumers will rely on browsers or extensions that support the opt-out signal. Websites are still obligated to provide alternative opt-out mechanisms as required by the CCPA.
  • Other Legislative Actions: On the same day, Newsom signed eight AI-related bills addressing digital transparency, including measures requiring social media platforms to flag AI-generated content and mandate disclosures for AI-generated political advertisements.
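
For context on the server side of this: Global Privacy Control (GPC) is one existing opt-out preference signal, transmitted as the `Sec-GPC: 1` request header. The sketch below is my own illustration of honoring it; treating the signal as a CCPA opt-out is the assumption shown, not wording from AB 3048.

```python
# Illustration of honoring an opt-out preference signal on the server side.
# Global Privacy Control (GPC) is one such signal, transmitted as the request
# header `Sec-GPC: 1`; treating it as a CCPA opt-out is the assumption shown here.
def honor_opt_out(headers: dict, user_profile: dict) -> dict:
    if headers.get("Sec-GPC") == "1":
        user_profile["ccpa_opted_out"] = True           # record the choice
        user_profile["share_with_ad_partners"] = False  # stop "selling"/"sharing"
    return user_profile

profile = {"share_with_ad_partners": True}
print(honor_opt_out({"Sec-GPC": "1"}, profile))
# {'share_with_ad_partners': False, 'ccpa_opted_out': True}
```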

 

👇 That’s it for this edition. Thanks for reading, and subscribe to get the full text in a single email in your inbox! 👇

♻️ Share this if you found it useful.

💥 Follow me on Linkedin for updates and discussions on privacy education.

🎓 Take my course to advance your career in privacy – learn to navigate global privacy programs and build a scalable, effective privacy program across jurisdictions.

📍 Subscribe to my newsletter for weekly updates and insights in your mailbox.
