The Privacy Explorer | Week 20

This edition at a glance:


📊 New Zealand Office of the Privacy Commissioner (OPC) Publishes Results of Survey on Individuals' Privacy Concerns

On 13 May 2024 the Office of the Privacy Commissioner (OPC) in New Zealand released the results of its biennial privacy survey to coincide with the start of Privacy Week 2024. The survey aimed to assess New Zealanders’ perspectives on privacy and their growing concerns in an increasingly digital world.

The survey, which included nearly 1,200 participants, highlighted a marked increase in privacy concerns among New Zealanders, with 55% indicating heightened worry over the past two years. This represents a 14% rise from the previous survey, reflecting the impact of more frequent and larger privacy breaches as well as the pervasive reach of technology into daily life.

Key findings from the survey include:

  • Desire for Data Control: 80% of respondents expressed a need for greater control and choice over how their personal information is collected and used.
  • Privacy as a Major Concern: 63% stated that protecting their personal information is a significant concern in their lives.
  • Transparency in Automated Decisions: 83% wanted to be informed when their personal data is used in automated decision-making processes.
  • Right to Deletion: 82% believed they should have the right to request businesses to delete their personal information.

In terms of specific privacy issues, the highest levels of concern were reported as follows:

  • Data Sharing Without Consent: 67% were worried about businesses or government organizations sharing their personal information without notification.
  • AI in Decision-Making: 66% expressed concern about the use of artificial intelligence by the public and private sectors to make decisions using their personal data.
  • Cybersecurity Risks: 65% were apprehensive about their personal information being compromised in cyberattacks.
  • Facial Recognition Technology: 64% were troubled by the use of facial recognition technology without their consent.

The survey also revealed behavioral changes driven by privacy concerns. For instance, 33% of respondents avoided social media, and 28% avoided online browsing, shopping, and dating due to privacy worries. Moreover, 70% indicated they would be likely to switch service providers if they encountered poor privacy and security practices.

Māori respondents showed even higher levels of concern compared to non-Māori, with 32% avoiding contact with government departments due to privacy issues, contrasted with 14% among non-Māori.

Privacy Commissioner Michael Webster noted that these results underscore the growing public awareness and the proactive stance of New Zealanders towards protecting their privacy. Releasing the results at the start of Privacy Week is intended to emphasize the importance of privacy and to foster further dialogue on the subject. Privacy Week 2024 features a series of free online talks and discussions on various privacy-related topics, continuing through 17 May.

This survey reflects a broader trend of increasing public scrutiny over privacy practices and a demand for greater transparency and control over personal data in New Zealand.

The press release is available here.

🤖 ICO Launches Consultation on Data Subject Rights in Generative AI

On 13 May 2024, the Information Commissioner’s Office (ICO) launched the fourth chapter of its ongoing consultation series on generative artificial intelligence (AI). This latest chapter focuses on data subject rights, particularly concerning the training and fine-tuning processes of generative AI models. The consultation aims to provide guidance on how these rights, enshrined in data protection law, apply to the various stages of AI model development and deployment.

The consultation highlights that individuals have several rights under data protection law, which apply to personal data in training data, fine-tuning data, outputs from generative AI models, and user queries. The ICO emphasizes that organizations must implement processes to facilitate the exercise of these rights throughout the AI lifecycle.

The consultation outlines several obligations for organizations developing and deploying generative AI:

  • Inform Individuals: Organizations must inform individuals if their data is being processed and provide clear, accessible information about data usage and their rights.
  • Justify Exemptions: Organizations must justify any exemptions used and safeguard individuals’ interests and freedoms.
  • Privacy Technologies: The use of privacy-enhancing technologies and pseudonymization techniques is recommended to safeguard data.

The ICO seeks views on the effectiveness of measures to prevent unauthorized data retention and usage and how organizations can balance legal obligations with innovation in generative AI.

The ICO invites stakeholders, including developers, users of generative AI, legal advisors, consultants, civil society groups, and other public bodies, to provide feedback on the consultation. Comments can be submitted until 5 pm on 10 June 2024.

Find the press release and additional information here.

🍪 AEPD Issues Updated Cookie Usage Guide in Line with EDPB Opinion on “Consent or Pay”

The Spanish Data Protection Authority (AEPD) released an updated guide on the use of cookies on 14 May 2024. This update aligns with the European Data Protection Board’s (EDPB) Opinion 08/2024, issued in April 2024, which focuses on valid consent within ‘consent or pay’ models implemented by major online platforms. The guide also notes that the EDPB is expected to issue a General Application Guide on the validity of consent in such models in early 2025.

Concerns on Valid Consent: The guide highlights concerns that large online platforms may struggle to obtain valid consent if they limit users to a binary choice between consenting to personal data processing for behavioral advertising or paying a fee to opt out. The AEPD underscores the importance of not defaulting to a payment option for services involving behavioral advertising.

Equivalent Alternatives: Platforms should offer an ‘equivalent alternative’ to the version of the service that involves personal data processing for behavioral advertising. If a fee is charged for access to this alternative, another free option should be provided. This free alternative could include general or contextual advertising, which processes fewer or no personal data and aligns with the principle of data minimization.

Data Minimization Principle: The guide stresses that only necessary data for advertising activities should be processed. The free alternative is crucial in assessing whether consent for behavioral advertising is valid and if there is any detriment to the concerned party.

Practical Recommendations: The AEPD provides practical recommendations on how platforms can comply with the updated guidelines:

  • Transparency: Platforms must provide clear and complete information about the use of cookies, including their purposes and the types of data collected.
  • Consent Mechanisms: Consent should be obtained through explicit affirmative actions, and users should have the option to refuse cookies without being denied access to services.
  • Layered Information Approach: Information should be provided in layers, with essential information easily accessible and detailed information available upon request.

You can read the updated guide in Spanish here, and you can grab an automated translation into English here.

Updating the guidance on the basis of an EDPB opinion alone, before the general guidelines are issued, is unusual, but consistent with the AEPD’s own practice: earlier in March it found that a ‘pay or okay’ cookie banner violated the Spanish e-Privacy Law and imposed a €5,000 fine – read more on GDPRhub.

🚗 FTC Highlights Privacy Risks and Regulatory Actions on Connected Cars, and Texas Investigates Several Manufacturers

On 14 May 2024, the Federal Trade Commission (FTC) released a blog post about privacy risks linked to the data collection practices of connected cars. As cars become more digitally integrated, they gather vast amounts of sensitive data, including biometric and location information. This can pose serious privacy and security threats to consumers.

Key Points:

  1. Geolocation Data Protection: The FTC treats geolocation data as highly sensitive, subject to strict protections under the FTC Act. Cases against companies like X-Mode and InMarket show that using such data to track visits to sensitive locations can be unlawful. These cases resulted in bans on selling such data.
  2. Unauthorized Disclosure: Companies must use sensitive information only for its intended purpose. The FTC acted against BetterHelp and Cerebral for disclosing consumer information for advertising. These actions led to significant fines and restrictions on data use.
  3. Automated Decision-Making: The FTC scrutinizes the use of sensitive data in automated systems. In the Rite Aid case, the company’s use of facial recognition technology without proper safeguards led to false positives and inappropriate actions. Rite Aid agreed to a five-year ban on such technology.

Overall, the FTC’s blog underscores the significant risks and liabilities associated with the misuse of sensitive data by connected cars. Companies are advised to limit data collection and prioritize robust privacy protections to avoid potential harms and legal repercussions. Read the full blog post here.

Note that the Texas Attorney General is probing the data privacy practices of connected-car companies, investigating Kia, General Motors, Subaru, and Mitsubishi for potentially violating state deceptive trade practices laws. I learned this same week that these companies received investigative demands back in April, marking the first known state-level inquiry into connected-car data practices.

🌐 CNIL Publishes Guidance on Retention Obligations for Providing Public Internet Access

On 14 May 2024, the French data protection authority (CNIL) published guidance detailing the obligations for organizations providing public internet access, including municipalities, cafes, hotels, and other public spaces. The key focus is on the retention of “traffic data” and ensuring compliance with data protection principles.

Who is Affected? Organizations offering internet access to the public, such as hotels, restaurants, cafes, transport services, and more, are subject to these legal obligations. This encompasses a wide range of entities providing Wi-Fi or computer access to their customers or users.

What is Traffic Data? Traffic data refers to the technical information generated during internet use. This includes:

  • IP address (identifying the device used);
  • connection details (date, time, and duration of each connection);
  • recipient data (information identifying the recipient of communications, such as a telephone number).

Retention Requirements

Identity Data

  • Retention Period: 5 years.
  • Details: User’s civil identity (name, date and place of birth, postal address, email address, telephone number).

Account Information

  • Retention Period: 1 year.
  • Details: Username, pseudonym, data for verifying or modifying the password.

Technical Data

  • Retention Period: 1 year.
  • Details: IP address, associated port, identifier number, telephone number.

Security Data

  • Retention Period: 3 months.
  • Details: Data to identify the communication origin, technical characteristics, date, time, duration of communication, recipient identification, and data on additional services used.
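Taken together, the schedule above amounts to a simple lookup table. As a rough sketch of how an operator might track these deadlines (the category keys and helper function are my own illustration, not CNIL terminology), it could be encoded as:

```python
from datetime import timedelta

# CNIL retention periods for traffic data held by providers of public
# internet access (category labels are illustrative, not official terms)
RETENTION_PERIODS = {
    "identity": timedelta(days=5 * 365),  # civil identity data: 5 years
    "account": timedelta(days=365),       # username, password data: 1 year
    "technical": timedelta(days=365),     # IP address, port, phone number: 1 year
    "security": timedelta(days=90),       # communication metadata: 3 months
}

def retention_expired(category: str, age: timedelta) -> bool:
    """Return True if data of the given category has exceeded its retention period."""
    return age > RETENTION_PERIODS[category]

# Security data older than 3 months should no longer be retained
print(retention_expired("security", timedelta(days=120)))  # True
```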

Employers providing internet access to employees are not subject to these data retention obligations. However, they can monitor employee internet activity if specific objectives are defined, affected individuals are informed, and the monitoring system is recorded in the register of processing activities. If employers offer Wi-Fi access to visitors, they must comply with the public internet access obligations outlined by CNIL.

Read more here.

♿ CNIL Issues Guidance on Collecting Data on Paralympic Athletes' Disabilities

On 14 May 2024, the French Data Protection Authority (CNIL) released guidance on handling personal data related to the disabilities of paralympic athletes. This guidance outlines the requirements under the General Data Protection Regulation (GDPR) for processing such sensitive data.

Key Points

According to Article 9(1) of the GDPR, health data, including disability information, is considered special category and requires strict protection. CNIL emphasizes the principle of data minimization, meaning only necessary data should be collected. Examples of necessary uses include issuing / renewing sports licenses and adapting sports practices for athletes with disabilities. 

Article 9(2) of the GDPR provides legal bases for processing sensitive data, with CNIL highlighting two primary bases for collecting disability data:

  1. Explicit Consent: Athletes or their legal representatives must give clear, informed consent.
  2. Public Interest: Processing may be justified by public interest, particularly if supported by legal frameworks. Such justifications require thorough documentation.

A data protection impact assessment (DPIA) is not always mandatory for collecting disability data unless processing is on a large scale. However, CNIL encourages DPIAs as a best practice to anticipate and mitigate risks.

Read more here.

🏥 DSK Calls for Better Protection of Patient Data in Case of Hospital Closures

On 15 May 2024, the German Data Protection Conference (DSK) issued a resolution highlighting the urgent need for better protection of patient data during hospital closures. The DSK expressed concern over the increasing number of hospital closures and insolvencies, which jeopardize the secure storage and management of sensitive patient treatment records. The resolution calls on hospital administrators, political figures, and legislators to proactively address these data protection challenges.

Challenges and Solutions Identified by DSK

The DSK identified several critical challenges for hospital operators and insolvency administrators regarding patient data handling during closures. Key issues include insufficient funds to continue secure storage of patient records once a hospital declares bankruptcy and a lack of clear legal guidelines on who is responsible for data storage and deletion post-closure. Unlike certain outpatient practice regulations, no comprehensive federal or state laws outline the handling of patient data during hospital closures.

Patient records, classified as sensitive health information under GDPR Article 9, require strict protection. Currently, the regulations are inadequate to ensure this protection during hospital insolvencies or unplanned closures. Patients often lose access to their records once insolvency procedures end or fail due to lack of funds, leading to uncertainties about the secure storage and deletion of hospital records.

To mitigate these issues, the DSK proposed several solutions:

  1. Mandatory Data Protection Plans: Hospitals should develop and submit plans for secure storage of patient records in the event of insolvency or unplanned closure. These plans should be approved by relevant supervisory authorities, similar to existing regulations in North Rhine-Westphalia and Hesse.
  2. Financial Support for Data Retention: States should explore funding solutions to ensure the continued secure storage and retention of patient records during transition periods. For instance, North Rhine-Westphalia’s hospital law includes provisions for establishing patient record security funds.
  3. Collaborative Efforts: In the absence of adequate legal frameworks, hospital management, owners, and interest groups should collaboratively develop solutions to ensure secure short-term storage of patient records from closed hospitals. Involvement of data protection authority representatives could provide additional guidance.
  4. Health Ministers Conference Involvement: The DSK urged the Conference of Health Ministers to prioritize this issue at their next meeting, aiming to develop comprehensive regulations for emergency management of patient data from closed hospitals. This would mirror existing responsibilities allocated to professional chambers under certain state laws for outpatient practices.

The DSK strongly appealed to decision-makers to close existing regulatory gaps, ensuring legal clarity and security for affected patients. This call to action aims to protect patient rights and maintain the integrity of their sensitive health information amid hospital closures.

Read the full document here.

🧬 DSK Issues Position Paper on Secondary Use of Genetic Data for Research

On 15 May 2024, the German Data Protection Conference (DSK) published a position paper outlining the requirements for the secondary use of genetic data for research purposes. The DSK’s paper addresses the inherent risks associated with genetic data processing and emphasizes the need for explicit consent from data subjects due to the highly sensitive nature of genetic information. This data can reveal health predispositions and risks, not only impacting the individual but also their biological relatives, thus necessitating stringent data protection measures.

Risks and Sensitivity of Genetic Data

Genetic data is uniquely sensitive as it provides predictive insights into an individual’s health and that of their biological relatives. This data cannot be altered and carries lifelong implications, presenting significant risks of discrimination and stigma, particularly from insurers and employers. The DSK underlines that genetic data processing affects the core area of personal privacy, requiring exceptional protection.

Necessity for Explicit Consent

The DSK stresses that the processing of genetic data must be based on the explicit consent of the affected individuals. This requirement stems from the EU Charter of Fundamental Rights and the German Genetic Diagnostics Act, which mandates consent for genetic sample collection and analysis. Explicit consent is crucial for upholding the right to informational self-determination.

Proposed Legislative Framework

The DSK advocates for specific legislation governing the secondary use of genetic data for research, distinct from general data protection rules. This legislation should:

  1. Differentiate Processing Purposes: Separate rules for research and quality assurance, with broad consent applicable only to scientific research as per GDPR Recital 33.
  2. Technical and Organizational Guarantees: Implement measures like data protection by design and by default to ensure that data processing ceases upon consent withdrawal and that individual rights are upheld.
  3. Informed Decision-Making: Provide guidance and support to individuals regarding research results and incidental findings, respecting their right not to know.
  4. Broad Consent Regulation: Establish strict conditions for broad consent, including detailed information and advisory requirements.
  5. Retention Periods: Define legally mandated data retention periods.
  6. Prohibitions and Sanctions: Enforce specific prohibitions on data disclosure, especially to employers or insurers, with penalties for violations and criminal sanctions for misuse.
  7. Data Protection Impact Assessment (DPIA): Mandate DPIAs for genetic data processing activities.
  8. Special Protections: Ensure additional safeguards for vulnerable groups such as unborn children, minors, and individuals unable to consent.

The DSK insists on rigorous implementation of these safeguards, highlighting the necessity for transparency, participation, and data security. They also call for an ethical review of research projects and secure data processing environments.

Read the full document here.

🛡️ UK Government Calls for Views on New Cybersecurity Codes for AI and Software

On 15 May 2024, the UK government initiated a public consultation on two newly introduced codes of practice, part of its broader strategy to strengthen the UK’s cybersecurity framework and support economic growth:

  • AI Cybersecurity Code of Practice: This code provides guidelines for integrating security measures into AI systems throughout their lifecycle. It includes technical recommendations and outlines the responsibilities of developers, system operators, data controllers, and end users.
  • Software Security Code of Practice: This code comprises 21 provisions across four principles, guiding organizations on secure development practices, vulnerability management, and effective security communication. Provisions are categorized as mandatory (‘shall’) or recommended (‘should’) to balance required actions with best practices.

Stakeholders are invited to submit their views on the proposed codes of practice by 10 July 2024. Feedback can be provided through the online surveys or via email to the addresses provided for each code.

Read the press release here.

🤖 U.S. Senate AI Working Group Releases AI Policy Roadmap

On 15 May 2024, the U.S. Bipartisan Senate AI Working Group released a comprehensive roadmap for AI policy in the United States Senate. This document, titled ‘Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate,’ outlines key policy priorities to guide the creation of bipartisan AI legislation.

Increasing Funding for AI Innovation

The roadmap advocates for increased funding to propel U.S. leadership in AI, maintain global competitiveness, and support cutting-edge AI research and development.

Enforcing Existing Legislation

The roadmap stresses the need to enforce current consumer protection and civil rights laws, particularly regarding opaque ‘black box’ AI systems. It calls for developing legislative language to close transparency gaps, ensuring information access essential for law enforcement, and imposing case-by-case requirements on AI transparency and explainability.

Addressing CSAM and Social Scoring

The roadmap urges legislation to tackle online child sexual abuse material (CSAM) and ban AI use for social scoring. It also calls for consideration of restrictions or bans on other potentially harmful AI applications.

Human Oversight 

The Working Group recommends including human oversight at critical stages for high-impact AI tasks. For datasets containing sensitive personal data or protected by copyright, policymakers should evaluate transparency needs. Additionally, in employment contexts, the degree of transparency with which federal agencies inform employees about AI use should be reviewed.

Accountability and Standards

Policymakers are encouraged to consider new or clarified standards to hold AI developers and deployers accountable for any harm their products may cause. The roadmap also emphasizes enforcing these standards effectively.

Workforce Impact

The roadmap underscores the need to consider AI’s impact on the workforce, including potential job displacement. It highlights the importance of upskilling and retraining workers to adapt to AI advancements.

National Security

Leading globally in adopting emerging technologies is vital for national security. The roadmap addresses national security threats, risks, and opportunities presented by AI.

Deepfakes and Content Creation

The roadmap addresses the challenges posed by deepfakes, particularly in election content and nonconsensual intimate images. It also examines AI’s impact on professional content creators and the journalism industry.

Mitigating Long-term Risks

The roadmap highlights the importance of addressing potential long-term risk scenarios associated with AI.

Comprehensive Federal Privacy Law

The Working Group supports a robust federal privacy law to protect personal information, addressing data minimization, data security, consumer rights, consent, disclosure, and data brokers.

You can read the press release here, a one-pager for the roadmap here, and the full Roadmap here.

🕒 New Zealand's OPC Issues Guide on 72-Hour Breach Notification

On 16 May 2024, the Office of the Privacy Commissioner of New Zealand (OPC) published detailed guidance regarding the 72-hour breach notification timeframe. This guidance aims to clarify the process and expectations for notifying the OPC of serious privacy breaches promptly.

Definition of ‘Becoming Aware’

The OPC defines ‘becoming aware’ of a breach as having some degree of knowledge or an assessment regarding the risk of harm from the breach. For straightforward breaches, this is immediate, while complex cases might require further inquiry. The initial assessment should focus on factors such as the sensitivity of the information, security weaknesses, and probable harm to individuals.

Prompt Notification

The guidance emphasizes that companies should notify the OPC as soon as they reasonably can after becoming aware of a notifiable breach. The 72-hour timeframe serves as a guideline to encourage prompt action. Even if certain details are still unknown, companies should initiate notification based on initial risk assessments.

Companies are allowed to provide notifications incrementally, meaning they can update the OPC and affected individuals as new information becomes available, provided this is done as soon as reasonably practicable. The OPC advises companies to notify them with available information initially and follow up with further details as needed.

The OPC recommends its ‘NotifyUs’ tool for assessing the seriousness of a breach and determining whether notification is required. The tool serves as a guide, and in cases of uncertainty the OPC encourages companies to notify it.

Internal Processes

The guidance highlights the importance of internal processes that support the quick disclosure of privacy incidents. Information known by employees or agents is considered as being known by the company, necessitating efficient internal communication to ensure prompt notification.

Read the press release here.

⚖️ ECtHR Decision in Mirzoyan v. the Czech Republic

On 16 May 2024, the European Court of Human Rights (ECtHR) delivered its judgment in the case of Mirzoyan v. the Czech Republic, addressing whether the Czech authorities violated Article 8 of the European Convention on Human Rights (right to respect for family life) by withholding classified information justifying the refusal to extend a residence permit on national security grounds.

The applicant, a Russian national residing in the Czech Republic since 2006, had his applications to extend his long-term residence permit for business purposes and for family reunification refused. The refusals were based on classified documents indicating he posed a threat to national security. The applicant argued that the refusals adversely affected his right to respect for his family life under Article 8, as he lived in the Czech Republic with his wife and four children.

Key aspects of the case:

  • The classified information forming the basis of the decisions was not disclosed to the applicant, although parts were accessible to his lawyer.
  • The ECtHR examined whether these limitations were counterbalanced by sufficient procedural safeguards.
  • The Czech Supreme Administrative Court conducted a review of the administrative decisions, having full access to the classified information. It found the information credible and sufficient, despite its non-disclosure to the applicant and his lawyer.
  • The court considered the applicant’s family situation, including his children’s best interests, but found no exceptional circumstances that would make the refusal disproportionate. The applicant did not provide substantial evidence or arguments about his family’s specific circumstances or the impact on his children.

ECtHR’s Conclusion:

  • The court determined that the judicial review provided by the Supreme Administrative Court offered adequate procedural guarantees, mitigating the limitations on the applicant’s rights.
  • It found that the national authorities had given due consideration to his family ties and carried out a proper balancing of interests, within their margin of appreciation.
  • Consequently, the ECtHR ruled that there was no violation of Article 8 of the Convention.

👶 Virginia Enacts New Data Protection Laws for Children

On 17 May 2024, the Governor of Virginia signed Senate Bill 361 and House Bill 707, both of which will take effect on 1 January 2025, significantly enhancing protections for children’s personal data.

Senate Bill 361 prohibits operators of websites, online services, or mobile applications from collecting or using personal data of users under 18 without obtaining consent. It also bans the sale or disclosure of such data. The act defines ‘operator’ comprehensively, including entities that collect, maintain, or allow others to collect personal data directly from users. 

Key Responsibilities:

  • Operators must not process the personal data of covered users without consent if they are under 13, unless permitted by 15 U.S.C. § 6502 regulations. For users aged 13 and older, processing is only allowed if strictly necessary or with informed consent.
  • Operators must respect device communications indicating a user is a minor and adhere to settings regarding consent for data processing.
  • Within 14 days of identifying a user as a minor, operators must delete all personal data unless processing is legally allowed, and inform any third parties involved.

House Bill 707 amends the Virginia Consumer Data Protection Act (CDPA), adding specific provisions for children’s data. It prohibits data controllers from processing a known child’s personal data for targeted advertising, sale, or profiling that has legal effects. Consent from a parent or guardian is required for processing a child’s personal data. Additionally, the collection of precise geolocation data from children is restricted unless exceptions apply.

Data Protection Assessments (DPAs) are mandatory for data controllers offering online services to children. These assessments must detail the purpose of the service, categories of children’s data processed, and the purposes for data processing.

For more details, the full text of Senate Bill 361 can be found here, and House Bill 707 here.

⚠️ EU Commission Requests Information from Microsoft on Bing's Generative AI Risks

On 17 May 2024, the European Commission announced a legally binding request for information to Microsoft concerning risks posed by generative AI features in Bing, specifically “Copilot in Bing” and “Image Creator by Designer.” This action follows a lack of response from Microsoft to an earlier request made on 14 March.

The Commission’s request arises from suspicions that Bing’s generative AI might breach the Digital Services Act (DSA). Concerns include AI-induced hallucinations, viral deepfake dissemination, and automated service manipulation that could mislead voters. These risks are particularly critical as the European Parliament elections approach in June.

Under the DSA, designated very large online platforms and search engines, including Bing, must conduct thorough risk assessments and implement risk mitigation strategies (Articles 34 and 35). The Commission’s guidelines on electoral integrity underscore generative AI as a significant risk.

Microsoft must provide the requested information by 27 May 2024. Failure to comply could result in fines up to 1% of the company’s total annual income or worldwide turnover, and periodic penalties up to 5% of average daily income or turnover. Additionally, fines can be imposed for incorrect or misleading information.
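For a sense of scale, the penalty ceilings can be worked through with a minimal sketch; the turnover figure below is made up for the example, and only the percentages come from the DSA caps described above:

```python
# Hypothetical worldwide annual turnover, in EUR (illustrative figure only)
annual_turnover = 200_000_000_000

# DSA caps: fines of up to 1% of total annual income or worldwide turnover,
# and periodic penalties of up to 5% of average daily income or turnover
max_fine = 0.01 * annual_turnover
max_daily_penalty = 0.05 * (annual_turnover / 365)

print(f"Maximum fine: EUR {max_fine:,.0f}")                    # EUR 2,000,000,000
print(f"Maximum daily periodic penalty: EUR {max_daily_penalty:,.0f}")
```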

Read the press release here.

📜 Turkish KVKK Publishes Draft Documents on Standard Contracts and BCRs

On 17 May 2024, the Turkish Personal Data Protection Authority (KVKK) announced the publication of draft documents concerning standard contracts and binding corporate rules (BCRs). These drafts aim to ensure appropriate safeguards for the international transfer of personal data in compliance with the updated Article 9 of the Law on Protection of Personal Data No. 6698. The amendments to the law, which were introduced by Law No. 7499 and published in the Official Gazette on 12 March 2024, will take effect on 1 June 2024.

The draft documents are part of a broader regulatory framework aimed at providing clear guidelines for data controllers and processors transferring personal data abroad. Specifically, the KVKK has released several documents relating to standard contracts and BCRs, which are open for public consultation until 27 May 2024. The key documents published include:

  • Standard Contracts for Data Transfers:
    • Data Controller to Data Controller
    • Data Controller to Data Processor
    • Data Processor to Data Processor
    • Data Processor to Data Controller
  • Binding Corporate Rules (BCRs):
    • BCRs application form for data controllers
    • Companion guide on essential elements for BCRs for data controllers
    • BCRs application form for data processors
    • Companion guide on essential elements for BCRs for data processors

These drafts are intended to provide a comprehensive structure for ensuring compliance with the new data transfer regulations. I also covered KVKK’s efforts on this topic in the previous edition of The Privacy Explorer.

You can read the press release here.

🌍 Council of Europe adopts first international Convention on AI

The Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law on 17 May 2024. This marks the first international legally binding treaty aimed at regulating the use of artificial intelligence (AI) systems, emphasizing respect for human rights, democracy, and the rule of law. The Convention covers the entire AI lifecycle, from design and development to decommissioning, adopting a risk-based approach to manage potential risks and promote responsible innovation.

The treaty does not apply to national defence matters or research and development activities unless these activities potentially interfere with human rights, democracy, or the rule of law.

The Convention will be opened for signature on 5 September 2024, in Vilnius, Lithuania.

Read the press release here.

📡 Apple and Google Agree on Standard for Bluetooth Tracker Alerts

Apple and Google have introduced a new collaborative industry standard named “Detecting Unwanted Location Trackers,” designed to alert users of potential tracking by unknown Bluetooth devices. This initiative addresses growing concerns over the misuse of Bluetooth trackers like Apple’s AirTags for stalking purposes.

iPhone users on iOS 17.5 and Android users on devices running Android 6.0 or higher will now receive “[Item] Found Moving With You” alerts if an unknown Bluetooth tracker is detected moving with them. The alert system works across both platforms, ensuring comprehensive user protection.

Read more here.

📢 EU Launches Investigation into Meta's Child Safety Practices on Facebook and Instagram

On 16 May 2024, the European Commission opened formal proceedings against Meta to investigate potential violations of the Digital Services Act (DSA) concerning child protection on Facebook and Instagram. The Commission’s concern stems from preliminary analyses, Meta’s responses to formal information requests, and publicly available reports indicating possible failures in Meta’s systems, particularly those affecting minors.

Key Investigation Areas:

  1. Addictive Algorithms and “Rabbit-Hole” Effects: The Commission is examining whether Facebook and Instagram’s algorithms foster behavioural addictions in children, creating rabbit-hole effects that could harm minors’ mental health.
  2. Age Verification Methods: The investigation will scrutinize Meta’s age-assurance and verification processes, questioning their effectiveness and adequacy in preventing minors from accessing inappropriate content.
  3. Privacy and Safety Measures: The inquiry will also assess Meta’s compliance with DSA obligations to ensure a high level of privacy, safety, and security for minors, including default privacy settings and the design of their recommender systems.

The proceedings address potential infringements of Articles 28, 34, and 35 of the DSA. If proven, these failures could lead to significant penalties, up to 6% of Meta’s global annual turnover.

The press release is available here. This is not the first investigation by the EU Commission into Meta under the DSA – see edition #18 for a previous one involving deceptive advertising, election integrity, and content moderation.

👇 That’s it for this edition. Thanks for reading, and subscribe to get these nuggets in your inbox! 👇