🇳🇱 Dutch DPA identifies privacy risks in the workplace and in social security
In 2024, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) issued a report highlighting privacy challenges in the labour and social security sectors.
- In the labor sector, the report criticizes the frequent conduct of health-related assessments such as alcohol, drug, and medicine testing without a legal basis, which contravenes GDPR. It recommends that employers instead rely on qualified occupational health services to conduct any necessary health data processing legally and ethically.
- Enhanced surveillance of employees through surveillance technologies, including biometrics (such as fingerprint and iris scanning) and digital monitoring tools (such as keystroke logging and vehicle tracking), is critiqued. While useful for security and management, these practices often lack a clear legal justification, raising significant privacy concerns. The AP calls for any deployment of such systems to be transparent, necessary, and legally justified in order to protect employee privacy.
- Regarding social security, the report outlines concerns about the extensive and sometimes excessive use of personal data, particularly through AI and algorithms for profiling purposes. This includes profiling for fraud risk, which can lead to discriminatory outcomes and privacy infringements. The AP calls for social security institutions to ensure a robust legal framework for any data processing, involving data protection officers early in the project lifecycle, and conducting thorough data protection impact assessments.
🌍 Joint International Guidelines for Secure AI Deployment
The National Security Agency’s Artificial Intelligence Security Center, along with key international cybersecurity agencies, has published a detailed guideline on securing AI systems. This collaborative effort aims to standardize best practices for deploying and managing AI systems, particularly those developed externally. The document addresses critical areas such as ensuring the confidentiality, integrity, and availability of AI systems and outlines steps to mitigate known vulnerabilities effectively.
Key Aspects Covered:
- Comprehensive Security Measures: The guidance suggests a multi-faceted approach that includes securing deployment environments, continuous protection of AI systems, and rigorous operation and maintenance standards.
- Mitigation of Specific Threats: Strategies are recommended to counteract potential threats to AI systems, including secure configurations, the application of Zero Trust principles, and robust monitoring of system behaviors and vulnerabilities.
- Operational Recommendations: It advises on managing deployment environments, verifying system integrity, and safeguarding sensitive data involved in AI operations.
- Collaborative Efforts: Encourages a coordinated approach involving various organizational teams—ensuring all stakeholders understand their roles in securing AI assets.
- Regular Updates and Testing: Highlights the necessity for ongoing updates, testing, and adaptations to security practices in response to evolving threats and vulnerabilities in AI technologies.
This guidance serves as a foundational document for organizations looking to securely deploy AI technologies, ensuring they adhere to the highest standards of cybersecurity and resilience.
🇨🇿 Avast Software s.r.o. Fined CZK 351 Million (USD 14.9 million) for Unlawful Data Processing
Avast Software s.r.o., a leading cybersecurity provider, has been fined CZK 351 million (approx. USD 14.9 million) by the Czech Office for Personal Data Protection. The Czech Data Protection Authority opened an investigation into Avast following an anonymous complaint and significant media attention in 2020 (see the initial January 2020 reporting by Motherboard and PCMag). The probe revealed that Avast, through its antivirus software and browser extensions, sold pseudonymized browsing data linked to unique identifiers of about 100 million users. The data was not merely used for statistical analysis, as Avast claimed, but was provided to its subsidiary Jumpshot, Inc., which used the pseudonymized browsing histories to offer detailed consumer behavior insights to marketers, meaning the data was in fact disclosed to a large number of entities.
Key aspects:
- The transferred data was not sufficiently anonymized. Avast claimed that its anonymization processes were adequate and that the data was used for compatible statistical analysis, but the DPA disagreed, finding that individuals could be re-identified and that Avast failed to adequately inform users about the true nature and extent of the processing, which included comprehensive tracking of their online activities.
- The DPA found Avast guilty of processing personal data without a legitimate legal basis under GDPR Article 6(1) and failing to meet transparency obligations required by Article 13.
- The DPA imposed a record fine, considering the unprecedented nature of the case in terms of the processing’s scope, the number of affected users, and the potential impact on their privacy rights.
- The matter was also subject to scrutiny by other EU data protection authorities, under the EU’s One Stop Shop mechanism.
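The re-identification risk at the heart of the decision can be sketched in a few lines of Python: once a single URL in a pseudonymized clickstream reveals who is behind a persistent device identifier, every other event tied to that identifier becomes attributable. The device IDs, URLs, and detection rule below are entirely hypothetical and do not reflect Avast's actual data model:

```python
# Hypothetical clickstream: a persistent pseudonymous device ID links
# every browsing event for the same user.
clickstream = [
    ("dev-4f2a", "https://news.example/politics"),
    ("dev-4f2a", "https://social.example/profile/jane.doe"),  # reveals identity
    ("dev-4f2a", "https://clinic.example/appointments"),
]

# A single identifying URL is enough to label the whole pseudonym...
identified = {}
for device_id, url in clickstream:
    if "/profile/" in url:
        identified[device_id] = url.rsplit("/", 1)[-1]

# ...after which every event tied to that device ID is attributable.
history = [url for device_id, url in clickstream if device_id in identified]
print(identified)   # {'dev-4f2a': 'jane.doe'}
print(len(history))  # 3
```

This is why a persistent unique identifier combined with detailed browsing histories is pseudonymous rather than anonymous data under the GDPR: the link to an identifiable person is only one auxiliary fact away.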
Following the exposure of its practices in January 2020, Avast’s CEO apologized later that month and announced the shutdown of Jumpshot. The scandal extended to the U.S., where in February 2024 Avast settled with the FTC for USD 16.5 million over similar privacy violations and was banned from selling browsing data. On the day the settlement was announced, the FTC published a blog post saying that “Avast promised privacy, but pirated consumers’ data for treasure”.
The FTC investigation focused heavily on the fact that Avast deceived users by claiming that the software would protect consumers’ privacy by blocking third party tracking, but failed to adequately inform consumers that it would sell their detailed, re-identifiable browsing data. This aspect of deceiving practices is not caught in the Czech decision (most likely because unlike the FTC, a data protection authority is only tasked with enforcement of data protection law).
Avast merged with NortonLifeLock in September 2022, creating a new entity called Gen, though Avast remained a separate brand.
So, how much does a flawed business model cost? At least USD 31.5 million plus a ban on doing such business, not to mention the loss of public trust.
🇺🇸 CPPA Critiques American Privacy Rights Act for Limiting State Privacy Laws
The California Privacy Protection Agency (CPPA) has strongly criticized the draft of the American Privacy Rights Act (APRA), voicing concerns over its potential to significantly weaken established state privacy protections, such as the CCPA and the California Delete Act. The agency argues that the draft legislation aims to set a restrictive upper limit on state laws, which could prevent future advancements in consumer privacy protection and adaptability to technological changes.
Key points raised by the CPPA:
- APRA’s draft could effectively negate the CCPA’s “floor” approach that allows California to enhance privacy laws beyond federal standards, thereby freezing privacy protections and inhibiting future state-driven innovations.
- The draft lacks essential protections for data categories like sexual orientation, union membership, and immigration status, leaving significant vulnerabilities.
- Concerns were also raised about APRA potentially stripping the CPPA and similar state agencies of their enforcement capabilities, thereby weakening privacy oversight and limiting the FTC’s authority to enforce robustly.
- CPPA warns that the draft could allow data brokers to bypass stringent state laws, posing substantial security risks, and that the proposed penalties for non-compliance are insufficient to ensure adherence.
The agency urges a reevaluation of the draft to ensure that federal law sets a baseline for privacy rights, supporting state capabilities to enact stronger protections and address emerging privacy threats effectively.
You can read the press release here and the full letter here.
🇳🇴 Norwegian Privacy Board Upholds Decision on Meta's Appeal Against Behavioral Marketing Ban
Meta Ireland and Facebook Norway’s appeal against the Norwegian Data Protection Authority’s (Datatilsynet) rejection of their complaint has been dismissed. The controversy started when Datatilsynet, in July 2023, issued a temporary ban on behavior-based marketing by the companies, citing GDPR Article 66 as the basis, which allows for immediate provisional measures outside the normal consistency mechanism.
The two Meta companies filed a complaint, arguing that they should be allowed an administrative appeal under general administrative law (Forvaltningsloven), challenging the interpretation that GDPR rules override national administrative procedural rights. This initial complaint was dismissed by Datatilsynet, stating that decisions under GDPR Chapter VII are not subject to administrative appeals.
They then appealed this decision, and in its April 2024 ruling the Personvernnemnda (Privacy Board) upheld the Data Protection Authority’s decision, confirming no administrative appeal is available for GDPR Chapter VII decisions.
The companies may still take the case to court, but the administrative appeal route is now closed.
You can read the official summary here.
🇺🇸 FTC Calls for Systematic Security Improvements to Protect Consumer Data
In a new blog post, the Federal Trade Commission emphasizes the importance of systematically addressing security vulnerabilities, inspired by extensive research and past enforcement actions.
- The FTC underscores the need for proactive, systematic security measures rather than ad-hoc fixes to prevent data breaches.
- The importance of integrating security from the product design phase to eliminate entire vulnerability classes is highlighted, in line with the Secure by Design series by the Cybersecurity and Infrastructure Security Agency (CISA).
- Approaches like using template rendering systems for XSS and query builders for SQL injections help address well-known security threats effectively.
- The FTC’s Start with Security guide and past cases show the necessity of anticipating and mitigating known vulnerabilities in software applications.
- Memory-safe programming languages are recommended to prevent common software vulnerabilities like buffer overflows and use-after-free errors.
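The class-eliminating approach described above can be sketched in Python: binding query parameters (what query builders do under the hood) neutralizes SQL injection, and HTML escaping (what auto-escaping template engines do by default) neutralizes reflected XSS. The table, data, and attack payloads below are illustrative only:

```python
import html
import sqlite3

# Hypothetical in-memory database, for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# SQL injection: bind parameters instead of concatenating strings, so
# attacker-controlled input is always treated as data, never as SQL.
payload = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()
print(rows)  # [] -- the payload matches nothing instead of every row

# XSS: escape untrusted values before embedding them in HTML, as
# auto-escaping template engines do for every interpolated value.
comment = '<script>alert("xss")</script>'
print(html.escape(comment))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The design point is that both defenses are structural: developers no longer have to remember to sanitize each individual input, because the query interface and the template engine make the vulnerable pattern unrepresentable.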
🇪🇺 EDPB Evaluates 'Consent or Pay' Models in Digital Advertising
The European Data Protection Board (EDPB) has issued an opinion that criticizes the current use of ‘consent or pay’ models by large online platforms for behavioral advertising, highlighting the need for significant adjustments to ensure that user consent is valid.
Get all the details in my previous post on this topic.
🇳🇿 New Zealand DPA publishes survey showing significant online privacy risks for children
The Office of the Privacy Commissioner of New Zealand has conducted a comprehensive survey involving government agencies, professionals, and NGOs to address privacy concerns for children and young people. The results indicate three main areas of concern:
- Social Media and Online Risks: Respondents see an overwhelming need for enhanced guidance and regulatory measures to manage the risks children face on social media, including bullying, blackmail, and the misuse of personal and financial information.
- Regulatory Enhancements: Respondents advocate for significant changes such as introducing a child-specific code of practice, a right for children to request the deletion of their data from platforms, and higher penalties for privacy violations affecting children.
- Educational Initiatives: There is a call for campaigns to better educate parents and children about privacy risks, emphasizing the importance of involving children in discussions about their digital footprints and online presence.
The survey findings will be used to refine strategies for protecting children’s privacy in light of the unique challenges of the digital age. The next steps may involve amendments to the Privacy Act to ensure it adequately supports the privacy rights of young people.
Read the report here.
🇬🇧 UK NCSC publishes new Cyber Assessment Framework
The National Cyber Security Centre (NCSC) has released version 3.2 of its Cyber Assessment Framework (CAF), a crucial update aimed at fortifying the cyber defenses of critical national infrastructure (CNI). This update responds to the escalated cyber threat landscape and feedback from prior framework applications beyond its initial regulatory scope. Key changes include:
- Enhanced Security Features: The revision significantly updates guidance on critical areas such as remote access, privileged operations, and authentication processes. It incorporates multi-factor authentication more comprehensively as outlined in Principles B2a and B2c.
- Regulatory and Framework Alignment: This version shows stronger integration with the Cyber Essentials scheme, enhancing the framework’s outcome-focused approach and ensuring it remains robust and applicable across various sectors.
- Feedback and Consultation: The update process included extensive consultations with NIS regulators and stakeholders, reflected in improved navigation and consolidated references throughout the CAF documentation.
- Future Planning: Looking forward, the NCSC commits to adapting the CAF to reflect changes in cyber resilience regulations and the integration of emerging technologies like AI, indicating continuous evolution based on sector needs and technological advancements.
- Usage and Application: The CAF serves both as a self-assessment tool and a guideline for independent assessments, emphasizing a non-prescriptive, outcome-oriented approach to achieving cybersecurity goals. It remains a critical tool for organizations needing to meet specific security levels, potentially under regulatory scrutiny, and supports effective cyber regulation.
🌍 Council of Europe Parliamentary Assembly Endorses Draft AI Convention with Calls for Inclusive Private Sector Regulation
The Council of Europe’s Parliamentary Assembly (PACE) recently adopted Opinion 303 (2024), supporting the draft Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, but expressed a number of concerns.
Key points:
- Significant amendments proposed by PACE include ensuring that any AI-related security measures respect democratic institutions and processes, enhancing provisions for protecting health and the environment, and establishing robust mechanisms for reviewing the convention’s implementation. PACE advocates for specific provisions ensuring that AI uses bolster democratic processes, for example by enhancing government accountability and enabling public participation. It also suggests including a provision allowing states to limit or prohibit AI uses that conflict with human rights, adding specific regulations for AI applications affecting health and the environment, and requiring that any restrictions on AI for national security comply with international human rights standards.
- PACE expressed regret that the draft does not uniformly cover public and private sector AI activities, highlighting a critical loophole due to the significant role of private entities in AI deployment. The Assembly strongly advocates for the Convention’s comprehensive application across all sectors upon ratification by member states.
- PACE expressed concerns about the draft’s abstract language and its framework nature, suggesting it may require additional specific instruments for complete effectiveness.
The Framework Convention is set to be adopted by the Committee of Ministers soon. It represents a pioneering international effort to govern AI technologies through a unified legal framework, intending to inspire similar legislation worldwide. [Note: the Council of Europe is entirely separate from the European Union and currently has 46 member states, compared to 27 members of the European Union]
You can read the press release here, the opinion here, and the draft framework here.
🇪🇺 EDPB Announces 2024-2027 Strategy
The European Data Protection Board (EDPB) has officially launched its strategic plan for 2024-2027, outlining a comprehensive roadmap to strengthen data protection standards across Europe and globally. This strategy is structured around four pillars:
- Harmonizing Data Protection Laws: The EDPB will focus on creating uniformity in data protection laws across EU member states, promoting compliance through practical and accessible guidance.
- Enforcing Data Protection: A reinforcement of a common enforcement culture is planned, including the development of new methodologies and cooperation tools, alongside fostering global enforcement dialogue.
- Addressing the Digital Landscape: The strategy aims to embed data protection within new regulatory frameworks such as the DMA and DSA, and adapt to the challenges posed by emerging digital technologies, particularly AI.
- Global Data Protection Dialogue: The EDPB plans to intensify its participation in international data protection discussions, ensuring the EU’s voice is prominent in global forums and that data protection standards are upheld internationally.
In addition to these pillars, the EDPB has developed specific mechanisms to facilitate redress for data privacy complaints under the EU-US Data Privacy Framework, with a clear focus on complaints related to national security or commercial data use (no fewer than 5 documents dealing with the DPF were published by the EDPB on 24 April – more on that in next week’s edition).