AI

EDPB publishes report on data protection risks of AI for Optical Character Recognition (OCR)

On 27 June 2024, the EDPB published the results of a project under its Support Pool of Experts program assessing the data protection risks associated with AI-powered Optical Character Recognition (OCR). Conducted by external expert Isabel Barbera and completed in September 2023, the report identifies significant privacy risks in OCR technology, such as data breaches, unlawful data storage, and the unlawful handling of sensitive information. The findings emphasize the need for robust safeguards and strict compliance with data protection regulations to mitigate these risks effectively.

EDPB publishes Checklist for AI auditing

The EDPB, in collaboration with the Spanish data protection authority (AEPD), initiated a project to enhance the GDPR compliance of AI systems. The project includes the development and piloting of tools and a checklist for inspecting and auditing AI systems. Key elements include model card requirements, system maps, bias identification and testing, adversarial audits, and the publication of audit reports. These measures aim to improve transparency and accountability in AI systems, facilitating better oversight by data protection authorities.

OECD Explores AI, Data Governance, and Privacy Synergies

The OECD’s report “AI, Data Governance, and Privacy: Synergies and Areas of International Co-operation,” published on 25 June 2024, examines the critical intersection of AI and privacy. Emphasizing collaboration between the AI and privacy policy communities, it addresses the challenges posed by generative AI. The report covers key developments in Privacy Enhancing Technologies (PETs) and explores the complexities of applying established legal concepts such as “legitimate interests” to AI practices. It calls for harmonizing the OECD Privacy Guidelines with the OECD AI Principles to enhance regulatory compliance and foster international cooperation.

Meta Halts AI Rollout in Europe Following Irish DPC Request

Meta has paused the launch of its AI tools in Europe after a request from Ireland’s Data Protection Commission (DPC). The decision comes after the digital rights group Noyb filed complaints in 11 European countries, criticizing the vagueness of Meta’s AI plans and the requirement that users opt out rather than opt in. Meta had planned to use public posts from Facebook and Instagram to train its AI models. The DPC’s request, welcomed by other European authorities, followed intensive discussions with Meta, which expressed disappointment and noted that it had incorporated regulatory feedback since March.

IAPP AI Governance in Practice Report 2024

The IAPP’s “AI Governance in Practice Report 2024” addresses the need for robust AI governance amid rapid advances in machine learning, which have disrupted many sectors and heightened the responsibility of leaders to manage AI risks. The report outlines the principles, laws, policies, processes, and standards required for AI governance, highlighting transparency, bias mitigation, privacy, and security as key areas of focus. It provides actionable insights to help organizations implement effective AI governance strategies and ensure the safe, responsible deployment of AI technologies.
