EDPB publishes Checklist for AI auditing

The European Data Protection Board (EDPB) has launched a project under the Support Pool of Experts program, initiated by the Spanish Data Protection Authority (AEPD), to develop tools that enhance the GDPR compliance of AI systems. The project's deliverables, prepared by Dr. Gemma Galdon Clavell and delivered in January 2023, aim to assist data protection authorities in inspecting and auditing AI systems through a detailed methodology and checklist.

Objective

The primary objective of this project is to help various stakeholders understand and assess data protection safeguards within the AI Act framework. It focuses on creating a comprehensive checklist and proposing tools that improve transparency and facilitate the auditing of algorithmic systems. The checklist is intended primarily for Data Protection Authorities (DPAs), while the algo-scores and leaflets are mainly aimed at AI developers and organizations implementing AI systems.

Key Elements of the Checklist

  • Model Card Requirements: These compile information on the training and testing of AI models, including Data Protection Impact Assessments (DPIAs), data sharing agreements, and approvals from data protection authorities.
  • System Maps: These maps establish relationships and interactions between the algorithmic model, the technical system, and the decision-making process.
  • Bias Identification and Testing: This involves identifying the moments and sources of bias and designing tests to determine the impact of different biases on individuals, groups, society, or the efficiency of the AI system; a minimal illustration of such a test follows this list.
  • Adversarial Audits: These are designed to challenge the system’s robustness and identify vulnerabilities.
  • Public Audit Reports: These reports enhance transparency by providing detailed audit outcomes to the public.
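
The EDPB deliverables describe bias testing at the level of methodology and documentation rather than code, but a minimal sketch can show what one such group-level test might look like in practice. The metric, the group labels, and the 0.8 threshold below are illustrative assumptions, not part of the checklist.

```python
# Illustrative sketch only: the metric, group labels and 0.8 threshold are
# assumptions for illustration, not requirements from the EDPB checklist.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Share of positive decisions per protected group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact(predictions, groups, reference_group):
    """Ratio of each group's positive-decision rate to the reference group's rate."""
    rates = positive_rates(predictions, groups)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical audit sample: model decisions plus a protected attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

for group, ratio in disparate_impact(preds, groups, reference_group="A").items():
    flag = "review" if ratio < 0.8 else "ok"  # 0.8 is the common "four-fifths" rule of thumb
    print(f"group {group}: disparate impact {ratio:.2f} -> {flag}")
```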

Algo-Score Proposal

The proposal includes an algo-score system inspired by the Nutriscore and A+++ labeling schemes. This scoring system evaluates AI governance, model fairness and performance, and post-market monitoring and auditing. The algo-score aims to promote transparency, accountability, and consumer choice in AI systems.
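
As a rough illustration of how such a label could be derived from audit results, the sketch below aggregates per-pillar scores into a letter grade. The weights and grade bands are hypothetical assumptions, not the scheme defined in the EDPB proposal.

```python
# Hypothetical sketch of a Nutriscore-style "algo-score": the weights and
# grade bands are illustrative assumptions, not the EDPB scheme.

WEIGHTS = {                        # assumed weighting of the three pillars
    "governance": 0.35,            # roles, documentation, compliance processes
    "fairness_and_performance": 0.40,
    "post_market_monitoring": 0.25,
}

GRADE_BANDS = [(0.8, "A"), (0.65, "B"), (0.5, "C"), (0.35, "D"), (0.0, "E")]

def algo_score(scores: dict[str, float]) -> tuple[float, str]:
    """Aggregate per-pillar scores in [0, 1] into a weighted total and a letter grade."""
    total = sum(WEIGHTS[pillar] * scores[pillar] for pillar in WEIGHTS)
    grade = next(letter for threshold, letter in GRADE_BANDS if total >= threshold)
    return total, grade

# Example audit outcome for a hypothetical system.
total, grade = algo_score({
    "governance": 0.7,
    "fairness_and_performance": 0.55,
    "post_market_monitoring": 0.6,
})
print(f"algo-score: {total:.2f} -> grade {grade}")
```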

AI Leaflet Proposal

The AI leaflets, adapted from medical package leaflets, provide detailed and accessible information about AI systems. They include general information, process descriptions, data sources, model details, bias and impact metrics, and redress mechanisms. These leaflets aim to facilitate informed decision-making and compliance with GDPR and AI Act requirements.
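
To make the leaflet idea concrete, the sketch below models the sections listed above as a simple data structure. The field names and example values are assumptions for illustration, not the template proposed in the EDPB deliverable.

```python
# Illustrative sketch of an AI leaflet as structured data; field names and
# example content are assumptions, not the EDPB template.
from dataclasses import dataclass, field

@dataclass
class AILeaflet:
    general_information: str            # who provides the system and for what purpose
    process_description: str            # where the system sits in the decision-making process
    data_sources: list[str]             # categories and origins of the data used
    model_details: str                  # model type, inputs, outputs, known limitations
    bias_and_impact_metrics: dict[str, float] = field(default_factory=dict)
    redress_mechanisms: list[str] = field(default_factory=list)

leaflet = AILeaflet(
    general_information="Hypothetical credit-scoring assistant operated by a lender.",
    process_description="Produces a risk score that is reviewed by a human credit officer.",
    data_sources=["application form data", "repayment history"],
    model_details="Gradient-boosted trees; outputs a score from 0 to 100.",
    bias_and_impact_metrics={"disparate_impact_group_B": 0.82},
    redress_mechanisms=["request human review", "contact the DPO", "lodge a complaint with a DPA"],
)
print(leaflet.general_information)
```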

Broader Impact

GDPR Compliance:

  • The AI auditing checklist directly supports GDPR compliance by providing detailed guidelines on data protection impact assessments (DPIAs), data sharing agreements, and transparency requirements.
  • AI leaflets ensure that necessary information about data processing and AI decision-making is accessible to data subjects, fulfilling GDPR requirements for transparency and accountability.

AI Act Alignment:

  • The algo-scores and AI leaflets are aligned with the proposed AI Act’s focus on high-risk AI systems, requiring comprehensive documentation and transparency measures.
  • The AI Act emphasizes the need for clear accountability and governance structures, which the algo-scores system aims to encapsulate by evaluating governance roles, compliance with standards, and documentation practices.

This initiative is part of a broader effort to operationalize ethical AI principles and ensure responsible AI development and deployment.

The documents detailing the project's framework, including the proposals for algo-scores and AI leaflets, were delivered in January 2023; only the algo-score proposal mentions an update in June 2024.

👉 Find all three documents here.

♻️ Share this if you found it useful.
💥 Follow me on LinkedIn for updates and discussions on privacy, digital and AI education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
