UNESCO Opens Consultation on AI Use in Courts

On 2 August 2024, UNESCO launched a public consultation on its draft Guidelines for the Use of AI Systems in Courts and Tribunals. These guidelines are a key part of UNESCO’s “AI and the Rule of Law” initiative under the Global Judges Programme, and are designed to ensure that AI tools used by courts and tribunals align with principles of justice, human rights, and the rule of law. They were developed following a 2023 survey revealing significant gaps in AI guidance within judicial systems – the survey found that while 93% of judicial operators were familiar with AI tools like ChatGPT, only 9% of organizations had AI guidelines or training in place. The consultation will remain open until 5 September 2024, inviting feedback from judicial professionals, legal experts, and the broader public.

Key Principles

The draft guidelines outline thirteen key principles that organizations and individual members of the judiciary must follow when adopting and using AI systems:

    1. Protection of Human Rights: AI systems must respect and promote human rights, ensuring fairness, non-discrimination, procedural fairness, and personal data protection.
    2. Proportionality: AI tools should be used to achieve legitimate and proportional ends, avoiding unnecessary harm.
    3. Safety: AI systems must avoid and address any risks of harm to individuals or society.
    4. Information Security: AI tools must protect confidential information, complying with international standards for information access.
    5. Awareness and Informed Use: Judicial operators must understand the functionalities, limitations, and risks of AI systems to make informed decisions.
    6. Transparent Use: Courts must inform stakeholders about when and how AI systems are used, particularly when decisions affect individual rights.
    7. Accountability and Auditability: The judiciary must ensure that AI systems are traceable, explainable, and subject to audits.
    8. Explainability: AI tools should be transparent, with clear explanations of how they operate and their decision-making processes.
    9. Accuracy and Reliability: AI systems must provide accurate and reliable outputs that are pertinent to judicial tasks.
    10. Human Oversight: AI should not replace human judgment, especially in decisions impacting rights and freedoms.
    11. Human-Centric Design: AI systems should complement human capabilities and respect human dignity.
    12. Responsibility: Judicial organizations and individuals must take responsibility for decisions made with AI assistance.
    13. Multi-Stakeholder Governance: Judicial bodies should engage with diverse stakeholders throughout the AI system’s lifecycle to ensure inclusivity.


Specific Guidance for Judicial Organizations

The guidelines also offer detailed recommendations for the bodies that govern the judiciary, courts, and tribunals:

    • Adoption of AI Tools: Courts should conduct algorithmic impact assessments before deploying AI systems to identify potential risks and ensure compliance with human rights.

    • Data Governance: There must be robust data governance frameworks to protect personal data and promote responsible data-sharing practices.

    • Risk Management: Judicial organizations should establish systems to identify, monitor, and mitigate risks associated with AI use.

    • Cybersecurity: Measures should be in place to prevent, control, and mitigate cybersecurity risks.


Guidance for Individuals in the Judiciary

Individual members of the judiciary, including judges and support staff, are also provided with specific guidelines:

    • AI Literacy: Individuals must develop critical AI literacy skills to understand AI tools’ functionalities, limitations, and potential risks. 

    • Transparency and Accountability: Judges and legal professionals should ensure transparency in AI use and be accountable for the decisions supported by AI systems.

    • Avoid Overreliance: AI should not be the sole basis for decisions; human judgment remains paramount.


Guidance on Generative AI

The guidelines emphasize caution when using generative AI systems:

    • Content Authenticity: AI-generated content must be clearly labeled, and its development must be trackable for authenticity purposes.

    • Usage Restrictions: Certain uses of generative AI, especially those that compromise human rights or judicial integrity, should be prohibited or restricted.


Timeline of Development

The draft guidelines are open for public consultation until 5 September 2024, with the final version expected to be published in November 2024. UNESCO encourages stakeholders, including judicial professionals and the public, to provide feedback to ensure the guidelines comprehensively address the challenges and opportunities presented by AI in the judicial system.

Earlier, in 2022, UNESCO published its Recommendation on the Ethics of Artificial Intelligence, which can be found here.


A bit of context

This consultation comes in the same month in which a Dutch judge openly used ChatGPT as a source of information in one of their decisions. According to media reports, the case was a neighbour dispute over solar panels, and the judge used ChatGPT to find out the average life span of a solar panel as well as the average price of electricity, though not as the sole basis for the ruling. The decision caused widespread outrage, given how unreliable generative AI can be as a source of accurate information.

In the UK, the Courts and Tribunals Judiciary issued its “Artificial Intelligence (AI) Guidance for Judicial Office Holders” in December 2023, after a UK judge had used ChatGPT earlier that year to summarise an area of law, saying he was satisfied with the accuracy of the answer provided.

♻️ Share this if you found it useful.
💥 Follow me on Linkedin for updates and discussions on privacy, digital and AI education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
