NIST Released Guidance on Mitigating Generative AI Risks


The National Institute of Standards and Technology (NIST) released the AI RMF Generative AI Profile (NIST AI 600-1) on 25 July 2024. This profile helps organizations identify and mitigate the unique risks posed by generative AI, aligning risk management practices with organizational goals and priorities.

Risks Identified

  1. Cybersecurity Threats: Lowered barriers for cyberattacks, including the creation and spread of harmful content like disinformation and hate speech.
  2. Content “Hallucinations”: Generative AI systems may produce inaccurate or misleading information, leading to potential misuse or misinterpretation.
  3. Privacy Concerns: Risks related to the leakage and unauthorized use of sensitive data, with AI models sometimes revealing personal information unintentionally.
  4. Environmental Impact: The significant energy consumption and carbon footprint associated with training and operating AI models.

  • Bias and Homogenization: The potential amplification of harmful biases and uniformity in AI outputs, leading to discriminatory practices and reduced diversity in content.
  • Human-AI Configuration: Misconfigurations and poor interactions between humans and AI systems, leading to over-reliance or aversion to AI systems.
  • Information Integrity: The ease of producing and disseminating misinformation and disinformation, undermining public trust.
  • Information Security: Increased attack surfaces and vulnerabilities in AI systems, including risks from prompt injections and data poisoning.
  • Intellectual Property: Potential infringement on copyrighted content and the unauthorized use of intellectual property.
  • Obscene, Degrading, and Abusive Content: The creation and spread of explicit and harmful content, including non-consensual intimate imagery and child sexual abuse material.
  • CBRN Information or Capabilities: The facilitation of access to information and capabilities related to chemical, biological, radiological, and nuclear weapons.
  • Value Chain and Component Integration: Risks associated with the integration of third-party components, such as improperly vetted data and lack of transparency.

Mitigation Actions

NIST outlines over 200 actions that can be taken to manage these risks effectively. These actions are organized into categories according to the AI Risk Management Framework (AI RMF): Govern, Map, Measure, and Manage. Here are the key points:

Govern:

  1. Governance Structures and Policies:

    • Establish clear governance structures for managing AI risks.
    • Develop and implement policies that ensure accountability and transparency in AI system development and deployment.

  2. Legal and Regulatory Compliance:

    • Align AI development processes with existing legal and regulatory requirements.
    • Stay informed about evolving laws and regulations relevant to AI technologies.

  3. Stakeholder Engagement:

    • Engage with a diverse set of stakeholders to understand different perspectives and requirements.
    • Involve stakeholders in the risk assessment and mitigation processes.

Map:

  1. Risk Identification and Documentation:

    • Identify and document potential risks associated with AI systems at different stages of their lifecycle.
    • Use structured methodologies to map out these risks comprehensively.

  2. Contextual Understanding:

    • Understand the context in which the AI system will operate.
    • Consider the socio-technical environment and the potential impact on different user groups.

  3. Scenario Analysis:

    • Conduct scenario analyses to anticipate possible risk scenarios.
    • Prepare mitigation strategies for identified scenarios.

Measure:

  1. Performance Metrics:

    • Define and use metrics to measure the performance and impact of AI systems.
    • Regularly assess these metrics to ensure AI systems are functioning as intended.

  2. Bias and Fairness Audits:

    • Conduct regular audits to detect and mitigate biases in AI systems.
    • Ensure AI systems are fair and do not disproportionately impact any group.

  3. Robustness and Security Testing:

    • Test AI systems for robustness against adversarial attacks.
    • Implement security measures to protect AI systems from malicious actors.

Manage:

  1. Incident Response Plans:

    • Develop and maintain incident response plans for AI systems.
    • Ensure these plans are tested and updated regularly.

  2. Continuous Monitoring and Improvement:

    • Continuously monitor AI systems for performance and compliance.
    • Implement mechanisms for continuous improvement based on feedback and monitoring data.

  3. Third-Party Risk Management:

    • Ensure proper vetting and management of third-party components used in AI systems.
    • Maintain transparency in the sourcing and integration of these components.

Additional Key Points:

  1. Human-AI Interaction:

    • Define clear guidelines for human-AI interaction.
    • Ensure that users understand the capabilities and limitations of AI systems.

  2. Value Chain and Component Integration:

    • Assess and manage risks associated with integrating various components across the AI value chain.
    • Ensure that all components meet organizational standards for quality and security.

The AI RMF Generative AI Profile serves as a comprehensive companion to the NIST AI Risk Management Framework (see also the NIST AI RMF Playbook). While the AI RMF is structured to provide a systematic approach for organizations to manage all AI-related risks (Govern – Map – Measure – Manage), the Generative AI Risk Profile enhances the AI RMF by offering specific guidance for managing the unique risks associated with generative AI models.

♻️ Share this if you found it useful.
💥 Follow me on Linkedin for updates and discussions on privacy, digital and AI education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.

Scroll to Top