Hong Kong’s New AI Data Protection Framework Released

On 11 June 2024, the Office of the Privacy Commissioner for Personal Data (PCPD) in Hong Kong released the “Model Personal Data Protection Framework” to guide local enterprises on the ethical procurement, implementation, and management of AI systems that handle personal data. This framework addresses the growing need for AI governance in the face of rapid technological advancements and increasing regulatory challenges.

Framework Highlights

The Model Framework provides guidelines for organizations on AI governance, risk assessment, customization, and stakeholder communication.

Key recommendations:

  1. AI Strategy and Governance: The framework recommends establishing an AI strategy aligned with the organization’s objectives and underpinned by top management’s commitment to ethical AI use. It suggests forming an AI governance committee responsible for overseeing the entire AI lifecycle, from procurement to deployment. This committee should bring together diverse expertise, including computer engineering, data science, cybersecurity, and legal compliance.

  2. Risk Assessment and Human Oversight: The framework adopts a risk-based approach, recommending thorough risk assessments to identify and mitigate potential risks, including privacy risks. It outlines the necessity of human oversight in AI operations, with higher-risk AI systems requiring more stringent human control to prevent errors and biased outcomes.

  3. Customization and Management of AI Systems: The framework highlights best practices for data preparation, ensuring compliance with the Personal Data (Privacy) Ordinance (PDPO). It stresses the importance of using high-quality, relevant data and employing techniques such as anonymization and differential privacy to protect personal data (see the first sketch after this list). Organizations are advised to validate and test AI models rigorously before deployment.

  4. Continuous Monitoring and Management: Post-deployment, the framework calls for continuous monitoring and review of AI systems to maintain their reliability and robustness. Organizations should establish mechanisms for transparency, traceability, and auditability of AI outputs, ensuring accountability and compliance with data protection regulations (see the audit-trail sketch after this list).

  5. Stakeholder Engagement: The framework advises maintaining transparency through regular communication with stakeholders, including internal staff, AI suppliers, and regulators.
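
To make the data-preparation point in item 3 concrete, here is a minimal, illustrative sketch of the Laplace mechanism, one common way to apply differential privacy to a simple aggregate query. The dataset, threshold, and epsilon value are assumptions chosen for illustration only; they are not drawn from the PCPD framework.

```python
import numpy as np

def laplace_count(values, threshold, epsilon, sensitivity=1.0):
    """Return a differentially private count of values above a threshold.

    A single record changes the count by at most `sensitivity` (here 1),
    so adding Laplace noise with scale sensitivity/epsilon satisfies
    epsilon-differential privacy for this one query.
    """
    true_count = int(np.sum(np.asarray(values) > threshold))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Illustrative use: noisy count of customers older than 40,
# with a modest privacy budget (epsilon = 0.5).
ages = [23, 45, 31, 62, 29, 51, 47, 38]
print(laplace_count(ages, threshold=40, epsilon=0.5))
```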
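
Similarly, the traceability and auditability called for in item 4 can be supported by an append-only audit trail of AI outputs. The sketch below assumes a hypothetical JSON-lines log file and a hypothetical credit-scoring model call; it is one possible design choice, not a prescription from the framework.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical append-only log file

def log_ai_decision(model_id: str, model_version: str,
                    input_payload: dict, output: dict) -> None:
    """Append one traceable record per AI output: which model produced it,
    when, and from which input."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash rather than store the raw input, so the trail itself
        # holds no personal data.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode("utf-8")
        ).hexdigest(),
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative use after a (hypothetical) model call:
log_ai_decision(
    model_id="credit-scoring",
    model_version="2024.06.1",
    input_payload={"applicant_id": "A-1029", "income_band": "C"},
    output={"score": 0.72, "decision": "refer_to_human"},
)
```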

The PCPD has also published a leaflet summarizing key recommendations from the Model Framework to assist organizations in understanding and implementing these best practices effectively.

👉 Read the press release here, the full framework here, and the leaflet here.

♻️ Share this if you found it useful.
💥 Follow me on LinkedIn for updates and discussions on privacy education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
