Hong Kong’s AI model framework: the Personal Data (Privacy) Ordinance

The Hong Kong PCPD’s AI Model Framework provides guidelines for organisations using AI systems that process personal data, emphasising compliance with the PDPO.

 

On June 11, 2024, the Hong Kong Office of the Privacy Commissioner for Personal Data (PCPD) unveiled its Artificial Intelligence Model Personal Data Protection Framework (Model Framework). This framework serves as a set of guidelines for organisations utilising AI systems that involve processing personal data. It emphasises the importance of complying with the Personal Data (Privacy) Ordinance (PDPO) and adhering to the six Data Protection Principles (DPP) outlined in Schedule 1 during the procurement and implementation of AI systems. The Model Framework presents recommendations in four key areas: AI strategy and governance; risk assessment and human oversight; customisation of AI models and the implementation and management of AI systems; and communication and engagement with stakeholders.

 

Hong Kong’s AI model framework is based on six major data protection principles.

 

There are six major DPP under Hong Kong’s personal data protection framework for AI users. The DPP are a set of principles that govern the collection, use, and disclosure of personal data:

DPP 1 (Purpose and Manner of Collection) ensures that personal data is collected lawfully and fairly and that data users provide specific information to data subjects during the collection process.

DPP 2 (Accuracy and Duration of Retention) requires personal data to be accurate, up to date, and kept only for as long as necessary.

DPP 3 (Use of Personal Data) specifies that personal data should be used only for the purposes for which it was collected, or a directly related purpose, unless the data subject consents otherwise.

DPP 4 (Security of Personal Data) mandates the application of appropriate security measures to protect personal data.

DPP 5 (Information to be Generally Available) requires transparency from data users regarding the types of personal data they hold and the primary purposes for which it is used.

DPP 6 (Access to Personal Data) grants data subjects the right to access and correct their personal data.

 

The PCPD, drawing on the Guidance on the Ethical Development and Use of Artificial Intelligence, imposes several obligations on organisations based on the data protection principles therein.

 

Organisations must prioritise clear and regular communication with stakeholders to enhance transparency and foster trust. In addition to explaining AI decisions and results, organisations should offer avenues for individuals to provide feedback, request explanations, or seek human intervention when AI systems make significant personal decisions. Pursuant to DPP 1(3) and DPP 5, data subjects have the right to be informed about the purpose of data usage (e.g., AI training or customisation), potential recipients (e.g., AI suppliers), and the organisation’s policies regarding personal data in the context of AI. They also have the right to submit data access and correction requests under sections 18 and 22 of the PDPO, which organisations can facilitate through their AI suppliers. The Model Framework also makes reference to the Guidance on the Ethical Development and Use of Artificial Intelligence, issued by the PCPD in 2021, which primarily targets organisations involved in developing and using personal data-based AI systems. The Model Framework, on the other hand, focuses on companies purchasing AI solutions from developers.

 

Organisations should develop an internal AI strategy, establish an AI governance committee, and train employees in data protection laws to prevent unauthorised handling of personal data and to safeguard it.

 

The Model Framework recommends that organisations develop an internal AI strategy to guide the implementation and use of AI solutions. This strategy should outline the purposes for which AI solutions may be procured and the methods for implementing and utilising AI systems. When implementing AI systems, an AI supplier that offers a platform for AI customisation is likely to be considered a data processor under the PDPO. DPP 4 requires data users to take all practicable steps to safeguard personal data in their possession against unauthorised or accidental access, processing, erasure, loss, or use. Consequently, under DPP 4(2), any organisation (data user) transferring personal data to an AI supplier (data processor) must adopt contractual or other measures to ensure the supplier meets this data security requirement. Additionally, the Model Framework recommends that organisations establish an AI governance committee. This committee should include experts from various fields and should provide AI-related training to employees, including training in data protection laws for non-legal users of AI systems.

 

Hong Kong’s AI Model Framework highlights the importance of risk assessment, data preparation, and compliance with privacy regulations in AI implementation.

 

Hong Kong’s AI Model Framework emphasises the significance of conducting a thorough risk assessment to systematically identify, analyse, and evaluate the potential risks, including privacy risks, involved in the implementation of AI. It highlights the importance of data preparation and management in customising and deploying AI systems. To ensure compliance with the PDPO, organisations should minimise the collection of personal data. Only the data necessary to customise or operate the AI should be collected and used, and any unnecessary data should be discarded in accordance with section 26 of the PDPO. In certain cases, using anonymised, pseudonymised, or synthetic data for AI customisation and input may be appropriate. Organisations must properly document the handling of data for customisation and AI use to ensure data security and compliance with PDPO requirements. While the Model Framework is not legally binding, it serves as a valuable guideline and checklist for companies seeking to adopt AI in their operations, assisting them in minimising the risks associated with AI procurement and implementation. Companies should remain vigilant and keep up with future regulatory updates related to AI adoption.
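As a rough illustration of the data minimisation and pseudonymisation practices described above, the sketch below shows how an organisation might strip and hash personal data before passing records to an AI supplier for customisation. It is a minimal example, not part of the PCPD framework: the field names, salt value, and helper function are all hypothetical.

```python
import hashlib

# Hypothetical salt; in practice this would live in a secrets manager
# and be rotated, not hard-coded.
SALT = "rotate-this-secret"

def minimise_and_pseudonymise(record: dict, keep_fields: set, identifier_fields: set) -> dict:
    """Keep only the fields needed for AI customisation (data minimisation)
    and replace direct identifiers with salted one-way hashes."""
    out = {}
    for key in keep_fields:
        if key not in record:
            continue
        value = record[key]
        if key in identifier_fields:
            # The AI supplier never receives the raw identifier.
            out[key] = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
        else:
            out[key] = value
    return out

record = {"name": "Chan Tai Man", "email": "tm@example.com", "purchase_total": 1280}
# "email" is discarded entirely; "name" is pseudonymised; "purchase_total" passes through.
safe = minimise_and_pseudonymise(record, {"name", "purchase_total"}, {"name", "email"})
```

Note that salted hashing is pseudonymisation, not anonymisation: where re-identification remains possible, the output is still likely to be personal data under the PDPO, so the contractual safeguards discussed above would still apply to its transfer.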

Discover how Aphaia can help ensure compliance of your data protection and AI strategy. We offer early compliance solutions for EU AI Act and full GDPR and UK GDPR compliance. We specialise in empowering organisations like yours with cutting-edge solutions designed to not only meet, but exceed the demands of today’s data landscape. Contact Aphaia today.
