The EDPB’s recent opinion on AI models highlights the GDPR’s role in responsible AI development.
The European Data Protection Board (EDPB) issued an opinion on December 18, 2024, emphasizing the importance of GDPR principles in the ethical development and deployment of AI models. This comprehensive opinion, requested by the Irish Data Protection Authority (DPA), focuses on three key areas: determining when AI models can be classified as anonymous, assessing the use of legitimate interest as a legal basis for AI-related data processing, and addressing the implications of unlawfully processed data. The EDPB aims to promote regulatory harmonization across Europe by ensuring that AI innovation remains aligned with data protection law.
The EDPB states that anonymity in AI models requires case-by-case assessment.
According to the EDPB, the anonymity of AI models must be evaluated on a case-by-case basis by DPAs. For an AI model to be considered anonymous, it must be highly unlikely that individuals can be directly or indirectly identified, or that their personal data can be extracted through queries. The EDPB’s opinion provides flexible, non-prescriptive methods for demonstrating anonymity, reflecting the diverse nature of AI technologies.
A three-step test is proposed for determining legitimate interest as a legal basis for AI data processing.
Organizations must first ensure that they have a lawful basis to process personal data, relying on one of the six legal bases set out in Article 6 of the GDPR. The EDPB has outlined criteria for determining whether legitimate interest can serve as the legal basis for processing personal data in AI development and deployment. A three-step test is proposed: the purpose of the processing must be legitimate, the processing must be strictly necessary for that purpose, and a balancing test must confirm that individuals’ rights and freedoms are not disproportionately affected. Examples of potentially legitimate uses include conversational AI agents and AI tools for cybersecurity, provided they meet strict requirements and respect individuals’ rights.
Individuals’ reasonable expectations about how their data may be used in AI models must be considered.
The opinion emphasizes that DPAs must assess whether individuals might reasonably expect their personal data to be used in AI processes. Relevant factors include whether the data is publicly available, the nature of the relationship between individuals and data controllers, the context of data collection, and the potential future uses of the AI model. The transparency requirements of Article 14 GDPR apply when personal data is scraped from publicly accessible websites, although informing every data subject is usually impractical given the volume of data collected. However, when personal data is collected through direct interaction with a service such as ChatGPT, it is crucial to inform data subjects that their input might be used for training. Where potential harm to individuals is identified, the EDPB recommends mitigating measures, such as enhancing transparency or facilitating the exercise of data rights.
AI models built on unlawfully processed personal data must anonymize it to avoid deployment restrictions.
AI models developed using unlawfully processed personal data may face restrictions on their deployment unless the data is appropriately anonymized. The EDPB warns that using such data without proper safeguards could affect the lawfulness of the entire AI system, reinforcing the need for strict GDPR compliance. Controllers processing personal data in the context of large language models must take all necessary steps to ensure full compliance with GDPR requirements, in line with the principle of accountability set out in Article 5(2) and Article 24 of the GDPR.
The EDPB is committed to supporting responsible AI and data protection by providing guidance and developing guidelines in response to AI’s rapid evolution.
Considering the rapid evolution of AI technology, the EDPB has highlighted the need for adaptable guidance. While this opinion provides general principles and criteria for DPAs, the EDPB is also developing additional guidelines to address more specific issues, such as web scraping. This forward-looking approach reflects the EDPB’s commitment to supporting responsible AI innovation while ensuring adequate protection of individuals’ data. The opinion serves as a vital framework for stakeholders navigating the complex intersection of AI and data protection law.