Recommendations on the development of AI systems from European DPAs

Data protection authorities (DPAs) across Europe have provided useful recommendations for organisations involved in the development and deployment of AI systems, helping them remain compliant with the GDPR and other regulations applicable to AI systems.

The French data protection authority, the CNIL, recently published its first recommendations on the development of AI systems. These recommendations are designed to help organisations ensure that their AI systems are developed and used in a responsible and ethical manner. AI system designers and developers have told the CNIL that they face challenges with GDPR compliance, especially during model training. In addition, the recently adopted EU AI Act will greatly affect the use of personal data in AI systems. Since both the GDPR and the AI Act will apply to the processing of personal data in the context of AI systems and algorithms, the CNIL’s recommendations aim to provide a coherent and complementary approach to data protection, ensuring compliance with both regulations. These recommendations relate to the development phase of AI systems, rather than the deployment phase.

CNIL advocates ethical AI use based on transparency, fairness, accountability, security, and privacy.

The CNIL’s recommendations on the responsible use of AI systems mainly cover transparency, fairness, accountability, security, and privacy. Transparency involves providing detailed information about the inner workings of AI systems to foster trust and enable informed decision-making. Fairness requires AI systems and their outcomes to be unbiased and to mitigate the risk of discrimination, ensuring equitable treatment for all individuals. Accountability involves establishing processes for reviewing and challenging AI-driven decisions, promoting responsible AI development and deployment. Security measures are critical, as they protect AI systems from unauthorised access, manipulation, and malicious attacks, safeguarding sensitive data and system integrity. Privacy and data protection considerations focus on respecting individuals’ privacy, safeguarding personal data, and giving individuals control over their own data. Together, these recommendations guide organisations in developing and deploying AI systems ethically and responsibly, harnessing their transformative potential while mitigating risks.

While concerns have been raised that the GDPR may stifle AI innovation in Europe, the CNIL aims to promote responsible AI development, especially where personal data is involved.

The notion that the GDPR will stifle innovation in artificial intelligence in Europe may be a misconception. It is important to acknowledge, however, that training datasets often contain personal data, and the GDPR and any other relevant laws must therefore be observed. Using this data poses risks to individuals, which must be taken into account when developing AI systems so that they respect people’s rights and freedoms, especially the right to privacy. The CNIL’s recommendations are an important step towards ensuring that AI systems are developed and used in a responsible and ethical manner.

Other DPAs across Europe have also provided guidance on AI and data protection, emphasising the importance of, and providing recommendations on, the ethical use of AI.

In addition to the more recent guidance from the CNIL, a number of other resources are available to help organisations in Europe develop and use AI systems responsibly. About a year ago, the ICO updated its detailed guidance on AI and data protection for organisations in both the public and private sectors. This guidance aims not only to ensure lawfulness in the development of AI systems, but also to show how organisations can achieve transparency, accuracy, and fairness. The update adds new content and provides a frame of reference for previously issued guidance, including very helpful context on statistical accuracy in the development and deployment of AI systems.

The AEPD has issued guidance on AI and data protection for those who use AI in their processing activities, emphasising quality and privacy in AI development and deployment.

The Spanish Data Protection Agency (AEPD) has also presented guidance on the implications of AI for data protection. This document is designed to assist individuals and organisations that use AI in their data processing activities. It covers topics such as the relationship between AI and data protection, the different types of relationships between data controllers and third parties, the conditions that AI technologies must meet to comply with the GDPR, and the management of the risks that such processing poses to rights and freedoms. The AEPD emphasises the importance of quality and privacy guarantees in the development and deployment of AI-based technologies and notes that compliance with the GDPR requires a certain level of maturity in AI models.

The Dutch DPA emphasises GDPR rules for AI and algorithms, reiterating the need for data protection impact assessments. 

The Dutch DPA, the Autoriteit Persoonsgegevens, also emphasised in its own recent guidance that the GDPR imposes very important rules on the processing of personal data in the realm of AI and algorithms. Organisations must have a legal basis for processing, be transparent with data subjects, limit the use of personal data to the specific purpose for which it was collected, and adhere to the principle of data minimisation. The Dutch DPA reiterated that algorithmic systems require a Data Protection Impact Assessment (DPIA) to identify privacy risks and to determine measures to mitigate them. Organisations must also comply with transparency obligations and facilitate the exercise of privacy rights. Prior consultation with the Dutch DPA is required where the processing presents a high risk and sufficient mitigating measures cannot be found.

At Aphaia, we commit to being the partner guiding you through a comprehensive journey of strengthening your data defences, ensuring compliance, and providing peace of mind in an ever-evolving digital landscape. Take that first step today, and let’s improve your practices and achieve trustworthy AI. Contact Aphaia today to find out more.
