
Spanish DPA AEPD publishes Guidelines on AI audits

The AEPD, the Spanish data protection authority, has published Guidelines on the requirements that should be met when conducting audits of data processing activities that embed AI.

Earlier this month, the Spanish DPA, the AEPD, published Guidelines on the requirements that should be considered when undertaking audits of personal data processing activities involving AI elements. The document addresses the special controls to which such audits should be subject.

Audits form part of the technical and organisational measures provided for in the GDPR and are deemed essential for the proper protection of personal data. The AEPD Guidelines contain a list of audit controls from which the auditor can select the most suitable ones on a case-by-case basis, depending on several factors such as the way the processing may affect GDPR compliance, the type of AI component used, the type of data processing and the risks that the processing activities pose to the rights and freedoms of the data subjects.

Special features of the AI audit methodology

The AEPD remarks that the audit process should be governed by the principles laid down in the GDPR, namely: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability.

The AEPD also points out that not all the controls listed in the Guidelines are meant to be applied together. The auditor should select those that are relevant based on the scope of the audit and the goals it pursues.

What types of data processing do these requirements apply to, and who should comply with them?

The Guidelines will be applicable where:

  • There are personal data processing activities at any stage of the AI component lifecycle; or
  • The data processing activities aim to profile individuals or make automated decisions which produce legal effects concerning the data subjects or similarly significantly affect them.

The AEPD states that in some cases it might be useful to carry out some preliminary assessments before moving forward with the audit, such as, inter alia, an assessment of the level of anonymisation of personal data, an assessment of the risk of re-identification and an assessment of the risk of losing data stored in the cloud.

The document is especially addressed to data controllers who audit personal data processing activities that include AI-based components, to data processors and developers who wish to offer additional guarantees around their products and services, to DPOs responsible for monitoring the processing and advising data controllers, and to auditors who work with this type of processing.

Control goals and actual controls

The main body of the Guidelines consists of five audit areas, each broken down into several objectives that contain the actual controls from which the auditors, or the person in charge of the process as relevant, can make their selection for the specific audit they are undertaking.

The AEPD provides an exhaustive list comprising more than a hundred controls, which are summed up in the following paragraphs.

  • AI component identification and transparency

This area includes the following objectives: inventory of the AI components, definition of responsibilities, and transparency.

The AEPD stresses the importance of keeping full records both of the components (including, inter alia, ID, version, date of creation and previous versions) and of the persons in charge of the process (such as their contact details, roles and responsibilities). There are also some provisions with regard to the information that should be available to the stakeholders, especially when it comes to the data sources, the data categories involved, the model and the logic behind the AI component, and the accountability mechanisms.
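As a purely illustrative aside (this sketch is ours, not part of the AEPD Guidelines), an organisation that keeps such an inventory programmatically might record each AI component along the following lines; all field names and example values here are hypothetical assumptions.

```python
# Illustrative sketch of an AI component inventory record.
# Field names and values are hypothetical, not AEPD terminology.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class ResponsiblePerson:
    name: str
    contact: str          # e.g. an email address
    role: str             # e.g. "controller", "developer", "DPO"
    responsibilities: str


@dataclass
class AIComponentRecord:
    component_id: str
    version: str
    created: date
    previous_versions: List[str] = field(default_factory=list)
    responsible: List[ResponsiblePerson] = field(default_factory=list)


# Example entry in the inventory
record = AIComponentRecord(
    component_id="credit-scoring-model",
    version="2.1.0",
    created=date(2021, 1, 15),
    previous_versions=["1.0.0", "2.0.0"],
    responsible=[ResponsiblePerson(
        name="Jane Doe",
        contact="dpo@example.com",
        role="DPO",
        responsibilities="Monitors the processing and advises the controller",
    )],
)
print(record.component_id, record.version)
```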

  • AI component purpose

There are several objectives within this area: identification of the AI component's purposes, uses and context; proportionality and necessity assessment; data recipients; data storage limitation; and analysis of the data subject categories.

The controls linked to these objectives are based on the standards and requirements needed to achieve the desired outcomes and on the elements that may affect said outcomes, such as conditioning factors, socioeconomic conditions and the allocation of tasks, among others, for which a risk assessment and a DPIA are recommended.

  • AI component basis

This area is built around the following objectives: identification of the AI component's development process and basic architecture, DPO involvement, and adequacy of the theoretical models and methodological framework.

The controls defined in this section mainly relate to the formal elements of the process and the methodology followed. They aim to ensure interoperability between the AI component development process and the privacy policy, to define the requirements that the DPO should meet and to guarantee their proper and timely involvement, and to set out the relevant revision procedures.

  • Data management

The AEPD details four objectives in this area: data quality, identification of the origin of the data sources, personal data preparation and bias control. 

While data protection is the ‘leitmotiv’ throughout the Guidelines, it is especially present in this chapter, which covers, inter alia, data governance, variables and proportionality distribution, the lawful basis for processing, the reasoning behind the selection of data sources, and the categorisation of data and variables.

  • Verification and validation

Seven objectives are pursued in this area: verification and validation of the AI component, adequacy of the verification and validation process, performance, coherence, robustness, traceability and security. 

The controls set out in this area focus on ensuring data protection compliance during the ongoing implementation and use of the AI component, seeking guarantees around, among others, the existence of a standard that allows for verification and validation procedures once the AI component has been integrated, a schedule for internal inspections, an analysis of false positives and false negatives, a procedure to detect anomalies and mechanisms for identifying unexpected behaviour.
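By way of illustration only (this example is ours, not drawn from the Guidelines), the analysis of false positives and false negatives mentioned above can be as simple as comparing a binary AI component's outputs against labelled validation data; the labels and figures below are hypothetical.

```python
# Illustrative false positive / false negative check for a binary
# classifier; data and figures are hypothetical, not from the AEPD.
from typing import List, Tuple


def false_positive_negative_rates(y_true: List[int], y_pred: List[int]) -> Tuple[float, float]:
    """Return (false positive rate, false negative rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr


# Hypothetical validation labels and model outputs
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1, 0, 1]
fpr, fnr = false_positive_negative_rates(y_true, y_pred)
print(f"False positive rate: {fpr:.2f}, false negative rate: {fnr:.2f}")
```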

Final remarks

The AEPD concludes by recalling that the Guidelines take a data protection approach to the audit of AI components, which means, on the one hand, that they may need to be combined with additional controls derived from other perspectives and, on the other hand, that not all controls will be relevant in every case: they should be selected according to the specific needs, considering the type of processing, the client’s requirements and the specific features of the audit and its scope, together with the results of the risk assessment.

Does your company use AI? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides both GDPR and DPA 2018 adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.
