“The implementation of automated decision-making and profiling in business may be directly impacted by GDPR,” warns Aphaia Partner Cristina Contero Almagro. “You may need to conduct a Data Protection Impact Assessment.”

The use of algorithms or AI methods has implications for privacy, not only at the design stage but throughout the whole process, since personal data is involved in both the programming and the implementation steps. Aphaia’s privacy professionals have developed their own approach for assisting our clients, from customer recommendation algorithms used in e-commerce to the assessment of X5GON, a global EU H2020-financed AI-based network of Open Educational Resources.

And remember, if you require assistance implementing the steps below yourself, please get in touch.

GDPR requirements

Due to its nature, scope and the technology involved, processing based on AI and algorithms is likely to result in a high risk to the rights and freedoms of natural persons, so a Data Protection Impact Assessment (DPIA) should always be carried out prior to the processing. Based on our expertise, Aphaia performs a specialised DPIA for cases where AI and algorithms are involved, covering, among other elements: the type of algorithm, the lawful basis, the characteristics of the datasets used for training, the specific restrictions on data collection and processing, and the categories of data on which the algorithm will base its decisions.
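As a purely illustrative sketch (not Aphaia’s methodology or any official DPIA template), the scope of such an assessment could be captured in a simple structured record; every field name and value below is an assumption made for the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmDPIARecord:
    """Hypothetical record of points a DPIA for an AI system might cover."""
    algorithm_type: str                   # e.g. "logistic regression credit-scoring model"
    lawful_basis: str                     # e.g. "contract", "consent", "legitimate interests"
    training_data_sources: List[str]      # provenance of the datasets used for training
    collection_restrictions: List[str]    # limits agreed for data collection and processing
    decision_input_categories: List[str]  # categories of data the decisions are based on
    identified_risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

# Illustrative entry for a credit-scoring model (invented values)
record = AlgorithmDPIARecord(
    algorithm_type="logistic regression credit-scoring model",
    lawful_basis="contract",
    training_data_sources=["historical loan repayments", "application forms"],
    collection_restrictions=["no special-category data", "EU storage only"],
    decision_input_categories=["income", "existing debt", "payment history"],
)
print(record)
```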
Where you have implemented AI or algorithmic decision-making at your company, you need to put in place a specific information policy covering details such as the logic involved and the envisaged consequences, enable data subjects to exercise their right to object to the use of the algorithm, and plan specific security measures. Aphaia analyses the use of algorithms on a case-by-case basis and prepares tailored policies and documents.
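One way a controller might wire the right to object into an automated decision flow is to route objected cases to human review before any outcome is applied. The Python sketch below is an assumption for illustration only, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class DecisionOutcome:
    subject_id: str
    automated_result: str   # what the algorithm decided
    final_result: str       # what is actually applied
    reviewed_by_human: bool

def apply_decision(subject_id: str, automated_result: str, has_objected: bool) -> DecisionOutcome:
    """If the data subject has objected to automated decision-making,
    hold the algorithm's output and route the case to human review."""
    if has_objected:
        return DecisionOutcome(subject_id, automated_result,
                               final_result="pending human review",
                               reviewed_by_human=True)
    return DecisionOutcome(subject_id, automated_result,
                           final_result=automated_result,
                           reviewed_by_human=False)

print(apply_decision("subject-42", "loan denied", has_objected=True))
```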

…and more

AI carries discrimination risks that should be addressed both before and during the implementation of the algorithm. Discriminatory results may stem from incorrect labelling of the training data or from an unbalanced training dataset. Having a data protection expert review the system from the very moment the algorithm is designed can make all the difference and be the key to success.
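As a minimal sketch of the kind of check such a review might include, the toy Python snippet below counts how training labels are distributed within each group in order to surface an unbalanced dataset; the data, groups and labels are invented for the example:

```python
from collections import Counter

# Toy training labels for a loan-approval model, tagged with a group attribute.
# Values are illustrative; in practice these would come from the real training set.
training_examples = [
    {"group": "A", "label": "approve"}, {"group": "A", "label": "approve"},
    {"group": "A", "label": "deny"},
    {"group": "B", "label": "deny"}, {"group": "B", "label": "deny"},
    {"group": "B", "label": "deny"}, {"group": "B", "label": "approve"},
]

def label_distribution(examples):
    """Count how labels are distributed within each group in the training data."""
    counts = {}
    for ex in examples:
        counts.setdefault(ex["group"], Counter())[ex["label"]] += 1
    return counts

for group, counts in label_distribution(training_examples).items():
    total = sum(counts.values())
    approve_rate = counts["approve"] / total
    print(f"group {group}: {dict(counts)} (approval rate {approve_rate:.0%})")
```

A large gap between groups in label distribution is not proof of discrimination, but it is the kind of signal that should trigger a closer look at how the training data was collected and labelled.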

Discrimination in practice – a closer look

Incorrect labelling of the training data or an unbalanced training dataset may pose an even higher risk to the rights and freedoms of data subjects once the algorithm is applied in practice. These implications are explained briefly below.

Discrimination arises from categorising individuals on the basis of variable features, which means that an individual is judged according to the group they fit best. The use of AI by loan companies is a clear example: an individual may be denied a loan simply because they have been categorised into a group of people who rarely pay their debts, when in reality this only means that the variables assigned to that individual are more similar to that group’s than to any other’s. This is why defining the variables properly and assigning suitable weights to each of them is essential in terms of privacy and discrimination.
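The toy Python sketch below illustrates this point: an applicant is assigned to whichever group profile their features most resemble, and changing the weight given to each variable can flip the resulting decision. All profiles, feature values and weights are invented for the example:

```python
# Toy illustration of group assignment by feature similarity and the effect of
# variable weights on a loan decision. All numbers are invented for illustration.

def weighted_distance(a, b, weights):
    """Weighted squared distance between two feature vectors."""
    return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights))

# Feature order: [income, existing_debt, years_at_job], normalised to 0-1
group_profiles = {
    "reliable_payers":   ([0.8, 0.2, 0.7], "loan approved"),
    "frequent_defaults": ([0.4, 0.6, 0.3], "loan denied"),
}
applicant = [0.75, 0.55, 0.35]

for weights in ([1.0, 1.0, 1.0], [5.0, 1.0, 1.0]):  # equal weights vs income-dominated
    nearest = min(group_profiles,
                  key=lambda g: weighted_distance(applicant, group_profiles[g][0], weights))
    print(f"weights {weights}: closest to '{nearest}' -> {group_profiles[nearest][1]}")
```

With equal weights this applicant lands in the "frequent_defaults" group and is denied; weighting income more heavily flips the assignment and the loan is approved. That sensitivity is precisely why the choice of variables and their weights deserves scrutiny in a DPIA.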