“The implementation of automated decision-making and profiling in business may be directly impacted by GDPR,” warns Aphaia Partner Cristina Contero Almagro. “You may need to conduct a Data Protection Impact Assessment.”

The use of algorithms and AI methods has implications for privacy. Aphaia’s experts have developed their own approach to assisting clients, from customer recommendation algorithms used in e-commerce to the assessment of X5GON, a global EU H2020-financed AI-based network of Open Educational Resources.

If you require assistance, please get in touch with the Aphaia team. We’re here to help.

GDPR requirements

Processing that involves AI and algorithms can put the rights and freedoms of natural persons at risk. With that in mind, a DPIA should always be carried out before any processing takes place. Drawing on our expertise, Aphaia’s team performs a specialised DPIA for cases involving the use of AI and algorithms. Some of the key areas we examine include: the type of algorithm, the lawful basis, the characteristics of the datasets used for training, the specific restrictions on data collection and processing, and the categories of data on which the algorithm will base its decisions.
Where you have implemented AI- or algorithm-based decision-making in your company, you need to put a specific information policy in place covering details such as the logic involved and the envisaged consequences, as well as enable data subjects to exercise their right to object to the use of the algorithm. Aphaia analyses the use of algorithms on a case-by-case basis and prepares tailored policies and documents.

…and more

AI involves discrimination risks that should be addressed before and during the implementation of the algorithm. Discriminatory results might come from mislabelled training data or from an unbalanced training dataset. Having a data protection expert to check this from the outset can make all the difference!

Discrimination in practice – A closer look

Incorrect labelling of training data or an unbalanced training dataset may result in an even higher risk to the rights and freedoms of data subjects when it comes to the way algorithms work in practice. These implications are illustrated in the diagram on the right and briefly summarised in the lines below.

Discrimination happens as a result of categorising individuals based on variable features, which means that an individual will be judged according to the group he or she fits into best. The use of AI by loan companies is a clear example of this: an individual may have a loan denied simply because he or she has been categorised into a group of people who don’t make a concerted effort to pay off their debt, when all that really means is that the variables assigned to that individual are closer to that group’s than to any other’s. It is for this reason that it’s important to define the variables properly, and to assign suitable weight to each of them.
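The group-based logic described above can be sketched in a few lines of code. The following is a minimal illustration only, not any real scoring system: all feature names, numbers, and group labels are hypothetical. An applicant is assigned to whichever group of past borrowers has the closest average profile, so the decision reflects the group, not the individual, and changing the weight given to a single variable can flip the outcome.

```python
def centroid(rows):
    """Mean feature vector of a group of training examples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b, weights):
    """Weighted squared distance -- the 'suitable weight' per variable."""
    return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights))

# Hypothetical training data. Features: [income_band, debt_ratio].
# Note the 'defaulted' group is smaller -- an unbalanced dataset like this
# also skews where the group averages end up.
repaid = [[3, 0.2], [4, 0.3], [5, 0.1]]
defaulted = [[2, 0.8], [1, 0.9]]

groups = {"approve": centroid(repaid), "deny": centroid(defaulted)}

def decide(applicant, weights):
    """Assign the applicant to the nearest group's outcome."""
    return min(groups, key=lambda g: distance(applicant, groups[g], weights))

# The same applicant, judged under two different weightings:
print(decide([3, 0.7], [1.0, 1.0]))   # equal weights -> "approve"
print(decide([3, 0.7], [0.1, 10.0]))  # debt ratio dominates -> "deny"
```

The same individual is approved or denied depending only on how much weight the model places on one variable, which is exactly why defining the variables and their weights properly matters so much in practice.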