“The implementation of automated decision-making and profiling in business may be directly impacted by the GDPR,” warns Aphaia Partner Cristina Contero Almagro. “You may need to conduct a Data Protection Impact Assessment.”
The use of algorithms or AI methods has implications for privacy, not only at the design stage but throughout the whole process, since personal data is involved in both the programming and the implementation steps. Aphaia’s privacy professionals have developed their own approach to assisting our clients, from customer recommendation algorithms used in e-commerce to the assessment of X5GON, a global EU H2020-financed AI-based network of Open Educational Resources.
And remember, if you require assistance implementing the below steps yourself, please get in touch.
Discrimination in practice – a closer look
Mislabelled training data or an unbalanced training dataset may pose an even higher risk to the rights and freedoms of data subjects when it comes to the way algorithms work in practice. These implications are outlined in the diagram on the right and briefly explained below.
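The effect of an unbalanced training set can be sketched in a few lines of Python (all data here is hypothetical): a model judged only by overall accuracy can score very well while systematically failing everyone in the under-represented group.

```python
# Illustrative sketch with hypothetical data: when one label vastly
# outnumbers the other, the constant model that always predicts the
# majority label already achieves high accuracy -- yet it misclassifies
# every member of the minority group.
from collections import Counter

# 95 "repaid" examples vs only 5 "defaulted" examples
training_labels = ["repaid"] * 95 + ["defaulted"] * 5

# The trivially "best" constant predictor under plain accuracy:
majority_label, _ = Counter(training_labels).most_common(1)[0]

accuracy = training_labels.count(majority_label) / len(training_labels)
print(majority_label, accuracy)  # 95% accurate, but every defaulter is missed
```

This is why accuracy alone says little about the risk to data subjects: the harm is concentrated in the group the training data under-represents.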
Discrimination happens as a result of categorising individuals based on variable features, which means that an individual will be judged according to the group he or she fits best. The use of AI by loan companies is a clear example of this: an individual may be denied a loan simply because he or she has been categorised into a group of people who rarely pay their debts, when in reality that only means the individual’s variables are more similar to that group’s than to any other’s. This is why defining the variables properly, and assigning a suitable weight to each of them, is essential in terms of privacy and discrimination.
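The group-assignment mechanism described above can be sketched as a toy nearest-centroid classifier (all profiles and thresholds here are hypothetical): the applicant is judged by whichever group’s average profile their variables most resemble, not by their own repayment history.

```python
# Illustrative sketch with hypothetical data: a loan decision driven
# purely by which group's average profile ("centroid") the applicant's
# variables sit closest to. The individual inherits the group's verdict.
from math import dist

# Hypothetical training profiles: (income_score, debt_ratio)
good_payers = [(0.8, 0.2), (0.7, 0.3), (0.9, 0.1)]
bad_payers = [(0.3, 0.7), (0.2, 0.8), (0.4, 0.6)]

def centroid(group):
    xs, ys = zip(*group)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(applicant):
    # Nearest-centroid rule: the applicant is judged by the group
    # their variables best fit, not by their individual behaviour.
    if dist(applicant, centroid(good_payers)) <= dist(applicant, centroid(bad_payers)):
        return "approve"
    return "deny"

# A borderline applicant is denied purely because their variables sit
# slightly closer to the "bad payers" centroid.
print(classify((0.45, 0.55)))  # deny
print(classify((0.75, 0.25)))  # approve
```

Shifting which variables go into the profile, or how each is weighted, moves the centroids and can flip the decision for borderline applicants, which is exactly why variable selection and weighting matter for fairness.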