AI and GDPR fines

In this week’s vlog, we looked at fines under the GDPR and the role of Artificial Intelligence in this context.

One of the most talked-about fines under the GDPR so far has been Facebook’s £500,000 fine from the Information Commissioner’s Office (ICO) for serious breaches of data protection law. The ICO’s investigation found that Facebook processed users’ personal information unfairly by allowing application developers access to data subjects’ information without gaining clear and informed consent. Facebook also failed to keep the personal information secure because it did not carry out checks on apps and developers using its platform. As a result, the Facebook data of up to 87 million people worldwide was harvested without their knowledge.

A subset of this data was then shared with other organisations, including the parent company of Cambridge Analytica, which was involved in political campaigning in the US. The ICO found that the personal information of at least one million UK users was among the harvested data and was consequently put at risk of further misuse.

The Federal Trade Commission also formally announced its $5 billion settlement with Facebook this summer, after a long investigation into the Cambridge Analytica scandal and other privacy breaches. The $5 billion fine is the second-largest ever levied by the FTC.

Now crucially, GDPR fines are designed to make non-compliance a costly mistake for businesses regardless of their size. The most serious infringements may result in a penalty of up to £17m, or 4% of the business’s global turnover for the preceding financial year, whichever is greater.
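The upper-tier penalty ceiling described above can be sketched as a simple calculation — a minimal illustration only, with example turnover figures, not legal advice:

```python
# Illustrative sketch of the upper-tier GDPR penalty ceiling described
# above: up to £17m, or 4% of global annual turnover for the preceding
# financial year, whichever is greater. Figures are examples only.

FIXED_CAP_GBP = 17_000_000  # fixed cap for the most serious infringements
TURNOVER_RATE = 0.04        # 4% of global annual turnover

def max_penalty(global_turnover_gbp: float) -> float:
    """Return the upper-tier penalty ceiling for a given annual turnover."""
    return max(FIXED_CAP_GBP, TURNOVER_RATE * global_turnover_gbp)

# For a smaller business, the fixed £17m cap dominates.
print(max_penalty(10_000_000))     # 17000000
# For a large multinational, the 4% figure dominates.
print(max_penalty(2_000_000_000))  # 80000000.0
```

Either way, the ceiling scales with the business rather than being a flat fee, which is what makes non-compliance costly at any size.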

These include any violations of the articles that govern:

  • The basic principles for processing.
  • The conditions for consent.
  • The data subjects’ rights.
  • The transfer of data to an international organization or a recipient in a third country.

Misuse of AI systems may trigger most of these, because there are only two valid bases for processing when it comes to automated decision-making and profiling: contract and consent. Consent must be explicit.

Automated decision-making and profiling are regulated in Article 22 of the GDPR, which is included within Chapter III, “Rights of the data subject”.

Applying the GDPR’s security measures therefore becomes an essential step to avoid data breaches and fines. These measures include:

  • using pseudonymisation and encryption of personal data;
  • ensuring the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
  • restoring the availability and access to personal data in a timely manner in the event of a physical or technical incident;
  • lastly, adding a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
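The first of these measures, pseudonymisation, can be sketched with a keyed hash — a minimal illustration, assuming a secret key stored separately from the data (the key name and approach here are our own example, not a prescribed GDPR mechanism):

```python
import hmac
import hashlib

# Sketch: pseudonymising a personal identifier with a keyed hash
# (HMAC-SHA256), so the raw value never appears in the processing system.
# SECRET_KEY is a placeholder; a real controller would hold the key in a
# separate, access-controlled store so the pseudonyms cannot be reversed
# or recomputed without it.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a personal identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The same input always maps to the same pseudonym, so records can still
# be linked for processing without exposing the raw identifier.
print(pseudonymise("jane.doe@example.com"))
```

Because the mapping is deterministic under the key, pseudonymised records remain usable for analysis, while re-identification requires access to the separately held key.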

However, the use of AI requires some additional measures to safeguard the data subject’s rights, freedoms and legitimate interests: the right to obtain human intervention on the part of the controller, to express his or her point of view, and to contest the decision.

The ICO intends to impose a fine of over £180 million on British Airways for the theft of customers’ personal and financial information, and a fine of around £100 million on Marriott for accidentally exposing 339 million guest records globally. The British Airways and Marriott data breaches had nothing to do with AI. Can you imagine what the fines would have been if AI had been involved? Feel free to share your thoughts with us.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.
