
Regulating the right to privacy in the AI era


What about privacy in the AI era? Developments in 2019 showed that the GDPR’s rules on AI profiling could not be timelier. From smart billboards to home audio devices, AI has been deployed to make sense of everything we expose about ourselves, including our faces and the things we casually say. Despite the concerns these developments have repeatedly raised, legislative enforcement in the field has been somewhat slow. Will 2020 be the year when privacy regulation finally hits back?

AI technology

Despite toughening legislation, there still seems to be a clear bias towards technology deployment, irrespective of whether its implementation meets compliance requirements. It is worth noting that technology as such is rarely ‘non-compliant’; rather, it is the way technology is used that raises issues.

Take the smart billboards capable of reading your facial features that were deployed at numerous busy, publicly accessible locations in 2019. Have all of these projects undergone a privacy impact assessment, as required by the General Data Protection Regulation (GDPR)? Video monitoring of a public space in itself bears considerable privacy risks. When real-time analysis of your facial features is added to such monitoring, the GDPR clearly gives you the right to object to profiling. Even setting aside the obvious difficulty of expressing an objection to a billboard on a busy street, how will your objection to such profiling be honoured the next time you pass by?

Machine learning enables us to make increasing sense of vast amounts of data. If they do not already, the solutions deployed in 2020 are projected to feel even more intrusive. Ironically, this may not apply to smart systems designed to learn to provide a more subtle, less visibly intrusive, and therefore more effective link between our preferences and the commercial offers served to us. This might help us understand which aspect of targeted advertising we loathe more: the privacy intrusion itself or its clumsy implementation.

The law and AI

The notion that the law is simply ‘unable to keep up with technology’ is not only an inadequate response to the problem but also largely unfounded as a claim. The GDPR includes specific provisions on profiling and automated decision-making, tailored to the use of artificial intelligence in the processing of personal data. Such processing is subject to the right to obtain human intervention and the right to object, and additional limitations apply to special categories of data. Certain non-EU countries have started adopting similar principles, including Brazil, which passed its General Data Protection Law (LGPD) in 2018.

The California Consumer Privacy Act (CCPA), while less focused specifically on AI, empowers consumers by enabling them to prohibit the ‘sale of data’. This is by no means insignificant. Without the possibility to compile and merge data from different sources, its value for machine learning purposes arguably decreases. Conversely, without the ability to sell data, incentives to engage in excessive data analytics can somewhat dissipate.

When it comes to a broader framework for regulating artificial intelligence, the legal situation is, for now, less clear. Principles and rules are currently confined to non-binding guidelines, such as the EU’s Guidelines for Trustworthy AI. But this does not affect the privacy aspects, where European regulators can already impose fines of up to €20 million or 4% of a company’s global annual turnover, whichever is higher. CCPA fines are lower but may be multiplied by the number of users affected.
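The fine ceiling described above follows a simple rule: the greater of a fixed amount or a percentage of worldwide turnover. A minimal sketch (the function name is illustrative; actual fines are set case by case by regulators and are usually well below the ceiling):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR administrative fine for the most serious
    infringements: the greater of EUR 20 million or 4% of total
    worldwide annual turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# A company with EUR 1 billion in turnover faces a ceiling of EUR 40 million,
# while one with EUR 100 million in turnover is capped at EUR 20 million.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(100_000_000))    # 20000000
```

In other words, for any company with turnover above €500 million, the percentage-based limb of the rule becomes the binding one.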

The AI regulatory landscape

Early in 2019, the French data protection authority CNIL imposed a fine of €50 million on Google for insufficient transparency in relation to targeted advertising. As CNIL noted, “essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalisation, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.” While the fine was far below the upper limit available under the GDPR, the case paves the way for further questions from data protection authorities in 2020.

For example, are machine-learning algorithms and the data sources feeding them sufficiently explained? When data protection authorities seek answers to such questions, will they rely on the information provided by companies, or will they start digging deeper based on anecdotal evidence? Why is a user seeing a particular ad? Is it the product of a sophisticated machine-learning algorithm, or of analysing data that should not have been analysed?

So far, privacy legal battles have largely focused on formal compliance, such as in both ‘Schrems’ cases. But AI usage trends in 2020 might force regulators to look deeper into what is actually going on inside home-based and cloud-based black boxes. As I write this article, the EU has just moved to impose a temporary ban on facial recognition in public places.

Does your company use artificial intelligence in its day-to-day operations? If so, failure to adhere fully to the guidelines and rules of the GDPR and the Data Protection Act 2018 could result in hefty financial penalties. Aphaia’s data protection impact assessments and Data Protection Officer outsourcing will assist you with ensuring compliance.

This article was originally published on Drooms blog.

