Legislative enforcement and AI

Regulating the right to privacy in the AI era

What about privacy in the AI era? New developments in 2019 have shown that the GDPR rules on AI profiling could not be timelier. From smart billboards to home audio devices, AI has been deployed to make sense of everything we expose about ourselves, including our faces and the things we casually say. Despite these developments, which have raised concerns on numerous occasions, legislative enforcement in the field has been somewhat slow. Will 2020 be the year when privacy regulation finally hits back?

AI technology

Despite toughening legislation, there still seems to be a clear bias towards technology deployment, irrespective of whether its implementation meets compliance requirements. It is worth noting that technology as such is rarely ‘non-compliant’; rather, it is the way it is used that raises issues.

Take the smart billboards capable of reading your facial features that were deployed at numerous busy, publicly accessible locations in 2019. Have these projects all undergone a General Data Protection Regulation (GDPR) privacy impact assessment, as required by law? One should note that video monitoring of a public space in itself bears considerable privacy risks. When real-time analysis of your facial features is added to such video monitoring, the GDPR clearly gives you the right to object to profiling. Even if we disregard the obvious difficulty of expressing your objection to a billboard on a busy street, how will that objection be observed the next time you pass by?

Machine learning enables us to make increasing sense of vast amounts of data. The solutions deployed in 2020 are projected to feel even more intrusive, if they do not already. Ironically, this might not hold for certain smart systems designed to learn a more subtle, less visibly intrusive and therefore more effective link between our preferences and the commercial offers served to us. This might help us understand which aspect of targeted advertising we loathe more: the privacy intrusion or its clumsy implementation.

The law and AI

The notion that the law is simply ‘unable to keep up with technology’ is not only an inadequate response to the problem but also largely unfounded as a claim. The GDPR includes specific provisions on profiling and automated decision-making, tailored precisely to the use of artificial intelligence in the processing of personal data. Such processing is subject to the right to obtain human intervention and the right to object to it. Additional limitations apply to special categories of data. Certain non-EU countries have started adopting similar principles, including Brazil, which passed its General Data Protection Law (LGPD) in 2018.

The California Consumer Privacy Act (CCPA), while less focused specifically on AI, empowers consumers by enabling them to prohibit the ‘sale’ of their data. This is by no means insignificant. Without the possibility of compiling and merging data from different sources, its value for machine-learning purposes arguably decreases. Likewise, without the ability to sell data, the incentives to engage in excessive data analytics may somewhat dissipate.

When it comes to a broader framework for the regulation of artificial intelligence, the legal situation is for now less clear. Principles and rules are currently confined to non-binding guidelines, such as the EU Ethics Guidelines for Trustworthy AI. But this does not affect the privacy aspects, where European regulators are already able to impose fines of up to €20 million or 4% of a company’s global turnover. CCPA fines are lower but may be multiplied by the number of users affected.

The AI regulatory landscape

Early in 2019, the French data protection authority CNIL imposed a fine of €50 million on Google for insufficient transparency in relation to targeted advertising. As CNIL noted, “essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalisation, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.” While the fine was far from the upper limit imposable under the GDPR, the case paves the way for further questions from data protection authorities in 2020.

For example, are machine-learning algorithms and the data sources feeding them sufficiently explained? When data protection authorities seek answers to such questions, will they rely on the information provided by companies, or might they start digging deeper on the basis of anecdotal evidence? Why is a user seeing a particular ad? Is it the product of a sophisticated machine-learning algorithm, or of analysing data that should never have been analysed?

So far, privacy legal battles have largely focused on formal compliance, as in both ‘Schrems’ cases. But AI usage trends in 2020 might force regulators to look deeper into what is actually going on inside home-based and cloud-based black boxes. As I write this article, the EU is weighing a temporary ban on facial recognition in public places.

Does your company use artificial intelligence in its day-to-day operations? If so, failure to adhere fully to the guidelines and rules of the GDPR and the Data Protection Act 2018 could result in hefty financial penalties. Aphaia’s data protection impact assessments and Data Protection Officer outsourcing will help you ensure compliance.

This article was originally published on the Drooms blog.

AI, IoT, Big Data

Combining AI, IoT and Big Data: Can Regulation Cope?

In last week’s vlog we delved into the regulation of AI, IoT and Big Data when used together.

AI, IoT and Big Data: these technologies and digital concepts undoubtedly play a major role in today’s highly connected era. As they are now, and continue to become ever more, integral to our day-to-day lives, several regulatory measures, including the GDPR and the DPA 2018, have been implemented to protect individuals’ personal data, privacy and associated rights. But how should they be regulated when they all work together?

First of all, we need to understand what each of these concepts means:

Artificial Intelligence (AI): John McCarthy, who coined the term in 1956, defined it as “the science and engineering of making intelligent machines.” A more modern definition explains AI as “the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction”.

Internet of Things (IoT): IoT is understood as “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction”.

Big Data: a popular definition of Big Data, provided by the Gartner IT glossary, is “…high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.”

As you can see, all of them involve data processing, so it is clear that they should comply with the GDPR when the information concerned is personal data, or with the Free Flow of Non-Personal Data Regulation when it is not. But are there any other mandatory legal requirements, beyond data protection and privacy, that they should meet? There are, but the challenge is that each of these technologies has different features and needs.

For example, AI raises many ethical concerns, as discussed in some of our previous videos; IoT depends on 5G and telecoms regulation; and Big Data may face challenges from both AI and IoT.

That said, how can regulation cope with all these technologies when they are applied together, e.g. in a single project?

We suggest several scenarios for how this could be addressed:

• Impose an obligation on businesses, universities, public bodies, etc. to have a legal ICT professional on the team before carrying out any project that involves the use of AI, IoT, Big Data or similar technologies. A specific certificate for these professionals might be issued by accreditation bodies.
• Tech-specific regulation and legislation that brings together the main ICT risks and challenges, with mandatory minimum requirements on how these technologies should be applied.
• An independent EU legal-tech body that issues guidance and codes of conduct on the main ICT issues and challenges.

Even though regulation and codes of conduct may help to unify standards, due diligence and commitment from the managers and the team involved in a project remain essential to ensuring appropriate safeguards, regardless of the specific externally imposed requirements.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

GDPR Challenges For Artificial Intelligence

Data protection in algorithms

Technological development is enabling the automation of all kinds of processes, much as Henry Ford did with car manufacturing in 1914; the difference is that now, instead of cars, the assembly line produces decisions that affect privacy. Since the GDPR came into force on 25th May 2018, many questions have arisen about whether the Regulation might block data-based projects.

In this article, we aim to clarify some of the main GDPR concepts that apply to the processing of large amounts of data and to algorithmic decision-making. It was inspired by the report the Norwegian Data Protection Authority (Datatilsynet) published in January this year: “Artificial Intelligence and Privacy”.

Artificial intelligence, and the elements it comprises such as algorithms and machine/deep learning, is affected by the GDPR for three main reasons: the huge volume of data involved, the need for a training dataset and the capacity for automated decision-making without human intervention. These three ideas engage four GDPR principles: fairness of processing, purpose limitation, data minimisation and transparency. We briefly explain each of them below; the first paragraph of each concept sets out the issue and the second describes how to address it under the GDPR.

One should take into account that without a lawful basis for automated decision-making (contract/consent), such processing cannot take place.

Fairness of processing: A discriminatory result after automated data processing can derive both from the way the training data has been classified (supervised learning) and from the characteristics of the dataset itself (unsupervised learning). In the first case, the algorithm will produce results that correspond to the labels used in training, so if the labelling was biased, so will be the output. In the second scenario, where the training dataset comprises two categories of data with different weights and the algorithm is risk-averse, it will tend to favour the group with the higher weight.

GDPR compliance at this point requires implementing regular tests to monitor distortion in the dataset and minimise the risk of error.
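To make the idea of regular testing concrete, here is a minimal sketch, assuming a Python pipeline, of a periodic check that compares the rate of favourable automated decisions across groups defined by a protected attribute. The group labels, sample data and four-fifths threshold are illustrative assumptions, not values prescribed by the GDPR.

from collections import defaultdict

def selection_rates(decisions):
    # decisions: iterable of (group, outcome) pairs, outcome in {0, 1}
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_check(decisions, threshold=0.8):
    # Flag groups whose favourable-decision rate falls below `threshold`
    # times the best-served group's rate (the 'four-fifths' heuristic).
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Toy run: group B is favoured half as often as group A and gets flagged.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(disparate_impact_check(sample))  # {'B': 0.5} -> investigate the model

A check like this, run on every retraining cycle, documents that distortion in the dataset is being monitored rather than assumed away.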

Purpose limitation: In cases where previously retrieved personal data is to be re-used, the controller must consider whether the new purpose is compatible with the original one. If it is not, new consent is required or the basis for processing must be changed. This principle applies both to the internal re-use of data and to the sale of data to other companies. The only exceptions to the principle relate to scientific or historical research, or to statistical or archival purposes in the public interest. The GDPR states that scientific research should be interpreted broadly and include technological development and demonstration, basic research, and applied and privately financed research. These elements indicate that, in some cases, the development of artificial intelligence may be considered to constitute scientific research. However, when a model develops on a continuous basis, it is difficult to distinguish between development and use, and hence where research stops and usage begins. It is therefore difficult to reach a conclusion about the extent to which the development and use of these models constitute scientific research.

Personal data used to train algorithms should come from a dataset originally collected for that purpose, either with the consent of the parties concerned or subject to anonymisation.

Data minimisation: The need to collect and maintain only the data that are strictly necessary, and without duplication, requires pre-planning and a detailed study before the algorithm is developed, so that its purpose and usefulness are well explained and defined.

This may be achieved by making it difficult to identify individuals from the data held. The degree of identifiability is restricted by both the amount and the nature of the information used, as some details reveal more about a person than others. While deleting information is rarely feasible in this type of application because of continuous learning, privacy by design and by default must govern any machine-learning process, applying encryption or anonymised data whenever possible. Pseudonymisation and encryption techniques protect the data subject’s identity and help limit the extent of the intrusion.
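As an illustration of pseudonymisation in practice, here is a minimal sketch, assuming a Python pre-processing step, in which direct identifiers are replaced with a keyed hash (HMAC) before training; the field names and key handling are hypothetical. Records remain linkable for learning, while re-identification requires a secret key stored separately from the dataset.

import hmac
import hashlib

SECRET_KEY = b"store-me-separately-from-the-dataset"  # e.g. in a key vault

def pseudonymise(record, identifiers=("name", "email")):
    # Replace each direct identifier with a stable keyed hash.
    out = dict(record)
    for field in identifiers:
        if field in out:
            digest = hmac.new(SECRET_KEY, out[field].encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out

row = {"name": "Jane Doe", "email": "jane@example.com", "age": 34}
print(pseudonymise(row))  # identifiers hashed, 'age' kept for learning

Because the same input always yields the same pseudonym, the model can still learn per-person patterns without ever seeing a name or email address.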

Transparency, information and the right to explanation: All data processing should be preceded by the provision of information to the data subjects, together with a number of additional guarantees for automated decision-making and profiling, such as the right to obtain human intervention on the part of the controller, to express one’s point of view, to challenge the decision and to receive an explanation of the decision reached after the evaluation.

The GDPR does not specify whether the explanation should refer to the general logic on which the algorithm is built or to the specific logical path followed to reach a particular decision, but the accountability principle requires that the data subject be given a satisfactory explanation, which may include a list of the data variables, the ETL (extract, transform and load) process or the model’s features.
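By way of illustration, here is a minimal sketch of one way to support the ‘general logic’ side of such an explanation: reporting which input variables weigh most in a trained model. The scikit-learn classifier and the toy features below are assumptions for the example; such a listing would be one element of an explanation, not a sufficient one by itself.

from sklearn.ensemble import RandomForestClassifier

FEATURES = ["income", "tenure_months", "late_payments"]  # hypothetical variables
X = [[52000, 24, 0], [18000, 3, 4], [34000, 12, 1], [61000, 40, 0]]
y = [1, 0, 1, 1]  # hypothetical past decisions: 1 = approved

model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank the variables by their weight in the model's decisions, ready to
# be included in the information provided to data subjects.
for name, weight in sorted(zip(FEATURES, model.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name}: {weight:.2f}")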

A data protection impact assessment, carried out with the involvement of the DPO, is required before any processing involving algorithms, artificial intelligence or profiling, in order to evaluate and address the risks to the rights and freedoms of data subjects.

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.