EDPB Guidelines on the processing of personal data in the context of the provision of online services

The European Data Protection Board (EDPB) has adopted draft guidelines on the processing of personal data in the context of the provision of online services, aiming to clarify Article 6(1)(b) GDPR.

Whenever we buy a car or a house, we are well aware that a contract is necessary. But how does this apply to online services? Because we do not interact with a vendor, fill the shopping cart ourselves and sometimes even enjoy apps and services for free, we may feel that no legal terms govern the transaction, but nothing could be further from the truth.

The EDPB guidelines do not express a view on the validity of contracts for online services generally, but they explain the role of data protection as one of the main bodies of rules that impacts the provision of these services. Pursuant to the GDPR, the processing of personal data can only take place when it is based on one of the six legal bases described in Article 6 GDPR. Specifically, Article 6(1)(b) states that “Processing shall be lawful only if and to the extent that at least one of the following applies […] processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract”. Therefore, controllers can process personal data on a contractual necessity basis, irrespective of how the services provided are financed.

What scenarios does the contractual necessity basis comprise?

Article 6(1)(b) applies where either of two conditions is met: the processing in question must be objectively necessary for the performance of a contract with a data subject, or the processing must be objectively necessary in order to take pre-contractual steps at the request of a data subject. Accordingly, two elements are required:

-The processing must take place in the context of a valid contract with the data subject.

-The processing must be necessary in order for that particular contract with the data subject to be performed.

How is “necessity” defined for this purpose?

The concept of necessity has an independent meaning in European Union law: it must reflect the objectives of data protection law and take into account the requirements of the relevant principles, notably fairness and purpose limitation. This means that where less intrusive alternatives could be adopted, or where the processing is useful but not objectively necessary for performing the contractual service, the processing cannot be justified on grounds of “necessity”.

The EDPB endorses the guidance previously adopted by the WP29 and warns that “‘necessary for the performance of a contract with the data subject’ must be interpreted strictly and does not cover situations where the processing is not genuinely necessary for the performance of a contract, but rather unilaterally imposed on the data subject by the controller”.

How should the “necessity” be assessed?

An assessment should be carried out prior to the commencement of processing, having regard to the particular aim, purpose or objective of the service. Additionally, one should consider the data subjects’ expectations and perspective when entering into the contract. The EDPB offers the following questions as guidance:

-What is the nature of the service being provided to the data subject? What are its distinguishing characteristics?

-What is the exact rationale of the contract (i.e. its substance and fundamental object)?

-What are the essential elements of the contract?

-What are the mutual perspectives and expectations of the parties to the contract? How is the service promoted or advertised to the data subject? Would an ordinary user of the service reasonably expect that, considering the nature of the service, the envisaged processing will take place in order to perform the contract to which they are a party?

Where the contract consists of several separate services or elements of a service, the applicability of Article 6(1)(b) shall be assessed separately for each of them.

Termination of contract

As a general rule, processing should stop once the contract has come to an end in full, and the data should then be erased pursuant to Article 17(1)(a) GDPR. However, it is sometimes possible to switch to a new legal basis, e.g. where data subjects have given their consent to processing after termination or where the processing is necessary for compliance with a legal obligation. The data subject should be properly informed of this before entering into the contract.

Specific situations

-Improvements and modifications to a service: such processing usually cannot be regarded as being objectively necessary for the performance of the contract with the user.

-Fraud prevention: in the view of the EDPB, such processing is likely to go beyond what is objectively necessary for the performance of a contract. However, it could still be lawful under another basis in Article 6, such as legal obligation or legitimate interests.

-Online behavioural advertising: according to the WP29, contractual necessity is not a suitable legal ground for building a profile of the user’s tastes and lifestyle choices based on their clickstream and the items purchased. Furthermore, in line with ePrivacy requirements, prior consent should be obtained to place the cookies needed to engage in behavioural advertising.

-Personalisation of content: where the function of the service directly relates to personalised content, it can be deemed objectively necessary for the performance of the contract. Otherwise, the controller should rely on a different basis to process the data.

The EDPB welcomes comments on the Guidelines; they should be sent to EDPB@edpb.europa.eu by 24 May 2019 at the latest.

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.

Aphaia joins European AI Alliance

We are happy to announce that Aphaia has become a member of the European AI Alliance.

The European AI Alliance is a multi-stakeholder forum for engaging in a broad and open discussion of all aspects of AI development and its impact on the economy and society.

As members of the European AI Alliance we will be able to interact with the High-Level Expert Group on AI (AI HLEG), which was appointed by the European Commission to support the implementation of the European strategy on AI and serves as the Steering Group of the Alliance.

We will actively contribute to the discussion on the future of AI through a dedicated platform where we will share our thoughts on the matter. Our input and feedback will be considered to feed into the European Commission’s policy-making in this area, and the AI HLEG will be able to draw on this input when preparing its drafts and Guidelines, which include the elaboration of recommendations on ethical, legal and societal issues related to AI.

As AI Ethics frontrunners we feel proud and grateful to be part of this exciting and challenging initiative and we are looking forward to being involved in the EU AI debate.

As part of the activities carried out so far, the AI HLEG presented a first draft of the Guidelines on AI Ethics in December 2018. Following further deliberations by the group in light of discussions within the European AI Alliance, a stakeholder consultation and meetings with representatives from Member States, the Guidelines were revised and published in April 2019.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

Google announces an AI advisory board – only to dissolve it

Google creates advisory board to monitor the ethical use of AI

In line with the draft set of AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on AI (AI HLEG) last December, Google and other Big Tech companies such as Amazon and Microsoft are taking steps towards the ethical use of AI. Google, for its part, has created an external advisory board to monitor AI ethics at the company.

The GDPR states that the data controller shall implement suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests in relation to the use of AI, which makes unbiased algorithms and balanced training datasets necessary. This is an example of privacy by design, which requires a privacy expert to monitor the process from the very first stage of the project.

Google also announced their AI Principles last June, with the aim of assessing AI applications in view of seven main objectives: be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence and be made available for uses that accord with these principles.

Kent Walker, Senior Vice President of Global Affairs at Google, pointed to facial recognition and fairness in machine learning as some of the most relevant topics to be addressed by the advisory board. The board comprises international experts in the fields of technology, ethics, linguistics, philosophy, psychology and politics.

UPDATE: However, after some of its members received wide criticism, Google has scrapped the initial board composition and gone back to the drawing board.


Public consultation on the ethical principles of Artificial intelligence

The European Commission has published the results of the public consultation on the ethical principles of Artificial intelligence.

Can you imagine being able to question the ethical choices of the people who serve us in shops and establishments? Imagine, for example, going to the bank to apply for a credit card and being able to discuss with the person in charge the ethical reasons for granting or denying your request. Or imagine parents asking a headteacher which human rights he or she took into account when deciding whether or not their child should be enrolled. It would be crazy to think of a society where every single action is judged against imposed ethical values, used as a benchmark to determine what type of house one should have or what countries one should travel to, similar to the famous episode of the Black Mirror series.

Well, it may not be as crazy as we imagine: something similar is being elaborated by the European Commission, although it applies not to people but to artificial intelligence. This is less striking given that the ultimate goal of artificial intelligence is to resemble human behaviour as closely as possible, with the advantages that automation brings. In this sense, it is necessary to endow artificial intelligence with certain ethical values, framing its actions and decisions within a minimum of moral norms that allow its insertion into society.

For this purpose, a group of experts on Artificial Intelligence published a report on 18 December on the ethical basis that must be present in systems incorporating artificial intelligence (you can read a summary of the document here). The key initiatives include the establishment of framework ethical principles and their practical implementation, in both cases from a “human-centric approach”, which prioritises the civil, political, economic and social status of the human being.

The draft was put out for public consultation, and the Commission has now published the results, which you can access here. The final document is expected to be published in March, creating an ethical commitment to which companies and institutions can freely adhere.
