EU-US Privacy Shield

EU-US Privacy Shield invalidation business implications follow-up

Since the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield in its Schrems II judgement, delivered two weeks ago, many questions have arisen around international data transfers to the US.

After the invalidation of the EU-US Privacy Shield by the CJEU two weeks ago, as reported by Aphaia, data transfers to the US require another valid safeguard or mechanism that provides an adequate level of data protection similar to the one granted by the GDPR.

European Data Protection Board guidelines

With the aim of clarifying the main issues derived from the invalidation of the EU-US Privacy Shield, the European Data Protection Board (EDPB) has published Frequently Asked Questions on the Schrems II judgement. These answers are expected to be developed and complemented along with further analysis, as the EDPB continues to examine and assess the CJEU decision.

In the document, the EDPB reminds organisations that there is no grace period during which the EU-US Privacy Shield is still deemed a valid mechanism to transfer personal data to the US. Businesses that were relying on this safeguard and wish to keep transferring data to the US should therefore find another valid safeguard which ensures a level of protection essentially equivalent to that guaranteed within the EU by the GDPR.

What about Standard Contractual Clauses?

The CJEU considered that the validity of SCCs depends on the ability of the data exporter and the recipient of the data to verify, prior to any transfer and taking into account the specific circumstances, whether that level of protection can be respected in the US. This seems difficult, though, because the Court found that US law (i.e. Section 702 FISA and EO 12333) does not ensure an essentially equivalent level of protection.

The data importer should inform the data exporter of any inability to comply with the SCCs and, where necessary, with any supplementary measures. The data exporter should then carry out an assessment, taking into account the circumstances of the transfer and any supplementary measures that could be put in place, to ensure that US law does not impinge on the adequate level of protection. The data exporter may contact the data importer to verify the legislation of its country and collaborate on the assessment. Where the result is not favourable, the transfer should be suspended; if the data exporter nevertheless intends to continue with the transfer, it should notify the competent Supervisory Authority.
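The case-by-case logic that exporters are expected to follow can be sketched as a simple decision procedure. This is only an illustrative sketch of the reasoning, not an official EDPB procedure; the function and field names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class TransferAssessment:
    # Hypothetical record of a case-by-case transfer assessment;
    # the field names are illustrative, not taken from the EDPB FAQs.
    destination_country: str
    importer_can_comply_with_sccs: bool
    supplementary_measures: list
    equivalent_protection_ensured: bool


def decide(a: TransferAssessment) -> str:
    """Return the outcome suggested by the logic summarised above."""
    if a.equivalent_protection_ensured:
        return "proceed under SCCs"
    if a.importer_can_comply_with_sccs and a.supplementary_measures:
        # Supplementary measures may restore an essentially equivalent
        # level of protection; the assessment should be documented.
        return "proceed with supplementary measures"
    # Unfavourable result: suspend, or notify the competent
    # Supervisory Authority if the transfer is to continue.
    return "suspend or notify the supervisory authority"
```

For instance, `decide(TransferAssessment("US", False, [], False))` falls through to the suspend-or-notify branch, mirroring the unfavourable-assessment scenario described above.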

What about Binding Corporate Rules (BCRs)?

Given that the reason for invalidating the EU-US Privacy Shield was the degree of interference created by US law, the CJEU judgement also applies in the context of BCRs, since US law will also have primacy over this tool. As with SCCs, the data exporter should run an assessment and, where the result is not favourable and the data exporter plans to continue with the transfer, report to the competent Supervisory Authority.

What about derogations of Article 49 GDPR?

Article 49 GDPR comprises further conditions under which personal data can be transferred to a third country in the absence of an adequacy decision and appropriate safeguards such as SCCs and BCRs, namely:

  • Consent. The CJEU points out that consent should be explicit, specific to the particular data transfer or set of transfers, and informed. This element involves practical obstacles for businesses processing their customers’ data, as it would imply, for instance, asking for every customer’s individual consent before storing their data on Salesforce.
  • Performance of a contract between the data subject and the controller. It is important to note that this only applies where the transfer is occasional and only to transfers that are objectively necessary for the performance of the contract.

What about third countries other than the US?

The CJEU has indicated that SCCs, as a rule, can still be used to transfer data to a third country; however, the threshold set by the CJEU for transfers to the US applies to any third country, and the same goes for BCRs.

What should I do when it comes to processors transferring data to the US?

Pursuant to the EDPB FAQs, where no supplementary measures can be provided to ensure that US law does not impinge on the essentially equivalent level of protection as granted by the GDPR and if derogations under Article 49 GDPR do not apply, “the only solution is to negotiate an amendment or supplementary clause to your contract to forbid transfers to the US. Data should not only be stored but also administered elsewhere than in the US”.

What can we expect from the CJEU next?

The EDPB is currently analysing the CJEU judgment to determine the kind of supplementary measures that could be provided in addition to SCCs or BCRs, whether legal, technical or organisational measures.

ICO statement

The ICO is continuously updating its statement on the CJEU’s Schrems II judgement. The latest version so far dates from 27th July and confirms that the EDPB FAQs still apply to UK controllers and processors. Until further guidance is provided by EU bodies and institutions, the ICO recommends taking stock of the international transfers businesses make and reacting promptly, and it states that it will continue to apply a risk-based and proportionate approach in accordance with its Regulatory Action Policy.

Other European Data Protection Authorities’ statements

Some European data protection supervisory authorities have provided guidance in response to the CJEU’s Schrems II judgement. While most countries are still considering the implications of the decision, some others are warning about the risk of non-compliance, and a few of them, like Germany (particularly Berlin and Hamburg) and the Netherlands, have openly stated that transfers to the US are unlawful.

In general terms, the ones that are warning about the risks claim the following:

  • Data transfers to the U.S. are still possible, but require the implementation of additional safeguards.
  • The obligation to implement the requirements contained in the CJEU’s decision is both on the businesses and the data protection supervisory authorities.
  • Businesses are required to constantly monitor the level of protection in the data importer’s country.
  • Businesses should run a prior assessment before transferring data to the US.

The data protection supervisory authority in Germany (Rhineland-Palatinate) has proposed a five-step assessment for businesses. We have prepared the diagram below which summarizes it:

Can the level of data protection required by the GDPR be respected in the US?

The CJEU considered that the requirements of US domestic law and, in particular, certain programmes enabling access by US public authorities to personal data transferred from the EU, result in limitations on the protection of personal data which do not satisfy GDPR requirements. Furthermore, the CJEU stated that US legislation does not grant data subjects actionable rights before the courts against the US authorities.

In this context, it seems difficult for a company to demonstrate that it can provide an adequate level of data protection to personal data transferred from the EU, because it would essentially have to bypass US legislation.

The latest moves in the US Senate do not shed light on this issue: the “Lawful Access to Encrypted Data Act”, introduced last month, mandates service providers and device manufacturers to assist law enforcement with accessing encrypted data where such assistance would aid in the execution of a lawfully obtained warrant.

Do you make international data transfers to third countries? Are you affected by the Schrems II decision? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We also offer CCPA compliance services. Contact us today.

Trustworthy Artificial Intelligence

Assessment List for Trustworthy Artificial Intelligence Overview.

Early this month the High-Level Expert Group on Artificial Intelligence (AI HLEG) presented their final Assessment List for Trustworthy Artificial Intelligence.

As reported in our blog, the piloting process of the Ethics Guidelines for Trustworthy AI was launched at the first EU AI Alliance Assembly, which took place on 26th June 2019. The results have now been published, and they aim to support AI developers and deployers in implementing Trustworthy AI.

Background

Following the publication of the first draft in December, on 8 April 2019 the AI HLEG presented the Ethics Guidelines for Trustworthy AI, which addressed what trustworthy AI should be, that is: ‘lawful’, ‘ethical’ and ‘robust’, and the requirements it should meet, namely: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

While the theoretical requirements and principles laid the foundations for achieving Trustworthy AI, an operational tool was still needed to allow businesses and companies to implement them in practice. This is the goal pursued by the Assessment List for Trustworthy AI, which is intended to be the operational tool of the Guidelines.

The piloting process

The piloting process, in which Aphaia participated, involved more than 350 stakeholders.

Feedback on the assessment list was given in three ways:

  • An online survey filled in by participants registered to the process;
  • The sharing of best practices on how to achieve trustworthy AI through the European AI Alliance; and
  • A series of in-depth interviews.

How should I use the Assessment List for Trustworthy AI (ALTAI)?

If you are developing, deploying or using AI, you should make sure that all your AI systems comply with the Trustworthy AI requirements and principles before effectively implementing them.

Goal: identifying the risks to people’s fundamental rights derived from the use of your AI systems and applying the relevant mitigation measures to minimise those risks while maximising the benefits of AI.

Steps: Self-evaluation through the ALTAI is the first step to check the gaps and design an action plan. The ALTAI is intended for flexible use, by which organisations can draw on elements relevant to the particular AI system from the list or add elements to it as they see fit, taking into consideration the sector they operate in. According to the AI HLEG, for this purpose you should: 

  • Perform a Fundamental Rights Impact Assessment (FRIA) prior to self-assessing any AI system;
  • actively engage with the questions the list raises; 
  • involve all relevant stakeholders, both within and outside your organisation;
  • seek outside counsel or assistance where necessary; and
  • put in place appropriate internal guidance and governance processes.
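As a rough illustration of the self-evaluation step, an organisation might track which ALTAI requirements still have open gaps. The seven requirement names come from the Guidelines themselves; everything else here (the data shape and function name) is a hypothetical sketch, not part of the ALTAI:

```python
# The seven requirement names are from the Ethics Guidelines for
# Trustworthy AI; the tracking structure below is an illustrative assumption.
ALTAI_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]


def open_items(answers):
    """Return the requirements not yet marked as satisfied.

    `answers` maps a requirement name to a bool (True = satisfied).
    Unanswered requirements count as open, so a fresh assessment
    starts with all seven items outstanding.
    """
    return [r for r in ALTAI_REQUIREMENTS if not answers.get(r, False)]
```

The remaining items from such a gap check would then feed the action plan mentioned above.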

The seven requirements

1. Human agency and oversight

“AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy”. In this section, organisations should reflect on how to deal with the effects AI systems can have on:

  • Human behaviour, in a broad sense.
  • Human perception and expectation when confronted with AI systems that ‘act’ like humans.
  • Human affection, trust and (in)dependence.

The questions derived from the topics above will help organisations to decide necessary oversight measures and governance mechanisms or approaches, such as:

  • Human-in-the-loop (HITL) or the capability for human intervention in every decision cycle of the system.
  • Human-on-the-loop (HOTL) or the capability for human intervention during the design cycle of the system and monitoring the system’s operation.
  • Human-in-command (HIC) or the capability to oversee the overall activity of the AI system and the ability to decide when and how to use the AI system in any particular situation.

Questions in this part mainly arise around AI systems interaction with end-users and their learning and training process. 

2. Technical robustness and safety

“Technical robustness requires that AI systems are developed with a preventative approach to risks and that they behave reliably and as intended while minimising unintentional and unexpected harm as well as preventing it where possible”. In this section, organisations should reflect about the following issues:

  • Resilience to attack and security.
  • Safety.
  • Accuracy.
  • Reliability, fall-back plans and reproducibility.

There are two key requirements to obtain positive results on the above:

  • Dependability, which comprises the ability of the AI systems to deliver services that can justifiably be trusted.
  • Resilience, which means the robustness of the AI systems when facing changes, either in the environment or due to the presence of other agents, human or artificial, that may interact with the AI system in an adversarial manner.

Questions in this part mainly arise around AI systems’ undesirable and unexpected behaviour, certification mechanisms, threat anticipation, documentation procedures and risk metrics.

3. Privacy and data governance

“Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems”. In terms of data protection, the principle of prevention of harm involves:  

  • Adequate data governance that covers the quality and integrity of the data used.
  • Relevance of the data used in light of the domain in which the AI systems will be deployed.
  • Data access protocols.
  • The capability of the AI system to process data in a manner that protects privacy.

Questions in this part mainly arise around the type of personal data used for training and development, the implementation of the GDPR’s mandatory measures and requirements, and the AI systems’ alignment with relevant standards such as ISO.

4. Transparency

“A crucial component of achieving Trustworthy AI is transparency which encompasses three elements: 1) traceability, 2) explainability and 3) open communication about the limitations of the AI system”:

  • Traceability: the process of the development of AI systems should be properly documented.
  • Explainability: this item refers to the ability to explain both the technical processes of the AI system and the reasoning behind the decisions or predictions that the AI system makes, which should be understood by those directly and indirectly affected.
  • Communication: AI system’s capabilities and limitations should be communicated to the users in a manner appropriate to the use case at hand and this could encompass communication of the AI system’s level of accuracy as well as its limitations.

Questions in this part mainly arise around traceability measures such as logging practices, user surveys, information mechanisms, and the provision of training material and disclaimers.

5. Diversity, non-discrimination and fairness

“In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle”. When it comes to AI systems, either when training or operating, discrimination may derive from:

  • Inclusion of inadvertent historic bias.
  • Incompleteness.
  • Bad governance models.
  • Intentional exploitation of consumer biases.
  • Unfair competition.

Questions in this part mainly arise around the strategies or procedures to avoid biases, educational and awareness initiatives, accessibility, user interfaces, Universal Design principles and stakeholder participation.

6. Societal and environmental well-being

“In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should be considered as stakeholders throughout the AI system’s life cycle”.

The following factors should be taken into account:

  • Environmental well-being.
  • Impact on work and skills.
  • Impact on society at large or democracy.

Questions in this part mainly arise around the mechanisms to evaluate the environmental and societal impact, the measures to address this impact, risk of de-skilling of the workforce and the promotion of new digital skills.

7. Accountability

“The principle of accountability necessitates that mechanisms be put in place to ensure responsibility for the development, deployment and/or use of AI systems”. Closely linked to risk management, there are three elements that should be considered in this regard:

  • Measures to identify and mitigate risks.
  • Mechanisms for addressing the risks.
  • Regular audits.

Questions in this part mainly arise around audit mechanisms, third-party auditing processes, risk training, AI ethics boards and due protection for whistle-blowers, NGOs and trade unions.

Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.

EU-US Privacy Shield invalidation

EU-US Privacy Shield invalidation business implications

On 16th July, the Court of Justice of the EU delivered a ruling in the case known as Schrems II by which it invalidated EU-US Privacy Shield and confirmed the validity of Standard Contractual Clauses, with caveats.

After the CJEU’s Advocate General Henrik Saugmandsgaard Øe published his opinion on the so-called ‘Schrems II’ case in January, the CJEU has now delivered its judgement, pursuant to which the Privacy Shield is declared invalid and SCCs remain valid but can only be used under strict conditions.

What did the Court say?

Two important outcomes derive from the judgement issued by the CJEU:

1. The EU-US Privacy Shield is no longer a valid mechanism for international data transfers from the EU to the US.

It is important to note that it was invalidated with immediate effect. The main reason is US surveillance programmes: according to the CJEU, these are not limited to what is strictly necessary and proportionate as required by EU law, and there are no effective legal remedies in the US to ensure compliance with EU law when EU data subjects’ data is used for national surveillance programmes.

2. SCCs remain valid, but with some important caveats.

It is no longer sufficient for a data exporter and data importer to simply sign the agreement: the exporting party must make a factual assessment of whether the contract can actually be complied with in practice. Companies must verify, on a case-by-case basis, whether the law in the recipient country ensures adequate protection for personal data transferred under SCCs. Where this is not the case, as in the US, supplementary measures and additional safeguards should be implemented in order to attain the required level of protection; otherwise the transfer should cease.

National Data Protection Authorities may suspend or prohibit transfers to a third country if appropriate safeguards cannot be ensured. Based on the CJEU’s findings in respect of the Privacy Shield, it is difficult to see how supervisory authorities would be able to avoid such a conclusion in the case of transfers to the US. National Data Protection Authorities’ responses to this decision are yet to be seen.

What does the EDPS say?

On 17th July, following the CJEU ruling, the EDPS, which together with the EDPB had previously expressed criticisms of the Privacy Shield, released a statement welcoming the Court’s reaffirmation of the importance of maintaining a high level of protection of personal data transferred from the European Union to third countries. The EDPS also trusts that “the United States will deploy all possible efforts and means to move towards a comprehensive data protection and privacy legal framework, which genuinely meets the requirements for adequate safeguards reaffirmed by the Court”.

What does the UK Government say?

The UK government intervened in the case, arguing in support of the validity of standard contractual clauses. In their response, they point out their commitment to ensuring “high data protection standards and supporting UK organisations on international data transfer issues”. They have announced that they are working alongside the ICO and international counterparts with the purpose of addressing the impacts of the judgment and ensuring that updated guidance on international data transfers will be provided soon.

EU Data Protection Authorities such as the Irish Data Protection Commissioner and three in Germany (the Federal DPA, the DPA of Hamburg and the DPA of Rheinland-Pfalz) have also issued statements. Other European DPAs are expected to do so soon.

What should I do now when transferring data from the EU to the US?

Where relying on the Privacy Shield:

  • Do not enter into any new agreement governed by the Privacy Shield.
  • Review all your current contracts, especially legacy ones, with your providers, clients or third-party processors and identify those that rely on the Privacy Shield. They should be amended to add SCC or any other valid safeguard covered by the GDPR for international data transfers.

Where relying on SCC:

Although the ICO and other national Data Protection Authorities are expected to produce detailed guidance soon, according to CJEU, when transferring personal data to third countries relying on SCC you should:

  • Make sure that security and technical measures which provide an adequate level of protection of personal data are actually implemented. You may need to review or at least ask for further information about the data importer’s technical and security measures plus consider whether additional measures should be specified to strengthen security, like tokenization and encryption.
  • Reinforce your accountability processes. Do not simply sign an appendix to your contracts including SCCs; rather, have a closer look at the actual security measures and other mechanisms used by the importer, plus the actual situation in the importing country, especially regarding surveillance.

What can we expect in the near future?

It is expected that guidance will be issued by the European Commission as well as the European Data Protection Board. Apart from that, the EU may decide to negotiate a new version of the Privacy Shield that gives EU data subjects stronger privacy rights under US surveillance laws. Just as the US came up with the Privacy Shield ten months after Safe Harbor was declared invalid, one could now hope for them to put in place a new mechanism that addresses the CJEU’s concerns. On another note, the SCCs may be updated for the GDPR soon.

Do you make international data transfers to third countries? Are you affected by the Schrems II decision? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We also offer CCPA compliance services. Contact us today.

Artificial Intelligence: From Ethics to Regulation

Artificial Intelligence: From Ethics to Regulation

The study launched by the European Parliament last month builds on the AI-HLEG Ethics Guidelines for Trustworthy AI and seeks to find alternatives that might help to move from AI ethics to regulation. 

In our previous blogs and vlogs we have discussed the application of the Ethics Guidelines for Trustworthy AI in several contexts and industries, like retail and real estate. However, there is a need for precision in practical terms. For this purpose, the European Parliament has just published a document where different alternatives are explored in order to develop specific policy and legislation for governing AI.

Important considerations about ethics

The European Parliament considers that there are some preliminary points about ethics that should be understood before moving forward with the analysis of further implications of ethics on AI:

  1. Ethics is not about checking boxes. It should be addressed through questions based on deliberation, critique and inquiry.
  2. Ethics should be understood as a continuous process where regular checks and updates become essential.
  3. AI should be conceived of as a social experiment that makes it possible to understand its ethical constraints and the kinds of things that need to be learnt. This approach may facilitate the monitoring of social effects so that they can be used to improve the technology and its introduction into society.
  4. Moral dilemmas make it impossible to satisfy all ethical principles and value commitments at the same time, which means that sometimes there will not be a single specific response to a problem, but rather a set of options and alternatives, each with an associated ‘moral residue’.
  5. The goal of ethics is to provide a rationale strong enough that an individual is compelled to act in the way they believe is right/good.

Key AI ethics insights

According to the study, there are six main elements of AI that should be addressed when it comes to an ethical implementation of algorithms and systems:

  • Transparency. Policy makers need to deal with three aspects: the complexity of modern AI solutions, the intentional obfuscation by those who design them and the inexplicability regarding how a particular input results in a particular output.
  • Bias and fairness. Training data is deemed essential in this context and there is a need for definition of ‘fair’ and ‘accurate’ concepts.
  • Contextualization of the AI according to the society in which it has been created and clarification of the society’s role in its development.
  • Accountability and responsibility.
  • Re-design of risk assessments to make them relevant.
  • Ethical technology assessments (eTA). The eTA is a written document intended to capture the dialogue between ethicist and technologist. It comprises the list of ethical issues related to the AI application, for the purpose of identifying all possible ethical risks and drawing out the possible negative consequences of implementing the AI system.

Why is regulation necessary?

The European Parliament points out the following reasons that motivate the need for legislation:

  • The criticality of ethical and human rights issues raised by AI development and deployment.
  • The need to protect people (i.e. the principle of proportionality).
  • The interest of the state (given that AI will be used in state-governed areas such as prisons, taxes, education, child welfare).
  • The need for creating a level playing field (e.g. self-regulation is not enough).
  • The need for the development of a common set of rules for all government and public administration stakeholders to uphold.

What are the policy options?

While ethics is about searching for broad answers to societal and environmental problems, regulation can codify and enforce ethically desirable behaviour.

The study proposes a number of policy options that may be adopted by European Parliamentary policy-makers:

  1. Mandatory Data Hygiene Certificate (DHC) for all AI system developers in order to be eligible to sell their solutions to government institutions and public administration bodies. This certificate would not require insight into the proprietary aspects of the AI system, and it would not require organisations to share their data sets with competing organisations.
  2. Mandatory ethical technology assessment (eTA) prior to deployment of the AI system to be conducted by all public and government organisations using AI systems. 
  3. Mandatory and clear definition of the goals of using AI for public administration institutions and government bodies. The aim is to avoid deploying AI in society in the hope of learning an unknown ‘something’; instead, there must be a specific and explicit ‘something’ to be learned.
  4. Mandatory accountability report to be produced by all organisations deploying AI systems meant as a response to the ethical and human rights issues that were identified in the eTA.

Practical considerations about eTA reports

Criteria

The seven key requirements of the Ethics Guidelines for Trustworthy AI may be used by organisations to structure an eTA, namely:

  • Human agency and oversight.
  • Technical robustness and safety.
  • Privacy and data governance.
  • Transparency.
  • Diversity, non-discrimination and fairness.
  • Societal and environmental well-being.
  • Accountability.

Alternatively, the European Parliament suggests using the following nine criteria: (1) Dissemination and use of information; (2) Control, influence and power; (3) Impact on social contact patterns; (4) Privacy; (5) Sustainability; (6) Human reproduction; (7) Gender, minorities and justice; (8) International relations; and (9) Impact on human values.

The eTA should nevertheless be concrete, and may therefore be extended to cover: (a) the specific context in which the AI will be used; (b) the AI methodology used; (c) the stakeholders involved; and (d) an account of the ethical values and human rights in need of attention.
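Put together, a minimal eTA record covering the four extension points above might be sketched as follows. The four sections follow the study, but the class and field names are assumptions for illustration only:

```python
from dataclasses import dataclass, field


@dataclass
class EthicalTechnologyAssessment:
    # Illustrative skeleton only: sections (a)-(d) follow the study,
    # while the class and field names are hypothetical.
    context_of_use: str    # (a) specific context in which the AI will be used
    methodology: str       # (b) AI methodology used
    stakeholders: list     # (c) stakeholders involved
    values_at_risk: list   # (d) ethical values and human rights needing attention
    # Issues logged during the ethicist-technologist dialogue, keyed by topic.
    findings: dict = field(default_factory=dict)
```

A record like this could then be structured against the seven ALTAI requirements or the nine alternative criteria listed above.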

Who is going to make eTA reports?

The tasks in the eTA log require experience in identifying ethical issues and placing them within a conceptual framework for analysis. For this reason, the European Parliament highlights the likelihood of a future role for ethics, and for ethicists engaged by organisations, in the process of regulating AI.

Will SMEs be able to afford this?

In order to create a level playing field across SMEs, the European Parliament plans to provide an adequate margin of three years and to offer small and medium-sized companies EU funding to assist them with report completion and with the necessary capacity-building measures. This model parallels the incremental process by which companies came to comply with the GDPR.

Does your company use AI? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides GDPR adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.