EU-US Privacy Shield invalidation business implications follow-up

Since the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield in their Schrems II judgement delivered two weeks ago, many questions have arisen around international data transfers to the US.

As reported by Aphaia, following the invalidation, data transfers to the US now require another valid safeguard or mechanism providing a level of data protection essentially equivalent to that granted by the GDPR.

European Data Protection Board guidelines

With the aim of clarifying the main issues arising from the invalidation of the EU-US Privacy Shield, the European Data Protection Board (EDPB) has published Frequently Asked Questions on the Schrems II judgement. The EDPB expects to develop and complement these answers with further analysis as it continues to examine and assess the CJEU decision.

In the document, the EDPB reminds organisations that there is no grace period during which the EU-US Privacy Shield is still deemed a valid mechanism for transferring personal data to the US. Businesses that were relying on this safeguard and wish to keep transferring data to the US must therefore find another valid safeguard that ensures a level of protection essentially equivalent to that guaranteed within the EU by the GDPR.

What about Standard Contractual Clauses?

The CJEU considered that the validity of SCCs depends on the ability of the data exporter and the recipient of the data to verify, prior to any transfer and taking into account the specific circumstances, whether that level of protection can be respected in the US. This seems difficult, though, because the Court found that US law (i.e. Section 702 FISA and EO 12333) does not ensure an essentially equivalent level of protection.

The data importer should inform the data exporter of any inability to comply with the SCCs and, where necessary, with any supplementary measures. The data exporter should carry out an assessment to verify that US law does not impinge on the adequate level of protection, taking into account the circumstances of the transfer and the supplementary measures that could be put in place; it may contact the data importer to verify the legislation of its country and collaborate on the assessment. Where the result is not favourable, the transfer should be suspended or ended; if the data exporter nevertheless intends to continue transferring the data, it must notify the competent Supervisory Authority.

What about Binding Corporate Rules (BCRs)?

Given that the reason for invalidating the EU-US Privacy Shield was the degree of interference created by US law, the CJEU judgement applies in the context of BCRs as well, since US law will also have primacy over this tool. As with SCCs, the data exporter should run an assessment, and the competent Supervisory Authority should be notified where the result is not favourable and the data exporter plans to continue with the transfer.

What about derogations of Article 49 GDPR?

Article 49 GDPR comprises further conditions under which personal data can be transferred to a third country in the absence of an adequacy decision and appropriate safeguards such as SCCs and BCRs, namely:

  • Consent. The CJEU points out that consent should be explicit, specific to the particular data transfer or set of transfers, and informed. This element involves practical obstacles for businesses processing their customers’ data, as it would imply, for instance, asking for every customer’s individual consent before storing their data on Salesforce.
  • Performance of a contract between the data subject and the controller. It is important to note that this only applies where the transfer is occasional, and only to transfers that are objectively necessary for the performance of the contract.

What about third countries other than the US?

The CJEU has indicated that SCCs, as a rule, can still be used to transfer data to a third country. However, the threshold set by the CJEU for transfers to the US applies to any third country, and the same goes for BCRs.

What should I do when it comes to processors transferring data to the US?

Pursuant to the EDPB FAQs, where no supplementary measures can be provided to ensure that US law does not impinge on the essentially equivalent level of protection as granted by the GDPR and if derogations under Article 49 GDPR do not apply, “the only solution is to negotiate an amendment or supplementary clause to your contract to forbid transfers to the US. Data should not only be stored but also administered elsewhere than in the US”.

What can we expect from the EDPB next?

The EDPB is currently analysing the CJEU judgment to determine the kind of supplementary measures that could be provided in addition to SCCs or BCRs, whether legal, technical or organisational measures.

ICO statement

The ICO is continuously updating its statement on the CJEU Schrems II judgement. The latest version dates from 27th July and confirms that the EDPB FAQs also apply to UK controllers and processors. Until further guidance is provided by EU bodies and institutions, the ICO recommends that businesses take stock of the international transfers they make and react promptly, and it states that it will continue to apply a risk-based and proportionate approach in accordance with its Regulatory Action Policy.

Other European Data Protection Authorities’ statements

Some European data protection supervisory authorities have provided guidance in response to the CJEU Schrems II judgement. While most countries are still considering the implications of the decision, others are warning about the risk of non-compliance, and a few, such as Germany (particularly Berlin and Hamburg) and the Netherlands, have openly stated that transfers to the US are unlawful.

In general terms, those warning about the risks claim the following:

  • Data transfers to the U.S. are still possible, but require the implementation of additional safeguards.
  • The obligation to implement the requirements contained in the CJEU’s decision is both on the businesses and the data protection supervisory authorities.
  • Businesses are required to constantly monitor the level of protection in the data importer’s country.
  • Businesses should carry out an assessment before transferring data to the US.

The data protection supervisory authority in Germany (Rhineland-Palatinate) has proposed a five-step assessment for businesses, which we have summarised in a diagram.

Can the level of data protection required by the GDPR be respected in the US?

The CJEU considered that the requirements of US domestic law and, in particular, certain programmes enabling access by US public authorities to personal data transferred from the EU, result in limitations on the protection of personal data which do not satisfy GDPR requirements. Furthermore, the CJEU stated that US legislation does not grant data subjects actionable rights before the courts against the US authorities.

In this context, it seems difficult for a company to demonstrate that it can provide an adequate level of protection to personal data transferred from the EU, because it would essentially have to bypass US legislation.

The latest moves in the US Senate do not shed light on this issue: the “Lawful Access to Encrypted Data Act”, introduced last month, would mandate service providers and device manufacturers to assist law enforcement with accessing encrypted data where such assistance would aid the execution of a lawfully obtained warrant.

Do you make international data transfers to third countries? Are you affected by the Schrems II decision? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments and Data Protection Officer outsourcing. We also offer CCPA compliance services. Contact us today.

Artificial Intelligence: From Ethics to Regulation

The study launched by the European Parliament last month builds on the AI-HLEG Ethics Guidelines for Trustworthy AI and seeks to find alternatives that might help to move from AI ethics to regulation. 

In our previous blogs and vlogs we have discussed the application of the Ethics Guidelines for Trustworthy AI in several contexts and industries, like retail and real estate. However, there is a need for more precision in practical terms. For this purpose, the European Parliament has just published a document exploring different alternatives for developing specific policy and legislation to govern AI.

Important considerations about ethics

The European Parliament considers that there are some preliminary points about ethics that should be understood before moving forward with the analysis of further implications of ethics on AI:

  1. Ethics is not about checking boxes. It should be addressed through questions based on deliberation, critique and inquiry.
  2. Ethics should be understood as a continuous process where regular checks and updates become essential.
  3. AI should be conceived of as a social experiment that makes it possible to understand its ethical constraints and the kinds of things that need to be learnt. This approach may facilitate the monitoring of social effects so that they can be used to improve the technology and its introduction into society.
  4. Moral dilemmas make it impossible to satisfy all ethical principles and value commitments at the same time, which means that sometimes there will not be a single answer to a problem, but rather a set of options and alternatives, each with an associated ‘moral residue’.
  5. The goal of ethics is to provide a rationale strong enough that an individual is compelled to act in the way they believe is the right/good way.

Key AI ethics insights

According to the study, there are six main elements of AI that should be addressed when it comes to an ethical implementation of algorithms and systems:

  • Transparency. Policy makers need to deal with three aspects: the complexity of modern AI solutions, the intentional obfuscation by those who design them, and the inexplicability of how a particular input results in a particular output.
  • Bias and fairness. Training data is deemed essential in this context, and there is a need to define the concepts of ‘fair’ and ‘accurate’.
  • Contextualization of the AI according to the society in which it has been created and clarification of the society’s role in its development.
  • Accountability and responsibility.
  • Re-design of risk assessments to make them relevant.
  • Ethical technology assessments (eTA). The eTA is a written document intended to capture the dialogue between ethicist and technologist. It comprises the list of ethical issues related to the AI application, for the purpose of identifying all possible ethical risks and drawing out the possible negative consequences of implementing the AI system.

Why is regulation necessary?

The European Parliament points out the following reasons that motivate the need for legislation:

  • The criticality of ethical and human rights issues raised by AI development and deployment.
  • The need to protect people (i.e. the principle of proportionality).
  • The interest of the state (given that AI will be used in state-governed areas such as prisons, taxes, education, child welfare).
  • The need for creating a level playing field (e.g. self-regulation is not enough).
  • The need for the development of a common set of rules for all government and public administration stakeholders to uphold.

What are the policy options?

While ethics is about searching for broad answers to societal and environmental problems, regulation can codify and enforce ethically desirable behaviour.

The study proposes a number of policy options that may be adopted by European Parliamentary policy-makers:

  1. Mandatory Data Hygiene Certificate for all AI system developers in order to be eligible to sell their solutions to government institutions and public administration bodies. This certificate would not require insight into the proprietary aspects of the AI system, and it would not require organisations to share their data sets with competing organisations.
  2. Mandatory ethical technology assessment (eTA) prior to deployment of the AI system to be conducted by all public and government organisations using AI systems. 
  3. Mandatory and clear definition of the goals of using AI when it comes to public administration institutions and government bodies. The aim is to avoid deploying AI in society in the hope of learning an unknown ‘something’; instead, there must be a specific and explicit ‘something’ to be learned.
  4. Mandatory accountability report to be produced by all organisations deploying AI systems meant as a response to the ethical and human rights issues that were identified in the eTA.

Practical considerations about eTA reports

Criteria

The seven key requirements of the Ethics Guidelines for Trustworthy AI may be used by organisations to structure an eTA, namely:

  • Human agency and oversight.
  • Technical robustness and safety.
  • Privacy and data governance.
  • Transparency.
  • Diversity, non-discrimination and fairness.
  • Societal and environmental well-being.
  • Accountability.

Alternatively, the European Parliament suggests using the following nine criteria: (1) Dissemination and use of information; (2) Control, influence and power; (3) Impact on social contact patterns; (4) Privacy; (5) Sustainability; (6) Human reproduction; (7) Gender, minorities and justice; (8) International relations; and (9) Impact on human values.

The eTA should be concrete, though, so it may be extended to cover: (a) the specific context in which the AI will be used; (b) the AI methodology used; (c) the stakeholders involved; and (d) an account of the ethical values and human rights in need of attention.
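For illustration only, the criteria and extensions above lend themselves to a simple structured record. The sketch below (in Python; all field and function names are our own assumptions, not part of the European Parliament's study) shows one way an organisation might log eTA entries internally:

```python
from dataclasses import dataclass, field
from typing import List

# The seven key requirements of the Ethics Guidelines for Trustworthy AI,
# used here as the criteria against which each issue is logged.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class EthicalIssue:
    requirement: str                  # one of REQUIREMENTS
    description: str                  # risk identified in the ethicist-technologist dialogue
    negative_consequences: List[str]  # possible consequences of deploying the system

@dataclass
class EthicalTechnologyAssessment:
    # Extensions (a)-(d) from the text above
    context_of_use: str
    ai_methodology: str
    stakeholders: List[str]
    values_and_rights: List[str]  # ethical values and human rights in need of attention
    issues: List[EthicalIssue] = field(default_factory=list)

    def log_issue(self, requirement: str, description: str,
                  consequences: List[str]) -> None:
        # Every logged issue must map to one of the seven requirements,
        # which keeps the assessment concrete rather than a box-ticking list.
        if requirement not in REQUIREMENTS:
            raise ValueError(f"unknown criterion: {requirement}")
        self.issues.append(EthicalIssue(requirement, description, consequences))
```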

Who is going to make eTA reports?

The tasks in the eTA log require experience in identifying ethical issues and placing them within a conceptual framework for analysis. For this reason, the European Parliament highlights the likelihood of a future role for ethics, and for ethicists engaged within organisations, in the process of regulating AI.

Will SMEs be able to afford this?

In order to create a level playing field across SMEs, the European Parliament plans to provide an adequate margin of three years and offer small and medium companies EU funding to assist them with report completion as well as with the necessary capacity-building measures. This model parallels the incremental process for companies to comply with the GDPR.

Does your company use AI? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides GDPR adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.

AI Ethics in Asset and Investment Management

Since AI systems govern most trading operations in asset and investment management, AI ethics becomes crucial to ensure no fundamental rights and principles are overridden.

“I am not uncertain”. Any Billions TV series fans around here? Those who are will know that this is the phrase the hedge fund’s employees say to their boss before trading when they have potentially incriminating inside information. They know that basing an investment decision on such information could expose them to prosecution. What if almost the same results could be achieved by lawful means? This is where AI can definitely help, and where AI ethics becomes essential. In this article we delve into AI ethics in asset and investment management.

How is AI used in asset and investment management?

Asset and investment management companies, especially hedge funds, have traditionally used computer models to make the majority of their trades. In recent years, AI has allowed the industry to improve this practice with algorithms and systems that are fully autonomous and do not rely on data scientists and manual updates in order to operate regularly.

AI can analyse large amounts of data at extraordinary speed in real time, learning from any type of information that may be relevant, including news articles, images and social media posts. The insights are applied automatically, and the algorithms self-adjust through a process of trial and error to produce increasingly accurate predictions.
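What “self-adjusting through trial and error” means in practice is essentially online learning. The following minimal sketch (our own illustrative Python example on simulated data, not any firm’s actual model) shows a prediction rule that nudges its weights after every new observation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 4
weights = np.zeros(n_features)
learning_rate = 0.01

def predict(x: np.ndarray) -> float:
    # Linear prediction from the current weights
    return float(weights @ x)

def update(x: np.ndarray, y: float) -> None:
    """One stochastic-gradient step on squared error: the 'trial and error'."""
    global weights
    error = predict(x) - y
    weights = weights - learning_rate * error * x

# Simulated data stream standing in for engineered signals (news sentiment,
# prices, etc.); the hidden 'true' relationship is what the learner recovers.
true_w = np.array([0.5, -0.2, 0.1, 0.3])
for t in range(10_000):
    x = rng.normal(size=n_features)
    y = float(true_w @ x) + rng.normal(scale=0.1)
    update(x, y)  # the model self-adjusts after every observation

print(np.round(weights, 2))  # approaches true_w as data accumulates
```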

AI’s main roles are the following:

  • Finding new patterns in existing data sets;
  • making new forms of data analyzable;
  • designing new user experiences and interfaces;
  • reducing the negative effects of human biases on investment decisions.

For asset and investment management firms, the above means improvements in efficiency and operational structure, risk management, investment strategy design, trading efficiency and decision-making. However, it is paramount to be aware of the risk of other companies replicating their findings or deriving similar conclusions from equivalent techniques; elements such as trade secrets, proprietary software development and continuous innovation are therefore vital.

Why does AI ethics matter in this context?

There are many risks derived from the use of AI in asset and investment management that could be tackled with the implementation of ethical values and principles. 

Some of the issues that may come up in this context are described below:

  • Lack of auditability of AI.
  • Lack of control over data quality and robustness in production.
  • Failure to monitor and keep track of AI systems’ decisions.
  • AI inability to react to unexpected events not closely related to past trends and with no historical data available, like pandemics.
  • Difficulty maintaining adherence to current protocols on data security, conduct and cybersecurity for AI technologies that are new and have not been tested long enough to ensure consistency.
  • Omission of social purpose, leaving some stakeholders behind.
  • Human biases, such as loss aversion (the preference for avoiding losses relative to generating equivalent gains) or confirmation bias (the tendency to interpret new evidence so as to affirm pre-existing beliefs).
  • AI systems’ own biases, derived from the training datasets, processes and models, deficiencies in coding, or otherwise caused or acquired.
  • Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm using the service or tool, where relevant.

How should AI ethics be applied to asset and investment management?

The risks above can be sorted into seven categories, following the requirements of the EU Commission AI-HLEG Ethics Guidelines for Trustworthy Artificial Intelligence:

  • Failure to monitor and keep track of AI systems’ decisions; inability to react to unexpected events → Human agency and oversight.
  • Difficulty maintaining adherence to current protocols on data security, conduct and cybersecurity → Technical robustness and safety.
  • Lack of control over data quality and robustness in production → Privacy and data governance.
  • Lack of auditability of AI → Transparency.
  • Human biases; AI systems’ own biases → Diversity, non-discrimination and fairness.
  • Omission of social purpose → Societal and environmental well-being.
  • Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm → Accountability.

Among the solutions identified above, human oversight plays a key role. There is a need to redefine the job of data scientists, who would be in charge of selecting the right sources of alternative data, integrating them with the firm’s existing knowledge and its philosophy or culture, and making judgements about where future trends are going in those specific contexts the AI cannot cover.

The answer is to have AI systems and humans combining their abilities and playing complementary roles, through the so-called “human in the loop” approach, where humans monitor the results of the machine learning model.
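A minimal sketch of one common “human in the loop” pattern follows (our own illustrative Python example; the threshold and names are assumptions): the system acts autonomously only on high-confidence outputs and escalates the rest to a human reviewer who monitors the model.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g. "buy", "sell", "hold"
    confidence: float  # the model's estimated probability of being right

CONFIDENCE_THRESHOLD = 0.9  # assumed policy value, set by a governance body

def route(decision: Decision) -> str:
    """Act autonomously only on high-confidence outputs; everything else
    goes to the human who monitors the model's results."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-execute: {decision.action}"
    return f"escalate to human review: {decision.action} ({decision.confidence:.2f})"

print(route(Decision("buy", 0.97)))   # auto-execute: buy
print(route(Decision("sell", 0.55)))  # escalate to human review: sell (0.55)
```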

What should be the regulatory approach?

The financial sector is heavily regulated. Any new AI tools or digital advisors are subject to the same framework of regulation and supervision as traditional advisors, which is why it is critical to ensure robust cybersecurity defences, such as data encryption, cybersecurity insurance and incident management policies. However, the use of AI still requires regulation to go one step further.

Currently, there is a lack of specific international regulatory standards for AI in asset and investment management. This is tricky, though, because, as with the GDPR, there is a trade-off between innovation and respect for fundamental rights and freedoms.

Considering the specific nature of the industry, it might be beneficial first to extend the applicability of existing regulation to the uses of AI, and then to run regulatory sandbox programmes for testing new AI innovations in a controlled environment. This would make it possible to identify basic needs and to understand in depth how the technology works before moving forward with new mandatory rules.

Meanwhile, self-regulation and codes of practice may be the first step towards settling the future regulatory framework, which could comprise robust and effective governance, regular checks on the use of AI systems within the company, testing and approval processes, governance committees, documented procedures and internal audits.

A proactive and industry-led approach to AI governance and ethics for asset and investment management is necessary to foster the development of standards.

Final remarks

In the words of Laurence Douglas Fink, chairman and CEO of BlackRock, “One of the key elements of human behavior is, humans have a greater fear of loss than enjoyment of success. All the academic studies will show you that the fear of loss of capital is far greater than the enjoyment of gains”. AI systems have neither fear of loss nor enjoyment of gains; they just have data. However, those human emotions are necessary to properly understand the market. This is the reason why combining the two may be the most powerful tool for the asset and investment management industry.

Do you work in a sector other than finance? Don’t miss our AI and data protection in industry series. We have so far covered retail, Part I and Part II.

Are you worried about COVID-19, regardless of the industry? We have prepared a blog with some relevant tips to help you out, covering COVID-19 business adaptation basics.

Subscribe to our YouTube channel to be updated on further content. 

Do you have questions about how AI is transforming the financial sector and what the risks are? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Spanish DPA issues €25,000 fine to Glovo for Data Protection Officer appointment violation

The Spanish Data Protection Authority (DPA) AEPD fined Glovo €25,000 for not appointing a Data Protection Officer pursuant to article 37 GDPR.

Have you ever wondered whether your business is subject to the DPO designation requirement covered by the GDPR? The ambiguity of the GDPR when it comes to defining the cases where the appointment of a DPO is mandatory for controllers and processors is causing confusion in the industry. The latest fine in this regard comes from the Spanish DPA, and it is the first fine imposed in Spain for failure to appoint a Data Protection Officer.

What happened?

According to the AEPD decision, Glovo had not appointed a DPO; moreover, the company’s website did not contain any information about one.

The Spanish DPA deems the lack of DPO appointment a breach of article 37 (1) GDPR because it considers the core activities of Glovo “consist of processing operations which, by virtue of their nature, their scope and/or their purposes, require regular and systematic monitoring of data subjects on a large scale”.

Glovo, for its part, argued that it had not breached the GDPR because it had appointed a Data Protection Board in charge of data protection matters. Furthermore, it pointed out that it had in fact appointed a DPO. However, it should be noted that this appointment took place after the investigation started, and that neither the DPO nor the Data Protection Board was mentioned in the company’s Privacy Policy.

Similar cases

On 28 April 2020, the Belgian Data Protection Authority issued a decision fining the telecommunications and ICT company Proximus €50,000 for failing to involve the DPO in the handling of personal data breaches. Moreover, the company did not have a system in place to prevent a conflict of interest of the DPO, who also held numerous other positions within the company (head of the compliance and audit departments), in violation of Article 38(6) of the GDPR. As a consequence, the controller could not ensure that any such tasks and duties did not result in a conflict of interest.

Should our business be worried?

Under the GDPR, you, as controller or processor, should appoint a DPO if:

  • you are a public authority or body (except for courts acting in their judicial capacity);
  • your core activities require large scale, regular and systematic monitoring of individuals (for example, online behaviour tracking); or
  • your core activities consist of large scale processing of special categories of data or data relating to criminal convictions and offences.

It should be stressed that controllers and processors can appoint a DPO even if they are not required to.

What is the origin of the disagreement between Glovo and the Spanish DPA?

While the Spanish DPA states that Glovo should have appointed a DPO because they process personal data on a large scale, Glovo’s counterargument is based on the fact that the GDPR does not define “large scale”.

The WP29 (now the EDPB) Guidelines on Data Protection Officers partially clarify this issue.

When determining if a processing is on a large scale, the guidelines say the following factors should be taken into consideration:

  • the number of data subjects concerned;
  • the volume of personal data being processed;
  • the range of different data items being processed;
  • the geographical extent of the activity; and
  • the duration or permanence of the processing activity.

Our tip

Running a business online increases the need for a DPO due to the ubiquity of data on the internet, so it is advisable to seek advice from a data protection and privacy expert before deciding whether the company is subject to the mandatory GDPR requirement.

Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and UK Data Protection Act? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.