Twitter Data Case Dispute

Twitter Data Case Dispute: European Union privacy regulators are in conflict over whether, and how much, to fine Twitter over last year’s data breach.

A Twitter data case dispute between European Union privacy regulators is delaying the progress of the most advanced cross-border privacy case involving a U.S. tech company under the GDPR.

The Twitter data case dispute, disclosed in a statement from Ireland’s Data Protection Commission, is one of the first major tests of GDPR enforcement. It has raised concern about possible disagreements and delays in nearly two dozen other investigations into Facebook, Google, and other U.S. tech companies. This particular case concerns a security hole, which Twitter claimed to have fixed in January 2019, that exposed the private tweets of some users over a period of more than four years.


This Twitter case dispute will be an early indication of how similar power-sharing situations among EU regulators will be handled.

The outcome of this Twitter case will be an early indication of how the EU’s power-sharing system among regulators will work in practice. Because Twitter has its regional headquarters in Ireland, the investigation is led by Ireland’s data commission. However, regulators in any of the 26 other EU countries involved can object to a case. Under the GDPR, in cases that involve multiple countries, the lead regulator (here, Ireland’s data commission) sends its draft decision to its counterparts. They then have four weeks to submit objections, followed by additional time to approve revisions based on those objections. Any disagreements the regulators cannot resolve can be referred to the European Data Protection Board, which decides by vote. Once the board approves a decision, the lead regulator must inform the company of it within a month. The voting process can take from one month to two and a half months, depending on whether extensions are granted.
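To make the timings concrete, here is a minimal sketch in Python of the best-case and worst-case duration of a contested case, using only the periods quoted above; the revision stage has no fixed length, so it is left out of this rough estimate.

```python
# Illustrative sketch of the GDPR dispute timeline described above.
# Figures are the periods quoted in the paragraph; treat the output as a
# rough estimate, not an official schedule.

OBJECTION_WINDOW_MONTHS = 1.0        # counterparts have four weeks to object
BOARD_VOTE_MONTHS = (1.0, 2.5)       # about one month, up to 2.5 with extensions
NOTIFY_COMPANY_MONTHS = 1.0          # lead regulator informs the company

def disputed_case_months(extensions_granted: bool) -> float:
    """Rough duration from draft decision to the company being informed."""
    vote = BOARD_VOTE_MONTHS[1] if extensions_granted else BOARD_VOTE_MONTHS[0]
    return OBJECTION_WINDOW_MONTHS + vote + NOTIFY_COMPANY_MONTHS

print(f"Without extensions: ~{disputed_case_months(False):.1f} months")
print(f"With extensions:    ~{disputed_case_months(True):.1f} months")
```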

After consultations with other EU authorities, a number of objections remained, triggering the first-ever dispute resolution process.

The Irish privacy regulator said that it had triggered a dispute-resolution mechanism among the bloc’s privacy regulators after failing to resolve disagreements over its draft decision in the Twitter case. This is the first time that process has been triggered. Ireland’s data commission forwarded a draft decision to its counterparts for comments in May.

The commission engaged in consultations with other regulators to resolve their complaints. Graham Doyle, a deputy commissioner, said that despite the consultation a number of objections remained, and the matter has now been referred to the European Data Protection Board by the Data Protection Commission.

Under the GDPR, fines for this type of violation are set on a sliding scale of up to 2% of a company’s annual revenue, taking various factors into account.

Ireland’s data commission said that the focus of this case is on whether Twitter met its obligation to notify regulators of the data breach in a timely manner. Under the GDPR, regulators can fine companies up to 2% of their worldwide annual revenue for failing to notify them of a data breach within 72 hours. This could amount to up to $69 million, based on Twitter’s 2019 revenue. However, the legislation also directs regulators to take into account the gravity and duration of the violation, the type of personal information involved, and other factors, such as whether the violation was intentional. This leaves considerable room for disagreement between regulators over how much should be charged for a violation.
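As a quick sanity check on the $69 million figure, here is a minimal sketch assuming Twitter’s 2019 worldwide revenue was roughly $3.46 billion (our approximation; the article gives only the resulting cap):

```python
# Back-of-the-envelope check of the maximum fine mentioned above.
# Assumes Twitter's 2019 worldwide revenue was roughly $3.46 billion.

TWITTER_2019_REVENUE_USD = 3.46e9    # approximate worldwide annual revenue
NOTIFICATION_FINE_CAP = 0.02         # GDPR cap: 2% for late breach notification

max_fine_usd = TWITTER_2019_REVENUE_USD * NOTIFICATION_FINE_CAP
print(f"Maximum fine: ${max_fine_usd / 1e6:.0f} million")   # -> ~$69 million
```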

Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and Data Protection Act 2018? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Age Appropriate Design Code

The Age Appropriate Design Code will come into force in less than a month.

The Age Appropriate Design Code will come into force on 2nd September 2020 and will be followed by a 12-month transition period, allowing online services time to conform.

The Age Appropriate Design Code, which we initially reported on back in January when the final version was first introduced, has now completed the parliamentary process and was recently issued by the ICO, to come into force on 2nd September 2020. This code of practice for online services finalises the 15 standards laid in Parliament in January of this year. Under section 123(1) of the Data Protection Act 2018, the Information Commissioner was required to prepare this code, which contains guidance on standards of age-appropriate design for relevant information society services likely to be accessed by children.

The Age Appropriate Design Code is a statutory code of practice, providing built in protection for children online.

This code is the first of its kind and is considered by the Information Commissioner to be necessary and achievable, and is expected to make a difference. The Commissioner believes that companies will want to conform to the standards to demonstrate their commitment to always acting in the best interests of the child. Although not expected to replace parental control, the code should increase confidence in the safety of children as they surf the internet. The 15 principles of this code are flexible; they are not laws, but rather a statutory code of practice which provides built-in protection for children spending time online, ensuring that their best interests are the primary consideration when developing and designing online services.

The Code lays out 15 standards, ensuring children’s best interests.

  • The best interests of the child;

The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.

  • Data protection impact assessments;

 Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.

  • Age appropriate application;

 Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.

  • Transparency;

The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.

  • Detrimental use of data;

Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.

  • Policies and community standards;

 Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).

  • Default settings;

 Settings must be ‘high privacy’ by default (unless you can demonstrate a convincing reason for a different default setting, taking account of the best interests of the child). A sketch of what such defaults can look like in code follows this list.

  • Data minimisation;

 Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.

  • Data sharing;

 Do not disclose children’s data unless you can demonstrate a convincing reason to do so, taking account of the best interests of the child.

  • Geolocation;

Geolocation options should be off by default (unless you can demonstrate a convincing reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.

  • Parental controls;

If you provide parental controls, the child should be given age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, you should provide an obvious sign to the child when they are being monitored.

  • Profiling;

Switch all options which use profiling ‘off’ by default (unless you can demonstrate a convincing reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if there are appropriate measures in place to protect the child from any harmful effects (particularly, content that is detrimental to their health or wellbeing).

  • Nudge techniques;

 There should be no use of nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.

  • Connected toys and devices;

 If your company provides a connected toy or device, you should ensure that you include effective tools to enable conformance to this code.

  • Online tools. 

 Provide prominent and accessible tools which will help children exercise their data protection rights and report concerns.
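As noted under the default settings standard, here is a minimal sketch, in Python, of what the default settings, geolocation and profiling standards can look like in a service’s configuration. All class and field names are hypothetical illustrations, not taken from the code of practice itself.

```python
from dataclasses import dataclass

@dataclass
class ChildAccountSettings:
    """Hypothetical per-account settings honouring the code's defaults."""
    geolocation_enabled: bool = False         # geolocation off by default
    location_visible_to_others: bool = False  # visibility reverts to off each session
    profiling_enabled: bool = False           # profiling off by default
    data_sharing_enabled: bool = False        # no disclosure without a convincing reason
    nudges_enabled: bool = False              # no nudging children to weaken protections

    def end_session(self) -> None:
        # Options making a child's location visible to others must default
        # back to 'off' at the end of each session.
        self.location_visible_to_others = False

settings = ChildAccountSettings()             # 'high privacy' by default
```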

The code will apply to any product or service likely to be accessed by children, and not just those aimed at children.

The standards laid out in this code apply to any company or institution providing products or services (including apps, programmes, websites, games or community environments, and connected toys or devices with or without a screen) which process personal data in the UK and are likely to be accessed by children, not just those aimed at children. Due to increasing concern about the position of children in the modern digital world and in wider society, the general consensus in the UK and internationally is that more needs to be done to create a safe space for them to learn, explore, and play online. The purpose of this code is not to protect children from the digital world but to protect them within that space. The code takes account of the standards and principles set out in the UNCRC, and sets out specific protections for children’s personal data in compliance with the GDPR.

This code, which comes into effect next month, must support children’s rights.

This code is due to come into effect on 2nd September 2020, as announced by the ICO this week. That date will begin the 12-month transition period, during which companies are expected to take steps towards full compliance, ensuring that all principles are considered and that their services use children’s data in ways that support the following rights of the child:

  • Freedom of expression.
  • Freedom of thought, conscience and religion.
  • Freedom of association.
  • Privacy.
  • Access to information from the media (with appropriate protection from information and material injurious to their well-being).
  • Play and engagement in recreational activities appropriate to their age.
  • Protection from economic, sexual or other forms of exploitation. 

Failure to conform to these standards could result in assessment notices, warnings, reprimands, enforcement notices and penalty notices (administrative fines). Data protection impact assessments are therefore recommended to ensure compliance.

Does your company offer online services likely to be accessed by minors? If so, it will be imperative that you adhere to this UK code of practice once it takes effect. Aphaia’s data protection impact assessments and Data Protection Officer outsourcing will assist you with ensuring compliance. Aphaia provides GDPR adaptation consultancy services and CCPA compliance, as well as EU AI Ethics assessments. Contact us today.

EU-US Privacy Shield

EU-US Privacy Shield invalidation: business implications follow-up

Since the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield in their Schrems II judgement delivered two weeks ago, many questions have arisen around international data transfers to the US.

After the invalidation of the EU-US Privacy Shield by the CJEU two weeks ago, as reported by Aphaia, data transfers to the US require another valid safeguard or mechanism that provides a level of data protection essentially equivalent to the one granted by the GDPR.

European Data Protection Board guidelines

With the aim of clarifying the main issues derived from the invalidation of the EU-US Privacy Shield, the European Data Protection Board (EDPB) has published Frequently Asked Questions on the Schrems II judgement. These answers are expected to be developed and complemented along with further analysis, as the EDPB continues to examine and assess the CJEU decision.

In the document, the EDPB reminds businesses that there is no grace period during which the EU-US Privacy Shield is still deemed a valid mechanism for transferring personal data to the US. Businesses that were relying on this safeguard and wish to keep transferring data to the US should therefore find another valid safeguard which ensures a level of protection essentially equivalent to that guaranteed within the EU by the GDPR.

What about Standard Contractual Clauses?

The CJEU considered that the validity of SCCs depends on the ability of the data exporter and the recipient of the data to verify, prior to any transfer and taking into account the specific circumstances, whether that level of protection can be respected in the US. This seems difficult, though, because the Court found that US law (i.e., Section 702 FISA and EO 12333) does not ensure an essentially equivalent level of protection.

The data importer should inform the data exporter of any inability to comply with the SCCs and, where necessary, with any supplementary measures. The data exporter should carry out an assessment to ensure that US law does not impinge on the adequate level of protection, taking into account the circumstances of the transfer and the supplementary measures that could be put in place. The data exporter may contact the data importer to verify the legislation of its country and collaborate on the assessment. Where the result is not favourable, the transfer should be suspended; if the data exporter nevertheless intends to continue the transfer, it should notify the competent Supervisory Authority.
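A minimal sketch of that case-by-case assessment flow, with hypothetical predicate names (an illustration of the process described in the EDPB FAQs, not legal advice):

```python
from enum import Enum

class Outcome(Enum):
    PROCEED = "proceed with the transfer under SCCs"
    SUSPEND = "suspend or end the transfer"
    NOTIFY_SA = "notify the competent Supervisory Authority"

def assess_scc_transfer(destination_law_adequate: bool,
                        supplementary_measures_suffice: bool,
                        exporter_intends_to_continue: bool) -> Outcome:
    """Exporter's pre-transfer assessment, run with the importer's input."""
    if destination_law_adequate or supplementary_measures_suffice:
        return Outcome.PROCEED
    # Protection is not essentially equivalent to the GDPR's level:
    if exporter_intends_to_continue:
        return Outcome.NOTIFY_SA
    return Outcome.SUSPEND

# Per Schrems II, for the US the first two answers are in practice False.
print(assess_scc_transfer(False, False, exporter_intends_to_continue=False).value)
```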

What about Binding Corporate Rules (BCRs)?

Given that the reason for invalidating the EU-US Privacy Shield was the degree of interference created by US law, the CJEU judgement applies in the context of BCRs as well, since US law will also have primacy over this tool. As with SCCs, the data exporter should run an assessment before relying on BCRs, and where the result is not favourable but the data exporter plans to continue with the transfer, the competent Supervisory Authority should be notified.

What about derogations of Article 49 GDPR?

Article 49 GDPR comprises further conditions under which personal data can be transferred to a third country in the absence of an adequacy decision and appropriate safeguards such as SCCs and BCRs, namely the following (a small illustrative check in code follows the list):

  • Consent. The CJEU points out that consent should be explicit, specific to the particular data transfer or set of transfers, and informed. This involves practical obstacles for businesses processing their customers’ data, as it would imply, for instance, asking for each customer’s individual consent before storing their data on Salesforce.
  • Performance of a contract between the data subject and the controller. It is important to note that this only applies where the transfer is occasional, and only to transfers that are objectively necessary for the performance of the contract.
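Here is the sketch announced above: a small check of whether either derogation could apply, with hypothetical predicate names (an illustration, not legal advice):

```python
def article_49_derogation_available(consent_explicit_specific_informed: bool,
                                    contract_with_data_subject: bool,
                                    transfer_occasional: bool,
                                    objectively_necessary: bool) -> bool:
    """True if one of the two derogations discussed above could apply."""
    # Consent: must be explicit, specific to the transfer(s) and informed.
    if consent_explicit_specific_informed:
        return True
    # Contract: only occasional transfers that are objectively necessary
    # for the performance of the contract.
    return (contract_with_data_subject
            and transfer_occasional
            and objectively_necessary)

# Example: a one-off transfer needed to fulfil a customer's booking abroad
print(article_49_derogation_available(False, True, True, True))  # -> True
```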

What about third countries other than the US?

The CJEU has indicated that, as a rule, SCCs can still be used to transfer data to a third country; however, the threshold set by the CJEU for transfers to the US applies to any third country, and the same goes for BCRs.

What should I do when it comes to processors transferring data to the US?

Pursuant to the EDPB FAQs, where no supplementary measures can be provided to ensure that US law does not impinge on the essentially equivalent level of protection as granted by the GDPR and if derogations under Article 49 GDPR do not apply, “the only solution is to negotiate an amendment or supplementary clause to your contract to forbid transfers to the US. Data should not only be stored but also administered elsewhere than in the US”.

What can we expect from the EDPB next?

The EDPB is currently analysing the CJEU judgment to determine the kind of supplementary measures, whether legal, technical or organisational, that could be provided in addition to SCCs or BCRs.

ICO statement

The ICO is continuously updating its statement on the CJEU Schrems II judgement. The latest version so far dates from 27th July and confirms that the EDPB FAQs still apply to UK controllers and processors. Until further guidance is provided by EU bodies and institutions, the ICO recommends taking stock of the international transfers businesses make and reacting promptly, and states that it will continue to apply a risk-based and proportionate approach in accordance with its Regulatory Action Policy.

Other European Data Protection Authorities’ statements

Some European data protection supervisory authorities have provided guidance in response to the CJEU Schrems II judgement. While most countries are still considering the implications of the decision, some others are warning about the risk of non-compliance, and a few, like Germany (particularly Berlin and Hamburg) and the Netherlands, have openly stated that transfers to the US are unlawful.

In general terms, the authorities warning about the risks claim the following:

  • Data transfers to the U.S. are still possible, but require the implementation of additional safeguards.
  • The obligation to implement the requirements contained in the CJEU’s decision is both on the businesses and the data protection supervisory authorities.
  • Businesses are required to constantly monitor the level of protection in the data importer’s country.
  • Businesses should carry out an assessment before transferring data to the US.

The data protection supervisory authority in Germany (Rhineland-Palatinate) has proposed a five-step assessment for businesses.

Can the level of data protection required by the GDPR be respected in the US?

The CJEU considered that the requirements of US domestic law and, in particular, certain programmes enabling access by US public authorities to personal data transferred from the EU, result in limitations on the protection of personal data which do not satisfy GDPR requirements. Furthermore, the CJEU stated that US legislation does not grant data subjects actionable rights before the courts against the US authorities.

In this context, it seems difficult for a company to demonstrate that it can provide an adequate level of data protection for personal data transferred from the EU, because it would essentially have to bypass US legislation.

Recent moves in the US Senate do not shed light on this issue either: the “Lawful Access to Encrypted Data Act” was introduced last month. It would mandate that service providers and device manufacturers assist law enforcement with accessing encrypted data if that assistance would aid in the execution of a lawfully obtained warrant.

Do you make international data transfers to third countries? Are you affected by the Schrems II decision? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments and Data Protection Officer outsourcing. We also offer CCPA compliance services. Contact us today.

Artificial Intelligence: From Ethics to Regulation


The study launched by the European Parliament last month builds on the AI-HLEG Ethics Guidelines for Trustworthy AI and seeks to find alternatives that might help to move from AI ethics to regulation. 

In our previous blogs and vlogs we have discussed the application of the Ethics Guidelines for Trustworthy AI in several contexts and industries, like retail and real estate. However, there is a need for precision in practical terms. To this end, the European Parliament has just published a document exploring different alternatives for developing specific policy and legislation to govern AI.

Important considerations about ethics

The European Parliament considers that there are some preliminary points about ethics that should be understood before moving forward with the analysis of further implications of ethics on AI:

  1. Ethics is not about checking boxes. It should be addressed through questions based on deliberation, critique and inquiry.
  2. Ethics should be understood as a continuous process where regular checks and updates become essential.
  3. AI should be conceived of as a social experiment that makes it possible to understand its ethical constraints and the kinds of things that need to be learnt. This approach may facilitate the monitoring of social effects so that they can be used to improve the technology and its introduction into society.
  4. Moral dilemmas do not make it possible to satisfy all ethical principles and value commitments at the same time, which means that sometimes there will not be a single answer to a problem, but rather a set of options and alternatives, each with an associated ‘moral residue’.
  5. The goal of ethics is to provide a strong enough rationale that an individual is compelled to act in the way they believe is the right/good way.

Key AI ethics insights

According to the study, there are six main elements of AI that should be addressed when it comes to an ethical implementation of algorithms and systems:

  • Transparency. Policy makers need to deal with three aspects: the complexity of modern AI solutions, the intentional obfuscation by those who design them and the inexplicability regarding how a particular input results in a particular output.
  • Bias and fairness. Training data is deemed essential in this context, and there is a need to define the concepts of ‘fair’ and ‘accurate’.
  • Contextualization of the AI according to the society in which it has been created and clarification of the society’s role in its development.
  • Accountability and responsibility.
  • Re-design risk assessments to make them relevant.
  • Ethical technology assessments (eTA). The eTA is a written document intended to capture the dialogue between ethicist and technologist. It comprises the list of ethical issues related to the AI application, for the purpose of identifying all the possible ethical risks and drawing out the possible negative consequences of implementing the AI system.

Why is regulation necessary?

The European Parliament points out the following reasons that motivate the need for legislation:

  • The criticality of ethical and human rights issues raised by AI development and deployment.
  • The need to protect people (i.e. the principle of proportionality).
  • The interest of the state (given that AI will be used in state-governed areas such as prisons, taxes, education, child welfare).
  • The need for creating a level playing field (e.g. self-regulation is not enough).
  • The need for the development of a common set of rules for all government and public administration stakeholders to uphold.

What are the policy options?

While ethics is about searching for broad answers to societal and environmental problems, regulation can codify and enforce ethically desirable behaviour.

The study proposes a number of policy options that may be adopted by European Parliamentary policy-makers:

  1. Mandatory Data Hygiene Certificate (DHB) for all AI system developers in order to be eligible to sell their solutions to government institutions and public administration bodies. This certificate would not require insight into the proprietary aspects of the AI system, and it would not require organisations to share their data sets with competing organisations.
  2. Mandatory ethical technology assessment (eTA), to be conducted by all public and government organisations using AI systems prior to deployment of the AI system.
  3. Mandatory and clear definition of the goals of using AI when it comes to public administration institutions and government bodies. The aim is to avoid deploying AI in society in the hope of learning an unknown ‘something’. Instead, it is proposed that there must be a specific and explicit ‘something’ to be learned.
  4. Mandatory accountability report, to be produced by all organisations deploying AI systems as a response to the ethical and human rights issues identified in the eTA.

Practical considerations about eTA reports

Criteria

The seven key requirements of the Ethics Guidelines for Trustworthy AI may be used by organisations to structure an eTA, namely:

  • Human agency and oversight.
  • Technical robustness and safety.
  • Privacy and data governance.
  • Transparency.
  • Diversity, non-discrimination and fairness.
  • Societal and environmental well-being.
  • Accountability.

Alternatively, the European Parliament suggests using the following nine criteria: (1) Dissemination and use of information; (2) Control, influence and power; (3) Impact on social contact patterns; (4) Privacy; (5) Sustainability; (6) Human reproduction; (7) Gender, minorities and justice; (8) International relations; and (9) Impact on human values.

The eTA should nonetheless be concrete, so it may be extended to cover: (a) the specific context in which the AI will be used; (b) the AI methodology used; (c) the stakeholders involved; and (d) an account of the ethical values and human rights in need of attention.
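To make this concrete, here is a minimal sketch of an eTA report structured around the seven requirements and the (a)–(d) context fields above. The field names and example values are illustrative assumptions, not a format prescribed by the European Parliament.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# The seven key requirements of the Ethics Guidelines for Trustworthy AI
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class EthicalTechnologyAssessment:
    context_of_use: str              # (a) specific context the AI is used in
    ai_methodology: str              # (b) the AI methodology used
    stakeholders: List[str]          # (c) the stakeholders involved
    values_and_rights: List[str]     # (d) ethical values / human rights at stake
    # One section per requirement: identified risks and possible consequences
    findings: Dict[str, List[str]] = field(
        default_factory=lambda: {r: [] for r in REQUIREMENTS})

# Hypothetical usage
eta = EthicalTechnologyAssessment(
    context_of_use="benefit-eligibility triage in a public administration",
    ai_methodology="gradient-boosted decision trees",
    stakeholders=["applicants", "case workers", "the regulator"],
    values_and_rights=["non-discrimination", "due process"],
)
eta.findings["Transparency"].append("complex model; outputs are hard to explain")
```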

Who is going to make eTA reports?

The tasks in the eTA log require experience in identifying ethical issues and placing them within a conceptual framework for analysis. For this reason, the European Parliament highlights the likelihood of a future role in the regulation process for ethics, and for ethicists engaged in organisations around AI.

Will SMEs be able to afford this?

In order to create a level playing field across SMEs, the European Parliament plans to provide an adequate margin of three years and to offer small and medium-sized companies EU funding to assist them with report completion as well as with the necessary capacity-building measures. This model parallels the incremental process by which companies came to comply with the GDPR.

Does your company use AI? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides GDPR adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.