EDPB Guidelines on the targeting of social media users overview

On 2nd September, the EDPB adopted their Guidelines 8/2020 on the targeting of social media users, which aim to clarify the implications that these practices may have on privacy and data protection.

Most social media platforms allow their users to manage their privacy preferences by letting them make their profiles public or private. Pictures, videos and text are not the only personal information processed in this context, though: what about the analytics used to target social media users? Analytics data is also personal data and should be managed and protected accordingly. The European Data Protection Board (EDPB) is aware of the risks these practices create for the fundamental rights and freedoms of individuals and has published these guidelines to provide recommendations with regard to the roles and responsibilities of targeters and social media providers.

Actors involved in social media targeting

The EDPB explains the concepts of social media providers, users and targeters as follows:

  • Social media providers should be understood as providers of online platforms that enable the development of networks and communities of users, among which information and content is shared.
  • Users are the individuals who register with the service and create accounts and profiles, whose data are used for targeting purposes. The term also comprises individuals who access the services without having registered.
  • Targeters are defined as natural or legal persons that communicate specific messages to the users of social media in order to advance commercial, political, or other interests, on the basis of specific parameters or criteria.
  • Other actors that may also be relevant are marketing service providers, ad networks, ad exchanges, demand-side and supply-side platforms, data brokers, data management platforms (DMPs) and data analytics companies.

Identifying the roles and responsibilities of the various actors correctly is key in the process, as the interaction between social media providers and other actors may give rise to joint responsibilities under the GDPR.

Risks to the rights and freedoms of users

The EDPB highlights some of the main risks that may be derived from social media targeting:

  • Uses of personal data that go against or beyond individuals’ reasonable expectations.
  • Combination of personal data from different sources.
  • Existence of profiling activities connected to targeting.
  • Obstacles to the individual’s ability to exercise control over his or her personal data.
  • Lack of transparency regarding the role of the different actors and the processing operations.
  • Possibility of discrimination and exclusion.
  • Possible manipulation of users and undue influence over them.
  • Political and ideological polarisation.
  • Information overload.
  • Manipulation over children’s autonomy and their right to development.
  • Concentration in the markets of social media and targeting.

Relevant case law

The EDPB analyses the respective roles and responsibilities of social media providers and targeters through the relevant case law of the CJEU, namely the judgments in Wirtschaftsakademie (C-210/16) and Fashion ID (C-40/17):

– In Wirtschaftsakademie, the CJEU decided that the administrator of a so-called “fan page” on Facebook must be regarded as taking part in the determination of the purposes and means of the processing of personal data. The reasoning behind this decision is that the creation of a fan page involves the definition of parameters by the administrator, which has an influence on the processing of personal data for the purpose of producing statistics based on visits to the fan page, using the filters provided by Facebook.

– In Fashion ID, the CJEU decided that a website operator can be considered a controller when it embeds a Facebook social plugin on its website that causes the browser of a visitor to transmit personal data of the visitor to Facebook. However, the liability of the website operator will be “limited to the operation or set of operations involving the processing of personal data in respect of which it actually determines the purposes and means”; therefore, the website operator will not be a controller for subsequent operations carried out by Facebook after the data has been transmitted.

Roles and responsibilities of targeters and social media providers

Social media users may be targeted on the basis of provided, observed or inferred data, as well as a combination thereof.

In most cases, both the targeter and the social media provider will participate in determining the purpose (e.g. to display a specific advertisement to the set of social media users who make up the target audience) and the means (on the one hand, by choosing to use the services offered by the social media provider and requesting it to target an audience based on certain criteria; on the other hand, by deciding which categories of data shall be processed, which targeting criteria shall be offered and who shall have access) of the processing of personal data. They will therefore be deemed joint controllers pursuant to Article 26 GDPR.

As pointed out by the CJEU in Fashion ID, the joint controllership status will only extend to those processing operations for which the targeter and the social media provider effectively co-determine the purposes and means, such as the processing of personal data resulting from the selection of the relevant targeting criteria, the display of the advertisement to the target audience and the processing of personal data undertaken by the social media provider to report to the targeter about the results of the targeting campaign. However, the joint control does not extend to operations involving the processing of personal data at other stages occurring before the selection of the relevant targeting criteria or after the targeting and reporting has been completed.

The EDPB also recalls that actual access to personal data is not a prerequisite for joint responsibility, thus the above analysis would remain the same even if the targeter only specified the parameters of its intended audience and did not have access to the personal data of the affected users.

Legal bases of the processing

It is important to note that, as joint controllers, both the social media provider and the targeter must be able to demonstrate the existence of a legal basis pursuant to Article 6 GDPR to justify the processing of personal data for which each of them is responsible.

In general terms, the two legal bases most likely to apply are legitimate interest and the data subject’s consent.

In order to rely on legitimate interest as the lawful basis, there are three cumulative conditions that should be met:

– (i) the pursuit of a legitimate interest by the data controller or by the third party or parties to whom the data are disclosed;
– (ii) the need to process personal data for the purposes of the legitimate interests pursued, and
– (iii) the condition that the fundamental rights and freedoms of the data subject whose data require protection do not take precedence.

In addition, data subjects should not only be provided with the possibility to object to the display of targeted advertising when accessing the platform, but should also be provided with controls ensuring that the underlying processing of their personal data for targeting purposes no longer takes place after they have objected.

Legitimate interest will not be a suitable basis in some circumstances, however, and consent will be required in those cases. Examples include intrusive profiling and tracking practices for marketing or advertising purposes, such as those involving the tracking of individuals across multiple websites, locations, devices or services, or data brokering.

The EDPB further notes that the consent collected for the implementation of tracking technologies needs to fulfil the conditions laid out in Article 7 GDPR in order to be valid. They highlight that pre-ticked check-boxes by the service provider which the user must then deselect to refuse his or her consent do not constitute valid consent. Moreover, based on recital 32, actions such as scrolling or swiping through a webpage or similar user activity would not under any circumstances satisfy the requirement of a clear and affirmative action, because such actions may be difficult to distinguish from other activity or interaction by a user, which means that determining that an unambiguous consent has been obtained would also not be possible. Furthermore, in such a case, it would be difficult to provide a way for the user to withdraw consent in a manner that is as easy as granting it.
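The conditions discussed above can be pictured as a simple screening of consent signals. The following is an illustrative sketch only, with hypothetical field names of my own (it is not an API prescribed by the GDPR or the Guidelines); a real assessment of consent validity is a legal analysis, not a boolean check:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    action: str               # how consent was expressed, e.g. "checkbox_ticked"
    pre_ticked: bool          # was the box pre-selected by the provider?
    informed: bool            # was the user told who processes what, and why?
    withdrawal_as_easy: bool  # can consent be withdrawn as easily as it was given?

# Actions that do not amount to a clear affirmative act (cf. recital 32)
AMBIGUOUS_ACTIONS = {"scrolling", "swiping", "continued_browsing"}

def is_valid_consent(record: ConsentRecord) -> bool:
    """Very rough screen of consent signals against the conditions above."""
    if record.action in AMBIGUOUS_ACTIONS:
        return False  # no unambiguous affirmative action
    if record.pre_ticked:
        return False  # pre-ticked boxes do not constitute valid consent
    # consent must also be informed and as easy to withdraw as it was to give
    return record.informed and record.withdrawal_as_easy
```

Under this sketch, a ticked-by-the-user, informed checkbox with easy withdrawal passes, while scrolling-based or pre-ticked "consent" fails, mirroring the EDPB's reasoning.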

The controller in charge of collecting consent from data subjects should be the one that interacts with them first, as consent, in order to be valid, must be obtained prior to the processing. The EDPB also recalls that the controller gathering consent should name any other controllers to whom the data will be transferred and who wish to rely on the original consent.

Finally, where the profiling undertaken is likely to have a “similarly significant [effect]” on a data subject (for example, the display of online betting advertisements), Article 22 GDPR shall be applicable. An assessment in this regard will need to be conducted by the controller or joint controllers in each instance with reference to the specific facts of the targeting.

The EDPB welcomes comments on the Guidelines until 19th October.

You can learn more about joint controllership in our recent blog Joint controllership: key considerations by the EDPB.

 

Are you targeting social media users? You may need to adapt your processes to comply with the GDPR and the EDPB Guidelines. We can help you. Aphaia provides GDPR, Data Protection Act 2018 and ePrivacy adaptation consultancy services, including data protection impact assessments, CCPA compliance and Data Protection Officer outsourcing.


Joint controllership: key considerations by the EDPB

The EDPB provides key considerations to clarify the concepts of processor, controller and joint controller in their Guidelines 07/2020.

The European Data Protection Board (EDPB) published their Guidelines 07/2020 on the concepts of controller and processor in the GDPR on 7th September, which aim to offer a precise meaning of these concepts and criteria for their correct interpretation that are consistent throughout the European Economic Area.

Since the CJEU, in its judgment in Fashion ID (C-40/17), considered the fashion retailer Fashion ID to be a controller jointly with Facebook for embedding the ‘Like’ button on its website, the concept of joint controllership seems to have taken on a broader meaning, as it may now apply to some data processing activities that were deemed otherwise in the past.

In our blog today we go through the main insights provided by the EDPB with regard to the concept of joint controller.

The concept of joint controller in the GDPR

Pursuant to Article 26 GDPR, the qualification as joint controllers may arise where two or more controllers jointly determine the purposes and means of processing. The GDPR also states that the actors involved shall determine their respective responsibilities for compliance by means of an arrangement between them, the essence of which shall be made available to the data subjects. However, the GDPR does not contain further provisions specifying the details of this type of processing, such as the definition of ‘jointly’ or the legal form of the arrangement.

Joint participation

The EDPB explains that joint participation can take the form of a common decision taken by the two or more actors involved in the processing or result from converging decisions by them. Thus in practice, joint participation can take several different forms and it does not require the same degree of involvement or equal responsibility by the controllers in each case.

  • Joint participation through common decision. It means deciding together and involves a common intention.
  • Joint participation through converging decisions. This form results from the case law of the CJEU on the concept of joint controllers. According to the EDPB, decisions can be considered as converging on purposes and means where they meet the following requirements:
    • They complement each other.
    • They are necessary for the processing to take place in such manner that they have a tangible impact on the determination of the purposes and means of the processing.

As a result, the question that should be contemplated to identify converging decisions would be along the lines of “Would the processing be possible without both parties’ participation in the sense that the processing by each party is inseparable?”.

The EDPB also highlights that the fact that one of the parties does not have access to personal data processed is not sufficient to exclude joint controllership.

 

Jointly determined purpose(s)

The EDPB considers that there are two scenarios under which the purpose pursued by two or more controllers may be deemed as jointly determined:

  • The entities involved in the same processing operation process such data for jointly defined purposes.
  • The entities involved pursue purposes which are closely linked or complementary. Such may be the case, for example, when there is a mutual benefit arising from the same processing operation, provided that each of the entities involved participates in the determination of the purposes and means of the relevant processing operation.

Jointly determined means

Joint controllership requires that two or more entities have exerted influence over the means of the processing. However, this does not mean that each entity involved needs in all cases to determine all of the means. There might be different circumstances which would qualify as joint controllership where the rest of the requirements are met, even where the determination of the means is not equally shared between the parties, for example:

 

  • Different joint controllers define the means of the processing to a different extent, depending on who is effectively in a position to do so.
  • One of the entities involved provides the means of the processing and makes them available for personal data processing activities by other entities. The entity that decides to make use of those means so that personal data can be processed for a particular purpose also participates in the determination of the means of the processing. For example, the choice made by an entity to use for its own purposes a tool or other system developed by another entity, allowing the processing of personal data, will likely amount to a joint decision on the means of that processing by those entities.

 

Limits of joint controllership

The fact that several actors are involved in the same processing does not mean that they are necessarily acting as joint controllers of such processing. Not all kinds of partnership, cooperation or collaboration imply qualification as joint controllers, as such qualification requires a case-by-case analysis of each processing at stake and of the precise role of each entity with respect to each processing. The EDPB provides a non-exhaustive list of examples of situations where there is no joint controllership:

  • Preceding or subsequent operations: while two actors may be deemed joint controllers with regard to a specific data processing where the purpose and means of its operations are jointly determined, this does not affect the purposes and means of operations that precede or are subsequent in the chain of processing. In that case, the entity that decides alone should be considered as the sole controller of said preceding or subsequent operation.
  • Own purpose: the situation of joint controllers acting on the basis of converging decisions should be distinguished from the case of a processor, since the latter, while participating in the performance of a processing, does not process the data for its own purposes but carries out the processing on behalf of the controller.
  • Commercial benefit: the mere existence of a mutual benefit arising from a processing activity does not give rise to joint controllership. For example, if one of the entities involved is merely being paid for services rendered, it is acting as a processor rather than as a joint controller.

For instance, the use of a common data processing system or infrastructure will not in all cases lead to the parties involved qualifying as joint controllers, in particular where the processing they carry out is separable and could be performed by one party without intervention from the other, or where the provider is a processor in the absence of any purpose of its own. Another example would be the transmission of employee data to the tax authorities.

Joint controller arrangement

Joint controllers should put in place a joint controller arrangement in which they determine and agree on, in a transparent manner, their respective responsibilities for compliance with the GDPR. The following non-exhaustive list of tasks should be specified by means of said arrangement:

  • Response to data subjects’ requests exercised pursuant to the rights granted by the GDPR.
  • Transparency duties to provide the data subjects with the relevant information referred to in Articles 13 and 14 GDPR.
  • Implementation of general data protection principles.
  • Legal basis of the processing.
  • Security measures.
  • Notification of a personal data breach to the supervisory authority and to the data subject.
  • Data Protection Impact Assessments.
  • The use of a processor.
  • Transfers of data to third countries.
  • Organisation of contact with data subjects and supervisory authorities.

The EDPB recommends documenting the relevant factors and the internal analysis carried out in order to allocate the different obligations. This analysis is part of the documentation under the accountability principle.
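One simple way to document such an allocation is a mapping from each GDPR duty to the controller responsible for it, together with a check that nothing is left unassigned. This is an illustrative sketch only; the task and party names are hypothetical shorthand for the items listed above, not identifiers prescribed by the Guidelines:

```python
# Duties a joint controller arrangement should allocate (shorthand labels)
REQUIRED_TASKS = {
    "data_subject_requests",
    "transparency_information",
    "data_protection_principles",
    "legal_basis",
    "security_measures",
    "breach_notification",
    "dpia",
    "use_of_processor",
    "third_country_transfers",
    "contact_point",
}

def unallocated_tasks(arrangement: dict) -> set:
    """Return the required duties not yet assigned to any joint controller."""
    return REQUIRED_TASKS - set(arrangement)

# Hypothetical partial allocation between two joint controllers
arrangement = {
    "data_subject_requests": "targeter",
    "transparency_information": "targeter",
    "security_measures": "social media provider",
}
```

Running `unallocated_tasks(arrangement)` on the partial allocation above would flag the duties (legal basis, breach notification, etc.) that the parties still need to agree on and document.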

When it comes to the form of the arrangement, even though there is no legal requirement in the GDPR for a contract or other legal act, the EDPB recommends that the arrangement be made in the form of a binding document, such as a contract or other legally binding act under EU or Member State law to which the controllers are subject.

The EDPB welcomes comments on the Guidelines until 19th October.

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides adaptation consultancy services, including data protection impact assessments, CCPA compliance and Data Protection Officer outsourcing.


EU-US Privacy Shield invalidation business implications follow-up

Since the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield in their Schrems II judgement delivered two weeks ago, many questions have arisen around international data transfers to the US.

Following the invalidation, as reported by Aphaia, data transfers to the US require another valid safeguard or mechanism providing a level of data protection similar to that granted by the GDPR.

European Data Protection Board guidelines

With the aim of clarifying the main issues derived from the invalidation of the EU-US Privacy Shield, the European Data Protection Board (EDPB) has published Frequently Asked Questions on the Schrems II judgement. These answers are expected to be developed and complemented along with further analysis, as the EDPB continues to examine and assess the CJEU decision.

In the document, the EDPB reminds readers that there is no grace period during which the EU-US Privacy Shield is still deemed a valid mechanism to transfer personal data to the US. Businesses that were relying on this safeguard and wish to keep transferring data to the US should therefore find another valid safeguard which ensures a level of protection essentially equivalent to that guaranteed within the EU by the GDPR.

What about Standard Contractual Clauses?

The CJEU considered that the validity of SCCs depends on the ability of the data exporter and the recipient of the data to verify, prior to any transfer and taking into account the specific circumstances, whether that level of protection can be respected in the US. This seems difficult, though, because the Court found that US law (i.e. Section 702 FISA and EO 12333) does not ensure an essentially equivalent level of protection.

The data importer should inform the data exporter of any inability to comply with the SCCs and, where necessary, with any supplementary measures. The data exporter should carry out an assessment to verify that US law does not impinge on the adequate level of protection, taking into account the circumstances of the transfer and the supplementary measures that could be put in place; it may contact the data importer to verify the legislation of its country and collaborate on the assessment. Where the result is not favourable, the transfer should be suspended or, if the data exporter intends to keep transferring the data, the competent Supervisory Authority should be notified.

What about Binding Corporate Rules (BCRs)?

Given that the reason for invalidating the EU-US Privacy Shield was the degree of interference created by US law, the CJEU judgement applies in the context of BCRs as well, since US law will also have primacy over this tool. As with SCCs, an assessment should be run by the data exporter, and the competent Supervisory Authority should be notified where the result is not favourable and the data exporter plans to continue with the transfer.

What about derogations of Article 49 GDPR?

Article 49 GDPR comprises further conditions under which personal data can be transferred to a third country in the absence of an adequacy decision and appropriate safeguards such as SCCs and BCRs, namely:

  • Consent. The CJEU points out that consent should be explicit, specific to the particular data transfer or set of transfers, and informed. This involves practical obstacles for businesses processing data from their customers, as it would imply, for instance, asking for each customer’s individual consent before storing their data on Salesforce.
  • Performance of a contract between the data subject and the controller. It is important to note that this only applies where the transfer is occasional and only to transfers that are objectively necessary for the performance of the contract.

What about third countries other than the US?

The CJEU has indicated that SCCs, as a rule, can still be used to transfer data to a third country; however, the threshold set by the CJEU for transfers to the US applies to any third country, and the same goes for BCRs.

What should I do when it comes to processors transferring data to the US?

Pursuant to the EDPB FAQs, where no supplementary measures can be provided to ensure that US law does not impinge on the essentially equivalent level of protection as granted by the GDPR and if derogations under Article 49 GDPR do not apply, “the only solution is to negotiate an amendment or supplementary clause to your contract to forbid transfers to the US. Data should not only be stored but also administered elsewhere than in the US”.

What can we expect from the EDPB next?

The EDPB is currently analysing the CJEU judgment to determine the kind of supplementary measures that could be provided in addition to SCCs or BCRs, whether legal, technical or organisational measures.

ICO statement

The ICO is continuously updating their statement on the CJEU Schrems II judgement. The latest version so far dates from 27th July and confirms that the EDPB FAQs still apply to UK controllers and processors. Until further guidance is provided by EU bodies and institutions, the ICO recommends that businesses take stock of the international transfers they make and react promptly, and states that it will continue to apply a risk-based and proportionate approach in accordance with its Regulatory Action Policy.

Other European Data Protection Authorities’ statements

Some European data protection supervisory authorities have provided guidance in response to the CJEU Schrems II judgement. While most countries are still considering the implications of the decision, some others are warning about the risk of non-compliance, and a few, like Germany (particularly Berlin and Hamburg) and the Netherlands, have openly stated that transfers to the US are unlawful.

In general terms, the ones that are warning about the risks claim the following:

  • Data transfers to the U.S. are still possible, but require the implementation of additional safeguards.
  • The obligation to implement the requirements contained in the CJEU’s decision is both on the businesses and the data protection supervisory authorities.
  • Businesses are required to constantly monitor the level of protection in the data importer’s country.
  • Businesses should run a previous assessment before transferring data to the US.
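The overall decision flow that emerges from the points above can be sketched very roughly as follows. This is an illustrative sketch only, with outcome labels of my own; the real assessment is a case-by-case legal analysis, and nothing here is legal advice:

```python
def transfer_decision(adequacy_decision: bool,
                      safeguard_in_place: bool,
                      equivalent_protection_ensured: bool,
                      article_49_derogation: bool) -> str:
    """Return a (highly simplified) outcome for a proposed third-country transfer."""
    if adequacy_decision:
        # An adequacy decision covers the transfer on its own
        return "permitted"
    if safeguard_in_place and equivalent_protection_ensured:
        # SCCs/BCRs work only where essentially equivalent protection is ensured,
        # possibly via supplementary measures
        return "permitted_with_safeguard"
    if article_49_derogation:
        # Article 49 derogations (e.g. explicit consent, occasional contractual transfers)
        return "permitted_under_derogation"
    # Otherwise the transfer should be suspended or ended
    return "suspend_or_end"
```

For transfers to the US after Schrems II, the first branch no longer applies, and the Court's findings make the second branch hard to reach, which is why attention has shifted to supplementary measures and the narrow Article 49 derogations.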

The data protection supervisory authority in Germany (Rhineland-Palatinate) has proposed a five-step assessment for businesses.

Can the level of data protection required by the GDPR be respected in the US?

The CJEU considered that the requirements of US domestic law and, in particular, certain programmes enabling access by US public authorities to personal data transferred from the EU, result in limitations on the protection of personal data which do not satisfy GDPR requirements. Furthermore, the CJEU stated that US legislation does not grant data subjects actionable rights before the courts against the US authorities.

In this context, it seems difficult for a company to demonstrate that it can provide an adequate level of protection for personal data transferred from the EU, as doing so would essentially require bypassing US legislation.

Recent moves in the US Senate do not shed light on this issue either: the “Lawful Access to Encrypted Data Act”, introduced last month, would mandate service providers and device manufacturers to assist law enforcement with accessing encrypted data if such assistance would aid the execution of a lawfully obtained warrant.

Do you make international data transfers to third countries? Are you affected by the Schrems II decision? We can help you. Aphaia provides GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments and Data Protection Officer outsourcing. We also offer CCPA compliance services. Contact us today.


Assessment List for Trustworthy Artificial Intelligence overview

Early this month the High-Level Expert Group on Artificial Intelligence (AI HLEG) presented their final Assessment List for Trustworthy Artificial Intelligence.

As reported in our blog, the piloting process of the Ethics Guidelines for Trustworthy AI was launched in the first EU AI Alliance Assembly, which took place on 26th June 2019. The results have been published now and they aim to support AI developers and deployers in implementing Trustworthy AI.

Background

Following the publication of the first draft in December, on 8 April 2019 the AI HLEG presented the Ethics Guidelines for Trustworthy AI, which addressed what trustworthy AI should be, that is: ‘lawful’, ‘ethical’ and ‘robust’, and the requirements it should meet, namely: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

While the theoretical requirements and principles set the basis for achieving Trustworthy AI, there was still a need to operationalise them so that businesses and organisations could implement them in practice. This is the goal pursued by the Assessment List for Trustworthy AI, which is deemed to be the operational tool of the Guidelines.

The piloting process

The piloting process, in which Aphaia participated, involved more than 350 stakeholders.

Feedback on the assessment list was given in three ways:

  • An online survey filled in by participants registered to the process;
  • The sharing of best practices on how to achieve trustworthy AI through the European AI Alliance; and
  • A series of in-depth interviews.

How should I use the Assessment List for Trustworthy AI (ALTAI)?

If you are developing, deploying or using AI, you should make sure that all your AI systems comply with the Trustworthy AI requirements and principles before effectively implementing them.

Goal: identifying the risks to people’s fundamental rights derived from the use of your AI systems and applying the relevant mitigation measures to minimise those risks while maximising the benefits of AI.

Steps: Self-evaluation through the ALTAI is the first step to check the gaps and design an action plan. The ALTAI is intended for flexible use, by which organisations can draw on elements relevant to the particular AI system from the list or add elements to it as they see fit, taking into consideration the sector they operate in. According to the AI HLEG, for this purpose you should: 

  • perform a Fundamental Rights Impact Assessment (FRIA) prior to self-assessing any AI system;
  • actively engage with the questions the list raises;
  • involve all relevant stakeholders, within and/or outside your organisation;
  • seek outside counsel or assistance where necessary; and
  • put in place appropriate internal guidance and governance processes.

The seven requirements

1. Human agency and oversight

“AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy”. In this section, organisations should reflect on how to deal with the effects AI systems can have on:

  • Human behaviour, in a broad sense.
  • Human perception and expectation when confronted with AI systems that ‘act’ like humans.
  • Human affection, trust and (in)dependence.

The questions derived from the topics above will help organisations to decide necessary oversight measures and governance mechanisms or approaches, such as:

  • Human-in-the-loop (HITL) or the capability for human intervention in every decision cycle of the system.
  • Human-on-the-loop (HOTL) or the capability for human intervention during the design cycle of the system and monitoring the system’s operation.
  • Human-in-command (HIC) or the capability to oversee the overall activity of the AI system and the ability to decide when and how to use the AI system in any particular situation.

Questions in this part mainly arise around AI systems interaction with end-users and their learning and training process. 

2. Technical robustness and safety

“Technical robustness requires that AI systems are developed with a preventative approach to risks and that they behave reliably and as intended while minimising unintentional and unexpected harm as well as preventing it where possible”. In this section, organisations should reflect about the following issues:

  • Resilience to attack and security.
  • Safety.
  • Accuracy.
  • Reliability, fall-back plans and reproducibility.

There are two key requirements to obtain positive results on the above:

  • Dependability, which comprises the ability of the AI systems to deliver services that can justifiably be trusted.
  • Resilience, which means the robustness of the AI systems when facing changes, either in the environment or due to the presence of other agents, human or artificial, that may interact with the AI system in an adversarial manner.

Questions in this part mainly arise around AI systems' undesirable and unexpected behaviour, certification mechanisms, threat anticipation, documentation procedures and risk metrics.

3. Privacy and data governance

“Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems”. In terms of data protection, the principle of prevention of harm involves:  

  • Adequate data governance that covers the quality and integrity of the data used.
  • Relevance of the data used in light of the domain in which the AI systems will be deployed.
  • Data access protocols.
  • The capability of the AI system to process data in a manner that protects privacy.

Questions in this part mainly arise around the type of personal data used for training and development, the implementation of GDPR mandatory measures and requirements, and the AI system's alignment with relevant standards such as ISO.

4. Transparency

“A crucial component of achieving Trustworthy AI is transparency which encompasses three elements: 1) traceability, 2) explainability and 3) open communication about the limitations of the AI system”:

  • Traceability: the process of the development of AI systems should be properly documented.
  • Explainability: this item refers to the ability to explain both the technical processes of the AI system and the reasoning behind the decisions or predictions that the AI system makes, which should be understood by those directly and indirectly affected.
  • Communication: the AI system’s capabilities and limitations should be communicated to users in a manner appropriate to the use case at hand; this could encompass communication of the AI system’s level of accuracy as well as its limitations.

Questions in this part mainly arise around traceability measures such as logging practices, user surveys, information mechanisms, and the provision of training material and disclaimers.
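As an illustration of such a logging practice, a traceability record could capture each decision together with the model version and inputs that produced it, so the decision can be reconstructed during an audit. This is a minimal, hypothetical Python sketch; the field names and values are assumptions, not prescribed by the guidance:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: each prediction is recorded with enough
# context (model version, inputs, output) to trace the decision later.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_prediction(model_version: str, features: dict, prediction) -> dict:
    """Record one decision of the AI system for traceability purposes."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    audit_log.info(json.dumps(record))  # persist as a structured log line
    return record

# Usage: log a single (illustrative) credit-scoring decision.
entry = log_prediction("v1.2.0", {"income": 42000, "tenure_years": 3}, "approved")
```

In practice such records would feed a retention-managed audit store; the point is that every automated decision leaves a documented trace, which is exactly what the traceability questions ask organisations to demonstrate.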

5. Diversity, non-discrimination and fairness

“In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle”. When it comes to AI systems, either when training or operating, discrimination may derive from:

  • Inclusion of inadvertent historic bias.
  • Incompleteness.
  • Bad governance models.
  • Intentional exploitation of consumer biases.
  • Unfair competition.

Questions in this part mainly arise around the strategies or procedures to avoid biases, educational and awareness initiatives, accessibility, user interfaces, Universal Design principles and stakeholder participation.

6. Societal and environmental well-being

“In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should be considered as stakeholders throughout the AI system’s life cycle”.

The following factors should be taken into account:

  • Environmental well-being.
  • Impact on work and skills.
  • Impact on society at large or democracy.

Questions in this part mainly arise around the mechanisms to evaluate the environmental and societal impact, the measures to address this impact, the risk of de-skilling of the workforce and the promotion of new digital skills.

7. Accountability

“The principle of accountability necessitates that mechanisms be put in place to ensure responsibility for the development, deployment and/or use of AI systems”. Closely linked to risk management, there are three elements that should be considered in this regard:

  • Measures to identify and mitigate risks.
  • Mechanisms for addressing the risks.
  • Regular audits.

Questions in this part mainly arise around audit mechanisms, third-party auditing processes, risk training, AI ethics boards and due protection for whistle-blowers, NGOs and trade unions.

Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.