Assessment List for Trustworthy Artificial Intelligence Overview

Early this month, the High-Level Expert Group on Artificial Intelligence (AI HLEG) presented its final Assessment List for Trustworthy Artificial Intelligence (ALTAI).

As reported in our blog, the piloting process of the Ethics Guidelines for Trustworthy AI was launched at the first EU AI Alliance Assembly, which took place on 26 June 2019. The results have now been published, and they aim to support AI developers and deployers in implementing Trustworthy AI.

Background

Following the publication of the first draft in December 2018, on 8 April 2019 the AI HLEG presented the Ethics Guidelines for Trustworthy AI, which set out what trustworthy AI should be, namely ‘lawful’, ‘ethical’ and ‘robust’, and the seven requirements it should meet: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

While the theoretical requirements and principles laid the groundwork for achieving Trustworthy AI, organisations still needed an operational means of implementing them in practice. This is the goal pursued by the Assessment List for Trustworthy AI, which is intended as the operational tool of the Guidelines.

The piloting process

The piloting process, in which Aphaia participated, involved more than 350 stakeholders.

Feedback on the assessment list was given in three ways:

  • An online survey filled in by participants registered for the process;
  • The sharing of best practices on how to achieve Trustworthy AI through the European AI Alliance; and
  • A series of in-depth interviews.

How should I use the Assessment List for Trustworthy AI (ALTAI)?

If you are developing, deploying or using AI, you should make sure that all your AI systems comply with the Trustworthy AI requirements and principles before effectively implementing them.

Goal: to identify the risks that the use of your AI systems poses to people’s fundamental rights, and to apply the relevant mitigation measures to minimise those risks while maximising the benefits of AI.

Steps: Self-evaluation through the ALTAI is the first step to identify gaps and design an action plan (a minimal sketch of such a checklist follows the list below). The ALTAI is intended for flexible use: organisations can draw on the elements of the list relevant to their particular AI system, or add elements to it as they see fit, taking into consideration the sector they operate in. According to the AI HLEG, for this purpose you should:

  • perform a Fundamental Rights Impact Assessment (FRIA) before self-assessing any AI system;
  • actively engage with the questions the list raises;
  • involve all relevant stakeholders, both within and outside your organisation;
  • seek outside counsel or assistance where necessary; and
  • put in place appropriate internal guidance and governance processes.
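
As an illustration of what such a self-evaluation might look like in practice, below is a minimal Python sketch of an ALTAI-style checklist with gap tracking. The class, fields and question texts are hypothetical, not the official ALTAI wording.

```python
# Hypothetical sketch: representing ALTAI-style self-assessment items in code.
# The requirement names mirror the Guidelines; the questions are illustrative.
from dataclasses import dataclass

@dataclass
class AssessmentItem:
    requirement: str      # one of the seven Trustworthy AI requirements
    question: str         # self-assessment question
    answered_yes: bool    # does the AI system currently satisfy this item?
    notes: str = ""       # evidence or planned mitigation

def open_gaps(items: list[AssessmentItem]) -> list[AssessmentItem]:
    """Return the items that still need an action plan."""
    return [item for item in items if not item.answered_yes]

checklist = [
    AssessmentItem("Human agency and oversight",
                   "Can a human intervene in every decision cycle?", True),
    AssessmentItem("Transparency",
                   "Are training data sources documented end to end?", False,
                   "Action: add data-lineage records to the model card."),
]

for gap in open_gaps(checklist):
    print(f"[GAP] {gap.requirement}: {gap.question} -> {gap.notes}")
```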

The seven requirements

1. Human agency and oversight

“AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy”. In this section, organisations should reflect on how to deal with the effects AI systems can have on:

  • Human behaviour, in a broad sense.
  • Human perception and expectation when confronted with AI systems that ‘act’ like humans.
  • Human affection, trust and (in)dependence.

The questions derived from the topics above will help organisations decide on the necessary oversight measures and governance mechanisms or approaches, such as:

  • Human-in-the-loop (HITL) or the capability for human intervention in every decision cycle of the system.
  • Human-on-the-loop (HOTL) or the capability for human intervention during the design cycle of the system and monitoring the system’s operation.
  • Human-in-command (HIC) or the capability to oversee the overall activity of the AI system and the ability to decide when and how to use the AI system in any particular situation.

Questions in this part mainly arise around AI systems’ interaction with end users and their learning and training processes.
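
To make the distinction concrete, here is a minimal Python sketch of a human-in-the-loop gate, where the system only acts autonomously above a confidence threshold and otherwise escalates to a human reviewer. The threshold and function names are illustrative assumptions, not part of the ALTAI.

```python
# Hypothetical sketch of a human-in-the-loop (HITL) gate: the system acts
# autonomously only when its confidence is high; otherwise the decision is
# routed to a human reviewer. Threshold and names are illustrative.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # assumption: tuned per use case and risk level

def decide(prediction: str, confidence: float,
           human_review: Callable[[str], str]) -> str:
    """Apply the model's decision only when confidence clears the threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction            # autonomous path, logged for HOTL monitoring
    return human_review(prediction)  # escalation path: a human decides

# Example: a stub reviewer that always refers the case to a human case worker.
result = decide("approve", 0.72, human_review=lambda p: "refer_to_case_worker")
print(result)  # -> refer_to_case_worker
```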

2. Technical robustness and safety

“Technical robustness requires that AI systems are developed with a preventative approach to risks and that they behave reliably and as intended while minimising unintentional and unexpected harm as well as preventing it where possible”. In this section, organisations should reflect on the following issues:

  • Resilience to attack and security.
  • Safety.
  • Accuracy.
  • Reliability, fall-back plans and reproducibility.

There are two key requirements to obtain positive results on the above:

  • Dependability: the ability of the AI system to deliver services that can justifiably be trusted.
  • Resilience: the robustness of the AI system when facing changes, whether in the environment or due to the presence of other agents, human or artificial, that may interact with it in an adversarial manner.

Questions in this part mainly arise around AI systems’ undesirable and unexpected behaviour, certification mechanisms, threat anticipation, documentation procedures and risk metrics.
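
As one concrete example of a reproducibility measure, the sketch below pins random seeds so that repeated runs of a stochastic step yield identical results; a real pipeline would also pin library versions, data snapshots and hardware. The code is an illustrative assumption, not an ALTAI prescription.

```python
# Hypothetical sketch of one reproducibility measure: seeding randomness so
# that repeated runs of a stochastic pipeline produce identical results.
import random

def run_experiment(seed: int) -> list[float]:
    """A stand-in for a stochastic training step; seeded for reproducibility."""
    rng = random.Random(seed)  # isolated RNG, not the shared global state
    return [rng.random() for _ in range(3)]

assert run_experiment(42) == run_experiment(42)  # identical across runs
print(run_experiment(42))
```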

3. Privacy and data governance

“Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems”. In terms of data protection, the principle of prevention of harm involves:  

  • Adequate data governance that covers the quality and integrity of the data used.
  • Relevance of the data used in light of the domain in which the AI systems will be deployed.
  • Data access protocols.
  • The capability of the AI system to process data in a manner that protects privacy.

Questions in this part mainly arise around the type of personal data used for training and development, the implementation of the GDPR’s mandatory measures and requirements, and the AI system’s alignment with relevant standards such as ISO standards.
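
By way of illustration, the following sketch shows one privacy-protective processing step: replacing direct identifiers with salted hashes before data enters a training set. Note that pseudonymised data remains personal data under the GDPR; this reduces risk rather than removing it. Names and values are hypothetical.

```python
# Hypothetical sketch of a data-minimisation step: replacing direct identifiers
# with salted hashes before training. Pseudonymised data is still personal data
# under the GDPR; this lowers, not eliminates, the risk.
import hashlib

SALT = b"rotate-me-and-store-me-separately"  # assumption: kept in a secret store

def pseudonymise(identifier: str) -> str:
    """Derive a stable pseudonym so records can be linked without exposing the ID."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

record = {"email": "jane@example.com", "age_band": "30-39", "outcome": 1}
training_row = {**record, "email": pseudonymise(record["email"])}
print(training_row)  # identifier replaced, analytic fields kept
```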

4. Transparency

“A crucial component of achieving Trustworthy AI is transparency which encompasses three elements: 1) traceability, 2) explainability and 3) open communication about the limitations of the AI system”:

  • Traceability: the process of the development of AI systems should be properly documented.
  • Explainability: this item refers to the ability to explain both the technical processes of the AI system and the reasoning behind the decisions or predictions that the AI system makes, which should be understood by those directly and indirectly affected.
  • Communication: the AI system’s capabilities and limitations should be communicated to users in a manner appropriate to the use case at hand; this could encompass communication of the AI system’s level of accuracy as well as its limitations.

Questions in this part mainly arise around traceability measures such as logging practices, user surveys, information mechanisms, and the provision of training materials and disclaimers.
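
As an example of such a traceability measure, the sketch below logs each prediction together with the model version and a hash of the input, so that individual decisions can be traced back during an audit. The field names and logging backend are illustrative assumptions.

```python
# Hypothetical sketch of a traceability log: each prediction is recorded with
# the model version and a hash of the input for later audit. Names illustrative.
import hashlib
import json
import time

def log_prediction(model_version: str, features: dict, prediction) -> dict:
    """Build one audit-log entry; production code would use append-only storage."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "prediction": prediction,
    }
    print(json.dumps(entry))  # stand-in for a real logging backend
    return entry

log_prediction("credit-model-1.4.2", {"income": 42000, "tenure": 5}, "approve")
```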

5. Diversity, non-discrimination and fairness

“In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle”. When it comes to AI systems, whether in training or in operation, discrimination may derive from:

  • Inclusion of inadvertent historic bias.
  • Incompleteness.
  • Bad governance models.
  • Intentional exploitation of consumer biases.
  • Unfair competition.

Questions in this part mainly arise around the strategies or procedures to avoid biases, educational and awareness initiatives, accessibility, user interfaces, Universal Design principles and stakeholder participation.
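
To illustrate one such procedure, the sketch below computes a simple demographic parity difference, i.e. the gap in positive-outcome rates between two groups. A small gap is necessary but far from sufficient evidence of fairness; the ALTAI expects a much broader review. The data is invented.

```python
# Hypothetical sketch of one simple bias check: the demographic parity
# difference between positive-outcome rates of two groups. Data is invented.
def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # e.g. loan approvals for group A
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # e.g. loan approvals for group B

parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity difference: {parity_gap:.2f}")  # 0.75 - 0.375 = 0.38
```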

6. Societal and environmental well-being

“In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should be considered as stakeholders throughout the AI system’s life cycle”.

The following factors should be taken into account:

  • Environmental well-being.
  • Impact on work and skills.
  • Impact on society at large or democracy.

Questions in this part mainly arise around the mechanisms to evaluate the environmental and societal impact, the measures to address this impact, the risk of de-skilling the workforce, and the promotion of new digital skills.
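
For the environmental dimension, a back-of-the-envelope estimate of training emissions can be a useful starting point: energy equals power draw times hours times data-centre overhead, and emissions equal energy times grid carbon intensity. All figures in the sketch below are illustrative assumptions.

```python
# Hypothetical back-of-the-envelope estimate of training emissions:
# energy (kWh) = power draw x hours x data-centre overhead (PUE);
# emissions (kgCO2e) = energy x grid carbon intensity. Figures illustrative.
gpu_power_kw = 0.3          # assumed average draw of one accelerator
gpus = 8
training_hours = 72
pue = 1.5                   # assumed data-centre power usage effectiveness
grid_kgco2_per_kwh = 0.23   # assumed regional carbon intensity

energy_kwh = gpu_power_kw * gpus * training_hours * pue
emissions_kg = energy_kwh * grid_kgco2_per_kwh
print(f"{energy_kwh:.0f} kWh ~= {emissions_kg:.0f} kg CO2e")
```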

7. Accountability

“The principle of accountability necessitates that mechanisms be put in place to ensure responsibility for the development, deployment and/or use of AI systems”. Closely linked to risk management, there are three elements that should be considered in this regard:

  • Measures to identify and mitigate risks.
  • Mechanisms for addressing the risks.
  • Regular audits.

Questions in this part mainly arise around audit mechanisms, third-party auditing processes, risk training, AI ethics boards, and due protection for whistle-blowers, NGOs and trade unions.
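
As one way to operationalise these elements, the sketch below models a lightweight risk register that pairs each identified risk with a mitigation, an owner and a next audit date. The structure and entries are hypothetical, not an ALTAI template.

```python
# Hypothetical sketch of a lightweight risk register pairing each identified
# risk with a mitigation, an owner and an audit date. Entries are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    risk: str
    mitigation: str
    owner: str
    next_audit: date

register = [
    RiskEntry("Model drift degrades accuracy for minority groups",
              "Quarterly fairness re-evaluation on fresh data",
              "AI ethics board", date(2020, 10, 1)),
    RiskEntry("Training data contains unlawfully processed personal data",
              "DPO sign-off on every new data source",
              "Data Protection Officer", date(2020, 9, 1)),
]

for entry in sorted(register, key=lambda e: e.next_audit):
    print(f"{entry.next_audit}: {entry.risk} -> {entry.mitigation} ({entry.owner})")
```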

Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.
