EU Commission AI Ethics Guidelines are the Talk of the Town.
The European Commission’s High-Level Expert Group on AI (AI HLEG) published a draft set of AI Ethics Guidelines in December 2018. The final version is due in March 2019; although it will not be legally binding, all stakeholders will be able to formally endorse and sign up to the Guidelines on a voluntary basis.
AI is the future. But regulation is crucial in order to minimise risks associated with its use. Areas of concern include the adoption of identification processes without the data subject’s consent, covert AI systems and Lethal Autonomous Weapon Systems.
Trust is deemed the prerequisite for people and societies to develop, deploy and use AI: unless AI is demonstrably worthy of trust, adverse consequences may ensue and its uptake by citizens and consumers may be hindered.
Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable.
(1) Fundamental rights and core principles
AI HLEG believes in an approach to AI ethics based on the fundamental rights commitment of the EU Treaties and Charter of Fundamental Rights, and highlights the main ones that should apply in an AI context:
-Respect for human dignity.
-Freedom of the individual.
-Respect for democracy, justice and the rule of law.
-Equality, non-discrimination and solidarity.
In order to ensure that AI is developed in a human-centric manner and the above principles are taken into account when designing and implementing AI systems, some principles and values should be observed:
-The Principle of Beneficence (“Do Good”): AI systems should be designed to generate prosperity, value creation, wealth maximisation and sustainability.
-The Principle of non-maleficence (“Do no Harm”): by design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work.
-The Principle of Autonomy (“Preserve Human Agency”): human beings using AI should retain full self-determination and must not be subordinated to the AI system.
-The Principle of Justice (“Be Fair”): developers and implementers must ensure that individuals and minority groups remain free from bias, stigmatisation and discrimination when AI systems are used.
-The Principle of Explicability (“Operate Transparently”): this comprises transparency on both the technical and business sides, and implies that AI systems should be auditable, comprehensible and intelligible, and that users are knowingly informed of the intentions of the developers and implementers of the AI technology.
(2) Trustworthy AI in practice
The values above must actually be implemented, which requires the development of specific requirements for AI systems. AI HLEG identifies the following as the most important:
-Accountability: special compensation mechanisms should be put in place, both monetary and non-monetary.
-Data Governance: particularly relevant are the quality and integrity of the datasets used for training, as well as the weighting of the different categories of data.
-Design for all: systems should be designed in a way that allows all citizens to use the products or services, regardless of their age, disability status or social status.
-Governance of AI Autonomy (Human oversight): it must be ensured that AI systems continue to behave as originally programmed; the levels or instances of governance will depend on the type of system and its impact on individuals.
-Non-Discrimination: bias, incomplete datasets and bad governance models should be avoided.
-Respect for (& Enhancement of) Human Autonomy: AI systems should serve only the purposes and tasks for which they were programmed.
-Respect for Privacy: privacy and data protection must be guaranteed at all stages of the life cycle of the AI system. This includes all data provided by the user, but also all information generated about the user over the course of his or her interactions with the AI system.
-Robustness: reliability, reproducibility, accuracy, resilience to attack and a fall-back plan are required.
-Safety: operational errors and unintended consequences should be minimised.
-Transparency: Information asymmetry should be reduced.
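The Data Governance and Non-Discrimination requirements above can be made concrete with a simple audit of training data, for instance by comparing how often a positive outcome appears across a sensitive attribute. The sketch below is purely illustrative (the dataset, function names and the 80% threshold are our own assumptions, not part of the Guidelines):

```python
# Illustrative dataset-balance check for a sensitive attribute.
# Toy data and helper names are hypothetical; real audits use
# domain-appropriate fairness metrics and representative data.
from collections import defaultdict

def positive_rates(records, group_key, label_key):
    """Share of positive labels per group in a training dataset."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[label_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group rate (the '80% rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical training records with a sensitive attribute "group".
training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

rates = positive_rates(training_data, "group", "label")
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # 0.25 / 0.75, well below 0.8: imbalance flagged
```

A ratio below 0.8 is a common heuristic signal that one group is under-represented among positive outcomes, which would prompt the kind of dataset review the Data Governance requirement calls for.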
AI HLEG notes that a detailed analysis of any AI system is required in order to detect the main points to be addressed and to provide both technical solutions (ethics and rule of law by design, architectures for trustworthy AI, testing and validation, traceability and auditability, etc.) and non-technical ones (regulation, codes of conduct, training, stakeholder dialogue, etc.), according to the particular context and needs, on a case-by-case basis.
Some of these measures are already reflected in the GDPR: Recital 71 and Article 22 set out the requirements that data controllers must meet when implementing automated decision-making algorithms.
If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.