AI Ethics in 2 minutes

On our YouTube channel this week, we are discussing the basics of AI Ethics, in line with the EU approach. As a member of the EU AI Alliance, Aphaia is involved in the discussion of all aspects of AI development and its impacts, and we regularly share our thoughts and feedback with the AI HLEG.

The aim of the EU is to promote trustworthy AI. What does this mean?

Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

1) it should be lawful, complying with all applicable laws and regulations. This comprises EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the GDPR, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws. Lawfulness concerns not only what cannot be done, but also what should be done and what may be done.

2) it should be ethical, ensuring adherence to ethical principles and values.

3) it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts.

In addition to these components, trustworthy AI should adhere to the following ethical principles:

Respect for human autonomy. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.

Prevention of harm. AI systems should be technically robust, and it should be ensured that they are not open to malicious use. Vulnerable persons should receive greater attention and be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information.

Fairness. The development, deployment and use of AI systems should ensure an equal and just distribution of both benefits and costs, and keep individuals and groups free from unfair bias, discrimination and stigmatisation.

Explicability. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected.

What should AI developers then take into account to create trustworthy AI?

All of the above should be considered and implemented when designing the AI, together with other key requirements such as human agency and oversight, privacy and data governance, societal and environmental wellbeing, and accountability.

Do you think this would be enough? Share your thoughts with us!

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

