Second European AI Alliance Assembly overview

The second European AI Alliance Assembly was held online on Friday 9th October due to the COVID-19 pandemic.

The second edition of the European AI Alliance Assembly took place last Friday, 9th October, as a full-day event hosted online due to the COVID-19 pandemic. The Assembly gathered more than 1,400 viewers, who followed the sessions live and were also given the option to submit questions to the panellists.

The event

This year’s edition had a particular focus on the European initiative to build an Ecosystem of Excellence and Trust in Artificial Intelligence. The sessions were broken into plenaries, parallel workshops and breakout sessions. 

As a member of the European AI Alliance, Aphaia was pleased to join the event and attend several of the sessions addressing topics that are crucial to the development and implementation of AI in Europe, such as “Requirements for trustworthy AI” and “AI and liability”.

Requirements for trustworthy AI

The speakers shared their views on the risks posed by AI systems and the approaches that should be taken to enable the widespread use of AI in society.

Hugues Bersini, Computer Science Professor at the Free University of Brussels, argued that there is a cost function whenever AI is used, and that optimizing it is the actual goal: “Whenever you can align social cost with individual cost there are no real issues”.

Haydn Belfield, Academic Project Manager at CSER, University of Cambridge, argued that the risks high-risk AI systems may pose to people’s life chances and fundamental rights demand a regulatory framework with mandatory requirements that are, at the same time, flexible, adaptable and practical.

For Francesca Rossi, IBM Fellow and the IBM AI Ethics Global Leader, transparency and explainability are key. She explained that AI should be used to support the decision-making capabilities of human beings, who have to make informed decisions. This purpose cannot be achieved if AI systems are a black box.
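
To make the black-box point concrete, here is a minimal, hypothetical sketch in Python (assuming scikit-learn and one of its public demo datasets; this is our own illustration, not something presented in the session) of how an interpretable model can show a reviewer which inputs drove a particular suggestion:

    # Hypothetical illustration: a linear model exposes per-feature
    # contributions to a single prediction, so the human making the
    # final decision can see why the system suggested it.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    data = load_breast_cancer()
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(data.data, data.target)

    # Contribution of each feature to one prediction = coefficient * value.
    x = model.named_steps["standardscaler"].transform(data.data[:1])[0]
    coefs = model.named_steps["logisticregression"].coef_[0]
    contributions = sorted(zip(data.feature_names, coefs * x),
                           key=lambda t: abs(t[1]), reverse=True)
    for name, value in contributions[:5]:
        print(f"{name}: {value:+.2f}")

A more powerful black-box model might score better on some tasks, but it would not expose this kind of rationale to the person who has to make the informed decision.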

In response to audience questions, the speakers discussed how many risk levels would be necessary for AI. The main conclusion was that, given that merely defining high risk is already a challenge, having two risk levels (high risk and not high risk) would be a good start on which further developments may be built in the future.

The speakers briefly talked about each of the requirements highlighted by the AI-HLEG for trustworthy AI, namely: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

In our view, the discussions on AI and biases and on human oversight were especially relevant:

AI and biases

Paul Lukowicz, Scientific Director and Head of the Research Unit “Embedded Intelligence” at the German Research Center for Artificial Intelligence, defined machine learning as giving the computer methods with which it can extract procedures and information from data, and stated that this is at the core of the current success of AI. The challenge is that many of the biases and much of the discrimination in AI systems come from feeding them data that itself contains bias and discrimination. It is not that developers somehow fail by providing data that is not representative: they actually use data that is representative, and because there are discrimination and bias in our everyday life, this is what the systems learn and emphasize. Linked to this issue, he considers uncertainty to be another pitfall, as there is no data set that covers the entire world: “We always have a level of uncertainty in life, so we have in AI systems”.
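
As a hypothetical illustration of this point (our own sketch in Python with NumPy and scikit-learn, not an example from the talk), a model trained on perfectly representative historical data will faithfully learn any discrimination that data contains:

    # Hypothetical illustration: the training data accurately reflects a
    # world in which group 1 was systematically disadvantaged, so the
    # model learns that disadvantage as if it were a legitimate signal.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    skill = rng.normal(size=n)            # legitimate feature
    group = rng.integers(0, 2, size=n)    # protected attribute

    # Historical outcomes: skill matters, but group 1 was also penalised.
    hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

    # The learned coefficient on `group` is strongly negative: the bias
    # in the (representative) data has become part of the model.
    print(dict(zip(["skill", "group"], model.coef_[0].round(2))))

Nothing in this pipeline is broken: the data is accurate, yet the outcome is discriminatory, which is exactly the trap Lukowicz described.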

Human oversight

Aimee Van Wynsberghe, Associate Professor in Ethics and Technology at TU Delft, identified several obstacles to human oversight:

  1. She challenged the approach whereby the output of an AI system is not valid until it has been reviewed and validated by a human. In her view, this can be quite difficult because there are biases that threaten human autonomy: automation bias, simulation bias and confirmation bias. Humans have a tendency to favor suggestions from automated decision-making systems and to ignore contradictory information produced without automation. The other challenge in this regard is that having an AI system create output that a human then reviews and validates is very time- and resource-consuming.
  2. As for the alternative whereby the outputs of the AI system become effective immediately but human review is ensured afterwards, Aimee pointed out the issue of allocating the responsibility for ensuring human intervention: “Who is going to ensure that human intervention happens? The company? Is it the customer, who would otherwise approach the company? Is it fair to assume that customers would have the time, the knowledge and the ability to do this?”
  3. Monitoring the AI system while in operation, with the ability to intervene in real time and deactivate it, would be difficult too because of human psychology: “there is a lack of situational awareness that does not allow for the ability to take over”.

AI and liability

Corinna Schulze, Director of EU Government Affairs at SAP; Marco Bona, PEOPIL’s General Board Member for Italy and International Personal Injury Expert; Bernhard Koch, Professor of Civil and Comparative Law and Member of the New Technologies Formation of the EU Expert Group on liability for new technologies; Jean-Sébastien Borghetti, Private Law Professor at Université Paris II Panthéon-Assas; and Dirk Staudenmaier, Head of the Contract Law Unit in the Department of Justice of the European Commission, discussed the most important shortcomings of AI liability and considered them in connection with the Product Liability Directive.

The experts pointed out the following issues with the Directive:

  • Time limit of 10 years: in the view of most of the speakers this may be an issue because it concerns producers only, which could be problematic whenever operators, users, owners and other stakeholders are involved. Furthermore, while 10 years is fine for traditional products, it may not work in terms of protecting victims in relation to some AI artefacts and systems.
  • Scope: the Directive concerns the protection of consumers but does not address the protection of victims. Consumers and victims sometimes overlap, but not always.
  • Notion of defect: this may cause some trouble given the distinction between products and services. The Directive covers only products, not services, which may raise some concerns in relation to the Internet of Things and software.

The Commission has made links to the sessions available for all those who did not manage to attend the event or who would like to watch one or more sessions again.

Do you need assistance with AI Ethics? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments.
