First European AI Alliance Assembly overview

The first EU AI Alliance Assembly was held at The Egg, Brussels, on Wednesday the 26th.

One year after the creation of the AI HLEG and the European AI Alliance, the first Assembly took place this week in Brussels. It was a full day event that brought together stakeholders, including citizens, businesses, public bodies and policymakers.

The event was divided into two parts. The first comprised two panel discussions in which the AI HLEG presented two important milestones: the Policy and Investment Recommendations on AI and the launch of the piloting process of the AI Ethics Guidelines.

The second part focused on the next steps of the EU AI Strategy. It featured a panel discussion on the international governance of AI, followed by four workshops run in parallel: competitiveness; research and skills; regulation and legal framework; and societal and environmental impact. All the speakers were part of the AI HLEG, the OECD or private businesses committed to the development of AI. These topics are the ones identified as the biggest challenges for the implementation of AI in Europe. Aphaia attended the legal workshop; as AI ethics frontrunners, it is crucial for us to stay up to date with the EU's latest steps on AI regulation, in order to align our EU AI Ethics Assessments and other AI products with the Commission's opinion. Whenever the AI HLEG moves forward with new recommendations or policies, so does Aphaia.

We have put together some of the main topics that were addressed by the speakers:

Policy and Investment Recommendations on AI

This is the second deliverable of the AI HLEG and follows the publication of the Ethics Guidelines for Trustworthy AI in April 2019. The purpose of this document is to set out how Trustworthy AI can actually be developed, deployed, fostered and scaled in Europe, making Trustworthy AI a practical goal that businesses, both SMEs and large companies, can achieve. In this regard, Juraj Podruzek raised the need to make AI ethics more user-friendly.

Pekka Ala-Pietilä highlighted the key takeaways from the report:

  • Empower and protect humans and society.
  • Take up a tailored approach to the AI landscape.
  • Secure a Single European Market for Trustworthy AI.
  • Enable AI ecosystems through Sectoral Multi-Stakeholder Alliances.
  • Foster the European data economy.
  • Exploit the multi-faceted role of the public sector.
  • Strengthen and unite Europe’s research capabilities.
  • Nurture education to the Fourth Power.
  • Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework.
  • Stimulate an open and lucrative investment environment.
  • Embrace a holistic way of working, combining a 10-year vision with a rolling action plan.

Thiébaut Weber stated that one of the main challenges is reconciling the opportunities of the technology with the reality of some users. “We know that AI can bring a lot of added value to some sectors, but we also have maybe 40% of European citizens who suffer from digital illiteracy. No one should be left behind; AI is for everybody, for the benefit of all of us”.

We are preparing a separate article with an overview of the Recommendations, which will be published on Aphaia’s blog in both English and Spanish. The original document can be accessed here.

Launch of the piloting process of the AI Ethics Guidelines

The piloting process aims at gathering feedback from AI Alliance members on the assessment list that operationalises the key requirements. All stakeholders are invited to send their comments on how it can be improved, by means of a thorough but to-the-point questionnaire that will be open until December 2019. In-depth interviews with a number of representative organisations will also be used as input for the piloting process.

Once the participants have made their contributions, the intended result is a revised version of the assessment list and a set of AI guidelines tailored to different sectors.

Andrea Renda pointed out that proactive regulation, rather than self-regulation, should be promoted.

International governance

Raja Chatila underlined the difficulty of implementing trustworthy AI given the scientific and ethical issues that still remain open, such as the concept of transparency, which is not just explainability. We need a long-term vision that includes different steps and multiple stakeholders, rather than immediately demanding that every machine learning system be explainable tomorrow. “If we set this as a goal, people will work on it. Proportionality, flexibility and regulation steer innovation and create a climate of trust”.

Moez Chakchouk, from UNESCO, stood up for thinking globally. “There is a huge difference between developed and developing countries in terms of awareness. Education is the key; we should raise awareness through education. Robust, safe, fair and trustworthy AI has first to respect human and democratic values”.

Dirk Pilat, from the OECD, stressed the importance of all countries pooling the work already done in the AI field and reflecting on it. “We are all trying to figure out the same issue and we need to share our practices and build a consensus”.

Ethical and legal issues

This was an interactive workshop in which the speakers set out their views and attendees then intervened with questions. The core topic was how AI regulation should be framed. Should it be a completely new regulation? Should it be self-regulation? Should it be open to the industry?

Ursula Pachl asserted that we should look at the current legal framework in the EU. “It is time for everybody to get back to the core and evaluate the existing framework, the existing legislation: is it fit for purpose? What are the regulatory gaps that we have to address? It is technology that serves people, not the other way around. We should use technology according to our laws, values and ethical principles”.

Ramak Molavi, for her part, said that we should transform compliance with the law into a source of competitiveness. “Why does regulation have such a bad reputation? Do we need specific AI regulation? We have so many laws already in place that apply to AI (competition, neutrality, GDPR, etc.). We have AI rules, but how do we implement them? We need to test them before they go out, which only works with ethics by design. We cannot have industry working on its own and leaving government outside. Self-regulation is not for those that want to move fast; it is something that would apply to mature companies. We should slow down, look at what we can do to develop sustainable AI, and find an alternative to self-regulation. We need a new approach to regulating innovation. Everyone wants regulation to be perfect from the beginning, but we need prior knowledge.”

The event finished with a review of the topics addressed in the four parallel workshops and a final reflection on the importance of bringing AI regulation closer to the reality of businesses and society.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.
