EU White Paper on Artificial Intelligence Overview

The EU White Paper on Artificial Intelligence contains a set of proposals to develop a European approach to this technology.

As reported in our blog, the leaked EU White Paper obtained by Euractiv proposes several options for future AI regulation. In today’s post we go through them and highlight the most relevant ones.

The EU approach is focused on promoting the development of AI across Member States while ensuring that the relevant values and principles are properly observed throughout design and implementation. One of the main goals is to cooperate with China and the US, the most important players in AI, while always protecting the EU’s interests, including European standards and the creation of a level playing field.

Building on the existing policy framework, the EU White Paper points out three key pillars of the European strategy on AI:

  • Support for the EU’s technological and industrial capacity.

  • Readiness for the socioeconomic changes brought about by AI.

  • Existence of an appropriate ethical and legal framework.

How is AI defined?

The EU White Paper provides a definition of AI based on its nature and functions. Accordingly, AI is conceived as “software (integrated in hardware or self-standing) which provides for the following functions:

  • Simulation of human intelligence processes.

  • Performing certain specified complex tasks.

  • Involving the acquisition, processing and rational or reasoned analysis of data”.

Europe’s position on AI

What is the role of Europe in the development of AI? Despite the EU’s strict rules on privacy and data protection, Europe has several strengths that may help it gain leverage in the “AI race” against other markets like China or the US, namely:

  • Excellent research centres with many publications and scientific articles related to AI.

  • World-leading position in robotics and B2B markets.

  • Large amounts of public and industrial data.

  • EU funding programme.

On the negative side, there is a pressing need to significantly increase investment in AI and to maximise it through cooperation among Member States, Norway and Switzerland. Europe also has a weak position in consumer applications and online platforms, which results in a competitive disadvantage in data access.

The EU White Paper offers several proposals to reinforce the EU’s strengths in AI and address the areas that need to be boosted:

  • Establishing a world-leading AI computing and data infrastructure in Europe, using High Performance Computing centres as a basis.

  • Federating knowledge and achieving excellence by reinforcing the EU scientific community for AI and facilitating its collaboration and networking through strengthened coordination.

  • Supporting research and innovation to stay at the forefront, with the creation of a “Leaders Group” composed of C-level representatives of major stakeholders.

  • Fostering the uptake of AI through the Digital Innovation Hubs and the Digital Europe Programme.

  • Ensuring access to finance for AI innovators.

What are the prerequisites to achieve EU’s goals on AI?

Access to data

Ensuring access to data for EU businesses and the public sector is essential to develop AI. One of the key measures considered by the Commission for redressing the issue of data access is the development of common data spaces which combine the technical infrastructure for data sharing with governance mechanisms, organised by sector or problem area.

Regulatory framework

The above can be built on the EU’s comprehensive legal framework, which includes the GDPR, the Regulation on the Free Flow of Data and the Open Data Directive. The latter may indeed play a fundamental role: based on its latest revision, the Commission intends to adopt by early 2021 an implementing act on high-value public sector datasets, which will be made available for free and in a machine-readable format.

Although AI is already subject to an extensive body of EU legislation covering fundamental rights, consumer law and product safety and liability, it also poses new challenges stemming from data dependency and connectivity within new technological ecosystems. There is therefore a need to develop a regulatory framework that covers all the specific risks AI brings. To achieve this successfully, the EU White Paper highlights the relevance of complementing and building upon the existing EU and national frameworks to provide policy continuity and ensure legal certainty.

The main risks posed by the implementation of AI in society concern the following areas:

  • Fundamental rights, including bias and discrimination.

  • Privacy and data protection.

  • Safety and liability.

It is important to note that the aforementioned risks can result from flaws in the design of the AI system, from problems with the availability and quality of data, or from issues stemming from machine learning as such.

According to the EU White Paper, the Commission identified the following weaknesses of the current legislative framework in consultation with Member States, businesses and other stakeholders:

  • Limitations of scope as regards fundamental rights.

  • Limitations of scope with respect to products: EU product safety legislation requirements do not apply to services based on AI.

  • Uncertainty as regards the division of responsibilities between different economic operators in the supply chain.

  • Changing nature of products.

  • Emergence of new risks.

  • Difficulties linked to enforcement given the opacity of AI.

How should roles and responsibilities concerning AI be attributed?

The Commission considers that, given the number of actors involved in the life cycle of an AI system, the guiding principle for the attribution of roles and responsibilities in the future regulatory framework should be that responsibility lies with the actor(s) best placed to address the risk in question. Therefore, the future regulatory framework for AI is expected to set out obligations for both developers and users of AI, together with other groups such as suppliers of services. This approach would ensure that risks are managed comprehensively while not going beyond what is feasible for any given economic actor.

What legal requirements should be imposed on the agents involved?

According to the EU White Paper, the Commission seems keen on setting up legal requirements of a preventive, ex ante character rather than an ex post one, even though the latter are also referred to. The ex ante requirements might include:

  • Accountability, transparency and information requirements to disclose the design parameters of the AI system.

  • General design principles.

  • Requirements regarding the quality and diversity of datasets.

  • Obligation for developers to carry out an assessment of possible risks and steps to minimize them.

  • Requirements for human oversight.

  • Additional safety requirements.

Ex post requirements establish liability and possible remedies for harm or damage caused by a product or service relying on AI.

That said, which regulatory options is the Commission considering?

  1. Voluntary labeling.

This alternative would be based on a voluntary labeling framework for developers and users of AI. The requirements would become binding only once the developer or user has opted to use the label.

  2. Sectorial requirements for public administration and facial recognition.

This option would focus on the use of AI by public authorities. For this purpose, the Commission proposes the model set out by the Canadian directive on automated decision-making, in order to complement the provisions of the GDPR.

The Commission also suggests a time-limited ban (“e.g. 3-5 years”) on the use of facial recognition technology in public spaces, aiming at identifying and developing a sound methodology for assessing the impact of this technology and establishing possible risk management measures.

  3. Mandatory risk-based requirements for high-risk applications.

This option would foresee legally binding requirements for developers and users of AI, built on existing EU legislation. Given the need to ensure proportionality, it seems these new requirements might apply only to high-risk applications of AI. This brings to light the need for clear criteria to differentiate between “low-risk” and “high-risk” systems. The Commission provides the following definition of “high-risk”: “applications of AI which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity”, and points out the need to consider this definition together with the particular sector in which the AI system would be deployed.

  4. Safety and liability.

Targeted amendments of the EU safety and liability legislation could be considered to address the specific risks of AI.

  5. Governance.

An effective system of enforcement is deemed an essential component of the future regulatory framework, which would require a strong system of public oversight.

Does your company use AI systems? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides GDPR adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.
