New EU AI Regulation, revealed in a recently leaked document, includes several intended provisions specific to high-risk AI.
The European Commission has proposed a Regulation of the European Parliament and of the Council aimed at governing the use and sale of high-risk AI within the European Union. In the recently leaked document, the Commission states that “Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.” With that in mind, the European Commission has set out to further define and clarify what constitutes high-risk AI, and to lay down rules aimed at ensuring that AI safely and effectively serves the highest good of natural persons. “It is a step in the right direction, providing further ethical safeguards as our lives are becoming increasingly dominated by machine-made decisions”, comments Dr Bostjan Makarovic, Aphaia’s Managing Partner.
The document outlines harmonised rules concerning the placing on the market, putting into service and use of high-risk AI systems in the Union. It also includes harmonised transparency rules for AI systems intended to interact with natural persons and for AI systems used to generate or manipulate image, audio or video content. The regulatory framework laid out in the document is intended to apply without prejudice to existing Union law applicable to the AI systems falling within the scope of this Regulation.
The new AI Regulation will apply to providers and users of AI systems in the EU or in third countries to the extent that they affect persons in the EU.
This Regulation will apply to providers that place AI systems on the market or put them into service in the European Union, whether they are established in the EU or in a third country outside the Union. In addition, it will apply to users of AI systems in the EU, as well as to providers and users of AI systems established in third countries, to the extent that these systems affect persons within the EU.
Article 3 of the leaked document defines an AI system as “software that is developed with one or more of the approaches and techniques listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments.” Such a system can constitute a component of a product or a standalone product, the output of which may serve to partially or fully automate certain activities. Annex I lists several approaches and techniques that indicate artificial intelligence, including machine learning approaches (supervised, unsupervised and reinforcement learning), logic- and knowledge-based approaches, and statistical approaches.
The leaked document outlines several prohibitions intended to be established for the protection of the fundamental rights of natural persons.
The document goes on in Article 4 to outline the prohibited AI practices: a list of artificial intelligence practices that are prohibited as contravening the fundamental rights protected under EU law and Union values. Title II, Article 4 prohibits the use of AI systems that manipulate human behaviour, opinions or decisions through choice architectures or any other element of the user interface, causing persons to make decisions, form opinions or behave in a manner that is to their detriment. In addition, the Regulation prohibits the use of AI in any manner that exploits information or predictions about people in order to target their vulnerabilities, leading them to behave, form an opinion or make decisions to their detriment. The Regulation will also prohibit indiscriminate surveillance applied to all natural persons in a generalised manner without differentiation. Article 4(2) does, however, state that these prohibitions do not apply when such practices are authorised by law or are carried out by, or on behalf of, public authorities in order to safeguard public security, subject to appropriate safeguards for the rights and freedoms of third parties and compliance with EU law.
Cristina Contero Almagro, Partner at Aphaia, points out that “It should be noted that this new Regulation mentions ‘monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources’ as elements that the methods of surveillance could include, which means that these provisions might potentially reach any online platform which relies on automated data aggregation and analysis.
Considering that the Regulation takes a risk-based approach and that it interlinks with the GDPR in some areas, this only confirms the importance for businesses of ensuring that their systems and processes comply with the data protection framework. In particular, Data Protection Impact Assessments, on which the Conformity Assessment would be built, play a paramount role”.
The new AI Regulation specifically defines what constitutes high risk AI systems, in order to outline exactly which systems will be subject to this regulation.
With regard to high-risk AI systems, the document has a specific section (Annex II) dedicated to defining with precision, and complete with examples, what constitutes a “high-risk artificial intelligence system”. Anything that falls within that purview is subject to specific rules and regulations intended to ensure the best interests of natural persons. Compliance with these requirements will be necessary before such systems are placed on the market or put into service. The Regulation covers the use of data sets for training these systems, documentation and record keeping, transparency, robustness, accuracy, security, and human oversight. The leaked document also includes several obligations for the providers and users of these systems, as well as for authorised representatives, importers and distributors.
The Regulation sets forth intended measures to support innovation in AI and aid SMEs in ensuring compliance.
The document also sets forth intended measures in support of innovation in AI. These include AI regulatory sandboxing schemes, which national competent authorities in any of the Member States will be allowed to establish. In order to reduce the regulatory burden for small and medium-sized enterprises and startups, additional measures will be implemented, including priority access to these sandboxes, as well as to digital hubs and testing and experimentation facilities. These hubs are intended to provide AI providers with relevant training on the regulatory requirements, as well as technical and scientific support and testing facilities.
The new AI Regulation indicates the intention for the establishment of a European Artificial Intelligence Board.
The document indicates the intention to establish a European Artificial Intelligence Board, tasked with ensuring the consistent application of the Regulation by the Member States. This board will be expected to issue opinions or interpretative guidance documents clarifying the application of the Regulation, collect and share best practices among Member States, aid in the development of standards regarding AI, and continuously monitor developments in the market and their impact on fundamental rights. The European Artificial Intelligence Board will also be expected to ensure consistency and coordination in the functioning of the AI regulatory sandboxes previously mentioned. The board will issue opinions before the Commission adopts a delegated act and will coordinate, in carrying out its tasks, with the relevant bodies and structures established at EU level, including the EDPB.