The new EU AI Regulation: What is it and to whom does it apply?

After final approval by the Council of the EU on May 21, 2024, the EU AI Act is set to enter into force. What does this Regulation entail, and to whom does it apply?

The EU AI Act is a legal framework that regulates the development, deployment, and use of Artificial Intelligence (AI) systems within the European Union. It seeks to address the risks and challenges posed by AI while leaving room for innovation and responsible development, and it takes a risk-based approach to regulating AI systems.

It is important to note that the EU AI Act is a regulation, which means that it applies directly in all EU Member States without the need for national implementing legislation. The Act also provides for the establishment of a European Artificial Intelligence Board, responsible for advising the European Commission on AI policy and for coordinating the implementation of the Act. The Board will be composed of representatives of the Member States, supported by an advisory forum drawing on industry, academia, and civil society.

The publication of the EU AI Act in the Official Journal is expected in mid-July 2024, and the Act will enter into force 20 days after that date. It will become fully applicable 24 months after entry into force, with the following exceptions:

- bans on prohibited practices, which will apply six months after entry into force;
- codes of practice, which will apply nine months after entry into force;
- rules on general-purpose AI, including governance, which will apply 12 months after entry into force; and
- obligations for high-risk systems, which will apply 36 months after entry into force.

The risk-based approach of the EU AI Act classifies AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.

Unacceptable-risk AI systems are those that pose a serious threat to fundamental rights and freedoms, such as systems used for emotion recognition in the workplace and in educational institutions, or for social scoring. These systems will be banned under the Act.

High-risk AI systems are those that pose a significant risk to health and safety, such as AI systems used in medical devices or autonomous vehicles. These systems will be subject to strict requirements, including conformity assessment and CE marking. Conformity assessment involves testing and certification, in some cases by a notified body, to ensure that the AI system meets the safety and performance requirements set out in the Act; CE marking indicates that the AI system has been assessed and found to meet those requirements.

Limited-risk AI systems are those that pose a lower level of risk, such as AI systems used in customer service chatbots. These systems will be subject to less stringent requirements, chiefly transparency and accountability measures, such as informing users that they are interacting with an AI system. Minimal-risk AI systems are those that pose little or no risk, such as AI systems used in games or entertainment. These systems can be developed and used under existing legislation, and providers may voluntarily choose to apply the requirements for trustworthy AI and adhere to codes of conduct.

The EU AI Act applies to any entity, public or private, that introduces an AI system to the EU market, regardless of location, with a few exceptions.

The legal framework will apply to both public and private entities, regardless of their location, provided that the AI system is placed on the EU market or its use affects individuals within the EU. It covers both providers, such as developers of AI-powered tools, and deployers of high-risk AI systems, such as financial institutions utilising AI-based screening tools. Additionally, importers of AI systems must ensure that the foreign provider has completed the necessary conformity assessment procedure, that the system bears the CE marking, and that it is accompanied by the required documentation and instructions for use. Specific obligations are also laid down for providers of general-purpose AI models, particularly large-scale generative AI models.

Providers of free and open-source AI models are exempt from most regulatory obligations. However, this exemption does not extend to providers of general-purpose AI models that pose systemic risks, that is, those with high-impact capabilities. Nor do the obligations apply to research, development, and prototyping activities that precede the release of AI models on the market. Finally, the Regulation does not apply to AI systems developed exclusively for military, defence, or national security purposes, irrespective of the entity responsible for such activities.

Elevate your data protection standards with Aphaia. Aphaia's expertise spans privacy regulation as well as emerging AI regulation. Schedule a consultation today and embark on a journey toward stronger security, regulatory compliance, and the peace of mind that comes with knowing your data is in expert hands. Contact Aphaia today.
