Proposal for an EU AI Regulation

The proposal for an EU AI Regulation is based primarily on the perceived level of risk associated with each AI system.

In April 2021, the EU became the first international organization to propose a legal framework for regulating Artificial Intelligence (AI) systems. The proposal for a regulation on AI lays down mechanisms for identifying AI-related risks and addressing them. The proposed regulation takes a risk-based approach to governing new AI technologies, marking a shift from the EU’s usual soft-law approach to AI and robotics. It focuses on laying down ground rules for the use of AI, prohibiting abusive practices, and imposing obligations on providers and operators of “high-risk” AI applications.

AI systems are assessed and categorised into four groups based on their perceived level of risk.

The regulation distinguishes four risk categories: unacceptable risk (completely banned), high risk (allowed subject to strict requirements), limited risk, and minimal risk (subject to codes of conduct and largely outside the scope of the Regulation). While the other categories are fairly clear-cut, high-risk AI systems occupy more of a grey area and therefore require strict monitoring, detailed risk assessments and adequate human oversight. High-risk AI systems include those used in critical infrastructure such as transport, as well as in law enforcement, the administration of justice and border control management, among other areas.

The proposal for an EU AI Regulation introduces rules and responsibilities for the providers of certain AI systems, which continue to apply even after the system has been sold.

The proposal goes beyond defining the risks that require regulation: it also establishes civil liability for the misuse of AI systems. The regulation places responsibility on the provider for placing a high-risk AI system on the market. The provider must ensure that the relevant data governance and management practices are observed and that the rules are complied with in letter and spirit. Even after a high-risk AI system has been sold, the provider must establish a proportionate post-market monitoring system to ensure that the regulations continue to be complied with and that corrective measures can be, and are, taken when needed.

The regulation also requires the provider to enable users to oversee high-risk AI systems in order to prevent potential risks. Breaches of the rules set out in the regulation may lead to fines of up to 6% of a company’s annual global turnover. Although there is still some time before the regulation becomes enforceable and legally binding, 24 months after the adoption of the proposal, businesses and providers that would fall within its scope are advised to start reviewing their AI systems and the practices linked to them now.

While the EU AI Regulation would address the importance of regulating AI according to EU standards, UNESCO is also currently working on a global Recommendation on the Ethics of Artificial Intelligence, which would provide guidance on AI both at a European level and globally.

We recently created an informative vlog exploring recent developments in AI regulations. You can learn more about AI ethics and regulation on our YouTube channel.

Do you use AI in your organisation and need help ensuring compliance with AI regulations? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.
