High risk AI and the EU AI Act

What AI systems fall under the “high-risk” category, and what requirements must they comply with under the EU AI Act?

 

The European Union Artificial Intelligence (AI) Act, formally approved by the Council of Ministers in May 2024, is a landmark piece of legislation that will have a profound impact on the development and use of AI within the European Union. The Act classifies AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. AI systems that fall within the “high-risk” category are subject to stringent requirements designed to mitigate risks and safeguard individuals and their data from the potential harms associated with AI.

 

The EU AI Act defines high-risk AI systems based on purpose and modalities.

 

The EU AI Act defines “high-risk” and establishes a systematic methodology for identifying high-risk AI systems within the legal framework, with the aim of giving operators and businesses clarity. Classification turns on factors such as the intended purpose of the AI system, in line with existing EU product safety legislation, and depends on the function the system performs and on the specific purpose and modalities for which it is used. Systems that merely perform a narrow procedural task, improve the result of a previously completed human activity, or carry out purely preparatory work without influencing human decisions are not deemed high-risk. Certain AI systems, however, such as those involved in the profiling of natural persons, are always considered high-risk. Systems that do fall within the category are subject to a number of requirements under the Act, designed to ensure they are used in a safe and responsible manner.
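Purely as an illustration of how an organisation might encode this triage internally, the Python sketch below walks through the classification logic described above. The field names and the simplified decision flow are our own assumptions for the example, not text from the Act, and nothing here substitutes for a proper legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical summary of the facts relevant to classification."""
    annex_iii_use_case: bool         # listed high-risk area (e.g. employment, credit)
    safety_component: bool           # safety component of a product under EU product law
    performs_profiling: bool         # profiles natural persons
    narrow_procedural_task: bool     # e.g. converting unstructured data into structured
    improves_prior_human_work: bool  # only refines a previously completed human activity
    preparatory_task_only: bool      # purely preparatory, no influence on decisions

def is_high_risk(s: AISystemProfile) -> bool:
    """Simplified reading of the Act's classification logic; not legal advice."""
    if s.safety_component:
        return True
    if not s.annex_iii_use_case:
        return False
    # Profiling of natural persons is always high-risk, even where an
    # exemption below might otherwise apply.
    if s.performs_profiling:
        return True
    # Listed systems escape the high-risk label only in narrow, low-impact roles.
    exempt = (s.narrow_procedural_task
              or s.improves_prior_human_work
              or s.preparatory_task_only)
    return not exempt
```

A recruitment screening tool that profiles applicants, for instance, would come back as high-risk under this sketch regardless of the exemptions.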

 

The EU AI Act mandates transparency, accountability, robustness, accuracy, and human oversight for high-risk AI systems, aiming to minimise errors, biases, and potential harm.

 

The EU AI Act imposes several requirements on high-risk AI systems: transparency, accountability, robustness, accuracy, and human oversight. High-risk AI systems must be transparent about how they function, providing information on the data they use, the algorithms they employ, and the decisions they make. They must be accountable for those decisions, with mechanisms in place for people to challenge them and seek redress if they are harmed by an AI system. They must also be robust and accurate, undergoing testing and validation to minimise errors and biases. Finally, they must be subject to human oversight, ensuring that humans remain involved in their development, deployment, and use. Together, these requirements aim to ensure the responsible and ethical development and use of AI within the European Union.
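The Act does not mandate any particular tooling for transparency and accountability, but a minimal sketch can make the documentation duty concrete. The record schema, field names, and file format below are hypothetical choices for the example, not requirements taken from the Act.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry per automated decision (illustrative schema)."""
    timestamp: str
    model_version: str     # which model produced the decision
    input_reference: str   # pointer to the input data, not the data itself
    decision: str
    explanation: str       # rationale that can be shown to the affected person
    reviewer: str | None   # human overseer, if the decision was reviewed

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to a JSON Lines audit log for later challenge or redress."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="credit-scorer-v2.3",          # hypothetical model name
    input_reference="applications/2024/00123",   # hypothetical record pointer
    decision="declined",
    explanation="Debt-to-income ratio above the configured threshold.",
    reviewer="j.doe",
))
```

Keeping such a record per decision is one way to support both the transparency duty (what was decided and why) and the accountability duty (a trail that a complainant or regulator can follow).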

 

Bias prevention in high-risk AI requires technical soundness, security, representative datasets, bias management, traceability, auditability, documentation, monitoring, and risk assessments.

 

To avert bias, AI systems must be designed and used in a way that minimises prejudice and discriminatory outcomes. When properly developed and deployed, AI systems can in fact support more equitable, non-discriminatory decisions, for example in recruitment. To achieve this, all high-risk AI systems must adhere to mandatory requirements. To be fit for their intended purpose, they must be technically sound and secure, so that false positive or false negative results do not disproportionately affect protected groups defined by factors such as race, ethnicity, gender, or age. To minimise the risk of unfair biases becoming embedded in the model, high-risk systems must also be trained and tested on adequately representative datasets, and bias detection, correction, and other mitigating measures must be in place to address any biases that do emerge.

In addition, these systems must be traceable and auditable, with appropriate documentation kept, including the data used to train the algorithm, so that post-implementation investigations remain possible. Both before and after market launch, compliance processes should monitor these systems regularly and promptly address any emerging risks. The EU AI Act is a significant piece of legislation that is expected to shape the future of AI development and use, and an important step towards ensuring that AI is used in a safe and responsible manner. “One should also note that the European Commission is empowered to amend the criteria for high-risk AI, which should contribute to the Act’s long-run robustness,” comments Dr Bostjan Makarovic, Aphaia’s Managing Partner.
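As a concrete illustration of what bias detection can look like in practice, the sketch below compares false positive rates across protected groups, the kind of disparity the paragraphs above warn about. The function, the synthetic sample data, and the idea of flagging the largest gap are our own example choices; the Act prescribes the outcome, not this specific metric or code.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_positive, actually_positive).
    Returns the false positive rate per group so disparities can be flagged."""
    negatives = defaultdict(int)   # actual negatives seen per group
    false_pos = defaultdict(int)   # of those, how many were wrongly flagged
    for group, predicted, actual in records:
        if not actual:             # only actual negatives enter the FPR
            negatives[group] += 1
            if predicted:
                false_pos[group] += 1
    return {g: false_pos[g] / n for g, n in negatives.items()}

# Synthetic screening decisions tagged with a group label.
sample = [("A", True, False), ("A", False, False),
          ("B", True, False), ("B", True, False), ("B", False, False)]
rates = false_positive_rate_by_group(sample)
gap = max(rates.values()) - min(rates.values())
print(rates, f"max FPR gap: {gap:.2f}")  # a large gap warrants investigation
```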

Discover how Aphaia can help ensure compliance of your data protection and AI strategy. We specialise in empowering organisations like yours with cutting-edge solutions designed to not only meet but exceed the demands of today’s data landscape. Contact Aphaia today.
