The European Parliament has passed the EU Artificial Intelligence Act (EU AI Act) to safeguard the safety and fundamental rights of EU citizens.
On Wednesday 13 March 2024, the Parliament formally adopted the Artificial Intelligence Act, legislation designed to safeguard public safety and fundamental rights while fostering innovation. The Act was finalised in negotiations with member states in December 2023 and received broad support from MEPs, with 523 votes in favour, 46 against and 49 abstentions. The legislation aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from the potential harms of high-risk AI technologies, while catalysing innovation and positioning Europe as a global pioneer in artificial intelligence.
The EU AI Act enforces strict regulations on AI technologies considered harmful to citizens’ rights, banning several specific applications.
The EU AI Act introduces stringent rules for AI applications deemed a threat to citizens’ rights, and bans certain technologies outright based on the level of risk they pose to individuals’ rights and freedoms. The banned applications include biometric categorisation systems that infer sensitive characteristics, untargeted scraping of facial images to create facial recognition databases, emotion recognition in workplaces and schools, social scoring, and predictive policing based solely on profiling. AI that manipulates human behaviour or exploits people’s vulnerabilities is also strictly prohibited.
Real-time biometric identification by law enforcement is allowed only in specific cases with strict oversight and judicial approval.
While the use of real-time biometric identification systems by law enforcement is generally prohibited under the EU AI Act, there are exceptions for specific, narrowly defined situations subject to stringent safeguards. Such systems may be used only in time-sensitive and geographically restricted scenarios, and prior judicial or administrative authorisation is required, for instance where law enforcement needs to locate a missing person or prevent a terrorist attack. Post-event use of these systems to investigate crimes likewise requires judicial approval.
The law requires high-risk AI developers to minimise risks, maintain transparency, and ensure oversight.
The EU AI Act specifies clear duties for developers and deployers of high-risk AI systems, recognising their significant potential to impact health, safety, fundamental rights, and the environment. High-risk applications span various sectors, including critical infrastructure, education, law enforcement, and democratic processes. Under the Act, those responsible for such systems must mitigate risks, maintain transparency, and ensure human oversight, and citizens are granted the right to challenge and seek explanations for AI-driven decisions affecting them.
The legislation mandates transparency, risk evaluation, and disclosure for AI systems.
General-purpose AI systems must adhere to specific transparency guidelines, including compliance with EU copyright standards and the provision of detailed training data summaries. To mitigate systemic risks, advanced AI models will undergo thorough evaluations and incident reporting. Moreover, the legislation mandates clear disclosure of artificial or manipulated content, such as “deepfakes.” To nurture innovation, the Act encourages the implementation of regulatory sandboxes and real-world testing environments, particularly benefiting SMEs and startups, to facilitate the development of groundbreaking AI technologies.
The AI Act awaits final approval and will come into force 20 days post-publication, with phased implementation over 24 to 36 months.
The regulation awaits a final review by lawyer-linguists and is expected to be formally adopted before the end of the current legislative term via the corrigendum procedure; formal endorsement by the Council is also required. The law will enter into force twenty days after its publication in the Official Journal and become fully applicable 24 months later, with a few exceptions: bans on prohibited practices apply six months after entry into force, codes of practice nine months after, general-purpose AI rules (including governance) 12 months after, and obligations for high-risk systems 36 months after.
At Aphaia, we commit to being the partner guiding you through a comprehensive journey of strengthening your data defences, ensuring compliance, and providing peace of mind in an ever-evolving digital landscape. Take that first step today, and let’s improve your practices and achieve trustworthy AI. Contact Aphaia today to find out more.