
General-purpose AI and Systemic risk in the EU AI Act


As Europe continues to embrace the potential of general-purpose AI, it is essential to take a calculated approach to the systemic risks the technology can pose.


General-purpose Artificial Intelligence (GPAI) models are models with a wide range of possible uses, both intended and unintended by their developers. They can perform a variety of tasks across different sectors, often with little modification or fine-tuning, and a single model may be integrated into a large number of AI systems. GPAI has long been a topic of fascination, as well as of concern, because of its capabilities. As the technology continues to advance rapidly, GPAI's potential to revolutionize industries and improve daily life is undeniable. However, the risks associated with GPAI must be carefully considered, and the requirements imposed by the new EU AI Act duly observed.


One of the primary concerns surrounding General-purpose AI is the potential for systemic risk.


Because some of these models are highly capable or widely used, they can pose significant risks. The new EU AI Act presumes that certain models have high-impact capabilities: GPAI models trained using a cumulative amount of computation greater than 10^25 floating-point operations (FLOPs, a measure of the total computation used to train a model) are presumed to involve systemic risks, with "actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain." This threshold captures the currently most advanced GPAI models, such as OpenAI's GPT-4 and Google DeepMind's Gemini, whose capabilities are not yet well understood. The AI Office, created by the Commission, has the authority to update this criterion in light of technological advancements and to add further criteria; the FLOP threshold itself may also be amended by means of a delegated act.
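To illustrate how the threshold works in practice, here is a minimal sketch in Python. The 10^25 FLOP figure comes from the AI Act itself; the 6 × parameters × tokens formula is a common heuristic for estimating transformer training compute and is an assumption of this example, not part of the Act. The model figures used are hypothetical.

```python
# EU AI Act presumption threshold for systemic-risk GPAI models:
# cumulative training compute greater than 10^25 FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 FLOPs per
    parameter per training token heuristic (an assumption, not the Act)."""
    return 6 * parameters * tokens


def presumed_systemic_risk(parameters: float, tokens: float) -> bool:
    """True if the estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens
# lands at roughly 6.3e24 FLOPs, below the presumption threshold.
print(presumed_systemic_risk(70e9, 15e12))   # False
# A much larger hypothetical run, 1 trillion parameters on 20 trillion tokens,
# would clearly exceed it.
print(presumed_systemic_risk(1e12, 20e12))   # True
```

Note that the presumption is rebuttable in either direction: the Commission can also designate a model as posing systemic risk based on other criteria, regardless of its compute.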


Providers of models deemed to involve systemic risks must satisfy specific obligations.


Providers of models deemed to involve systemic risks must assess and mitigate those risks, report serious incidents, conduct state-of-the-art model evaluations and adversarial testing, meet cybersecurity requirements, and report the energy consumption of their models. These providers are expected to work with the European AI Office, alongside other experts, to draw up Codes of Practice, which will serve as the primary instrument for detailing these rules. In addition, the EU AI Act obliges providers of such models to disclose certain information to downstream system providers, since transparency is necessary to enable a better understanding of these models. Model providers must also have policies in place to ensure that they respect copyright law when training their models. A provider that wants to build on a general-purpose AI model must ensure it has all the information necessary for its system to be safe and compliant with the EU AI Act.


By prioritizing safety, ethics, and collaboration, European countries can help to ensure that GPAI is developed in a responsible and sustainable manner that benefits society as a whole.


To mitigate the risks of general-purpose AI continuously and effectively, European policymakers must take a proactive approach to regulating GPAI development. Establishing ethical guidelines for GPAI research and deployment, together with oversight mechanisms to ensure compliance, will be crucial in safeguarding against potential threats. Collaborating with international partners on a unified approach to GPAI regulation will also be essential, given the global nature of the technology. In addition, investment in research and development of GPAI safety measures will be critical to ensuring that the technology is developed responsibly. By prioritizing safety and ethics in the design and implementation of GPAI systems, European countries can help to minimize the risks the technology may pose.

Discover how Aphaia can elevate your data protection and AI strategy to new heights. We specialize in empowering organizations like yours with cutting-edge solutions designed to not only meet but exceed the demands of today’s data landscape. Contact Aphaia today.
