Tech giants push for lighter AI regulations in Europe amid concerns over fines and transparency.
In a final push, the world’s largest technology companies are urging the European Union (EU) to take a more lenient stance on regulating artificial intelligence (AI). Firms such as Amazon, Google, and Meta face the looming possibility of massive fines under the new EU AI Act, and they are determined to shape the final details of the law, which aims to provide a comprehensive framework for AI governance within the EU. The EU agreed the AI Act in May 2024 to regulate the rapidly evolving technology, but uncertainty remains over how the law will be applied, especially to “general purpose” AI (GPAI) systems such as OpenAI’s ChatGPT. According to a recent Reuters report, this lack of clarity leaves open the possibility of significant legal challenges, including copyright infringement lawsuits and fines running into the billions of dollars.
The AI Act’s implementation depends on finalising its relevant codes of practice, which outline how companies should comply with the law.
Before the law can be fully enforced, the EU must finalise the codes of practice that will spell out how companies are expected to comply. The EU has invited industry players, academics, and other stakeholders to contribute to drafting these guidelines, and the invitation drew nearly 1,000 applications, an unusually high number that reflects the stakes for the tech industry. Although the code of practice will not carry legal weight when it takes effect in late 2025, it is expected to act as a compliance checklist: failing to follow it while claiming compliance with the broader AI Act could expose companies to legal risk. Boniface de Champris, a senior policy manager at the trade organisation CCIA Europe, which represents major tech firms including Amazon, Google, and Meta, emphasised the importance of getting the code right. “If it’s too narrow or too specific, that will become very difficult,” he said, warning that an overly rigid framework could stifle innovation.
The debate over the use of copyrighted data in AI training continues, with the EU AI Act mandating detailed summaries of training data.
One of the most contentious issues is the use of copyrighted data to train AI models. Companies like OpenAI and Stability AI have faced criticism for allegedly using copyrighted materials, such as books and photo archives, without permission. Under the EU AI Act, tech firms will be required to provide detailed summaries of the data used to train their models, which has sparked debate about transparency and intellectual property rights. While some businesses argue that these summaries should contain only minimal details to protect trade secrets, others insist that creators have the right to know whether their work has been used without consent. OpenAI, which has itself been criticised for a lack of transparency, is among the firms applying to help shape the code of practice, signalling the high level of interest among leading AI developers. Maximilian Gahntz, AI policy lead at the Mozilla Foundation, expressed concern over the reluctance of some companies to be fully transparent. “The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box,” he said, referring to the often opaque nature of AI systems.
Tech startups seek manageable obligations under the AI Act that allow for continued growth and development of AI technology.
As EU officials work to draft the final code, a delicate balance must be struck between fostering innovation and ensuring proper oversight. Some industry voices have criticised the EU for focusing too much on regulation at the expense of technological advancement. Former European Central Bank chief Mario Draghi recently called for more coordinated industrial policy and faster decision-making to help the EU keep pace with global competitors like China and the United States. Homegrown European tech startups are also lobbying for special considerations under the AI Act. Maxime Ricard, policy manager at Allied for Startups, a network of organisations representing smaller tech firms, stressed the need for obligations that are manageable for startups. “We’ve insisted these obligations need to be manageable and, if possible, adapted to startups,” he said.
Non-profits and tech companies alike are helping to draft the AI Act’s code of practice, raising concerns that larger corporations could weaken its transparency requirements.
Non-profit organisations, including Access Now and the Future of Life Institute, have joined tech companies in applying to participate in the drafting process. While some fear that large corporations will try to dilute transparency requirements, Maximilian Gahntz of Mozilla emphasised the importance of maintaining strong transparency mandates to ensure accountability in AI development. Once the code is published in early 2025, tech companies will have until August 2025 to comply. Because the EU is the global leader in AI regulation, its approach could have far-reaching implications, not just for Europe but for how AI is governed worldwide. The outcome of these deliberations will shape the future of artificial intelligence for years to come.
Discover how Aphaia can help ensure the compliance of your data protection and AI strategy. We offer early compliance solutions for the EU AI Act, as well as full GDPR and UK GDPR compliance, and we specialise in empowering organisations like yours with cutting-edge solutions designed not only to meet but to exceed the demands of today’s data landscape. Contact Aphaia today.