A year of Data Protection Law: 2024 review

Throughout this year, the EU and the UK have experienced several notable developments in data protection. In this article, we will highlight some of the key milestones of 2024.

 

The year began with an ever-relevant reminder for UK organisations transferring personal data to the US under the UK GDPR, stressing risk assessments, legal transfer mechanisms, and security measures to ensure compliance and data protection.

 

At the beginning of 2024, we discussed guidance issued by the ICO to assist UK organisations transferring personal data to the US under Article 46 of the UK GDPR when the receiving US organisations are not self-certified under the UK Extension to the EU-US Data Privacy Framework (DPF). The ICO stated that organisations must conduct transfer risk assessments and use mechanisms such as the International Data Transfer Agreement (IDTA), Standard Contractual Clauses (SCCs), or Binding Corporate Rules (BCRs) to ensure compliance and adequate data protection. This involves evaluating factors such as national security and surveillance laws, including FISA and the CLOUD Act, and implementing supplementary measures like encryption and stricter access controls. Since the Schrems II ruling invalidated the EU-US Privacy Shield, organisations have been urged to stay vigilant, monitor evolving regulations, and adopt robust security practices to maintain UK GDPR compliance while protecting individuals’ privacy rights.

 

In 2024, the UK government prioritized cost-cutting and public sector efficiency in its AI strategy, scaling back planned AI investment even as data protection authorities tightened their scrutiny of AI.

 

The UK government has been reassessing its AI strategy, focusing on improving public sector efficiency rather than investing heavily in AI technology, as part of a broader effort to cut costs in its autumn budget. This shift, which included the cancellation of a £1.3 billion AI investment and potential cuts to planned projects like a San Francisco office for the UK’s AI Safety Institute, drew criticism from industry leaders. In contrast, countries like France ramped up their AI investment. Meanwhile, data protection authorities in the UK focused on tightening regulations around AI, especially concerning generative AI tools such as Snap Inc.’s “My AI” chatbot. This year, following an investigation into Snap’s data protection practices, the UK’s ICO issued general advice for businesses integrating AI chatbots, emphasizing the need for thorough data protection assessments. The ICO’s scrutiny of AI tools is part of its broader consultation on how data protection laws should apply to AI models, aiming to ensure fairness and compliance. This all aligns with the UK’s broader goal to foster innovation while safeguarding privacy and data protection in the rapidly advancing AI landscape.

 

The EU AI Act introduced a regulatory framework for AI that will be enforced by Member States, with support from EU-level governance bodies to ensure consistency and public trust.

 

The European Union has been dedicated to advancing its regulation of artificial intelligence with the adoption of the EU AI Act. This legislation received the EU Council’s final approval on May 21, 2024, entered into force on August 1, 2024, and will become fully applicable over the following 24 to 36 months. The Act establishes a risk-based framework for AI systems, categorizing them as posing unacceptable, high, limited, or minimal risk, and bans unacceptable-risk applications such as certain biometric categorization practices and predictive policing. It mandates transparency, safety, and accountability measures for developers, including conformity assessments and CE markings for high-risk systems. Data protection authorities across Europe, such as France’s CNIL, the UK’s ICO, and Spain’s AEPD, provided ethical AI recommendations throughout 2024 aligned with the GDPR or UK GDPR, emphasizing transparency, fairness, and privacy, and demonstrating the balance being struck between safety and rapid technological advancement across Europe.

 

This Act introduced a comprehensive framework for regulating AI technologies, with Member States playing a pivotal role in enforcement. This included appointing national supervisory and competent authorities to oversee compliance, conduct market surveillance, and represent their countries on the EU AI Board. Penalties for non-compliance are significant, ranging from €7.5 million to €35 million or up to 7% of global turnover, depending on the infringement’s severity. Citizens can lodge complaints and seek compensation for damages caused by high-risk AI systems or defective products under related EU directives. Supporting bodies, such as the AI Board, AI Office, Advisory Forum, and a Scientific Panel of independent experts, will ensure consistent enforcement, provide technical expertise, and advance AI governance across the EU. These measures aim to foster trustworthy AI development while safeguarding public interests and rights.

 

In April, the EDPB clarified that “Pay or OK” subscription models must comply with GDPR consent requirements, including offering a genuine free alternative and not automatically enrolling users in paid subscriptions if they withdraw consent.

With the increased use of “Pay or OK” subscription models, the EDPB released an opinion addressing the legal concerns surrounding this model, which presents users with the choice of either consenting to data processing for targeted advertising or paying a fee for ad-free access. The EDPB emphasized that any such model must comply with all aspects of the GDPR, particularly regarding valid consent. Consent must be freely given, specific, informed, and unambiguous, and the user must have a genuine choice between alternatives. The EDPB’s opinion stated that businesses should offer an equivalent alternative free of charge, which could involve less intrusive data processing or no data processing at all. It also stressed that users should be able to choose specific data processing purposes rather than consenting to a bundle of activities. Moreover, the EDPB highlighted the importance of providing clear, comprehensive information to users about how their data will be used and ensuring that withdrawing consent does not automatically place the user into a paid subscription. Additionally, businesses must adhere to principles such as data minimization and purpose limitation, ensuring that only necessary data is processed. The EDPB’s opinion also touched on the protection of children, who require additional safeguards from behavioural advertising. This guidance sought to ensure that businesses offer fair, transparent, and GDPR-compliant alternatives in their data practices, balancing user rights with business models.

 

The European Data Act, which entered into force in January 2024, aims to establish a fair data economy within the EU by protecting businesses, promoting competition, and ensuring data privacy and security.

The European Data Act, which entered into force in January 2024, seeks to establish a framework for a fair and innovative data economy within the EU, with applicability starting on September 12, 2025. The legislation safeguards businesses—particularly smaller enterprises—against unfair contractual terms, promotes fair competition, and fosters economic growth through interoperability standards. It enhances flexibility by allowing seamless switching between cloud providers, reducing costs and encouraging innovation. The Act also emphasizes data privacy and security, introducing safeguards against unlawful data requests by third countries and ensuring a reliable data-processing environment. These measures support collaboration, productivity, and trust within the data-driven ecosystem while addressing the role of private-sector data during public emergencies.

 

More recently, the Council of the EU adopted a new Product Liability Directive, which now covers AI features and will apply to products placed on the market from December 2026 onward.

 

The Council of the EU recently adopted a new Product Liability Directive, modernizing the 1985 legislation to address digital technologies, AI, circular economy practices, and global trade complexities. Effective for products placed on the market from December 9, 2026, the directive ensures manufacturers are liable for defects, including those arising from updates or AI features, and extends accountability to product modifications and online platforms. Consumers benefit from reduced evidentiary burdens, expanded compensation for damages (physical, psychological, or digital), and access to evidence held by manufacturers. Liability now includes EU-based importers or distributors if manufacturers are outside the EU, ensuring victims can seek redress. The directive clarifies timeframes, holding businesses liable for up to 10 years—or 25 years for latent health-related damages—while fostering innovation, legal clarity, and sustainability.

 

Aphaia actively participated in several events within the global tech community throughout 2024, presenting on regulation and AI ethics across various European cities.

 

Aphaia has been actively engaging with the global tech community through various events, contributing valuable insights on the regulation and ethical implications of emerging technologies. In June 2023, Aphaia opened a new office at 42workspace in Rotterdam, where they delivered a presentation on the EU AI Act and GDPR fundamentals, focusing on data protection and AI ethics. Later, in October 2024, Aphaia’s Managing Partner, Cristina Contero Almagro, presented on the new Gigabit Infrastructure Act (GIA) at the 12th International Workshop on Fiber Optics in Access Networks (FOAN) in Athens, highlighting the regulatory shifts designed to accelerate the deployment of high-capacity networks across Europe. In November 2024, Cristina also participated in the “AI and Society: Challenges and Opportunities” event in Pamplona, discussing the future of AI regulation in Spain, the socioeconomic impact of AI, and the importance of integrating responsible governance into AI development. That same month, Aphaia also ran a GDPR clinic for businesses at the 42workspace office in Rotterdam. These engagements reflect Aphaia’s commitment to advancing the understanding and responsible implementation of regulatory frameworks for both AI and digital infrastructure.

Stay informed with our blog, where we provide updates on IT law, AI ethics, and practical guidance to help you achieve and maintain compliance. Explore the latest insights and stay ahead in an evolving regulatory landscape.

If you are looking to achieve compliance with regard to your AI systems, or the personal data protection practices in your business, Aphaia’s team of experts can provide tailored solutions for EU AI Act, GDPR, and UK GDPR compliance. We specialise in empowering organisations like yours with cutting-edge solutions designed to not only meet but exceed the demands of today’s data landscape. Contact Aphaia today.
