Aphaia delivers a presentation on the new EU AI Act and the GDPR at 42Workspace in Rotterdam

Aphaia has opened its new office at 42Workspace in Rotterdam and had the chance to deliver the presentation “EU AI Act with GDPR fundamentals” to the Rotterdam tech community on 5th June.


42Workspace is a tech coworking space in Rotterdam, home to a community of more than 40 startups and scale-ups providing digital services in areas such as analytics, data visualisation, web design, real estate, healthcare and payment processing. As part of its expansion into the Dutch market, and after years of experience helping tech businesses bring their practices, procedures and systems into compliance in the context of data protection, AI ethics and telecom regulation, Aphaia has now joined 42Workspace.

We could not think of a better way to introduce ourselves than speaking with the community about what we love most: the latest developments in the data and tech regulatory landscape. We therefore prepared a presentation on the new EU AI Act and the GDPR, which we had the chance to deliver on 5th June. In this post, we summarise the key takeaways from the event.


The Ethics Guidelines for Trustworthy AI

The Ethics Guidelines presented by the High-Level Expert Group on Artificial Intelligence in 2019 laid the foundations of the recently approved EU AI Act. Whereas the guidelines are not directly enforceable, they have largely inspired the requirements the EU AI Act imposes on certain AI systems. In addition, Recital 27 of the EU AI Act recognises the importance of the ethical principles defined in the guidelines for ensuring that AI models and systems are trustworthy and human-centric: “Without prejudice to the legally binding requirements of this Regulation and any other applicable Union law, those guidelines contribute to the design of a coherent, trustworthy and human-centric AI, in line with the Charter and with the values on which the Union is founded […] The application of those principles should be translated, when possible, in the design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct under this Regulation.” Accordingly, principles such as human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability should be core to the design and development of any AI system.


The EU AI Act

The journey of the EU AI Act, from the initial EU Commission proposal to the version finally adopted by the EU Parliament and the Council, has not been a smooth one, which is not surprising considering that it took place during the rise of generative AI into the mainstream.

Accordingly, the categories of AI that have been outright banned, such as those engaging in manipulative behaviour or exploiting the vulnerabilities of social groups, and the categories that have been made subject to thorough regulation as “high-risk” AI, such as those that control vehicles or other machinery, are now further complemented by special rules for generative AI systems, especially those that can be expected to have a high impact.

Whereas the EU AI Act clearly represents a global landmark in AI regulation, it is unlikely to be as future-proof as its proponents expect, since ongoing developments already challenge the categorisation of risks and remedies it uses.


GDPR and AI

Whereas the EU AI Act is specifically addressed to AI systems and models, and it is the first-ever comprehensive legal framework on AI worldwide, it is not the first or only law currently regulating AI in the EU. The GDPR applies to the design, use and implementation of AI whenever personal data is involved. There are two levels of risk in relation to the processing of personal data in the context of AI, and different GDPR implications are associated with each of them.

The first is the processing of personal data either to train an AI model or as input to an AI system. In this case the processing is fully subject to the GDPR, and a Data Protection Impact Assessment would very likely be required in order to identify any potential risks and the mitigation measures that may be necessary, given the nature of the technology used.

The second level is determined by automated decision-making, where the AI makes decisions based on automated processing or profiling that produce legal or similarly significant effects on the data subject. The GDPR restricts such processing to very specific scenarios and requires that multiple rights and safeguards be enabled and applied.


It was a great pleasure for the Aphaia team to engage in such an interesting discussion with the 42Workspace community, and we are grateful for all the questions raised and the amazing feedback received.

Elevate your data protection standards with Aphaia. Aphaia’s expertise spans privacy regulation as well as emerging AI regulation. Schedule a consultation today and embark on a journey toward stronger security, regulatory compliance, and the peace of mind that comes with knowing your data is in expert hands. Contact Aphaia today.
