Italian DPA fines Replika’s developer €5 million and launches a new probe into AI model training practices.
The Italian Data Protection Authority sanctioned Luka Inc. for unlawful data practices linked to its chatbot Replika and launched a new investigation into the system’s development and training, amid growing scrutiny of generative AI’s impact on vulnerable users.

Replika, developed by US-based Luka Inc., enables users to create virtual companions through a generative AI system that simulates roles such as romantic partner, friend, or mentor. The chatbot supports emotionally intimate, text- or voice-based interactions that raise significant data protection and safety risks, prompting concerns about the nature and sensitivity of the personal data being processed, particularly when the service is used by minors. Following its investigation, the Garante resolved to impose a fine on Luka Inc. for serious violations of the General Data Protection Regulation (GDPR).


The Italian authority confirmed serious GDPR violations, including the absence of a valid legal basis and an inadequate privacy policy.

In a decision following its February 2023 enforcement action, the Italian Data Protection Authority (Garante) confirmed that Luka Inc. lacked a valid legal basis for processing personal data through Replika as of 2 February 2023. The company’s privacy policy was also found to be deficient in several material respects, in breach of its obligations under the GDPR.


Replika’s lack of effective age verification echoes concerns raised in a recent wrongful death lawsuit involving another AI chatbot.

The Garante further found that Replika had no effective age verification mechanism in place at the time of the investigation and that the system currently in use remains insufficient. This echoes concerns raised in a high-profile lawsuit filed in October 2024 against another chatbot provider, Character.ai. In that case, a mother alleged that her 14-year-old son became addicted to the chatbot and ultimately took his own life after emotionally charged exchanges with the AI. The lawsuit accuses the company of designing and marketing an unsafe product to children without meaningful safeguards, including age verification. While Replika does not market its AI to minors, the lack of an effective age verification system makes it possible that the platform could be used by minors.


The Italian DPA has ordered Luka Inc. to bring Replika into full compliance with the GDPR.

In addition to the €5 million administrative fine, the Italian regulator ordered Luka Inc. to adopt corrective measures to ensure GDPR compliance. These include identifying a valid legal basis for processing, correcting privacy policy shortcomings, and implementing effective mechanisms to exclude minors from use, in line with the company’s stated policy.


A second investigation will assess how personal data is handled during model training and development.

The Garante has now opened a new investigation into Luka Inc.’s data processing practices during the development and training of the generative AI model behind Replika. The company has been asked to provide detailed information on risk assessments, the categories of personal data used in training, the implementation of anonymisation or pseudonymisation, and the technical and organisational measures deployed to protect personal data throughout the AI lifecycle.


This enforcement action reinforces growing global pressure on AI developers to protect vulnerable users.

The Italian DPA’s actions reflect a wider regulatory trend: the failure of AI companies to self-regulate in the face of increasingly intimate user interactions is no longer being tolerated. As the wrongful death lawsuit in the US illustrates, regulators and courts are now demanding greater transparency, accountability, and safety from AI developers, especially where minors and emotionally vulnerable users are concerned. Enforcement of existing data protection laws, alongside potential new legislative action, is becoming essential to limit the harms posed by AI systems.

Discover how Aphaia can help ensure compliance of your data protection and AI strategy. We offer full GDPR and UK GDPR outsourced DPO services, which can be complemented by our US State Privacy Bundle. We specialise in empowering organisations like yours with cutting-edge solutions designed to not only meet but exceed the demands of today’s data landscape. Contact Aphaia today.
