Following an investigation into the technology company Snap Inc, the ICO has published data protection advice on the use of AI chatbots.
Lately, it has become increasingly common for businesses and organisations to offer an AI chatbot to website visitors and app users. Whether it is a social media chatbot or a help chatbot on a sales page, businesses have been incorporating this capability into their apps and websites. Data protection concerns have been raised since the inception of this technology. Following a recent investigation into the technology company Snap, the UK's ICO has decided to share advice for businesses and organisations that use chatbots on how to remain compliant and avoid undue data protection risks.
The ICO concluded its investigation into Snap Inc.’s “My AI” chatbot and is now satisfied with Snap’s compliance after the company took significant steps to address the identified risks.
The ICO concluded its investigation into Snap Inc.’s launch of the “My AI” chatbot. The investigation was initiated in June 2023 due to concerns that Snap had not adequately assessed the data protection risks associated with the chatbot, particularly for the several million ‘My AI’ users in the UK, including children aged 13 to 17. “My AI” was initially launched for Snapchat+ subscribers on February 27, 2023, and later became available to all Snapchat users on April 19, 2023. As a result of the investigation, the ICO issued a Preliminary Enforcement Notice to Snap on October 6, 2023. However, Snap has since taken significant steps to address the risks identified after conducting a thorough data protection impact assessment, and the ICO is now satisfied with Snap Inc’s compliance with data protection law regarding “My AI.”
The ICO investigation led Snap Inc to improve its data protection practices for “My AI”, setting a precedent for the industry to prioritise data protection in AI development.
The ICO’s investigation into Snap’s “My AI” chatbot resulted in Snap taking significant steps to address data protection risks, including a thorough risk assessment. The ICO is now satisfied with Snap’s compliance and will continue to monitor “My AI”. Stephen Almond, ICO Executive Director of Regulatory Risk, emphasised that this case should serve as a warning to the industry, highlighting the importance of prioritising data protection and rigorous risk assessment from the outset when developing or using generative AI. The ICO is committed to protecting the public and will use its enforcement powers, including fines, to ensure compliance.
The ICO has launched a consultation series on how data protection law should apply to the development and use of generative AI models.
While a final decision is still a few weeks away following the investigation into Snap, the ICO has launched a consultation series on how data protection law should apply to the development and use of generative AI models. The consultation series will cover a range of topics, including the lawful basis for training generative AI models, how purpose limitation should be applied at different stages in the generative AI lifecycle, the accuracy of training data and model outputs, and engineering individual rights into generative AI models. The ICO has invited all stakeholders with an interest in generative AI to respond and help inform its positions in the ongoing series.
The ICO updated its Guidance on AI and Data Protection to clarify fairness requirements and help organisations adopt new technologies responsibly.
Since the Preliminary Enforcement Notice was issued to Snap Inc, the ICO has updated its Guidance on AI and Data Protection to address the privacy risks posed by generative AI chatbots like ‘My AI’, as well as to satisfy requests from UK industry for clarification on the requirements for fairness in AI. The UK GDPR requires that, in addition to implementing data protection principles effectively in AI design, organisations conduct data protection impact assessments to determine whether the processing carried out by the AI systems in question will lead to fair outcomes.
The update also delivered on a key ICO25 commitment to help organisations adopt new technologies while protecting people and vulnerable groups. The update supports the UK government’s vision of a pro-innovation approach to AI regulation and, more specifically, its intention to embed considerations of fairness into AI. The ICO intends to continue to ensure that its AI guidance is user-friendly, reduces the burden of compliance for organisations and reflects upcoming changes in relation to AI regulation and data protection.
Elevate your data protection standards with Aphaia. Aphaia’s expertise spans privacy regulation and emerging AI regulation. Schedule a consultation today and embark on a journey toward strengthened security, regulatory compliance, and the peace of mind that comes with knowing your data is in expert hands. Contact Aphaia today.