The use of AI chatbots may lead to data breaches

The Dutch DPA urges businesses to be vigilant as the use of AI chatbots by employees may lead to data breaches. 

The Dutch Data Protection Authority (AP) has recently received multiple notifications of data breaches caused by employees sharing the personal data of patients or customers with an artificial intelligence (AI) chatbot. By entering personal data into these chatbots, users may give the companies that provide them unauthorised access to that information. The AP observes that many employees use programs such as ChatGPT and Copilot to answer customer enquiries or summarise large files. Although this can save time and spare workers tedious tasks, it also involves significant risks.

Employees breached company policy by inserting client data into an AI chatbot, exposing sensitive information and highlighting the need for explicit policies.

In one of the data breaches reported to the AP, an employee of a general practice inserted patients' medical data into an AI chatbot, against company policy. Medical data is classified as a special category of personal data and is considered extremely sensitive; merely passing such data on to a tech company is a serious breach of the affected individuals' privacy. In another instance, a telecom company reported to the AP that one of its employees had fed an AI chatbot a file containing, among other things, customer addresses. It is critical that businesses make explicit agreements with their staff about the use of AI chatbots. If companies permit such use, they must also make clear to employees which types of data they may and may not enter into chatbots. Companies may also be able to arrange with the chatbot provider that the data entered is not stored.

Organisations should ideally avoid the use of chatbots, as developers often store data without the user’s knowledge. 

Employees sometimes use chatbots on their own initiative, against their employer's policies. If personal data is entered in such cases, a data breach has automatically occurred, since any unauthorised or unintentional access to personal data constitutes a data breach. The use of AI chatbots may sometimes be against the law and yet not constitute a data breach: that is the case when the use is incorporated into, and covered by, an organisation's policy. Organisations should ideally avoid the use of chatbots altogether, as most chatbot developers store all data entered. Often the person entering the data is unaware that it ends up on those tech companies' servers, and because the organisation does not know exactly how the provider will use the data, it cannot properly inform the data subjects. If something does go wrong and an employee shares personal data through a chatbot in breach of the agreed terms, reporting the incident to the AP and to the affected individuals is essential.

Discover how Aphaia can help ensure the compliance of your data protection and AI strategy. We offer early compliance solutions for the EU AI Act, as well as full GDPR and UK GDPR compliance services. We specialise in empowering organisations like yours with cutting-edge solutions designed not only to meet, but to exceed, the demands of today's data landscape. Contact Aphaia today.
