Clearview fined by the ICO for unlawful data collection and processing

Clearview AI Inc was fined over £7.5 million, and ordered to delete photos and data of UK residents from its database. 

The ICO has fined Clearview AI Inc £7,552,800 for using images of people, including UK residents, scraped from the web and social media profiles to build a global online database geared towards facial recognition. The enforcement notice issued by the ICO orders the company to stop collecting and using the personal data of UK residents, and to delete the data of any UK residents from its systems.

Clearview provides customers with a service that allows them to find information on an individual through its database, using facial recognition software.

Clearview AI Inc has accumulated well over 20 billion images of faces and associated data of individuals all over the world, drawn from publicly available sources on the internet and social media platforms, and has used this data to create an online database. This database is intended to refine facial recognition software and practices. Internet users were not informed about the collection and use of their images. The service provided by the company allows its customers, including the police, to upload an image of a person to the company’s app, which then compares the image to all the images in the database in order to find a match. This process typically results in a list of images with characteristics similar to the photo provided by the customer, along with links to the websites from which those images were taken.

Clearview’s database likely includes a substantial amount of data from UK residents, which the UK Commissioner deems “unacceptable”.

Considering the volume of UK internet and social media users, it is quite likely that the company’s database includes a substantial amount of data from UK residents, collected without their knowledge. While Clearview has ceased offering its services to UK organisations, the company still has customers in other countries and continues to use the personal data of UK residents, making their data available to those international clients. In a statement from the ICO, John Edwards, UK Information Commissioner, said: “Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images. The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”

The ICO found that the company breached UK data protection laws, resulting in the fine and enforcement notice.

Through its investigation, the ICO found that Clearview AI used the information of people in the UK in a way that is neither fair nor transparent, given that individuals were not made aware, nor would they reasonably expect, that their personal data was being used in this way. The company also has no process in place to delete data after a set period, meaning the data it has collected could be used indefinitely. Clearview also failed to have a lawful basis for collecting this data. The data collected by the company falls into the class of special category data, which is subject to higher data protection standards under the UK GDPR, and Clearview AI failed to meet those standards. To make matters worse, when approached by members of the public seeking to exercise their right to erasure, the company required them to send additional personal information in order to have the request fulfilled, which may have acted as a deterrent. These infractions led the ICO to fine Clearview a total of over £7.5 million. The company was also ordered to delete any data concerning UK residents from its database.

Does your company have all of the mandated safeguards in place to ensure the safety of the personal data you collect or process? Aphaia can help. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

AI resources from CNIL published to support professionals

A collection of AI resources from CNIL was published in an effort to aid professionals in maintaining compliance.

With the developments in the use of AI systems over the years, new challenges in terms of data protection have emerged. As part of its missions of informing the public and protecting rights, the CNIL has published a set of content devoted to AI, to aid professionals, specialists and the general public in this regard. These resources form part of a larger European strategy aimed at encouraging excellence in the field of artificial intelligence, including rules intended to guarantee the reliability of AI technologies. More particularly, the aim is to develop a solid regulatory framework for AI based on human rights and fundamental values, building the trust of European citizens.

In addition to helping professionals maintain compliance, the AI resources from CNIL are also aimed at specialists in the field, and would prove helpful to the general public as well. 

The resources are aimed at three main audiences: AI professionals (data controllers or processors), specialists (AI researchers, data science experts, machine learning engineers, etc.), and the public at large. They can be very helpful to members of the general public who are interested in the operation of AI systems and their implications for our daily lives, or who wish to test their operation. Specialists who handle artificial intelligence on a daily basis, and who are curious about the challenges it poses to data protection, will also find these resources very helpful. The resources are, however, mainly tailored to professionals who process personal data using AI systems, or who wish to do so, and who want to know how to ensure their compliance with the GDPR.

The AI resources from CNIL include two extensive guides that empower professionals to take greater responsibility for the compliance of their AI systems.

The CNIL provides two main resources for AI professionals: a detailed guide to GDPR compliance, and a self-assessment guide for organisations to assess their AI systems against the GDPR. The GDPR compliance guide should prove helpful at every stage of the lifespan of AI systems, from the learning stages through to production. It encourages continuous improvement as well as continuous assessment, to ensure that once a system is deployed, it meets the operational needs for which it was designed. The guide takes into account the known challenges presented by AI systems and aims to deal with them preemptively, and on a consistent basis throughout their use. The self-assessment guide is to be used in conjunction with the GDPR compliance guide, and helps AI professionals assess the maturity of their AI systems with regard to the GDPR. It aims to empower these professionals with instructional tools which help promote transparency and user rights, prevent breaches, and maintain compliance and best practices.

Do you use AI in your organisation and need help ensuring compliance with AI regulations?  Aphaia can help. Aphaia also provides AI Ethics Assessments and both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Clearview AI fined and ordered to remove data

Clearview AI has been fined by the Italian SA for various GDPR violations, and ordered to remove data and appoint an EU representative.

The company Clearview AI has been fined by yet another EU watchdog, according to this report from the EDPB. The Italian SA has also ordered the company to delete the data of Italians from its database. The company has built its database of approximately 10 billion faces from pictures scraped across the internet. The Italian SA, the Garante, launched an investigation after a report on several issues regarding the facial recognition products offered by Clearview AI Inc. The investigation revealed several issues. As a result, the Italian SA imposed a fine of EUR 20 million, banned any further collection and processing, ordered the erasure of the data, including biometric data, processed by Clearview’s facial recognition system with regard to persons in the Italian territory, and ordered the company to designate a representative in the EU.

The investigation by the Italian SA uncovered several infringements by Clearview AI Inc.

The Italian SA’s inquiries were spurred by complaints and alerts, and found that Clearview AI allows the tracking of Italian nationals and persons located in Italy. The inquiries and assessment by the Italian SA found several infringements by Clearview AI Inc. The personal data held by the company, including biometric and geolocation information, were processed unlawfully, without an appropriate legal basis. In addition, the company violated several principles of the GDPR, including transparency, purpose limitation, and storage limitation. Clearview AI neglected to provide the information required by Articles 13-14 of the GDPR when personal data is collected from data subjects. Additionally, the company failed to designate a representative in the EU.

Clearview AI was fined €20 million and ordered to remove all Italian user data. 

The Italian SA imposed a fine of €20 million on the company. In addition, the Garante imposed a ban on any further collection, by web scraping techniques, of images and the relevant metadata of persons in the Italian territory. A ban was also imposed on further processing of the standard and biometric data handled by the company via its facial recognition system concerning persons in the Italian territory. The Authority also ordered the erasure of all data, including biometric data, processed by its facial recognition system with regard to persons in the Italian territory. The company is also required to designate a representative in the territory of the European Union, as ordered by the Garante.

Does your company have all of the mandated safeguards in place to ensure the safety of the personal data you collect or process? Aphaia can help. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

AI-powered age verification systems being tested at UK supermarkets for alcohol purchases

AI-powered age verification systems are now being tested at a few UK supermarkets to verify the ages of people attempting to buy alcohol at self-checkout.

UK supermarkets have begun testing an automated age verification system at self-checkouts when buying alcohol. According to this BBC report, the system uses cameras that can estimate a customer’s age. It is being installed in some supermarkets and will require any customer estimated to be under 25 to present ID to a staff member in order to check out. The camera guesses a customer’s age using algorithms trained on a database of anonymous faces. Customers must consent to the use of the system.

Producers of this system maintain that it is not facial recognition and that images are not retained.

The system is being produced with the intention of speeding up the self-checkout process by eliminating the need to wait for a member of staff to verify the ages of customers using self-checkout. The producer, a company called Yoti, has stressed that the system is not facial recognition and that images taken by the cameras are not saved. This is intended to protect customers’ identities. Unlike facial recognition, the system does not match a face to a specific individual in a database; rather, it compares the face to the anonymous faces in the database in order to estimate the customer’s age. According to Robin Tombs, chief executive of Yoti, “Our age-verification solutions are helping retailers like Asda meet the requirements of regulators worldwide and keep pace with consumer demands for fast and convenient services, while preserving people’s privacy.”

The producers of this AI-powered age verification system claim that scanned images are never stored, nor are they ever shown to anyone.

The system has been tested on over 125,000 faces between the ages of six and 16. On average, the system was able to guess the age of participants to within 1.5-2.2 years among 16-20 year olds. While there have been several concerns about privacy with regard to the use of facial recognition in public spaces, the producers of this age verification system maintain that it is not the same as facial recognition, that customers’ personal data is not processed, and that images are not saved. The company’s website states that the facial age estimation system can be embedded into an app, website or POS terminal. The AI-powered technology can estimate an individual’s age within seconds, without the need to provide identification and without storing the image. It goes on to note that the image will never be seen by anyone.

The system might not be free from potential compliance issues, though. According to Dr Bostjan Makarovic, Aphaia’s Managing Partner, the system still processes one’s facial features and perceived age, and then bases an automated decision about one’s right to buy a certain item upon such profiling. The implementation of one’s right to obtain human intervention will therefore be crucial.

“These systems may also raise discrimination concerns if the data used to train them was not high-quality, so an AI ethics assessment may also be required to ensure full compliance,” points out Cristina Contero Almagro, Partner at Aphaia.

Does your company have all of the mandated safeguards in place to ensure the safety of the personal data you collect or process? Aphaia can help. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.