AI regulatory sandbox: pilot program launched

The Government of Spain and the European Commission recently launched a pilot program for the AI regulatory sandbox. 

 

Last month, the Government of Spain and the European Commission presented a pilot of the first regulatory sandbox on Artificial Intelligence. The sandbox aims to bring together the competent authorities and the companies that develop AI in order to identify the best practices that will guide the implementation of the European Commission's forthcoming AI Regulation (the Artificial Intelligence Act), which is expected to be implemented within a timeline of two years. According to this report from the European Commission, the Spanish government's initiative seeks to operationalise the requirements of the future AI Regulation, as well as other elements such as conformity assessments and post-market activities.

 

The AI regulatory sandbox will create an opportunity for innovators and regulators to collaborate.

 

This sandbox connects innovators and regulators in a controlled environment in order to foster cooperation. The aim is to facilitate the development, testing and validation of innovative AI systems, with a view to ensuring compliance with the requirements of the AI Regulation. While preparations for the AI Act are under way, the initiative is expected to produce easy-to-follow, future-proof best practices and other guidelines. The results are expected to make it easier for companies, particularly SMEs and start-ups, to implement the rules.

 

The results of the pilot will determine guidelines for the implementation and use of AI throughout the European Union.

 

Through this pilot, the obligations of AI system providers, and how to implement them, will be documented and systematised in the form of good practices and lessons-learnt implementation guidelines. These will also include methods of control and follow-up to be used by the national supervisory authorities in charge of implementing the supervisory mechanisms the regulation establishes. While the project is financed by the Spanish government, it will remain open to other Member States and could eventually become a pan-European AI regulatory sandbox, strengthening cooperation among all relevant actors at the European level. Cooperation with other Member States at EU level will be pursued within the framework of the Expert Group on AI and Digitalisation of Businesses established by the European Commission.

Does your company have all of the mandated safeguards in place to ensure the safety of the personal data you collect or process? Aphaia can help. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Bank fined by Hungarian SA for unlawful data processing

Budapest Bank has been fined by the Hungarian SA for unlawful data processing, as the controller's use of AI systems lacked transparency.

 

Budapest Bank was recently fined by the Hungarian SA because, as data controller, it performed automated analyses of customer satisfaction using AI on customer service phone calls. This processing was not clearly disclosed to data subjects, which prompted an investigation last year into the controller's general data processing practices, and specifically into the automated analysis. The findings of the investigation resulted in a fine of approximately €650,000.

 

Customers' level of satisfaction was assessed from recorded calls using AI technology, without data subjects having been informed of this processing.

 

The data controller recorded all customer service calls, which were analysed on a daily basis. Using AI technology, certain keywords were identified in each recording in order to determine the emotional state of the customer. The result of this analysis was then stored, linked to the phone call, and retained in the software for 45 days. The purpose of the assessment was to compile a list of customers ranked by their likelihood of dissatisfaction, based on the audio recording of the customer service call. Designated employees were then expected to call these clients back in an effort to establish the reasons for their dissatisfaction. Data subjects received no communication regarding this processing, making it impossible for them to exercise their right to object.
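
The Hungarian SA's decision does not disclose how the bank's system was actually built, but purely for illustration, a keyword-based pipeline of the kind described above (daily scoring of call transcripts, a call-back list ranked by likely dissatisfaction, and a 45-day retention window) might look roughly like the following sketch. The keyword list, weights and data structures here are hypothetical assumptions, not Budapest Bank's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical keyword weights: the actual terms, scoring and AI model used
# in the case are not public; a higher weight signals likelier dissatisfaction.
DISSATISFACTION_KEYWORDS = {"complaint": 3, "unacceptable": 3, "cancel": 2, "waiting": 1}
RETENTION_PERIOD = timedelta(days=45)  # results were reportedly kept for 45 days


@dataclass
class CallAnalysis:
    call_id: str
    customer_id: str
    score: int            # higher score = more likely dissatisfied
    analysed_at: datetime


def score_transcript(transcript: str) -> int:
    """Sum the weights of the dissatisfaction keywords found in a call transcript."""
    words = set(transcript.lower().split())
    return sum(weight for keyword, weight in DISSATISFACTION_KEYWORDS.items() if keyword in words)


def daily_analysis(calls: list[dict]) -> list[CallAnalysis]:
    """Analyse the day's recorded calls and rank customers by likelihood of dissatisfaction."""
    now = datetime.now()
    results = [
        CallAnalysis(c["call_id"], c["customer_id"], score_transcript(c["transcript"]), now)
        for c in calls
    ]
    # The ranked list is what designated employees would use for follow-up calls.
    return sorted(results, key=lambda r: r.score, reverse=True)


def purge_expired(stored: list[CallAnalysis]) -> list[CallAnalysis]:
    """Drop analysis results older than the 45-day retention period."""
    cutoff = datetime.now() - RETENTION_PERIOD
    return [r for r in stored if r.analysed_at >= cutoff]
```

Whatever the technical design, the issues identified by the Hungarian SA, such as the lack of transparency, the absence of a valid legal basis and the impossibility of exercising the right to object, stem from the processing itself rather than from any particular implementation.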

 

Assessments showed that this processing posed a high risk to data subjects. 

 

Although the data controller carried out both a data protection impact assessment and a legitimate interest assessment, and both confirmed that the processing posed a high risk to data subjects' rights, no action was taken to mitigate those risks. The impact assessment acknowledged that the processing relied on AI and posed a high risk to the fundamental rights of data subjects, yet neither assessment provided any actual risk mitigation, and the measures that existed on paper were insufficient or not applied in practice. Artificial intelligence is difficult to deploy in a transparent and safe manner, and therefore additional safeguards are necessary; it is typically difficult to verify the results of personal data processing by AI, which can lead to biased outcomes.

 

The Hungarian SA ordered the controller to come into compliance and pay an administrative fine. 

 

The Hungarian SA found this to constitute a serious infringement of several articles of the GDPR, and also took into account the length of time over which the infringements persisted. The supervisory authority ordered the data controller to stop analysing the emotional state of clients, and to continue the data processing only if it can be brought into compliance with the GDPR. In addition to the order to come into compliance, the controller was issued an administrative fine of approximately €650,000.

Does your company have all of the mandated safeguards in place to ensure the safety of the personal data you collect or process? Aphaia can help. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

AI resources from CNIL published to support professionals

A collection of AI resources from CNIL was published in an effort to help professionals maintain compliance.

 

As the use of AI systems has developed over the years, new data protection challenges have emerged. As part of its mission to inform and to protect rights, the CNIL has published a set of content devoted to AI, aimed at professionals, specialists and the general public. These resources form part of a larger European strategy to encourage excellence in the field of artificial intelligence, which includes rules intended to guarantee the reliability of AI technologies. More specifically, the goal is to develop a solid regulatory framework for AI, based on human rights and fundamental values, that builds the trust of European citizens.

 

In addition to helping professionals maintain compliance, the AI resources from CNIL are aimed at specialists in the field, and should also prove helpful to the general public.

 

The resources are aimed at three main audiences: AI professionals (data controllers or processors), specialists (AI researchers, data science experts, machine learning engineers, etc.), and the public at large. They can be very helpful to members of the general public who are interested in how AI systems operate and in their implications for daily life, or who wish to test how they work. Specialists who handle artificial intelligence on a daily basis, and who are curious about the challenges it poses to data protection, will also find them useful. The resources are, however, mainly tailored to professionals who process personal data using AI systems, or who intend to do so, and who want to know how to ensure compliance with the GDPR.

 

The AI resources from CNIL include two extensive guides that empower professionals to take greater responsibility for keeping their use of AI systems compliant.

 

The CNIL provides two main resources for AI professionals: a detailed guide to GDPR compliance, and a self-assessment guide that organisations can use to evaluate their AI systems against the GDPR. The compliance guide is designed to be useful at every stage of an AI system's lifespan, from the learning phase through to production. It encourages continuous improvement and continuous assessment, to ensure that once a system is deployed it continues to meet the operational needs for which it was designed. The guide takes into account the known challenges presented by AI systems and aims to address them preemptively, and consistently throughout their use. The self-assessment guide is intended to be used in conjunction with the compliance guide, and helps AI professionals assess the maturity of their AI systems with regard to the GDPR. It aims to equip professionals with practical tools that promote transparency and user rights, help prevent breaches, and support ongoing compliance and best practices.

Do you use AI in your organisation and need help ensuring compliance with AI regulations?  Aphaia can help. Aphaia also provides AI Ethics Assessments and both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Clearview AI fined and ordered to remove data

Clearview AI has been fined by the Italian SA for various GDPR violations, and ordered to remove data and appoint an EU representative.

 

The company Clearview AI has been fined by yet another EU watchdog, according to this report from the EDPB. The Italian SA has also ordered the company to delete the data of Italians from its database, which the company has built up to approximately 10 billion faces from pictures scraped across the internet. The Italian SA, Garante, launched an investigation after reports of several issues regarding the facial recognition products offered by Clearview AI Inc. The investigation revealed several infringements. As a result, the Italian SA imposed a fine of EUR 20 million, banned any further collection and processing, ordered the erasure of the data, including biometric data, processed by Clearview's facial recognition system with regard to persons in the Italian territory, and ordered the company to designate a representative in the EU.

 

The investigation by the Italian SA uncovered several infringements by Clearview AI Inc.

 

The Italian SA's inquiries, prompted by complaints and alerts, found that Clearview AI allows the tracking of Italian nationals and of persons located in Italy, and uncovered several infringements by Clearview AI Inc. The personal data held by the company, including biometric and geolocation information, were processed unlawfully, without an appropriate legal basis. In addition, the company violated several principles of the GDPR, including transparency, purpose limitation and storage limitation. Clearview AI also failed to provide the information required by Articles 13 and 14 of the GDPR when collecting personal data from data subjects, and failed to designate a representative in the EU.

 

Clearview AI was fined €20 million and ordered to remove all Italian user data. 

 

The Italian SA imposed a fine of €20 million on the company. In addition, Garante banned any further collection, by web scraping techniques, of images and related metadata of persons in the Italian territory, as well as any further processing of the standard and biometric data handled by the company via its facial recognition system concerning persons in the Italian territory. The authority also ordered the erasure of all data, including biometric data, processed by the facial recognition system with regard to persons in the Italian territory, and required the company to designate a representative in the territory of the European Union.

Does your company have all of the mandated safeguards in place to ensure the safety of the personal data you collect or process? Aphaia can help. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.