CNPD ordered Statistics Portugal to suspend all data transfers within 12 hours

CNPD ordered Statistics Portugal to suspend all data transfers to a US-based processor within 12 hours earlier this week.

The Portuguese DPA, the Comissão Nacional de Proteção de Dados (CNPD), ordered Statistics Portugal (INE) to suspend all data transfers relating to its census within 12 hours, due to an inadequate level of protection for international data transfers, IAPP reported. After receiving complaints about the conditions for the collection of data via the internet, the Authority carried out a swift investigation. This probe revealed that INE used Cloudflare Inc., a California-based web infrastructure and website security company, to handle census survey operations. Due to the nature of the services provided by Cloudflare, the company is directly subject to US surveillance legislation for the purposes of national security.

While the international transfers were based on SCCs, it was concluded that the data was still not adequately protected.

Even in cases where data transfers are based on Standard Contractual Clauses, data protection authorities are obliged to suspend or prohibit data transfers where there are no guarantees that these clauses can or will be complied with in the recipient country. US surveillance legislation imposes on certain companies a legal obligation to give US authorities unrestricted access to the personal data in their possession, without being able to inform their clients of it. With Cloudflare Inc. being subject to this legislation and in possession of large amounts of personal data from Portuguese citizens, this posed a serious risk.

CNPD ordered INE to cease data transfers within 12 hours due to the sensitive nature of the information collected.

The data collection process for the census exercise being carried out by INE began on April 19th and was due to be completed by May 3rd. However, following the complaints received by the CNPD about a week into the process, INE was ordered to cease data transfers within 12 hours. The main reason for the immediate order was, in addition to the sheer volume of data being collected and processed, the sensitive nature of the data itself, which included religious and health information about the individuals in this large data pool.

Of late, similar issues have been dealt with by various data protection authorities across the EU.

In recent times we have seen similar action taken by other EU DPAs, for example in Spain and Germany, concerning data transfers made on the basis of Standard Contractual Clauses. Transfers to the US, or to any other third country that has not been recognized as providing an adequate level of data protection, present an issue when no additional measures are applied. The risk is particularly acute when dealing with especially sensitive data, as was the case in this instance. When making international data transfers on the basis of Standard Contractual Clauses, it is extremely important that the data is subject to a level of protection equivalent to that provided under EU law.

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy, GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Record AEPD fine

Record AEPD fine imposed on Vodafone

A record AEPD fine has been imposed on Vodafone for violations of the GDPR as well as Spanish national regulations.

 

Vodafone Spain has recently been hit with four fines totalling a record €8.15 million for violations of the GDPR and Spanish national laws. The company was found guilty of unlawful telemarketing and other data security violations. Over the last two years, some 200 million calls were made, resulting in 191 complaints about the company’s practices regarding consent and data processing.

 

Customers who had opted out of receiving communications were contacted by, or on behalf of, the company.

 

Several citizens who had opposed data processing for advertising received calls and text messages, resulting in 191 complaints. As a result, the company’s headquarters were inspected in September 2019. It was found that the phone company had not been continuously monitoring its data processor, and lacked the technical and organizational structure to ensure that it was avoiding contacting citizens who had opted out of receiving communications for advertising purposes, or had opted for erasure of their data entirely. The phone company was therefore found to have violated Article 28 of the EU GDPR by neglecting to continuously monitor the data processor in this case.

 

The company was also found to have exported data without sufficient safeguards in place for international data transfers. 

 

The phone company’s infractions also included a violation of Article 44 of the GDPR, involving a transfer of data to a third country. It was found that data processors in the Republic of Peru had also engaged in advertising activity on behalf of Vodafone. This processor was not being continuously monitored, and the AEPD’s findings revealed that the company did not even have sufficient structures and safeguards in place to conduct this monitoring. 

 

This record AEPD fine included two fines under Spanish national laws in addition to the fines for EU GDPR violations.

 

This total fine, which was imposed last month, consisted of two fines for violations of the EU GDPR and two fines for violations of Spanish national laws. The company was fined a collective €6 million for violating Articles 28 and 44 of the EU GDPR. In addition, the AEPD, based on its national competencies, imposed a further €2 million fine for the company’s violation of Spanish telecommunications and digital rights laws, and a smaller fine of €150,000 under a technical Spanish law governing the use of cookies. This total fine is a new record high for the AEPD, surpassing the €6 million fine imposed on Caixabank earlier this year.

 


Standard Contractual Clauses

Standard Contractual Clauses may not be enough, as suggested by a recent decision by BayLDA

BayLDA, the Bavarian DPA, has recently ordered a German company to cease using Mailchimp, despite the use of Standard Contractual Clauses.

 

In the aftermath of the Schrems II ruling, we have seen some examples of the practical implications of this judgment. In the most recent case, the Bavarian DPA has ordered a German publishing company to cease using Mailchimp, the popular US email marketing platform. While the transfer of data to Mailchimp, and by extension to the US, a third country, was based on Standard Contractual Clauses, it was still unlawful. It was found that the company had not carried out due diligence to ensure that this data was adequately protected from access requests by US surveillance authorities.

 

While the data transfers by the German company were based on Standard Contractual Clauses, BayLDA suggested that additional due diligence needed to be done. 

 

A complaint was filed against the German publishing company with the Bavarian DPA, BayLDA, regarding the company’s occasional use of Mailchimp for its newsletter. The data transfers to Mailchimp by the German publishing company were based on Standard Contractual Clauses. However, under US surveillance law FISA 702, Mailchimp qualifies as an “electronic communication service provider”, leaving the transferred email addresses at risk of being accessed by US intelligence services. BayLDA suggested that additional steps needed to be taken by the publishing company, as far as due diligence is concerned, to determine whether any supplementary measures needed to be put in place to ensure that data transferred to Mailchimp was protected from US surveillance.

 

Based on the decision by BayLDA, the company has ceased using Mailchimp with immediate effect, avoiding possible fines.

 

The respondent in this case had argued that its use of Mailchimp was lawful according to GDPR Article 44. Recital 102 states, in part, that “Member States may conclude international agreements which involve the transfer of personal data to third countries or international organisations, as far as such agreements do not affect this Regulation or any other provisions of Union law and include an appropriate level of protection for the fundamental rights of the data subjects.” In this case, it was ultimately found that the German company was not able to adequately protect the fundamental rights of the data subjects affected, as it had not ensured that this data was sufficiently protected from access by US surveillance. The German publishing company immediately ceased using Mailchimp for its newsletters, avoiding a possible fine from BayLDA.

 

This decision by BayLDA provides further clarity on the practical application of Schrems II.

 

This decision by the Bavarian DPA provides further clarity to companies that may be transferring data based on Standard Contractual Clauses: at times this may not be enough. Necessary due diligence must still be carried out on transfers of data outside the EU or UK. Due to third-country surveillance laws, which may not be compatible with EU or UK laws, supplementary measures may need to be put in place to adequately protect the data being transferred to service providers in those third countries.

 

Do you make international data transfers to third countries? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, transfer impact assessments and Data Protection Officer outsourcing.  Contact us today.

The new EU AI Regulation

The new EU AI Regulation: leaked document reveals intentions to establish rules for high-risk AI

The new EU AI Regulation, revealed in a recently leaked document, includes several intended rules specific to high-risk AI.

 

The European Commission has proposed a Regulation of the European Parliament and of the Council aimed at governing the use and sale of high-risk AI within the European Union. In a recently leaked document, the Commission stated that “Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.” With that in mind, the European Commission has set out to further define and clarify what constitutes high-risk AI, laying down rules aimed at ensuring that AI safely and effectively serves the highest good of natural persons. “It is a step in the right direction, providing further ethical safeguards as our lives are becoming increasingly dominated by machine-made decisions”, comments Dr Bostjan Makarovic, Aphaia’s Managing Partner.

 

The document outlines harmonised rules concerning the placing on the market, putting into service and use of high-risk AI systems in the Union. It also includes harmonised transparency rules for AI systems intended to interact with natural persons and AI systems used to generate or manipulate image, audio or video content. The regulatory framework laid out in this document is intended to function without prejudice to the provisions of existing Union regulations applicable to AI, falling within the scope of this regulation. 

 

The new AI Regulation will apply to providers and users of AI systems in the EU or in third countries to the extent that they affect persons in the EU.

 

This Regulation will apply to providers placing AI systems on the market or putting them into service in the European Union, whether those providers are established in the EU or in a third country outside the Union. In addition, the Regulation will apply to users of AI systems in the EU, as well as to providers and users of AI systems established in third countries to the extent that these systems affect persons within the EU.

 

Article 3 of the leaked document defines an AI system as “software that is developed with one or more of the approaches and techniques listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments.” This can constitute a component of a product or a standalone product, the output of which may serve to partially or fully automate certain activities. Annex I outlines several approaches and techniques which indicate artificial intelligence, including machine learning approaches (such as supervised, unsupervised and reinforcement learning), logic- and knowledge-based approaches, and statistical approaches.

 

The leaked document outlines several prohibitions intended to be established for the protection of the fundamental rights of natural persons. 

 

This document goes on in Article 4 to outline the prohibited AI practices: a list of artificial intelligence practices which are prohibited as contravening the fundamental rights protected under EU law, and Union values. Title II Article 4 prohibits the use of AI systems that manipulate human behavior, opinions or decisions through choice architectures or any other element of the user interface, causing persons to make decisions, hold opinions or behave in a manner that is to their detriment. In addition, the Regulation prohibits the use of AI in any manner that exploits information or predictions about people in order to target their vulnerabilities, leading them to behave, form an opinion or make decisions to their detriment. The Regulation will also prohibit indiscriminate surveillance applied to all natural persons in a generalised manner without differentiation. Article 4(2) does, however, state that these prohibitions do not apply when the practices are authorised by law, or are carried out by, or on behalf of, public authorities in order to safeguard public security, subject to appropriate safeguards for the rights and freedoms of third parties and compliance with EU law.

 

Cristina Contero Almagro, Partner in Aphaia, points out that “It should be noted that this new Regulation mentions ‘monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources’ as elements that the methods of surveillance could include, which means that these provisions might potentially reach any online platform which relies on automated data aggregation and analysis.

 

Considering that the Regulation takes a risk-based approach and that it interlinks with the GDPR in some areas, this only confirms the importance for businesses to ensure that their systems and processes comply with the data protection framework. In particular, Data Protection Impact Assessments, over which the Conformity Assessment would be built, play a paramount role”.

 

The new AI Regulation specifically defines what constitutes high risk AI systems, in order to outline exactly which systems will be subject to this regulation. 

 

With regard to high-risk AI systems, the document has a specific section (Annex II) dedicated to defining precisely, complete with examples, what constitutes a “high-risk artificial intelligence system”. Anything that falls within that purview is subject to specific rules designed to protect the best interests of persons, and compliance will be required before these systems are placed on the market or put into service. The Regulation covers the use of data sets for training these systems, documentation and record keeping, transparency, robustness, accuracy, security, and human oversight. The leaked document includes several obligations for the providers and users of these systems, as well as for authorised representatives, importers and distributors.

 

The Regulation sets forth intended measures to support innovation in AI and aid SMEs in ensuring compliance. 

 

This document also sets forth intended measures in support of innovation in AI. These include AI regulatory sandboxing schemes, which national competent authorities in any of the Member States may establish. In order to reduce the regulatory burden for small and medium enterprises and startups, additional measures will be implemented, including priority access to these sandboxes, as well as to digital hubs and testing experimentation facilities. These hubs are intended to provide AI providers with relevant training on the regulatory requirements, as well as technical and scientific support and testing facilities.

 

The new AI Regulation indicates the intention for the establishment of a European Artificial Intelligence Board. 

 

The document indicated the intention to establish a European Artificial Intelligence Board, tasked with ensuring the consistent application of the Regulation across Member States. This body will be expected to issue opinions or interpretive guidance documents clarifying the application of the Regulation, collect and share best practices among Member States, aid in the development of standards regarding AI, and continuously monitor developments in the market and their impact on fundamental rights. The European Artificial Intelligence Board will also be expected to ensure consistency and coordination in the functioning of the AI regulatory sandboxes previously mentioned. The Board will issue opinions before the Commission adopts a delegated act and will coordinate, in carrying out its tasks, with the relevant bodies and structures established at EU level, including the EDPB.

 

Do you use AI in your organisation and need help ensuring compliance with AI regulations? We can help you. Aphaia provides EU AI Ethics Assessments, Data Protection Officer outsourcing and ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments. We can help your company get on track towards full compliance.