Record AEPD fine imposed on Vodafone

Record AEPD fine imposed on Vodafone for violations of the GDPR as well as Spanish national regulations. 


Vodafone Spain has recently been hit with four fines totalling a record €8.15 million for violations of the GDPR and Spanish national laws. The company was found guilty of unlawful telemarketing and other data security violations. Over the last two years, some 200 million calls were made, resulting in 191 complaints about the company’s practices regarding consent and data processing. 


Customers who had opted out of receiving communications were contacted by, or on behalf of, the company. 


Several citizens who had objected to data processing for advertising received calls and text messages, resulting in 191 complaints. As a result, the company’s headquarters were inspected in September 2019. It was found that the phone company had not been continuously monitoring its data processor, and lacked the technical and organisational structure to ensure that it avoided contacting citizens who had opted out of receiving communications for advertising purposes, or had opted for erasure of their data entirely. The phone company was therefore found to have violated Article 28 of the EU GDPR by neglecting to continuously monitor the data processor in this case. 
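The kind of suppression check that such monitoring implies can be sketched in a few lines. This is a hypothetical illustration only; the function and sample data are our own and do not reflect Vodafone’s or the AEPD’s actual systems.

```python
# Hypothetical sketch of a suppression check a controller could require of its
# telemarketing processor: numbers on the opt-out or erasure lists must never
# be contacted for advertising. All names and numbers are illustrative.

def filter_contactable(campaign_targets, opted_out, erased):
    """Return only the numbers that may still be contacted for advertising."""
    blocked = set(opted_out) | set(erased)
    return [number for number in campaign_targets if number not in blocked]

targets = ["+34600000001", "+34600000002", "+34600000003"]
opted_out = ["+34600000002"]   # objected to advertising (GDPR Art. 21)
erased = ["+34600000003"]      # requested erasure (GDPR Art. 17)

print(filter_contactable(targets, opted_out, erased))  # prints ['+34600000001']
```

The point of the ruling is that having such a filter is not enough: the controller must also continuously verify that its processor actually applies it.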


The company was also found to have exported data without sufficient safeguards in place for international data transfers. 


The phone company’s infractions also included a violation of Article 44 of the GDPR, involving a transfer of data to a third country. It was found that data processors in the Republic of Peru had also engaged in advertising activity on behalf of Vodafone. These processors were not being continuously monitored, and the AEPD’s findings revealed that the company did not even have sufficient structures and safeguards in place to conduct this monitoring. 


This record AEPD fine included two fines for violations of Spanish national laws in addition to the fines for EU GDPR violations. 


This total fine, which was imposed last month, consisted of two fines for violations of the EU GDPR and two fines for violations of Spanish national laws. The company was fined a combined €6 million for the violations of Articles 28 and 44 of the EU GDPR. In addition, the AEPD, based on its national competencies, imposed a further €2 million fine for the company’s violation of Spanish telecommunications and digital rights laws, and a smaller fine of €150,000 under the Spanish law governing the use of cookies. This total is a new record high for the AEPD, surpassing the €6 million fine imposed on Caixabank earlier this year. 


Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy, GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Standard Contractual Clauses may not be enough, as suggested by a recent decision by BayLDA

BayLDA, the Bavarian DPA, has recently ordered a German company to cease using Mailchimp, despite the use of Standard Contractual Clauses.


In the aftermath of the Schrems II ruling, we have seen some examples of the practical implications of the judgment. In the most recent case, the Bavarian DPA ordered a German publishing company to cease using Mailchimp, the popular US email marketing platform. While the transfer of data to Mailchimp, and by extension to the US, a third country, was based on Standard Contractual Clauses, it was still unlawful. It was found that the company had not done its due diligence to ensure that this data was adequately protected from access requests by US surveillance authorities. 


While the data transfers by the German company were based on Standard Contractual Clauses, BayLDA suggested that additional due diligence was needed. 


A complaint was filed against the German publishing company with the Bavarian DPA, BayLDA, regarding the company’s occasional use of Mailchimp for its newsletter. The data transfers to Mailchimp by the German publishing company were based on Standard Contractual Clauses. However, under US surveillance law FISA 702, Mailchimp qualifies as an “electronic communication service provider”, leaving the transferred email addresses at risk of being accessed by US intelligence services. BayLDA suggested that additional steps needed to be taken by the publishing company, as far as due diligence is concerned, to determine whether supplementary measures had to be put in place to ensure that data transferred to Mailchimp was protected from US surveillance. 


Based on the decision by BayLDA, the company has ceased using Mailchimp with immediate effect, avoiding possible fines.


The respondent in this case had argued that its use of Mailchimp was lawful under GDPR Article 44. Recital 102 states, in part, that “Member States may conclude international agreements which involve the transfer of personal data to third countries or international organisations, as far as such agreements do not affect this Regulation or any other provisions of Union law and include an appropriate level of protection for the fundamental rights of the data subjects.” In this case, it was ultimately found that the German company was not able to adequately protect the fundamental rights of the affected data subjects, as it had not ensured that this data was sufficiently protected from access by US surveillance. The German publishing company immediately ceased using Mailchimp for its newsletters, avoiding a possible fine from BayLDA. 


This decision by BayLDA provides further clarity on the practical application of Schrems II.


This decision by the Bavarian DPA makes clear to companies transferring data on the basis of Standard Contractual Clauses that, at times, these alone may not be enough. Due diligence is still necessary on transfers of data outside the EU or UK. Because third-country surveillance laws may not be compatible with EU or UK law, supplementary measures may need to be put in place to adequately protect the data being transferred to service providers in those third countries. 


Do you make international data transfers to third countries? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, transfer impact assessments and Data Protection Officer outsourcing.  Contact us today.

The new EU AI Regulation: leaked document reveals intentions to establish rules for high-risk AI

The new EU AI Regulation, revealed in a recently leaked document, includes several intended rules specific to high-risk AI. 


The European Commission has proposed a Regulation of the European Parliament and of the Council aimed at governing the use and sale of high-risk AI within the European Union. In the recently leaked document, the Commission stated that “Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.” With that in mind, the European Commission has set out to further define and clarify what constitutes high-risk AI, and to set out rules aimed at ensuring that AI safely and effectively serves the highest good of natural persons. “It is a step in the right direction, providing further ethical safeguards as our lives become increasingly dominated by machine-made decisions”, comments Dr Bostjan Makarovic, Aphaia’s Managing Partner.


The document outlines harmonised rules concerning the placing on the market, putting into service and use of high-risk AI systems in the Union. It also includes harmonised transparency rules for AI systems intended to interact with natural persons and AI systems used to generate or manipulate image, audio or video content. The regulatory framework laid out in this document is intended to function without prejudice to the provisions of existing Union regulations applicable to AI, falling within the scope of this regulation. 


The new AI Regulation will apply to providers and users of AI systems in the EU or in third countries to the extent that they affect persons in the EU.


This Regulation will apply to providers of AI systems who place them on the market or put them into service in the European Union, whether those providers are established in the EU or in a third country outside the Union. In addition, the Regulation will apply to users of AI systems in the EU, as well as to providers and users of AI systems established in third countries, to the extent that these systems affect persons within the EU. 


Article 3 of the leaked document defines an AI system as “software that is developed with one or more of the approaches and techniques listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments.” This can constitute a component of a product or a standalone product, the output of which may serve to partially or fully automate certain activities. Annex I outlines several approaches and techniques which indicate artificial intelligence, including machine learning approaches (such as supervised, unsupervised and reinforcement learning), logic- and knowledge-based approaches, and statistical approaches. 


The leaked document outlines several prohibitions intended to be established for the protection of the fundamental rights of natural persons. 


This document goes on, in Article 4, to outline the prohibited AI practices: a list of artificial intelligence practices which are prohibited as contravening the fundamental rights protected under EU law and Union values. Title II Article 4 prohibits the use of AI systems that manipulate human behaviour, opinions or decisions through choice architectures or any other element of the user interface, causing persons to make decisions, form opinions or behave in a manner that is to their detriment. In addition, the Regulation prohibits the use of AI in any manner that exploits information or predictions about people in an effort to target their vulnerabilities, leading them to behave, form an opinion or make decisions to their detriment. The Regulation will also prohibit indiscriminate surveillance applied to all natural persons in a generalised manner without differentiation. Article 4(2) does, however, state that these prohibitions do not apply when such practices are authorised by law, or are carried out by, or on behalf of, public authorities in order to safeguard public security, subject to appropriate safeguards for the rights and freedoms of third parties and compliance with EU law. 


Cristina Contero Almagro, Partner in Aphaia, points out that “It should be noted that this new Regulation mentions “monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources” as elements that the methods of surveillance could include, which means that these provisions might potentially reach any online platform which relies on automated data aggregation and analysis. 


Considering that the Regulation takes a risk-based approach and that it interlinks with the GDPR in some areas, this only confirms the importance for businesses of ensuring that their systems and processes comply with the data protection framework. In particular, Data Protection Impact Assessments, on which the Conformity Assessment would be built, play a paramount role”.


The new AI Regulation specifically defines what constitutes high risk AI systems, in order to outline exactly which systems will be subject to this regulation. 


With regard to high-risk AI systems, the document has a specific section (Annex II) dedicated to defining precisely, with examples, what constitutes a “high risk artificial intelligence system”. Anything that falls within that purview is subject to specific rules intended to ensure the best interest of persons. Compliance with these requirements will be mandatory before these systems are placed on the market or put into service. The Regulation covers the use of data sets for training these systems, documentation and record keeping, transparency, robustness, accuracy, security, and human oversight. The leaked document includes several obligations for the providers and users of these systems, as well as for authorised representatives, importers and distributors.


The Regulation sets forth intended measures to support innovation in AI and aid SMEs in ensuring compliance. 


This document also sets forth intended measures in support of innovation in AI. These include AI regulatory sandboxing schemes, which national competent authorities in any Member State will be allowed to establish. In order to reduce the regulatory burden for small and medium enterprises and startups, additional measures will be implemented, including priority access to these sandboxes, as well as digital hubs and testing and experimentation facilities. These hubs are intended to provide AI providers with relevant training on the regulatory requirements, as well as technical and scientific support and testing facilities. 


The new AI Regulation indicates the intention for the establishment of a European Artificial Intelligence Board. 


The document indicates the intention to establish a European Artificial Intelligence Board, tasked with ensuring the consistent application of the Regulation by the Member States. This body will be expected to issue opinions or interpretive guidance documents clarifying the application of the Regulation, collect and share best practices among Member States, aid in the development of standards regarding AI, and continuously monitor developments in the market and their impact on fundamental rights. The European Artificial Intelligence Board will also be expected to ensure consistency and coordination in the functioning of the AI regulatory sandboxes previously mentioned. The Board will issue opinions before the Commission adopts a delegated act and will coordinate, in carrying out its tasks, with the relevant bodies and structures established at EU level, including the EDPB. 


Do you use AI in your organisation and need help ensuring compliance with AI regulations? We can help you. Aphaia provides EU AI Ethics Assessments, Data Protection Officer outsourcing and ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments. We can help your company get on track towards full compliance.

Spanish court rules against mandatory geolocation from employees’ personal mobile phones

Spanish court rules against obliging delivery drivers to provide geolocation from their personal mobile phones and personal internet connection.


The Social Chamber of the Supreme Court in Spain has confirmed a decision nullifying Telepizza SAU’s ‘Tracker Project.’ This project imposed mandatory location tracking on delivery drivers, facilitated through the geolocation feature of their personal phones, with negative repercussions for system issues which, if not resolved by a deadline, put employees at risk of contract suspension, loss of salary, and even loss of employment. This Supreme Court ruling confirmed a National Court ruling establishing that delivery drivers for Telepizza are not required to use their own mobile phones and an installed geolocation app while working. The Chamber held that requiring workers to use their own mobile phones differs greatly from imposing the same conditions through a company-provided phone, as the latter would not risk violating employee rights. 


The court found that this system established by the company violated its employees’ privacy rights. 


The pizza restaurant chain, which operates mainly in Spain and Portugal, had its appeal denied in this ruling, on the basis that this type of policy violated the privacy rights of those affected when less intrusive measures could have been used. The company’s Tracker Project forced employees to provide a personal mobile phone with a working internet connection in order to facilitate tracking of their live location and, by extension, of deliveries. It was also concluded that the workers had not been given sufficient information and, as such, the implementation of the Tracker Project failed to comply with the requirements for information and prior consultation established in Article 64.5 of the Workers’ Statute. 


“One should not forget that employees are also data subjects whose data should be processed in compliance with the GDPR. In this case the company should have undertaken a Data Protection Impact Assessment in order to identify the risks linked to this practice and the mitigation measures that should have been applied before implementing it” comments Cristina Contero Almagro, Aphaia’s Partner.


The Court is of the view that there are less invasive ways of achieving the same tracking functionality. 


The pizza delivery company argued that its system would simply allow clients to locate their orders in real time. However, the Supreme Court did not question the necessity or advantage of customers being able to track their orders, but rather highlighted that the methods used by the defendant to achieve this are not in accordance with the law. In addition, from a fundamental rights perspective, the company’s methods are flawed not in proportionality but in necessity, as there are other, less invasive means of achieving that functionality within the business. The court also ruled that the compensation plan intended for employees, for the use of their personal phones and internet connections, was insufficient. In general, a more appropriate policy would have been to furnish drivers with an employer-provided phone for tracking purposes. 


Do you have questions on your policies regarding employees or location tracking? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, transfer impact assessments and Data Protection Officer outsourcing.  Contact us today.