EGL Fined for Unlawful Marketing

Italian DPA (Garante) Imposes a Double Fine on Eni Gas e Luce Totalling EUR 11.5 Million for Two Violations of the GDPR.

The Italian Data Protection Authority (Garante) imposed a double fine on Eni Gas e Luce (EGL) of EUR 11.5 million for unlawful data processing for promotional purposes and the activation of unsolicited contracts.

Last month, the European Data Protection Board reported on a double fine imposed on Eni Gas e Luce by the Italian Data Protection Authority. Following an investigation into the marketing practices of Eni Gas e Luce (EGL), the Italian Data Protection Authority imposed a total fine of EUR 11.5 million for unlawful data processing for promotional purposes and for the activation of unsolicited contracts. The first of the two fines, of EUR 8.5 million, concerned processing in connection with telemarketing and teleselling activities; the second, of EUR 3 million, concerned breaches arising from the conclusion of unsolicited contracts for the supply of electricity and gas under ‘free market’ conditions.

Unlawful Data Processing

Of the several infringements uncovered during the investigation, the first fine of EUR 8.5 million was for several counts of unlawful data processing. The specific violations included advertising calls made without the consent of the contacted person, or despite that person’s refusal to receive promotional calls, or without following the required procedures for verifying the public opt-out register. The Italian DPA also found that there were no technical and organisational measures in place to take account of the indications provided by users. In addition, EGL retained data for longer than permitted and acquired data on prospective customers from list providers who had not obtained any consent for the disclosure of such data.

Activation of Unsolicited Contracts

After receiving many complaints from customers who had received a letter terminating their contract with their previous supplier, or an initial EGL bill, without ever having requested a change of supplier, the Italian DPA conducted an investigation which resulted in an additional EUR 3 million fine. In some cases, customers even reported incorrect data in the contracts and forged signatures.

Corrective and Disciplinary Measures

The Garante has ordered that, in addition to paying the fine, EGL introduce specific alerts to detect certain procedural anomalies. The company is also prohibited from using data made available by list providers if those providers had not obtained specific consent from consumers for the communication of such data to EGL. EGL is further expected to verify the consent of the persons included in its contact lists, by examining a large sample of customers, before the start of any promotional campaign. All of these measures must be implemented and communicated to the Italian DPA within a set timeframe, while the fines must be paid within 30 days.
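The decision itself does not prescribe any technical implementation. Purely as an illustrative sketch (the data model, names and numbers below are hypothetical), a pre-campaign check of the kind the Garante describes boils down to filtering a contact list against documented consent and the opt-out register:

```python
# Hypothetical sketch only: filter a contact list so that a campaign reaches
# only people with a documented marketing consent who are not on the public
# opt-out register. The data model is illustrative, not from the decision.
from dataclasses import dataclass

@dataclass
class Contact:
    phone: str
    has_marketing_consent: bool  # documented, freely given consent on file

def eligible_for_campaign(contacts, opt_out_register):
    """Return only contacts who consented and are not opted out."""
    opted_out = set(opt_out_register)
    return [c for c in contacts
            if c.has_marketing_consent and c.phone not in opted_out]

contacts = [
    Contact("+39 333 0000001", True),
    Contact("+39 333 0000002", False),  # no consent -> excluded
    Contact("+39 333 0000003", True),   # opted out  -> excluded
]
print([c.phone for c in eligible_for_campaign(contacts, {"+39 333 0000003"})])
# -> ['+39 333 0000001']
```

In practice the register lookup would of course be made against the official public opt-out register rather than an in-memory set, and consent records would need to be verifiable, as the Garante requires.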

Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and UK Data Protection Act? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Statement on Personal Data

The FCA, ICO and FSCS Release a Joint Statement Warning FCA-Authorised Firms and IPs to Be Responsible with Personal Data

The Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO) and the Financial Services Compensation Scheme (FSCS) have released a joint statement warning FCA-authorised firms and Insolvency Practitioners (IPs) to be responsible when dealing with customers’ personal data.

On February 7th 2020, the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO) and the Financial Services Compensation Scheme (FSCS) released a joint statement warning FCA-authorised firms and insolvency practitioners (IPs) against the unlawful sale of clients’ data to claims management companies (CMCs). It has come to their attention that some FCA-authorised firms and IPs have attempted to sell clients’ personal data to these CMCs unlawfully. The CMCs may not be acting in consumers’ best interests and may also be unlawfully marketing their services.

While the FCA Handbook states that CMCs are required to act honestly, fairly and professionally, in line with the best interests of their customers, they may not in practice be doing so. In any case, CMCs that intend to buy and use such personal data must demonstrate their compliance with privacy laws. Although contracts may vary, standard contracts typically do not provide sufficient legal consent for personal data to be shared with CMCs to market their services, so such sharing may not be lawful.

Why Selling Customers’ Data to CMCs may not be Lawful

Apart from the fact that most standard contracts simply do not provide the legal consent for customers’ personal data to be sold to CMCs, companies that pass on customers’ personal information may also fail to meet the requirements of the Data Protection Act 2018 and the GDPR. Furthermore, any direct marketing calls, texts or emails carried out by CMCs may breach the Privacy and Electronic Communications Regulations 2003 (PECR).

What are the implications of such breaches of data protection legislation?

Companies are expected by law to abide by the Data Protection Act 2018, the GDPR and the FCA Handbook. In the case of FCA-authorised firms and IPs in particular, the CMCOB (Claims Management: Conduct of Business) sourcebook applies. Where the ICO or FCA finds these companies to be in breach of any of these data protection laws, it will take appropriate action, and there could be serious legal consequences.

Time and again, we see fines being imposed on companies for breaches of these data protection laws; just last week, we reported on the Italian DPA fining TIM SpA in excess of EUR 27 million for unlawful data processing.

TIM Fined for Unlawful Marketing

Italian DPA Fines TIM SpA EUR 27.8 Million for Unlawful Marketing.

The Italian Data Protection Authority (DPA) Garante fined TIM SpA EUR 27,802,496 for several instances of unlawful data processing for marketing purposes.

Complex investigations were carried out after the DPA received hundreds of complaints between January 2017 and early 2019 regarding unlawful processing for marketing purposes, in particular unsolicited marketing calls performed without any consent by call centres acting on behalf of TIM SpA. In some cases, the concerned parties had either denied their consent to receive marketing calls or were listed in the public opt-out register. Some complaints also mentioned unfair prize competition processes and the applicable forms, among other issues. The investigations were carried out with the aid of a specialised unit of the Italian Financial Police and revealed several critical infringements of personal data protection legislation.

Unlawful ‘cold’ marketing calls

TIM SpA, Italy’s largest telecommunications service provider, was found to have had marketing calls placed on its behalf, by various call centres, to millions of ‘non-customer’ consumers without their consent. Calls were also made to several customers who were on a marketing blacklist. Furthermore, over two hundred thousand numbers were called that were not included in TIM’s list of marketing numbers. According to the European Data Protection Board, “Other types of illicit conduct were also found such as TIM’s failure to supervise the activities of some call centres or to properly manage and update their blacklists (listing individuals who do not wish to receive marketing calls), and the fact that consent to marketing activities was mandatory in order to join the ‘Tim Party’ incentive discount scheme.”

Measures issued by the Italian DPA

In addition to imposing the fine on TIM, the Italian DPA also issued certain injunctions and prohibitions. The injunctions require TIM to check the consistency of its blacklists and to allow customers to access discount schemes and prize competitions without having to consent to marketing interactions. TIM will also have to review its app activation procedures and always specify, in clear and understandable language, the processing activities it performs, along with their purposes and the relevant processing mechanisms, ensuring that valid consent is obtained. In addition, the company is no longer allowed to use customer data collected through its three apps, ‘MyTim’, ‘TimPersonal’ and ‘TimSmartKid’, for any purposes other than providing the relevant services without the users’ free, specific consent. These are only part of the total of 20 corrective measures imposed on TIM by the Italian DPA, all of which must be implemented, and the progress thereof reported to the Italian SA, according to a specific timeline, in addition to payment of the EUR 27.8 million fine within 30 days.

Should our business be worried?

One should keep in mind that the rules on ‘cold calls’ vary from country to country, even within the GDPR framework. It is therefore important to consult an expert before deciding to engage in cold marketing calls or cold emailing; the latter is generally prohibited in all EU Member States and the UK, with some exceptions.

EU White Paper on AI

EU White Paper on Artificial Intelligence Overview

The EU White Paper on Artificial Intelligence contains a set of proposals to develop a European approach to this technology.

As reported in our blog, the leaked EU White Paper obtained by Euractiv proposes several options for regulating AI going forward. In today’s post we go through them to show you the most relevant ones.

The EU approach is focused on promoting the development of AI across Member States while ensuring that relevant values and principles are properly observed throughout the whole process of design and implementation. One of the main goals is cooperation with China and the US, as the most important players in AI, while always aiming to protect the EU’s interests, including European standards and the creation of a level playing field.

Built on the existing policy framework, the EU White Paper points out three key pillars for the European strategy on AI:

  • Support for the EU’s technological and industrial capacity.

  • Readiness for the socioeconomic changes brought about by AI.

  • Existence of an appropriate ethical and legal framework.

How is AI defined?

This EU White Paper provides a definition of AI based on its nature and functions. Accordingly, AI is conceived as “software (integrated in hardware or self-standing) which provides for the following functions:

  • Simulation of human intelligence processes.

  • Performing certain specified complex tasks.

  • Involving the acquisition, processing and rational or reasoned analysis of data”.

Europe’s position on AI

What is the role of Europe in the development of AI? Despite the EU’s strict rules on privacy and data protection, Europe has several strengths that may help it gain leverage in the “AI race” against other markets such as China or the US, namely:

  • Excellent research centres with many publications and scientific articles related to AI.

  • World-leading position in robotics and B2B markets.

  • Large amounts of public and industrial data.

  • EU funding programme.

On the negative side, there is a pressing need to significantly increase investment levels in AI and to maximise them through cooperation among Member States, Norway and Switzerland. Europe also has a weak position in consumer applications and online platforms, which results in a competitive disadvantage in data access.

The EU White Paper offers some proposals to reinforce the EU’s strengths in AI and address those areas that need to be boosted:

  • Establishing a world-leading AI computing and data infrastructure in Europe, building on High Performance Computing centres.

  • Federating knowledge and achieving excellence through the reinforcement of the EU scientific community for AI and the facilitation of its collaboration and networking, based on strengthened coordination.

  • Supporting research and innovation to stay at the forefront, with the creation of a “Leaders Group” made up of C-level representatives of major stakeholders.

  • Fostering the uptake of AI through the Digital Innovation Hubs and the Digital Europe Programme.

  • Ensuring access to finance for AI innovators.

What are the prerequisites to achieve EU’s goals on AI?

Access to data

Ensuring access to data for EU businesses and the public sector is essential to develop AI. One of the key measures considered by the Commission for addressing the issue of data access is the development of common data spaces, which combine the technical infrastructure for data sharing with governance mechanisms, organised by sector or problem area.

Regulatory framework

The above can be built on the EU’s comprehensive legal framework, which includes the GDPR, the Regulation on the Free Flow of Data and the Open Data Directive. The latter may indeed play a fundamental role: based on its latest revision, the Commission intends to adopt, by early 2021, an implementing act on high-value public sector datasets, which will be available for free and in a machine-readable format.

Although AI is already subject to an extensive body of EU legislation covering fundamental rights, consumer law and product safety and liability, it also poses new challenges arising from data dependency and connectivity within new technological ecosystems. There is therefore a need to develop a regulatory framework that covers all the specific risks that AI brings. To achieve this successfully, the EU White Paper highlights the relevance of complementing and building upon existing EU and national frameworks to provide policy continuity and ensure legal certainty.

The main risks the implementation of AI in society faces are the following:

  • Fundamental rights, including bias and discrimination.

  • Privacy and data protection.

  • Safety and liability.

It is important to note that the aforementioned risks can result from flaws in the design of the AI system, from problems with the availability and quality of data, or from issues stemming from machine learning as such.

According to the EU White Paper, the Commission identified the following weaknesses of the current legislative framework in consultation with Member States, businesses and other stakeholders:

  • Limitations of scope as regards fundamental rights.

  • Limitations of scope with respect to products: EU product safety legislation requirements do not apply to services based on AI.

  • Uncertainty as regards the division of responsibilities between different economic operators in the supply chain.

  • Changing nature of products.

  • Emergence of new risks.

  • Difficulties linked to enforcement given the opacity of AI.

How should roles and responsibilities concerning AI be attributed?

The Commission considers that, given the number of actors involved in the life cycle of an AI system, the principle that should guide the attribution of roles and responsibilities in the future regulatory framework is that responsibility lies with the actor(s) best placed to address it. The future regulatory framework for AI is therefore expected to set out obligations for both developers and users of AI, together with other groups such as suppliers of services. This approach would ensure that risks are managed comprehensively while not going beyond what is feasible for any given economic actor.

What legal requirements should be imposed on the agents involved?

According to the EU White Paper, the Commission seems keen on setting up legal requirements of a preventive, ex ante character rather than an ex post one, even though the latter are also referred to. That said, the requirements might include:

  • Accountability, transparency and information requirements to disclose the design parameters of the AI system.

  • General design principles.

  • Requirements regarding the quality and diversity of datasets.

  • Obligation for developers to carry out an assessment of possible risks and steps to minimize them.

  • Requirements for human oversight.

  • Additional safety requirements.

Ex post requirements establish liability and possible remedies for harm or damage caused by a product or service relying on AI.

That said, which regulatory options is the Commission considering?

  1. Voluntary labelling.

This alternative would be based on a voluntary labelling framework for developers and users of AI. The requirements would become binding only once the developer or user has opted to use the label.

  2. Sectoral requirements for public administration and facial recognition.

This option would focus on the use of AI by public authorities. For this purpose, the Commission proposes the model set out by the Canadian directive on automated decision-making, in order to complement the provisions of the GDPR.

The Commission also suggests a time-limited ban (“e.g. 3-5 years”) on the use of facial recognition technology in public spaces, aiming at identifying and developing a sound methodology for assessing the impact of this technology and establishing possible risk management measures.

  3. Mandatory risk-based requirements for high-risk applications.

This option would foresee legally binding requirements for developers and users of AI, built on existing EU legislation. Given the need to ensure proportionality, it seems these new requirements might apply only to high-risk applications of AI. This brings to light the need for clear criteria to differentiate between “low-risk” and “high-risk” systems. The Commission provides the following definition of “high-risk”: “applications of AI which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity”, and points out the need to consider it together with the particular sector in which the AI system would be deployed.

  4. Safety and liability.

Targeted amendments of the EU safety and liability legislation could be considered to address the specific risks of AI.

  5. Governance.

An effective system of enforcement is deemed an essential component of the future regulatory framework, which would require a strong system of public oversight.

Does your company use AI systems? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides GDPR adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.