Telephone marketing rules post-Brexit

Many UK businesses are planning to shift to telephone marketing. In this blog post we go through the requirements that must be met to carry it out in compliance with the ePrivacy rules.

UK businesses are no longer clearly protected by the ePrivacy country-of-origin rule when marketing directly in EU countries, so many of them are now looking for alternatives. Are the rules on telephone marketing less strict than those on electronic mail marketing?

What does the ePrivacy Directive say about unsolicited communications?

Pursuant to the ePrivacy Directive “Member States shall take appropriate measures to ensure that, free of charge, unsolicited communications for purposes of direct marketing […] are not allowed either without the consent of the subscribers concerned or in respect of subscribers who do not wish to receive these communications, the choice between these options to be determined by national legislation”.

Accordingly, the national implementation of the ePrivacy Directive determines the rules that apply in each Member State.

The ePrivacy country-of-origin principle allows a sender to rely on its own country's less strict rules within the single market. After Brexit this no longer applies to UK businesses, so the rules of the destination country should be considered before marketing directly in EU countries.

Automated calls

Automated calls are subject to stricter requirements. Pursuant to the ePrivacy Directive, the use of automated calling systems without human intervention (automatic calling machines) and facsimile machines (fax) for the purposes of direct marketing is only allowed in respect of subscribers who have given their prior consent.

General consent for marketing, or even consent for live calls, is not enough: the consent needs to cover automated calls specifically.
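As a minimal sketch of what this means in practice, a compliance check could verify that the consent on record names automated calls explicitly. The record structure and channel names below are our own illustration, not anything defined by the Directive:

```python
# Hypothetical consent record: the channel names are illustrative only.
consented_channels = {"email", "live_calls"}  # what the subscriber agreed to

def may_place_automated_call(channels: set) -> bool:
    # General marketing consent, or consent for live calls, is NOT enough:
    # the consent must cover automated calls specifically.
    return "automated_calls" in channels

print(may_place_automated_call(consented_channels))  # False: no specific consent
```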

Telephone marketing from the UK through live calls

In EU countries

UK businesses that wish to market to other businesses or individuals in EU countries should check national laws in order to confirm the following elements: 

  1. Whether consent is required;
  2. Where consent is not required, whether the number is listed in the national opt-out register or whether the data subject has explicitly objected to receiving calls from that particular business.

Most EU countries have implemented opt-out registers rather than a consent requirement, but this must be assessed on a case-by-case basis in order to ensure full compliance.

In the UK

UK businesses that wish to market to other businesses or individuals in the UK should take the following steps:

  1. Check whether the number is registered with the TPS or CTPS.
  2. Check whether the data subject has objected to receiving calls from them.

In a nutshell, marketing calls can be made freely unless the person has opted out of them or their number is registered with the TPS or CTPS. No marketing calls should be made to any number listed on the TPS or CTPS unless that person has specifically consented to calls from that particular business. Telephone marketing for the purpose of claims management services is also prohibited unless the person has specifically consented to it (see the sketch below).

Calls in relation to pension schemes are subject to special rules.
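Setting aside those special regimes, the screening sequence above can be made concrete with a short sketch. The register and consent lookups below are hypothetical placeholders; in practice the TPS and CTPS provide their own subscription-based screening files:

```python
# Hypothetical lookup sets; real screening would query TPS/CTPS data and the
# business's own consent and objection records.
tps_ctps_register = {"+441632960001"}   # numbers registered with TPS/CTPS
objections = {"+441632960002"}          # people who objected to our calls
specific_consent = {"+441632960001"}    # people who consented to our calls

def may_place_live_call(number: str) -> bool:
    if number in objections:
        return False                    # step 2: an individual objection blocks the call
    if number in tps_ctps_register:
        # step 1: a TPS/CTPS listing blocks the call unless the person has
        # specifically consented to calls from this particular business
        return number in specific_consent
    return True                         # no listing and no objection

print(may_place_live_call("+441632960001"))  # True: listed, but consented to us
print(may_place_live_call("+441632960002"))  # False: objected
print(may_place_live_call("+441632960003"))  # True: no opt-out on record
```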

Additional requirements

Once it has been determined that the call can be made in compliance with the relevant rules, a set of additional requirements applies, namely: 

  • Say who is calling;
  • Allow the number (or an alternative contact number) to be displayed to the person receiving the call;
  • Explain where the controller’s privacy policy can be found; and
  • Provide a contact address or freephone number if asked.

EU ePrivacy rules update

As reported in one of our latest blogs, earlier this month EU Member States agreed on a negotiating mandate for revised ePrivacy rules, which would repeal the current ePrivacy Directive and start to apply two years after publication in the EU Official Journal. The ePrivacy Regulation may introduce new rules on telephone marketing, such as an obligation for callers to present the calling line identification assigned to them or to use a specific code or prefix identifying the call as a direct marketing call. 


Do you carry out telephone marketing? Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy rules, the GDPR and the Data Protection Act 2018 when handling customer data? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.


Spanish DPA AEPD publishes Guidelines on AI audits

AEPD, the Spanish data protection authority, has published Guidelines on the requirements that should be implemented for conducting audits of data processing activities that embed AI.

Earlier this month, the Spanish DPA, the AEPD, published Guidelines on the requirements that should be considered when undertaking audits of personal data processing activities which involve AI elements. The document addresses the special controls to which the audits of personal data processing activities comprising AI components should be subject.

Audits are part of the technical and security measures regulated in the GDPR and are deemed essential for proper protection of personal data. The AEPD Guidelines contain a list of audit controls from which the auditor can select the most suitable ones on a case-by-case basis, depending on factors such as how the processing may affect GDPR compliance, the type of AI component used, the type of data processing, and the risks the processing activities pose to the rights and freedoms of data subjects.

Special features of AI audits methodology

The AEPD remarks that the audit process should be governed by the principles laid down in the GDPR, namely: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability.

The AEPD also points out that not all the controls listed in the Guidelines are meant to be applied together. The auditor should select those that are relevant based on the scope of the audit and the goals it pursues.

What type of data processing do these requirements apply to and who should comply with them?

The Guidelines will be applicable where:

  • There are personal data processing activities at any stage of the AI component lifecycle; or
  • The data processing activities aim to profile individuals or make automated decisions which produce legal effects concerning the data subjects or similarly significantly affect them.

The AEPD states that in some cases it might be useful to carry out some preliminary assessments before moving forward with the audit, such as, inter alia, an assessment of the level of anonymisation of personal data, an assessment of the risk of re-identification and an assessment of the risk of losing data stored in the cloud.

The document is addressed especially to data controllers who audit personal data processing activities that include AI-based components; to data processors and developers who wish to offer additional guarantees around their products and services; to DPOs responsible for monitoring the data processing and advising the data controllers; and to auditors who work with this type of processing.

Control goals and actual controls

The main body of the Guidelines consists of five audit areas, each broken down into several objectives that contain the actual controls from which the auditors, or the person in charge of the process as relevant, can make their selection for the specific audit they are undertaking.

The AEPD provides an exhaustive list comprising more than a hundred controls, which are summarised in the following paragraphs. 

  • AI component identification and transparency

This area includes the following objectives: inventory of the AI components, definition of responsibilities, and transparency.

The AEPD stresses the importance of keeping full records both of the components (including, inter alia, ID, version, date of creation and previous versions) and of the persons in charge of the process (such as their contact details, roles and responsibilities). There are also some provisions with regard to the information that should be available to the stakeholders, especially when it comes to the data sources, the data categories involved, the model and the logic behind the AI component, and the accountability mechanisms.

  • AI component purpose

There are several objectives within this area: identification of the AI component purposes, uses and context, proportionality and necessity assessment, data recipients, data storage limitation and analysis of the data subject categories.

The controls linked to these objectives are based on the standards and requirements needed to achieve the desired outcomes, and on the elements that may affect that result, for example conditioning factors, socioeconomic conditions and the allocation of tasks, among others, for which a risk assessment and a DPIA are recommended.

  • AI component basis

This area is built around the following objectives: identification of the AI component development process and basic architecture, DPO involvement, and adequacy of the theoretical models and methodological framework.

The controls defined in this section mainly relate to the formal elements of the process and the methodology followed. They aim to ensure interoperability between the AI component development process and the privacy policy, to define the requirements that the DPO should meet and guarantee their proper and timely involvement, and to set out the relevant revision procedures.

  • Data management

The AEPD details four objectives in this area: data quality, identification of the origin of the data sources, personal data preparation and bias control. 

While data protection is the leitmotiv throughout the Guidelines, it is especially present in this chapter, which covers, inter alia, data governance, variables and proportionality distribution, lawful bases for processing, the reasoning behind the selection of data sources, and the categorisation of data and variables.

  • Verification and validation

Seven objectives are pursued in this area: verification and validation of the AI component, adequacy of the verification and validation process, performance, coherence, robustness, traceability and security. 

The controls set out in this area focus on ensuring data protection compliance during the ongoing implementation and use of the AI component. They look for guarantees such as the existence of a standard that allows for verification and validation procedures once the AI component has been integrated, a schedule for internal inspections, an analysis of false positives and false negatives, a procedure for finding anomalies, and mechanisms for identifying unexpected behaviour.

Final remarks

The AEPD concludes with a reminder that the Guidelines take a data protection approach to the audit of AI components. This means, on the one hand, that they may need to be combined with additional controls derived from other perspectives and, on the other hand, that not all controls will be relevant in every case: they should be selected according to the specific needs, considering the type of processing, the client’s requirements and the specific features of the audit and its scope, together with the results of the risk assessment.

Does your company use AI? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides both GDPR and DPA 2018 adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.


Draft of new Standard Contractual Clauses published by the European Commission

On 12 November 2020, the European Commission published a draft Implementing Decision on new Standard Contractual Clauses for the transfer of personal data to third countries.

The CJEU judgement in the Schrems II case has brought to light some deficiencies in the current guarantees applied to international data transfers. Apart from invalidating the Privacy Shield, the Court stipulated that additional measures are required when using Standard Contractual Clauses (SCCs) in order to ensure that the data subjects are granted a level of protection essentially equivalent to the one guaranteed by the GDPR and the EU Charter of Fundamental Rights.

You can learn more about the business implications of the Schrems II decision in our blog.

What’s new?

In response to the caveats pointed out by the CJEU with regard to the use of SCCs for international transfers, the European Commission published a draft implementing decision containing a new set of SCCs for transfers of personal data to third countries, which includes five main changes in relation to the current clauses (approved under Directive 95/46/EC):

  • Modular approach to cover various transfer scenarios, including processor-controller and processor-sub-processor international data transfers.
  • More than two parties could adhere to the SCCs and additional controllers and processors should be allowed to accede to them throughout the life cycle of the contract.
  • Additional safeguards should be provided to ensure a level of protection of the personal data essentially equivalent to the one granted by the GDPR.
  • Data subjects should be provided with a copy of the SCCs upon request and they should be informed of any change of purpose and of the identity of any third party to which the personal data is disclosed.
  • The data importer should inform data subjects in a transparent and easily accessible format, through individual notice or on its website, of a contact point authorised to handle complaints or requests.

Modular approach and territorial scope

The draft of the new SCCs aims to address some gaps in the current SCCs, such as the limited types of data transfer that can be made under their provisions. While the current SCCs are designed for international data transfers from EU controllers to non-EU/EEA controllers and from EU controllers to non-EU/EEA processors, the proposed new ones combine general clauses with a modular approach that would allow controllers and processors to select the module applicable to their situation and tailor their obligations to their corresponding role and responsibilities. In terms of territorial restrictions, the new SCCs do not require the data exporter to be established in the EEA, which also increases the number of scenarios that may be covered by this safeguard.


Additional safeguards

The new SCCs stipulate some obligations that the parties should meet for the purpose of ensuring an adequate level of data protection. The additional measures imposed by the new SCCs include, inter alia, the following:

  • Application of additional requirements to address how to deal with binding requests from public authorities in the third country for disclosure of personal data. 
  • Risk assessment undertaken by the data exporter to determine whether there are any reasons to believe that the laws applicable to the data importer are not in line with the requirements laid down in the SCCs. To this end, some key elements should be taken into account (see the sketch after this list), namely:
    • Duration of the contract.
    • Nature of the data transferred.
    • Type of recipient.
    • Purpose of the processing.
    • Any relevant practical experience indicating the existence or absence of prior instances of requests for disclosure from public authorities received by the data importer for the type of data transferred.
    • Laws of the third country of destination relevant in light of the circumstances of the transfer.
    • Technical and organisational measures applied during transmission and to the processing of the personal data.
  • Obligation of the data importer to notify the data exporter and the data subject about any legally binding request issued by a public authority under the law of the country of destination for disclosure of personal data or about any direct access by public authorities to the personal data.
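Purely as an illustration of how a data exporter might record such an assessment internally, the key elements listed above map onto a simple structured record. The field names are ours; the draft SCCs prescribe the factors to consider, not any particular format:

```python
from dataclasses import dataclass, field

@dataclass
class TransferRiskAssessment:
    # Illustrative fields mirroring the key elements in the draft SCCs.
    contract_duration_months: int
    nature_of_data: str                  # e.g. "customer contact data"
    recipient_type: str                  # e.g. "processor", "sub-processor"
    processing_purpose: str
    prior_disclosure_requests: bool      # practical experience of authority requests
    destination_country_laws: str        # relevant laws in light of the transfer
    technical_org_measures: list = field(default_factory=list)

assessment = TransferRiskAssessment(
    contract_duration_months=24,
    nature_of_data="customer contact data",
    recipient_type="processor",
    processing_purpose="cloud hosting",
    prior_disclosure_requests=False,
    destination_country_laws="no known conflict with the SCC requirements",
    technical_org_measures=["encryption in transit", "encryption at rest"],
)
print(assessment)
```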

Grace period

Once the new SCCs have been approved, they will replace the current ones, with a one-year grace period for parties to put the new clauses in place. During this period, transfers can continue to be made on the basis of the current SCCs, unless those contracts are changed: if the contracts are changed, the parties lose the benefit of the grace provision and must move to the new clauses. However, parties that change existing contracts only to introduce the additional safeguards required by Schrems II can still benefit from the grace period.


The draft is open for feedback until 10 December 2020.


Do you make international data transfers to third countries? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, transfer impact assessments and Data Protection Officer outsourcing. Contact us today.


Brain implants and AI ethics

The risks derived from the use of AI in brain implants may trigger the need for AI Ethics regulation soon. 

In our last vlog we talked about the interaction between brain implants and GDPR. In this second part we will explore how AI Ethics applies to brain implants. 

How is AI used in brain implants?

AI is the bridge between brain implants and the people who work with them: it helps to translate the brain’s electrical signals into a human-understandable language.  

The process is normally twofold: first, neural data is gathered from brain signals and translated into numbers. Then the numbers are decoded and transformed into natural language, which can be English, Spanish or any other language, as programmed. The procedure also works the other way around.
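As a heavily simplified sketch of that two-stage pipeline, the stub functions below stand in for the real signal processing and trained decoding models, which are far more complex in practice:

```python
# Stage 1 (hypothetical): raw electrical signals become numbers.
def signals_to_features(raw_signals):
    # Real systems apply filtering and feature extraction; here we just
    # normalise the samples as a stand-in.
    peak = max(abs(s) for s in raw_signals) or 1.0
    return [s / peak for s in raw_signals]

# Stage 2 (hypothetical): the numbers are decoded into natural language.
def features_to_text(features):
    # A real decoder is a trained model mapping neural features to words;
    # this toy rule just picks a label from the mean activation.
    mean = sum(features) / len(features)
    return "yes" if mean > 0 else "no"

raw = [0.2, -0.1, 0.7, 0.4]  # fake samples, not real neural data
print(features_to_text(signals_to_features(raw)))  # -> "yes"
```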

Why is AI Ethics important in brain implants?

Depending on how much of a sci-fi fan you are, the words “brain implants” may bring to mind certain films and television series. If, on the contrary, you are a news rather than a Netflix enthusiast, you are probably already thinking about several recent medical research projects where brain implants have been used. Reality reconciles both approaches.

Brain implants have been part of medical and scientific research for more than 150 years: in 1870, Eduard Hitzig and Gustav Fritsch performed experiments on dogs in which they were able to produce movement through electrical stimulation of specific parts of the cerebral cortex. Significant progress has been made since then, and brain implants are now being tested for several purposes, inter alia to monitor abnormal brain activity and trigger assistive measures.

Military applications have also been explored by some institutions. For example, since 2006 DARPA has been working on the development of a neural implant to remotely control the movement of sharks, which could potentially be exploited to provide data feedback on enemy ship movements or underwater explosives.

The latest development in this field comes from Neuralink, the neurotechnology company founded by Elon Musk, whose website claims that the app “would allow you to control your iOS device, keyboard and mouse directly with the activity of your brain, just by thinking about it”. The aim pursued is thus a full brain interface that would enable people to connect with their devices using only their minds.

While all these applications may have a hugely positive impact on technological change, they may also significantly affect individuals’ rights and freedoms, so it is paramount to have regulation in place that establishes limits. Whereas data protection may be playing an important role in this field at the moment, it may not be enough once further progress is made. AI ethics may make up for this gap.

How can AI Ethics help?

AI Ethics may provide a framework to cover the evolving use of computer hardware and software to enhance or restore human capabilities, preventing this technology from posing high risks to the rights and freedoms of individuals. The fact that something is technically feasible does not mean that it should be done.

The lack of appropriate regulation may even slow down the research and implementation process, as some applications would be in direct conflict with several human rights and pre-existing regulations.

Human agency and oversight

Based on the AI Ethics principles pointed out by the AI-HLEG in their “Ethics Guidelines for Trustworthy AI”, human agency and oversight is key for brain implants, although it may be tricky to implement in practice. 

Human agency and oversight stands for humans being able to make informed decisions when AI is applied. While this may be quite straightforward in some medical uses, where the practitioner analyses the brain activity data gathered by the implants, it is not in other scenarios. The merging of biological and artificial intelligence, and the fact that we cannot really know what kind of information may be gathered from brain activity (or at least cannot be aware of every single piece of data), make it difficult to claim that a decision made by the very person carrying the implant in their brain is actually “informed”. 

Technical robustness and safety

Technical robustness and safety is another principle that should be prioritised when developing brain implants. The highly sensitive nature of the data involved and the direct connection to the brain make it necessary for the technology to be built to be as close to impenetrable as possible. If a breach compromising biometric data would already have catastrophic results for the individuals affected, the harm derived from an attack on people’s brains would be simply immeasurable.

Privacy and data governance

According to the principle of privacy and data governance, full respect for privacy and data protection should be ensured, including data governance mechanisms, the quality and integrity of the data and legitimate access to data. In our view, the current lawful bases for processing laid down in the GDPR should be redefined, as their current meaning may not make sense in a context governed by brain implants. In relation to the human agency and oversight principle, and in line with our previous article, an individual could not give consent in the GDPR sense where data is gathered by brain implants, because the information collected by these means cannot initially be identified or controlled. The process should be reframed to allow consent to be managed in two stages, the second one after the collection of the data but before any further processing takes place. Considering the evolving nature of the technology and the early stage of the knowledge currently available, retention periods should also be approached differently, as further investigation might reveal that additional data, for which there might be no lawful basis for processing, can be derived from the data lawfully held.
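One way to model the two-stage consent we describe, purely as our own illustration of the idea rather than an existing GDPR mechanism, is a record that blocks further processing until the second, post-collection consent is given:

```python
from dataclasses import dataclass

@dataclass
class TwoStageConsent:
    # Illustrative model of the two-stage consent proposed above.
    collection_consented: bool = False   # stage 1: before data is gathered
    processing_consented: bool = False   # stage 2: after collection, once the
                                         # data actually gathered is known

    def may_collect(self):
        return self.collection_consented

    def may_process_further(self):
        # Further processing requires BOTH stages, so the data subject can
        # review what was actually collected before it is used.
        return self.collection_consented and self.processing_consented

consent = TwoStageConsent(collection_consented=True)
print(consent.may_collect())           # True: data may be gathered
print(consent.may_process_further())   # False: stage 2 consent still pending
```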

Cambridge Analytica may be a good example of the undesired results that can come from massive data processing without adequate safeguards, measures and limitations in place. That case concerned advertising campaigns that influenced political behaviour and election results, which was serious enough in itself. A similar leak in a context where brain implants are used would involve incalculable damage to the rights and freedoms of individuals. 

Transparency

Transparency is directly linked to the principle of privacy and data governance, as the provision of full information about the processing is one of the main pillars on which the GDPR has been built. While the technology is still in a development phase, this requirement cannot be completely met, because there will always be gaps in what is known about the system’s capabilities and limitations. Relevant requirements should be put in place to set out the minimum information that should be delivered to data subjects when brain implants are used.

Diversity, non-discrimination and fairness

Diversity, non-discrimination and fairness should be observed to avoid any type of unfair bias and the marginalisation of vulnerable groups. This principle is dual: on the one hand, it covers the design of the brain implants and the AI governing them, which should be trained to prevent human biases, the AI’s own biases and the biases resulting from the interaction between the human (brain) and the AI (implant). On the other hand, it should be ensured that the access barriers to this technology do not create severe social inequalities.

Societal and environmental well-being

The social and societal impact of brain implants should be carefully considered. This technology may help people who have a medical condition, but its cost should be reasonably affordable for everyone. A riskier version of this scenario arises when brain implants are used not only for medical treatment but also for human enhancement, as this would give some people improved skills to the detriment of those who could not pay for it.

Accountability

Taking into account the critical nature of brain implant applications, auditability is key to enabling the assessment of algorithms, data and design processes. Adequate and accessible redress should be ensured, for instance so as to be able to contact all individuals using a particular version of the system that has been shown to be vulnerable.

Examples

In the table below we combine our previous article with this one to provide some practical insights into brain implant applications and the AI Ethics principles and GDPR requirements they would be linked to. For this purpose, we use the following case as the example: “Treating depression with brain stimulation”. Each row explains how an AI Ethics principle and the relevant GDPR requirement or principle would apply.

| Brain implants application | AI Ethics principle | GDPR requirement/principle |
| --- | --- | --- |
| Regular monitoring by medical practitioners to decide whether the treatment is effective. | Human agency and oversight | Human intervention |
| Unlawful access to and manipulation of the brain implants would have a direct negative impact on the patient’s health, and would reveal special categories of data. | Technical robustness and safety | Technical and organisational measures |
| Processing is necessary for the provision of health treatment. | Privacy and data governance | Lawful basis for processing |
| Full information about the treatment should be delivered before proceeding. | Transparency | Explanation |
| No one should be excluded from this treatment and offered a less efficient one just because they cannot afford it. | Diversity, non-discrimination and fairness | No discriminatory results |
| Where the treatment negatively affects the behaviour of the patient in a way that may be risky for other people, relevant measures and safeguards should be applied. | Societal and environmental well-being | Statistical and research purposes exceptions |
| The hospital or the State should be responsible for any damages caused by this use of brain implants. | Accountability | Accountability |


Check out our vlog exploring brain implants and AI ethics on our YouTube channel, where you can learn more about AI ethics and regulation.


Do you have questions about how AI works in brain implants and the associated risks? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.