
Automated Decision Making and the GDPR

In today’s blog we delve into automated decision making and the GDPR.

Artificial Intelligence is increasingly becoming ingrained in all facets of our societies and lives. While it certainly heralds an age of cool futuristic technology and applications (facial recognition and self-driving cars, for example!), what about when AI is used as an automated decision-making tool? Can this pose a threat to an individual's rights? What are the possible implications? Is it fair? Are there any legal provisions to ensure fairness?

In our latest vlog we explore some frequently asked questions as it relates to Artificial Intelligence, automated decision making and the GDPR. Click here to take a look.

A deeper look: GDPR and Automated Individual Decision making, including profiling

Automated decision-making is described by the ICO as “the process of making a decision by automated means without any human involvement”.

These decisions, the ICO says, can be based on factual data, as well as on digitally created profiles or inferred data. Examples of this include:

an online decision to award a loan; and
an aptitude test used for recruitment which uses pre-programmed algorithms and criteria.

Meanwhile, Article 4(4) of the GDPR defines profiling as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.

The ICO offers the following examples of profiling:

collect and analyse personal data on a large scale, using algorithms, AI or machine-learning;
identify associations to build links between different behaviours and attributes;
create profiles that you apply to individuals; or
predict individuals’ behaviour based on their assigned profiles.

Yet while automated decision making and profiling have several benefits for both businesses and consumers, they carry risks for people’s rights and freedoms. A false or unfair decision may lead to significant adverse effects for individuals, from discrimination to undue intrusion into private life.

Article 22 of the GDPR, referenced in our vlog, seeks to address this and other risks by setting the strict parameter that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.
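To make the structure of the Article 22 test concrete, here is a minimal, purely illustrative Python sketch. The class, field and function names are our own invention — this is not official compliance tooling, and real Article 22 analysis requires legal judgment, not boolean flags.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool                 # no meaningful human involvement
    legal_or_significant_effect: bool      # e.g. a loan refusal or job rejection
    explicit_consent: bool = False
    necessary_for_contract: bool = False
    authorised_by_law: bool = False

def article_22_applies(d: Decision) -> bool:
    """True if the decision falls under Article 22(1): based solely on
    automated processing and producing legal or similarly significant
    effects on the data subject."""
    return d.solely_automated and d.legal_or_significant_effect

def article_22_exception(d: Decision) -> bool:
    """Article 22(2) exceptions: necessity for a contract, authorisation
    by Union or Member State law, or the data subject's explicit consent."""
    return d.necessary_for_contract or d.authorised_by_law or d.explicit_consent

# Example: a fully automated loan refusal with no exception in place
loan_refusal = Decision(solely_automated=True, legal_or_significant_effect=True)
print(article_22_applies(loan_refusal))    # True: safeguards are needed
print(article_22_exception(loan_refusal))  # False: no exception covers it
```

Even where an exception applies, Article 22(3) still requires suitable safeguards, such as the right to obtain human intervention and to contest the decision.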

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.


UK Government releases AI procurement guidelines

In today’s blog we review the UK Government’s new draft procurement guidelines for AI.


Artificial Intelligence (AI) is a major part of all our lives and societies. From popular voice-command applications like Siri and Alexa, to business spam filters, to connected cars like Tesla, AI is all around us. Increasingly, AI is being introduced within the public service sector with a view to improving efficiency, reducing costs, saving time and enhancing quality. Yet with ethical and privacy concerns being the other side of the coin, last month the UK government released draft guidelines for AI procurement within the public sector.


According to the document, the new AI procurement guidelines will help inform and empower buyers in the public sector, helping them to evaluate suppliers and then confidently and responsibly procure technologies for the benefit of citizens.



The UK AI procurement guidelines are broken down into the following steps:

  1. Explore procurement processes that focus on the challenge rather than a specific solution
  2. Define the public benefit of using AI while assessing risks
  3. Include your procurement within a strategy for AI adoption
  4. Incorporate references to legislation and codes of practice in the invitation to tender
  5. Articulate the technical feasibility and governance considerations of obtaining relevant data
  6. Develop a strategy to address technical and ethical limitations of using training data
  7. Conduct procurement with diverse multidisciplinary teams
  8. Focus on mechanisms of accountability and transparency throughout procurement
  9. Consider the life-cycle management of the AI system
  10. Create a level and fair playing field for suppliers


Who are the procurement guidelines applicable to?

The UK Office for AI states that the procurement guidelines are for multidisciplinary teams involved in public procurement decisions relating to AI projects:

Policy officials and organisation leads considering an AI-based solution and/or planning and delivering AI projects
Procurement officials and commercial teams responsible for the planning and delivery of AI projects
Analysts, data scientists and digital, data and technology experts who are developing project-specific requirements and evaluating, using and maintaining AI systems
Chief Data, Information, Technology and Innovation Officers considering planning and delivering AI projects
Suppliers who want to better understand the best practice processes, technical and ethical expectations for AI projects, and to tailor their offerings appropriately


A pilot of the UK AI procurement guidelines is expected to begin this Autumn.


If your company is currently considering procuring or developing an AI system, Aphaia’s AI ethics assessments will assist in ensuring that it falls within the scope of the EU’s and UK’s ethical frameworks.



WhatsApp conversations may be deemed valid contract in Spain

Using WhatsApp’s blue ticks to sign contracts? WhatsApp chats have been deemed a verbal contract between the parties by a Court in Vigo (Galicia, Spain).

WhatsApp conversations may form a legally binding contract between the parties. The ruling arose from a claim for unpaid rent: the landlords sued the tenant, and the Court accepted the WhatsApp messages as the valid contract governing the legal relationship between them. The Court took into account the fact that WhatsApp was the means the parties used to agree all the terms of the tenancy and to share the relevant documents needed to formalise it.

WhatsApp messages as contract and evidence in Court

Article 1278 of the Spanish Civil Code states that contracts “will be legally binding for the parties regardless of their verbal or written nature, as long as the essential elements for their validity are met” (namely: consent, object and cause).

As for the use of WhatsApp messages as valid evidence in Court, some requirements do apply, such as the need for expert reports to verify the origin of the communication, the parties’ identities and the integrity of the content. Providing the password so that the Court can access the relevant accounts, granting access to the device itself, or obtaining acknowledgement from each party of the existence and truthfulness of the conversation have been accepted by some Courts as sufficient evidence.
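The content-integrity problem that expert reports address can be illustrated with a toy Python sketch using a cryptographic hash. This is not how WhatsApp or Spanish courts actually verify evidence; it simply shows the underlying idea that a fixed digest of an exported conversation makes later tampering detectable.

```python
import hashlib

def fingerprint(messages):
    """Return a SHA-256 digest of an exported conversation.

    If an expert hashes the same export later and obtains the same
    digest, any alteration of the content or order would be detectable.
    """
    h = hashlib.sha256()
    for msg in messages:
        h.update(msg.encode("utf-8"))
        h.update(b"\x00")  # separator so message boundaries matter
    return h.hexdigest()

chat = ["2019-05-01 Landlord: rent is 500/month",
        "2019-05-01 Tenant: agreed"]
assert fingerprint(chat) == fingerprint(list(chat))  # deterministic
assert fingerprint(chat) != fingerprint(chat[::-1])  # order-sensitive
```

In practice, of course, integrity is only one element; experts must also establish the origin of the messages and the identity of the senders.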

WhatsApp, smart contracts and blockchain

In light of this ruling, one may wonder whether WhatsApp conversations could become one of the “blocks” of blockchain technology and form part of smart contracts in the future. To achieve this, all the messages would need to be ordered and accessible, perhaps with no time limit, for verification purposes. This hypothetical but plausible scenario would raise several privacy concerns, because WhatsApp messages may be deemed personal data, so the GDPR and other pieces of legislation, such as those concerning AI, may apply.
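The “blocks” idea can be sketched in a few lines of Python: each message is linked to the previous one by including the prior block’s hash, which is the basic structure behind blockchain-style ledgers. This is a purely conceptual toy, not a claim about how WhatsApp or any smart-contract platform works.

```python
import hashlib
import json

def chain_messages(messages):
    """Build a toy hash chain: each block stores its message text, the
    previous block's hash, and its own hash. Tampering with any earlier
    message invalidates every later hash."""
    chain, prev_hash = [], "0" * 64
    for text in messages:
        block = {"text": text, "prev": prev_hash}
        prev_hash = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        chain.append({**block, "hash": prev_hash})
    return chain

chain = chain_messages(["offer: 500/month", "accepted"])
# Each block points back at the hash of the one before it
assert chain[1]["prev"] == chain[0]["hash"]
```

Storing personal messages immutably and indefinitely in such a structure is exactly what would collide with GDPR principles like storage limitation and the right to erasure.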

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.



Combining AI, IoT and Big Data: Can Regulation Cope?

In last week’s vlog we delved into the regulation of AI, IoT and Big Data when used together.

AI, IoT and Big Data: these technologies and digital concepts undoubtedly play a major role in today’s highly connected era. As they become ever more integral to our day-to-day lives, several regulatory measures, including the GDPR and the DPA 2018, have been implemented to protect individuals’ personal data, privacy and associated rights. But how should these technologies be regulated when they all work together?

First of all we need to understand what each of these concepts means:

Artificial Intelligence (AI): John McCarthy, who coined the term in 1956, defined it as “the science and engineering of making intelligent machines.” A more modern definition explains AI as “the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction”.

Internet of Things (IoT): IoT is understood as “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction”.

Big Data: a popular definition of Big Data, provided by the Gartner IT glossary, is “…high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.”

As you can see, all of them involve data processing, so it is clear that they should comply with the GDPR where the information concerned is personal data, or with the Free Flow of Non-Personal Data Regulation where it is not. But are there any other mandatory requirements laid down by law, beyond data protection and privacy, that these technologies should meet? There are, but the challenge is that each of them has different features and needs.

For example, AI raises many ethical concerns, as discussed in some of our previous videos, and IoT depends on 5G and telecoms regulation, while Big Data may face challenges from both AI and IoT.

That said, how can regulation cope with all these technologies when they are applied together, e.g. in a single project?

We suggest several scenarios for how this could be addressed:

An obligation on businesses, universities, public bodies, etc. to include a legal ICT professional in the team before carrying out any project that involves AI, IoT, Big Data or similar technologies; a specific certificate for these professionals could be issued by accreditation bodies.
Tech-specific regulation and legislation that brings together most ICT risks and challenges, with mandatory minimum requirements on how they should be addressed.
An independent EU legal-tech body that issues guidance and codes of conduct on the main ICT issues and challenges.

Even though regulation and codes of conduct may help to unify standards, due diligence and commitment from the managers and teams involved in a project remain essential to ensure appropriate safeguards, regardless of the specific externally imposed requirements.


If you need advice on your AI product, Aphaia offers both AI ethics assessments and Data Protection Impact Assessments.