Brain implants and AI ethics

The risks derived from the use of AI in brain implants may trigger the need for AI Ethics regulation soon. 

In our last vlog we talked about the interaction between brain implants and GDPR. In this second part we will explore how AI Ethics applies to brain implants. 

How is AI used in brain implants?

AI is the bridge between brain implants and the people who work with them: it translates the brain's electrical signals into a human-understandable language.

The process is normally twofold: first, neural data is gathered from brain signals and translated into numbers. Then, the numbers are decoded and transformed into natural language (English, Spanish or any other, as programmed). The procedure also works in the opposite direction.
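To make the two-stage idea concrete, here is a deliberately toy sketch in Python. It is purely illustrative: real brain-computer interfaces use trained neural decoders, not hand-written lookup tables, and the feature, codebook and sample values below are all invented for the example.

```python
# Illustrative sketch only: a toy two-stage "decoder" showing the idea of
# turning raw signal samples into numbers, then numbers into words.

def featurise(samples):
    """Stage 1: reduce a window of raw signal samples to a single number
    (here, simply the mean amplitude)."""
    return sum(samples) / len(samples)

# Hypothetical codebook mapping numeric features to output tokens.
CODEBOOK = [(0.0, "rest"), (0.5, "left"), (1.0, "right")]

def decode(feature):
    """Stage 2: map the numeric feature to the nearest codebook token."""
    return min(CODEBOOK, key=lambda entry: abs(entry[0] - feature))[1]

window = [0.9, 1.1, 1.0, 0.8]      # pretend electrode readings
print(decode(featurise(window)))   # prints "right"
```

The reverse direction mentioned above (language back into stimulation patterns) would invert the same two stages.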

Why is AI Ethics important in brain implants?

Depending on how much of a sci-fi fan you are, the words “brain implants” may bring certain film and television titles to mind. If, on the other hand, you are a news rather than a Netflix enthusiast, you are probably already thinking of several recent medical studies in which brain implants have been used. The reality reconciles both perspectives.

Brain implants have been part of medical and scientific research for more than 150 years: in 1870, Eduard Hitzig and Gustav Fritsch performed experiments on dogs in which they were able to produce movement through electrical stimulation of specific parts of the cerebral cortex. Significant progress has been made since then, and brain implants are now being tested for several purposes, inter alia, to monitor abnormal brain activity and trigger assistive measures.

Military applications have also been explored by some institutions. For example, DARPA has been working since 2006 on the development of a neural implant to remotely control the movement of sharks, which could potentially be exploited to provide feedback on enemy ship movements or underwater explosives.

The latest development in this field comes from Neuralink, the neurotechnology company founded by Elon Musk, whose website claims that the app “would allow you to control your iOS device, keyboard and mouse directly with the activity of your brain, just by thinking about it”. The aim, in other words, is a full brain interface that would let people connect to their devices using only their minds.

While all these applications may have a huge positive impact on technological change, they may also significantly affect individuals’ rights and freedoms, so it is paramount to have regulation in place that establishes limits. While data protection may play an important role in this field at the moment, it may not be enough once further progress is made. AI ethics may fill this gap.

How can AI Ethics help?

AI Ethics may provide a framework to cover the evolving use of computer hardware and software to enhance or restore human capabilities, preventing this technology from posing high risks to the rights and freedoms of individuals. The fact that something is technically feasible does not mean that it should be done.

A lack of appropriate regulation may even slow down research and implementation, as some applications would be in direct conflict with several human rights and pre-existing regulations.

Human agency and oversight

Based on the AI Ethics principles set out by the AI-HLEG in their “Ethics Guidelines for Trustworthy AI”, human agency and oversight is key for brain implants, although it may be tricky to implement in practice.

Human agency and oversight means that humans can make informed decisions when AI is applied. While this may be quite straightforward in some medical uses, where a practitioner analyses the brain activity data gathered by the implants, it is not in other scenarios. The merging of biological and artificial intelligence, together with the fact that we cannot really know what information, down to every single piece of data, may be gathered from brain activity, makes it difficult to claim that a decision made by the very person carrying the implant is actually “informed”.

Technical robustness and safety

Technical robustness and safety is another principle that should be prioritised when developing brain implants. The extremely sensitive nature of the data involved and the direct connection with the brain make it necessary for the technology to be built as close to impenetrable as possible. If a breach compromising biometric data would already have catastrophic results for the individuals affected, the harm derived from an attack on people’s brains would be simply immeasurable.

Privacy and data governance

According to the principle of privacy and data governance, full respect for privacy and data protection should be ensured, including data governance mechanisms, the quality and integrity of the data and legitimate access to it. In our view, the lawful bases for processing laid down in the GDPR should be redefined, as their present meaning may not make sense in a context governed by brain implants. In relation to the human agency and oversight principle, and in line with our previous article, an individual could not give consent in the GDPR sense where data is gathered by brain implants, because the information collected by these means cannot initially be identified or controlled. The process should be reframed to allow consent to be managed in two stages, the second one after the collection of the data but before any further processing takes place. Considering the evolving nature of the technology and the early stage of current knowledge, retention periods should also be approached differently, as further investigation might reveal that additional data, for which there is no lawful basis for processing, can be derived from data lawfully held.
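The two-stage consent flow suggested above can be sketched in code. This is our own minimal illustration of the idea, not a GDPR-mandated mechanism: the class name, the per-category granularity and the method names are all assumptions made for the example.

```python
# A minimal sketch of a two-stage consent record: stage one at collection,
# stage two (per data category) once the subject can see what was collected.

class TwoStageConsent:
    def __init__(self):
        self.collection_consent = False
        self.processing_consent = {}  # category -> granted after collection

    def grant_collection(self):
        """Stage 1: consent to gathering, before the data is identifiable."""
        self.collection_consent = True

    def grant_processing(self, category):
        """Stage 2: only meaningful once collected data has been identified."""
        if not self.collection_consent:
            raise ValueError("no collection consent on record")
        self.processing_consent[category] = True

    def may_process(self, category):
        """Further processing requires both stages for that category."""
        return self.collection_consent and self.processing_consent.get(category, False)

record = TwoStageConsent()
record.grant_collection()
print(record.may_process("motor-intent"))  # False: second stage not yet given
record.grant_processing("motor-intent")
print(record.may_process("motor-intent"))  # True
```

The design choice worth noting is that the second gate defaults to closed: data gathered under stage one cannot be further processed until the subject explicitly opts in per category.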

Cambridge Analytica is a good example of the undesired results that can come from massive data processing without adequate safeguards, measures and limitations in place. That case concerned advertising campaigns that influenced political behaviour and election results, which was serious enough in itself. A similar leak in a context where brain implants are used, however, would involve incalculable damage to the rights and freedoms of individuals.


Transparency

Transparency is directly linked to the principle of privacy and data governance, as the provision of full information about the processing is one of the main pillars on which the GDPR is built. While the technology is still in a development phase, this requirement cannot be completely met, because there will always be gaps in what is known about the system’s capabilities and limitations. Relevant requirements should be put in place to set the minimum information that must be delivered to data subjects when brain implants are used.

Diversity, non-discrimination and fairness

Diversity, non-discrimination and fairness should be observed to avoid any type of unfair bias and marginalisation of vulnerable groups. This principle is twofold: on the one hand, it reaches the design of the brain implants and the AI governing them, which should be trained to prevent human biases, the AI’s own biases and biases resulting from the interaction between the human (brain) and the AI (implant). On the other hand, it should be ensured that access barriers to this technology do not create severe social inequalities.

Societal and environmental well-being

The social and societal impact of brain implants should be carefully considered. This technology may help people who have a medical condition, but its cost should be reasonably affordable for everyone. The risk grows when brain implants are used not only for medical treatment but also for human enhancement, as this would give some people improved abilities, to the detriment of those who could not pay for them.


Accountability

Given the critical nature of brain implant applications, auditability is key to enable the assessment of algorithms, data and design processes. Adequate and accessible redress should also be ensured, for instance so that all individuals using a particular version of a system that has proved vulnerable can be contacted.


In the table below we combine our previous article with this one to provide some practical insights into brain implant applications and the AI Ethics principles and GDPR requirements they would be linked to. For this purpose, we use the following example: “Treating depression with brain stimulation”. Each row explains how an AI Ethics principle and the relevant GDPR requirement or principle would apply.

Brain implants application | AI Ethics principle | GDPR requirement/principle
Regular monitoring by medical practitioners to decide whether the treatment is effective. | Human agency and oversight | Human intervention
Unlawful access to and manipulation of the brain implants would have a direct negative impact on the patient’s health and would reveal special categories of data. | Technical robustness and safety | Technical and organisational measures
Processing is necessary for the provision of health treatment. | Privacy and data governance | Lawful basis for processing
Full information about the treatment should be delivered before proceeding. | Transparency | Explanation
No one should be excluded from this treatment and offered a less effective one merely because they cannot afford it. | Diversity, non-discrimination and fairness | No discriminatory results
Where the treatment negatively affects the patient’s behaviour in a way that may put other people at risk, relevant measures and safeguards should be applied. | Societal and environmental well-being | Statistical and research purposes exceptions
The hospital or the State should be responsible for any damage caused by this use of brain implants. | Accountability | Accountability


Check out our vlog exploring Brain Implants and AI Ethics:

You can learn more about AI ethics and regulation on our YouTube channel.


Do you have questions about how AI works in brain implants and the associated risks? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Second European AI Alliance Assembly

Second European AI Alliance Assembly overview

The second European AI Alliance Assembly was hosted online on Friday 9th October due to the COVID-19 pandemic.

The second edition of the European AI Alliance Assembly took place on Friday 9th October, in a full-day event hosted online due to the COVID-19 pandemic. The Assembly brought together more than 1,400 viewers, who followed the sessions live and were given the option to submit questions to the panellists.

The event

This year’s edition had a particular focus on the European initiative to build an Ecosystem of Excellence and Trust in Artificial Intelligence. The sessions were broken into plenaries, parallel workshops and breakout sessions. 

As a member of the European AI Alliance, Aphaia was pleased to join the event and enjoy some of the sessions addressing topics crucial to the development and implementation of AI in Europe, such as “Requirements for trustworthy AI” and “AI and liability”.

Requirements for trustworthy AI

The speakers shared their views on the risks posed by AI systems and the approaches that should be taken to enable the widespread use of AI in society.

Hugues Bersini, Computer Science Professor at the Free University of Brussels, argued that there is a cost function whenever AI is used, and that optimising it is the actual goal: “Whenever you can align social cost with individual cost there are no real issues”.

Haydn Belfield, Academic Project Manager at CSER, Cambridge University, claimed that the high risks AI systems may pose to people’s life chances and fundamental rights demand a regulatory framework with mandatory requirements that should, at the same time, be flexible, adaptable and practical.

For Francesca Rossi, IBM Fellow and IBM AI Ethics Global Leader, transparency and explainability are key. She explained that AI should be used to support the decision-making capabilities of human beings, who have to make informed decisions. That purpose cannot be achieved if AI systems are a black box.

In response to audience questions, the speakers discussed how many risk levels would be necessary for AI. The main conclusion was that, considering that defining high risk is already a challenge in itself, having two risk levels (high risk and not high risk) would be a good start on which further developments could be built.

The speakers briefly discussed each of the requirements highlighted by the AI-HLEG for trustworthy AI, namely human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

In our view, the discussions on AI and biases and on human oversight were especially relevant:

AI and biases

Paul Lukowicz, Scientific Director and Head of the Research Unit “Embedded Intelligence” at the German Research Center for Artificial Intelligence, defined machine learning as giving the computer methods with which it can extract procedures and information from data, and stated that it is at the core of AI’s current success. The challenge is that much of the bias and discrimination in AI systems comes from training data that itself contains bias and discrimination. It is not that developers somehow fail by providing unrepresentative data: the data they use is representative, and because discrimination and bias exist in our everyday life, this is what the systems learn and amplify. Linked to this issue, he considers uncertainty another pitfall, as no data set covers the entire world: “We always have a level of uncertainty in life, so we have it in AI systems”.
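Lukowicz's point can be illustrated with a deliberately simple "model": one that just learns the majority outcome per group will faithfully reproduce any bias already present in its (perfectly representative) training data. The groups, labels and counts below are invented for the illustration.

```python
# Toy demonstration: bias in, bias out. The "model" learns nothing but the
# most common historical label for each group, so a skewed history yields
# skewed predictions without any flaw in the learning procedure itself.
from collections import Counter

training_data = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

def fit_majority(data):
    """Learn the most common label for each group."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = fit_majority(training_data)
print(model)  # {'group_a': 'hired', 'group_b': 'rejected'} - learned, not invented
```

The data is "representative" of the (biased) history it was drawn from, which is exactly the failure mode Lukowicz describes.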

Human oversight

Aimee Van Wynsberghe, Associate Professor in Ethics and Technology at TU Delft, raised some obstacles to human oversight:

  1. She challenged the assumption that the output of an AI system is not valid until it has been reviewed and validated by a human. In her view, this can be quite difficult because several biases threaten human autonomy: automation bias, simulation bias and confirmation bias. Humans have a tendency to favour suggestions from automated decision-making systems and to ignore contradictory information produced without automation. The other challenge in this regard is that having an AI system create output which a human then reviews and validates is very time- and resource-consuming.
  2. As for the alternative, where the outputs of the AI system become immediately effective provided human review is ensured afterwards, Aimee pointed out the issue of allocating responsibility for ensuring human intervention: “Who is going to ensure that human intervention happens? The company? Is it the customer, who would otherwise approach the company? Is it fair to assume that customers would have the time, the knowledge and the ability to do this?”
  3. Monitoring the AI system while in operation, with the ability to intervene in real time and deactivate it, would also be difficult because of human psychology: “there is a lack of situational awareness that does not allow for the ability to take over”.

AI and liability

Corinna Schulze, Director of EU Government Affairs at SAP; Marco Bona, PEOPIL’s General Board Member for Italy and International Personal Injury Expert; Bernhard Koch, Professor of Civil and Comparative Law and Member of the New Technologies Formation of the EU Expert Group on liability for new technologies; Jean-Sébastien Borghetti, Private Law Professor at Université Paris II Panthéon-Assas; and Dirk Staudenmaier, Head of Unit, Contract Law, in the Department of Justice of the European Commission, discussed the most important shortcomings around AI liability in connection with the Product Liability Directive.

The following issues of the Directive were pointed out by the experts:

  • Time limit of 10 years: in the view of most of the speakers this may be an issue because the Directive concerns producers only, which could be difficult wherever operators, users, owners and other stakeholders are involved. Furthermore, 10 years is fine for traditional products but may not work, in terms of protecting victims, for some AI artefacts and systems.
  • Scope: the Directive concerns the protection of consumers but does not address the protection of victims. Consumers and victims sometimes overlap, but not always.
  • Notion of defect: this may cause trouble around the distinction between products and services. The Directive covers only products, not services, which may raise concerns in relation to the internet of things and software.


The Commission has made available links to the sessions for all those who did not manage to attend the event or who would like to watch one or more sessions again.

Do you need assistance with AI Ethics? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments.

Brain Implants, GDPR and AI

Brain implants may be the next challenge for AI and the GDPR. Our guest blogger Lahiru Bulathwela, an LLM student in innovation and technology at the University of Edinburgh, explores why and how.

What do brain implants, the GDPR and AI share? Elon Musk recently demonstrated his brain implant on Gertrude the pig, reflecting a growing interest in mainstream consumer applications. Neural implants aren’t a recent development: they have been used by researchers and clinicians for several years, and successful treatments have helped patients suffering from clinical depression and Parkinson’s disease, and restored limited movement to paralysed individuals. The excitement about their development through companies like Neuralink is palpable, and the potential for neural implants to treat individuals, and in the future enhance them, is certainly an interesting prospect. However, as with any innovative technology, the excitement of development often overshadows concerns about its potential. For every person, the brain represents the sum total of their individuality and identity, so concerns surrounding neural implants are particularly sensitive.

Many potential obstacles face the development of neural implants, ranging from technological to physiological limitations. This blog explores the issues relating to data protection: data protection is central to our information-dominated society, and neural implants present new challenges because the information they use is arguably the most sensitive of data.

How do they work?

In the simplest terms, a neural implant is an invasive or non-invasive device that can interact with an individual’s brain. Certain devices can map an individual’s native neural activity, and some can stimulate the neurons in a brain to alter functioning. While the technology is advanced, there are limits to its efficacy, primarily in our knowledge of native neural circuits. Currently, we can map an individual’s brain and record its neural activity, but we lack the knowledge to interpret that information in the meaningful way that would be expected of a consumer device. While limited at the moment, it is a question of when, rather than if, we will increase our understanding of native neural networks.

GDPR and Neural implants

The GDPR is the most recent iteration of data protection regulation in the EU, and it sets a high regulatory standard. It is the most progressive data protection regulation to date, but, like other legislative tools, its development and implementation are a reaction to the rapidly evolving use of information in society. Although the GDPR does not mention ‘neural information’ specifically within the definition of personal data in Art. 4(1), a person may be identified from such information, and it is therefore personal data:

“…’personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;”

Factors specific to physical, physiological or mental identity could potentially cover neural information, so any measured neural condition is personal data. Information gathered from a person’s brain is the most sensitive of personal information, especially when one considers whether it is even possible to consent to such collection.

When we consent to our information being collected or shared, we have a degree of control of what information is made available. A data controller must justify the lawful collection of that data (Art.6 and Art.9), and ensure that consent is freely given, and not conditional for the provision of service or performance of a contract where the data is not necessary for the performance of that contract (Art.7(4)).

An individual can consent to sharing their personal medical history with an insurer without having to share information on their shopping habits or relationships. We cannot, however, limit the information within our brain, and neural activity cannot be hidden from a device that records electrical stimulation in the brain. This lack of control over what information our brain shares is a significant issue, even though our ability to interpret such information is currently limited.

Future regulatory measures?

The GDPR is the most progressive regulatory instrument implemented anywhere in the world, yet it lacks the depth needed to deal with the ever-changing landscape of information gathering. The next iteration of data protection regulation will need the foresight to protect individuals effectively from misappropriation of their information: foresight to understand that overzealous regulation can hamper innovation, while ineffective regulation could undermine consumer confidence in neural implants.

When Elon Musk described his Neuralink implant as a “Fitbit in your skull”, he minimised how invasive neural implants would be to our privacy. While it could be said that individuals are more comfortable sharing their information in public fora such as social media, there is still a choice about what information to share. The lack of choice over what information you provide through a neural implant calls for robust regulation; I predict that such regulation should include a requirement to use technology to enforce it.

Regulation through technology?

Governance using technology is a potential alternative to standard legislative tools. We have seen technologies such as blockchain use cryptography to decentralise record keeping, ensuring that information held on record is only accessible to verified individuals and that no single person or entity can access or control all the information.

More specific to neural implants would be the use of machine learning to protect an individual’s privacy. Machine learning is already being used in clinical environments to assist with brain mapping, and it may in future allow us to better understand our native neural networks. Using machine learning to regulate what information a neural implant shares may seem far-fetched at this moment, but I would argue that this is due to a lack of understanding of our brains rather than an algorithmic limitation. More research is required to understand precisely what can be interpreted from our neural activity, but we have the technological capability to create algorithms that could learn what information should and should not be shared.

Neural implants at present lack the sophistication to create any significant problems to an individual’s privacy, but they offer an opportunity for legislators and technologists to create proactive measures that would effectively protect individuals and build consumer confidence.

Check out our vlog exploring Brain Implants, GDPR and AI:

You can learn more about AI ethics and regulation on our YouTube channel.

Aphaia provides GDPR, Data Protection Act 2018 and ePrivacy adaptation consultancy services, including data protection impact assessments, CCPA compliance and Data Protection Officer outsourcing.

Amazon launches new technology

Amazon launches new technology which scans palms for identification and payment.

Amazon launches new technology in two of its physical stores, allowing contact-free identification and payment by scanning an individual’s palm.


Amazon is on the verge of launching a new biometric payment system that scans an image of customers’ palms, according to this BBC article. The new method is an attempt at a contactless replacement for traditional membership and physical loyalty cards. Its accuracy rests on the unique identifiers in the vein patterns of individuals’ hands, which remain fairly inconspicuous to the naked eye. The scanners require the customer to hold their palm a few inches away, making this a viable contactless form of ID and payment simultaneously. The system is currently being tested at two Amazon stores. Physical bills and data will be stored locally at the stores rather than being sent to Amazon data centres, and customers will be able to delete their data via the website.


Amazon developers think this technology is safer and more secure than other methods of biometric identification. 


The application seems to be as accurate and effective as fingerprints, but the identifiers are not as easily visible to the human eye and are therefore presumably more difficult to replicate. Amazon developers claim it is more secure than other forms of biometrics, which is especially relevant after issues of racial bias were shown in the company’s facial recognition software, whose use has currently been suspended. Recently, we published an article on the National Biometric Information Privacy Act, which was introduced into the US Congress. Bills like these attempt to curtail the negative effects or security breaches that may arise from using biometric scanners and similar technology.


While this technology is convenient, some point to possible data security risks.


In the midst of the pandemic, the introduction of a new payment method requiring less human interaction and no physical contact seems like a much-needed innovation. However, some groups advocate against biometric forms of ID and payment because of the privacy issues associated with biometric data being stored by governments or large corporations. Silkie Carlo, director of the privacy rights group Big Brother Watch, says that the new technology is invasive, unnecessary and provides just another outlet for Amazon to cultivate personal data freely despite privacy laws and agreements.


So far, the convenience of biometrics has not been overshadowed by the possible invasion of privacy it risks: as a direct consequence, the implementation of these scanners in many other buildings is being discussed, should the initial trial at the Seattle locations go well. The technology is part of Amazon’s vision of a supermarket without human staff, where everything in the store is tracked by AI and machines, and payment can be completed using the new palm scanner for a fully contactless experience.


What does the GDPR say about this type of data processing?


The scans picked up by these machines constitute biometric data, the processing of which is prohibited under the GDPR unless certain conditions are met; absent at least one of them, the processing is unlawful. Article 9 of the GDPR requires that at least one of the following criteria be met in order to process biometric data:


  1. The data subject has given explicit consent to the processing of that personal data for one or more specified purposes, except where Union or Member State law provides that the prohibition may not be lifted by the data subject.
  2. Processing the biometric data is necessary for the purposes of fulfilling obligations or exercising specific rights of the controller or the data subject in the field of employment, social security or social protection law.
  3. The processing is necessary to protect the vital interests of the data subject or of another natural person where the data subject is physically or legally incapable of giving consent.
  4. The processing is carried out in the course of its legitimate activities with appropriate safeguards by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim, on condition that the processing relates only to members or former members of the body, or to persons in regular contact with it in connection with its purposes.
  5. The processing relates to personal data which are manifestly made public by the data subject.
  6. The processing is necessary for the establishment, exercise or defence of legal claims.
  7. The processing is necessary for reasons of substantial public interest, including in the area of public health.
  8. The processing is necessary for the purposes of preventive or occupational medicine.
  9. The processing is necessary for archiving purposes in the public interest, or for scientific or historical research purposes or statistical purposes.
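The "prohibited unless at least one condition holds" logic above can be sketched as a simple check. This is an illustrative simplification only (and certainly not legal advice): the condition identifiers are our own shorthand for the list above, and a real Article 9 assessment is a legal exercise, not a script.

```python
# Simplified sketch of the Article 9 gate: biometric processing is
# prohibited by default and permitted only if at least one recognised
# condition applies. Identifiers are our own paraphrases of the list above.

ARTICLE_9_CONDITIONS = {
    "explicit_consent",
    "employment_social_security_law",
    "vital_interests",
    "not_for_profit_members",
    "manifestly_made_public",
    "legal_claims",
    "substantial_public_interest",
    "occupational_medicine",
    "archiving_research_statistics",
}

def processing_permitted(conditions_met):
    """Return True only if at least one recognised condition is claimed."""
    return any(c in ARTICLE_9_CONDITIONS for c in conditions_met)

print(processing_permitted([]))                    # False: prohibited by default
print(processing_permitted(["explicit_consent"]))  # True
```

Note the default-deny design: an empty or unrecognised claim leaves the prohibition in place, mirroring the structure of Article 9 itself.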


For more clarity on what is classified as biometric data, as well as other aspects of this technology, check out our post: 14 common misconceptions on biometric identification and authentication debunked.

Does your company process biometric identification data? Aphaia provides a number of data protection compliance services, including in relation to biometric data: data protection impact assessments, Data Protection Officer outsourcing, and EU AI ethics assessments. Get in touch today to find out more.