
Brain implants and AI ethics

The risks arising from the use of AI in brain implants may soon trigger the need for AI Ethics regulation. 

In our last vlog we talked about the interaction between brain implants and GDPR. In this second part we will explore how AI Ethics applies to brain implants. 

How is AI used in brain implants?

AI is the bridge between brain implants and the people who work with them, as it helps translate the brain’s electrical signals into a language humans can understand.

The process normally has two stages: first, neural data is gathered from brain signals and translated into numbers. Then, those numbers are decoded and transformed into natural language, which can be English, Spanish or any other language, as programmed. The procedure also works in the opposite direction.
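
As a purely illustrative sketch of this two-stage translation (not a description of any real implant’s software; the feature choice, model and vocabulary below are all assumptions), the decoding side can be pictured as a small Python pipeline: raw electrode samples are reduced to numbers, and a trained model maps those numbers to words.

```python
# Minimal, hypothetical sketch of "brain signals -> numbers -> natural language".
import numpy as np
from sklearn.linear_model import LogisticRegression

def signals_to_numbers(raw_samples: np.ndarray) -> np.ndarray:
    """Stage 1: turn raw electrode voltages into numerical features
    (here, simply the mean power per channel)."""
    return (raw_samples ** 2).mean(axis=-1)

# Stage 2: a model trained to map those numbers to words in a chosen language.
vocabulary = ["yes", "no", "help"]           # hypothetical output vocabulary
decoder = LogisticRegression(max_iter=1000)

# Hypothetical training data: one feature vector per recorded trial.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(90, 16))   # 90 trials, 16 electrode channels
train_labels = rng.integers(0, len(vocabulary), size=90)
decoder.fit(train_features, train_labels)

# Decoding a new recording: the signals become numbers, then a word.
new_recording = rng.normal(size=(16, 256))   # 16 channels, 256 samples each
features = signals_to_numbers(new_recording)
word = vocabulary[decoder.predict(features.reshape(1, -1))[0]]
print(f"Decoded output: {word}")
```

The reverse direction mentioned above, from language back to stimulation, would be a separate, analogous mapping and is omitted here for brevity.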

Why is AI Ethics important in brain implants?

Depending on how much of a sci-fi fan you are, the words “brain implants” may bring to mind the titles of certain films and television series. If, on the other hand, you are more of a news than a Netflix enthusiast, you are probably already thinking about several recent medical research projects in which brain implants have been used. Reality reconciles both views.

Brain implants have been part of medical and scientific research for more than 150 years: in 1870, Eduard Hitzig and Gustav Fritsch performed experiments on dogs in which they were able to produce movement through electrical stimulation of specific parts of the cerebral cortex. Significant progress has been made since then, and brain implants are now being tested for several purposes, inter alia, to monitor abnormal brain activity and trigger assistive measures.

Military applications have also been explored by some institutions. For example, DARPA has been working since 2006 on the development of a neural implant to remotely control the movement of sharks, which could potentially be exploited to provide data feedback on enemy ship movements or underwater explosives.

The latest development in this field comes from Neuralink, the neurotechnology company founded by Elon Musk, whose website claims that its app “would allow you to control your iOS device, keyboard and mouse directly with the activity of your brain, just by thinking about it”. In other words, the aim is to build a full brain interface that would enable people to connect with their devices using only their minds.

While all these applications may have a hugely positive impact on technological change, they may also significantly affect individuals’ rights and freedoms, so it is paramount to have regulation in place that establishes limits. Whereas data protection may be playing an important role in this field at the moment, it may not be enough once further progress is made. AI Ethics may make up for this gap.

How can AI Ethics help?

AI Ethics may provide a framework to cover the evolving use of computer hardware and software to enhance or restore human capabilities, preventing this technology from posing high risks to the rights and freedoms of individuals. The fact that something is technically feasible does not mean that it should be done.

The lack of appropriate regulation may even slow down the research and implementation process, as some applications would be in direct conflict with several human rights and pre-existing regulations.

Human agency and oversight

Based on the AI Ethics principles pointed out by the AI-HLEG in their “Ethics Guidelines for Trustworthy AI”, human agency and oversight is key for brain implants, although it may be tricky to implement in practice. 

Human agency and oversight means that humans are able to make informed decisions when AI is applied. While this may be quite straightforward in some medical uses, where the practitioner analyses the brain activity data gathered by the implants, it is not in other scenarios. The merging of biological and artificial intelligence, together with the fact that we cannot really know, or at least be aware of, every single piece of information that may be gathered from brain activity, makes it difficult to claim that a decision made by the very person carrying the implant in their brain is actually “informed”. 

Technical robustness and safety

Technical robustness and safety is another principle that should be prioritised when developing brain implants. The highly sensitive nature of the data involved and the direct connection with the brain make it necessary for the technology to be built to be as close to impenetrable as possible. If a breach compromising biometric data would already have catastrophic results for the individuals affected, the harm derived from an attack on people’s brains would be simply immeasurable.

Privacy and data governance

According to the principle of privacy and data governance, full respect for privacy and data protection should be ensured, including data governance mechanisms, the quality and integrity of the data and legitimate access to data. In our view, the lawful bases for processing laid down in the GDPR should be redefined, as their current meaning may not make sense in a context governed by brain implants. In line with the human agency and oversight principle and with our previous article, an individual could not give consent in the GDPR sense where data is gathered by brain implants, because the information collected by these means cannot initially be identified or controlled. The process should be reframed so that consent is managed in two stages, the second one after the data has been collected but before any further processing takes place (see the sketch below). Given the evolving nature of the technology and the early stage of the knowledge currently available, retention periods should also be approached differently, as further investigation might reveal that additional data, for which there might be no lawful basis for processing, can be derived from the data lawfully held.
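
As a purely hypothetical illustration of such a two-stage consent flow (the class, field and category names below are our own invention, not GDPR terminology or any existing library), data gathered by an implant could be held in a quarantine state until the data subject confirms consent for the categories of data that were actually collected:

```python
# Hypothetical sketch of two-stage consent for implant-gathered data.
# Names and structure are illustrative only.
from dataclasses import dataclass


@dataclass
class NeuralRecord:
    subject_id: str
    data_categories: list[str]        # categories identified after collection
    stage_one_consent: bool = False   # consent to collect, given beforehand
    stage_two_consent: bool = False   # consent to process, given after review

    def may_be_processed(self) -> bool:
        # Further processing is allowed only once both stages are in place.
        return self.stage_one_consent and self.stage_two_consent


record = NeuralRecord(
    subject_id="patient-001",
    data_categories=["motor activity", "mood indicators"],
    stage_one_consent=True,
)
assert not record.may_be_processed()   # still quarantined after collection

# Second stage: the data subject reviews the identified categories and confirms.
record.stage_two_consent = True
assert record.may_be_processed()
```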

Cambridge Analytica may be a good example of the undesired results that can come from massive data processing without adequate safeguards, measures and limitations in place. That case involved advertising campaigns that influenced political behaviour and election results, which was serious enough in itself. A similar leak in a context where brain implants are used, however, would cause incalculable damage to the rights and freedoms of individuals. 

Transparency

Transparency is directly linked to the principle of privacy and data governance, as the provision of full information about the processing is one of the main pillars on which the GDPR has been built. While the technology is still in a development phase, this requirement cannot be completely met, because there will always be gaps in the system’s capabilities and limitations. Relevant requirements should be put in place to set out the minimum information that should be provided to data subjects when brain implants are used.

Diversity, non-discrimination and fairness

Diversity, non-discrimination and fairness should be observed to avoid any type of unfair bias and the marginalisation of vulnerable groups. This principle is twofold: on the one hand, it covers the design of the brain implants and the AI governing them, which should be trained to prevent human biases, their own AI biases and the biases resulting from the interaction between human (brain) and AI (implant). On the other hand, it should be ensured that the access barriers to this technology do not create severe social inequalities.

Societal and environmental well-being

The social and societal impact of brain implants should be carefully considered. This technology may help people who have a medical condition, but its cost should be reasonably affordable for everyone. The risk becomes greater where brain implants are used not only for medical treatment but also for human enhancement, as this would provide some people with improved skills to the detriment of those who could not pay for them.

Accountability

Taking into account the critical nature of brain implant applications, auditability is key to enabling the assessment of algorithms, data and design processes. Adequate and accessible redress should also be ensured, for instance the ability to contact all individuals using a particular version of the system that has proved to be exposed to vulnerabilities.

Examples

In the table below we combine our previous article with this one to provide some practical insight into brain implant applications and the AI Ethics principles and GDPR requirements they would be linked to. For this purpose, we use the following case as the example: “Treating depression with brain stimulation”. Each row explains how an AI Ethics principle and the relevant GDPR requirement or principle would apply.

| Brain implants application | AI Ethics principle | GDPR requirement/principle |
| --- | --- | --- |
| Regular monitoring by medical practitioners to decide whether the treatment is effective. | Human agency and oversight | Human intervention |
| Unlawful access to and manipulation of the brain implants would have a direct negative impact on the patient’s health, and it would also reveal special categories of data. | Technical robustness and safety | Technical and organisational measures |
| Processing is necessary for the provision of health treatment. | Privacy and data governance | Lawful basis for processing |
| Full information about the treatment should be delivered before proceeding. | Transparency | Explanation |
| No one should be excluded from this treatment and offered a less efficient one just because they cannot afford it. | Diversity, non-discrimination and fairness | No discriminatory results |
| Where the treatment negatively affects the behaviour of the patient in a way that may be risky for other people, relevant measures and safeguards should be applied. | Societal and environmental well-being | Statistical and research purposes exceptions |
| The hospital or the State should be responsible for any damages caused by this use of brain implants. | Accountability | Accountability |

 

Check out our vlog exploring Brain Implants and AI Ethics:

You can learn more about AI ethics and regulation on our YouTube channel.

 

Do you have questions about how AI works in brain implants and the associated risks? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.
