Brain Implants, GDPR and AI

Brain implants may be the next challenge for AI and the GDPR. Our guest blogger Lahiru Bulathwela, an LLM student in innovation and technology at the University of Edinburgh, explores why and how.

What do brain implants, the GDPR and AI share? Elon Musk recently demonstrated his brain implant on Gertrude the pig, a sign of growing mainstream consumer interest in the technology. Neural implants are not a recent development: researchers and clinicians have used them for years, successfully treating patients with clinical depression and Parkinson's disease, and restoring limited movement to paralysed individuals. The excitement around companies like Neuralink is palpable; the potential for neural implants to treat, and in the future enhance, people is certainly an interesting prospect. However, as with any innovative technology, excitement about its development often overshadows concerns about its implications. The brain represents the sum total of a person's individuality and identity, so concerns surrounding neural implants are particularly sensitive.

Many obstacles face the development of neural implants, ranging from technological to physiological limitations. This blog explores the issues that relate to data protection: data protection is central to our information-dominated society, and neural implants present new challenges because the information they use is arguably the most sensitive of all data.

How do they work?

In the simplest terms, a neural implant is an invasive or non-invasive device that can interact with an individual's brain. Certain devices can map an individual's native neural activity, and some can stimulate neurons in the brain to alter its functioning. While the technology is advanced, there are limitations to its efficacy, primarily surrounding our knowledge of native neural circuits. Currently, we can map an individual's brain and record its neural activity, but we lack the knowledge to interpret that information in the meaningful way expected of a consumer device. While limited at the moment, it is a question of when, rather than if, our understanding of native neural networks will improve.

GDPR and Neural implants

The GDPR is the most recent iteration of data protection regulation in the EU, and it sets a high regulatory standard. It is the most progressive data protection regulation to date, but like other legislative tools, its development and implementation are a reaction to the rapidly evolving use of information in society. Although the GDPR does not mention 'neural information' specifically within the definition of personal data in Art. 4(1), a person may be identified from such information, and therefore it is personal data:

“…’personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;”

Factors such as the physical, physiological or mental identity of a person could plausibly be established from neural information, so any measured neural condition is personal data. Information gathered from a person's brain is among the most sensitive of personal information, especially when you consider whether it is even possible to consent meaningfully to its collection.

When we consent to our information being collected or shared, we retain a degree of control over what information is made available. A data controller must justify the lawful collection of that data (Art. 6 and Art. 9) and ensure that consent is freely given, and not made conditional on the provision of a service or the performance of a contract where the data is not necessary for that contract (Art. 7(4)).

An individual can consent to share their personal medical history with an insurer without having to share information on their shopping habits or relationships. We cannot, however, compartmentalise the information within our brain: neural activity cannot be hidden from a device that records electrical activity in the brain. This lack of control over what information our brain shares is a significant issue, even though our ability to interpret such information is currently limited.

Future regulatory measures?

The GDPR is the most progressive regulatory instrument implemented anywhere in the world, yet it lacks the depth needed to deal with the ever-changing landscape of information gathering. The next iteration of data protection regulation requires foresight to protect individuals effectively from the misappropriation of their information: foresight to understand that overzealous regulation can hamper innovation, while ineffective regulation could undermine consumer confidence in neural implants.

When Elon Musk described his Neuralink implant as a "Fitbit in your skull", he minimised how invasive neural implants would be to our privacy. While individuals may be increasingly comfortable sharing their information in public fora such as social media, they still choose what information to share. The lack of choice over what information a neural implant provides necessitates robust regulation; I predict that this regulation should include a requirement to use technology to enforce it.

Regulation through technology?

Governance using technology is a potential alternative to standard legislative tools. Technologies such as blockchain use cryptography to decentralise record keeping, ensuring that information held on record is only accessible to verified individuals and that no single person or entity can access or control all the information.
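To make this concrete, the tamper-evidence at the heart of such record-keeping systems can be illustrated with a minimal hash chain. This is a deliberately simplified sketch (the record structure and field names are invented for illustration, and a real blockchain adds distributed consensus and access control on top):

```python
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers both the record and the previous block's hash."""
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    return {"record": record, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any altered record or broken link fails verification."""
    for i, block in enumerate(chain):
        payload = json.dumps({"record": block["record"], "prev_hash": block["prev_hash"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a two-block chain, then tamper with the first record.
genesis = make_block({"entry": "record 1"}, prev_hash="0" * 64)
chain = [genesis, make_block({"entry": "record 2"}, genesis["hash"])]
print(verify_chain(chain))   # intact chain verifies
chain[0]["record"]["entry"] = "altered"
print(verify_chain(chain))   # tampering breaks the chain
```

Because each block's hash depends on the previous block's hash, altering any stored record invalidates every block after it, which is precisely the property that makes such ledgers trustworthy without a central gatekeeper.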

More specific to neural implants would be the use of machine learning to protect an individual's privacy. Machine learning is already used in clinical environments to assist with brain mapping, and it may in future allow us to better understand our native neural networks. Using machine learning to regulate what information a neural implant shares may seem far-fetched at present, but I would contend that this is due to our limited understanding of the brain rather than any algorithmic limitation. More research is required to understand precisely what can be interpreted from our neural activity, but we have the technological capability to build algorithms that could learn what information should and should not be shared.
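As a thought experiment only, the kind of on-device gate described above could in principle be a simple learned classifier that decides which decoded signals are allowed to leave the implant. The sketch below trains a toy perceptron on invented two-feature "signal summaries" (every feature, label and value here is hypothetical; real neural decoding is vastly harder than this):

```python
import random

# Hypothetical training data: each two-number vector summarises a decoded
# signal; label 1 = clinically relevant and shareable, 0 = private.
random.seed(0)
train = [([1.0, 0.2 + random.random() * 0.1], 1) for _ in range(20)] + \
        [([0.0, 0.8 + random.random() * 0.1], 0) for _ in range(20)]

# Minimal perceptron: learn a linear rule separating the two classes.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in train:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = y - pred
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

def shareable(x):
    """Gate: only signals the model classifies as shareable leave the device."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

print(shareable([1.0, 0.25]))  # resembles the shareable class
print(shareable([0.0, 0.85]))  # resembles the private class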

Neural implants at present lack the sophistication to pose any significant threat to an individual's privacy, but they offer an opportunity for legislators and technologists to create proactive measures that would effectively protect individuals and build consumer confidence.

Check out our vlog exploring Brain Implants, GDPR and AI:

You can learn more about AI ethics and regulation on our YouTube channel.

Aphaia provides GDPR, Data Protection Act 2018 and ePrivacy adaptation consultancy services, including data protection impact assessments, CCPA compliance and Data Protection Officer outsourcing.

AI Ethics in Real Estate

AI ethics in real estate is imperative for the industry and its ability to operate within regulation as the use of AI gradually expands in this sector.


The Role of AI in Real Estate and Construction


Artificial intelligence has been difficult to implement in aesthetics-driven markets such as real estate, traditionally the second least digitised industry in the world, particularly for sales and similar tasks. However, many companies have now adopted AI in the real estate and construction industries for increasingly important and more detailed tasks.

Certain technologies use physical features, location and other related variables to find the best combination of features, whether for people looking for a home or for those building one. AI can spot unlikely linkages among properties, allowing it to show a property to the perfect match in the perfect price range: if accurate, it meets all of that buyer's needs and the needs of similar buyers.
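A minimal sketch of this kind of matching, assuming properties and buyer preferences have been reduced to simple numeric feature vectors (the features, listings and names below are invented for illustration; production systems use far richer representations):

```python
import math

def cosine(a, b):
    """Cosine similarity between two numeric feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical property features: [bedrooms, bathrooms, garden (0/1), price band]
listings = {
    "flat_a":  [2, 1, 0, 3],
    "house_b": [4, 2, 1, 7],
    "house_c": [3, 2, 1, 5],
}

buyer_preferences = [3, 2, 1, 5]  # what the buyer is looking for

# Rank listings by similarity to the buyer's preference vector.
ranked = sorted(listings, key=lambda k: cosine(listings[k], buyer_preferences),
                reverse=True)
print(ranked)  # best match first
```

Real systems learn the feature weights from behaviour rather than treating every feature equally, but the core idea, scoring each property against a preference vector and ranking, is the same.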


The number one issue with AI in real estate is the inconsistent categorisation of property data: many aspects that add to or diminish a property's value are subtle and nuanced. However, the larger data pool accumulated over recent years has greatly increased the level of analysis an AI system can perform. Over time, the more people use the system, the more sophisticated it gets; and the more the same person uses it, the better it filters out their dislikes and records that data to give ever more accurate results.

Interactive AI could also use customer data, including photos of their own home or of homes with features they appreciate, allowing it to combine their favourite features into a property that reflects all of their preferences, down to the build material. In home construction, AI can increase the efficiency of space use, as many legacy home designs are constrained by artistic considerations. If a company or client is trying to maximise a full plot of land, AI can devise a concise floor plan and dimensions that reflect the builder's preferences.


Considering cost and efficiency as well, AI could streamline the sourcing of applicable materials, weighing availability and price along with other considerations such as health and environmental concerns, with final approval of the schematics and building plan left to a human expert. A key point to acknowledge is that the property itself is only one concern: neighbourhoods play a huge role in deciding where someone may raise their family, with proximity to schools, fields, gyms, clubs, groceries and other amenities in the immediate vicinity all weighing in. What makes a two-bedroom flat in one city cost more than in another?

AI systems can find these correlations far more easily and across many more facets, as long as they are trained to know what to look for. Again, the key consideration with AI is that it is always heavily dependent on the quality of the research and data it uses as source material.


The Need for AI Ethics in Real Estate


With all of the capabilities AI now offers, it is up to the human element to implement the correct ethical programming and use of AI to ensure citizens' rights are upheld. The current state of the art makes it impossible for a program or piece of technology to have a conscience (that we know of); that is to say, if not properly regulated and monitored, AI can entrench existing societal disadvantages.

The first and most obvious elephant in the room is an AI that distinguishes people by race and uses that as relevant criteria to match them with a "proper" home or neighbourhood. A machine would not know that matching a person to neighbourhoods on the basis of ethnicity, regardless of their price range, because of "social preference" parameters is inherently prejudiced and something that cannot be allowed to happen. The recent jump in AI in the real estate market is also due to more personal information being siphoned from homeowners via real estate websites, search engines, forums, social media and other sources.
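One common safeguard against this is auditing the system's outputs for disparate impact across groups. A minimal sketch, using invented data and the "four-fifths rule" heuristic borrowed from US employment-discrimination practice (the groups, records and threshold are illustrative only):

```python
# Toy audit: compare the rate at which a matching system shows
# high-value listings to two (hypothetical) demographic groups.
recommendations = [
    {"group": "A", "premium_shown": True},
    {"group": "A", "premium_shown": True},
    {"group": "A", "premium_shown": False},
    {"group": "B", "premium_shown": True},
    {"group": "B", "premium_shown": False},
    {"group": "B", "premium_shown": False},
]

def rate(group):
    """Fraction of a group's recommendations that included premium listings."""
    rows = [r for r in recommendations if r["group"] == group]
    return sum(r["premium_shown"] for r in rows) / len(rows)

# Four-fifths rule heuristic: flag if one group's rate is < 80% of the other's.
ratio = min(rate("A"), rate("B")) / max(rate("A"), rate("B"))
print(f"disparate-impact ratio: {ratio:.2f}")  # a ratio below 0.8 warrants review
```

An audit like this does not explain *why* the disparity exists, but it gives regulators and developers a measurable trigger for investigating the model's parameters and training data.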


Is enough being done to protect homeowner information or buyers’ information?


Several questions and considerations come into play regarding the ethical use of AI systems in the real estate and building markets. Is enough being done to protect homeowners' and buyers' information? Are these data collection agencies processing this data with all the necessary GDPR guidelines in mind? Is there total transparency in these collection and utilisation processes, making every party fully aware of how the information is to be used? Any time personal information is a key consideration, the topic is a delicate one, and AI manufacturers and developers have to be wary of trading ethical programming for quantitative accuracy, as the long-term impact could be detrimental to the guaranteed rights of civil society.


Whether the AI actively seeks out information on its own is also a consideration, as it could gain access to information that drastically changes the quality of its searches and affects their objectivity. Moreover, with its ability to make fully efficient use of space, materials, square footage and every other factor, AI can also make it easy to cut corners in the industry: it could turn a two-million-euro project into something far cheaper by sourcing alternative building materials, right down to the nails and screws.

Comfort may also not be accurately captured by an AI system in every situation. The purpose of the project and its scale are important considerations too. Architects and contractors have to make sure each building is sufficiently comfortable for people to function in and does not skimp on any of the necessary features of a modern-day building.


In addition, long-term cost reduction and environmental considerations when choosing energy sources for homes are a personal choice for each building owner and should not be left to the agency of the machine or software, even if the software can recommend a best or most applicable type of energy source. There are many considerations machines simply cannot make unless we teach them how, and there are important questions developers must ask themselves about their code; the two most important are: does it work, and if yes, is it ethical?


We recently released a new vlog exploring the use of AI in the real estate sector, as part of a series on AI ethics within various industries.


Are you worried about COVID-19, whatever your industry? We have prepared a blog with relevant tips to help you out, covering COVID-19 business adaptation basics.


Subscribe to our YouTube channel to be updated on further content.

Do you have questions about how AI is transforming the real estate sector and the associated risks? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Facial Recognition in Public Spaces

Facial recognition in public spaces can read your face and your mood, and record other information about you.

Facial recognition in public spaces can read your face and your mood, and record other information about you, like how often you pass a certain spot. Smart billboards can tailor their offerings based on this information. As a result, many people are beginning to question their level of comfort with this technology, its degree of use, and its impact on their lives.

In our latest YouTube vlog, "Facial recognition in public spaces", we explore the thoughts and concerns many people already have about this new but quickly developing technology and its impact on our society, and we look at what it specifically means for you. You can take a look at our latest vlog here:

The GDPR defines 'profiling' as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements."

With regard to direct marketing from these smart billboards, the GDPR states that "Where personal data are processed for the purposes of direct marketing, the data subject should have the right to object to such processing, including profiling to the extent that it is related to such direct marketing, whether with regard to initial or further processing, at any time and free of charge." It goes on to state that "That right should be explicitly brought to the attention of the data subject and presented clearly and separately from any other information".

In addition to this and pursuant to article 35 GDPR, “Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data”.

Smart billboards are a new and intrusive technology that may process personal data about data subjects who may not even be aware of it, which limits the rights they are granted under the GDPR. We took a deeper look into this in our blog "Regulating the right to privacy in the AI era".

Due to the privacy risks that facial recognition involves, according to a leaked EU Commission white paper, the EU may place a three-to-five-year ban on facial recognition technology in public places. The US, for its part, may also impose measures in this regard, such as a moratorium on federal government use of facial recognition technology until Congress passes legislation regulating it, and a prohibition on using federal funds for this technology.

Does your company utilize facial recognition software to conduct profiling or direct marketing? Aphaia's AI ethics assessments will assist in ensuring that it complies with the EU's and UK's ethical frameworks. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments and Data Protection Officer outsourcing. Contact us today.

EU White Paper on AI

EU White Paper on Artificial Intelligence Overview

The EU White Paper on Artificial Intelligence contains a set of proposals to develop a European approach to this technology.

As reported in our blog, the leaked EU White Paper obtained by Euractiv proposes several options for AI regulations moving into the future. In our post today we are going through them to show you the most relevant ones.

The EU approach focuses on promoting the development of AI across Member States while ensuring that relevant values and principles are properly observed throughout design and implementation. One of the main goals is cooperating with China and the US, the most important players in AI, while always aiming to protect the EU's interests, including European standards and the creation of a level playing field.

Built on the existing policy framework, the EU White Paper points out three key pillars for the European strategy on AI:

  • Support for the EU’s technological and industrial capacity.

  • Readiness for the socioeconomic changes brought about by AI.

  • Existence of an appropriate ethical and legal framework.

How is AI defined?

This EU White Paper provides a definition of AI based on its nature and functions. In it, AI is conceived as "software (integrated in hardware or self-standing) which provides for the following functions:

  • Simulation of human intelligence processes.

  • Performing certain specified complex tasks.

  • Involving the acquisition, processing and rational or reasoned analysis of data”.

Europe’s position on AI

What is the role of Europe in the development of AI? Despite the EU's strict rules on privacy and data protection, Europe has several strengths that may help it gain leverage in the "AI race" against other markets like China or the US, namely:

  • Excellent research centres with many publications and scientific articles related to AI.

  • World-leading position in robotics and B2B markets.

  • Large amounts of public and industrial data.

  • EU funding programme.

On the negative side, there is a pressing need to significantly increase investment in AI and to maximise it through cooperation among Member States, Norway and Switzerland. Europe also has a weak position in consumer applications and online platforms, which results in a competitive disadvantage in data access.

The EU White Paper offers some proposals to reinforce the EU's strengths in AI and address the areas that need to be boosted:

  • Establishing a world-leading AI computing and data infrastructure in Europe, using High Performance Computing centres as a basis.

  • Federating knowledge and achieving excellence by reinforcing the EU scientific community for AI and facilitating its collaboration and networking through strengthened coordination.

  • Supporting research and innovation to stay at the forefront with the creation of a “Leaders Group” set up with C-level representatives of major stakeholders.

  • Fostering the uptake of AI through the Digital Innovation Hubs and the Digital Europe Programme.

  • Ensuring access to finance for AI innovators.

What are the prerequisites to achieve EU’s goals on AI?

Access to data

Ensuring access to data for EU businesses and the public sector is essential to develop AI. One of the key measures considered by the Commission for redressing the issue of data access is the development of common data spaces which combine the technical infrastructure for data sharing with governance mechanisms, organised by sector or problem area.

Regulatory framework

The above can be built on the EU's comprehensive legal framework, which includes the GDPR, the Regulation on the Free Flow of Data and the Open Data Directive. The latter may indeed play a fundamental role: based on its latest revision, the Commission intends to adopt, by early 2021, an implementing act on high-value public sector datasets, which will be available for free and in a machine-readable format.

Although AI is already subject to an extensive body of EU legislation covering fundamental rights, consumer law and product safety and liability, it also poses new challenges stemming from data dependency and connectivity within new technological ecosystems. There is therefore a need for a regulatory framework that covers the specific risks AI brings. To achieve this successfully, the EU White Paper highlights the relevance of complementing and building upon existing EU and national frameworks to provide policy continuity and ensure legal certainty.

The main risks the implementation of AI in society faces are the following:

  • Fundamental rights, including bias and discrimination.

  • Privacy and data protection.

  • Safety and liability.

It is important to note that the aforementioned risks can be the result of flaws in the design of the AI system, problems with the availability and quality of data, or issues stemming from machine learning as such.

According to the EU White Paper, the Commission identified the following weaknesses of the current legislative framework in consultation with Member States, businesses and other stakeholders:

  • Limitations of scope as regards fundamental rights.

  • Limitations of scope with respect to products: EU product safety legislation requirements do not apply to services based on AI.

  • Uncertainty as regards the division of responsibilities between different economic operators in the supply chain.

  • Changing nature of products.

  • Emergence of new risks.

  • Difficulties linked to enforcement given the opacity of AI.

How should roles and responsibilities concerning AI be attributed?

The Commission considers that, given the number of actors involved in the life cycle of an AI system, the principle that should guide the attribution of roles and responsibilities in the future regulatory framework is that responsibility lies with the actor(s) best placed to address it. The future regulatory framework for AI is therefore expected to set obligations for both developers and users of AI, together with other groups such as service suppliers. This approach would ensure that risks are managed comprehensively while not going beyond what is feasible for any given economic actor.

What legal requirements should be imposed on the agents involved?

According to the EU White Paper, the Commission seems keen to set legal requirements of a preventive, ex ante character rather than an ex post one, even though the latter are also referred to. That said, the requirements might include:

  • Accountability, transparency and information requirements to disclose the design parameters of the AI system.

  • General design principles.

  • Requirements regarding the quality and diversity of datasets.

  • Obligation for developers to carry out an assessment of possible risks and steps to minimize them.

  • Requirements for human oversight.

  • Additional safety requirements.

Ex post requirements establish liability and possible remedies for harm or damage caused by a product or service relying on AI.

That said, which regulatory options is the Commission considering?

  1. Voluntary labeling.

This alternative would be based on a voluntary labeling framework for developers and users of AI. The requirements would be binding only once the developer or user has opted to use the label.

  2. Sectorial requirements for public administration and facial recognition.

This option would focus on the use of AI by public authorities. For this purpose, the Commission proposes the model set out by the Canadian directive on automated decision-making, in order to complement the provisions of the GDPR.

The Commission also suggests a time-limited ban (“e.g. 3-5 years”) on the use of facial recognition technology in public spaces, aiming at identifying and developing a sound methodology for assessing the impact of this technology and establishing possible risk management measures.

  3. Mandatory risk-based requirements for high-risk applications.

This option would foresee legally binding requirements for developers and users of AI, built on existing EU legislation. Given the need to ensure proportionality, these new requirements might apply only to high-risk applications of AI. This brings to light the need for clear criteria to differentiate between "low-risk" and "high-risk" systems. The Commission provides the following definition of "high-risk": "applications of AI which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity", and points out the need to consider it together with the particular sector in which the AI system would be deployed.

  4. Safety and liability.

Targeted amendments of the EU safety and liability legislation could be considered to address the specific risks of AI.

  5. Governance.

An effective system of enforcement is deemed an essential component of the future regulatory framework, which would require a strong system of public oversight.

Does your company use AI systems? You may be affected by the EU's future regulatory framework. We can help you. Aphaia provides GDPR adaptation consultancy services, including data protection impact assessments, EU AI ethics assessments and Data Protection Officer outsourcing. Contact us today.