Privacy and ethical concerns of social media

Privacy and ethical concerns have become more relevant in social media due to the prevalence of “explore”, “discover” or “for you” tabs and pages.


“Discover” pages on social media deliver content that the app predicts the user is likely to be interested in. This is based on several factors, including user interactions, video information, account settings and device settings, each of which is weighted individually by the platform's algorithms. This has raised concerns regarding profiling and related privacy issues, particularly with regard to the processing of personal data of minors.
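
As a rough illustration of how such weighting might work, here is a minimal Python sketch; the factor names and weights are our own assumptions for illustration, not any platform's actual algorithm.

```python
# Minimal sketch of a weighted "discover" ranking. Factor names and
# weights are illustrative assumptions, not any platform's real values.
FACTOR_WEIGHTS = {
    "user_interactions": 0.5,  # likes, shares, watch time
    "video_information": 0.3,  # captions, sounds, hashtags
    "account_settings": 0.1,   # language, country
    "device_settings": 0.1,    # device type, OS
}

def relevance_score(candidate: dict) -> float:
    """Combine per-factor signals (each normalised to 0..1) into one score."""
    return sum(w * candidate.get(f, 0.0) for f, w in FACTOR_WEIGHTS.items())

def rank_for_discover(candidates: list) -> list:
    """Order candidate posts by descending predicted relevance."""
    return sorted(candidates, key=relevance_score, reverse=True)
```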


While automated decisions are allowed as long as they have no legal ramifications, specific care and attention needs to be applied to the use of the personal data of minors.


The decisions that cause specific content to show up on “explore” and “discover” pages are by and large automated decisions based on profiling of individuals' personal data. While this may benefit several organizations and individuals, allowing large volumes of data to be analysed and decisions to be made very quickly, showing only what is considered the most relevant content to the individual, there are certain risks involved. Much of the profiling that occurs is inconspicuous to the individual and may well have adverse effects. GDPR Article 22 does not prohibit automated decisions, not even regarding some minors, as long as those decisions do not have any legal or similarly significant effect on the individual. The Article 29 Working Party, now replaced by the EDPB, states that “solely automated decision making which influences a child's choices and behaviour could potentially have a legal or similarly significant effect on them, depending upon the nature of the choices and behaviours in question.” As a requirement of the GDPR, specific protection needs to be applied to the use of personal data when creating personality or user profiles specifically for children, or profiles to be used by children.


Much of the data processed by social media apps requires consent; however, most minors are not able to provide their own consent.


According to the latest updates to the EU ePrivacy rules, much of the data processed by social media apps and websites may require consent. In many parts of the world, however, most minors are not legally able to provide their own consent. The age of consent in this regard varies around the world, and in some countries it is as high as 16. In the UK specifically, children aged 13 or over can provide their own consent; for younger children, parents or guardians must provide consent on their behalf. As a data controller, it is important to know which data requires consent, from whom and how that consent will be collected, and which data can be processed on a lawful basis other than consent.
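
A minimal sketch of how a controller might route a consent request by jurisdiction is below. Only the UK threshold of 13 comes from the text above; the other entries are examples that must be verified for each jurisdiction.

```python
# Sketch of routing a consent request by the local age of digital consent.
# Only the UK value (13) comes from the text above; the other entries are
# examples and must be verified per jurisdiction.
DIGITAL_CONSENT_AGE = {"UK": 13, "DE": 16, "ES": 14}
DEFAULT_AGE = 16  # conservative fallback where the threshold is unknown

def consent_must_come_from(user_age: int, country: str) -> str:
    threshold = DIGITAL_CONSENT_AGE.get(country, DEFAULT_AGE)
    return "user" if user_age >= threshold else "parent_or_guardian"

print(consent_must_come_from(13, "UK"))  # -> user
print(consent_must_come_from(13, "DE"))  # -> parent_or_guardian
```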


In developing social media apps and features it is important to consider several ethical principles. 


Trustworthy AI should be lawful, ethical and robust. In developing social media apps and features, it is important to ensure that the data is kept secure, that the algorithms are explainable and that the content delivered to the user is free from unfair bias. Ethical principles like technical robustness, privacy, transparency and non-discrimination are considered paramount. Because social media algorithms serve up content to users on explore and discover pages, it is imperative that the decisions made by these AI systems are transparent and that attention is paid to whether, and how, these systems may be discriminatory. An AI ethics assessment can provide valuable insight into how fair these AI decisions actually are, and into how to develop the algorithms for social media apps and platforms ethically and lawfully.
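
One simple way to start checking a feed for discriminatory outcomes is to compare how often different user groups are shown a given kind of content. The sketch below uses a crude exposure-rate comparison; this metric is our illustrative choice, not one mandated by the GDPR or the AI-HLEG guidelines.

```python
from collections import defaultdict

def exposure_rates(impressions: list, topic: str) -> dict:
    """Share of feed impressions per user group that feature `topic`.
    A crude demographic-parity style check; illustrative only."""
    shown = defaultdict(int)
    featured = defaultdict(int)
    for imp in impressions:  # e.g. {"group": "18-24", "topic": "finance"}
        shown[imp["group"]] += 1
        if imp["topic"] == topic:
            featured[imp["group"]] += 1
    return {group: featured[group] / shown[group] for group in shown}

# Large gaps between groups' rates would flag the ranking for human review.
```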


We recently published a short vlog on our YouTube channel exploring the privacy and ethical concerns in social media. Be sure to check it out, like, comment and subscribe to our channel for more AI ethics and privacy content. 

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy, GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Brain implants and AI ethics

The risks derived from the use of AI in brain implants may soon trigger the need for AI Ethics regulation.

In our last vlog we talked about the interaction between brain implants and GDPR. In this second part we will explore how AI Ethics applies to brain implants. 

How is AI used in brain implants?

AI is the conveying element between brain implants and the people who work with them: it helps to translate the electrical signals of the brain into a human-understandable language.

The process is normally twofold: first, neural data is gathered from brain signals and translated into numbers; then the numbers are decoded and transformed into natural language, which can be English, Spanish or any other, as programmed. The procedure also works the other way around.
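
To make the two stages concrete, here is a deliberately toy Python sketch, assuming a tiny fixed vocabulary and per-channel signal power as the numeric features; real decoders are trained models and far more complex.

```python
import numpy as np

def encode_signals(raw_signals: np.ndarray) -> np.ndarray:
    """Stage 1 (toy): turn raw electrode readings into numbers,
    here simply the mean power per channel."""
    return (raw_signals ** 2).mean(axis=1)

VOCAB = ["yes", "no", "help", "stop"]  # illustrative target vocabulary

def decode_to_language(features: np.ndarray) -> str:
    """Stage 2 (toy): map the numeric features onto a word. A real
    system would use a trained classifier or sequence model."""
    return VOCAB[int(features.argmax()) % len(VOCAB)]

signals = np.random.randn(4, 256)  # 4 electrode channels, 256 samples
print(decode_to_language(encode_signals(signals)))
```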

Why is AI Ethics important in brain implants?

Depending on how much of a sci-fi fan you are, the words “brain implants” may bring to mind certain films and television series. If, on the contrary, you are a news rather than a Netflix enthusiast, you are probably already thinking about several recent medical research projects in which brain implants have been used. The reality reconciles both approaches.

Brain implants have been part of medical and scientific research for more than 150 years, since 1870, when Eduard Hitzig and Gustav Fritsch performed experiments on dogs in which they were able to produce movement through electrical stimulation of specific parts of the cerebral cortex. Significant progress has been made since then, and brain implants are now being tested for several purposes, inter alia to monitor abnormal brain activity and trigger assistive measures.

Military applications have also been explored by some institutions. For example, DARPA has been working since 2006 on the development of a neural implant to remotely control the movement of sharks, which could potentially be exploited to provide data on enemy ship movements or underwater explosives.

The latest development in this field comes from Neuralink, the neurotechnology company founded by Elon Musk, whose website claims that the app “would allow you to control your iOS device, keyboard and mouse directly with the activity of your brain, just by thinking about it”. The aim pursued is a full brain interface that would enable people to connect with their devices using only their minds.

While all these applications may have a hugely positive impact on technological change, they may also significantly affect individuals' rights and freedoms, so it is paramount to have regulation in place that establishes limits. Whereas data protection may be playing an important role in this field at the moment, it may not be enough once further progress is made. AI ethics may make up for this gap.

How can AI Ethics help?

AI Ethics may provide a framework to cover the evolving use of computer hardware and software to enhance or restore human capabilities, while preventing this technology from posing high risks to the rights and freedoms of individuals. The fact that something is technically feasible does not mean that it should be done.

The lack of appropriate regulation may even slow down research and implementation, as some applications would be in direct conflict with several human rights and pre-existing regulations.

Human agency and oversight

Based on the AI Ethics principles pointed out by the AI-HLEG in their “Ethics Guidelines for Trustworthy AI”, human agency and oversight is key for brain implants, although it may be tricky to implement in practice. 

Human agency and oversight stands for humans being able to make informed decisions when AI is applied. Whilst this may be quite straightforward in some medical uses, where the practitioner analyses the brain activity data gathered by the implants, it is not in other scenarios. The merging of biological and artificial intelligence, together with the fact that we cannot really know, or at least be aware of, every single piece of data that may be gathered from brain activity, makes it difficult to claim that a decision made by the very person carrying the implant is actually “informed”.

Technical robustness and safety

Technical robustness and safety is another principle that should be prioritised when developing brain implants. The extremely sensitive nature of the data involved and the direct connection with the brain make it necessary to build the technology to be as close to impenetrable as possible. If a breach compromising biometric data would already have catastrophic results for the individuals affected, the harm derived from an attack on people's brains would be simply immeasurable.

Privacy and data governance

According to the principle of privacy and data governance, full respect for privacy and data protection should be ensured, including data governance mechanisms, the quality and integrity of the data and legitimate access to it. In our view, the lawful bases for valid processing laid down in the GDPR should be redefined, as their current meaning may not make sense in a context governed by brain implants. In relation to the human agency and oversight principle, and in line with our previous article, an individual could not give consent in the sense of the GDPR where data is being gathered by brain implants, because the information collected by these means cannot be identified or controlled up front. The process should be reframed to allow consent to be managed in two stages, the second one after the collection of the data but before any further processing takes place. Considering the evolving nature of the technology and the early stage of the knowledge currently available, retention periods should also be approached differently, as further investigation might reveal that additional data, for which there might be no lawful basis for processing, can be derived from the data lawfully held.
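
As a minimal sketch of the two-stage consent flow we propose (our illustration, not an established GDPR mechanism), consent state could be tracked per data category, with processing blocked until the subject has reviewed what was actually collected:

```python
from dataclasses import dataclass, field

@dataclass
class TwoStageConsent:
    """Illustrative model of the two-stage consent flow described above."""
    collection_consented: bool = False
    processing_consented: dict = field(default_factory=dict)  # category -> bool

    def consent_to_collection(self) -> None:
        """Stage one: authorise the gathering of brain data."""
        self.collection_consented = True

    def review_and_consent(self, category: str, granted: bool) -> None:
        """Stage two: after the subject sees what was actually captured,
        they grant or refuse further processing per data category."""
        self.processing_consented[category] = granted

    def may_process(self, category: str) -> bool:
        return self.collection_consented and self.processing_consented.get(category, False)
```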

Cambridge Analytica may be a good example of the undesired results that can come from massive data processing without adequate safeguards, measures and limitations in place. That case concerned advertising campaigns that influenced political behaviour and election results, which was serious enough in itself. However, a similar leak in a context where brain implants are used would involve incalculable damage to the rights and freedoms of individuals.

Transparency

Transparency is directly linked to the principle of privacy and data governance, as the provision of full information about the processing is one of the main pillars on which the GDPR has been built. While the technology is still in a development phase, this requirement cannot be completely met, because there will always be gaps in the system's capabilities and limitations. Relevant requirements should be put in place to set the minimum information that should be delivered to data subjects when brain implants are used.

Diversity, non-discrimination and fairness

Diversity, non-discrimination and fairness should be observed to avoid any type of unfair bias and the marginalisation of vulnerable groups. This principle is dual: on the one hand, it covers the design of the brain implants and the AI governing them, which should be trained to prevent human biases, the AI's own biases and the biases resulting from the interaction between human (brain) and AI (implant). On the other hand, it should be ensured that the access barriers to this technology do not create severe social inequalities.

Societal and environmental well-being

The social and societal impact of brain implants should be carefully considered. This technology may help people who have a medical condition, but its cost should be reasonably affordable to everyone. A riskier version of this circumstance arises when brain implants are used not only for medical treatment but also for human enhancement, as this would provide some people with improved abilities, to the detriment of those who could not pay for them.

Accountability

Taking into account the critical nature of brain implant applications, auditability is key to enabling the assessment of algorithms, data and design processes. Adequate and accessible redress should be ensured, for instance being able to contact every individual using a particular version of a system that has proved to be exposed to vulnerabilities.
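
A hedged sketch of what such redress tooling might look like is below: a registry (our own illustration) that records which user runs which system version, so everyone on a vulnerable version can be reached.

```python
from collections import defaultdict

class ImplantRegistry:
    """Illustrative audit registry: maps system versions to users so that
    people on a vulnerable version can be contacted for redress."""
    def __init__(self) -> None:
        self._users_by_version = defaultdict(set)

    def register(self, user_id: str, version: str) -> None:
        self._users_by_version[version].add(user_id)

    def affected_users(self, vulnerable_version: str) -> set:
        return set(self._users_by_version[vulnerable_version])

registry = ImplantRegistry()
registry.register("patient-17", "fw-2.3.1")
print(registry.affected_users("fw-2.3.1"))  # -> {'patient-17'}
```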

Examples

In the table below we combine our previous article with this one to provide some practical insight into brain implant applications and the AI Ethics principles and GDPR requirements they would be linked to. For this purpose, we use the following case as the example: “Treating depression with brain stimulation”. Each row explains how an AI Ethics principle and the relevant GDPR requirement or principle would apply.

Brain implants application | AI Ethics principle | GDPR requirement/principle
Regular monitoring by medical practitioners to decide whether the treatment is effective. | Human agency and oversight | Human intervention
Unlawful access to and manipulation of the brain implants would have a direct negative impact on the patient's health, and it would reveal special categories of data. | Technical robustness and safety | Technical and organisational measures
Processing is necessary for the provision of health treatment. | Privacy and data governance | Lawful basis for processing
Full information about the treatment should be delivered before proceeding. | Transparency | Explanation
No one should be excluded from this treatment and offered a less efficient one just because they cannot afford it. | Diversity, non-discrimination and fairness | No discriminatory results
Where the treatment negatively affects the behaviour of the patient in a way that may be risky for other people, relevant measures and safeguards should be applied. | Societal and environmental well-being | Statistical and research purposes exceptions
The hospital or the State should be responsible for any damage caused by this use of brain implants. | Accountability | Accountability


Check out our vlog exploring Brain Implants and AI Ethics:

You can learn more about AI ethics and regulation on our YouTube channel.


Do you have questions about how AI works in brain implants and the associated risks? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Are the AI Systems Used for Contact Tracing of COVID-19 Ethical?

Are the AI systems used for contact tracing of COVID-19 ethical? In our latest vlog, we explore the extent to which the use of these systems is ethical, and why.


With many European nations launching the Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) initiative to release software code that can be used to create contact tracing apps tracking the possible transmission of COVID-19, many wonder about the extent to which this is ethical. The apps in question would use phone Bluetooth signals to track users' proximity to each other, and would then inform users if they had been in the proximity of someone who had tested positive for the virus. Last week, we explored the use of AI in tracking or preventing the spread of COVID-19. This week, we take a deeper look at the ethical implications of the use of such technology in our society.
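
To make the mechanism concrete, here is a highly simplified Python sketch of one decentralised variant of Bluetooth proximity tracing; real protocols derive rotating IDs from cryptographic keys and add many safeguards, so treat every name and step below as illustrative.

```python
import secrets

def new_ephemeral_id() -> str:
    """Rolling random ID broadcast over Bluetooth (simplified: real
    protocols derive IDs from keys and rotate them on a schedule)."""
    return secrets.token_hex(16)

class Phone:
    def __init__(self) -> None:
        self.own_ids = set()    # IDs this phone has broadcast
        self.heard_ids = set()  # IDs observed from nearby phones

    def broadcast(self) -> str:
        eid = new_ephemeral_id()
        self.own_ids.add(eid)
        return eid

    def hear(self, eid: str) -> None:
        self.heard_ids.add(eid)

    def check_exposure(self, published_positive_ids: set) -> bool:
        """Matching happens on the device, so contact data stays local."""
        return bool(self.heard_ids & published_positive_ids)

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())               # Bob's phone hears Alice nearby
print(bob.check_exposure(alice.own_ids))  # True once Alice's IDs are published
```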


According to Article 9 of the GDPR, certain categories of personal data can only be processed under specific circumstances, which include things like vital interests and public health. With regard to public health as a condition for processing personal data, the condition is not met merely by virtue of the processing being in the public health interest. According to the Data Protection Act 2018, the processing would also need to be carried out by, or under the responsibility of, a health professional, or by another person who in the circumstances owes a duty of confidentiality under the law. Article 22 of the GDPR states that, without the data subject's explicit consent, profiling of this kind is only allowed where authorised by Union or Member State law.


With all this considered, the ethics of the AI systems used in the fight against COVID-19 plays a vital role in maintaining accuracy and non-discrimination. While these measures seem very helpful right now for the sake of public health, there is a risk that they persist beyond the COVID-19 pandemic. In our latest vlog, we explore the ethics of the use of these AI systems.

Please subscribe to our YouTube channel to stay updated on future content. Do you have questions about how to navigate data protection laws during this global coronavirus pandemic in your company? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.


The Reality of the Impersonation Feature on Company Platforms.

Many company platforms and apps include an impersonation feature which allows administrative users to access accounts as though they were logged in as the users themselves.

Imagine knowing that, simply by having an account with a company, you are unknowingly granting that company's everyday employees access to your data in just the same way as if they had logged in with your username and password. Such is, or has been, the case with many companies that we all use on a regular basis. The truth is that there are “user impersonation” tools built into the software of many tech companies like Facebook and Twitter, which not only allow employees to access your account as though they had logged in as you, but can also do so without your knowledge. The account holder, or user, is typically not notified when this happens, nor is their consent needed for it to happen. According to a recent article on OneZero, “…these tools are generally accepted by engineers as common practice and rarely disclosed to users.” The problem is that these tools can be, and have been, misused by employees to access users' private information and even track the whereabouts of users of these companies' platforms.

The Fiasco Surrounding Uber’s “God mode” Impersonation Feature.

In recent years, the popular transport company Uber has come under fire for its privacy policies and, in particular, its questionable impersonation feature, known as “God mode”. Using the feature, the company's employees were able to track the whereabouts of any user, and Uber employees were said to have been tracking the movements of all sorts of users, from famous politicians to their own personal acquaintances. After being called to task by US lawmakers, the company apologized for the misuse of this feature by some of its executives and stated that its policies have since been updated to avoid this issue in the future. Uber is not unique in this sort of privacy breach: Lyft is also known to have comparable tools, along with several other companies.

Impersonation Features Form Part of Most Popular Programming Tools.

Impersonation feature use is much more widespread than a few known companies. Popular web frameworks like Ruby on Rails and Laravel offer impersonation packages that have been downloaded several million times. The impersonation tools offered by these services do not usually require users' permission, nor do they notify users that their account has been accessed. It is pretty common for developers to simply whitelist users with administrator access, giving them impersonator mode and thereby allowing them to access any account as though they were logged in as that user.
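
In its simplest form, the pattern described above amounts to little more than the following Python sketch (names invented for illustration); note there is no notification to, or consent from, the account holder.

```python
ADMIN_WHITELIST = {"admin_anna", "admin_ben"}  # illustrative admin names

def impersonate(actor: str, target_account: str) -> dict:
    """Naive impersonation as described above: any whitelisted admin gets
    a session as the target user, with no notice to or consent from them."""
    if actor not in ADMIN_WHITELIST:
        raise PermissionError("not authorised to impersonate")
    return {"session_user": target_account, "acting_admin": actor}
```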

How Impersonation Features Can Be Made Safer.

Some companies have made changes to their policies and procedures in order to make impersonation features safer for customers. For example, Uber, following its legal troubles over the “God mode” feature, now requires employees to request access to accounts through security. Other companies have resolved to require the user to specifically invite administrators in order to grant them access.
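
A hedged sketch of what the safer, user-invited variant could look like is below: the user must grant access first, grants expire, and every session is written to an audit log. All names are our own illustration.

```python
import time

class SafeImpersonation:
    """Illustrative safer pattern: user-granted, time-limited, audited."""
    def __init__(self, ttl_seconds: int = 3600) -> None:
        self.grants = {}     # (admin, user) -> expiry timestamp
        self.audit_log = []  # one entry per impersonation session
        self.ttl = ttl_seconds

    def user_grants_access(self, user: str, admin: str) -> None:
        """The user explicitly invites a named administrator."""
        self.grants[(admin, user)] = time.time() + self.ttl

    def impersonate(self, admin: str, user: str) -> dict:
        if self.grants.get((admin, user), 0) < time.time():
            raise PermissionError("no valid grant from the user")
        self.audit_log.append({"admin": admin, "user": user, "at": time.time()})
        return {"session_user": user, "acting_admin": admin}
```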

According to Dr Bostjan Makarovic, Aphaia’s Managing Partner, “Whereas there may be legitimate reasons to view a profile through the eyes of the user to whom it belongs, such as further app development and bug repair, GDPR requires that such interests are not overridden by the individual’s privacy interests. This can only be ensured by means of an assessment that is carried out prior to such operations.”

Does your company use impersonation features and want to be sure you are operating within GDPR requirements? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.