Privacy and ethical concerns of social media

Privacy and ethical concerns have become more relevant in social media due to the prevalence of “explore”, “discover” or “for you” tabs and pages.

 

“Discover” pages on social media deliver content that the app predicts the user is likely to be interested in. This prediction is based on several factors, including user interactions, video information, account settings and device settings, each weighted individually by the platform’s algorithm. This has raised concerns regarding profiling and related privacy issues, particularly with regard to the processing of the personal data of minors. 
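To make the idea of individually weighted factors concrete, here is a minimal sketch of how such a ranking might combine signals into a single relevance score. The factor names and weights are purely hypothetical illustrations, not any platform's actual algorithm.

```python
# Illustrative sketch of a weighted "discover" feed score.
# Signal names and weights are hypothetical, not any real platform's values.

def score_post(signals: dict, weights: dict) -> float:
    """Combine per-post signals into a single relevance score."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

weights = {
    "user_interaction": 0.5,   # e.g. likes, shares, watch time
    "video_information": 0.3,  # e.g. captions, hashtags, sounds
    "account_settings": 0.1,   # e.g. language, country preference
    "device_settings": 0.1,    # e.g. device type, locale
}

post = {
    "user_interaction": 0.8,
    "video_information": 0.6,
    "account_settings": 1.0,
    "device_settings": 1.0,
}

# Higher scores mean the post is more likely to be surfaced to this user.
print(score_post(post, weights))
```

Even a toy model like this shows why profiling concerns arise: every input to the score is derived from an individual's behaviour and settings.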

 

While automated decisions are permitted, provided they have no legal or similarly significant effects, specific care and attention must be applied to the use of the personal data of minors. 

 

The decisions that cause specific content to appear on “explore” and “discover” pages are by and large automated decisions based on profiling of individuals’ personal data. While this may benefit many organisations and individuals, allowing large volumes of data to be analysed and decisions to be made very quickly so that only the most relevant content is shown to each individual, there are certain risks involved. Much of this profiling is inconspicuous to the individual and may well have adverse effects. Article 22 of the GDPR does not prohibit automated decisions, even regarding minors, as long as those decisions do not have any legal or similarly significant effect on the individual. The Article 29 Working Party, whose role is now carried out by the EDPB, states that “solely automated decision making which influences a child’s choices and behavior, could potentially have a legal or similarly significant effect on them, depending upon the nature of the choices and behaviors in question.” As a requirement of the GDPR, specific protection must be applied to the use of personal data when creating personality or user profiles for children, or profiles intended to be used by children. 

 

Much of the data processed by social media apps requires consent; however, most minors are not able to provide their own consent. 

 

According to the latest updates to the EU ePrivacy rules, much of the data processed by social media apps and websites may require consent. In many parts of the world, most minors are not legally able to provide their own consent. The age of consent in this regard varies around the world, and in some countries it reaches 16 years old. In the UK specifically, however, children aged 13 or over are able to provide their own consent; for children younger than this, a parent or guardian must provide consent on their behalf. As a data controller, it is important to know which data requires consent, from whom and how that consent will be collected, and which data can be processed on a lawful basis other than consent.
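As a sketch of the logic a controller needs to encode, the snippet below routes consent to the user or to a parent/guardian depending on the country's age of digital consent. The thresholds shown reflect GDPR Article 8 (default 16, with member states able to lower it to 13, and 13 in the UK under the Data Protection Act 2018), but the table is an illustrative sample, not a complete or authoritative list.

```python
# Illustrative age-of-digital-consent check. Thresholds are a sample only;
# a real implementation must track the current law of each jurisdiction.

AGE_OF_DIGITAL_CONSENT = {
    "UK": 13,  # Data Protection Act 2018
    "FR": 15,
    "DE": 16,
    "IE": 16,
}
DEFAULT_THRESHOLD = 16  # GDPR Article 8 default

def consent_source(country: str, age: int) -> str:
    """Return who can lawfully provide consent for this user."""
    threshold = AGE_OF_DIGITAL_CONSENT.get(country, DEFAULT_THRESHOLD)
    return "user" if age >= threshold else "parent_or_guardian"

print(consent_source("UK", 13))  # user
print(consent_source("DE", 14))  # parent_or_guardian
```

Note that this only answers *who* consents; the controller still has to decide *whether* consent is the right lawful basis for each processing purpose, and how to verify parental consent in practice.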

 

In developing social media apps and features it is important to consider several ethical principles. 

 

Trustworthy AI should be lawful, ethical and robust. In developing social media apps and features, it is important to ensure that data is kept secure, that the algorithms are explainable and that the content delivered to the user is free of bias. Ethical principles such as technical robustness, privacy, transparency and non-discrimination are considered paramount. Because social media algorithms serve content to users on explore and discover pages, it is imperative that the decisions made by these AI systems are transparent and that attention is paid to whether, or how, these systems may be discriminatory. An AI ethics assessment can provide valuable insight into how fair these AI decisions actually are, and into how to develop the algorithms for social media apps and platforms ethically and lawfully. 

 

We recently published a short vlog on our YouTube channel exploring the privacy and ethical concerns in social media. Be sure to check it out, like, comment and subscribe to our channel for more AI ethics and privacy content. 

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy, GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Guidelines for Trustworthy AI

Updates on the guidelines for Trustworthy AI

Recently, the High-Level Expert Group on Artificial Intelligence released a document of guidelines on implementing trustworthy AI in practice. In this blog, we shed some light on the guidelines for trustworthy AI.

 

Last month on our blog, we reported on the final assessment list for trustworthy artificial intelligence, released by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This list was the result of a three-part piloting process in which Aphaia participated, and involved over 350 stakeholders. Data collection during this process involved an online survey, a series of in-depth interviews, and sharing of best practices in achieving trustworthy AI. Before implementing any AI systems, it is necessary to make sure that they comply with the seven principles which resulted from this effort.

 

While AI is transformative, it can also be very disruptive. The goal of producing these guidelines is to promote trustworthy AI based on three components: trustworthy AI should be lawful, ethical and robust, both from a technical and a social perspective. While the framework does not deal in depth with the legality of trustworthy AI, it provides guidance on the second and third components: making sure that AI is ethical and robust.

 

In our latest vlog, the first of a two-part series, we explored three of those seven requirements: human agency and oversight, technical robustness and safety, and privacy and data governance.

 

Human agency and oversight

“AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy”. Businesses should be mindful of the effects AI systems can have on human behaviour in a broad sense; on human perception and expectations when people are confronted with AI systems that ‘act’ like humans; and on human affection, trust and dependence.

According to the guidelines, “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.” AI systems should be human-centric in design and allow meaningful opportunity for human choice.

Technical robustness and safety

Based on the principle of prevention of harm outlined in the document of guidelines, “AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity.”

Organisations should also reflect on resilience to attack and security, accuracy, reliability, fall-back plans and reproducibility.

Privacy and data governance

Privacy is a fundamental right affected by AI systems. AI systems must guarantee privacy and data protection throughout the entire lifecycle of a system. It is recommended to implement a process that embraces both top-level management and operational level within the organisation.

Some key factors to note with regard to the principle of prevention of harm are adequate data governance, relevance of the data used, data access protocols, and the capability of the AI system to process data in a manner that protects privacy.

 

When putting the assessment list into practice, it is recommended that you not only pay attention to the areas of concern but also to the questions that cannot easily be answered. The list is meant to guide AI practitioners to develop trustworthy AI, and should be tailored to each specific case in a proportionate way.

To learn more on the principles AI HLEG has outlined in order to achieve trustworthy AI, look out for part two of this series by subscribing to our channel.

Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.

 

AI Ethics and Real Estate

AI Ethics and Real Estate: Further considerations for best practices.

Upholding AI ethics in the world of real estate is essential to maintaining integrity in the industry as AI systems are incorporated into its processes.

 

Earlier this month we explored the importance of AI ethics in the real estate industry in ensuring its ability to function within regulation, while being of benefit to buyers, sellers, the industry and society in general. Artificial intelligence has the ability to revolutionise the real estate industry; however, as with anything else, measures have to be put in place to ensure that this technology functions ethically, in order to be of true benefit. In this article, we explore the ethical principles that should be applied in the real estate industry to ensure that AI is truly of benefit to society at large, not just a small number of individuals.

 

With real estate being the second least digitised industry in the world, difficulties clearly arise in how best to incorporate artificial intelligence. There are many factors to be considered in approaching the use of AI in the world of real estate and construction. With the many categorisations of data that describe any property, there is a need to ensure that the coding of any AI system applied to real estate is extremely thorough. There is also a need for a high degree of transparency in the process, to ensure that these AI systems function within regulation and avoid discrimination as far as possible.

 

Technical robustness and safety are essential to AI system development.

 

Machine learning is currently the dominant approach to developing AI systems and contributes to all sorts of technologies, including those used in the real estate sector. While this approach has been successful, it can sometimes fail in unintuitive ways. If we are to use machine learning effectively and ethically, it is important to consider the possibility of erroneous processing, and to work to limit its impact on the use of these systems. We must understand the strengths and limitations of this technology to ensure that it is being used to the best of its ability, within reason and within policy.

 

The development of AI systems should consider environmental, social and societal impact.

 

When it comes to choosing the perfect home or the right home for oneself, there are several factors that come into play. Home specifications, neighborhood demographics, and several other factors are paramount to making a buying decision. The opportunity arises here, to develop AI which can differentiate and seek out properties which are best suited to a buyer based not only on price or location, but perhaps building materials or even proximity to certain essential services.

 

It is important to ensure AI systems are avoiding discrimination as far as possible.

 

In using AI systems in the real estate market, it is important to ensure that buyers are not being “algorithmically blackballed” based on factors like nationality, race or simply not fitting in with the current demographic of a neighbourhood. Historic biases can be inadvertently built into algorithms and cause them to reflect human prejudices. While it is unlikely that AI software would be intentionally developed to discriminate against certain demographics, it is possible that these systems discriminate based on the original data inputs, which may carry biases rooted in human prejudices. Real estate companies using AI should test their algorithms regularly to ensure that any algorithmically biased processes are curtailed.
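One simple way such regular testing can be done is a disparate-impact check: compare the rate of favourable outcomes between demographic groups, flagging the algorithm for review when the ratio falls below the commonly cited four-fifths rule of thumb. The sketch below uses hypothetical groups and outcomes purely for illustration; it is one possible test, not a complete fairness audit.

```python
# Minimal disparate-impact check (the "four-fifths rule" heuristic).
# Groups and outcomes below are hypothetical illustrations.

def selection_rate(outcomes: list) -> float:
    """Fraction of favourable outcomes (True) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# True = the system recommended a property viewing to the buyer
group_a = [True, True, True, False]    # 75% favourable
group_b = [True, False, False, False]  # 25% favourable

ratio = disparate_impact_ratio(group_a, group_b)
print(ratio)        # about 0.33, well below the 0.8 rule of thumb
print(ratio < 0.8)  # True: flag the algorithm for review
```

A low ratio does not prove discrimination on its own, but it tells the company exactly where to look before the biased process reaches real buyers.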

 

AI systems used in real estate must be developed, and function within regulation.

 

The data of both buyers and sellers needs to be protected throughout the process of the sale and beyond, and all AI systems’ processing should comply with the GDPR to ensure that this is the case. It can be argued that the GDPR poses significant challenges to AI development, because AI startups rely on data to train machine learning algorithms. However, if AI systems are to function ethically, they must operate within regulation, including during the development phases. Running a Data Protection Impact Assessment (DPIA) and a legitimate interest assessment are likely to be a must.

 

One of the aims of the GDPR is to ensure that people have the power to decide which of their information is used by third parties. This begins with the right to be informed. In this regard transparency is key, as people have the right to information about how much of their data is being used and how. While it may be difficult to ensure full transparency with data subjects, data controllers need to ensure that they are compliant with the GDPR. Finding GDPR-friendly methods of AI development will benefit not just service providers but also data subjects, if done correctly.

 

We recently released a second vlog exploring the use of AI in the real estate sector as part of our series on AI within various Industries.

Subscribe to our YouTube channel to be updated on further content.

Do you have questions about how AI is transforming the real estate sector and the associated risks? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Can AI Help Prevent COVID-19?

Can AI Help Prevent COVID-19?

Can AI help prevent COVID-19? Can it be used to predict or detect outbreaks, and would this be ethical?

Can AI help prevent the spread of COVID-19? Recently, we released an article on the technological initiatives being put in place across Europe to help control the spread of the novel coronavirus, COVID-19. In our latest vlog series, we explore the AI initiatives which have been implemented globally in this regard, to what extent AI can help fight this global pandemic, and what the privacy implications of these measures would be in Europe. As the virus spreads globally, and cases have shown up in over 200 countries worldwide, even more initiatives are popping up around the globe to help combat this pandemic.

Last December, BlueDot, a Toronto-based startup, identified a cluster of unusual pneumonia cases around a market in Wuhan by analysing data published in local newspapers and information available on the internet. The AI-based platform thus flagged what would come to be known as COVID-19, mere hours after health officials diagnosed the first cases of coronavirus and nine days before the World Health Organisation released its statement informing people of the emergence of the virus. 

Currently, countries like South Korea use apps which track location data to constantly monitor infected and non-infected persons and their movements. AI can also be used to analyse the way in which the disease is being discussed on social media, to paint a more vivid picture of the impact of the virus. It is no secret that AI can help prevent COVID-19’s spread and flatten the curve, but what are the privacy implications of such measures being used in Europe? Do they fall in line with the GDPR? 

In our latest vlog, part 1 of a two part series on the use of AI in the fight against COVID-19, we explore how AI can prevent or predict the spread of this viral disease:

Be sure to subscribe to our content on YouTube, to make sure that you catch Part 2.

Do you have questions about how to navigate data protection laws during this global coronavirus pandemic in your company? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.