Dark patterns in social media platform interfaces

Dark patterns are defined by the European Data Protection Board (EDPB) as “interfaces and user experiences implemented on social media platforms that lead users into making unintended, unwilling and potentially harmful decisions regarding the processing of their personal data”. These dark patterns seek to influence user behaviour on those platforms, hindering users’ ability to make conscious choices that effectively protect their personal data. Data protection authorities are responsible for sanctioning the use of these dark patterns where they breach GDPR requirements. The EDPB identifies six categories: overloading, skipping, stirring, hindering, fickle designs and leaving users in the dark.

Overloading

Overloading refers to situations in which users are confronted with an unreasonable number of requests, or with too much information and too many options or possibilities, which encourages them to share more data than necessary or to unintentionally allow processing of personal data contrary to the expectations of the data subject. Overloading techniques include continuous prompting, privacy mazes and providing too many options.

Skipping

Skipping means designing the interface or user experience in such a way that users forget, or do not think about, all or some of the data protection aspects. Examples of dark patterns which result in skipping include deceptive snugness and “look over there”.

Stirring

Stirring is a dark pattern which affects the choices users make by appealing to their emotions or using visual nudges. This includes emotional steering and pertinent information being “hidden in plain sight”.

Hindering 

Hindering refers to obstructing or blocking users from becoming informed or from managing their data by making the process extremely hard or impossible to achieve. Dark patterns considered to be hindering include dead-end designs, longer-than-necessary processes and misleading information.

Fickle Interfaces

Fickle interfaces are designed in an inconsistent and unclear manner, making it hard for users to navigate the various data protection control tools and to understand the purpose of the data processing. These interfaces include those lacking hierarchy as well as those which use decontextualising within the design.

Interfaces that leave users in the dark  

An interface is considered to leave users in the dark if it is designed in a way that hides information or data protection control tools, leaving users unsure of how their data is processed and of what control they might have over it with regard to the exercise of their rights. Examples of this include language discontinuity, conflicting information and ambiguous wording or information.

We recently published a short vlog on YouTube outlining the types of dark patterns, how adherence to the GDPR principles can prevent your user interface design from falling into these dark patterns, and which measures deserve special attention to avoid sanctions.

Subscribe to Aphaia’s YouTube channel for more information on AI ethics and data protection. 

Does your company have all of the mandated safeguards in place to ensure the safety of the personal data you collect or process? Aphaia can help. We provide both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Fintech and AI Ethics

As the world of Fintech evolves, governance and ethics in that arena become particularly important.

Financial Technology, or “Fintech”, refers to new technology that seeks to improve and automate financial services. This technology aids the smooth running of business and personal finances through the integration of AI systems. Broadly speaking, the term covers any innovation through which people can transact business, from keeping track of finances to the invention of digital currencies and cryptocurrency. With crypto-trading and digital platforms for wealth management becoming more popular than ever before, an increasing number of consumers are seeing the practical application and value of Fintech in their lives. As with any application of AI and technology, however, certain measures should be in place for the smooth, and more importantly, safe integration of this technology into our daily lives, allowing the everyday user to feel more secure in the use of this tech.

Legislation and guidance have been implemented and communicated to guide Fintech and AI ethics.

Some pieces of legislation, such as the Payment Services Directive 2 (PSD2), an EU directive governing electronic payment services, already target Fintech. PSD2 harmonizes two services which have both become increasingly widespread in recent times: Payment Initiation Services (PIS) and Account Information Services (AIS). PIS providers facilitate the use of online banking to make online payments, while AIS providers facilitate the collection and storage of information from a customer’s different bank accounts in a single place. With the increasing popularity of these innovations and other forms of Fintech, and as experience provides further insight into the implications and true impact of their use, new regulations are expected in the future.

Most people consider their financial data to be among their most sensitive and valuable data and, as such, are keen to ensure its safety. Legislation and guidance have been implemented and communicated in order to aid the pursuit of principles like technical robustness and safety, privacy and data governance, transparency, and diversity, non-discrimination and fairness. These are all imperative to ensuring that the use of Fintech is safe and beneficial for everyone involved.

Technical robustness and safety

The safety of one’s personal and financial information is, simply put, of the utmost importance when deciding which tools to use to manage one’s finances. A personal data breach involving financial information could be very harmful for the affected data subjects due to its sensitive nature. Financial institutions and Fintech companies put several measures in place to ensure safe and secure money management through tech. Security measures such as, inter alia, data encryption, role-based access control, penetration testing, tokenization, 2FA, multi-step approval or verification processes and backup policies can and should all be applied, where necessary and feasible. These measures help users feel more secure, but ultimately they protect users from threats such as malware attacks, data breaches, digital identity risks and much more.
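
A minimal sketch of one of these measures, role-based access control, is shown below in Python. The roles, permissions and function names are hypothetical illustrations, not any particular institution's implementation.

```python
# Minimal sketch of role-based access control (RBAC).
# Roles, permissions and the example checks are illustrative only.
from enum import Enum, auto

class Permission(Enum):
    VIEW_BALANCE = auto()
    INITIATE_PAYMENT = auto()
    EXPORT_STATEMENTS = auto()

# Each role is granted only the smallest set of permissions it needs.
ROLE_PERMISSIONS = {
    "customer": {Permission.VIEW_BALANCE, Permission.INITIATE_PAYMENT},
    "support_agent": {Permission.VIEW_BALANCE},
    "auditor": {Permission.VIEW_BALANCE, Permission.EXPORT_STATEMENTS},
}

def has_permission(role: str, permission: Permission) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A support agent can view a balance but cannot move money.
print(has_permission("support_agent", Permission.VIEW_BALANCE))      # True
print(has_permission("support_agent", Permission.INITIATE_PAYMENT))  # False
```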

Privacy and data governance

Article 22 of the EU GDPR gives data subjects the right not to be subject to a decision based solely on automated processing, except where certain circumstances apply. Automated decisions in the Fintech industry may produce legal effects concerning individuals or similarly significantly affect them. Any decision with legal or similar effects needs special consideration in order to comply with the GDPR requirements. A data protection impact assessment may be necessary to identify the risks to individuals and determine how best to deal with them. For special categories of data, automated processing can only be carried out with the individual’s explicit consent or if necessary for reasons of substantial public interest. Robotic process automation (RPA) could be very useful to businesses, helping to increase their revenue and save them money. However, it is imperative to ensure compliance with the GDPR and that automated decision making does not result in dangerous profiling practices.

Diversity, non-discrimination and fairness

Several studies have explored the overall fairness of current Fintech and possible discrimination in consumer lending and other aspects of the industry. Algorithms can either perpetuate widespread human biases or develop their own. Common biases in the financial sector arise around gender, ethnicity and age. AI technology must prevent discrimination and protect diversity, especially in Fintech, where biases can affect an individual’s access to credit and the opportunities it affords. The use of quality training data, choosing the right learning model and working with an interdisciplinary team may help reduce bias and maintain a sense of fairness in the world of Fintech and AI in general.
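
One way to surface such bias is to compare outcomes across groups. The sketch below runs a simple demographic parity check on lending decisions; the records and the 0.8 threshold (borrowed from the so-called four-fifths rule) are assumptions for illustration, not a complete fairness audit.

```python
# Hedged sketch of a demographic parity check on lending decisions.
# The records and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {group: approvals[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)                         # approval rate per group
print(f"parity ratio: {ratio:.2f}")  # well below 0.8 suggests a disparity worth investigating
```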

Transparency

While the use of AI has brought much positive transformation to the financial industry, the question of AI ethics in everything that we do is unavoidable. Transparency provides an opportunity for introspection regarding ethical and regulatory issues, allowing them to be addressed. Algorithms used in Fintech should be transparent and explainable. The ICO and The Alan Turing Institute have produced the guidance “Explaining decisions made with AI” to help businesses with this. They suggest developing a ‘transparency matrix’ to map the different categories of information against the relevant stakeholders. Transparency enables businesses to demonstrate trustworthiness, and trustworthy AI is more easily adopted and accepted by individuals. Transparency into the models and processes of Fintech and other AI allows biases and other concerns to be raised and addressed.
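
In practice, a transparency matrix can start as a simple mapping of who needs to know what. The sketch below is a hypothetical Python example; the stakeholders and information categories are our own illustrative assumptions rather than the taxonomy used in the ICO and Turing guidance.

```python
# Hedged sketch of a "transparency matrix": categories of information
# about an AI system mapped to the stakeholders who need them.
# Stakeholders and categories are illustrative assumptions.
TRANSPARENCY_MATRIX = {
    "customers": ["rationale of individual decisions", "how to contest a decision"],
    "regulators": ["lawful basis", "risk assessments", "audit trail"],
    "internal_teams": ["model features", "training data sources", "known limitations"],
}

for stakeholder, categories in TRANSPARENCY_MATRIX.items():
    print(f"{stakeholder}: {', '.join(categories)}")
```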

Check out our vlog exploring Fintech and AI Ethics:

https://youtu.be/7nj2616bq1s

You can learn more about AI ethics and regulation on our YouTube channel.

Do you have questions about how AI works in Fintech and the related guidance and laws? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Privacy and ethical concerns of social media

Privacy and ethical concerns have become more relevant in social media due to the prevalence of “explore”, “discover” or “for you” tabs and pages.

“Discover” pages on social media deliver content that the app thinks the user would likely be interested in. This is based on several factors, including user interactions, video information, account settings and device settings, each individually weighted by the platform’s algorithms. This has raised some concerns regarding profiling and related privacy issues, particularly with regard to the processing of personal data of minors.
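
As a rough sketch of how such weighting might work in principle, the snippet below combines a few normalised signals into a single relevance score. The factor names and weights are invented for illustration; real platforms use many more signals and proprietary weighting.

```python
# Hedged sketch of weighted content scoring for a "discover" feed.
# Factor names and weights are invented, not any platform's algorithm.
WEIGHTS = {
    "user_interactions": 0.5,  # likes, shares, watch time
    "video_information": 0.3,  # captions, sounds, hashtags
    "account_settings": 0.1,   # language, country
    "device_settings": 0.1,    # device type, operating system
}

def relevance_score(signals: dict[str, float]) -> float:
    """Combine normalised 0-1 signals into a single relevance score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

candidate = {"user_interactions": 0.9, "video_information": 0.6,
             "account_settings": 1.0, "device_settings": 1.0}
print(f"score: {relevance_score(candidate):.2f}")  # 0.83
```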

While automated decisions are allowed, as long as there are no legal ramifications, specific care and attention need to be applied to the use of the personal data of minors.

The decisions which cause specific content to show up on “explore” and “discover” pages are by and large automated decisions based on profiling of individuals’ personal data. While this may benefit many organizations and individuals, allowing large volumes of data to be analyzed and decisions made very quickly, showing only what is considered the most relevant content to the individual, there are certain risks involved. Much of the profiling which occurs is inconspicuous to the individual and may quite possibly have adverse effects. GDPR Article 22 does not prohibit automated decisions, even regarding minors, as long as those decisions do not have any legal or similarly significant effect on the individual. The Article 29 Working Party, whose role has since been taken over by the EDPB, states that “solely automated decision making which influences a child’s choices and behaviour, could potentially have a legal or similarly significant effect on them, depending upon the nature of the choices and behaviours in question”. As a requirement of the GDPR, specific protection needs to be applied to the use of personal data when creating personality or user profiles specifically for children or to be used by children.

Much of the data processed by social media apps requires consent; however, most minors are not able to provide their own consent.

According to the latest updates of the EU ePrivacy rules, much of the data processed by social media apps and websites may require consent. In many parts of the world, most minors are not legally able to provide their own consent. The age of consent in this regard varies around the world, and in some countries it can reach up to 16 years old. In the UK specifically, however, children aged 13 or over are able to provide their own consent; for younger children, parents or guardians must provide consent on their behalf. As a data controller, it is important to know which data requires consent, from whom and how this consent will be collected, and which data can be processed on another lawful basis different from consent.
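
A controller might encode these thresholds along the lines of the sketch below. The UK value (13) reflects the Data Protection Act 2018 and the default (16) reflects Article 8 GDPR; the other member-state values are illustrative assumptions that should be verified before relying on them.

```python
# Hedged sketch of a digital-age-of-consent check.
# UK = 13 (Data Protection Act 2018); default = 16 (Article 8 GDPR).
# Other member-state thresholds below are illustrative assumptions.
CONSENT_AGE = {"UK": 13, "IE": 16, "DE": 16, "FR": 15, "ES": 14}
GDPR_DEFAULT = 16  # applies where a member state has not set a lower age

def needs_parental_consent(age: int, country: str) -> bool:
    """True if consent must come from a parent or guardian."""
    return age < CONSENT_AGE.get(country, GDPR_DEFAULT)

print(needs_parental_consent(14, "UK"))  # False: children 13+ can consent in the UK
print(needs_parental_consent(14, "IE"))  # True: the assumed threshold is 16 in Ireland
```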

In developing social media apps and features, it is important to consider several ethical principles.

Trustworthy AI should be lawful, ethical and robust. In developing social media apps and features, it is important to ensure that the data is kept secure, that the algorithms are explainable and that the content delivered to the user does not include any biases. Ethical principles like technical robustness, privacy, transparency and non-discrimination are considered paramount. Because social media algorithms serve up content to users on explore and discover pages, it is imperative that the decisions made by these AI systems are transparent and that attention is paid to whether, and how, these systems may be discriminatory. An AI ethics assessment can provide valuable insight into how fair these AI decisions actually are, and into how to ethically and lawfully develop the algorithms for social media apps and platforms.

We recently published a short vlog on our YouTube channel exploring the privacy and ethical concerns in social media. Be sure to check it out, like, comment and subscribe to our channel for more AI ethics and privacy content. 

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy rules, GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.