Fintech and AI Ethics

As the world of Fintech evolves, the need for governance and ethics in that arena grows ever more pressing.

 

Financial Technology, or “Fintech”, refers to new technology that seeks to improve and automate financial services. It supports the smooth running of business and personal finances through the integration of AI systems. Broadly speaking, the term covers any innovation through which people transact business, from tracking finances to digital currencies and cryptocurrencies. With crypto-trading and digital wealth-management platforms more popular than ever before, an increasing number of consumers are seeing the practical value of Fintech in their lives. As with any application of AI and technology, however, certain measures should be in place for the smooth and, more importantly, safe integration of this technology into our daily lives, allowing the everyday user to feel more secure in using it.

 

Legislation and guidance have been implemented and communicated to guide Fintech and AI ethics.

 

Some legislation already targets Fintech, such as the Payment Services Directive 2 (PSD2), an EU regulation governing electronic payment services. PSD2 harmonizes two services that have become increasingly widespread in recent years: Payment Initiation Services (PIS) and Account Information Services (AIS). PIS providers let customers use online banking to make online payments, while AIS providers collect and store information from a customer’s different bank accounts in a single place. As these and other forms of Fintech grow in popularity, and as experience provides further insight into the true impact of their use, new regulations are expected in the future.

 

Most people consider their financial data to be among their most sensitive and valuable data, and are therefore keen to see it protected. Legislation and guidance have been implemented and communicated to support principles such as technical robustness and safety; privacy and data governance; transparency; and diversity, non-discrimination and fairness. These are all imperative to ensuring that the use of Fintech is safe and beneficial for everyone involved.

 

Technical robustness and safety

 

The safety of one’s personal and financial information is, simply put, of the utmost importance when deciding which tools to use to manage one’s finances. A personal data breach involving financial information could be very harmful for the affected data subjects due to its sensitive nature. Financial institutions and Fintech companies put several measures in place to ensure safe and secure money management through tech. Security measures such as data encryption, role-based access control, penetration testing, tokenization, two-factor authentication (2FA), multi-step approval or verification processes and backup policies, inter alia, can and should be applied where necessary and feasible. These measures help users feel more secure, but ultimately they protect users from far more than they might imagine, including malware attacks, data breaches, digital identity risks and much more.
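
To make one of these measures concrete, below is a minimal sketch of field-level encryption of a stored account number, using the Fernet recipe from the Python cryptography library. The record structure and field names are hypothetical, and a production system would manage keys through a dedicated key management service.

```python
# Minimal sketch: encrypting a sensitive field at rest with Fernet
# (symmetric, authenticated encryption) from the cryptography library.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, store in a KMS, not in code
fernet = Fernet(key)

record = {"customer_id": 42}  # hypothetical record structure
record["account_number"] = fernet.encrypt(b"GB29NWBK60161331926819")

# Only a holder of the key can recover the plaintext.
plaintext = fernet.decrypt(record["account_number"])
assert plaintext == b"GB29NWBK60161331926819"
```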

 

Privacy and data governance

 

Article 22 of the EU GDPR gives data subjects the right not to be subject to a decision based solely on automated processing, except where certain circumstances apply. Automated decisions in the Fintech industry may produce legal effects concerning individuals or similarly significantly affect them. Any decision with legal or similarly significant effects needs special consideration in order to comply with the UK GDPR requirements. A data protection impact assessment may be necessary to identify the risks to individuals and determine how best to address them. For special categories of data, automated processing can only be carried out with the individual’s explicit consent or where necessary for reasons of substantial public interest. Robotic process automation (RPA) can be very useful to businesses, increasing revenue and saving money. However, it is imperative to ensure compliance with the GDPR and that automated decision-making does not result in dangerous profiling practices.
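
A simplified illustration of the Article 22 safeguard is sketched below: an automated credit decision with legal or similarly significant effects is routed to human review. The class, fields and routing rule are hypothetical, not a statement of what the law requires in any specific case.

```python
# Hypothetical sketch: route solely automated decisions that carry
# legal or similarly significant effects to a human reviewer, one of
# the safeguards contemplated by GDPR Article 22.
from dataclasses import dataclass

@dataclass
class CreditDecision:
    approved: bool
    has_significant_effect: bool  # e.g. a refusal of credit
    explicit_consent: bool        # one Art. 22(2) exception

def route(decision: CreditDecision) -> str:
    if decision.has_significant_effect and not decision.explicit_consent:
        return "human_review"  # do not act on the automated output alone
    return "automated"

print(route(CreditDecision(approved=False, has_significant_effect=True,
                           explicit_consent=False)))  # -> human_review
```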

 

Diversity, non-discrimination and fairness

 

Several studies have explored the overall fairness of current Fintech and possible discrimination in consumer lending and other aspects of the industry. Algorithms can either perpetuate widespread human biases or develop biases of their own. Common biases in the financial sector arise around gender, ethnicity and age. AI technology must prevent discrimination and protect diversity, especially in Fintech, where biases can affect an individual’s access to credit and the opportunities it affords. Using quality training data, choosing the right learning model and working with an interdisciplinary team may help reduce bias and maintain a sense of fairness in the world of Fintech and AI in general.
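
One simple way such studies quantify possible discrimination is to compare outcome rates between groups. Below is a minimal sketch of a disparate impact check; the approval data and the 0.8 threshold (the “four-fifths rule” sometimes used as a rough screen) are illustrative only.

```python
# Minimal sketch: compare loan approval rates between two groups and
# flag a potential disparate impact. Data here is invented.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = approved, hypothetical outcomes
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = approval_rate(group_b) / approval_rate(group_a)
if ratio < 0.8:  # the "four-fifths" screening threshold
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```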

 

Transparency

 

While the use of AI has brought much positive transformation to the financial industry, the question of AI ethics is unavoidable in everything that we do. Transparency provides an opportunity for introspection on ethical and regulatory issues, allowing them to be addressed. Algorithms used in Fintech should be transparent and explainable. The ICO and The Alan Turing Institute have produced their guidance “Explaining decisions made with AI” to help businesses with this. They suggest developing a ‘transparency matrix’ to map the different categories of information against the relevant stakeholders. Transparency enables businesses to demonstrate trustworthiness, and trustworthy AI is more easily adopted and accepted by individuals. Transparency into the models and processes of Fintech and other AI allows biases and other concerns to be raised and addressed.
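
As a rough illustration of that idea, the sketch below encodes a toy transparency matrix mapping categories of information to the stakeholders who should receive them. The categories echo the explanation types in the ICO/Turing guidance, but this specific mapping is invented for illustration.

```python
# Toy transparency matrix: which explanation goes to which stakeholder.
# The mapping is illustrative, not taken from the guidance itself.
transparency_matrix = {
    "rationale explanation": ["customers", "regulators"],
    "data explanation": ["customers", "auditors", "regulators"],
    "safety and performance explanation": ["internal risk team", "regulators"],
}

for explanation, stakeholders in transparency_matrix.items():
    print(f"{explanation}: {', '.join(stakeholders)}")
```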

 

Check out our vlog exploring Fintech and AI Ethics:

https://youtu.be/7nj2616bq1s

You can learn more about AI ethics and regulation on our YouTube channel.

 

Do you have questions about how AI works in Fintech and the related guidance and laws? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Privacy and ethical concerns of social media

Privacy and ethical concerns have become more relevant in social media due to the prevalence of “explore”, “discover” or “for you” tabs and pages.

 

“Discover” pages on social media deliver content that the app predicts the user is likely to be interested in. This prediction is based on several factors, including user interactions, video information, account settings and device settings, each weighted individually by the platform’s algorithms. This has raised some eyebrows regarding profiling and related privacy concerns, particularly with regard to the processing of personal data of minors.
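
As a toy illustration of “individually weighted” signals, here is a minimal sketch of how such a feed might score a piece of content. The signal names and weights are hypothetical; real recommender systems are far more complex.

```python
# Hypothetical sketch: a weighted sum of normalised signals deciding
# how prominently a video appears on a "discover" page.
WEIGHTS = {
    "user_interactions": 0.5,   # likes, shares, watch time
    "video_information": 0.3,   # captions, sounds, hashtags
    "account_settings": 0.1,    # language, country
    "device_settings": 0.1,     # device type
}

def relevance_score(signals: dict) -> float:
    # Each signal value is assumed normalised to the range [0, 1].
    return sum(WEIGHTS[name] * value for name, value in signals.items())

print(relevance_score({"user_interactions": 0.9, "video_information": 0.4,
                       "account_settings": 0.2, "device_settings": 0.1}))
```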

 

While automated decisions are allowed as long as they carry no legal ramifications, specific care and attention must be applied to the use of the personal data of minors.

 

The decisions that cause specific content to show up on “explore” and “discover” pages are by and large automated decisions based on profiling of individuals’ personal data. While this may benefit many organizations and individuals, allowing large volumes of data to be analyzed and decisions made very quickly so that only the most relevant content is shown to the individual, there are certain risks involved. Much of the profiling that occurs is inconspicuous to the individual and may well have adverse effects. GDPR Article 22 does not prohibit automated decisions, even ones concerning minors, as long as those decisions do not have any legal or similarly significant effect on the individual. Working Party 29, now known as the EDPB, states that “solely automated decision making which influences a child’s choices and behavior, could potentially have a legal or similarly significant effect on them, depending upon the nature of the choices and behaviors in question.” As a requirement of the GDPR, specific protection needs to be applied to the use of personal data when creating personality or user profiles specifically for children or to be used by children.

 

Much of the data processed by social media apps requires consent; however, most minors are not able to provide their own consent.

 

According to the latest updates of the EU ePrivacy rules, much of the data processed by social media apps and websites may require consent. In many parts of the world, most minors are not legally able to provide their own consent. The age of consent in this regard varies around the world, reaching up to 16 years old in some countries. In the UK specifically, children aged 13 or over are able to provide their own consent; for younger children, parents or guardians must provide consent on their behalf. As a data controller, it is important to know which data requires consent, from whom and how this consent will be collected, and which data can be processed on a lawful basis other than consent.
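
A controller might encode those thresholds in a simple routing check, as in the minimal sketch below. The UK threshold of 13 comes from the text above; the other values are assumptions that would need verifying jurisdiction by jurisdiction.

```python
# Hypothetical sketch: decide whether parental consent is needed.
# Only the UK value of 13 is taken from the text; others are assumed.
DIGITAL_CONSENT_AGE = {"UK": 13, "DE": 16, "FR": 15}

def needs_parental_consent(age: int, country: str) -> bool:
    threshold = DIGITAL_CONSENT_AGE.get(country, 16)  # conservative default
    return age < threshold

print(needs_parental_consent(12, "UK"))  # True: a parent or guardian consents
print(needs_parental_consent(14, "UK"))  # False: the child can consent
```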

 

In developing social media apps and features, it is important to consider several ethical principles.

 

Trustworthy AI should be lawful, ethical and robust. In developing social media apps and features, it is important to ensure that the data is kept secure, the algorithms are explainable and the content delivered to the user is free of bias. Ethical principles like technical robustness, privacy, transparency and non-discrimination are considered paramount. Because social media algorithms serve up content to users on explore and discover pages, it is imperative that the decisions made by these AI systems are transparent and that attention is paid to whether, and how, these systems may be discriminatory. An AI ethics assessment can provide valuable insight into how fair these AI decisions actually are, and into how to develop algorithms for social media apps and platforms ethically and lawfully.

 

We recently published a short vlog on our YouTube channel exploring the privacy and ethical concerns in social media. Be sure to check it out, like, comment and subscribe to our channel for more AI ethics and privacy content. 

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy rules, GDPR and Data Protection Act 2018 when handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Guidelines for Trustworthy AI

Updates on the guidelines for Trustworthy AI

Recently, the High-Level Expert Group on Artificial Intelligence released a document of guidelines on implementing trustworthy AI in practice. In this blog, we walk you through the guidelines for trustworthy AI.

 

Last month on our blog, we reported on the final assessment list for trustworthy artificial intelligence, released by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This list was the result of a three-part piloting process in which Aphaia participated, involving over 350 stakeholders. Data collection during this process involved an online survey, a series of in-depth interviews and the sharing of best practices in achieving trustworthy AI. Before implementing any AI system, it is necessary to make sure that it complies with the seven requirements that resulted from this effort.

 

While AI is transformative, it can also be very disruptive. The goal of producing these guidelines is to promote trustworthy AI based on three components: trustworthy AI should be lawful, ethical and robust, both from a technical and a social perspective. While the framework does not deal much with the legality of trustworthy AI, it provides guidance on the second and third components: making sure that AI is ethical and robust.

 

In our latest vlog, the first of a two-part series, we explored three of those seven requirements: human agency and oversight, technical robustness and safety, and privacy and data governance.

 

Human agency and oversight

“AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy”. Businesses should be mindful of the effects AI systems can have on human behaviour in a broad sense: human perception and expectation when confronted with AI systems that ‘act’ like humans, human affection, trust and human dependence.

According to the guidelines, “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.” AI systems should be human-centric in design and allow meaningful opportunity for human choice.

Technical robustness and safety

Based on the principle of prevention of harm outlined in the document of guidelines, “AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity.”

Organisations should also reflect on resilience to attack and security, accuracy, reliability, fall-back plans and reproducibility.
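
As one concrete illustration, a fall-back plan can be as simple as refusing to act on low-confidence model output and deferring to a human instead; the sketch below shows this pattern, with a hypothetical threshold and labels.

```python
# Hypothetical sketch of a fall-back plan: act on the model's output
# only when its confidence clears a threshold, otherwise defer.
CONFIDENCE_THRESHOLD = 0.85  # illustrative value

def decide(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction            # act on the model's output
    return "fallback_human_review"   # safe default when the model is unsure

print(decide("approve", 0.62))  # routed to a human reviewer
```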

Privacy and data governance

Privacy is a fundamental right affected by AI systems, which must guarantee privacy and data protection throughout a system’s entire lifecycle. It is recommended to implement a process that embraces both top-level management and the operational level within the organisation.

Some key factors to note with regard to the principle of prevention of harm are adequate data governance, relevance of the data used, data access protocols, and the capability of the AI system to process data in a manner that protects privacy.
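
To make “data access protocols” concrete, here is a minimal sketch of role-based access control over personal data fields; the roles, fields and policy are hypothetical.

```python
# Hypothetical sketch: a data access protocol as a role-to-fields policy.
ACCESS_POLICY = {
    "support_agent": {"name", "email"},
    "data_scientist": {"pseudonymised_id", "transaction_amount"},
    "dpo": {"name", "email", "pseudonymised_id", "transaction_amount"},
}

def can_access(role: str, field: str) -> bool:
    return field in ACCESS_POLICY.get(role, set())

print(can_access("support_agent", "transaction_amount"))  # False
print(can_access("dpo", "email"))                         # True
```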

 

When putting the assessment list into practice, it is recommended that you pay attention not only to the areas of concern but also to the questions that cannot easily be answered. The list is meant to guide AI practitioners in developing trustworthy AI, and should be tailored to each specific case in a proportionate way.

To learn more about the principles the AI HLEG has outlined in order to achieve trustworthy AI, look out for part two of this series by subscribing to our channel.

Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.