AI Ethics in Asset and Investment Management
Since AI systems now govern most trading operations in asset and investment management, AI ethics is crucial to ensuring that no fundamental rights and principles are overridden.
“I am not uncertain”. Any Billions TV series fans around here? Those who are will know that this is the phrase the hedge fund’s employees say to their boss before trading when they hold potentially incriminating inside information. They know that basing an investment decision on it may expose them to prosecution. What if almost the same results could be achieved by lawful means? For this purpose, AI can definitely help, and AI ethics becomes essential. In this article we delve into AI ethics in asset and investment management.
How is AI used in asset and investment management?
Asset and investment management companies, especially hedge funds, have traditionally used computer models to make the majority of their trades. In recent years, AI has allowed the industry to improve this practice with algorithms and systems that are fully autonomous and do not rely on data scientists and manual updates in order to operate regularly.
AI can analyse large amounts of data at extraordinary speeds in real time, learning from any type of information that may be relevant, including news articles, images and social media posts. The insights are applied automatically, and algorithms self-adjust through a process of trial and error to produce increasingly accurate predictions.
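The “self-adjusting through trial and error” loop described above can be pictured as online learning: a model nudges its parameters after each observed outcome. The sketch below is purely illustrative, with toy data and hypothetical names, not a real trading model.

```python
# Minimal sketch of an online (trial-and-error) learner: a linear model
# that nudges its weights after each prediction error as new data arrives.
# The data and names are toy examples for illustration only.

def online_update(weights, features, outcome, lr=0.01):
    """One stochastic-gradient step for a linear predictor."""
    prediction = sum(w * x for w, x in zip(weights, features))
    error = outcome - prediction
    # Adjust each weight in the direction that reduces the error.
    return [w + lr * error * x for w, x in zip(weights, features)]

weights = [0.0, 0.0]
# A repeating stream of (features, observed outcome) pairs.
stream = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0), ([1.5, 1.5], 4.5)] * 200
for features, outcome in stream:
    weights = online_update(weights, features, outcome)
```

After enough passes over the stream, the weights converge towards values that fit the observed outcomes, without any manual re-tuning in between, which is the property that lets such systems operate without constant data-scientist intervention.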
Its main roles are the following:
- Finding new patterns in existing data sets;
- making new forms of data analyzable;
- designing new user experiences and interfaces;
- reducing the negative effects of human biases on investment decisions.
For asset and investment management firms, the above means improved efficiency and operational structure, better risk management, investment strategy design, trading efficiency and decision-making. However, firms must be especially aware of the risk of competitors replicating their findings or deriving similar conclusions from equivalent techniques; elements such as trade secrets, proprietary software development and continuous innovation are therefore vital.
Why does AI ethics matter in this context?
There are many risks derived from the use of AI in asset and investment management that could be tackled with the implementation of ethical values and principles.
Some of the issues that may come up in this context are described below:
- Lack of auditability of AI.
- Lack of control over data quality and robustness.
- Failure to monitor and keep track of AI systems’ decisions.
- AI’s inability to react to unexpected events, such as pandemics, that are not closely related to past trends and for which no historical data is available.
- Difficulty maintaining adherence to current protocols on data security, conduct, and cybersecurity for AI technologies that are new and have not been tested for long enough to ensure consistency.
- Omission of social purpose, leaving some stakeholders behind.
- Human biases, such as loss aversion (the preference for avoiding losses relative to generating equivalent gains) or confirmation bias (the tendency to interpret new evidence so as to affirm pre-existing beliefs).
- AI systems’ own biases, derived from the training datasets, processes and models, deficiencies in coding, or otherwise caused or acquired.
- Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm using the service or tool, where relevant.
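Loss aversion, one of the human biases listed above, can be made concrete with a Kahneman-Tversky-style value function, in which losses are felt roughly twice as strongly as equivalent gains. The parameter values below (alpha = beta = 0.88, lambda = 2.25) are the commonly cited empirical estimates, used here purely for illustration.

```python
# Illustrative sketch of loss aversion: losses loom larger than equivalent
# gains under a Kahneman-Tversky-style value function. Parameters are the
# commonly cited estimates, used for illustration only.

def subjective_value(outcome, alpha=0.88, beta=0.88, lam=2.25):
    """Perceived value of a gain or loss relative to a reference point of 0."""
    if outcome >= 0:
        return outcome ** alpha          # gains are valued concavely
    return -lam * ((-outcome) ** beta)   # losses are weighted ~2.25x heavier

gain = subjective_value(100)    # felt value of gaining 100
loss = subjective_value(-100)   # felt value of losing 100
print(abs(loss) > gain)  # → True: the loss hurts more than the gain pleases
```

A purely data-driven system has no such asymmetry: to it, a gain of 100 and a loss of 100 simply offset, which is both a strength (no panic selling) and a limitation (no instinct for capital preservation).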
How should AI ethics be applied to asset and investment management?
The risks above can be sorted into seven categories, following the requirements of the EU Commission AI-HLEG Ethics Guidelines for Trustworthy Artificial Intelligence:
| Issue | Solution |
| --- | --- |
| Failure to monitor and keep track of AI systems’ decisions. | Human agency and oversight. |
| Inability to react to unexpected events; difficulty maintaining adherence to current protocols on data security, conduct, and cybersecurity. | Technical robustness and safety. |
| Lack of control over data quality and robustness. | Privacy and data governance. |
| Lack of auditability of AI. | Transparency. |
| Human biases; AI systems’ own biases. | Diversity, non-discrimination and fairness. |
| Omission of social purpose. | Societal and environmental well-being. |
| Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm. | Accountability. |
Among the solutions identified above, human oversight plays a key role. The job of data scientists needs to be redefined: they would be in charge of selecting the right sources of alternative data, integrating them with the firm’s existing knowledge, philosophy and culture, and making judgments about where future trends are going in the specific contexts the AI cannot cover.
The answer is to have AI systems and humans combine their abilities and play complementary roles, through the so-called “human in the loop” approach, where humans monitor the results of the machine learning model.
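One minimal way to picture the “human in the loop” approach: the model proposes, and any low-confidence or high-impact decision is routed to a human reviewer rather than executed automatically. The thresholds, names and data structure below are hypothetical, a sketch of the pattern rather than a real trading gate.

```python
# Minimal sketch of a "human in the loop" gate: the model proposes trades,
# but any proposal below a confidence threshold or above a size limit is
# routed to a human reviewer instead of executing automatically.
# All thresholds and names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Proposal:
    ticker: str
    size: float        # order size in currency units
    confidence: float  # model's confidence in [0, 1]

def route(p, min_confidence=0.9, max_auto_size=1_000_000):
    """Return 'execute' for routine proposals, 'human_review' otherwise."""
    if p.confidence < min_confidence or p.size > max_auto_size:
        return "human_review"
    return "execute"

print(route(Proposal("ACME", 50_000, 0.95)))      # → execute
print(route(Proposal("ACME", 50_000, 0.60)))      # → human_review
print(route(Proposal("ACME", 5_000_000, 0.99)))   # → human_review
```

The design choice here is that the machine handles the routine volume while humans retain authority over exactly the cases (uncertainty, scale) where human judgment adds the most value.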
What should be the regulatory approach?
The financial sector is heavily regulated. Any new AI tools or digital advisors are subject to the same framework of regulation and supervision as traditional advisors, which is why it is critical to ensure robust cybersecurity defences, such as data encryption, cybersecurity insurance and incident management policies. However, the use of AI still requires going one step further when it comes to regulation.
Currently, there is a lack of specific international regulatory standards for AI in asset and investment management. This is tricky, though, because, as with the GDPR, there is a trade-off between innovation and respect for fundamental rights and freedoms.
Considering the specific nature of the industry, it might be beneficial first to extend the applicability of existing regulation to the uses of AI, and then to run regulatory sandbox programs for testing new AI innovations in a controlled environment. This would make it possible to identify basic needs and to understand in depth how the technology works before moving forward with new mandatory rules.
Meanwhile, self-regulation and codes of practice may be the first step to settle the future regulatory framework, which could comprise robust and effective governance, regular checks on the use of AI systems within the company, testing and approval processes, governance committees, documented procedures and internal audits.
A proactive and industry-led approach to AI governance and ethics for asset and investment management is necessary to foster the development of standards.
In the words of Laurence Douglas Fink, chairman and CEO of BlackRock: “One of the key elements of human behavior is, humans have a greater fear of loss than enjoyment of success. All the academic studies will show you that the fear of loss of capital is far greater than the enjoyment of gains”. AI systems have neither fear of loss nor enjoyment of gains; they just have data. However, those human emotions are necessary to properly understand the market. This is why combining both may be the most powerful tool for the asset and investment management industry.
Do you have questions about how AI is transforming the financial sector and what the risks are? We can help. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.