
AI Ethics in Asset and Investment Management

Since AI systems govern most trading operations in asset and investment management, AI ethics is crucial to ensuring that fundamental rights and principles are not overridden.

“I am not uncertain”. Any Billions TV series fan around here? Those who are will know that this is the phrase the employees of the hedge fund say to their boss before trading when they have potentially incriminating inside information. They know that basing an investment decision on it could expose them to prosecution. What if almost the same results could be achieved by lawful means? For this purpose, AI can definitely help, and AI ethics becomes essential. In this article we delve into AI ethics in asset and investment management.

How is AI used in asset and investment management?

Asset and investment management companies, especially hedge funds, have traditionally used computer models to make the majority of their trades. In recent years, AI has allowed the industry to improve this practice with algorithms and systems that are fully autonomous and do not rely on data scientists and manual updates in order to operate regularly.

AI can analyse large amounts of data at extraordinary speeds in real time, learning from any type of information that may be relevant, including news articles, images and social media posts. The insights are applied automatically, and algorithms self-adjust through a process of trial and error to produce increasingly accurate predictions.
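To make the “trial and error” self-adjustment concrete, here is a minimal, purely illustrative sketch (hypothetical data and names, not any firm’s actual model) of an online learner: a linear predictor that nudges its weights after every new observation as it streams in.

```python
# Minimal online (streaming) learner: a linear model that self-adjusts
# after every observation via stochastic gradient descent.
# All names and data are illustrative, not a production trading model.

def make_online_predictor(n_features, lr=0.1):
    weights = [0.0] * n_features

    def predict(x):
        return sum(w * xi for w, xi in zip(weights, x))

    def update(x, target):
        # Trial: predict; error: compare; adjust: nudge each weight.
        error = predict(x) - target
        for i in range(n_features):
            weights[i] -= lr * error * x[i]
        return error

    return predict, update

# Simulated stream of (signal vector, next-period return) pairs.
stream = [([1.0, 0.5], 0.8), ([0.2, 1.0], 0.6), ([1.0, 0.5], 0.8)]

predict, update = make_online_predictor(n_features=2)
for x, y in stream * 300:   # the stream keeps arriving; the model keeps adjusting
    update(x, y)

print(round(predict([1.0, 0.5]), 2))  # converges close to 0.8
```

The point of the sketch is the shape of the loop, not the model: each new data point immediately refines the weights, with no data scientist retraining the system by hand.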

AI’s main roles in the industry are the following:

  • finding new patterns in existing data sets;
  • making new forms of data analyzable;
  • designing new user experiences and interfaces;
  • reducing the negative effects of human biases on investment decisions.

For asset and investment management firms, the above means improvements in efficiency and operational structure, risk management, investment strategy design, trading efficiency and decision-making. However, firms must be especially aware of the risk of competitors replicating their findings or deriving similar conclusions from equivalent techniques; elements such as trade secrets, proprietary software development and continuous innovation are therefore vital.

Why does AI ethics matter in this context?

There are many risks derived from the use of AI in asset and investment management that could be tackled with the implementation of ethical values and principles. 

Some of the issues that may come up in this context are described below:

  • Lack of auditability of AI.
  • Lack of control over data quality and robustness in production.
  • Failure to monitor and keep track of AI systems’ decisions.
  • AI’s inability to react to unexpected events not closely related to past trends and for which no historical data is available, such as pandemics.
  • Difficulty maintaining adherence to current protocols on data security, conduct and cybersecurity with AI technologies that are new and have not been tested long enough to ensure consistency.
  • Omission of social purpose, leaving some stakeholders behind.
  • Human biases, such as loss aversion (the preference for avoiding losses over generating equivalent gains) or confirmation bias (the tendency to interpret new evidence so as to affirm pre-existing beliefs).
  • AI systems’ own biases, derived from the training datasets, processes and models, deficiencies in coding, or otherwise caused or acquired.
  • Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm using the service or tool, where relevant.

How should AI ethics be applied to asset and investment management?

The risks above can be sorted into seven categories, following the requirements of the EU Commission AI-HLEG Ethics Guidelines for Trustworthy Artificial Intelligence:

Each issue maps to one of the seven requirements as follows:

  • Failure to monitor and keep track of AI systems’ decisions → Human agency and oversight.
  • Inability to react to unexpected events → Technical robustness and safety.
  • Difficulty maintaining adherence to current protocols on data security, conduct and cybersecurity; lack of control over data quality and robustness in production → Privacy and data governance.
  • Lack of auditability of AI → Transparency.
  • Human biases and AI systems’ own biases → Diversity, non-discrimination and fairness.
  • Omission of social purpose → Societal and environmental well-being.
  • Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm → Accountability.

Among the solutions identified above, human oversight plays a key role. The job of data scientists needs to be redefined: they would be the ones in charge of selecting the right sources of alternative data, integrating them with the firm’s existing knowledge and its philosophy or culture, and making judgments about where future trends are heading in the specific contexts the AI cannot cover.

The answer is to have AI systems and humans combine their abilities and play complementary roles, through the so-called “human in the loop” approach, in which humans monitor the results of the machine learning model.
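A minimal sketch of what such a “human in the loop” gate might look like in practice (all thresholds, names and data structures here are illustrative assumptions, not an industry standard): the model’s low-confidence or high-impact decisions are routed to a human review queue instead of being executed automatically.

```python
# Illustrative "human in the loop" gate. Thresholds and field names
# are hypothetical; the point is the routing pattern, not the values.

from dataclasses import dataclass

@dataclass
class Signal:
    ticker: str
    action: str        # "buy" / "sell"
    confidence: float  # model's confidence in [0, 1]
    notional: float    # trade size in EUR

def route(signal, min_confidence=0.9, max_auto_notional=1_000_000):
    """Return 'auto' to execute automatically, 'review' for human sign-off."""
    if signal.confidence < min_confidence:
        return "review"            # model is unsure: a human judges
    if signal.notional > max_auto_notional:
        return "review"            # large impact: a human judges
    return "auto"

signals = [
    Signal("ACME", "buy", 0.97, 50_000),
    Signal("GLOB", "sell", 0.62, 50_000),    # low confidence
    Signal("MEGA", "buy", 0.95, 5_000_000),  # oversized trade
]
decisions = [route(s) for s in signals]
print(decisions)  # ['auto', 'review', 'review']
```

The design choice is that the machine handles the routine volume while anything outside its comfort zone is escalated, keeping humans accountable for the decisions the model is least equipped to make.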

What should be the regulatory approach?

The financial sector is heavily regulated. Any new AI tools or digital advisors are subject to the same framework of regulation and supervision as traditional advisors, which is why it is critical to ensure robust cybersecurity defences such as data encryption, cybersecurity insurance and incident management policies. However, when it comes to regulation, the use of AI still requires going one step further.

Currently, there are no specific international regulatory standards for AI in asset and investment management. This is tricky, though, because, as with the GDPR, there is a trade-off between innovation and respect for fundamental rights and freedoms.

Considering the specific nature of the industry, it might be beneficial first to extend the applicability of existing regulation to the uses of AI, and then to run regulatory sandbox programmes for testing new AI innovations in a controlled environment. This would make it possible to identify basic needs and to understand in depth how the technology works before moving forward with new mandatory rules.

Meanwhile, self-regulation and codes of practice may be the first step towards the future regulatory framework, which could comprise robust and effective governance, regular checks on the use of AI systems within the company, testing and approval processes, governance committees, documented procedures and internal audits.

A proactive and industry-led approach to AI governance and ethics for asset and investment management is necessary to foster the development of standards.

Final remarks

In the words of Laurence Douglas Fink, chairman and CEO of BlackRock: “One of the key elements of human behavior is, humans have a greater fear of loss than enjoyment of success. All the academic studies will show you that the fear of loss of capital is far greater than the enjoyment of gains”. AI systems have neither fear of loss nor enjoyment of gains; they just have data. However, those human emotions are necessary to properly understand the market. This is why combining the two may be the most powerful tool for the asset and investment management industry.

Do you work in a sector other than finance? Don’t miss our AI and data protection in industry series. So far we have covered retail, Part I and Part II.

Are you worried about COVID-19, regardless of your industry? We have prepared a blog post with some relevant tips to help you out, covering COVID-19 business adaptation basics.

Subscribe to our YouTube channel to stay updated on further content.

Do you have questions about how AI is transforming the financial sector and what are the risks? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.


Spanish DPA issues €25,000 fine to Glovo for Data Protection Officer appointment violation

The Spanish Data Protection Authority (DPA) AEPD fined Glovo €25,000 for not appointing a Data Protection Officer pursuant to article 37 GDPR.

Have you ever wondered whether your business is subject to the DPO designation requirement covered by the GDPR? The ambiguity of the GDPR when it comes to the definition of the cases where the appointment of a DPO is mandatory for controllers and processors is causing confusion in the industry. The latest fine in this regard comes from the Spanish DPA, and it is the first fine in Spain imposed for Data Protection Officer appointment violation.

What happened?

According to the AEPD’s decision, Glovo had not appointed a DPO. Moreover, the company’s website did not contain any information about an appointed DPO.

The Spanish DPA deems the lack of DPO appointment a breach of article 37 (1) GDPR because it considers the core activities of Glovo “consist of processing operations which, by virtue of their nature, their scope and/or their purposes, require regular and systematic monitoring of data subjects on a large scale”.

Glovo, for its part, argued that it had not breached the GDPR because it had appointed a Data Protection Board that was in charge of data protection matters. Furthermore, Glovo pointed out that it had in fact appointed a DPO. However, it should be noted that this appointment took place after the investigation started, and that neither the DPO nor the Data Protection Board was mentioned in the company’s Privacy Policy.

Similar cases

On 28 April 2020, the Belgian Data Protection Authority issued a decision fining the telecommunications and ICT company Proximus €50,000 for failing to involve the DPO in the handling of personal data breaches. Moreover, the company had no system in place to prevent a conflict of interest of the DPO, who also held several other positions within the company (head of the compliance and audit departments), in violation of Article 38(6) of the GDPR. As a consequence, the controller could not ensure that those tasks and duties did not result in a conflict of interest.

Should our business be worried?

Under the GDPR, you, as controller or processor, should appoint a DPO if:

  • you are a public authority or body (except for courts acting in their judicial capacity);
  • your core activities require large scale, regular and systematic monitoring of individuals (for example, online behaviour tracking); or
  • your core activities consist of large scale processing of special categories of data or data relating to criminal convictions and offences.

It should be stressed that controllers and processors can appoint a DPO even if they are not required to.
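The three triggers above can be expressed, purely for illustration (a simplified paraphrase of Article 37(1) GDPR, not legal advice — terms such as “large scale” require case-by-case interpretation that code cannot capture), as a small check:

```python
# Simplified, illustrative paraphrase of the Article 37(1) GDPR triggers
# for mandatory DPO appointment. Function and parameter names are our own;
# the legal test itself involves interpretation this cannot replace.

def dpo_required(is_public_authority,
                 large_scale_regular_monitoring,
                 large_scale_special_category_or_criminal_data):
    """True if any of the three Article 37(1) triggers applies."""
    return (is_public_authority
            or large_scale_regular_monitoring
            or large_scale_special_category_or_criminal_data)

# A delivery platform systematically tracking couriers and customers
# on a large scale would fall under the second trigger:
print(dpo_required(False, True, False))  # True
```

Note that a single trigger is enough: the obligation is disjunctive, which is why arguing over just one element (such as “large scale”) can decide the whole question.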

What is the origin of the disagreement between Glovo and the Spanish DPA?

While the Spanish DPA states that Glovo should have appointed a DPO because they process personal data on a large scale, Glovo’s counterargument is based on the fact that the GDPR does not define “large scale”.

The WP29 (now the EDPB) Guidelines on Data Protection Officers partially clarify this issue.

When determining whether processing is carried out on a large scale, the guidelines say the following factors should be taken into consideration:

  • the number of data subjects concerned;
  • the volume of personal data being processed;
  • the range of different data items being processed;
  • the geographical extent of the activity; and
  • the duration or permanence of the processing activity.

Our tip

Running a business online increases the need for a DPO, given the ubiquity of data on the internet. It is therefore advisable to seek advice from a data protection and privacy expert before deciding whether your company is subject to the mandatory GDPR requirement.

Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and UK Data Protection Act? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.


Dubai Data Protection Law No.5 will be implemented on July 1st, 2020.

Dubai Data Protection Law No.5 will be implemented on July 1st, 2020, replacing DIFC No. 1 of 2007.

Sheikh Mohammed bin Rashid Al Maktoum, Ruler of Dubai and Vice President and Prime Minister of the United Arab Emirates, recently enacted the Dubai International Financial Center (DIFC) Data Protection Law No. 5 of 2020. The new law comes into practice on the 1st of July 2020; the current law, Data Protection Law DIFC No. 1 of 2007, remains in effect until then. The Board of Directors of the DIFC has also updated its protocols and procedures to raise standards for data protection, accountability, record keeping and sanctions, as well as for cross-border transfers of personal data, and has set out new Data Protection Regulations governing notifications to the Commissioner on these standards. The new law combines best practices from legislation such as the GDPR (General Data Protection Regulation) and the CCPA (California Consumer Privacy Act), along with other modern technological concepts.

The new Dubai Data Protection Law includes some significant changes to the current law.

A key focus of the new DIFC Data Protection Law is to set expectations for Controllers and Processors in the DIFC regarding several privacy and security concerns. These include changes to contractual obligations to current clients, the appointment of data protection officers (where needed), the carrying out of data protection impact assessments, and contractual guarantees that individuals and their personal data remain protected. This further strengthens the UAE’s standing as a leading nation in data privacy and intellectual property legislation, making it one of the more attractive places for those looking to conduct business ethically.

While many of the changes take effect on July 1st, due to the COVID-19 pandemic businesses will have until October 1st, 2020 to come into compliance, before the law is enforced.

Updated procedures are outlined under the new terms and conditions of the legislation. These place accountability in the hands of Processors and Controllers and carry serious implications, including fines: maximum penalties have been increased and new fines introduced. It is key to note that AI and emerging technology companies are not eligible for cross-border data transfers or special category personal data processing. These regulations are centered on data sharing structures with state-run entities, an essential step for deepening ties with other regions.


The Dubai Data Protection Law is expected to bring multiple benefits to the region.

DIFC Governor Essa Kazim echoed many of the reasons for the change. He outlined that the DIFC continues to facilitate the growth of businesses by setting clear regulations for all organizations, based on global best practices on data privacy, thereby creating the right ecosystem for privacy regulation. Kazim believes this will position the UAE as one of the leading global financial centers by demonstrating its progressive thinking. It is expected to aid the Middle East, Africa and South Asia (MEASA) region in strengthening its leadership and being positioned as an international financial hub. Because the GDPR allows personal data transfers to countries whose legislation is deemed by the European Commission to provide an “adequate” level of personal data protection, this is expected to encourage, improve and increase business between the two regions.

Like Dubai Data Protection Law No. 5, California’s CCPA is also expected to be enforced from July 1, 2020.


Does your company have all of the mandated safeguards in place to ensure data protection compliance? Aphaia provides data protection impact assessments including in international context, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

France will impose digital tax, regardless of international levy

France will impose digital tax regardless of whether the rest of the world proceeds with a deal on an international levy, according to this article by Euractiv.

France will impose a digital tax on giant corporate tech companies. According to Finance Minister Bruno Le Maire, large tech companies like Amazon and Google have disproportionately profited from the ease of doing business online during the COVID-19 pandemic and amid social distancing protocols and practices, and the French, like many other EU nations, feel they must act to stimulate their local economy in what is expected to be a deep recession.

Washington may fight back on digital tax

There has been a big pushback on the implementation of a digital tax, which would largely affect digital corporate giants like Google, which records an annual global revenue of over $160 billion (over 145 billion euros). Washington, considering that many of these tech giants are US based, has threatened to fight back with its own trade tariffs, also claiming that France unfairly targets US digital companies.

Many EU nations are moving forward with digital tax implementation despite setbacks

While digital tax implementation at a uniform rate across European nations seems to be a long time coming, France is not alone in wanting to move forward. Countries like Italy, Britain and Spain have either already implemented a digital tax or plan to do so in the near future. However, due to opposition from countries like Ireland, progress towards an EU-wide digital tax seems stalled at the moment. In other nations, like the Czech Republic, Finance Minister Alena Schillerova has said she may actually delay the implementation of a digital tax until next year and lower the rate from the currently proposed 7% to 5%.

France will impose a digital tax, whether or not an international tax is implemented.

According to Euractiv, “Nearly 140 countries from the Organisation for Economic Cooperation and Development (OECD) are negotiating the first major rewriting of tax rules in more than a generation, to take better account of the rise of big tech companies such as Amazon, Facebook, Apple and Google that often book profit in low-tax countries.”

“Never has a digital tax been more legitimate and more necessary,” Finance Minister Bruno Le Maire told journalists on a conference call on May 13th. “In any case, France will apply as it has always indicated a tax on digital giants in 2020 either in an international form if there is a deal or in a national form if there is no deal.” Initially, in January, the government of France had offered to suspend its current digital tax on tech companies until the end of 2020, while an international tax deal was being negotiated. However, due to the circumstances surrounding the coronavirus outbreak, things have changed, with finance ministries more focused now than ever before, on saving their local economies.

EU seeks a better managed digital space, including digital tax.

As the US and EU economies become increasingly integrated with the digital sphere, the European Union has sought to introduce regulation to achieve a level playing field and protect both European consumers and businesses in this new digital world. With legislation like the GDPR controlling the flow of information across borders and protecting consumer data, many legislative authorities believe a digital tax is the necessary next step. As digital corporate giants like Amazon and Google, with little to no physical presence in Europe, have largely escaped what many would consider fair taxation as a result of their predominantly online operations, governments across the EU believe it is time to restructure and level the playing field. While many initiatives focus on investment and education, there is now a push from legislators to enforce a digital tax, particularly given the current need for income to stimulate local economies impacted by COVID-19. Ultimately, the result will be a better managed digital space in which online companies do not benefit from a disproportionate advantage.

Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.