Artificial Intelligence: From Ethics to Regulation

The study launched by the European Parliament last month builds on the AI-HLEG Ethics Guidelines for Trustworthy AI and explores options that might help move from AI ethics to regulation.

In our previous blogs and vlogs we have discussed the application of the Ethics Guidelines for Trustworthy AI in several contexts and industries, like retail and real estate. However, those guidelines still need to be made precise in practical terms. For this purpose, the European Parliament has just published a document exploring different options for developing specific policy and legislation to govern AI.

Important considerations about ethics

The European Parliament considers that there are some preliminary points about ethics that should be understood before analysing the further implications of ethics for AI:

  1. Ethics is not about checking boxes. It should be addressed through questions based on deliberation, critique and inquiry.
  2. Ethics should be understood as a continuous process where regular checks and updates become essential.
  3. AI should be conceived of as a social experiment, which makes it possible to understand its ethical constraints and the kinds of things that need to be learnt. This approach may facilitate the monitoring of social effects so that they can be used to improve the technology and its introduction into society.
  4. In moral dilemmas it is not possible to satisfy all ethical principles and value commitments at the same time, which means that sometimes there will be no single response to a problem, but rather a set of options and alternatives, each with an associated ‘moral residue’.
  5. The goal of ethics is to provide a rationale strong enough that an individual feels compelled to act in the way they believe is right/good.

Key AI ethics insights

According to the study, there are six main elements of AI that should be addressed when it comes to an ethical implementation of algorithms and systems:

  • Transparency. Policy makers need to deal with three aspects: the complexity of modern AI solutions, the intentional obfuscation by those who design them, and the inexplicability of how a particular input results in a particular output.
  • Bias and fairness. Training data is essential in this context, and the concepts of ‘fair’ and ‘accurate’ need to be defined.
  • Contextualization of the AI according to the society in which it has been created, and clarification of that society’s role in its development.
  • Accountability and responsibility.
  • Re-designing risk assessments to make them relevant.
  • Ethical technology assessments (eTA). The eTA is a written document intended to capture the dialogue between ethicist and technologist. It comprises the list of ethical issues related to the AI application, with the purpose of identifying all possible ethical risks and drawing out the potential negative consequences of implementing the AI system.

Why is regulation necessary?

The European Parliament points out the following reasons that motivate the need for legislation:

  • The criticality of ethical and human rights issues raised by AI development and deployment.
  • The need to protect people (i.e. the principle of proportionality).
  • The interest of the state (given that AI will be used in state-governed areas such as prisons, taxes, education, child welfare).
  • The need for creating a level playing field (e.g. self-regulation is not enough).
  • The need for the development of a common set of rules for all government and public administration stakeholders to uphold.

What are the policy options?

While ethics is about searching for broad answers to societal and environmental problems, regulation can codify and enforce ethically desirable behaviour.

The study proposes a number of policy options that may be adopted by European Parliamentary policy-makers:

  1. Mandatory Data Hygiene Certificate (DHB) for all AI system developers in order to be eligible to sell their solutions to government institutions and public administration bodies. This certificate would not require insight into the proprietary aspects of the AI system, nor would it require organisations to share their data sets with competing organisations.
  2. Mandatory ethical technology assessment (eTA) prior to deployment of the AI system to be conducted by all public and government organisations using AI systems. 
  3. Mandatory and clear definition of the goals of using AI when it comes to public administration institutions and government bodies. The aim is to avoid deploying AI in society in the hope of learning an unknown ‘something’. Instead, there must be a specific and explicit ‘something’ to be learned.
  4. Mandatory accountability report to be produced by all organisations deploying AI systems, as a response to the ethical and human rights issues identified in the eTA.

Practical considerations about eTA reports

Criteria

The seven key requirements of the Ethics Guidelines for Trustworthy AI may be used by organisations to structure an eTA (a minimal template is sketched after the list below), namely:

  • Human agency and oversight.
  • Technical robustness and safety.
  • Privacy and data governance.
  • Transparency.
  • Diversity, non-discrimination and fairness.
  • Societal and environmental well-being.
  • Accountability.
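
To make this concrete, here is a minimal sketch of how an organisation might structure an eTA log in code, using the seven requirements as its sections. The class and field names are our own illustration, not part of the Guidelines or the study:

```python
from dataclasses import dataclass, field

# The seven key requirements of the Ethics Guidelines for Trustworthy AI,
# used here as the sections of the assessment.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class EthicalIssue:
    description: str            # the ethical risk identified
    possible_consequences: str  # negative outcomes of deploying the system
    mitigation: str             # action agreed between ethicist and technologist

@dataclass
class EthicalTechnologyAssessment:
    ai_application: str
    issues: dict = field(default_factory=lambda: {r: [] for r in REQUIREMENTS})

    def log_issue(self, requirement: str, issue: EthicalIssue) -> None:
        if requirement not in self.issues:
            raise ValueError(f"Unknown requirement: {requirement}")
        self.issues[requirement].append(issue)

# Example entry for a hypothetical CV-screening system.
eta = EthicalTechnologyAssessment(ai_application="CV screening")
eta.log_issue(
    "Diversity, non-discrimination and fairness",
    EthicalIssue(
        description="Training data under-represents some applicant groups",
        possible_consequences="Systematically lower scores for those groups",
        mitigation="Re-balance training data and monitor outcomes per group",
    ),
)
```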

Alternatively, the European Parliament suggests using the following nine criteria: (1) Dissemination and use of information; (2) Control, influence and power; (3) Impact on social contact patterns; (4) Privacy; (5) Sustainability; (6) Human reproduction; (7) Gender, minorities and justice; (8) International relations and (9) Impact on human values.

The eTA should nevertheless be concrete, so it may be extended to cover: (a) the specific context in which the AI will be used; (b) the AI methodology used; (c) the stakeholders involved and (d) an account of the ethical values and human rights in need of attention.

Who is going to make eTA reports?

The tasks in the eTA log require experience in identifying ethical issues and placing them within a conceptual framework for analysis. For this reason, the European Parliament highlights the likelihood of a future role in the regulation process for ethics, and for ethicists engaged by organisations working on AI.

Will SMEs be able to afford this?

In order to create a level playing field across SMEs, the European Parliament plans to provide an adequate margin of three years and offer small and medium companies EU funding to assist them with report completion as well as with the necessary capacity-building measures. This model parallels the incremental process for companies to comply with the GDPR.

Does your company use AI? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.

CNIL Smart and Thermal Cameras Caution

French data protection authority CNIL calls for caution in the use of smart and thermal cameras, as certain proposed systems do not comply with the legal framework for the protection of personal data.

The preservation of anonymity in public spaces is an essential part of exercising personal freedoms: the right to privacy and the protection of personal data, the freedom to come and go, freedom of expression, the right to demonstrate, and so on. Capturing images of people in these public spaces undoubtedly carries the risk of hindering their fundamental rights and freedoms. The rampant deployment of surveillance devices would involve the systematic collection and analysis of data from individuals circulating in public spaces. There is the view that the massive deployment of these devices, systematically monitoring people, presents the risk of normalizing a feeling of surveillance among citizens, resulting in a modification – intended or suffered – of their behaviour. It can also lead to a phenomenon of habituation to, and trivialization of, intrusive technologies and constant surveillance.

More generally, the specific use of smart cameras in the context of the current health crisis raises important issues. CNIL had previously called for democratic debate on new video uses in September 2018, and more specifically on facial recognition in November 2019. Regarding thermal imaging cameras, it should be noted that the health authorities questioned by CNIL expressed reservations about these devices: they risk failing to identify infected people, since some are asymptomatic, and they can also be bypassed by taking antipyretic drugs (which reduce body temperature without treating the causes of fever).

The rights of individuals must be respected, including in the context of a health emergency.

The possible implementation of such surveillance systems must comply with the applicable legal framework (GDPR, French Data Protection Act, Data Protection Directive (EU) 2016/680 for Police and Criminal Justice Authorities) and must also be accompanied by a commitment to preserve individual freedoms, particularly the right to privacy. It is for these reasons in particular that surveillance devices are subject to a specific legislative framework in the Internal Security Code. CNIL recalls, however, that the use of “smart” cameras is not currently and specifically provided for in any text. Their usefulness in specific circumstances has therefore not been assessed or debated beyond a general level by the organizations deciding to set them up.

The CNIL insists on the need to provide an adequate textual framework.

The CNIL insists on the need to provide an adequate textual framework, which is required when sensitive data is processed or when the right to object cannot be exercised in practice in public spaces. It also calls for this framework to set out all the assurances that these smart camera devices must provide in terms of the GDPR (demonstration of their necessity and proportionality, limited retention period, pseudonymization or anonymization measures, lack of individual monitoring, etc.). In addition, the deployment of thermal cameras, which process health data (body temperature), should be given special attention.

The EDPB states: “For video surveillance based on legitimate interest (Article 6 (1) (f) GDPR) or for the necessity when carrying out a task in the public interest (Article 6 (1) (e) GDPR) the data subject has the right – at any time – to object, on grounds relating to his or her particular situation, to the processing in accordance [with Article 21 GDPR].” It goes on to note that unless the controller demonstrates compelling legitimate grounds that override the rights and interests of the data subject, the processing of the objecting individual’s data must stop, and requests from data subjects must be answered without undue delay, at the latest within one month.

A call for caution in the deployment of non-compliant devices.

The fight against the COVID-19 pandemic has led some parties to consider deploying smart cameras intended in particular to measure temperature, detect fevers or even ensure compliance with social distancing or wearing a mask. While CNIL recognises the legitimacy of the objective of flattening the curve of this global pandemic, they consider it necessary to warn that, based on a case-by-case analysis, it appears that most of these devices do not comply with the legal framework applicable to the protection of personal data.

When it comes to automated processing of personal data falling under the GDPR, such devices most often result either in the processing of sensitive data (in particular body temperature) without the consent of the persons concerned, or in depriving data subjects of their right to object. In both cases, these devices must be subject to a specific regulatory framework, which will require officials to question the proportionality of using such devices and the necessary assurances.

Does your company utilize smart cameras, thermal cameras or facial recognition? If yes, failure to adhere fully to the guidelines and rules of the GDPR and Data Protection Act 2018 could result in a hefty financial penalty. Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.

AI Ethics in Asset and Investment Management

Since AI systems govern most trading operations in asset and investment management, AI ethics becomes crucial to ensure no fundamental rights and principles are overridden.

“I am not uncertain”. Any Billions TV series fans around here? Those who are will know that this is the phrase the hedge fund’s employees say to their boss before trading when they have potentially incriminating inside information. They know that basing an investment decision on it may expose them to prosecution. What if almost the same results could be achieved by lawful means? This is where AI can definitely help, and where AI ethics becomes essential. In this article we delve into AI ethics in asset and investment management.

How is AI used in asset and investment management?

Asset and investment management companies, especially hedge funds, have traditionally used computer models to make the majority of their trades. In recent years, AI has allowed the industry to improve this practice with algorithms and systems that are fully autonomous and do not rely on data scientists and manual updates in order to operate regularly.

AI can analyse large amounts of data at extraordinary speed in real time, learning from any type of information that may be relevant, including news articles, images and social media posts. The insights are applied automatically, and the algorithms self-adjust through a process of trial and error to produce increasingly accurate predictions.

Their main roles are the following:

  • Finding new patterns in existing data sets (a minimal sketch follows this list);
  • making new forms of data analyzable;
  • designing new user experiences and interfaces;
  • reducing the negative effects of human biases on investment decisions.
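
As an illustration of the first role, here is a minimal sketch that groups assets by the similarity of their return series. The data is randomly generated, and the choice of model (k-means clustering) is ours for the example, not a claim about how any firm actually trades:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical daily-return matrix: 20 assets observed over 250 trading days.
rng = np.random.default_rng(seed=42)
returns = rng.normal(loc=0.0, scale=0.02, size=(20, 250))

# Standardise the series, then group assets with similar behaviour --
# a simple, unsupervised form of "finding new patterns in existing data".
features = StandardScaler().fit_transform(returns)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for asset_id, cluster in enumerate(labels):
    print(f"asset {asset_id:02d} -> behavioural cluster {cluster}")
```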

For asset and investment management firms, the above means improved efficiency and operational structure, better risk management, investment strategy design, trading efficiency and enhanced decision-making. However, it is paramount to be aware of the risk of other companies replicating their findings or deriving similar conclusions from equivalent techniques; elements such as trade secrets, proprietary software development and continuous innovation are therefore vital.

Why does AI ethics matter in this context?

There are many risks derived from the use of AI in asset and investment management that could be tackled with the implementation of ethical values and principles. 

Some of the issues that may come up in this context are described below:

  • Lack of auditability of AI.
  • Lack of control over data quality and robust production.
  • Failure to monitor and keep track of AI systems’ decisions.
  • AI’s inability to react to unexpected events that are not closely related to past trends and for which no historical data is available, such as pandemics.
  • Difficulty maintaining adherence to current protocols on data security, conduct and cybersecurity with AI technologies that are new and have not been tested long enough to ensure consistency.
  • Omission of social purpose, leaving some stakeholders behind.
  • Human biases, such as loss aversion (the preference for avoiding losses over generating equivalent gains) or confirmation bias (the tendency to interpret new evidence as affirming pre-existing beliefs).
  • AI systems’ own biases, derived from the training datasets, processes and models, from deficiencies in coding, or otherwise caused or acquired.
  • Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm using the service or tool, where relevant.

How should AI ethics be applied to asset and investment management?

The risks above can be sorted into seven categories, following the requirements of the EU Commission AI-HLEG Ethics Guidelines for Trustworthy Artificial Intelligence, as mapped below:

  • Failure to monitor and keep track of AI systems’ decisions; inability to react to unexpected events → Human agency and oversight.
  • Difficulty maintaining adherence to current protocols on data security, conduct and cybersecurity → Technical robustness and safety.
  • Lack of control over data quality and robust production → Privacy and data governance.
  • Lack of auditability of AI → Transparency.
  • Human biases; AI systems’ own biases → Diversity, non-discrimination and fairness.
  • Omission of social purpose → Societal and environmental well-being.
  • Gaps in the definition of the respective responsibilities of the third-party provider and the asset management firm → Accountability.

Among the solutions identified above, human oversight plays a key role. There is a need to redefine the job of data scientists, who would be in charge of selecting the right sources of alternative data, integrating them with the firm’s existing knowledge and its philosophy or culture, and making judgments about where future trends are going in those specific contexts the AI cannot cover.

The answer is for AI systems and humans to combine their abilities and play complementary roles through the so-called “human in the loop” approach, in which humans monitor the results of the machine learning model, as sketched below.
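
A minimal sketch of that approach, assuming a model that reports a confidence score alongside each decision; the threshold and the stub functions are purely illustrative:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off; in practice tuned and reviewed

def model_predict(features: dict) -> tuple[str, float]:
    # Stand-in for a real trading model: returns (decision, confidence).
    return "buy", 0.72

def human_review(features: dict, label: str, confidence: float) -> str:
    # Stand-in for an analyst's judgement on a flagged decision.
    print(f"flagged for review: {label!r} at confidence {confidence:.2f}")
    return "hold"

def decide(features: dict) -> str:
    """Human in the loop: low-confidence outputs are routed to a reviewer."""
    label, confidence = model_predict(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # the machine acts within its validated envelope
    return human_review(features, label, confidence)  # a human has the final say

print(decide({"momentum": 0.1}))  # -> "hold" (confidence 0.72 < 0.85)
```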

What should be the regulatory approach?

The financial sector is heavily regulated. Any new AI tools or digital advisors are subject to the same framework of regulation and supervision as traditional advisors, which is why it is critical to ensure robust cybersecurity defenses, such as data encryption, cybersecurity insurance and incident management policies. However, the use of AI still requires going one step further when it comes to regulation.

Currently, there are no specific international regulatory standards for AI in asset and investment management. This is tricky, though, because, as with the GDPR, there is a trade-off between innovation and respect for fundamental rights and freedoms.

Considering the specific nature of the industry, it might be beneficial first to extend the applicability of existing regulation to the uses of AI, and then to run regulatory sandbox programmes for testing new AI innovations in a controlled environment. This would allow regulators to identify basic needs and to understand in depth how the technology works before moving forward with new mandatory rules.

Meanwhile, self-regulation and codes of practice may be the first step towards the future regulatory framework, which could comprise robust and effective governance, regular checks on the use of AI systems within the company, testing and approval processes, governance committees, documented procedures and internal audits.

A proactive and industry-led approach to AI governance and ethics for asset and investment management is necessary to foster the development of standards.

Final remarks

In the words of Laurence Douglas Fink, chairman and CEO of BlackRock: “One of the key elements of human behavior is, humans have a greater fear of loss than enjoyment of success. All the academic studies will show you that the fear of loss of capital is far greater than the enjoyment of gains”. AI systems have neither fear of loss nor enjoyment of gains; they just have data. However, those human emotions are necessary to properly understand the market. This is why combining the two may be the most powerful tool for the asset and investment management industry.

Do you work in a sector other than finance? Don’t miss our AI and data protection in industry series. We have so far covered retail, in Part I and Part II.

Are you worried about COVID-19, whatever your industry? We have prepared a blog with some relevant tips to help you out, covering COVID-19 business adaptation basics.

Subscribe to our YouTube channel to be updated on further content. 

Do you have questions about how AI is transforming the financial sector and what the risks are? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Artificial Intelligence applied to e-commerce: EU Parliament’s perspective

In this article we delve into the EU Parliament’s analysis of Artificial Intelligence applied to e-commerce.

This article is Part II of our “AI and retail” series. In Part I we talked about how AI could help the retail industry and provide opportunities to minimise the impact of COVID-19 while respecting privacy and ethical principles. In Part II we go through the document published by the EU Parliament covering new AI developments and innovations applied to e-commerce.

What can we expect from retail in the near future? Smart billboards, virtual dressing rooms and self-payment are just some of the elements that will lead the new era in retail. What do they all have in common? The answer is Artificial Intelligence and e-commerce. 

The trade-off between learning and learning ethically 

The current state of the art in mathematics, statistics and programming makes it possible to analyse massive amounts of data, which has driven the progress of Machine Learning. However, there is a gap between the development of Artificial Intelligence (AI) and respect for ethical principles. The EU Parliament deems it essential to inject the following values into AI technologies:

  • Fairness, or how to avoid unfair and discriminatory decisions (one common way to make this measurable is sketched after this list).
  • Accuracy, or the ability to provide reliable information.
  • Confidentiality, which is addressed to protect the privacy of the involved people.
  • Transparency, with the aim of making models and decisions comprehensible to all stakeholders.
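
To show how the first of these values can be made measurable, here is a minimal sketch of one common fairness metric, the demographic parity difference, assuming binary decisions and a single binary protected attribute. It is one possible formalisation among many, not the one prescribed by the EU Parliament:

```python
import numpy as np

def demographic_parity_difference(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups.

    decisions: 0/1 outcomes (e.g. a discount offered or not).
    group:     0/1 protected-attribute membership.
    A value near 0 suggests similar treatment across the groups.
    """
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy data: group 0 receives positive decisions 60% of the time, group 1 only 20%.
decisions = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(decisions, group))  # ~0.4 -- a large gap
```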

According to the document, Europe stands for a human-centric AI that benefits humans at both the individual and the social level: systems that incorporate European ethical values by design, are able to understand and adapt to real environments, interact in complex social situations, and expand human capabilities, especially at the cognitive level.

AI risks and challenges

The opacity of the decisions together with the prejudices and defects hidden in the training data are stressed by the EU Parliament as the main issues to tackle when implementing AI.

AI algorithms act as black boxes: they are able to make a decision based on customers’ movements, purchases, online searches or opinions expressed on social networks, but they cannot explain the reason for the proposed prediction or recommendation.

The biases come from the algorithms being built on and trained with human actions, which may embody unfairness, discrimination or simply wrong choices. This can affect the result without the awareness of either the decision maker or the subject of the final decision.

Right to explanation

The right to explanation may be the solution to the black-box obstacle, and it is already covered in the General Data Protection Regulation (GDPR): “In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”. However, this right can only be fulfilled by means of a technology capable of explaining the logic of black-boxes and said technology is not a reality in most cases.

The above leads to the following question: how can companies trust their products without understanding and validating their operation? Explainable AI technology is essential not only to avoid discrimination and injustice, but also to create products with reliable AI components, to protect consumer safety and to limit industrial liability.

There are two broad ways of dealing with the “understandable AI” problem:

  • Explanation by design (XbD): given a set of decision data, how to build a “transparent automatic decision maker” that provides understandable suggestions;
  • Explanation of the black boxes (Bbx): given a set of decisions produced by an “opaque automatic decision maker”, how to reconstruct an explanation for each decision. This one can be further divided into (a surrogate-model sketch follows this list):
    • Model Explanation, when the goal is to explain the whole logic of the dark model;
    • Outcome Explanation, when the goal is to explain decisions about a particular case, and
    • Model Inspection, when the goal is to understand general properties of the dark model.
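
As an illustration of the Bbx route in its “Model Explanation” flavour, here is a minimal sketch that trains an interpretable surrogate (a shallow decision tree) to mimic an opaque model. Both models and the data are stand-ins chosen for the example:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in "opaque automatic decision maker": a random forest.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow, readable tree trained on the black box's own outputs,
# not on the ground truth -- the aim is to explain the model, not the world.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```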

The societal dimension dilemma

What would be the outcome of the interaction between humans and AI systems? Contrary to what we might expect, the EU Parliament points out that a crowd of intelligent individuals (assisted by AI tools) is not necessarily an intelligent crowd, because of unintended network effects and emergent aggregated behavior.

What does the above mean when it comes to retail? The effect is the so-called “rich get richer” phenomenon: popular users, contents and products get more and more popular. Confirmation bias, the tendency to prefer information that is close to our convictions, is also at play. As a consequence of the network effects of AI recommendation mechanisms for online marketplaces, search engines and social networks, the emergence of extreme inequality and monopolistic hubs is artificially amplified, while the diversity of offers and ease of access to markets are artificially impoverished.

The aim is that AI-based recommendation and interaction mechanisms help moving from the current purely “advertisement-centric” model to another driven by the customers’ interests.

In this context, what would be the conditions for a group to be intelligent? Three elements are key: diversity, independence and decentralisation. For this purpose, the retail industry needs to design novel social AI mechanisms for online marketplaces, search engines and social networks, focused on mitigating the inequality introduced by the current generation of recommendation systems. It is paramount to have mechanisms that help individuals gain access to diverse content, different people and other opinions; one simple such mechanism is sketched below.
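
The sketch re-ranks a recommender’s candidates with a popularity penalty so that less-exposed items can surface. The scores are invented for illustration, and the penalty weight alpha is a hypothetical knob, not a documented standard:

```python
def diversity_rerank(candidates: dict, popularity: dict, alpha: float = 0.3) -> list:
    """Re-rank items by relevance minus a popularity penalty.

    candidates: item -> relevance score from the base recommender.
    popularity: item -> current share of exposure, in [0, 1].
    alpha:      strength of the 'rich get richer' correction (illustrative).
    """
    adjusted = {
        item: score - alpha * popularity.get(item, 0.0)
        for item, score in candidates.items()
    }
    return sorted(adjusted, key=adjusted.get, reverse=True)

# Toy example: item "a" is the most relevant but already dominates exposure.
candidates = {"a": 0.90, "b": 0.85, "c": 0.80}
popularity = {"a": 0.95, "b": 0.40, "c": 0.05}
print(diversity_rerank(candidates, popularity))  # ['c', 'b', 'a']
```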

What is the goal?

AI should pursue objectives that are meaningful for consumers and providers, instead of success measures that are functional to intermediaries, while mitigating the gate-keeper effect of current platforms’ contracts. Such an ecosystem would also be beneficial to e-government and public procurement, and the same basic principles would apply both to marketplaces and to information and media digital services, targeted towards the interest of consumers and providers in sharing high-quality content.

Subscribe to our YouTube channel to be updated on further content. 

Are you facing challenges in the retail industry during this global coronavirus pandemic? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.