Artificial Intelligence applied to e-commerce: EU Parliament’s perspective
In this article we delve into the EU Parliament’s analysis of Artificial Intelligence applied to e-commerce.
This article is Part II of our “AI and retail” series. In Part I we discussed how AI could help the retail industry and provide opportunities to minimise the impact of COVID-19 while respecting privacy and ethical principles. In Part II we go through the document published by the EU Parliament covering new AI developments and innovations applied to e-commerce.
What can we expect from retail in the near future? Smart billboards, virtual dressing rooms and self-payment are just some of the elements that will lead the new era in retail. What do they all have in common? The answer is Artificial Intelligence and e-commerce.
The trade-off between learning and learning ethically
The current state of the art in mathematics, statistics and programming makes it possible to analyse massive amounts of data, which has driven the progress of Machine Learning. However, there is a gap between the development of Artificial Intelligence (AI) and respect for ethical principles. The EU Parliament deems it essential to embed the following values into AI technologies:
- Fairness, or how to avoid unfair and discriminatory decisions.
- Accuracy, or the ability to provide reliable information.
- Confidentiality, which aims to protect the privacy of the people involved.
- Transparency, with the aim of making models and decisions comprehensible to all stakeholders.
According to the document, Europe stands up for a human-centric AI that works for the benefit of humans at both an individual and a societal level: systems which incorporate European ethical values by design, are able to understand and adapt to real environments, interact in complex social situations, and expand human capabilities, especially on a cognitive level.
AI risks and challenges
The EU Parliament stresses the opacity of automated decisions, together with the biases and defects hidden in training data, as the main issues to tackle when implementing AI.
AI algorithms act as black boxes: they can make a decision based on customers’ movements, purchases, online searches or opinions expressed on social networks, but they cannot explain the reasons behind the resulting prediction or recommendation.
These biases arise because algorithms are built on, and trained with, records of human actions, which may embed unfairness, discrimination or simply wrong choices. This can affect the outcome without either the decision maker or the subject of the final decision being aware of it.
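A quick way to see how such bias can surface in practice is to compare decision rates across customer groups, a simple check in the spirit of demographic parity. The sketch below uses purely hypothetical group names and decisions; it is an illustration, not a method prescribed by the EU document:

```python
# Illustrative demographic-parity check: compare the rate of positive
# decisions (e.g. being offered a discount) across two customer groups.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Return the share of positive decisions per group."""
    totals, positives = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
# A large gap between groups flags a potential fairness issue.
disparity = max(rates.values()) - min(rates.values())
print(rates)      # → {'group_a': 0.75, 'group_b': 0.25}
print(disparity)  # → 0.5
```

In this toy data, group_a is favoured three times as often as group_b; in a real deployment such a gap would warrant investigating whether the training data encodes a historical bias.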
Right to explanation
The right to explanation may be the solution to the black-box obstacle, and it is already covered in the General Data Protection Regulation (GDPR): “In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”. However, this right can only be fulfilled by means of a technology capable of explaining the logic of black boxes, and said technology is not yet a reality in most cases.
This raises the following question: how can companies trust their products without understanding and validating how they operate? Explainable AI technology is essential not only to avoid discrimination and injustice, but also to create products with reliable AI components, to protect consumer safety and to limit industrial liability.
There are two broad ways of dealing with the “understandable AI” problem:
- Explanation by-design: XbD. Given a set of decision data, how to build a “transparent automatic decision maker” that provides understandable suggestions;
- Explanation of the black-boxes: Bbx. Given a set of decisions produced by an “opaque automatic decision maker”, how to reconstruct an explanation for each decision. This approach can be further divided into:
- Model Explanation, when the goal is to explain the whole logic of the opaque model;
- Outcome Explanation, when the goal is to explain the decision about a particular case; and
- Model Inspection, when the goal is to understand general properties of the opaque model.
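As an illustration of the Bbx route, one widely used technique for Model Explanation is a global surrogate: fitting a small, human-readable model to the predictions of the opaque one. The sketch below assumes scikit-learn and synthetic data, and is offered only as an example of the general idea, not as a method endorsed by the EU document:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for real customer features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The "opaque automatic decision maker": a random forest.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global surrogate: fit a shallow, readable tree to the *black box's*
# predictions rather than to the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

The printed tree is a set of if-then rules a stakeholder can read; fidelity tells you how faithfully those rules mimic the opaque model, which is exactly the tension the Parliament's taxonomy captures.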
The societal dimension dilemma
What would be the outcome of the interaction between humans and AI systems? Contrary to what we might expect, the EU Parliament points out that a crowd of intelligent individuals (assisted by AI tools) is not necessarily an intelligent crowd. This is because of unintended network effects and emergent aggregate behaviour.
What does this mean for retail? The effect is the so-called “rich get richer” phenomenon: popular users, content and products become more and more popular. Confirmation bias, or the tendency to prefer information that is close to our convictions, is also mentioned. As a consequence of the network effects of AI recommendation mechanisms in online marketplaces, search engines and social networks, extreme inequality and monopolistic hubs are artificially amplified, while diversity of offers and ease of access to markets are artificially impoverished.
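The “rich get richer” dynamic can be reproduced with a toy preferential-attachment simulation, where a recommender suggests products in proportion to their current popularity. The product counts and seed below are purely illustrative:

```python
import random

random.seed(42)

# Preferential attachment: each purchase picks a product with probability
# proportional to its current popularity, so early leads get amplified.
NUM_PRODUCTS, NUM_PURCHASES = 20, 5000
counts = [1] * NUM_PRODUCTS  # every product starts with one "purchase"

for _ in range(NUM_PURCHASES):
    chosen = random.choices(range(NUM_PRODUCTS), weights=counts)[0]
    counts[chosen] += 1

counts.sort(reverse=True)
top_share = sum(counts[:2]) / sum(counts)  # share of the top 2 products
print(f"top-2 products capture {top_share:.0%} of purchases")
```

Under a uniform (popularity-blind) recommender the top two products would capture about 10% of purchases; the popularity-weighted loop concentrates demand well beyond that, which is the inequality the Parliament warns about.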
The aim is for AI-based recommendation and interaction mechanisms to help move from the current, purely “advertisement-centric” model to one driven by customers’ interests.
In this context, what would be the conditions for a group to be intelligent? Three elements are key: diversity, independence and decentralisation. For this purpose, the retail industry needs to design novel social AI mechanisms for online marketplaces, search engines and social networks, focused on mitigating the inequality introduced by the current generation of recommendation systems. It is paramount to have mechanisms that help individuals gain access to diverse content, different people and opinions.
What is the goal?
AI should pursue objectives that are meaningful for consumers and providers, rather than success measures that serve intermediaries, while mitigating the gate-keeper effect of current platforms’ contracts. Such an ecosystem would also benefit e-government and public procurement, and the same basic principles would apply both to marketplaces and to information and media digital services, targeted towards the interest of consumers and providers in sharing high-quality content.
Subscribe to our YouTube channel to be updated on further content.
Are you facing challenges in the retail industry during this global coronavirus pandemic? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.