Artificial Intelligence applied to e-commerce: EU Parliament’s perspective

In this article we delve into the EU Parliament’s analysis of Artificial Intelligence applied to e-commerce.

This article is Part II of our “AI and retail” series. In Part I we talked about how AI could help the retail industry and provide opportunities to minimise the impact of COVID-19 while respecting privacy and ethical principles. In Part II we go through the document published by the EU Parliament covering new AI developments and innovations applied to e-commerce.

What can we expect from retail in the near future? Smart billboards, virtual dressing rooms and self-payment are just some of the elements that will lead the new era in retail. What do they all have in common? The answer is Artificial Intelligence and e-commerce. 

The trade-off between learning and learning ethically 

The current state of the art in mathematics, statistics and programming makes it possible to analyse massive amounts of data, which has driven the progress of Machine Learning. However, there is a gap between the development of Artificial Intelligence (AI) and respect for ethical principles. The EU Parliament deems it essential to embed the following values in AI technologies:

  • Fairness, or how to avoid unfair and discriminatory decisions.
  • Accuracy, or the ability to provide reliable information.
  • Confidentiality, which aims to protect the privacy of the people involved.
  • Transparency, with the aim of making models and decisions comprehensible to all stakeholders.

According to the document, Europe stands up for a human-centric AI that benefits humans at both an individual and a societal level: systems which incorporate European ethical values by design, are able to understand and adapt to real environments, interact in complex social situations, and expand human capabilities, especially on a cognitive level.

AI risks and challenges

The opacity of decisions, together with the prejudices and defects hidden in the training data, are stressed by the EU Parliament as the main issues to tackle when implementing AI.

AI algorithms act as black boxes that are able to make a decision based on customers’ movements, purchases, online searches or opinions expressed on social networks, but they cannot explain the reasons behind the proposed prediction or recommendation.

Biases arise because the algorithms are built on and trained with human actions, which may embed unfairness, discrimination or simply wrong choices. This will affect the result, possibly without either the decision maker or the subject of the final decision being aware of it.

Right to explanation

The right to explanation may be the solution to the black-box obstacle, and it is already covered in the General Data Protection Regulation (GDPR): “In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”. However, this right can only be fulfilled by means of a technology capable of explaining the logic of black boxes, and such technology is not yet a reality in most cases.

This leads to the following question: how can companies trust their products without understanding and validating their operation? Explainable AI technology is essential not only to avoid discrimination and injustice, but also to create products with reliable AI components, to protect consumer safety and to limit industrial liability.

There are two broad ways of dealing with the “understandable AI” problem:

  • Explanation by-design: XbD. Given a set of decision data, how to build a “transparent automatic decision maker” that provides understandable suggestions;  
  • Explanation of the black-boxes: Bbx. Given a set of decisions produced by an “opaque automatic decision maker”, how to reconstruct an explanation for each decision. This can be further divided into:
    • Model Explanation, when the goal of explanation is the whole logic of the dark model; 
    • Outcome Explanation, when the goal is to explain decisions about a particular case, and 
    • Model Inspection, when the goal is to understand general properties of the dark model.
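To make the taxonomy concrete, here is a minimal sketch of the “Outcome Explanation” idea: we probe an opaque decision maker around a single case and report, per feature, the smallest change that flips its decision. The `black_box` scorer and the feature names are our own illustrative assumptions, not taken from the EU Parliament document.

```python
# Hypothetical opaque model: we can only query it, not inspect its logic.
def black_box(income, n_purchases):
    """Stands in for an opaque scoring model (e.g. a credit or offer model)."""
    return 1 if (0.7 * income + 0.3 * n_purchases) > 50 else 0

def explain_outcome(case, feature_names, probe=black_box, steps=50):
    """Vary one feature at a time around `case` and report the first value
    at which the black box flips its decision, as a local explanation."""
    base = probe(*case)
    explanation = {}
    for i, name in enumerate(feature_names):
        for delta in range(-steps, steps + 1):
            probed = list(case)
            probed[i] = case[i] + delta
            if probe(*probed) != base:
                explanation[name] = (case[i], probed[i])
                break
    return base, explanation

decision, why = explain_outcome((60, 10), ["income", "n_purchases"])
print(decision, why)
```

The output would tell a customer, for instance, that the negative decision would have flipped had their income been 68 instead of 60, which is the kind of case-specific, human-readable answer the right to explanation calls for.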

The societal dimension dilemma

What would be the outcome of the interaction between humans and AI systems? Contrary to what we might expect, the EU Parliament points out that a crowd of intelligent individuals (assisted by AI tools) is not necessarily an intelligent crowd. This is because of unintended network effects and emergent aggregated behaviour.

What does the above mean when it comes to retail? The effect is the so-called “rich get richer” phenomenon: popular users, contents and products become more and more popular. Confirmation bias, or the tendency to prefer information that is close to our convictions, is also mentioned. As a consequence of the network effects of AI recommendation mechanisms for online marketplaces, search engines and social networks, the emergence of extreme inequality and monopolistic hubs is artificially amplified, while the diversity of offers and ease of access to markets are artificially impoverished.

The aim is for AI-based recommendation and interaction mechanisms to help move from the current, purely “advertisement-centric” model to one driven by customers’ interests.

In this context, what would be the conditions for a group to be intelligent? Three elements are key: diversity, independence, and decentralisation. For this purpose, the retail industry needs to design novel social AI mechanisms for online marketplaces, search engines and social networks, focused on mitigating the inequality introduced by the current generation of recommendation systems. It is paramount to have mechanisms that help individuals gain access to diverse content, different people and opinions.
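As a sketch of what such a diversity-promoting mechanism could look like, the following hypothetical re-ranker greedily penalises items from categories that are already represented, instead of returning the top-scored items alone. The catalogue data and the 0.5 penalty are assumptions for illustration, not a prescribed method.

```python
def rerank_with_diversity(items, k=3, penalty=0.5):
    """items: list of (name, category, score) tuples.
    Greedy selection that discounts the score of any item whose
    category has already been picked, so results span more categories."""
    selected, seen_categories = [], set()
    pool = list(items)
    for _ in range(min(k, len(pool))):
        def adjusted(item):
            name, category, score = item
            return score - (penalty if category in seen_categories else 0)
        best = max(pool, key=adjusted)
        pool.remove(best)
        selected.append(best[0])
        seen_categories.add(best[1])
    return selected

catalogue = [
    ("A", "electronics", 0.95),
    ("B", "electronics", 0.90),
    ("C", "books", 0.60),
    ("D", "electronics", 0.88),
]
print(rerank_with_diversity(catalogue))  # → ['A', 'C', 'B']
```

A pure top-3-by-score ranking would return three electronics items; the penalty surfaces the books item second, trading a little predicted relevance for the diversity of offers the document calls for.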

What is the goal?

AI should pursue objectives that are meaningful for consumers and providers, instead of success measures that are functional to intermediaries, while mitigating the gate-keeper effect of current platforms’ contracts. Such an ecosystem would also be beneficial to e-government and public procurement, and the same basic principles would apply both to marketplaces and to information and media digital services, targeted towards the interest of consumers and providers in sharing high-quality content.

Subscribe to our YouTube channel to be updated on further content. 

Are you facing challenges in the retail industry during this global coronavirus pandemic? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

AI and retail industry

AI and retail industry after COVID-19: opportunities, privacy, ethics (Part I)

Our lives will change after COVID-19, and AI can help the retail industry and provide opportunities to minimise the impact of the pandemic while respecting privacy and ethical principles.

In the last two months, we have witnessed how the entire world has changed: from schools to factories, we have all replaced our usual practices and activities with pandemic-proof ones. We are now well aware of how we have to wash our hands, we have been instructed on how to secure our home networks for homeworking and homeschooling, and we are cautious when it comes to the use of our geolocation data by governments. Retail is one of the main industries that has been affected, and AI can help to maximise the opportunities while respecting privacy and ethics.

What about the “new normal”? What will our everyday life look like after COVID-19? We cannot predict it with certainty, but we are quite sure that AI will have a key role in defining it. In this article, we go through some uses of AI in retail which may become very relevant in the post-pandemic world, also considering how they should be applied ethically.

What changes will the retail industry face?

The COVID-19 pandemic is the first of its kind in the last hundred years. The effects of the disease will presumably result in changes in our habits: the way people buy, socialise, learn, work and set up their preferences will not be the same as before the COVID-19 outbreak. 

How will this impact retail? Let’s think about what could be a common Friday in the UK or Spain. You get up, take the bus or the train to the office, then you have lunch with your workmates and go shopping in the afternoon looking for your brother’s birthday present. After that, you meet him and all your friends in a restaurant for the celebration party. It does sound normal, right? Well, maybe it does not any longer.

While, unfortunately, many people will lose their jobs because of the COVID-19 pandemic, others will avoid spending much money due to the uncertainty. Economic dilemmas will not be the only pitfall for the retail industry though, as the risk of infection will also limit our movements considerably. Getting back to our example above, maybe your brother would have decided to invite his friends home rather than to a restaurant, minimising contact with other people. And you may have bought the gift online while working from home, instead of going to the shop in person.

It seems that our free-time activities will move indoors, which will also affect the type of products we buy. For example, premium food or beverages to consume at home may become more relevant, together with high-end appliances that make our lives easier in our “new normal”.

What changes may come from the reinvention of the industry and how can AI help?

There are two main categories of changes, which we have sorted into “physical” and “digital”. A third one may be the combination of both.

Physical changes

Retailers will need to make their clients feel pandemic-safe when shopping in their stores, which requires the implementation of a wide range of measures, such as:

  • Line management. AI may help to count the number of people who are inside the store, control their movements and manage waiting times in the lines. An app may be designed for this purpose, based on slot booking and SMS notifications.
  • Social distancing. Heatmaps may be useful when it comes to capacity control and keeping a minimum distance between customers. AI could help to identify high-traffic areas and use the data to redesign the space.
  • Temperature and symptoms control. Facial and emotion recognition plus temperature sensors may automate the identification of infected customers with the purpose of preventing their contact with other people.
  • Logistics and delivery. Drones built with AI systems can autonomously deliver orders to customers based on a “zero contact” policy.
  • Self-payment. AI can definitely be key in the replacement of traditional cashier staff by self-payment machines, or even payment with no checkout at all, using virtual cards via sensors and deep learning.
  • Product disinfection. One of the main obstacles to in-store shopping is that COVID-19 may remain on surfaces for days, including on products such as clothes. One solution to this issue might be the use of virtual fitting rooms: combining AI and virtual reality (VR), customers can virtually try clothes on their own body with a personal 3D body avatar. This may apply both to ecommerce and in-store shopping.
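The line-management idea above could be sketched roughly as follows: the store tracks occupancy against a capacity limit and queues arriving customers, who would then be notified (e.g. by SMS) when a slot frees up. All class and method names here are hypothetical, a minimal sketch rather than a real system.

```python
from collections import deque

class StoreOccupancy:
    """Tracks who is inside the store and queues customers over capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.inside = set()
        self.waiting = deque()

    def arrive(self, customer):
        """Admit immediately if there is room, otherwise join the queue."""
        if len(self.inside) < self.capacity:
            self.inside.add(customer)
            return "enter"
        self.waiting.append(customer)
        return "wait"

    def leave(self, customer):
        """Free a slot and admit the next customer in line, if any."""
        self.inside.discard(customer)
        if self.waiting and len(self.inside) < self.capacity:
            admitted = self.waiting.popleft()
            self.inside.add(admitted)
            return admitted  # this customer would receive the SMS notification
        return None

store = StoreOccupancy(capacity=2)
print(store.arrive("ana"), store.arrive("ben"), store.arrive("carl"))  # enter enter wait
print(store.leave("ana"))  # carl is admitted from the queue
```

In practice the occupancy count would come from cameras or door sensors rather than explicit `arrive`/`leave` calls, but the queueing logic would be the same.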

Digital changes

Even though retailers will make great efforts to make their shops as attractive as possible for their customers, online shopping will inevitably become more popular, which may be to the detriment of physical stores. In this context, the industry will need to improve its ecommerce in order to respond properly to market demand. To make the most of this “new normal”, retailers may focus on:

  • Targeted advertising and offers. Considering there is little data about the new consumer habits, being able to tailor offers individually becomes essential in order to survive in the “new normal”. Profiling is crucial to predict individuals’ behaviour and maximise the chances of attracting a customer to the business.
  • Design and usability of their ecommerce pages. Practices like keeping navigation simple, automating the search or providing relevant recommendations make the customer feel comfortable within the ecommerce page, and therefore increase the chances of a purchase.
  • Track and compare different markets. “Reinvent or die”. New times require adaptation, and where not enough historical data is available, using other techniques, such as comparing countries or matching data from other products or services, may be paramount for the purpose of outlining the new trend.
  • Omnichannel marketing. Customer experience will be placed at the centre of the business model, so adjusting to customers based on their behaviour through the sales funnel is required to provide the ultimate personalised customer experience.
  • Product placement. When it comes to advertising, there may be new spaces to consider, such as Netflix films or series, which may now be more profitable than the traditional outdoor means.

There is a very thin line between physical and digital in an interconnected world though. While some examples may be clear, others may be a combination of both. For example, smart billboards work with data gathered from our physical presence plus information from our devices and our digital fingerprint.

In this context, relevant business opportunities may come from the proper analysis of data with the aim of figuring out the new customer behaviour. However, considering the temporary nature of this “new normal”, caused by a pandemic, flexibility should remain at the top of our minds, because being able to adapt as fast as possible to any change in demand will make the difference, in one direction or another.

Can we achieve all these changes ethically?

It seems that AI will play a key role in the adaptation of the retail industry to the evolution of consumer habits. The purpose businesses pursue with the implementation of changes is maintaining the turnover they had before the crisis, or even improving it, which can only be achieved by instilling confidence in their clients.

All the measures described above relate to health risk management, but one should remember that, even though these may currently be the most important risks due to the COVID-19 outbreak, there are also other concerns that businesses should deal with, especially when the new measures may emphasise them. These are, among others, data protection, privacy and ethics concerns.

Customers will not be able to trust a business that uses AI which is not trustworthy. This is why one should ensure that AI systems are:

(1) lawful –  respecting all applicable laws and regulations.

(2) ethical – respecting ethical principles and values.

(3) robust – both from a technical perspective while taking into account its social environment.

A Data Protection Impact Assessment should be run before implementing any changes using AI systems, considering both data protection and ethical dilemmas. It should help to verify the following requirements are met:

  • Human agency and oversight. For example, a member of staff should be able to intervene where a customer claims that the price charged for a product to their virtual card is not correct.
  • Technical robustness and safety. For example, businesses should ensure that an AI system does not use physical force against a person in order to block access to the shop where a high temperature has been detected.
  • Privacy and data governance. Full compliance with the GDPR and any other relevant laws should be guaranteed when using AI systems. For example, access to the data should be limited by user or role and pseudonymisation techniques should be applied where possible. 
  • Transparency. Traceability mechanisms should be provided and AI systems and their decisions should be explained. Customers need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations. For example, the controller should be able to explain the logic behind the access restriction to the store.
  • Diversity, non-discrimination and fairness. Any type of unfair bias should be avoided, whether in the training dataset, the creation of the algorithm or its application. For example, stores should make sure that no one is banned from entering for any reason other than temperature or symptoms. This could be the case where someone living in a low-income neighbourhood badly affected by COVID-19 is banned from accessing a mall just for coming from said area. This could lead to the marginalisation of vulnerable groups, or to the exacerbation of prejudice and discrimination.
  • Societal and environmental well-being. AI systems in this context are not only used for improving businesses’ turnover, but also to prevent the spread of the virus for the sake of public health. 
  • Accountability. Businesses should have measures like civil insurance in place to ensure responsibility and accountability for AI systems and their outcomes.
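As an illustration of the pseudonymisation technique mentioned under “Privacy and data governance”, customer identifiers can be replaced with a keyed hash: records remain linkable for analytics, while re-identification requires a secret key that should be stored separately from the data. This is a minimal sketch under those assumptions; the key shown is a placeholder, and function names are our own.

```python
import hmac
import hashlib

# Placeholder key: in a real deployment this would be generated securely
# and held apart from the pseudonymised dataset.
SECRET_KEY = b"store-this-key-separately"

def pseudonymise(customer_id: str) -> str:
    """Replace a raw customer ID with a keyed SHA-256 hash (HMAC)."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("customer-42")
# Deterministic: the same customer always maps to the same token, so
# purchase histories can still be joined without exposing the raw ID.
assert token == pseudonymise("customer-42")
assert token != pseudonymise("customer-43")
print(token)
```

Using a keyed HMAC rather than a plain hash matters: without the key, an attacker could simply hash candidate IDs and match them against the dataset.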

Earlier this month the EU Parliament published research on new AI developments and innovations applied to ecommerce. We will go through it thoroughly and discuss its in-depth analysis in Part II.

Subscribe to our YouTube channel to be updated on Part II. 

Are you facing challenges in the retail industry during this global coronavirus pandemic? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.


EDPB on Health Data

EDPB adopts Guidelines on the Processing of Health Data for Scientific Research Purposes during COVID-19

In the middle of the COVID-19 outbreak, the EDPB adopted Guidelines on the processing of health data for scientific research purposes to clarify some legal questions.

Considering that life may not return to normal until a COVID-19 vaccine becomes widely available, researchers from across the globe are focusing their efforts on producing results as soon as possible. In this context, questions regarding the application of the GDPR keep arising, therefore the European Data Protection Board (EDPB) has released guidelines on the processing of health data for scientific research purposes with the aim of providing basic guidance.

What is “health data”?

Article 4 (15) GDPR defines “data concerning health” as “personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status”. This meaning also covers the following:

  • Information that becomes health data through cross-referencing with other data, thus revealing the state of health or health risks, such as the assumption that a person is at high risk of severe illness from COVID-19 because of their medical conditions.
  • Information that becomes health data because of its usage in a specific context, such as information regarding a recent trip to a region affected by COVID-19.

The EDPB points out that “processing for the purpose of scientific research” should be interpreted in a broad manner in line with Recital 159 GDPR.

What is the legal basis for the processing?

According to the GDPR, processing of special categories of personal data is only allowed in some scenarios. The ones that may be more relevant when it comes to the processing of health data for scientific research purposes during COVID-19 pandemic are the following:

  • The data subject has given explicit consent.
  • Processing relates to personal data which are manifestly made public by the data subject.
  • Processing is necessary for the purposes of preventive or occupational medicine.
  • Processing is necessary for reasons of public interest in the area of public health.
  • Processing is necessary for archiving purposes in the public interest, scientific or historical research purposes based on Union or Member State law.

It should be noted also that “further processing for […] scientific research purposes […] shall, in accordance with Article 89 (1), not be considered to be incompatible with the initial purposes”, subject to appropriate safeguards.

Should the data subject be informed?

Pursuant to Articles 13 and 14 GDPR, the data subjects should be informed at the time when personal data is gathered, or “within a reasonable period after obtaining the personal data, but at the latest within one month” where it is not collected from the data subject.

However, considering that it is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection, the EDPB recommends delivering the information to the data subject within a reasonable period of time before the implementation of the new research project.

There are, however, four exemptions to the information obligation:

  • The data subject already has the information.
  • The provision of such information proves impossible, would involve a disproportionate effort or is likely to render impossible or seriously impair the achievement of the objectives of that processing. A controller seeking to rely on this exemption should demonstrate the factors that actually prevent it from providing the information to the data subjects or carry out a balancing exercise to assess the effort involved against the potential impact and effects of not providing the information.
  • Obtaining or disclosure is expressly laid down by Union or Member State law. This exemption is conditional upon the law in question providing “appropriate measures to protect the data subject’s legitimate interests”.
  • The personal data must remain confidential subject to an obligation of professional secrecy.

What other measures should be taken?

In light of the data minimisation principle, the EDPB deems it essential to specify the research questions and to assess the type and amount of data necessary to answer them properly before proceeding. Additionally, the data should be anonymised where possible.
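One common way to approach such anonymisation is generalisation with a k-anonymity check: exact values such as ages and postcodes are coarsened into ranges and prefixes, and each resulting combination must be shared by at least k records. This is a toy sketch with illustrative records and k=2, not the EDPB's prescribed method.

```python
from collections import Counter

def generalise(record):
    """Coarsen an (age, postcode) pair into a decade band and area prefix."""
    age, postcode = record
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", postcode[:3])

def is_k_anonymous(records, k):
    """True if every generalised combination appears in at least k records."""
    counts = Counter(generalise(r) for r in records)
    return all(c >= k for c in counts.values())

records = [(34, "SW1A1AA"), (37, "SW1B2BB"), (52, "SW1C3CC"), (58, "SW1D4DD")]
print([generalise(r) for r in records])
print(is_k_anonymous(records, k=2))  # → True
```

If a single 71-year-old from a different area were added, that person's generalised combination would be unique and the k=2 check would fail, signalling that further coarsening (or suppression) is needed before release.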

Proportionate storage periods should also be set, taking into account criteria such as the length and the purpose of the research.

As for the security measures that should be implemented, together with pseudonymisation, encryption, non-disclosure agreements and strict access-role distribution, the EDPB stresses that a data protection impact assessment should be carried out when such processing is “likely to result in a high risk to the rights and freedoms of natural persons”, and highlights the importance of involving data protection officers in the process.

What about the exercise of data subjects’ rights?

Together with the information obligation exemptions addressed above, Article 17 (3) (d) states that the right to erasure “shall not apply to the extent that processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) in so far as the right referred to in paragraph 1 is likely to render impossible or seriously impair the achievement of the objectives of that processing”.

It has to be noted that, in the light of the case law of the CJEU, all restrictions of the rights of data subjects must apply only in so far as is strictly necessary.

Are international data transfers allowed?

In the absence of an adequacy decision pursuant to Article 45 (3) GDPR or appropriate safeguards pursuant to Article 46 GDPR, Article 49 GDPR envisages certain specific situations under which transfers of personal data can take place as an exception, such as:

  • The data subject has explicitly consented to the proposed transfer.
  • The transfer is necessary for important reasons of public interest. 

It should be noted, however, that repetitive transfers of data to third countries as part of a long-lasting research project would need to be framed with appropriate safeguards in accordance with Article 46 GDPR.

Do you have questions about how to navigate data protection laws during this global coronavirus pandemic in your company? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

And if you want to be updated about COVID-19 and AI, don’t forget to subscribe to our YouTube channel.

AEPD approves first BCRs

The AEPD has approved its first Binding Corporate Rules (BCRs) under the GDPR

The Spanish DPA, the AEPD, has approved its first Binding Corporate Rules (BCRs) under the GDPR. The AEPD acted as lead DPA and obtained a favourable Opinion from the EDPB.

The AEPD has issued its final decision concerning the first Binding Corporate Rules, drafted by Fujikura Automotive Europe Group, two months after the EDPB approved them. The decision will be included in the register of decisions which have been subject to the consistency mechanism, and it means that Fujikura Automotive Europe Group will be free, from now on, to use the BCRs for transferring personal data to the group members based in third countries with appropriate safeguards.

What are BCRs?

The GDPR defines Binding Corporate Rules as “personal data protection policies which are adhered to by a controller or processor established on the territory of a Member State for transfers or a set of transfers of personal data to a controller or processor in one or more third countries within a group of undertakings, or group of enterprises engaged in a joint economic activity”.

Once approved by the competent DPA, BCRs are considered a valid instrument that provides appropriate safeguards for personal data transfers to third countries.

What is the approval process of BCRs?

First, the lead DPA confirms whether the draft BCRs include all the mandatory requirements of Article 47(2) GDPR. Then, pursuant to the consistency mechanism covered in Articles 63 and 64(1) GDPR, the EDPB issues its opinion, after which the lead DPA communicates its final decision and, where approved, the BCRs are included in the relevant register.

How did the process apply to this case?

Pursuant to Recital 110 GDPR, “a group of undertakings should be able to make use of approved binding corporate rules for its international transfers from the Union to organisations within the same group”, as long as said BCRs include “all essential principles and enforceable rights to ensure appropriate safeguards for transfers”. 

Returning to this case, the BCRs were first drafted by Fujikura Automotive Europe Group and reviewed by the AEPD as lead DPA. The AEPD then submitted its draft decision to the EDPB, which issued its opinion early this year, considering that the BCRs contained appropriate safeguards to ensure that the level of protection of natural persons guaranteed by the GDPR was not undermined when transferring and processing personal data to and by the group members based in third countries. Two months later, the AEPD finally approved them and communicated its final decision to the EDPB.

Do you need assistance with the appropriate safeguards that should apply to international transfers of personal data? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments and Data Protection Officer outsourcing. Contact us today.