Artificial Intelligence applied to e-commerce: EU Parliament’s perspective

In this article we delve into the EU Parliament’s analysis of Artificial Intelligence applied to e-commerce.

This article is Part II of our “AI and retail” series. In Part I we discussed how AI could help the retail industry and provide opportunities to minimise the impact of COVID-19 while respecting privacy and ethical principles. In Part II we go through the document published by the EU Parliament covering new AI developments and innovations applied to e-commerce.

What can we expect from retail in the near future? Smart billboards, virtual dressing rooms and self-payment are just some of the elements that will lead the new era in retail. What do they all have in common? The answer is Artificial Intelligence and e-commerce. 

The trade-off between learning and learning ethically 

The current state of the art in mathematics, statistics and programming makes it possible to analyse massive amounts of data, which has driven the progress of Machine Learning. However, there is a gap between the development of Artificial Intelligence (AI) and respect for ethical principles. The EU Parliament deems it essential to inject the following values into AI technologies:

  • Fairness, or how to avoid unfair and discriminatory decisions.
  • Accuracy, or the ability to provide reliable information.
  • Confidentiality, which aims to protect the privacy of the people involved.
  • Transparency, with the aim of making models and decisions comprehensible to all stakeholders.

According to the document, Europe stands up for a human-centric AI for the benefit of humans at both an individual and a social level: systems which incorporate European ethical values by design, which are able to understand and adapt to real environments, interact in complex social situations, and expand human capabilities, especially on a cognitive level.

AI risks and challenges

The opacity of decisions, together with the prejudices and defects hidden in training data, is stressed by the EU Parliament as the main issue to tackle when implementing AI.

AI algorithms act as black-boxes that are able to make a decision based on customers’ movements, purchases, online searches or opinions expressed on social networks, but they cannot explain the reason for the proposed prediction or recommendation.

Biases arise because the algorithms are built on and trained with human actions, which may embody unfairness, discrimination or simply wrong choices. This can affect the result, possibly without the awareness of either the decision maker or the subject of the final decision.

Right to explanation

The right to explanation may be the solution to the black-box obstacle, and it is already covered in the General Data Protection Regulation (GDPR): “In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”. However, this right can only be fulfilled by means of technology capable of explaining the logic of black-boxes, and such technology is not yet a reality in most cases.

This raises the following question: how can companies trust their products without understanding and validating their operation? Explainable AI technology is essential not only to avoid discrimination and injustice, but also to create products with reliable AI components, to protect consumer safety and to limit industrial liability.

There are two broad ways of dealing with the “understandable AI” problem:

  • Explanation by-design: XbD. Given a set of decision data, how to build a “transparent automatic decision maker” that provides understandable suggestions;  
  • Explanation of the black-boxes: Bbx. Given a set of decisions produced by an “opaque automatic decision maker”, how to reconstruct an explanation for each decision. This can be further divided into:
    • Model Explanation, when the goal of explanation is the whole logic of the dark model; 
    • Outcome Explanation, when the goal is to explain decisions about a particular case, and 
    • Model Inspection, when the goal is to understand general properties of the dark model.
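The Bbx family can be made concrete with a small sketch (our own illustration, not taken from the EU Parliament document): a “Model Explanation” approach in which a transparent surrogate is trained to mimic an opaque model’s decisions, so its rules can be read by a human. Dataset, models and parameters below are illustrative choices only.

```python
# Illustrative "Model Explanation" (Bbx) sketch: approximate an opaque
# model's whole logic with an interpretable surrogate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# The "opaque automatic decision maker": a random forest.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A transparent surrogate: a shallow decision tree trained on the
# black-box's own predictions, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black-box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Fidelity to the black-box: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable decision rules
```

The printed rules explain the black-box globally; restricting the surrogate to the neighbourhood of one customer’s data would instead yield an Outcome Explanation.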

The societal dimension dilemma

What would be the outcome of the interaction between humans and AI systems? Contrary to what we might expect, the EU Parliament points out that a crowd of intelligent individuals (assisted by AI tools) is not necessarily an intelligent crowd. This is because of unintended network effects and emergent aggregate behaviour.

What does the above mean when it comes to retail? The effect is the so-called “rich get richer” phenomenon: popular users, contents and products become more and more popular. Confirmation bias, or the tendency to prefer information that is close to our convictions, is also mentioned. As a consequence of the network effects of AI recommendation mechanisms for online marketplaces, search engines and social networks, the emergence of extreme inequality and monopolistic hubs is artificially amplified, while the diversity of offers and ease of access to markets are artificially impoverished.

The aim is for AI-based recommendation and interaction mechanisms to help move from the current purely “advertisement-centric” model to one driven by customers’ interests.

In this context, what would be the conditions for a group to be intelligent? Three elements are key: diversity, independence, and decentralisation. For this purpose, the retail industry needs to design novel social AI mechanisms for online marketplaces, search engines and social networks, focused on mitigating the existing inequality introduced by the current generation of recommendation systems. It is paramount to have mechanisms that help individuals gain access to diverse content, different people and opinions.
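As an illustration of how such a diversity-aware mechanism might work (our own sketch; the document does not prescribe any algorithm), a Maximal Marginal Relevance re-ranking step can trade popularity-driven relevance against diversity when composing a recommendation list. The scores and similarities below are toy values.

```python
# Hypothetical sketch: re-rank recommendations with Maximal Marginal
# Relevance (MMR) to counter "rich get richer" feedback loops.
import numpy as np

def mmr_rerank(relevance, similarity, k, lam=0.7):
    """Greedily select k items maximising
    lam * relevance - (1 - lam) * max similarity to already-picked items."""
    selected = []
    candidates = list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy example: item 1 is a near-duplicate of the most popular item 0.
relevance = np.array([0.95, 0.94, 0.60, 0.55])
similarity = np.array([[1.0, 0.9, 0.1, 0.0],
                       [0.9, 1.0, 0.1, 0.0],
                       [0.1, 0.1, 1.0, 0.2],
                       [0.0, 0.0, 0.2, 1.0]])
print(mmr_rerank(relevance, similarity, k=3))  # → [0, 2, 1]
```

A pure-relevance ranking would return [0, 1, 2]; the re-ranker promotes the dissimilar item 2 ahead of the near-duplicate, giving less popular offers visibility.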

What is the goal?

AI should pursue objectives that are meaningful for consumers and providers, rather than success measures that serve intermediaries, while mitigating the gate-keeper effect of current platforms’ contracts. Such an ecosystem would also be beneficial to e-government and public procurement, and the same basic principles would apply both to marketplaces and to information and media digital services, targeted towards the interest of consumers and providers in sharing high-quality content.

Subscribe to our YouTube channel to be updated on further content. 

Are you facing challenges in the retail industry during this global coronavirus pandemic? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Healthcare Committee Data Breach


Healthcare Committee Data Breach in Örebro County, Sweden after sensitive personal data of a patient was published on the region’s website.


A healthcare committee data breach was uncovered after complaints were filed with the Swedish Data Protection Authority (DPA), concerning the publication of a patient’s personal data on the region’s website. According to an article by the European Data Protection Board, the complaints concerned a patient admitted to forensic psychiatry whose personal details were found, through an audit, to have been published on the region’s website. The Swedish DPA found that the region’s website wrongfully published sensitive data, with neither legitimate purpose nor legal basis, nor eligibility for exemption from the prohibition on handling sensitive personal data under the General Data Protection Regulation (GDPR). As a result, the DPA has fined the Committee and ordered some changes to ensure compliance moving forward.


Swedish DPA audit uncovers lack of written instructions for publishing, increasing risk of a data breach.


The Swedish DPA performed an audit after receiving a complaint about the data breach in question and discovered that there were no written instructions in place for the publication of information on the Committee’s website. The Committee had depended solely on oral communication for passing on instructions for publication. The publication of this patient’s personal data was the result of those instructions not being followed. While it was accidental, the publication of that personal data was the result of insufficient organisational measures to ensure protection of personal data.


Healthcare Committee Data Breach results in a fine of 120,000 Swedish kronor and an order for corrective action. 


The Swedish DPA has ordered the Committee to establish written instructions and to institute measures to ensure compliance with those instructions for those who are tasked with publishing data on their website. In addition to ordering the Committee to bring its handling of personal data into full compliance under the GDPR, the DPA has also ordered the payment of a 120,000 Swedish kronor administrative fine (approximately 11,000 Euro). The published document resulting in the data breach has since been removed from the region’s website. 


What should the Healthcare Committee have done in order to avoid the breach?


-Have in place an adequate internal data protection policy providing written and clear instructions about how to process and secure the personal data held by the Committee. 

Pursuant to Article 24 GDPR “(1) Taking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation. Those measures shall be reviewed and updated where necessary; (2) Where proportionate in relation to processing activities, the measures referred to in paragraph 1 shall include the implementation of appropriate data protection policies by the controller”.

-Deliver relevant training to the employees. When it comes to reducing the risk of data breaches, it is paramount to train the staff so that they understand the new processes you have put in place and also the data protection rules behind them.

Why are the measures above especially important in this case?

The compromised data involves health information, which is a special category of personal data; therefore additional safeguards apply, and the legal bases for processing it are limited to specific scenarios. However, it should be noted that the breach would have occurred even if the personal data published on the website had not been sensitive, because there was no legitimate basis to make the information public.

Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and Data Protection Act 2018? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

AI and retail industry

AI and retail industry after COVID-19: opportunities, privacy, ethics (Part I)

Our lives will change after COVID-19, and AI can help the retail industry and provide opportunities to minimise the impact of the pandemic while respecting privacy and the ethical principles.

In the last two months, we have witnessed how the entire world has changed: from schools to factories, we have all replaced our usual practices and activities with pandemic-proof ones. We are now well aware of how we have to wash our hands, we have been instructed on how to secure our home network for homeworking and homeschooling, and we are cautious when it comes to the use of our geolocation data by governments. One of the main industries affected is retail, and AI can help to maximise the opportunities while respecting privacy and ethics.

What about the “new normal”? How will our everyday life look after COVID-19? We cannot predict it with certainty, but we are quite sure that AI will have a key role in defining it. In this article, we go through some uses of AI in retail which may become very relevant in the post-pandemic world, also considering how they should be applied ethically.

What changes will the retail industry face?

The COVID-19 pandemic is the first of its kind in the last hundred years. The effects of the disease will presumably result in changes in our habits: the way people buy, socialise, learn, work and set up their preferences will not be the same as before the COVID-19 outbreak. 

How will this impact retail? Let’s think about what could be a common Friday in the UK or Spain. You get up, take the bus or the train to the office, then you have lunch with your workmates and go shopping in the afternoon looking for your brother’s birthday present. After that, you meet him and all your friends in a restaurant for the celebration party. It does sound normal, right? Well, maybe it does not any longer.

While, unfortunately, many people will lose their jobs because of the COVID-19 pandemic, others will avoid spending much money due to the uncertainty. Economic dilemmas will not be the only pitfall for the retail industry though, as the risk of infection will also widely limit our movements. Getting back to our example above, maybe your brother would have decided to invite his friends home rather than to a restaurant, minimising contact with other people. And you might have bought the gift online while working from home, instead of going to the shop in person.

It seems that our free-time activities will move in-house, which will also affect the type of products we buy. For example, premium food or beverages to consume at home may become more relevant, together with high-end appliances that make our lives easier in our “new normal”.

What changes may come from the reinvention of the industry and how can AI help?

There are two main categories of changes, which we have sorted into “physical” and “digital”. A third may be the combination of both.

Physical changes

Retailers will need to make their clients feel pandemic-safe when shopping in their stores, which requires the implementation of a wide range of measures, such as:

  • Line management. AI may help count the number of people inside the store, control their movements and manage waiting times in lines. An app could be designed for this purpose, based on slot booking and SMS notifications.
  • Social distancing. Heatmaps may be useful when it comes to capacity control and keeping a minimum distance between customers. AI could help identify higher-traffic areas and use the data to redesign the space.
  • Temperature and symptoms control. Facial and emotion recognition plus temperature sensors may automate the identification of infected customers, with the purpose of preventing their contact with other people.
  • Logistics and delivery. Drones built with AI systems can autonomously deliver orders to customers under a “zero contact” policy.
  • Self-payment. AI can definitely be key in replacing traditional cashier staff with self-payment machines, or even payment with no checkout at all, using virtual cards via sensors and deep learning.
  • Product disinfection. One of the main obstacles to in-store shopping is that COVID-19 may remain on surfaces for days, including products such as clothes. One solution might be virtual fitting rooms: combining AI and virtual reality (VR), customers can virtually try clothes on their own body with a personal 3D avatar. This may apply both to ecommerce and in-store shopping.
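As a rough illustration of the line-management idea above (a hypothetical sketch, not a reference to any real product), a slot-booking service mainly needs to enforce a per-slot capacity before confirming a reservation by SMS:

```python
# Hypothetical slot-booking sketch for in-store line management:
# the store caps concurrent visitors and customers reserve time slots.
from collections import defaultdict

class SlotBooker:
    def __init__(self, capacity_per_slot):
        self.capacity = capacity_per_slot
        self.bookings = defaultdict(set)   # slot label -> customer ids

    def book(self, slot, customer_id):
        """Reserve a place; True on success, False if the slot is full."""
        if len(self.bookings[slot]) >= self.capacity:
            return False
        self.bookings[slot].add(customer_id)
        return True

    def cancel(self, slot, customer_id):
        self.bookings[slot].discard(customer_id)

booker = SlotBooker(capacity_per_slot=2)
print(booker.book("10:00", "alice"))   # True
print(booker.book("10:00", "bob"))     # True
print(booker.book("10:00", "carol"))   # False: slot is full
```

In a real deployment the booking confirmation, rather than the boolean, would trigger the SMS notification, and counts could be fed by in-store people-counting sensors.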

Digital changes

Even though retailers will make great efforts to make their shops as attractive as possible for their customers, online shopping will inevitably become more popular, which may be a detriment to physical stores. In this context, the industry will need to improve its ecommerce offering in order to properly respond to market demand. To make the most of this “new normal”, retailers may focus on:

  • Targeted advertising and offers. Considering there is little data about the new consumer habits, being able to tailor offers individually becomes essential in order to survive in the “new normal”. Profiling is crucial to predict individuals’ behaviour and maximise the chances of attracting a customer to the business.
  • Design and usability of their ecommerce pages. Practices like keeping navigation simple, automating search or providing relevant recommendations make the customer feel comfortable within the ecommerce page, therefore increasing the likelihood of a purchase.
  • Track and compare different markets. “Reinvent or die”. New times require adaptation, and where not enough historical data is available, other techniques, such as comparing countries or matching data from other products or services, may be paramount for mapping out the new trend.
  • Omnichannel marketing. Customer experience will be placed at the centre of the business model, so adjusting to customers based on their behaviour through the sales funnel is required to provide the ultimate personalised customer experience.
  • Product placement. When it comes to advertising, there may be new spaces to consider, such as Netflix films or series, which may now be more profitable than traditional outdoor media.
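To illustrate the profiling idea behind targeted offers (again a hypothetical sketch with toy data, not a production recommender), a minimal item-based collaborative filter scores unseen products by their similarity to a customer’s past purchases:

```python
# Hypothetical item-based collaborative filtering sketch on toy data.
import numpy as np

# Rows: customers, columns: products (1 = purchased).
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

# Cosine similarity between product columns.
norms = np.linalg.norm(purchases, axis=0)
sim = (purchases.T @ purchases) / np.outer(norms, norms)

def recommend(customer, top_n=1):
    """Score unseen products by similarity to what the customer bought."""
    owned = purchases[customer]
    scores = sim @ owned
    scores[owned == 1] = -np.inf          # do not re-recommend purchases
    return np.argsort(scores)[::-1][:top_n].tolist()

print(recommend(0))  # → [2]
```

Customer 0 bought products 0 and 1; product 2 was also bought by a similar customer, so it is the top suggestion. Real profiling under the GDPR would additionally need a lawful basis and transparency towards the customer.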

There is a very thin line between physical and digital in an interconnected world, though. While some examples may be clear-cut, others may be a combination of both. For example, smart billboards work with data gathered from our physical presence plus information from our devices and our digital fingerprint.

In this context, relevant business opportunities may come from proper analysis of the data, with the aim of figuring out the new customer behaviour. However, considering the temporary nature of this “new normal”, caused by a pandemic, flexibility should remain at the top of our minds, because being able to adapt as fast as possible to any changes in demand will make the difference, in one direction or another.

Can we achieve all these changes ethically?

It seems that AI will play a key role in the adaptation of the retail industry to the evolution of consumer habits. The purpose businesses pursue with these changes is to maintain the turnover they had before the crisis, or even improve it, which can only be achieved by instilling confidence in clients.

All the measures described above relate to health risk management, but one should remember that, even though these may currently be the most important risks due to the COVID-19 outbreaks, there are other concerns that businesses should deal with, especially where the new measures may amplify them. These are, among others, data protection, privacy and ethics concerns.

Customers will not be able to trust a business that uses AI which is not trustworthy. This is the reason why one should ensure that the AI systems are:

(1) lawful –  respecting all applicable laws and regulations.

(2) ethical – respecting ethical principles and values.

(3) robust – both from a technical perspective and taking into account its social environment.

A Data Protection Impact Assessment should be run before implementing any changes using AI systems, considering both data protection and ethical dilemmas. It should help verify that the following requirements are met:

  • Human agency and oversight. For example, a member of staff should be able to intercede where a customer claims that the price charged for a product on their virtual card is not correct.
  • Technical robustness and safety. For example, businesses should ensure that no physical harm is inflicted on a person by an AI system when blocking access to a shop where a high temperature has been detected.
  • Privacy and data governance. Full compliance with the GDPR and any other relevant laws should be guaranteed when using AI systems. For example, access to the data should be limited by user or role and pseudonymisation techniques should be applied where possible. 
  • Transparency. Traceability mechanisms should be provided and AI systems and their decisions should be explained. Customers need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations. For example, the controller should be able to explain the logic behind the access restriction to the store.
  • Diversity, non-discrimination and fairness. Any type of unfair bias should be avoided, whether in the training dataset, the creation of the algorithm or its application. For example, stores should make sure that no one is banned from entering for any reason other than temperature or symptoms. This could be the case where someone living in a low-income neighbourhood badly affected by COVID-19 is banned from accessing a mall just for coming from that area. This could lead to the marginalisation of vulnerable groups, or to the exacerbation of prejudice and discrimination.
  • Societal and environmental well-being. AI systems in this context are not only used for improving businesses’ turnover, but also to prevent the spread of the virus for the sake of public health. 
  • Accountability. Businesses should have measures, such as civil liability insurance, in place to ensure responsibility and accountability for AI systems and their outcomes.

Earlier this month the EU Parliament published research on new AI developments and innovations applied to ecommerce. We will go through it thoroughly and discuss its in-depth analysis in Part II.

Subscribe to our YouTube channel to be updated on Part II. 

Are you facing challenges in the retail industry during this global coronavirus pandemic? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.


EDPB GDPR consent guidelines

EDPB published GDPR consent guidelines

The European Data Protection Board (EDPB) published guidelines on consent under the Regulation, including a complete analysis of the notion of GDPR consent.


The EDPB published guidelines on consent under the Regulation on May 4th 2020, which include a complete analysis of GDPR consent. In their 31-page document released earlier this week, the EDPB outlines the requirements for obtaining and demonstrating valid consent. Consent is one of six lawful bases to process personal data, as outlined in Article 6 of the GDPR. Data controllers must consider what the appropriate lawful ground for the intended processing of personal data would be, before initiating any activities that involve processing such data.


Elements of valid GDPR consent

Article 4(11) of the GDPR specifies that consent of the data subject means “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”


The use of the term free implies that the data subject has a real choice in the matter. As a general rule, the GDPR states that if the data subject has no real choice, feels compelled to consent or feels they will endure negative consequences in the absence of their consent, then consent will not be valid. Any element of inappropriate pressure or influence upon the data subject which prevents a data subject from exercising their free will, shall render the consent invalid.


In order for consent to be valid, it must also be specific, meaning that consent must be given in relation to “one or more specific” purposes and that the data subject has a choice in respect of each of them. The requirement that consent must be ‘specific’ aims to guarantee a degree of user control and transparency for the data subject. According to Article 6(1)(a) of the GDPR, data subjects must always give consent for a specific, explicit and legitimate processing purpose.


The GDPR also maintains the requirement that consent must be informed. According to Article 5 of the GDPR, transparency is one of the fundamental principles, closely related to the principles of fairness and lawfulness. It is imperative that data subjects are provided with sufficient information prior to obtaining their consent. In the absence of sufficient information, the consent will be invalid and the controller may be in breach of Article 6 of the GDPR. 


The EDPB believes that at least the following information is required for obtaining valid consent:

  i. the controller’s identity,
  ii. the purpose of each of the processing operations for which consent is sought,
  iii. what (type of) data will be collected and used,
  iv. the existence of the right to withdraw consent,
  v. information about the use of the data for automated decision-making in accordance with Article 22(2)(c) where relevant, and
  vi. the possible risks of data transfers due to the absence of an adequacy decision and of appropriate safeguards as described in Article 46.


In addition to the aforementioned criteria, consent must always be given through an active motion or declaration. It should be clear that the data subject is consenting to the particular processing. Article 4(11) GDPR clarifies that valid consent requires an unambiguous indication by means of a statement or by a clear affirmative action. Clear affirmative action implies that the data subject must have taken a deliberate action to consent to the particular processing.

Obtaining explicit GDPR consent

In situations where a serious data protection risk presents itself, it is imperative that explicit consent is obtained in order to process personal data. According to Article 9 of the GDPR, explicit consent is needed for the processing of special categories of data. The term explicit refers to the manner in which consent is expressed by the data subject: the data subject has to give an express statement of consent in order for consent to be deemed valid. This can take the form of a signed statement, an electronic form, an email, a scanned document carrying the signature of the data subject, or an electronic signature. In theory, oral statements can also sufficiently express valid explicit consent; however, it may be difficult for the controller to prove that all conditions for valid explicit consent were met when the statement was recorded.

Additional conditions for obtaining valid GDPR consent

According to Article 7 of the GDPR, it is the sole responsibility of the controller to demonstrate a data subject’s consent. Recital 42 states: “Where processing is based on the data subject’s consent, the controller should be able to demonstrate that the data subject has given consent to the processing operation.” Controllers may keep records of consent statements received, or freely choose the method through which they comply with this provision. The obligation to demonstrate consent lasts for as long as the data processing activity is being carried out. While there is no specific time limit in the GDPR for how long consent will last, the EDPB recommends, as a best practice, that consent be refreshed at appropriate intervals.


As for the withdrawal of consent, the GDPR prescribes that the controller must ensure that consent can be withdrawn by the data subject as easily as it was given, and at any given time. The GDPR does not specify that the giving and withdrawing of consent must be done in the same manner; however, when consent is given electronically, via a simple mouse click, swipe or keystroke, the data subject should be able to withdraw that consent just as easily. This requirement of easy withdrawal is described as a necessary aspect of valid consent in the GDPR. Controllers also have an obligation to delete data that was processed on the basis of consent once that consent is withdrawn, provided that there is no other purpose justifying continued retention.




The guidelines provide some examples of when consent is valid and when it is not. We have put together the ones we consider most relevant below:


Own- and third-party marketing unlawfully bundled

“Within the same consent request a retailer asks its customers for consent to use their data to send them marketing by email and also to share their details with other companies within their group. This consent is not granular as there is no separate consents for these two separate purposes, therefore the consent will not be valid. In this case, a specific consent should be collected to send the contact details to commercial partners. Such specific consent will be deemed valid for each partner …, whose identity has been provided to the data subject at the time of the collection of his or her consent, insofar as it is sent to them for the same purpose (in this example: a marketing purpose).”

Service provision and marketing unlawfully bundled

“A website provider puts into place a script that will block content from being visible except for a request to accept cookies and the information about which cookies are being set and for what purposes data will be processed. There is no possibility to access the content without clicking on the “Accept cookies” button. Since the data subject is not presented with a genuine choice, its consent is not freely given. This does not constitute valid consent, as the provision of the service relies on the data subject clicking the “Accept cookies” button. It is not presented with a genuine choice.”

“Based on recital 32, actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action: such actions may be difficult to distinguish from other activity or interaction by a user and therefore determining that an unambiguous consent has been obtained will also not be possible. Furthermore, in such a case, it will be difficult to provide a way for the user to withdraw consent in a manner that is as easy as granting it”.

Access to mobile phone features unlawfully bundled with the product

“When downloading a lifestyle mobile app, the app asks for consent to access the phone’s accelerometer. This is not necessary for the app to work, but it is useful for the controller who wishes to learn more about the movements and activity levels of its users. When the user later revokes that consent, she finds out that the app now only works to a limited extent. This is an example of detriment as meant in Recital 42, which means that consent was never validly obtained (and thus, the controller needs to delete all personal data about users’ movements collected this way).”

However, if refusing consent only means losing the benefits linked to that consent, there is no detriment:

“A data subject subscribes to a fashion retailer’s newsletter with general discounts. The retailer asks the data subject for consent to collect more data on shopping preferences to tailor the
offers to his or her preferences based on shopping history or a questionnaire that is voluntary to fill out. When the data subject later revokes consent, he or she will receive non-personalised fashion discounts again. This does not amount to detriment as only the permissible incentive was lost.”

Furthermore, there is no detriment if an alternative channel to access the product is provided:


“A fashion magazine offers readers access to buy new make-up products before the official launch. The products will shortly be made available for sale, but readers of this magazine are offered an exclusive preview of these products. In order to enjoy this benefit, people must give their postal address and agree to subscription on the mailing list of the magazine. The postal address is necessary for shipping and the mailing list is used for sending commercial offers for products such as cosmetics or t-shirts year round. The company explains that the data on the mailing list will only be used for sending merchandise and paper advertising by the magazine itself and is not to be shared with any other organisation. In case the reader does not want to disclose their address for this reason, there is no detriment, as the products will be available to them anyway.”


A suitable policy should be put in place with regard to children’s consent:


“An online gaming platform wants to make sure underage customers only subscribe to its services with the consent of their parents or guardians. The controller follows these steps: Step 1: ask the user to state whether they are under or over the age of 16 (or alternative age of digital consent) If the user states that they are under the age of digital consent; Step 2: service informs the child that a parent or guardian needs to consent or authorise the processing before the service is provided to the child. The user is requested to disclose the email address of a parent or guardian;  Step 3: service contacts the parent or guardian and obtains their consent via email for processing and take reasonable steps to confirm that the adult has parental responsibility; Step 4: in case of complaints, the platform takes additional steps to verify the age of the subscriber; If the platform has met the other consent requirements, the platform can comply with the additional criteria of Article 8 GDPR by following these steps”.


Do you need assistance with the appropriate safeguards that should apply to consent for processing of personal data? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. Contact us today.