Facial Recognition in Public Spaces

Facial recognition in public spaces can read your face and your mood, and record other information about you.

Beyond reading your face and your mood, this technology can record other information about you, like how often you pass a certain spot. Smart billboards are able to tailor their offerings based on this information. As a result, many people are beginning to question their level of comfort with this technology, the extent of its use, and its impact on their lives.

In our latest YouTube vlog, “Facial recognition in public spaces”, we explore the thoughts and ideas many people already have regarding this new but quickly developing facial recognition technology and its impact on our society. We also discuss its specific impact on you. You can take a look at our latest vlog here:

GDPR defines ‘profiling’ as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”

With regard to direct marketing from these smart billboards, the GDPR states that “Where personal data are processed for the purposes of direct marketing, the data subject should have the right to object to such processing, including profiling to the extent that it is related to such direct marketing, whether with regard to initial or further processing, at any time and free of charge.” It goes on to state that “That right should be explicitly brought to the attention of the data subject and presented clearly and separately from any other information”.

In addition to this and pursuant to article 35 GDPR, “Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data”.

Smart billboards are a new and intrusive technology that may process personal data about data subjects who may even not be aware of it, which limits their rights granted under the GDPR. We took a deeper look into this in our blog “Regulating the right to privacy in the AI era”.

Due to the privacy risks that facial recognition involves, according to a leaked EU Commission white paper, the EU may place a 3 to 5 year ban on facial recognition technology in public places. The US, for its part, may also adopt measures in this area, such as a moratorium on federal government use of facial recognition technology until Congress passes legislation regulating it, and a prohibition on the use of federal funds for this technology.

Does your company utilize facial recognition software to conduct profiling or direct marketing? Aphaia’s AI ethics assessments will assist in ensuring that it falls within the scope of the EU’s and UK’s ethical framework. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. Contact us today.

EU White Paper on AI

EU White Paper on Artificial Intelligence Overview

The EU White Paper on Artificial Intelligence contains a set of proposals to develop a European approach to this technology.

As reported in our blog, the leaked EU White Paper obtained by Euractiv proposes several options for AI regulations moving into the future. In our post today we are going through them to show you the most relevant ones.

The EU approach is focused on promoting the development of AI across Member States while ensuring that relevant values and principles are properly observed throughout the whole process of design and implementation. One of the main goals is cooperation with China and the US, the most important players in AI, while always protecting the EU’s interests, including European standards and the creation of a level playing field.

Built on the existing policy framework, the EU White Paper points out three key pillars for the European strategy on AI:

  • Support for the EU’s technological and industrial capacity.

  • Readiness for the socioeconomic changes brought about by AI.

  • Existence of an appropriate ethical and legal framework.

How is AI defined?

This EU White Paper provides a definition of AI based on its nature and functions. Accordingly, AI is conceived as “software (integrated in hardware or self-standing) which provides for the following functions:

  • Simulation of human intelligence processes.

  • Performing certain specified complex tasks.

  • Involving the acquisition, processing and rational or reasoned analysis of data”.

Europe’s position on AI

What is the role of Europe in the development of AI? Despite the EU’s strict rules on privacy and data protection, Europe has several strengths that may give it leverage in the “AI race” against other markets like China or the US, namely:

  • Excellent research centres with many publications and scientific articles related to AI.

  • World-leading position in robotics and B2B markets.

  • Large amounts of public and industrial data.

  • EU funding programme.

On the negative side, there is a pressing need to significantly increase investment levels in AI and to maximise them through cooperation among Member States, Norway and Switzerland. Europe also has a weak position in consumer applications and online platforms, which results in a competitive disadvantage in data access.

The EU White Paper offers some proposals to reinforce the EU’s strengths in AI and address those areas that need to be boosted:

  • Establishing a world-leading AI computing and data infrastructure in Europe, using High Performance Computing centres as a basis.

  • Federating knowledge and achieving excellence by reinforcing the EU scientific community for AI and facilitating its collaboration and networking through strengthened coordination.

  • Supporting research and innovation to stay at the forefront with the creation of a “Leaders Group” set up with C-level representatives of major stakeholders.

  • Fostering the uptake of AI through the Digital Innovation Hubs and the Digital Europe Programme.

  • Ensuring access to finance for AI innovators.

What are the prerequisites to achieve EU’s goals on AI?

Access to data

Ensuring access to data for EU businesses and the public sector is essential to develop AI. One of the key measures considered by the Commission for redressing the issue of data access is the development of common data spaces which combine the technical infrastructure for data sharing with governance mechanisms, organised by sector or problem area.

Regulatory framework

The above can be built on EU’s comprehensive legal framework, which includes the GDPR, the Regulation on the Free Flow of Data and the Open Data Directive. The latter may play a fundamental role indeed, as based on its latest revision, the Commission intends to adopt by early 2021 an implementing act on high-value public sector datasets, which will be available for free and in a machine-readable format.

Although AI is already subject to an extensive body of EU legislation including fundamental rights, consumer law and product safety and liability, it also poses new challenges that come from the data dependency and the connectivity within new technological ecosystems. There is therefore a need for developing a regulatory framework that covers all the specific risks that AI brings. In order to achieve this task successfully, the EU White Paper highlights the relevance of complementing and building upon the existing EU and national frameworks to provide policy continuity and ensure legal certainty.

The main risks the implementation of AI in society faces are the following:

  • Fundamental rights, including bias and discrimination.

  • Privacy and data protection.

  • Safety and liability.

It is important to note that the aforementioned risks can be the result either of flaws in the design of the AI system, problems with the availability and quality of data or issues stemming from machine learning as such.
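The data-related source of risk can be made concrete with a toy example. The sketch below is purely illustrative (all groups and figures are invented): a naive scorer built on skewed historical data simply reproduces that skew, turning a data-quality flaw into a fairness problem.

```python
# Illustrative toy only: how a data-quality flaw becomes a bias risk.
# A naive "model" that scores applicants by their group's historical
# approval rate inherits whatever bias the historical data contains.
# All groups and outcomes below are hypothetical.

historical_approvals = {
    # group -> past outcomes (1 = approved), skewed by past bias
    "group_a": [1, 1, 1, 0, 1],
    "group_b": [0, 0, 1, 0, 0],
}

def approval_rate(group: str) -> float:
    """Share of past applicants from this group who were approved."""
    outcomes = historical_approvals[group]
    return sum(outcomes) / len(outcomes)

def naive_decision(group: str) -> bool:
    """Approve if the group's historical rate clears a 0.5 threshold.
    Nothing about the individual is considered: the skew in the data
    is passed straight through to the decision."""
    return approval_rate(group) >= 0.5

print(naive_decision("group_a"), naive_decision("group_b"))  # True False
```

The flaw here is not in the learning machinery but in the data it was built on, which is exactly why the White Paper lists data availability and quality alongside design flaws and machine learning itself as sources of risk.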

According to the EU White Paper, the Commission identified the following weaknesses of the current legislative framework in consultation with Member States, businesses and other stakeholders:

  • Limitations of scope as regards fundamental rights.

  • Limitations of scope with respect to products: EU product safety legislation requirements do not apply to services based on AI.

  • Uncertainty as regards the division of responsibilities between different economic operators in the supply chain.

  • Changing nature of products.

  • Emergence of new risks.

  • Difficulties linked to enforcement given the opacity of AI.

How should roles and responsibilities concerning AI be attributed?

The Commission considers that, given the number of actors involved in the life cycle of an AI system, the principle that should guide the attribution of roles and responsibilities in the future regulatory framework is that responsibility lies with the actor(s) best placed to address the relevant risk. Therefore, the future regulatory framework for AI is expected to set up obligations for both developers and users of AI, together with other groups such as suppliers of services. This approach would ensure that risks are managed comprehensively while not going beyond what is feasible for any given economic actor.

What legal requirements should be imposed on the agents involved?

According to the EU White Paper, the Commission seems keen on setting up legal requirements of a preventive ex ante character rather than an ex post one, even though the latter are also mentioned. That said, the requirements might include:

  • Accountability, transparency and information requirements to disclose the design parameters of the AI system.

  • General design principles.

  • Requirements regarding the quality and diversity of datasets.

  • Obligation for developers to carry out an assessment of possible risks and steps to minimize them.

  • Requirements for human oversight.

  • Additional safety requirements.

Ex post requirements establish liability and possible remedies for harm or damage caused by a product or service relying on AI.

That said, which regulatory options is the Commission considering?

  1. Voluntary labeling.

This alternative would be based on a voluntary labeling framework for developers and users of AI. The requirements would become binding only once the developer or user has opted to use the label.

  2. Sectorial requirements for public administration and facial recognition.

This option would focus on the use of AI by public authorities. For this purpose, the Commission proposes the model set out by the Canadian directive on automated decision-making, in order to complement the provisions of the GDPR.

The Commission also suggests a time-limited ban (“e.g. 3-5 years”) on the use of facial recognition technology in public spaces, aiming at identifying and developing a sound methodology for assessing the impact of this technology and establishing possible risk management measures.

  3. Mandatory risk-based requirements for high-risk applications.

This option would foresee legally binding requirements for developers and users of AI, built on existing EU legislation. Given the need to ensure proportionality, it seems these new requirements might be applied only to high-risk applications of AI. This brings to light the need for clear criteria to differentiate between “low-risk” and “high-risk” systems. The Commission provides the following definition of “high-risk”: “applications of AI which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity”, and points out the need to consider it together with the particular sector where the AI system would be deployed.

  4. Safety and liability.

Targeted amendments of the EU safety and liability legislation could be considered to address the specific risks of AI.

  5. Governance.

An effective system of enforcement is deemed an essential component of the future regulatory framework, which would require a strong system of public oversight.

Does your company use AI systems? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides GDPR adaptation consultancy services, including data protection impact assessments and EU AI ethics assessments, as well as Data Protection Officer outsourcing. Contact us today.

Legislative enforcement and AI

Regulating the right to privacy in the AI era

New developments in 2019 have shown that the GDPR rules on AI profiling could not be timelier. From smart billboards to home audio devices, AI has been deployed to make sense of everything we expose about ourselves, including our faces and things we casually say. Despite these developments, which have raised concerns on numerous occasions, legislative enforcement in the field has been somewhat slow. Will 2020 be the year when privacy regulation finally hits back?

AI technology

Despite toughening legislation, there still seems to be a clear bias towards technology deployment, irrespective of whether its implementation meets compliance requirements. It is worth noting that technology as such is rarely ‘non-compliant’; rather, it is the way it is used that raises issues.

Take smart billboards capable of reading your facial features, which were deployed at numerous busy, publicly accessible locations in 2019. Have these projects all undergone a General Data Protection Regulation (GDPR) privacy impact assessment, as required by law? One should note that video monitoring of a public space in itself bears considerable privacy risks. When real-time analysis of your facial features is added to such video monitoring, the GDPR clearly gives you the right to object to profiling. Even if we disregard the obvious difficulty of expressing your objection to a billboard on a busy street, how will your objection to any such profiling be observed the next time you pass by?
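For illustration only, one conceivable mechanism for honouring a standing objection would be to check a hashed face template against an objection registry before any profiling runs. This is a purely hypothetical sketch, not a description of any deployed system, and it glosses over the very difficulty the paragraph above raises: even storing a hash of a biometric template is itself processing of personal data.

```python
# Hypothetical sketch: a billboard operator checking a standing
# objection before profiling. Names and data are invented.
import hashlib

# Hashes of face templates whose owners have objected to profiling
objection_registry: set[str] = set()

def template_hash(face_template: bytes) -> str:
    """Derive a stable identifier from a biometric template.
    (Storing even this hash is itself personal-data processing.)"""
    return hashlib.sha256(face_template).hexdigest()

def register_objection(face_template: bytes) -> None:
    """Record a data subject's objection to profiling."""
    objection_registry.add(template_hash(face_template))

def may_profile(face_template: bytes) -> bool:
    """Profiling must be skipped for anyone who has objected."""
    return template_hash(face_template) not in objection_registry

alice = b"alice-template"  # stand-in for a real biometric template
register_objection(alice)
print(may_profile(alice))             # False: objection on record
print(may_profile(b"someone-else"))   # True: no objection recorded
```

The design tension is visible even in this toy: to remember an objection, the system must retain something derived from the objector's biometrics, which is precisely the kind of processing the objection was meant to stop.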

Machine learning enables us to make increasing sense of vast amounts of data. If they have not already, the solutions deployed in 2020 are projected to feel even more intrusive. Ironically, however, this might not apply to certain smart systems designed to learn to provide a more subtle, less visibly intrusive, and therefore more effective link between our preferences and the commercial offers served to us. This might help us understand which aspect of targeted advertising we loathe more: privacy intrusion or its clumsy implementation.

The law and AI

The notion that the law is simply ‘unable to keep up with technology’ is not only an inadequate response to the problem but is also largely unfounded as a claim. The GDPR includes specific provisions on profiling and automated decision-making, specifically tailored to the use of artificial intelligence in relation to the processing of personal data. Such processing is subject to the right to obtain human intervention and the right to object to it. Additional limitations in relation to special categories of data also exist. Certain non-EU countries have started adopting similar GDPR principles, including Brazil, which passed the General Data Protection Law (LGPD) in 2018.

The California Consumer Privacy Act (CCPA), while less focused specifically on AI, empowers consumers by enabling them to prohibit the ‘sale of data’. This is by no means insignificant. Without the possibility to compile and merge data from different sources, its value for machine learning purposes arguably decreases. Conversely, without the ability to sell data, incentives to engage in excessive data analytics can somewhat dissipate.

When it comes to a broader framework for the regulation of artificial intelligence, the legal situation is for now less clear. Principles and rules are currently confined to non-binding guidelines, such as the EU Guidelines for Trustworthy AI. But this does not impact the privacy aspects, where European regulators are already able to impose fines of up to €20 million or 4% of a company’s global turnover. CCPA fines are lower but might be multiplied by the number of users affected.

The AI regulatory landscape

Early in 2019, the French data protection authority CNIL imposed a fine of €50 million on Google for insufficient transparency in relation to targeted advertising. As noted by CNIL, “essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalisation, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.” Whereas the fine was far from the upper limit imposable via the GDPR, the case paves the way for further questions to be asked by data protection authorities in 2020.

For example, are machine-learning algorithms and the data sources used for them sufficiently explained? When the data protection authorities seek answers to such questions, will they rely on the information provided by companies? Alternatively, they might start digging deeper based on anecdotal evidence. How come the user is seeing a particular ad? Is this based on a sophisticated machine-learning algorithm or analysing data that should not have been analysed?

So far, privacy legal battles have largely focused on formal compliance, such as in both ‘Schrems’ cases. But AI usage trends in 2020 might force regulators to look deeper into what is actually going on inside home-based and cloud-based black boxes. As I write this article, the EU has just moved to impose a temporary ban on facial recognition in public places.

Does your company use artificial intelligence in its day to day operations? If so, failure to adhere fully to the guidelines and rules of the GDPR and Data Protection Act 2018 could result in hefty financial penalties. Aphaia’s data protection impact assessments and Data Protection Officer outsourcing will assist you with ensuring compliance.

This article was originally published on Drooms blog.

Profiling and GDPR

Profiling and the GDPR: here’s what you need to know

In today’s blog we take a deeper look into profiling and the GDPR.

Given the highly automated technological world that we live in, we’re pretty certain that by now you’ve heard the term “profiling” once, twice, thrice, who knows, maybe even hundreds of times. But what is it really? How is it used by organizations? Could it affect individuals’ rights? What measures does the GDPR put in place to limit and regulate any possible unethical or discriminatory effects of profiling?

According to the ICO, profiling is a means of analysing aspects of an individual’s personality, behaviour, interests and habits to make predictions or decisions about them. This can be achieved through the use of algorithms to find correlations between separate datasets.

The ICO notes that organizations generally use profiling to:

  • “find something out about individuals’ preferences;

  • predict their behaviour; and/or

  • make decisions about them.”

Expounding further, the following are offered by the ICO as examples of profiling:

  • The collection and analysis of personal data on a large scale, using algorithms, AI or machine-learning;

  • The identification of associations to build links between different behaviours and attributes;

  • The creation of profiles that you apply to individuals; or

  • The prediction of individuals’ behaviour based on their assigned profiles.
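The ICO's examples can be sketched in miniature: merge signals from separate datasets about the same person, build a profile, and predict a dominant interest. The code below is an illustrative toy only, not anything the ICO or the GDPR prescribes; all user IDs, datasets and interest categories are hypothetical.

```python
# Toy profiling sketch: linking separate datasets about the same
# (hypothetical) individuals and predicting a dominant interest.

browsing = {  # pages visited, by user id
    "u1": ["running-shoes", "marathon-training", "gps-watches"],
    "u2": ["mortgage-rates", "home-insurance"],
}
purchases = {  # past purchases, by the same user id
    "u1": ["energy-gels"],
    "u2": ["smoke-detector"],
}

# Hypothetical interest categories keyed by indicative terms
CATEGORIES = {
    "fitness": {"running-shoes", "marathon-training",
                "gps-watches", "energy-gels"},
    "home": {"mortgage-rates", "home-insurance", "smoke-detector"},
}

def build_profile(user_id: str) -> dict[str, int]:
    """Merge signals from both datasets and score each category:
    the 'identification of associations' step."""
    signals = browsing.get(user_id, []) + purchases.get(user_id, [])
    return {cat: sum(s in terms for s in signals)
            for cat, terms in CATEGORIES.items()}

def predicted_interest(user_id: str) -> str:
    """Predict the dominant interest from the assigned profile:
    the kind of inference that counts as profiling under the GDPR."""
    profile = build_profile(user_id)
    return max(profile, key=profile.get)

print(predicted_interest("u1"))  # fitness
```

Even this trivially simple version shows why combining datasets matters: neither the browsing data nor the purchase data alone gives as confident a prediction as the merged profile does.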

In our latest YouTube vlog “I use AI: Do I need to worry about GDPR? FAQs Part 2” we delve further into profiling. We present real-life scenarios and offer answers to some of the most frequently asked questions about profiling and the GDPR. Click here to take a look.

Plus, if you believe you have been unfairly profiled, the GDPR might provide a remedy for you. According to Dr Bostjan Makarovic, Aphaia’s Managing Partner, “individuals have the right to object to profiling. If the data controller believes their profiling operation is subject to an exception or ignores their request, the individual may launch a complaint to the supervisory authority, plus may be entitled to compensation.”

Does your company utilize Artificial Intelligence and algorithms to conduct profiling or automated decision making? Aphaia’s AI ethics assessments will assist in ensuring that it falls within the scope of the EU’s and UK’s ethical framework. Aphaia also provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. Contact us today.