Call for a ban on facial recognition: EDPB and EDPS release a joint statement

The EDPB and EDPS have made a joint call for a ban on the use of facial recognition for automated identification of individuals in public spaces.

 

The EDPB and EDPS call for a ban on the use of AI for automated biometric identification in publicly accessible spaces. This covers facial recognition as well as recognition of fingerprints, DNA, voice and other biometric or behavioural signals. The call comes after the European Commission outlined harmonised rules for artificial intelligence earlier this year. While the EDPB and EDPS welcome the introduction of rules addressing the use of AI systems by EU institutions, bodies and agencies, the organisations have expressed concern over the exclusion of international law enforcement cooperation from the scope of the proposal. The EDPB and EDPS also stress the need to clarify that existing EU data protection law applies to any and all personal data processing falling within the scope of the draft AI Regulation.

 

The EDPB and EDPS call for a general ban on the use of AI for automated recognition of individuals in public spaces, particularly where it might lead to discrimination.

 

In a recently released joint statement, the EDPB and EDPS recognise that remote biometric identification of individuals in public spaces poses extremely high risks, particularly where AI systems use biometrics to categorise individuals based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited. According to Article 21 of the Charter of Fundamental Rights, “Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.”

In addition, the organisations are calling for a prohibition on the use of AI to infer the emotional state of natural persons except in specific cases; in the field of health, for example, patient emotion recognition may be relevant and important. However, the EDPB and EDPS maintain that any use of this sort of AI for any type of social classification or scoring should be strictly prohibited. “One should keep in mind that ubiquitous facial recognition in public spaces makes it difficult to inform the data subject about what is happening, which also makes it all but impossible to object to processing, including profiling,” comments Dr Bostjan Makarovic, Aphaia’s Managing Partner.

 

The EDPB and EDPS call for greater clarity on the role of the EDPS as competent and market surveillance authority. 

 

In their joint opinion, the organisations welcome the fact that the European Commission proposal designates the EDPS as the market surveillance authority and competent authority for the supervision of the institutions, agencies and bodies of the European Union. However, they also call for further clarification of the specific tasks of the EDPS within that role. The EDPB and EDPS acknowledge that data protection authorities are already enforcing the GDPR and the LED in the context of AI involving personal data. Nevertheless, the organisations suggest a more harmonised regulatory approach, involving the DPAs as designated national supervisory authorities, as well as a consistent interpretation of data processing provisions across the EU. In addition, the statement calls for greater autonomy to be given to the future European Artificial Intelligence Board, in order to avoid conflicts and create a European AI body free from political influence.

 

Do you want to learn more about facial recognition in public spaces? Check out our vlog.

Do you use AI in your organisation and need help ensuring compliance with AI regulations? We can help you. Aphaia provides EU AI Ethics Assessments, Data Protection Officer outsourcing and ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments. We can help your company get on track towards full compliance.

Fintech and AI Ethics

As the world of Fintech evolves, governance and ethics in that arena become particularly important.

 

Financial Technology, or “Fintech”, refers to new technology that seeks to improve and automate financial services. It helps businesses and individuals run the financial aspects of their affairs smoothly through the integration of AI systems. Broadly speaking, the term covers any innovation through which people can transact business, from keeping track of finances to digital currencies and cryptocurrencies. With crypto-trading and digital platforms for wealth management more popular than ever before, an increasing number of consumers are seeing the practical application and value of Fintech in their lives. As with any application of AI and technology, however, certain measures should be in place for the smooth and, more importantly, safe integration of this technology into our daily lives, so that everyday users can feel secure in using it.

 

Legislation and guidance have already been implemented and communicated to guide Fintech and AI ethics.

 

Some pieces of legislation already target Fintech, such as the Payment Services Directive 2 (PSD2), an EU directive governing electronic payment services. PSD2 harmonises two services which have become increasingly widespread in recent times: Payment Initiation Services (PIS) and Account Information Services (AIS). PIS providers facilitate the use of online banking to make online payments, while AIS providers collect and store information from a customer’s different bank accounts in a single place. With the growing popularity of these and other forms of Fintech, and as experience provides further insight into the true impact of their use, new regulations are expected in the future.

 

Most people consider their financial data to be among their most sensitive and valuable data, and are therefore keen to ensure its safety. Legislation and guidance have been implemented and communicated in order to aid in the pursuit of principles like technical robustness and safety, privacy and data governance, transparency, and diversity, non-discrimination and fairness. These are all imperative to ensuring that the use of Fintech is safe and beneficial for everyone involved.

 

Technical robustness and safety

 

The safety of one’s personal and financial information is, simply put, of the utmost importance when deciding which tools to use to manage one’s finances. A personal data breach involving financial information could be very harmful for the affected data subjects due to its sensitive nature. Financial institutions and Fintech companies therefore put several measures in place to ensure safe and secure money management through tech. Security measures such as, inter alia, data encryption, role-based access control, penetration testing, tokenisation, two-factor authentication (2FA), multi-step approval or verification processes and backup policies can and should all be applied where necessary and feasible. These measures help users feel more secure, but ultimately they protect users from far more than they imagine, including malware attacks, data breaches, digital identity risks and much more.
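As an illustration, here is a minimal sketch in Python of two of the measures mentioned above: role-based access control combined with a multi-step approval based on separation of duties. The roles, permissions and payment workflow are illustrative assumptions, not a reference to any particular product.

```python
# Minimal RBAC sketch for a hypothetical Fintech back end.
from enum import Enum, auto

class Permission(Enum):
    READ_TRANSACTIONS = auto()
    INITIATE_PAYMENT = auto()
    APPROVE_PAYMENT = auto()

# Hypothetical roles; each grants only the permissions it needs.
ROLE_PERMISSIONS = {
    "customer": {Permission.READ_TRANSACTIONS, Permission.INITIATE_PAYMENT},
    "approver": {Permission.READ_TRANSACTIONS, Permission.APPROVE_PAYMENT},
    "auditor": {Permission.READ_TRANSACTIONS},
}

def is_allowed(role: str, permission: Permission) -> bool:
    """Allow an action only if the role explicitly grants it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def approve_payment(initiator_role: str, approver_role: str) -> bool:
    """Multi-step approval: initiator and approver must hold
    different roles (separation of duties)."""
    return (is_allowed(initiator_role, Permission.INITIATE_PAYMENT)
            and is_allowed(approver_role, Permission.APPROVE_PAYMENT)
            and initiator_role != approver_role)

assert approve_payment("customer", "approver")      # allowed
assert not approve_payment("customer", "customer")  # blocked: same role
```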

 

Privacy and data governance

 

Article 22 of the GDPR gives data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them, except where certain circumstances apply. Automated decisions in the Fintech industry may well produce such effects, and any decision with legal or similar consequences needs special consideration in order to comply with the GDPR’s requirements. A data protection impact assessment may be necessary to determine the risks to individuals and how best to deal with them. For special categories of data, such automated processing can only be carried out with the individual’s explicit consent or if necessary for reasons of substantial public interest. Robotic process automation (RPA) can be very useful to businesses, helping to increase revenue and save money; however, it is imperative to ensure compliance with the GDPR and that automated decision making does not result in dangerous profiling practices.
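To make that logic concrete, here is a minimal sketch of an Article 22 guardrail for a hypothetical automated decision pipeline. The boolean flags are deliberate simplifications of the legal tests (not a complete statement of the law), and any decision failing the check would be routed to a human reviewer.

```python
# Sketch of an Article 22 guardrail: may a decision be made
# solely by automated means? The flags simplify the legal tests.
def may_decide_automatically(significant_effect: bool,
                             special_category_data: bool,
                             explicit_consent: bool,
                             substantial_public_interest: bool,
                             contract_consent_or_law: bool) -> bool:
    if special_category_data and not (explicit_consent
                                      or substantial_public_interest):
        # Special categories need explicit consent or a substantial
        # public interest basis.
        return False
    if significant_effect and not contract_consent_or_law:
        # Decisions with legal or similarly significant effects need
        # one of the Article 22(2) exceptions.
        return False
    return True

# A credit decision on special-category data without explicit consent
# falls back to human review:
print(may_decide_automatically(True, True, False, False, True))  # False
```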

 

Diversity, non-discrimination and fairness

 

Several studies have explored the overall fairness of current Fintech, including possible discrimination in consumer lending and other aspects of the industry. Algorithms can either perpetuate widespread human biases or develop their own. Common biases in the financial sector arise around gender, ethnicity and age. AI technology must prevent discrimination and protect diversity, especially in Fintech, where biases can affect an individual’s access to credit and the opportunities it affords. Using quality training data, choosing the right learning model and working with an interdisciplinary team may all help reduce bias and maintain a sense of fairness in the world of Fintech and AI in general.
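As an illustration of what such a bias check might look like in practice, here is a minimal sketch of a demographic parity test for a hypothetical credit-approval model. The sample data and the 0.8 threshold (the widely used “four-fifths rule”) are illustrative assumptions, and demographic parity is only one of several possible fairness metrics.

```python
# Sketch of a demographic parity check on approval decisions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Flag disparate impact: the lowest group approval rate should be
    at least `threshold` times the highest."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values()) >= threshold

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(approval_rates(sample))      # group A ~0.67, group B ~0.33
print(passes_four_fifths(sample))  # False: group B is disadvantaged
```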

 

Transparency

 

While the use of AI has brought much positive transformation to the financial industry, the question of AI ethics in everything we do is unavoidable. Transparency provides an opportunity for introspection on ethical and regulatory issues, allowing them to be addressed. Algorithms used in Fintech should be transparent and explainable. The ICO and The Alan Turing Institute have produced the guidance “Explaining decisions made with AI” to help businesses with this. They suggest developing a ‘transparency matrix’ to map the different categories of information against the relevant stakeholders. Transparency enables and empowers businesses to demonstrate trustworthiness, and trustworthy AI is AI that will be more easily adopted and accepted by individuals. Transparency into the models and processes of Fintech and other AI also allows biases and other concerns to be raised and addressed.
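As a rough illustration, a transparency matrix can start out as simply as a mapping from explanation categories to the stakeholders who need them. The categories and stakeholders below are illustrative assumptions rather than the guidance’s exact taxonomy.

```python
# Sketch of a transparency matrix: explanation categories mapped to
# the stakeholders owed each explanation (illustrative values).
TRANSPARENCY_MATRIX = {
    "rationale explanation": ["customer", "regulator"],
    "data explanation": ["customer", "DPO", "regulator"],
    "fairness explanation": ["customer", "internal audit"],
    "safety and performance explanation": ["internal audit", "regulator"],
}

def explanations_for(stakeholder: str) -> list[str]:
    """List the explanation categories owed to a given stakeholder."""
    return [category for category, audience in TRANSPARENCY_MATRIX.items()
            if stakeholder in audience]

print(explanations_for("regulator"))
# ['rationale explanation', 'data explanation',
#  'safety and performance explanation']
```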

 

Check out our vlog exploring Fintech and AI Ethics:

https://youtu.be/7nj2616bq1s

You can learn more about AI ethics and regulation on our YouTube channel.

 

Do you have questions about how AI works in Fintech and the related guidance and laws? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

The new EU AI Regulation: leaked document reveals intentions to establish rules for high-risk AI

The new EU AI Regulation, revealed in a recently leaked document, includes several intended rules specific to high-risk AI.

 

The European Commission has proposed a Regulation of the European Parliament and of the Council aimed at governing the use and sale of high-risk AI within the European Union. The recently leaked document states that “Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.” With that in mind, the European Commission has set out to further define and clarify what constitutes high-risk AI, and to set out rules aimed at ensuring that AI safely and effectively serves the highest good of natural persons. “It is a step in the right direction, providing further ethical safeguards as our lives are becoming increasingly dominated by machine-made decisions,” comments Dr Bostjan Makarovic, Aphaia’s Managing Partner.

 

The document outlines harmonised rules concerning the placing on the market, putting into service and use of high-risk AI systems in the Union. It also includes harmonised transparency rules for AI systems intended to interact with natural persons and for AI systems used to generate or manipulate image, audio or video content. The regulatory framework laid out in the document is intended to function without prejudice to the provisions of existing Union law applicable to AI systems falling within the scope of the Regulation.

 

The new AI Regulation will apply to providers and users of AI systems in the EU or in third countries to the extent that they affect persons in the EU.

 

The Regulation will apply to providers placing AI systems on the market or putting them into service in the European Union, whether they are established in the EU or in a third country outside the Union. In addition, it will apply to users of AI systems located in the EU, as well as to providers and users of AI systems established in third countries, to the extent that those systems affect persons within the EU.

 

Article 3 of the leaked document defines an AI system as “software that is developed with one or more of the approaches and techniques listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments.” An AI system can be a component of a product or a standalone product, the output of which may serve to partially or fully automate certain activities. Annex I outlines several approaches and techniques which indicate artificial intelligence, including machine learning approaches (supervised, unsupervised and reinforcement learning), logic- and knowledge-based approaches, and statistical approaches.

 

The leaked document outlines several prohibitions intended to be established for the protection of the fundamental rights of natural persons. 

 

In Article 4, the document goes on to outline the prohibited AI practices: a list of artificial intelligence practices which are prohibited as contravening the fundamental rights protected under EU law and Union values. Title II, Article 4 prohibits the use of AI systems that manipulate human behaviour, opinions or decisions through choice architectures or any other element of the user interface, causing persons to make decisions, form opinions or behave in a manner that is to their detriment. In addition, the Regulation prohibits the use of AI in any manner that exploits information or predictions about people in an effort to target their vulnerabilities, leading them to behave, form an opinion or make decisions to their detriment. The Regulation will also prohibit indiscriminate surveillance applied to all natural persons in a generalised manner without differentiation. Article 4(2) does, however, state that these prohibitions do not apply to practices that are authorised by law or carried out by, or on behalf of, public authorities in order to safeguard public security, subject to appropriate safeguards for the rights and freedoms of third parties and compliance with EU law.

 

Cristina Contero Almagro, Partner at Aphaia, points out that “It should be noted that this new Regulation mentions ‘monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources’ as elements that the methods of surveillance could include, which means that these provisions might potentially reach any online platform which relies on automated data aggregation and analysis.

 

Considering that the Regulation takes a risk-based approach and that it interlinks with the GDPR in some areas, this only confirms the importance for businesses to ensure that their systems and processes comply with the data protection framework. In particular, Data Protection Impact Assessments, over which the Conformity Assessment would be built, play a paramount role”.

 

The new AI Regulation specifically defines what constitutes high-risk AI systems in order to outline exactly which systems will be subject to the rules.

 

With regard to high-risk AI systems, the document has a specific section (Annex II) dedicated to defining precisely, and with examples, what constitutes a “high-risk artificial intelligence system”. Anything that falls within that purview is subject to specific rules and regulations intended to ensure the best interest of persons, and compliance with these requirements will be mandatory before such systems are placed on the market or put into service. The Regulation covers the use of data sets for training these systems, documentation and record keeping, transparency, robustness, accuracy, security, and human oversight. The leaked document includes several obligations for the providers and users of these systems, as well as for authorised representatives, importers and distributors.

 

The Regulation sets forth intended measures to support innovation in AI and aid SMEs in ensuring compliance. 

 

The document also sets forth intended measures in support of innovation in AI. These include AI regulatory sandboxing schemes, which national competent authorities in any of the Member States may establish. In order to reduce the regulatory burden for small and medium-sized enterprises and startups, additional measures will be implemented, including priority access to these sandboxes, as well as digital hubs and testing experimentation facilities. These hubs are intended to provide AI providers with relevant training on the regulatory requirements, as well as technical and scientific support and testing facilities.

 

The new AI Regulation indicates the intention for the establishment of a European Artificial Intelligence Board. 

 

The document indicates the intention to establish a European Artificial Intelligence Board, tasked with ensuring the consistent application of the Regulation by the Member States. This body will be expected to issue opinions or interpretive guidance documents clarifying the application of the Regulation, collect and share best practices among Member States, aid in the development of standards regarding AI, and continuously monitor developments in the market and their impact on fundamental rights. The European Artificial Intelligence Board will also be expected to ensure consistency and coordination in the functioning of the AI regulatory sandboxes previously mentioned. The Board will issue opinions before the Commission adopts a delegated act and will coordinate, in carrying out its tasks, with the relevant bodies and structures established at EU level, including the EDPB.

 

Do you use AI in your organisation and need help ensuring compliance with AI regulations? We can help you. Aphaia provides EU AI Ethics Assessments, Data Protection Officer outsourcing and ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments. We can help your company get on track towards full compliance.

Spanish DPA AEPD publishes Guidelines on AI audits

AEPD, the Spanish data protection authority, has published Guidelines on the requirements that should be considered when conducting audits of data processing activities that embed AI.

Early this month, the Spanish DPA, AEPD, published Guidelines on the requirements that should be considered when undertaking audits of personal data processing activities which involve AI elements. The document addresses the special controls to which the audits of personal data processing activities comprising AI components should be subject.

Audits are part of the technical and security measures regulated in the GDPR and are deemed essential for the proper protection of personal data. The AEPD Guidelines contain a list of audit controls from which the auditor can select the most suitable ones on a case-by-case basis, depending on several factors, such as the way the processing may affect GDPR compliance, the type of AI component used, the type of data processing, and the risks that the processing activities pose to the rights and freedoms of the data subjects.

Special features of AI audits methodology

The AEPD remarks that the audit process should be governed by the principles laid down in the GDPR, namely: lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability.

The AEPD also points out that not all the controls listed in the Guidelines are meant to be applied together. The auditor should select those that are relevant based on the scope of the audit and the goals it pursues.

What type of data processing do these requirements apply to and who should comply with them?

The Guidelines will be applicable where:

  • There are personal data processing activities at any stage of the AI component lifecycle; or
  • The data processing activities aim to profile individuals or make automated decisions which produce legal effects concerning the data subjects or similarly significantly affect them.

The AEPD states that in some cases it might be useful to carry out some preliminary assessments before moving forward with the audit, such as, inter alia, an assessment of the level of anonymisation of personal data, an assessment of the risk of re-identification and an assessment of the risk of losing data stored in the cloud.
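As an illustration of the re-identification assessment, here is a minimal sketch based on k-anonymity; the quasi-identifier columns and the records are illustrative assumptions.

```python
# Sketch of a k-anonymity check as a rough re-identification measure.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifiers.
    A record in a class of size k can be singled out with probability
    up to 1/k by an attacker who knows those attributes."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers)
                      for r in records)
    return min(classes.values())

dataset = [
    {"age_band": "30-39", "postcode": "280XX", "diagnosis": "A"},
    {"age_band": "30-39", "postcode": "280XX", "diagnosis": "B"},
    {"age_band": "40-49", "postcode": "410XX", "diagnosis": "A"},
]
print(k_anonymity(dataset, ["age_band", "postcode"]))  # 1 -> high risk
```

A result of k = 1 means at least one individual is unique on those attributes, which signals a high re-identification risk.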

The document is especially addressed to data controllers who audit personal data processing activities that include AI-based components; to data processors and developers who wish to offer additional guarantees around their products and services; to DPOs responsible for monitoring the data processing and advising the data controllers; and to auditors who work with this type of processing.

Control goals and actual controls

The main body of the Guidelines consists of five audit areas, broken down into several objectives containing the actual controls, from which the auditors, or the person in charge of the process as relevant, can select those suited to the specific audit they are undertaking.

The AEPD provides an exhaustive list comprising more than a hundred controls, which are summed up in the following paragraphs.

  • AI component identification and transparency

This area includes the following objectives: inventory of the AI components, definition of responsibilities, and transparency.

The AEPD stresses the importance of keeping full records both of the components (including, inter alia, ID, version, date of creation and previous versions) and of the persons in charge of the process (such as their contact details, roles and responsibilities). There are also some provisions regarding the information that should be available to the stakeholders, especially when it comes to the data sources, the data categories involved, the model and the logic behind the AI component, and the accountability mechanisms.

  • AI component purpose

There are several objectives within this area: identification of the AI component purposes, uses and context, proportionality and necessity assessment, data recipients, data storage limitation and analysis of the data subject categories.

The controls linked to these objectives are based on the standards and requirements needed to achieve the desired outcomes, and on the elements that may affect the result, such as conditioning factors, socioeconomic conditions and the allocation of tasks, among others, for which a risk assessment and a DPIA are recommended.

  • AI component basis

This area is built around the following objectives: identification of the AI component development process and basic architecture, DPO involvement, and adequacy of the theoretical models and methodological framework.

The controls defined in this section are mainly related to the formal elements of the process and the methodology followed. They aim to ensure the interoperability between the AI component development process and the privacy policy, to define the requirements that the DPO should meet and guarantee their proper involvement in a timely manner and to set out the relevant revision procedures.

  • Data management

The AEPD details four objectives in this area: data quality, identification of the origin of the data sources, personal data preparation and bias control. 

While data protection is the leitmotiv throughout the Guidelines, it is especially present in this chapter, which covers, inter alia, data governance, variables and proportionality distribution, the lawful basis for processing, the reasoning behind the selection of data sources, and the categorisation of data and variables.

  • Verification and validation

Seven objectives are pursued in this area: verification and validation of the AI component, adequacy of the verification and validation process, performance, coherence, robustness, traceability and security. 

The controls set out in this area focus on ensuring data protection compliance during the ongoing implementation and use of the AI component. They look for guarantees around the existence of a standard which allows for verification and validation procedures once the AI component has been integrated, a schedule for internal inspections, an analysis of false positives and false negatives, a procedure for finding anomalies, and mechanisms for identifying unexpected behaviour, among others.
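As an illustration of one of those controls, here is a minimal sketch of a false positive and false negative analysis for a binary-output AI component; the labels and predictions are illustrative assumptions.

```python
# Sketch of a false positive / false negative analysis.
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary outcomes."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return tp, tn, fp, fn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth outcomes
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # AI component's decisions
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
print(f"false positive rate = {fp / (fp + tn):.2f}")  # 0.25
print(f"false negative rate = {fn / (fn + tp):.2f}")  # 0.25
```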

Final remarks

The AEPD concludes by reminding readers that the Guidelines take a data protection approach to the audit of AI components. This means, on the one hand, that they may need to be combined with additional controls derived from other perspectives and, on the other hand, that not all controls will be relevant in every case: they should be selected according to the specific needs, considering the type of processing, the client’s requirements and the specific features of the audit and its scope, together with the results of the risk assessment.

Does your company use AI? You may be affected by the EU’s future regulatory framework. We can help you. Aphaia provides both GDPR and DPA 2018 adaptation consultancy services, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.