Public consultation on the ethical principles of Artificial Intelligence

The European Commission has published the results of the public consultation on the ethical principles of Artificial Intelligence.

Can you imagine being able to question the ethical choices of the people who serve us in shops and other establishments? Imagine, for example, going to the bank to apply for a credit card and being able to discuss with the person in charge the ethical reasons for granting or denying your request. Or imagine parents asking a headteacher which human rights he or she took into account when deciding whether or not their child should be enrolled. It would be crazy to think of a society where every single action is judged against imposed ethical values used as a benchmark to determine what type of house one should have or which countries one should travel to, much like the famous episode of the Black Mirror series.

Well, it may not be as crazy as we imagine: something similar is being elaborated by the European Commission, although it applies not to people but to artificial intelligence. This is less striking because the ultimate goal of artificial intelligence is to resemble human behaviour as closely as possible, but with the advantages that automation implies. In this sense, it is necessary to endow artificial intelligence with certain ethical values that frame its actions and decisions within a minimum set of moral norms, allowing its insertion into society.

For this purpose, a group of experts on Artificial Intelligence published a report on 18 December on the ethical basis that must be present in systems that incorporate artificial intelligence (you can read a summary of the document here). The key initiatives include the establishment of framework ethical principles and the practical implementation of solutions, in both cases from a “human-centric approach”, which prioritises the civil, political, economic and social status of human beings.

The draft was submitted to public consultation, and the Commission has now published the results, which you can access here. The final document is expected to be published in March, in order to create an ethical commitment to which companies and institutions can freely adhere.

If you need advice on your AI product, Aphaia offers both AI ethics assessments and Data Protection Impact Assessments.

Spanish National Cyber-security Incident Notification and Management Guide Overview

Spain has become the first country in the European Union to have a single framework for the notification and management of cyber-security incidents.

The Spanish National Cyber-security Incident Notification and Management Guide, approved by the National Cyber-security Council, is a technical document that creates a benchmark for notifying and managing cyber-security incidents within Spanish territory. It is addressed to both the public and private sectors and standardises the criteria in this field.

The Guide establishes a “one-stop” notification mechanism, which means that incidents need only be reported to the relevant institution (CSIRT): the National Cryptologic Centre of the National Intelligence Centre (CCN-CERT) for the public sector and the National Cybersecurity Institute (INCIBE-CERT) for the private sector.
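As a minimal illustration of this one-stop mechanism, the following Python sketch routes a notification to the single reference CSIRT according to the reporting organisation's sector; the Sector enum and reference_csirt function are hypothetical names rather than identifiers defined by the Guide.

```python
from enum import Enum

class Sector(Enum):
    PUBLIC = "public"
    PRIVATE = "private"

def reference_csirt(sector: Sector) -> str:
    """Return the single CSIRT an incident must be notified to
    under the Guide's one-stop mechanism."""
    return "CCN-CERT" if sector is Sector.PUBLIC else "INCIBE-CERT"

print(reference_csirt(Sector.PUBLIC))   # CCN-CERT
print(reference_csirt(Sector.PRIVATE))  # INCIBE-CERT
```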

The Guide comprises a classification system for incidents, which are sorted into ten categories: abusive content (e.g. spam), harmful content (e.g. malware), information gathering (e.g. network traffic monitoring), intrusion attempt (e.g. access to credentials), intrusion (e.g. compromised applications), availability (e.g. DDoS), compromised information (e.g. lost data), fraud (e.g. phishing), vulnerable (e.g. weak cryptography) and other.

Each incident will be associated with a particular level of danger, defined according to the risk the incident would pose to the affected organisation's systems if it materialised. There are five levels of danger: critical, very high, high, average and low. Additionally, the Guide sets up an impact indicator to assess the post-incident consequences for the organisation's or company's activities and systems. Depending on this indicator, the impact will be classified as critical, very high, high, average or low. There is an extra category called “no impact” for incidents that cause no damage at all.
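Taken together, the classification could be modelled roughly as in the sketch below. This is illustrative only: the Category enum, the level tuples and the Incident dataclass are hypothetical names, not structures defined in the Guide.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """The Guide's ten incident categories."""
    ABUSIVE_CONTENT = "abusive content"                  # e.g. spam
    HARMFUL_CONTENT = "harmful content"                  # e.g. malware
    INFORMATION_GATHERING = "information gathering"      # e.g. network traffic monitoring
    INTRUSION_ATTEMPT = "intrusion attempt"              # e.g. access to credentials
    INTRUSION = "intrusion"                              # e.g. compromised applications
    AVAILABILITY = "availability"                        # e.g. DDoS
    COMPROMISED_INFORMATION = "compromised information"  # e.g. lost data
    FRAUD = "fraud"                                      # e.g. phishing
    VULNERABLE = "vulnerable"                            # e.g. weak cryptography
    OTHER = "other"

# Danger is assessed ex ante (the risk if the incident materialised);
# impact is assessed ex post and adds an extra "no impact" category.
DANGER_LEVELS = ("critical", "very high", "high", "average", "low")
IMPACT_LEVELS = DANGER_LEVELS + ("no impact",)

@dataclass
class Incident:
    category: Category
    danger: str  # one of DANGER_LEVELS
    impact: str  # one of IMPACT_LEVELS

incident = Incident(Category.FRAUD, danger="high", impact="average")
```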

As for cyber-security incident management, the Guide establishes a six-step process to prevent incidents and to tackle them properly when they do occur. The phases are: preparation (e.g. updated policies), identification (e.g. network monitoring), containment (e.g. information assessment and classification), mitigation (e.g. recovery of the latest backup copy), recovery (e.g. restoring activities) and post-incident actions (identification and analysis of the origin of the incident and its costs).
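The sequential nature of that process can be sketched as follows; the PHASES list pairs each step with the example measure given in the text, while the handle_incident helper is purely hypothetical.

```python
# Each of the Guide's six phases, paired with an example measure from the text.
PHASES = [
    ("preparation", "keep policies up to date"),
    ("identification", "monitor the network"),
    ("containment", "assess and classify the information"),
    ("mitigation", "recover the latest backup copy"),
    ("recovery", "restore normal activities"),
    ("post-incident actions", "analyse the origin of the incident and its costs"),
]

def handle_incident() -> None:
    """Walk through the six phases in order; a real response plan would
    attach organisation-specific procedures to each step."""
    for phase, example in PHASES:
        print(f"{phase}: {example}")

handle_incident()
```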

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessment, and Data Protection Officer outsourcing.

Smart glasses and data protection

Overview of the main implications of smart glasses for data protection, on the occasion of the publication of the first Technology Report (“Smart glasses and data protection”) by the European Data Protection Supervisor.

While smart glasses may be deemed the next step in disruptive devices, with high potential to make people's lives easier, their use also involves high risks for individuals' data protection rights where the privacy by design principle is not properly implemented.

When it comes to smart glasses and data protection, features like image and video recording, collection and storage of metadata, sensors, WiFi, connection to the internet and other IoT devices, and facial recognition can undermine the privacy of individuals, both users and non-users. A lack of security in smart glasses would not only affect users, whose personal information might be spied on or stolen, but also individuals in their proximity, whose data might be collected without their consent.

The Article 29 Data Protection Working Party analysed the security aspects of IoT/wearable devices in its Opinion on the Internet of Things and identified some of the main threats:

-Lack of data control by users and especially non-users.

-Intrusive analysis of behaviour and profiling.

-Lack of anonymity due to the high identifiability of the information being processed.

-The processing of special categories of data, which requires special safeguards.

-The security risks inherent to mass market products.

Not only does the regular use of smart glasses pose a privacy threat; they are also vulnerable to hacking attacks. In 2013 and 2014, researchers demonstrated that it is possible to replace the operating system on Google Glass, and they found that they could craft malformed QR codes that, when photographed, crashed Glass, encrypted the device's communications or directed it to a malicious website designed to take full control of the device. Physical security might be a target for Google Glass hackers too, as they could access data recorded at users' homes.

Google Glass has effectively disappeared from the consumer market, but other companies have launched their own versions of the product, like Snap, Inc. with its “Spectacles” glasses, targeted at young audiences and priced below 200 €. Smart glasses are expected to be widely available within a decade, which brings to light the need for regulation beyond the GDPR and ePrivacy, or at least for the completion of the revised data protection framework, as smart glasses involve specific privacy risks that require particular solutions, for example:

-Banning camera features except for services in charge of security and safety.

-Designing them to be easily distinguishable from non-smart glasses.

-Local storage implications (as opposed to, say, cloud storage).

-Data breach reports.

-Training for users. A survey presented by Snap, Inc. showed that most consumers do not perceive that smart glasses can threaten their privacy and data protection, so privacy concerns do not significantly affect their intention to adopt them.

Data protection legislation (among others) is fully applicable to smart glasses, but several privacy concerns have to be evaluated in its application, and appropriate measures have to be adopted in each context, owing to the particular nature and features of smart glasses, which distinguish them from other IoT devices.

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessment, and Data Protection Officer outsourcing.

EU AI Ethics guidelines overview

EU Commission AI Ethics Guidelines are the Talk of the Town.

The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) produced a draft set of AI Ethics Guidelines in December 2018. The final version is due in March this year, and although it won’t be legally binding, all stakeholders will be able to formally endorse and sign up to the Guidelines on a voluntary basis.

AI is the future. But regulation is crucial in order to minimise risks associated with its use. Areas of concern include the adoption of identification processes without the data subject’s consent, covert AI systems and Lethal Autonomous Weapon Systems.

Trust is deemed the prerequisite for people and societies to develop, deploy and use AI: without AI being demonstrably worthy of trust, subversive consequences may ensue and its uptake by citizens and consumers might be hindered.

Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable.

(1) Fundamental rights and core principles

AI HLEG believes in an approach to AI ethics based on the fundamental rights commitment of the EU Treaties and Charter of Fundamental Rights, and highlights the main ones that should apply in an AI context:

-Respect for human dignity.
-Freedom of the individual.
-Respect for democracy, justice and the rule of law.
-Equality, non-discrimination and solidarity.
-Citizens’ rights.

In order to ensure that AI is developed in a human-centric manner and that the above rights are taken into account when designing and implementing AI systems, some principles and values should be observed:

-The Principle of Beneficence (“Do Good”): AI systems should be designed for the purpose of generating prosperity, value creation, wealth maximisation and sustainability.

-The Principle of non-maleficence (“Do no Harm”): by design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work.

-The Principle of Autonomy (“Preserve Human Agency”): human beings using AI should keep full self-determination, and not be subordinated to the AI system under any criterion.

-The Principle of Justice (“Be Fair”): developers and implementers need to ensure that, by using AI systems, individuals and minority groups maintain freedom from bias, stigmatisation and discrimination.

-The Principle of Explicability (“Operate transparently”), which comprises transparency from both the technical and business sides and implies that AI systems should be auditable, comprehensible and intelligible, and that users are knowingly informed of the intentions of the AI’s developers and implementers.

(2) Trustworthy AI in practice

The values above must actually be implemented, which requires the development of specific requirements for AI systems. AI HLEG points out the following as the most important:

-Accountability: special compensation mechanisms should be put in place, both monetary and non-monetary.
-Data Governance: what’s particularly relevant is the quality and integrity of the datasets used for training, plus the weights of the different categories of data.
-Design for all: systems should be designed in a way that allows all citizens to use the products or services, regardless of their age, disability status or social status.
-Governance of AI Autonomy (Human oversight): one must ensure that AI systems continue to behave as originally programmed; the levels or instances of governance will depend on the type of system and its impact on individuals.
-Non-Discrimination: bias, incomplete datasets and bad governance models should be avoided.
-Respect for (& Enhancement of) Human Autonomy: AI systems should serve only the purposes and tasks for which they were programmed.
-Respect for Privacy: privacy and data protection must be guaranteed at all stages of the life cycle of the AI system. This includes all data provided by the user, but also all information generated about the user over the course of his or her interactions with the AI system.
-Robustness: reliability, reproducibility, accuracy, resilience to attack and a fall-back plan are required.
-Safety: operational errors and unintended consequences should be minimised.
-Transparency: Information asymmetry should be reduced.

AI HLEG notes that a detailed analysis of any AI system is required in order to detect the main points to be addressed and to provide both technical solutions (ethics and rule of law by design, architectures for trustworthy AI, testing and validation, traceability and auditability, etc.) and non-technical solutions (regulation, codes of conduct, training, stakeholder dialogue, etc.), according to the particular context and needs on a case-by-case basis.

Some of these measures are already included in the GDPR, in Recital 71 and Article 22, which set out the requirements that data controllers must meet when implementing automated decision-making algorithms.
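By way of illustration only, since the GDPR prescribes principles rather than code, a record of an automated decision might capture the safeguards those provisions point to: meaningful information about the logic involved and a route to human intervention. The AutomatedDecision class and its fields below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    outcome: str
    explanation: str  # meaningful information about the logic involved (Recital 71)
    human_review_requested: bool = False

    def request_human_review(self) -> None:
        # Article 22(3) GDPR: the data subject may obtain human intervention,
        # express his or her point of view and contest the decision.
        self.human_review_requested = True

decision = AutomatedDecision(
    outcome="credit card application denied",
    explanation="income below model threshold; two missed payments in twelve months",
)
decision.request_human_review()
print(decision.human_review_requested)  # True
```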

If you need advice on your AI product, Aphaia offers both AI ethics assessments and Data Protection Impact Assessments.