Guidelines for Trustworthy AI

Updates on the guidelines for Trustworthy AI (Part II)

Recently, in the first part of this series, AI Ethics Regulation Updates, we covered some of the requirements of trustworthy AI. These requirements are to be continuously monitored and evaluated throughout an AI system’s life cycle.

Today we’ll be discussing the second set of requirements the AI HLEG deems relevant for a self-assessment for trustworthy AI. The principles of transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability are all considered very important by the AI HLEG and will be covered in this publication.

Transparency.

Trustworthy AI should encompass three elements: traceability, explainability and open communication about the data, system and business models surrounding the AI system. All data sets and processes of the AI system, including the data gathering, data labelling and algorithms used, should be documented as thoroughly as possible to allow for traceability and transparency. This documentation is extremely helpful in the event that an AI decision takes an erroneous turn, and can help prevent future mistakes.
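As a concrete illustration of what such documentation might look like in practice, the minimal sketch below records the provenance of a data set and the algorithm that consumed it. All field names and values are hypothetical, offered only as an example of the kind of record that supports traceability; the guidelines do not prescribe any particular schema.

```python
# A minimal, hypothetical provenance record supporting traceability.
# The schema is illustrative only; the AI HLEG guidelines do not
# prescribe any particular format.
import json
from datetime import datetime, timezone

provenance_record = {
    "dataset": "customer-support-tickets-v3",        # hypothetical name
    "gathered_by": "data-engineering-team",
    "gathered_on": datetime(2020, 6, 1, tzinfo=timezone.utc).isoformat(),
    "labelling_method": "two independent annotators plus adjudication",
    "known_limitations": ["English-language tickets only"],
    "algorithm": {"name": "gradient-boosted-trees", "version": "2.4.1"},
    "training_run_id": "run-2020-06-15-001",
}

# Storing one record per data set and training run lets auditors trace a
# questionable decision back to the exact data and code that produced it.
with open("provenance.json", "w") as f:
    json.dump(provenance_record, f, indent=2)
```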

Both the technical processes and the related human decisions of an AI system must be explainable: the decisions made by an AI system must be capable of being understood and traced by human beings. When an AI system has a significant impact on people’s lives, a timely and suitable explanation of the AI system’s decision-making process should be made available at the level of expertise of the stakeholder concerned, whether a regulator, a researcher or a lay person.

Humans have the right to be informed that they are interacting with an AI system, and AI systems should never present themselves as human. To ensure compliance with fundamental rights, users should also have the option to decide against interacting with an AI system in favour of human interaction. Users and AI practitioners should also be made aware of the AI system’s capabilities and limitations, and this should encompass detailed communication of the AI system’s level of accuracy.

Diversity, non-discrimination and fairness.

Businesses should focus on strategies and procedures for avoiding bias, on education and awareness initiatives, and on accessibility through user interfaces built on Universal Design principles, as well as on stakeholder participation. Historic bias, incompleteness and bad governance models may be inadvertently included in the data sets used by AI systems, both for training and for operation. This can result in further harm through prejudice and marginalisation. All identifiable and discriminatory bias should be removed during the data collection phase wherever possible. Systems should be user-centric in a manner which allows all people, regardless of age, gender, abilities or characteristics, to use AI products and services. Accessibility for people with disabilities is particularly important. It is best to consult with stakeholders who may be affected by the system, directly or indirectly, in order to develop trustworthy AI systems.

Societal and environmental well-being.

AI systems should be sustainable and environmentally friendly, and should consider the well-being of stakeholders such as the broader society, other sentient beings and the environment throughout the AI system’s life cycle. The system’s development, deployment and use should all be assessed against these criteria.

Pervasive use of and exposure to AI systems in all areas of our lives could alter our conception of social agency, or impact our social relationships and attachments. The effects of the use of these systems need to be carefully monitored to curtail harm to people’s physical and mental well-being. This impact should also be assessed from a societal perspective, considering the AI system’s effect on institutions, democracy and society at large.

Accountability.

AI systems need to be auditable: their data, algorithms and design processes must lend themselves to assessment. Information about business models and intellectual property related to the AI system need not be made openly available, but evaluation by internal and external auditors is invaluable, and the availability of the resulting reports can determine the trustworthiness of the AI system. These systems should be independently auditable in cases where fundamental rights are affected. In the event that a technology has an unjust adverse impact, there should be accessible mechanisms making redress possible, with particular attention paid to vulnerable persons or groups.

We have explored these topics on our YouTube channel. To learn more about AI ethics, subscribe for more content.

Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.

Lincolnshire Police Trial CCTV

Lincolnshire Police Trial CCTV: this technology can even detect moods!

Lincolnshire police trial CCTV technology which can detect moods, eyewear and headgear, but not before a human rights and privacy assessment is carried out.


Lincolnshire police will soon debut their trial of CCTV cameras in Gainsborough. This is a new, more complicated and potentially controversial type of surveillance technology. Although the funding for this project has been approved and received, the implementation of the new equipment is at a standstill due to privacy concerns surrounding the use of this technology. Key legal considerations need to be addressed before it can be released and used on the general public, as the technology is able to search for persons using parameters based on their mood, or on their apparel such as hats or sunglasses. Because the police have full control of the search parameters, the technology is inherently problematic, as court rulings as recent as 2018 have shown.


A Welsh national, Ed Bridges, brought a legal case against the authorities for their use of a very similar facial recognition technology, raising the spectre of many ideological and privacy concerns about the police having unquestioned access to intrusive means of surveillance and monitoring persons who may not be suspected of, or involved in, any crime. Although Mr. Bridges did not have instant success with his claim, as his first petition to the High Court was denied, the Court of Appeal subsequently upheld three of the five grounds of his claim.


The police have acknowledged, and made attempts to address, the public’s privacy concerns regarding the use of this technology.


Privacy concerns are a very important consideration prior to the adoption of this new technology for everyday use. The police have tried to assure the public that their rights are of paramount importance in the protocols surrounding this technology and how it is used. The local police have also released some preliminary information which may ease public anxiety around the implementation of this technology: the scans are not being done in real time, and all footage is deleted after 31 days.
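Purely as an illustration of how a retention promise like this could be enforced in software (the archive path and file format below are assumptions, not details released by the police), a scheduled job might simply delete any recording older than 31 days:

```python
# Hypothetical sketch of an automated 31-day retention rule; the archive
# path and file extension are illustrative assumptions.
import time
from pathlib import Path

RETENTION_SECONDS = 31 * 24 * 60 * 60  # 31 days

def purge_expired_footage(archive_dir: str) -> None:
    """Delete recordings whose last-modified time is past the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    for recording in Path(archive_dir).glob("*.mp4"):
        if recording.stat().st_mtime < cutoff:
            recording.unlink()

purge_expired_footage("/var/cctv/archive")  # run daily, e.g. from cron
```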


Legislation continues to be introduced regarding privacy and surveillance.


There are also larger debates about which search terms should be allowed, and under what circumstances, wherever this new surveillance technology is in use. Legislation around government surveillance has also seen changes in recent years since the Ed Bridges case, and it continues to be reformed in an attempt to safeguard everyone’s well-being without stripping them of their fundamental rights and privacy.


According to Cristina Contero Almagro, partner at Aphaia, ‘The risk is twofold: first, the police using the technology without the appropriate safeguards and second, the information being compromised and used maliciously by third-parties which may access it unlawfully. Considering the nature of the data involved, it is essential to put in place strong security measures which ensure the data will be adequately protected. It is important to note that once that biometric information has been exposed, the damage to the rights and freedoms of the affected data subjects is incalculable, as it is not something that can be changed like a password’.


‘Any facial recognition that includes profiling should be viewed with suspicion,’ comments Dr Bostjan Makarovic, Aphaia’s Managing Partner. ‘The challenge is that there is no way to object to such profiling because it takes place in real time as one enters a certain area. Law enforcement and public safety are important reasons but should not be used as a blanket justification without further impact assessment.’  

Does your company utilize facial recognition or process other biometric data? If yes, failure to adhere fully to the guidelines and rules of the GDPR and Data Protection Act 2018 could result in a hefty financial penalty. Aphaia provides both GDPR adaptation consultancy services and CCPA compliance, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.

Guidelines for Trustworthy AI

Updates on the guidelines for Trustworthy AI

Recently, the High-Level Expert Group on Artificial Intelligence released a document of guidelines on implementing trustworthy AI in practice. In this blog, we aim to walk you through the guidelines for trustworthy AI.


Last month on our blog, we reported on the final assessment list for trustworthy artificial intelligence, released by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This list was the result of a three-part piloting process in which Aphaia participated, involving over 350 stakeholders. Data collection during this process involved an online survey, a series of in-depth interviews, and sharing of best practices in achieving trustworthy AI. Before implementing any AI systems, it is necessary to make sure that they comply with the seven requirements which were the result of this effort.


While AI is transformative, it can also be very disruptive. The goal of producing these guidelines is to promote trustworthy AI based on three components: trustworthy AI should be lawful, ethical and robust, both from a technical and a social perspective. While the framework does not deal much with the legality of trustworthy AI, it provides guidance on the second and third components, making sure that AI is ethical and robust.


In our latest vlog, the first of a two-part series, we explored three of those seven requirements: human agency and oversight, technical robustness and safety, and privacy and data governance.


Human agency and oversight

“AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy”. Businesses should be mindful of the effects AI systems can have on human behaviour in a broad sense: on human perception and expectation when people are confronted with AI systems that ‘act’ like humans, and on human affection, trust and dependence.

According to the guidelines, “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.” AI systems should be human-centric in design and allow meaningful opportunity for human choice, as the sketch below illustrates.
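To make the idea of meaningful human choice concrete, here is a minimal human-in-the-loop sketch. The threshold, labels and routing rule are assumptions for illustration, not part of the guidelines: the model acts alone only on high-confidence cases and defers everything borderline to a person.

```python
# Minimal human-in-the-loop sketch: automate only confident decisions and
# escalate uncertain ones to a human reviewer. Threshold and labels are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    decided_by: str

def decide(confidence_of_approval: float, threshold: float = 0.9) -> Decision:
    """Automate clear-cut cases; route everything else to a human."""
    if confidence_of_approval >= threshold:
        return Decision("approve", "model")
    if confidence_of_approval <= 1 - threshold:
        return Decision("reject", "model")
    return Decision("escalate", "human-reviewer")

print(decide(0.95))  # confident: decided automatically
print(decide(0.60))  # borderline: a person makes the final call
```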

Technical robustness and safety

Based on the principle of prevention of harm outlined in the document of guidelines, “AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity.”

Organisations should also reflect on resilience to attack and security, accuracy, reliability, fall-back plans and reproducibility.

Privacy and data governance

Privacy is a fundamental right affected by AI systems. AI systems must guarantee privacy and data protection throughout the system’s entire life cycle. It is recommended to implement a process that embraces both top-level management and the operational level within the organisation.

Some key factors to note with regard to the principle of prevention of harm are adequate data governance, the relevance of the data used, data access protocols, and the capability of the AI system to process data in a manner that protects privacy. A simple illustration of a data access protocol follows below.
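As one hedged example of what a data access protocol can mean in code (the roles, data set names and policy table are invented for illustration), every request to read personal data is checked against an allow-list and logged for later audit:

```python
# Illustrative data access protocol: an allow-list per data set plus an
# audit log of every request. Roles and data set names are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data-access")

ACCESS_POLICY = {
    "anonymised-training-data": {"ml-engineer", "dpo"},
    "raw-personal-data": {"dpo"},  # tightly restricted
}

def request_access(user_role: str, dataset: str) -> bool:
    """Grant access only if the role is allow-listed; log every attempt."""
    granted = user_role in ACCESS_POLICY.get(dataset, set())
    log.info("role=%s dataset=%s granted=%s", user_role, dataset, granted)
    return granted

request_access("ml-engineer", "raw-personal-data")  # denied, and auditable
```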


When putting the assessment list into practice, it is recommended that you not only pay attention to the areas of concern but also to the questions that cannot easily be answered. The list is meant to guide AI practitioners to develop trustworthy AI, and should be tailored to each specific case in a proportionate way.

To learn more about the principles the AI HLEG has outlined in order to achieve trustworthy AI, look out for part two of this series by subscribing to our channel.

Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.


New AI privacy tool

New AI privacy tool prevents facial recognition in photos.

New AI privacy tool uses cloaking to prevent facial recognition in photos.


A new AI privacy tool has been introduced in an effort to combat the ill effects of rapidly developing facial recognition software. Imagine knowing that a stranger can snap a photo of you and identify you within seconds. Even worse, imagine being misidentified by facial recognition software and having to deal with the consequences of a crime you never committed. It has long been recognised that the use of facial recognition software could pose a serious threat to the privacy and freedom of individuals. The fact that the photos we share are being collected by companies in order to train algorithms, which are even sold commercially, is quite troubling. Training these algorithms speeds the development of facial recognition software, but it also speeds up the rate at which that very software puts us at risk.


The dangers of facial recognition can be far-reaching, and solutions are becoming more and more necessary.


According to this Forbes article published last year, protesters in Hong Kong were being identified and targeted using facial recognition towers and cameras. The protesters would use various techniques, including lasers, gas masks and eyewear, to throw off facial recognition cameras and avoid possible detainment for protesting. Another major issue many have raised is that facial recognition is mainly tailored to white men, which means that the rate at which people of colour, and particularly women of colour, are misidentified is extremely alarming. Even sharing photos in this day and age comes with a certain level of risk. For example, Rekognition, Amazon’s facial recognition technology, creates profiles of us based on photos from our online profiles, our shopping experiences and information from Amazon applications. The legislation introduced thus far, and all the advice floating around on how to protect our privacy and identities, can only do so much. At some point it became apparent that more could be done to protect individuals’ identities using creative tools.


This new AI privacy tool could help individuals avoid the dangers of facial recognition.


While legislation is being introduced to combat the detrimental effects of the use of facial recognition software, the question remains as to whether the law can keep up with the development and use of this technology. Seemingly it cannot, but new technology is being created which could very well help protect the privacy and identity of individuals. One such solution is a tool called Fawkes, named after the Guy Fawkes masks donned by revolutionaries in the V for Vendetta comic book and film. It was created by scientists at the University of Chicago’s SAND Lab, and uses artificial intelligence to subtly, almost imperceptibly, alter your photos to trick facial recognition systems. The tool is said to be 100 percent successful against state-of-the-art facial recognition systems such as Microsoft’s Azure Face, Amazon’s Rekognition and Megvii’s Face++. While it is very helpful in protecting the identity of individuals, it can only protect those who choose to use it, and only from the point at which they decide to use it: images, and the corresponding information collected by facial recognition companies in the past, cannot be retroactively altered to protect one’s identity. There is also the view that without widespread use, a technology like this would have little to no impact.


How does this new AI privacy tool work?


The cloaking technology will allow you to post selfies without the fear of having companies use them to identify you, or to train their algorithms to do so. The Fawkes tool takes a couple of minutes to process a photo, making changes that are imperceptible to the eye but which cause facial recognition software to mistake you for someone else. Ben Zhao, a professor of computer science at the University of Chicago who helped create the Fawkes tool, said: “What we are doing is using the cloaked photo in essence like a Trojan Horse, to corrupt unauthorized models to learn the wrong thing about what makes you look like you and not someone else.” This cloaking is intended to corrupt the database that facial recognition systems need to function, which includes hordes of photos scraped from social media platforms.
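The sketch below illustrates the general idea behind this kind of cloaking, not the actual Fawkes implementation: pixels are nudged, within a tight budget, so that a feature extractor maps the photo close to a different identity’s features. The toy “feature extractor” here is a random linear map standing in for a real face recognition model.

```python
# Conceptual cloaking sketch (NOT the Fawkes code): perturb an image so a
# feature extractor embeds it near a decoy identity, keeping pixel changes
# within a small budget. The linear "extractor" is a stand-in for a real
# face recognition network.
import numpy as np

rng = np.random.default_rng(0)

# Toy feature extractor: fixed random projection from pixels to features.
W = rng.standard_normal((128, 32 * 32 * 3)) / np.sqrt(32 * 32 * 3)

def embed(image: np.ndarray) -> np.ndarray:
    return W @ image.ravel()

def cloak(image, decoy_features, epsilon=0.03, steps=50, lr=0.01):
    """Gradient steps toward the decoy's features, clipped so no pixel
    moves more than epsilon from the original."""
    cloaked = image.copy()
    for _ in range(steps):
        diff = embed(cloaked) - decoy_features
        grad = (W.T @ diff).reshape(image.shape)  # d/dx of 0.5*||Wx - d||^2
        cloaked -= lr * grad
        cloaked = np.clip(cloaked, image - epsilon, image + epsilon)
        cloaked = np.clip(cloaked, 0.0, 1.0)  # keep a valid image
    return cloaked

original = rng.random((32, 32, 3))               # stand-in for your selfie
decoy_features = embed(rng.random((32, 32, 3)))  # someone else's features

protected = cloak(original, decoy_features)
print("max pixel change:", np.abs(protected - original).max())
print("distance to decoy:", np.linalg.norm(embed(protected) - decoy_features))
```

Real cloaking tools use a trained neural feature extractor and far more careful optimisation, but the trade-off is the same: the smaller the pixel budget, the less visible the cloak and the weaker its effect.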


While the law is certainly doing its part in curtailing the ill effects of facial recognition, this tool is also expected to play a role in slowing the development of facial recognition software by reducing the number of uncorrupted photos available for training its algorithms. It will also help individuals who are looking to protect their identity going forward, especially as, in some cases, there is no way of knowing when and how photos are being used by companies.

Does your company utilize facial recognition or process other biometric data? If yes, failure to adhere fully to the guidelines and rules of the GDPR and Data Protection Act 2018 could result in a hefty financial penalty. Aphaia provides both GDPR adaptation consultancy services and CCPA compliance, including data protection impact assessments, EU AI Ethics assessments and Data Protection Officer outsourcing. Contact us today.