Updates on the guidelines for Trustworthy AI (Part II)
In the first part of this series, AI Ethics Regulation Updates, we covered some of the requirements of trustworthy AI. These requirements are to be continuously monitored and evaluated throughout an AI system's life cycle.
Today we'll be discussing the second set of requirements the AI HLEG deems relevant for a self-assessment for trustworthy AI. The principles of transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability are all considered very important by the AI HLEG and will be covered in this publication.
Transparency.
Trustworthy AI should encompass three elements: traceability, explainability and open communication about the data, system and business models of the AI system. All data sets and processes of the AI system, including the data gathering, data labeling and algorithms used, should be documented as thoroughly as possible to allow for traceability and transparency. This is extremely helpful in the event that an AI decision takes an erroneous turn, and can help prevent future mistakes.
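As a minimal sketch of what such documentation could look like in practice, the record below captures the provenance of a data set (source, collection date, labeling method, algorithms applied). The field names and the example entry are purely illustrative, not prescribed by the guidelines.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class DatasetRecord:
    """One provenance entry kept for each data set an AI system uses."""
    name: str
    source: str                # where the data was gathered
    collected_on: date         # when collection took place
    labeling_method: str       # e.g. manual annotation, heuristic rules
    algorithms_used: list = field(default_factory=list)

# Hypothetical entry for an illustrative loan-scoring system
record = DatasetRecord(
    name="loan_applications_2020",
    source="internal CRM export",
    collected_on=date(2020, 6, 1),
    labeling_method="manual review by credit officers",
    algorithms_used=["gradient boosting"],
)

# asdict() yields a plain dictionary that can be serialised into an audit trail
print(asdict(record)["name"])
```

Keeping such records alongside the system makes it possible to trace, after the fact, which data and algorithms contributed to a given decision.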
Both the technical processes and the related human decisions of an AI system must be explainable. The decisions made by an AI system must be capable of being understood and traced by human beings. When an AI system has a significant impact on people's lives, a timely and suitable explanation of the system's decision-making process should be made available at the level of expertise of the stakeholder concerned, whether a regulator, researcher or lay person.
Humans have the right to be informed that they are interacting with an AI system, and AI systems should never present themselves as human. To ensure compliance with fundamental rights, users should also have the option to decide against interacting with an AI system in favour of human interaction. Users and AI practitioners should also be made aware of the AI system's capabilities and limitations, including detailed communication of its level of accuracy.
Diversity, non-discrimination and fairness.
Businesses should focus on developing strategies or procedures to avoid biases, educational and awareness initiatives, accessibility, user interfaces, Universal Design principles and stakeholder participation. Historic bias, incompleteness and bad governance models may be inadvertently included in data sets used by AI systems, both those used for training and those used in operation. This can result in further harm through prejudice and marginalisation. All identifiable and discriminatory bias should be removed during the data collection phase wherever possible. Systems should be user-centric in a manner which allows all people, regardless of age, gender, abilities or characteristics, to use AI products and services. Accessibility for people with disabilities is particularly important. It is best to consult stakeholders who may be affected by the system, directly or indirectly, in order to develop trustworthy AI systems.
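One simple way to surface identifiable bias in collected data is to compare outcome rates across groups. The sketch below computes a demographic parity gap for a binary outcome and a single protected attribute; the data, the attribute and any threshold for flagging a data set are illustrative assumptions, not part of the guidelines.

```python
def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes: list of 0/1 decisions; groups: matching list of group labels.
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical screening decisions (1 = approved) for groups "a" and "b"
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(outcomes, groups)
print(f"parity gap: {gap:.2f}")  # a large gap is a signal to review the data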
Societal and environmental well-being.
AI systems should be sustainable and environmentally friendly, and should consider the well-being of stakeholders such as the broader society, other sentient beings and the environment throughout the AI system's life cycle. The system's development, deployment and use should all be assessed in this regard.
In all areas of our lives, pervasive use of and exposure to AI systems may alter our conception of social agency, or impact social relationships and attachment. The effects of the use of these systems need to be carefully monitored to curtail harm to people's physical and mental well-being. This impact should also be assessed from a societal perspective, considering the AI system's effect on institutions, democracy and society at large.
Accountability.
AI systems need to be auditable. Their data, algorithms and design processes must lend themselves to assessment. Information about business models and intellectual property related to the AI system need not be made openly available; however, evaluation by internal and external auditors is invaluable, and the availability of the resulting reports can help determine the trustworthiness of the AI system. These systems should be independently auditable in cases where fundamental rights are affected. Where a technology has an unjust adverse impact, accessible mechanisms should exist to make redress possible, with particular attention paid to vulnerable persons or groups.
We have explored these topics on our YouTube channel; subscribe for more content on AI ethics.
Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.