Recently, the High-Level Expert Group on Artificial Intelligence released a set of guidelines on implementing trustworthy AI in practice. In this blog, we walk you through those guidelines.
Last month on our blog, we reported on the final assessment list for trustworthy artificial intelligence, released by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This list was the result of a three-part piloting process involving over 350 stakeholders, in which Aphaia participated. Data collection during this process included an online survey, a series of in-depth interviews, and the sharing of best practices in achieving trustworthy AI. Before implementing any AI system, it is necessary to make sure that it complies with the seven principles that resulted from this effort.
While AI is transformative, it can also be very disruptive. The goal of these guidelines is to promote trustworthy AI built on three components: trustworthy AI should be lawful, ethical and robust, from both a technical and a social perspective. While the framework does not deal in depth with the first component, the legality of trustworthy AI, it provides guidance on the second and third: making sure that AI is ethical and robust.
In our latest vlog, the first of a two-part series, we explored three of those seven requirements: human agency and oversight, technical robustness and safety, and privacy and data governance.
Human agency and oversight
“AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy”. Businesses should be mindful of the effects AI systems can have on human behaviour in the broadest sense, on human perception and expectation when people are confronted with AI systems that ‘act’ like humans, and on human affection, trust and dependence.
According to the guidelines, “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.” AI systems should be human-centric in design and allow meaningful opportunity for human choice.
Technical robustness and safety
Based on the principle of prevention of harm outlined in the guidelines, “AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity.”
Organisations should also reflect on resilience to attack and security, accuracy, reliability, fall-back plans and reproducibility.
Privacy and data governance
Privacy is a fundamental right affected by AI systems, and AI systems must guarantee privacy and data protection throughout their entire lifecycle. It is recommended to implement a process that involves both top-level management and the operational level within the organisation.
Some key factors to note with regard to the principle of prevention of harm are adequate data governance, the relevance of the data used, data access protocols, and the capability of the AI system to process data in a manner that protects privacy.
When putting the assessment list into practice, it is recommended that you pay attention not only to the areas of concern but also to the questions that cannot easily be answered. The list is meant to guide AI practitioners in developing trustworthy AI, and it should be tailored to each specific case in a proportionate way.
To learn more about the principles AI HLEG has outlined to achieve trustworthy AI, look out for part two of this series by subscribing to our channel.
Do you need assistance with the Assessment List for Trustworthy AI? We can help you. Our comprehensive services cover both Data Protection Impact Assessments and AI Ethics Assessments, together with GDPR and Data Protection Act 2018 adaptation and Data Protection Officer outsourcing.