The European Union Agency for Fundamental Rights (FRA) has released a report on Artificial Intelligence and its effects on fundamental rights.
With the increased use of Artificial Intelligence (AI), the technology has drawn mixed reactions from the general public. Despite the varied opinions and reactions, the fact remains that it is quickly being integrated into our society and everyday life. While AI has proven its usefulness to us as a society, there is growing, genuine concern over its impact on fundamental rights. Certain algorithms have been shown to exhibit bias based on gender, race, and even employment status, which has raised alarm in human rights circles. This has sparked much-needed conversation about how fundamental rights can be upheld amid the growing use of AI systems, and whether many AI systems are even fit for their intended purpose. A recent report from the European Union Agency for Fundamental Rights (FRA) addresses many of these key issues and offers a landscape view of the EU’s current use of AI-related technology.
The FRA report on AI and fundamental rights is the result of extensive work to answer two key questions.
The FRA conducted research consisting of 91 personal interviews in five EU Member States: Estonia, Finland, France, the Netherlands and Spain. The agency recognized a lack of evidence on how certain AI technologies may affect fundamental rights, whether positively or negatively, and decided to address this gap. The research also included interviews with 10 experts involved in monitoring or observing potential fundamental rights violations related to the use of AI, including civil society organizations, lawyers and oversight bodies. Concrete examples of “use cases”, which the report loosely defines as “the specific application of a technology for a certain goal used by a specified actor”, would be necessary to determine two things:
- whether, and to what extent, the application of a technology interferes with various fundamental rights – and
- whether any such interference can be justified, with regard to the principles of necessity and proportionality.
Four broad ‘use cases’ were studied in-depth, in order to provide context for the fundamental rights analysis.
The report focuses on four broad ‘use cases’: social benefits, predictive policing, health services, and targeted advertising. It contains an in-depth study of each use case, covering different areas of application across public administration and private companies. The report also provides information on the current use of AI, along with basic information on EU competence, in each of these areas. Each use case gives a good sense of the kinds of AI and related technologies currently in use, which helps provide context for the fundamental rights analysis.
The fundamental rights-centred approach taken in the FRA report helps stakeholders assess the fundamental rights compatibility of AI systems in various contexts.
The FRA report and its supporting research take a fundamental rights-centred approach to AI. This approach is underpinned by legal regulation, where the responsibility for respecting, protecting and fulfilling rights rests with the Member States rather than relying on the voluntary action of organizations and businesses, as is the case with the ethics-based approach previously taken. This analysis of selected fundamental rights challenges can help the EU and its Member States, as well as other stakeholders, assess the fundamental rights compatibility of AI systems in different contexts.
The fundamental rights framework outlined in the report provides a standardized basis, including benchmarks, for the design, development and deployment of AI tools. This helps determine whether a specific use of AI is compliant with fundamental rights. It should be noted, however, that some interferences with fundamental rights can be justified, and these are covered in the report as well.
The report provides an in-depth assessment of the impact of the current use of AI on specific fundamental rights.
The FRA report provides an in-depth look at the impact of the current use of AI on specific fundamental rights, starting with a general overview of the risks perceived by the interviewees. It goes on to assess their general awareness of the fundamental rights implications of using AI, drawing on the views, practices and awareness of the issues expressed in the interviews conducted for the report. The specific rights assessed include human dignity; specific challenges regarding privacy and data protection; equality and non-discrimination; access to justice and consumer protection; as well as the right to social security and social assistance and the right to good administration. The report offers examples from specific use cases, providing insight into the thoughts and experiences of the interviewees, and comments on the current legislation and the stipulations of the Charter for each use case.
The FRA provides several recommendations on how to assess the fundamental rights impact when using AI and related technologies.
Finally, the report offers suggestions on how to assess the fundamental rights impact of using AI and related technologies, along with several guidelines for compliance. It points to data protection impact assessments, which are required by European data protection law, but also provides several examples of non-binding guidelines. One example is the Ethics Toolkit, a freely accessible tool for governmental agencies. It is based on a risk-management approach, supporting fair automated decisions and minimising unintentional harm to individuals in areas such as criminal justice, higher education and social media. Another such tool is the compliance “quick check” proposed by the Danish Institute for Human Rights, an interactive computer programme that allows companies to assess their human rights compliance by modifying the information in a database to suit their type of business and area of operations.
Almost all the systems discussed in the interviews would require some sort of testing, which included elements of impact assessment. However, the report highlights the importance of focusing not just on data protection but also specifically on human rights impact when carrying out these assessments.