UNESCO Recommendation on AI ethics

UNESCO Recommendation on AI ethics

UNESCO Recommendation on AI ethics has been agreed upon by Member States. 

 

The Member States of the United Nations Educational, Scientific and Cultural Organization (UNESCO) have agreed on the draft text of a Recommendation on the Ethics of Artificial Intelligence (AI). The initial draft of the document was shared with representatives of Member States in September 2020. The intergovernmental special committee of technical and legal experts then met in April and June of this year to examine and compose the draft Recommendation, which is well on its way to becoming the first of its kind: a global framework for AI ethics. The final text will be submitted to Member States for adoption at the 41st session of UNESCO's General Conference in November. 

 

The Recommendation addresses ethical issues related to AI as far as they fall within UNESCO's mandate. It approaches AI from a holistic, comprehensive, multicultural and evolving perspective, with the aim of helping actors deal with AI technologies responsibly, including both the known and unknown aspects of the technology. The document defines AI as “information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments.” The Recommendation focuses on the broader ethical implications of AI systems in UNESCO's central domains: education, science, culture, and communication and information. 

 

The UNESCO Recommendation on AI ethics aims to provide a framework for guiding the various stages of the AI system life cycle in order to promote human rights. 

 

The aim of the UNESCO Recommendation on the Ethics of Artificial Intelligence is to influence the actions of individuals, groups, communities, private companies and institutions to ensure that ethics rest at the heart of AI. The Recommendation seeks to foster multidisciplinary, multi-stakeholder dialogue and consensus building on issues relating to AI ethics. This universal framework of values, principles and actions is aimed at protecting, promoting and respecting human rights and freedoms, equality, human dignity, gender equality and cultural diversity, and at preserving the environment, during each stage of the AI system life cycle. The Recommendation is packed with values and principles to be upheld by all actors in the field of AI throughout the life cycle of the technology. Values play a major role as motivating ideals in the shaping of policy, legal norms and actions. These values are based on the recognition that the trustworthiness and integrity of the AI system life cycle are essential to ensuring that these technologies work for the good of all humanity.

 

The Recommendation includes a compilation of values considered apt to accomplish the ethical standard for AI set out by UNESCO. 

 

The UNESCO Recommendation on AI ethics is built on the values of respect for, protection of and promotion of human rights and fundamental freedoms, based on the inherent dignity of every human being, regardless of race, color, descent, age, gender, language, economic or social condition of birth, disability or any other grounds. It also states that environmental and ecosystem flourishing should be recognized, promoted and protected throughout the entire life cycle of AI systems. The Recommendation further aims to ensure diversity and inclusiveness and to promote living in peaceful, just and interconnected societies.

 

The document also outlines several principles under which AI technologies should operate, at every stage in their life cycle. 

 

Similar to the Ethics Guidelines for Trustworthy AI by the High-Level Expert Group on Artificial Intelligence (AI HLEG), UNESCO has outlined principles under which trustworthy AI should function. Principles like proportionality, safety, security, sustainability and the right to privacy and data protection are explained in depth, guiding how AI should function worldwide. The Recommendation states clearly that it must always be possible to attribute legal and ethical responsibility for any stage of the AI system life cycle. The principles of transparency, explainability, responsibility, accountability, and human oversight and determination are at the core of this Recommendation, with specific mention of multi-stakeholder and adaptive governance and collaboration. Adaptive governance and collaboration ensure that States comply with international law while regulating data passing through their territories. 

 

The UNESCO Recommendation on AI ethics goes on to outline policy areas aimed at operationalizing the values and principles it sets out. 

 

The Recommendation encourages Member States to establish effective measures to ensure that other stakeholders uphold the values and principles of ethical AI technology. UNESCO, recognizing that Member States will be at different stages of readiness to implement this Recommendation, will develop a readiness assessment methodology to help Member States identify their status. The Recommendation suggests that all Member States should introduce frameworks for impact assessments to identify and assess the benefits, concerns and risks of AI systems. In addition, Member States should ensure that AI governance procedures are inclusive, transparent, multidisciplinary, multilateral and multi-stakeholder. Governance should ensure that any harms caused through AI systems are investigated and redressed, through the enactment of strong enforcement mechanisms and remedial actions. Data policy and international cooperation are also extremely important to this global framework. The Recommendation suggests how Member States should assess the direct and indirect impact of AI systems on the environment and ecosystems at every point in their life cycle. It also upholds that policies surrounding digital technologies and AI should contribute to fully achieving gender equality, the preservation of culture, education and research, and the improvement of access to information and knowledge. The Recommendation also makes specific mention of how Member States should ensure the preservation of health and well-being through ethical AI use and practices.

 

“The proposal for a Regulation laying down harmonised rules on AI, published by the European Commission in April 2021, and this Recommendation on AI ethics elaborated by UNESCO show the institutions’ interest in regulating AI, both at European and global level. Businesses using AI now have an opportunity to adapt their systems and practices in order to be ready before the framework becomes mandatory”, points out Cristina Contero Almagro, Aphaia’s Partner. 

Do you use AI in your organisation and need help ensuring compliance with AI regulations? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.

Call for a ban on facial recognition

Call for a ban on facial recognition: EDPB and EDPS release a joint statement

The EDPB and EDPS have jointly called for a ban on the use of facial recognition for the automated identification of individuals in public spaces. 

 

The EDPB and EDPS call for a ban on the use of AI for biometric identification in publicly accessible spaces. This includes facial recognition, fingerprints, DNA, voice recognition and other biometric or behavioral signals. This call comes after the European Commission outlined harmonized rules for artificial intelligence earlier this year. While the EDPB and EDPS welcome the introduction of rules addressing the use of AI systems in the EU by its institutions, bodies and agencies, the organizations have expressed concern over the exclusion of international law enforcement cooperation from the scope of the proposal. The EDPB and EDPS also stress that it is necessary to clarify that the existing EU data protection framework applies to any and all personal data processing falling under the scope of the draft AI regulation. 

 

The EDPB and EDPS call for a general ban on the use of AI in public spaces, particularly in ways which might lead to discrimination. 

 

In a recently released joint statement, the EDPB and EDPS recognize that extremely high risks are posed by remote biometric identification of individuals in public spaces, particularly by AI systems that use biometrics to categorize individuals based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited. According to Article 21 of the Charter of Fundamental Rights, “Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.” In addition, the organizations are calling for a prohibition on the use of AI to deduce the emotional state of natural persons except in specific cases. One example in the field of health is where patient emotion recognition is relevant and important. However, the EDPB and EDPS maintain that any use of this sort of AI for any type of social classification or scoring should be strictly prohibited. “One should keep in mind that ubiquitous facial recognition in public spaces makes it difficult to inform the data subject about what is happening, which also makes it all but impossible to object to processing, including profiling,” comments Dr Bostjan Makarovic, Aphaia’s Managing Partner.

 

The EDPB and EDPS call for greater clarity on the role of the EDPS as competent and market surveillance authority. 

 

In their joint opinion, the organizations welcome the fact that the European Commission proposal designates the EDPS as the market surveillance authority and the competent authority for the supervision of the institutions, agencies and bodies of the European Union. However, the organizations are also calling for further clarification of the specific tasks of the EDPS within that role. The EDPB and EDPS acknowledge that data protection authorities are already enforcing the GDPR and the LED in the context of AI involving personal data. However, the organizations are suggesting a more harmonized regulatory approach, involving the DPAs as designated national supervisory authorities, as well as a consistent interpretation of data processing provisions across the EU. In addition, the statement calls for greater autonomy to be given to the European Artificial Intelligence Board, in order to avoid conflict and create a European AI body free from political influence. 

 

Do you want to learn more about facial recognition in public spaces? Check our vlog.

Do you use AI in your organisation and need help ensuring compliance with AI regulations? We can help you. Aphaia provides EU AI Ethics Assessments, Data Protection Officer outsourcing and ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments. We can help your company get on track towards full compliance.

Fintech and AI Ethics

Fintech and AI Ethics

As the world of Fintech evolves, governance and ethics in that arena become increasingly important. 

 

Financial Technology, or “Fintech”, refers to new technology that seeks to improve and automate financial services. This technology aids the smooth running of business and personal finances through the integration of AI systems. Broadly speaking, the term covers any innovation through which people can transact business, from keeping track of finances to the invention of digital currencies and cryptocurrency. With crypto-trading and digital platforms for wealth management more popular than ever before, an increasing number of consumers are seeing the practical application and value of Fintech in their lives. As with any application of AI and technology, however, certain measures should be in place for the smooth and, more importantly, safe integration of this technology into our daily lives, allowing the everyday user to feel more secure in using it. 

 

Legislation and guidance have been implemented and communicated to guide Fintech and AI ethics. 

 

Some pieces of legislation already target Fintech, such as the second Payment Services Directive (PSD2), an EU directive governing electronic payment services. PSD2 harmonizes two services which have become increasingly widespread in recent times: Payment Initiation Services (PIS) and Account Information Services (AIS). PIS providers facilitate the use of online banking to make online payments, while AIS providers facilitate the collection and storage of information from a customer’s different bank accounts in a single place. With the increasing popularity of these innovations and other forms of Fintech, and as experience provides further insight into the implications and true impact of their use, new regulations are expected in the future. 

 

Most people consider their financial data to be among their most sensitive and valuable data, and are therefore very keen to ensure its safety. Legislation and guidance have been implemented and communicated to aid in the pursuit of principles like technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness. These are all imperative to ensuring that the use of Fintech is safe and beneficial for everyone involved. 

 

Technical robustness and safety

 

The safety of one’s personal and financial information is, simply put, of the utmost importance when deciding which tools to use to manage one’s finances. A personal data breach involving financial information could be very harmful to the affected data subjects due to its sensitive nature. Financial institutions and Fintech companies put several measures in place to ensure safe and secure money management through tech. Security measures such as, inter alia, data encryption, role-based access control, penetration testing, tokenization, 2FA, multi-step approval or verification processes and backup policies can and should all be applied where necessary and feasible. These measures help users feel more secure, but ultimately they protect users from far more than they can imagine, including malware attacks, data breaches, digital identity risks and much more. 
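By way of illustration, here is a minimal Python sketch of one of the measures listed above, tokenization, in which a sensitive value such as a card number is replaced by a random surrogate that only a secured vault can map back. The function names and the in-memory vault are simplified assumptions for the purpose of illustration, not a production design.

```python
import secrets

# Tokenization sketch: the real primary account number (PAN) is swapped for a
# random surrogate, and only the vault can map it back. In production the vault
# would be a hardened, access-controlled service; this in-memory dict is purely
# illustrative.
_vault: dict[str, str] = {}

def tokenize(pan: str) -> str:
    """Return a random token that stands in for the card number."""
    token = secrets.token_urlsafe(16)
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Resolve a token back to the original PAN (vault access only)."""
    return _vault[token]

token = tokenize("4111111111111111")
print("Value stored or transmitted downstream:", token)
print("Original recovered via the vault only:", detokenize(token))
```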

 

Privacy and data governance

 

Article 22 of the EU GDPR gives data subjects the right not to be subject to a decision based solely on automated processing, except where certain circumstances apply. Automated decisions in the Fintech industry may produce legal effects concerning individuals or similarly significantly affect them. Any decision with legal or similar effects needs special consideration in order to comply with the GDPR requirements. A data protection impact assessment may be necessary to determine the risks to individuals and how best to deal with them. For special categories of data, such automated processing can only be carried out with the individual’s explicit consent or where necessary for reasons of substantial public interest. Robotic process automation (RPA) can be very useful to businesses, helping to increase revenue and save money. However, it is imperative to ensure compliance with the GDPR and to ensure that automated decision-making does not result in dangerous profiling practices. 
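As a rough illustration of the point above, the sketch below shows how a Fintech firm might gate solely automated decisions so that, absent a valid basis such as explicit consent, a case is escalated to a human reviewer. The field names, threshold and simplified decision flow are hypothetical; Article 22 also permits solely automated decisions that are necessary for a contract or authorised by law, which this sketch does not model.

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    model_score: float               # output of a hypothetical credit model
    explicit_consent: bool           # consent to solely automated decision-making
    uses_special_category_data: bool

def decide(app: LoanApplication) -> str:
    # Special category data: require explicit consent (substantial public
    # interest, the other possible basis, is not modelled here).
    if app.uses_special_category_data and not app.explicit_consent:
        return "route_to_human_review"
    # Without a valid basis for a solely automated decision, keep a human in the loop.
    if not app.explicit_consent:
        return "route_to_human_review"
    # Automated outcome, with the data subject's right to contest preserved.
    return "approve" if app.model_score >= 0.7 else "decline_with_right_to_contest"

print(decide(LoanApplication("a-123", 0.82, explicit_consent=False,
                             uses_special_category_data=False)))
```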

 

Diversity, non-discrimination and fairness

 

Several studies have explored the overall fairness of current Fintech and possible discrimination in consumer lending and other aspects of the industry. Algorithms can perpetuate widespread human biases or develop biases of their own. Common biases in the financial sector arise around gender, ethnicity and age. AI technology must prevent discrimination and protect diversity, especially in Fintech, where biases can affect an individual’s access to credit and the opportunities that it affords. The use of quality training data, choosing the right learning model and working with an interdisciplinary team may help reduce bias and maintain a sense of fairness in the world of Fintech and AI in general. 
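To make the idea of a bias check concrete, the short sketch below computes one of the simplest fairness indicators, the gap in approval rates between two demographic groups (demographic parity). The figures and the review threshold are invented for illustration; real bias audits rely on richer metrics and proper statistical testing.

```python
# Demographic parity sketch: compare approval rates between two groups.
# 1 = approved, 0 = declined. The figures below are made up.

def approval_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # decisions for one demographic group
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # decisions for another

gap = abs(approval_rate(group_a) - approval_rate(group_b))
print(f"Approval-rate gap: {gap:.2f}")

if gap > 0.2:                        # hypothetical review threshold
    print("Flag the model for a bias review before deployment.")
```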

 

Transparency

 

While the use of AI has brought much positive transformation to the financial industry, the question of AI ethics in everything that we do is unavoidable. Transparency provides an opportunity for introspection regarding ethical and regulatory issues, allowing them to be addressed. Algorithms used in Fintech should be transparent and explainable. The ICO and the Alan Turing Institute have produced their guidance “Explaining decisions made with AI” to help businesses with this. They suggest developing a ‘transparency matrix’ to map the different categories of information against the relevant stakeholders. Transparency enables and empowers businesses to demonstrate trustworthiness, and trustworthy AI is more easily adopted and accepted by individuals. Transparency into the models and processes of Fintech and other AI allows biases and other concerns to be raised and addressed. 
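As a simple illustration of what such a transparency matrix might look like in practice, the sketch below maps categories of explanation to the stakeholders who need them. The categories loosely echo the explanation types discussed in the ICO and Alan Turing Institute guidance, but the specific entries and stakeholders are assumptions chosen for illustration.

```python
# Illustrative transparency matrix: explanation categories mapped to the
# stakeholders who need them. Entries are examples, not a prescribed list.
transparency_matrix = {
    "rationale explanation":      ["customers", "complaints team", "regulator"],
    "data explanation":           ["customers", "DPO", "regulator"],
    "fairness explanation":       ["customers", "model risk team", "regulator"],
    "safety and performance":     ["model risk team", "internal audit"],
    "responsibility explanation": ["customers", "DPO", "senior management"],
}

for category, stakeholders in transparency_matrix.items():
    print(f"{category:<28} -> {', '.join(stakeholders)}")
```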

 

Check out our vlog exploring Fintech and AI Ethics:

https://youtu.be/7nj2616bq1s

You can learn more about AI ethics and regulation in our YouTube channel.

 

Do you have questions about how AI works in Fintech and the related guidance and laws? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.