Is effective AI regulation possible?

On our YouTube channel this week, we are discussing how AI is currently regulated, the challenges involved, and what the future could look like!

One of the main challenges of AI is avoiding discrimination in its application and results. This requires controls on the datasets used for training, ethical and general principles built into the design, and regular checks.
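One such regular check can be sketched in a few lines of Python. The demographic-parity metric below, along with the toy data, column names and the 0.1 tolerance, are illustrative assumptions, not a prescribed legal test:

```python
# Minimal sketch of a dataset bias check: demographic parity difference.
# All names, data and the threshold are invented for illustration.

def demographic_parity_difference(records, group_key, outcome_key):
    """Return the largest gap in positive-outcome rates between groups."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + r[outcome_key]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy training set: 1 = favourable outcome (e.g. loan approved).
data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

gap = demographic_parity_difference(data, "group", "approved")
print(f"approval-rate gap: {gap:.2f}")  # group A: 2/3, group B: 1/3
if gap > 0.1:  # illustrative tolerance, not a legal threshold
    print("warning: large outcome gap between groups in training data")
```

Run on each dataset refresh, a check like this turns the abstract duty to "perform regular checks" into a concrete, auditable step in the pipeline.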

An additional issue the legislator should tackle is who should be liable where AI breaks the law (liability). This is also linked to international law, as courts will have to discern not only who is responsible but also what the jurisdiction or applicable law ought to be.

For example, one could argue this could be the country where the AI was programmed, or the country where the device that includes the AI was manufactured. The stakes rise further when it comes to life-changing decisions, such as the choice of who should live or die in a car accident. How do you decide?

What’s more, critics are concerned with the issue of self-development or ‘deep learning’ when it comes to AI. What if we cannot control our AI? Should we impose some limits? If so, what limits?

And what about the issue of AI being used as a weapon? Suggestions to limit some very specific applications of AI seem to merit much closer examination and action. A major case in point is the development of autonomous weapons that employ AI to decide when to fire, how much force to apply, and on what targets.

In our video this week we answer the question, is effective AI regulation possible?

We would also love to hear what you think, share your thoughts with us!

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

Artificial Intelligence in H2020 overview

The Spanish Centre for the Development of Industrial Technology (CDTI) held an info day last Thursday 11th on the importance of artificial intelligence in H2020.

Horizon 2020 is the biggest EU research and innovation programme ever, with nearly €80 billion of funding available over seven years (2014 to 2020). It aims to reach three strategic goals:

  • Excellent science, in order to make the EU become a world leader in science.
  • Industrial leadership, for the improvement of European competitiveness.
  • Societal challenges, where targeted investment in research and innovation can have a real impact benefitting citizens.

Horizon 2020 is expected to help the EU with the development of AI, and the Digital Europe Programme will then support its implementation.

Fernando Rico, ICT representative for H2020, went through the different scenarios of H2020 where AI has played a relevant role and highlighted one of its main milestones, the Communication from the Commission published last year. From that point onwards, other landmarks have been reached, such as the creation of the AI HLEG, the launch of an action plan and the adoption of a new strategic agenda for the development of AI, among others.

In terms of the budget, whereas the Commission allocated €500 million to AI projects during 2018-2019, the goal is to reach €20 billion per year by 2020, together with the Member States.

Enrique Pelayo, national ICT 2020 point of contact, underlined some H2020 topics where AI plays a main role:

  • ICT48 – Towards a vibrant European network of AI excellence centers.
  • ICT49 – AI on demand platform.
  • ICT38 – Artificial Intelligence for Manufacturing.

While the first two set up the general basis for AI in ICT, the third addresses the involvement of AI in a specific sector.

The speakers also referred to InnovFin, a joint initiative in cooperation with the European Investment Bank Group which aims to facilitate and accelerate access to finance for innovative businesses and other innovative entities in Europe.

David González, from the technical office of the SGCPC (MICINN), talked about the national R&D&I (I+D+i) strategy, focusing on the national angle. Several discussions and meetings are being held with businesses and companies that provide AI products and services, with the aim of having a draft ready by next autumn.

AI key challenges and opportunities were discussed at the end of the event, from both a general and an industry perspective. As for the pitfalls, the speakers pointed to the lack of administrative staff and the need for practical guidelines that can be easily put into practice. AI Ethics and Regulation stood out as one of the most influential fields; all the speakers agreed that "reaching an agreement among EU Member States becomes an essential step that should be prioritized".

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

AI Ethics in 2 minutes

On our YouTube channel this week, we are discussing the basis of AI Ethics, in line with the EU approach. As a member of the EU AI Alliance, Aphaia is involved in the discussion of all aspects of AI development and its impacts, and we regularly share our thoughts and feedback with the AI HLEG.

The aim of the EU is to promote trustworthy AI. What does this mean?

Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

1) it should be lawful, complying with all applicable laws and regulations. These comprise EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the GDPR, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws. The law concerns not only what cannot be done, but also what should be done and what may be done.

2) it should be ethical, ensuring adherence to ethical principles and values.

3) it should be robust, both from a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts.

Apart from the components, trustworthy AI should meet the following ethical principles:

Respect for human autonomy. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.

Prevention of harm. AI systems should be technically robust, and it should be ensured that they are not open to malicious use. Vulnerable persons should receive greater attention and be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information.

Fairness. Ensuring equal and just distribution of both benefits and costs and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation.

Explicability. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected.
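For a simple model, explicability can be made concrete by reporting how much each input contributed to a decision. Below is a minimal sketch; the linear scoring model, weights and feature names are invented purely for illustration:

```python
# Hypothetical linear credit-scoring model. Weights, bias and applicant
# values are invented for illustration only.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = 0.1

def score(features):
    """Compute the model's decision score for one applicant."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features):
    """List each feature's contribution to the score, largest impact first."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 1.2, "debt": 0.9, "years_employed": 2.0}
print(f"score = {score(applicant):.2f}")
for name, contribution in explain(applicant):
    print(f"  {name}: {contribution:+.2f}")
```

For a real, non-linear system the same idea requires dedicated explanation techniques, but the goal is identical: a person affected by the decision can see which factors drove it.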

What should AI developers then take into account to create trustworthy AI?

All the above points should be considered and implemented when designing the AI, together with some other key requirements such as human agency and oversight, privacy and data governance, societal and environmental wellbeing, and accountability.

Do you think this would be enough? Share your thoughts with us!

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

First European AI Alliance Assembly overview

The first EU AI Alliance Assembly was held in The Egg, Brussels on Wednesday 26th.

One year after the creation of the AI HLEG and the European AI Alliance, the first Assembly took place this week in Brussels. It was a full day event that brought together stakeholders, including citizens, businesses, public bodies and policymakers.

The event was divided into two parts. The first comprised two panel discussions that allowed the AI HLEG to present two important milestones: the Policy and Investment Recommendations on AI and the launch of the piloting process of the AI Ethics Guidelines.

The second part focused on the next steps of the EU AI Strategy. It featured a panel discussion on the international governance of AI, followed by four workshops run in parallel: competitiveness, research and skills, regulation and legal framework, and societal and environmental impact. All the speakers were part either of the AI HLEG, the OECD or private businesses committed to and involved in the development of AI. These topics were identified as the biggest challenges for the implementation of AI in Europe. Aphaia attended the legal workshop; as AI Ethics frontrunners, it is crucial for us to stay up to date with the EU's latest steps on AI regulation, in order to align our EU AI Ethics Assessments and other AI products with the Commission's opinion. Whenever the AI HLEG moves on with new recommendations or policies, so does Aphaia.

We have put together some of the main topics that were addressed by the speakers:

Policy and Investment Recommendations on AI

This is the second deliverable of the AI HLEG and follows the publication of the Ethics Guidelines for Trustworthy AI in April 2019. The purpose of this document is to set out how trustworthy AI can actually be developed, deployed, fostered and scaled in Europe, so that trustworthy AI becomes a practical goal that businesses, both SMEs and big companies, can achieve. In this regard, Juraj Podruzek raised the need to make AI Ethics more user-friendly.

Pekka Ala-Pietilä highlighted the key takeaways from the report:

  • Empower and protect humans and society.
  • Take up a tailored approach to the AI landscape.
  • Secure a Single European Market for Trustworthy AI.
  • Enable AI ecosystems through Sectoral Multi-Stakeholder Alliances.
  • Foster the European data economy.
  • Exploit the multi-faceted role of the public sector.
  • Strengthen and unite Europe’s research capabilities.
  • Nurture education to the Fourth Power.
  • Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework.
  • Stimulate an open and lucrative investment environment.
  • Embrace a holistic way of working, combining a 10-year vision with a rolling action plan.

Thiébaut Weber stated that one of the main challenges is to reconcile the opportunities of the technology with the reality of some users. "We know that AI can bring a lot of added value to some sectors, but we also have maybe 40% of European citizens who suffer from digital illiteracy. No one should be left behind; AI is for everybody, for the benefit of all of us".

We are preparing a separate article with an overview of the Recommendations that will be published on Aphaia's blog, both in English and Spanish. The original document can be accessed here.

Launch of the piloting process of the AI Ethics Guidelines

The piloting process aims at gathering feedback from the AI Alliance members on the assessment list that operationalises the key requirements. All the stakeholders are invited to send their comments on how it can be improved, by means of a thorough but to-the-point survey questionnaire that will be open until December 2019. In-depth interviews with a number of representative organisations will also be used as inputs for the piloting process.

Once participants have made their contributions, the intended result is a revised version of the assessment list and a set of AI guidelines tailored to different sectors.

Andrea Renda pointed out that proactive regulation should be promoted, rather than self-regulation.

International governance

Raja Chatila underlined the difficulty of implementing trustworthy AI because of the scientific and ethical issues that still remain open, such as the concept of transparency, which is not just explainability. We need to take a long-term vision that includes different steps and multiple stakeholders, rather than immediately demanding that every machine learning system be explainable tomorrow. "If we set this as a goal, people will work on it. Proportionality, flexibility and regulation steer innovation and create a climate of trust".

Moez Chakchouk, from UNESCO, stood up for thinking globally. "There is a huge difference between developed and developing countries in terms of awareness. Education is the key; we should raise awareness through education. Robust, safe, fair and trustworthy AI has first to respect human and democratic values".

Dirk Pilat, from the OECD, stressed the importance of all countries bringing together the work previously done in the AI field and reflecting on it. "We are all trying to figure out the same issue and we need to share our practices and build a consensus".

Ethical and legal issues

This was an interactive workshop where the speakers set out their views and attendees then asked questions. The core topic was how AI regulation should be framed. Should it be a completely new regulation? Should it be self-regulation? Should it be open to the industry?

Ursula Pachl asserted that we should have a look at the current legal framework in the EU. “It is time for everybody to get back to the core and evaluate the existing framework, the existing legislation, does it fit the purpose? What are the regulatory gaps that we have to address? It is technology that serves people and not the other way around. We should use technology according to the laws, values and ethical principles”.

Ramak Molavi, for her part, said that we should transform compliance with the law into a source of competitiveness. "Why does regulation have such a bad reputation? Do we need specific AI regulation? We have so many laws already in place that apply to AI (competition, neutrality, GDPR, etc.). We have AI rules, but how do we implement them? We need to test them before they go out, which only works with ethics by design. We cannot have the industry working on their own and leaving the government outside. Self-regulation is not for those that want to move fast; it is something that would apply to mature companies. We should slow down, look at what we can do to develop sustainable AI, and find an alternative to self-regulation. We need a new approach to regulate innovation. Everyone wants regulation to be perfect from the beginning, but we need previous knowledge."

The event finished with a review of the topics addressed in the four parallel workshops and a final reflection on the importance of bringing AI regulation closer to the reality of businesses, companies and society.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.