Artificial Intelligence in H2020 overview

The Spanish Centre for Technological Industry Development (CDTI) held an info day last Thursday 11th on the importance of artificial intelligence in H2020.

Horizon 2020 is the biggest EU research and innovation programme ever, with nearly €80 billion of funding available over seven years (2014 to 2020). It aims to reach three strategic goals:

  • Excellent science, to make the EU a world leader in science.
  • Industrial leadership, to improve European competitiveness.
  • Societal challenges, where targeted investment in research and innovation can have a real impact benefiting citizens.

Horizon 2020 is expected to help the EU develop AI, with the Digital Europe Programme then supporting its implementation.

Fernando Rico, ICT representative from H2020, went through the different scenarios of H2020 where AI has played a relevant role, and highlighted one of its main milestones, the Communication from the Commission, which took place last year. From that point onwards, other landmarks have been reached, such as the creation of the AI HLEG, the launch of an action plan and the adoption of a new strategic agenda for the development of AI, among others.

In terms of budget, whereas the Commission allocated €500 million to AI projects during 2018-2019, the goal is to reach €20 billion per year by 2020, together with the Member States.

Enrique Pelayo, national point of contact for ICT in H2020, underlined some H2020 topics where AI plays a main role:

  • ICT-48: Towards a vibrant European network of AI excellence centres.
  • ICT-49: AI on demand platform.
  • ICT-38: Artificial Intelligence for Manufacturing.

While the first two set out the general basis for AI in ICT, the latter addresses the involvement of AI in a specific sector.

The speakers also referred to InnovFin, a joint initiative in cooperation with the European Investment Bank Group which aims to facilitate and accelerate access to finance for innovative businesses and other innovative entities in Europe.

David González, from the technical office of SGCPC (MICINN), talked about the national R&D&I strategy and focused on the national angle. Several discussions and meetings are being held with businesses and companies that provide AI products and services, with the aim of having a draft ready by next autumn.

AI key challenges and opportunities were discussed at the end of the event, from both a general and an industry perspective. When it comes to the pitfalls, the speakers pointed out the lack of administrative staff and the need for practical guidelines that can be easily put into practice. AI ethics and regulation stood out as one of the most influential fields, where “reaching an agreement among EU Member States becomes an essential step that should be prioritized”, all the speakers agreed.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

AI Ethics in 2 minutes

On our YouTube channel this week, we are discussing the basis of AI Ethics, in line with the EU approach. As a member of the EU AI Alliance, Aphaia is involved in the discussion of all aspects of AI development and its impacts, and we regularly share our thoughts and feedback with the AI HLEG.

The aim of the EU is to promote trustworthy AI. What does this mean?

Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

1) it should be lawful, complying with all applicable laws and regulations, which comprises EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the GDPR, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws. Compliance concerns not only what cannot be done, but also what should be done and what may be done.

2) it should be ethical, ensuring adherence to ethical principles and values.

3) it should be robust, both from a technical and social perspective, since even with good intentions, AI systems can cause unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts.

Apart from the components, trustworthy AI should meet the following ethical principles:

Respect for human autonomy. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills.

Prevention of harm. AI systems should be technically robust, and it should be ensured that they are not open to malicious use. Vulnerable persons should receive greater attention and be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information.

Fairness. Ensuring equal and just distribution of both benefits and costs and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation.

Explicability. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected.

What should AI developers then take into account to create trustworthy AI?

All the above points should be considered and implemented when designing the AI, together with other key requirements such as human agency and oversight, privacy and data governance, societal and environmental wellbeing, and accountability.
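As an illustration only, the components and requirements above can be thought of as a checklist to review throughout a system’s life cycle. The sketch below is hypothetical (the names and structure are ours, not an official API or the AI HLEG’s assessment list):

```python
# Hypothetical checklist of the trustworthy-AI requirements discussed above.
# The requirement names follow the EU approach; the code itself is illustrative.
TRUSTWORTHY_AI_REQUIREMENTS = [
    "lawful",
    "ethical",
    "robust",
    "human agency and oversight",
    "privacy and data governance",
    "societal and environmental wellbeing",
    "accountability",
]

def trustworthiness_gaps(assessment: dict) -> list:
    """Return the requirements an AI system has not yet demonstrably met."""
    return [req for req in TRUSTWORTHY_AI_REQUIREMENTS
            if not assessment.get(req, False)]
```

For example, a system assessed only as lawful would still show gaps for every other requirement, which is the point of reviewing the full list rather than stopping at legal compliance.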

Do you think this would be enough? Share your thoughts with us!


First European AI Alliance Assembly overview

The first EU AI Alliance Assembly was held at The Egg, Brussels, on Wednesday 26th.

One year after the creation of the AI HLEG and the European AI Alliance, the first Assembly took place this week in Brussels. It was a full day event that brought together stakeholders, including citizens, businesses, public bodies and policymakers.

The event was divided into two different parts. The first comprised two panel discussions in which the AI HLEG presented two important milestones: the Policy and Investment Recommendations on AI, and the launch of the piloting process of the AI Ethics Guidelines.

The second part was focused on the next steps of the EU AI Strategy. It comprised a panel discussion on the international governance of AI, followed by four workshops run in parallel: competitiveness, research and skills, regulation and legal framework, and societal and environmental impact. All the speakers were part either of the AI HLEG, the OECD or private businesses committed to and involved with the development of AI. These topics are the ones identified as the biggest challenges for the implementation of AI in Europe. Aphaia attended the legal one; as AI ethics frontrunners, it is crucial for us to stay up to date with the latest steps from the EU with regard to AI regulation, in order to align our EU AI Ethics Assessments and other AI products with the Commission’s opinion. Whenever the AI HLEG moves on with new recommendations or policies, so does Aphaia.

We have put together some of the main topics that were addressed by the speakers:

Policy and Investment Recommendations on AI

This is the second deliverable of the AI HLEG and follows the publication of the Ethics Guidelines for Trustworthy AI in April 2019. The purpose of this document is to set out how trustworthy AI can actually be developed, deployed, fostered and scaled in Europe, making trustworthy AI a practical goal that businesses, both SMEs and big companies, can achieve. In this regard, Juraj Podruzek raised the need to make AI ethics more user-friendly.

Pekka Ala-Pietilä highlighted the key takeaways from the report:

  • Empower and protect humans and society.
  • Take up a tailored approach to the AI landscape.
  • Secure a Single European Market for Trustworthy AI.
  • Enable AI ecosystems through Sectoral Multi-Stakeholder Alliances.
  • Foster the European data economy.
  • Exploit the multi-faceted role of the public sector.
  • Strengthen and unite Europe’s research capabilities.
  • Nurture education to the Fourth Power.
  • Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework.
  • Stimulate an open and lucrative investment environment.
  • Embrace a holistic way of working, combining a 10-year vision with a rolling action plan.

Thiébaut Weber stated that one of the main challenges is to confront the opportunities of the technology with the reality of some users. “We know that AI can bring a lot of added value to some sectors, but we also have maybe 40% of European citizens who suffer from digital illiteracy. No one is left behind; AI is for everybody, for the benefit of all of us”.

We are preparing a separate article with an overview of the Recommendations that will be published on Aphaia’s blog, both in English and Spanish. The original document can be accessed here.

Launch of the piloting process of the AI Ethics Guidelines

The piloting process aims at gathering feedback from the AI Alliance members on the assessment list that operationalises the key requirements. All the stakeholders are invited to send their comments on how it can be improved, by means of a thorough but to-the-point survey questionnaire that will be open until December 2019. In-depth interviews with a number of representative organisations will also be used as inputs for the piloting process.

Once the participants have made their contributions, the intended result is a revised version of the assessment list and a set of AI guidelines tailored to different sectors.

Andrea Renda pointed out that proactive regulation should be promoted, rather than self-regulation.

International governance

Raja Chatila underlined the difficulty of implementing trustworthy AI because of the scientific and ethical issues that remain open, like the concept of transparency, which is not just explainability. We need to take a long-term vision that includes different steps and multiple stakeholders, rather than immediately demanding that every machine learning system be explainable tomorrow. “If we set this as a goal, people will work on it. Proportionality, flexibility and regulation steer innovation and create a climate of trust”.

Moez Chakchouk, from UNESCO, advocated thinking globally. “There is a huge difference between developed and developing countries in terms of awareness. Education is the key; we should raise awareness through education. Robust, safe, fair and trustworthy AI has first to respect human and democratic values”.

Dirk Pilat, from the OECD, stressed the importance of all countries pooling the work already done in the AI field and reflecting on it. “We are all trying to figure out the same issue and we need to share our practices and build a consensus”.

Ethical and legal issues

This was an interactive workshop where the speakers set out their views and the attendees then intervened with questions. The core topic was how AI regulation should be framed. Should it be a completely new Regulation? Should it be self-regulation? Should it be open to the industry?

Ursula Pachl asserted that we should have a look at the current legal framework in the EU. “It is time for everybody to get back to the core and evaluate the existing framework, the existing legislation, does it fit the purpose? What are the regulatory gaps that we have to address? It is technology that serves people and not the other way around. We should use technology according to the laws, values and ethical principles”.

Ramak Molavi, for her part, said that we should transform compliance with the law into a source of competitiveness. “Why does regulation have such a bad reputation? Do we need specific AI regulation? We have so many laws already in place that apply to AI (competition, neutrality, GDPR, etc.). We have AI rules, but how do we implement them? We need to test them before they go out, which only works with ethics by design. We cannot have the industry working on its own and leaving the government outside. Self-regulation is not for those that want to move fast; it is something that would apply to mature companies. We should slow down and look at what we can do to develop sustainable AI and find an alternative to self-regulation. We need a new approach to regulating innovation. Everyone wants regulation to be perfect from the beginning, but we need prior knowledge.”

The event finished with a review of the topics addressed in the four parallel workshops and a final reflection on the importance of bringing AI regulation closer to the reality of businesses, companies and society.


Practical guidance on how to process mixed datasets

The European Commission has published guidance on the interaction between the Regulation on the free flow of non-personal data and the GDPR.

One year after the GDPR started to apply, most controllers are (or at least should be) well aware of the security and privacy requirements that should govern datasets containing personal data. However, what happens when those datasets include not only personal data but also non-personal information?

There is a new Regulation (Regulation (EU) 2018/1807 on a framework for the free flow of non-personal data in the European Union), applicable as of 28 May 2019, that sets up the conditions for the processing and transfer of non-personal data in the European Union and aims at removing obstacles to the free movement of non-personal data across Member States and IT systems in Europe. Accordingly, when it comes to mixed datasets, one should consider not only the GDPR but also this new Regulation.

The European Commission has published guidance in order to clarify the interaction between the Free Flow of Non-Personal Data Regulation and the GDPR.

For the purposes of the Free Flow of Non-Personal Data Regulation, non-personal data means:

  • data which originally did not relate to an identified or identifiable natural person, such as data on weather conditions generated by sensors.
  • data which were initially personal data but were later made anonymous.

It is defined simply as the opposite of the GDPR’s concept of personal data.

The Free Flow of Non-Personal Data Regulation has three notable features:

  • It prohibits Member States, as a rule, from imposing requirements on where data should be localised.
  • It establishes a cooperation mechanism to make sure that competent authorities continue to be able to exercise any rights they have to access data that are being processed in another Member State.
  • It provides incentives for industry, with the support of the Commission, to develop self-regulatory codes of conduct on the switching of service providers and the porting of data.

Datasets containing the names and contact details of legal persons are in principle non-personal data, except in some cases, such as when the name of the legal person is the same as that of a natural person who owns it, or when the information relates to an identified or identifiable natural person.

In the case of a dataset composed of both personal and non-personal data:

  • The Free Flow of Non-Personal Data Regulation applies to the non-personal data part of the dataset;
  • The GDPR free flow provision applies to the personal data part of the dataset; and
  • If the non-personal data part and the personal data part are ‘inextricably linked’, the data protection rights and obligations stemming from the GDPR fully apply to the whole mixed dataset, even when personal data represent only a small part of it.
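The three rules above form a simple decision procedure. As a minimal sketch only (illustrative code, not legal advice, and the function name is ours, not from the guidance):

```python
def applicable_framework(has_personal, has_non_personal, inextricably_linked=False):
    """Illustrative sketch of the Commission's guidance on mixed datasets:
    returns which legal framework governs each part of a dataset."""
    if has_personal and has_non_personal and inextricably_linked:
        # GDPR rights and obligations fully apply to the whole mixed dataset,
        # even if personal data are only a small part of it.
        return {"whole dataset": "GDPR"}
    rules = {}
    if has_non_personal:
        rules["non-personal part"] = "Free Flow of Non-Personal Data Regulation"
    if has_personal:
        rules["personal part"] = "GDPR"
    return rules
```

For a separable mixed dataset, each part is governed by its own regime; as soon as the parts are inextricably linked, the GDPR takes over for the whole dataset.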

What does ‘inextricably linked’ mean?

The concept of ‘inextricably linked’ is not defined by either of the two Regulations. For practical purposes, it can refer to a situation where a dataset contains personal data as well as non-personal data and separating the two would either be impossible, or be considered by the controller economically inefficient or not technically feasible. For example, a company buying a CRM system and a sales reporting system built on the CRM data would have to duplicate its software costs by purchasing separate products for the CRM (personal data) and for sales reporting (aggregated, non-personal data). Separating the dataset is also likely to decrease its value significantly. In addition, the changing nature of data makes it more difficult to clearly differentiate, and thus separate, the different categories of data.

What is the conclusion then?

Whenever personal data are involved, the GDPR applies. However, the Free Flow of Non-Personal Data Regulation provides controllers with the opportunity to manage personal and non-personal data differently where they are suitably separated.

This new Regulation, combined with the GDPR, provides the EU with the most stable legal framework for the free movement of all data.

Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.