The EU's Approach to AI – Recent Regulatory Developments

The European Union has long taken an interest in artificial intelligence (AI). It first examined the area in 2014, when it produced guidelines on the regulation of robotics, and it has continued to work in this field since, with a strong focus on the regulation of AI.

In the last two years, there has been a wealth of new publications, guidelines and political declarations from various EU bodies on AI. These provide insight into the future of AI in Europe – including on how it will be regulated, what governments will promote, who will be liable for defects in AI and how safety standards will be enforced. These insights are of value both to legal practitioners operating in the emerging technologies sector and to organisations developing, using or considering the procurement of AI products. 

While these documents are all interesting and important, their length and sheer number mean that getting a handle on the overall picture they present can be a time-intensive process. This article aims to provide that overview through a brief summary of each of the key recent EU publications on AI.

 

 

Title | EU body | Date of publication

1. Declaration of cooperation on artificial intelligence | EU Member States | 10.04.2018
2. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe | European Commission | 25.04.2018
3. Artificial intelligence: a European perspective | European Commission - Joint Research Centre | 2018
4. A definition of AI: main capabilities and disciplines | European Commission - High-Level Expert Group on Artificial Intelligence | 08.04.2019
5. Ethics guidelines for trustworthy AI | European Commission - High-Level Expert Group on Artificial Intelligence | 08.04.2019
6. Building Trust in Human-Centric Artificial Intelligence, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions | European Commission | 08.04.2019
7. Policy and investment recommendations for trustworthy AI | European Commission - High-Level Expert Group on Artificial Intelligence | 26.06.2019
8. Liability for Artificial Intelligence and Other Emerging Digital Technologies | European Commission (New Technologies Formation) | 27.11.2019
9. Commission report on safety and liability implications of AI, the internet of things and robotics | European Commission | 19.02.2020
10. White Paper on Artificial Intelligence - A European Approach to Excellence and Trust | European Commission | 19.02.2020

1. Declaration of cooperation on artificial intelligence (AI) - 10.04.2018

Twenty-five European countries signed the Declaration of Cooperation on AI. The purpose of this relatively concise declaration is to bring together the national AI initiatives of each signatory. One of its aims is to establish a unified European approach to the most important issues relating to AI. These include:

a) EU competitiveness: ensuring Europe's competitiveness in the research and deployment of AI through boosting Europe's technology and industrial AI capacity; 

b) Education and upskilling for citizens: addressing socio-economic challenges created by the transformation of the labour markets; and

c) Legal and ethical framework: ensuring there is an adequate legal and ethical framework in place to handle the various social, economic and ethical questions which arise from the adoption of AI solutions.

The declaration sets out high-level aims for the use of AI in the EU. It intends to bring greater alignment to the approaches being developed, and to the adoption of AI solutions, across the signatory states. In particular, it aims to address the challenges arising from AI's transformation of the labour market, for example through widespread modernisation of Europe's education and training systems on AI, efforts to upskill and/or reskill European citizens to improve AI and data literacy, and the promotion of an environment of trust and accountability around the development and use of AI. This declaration forms the basis of many of the documents discussed in this article.


2. Communication from the Commission to the European Parliament, The European Council, the Council, the European Economic And Social Committee and the Committee Of The Regions On Artificial Intelligence For Europe - 25.04.2018

This communication follows on from the signing of the Declaration of Cooperation on Artificial Intelligence on 10 April 2018 (item 1 above). It sets out a European initiative on AI to boost the EU's technological and industrial capacity through the uptake of AI across the economy by both the private and public sectors. Furthermore, the communication looks to address the potential socio-economic changes brought about by AI by:

a) encouraging the modernisation of education and training systems;
b) nurturing home talent;
c) supporting labour market transitions;
d) adapting social protection systems; and
e) ensuring that an appropriate ethical and legal framework based on the EU's values and in line with the Charter of Fundamental Rights is in place. 

The communication reports that the EU currently lags behind the US and China in terms of private investment in AI. In 2016, EU investment totalled €2.4-3.2 billion, compared to €6.5-9.7 billion in Asia and €12.1-18.6 billion in North America. Accordingly, creating a fertile environment for investment is seen as crucial in order to preserve and build upon EU AI assets. The report also stresses the importance of AI solutions being adopted throughout the European economy and across different sectors if the EU is to remain competitive: in 2017, it was found that only 25% of large EU enterprises and 10% of EU SMEs were using big data analytics.

The report concludes that the EU has the main ingredients to become a leader in AI. In particular, it has a strong scientific and industrial base to build upon, with leading research labs and universities, recognised leadership in robotics, and innovative start-ups. The EU also has a comprehensive legal framework which protects consumers while promoting innovation and the EU is making progress in the creation of a Digital Single Market.


3. Artificial intelligence: a European perspective - 2018

This report was published in 2018 by the Joint Research Centre ("JRC"), which is the European Commission’s science and knowledge service. The objective of the report is to provide a balanced assessment of the opportunities and challenges presented by AI from a European perspective, and to support the development of European action in the global AI context.  

The report highlights that the global competition in AI is largely between the US and China. The US leads as things stand, but China is on course to become the world leader in AI development by 2030. For the EU, the challenge is not so much to contest that position itself, but rather to develop and protect a well-defined and robust ethical framework as a key characteristic of AI development in the EU. The JRC concludes that Europe is well placed to establish a distinctive form of AI that is ethically robust and which protects the rights of individuals, firms and society at large.

The JRC emphasises the need to share data between the key stakeholders in AI development, including between public and private sector bodies and the public. The areas in which collaboration is marked as a priority include:

a) Partnerships: Increasing efforts to join research initiatives and create partnerships in Europe;
b) Interoperable datasets: Making high quality and trusted data repositories available to a broad range of users;
c) Support: Improving accessibility to know-how and testing facilities, such as smart-hospitals and precision-farming solutions;
d) Digital innovation hubs: To increase the uptake of AI solutions; and
e) Workforce: Skilling and upskilling the home-grown workforce, while also taking action to attract and retain non-EU talent.


4. A definition of AI: main capabilities and disciplines - 08.04.2019

Defining AI is contentious, given the broad nature of AI techniques and capabilities. This is a challenge for the future, as regulating AI will require some form of definition to be agreed in order to have certainty over what is being regulated.

Here the High-Level Expert Group on Artificial Intelligence (AI HLEG), an expert advisory group to the European Commission, provides its proposed definition of AI, using the definition put forward by the European Commission (in the communication discussed in item 2 above) as a starting point. The definition assumes that any AI system has three common components:

1. perception; 
2. reasoning or decision making; and 
3. actuation. 

In order to make decisions, an AI system must have some capacity for learning (which can be achieved using techniques such as machine learning, neural networks, deep learning and decision trees) so that it can solve problems that cannot be precisely specified, or whose solution method cannot be described by fixed rules.
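By way of illustration only, and not something drawn from the AI HLEG document itself, the three components can be pictured as a simple sense-reason-act loop. The following Python sketch uses a hypothetical thermostat scenario; all names and values are invented:

```python
# Illustrative sketch of the three common components identified by the
# AI HLEG: perception, reasoning/decision making and actuation.
# The thermostat scenario and all names here are hypothetical.

def perceive(environment: dict) -> float:
    """Perception: acquire data about the environment (here, a sensor reading)."""
    return environment["temperature"]

def decide(reading: float, learned_threshold: float) -> str:
    """Reasoning/decision making: in a real AI system the threshold would
    typically be learned from data (e.g. via machine learning) rather than
    fixed by hand."""
    return "heat_on" if reading < learned_threshold else "heat_off"

def actuate(action: str) -> None:
    """Actuation: act on the environment on the basis of the decision."""
    print(f"Actuator command: {action}")

# One pass through the perceive -> decide -> actuate loop:
environment = {"temperature": 17.5}
actuate(decide(perceive(environment), learned_threshold=19.0))
```

In a genuine AI system the decision step would rest on a learned model rather than a hand-set threshold; the sketch only shows how the three components fit together.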


5. Ethics guidelines for trustworthy AI - 08.04.2019

In this document, the AI HLEG proposes a set of guidelines to ensure that the use of AI is "trustworthy". The guidelines flag that user trust is an essential component in the deployment of a new technology. Without trust, the economic and potential societal benefits of AI cannot be realised as users will not adopt it. 

The AI HLEG sets out three pillars of trustworthiness, with explanations on how these can be achieved. The guidelines explain that AI should be:

a) lawful;
b) ethical; and 
c) robust, both from a technical and social perspective. 

The first point is not addressed fully – the AI HLEG merely notes that existing laws, such as consumer safety and data protection laws, should be complied with, but avoids a discussion of what further laws would be required to regulate AI.

The AI HLEG is more expansive on the second and third pillars. It recommends that ethical AI can be achieved when four key principles are respected:

1. respect for human autonomy;
2. prevention of harm;
3. fairness; and 
4. explicability. 

The guidelines acknowledge that such principles may conflict (for example, how can the prevention of harm be reconciled with human autonomy in an autonomous vehicle with manual override?).

On robustness, the guidelines recommend policy measures such as transparency and stakeholder participation, as well as a suite of crucial technical measures, including:

1. having resilience to cybersecurity attack; 
2. putting backup safety measures in place – such as having to ask for confirmation from a human operator if things go wrong; 
3. setting accuracy thresholds at a high level; and
4. ensuring that all results obtained from AI systems are reproducible and reliable.

Though the guidelines lack specific detail on how AI developers can comply with these suggestions in practice, they do provide a useful first analysis of how AI can be regulated to ensure that it benefits, and is seen to benefit, society. 
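While the guidelines stop short of prescribing implementations, a hedged sketch of how measures 2 and 3 above might combine in practice is set out below: a system whose confidence falls below a set threshold defers to a human operator. The model stub, threshold value and prompt are all invented for illustration:

```python
# Hypothetical sketch combining an accuracy/confidence threshold with a
# human-in-the-loop backup safety measure. The model stub and threshold
# value are invented; the guidelines say only that thresholds should be high.

CONFIDENCE_THRESHOLD = 0.95  # invented value

def classify(sample: dict) -> tuple[str, float]:
    """Stand-in for a real model; returns a (label, confidence) pair."""
    return ("approve", 0.91)

def decide_with_fallback(sample: dict) -> str:
    label, confidence = classify(sample)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Backup safety measure: ask a human operator to confirm whenever the
    # system is not sufficiently confident in its own output.
    answer = input(f"Model suggests '{label}' ({confidence:.0%}). Confirm? [y/n] ")
    return label if answer.strip().lower() == "y" else "referred_to_human_review"

print(decide_with_fallback({"applicant_id": 1}))
```

A real deployment would substitute an actual model for the stub and calibrate the threshold empirically rather than fixing it by hand.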


6. Building Trust In Human-Centric Artificial Intelligence, Communication From The Commission To The European Parliament, The Council, The European Economic And Social Committee And The Committee Of The Regions - 08.04.2019

This document is based on the advice the European Commission received from the AI HLEG. The European Commission intends to put into action many of AI HLEG's recommendations and to begin a "piloting phase involving stakeholders on the widest scale" to assess how to implement ethical guidelines for the development and use of AI.

The document summarises the AI HLEG documents referenced above, and for this reason alone it is a useful resource. It then goes on to set out its plans for reaching a consensus on the key requirements for AI systems, which the European Commission touts as an important first milestone towards establishing guidelines for ethical AI. 

The European Commission intends to do this with a two-pronged approach:

1. firstly, by running a pilot to understand how AI developers and users respond to the guidelines. All stakeholders will be invited during this stage to test the proposals and provide feedback on how to improve the guidelines; and 

2. secondly, running in parallel to the first prong, a project to raise awareness and an understanding of the guidelines and their aims. The European Commission plans on doing this by organising various outreach activities, giving AI HLEG representatives the opportunity to present the initial guidelines to relevant stakeholders.

The document further discusses the European Commission's global ambitions. The EU has been in discussion with governments internationally, such as Canada, Singapore and Japan, to build an international consensus on ethical and trustworthy AI. 


7. Policy and investment recommendations for trustworthy AI - 26.06.2019

These recommendations build on the AI HLEG's guidelines discussed above. While the guidelines address the principles of trustworthiness, these recommendations set out specific steps that governments can take to further the adoption and acceptance of AI by (1) using trustworthy AI to build a positive impact in Europe; and (2) leveraging Europe's enablers for trustworthy AI.

'Europe's enablers' for trustworthy AI include:
a) Legally compliant and ethical data management and sharing initiatives;
b) AI-specific cybersecurity infrastructures;
c) The re-development of education systems to reflect emerging technologies;
d) Reskilling and upskilling the current workforce;
e) Adapting regulations and laws to ensure adequate protection from adverse impacts; and
f) Governance mechanisms for single market AI trustworthiness.

While the recommendations are addressed to the EU and to Member State governments, the content is useful to anyone interested in the direction of travel for AI investment and regulation. For example, the document discourages the use of surveillance (including customer surveillance in a commercial context) and recommends investment in AI solutions that address sustainability challenges. The AI HLEG also recommends introducing a duty of care on developers of consumer-oriented AI systems to ensure that these can be used by all intended users, and to ensure that parts of society are not left out by the use of AI. The report also recommends increasing investment in private sector AI development for companies of all sizes and in all sectors.


8. Liability For Artificial Intelligence And Other Emerging Digital Technologies - 27.11.2019

This report, published in November 2019, was written by the New Technologies Formation of the Expert Group on Liability and New Technologies, which advises the European Commission. The authors carried out an assessment of the existing liability regimes in light of emerging digital technologies, such as AI, IoT and distributed ledger technologies.

They found that the liability regimes in force in Member States do currently ensure at least a basic level of protection for victims whose damage is caused by the operation of such new technologies. There are issues, however, with the current mechanisms for compensation for claimants and the allocation of liability, which may result in inefficient or unfair outcomes for victims.  This is due to the specific characteristics of these technologies and their applications, including inherent complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats.

To rectify these shortcomings, the authors recommended making a number of adjustments to national and EU level liability regimes, including:

a) Strict Liability: There should be strict liability for operators of permitted technologies that carry an increased risk of harm to others.

b) Duties on users: For operators of technologies that do not pose an increased risk of harm to others there should be a requirement to abide by certain duties – including having to choose the right system for the right task and skills, and to monitor and maintain the chosen system. There would be liability for operators who breach those duties if found to be at fault. 

c) Accountability: Persons using such technologies that have a degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.

d) Manufacturer's liability: There should be liability for manufacturers of products or digital content incorporating emerging digital technology for damage caused by defects in their products, even if the defect was caused by updates made to the product after it had been placed on the market (as long as the manufacturer was still in control of those updates). A development risk defence should not apply for producers. 

e) Insurance: Requiring compulsory liability insurance to give victims better access to compensation and to protect potential tortfeasors against the risk of liability in situations exposing third parties to an increased risk of harm. 

f) Appropriate standards of proof:  Ensuring that victims are entitled to so-called "facilitation of proof" in circumstances where an emerging technology has caused harm but where it is difficult to prove liability because of, for example, the particularly complex nature of the technology.

g) Mandatory logging features: There should be a duty on manufacturers to equip technology with a means of recording data about its operation where such data is essential to establishing whether a risk in a technology has materialised. Furthermore, the absence of such data, or a failure to provide the victim with reasonable access to it, should not prevent the victim from establishing fault which, but for the missing data, could have been proved.

h) Data loss as damage: The destruction of the victim’s data should be regarded as damage, for which compensation is available (subject to certain conditions being satisfied).

i) No autonomous legal personality: It should not be necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.


9. Commission report on safety and liability implications of AI, the internet of things and robotics - 19.02.2020

This report was published by the European Commission on 19 February 2020. It addresses the need to ensure that AI, IoT and robotics all have clear and predictable legal frameworks within which to be developed. These emerging technologies raise new challenges in terms of product safety and liability, including connectivity, autonomy, data dependency, opacity, complexity of products and systems, software updates, and more complex safety management and value chains.

The report recognises that the nature of AI could make it difficult under the existing liability framework to offer compensation to victims. In particular, under the current rules the allocation of cost when damage occurs may be unfair or inefficient. The report considers various adjustments to the Product Liability Directive and national liability regimes to rectify this, and to address potential uncertainties in the existing framework. 

Ideas for such adjustments include:

a) Scope of liability: Adjusting the scope of the definition of a "product" under the Product Liability Directive to give greater clarity over and to better reflect the complexity of emerging technologies;
b) Burden of proof: Mitigating the complexity of AI solutions by alleviating or reversing the burden of proof required by national liability rules for damage caused by the operations of AI applications in favour of victims;
c) Liability for modifications: Adjusting the concept of 'putting into circulation' under the Product Liability Directive, so that account is taken of products that can be changed or altered after they have been released for sale, helping to clarify who is liable for alterations made to a product;
d) Insurance: Coupling strict liability with an obligation to have available insurance, akin to the system put in place by the Motor Insurance Directive in respect of motor vehicles; and
e) Causation: Adapting the burden of proof in respect of causation and fault so as to avoid a situation whereby a potentially liable party has not logged relevant data for assessing fault or is not willing to share such data with the victim.

The end goal of this exercise is to help create trust in these emerging digital technologies so that users benefit from a reliable liability framework. This would also have the effect of improving confidence for investors. 


10. White Paper On Artificial Intelligence - A European Approach To Excellence And Trust - 19.02.2020

When Ursula von der Leyen took up the European Commission presidency in December 2019, she promised a legislative proposal on AI within her first 100 days. While this deadline was not met, the European Commission has published a white paper to explore the various policy options. A legislative proposal is still due to follow and is currently expected by the end of 2020.

In part, this document builds on and repeats the recommendations made by the AI HLEG (as discussed in item 7 above), but it then goes on to set out possible legal changes.

Firstly, the European Commission proposes creating a legal definition of AI, which would build on the work already done by the European Commission and the AI HLEG. It then goes on to address the types of legal changes it may recommend, including:

a) making updates to existing consumer protection laws, to ensure that they stay relevant and continue to apply to AI consumer products and services; and
b) proposing new laws to regulate "high-risk" AI, with anything not classed as high-risk remaining subject to the laws which already exist.

The paper lacks detail about what "high-risk" AI could mean, other than to say that the European Commission will assess this both by sector (e.g. healthcare) and by purpose. AI systems that can affect the rights of an individual or company in a legal or similarly significant way, or that pose a risk of injury, death or damage, are the types of application it would consider high-risk (a hypothetical sketch of how such a two-limb assessment might work follows the list below). Three concrete examples are given:

a) use of AI applications for recruitment processes;
b) for biometric identification; and 
c) for surveillance.
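As a purely hypothetical sketch, and assuming for illustration that both limbs of the assessment must be met, the sector-plus-purpose test could be expressed as follows. The sector list is invented apart from healthcare, and the purpose list simply reuses the three examples above; the white paper does not enumerate either:

```python
# Hypothetical sketch of a two-limb "high-risk" assessment by sector and
# purpose, assuming for illustration that both limbs must be met.
# The example sets are partly invented and are not from the white paper.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}  # only "healthcare" is from the paper
HIGH_RISK_PURPOSES = {"recruitment", "biometric_identification", "surveillance"}

def is_high_risk(sector: str, purpose: str) -> bool:
    """Return True only where both the sector and the intended purpose are
    on the high-risk lists; everything else would stay under existing laws."""
    return sector in HIGH_RISK_SECTORS and purpose in HIGH_RISK_PURPOSES

print(is_high_risk("healthcare", "recruitment"))          # True: new rules would apply
print(is_high_risk("retail", "product_recommendation"))   # False: existing laws apply
```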

Conclusion

While the EU currently lags behind the US and China in AI technologies, the EU is intent on being at the forefront of AI, both in terms of scientific research and commercial development and in terms of regulation which protects users.

The documents summarised in this article show how the EU's approach to AI – and to the regulation of AI – is gradually evolving, and how, in broad terms, it has settled on a three-pronged approach of investment, regulation and public education.

Anyone involved in the development of AI technologies or investing in AI should keep a close eye on further developments, so as to have a head start in getting to grips with a new legal and economic landscape for AI as it is gradually put into place by the EU and its Member States.
