AI regulatory plan targets 'high risk' applications

One of the building blocks of the European Commission's Digital Strategy is the White Paper on Artificial Intelligence (AI), also presented on 19 February. While Commission President Ursula von der Leyen's initial pledge to present AI legislation within her first 100 days in office proved over-ambitious, the White Paper sets out various policy options for regulating AI in Europe. The aim is to present a legislative proposal in the final quarter of 2020.

Key initiatives

This non-binding plan proposes 10 key actions to be undertaken by the Commission in the areas most likely to be affected by the deployment of AI. Among these actions are the following:

  • Boosting research and investment through the creation of excellence and testing centres;
  • Developing necessary AI skills as part of a Digital Education Action Plan;
  • Sustaining small and medium-sized enterprises (SMEs) via a forthcoming pilot investment fund of €100 million focused on AI and blockchain; and
  • Setting up a new public-private partnership in AI, data and robotics under the framework of the Horizon Europe programme.

Mandatory requirements 

The Commission regards it as necessary to adopt an EU-wide regulatory framework to ensure legal certainty for both citizens and businesses, while avoiding fragmentation of the internal market. Given the specific characteristics of AI technologies, the new rules will focus on two areas where the use of AI is expected to entail higher risks: fundamental rights (including privacy and data protection), and safety and liability.

According to the White Paper, the new regulatory framework will be built around a series of mandatory legal requirements relating to training data, data and record-keeping, provision of information, robustness and accuracy, and human oversight.

This new regulatory framework is intended to have a limited scope of application: its mandatory requirements will apply only to so-called "high risk" AI. This category includes AI technology employed in sectors where "significant risks can be expected to occur" (e.g. healthcare, transport and energy) and applications "used in such a manner that significant risks are likely to arise".

Adjustments to the existing EU and national regulatory framework have also been foreseen, including potential amendments to the Product Liability Directive and a "targeted harmonisation of national liability rules".

Following the presentation of the new AI strategy, the Commission announced a public consultation on the White Paper, which will be open for feedback until 31 May 2020.

Sign up for our monthly Connected newsletter to stay up-to-date with the latest regulatory and public affairs issues.

For further information contact: Chiara Horgan

 
