First-ever legal framework for AI proposes obligations for developers, users and importers

Written By

Francine Cunningham

Regulatory and Public Affairs Director
Belgium and Ireland

In the latest evidence of the European Commission’s ambitious regulatory agenda for the digital ecosystem, an Artificial Intelligence (“AI”) package was published on 21 April 2021 with the aim of setting a “global gold standard” for regulation in this cutting-edge sector. The AI package includes a proposal for a Regulation laying down harmonised rules on AI (the “Artificial Intelligence Act”), a Coordinated Plan with Member States and a proposal for a Machinery Regulation.



Under the proposed Regulation, which will have extra-territorial reach, significant new compliance obligations will apply to developers and users of AI, as well as to importers of AI systems. The highest-risk systems, deemed to conflict with EU fundamental values, will be banned outright. The proposal also puts forward deterrent fines of up to €30 million, or 6 per cent of worldwide annual turnover in the preceding financial year, whichever is higher, for companies flouting the prohibitions or the data governance rules.

However, the proposed Regulation has already been criticised by some stakeholders for its vague language and for lacking a redress mechanism for citizens negatively affected by AI systems. The possibility for industry to meet EU standards by carrying out self-assessments in some sensitive areas, such as the use of AI in the employment sphere and for migration control, has also provoked civil society demands for independent oversight.

A three-year process

The Artificial Intelligence Act is the outcome of a three-year process that started in 2018 with the EC Communication “Artificial Intelligence for Europe” and the establishment of a High-Level Expert Group on Artificial Intelligence. Based on the Ethics Guidelines and Recommendations produced by this group, the Commission adopted a White Paper on AI in February 2020, followed by a public consultation which attracted more than 1,000 contributions. Of these initiatives, only the newly launched proposal for an AI Act will ultimately become binding once adopted.

Global hub

The main political aim of the proposal is to “turn Europe into the global hub for trustworthy AI”. According to the Commission, its objective is to strike a balance between building citizens’ trust in AI systems by mitigating the associated risks, and boosting investment and innovation in the further development of AI systems built on high-quality data sets. Following the model of the General Data Protection Regulation (GDPR), the Commission has aimed to anchor the new rulebook in EU fundamental values, with which the design, development and use of AI systems will have to comply.

Risk-based approach

The Commission has adopted a risk-based approach, built around the principle “the higher the risk, the stricter the rule”. Accordingly, the proposal divides AI uses into four categories:

  • Unacceptable risks: AI systems falling within this category are prohibited, as they are deemed to be contrary to EU fundamental rights and values. Banned uses include exploitative or manipulative practices, such as “practices that have a significant potential to manipulate persons through subliminal techniques”, and AI-based social scoring carried out by public authorities.

  • High risks: Such high-risk AI systems will be allowed only if they comply with certain mandatory requirements comprising: data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security, as well as ex-ante conformity assessments.

    The identification of high-risk AI systems will be closely linked to their intended purpose and covers systems used in areas such as critical infrastructure, education and vocational training, employment and recruitment, migration and border control, the administration of justice and law enforcement.

    Within this high-risk group, the Commission has included “real-time” remote biometric identification systems used in publicly accessible spaces (e.g. facial recognition), which will be banned for law enforcement purposes unless considered strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect and locate the suspect of a serious criminal offence.

  • Low risks: AI systems to which only specific transparency obligations will apply, such as making citizens aware that they are interacting with a machine (and not a human), e.g. chatbots.

  • Minimal risks: This last group comprises AI systems that are considered not to pose a risk or threat to citizens’ fundamental rights and to which no specific obligations will apply.

A new enforcer

In relation to governance, the Commission proposes that national competent market surveillance authorities will conduct checks and assessments, although some AI providers will be allowed to carry out technical self-assessments. The proposal also foresees a European Artificial Intelligence Board, made up of Commission and Member State representatives, which will be very influential in determining the practical effects of the new rules. Additionally, the Commission will operate an EU-wide database in which providers will have to register stand-alone high-risk AI systems before placing them on the market.

Points of contention

A prime battleground in the interaction between AI regulation and privacy concerns is likely to be the use of AI systems for facial recognition purposes. The European Data Protection Supervisor has already called for a moratorium on this controversial technology, and some Members of the European Parliament are ready to advocate a complete ban.

Stakeholders have also pointed to a lack of clarity and legal certainty around the definition of high-risk AI systems and the definition of the “subliminal techniques” referred to in relation to prohibited AI systems.

While the European Parliament had previously called for a civil liability regime for AI, the liability aspects related to the design, development and use of AI systems appear to fall outside the scope of this proposal. The Commission is expected to address liability for new technologies in a forthcoming wider revision of the EU liability framework.

Due to vague wording in Recital 9, an additional open question is whether the new rules will apply to AI systems embedded in online platforms. Meanwhile, AI developers and users will want to ensure that the risk assessment procedure is not too burdensome.

Next steps

The AI Act will follow the ordinary EU legislative procedure and will become binding on Member States once adopted by both the European Parliament and the Council. The Commission wants the Regulation to become applicable a year and a half after its adoption, with the European Artificial Intelligence Board up and running before then.

For further information contact Francine Cunningham and Chiara Horgan

Sign up for our Connected newsletter for a monthly round-up from our Regulatory & Public Affairs team.

