Technical efforts underway to push AI Act across the finish line

Written By

Francine Cunningham

Regulatory and Public Affairs Director
Belgium, Ireland

As Regulatory & Public Affairs Director in Brussels, I assist companies facing an unprecedented wave of new EU regulation that will have an impact on every business operating in the digital and data-related economy. I help companies navigate complex EU decision-making processes and understand the practical application of the law to their sectors.

Detailed technical work to finalise the legal text of the incoming EU Artificial Intelligence Act is ongoing, despite the political agreement on the new regulation announced with much fanfare on 9 December after a three-day marathon negotiation. Belgium, which took over the Presidency of the EU Council from Spain on 1 January, has organised a series of technical meetings in January with the aim of producing a consolidated text by the end of the month. At the very latest, a final draft of the Regulation should be ready by early February to allow sufficient time for both the European Parliament and the Council to formally adopt the Regulation ahead of the European Parliament elections in June.

Before the winter holiday, the three biggest EU Member States (France, Germany and Italy) declined to officially endorse the political agreement on the AI Act at a meeting of the Committee of EU deputy ambassadors (COREPER), on the basis that they had not seen the final text. Representatives of these three Member States have also expressed reservations about the political compromise reached. Nevertheless, there remains strong political momentum to finalise the Regulation in the coming weeks.

What’s in the political agreement?

Taking the form of an EU Regulation, the AI Act will be directly applicable in all 27 EU Member States. Some key points from the political agreement reached in December are outlined below.

Definition: The political agreement adopted a revised version of the OECD’s definition of AI: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

General-Purpose AI Systems: the political agreement includes obligations for all general-purpose AI systems (GPAI), which appear to cover what were considered foundation models in previous texts. These obligations include providing technical documentation and information to downstream providers, transparency measures (e.g., watermarking) and compliance with EU copyright law.

Systemic Risk GPAI: notably, the political agreement includes stricter requirements for what is called "Systemic Risk GPAI", which is classified according to the computational power used for training and would likely cover only the largest large language models (LLMs). Providers of these models will have to comply with more extensive obligations, such as assessing and mitigating the risks their systems present, meeting cybersecurity requirements, reporting serious incidents to the European Commission, conducting adversarial testing/red teaming, and meeting criteria regarding energy consumption.
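The compute-based classification described above amounts to a simple threshold check. The sketch below is purely illustrative: the 10^25 floating-point-operations figure is the training-compute threshold widely reported for the political agreement, and the function name and structure are my own assumptions, not anything taken from the legal text.

```python
# Illustrative sketch only: classifying a GPAI model as "systemic risk"
# by training compute. The 1e25 FLOP threshold is the reported figure;
# the final legal text (and any later Commission updates) governs.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # reported threshold (assumption)

def is_systemic_risk(training_flops: float) -> bool:
    """Return True if training compute meets or exceeds the reported threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(is_systemic_risk(2e25))  # True: above the threshold
print(is_systemic_risk(5e23))  # False: well below it
```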

High-Risk Systems: providers of systems deemed high-risk (which could be built on top of or using GPAI systems, or developed separately) will have to comply with an extensive list of product safety obligations, relating to, for example, risk management, data quality and security. The list of high-risk systems includes those used for purposes with significant implications for fundamental rights, health, safety, democracy/the rule of law and the environment, such as education, employment, critical infrastructure, public services, law enforcement, border control and the administration of justice. Users/deployers of these systems will have to comply with obligations such as implementing human oversight and, in certain cases, conducting a Fundamental Rights Impact Assessment.

Prohibitions: the AI Act will include a list of banned applications that are deemed to pose an unacceptable risk, such as manipulative techniques, systems exploiting vulnerabilities, categorisation based on sensitive characteristics, social scoring (i.e., using metrics to assess social behaviour) and predictive policing software. The use of databases based on bulk scraping of facial images will also be banned, together with the use of emotion recognition in the workplace and in educational settings (except for medical and safety reasons, e.g. monitoring the tiredness of a pilot). Real-time remote biometric identification is prohibited except under certain law enforcement exceptions, namely to prevent terrorist attacks, to locate missing victims, or for activities related to a predefined list of serious crimes.

Liability: although the European Union is currently considering a separate proposal to create a harmonised regime for AI-related liability, which has been put on hold until the AI Act is finalised, the political agreement includes the possibility for individuals to lodge complaints. Citizens would also have the right to request explanations for decisions made by high-risk systems that affect their rights.

Copyright: the political agreement contains copyright provisions for providers of GPAI models. Providers must make publicly available a "sufficiently detailed" summary of the content used to train the AI model and must put in place a copyright policy. Providers of GPAI should ensure that they comply with EU copyright law, in particular by observing rights holders' reservations of rights against text and data mining.

Penalties: potential fines for non-compliance are set as a fixed amount or a percentage of the company's annual global turnover, whichever is higher. The most severe violations, concerning prohibited systems and non-compliance with data requirements, carry fines of up to 7% of a company's annual worldwide turnover in the preceding year or EUR 35 million. Violations of obligations for system and model providers carry fines of up to 3% of annual worldwide turnover in the preceding year or EUR 15 million. Meanwhile, failure to provide accurate information risks fines of up to 1.5% of annual worldwide turnover in the preceding year or EUR 7.5 million. Non-compliant systems will (after a grace period) be barred from the EU market.
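The fine structure above reduces to one calculation: the applicable ceiling is the higher of the fixed amount and the turnover percentage. A minimal sketch (the function name and the example turnover figures are hypothetical):

```python
def fine_ceiling(annual_turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Maximum fine: the higher of the fixed amount or pct% of worldwide turnover."""
    return max(fixed_eur, annual_turnover_eur * pct / 100)

# Most severe tier: up to 7% of worldwide turnover or EUR 35 million
print(fine_ceiling(1_000_000_000, 7, 35_000_000))  # percentage governs (EUR 70m)
print(fine_ceiling(100_000_000, 7, 35_000_000))    # fixed amount governs (EUR 35m)
```

For large companies the percentage dominates; for smaller ones the fixed amount sets the ceiling, which is why the Act quotes both figures for each tier.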

Next Steps

The consolidated AI Act text resulting from the technical meetings will be subject to lawyer-linguist review, which will take some weeks. The European Parliament's plenary then needs to formally adopt the final text, most likely in April. The text also needs to be formally adopted by the Council at ministerial level.

Once the final text has been published in the Official Journal of the EU it will enter into force 20 days later, triggering the gradual application of the rules. While the AI Act will apply 24 months after it enters into force, the provisions regarding prohibitions will apply six months after the Regulation’s entry into force. Requirements for GPAI will apply 12 months after the Regulation’s entry into force. Some requirements for high-risk AI systems will apply 36 months after the AI Act’s entry into force.
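The staggered application dates above can be worked out mechanically from the entry-into-force date. A small sketch, assuming a purely hypothetical entry-into-force date (the real date depends on publication in the Official Journal):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Naive month addition; adequate here because we start from the 1st of a month."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, d.day)

entry_into_force = date(2024, 7, 1)  # hypothetical; actual date follows OJ publication

milestones = {
    "Prohibitions apply (+6 months)": add_months(entry_into_force, 6),
    "GPAI requirements apply (+12 months)": add_months(entry_into_force, 12),
    "General application (+24 months)": add_months(entry_into_force, 24),
    "Some high-risk requirements apply (+36 months)": add_months(entry_into_force, 36),
}

for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")
```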

For more information please contact Francine Cunningham.


