Countdown to compliance as EU AI Act set to enter into force

Written By

Paula Alexe

Regulatory and Public Affairs Advisor
Belgium

As a Regulatory and Public Affairs Advisor, I help clients navigate the dynamic EU environment.

Francine Cunningham

Regulatory and Public Affairs Director
Belgium Ireland

As Regulatory & Public Affairs Director in Brussels, I assist companies facing an unprecedented wave of new EU regulation that will have an impact on every business operating in the digital and data-related economy. I help companies navigate complex EU decision-making processes and understand the practical application of the law to their sectors.

Paolo Sasdelli

Regulatory and Public Affairs Advisor
Belgium

As a Regulatory and Public Affairs Advisor, I assist clients in understanding the EU decision-making processes and the impact of EU laws on their sectors.

One of the most significant pieces of legislation to be adopted by the outgoing EU mandate, the Artificial Intelligence Act (AI Act) was finally published in the Official Journal on 12 July 2024. It is due to come into force on 1 August 2024 and will become applicable in a phased manner between six and 36 months after that date, with most provisions applying after 24 months.

As the world’s most comprehensive legislative framework for AI developers, deployers and importers, the new Regulation seeks to guarantee that AI systems placed on the European Union internal market are secure, uphold existing law on fundamental rights and adhere to EU values.

Key Obligations

Taking the form of a Regulation, the AI Act is directly applicable in all 27 EU Member States. The Act takes what the EU institutions have described as a “risk-based approach”:

  • Prohibited AI practices: includes AI practices violating fundamental rights such as social scoring, exploiting people's vulnerabilities, using subliminal techniques, real-time biometric identification in public spaces (with limited exceptions), certain forms of individual predictive policing, emotion recognition in workplaces and schools, in addition to untargeted scraping of internet or CCTV footage for facial images to build databases.
  • High Risk AI systems: include AI systems used in biometrics, critical infrastructure, education, employment, self-employment, essential private/public services, law enforcement, migration, asylum, border control, justice administration and democratic processes. AI systems which are safety components of devices or are devices covered by EU product safety legislation are also considered high-risk. Requirements for these systems include pre-market conformity assessment, risk management, data governance, technical documentation provision, record keeping, transparency and human oversight. High-risk AI systems deployed by public authorities or related entities must be registered in a public EU database.
  • Transparency obligations for certain AI systems: providers of AI systems intended to interact directly with natural persons or which generate synthetic audio, image, video or text content will be subject to transparency obligations. Deployers of emotion recognition or biometric categorisation systems and deployers of AI systems that generate or manipulate image, audio or video content constituting a deep fake are also subject to their own transparency obligations.
  • General Purpose AI (GPAI) systems and models: risk categorisation is based on model capability rather than application. Two risk categories exist: GPAI models entailing systemic risk and all other GPAI models. Providers of systemic risk GPAI models have more compliance requirements. 
    • GPAI Models: providers must maintain technical documentation and share information with potential users about the capabilities and limitations of the model. They must also draw up and make publicly available a ‘sufficiently detailed summary’ about the content used for training of the model. A code of practice will be drawn up by the AI Office. 
    • GPAI Models with Systemic Risks: regarded as having systemic risk if they have high impact capabilities or are designated as such by the European Commission. A model is presumed to have high impact capabilities if the cumulative compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25. Providers of these models have additional obligations such as model evaluations, systemic risk mitigation, incident reporting and ensuring cybersecurity protection. A code of practice is also envisaged for these models.
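As a minimal illustration (and not legal advice), the compute-based presumption described above can be sketched as a simple check. The function and variable names are our own and do not appear in the Regulation:

```python
# Illustrative sketch of the AI Act's systemic-risk presumption for GPAI
# models: a model is presumed to have high-impact capabilities when the
# cumulative compute used for its training exceeds 10^25 FLOPs, and the
# European Commission may also designate a model as systemic-risk directly.
# Names below are purely illustrative.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(training_flops: float,
                           designated_by_commission: bool = False) -> bool:
    """Return True if a GPAI model would be presumed to carry systemic risk."""
    return designated_by_commission or training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```

For example, a model trained with 2e25 FLOPs would fall within the presumption, while a 1e24 FLOPs model would not, unless designated by the Commission.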

Timeline and enforcement structure

Provisions regarding prohibitions will apply six months after the Regulation’s entry into force on 1 August 2024, while requirements for GPAI will apply after 12 months. Most of the other provisions in the AI Act will apply 24 months after it enters into force, while some specific requirements for high-risk AI systems will apply after 36 months.
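The phased schedule above can be sketched as a short date calculation. The month offsets are those stated in this article; the Regulation itself fixes the precise application dates, so the output should be read as approximate milestones only:

```python
# Sketch of the AI Act's phased application schedule, counted in whole
# months from its entry into force on 1 August 2024. Offsets are taken
# from the article; the Regulation specifies the exact application dates.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES_MONTHS = {
    "prohibited practices": 6,
    "GPAI requirements": 12,
    "most other provisions": 24,
    "certain high-risk AI system requirements": 36,
}


def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day-of-month preserved; valid here)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)


for label, months in MILESTONES_MONTHS.items():
    print(f"{label}: applies from ~{add_months(ENTRY_INTO_FORCE, months)}")
```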

With regard to enforcement, the European Commission established an AI Office on 16 June 2024, located within the Commission’s Directorate General for Communications Networks, Content and Technology (DG CNECT) under the leadership of Lucilla Sioli. The AI Office has exclusive authority over GPAI models and is responsible for developing the EU’s expertise and capabilities in the field of artificial intelligence.

The European AI Office is organised into five units and two advisory roles. These are:

  • Excellence in AI and Robotics Unit
  • Regulation and Compliance Unit
  • AI Safety Unit
  • AI Innovation and Policy Coordination Unit
  • AI for Societal Good Unit
  • Lead Scientific Advisor
  • Advisor for International Affairs

Overall, the AI Office is expected to employ over 140 staffers, including economists, technology specialists, administrative assistants, lawyers and policy specialists.

In addition to the AI Office, Member States will be required to appoint national competent authorities responsible for supervising the application and implementation of the rules on high-risk AI systems and prohibited practices. An AI Board, comprising representatives of the Member States, will be established with the aim of ensuring coherent implementation of the Regulation. An Advisory Forum of stakeholders and a Scientific Panel of independent experts will also be established.

Potential penalties

Penalties for infringements of the new Regulation can reach up to EUR 35 million or 7% of annual global turnover for breaches of the prohibited practices, and up to EUR 15 million or 3% for breaches of other obligations. Supplying incorrect information can result in penalties of up to EUR 7.5 million or 1.5% of annual global turnover. Additionally, providers can be forced to withdraw non-compliant AI systems from the market.
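As a rough illustration of how these tiers scale with company size, the caps can be sketched as follows. For undertakings, the applicable ceiling is the higher of the fixed amount and the turnover percentage; the tier labels below are our own shorthand, not the Regulation’s:

```python
# Illustrative sketch of the AI Act's maximum fine tiers. For undertakings,
# the cap is the HIGHER of the fixed amount and the percentage of total
# worldwide annual turnover. Tier labels are informal shorthand.

PENALTY_TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "other obligations": (15_000_000, 0.03),
    "incorrect information": (7_500_000, 0.015),
}


def max_fine(tier: str, annual_global_turnover_eur: float) -> float:
    """Return the maximum possible fine in EUR for the given tier."""
    fixed_cap, turnover_pct = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_pct * annual_global_turnover_eur)
```

For a company with EUR 1 billion in annual global turnover, the prohibited-practices ceiling would be max(EUR 35 million, 7% of EUR 1 billion) = EUR 70 million.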

Next Steps

The European Commission is planning to come forward with around 20 follow-up documents by August 2026, including secondary legislation (Delegated Acts and Implementing Acts), guidelines and templates, in addition to codes of conduct, to support implementation of the AI Act. This is in addition to work on codes of practice to demonstrate compliance for general-purpose AI models until harmonised standards are established. Subsequently, the Commission may grant these codes EU-wide validity through an Implementing Act. Looking ahead, the new European Parliament is also expected to focus on the relationship between AI and copyright and on the use of AI in the workplace.

EU AI Act Guide – now ready to download! 

To guide you through the EU AI Act, our multi-disciplinary global team of AI experts has launched our EU AI Act Guide, which summarises key aspects of the new Regulation and highlights the most important actions organisations should take in seeking to comply with it. Serving a similar purpose to our GDPR Guide, our EU AI Act Guide is divided into thematic sections, with a speed-read summary and a list of suggested priority action points.

To access the guide, click here.

For further information, please contact Francine Cunningham, Paolo Sasdelli and Paula Alexe.
