The Artificial Intelligence Act (AI Act) was published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024. Most provisions apply from 2 August 2026, while others are phased in between six and 36 months from the date of entry into force.
Summary
The AI package comprises:
A Regulation on a European approach for Artificial Intelligence;
An updated Coordinated Plan with Member States; and
A Proposal for a Regulation on machinery products.
A risk-based approach was proposed by the Commission, built around the principle "the higher the risk, the stricter the rule". Accordingly, the AI Act classifies AI uses into four risk categories:
Unacceptable risks: AI systems falling within this category are prohibited, as they are deemed to be against EU fundamental rights and values. Banned AI systems include exploitative or manipulative practices, such as “practices that have a significant potential to manipulate persons through subliminal techniques”, and AI-based social scoring carried out by public authorities;
High risks: Such high-risk AI systems will be allowed only if they comply with certain mandatory requirements comprising: data governance, documentation and record-keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security, as well as ex-ante conformity assessments. The identification of high-risk AI will be closely linked to its intended purpose and includes systems used in critical infrastructure, education and training, hiring services, migration and border control tools, the administration of justice and law enforcement. Real-time biometric identification systems (e.g., facial recognition) are included in this group and would be banned unless strictly necessary;
Low risks: AI systems to which only specific transparency obligations will apply, such as making citizens aware that a machine (and not a human) is interacting with them, e.g., chatbots; and
Minimal risks: This last group comprises AI systems that are considered not to pose a risk or threat to citizens' fundamental rights, and to which no specific obligations will apply.
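The four risk tiers above lend themselves to a simple taxonomy, which compliance tooling might model along the following lines. This is an illustrative sketch only: the tier names, the `obligations` helper and the example use cases (the "spam filter" entry in particular) are assumptions, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (labels are illustrative, not statutory)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed only subject to mandatory requirements and ex-ante conformity assessment"
    LOW = "subject only to specific transparency obligations"
    MINIMAL = "no specific obligations"

# Non-exhaustive examples drawn from the categories above; the minimal-risk
# entry (spam filter) is an assumed, commonly cited illustration.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI used in hiring services": RiskTier.HIGH,
    "chatbot": RiskTier.LOW,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of the regulatory consequence for a use case."""
    tier = EXAMPLES[use_case]
    return f"{tier.name}: {tier.value}"
```

Classification under the Act itself turns on intended purpose and the Annex III list, not on a lookup table; the sketch only mirrors the tier-to-obligation mapping described above.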
Interplay with EU framework
The European Data Protection Supervisor, the European Data Protection Board and the European Central Bank have published opinions on this proposal in their fields of competence. Questions have been raised regarding the interplay of the proposed AI Regulation and its consistency with the EU legal framework, including the EU Charter of Fundamental Rights, the General Data Protection Regulation (GDPR), the Product Liability Directive and the General Product Safety Directive, among other instruments.
How could it be relevant for you?
The AI Package represents the first-ever set of rules on AI, which will be binding on developers, users and importers of AI systems.
The package is aimed at striking a balance between building citizens’ trust in AI systems to mitigate associated risks and boosting investment and innovation in the further development of AI systems.
Next steps
Most of the provisions in the AI Act (such as high-risk systems rules under article 6(2) and Annex III, or transparency obligations in Chapter IV) will apply from 2 August 2026, which is 24 months from its entry into force.
However, pursuant to article 113, the following provisions of the Act will become applicable in a phased manner, between six and 36 months from the entry into force of the Regulation:
Chapter I (General Provisions) and Chapter II (Prohibitions) start to apply from 2 February 2025.
Chapter III Section 4 (Notified bodies), Chapter V (General-purpose AI models), Chapter VII (Governance), Article 78 (Confidentiality) and Chapter XII (Penalties) start to apply from 2 August 2025, with the exception of article 101 (fines for providers of general-purpose AI models).
Article 6(1) (Classification rules for high-risk AI systems) starts to apply from 2 August 2027.
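The phased dates above all follow the same arithmetic: a whole number of months from the 1 August 2024 entry into force, with application starting the following day. As a quick sanity check, here is a minimal stdlib-only Python sketch; `applicability_date` and `add_months` are hypothetical helper names, not anything defined by the Act.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the AI Act entered into force on 1 August 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, preserving the day of
    the month (safe here, since the 1st exists in every month)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

def applicability_date(months_after_entry: int) -> date:
    """Date from which a phased provision applies: the day after the
    month anniversary of entry into force (e.g. 6 months -> 2 Feb 2025)."""
    shifted = add_months(ENTRY_INTO_FORCE, months_after_entry)
    return shifted.replace(day=shifted.day + 1)

# The milestones cited above: 6, 12, 24 and 36 months after entry into force.
milestones = {m: applicability_date(m) for m in (6, 12, 24, 36)}
```

Running this reproduces the dates in the timeline: 2 February 2025, 2 August 2025, 2 August 2026 and 2 August 2027.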