Analysing the impact of the EU AI Act vote on businesses

In a pivotal move, the European Parliament held a plenary vote on the proposed Artificial Intelligence Act today, as indicated in the updated draft agenda. Following the final vote, the AI Act will enter into force 20 days after its publication in the Official Journal.

The introduction of the AI Act marks a significant step in regulating AI, particularly in its focus on product oversight and risk mitigation. As businesses grapple with the implications, it is crucial to understand the distinctive features and challenges posed by this legislative framework, some of which are detailed below.

From broad principles to specific risks

The final version of the AI Act deviates from the European Parliament's draft by moving the general principles applicable to all AI systems, such as human agency, oversight, and transparency, to the recitals. This means that the majority of AI use cases remain unregulated unless they fall into the high-risk category or another specified classification. Most AI use cases will probably fall under the transparency obligations for providers and users of certain AI systems and GPAI models. Despite existing international initiatives emphasising responsible AI use, the AI Act takes a more targeted approach, focusing primarily on high-risk scenarios.

Biometric systems

The negotiations surrounding the AI Act have brought to light several contentious issues, particularly in the treatment of biometric AI systems. Examining various perspectives, it becomes evident that the text carries nuances that could introduce uncertainty within the industry.

Firstly, the AI Act says that the definition of biometric data should be interpreted "in light of" the biometric data definition provided under the GDPR. Yet the definitions of biometric data under the two laws are materially different: one turns on the ability to uniquely identify the individual, whereas the other does not. This brings entities that would not normally be concerned with the GDPR's biometric data provisions into the scope of the AI Act.

Separately, the AI Act introduces many prohibitions and restrictions on biometric systems, but careful scrutiny reveals various exceptions and caveats to those restrictions and prohibitions, leaving the industry uncertain as to which rules apply. For instance, while biometric systems that categorise individuals based on sensitive data are prohibited, the labelling or filtering of lawfully acquired biometric datasets is excluded from the scope of that prohibition, and the Act does not clarify whether such systems then fall into the high-risk category. Similarly, biometric categorisation systems integrated as ancillary features within other commercial services are not caught by the AI Act. This exemption is particularly relevant in scenarios such as "try-on" filters on online stores or filters on social media platforms.

Stringent technical requirements

Unlike other technology-related regulations such as the GDPR, which is predominantly implemented through internal guidelines and policies, the AI Act, by its nature as product regulation, directly impacts the development of AI products. Companies must contend with more stringent technical requirements, making it important to incorporate regulatory considerations early in the technical implementation process.

Urgent need for technical standards

Many of the AI Act's requirements, for example in data governance, necessitate further guidance for effective implementation. Striving for error-free, complete, and unbiased data is paramount. To bridge the gap between legal concepts and practical implementation, the development of technical standards becomes imperative. Organisations, including bodies like CEN/CENELEC, are actively working towards establishing these standards, providing an opportunity for businesses to contribute and ease the regulatory burden.

Confusion surrounding GPAI models

The introduction of obligations related to general-purpose AI (GPAI) models has stirred confusion among businesses. Distinguishing between GPAI models and GPAI systems, and understanding how both relate to high-risk systems, poses challenges. Many companies struggle with these distinctions, contributing to uncertainty about which compliance obligations apply to them. Additionally, the obligations on GPAI models themselves remain unclear. For example, the obligation to make available a "sufficiently detailed summary" of training data is open to interpretation, at least until harmonised standards are published.

In conclusion, the AI Act, while marking a significant step in regulating AI, is not without its challenges. The industry must navigate the complexities and ambiguities present in the legislation, seeking clarity through ongoing dialogue and engagement. The diversity of perspectives emphasises the need for continuous refinement and adaptation to ensure a balanced and effective regulatory framework for AI.

Contact us if you would like more information.
