The AI Act establishes a comprehensive legal framework for AI regulation in the EU, with the rules for General-Purpose AI (“GPAI”) models forming a central pillar. As these GPAI provisions become applicable on 2 August 2025, the European Commission has published the “Guidelines on the scope of the obligations for general-purpose AI models” (“Guidelines”) to provide much-needed clarity for the industry. There was no formal draft prior to the final publication; however, in its consultation call of April 2025, the Commission formulated a so-called “Preliminary Approach”, which gave an indication of the Guidelines’ likely content.
While the final Guidelines are technically non-binding, they are of immense practical importance. They signal how competent authorities intend to interpret and enforce the law, creating a de facto standard for compliance.
It is important to note that the Guidelines, which focus on classification (e.g., what constitutes a GPAI model or a provider), are designed to complement the recently published GPAI Code of Practice. The Code, in contrast, primarily details how to fulfill the specific obligations addressed to providers of GPAI models.
Article 3(63) of the AI Act defines a General-Purpose AI (GPAI) model as an AI model displaying “significant generality” and being “capable of competently performing a wide range of distinct tasks”. To provide further practical clarity, Section 2.1 of the Guidelines now introduces two indicative criteria which together act as a rebuttable presumption: the model’s training compute exceeds 10²³ FLOPs, and the model can generate language (whether in the form of text or audio), text-to-image or text-to-video.
However, meeting these two conditions is not enough. The fundamental requirement of the AI Act’s definition – the ability to competently perform a “wide range of distinct tasks” – remains paramount. The indicative criteria therefore work both ways: a model that meets them (e.g., the compute threshold) can still be shown not to be a GPAI model if the range of tasks it can competently perform is too narrow. Conversely, a model that does not meet them (e.g., because it falls below the threshold) could exceptionally be classified as a GPAI model if it demonstrates significant generality.
The practical examples provided in the Guidelines illustrate this mechanism in an instructive manner. For example, models designed solely for transcribing speech to text or for reconstructing missing parts of an image will, because of their narrow purpose, not count as GPAI models even if they exceed the compute threshold. In the same vein, a model that generates a song but requires the user to provide the lyrics in the prompt is considered to have a narrow range of tasks and would therefore also not qualify as a GPAI model.
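To make the mechanism concrete, the following is a minimal sketch of the presumption logic in Python. The function and parameter names are hypothetical illustrations rather than terms from the Guidelines; only the 10²³ FLOPs threshold, the output-modality criterion and the rebuttable-presumption structure are taken from the text above.

```python
# Illustrative sketch only; names are hypothetical, thresholds as stated above.

GPAI_COMPUTE_THRESHOLD = 1e23  # FLOPs: indicative criterion from Section 2.1


def classify_as_gpai(training_flops: float,
                     generative_modalities: bool,
                     wide_task_range: bool | None = None) -> bool:
    """Apply the rebuttable presumption for GPAI classification.

    wide_task_range is an (assumed) case-by-case assessment of whether the
    model competently performs a wide range of distinct tasks; evidence on
    this point rebuts the presumption in either direction.
    """
    presumption = training_flops > GPAI_COMPUTE_THRESHOLD and generative_modalities
    if wide_task_range is None:
        return presumption   # no rebuttal evidence: the presumption stands
    return wide_task_range   # evidence of (a lack of) generality prevails


# A speech-to-text-only model above the compute threshold is rebutted out of scope:
assert classify_as_gpai(5e23, True, wide_task_range=False) is False
```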
Changes compared to the Preliminary Approach: The training compute threshold was increased tenfold, from 10²² to 10²³ FLOPs, and the criterion regarding output modalities was introduced.
The AI Act itself establishes that a GPAI model poses a systemic risk if it meets a rather high training compute threshold (10²⁵ FLOPs) or is specifically designated by the Commission. The Guidelines provide further clarity by highlighting the following five aspects:
Changes compared to the Preliminary Approach: The above statements are new compared to the previous version; there, GPAI models with systemic risk were only addressed in the specific case of downstream modifications.
For the AI Act to apply to a provider of a GPAI model, the model must be placed on the market within the meaning of Article 3(9) AI Act, irrespective of where the provider is established.
The term “placing on the market” means the first making available of an AI system or a GPAI model on the EU market. The term “making available on the market” is explained in Article 3(10) and means, with respect to the EU market, “…the supply of an AI system or a GPAI model for distribution or use …in the course of commercial activity, whether in return for payment or free of charge;”.
The Guidelines provide various examples of what amounts to “placing on the market” of a GPAI model, including: making the model available for the first time on the Union market via an API, an app store, a software library or package, a cloud computing service, or a web interface; distributing it as a physical copy or integrated into a customer’s infrastructure; uploading it to a public catalogue, hub, or repository for direct download; or using it in internal processes that are essential for providing a product or service to third parties or that affect the rights of natural persons in the EU. These examples should be interpreted in line with the meaning of “placing on the market” in the EU’s Blue Guide.
Recital 97, sentence 8, contains a rather hidden but significant legal fiction: If a company develops a GPAI model, does not place it on the market, but then integrates it into an AI system and places that system on the market, the underlying model is deemed to have been placed on the market as well. The Guidelines reference this provision by direct quotation but unfortunately provide no further clarification regarding the application or limits of this fiction.
Changes compared to the Preliminary Approach: There isn’t a material change.
The Commission introduced a new “lifecycle” concept in relation to modifications made by the original provider of a GPAI model:
“The lifecycle of a model starts with the large pre-training run and any subsequent development of the model downstream of this large pre-training run performed by the original provider or on behalf of the provider, whether before or after the model has been placed on the market, forms part of the same model’s lifecycle rather than giving rise to new models.”
This means a model is considered the same model throughout its lifecycle, even if it is modified after the initial large pre-training run by the same provider or on their behalf. This is especially important for the grandfathering provisions: a model placed on the market before 2 August 2025 remains covered even if it is modified after that date.
This is in contrast to downstream providers fine-tuning a model that has been placed on the market by another provider. When a downstream modifier fine-tunes a GPAI model, they become a provider if the modification leads to a significant change in the model’s generality, capabilities, or systemic risk. This is presumed to be the case when the training compute used for the modification is greater than a third of the training compute of the original model. Where the original training compute is not known to the downstream provider, they can instead base the calculation on the thresholds for qualifying as a GPAI model (10²³ FLOPs) or as a GPAI model with systemic risk (10²⁵ FLOPs).
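As a purely illustrative aid, the following Python sketch expresses this presumption; the function and parameter names are hypothetical, and only the one-third rule and the two fallback thresholds come from the Guidelines as described above.

```python
# Illustrative sketch only; names are hypothetical, thresholds as stated above.

GPAI_THRESHOLD = 1e23            # FLOPs: GPAI qualification threshold
SYSTEMIC_RISK_THRESHOLD = 1e25   # FLOPs: systemic-risk threshold


def downstream_modifier_becomes_provider(modification_flops: float,
                                         original_training_flops: float | None,
                                         systemic_risk_model: bool = False) -> bool:
    """Presumption: the modifier becomes a provider if the compute used for
    the modification exceeds one third of the reference compute."""
    if original_training_flops is not None:
        reference = original_training_flops
    else:
        # Original compute unknown: fall back to the qualification thresholds.
        reference = SYSTEMIC_RISK_THRESHOLD if systemic_risk_model else GPAI_THRESHOLD
    return modification_flops > reference / 3


# Fine-tuning a model originally trained with 1e24 FLOPs using 5e23 FLOPs
# exceeds one third of the original compute, so the presumption applies:
assert downstream_modifier_becomes_provider(5e23, 1e24) is True
```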
When a downstream modifier becomes a provider, their obligations would be limited to that modification or fine-tuning (meaning they can supplement the documentation provided by the initial provider).
Changes compared to the Preliminary Approach: Modifications made by the original provider no longer give rise to distinct models when the fine-tuning compute exceeds a third of the original model’s training compute; instead, the modified versions are considered part of the model’s lifecycle. For modifications by third-party providers, the initially absolute compute threshold for creating a new GPAI model (one third of the compute threshold required for qualifying as a GPAI model) was changed to a relative one (see details above).
The AI Act offers a significant exemption for open-source models, relieving their providers from most transparency and documentation obligations to foster innovation and collaboration. However, this exemption is not absolute and is a key area of clarification in the Guidelines.
The exemption is immediately voided if the provider monetizes the model. The Guidelines, in Section 4, provide a much stricter and clearer definition of what constitutes monetization. Beyond simply charging a price, it now explicitly includes providing paid technical support that is essential for the model's use, offering the model via a platform that is itself monetized (e.g., through advertising), or collecting personal data for reasons other than improving the model's security.
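To summarise the monetization prong in compact form, here is a minimal sketch; the flag names are hypothetical paraphrases of the criteria listed above, and the Guidelines’ further open-source conditions (such as licensing and public availability of the model) are outside its scope.

```python
# Illustrative sketch; flag names are hypothetical paraphrases of the
# monetization criteria described above, not terms from the Guidelines.

def monetized(charges_for_model: bool,
              paid_support_essential_for_use: bool,
              offered_via_monetized_platform: bool,
              collects_personal_data_beyond_security: bool) -> bool:
    """Any single form of monetization voids the open-source exemption."""
    return any([
        charges_for_model,
        paid_support_essential_for_use,
        offered_via_monetized_platform,          # e.g., an ad-funded platform
        collects_personal_data_beyond_security,  # data use beyond model security
    ])


# Offering the model on an ad-funded platform alone is enough to lose the exemption:
assert monetized(False, False, True, False) is True
```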
From the perspective of the regulator, this clarification effectively closes a potential loophole, ensuring that the exemption benefits genuinely non-commercial, community-driven open-source projects, rather than commercial actors using an open-source license as a shield against regulatory duties.
Changes compared to the Preliminary Approach: While the preliminary draft had a more ambiguous definition of monetization, the final version provides a clear and expanded list of what is considered commercial activity.
Deadline: The obligations for providers of general-purpose AI models laid down in Chapter V of the AI Act apply from 2 August 2025.
Grandfathering: Under Article 111(3) AI Act, ‘[p]roviders of general-purpose AI models that have been placed on the market before 2 August 2025 shall take the necessary steps in order to comply with the obligations laid down in this Regulation by 2 August 2027.’ The Commission’s Guidelines add a welcome clarification for the industry that the grandfathering provision covers models throughout their entire lifecycle.
Enforcement: The Commission’s enforcement powers enter into force on 2 August 2026, a year after the GPAI obligations start to apply. However, operators are still expected to comply with their obligations during this one-year gap. In particular, a provider whose GPAI model is expected to reach the systemic risk threshold must notify the Commission without undue delay and in any event within two weeks. When determining enforcement actions, the Commission will consider all relevant factors outlined in Article 101 of the AI Act, which lays down the criteria and procedure for imposing fines on providers of GPAI models.