The EU’s Artificial Intelligence Act (EU AI Act), which entered into force in August 2024, is being implemented in stages. As of August 2025, one of the most significant chapters of the EU AI Act has just become applicable: the rules governing General-Purpose AI (GPAI) models. For companies operating in this space, the time for preparation is over.
The chapter on GPAI was a relatively late, yet crucial addition to the EU AI Act's legislative text. The original architecture was conceived around a risk-based pyramid, which categorises AI systems according to their specific, intended purpose. This logic, however, struggled to account for the meteoric rise of foundation models, which are not designed for a single task but can be adapted for a vast range of applications. To address this regulatory gap, lawmakers introduced a parallel set of obligations targeting the very core of the ongoing AI revolution: the large-scale models that power generative AI.
Like many EU regulations, the AI Act sets out its core principles and obligations at a relatively high level of abstraction. While the accompanying recitals offer some interpretative guidance, they often lack the granular detail necessary for practical implementation. This is particularly true for the novel obligations placed on providers of GPAI models.
To bridge this gap between the legal text and operational reality, the legislator has established a framework of supporting documents — a form of secondary legislation — designed to provide concrete, actionable guidance. These instruments help companies understand precisely what they must do to achieve compliance. For the GPAI ecosystem, three documents are now central: the GPAI Guidelines, the GPAI Code of Practice (GPAI CoP) and the official Template for the Training Data Summary (TDS Template). For the GPAI Guidelines, we refer to our earlier article; this article will focus on the latter two documents.
The GPAI CoP is not automatically binding; its legal effect is unlocked when a provider formally subscribes to it. By signing and adhering to the GPAI CoP, a provider benefits from a "presumption of conformity". This means authorities will presume the provider complies with the EU AI Act’s corresponding GPAI obligations (primarily Article 53), which offers the significant advantage of a clear safe harbour and a high degree of legal certainty. Conversely, a provider can ignore the GPAI CoP and develop its own individual approach. This path, however, carries considerable risk, as the provider must then be prepared to justify its compliance measures from the ground up and can expect a higher degree of scrutiny from enforcement authorities. The rapid adoption of the GPAI CoP by nearly all major global AI providers upon its release underscores its central importance as the de facto industry standard.
Alongside the GPAI CoP, the Commission has also released the official TDS Template, which operationalises Article 53(1)(d) EU AI Act. Under this provision, providers must publish information about the content used to train their models. The template seeks to balance transparency for third parties with the protection of trade secrets of the providers. It requires disclosure of the general characteristics of the training data, including lists of large publicly available datasets as well as licensed and third-party sources. At the same time, the template avoids over-disclosure by focusing on aggregated information and narrative-style reporting rather than detailed, work-by-work or technical documentation.
Unlike many other rules in the EU AI Act that impose obligations on both providers and deployers (i.e., entities using an AI system in a professional capacity), the duties for GPAI models under Article 53 EU AI Act fall exclusively on the provider. A provider is the entity that develops a GPAI model – or has it developed – and subsequently places it on the market.
Article 53(1) EU AI Act establishes four core obligations for these providers, which form the backbone of the GPAI rulebook. These range from documentation and transparency duties to policies on copyright law and a training data summary (see Sections 2.1 to 2.3 below).
Furthermore, the EU AI Act creates a special category for GPAI models with systemic risk, which are subject to a more stringent set of obligations. Given the high thresholds required to meet this classification, their immediate practical relevance is currently limited to a small number of very large-scale models. Therefore, this article will only briefly touch upon this special regime (see Section 2.4 below).
2.1 A Mandate for Clarity: The CoP Blueprint for Transparency and Documentation (Article 53(1)(a)/(b))
The foundation of the GPAI regime is laid out in Article 53(1), points (a) and (b) EU AI Act, which require providers to create and maintain technical documentation for their models and to provide necessary information to the AI Office, national competent authorities and downstream providers who integrate these models into their own systems. While the EU AI Act outlines the categories of information required, the legal text leaves significant room for interpretation regarding the specific format and level of detail.
The new GPAI CoP fills this gap with its Transparency Chapter by providing a structured, operational framework for compliance. Its central element is the "Model Documentation Form", a standardised template that signatories can use to fulfil their obligations. This form brings crucial clarity by creating a tiered system for information disclosure, specifying whom each data point is intended for (the AI Office, national competent authorities or downstream providers).
The GPAI CoP clarifies that providers must document specific details covering the model’s properties, its training process and data provenance, computational resources, and energy consumption. For example, while general information about the model's architecture and intended uses is shared with downstream providers, more sensitive details like the precise computation used for training are accessible only to authorities. The GPAI CoP formalises the information-sharing process, requiring that necessary documentation be provided to downstream providers within a reasonable timeframe, specified as no later than 14 days barring exceptional circumstances. Finally, the GPAI CoP mandates that this documentation must be kept updated for new models and that previous versions must be retained for 10 years after the model is placed on the market.
2.2 Balancing Innovation and Rights: The CoP's Rules for Copyright Policies (Article 53(1)(c))
Article 53(1)(c) EU AI Act obliges GPAI providers to adopt a policy ensuring compliance with EU copyright law, in particular respecting reservations of rights under the Directive on Copyright in the Digital Single Market and data mining provisions.
While the EU AI Act sets this obligation only in broad terms, the GPAI CoP adds operational detail.
Compared to earlier drafts, the final version takes a stricter line: the exclusion of piracy sites is now mandatory and complaint-handling duties have been tightened, while an earlier duty to audit non-web-crawled third-party datasets has been removed (former Measure I.2.4).
2.3 What's Under the Hood? The Official Template for the Training Data Summary (Article 53(1)(d))
Article 53(1)(d) EU AI Act requires GPAI providers to publish a “sufficiently detailed summary” of the training data used for their models.
While the legislative text outlines this transparency duty only at a high level, the Commission’s new TDS Template adds operational substance. The template obliges providers to disclose general information about training modalities and data size, identify large public datasets, and provide narrative descriptions of licensed and private data sources, scraped content (including the most relevant domains), user data, and synthetic data.
Crucially, the TDS Template is designed to be comprehensive without being overly technical. It avoids work-by-work disclosure and protects confidential information by focusing on aggregated reporting and narrative explanations.
2.4 A Higher Standard: The Safety and Security Framework for GPAI with Systemic Risk (Article 55)
While most GPAI model providers must comply with the transparency and information duties of Article 53, the EU AI Act establishes a special category for GPAI models with systemic risk under Article 55. A model is designated as having systemic risk if it has high-impact capabilities, typically determined by the cumulative amount of computation used for its training exceeding a set threshold (currently 10^25 FLOPs). For these powerful models, Article 55 imposes a far more demanding set of obligations, including performing state-of-the-art model evaluations, assessing and mitigating potential systemic risks, ensuring a high level of cybersecurity protection, and tracking and reporting serious incidents to the AI Office.
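For orientation, the compute threshold can be related to model scale using the widely cited rule of thumb that training compute is roughly 6 × parameters × training tokens. This approximation (and the illustrative model sizes below) is an assumption for the sketch, not something taken from the EU AI Act itself, which specifies only the cumulative FLOPs threshold:

```python
# Back-of-the-envelope check against the EU AI Act's 10^25 FLOPs threshold
# for GPAI models with systemic risk.
# Assumption: training FLOPs ~ 6 * N_params * N_tokens (a common scaling-law
# approximation, not a formula from the Act).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold stated in the EU AI Act


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate cumulative training compute under the ~6ND approximation."""
    return 6 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the systemic-risk threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs; exceeds threshold: {exceeds_threshold(70e9, 15e12)}")
# -> ~6.30e+24 FLOPs; exceeds threshold: False
```

The sketch illustrates why the classification currently captures only a handful of frontier-scale models: even a large 70B-parameter training run on 15 trillion tokens lands below 10^25 FLOPs under this approximation.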
The GPAI CoP in its Safety and Security Chapter gives substantive and procedural shape to these high-level legal requirements. It requires signatories to establish a comprehensive, state-of-the-art "Safety and Security Framework" as their central governance tool. This framework must outline the provider's entire lifecycle approach to risk management. A crucial element is the involvement of independent external evaluators to validate the model's safety and the effectiveness of its mitigations. All of these processes, measures, and results must be documented in a detailed "Safety and Security Model Report" and submitted to the AI Office before the model is placed on the market and updated regularly thereafter.
The entry into application of the GPAI model provisions marks a pivotal moment for the AI industry in Europe. For providers of these powerful technologies, compliance is no longer a future prospect but an immediate operational reality. As this article has shown, the new GPAI CoP and the official TDS Template are the most critical instruments for navigating this new landscape. They transform the high-level principles of the EU AI Act into actionable, standardised processes, with the GPAI CoP offering a vital safe harbour through the presumption of conformity for those who subscribe to it.
Looking ahead, it is clear that these instruments are not the final word but rather the starting point of an ongoing regulatory dialogue. They are intended to be living documents, evolving alongside rapid technological advancements and emerging best practices. For companies, simply adopting the current GPAI CoP is not enough. The key to sustainable compliance will be to foster a culture of proactive engagement with the regulatory ecosystem, monitoring guidance from the AI Office and contributing to the continuous refinement of these standards. By doing so, providers can not only ensure compliance today but also help shape a predictable and innovation-friendly regulatory environment for the future.