On July 10, 2025, after nearly a year of work involving experts and thousands of AI industry participants 1, the European Commission published its General-Purpose AI Code of Practice 2, establishing the first detailed compliance framework under the EU AI Act. The Code creates a voluntary pathway for demonstrating compliance with mandatory EU AI Act obligations that go into effect August 2, 2025. It is intended to help industry comply with safety, transparency, and copyright requirements.
The General-Purpose AI Code of Practice, originally scheduled for release in May, is now under assessment by the EU Member States and the European Commission, with just weeks remaining before the EU AI Act obligations take effect. The Commission is also expected to publish guidance clarifying which AI models qualify as "general-purpose." The Commission, however, recently indicated that it does not intend to postpone enforcement or implementation of the AI Act 3. Any company providing AI models to EU markets should assess its compliance readiness now, as conforming to the Code of Practice creates a rebuttable presumption of compliance with the EU AI Act's upcoming requirements.
What is a "General-Purpose AI Model"?
A General-Purpose AI (GPAI) model is a foundational AI system designed to perform a wide range of tasks rather than a specific function.
The EU AI Act defines a GPAI model as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications." 4 The EU AI Act also notes that, "whereas the generality of a model could, inter alia, also be determined by a number of parameters, models with at least a billion parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks." 5 Therefore, "large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks." 6
Examples of GPAI models include OpenAI's GPT models, Anthropic's Claude, Google's Gemini, and similar large language models that can be adapted for various applications, though the EU AI Office notes that it intends to provide further clarification as part of its ongoing research 7.
Requirements: Transparency, Copyright, Safety and Security
The GPAI Code of Practice is divided into three chapters: Transparency, Copyright, and Safety and Security. All GPAI model providers are expected to conform with the first two chapters, Transparency 8 and Copyright 9, to demonstrate compliance with their obligations under Article 53 of the EU AI Act. Article 53 requirements include documentation of technical specifications and implementation, copyright policies, and required public disclosures.
- Transparency Requirements: The Transparency Chapter describes the measures all GPAI model providers must adopt and the disclosures they must make, including completing the comprehensive Model Documentation Form 10 detailing technical specifications, training data characteristics, computational resources, and energy consumption.
- Copyright Compliance: The Copyright Chapter requires all GPAI model providers to implement policies that respect intellectual property rights, including honoring the robots.txt protocol (the Robots Exclusion Protocol governing web crawling and other automated website access), preventing copyright-infringing outputs, and establishing mechanisms for lodging complaints.
- Downstream Support: The Code of Practice also requires all GPAI model providers to establish processes for timely information sharing with downstream system providers who utilize their model.
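The robots.txt measure above can be checked programmatically. As an illustrative sketch only (the Code does not prescribe any particular tooling), Python's standard urllib.robotparser can verify whether a given crawler user agent is permitted to fetch a URL; the bot names and robots.txt content below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real crawler would fetch
# https://example.com/robots.txt before scraping the site.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The opted-out AI crawler may not fetch any path on the site...
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
# ...while other user agents remain unaffected.
print(parser.can_fetch("OtherBot", "https://example.com/article"))      # True
```

A crawler pipeline honoring opt-outs would run a check like this before every fetch and log refusals for audit purposes.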
For the frontier GPAI model providers – those whose models are trained using more than 10^25 floating-point operations (FLOP) – there are additional Safety and Security 11 requirements that must be met. These additional requirements include:
- Safety and Security Frameworks: The Safety and Security Chapter requires establishing formal governance structures with independent risk oversight.
- Model Evaluations: Similarly, the largest GPAI model providers are required to conduct rigorous testing by qualified independent external evaluators before deployment, after major updates, and periodically thereafter.
- Cybersecurity Standards: Additionally, the chapter describes the state-of-the-art security measures that must be established, including end-to-end encryption, access controls, and insider threat protections.
Currently, it is estimated that only 5-15 companies worldwide – such as OpenAI, Anthropic, Google, and Microsoft – would be subject to the Safety and Security chapter guidance, but the number of GPAI model providers in this group will likely grow as compute efficiency improves.
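Whether a model crosses the 10^25 FLOP line can be roughly approximated from public model characteristics. The sketch below uses the common "≈ 6 × parameters × training tokens" heuristic for transformer training compute; the heuristic and the model figures are illustrative assumptions, not a method prescribed by the AI Act or the Code:

```python
def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough transformer training-compute estimate: ~6 FLOP per
    parameter per training token (a common heuristic, not an
    official methodology under the AI Act)."""
    return 6 * parameters * training_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # EU AI Act training-compute threshold

# Hypothetical frontier-scale model: 500B parameters, 10T tokens.
flop = estimated_training_flop(500e9, 10e12)
print(f"{flop:.1e}")                        # 3.0e+25
print(flop > SYSTEMIC_RISK_THRESHOLD)       # True

# Hypothetical smaller model: 7B parameters, 2T tokens.
flop_small = estimated_training_flop(7e9, 2e12)
print(flop_small > SYSTEMIC_RISK_THRESHOLD)  # False
```

The arithmetic illustrates why only a handful of providers are covered today: crossing 10^25 FLOP currently requires frontier-scale parameter counts and training corpora.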
Strategic Considerations for Voluntary Adopters
While the GPAI Code of Practice is voluntary, the requirements of the EU AI Act are mandatory for GPAI model providers in the EU market.
By following the Code of Practice, however, GPAI model providers enjoy a rebuttable "presumption of conformity" with AI Act obligations – which is a beneficial safe harbor as AI technology and regulations evolve. The Code of Practice is intended as a roadmap for compliance in the EU, so deviating from it may invite unnecessary scrutiny or uncertainty from the EU AI Office.
The Code of Practice's requirements also align with emerging global AI governance trends. As NIST develops U.S. frameworks and other jurisdictions advance similar initiatives, investment in these compliance capabilities will likely serve multiple regulatory regimes.
Takeaways
For companies subject to the EU AI Act, the GPAI Code of Practice provides the most concrete guidance to date on compliance with the EU AI Act's complex requirements. EU regulators are sticking to the rapidly approaching implementation deadlines, so impacted companies should prioritize the following action items from the Code of Practice to identify any gaps between the Code's recommendations and their current AI documentation and practices.
While every GPAI model should align with the full range of relevant measures in the GPAI Code of Practice, key areas of focus and next steps should include:
- Review the Model Documentation Form, determine knowledge gaps, and engage relevant stakeholders to gather the required information. Some US GPAI model providers may lack the detailed energy-use and compute-hour data needed, so it is particularly important to assess and inventory technical documentation.
- Implement an online complaint-intake mechanism, if one does not already exist, for users to submit copyright-related complaints – for example, an email address or API endpoint for rightsholder complaints and AI Office inquiries.
- Designate a point of contact for copyright-related matters and post the contact details for users to access.
- Evaluate and update your GPAI model's web-scraping hygiene, including making any necessary updates to ensure that robots.txt opt-outs are honored and you are prepared to implement the blocklist of websites "recognised as persistently and repeatedly infringing copyright and related rights" that the EU will maintain.
- For frontier models covered by the Safety and Security chapter, evaluate whether your current red-team and benchmark tests are as rigorous as the state-of-the-art requirements of the Code of Practice. Conduct these tests before releases, major updates, and regularly thereafter. Be prepared to identify serious incidents and notify the AI Office within the reporting timelines detailed in Measure 9.3, typically requiring a prompt report between two and fifteen days after becoming aware of the model's involvement in an incident.
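The reporting windows in the last item can be tracked mechanically once an incident is logged. Below is a minimal sketch assuming illustrative deadline tiers; the category names and day counts are hypothetical placeholders, and the authoritative categories and timelines must be taken from Measure 9.3 of the Safety and Security Chapter itself:

```python
from datetime import date, timedelta

# Illustrative tiers only; consult Measure 9.3 for the
# authoritative incident categories and reporting windows,
# which range from roughly two to fifteen days.
REPORTING_DAYS = {
    "critical_incident": 2,
    "serious_incident": 10,
    "other_relevant_incident": 15,
}

def report_deadline(awareness_date: date, category: str) -> date:
    """Latest date to notify the AI Office, counted from the day
    the provider became aware of the model's involvement."""
    return awareness_date + timedelta(days=REPORTING_DAYS[category])

deadline = report_deadline(date(2025, 8, 4), "serious_incident")
print(deadline)  # 2025-08-14
```

Wiring a calculation like this into an incident-management workflow helps ensure the notification clock starts at awareness, not at internal triage.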
Adopting the measures outlined in the Code of Practice is currently the most effective way to demonstrate EU AI Act readiness. Although the compliance demands may require a significant operational lift – especially around data governance, external model testing, and incident reporting – the Code of Practice reflects significant industry input and offers valuable regulatory clarity for GPAI model providers participating in EU markets.
Footnotes
1 See Drawing-up a General-Purpose AI Code of Practice, European Commission (July 10, 2025), available at https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice.
2 The General-Purpose AI Code of Practice, European Commission (July 10, 2025), available at https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai [hereinafter, the "GPAI Code of Practice"].
3 See, e.g., EU sticks with timeline for AI rules, Foo Yun Chee, Reuters (July 4, 2025) (quoting a Commission spokesperson that "There is no grace period. There is no pause."), available at https://www.reuters.com/world/europe/artificial-intelligence-rules-go-ahead-no-pause-eu-commission-says-2025-07-04/.
4 EU AI Act, Article 3(63).
5 Id., Recital 98.
6 Id., Recital 99.
7 See generally General-Purpose AI Models in the AI Act – Questions & Answers, European Commission (June 12, 2025), available at https://digital-strategy.ec.europa.eu/en/faqs/general-purpose-ai-models-ai-act-questions-answers.
8 GPAI Code of Practice, Transparency Chapter (July 10, 2025), available at https://ec.europa.eu/newsroom/dae/redirection/document/118120.
9 GPAI Code of Practice, Copyright Chapter (July 10, 2025), available at https://ec.europa.eu/newsroom/dae/redirection/document/118115.
10 GPAI Code of Practice, Model Documentation Form (July 10, 2025), available at https://ec.europa.eu/newsroom/dae/redirection/document/118118.
11 GPAI Code of Practice, Safety and Security Chapter (July 10, 2025), available at https://ec.europa.eu/newsroom/dae/redirection/document/118119.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.