EU PUBLISHES FIRST CODE OF PRACTICE FOR GENERAL-PURPOSE AI MODELS: A MILESTONE FOR RESPONSIBLE AI GOVERNANCE
- PCV LLC
- Jul 11
- 3 min read

On 10 July 2025, the European Commission officially unveiled the Code of Practice for General-Purpose AI Models (GPAI Code), marking a critical step toward ensuring responsible AI innovation across the EU.
Designed through a multi-stakeholder process and built as a voluntary tool, the Code helps AI model providers demonstrate compliance with the transparency, safety, and copyright obligations under the AI Act, particularly Articles 53 and 55.
Why this Code matters
While general-purpose AI (GPAI) models such as large language models or multimodal systems drive rapid innovation, they also pose systemic risks ranging from misinformation to intellectual property infringements. The GPAI Code is the EU's proactive response, developed ahead of the binding AI Act obligations entering into force in August 2025 (for Article 53) and August 2026 (for Article 55).
Structure and Scope
The Code is structured into three dedicated chapters, each addressing a core compliance area:
1. Transparency (Article 53(1)(a)-(b))
This chapter obliges GPAI providers to maintain robust Model Documentation, using a standardised Model Documentation Form. It ensures downstream providers and authorities can understand a model's architecture, capabilities, design rationale, and limitations. It also includes provisions to keep prior versions archived for 10 years and to make contact channels available for data access or clarification requests. A minimal illustrative sketch of such a record follows below.
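The official Model Documentation Form remains the authoritative template. Purely as an illustration of the kind of structured record a provider might keep internally, the Python sketch below uses hypothetical field names (not taken from the Form itself) to capture the elements named above: architecture, capabilities, design rationale, limitations, a contact channel, and an archive of prior versions.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: field names are hypothetical and do not reproduce
# the official Model Documentation Form published with the GPAI Code.
@dataclass
class ModelDocumentationRecord:
    model_name: str
    version: str
    release_date: date
    architecture_summary: str           # high-level description of the model architecture
    capabilities: list[str]             # intended capabilities and use cases
    design_rationale: str               # key design choices and why they were made
    limitations: list[str]              # known limitations and out-of-scope uses
    contact_channel: str                # channel for data access or clarification requests
    superseded_versions: list[str] = field(default_factory=list)  # prior versions, archived (the Code: 10 years)

record = ModelDocumentationRecord(
    model_name="example-gpai-model",
    version="1.2.0",
    release_date=date(2025, 7, 10),
    architecture_summary="Decoder-only transformer, ~7B parameters",
    capabilities=["text generation", "summarisation"],
    design_rationale="Optimised for multilingual coverage of EU languages",
    limitations=["not suitable for legal or medical advice"],
    contact_channel="mailto:model-docs@example.com",
    superseded_versions=["1.0.0", "1.1.0"],
)
```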
2. Copyright (Article 53(1)(c))
Providers must implement a copyright policy aligned with Directive (EU) 2019/790, focusing on:
Avoiding the use of unlawfully accessible data during training
Honouring machine-readable rights reservations using standards like robots.txt (a minimal sketch follows at the end of this chapter)
Implementing technical safeguards against copyright-infringing outputs
Setting up complaint mechanisms for rightsholders
This chapter clarifies that while adherence to the Code helps demonstrate compliance, it does not constitute legal proof under Union copyright law.
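The Code points to machine-readable standards such as robots.txt for expressing rights reservations. As a minimal sketch (not a prescribed implementation), a training-data crawler could check a site's robots.txt before fetching any page, for example with Python's standard urllib.robotparser. The crawler name and the URL below are placeholders.

```python
from urllib import robotparser
from urllib.parse import urlsplit

# Hypothetical crawler identifier; a real provider would publish its own user-agent string.
USER_AGENT = "ExampleGPAITrainingBot"

def may_fetch_for_training(url: str) -> bool:
    """Return True only if the site's robots.txt permits this crawler to fetch the URL."""
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    # Placeholder URL, for illustration only.
    print(may_fetch_for_training("https://example.com/articles/sample-page"))
```

In practice a provider would also need to handle unreachable robots.txt files, crawl-delay directives, and other machine-readable opt-out mechanisms beyond robots.txt, but the principle of checking the reservation before ingesting content is the same.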
3. Safety and Security (Article 55: systemic risk models)
This section applies only to advanced GPAI models considered to pose systemic risks. It introduces a detailed Safety and Security Framework and a lifecycle-based approach to risk governance. Key commitments include:
Conducting continuous model evaluations
Defining risk thresholds and response plans
Reporting serious incidents to the AI Office and National Competent Authorities
Collaborating with independent evaluators and civil society
Implementing state-of-the-art safety and cybersecurity protocols
Notably, small and medium-sized providers benefit from proportionality clauses, allowing simplified reporting and compliance pathways.
Evaluation Phase and Commission Guidelines
The European Commission and Member States will now assess the adequacy of the Code in the coming weeks. Furthermore, the Code will be complemented by Commission guidelines later in July 2025 to clarify key concepts and further support implementation.
Stakeholders across the AI value chain, including model providers, regulators, researchers and rightsholders, are encouraged to engage with the Code. Their participation supports the development of a trustworthy AI ecosystem that promotes innovation while ensuring ethical and legal safeguards.
Implications for the AI Ecosystem
The GPAI Code of Practice represents a first-of-its-kind regulatory innovation, blending soft law and hard law to guide the AI industry toward compliance and accountability. It sends a clear message: the EU remains committed to advancing AI responsibly, with no grace periods or regulatory pauses.
As the implementation of the AI Act accelerates, voluntary adherence to the Code may offer a competitive advantage, both in demonstrating regulatory readiness and in building public trust.
For tailored legal advice on compliance with the EU AI Act and the General-Purpose AI Code of Practice, please contact our team at info@pelaghiaslaw.com. We are available to support clients navigating the evolving European AI regulatory framework.