
📢 UNDERSTANDING THE EU AI ACT'S CODE OF PRACTICE FOR GENERAL-PURPOSE AI MODELS

  • Writer: PCV LLC
  • 3 min read

As the European Union's Artificial Intelligence Act (AI Act) continues to shape the regulatory landscape for AI technologies, a pivotal component in its implementation is the Code of Practice for General-Purpose AI (GPAI) model providers. This Code serves as a transitional compliance framework bridging the gap between the AI Act's enforcement and the establishment of formal European standards.


What Is the Code of Practice?

Outlined in Article 56 of the AI Act, the Code of Practice is a set of guidelines designed to assist GPAI model providers in aligning with the Act's requirements. Although the Code is not legally binding, adherence to it demonstrates a provider's commitment to compliance, especially during the interim period before official harmonised standards are adopted. The Code is particularly relevant for models that exhibit significant generality and can be integrated into a wide range of downstream systems and applications.


Why Is It Necessary?

The AI Act entered into force on the 1st of August 2024, and its obligations for GPAI model providers become applicable twelve months later, on the 2nd of August 2025. However, the development of harmonised European standards, a process involving bodies such as CEN and CENELEC, typically spans several years. The Code of Practice therefore serves as an interim mechanism, enabling providers to demonstrate compliance until these standards are in place.


Key Components of the Code

The third draft of the Code, released on the 11th of March 2025, delineates three main sections:

  1. Transparency

    • Providers commit to maintaining up-to-date model documentation

    • They must supply relevant information to downstream providers and the AI Office upon request

    • Ensuring the quality, security, and integrity of the documented information is paramount

  2. Copyright

    • Providers are to establish and implement a copyright policy

    • This includes using only lawfully accessible content for training, complying with rights reservations, and mitigating risks of producing infringing outputs

    • A designated point of contact for copyright concerns is also required

  3. Safety and Security (Applicable to GPAI models with systemic risk)

    • Providers must conduct state-of-the-art model evaluations and risk assessments

    • They are obligated to report serious incidents and implement corrective measures

    • Adequate cybersecurity protections must be in place


Note: Under the AI Act, a GPAI model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs).
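
For illustration only, the short Python sketch below estimates a model's cumulative training compute using the widely cited rule of thumb of roughly six floating-point operations per parameter per training token, and compares the result with the 10^25 FLOP threshold. Both the heuristic and the example figures are assumptions made for this sketch; they are not a calculation method prescribed by the AI Act or the Code.

# Illustrative sketch only: a rough check against the AI Act's 10^25 FLOP
# presumption threshold for systemic risk. The "~6 FLOPs per parameter per
# training token" approximation is a common heuristic for dense transformer
# training, not a method mandated by the AI Act or the Code of Practice.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25


def estimate_training_compute(num_parameters: float, num_tokens: float) -> float:
    """Rough estimate of cumulative training compute in FLOPs."""
    return 6.0 * num_parameters * num_tokens


def presumed_systemic_risk(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return estimate_training_compute(num_parameters, num_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP


if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters, 15 trillion training tokens.
    compute = estimate_training_compute(70e9, 15e12)
    print(f"Estimated training compute: {compute:.2e} FLOP")                  # ~6.30e+24
    print(f"Presumed systemic risk: {presumed_systemic_risk(70e9, 15e12)}")   # False

In this hypothetical example the estimate falls just below the threshold; providers approaching that scale should assess their position carefully rather than rely on back-of-the-envelope figures.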


Drafting Process and Stakeholder Involvement

Initiated in October 2024, the drafting of the Code has been a collaborative effort involving over a thousand stakeholders, including GPAI model providers, downstream providers, academics, civil society organisations, and independent experts. The process has been structured into four working groups covering transparency and copyright-related rules, systemic-risk assessment, technical risk mitigation, and internal risk management and governance. Thirteen independent Chairs and Vice-Chairs have led these groups, ensuring a comprehensive and inclusive approach.


Timeline and Implementation

  • 2nd of May 2025: Deadline under the AI Act for finalising the Code of Practice.

  • 2nd of August 2025: GPAI model provider obligations under the AI Act become applicable.

  • Post-May 2025: The European Commission, following an assessment by the AI Office, may approve the Code through an implementing act. If the Code is not approved, the Commission can instead lay down common rules for implementing the relevant obligations.


Implications for GPAI Model Providers

Adhering to the Code of Practice allows GPAI model providers to demonstrate proactive compliance with the AI Act's requirements. It serves as a practical guide during the period before formal standards are established, helping providers navigate the regulatory landscape and mitigate potential risks associated with non-compliance.


Preparing for Compliance in an Evolving Landscape

As the EU Artificial Intelligence Act enters its critical implementation phase, the Code of Practice for General-Purpose AI emerges as a key transitional tool for AI model providers. While non-binding, it signals a clear expectation of transparency, responsibility, and risk mitigation in AI development and deployment.


For organisations operating in or targeting the EU market, aligning with this evolving framework is not only prudent but essential. Early adherence demonstrates regulatory readiness and a commitment to responsible innovation—both of which can enhance trust, attract partnerships, and reduce future legal exposure.


At our law firm, we actively monitor the development of the AI Act and its associated instruments, including the Code of Practice. We are ready to support technology providers, platform operators, and downstream users in interpreting their obligations and building legally compliant AI strategies.


If your organisation is developing or deploying general-purpose AI models, now is the time to prepare. Our team is available to guide you through the practical steps toward compliance, offering tailored advice grounded in deep regulatory insight.


