
📢 EU AI ACT COMPLIANCE ALERT: KEY OBLIGATIONS APPLICABLE FROM 2 AUGUST 2025

  • Writer: PCV LLC
  • Jul 29
  • 7 min read

As the European Union advances its landmark AI regulatory framework, 2 August 2025 marks the first major operational milestone in the phased application of the Artificial Intelligence Act (Regulation (EU) 2024/1689). On this date, several foundational provisions become applicable, enabling supervision, governance, and early-stage enforcement across both general-purpose and high-risk AI domains.


What Comes into Effect on 2 August 2025?

Pursuant to Article 113(b) of the AI Act, the following parts of the regulation become applicable on 2 August 2025:


Chapter III, Section 4 (Articles 28–39) | Notifying Authorities and Notified Bodies

A key pillar of the EU AI Act is the establishment of a robust and harmonised system for the conformity assessment of high-risk AI systems. Chapter III, Section 4 of the Regulation outlines the roles, requirements, and oversight mechanisms for notifying authorities and notified bodies – the institutional backbone ensuring AI systems comply with EU safety, transparency, and trustworthiness standards before reaching the market.


Each Member State must designate notifying authorities tasked with evaluating and formally notifying conformity assessment bodies to the European Commission. These authorities must operate with independence, impartiality, and confidentiality, ensuring the integrity of the notification process.


Conformity assessment bodies, once designated as notified bodies, are responsible for carrying out third-party assessments of high-risk AI systems. They must be legally established within the EU and meet strict requirements concerning technical competence, independence, internal procedures, liability insurance, and cybersecurity safeguards. Compliance with relevant harmonised standards provides a presumption of conformity with these obligations.


Importantly, notified bodies are held fully accountable for any activities carried out by subcontractors or subsidiaries, and are expected to avoid placing undue burdens on providers, particularly micro and small enterprises. The Commission assigns unique identification numbers to all notified bodies and maintains a public, up-to-date registry.


The AI Act also introduces mechanisms for continuous supervision, allowing the Commission and Member States to monitor, review, or withdraw designations where necessary. Furthermore, notified bodies must participate in coordination and peer learning activities to ensure consistency across the Union. Bodies established in third countries may also be recognised, provided they meet equivalent standards under relevant international agreements.


This institutional framework ensures that only qualified and reliable entities are entrusted with the critical role of conformity assessment, reinforcing the AI Act’s objective of safe, lawful, and trustworthy AI deployment across the European Union.



Chapter V | General-Purpose AI Models

The EU AI Act introduces a dedicated framework for general-purpose AI models (GPAI)—those capable of performing a wide range of tasks and which may be integrated into various downstream AI systems. From 2 August 2025, key regulatory obligations will apply to GPAI providers, particularly where models are classified as presenting systemic risks to public interests such as safety, fundamental rights, or democratic integrity.


Classification and System Risk Assessment


A GPAI model is deemed to present systemic risk if it exhibits high-impact capabilities based on technical benchmarks, most notably if it has been trained using computational resources exceeding 10²⁵ floating-point operations (FLOPs). Additionally, the European Commission may designate a model as presenting systemic risk, either on its own initiative or upon receiving a qualified alert from the AI Act’s Scientific Panel.
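The 10²⁵ FLOP threshold lends itself to a back-of-the-envelope check. The sketch below uses the widely cited approximation of roughly 6 FLOPs per model parameter per training token to estimate training compute; that heuristic is our assumption for illustration and is not part of the Regulation itself:

```python
# Illustrative sketch only: estimating whether a training run crosses
# the AI Act's 10^25 FLOP presumption threshold for systemic risk.
# The 6 * parameters * tokens rule of thumb is an assumption, not law.

SYSTEMIC_RISK_FLOPS = 1e25  # threshold stated in the AI Act


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * num_parameters * num_tokens


def presumed_systemic_risk(num_parameters: float, num_tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(num_parameters, num_tokens) > SYSTEMIC_RISK_FLOPS


# Example: a hypothetical 70-billion-parameter model trained on
# 15 trillion tokens lands at roughly 6.3e24 FLOPs, below the threshold.
```

Note that crossing the threshold only creates a presumption; as described below, providers may still submit arguments that their model does not in fact present systemic risk.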


Providers must notify the Commission when their models meet systemic risk thresholds, and may submit arguments if they believe the specific characteristics of their models mitigate those risks. A public EU-wide register of GPAI models with systemic risk will be maintained, balancing transparency with the protection of intellectual property and trade secrets.


Baseline Obligations for GPAI Providers


Regardless of risk classification, GPAI providers must:

  • Maintain up-to-date technical documentation on the model’s training, testing, and evaluation

  • Provide essential information to downstream AI system providers to ensure they can comply with the AI Act

  • Disclose a summary of the training data used, in a format determined by the AI Office

  • Adopt a policy for copyright compliance, in line with EU copyright law


Additional Duties for High-Risk GPAI Models


Where a GPAI model is classified as presenting systemic risk, providers must go further by:

  • Conducting adversarial testing and documenting evaluations to identify and mitigate systemic risks

  • Implementing Union-level risk mitigation measures

  • Ensuring a high standard of cybersecurity for both the model and its infrastructure

  • Reporting serious incidents to the AI Office and national authorities without delay


These providers may rely on the Code of Practice published by the Commission on 10 July 2025 to demonstrate compliance until harmonised EU standards are adopted. Where such standards or codes are not followed, alternative documented measures must be submitted to the Commission for review.



Chapter VII | Governance

The governance framework of the EU AI Act is designed to ensure consistent and coordinated application of the Regulation across all Member States, with dedicated roles at both the Union and national levels. Chapter VII sets out the institutional architecture, encompassing the AI Office, the European Artificial Intelligence Board, the Scientific Panel of Experts, and the designated national competent authorities, whose responsibilities come into full effect by 2 August 2025.


Centralised Union-Level Coordination


At the heart of the EU’s governance model is the AI Office, established within the European Commission to build expertise, ensure oversight, and support enforcement of the Regulation. The AI Office collaborates closely with the European Artificial Intelligence Board (EAIB), a body composed of national representatives from each Member State tasked with ensuring uniform application of the AI Act across the Union.


The Board promotes coordination among national authorities, issues opinions and recommendations, supports the development of guidance and standards, and contributes to cross-border market surveillance efforts. Two permanent sub-groups facilitate dialogue on notified bodies and market surveillance, while additional sub-groups may be established as needed to address emerging issues.


To ensure transparency and stakeholder engagement, an Advisory Forum will support the Board and the Commission by bringing together industry representatives, start-ups, SMEs, academia, and civil society, offering technical advice and annual public reporting.


Complementing this structure is a Scientific Panel of Independent Experts, established by the Commission to provide technical and risk-based input, particularly in assessing systemic risks posed by general-purpose AI models. These experts also support the AI Office, Member States, and market surveillance activities, including cross-border investigations.


National-Level Implementation and Oversight


Each Member State must designate by 2 August 2025:


  • At least one notifying authority

  • At least one market surveillance authority, and

  • A single point of contact responsible for communication and coordination


These national authorities must act independently, impartially, and be adequately resourced to fulfil their functions. Their responsibilities include supervising market actors, providing guidance (especially to SMEs and start-ups), ensuring cybersecurity, and cooperating with the Commission and other Member States. Importantly, authorities must have expertise in AI, data governance, cybersecurity, fundamental rights, and legal standards.


National authorities must report to the Commission every two (2) years on the adequacy of their financial and human resources. The Commission will facilitate knowledge exchange and coordination among national authorities, and support cross-sector guidance aligned with existing EU law.



Article 78 | Safeguarding Confidentiality under the EU AI Act


The EU AI Act places strong emphasis on protecting confidential information throughout its implementation and enforcement, as set out in Article 78. All authorities and entities involved, whether the Commission, market surveillance authorities, notified bodies, or other public or private actors, must comply with Union and national rules on confidentiality when handling sensitive data during inspections, assessments, or audits.


This includes the protection of:

  • Intellectual property rights, including source code,

  • Trade secrets and other confidential business information,

  • The integrity of enforcement activities, and

  • Public or national security interests


Authorities are permitted to request only data that is strictly necessary to assess the risk posed by AI systems and to perform their duties. Additionally, they must implement robust cybersecurity measures and ensure that any collected data is deleted once no longer needed.


Where high-risk AI systems are used by law enforcement, immigration, or border control authorities (e.g. facial recognition, biometric identification), heightened confidentiality protections apply. In such cases, technical documentation must remain on-site and accessible only to officials with appropriate security clearance.


Furthermore, cross-border information sharing is restricted to prevent compromising sensitive operational data. The Regulation also ensures that confidentiality provisions do not hinder necessary cooperation or information sharing among EU institutions or with third-country regulators, provided bilateral or multilateral confidentiality arrangements are in place that guarantee equivalent safeguards.


Chapter XII | Penalties

The EU AI Act introduces a graduated and proportionate enforcement framework designed to ensure compliance across all actors in the AI value chain, ranging from system providers and importers to GPAI developers and Union institutions. Member States and the European Commission are entrusted with setting and applying penalties that are effective, dissuasive, and fair, with full application expected by 2 August 2025.


National Enforcement and Fines


Each Member State must establish and notify the Commission of its national penalties framework, which may include administrative fines, warnings, and non-monetary measures. Fines must respect proportionality and consider the specific context of SMEs and start-ups, including their economic viability.


Violations are tiered according to severity:

  • Prohibited AI practices (e.g. manipulative or exploitative systems under Article 5) can trigger fines of up to €35 million or 7% of global annual turnover

  • Breaches of other key obligations (e.g. those by providers, importers, deployers, and notified bodies) can result in fines up to €15 million or 3% of turnover

  • Providing incomplete or misleading information may lead to fines up to €7.5 million or 1% of turnover


Fines for SMEs and start-ups are capped at the lower of the percentage or the fixed amount. Authorities must assess factors such as the duration, intent, severity, harm caused, cooperation level, and whether the operator self-reported the breach. Procedural safeguards, such as due process and judicial review, are mandated by Union and national law.
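The interaction between the fixed amounts, the turnover percentages, and the SME carve-out can be sketched as a simple calculation. This is an illustrative sketch of the caps described above, not legal advice; the "whichever is higher" rule for non-SME operators reflects the wording of the Act's penalty provisions, while the lower-of cap for SMEs follows the rule stated in the preceding paragraph:

```python
# Illustrative sketch of the AI Act's maximum administrative fines.
# Tier names and the helper itself are our own labels, not the Act's.

FINE_TIERS = {
    # tier: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),    # e.g. Article 5 violations
    "other_obligation": (15_000_000, 0.03),       # other key obligations
    "misleading_information": (7_500_000, 0.01),  # incomplete/misleading info
}


def max_administrative_fine(tier: str, turnover_eur: float,
                            is_sme: bool = False) -> float:
    """Upper bound of the fine for a given violation tier.

    Most operators face the HIGHER of the fixed amount and the turnover
    percentage; SMEs and start-ups are capped at the LOWER of the two.
    """
    fixed_cap, pct = FINE_TIERS[tier]
    turnover_cap = pct * turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


# A large operator with EUR 1bn turnover committing a prohibited practice
# faces up to ~EUR 70m (7% of turnover exceeds the EUR 35m fixed cap);
# an SME with EUR 10m turnover would be capped at ~EUR 700k instead.
```

The worked comparison shows why the SME rule matters in practice: the same violation tier can produce caps two orders of magnitude apart depending on the operator's size.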


Union Institutions and AI Developers


The European Data Protection Supervisor (EDPS) is authorised to fine EU institutions, bodies, or agencies for breaches, with maximum fines ranging from €750,000 to €1.5 million, depending on the infringement. These penalties are subject to the EDPS’s internal due process obligations and do not affect the core operation of the fined institution.


Legal Insights & Key Takeaways

With the EU AI Act entering a critical enforcement phase in August 2025, organisations across the AI value chain must treat compliance as a strategic priority, not a regulatory afterthought. The Regulation not only imposes obligations, but also creates enforcement leverage, particularly for high-risk and general-purpose AI systems, while promoting legal certainty and harmonised oversight across the Union.


Providers and users of AI systems should proactively map their obligations under the Act, assess the risk classification of their AI systems, and identify whether they rely on general-purpose AI models subject to systemic risk obligations. Engaging legal and technical counsel early will be crucial to navigating conformity assessments, documentation requirements, governance expectations, and possible sanctions.


Whether you are a provider, deployer, GPAI developer, startup, or institution affected by the AI Act, our dedicated team is ready to guide you through every step of this new regulatory landscape, from risk classification and model documentation to Codes of Practice and compliance planning.


 To discuss how we can support your organisation, contact us at info@pelaghiaslaw.com.




