📢 EU AI ACT: KEY UPDATES ON TRANSPARENCY, OVERSIGHT AND IMPLEMENTATION CHALLENGES
- PCV LLC
- Sep 16
- 3 min read

The EU AI Act continues to shape the future of artificial intelligence regulation in Europe. Recent developments reveal not only the scale of the task ahead but also the legal, technical, and political challenges facing regulators, industry, and stakeholders.
Commission Consultation on Transparent AI Systems
The European Commission has launched a public consultation to develop guidelines and a Code of Practice for transparent AI systems. The focus is on generative AI, where providers and deployers will be required to detect and label AI-generated or manipulated content. Under the AI Act, users must be informed when they are interacting with AI systems and when they are subject to emotion recognition, biometric categorisation, or exposed to synthetic content. The consultation runs until 2 October 2025, alongside a call for expressions of interest to participate in shaping the Code of Practice. These transparency obligations will become legally binding from 2 August 2026, reflecting the EU’s emphasis on responsible and trustworthy AI adoption.
German Privacy Authorities Push Back
In Germany, data protection authorities have sharply criticised the government’s draft law implementing the AI Act. Their concern is that supervisory powers over high-risk AI systems in areas such as law enforcement, border management, and justice are being shifted away from data protection bodies to the telecommunications regulator (BNetzA). Seventeen state authorities argue that this move undermines the AI Act’s intention and could amount to a “massive weakening of fundamental rights.” The criticism underscores the delicate balance between regulatory efficiency and safeguarding fundamental freedoms.
GPT-5 and Questions of Compliance
The launch of OpenAI’s GPT-5 model on 7 August 2025 has raised questions over its immediate compliance with the AI Act. Developers of general-purpose AI systems released after 2 August 2025 are required to publish a summary of training data and disclose their copyright policy. GPT-5 appears not to have done so, despite OpenAI’s earlier commitment to the EU Code of Practice. The EU AI Office is currently assessing whether GPT-5 qualifies as a “systemic risk” model, which would trigger stricter obligations. Formal enforcement, however, begins in August 2026, giving developers limited time to align with the new framework.

AI Office Staffing Shortages
The EU AI Office, tasked with enforcement and standard-setting under the Act, is facing serious recruitment challenges. Despite employing 125 staff and planning to add 35 more by year-end, key leadership roles remain vacant. Salaries ranging from €55,000 to €120,000 struggle to compete with private sector offers, hindering recruitment of the technical expertise needed to carry out more than 100 regulatory tasks. With MEP Axel Voss noting that compliance and safety units alone may require 200 staff, the Office’s capacity gap raises questions about the EU’s readiness for timely enforcement.
Copyright Dilemmas in AI Regulation
The intersection of copyright law and AI development remains a pressing issue. While obligations around transparency and safety are relatively clear, copyright compliance raises training costs and reduces data availability. Prohibitions on reproducing protected content in outputs are justified, yet dataset transparency and copyright opt-outs present practical hurdles. Current compromises in the Code of Practice have encouraged most major AI developers (excluding Meta and xAI) to sign, but experts warn that Europe’s copyright framework, shaped largely by media industries, may inadvertently stifle AI research and competitiveness.
Industry Missing in Standards Development
The success of the AI Act hinges on the development of technical standards by CEN and CENELEC. However, European industry’s limited participation has drawn sharp criticism. Piercosma Bisconti, a lead figure in the process, noted that even companies advocating for delays in AI Act implementation, such as Airbus, Siemens, Spotify, and SAP, are absent from the standards-setting table. Without robust engagement, the risk is that Europe will struggle to translate broad regulatory principles into practical, enforceable guidance for AI developers.
Key Takeaways
The EU AI Act is entering a critical phase:
- Transparency rules for generative AI take effect in August 2026
- Implementation disputes highlight tensions between fundamental rights and regulatory efficiency
- Compliance gaps, as seen with GPT-5, signal the scale of adaptation needed by AI developers
- Institutional capacity remains a weak point for effective enforcement
- Copyright and standards remain unresolved bottlenecks for innovation and competitiveness
As the regulatory landscape evolves, businesses, developers, and institutions must stay proactive in aligning with upcoming obligations while contributing to the debate on how Europe balances innovation with trust and accountability.
For more insights on how the EU AI Act may affect your business, and tailored advice on compliance and strategic positioning, contact us at info@pelaghiaslaw.com.
