📢EU AI ACT: KEY DEVELOPMENTS IN IMPLEMENTATION, CONSULTATION AND INDUSTRY ENGAGEMENT
- PCV LLC
- Jun 11

The European Union continues to shape the global regulatory landscape on artificial intelligence through its landmark AI Act. As the regulation transitions from legislative adoption to implementation, recent developments highlight the complexity of this process and the evolving relationship between policymakers, industry leaders, and civil society.
1. Public Consultation on High-Risk AI Systems
The European Commission has launched a public consultation focused on the classification and implementation of rules for high-risk AI systems under the AI Act.
The consultation seeks practical input to clarify which systems fall under the high-risk category and how responsibilities should be distributed across the AI value chain. These systems include those that:
Are subject to EU harmonised legislation on product safety, or
May significantly affect health, safety, or fundamental rights in specific contexts outlined in the Act
Open to a wide range of stakeholders, the consultation will inform the upcoming Commission Guidelines and runs until 18 July 2025.
2. Potential Postponement of AI Rules Due to Standards Delay
At a recent EU ministerial meeting in Luxembourg, Henna Virkkunen, Executive Vice President of the Commission, acknowledged possible delays in implementing aspects of the AI Act if essential technical standards are not finalised on time. This follows calls from industry, including US tech firms, for a “stop-the-clock” mechanism to prevent regulatory misalignment.
While some EU Member States, such as Poland, expressed conditional support for delays, they stressed that any extension must be coupled with a clear roadmap to maintain momentum and compliance readiness.
3. US Tech Giants Call for a Streamlined AI Code of Practice
The voluntary Code of Practice on general-purpose AI, initially due on 2 May 2025, has been postponed, with publication now expected before August 2025. During a consultation with EU officials, representatives of Amazon, IBM, Google, Meta, Microsoft, and OpenAI advocated for a simplified framework that avoids duplicative reporting and unnecessary burdens. They urged that the code remain within the scope of the AI Act and warned against premature implementation before companies have had adequate time to prepare.
4. Governing AI Agents: New Insights into Risk and Accountability

A detailed analysis by The Future Society has provided the first comprehensive interpretation of the AI Act's approach to AI agents—autonomous systems capable of real-world interaction.
Key takeaways include:
The Act regulates both general-purpose models and the AI agents they power
Risk classification depends on the agent’s use case unless exempted by model providers
Compliance responsibilities must be distributed across the value chain to solve the "many hands problem"
The regulation relies on four pillars: risk assessment, transparency, technical deployment controls, and human oversight
These findings are expected to influence future interpretive guidance and compliance strategies for AI developers and deployers.
5. Calls to Reduce Regulatory Burdens for EU Competitiveness
A new policy brief from DIGITALEUROPE urges the Commission to go further in reducing administrative burdens under the AI Act and other regulations. Although a 25% cut for large firms and a 35% cut for SMEs is currently planned by 2029, the organisation argues for a 50% reduction, citing Europe’s lag in scaling technologies such as AI and semiconductors.
The proposed reforms aim to:
Simplify overlapping EU rules
Improve legal certainty across Member States, and
Enhance the region’s ability to scale innovation and compete globally
6. Concerns About Inadequate Transparency for Downstream Compliance
Experts at the appliedAI Institute have warned that the draft Code of Practice for general-purpose AI models fails to equip downstream users with the transparency tools needed to meet high-risk system requirements. The disparity between model providers’ disclosures and downstream due diligence obligations may result in either relaxed enforcement or reluctance to integrate general-purpose models into regulated applications.
Conclusion
The evolving implementation of the AI Act illustrates both the ambition and complexity of regulating cutting-edge technologies. Stakeholders must remain engaged, adaptive, and proactive, especially as further guidance is published in the coming months. At Pelaghias, Christodoulou, Vrachas LLC, we continue to monitor developments closely and provide legal insight into compliance, risk management, and the broader regulatory landscape.
For tailored guidance or assistance with your organisation’s AI compliance strategy, please contact our team at info@pelaghiaslaw.com.