
📢 EU AI ACT: DEVELOPMENTS & STAKEHOLDER INSIGHTS

  • Writer: PCV LLC
  • May 27
  • 3 min read

The European Union's Artificial Intelligence Act (AI Act) has entered a critical phase in 2025, with significant provisions now in effect. We will delve into recent developments, stakeholder feedback, and the broader implications for AI governance within the EU.


Stakeholder Feedback on AI Definitions and Prohibited Practices


A comprehensive report by the Centre for European Policy Studies (CEPS) for the EU AI Office analysed responses from two public consultations on the AI Act's definitions and prohibited practices.


Key findings include:


  • Industry Dominance: 47.2% of nearly 400 responses were from industry stakeholders, while citizen engagement was limited to 5.74%

  • Call for Clarity: Respondents emphasised the need for clearer definitions of terms like "adaptiveness" and "autonomy" to avoid inadvertently regulating conventional software

  • Concerns on Prohibited Practices: Significant apprehensions were raised regarding practices such as emotion recognition, social scoring, and real-time biometric identification. Stakeholders advocated for concrete examples to delineate prohibited activities from permissible ones


AI Literacy Requirements Under Article 4

Effective from 2 February 2025, Article 4 mandates that AI providers and deployers ensure adequate AI literacy among their personnel. The European Commission's extensive AI literacy Q&A outlines:

  • Assessment Criteria: Consideration of individuals' technical knowledge, experience, education, and training

  • Contextual Application: Tailoring literacy levels based on the context in which AI systems are used and the target audience

  • Compliance Guidance: Detailed instructions on meeting the literacy requirements, enforcement mechanisms, and additional resources


Appointment of Lead Scientific Adviser Pending

Despite receiving numerous applications since late 2024, the European Commission has yet to appoint a Lead Scientific Adviser for the AI Office. This role is pivotal in:

  • Scientific Oversight: Ensuring a high level of scientific understanding of general-purpose AI

  • Model Evaluation: Leading the scientific approach in testing and evaluating general-purpose AI models, in collaboration with the AI Office's Safety Unit

  • Regulatory Integrity: Maintaining scientific rigour and integrity across AI initiatives


Expert Analysis: Balancing Bureaucracy and AI Safeguards

In a recent op-ed in Fortune, Risto Uuk and Sten Tamkivi argue that Europe's path to AI competitiveness lies in cutting red tape rather than removing AI safeguards.


Key points include:

  • Resource Allocation: Major AI companies like Meta and Google possess the resources to comply with safety evaluations

  • Importance of Safeguards: Independent assessments are crucial to ensure AI models do not pose systemic risks

  • Focus on Bureaucracy: Emphasis on streamlining traditional business bureaucracy to foster economic growth without compromising AI safety


The Code of Practice: Enhancing AI Safety and Security


The voluntary Code of Practice on General Purpose AI serves as a framework to translate the AI Act's requirements into actionable guidance.


Highlights include:

  • Clarification of Obligations: Providing detailed instructions on model evaluation and systemic risk mitigation

  • Compilation of Best Practices: Aggregating safety practices from leading AI companies like OpenAI, Anthropic, and Google DeepMind

  • Regulatory Encouragement: The European Commission promotes adoption by offering benefits such as increased trust and streamlined enforcement for signatories

  • Targeted Scope: Primarily focusing on around 11 global providers whose models exceed 10²⁵ FLOPs of training compute (a rough illustration of this threshold follows the list)

  • Democratic Development: Developed through an inclusive process involving around a thousand stakeholders across three drafts
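For readers curious how the 10²⁵ FLOPs threshold plays out in practice, the short Python sketch below estimates training compute using the commonly cited 6 × parameters × tokens heuristic. The heuristic, the example model sizes, and the function names are illustrative assumptions on our part; the AI Act itself frames the presumption in terms of the cumulative compute actually used for training, not any particular formula.

# Rough illustration only: under the AI Act, a general-purpose AI model is
# presumed to carry systemic risk when its cumulative training compute
# exceeds 1e25 FLOPs. The "6 * params * tokens" estimate is a common
# heuristic and NOT the Act's own measurement method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the AI Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-the-envelope training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical example models, for illustration only
examples = [
    ("70B-parameter model, 15T training tokens", 70e9, 15e12),
    ("1T-parameter model, 20T training tokens", 1e12, 20e12),
]

for name, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    exceeds = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> exceeds 1e25 threshold: {exceeds}")

On these assumptions, a 70-billion-parameter model trained on 15 trillion tokens would fall below the threshold (roughly 6.3 × 10²⁴ FLOPs), while a one-trillion-parameter model trained on 20 trillion tokens would exceed it, which is consistent with the Code of Practice targeting only a small number of frontier-scale providers.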


Cultural Concerns: Protecting Creative Rights in the AI Era

Björn Ulvaeus, ABBA member and President of the International Confederation of Societies of Authors and Composers (CISAC), addressed the European Parliament, voicing concerns over the potential weakening of creative rights under the AI Act.


He emphasised:

  • Risks of Dilution: Warnings against proposals that could undermine copyright protections

  • Call for Transparency: Advocating for the inclusion of creative sector perspectives in the development of AI regulations

  • Preservation of Principles: Urging the EU to uphold its commitment to protecting creators' rights in the face of technological advancements


Conclusion

The EU's AI Act represents a significant milestone in the development of a harmonised regulatory framework for artificial intelligence. As implementation progresses and obligations become enforceable, it is essential for stakeholders to remain informed and proactively adapt to the evolving compliance landscape. Our team is well-positioned to provide strategic legal guidance on AI regulatory matters. For tailored advice or to discuss how these developments may impact your organisation, please reach out to us at info@pelaghiaslaw.com.



