Long Read: The Second Wave Hits: Why the EU AI Act just might become the global standard for AI regulation

13th August 2025

Written by Andy Moseby.

The Regulatory Watershed Moment for General-Purpose AI

The European Union’s AI Act reached a pivotal milestone on 2 August 2025, marking the commencement of binding obligations for providers of general-purpose AI (GPAI) models. This “second wave” of the AI Act represents far more than a regulatory compliance exercise: it establishes what is likely to become the global framework for AI governance, with profound implications for UK businesses operating in an interconnected digital economy.

The rules encompass transparency and copyright-related requirements, with additional obligations for models that may carry systemic risks, requiring providers to assess and mitigate those risks. Unlike the February 2025 prohibition on unacceptable AI practices, which primarily affected edge-case applications, the GPAI obligations strike at the heart of the generative AI revolution, affecting every organisation deploying ChatGPT-style models in European markets.

The extraterritorial reach: Brexit does not shield UK digital businesses

For UK businesses, the AI Act’s extraterritorial scope renders Brexit irrelevant in practical terms. The regulation applies to any provider placing GPAI models on the EU market or offering AI-powered services to EU users, regardless of the provider’s geographic location. This creates a compliance imperative that extends far beyond traditional notions of territorial jurisdiction.

The implications are immediate and far-reaching. UK firms developing, fine-tuning or substantially modifying AI models may find themselves classified as “providers” under the Act, triggering comprehensive documentation, transparency and risk management obligations. Even organisations acting as downstream deployers cannot assume immunity: those embedding third-party models into EU-facing services may find themselves jointly liable for compliance failures.

The three-pillar compliance framework

Providers of GPAI models must maintain technical documentation that makes each model’s development, training and evaluation traceable. More broadly, the regulatory framework rests on three fundamental pillars, each representing a distinct compliance challenge (a simplified sketch of how these obligations might be captured in practice follows the list):

  • Documentation and transparency: companies must establish comprehensive documentation pipelines capturing the entire model lifecycle. This extends beyond simple version control to encompass training data provenance, model architecture decisions, evaluation methodologies and performance benchmarks. The requirement for traceability means that ad hoc development practices must be replaced with rigorous governance frameworks capable of withstanding regulatory scrutiny.
  • Copyright compliance: the AI Act demands robust policies addressing the use of copyrighted material in training data. This represents a particularly complex challenge for organisations that have relied on web-scraped datasets without comprehensive rights clearance. Providers must now implement systematic approaches to data rights management, potentially requiring retrospective auditing of existing training processes.
  • Risk assessment and security: models identified as posing systemic risks trigger additional obligations under Article 55, including formal risk assessments, incident reporting mechanisms, cybersecurity measures, and notification requirements to the EU’s AI Office. The determination of “systemic risk” considers both model scale and advanced capabilities, creating a tiered compliance structure that escalates requirements for the most capable systems.
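
By way of illustration only, the sketch below shows how a provider might structure a single traceability record spanning the documentation, copyright and risk-assessment pillars described above. It is a minimal sketch under our own assumptions: the field names, structure and review logic are hypothetical and are not prescribed by the AI Act or the Code of Practice.

```python
# Hypothetical illustration only: a minimal traceability record a GPAI provider
# might keep per model release. Field names are illustrative assumptions, not
# terminology mandated by the AI Act or the GPAI Code of Practice.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TrainingDataSource:
    name: str                 # internal dataset identifier
    origin: str               # e.g. "licensed", "web-crawled", "synthetic"
    rights_cleared: bool      # copyright pillar: has use of the data been cleared/assessed?
    opt_out_respected: bool   # were machine-readable rights reservations honoured?


@dataclass
class ModelRecord:
    model_name: str
    version: str
    release_date: date
    architecture_summary: str                  # documentation pillar
    training_sources: list[TrainingDataSource] = field(default_factory=list)
    evaluation_benchmarks: dict[str, float] = field(default_factory=dict)
    systemic_risk_assessed: bool = False       # risk pillar (Article 55-style duties)
    incidents_reported: list[str] = field(default_factory=list)


record = ModelRecord(
    model_name="example-gpai-model",
    version="1.0.0",
    release_date=date(2025, 8, 2),
    architecture_summary="decoder-only transformer, ~7B parameters",
    training_sources=[
        TrainingDataSource("licensed-news-corpus", "licensed", True, True),
        TrainingDataSource("public-web-crawl-2024", "web-crawled", False, True),
    ],
    evaluation_benchmarks={"internal-safety-eval": 0.92},
)

# A record with uncleared rights or no systemic-risk assessment flags follow-up work.
needs_review = (
    not record.systemic_risk_assessed
    or any(not s.rights_cleared for s in record.training_sources)
)
print(f"{record.model_name} v{record.version}: needs compliance review = {needs_review}")
```

The point of a structure like this is not the code itself but the discipline it represents: every release carries a record of where its training data came from, how it was evaluated and whether risk obligations have been addressed, which is the kind of traceability the Act expects documentation to support.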

Code of Practice

On 10 July 2025, the European Commission published the GPAI Code of Practice, a voluntary code of conduct designed to help industry meet the GPAI-specific requirements and to serve as an alternative compliance tool under the AI Act. The Code creates incentives for proactive compliance while drawing a clear distinction between cooperative and resistant market participants.

Major signatories include OpenAI, Google, Microsoft, Anthropic and Amazon, while Meta has notably declined to sign, calling the EU’s implementation “overreach”. This division among Big Tech leaders reveals the strategic calculations underlying compliance decisions. Signatories gain regulatory goodwill and reduced scrutiny, while non-signatories risk enhanced supervisory attention and potentially more aggressive enforcement action.

The Code’s three-chapter structure – transparency, copyright and safety/security – mirrors the statutory obligations while providing practical guidance on implementation. Crucially, signing the Code does not confer legal immunity but signals good faith engagement with the regulatory process. For UK firms, this creates a binary choice with significant strategic implications: embrace the Code as a pathway to regulatory certainty, or risk being characterised as resistant to European governance frameworks.

Financial and operational consequences

The European Commission gains enforcement tools, including the power to impose fines, from August 2026, and the enforcement architecture demonstrates the EU’s serious intent. Administrative fines of up to €35 million or 7% of global annual turnover (whichever is higher) for the most serious infringements, and up to €15 million or 3% for breaches of the GPAI provider obligations, represent genuinely deterrent penalties for organisations of any size.
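
As a rough illustration of scale only: because the cap operates on a “whichever is higher” basis, the percentage figure, not the fixed sum, drives exposure for larger businesses. The snippet below uses the headline maximum quoted above; the parameters can be adjusted for other penalty tiers.

```python
# Rough illustration of a "whichever is higher" fine cap using the headline
# figures above. Actual fine levels depend on the infringement concerned and
# are set out in the Act itself.
def fine_cap(annual_turnover_eur: float,
             fixed_cap_eur: float = 35_000_000,
             turnover_pct: float = 0.07) -> float:
    """Return the maximum administrative fine: the higher of the fixed cap
    and the stated percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)


# A business with EUR 2bn global turnover faces a cap of EUR 140m, not EUR 35m.
print(f"EUR {fine_cap(2_000_000_000):,.0f}")
```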

The enforcement timeline creates a compressed preparation window. GPAI models already placed on the market before 2 August 2025 benefit from a compliance extension until 2 August 2027, while new models entering the market now face immediate obligations. This differential treatment incentivised rushing models to market ahead of the cut-off: a regulatory arbitrage opportunity likely to prove both brief and risky.

Market surveillance mechanisms will enable authorities to restrict distribution of non-compliant models across the EU’s single market. For UK providers, this represents an existential threat to European market access, potentially severing revenue streams that many organisations consider essential for commercial viability.

Practical implications for tech in-house counsel

The AI Act’s second wave fundamentally reshapes the advisory landscape. Traditional software licensing and data protection frameworks, while relevant, prove insufficient for the complexity of AI governance.

Organisations procuring third-party AI services must implement enhanced due diligence processes, requiring contractual representations regarding AI Act compliance, audit rights and incident notification procedures. The joint provider liability risk means that procurement decisions carry regulatory consequences. We are also starting to see collaboration between legal teams and technical personnel to establish standards that satisfy both regulatory requirements and commercial confidentiality concerns. This represents a novel intersection of intellectual property protection and regulatory transparency.
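
For illustration, the due diligence points above might be tracked as a simple vendor checklist. The items below are assumptions drawn from this article rather than an exhaustive or legally authoritative list, and the structure is a sketch only.

```python
# Illustrative only: a minimal vendor due-diligence checklist for procuring
# third-party GPAI services. Items are assumptions drawn from the points above,
# not an exhaustive or legally authoritative list.
AI_ACT_PROCUREMENT_CHECKLIST = {
    "contractual_representation_of_ai_act_compliance": False,
    "audit_rights_over_model_documentation": False,
    "incident_notification_procedure_agreed": False,
    "copyright_policy_for_training_data_disclosed": False,
    "allocation_of_provider_vs_deployer_responsibilities": False,
}


def open_items(checklist: dict[str, bool]) -> list[str]:
    """Return checklist items that still need to be addressed before signing."""
    return [item for item, done in checklist.items() if not done]


print("Outstanding due-diligence items:", open_items(AI_ACT_PROCUREMENT_CHECKLIST))
```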

Professional liability and technology errors and omissions policies may not adequately cover AI Act violations. Legal advisers should also work with insurance brokers to ensure appropriate coverage for regulatory fines and business interruption resulting from market surveillance actions.

The Brussels Effect accelerated

The AI Act represents the “Brussels Effect” in its most potent form: the EU’s ability to set global standards through regulatory leadership rather than market size alone. Just as GDPR established the global privacy baseline, the AI Act is positioned to define international AI governance norms.

This dynamic creates both opportunity and obligation for UK firms. Those that embrace AI Act standards early will find themselves advantaged in global markets where EU-style governance is increasingly expected by customers, insurers and business partners. Conversely, organisations that resist European standards risk exclusion from an expanding ecosystem of AI-compliant business relationships.

The geopolitical dimension also cannot be ignored. As US-EU tensions over technology regulation intensify, UK businesses occupy a potentially advantageous position as regulatory intermediaries. UK organisations that demonstrate mastery of European AI governance while maintaining technological innovation capabilities may find themselves uniquely positioned to serve as bridges between regulatory regimes.

Competitive advantage through compliance

UK organisations should view AI Act compliance not as a regulatory burden but as a competitive differentiator in an increasingly governance-conscious market. Companies that implement robust AI governance frameworks before enforcement begins will enjoy first-mover advantages in EU markets while building internal capabilities that provide competitive moats against less sophisticated competitors.

The Act’s transparency requirements, properly implemented, can serve as powerful marketing tools, demonstrating to customers and partners that an organisation takes AI safety and reliability seriously. The systematic risk assessment processes required by the Act will also improve overall AI deployment practices, reducing the likelihood of costly failures and enhancing system reliability.

AI governance frameworks designed to meet EU standards will generally exceed requirements in other jurisdictions, enabling organisations to scale globally with confidence in their compliance stance.

The dawn of regulated AI innovation

The EU AI Act’s second wave represents more than regulatory compliance: it marks the transition from unregulated AI experimentation to governed AI innovation. For UK businesses, this transition presents challenges but also unprecedented opportunities to establish leadership positions in an emerging market for trustworthy AI.

The organisations that thrive in this new environment will be those that recognise the Act not as an obstacle to innovation but as a framework for sustainable AI development. By embracing transparency, implementing robust governance and demonstrating regulatory sophistication, UK firms can position themselves as preferred partners in a global economy that increasingly values AI safety alongside AI capability.

The second wave has arrived. The question facing UK businesses is not whether to comply, but whether to lead. Those who choose leadership will shape the future of AI governance; those who resist may find themselves excluded from it.
