Britain made a choice. Not a consultation, not a proposal, a choice.
The UK Department for Science, Innovation and Technology published a report on March 18, 2026, laying it before Parliament under Section 136 of the Data (Use and Access) Act 2025. The report’s conclusion: the UK will pursue a Licensing-First approach to AI copyright. The previously proposed TDM opt-out model, which would have created a broad copyright exception allowing AI systems to train on protected works, is rejected.
For AI developers operating in the UK, this matters immediately. There’s no longer a proposed opt-out to plan around. The regulatory posture is market-led licensing. Developers are directed toward solutions like the Creative Content Exchange (CCE), a licensing infrastructure currently in its pilot phase, rather than toward a statutory exemption.
What the Licensing-First approach means in practice
The TDM opt-out model would have allowed AI systems to train on copyrighted material by default, with rights holders needing to actively opt out to block that use. The UK considered this approach and declined it. The Licensing-First framework inverts the dynamic: developers are expected to secure licensing arrangements rather than rely on a default permission.
The Creative Content Exchange is the vehicle the UK is pointing developers toward. The CCE is designed to create a functioning market where rights holders can license content for AI training at scale. Whether that market actually develops, and on what timeline, remains to be seen.
The EU divergence and what it means for cross-border operations
UK legal analysis from Slaughter and May identifies this pivot as part of the UK’s broader 2026 AI regulatory posture. The EU AI Act takes a different path, and its extraterritorial reach means UK businesses that operate in EU markets remain subject to EU rules regardless of what the UK decides at home.
That dual exposure is the operational reality for many UK-based AI companies. The UK has chosen market-led licensing. The EU has chosen regulatory compliance obligations. A company serving both markets needs two frameworks. That complexity doesn’t resolve itself; it requires deliberate legal and compliance structuring.
The UK has reportedly paired the regulatory shift with infrastructure commitments, including a Lanarkshire AI Growth Zone and a £500m Sovereign AI Unit described as operational as of March 2026. The intent appears to be signaling that the Licensing-First approach is pro-investment, not anti-development.
Forward-looking: Digital Replicas consultation
A consultation on “Digital Replicas”, addressing deepfakes, style imitation, and the potential creation of a new “Personality Right”, is reportedly planned for Summer 2026, according to analysis of the DSIT report. This would extend the UK’s regulatory attention beyond training data to AI-generated content that mimics real people. Watch for an official consultation launch announcement from DSIT.
What to watch
The CCE’s pilot phase is the near-term test. If a functioning licensing market develops at meaningful scale, the UK’s bet pays off and the Licensing-First model becomes defensible as a stable regulatory framework. If it doesn’t, whether because rights holders and developers can’t agree on terms or because the CCE fails to achieve critical mass, the UK will face pressure to intervene legislatively. The gap between policy announcement and market reality is where this story will play out.
TJS synthesis
The UK’s Licensing-First decision is a deliberate rejection of the regulatory approach. Where the EU created compliance obligations and enforcement mechanisms, the UK is betting on market incentives. That’s a meaningful philosophical difference with real operational consequences. AI compliance teams at companies operating across UK and EU markets can’t adopt a single framework; they need jurisdiction-specific strategies for training data, licensing, and content rights. The March 18 DSIT report is the document that makes that requirement concrete for UK operations.