Training an AI model on text scraped from the web is a single act, yet it is treated differently under three major regulatory frameworks that together cover a substantial share of the global AI market. That’s not a theoretical problem. It’s the operating reality for any company building or deploying AI in the US, UK, or EU as of March 2026.
Two developments this week sharpened the divergence. On March 20, the White House released its National AI Legislative Framework. On March 18, the UK’s Department for Science, Innovation and Technology published its report on AI and copyright. Both address the same underlying question. Neither answers it the same way.
The US Position: Developer-Favorable, Court-Dependent
The White House framework names copyright as one of several stated objectives, a point confirmed by DLA Piper’s analysis of the document. The framework frames the challenge as “protecting copyright holders while balancing AI developer needs,” a formulation that names the tension rather than resolving it.
According to Gibson Dunn’s legal analysis of the framework, the copyright provisions appear to suggest that training AI models on copyrighted material does not violate copyright law. This is an interpretation of the document, not a direct quote from it. Multiple law firms have reached similar readings independently. When experienced IP counsel converge on the same interpretation, the policy signal has weight even without statutory text to back it.
The practical implication: the US framework does not tell developers they need licenses. It signals that the courts are the appropriate venue for resolving training data disputes, and, by implication, that training on publicly available content is defensible. Per AP News reporting on the framework, the administration’s intent is for Congress to act on AI regulation, but until legislation passes, this is guidance, not law.
One complication: the Wire has flagged a reported tension between the framework’s copyright position and at least one separately proposed Congressional measure. That conflict is unverified from available source text and should be tracked rather than relied upon as confirmed. The legislative picture in the US is unsettled. The framework signals direction. Congress determines destination.
The UK Position: Licensing-First, Developer-Obligating
The UK’s approach reverses the default. The DSIT report, laid before Parliament under Section 136 of the Data (Use and Access) Act 2025, formally rejects the “TDM opt-out” model that had been under consideration. Per reporting confirmed by AI SaaS Writer and corroborated by Birketts’ legal update on the March 2026 report, the UK is now pursuing a Licensing-First framework.
Where the opt-out model placed the burden on rights-holders to exclude their work, Licensing-First places the burden on developers to secure licenses. The Creative Content Exchange, referenced in the DSIT report as a pilot initiative, is the infrastructure play: a market mechanism designed to enable negotiations between rights-holders and developers.
The policy signal is unambiguous. Commercial AI developers operating in the UK, training on UK-originating content, or deploying models in the UK market are expected to seek licenses. Whether that expectation carries current legal force is a different question; the DSIT report is not enacted legislation. But in a Licensing-First environment, “we relied on the opt-out that doesn’t exist” is not a defensible compliance position.
The report is also reported to include plans for a future consultation on digital replicas and a potential personality right, with analysts citing a Summer 2026 timeline. This track, covering deepfakes and style imitation, is separate from the training data question. It signals that the UK’s IP and AI framework will continue evolving.
The EU Position: Transparency Requirements, Third Path
The EU AI Act takes a third approach, established in prior coverage of the Act’s operational phase. Rather than deferring to the courts (US) or requiring market licensing (UK), the EU imposes transparency obligations on providers of general-purpose AI models: they must publish sufficiently detailed summaries of the training data used to allow rights-holders to assess whether their content was included.
This transparency requirement doesn’t prohibit training on copyrighted material. It doesn’t mandate licenses. It requires disclosure, and creates the informational foundation for rights-holders to pursue enforcement through existing channels. It’s a compliance layer, not a licensing framework or a court-deference regime.
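The EU obligation is a disclosure obligation, and nothing in the sources above prescribes a machine-readable schema for the required summaries. As a purely illustrative sketch (the `TrainingSourceSummary` class and its field names are my own assumptions, not anything mandated by the AI Act), a GPAI provider might maintain per-source records that can be published in a form rights-holders can search:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingSourceSummary:
    """One entry in a training-data summary for a GPAI model.
    Field names are illustrative, not a mandated schema."""
    source_name: str        # e.g. a crawl, corpus, or licensed dataset
    domains: list           # top-level domains or collections covered
    content_types: list     # e.g. "text", "images", "code"
    collection_period: str  # when the data was gathered
    licence_basis: str      # e.g. "licensed", "publicly available", "unknown"

def publish_summary(sources):
    """Render the per-source records as a JSON document that a
    rights-holder could scan for their own content."""
    return json.dumps([asdict(s) for s in sources], indent=2)
```

The design point is that disclosure granularity is a choice the provider must defend: the record above works at the dataset level, while a rights-holder assessing inclusion of specific works may push for domain- or item-level detail.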
The EU’s high-risk AI system deadlines are also in flux. A preliminary political agreement reached on March 18, 2026 would push Annex III compliance to December 2, 2027, per IAPP reporting. A full plenary vote is expected March 26. Until that vote is confirmed, current EU AI Act deadlines remain operative.
Comparative Framework
| Jurisdiction | Default Position | Mechanism | Compliance Obligation | Status |
|---|---|---|---|---|
| US | Developer-favorable | Court resolution | No current licensing requirement | Legislative framework, not enacted law |
| UK | Rights-holder expectation | Market licensing (CCE pilot) | Licensing expected for commercial developers | Government report, not enacted law |
| EU | Transparency obligation | Disclosure requirements | Training data summaries required for GPAI models | Enacted, implementation ongoing |
What Cross-Border Developers Must Do Now
None of these frameworks, taken individually, delivers a clean answer. Together, they create a compliance environment where the question “can we train on this content?” has a different answer depending on jurisdiction, and where a company with operations in all three markets faces conflicting signals simultaneously.
Several observations follow from the verified facts above. First, data provenance documentation is now a cross-jurisdiction asset, not a nice-to-have. Whether a regulator asks for transparency (EU), a court evaluates training data legality (US), or a licensing negotiation requires proof of what was trained on (UK), the ability to answer “what did you train on and where did it come from” is the baseline capability that enables every other response.
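One concrete way to operationalize that baseline capability is a per-dataset provenance ledger that maps each record to the question each regime would ask of it. The sketch below is hypothetical (the `ProvenanceRecord` fields and the `open_questions` mapping are illustrative assumptions mirroring the comparative table above, not a legal test or a compliance determination):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvenanceRecord:
    """Minimal per-dataset provenance entry. Fields are illustrative."""
    dataset_id: str
    origin: str                 # where the content came from
    acquired: str               # acquisition date (ISO format)
    licence: Optional[str]      # licence reference, or None if unlicensed
    uk_origin_content: bool     # flags exposure to the UK Licensing-First regime

def open_questions(rec: ProvenanceRecord) -> list:
    """Map one record to the question each jurisdiction raises.
    A documentation aid, not legal advice."""
    qs = [f"EU: disclose '{rec.dataset_id}' in the GPAI training-data summary"]
    if rec.licence is None:
        qs.append(f"US: document the defensibility basis for '{rec.dataset_id}'")
        if rec.uk_origin_content:
            qs.append(f"UK: no licence on record for UK-originating '{rec.dataset_id}'")
    return qs
```

Note how the same record generates different obligations per market: the EU question arises regardless of licence status, while the US and UK questions only attach to unlicensed material.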
Second, the UK’s Licensing-First environment, even in its current non-statutory form, changes the risk profile for training on UK-originating content without a license. The gap between “not yet illegal” and “expected to have a license” is narrowing. The CCE pilot timeline and any formal legislative follow-on are the milestones to watch.
Third, the US framework’s court-deference stance provides cover only while current litigation trends hold. Pending AI training copyright cases in US federal courts will refine what “doesn’t violate copyright law” actually means in practice. The framework signals a position; the courts will test it.
For legal and compliance teams with cross-border exposure: the three-jurisdiction divergence documented this week is not a temporary anomaly. It reflects different underlying policy philosophies: the US prioritizes innovation flexibility, the UK prioritizes rights-holder market power, and the EU prioritizes systemic transparency. Those philosophies don’t converge quickly. The compliance challenge this week is also the compliance challenge next year.
This is a developing area. Prescriptive legal advice requires qualified IP counsel with jurisdiction-specific expertise. The analysis above identifies the landscape; navigating it requires human legal authority.