Three jurisdictions. Three answers. Zero consensus.
In the span of roughly three weeks in April 2026, the White House published a national AI policy framework, the UK government abandoned a proposed AI training data exemption under sustained pressure from publishers and creative industries, and the EU Parliament passed a resolution calling for mandatory copyright rules while giving the Commission until summer 2026 to respond. Each development, covered individually, looks like a policy update. Taken together, they represent a structural divergence in how the world’s three largest AI regulatory zones intend to handle the same underlying question: who controls the right to train AI on existing content, and on what terms?
The answer differs significantly by jurisdiction. Here is where each one stands.
The White House Position: Licensing Frameworks Over Legislation
The White House National AI Policy Framework, published between approximately April 17 and 22, 2026 according to registry documentation, reportedly takes a market-preference approach to AI copyright rather than a prescriptive legislative one, according to published legal analysis of the document. The framework reportedly advises Congress to refrain from immediate legislative action on fair use, favoring a “wait-and-see” posture while courts work through pending AI training data cases. It reportedly recommends establishing licensing frameworks that would allow AI developers and content owners to negotiate training data access on commercial terms, a structure that would give rights holders direct economic leverage without requiring Congress to define fair use boundaries for AI.
Two additional recommendations in the framework are worth tracking for rights holders. The framework reportedly calls for enabling collective negotiation rights for publishers and content organizations, allowing them to bargain with AI labs as a group rather than individually, a structure with precedent in music licensing but novel in the text and visual content context. It also reportedly includes a call for federal digital protections against unauthorized AI-generated voice and likeness replicas, an area where state-level legislation has proliferated in the absence of a federal standard, as documented in prior TJS analysis of the framework’s contested provisions.
These are recommendations, not enacted law. The framework’s status as a policy document rather than legislation means every element is subject to Congressional action, or inaction. As the conflict between the White House framework and the GUARDRAILS Act illustrates, Congress is not operating as a unified actor on AI policy.
The EU Position: Mandatory Rules, Pending Response
The EU’s trajectory on AI copyright is moving in the opposite direction from the U.S. In late April 2026, the European Parliament passed a resolution calling for mandatory AI copyright rules, specifically a requirement that AI developers obtain rights or pay compensation for training data use, and set a summer 2026 deadline for the Commission to respond. That Commission response is the nearest actionable decision point in the EU copyright track, and it will determine whether the EU moves toward mandatory licensing obligations or continues to rely on the existing text and data mining exceptions under the 2019 Digital Single Market Directive.
The DSM Directive’s Article 4 opt-out mechanism, which allows rights holders to explicitly reserve their content from AI training use, is currently the operative EU standard for publishers and content owners. The Parliament’s resolution reflects dissatisfaction with that opt-out model: rights holders have argued that technical compliance with opt-out protocols is burdensome, enforcement is unclear, and the mechanism does not address compensation for past training data use. The Commission’s summer 2026 response will signal whether mandatory rules are on the legislative calendar.
What this means for AI developers with EU operations: the opt-out mechanism is the current compliance standard, but it may not be the final one. Organizations that have not audited their training data practices against DSM Article 4 opt-out signals face potential exposure if mandatory rules are adopted retroactively or with short implementation windows.
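What an opt-out audit looks like in practice can be sketched in code. Two machine-readable signals a publisher might use to reserve rights are robots.txt disallows aimed at AI crawlers and the TDM Reservation Protocol’s `tdm-reservation` HTTP header. The crawler user-agent list below is an illustrative assumption, not a compliance standard, and this sketch is a starting point rather than legal advice:

```python
# Hedged sketch: detect two machine-readable rights-reservation signals
# relevant to DSM Article 4 opt-outs. The user-agent list is illustrative.
from urllib import robotparser

# AI crawler user-agents some publishers disallow (illustrative assumption).
AI_CRAWLER_AGENTS = ["GPTBot", "CCBot", "Google-Extended"]

def robots_optout(robots_txt: str, path: str = "/") -> dict:
    """Per user-agent, is crawling `path` disallowed by this robots.txt?"""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: not parser.can_fetch(agent, path)
            for agent in AI_CRAWLER_AGENTS}

def tdm_reservation(headers: dict) -> bool:
    """TDM Reservation Protocol: 'tdm-reservation: 1' reserves TDM rights."""
    return headers.get("tdm-reservation", "").strip() == "1"

sample_robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(robots_optout(sample_robots))  # GPTBot disallowed; others allowed
print(tdm_reservation({"tdm-reservation": "1"}))  # True
```

A real audit would fetch each source domain’s robots.txt and response headers at crawl time and log the result alongside the corpus record, since the legal question may turn on what signals were present when the content was collected.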
The UK Position: Retreat, Uncertainty, and Legal Notices
The UK’s copyright position is the most turbulent of the three. The UK government had proposed an exception to copyright law that would allow AI training on any lawfully accessed content, with rights holders permitted to opt out. That proposal drew sustained opposition from publishers, authors, visual artists, and music rights organizations. In late April 2026, the government retreated, declining to proceed with the training exemption in its originally proposed form, per reporting on the UK’s decision.
The retreat does not resolve the underlying legal uncertainty. Forty publishers have issued formal legal notices to AI companies regarding training data use, creating potential litigation exposure that exists regardless of what Parliament eventually legislates. The UK’s current copyright framework offers AI developers no training data safe harbor. In the absence of a legislative resolution, the question of whether AI training on UK-published content constitutes infringement is live litigation risk.
For AI developers with UK-published content in their training data, the government’s retreat means there is no near-term legislative clarity coming. The practical implication is that organizations relying on implied or assumed UK fair dealing rights for AI training are operating in legally contested territory.
The Comparison: Where the Three Regimes Agree, Where They Diverge
| Dimension | United States | European Union | United Kingdom |
|---|---|---|---|
| Training data rights | Reportedly recommends licensing frameworks; fair use question deferred to courts | DSM Article 4 opt-out operative; mandatory rules under consideration by Commission | No safe harbor; opt-out exemption proposal withdrawn; litigation risk active |
| Fair use / fair dealing | Reportedly recommends judicial deference, no legislative definition recommended | Not directly applicable; text and data mining exception under DSM Directive | Fair dealing defense for AI training is untested and contested |
| Digital replicas / likeness | Reportedly recommends federal protections; currently patchwork state law | Covered under AI Act deepfake disclosure rules; no standalone licensing framework | No dedicated digital replica framework; general intellectual property law applies |
| Publisher rights / collective negotiation | Reportedly recommends enabling collective negotiation frameworks | Parliament calling for mandatory rules; collective licensing precedent under DSM | 40+ publishers have issued formal legal notices; no collective framework enacted |
| Nearest action point | Congressional response to framework recommendations, timeline uncertain | EU Commission response to Parliament resolution, summer 2026 | UK legislative calendar unclear post-retreat; litigation timeline active |
| Enforcement risk | Low near-term; courts are the primary venue | Medium near-term; Commission response determines trajectory | High near-term; litigation notices from publishers are already filed |
What Organizations Must Track Before Any of This Becomes Actionable Law
None of the three jurisdictions has enacted final, binding AI copyright rules as of April 2026. That does not mean the compliance picture is static.
For publishers and rights holders: The EU Commission’s summer 2026 response to the Parliament resolution is the most consequential near-term decision point. A Commission proposal for mandatory licensing rules would put a legislative timeline on the table for the first time. Track EUR-Lex for Commission communications and proposed directives.
For AI developers with cross-border training data: The UK litigation track is the most immediate legal exposure. Formal legal notices from publishers are not regulatory guidance; they are precursors to infringement claims. Developers with significant UK-published content in training corpora should assess their legal position now, not when litigation is filed.
For organizations with U.S. policy exposure: The White House framework’s recommendations are inputs to a legislative process, not instructions. The framing of federal preemption versus state-level copyright-adjacent AI laws adds another layer of uncertainty for U.S.-focused compliance teams. The framework reportedly favors federal action, but Congress has not acted.
The deeper compliance challenge is this: for organizations operating across all three jurisdictions, there is no single standard to comply with, no harmonization on the horizon before summer 2026, and active litigation risk in one market while the other two work through policy processes. The working assumption that “we’ll comply once the rules are clear” is not a defensible posture when one of the three jurisdictions has active legal notices in circulation.
The question worth keeping on the agenda: if your training data governance policy was designed for one jurisdiction’s framework, does it actually address your exposure in the other two?