There’s a policy decision that governments don’t have to announce. They can simply not act, and the result is the same as choosing: someone else makes the rules.
In the space of one reporting cycle, the UK and US governments both chose not to legislate on AI training data copyright. The UK abandoned its proposed text and data mining (TDM) exception with an opt-out mechanism in March 2026, after sustained opposition from publishers. The US National Policy Framework, according to legal analysts at Baker Botts and Hunton Andrews Kurth, appears to favor leaving AI fair use questions to judicial resolution rather than congressional legislation, though those analysts note the interpretation is inferential. Same outcome, different path. In both countries, the courts are now the primary venue for establishing AI copyright law.
This is not a neutral outcome. It’s a structural choice with predictable consequences, for AI developers, for publishers, and for the timeline over which legal clarity emerges.
Section 1: The Government Retreat, Two Jurisdictions, One Pattern
The UK’s TDM exception would have created a statutory safe harbor: AI developers could train on copyrighted content unless rights holders explicitly opted out. The publishing industry opposed the opt-out mechanism as unworkable and insufficient. The government’s March 2026 reversal reflects that pressure. Without the exception, AI developers training on UK-sourced content have no statutory safe harbor. They operate in legal uncertainty that can only be resolved by licensing agreements or court decisions.
The US situation is structurally parallel but arrived differently. The White House’s March 2026 National Policy Framework does not create a safe harbor for AI training. Legal analysts interpret the framework as preferring judicial resolution of fair use questions, meaning the existing Section 107 fair use doctrine applies to AI training, and courts will determine how. The framework appears, according to legal analysis, to recommend against congressional legislation on this question. The inference is that the administration believes the existing legal framework, applied by courts to AI facts, is preferable to new legislation. That’s not a neutral observation: it means rights holders and AI developers will fight over fair use in federal courts for years.
The common thread: In both countries, the government had an opportunity to write the rules and chose not to. The stated or implied rationale in both cases is that the legal questions are too complex, the stakeholder interests too contested, or the technology too fast-moving for legislative precision. The practical result is the same: courts inherit the rulemaking function.
Section 2: The Litigation Wave, What’s Actually Filed and What It Means
In the UK, the action is immediate and coordinated. A coalition of approximately 40 UK independent publishers sent formal letters of claim to major AI developers, according to Lewis Silkin’s reporting. Letters of claim are the formal procedural precursor to litigation under UK civil procedure: they set out the legal basis for the claim and require a response. AI developers receiving these letters face a decision: respond with a legal defense, negotiate toward a licensing agreement, or ignore the letter and accelerate toward filed proceedings.
The coalition structure matters. Forty publishers acting together, if that number holds in any eventual proceedings, changes the legal and economic dynamics compared to individual publisher actions. Coordinated litigation creates shared legal costs, builds a unified evidentiary record across claimants, and signals that the publishing sector has organized itself for a sustained campaign rather than individual opportunistic actions. It also increases the likelihood of a licensing market outcome: when rights holders are coordinated, licensing negotiation becomes a credible alternative to protracted litigation for all parties.
In the US, the litigation landscape is more diffuse. Multiple copyright cases involving AI training data are proceeding through federal courts, covering different aspects of the fair use question: whether training itself constitutes fair use, whether serving outputs substantially similar to training data infringes, and how to assess market harm when the use is claimed to be transformative. The National Policy Framework’s preference for judicial resolution means those cases proceed without legislative intervention.
Section 3: The Licensing Market in Transition
Legal analysts at Lewis Silkin project that the UK AI licensing market could double by year-end 2026. That forecast comes from a single source and has not been independently verified; it should be treated as a directional signal rather than a precise prediction. The logic behind the direction is sound: when a statutory safe harbor disappears, licensing becomes the cleaner path to legal certainty for AI developers who want to operate in the UK market without litigation exposure.
Rights holders know this. The coalition action looks, in strategic terms, like an opening position in a licensing negotiation as much as a litigation threat. Letters of claim put developers on notice of the legal exposure; that notice creates the incentive to license rather than litigate. If the strategy works, the outcome isn’t a series of court decisions; it’s a licensing market that emerges under legal pressure and sets rates that reflect rights holders’ leverage.
The parallel with music and film licensing markets is instructive but imperfect. Those markets developed over decades, with established royalty collection societies, statutory rate-setting mechanisms, and blanket licensing frameworks. No equivalent infrastructure exists for AI training data. Building it, if that’s where this goes, would require industry-wide coordination, which the coalition action may be designed to catalyze.
Section 4: Risk Exposure for AI Developers
For AI developers with UK market exposure, the near-term risk is concrete: receiving a letter of claim requires a legal response. Beyond that immediate procedural requirement, developers need to assess training data provenance. Which UK-sourced or UK-authored content was used in training the models they’re currently serving? That question is harder to answer than it sounds, particularly for large language models trained on internet-scale datasets where provenance documentation is inconsistent.
The EU offers a different posture that’s worth noting as a third data point. The EU’s existing text and data mining exception under the DSM Directive, Article 4, permits TDM of lawfully accessed content for commercial purposes, with an opt-out mechanism that operates differently from the UK’s abandoned proposal. EU-based AI developers, or developers licensing training data from EU sources, have a statutory framework the UK and US currently lack. That differential creates a compliance geography: training data sourcing decisions now carry jurisdiction-specific legal implications.
Section 5: What a Court-Driven Copyright Framework Actually Looks Like
Court-driven legal development has predictable characteristics. It’s slow: major copyright cases take 3-7 years from filing to final appellate resolution. It’s fragmented: different cases produce different precedents, in different circuits, on different facts, before converging toward coherent doctrine. It’s retroactive: the law that emerges from litigation applies to past conduct, which means developers are operating under uncertainty during the period when the rules are being made.
It also produces different incentives than legislation. Courts respond to litigants’ arguments, not to policy objectives. A court that finds AI training infringes copyright isn’t trying to shape the AI industry; it’s applying the law to the facts before it. The policy implications of that decision are someone else’s problem.
The scenarios worth modeling: a court decision that training on copyrighted content without license constitutes infringement would immediately increase licensing demand and accelerate the formation of a licensing market. A court decision that training is fair use would vindicate developers’ current practices but wouldn’t prevent rights holders from pursuing other legal theories (output similarity, market harm). A settlement before major decisions are reached could produce voluntary licensing frameworks that shape the market without establishing binding legal precedent.
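To make the modeling concrete, here is a toy expected-cost comparison between licensing now and litigating. Every figure is a placeholder assumption for illustration, not data from any source, and the structure (certain fees plus discounted expected damages) is a deliberate simplification of real litigation economics.

```python
# Toy decision sketch, not legal or financial advice.
# All probabilities and dollar amounts are hypothetical placeholders.

def expected_litigation_cost(p_infringement, damages_if_lose, legal_fees,
                             years, discount_rate=0.05):
    """Expected present cost of litigating: fees are certain, damages are not."""
    expected_damages = p_infringement * damages_if_lose
    # Discount the eventual damages back over the litigation timeline.
    return legal_fees + expected_damages / ((1 + discount_rate) ** years)

license_cost = 10_000_000           # hypothetical cost of licensing now
litigate_cost = expected_litigation_cost(
    p_infringement=0.4,             # assumed chance a court finds infringement
    damages_if_lose=50_000_000,     # assumed damages plus retroactive licensing
    legal_fees=8_000_000,           # assumed defense costs over the case
    years=5,                        # within the 3-7 year range cited above
)
print(f"license: {license_cost:,.0f}  litigate (expected): {litigate_cost:,.0f}")
```

Under these particular assumptions litigation is the more expensive path, but the sketch’s real use is sensitivity analysis: small changes to the assumed probability of losing or to the timeline can flip the comparison, which is why uncertainty itself pushes parties toward licensing or settlement.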
TJS Synthesis
When governments step back from writing the rules, they don’t eliminate the rulemaking; they redistribute it. In AI copyright, the redistribution is to courts and to coordinated rights holder action. For AI developers, the strategic implication is that legal clarity on training data copyright is years away through litigation, and the path to near-term operational certainty runs through licensing. For rights holders, the litigation-first approach creates leverage but also uncertainty: a settlement produces certainty faster than a court decision, but settlement terms are shaped by the strength of each party’s legal position. For the AI industry as a whole, the absence of legislative frameworks in two major common-law jurisdictions means that the legal infrastructure for AI training data markets will be built case by case, negotiation by negotiation, over a decade or more. The companies that treat training data provenance as a compliance priority now, rather than a legal risk to manage later, will be better positioned for whatever that decade produces.