Three jurisdictions. Three decisions. One month.
In March 2026, the global legal framework for AI and copyright moved in three separate directions at once. The United States Supreme Court closed the door on AI-only copyright claims. The United Kingdom rejected a regulatory opt-out and turned to market licensing. The Court of Justice of the European Union convened its first-ever oral hearing on whether large language models can lawfully train on protected content.
Each development makes sense on its own. Together, they reveal a structural problem for the AI industry: there’s no unified global answer to the copyright question, and every major jurisdiction is now making binding choices.
The U.S. position: settled law, open legislature
The Supreme Court’s March 2, 2026 denial of certiorari in Thaler v. Perlmutter (Case No. 25-449) didn’t introduce new law. It left existing law intact by declining to disturb it.
The case history is instructive. Stephen Thaler applied for copyright registration in 2018 for “A Recent Entrance to Paradise,” an image his Creativity Machine system generated autonomously. The Copyright Office rejected the application. The federal district court affirmed. The Court of Appeals affirmed. The Supreme Court declined to intervene. At each level, the answer was the same: U.S. copyright requires human authorship.
Morgan Lewis’s analysis of the denial identifies the practical effect clearly: the human authorship requirement is now settled federal appellate law. AI-generated outputs created without meaningful human creative contribution are not eligible for copyright protection under the current statutory framework.
The judicial system has spoken. Congress hasn’t. The White House’s National AI Legislative Framework and Senator Blackburn’s TRUMP AMERICA AI Act, introduced March 20 and March 18 respectively, both address AI and copyright from opposite directions. The White House framework reportedly treats AI training on copyrighted material as outside the scope of copyright violation and defers fair use questions to courts. Blackburn’s 291-page bill creates a federal right of publicity with, according to Fox Rothschild’s analysis, no fair use protection for unauthorized reproduction for AI training or inference.
That’s not a minor difference. The White House and the most prominent Senate AI legislation have reached opposite conclusions on the same fundamental question. Legal analysts note that the Supreme Court’s certiorari denial is not a ruling on the merits and leaves open the possibility of future challenges under different facts, particularly cases involving partial human contribution rather than fully autonomous generation. But those cases aren’t here yet. The current legal answer in the U.S. is clear, and Congress is the only actor positioned to change it.
The UK position: a market bet, not a regulatory mandate
The UK made a different kind of choice. On March 18, 2026, the Department for Science, Innovation and Technology published a report under Section 136 of the Data (Use and Access) Act 2025, establishing a Licensing-First approach to AI copyright.
The report explicitly rejects the TDM opt-out model, the approach that would have created a statutory exception permitting AI training on protected works by default. Instead, the UK is directing developers toward market-led licensing, with the Creative Content Exchange serving as the designated infrastructure for rights transactions at scale.
This is a philosophical wager. The UK government is betting that a functioning rights marketplace will emerge organically, that rights holders and developers can negotiate licensing terms without statutory compulsion, and that this openness will attract AI investment. The Licensing-First approach signals to the market: we’re open for business, but you need to pay for what you use.
UK legal analysis from Slaughter and May situates this pivot within the UK’s broader 2026 AI regulatory posture. The UK has reportedly paired the regulatory shift with infrastructure commitments including a Lanarkshire AI Growth Zone and, according to reports, a £500m Sovereign AI Unit described as operational as of March 2026. The investment signaling is intentional: this isn’t just a copyright policy, it’s a competition strategy.
Whether the CCE achieves sufficient market scale to validate the Licensing-First approach is the open question. If rights holders and developers can’t agree on commercially viable terms, the UK’s bet fails and the pressure for statutory intervention increases.
The EU position: fundamental questions, no ruling yet
The EU is asking the most fundamental question of the three, and it hasn’t answered it yet.
On March 10, 2026, the CJEU held what Bird & Bird describes as the first-ever oral hearing on generative AI and copyright in the court’s history. The case, Like Company v Google Ireland Limited (C-250/25), was brought by a Hungarian digital media company alleging that Google’s Gemini model extracted and displayed copyrighted news content for AI training without authorization between June 13, 2023 and February 7, 2024.
The hearing addressed the territorial scope of EU copyright and the legal framework for LLM development, according to Bird & Bird’s hearing report. These are exactly the questions that national courts across the EU have been unable to resolve consistently. The CJEU’s job is to provide a binding interpretive answer, one that applies in all 27 Member States.
Bird & Bird’s analysis indicates the hearing revealed sharply diverging positions among Member States, the European Commission, and the parties. That divergence is significant. The CJEU operates within a political context, and when Member States disagree, the court must resolve the legal question from the text and purpose of the directives rather than from political consensus. There is no published ruling date for C-250/25. The typical CJEU timeline, Advocate General opinion followed by chamber or Grand Chamber ruling, runs six months to over a year from the oral hearing.
A comparative map: three frameworks, one question
| Dimension | United States | United Kingdom | European Union |
|---|---|---|---|
| Copyright mechanism | Human authorship required by statute; AI-only works excluded | Licensing-First; market-led arrangements required | Under CJEU consideration; no ruling yet |
| Training data rule | No statutory exception; case law evolving (separate litigation) | TDM opt-out rejected; licensing required | CJEU C-250/25 addresses this directly |
| Enforcement path | Federal courts; Copyright Office | Market failure would trigger legislative review | CJEU ruling binds 27 Member States; AI Act obligations run in parallel |
| Legislative movement | White House framework + Blackburn bill, opposing approaches | DSIT report March 18, 2026 under DUAA 2025 | EU AI Act GPAI provisions active; CJEU ruling pending |
| Primary uncertainty | Congressional resolution of White House/Blackburn divide | CCE market development | CJEU ruling timeline and scope |
The compliance reality for cross-border operators
A company training a model on web-sourced data and deploying it in all three markets faces three distinct legal contexts simultaneously. In the U.S., the output ownership question is settled but the training data question isn’t. In the UK, licensing is required, and the infrastructure to do it at scale is still in pilot phase. In the EU, the court hasn’t ruled, but the GPAI provisions of the EU AI Act already impose transparency obligations on general-purpose model providers, with additional systemic-risk obligations for models trained above the 10^25 FLOP compute threshold.
None of these frameworks reference each other. None of them produce the same answer. The assumption that a unified global standard will emerge has been common in AI industry discussions, but March 2026 produced evidence that the opposite is happening. Three major jurisdictions made three different choices within weeks of each other. The divergence is accelerating, not resolving.
What to watch
In the U.S.: Congressional committee action on the White House framework and Blackburn bill. These bills take opposing positions on AI training and copyright; one of them, or neither, will advance. The White House’s position on fair use is the key watch point.
In the UK: The CCE’s pilot phase and whether a functioning licensing market emerges at scale. A Summer 2026 consultation on “Digital Replicas” is reportedly planned; this would extend UK regulatory attention to AI-generated content that mimics real people.
In the EU: The Advocate General’s opinion in C-250/25, which will precede the CJEU’s ruling and provide an early read on the court’s direction. Also watch for national court decisions in Member States that may reference the CJEU referral while awaiting the final ruling.
TJS synthesis
The compressed timeline is what makes March 2026 significant. These decisions didn’t arrive in isolation over years; they arrived within weeks of each other, in the middle of active legislative debates, while major AI labs are actively making training data and licensing decisions. The AI industry has been operating on the assumption that copyright law would either stabilize or be clarified by a single dominant framework. March 2026 suggests the opposite: the frameworks are diverging, and compliance teams need jurisdiction-specific strategies, not a single global answer. The company that treats this as a unified legal question is building on a false premise.