The week of March 18, 2026 produced something the AI policy space rarely gets: two serious federal AI governance proposals from the executive and legislative branches, released within two days of each other, covering largely the same ground, and disagreeing on the question that matters most.
Understanding where they agree and where they split is the first analytical task. The second is understanding what the courts have already settled, so the legislative debate can be read against an accurate judicial baseline. The third, the one that matters for anyone building AI systems or holding rights to content those systems might train on, is what the divergence actually means in practice.
Section 1: The Legislative Moment
The White House released its National AI Legislative Framework on March 20, 2026, pursuant to President Trump’s Executive Order of December 11, 2025. It is four pages. It is a principles document addressed to Congress, a statement of what the administration wants the legislative branch to do, not a bill with operative text.
Sen. Marsha Blackburn (R-TN) had already moved. Her TRUMP AMERICA AI Act discussion draft arrived March 18, nearly 300 pages of draft legislative text organized around what Blackburn’s office calls the “4 Cs”: children, creators, conservatives, and communities.
Both documents advocate federal preemption of state AI laws. Both oppose creating a new federal AI regulatory body. Both frame their approach as innovation-first. On those structural questions, the White House and the senator are largely aligned.
The alignment ends at copyright.
Section 2: The Copyright Fault Line
| Dimension | White House Framework | Blackburn Bill (TRUMP AMERICA AI Act) |
|---|---|---|
| Federal preemption of state AI laws | Urges Congress to adopt | Proposes to establish |
| Copyright / AI training question | Defers to courts; ongoing judicial resolution | Would specify AI training on copyrighted works is not fair use (per early legal analyses) |
| New federal AI regulatory body | No; existing agencies handle oversight | No (per available analyses) |
| Document length / form | 4 pages; principles document | ~300 pages; discussion draft |
| Binding status | Legislative recommendation | Discussion draft; no committee action yet |
The White House’s position on copyright is explicit. Sullivan & Cromwell’s analysis of the Framework confirms that the administration document leaves the question of whether AI training on copyrighted content constitutes infringement or qualifies as fair use to ongoing judicial resolution, rather than recommending legislation. The White House is saying: the courts are handling this, Congress shouldn’t intervene yet.
Blackburn’s draft goes the other direction. According to early legal analyses of the draft, it would specify that training AI models on copyrighted works without authorization does not qualify as fair use. That’s a direct legislative answer to the question the White House declined to answer. The copyright provision is drawn from legal analyses of draft text, not from directly verified bill language, and it must be understood as an interpretation of the discussion draft’s current form, subject to revision before any committee action.
That distinction matters. The White House defers. Blackburn proposes to codify. Those approaches produce entirely different compliance environments for every company currently training AI models.
Section 3: What the Courts Have Already Settled
Before analyzing the legislative debate, the judicial baseline needs to be accurate, because both frameworks are being drafted against it.
On March 2, 2026, the U.S. Supreme Court denied certiorari in Thaler v. Perlmutter (Case No. 25-449). Dr. Stephen Thaler had spent years attempting to register copyright in “A Recent Entrance to Paradise,” a visual artwork he stated was autonomously created by his AI system, the Creativity Machine, which he listed as the sole author. The Copyright Office denied the registration. The D.C. Circuit affirmed. The Supreme Court declined to hear the appeal.
What this settled: AI cannot be listed as the author of a copyright registration under current law. The human authorship requirement stands.
What this did not settle: whether training an AI system on copyrighted works constitutes fair use or infringement. That is a different legal question, and the Thaler cert denial doesn’t touch it. The Thaler case asked who can own a copyright after the work is made. The training data question asks whether making the AI in the first place violated anyone’s copyright. Two questions; one answer so far.
Holland & Knight’s analysis characterizes the denial as effectively closing this particular avenue “at least for now.” That is attorney analysis, not a court statement, but it accurately captures the practical effect. The “for now” carries real meaning: future cases built on different constitutional grounds, human-AI collaboration theories, or novel legal arguments remain possible. Nothing in the cert denial forecloses them.
Section 4: The Stakeholder Map
Three groups are watching the White House–Blackburn copyright divergence, and they’re watching it for different reasons.
*AI developers and deployers.* The White House’s judicial-deferral approach is the more comfortable posture for companies currently training models. Ongoing judicial resolution means existing uncertainty continues, which is not the same as permission, but it isn’t a statutory prohibition either. The Blackburn bill’s proposed “not fair use” codification would, if enacted and sustained, create explicit statutory liability exposure for training practices that companies have pursued on the assumption that the fair use question remains open. For developers making training data decisions today, the legislative trajectory matters more than any single document.
*Rights holders, content creators, publishers, musicians.* The Blackburn bill’s explicit copyright provision is the stronger protection from their perspective. The White House’s deferral approach continues the uncertainty that has driven litigation and, from rights holders’ vantage point, permitted uses they contest. Blackburn’s “4 Cs” framing, with creators as one of the four, is not accidental. The bill is explicitly oriented toward protecting creator interests in ways the White House framework declines to address directly.
*The courts.* The Thaler cert denial shows the judiciary resolving what it can with existing legal tools: the authorship question under copyright’s human-creativity requirement. The training data fair use question is being actively litigated in federal courts, and the outcome of those cases has real bearing on whether Congressional action becomes more or less urgent: a clear judicial ruling on training data fair use could either resolve the legislative question or sharpen it, depending on which direction the ruling goes. The Filter has identified Thomson Reuters v. Ross Intelligence as a case relevant to this context, but Builder cannot verify current case status from training data alone; operators should confirm the current posture of that litigation before including it in distributed materials. That verification matters because a case that settled or produced a ruling changes the analysis.
Section 5: Three Signals to Watch
1. Whether the Blackburn discussion draft advances to committee markup. A discussion draft is not a bill. Committee markup would be the first signal that the copyright provision has enough support to survive the legislative process. Watch for Senate Judiciary Committee scheduling.
2. Whether the White House framework’s copyright-deferral stance survives Congressional negotiation. The administration’s preference for judicial resolution may not survive if Congressional negotiators, particularly those aligned with creators and rights holders, insist on a legislative answer. The four-page principles document will eventually need to become bill text, and that translation process is where the copyright question will be relitigated.
3. Whether training data litigation produces a ruling before Congress acts. A definitive federal court ruling on AI training fair use would either make Blackburn’s codification redundant (if courts rule against fair use) or give it more urgency (if courts rule in favor, and rights holders push Congress to override). The legislative and judicial tracks are running in parallel. They may converge.
These are proposals. Not law. The compliance posture question right now is how to plan when the legislative outcome is genuinely uncertain, a question our earlier analysis of these frameworks addresses directly.
TJS synthesis: The White House and Blackburn frameworks agree on enough to make federal AI preemption look increasingly likely over the next legislative cycle. But the copyright fault line reveals a genuine disagreement, between deferring to courts and legislating directly, that reflects a deeper tension about who the federal AI framework is primarily designed to protect. The White House’s innovation-first framing treats training data access as infrastructure; Blackburn’s creator-protection framing treats it as a property rights question. Congress will eventually have to choose. When it does, it will be choosing between two models of what the AI economy is for. Companies building AI systems today should be stress-testing their training data practices against both scenarios, not because either document is law, but because one of these approaches, or a negotiated version of them, likely will be.