Actors in all three branches of the U.S. government now hold distinct, competing positions on one of the most consequential questions in AI policy: what rules govern the use of copyrighted material to train AI systems. The White House issued its position on March 20. Senator Marsha Blackburn issued hers around March 18. The Supreme Court issued its own, by saying nothing, on March 2. None of them is binding law. All of them matter.
Understanding what each position actually says, and, critically, what none of them resolves, is now a core compliance task for legal teams, AI product managers, and developers making decisions about training data.
The Three Positions
The table below maps the core stances. The sections that follow explain why they don’t resolve each other and what that means for companies operating today.
| Actor | Position on AI Training & Copyright | Position on AI Authorship | Binding Authority | Status |
|---|---|---|---|---|
| White House | AI training on copyrighted material should not constitute a copyright violation; defer to courts; suggests nonmandatory collective licensing as an option | Not directly addressed | None, legislative recommendation to Congress | Released March 20, 2026; not enacted |
| Senator Blackburn | Unauthorized use of copyrighted works for AI training does not constitute fair use; federal enforcement proposed | Not directly addressed in confirmed provisions | None, discussion draft, no co-sponsors confirmed | Released ~March 18, 2026; discussion draft stage |
| Supreme Court | Not addressed, cert denial in Thaler v. Perlmutter (Case No. 25-449) did not concern training data | AI cannot be a copyright author under U.S. law; human authorship required | Cert denial upholds lower courts and Copyright Office, not a new ruling, but forecloses Supreme Court review | March 2, 2026, final at Supreme Court level, “at least for now” |
Why They Don’t Resolve Each Other
The natural instinct is to treat these three signals as pieces of a puzzle that, assembled, tell companies what the law is. That instinct is wrong.
The Supreme Court’s cert denial in Thaler v. Perlmutter addressed one narrow question: can AI be listed as an author on a copyright application? The answer, affirmed by non-action, is no. Holland & Knight attorneys described the denial as “the end of the road, at least for now” for efforts to establish AI-generated works as independently copyrightable. What the court did not address, and was never asked, is whether training an AI system on copyrighted material constitutes infringement. Those are distinct legal questions. The cert denial answers one; it is silent on the other.
The White House framework is a recommendation, not a statute. Cooley’s analysis describes it as “the most concrete statement yet of where the administration wants Congress to take federal AI policy,” released March 20, 2026, as a follow-through from a December 11, 2025 executive order. Until Congress acts on it, the framework has no legal effect on copyright claims.
Senator Blackburn’s discussion draft is further still from binding law. It has no confirmed co-sponsors in the sources reviewed for this analysis. A discussion draft is an invitation to negotiate, not a pending statute. Its copyright position, that unauthorized AI training does not constitute fair use (per music industry trade reporting on the Copyright Alliance’s response), is the direct opposite of the White House’s. These two federal signals are in conflict. Neither is law.
The Preemption Wildcard
The White House framework and the Blackburn proposals share one structural feature: both seek federal preemption of state AI laws. But they seek it in service of opposite substantive standards, and that distinction matters enormously.
The White House framework urges Congress to preempt state AI laws that “impose undue burdens,” while preserving state authority for laws protecting children, preventing fraud, safeguarding consumers, and exercising zoning authority. The framework’s preemption architecture is permissive toward AI development: clear the state-law thicket, establish a national standard, let companies operate under a single federal framework.
The Blackburn proposals, based on what can be verified from available sources, take a different posture on copyright specifically, one more protective of rights holders. If both frameworks are seeking federal preemption but disagree on the substantive standards that preemption would impose, the preemption race matters as much as the copyright debate itself. Whichever federal standard is enacted first becomes the floor that displaces state alternatives.
According to law firm analyses of the framework, the document appears to favor reliance on existing sector-specific regulators over establishing a new federal AI body. The Gibson Dunn analysis confirms preemption as a core legislative recommendation of the Trump administration’s proposed framework for federal legislation.
What Companies Are Operating Under Right Now
The answer is neither the White House framework nor the Blackburn draft. Companies are operating under existing copyright law as interpreted by lower courts, a body of case law that is actively developing, inconsistent across circuits, and unresolved at the Supreme Court level on the training question.
The Copyright Office’s human authorship standard has been upheld by the Supreme Court’s non-action. That baseline is clear: AI-generated works without human creative contribution are not eligible for copyright protection in the U.S. That question is settled, at the Supreme Court level, for now.
The training question is not settled. No federal statute addresses it. No Supreme Court ruling addresses it. The White House framework says training should not be treated as infringement; the Blackburn draft says it should. Until one of those positions becomes law, or until a circuit court split creates the pressure for Supreme Court review, companies are making training data decisions under legal uncertainty, not legal clarity.
The Copyright Alliance’s response is instructive. The organization welcomed both the White House framework and the Blackburn proposals despite their opposing copyright positions. That posture reflects a sophisticated stakeholder calculation: any federal engagement with AI copyright beats the current vacuum, regardless of which direction it runs. The Alliance is holding space in both conversations.
What to Watch
Four signals are worth tracking closely over the next 90 days.
Congressional committee activity on the Blackburn draft. A discussion draft without committee traction is a political statement, not a legislative trajectory. If the draft receives hearings or co-sponsors, the copyright conflict becomes a live legislative fight.
The “One Big Beautiful Bill” integration signal. Cooley’s analysis references the prior inclusion of White House AI framework language in broader budget reconciliation legislation. If the current framework’s preemption language appears in reconciliation or other must-pass vehicles, it could advance without a standalone AI bill. That is the faster path to binding law.
Copyright Office guidance updates. The Copyright Office’s human authorship guidance has been the operative standard throughout the Thaler litigation. Any new guidance on AI-assisted works or training data would shift the baseline without requiring congressional action.
Lower court activity on training data cases. Several cases involving AI training and copyright are working through federal courts. A circuit court ruling, particularly one that creates a split with another circuit, is the most likely trigger for a future Supreme Court grant on the training question.
TJS Synthesis
The most important thing to understand about the current U.S. AI copyright landscape is that the appearance of policy activity is not the same as legal resolution. All three branches of government have spoken. None of them has answered the question companies actually need answered: is training AI on copyrighted material legal under federal law?
The White House says it should be. Senator Blackburn says it shouldn’t. The Supreme Court hasn’t been asked. The gap between those positions is where every AI company building on commercial training data currently operates. Making decisions in that gap requires understanding not just what each position says, but what each position is and isn’t. A recommendation is not a law. A discussion draft is not a bill. A cert denial is not a ruling. Each of those distinctions is doing significant legal work right now, and the companies that track them precisely will be better positioned than those treating political signals as settled law.