Washington has produced two simultaneous federal positions on AI copyright. They point in opposite directions.
On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, a set of legislative recommendations that, per legal analyses, takes the position that AI training on copyrighted material would not constitute copyright infringement under proposed federal standards. The Framework also recommends Congress establish collective licensing mechanisms and supports judicial resolution of contested copyright cases. It does not itself create binding legal obligations. It asks Congress to act.
Two days earlier, on or around March 18, 2026, Senator Marsha Blackburn (R-Tenn.) circulated a discussion draft of the TRUMP AMERICA AI Act. Title XV of that draft states the opposite: unauthorized reproduction, copying, or computational processing of copyrighted works for AI training, fine-tuning, or development does not constitute fair use. Under the draft, AI-generated derivative works produced without copyright owner authorization would be deemed infringing. The bill has not been formally introduced as legislation. It also asks Congress to act, but toward the opposite result.
Two proposals. Two federal actors. Two incompatible answers to the same question.
What the conflict means in practice
Neither document is law. The White House Framework is a set of executive-branch recommendations to Congress. The Blackburn draft is a discussion document, not a formally introduced bill. A developer training on data scraped from copyrighted sources today is not protected by the Framework and not prohibited by the bill. They’re operating in the same legal environment that existed before either document appeared.
That said, the policy conflict matters. It signals where the political fault lines are forming inside a single administration. The Framework’s approach (training is permissible; resolve edge cases through courts and licensing) aligns with the interests of AI developers and frontier labs. Blackburn’s draft aligns with the interests of rights holders: publishers, musicians, visual artists, and studios whose content is being used as training data without compensation.
The fact that both proposals emerged within days of each other, from a Republican-controlled executive branch and a Republican senator with close ties to that administration, suggests the intra-administration consensus on AI copyright is not settled. It is contested.
What compliance teams should watch
The Framework also recommends Congress preempt state AI laws that impose “undue burdens,” while preserving state authority in defined areas. That preemption push is a separate issue from copyright, but it matters to the same audience. If Congress acts on the Framework’s preemption recommendation, California and other state-level AI regulations could be superseded. If Congress acts on the Blackburn bill’s copyright provisions instead, developers would face significantly higher legal exposure for training data practices that most of the industry currently treats as legally defensible.
The U.S. Copyright Office has published its own analytical framework on AI and copyright. It is a separate document from the White House Framework, with its own conclusions, and should not be read as an extension of the administration’s position.
TJS synthesis
The meaningful development here is not that either document exists. It’s that they emerged within 48 hours of each other, from actors who nominally belong to the same political coalition, and they contradict each other on the central question AI developers most need answered. Congress has not acted on either. Courts haven’t resolved the underlying fair use question. The practical result is that AI developers and rights holders are each holding a federal document that says they’re right, and neither document is law. Watch for the moment one of these proposals gains legislative traction. That’s when the policy conflict becomes a compliance decision.