Three documents. Three answers. One unresolved question.
Washington has not produced a coherent AI copyright policy. It has produced three of them, each from a different federal actor, each with a different theory of what the law is or should be, and none of them with the force of enacted law. For AI developers and rights holders, that’s not a technicality. It’s the operating environment.
This deep-dive maps each position, identifies the stakeholders who benefit from each outcome, and draws practical conclusions for the compliance teams and legal departments who need to act before Congress does.
The Conflict, Laid Out
The three documents in play are distinct. They should not be conflated.
The White House National Policy Framework for Artificial Intelligence, released March 20, 2026, is a set of executive-branch legislative recommendations. It does not create law. It asks Congress to act. Per legal analyses of the Framework, the administration’s position is that AI training on copyrighted material would not constitute infringement under proposed federal standards. The Framework also recommends Congress establish collective licensing mechanisms and supports judicial resolution of contested copyright cases.
The TRUMP AMERICA AI Act is a discussion draft circulated by Senator Marsha Blackburn (R-Tenn.) on or around March 18, 2026. It has not been formally introduced as legislation. Under Title XV, the draft states that unauthorized reproduction, copying, or computational processing of copyrighted works for AI training, fine-tuning, or development does not constitute fair use. Legal analyses of the draft further indicate that AI-generated derivative works produced without copyright owner authorization would be deemed infringing under Title XV.
The U.S. Copyright Office has published a separate analytical report on AI and copyright. This is not an administration document. It is not part of the White House Framework. It represents the Copyright Office’s own legal analysis and carries independent institutional weight. Where the Office’s conclusions differ from either the Framework or the Blackburn bill, those differences are substantive: they reflect a separate analytical institution’s reading of existing law, not a policy recommendation.
Three actors. Three documents. Zero enacted law.
What the White House Framework Actually Proposes
The Framework’s copyright position fits within a broader architecture. It recommends that Congress preempt state AI laws that impose “undue burdens” while preserving state authority in defined areas, a significant structural proposal that the same compliance teams tracking copyright exposure need to understand. Greenspoon Marder’s analysis describes the Framework as establishing “a consolidated federal framework intended to supersede the expanding patchwork of state artificial intelligence regulations.”
On copyright specifically, the Framework recommends against establishing a new federal AI regulator, favoring oversight through existing sector-specific agencies. This signals the administration’s preference for working within existing legal architecture, including existing copyright doctrine, rather than creating new enforcement mechanisms. The collective licensing recommendation is the most constructive proposal in the Framework on copyright: it would create a system through which rights holders receive compensation without litigation, and through which developers gain legal clarity on training data use.
That said: the Framework is not law. It is a legislative recommendation from an executive branch to a Congress that may or may not act on it. The collective licensing framework exists in concept only.
What the Blackburn Bill Would Do
The TRUMP AMERICA AI Act takes a different path. Its framework centers the interests of rights holders. Title XV’s fair use clarification is direct: training on copyrighted material without authorization is not protected by fair use. If enacted, that provision would retroactively expose years of industry-standard training data practice to copyright liability.
The derivative works provision compounds this. Legal analyses of the discussion draft indicate that AI-generated outputs produced without copyright owner authorization would be deemed infringing. That covers not just the training process but the outputs themselves, meaning a developer could face liability both for training a model and for deploying it.
The bill’s scope is broad: children, intellectual property, platform accountability. The copyright provisions are its most consequential for AI developers. The bill is framed around Senator Blackburn’s four stated focus areas, not around the technical architecture of AI development. That framing matters: it suggests the bill’s copyright provisions were designed to appeal to a political coalition (creators, conservatives) rather than to resolve a technical legal question.
The bill is a discussion draft. Its procedural status is early. But discussion drafts that attract co-sponsors move. Watch for co-sponsorship announcements as the leading indicator of legislative trajectory.
Stakeholder Map: Who Benefits From Each Outcome
The three positions map onto identifiable stakeholder groups.
If the White House Framework’s position prevails through legislation: AI developers and frontier labs gain legal clarity that training on publicly available data is permissible under federal law. Collective licensing provides rights holders a revenue mechanism, but without the veto power that a “training requires authorization” standard would give them. Courts are relieved of having to resolve the question. The creative economy loses the leverage that copyright infringement claims provide.
If the Blackburn bill’s Title XV provisions are enacted: Rights holders (publishers, music labels, studios, independent artists) gain explicit statutory authority to demand compensation for training data use and to challenge AI outputs as derivative works. AI developers face significant operational disruption: training data clearance at scale is not currently feasible for most organizations, and derivative works liability for model outputs would require architectural changes to deployment pipelines. Frontier labs with the legal resources to structure licensing deals weather this better than smaller developers.
If the Copyright Office’s analytical framework shapes judicial outcomes (without new legislation): Existing fair use doctrine applies on a case-by-case basis. Courts evaluate the transformative nature of AI training, the commercial purpose, and the effect on the market for the original work. This path provides no systemic certainty. Developers in different circuits could face different outcomes on identical facts. Rights holders with resources to litigate have leverage; those without do not.
If Congress acts on neither: The status quo persists. Ongoing litigation in federal courts continues to develop the case law. Developers operate under legal ambiguity. Rights holders continue pursuing licensing deals and infringement suits. Both sides prefer legislative certainty; they disagree sharply on what that certainty should say.
What Compliance Teams and Developers Should Do Now
Neither document is law. That is the first, most important fact for any legal or compliance team assessing AI training data exposure.
Acting as though the White House Framework is law overstates developer protection. Acting as though the Blackburn bill is law overstates developer exposure. Both errors lead to bad decisions.
What compliance teams can reasonably do now:
Document training data sourcing practices in detail. If litigation or legislation requires a defense of training data choices, the organization that has records of what data was used, when, and under what license terms is in a better position than one that does not. This is good practice regardless of which legislative path prevails.
Evaluate collective licensing exposure. The Framework’s recommendation for collective licensing mechanisms signals that some form of rights-holder compensation for AI training may become a federal requirement. Organizations that have already engaged with licensing frameworks, through licensing deals, direct partnerships with content owners, or structured data acquisition, are better positioned if collective licensing becomes mandatory.
Monitor the Blackburn bill for co-sponsorship. A discussion draft with no co-sponsors is political signaling. A discussion draft with a dozen Senate co-sponsors is a legislative threat. The co-sponsorship trajectory over the next 60 days is the leading indicator of whether Title XV’s provisions have real legislative momentum.
Do not conflate the Copyright Office’s analysis with the administration’s position. They are separate documents, from separate institutions, with independent analytical conclusions. Citing the Copyright Office as evidence of administration support for a “training is permissible” position is a factual error.
TJS Synthesis
The policy conflict Washington has created is not accidental. It reflects a genuine political division about who bears the cost of AI development: the technology industry, which has built on the assumption that training data use is legally defensible, or the creative economy, which is watching its work processed at scale without compensation. Both constituencies have political weight inside the current administration. The Framework reflects the tech industry’s preferred outcome. The Blackburn bill reflects the creative industry’s preferred outcome. Congress has not chosen.
The implication for developers and compliance teams is not paralysis. It is precision. Know which legal theories favor your current practices, which threaten them, and which institution is most likely to produce binding law first: Congress through legislation, or the federal courts through copyright litigation. Right now, the courts are moving faster than Congress. That means the Copyright Office’s analytical framework, and the ongoing fair use cases working through federal dockets, may produce the first durable precedents. Legislation, if it comes, will arrive later. Compliance strategy should account for both paths.