The Supreme Court doesn’t always speak loudest when it rules. Sometimes it speaks by declining to rule at all.
On March 2, 2026, the Court denied certiorari in Thaler v. Perlmutter, Case No. 25-449, refusing to take up the question of whether AI can be listed as the author of a copyrighted work. The case stemmed from Dr. Stephen Thaler's application to register copyright in a visual artwork he said was created autonomously by his AI system, the Creativity Machine. The Copyright Office denied the application. The courts upheld the denial. The Supreme Court declined to intervene.
What that denial means, and what it deliberately doesn’t mean, is now essential context for anyone tracking the White House and congressional AI proposals released in the same month.
What the Court Decided
The cert denial upholds the U.S. Copyright Office’s position and the rulings of lower courts: copyright protection requires human authorship. An AI system cannot be listed as a sole author. Works produced without human creative contribution are not eligible for copyright protection under current U.S. law.
This is the second time the Supreme Court has declined to hear a Thaler challenge. Holland & Knight attorneys described the denial as "the end of the road, at least for now" for efforts to establish AI-generated works as independently copyrightable. Thaler previously sought to have an AI system, DABUS, listed as sole inventor on a patent application in Thaler v. Vidal; the Court declined that case as well. The pattern across both intellectual property domains is consistent.
The case has been in litigation since Thaler filed his original copyright application in 2018. That eight-year arc ending in a cert denial, without a Supreme Court opinion on the merits, illustrates how slowly judicial resolution moves relative to the pace of AI development.
What the Court Did Not Decide
The cert denial does not address whether training an AI model on copyrighted material constitutes infringement. That is a distinct legal question, and it was not before the Court in Thaler v. Perlmutter. It remains unresolved at the Supreme Court level.
This distinction matters directly for the current policy debate. The White House framework released March 20 recommends that AI training on copyrighted works not be treated as a copyright violation. Senator Blackburn's competing discussion draft takes the opposite position. Neither side can point to a Supreme Court ruling in support, because the Court has not ruled on the question. The cert denial in Thaler is silent on training data.
What to Watch
The human authorship baseline is stable. The training data question is not. Several cases involving AI training and copyright are working through the federal courts. A circuit split on the training question (two federal circuits reaching opposite conclusions) would be the most likely path to a future Supreme Court grant on that specific issue.
For a full analysis of how the Thaler decision fits within the broader collision between the White House framework, the Blackburn proposals, and the judicial record, see the complete policy collision deep-dive in the Regulation pillar.
TJS Synthesis
A cert denial is not a ruling. That technical distinction carries significant practical weight right now. The Supreme Court has not said training AI on copyrighted material is legal or illegal. It declined a case about something different, and that careful scope boundary is being obscured in public commentary that treats the Thaler outcome as broader than it is. IP counsel advising on training data decisions should be precise: the authorship question has a clear answer; the training question does not.