Three jurisdictions moved on AI copyright in nine days. The outcomes differ. The direction doesn’t.
On or around March 2, 2026, the US Supreme Court declined to hear Thaler v. Perlmutter, the case testing whether AI-only generated works could qualify for copyright protection. Per Morgan Lewis’s LawFlash analysis, the Court’s refusal leaves the human authorship requirement intact. AI-only outputs don’t qualify. The Copyright Office and federal courts continue applying that standard.
On March 6, the UK House of Lords Communications and Digital Committee published its report “AI, copyright and the creative industries.” On March 10, the European Parliament adopted a resolution on copyright and generative AI. According to Slaughter and May’s legal analysis of both documents, they are “very much pro-copyright, favouring licensing and remuneration over broad exceptions.”
Making sense of these developments requires separating three different types of legal action, each carrying a different kind of weight.
What each action actually is
The SCOTUS certiorari denial is procedural. It is not an affirmative ruling on the merits. The Court did not declare that AI cannot hold copyright. It declined to review the question. The lower court’s reasoning stands for the Thaler facts, but unresolved questions about how much human involvement is sufficient for protection remain open.
The Lords report is a committee recommendation. It carries no binding legal force. Parliament must act before any of its recommendations become law. The timeline for that is uncertain.
The EP resolution is non-binding. It represents the Parliament’s position, not enacted EU legislation. The EU AI Act is already law; this resolution addresses questions the Act doesn’t fully resolve.
None of these are the same as a statute or a binding judicial ruling on copyright doctrine. What they share is directional weight, and that matters for compliance planning even when the law hasn’t changed.
The divergence that complicates the convergence
The US and UK/EU approaches are pointing toward different outcomes, not the same one.
In the US, the human authorship question is settled in the lower courts (for now), but the question of what constitutes sufficient human creative contribution to qualify for protection is not. Companies using AI tools to assist human creators operate in a legal gray zone. SCOTUS declining Thaler doesn’t help them; it just means that particular question won’t be answered through this case, at least not now.
In the UK and EU, the live debate isn’t about output protection; it’s about training data. The Lords and the EP both focus on whether AI developers must license the copyrighted content they train on, or whether broad text-and-data-mining exceptions permit training without payment. Their answer is clear: licensing, not exceptions.
That’s a different legal question than the US is asking. The US is debating who can own AI-generated outputs. The UK and EU are debating who must pay to create AI systems in the first place. Both affect developers. Neither answer resolves the other question.
What compliance teams should do with this
Four practical assessments are worth running now.
First, audit your AI content pipeline for human authorship. If your company’s IP strategy relies on copyright protection for AI-assisted outputs, the relevant question is whether those workflows involve sufficient human creative direction. There’s no bright-line test. That’s a legal analysis, not a compliance checkbox, but it’s overdue for companies that haven’t done it.
Second, map your training data provenance if you’re operating AI systems in the UK or EU. The regulatory direction toward mandatory licensing means that training data sourced from copyrighted works without licensing agreements is a growing liability, not just an ethical question.
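In practice, a provenance map can start as something very simple: a manifest recording, for each training source, where it came from and under what license. A minimal audit sketch, assuming a hypothetical manifest format (the field names and the license allowlist here are illustrative, not drawn from any standard, and which licenses count as cleared is a question for counsel, not code):

```python
# Minimal training-data provenance audit (illustrative sketch).
# The manifest schema and allowlist below are hypothetical examples.

# Licenses your legal team has cleared for training use (placeholder values).
CLEARED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "proprietary-licensed"}

def audit_manifest(manifest):
    """Return records whose provenance or license status needs legal review."""
    flagged = []
    for record in manifest:
        license_id = record.get("license")
        if not record.get("source"):
            # No documented origin at all: highest-priority gap.
            flagged.append({**record, "reason": "missing source"})
        elif license_id not in CLEARED_LICENSES:
            flagged.append({**record, "reason": f"unreviewed license: {license_id}"})
    return flagged

manifest = [
    {"id": "ds-001", "source": "https://example.org/corpus-a", "license": "CC-BY-4.0"},
    {"id": "ds-002", "source": "https://example.org/corpus-b", "license": "unknown"},
    {"id": "ds-003", "source": "", "license": "CC0-1.0"},
]

for item in audit_manifest(manifest):
    print(item["id"], "->", item["reason"])
```

The point of a script like this is not the code; it is forcing the manifest to exist. Once every training source has a row, the gaps the regulators care about become visible and countable.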
Third, track the UK Government’s next move. The committee report triggers an expected government response. According to reports, an economic impact assessment was expected by March 18, 2026. That document, when published, will indicate how seriously the government is treating the committee’s recommendations.
Fourth, don’t conflate US and UK/EU signals. These are parallel developments on different questions. A US company operating in the EU faces both the output ownership question (US law) and the training data licensing question (EU and UK direction). Those require separate legal analyses.
The practical reality for AI developers is that copyright risk is expanding geographically at the same time the law in each jurisdiction is moving. Nine days in March didn’t resolve any of this. They made it clearer that resolution is coming, and that waiting is a posture with increasing cost.