On March 27, 2026, U.S. District Judge Rita Lin of the Northern District of California granted Anthropic a preliminary injunction, suspending the Trump administration’s national security supply-chain-risk designation against the company. Per TRT World’s reporting, the ruling freezes a presidential order that had barred all federal agencies from using Anthropic technology, and it suspends the requirement that defense contractors certify non-use of Anthropic’s models. Wired, BBC, CBS News, and the Washington Post all independently confirmed the injunction.
That’s what the ruling covers. Here’s what it doesn’t cover, and why the distinction matters for every AI company doing business with the federal government.
Section 1: What the Northern District Ruling Actually Covers
A preliminary injunction is a temporary order. It halts the challenged action while the court works through the full case. Judge Lin’s ruling suspends the designation in the Northern District, meaning federal agencies and defense contractors operating under that court’s jurisdiction are no longer required to cease using Anthropic’s products or certify non-use. The legal standard for granting a preliminary injunction requires the judge to find, among other things, that the plaintiff is likely to succeed on the merits of the case. An Anthropic spokesperson acknowledged that framing directly, telling TRT World the company is “pleased they agree Anthropic is likely to succeed on the merits.”
“Likely to succeed” is not the same as succeeded. The preliminary injunction is the court saying that this case deserves a full hearing, and that enforcing the designation in the meantime would cause too much damage to allow. Final resolution requires the case to proceed to the merits, or to be resolved at the appellate level.
The injunction covers two elements of the original designation: the federal agency cease-use order and the defense contractor certification requirement. Both are paused. Federal agencies can continue using Claude. Defense vendors don’t have to certify non-use while the injunction holds.
Section 2: The Unresolved Front
While the Northern District case was proceeding, Anthropic simultaneously filed a separate, narrower case in the D.C. Circuit Court of Appeals. Per TRT World’s reporting on the dual proceedings, this D.C. Circuit case is still pending. The supply-chain-risk designation has not been blocked by the D.C. Circuit. Politico’s coverage of the legal landscape indicated the designation “remains in place” in the context of that proceeding, and described the Northern District win as potentially “premature” as a signal of the case’s overall resolution.
That framing is technically accurate. Two federal court proceedings, one paused and one unresolved, means the legal situation isn’t settled. The Northern District injunction is good news for Anthropic’s near-term federal market access. It’s not a clean resolution of the underlying legal question: whether the executive branch has the authority to designate an AI company as a national security supply-chain risk based on that company’s refusal to modify its acceptable use policies.
The two proceedings are linked. If the Trump administration appeals the Northern District ruling, the appeal would go to the Ninth Circuit, putting the same underlying question before two appellate courts at once. If the D.C. Circuit reaches a different conclusion than the Northern District, the cases could create conflicting precedents across federal jurisdictions, an outcome that would likely require Supreme Court review to resolve.
Section 3: Why Anthropic Refused, and Why It Matters as Precedent
The designation wasn’t triggered by Anthropic’s business conduct or a security breach. It was triggered by Anthropic’s refusal to modify its usage policies to permit two specific applications: mass domestic surveillance and fully autonomous weapons. Per Anthropic’s own public statement, Dario Amodei explicitly named mass domestic surveillance as a use case the company declined to enable. BBC reporting corroborates both categories, mass domestic surveillance and fully autonomous weapons, as the contested uses.
Anthropic’s position is that these restrictions are core safety commitments, not negotiating positions. The company declined to remove the prohibitions even under the threat of losing federal market access. That’s a significant commercial decision: Anthropic has publicly valued its federal contracts, and Claude has been deployed across multiple government contexts.
The precedent this sets for other AI companies is direct. Every frontier AI lab publishes acceptable use policies. Those policies restrict certain applications, typically including weapons of mass destruction, non-consensual surveillance, and content that violates specific laws. If the Trump administration’s theory holds, namely that refusing to modify those policies to enable government-requested use cases constitutes a supply-chain risk, then any AI company that maintains meaningful safety restrictions faces the same vulnerability.
Anthropic’s court win, if it holds on the merits, would establish that acceptable use policies are not a basis for supply-chain-risk designation under existing legal authority. That’s a meaningful protection for the entire AI industry operating in federal markets. If the administration wins, the opposite signal goes out: safety restrictions are negotiable under government procurement pressure.
Section 4: Market Implications for Federal AI Contracting
The immediate market implication is straightforward. Federal agencies that were ordered to stop using Claude can keep using it. Defense contractors that faced certification requirements get relief for the duration of the injunction. Anthropic’s federal revenue stream, which had been at risk of significant disruption, is stabilized in the near term.
The longer-term market implication is structural. The case has surfaced a previously untested question in the federal AI procurement landscape: what authority does the executive branch actually have to exclude AI vendors from federal use on national security grounds? Procurement officers, compliance teams at AI companies, and defense contractors all need an answer to that question before they can reliably plan federal AI deployments.
Defense contractors currently required to certify non-use of Anthropic’s models face a specific compliance challenge: the Northern District injunction suspends that requirement, but the D.C. Circuit case is unresolved. A prudent compliance posture tracks both proceedings, not just the favorable one. Organizations that rely on Anthropic technology for federally contracted work should maintain contingency documentation, not because the injunction isn’t valid, but because a preliminary injunction is temporary and the litigation isn’t over.
For AI companies watching this case from the outside: the supply-chain-risk designation mechanism is now a known tool of executive procurement policy. The Anthropic case has demonstrated that it’s legally challengeable. It’s also demonstrated that challenging it takes resources, legal sophistication, and the willingness to fight a protracted dual-jurisdiction battle. Not every AI company has those resources. The barrier to federal market access just got more complex, regardless of which side wins.
Section 5: What to Watch
Three developments will determine the significance of this ruling over the next 90 days. First, whether the Trump administration appeals the Northern District injunction: an appeal would take the question to the Ninth Circuit and, if paired with a stay pending appeal, could suspend the injunction’s effect, adding another layer to an already multi-jurisdictional dispute. Second, the D.C. Circuit’s timeline: if that court rules while the Northern District case is still at the preliminary stage, it could create the conflicting precedents described above. Third, whether any other AI companies receive supply-chain-risk designations under the same authority: a pattern of designations would transform this from a company-specific dispute into a systemic industry issue.
The Anthropic injunction is significant. It’s not final. Anyone treating it as a settled outcome is reading one front of a two-front legal war and calling it done. The supply-chain-risk designation is paused in one jurisdiction. Somewhere in the D.C. Circuit’s docket, the other proceeding is still moving.