Regulation Daily Brief

Judge Blocks Pentagon Ban on Anthropic, Finds It Likely Illegal Retaliation for AI Safety Policy

U.S. District Judge Rita F. Lin granted a preliminary injunction on March 26, 2026, halting the Pentagon's "supply chain risk" designation against Anthropic, keeping federal agencies' access to Claude systems intact while legal proceedings continue. The court found Anthropic likely to prevail on its Administrative Procedure Act claims.

A federal judge has drawn a line. The Pentagon crossed it.

On March 26, 2026, U.S. District Judge Rita F. Lin issued a preliminary injunction blocking the Department of Defense’s move to designate Anthropic as a “supply chain risk,” a designation that, together with a related presidential directive, had barred federal agencies from using Anthropic’s Claude models. The injunction restores federal agencies’ access to Anthropic’s systems while the full case proceeds.

The ruling, confirmed across multiple outlets including AI Certs, the LA Times, and GovInfoSecurity, found Anthropic likely to succeed on its Administrative Procedure Act claims. The LA Times characterized the court’s position as finding the Pentagon’s action amounted to illegal retaliation.

Key terms for compliance readers.

A preliminary injunction is not a final ruling. It means the court found that Anthropic is likely to prevail on the merits and would suffer irreparable harm without immediate relief; it does not mean Anthropic has won the case. The underlying litigation continues.

A supply chain risk designation is a procurement mechanism. The Pentagon used it to exclude Anthropic from federal contracts by characterizing the company as a security risk. Under the Administrative Procedure Act, agencies must follow procedural requirements and cannot take arbitrary or capricious action, which is the basis of Anthropic’s legal challenge.

The dispute’s origin.

The conflict centers on Anthropic’s “Autonomous Weapon Refusal” policy. According to AI Certs reporting on the case, the policy forbids Claude from powering fully self-directed lethal systems. Defense Secretary Pete Hegseth branded Anthropic a supply chain risk after the company refused to remove that guardrail to comply with a DoD directive requiring AI systems to be available for “any lawful use.” The policy reportedly also bars deployment for mass surveillance applications, though that element wasn’t explicitly confirmed in the sources reviewed and should be treated as reported rather than established.

The court found the Pentagon’s action appears designed as punitive retaliation rather than a legitimate national security measure, according to reporting from the LA Times.

What this means for AI companies with government contracts.

This ruling is narrow and preliminary. It doesn’t establish that AI vendors can enforce any guardrail they choose against government customers. It establishes that this particular action, excluding Anthropic from federal contracts because it maintained its safety policies, likely violated procedural law.

The broader question remains open: who decides what an AI system can refuse to do, the vendor, the customer, or the government? This ruling is one data point. A full trial ruling would be another. A circuit court decision would carry more weight.

What’s immediately actionable: AI companies with federal contracts, or aspirations to them, should review their acceptable use policies and assess where those policies could create procurement conflicts. A ruling on a second challenge brought by Anthropic is still pending and warrants monitoring.

TJS perspective: a court has now found, in preliminary terms, that the government can’t punish an AI vendor for maintaining safety guardrails, at least not through this procurement mechanism. That’s a significant data point for AI governance, but it is not a settled principle. The full case will determine whether this preliminary ruling reflects where the law actually lands.
