A frontier AI lab is suing the US government to keep its safety rules intact.
On March 9, Anthropic filed two federal complaints in the Northern District of California against the Department of Defense. According to NPR’s reporting, the company is asking a federal judge to block Pentagon officials from enforcing a national security supply chain risk designation, a label that, if it stands, restricts the company’s ability to operate on government systems.
Anthropic states in its complaints that the designation came after it refused to remove safety guardrails preventing use of its AI for autonomous weapons or domestic surveillance. The company argues the designation constitutes illegal retaliation and violates its free speech and due process rights. The New York Times reports that Anthropic characterizes the action as being “punished on ideological grounds.” These are Anthropic’s legal positions in a pending lawsuit; no court has ruled on them.
The Pentagon’s stated rationale for the designation isn’t available in current sources. What is confirmed: the designation itself, the filing date, the court, and Anthropic’s stated grounds for challenge. Lawfare’s analysis of the civil complaint provides the most detailed available account of the legal theory.
For AI practitioners and compliance teams at companies with government contracts, this case has direct implications: it is the first major federal litigation testing whether an AI company’s own safety constraints are legally protected against government override. Watch the Northern District of California docket.