Regulation Daily Brief

AI Regulation News: Anthropic Sues Pentagon After Refusing to Strip AI Safety Guardrails From Its Models

1 min read · Source: The Straits Times (T3, verified excerpt, partial)
Anthropic filed a lawsuit against the Department of Defense on March 9, 2026, after the Pentagon designated the company a national security supply-chain risk on March 5 for refusing to remove guardrails that restrict its AI from being used for autonomous weapons and domestic surveillance. The case puts the question of AI safety constraints in federal procurement at the center of US AI policy.

Anthropic didn’t blink. The company refused to remove its AI safety guardrails to satisfy the Pentagon’s requirements, absorbed a national security blacklist designation on March 5, and filed suit four days later.

The Straits Times reports that Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk after the company declined to allow its models to be used for autonomous weapons systems or domestic surveillance applications. Anthropic filed its lawsuit on March 9, 2026, seeking to block the Pentagon from enforcing the designation. The lawsuit reportedly cites free speech and due process grounds, according to Straits Times reporting; the full legal filing was not publicly available at the time of publication.

Mobile World Live reports that the GSA is simultaneously drafting new guidelines for AI suppliers seeking federal contracts, with requirements that reportedly include an “irreversible licence” for all legal uses and tools described as “neutral, non-partisan.” Those draft requirements have not been confirmed against the full draft text.

The Straits Times also reports that the designation puts at risk approximately US$200 million in contract value; that figure has not been independently verified against court filings or contract records.

AI companies with federal contracts or federal contract ambitions now face a visible fork: adopt the government’s usage requirements, or risk the kind of designation that just landed on Anthropic. That’s not a hypothetical tension. It happened this week.
