The Pentagon designated Anthropic a national security “supply chain risk” and moved to blacklist the company from government work. Anthropic responded by filing two federal lawsuits. TIME and Democracy Now both reported the suits on March 11, consistent with a filing on or around March 9, and The New York Times and Reuters independently confirmed the litigation and the blacklisting mechanism.
The dispute stems from Anthropic’s refusal to remove restrictions on Claude’s use in fully autonomous weapons systems, according to multiple reports. In its litigation, Anthropic argues that the designation is unlawful and violates its constitutional rights. The company reportedly lost a $200 million government contract as a result of the designation, though whether the cancellation is final remains unconfirmed.
The blacklisting mechanism matters beyond Anthropic. If the designation stands, it bars not only Anthropic itself but any federal contractor from deploying Claude in work for the US armed forces. That’s a structural constraint on the entire AI vendor ecosystem serving government clients.
The core legal question is whether a company’s published AI safety commitments can constitute a national security liability. That question has no precedent; courts will now decide it. What compliance teams should note in the meantime: this case signals that AI acceptable-use policies are becoming a material factor in federal procurement risk, not just an ethical posture.