The U.S. government officially designated a leading AI safety company a supply chain risk. That sentence needs no editorializing.
The BBC confirmed that the Pentagon classified Anthropic as a supply chain risk, a federal procurement designation that can restrict or eliminate a vendor’s eligibility for government contracts. Anthropic has stated in a court filing, as reported by Business Insider, that the designation could cost the company up to $5 billion in lost business. That figure is Anthropic’s own stated exposure, not an independent financial assessment.
Anthropic has vowed to sue the Pentagon. The substance of the dispute shows why this extends well beyond one company’s government contracts. Wired reports that Anthropic contends its AI systems aren’t yet capable of safely performing certain tasks the Pentagon wants them to handle. The Pentagon wants the right to make that safety judgment independently. These are two irreconcilable positions on the same question: who decides when an AI system is ready for high-stakes deployment?
For AI vendors with federal contracts, or aspirations to them, this case isn’t background noise. The outcome will establish a precedent on whether AI safety assessments belong to the vendor, the government, or a third-party certification process that doesn’t yet exist.