On March 27, a federal court temporarily blocked the Department of Defense’s supply chain risk designation against Anthropic, granting the AI company a preliminary injunction that keeps its products in use by government contractors while the legal dispute continues. The ruling does not resolve the underlying conflict. It buys time.
The DoD had moved to designate Anthropic as a supply chain risk, a classification that would effectively bar federal agencies and their contractors from using Claude and potentially other Anthropic products. Anthropic challenged the designation in court. According to reports, the dispute arose after Anthropic declined to waive contractual restrictions related to surveillance and autonomous weapons applications. Those restrictions appear to be core to Anthropic’s published acceptable use commitments. The DoD’s position, if the reporting is accurate, is that a vendor’s own safety guardrails cannot limit what a procuring agency can require.
That tension is the story beyond this case. Anthropic is not the only AI developer with use-case restrictions baked into its terms of service; every major frontier model provider maintains some version of prohibited use categories. If the DoD's supply chain risk designation framework can be used to enforce agency access to capabilities a vendor has restricted, the implications extend across the government AI vendor landscape, not just to Anthropic.
The preliminary injunction was granted by a US District Judge identified in reports as Rita Lin. An injunction at this stage means the court found Anthropic likely to succeed on the merits, or, at a minimum, that the balance of harms favors blocking the designation while the case proceeds. Neither conclusion is final. The case continues.
Reports also indicate the ruling may affect a broader directive to federal agencies regarding Anthropic's products, though this element could not be independently confirmed from sources available at publication.
What this means for contractors: if your organization's federal contracts involve Anthropic's Claude, the injunction preserves your current operating posture while the case develops. The more important planning exercise is assessing whether your AI vendor agreements contain use-case restrictions that could create similar supply chain risk exposure. This case is the first visible instance of that conflict. It is unlikely to be the last.
TJS will continue tracking this case as it develops. The next significant milestone is any ruling on the merits, or a settlement that clarifies whether government procurement authority can override vendor safety commitments.