Federal AI procurement moves fast when national security is the argument. According to reporting from Axios and PYMNTS, the White House is drafting an executive order to restore federal access to Anthropic’s AI products, including the Claude Mythos model at the center of the government access dispute since April.
The reversal arc is worth stating clearly. In April, the Pentagon designated Anthropic a supply-chain risk after the company declined to authorize uses inconsistent with its published responsible scaling commitments. The specific refusal, according to PYMNTS, involved autonomous weapons applications, a category Anthropic’s RSP explicitly restricts. The Pentagon responded by limiting federal procurement access. The White House is now reportedly moving to override that limitation.
Policy analysts cite federal cybersecurity requirements as the primary driver of the reported reversal. The reasoning: Claude Mythos offers cybersecurity capabilities specific enough that no comparable domestic alternative exists at the required classification and capability level. If accurate, that argument reframes the Anthropic situation from a procurement compliance problem into a national security dependency question.
The executive order is reported as a draft, not enacted. Its scope, the specific OMB directive it would reverse, and the conditions it might attach to restored access are all unconfirmed. What is confirmed is the prior context: Anthropic’s RSP explicitly limits uses that could contribute to weapons with potential for mass casualties, and the company maintains that its commitments are not subject to government-specific waivers under its current governance structure. RSP v3.2, published April 29 and covered separately in this cycle, updated those commitments, and the external review authorizations in that update may be directly relevant to how any EO would structure oversight.
Two analytical threads pull in different directions here. First: if the executive order conditions restored access on use-case restrictions consistent with Anthropic’s RSP, it could set a template for government-AI procurement governance: safety commitments honored, access restored through defined oversight. Second: if the EO overrides Anthropic’s use-case restrictions without the company’s consent, it tests whether voluntary safety commitments have any teeth against federal procurement authority. The distinction matters enormously for every AI company with a published safety policy and government contracts in its pipeline.
The governance tension isn’t theoretical. Legal teams at AI vendors with federal contracts should watch how the EO’s reported scope is structured, specifically whether it treats Anthropic’s RSP restrictions as binding constraints on federal use or as terms subject to executive modification.