Three weeks ago, Anthropic briefed the Trump administration on Claude Mythos. One week ago, the DoD reportedly called Anthropic a supply chain risk. This week, Treasury joined the queue. The escalation timeline is fast, and the stakes have shifted.
This deep-dive maps the dispute, who the parties are, what each one wants, what leverage they hold, and what precedent this sets for government access to restricted AI capabilities. It draws from this cycle’s daily brief, two prior published briefs in this series, and the verified facts available from each. Where claims couldn’t be independently confirmed, that limitation is stated directly.
The Capability at the Center
Start with what the dispute is actually about. Anthropic states that Mythos Preview, released under Project Glasswing on approximately April 7, 2026, identified thousands of previously unknown zero-day vulnerabilities across major operating systems and browsers during testing, according to MobileWorldLive’s April 14 reporting. This claim has been secondarily reported by Hacker News and other outlets, all attributing it to Anthropic directly. It hasn’t been independently confirmed.
That matters for understanding the dispute. A general-purpose frontier model creates policy questions about safety, alignment, and deployment. A model that demonstrably finds novel software vulnerabilities at scale creates a different set of questions, ones that live closer to weapons law, export control, and national security classification than to AI ethics. The government agencies seeking access aren’t asking because Mythos writes good emails.
The ECI score attributed to Mythos, reportedly around 156 per Epoch AI, cannot be confirmed. The Epoch AI evaluation URL associated with this story was broken during source verification. Any ECI figure for Mythos should be treated as unverified until Epoch publishes it independently.
The Four Parties and What They Each Want
*Anthropic.* The company launched Mythos with a deliberate access structure: a restricted partner tier limited to Amazon, Apple, Nvidia, and Google, according to The Next Web. Anthropic hasn’t publicly stated why those four were chosen, but the structure implies a model where access is controlled by commercial relationship and, presumably, by Anthropic’s assessment of each partner’s ability to handle the capability responsibly. What Anthropic wants is straightforward: to maintain that control. The company built a restricted access system precisely because it judged the capability too significant for unrestricted release.
*The Department of Defense.* DoD reportedly sought access to Mythos and, when access wasn’t granted, designated Anthropic as a “supply chain risk” in March 2026, according to reporting attributed to IAPP. This designation couldn’t be independently confirmed from primary government sources; the IAPP article URL wasn’t accessible during source verification. If the designation is accurate, it carries real procedural weight. Supply chain risk designations under federal acquisition frameworks can restrict agency procurement from a vendor, trigger reviews of existing relationships, and affect pending contract opportunities. The DoD’s position appears to be that access to a capability it considers strategically relevant should not be gateable by a private company’s commercial partnership model.
*The US Treasury Department.* Treasury’s involvement, as reported by MobileWorldLive, is newer and less detailed in the available source material. The specific use case Treasury is pursuing, whether financial system vulnerability testing, sanctions evasion detection, or something else, is not confirmed. What Treasury’s presence signals is that demand for Mythos access is not limited to defense applications. Two agencies with different mandates are making the same ask.
*The Partner Tier: Amazon, Apple, Nvidia, Google.* The four authorized organizations represent a specific kind of access structure: large, well-resourced technology companies with existing Anthropic commercial relationships and, presumably, the infrastructure to deploy Mythos responsibly. What they agreed to in terms of usage restrictions is not public. What the government agencies are now implicitly challenging is whether Anthropic can legally sustain a private access tier that excludes federal agencies when those agencies believe the capability is relevant to national security functions. The partner tier’s interests in this dispute are not clearly articulated in the available reporting. None of the four companies has publicly commented on the access escalation.
The Supply Chain Risk Designation: What It Actually Means
Federal supply chain risk designations are not primarily AI governance tools. They come from procurement law, specifically the Federal Acquisition Regulation and related authorities, and they’re designed to give agencies a formal mechanism to exclude vendors whose products or practices create unacceptable risk to government systems or operations. Applying this framework to an AI capability access dispute is unusual.
If DoD’s designation is confirmed, it creates a structural problem for Anthropic: the company may find its ability to pursue government contracts, or to have its technology used by government contractors, constrained, independent of any resolution to the Mythos access question specifically. A designation of this kind doesn’t require the access dispute to be resolved; it simply changes Anthropic’s standing as a government vendor.
The precedent is significant. This would be among the first cases in which a government agency used supply chain risk designation as leverage in a dispute about access to a private AI capability, rather than in response to a security incident involving an existing government system. Whether that framing holds up legally is a question for federal procurement attorneys, not AI analysts.
What the Access Model Debate Is Actually About
Strip away the specifics and the dispute is about one structural question: can a private company legally control access to an AI capability that government agencies judge to be strategically significant?
The answer is almost certainly “yes, for now,” but the mechanism being tested here could change that. Supply chain risk designations, export control frameworks, and national security authorities have all been used before to constrain private technology companies’ control over their own products. The question is whether any of those frameworks maps cleanly onto an AI model access dispute. They probably don’t. That’s exactly why this case is worth watching.
Implications for Enterprise and Security Teams
Organizations currently using Anthropic products, particularly those with federal contracts or in regulated sectors, have two immediate concerns.
First: does a DoD supply chain designation (if confirmed) affect their ability to use Anthropic products in government-adjacent work? The answer depends on the specific designation scope, which is not public. Legal review is warranted before assuming continuity.
Second: the Mythos partner tier structure is a preview of where high-capability AI access models are heading. If the most powerful AI systems are restricted to a small group of authorized organizations, procurement strategy for AI capabilities is going to start looking a lot more like enterprise software licensing and a lot less like API access. Security teams should be thinking about access tier strategy now.
What to Watch
Three developments will clarify where this goes. First: whether the DoD supply chain designation is confirmed via primary government sources; the IAPP reporting needs corroboration. Second: whether Anthropic makes any public statement about the agency access demands, which it has not done as of this writing. Third: whether either agency pursues a formal legal mechanism (FOIA, legislative action, or a contract vehicle) to compel or work around the access restrictions.
The Mythos dispute will not resolve quietly. The capability is too significant, the government interest too specific, and the structural question too novel. Watch this space.