Anthropic has drawn a line that no major AI lab has drawn quite this clearly before. Claude Mythos Preview, which Anthropic describes as its most capable model to date, is being kept off the public market entirely. Not delayed. Not limited by geography or enterprise tier. Locked.
The access structure is called Project Glasswing. Roughly 50 organizations have been selected to use Mythos Preview for defensive cybersecurity purposes, scanning their own infrastructure for vulnerabilities before adversaries find them first. Everyone else waits. Preview pricing for Glasswing partners is $25 per million input tokens and $125 per million output tokens, a price point that signals this isn’t a casual development environment.
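To put those rates in concrete terms, here is a minimal cost sketch at the stated Glasswing preview pricing. The token volumes are illustrative assumptions for a single large scan, not figures Anthropic has published.

```python
# Rates from the published Glasswing preview pricing:
# $25 per million input tokens, $125 per million output tokens.
INPUT_RATE = 25.0 / 1_000_000    # USD per input token
OUTPUT_RATE = 125.0 / 1_000_000  # USD per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one model invocation at preview rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical scan: 2M tokens of code and config in, 200k tokens of findings out.
print(round(run_cost(2_000_000, 200_000), 2))  # 75.0
```

At those assumed volumes a single deep scan lands around $75, which is why the pricing reads as a signal about intended scale of use rather than casual experimentation.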
Why the restriction? Anthropic’s own testing documentation states that Mythos Preview can identify and exploit zero-day vulnerabilities across major operating systems. The Hacker News reported that the company characterized the volume of vulnerabilities the model surfaced as significant. The pricing and access controls aren’t marketing; they’re a response to what Anthropic found when it tested what it had built. This is, as PBS NewsHour reported, the first time the company has restricted a model’s release specifically because of what the model can do.
That’s the structural shift worth tracking. The AI industry has debated responsible release practices for years, mostly in the abstract. Anthropic has now made a concrete operational decision: some capability levels require a gated deployment architecture, not just a terms-of-service clause. Project Glasswing is that architecture. Fifty organizations, defensive use only, partner-level pricing, no public API.
Mythos Preview is also available on Google Cloud Vertex AI in Private Preview to a select group, reinforcing that access is controlled at both the Anthropic level and the distribution-partner level.
What this means for security teams outside the Glasswing program is a real question. Organizations with access get to use the most capable AI vulnerability scanner available. Organizations without it don’t, and their defenders may eventually face adversaries who’ve found equivalent capabilities elsewhere. The asymmetry matters.
Two things to watch. First, whether Glasswing expands. Fifty organizations is a pilot, not a permanent ceiling. The question is what threshold Anthropic uses to decide the model is safe enough for broader deployment, and whether that threshold involves independent evaluation or only internal judgment. Second, whether other frontier labs treat this as a template. The pattern of restricting cybersecurity-capable models isn’t unique to Anthropic; it reflects a convergent view among labs about where the risk line sits.
The deeper issue is governance. A company has unilaterally decided which 50 organizations get access to a tool that can find zero-day vulnerabilities at scale. There’s no public standard for how that selection happened, no external audit of the criteria, and no regulatory framework that currently governs this kind of access tier. Project Glasswing is Anthropic’s answer to a governance problem that no institution has formally solved yet.