Anthropic has said nothing. The documents did the talking instead.
Leaked internal materials, reported on April 1 by multiple outlets and carried as an exclusive by Fortune, describe a new Anthropic flagship model called Claude Mythos. According to those reports, Mythos sits in a new model tier called Capybara, positioned above Opus, which has until now been Anthropic's top publicly available tier. Capybara is the tier name, not a separate model.
What the Documents Reportedly Say
Leaked documents, as reported by multiple outlets including Fortune, describe Claude Mythos as "by far the most powerful AI model we have ever developed." The quote comes from the leaked materials as reported; it has not been confirmed by Anthropic. According to reports citing those documents, training for Claude Mythos is complete.
The leaked documents also reportedly note significant cybersecurity risks associated with the model’s release. According to reports, Claude Mythos is currently in trial with select early access partners focused on cybersecurity applications. No public release date has been announced.
One figure circulating in secondary reporting, a claimed 10-trillion-parameter count, does not appear to originate from the leaked documents themselves, according to cross-reference analysis. That figure comes from secondary commentary about the leak. It is not a verified specification and is not treated as fact in this brief.
Why It Matters
The Capybara tier above Opus is the structural news here. Anthropic’s public model ladder currently runs Haiku, Sonnet, Opus. A new tier above Opus, if the documents are authentic, signals a meaningful capability jump, not an incremental update. The fact that Anthropic is reportedly beginning trials with cybersecurity partners, rather than a general developer release, suggests the company is treating this as a high-stakes deployment requiring controlled rollout.
The cybersecurity risk language in the documents is also notable. Anthropic positions itself publicly as an AI safety company. A leaked document describing its own most powerful model as posing “significant cybersecurity risks” creates a visible tension with that public positioning, one worth tracking as more information surfaces.
Context
Leaked documents about frontier AI models aren't new. What's different here is the sourcing pattern. Fortune's exclusive reporting carries more weight than the social media accounts and low-tier outlets that initially circulated the story. That doesn't authenticate the documents, but it does mean the story has editorial scrutiny behind it beyond the initial leaks.
What to Watch
Watch for any official statement from Anthropic; silence is itself informative. Watch whether the cybersecurity trial partners are identified, which would corroborate the trial phase. Watch for Fortune's full exclusive piece if it isn't already published; the snippet available via cross-reference suggests a more detailed account exists.
TJS Synthesis
This is a leak-sourced story, and that matters: nothing here should be treated as confirmed Anthropic product strategy. What the reports do establish, if the documents are authentic, is that Anthropic has a model in trial that its own documentation describes as both historically powerful and a cybersecurity risk. The combination of "most powerful ever" and "significant cybersecurity risks" in the same document, from a company that leads with safety as its brand, is the story worth following.