The same week. Three labs. Three different answers to the same question.
The question is this: when you build an AI system capable of finding zero-day vulnerabilities, generating functional exploit code, or autonomously navigating complex security infrastructure, who gets to use it? The answer, it turns out, depends entirely on which lab you ask.
Anthropic answered with a closed consortium. OpenAI answered with a credentialing program. Google DeepMind hasn’t answered yet, and that gap is itself informative.
Understanding the differences matters if you’re a security team lead evaluating whether any of these programs are accessible to your organization. It matters more if you’re a compliance officer thinking about what it means when private companies build quasi-regulatory access structures around dual-use capabilities with no formal government mandate. And it matters for anyone watching how AI governance actually gets made: not in policy documents, but in product decisions.
Anthropic’s Model: The Gated Consortium
Anthropic’s approach to Claude Mythos, announced via Project Glasswing, is the most restrictive of the three. Access isn’t individual. It’s institutional. Glasswing is a closed consortium of vetted partner organizations, roughly a dozen at launch, that collectively agreed to terms governing how the capability can be used, shared, and disclosed.
The model itself, per Anthropic’s own documentation, is capable of identifying zero-day vulnerabilities at a level the company determined warranted a non-release decision for the general market. That decision (hold the capability, then release it only within a credentialed coalition) encodes a specific theory of responsible deployment: the capability is dangerous enough that even individual security professionals shouldn’t have unilateral access. Organizations are the unit of accountability.
What this means practically: if you’re a security researcher at a firm not in the Glasswing coalition, you don’t have access. If your organization wants in, the path is consortium membership, not individual application. The governance structure is designed for institutional accountability, not practitioner access.
OpenAI’s Model: The Individual Credential
OpenAI’s Trusted Access for Cyber program, expanded alongside the GPT-5.4-Cyber release on April 14, operates on a different theory. Access is individual. Security professionals apply, authenticate, and receive access based on their credentials and stated purpose. According to OpenAI, the program now includes thousands of authenticated individual defenders, a scale that already distinguishes it from Anthropic’s coalition model.
The “cyber-permissive” framing, OpenAI’s own terminology for this model variant, signals that TAC members get access to capabilities that would be restricted in a standard deployment. OpenAI describes the model as optimized for identifying and remediating vulnerabilities in digital infrastructure. No independent evaluation of those capabilities has been published as of this writing.
The practical path for security teams is more accessible than Glasswing’s: individual practitioners can apply directly, without their employer being a named consortium partner. The tradeoff is that individual credentialing puts more weight on the application and authentication process to screen out misuse, and OpenAI hasn’t published detailed public eligibility criteria for that process.
Google DeepMind: The Notable Absence
Google DeepMind is the third major frontier lab and, as of this publication, doesn’t have a publicly announced equivalent to Mythos or GPT-5.4-Cyber. That absence is worth naming directly. DeepMind has published research on AI security threats, including work on agentic AI attack surfaces, but a dedicated cybersecurity AI program with a structured access model comparable to Glasswing or TAC hasn’t been announced.
This could mean several things: DeepMind is developing something not yet public, DeepMind has decided against this category of product, or DeepMind’s security-relevant capabilities are being deployed through enterprise channels rather than a dedicated program. The hub is monitoring for a DeepMind announcement and will update this piece when one emerges. For now, the comparison is a two-lab picture with an open third column.
The Comparison That Matters
Set the capability claims aside for a moment; they’re vendor-attributed for both Mythos and GPT-5.4-Cyber, with no independent evaluation published for either at the time of writing. Focus instead on the access architecture, because that’s what’s independently verifiable and immediately consequential.
| Dimension | Anthropic (Project Glasswing / Mythos) | OpenAI (TAC / GPT-5.4-Cyber) | Google DeepMind |
|---|---|---|---|
| Access unit | Organization (consortium membership) | Individual (TAC credential) | Not announced |
| Scale at launch/expansion | ~12 partner organizations | Thousands of individual professionals (per OpenAI) | – |
| Application process | Consortium negotiation | Individual application and authentication | – |
| Eligibility criteria (public) | Not detailed publicly | Not detailed publicly | – |
| Independent capability evaluation | Pending (METR/Epoch referenced but not published) | Pending (no evaluation announced) | – |
| Governance theory | Institutional accountability | Individual credentialing at scale | – |
The divergence in governance theory is the substantive difference. Anthropic’s model treats organizations as the unit of accountability: if something goes wrong, a named partner organization is responsible. OpenAI’s model treats individuals as the unit and relies on the credentialing process to screen for intent and role. Neither lab has published the specific eligibility criteria that would make those theories operational, which is itself a gap worth noting.
What This Means by Audience
For security team leads: The access paths differ significantly. TAC is individual: you can apply without your employer being a named partner. Glasswing requires organizational membership. If your organization is already a Glasswing partner (the coalition membership list is partially public, per prior TJS coverage), Mythos access follows. If not, TAC is the more accessible near-term path for individual practitioners. Neither program has published verified capability benchmarks, so access decisions should be based on program fit and organizational eligibility, not on vendor-reported performance claims.
For compliance and governance professionals: Both programs represent private companies exercising quasi-regulatory authority over who can use dual-use AI capabilities. That authority is real and operational: it determines access to frontier security AI with no formal delegation from any government. The programs are currently self-governed. As they scale, they will attract regulatory scrutiny: what are the eligibility criteria, how are decisions reviewed, and what recourse exists for organizations denied access? Those questions don’t have public answers yet. Tracking them is the compliance work ahead.
For policymakers: The restriction vs. disclosure framework that TJS analyzed in “When AI Becomes the Best Hacker in the Room” is now instantiated in two live programs. Both labs chose restriction with controlled access over open disclosure. Whether that’s the right policy answer, or whether private access governance is an adequate substitute for public regulatory frameworks, is a live question. The EU AI Act’s provisions on high-risk AI systems and the NIST AI RMF’s guidance on dual-use capability governance are both relevant frameworks, but neither was designed with a private consortium credentialing model explicitly in mind.
What to Watch
Several developments will define how this story evolves. First, independent evaluation results for Mythos and GPT-5.4-Cyber: when Epoch AI, METR, or another credible third party publishes capability assessments, the gap between vendor claims and verified performance will either close or widen. Second, Glasswing coalition membership changes: new additions or departures will signal how the consortium model is working in practice. Third, TAC eligibility criteria: if OpenAI publishes more detailed criteria for individual authentication, that will clarify who “thousands of defenders” actually includes. Fourth, a Google DeepMind announcement: if one comes, it completes the three-lab picture and changes the comparison significantly. And fifth, regulatory signals: any EU AI Act enforcement action or NIST framework guidance touching private access governance for dual-use AI would reframe the compliance implications immediately.
TJS synthesis. Three labs. Two live programs. One open column. The most significant development this week isn’t any single model release; it’s that two frontier labs independently converged on a similar answer (controlled access for vetted professionals) while arriving at structurally different implementations. That convergence suggests an emerging industry norm. The structural difference between institutional and individual credentialing models suggests the norm isn’t settled. Security teams need to understand both programs on their own terms, not treat them as interchangeable. Compliance professionals need to start asking the questions about private access governance that regulators haven’t yet required labs to answer. The week’s product announcements are news. The governance architecture being built around them is the story.