Jack Clark’s confirmation on April 14 that Anthropic briefed the Trump administration on Mythos is a small sentence carrying significant weight. A co-founder of a private AI company confirmed that his organization briefed the executive branch on a model it won’t release to the public. That’s not a product launch. That’s a governance moment.
It’s also not the first one. OpenAI launched GPT-5.4-Cyber and built the Trusted AI Collaborators program around it, a tiered access structure that now reportedly reaches thousands of verified security defenders. DeepMind is releasing open-weight models while simultaneously developing robotics and video generation capabilities. Three labs. Three approaches to a shared problem: what do you do with a cybersecurity AI that’s genuinely dangerous if misused?
The answer each lab has given reveals more about their governance philosophy than any policy document they’ve published.
The New Access Tier: What “Restricted” Actually Means
Not all restricted access is the same. Understanding the difference matters for organizations trying to figure out where they stand.
Project Glasswing, as reported by ABC News, puts approximately 40 organizations inside Anthropic’s Mythos access program. That’s a very small number. For context: the United States alone designates 16 critical infrastructure sectors, each containing hundreds of major operators. Forty organizations across a global threat landscape means the vast majority of potential defenders are outside the program, by design.
OpenAI’s Trusted AI Collaborators program operates at a different scale. As covered in our earlier brief on GPT-5.4-Cyber, the TAC has reportedly expanded to thousands of verified individual defenders. That’s still a controlled list rather than public API access, but the scale difference is significant. A program with thousands of participants has different operational characteristics than one with forty.
DeepMind’s answer, at least for Gemma 4, is open weights. That’s not directly comparable to a cybersecurity-specific access program, but it signals an underlying philosophy: broader distribution of capability, with the assumption that the benefits of developer access outweigh the risks of misuse at that capability tier. Gemma 4 isn’t a cybersecurity model. But choosing open weights in the same week Anthropic confirmed government briefings on Mythos is a pointed contrast.
Government Engagement: What the Bessent Briefing Signals
The specific claim about Treasury Secretary Bessent requires careful handling. The Guardian reported that Bessent summoned senior executives from Goldman Sachs and JPMorgan to discuss Mythos risks. That’s a single-source claim without official government or financial institution confirmation. Read it as reported, not confirmed.
What the Clark confirmation does verify is the government engagement itself: Anthropic briefed the Trump administration. The specific mechanics remain unverified: who called whom, what was disclosed, what was requested. But the underlying dynamic is clear and significant regardless of the meeting details.
When a frontier AI lab briefs the executive branch on a capability it considers too dangerous to release publicly, the government is now inside the information asymmetry. The government knows what the lab knows. Regulatory agencies, other governments, and private sector organizations outside the briefing room don’t. That asymmetry is a governance problem, whether or not a formal policy framework exists to address it.
The Organizations on the Outside
Who isn’t in Project Glasswing or the TAC? Almost everyone.
Consider the practical situation for a mid-sized financial institution, a regional hospital system, or a state-level government agency. Their threat environment is real: ransomware, nation-state intrusion attempts, supply chain attacks. The models being restricted from them are reportedly capable of identifying and chaining the same classes of vulnerabilities those attacks use. The restricted programs exist because the labs believe those capabilities are too dangerous to release widely. From the outside, that logic is both understandable and uncomfortable.
The security asymmetry argument runs in both directions. Restricting access to a powerful offensive capability reduces misuse risk. It also concentrates defensive capability among a small group of organizations. If the 40 organizations in Project Glasswing include major defense contractors and intelligence-adjacent firms, the program might be optimizing for threat intelligence rather than broad defensive coverage. We don’t know. The program’s selection criteria haven’t been published.
Three Labs, Three Philosophies
The access model divergence maps onto a broader strategic difference:
Anthropic is betting that ultra-restriction plus government engagement is the right approach for its highest-risk capabilities. The argument is precautionary: a model that can chain zero-day exploits shouldn’t be accessible until the policy infrastructure to govern it exists. The government briefing is, in this reading, an attempt to accelerate that infrastructure.
OpenAI is betting on tiered access at scale. The TAC model attempts to serve the security community’s defensive needs without full public release. Thousands of verified defenders is a meaningful number; this is not a symbolic program. The risk is that verification at scale is harder than verification at forty, and the security properties of a large tiered program are harder to audit than those of a small one.
DeepMind’s open-weights strategy for Gemma 4 isn’t directly comparable, but it reflects a different underlying bet: that capability distribution and developer ecosystem development outweigh the marginal risk at current capability tiers. As DeepMind’s capabilities advance, that calculus may change.
None of these philosophies is obviously correct. They’re testable propositions, and the security community will eventually have data on which approach produced better defensive outcomes. We don’t have that data yet.
What Comes Next: Regulatory and Liability Implications
The policy conversation that hasn’t started formally is already happening informally. Government briefings on frontier AI capabilities are, functionally, early-stage regulatory engagement without a framework. The labs are deciding what to disclose, to whom, and when. That’s a governance function, one currently being performed by private companies without accountability mechanisms.
The Regulation pillar has tracked the absence of a formal framework for restricted-access AI programs. What this week’s briefing confirmation adds: the government is now actively receiving information from labs operating restricted programs. That creates a de facto obligation, even without formal rules. If Treasury is being briefed on financial sector risks from Mythos, and a financial institution then experiences a Mythos-class attack it was never warned about, the question of who bears liability for the information asymmetry isn’t hypothetical.
The regulatory gap isn’t just “we need rules for AI.” It’s specifically: “we need rules for who gets access to high-capability AI, who decides, and what disclosure obligations follow from the decision to restrict.”
What to Watch
The near-term milestones that would move this story forward: any independent technical evaluation of Mythos capabilities (which would test Anthropic’s claims and potentially reframe the access restriction rationale); any official Treasury or banking sector statement about the reported Bessent meeting (which would confirm or refute the financial sector engagement); and any formal government response to the briefings (executive order, agency guidance, or Congressional attention) that would signal whether the informal engagement is moving toward formal policy.
TJS Synthesis
The access program story is actually a governance precedent story. How these three labs structure access to their highest-risk capabilities, and how they engage with governments doing the same, is creating a set of facts on the ground that policy will eventually have to respond to. Forty organizations, thousands of verified defenders, open weights: three different answers to the same question. Organizations evaluating their own security posture should understand which program they can qualify for, what that access costs, and what they’re expected to do without it. The policy framework will catch up eventually. The threat landscape won’t wait.