Three things happened in the same week that, taken separately, look like routine product news. Together, they sketch something more deliberate.
On May 7, OpenAI released GPT-5.5 and GPT-5.5-Cyber, framing the launch as “Scaling Trusted Access for Cyber.” The model is not a general release. GPT-5.5-Cyber is rolling out in limited preview to vetted cybersecurity teams, organizations that pass a qualification process before they get access. That access tier is new. OpenAI has released safety-focused system cards before, but a restricted rollout gated on defender credentials represents a structural change in how frontier AI reaches high-stakes users.
The credibility anchor for this claim is external. The UK AI Safety Institute conducted independent cyber evaluations on GPT-5.5 and published a finding: “GPT-5.5 is one of the strongest models we have tested on our cyber tasks.” AISI is a government body, not a vendor; its evaluation is not a vendor benchmark. That distinction matters for practitioners deciding whether to take the capability claim seriously.
On the benchmarking side, Epoch AI’s Epoch Capabilities Index, an independently tracked benchmark, places GPT-5.5 Pro at a score of 159, corroborating the model’s frontier positioning without relying on OpenAI’s own evaluation methodology.
Separately, OpenAI introduced MRC, Multipath Reliable Connection, and contributed the protocol specification to the Open Compute Project. AMD, Broadcom, Intel, Microsoft, and Nvidia are listed as consortium partners. MRC addresses a specific operational problem: networking resilience in large AI clusters. By releasing it through OCP, OpenAI makes the protocol available to any hardware vendor building high-bandwidth AI fabric, rather than keeping it proprietary. The practical implication is multi-vendor AI networking interoperability. One detail to watch: the specific interface speed figures in some early reporting were not confirmed in the OCP specification documents available at publication. The protocol’s general design target is high-bandwidth networking; practitioners should verify speed specifications against the OCP technical documents directly.
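To make the resilience problem concrete, here is a toy sketch of the general multipath-failover idea a protocol in this space targets. Everything below (the `Path` and `MultipathSender` classes, the "rail" path names, the failure model) is invented for illustration; it is not the MRC design, which should be read from the OCP specification directly.

```python
# Toy multipath failover: send each packet on the first healthy path,
# falling over to the next path when a link fails. Illustrative only;
# not the MRC protocol.

class Path:
    """One network path, with a flag simulating link failure."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.delivered = []  # packets this path successfully carried

    def send(self, packet):
        if not self.healthy:
            raise ConnectionError(f"path {self.name} is down")
        self.delivered.append(packet)


class MultipathSender:
    """Tries paths in order and reports which one carried the packet."""
    def __init__(self, paths):
        self.paths = paths

    def send(self, packet):
        for path in self.paths:
            try:
                path.send(packet)
                return path.name
            except ConnectionError:
                continue  # this path is down; try the next one
        raise ConnectionError("all paths down")


# A link failure shifts traffic to the surviving path without losing
# the packet -- the resilience property at stake in large AI clusters.
paths = [Path("rail-0"), Path("rail-1")]
sender = MultipathSender(paths)
assert sender.send("pkt-1") == "rail-0"
paths[0].healthy = False  # simulate a failure on rail-0
assert sender.send("pkt-2") == "rail-1"
```

The point of the sketch is only the failover behavior: a single-path transport drops the second packet when rail-0 fails, while a multipath sender delivers it on rail-1.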
The 10GW milestone is the third data point. OpenAI confirmed it surpassed 10 gigawatts of secured AI infrastructure capacity, the target originally set for 2029 under the Stargate initiative. Hitting that target ahead of schedule matters for two reasons: it sets a baseline for what “at scale” means for GPT-5.5 workloads, and it accelerates the timeline for whatever comes after GPT-5.5.
What enterprise security teams should assess now: GPT-5.5-Cyber’s limited preview structure means access is controlled and applied-for, not purchased. Organizations with cybersecurity mandates that include AI-assisted defense capability need to understand the qualification criteria. The AISI evaluation is a starting point for internal justification; it provides independent language that legal and compliance teams can reference. The MRC protocol matters to infrastructure teams evaluating AI cluster networking. It is not an immediate procurement decision, but it is relevant to any organization building or expanding on-premises AI compute fabric.
Unanswered Questions
- What are the published qualification criteria for GPT-5.5-Cyber limited preview access?
- What specific interface speed standards does MRC target, and has the OCP specification been publicly released in full?
- What timeline applies to GPT-5.5-Cyber GA availability beyond the current limited preview?
One practical gap the announcement leaves open: the qualification criteria for GPT-5.5-Cyber access have not been published in detail. That matters. The model is described as restricted to vetted defenders, but the vetting process itself is opaque at this point. Enterprise teams building a case for access have no published eligibility standard to assess themselves against. That gap will either be filled quickly by OpenAI or will become a friction point as organizations line up for the preview.
The three elements (tiered access, an open networking standard, ahead-of-schedule infrastructure) are not coincidental in timing. They form a coordinated signal about how OpenAI is structuring the relationship between frontier capability and controlled deployment. The question for practitioners is not whether GPT-5.5-Cyber is powerful. The AISI says it is. The question is whether your organization can qualify for it, and whether your infrastructure is built to run it at scale.