OpenAI’s GPT-5.4 is now widely available, confirmed by TechCrunch and OpenAI’s own platform. The company positions it as its flagship model for professional work, with native computer-use capabilities built into the base model rather than layered on as a separate product. That’s a meaningful architectural shift from prior releases, where computer-use was an add-on feature.
OpenAI states that GPT-5.4 has a 1 million token context window, which would place it among the longest-context models currently available via API, and describes a new "extreme reasoning mode" designed for extended, high-reliability tasks. Neither the context-window figure nor the feature name has been confirmed at the primary-source level; both are vendor-stated. On OpenAI's internal evaluations, the model scores 83% on GDPval, 57.7% on SWE-Bench Pro, and 75% on OSWorld. Pricing is reportedly $2.50 per million input tokens and $10.00 per million output tokens, though these figures have not been confirmed by tier-one independent reporting.
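If the reported rates hold, per-request costs are straightforward to estimate. The sketch below assumes the vendor-stated prices ($2.50/M input, $10.00/M output), which, as noted, are not independently confirmed:

```python
# Rough per-request cost estimator using the *reported* GPT-5.4 rates.
# These prices are vendor-stated assumptions, not confirmed figures.

INPUT_RATE_PER_M = 2.50    # USD per 1M input tokens (reported)
OUTPUT_RATE_PER_M = 10.00  # USD per 1M output tokens (reported)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a request that fills the stated 1M-token context window
# and generates a 10k-token response.
print(round(estimate_cost(1_000_000, 10_000), 2))  # 2.6
```

At those rates, a single maximal-context call would run about $2.60, which is the kind of arithmetic that matters for long-context agent pipelines.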
On the developer side, OpenAI's developer platform confirms support for deploying and optimizing agent workflows via AgentKit, and company documentation describes API enhancements designed to support multi-step agent workflows. The specific parameter naming has not been independently confirmed.
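Since the actual parameter names are unconfirmed, the sketch below shows only the generic shape of a multi-step agent workflow (model proposes a tool call, the harness executes it, the result feeds back in until the model finishes), using a stubbed model rather than any real OpenAI or AgentKit API:

```python
# Generic multi-step agent loop. Illustrative only: the model is stubbed,
# and none of these names reflect AgentKit's actual (unconfirmed) API.

from typing import Callable

def run_agent(model_step: Callable[[list], dict], tools: dict,
              max_steps: int = 5) -> str:
    """Drive a tool-calling loop until the model emits a final answer."""
    transcript: list = []
    for _ in range(max_steps):
        action = model_step(transcript)       # model decides the next move
        if action["type"] == "final":
            return action["content"]
        result = tools[action["tool"]](**action["args"])  # execute the tool
        transcript.append({"tool": action["tool"], "result": result})
    return "max steps reached"

# Stubbed two-step run: one lookup, then a final answer built from it.
def fake_model(transcript: list) -> dict:
    if not transcript:
        return {"type": "tool", "tool": "lookup",
                "args": {"query": "context window"}}
    return {"type": "final", "content": f"Found: {transcript[-1]['result']}"}

tools = {"lookup": lambda query: "1M tokens (vendor-stated)"}
print(run_agent(fake_model, tools))  # Found: 1M tokens (vendor-stated)
```

Whatever the final parameter surface looks like, this execute-and-feed-back loop is the pattern the announced API enhancements are meant to support natively.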
GPT-5.4 arrives in the same week Google released multiple Gemini updates with their own agentic API infrastructure. Both frontier labs are clearly moving the competition from model capability benchmarks toward developer integration surface area; the question now is which platform's architecture developers build their agent pipelines on first.