OpenAI’s next frontier model has a codename, a stated purpose, and a financial justification. CFO Sarah Friar put a number on the company’s commercial shift: enterprise users now represent more than 40% of OpenAI’s revenue, roughly double their share when Friar joined in 2024. The model built to serve that base is called “Spud,” and according to reporting from the Seattle Times, it’s coming soon.
Friar didn’t disclose a release date. What she did provide was the clearest public articulation yet of OpenAI’s commercial direction: professional and enterprise users are where the growth is, and the product roadmap is following the revenue.
OpenAI describes “Spud” as engineered for reasoning-heavy, production-grade professional workflows, according to Friar. The model is designed around what enterprise customers actually need (complex reasoning chains, intent dependency management, and reliable output at scale) rather than the general-purpose versatility that defines consumer-facing models. Those are vendor characterizations: no independent benchmark data is available yet, and OpenAI has not published technical specifications.
The compute context matters here. Reports from multiple secondary outlets suggest that the decision to wind down Sora, OpenAI’s video generation product, reflects a reallocation of compute toward “Spud,” though as Yahoo Finance noted, OpenAI has not confirmed this connection directly. If accurate, it tells you something about internal resource prioritization: video generation lost the argument to professional AI infrastructure.
Why does this matter to enterprise teams evaluating OpenAI’s roadmap? The revenue disclosure is meaningful because it’s structural, not aspirational. OpenAI isn’t announcing an intention to focus on enterprise; it’s reporting that enterprise has already become the majority of its growth story. That changes how product decisions get made: models get built for the customers funding the business.
The 40% figure, attributed to Friar across multiple outlets including the Seattle PI, carries the credibility of a named executive statement, not an anonymous source or an analyst estimate. It can’t be treated as an audited financial disclosure, but it’s a meaningful data point from the person responsible for OpenAI’s commercial operations.
Context worth keeping in mind: OpenAI’s enterprise push isn’t new. The company has been building out its business tier, custom GPT infrastructure, and API enterprise agreements for over a year. What’s new is the revenue confirmation that the strategy is working, and the signal that future model development will be shaped by that reality. Practitioners should watch for two things: whether “Spud” introduces any new architectural approaches to enterprise reliability (context management, reduced hallucination rates, tool-use stability), and what happens to OpenAI’s consumer API pricing as the product mix shifts toward business customers.
The synthesis: OpenAI is now a business software company that also has a consumer product, not the other way around. “Spud” is the clearest expression of that inversion yet. Developers and enterprise teams building on OpenAI infrastructure should expect continued prioritization of the capabilities that matter for production deployments, and continued uncertainty about which consumer-facing experiments survive the compute budget.