Start with the coincidence. One week. Two of the three largest AI labs in the world made moves that, read together, say something neither company said directly.
OpenAI reportedly walked away from consumer video. Anthropic revealed, accidentally, that it’s positioning its next flagship model for the cybersecurity market. Both companies are placing enterprise bets. Both are deprioritizing broad consumer utility. And they’re doing it at the same time, for reasons that make sense individually but become more legible when you put them side by side.
This is that exercise.
The Week in Brief
Multiple reports indicate that OpenAI is discontinuing its Sora video app. Disney reportedly withdrew from a relationship valued at approximately $1 billion around the same time, according to industry newsletters. Reports suggest OpenAI is reorienting toward core AI development, enterprise infrastructure, and autonomous systems applications, though OpenAI had not published a formal statement as of March 28, 2026. Treat the strategic framing as directional reporting, not confirmed company doctrine.
Separately, Anthropic confirmed to Fortune that a CMS error exposed approximately 3,000 pieces of unpublished internal content, including information about an upcoming model called Claude Mythos, described in leaked materials as the company’s most powerful to date. Internal documents reportedly positioned it as “significantly ahead of any other AI model in the cybersecurity industry.” That’s a leaked internal claim, not a published benchmark. No independent evaluation exists. Anthropic also expanded Claude’s capabilities to include computer control via Claude Code and broader agentic workflows.
Same week. Two companies. Two directional signals pointing at different parts of the same market.
OpenAI’s Calculation: What Sora’s Exit Says
Consumer AI products are hard to defend.
That’s not a post-hoc rationalization. It’s the structural reality that OpenAI appears to be acting on. Consumer video generation is a market where differentiation erodes quickly. Sora launched to significant attention in 2024. By 2026, the field had widened: Runway, Kling, Veo 2, and others had all produced capable video output. User acquisition in that space is expensive. Monetization at scale is difficult relative to enterprise contracts. Content moderation risk is substantial.
Infrastructure and enterprise deployment are different. API access, foundation models for embedded enterprise use, and platforms for autonomous systems development are recurring-revenue businesses with cleaner customer relationships and higher switching costs. OpenAI’s infrastructure layer (the models that power third-party products, the enterprise ChatGPT deployments, the API economy built around GPT) is where the defensible business lives.
The Disney relationship, reportedly valued at approximately $1 billion, is the most concrete signal in the available reporting. Enterprise partnerships in AI are typically conditional on product roadmaps and capability trajectories. If Disney’s investment was tied to Sora as a production or creative tool, for content pre-visualization, animation assistance, or similar applications, a strategic exit from that product makes the partnership economically illogical. Partnerships follow product bets. When the bet changes, the partnership changes with it.
What OpenAI is reportedly moving toward (autonomous systems applications, infrastructure scale) is a market where compute advantages, model quality, and enterprise relationships compound over time. Consumer video is a features race. Infrastructure is a moat-building exercise. The directional logic holds even if the specific execution details remain unconfirmed by OpenAI directly.
Anthropic’s Calculation: Agentic Control and Cybersecurity Positioning
Anthropic is making a different bet. Or, more precisely, it’s making overlapping bets that point at the same customer profile: the enterprise buyer who needs AI to do work, not just answer questions.
The Claude Code expansion is the near-term signal. Computer-use agents, systems that interact directly with operating environments rather than generating text about them, are a qualitative step in what enterprise customers can deploy. The prior Claude was a reasoning and generation layer. Claude with computer control is an execution layer. That distinction changes the product category. It also changes the security surface area, the testing requirements, and the enterprise governance considerations. Developers building on Claude APIs now have access to a meaningfully different class of capability.
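To make the generation-layer vs. execution-layer distinction concrete, here is a minimal sketch of what granting Claude screen control looks like in the shape of Anthropic’s Messages API. The tool type and model id below follow the 2024 public computer-use beta and are illustrative only; they may not match whatever ships next, and a real request would also need the beta header and an API key.

```python
# Sketch of a computer-use request payload in the shape of Anthropic's
# Messages API. Tool type "computer_20241022" and the model id come from
# the 2024 public beta and are illustrative; check current docs.
# No network call is made here; we only assemble the payload.

def build_computer_use_request(prompt: str, width: int = 1280, height: int = 800) -> dict:
    """Assemble a Messages API payload that gives the model a
    screen-control tool alongside an ordinary text prompt."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # illustrative model id
        "max_tokens": 1024,
        "tools": [
            {
                "type": "computer_20241022",   # computer-use tool (beta)
                "name": "computer",
                "display_width_px": width,
                "display_height_px": height,
            }
        ],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_computer_use_request("Open the security dashboard and export the alert log.")
print(payload["tools"][0]["name"])  # -> computer
```

The point is structural: the model is no longer only returning text about the task; it is handed a tool schema that lets it issue clicks and keystrokes against a live environment, which is exactly what changes the security surface and governance requirements described above.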
The Mythos leak is a longer-arc signal. Internal materials reportedly positioned it as the company’s most powerful model with a specific cybersecurity focus. That framing, if it reflects actual product strategy and not just internal positioning language, suggests Anthropic is making a vertical bet. Not just “a better model for everything,” but “the best model for cybersecurity.” That’s a different go-to-market strategy than general frontier model competition. Vertical positioning creates a clearer sales motion, a more defined customer, and a more defensible competitive position than horizontal capability races.
The caveat is real: Claude Mythos hasn’t been released. The capability claims come from pre-release internal documents exposed in a data leak, not from published benchmarks or independent evaluation. Epoch AI has no assessment. There’s no arXiv paper. The claim that it’s “significantly ahead of any other AI model in the cybersecurity industry” is exactly the kind of internal language organizations use before a model ships, language that may or may not survive contact with independent evaluation. Hold the specific claims loosely. The strategic direction they point to is the more durable signal.
Convergence and Divergence
| | OpenAI | Anthropic |
|---|---|---|
| Moving toward | Infrastructure scale, enterprise APIs, autonomous systems | Enterprise vertical AI, computer-use agents, cybersecurity |
| Moving away from | Consumer video, broad consumer product portfolio | General-purpose consumer chat utility (implicit) |
| Partnership signal | Disney reportedly exited ~$1B relationship | Leak revealed cybersecurity-focused positioning for Claude Mythos |
| Model strategy | Infrastructure-first; products follow | Vertical-first; cybersecurity as anchor use case |
| Transparency | No formal announcement as of March 28 | Confirmed leak and cause publicly; appropriate response |
Both labs are betting on enterprise over consumer. Both are betting on agentic or autonomous applications. Those convergences reflect the same market intelligence: enterprise AI is where the durable revenue is, and agentic capability is the differentiating feature.
The divergence is in execution theory. OpenAI, based on available signals, is going deep on horizontal infrastructure, the layer that powers many enterprise applications across many verticals. Anthropic appears to be going deep on vertical penetration, using specific capability claims in specific industries to win customers who need more than a general-purpose model.
These are not mutually exclusive strategies. Both can work. They require different organizational capabilities, different sales motions, and different product roadmap priorities. What they share is a recognition that the consumer product phase of frontier AI is maturing, and the enterprise phase is where the next competitive cycle plays out.
What Builders and Buyers Should Watch
For developers: platform bets are becoming less interchangeable. When OpenAI and Anthropic are making distinct capability investments (infrastructure-first vs. vertical-first), the choice of which API ecosystem to build on starts to matter more. A developer building for the cybersecurity market gets a different value proposition from Anthropic in 2026 than from OpenAI. That wasn’t obviously true twelve months ago.
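One way the “less interchangeable” point cashes out in practice: teams that expect to revisit their platform bet often hide the provider behind a narrow interface so a later switch touches one adapter, not the whole codebase. Everything below (the `LLMProvider` protocol, the stub classes, `run_task`) is hypothetical illustration, not any vendor’s real SDK.

```python
# Hypothetical sketch: insulating application code from a single
# provider's API shape. All names here are illustrative, not a real SDK.
from typing import Protocol


class LLMProvider(Protocol):
    def run_task(self, prompt: str) -> str: ...


class GeneralModelProvider:
    """Stand-in for a horizontal, infrastructure-first API."""
    def run_task(self, prompt: str) -> str:
        return f"[general completion for: {prompt}]"


class VerticalSecurityProvider:
    """Stand-in for a vertical, security-focused API with agentic tooling."""
    def run_task(self, prompt: str) -> str:
        return f"[security-agent workflow for: {prompt}]"


def triage_alert(provider: LLMProvider, alert: str) -> str:
    # Application code depends only on the narrow interface,
    # so the platform bet stays revisable.
    return provider.run_task(f"triage: {alert}")


print(triage_alert(VerticalSecurityProvider(), "suspicious login"))
# -> [security-agent workflow for: triage: suspicious login]
```

The tradeoff is the one the article describes: the more a vertical provider’s differentiating capability (agentic control, security tooling) leaks through the abstraction, the less interchangeable the back ends actually are.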
For enterprise buyers: vertical AI capabilities are emerging as a real differentiator, but procurement decisions made today rest on claims that independent evaluation hasn’t yet confirmed. The Claude Mythos cybersecurity positioning and the OpenAI infrastructure pivot are strategic signals, not shipping products. Buy based on what’s available and verified, not on what leaked documents say is coming.
For investors and strategists: the consumer AI product market is thinning. The enterprise and infrastructure market is thickening. That reallocation is visible in both of this week’s moves. The companies that survive the next competitive cycle in frontier AI are the ones that have defensible enterprise revenue, not the ones with the most impressive consumer demos.
What remains unconfirmed: OpenAI’s formal strategic statement; Claude Mythos’s release timeline, actual capabilities, and independent benchmark performance; the precise terms of the Disney relationship exit. Watch for official announcements from both companies in the weeks ahead. The direction is clear. The details still need verification.
TJS take: This week’s two stories share a thesis. The frontier lab race is entering an enterprise-and-infrastructure phase, and both OpenAI and Anthropic are positioning for it differently. OpenAI is building the horizontal layer that everything else runs on. Anthropic is building the vertical capabilities that specific enterprise buyers will pay a premium for. Neither strategy is obviously superior. Both are coherent. What’s notable is the speed at which the field is moving from general-purpose consumer AI to specialized enterprise AI, and how much of that movement is visible in a single week’s worth of product decisions.