Anthropic’s April 6 announcement delivered two interconnected signals at once: a compute commitment large enough to anchor its model roadmap through the next generation, and a revenue trajectory that suggests the demand to justify it already exists.
The core of the deal is straightforward. Anthropic has agreed to access multiple gigawatts of next-generation TPU capacity through Google Cloud, with hardware supplied through Broadcom. Anthropic’s announcement and the Google Cloud press release both use the phrase “multiple gigawatts.” The specific figure of 3.5 gigawatts was reported by Yahoo Finance, citing a Broadcom regulatory filing, but that number does not appear in either primary source. The distinction matters for anyone modeling compute costs or comparing capacity across labs. Capacity is expected to come online starting in 2027.
The revenue figure is the clearer signal. Anthropic disclosed that its run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025. That’s more than a tripling in roughly 15 months. The company also reported that the number of enterprise customers each spending more than $1 million annually has exceeded 1,000, doubling from the 500-plus figure disclosed at the February Series G announcement. Enterprise customers accessing Claude through Google Cloud include Coinbase, Cursor, Palo Alto Networks, Replit, and Shopify, per Google Cloud’s press materials.
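The growth claim is easy to sanity-check against the announcement's own numbers. A minimal sketch, using the vendor-disclosed revenue figures and the article's rough 15-month window (all approximations, not audited periods):

```python
# Sanity check on the disclosed growth figures. Inputs are the
# vendor-disclosed numbers; the 15-month window is this brief's own
# framing, so the implied rate is illustrative only.

run_rate_now = 30e9    # reported run-rate revenue, USD
run_rate_prior = 9e9   # approximate run-rate at end of 2025, USD
months = 15            # rough elapsed window

multiple = run_rate_now / run_rate_prior       # ~3.33x: "more than a tripling"
monthly_growth = multiple ** (1 / months) - 1  # implied compound monthly rate

print(f"growth multiple: {multiple:.2f}x")
print(f"implied compound monthly growth: {monthly_growth:.1%}")
```

The implied compound rate works out to roughly 8% per month, which is the kind of figure worth tracking against future disclosures.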
CFO Krishna Rao framed the deal explicitly in terms of growth pacing: “This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development.”
Both the revenue and customer figures are vendor-disclosed; no independent audit of these numbers has been reported publicly. They carry the confidence level of a company announcement, not a third-party earnings report.
Reporting that cited a Broadcom regulatory filing described the arrangement as extending through 2031, and CNBC confirmed Broadcom’s involvement in expanded chip deals with both Google and Anthropic, though the specific contract term was not confirmed in either primary source read for this brief.
Why it matters for enterprise buyers. The 2027 capacity timeline is the operational detail that procurement teams should log. If Claude’s most capable models will run on infrastructure that does not begin coming online until 2027, enterprise buyers evaluating multi-year AI service contracts need to factor that into their planning horizon. The doubling of $1M-plus enterprise accounts in under two months suggests Anthropic’s sales motion is accelerating, which means pricing, terms, and model availability could shift materially before that 2027 capacity arrives.
What to watch. The Broadcom regulatory filing that reportedly specifies the 3.5 GW figure and 2031 term has not been independently surfaced in this cycle. That filing, when located, will confirm or revise both the specific capacity number and the contract duration, two details that matter for competitive benchmarking against Microsoft’s Azure AI investments and Google’s own first-party infrastructure. Watch for Anthropic’s next public update on model availability timelines as capacity scales toward 2027.
TJS synthesis. The revenue growth rate here is the real headline, even more than the compute deal. Surpassing $30 billion in run-rate revenue, with 1,000-plus enterprise accounts each at seven figures, marks Anthropic as a genuinely commercial operation at scale, not a research lab that sells API access. The compute commitment follows that growth logically. What the deal doesn’t tell us is whether the capacity secured is sufficient for the next generation of frontier models, or merely adequate for serving current demand at scale. That question, which requires Epoch AI compute benchmarking data to answer properly, remains open.