Markets Deep Dive

Amazon Outspent Alphabet and Microsoft by Billions in Q1 2026, What the Capex Gap Reveals

$110.75B Q1 capex
5 min read · Source: SEC filings (Alphabet, Microsoft, Amazon Q1 2026)
Three companies spent $110.75B on AI infrastructure in a single quarter. The aggregate is the headline everyone will report. The per-company breakdown is the story that actually matters for investors and compliance teams trying to understand where the infrastructure is going and who controls it.
Q1 capex: Amazon $44.20B vs. Microsoft $30.88B
Key Takeaways
  • Amazon spent $44.20B on infrastructure in Q1 2026, 24% more than Alphabet and 43% more than Microsoft, per SEC filings, with the gap reflecting divergent partnership structures, not just scale differences
  • The $110.75B aggregate is a mathematical sum of three SEC-sourced per-company figures from Q1 2026 earnings
  • Epoch AI projects five US data centers will exceed 1GW peak power draw by late 2026, the physical scale that $110B in quarterly infrastructure spending produces
  • Amazon's infrastructure-linked AI partnership model (exemplified by the Anthropic compute agreement) creates forward build obligations that differ structurally from Alphabet's proprietary model and Microsoft's Azure-linked capex
Q1 2026 Capital Expenditure (SEC filings)
  • Amazon: $44.20B
  • Alphabet: $35.67B
  • Microsoft: $30.88B
  • Big Three combined: $110.75B
Analysis

Amazon's 43% capex premium over Microsoft in Q1 2026 is the largest single-quarter divergence in the current infrastructure cycle. The gap reflects Amazon's infrastructure-linked AI partnership model: building ahead of demand to honor compute commitments to AI lab partners. If the divergence holds across 2026, Amazon's full-year infrastructure commitment will exceed Microsoft's by more than $50B.

Opportunity

Epoch AI projects five US data centers will exceed 1GW peak power draw by late 2026. That threshold connects directly to EU AI Act GPAI compute classifications. Compliance teams tracking regulatory exposure should monitor which training facilities cross this threshold and what model training occurs there.

Amazon spent $44.20B on capital infrastructure in Q1 2026. Alphabet spent $35.67B. Microsoft spent $30.88B. The combined figure, $110.75B across three companies in 90 days, has already drawn coverage as a record. What hasn’t been examined is what the gaps between those three numbers reveal about diverging infrastructure strategies and what the spending is actually building toward.

This deep-dive is a follow-up to TJS coverage of hyperscalers as AI capital infrastructure and the Q1 2026 capex aggregate brief published May 1. The aggregate was the record. This analysis is about what sits beneath it.

The Numbers

Amazon’s Q1 2026 capital expenditure of $44.20B, sourced to SEC filings, exceeds Alphabet’s $35.67B by 24% and Microsoft’s $30.88B by 43%. These are not rounding-order differences. In a single quarter, Amazon outspent Microsoft by more than $13B. That gap compounds across a year: if the divergence holds, Amazon will have committed roughly $53B more than Microsoft to infrastructure over 2026 alone.
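The percentages and the aggregate are straightforward arithmetic on the three filed figures; a quick Python sketch reproduces them (figures in $B, from the SEC filings cited above):

```python
# Back-of-envelope check of the Q1 2026 capex figures cited above (in $B).
capex = {"Amazon": 44.20, "Alphabet": 35.67, "Microsoft": 30.88}

total = sum(capex.values())                            # aggregate across the Big Three
amzn_vs_goog = capex["Amazon"] / capex["Alphabet"] - 1   # Amazon's premium over Alphabet
amzn_vs_msft = capex["Amazon"] / capex["Microsoft"] - 1  # Amazon's premium over Microsoft
annualized_gap = (capex["Amazon"] - capex["Microsoft"]) * 4  # if Q1 pace holds all year

print(f"Aggregate: ${total:.2f}B")                     # $110.75B
print(f"Amazon vs. Alphabet: +{amzn_vs_goog:.0%}")     # +24%
print(f"Amazon vs. Microsoft: +{amzn_vs_msft:.0%}")    # +43%
print(f"Full-year gap at Q1 pace: ${annualized_gap:.1f}B")  # $53.3B
```

Quadrupling the Q1 gap is a naive straight-line annualization; the actual full-year divergence depends on each company's quarterly pacing.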

All three figures come from quarterly SEC filings, the most reliable source available for capital expenditure data. No direct filing URLs are available in this production cycle; the figures are attributed to Alphabet’s Q1 2026 10-Q, Microsoft’s Q1 FY2026 10-Q, and Amazon’s Q1 2026 10-Q respectively. Readers who want primary verification should pull those filings directly from SEC EDGAR.

The Divergence

Amazon’s higher figure isn’t surprising in isolation: AWS is the largest cloud infrastructure provider by revenue. What makes the gap structurally significant is *why* each company is spending, and what the spending is optimized for.

Amazon’s capex trajectory is driven by two distinct but reinforcing dynamics. First, AWS core infrastructure: data center build-out, networking, power systems, and chip procurement for a cloud business that still generates the majority of Amazon’s operating profit. Second, infrastructure-linked AI partnerships. Anthropic’s previously announced agreement for up to 5GW of AWS Trainium compute capacity, disclosed in April 2026 and covered in prior TJS analysis, is one example of a commitment structure where AI lab partnerships translate directly into data center build obligations. Amazon is not just spending to serve AWS customers. It’s spending to honor commitments made to AI partners who need compute at a scale that requires new physical infrastructure.

Alphabet’s spending pattern reflects a different posture. Google’s AI infrastructure is largely proprietary: TPUs, custom silicon, and data centers optimized for Google’s own model training and inference workloads. Alphabet doesn’t sell AI compute the same way AWS does. Its $35.67B includes Google Cloud’s external business, but the dominant driver is Google’s own AI stack. That’s a structurally different spend than Amazon’s, where a meaningful portion of infrastructure serves external AI tenants.

Microsoft’s $30.88B is the most directly legible figure: it’s spending closely tied to Azure’s AI service capacity and, critically, to Microsoft’s compute commitments to OpenAI. OpenAI has reportedly committed to approximately 2GW of capacity via AWS, according to commentary sources, though that figure hasn’t been confirmed in official filings and should be treated as approximate. If accurate, it would suggest OpenAI is diversifying compute away from exclusive Microsoft dependency, a development worth monitoring. Microsoft’s Q1 figure is the lowest of the three. Whether that reflects discipline, different project timing, or genuine strategic divergence from Amazon’s build pace is something the next two quarters will clarify.

The Capacity Question: What $110.75B Is Actually Buying

Capital expenditure at this scale translates into physical infrastructure with specific thresholds. According to Epoch AI’s compute tracking research, five US data centers are projected to exceed 1 gigawatt of peak power draw by late 2026. That figure, cited in Financial Times coverage of Epoch AI’s analysis, is a concrete benchmark for what the current build cycle produces. A 1GW data center requires roughly the output of a mid-size power plant. Five of them crossing that threshold in a single year represents an infrastructure concentration that has no precedent in commercial computing history.

The Epoch AI 1GW projection is distinct from the 44x compute growth framing covered on the TJS technology pillar this week. The 44x figure describes the rate of compute growth annually. The 1GW threshold describes the physical scale of individual facilities the current investment cycle is producing. Both matter. They describe the same underlying infrastructure build from two different angles, one as velocity, one as mass.

The Commitment Layer: What Partnerships Mean for Returns

Infrastructure at this scale isn’t speculative. It’s committed. Amazon’s Anthropic partnership, Microsoft’s OpenAI relationship, and Google’s internal model ambitions each represent forward commitments that require physical capacity to exist before the revenue materializes. The hyperscalers aren’t building data centers because demand is arriving; they’re building because their AI partners need the compute to exist in order to generate the products that will eventually produce the revenue.

This creates a specific return-modeling problem for infrastructure investors. The commitment layer is real and binding, but the downstream revenue depends on whether the AI products those labs ship generate the enterprise adoption needed to justify the spend. For Amazon, the Anthropic partnership provides some revenue-linked rationale. For Microsoft, Azure AI services are directly monetizable. For Alphabet, the return path runs through Google Cloud’s AI services and Google’s own advertising and productivity products. Three different return structures, same quarterly spend magnitude.

What Investors and Compliance Teams Should Watch

The next two quarters of SEC filings will reveal whether Amazon’s capex lead holds, narrows, or widens. A sustained 20%+ gap between Amazon and the other two hyperscalers would suggest Amazon is executing a deliberate infrastructure dominance strategy, not just serving current demand but creating supply that locks in AI lab partners before they can build or buy their own.

Compliance teams have a separate reason to track this data. The EU AI Act’s general purpose AI provisions include compute-based thresholds for model classification. Epoch AI’s 1GW projection directly connects to those thresholds: infrastructure at gigawatt scale enables training runs that cross regulatory boundaries. The five data centers Epoch AI projects will exceed 1GW by late 2026 aren’t just infrastructure investments; they’re the physical facilities where models subject to GPAI obligations get trained. The TJS regulation pillar has covered the GPAI classification thresholds separately. The connection here is that the capex numbers and the regulatory thresholds are describing the same asset from different vantage points.
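To make the 1GW-to-threshold connection concrete, here is a rough back-of-envelope sketch. The facility power figure is the Epoch AI projection cited above, and the EU AI Act presumes systemic risk for GPAI models trained with more than 10^25 FLOPs; the run length, accelerator efficiency, and overhead figures below are illustrative assumptions, not measured values:

```python
# Illustrative estimate: how a 1GW facility relates to the EU AI Act's
# 10^25 FLOP systemic-risk presumption for general purpose AI models.
# Efficiency and run-length values are rough assumptions, not sourced figures.

facility_power_w = 1e9             # 1 GW peak draw (Epoch AI projection cited above)
run_days = 90                      # hypothetical length of one training run
effective_flops_per_joule = 2e11   # assumed: accelerator efficiency after
                                   # utilization losses and facility overhead (PUE)

joules = facility_power_w * run_days * 86_400   # total energy over the run
training_flops = joules * effective_flops_per_joule

eu_threshold = 1e25                # EU AI Act systemic-risk presumption (FLOPs)
print(f"Estimated compute: {training_flops:.1e} FLOPs "
      f"(~{training_flops / eu_threshold:.0f}x the 10^25 threshold)")
# → ~1.6e+27 FLOPs, roughly 156x the threshold under these assumptions
```

Even with conservative efficiency assumptions, a gigawatt-scale facility running for a single quarter lands orders of magnitude above the 10^25 FLOP presumption line, which is why facility-level power draw is a useful early proxy for regulatory exposure.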

TJS synthesis: The $110.75B aggregate tells you the scale. The per-company breakdown tells you the strategy. Amazon’s 24% capex premium over Alphabet and 43% premium over Microsoft in a single quarter reflects an infrastructure-linked partnership model that creates physical commitments to AI labs, not just cloud capacity for buyers. For investors, the divergence is a signal about which hyperscaler is most exposed to AI lab partner performance. For compliance teams, the Epoch AI 1GW threshold connects this spending directly to the compute ceilings that define regulatory classification in the EU AI Act. The quarterly filings are worth reading not just as financial data but as infrastructure commitments with regulatory downstream implications.
