$22 billion to $52 billion. One year.
Per Epoch AI’s Chip Components Explorer, total spending on AI chip components more than doubled between 2024 and 2025, a $30 billion year-over-year increase that represents the fastest single-year cost escalation in the hardware layer of the AI stack. Epoch AI is the primary source for this data; no comparable tracker exists at this granularity.
HBM is the bottleneck. High-Bandwidth Memory accounted for approximately $20 billion of that $30 billion increase. That’s not a supporting character in the cost story; it’s the lead. HBM is the interface between a chip’s compute cores and the data those cores need to process. As AI models have grown larger and inference workloads have intensified, HBM capacity has become the binding constraint on what chips can actually do in production.
The Nvidia B300 is reported to feature approximately 288GB of HBM3E memory, roughly double the capacity of the H200 predecessor, per Epoch AI’s component tracking data. That specification reflects where the market is heading: more HBM per chip, at higher cost per unit, with supply concentrated among a small number of manufacturers.
[Chart: HBM Capacity (flagship chip)]
Why this matters for infrastructure investors. HBM is produced by three companies: SK Hynix, Samsung, and Micron. Nvidia’s roadmap depends on HBM delivery schedules that these manufacturers control. When $20 billion of a $30 billion cost increase flows through three supply chain points, infrastructure cost planning becomes a function of semiconductor manufacturing capacity, not just AI model demand.
This is the third data point in the current reporting cycle pointing to AI infrastructure capital concentration, following reported hyperscaler backlog commitments and Challenger workforce attribution data. The direction is consistent: capital is concentrating in the infrastructure layer, costs are rising, and supply chain dependency is narrowing. Epoch AI’s component-level data is the most granular published evidence of that concentration to date.
The practical implication. Enterprise buyers pricing AI deployment over a multi-year horizon are now working with a hardware cost baseline that rose 136% in a single year. Whether 2025’s rate of increase continues into 2026 is the open question. The Chip Components Explorer gives infrastructure planners a public tool to track that answer as new data arrives.
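The 136% baseline follows directly from the Explorer’s topline figures; a quick sketch of the arithmetic (variable names are illustrative, figures are the rounded totals cited above):

```python
# Topline component-spend figures cited from Epoch AI's Chip Components Explorer
spend_2024 = 22e9   # ~$22B total in 2024
spend_2025 = 52e9   # ~$52B total in 2025

increase = spend_2025 - spend_2024          # ~$30B year over year
pct_increase = increase / spend_2024 * 100  # ~136% single-year rise

hbm_increase = 20e9                         # HBM's reported portion of the increase
hbm_share = hbm_increase / increase * 100   # ~67% of the YoY increase

print(f"YoY increase: ${increase / 1e9:.0f}B ({pct_increase:.0f}%)")
print(f"HBM share of increase: {hbm_share:.0f}%")
```

The same arithmetic explains why HBM, at roughly two-thirds of the year-over-year increase, is treated here as the lead of the cost story rather than a supporting line item.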
What to watch. SK Hynix and Samsung HBM production announcements are the leading indicator for whether the $20 billion HBM cost concentration of 2025 expands, stabilizes, or contracts in 2026. Watch also for whether Nvidia’s next-generation roadmap announcements include HBM capacity figures; those disclosures will give a forward view on whether component costs are still accelerating or beginning to plateau.
TJS synthesis. Epoch AI’s tracker gives AI infrastructure analysis something it has lacked: component-level spending data that doesn’t originate from vendor earnings calls. The $22B-to-$52B figure is the first independently sourced evidence of how fast the hardware build is actually moving. Watch the 2026 edition of this data: if HBM’s share of component spend stays above 60%, the supply chain concentration risk isn’t resolving. If it drops, that signals either manufacturing expansion or a shift in chip architecture that reduces HBM dependency.