The four largest US tech companies are on track to spend roughly $650 billion on capital expenditures this year, according to Bloomberg, a figure directed primarily at AI data center construction, compute hardware, and supporting infrastructure. CNBC has reported a figure approaching $700 billion for the same group; the range across major outlets sits between $635 billion and $700 billion, with Bloomberg’s $650 billion representing the T2 anchor figure.
The scale of these facilities is significant on its own terms. According to Climate Change News, hyperscale AI facilities can exceed 100 megawatts of power demand, a level of continuous draw some analysts compare to the electricity use of a city of 100,000 households. That demand is placing measurable strain on electrical grids in regions where these facilities are concentrated.
NVIDIA’s GTC 2026 announcements reinforce the infrastructure picture. Per NVIDIA’s newsroom, the company announced a full-stack AI cloud partnership with Nebius and a long-term gigawatt-scale infrastructure initiative with Thinking Machines Lab. Those commitments add another layer to an already large capital deployment picture.
Two important framing notes. First, the Climate Change News piece was published March 3 and updated March 6; it provides essential context on the energy demand trajectory rather than reporting a new development this week. Second, on the NVIDIA survey data: NVIDIA’s 2026 State of AI report, based on more than 3,200 enterprise respondents, indicates broad expectations for AI budget growth across industries; specific percentages from that report should be treated as vendor-attributed figures pending independent confirmation.
For the grid-strain and cost-allocation dimensions of this story, see the existing TJS brief “AI Data Centers Are Breaking the Power Grid. Who Pays Is Now a Political Fight.” This brief covers the investment scale; that one covers who bears the cost.