Technology Deep Dive

The 44x Problem: What Epoch AI's Compute Data Means When the Regulatory Baseline Shifts Every Two Weeks

The governance frameworks being built to contain frontier AI risk were designed around numbers. Compute thresholds, model counts, infrastructure concentration metrics: these are the architecture of regulatory accountability. Epoch AI's May 2026 update shows those numbers moving faster than any framework can track them. The compliance teams working from last month's data are already working from the wrong baseline.
44x annual compute growth (2023–2026) vs. the 3.8x 2010–2022 baseline (Epoch AI)
Key Takeaways
  • The 44x annual compute growth rate (2023–2026) is a structural break from the 3.8x baseline (2010–2022); current compliance programs were calibrated to a slower-moving target
  • The 37% annual efficiency gain accelerates threshold crossings by lowering training cost; efficiency and growth are running in the same compliance direction, not opposite ones
  • The 12-to-30+ model count jump in 12 days has two plausible explanations (new model releases or a methodology revision), each with different implications for how compliance teams treat the threshold count as a planning variable
  • Infrastructure investors and compliance teams are both working from baselines that the 44x growth rate is making obsolete faster than institutional processes can update them
Compute Growth vs. Regulatory Threshold Count (Epoch AI data; all figures qualified, no working URL)

2010–2022 compute growth: ~3.8x/year
2023–2026 compute growth: ~44x/year
Efficiency gain (petaFLOP/$): ~37%/year
Models above 10^25 FLOP (April 20, 2026): 12
Models above 10^25 FLOP (May 2, 2026): 30+ (drivers unconfirmed)
Largest known data center: >1.1 GW (Anthropic-Amazon)
Analysis

Two explanations for the 12-to-30+ threshold jump: (1) rapid new model releases in a 12-day window, consistent with 44x annual growth; (2) Epoch AI methodology revision updating compute estimates for existing models. The second explanation means organizations can be reclassified as systemic risk providers based on updated accounting, not new releases. Compliance programs need to treat the threshold count as a dynamic variable.

Warning

The EU AI Act's 10^25 FLOP threshold was designed for a small population of frontier labs. At 30+ models and growing, the compliance obligations attached to systemic risk designation are migrating toward organizations that didn't build compliance infrastructure for that classification. The deadline doesn't adjust for scope expansion.

Twelve to thirty. That’s how many models moved above the EU AI Act’s systemic risk threshold between April 20 and May 2, 2026, a span of twelve days.

Epoch AI’s May 2026 update doesn’t just report growth. It reveals that the numbers underpinning current regulatory architecture, compliance programs, and infrastructure investment models are changing faster than the institutions relying on them can absorb. Every stakeholder in this landscape, from compliance teams at GPAI model providers to infrastructure investors to the EU AI Office writing implementation guidance, is, to some degree, working from a baseline that’s already out of date.

The Numbers, and What They Mean

Three figures from the May 2026 update deserve close attention, in sequence:

Frontier training compute grew approximately 44 times per year between 2023 and 2026, compared to approximately 3.8 times per year in the 2010–2022 baseline. That's not an acceleration of the existing trend. It's a break. The 2010–2022 period was itself a period of historically rapid compute growth: the AlexNet era, the transformer scaling era, and the GPT-3 era all fit within that baseline. The current period is running more than 11 times faster than that already-fast baseline.
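
A quick back-of-the-envelope check of the gap between the two regimes (a sketch; the growth multipliers are Epoch AI's reported figures, the derived numbers are simple arithmetic):

```python
import math

# Epoch AI's reported annual growth multipliers for frontier training compute
baseline_growth = 3.8   # 2010-2022
current_growth = 44.0   # 2023-2026

# How many times faster the current regime is than the old one
ratio = current_growth / baseline_growth
print(f"Current regime vs. baseline: {ratio:.1f}x faster")  # ~11.6x

# Years of baseline-era growth needed to match ONE year at the current rate:
# solve 3.8^n = 44  ->  n = ln(44) / ln(3.8)
years_equivalent = math.log(current_growth) / math.log(baseline_growth)
print(f"One year at 44x ≈ {years_equivalent:.1f} years of 2010-2022 growth")  # ~2.8
```

In other words, a single year of the current regime packs in roughly three years of what was already the fastest compute buildout in computing history.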

Compute efficiency (petaFLOP per dollar) is improving at approximately 37% per year, per Epoch AI's data. This sounds like a moderating force. It isn't. Lower cost per FLOP means more organizations can afford to train at scales that previously required top-tier hyperscaler resources. The efficiency gain is a threshold-crossing accelerant, not a counterbalance. When training a 10^25 FLOP model becomes cheaper, more organizations do it.
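
To see why the efficiency gain accelerates threshold crossings, consider the cost of a fixed 10^25 FLOP training run under a 37% annual improvement in petaFLOP per dollar (a sketch; only the relative decline matters, no absolute dollar figure is assumed):

```python
import math

efficiency_gain = 1.37  # petaFLOP/$ improves ~37%/year (Epoch AI)

# The cost of a fixed-size run falls by a factor of 1/1.37 per year.
# Years for the cost of the same 10^25 FLOP run to halve:
halving_years = math.log(2) / math.log(efficiency_gain)
print(f"Cost of a fixed run halves every ~{halving_years:.1f} years")  # ~2.2

# Years until an organization with a 10x smaller budget can afford the same run:
tenfold_years = math.log(10) / math.log(efficiency_gain)
print(f"Same run is 10x cheaper in ~{tenfold_years:.1f} years")  # ~7.3
```

At that rate, the population of organizations that can reach the threshold roughly doubles its budget-equivalent reach every couple of years, which is exactly the compliance-scope expansion the rest of this piece describes.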

The 10^25 FLOP threshold itself, the trigger for mandatory systemic risk designation under the EU AI Act, now captures more than 30 models, according to Epoch AI’s May data. As recently as late April, Epoch AI’s own tracking showed 12. The specific explanation for the jump has not been confirmed by Epoch AI in this package: it could reflect rapid new model releases in the intervening period, a refinement to Epoch’s counting methodology, or both. The uncertainty about the cause matters, and we’ll return to it. But the directional signal is consistent with the 44x growth rate: the population of models triggering mandatory regulatory obligations is expanding faster than compliance architecture was designed to handle.

The Regulatory Framework Running Behind the Curve

The EU AI Act’s 10^25 FLOP threshold was calibrated to identify a small number of the most capable models, what the Act calls general-purpose AI models with systemic risk. The regulatory obligations attached to that threshold are substantial: mandatory adversarial testing, incident reporting, transparency requirements, and ongoing model evaluation. These were designed as obligations for a handful of frontier labs. Thirty-plus models changes that picture materially.

The GPAI model compliance deadlines are fixed. The Act doesn’t pause because the threshold population grew faster than expected. Organizations that trained past the 10^25 FLOP mark in the last month are newly subject to mandatory obligations whether or not their compliance programs were built for that designation. The gap between the regulatory timeline and the compute growth rate is real, and it’s widening.

There’s a structural irony in the efficiency data. Compliance under the EU AI Act was partly premised on the idea that most organizations wouldn’t reach systemic risk thresholds. The compute cost of doing so was a natural barrier. As the 37% annual efficiency gain compresses that cost, the natural barrier falls. The regulatory burden designed for a few organizations is migrating toward a wider population, and that population includes many organizations that haven’t built the compliance infrastructure to support it.

The Efficiency Paradox

The 44x growth rate and the 37% efficiency gain are running in the same direction from a compliance perspective, even though they look like offsetting forces.

More compute per dollar means more model training at any given cost threshold. More model training means more models crossing 10^25 FLOP. More models crossing the threshold means more organizations with mandatory systemic risk obligations. The cost reduction doesn't reduce the compliance burden for the sector; it distributes it more broadly.
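
That distribution effect can be made concrete with a toy model. The budgets and the starting cost of a threshold-scale run below are entirely hypothetical; the point is the shape of the curve, not the figures:

```python
# Hypothetical annual training budgets for a fleet of organizations (USD, illustrative only)
budgets = [3e9, 1e9, 6e8, 4e8, 2.5e8, 1.5e8, 8e7, 4e7]

cost_of_threshold_run = 1e9   # assumed year-0 cost of a 10^25 FLOP run (made up)
efficiency_gain = 1.37        # ~37%/year petaFLOP/$ improvement (Epoch AI)

for year in range(6):
    cost = cost_of_threshold_run / efficiency_gain ** year
    able = sum(1 for b in budgets if b >= cost)
    print(f"Year {year}: run costs ${cost:,.0f} -> {able}/{len(budgets)} orgs in reach")
```

No organization's budget changes in this model; the count of organizations above the line still climbs from 2 to 5 over six years, purely from the cost side. That is the efficiency gain acting as a threshold-crossing accelerant.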

Infrastructure investors reading the 44x growth rate as a pure demand signal should be aware that it also implies continued capex expansion: the largest known AI data center, per Epoch AI’s tracking, is the Anthropic-Amazon joint infrastructure exceeding 1.1 GW of capacity. That’s a scale figure that sits at the intersection of energy infrastructure, compute concentration, and, through the EU AI Act’s systemic risk framework, regulatory accountability. The infrastructure building the compute is the same infrastructure that triggers the compliance obligations. Those two things aren’t usually on the same balance sheet in traditional infrastructure investing.

The Threshold Count Discrepancy: What It Signals

The 12-to-30+ jump deserves specific attention rather than being smoothed over as routine data variance. Two explanations are plausible. The first is straightforward: multiple frontier and near-frontier models were released or publicly documented in the twelve-day window, and they collectively pushed the count above 30. This explanation is consistent with the 44x annual growth rate: at that pace, new threshold-crossing models aren't annual events; they're potentially weekly ones.

The second explanation is a methodology refinement. Epoch AI updates its model database continuously, and threshold counts can shift when training compute estimates for existing models are revised. If the jump reflects a retroactive re-estimation of previously counted models, the compliance implication is different: organizations that were not previously designated as systemic risk providers may find themselves reclassified based on updated compute accounting, not new model releases.

Either explanation is consequential. The first tells compliance teams the scope of their obligations is growing in real time. The second tells them the scope can be revised retroactively. Both require compliance programs to treat the threshold count as a dynamic variable rather than a fixed regulatory parameter.
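
Operationally, treating the threshold count as a dynamic variable means re-running classification whenever a compute estimate is revised, not only when a model ships. A minimal sketch of that logic (the record structure and function names are assumptions for illustration, not a real compliance API):

```python
from dataclasses import dataclass
from typing import Optional

EU_SYSTEMIC_RISK_FLOP = 1e25  # EU AI Act trigger for systemic-risk designation

@dataclass
class ModelRecord:
    name: str
    estimated_flop: float  # current best training-compute estimate

def in_scope(record: ModelRecord) -> bool:
    """True if the model's CURRENT estimate crosses the threshold."""
    return record.estimated_flop >= EU_SYSTEMIC_RISK_FLOP

def on_estimate_revision(record: ModelRecord, new_estimate: float) -> Optional[str]:
    """Re-classify on a revision: a retroactive re-estimation can
    reclassify a model with no new release involved."""
    was_in_scope = in_scope(record)
    record.estimated_flop = new_estimate
    if not was_in_scope and in_scope(record):
        return f"{record.name}: newly in scope via re-estimation"
    if was_in_scope and not in_scope(record):
        return f"{record.name}: dropped out of scope via re-estimation"
    return None  # no classification change

# A model previously counted below the line gets a revised estimate
m = ModelRecord("example-model", 8e24)
print(on_estimate_revision(m, 1.2e25))  # example-model: newly in scope via re-estimation
```

The design point is the second function: if classification is only evaluated at release time, a methodology revision like the one possibly behind the April-to-May jump goes unnoticed until a regulator notices it first.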

What Changes Next

Three developments are worth tracking against this data.

First, Epoch AI’s methodology note. The specific drivers of the April-to-May threshold count increase will shape how compliance teams and regulators respond. Watch for Epoch AI to publish a methodology update or data note explaining the jump. That document, when it arrives, is likely to become a reference source for EU AI Act compliance programs.

Second, the EU AI Office’s implementation guidance. The Office has been developing technical guidance on GPAI model evaluation requirements. If that guidance was calibrated around a 12-model scope, a 30+ model count changes the resource and enforcement picture materially. Regulatory bodies don’t move at 44x annual pace, but the guidance will eventually need to account for a threshold population that’s an order of magnitude larger than the initial design assumption.

Third, the infrastructure investment cycle. Hyperscaler capex commitments are locking in multi-year infrastructure at scale. At 44x annual compute growth, the infrastructure being built now will be running models that don’t yet exist at scales that current regulatory frameworks haven’t yet modeled. The organizations making 5- and 10-year compute commitments are effectively betting that the governance architecture will catch up.

Whether it does is a question the EU AI Act’s five-year review mechanism was designed to answer. Based on Epoch AI’s May 2026 data, the review will have material to work with.
