Regulation Daily Brief

Epoch AI: 12 AI Models Now Exceed the EU AI Act's Systemic Risk Threshold, With More Expected

According to Epoch AI's April 2026 Frontier Compute Report, 12 notable AI models have now exceeded the EU AI Act's 10^25 FLOP training compute threshold, the level at which models are automatically presumed to pose systemic risk under Article 51. The finding makes the EU AI Act's most consequential compliance tier concrete: the organizations operating these models now face enhanced regulatory obligations under European law.

The EU AI Act’s systemic risk tier has felt abstract since the regulation passed. A threshold expressed in scientific notation, 10^25 floating-point operations, isn’t the kind of benchmark that lands intuitively for most compliance teams. Epoch AI’s April 2026 data changes that. According to Epoch AI’s public model database, 12 notable AI models have now crossed this threshold. The tier has twelve occupants. The compliance question is no longer theoretical.

What the Threshold Means in Law

The EU AI Act establishes a tiered classification system for general-purpose AI (GPAI) models based on training compute. Models trained with more than 10^23 FLOP and capable of generating language are classified as GPAI. Models crossing the higher 10^25 FLOP threshold, confirmed by primary EU sources including the EU AI Act reference database, are automatically presumed to pose systemic risk under Article 51. That presumption is the accurate legal description: it is not a determination of harm but a legal presumption that triggers enhanced compliance obligations. The affected organization can rebut the presumption, but the burden shifts to it.

The frontier labs currently operating at this compute scale, including OpenAI, Google, Anthropic, Meta, and Mistral, are the representative entities in scope. This list isn’t exhaustive; Epoch AI’s database covers notable models, not every model in deployment.
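The two-tier scheme described above can be sketched as a simple classification rule. This is a hypothetical illustration, not an official compliance tool; the threshold values come from the article, while the function name and example FLOP figures are invented for demonstration.

```python
# Hypothetical sketch of the EU AI Act's compute-based tiers as
# described in this article. Thresholds are training-compute FLOP.
GPAI_THRESHOLD = 1e23           # 10^23 FLOP: GPAI classification
SYSTEMIC_RISK_THRESHOLD = 1e25  # 10^25 FLOP: Article 51 presumption

def classify_model(training_flop: float) -> str:
    """Return the illustrative EU AI Act tier for a given training compute."""
    if training_flop > SYSTEMIC_RISK_THRESHOLD:
        return "GPAI with presumed systemic risk (Article 51)"
    if training_flop > GPAI_THRESHOLD:
        return "GPAI"
    return "below GPAI threshold"

# Illustrative placeholder values, not real model figures:
print(classify_model(3e25))  # a frontier-scale model
print(classify_model(5e23))  # a mid-scale GPAI model
```

Note that the Article 51 presumption is rebuttable, so a classification like this is only the starting point of a legal analysis, not its conclusion.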

The Growth Signal

Epoch AI’s tracking data indicates a growing number of models have crossed this threshold. The April 2026 report characterizes this as a significant increase since late 2025; the precise percentage figure should be verified against the full published document. Epoch AI has also projected that the number of models exceeding 10^26 FLOP will continue to grow through 2026, a trend the company has noted could challenge threshold-based regulatory designs. The specific count projected at the 10^26 FLOP level for year-end 2026 likewise requires verification against Epoch AI’s full April report before being used in compliance planning.

The August 2026 Clock

The EU AI Act’s general application deadline is August 2026. For the 12 models now presumed to pose systemic risk, the obligations that apply at that deadline include adversarial testing, incident reporting, and model evaluation requirements at the systemic risk tier, requirements more extensive than those applying to GPAI models below the 10^25 FLOP threshold. Organizations that haven’t mapped their models against the compute threshold should treat Epoch AI’s April 2026 report as a catalyst for that exercise.

What to Watch

Epoch AI’s database is cumulative. The count of 12 reflects the April 2026 snapshot; that number will change as new models are trained and released. Compliance teams at frontier labs should track the database directly: epoch.ai/data/notable-ai-models. The European Commission’s guidance on systemic risk tier obligations, specifically the model evaluation framework and the adversarial testing methodology, is the next regulatory document to watch ahead of August 2026.

TJS Synthesis

Epoch AI’s compute tracking has just done something that regulatory text alone can’t: it turned a threshold into a list. Twelve models. Named frontier labs. An August 2026 deadline. For compliance teams at organizations training at frontier compute scale, this report is the clearest indicator yet that systemic risk classification isn’t a future scenario, it’s the current regulatory status. The question isn’t whether these models trigger Article 51’s presumption. They do. The question is whether the affected organizations are ready to demonstrate compliance before August.

