Regulation Deep Dive

94 Days, No Extension, 12 Models in Scope: What AI Compliance Teams Must Have Ready Before August 2

The EU AI Act's August 2 high-risk compliance deadline is now locked in. Three converging forces confirm it: EU negotiators failed to reach a deferral agreement on April 28, the US federal government is actively blocking state-level regulatory alternatives, and 12 AI models have already crossed the threshold that triggers systemic risk obligations. For compliance teams, the question is no longer whether the deadline holds; it's whether their programs will.
12 models now exceed EU AI Act systemic risk threshold
Key Takeaways
  • The EU AI Act's August 2, 2026 Annex III high-risk compliance deadline is confirmed active - no Official Journal extension has been published and the April 28 trilogue ended without agreement.
  • Epoch AI's April 2026 Frontier Compute Report confirms 12 AI models have crossed the 10²⁵ FLOP threshold, triggering GPAI systemic risk classification under Article 51, a separate framework from Annex III with its own implementation timeline.
  • The DOJ's AI Litigation Task Force, reportedly formally established May 1, 2026, is actively targeting California and Colorado state AI laws on federal preemption grounds, narrowing the US state-law alternatives compliance teams had been tracking.
  • The May 13 trilogue is the last realistic window for a political deferral agreement; compliance teams should build toward August 2 as if no extension is coming regardless of outcome.
  • Organizations deploying a systemic risk-classified model in an Annex III high-risk application face simultaneous obligations under two distinct EU AI Act frameworks with different documentation and reporting structures.
Timeline
2026-04-28 EU Digital Omnibus trilogue collapses
2026-05-01 DOJ AI Litigation Task Force reportedly formally established
2026-05-13 Last realistic trilogue window before August 2
2026-08-02 EU AI Act Annex III high-risk compliance deadline, ACTIVE
EU AI Act Framework Applicability
Annex III High-Risk
Deployers of high-risk applications (hiring, education, law enforcement, critical infrastructure). August 2, 2026 deadline. Requires FRIA, conformity assessment, EU database registration, incident reporting.
GPAI Systemic Risk
Providers of models exceeding 10²⁵ FLOP training compute. 12 models currently in scope per Epoch AI. Requires adversarial testing, model evaluations, incident reporting to EU AI Office, cybersecurity plans. Timeline follows GPAI implementation schedule, distinct from August 2.
Warning

The Annex III and GPAI systemic risk frameworks are frequently conflated. They apply to different entities (deployers vs. model providers), carry different documentation requirements, and run on different implementation timelines. Organizations subject to both face the most complex compliance posture, and the least tolerance for assumption.

Analysis

The DOJ task force creates an asymmetric compliance environment: EU obligations are tightening with a fixed deadline, while US state-level frameworks, which many compliance programs had been tracking as domestic reference points, are under active federal legal challenge. The practical effect is that EU compliance programs can no longer be calibrated against anticipated US state law convergence.

The Deadline Is Law

Ninety-four days remain before August 2, 2026. No extension has been published in the EU Official Journal. No emergency procedure has been triggered. The EU AI Act’s Annex III high-risk compliance deadline is active law, and as of April 28, the last realistic political mechanism for changing that has stalled.
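The countdown arithmetic is easy to verify. A minimal check, assuming an as-of date of April 30, 2026 (inferred from the 94-day figure; the article itself does not state its publication date):

```python
from datetime import date

DEADLINE = date(2026, 8, 2)  # Annex III high-risk compliance deadline
AS_OF = date(2026, 4, 30)    # assumed as-of date, two days after the failed trilogue

days_remaining = (DEADLINE - AS_OF).days
print(days_remaining)  # 94
```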

The April 28 political trilogue between the European Parliament, the Council of the EU, and the European Commission ended without agreement on the Digital Omnibus proposal, a package that would have deferred certain compliance obligations. According to reporting on the breakdown, the dispute centered on how the Machinery Directive interacts with EU AI Act compliance requirements. Whatever the precise cause, the result is the same: the August 2 deadline holds until an Official Journal amendment says otherwise.

One window remains. The May 13 trilogue has been identified as the last realistic opportunity for a political agreement before August 2 makes the question moot. That’s not much time, and as the April 28 experience shows, “last realistic window” does not mean “likely outcome.”

Compliance teams waiting for a political lifeline should stop waiting.

The Technical Scope Has Expanded

While negotiators have been deadlocked, the list of organizations subject to the EU AI Act’s most demanding obligations has grown.

Epoch AI’s April 2026 Frontier Compute Report confirms that 12 AI models have crossed the 10²⁵ floating-point operations (FLOP) training compute threshold. That threshold matters because Article 51 of the EU AI Act establishes it as the trigger for general-purpose AI (GPAI) systemic risk classification. Twelve organizations, or the enterprises deploying their models, are now in scope for obligations they may not have been tracking when the Act was first published.

It’s worth being precise about what “in scope” means here, because the GPAI systemic risk framework and the Annex III high-risk framework are distinct.

Annex III covers specific high-risk application categories: AI used in hiring and employment management, educational access decisions, critical infrastructure management, law enforcement, and similar sensitive domains. Organizations deploying systems in these categories face mandatory conformity assessments, fundamental rights impact assessments, registration in the EU database, and ongoing post-market monitoring. The August 2 deadline applies directly to these obligations.

GPAI systemic risk classification, triggered by the 10²⁵ FLOP threshold, applies to the providers of the underlying models, not necessarily every organization that uses them. Under the EU AI Act’s 2026 implementation schedule, systemic risk providers face requirements including adversarial testing, model evaluation, incident reporting to the EU AI Office, and cybersecurity protections for model weights. The precise timing of these obligations follows its own implementation track and should not be assumed to be identical to the August 2 Annex III date.
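For a sense of where the 10²⁵ FLOP line falls, a common back-of-envelope estimate for dense transformer training compute is C ≈ 6 × N × D (parameters times training tokens). This heuristic is not the Act's official measurement methodology, and Epoch AI's accounting is more careful; the model sizes below are hypothetical, chosen only to illustrate the order of magnitude:

```python
# Rough training-compute screen against the Article 51 threshold.
# Heuristic: C ≈ 6 * N * D FLOP for a dense transformer
# (N = parameter count, D = training tokens). Illustrative only;
# NOT the EU AI Act's official compute-measurement methodology.

THRESHOLD_FLOP = 1e25  # Article 51 systemic-risk trigger

def estimated_training_flop(params: float, tokens: float) -> float:
    """Back-of-envelope estimate covering forward and backward passes."""
    return 6 * params * tokens

# Hypothetical model configurations, for illustration only:
midsize = estimated_training_flop(params=70e9, tokens=15e12)   # ~6.3e24
frontier = estimated_training_flop(params=400e9, tokens=15e12) # ~3.6e25

print(midsize >= THRESHOLD_FLOP)   # False: below the systemic-risk line
print(frontier >= THRESHOLD_FLOP)  # True: in scope under Article 51
```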

The practical implication: if your organization deploys one of the 12 flagged models in a high-risk Annex III application, you may face compliance obligations under both frameworks simultaneously, and they have different documentation and reporting structures.

The US Counter-Move

While the EU clock runs, something notable is happening in the United States: the federal government is narrowing the field of regulatory alternatives.

The Department of Justice has reportedly formalized its AI Litigation Task Force, following weeks of reporting that the unit was already operational. The task force's stated mission is to challenge state AI regulations on federal preemption and interstate commerce grounds. California and Colorado have been identified in prior coverage as specific targets, with Colorado most recently cited in the context of its AI hiring law.

The legal theory is straightforward: the federal government argues that a patchwork of state-level AI regulations creates unconstitutional burdens on interstate commerce, and that federal policy, specifically the White House’s National Policy Framework for AI, should preempt conflicting state requirements. Whether that theory succeeds in court is a separate question, but the enforcement posture is now clear: the DOJ is actively litigating it.

For companies subject to both EU AI Act requirements and US state AI laws, a situation that describes most large enterprises with transatlantic operations, this creates a specific compliance problem. The EU is tightening a deadline while the US is disrupting the state frameworks many compliance programs were quietly relying on as domestic analogues. You can’t replace August 2 with “wait for a US state law” anymore. The stakeholder positions on federal preemption have hardened, and the DOJ’s task force is the enforcement mechanism that makes those positions real.

Note that the May 1 formal establishment date for the task force is based on reporting that follows weeks of prior coverage describing the unit as already operational. Compliance teams should treat the task force as active regardless of which specific date the formal announcement carries.

What Compliance Teams Must Have Ready Right Now

For Annex III deployers, the checklist before August 2 is not theoretical. The obligations are defined, the deadline is active, and the remaining time is 94 days.

Fundamental rights impact assessments (FRIAs)

Deployers of high-risk AI systems must complete and document these before deployment or, for systems already in use, before August 2. The FRIA must identify affected populations, assess the system’s impact on their rights, and document the human oversight mechanisms in place. This is the most time-intensive documentation requirement and the one most frequently underestimated.

Conformity assessments

For most Annex III high-risk systems, the required conformity assessment pathway is internal; no third-party notified body is required unless the system falls into specific categories (biometric identification in particular). But “internal” doesn’t mean informal. The assessment must be documented, repeatable, and defensible to the EU AI Office on request.

EU database registration

High-risk AI systems must be registered in the EU AI Act database. Providers register their systems; deployers register their use. Both obligations run to August 2. If you are a deployer using a third-party provider’s high-risk system, confirm that your provider has registered the system and that your own deployment record is complete.

Incident reporting procedures

Post-deployment monitoring and serious incident reporting to national supervisory authorities must be operational, not merely planned, by August 2. This requires a designated responsible person, a clear definition of what constitutes a “serious incident” for your specific system, and a tested reporting pathway.
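The four deployer obligations above lend themselves to a simple readiness tracker. A minimal sketch, assuming nothing beyond the items listed in this section (the field names are hypothetical, not terms from the Act):

```python
from dataclasses import dataclass, fields

@dataclass
class AnnexIIIReadiness:
    """One flag per deployer obligation named in this section."""
    fria_documented: bool = False          # fundamental rights impact assessment
    conformity_assessed: bool = False      # documented, repeatable, defensible
    eu_database_registered: bool = False   # provider + deployer records complete
    incident_reporting_live: bool = False  # tested pathway, responsible person named

    def gaps(self) -> list[str]:
        """Obligations not yet satisfied, in declaration order."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

status = AnnexIIIReadiness(fria_documented=True, eu_database_registered=True)
print(status.gaps())  # ['conformity_assessed', 'incident_reporting_live']
```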

For GPAI systemic risk providers, the action items are different. Adversarial testing documentation, model capability evaluations, and cybersecurity plans for model weight protection are the core requirements under the EU AI Act’s implementation schedule. The EU AI Office has indicated it will begin direct engagement with identified systemic risk providers. If your model is among the 12 Epoch AI has confirmed in scope, that engagement is coming.

The May 13 Window: Three Realistic Outcomes

The May 13 trilogue matters because it is the last point at which a political agreement could result in an Official Journal publication that modifies August 2 obligations with meaningful lead time. After May 13, even a successful agreement would face publication and implementation lag that makes August 2 relief increasingly theoretical.

Three outcomes are plausible:

Outcome 1: Agreement is reached

The Machinery Directive dispute is resolved, the Digital Omnibus package passes, and some compliance obligations are deferred. In this scenario, the precise scope of any deferral matters enormously: a deferral of GPAI systemic risk requirements is categorically different from a deferral of Annex III high-risk requirements. Compliance teams should not assume any agreement covers their specific obligations without reading the Official Journal publication carefully.

Outcome 2: May 13 also fails

Talks collapse again. At this point, the political pathway is exhausted before August 2. Enforcement discretion from national supervisory authorities becomes the operative variable, and enforcement discretion is not the same as a legal safe harbor. The EU AI Office has signaled it intends to begin active supervision.

Outcome 3: A narrower agreement

Negotiators reach a partial deal that resolves specific disputed provisions but leaves others intact. This is arguably the most likely outcome if agreement is reached at all, and it’s the scenario that creates the most compliance complexity, because it requires a careful read of exactly which obligations changed and which did not.

The compliance-sound position in all three scenarios is the same: build toward August 2 as if no extension is coming. If an extension arrives, it becomes a gift. If it doesn’t, your program is ready.

Here’s the question worth carrying into your next compliance review: if the May 13 window also closes without agreement, do you have a documented position on which Annex III obligations your organization’s systems are subject to, and evidence that you’ve acted on it? That documentation is what distinguishes a compliance program from a compliance intention.
