August 2026 is still the law. That sentence needs to lead everything that follows, because the most dangerous compliance response to this story is to treat a proposed delay as a finalized one. The positions adopted by the European Parliament and Council as part of the AI Omnibus would postpone core obligations for high-risk AI systems, but those positions aren’t enacted legislation. Until the AI Omnibus process concludes and formal amendments take effect, the EU AI Act’s original high-risk system deadline governs.
With that foundation established: Tech Policy Press reported on April 2, 2026, that the legislative process has produced two concrete alternatives, December 2027 and August 2028, and that the structural non-retroactivity of the AI Act transforms a timeline shift into something with permanently different consequences. Understanding those consequences requires working through each scenario explicitly.
Where the Process Stands
The AI Omnibus, formally the Digital Omnibus, is a legislative package moving through the European Parliament and Council that addresses multiple areas of EU digital law. The AI Act’s high-risk system obligations are among the components under amendment. Michael McNamara serves as Co-Rapporteur for the Digital Omnibus on AI in the European Parliament, making him a central figure in how the AI Act provisions ultimately take shape.
Positions have been adopted by both the Parliament and the Council, but the two institutions must reconcile those positions before legislation is finalized. That reconciliation process (trilogue, in EU legislative terminology) means the December 2027 and August 2028 options are not competing public choices; they are positions that must be resolved into a single outcome. Either date could emerge, and a negotiated middle date is also possible. The August 2026 deadline holds until that process concludes.
One element the Parliament has added to its proposals is a ban on nudifier apps, tools that generate non-consensual intimate imagery. That ban sits alongside the delay provisions, not in place of them. It reflects the Parliament’s interest in using the Omnibus as a vehicle for additions, not just amendments. The final Omnibus text may look quite different from the positions currently on the table.
The Non-Retroactivity Problem
Here is the structural issue that makes the delay more than an administrative timeline shift. The EU AI Act is not retroactive. A high-risk AI system placed on the EU market before the compliance deadline takes effect is not required to meet the law's requirements after deployment. It enters a legal status (compliant by timing, not by design) that the law cannot retroactively reach.
Analysts warn that this means high-risk AI systems deployed before whichever new deadline ultimately takes effect could permanently remain outside the AI Act’s oversight requirements. “Permanently” is a strong word, but it’s the analytically accurate one: unless the EU enacts subsequent legislation specifically requiring retroactive compliance (a significant political and legal undertaking), systems that deploy in the gap are structurally exempt.
This isn't a theoretical concern. The gap between August 2026 and December 2027 is 16 months. The gap to August 2028 is two years. High-risk AI systems are already being deployed in EU markets. Medical AI tools, recruitment algorithms, and biometric identification systems (the very categories the law was explicitly designed to govern) can enter the market during that window. Tech Policy Press notes that critics argue this weakens the law at a critical moment for AI governance; that position reflects the analytical community's concern rather than a settled regulatory determination, but it is grounded in the law's actual text.
What Counts as High-Risk: A Scope Question That Isn’t Settled
Before modeling compliance scenarios, organizations need clarity on which systems they’re modeling for. The AI Act’s high-risk classification covers systems used in areas including critical infrastructure, education, employment, essential services, law enforcement, border control, and the administration of justice and democratic processes.
According to Tech Policy Press reporting, the Omnibus debate includes active discussion about overlap between the AI Act and sector-specific legislation. Medical devices, toys, and connected cars are among the categories where sector-specific regulatory frameworks may apply instead of, or in addition to, the AI Act's high-risk requirements. Organizations in those sectors face an additional analytical layer: does the AI Act apply to their systems, the sector-specific regime, or both? That question has not been resolved in the current legislative positions and should be tracked explicitly as the Omnibus process continues.
Three Scenarios, Three Compliance Implications
Here is what each outcome means for an organization with an active EU AI Act compliance program targeting high-risk systems.
Scenario A: No delay, August 2026 holds. The AI Omnibus process stalls, or Parliament and Council reach an impasse that leaves the original deadline in force. Organizations that maintained compliance program momentum are well positioned. Organizations that stood down based on delay reports face a compressed timeline with no regulatory relief. Probability: lower than it was six months ago, but not zero. Treat this as the baseline your program must be capable of meeting.
Scenario B: December 2027 delay. The more conservative of the two proposed options adds 16 months to the August 2026 baseline. For organizations in early or mid-compliance stages, this is a meaningful extension. For organizations already in late-stage readiness, it's an opportunity to stress-test and refine rather than rush to finish. The non-retroactivity implication is significant: systems deployed between August 2026 and December 2027 under this scenario are in the gap. If your organization plans to deploy high-risk systems in EU markets before December 2027, this scenario isn't a reprieve; it's a window with permanent consequences for those systems.
Scenario C: August 2028 delay. The longer option adds two full years. It is the option most favorable to vendors with systems in or approaching the EU market. It also creates the longest non-retroactivity gap. An organization that deploys a high-risk system in January 2027 under this scenario is deploying a system that may never face EU AI Act compliance requirements, regardless of how the law ultimately evolves. For compliance professionals advising business units on EU deployment timelines, that framing matters: the question isn’t just “when does the deadline hit” but “is the system we’re deploying now in scope when the deadline does hit.”
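The scenario logic above reduces to a single date comparison: a system deployed on or after the operative deadline is in scope, and one deployed before it sits in the non-retroactivity gap. A minimal sketch, assuming the deadlines reported above (the exact day-of-month for each deadline is an illustrative assumption, not from the reporting):

```python
from datetime import date

# Candidate deadlines from the reporting; day-of-month is illustrative.
# August 2026 remains the operative deadline until the Omnibus concludes.
SCENARIOS = {
    "A_no_delay": date(2026, 8, 1),
    "B_dec_2027": date(2027, 12, 1),
    "C_aug_2028": date(2028, 8, 1),
}

def in_scope(deployment_date: date, deadline: date) -> bool:
    """A system deployed on or after the operative deadline must comply;
    one deployed before it is in the non-retroactivity gap."""
    return deployment_date >= deadline

# The article's January 2027 example: in scope only if no delay is enacted.
deploy = date(2027, 1, 15)
for name, deadline in SCENARIOS.items():
    status = "in scope" if in_scope(deploy, deadline) else "gap (exempt by timing)"
    print(f"{name}: {status}")
```

The point the comparison makes concrete: under Scenarios B and C, the same January 2027 deployment lands permanently outside the compliance window.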
What Compliance Teams Should Do Now
Four concrete actions follow from this analysis.
First: don’t stand down August 2026 programs. The delay isn’t final. The cost of maintaining readiness is lower than the cost of a compressed rebuild if the original deadline holds.
Second: build a high-risk system inventory with deployment timeline annotations. For every system your organization plans to deploy in EU markets before August 2028, document whether it would fall inside or outside the compliance window under each scenario. That inventory becomes the basis for board-level risk disclosure if a delay does finalize.
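An inventory with per-scenario annotations can be sketched in a few lines. Everything here is hypothetical by way of example: the system names, planned deployment dates, and exact deadline days are placeholders for an organization's own records.

```python
from datetime import date

# Hypothetical inventory entries: system name and planned EU deployment date.
inventory = [
    {"system": "recruitment-screener", "planned_eu_deploy": date(2026, 11, 1)},
    {"system": "biometric-gate",       "planned_eu_deploy": date(2028, 1, 15)},
]

# Candidate deadlines under each scenario (day-of-month illustrative).
deadlines = {
    "Aug 2026 (current)":  date(2026, 8, 1),
    "Dec 2027 (proposed)": date(2027, 12, 1),
    "Aug 2028 (proposed)": date(2028, 8, 1),
}

def annotate(entry):
    # For each scenario, record whether the planned deployment lands inside
    # the compliance window ("in scope") or before it ("gap").
    entry["scenarios"] = {
        label: "in scope" if entry["planned_eu_deploy"] >= d else "gap"
        for label, d in deadlines.items()
    }
    return entry

for row in map(annotate, inventory):
    print(row["system"], row["scenarios"])
```

A table like this, one row per system with three scenario columns, is the artifact that supports board-level risk disclosure if a delay finalizes.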
Third: track the sector-specific exemption debate. If your high-risk systems are in medical devices, automotive, or consumer products, the Omnibus process may produce a different compliance framework than the one you’ve been planning against. Don’t assume the AI Act’s high-risk categories are stable until the Omnibus is finalized.
Fourth: annotate the EU AI Act deadline entries in your compliance calendar to reflect proposed delay status. If your calendar shows August 2026 as a hard date, it’s now more accurately described as “August 2026 (current operative deadline, proposed delay to December 2027 or August 2028 pending AI Omnibus finalization).” That annotation protects against the organizational failure mode of planning against a deadline that legislative reporting has already flagged as potentially moot.
The EU AI Act delay story isn’t about whether regulators are backing down from AI oversight. It’s about the structural reality that non-retroactive law combined with a multi-year delay creates a permanent market entry window for high-risk systems that the law was designed to govern. The organizations that understand that structural fact, and build their deployment decisions around it, are the ones whose compliance programs will still make sense regardless of which deadline ultimately lands.