Three dates now govern EU AI Act compliance. Organizations that planned around a single high-risk deadline are working from an outdated map.
The political agreement reached on March 11, 2026 between EU member states and European Parliament lawmakers restructures the Act’s enforcement timeline while adding a prohibition that takes effect on adoption. Neither development is optional. Both require immediate attention from compliance and legal teams operating in EU markets.
What Was Agreed
The amendment package addresses three distinct areas: a ban on non-consensual AI-generated sexual content, extended deadlines for high-risk system compliance, and provisions intended to ease requirements for AI embedded in sector-regulated products. The first is a tightening. The second and third are reliefs. All three stem from a political agreement, not from enacted law. The official amended text has not been published.
That distinction matters. Compliance teams should treat these provisions as the probable final framework, not the confirmed one. The practical response is to plan against these dates while tracking the formal adoption process.
The Deepfake Ban: Scope and Planning Implications
The prohibition covers AI systems that generate non-consensual sexual or intimate imagery and child sexual abuse material. This is not limited to dedicated deepfake tools. Any general-purpose system capable of producing prohibited content falls within the ban’s scope, including generative AI platforms, image generation APIs, and multimodal models deployed in EU markets.
Organizations providing these systems need to assess whether their content filtering, output controls, and acceptable-use policies address this prohibition before formal adoption. Waiting for enacted text to begin that assessment is a planning error.
The New Deadline Map: Three Tracks
The political agreement creates a three-track compliance schedule:
Track 1, Transparency rules for AI-generated content: August 2, 2026. This date is unchanged from the original Act and is confirmed by the European Commission’s official timeline. Providers and deployers of generative AI must label AI-generated content by this date. This deadline is active regardless of the amendments.
Track 2, High-risk AI systems listed in Annex III (standalone): December 2, 2027, under the proposed amendments. These are systems used in high-stakes contexts including recruitment, credit scoring, biometric identification, law enforcement applications, and critical infrastructure. The original Act set an earlier date; the amendment package extends the timeline. The December 2027 date is corroborated by multiple sources, including legal analysis from Cooley and the artificialintelligenceact.eu reference text.
Track 3, High-risk AI embedded in regulated products: August 2, 2028, under the proposed amendments. Systems embedded in medical devices, industrial machinery, and other products already governed by sector regulations fall here. These organizations were already subject to pre-market conformity requirements under their sector rules. The amendment acknowledges that and extends their timeline.
The exact length of the extension from the prior deadlines is not confirmed and is therefore not stated here. What is confirmed: the amended schedule represents a material extension for high-risk AI systems.
Who Gets Relief, and What the Sector Exemption Actually Means
The political agreement includes provisions intended to ease compliance for AI embedded in sector-regulated products. Medical device manufacturers, industrial equipment makers, and organizations in other heavily regulated product categories are the primary beneficiaries.
One critical qualification: the specific scope of this exemption awaits the final legislative text. The direction is confirmed. The details are not. Organizations should not communicate a compliance relief position to stakeholders based on the political agreement alone. The practical stance is: flag the likely exemption in internal planning documents, continue current compliance preparation, and update once the final text is published.
What’s Still Missing
Four things remain unresolved as of March 14, 2026:
First, the final legislative text. The political agreement triggers the drafting of formal amendments. Until that text is published in the EU Official Journal, no provision carries legal force.
Second, the sector exemption scope. Which sector-regulated products qualify, and under what conditions, is not confirmed.
Third, enforcement guidance. National competent authorities have not yet published guidance on how they will apply the new timeline. Organizations with multi-member state operations should monitor guidance from the relevant authorities in their primary markets.
Fourth, the Code of Practice finalization. The August 2, 2026 transparency deadline applies, and the Code of Practice for AI-generated content labeling was in second-draft stage as of early March. Organizations should track the Code’s finalization timeline alongside the amendment process.
What Compliance Teams Should Do Now
Map your systems to the three deadline tracks. That mapping is the foundation of every planning decision that follows. Systems touching Track 1 have less than five months to the August 2026 transparency deadline; there is no extension there. Systems in Tracks 2 and 3 have more time under the proposed amendments, but the amended text is not final and planning should not stop.
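The track mapping can be kept as a simple, reviewable data structure. The sketch below is illustrative only: the system names and track assignments are hypothetical, and the Track 2 and 3 dates come from the proposed amendments, not enacted law.

```python
from datetime import date

# Deadlines from the three-track schedule. Track 1 is confirmed;
# Tracks 2 and 3 are proposed under the March 2026 political agreement.
TRACK_DEADLINES = {
    1: (date(2026, 8, 2), "Transparency rules for AI-generated content (confirmed)"),
    2: (date(2027, 12, 2), "Standalone high-risk systems, Annex III (proposed)"),
    3: (date(2028, 8, 2), "High-risk AI embedded in regulated products (proposed)"),
}

def days_remaining(track: int, today: date) -> int:
    """Days from `today` until the track's deadline (negative if passed)."""
    deadline, _label = TRACK_DEADLINES[track]
    return (deadline - today).days

# Hypothetical portfolio: system name -> assigned track.
portfolio = {
    "marketing-image-generator": 1,
    "recruitment-screening-model": 2,
    "mri-triage-module": 3,
}

today = date(2026, 3, 14)  # the article's "as of" date
for system, track in sorted(portfolio.items()):
    deadline, label = TRACK_DEADLINES[track]
    print(f"{system}: Track {track}, due {deadline.isoformat()} "
          f"({days_remaining(track, today)} days) - {label}")
```

Running this with the article's reference date shows roughly 141 days to the Track 1 deadline, consistent with "less than five months"; the point of the exercise is that each system carries its own countdown, not a portfolio-wide one.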
For organizations with systems capable of generating visual or written content: begin the content policy review for the deepfake prohibition now. The ban is agreed in principle. Its scope is clear enough to act on.
The EU AI Act’s compliance structure is now more complex than it was before March 11. The three-track timeline, the content ban, and the pending sector exemption mean organizations cannot apply a single compliance posture across their AI portfolio. Track-level assessment is the required approach.