On March 11, 2026, EU member states and European Parliament lawmakers reached a political agreement on a package of amendments to the EU AI Act. The package pulls in two directions at once: tighter rules for AI-generated harmful content, and more time for organizations deploying high-risk systems.
The amendments prohibit AI systems that generate non-consensual sexual or intimate imagery and child sexual abuse material. That ban applies across the EU and covers both dedicated deepfake tools and general-purpose systems capable of producing prohibited content.
On deadlines, the political agreement extends the timeline for high-risk AI rules. Under the proposed amendments, standalone high-risk systems listed in Annex III face an application date of December 2, 2027. High-risk AI embedded in regulated products, such as medical devices and industrial machinery, faces a later date of August 2, 2028. The amendments also include provisions intended to ease compliance requirements for AI in sector-regulated products, though the specific scope of those provisions awaits the final legislative text.
One deadline did not move. Transparency rules for AI-generated content remain on track for August 2, 2026, a date the European Commission's official timeline confirms.
The political agreement is a necessary but not sufficient step. EU institutions must formally adopt the amended text before these changes carry legal force. Compliance teams should treat these dates as the probable final schedule while monitoring for official publication. Organizations in sectors covered by the embedded-systems provisions should not assume relief is confirmed until the text is published.