Regulation Deep Dive

EU AI Act Omnibus: Which Deadline Applies to Your System, and What the Nudifier Ban Means in Practice

4 min read · Source: European Parliament Press Room · Verification: partial
The EU AI Act now has three distinct compliance tracks with three different deadlines, and a new prohibition that arrived without a grace period. Compliance teams that have been planning for August 2026 need to know whether their systems are on a different clock now, and what the exception for "effective safety measures" in the nudifier ban actually requires.

The European Parliament’s Internal Market and Civil Liberties committees voted to adopt a joint position on an AI Act omnibus amendment. The vote passed with 101 in favor, 9 against, and 8 abstentions, according to the European Parliament’s announcement. That result matters because it aligns Parliament’s position with the Council’s, a necessary condition for a proposal to move toward final law.

This is a committee position, not enacted law. The full Parliament must still vote. The proposal is not final. Use it to plan, not to publish compliance confirmations.

What changed: The three deadline tracks

The omnibus proposes three separate timelines. They apply to different types of systems and different types of providers.

Track 1, Listed high-risk AI systems: Compliance deadline moves to December 2, 2027. This track covers AI systems specifically listed in Annex III of the Act, the systems Parliament designated as high-risk based on their application area (education, employment, credit, law enforcement, etc.). Providers of these systems would gain 16 months of runway under the proposal. This is the most consequential extension for enterprise AI developers and deployers, as Annex III is where most commercial AI applications sit.

Track 2, Sectoral safety systems: Compliance deadline moves to August 2, 2028. This covers AI systems embedded in products already regulated by EU sectoral safety legislation: medical devices under the MDR, machinery under the Machinery Regulation, or motor vehicles under type-approval frameworks. These systems face a later deadline because they’re already subject to existing conformity assessment processes that must be harmonized with the AI Act requirements.

Track 3, AI-generated content watermarking: According to the committee proposal, providers must comply with watermarking rules for AI-generated audio, image, video, and text content by November 2, 2026. The Filter’s verification flagged this deadline as partially verified: the November 2026 date comes from the EP press release, but the prior baseline date could not be independently confirmed from cross-reference sources. Use this date for planning; verify it against the official text once available. This deadline is notably different in character from the others: it’s a producer-side transparency obligation, not a risk classification compliance requirement.

What isn’t delayed

The omnibus extensions don’t touch everything. General-purpose AI (GPAI) model obligations, the prohibited practices already in effect, and transparency requirements for certain AI interactions have their own timelines and are unaffected by this proposal. Before assuming a deadline has shifted, confirm your system’s specific classification and whether the omnibus proposal covers that category.

What’s new: The nonconsensual image generation ban

The proposal introduces a prohibition that wasn’t in the original AI Act text. Insurance Journal’s coverage quotes the draft language directly: AI systems that generate realistic images depicting “sexually explicit activities or the intimate parts of an identifiable natural person” without their consent would be banned. A T2 Reuters report independently corroborates the provision.

Two things matter practically.

First, the scope. “Identifiable natural person” is the operative phrase. The ban targets outputs that could be matched to a real individual, not fictional characters or generalized imagery. Systems capable of producing photorealistic deepfakes of real people are the clear target.

Second, the exception. Systems incorporating “effective safety measures” are exempted. The draft does not specify what “effective safety measures” means in operational terms; that definition will likely require implementing guidance or regulatory technical standards. For developers of image generation models, this is the gap that matters. A system that implements content filtering, identity detection, or generation refusal mechanisms may satisfy the exception, but without defined criteria, compliance planning is judgment-based until the standards are set.
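
As a purely hypothetical illustration of the kind of safeguard layering named above (content filtering, identity detection, generation refusal), a refusal gate might combine those signals before any image is produced. Nothing here comes from the draft text: the function, the keyword list, and the `depicts_real_person` flag are all stand-ins, and a real system would rely on trained classifiers and identity-matching, not keywords.

```python
def should_refuse(prompt: str, depicts_real_person: bool) -> bool:
    """Hypothetical generation-refusal check: block requests that combine
    sexual content with an identifiable real person.

    `depicts_real_person` stands in for an identity-detection signal
    (e.g., a face-match against the prompt's reference images); the
    keyword set stands in for a real content classifier.
    """
    sexual_terms = {"nude", "explicit", "undress"}  # placeholder terms only
    sexual_intent = any(term in prompt.lower() for term in sexual_terms)
    # Refuse only when both signals fire: sexual content AND a real person.
    return sexual_intent and depicts_real_person
```

Whether gates like this clear the “effective safety measures” bar is exactly the open question the draft leaves to implementing guidance.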

There’s no grace period framing in the committee text for this prohibition. Unlike the extended deadlines above, the nudifier ban appears as an amendment to the prohibited practices framework, which suggests it would become effective when the omnibus is enacted, not on a delayed compliance track.

The compliance decision tree

Ask three questions about each AI system in scope:

1. Is it listed in Annex III? If yes: December 2, 2027 deadline. If no, go to question 2.

2. Is it embedded in a product subject to EU sectoral safety legislation? If yes: August 2, 2028 deadline. If no, go to question 3.

3. Does it generate audio, image, video, or text content? If yes: November 2, 2026 watermarking obligation (per committee proposal, verification pending).

Separately: Does it generate realistic images of identifiable natural persons in sexual contexts? If yes: prohibited under the ban as proposed, with the effective safety measures exception available.
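
The triage above is mechanical enough to sketch in code. The class and function names below are illustrative, not from any official text; the dates reflect the committee proposal as reported, and a system matching none of the three questions still needs to be checked against the unaffected GPAI and transparency timelines.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Answers to the three triage questions for one AI system."""
    annex_iii_listed: bool          # Q1: listed in Annex III?
    sectoral_safety_product: bool   # Q2: embedded in an EU sectoral-safety product?
    generates_content: bool         # Q3: generates audio/image/video/text?

def compliance_deadline(system: AISystem) -> str:
    """Return the applicable deadline track under the proposed omnibus.
    Questions are asked in order; the first 'yes' determines the track."""
    if system.annex_iii_listed:
        return "2027-12-02 (listed high-risk, Annex III)"
    if system.sectoral_safety_product:
        return "2028-08-02 (sectoral safety legislation)"
    if system.generates_content:
        return "2026-11-02 (watermarking, per committee proposal, verification pending)"
    return "no omnibus track; check GPAI and transparency timelines separately"
```

Note the ordering matters: an Annex III system that also generates content falls on the December 2027 track first, which is why the article frames this as a sequential decision tree rather than three independent tests.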

What comes next

The full Parliament vote is the next procedural step. The committee alignment with member state positions is meaningful: it indicates the kind of cross-institutional consensus that typically advances proposals, but it doesn’t guarantee the timeline. “Likely to become law later this year” is how the reporting from Insurance Journal and Reuters frames the trajectory. That framing reflects the political alignment, not a confirmed legislative calendar.

For compliance planning: treat the December 2027 and August 2028 deadlines as the working targets for systems on those tracks. For watermarking, verify the November 2026 date against the official omnibus text when available. For the nudifier ban, begin assessing whether existing safety measures satisfy the “effective safety measures” threshold, and monitor implementing guidance.

The omnibus shifts the EU AI Act’s compliance timeline significantly. Sixteen additional months for high-risk systems is real runway. But the nudifier ban adds a new obligation on a faster track than most providers are managing. Both changes require deliberate action.
