The omnibus deal didn’t close the EU AI Act compliance story. It added a chapter that most compliance programs haven’t started reading.
Since the political agreement was announced on May 7 and its details were published in provisional form on May 14, coverage has concentrated on the three-deadline structure: December 2026, December 2027, and August 2028. That framing is accurate. It’s also incomplete, because it treats the December 2026 deadline as a single item when it actually contains two legally distinct obligations with different scope footprints: the GPAI watermarking and transparency requirement for models already on the market, and the newly added named prohibition on nudifier applications. Both share the date. Neither shares the audience. Compliance teams working from deadline lists alone will miss that distinction until enforcement arrives.
What the nudifier prohibition actually says
The provisional agreement introduces what legal analysts are describing as a named prohibition on AI systems that generate non-consensual intimate imagery, commonly called nudifier apps, and AI-generated child sexual abuse material. Per multiple reports citing the European Parliament’s announcement, compliance with this prohibition is required by December 2, 2026.
The precision of that framing matters. This is a prohibition added to the EU AI Act through the omnibus amendment process; it augments the existing Act rather than creating a standalone regulation. It sits alongside the Act’s other prohibited practice provisions, which include social scoring, real-time remote biometric identification, and certain subliminal manipulation techniques. The nudifier prohibition’s arrival in the same statutory framework means it carries the same enforcement architecture: the AI Office for GPAI-adjacent enforcement, national market surveillance authorities for everything else.
What the provisional text does not resolve, and what legal analysts at White & Case and Taylor Wessing have flagged in their preliminary analysis, is the question of scope beyond the obvious case. The “nudifier app” label describes the clearest instance: a product whose primary purpose is generating non-consensual intimate imagery from clothed photographs. But the prohibition’s language, as reported, covers AI systems *capable* of generating such content. That raises a harder question for platforms that aren’t nudifier apps but deploy image generation models with general-purpose capabilities.
The scope problem for non-obvious deployers
Consider what the prohibition could capture beyond dedicated services:
A general-purpose image generation API deployed by a third-party developer who adds a “clothing removal” feature. The API provider isn’t a nudifier app. The downstream product is. Under the AI Act’s provider/deployer liability architecture, both entities may carry obligations, but the split between value-chain responsibilities under Article 25 and provider obligations under Article 16 is exactly the kind of ambiguity that the AI Office’s implementation guidance would normally resolve. That guidance doesn’t exist yet for this prohibition.
A social platform’s user-generated content pipeline that accepts image generation prompts. The platform doesn’t intend to enable non-consensual intimate imagery. Its content moderation systems catch most attempts. But the system is technically capable of generating the prohibited content before moderation filters it. Does capability plus an inadequate moderation layer equal non-compliance? The provisional text, as reported, doesn’t answer that question definitively.
Unanswered Questions
- Does the nudifier prohibition apply to a GPAI model provider whose downstream deployer adds non-consensual intimate image generation capability, and which entity bears primary liability?
- What constitutes adequate technical controls for a general-purpose platform to demonstrate compliance with the prohibition before the December 2, 2026 date?
- Will the AI Office publish enforcement priorities before the December 2, 2026 deadline, and if so, will those priorities clarify scope for non-dedicated platforms?
- What conformity assessment pathway satisfies both the EU AI Act Annex I requirements and existing Machinery Regulation obligations under the omnibus clarification?
These aren’t edge cases invented for this analysis. They’re the questions that companies with image generation capabilities anywhere in their stack should be documenting right now, because December 2, 2026 is roughly six months from the agreement’s announcement and the Official Journal publication timeline is unknown.
The three-deadline map by deployer type
The omnibus structures the broader compliance calendar around system type, not company size or sector. Understanding which deadline applies requires classifying the system first.
*GPAI model providers (December 2, 2026).* General-purpose AI models already on the market before August 2, 2026 have until December 2, 2026 to comply with Article 50 watermarking and transparency requirements, per Latham & Watkins’ analysis of the provisional agreement and consistent with prior TechJacks coverage of the transparency deadline. This is a grace period, not a new deadline – the underlying obligation exists; the omnibus extends the compliance window for legacy deployments. Providers releasing new GPAI models after August 2, 2026 don’t get the grace period.
*Annex III high-risk system deployers (December 2, 2027).* High-risk AI systems used in employment screening, access to education, credit decisions, critical infrastructure management, law enforcement, migration processing, and administration of justice face a December 2, 2027 deadline under the provisional agreement. This is the window that most enterprise AI deployers in regulated industries have been planning around. The omnibus extension gives compliance programs roughly 18 additional months relative to the pre-omnibus baseline.
*Annex I regulated product manufacturers (August 2, 2028).* Companies embedding AI in products already regulated under EU product safety law (medical devices, machinery, toys, elevators, radio equipment) face the longest window: August 2, 2028. This date resolves the conformity assessment overlap dispute that had blocked earlier omnibus negotiations. The provisional agreement clarifies that AI Act obligations and existing Machinery Regulation or Medical Device Regulation conformity assessments can be fulfilled through a coordinated process, rather than running duplicative assessments under separate frameworks.
That resolution matters practically. Before the omnibus, manufacturers faced a genuine legal ambiguity about whether satisfying the Machinery Regulation’s safety requirements would satisfy parallel EU AI Act high-risk classification requirements, and if not, what additional conformity assessment steps were required. The provisional agreement’s clarity on that overlap, per the IAPP’s reporting, is what unlocked the August 2028 date.
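Under the provisional agreement as described above, the deadline map reduces to a lookup keyed on system classification. A minimal sketch of that logic follows; the enum names and the four-bucket taxonomy are illustrative conveniences, not language from the Act, and the dates reflect the provisional agreement rather than any final published text:

```python
from datetime import date
from enum import Enum, auto

class SystemType(Enum):
    """Classification buckets implied by the omnibus deadline structure (illustrative)."""
    GPAI_LEGACY = auto()          # GPAI model already on the market before 2026-08-02
    GPAI_NEW = auto()             # GPAI model released on or after 2026-08-02
    ANNEX_III_HIGH_RISK = auto()  # e.g. employment screening, credit decisions
    ANNEX_I_PRODUCT = auto()      # AI embedded in regulated products (machinery, medical devices)

# Provisional deadlines per the agreement; subject to change at Official Journal publication.
DEADLINES = {
    SystemType.GPAI_LEGACY: date(2026, 12, 2),          # Article 50 grace period for legacy models
    SystemType.GPAI_NEW: date(2026, 8, 2),              # no grace period for new releases
    SystemType.ANNEX_III_HIGH_RISK: date(2027, 12, 2),  # high-risk deployer window
    SystemType.ANNEX_I_PRODUCT: date(2028, 8, 2),       # coordinated conformity assessment window
}

def compliance_deadline(system: SystemType) -> date:
    """Look up the provisional compliance deadline for a classified system."""
    return DEADLINES[system]
```

The point of the sketch is the ordering of the work: the lookup is trivial, but it only runs after the classification step, which is where the actual legal analysis lives.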
What the enforcement architecture does and doesn’t clarify
Prior coverage in May raised a specific question: can the AI Office actually enforce the nudifier ban by the deadline, given the office’s nascent operational state? The provisional agreement’s formalization doesn’t resolve that question. It confirms the prohibition exists and establishes the December 2026 effective date. It does not provide a detailed enforcement roadmap, specify the resource level the AI Office will deploy, or indicate whether initial enforcement will target obvious violators (dedicated nudifier services) before expanding to ambiguous cases (general-purpose platforms with inadequate content controls).
Verification
Partial. Multiple T3 secondary sources (JD Supra, ComplexDiscovery, Latham & Watkins) citing the European Parliament announcement; the IAPP URL is confirmed live, though the article body was not fully accessible via scraper. All dates and prohibition scope language are from the provisional agreement, which has not yet been published in the Official Journal; specific provision text may change. Human validation of all source material is recommended before use in compliance planning documents.
Warning
The nudifier prohibition's scope for general-purpose platforms remains unresolved in the provisional text. Organizations with image generation capabilities anywhere in their stack should begin scope determination now; the December 2, 2026 deadline won't pause for AI Office guidance that hasn't been published yet.
The AI Act’s enforcement structure for prohibited practices relies on national competent authorities for most deployers, with the AI Office taking direct jurisdiction over GPAI model providers. A nudifier prohibition enforcement action against a dedicated service would likely fall to the national authority in the member state where the service is established. An enforcement action involving a GPAI model provider whose model was used to build a nudifier application involves a more complex jurisdictional question.
What compliance teams can act on now, without waiting for enforcement guidance:
- Scope determination: does any system we deploy, operate, or distribute fall within the prohibition’s scope?
- Technical controls assessment: what prevents generation of prohibited content, and how robust are those controls?
- Provider/deployer obligation mapping: if we’re a downstream deployer of a third-party image generation model, what obligations does our contract with the provider need to address?
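Those three workstreams can be tracked as a per-system record. The sketch below is hypothetical; the field and method names are illustrative and not drawn from the Act or any published guidance:

```python
from dataclasses import dataclass, field

@dataclass
class ScopeAssessment:
    """Hypothetical per-system record for the three pre-guidance workstreams."""
    system_name: str
    has_image_generation: bool              # anywhere in the stack, including third-party APIs
    role: str                               # "provider", "deployer", or "both"
    technical_controls: list[str] = field(default_factory=list)
    contract_covers_prohibition: bool = False

    def open_items(self) -> list[str]:
        """Return the workstreams still undocumented for this system."""
        items = []
        if self.has_image_generation and not self.technical_controls:
            items.append("document controls preventing prohibited content")
        if self.role in ("deployer", "both") and not self.contract_covers_prohibition:
            items.append("map provider/deployer obligations in the provider contract")
        return items
```

A record like this does not answer the open scope questions; it documents that the organization asked them before December 2, 2026, which is the defensible position available today.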
What requires the final published text
Certain compliance decisions genuinely can’t be finalized against the provisional agreement. These include: the exact scope definition of “non-consensual intimate imagery” in the statutory text; the specific conformity assessment pathway for Annex I manufacturers; and the precise documentation requirements that will flow from the AI Office’s implementation guidance. Organizations that design detailed compliance programs around provisional text language run the risk of rework when the final text publishes.
The appropriate approach is two-speed: move now on classification and scoping work that would be required regardless of final text language, and hold detailed technical compliance design until the Official Journal publication.
TJS synthesis
The nudifier ban is the EU AI Act’s first specifically named prohibition to carry a near-term enforcement date, and it arrives before the AI Office’s enforcement machinery is fully operational. That combination creates an asymmetry compliance teams should track: the legal obligation is clear in principle, the enforcement mechanism is underspecified in practice, and the scope ambiguity for non-dedicated platforms remains open. The real question isn’t whether nudifier apps face obligations by December 2026; they obviously do. The question is whether the AI Office will use its first enforcement actions under this prohibition to draw scope lines that platform operators with general-purpose image generation can use for their own compliance analysis. Watch for AI Office guidance in Q3 2026: if enforcement priorities are published before the December deadline, they’ll do more to clarify the prohibition’s practical reach than the provisional text itself.