Two regulatory systems arrived at the same problem from opposite directions this week.
In an Ohio federal court, James Strahler II, identified in reporting as the first person in the nation convicted under the Take It Down Act, entered a guilty plea on April 7. The prosecution targeted the non-consensual distribution of AI-generated intimate imagery. The statute reached through the act of distribution to the person who created and shared the imagery.
In Brussels, trilogue negotiators on the EU Digital Omnibus were reportedly converging on proposed amendments to the EU AI Act that would add a prohibition to Article 5, the Act’s banned practices list, targeting AI systems designed to generate non-consensual intimate imagery in the first place. If confirmed and adopted, that provision wouldn’t wait for distribution. It would restrict the model.
Criminal enforcement at the output. Model prohibition at the source. Different instruments. Same harm.
The US Approach: Criminal Statute, Platform Deadline
The Take It Down Act, as confirmed by the FTC, operates on two tracks. Track one is criminal: it criminalizes the non-consensual publication of intimate visual depictions, including AI-generated deepfakes. The Strahler conviction confirms that track is operational. Track two is regulatory: it requires covered platforms to establish notice-and-removal procedures for reported NCII content within 48 hours of a valid report. The deadline for platform compliance is May 19, 2026.
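The 48-hour removal window can be made concrete with a short sketch. This is a minimal illustration, not a statutory or FTC-specified workflow: the function names, fields, and the assumption that the clock runs from receipt of a valid report in UTC are all hypothetical choices for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the platform track's 48-hour removal requirement.
# All names below are illustrative assumptions, not terms from the statute.
REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(report_received_utc: datetime) -> datetime:
    """Latest compliant removal time, measured from receipt of a valid report."""
    return report_received_utc + REMOVAL_WINDOW

def is_overdue(report_received_utc: datetime, now_utc: datetime) -> bool:
    """True once the 48-hour window has elapsed without removal."""
    return now_utc > removal_deadline(report_received_utc)

report = datetime(2026, 5, 20, 9, 0, tzinfo=timezone.utc)
print(removal_deadline(report).isoformat())  # 2026-05-22T09:00:00+00:00
print(is_overdue(report, datetime(2026, 5, 22, 10, 0, tzinfo=timezone.utc)))  # True
```

In practice a platform's tracking system would also need to define what counts as a "valid report" and log the removal timestamp, since those facts would be the evidence in any FTC compliance review.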
What the US approach does not do: restrict AI model development. A developer who builds an image generation model capable of producing NCII content faces no specific federal liability under the Take It Down Act unless they publish that content non-consensually or operate a platform that fails to remove it. The statute is targeted at distribution and platform behavior, not at what gets trained or what capabilities exist.
The specific sentencing tiers cited in legal coverage of the Act should be confirmed against the statutory text or DOJ documentation before being relied on in compliance planning. What is confirmed at T1: FTC enforcement authority, the May 19 platform deadline, and the 48-hour removal requirement.
The EU Approach: Model-Level Prohibition
The proposed EU approach, according to legal analysts tracking the trilogue negotiations (including coverage attributed to A&O Shearman and OneTrust, though primary-source URLs are currently unavailable), would work at the model level rather than the distribution level. A new prohibition reportedly proposed for Article 5 would target AI systems designed to generate non-consensual intimate imagery, placing them alongside the Act’s existing banned practices: social scoring, certain real-time biometric surveillance, and manipulation of vulnerable groups.
This framing matters for developers. Article 5 prohibitions in the EU AI Act aren’t compliance obligations with documentation requirements and risk management procedures. They’re bans. A system that falls under Article 5 cannot be placed on the EU market. Full stop.
The qualifier that governs everything in this section: these proposed amendments have not been confirmed against Commission or trilogue primary documents. The reporting is sourced to law firm and compliance platform analysis of the negotiations, both of which rely on trilogue readouts and position papers that are not publicly archived in accessible form. Legal analysis firms publishing on active negotiations are generally reliable for directional framing, but specific prohibitions in specific articles require primary text confirmation before compliance teams act on them. The proposed April 28 political agreement date, reportedly the trilogue’s near-term target, is the confirmation trigger to watch.
The Compliance Gap: What Differs Across Jurisdictions
A company developing a general-purpose image generation model that could be used to create NCII content faces different obligations in each jurisdiction.
| Dimension | United States | European Union (Proposed) |
|---|---|---|
| Legal instrument | Take It Down Act (criminal statute) | EU AI Act Article 5 (model prohibition, proposed) |
| Target of obligation | Distributor / platform | AI system provider |
| Enforcement mechanism | Criminal prosecution + FTC platform compliance | Market exclusion (no compliance path for banned systems) |
| Compliance action | Platform: notice-and-removal procedures by May 19 | Developer: don’t build or deploy the system in EU |
| Confirmed status | Confirmed, T1 (FTC.gov) | Proposed, unconfirmed at primary source level |
| Timeline | May 19, 2026 (platform deadline) | Reportedly: political agreement April 28, 2026 (adoption timeline follows) |
The gap this table reveals is structural, not just jurisdictional. In the US, a developer who builds an NCII-capable model and licenses it to a platform has one degree of separation from the compliance obligation: the platform bears the May 19 deadline. In the EU, if the proposed Article 5 prohibition holds, there is no degree of separation. The developer is the compliance party.
That difference has product implications. A fine-tuned image model with commercial applications in the EU would need to assess whether its capabilities bring it within the scope of the proposed prohibition. That assessment can’t happen until the prohibition’s exact language is adopted. But the planning can start now.
What the Convergence Means
Regulators on both sides of the Atlantic are working from the same premise: AI-generated NCII is a distinct harm category that existing legal frameworks (harassment laws, revenge porn statutes, general civil liability) address inadequately. The US responded with a dedicated federal criminal statute. The EU is responding, reportedly, with a model-level prohibition that sits alongside its most serious AI Act banned practices.
The convergence is notable because it’s arriving through different legal traditions. The US approach is enforcement-first, building criminal and regulatory precedent from individual cases and platform deadlines. The EU approach is prohibition-first, attempting to prevent the harm at the system design stage.
AP News confirmed the interim measure direction in the EU’s WhatsApp antitrust case, a separate proceeding, but one that reflects the same regulatory posture: the EU is willing to move against AI-adjacent conduct before the full legal process concludes when it believes the harm is serious and ongoing. That same posture, applied to NCII model development, would mean enforcement before final adoption if the Commission determines harm is occurring now.
What to Watch
Two dates carry the near-term tracking weight.
May 19, 2026: The US platform compliance deadline. This is confirmed and operational. Platforms that haven’t finalized notice-and-removal procedures are running four weeks behind. The FTC’s first enforcement action under the platform-compliance track, distinct from criminal prosecution, will set the operational standard for what “adequate procedures” means in practice.
April 28, 2026 (reported): The reportedly targeted EU trilogue political agreement date. A political agreement at this stage would signal that the proposed Article 5 NCII prohibition and the fixed application dates (reportedly December 2, 2027 for Annex III systems; August 2, 2028 for Annex I) are entering the formal adoption track. Confirmation from Commission sources following that date would move the EU side of this analysis from “reportedly proposed” to actionable compliance planning inputs.
The two enforcement frameworks won’t fully converge; different legal systems don’t produce identical obligations. But AI companies operating in both markets are now looking at a compliance environment where NCII capability is a specific regulatory risk category, not an incidental feature of general-purpose generation. That framing change is already underway, regardless of which specific provisions are ultimately confirmed.