Meta is changing what it does when it finds a deepfake.
Starting in May 2026, AI-generated content on Meta's platforms (Facebook and Instagram) will carry mandatory labels. Starting in July 2026, Meta will no longer remove deepfake videos simply because they're manipulated. Removal will continue only when a deepfake violates another Community Standard, such as voter interference provisions.
The policy was announced by Monika Bickert, Meta’s Vice President of Content Policy, and reported by BankInfoSecurity. Meta characterized the shift as aligning with approaches that favor transparency over removal as a means of preserving freedom of expression.
Two dates, two different obligations
The two-phase structure matters for platform operators and content teams. May and July carry different requirements:
| What Changes | When | What It Means |
|---|---|---|
| Mandatory AI content labels go live | May 2026 | AI-generated content must be labeled across Meta platforms |
| Deepfake removal as standalone policy ends | July 2026 | Deepfakes remain up unless they violate another Community Standard |
These aren’t abstract policy preferences. They represent the floor for what Meta will enforce on its own platforms, and that floor is now lower than what at least one federal law requires.
The federal gap
The Take It Down Act, which mandates 48-hour removal of non-consensual intimate imagery, including AI-generated deepfakes, is already in enforcement. The hub covered the first federal conviction under the Act in an April analysis connecting US and EU enforcement approaches. Meta's July policy change doesn't exempt the platform from that federal mandate: deepfakes that fall under the Take It Down Act still require removal within 48 hours. Meta's policy change applies only to deepfakes that don't trigger a federal obligation or a separate Community Standards violation.
That distinction is easy to miss. A content moderation team reading Meta’s announcement might conclude that deepfake removal is over. It isn’t. It’s narrowed.
What to watch
The May labeling rollout is the near-term operational item. Watch for Meta's technical implementation: how labels are applied, whether they're user-visible on shared content, and whether the labeling standard aligns with what the EU DSA requires of designated very large online platforms (VLOPs). The July removal policy change is the higher-stakes shift, and advocacy group responses to it in the April 28 to May window are likely to generate follow-up regulatory attention.
TJS synthesis
Meta’s shift from removal to labeling is a governance choice with a specific theory behind it: that transparency is a more proportionate response to manipulated media than removal, particularly where free expression values are at stake. Whether regulators agree is a separate question, and the answer may differ in Brussels versus Washington. The question worth sitting with is whether your organization’s content moderation policy still references Meta removal as a backstop for deepfake risk. After July, that backstop narrows considerably.