A federal lawsuit filed May 5 in Manhattan names Mark Zuckerberg alongside Meta Platforms as a defendant. The plaintiffs, Hachette, Macmillan, McGraw Hill, Elsevier, Cengage, and author Scott Turow, allege in their complaint that Zuckerberg “personally authorized and actively encouraged” the alleged infringement used to train Llama 2, Llama 3, and Llama 4. That framing matters. AI copyright litigation has been common for two years. CEO defendants have not.
According to multiple news reports citing the complaint, plaintiffs allege the company pursued large volumes of copyrighted content because direct licensing was deemed “impractical.” The complaint reportedly describes an internal effort to acquire bulk content (some accounts refer to an internal project name, though that attribution has not been independently confirmed), framing the acquisition strategy as deliberate rather than incidental.
The Copyright Management Information claim is the legal mechanism that sharpens the personal liability theory. The Wall Street Journal reported that publishers allege Meta stripped CMI from works before training, a step the plaintiffs frame as evidence of intent under 17 U.S.C. § 1202, on the theory that CMI removal doesn’t happen by accident. A federal judge has already found that Meta must answer the CMI claim, which means this theory has cleared an early procedural hurdle.
The personal-defendant structure is the element that distinguishes this filing from most prior AI copyright cases. Anthropic reportedly settled a comparable author-led lawsuit in 2025 for $1.5 billion without admitting wrongdoing, but that settlement named the company, not individual executives. If the Zuckerberg personal liability theory survives a motion to dismiss, it would mark the first time a court found reason to let an AI copyright case proceed against a named CEO.
For compliance teams, the question worth sitting with is this: in your organization, who makes the decision to include a particular dataset in training, and is that decision documented in a way that makes executive authorization visible? The CMI removal allegation suggests intentionality wasn’t just inferred after the fact; it was reportedly built into the acquisition process. That’s a different governance problem than inadvertent infringement.
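One way to make that executive authorization visible is to treat each dataset-inclusion decision as a structured record rather than an email thread. The sketch below is purely illustrative: the schema, field names, and roles are hypothetical, not any company’s actual process, and the audit check simply flags the two gaps the paragraph above highlights, an unnamed approver and undocumented CMI removal.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record of a training-dataset inclusion decision.
# All field names and roles are illustrative, not any real schema.
@dataclass
class DatasetDecision:
    dataset: str
    source: str            # where the data came from
    license_basis: str     # e.g. "licensed", "public domain", "fair-use claim"
    approved_by: str       # a named individual, not a team alias
    approver_role: str     # makes the level of authorization explicit
    approved_on: date
    cmi_preserved: bool    # was copyright management information retained?
    notes: str = ""

def audit_gaps(decisions):
    """Return decisions that would be hard to defend on paper:
    no named approver, or CMI stripped with no documented rationale."""
    gaps = []
    for d in decisions:
        if not d.approved_by:
            gaps.append((d.dataset, "no named approver"))
        if not d.cmi_preserved and not d.notes:
            gaps.append((d.dataset, "CMI removed with no documented rationale"))
    return gaps

decisions = [
    DatasetDecision("licensed-news-2024", "publisher agreement", "licensed",
                    "J. Doe", "VP, Data Governance", date(2024, 3, 1), True),
    DatasetDecision("web-scrape-books", "bulk crawl", "fair-use claim",
                    "", "", date(2024, 4, 15), False),
]

for dataset, issue in audit_gaps(decisions):
    print(f"{dataset}: {issue}")
```

The point of the sketch isn’t the code; it’s that a record like this forces the organization to answer, in advance, exactly the questions a complaint would later ask.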
Anthropic’s settlement established a dollar scale for comparable exposure. A personal defendant theory, if it advances, establishes something harder to price: individual executive risk. D&O insurance structures at AI companies weren’t underwritten with that scenario in mind.
The case is in its earliest stages. No verdict, no admission, no finding of liability; these are allegations in a complaint. But the compliance architecture question doesn’t wait for verdicts. It follows the theory.