The pipeline of AI copyright litigation has a pattern. A creator class (authors, journalists, visual artists, programmers) identifies that its work was ingested into a training dataset without consent. It files under federal copyright law. The central legal question is whether training on copyrighted material is excused as fair use, or whether the unauthorized ingestion of the work is itself the violation.
Voice actors have filed a different complaint.
According to MediaPost’s reporting, the proposed class action names Amazon, Apple, Google, Meta, Microsoft, and Nvidia as defendants. The plaintiffs allege those companies used their vocal performances to train AI voice models (voice assistants, audiobook narration tools, text-to-speech products) without authorization or compensation. The legal vehicles are the Illinois Right of Publicity Act and federal copyright law. The Right of Publicity claim is the one that hasn’t appeared in this form before, and it matters enormously for where AI voice development goes from here.
Two Theories, One Complaint
Federal copyright and Right of Publicity protect different things, through different mechanisms, with different standards.
Copyright protects original creative works, the expression fixed in a recording. If a voice actor records an audiobook and the recording is ingested into a training dataset, a copyright claim says the defendant violated the actor’s exclusive right to reproduce or create derivatives of that recording. The defendant can counter with fair use: transformation, commentary, a different purpose. Courts have been wrestling with this across a dozen major cases.
The Right of Publicity protects something else entirely: a person’s name, likeness, and voice as commercially valuable attributes. It isn’t a property right in a creative work. It’s a right to control how your identity, including how you sound, gets used commercially. Illinois’s statute is among the country’s strongest, covering both living and deceased individuals and extending to commercial uses without written consent.
The key difference: a copyright claim requires a protected creative work that was copied. A Right of Publicity claim requires a commercial use of a person’s voice likeness without consent. If the defendants trained models on voice recordings that weren’t individually copyrighted (public domain materials, user-generated recordings, licensed content where training wasn’t contemplated), copyright claims may not reach them. Right of Publicity claims might.
Why These Six Defendants
The defendant list isn’t arbitrary. Amazon, Apple, Google, Meta, Microsoft, and Nvidia each operate commercial AI voice products. Amazon’s Alexa and its audiobook ecosystem. Apple’s Siri and neural text-to-speech for accessibility and content creation. Google’s Assistant, WaveNet architecture, and Google Cloud Text-to-Speech. Meta’s voice AI components for its AR/VR platforms. Microsoft’s Azure Cognitive Services speech stack and Cortana. Nvidia’s Riva platform, a dedicated speech AI toolkit used in enterprise voice product development.
Plaintiffs are framing this as an industry-wide practice. The breadth of the defendant list suggests the theory is: any company that developed commercial AI voice products using large-scale voice recording datasets potentially exploited professional voice talent without compensation. That framing will be contested. Not every company on this list built their voice products from the same kind of dataset, and what “used their vocal performances” means in terms of training pipeline specifics is currently unknown from this reporting.
The Preemption Question
Federal copyright law contains a preemption provision, Section 301 of the Copyright Act, that bars state-law claims asserting rights “equivalent” to copyright’s exclusive rights in works within copyright’s subject matter. Defendants will argue that the Right of Publicity claims here are equivalent to copyright claims, and therefore preempted.
Courts have split on this. Some have found that Right of Publicity claims survive preemption because they protect a different interest (personal identity and autonomy) rather than a property interest in a creative work. Others have found preemption when the state claim is functionally indistinguishable from a copyright claim. The specific issue in AI voice training cases, where the claimed harm is commercial exploitation of voice likeness, not copying of a specific recording, sits in territory that courts haven’t resolved.
The threshold motion will likely be a motion to dismiss on preemption grounds. If the claims survive, plaintiffs get discovery into how the defendants’ voice AI training pipelines were actually constructed, which data sources they used, whether professional voice actors were identifiable in those datasets, and what consent or licensing framework applied. That discovery process itself changes the industry calculus, regardless of the eventual outcome.
This case enters a copyright landscape that federal courts have been reshaping for months. The input-copying theory tested in Nazemian v. NVIDIA established that unauthorized ingestion of training data can itself constitute infringement (the copy made during training, not just the output). Publishers’ suits against Meta have added executive liability theories. Each case that clears early motions expands the legal surface area.
The Right of Publicity theory adds a state-law dimension that the existing federal copyright cases don’t carry. If this claim survives preemption, AI voice developers don’t just face federal copyright exposure; they face potential liability under a patchwork of state Right of Publicity laws, statutory in some states and common law in others, each with different standards, different damages provisions, and different consent requirements.
Unanswered Questions
- Do existing voice AI training data licenses include Right of Publicity consent provisions, or only copyright licenses?
- Which state Right of Publicity statutes are most exposure-relevant for companies with voice AI products?
- Does the theory extend to synthetic voice products that weren't trained on any specific plaintiff's recordings, but on a broad voice corpus?
Warning
Companies with commercial voice AI products that haven't audited their training data against state Right of Publicity requirements should treat this filing as a trigger event, not a precedent to wait for.
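The audit that warning calls for can be sketched as a simple provenance check: even where a recording is copyright-licensed for training, the performer may not have given the separate written consent that some state publicity laws require. Everything below is hypothetical illustration (the `VoiceClip` schema, the state list, the `audit` helper), a minimal sketch of the mapping exercise, not legal guidance:

```python
from dataclasses import dataclass

# Hypothetical provenance record; real pipelines track this differently.
@dataclass
class VoiceClip:
    clip_id: str
    performer_state: str        # performer's state, if known
    copyright_licensed: bool    # recording licensed for training use
    publicity_consent: bool     # written consent to commercial voice use

# Illustrative subset of states with strong publicity statutes
# (not a legal reference; actual coverage varies by statute).
STRICT_STATES = {"IL", "CA", "NY", "TN"}

def audit(clips):
    """Flag clips with publicity-law exposure even when the
    underlying recording is copyright-licensed."""
    return [
        c.clip_id
        for c in clips
        if c.performer_state in STRICT_STATES and not c.publicity_consent
    ]

clips = [
    VoiceClip("a1", "IL", copyright_licensed=True, publicity_consent=False),
    VoiceClip("a2", "TX", copyright_licensed=True, publicity_consent=False),
    VoiceClip("a3", "CA", copyright_licensed=True, publicity_consent=True),
]
print(audit(clips))  # "a1" is flagged: a copyright license alone doesn't clear it
```

The point of the sketch is the two-column structure: copyright licensing and publicity consent are separate fields, and the second is the one the Illinois theory puts in play.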
What the Voice-Over Workforce Actually Faces
The economic stakes for the voice-over profession predate this lawsuit. AI voice cloning and text-to-speech have been compressing the commercial voice-over market for several years. The lawsuit frames this as a legal injury: unauthorized commercial use of professional performers’ voices. The broader market reality is that even if the legal theory fails, the economic displacement of the voice-over workforce from synthetic voice products continues.
For the voice-over industry, a successful outcome in this case wouldn’t reverse that displacement. It would, at most, establish that using professional voice recordings in AI training without consent and compensation is a compensable injury. That’s meaningful: it creates a pathway for licensing and compensation rather than unilateral extraction. But it doesn’t restore the market position the profession occupied before AI voice products existed.
What to Watch
Four signals matter in the coming months. First, the defendants’ initial responses: whether they move to dismiss on preemption grounds immediately, or whether any choose early settlement to limit discovery exposure. Second, any additional Right of Publicity suits filed in other states, which would indicate a coordinated litigation strategy across multiple jurisdictions. Third, whether the court finds the Illinois claim sufficiently distinct from copyright to survive preemption; that ruling will be heavily cited in every subsequent AI voice case. Fourth, whether any of the named defendants disclose more about their voice AI training data sourcing in response to the filing, either in legal proceedings or in public communications.
TJS Synthesis
Every motion-to-dismiss ruling in AI copyright litigation teaches the industry something it didn’t know before. If the preemption argument fails and Right of Publicity claims survive as a standalone theory, the compliance surface for voice AI developers expands dramatically, from one federal framework to a patchwork of state laws with varying consent, compensation, and enforcement standards. Companies with commercial voice AI products that haven’t mapped their training data sourcing against state Right of Publicity laws should treat this filing as the trigger to do so. The cases that have moved furthest in federal court have done so by advancing novel theories that companies hadn’t priced in. This one fits that pattern.