Regulation Deep Dive

Three AI Copyright Regimes Are Diverging at Once: What the U.S., EU, and Japan Each Require of AI Companies

5 min read · Sources: multiple (Dykema / White House / Reuters / law.asia) · Verification: partial
The U.S. is deferring AI copyright questions to the courts. Japan's permissive training data exception is attracting its first litigation signals. The EU is preparing to apply platform-scale audit requirements to an AI-native chatbot. These aren't isolated regulatory events. They're three major markets moving simultaneously in different directions, and companies operating across all three now face a copyright compliance map with no unified logic.

The Moment of Divergence

Three significant AI copyright developments landed in the same reporting window. The White House issued a legislative framework signaling that Congress should stay out of AI copyright disputes while courts decide them. In Japan, legal analysts flagged the “unreasonably prejudice” proviso in the country’s AI training data exception as the next litigation target. And in Brussels, the European Commission is assessing whether to apply its toughest digital compliance tier to the world’s most widely used AI chatbot.

None of these are coordinated. That’s precisely the point.

For AI companies that develop and deploy models globally, this isn’t a regulatory story about any single jurisdiction. It’s a story about divergence: three major markets moving simultaneously, on the same underlying question of who controls the data that trains AI systems, toward answers that are structurally incompatible.

The U.S. Approach: Courts Over Congress

The White House released its National AI Legislative Framework on March 20, 2026. Its copyright chapter, Section III, is notable for what it asks Congress not to do. According to Dykema’s legal analysis of the framework, Section III directs Congress to avoid legislation that could interfere with current copyright litigation, leaving active cases like NYT v. OpenAI to establish the doctrine. This is Dykema’s reading of the framework, not a directive confirmed in the document’s text, and should be treated as informed legal interpretation.

The framework’s stated philosophy is “permissionless innovation”, a phrase confirmed in the document, paired with “minimally burdensome” regulation; Holland & Knight’s review confirms this framing. The operative consequence for AI companies: the U.S. government has signaled, through the executive branch, that it prefers courts to set AI copyright doctrine. Congress is being asked to wait.

Section III also contains an affirmative proposal: a voluntary collective licensing system for AI training data, modeled after ASCAP and BMI, with proposed antitrust immunity for participants, per Dykema’s analysis. Neither provision is law. But the proposal tells companies where the policy appetite is pointing: toward licensed training data ecosystems rather than litigation-by-litigation outcomes.

The U.S. copyright position today, in practice: no clear safe harbor, no statutory licensing framework, judicial doctrine under construction in real time. Companies that trained on data before the litigation wave hit are navigating retroactive risk. Those making training data decisions now are operating without statutory clarity, by design.

Japan’s Approach: The Permissive Exception Under Pressure

Japan took a different path. Its Copyright Act Article 30-4 permits AI training on copyrighted material, a broad permission that made Japan a favored jurisdiction for data-intensive model development. The AI Promotion Act, enacted May 28, 2025, confirmed Japan’s soft-law governance posture: guidance-based, no financial penalties.

The exception isn’t unlimited. It’s subject to the “unreasonably prejudice” proviso, a limitation that applies when AI training use unreasonably harms the copyright holder’s interests. As analysis from law.asia identifies, legal experts now flag this proviso as the provision most likely to face litigation testing as commercial-scale AI training use cases expand.

What does “unreasonably prejudice” mean in practice? Japan’s courts haven’t said yet. The proviso has no settled interpretation at foundation model scale. The litigation that eventually tests it will set that boundary, and the outcome will determine whether Article 30-4 remains the broad permission it appears to be or narrows substantially in application.

According to industry reporting, METI is also preparing a version 1.2 update to its AI Guidelines for Business, reportedly addressing cross-border data flows and EU AI Act interoperability. METI has not confirmed this officially. If the update materializes, it will signal whether Japan intends to maintain its permissive posture as EU-facing compliance pressure increases on Japanese operations.

Japan’s position, today: the broadest statutory training data permission of the three jurisdictions, under increasing litigation pressure, enforced through soft law that doesn’t protect against civil copyright claims by rights holders.

The EU Approach: Platform Scale Meets AI Transparency

The EU’s approach to AI copyright is layered across two regulatory instruments that operate independently but can apply to the same product simultaneously.

The EU AI Act addresses AI systems by use-case risk classification. The Digital Services Act addresses platforms by scale. ChatGPT, according to Reuters reporting via Tech in Asia, disclosed 120.4 million average monthly EU users for the six-month period ending September 2025, nearly triple the 45 million user threshold in DSA Article 33 that triggers VLOSE designation. The European Commission is assessing formal designation. No decision has been announced as of publication.
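The threshold arithmetic can be sketched in a few lines. The figures below are the ones reported above: 45 million average monthly active recipients is the DSA Article 33 designation threshold, and 120.4 million is ChatGPT's disclosed EU figure per the Reuters reporting.

```python
# DSA Article 33 sets the designation threshold at 45 million average
# monthly active recipients in the EU (roughly 10% of the EU population).
DSA_ARTICLE_33_THRESHOLD = 45_000_000

# ChatGPT's disclosed average monthly EU users for the six-month period
# ending September 2025, per the Reuters reporting cited above.
reported_monthly_eu_users = 120_400_000

exceeds_threshold = reported_monthly_eu_users > DSA_ARTICLE_33_THRESHOLD
multiple_of_threshold = reported_monthly_eu_users / DSA_ARTICLE_33_THRESHOLD

print(f"Exceeds Article 33 threshold: {exceeds_threshold}")   # True
print(f"Multiple of threshold: {multiple_of_threshold:.2f}x") # 2.68x
```

Crossing the numeric threshold does not itself trigger obligations; designation is a formal Commission decision, which is the step still pending here.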

What VLOSE designation would require, per confirmed DSA text: annual systemic risk assessments under Article 34, independent audits under Article 37, transparency reporting, and researcher data access. These obligations don’t address training data copyright directly; they address systemic risk and operational transparency at platform scale. But the audit requirements create documentation and transparency infrastructure that interacts with questions about training data provenance.

The EU AI Act’s GPAI provisions, applying to general-purpose AI models, include copyright compliance documentation requirements of their own. The interaction between DSA VLOSE obligations and GPAI requirements for a product like ChatGPT has no official EC guidance as of this reporting cycle. That gap is itself a compliance planning problem.

The EU’s copyright position today: training data transparency requirements through the AI Act’s GPAI provisions, platform-scale audit requirements potentially added through DSA designation, and no settled doctrine on whether existing training on copyrighted content was lawful under EU copyright law.

What This Means for Global AI Development

Companies operating across all three jurisdictions face a compliance map that produces no unified answer to the question: can we train on this data?

In the U.S.: probably, but litigation risk is unresolved and the legal doctrine is being written in active cases. Statutory clarity has been deliberately deferred.

In Japan: yes, under Article 30-4, with a proviso whose limits are untested at scale and whose interpretation will eventually be set by courts, not regulators.

In the EU: subject to copyright compliance documentation, with additional platform-scale obligations if VLOSE designation proceeds, and no official guidance yet on how the two frameworks interact.

The divergence has practical implications beyond compliance. Training data sourcing decisions made with Japan’s Article 30-4 in mind may carry risk in the EU if training data provenance documentation requirements expose practices that weren’t documented for EU compliance purposes. U.S. litigation outcomes in cases like NYT v. OpenAI will not bind EU or Japanese courts, but they will influence how rights holders in those jurisdictions assess their own litigation prospects.

For compliance teams with global mandates: no single jurisdiction’s framework is a safe harbor for the others. Three separate assessments are required. The most prudent near-term posture combines documentation practices designed for EU GPAI requirements, which are the most demanding, with active monitoring of litigation signals in Japan and the U.S.
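One way a compliance team might structure the three parallel assessments is as a simple per-jurisdiction record. This is an illustrative sketch only: the field names are our own, and the values are editorial summaries of the postures described in this article, not statutory language or legal advice.

```python
# Illustrative only: one record per jurisdiction, summarizing the posture
# described above. Field names and values are editorial shorthand.
assessments = {
    "US": {
        "statutory_basis": None,  # doctrine deliberately deferred to courts
        "key_risk": "unresolved litigation (e.g. NYT v. OpenAI)",
        "documentation": "litigation-defensive records of data sourcing",
    },
    "JP": {
        "statutory_basis": "Copyright Act Article 30-4",
        "key_risk": "'unreasonably prejudice' proviso, untested at scale",
        "documentation": "evidence that training use does not harm rights holders",
    },
    "EU": {
        "statutory_basis": "AI Act GPAI provisions (plus possible DSA VLOSE)",
        "key_risk": "training-data provenance transparency",
        "documentation": "GPAI copyright compliance documentation",
    },
}

# Per the posture suggested above: build for the most demanding regime
# (the EU) first, then reuse that documentation outward.
build_order = ["EU", "US", "JP"]
for jurisdiction in build_order:
    print(jurisdiction, "->", assessments[jurisdiction]["key_risk"])
```

The point of the structure is that the three records share a schema but no values: nothing in one row answers the question for another.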

TJS Synthesis

What makes this moment significant isn’t that three jurisdictions disagree about AI copyright. They always have. What’s new is that all three are moving simultaneously, under litigation and regulatory pressure, toward outcomes that will be harder to harmonize the further they diverge. The window for coordinated international AI copyright standards, if it ever existed, is narrowing. Companies waiting for a global consensus framework to reduce compliance complexity should plan for the opposite: three distinct regimes that require distinct analysis, distinct documentation, and distinct risk assessments. Build the documentation infrastructure for the most demanding jurisdiction and work outward. Right now, that’s the EU.
