Regulation Deep Dive

Three Jurisdictions, Three AI Governance Philosophies: What US, EU, and Japan Divergence Means for...

Source: Holland & Knight (partial)
The US Supreme Court closed the AI authorship question on March 2. The EU Parliament restructured its compliance timeline into three tiers this week. Japan continues operating its soft-law model with no fines and no bans. For companies operating across all three markets, these aren't three separate compliance stories; they're one structural challenge: how do you build a compliance program that functions coherently across three fundamentally different theories of AI governance?

There’s no global AI law. There likely won’t be one. What exists instead is a widening divergence between three governance philosophies, each coherent on its own terms, each creating compliance obligations, formal or reputational, for organizations operating in its jurisdiction. This week’s regulatory news across the US, EU, and Japan makes the shape of that divergence unusually clear.


The Governance Divergence in Brief

Three theories are in play.

The EU model is prescriptive risk-tiering. The AI Act classifies systems by risk level, assigns conformity obligations by tier, and enforces through a regulatory apparatus with teeth. The Parliament’s vote this week restructured the timeline (November 2026, December 2027, August 2028) but not the theory. The EU is building a compliance infrastructure that resembles its approach to medical devices and financial products: technical standards, notified bodies, documented conformity. This takes time to implement, which is why the deadlines moved. It doesn’t take time to enforce on the categories where the harm is obvious, which is why the deepfake ban passed in the same session.

The US model is contested consolidation. The White House has stated a preference for federal preemption of state AI laws, but the legislative vehicles are competing and the outcome is uncertain. Meanwhile, state-level regimes in Colorado, Illinois, and Texas are operative. US copyright law, as confirmed by the Supreme Court’s March 2 denial in Thaler v. Perlmutter, requires human authorship for copyright protection, full stop. On most other AI governance questions, the US is a patchwork of state laws and federal agency guidance, with a nonbinding White House Framework pointing toward consolidation that may or may not materialize.

The Japan model is voluntary reputational governance. Japan’s AI Promotion Act carries no mandatory requirements, no fines, no bans. Per regulatory reporting as of March 27, 2026, enforcement operates through administrative guidance and public identification of non-compliant operators. Japan has indicated an intent to align with G7 AI principles, according to reporting on the Act, though no binding agreement reflects that alignment yet. The model succeeds on its own terms. It also means that companies operating in Japan answer to reputation risk rather than legal penalty risk, and in Japan’s business culture that is a distinction of degree rather than kind.


Jurisdiction Comparison

| Dimension | United States | European Union | Japan |
| --- | --- | --- | --- |
| Enforcement model | State laws (operative); federal law (pending, contested) | Prescriptive risk-tiering with regulatory penalties | Administrative guidance + “name and shame” (no fines) |
| Copyright / IP | Human authorship required; AI-only works unprotectable (confirmed, Supreme Court March 2026) | Human authorship requirement consistent with Berne Convention; specific AI guidance developing | IP guidance on AI-generated content expected from Cabinet Office through 2026 (qualified, not yet published) |
| Key compliance threshold | State-specific (bias audit, impact assessment, disclosure); may be preempted by federal law | Risk tier determines obligation; high-risk deadline December 2027 (pending Council approval) | No formal threshold; voluntary compliance with soft-law guidance |
| Primary risk for non-compliance | State enforcement (current); federal enforcement (uncertain timeline) | Regulatory penalties; market access restriction | Reputational risk; administrative guidance; public identification |

Note: EU deadline dates and Japan framework details carry qualified verification status as indicated in the individual brief items. US copyright position is confirmed via T2 legal analysis. Readers with operational dependence on these details should verify against primary government sources.


The US Copyright Anchor

The Supreme Court’s Thaler v. Perlmutter decision (or more precisely, its non-decision) resolves one question in the US that remains open in both the EU and Japan. AI-only generated works don’t get copyright protection in the United States. Full stop. Human authorship is required. For companies building products that generate content (images, text, audio, code), the IP architecture of those products needs to reflect this. The commercially defensible path is documented human creative contribution in the authorship chain, not reliance on the possibility that courts might eventually rule differently.

The fair use question is different and still open. Whether training large AI models on copyrighted content constitutes infringement is being litigated across multiple cases now. That question isn’t resolved by Thaler. It’s the next contested terrain in US AI IP law – and legal observers note that courts, not Congress, are the venue where these questions are being resolved in real time.


The EU Compliance Sequence

The EU’s tiered deadline structure gives multi-jurisdictional compliance programs something valuable: a clear sequence. Organizations operating in the EU can now build a phased program (content transparency by November 2026, high-risk system conformity by December 2027, sectoral integration by August 2028) rather than a single-deadline sprint. That sequence is pending Council approval. It’s also the most operationally rational structure the EU has provided since the Act entered force.

For organizations operating across the US and EU simultaneously, the sequencing creates a practical alignment opportunity. The EU’s November 2026 watermarking and content transparency requirement coincides with a period when US federal AI law is likely still in legislative debate. Organizations that build content transparency capabilities for EU compliance will be ahead of whatever US content authenticity requirements eventually emerge, a common outcome when EU regulation sets the international standard.


Japan: Soft Law Is Not No Law

The characterization of Japan’s AI Promotion Act as “no regulation” is analytically wrong in a way that creates real risk for companies entering the Japanese market. The Act creates a governance framework. It creates enforcement mechanisms. Those mechanisms rely on reputation rather than penalty, and in a business environment where long-term institutional relationships are central to commercial success, the threat of public identification as a non-compliant operator is not trivial.

A practical point: APPI (Act on Protection of Personal Information) is a separate framework with its own requirements. A revision easing consent requirements for AI training on sensitive data in research contexts was reported in early 2026, but that claim rests on a single secondary source. Don’t build compliance positions on it without confirming against official METI or Personal Information Protection Commission publications.


Building Compliance Across Three Models

Multi-jurisdictional AI compliance programs face a structural choice: build three separate programs optimized for each jurisdiction’s requirements, or build one framework with jurisdiction-specific modules. The modular approach is more expensive to design and cheaper to maintain. The siloed approach is cheaper to launch and expensive to govern over time.

The three governance philosophies create natural module boundaries. A multi-jurisdictional program needs: a content transparency module (EU November 2026 watermarking + US copyright documentation); a risk classification module (EU high-risk system identification + US state-law bias audit where operative); and a governance posture module (Japan’s reputational framework + evolving G7 alignment). These modules don’t conflict with each other. They address different questions about the same set of AI systems.
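The module boundaries above can be sketched as data plus a dispatch function. This is a hypothetical illustration only: the `ComplianceModule` type and `applicable_modules` helper are invented here, and the obligation strings simply restate the brief's contents, not any official checklist.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one framework, jurisdiction-specific modules.
# Module names and obligations follow the brief; the types are illustrative.

@dataclass
class ComplianceModule:
    name: str
    jurisdictions: frozenset              # markets where the module applies
    obligations: list = field(default_factory=list)

MODULES = [
    ComplianceModule(
        name="content_transparency",
        jurisdictions=frozenset({"EU", "US"}),
        obligations=[
            "EU watermarking / content transparency (Nov 2026, pending Council approval)",
            "US human-authorship documentation (per Thaler v. Perlmutter)",
        ],
    ),
    ComplianceModule(
        name="risk_classification",
        jurisdictions=frozenset({"EU", "US"}),
        obligations=[
            "EU high-risk system identification and conformity (Dec 2027, pending)",
            "US state-law bias audits where operative",
        ],
    ),
    ComplianceModule(
        name="governance_posture",
        jurisdictions=frozenset({"JP"}),
        obligations=[
            "Japan soft-law guidance alignment (reputational enforcement)",
            "G7 AI principles tracking",
        ],
    ),
]

def applicable_modules(markets):
    """Return the modules a program needs for a given set of markets."""
    return [m for m in MODULES if m.jurisdictions & set(markets)]
```

For a company in all three markets, `applicable_modules({"US", "EU", "JP"})` returns all three modules; a Japan-only operation activates only the governance posture module. The point of the structure is the one made above: the modules answer different questions about the same systems, so they compose rather than conflict.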

TJS synthesis. The story of this week’s AI regulation news is jurisdictional divergence in three different directions simultaneously. The US is moving toward consolidation from fragmentation. The EU is moving toward enforcement readiness from legislative ambition. Japan is staying in the voluntary lane while watching both. For compliance professionals, the practical takeaway is this: a program built for the EU’s prescriptive requirements will be over-engineered for Japan and under-specified for the US copyright question. Build the EU compliance framework as your structural foundation (it’s the most technically demanding and the most globally portable) and treat US and Japan requirements as modules that sit on top. The EU model has a way of becoming the global baseline. We’ve seen it with GDPR. We’re watching it happen again.
