Three is not the same as one with variations.
That distinction is the starting point for any compliance team that has spent the last two years building an EU AI Act program and is now wondering what to do about Japan. The instinct (map your existing EU documentation against the new jurisdictions and close the gaps) works for some things and fails for others. The EU, the United States at the federal level, and Japan have each adopted a different theory of what AI governance is for. Those theories produce different compliance obligations, and the differences are not minor.
Japan’s AI Promotion Act became law on May 28, 2025, making it the first Asian jurisdiction with dedicated AI legislation. The EU AI Act is the most developed hard regulatory model in the world. The US federal layer remains a patchwork. All three are now operative in some form. Here is what each actually requires, and where compliance teams will find the frameworks diverging in ways that matter operationally.
The Three-Jurisdiction Landscape
The EU AI Act is the baseline for hard regulatory comparison. It is the only one of the three with monetary penalties calibrated to the severity of violation, up to €35 million or 7% of global annual turnover for the most serious breaches. It operates on a risk classification system (unacceptable, high, limited, minimal) that triggers different compliance requirements at each tier. Prohibited uses are enumerated and absolute. High-risk system developers face conformity assessments, technical documentation requirements, human oversight mandates, and registration obligations before market deployment.
The US federal layer has no equivalent single statute. The federal framework is a collection of executive orders, agency guidance documents, voluntary commitments, and procurement requirements that create compliance obligations for federal contractors and guidance-level expectations for everyone else. NIST’s AI Risk Management Framework is the closest thing to a unified standard, but adoption is voluntary outside federal procurement. State laws in California, Colorado, and now Connecticut are filling gaps the federal layer hasn’t addressed.
Japan sits in a distinct third category. Clifford Chance’s April 15, 2026 analysis describes Japan’s framework explicitly as “still no fines, still no mandates.” The AI Promotion Act is statutory law (not guidance, not a voluntary framework), but its current enforcement mechanism is administrative guidance rather than monetary penalty. The Cabinet approved the Basic AI Plan on December 23, 2025, establishing four governance chapters under an AI Strategic Headquarters chaired by the Prime Minister. The structure is formal and centralized. The compliance weight is not yet operational.
Where the Frameworks Converge
The three regimes share enough vocabulary to make comparison useful, and enough substantive difference to make direct mapping dangerous.
All three use some form of risk-based or impact-based classification. The EU AI Act’s four-tier risk ladder is the most fully developed. The US NIST RMF uses a risk management approach that maps onto the EU’s logic reasonably well. Japan’s framework designates “high-impact” AI models as the scope trigger, but has not yet published the technical thresholds that define what “high-impact” means in practice.
All three include transparency obligations. The EU requires detailed technical documentation, conformity assessments, and registration for high-risk systems. The US federal layer requires AI disclosures in specific procurement and consumer-facing contexts. Japan’s framework anticipates transparency requirements through the Headquarters’ authority over designated models.
All three include human oversight as a principle. The EU codifies it as a mandatory technical requirement for high-risk systems. California’s SB 7 operationalizes it as an employment law mandate. Japan’s framework includes it as a governance objective.
The vocabulary convergence is real. Compliance teams who have built EU AI Act programs have transferable work. But the gaps are where the operational planning has to focus.
Where They Diverge
Enforcement penalties: The divergence here is stark. The EU AI Act imposes fines up to €35 million or 7% of global annual turnover. The US federal layer has no equivalent cross-sector AI penalty structure; enforcement runs through sector-specific regulators (FTC, SEC, financial regulators) with existing authority. Japan currently has no monetary fines for AI governance violations. Japan’s ruling LDP has proposed adding penalties for specific harms (deepfakes, piracy), but those proposals are not yet law.
Prohibited use cases: The EU AI Act enumerates absolute prohibitions: social scoring, real-time biometric surveillance in public spaces, and subliminal manipulation. The US has no equivalent enumerated prohibition list at the federal level, though specific practices face legal risk under existing consumer protection and civil rights frameworks. Japan’s prohibitions are not yet operationally defined through the technical threshold publication process.
Scope definition: The EU AI Act defines scope by system risk classification and use case. The US federal layer defines scope primarily by procurement context and sector. Japan’s framework triggers on “high-impact” model designation, a concept whose technical definition is pending. Developers cannot yet determine with certainty whether a given system falls within Japan’s mandatory reporting authority.
Extraterritorial reach: The EU AI Act applies to any AI system placed on the EU market or affecting EU users, regardless of where the developer is located. The US applies enforcement authority through market access and sector regulation rather than explicit extraterritorial scope. Japan’s framework is currently ambiguous on extraterritorial application; the technical threshold guidance, when published, will clarify whether foreign developers are in scope.
Compliance timelines: The EU AI Act is in phased implementation with its August 2, 2026 full application deadline approaching. US federal procurement requirements are already in effect for contractors. Japan’s compliance timeline is defined by the Headquarters’ publication of technical thresholds, a date that remains unconfirmed.
What Compliance Teams Should Do Now
The immediate action list is shorter than the framework analysis might suggest.
First: complete your EU AI Act documentation before August 2. That deadline is approximately 99 days away as of April 25, 2026. A delay to the August 2026 deadline has been discussed, but until any extension is officially confirmed, your program should target August 2. Everything else is secondary to that.
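The deadline arithmetic is easy to verify. A quick sketch using the dates cited in this article (the reference date and deadline are from the text; this is an illustration, not legal advice):

```python
from datetime import date

# Dates cited in this article
EU_FULL_APPLICATION = date(2026, 8, 2)   # EU AI Act full application deadline
REFERENCE_DATE = date(2026, 4, 25)       # "as of" date used above

# Simple calendar subtraction yields the remaining runway
days_remaining = (EU_FULL_APPLICATION - REFERENCE_DATE).days
print(f"Days until EU AI Act full application: {days_remaining}")
# → Days until EU AI Act full application: 99
```

Wiring a countdown like this into a compliance dashboard keeps the deadline visible as program milestones are scheduled against it.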
Second: monitor Japan’s AI Strategic Headquarters for technical threshold publication. This is the document that converts Japan’s framework from a structural reality to an operational compliance obligation. When thresholds are published, the window between publication and effective date will define how much runway developers have. Building your Japan monitoring capability now, not after publication, is the right posture.
Third: map your existing EU AI Act risk classification against what Japan’s framework is likely to require. Risk-based approaches in multiple jurisdictions tend to converge on similar trigger criteria. High-risk systems under the EU Act are likely candidates for designation under Japan’s high-impact threshold, whatever that threshold turns out to be. The documentation work you’ve done for the EU is not wasted on Japan.
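This mapping exercise can start as something as simple as a lookup keyed on your existing EU classifications. The tier names below follow the EU AI Act; the Japan-side labels are placeholders of my own, since the Headquarters has not yet published its technical thresholds (an illustrative sketch only):

```python
# Provisional EU-tier → Japan-posture lookup. The right-hand values are
# hypothetical labels, pending Japan's "high-impact" threshold definition.
EU_TO_JAPAN_POSTURE = {
    "unacceptable": "likely_in_scope",  # prohibited in the EU; monitor in Japan
    "high":         "likely_in_scope",  # prime candidates for "high-impact" designation
    "limited":      "monitor",
    "minimal":      "monitor",
}

def japan_posture(eu_risk_tier: str) -> str:
    """Map an EU AI Act risk tier to a provisional Japan monitoring posture."""
    return EU_TO_JAPAN_POSTURE.get(eu_risk_tier.lower(), "unclassified")

print(japan_posture("high"))     # → likely_in_scope
print(japan_posture("minimal"))  # → monitor
```

The point of the sketch is the workflow, not the labels: once Japan publishes real thresholds, only the right-hand column needs to change, and your EU inventory drives the Japan scoping automatically.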
Fourth: watch the LDP penalty proposal. If Japan’s parliament advances legislation adding monetary fines for deepfake and piracy-related AI violations, that signals the political direction for broader enforcement additions. A penalty structure changes the calculus for how seriously the framework needs to be treated.
The three-jurisdiction reality is here. Treating each framework in isolation produces compliance programs that are redundant in some areas and have critical gaps in others. A unified approach, with the EU as the structural foundation, US federal requirements as procurement-specific additions, and Japan as an emerging obligation requiring active monitoring, is the defensible program architecture for 2026.