Japan’s AI governance approach doesn’t fit the EU template. There is no single comprehensive AI statute. Instead, METI (the Ministry of Economy, Trade and Industry) has built a governance layer from voluntary guidelines applied on top of existing law (the Copyright Act, the Act on the Protection of Personal Information, and sector-specific regulations) rather than through new AI-specific legislation.
Coverage of Japan’s approach characterizes this as “agile governance”: a framework designed to adapt quickly as AI capabilities change, without requiring parliamentary cycles to amend prescriptive rules. Japan’s policy builds on the government’s Social Principles of Human-Centric AI, adopted in 2019, as the values foundation for subsequent METI guidance.
METI and the Ministry of Internal Affairs and Communications jointly published the AI Guidelines for Business Ver. 1.0 in April 2024 as a core element of this voluntary framework.
The voluntary layer isn’t decorative. In practice, adherence to METI guidelines shapes outcomes in AI-related disputes and influences how existing laws are applied to AI systems, making non-adherence a meaningful risk for international operators, even without direct enforcement.
According to the EU-Japan Centre for Industrial Cooperation, METI is expected to publish a Civil Liability Framework for AI Utilization report and a draft AI Robotics Strategy by the end of March 2026. Both represent the next phase of Japan’s framework-building process and are worth monitoring for international businesses with Japan operations.
This brief is Part 2 of TJS’s coverage of Japan’s dual-track AI governance. For the mandatory law layer, see the earlier brief on the AI Promotion Act.