China’s Cyberspace Administration (CAC) released draft Provisional Measures on the Administration of Human-like Interactive AI Services for public consultation in early April 2026, according to chinalawtranslate.com’s English-language summary and reporting by NDTV. The draft targets a specific and growing product category: AI that doesn’t just answer questions but behaves like a person.
What the draft covers
The proposed measures apply to AI products and services that simulate human personality traits, modes of thinking, and communication styles, and that engage users in emotional interaction. That definition reaches well beyond general-purpose chatbots. AI companions, virtual therapists, customer service agents designed to build rapport, and interactive entertainment characters would likely fall within scope if the measures are adopted as drafted.
The rules apply to services offered to the public in mainland China. Companies operating China-facing AI services in any of these categories need to track this consultation closely.
Prohibited activities (reported, not independently confirmed)
The draft measures are reported to include prohibitions on content endangering national security and ethnic unity, consistent with China's broader AI governance framework spanning its generative AI rules and algorithm recommendation regulations. These provisions have not been independently confirmed against the original Chinese-language text. The framing matches China's established regulatory pattern, but specific provisions should be verified against the official CAC publication before any compliance analysis relies on them.
China’s layered AI regulatory architecture
This draft fits a larger structure. China has issued algorithm recommendation rules, generative AI service regulations, and deep synthesis rules over the past several years. The human-like interactive AI measures would add a fourth layer targeting relational and emotional AI specifically. Each layer has added compliance obligations for companies operating AI services in China. The pattern is consistent: China regulates AI by use-case category rather than by model type or general capability.
For companies already complying with China’s generative AI rules, the interactive AI measures would likely add new disclosure obligations, user consent requirements, and content moderation standards specific to emotionally interactive contexts.
What to watch
The CAC has not yet confirmed a closing date for the public consultation in available English-language sources. Final measures could follow weeks or months after consultation closes. Companies with China-facing AI services should monitor the official CAC publication channel and engage legal counsel familiar with Chinese digital regulation during the comment period.
TJS synthesis
China doesn’t regulate AI as a monolith. It regulates product by product, use case by use case. The human-like interactive AI draft is the clearest signal yet that emotionally interactive AI, a category global AI companies are investing in heavily, is being placed under explicit oversight in the world’s largest AI market. Whether the draft’s final form resembles what’s been reported will depend on consultation feedback. What won’t change is the direction: more specific regulation, not less.