Most Western jurisdictions haven’t formally defined what a digital human is, let alone regulated one. China is further along.
According to Mayer Brown’s analysis of the CAC’s draft measures, China’s Cyberspace Administration has issued rules governing what the draft calls “human-like interactive artificial intelligence services.” That’s the regulatory category covering AI products designed to simulate human personalities, voices, and interaction patterns: digital avatars, virtual companions, and conversational AI personas that present as human.
A brief explainer on how this works: China’s CAC regulatory process typically moves from draft measures through a public comment period to finalization. “Draft measures” means these rules are not yet in force; they’re subject to revision before publication as final regulations. Companies should monitor the outcome of the comment period rather than treat these provisions as enacted requirements, though experienced compliance teams know that CAC draft measures tend to reflect the substance of the final rules, even if specific provisions shift.
The provisions Mayer Brown analyzes include several worth flagging for compliance and product teams. According to their analysis of the draft, providers would be required to prominently label services as digital humans, making clear to users that they’re interacting with AI rather than a person. The draft measures would reportedly prohibit virtual intimate relationships between AI services and minors, a product restriction that hasn’t appeared in most Western regulatory frameworks at this level of specificity. The rules would also reportedly prohibit using an individual’s personal information to create their digital likeness without consent, a provision that intersects with existing data protection obligations in China and elsewhere.
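For product teams thinking about how these three draft provisions might map onto application logic, here is a minimal, hypothetical sketch. Every name in it (`AvatarSession`, `is_minor`, `likeness_consent`, and so on) is an illustrative assumption for this example, not terminology from the draft measures, and the checks are a rough analogy to the requirements described above, not legal advice:

```python
from dataclasses import dataclass

@dataclass
class AvatarSession:
    """Hypothetical session state; field names are illustrative, not from the draft."""
    ai_label_shown: bool            # prominent "this is AI" disclosure displayed
    is_minor: bool                  # user identified as a minor
    intimate_mode: bool             # virtual-companion intimacy features enabled
    likeness_of_real_person: bool   # avatar built from a real individual's data
    likeness_consent: bool          # consent obtained from that individual

def draft_compliance_issues(s: AvatarSession) -> list[str]:
    """Flag gaps against the three draft provisions discussed above."""
    issues = []
    if not s.ai_label_shown:
        issues.append("service not prominently labeled as a digital human")
    if s.is_minor and s.intimate_mode:
        issues.append("virtual intimate relationship features enabled for a minor")
    if s.likeness_of_real_person and not s.likeness_consent:
        issues.append("digital likeness created without the individual's consent")
    return issues

session = AvatarSession(
    ai_label_shown=True,
    is_minor=True,
    intimate_mode=True,
    likeness_of_real_person=False,
    likeness_consent=False,
)
print(draft_compliance_issues(session))
```

The point of the sketch is structural: each draft provision becomes an independent gate evaluated per session, which is the same shape many teams already use for GDPR-style consent checks.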
Providers operating in regulated sectors, reportedly including healthcare, finance, and legal services, would also need to comply with the sector-specific regulatory requirements applicable to those industries. That layered compliance requirement is consistent with CAC’s broader approach to AI regulation, which generally requires alignment with existing sectoral rules rather than creating AI-specific carve-outs.
One provision worth framing carefully: restrictions on content that endangers national security or promotes illegal activities. This is standard language in virtually every CAC internet regulation. It’s not novel to this draft and shouldn’t be read as a distinctive feature of the digital humans framework specifically.
The draft measures are reportedly open for public comment until May 6, 2026, according to Mayer Brown’s analysis. Tech Jacks Solutions has not verified this date against the CAC’s official publication; builders should confirm against the CAC source before treating May 6 as a firm deadline.
For global compliance teams and developers building avatar or conversational AI products, the practical question isn’t whether China’s rules apply to them today. It’s whether their products have a China user base, whether they plan to enter the China market, and whether the consent and labeling requirements in this draft align with their existing product design. Companies that have already built robust consent and labeling architecture for GDPR or other frameworks will find the requirements familiar in structure, if different in specifics.
China AI regulation context: Tech Jacks Solutions doesn’t yet have a dedicated China AI regulatory overview page, but this brief will anchor that content when it’s published. Watch this space.