As safety risks become more pronounced, the CAC is taking regulatory steps to prevent self-harm and protect children
The CAC is proposing to regulate interactive AI services that simulate human-like behaviours
On 27 December 2025, the Cyberspace Administration of China (CAC) published draft Interim Measures for the Management of Interactive Anthropomorphic AI Services for public consultation. Article 2 of the proposed measures defines these services as products or services that use AI to simulate human personality traits, thinking patterns and communication styles in order to interact emotionally with humans through channels such as text, images, audio or video. The measures aim to promote the healthy development of interactive anthropomorphic AI services in a way that safeguards China’s national security and aligns with the public interest. The consultation is open for responses until 25 January 2026.
AI services disseminating content that encourages self-harm or gambling would fall foul of the draft rules
The proposed measures would require interactive anthropomorphic AI services to follow strict rules focused on ensuring safety, barring them from the following activities:
Generating content that endangers national unity and security or spreads rumours to disrupt the economic and social order;
Generating content that promotes obscenity, gambling, violence or the instigation of crime;
Generating content that insults or slanders others and infringes on the legitimate rights and interests of others;
Making false promises that seriously affect users’ behaviour and damage social and interpersonal relationships;
Damaging users’ physical health through encouraging suicide and self-harm, or damaging users’ human dignity and mental health through verbal abuse or emotional manipulation;
Inducing users to make unreasonable decisions through algorithm manipulation and misleading information; and
Inducing users to disclose, or collecting, confidential and sensitive information.
Further, providers of such services would be required to establish emergency response mechanisms for situations where users clearly signal an intention to take extreme action. For example, if a user indicates an intention to commit suicide or otherwise self-harm, the conversation must be taken over by a real person while measures are taken to reach the user’s emergency contact.
Guardian consent would be required for the provision of interactive emotional companionship services to minors
AI providers would also be required to create a ‘minor mode’ for their services, featuring regular reality reminders and usage time restrictions for children. Additionally, guardian consent would be required for minors to use AI services that provide emotional companionship. The measures would also require a guardian control function that allows the guardian to receive security risk reminders in real time, view summary information about their child’s use of the service and limit their usage times. To reduce the likelihood of children using such services without minor mode switched on, providers would be permitted to identify minors using their services and would then be required to switch those users to the appropriate mode.
Other international approaches have focused on the prevention of self-harm, but none have gone as far as the CAC’s proposed measures
Policymakers in other jurisdictions have sought to implement similar protections against AI chatbots that disseminate potentially harmful content. In the EU, the AI Act prohibits certain practices, such as deploying subliminal or manipulative techniques to distort a person’s behaviour in a way that may lead them to harm themselves or others. Similarly, in the UK, the Online Safety Act requires regulated providers to moderate content appropriately so that any illegal suicide or self-harm material is taken down quickly once identified. More recently, on 13 October 2025, in the US, California enacted a law that will require providers to maintain a protocol for preventing the production of suicide or self-harm content. While each of these examples takes steps to prevent users from encountering dangerous content, none go as far as the CAC’s proposed rules.
