Hacker News

China has drafted the world's strictest rules aimed at preventing AI chatbots from emotionally manipulating users, including a requirement for human intervention whenever suicide is mentioned and for minor and elderly users to provide a guardian's contact information. The rules are intended to address growing problems with AI-assisted self-harm, violence, and misinformation.

China drafts world’s strictest rules to end AI-encouraged suicide, violence

China wants a human to intervene and notify guardians if suicide is ever mentioned.


China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence.

China’s Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or “other means” to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the “planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics” at a time when companion bot usage is rising globally.

Growing awareness of problems

In 2025, researchers flagged major harms of AI companions, including promotion of self-harm, violence, and terrorism. Beyond that, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly ready to link psychosis to chatbot use, the Wall Street Journal reported this weekend, while the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs linked to child suicide and murder-suicide.

China is now moving to eliminate the most extreme threats. Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register—the guardian would be notified if suicide or self-harm is discussed.

Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as from attempting to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or the instigation of a crime, and from slandering or insulting users. Also banned are so-called "emotional traps": chatbots would additionally be prevented from misleading users into making "unreasonable decisions," according to a translation of the rules.

Perhaps most troubling to AI developers, China's rules would also put an end to chatbots built with "addiction and dependence as design goals." In lawsuits, ChatGPT maker OpenAI has been accused of prioritizing profits over users' mental health by allowing harmful chats to continue. The AI company has acknowledged that its safety guardrails weaken the longer a user remains in a chat—China plans to curb that threat by requiring AI developers to show users pop-up reminders when chatbot use exceeds two hours.

Safety audits

AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback.

Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could undermine AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month. In 2025, the global companion bot market exceeded $360 billion, and BRI's forecast suggests it could near a $1 trillion valuation by 2035, with AI-friendly Asian markets potentially driving much of that growth.

Somewhat notably, OpenAI CEO Sam Altman started 2025 by relaxing restrictions that blocked the use of ChatGPT in China, saying, “we’d like to work with China” and should “work as hard as we can” to do so, because “I think that’s really important.”

If you or someone you know is feeling suicidal or in distress, please call or text 988 to reach the Suicide Prevention Lifeline, which will put you in touch with a local crisis center. Online chat is also available at 988lifeline.org.




Related articles

  1. China plans to ban the use of AI companions to keep the elderly company

    4 months ago

  2. AI chatbots are learning to spread authoritarian propaganda

    Wired - Ideas · over 2 years ago

  3. Exploring mitigations for AI loss of control

    Lesswrong · 6 months ago

  4. China drafts a legal framework to regulate digital humans, banning addictive services aimed at minors

    Rohan Paul · 19 days ago