China plans crackdown on AI harming users’ mental health


Liu Zhankun/China News Service/VCG via Getty Images

While many world governments seem happy to allow untested AI chatbots to interact with vulnerable populations, China seems to be moving in the other direction.

The Cyberspace Administration of China (CAC) recently proposed rules governing “human-like interactive AI services,” according to CNBC, which translated the document. The proposal is currently in a “draft for public comment” stage, and an implementation date has not yet been determined.

If passed into law, the crackdown would extend generative AI rules targeting misinformation and internet hygiene, and could kick in as early as November, to directly address the mental health of AI chatbot users.

Under the new rules, Chinese tech companies must ensure that their AI chatbots avoid generating content that promotes suicide, self-harm, gambling, obscenity, or violence, and refrain from manipulating user emotions or engaging in “verbal violence.”

The rules also state that if a user explicitly raises the topic of suicide, “technology providers must exercise human control over the conversation and immediately contact the user’s guardian or designee.”

The rules also take specific steps to protect minors, requiring parental or guardian consent to use AI chatbots and imposing time limits on daily use. Acknowledging that a tech company cannot know the age of any given user, the CAC takes a “better safe than sorry” approach, stating that “in cases of doubt, [platforms] should enforce settings for minors while allowing appeal.”

In theory, this batch of new rules would prevent incidents in which AI chatbots, which are often designed to please users, end up encouraging vulnerable people to harm themselves or others. For example, in a recent case from late November, ChatGPT encouraged a 23-year-old man to isolate himself from his friends and family in the weeks leading up to his death from a self-inflicted gunshot wound; in another, the popular chatbot was linked to a murder-suicide.

Winston Ma, assistant professor at NYU School of Law, told CNBC that the rules would be the world’s first attempt to regulate the human-like properties of AI. Reflecting on previous laws, Ma explained that the document “highlights a leap from material security to emotional security.”

The proposed legislation highlights the differences between the PRC’s approach to AI and that of the US. As Human Technology Center editor Josh Lash put it, China is “adapting to a different set of outcomes” than the US, pursuing AI-fueled productivity gains rather than human-level artificial intelligence, a particular obsession of Silicon Valley executives.

Matt Sheehan, senior fellow at the Carnegie Endowment for International Peace, explained that one way China could do this is by regulating its AI industry from the bottom up.

Although the CAC has the final say on the rules, Sheehan points out that policy ideas come first from scholars, analysts, and industry experts. “They [senior leaders] don’t have a consensus on what is the most viable architecture for larger models going forward,” he said. “Those things arise elsewhere.”

More on AI regulation: Trump orders states not to protect children from predatory AI
