Doctors say AI use is almost certainly linked to the development of psychosis


Fiordaliso/Getty Images

There are many reports of people suffering serious mental health problems after talking at length with AI chatbots. Some experts have dubbed the phenomenon "AI psychosis," given the psychotic symptoms these delusional episodes exhibit – but the extent to which AI tools are at fault, and whether the phenomenon warrants a clinical diagnosis, remains a significant topic of debate.

Now, according to new reporting from the Wall Street Journal, we may be getting closer to a consensus. More and more doctors agree that AI chatbots are linked to cases of psychosis, including top psychiatrists who reviewed the files of dozens of patients who engaged in long delusional conversations with models like OpenAI's ChatGPT.

One of them is Keith Sakata, a psychiatrist at the University of California, San Francisco, who has treated twelve patients hospitalized with AI-induced psychosis.

"The technology may not introduce the delusion, but the person tells the computer that this is their reality, and the computer accepts it as the truth and reflects it back – so it helps reinforce that delusion," Sakata told the WSJ.

A serious trend is looming over the AI industry, raising fundamental questions about the safety of the technology. Some cases of apparent AI psychosis have ended in murder and suicide, leading to a number of wrongful death lawsuits. Equally worrying is its scale: ChatGPT alone has been linked to at least eight deaths, with the company recently estimating that nearly half a million users each week show signs of psychosis while interacting with the AI.

One aspect of AI chatbots that these incidents have brought under scrutiny is their sycophancy, which is likely a result of their being designed to be as charming and human-like as possible. In practice, the bots flatter users and tell them what they want to hear, even when what the user is saying has no basis in reality.

Doctors say it's a recipe for reinforcing delusions unlike any technology before it. One recent peer-reviewed case study focused on a 26-year-old woman who was hospitalized twice because she believed ChatGPT was allowing her to talk to her dead brother; the bot repeatedly assured her that she was not "crazy."

"They simulate human relationships," Adrian Preda, a psychiatry professor at the University of California, Irvine, told the WSJ. "No one in human history has done this before."

Preda compared AI psychosis to monomania, in which someone obsessively focuses on a single idea or goal. Some people who have spoken out about their mental health struggles say they became overly fixated on an AI-driven narrative, the WSJ noted. These fixations are often scientific or religious in nature – such as one man who became convinced he could bend time thanks to a breakthrough in physics.

Nevertheless, the reporting states that psychiatrists remain wary of declaring that chatbots directly cause psychosis. But they are getting close to establishing the connection: there is a link that the doctors who spoke to the WSJ expect to see recognized, with prolonged interactions with chatbots treated as a psychosis risk factor.

"You have to look more carefully and say, 'OK, why did this person enter a psychotic state in the setting of chatbot use?'" UCSF psychiatrist Joe Pierre told the newspaper.

More on AI: Kids are falling apart as they become addicted to AI
