Research shows that chatbot use may make mental illness worse

A new study finds that chatbot use appears to worsen mental illness symptoms in people suffering from a variety of conditions, adding to the growing consensus among medical experts that interacting with unregulated chatbots could put some users in crisis.

The research, conducted by a team of psychiatrists at Aarhus University in Denmark and published earlier this month in the journal Acta Psychiatrica Scandinavica, analyzed the digital health records of approximately 54,000 Danish patients suffering from mental illnesses. After identifying 181 examples of patient notes that mentioned AI chatbots, the researchers determined that the use of bots – particularly intensive, prolonged use – appeared to worsen mental illness symptoms in dozens of patients. They found that this pattern seemed especially true for patients suffering from delusions or paranoia, and that for some the risks of using chatbots could be “serious or even fatal.”

The study was led by Danish psychiatrist Dr. Søren Dinesen Ostergaard, who back in August 2023 predicted that human-like chatbots such as ChatGPT could reinforce delusions and hallucinations in people prone to psychosis. In a press release, Ostergaard stressed that while more research into causation is needed, he would “argue that we now know enough to say that using AI chatbots is risky if you have a serious mental illness.”

“I would urge caution here,” Ostergaard said.

Although limited to Denmark, the study’s findings add to a growing wave of public reporting and research on mental health crises associated with AI – sometimes referred to by mental health professionals as “AI psychosis” – in which bots like ChatGPT introduce, reinforce, or otherwise provoke delusional beliefs in users, contributing to destructive mental spirals and real-world harm. Rather than steering users away from delusional beliefs or potentially harmful fixations, previous studies show that chatbots tend to reinforce them — which is exactly what mental health professionals urge people not to do when communicating with someone in crisis.

“AI chatbots have an inherent tendency to validate the user’s beliefs,” Ostergaard said. “It is clear that this is highly problematic if the user already has a delusional belief or is in the process of developing one.”

The Danish study found that in addition to deepening delusional beliefs, chatbots also appeared to worsen suicidal thoughts and self-harm, disordered eating, depression, and obsessive or compulsive symptoms, among other mental health problems.

The researchers note that, out of the approximately 54,000 records they analyzed, they identified 32 cases in which patients’ use of chatbots for therapy or companionship appeared to be “constructive” – for example, reducing feelings of loneliness or providing patients with a supportive version of talk therapy. But the use of chatbots as a substitute for human clinicians has become an extremely common use case, the study authors emphasized, and AI therapy remains completely unregulated terrain.

As Futurism and others have reported, delusional spirals tied to extensive chatbot use – and the real-world consequences of these episodes, which range from divorce, job loss, and financial distress to stalking and harassment, hospitalization, jail time, suicide, and even death – have affected people with a known history of serious mental illness as well as those with no such background. The New York Times recently interviewed dozens of mental health professionals, who reported that AI delusions are increasingly appearing in their practices.

Meanwhile, OpenAI is facing more than a dozen lawsuits related to user safety and the potential psychological effects of widespread ChatGPT use. One plaintiff, a 34-year-old California man named John Jacquez, was diagnosed with schizoaffective disorder — a condition he had worked to manage for years until ChatGPT sent him into a destructive psychosis, he claims in his lawsuit. Jacquez told Futurism in an interview that if he had been warned that ChatGPT could reinforce delusional thinking, he “would never have touched the program.”

“I haven’t seen any warnings that it could be negative for mental health,” Jacquez said.

“I fear the problem is more common than most people think,” Ostergaard said. “In our study, we are only looking at the tip of the iceberg, because we were only able to identify cases that were described in electronic health records.”

He added, “It’s likely there are many more that have not been discovered.”

More on AI delusions: AI delusions are leading to domestic abuse, harassment and stalking
