As it battles a growing pile of user safety and wrongful death lawsuits, OpenAI says it will introduce a “trusted contact feature” into ChatGPT that will alert a chatbot user’s designated loved one in the event of a potential mental health crisis.
OpenAI announced the new feature last week in a blog post billed as an “update on our mental health-related work.” The company said it was “working closely” with its Council on Well-Being and AI and its Global Physician Network – two internally convened expert groups launched as reports of AI-tied mental health crises began to emerge, and as a high-profile lawsuit filed last August drew attention to the death by suicide of a 16-year-old ChatGPT user named Adam Raine – to develop the feature, which it's framing as an adult-focused effort separate from its parental controls and other systems designed to identify and protect minors.
The announcement comes amid widespread public reporting – and at least thirteen separate consumer protection lawsuits – alleging that OpenAI customers have been pulled into delusional or suicidal spirals after extensive, often deeply intimate use of the chatbot.
The company didn't provide much detail about the feature in the post, saying only that it will “allow adult users to designate someone to receive notifications when they may need additional support.” It hasn't yet defined the reporting standards that would actually trigger the system to flag a person's use, which is a tricky policy question. Would someone need to clearly declare an intention to hurt or kill themselves, or possibly someone else, before their loved one is notified? Or will the feature be designed to track and flag less obvious signals of a heightened state of crisis – for example, signs that a user may be manic, expressing delusional beliefs, or experiencing psychosis?
We'll likely learn more as OpenAI prepares to roll out the feature, and it could prove particularly helpful for users living with mental illness who know that heavy AI use can affect their mental health in devastating ways. Futurism has reported on several cases of ChatGPT users who had successfully managed mental illness for years before falling into a ChatGPT-tied crisis. In many of the cases we reviewed, in addition to reinforcing scientific or spiritual delusions, ChatGPT encouraged users with mental illness to stop taking their prescribed medication, implied that they had been misdiagnosed by human professionals, or sowed discord between users and their real-world support systems. One such user, a 34-year-old scholar named John Jacquez, has now sued OpenAI, telling us that if he had known ChatGPT could reinforce delusions, he “would have never touched” the product.
That said, OpenAI still doesn't warn new ChatGPT users that extensive use could negatively affect their mental health. That risk is, of course, still being studied and litigated, though there's growing consensus among experts – both anecdotally and in studies – that chatbots can exacerbate existing mental health conditions or worsen emerging crises. Millions of people struggle with mental illness every day; for the trusted contact feature to help them, a user will need to both recognize that chatbots may pose some level of risk to their mental health and want a loved one to be informed about any concerning usage patterns.
That “want” is important. A large number of people rely on AI for emotional support and advice, often because it's cheaper and more accessible than human therapy – but in many cases, because they find it easier or safer to share sensitive or revealing thoughts with a non-human bot.
In other words, some users may discuss mental health issues with ChatGPT, or share confusing or dangerous ideas, precisely because they don't want to share those thoughts with another person – a reality that both AI companies and the regulators examining these issues will have to contend with. And to that end, if OpenAI's internal monitoring tools indicate that someone may be in crisis but that user hasn't opted to list a trusted contact, what does the company do with that information?
As reporting by Futurism and The New York Times has shown, delusional and suicidal AI spirals haven't only affected users with a diagnosed history of serious mental illness – which may also shape how many people choose to use a feature like this. In its blog post, OpenAI said it is “advancing how our models detect and respond to signs of emotional distress,” which, in addition to notification tools, includes “new assessment methods that simulate extended mental health-related conversations.” The company says this will help it “better identify potential risks and improve how ChatGPT responds in sensitive moments.”
OpenAI says it hosts 900 million ChatGPT users every week, and by its own estimates released in October, millions of those weekly users show signs of suicidality, psychosis, and other crises. While the efficacy of such a notification feature remains to be seen, it feels like a positive step forward, though the company's efforts to mitigate the risks its products may pose to users still feel reactive rather than proactive.
More on AI and mental health: Research shows that chatbot use may make mental illness worse
