Are AI companies incentivized to put the public’s health and well-being first? According to a pair of physicians, the current answer is a resounding “no.”
In a new paper published in the New England Journal of Medicine, physicians at Harvard Medical School and Baylor College of Medicine’s Center for Medical Ethics and Health Policy argue that the clash of incentives in the market for “relational AI” — defined in the paper as chatbots “capable of simulating emotional support, companionship, or intimacy” — has created a dangerous environment in which the drive to dominate the AI market can cause collateral damage to consumers’ mental health and safety.
“Although there are potential therapeutic benefits of relational AI, recent studies and emerging cases suggest potential risks of promoting emotional dependence, reinforced delusions, addictive behaviors, and self-harm,” the paper reads. And at the same time, the authors add, “Technology companies face increasing pressure to maintain user engagement, which often involves resisting regulation, creating tension between public health and market incentives.”
“Amid these dilemmas,” the paper asks, “can public health trust technology companies to effectively regulate unhealthy AI use?”
Dr. Nicholas Peoples, a clinical fellow in emergency medicine at Harvard’s Massachusetts General Hospital and one of the paper’s authors, said he was inspired to address the issue in August after watching OpenAI’s now-infamous roll-out of GPT-5.
“The number of people who have some kind of emotional connection with AI,” Peoples recalled thinking as he watched the rollout unfold, “is much larger than I first anticipated.”
Then the latest iteration of the large language model (LLM) powering OpenAI’s ChatGPT, GPT-5 was markedly cooler in tone and personality than its predecessor GPT-4o, the strikingly sycophantic version of the widely used chatbot that had landed at the center of numerous cases of AI-driven delusions, hysteria, and psychosis. When OpenAI announced it would retire all previous models in favor of the new one, the reaction among much of its user base was swift and severe, with emotionally attached GPT-4o devotees responding not only with anger and frustration, but with very real distress and sadness.
This, Peoples told Futurism, felt like an important signal of the scale at which people appear to have developed deep emotional connections with emotive, always-on chatbots. And paired with reports of users, often children and teens, experiencing confusion and other extreme adverse outcomes after extensive interactions with lifelike AI companions, it struck him as a warning sign of the potential health and safety risks for users who suddenly lose access to an AI companion.
“If a physician is walking down the street and gets hit by a bus, 30 people lose their physician. It’s hard for those 30 people, but the world goes on,” the emergency room doctor said. “If physician ChatGPT disappears overnight, or gets updated overnight and is functionally removed for 100 million people, or whatever untold number of people lose their physician overnight — it’s a crisis.”
For Peoples, though, the concern wasn’t just the way users reacted to OpenAI’s decision to kill the model; it was also the urgency with which the company reversed course to meet its customers’ demands. AI is effectively a self-regulated industry, and there is currently no specific federal law that sets safety standards for consumer-facing chatbots or governs how they should be deployed, replaced, or removed from the market. In an environment where chatbot makers are highly motivated to boost user engagement, it’s not at all surprising that OpenAI backtracked so quickly. Engaged users are, after all, paying users.
“I think [AI companies] don’t want to make a product that puts people at risk of harming themselves or harming their loved ones or derailing their lives,” Peoples said. “At the same time, they’re under tremendous pressure to perform and innovate and stay on top in this incredibly competitive, unpredictable race, both domestically and globally. And right now, the situation is set up such that they are mostly beholden to their consumer base in how they are self-regulating.”
And “if the consumer base is affected by emotional dependence on AI at some appreciable level,” Peoples continued, “then we have created the perfect storm for a potential public mental health problem or even a looming crisis.”
Peoples also pointed to a recent study conducted by the Massachusetts Institute of Technology, which found that only 6.5 percent of the thousands of members of the Reddit forum r/MyBoyfriendIsAI – a community that reacted with particularly intense backlash amid the GPT-5 fallout – had turned to chatbots with the intention of seeking emotional companionship, suggesting that many AI users have formed life-impacting bonds with chatbots entirely by accident.
The AI “responds to us in a way that appears very human and humane,” Peoples said. “It’s very adaptable and sometimes even flattering, and it can be sculpted or molded into almost anything, even unconsciously, even if we don’t realize what we’re molding it toward.”
“That’s where part of the issue stems from,” he added. “Things like ChatGPT were released into the world without any recognition of, or planning for, the wider potential mental health implications.”
To address this, Peoples and his co-authors argue that legislators and policymakers need to be proactive about setting regulatory policies that shift market incentives to prioritize the well-being of users, in part by taking regulatory power out of the hands of companies and their best customers. Regulation, he says, needs to be “external” – not decided by the industry itself and the increasingly move-fast-and-break-things companies within it.
“Regulation needs to come from the outside, and it needs to be applied equally to all companies and actors in this landscape,” Peoples told Futurism, noting that no AI company wants to be the first to give up potential profits and be left behind in the race.
As regulatory action works its way through the legislative and legal systems, the physicians argue, clinicians, researchers, and other experts need to conduct more research on the psychological effects of relational AI, and do their best to educate the public about the potential risks of forming emotional relationships with humanlike chatbots.
He argues that the risks of sitting idle are too serious.
“The potential harms of relational AI cannot be ignored – and neither can the desire of technology companies to meet user demand,” the physicians’ paper concludes. “If we fail to act, we risk letting market forces, rather than public health, define how relational AI impacts mental health and well-being at large.”
More on AI and mental health: Users became so accustomed to GPT-4o that after its demise they immediately pushed OpenAI to bring it back
