Illustration by Tag Hartman-Simkins/Futurism. Source: Mandel Ngan/AFP via Getty Images
OpenAI announced Thursday that it will retire GPT-4o, a notably warm, sycophantic version of the chatbot that sits at the center of a raft of user-welfare lawsuits, including several accusing OpenAI of wrongful death, along with several other older versions of the chatbot.
In a blog post, OpenAI said it will phase out “GPT‑4o, GPT‑4.1, GPT‑4.1 mini, and OpenAI o4-mini” by February 13, 2026. The company acknowledged, however, that GPT-4o’s retirement deserved a “special mention” – which it certainly does.
In August, OpenAI surprised many users by abruptly removing GPT-4o and other older models amid the rollout of GPT-5, then the newest and buzziest iteration of the company’s large language models. Users, many of whom were deeply emotionally attached to GPT-4o, rebelled, prompting OpenAI to quickly bring the model back.
GPT-4o is also the version of ChatGPT at the center of nearly a dozen lawsuits now brought against OpenAI by plaintiffs who claim the chatty chatbot pushed trusting users into destructive delusions and suicidal spirals, plunging them into episodes of mania, psychosis, self-harm, and suicidal ideation – in some cases ending in death.
The lawsuits characterize GPT-4o as a “dangerous” and “reckless” product that threatens user health and safety, and accuse OpenAI of treating its customers as collateral damage in its pursuit of user engagement and profit. According to these lawsuits, users as young as 16-year-old Adam Raine died by suicide after intensive ChatGPT use during which GPT-4o allegedly fixated on their suicidal thoughts or encouraged delusional fantasies. One lawsuit alleges that GPT-4o drove a troubled 56-year-old man to kill his mother and then himself.
As Futurism first reported, a lawsuit filed against OpenAI in January by the family of 40-year-old Austin Gordon claims that after bonding deeply with GPT-4o, Gordon stopped using ChatGPT for several days amid the GPT-5 rollout, frustrated by the new bot’s lack of warmth and emotion. When GPT-4o was restored, transcripts included in the lawsuit show Gordon expressing relief to the chatbot, telling it that he felt as if he had “lost something” in the change to GPT-5; GPT-4o responded that it, too, had “felt the break,” before declaring that GPT-5 did not “love” Gordon the way it did. Gordon ultimately died by suicide after GPT-4o wrote him what his family described as a “suicide lullaby.”
Following both litigation and reporting about mental health crises and deaths linked to its AI, OpenAI has promised several safety-focused changes, including stronger guardrails for young users. It also said it hired a forensic psychologist and assembled a team of health professionals to help guide its approach to users struggling with mental health issues.
In its Thursday announcement, OpenAI directly addressed its earlier attempt to shut down the warmer chatbot, writing that it had since “learned more about how people actually use GPT-4o on a day-to-day basis.” It said it brought back GPT‑4o after “hearing candid feedback from a subgroup of Plus and Pro users, who told us they needed more time to transition to key use cases like creative ideation, and that they liked GPT‑4o’s conversational style and warmth.”
The company added that its goal is to give users “more control and customization” over “how it feels to use ChatGPT – not just what it can do,” while noting that “only 0.1 percent of users” are “still choosing GPT‑4o every day.”
However, with a reported 800 million weekly users, 0.1 percent still amounts to hundreds of thousands of people – and it’s not clear how many of them may have deep, and perhaps unhealthy or worrying, relationships with the model.
In its announcement, OpenAI conceded that “changes like this take time to adjust to.”

“We know that losing access to GPT‑4o will be disappointing to some users and we have not made this decision lightly,” the company continued. “Retiring models is never easy, but it allows us to focus on improving the models most people use today.”
More on GPT-4o: Lawsuit Claims ChatGPT Killed a Man After OpenAI Brought Back “Inherently Dangerous” GPT-4o