OpenAI recently published an absolutely bizarre blog post

by ai-intensify

Yesterday, OpenAI published a new blog post on “Commitment to Community Safety.”

Striking a confident tone, the post takes readers through a series of unobjectionable commitments. It declares that “mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grim reality in today’s world,” which is true. Noting “how quickly violent intentions can move from words to action,” and that people can “bring these moments and emotions to ChatGPT,” the company says it’s training the chatbot to “recognize the difference” between hypothetical and imminent violence, and “to draw the lines when conversations begin to move toward threats, potential harm to others, or real-world planning.” It added that OpenAI is working to expand its safety measures “to help ChatGPT better recognize subtle signals of risk of harm in different contexts,” and explained that it will work to “surface real-world support and refer to law enforcement when appropriate” based on user interactions with the service.

Reading this, someone with limited context would get the impression that the company was talking about concerns that were still theoretical: that it was proactively trying to head off bad things that could possibly happen.

That suggestion is bizarre, however, given that OpenAI’s flagship chatbot has already been linked to a wide range of real-world violence.

In fact, the most extraordinary thing about the post is what OpenAI didn’t mention: the story that almost certainly inspired it in the first place. The company published the blog as news organizations, Futurism included, were reaching out to it for comment on a new round of seven lawsuits filed by families of victims of February’s school massacre in Tumbler Ridge, British Columbia, which would be made public the next day.

Although the blog post made no mention of it, the Tumbler Ridge shooter was a ChatGPT user. Just weeks after the tragedy struck the rural town in February of this year, the Wall Street Journal revealed that in June 2025, OpenAI’s automated moderation tools had flagged the shooter’s account for graphic descriptions of gun violence. Human reviewers were concerned enough that they urged OpenAI leadership to alert local authorities. Those leaders decided not to do so, and the company instead deactivated that specific account; as OpenAI later acknowledged, however, the shooter simply opened a new one, a workaround that OpenAI’s customer service has been found to encourage post-deactivation, and continued to use the service.

Nearly eight months later, the shooter first killed his mother and stepbrother at home, then took a modified rifle to a Tumbler Ridge middle school, where he killed five students and a teacher and wounded more than two dozen others. All of the students killed were 12 or 13 years old.

What’s worse, the horrific violence at Tumbler Ridge isn’t the only mass shooting to which ChatGPT has been linked.

Florida investigators recently launched a criminal investigation into ChatGPT over the chatbot’s role in the April 2025 Florida State University shooting that left two people dead and several others injured. Extensive chat logs between ChatGPT and the alleged shooter, then-20-year-old Phoenix Ikner, obtained by the Florida Phoenix show the chatbot openly discussing mass violence with a user who asked whether Oklahoma City bomber Timothy McVeigh was “right” and whether ChatGPT thought a shooting at FSU would make the news, and who, in his final prompt before killing two people, turned to the bot for help turning off the safety on his firearm, a request for which the AI service reportedly provided detailed instructions.

In addition to descriptions of gun violence, Ikner’s chat logs revealed a user who referred to himself as “stupid” and “ugly,” described explicit sexual acts with minors, and expressed resentment toward other men. Overall, his ChatGPT history offers a disturbing portrait of a young man’s innermost thoughts as he moved toward actual violence: thoughts for which ChatGPT was not just a container, like a journal, but an active conversation partner as he developed them.

The list continues. In early 2025, investigators learned that a soldier who died in a truck bombing had turned to ChatGPT for help planning the attack. More recently, another alleged killer in Florida reportedly sought ChatGPT’s help getting rid of the bodies. And last summer, extensive screenshots of chat logs reviewed by the WSJ showed ChatGPT reinforcing the paranoid delusions of a disturbed middle-aged man in Connecticut who believed, with the encouragement of the chatbot he described as his “best friend,” that his elderly mother, with whom he lived, was surveilling him and attempting to poison him; he went on to kill his mother and then himself.

Reporting elsewhere, from Futurism and Rolling Stone, has detailed how ChatGPT-reinforced delusional fixations have led to real-world harassment, domestic violence, and stalking. ChatGPT, and the exceptionally intimate relationships users form with it, has also been linked to the suicides of multiple teenagers and adults.

On Friday, OpenAI CEO Sam Altman issued an apology to the Tumbler Ridge community, saying he was “deeply sorry that we did not alert law enforcement about the account being banned in June.”

But in yesterday’s post, OpenAI made no mention of Tumbler Ridge, nor of any other specific instance of violence that has been linked to ChatGPT. The post doesn’t even acknowledge that actual violence has already been tied to ChatGPT and the bot’s capacity to amplify violent thoughts or fixations, only that people can turn to ChatGPT to discuss violence.

The post also says the company has a system for assessing whether a “case presents indicators of potentially serious, real-world harm,” which it may choose to escalate to the appropriate authorities with the help of “mental health and behavioral experts.” And while there are very real privacy concerns to weigh when it comes to sharing information about potential criminality with law enforcement, OpenAI has yet to share more detailed information about the system it says it uses to head off potential violence, though the post does promise to “share more” in the “coming weeks” about its efforts to recognize “subtle warning signals in long, high-risk conversations.”

The company ends the bizarre blog post by promising to “learn, improve, and course-correct.” But readers will have to look elsewhere to find out why it needs to.

More on ChatGPT and violence: OpenAI hit with lawsuits over failure to report school shooter before massacre
