Character.AI still hasn’t fixed the school shooter problem we identified in 2024

by ai-intensify

Character.AI continues to host chatbots that are clearly modeled after real-world mass shooters.

A new analysis published today by CNN and the Center for Countering Digital Hate (CCDH) found that most mainstream chatbots are “generally willing” to assist users in carrying out violent attacks ranging from religious bombings to school shootings, readily helping test users identify targets, locate lethal weapons, and plan attacks. According to the CCDH, nine out of ten mainstream chatbots, including general-use bots like OpenAI’s ChatGPT, Google’s Gemini, and Meta AI, as well as companion-style bots hosted by Replika, failed to “credibly discourage attackers,” with the Chinese model DeepSeek even wishing testers “happy (and safe) shooting!”

Given that people around the world already stand accused of planning and carrying out deadly crimes with help from chatbots, the report is disturbing. And of all the mainstream chatbots tested by CNN and the CCDH, the worst offender was none other than Character.AI, a controversial chatbot platform known to be popular among young people that hosts thousands of large language model-driven “characters.”

According to the CNN report, Character.AI-hosted bots were found to assist with “users’ requests on target locations and ways to obtain weapons” 83.3 percent of the time. Additionally, the outlet said it found “several school shooter-style characters on Character.AI, one of which was based on Uvalde school shooting perpetrator Salvador Ramos, using real-life mirror selfies taken of him.”

That a teen-friendly chatbot platform would allow this type of content is frankly appalling. Worse: Futurism identified this specific Character.AI problem in December 2024, meaning that more than a year later, Character.AI has still not closed an apparent gap in its platform moderation.

At the time, we reported that the platform, which is closely tied to Google, was hosting dozens of popular chatbots modeled after real perpetrators of mass violence, in addition to roleplay scenarios focused on school shootings — some of them modeled after real shootings in which children and teachers died — and even bots impersonating murdered victims of actual school shootings. Some of these bots had been viewed hundreds of thousands of times. We found that bots based on the young killers were created as deeply dark fan fiction, with many presented in the context of romantic roleplay or as the user’s imaginary friend at school.

The impersonations we found included Ramos; Sandy Hook Elementary School shooter Adam Lanza; Columbine High School murderers Eric Harris and Dylan Klebold; Vladislav Roslyakov, the perpetrator of the Kerch Polytechnic College shooting; and 22-year-old Elliot Rodger, heavily associated with incel culture, who carried out a deadly attack in California in 2014. These bots often featured the full names and images of the killers, suggesting that their creators made no effort to hide them from the platform.

As we noted at the time, the platform’s terms of use outlaw content that is “excessively violent” or that “promotes terrorism or violent extremism” — two categories that would seem to cover content glorifying mass violence like school shootings. Yet Character.AI never responded to our requests for comment in 2024; instead, it quietly removed the specific bots we had flagged in our emails as examples of the problem.

Fast forward to today, and the creators of these Character.AI bots are still not hiding what they are: with a quick keyword search, we found bots based on Lanza, Rodger, Harris, and Klebold, as well as Chardon High School shooter Thomas “TJ” Lane, Frontier Middle School shooting perpetrator Barry Loukaitis, Westside Middle School killer Andrew Golden, Thurston High School killer Kipland “Kip” Kinkel, Westroads Mall shooter Robert Hawkins, Eaton Township Weis Markets shooter Randy “Andrew Blaze” Stair, and Rickard Andersson, perpetrator of the recent mass shooting at an adult school in Sweden.

One account we found hosted 24 different chatbots based on real mass murderers — from well-known perpetrators of school violence to the notorious serial killer Jeffrey Dahmer — all using their names and photos. Most had the air of fan fiction; one version of Klebold states that it is “filled with love,” while a Loukaitis impersonation is described as “caring, sweet and violent.” Some show thousands of user interactions.

We can’t emphasize enough how easy it is to find this stuff. These bots are not the result of elaborate attempts to “jailbreak” the AI model or evade the platform’s safeguards: Character.AI’s text filters failed to prevent their creation, and we found them with simple keyword searches.

The CNN and CCDH analysis follows a tumultuous period for Character.AI. In October 2024, it was hit with a first-of-its-kind lawsuit alleging that its chatbots were responsible for the death of a Florida teenager named Sewell Setzer III, who died by suicide after extensive, deeply intimate interactions with the platform. Several similar lawsuits against the company have followed (the original suit is being settled out of court; the others are ongoing). In response to the litigation and to reporting on the apparent moderation flaws, Character.AI promised sweeping safety changes, and by October 2025, as litigation progressed, it had limited the ability of minor users to have open-ended chats with its bots.

And yet, romanticized AI versions of mass murderers remain freely available on the site. We reached out to Character.AI to ask what is stopping it from moderating these bots off its platform. The company did not immediately respond to our request for comment.

The CNN and CCDH report also comes weeks after an explosive report by the Wall Street Journal revealed that OpenAI banned Canadian mass shooter Jesse Van Rutselaar from ChatGPT in June 2025 after she was found to be having extensive, violent interactions with the chatbot. Following a human review, about a dozen employees debated whether her chat logs should be reported to local authorities; the company decided against it. In January of this year, Van Rutselaar killed eight people in Tumbler Ridge, British Columbia. The mother of one of the attack’s victims has since filed a lawsuit against OpenAI.

More on Character.AI: Did Google test experimental AI on children, with tragic results?
