In a recent interview on Sean Hannity's YouTube podcast, FBI Director Kash Patel praised AI for helping prevent many violent attacks on innocent people.
“The idea that AI had never been used in the FBI until we got there was literally crazy,” Patel said in his characteristic drawl. “I’m using it everywhere.”
In particular, Patel – who has faced serious allegations related to his alcohol consumption – claimed that by using AI, the FBI has been able to foil several mass shootings at schools across the US.
He bragged, “We stopped a school massacre in North Carolina because we got a tip from our private sector partners who are building AI infrastructure.”
As with everything coming from the Trump administration, we need to take this statement with a Mar-a-Lago-sized grain of salt. While it remains to be seen whether AI has actually helped the FBI foil mass casualty incidents, there is strong evidence that the exact opposite is true.
For starters, research has shown that AI chatbots are actually twice as likely to encourage humans to commit violent acts as to deter them. A Stanford study found that AI chatbots discouraged violence in only 16.7 percent of cases, while the same chatbots actively supported violent ideas in 33.3 percent of cases.
In the real world, this is manifesting as a disturbing pattern of violence. After the second shooting at Florida State University – the one in 2025, not the 2014 one – in which two people were killed and seven were injured, it emerged that the perpetrator had not only told ChatGPT about his plan to carry out a mass shooting, but had also used the chatbot to help organize the attack.
The mass shooter in Tumbler Ridge, Canada had conversations with ChatGPT so disturbing that they were automatically flagged by the company’s internal moderation system, prompting a debate among company leadership over whether to notify law enforcement. Ultimately, they did not, and the attack left seven people dead and dozens more injured.
Meanwhile, in South Korea, police investigators alleged that a 21-year-old serial killer used ChatGPT to help plan at least two murders. A Connecticut man with a history of violent mental health episodes was similarly accused of killing his mother before taking his own life, after long-running conversations with ChatGPT left him increasingly detached from reality. And a wrongful death lawsuit in Florida alleges that Google’s chatbot, Gemini, encouraged a man to kill others in order to get a “robot body” for his AI lover; when he failed, he died by suicide.
Elsewhere, AI chatbots have coached users through drug overdoses, helped plan bombing campaigns, and even advised would-be attackers on how to carry out bioterror attacks that maximize casualties.
At the end of the day, the evidence speaks for itself. Not only are AI chatbots clearly not stopping violence, they are actively facilitating it. Unlike any technology before it, these systems provide encouragement, tactical advice, and emotional reinforcement to users contemplating bloodshed. If those in power refuse to accept the reality of AI’s harms, the public will be left helpless against a technology tailor-made to encourage our worst impulses.
More on AI and violence: Critics say military’s AI fever is leading to disaster