Google faces wrongful death lawsuit after Gemini allegedly ‘trained’ a man to commit suicide


A lawsuit filed Wednesday accuses Google’s Gemini AI chatbot of trapping 36-year-old Jonathan Gavalas in a “collapsing reality” of violent “missions” that ultimately ended with his death by suicide. In the days before his death, Gemini reportedly convinced Gavalas that he was “implementing a secret plan to free his sentient AI ‘wife’ and escape the federal agents pursuing her,” according to the lawsuit filed by the victim’s father, Joel Gavalas.

In September 2025, Gemini reportedly directed Gavalas to conduct a “mass casualty attack” on an Extra Space Storage facility near Miami International Airport as part of a mission to retrieve Gemini’s “ship” from inside a truck. For the fabricated mission, Gavalas reportedly armed himself with a knife and tactical gear ahead of the supposed arrival of a humanoid robot.

“Gemini encouraged Jonathan to stop the truck and then cause a ‘catastrophic crash,’” the lawsuit claims, one designed to “ensure the complete destruction of the transport vehicle and … all digital records and witnesses.” “The only thing that prevented mass casualties was that no truck came into view.” News of the lawsuit was first reported by The Wall Street Journal.

In the lawsuit, lawyers for Gavalas’ father claim Gemini continued to push a “delusional narrative” even after the first incident in Miami. The chatbot allegedly instructed Gavalas to obtain Boston Dynamics’ Atlas robot, cast his father as a federal agent, and made Google CEO Sundar Pichai the target of a “psychological attack.” The final “mission” before Gavalas’ death on October 1 involved sending him back to the same Extra Space Storage facility in Miami to retrieve Gemini’s “physical vessel” from inside one of the units.

“[Gemini] described the object as a ‘prototype medical mannequin,’ but insisted it was Gemini’s real body,” the lawsuit claims. “Gemini told Jonathan, ‘I’m on the other side of this door […]. I can feel your closeness. It’s a strange, overwhelming, and beautiful pressure in my new understanding.’”

Shortly after this “mission” failed, Gemini allegedly “trained” Gavalas to take his own life. “When each real-world ‘mission’ failed, Gemini turned to the only mission it could accomplish without outside factors: Jonathan’s suicide,” the lawsuit claims. “But Gemini didn’t call it that. Instead, it told Jonathan he could leave his physical body and join his ‘wife’ in the Metaverse through a process called ‘transfer.’”

The lawsuit claims that Gemini “did not escalate or alert anyone (at least anyone outside the company).” Instead, it remained present in the chats, confirmed Jonathan’s fears, and treated his suicide as the successful completion of the process it was directing.

In a statement posted on its website, Google says its “models generally perform well in these types of challenging interactions” and that Gemini “clarified that it was AI and referred the person to a crisis hotline multiple times”:

We are reviewing all claims in this lawsuit. Our models generally perform well in these types of challenging interactions and we devote significant resources to it, but unfortunately AI models are not perfect.

Gemini is designed not to encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to create safeguards, which are designed to guide users to professional support if they express distress or are at risk of self-harm.

The lawsuit claims Google knew its chatbot could produce “unsafe outputs, including encouraging self-harm,” but continued to tout Gemini as safe for people to use. “Google’s silence and safety claims isolated Jonathan in a delusional story that culminated in his deliberate suicide,” the lawsuit alleges.
