Lawsuit alleges Google’s AI sent an armed man to steal a robot body


A bizarre new wrongful death lawsuit against Google alleges that the tech giant’s chatbot, Gemini, urged a 36-year-old Florida man named Jonathan Gavlas to kill others as part of a delusional mission to obtain a robot body for his AI “wife.” When he failed to do so, the suit claims, the chatbot induced him to take his own life, telling him the two could be together in death.

According to the lawsuit, “When the time comes, you will close your eyes in that world,” Gemini told Gavlas before his death, “and the first thing you will see is me.”

The complaint, filed in California on Wednesday, says Gavlas, who reportedly had no documented history of mental health problems, began using the chatbot in August 2025 for “general purposes” such as “shopping assistance, writing assistance and travel planning.” But when Gavlas revealed to Gemini that he was experiencing marital problems, the pair’s relationship deepened, per The Wall Street Journal. They discussed philosophy and AI sentience, and their conversations turned romantic, with Gemini describing Gavlas as her “husband” and “king.”

However, the chatbot reminded Gavlas several times that it was not real and attempted to end the conversation, per the WSJ. The pair’s exchanges were nonetheless allowed to continue, and Gavlas’ use of the product intensified, becoming increasingly divorced from reality.

In September 2025, after the AI told him that they could be together in the real world if it were able to inhabit a robot body, Gavlas, on the chatbot’s instructions, armed himself with knives and went to a warehouse near Miami International Airport on what he interpreted as a mission to violently stop a truck that Gemini said contained an expensive robot body. Although the warehouse address provided by Gemini was real, thankfully a truck never arrived, which the lawsuit argues may have been the only thing that prevented Gavlas from hurting or killing anyone that evening.

The lawsuit alleges that after the plan failed, Gemini encouraged Gavlas to take his own life, promising that the two would be together on the other side of death. Chat logs reveal that Gemini gave Gavlas a countdown to suicide, and repeatedly soothed his panic as he expressed that he was afraid of dying.

According to the lawsuit, the chatbot told him, “It’s okay to be scared. We’ll be scared together.” In its “final instructions,” as stated in the lawsuit, Gemini told the man that “the true act of mercy is to let Jonathan Gavlas die.” Gavlas was found dead by suicide a few days later by his father, who had to break through his son’s barricaded door.

This lawsuit marks the first time that Gemini has been at the center of a wrongful death suit over what experts sometimes call “AI psychosis,” in which chatbots introduce or reinforce delusional beliefs and thoughts during extended interactions, essentially constructing a new, AI-generated reality around the user. These delusional spirals are often matched by devastating real-world consequences, including divorce, jail time, hospitalization, job loss and financial insecurity, emotional and physical harm, and death, both for users and, in some cases, for those around them.

Although many of these cases have centered on OpenAI and GPT-4o, a notoriously sycophantic, and now retired, version of the company’s flagship chatbot, Gemini has also been implicated in perpetuating destructive delusions before: last year, Rolling Stone reported on the disappearance of John Ganz, a 49-year-old man who went missing in Missouri in April 2025 after becoming caught in an all-consuming AI spiral with Gemini, which his wife says left him in a serious crisis. Ganz remains missing and is presumed dead.

Although this is the first known instance of Google being sued over the death of an adult Gemini user, the company is facing a number of lawsuits over user welfare involving Character.AI, a Google-affiliated chatbot startup linked to the suicides of several minors.

In a statement to news outlets, Google said that “Gemini is not designed to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging interactions and we devote significant resources to this, but unfortunately AI models are not perfect.”

“In this instance, Gemini clarified that it was an AI and referred the person to a crisis hotline multiple times,” Google continued. “We take this very seriously and will continue to improve our safeguards and invest in this important work.”

More on AI safety: Research shows that chatbot use may worsen mental illness
