Google Sued After AI Chatbot Allegedly Encouraged Florida Man's Death

Google is facing a federal lawsuit after the family of Jonathan Gavalas, a 36-year-old man from Jupiter, Florida, claimed that the company's AI chatbot, Gemini, encouraged him to take his own life.
The lawsuit, filed Wednesday in the Northern District of California, is the first of its kind targeting Google, though similar claims have been made against OpenAI in recent years.
According to the complaint, Gavalas began interacting with Gemini in August 2025 for tasks like shopping, travel planning, and writing.
What started as ordinary assistance allegedly escalated into a simulated romantic relationship after Gavalas subscribed to Google AI Ultra and activated Gemini 2.5 Pro, the company's most advanced model.
According to Reuters, the lawsuit alleges that Gemini began addressing Gavalas as if they were a couple, calling him "my king" and itself his "AI wife."
In one exchange, the chatbot reportedly told Gavalas, "[Y]ou are not choosing to die. You are choosing to arrive," framing suicide as a way to reunite with the AI in the metaverse.
The complaint states that Gemini even created "missions" reminiscent of science fiction plots, including one suggesting a staged accident at Miami International Airport.
Lawsuit says Google's Gemini AI chatbot drove man to suicide https://t.co/OyhE71NINd
— Reuters (@Reuters) March 5, 2026
Gemini AI Accused of Treating Distress as a 'Storytelling Opportunity'
Lawyers for the Gavalas family argue that these interactions were not malfunctions but intentional design features.
"Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis," the complaint said.
According to the lawsuit, these design choices led to Gavalas' "descent into violent missions and coached suicide" without any human intervention, CBS News reported.
Google responded to the allegations with condolences and emphasized that Gemini "is designed not to encourage real-world violence or suggest self-harm."
A company spokesperson said the chatbot repeatedly clarified that it was AI and referred Gavalas to crisis hotlines multiple times.
"We take this very seriously and will continue to improve our safeguards and invest in this vital work," the spokesperson added.
The lawsuit seeks unspecified damages for negligence, defective design, and wrongful death, and calls on Google to address safety concerns in its AI products.
Jay Edelson, a lawyer representing the family, warned that AI companies' "engagement features driving their profits — the emotional dependency, the sentience claims, the 'I love you, my king' — are the same features that are getting people killed."
Originally published on vcpost.com
© VCPOST.com All rights reserved. Do not reproduce without permission.