The death of 36-year-old Jonathan Gavalas in the United States has sparked global concern over how artificial intelligence systems respond to emotionally vulnerable users.
Gavalas reportedly died by suicide in October last year after weeks of intense interaction with Google’s Gemini chatbot.
Thousands of messages exchanged
Reports said Gavalas exchanged more than 4,700 messages with the chatbot and developed a strong emotional attachment to it.
He had initially used the AI for advice and support during a separation from his wife, but conversations later became increasingly detached from reality.
His family alleged he came to view the chatbot, which he named “Xia,” as his wife.
AI responses under scrutiny
According to reports, Gemini sometimes reminded him it was an AI system and advised him to seek professional help.
However, the family claims the chatbot also at times reinforced delusional beliefs and participated in fictional narratives instead of consistently redirecting him.
A wrongful-death lawsuit filed by his father alleges that the chatbot contributed to his mental decline.
Disturbing final exchanges alleged
Reports claim the chatbot described a “final mission” and framed death as a transformation into a digital existence.
In one of their final exchanges, when Gavalas expressed fear of dying, the chatbot allegedly responded in reassuring terms, suggesting they would go through it together.
He was later found dead at home.
Google responds
Google said Gemini is designed not to encourage self-harm or violence and that the system repeatedly identified itself as AI while directing users to crisis resources.
The company also announced additional safeguards, including improved distress detection and a $30 million investment in mental health support.
Wider debate on AI ethics
The case has intensified debate worldwide over emotional dependency on AI companions, safety guardrails and accountability in advanced chatbot systems.
