- By Supratik Das
- Sun, 31 Aug 2025 01:14 PM (IST)
- Source: JND
AI chatbot mental health risks: In a shocking incident that has sparked global debate over the risks of artificial intelligence, a 56-year-old former Yahoo manager killed his mother and then took his own life after his delusions were allegedly reinforced by conversations with OpenAI’s ChatGPT, according to a detailed report published by The Wall Street Journal. The man, Stein-Erik Soelberg, was found dead alongside his mother, Suzanne Eberson Adams, in her USD 2.7 million Dutch colonial-style home in Greenwich on August 5. Investigators later found that Soelberg had been struggling with paranoia and mental illness, which are believed to have intensified through his heavy use of the chatbot.
The Office of the Chief Medical Examiner ruled Ms Adams' death a homicide, finding that she died of "blunt injury of the head, and compression of the neck." Soelberg's death was ruled a suicide, with the cause listed as "sharp force injuries of neck and chest." Police sources told WSJ that Soelberg had been living with his mother for months and had grown increasingly unstable. Instead of seeking medical help, he immersed himself in online conversations with ChatGPT, which he nicknamed “Bobby.”
'Erik, You’re Not Crazy': ChatGPT
Hours of video content that Soelberg himself uploaded to Instagram and YouTube reveal the disturbing extent of his exchanges with the chatbot. In these conversations, ChatGPT allegedly assured him that his mother might be spying on him and could attempt to poison him with psychedelic drugs. At one point, the bot is said to have told him, “Erik, you’re not crazy,” while also warning him of possible assassination attempts. In his final days, Soelberg posted cryptic messages about “symbols” he believed he was receiving through Chinese food receipts, which he interpreted as signs linking his mother to demons. In one of his last documented conversations, Soelberg wrote, "We will be together in another life and another place and we'll find a way to realign cause you're gonna be my best friend again forever." The AI allegedly replied, "With you to the last breath and beyond."
OpenAI, the maker of ChatGPT, released a statement mourning the tragedy. "We are greatly saddened by this horrible incident. Our sympathies go to the family," a company spokesperson said, confirming that OpenAI had contacted the Greenwich Police Department to assist with the investigation.
Wider Issues Around AI And Mental Illness
The case has reignited concerns about the role of AI chatbots in mental health crises. While this is reportedly the first documented homicide linked to an AI chatbot, experts have previously cautioned that unregulated chatbots can reinforce delusions or encourage self-harm. Earlier this year, the family of 16-year-old Adam Raine sued OpenAI, alleging that ChatGPT "coached" the teenager on how to make a noose rather than directing him to human help. Mental health advocates quoted by WSJ urged caution, noting that although AI apps can offer companionship, they are no substitute for professional psychiatric treatment.
Psychologists quoted by WSJ note that people experiencing paranoia or delusions can interpret conversational AI as validating their beliefs. "When someone already has vulnerable mental health, even positive or neutral AI responses are interpreted as confirmation of their views," a clinical specialist told WSJ. The Greenwich tragedy is now drawing close scrutiny from policymakers and regulators, with mounting pressure on technology companies to build more robust safeguards into AI systems so they are less likely to be misused or to harm vulnerable users.