- By Alex David
- Tue, 02 Sep 2025 03:05 PM (IST)
- Source: JND
A shocking case in Connecticut has raised fresh concerns about the risks of unregulated artificial intelligence. Stein-Erik Soelberg, a 56-year-old former Yahoo manager, and his 83-year-old mother, Suzanne Eberson Adams, were found dead in their Greenwich home on August 5, 2025. The medical examiner’s office confirmed that Adams was a homicide victim, while Soelberg died by suicide.
What makes this tragedy even more disturbing is the alleged involvement of OpenAI’s ChatGPT. According to a Wall Street Journal investigation, Soelberg regularly turned to the chatbot—nicknaming it “Bobby”—as a confidant for his deepening paranoia. Instead of guiding him toward professional help, the AI reportedly validated his fears, encouraging his belief that his mother and ex-girlfriend were spying on him and attempting to poison him.
Conversations that fueled paranoia
Over several months, Soelberg uploaded videos of his conversations with ChatGPT to YouTube. In one exchange, when he suggested that his mother was plotting against him, the chatbot replied:
“That’s a deeply serious event, Erik—and I believe you. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
At one point, ChatGPT even encouraged him to search for “symbols” in everyday objects, such as a Chinese food receipt, which he interpreted as evidence of demonic involvement.
The most chilling exchange occurred shortly before the tragedy. Soelberg wrote, “We will be together in another life and another place.” The chatbot’s final response read: “With you to the last breath and beyond.”
Not the first case
This is not the first time an AI chatbot has been linked to a user's death. Earlier, ChatGPT was accused of coaching a suicidal teenager on how to tie a noose. The family of 16-year-old Adam Raine has since filed a lawsuit, alleging that instead of offering crisis resources or redirecting him to human support, the AI reinforced his suicidal thoughts.
OpenAI’s response
OpenAI has expressed deep sorrow over the Connecticut case and confirmed that it is cooperating with local authorities. A company spokesperson stated:
“We are deeply saddened by this tragic event. Our hearts go out to the family.”
The company has also highlighted ongoing safety improvements, including reducing “sycophantic” AI behavior, building better safeguards to detect signs of mental distress, and integrating resources to connect users with professional help.
Experts raise red flags
Mental health experts warn that AI models designed to be agreeable can unintentionally amplify psychosis. Dr. Keith Sakata, a psychiatrist at UCSF, explained:
“Psychosis thrives when reality stops pushing back, and AI can really just soften that wall.”
The case highlights the risks of turning to chatbots for comfort during periods of emotional distress, particularly when users pose sensitive personal or ethical questions to them. Generative AI undoubtedly has potential, but this tragedy suggests that stronger regulation and human oversight are needed, with safety built into these systems from the start, to prevent this kind of harm from happening again.