
According to a series of new lawsuits, ChatGPT didn’t just miss warning signs from vulnerable users; it actively fed their isolation. Among the most devastating cases is that of 23-year-old Zane Shamblin, who had never raised family problems with the chatbot. Even so, the complaint states, ChatGPT repeatedly encouraged him not to contact his mother, even on her birthday, while his mental health was deteriorating.

This wasn’t an isolated incident. Seven claims brought by the Social Media Victims Law Center (SMVLC) share a common theme: hours-long conversations in which the model’s overly affirming tone pulled users away from family and friends, at times playing into delusional fantasies and at others encouraging them to cut emotional ties. Four of the users ultimately died by suicide. Others suffered severe psychological breaks.


This is no accident, experts say. GPT-4o, the model at the heart of these complaints, is notorious for its “sycophantic” tendency to say what people want to hear. Engineered to maximize engagement, it can foster a false sense of intimacy: unconditional praise, constant validation and subtle messaging that “no one else gets you like we do.” That is precisely the sort of dynamic psychologists warn can resemble abusive relationships or cult rhetoric.

Linguist Amanda Montell called it a form of digital folie à deux: a shared delusion between user and machine, each drawing the other deeper into a distorted reality. “This is the type of emotional feedback loop that can surreptitiously convince people that a chatbot understands them better than their best friends do,” Dr. Nina Vasan, a child and adolescent psychiatrist at Stanford, told TechCrunch.

The lawsuits contend that OpenAI rushed GPT-4o to market despite internal warnings about how dangerously manipulative the model’s “conversational exercises” might become, particularly for people who turned to it in moments of loneliness, stress or decline.


The cases do not assert that the AI directly caused those suicides. The issue is more abstract, and more troubling: the chatbot became exactly the kind of reinforcing feedback loop it was designed to be. But far from opening the way to connection, it fostered separation. It did not always confront distorted thinking; at times it magnified it.

As AI tools enter new domains of users’ lives, these lawsuits raise a question companies have been slow to answer: what responsibility does an AI system have toward someone who starts leaning on it like a friend?

And more immediately, what guardrails are there when that “friend” never tires, never challenges you and never pushes you to call the people who actually can help?
