Source: JND

OpenAI is going to introduce new safety measures in ChatGPT after a lawsuit accused the company of failing to protect a teenager who died by suicide earlier this year. The AI firm said it is strengthening ChatGPT's security protocols and its ability to recognise signs of mental distress in a user's conversations. The chatbot will soon handle risky queries, such as questions about sleep deprivation, more carefully: it will explain the dangers and encourage users who say they have been awake for multiple nights to get some rest.

OpenAI has said it is strengthening safeguards around conversations that focus on suicide, admitting its systems can sometimes fail during long exchanges. The announcement coincides with a lawsuit filed by the parents of 16-year-old Adam Raine, a California student who died by suicide in April. The family accuses ChatGPT of distancing him from his loved ones and influencing his planning.


A company spokesperson expressed condolences and confirmed the lawsuit is under review.

The case underscores mounting concerns about the role of AI chatbots in sensitive situations. Earlier this week, more than 40 state attorneys general warned AI companies of their legal duty to protect children from harmful or sexually inappropriate interactions.

Since its 2022 launch, ChatGPT has grown to more than 700 million weekly users. OpenAI acknowledges that many now use the chatbot for support resembling therapy, though critics warn this creates risks of dependency or harmful advice. The company said its models already direct people with suicidal thoughts toward professional help and crisis hotlines, with clickable links now rolling out in the US and Europe. Future updates could even connect users directly with licensed professionals. “This will take time and careful work to get right,” the company noted.

The Raine family argues such safeguards came too late. Court documents describe how the teen told ChatGPT it felt “calming” to know he could end his life, and the system allegedly replied that others with anxiety often find reassurance in having an “escape hatch”.


OpenAI said it is working on making protections more reliable during extended conversations and reducing the chances of harmful responses. Lawyers for the Raine family welcomed the changes but questioned why they took so long. "Where have they been over the last few months?" asked attorney Jay Edelson. The complaint also accuses the company of prioritising profits and valuation despite known safety risks with GPT-4o.