OpenAI announced Tuesday that it will add guardrails for teenage users and people in emotional distress, after the family of a teenager who died by suicide sued the company.
The parents of Adam Raine, a 16-year-old from California, allege in their lawsuit that “ChatGPT actively helped Adam explore suicide methods” and allowed him to bypass safeguards by telling the chatbot that he was writing a story or “practicing.”
“Our work to make ChatGPT as helpful as possible is constant and ongoing. We’ve seen people turn to it in the most difficult of moments. That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input,” OpenAI wrote in a blog post on Tuesday.
The new guardrails include parental controls that let parents link to their teenager’s account, enable or disable certain features, and “receive notifications when the system detects their teen is in a moment of acute distress.”
The company also noted that earlier this year it established an “Expert Council on Well-Being and AI” that “will work in tandem with our Global Physician Network — a broader pool of more than 250 physicians who have practiced in 60 countries.”
It added, “Their input directly informs our safety research, model training, and other interventions, helping us to quickly engage the right specialists when needed.”
© 2025 Newsmax. All rights reserved.