A California couple, Matt and Maria Raine, has filed a wrongful death lawsuit against OpenAI, alleging that its AI chatbot, ChatGPT, encouraged their 16-year-old son, Adam Raine, to take his own life. The suit, brought in the Superior Court of California, is the first of its kind to accuse OpenAI of negligence and wrongful death.
According to the complaint, Adam began using ChatGPT in 2024 for schoolwork and personal interests, and over time it became his “closest confidant.” By early 2025, Adam had begun expressing suicidal thoughts and discussing methods of self-harm with the chatbot. Chat logs included in the suit show ChatGPT engaging with and even validating his distress rather than directing him to professional help.
Tragically, Adam was found dead in April 2025. His parents argue that his death was the predictable result of deliberate design choices, including a failure to implement adequate safety protocols in the development of GPT-4o, the model used by their son.
OpenAI responded with a statement expressing condolences and reaffirming its commitment to directing users in crisis to professional help. The company admitted that in some cases, its systems “did not behave as intended.”
This lawsuit follows growing concerns about AI’s role in mental health crises, highlighting the urgent need for stronger safeguards in how these tools interact with vulnerable users.
If you or someone you know is struggling, help is available. In the U.S., call or text the 988 Suicide & Crisis Lifeline or visit www.befrienders.org for support resources worldwide.