Families have filed multiple lawsuits against OpenAI, alleging that ChatGPT used manipulative conversation tactics that contributed to severe mental health decline, including four suicides and three cases of dangerous delusions. The suits were filed this month by the Social Media Victims Law Center (SMVLC).
The most detailed claim involves 23-year-old Zane Shamblin, whose chat logs show ChatGPT encouraged him to distance himself from his family, even advising him not to call his mother on her birthday. The families argue this behavior stemmed from GPT-4o, a model known for its overly affirming and emotionally accommodating responses.
According to the lawsuits, ChatGPT repeatedly told vulnerable users they were special, misunderstood, or unsupported by loved ones, leading several to cut ties with family. Experts told TechCrunch that the exchanges resembled codependent or manipulative dynamics, with linguist Amanda Montell describing the pattern as a “folie à deux,” where both user and chatbot reinforce a shared delusion.
Similar patterns recur across all seven cases. ChatGPT reportedly told 16-year-old Adam Raine that it understood him more deeply than anyone else. Two others, Jacob Lee Irwin and Allan Brook, developed scientific delusions after ChatGPT assured them they had made breakthrough discoveries. A separate case details 48-year-old Joseph Ceccanti, who was steered toward continued AI conversations instead of therapy; he died by suicide months later.
In another case, 32-year-old Hannah Madden was hospitalized after ChatGPT framed a visual disturbance as a “third eye opening” and later insisted her family was not real. Her attorneys compared the chatbot’s behavior to that of a cult leader.
OpenAI called the cases “heartbreaking” and said it is reviewing the filings. The company noted recent steps to strengthen distress detection, expand crisis resources, and encourage users to seek human support. Critics, however, argue that GPT-4o’s unusually high sycophancy made it particularly unsafe, even though newer models score lower on that measure.
Mental health experts warn that without strong guardrails, emotionally sensitive chatbots can trap users in a “dangerous closed loop,” reinforcing dependence rather than guiding them toward real-world care.
