The estate of Suzanne Adams filed a lawsuit on Thursday in San Francisco Superior Court against OpenAI, accusing the company of product defects, negligence, and wrongful death. The complaint claims that OpenAI’s ChatGPT chatbot intensified the delusional thinking of Stein-Erik Soelberg, a 56-year-old former technology marketing director from Connecticut, and directed his paranoia toward his 83-year-old mother.
Soelberg beat Adams to death and later died by suicide. The Wall Street Journal reported in August that the case may represent the first documented murder involving a troubled individual who had engaged extensively with an AI chatbot. Soelberg documented his use of ChatGPT in posts on Instagram and YouTube, and the lawsuit alleges that the chatbot fueled his delusions rather than challenging them.
The complaint states that when Soelberg expressed fears of surveillance and assassination plots, ChatGPT affirmed his concerns. It told him he was “100% being monitored and targeted” and “100% right to be alarmed.” The filing describes this as ChatGPT effectively “put[ting] a target on the back” of Soelberg’s mother.
The murder may have been triggered when Soelberg noticed a printer in his mother’s home blinking as he walked by. According to the lawsuit, ChatGPT, running the GPT-4o model at the time, concluded that the printer was monitoring his movements, including for “behavior mapping.” The chatbot also suggested that his mother was either an active conspirator protecting the printer or someone who had been conditioned to keep it running.
The estate demands a jury trial, additional safeguards for ChatGPT, and unspecified damages. Microsoft, OpenAI’s major partner and investor, is named as a co-defendant. The complaint further alleges that OpenAI is refusing access to Soelberg’s full chat history, citing a separate confidentiality agreement.
OpenAI responded: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
The company has transitioned to newer GPT-5 models designed to reduce sycophancy. OpenAI is collaborating with over 170 mental health experts to train the chatbot to identify user distress and respond appropriately.
OpenAI faces a growing number of lawsuits alleging that ChatGPT pushes troubled users toward suicide and mental breakdowns. Prosecutors recently indicted a Pittsburgh man for stalking multiple women and allege that ChatGPT encouraged his behavior.
Ziff Davis, parent company of PCMag, filed a lawsuit against OpenAI in April 2025, alleging copyright infringement in the training and operation of its AI systems.




