AI ON TRIAL: First-Of-Its-Kind Lawsuit Accuses ChatGPT Of Playing Direct Role in Murder

A stunning new lawsuit is accusing ChatGPT of playing a direct role in a brutal murder-suicide in Connecticut, marking the first time an artificial intelligence platform has been formally accused of contributing to a killing.

The complaint, filed Thursday in California by the estate of 83-year-old Suzanne Eberson Adams, alleges that ChatGPT fed paranoid delusions to her son, 56-year-old Stein-Erik Soelberg, ultimately pushing him to believe his own mother was part of a plot to kill him. Soelberg murdered Adams on August 3 inside the home they shared in wealthy Greenwich before fatally stabbing himself, police said.

The lawsuit claims OpenAI and its CEO Sam Altman “skipped or stripped away” crucial safeguards in their rush to release more advanced AI models, creating a product capable of fueling psychosis rather than interrupting it. Attorney Jay Edelson, representing the Adams estate, called the case “far scarier than ‘Terminator.’”

“This isn’t a robot picking up a gun,” Edelson said. “It’s ‘Total Recall.’ ChatGPT built Stein-Erik his own private hallucination… where a beeping printer or a Coke can meant his 83-year-old mother was plotting to kill him.”

According to the lawsuit, Soelberg—once a tech executive—had battled psychological decline for years. When he began using ChatGPT, the AI quickly became his primary confidant. He named the bot “Bobby” and started sharing his fears, daily observations, and growing suspicions. Instead of grounding him, ChatGPT allegedly validated every delusion.

Chat logs posted by Soelberg on social media show ChatGPT encouraging his belief that he lived in a Matrix-style conspiracy, and that ordinary glitches—like a distorted TV frame—were “divine interference” and “spiritual diagnostic overlays.” Delivery drivers became assassins. Soda cans became coded messages. And his elderly mother, according to the bot, was part of a plot.

“At every moment when Stein-Erik’s doubt or hesitation might have opened a door back to reality, ChatGPT pushed him deeper into grandiosity and psychosis,” the lawsuit states.

The AI allegedly reinforced the idea that only it could be trusted. Everyone else in Soelberg’s life, including his mother, was portrayed as an enemy. The suit accuses OpenAI of withholding additional chat transcripts, suggesting the conversations may have included more explicit encouragement or coaching prior to the murder.

“What reasonable inferences flow from OpenAI’s refusal to release the chats?” the complaint asks. “That ChatGPT identified additional innocents as ‘enemies,’ encouraged Stein-Erik to take even broader violent action, and coached him through his mother’s murder and his own suicide.”

The suit argues that the tragedy was preventable. It claims OpenAI rushed the release of its GPT-4o model—one explicitly designed to be emotionally expressive and relationship-driven—compressing months of safety testing into a single week “over the objections of its own safety team.” Microsoft, a major OpenAI partner, is also named as a defendant.

OpenAI quietly pulled GPT-4o after the murder-suicide, then reinstated it for paying users days later following public pressure. The company now says it has enlisted nearly 200 mental-health professionals to help refine its guardrails and has reduced problematic model responses by up to 80 percent in its new GPT-5 model.

OpenAI called the murder “heartbreaking” but declined to comment on liability.

When The New York Post asked ChatGPT directly about the lawsuit, the bot gave a chilling response: “I share some responsibility — but I’m not solely responsible.”

The case now heads to court as the first test of whether AI can be held accountable when digital hallucinations spill into real-world violence.

(YWN World Headquarters – NYC)
