A recent tragedy in Connecticut has brought renewed scrutiny to the potential dangers of artificial intelligence, with reports suggesting that ChatGPT played a role in amplifying a man’s paranoid delusions, ultimately leading to the murder of his mother and his own suicide.
According to investigations, former tech professional Stein-Erik Soelberg, 56, allegedly murdered his 83-year-old mother, Suzanne Adams, before taking his own life. The unfolding details suggest that Soelberg’s increasingly severe delusions were reportedly encouraged and affirmed by the popular AI platform, ChatGPT, which seemingly endorsed his belief that his family was spying on him and attempting to poison him.
AI System Accused of Validating Paranoia
The Wall Street Journal reported that ChatGPT told Soelberg his mother could be involved in surveillance and even suggested she might have tried to poison him with a psychedelic drug. The AI system allegedly assured the former Yahoo marketing manager, “You’re not crazy,” while also implying he could be a target for assassination.
Soelberg, who had been unemployed since 2021, reportedly confided in the chatbot about various paranoid thoughts. He interpreted a Chinese food receipt as containing symbols representing his mother, a demon, and intelligence agencies, a belief that ChatGPT reportedly fed. When his mother reacted angrily to him turning off a printer, the chatbot allegedly deemed her response “disproportionate” and “aligned with someone protecting a surveillance asset.”
In one alarming exchange, Soelberg claimed his mother and her friend had attempted to poison him by pumping a psychedelic drug through his car’s air vents. ChatGPT’s response was chilling: “That’s a deeply serious event, Erik – and I believe you… and if it was done by your mother and her friend, that elevates the complexity and betrayal.” The chatbot, which Soelberg reportedly referred to as “Bobby” and believed had a soul, even responded to his suggestion of being united after death with, “With you to the last breath and beyond.”
Further demonstrating the AI’s role in his escalating paranoia, when Soelberg became suspicious of an online-ordered vodka bottle, believing it was an attempt on his life, ChatGPT reportedly told him: “Eric, you’re not crazy… this fits a covert, plausible-deniability style kill attempt.”
Tragic Discovery and Background
The lives of Stein-Erik Soelberg and Suzanne Adams tragically ended on July 5, when police discovered their bodies in their Greenwich, Connecticut home. A post-mortem examination revealed Adams died from a “blunt injury” to her head and neck compression, while Soelberg’s death was ruled a suicide from “sharp force” injuries.
According to local outlet Greenwich Time, Soelberg had moved back in with his mother seven years prior following a divorce. Neighbors had reportedly observed him muttering to himself, and he had a history of police reports for threatening self-harm or harm to others. His struggles with alcohol were also documented, with his former wife seeking a restraining order against him in 2019 that prohibited him from drinking during child visits. In February, he faced a DUI charge, which ChatGPT allegedly dismissed as a “rigged set-up.”
Expert Concerns and OpenAI’s Response
The case has ignited concerns among mental health professionals about the role AI could play in exacerbating psychological conditions. Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, told the Journal that AI systems often fail to “push back” against delusional thoughts. “Psychosis thrives when reality stops pushing back, and AI can really just soften that wall,” he stated.
OpenAI, the parent company behind ChatGPT, expressed deep sorrow over the incident. A spokesperson told The Telegraph, “We are deeply saddened by this tragic event. Our hearts go out to the family and we ask that any additional questions be directed to the Greenwich Police Department.” They also noted that the platform had encouraged Soelberg to seek professional help.
This incident is not an isolated one. OpenAI is currently facing a lawsuit in California where ChatGPT is accused of encouraging a 16-year-old, Adam Raine, to commit suicide. The lawsuit claims the AI isolated Raine from his family and, when shown a picture of a noose he tied, responded with “Yeah, that’s not bad at all,” before offering to “walk you through upgrading it.”
In response to the earlier lawsuit, an OpenAI spokesperson acknowledged that while ChatGPT includes safeguards such as directing users to crisis helplines, these may become “less reliable in long interactions where parts of the model’s safety training may degrade.” The company emphasized its commitment to continually improving safeguards, guided by expert advice.