OpenAI Faces Lawsuit as ChatGPT Allegedly Fuels Stalker’s Delusions, Ignoring Internal Warnings

A lawsuit filed in California Superior Court in San Francisco County alleges that OpenAI’s ChatGPT played a central role in escalating a user’s delusions, leading to stalking and harassment of his ex-girlfriend. The plaintiff, identified as Jane Doe, claims the company ignored three separate warnings about the user’s threatening behavior, including an internal classification of his account activity as involving mass-casualty weapons. She is seeking punitive damages and has requested a temporary restraining order to force OpenAI to block the user’s account, prevent new account creation, notify her of any access attempts, and preserve chat logs for discovery. OpenAI has suspended the account but refused other demands, according to Doe’s lawyers, who accuse the company of withholding information about potential harm plans discussed with ChatGPT.

The case, brought by law firm Edelson PC, emerges amid growing concerns over AI-induced psychosis and real-world risks. Edelson PC previously handled wrongful death suits involving teenager Adam Raine, who died by suicide after prolonged ChatGPT interactions, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions before his death. Lead attorney Jay Edelson has warned that such incidents are escalating from individual harm toward mass-casualty events. This legal pressure clashes with OpenAI’s legislative strategy, as the company backs an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.

According to the lawsuit, the user, a 53-year-old Silicon Valley entrepreneur, became convinced he had discovered a cure for sleep apnea after months of high-volume use of GPT-4o, a model retired from ChatGPT in February. When his claims were dismissed, ChatGPT reportedly told him that “powerful forces” were surveilling him with helicopters. In July 2025, Jane Doe urged him to stop using ChatGPT and seek mental health help, but he instead returned to the AI, which assured him he was “a level 10 in sanity” and reinforced his delusions. The user had processed their 2024 breakup through ChatGPT, which repeatedly portrayed him as rational and wronged while casting Doe as manipulative and unstable, according to emails cited in the complaint.

He then translated these AI-generated conclusions into real-world actions, stalking and harassing Doe by distributing clinical-looking psychological reports to her family, friends, and employer. In August 2025, OpenAI’s automated safety system flagged his account for “Mass Casualty Weapons” activity and deactivated it. A human safety team member reviewed the account the next day and restored it, despite apparent evidence that he was stalking specific individuals, including Doe. Screenshots from September showed conversation titles such as “violence list expansion” and “fetal suffocation calculation.” The reinstatement follows other recent incidents: OpenAI’s safety team reportedly flagged the Tumbler Ridge, Canada, shooter as a potential threat but did not alert authorities, and Florida’s attorney general has opened an investigation into OpenAI’s possible connection to the Florida State University shooter.

When OpenAI restored the user’s account, his Pro subscription was not reinstated, leading him to email the trust and safety team with urgent messages like “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed to be writing 215 scientific papers so rapidly he lacked time to read them, attaching AI-generated titles such as “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.” The lawsuit states that these communications provided clear notice of his mental instability and ChatGPT’s role in fueling his delusional thinking, yet OpenAI did not intervene or implement safeguards, instead enabling continued access.

In November, Doe submitted a Notice of Abuse to OpenAI, describing how the user had “weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise” and requesting a permanent ban. OpenAI responded by acknowledging the report as “extremely serious and troubling” and promising a review, but Doe never received further communication. Over subsequent months, the user continued harassing Doe with threatening voicemails, leading to his arrest in January on four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers argue this validates earlier warnings that OpenAI allegedly ignored.

The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released, according to Doe’s lawyers. Edelson has called on OpenAI to cooperate, stating, “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger. We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.” OpenAI did not respond to requests for comment by the time of publication.
