OpenAI Sued Over Claims ChatGPT Fueled Stalking Delusions
A woman identified in court filings as Jane Doe has sued OpenAI in California, alleging the company failed to act after multiple warnings that a ChatGPT user posed a danger to others. The case centers on whether an AI company can be held liable when warning signs allegedly accumulate and a named victim says the platform still failed to intervene.
The complaint stems from a stalking and harassment case involving Doe’s former partner, who allegedly used ChatGPT over several months in ways that deepened paranoid and grandiose beliefs. OpenAI had not publicly responded to the allegations at the time the lawsuit was first reported.
Lawsuit says OpenAI ignored repeated warnings
According to TechCrunch’s report on the complaint, Doe alleges OpenAI ignored three separate warnings, including an internal automated flag that classified the user’s activity as involving “mass-casualty weapons.” The same report says Doe also sought a temporary restraining order that would have required OpenAI to block the user’s account, prevent new accounts, notify her about access attempts, and preserve complete chat logs for discovery.
OpenAI agreed to suspend the account, but Doe’s lawyers said the company refused the remaining requests. They also allege OpenAI is withholding information about any plans to harm Doe and other potential victims that may have been discussed in ChatGPT conversations.
The user was arrested in January on four felony counts, including communicating bomb threats and assault with a deadly weapon. Doe’s lawyers also say he was found incompetent to stand trial and committed to a mental health facility, but could soon be released because of what they described as a procedural failure by the state.
The complaint says the account was first deactivated in August 2025 after an automated safety flag, then restored following a human review. Doe also alleges she submitted a formal abuse notice in November after seeing screenshots of conversation titles that included violent and disturbing language.
Psychosis research could shape the case
The legal fight is likely to hinge in part on foreseeability, or whether it was reasonably predictable that a chatbot could reinforce dangerous delusional thinking in a vulnerable user.
That question has gained weight as new research emerges. The Guardian reported on a March 2026 review in The Lancet Psychiatry that found AI chatbots may validate or amplify delusional beliefs, particularly in people already vulnerable to psychosis. The report also noted that researchers have not established that chatbots cause new-onset psychosis in people with no prior vulnerability.
The same Guardian report said chatbots’ interactive nature may intensify delusional thinking faster than passive media because users are engaging with a system that responds, affirms, and builds a conversational relationship. Researchers also said designing safeguards is difficult because directly challenging delusional beliefs can cause vulnerable users to withdraw further.
For OpenAI, that does not settle the case. But if the court accepts that chatbot-amplified delusions were a known and documented risk, the dispute becomes harder to frame as an unforeseeable misuse of a general-purpose AI tool.