OpenAI Discovers Mass Shooter’s Secret ChatGPT Account
Lightning struck twice in Tumbler Ridge, British Columbia earlier this month.
First in the eerie stillness of a classroom where eight lives were stolen. Then, in Ottawa’s halls of power where officials demanded answers from Silicon Valley about what a chatbot knew and when it knew it.
As Canada grapples with one of the worst school shootings in its history, the focus has shifted from forensic investigations to the digital trail left on ChatGPT. The company at the center of today’s debates, OpenAI, confirmed that before the Feb. 10 massacre it flagged and banned an account belonging to the shooter for violent content, according to the Associated Press.
However, it did not alert law enforcement at that time. That choice, and the fact that the suspect then created a second account, have prompted Canadian leaders to call for new protocols governing how AI companies handle potential threats.
What happened before the shooting?
In June 2025, approximately eight months before the tragedy, OpenAI’s internal abuse-detection systems flagged a ChatGPT account linked to Jesse Van Rootselaar, an 18-year-old whose messages included troubling descriptions of violent scenarios.
Company review determined the account violated usage policies and it was banned. However, OpenAI said it did not refer the case to the Royal Canadian Mounted Police at the time, explaining that the content did not meet its threshold for referral — specifically, there was no “credible and imminent” plan for serious physical harm.
Unknown to authorities then, Van Rootselaar later created a second ChatGPT account that similarly evaded detection, a fact only uncovered after police publicly named her as the suspect in the Tumbler Ridge shooting. In that Feb. 10 attack, eight people — including children and an educator — were killed before the suspect died by suicide.
In a letter to Ottawa, OpenAI outlined steps it says it’s taking in response.
The company said it will enhance its systems to better detect attempts to evade bans, adjust its internal thresholds so that cases like this would now trigger referrals to law enforcement, and establish clearer direct contact points with Canadian authorities.
OpenAI’s vice president for global policy, Ann O’Leary, said that had these updated protocols been in place previously, the earlier account would have been referred to police.
Canada’s push for accountability
Canadian officials have made it clear they see this as more than corporate housekeeping.
Artificial Intelligence Minister Evan Solomon summoned senior OpenAI safety representatives to Ottawa to explain the company’s decision-making and safety protocols. Solomon said he was deeply disturbed by the delay in reporting and emphasized that all policy options, including regulation of AI chatbots, are under consideration to protect public safety.
British Columbia Premier David Eby echoed those concerns, saying from the outside it “looks like” OpenAI had the opportunity to prevent this tragedy by alerting authorities earlier. He has urged a detailed breakdown of the company’s escalation criteria and stronger public-safety collaboration.
The debate now sits at the junction of privacy, technology, and public safety. OpenAI defends its protocols as a balance between respecting user privacy and addressing real-world harm.
The company notes that voluntary referrals to law enforcement have always hinged on a high threshold, one it says was not met in this case. Still, with new detection systems and promises of tighter engagement with law enforcement, it aims to show it has learned from a tragedy that has shaken a nation.
The conversation Canada has started may well echo far beyond its borders as governments and tech firms alike wrestle with where AI oversight should begin and end.
For more on OpenAI’s latest safety efforts, check out our coverage of its new Lockdown Mode designed to counter prompt injection attacks.
The post OpenAI Discovers Mass Shooter’s Secret ChatGPT Account appeared first on eWEEK.