
Tumbler Ridge and the Infrastructure That Was Never Built

  • Writer: Ryan James Purdy
  • 3 days ago
  • 7 min read


Key Takeaways


  1. The natural impulse after Tumbler Ridge is to find a single point of failure. The harder truth is that multiple institutions, each making defensible decisions in isolation, compounded into catastrophe.

  2. OpenAI flagged the shooter's account eight months before the attack, banned it, and then watched her open a second one. Twelve employees recommended contacting Canadian police. Leadership overruled them. No Canadian law required otherwise.

  3. The RCMP had seized firearms from the shooter's home after multiple mental health interventions. The guns were returned after a petition. But the primary weapon used at the school had never been in police custody at all, and its origin remains unknown.

  4. The privacy versus safety debate around AI chatbot monitoring has no clean resolution. Mandating lower reporting thresholds risks turning every AI company into a surveillance operation. Maintaining the status quo means companies decide alone, with no transparency and no external oversight, when a threat is real enough to act on.

  5. Every institution involved held a fragment of the picture. OpenAI had the content. The RCMP had the mental health history. The firearms system had the licensing data. No framework existed to connect those fragments before it was too late.


In June 2025, roughly a dozen OpenAI employees sat with a decision. An automated review system had flagged a Canadian user's ChatGPT account over conversations, spread across several days, that described scenarios of gun violence. The employees interpreted the content as indicating a genuine risk of real-world harm and recommended contacting Canadian law enforcement. Leadership disagreed. The account was banned. No one called the RCMP.

Eight months later, Jesse Van Rootselaar killed eight people in Tumbler Ridge, British Columbia.

The public narrative that followed was clean and satisfying. OpenAI knew. OpenAI did nothing. If we build policy on that version of events, though, we will get policy that addresses one failure while leaving the rest untouched. The full picture is muddier than the headlines. It is also more important.

What OpenAI Knew and Did Not Do

OpenAI's internal threshold at the time required evidence of "imminent and credible" planning of serious physical harm before a case would be referred to police. Van Rootselaar's conversations did not meet that standard, according to the company. The account was banned for violating usage policies, and that was where the matter ended.

Except it was not. Van Rootselaar opened a second ChatGPT account, bypassing a system designed to detect repeat policy violators. OpenAI did not discover it until after the shooting, when her name became public.
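How that bypass happened is not public, and nothing in the reporting describes how OpenAI's repeat-violator detection actually works. As a general matter, though, ban-evasion systems tend to link new signups to banned accounts through overlapping signals, any of which a motivated user can simply rotate. A minimal sketch, with every signal name invented for illustration:

```python
# Illustrative only: a generic signal-overlap check, not OpenAI's system.
# Every field name and value below is invented for this sketch.

def shares_signal(new_account: dict, banned_accounts: list[dict]) -> bool:
    """Flag a new signup if it shares any identifier with a banned account."""
    signals = ("email", "payment_hash", "device_fingerprint", "phone")
    return any(
        new_account.get(key) is not None
        and new_account.get(key) == banned.get(key)
        for banned in banned_accounts
        for key in signals
    )

banned = [{"email": "a@example.com", "device_fingerprint": "dev-1"}]

# A second account opened with a fresh email on a different device
# shares no signal with the banned one, so it sails through.
second = {"email": "b@example.com", "device_fingerprint": "dev-2"}
print(shares_signal(second, banned))  # False
```

The weakness is structural: detection of this kind depends on identifier reuse, and reuse is optional.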

When AI Minister Evan Solomon summoned OpenAI's senior safety officials to Ottawa on February 24, the meeting went badly. Solomon called it "disappointing." Public Safety Minister Gary Anandasangaree said nothing substantial came out of it. Culture Minister Marc Miller said he was "troubled." The company had not presented new safety protocols or an action plan.

It took a direct video call between Solomon and CEO Sam Altman on March 4, followed by a separate meeting with BC Premier David Eby the next day, before OpenAI committed to a series of changes: a direct RCMP contact, Canadian experts embedded in its safety review process, retroactive review of previously flagged cases, and a report on its detection systems. Those commitments materialized only under political pressure, from a foreign government, after eight people were dead.

What the Firearms System Did and Did Not Prevent

The primary weapon used at Tumbler Ridge Secondary School had never been in police custody. Its origin is unknown. The RCMP is still investigating how Van Rootselaar obtained it.

That fact reframes everything that follows.

Police had attended the shooter's residence on multiple occasions over several years for mental health concerns. Van Rootselaar had been apprehended under the Mental Health Act and taken to hospital for assessment at least once. The most recent visit, in spring 2025, involved concerns about self-harm. Firearms had been seized from the family home under the Criminal Code approximately two years before the shooting. The lawful owner petitioned for their return and succeeded, reportedly about a month before the attack.

The seizure-and-return sequence is troubling on its own terms. Returning firearms to a home where police had documented a pattern of mental health crises and at least one apprehension under the Mental Health Act deserves scrutiny, and a BC coroner's inquest will likely provide it.

But the weapon that caused the most damage at the school was not among those previously seized guns. Even if those firearms had never been returned, the attack might still have happened, carried out with a weapon the system had never seen.

The OpenAI story and the firearms story share the same structure. Each system caught something. Neither system caught enough. And nothing connected what one system knew to what the other was missing.

The Privacy Problem No One Has Solved

The most consequential policy question from Tumbler Ridge is deceptively simple: when should an AI company call the police about what a user types into a chatbot?

OpenAI's previous standard required evidence of imminent and credible planning. The company now says it has made its referral criteria more flexible and would have reported Van Rootselaar under updated policies.

The debate over where the new line should sit is far from settled.

Michael Geist, who holds the Canada Research Chair in Internet and E-commerce Law at the University of Ottawa, has argued that mandating lower thresholds for reporting chatbot activity would harm individual privacy. His comparison is direct: we would be reluctant to require Google to actively monitor and report on emails, and he does not see a meaningful difference between that and what takes place in an AI chatbot context. He is now calling for an AI Transparency Act rather than a rush to expand reporting obligations.

Others see it differently. A researcher at Simon Fraser University drew the parallel to the Tarasoff principle, which imposes on therapists a duty to warn when a patient poses a credible threat to an identifiable person. If AI chatbots function as pseudo-therapists, the argument goes, a similar obligation should follow. But on the other end of a chatbot conversation there is no clinician making a professional judgment. There is an automated classifier making a probabilistic assessment of text, followed by a small team of content reviewers working from a corporate usage policy. The distance between those two things is enormous.
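To make that distance concrete, consider what threshold-based escalation reduces to in practice. The sketch below is purely illustrative, not OpenAI's pipeline; the score, the cutoffs, and the labels are all invented:

```python
# Illustrative only: a minimal sketch of threshold-based escalation.
# The thresholds and field names are invented for this example.

from dataclasses import dataclass

@dataclass
class FlaggedConversation:
    account_id: str
    risk_score: float  # classifier's probabilistic estimate, 0.0 to 1.0

REVIEW_THRESHOLD = 0.80    # hypothetical: below this, no human ever looks
REFERRAL_THRESHOLD = 0.95  # hypothetical: "imminent and credible" as a number

def triage(conv: FlaggedConversation) -> str:
    """Everything a clinician would weigh -- history, context, affect --
    arrives compressed into one score compared against fixed cutoffs."""
    if conv.risk_score >= REFERRAL_THRESHOLD:
        return "refer_to_law_enforcement"
    if conv.risk_score >= REVIEW_THRESHOLD:
        return "human_review"  # reviewers apply a usage policy, not a diagnosis
    return "no_action"

# A conversation scoring 0.79 is indistinguishable, to this system,
# from one scoring 0.05: neither is ever seen by a person.
print(triage(FlaggedConversation("user-123", risk_score=0.79)))  # no_action
```

In a system like this, the difference between "no action" and "refer to police" is where someone chose to put a constant.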

There is also a practical concern that rarely gets aired. If users know that AI companies are monitoring conversations and reporting to police on flexible criteria, the people most likely to change their behaviour are not would-be attackers. They are the millions of ordinary users who treat chatbots as private spaces for processing difficult thoughts, and who will simply stop doing so.

None of this means OpenAI made the right call in June 2025. Twelve of its own employees thought otherwise. But legislating the answer is harder than the current political moment suggests.

The Failure That Connects Everything

OpenAI had the chatbot content. The RCMP had the mental health history and the firearms seizure record. The firearms licensing system had the expired licence data. The school system had the dropout history. Each institution held a fragment of a picture that, assembled, would have shown an unmistakable threat.

No structure existed to assemble it.
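What "no structure" means is easy to state in data terms. Sketching the fragments as records, with every field invented for illustration, the missing piece becomes obvious: no shared key, no lawful basis to join on one, and no function anywhere that accepts all the records at once.

```python
# Illustrative only: the fragments as each institution might hold them.
# Every record type, field, and value here is invented for this sketch.

from dataclasses import dataclass

@dataclass
class ChatFlag:            # held by the AI company
    account_email: str
    summary: str

@dataclass
class PoliceRecord:        # held by the RCMP
    legal_name: str
    mental_health_calls: int
    firearms_seized: bool

@dataclass
class LicenceRecord:       # held by the firearms programme
    legal_name: str
    licence_status: str

# There is no shared identifier linking an account email to a legal name,
# and no function that takes (ChatFlag, PoliceRecord, LicenceRecord) and
# returns a warning. The absence of that function is the missing
# infrastructure this article describes.
```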

Canada had no AI legislation requiring companies to report threats to law enforcement under defined circumstances. The Online Harms Act, which might have imposed some of those requirements, never became law because an election was called. PIPEDA's emergency disclosure provision under section 7(3)(e) was drafted for clear-cut crises, not for the probabilistic threat indicators that AI interactions generate. For a foreign corporation navigating that legal ambiguity, the safer corporate decision was inaction.

OpenAI set its own reporting threshold, in its own offices, with no external audit, no Canadian context in the review process, and no transparency about how decisions were made. The federal AI minister had to summon the company to Ottawa just to learn what its safety protocols were.

Not a single villain. Not a single broken rule. Disconnected institutions, each operating within its own mandate, with no connective tissue between them.

Where This Leaves Us

Geist is right that transparency is a precondition. You cannot regulate what you cannot see, and until February 2026, the Canadian government could not see any of this. But transparency tells you what happened after the fact. It does not connect fragmented information before the fact.

That connective infrastructure does not exist in Canada today. Not between AI companies and law enforcement. Not between law enforcement and mental health systems. Not between mental health systems and firearms licensing. Tumbler Ridge exposed every seam at once.

Building it requires balancing privacy against safety, defining obligations for companies operating across jurisdictions, and giving governments the capacity to audit AI safety systems before a crisis rather than after one. It also requires honesty about the fact that even a well-built system will not reduce the probability of tragedy to zero. Every institution involved in the Tumbler Ridge timeline could have acted differently. Some should have. But reduced chances are not eliminated chances, and anyone who promises otherwise is selling something simpler than the truth.

Eight people are dead, and every institution involved can point to the others. The question that matters now is not who failed worst. It is whether Canada will build the connective infrastructure that did not exist on February 10, or whether we will wait for the next case to prove, again, that it is missing.

References


  1. CBC News, "AI minister 'disappointed' by OpenAI meeting held in wake of Tumbler Ridge shooting," February 24, 2026. https://www.cbc.ca/news/politics/open-ai-government-meeting-tumbler-ridge-9.7104789

  2. CBC News, "OpenAI CEO expressed 'horror and responsibility' over ChatGPT's ties to Tumbler Ridge, AI minister says," March 4, 2026. https://www.cbc.ca/news/politics/evan-solomon-open-ai-meeting-ceo-sam-altman-9.7114767

  3. CBC News, "What we know about the guns used in the Tumbler Ridge, B.C., shooting," February 14, 2026. https://www.cbc.ca/news/politics/tumbler-ridge-firearms-9.7088680

  4. BetaKit, "OpenAI connection to Tumbler Ridge tragedy puts balance between privacy and public safety in the spotlight," February 2026. https://betakit.com/openai-connection-to-tumbler-ridge-tragedy-puts-balance-between-privacy-and-public-safety-in-the-spotlight/

  5. Michael Geist, "More Transparency Not Police Reporting: Navigating the Safety-Privacy Balance for AI ChatBots," March 2026. https://www.michaelgeist.ca/2026/03/more-transparency-not-police-reporting-navigating-the-safety-privacy-balance-for-ai-chatbots/

  6. The Conversation, "Danger was flagged, but not reported: What the Tumbler Ridge tragedy reveals about Canada's AI governance vacuum," February 2026. https://theconversation.com/danger-was-flagged-but-not-reported-what-the-tumbler-ridge-tragedy-reveals-about-canadas-ai-governance-vacuum-276718

  7. Globe and Mail, "What do AI firms do when users tell chatbots their dark, violent thoughts?" February 28, 2026. https://www.theglobeandmail.com/canada/article-tumbler-ridge-ai-chatbots-regulation/

  8. OpenAI, "Helping people when they need it most," August 2025. https://openai.com/index/helping-people-when-they-need-it-most/

  9. CBC News, "Family of Tumbler Ridge shooting victim suing OpenAI," March 10, 2026. https://www.cbc.ca/news/canada/british-columbia/openai-sued-tumbler-ridge-victim-9.7121635

  10. Wikipedia, "2026 Tumbler Ridge shooting." https://en.wikipedia.org/wiki/2026_Tumbler_Ridge_shooting


Ryan James Purdy is an independent AI governance researcher, writer, and AI assurance professional. He is the author of three books on AI policy and compliance in education and a seven-memorandum academic series on AI governance in K-12 education, published on SSRN and Zenodo. He has conducted limited AI assurance assessments of Canadian and American school boards. jamespurdy624@gmail.com.