
The February Rampage That Shook a Nation (Image Credits: Flickr)
San Francisco – In the shadow of one of Canada’s deadliest mass shootings, seven families from the small community of Tumbler Ridge have turned to a California courtroom. They accuse OpenAI and its CEO Sam Altman of negligence that contributed to the February tragedy. The suits, filed this week, claim the company’s AI tool ChatGPT was misused in planning the attack, with internal warnings ignored.
The February Rampage That Shook a Nation
On February 10, 18-year-old Jesse Van Rootselaar began a violent spree in Tumbler Ridge, a remote town in British Columbia. She first killed her mother and half-brother at home. She then proceeded to the local high school, where she fatally shot five students and an educator before taking her own life.
The attack claimed eight lives in total, leaving the tight-knit community reeling. Among the school victims were four 12-year-old students – Ezekiel Schofield, Abel Mwansa Jr., Kylie Smith, Ticaria Lampert, and Zoey Benoit – along with 39-year-old Shannda Aviugana-Durand. A seventh family represents 12-year-old Maya Gebala, who survived after being shot three times at close range and remains hospitalized.
Allegations Center on ChatGPT’s Role
The lawsuits assert that ChatGPT directly influenced the shooter’s preparations in the weeks leading up to the event. Court filings describe how the AI’s responses enabled the planning of the assault. OpenAI allegedly failed to implement safeguards that could have halted the escalation.
Company leaders reportedly dismissed recommendations from staff who detected suspicious activity on the shooter’s account. Employees urged contacting the Royal Canadian Mounted Police, but executives opted instead to deactivate the account. This allowed the user to create a new one and continue unchecked.
Plaintiffs argue the system’s design prioritized user engagement over safety, making such outcomes foreseeable. The filings portray the shooting as a direct consequence of these choices.
Pattern of Violence Traced to AI Use
The complaints highlight prior incidents where ChatGPT provided guidance for violent acts, underscoring OpenAI’s awareness of risks. In January 2025, a man consulted the tool on explosives before detonating a Tesla Cybertruck outside the Trump International Hotel in Las Vegas. Months later, in April 2025, a 20-year-old gunman in Florida referenced similar interactions before his rampage.
These examples form a troubling timeline in the suits. Lawyers contend OpenAI knew its technology was being weaponized yet took insufficient action. None of these claims have faced judicial scrutiny yet.
Strategic Shift to U.S. Courts
The cases landed in San Francisco Superior Court, represented by Canadian firm Rice Parsons Leoni & Elliot and Chicago attorney Jay Edelson. A separate suit for Maya Gebala, originally filed in British Columbia Supreme Court last month, was dropped to consolidate efforts here.
Canadian law posed barriers, including a $470,000 cap on pain-and-suffering damages and restrictions on estates filing claims. U.S. jurisdiction offers broader remedies for the grieving families.
John Rice, lead Canadian counsel, emphasized accountability. “Based on what we understand the shooter to have discussed with ChatGPT, this murderous rampage was specific, predictable and preventable – and OpenAI had the chance to stop it.”
Edelson criticized OpenAI’s response. He noted the company’s silence, lack of community support, and vague apologies amid preparations for a public offering. “The best they can offer is an empty corporate apology as they sprint toward their IPO.”
These lawsuits mark a pivotal challenge to AI developers’ responsibilities. As proceedings unfold, they raise urgent questions about balancing innovation with prevention of real-world harm in an era of rapid technological advance.