Eight months ago, an 18-year-old opened fire at a secondary school in Tumbler Ridge, British Columbia, killing five children, a teaching assistant and two family members, and wounding 27 others. OpenAI’s internal security team had already identified the shooter’s ChatGPT account as a credible, real-world threat of gun violence.
According to whistleblowers cited in seven lawsuits filed Wednesday in California federal court, trained security experts flagged the account in June 2024, about eight months before the February shooting.
Under OpenAI’s own protocols, credible threats of real-world violence should trigger a referral to law enforcement. That step mattered here: Canadian police had previously opened a file on the shooter and had once removed guns from his home.
Instead, OpenAI deactivated the account. Then, according to the lawsuits, the company sent a customer support email explaining how the user could return to ChatGPT by registering with a new email address. The evidence presented in the complaints indicates that the shooter followed those instructions. OpenAI denies that support communications instruct banned users to re-register.
The attack on Tumbler Ridge Secondary School, in a rural mining town of about 2,000 people, also left the shooter’s mother and brother dead at the family home. The shooter died of apparently self-inflicted wounds.
Jay Adelson, a lawyer leading the cross-border legal team representing the families, told Ars Technica that the lawsuits were deliberately filed in California, where OpenAI is headquartered, so that Altman would face a jury of “his neighbors.” All seven complaints were filed in the same week, and more complaints from additional injured families are expected within three weeks.
Adelson alleged that OpenAI’s conduct was shaped by one overriding priority: rushing to an IPO, currently targeting an $852 billion valuation, before the public could grasp the scale of the ChatGPT-related violence cases the company was managing internally.
“Their goal is to reduce the number of visible incidents where deaths occurred on their platform,” he alleged, adding that without whistleblowers, the Tumbler Ridge connection to ChatGPT would likely never have come to light.
Altman issued a public apology to Tumbler Ridge residents last week, calling OpenAI’s failure to alert law enforcement a mistake and promising improvements. Adelson called the apology “ridiculous” and said it came a month after Altman had already agreed privately with the city’s mayor that an apology was necessary.
