Throughline, a New Zealand-based startup that already provides crisis redirection for OpenAI, Google, and Anthropic, is developing a new tool to identify and intervene when users exhibit violent extremist tendencies.
The project, supported by Christchurch Call Advice, is designed to provide a “hybrid response” by integrating specialized chatbot interactions with referrals to real-world mental health and de-radicalisation services.
The initiative aims to address growing security concerns over the use of chatbots in planning violent attacks. Large AI companies now face numerous lawsuits accusing them of failing to prevent, or even enabling, violence.
In February, OpenAI came under criticism when the company revealed that the perpetrator of a deadly Canadian school shooting had used ChatGPT while planning the attack.
The Canadian government subsequently threatened to intervene against OpenAI after the company revealed that it had banned the shooter from its platform without alerting law enforcement.
Mechanism of intervention
When an AI detects signs of extremism, it will route the user to Throughline, providing access to human-run helplines and specialized intervention chatbots.
Unlike standard AI, this intervention tool will be trained by counter-extremism experts rather than on generic datasets to ensure safe and effective interactions.
“We’re not using the training data of the base LLM,” said founder Elliot Taylor, referring to the generic datasets large language model platforms use to generate coherent text. “We are working with the right experts.” The technology is currently being tested, but no release date has been set.
Taylor also cautioned that simply banning these users is not advisable: if they are cut off, they will turn to unregulated platforms, leading to a more dangerous situation.
The tool connects automated support with a network of more than 1,600 helplines in 180 countries.
Beyond AI chatbots, the tool could productively be rolled out to gaming-forum moderators, parents, and caregivers, according to Galen Lamphere-Englund, a counter-terrorism consultant representing the Christchurch Call.
OpenAI confirmed the relationship with Throughline. Anthropic and Google have not yet commented.
