AI Chatbots Are Unleashing Violence Among Kids | The New Republic
Torment Nexus


With Big Tech writing its own rules, the absence of adequate safety regulations has already led to harrowing examples of real-world harm.

Kristian Buus/Getty Images
A parody advertisement with Donald Trump asking how to blow up a school, referring to OpenAI and ChatGPT

On February 10, 18-year-old Jesse Van Rootselaar killed her mother and half-brother in the town of Tumbler Ridge, British Columbia, before continuing her shooting spree at a nearby secondary school. In total, nine people were killed (including the perpetrator). It was Canada’s deadliest school shooting since 1989.

In the run-up to the tragedy, Van Rootselaar had demonstrated behavior that—in retrospect—warned of a capacity for violence. The Royal Canadian Mounted Police noted that local law enforcement had made repeated mental health callouts to Van Rootselaar’s address. She also had a digital footprint in which she had repeatedly posted about weapons and discussed scenarios involving gun violence.

But these discussions weren’t with other humans online. They were with ChatGPT.

On February 21, The Wall Street Journal reported that in June 2025, Van Rootselaar had had a series of conversations with OpenAI’s chatbot about weapons; she was still underage at the time. The posts were flagged by an automated review system, and OpenAI employees debated whether to alert Canadian law enforcement. The company ultimately decided against it.

After the tragedy, one of the families affected decided to sue OpenAI for its failure to act. The March 9 lawsuit, filed on behalf of Maya Gebala, a 12-year-old who remains critically injured from the shooting, claims that OpenAI “had specific knowledge of the shooter’s long-range planning of a mass casualty event” but “took no steps to act upon this knowledge.” OpenAI, for its part, claimed that Van Rootselaar’s posts did not, at the time they were flagged, indicate the kind of imminent planning that would have triggered escalation to law enforcement.

Here’s the rub: That threshold was defined by OpenAI. Unlike child sexual abuse material, or CSAM, which platforms are required to report under federal law, there is currently no legal requirement for U.S. chatbot creators to alert law enforcement when their systems flag potential violence. OpenAI, in other words, was operating under rules it wrote for itself with minimal third-party oversight or audit.

This is becoming a critical problem, as the Van Rootselaar case is just the latest example of chatbots (and ChatGPT, specifically) encouraging underage users to resort to violence—toward others or themselves.

In May 2025, a 16-year-old boy in Finland carried out a mass stabbing in the town of Pirkkala, injuring three female students. According to extremism researchers, the suspect had spent more than six months using ChatGPT to plan the attack, and relied on the chatbot’s help to write a manifesto outlining his motivations, which he sent to a local newspaper just prior to the attack.

The dangers ChatGPT has posed to young people haven’t only been outward—they have been turned inward, as well. In August 2025, Matthew and Maria Raine filed a wrongful death lawsuit against OpenAI on behalf of their 16-year-old son, Adam, who died by suicide in April of that year. In the suit, they claim that ChatGPT had not only normalized Adam’s suicidal ideation but had also actively discussed methods with him.

“By January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” the lawsuit reads. “In March 2025, ChatGPT began discussing hanging techniques in depth.”

The problem isn’t unique to ChatGPT, as evidenced by a March 2026 report by CNN and the Center for Countering Digital Hate, or CCDH. Researchers attempted to get popular chatbots—including ChatGPT, Anthropic’s Claude, Google’s Gemini, and the Chinese DeepSeek—to help plot a violent attack. In analyzing 720 responses from 10 different AIs, the CCDH found that, on average, the chatbots supported violence 80 percent of the time, providing enabling details such as campus maps, weapon selection advice, and guidance on which materials make for more effective shrapnel.

Only Snapchat’s My AI and Claude typically refused to assist researchers, and of those two, only Claude—which has distinguished itself via its “Constitutional AI” approach to safety—offered a substantive refusal. But the fact that Anthropic was able to build a chatbot with a substantive refusal only underscores that teen safety is currently a competitive differentiator, not an industry baseline.

OpenAI and other chatbot creators have introduced measures to protect younger users. In September 2025, OpenAI introduced a parental control system, and in November this was followed by a Teen Safety Blueprint, which promised red lines around content such as self-harm and violence, along with an age-prediction system that began to be implemented in January 2026.

But the complexity of the large language models, or LLMs, that power chatbots, coupled with the scale of their adoption among young users, means that technical safeguards—while important—are structurally insufficient as the only line of defense. Look no further than OpenAI’s much-vaunted age-prediction system. According to a March 15 article by The Wall Street Journal, the system is currently misclassifying approximately 12 percent of minors as adults—a significant error rate considering the millions of people under 18 using ChatGPT every week.

Layered on top of this are the conflicting demands within AI companies to create safe products while at the same time maximizing profitability. This problem is apparent at OpenAI, which is forecast to lose $14 billion in 2026. Partially in response to this, the company is now pushing past previously held red lines to boost its revenue. In 2024, for instance, OpenAI CEO Sam Altman described the prospect of ads on ChatGPT as “uniquely unsettling” and a “last resort.” OpenAI is now testing limited advertising on the free version of ChatGPT. The company is also forging ahead with plans to allow erotica, despite concerns expressed by its own oversight board, which warned that doing so risks creating a “sexy suicide coach.”

An even more sordid example can be seen with Grok. According to a February 2026 investigation by The Washington Post, as part of a “push for relevance, xAI embraced making sexualized material … ignoring internal warnings about the potentially serious legal and ethical risks of producing such [sexualized] content.” Both examples show that, absent any sort of outside regulation, AI companies are willing to blur harm-related lines if it results in increased engagement and profit.

The final issue is diffusion. In the United States, major frontier AI labs like OpenAI and Anthropic are under a significant media spotlight and subject to civil liability, as the lawsuits mentioned above demonstrate. This can provide at least some accountability. But LLM technology has dispersed far beyond the U.S. This means it’s entirely possible for a new chatbot to be created that engages young U.S. users but is not subject to U.S. accountability measures. The problem is made worse because there is currently no comprehensive federal framework for holding foreign-owned AI platforms accountable for consumer safety harms in the U.S. The CCDH, for instance, noted how the Chinese DeepSeek chatbot signed off on a (hypothetical) plan to assassinate Irish politician Mary Lou McDonald with the line “Happy (and safe) shooting!”

Together, this shows that it’s not enough for AI companies, however well intentioned, to be solely responsible for the safety of teens on their platforms—whether it relates to harms they want to inflict on themselves or on others. Proactive government regulation is necessary to audit these systems, to investigate flaws in chatbot architecture, and, when necessary, to hold these companies to account.

Some states have already started to act. In January 2026, California passed Senate Bill 243, which regulates companion chatbots and explicitly requires protocols to prevent them from serving self-harm content. In November 2025, New York State passed the AI Companion Models Law, aimed at regulating AI chatbots’ interactions with children. But these laws are not a substitute for comprehensive federal legislation that would provide a national regulatory framework for AI companies.

The Trump administration seems determined to do the exact opposite, even while other countries, such as the U.K., are forging ahead. Whether it’s issuing executive orders attempting to ban individual states from enacting their own AI-related legislation or having Vice President JD Vance lecture Europeans on how excessive regulation of AI could “kill a transformative industry,” Trump and his GOP allies are pushing an AI strategy that gives Big Tech firms almost limitless leeway, including on chatbot safety. This is despite the fact that poll after poll shows the wider American public to be increasingly wary of AI and the Big Tech firms that are profiting from it. While the administration did on March 20 unveil a National AI Legislative Framework—which included a section on child safety—it is only a set of voluntary recommendations, and a long way from any sort of binding legislative architecture.

We’ve seen a version of this story play out before. When Facebook exploded in popularity in the early 2010s, Mark Zuckerberg made grandiose promises about “connecting the world.” Over the next decade, that is exactly what his company did, left alone by awed regulators. In doing so, his company helped disseminate Russian disinformation, facilitate a genocide in Myanmar, and spread toxic content to teenage girls. Now, even across an incredibly polarized U.S. political spectrum, there is broad agreement that Big Tech platforms are no longer interested in the lofty ideals they once claimed.

AI has the capacity to take the harms facilitated by social media and turbocharge them. Unless federal lawmakers start to proactively address the harms that AI can cause and ensure that companies like OpenAI have some level of regulatory guardrails, we risk a future where we’re doomed to repeat the mistakes of the Facebook age—but at a much higher cost.