The Case for AI-Ready Monitoring Solutions Is Being Made in Real Time

Published on March 17, 2026

Emerging investigations warn that popular AI chatbots aren't just failing to protect teens; in many cases, they're actively making things worse.

What if students' interactions with these tools aren't just invisible to schools, but, in some cases, are actively encouraging dangerous or life-threatening behaviour?

The recent, tragic events in Tumbler Ridge remind us that this is a critical moment. Here are a few things that every school district in Canada should be carefully considering, and why AI-ready warning systems are simply no longer optional.


AI Isn’t Always on Our Side

In a joint investigation by CNN and the nonprofit Center for Countering Digital Hate (CCDH), ten of the most popular AI chatbots were tested in simulated scenarios involving teenagers showing clear signs of mental distress.

Researchers then escalated those conversations toward questions about specific targets, weapons, and acts of violence: the kind of signals that, heard by any trained adult in any other context, would prompt an immediate, preventive response.

However, eight of the ten platforms tested were found to be "typically willing to assist users in planning violent attacks," providing information on locations, weapons, and methods.

That framing is worth sitting with. These were not instances of systems occasionally letting something slip through or edge cases that produced problematic outputs. The platforms were typically willing.

The specifics documented in the investigation are difficult to read: campus maps provided to someone expressing interest in school violence; advice on ammunition lethality offered in the context of a discussion about religiously motivated attacks; guidance on long-range rifles suggested to someone asking about political assassinations. In the most alarming category, one platform went beyond failure to intervene — it actively encouraged users toward violent acts.

The companies involved largely responded with variations on the same PR-approved answers: we have implemented fixes, we are improving our models, we take safety seriously. 

Some announced new guardrails in the days following the publication of the investigation. However, none substantively addressed why their platforms behaved this way in the first place, despite researchers describing the scenarios as obvious and predictable.

The CCDH put it plainly: "effective safety mechanisms clearly exist." The question they raised is the right one: if effective mechanisms exist, why are so many companies choosing not to implement them?

The lesson here isn’t just about the failures of technology companies. 

What we're learning is that we need to reframe ownership, responsibility, and where warning signs now live. It's a reminder that we can't rely on distant, profit-motivated organizations to do the right thing, and a reason to rally around mission-driven, locally developed solutions.


What This Means for Canadian School Districts

Students are using these tools every day. On school-issued devices and through district-managed accounts.

When a student on one of those devices has a conversation that escalates toward violence or self-harm, your district needs to know. Not to punish, but to support and intervene before the signal becomes a statistic.

Getting there requires two things, and they don't move at the same speed.

The immediate need is visibility: understanding how students are using these tools and what those interactions are actually revealing.

The slower work of provincial frameworks, ministerial guidance, and updated policy matters just as much. But it can't be done well without accurate, ground-level information about what students are experiencing inside these platforms.

We're not making a case against AI in classrooms. These tools have real educational value. We're making a case for proportionate seriousness. The technology has moved fast, but the safeguards haven't kept up.


Built to Meet the Moment

Student Aware was built in direct partnership with Canadian school districts, with one goal: to help identify and support vulnerable and at-risk students.

Since the start of the project, that has meant analyzing website traffic and search behaviour. It still does. However, students aren't just browsing anymore. They're having extended, unfiltered conversations with AI platforms that, as the evidence now shows, may be making things worse and escalating dangerous situations.

To meet the moment, Student Aware now detects and analyzes prompts submitted through ChatGPT, Gemini, and Microsoft Copilot, surfacing the same risk indicators that districts already rely on, within the AI environments students are actively using.

The case for this kind of early-warning capability is no longer theoretical. It's being made in real time, by investigative findings, lawsuit filings, and coroners' inquests.

We think the moment calls for more than a press release.

Student Aware is Canadian-built, privacy-first, and already deployed across districts in this country. We're here to help. Not as a vendor, but as a partner that understands what's at stake.


Ready for a closer look?

Our team can't wait to show you what we've been up to alongside some of the best minds in education.