Always-On School Surveillance Tech Is Sending Cops To Deal With At-Risk Students
Surveillance of students that continues even after they’ve left the campus is nothing new. School-issued tech comes preloaded with spyware meant to prevent students from cheating, accessing inappropriate content, or introducing malware of their own into the school tech ecosystem. But it goes beyond that: it also keeps tabs on anything they do while using these devices, even after school hours.
What was already commonplace pre-pandemic became inescapable during the pandemic years, when most students were doing all of their learning remotely. With COVID behind us (sort of…), you’d think schools would dial back the level of intrusion. But that hasn’t happened. Instead, many schools have decided that if kids were ok with being spied on when they weren’t on campus because they physically couldn’t be on campus, they’d have no problem enduring this surveillance even though most students are back in classrooms.
Administrators have legitimate concerns. And the software they purchase to keep tabs on what students are doing with their school-issued tech does, at least occasionally, give educators heads up on kids who might be in need of mental health assistance.
Here’s how Victor Tangermann sums things up for Futurism:
Many of these systems are designed to flag keywords or phrases to figure out if a teen is planning to hurt themselves.
But as the NYT reports, we have no idea if they’re at all effective or accurate, since the companies have yet to release any data.
Despite the false alarms, schools have reported that the systems have allowed them to intervene before students were at imminent risk, at least some of the time.
However, the software remains highly invasive and could represent a massive intrusion of privacy.
The referenced New York Times report, written by Ellen Barry, makes it clear every positive of monitoring students comes paired with a negative. Not only are school staffers spending time digging through a lot of false positives, they’re also expected to cover all the bases when something concerning is flagged and can’t be explained away as anything other than what it appears to be.
In one case in Neosho, Missouri, cops were sent to the home of Angel Cholka in the middle of the night because monitoring software on her daughter’s computer had flagged a message to a friend saying she intended to overdose on her anxiety medication. In this case, her daughter was rushed to the hospital, where it was discovered she had already downed at least 15 pills.
This is the sort of case that is used to wave away concerns about collateral damage. But that ignores the possible severity of this damage, which has the potential to turn deadly because the first responders are people with badges and guns, rather than mental health professionals.
Thousands of miles away, at around midnight, a mother and father in Fairfield County, Conn., received a call on their landline and were unable to reach it in time to answer. Fifteen minutes later, the doorbell rang. Three officers were on the stoop asking to see their 17-year-old daughter, who had been flagged by monitoring software as at urgent risk for self-harm.
The girl’s parents woke her and brought her downstairs so the police could quiz her on something she had typed on her school laptop. It took only a few minutes to conclude that it was a false alarm — the language was from a poem she wrote years earlier — but the visit left the girl profoundly shaken.
“It was one of the worst experiences of her life,” said the girl’s mother, who requested anonymity to discuss an experience “traumatizing” to her daughter.
In Neosho, Missouri, a city with a population of around 13,000 people, this kind of intervention seems to happen with alarming frequency. Even more alarming is the dismissive attitude of the school’s police chief, who seems to feel banging on doors late at night is perfectly acceptable because it’s permitted by the fine print.
Roughly three times a year, a school police officer is sent to a student’s home to intervene if a suicide attempt appears to be imminent, said Ryan West, chief of the school’s police department. Generally, he said, these visits take place between 11 p.m. and 2 a.m. Parents tend to be “really unsettled” and say they had not realized such a visit was even possible, though “they all sign a tech agreement that spells it all out.”
Compare this brush-off with the reactions of parents who’ve actually had to deal with these unexpected intrusions.
“There were people with guns coming to our house in the middle of the night,” [a student’s] father said. “It’s not like they sent a social worker.”
It’s not that the job of preventing suicide isn’t worth doing. It’s that it’s a job best performed by trained professionals using the proper tools. Sending “people with guns” to deal with potentially suicidal people does almost nothing to prevent the actually suicidal from carrying out their plans. What it does do, however, is vastly increase the chance the suicidal person will end up dead, whether it’s by their own hand or by the swarm of armed, “reasonably scared” officers who think nearly every problem can be solved with the application of force.
While it’s impossible for parents to monitor everything their children say or do, it’s equally unreasonable to allow schools to attempt to fill this void with a blend of always-on spyware and law enforcement officers. When kids are telling reporters they’re being flagged for things like discussing hunting trips, researching the KKK for classes, or even discussing assigned literature that deals with violent subject matter, there’s little reason to believe police officers will be capable of separating the false positives from the actual positives when they apparently believe the best way to handle a spyware alert is to start banging on doors in the middle of the night.