Schools Are Mining Students' Social Media Posts for Signs of Trouble

Aaah, the traditions of a new school year. New teachers, new backpacks, new crushes—and algorithms trawling students’ social media posts.

Blake Prewitt, superintendent of Lakeview school district in Battle Creek, Michigan, says he typically wakes up each morning to twenty new emails from a social media monitoring system the district activated earlier this year. The system uses keywords and machine learning algorithms to flag public posts on Twitter and other networks that tag or mention district schools or communities and contain language or images suggesting conflict or violence.
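Firestorm has not published details of how its system works. Purely as an illustration, a minimal sketch of the keyword-matching layer such services describe might look like the following Python, in which every name and word list is hypothetical:

```python
import re

# Hypothetical watch lists; a real service would pair keyword matching
# with trained models and human review rather than keywords alone.
THREAT_TERMS = {"fight", "gun", "shoot", "kill"}
DISTRICT_TERMS = {"lakeview", "#lakeviewschools"}

def flag_post(text: str) -> bool:
    """Flag a public post that both mentions the district and
    contains a watched term, queueing it for human review."""
    tokens = set(re.findall(r"[#\w']+", text.lower()))
    return bool(tokens & DISTRICT_TERMS) and bool(tokens & THREAT_TERMS)

# A post like this would land in an administrator's morning email.
print(flag_post("gonna start a fight at lakeview tomorrow"))  # True
```

Any filter this blunt produces false positives, which is one reason alerts from these systems still end up in a human inbox each morning.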

In recent months the alert emails have included an attempted abduction outside one school—Prewitt checked if the school’s security cameras could aid police—and a comment about dress code from a student’s relative—district staff contacted the family. Prewitt says the alerts help him keep his 4,000 students and 500 staff safe. “If someone posts something threatening to someone else, we can contact the families and work with the students before it gets to the point of a fight happening in school,” he says.

Lakeview’s service is provided by Firestorm, a Georgia company that also helps schools develop safety and crisis response policies. Like others in what vendors describe as a burgeoning market, the company pitches its social media monitoring tool as able to help schools prevent everything from sexting and bullying to mass shootings. One example of a post Firestorm says its platform flagged reads: “Every time I talk to my mom I end up saying something to her in a rude way and she gets pissed off even though I try not to be rude.” A company information sheet for schools includes a series of fake tweets escalating from “My girlfriend dumped me. My life is over.” to “Bang, bang. I hope you all took me seriously. Bye 🔫.”

The shooting that killed 17 people at Marjory Stoneman Douglas High School in Parkland, Florida, earlier this year has become a talking point for this niche industry. A month after the attack, Gary Margolis, CEO of Social Sentinel, a Vermont company that provides social media monitoring to schools and other organizations, described business as “definitely booming” and claimed his algorithms would have flagged threatening posts the shooter made before the killings. Hart Brown, Firestorm’s chief operating officer, told WIRED that earlier this year, the company’s system flagged a post by a student containing a photo of a backpack with a weapon inside. When the school’s principal approached the student on campus, they were carrying the weapon, Brown says. Through Firestorm, the school declined to comment.

There’s little doubt that students share information on social media that school administrators might find useful. There is some debate over whether—or how—it can be accurately or ethically extracted by software.

Amanda Lenhart, a New America Foundation researcher who has studied how teens use the internet, says it’s understandable that schools like the idea of monitoring social media. “Administrators are concerned with order and safety in the school building and things can move freely from social media—which they don’t manage—into that space,” she says. But Lenhart cautions that research on kids, teens, and social media has shown that it’s difficult for adults peering into those online communities from the outside to easily interpret the meaning of content there.

“Even if you have people directly looking at posts, they won’t know what they’re looking at,” says Lenhart. “That could be exacerbated by an algorithm that can’t possibly understand the context of what it’s seeing.” Although software that processes text has improved in recent years—hello, Alexa!—it’s far from being able to truly read. False positives from social media monitoring services could waste staff time and strain relations between students and staff, says Lenhart.

Desmond Patton, a professor at Columbia, believes social media monitoring can work if managed correctly. “I think there’s an opportunity for schools to use this as a way to support people, but I would do so with extreme caution,” says Patton, who has consulted for Social Sentinel. His lab has collaborated with social workers trying to reduce gang violence in Chicago to train machine learning software to find tweets expressing trauma and loss. The group has shown that such updates often precede posts containing threats, and it hopes to test its algorithms as a tool for community organizations in Chicago and New York City.
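Patton’s lab has not published its model here, but the general recipe it describes, training supervised software to label tweets, can be sketched with off-the-shelf tools. The snippet below is a hypothetical baseline, not the lab’s method; the example tweets, labels, and model choice are all invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples; real work of this kind relies on
# thousands of tweets annotated with help from domain experts.
tweets = [
    "miss you every day big bro rest easy",  # loss
    "can't stop thinking about that night",  # trauma
    "what a great game last night",          # other
    "so proud of my little sister today",    # other
]
labels = ["loss", "trauma", "other", "other"]

# TF-IDF features feeding a linear classifier: a common baseline
# for short-text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["rip to my cousin, gone too soon"]))
```

The hard part, as Patton’s caution suggests, is not the classifier but the labels: deciding what counts as trauma or loss in language that adults reading from the outside may misread.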

Patton worries that the technology being proffered to schools may be more likely to misfire on language used by black youth, potentially subjecting them to greater scrutiny from school administrators. He and Lenhart also say schools should disclose that they’re using systems that could slurp in students’ posts, since not all students will have considered who might read or collect what they share publicly.

Social Sentinel and Firestorm both say they leave that decision to their customers, and emphasize that they scan only public posts, targeting topics and locations, not individuals. In Lakeview's schools, students receive classes on social media; those include guidance on privacy settings, but no discussion of the district’s use of Firestorm’s service. Both companies say they have designed their systems to work with the different kinds of slang used around the country, and frequently update their vocabularies with fresh data.

Students at Lakeview’s schools head back to class on September 4. Although he’ll be busy, superintendent Prewitt says he doesn’t mind taking time each day to read through flagged posts, even when some prove irrelevant. He recalls a morning when his heart was set racing by an alert warning of an active shooter—it turned out to relate to a Lake View High School in Chicago. “There’s always follow-up that has to be done, but I would rather have more information than less,” he says.
