
Coronavirus Disrupts Social Media’s First Line of Defense

Facebook users around the globe began to notice something strange happening on their feeds Tuesday night. Links to legitimate news outlets and websites, including The Atlantic, USA Today, the Times of Israel, and BuzzFeed, among many others, were being taken down en masse for reportedly violating Facebook’s spam rules. The problem impaired many people’s ability to share news articles and information about the developing coronavirus pandemic. Canadian pundit and podcast host Andrew Lawton said he was shocked to find that Facebook had wiped his episode archive and was barring him from sharing updates about Covid-19. “This is unreal,” he wrote in a since-deleted tweet.


Facebook attributed the problem to a mundane bug in the platform’s automated spam filter, but some researchers and former Facebook employees worry it’s also a harbinger of what’s to come. With a health crisis sweeping the globe, millions are confined to their homes, and social media platforms have become one of the most vital ways for people to share information and socialize with one another. But to protect the health of their staff and contractors, Facebook and other tech companies have also sent home their content moderators, who serve as the first line of defense against the horrors of the internet. Their work is often difficult, if not impossible, to do from home. Without their labor, the internet might become a less free and more frightening place.

“We will start to see the traces, which are so often hidden, of human intervention,” says Sarah T. Roberts, an information studies professor at UCLA and the author of Behind the Screen: Content Moderation in the Shadows of Social Media. “We’ll see what is typically unseen—that’s possible, for sure.”


After the 2016 US presidential election, Facebook significantly ramped up its moderation capabilities. By the end of 2018, it had more than 30,000 people working on safety and security, about half of whom were content reviewers. Most of these moderators are contract workers, employed by firms like Accenture or Cognizant in offices around the world. They work to keep Facebook free of violence, child exploitation, spam, and other unseemly content. Their jobs can be stressful, if not outright traumatizing.

On Monday night, Facebook announced thousands of contract content moderators would be sent home “until further notice.” The workers would still be paid—although they wouldn’t receive the $1,000 bonus Facebook is giving to full-time staff. To fill the gap, Facebook is shifting more of the work to artificial intelligence, which CEO Mark Zuckerberg has been heralding as the future of content moderation for years. But some of the most sensitive content will be handled by full-time staff, who will continue working in Facebook’s offices, Zuckerberg told reporters on a call Wednesday.

Speaking of Facebook users, Zuckerberg said, “I’m personally quite worried that the isolation from being at home could potentially lead to more depression or mental health issues.” To prepare for the potential onslaught, Facebook is increasing the number of people moderating content about things like suicide and self-harm, he added. Another concern is the spread of misinformation—always an issue online, but particularly during a public health crisis. As part of its wider response to Covid-19, Facebook also announced it’s rolling out a Coronavirus Information Center to the News Feed, where people can get updated information about the pandemic from authoritative sources.

As complaints over the spam glitch grew on Tuesday, those affected as well as some former Facebook employees wondered if it could be connected to the company’s recent workflow changes. “It looks like an anti-spam rule at FB is going haywire,” Facebook’s former security chief Alex Stamos said on Twitter. “Facebook sent home content moderators yesterday, who generally can't [work from home] due to privacy commitments the company has made. We might be seeing the start of the [machine learning] going nuts with less human oversight.”

Facebook’s vice president of integrity, Guy Rosen, quickly swooped in to clarify: “We’re on this—this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce. We're in the process of fixing and bringing all these posts back,” he wrote in a reply to Stamos on Twitter. (When asked for more detail as to what happened Tuesday evening, Facebook policy communications manager Andrew Pusateri directed WIRED to Rosen’s tweet.)

But researchers say problems like Tuesday night’s could become more common in the absence of a robust team of human moderators. YouTube and Twitter announced Monday that their contractors would be sent home as well, and that they too would be relying more heavily on automated flagging tools and AI-powered review systems. Leigh Ann Benicewicz, a spokesperson for Reddit, told WIRED on Tuesday that the company had “enacted mandatory work-from-home for all of its employees,” which also applies to contractors. She declined to elaborate about how the policy was impacting content moderation specifically. Twitch did not immediately return a request for comment.

With fewer moderators, the internet could change considerably for the millions of people now reliant on social media as their primary mode of communication with the outside world. The automated systems Facebook, YouTube, Twitter, and other sites use vary, but they often work by detecting things like keywords, automatically scanning images, and looking for other signals that a post violates the rules. They are not capable of catching everything, says Kate Klonick, a professor at St. John's University Law School and fellow at Yale’s Information Society Project, where she studies Facebook. The tech giants will likely need to be overly broad in their moderation efforts, to reduce the likelihood that an automated system misses important violations.


“I don’t even know how they are going to do this. [Facebook’s] human reviewers don’t get it right a lot of the time. They are amazingly bad still,” says Klonick. But the automatic takedown systems are even worse. “There is going to be a lot of content that comes down incorrectly. It’s really kind of crazy.”

That could have a chilling effect on free speech and the flow of information during a critical time. In a blog post announcing the change, YouTube noted that “users and creators may see increased video removals, including some videos that may not violate policies.” The site’s automated systems are so imprecise that YouTube said it would not be issuing strikes for uploading videos that violate its rules, “except in cases where we have high confidence that it’s violative.”

As part of her research into Facebook’s planned Oversight Board, an independent panel that will review contentious content moderation decisions, Klonick has reviewed the company’s enforcement reports, which detail how well it polices content on Facebook and Instagram. Klonick says what struck her about the most recent report, from November, was that the majority of takedown decisions Facebook reversed came from its automated flagging tools and technologies. “There's just high margins of error; they are so prone to over-censoring and [potentially] dangerous,” she says.

Facebook, at least in that November report, didn’t exactly seem to disagree:

While instrumental in our efforts, technology has limitations. We’re still a long way off from it being effective for all types of violations. Our software is built with machine learning to recognize patterns, based on the violation type and local language. In some cases, our software hasn’t been sufficiently trained to automatically detect violations at scale. Some violation types, such as bullying and harassment, require us to understand more context than others, and therefore require review by our trained teams.

Zuckerberg said Wednesday that many of the contract workers who make up those teams would be unable to do their jobs from home. While some content moderators around the world do work remotely, many are required to work from an office because of the nature of their roles. Moderators are tasked with reviewing extremely sensitive and graphic posts about child exploitation, terrorism, self-harm, and more. To prevent any of it from leaking to the public, “these facilities are treated with high degrees of security,” says Roberts. For example, workers are often required to keep their cell phones in lockers and can’t bring them to their desks.

Zuckerberg also told reporters that the offices where content moderators work have mental health services that can’t be accessed from home. They often have therapists and counselors on staff, resiliency training, and safeguards in place that force people to take breaks. (Facebook added some of these programs last year after The Verge reported on the bleak working conditions at some of the contractors’ offices.) As many Americans are discovering this week, the isolation of working from home can bring its own stresses. “There’s a level of mutual support that goes on by being in the shared workspace,” says Roberts. “When that becomes fractured, I’m worried about to what extent the workers will have an outlet to lean on each other or to lean on staff.”

There are no easy choices to make. Sending moderators back to work would be an inexcusable public health risk, but making them work from home raises privacy and legal concerns. Leaving the task of moderation largely up to the machines means accepting more mistakes and a reduced ability to rectify them at a time when there is little room for error.

Tech companies are left between a rock and a hard place, says Klonick. During a pandemic, accurate and reliable moderation is more important than ever, but the resources to do it are strained. “Take down the wrong information or ban the wrong account and it ends up having repercussions for how people can speak—full stop—because they can't go to a literal public square,” she says. “They have to go somewhere on the internet.”
