Better Than Nothing: A Look at Content Moderation in 2020

“I don’t think it’s right for a private company to censor politicians or the news in a democracy.”—Mark Zuckerberg, October 17, 2019

“Facebook Removes Trump’s Post About Covid-19, Citing Misinformation Rules”—The Wall Street Journal, October 6, 2020

For more than a decade, the attitude of the biggest social media companies toward policing misinformation on their platforms was best summed up by Mark Zuckerberg’s oft-repeated warning: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Even after the 2016 election, as Facebook, Twitter, and YouTube faced growing backlash for their role in the dissemination of conspiracy theories and lies, the companies remained reluctant to take action against such content.

Then came 2020.

Under pressure from politicians, activists, and media, Facebook, Twitter, and YouTube all made policy changes and enforcement decisions this year that they had long resisted—from labeling false information from prominent accounts to attempting to thwart viral spread to taking down posts by the president of the United States. It’s hard to say how successful these changes were, or even how to define success. But the fact that they took the steps at all marks a dramatic shift.

“I think we’ll look back on 2020 as the year when they finally accepted that they have some responsibility for the content on their platforms,” said Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet and Society. “They could have gone farther, there’s a lot more that they could do, but we should celebrate that they’re at least in the ballgame now.”

Social media was never a total free-for-all; platforms have long policed the illegal and obscene. What emerged this year was a new willingness to take action against certain types of content simply because it is false—expanding the categories of prohibited material and more aggressively enforcing the policies already on the books. The proximate cause was the coronavirus pandemic, which layered an information crisis atop a public health emergency. Social media executives quickly perceived their platforms’ potential to be used as vectors of lies about the coronavirus that, if believed, could be deadly. They vowed early on to both try to keep dangerously false claims off their platforms and direct users to accurate information.

One wonders whether these companies foresaw the extent to which the pandemic would become political, and Donald Trump the leading purveyor of dangerous nonsense—forcing a confrontation between the letter of their policies and their reluctance to enforce the rules against powerful public officials. By August, even Facebook would have the temerity to take down a Trump post in which the president suggested that children were “virtually immune” to the coronavirus.

“Taking things down for being false was the line that they previously wouldn’t cross,” said Douek. “Before that, they said, ‘falsity alone is not enough.’ That changed in the pandemic, and we started to see them being more willing to actually take down things, purely because they were false.”

Nowhere did public health and politics interact more combustibly than in the debate over mail-in voting, which arose as a safer alternative to in-person polling places—and was immediately demonized by Trump as a Democratic scheme to steal the election. The platforms, perhaps eager to wash away the bad taste of 2016, tried to get ahead of the vote-by-mail propaganda onslaught. It was mail-in voting that led Twitter to break the seal on applying a fact-checking label to a tweet by Trump, in May, that made false claims about California’s mail-in voting procedure.

This trend reached its apotheosis in the run-up to the November election, as Trump broadcast his intention to challenge the validity of any votes that went against him. In response, Facebook and Twitter announced elaborate plans to counter that push, including adding disclaimers to premature claims of victory and specifying which credible organizations they would rely on to validate the election results. (YouTube, notably, did much less to prepare.) Other moves included restricting political ad-buying on Facebook, increasing the use of human moderation, inserting trustworthy information into users’ feeds, and even manually intervening to block the spread of potentially misleading viral disinformation. As the New York Times writer Kevin Roose observed, these steps “involved slowing down, shutting off or otherwise hampering core parts of their products — in effect, defending democracy by making their apps worse.”

The moves drew measured approval from academics and researchers who had for years been calling for stronger policies to combat misinformation. “My basic take on how the platforms performed during this election was that they started to take some of the advice of the scientific community and implemented some of the recommendations,” said Sinan Aral, the co-leader of the Initiative on the Digital Economy at MIT.

Most promising were efforts to slow down the spread of information, such as Facebook Messenger sharply limiting the ability to forward posts (a tactic borrowed from WhatsApp in India), or Twitter nudging users to read an article before sharing it. “Research shows that falsity spreads faster than truth, and if you slow down information, maybe you give the debunking a chance to catch up,” Aral said. (The most high-profile example of this technique was, unfortunately, Twitter’s botched approach to the infamous Hunter Biden laptop story; the company admitted it went too far when it banned users from linking to the story entirely.)

Aral also praised platforms for demonetizing certain types of false content, so that anti-vaxx videos, for example, can’t earn ad revenue—thus disincentivizing their creation. And, like other researchers, he welcomed the increased use of fact-checking labels, but pointed out that the technique still needs refinement. Generic labels like “This claim about the election is disputed” don’t necessarily affect belief or virality, and if platforms apply fact-checking labels inconsistently, users may assume that anything without a label must be true.

At a broader level, experts welcomed the fact that platforms have, if haltingly, embraced a wider set of possible policies than simply take-it-down vs. leave-it-up. “Part of what’s happened isn’t just how much action they’re taking, but trying to expand the repertoire of actions,” said Dean Eckles, a colleague of Aral’s at MIT.

OK, so … did any of this stuff work?

The unfortunate fact is that we don’t know. That’s mostly because the platforms, by and large, don’t share any of the relevant data. When Facebook boasts about improvements in the ability of its AI to filter hate speech, for example, it doesn’t provide receipts. Promisingly, Facebook has said that it is working with outside researchers on experiments to test its influence on the 2020 election. The results are set to be released next summer, and the company says it will have no veto power. The question is whether this effort will fare better than previous failed attempts to team up with academic researchers.

“We have all these [fact-checking] labels, and we just have no idea if they did anything, or if they just made us feel a little better,” said Douek. Twitter has published limited findings about its election-related efforts to police disinformation, but hasn’t shared the underlying data with outside researchers. Other platforms remain even more of a black box. “YouTube stuck these labels on everything and wants praise, and we have no idea how many people clicked on those labels,” said Douek. “And did people actually read what they clicked on?”

“Transparency” can be a vague and disappointing term, something organizations promise when they want to appear serious while avoiding any concrete commitments. But when it comes to the question of how well various online interventions work at suppressing disinformation, it’s the place where every conversation dead-ends. The overwhelming majority of decisions about what piece of content to show whom are made invisibly, by proprietary algorithms—and, indirectly, by the human beings who design them. Researchers have done herculean work to show that algorithms optimized for maximum engagement have problems: creating filter bubbles, reinforcing bias, amplifying the reach of false or harmful material. But to regulate any of this, people outside the companies need a direct view into how input X turns into output Y.

“Data, data, data: Do we really know what happened without the data on what occurred?” said Sam Gregory, program director at Witness, a tech-focused human rights organization.

The fight for transparency could become a major component of debates in Washington over how to regulate social media companies in the new Congress. In a recent essay for Slate, Susan Ness, a former commissioner of the Federal Communications Commission, argued that requiring transparency, including public disclosures of the impact of algorithms, could effectively “tackle hate speech and disinformation without trampling on free expression.” In his recent Senate appearances, Twitter CEO Jack Dorsey likewise suggested that Congress enact legislation that would “require the publication of moderation processes and practices, a straightforward process to appeal decisions, and best efforts around algorithmic choice.” No one paid his ideas much attention at the time—the senators were mostly too busy yelling about Section 230 and the election—but they jibe with much of what the research community has been advocating.

This past year saw social media companies take huge steps toward assuming greater accountability for public discourse and the spread of disinformation. But before the rest of us can figure out whether those steps are having the intended impact, there’s going to need to be some accountability for the accountability.
