More Content Moderation Is Not Always Better

Content moderation is eating the world. Platforms’ rule-sets are exploding, their services are peppered with labels, and tens of thousands of users are given the boot in regular fell swoops. No platform is immune from demands that it step in and impose guardrails on user-generated content. This trend is not new, but the unique circumstances of a global public health emergency and the pressure around the US 2020 election put it into overdrive. Now, as parts of the world start to emerge from the pandemic, and the internet’s troll in chief is relegated to a little-visited blog, the question is whether the past year has been the start of the tumble down the dreaded slippery content moderation slope or a state of exception that will come to an end.

There will surely never be a return to the old days, when platforms such as Facebook and Twitter tried to wash their hands of the bulk of what happened on their sites with faith that internet users, as a global community, would magically govern themselves. But a slow and steady march toward a future where ever more problems are addressed by trying to erase content from the face of the internet is also a simplistic and ineffective approach to complicated issues. The internet is sitting at a crossroads, and it’s worth being thoughtful about the path we choose for it. More content moderation isn’t always better moderation, and there are trade-offs at every step. Maybe those trade-offs are worth it, but ignoring them doesn’t mean they don’t exist.

A look at how we got here shows why the solutions to social media’s problems aren’t as obvious as they might seem. Misinformation about the pandemic was supposed to be the easy case. In response to the global emergency, the platforms were finally moving fast and cracking down on Covid-19 misinformation in a way that they never had before. As a result, there was about a week in March 2020 when social media platforms, battered by almost unrelenting criticism for the last four years, were good again. “Who knew the techlash was susceptible to a virus?” Steven Levy asked.

Such was the enthusiasm for these actions that there were immediately calls for platforms to do the same thing all the time for all misinformation—not just medical. Initially, platforms insisted that Covid misinformation was different. The likelihood of harm arising from it was higher, they argued. Plus, there were clear authorities they could point to, like the World Health Organization, that could tell them what was right and wrong.

But the line did not hold for long. Platforms have only continued to impose more and more guardrails on what people can say on their services. They stuck labels all over the place during the US 2020 election. They stepped in with unusual swiftness to downrank or block a story from a major media outlet, the New York Post, about Hunter Biden. They deplatformed Holocaust deniers, QAnon believers, and, eventually, the sitting President of the United States himself.


For many, all of this still has not been enough. Calls for platforms to do better and take more content down remain strong and steady. Lawmakers around the world certainly have not decreased their pressure campaigns. There’s hardly a country in the world right now that’s not making moves to regulate social media in one form or another. Just last week, the European Union beefed up its Code of Practice on Disinformation, saying that a stronger code is necessary because “threats posed by disinformation online are fast evolving” and that the ongoing infodemic is “putting people’s life in danger.” US Senators still write to platforms asking them to take down specific profiles. Platforms continue to roll out new rules aimed at limiting the spread of misinformation.

As companies develop ever more types of technology to find and remove content in different ways, the expectation grows that they should use it. Can moderate implies ought to moderate. After all, once a tool has been put into use, it’s hard to put it back in the box. But content moderation is now snowballing, and the collateral damage in its path is too often ignored.

There’s an opportunity now for some careful consideration about the path forward. Trump’s social media accounts and the election are in the rearview mirror, which means content moderation is no longer the constant A1 story. Perhaps that proves the actual source of much of the angst was politics, not platforms. But there is—or should be—some lingering unease at the awesome display of power that a handful of company executives showed in flipping the off-switch on the accounts of the leader of the free world.

The chaos of 2020 shattered any notion that there’s a clear category of harmful “misinformation” that a few powerful people in Silicon Valley must take down, or even that there’s a way to distinguish health from politics. Last week, for instance, Facebook reversed its policy and said it will no longer take down posts claiming Covid-19 is human-made or manufactured. Only a few months ago The New York Times had cited belief in this “baseless” theory as evidence that social media had contributed to an ongoing “reality crisis.” There was a similar back-and-forth with masks. Early in the pandemic, Facebook banned ads for them on the site. The ban lasted until June, when the WHO finally changed its guidance to recommend wearing masks, despite many experts advising it much earlier. The good news, I guess, is that Facebook wasn’t very effective at enforcing the ban in the first place. (At the time, however, this was not seen as good news.)


As more comes out about what authorities got wrong during the pandemic or instances where politics, not expertise, determined narratives, there will naturally be more skepticism about trusting them or private platforms to decide when to shut down conversation. Issuing public health guidance for a particular moment is not the same as declaring the reasonable boundaries of debate.

The calls for further crackdowns have geopolitical costs, too. Authoritarian and repressive governments around the world have pointed to the rhetoric of liberal democracies in justifying their own censorship. This is obviously a specious comparison. Shutting down criticism of the government’s handling of a public health emergency, as the Indian government is doing, is as clear an affront to free speech as it gets. But there is some tension in yelling at platforms to take more down here but stop taking so much down over there. So far, Western governments have refused to confront this tension. They have largely left platforms to fend for themselves in the global rise of digital authoritarianism. And the platforms are losing. Governments need to be able to walk and chew gum at the same time when they talk about platform regulation and free speech if they want to stand up for the rights of the many users outside their borders.

There are other trade-offs. Because content moderation at scale will never be perfect, the question is always which side of the line to err on when enforcing rules. Stricter rules and more heavy-handed enforcement necessarily mean more false positives: That is, more valuable speech will be taken down. This problem is exacerbated by the increased reliance on automated moderation to take down content at scale: These tools are blunt and stupid. If told to take down more content, algorithms won’t think twice about it. They can’t evaluate context or tell the difference between content glorifying violence and content recording evidence of human rights abuses, for example. The toll of this kind of approach has been clear during the Palestinian–Israeli conflict of the past few weeks, as Facebook has repeatedly removed essential content from and about Palestinians. This is not a one-off. Maybe can should not always imply ought—especially as we know that these errors tend to fall disproportionately on already marginalized and vulnerable communities.

Finally, it is becoming clearer every day that removing content alone does not address the underlying social and political issues that led to the creation of the content in the first place. Trump’s hold on the Republican Party did not evaporate along with his Twitter account, even though the online world is talking about him less than it has in five years. QAnon remains astonishingly popular, despite having been largely banished by the major platforms. Meanwhile, mega-money is pouring into creating an alternative “free speech” social media ecosystem. If these investments pay off, and the alt-tech platforms continue to grow, kicking people off so-called mainstream platforms may do even less than it has been doing.


“Just delete things” removes content but not its cause. It’s tempting to think that we can content-moderate society to a happier and healthier information environment or that the worst social disruptions of the past few years could have been prevented if more posts had just been taken down. But fixing social and political problems will be much harder work than tapping out a few lines of better code. Platforms will never be able to fully compensate for other institutional failures.

That’s not to say better code can’t help or that platforms don’t need to keep thinking about how to mitigate the harmful effects of what they’ve built. Experimentation outside the false take-down/leave-up binary of content moderation is promising. Twitter now nudges people to read a story before retweeting it or rethink a reply that might be offensive, and people actually do. Giving users the ability to take charge of their own experience—like being able to bulk-delete harassing comments on their posts—can offer real protection. Putting limits on the number of times people can forward content in one go can dramatically decrease the spread of viral content. Platforms themselves can turn the dial down on engagement with toxic content, although even these adjustments can misfire. Importantly, none of these measures require Silicon Valley or anyone else to determine and dictate to the rest of us whether a post is true.

Conversations about content moderation often feel like complaining that the food is terrible and the portions are too small: It’s bad and there should be more. But there’s only so much you can do with rotten ingredients and a terrible kitchen.

It’s possible, of course, that the content moderation state of emergency has done more good than harm, that the mistakes were worth it. It’s possible that the trend is heading in the right direction and should only keep accelerating. But, like everything that happened during the pandemic, this should not be blindly accepted and deserves collective, thoughtful reflection about the trade-offs involved. At the very least, the last year should prove that solving the problems in our online information ecosystem will require much bigger imaginations than “just delete the bad stuff.” Even if we could agree on what the “bad stuff” is—and we never will—deleting it is putting a band-aid on a deep wound.
