
On Social Media, American-Style Free Speech Is Dead

American social media platforms have long sought to present themselves as venues for unfettered free expression. A decade ago, Twitter employees used to brand the startup as “the free speech wing of the free speech party.” In late 2019, Mark Zuckerberg gave an address defending Facebook’s allegiance to First Amendment principles—including his oft-stated belief that platforms should not be “the arbiters of truth.”

The pandemic made a mockery of that idea. In the context of a global public health emergency, companies like Facebook, Twitter, and Google began removing posts containing misleading information about the coronavirus, which required them to make judgments about truth and falsity. The presidential election pulled them even deeper into the job of fact-checking. By spring 2020, Twitter was applying warning labels to Donald Trump’s account, and by the summer, all the platforms were touting their proactive efforts against election misinformation.

Evelyn Douek is a doctoral student at Harvard Law School and an affiliate at the Berkman Klein Center for Internet & Society. As an Australian scholar, she brings an international perspective to free speech questions. In a new article published by the Columbia Law Review, she argues that the pandemic exposed the hollowness of social media platforms’ claims to American-style free speech absolutism. It’s time, she writes, to recognize that “the First Amendment–inflected approach to online speech governance that dominated the early internet no longer holds. Instead, platforms are now firmly in the business of balancing societal interests.”

In a conversation last week, Douek explained why the shift to a more “proportional” approach to online speech is a good thing, why platforms must provide more transparency into their moderation systems, and the perils of confusing onions for boobs. The interview has been condensed and edited for clarity.

WIRED: You set up a distinction in this article between two ways of thinking about content moderation. The first is more of an American, First Amendment–inspired approach, so let’s just start there. What are the key features of that?

Evelyn Douek: It’s part of the lore of content moderation that it started with these American lawyers in Silicon Valley inspired by First Amendment exceptionalism. These are the ideas that they shouldn’t be the arbiters of truth about content, that the best remedy for speech is more speech, and that their platforms should be like the quintessential marketplace of ideas, where users would come to truth through their wonderful public discourse.

So those are the ideas that they invoked. And this idea that they shouldn’t be in the business of looking at particular speech and balancing its social cost, or the truth of it. Now, that was always a bit of a myth and a bit of theater, because of course content moderation has been there from the start. They have always removed certain categories of speech, like pornography, spam, and things like that, that would have made their sites wastelands.

You just used a word that’s really important in your article, which is categories. One point you make is that, in this First Amendment–style approach, the distinctions are more categorical. Can you give an example of a kind of content moderation decision that embodies that way of looking at it?

Yeah, so this is really important, because it’s one of the defining features of American free speech jurisprudence that the rest of the world has looked at and gone, Mmm, not so much. It is a bit technical, so to break it down, let’s take, for example, adult nudity.

The way that it started [for social media] was fairly unsophisticated. It was like, “If there’s boobs, take it down.” That was the dominant approach for a while. And then people would go, Hold on, you’re removing a whole bunch of things that have societal value. Like people raising awareness about breast cancer, or people breastfeeding, which shouldn’t be stigmatized. Or, in a bunch of cultures, adult nudity and breasts are a perfectly normal, accepted, celebrated kind of expression. So gradually this category of adult nudity became untenable as an absolute category. It got broken down into finer and finer distinctions, until it no longer really looked like a solid category at all but much more like, OK, we’ll take these individual instances and balance them: what’s the social cost of this, what’s the benefit, and how should we reconcile that and approach it in a more proportionate manner?

So it sounds like what you’re saying is a categorical approach to content moderation attempts to draw pretty bright lines around certain categories of posts that are not allowed, versus everything else that is allowed. But over time, you have to keep slicing these categories more and more thinly. And at a certain point, maybe it’s hard to say exactly where, it stops looking like a true category at all, because the judgments have to get so nuanced.

Yeah, that’s basically it. Let’s take another category: false speech. That was a category that was, like, absolutely protected by platforms pretty much universally for the history of content moderation. It was like, “The fact that it’s false is not enough for us to get involved and we’re not going to look at that any further. ‘It’s just false’ is not a reason that we will inquire into the value of that speech.”

The pandemic really changed that. For the first time, platforms came out and said, The cost of this false speech around Covid misinformation is too high in the context of a public health emergency, and so we will take it down for this small category. And they got rewarded for that quite a bit. Your colleague at WIRED ran a piece that asked, “Has the coronavirus killed the techlash?” There were a couple of weeks in March where the platforms were “good” again, because they were taking all of this action, and they sort of learned this lesson. Suddenly, in late 2020, Facebook is doing things like banning Holocaust denial, which in the early days of the First Amendment paradigm, of protecting speech and not looking into false speech, would have been unthinkable. That was the most symbolic attachment to that American-style approach. And now they’re like, No, the costs of that particular kind of false speech are too high, they aren’t outweighed by the benefits, and so, on balancing those costs and benefits, we’re going to start removing it.

The Holocaust denial example is really interesting for a few reasons. As these companies have gotten more global, they’ve absorbed more international ideas about how to treat this because the American tradition is this world-historical anomaly. Nobody else really does it the way we do.

Which gets us to the second approach to content moderation that you write about. What is the more international way of thinking about this that we’re seeing more and more of in how platforms treat user speech?

The First Amendment tradition is not just substantively exceptional—it’s not just that it’s more protective than other jurisdictions. It’s also methodologically exceptional in that categorical approach.

In pretty much every other jurisdiction, there’s this proportionality approach, which goes, Yes, OK, we start with the proposition that speech should be protected; however, any mature system of free expression has to acknowledge that there are certain circumstances in which speech needs to be restricted. How do we go about thinking about that? Well, first we go, What’s the reason that you’re restricting that speech? It can’t just be “I don’t like it,” it has to be some sort of compelling interest. And then the question has to be, Are you restricting it as little as possible? For example, it might not be that you need to ban it entirely. It could be that you can restrict it in other ways that are more proportionate. And, furthermore, the way in which you restrict it has to be actually effective at achieving the aim that you’re saying you restricted it for.


In late 2019, Mark Zuckerberg gave a speech at Georgetown in which he really tied Facebook to the principles of the First Amendment. Of course, Facebook’s a private company; they’re not literally governed by the First Amendment. But Zuckerberg went out and said they choose to be guided by First Amendment values anyway.

Holocaust denial is a really good example of how that isn’t the case. In the United States, you cannot get punished by the government for saying the Holocaust didn’t happen. In other places, notably Europe, you can. And so Facebook changing its policy on that does look like this big break from its previously stated commitments to the First Amendment. An interesting question it raises is, is this an example of Facebook doing a balancing of benefits versus harms and coming to a different conclusion, or is it just an example of the company bowing to public pressure and criticism and getting tired of having to defend this thing that’s unpopular anyway?

That’s the thing about the Holocaust denial decision: it’s just such a symbolic decision, because in some ways they had already been in the business now of removing false speech for other reasons, like the pandemic. But they really held on to this Holocaust denial one because it’s the most famous symbol of the American free speech tradition. One of the proudest moments of American free speech history is that the Nazis were allowed to march in Skokie. It’s like one of the first things you learn about American free speech. So it was a really symbolic decision for Facebook to hold on to that so steadfastly even as the tide turned against it publicly.

And then the reversal of the decision did show a break with that tradition. The question of, is it somehow a principled decision—do I think that the lawyers within Facebook are, like, reading a bunch of theory about methods of adjudication and looking at the categorical approach and the proportionality approach and coming to some sort of deep jurisprudential realization about the benefits of one approach over the other? No, I don’t. I don’t think that that’s what’s happening here. I definitely think public pressure, of course, is the primary driver of this.

But, to some extent … that’s OK! Like, that’s how social change happens. Courts are also responsive to public pressure in different ways. The reason why I want to call out the fact that now they’re in the business of balancing, whether it’s on the basis of public opinion or not, is because, once we start to accept that and look at it more closely, we can say, Are they actually balancing our interests, or are they balancing their own interests? And they need to make those arguments more compelling.

You argue in the paper that the shift to proportionality is good. How is the proportional approach better?

One of the things that the proportionality approach is really strong on is, you still need to restrict it in the least restrictive way. And one of the flaws of the categorical approach is that it makes half-measures look like just wimping out. It’s like, if you say that this is no longer a category of protected speech, why don’t you just get rid of it?

If you’re pretending not to look at the social costs in that fine-grained way, it becomes very difficult to justify half-measures or different kinds of responses to different kinds of speech. But most of the content moderation that’s occurring now, we’re moving away from this blunt-force, take down/leave up false binary. We’re seeing all these sort of intermediate measures that I personally find much more exciting. Things like labels, introducing friction, fact checking; things like warning screens, trying to promote counter speech. I think that that’s a more promising way to go, and so, in a sense, the proportionality approach might be more speech protective because it will require platforms to look to those less restrictive, more proportionate responses to the harms.

This should be one of the most exciting times for free speech jurisprudence and rules. Free speech law has always been based on, basically, empirical hypotheses that we’ve never been able to test. Like, what is the “chilling effect” of this rule? We’re going to assume that if you ban this kind of speech it’s going to chill all these other kinds of speech and that’s too much of a cost, we can’t do that. But no one’s ever really known that; that’s just some judges going, like, “You know what? I think that’s probably what will happen.” And then we have all this “chilling effects” doctrine.

For the first time in human history, we can measure a lot of this stuff with an exciting amount of precision. All of this data exists, and companies are constantly evaluating, What are the effects of our rules? Every time they make a rule they test its enforcement effects and possibilities. The problem is, of course, it’s all locked up. Nobody has any access to it, except for the people in Silicon Valley. So it’s super exciting but also super frustrating.

This ties into maybe the most interesting thing for me in your paper, which is the concept of probabilistic thinking. A lot of coverage and discussion about content moderation focuses on anecdotes, as humans are wont to do. Like, “This piece of content, Facebook said it wasn’t allowed, but it was viewed 20,000 times.” A point that you make in the paper is, perfect content moderation is impossible at scale unless you just ban everything, which nobody wants. You have to accept that there will be an error rate. And every choice is about which direction you want the error rate to go: Do you want more false positives or more false negatives?

The problem is that if Facebook comes out and says, “Oh, I know that that looks bad, but actually, we got rid of 90 percent of the bad stuff,” that doesn’t really satisfy anyone, and I think one reason is that we are just stuck taking these companies’ word for it.

Totally. We have no idea at all. We’re left at the mercy of that sort of statement in a blog post.

But there’s a grain of truth. Like, Mark Zuckerberg has this line that he’s rolling out all the time now in every Congressional testimony and interview. It’s like, the police don’t solve all crime, you can’t have a city with no crime, you can’t expect a perfect sort of enforcement. And there is a grain of truth in that. The idea that content moderation will be able to impose order on the entire messiness of human expression is a pipe dream, and there is something quite frustrating, unrealistic, and unproductive about the constant stories that we read in the press about, Here’s an example of one error, or a bucket of errors, of this rule not being perfectly enforced.

Because the only way that we would get perfect enforcement of rules would be to just ban anything that looks remotely like something like that. And then we would have onions getting taken down because they look like boobs, or whatever it is. Maybe some people aren’t so worried about free speech for onions, but there are other worse examples.

No, as someone who watches a lot of cooking videos—

That would be a high cost to pay, right?

I look at far more images of onions than breasts online, so that would really hit me hard.

Yeah, exactly, so the free-speech-for-onions caucus is strong.

I’m in it.

We have to accept errors in one way or the other. So the example that I use in my paper is in the context of the pandemic. I think this is a super useful one, because it makes it really clear. At the start of the pandemic, the platforms had to send their workers home like everyone else, and this means they had to ramp up their reliance on the machines. They didn’t have as many humans doing checking. And for the first time, they were really candid about the effects of that, which is, “Hey, we’re going to make more mistakes.” Normally, they come out and they say, “Our machines, they’re so great, they’re magical, they’re going to clean all this stuff up.” And then for the first time they were like, “By the way, we’re going to make more mistakes in the context of the pandemic.” But the pandemic made the space for them to say that, because everyone was like, “Fine, make mistakes! We need to get rid of this stuff.” And so they erred on the side of more false positives in taking down misinformation, because the social cost of not using the machines at all was far too high and they couldn’t rely on humans.


In that context, we accepted the error rate. We read stories in the press about how, back in the time when masks were bad and they were banning mask ads, their machines accidentally over-enforced this and also took down a bunch of volunteer mask makers, because the machines were like, “Masks bad; take them down.” And it’s like, OK, it’s not ideal, but at the same time, what choice do you want them to make there? At scale, where there are literally billions of decisions all the time, there are some costs, and we were freaking out about the mask ads, and so I think that that’s a more reasonable trade-off to make.

But that’s not the kind of conversation that we have. We have individual errors being held up as problems. And it may be that the error rates are too high. I’m not trying to say that realizing we have to accept errors means we have to accept all errors, and there is plenty of evidence that the error rates are not good enough across many categories. But that’s the conversation that we need to have—that’s the uncomfortable territory that we need to live in.
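To make that trade-off concrete, here is a minimal sketch in Python, using entirely invented scores, labels, and thresholds rather than any platform’s actual system: it only illustrates how moving an automated removal threshold shifts mistakes between benign posts taken down and violating posts left up.

```python
# Hypothetical illustration of the trade-off described above: one automated
# classifier, two possible removal thresholds. All scores and labels are
# invented; scores closer to 1.0 mean "more likely to violate the policy".
posts = [
    # (classifier_score, actually_violates)
    (0.95, True), (0.90, True), (0.80, False),  # e.g. an onion photo flagged as nudity
    (0.70, True), (0.60, False), (0.40, True),
    (0.30, False), (0.10, False),
]

def error_counts(threshold):
    """Count both kinds of mistakes at a given removal threshold."""
    false_positives = sum(1 for score, bad in posts if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in posts if score < threshold and bad)
    return false_positives, false_negatives

for threshold in (0.5, 0.75):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold}: {fp} benign posts removed, {fn} violating posts missed")

# Lowering the threshold removes more violating posts but also more benign
# ones (the "err on the side of false positives" choice platforms made during
# the pandemic); raising it leaves more violating posts up. Neither setting
# gets the error count to zero; the choice is which way the errors fall.
```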

Right, and this is why it comes back to transparency. Companies don’t share this information in any useful, comprehensive way and so people outside the companies are kind of endlessly in this position of having to dig it up themselves, and relying on individual anecdotes.

Platforms need to open up and be more candid about this. And I will say we are moving in that direction.

So Facebook, last year, for the first time, started reporting prevalence of hate speech, which is, “How much hate speech did we miss? The average user, how much hate speech did they see on Facebook, even after we’ve done our content moderation?” And that’s basically an error rate. And YouTube, just a couple of weeks ago, for the very first time, started reporting that too. That is a much more meaningful transparency measure than, “We took down so much hate speech.” Like, Cool, good work, guys—is that 2 percent of hate speech or is that 98 percent of hate speech? Who knows.

So the prevalence metric is useful, and it therefore also incentivizes the right responses. One of the really interesting things is, Facebook’s transparency report earlier this year said, “We reduced the prevalence of hate speech,” and how did they do that? They did that not by over-enforcing or taking down much more, but by tweaking the ranking in their News Feed, which is a more proportionate and better response to that kind of problem. So we’re moving in the right direction, but we need way more access to data.
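As a rough illustration of why a prevalence-style metric reads differently from a takedown count, here is a minimal sketch with invented view counts and labels (none of these numbers, and nothing about the per-10,000-views framing, is drawn from any platform’s actual reporting): a handful of takedowns can look like progress while a single missed viral post still dominates what users actually saw.

```python
# Hypothetical illustration: a takedown count versus a prevalence-style
# metric over the same set of posts. All numbers are invented.
posts = [
    # (views, violates_policy, was_removed)
    (1_000_000, True,  False),  # a viral violating post the system missed
    (500,       True,  True),
    (800,       True,  True),
    (2_000_000, False, False),
    (3_000_000, False, False),
]

takedowns = sum(1 for _, bad, removed in posts if bad and removed)

# Prevalence here: views of violating posts as a share of all views,
# expressed per 10,000 views (a common way such rates are framed).
violating_views = sum(views for views, bad, _ in posts if bad)
total_views = sum(views for views, _, _ in posts)
prevalence_per_10k = 10_000 * violating_views / total_views

print(f"Violating posts taken down: {takedowns}")
print(f"Violating views per 10,000 views: {prevalence_per_10k:.1f}")

# The takedown count (2) says nothing about the viral post that was missed;
# the prevalence figure is driven almost entirely by it, which is why it is
# the more meaningful measure of what moderation actually left in front of users.
```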

Why talk about content moderation at all? What can more enlightened approaches to content moderation accomplish, such that we should care?

So, I think we focus on content moderation too much, or we put too much pressure on it to fix things that are much bigger than content moderation. And that’s in part because it’s the most obvious lever to pull. Stuff that’s on platforms is the most visible expression of other societal problems, and then content moderation is right there and could get rid of that visible expression. This goes back to my meta-argument, that I don’t think content moderation can solve societal problems, and I think it’s a limited lever to pull.

But I still have a reason to get up and do the work that I do every day and write these long articles, because even though it’s a much smaller part of the problem than we think, I do think it is really important. To say that content moderation can’t fix the breakdown of the Republican Party, or the pandemic, or social isolation, is not to say that it can’t have real, meaningful, and important impacts. There are certain instances where the effects of content moderation are really visible. Like, we can talk about inadequate moderation in conflict societies, or the removal of evidence of war crimes. This is very important stuff. It’s also an expression of societal values, and it’s a place where we argue about what we think as a society about norms, and what is acceptable. And these are important debates about what the shape of our public sphere should be, and I think that’s really exciting. I like to be in the space because it’s a really exciting place to be, and we can nudge it towards a better world.

I’m very hopeful that we’re not going to be at the whims of Mark Zuckerberg and Jack Dorsey for the rest of their natural lives. I think there’s going to be more disruption and changes in this space than we can possibly imagine. These companies are still very young, this whole idea is still very young.

But I don’t think that fundamental issue of “We need a way to think about content on the internet and the rules that govern it”—I don’t think that’s ever going away, and so we need to start developing the norms around that. We need to acknowledge that we are balancing societal interests and that we are going to have error rates. Once we can even be on the same page about that, that’s when we can start having a proper conversation about what we’re doing.
