The Parler Bans Open a New Front in the 'Free Speech' Wars

This was supposed to be Parler’s time to shine. 

Launched in 2018, the social media platform billed itself as a free-speech paradise, a haven for conservative users who believe Big Tech is out to silence them. Given that posture, Trump getting kicked off Facebook and Twitter last week after inciting a deadly riot at the US Capitol was incredible free publicity. What better proof that the fix was in against conservatives than shutting down the president of the United States of America? (Set aside the fact that Trump would likely have been banned long ago if he weren’t the president.) Conservative influencers like Dan Bongino encouraged their fans to ditch Twitter and follow them to Parler. (Bongino has an ownership stake in the company.) Even as reports swirled that the platform had played a role in fomenting last week’s assault on the Capitol, Parler surged to become the top free app in Apple’s App Store. One had to wonder: Could it turn into a durable conservative alternative to the dominant platforms? Only time would tell.

Wait, scratch that—Big Tech told, and the answer was no. Over the weekend, Apple and Google told Parler that they were banning its mobile app from their app stores, and Amazon Web Services said it would stop hosting Parler’s website. The companies pointed to the continued presence of user posts encouraging or inciting violence. As a result, Parler has, for the moment, ceased to exist. Even if it migrates to a new host, it won’t be able to return to the App Store or Google Play unless it abandons its identity as a platform whose content policies are as permissive as the First Amendment. 

This is not the first time providers of internet infrastructure have pulled their services from social network scofflaws. Several companies, including PayPal and GoDaddy, abandoned Gab after it emerged that the perpetrator of the 2018 mass shooting at the Tree of Life synagogue in Pittsburgh had used the network to broadcast his intentions. Apple pulled Tumblr from the App Store in 2018 because it was failing to screen out child sex abuse material. (The app was restored a few weeks later, after Tumblr announced it was banning all adult content too.) Perhaps most notably, the website security company Cloudflare, following a great deal of soul-searching by its free-speech libertarian CEO, Matthew Prince, pulled its services from the white supremacist Daily Stormer website after the Charlottesville rally, and from the shooter-manifesto-hosting message board 8chan after the El Paso shooting. 

The Parler situation, however, opens a new front in the online speech wars, as the debate over moderation migrates from an oligopoly of social media platforms to the oligopoly of companies that make those platforms available to the public. (In the case of Google, those oligopolies overlap.) Never before have three of the most dominant Silicon Valley corporations—all of them subjects of Congress’s massive antitrust investigation—simultaneously banned a social media platform because they don’t approve of its policies around user speech. They have, in effect, decided that they get to moderate the moderators. And that raises a number of difficult questions. 

“It’s a very unusual step for those companies to say, ‘Because we are the gatekeepers of the store, we are now going to look at everything that’s sold in our store and check to see if they are good citizens’” regarding user posts, said Alex Alben, a lecturer in internet law at UCLA and the former chief privacy officer of Washington state. “That’s a pretty big jump.” 

In letters sent to Parler about their decisions, Amazon, Apple, and Google all cited the social media company’s lack of a workable system to keep violent content off its platform. “The processes Parler has put in place to moderate or prevent the spread of dangerous and illegal content have proved insufficient,” wrote Apple. “Specifically, we have continued to find direct threats of violence and calls to incite lawless action.” 

You can see why these companies wouldn’t want to expose app store customers to a social media platform whose moderation system has failed to prevent the spread of harmful material. But then you’ve got to wonder what’s keeping them from banning the likes of Facebook, Twitter, and YouTube. The past few years of social media history have been nothing if not a relentless cycle of platforms failing to live up to their claims about how well they police themselves. Facebook was used to facilitate ethnic cleansing in Myanmar, and with its vastly larger user base was almost certainly a greater vector of “Stop the steal” disinformation than Parler. Journalists and academics have credibly accused YouTube of driving right-wing radicalization. Twitter was long notorious for permitting heaps of sexist and racist abuse. 

These three companies have, to varying degrees, imposed stricter policies over the past year due to the coronavirus pandemic and the election. But it remains easy to find content that seems to violate the letter of the rules. Even days before the attack on the Capitol, journalists found groups on Facebook and Twitter calling for revolution. Amazon’s letter to Parler notes that the company flagged 98 examples “of posts that clearly encourage and incite violence.” It’s hard to imagine that Facebook, with its bigger user base, doesn’t eclipse that number. 

All of which makes the decision to ban Parler seem somewhat capricious. 

“I think the public perception is that all those scary people who gathered on Capitol Hill, they met up and continue to meet up on Parler, whereas Facebook and Twitter are doing something about it,” said Danielle Citron, a law professor at the University of Virginia and an expert on online harms. “And so Parler is the lowest hanging fruit.”  

To be clear, there are big differences between Parler, whose entire raison d’être is to provide a space of almost completely uninhibited expression, and the mainstream platforms, which now boast of their efforts to combat certain types of misinformation and their sophisticated AI moderation tools. Parler did have a few minimal rules, including against fraud, doxing, and threats of violence. But the company’s stated mission was to create an online platform where content is governed by the principles of the First Amendment. “Parler doesn’t have a hate speech policy,” Jeffrey Wernick, Parler’s COO, told me last week, before the Capitol riot. “Hate speech has no definition, OK? It’s not a legal term.” 

Wernick is right. The First Amendment—which, I feel compelled to remind you, applies to the government, not private companies—protects a lot of material that most people don’t want to see on social media. It allows pornography. It allows glorification of violence. It allows explicit racism. And so, therefore, does Parler. 

By tracking the First Amendment, however, Parler’s policies were incompatible on their face with those of Apple, Google, and Amazon, even aside from the matter of enforcement. Google and Apple, for example, both explicitly prohibit apps in their stores from allowing hate speech. 

Perhaps Parler’s biggest problem was that it provided much more latitude for the type of material that the big platforms define as threatening violence. That’s because, under First Amendment doctrine, the government can only criminalize very narrow categories of speech, such as so-called “true threats”—roughly, language explicitly intended to make an individual or group fear for their life or safety. Arguing that people should rise up in arms, or that a politician or celebrity should be shot, wouldn’t meet the criteria for incitement or true threat. Believe it or not, that sort of speech is legally protected. (It can still earn you an inquisitive knock on the door from the Secret Service. I don’t recommend it.) Parler’s community guidelines mirrored that standard. 

It seems obvious that social media platforms should have the right to police content more tightly than the government does. Users don’t want to be bombarded by racist or sexist comments, and most advertisers don’t want their brand showing up next to neo-Nazi videos. But it also seems at least reasonable that a platform should be able to do what Parler did—that is, to incorporate the First Amendment as its content policy. The idea that private companies should be free to set their own rules rests on another premise: that consumers who don’t like those rules can take their business elsewhere. For users to be able to take their business to Parler, Parler must be able to take its business somewhere other than Apple and Google. That option is now foreclosed.

Not that Parler has been wholly banished from existence. Once it finds a new host, users will be able to access the desktop and mobile sites. Without a mobile app, however, a social network is not going to thrive in 2021. (Technically, there are ways to download apps that are banned from the app stores, but it’s something few people go to the trouble of doing.) I’m not saying Parler was poised to take the world by storm. It had a clunky user experience and a self-limiting sales pitch. But Amazon, Apple, and Google pulled the plug on the experiment before it even had a chance to fail. 

This all points to a question best answered by Congress and regulators: At what point down the “stack”—the chain of hardware and software between technology providers at the bottom and end users at the top—does a service become a utility, for which government must set some rules of access? 

Very few people, for example, would be comfortable letting cell phone carriers prohibit offensive content in private phone calls or text messages. An app store is higher in the stack than a carrier or ISP, but lower than a Facebook or YouTube. It’s a position whose power to set the bounds of discourse has been overshadowed by the power wielded by the social media platforms themselves. Now that the companies have anointed themselves as meta-moderators, however, they have invited a new wave of scrutiny. What exactly is the standard for “adequate” content moderation systems, as Apple put it in its letter to Parler? Is it just whatever Facebook and Twitter have decided to implement? And why should Google, which owns YouTube, or Amazon, which competes against social media platforms for advertising dollars, get to decide whose moderation is up to snuff? 

Citron cautioned that deplatforming Parler could even backfire by pushing dangerous conversations into places where they’re harder to monitor. “It’s worse to lose insight into these various plots that are happening right now,” she said. “In the next 10 days we have to prepare ourselves for serious physical violence.” 

Of course, Amazon, Apple, and Google—and Facebook and Twitter before them—insist that the actions they’ve taken over the past week were necessary precisely to head off further Trump-inspired violence. By confronting Trump and his most deluded followers, Big Tech may have curried some much-needed favor from a Democratic Party that has been calling to rein in its power. At the same time, in doing so, it has put that power on display like never before.
