Twitter announced today a sweeping set of changes designed to add “extra friction” to the platform to check the spread of political disinformation. This includes requiring users to add their own comments before retweeting others, labeling premature claims of election victories, removing calls for violence in response to the election, and restricting the reach of tweets containing misinformation from political figures with over 100,000 followers.
We applaud these changes, and believe that if Twitter is serious about its stated goal of “protecting the integrity of the election conversation,” there's another thing the platform should consider: putting a time delay on the tweets of Donald Trump and other political elites.
Each new broadcast technology has had to find its relationship to liveness. In 1952, when the Federal Communications Commission (FCC) prohibited broadcasting live telephone conversations—but allowed broadcasting taped conversations—radio station call-in shows used a short delay to get around the prohibition, recording conversations to tape and then, six or seven seconds later, playing the tape. The (not always perfect) solution also gave broadcasters a measure of control over live situations, letting them bleep or mute profanity and inappropriate content or anything else they wanted to keep from their audiences. The “bleep censor” quickly became an industry standard.
Why not put Donald Trump’s tweets and his Facebook posts, as well as those of other political elites, on a time delay? (An earlier, similar proposal made a smart case for how a delay might strengthen national security.) Twitter and Facebook have extensive and well-documented content rules that prohibit everything from electoral to health disinformation. The platforms have singled out these categories of content in particular because they have a significant likelihood of causing real-world harm, from voter suppression to undermining the Centers for Disease Control and Prevention’s public health guidelines. The FBI found that the plot to kidnap Michigan governor Gretchen Whitmer was, in part, organized in a Facebook group.
To date, the enforcement of these policies has been spotty at best. Twitter has labeled some of the president’s tweets about mail-in ballots as “potentially misleading” to readers. The platform hid a Trump tweet stating “when the looting starts, the shooting starts” for “glorifying violence,” and it recently hid another tweet equating Covid-19 to the flu, claiming that the president was “spreading misleading and potentially harmful information” when he wrote that “we are learning to live with Covid, in most populations far less lethal!!!” Facebook has taken similar actions, providing links to reliable voter and health information and removing posts that it deems violate its policies.
But these actions often take hours to put in place, and in the meantime this content racks up thousands of engagements and shares. In those hours, as recent research from Harvard shows, Trump is a one-man source of disinformation that travels quickly and broadly across Twitter and Facebook. And we know that the mainstream media often picks up on and amplifies Trump’s posts before platforms moderate them. Journalists report on platforms’ treatments of Trump’s tweets, making both the moderation decisions and the posts themselves the story, and giving life to false claims.
What if we never let Trump’s disinformation breathe to begin with, cutting it off from the social media and mainstream journalism oxygen it craves?
We suggest Twitter and Facebook immediately institute the following process for all of Trump’s social media posts, and for those of other political elites: Any time the president taps “Tweet” or “Post,” his content is not displayed immediately but sent to a small 24/7 team of elite content moderators who can evaluate whether the content accords with these platforms’ well-established policies. This is especially important in the context of electoral and health disinformation, which all the major platforms have singled out as being of utmost importance. Within, say, three minutes, such a team would decide whether to (a) let the post through, (b) let the post through with restrictions, (c) place a public notice on Trump’s account saying that the platform is evaluating a post and needs more time, or (d) block the post entirely because it breaks the company’s policies. The platforms would publicly announce that such a system was in place, provide weekly metrics on how many posts the review system had considered and how it categorized them, allow those affected to appeal any decisions, and revisit the system after an experimental period to evaluate its effectiveness.
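The review flow above can be sketched in a few lines of code. This is purely an illustrative model of the proposal, assuming a single moderator verdict and a fixed three-minute deadline; none of the names correspond to any real platform API.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Decision(Enum):
    APPROVE = auto()              # (a) let the post through
    APPROVE_RESTRICTED = auto()   # (b) let it through with restrictions
    NEEDS_MORE_TIME = auto()      # (c) public notice: still under review
    BLOCK = auto()                # (d) remove for policy violation

@dataclass
class ReviewResult:
    decision: Decision
    visible: bool                 # does the post appear publicly?
    notice: Optional[str] = None  # any public notice attached to the account/post

REVIEW_DEADLINE_SECONDS = 180  # the "say, three minutes" window

def review(moderator_verdict: Decision, elapsed_seconds: float) -> ReviewResult:
    """Map a moderator verdict (or a blown deadline) to what the public sees."""
    if elapsed_seconds > REVIEW_DEADLINE_SECONDS:
        # Deadline passed without a verdict: publish the holding notice (c).
        return ReviewResult(Decision.NEEDS_MORE_TIME, visible=False,
                            notice="This post is being evaluated; more time is needed.")
    if moderator_verdict is Decision.APPROVE:
        return ReviewResult(Decision.APPROVE, visible=True)
    if moderator_verdict is Decision.APPROVE_RESTRICTED:
        return ReviewResult(Decision.APPROVE_RESTRICTED, visible=True,
                            notice="Engagement with this post is restricted.")
    return ReviewResult(Decision.BLOCK, visible=False,
                        notice="This post violated platform policies.")
```

The design point the sketch makes concrete is that the default outcome on a missed deadline is a public holding notice, not silent publication, so the delay fails closed rather than open.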
To be sure, this system might raise concerns. First, why should this system be applied only to the posts of Trump and other political elites when Twitter and Facebook are rife with abuse from many sources? The answer, in short, is that when it comes to political and health disinformation, political elites matter the most. As decades of political science have taught us, people often take their cues from political leaders, who have outsized influence on public attitudes. What’s more, such a high-profile test run of these systems on political elites in the US might help these companies figure out how to create a generalized post-delay system to ensure the integrity of their platforms’ policies. Trump, in particular, is such a reliable source of disinformation and such a powerful actor that there is a strong case for prioritizing his account.
Second, why wouldn’t these systems also apply more broadly to all sorts of institutional accounts? The short answer is that they could, but until evidence shows that other accounts are similarly reliable sources of misinformation with the power to harm scores of people, we would argue that the system be limited to Trump and other political elites, especially those on the ballot.
Third, wouldn’t this system limit Twitter and Facebook’s value as a real-time source of conversation, chilling speech and closing off public dialogue? This system would certainly interrupt the immediacy of some elite political users, but it would hardly amount to censorship. For one, this short review applies only in the context of platforms’ preexisting policies, which focus on speech designed to harm, especially speech targeting democratic processes and public health. For another, the president and others have many means of speaking to the public, from press releases to media organizations to interrupting broadcast communications in the middle of a national emergency. Finally, it is worth remembering that platforms such as Twitter and Facebook already enforce these content guidelines; they just do so often ineffectively, long after the violation, and with no public oversight. A delay might help the public better see the power that platform companies already exercise, show the kind of technological innovations companies could experiment with, foster a richer conversation about whether such companies should have this power at all, and highlight the importance of timely fact-checking. In the meantime, a short delay would bring enforcement in line with preexisting policies, cut off disinformation at its sources, and recognize that the internet moves faster than the platforms can moderate.
Delaying the social media posts of those who seek to manipulate the public—like the president—would be controversial and a sea change in platform content moderation. But it is exactly the pause we urgently need.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints.