
You May Not Even Know You're Spreading Lies

Misinformation and disinformation are two different things. One is spread by people who genuinely believe what they're saying; the other by those who aim to sow chaos and confusion. That distinction matters. But in the cacophonous swirl of social media, it's not always possible to tell which is which. Not only can intentional and inadvertent falsehoods shade into each other, they can happen simultaneously: a conspiracy theory about the Iowa caucuses might be shared by people who know the theory is false and by those who believe it’s true. Good luck targeting messages when you can’t be sure where misinformation ends and disinformation begins.

The framework guiding this WIRED series—information ecology—leans into these complications. It’s also the subject of my forthcoming book, coauthored with Ryan Milner, titled You Are Here: A Field Guide for Navigating Polluted Information. (The introduction can be accessed here.) As Milner and I explain, by foregrounding the reciprocal interconnections between our networks, our tools, and ourselves, an ecological approach to information helps us triangulate our own “you are here” stickers on the network map. Knowing where we stand in relation to everything else better equips us to make more humane, more reflective, and more ethical choices online—all by showing how our individual me is entwined within a much larger we.

When contending with the falsehood, bigotry, and abuse inundating the internet—which Lisa Nakamura bluntly describes as a “trash fire”—we employ a metaphor of pollution, reflecting both its ubiquity online and the threat it poses to the digital environment. Building on Claire Wardle’s work, our polluted information frame acknowledges that there are qualitative differences between mis- and disinformation online but doesn’t attempt to parse them in the wild. Nor does it focus too intently on why a person shares false information. It’s not that motives are irrelevant. It’s that they matter less than environmental impacts, and less than the overall process of spread: how and why a polluted message is able to travel so seamlessly across so many audiences—or how and why it’s encouraged to do so by attention-economy incentives.

By sidestepping questions of motives, we open the conversation up to unexpected sources of pollution. These sources spread toxicity through messages that wouldn’t qualify as mis- or disinformation, yet can be just as potent and just as damaging. In the offline world, unintentional pollution takes many forms: washing your hair and sending globs of petrochemicals swirling down the drain, for example; or driving an old car and leaking oil all over the driveway; or fertilizing the lawn just before a rainstorm, trickling chemicals into the street. People aren’t trying to pollute when they do these things, so they don’t call themselves polluters. Yet the water supply still ends up tainted. Online, everyday actions like responding to a falsehood in order to correct it or posting about a conspiracy theory in order to make fun of it—case in point, QAnon—can send pollution flooding just as fast as the falsehoods and conspiracy theories themselves.

This isn’t to minimize the damage caused by those who willfully poison the digital landscape. Their harms are acute and warrant aggressive intervention. The issue is how much we miss when we restrict our focus to those who actively set out to do harm online. People who care deeply about the environment can pollute as well, even when they’re just living their lives. Even when they’re just trying to help.

It’s easy to apply this critique to others. When it comes to reflecting on the things we post ourselves, however, I’ve found it to be a much tougher sell. We know what our motivations are, and because those motivations are good, what we’re doing must be good, too. Reactions to the conspiracy theories that have swirled throughout primary season, particularly after the Iowa caucuses debacle, are a case in point. I’ve seen countless people, from journalists to politicians to everyday folks, loudly condemn and mercilessly mock those who spread falsehoods about the election—by quote-retweeting the conspiracy theorists, paraphrasing their claims, and breaking the theories down point by point. The irony of criticizing people for spreading polluted information while helping to spread that same polluted information rarely, if ever, comes up; what the exasperated retweeters and journalists are doing is different, or at least it is from their perspective.

In some ways they’re right. Pushing back against falsehood is not the same thing as being the source of that falsehood. None of these tweeters and journalists intend to pollute. The information ecology, however, doesn’t give a shit about anyone’s intentions. What matters most is consequence. And the consequence of those retweets, litanies, and articles is to spread the pollution further.

Some will resist this idea. Some will think, “But my audience understands that RTs aren’t endorsements, that the theories aren’t true, and that I’m just fact-checking! Anyway people need to know what’s happening—otherwise how can we start pushing back?” Hand to heart, I hear you. I’m sure the bulk of your audience does understand. I also agree that people need to know what’s happening; we can’t have a functioning democracy otherwise. And yes, 100 percent, we can’t resist things that haven’t been named.

In a perfect world, the conversation could end there. But ours is not a perfect world. Basing our decisions on the networks we wish we had, rather than the ones we actually do have, won’t get us any closer to the shared goal of a less-terrible internet.

Within these networks, the very notion of an intended audience disintegrates: the result of context collapse (the unpredictable commingling of audiences), Poe’s Law (the difficulty of determining meaning online), and an attention economy that incentivizes the fastest possible spread of the most possible information. We can talk about “our audience” all we want, of course, and tailor our messages to what we think they need to hear. But we often have no way of knowing how other, wholly unintended audiences will react to the things we post. In response to our messages, retweets, invectives, and, yes, WIRED articles, maybe some will respond exactly as we hope. But others could be further emboldened because they’ve triggered a lib, lol, and that means they must be onto something. Still others could begin to wonder if there’s any truth to the idea—“I mean, Graham is a US senator; and don’t they have access to top-secret information?”—and start Googling, in the process encountering even worse and more misleading information.


These are just a few possibilities; there are so many network variables, we often can’t even know what we don’t know about our unintended audiences. Once we publish or send or retweet, our messages are no longer ours; in an instant, they can ricochet far beyond our own horizons, with profound risks to the environment. At least potentially. On the other hand, if we only published or sent or retweeted things when we knew with absolute certainty what would happen next, we’d never say anything. That would be bad!

Maybe we’ll never know exactly where our posts and our retweets will travel. We can reflect, however, on the kinds of outcomes to avoid—the easy victories for chaos agents. Far-right operatives, for example, need and want liberals to help spread their messages; they rely on the mainstream majority for signal-boosting. This was true in Iowa, and it will be true throughout the 2020 election. The question then becomes: What messages work least well as free publicity for them?

There are no one-size-fits-all solutions here; that’s not how things work in a complex ecosystem. Nor is it practical to demand zero pollution. When pollution is avoidable, it should be avoided; certain things simply don’t need saying. That’s particularly true when someone is posting a response for the sake of responding, or when they don’t realize that the very concept of joking online belongs in scare quotes. Other times, there’s no way to avoid the sludge; certain things do need saying, because the messages add context or nuance or moral clarity, even as they help publicize the source-pollution. That pollution might be unavoidable, but you can reduce the runoff by considering two separate waste sites: the spot where you’re standing and the areas downstream. What pollution might your message mitigate, and for whom? How does that compare to the pollution it might generate? Whose bodies might be nourished, and whose might be poisoned? Are those costs worth the benefits?

As we hurtle toward 2020, it’s easy to feel stymied. There’s just so little we can do. We can’t control social media platform policies. We can’t control government regulation. We can’t control the various industrial polluters who make a killing by killing democracy. What we can control is how and when we choose to post, and by extension, how much pollution we release into the landscape. On its own, of course, this isn’t enough. We need structural overhaul. We need a Green New Deal for the digital age, in which policy change dovetails with economic change dovetails with educational change dovetails with behavioral change.

Still, cumulatively, we can begin taking steps to mitigate the pollution that already exists and begin to minimize the pollution that’s created going forward. In the process, we can generate the kind of grassroots energy needed for policy and economic and educational change. We do this by reflecting on where we are in relation to everything else. We do this by not making life too easy for the most noxious polluters. We do this by remembering, always, that downstream is just a click away.
