After another election in which the polls and polling-based forecasts erred pretty badly, you might expect political writers and pundits to have learned their lesson. You might expect them to approach further polling numbers—the ones that come out just after the election, instead of just before it—with a fair degree of skepticism. You would be wrong.
It took less than 24 hours after voting concluded on November 3 for analysis to start pouring in on how various demographic groups had changed allegiances since 2016. A CNN headline promised to show “how voters shifted during four years of Trump”; one from Vox offered to explain why “Trump made gains with Black voters in some states.” Other outlets offered even more precise parsings of the electorate: “Over half of those whose family income was more than $100,000 a year supported the president,” claimed the Financial Times, “compared with 45 percent in 2016.”
It’s easy to assume that all these data-driven judgments, delivered after Election Day, are somehow epistemologically distinct from the faulty pre-election forecasts. In fact, they are mainly based on a national exit poll conducted by Edison Research, and they are no more free of systematic bias or methodological ineptitude than, say, a statewide poll of Ohio from October. Exit polls are a lot like regular polls, only worse. That was especially true this year, when capturing the many voters who cast their ballots early by mail required calling them up weeks before the election. In a sense, the 2020 exit poll was just another pre-election survey.
Even in normal times, exit polls are plagued by sampling bias: different groups are not all equally likely to respond, which produces a misleading picture of who actually voted. College graduates and young people, for example, tend to be overrepresented. As the political scientist Robert Griffin notes in The Washington Post, this year’s exit polls appear to have heavily underestimated the share of the electorate made up of white people without a college degree, just as they did in 2016. Compounding the problem is the fact that an exit poll has to match the results of the election. If not enough respondents report voting for Trump, for example, the pollster has to reweight the responses from various groups until Trump’s share of the sample matches his actual share of the vote. The groups that are already overrepresented often get the biggest adjustments, which might explain this week’s dubious reports that 55 percent of white women broke for Trump.
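To see how that adjustment can warp subgroup numbers, consider a stylized sketch. Every figure below is invented, and the one-step reweighting is a deliberate simplification rather than Edison’s actual procedure; the point is only the direction of the distortion.

```python
# Toy illustration (invented numbers; not Edison's actual method) of how
# forcing an exit poll to match the official topline inflates subgroup figures.

# Raw sample: college grads are overrepresented (70% of respondents)
# and less pro-Trump. Cells map (group, candidate) -> respondent count.
sample = {
    ("college", "trump"): 28, ("college", "biden"): 42,
    ("no_college", "trump"): 18, ("no_college", "biden"): 12,
}

def trump_share(cells):
    trump = sum(n for (group, cand), n in cells.items() if cand == "trump")
    return trump / sum(cells.values())

raw = trump_share(sample)
print(f"raw topline: {raw:.1%}")  # 46.0% -- but say Trump actually won 50%

# Crude fix: scale up every Trump respondent's weight, and scale down every
# Biden respondent's, until the weighted topline matches the official count.
official = 0.50
weighted = {
    (g, c): n * (official / raw if c == "trump" else (1 - official) / (1 - raw))
    for (g, c), n in sample.items()
}
print(f"adjusted topline: {trump_share(weighted):.1%}")  # 50.0%

# Side effect: Trump's share within *every* subgroup drifts upward,
# even in groups whose raw preferences we assumed were measured correctly.
for g in ("college", "no_college"):
    sub_raw = {k: v for k, v in sample.items() if k[0] == g}
    sub_adj = {k: v for k, v in weighted.items() if k[0] == g}
    print(f"{g}: {trump_share(sub_raw):.1%} -> {trump_share(sub_adj):.1%}")
```

Run it and the adjusted topline lands exactly on 50 percent, but Trump’s share within both groups drifts up by about four points, purely as an artifact of the correction.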
To be fair, articles like the ones I’m citing tend to include disclaimers about the limitations of exit polls. But then they plow ahead with the analysis anyway, rather like personal-health reporting that is based on a study of a dozen healthy undergrads placed on a two-week diet. The more honorable disclaimer would be to not write the article in the first place. “Exit polls are garbage,” said Lee Drutman, a political scientist and senior fellow at New America. “Any smart person at this point knows not to make any judgments about the electorate from exit polls, because the sampling methodology is just totally off.”
Not all analyses of the electorate have been based on exit polling. The New York Times, for example, has been publishing data visualizations that plot the degree to which every county in certain swing states shifted left or right between 2016 and 2020, along with explanations of what it means for voter behavior. (Example: “Hispanic Voters Deliver a Texas Win for Trump,” a finding based on the fact that counties with higher proportions of Hispanic residents shifted more heavily toward the GOP.) There’s a lot to be gained from this sort of geographic analysis, but it still has pitfalls. The first is obvious: all the votes aren’t counted yet. As of this writing, a fair number of Texas counties have still counted less than 90 percent of their ballots. If the endless election process we’re living through has taught us anything, it’s that even 1 or 2 percent of the ballots can dramatically shift the results. California and New York State appear to have millions of votes left to count. It makes little sense to draw firm conclusions about the Black and Hispanic vote, especially, before hearing from those two states.
Even when the votes are counted, you should still be wary of analysis that purports to reveal insights about voter behavior based on shifts in certain areas.
“You can’t use aggregate data to say something about how individuals behaved,” said Brian Schaffner, a pollster and political scientist at Tufts. This, Schaffner explained, is a version of what’s known as the “ecological fallacy.” A lot of people from a certain demographic group may happen to live in one area, but that doesn’t mean they’re the ones driving whatever electoral shift takes place from one election to the next. “You could, for example, say, ‘These Latino precincts shifted Republican,’” Schaffner said. “But maybe that’s because the white voters in those precincts just voted more Republican for whatever reason, maybe as a reaction to increasing diversity in those precincts.”
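Schaffner’s scenario is easy to make concrete. In the sketch below the numbers are invented: Latino voters do not move at all between elections, and only the white voters in the heavily Latino precinct shift, yet the precinct-level map shows exactly the pattern an analyst would read as a Latino swing.

```python
# Toy numbers (invented) illustrating the ecological fallacy: precinct-level
# shifts can't tell you which voters moved. Here no Latino voter changes at all.

precincts = [
    {"latino_share": 0.8,
     "gop_2016": {"latino": 0.30, "white": 0.50},
     "gop_2020": {"latino": 0.30, "white": 0.70}},  # only white voters shift
    {"latino_share": 0.2,
     "gop_2016": {"latino": 0.30, "white": 0.50},
     "gop_2020": {"latino": 0.30, "white": 0.50}},  # nobody shifts
]

for p in precincts:
    latino, white = p["latino_share"], 1 - p["latino_share"]
    # Precinct-wide GOP share is just the turnout-weighted mix of the groups.
    totals = {y: latino * p[y]["latino"] + white * p[y]["white"]
              for y in ("gop_2016", "gop_2020")}
    shift = totals["gop_2020"] - totals["gop_2016"]
    print(f"{p['latino_share']:.0%} Latino precinct: GOP shift {shift:+.1%}")
```

The 80 percent Latino precinct swings four points toward the GOP while the other precinct doesn’t move, even though, by construction, not a single Latino voter changed their ballot.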
None of this is to say that any of the emerging narratives about how various groups voted this year are wrong. We just don’t know yet. Certainly, the results in places like Miami and the Rio Grande Valley, where Trump wildly overperformed expectations, strongly suggest a meaningful shift to the right among the Latino voters who live there. At the same time, however, in electoral politics, a lot rides on small differences. Whether Trump improved his margin among Black voters by 2 percentage points, as the pre-election AP VoteCast survey suggests, or by 4 points, as the Edison exit poll does (or by more, or less, or not at all), matters a fair bit for political strategists and indeed for anyone trying to make sense of the election results.
“Many pundits and commentators are far too quick to use flawed data sources, and this often produces an election narrative that persists even when better data and analysis call it into question,” said John Sides, a political scientist at Vanderbilt.
The good news is, help is on the way. By sometime early next year, states will have finished updating their voter files, meaning public information will exist on exactly who did and did not vote this year. That’s important, because yet another problem with survey-based analysis is that people lie not only before Election Day, about whether they intend to vote, but also afterward, about whether they actually did. (According to Schaffner, college-educated people are particularly bad offenders.) Once the voter files are updated, several organizations will release data from large surveys matched against those validated files. These include the Cooperative Election Study, which Schaffner helps administer, as well as Pew’s validated-voter survey, which is as close to a gold standard as exists in the polling business.
Studies like these will give a much more accurate sense of who really voted, and for whom. Unfortunately, they probably won’t be released until next summer. What’s a politics junkie to do in the meantime? Ignore the noise. Play with your kids; call your parents. It may be unsatisfying to have to wait for answers, but, as I hardly need to remind you, that’s a lot better than buying into narratives that turn out to be untrue.