By 10 pm EST on election night, when Florida slipped decisively out of the Biden–Harris ticket's reach, the venomous backlash against the pollsters had already begun. Much of the fury was directed at Nate Silver, America's former forecaster laureate, who had largely repaired his image since the infamous 2016 presidential election misfire and had millions of people hanging on his every word. As the 2020 election season rolled around, Silver's predictive model had reestablished itself as the standard.
The work of poll wonks like Silver, and the very idea of election prediction, has been criticized for valid reasons, and some critics have offered insightful takes on why the forecasts got things wrong in 2020. But in the midst of this backlash, the professional side of me, the part that builds mathematical models of epidemics like Covid-19, couldn't help but empathize with Silver and his kind. There are many parallels between society's response to seemingly errant election forecasts and its response to models of the trajectory of infectious diseases. And in discussing features of each, we can learn why the forecasters aren't to blame for our disappointment.
Most forecasting models of epidemics are mechanistic. In March, very early in the pandemic, mathematical epidemiologist Neil Ferguson and colleagues at Imperial College London developed a model that offered dire predictions for the number of people who might be infected and die in the US and UK (on the order of 2 million deaths). Another model, developed by the Institute for Health Metrics and Evaluation at the University of Washington, was the center of controversy after it changed its predictions to suggest that, in many places, the US was closer to the peak than we had realized.
These are but two of the many Covid-19 forecasts based on a presumed understanding of how the epidemic actually works. The scientists construct a version of the world, encoded in equations and bits and colored by details such as how infectious the virus is, how people are interacting with each other, and other variables.
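For readers who want to see what "mechanistic" means in practice, here is a minimal sketch of one of the simplest such constructions: an SIR (susceptible, infectious, recovered) compartmental model. The parameter values are purely illustrative, and this is not the Imperial College or IHME model, only a toy of the same general family.

```python
# A minimal SIR (susceptible-infectious-recovered) model, integrated with
# simple Euler steps. All parameter values here are purely illustrative.

def simulate_sir(population, beta, gamma, initial_infected, days, dt=0.1):
    """Return one (S, I, R) snapshot per day for a basic SIR epidemic."""
    s = population - initial_infected
    i = initial_infected
    r = 0.0
    history = []
    steps_per_day = int(1 / dt)
    for _day in range(days):
        history.append((s, i, r))
        for _ in range(steps_per_day):
            new_infections = beta * s * i / population * dt
            new_recoveries = gamma * i * dt
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
    return history


if __name__ == "__main__":
    # Illustrative parameters: transmission rate beta and recovery rate gamma,
    # so R0 = beta / gamma = 2.5, in a population of 1 million people.
    trajectory = simulate_sir(population=1_000_000, beta=0.5, gamma=0.2,
                              initial_infected=10, days=180)
    peak_day, (_, peak_infected, _) = max(enumerate(trajectory),
                                          key=lambda item: item[1][1])
    print(f"Peak of ~{peak_infected:,.0f} simultaneous infections near day {peak_day}")
```

Real models layer far more on top of this skeleton (age structure, contact networks, behavior change), but the logic is the same: write down a mechanism, then let the equations play it forward.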
Many popular pollsters' algorithms, like the one used by Silver at FiveThirtyEight, are based on an array of opinion surveys of likely voters. Their overall probabilities come from an aggregation of these polls, weighted by quality, sample size, and other features. After the 2016 debacle, election forecasters became more vigilant about correcting for voters' education status, a factor whose neglect helped explain some of that year's discrepancies. Some methods, like the one used by The Economist, combine polling data and economic factors to make predictions.
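To make the aggregation step concrete, here is a toy weighted average of polls. The polls themselves, and the choice to weight by a quality grade and the square root of sample size, are assumptions for illustration; this is not FiveThirtyEight's actual algorithm, which involves many more adjustments than this.

```python
# Toy poll aggregator: a weighted average of poll margins, where each poll's
# weight grows with its sample size and a subjective quality grade.
# A sketch of the general idea, not any outlet's real model.
import math

# Hypothetical polls: (margin for candidate A in points, sample size, quality grade 0-1).
polls = [
    (+4.0, 1200, 0.9),
    (+1.5,  800, 0.6),
    (+6.0,  500, 0.4),
    (+3.0, 1500, 0.8),
]

def aggregate(polls):
    """Weighted average of poll margins; weight = quality * sqrt(sample size)."""
    weights = [quality * math.sqrt(n) for _margin, n, quality in polls]
    total_weight = sum(weights)
    weighted_sum = sum(w * margin for w, (margin, _n, _q) in zip(weights, polls))
    return weighted_sum / total_weight

print(f"Weighted polling average: candidate A by {aggregate(polls):+.1f} points")
```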
Models of epidemics are often imprecise because they take on the impossible burden of trying to capture all of the complexity of an epidemic. No computer could tabulate all the meaningful detail underlying infections on vacation cruises, superspreading events during choir practice, or maskless politicians at a Rose Garden ceremony. Math and computers may be able to capture subtle features of any one of these events, but the most popular models of Covid-19 are supposed to tell us something about how an epidemic plays out in aggregate, for millions of people, in different settings. And these aggregate models are often the ones we use in policy discussions.
The accuracy of predictive models of elections is similarly undermined by the vagaries of human behavior, social structure, and other stuff that we just don't understand.
We might account for the voting trends of individuals of Latinx descent but underestimate the large differences (including political preferences) among Afro-Latinos in the Bronx, Cuban Americans in Miami, and Mexican Americans in El Paso.
We might weight an election forecasting model based on what we think we know about rural, white voters in the Rust Belt but fail to account for voters who don't bother participating in polls at all.
We might know that, because of Covid-19, there will be more mail-in ballots among those who identify as Democrats, but it is challenging to predict how this will manifest across settings and impact the election.
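To make the first of these pitfalls concrete, here is a hedged sketch of post-stratification, the standard practice of reweighting a poll sample to match population demographics, and of how a coarse category can still hide subgroups that behave very differently. Every group name and number below is hypothetical.

```python
# Post-stratification sketch: reweight poll respondents so demographic groups
# match their population shares. Lumping distinct subgroups into one coarse
# category can still mislead even after the weights are "correct".

# True population: one coarse category hides two subgroups with different views.
population_shares = {"subgroup_a": 0.10, "subgroup_b": 0.10, "everyone_else": 0.80}
true_support      = {"subgroup_a": 0.70, "subgroup_b": 0.35, "everyone_else": 0.50}

# What the pollster sees: subgroups a and b are lumped together as "group_x",
# and the respondents in that category happen to skew toward subgroup a.
sample = {
    "group_x":       {"share_of_sample": 0.15, "observed_support": 0.62},
    "everyone_else": {"share_of_sample": 0.85, "observed_support": 0.50},
}
coarse_population_shares = {"group_x": 0.20, "everyone_else": 0.80}

# Reweight the sample to match the coarse population shares.
weighted_estimate = sum(
    coarse_population_shares[g] * sample[g]["observed_support"] for g in sample
)

# Ground truth uses the fine-grained subgroups.
truth = sum(population_shares[g] * true_support[g] for g in population_shares)

print(f"Post-stratified estimate: {weighted_estimate:.1%}")  # ~52.4%
print(f"Actual support:           {truth:.1%}")              # ~50.5%
```

Even with the "right" weights on the coarse category, the estimate drifts from the truth, because the respondents inside that category are not representative of it.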
For all of the criticism of Silver, he’s been open about the flaws in his 2016 model, is generally forthcoming about uncertainty in his predictions, and has attempted to explain how his model works. But despite his efforts at communicating the importance of uncertainty, we seem surprised—and personally offended—every time a forecaster gets it “wrong.”
The negative responses to errant predictions by epidemiologists have a similar weight: When the actual Covid-19 case counts are higher or lower than the models say, some question why we need mathematics at all.
To help avoid misinterpretation, both the epidemiologist and the pollster must own up to the limitations in their craft. They must be transparent about why they’re building a model in the first place, and about their method of choice. In addition, the sharing of open-source data and code (so that the citizen-scientist can participate in the modeling process) can go a long way in helping the public engage with forecasts in a responsible way.
This should include clear communication regarding what a prediction truly means, and what a probability translates into in real-world terms. When I say that I predict 120,000 cases per day in December, under what conditions does that prediction apply? When I discuss a 90 percent chance that candidate A will defeat candidate B in a congressional race, I should explain what that percentage truly means: that if the real world looks like the one described by the available polling data, then candidate A would be the presumptive favorite. Such a forecast says little about what the real world actually looks like: There's always a possibility that the polling data driving the predictions of Silver and others are systematically biased in some way and are consequently missing a feature of how people will behave on Election Day. Silver and colleagues might phrase this explicitly by highlighting the in silico nature of the forecasts: Their predictions aren't for the world as it is, but are based on data that might reflect voter behavior.
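A toy simulation can make that reading concrete. The sketch below is not Silver's model, and all of its numbers are illustrative; it simply treats a win probability as the share of simulated elections, consistent with a polling average and its error, in which candidate A comes out ahead, and it shows how a systematic polling bias shifts that share.

```python
# Monte Carlo reading of a win probability: simulate many elections consistent
# with a polling average and its uncertainty, and count how often candidate A
# wins. Numbers are illustrative, not any real forecaster's parameters.
import random

def win_probability(poll_margin, poll_error_sd, systematic_bias=0.0, trials=100_000):
    """Share of simulated elections in which candidate A's margin is positive."""
    wins = 0
    for _ in range(trials):
        true_margin = poll_margin + systematic_bias + random.gauss(0, poll_error_sd)
        if true_margin > 0:
            wins += 1
    return wins / trials

random.seed(0)
# A 4-point polling lead with a 3-point standard error looks like ~90 percent...
print(f"No bias:      {win_probability(4.0, 3.0):.0%}")
# ...but if the polls are systematically off by 3 points, the picture changes.
print(f"3-point bias: {win_probability(4.0, 3.0, systematic_bias=-3.0):.0%}")
```

Under these made-up numbers, a 4-point lead reads as roughly a 9-in-10 chance, while a 3-point systematic miss in the polls drops it to about 6 in 10.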
Even when the forecaster follows these rules, we can feel blindsided when the predictions end up wrong. In these cases, the righteous indignation over the misalignment between our expectations and the forecasts is often less about the models and more about us.
This is because we often build our expectations from models around our emotional needs. If we don't like one of the candidates, our frustration about the model's imprecision is not really driven by disappointment in how it performed but by the fact that we don't like the outcome. Because we so desperately want the Covid-19 pandemic to be over, we aren't in a position to reasonably interpret a forecast that offers a meaningful probability of our being back to normal by early 2021, even if such a model comes with the disclaimer that the probability represents a best-case (and wholly unrealistic) scenario.
These emotions cloud our interpretation of probability and noise, and we end up burdening the models with our need for a linear narrative. That's especially true in realms, like our political future and our experience with scary pandemics, where the stakes are high.
But predictive models and forecasts were never supposed to offer us a deterministic world. As the statistician George Box said, “All models are wrong, but some are useful.” They’re only meant to provide an instrument through which we can construct a picture of the world. They are not supposed to be the picture. Nor are they equipped to be the sole instrument driving our hopes and fears.