Consider this thought experiment: J is a 55-year-old patient who has smoked two packs of cigarettes a day since he was 22. He has just been diagnosed with stage III non-small-cell lung cancer. His doctor uses a series of methods, including a model, to estimate his prognosis.
In Situation 1, his doctor follows the “precautionary principle” and presents the worst-case scenario, based on a worst-case model: J has about six months to live.
In Situation 2, the doctor bases her prognosis on projecting J’s present situation into the future, by definition not the worst-case scenario and more “optimistic”: J has another two years to live.
Which scenario is better?
The answer isn’t so straightforward. In medicine, prognostication is fraught with its own challenges and depends largely on the data and model used, which may not perfectly apply to an individual patient. More importantly: The patient is part of the model. If the information shifts the patient’s behavior, the model itself changes–more precisely, the weights given to certain variables in the model shift toward either a more negative or a more positive outcome. In the first scenario, J may decide to change his behavior to make the most of his next six months, perhaps extending them to nine months or longer. Does that mean the model was inaccurate? No. It means that knowledge of the model helped nudge J toward a more optimistic outcome. In the second scenario the opposite may happen: J may continue his two-pack-a-day smoking habit, or only cut down to a pack a day, which may hasten a more negative outcome. It’s entirely possible that J in Situation 1 lives for two years, and that J in Situation 2 lives for six months.
This pattern exists everywhere, from climate change projections to polling (knowing poll results can affect voting behavior, potentially changing the outcome). We’ve seen a similar dilemma with Covid-19 pandemic modeling, which may help explain the divisiveness over everything from when the pandemic may end to whether lockdowns are appropriate. Last year, just as the World Health Organization declared Covid-19 a global pandemic, I wrote about uncertainty and risk perception. When faced with uncertainty we defer to experts, but a month later the National Institutes of Health’s Anthony Fauci correctly noted that even experts struggle to predict what was (and still is) a “moving target.”
Over the past few weeks we’ve seen more opinion pieces focused on optimism: that herd immunity will be reached by April, and that summer will be more like 2019, wide open and carefree. We’ve also seen how this optimism, based on a “present-day accurate model,” can sway behavior: from schools opening (but then locking back down) to Texas’ recent removal of its mask mandate potentially contributing to an uptick in cases. Others have taken a more pessimistic approach, saying it may be another two years until things “return to normal,” and that the virus variants are a “whole other ballgame.” Today, in Michigan and in Canada, a potential variant-fueled third wave suggests a less optimistic outlook (for now). We’re all deeply familiar with how this pattern has repeated itself several times over the past year, and even experts disagree (and some have changed tack). It’s more than just bad news bias. But how do we reconcile this dichotomy between the “optimists” and the “pessimists”? It may come down to how we understand the purpose of epidemiological models in general, and the two types of pandemic forecasting models in particular.
Justin Lessler is an associate professor of epidemiology at Johns Hopkins University and part of a team that regularly contributes to the Covid-19 Forecast Hub. He specifies that there are four main types of models: theoretical, which help us understand how disease systems work; strategic, which help public officials make decisions, including the decision to “do nothing”; inferential, which help estimate things like levels of herd immunity; and forecasting, which project what will happen in the future based on our best guess of how the response and the epidemic will actually unfold.
When it comes to forecasting models, there are those that are not worst-case by definition (and thus read as more optimistic), which aim to describe present-day patterns in transmission and susceptibility and project them forward, assuming the current patterns stay the same. In these “dynamic causal models,” a variety of variables are added to also capture what University College London-based biomathematician Karl Friston has dubbed “dark matter”: unknown factors that affect how the virus spreads.
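To make “project out, assuming current patterns stay the same” concrete, here is a minimal sketch in Python. It is not Friston’s model or any group’s actual forecast; the reproduction number, serial interval, and case counts are hypothetical numbers chosen purely for illustration.

```python
# Minimal sketch of a "project current patterns forward" forecast.
# All parameters are hypothetical; this is not any group's actual model.

def project_cases(current_cases: float, r_t: float,
                  serial_interval_days: float, horizon_days: int) -> list[float]:
    """Project daily cases forward, assuming today's reproduction
    number R_t stays fixed over the entire horizon."""
    daily_growth = r_t ** (1 / serial_interval_days)  # per-day growth factor
    return [current_cases * daily_growth ** d for d in range(1, horizon_days + 1)]

# Example: 1,000 cases/day today, R_t = 0.9, 5-day serial interval.
forecast = project_cases(1000, r_t=0.9, serial_interval_days=5, horizon_days=28)
print(f"Projected daily cases in four weeks: {forecast[-1]:.0f}")  # ~554
```

The load-bearing assumption is the one in the docstring: R_t never changes. That is precisely the assumption that behavioral feedback, discussed below, tends to break.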
Then there are forecasting models guided by the “precautionary principle,” aka “scenario models,” where the assumptions are often the most conservative. These account for the worst-case scenario, to allow governments to best prepare with supplies, hospital beds, vaccines, and so forth. In the UK, the government’s Scientific Advisory Group for Emergencies focuses on these models and thus guides policy around lockdowns. In the US, President Biden’s Covid-19 task force is the closest equivalent, while the epidemiologists and actuaries who appear nonconformist may be the closest we get to a group like the Independent SAGE (which Friston works with).
“The type of modeling we do for the Independent SAGE is concerned with getting the granularity right, ensuring the greatest fit–with minimal complexity–to help us look under the hood, as it were, at what is really going on,” Friston told me. “So, the fundamental issue is namely, do we comply with the precautionary principle using worst-case scenario modeling of unmitigated responses, or do we commit to the most accurate models of mitigated response?”
This gets to the heart of the tension between various “experts.” For instance, epidemiologists like Stanford’s John Ioannidis have tended to be more concerned with modeling the pandemic to accurately explain current patterns (and extending those patterns into the future), which can come off as more optimistic and isn’t typically used to guide policy.
“A lot of the confusion arises from not understanding the purposes of a given model, such as presenting a strategic model as a forecasting model,” says Lessler. “I prefer the term ‘planning scenario,’ [and] in a pandemic our response may lead to the predicted scenario not happening.” He points to the Institute for Health Metrics and Evaluation model from spring 2020, which was accurate one to two weeks into the future but assumed strict interventions would remain in place indefinitely. That assumption made it invalid for long-term planning, yet many who were eager to embrace its apparently rosy outlook used it for exactly that–including the Trump Covid-19 task force, which relied on it heavily.
As with the thought experiment about J, the observer effect–popularly associated with the Heisenberg uncertainty principle–helps us understand a similar idea in epidemiologic forecasting models. Translating a model’s projections to the public changes the model itself: It creates a feedback loop in which individuals change their behavior based on perceived risk, which then shifts transmission patterns. Models are a simulation: one we create but are also affected by. We saw this with the initial recommendations to rigorously clean surfaces: Early forecasts included contact transmission, but the models shifted once we realized that mode of transmission was minor and that most transmission occurs through respiratory secretions. We’ve also seen this with masks: Initially, not wearing masks resulted in high levels of transmission. Transmission then decreased once masks were adopted, so the models shifted toward more optimistic projections. (This was the crux of Neander-gate.)
“Many models are not intended as predictors but as tools to help decisions. So when you see something presented on the news, notice that it is usually a statement of what ‘could’ happen, and listen for what else ‘could’ happen if people react to the epidemic,” Lessler told me. “A lot of people have a tendency to focus on the worst case, but if the model is successful in informing policy then that dire prediction that is getting all the press will be wrong.”
These feedback loops are further complicated by the asymmetry in how we, as individuals, view information and incorporate it into our behavior. Optimists may process new information with an optimistic update bias (toward taking more risks), while pessimists may remain risk-averse even when presented with an “optimistic” model. This is not dissimilar from confirmation bias. Our behavior also depends on epistemic trust: whether we trust one expert forecast over another enough to change our minds and behavior. This recently arose with the pushback against a controversial article in The Atlantic, written by an economist, about the risks of Covid-19 transmission in children.
Science, and specifically epidemiology, is concerned with measurement and truth. Accurate models are important. But at time point A, if a group of individuals listens to the worst-case/pessimistic/precautionary-principle model, the likelihood of the worst case actually occurring may decrease, because the group shifts its behavior to minimize risk. The opposite is also true: At the same point, if a group listens to the “dynamic causal”/optimistic model and shifts its behavior to be more liberal, the outcome drifts toward the worst case.
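A toy simulation makes the loop explicit. This is a sketch, not a real epidemiological model: the behavioral response function and every parameter below are invented for illustration.

```python
# Toy sketch of the forecast-behavior feedback loop described above.
# The response function and all numbers are invented for illustration;
# this is not any real epidemiological model.

def simulate_weekly_cases(base_r: float, weeks: int,
                          public_responds: bool) -> list[float]:
    """Multiply cases by an effective R each week. If the public responds
    to alarming numbers, contacts (and thus R) shrink as cases climb."""
    cases = [100.0]
    for _ in range(weeks):
        if public_responds:
            # Assumed response: caution grows with the case count,
            # damping the epidemic instead of letting it explode.
            r_eff = base_r / (1 + cases[-1] / 1000)
        else:
            r_eff = base_r  # behavior never changes; R stays at baseline
        cases.append(cases[-1] * r_eff)
    return cases

heeded = simulate_weekly_cases(base_r=1.5, weeks=12, public_responds=True)
ignored = simulate_weekly_cases(base_r=1.5, weeks=12, public_responds=False)
print(f"Week 12, warning heeded:  {heeded[-1]:,.0f} cases")   # levels off near 500
print(f"Week 12, warning ignored: {ignored[-1]:,.0f} cases")  # ~13,000 and climbing
```

In the responsive run, the dire projection never materializes precisely because it was taken seriously; run the same mechanism in reverse and you get how an optimistic forecast can undo itself.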
“Pandemic forecasting is similar to weather forecasts, which are good for a 10-day outlook, but I couldn’t tell you what the weather will be in the third week of July,” Lessler told me. With infectious diseases, “we can’t say what will happen in three months from now, since we have feedback loops with policy and behavior and uncertainty in the underlying data.”
Let’s come back to J: In Situation 1 he may decide to take that pessimistic model as a nudge to quit smoking. The reverse may happen in Situation 2. Ideally, his doctor would share both projections, and it would be up to J to weigh both options.
Public health is trickier, because decisions made by individuals ripple out to affect their communities. Arguably, when millions of lives are at risk, it’s better to be overprepared and overcautious than underprepared, though the externalities to individual liberties and to the economy are also important and shape our choices and evaluation of risk.
Here’s the good news: Over time, the forecasting models of the optimists and the pessimists tend to converge. So both the scenario and dynamic causal models are, in a sense, correct: Overall and gradually, we make more accurate predictions together. This suggests that once case numbers dwindle, the models will resemble one another, which either signals the end of the pandemic or simply reflects it. As Lessler later shared in an email: “All models get to a destination of very low cases. It is just a matter of how long and what happens along the way.”
As such, a more “pragmatic” outlook, one that advocates for continued use of masks, vaccines, and social distancing, may best yield the optimistic outcome of herd immunity and life returning to a more enjoyable “normal” later this year.
When I held a Twitter poll earlier this month, over two-thirds of some 700 respondents took the more optimistic view: that in North America the end of the pandemic is near. At first I felt relieved, but then I realized this view could lead to the more pessimistic outcome if that same optimism dictates less prudent behavior. Instead, balancing cautious, evidence-based pessimism in the present with the idea that it may give us reason for optimism in the future may be the best way to guide behavior, so that we can emerge from this together, as some places already appear to be doing. Which is another way of summing up the writer Ezra Klein’s recent tweet: “Hope feels like an unsafe emotion lately. Personally and professionally, I don’t want to wax optimistic only to be crushed as deaths rise. Pessimism is safer.”
Perhaps pragmatism, with a healthy dose of tolerance for change and uncertainty, is even safer.