On January 20, both the United States and South Korea confirmed their first cases of Covid-19; Taiwan reported its first case the next day, and Singapore followed two days later. Epidemic parity began and ended there. By the end of March, those three Asian countries had largely contained at least the first wave of their outbreaks—and, not only that, had done so at relatively minimal cost to their citizens' routine way of life. The same could scarcely be said of the US. The story behind this divergence was obvious: The governments of South Korea, Taiwan, and Singapore were prepared to test, to trace, and to isolate, and ours was not. Such a vast discrepancy in basic preparedness was, however, almost incomprehensible to many American observers—it seemed impossible to imagine that it could be that simple. The astounding national variance had to be explained by some hidden variable.
The two obvious candidates were culture and technology. On the cultural account, the comparative success of the South Koreans, Taiwanese, and Singaporeans was understood to be an artifact of national—or even supranational—character, some signature constellation of attributes. As The Wall Street Journal paraphrased one expert's view, “In South Korea, as in Japan and Taiwan, the lingering cultural imprint of Confucianism gives a paternalistic state a freer hand to intrude in people's lives during an emergency.” Such an appeal to “culture” provided no actionable lessons for the US, but it did provide a ready-made excuse. Nobody was going to propose that the people of Kirkland, Washington, or New Rochelle, New York, be instructed to read the Analects; these Asian countries were successful because they drew on a venerable tradition of filial piety and noble self-abnegation in the service of collective well-being.
A slightly less Orientalist cultural explanation of these successes traded heavily on the notion of social trust. “Social trust is higher in South Korea than in many other countries,” according to The New York Times, “particularly Western democracies beset by polarization and populist backlash.” Social trust, in this generalized sense, is an ill-defined concept that encapsulates a variety of otherwise distinct phenomena: trust in the government as a whole, trust in the relevant authorities in particular, and ultimately trust in one's neighbors. It was, in any case, something that made people more prone to listen to their leaders, wear masks in public, and stand 6 feet apart rather than where they pleased.
Such social solidarity couldn't be counted on in the West. American and European leaders have instead been prone to draw inspiration from these Asian nations' use of technological wizardry. Though South Korea, Taiwan, and Singapore each followed their own distinct approach to pandemic response, their practices were often lumped together as “digital,” as if such a category were self-evidently meaningful. The interventions appeared in roughly three varieties: the recourse to geofencing to enforce quarantine; the use of location and purchase-history information to identify pathogen trails; and the development of contact-tracing techniques, reliant on GPS or Bluetooth data, to pinpoint infectious encounters and notify the exposed.
If cultural stories about deep reservoirs of social trust didn't seem to offer much hope for the worsening American situation, these technological approaches promised, in contrast, that unruly civilian behavior could be rendered irrelevant. Human noncompliance, ineptitude, and weakness could be managed with external controls. Never mind that we can't trust our neighbors to stay home when there's suspicion of infection: In Taiwan, if someone's phone transgresses their invisible geofence, the police are summoned. Similarly, while human contact tracers must trust their subjects to be honest about where they've been and whom they've seen—even if that includes a brief but magical interim at a love hotel—digital contact tracing obviates the need for that sort of candor. No trust per se is necessary when we're given complete confidence in a fail-safe mechanism.
It doesn't take much insight to recognize that there has been something incoherent in the tendency to reduce the successes of these Asian countries to a function of both trust and technology. If they enjoyed such preponderant social trust, why would they also need to rely on automated enforcement mechanisms? One way to resolve this tension was to suggest that their eagerness to submit to digital interventions was in fact just another example of culturally inflected obedience. There had to be some deep cultural reason for their willingness to accept the heavy hand of technological oversight—measures we Americans would never tolerate. This view was so common it could be taken for granted. A New York Times article from mid-April struck a contrast between Asian technologies and the high-touch, labor-intensive contact-tracing schemes that had been drawn up for the commonwealth of Massachusetts: “Contact tracing has helped Asian countries like South Korea and Singapore contain the spread of the virus, but their systems rely on digital surveillance, using patients' digital footprints to alert potential contacts, an intrusion that many Americans would not accept.”
The past 20 years have given Americans good reason to be skittish about national crises as pretexts for expanding the surveillance state. If Asian citizens were content to be tracked, regardless of the privacy compromises, by centralized agencies, that was fine for them, but Americans were wont to believe that the government enjoyed quite enough power, and to suspect that any credulity would likely be abused. Tracking technology could work, but only if its proportional application could be cryptographically guaranteed—if, that is, the original technology was underwritten by further technology. We in the hard-headed West would borrow techniques from Singapore and other countries and improve upon them: not to track people, but to track the virus itself.
On April 10, Apple and Google announced an unprecedented collaboration in the service of such a possibility. This may have been a pragmatic necessity—81 percent of Americans own personal tracking devices called smartphones, almost all of them running iOS or Android—but it also made cultural sense. These companies are, after all, two of our most popular institutions, and if their credibility had recently operated at a deficit with the American public, this was their chance to cancel those debts.
The Apple-Google partnership took up the strategy of Bluetooth contact tracing. Imagine how this works with Alice and Bob, the preferred hypothetical pawns of the world's cryptographers, both of whom have chosen to opt into the system. Alice's phone broadcasts randomized identifiers—they are numerical but can be imagined as pseudonyms—and any other device that lingers within a radius of, say, 6 feet for longer than a specified time logs the encounter. If Alice develops Covid-19 symptoms and tests positive for the disease, she reports this to her phone, and her phone uploads to some server the list of pseudonyms (“Mr Potatohead,” “Alphonso Wetwhistle,” “David Carradine420”) it has recently used, none of which can ever be traced to Alice's device or her person. Bob's phone periodically downloads a list of infected pseudonyms and checks its own Bluetooth-encounter logs to see if any of them come up as recent proximity matches. If Bob's phone finds that it was in fact recently within 6 feet of a “David Carradine420,” he would be instructed to self-isolate. If Bob then developed his own symptoms and tested positive, he would notify his own app, which would send out its own untraceable list of pseudonyms, and so on. Nobody—not even health care providers—knows who has been infected, who has been exposed, or where these encounters have taken place, but everyone who might have lingered in dangerous proximity is automatically warned to take added precautions. The basic protocol is anonymous and decentralized.
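The Alice-and-Bob flow above can be sketched in a few lines of Python. This is a toy model, not the actual Apple-Google protocol (real implementations derive rotating identifiers from cryptographic daily keys and upload those keys rather than raw identifier lists); the `Phone` and `Server` classes here are illustrative inventions meant only to show how matching works without anyone learning who was infected.

```python
import secrets

class Server:
    """Holds only the pseudonyms reported by infected users."""
    def __init__(self):
        self.infected_ids = set()

class Phone:
    """Toy model of one device in a decentralized exposure-notification scheme."""
    def __init__(self):
        self.own_ids = []        # pseudonyms this phone has broadcast
        self.heard_ids = set()   # pseudonyms heard from nearby phones

    def broadcast(self):
        # Emit a fresh random identifier; rotating it prevents tracking.
        rid = secrets.token_hex(16)
        self.own_ids.append(rid)
        return rid

    def record_encounter(self, rid):
        # Called when another phone lingered within range long enough.
        self.heard_ids.add(rid)

    def report_positive(self, server):
        # Upload only the pseudonyms; nothing ties them to the person.
        server.infected_ids.update(self.own_ids)

    def check_exposure(self, server):
        # Download the infected list, intersect with the local encounter log.
        return bool(self.heard_ids & server.infected_ids)

# Alice and Bob meet; Alice later tests positive and reports it.
server = Server()
alice, bob = Phone(), Phone()
bob.record_encounter(alice.broadcast())   # the phones were in proximity
alice.report_positive(server)
print(bob.check_exposure(server))          # prints True: Bob is warned
```

Note that the server never sees Bob's encounter log at all; the match happens entirely on Bob's device, which is the sense in which the scheme is decentralized.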
There has been some understandable confusion over what Apple and Google purport to do. They are not, in fact, collaborating on an app but on an underlying protocol—a tool kit that public health departments might use to build out their own varying approaches. App developers need the cooperation of the underlying operating system to make any tracing system work. But their focus on a protocol rather than an actual application suggests that neither company wants explicit ownership of the eventual app's administration.
Ownership gets complicated. Despite the pervasive fantasy that technology can offer an end run around human unreliability, it should be clear that this system does not in fact eliminate the need for trust; it simply redistributes it away from a centralized intermediary and toward the edges. Alice and Bob need, in the first instance, to trust that the contact identification is in fact accurate—that the two of them were indeed within transmission distance. This is not straightforward. Alice's phone is not, after all, measuring her actual distance to Bob, but rather using received Bluetooth signal strength as a proxy. But all sorts of factors complicate the measurements for Bluetooth signals: Was Alice's phone in her pocket or her bag? Was it right side up or upside down? Was it an iPhone or a Samsung, and if it was a Samsung, which model was it? Even if researchers can determine a foolproof way to make received signal strength reliable, there's a lot of latitude for mistakes. Alice might be 6 inches away from Bob but on the other side of a glass partition. Alice might be 15 feet from Bob but singing at the top of her lungs. False positives and false negatives are likely to swamp the system and undermine its uptake.
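The proxy problem can be made concrete with the standard log-distance path-loss model, which is how received signal strength is commonly converted into an estimated distance. The constants below (a reference power of -59 dBm at 1 meter, a path-loss exponent of 2) are illustrative assumptions, not calibrated values; in practice they vary by phone model, orientation, and environment, which is precisely the difficulty described above.

```python
def estimated_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Invert the log-distance path-loss model to estimate distance in meters.

    tx_power_dbm is the expected RSSI at 1 meter. Both constants differ
    across devices and surroundings, which is why the estimate is noisy.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Under these ideal assumptions, a reading of -65 dBm implies about 2 meters:
print(round(estimated_distance(-65), 2))   # prints 2.0

# But a pocket, a bag, or a body can attenuate the signal by 10 dB or more,
# making the very same phone appear to be roughly three times farther away:
print(round(estimated_distance(-75), 2))   # prints 6.31
```

A 10 dB swing, easily produced by where a phone happens to sit, is the difference between "inside transmission range" and "safely distant," which is why false positives and false negatives are so hard to engineer away.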
Many of these technical problems can be addressed with various kludges, but the most pressing issues cannot be solved by throwing more technology at them. Even if our faith in the code is warranted, our trust in others might not be. As cryptographer Ross Anderson put it in a blog post, “The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.” And even if for some reason Alice trusts the integrity of the self-reporting system, on what basis should she expect compliance from Bob? If Bob gets a notification that he might have been exposed and should quarantine for 14 days, he might very well decide after 72 asymptomatic hours that there's no profit in continued isolation—especially if there's pressure from Bob's boss. Almost all of the incentives for an asymptomatic person are aligned to encourage defection.
In other words, these systems are extremely unlikely to work very well on their own; their viability requires extensive public health oversight. For one thing, the country would need widespread access to easy and free testing to mitigate the false positives and false negatives that apps are sure to generate. For another, we would probably need accredited health care providers to trigger the notification process, so as to spare ourselves the trolling that would invariably accompany any anonymized system of self-reporting. We would probably also want some sort of authority in the loop to make us pay attention. We get so many automated alerts on our phones that we're likely to ignore sudden instructions to self-quarantine as so much vibrational noise; even if we're inclined to take them seriously, surely recipients would want further instruction. People directed to quarantine would also want some sort of official document to present to their employers to authorize their absences.
There are also crucial epidemiological reasons for our phones to collect more than anonymized, bare-bones encounter information. Public health experts can better anticipate viral spread if they have access to granular data about where transmissions are occurring; if lockdowns are going to be lifted, we need to know the differences in transmission rates between schools, restaurants, and public parks.
The important question for any of these technological initiatives is thus not whether the data is gathered under the aegis of some centralized supervisory body but how, precisely, such a centralized system is implemented, and by whom.
Take, for example, the experience of Singapore, the first country to develop a major Bluetooth contact-tracing strategy. TraceTogether, as they called it, was in place by the middle of March, and it was the inspiration for similar projects in Europe and the US. The software was developed under the direction of the Ministry of Health; it is neither automated nor entirely anonymous, and its use is voluntary. As a member of the Singaporean team told me, “We toyed around with many different schemes, and one of them was very similar to the decentralized mechanisms we're seeing around. And the health officials said, ‘Look, we cannot be flying blind with this. We need the information about how it all works, how people respond, what the impact is.’” Without a human involved in the process, offering direction, those notified about exposure might run to the ER or a doctor's office rather than going home to quarantine. Or they might make a grocery run before isolating themselves, endangering others. “If you approach this as an exercise in cryptography, you are missing the point—this is primarily an exercise in public health, about creating and coordinating an operational response to Covid that works.” Singapore had already marshaled teams of hundreds of trained manual contact tracers, and the app was only ever seen as one additional tool—a tool that serves their epidemiological management rather than purports to supplant it.
There is an obvious tension here. Epidemiologists are in widespread agreement that such digital mechanisms as Bluetooth contact tracing only really add value if they are deployed by credible actors atop a robust public health response that handles testing, outreach, and resource distribution. But the Google-Apple protocol is built to prioritize a different, and probably incompatible, end: to circumvent the pitfalls of centralized surveillance that have preoccupied Americans since at least 9/11. This makes these companies the effective gatekeepers between sovereign governments and the citizens they're scrambling to serve. (They are hardly transparent gatekeepers, either; Google's spokesperson directed me twice to a vague blog post about its operations and refused further comment.) Public health officials in France and the UK are developing apps and would prefer that they be able to collect more data or store it on central servers; Google and Apple won't allow it. Either these governments play by Google and Apple's rules or their apps face OS-level constraints that will undermine their reliability and efficacy.
Even this tension, however, might ultimately be beside the point. In South Korea, Taiwan, and Singapore, where various and sundry technological interventions have been used to further the more traditional, human measures already in place, they've been of only marginal value. Taiwan has rarely even used the advanced surveillance technologies at its disposal; Singapore's TraceTogether system has proven most useful in confirming information that manual contact tracers had already secured. This raises an obvious question: If these sorts of tools require a robust institutional response to make them accurate and useful, why do we really need them at all? Have we in the US actually been suffering from a deficit of advanced technology, or might we be suffering from an utter lack of confidence in our institutions and our neighbors?
The attempt to bypass the enigma of “social trust” in favor of precision engineering thus comes full circle: Issues of institutional and interpersonal credibility can't be avoided. The notion that social trust exists in some generalized state does not hold up to any real scrutiny, especially when it's perceived as a function of cultural stereotype. As the German scholar Katharin Tai, who studies Chinese internet policy at MIT, noted on Twitter, “I keep hearing that Europe cannot learn #Covid19 strategies from Asia bc Asians are ‘obedient’ & not as ‘critical’ — South Koreans ousted their last president with mass protests, Taiwanese students occupied parliament to protest a trade deal & HK has been protesting for months.” In an interview, she compared the perceptions of various national responses. “Now Germany is held up as this shining example instead of South Korea or Taiwan, the underlying idea being that if these places are democracies they're somehow not the same—whereas Germany and Merkel seem more like ‘us.’ Why is that, if not due to some fundamental idea that Asia is different?”
Yet there is clearly something going on in these societies. The people have mobilized in ways we clearly haven't, and they've had a much easier time incorporating modest technological tools with neither the fanfare nor the apprehensiveness with which these mechanisms have been greeted in Europe or the US.
Over the past few decades, political scientists have come to articulate a concept of trust that is not a fixed property of a given culture or society. Margaret Levi, the director of the Center for Advanced Study in the Behavioral Sciences at Stanford, has proposed that we wean ourselves from talking about “trust” as a primary concept and instead talk about “trustworthiness.” As Henry Farrell puts it in his book The Political Economy of Trust, drawing on the work of Levi and the late Russell Hardin, “trust pertains to relationships with specific others over specific matters” (italics in original)—that is, trust is not some ambient property of a system but a way to describe and evaluate the expectations that condition and color our relationships, expectations determined by mutual understanding and candid negotiation. The upshot of this view is that “trust” is not an explanation for behavioral phenomena but is itself a behavioral phenomenon that needs to be explained.
What, in other words, has largely been dismissed as either an effect of draconian surveillance or an expression of sheeplike obedience—or both—is that these Asian governments have earned their citizens' trust with a record of public investment and accountability. In the case of the response to the new coronavirus, some of this trust was predicated on the government's rapid and competent rollout of preventative measures. South Korea immediately introduced simple, pervasive Covid-19 testing, and Taiwan was putting health officials with diagnostic equipment onto incoming flights from Wuhan as early as January. Some of it has to do with the basic confidence inspired by a strong social safety net: People can “trust” that seeking treatment won't bankrupt them—both countries have extremely reliable national health care systems—and that they can stay home from work without finding themselves unable to pay for their daily needs. In Taiwan, people quarantining at home are given a stipend of $33 per day. If they don't have space to quarantine, they are directed to hotels; refusal to comply is punished with fines that run to tens of thousands of dollars. It's not that the government measures have worked because people are naturally cooperative and responsible; people are cooperative and responsible because the government measures have worked.
The competence extends to communication. South Korea allayed fears of runaway surveillance with frank explanations of exactly who has access to what personal data, on what terms, and for how long; when a few citizens were publicly exposed as potential carriers via inadvertently identifiable records, the country was quick to alter the way the data was published. The Taiwanese authorities made nationwide mask-inventory data available in real time, and volunteer developers immediately published simple apps to dampen fears of scarcity and publicize the location of distribution kiosks. When a young boy called into the national pandemic help hotline—a call center with an extremely high immediate-pickup rate—to complain that, anticipating ridicule, he was afraid to wear a pink mask, the next day's pandemic task force press conference saw every public health representative in a pink mask.
As Audrey Tang, Taiwan's digital minister and a veteran of the 2014 Sunflower Movement, said to me, a lot of it boils down to norms—which themselves have been conditioned by material experience. “People expect that anyone who has those symptoms will wear a mask, go to a clinic, and report whatever they have done in the past 14 days—and they will do this not because there's anything top-down but because we have a single-payer system and it's the logical thing to do. Coordination might look like compliance, this stuff about Confucian thought; I've read the Analects and understand that it's a useful metaphor, but this looks far more Taoist to me,” she joked. “It's all rational choice, there's no magic about it—if you know there's no social and financial burden, you do the right thing.”
None of this has anything to do with ahistorical “cultural values” or “social cohesion” or demographic homogeneity; it simply reflects basic commitments to transparency in governance, open communication, and, perhaps most of all, trust in the reliable provision of services. These are concrete measures with historical causes. That history includes Asia's fraught and perilous experience of two previous viral outbreaks, SARS and MERS. Tang told me that everyone of her generation and older has been marked by their memories of the barricading of the Taipei Municipal Hoping Hospital during the 2003 SARS outbreak. Hundreds of patients and health care providers were summarily locked inside, and at least one nurse tried to throw herself out of a window to escape. It was a rushed and clumsy response that left a legacy of anxiety, one that the current administration was keen not to repeat.
The bad news is that there is no shortcut to effective pandemic management. The good news is that social trust—the kind that undergirds both an institutional response and a technological one—can be cultivated, as long as responsible authorities keep their promises and refuse to default on their most basic obligations. Even small acts of ministerial competence can go a long way in a crisis. Taiwan banned the export of N95 and surgical masks on January 24 and nationalized mask distribution two weeks later, which established an atmosphere of mettle and faith.
Americans, of course, need far more than access to masks, though that would be a good start. A functional government, universal health care, and a much stronger social safety net—not to mention consistent, meaningful communication from on high—would help us relinquish the enduring fantasy that we will be saved by a Silicon Valley moon shot or that our low levels of social trust are congenital.
Over the past few months we've seen some fleeting, guarded optimism that this moment might be used as an opportunity for national reconciliation—a time to re-create a sense of fellowship and solidarity lost to decades of increasingly rancorous polarization. Crises like these can be occasions to see ourselves as part of what Margaret Levi and the political scientist John Ahlquist call an “expanded community of fate,” “those with whom we perceive our interests as bound and with whom we are willing to act in solidarity at some personal sacrifice.” The spontaneous appearance or expansion of radical mutual-aid groups—grassroots efforts of concerned citizens to help ease one another's difficulties—might thus be taken as an auspicious sign. But mutual-aid groups are at best a stopgap measure. All too often we imagine that Washington doesn't work because we as citizens are irremediably polarized, when the truth is almost certainly the opposite. We cannot thus hope that mere crisis solidarity will once again render America functional; we must instead remind our leaders that our precarious solidarity hangs in the balance of their competence. They should feel welcome to start with something small.
GIDEON LEWIS-KRAUS is a contributing editor at WIRED. He last wrote about democracy, social media, and misinformation in issue 28.02.
This article appears in the July/August issue.