
The Protests Prove the Need to Regulate Surveillance Tech

Law enforcement has used surveillance technology to monitor participants in the ongoing Black Lives Matter protests, as it has during many other protests in US history. License plate readers, facial recognition, and wireless text message interception are just some of the tools at its disposal. While none of this is new, the attention domestic surveillance is receiving in this moment further exposes a great fallacy among policymakers.

All too often, the policy community, particularly those whose work involves national security, tends to discuss democratic tech regulation purely in terms of geopolitical competition. There are arguments that regulating big tech is vital to national security. There are counterarguments pushing the exact opposite: that promoting big US tech “champions” with minimal regulation is vital to US geopolitical interest, especially vis-à-vis “competing with China.” Permutations abound.


To claim these arguments don’t hold water in Washington would suggest a certain naivete, and that’s not what I’m saying. That major tech firms deploy these narratives to argue for lax regulatory oversight is itself a recognition of their worth. But amid these framings, policymakers and commentators shouldn’t miss that democratically regulating technology is inherently vital to democracy.

Those who claim the United States does not have a history of oppressive surveillance need to read books like Simone Browne’s Dark Matters: On the Surveillance of Blackness or articles like Alvaro M. Bedoya’s “The Color of Surveillance.” Surveillance in the US goes back to the transatlantic slave trade, and its use has consistently targeted, or had its worst impact on, marginalized and systemically oppressed communities.

Post-9/11 surveillance of Muslim communities—including through CIA-NYPD cooperation—and the FBI’s COINTELPRO from 1956 to 1971, which targeted, among others, Black civil rights activists and supporters of Puerto Rican independence (though also the KKK), are notable state surveillance programs that may come to mind. But the history of surveillance in the US is much richer, from custodial detention lists of Japanese Americans to intense surveillance of labor movements to stop-and-frisk programs that routinely target people of color.

Thus, “rather than seeing surveillance as something inaugurated by new technologies, such as automated facial recognition or unmanned autonomous vehicles (or drones),” Browne writes, “to see it as ongoing is to insist that we factor in how racism and antiblackness undergird and sustain the intersecting surveillances of our present order.” Browne, along with numerous other scholars, lays bare the origins of digital surveillance and harm that still have oppressive and disparate effects today.

Virginia Eubanks’ Automating Inequality details the use of improperly regulated algorithms in state benefit programs, often with errors and unfairness that reinforce a “digital poorhouse.” These algorithms monitor, profile, and ultimately punish the poor across the US, as in Indiana, where a program rejecting public benefit applications treated application mistakes as a “failure to cooperate.” Ruha Benjamin’s Race After Technology explores how automation can deepen discrimination while appearing neutral, the sinister myth of algorithmic objectivity. The obvious example might be facial recognition, but it goes much further: sexist résumé-reviewing algorithms, skin cancer predictors trained mostly on lighter-toned skin, gender and ethnic stereotypes literally quantified in the word embeddings used in machine learning.

Safiya Umoja Noble is another scholar who has revealed these deep-seated issues. In Algorithms of Oppression, she writes that search engine queries for “‘Black women’ offer sites on ‘angry Black women’ and articles on ‘why Black women are less attractive,’” digitally perpetuating “narratives of the exotic or pathetic black woman, rooted in psychologically damaging stereotypes.” Algorithmic unfairness goes well beyond technical design; it also reflects a US digital culture that forgoes discussion of how tech is interwoven with structural inequalities. Noble writes, “When I teach engineering students at UCLA about the histories of racial stereotyping in the US and how these are encoded in computer programming projects, my students leave the class stunned that no one has ever spoken of these things in their courses.”

Despite clear and innumerable examples of how digital surveillance and algorithmic decisionmaking perpetuate harm, policymakers and policy wonks far too often call attention to digital abuses in other countries while ignoring the need for democratic tech regulation in our own. Perhaps most notably, some members of Congress continue framing needed regulatory action against large tech firms as a trade-off with US global competitiveness. None of this is to endorse political relativism; the United States is not, as some dictators like to suggest, just as unfree as many other countries. But the US needs to curb digital harms for its own sake, to protect its own citizens, not just because of geopolitical considerations.

The United States doesn’t have adequate federal privacy protections to restrict rampant data collection, sale, and exploitation by private companies. Law enforcement use of facial recognition is growing rapidly, with few clear and consistent rules and little transparency to begin with. Dependencies have been built on platforms like Facebook, whose chief executive, as Siva Vaidhyanathan recently argued, refuses to address fundamental issues with the platform. And much of this digital surveillance and algorithmic decisionmaking occurs with government organizations and companies intertwined: smart doorbell cameras and police partnerships, racist risk assessment algorithms in US courts, data brokerage firms fueling deportations.

While there again may be geopolitical considerations around tech regulations and governance best practices (e.g., contrary to how some might frame it, could privacy rules make US firms more trustworthy overseas?), overfocusing on those points ignores the inherent need to curb these digital abuses at home. For a nation that should strive to uphold democratic ideals, democratic tech regulation is critical in and of itself.

Fixating purely on geopolitical reasons to curb digital harms in the US can also do damage beyond distracting from its normative importance. In her book The Known Citizen: A History of Privacy in Modern America, Sarah E. Igo notes that during the Cold War, “from the vantage point of those charged with the nation’s security, the risks inherent in A-bombs and subversive activity explained the need to know, test, and vet people as thoroughly as possible.” In a fashion akin to the present, policymakers were so concerned about real and perceived threats to state security that they expanded surveillance powers. But this, Igo writes, raised a question: “The inquisitorial procedures of the House Un-American Activities Committee, the use of political informants, the state controls over information and the press, and the policing of dissent, all in the name of staunching the Communist threat, led some to ask: Was the United States approximating its totalitarian foe in the effort to contain it?”

It shouldn’t have taken recent protests to bring surveillance issues to the fore; many scholars and activists have been raising alarms for years. But it is time that those speaking of democratic tech regulation only because of geopolitical competition, or only because we don’t want “digital authoritarianism” (and we shouldn’t), explicitly recognize the inherent importance of curbing digital harms to equitably protect everyone in the US. A functioning democracy requires it.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints.
