When the European Union Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and responses to it underscore democracies’ confusing rhetoric on AI.
Over the past decade, high-level stated goals for regulating AI have often conflicted with the specifics of regulatory proposals, and what end states should look like isn’t well articulated in either case. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even as it varies from country to country, begins with resolving the discourse’s many contradictions and oversimplified characterizations.
The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
This is certainly better than the stagnation of many national governments, especially the US, on rules of the road for the companies, government agencies, and other institutions deploying AI. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.
But casting the EU’s regulation as “leading” simply because it’s first masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges for democratic AI strategy.
Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists on the harms facial recognition can inflict on marginalized communities and grave mass-surveillance risks.
The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.
The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.
The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals' private lives,” the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial recognition use but in fact has many broad carve-outs.
Law enforcement and security agencies undoubtedly want to use facial recognition—a likely explanation for why the prohibition rules in an earlier version of the proposal were subsequently weakened. The privacy-versus-security argument is a go-to move in the false-dichotomy playbook on tech policy. (Other routes include Facebook’s “regulate us or compete with China” narrative, or discussing privacy versus business competitiveness—as if better US privacy rules, for instance, couldn’t make American firms more trustworthy overseas.)
Yet a lengthy discussion of the risks of AI harming marginalized groups doesn't mesh with giving law enforcement broad authority to use facial recognition in practice. This and other problems, such as how the proposal defines AI—listing “statistical approaches” as one option is, to say the least, incredibly broad and vague—speak to an internationally urgent need: scrutinizing the assumptions underpinning what constitutes “democratic” tech legislation. Is allowing wide use of facial recognition really the democratic ideal of AI regulation?
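To see how sweeping a “statistical approaches” clause can be, consider that even an ordinary least-squares trend line, the kind of calculation any spreadsheet performs, is a statistical approach and could arguably fall within such a definition. A minimal sketch (a hypothetical illustration, not code drawn from or referenced by the proposal):

```python
# Ordinary least squares for y = a*x + b: a "statistical approach" in the
# most mundane sense, yet a broad definition of AI could sweep it in.
def fit_line(xs, ys):
    """Return the slope and intercept of the least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Fitting a perfectly linear series recovers slope 2, intercept 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Nothing here involves learning in any meaningful sense, which is the point: a definition that turns on “statistical approaches” gives regulators and companies alike enormous interpretive latitude.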
Tech-industry-aligned groups have already criticized the regulatory proposal for undermining market competitiveness, while some officials say it will boost market trust. For all the nuanced research and reporting on AI’s effects on society, the power of bumper-sticker rhetoric persists. It’s a broader problem with formulating democratic AI strategy, underscored in particular by Silicon Valley’s pushback against the commission.
Former Google CEO Eric Schmidt made his take on the proposal clear earlier this month: “Europe is doing things pretty much the wrong way right now,” he said at the Copenhagen Democracy Summit. “While there’s certainly a space for regulation, the most important thing for Europe to do is to do an analysis of its weaknesses relative to this Chinese threat, and American leadership, and get its act together, specifically, with more funding, more companies, more European investment in these areas, or Europe will then be essentially permanently behind.”
Schmidt’s comment not only fit into a well-established pattern of big American tech firms making arguments about Chinese competition to resist congressional regulation; it also echoed the convoluted and contradictory idea that underpins AI discourse in the US—and is now wielded with increasing fervor to rhetorically counter EU regulation.
The “arms race” rhetoric maintains a stranglehold on AI discussion in DC. It contains scattered grains of truth, such as that the US government and the Chinese government are indeed developing AI for military purposes. But it’s a bad framing, drawing on outdated Cold War metaphors to suggest, inaccurately, that all AI development, particularly between the US and China, is winner-take-all. The framing also imprecisely treats “AI” as a single technology, which becomes problematic when dealing with fundamentally different applications, like iPhone facial recognition versus song recommendations on Spotify, that may rely on different machine-learning techniques and different training data sets. Schmidt has nonetheless used this bumper-sticker idea to criticize the basis of the EU Commission’s AI proposal.
As I have argued previously, there is some degree of contradiction in claims that democracies must “beat” China in an AI arms race—quite literally suggesting a sprint to the same destination—which exist right alongside labeling the Chinese government’s digital rights abuses, including with artificial intelligence, an end state to which democracies should never aspire. American policymakers will even talk about the urgent need for democratic AI regulation while still adopting this rhetoric of “winning” a “race” with China. Technology firms weaponizing this rhetoric, for their part, lean on vague and contradictory explanations for why concerns about an authoritarian government using AI mean a democratic state must abandon strong regulation altogether.
In practice, this convoluted rhetoric on what democracies want from and with AI shapes (and distorts) regulation.
As Lucy Suchman noted in a recent post for NYU’s AI Now Institute, not only does the US’s influential National Security Commission on Artificial Intelligence base its findings on an AI “arms race” metaphor, it also has many members—including Schmidt, its chair and a current Alphabet stockholder—with “vested interests in increased funding for AI research and development.” Rarely, if ever, are these perspectives and assumptions discussed—including when the NSCAI’s report is taken as the supposed way forward on democratic AI strategy. Few policymakers ask: Who does this rhetoric benefit?
Similarly, the EU Commission’s AI regulatory proposal rests on its own assumptions, with global ramifications. It recommends prohibiting some uses of artificial intelligence, such as “social scoring.” But it permits law enforcement to use facial recognition in a variety of broadly defined scenarios, despite the clearly documented harms the technology inflicts on marginalized groups. Likewise, expressed concerns about “manipulative” or “exploitative” technologies might sound praiseworthy, but those terms will be defined by future rules and may thus run up against company claims that certain AI uses are needed for “competition.” Given the EU’s outsized global regulatory influence, technology included, these judgments matter.
All these contradictions highlight the long regulatory road ahead for the world's democracies. EU Commission members should remove the proposal’s broad exemptions to the prohibition on law enforcement use of facial recognition. They should also recognize explicitly, in the document itself, that European Union rules of the road on AI will have global norm-setting effects. Subsequent rounds of review and revision will hopefully clarify such terms as “exploitative” AI and “artificial intelligence” itself.
More broadly, policymakers in democratic countries must scrutinize far more rigorously the rhetoric used to characterize and inform democratic AI strategy. That scrutiny begins by taking stock of where a democracy stands today, where it wants to head, and what mechanisms it will employ to get there. Contradictions and lack of nuance within those visions will only continue to warp regulation and the construction of alternatives to unchecked state and corporate surveillance. And yet the power of bumper-sticker ideas and slogans raises difficult, compelling questions about how to frame those alternatives. Looking beyond the same few companies and commission reports for answers is one place to start.