Life in a nightmarish dystopia isn’t all bad. For Winston Smith, the tragic everyman of George Orwell’s 1984, the “greatest pleasure in life” is his work at the Ministry of Truth, where lies are manufactured and truths tossed down memory holes. To help “correct” the historical record, Winston takes pride in inventing fake statistics, fake events, and fake people, even revising newspapers to include phantoms like Comrade Ogilvy, a made-up war hero. In essence, the MoT is a fake news factory.
In the real world of 2020, we’re witnessing an inversion of the top-down fakery of 1984. High-quality fakes are bubbling up as AI is increasingly democratized. Meanwhile, members of “the state,” far from carrying out the efficient machinations imagined by Orwell, are themselves flummoxed by, and often the targets of, malicious fakes (like the “Drunk Pelosi” video). In a recent essay about the enduring relevance of 1984, George Packer writes, “The Ministry of Truth is Facebook, Google, and cable news. We have met Big Brother and he is us.” Any one of us can be a propagandist. What’s more, the powers that be lack an effective regulatory mechanism for dealing with the next phase of the disinformation age, when indistinguishable fakes will flood the internet. While countless commentators have viewed 1984 as a black cauldron simmering with horrors to avoid, we may need to salvage the idea of a Ministry of Truth in order to preserve what’s left of our shared reality.
Consider that our forgeries are already much better and far stranger than those of Big Brother. Right now, you can experience the pleasure of a well-done fake with only a click or a tap, no Winston Smith required. Visit the website This Person Does Not Exist, and refresh as many fictitious comrades as your heart desires. When the digital ghosts get too weird, you can marvel at a deepfake face swap: Behold, if you dare, Steve Buscemi as Jennifer Lawrence. Or you can fake yourself with FaceApp.
Fakery isn’t always harmless. More than 90 percent of deepfakes are pornographic, many of them “revenge porn.” Criminals have deployed deepfaked audio to impersonate CEOs. Synthetic content, experts warn, could be used to influence elections, sway financial markets, or trigger wars. Back in June 2019, House Democrat Adam Schiff, a man permanently on the cusp of letting out a long, tired sigh, led a congressional hearing on deepfakes. All told, it was a gloomy affair.
That day, representatives learned that a “high school kid with a good graphics card can make this stuff.” That the creators of malicious deepfakes (the bad guys) and those working to identify and intercept fake content (the good guys) are locked in an unending arms race. Hany Farid, an expert in digital forensics at UC Berkeley, has said, “We are outgunned … The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.” Finally, representatives learned of the tipping point of indistinguishability: In a few years, it will be impossible for the naked eye to distinguish a real video from a deepfake. The prospects are harrowing: perfect fakes, creatable by anyone, unleashed at scale and difficult to discern. It’s no wonder that during the hearing Washington representative Denny Heck repeatedly quoted from Dante’s Inferno: “Abandon hope all ye who enter here.”
Putting aside such brash pessimism, what can be done? The platforms on which fake content appears (Facebook, Instagram, YouTube, Twitter) have taken some steps to combat disinformation. In a recent blog post, Facebook pledged to “remove misleading manipulated media” if “it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.” This is important, but detection is a serious challenge, and few trust Big Tech to fully self-regulate. Sure, there’s a profusion of laws around identity theft and defamation that might dissuade creators of harmful fakes, but it’s unclear who will enforce them or how.
As law professors Danielle Citron and Robert Chesney describe in their paper “Deepfakes: A Looming Challenge for Privacy, Democracy, and National Security,” three federal agencies (the FCC, the FEC, and the FTC) could in theory regulate the dissemination of fake content, but “on close inspection, their potential roles appear quite limited.” The FCC’s jurisdiction is limited to radio and television. The FEC is concerned only with the electoral process. The FTC oversees “fake advertising,” but deepfakes aren’t typically hawking products or services.
So, what about a new federal agency? A central body tasked with combating disinformation, parsing fact from fiction and thereby ensuring Americans’ collective sanity when the flood of fakes truly arrives: a Bureau of Information, a Department of Facts, a Ministry of … Truth!
Full circle, and we’re back at dystopia. The idea might sound absurd, unimaginable even: Washington bureaucrats regulating reality itself, dictating to Americans what’s true and what isn’t.
Is it really so crazy? The EPA protects our environment, the FDA protects our bodies, the DHS protects our borders. In the era of indistinguishability, difficult choices will need to be made in order to protect our minds. When the fakes come for you and yours—when, for example, your adolescent child is deepfaked by an internet bully—you might want a Ministry of Truth that actually lives up to the name, that doesn’t falsify but certifies the truth, that assertively stamps its authority atop fake videos: “This content is not real.” American history includes no shortage of necessary (if at first uneasy) interventions, in which citizens trade some degree of individual autonomy for collective peace of mind: “FDA-approved” food and drugs; “MoT-approved” audio and video.
Obviously, a name change would be in order. Orwell’s “Ministry” is something to think with and think against. Dorian Lynskey, author of The Ministry of Truth: The Biography of George Orwell’s 1984, told me that “for Orwell, it’s not the fact that the Ministry is a centralized agency that’s problematic. The problem is that it lies.” According to Lynskey, “the whole thrust of the novel is the horror of what happens if objective truth does not exist.” When millions feel that horror close to home, people will want a regulatory force—no matter what it’s called.
In practice, a new governmental entity could spearhead the application of technical solutions and industry standards, providing structure to what’s currently a disorganized deepfake resistance of concerned lawyers and lawmakers, activists and technologists. On the tech side, efforts are being made in two areas: authentication on the front end (using “verified-at-capture” technology that proves the “provenance,” or origin, of a particular piece of media) and forensic detection on the back end (using machine learning algorithms to identify, rather than create, deepfakes). Darpa is already pouring resources into detection efforts, as are Facebook and Google, but most observers agree that much more coordination between entities is needed. Witness, a nonprofit organization working at the intersection of human rights and media, recently published a report on media authentication entitled “Ticks or It Didn’t Happen.” The report imagines a world in which “every piece of media is expected to have a tick,” like a checkmark, “signaling authenticity.” Witness raises more than a dozen dilemmas with such a scenario while recognizing its potential eventuality. Good government could organize, enforce, and standardize authentication practices.
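The “verified-at-capture” idea above can be sketched in a few lines of code. In this minimal, hypothetical example, a capture device signs a hash of the media bytes the moment they are recorded, producing the kind of authenticity “tick” the Witness report imagines; a verifier later recomputes the hash and checks the signature. (Real provenance systems use public-key signatures backed by hardware keys; the shared HMAC key and function names here are illustrative assumptions, not any actual standard.)

```python
# Sketch of "verified-at-capture" media provenance, stdlib only.
# The shared DEVICE_KEY stands in for a hardware-backed signing key.
import hashlib
import hmac

DEVICE_KEY = b"hypothetical-device-secret"

def sign_at_capture(media: bytes) -> str:
    """Return an authenticity 'tick' computed when the media is captured."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(media: bytes, tick: str) -> bool:
    """Recompute the tick; any edit to the media bytes invalidates it."""
    expected = sign_at_capture(media)
    return hmac.compare_digest(expected, tick)

original = b"frame data from the camera sensor"
tick = sign_at_capture(original)
print(verify(original, tick))                 # True: untouched media
print(verify(b"deepfaked frame data", tick))  # False: content was altered
```

The point of the sketch is the asymmetry it creates: a platform or regulator doesn’t need to detect a fake after the fact if authentic media carries a tick that any tampering destroys.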
When it comes to technological solutions to the problem of malicious fakes, “there’s no silver bullet,” says Robert Chesney. I asked Chesney what a governmental response might look like, and he envisioned a “reasonably high-powered inter-agency task force or council or entity where all the relevant stake-holding federal entities—and hopefully state and local representation in some fashion—try to coordinate awareness of problems as they unfold.” Relevant stakeholders might include the Department of Homeland Security, which has previously tackled digital threats with the creation of the Cybersecurity and Infrastructure Security Agency, as well as the FCC, FEC, and FTC.
Federal oversight wouldn’t need to mean Orwell’s ministry. Our hypothetical entity could function more like connective tissue than menacing monolith, putting private companies, governmental departments, non-profit organizations, and university researchers into close and regular contact. At a time when the Administration often behaves like a dystopian MoT, it would be important to “watch the watchmen” and create safeguards against political bias or factionalism. Authority would need to be distributed among relevant parties, lest any one group gain a monopoly on truth.
To envision this, imagine a scenario in which political candidates are being deepfaked en masse during an election year, with perfectly realistic video and audio sowing confusion about what’s being said during town halls, rallies, debates, interviews. Social media platforms alert a centralized agency to the threat and share relevant data; researchers get to work on the tricky task of detection; regulators ensure that these videos are labeled as fake across all media platforms and work to encourage wider adoption of verified-at-capture technology. In this way, a governmental institution could serve as a referee and an arbiter with some legal backbone and regulatory teeth. Its purview would necessarily be limited to cases when the truth wasn’t up for debate, when no “alternative facts” could be claimed. Did AOC say that she’d socialize American farming on January 24, 2024, in Des Moines, Iowa, or didn’t she? There’s only one answer.
Already, there are steps being taken in a regulatory direction. The DEEPFAKES Accountability Act was introduced in June but looks largely unenforceable. Congresswomen Anna G. Eshoo and Zoe Lofgren have proposed establishing a “Digital Privacy Agency” to “enforce privacy protections and investigate abuses” within the online sphere. Eventually, some form of legislation addressing synthetic media will be passed: the technology is becoming too powerful for Washington to ignore.
In the meantime, Americans should “prepare, not panic,” as Witness advises. No doubt, the next wave of fakes will be entertaining, surreal, and 100 percent meme-able; indistinguishable fakes will also generate confusion and further deepen our skepticism. Decisive action will be needed. Just as Winston Smith derives pleasure from crafting “delicate pieces of forgery,” Americans in the 2020s should find gratification in identifying and stomping out fakery. In doing so, we’d do well to remember that the message of 1984 isn’t “fear the government.” The message is to resist the notion that 2+2=5 before it’s too late.