Fake, manipulated, and misattributed photos and videos have flooded social media feeds since the advent of social media itself. During times of crisis and political polarization, it only gets worse, as phony images spread like urban legends, propping up fringe conspiracy theories and mainstream political propaganda alike. Examples are endless, and constantly refreshed. Slowing down a video was all it took to make Speaker of the House Nancy Pelosi seem to be drunkenly slurring. Careful cropping made the anti-quarantine protests appear more populated than they ever were. During both the Covid-19 pandemic and the George Floyd protests against systemic racism and police brutality, images from other countries and other years were used to suggest that the situation was either more or less extreme than it actually was.
It’s not just random netizens who get taken in by these images and videos, or who help make them. Last week, Fox News published misleading images of protests in downtown Seattle, mashing up multiple photos from different days and locations to collage together a horror story of its own making: a masked man with a giant rifle in front of a smashed-up Old Navy.
Many criticized Fox for publishing such an image without disclosing that it had been digitally altered, and the images have since been removed from Fox’s website, but they won’t be the last misleading images to make headlines, particularly not as America dives headlong into its presidential election season. “We will see these techniques used in the coming year,” says Jen Golbeck, who studies algorithms and malicious social media activity at the University of Maryland. “It is absolutely not just a technique of the right. People on both sides are using image manipulation to make their point and appeal to people’s existing biases.” In early June, some left-leaning social media users went wild comparing an infamous photo of President Trump holding a Bible in front of a church to an eerily similar photo of Adolf Hitler. Trouble is, the photo of Hitler was manipulated, and few people bothered to check before sharing.
The internet didn’t create the practice of using altered images to score political points. Image manipulation has existed for as long as there have been photographs. Stalin was notorious for wiping political enemies from official images long, long before Photoshop, and other governments got up to plenty of image-based propagandizing during World War II as well. “In the 1940s, there were very few institutions that could actually produce high quality images. You had to have a lot of money behind you,” says Monica Rankin, a propaganda historian at the University of Texas at Dallas. “It was also pretty unsubtle: These are the good guys, these are the bad guys, this is how you should feel.” Now anyone with a halfway decent smartphone can alter an image or a video well enough that it would fool most at first glance, and propaganda works more by innuendo and analogy than patriotic morality plays. No wonder well-intentioned people are so easily misled.
How to Spot Manipulated Images and Videos
DIY Digital Sleuthing
Generally speaking, image manipulation strategies fall into the following categories. First, there’s composition, where things are added to an existing image, as Fox News did with the imagery from the Seattle protest. Then, there’s elimination, which covers both removing objects from an image and misleading cropping. Other images are just slightly retouched, their meaning changed simply by blurring out the background or someone’s face. Lastly, some misleading images are themselves entirely genuine, just wildly out of context—like an image of a police station that burned years ago being used as evidence of current looting.
To catch retouched and composited images, you’re going to have to start zooming in. In some corners of the internet, spotting manipulated images is already a popular pastime, one that has dedicated online communities like the subreddit r/badphotoshop. Typically, online sleuths turn their attention to advertising materials, magazine photo spreads, and celebrity social media posts. (Remember the time Vanity Fair ran a photo of a three-handed Oprah and a triple-legged Reese Witherspoon?) Whether you’re spotting additional limbs or a pasted-in Confederate flag, the principles are similar, and studies have shown that people who are more familiar with photo editing techniques are better equipped to identify fakery.
“Looking for digital artifacts is a tool of the Photoshop investigation community,” Golbeck says. “You’ll see wavy lines where they’re not supposed to be or a blurry spot that wouldn’t be there if [the image] was really what it was purported to be.”
You know what pictures are supposed to look like. A poorly altered image or video may trip your inner alarms, which are sensitive to things like impossible lighting and angles even if your conscious mind isn’t. Golbeck’s most basic advice: if it looks “wrong,” it probably is. Still, the reason that altered images and videos are such a problem is that many are flawless to the naked eye. If you’re a little more Photoshop savvy, you can use techniques like edge detection to see where images have been artificially put together, or check the histograms for gaps in what should be continuous ranges, another sign of editing. “The more you practice the better you get,” Golbeck says.
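For the curious, two of those checks can be sketched in a few dozen lines of plain Python. This is a simplified illustration, not a forensics tool: the image here is a toy grayscale picture represented as a list of rows of 8-bit values (that format is an assumption for the demo), run through a Sobel edge map, which lights up the hard seams that splicing can leave behind, and a count of empty bins inside the histogram’s used tonal range, the telltale “comb” that contrast and levels edits produce.

```python
# Toy versions of two forensic checks: an edge map and a histogram
# gap counter. Images are lists of rows of 8-bit grayscale values,
# an assumption made for this demo.

def sobel_edges(img):
    """Return a map of edge magnitudes. Unnaturally sharp seams where
    two pictures were spliced together show up as strong edges."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel gradients
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            edges[y][x] = abs(gx) + abs(gy)
    return edges

def histogram_gaps(img, levels=256):
    """Count empty bins inside the used tonal range. A comb of gaps
    in an otherwise continuous histogram is a classic sign of levels
    or contrast adjustments."""
    counts = [0] * levels
    for row in img:
        for v in row:
            counts[v] += 1
    used = [i for i, c in enumerate(counts) if c]
    lo, hi = used[0], used[-1]
    return sum(1 for i in range(lo, hi + 1) if counts[i] == 0)
```

On a smooth gradient the histogram has no internal gaps; stretch the contrast and gaps appear, and pasting a bright patch onto that gradient makes the edge map spike along the seam.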
Consider the Source
To catch badly cropped or misattributed images, you have to find the original. The easiest way to trace an image to its source the way a researcher or reporter would is to do a reverse image search. Google Images, or alternatives like TinEye, will help you here. If you’ve never done a reverse image search, you basically upload an image you’ve seen and the search engine will surface other examples of that image or similar images. It’s the best way to find out if the suspect image you found is actually altered or a composite, or taken in Spain in 2014 rather than Nebraska in 2020.
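Reverse image search engines build on compact image “fingerprints” rather than raw pixel comparison. A minimal sketch of one such fingerprint, an average (perceptual) hash, shows why a brightened or lightly recompressed copy of a photo still matches its original; the list-of-rows grayscale format is an assumption for the demo, and real services index billions of far more robust fingerprints.

```python
# A back-of-the-envelope perceptual hash: downscale the image to an
# 8x8 grid of block averages, then record which blocks are brighter
# than the overall mean. Similar pictures produce similar bit patterns.

def average_hash(img, size=8):
    """Return a list of size*size bits fingerprinting the image."""
    h, w = len(img), len(img[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            ys = range(by * h // size, (by + 1) * h // size)
            xs = range(bx * w // size, (bx + 1) * w // size)
            vals = [img[y][x] for y in ys for x in xs]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    return [1 if b > mean else 0 for b in blocks]

def hamming(h1, h2):
    """Number of differing bits; a small distance means
    'almost certainly the same picture.'"""
    return sum(a != b for a, b in zip(h1, h2))
```

A uniformly brightened copy shifts every block average and the mean by the same amount, so its hash barely changes, while a tonally inverted (or simply unrelated) image lands far away.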
Of course, not everyone has time to check each image over for signs of skullduggery every time they scroll through Twitter. That’s why getting your information from reliable sources in the first place is so important. Approach each image you see with skepticism. Does it come from a media outlet you recognize? Is the photographer credited? Does it have a caption that explains what’s happening in detail? All of these things can be faked, of course, but not without effort, and we’re trying to avoid getting taken in by bargain basement propagandists here. “I don’t like being fooled by people,” says James O’Brien, an expert in computer graphics and image and video forensics at UC Berkeley. “I think people should take that attitude. When you see the candidate you hate kicking puppies, stop and ask yourself where is this video coming from? How do I know it's real?” If it confirms all your bitterest feelings on a subject, that is a sign of truthiness rather than truth.
Rely on the Community and Experts
If it sounds like avoiding getting duped is a lot of hard work, that’s because it is. Fortunately, the internet is full of professionals and committed amateurs who may well have done the work for you if you know where to look. “I normally wouldn’t send people to the comments, but I’ve been really impressed with crowdsourced fact-checking over the last few months. People are putting in the work to say ‘this is real’ or debunking it if it’s manipulated by pointing to the original thing,” says Golbeck. “The way algorithms are working now, fact-checks tend to surface toward the top, in the first 10 or 20 replies.”
Some platforms, like Twitter, YouTube, and Facebook, have been making concerted efforts to post fact-checks or flag dubious information themselves. Facebook even has image-manipulation detection technology. However, most measures put in place by platforms have been deployed inconsistently and imperfectly, so if you are going to get your news from social media, Golbeck recommends trusting some platforms over others simply on the strength of their community discourse. Twitter posts, because they tend to be public, also tend to get fact-checked by the community more quickly than posts on Instagram, where conversations aren’t as well threaded and you can’t link to outside sources. Facebook remains a misinformation haven because it’s not as publicly visible. “The community response isn’t going to be the same if it’s just your uncle posting something,” says Golbeck. “You’re a little more on your own.” Which means you are more likely to have to bust out Photoshop or do some reverse image searching yourself—and therefore more likely to do nothing at all.
Why You Might Mess Up Anyway
You Aren’t Good at This
When Cindy Shen, who studies social media and misinformation at UC Davis, began researching visual misinformation in 2014, she was struck by how little anyone knew about it, considering that most people consume information in formats that are at least a mixture of text and images, if not entirely visual. So she presented people with (in her view, somewhat badly) altered images and asked if they thought they were genuine. “The results were astonishing. People are actually very bad at detecting fake images,” she says. “Almost all people assume they are real by default.” Even when some participants caught on that the images they were looking at might be faked, they consistently misidentified which elements of the photos had actually been manipulated.
In further research, Shen found that not only are people bad at identifying manipulated images, they’re bad at knowing which images to trust, too. The general understanding in academia is that the more credible the source, the more likely people are to trust it. Yet when Shen showed people the same images and claimed they were from a mainstream outlet or a fringe one, or from Bill Gates (then a highly trusted person) or a Twitter rando, no one seemed to care. “None of these cues mattered at all. People tend to rate the credibility pretty consistently,” says Shen. The biggest indicator of whether people believed an image was genuine was whether they agreed with its contents. Shen notes that people with higher levels of digital literacy and Photoshop experience tended to be better at spotting fakery, but the overall outlook isn’t great.
“It’s not surprising,” Shen says. “But very depressing.”
… But Computers Sure Are
Computers are excellent at detecting fakery: In a lab environment, researchers can identify fakes every time. Unfortunately, computers are also excellent at generating manipulated images, and the skill level required to make them do so is constantly dropping. In recent years, that’s even become true of manipulated videos like deepfakes. “Someone using deepfakes doesn’t have to be a skilled artist,” says O’Brien. “They just need to upload the images, designate the faces, and give it some annotations to get started. Luckily for us they’re not perfect yet.” (In most deepfake videos, you might be able to see that there’s something not quite right about the mouth and chin, but the software knowledge to prove it is beyond the average user.)
That luck will be short-lived: According to O’Brien, artificial intelligence will soon push the verisimilitude of computer-generated fake images and videos beyond what even skilled human editors can produce. “Citizens are helpless,” says Shen. “I can’t in good conscience say that they should be able to decide for themselves what is real or fake because they can’t.”
Solutions to Push For
Before you resign yourself to never knowing if images are real or fake ever again, know that you can support and advocate for solutions to this problem that don’t involve you becoming a full-time digital sleuth. “The community interest in the truth is the best solution in the short term,” Golbeck says. “Whether you’re doing that research yourself or finding other people who have, amplify the voices that do fact check.” That can be as simple as pressing Like. You can also encourage social media platforms to fact-check images more consistently and aggressively, though O’Brien notes that a universal, button-push fake detector will only result in an arms race between image manipulation algorithms and the algorithms detecting image manipulation.
A more permanent solution might need to be implemented further up the chain, before the dubious photo ever gets taken. “If you found a new Picasso, you’d want to look at the provenance: who sold it before, who bought it,” says Murat Kantarcioglu, a data security and privacy expert at the University of Texas at Dallas. “In the future, we will have phones that have special hardware that signs the image as we take it, and it will create a provenance chain.” A version of these signatures actually already exists, hidden in the specifics of how software writes a JPEG, but few people outside a lab are going to have access to raw files, much less the wherewithal to pore over code. “We should demand things that let us tell when something is fake or not,” O’Brien says. “We also have to be willing to wait.” Propaganda is fast. Fact-checks and verified images are slow—for now.
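The capture-time signing Kantarcioglu describes can be sketched in a few lines. Real provenance schemes rely on public-key signatures baked into camera hardware; since Python’s standard library has no asymmetric crypto, this toy stands in an HMAC with a device-held secret, and the key and workflow here are illustrative assumptions, not any shipping camera’s design.

```python
# A toy capture-time signing scheme: the camera tags the raw pixels
# the moment a photo is taken, and anyone can later check whether
# the file still matches what was signed. The device key is a
# hypothetical stand-in for hardware-held signing keys.

import hashlib
import hmac

DEVICE_KEY = b"secret-burned-into-camera-hardware"  # illustrative

def sign_at_capture(image_bytes):
    """Compute a tag over the raw image bytes at capture time; the
    tag travels with the file as its provenance record."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes, tag):
    """Check whether the pixels still match what the camera signed.
    Any edit to the bytes invalidates the tag."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The point of the design is that verification fails on any alteration at all, which is exactly what makes a provenance chain trustworthy: an edited photo can’t ride on the original’s signature.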
As technology improves and election season begins, paying attention to the images and video you consume could be the difference between maintaining rational discourse and relentless bandwagoning. “Their power is very clear,” O’Brien says. “A single video has made literally half the country come out to protest against police brutality and racism.” Footage of George Floyd’s death is tragically real, but just imagine, for a moment, if it wasn’t.