
To See the Future of Disinformation, You Build Robo-Trolls

Jason Blazakis’ automated far-right propagandist knows the hits. Asked to complete the phrase “The greatest danger facing the world today,” the software declared it to be “Islamo-Nazism,” which it said will cause “a Holocaust on the population of Europe.” The paragraph of text that followed also slurred Jews and admonished that “nations who value their peoples [sic] legends need to recognize the magnitude of the Islamic threat.”

Blazakis is director of the Center on Terrorism, Extremism, and Counter-Terrorism at the Middlebury Institute of International Studies, where researchers are attempting to preview the future of online information warfare. The text came from machine-learning software they had fed a collection of manifestos from right-wing terrorists and mass murderers such as Dylann Roof and Anders Breivik. “We want to see what dangers may lie ahead,” says Blazakis.

Facebook and other social networks already fight propaganda and disinformation campaigns, whether originating from terrorist groups like ISIS or from accounts working on behalf of nation-states. All evidence suggests those information operations are mostly manual, with content written by people. Blazakis says his experiments show it’s plausible that such groups could one day adapt open-source AI software to speed up the work of trolling or spreading their ideology. “After playing with this technology, I had a feeling in the pit of my stomach that this is going to have a profound effect on how information is transmitted,” he says.

Computers are a long way from being able to read or write in the way people do, but in the past two years AI researchers have made significant improvements to algorithms that process language. Companies such as Google and Amazon say their systems have gotten much better at understanding search queries and translating voice commands.

Some people who helped bring about those advances have warned that improved language software could also empower bad actors. Early this year, independent research institute OpenAI said it would not release the full code for its latest text-generation software, known as GPT-2, because it might be used to create fake news or spam. This month, the lab released the full software, saying awareness had grown of how next-generation text generators might be abused and that, so far, no examples of misuse had come to light.

Now some experts in online terrorism and trolling operations—like Blazakis’ group at Middlebury—are using versions of OpenAI’s GPT-2 to explore those dangers. Their experiments involve testing how well the software can mimic or amplify online disinformation and propaganda material. Although the output can be nonsensical or rambling, it has also shown moments of unnerving clarity. “Some of it is far better-written than right-wing text we have analyzed as scholars,” says Blazakis.

Philip Tully, a data scientist at security company FireEye, says the notion that text-generating software will become a tool for online manipulation needs to be taken seriously. “Advanced actors, if they’re determined enough, are going to use it,” he says. FireEye, which has helped unmask politically motivated disinformation campaigns on Facebook linked to Iran and Russia, began experimenting with GPT-2 over the summer.

FireEye’s researchers tuned the software to generate tweets like those from the Internet Research Agency, a notorious Russian troll farm that used social posts to suppress votes and boost Donald Trump during the 2016 presidential election. OpenAI initially trained GPT-2 on 8 million webpages, a process that gave it a general feel for language and the ability to generate text in formats ranging from news articles to poetry. FireEye gave the software additional training with millions of tweets and Reddit posts that news organizations and Twitter have linked to the Russian trolls.
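
Neither the article nor FireEye spells out the exact training recipe, but the general approach (continuing to train the publicly released GPT-2 weights on a domain-specific corpus) can be sketched with today's open-source tooling. The snippet below is a minimal illustration that assumes the Hugging Face transformers and datasets libraries and a hypothetical file, troll_tweets.txt, with one post per line; it is not FireEye's actual setup.

# Minimal fine-tuning sketch: continue training public GPT-2 on a small corpus.
# "troll_tweets.txt" is a hypothetical stand-in for the training data.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# One training example per line; tokenize and drop the raw text column.
dataset = load_dataset("text", data_files={"train": "troll_tweets.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-troll",
        num_train_epochs=3,
        per_device_train_batch_size=8,
    ),
    train_dataset=dataset,
    # mlm=False gives standard left-to-right language-model training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt2-troll")  # reused in the scoring sketch further below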

Afterward, the software could reliably generate tweets on political topics favored by the disinformation group, complete with hashtags like #fakenews and #WakeUpAmerica. “There are errors that crop up, but a user scrolling through a social media timeline is not expecting perfect grammar,” Tully says.
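
Once a model has been fine-tuned this way, producing new posts is a matter of sampling from it. The prompt and decoding settings below are illustrative choices, not details from FireEye's experiment.

# Sketch: sample a few short, tweet-like continuations from the tuned model.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2-troll")  # hypothetical fine-tuned weights from above

prompt = "The mainstream media won't tell you"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,            # sample rather than take the single likeliest word
    top_p=0.9,                 # nucleus sampling keeps only the most probable words
    temperature=0.8,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))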

This was not an especially difficult task: Much of the work was done by a single graduate student, the University of Florida’s Sajidur Rahman, over the span of a three-month internship. That suggests even a trolling operation without the backing of a nation-state could access the technology. However, a large-scale online disinformation campaign would require much greater effort, since trolls must also maintain and coordinate collections of phony social accounts.

FireEye’s project also showed that AI language software could help detect disinformation campaigns. The company adapted GPT-2 into a kind of disinformation detector that scores new tweets on how similar they are to prior tweets from the IRA. The tool could help FireEye’s analysts track information operations on Facebook and other sites.
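
The article does not describe how FireEye’s scoring works internally. One simple way to turn a fine-tuned language model into a similarity score is to compare how predictable a tweet is under the IRA-tuned model versus the unmodified base model; text the tuned model finds much easier to predict resembles its training corpus. The sketch below takes that approach, with gpt2-troll again standing in for the hypothetical fine-tuned weights.

# Sketch of one plausible scoring scheme (not necessarily FireEye's method):
# compare a tweet's per-token loss under the tuned model and the base model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
base = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tuned = GPT2LMHeadModel.from_pretrained("gpt2-troll").eval()  # hypothetical path

def avg_nll(model, text):
    """Average negative log-likelihood per token of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def ira_similarity(text):
    # Positive scores mean the tuned model predicts the text better than
    # plain GPT-2, i.e. the tweet "sounds like" the fine-tuning corpus.
    return avg_nll(base, text) - avg_nll(tuned, text)

print(ira_similarity("#WakeUpAmerica they are lying to you about the election"))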

Blazakis and his fellow Middlebury researchers weren’t focused exclusively on generating extremist right-wing propaganda. The team, to which OpenAI had granted early access to the full version of GPT-2, made three other ideology bots: One was conditioned on speeches from recently deceased ISIS leader Abu Bakr al-Baghdadi, another on writings from Marxist-Leninist thinkers including Mao Zedong, and a third on anarchist books and magazine articles.

Like FireEye, the Middlebury researchers also built tools that attempt to flag machine-generated text. Although the results were promising, it would be a mistake to think spam-style filtering could make the internet immune to this form of disinformation. Blazakis expects such text to become more convincing over time, while filtering tools will not be widely deployed. “The average person reading Twitter or a Reddit board is not going to have access to AI detection tools,” he says.
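
The story does not detail the Middlebury detectors either. A common heuristic from this period, popularized by research tools such as GLTR, is to check how often each token in a passage falls within a language model's most likely next-word guesses: sampled text tends to stay in that high-probability head, while human writing strays from it more often. A rough sketch, using the public GPT-2 as the reference model:

# GLTR-style heuristic sketch (not the Middlebury tool): fraction of tokens
# that land in GPT-2's top-k predictions; values near 1.0 are suspicious.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def top_k_fraction(text, k=50):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                      # (1, seq_len, vocab)
    top_k = logits[0, :-1].topk(k, dim=-1).indices      # model's guesses at each step
    next_tokens = ids[0, 1:].unsqueeze(-1)              # the tokens that actually follow
    hits = (top_k == next_tokens).any(dim=-1)
    return hits.float().mean().item()

# A real deployment would calibrate a threshold on known human and machine text.
print(top_k_fraction("The greatest danger facing the world today is"))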

The Middlebury researchers now plan to test how persuasive AI-made text might be. They will soon begin trials testing whether people in the lab can tell the difference between real extremist writings and those generated by an ideological automaton.

Updated 11-19-19, 7:25 pm EST: This story was updated to correct the spelling of Jason Blazakis's last name.
