Telegram Still Hasn’t Removed an AI Bot That’s Abusing Women

Messaging app Telegram is under pressure to crack down on an AI bot that has generated tens of thousands of non-consensual images of women on its platform.

Law enforcement bodies are looking into the activity of a deepfake bot, which is believed to have been used to produce explicit images of underage girls. Data protection regulators in Italy have also opened an investigation into its use, and access to the bot has been restricted on Apple’s iOS.

The scrutiny of Telegram comes as multiple investigations into the use of the messaging service have also discovered private groups sharing non-consensual “revenge porn” photos and videos of women that were not generated by AI. Reports from America, Italy, South Korea, and Israel have all detailed how Telegram has been used to share abusive images over the past year.

Researchers say Telegram has failed to act against the deepfake bot, which automates the creation of nude images of women. The bot uses a version of the DeepNude AI tool, originally created in 2019, to digitally remove clothing from photos of women and generate fake nude imagery in its place. Anyone can easily use the bot to generate images. More than 100,000 such images have been publicly shared by the bot in several associated Telegram chat channels, each with tens of thousands of members.

When researchers at the security firm Sensity discovered the bot on Telegram at the start of this year, they reported it to the messaging app. The hope was that the chat platform would eject the bot and put a stop to the way women were being abused with technology. However, this hasn’t happened.

Since Sensity revealed the bot’s existence in October, the groups that used it to share images have hidden it. “It is actually harder now to actually get to the bot,” says Giorgio Patrini, CEO and chief scientist at Sensity. “The groups that were advertising the bots on Telegram have essentially gone silent.”

A number of groups that used the bot have changed their names to avoid being identified. Many of the channels now share other content related to deepfake technology in general, and a public gallery of “nude” images created by the bot was wiped by its owner. Some of the channels have vanished completely.

Although the groups around the bot are not currently posting about it, it still exists and continues to work. “The bot has never been taken down by anybody,” Patrini says. “Since we went public the bot is still operational and still is today.” In one instance, the bot’s creator said it would keep operating under the radar. The creator, whose identity is unknown, did not respond to a request for comment.

At the end of October, the Telegram bot became inaccessible on iPhones and iPads and displayed a message saying it violated section 1.1 of Apple’s developer guidelines. Apple’s rules say “overtly sexual or pornographic material” is not allowed in apps accessible through the App Store. That message within Telegram has since been replaced with a generic warning that says the bot cannot be displayed.

Apple did not respond to questions about Telegram or whether it told the company to put restrictions in place. Apple says that it is unable to block content or display any messaging in apps it doesn’t own, but that it does notify developers if it finds any content that goes against the App Store’s rules. These rules say apps that contain text, photos, or videos uploaded by people must also include a way to filter “objectionable” material from being posted. The bot is still available on Android devices and Telegram's Mac application.

In one Telegram group chat about the bot, its owner says that Telegram has blocked mentions of its name. However, WIRED was unable to confirm this or any other action taken by Telegram. Neither Telegram’s spokesperson nor the service’s founder, Pavel Durov, responded to requests for comment. The company, which is believed to be based in Dubai but has servers around the world, has never publicly commented on the harm caused by the bot or its continued willingness to let it operate.

Since it was founded in 2013, Telegram has positioned itself as a private space for free speech, and its end-to-end encrypted mode has been used by journalists and activists around the world to protect privacy and evade censorship. However, the messaging app has run into trouble with problematic content. In July 2017, Telegram said it would create a team of moderators to remove terrorism-related content after Indonesia threatened it with a ban. Apple also temporarily removed it from its App Store in 2018 after finding inappropriate content on the platform.

“I think they [Telegram] have a very libertarian perspective towards content moderation and just any sort of governance on their platform,” says Mahsa Alimardani, a researcher at the Oxford Internet Institute. Alimardani, who has worked with activists in Iran, points to Telegram notifying its users about a fake version of the app created by authorities in the country. “It seems that the times that they have actually acted, it's when state authorities have got involved.”

On October 23, Italy’s data protection body, the Garante per la Protezione dei Dati Personali, opened an investigation into Telegram and asked it to provide data. In a statement, the regulator said the nude images generated by the bot could cause “irreparable damage” to their victims. Since Italian officials opened their investigation, Patrini has conducted further research looking for deepfake bots on Telegram. He says there are a number of Italian-language bots that appear to offer the same functionality as the one Sensity previously found; however, they do not appear to be working.

Separate research from academics at the University of Milan and the University of Turin has also found networks of Italian-language Telegram groups, some of them private and accessible only by invitation, sharing non-consensual intimate images of women that don’t involve deepfake technology. Some of the groups had more than 30,000 members and required members to share non-consensual images or be removed. One group focused on sharing images of women taken in public places without their knowledge.

“Telegram should look inward and hold itself accountable,” says Honza Červenka, a solicitor at law firm McAllister Olivarius, which specializes in non-consensual images and technology. Červenka says new laws are needed to force tech companies to better protect their users and clamp down on abusive automation technology. “If it continues offering the Telegram Bot API to developers, it should institute an official bot store and certify bots the same way that Apple, Google, and Microsoft do for their app stores.” However, Červenka adds, little government or legal pressure is being applied to make Telegram take this kind of step.

Patrini warns that deepfake technology is advancing quickly, and the Telegram bot is a sign of what is likely to come. The bot was the first time this type of image abuse had been seen at such a large scale, and it is easy for anyone to use—no technical expertise is needed. It was also one of the first times that ordinary members of the public were targeted with deepfake technology; previously, celebrities and public figures were the main targets of non-consensual AI porn. But as the technology is increasingly democratized, more instances of this type of abuse will be discovered online, he says.

“This was one investigation, but we are finding these sorts of abuses in multiple places on the internet,” Patrini explains. “There are, at a smaller scale, many other places online where images are stolen or leaked and are repurposed, modified, recreated, and synthesized, or used for training AI algorithms to create images that use our faces without us knowing.”

This story originally appeared on WIRED UK.
