A decade ago, Chatroulette was an internet supernova, exploding in popularity before collapsing beneath a torrent of male nudity that repelled users. Now, the app, which randomly pairs strangers for video chats, is getting a second chance, thanks in part to a pandemic that has restricted in-person social contact, but also thanks to advances in artificial intelligence that help filter the most objectionable images.
User traffic has nearly tripled since the start of the year, to 4 million monthly unique visitors, the most since early 2016, according to Google Analytics. Founder and chairman Andrey Ternovskiy says the platform offers a refreshing antidote of diversity and serendipity to familiar social echo chambers. On Chatroulette, strangers meet anonymously and don’t have to give away their data or wade through ads.
One sign of how thoroughly Chatroulette has cleaned up its act: an embryonic corporate conference business. Bits & Pretzels, a German conference about startups, hosted a three-day event on Chatroulette in September, including a Founders Roulette session that matched participants. “Without nudes though, but full of surprising conversations,” the conference heralded. Another change: Women now are 34 percent of users, up from 11 percent two years ago.
The AI that has helped shield visitors from unwanted nudity and masturbation has been a good investment, says Ternovskiy. It may also offer lessons for much larger social networks struggling to moderate content that can veer into falsehoods or toxicity. But Ternovskiy still dreams of a platform that creates happy human connections, and cautions that technology can’t deliver that alone. “I doubt the machine will be ever able to predict: Is this content desirable for my user base?” he says.
A 17-year-old Ternovskiy coded and created Chatroulette in November 2009 from his Moscow bedroom as a way to kill boredom. Three months later, the site attracted 1.2 million daily visitors. Then came the exodus. Ternovskiy dabbled in some ill-fated partnerships with Sean Parker and others to try to keep Chatroulette relevant. In 2014, he launched a premium offering that paired users based on desired demographics, which generated some revenue. He invested some of that money in cryptocurrency ventures that brought additional gains. Chatroulette today is based in Zug, Switzerland, a crypto hub.
In 2019, Ternovskiy decided to give Chatroulette one more spin, as a more respectable business, led by a professional team, with less “adult chaos.” The company was incorporated in Switzerland. Ternovskiy hired Andrew Done, an Australian with expertise in machine learning, as CTO. Earlier this year, Done became CEO. He was joined by a senior product researcher with a PhD in psychology, a community manager, a talent acquisition manager, and more engineers. Then Covid-19 hit, and traffic boomed.
The new team tapped the surge in traffic to conduct user research and test ways to moderate content, including AI tools from Amazon and Microsoft. It created a filtered channel, now known as Random Chat, designed to exclude nudity, alongside an Unmoderated channel. By demarcating the two channels, Chatroulette hoped to make the filtered feed feel safer and attract users interested in human connection. The unfiltered channel remains popular, but usage is shrinking, and Ternovskiy plans to eliminate it by the middle of 2021.
In June, Chatroulette brought in San Francisco-based Hive, an AI specialist, for a test on detecting nudity. Hive’s software also moderates content on Reddit. Executives were quickly impressed with Hive’s accuracy, especially in not flagging innocent users and actions. At the same time, Chatroulette tested moderation tools from Amazon Rekognition and Microsoft Azure; it had previously tried Google Cloud’s Vision AI.
“Hive is at a level of accuracy that makes it practical to use this technology at scale, which was not previously possible,” Done says. He says Hive is “so accurate that using humans in the moderation loop hurts the system’s performance. That is, humans introduce more errors than they remove.”
Hive’s cofounder and CEO, Kevin Guo, says the company’s tools benefit from its workforce of more than 2 million people in more than 100 countries annotating images with labels that include “male nudity,” “shirtless male,” and “gun in hand.” Guo says the distributed workforce inspired the company’s name. This training data feeds Hive’s model for predicting user behavior. The company attracts workers—who are paid per task completed—in part by offering payment in bitcoin. “Enabling payment through bitcoin was a big driver of growth for us, as word quickly spread one could ‘mine’ bitcoin by doing annotation tasks,” says Guo.
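Hive hasn’t published how it reconciles labels from its 2 million-plus annotators, but a common baseline for turning many noisy annotations into one training label is majority voting with a minimum agreement bar. The sketch below is illustrative only; the function name, threshold, and discard policy are assumptions, not Hive’s actual pipeline.

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.6):
    """Aggregate one task's labels from multiple annotators into a single
    training label. Majority voting with an agreement threshold is a common
    baseline; Hive's real aggregation method is not public."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    if votes / len(annotations) >= min_agreement:
        return label
    return None  # no consensus: discard the example or re-queue the task

# Two of three annotators agree, clearing the 60 percent bar:
consensus_label(["male nudity", "male nudity", "shirtless male"])
```

Tasks that fail the agreement bar are typically sent back out to more annotators rather than fed to the model, since a single disputed label is worse training signal than none.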
Another Hive moderation client, the social network Yubo, with more than 40 million users, dropped Amazon Rekognition and Google Cloud’s Vision AI in favor of Hive because it is cheaper and more accurate, says CEO Sacha Lazimi. Lazimi says Yubo still uses other services from Amazon and Google.
An Amazon Web Services spokesperson says the company’s vast offerings work very well for many clients big and small; Chatroulette and Yubo may have specialized needs. A Google Cloud spokesperson says the company’s computer vision service outranks Hive’s in a 2020 report from analysts at Forrester. Microsoft did not respond to a request for comment.
Hive has processed more than 600 million frames of Chatroulette video. Every connection produces three images, or frames: one from each user at the session’s start and one from the user who ends the session. Chatroulette’s chief product officer, Jack Berglund, says Hive has helped reduce the number of conversations with inappropriate content by 75 percent. Some users have been banned; others, knowing they are being watched, are more careful. Streams with violators can be detected within one second. Hive then alerts Chatroulette human moderators in Switzerland or Russia, who warn or ban these users.
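The three-frame sampling scheme described above can be sketched in a few lines. Everything here is illustrative: the data structures, the `classify` callback, and the session fields are assumptions standing in for Chatroulette’s internal pipeline and Hive’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    user_id: str
    image: bytes  # raw frame bytes in a real system

def sample_frames(session):
    """Per the article: each connection yields three frames -- one from
    each user at the session's start, one from whoever ends the session."""
    return [
        Frame(session["user_a"], b"start-frame-a"),
        Frame(session["user_b"], b"start-frame-b"),
        Frame(session["ended_by"], b"end-frame"),
    ]

def moderate_session(session, classify):
    """Run each sampled frame through a classifier (here a stand-in for a
    moderation API) and return the user IDs whose frames were flagged,
    so human moderators can warn or ban them."""
    flagged = set()
    for frame in sample_frames(session):
        if classify(frame.image):  # True means objectionable content detected
            flagged.add(frame.user_id)
    return flagged

# With a stub classifier that flags nothing, no one is reported:
session = {"user_a": "alice", "user_b": "bob", "ended_by": "bob"}
moderate_session(session, classify=lambda image: False)
```

Sampling a handful of frames per session, rather than scoring every frame of live video, is what keeps per-connection moderation cost low; the trade-off, as Ternovskiy notes, is that violations between sampling times can slip through.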
Done, who was leading the Hive effort, left Chatroulette in October. Ternovskiy says he’s pleased with the improvements in moderation but cautions that some users can evade detection by removing cookies, changing their IP addresses, or violating Chatroulette’s rules between the sampling times. Ternovskiy says Chatroulette is also employing another AI technology, optical character recognition, to block and ban spammers on the site, aided by its own moderators.
But Ternovskiy thinks Chatroulette faces a bigger challenge than moderation: The typical interaction is “mediocre.” About 90 percent of first-time visitors never return, he says. Ternovskiy says Chatroulette needs to improve the product itself to survive and thrive post-pandemic. “Most of the users do not come back,” he says. “The challenge really is to build something worthy that would get people more interested to use it on a regular basis rather than it just being a one-off thing.”
Chatroulette’s research has found that the best predictor of whether a user will return is whether they engage in “activated conversations,” basically those lasting at least 45 seconds. That’s the point at which visitors get past the threshold of meaningless small talk. Users who have at least one conversation longer than 45 seconds are eight times more likely to return to Chatroulette within the next week, the company says. Heavy users, who visit the site several times per week, spend one to three hours per session and often participate in multiple activated conversations.
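The activation metric itself is simple to pin down from the figures in the article: a conversation counts as activated at 45 seconds, and a user with at least one activated conversation is the cohort eight times more likely to return within a week. The sketch below is a minimal illustration; the function names and the idea of tracking a per-user activation rate are assumptions, not Chatroulette’s actual analytics code.

```python
# From the article: an "activated conversation" lasts at least 45 seconds.
ACTIVATION_THRESHOLD_S = 45

def is_activated(duration_s: float) -> bool:
    """A single conversation counts as activated past the 45-second mark."""
    return duration_s >= ACTIVATION_THRESHOLD_S

def has_activated_conversation(durations) -> bool:
    """One activated conversation is the company's strongest predictor
    that a user returns within the next week."""
    return any(is_activated(d) for d in durations)

def activation_rate(durations) -> float:
    """Share of a user's conversations that cross the threshold --
    a hypothetical per-user metric, not one the article describes."""
    if not durations:
        return 0.0
    return sum(is_activated(d) for d in durations) / len(durations)
```

A user whose session log reads `[8, 12, 90]` seconds would land in the eight-times-more-likely-to-return cohort on the strength of that one 90-second conversation.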
What will make Chatroulette 2.0 successful, says Ternovskiy, is creating incentives for all visitors to behave. He envisions a user-created and -regulated community built on valued exchange and mutual “happiness.” He’s seeking a way to ensure users have a “stake” in a community of responsible actors, while still respecting their anonymity and privacy.
He’s also very interested in users’ emotions. He talks about measuring the aggregate happiness of Chatroulette visitors, though he admits “it’s a bit dystopian.” Monitoring users’ emotions also could help police the platform. “Let’s say that partners tend to show emotion of disgust when talking to you,” he says. “That would be a good signal for us to kick you out. This is just a theory; I am not sure how that would play out in practice.”