Social media is often blamed for harming people’s mental health. Dystopian headlines like “Six Ways Social Media Negatively Affects Your Mental Health” and “Yes, Social Media Is Making You Miserable” dominate our news feeds. So it’s no surprise that the world’s most popular platforms are implementing policies to protect their users’ well-being.
Moderating mental health is a monumental task. While most social media companies say they don’t permit posts that might harm users’ mental health, they face the extremely difficult job of deciding what counts as harm. The problem is that we know precious little about what each company defines as “problematic” content, and that’s worrying, because conversations about mental health don’t always look like what you’d expect.
For our latest New Media & Society article, we wanted to know how people were talking about depression on Instagram. We researched the #depressed hashtag, and our initial data set included 3,496 public posts collected over a 48-hour period in March 2017. Our most significant, and surprising, finding was that only 15 percent of users who post with the #depressed hashtag do so with what we call “real name” accounts (through which users share their name, pictures of themselves, and other identifying details).
Most people using #depressed—76 percent of the accounts in our data set—do so pseudonymously to share humorous memes and inspirational content about mental health. Such users typically hide real identity markers, including images of their faces. While researchers might dismiss this kind of content as “noise,” we see it as a telling sign of the cultural practices users now need in order to hide in plain sight when posting content a platform might deem problematic.
In the absence of posts from real-name accounts, we found that the #depressed community is flooded with what we call “dark” posts. These convey a strong aesthetic, usually featuring black-and-white images accompanied by inspirational or “sad quotes.” The pseudonymous accounts that post in this way can be entirely dedicated to talking about mental health or relaying negative mental health experiences. They are hashtag-heavy and often focus on people’s day-to-day experience of depression. But we argue that these dark aesthetics should not be equated with danger.
When Instagram users do make their depression publicly visible via hashtags, they code their posts in ways that, paradoxically, work against the broader potential of hashtags to make conversations about mental health more visible online. There are many possible reasons for this, including an awareness that Instagram moderates content and the enduring stigma around depression. In a sense this is a cat-and-mouse game with platform content controls, and it’s an example of the kind of coded practices that help people connect with others online through affinity and relatability. Whatever the specific reasons, our findings force us to rethink how we recognize healthy or productive conversations about mental health.
Pseudonyms and memes—staples in our social media diet—clearly help people to open up about their mental health. Michele Zappavigna, a senior lecturer in linguistics at the University of New South Wales, says humorous memes are a helpful tool for social bonding on the web. One of our most surprising findings is that the #depressed hashtag is most commonly paired with #dank and #memes instead of words we might expect, like #suicide and #killme. But we worry about the future of these accounts and hashtags, mainly because the recently renewed push for more social media platforms to enforce identity verification risks the future of pseudonymity. Will more social media companies require users to go by their real names, like Facebook? Or will users be asked to verify their identity when they sign up but be permitted to use the platform pseudonymously? And would this deter conversations about mental health?
As of today, the #depressed hashtag is accessible on Instagram and has attracted just over 13 million posts, but it prompts a public service announcement with links to various forms of mental health support.
This tells us that #depressed sits somewhere in the gray area, not quite problematic enough for an outright ban, but not entirely off the platform’s radar. It might be understood as a form of “borderline” content: posts that don’t quite go against a platform’s rules but that might not be appropriate for all members of its community. The term gained traction in 2018, after Mark Zuckerberg wrote a blog post about Facebook’s decision to limit the spread of clickbait and misinformation.
Social media content sits in three core realms of acceptability: permitted, prohibited, and borderline. Permitted content circulates unproblematically, while prohibited content is removed, either algorithmically or by human content moderators. But borderline content is handled in other, fairly opaque ways. The aim of the game is to reduce the presence of borderline content in ways that don’t fall under an outright ban or removal. For example, a platform might not include certain posts in recommendation algorithms, or it might shadow-ban certain social media users and posts.
At present, the diverse and lively #depressed community is allowed to thrive on Instagram. But any small tweaks to the system—blocking the hashtag, limiting search results, shadow-banning users posting to this tag—can happen in the blink of an eye, and could be dangerous for those who express themselves through #depressed posts.
Depression is one of the most prevalent and debilitating mental health issues globally, and it is on the rise. In the UK, for example, almost one in five adults (19.2 percent) were likely experiencing some form of depression during the pandemic in June 2020, almost double the one in 10 adults experiencing depression before the pandemic (July 2019 to March 2020). The story is similarly bleak in the US, where depression is the leading cause of disability among people between the ages of 15 and 44, and those in the 18-to-25 bracket experience the highest rates in the country. With access to professional mental health services restricted in many countries, people are naturally turning to social media to talk about their experiences. It is therefore vital that we commit to understanding these spaces, rather than over-moderating them in response to public moral panics about the “harms” of social media, and that we embrace the reality that conversations about mental health don’t always look like you would expect them to.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints.