
3 Ways for Big Tech to Protect Teens From Harm

A 17-year-old state-ranked soccer player walked into the psychiatrist’s office in the middle of June wearing a bulky gray sweater and two rubber bands around her small wrists. With a half smile, she said that in the past year she had lost 21 pounds, along with her menstrual periods, after reading “weight loss tips” on Instagram and Tumblr. The “tips” included specific recommendations and support for behaviors that lead to anorexia, like snapping the rubber band around her wrist when she wanted to eat. Now she found she couldn’t stop searching for more pictures of dangerously underweight teens sharing tips on how to suppress appetite, and she had learned tricks for finding more harmful content, like changing her search terms by a letter or two to avoid getting caught by Instagram’s safety filters.


This story, though fictional, is unfortunately not unique. These patterns are common among the many teens we see as psychiatrists, teens whose mental health has been impacted by social media. Research shows that the more time teens spend online, the more likely they are to be exposed to self-harm content, engage in self-harm behavior (like cutting or hitting themselves), and develop suicidal thoughts. In fact, a study showing an uptick in suicides in the months immediately following the release of 13 Reasons Why on Netflix illustrates the dangerous contagion effect that can occur when kids start to emulate what they see. Last year, Instagram was criticized after the suicide of Molly Russell, a 14-year-old in the UK whose parents believe she had seen pictures encouraging self-harm and suicide on the site. YouTube’s algorithms were found to be identifying and recommending videos of partially clothed children. Facebook Live continues to struggle with people broadcasting suicides. The list is only growing.

The online manifestations of mental health struggles have added a complex new dimension to psychiatry. Facebook CEO Mark Zuckerberg issued a call to action in The Washington Post. “Internet companies should be accountable for enforcing standards on harmful content,” he wrote. He called on third-party experts to set safety standards.

Request received. As psychiatrists working in Silicon Valley, we split our days between treating patients and working directly with tech companies to create new products that improve mental health. We’ve learned that tech companies often lack clear guidelines for making user safety decisions, especially in the face of simultaneous pressures: public outcry, a lack of legal precedent, financial repercussions, and more. With our colleagues from Brainstorm, Stanford’s Lab for Mental Health Innovation, we’ve created guiding principles that companies can follow to create products that protect users and have the potential to help:

Do no harm:

  • Do not allow teens to be harmed by what they consume.
  • Do not allow teens to harm others by what they create.
  • If concern for imminent high risk (like thoughts of harm to oneself
    or others) is identified on a platform, address it immediately in
    line with legal, ethical, and cultural norms, and pass it along to
    experts. This is an obligation.

Do good:

  • Help teens get help on and off the platform.
  • Work with mental health experts like psychiatrists to understand how
    the platform can be leveraged to improve teens’ lives, and refer them
    to barrier-free resources.

“Do no harm” is inspired by the oath we take as physicians, “primum non nocere,” stressing that our first responsibility is to safeguard against the negative. For example, if someone posts saying they have no reason to live and want to say goodbye to their friends, that qualifies as “imminent risk” and cannot be ignored; at the very least, warnings to call 911 or go to the closest ER should be presented on the platform. Social media companies are not, however, physicians. Medicine is not their responsibility or their training, and they should not be making medical decisions or shouldering the burden of medical care. The highest-risk situations should be left to the professionals, and companies should help people get there when they are in danger.

Beyond that, companies can “do good” and offer users tools that responsibly and safely improve their wellbeing.

We believe that tech and social media companies are uniquely suited to be a psychiatrist’s biggest ally in our mission to improve mental health for the 2 billion people around the world struggling with brain and behavioral health disorders. The same ingenuity that allowed Facebook to acquire 2.3 billion users and Twitter to help us send 500 million tweets per day can be the key to identifying people at risk of depression, preventing slut shaming, or shepherding teens to the best medical treatment center. With that many people on a platform, however, things are bound to go awry. Recently, a BBC article revealed a worldwide underground network of thousands of Instagrammers with “dark” accounts. The posts range from serious self-harm to documentation of the final hours before someone’s death by suicide. Many are disguised as benign, everyday photos in an effort to circumvent Instagram’s new bans on graphic content.

The story also revealed a 22-year-old woman who acts as an Instagram suicide “lifeguard,” voluntarily following hundreds of “dark” accounts and alerting authorities when she suspects someone is in despair. She admits to sleepless nights, fearing that if she’s not checking her phone, someone’s self-harm might go unnoticed. This is a burden no one person should have to bear. If companies follow these guidelines and prioritize safety, no one will have to.

As companies start to solve user safety problems, they should make three key changes:

First, companies need to bend the curve from harmful to helpful. Historically, in response to searches for terms like suicide, algorithms have suggested other “dark” accounts or related harmful content. Instead, search results and suggestions should include “nudges” to contact professionals, along with lists of local resources, examples of coping skills like deep breathing, music, or distraction, and ways to recruit support from peers and trusted adults. Pinterest, for example, worked to enhance its results for searches like “depression” or “sadness” with a set of bite-sized emotional wellbeing tools to help people feel better in the moment. It started with a broader set of tools as part of a “compassionate search” campaign, later releasing tools specifically geared toward self-harm. (Disclaimer: We, along with our colleague Dr. Gowri Aragam from Brainstorm, worked with Pinterest to create these tools.)
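
To make this concrete, here is a minimal, hypothetical sketch (in Python) of the kind of routing described above: intercept risky search queries, including the near-miss spellings teens use to slip past exact-match filters, and answer them with supportive nudges and resources rather than related “dark” content. The risk-term list, nudge content, and similarity threshold are illustrative assumptions, not any company’s actual implementation.

    # Hypothetical sketch: route risky searches to supportive nudges instead of
    # related-content suggestions. The term list, nudge, and threshold are made
    # up for illustration; a real system would be designed with clinical experts.
    from difflib import SequenceMatcher

    RISK_TERMS = {"suicide", "self harm"}  # placeholder list, far from exhaustive

    SUPPORT_NUDGE = {
        "message": "You're not alone. Would you like help right now?",
        "resources": [
            "Reach out to a trusted adult or friend",
            "Try a two-minute breathing exercise",
            "Find a local mental health clinic by zip code",
        ],
    }

    def looks_risky(query, threshold=0.8):
        # Catch exact matches and near-miss spellings (a letter or two changed).
        q = query.lower().strip()
        return any(
            SequenceMatcher(None, q, term).ratio() >= threshold
            for term in RISK_TERMS
        )

    def handle_search(query, recommend_related):
        # Risky queries get a nudge; everything else goes to the usual recommender.
        if looks_risky(query):
            return SUPPORT_NUDGE
        return recommend_related(query)

    # A query with one letter changed still triggers the nudge:
    print(handle_search("suicid3", recommend_related=lambda q: {"results": []}))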

What we have not yet seen companies do well is provide helpful mental health education (including what to look for and what to do about it) and access to specific resources (who to turn to for help and how), the way in-person communities like schools do for their students. Yet social media may be someone’s primary community, especially as teens now spend more hours online each day than they spend in school. Social media can, for example, teach teens the common signs of illnesses like depression via bulleted lists, pictures, comics, or short video clips visualizing relevant symptoms such as low motivation and decreased energy. These tools should not be used as formal diagnostics (that is left to our clinical colleagues), but teens can benefit from content that helps them understand that there might be a problem and that help is available. Educational tools serve an important public health need by facilitating early diagnosis, which is especially important for teens, as 50 percent of mental illness starts before age 14, according to the National Alliance on Mental Illness.

Resources may connect users to evidence-based therapies like mindfulness meditation, offer expert tips for healthy behavior change in nutrition or exercise, facilitate improved social connectedness, or provide zip-code-based lists of local mental health clinics.

Next, the predominant “it’s better than nothing” approach to suicide-prevention tools needs to change. Tools need to be practical and actionable. Give teens things they will actually use. Right now, the status quo response to suicide-related searches from companies including Facebook, Instagram, Twitter, and Tumblr is to provide a suicide hotline number or a link to Crisis Text Line. This is the 2020 equivalent of handing a teen a tri-fold brochure. What’s the likelihood of them actually acting on it, rather than stuffing it in a backpack or the trash? Though Crisis Text Line is a valuable service for those who need to talk to someone during an immediate crisis, it lacks ongoing care or a professional evaluation. We worry about the broader public health message of funneling teens to a call center instead of directly encouraging them to get further professional help. Suicide can and must be treated, and the tools provided to teens should reflect that.

Some companies, like Facebook and Instagram, have started adding more than crisis lines, including prompts to connect with trusted peers and sample messages (e.g., “I’m going through something difficult and was hoping to talk with you about it”). They offer tips like going outside or relaxing; while we’re concerned this isn’t enough for someone in crisis, it is a step in the right direction.

Finally, focused rules are also important to keep a community safe. People who are most vulnerable, especially kids, should not be able to view harmful content, such as images of people harming themselves, that they cannot later unsee. Research shows that images in particular evoke a strong physical reaction and can inspire behavioral enactment. However, a search for self-harm should not be met with an empty screen either, as this may leave a person’s cry for help unanswered and inadvertently cause harm in a different way. Working with psychiatrists and other experts can help companies design alternate responses that validate the teen’s intent and offer the right tools in return. Consider protecting kids through parental controls on live content (as is done with traditional media), search results, and comments. A child posting worrisome content should receive immediate help, at the very least through notifications to parents or guardians so that they can take action before something worse happens.

Social media has added a new tune to psychiatry, and the volume has been cranked up. Just as cities revitalize themselves to become cleaner and safer for their citizens, tech companies have the opportunity to become stronger and more valued by their users if they create better, safer environments.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com.
