Can Google’s and Facebook’s algorithms play a role in the prevention of suicide?

Richard van Hooijdonk
  • Google’s depression questionnaire encourages people to seek help
  • Facebook’s algorithm can pick up suicidal behaviour online quicker than a friend
  • Challenges and concerns around algorithms analysing moods on social media

Suicide is a leading cause of death around the world. Every day, millions of social media posts, queries to Alexa and Siri, and conversations on Snapchat relate to mental health issues and suicide. One particularly harrowing story is that of a fourteen-year-old foster child from Miami who hanged herself in front of her webcam in January last year, streaming the horrendous event on Facebook Live for two hours. Among the thousands of people who witnessed her death was a friend, who alerted the authorities. A year before her tragic end, she had posted a message saying she didn’t want to live anymore. Also in January, another teen, this time from Cedartown, Georgia, committed suicide one month after she had posted a blog about having been abused, and about her abuser encouraging her to end her life. What if these and similar posts could be used as indicators, zeroing in on people at risk of suicide in a bid to prevent them from harming themselves? Some tech giants are doing exactly that.


Google’s depression questionnaire encourages people to seek help

When you search Google on a mobile device in the US for symptoms of depression, the search engine, in partnership with the US National Alliance on Mental Illness (NAMI), launches a window with a private depression screening test (the PHQ-9 questionnaire) and a knowledge panel, accompanied by referral and education resources, as well as possible treatment options. Google and NAMI use the anonymous information entered by millions of people to build a database that may, when combined with other data, be useful for generating a digital depression fingerprint. Google has indicated that it’s aware the results of the questionnaire are “sensitive and private” and has assured people that their personal data will be protected. The PHQ-9 questionnaire can be a first step towards a proper diagnosis and can help people seek treatment, which could prevent suicide. NAMI’s CEO, Mary Giliberti, writes on Google’s blog that only 50 per cent of Americans who experience clinical depression actually receive treatment. She says that people with symptoms of depression “experience an average of a 6-8 year delay in getting treatment after the onset of symptoms”. NAMI hopes “that by making this information available on Google, more people will become aware of depression and seek treatment”. The service was launched in August last year.
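To make that concrete: the PHQ-9 consists of nine questions, each answered on a scale from 0 (“not at all”) to 3 (“nearly every day”), and the total score maps to a standard severity band. The short Python sketch below illustrates only that published scoring logic; it is not Google’s or NAMI’s implementation.

```python
# Illustrative sketch: scoring a PHQ-9 questionnaire.
# The nine answers (0 = "not at all" ... 3 = "nearly every day") are summed,
# and the total maps to the standard severity bands. This is not Google's or
# NAMI's code, just the published PHQ-9 scoring rules.

def phq9_severity(answers):
    """Return (total score, severity band) for nine answers scored 0-3."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
    total = sum(answers)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

print(phq9_severity([1, 2, 1, 2, 1, 0, 1, 2, 0]))  # (10, 'moderate')
```

A score in the moderate range or above is typically the point at which the questionnaire directs people towards professional help rather than self-care resources.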


Facebook’s algorithm can pick up suicidal behaviour online quicker than a friend

Responding to various live-streamed suicides, Facebook has introduced an algorithm that screens social media posts for phrases or images that could indicate severe depression, self-harm, or suicidal tendencies. According to Mark Zuckerberg, artificial intelligence (AI) can spot suicidal behaviour online quicker than a friend. The AI learns which words and emojis are potentially indicative of suicidal thoughts or actions. When such posts are spotted, Facebook provides the user with various resources and alerts an Empathy Team. The user then receives a message that says: “Someone thinks you might need extra support right now and asked us to help.” That someone isn’t the artificial intelligence, but a human reviewer following up on the post the AI flagged. The reviewer examines the user’s posts and provides information and resources. Should these messages not have the desired effect and avert the self-harming behaviour, Facebook may even notify emergency services, though it doesn’t engage the user further. The combination of smart algorithms and professional counsellors responding to posts could make a huge difference in the number of suicides in the future. Within a month of its launch, the algorithm had already assisted over 100 people. With more than 2 billion users, you can imagine the sheer size of the company’s database of this type of content.
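Facebook hasn’t published the details of its model, but the workflow described above (match words and emojis against learned risk patterns, then hand flagged posts to a human reviewer) can be illustrated with a deliberately simplified sketch. Every phrase, emoji, threshold and function name below is an assumption made for illustration, not Facebook’s actual system.

```python
# Hypothetical sketch of phrase-and-emoji flagging for human review.
# The phrase list, emoji list and threshold are illustrative assumptions;
# Facebook's real model is not public.

RISK_PHRASES = {"don't want to live", "end my life", "better off without me"}
RISK_EMOJIS = {"😢", "💔"}

def flag_for_review(post_text: str) -> bool:
    """Return True if the post should be queued for a human reviewer."""
    text = post_text.lower()
    phrase_hits = sum(phrase in text for phrase in RISK_PHRASES)
    emoji_hits = sum(post_text.count(emoji) for emoji in RISK_EMOJIS)
    # Any risk phrase, or a cluster of risk emojis, queues the post;
    # people, not the model, decide whether to reach out or call for help.
    return phrase_hits >= 1 or emoji_hits >= 2

print(flag_for_review("I don't want to live anymore 💔"))  # True
print(flag_for_review("Great game last night!"))           # False
```

The point of the design, as described, is that the algorithm only prioritises posts for attention; the decision to contact the user or emergency services remains with human reviewers.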

https://www.youtube.com/watch?v=neJs53KpgNs

Challenges and concerns around algorithms analysing moods on social media

Of course, these suicide prevention efforts appear to be carefully thought out and sincere. And it’s a positive and hopeful development that tech companies are getting these important conversations started. As with many new initiatives and innovations, however, there are also various challenges and concerns. The algorithms that analyse these posts on social media or respond to queries on Google are still far from perfect. The way depression or other mental illness is expressed, and the patterns of help-seeking, can vary significantly between countries and cultures. Sexual orientation, identity and gender also play an important part. Are the algorithms able to pick up those differences? Moreover, people who are planning suicide often don’t announce it and may even deny it. And is there any information on whether people’s (mental) health data is used for targeted advertising campaigns, such as for psychotherapy or medication to treat depression? Because let’s be honest, being able to pinpoint the emotional state of a social media user could be extremely valuable in terms of competitiveness in the marketplace. Should the tech giants not require our consent before they’re allowed to monitor our mental health?

Another concern is that Google and Facebook don’t share many details about how widely, and with whom exactly, they’ll share their findings, leaving some critics worried about privacy. Still, broadly sharing findings about the use of specific phrases, words or behavioural patterns that manifest before a suicide attempt might help save more lives. Then again, knowing that Facebook could potentially share sensitive information could also be detrimental to trust, causing users who are at risk to change their minds about sharing their emotions on social media.

Srini Pillay, M.D., CEO of NeuroBusiness Group, award-winning author, and part-time assistant professor of psychiatry at Harvard Medical School, wrote about social media and suicide prevention for Fortune. He insists we need to ask the following question: “Does social media help or harm suicide risk?” According to Pillay, even though social media can play an important role in forecasting and possibly preventing suicide, there are disadvantages as well. One of them is the prevalence of cyberbullying, which enables harassers to attack their victims across a myriad of social media platforms. He continues that, overall, artificial intelligence can greatly improve our knowledge and help with interventions. However, if it isn’t transparent and doesn’t adhere to the ‘first, do no harm’ principle, it could potentially exacerbate the situation.
