Is Netflix Doing Enough to Protect Viewers from Harmful Content?

This post is based on an article first published at HuffPost

A study published last month sparked controversy over Netflix and one of its latest hit shows, suggesting that the teen drama ‘13 Reasons Why’ may have increased suicidal ideation in viewers.

Published in the Journal of the American Medical Association (JAMA), the study found that Google search queries involving suicide increased by 19% in the days following the release of 13 Reasons Why, which centres on a teenager’s suicide.

Members of the media were quick to draw their own conclusions from the paper, with The Telegraph suggesting that the Netflix show “should be withdrawn” due to the possibility that it is “driving young people to consider suicide”. Meanwhile, the journal article’s authors suggested that shows ought to be evaluated before release to identify potential risks, and that troubling scenes ought to be removed retrospectively — something that is done in China.

Before rooting for all-out censorship, it’s worth considering that the study’s findings were not all bad. Search queries following exposure to the show also increased for ‘suicide hotlines’ and ‘suicide prevention’, indicating “elevated suicide awareness”. While various conclusions could be drawn from this, at the very least it suggests that the show increased interest in suicide prevention and help-seeking. This fits with the narrative of public health campaigners who advocate openness with young people about challenging issues, such as those concerning sexual health — so long as balanced, good-quality information is made available. It would have been interesting, therefore, for the study to have considered which particular resources the search queries led to, and how useful they might have been. That information could be used to funnel search queries (and distressed individuals) towards useful resources – something that Google has shown interest in facilitating since as early as 2010, and that Facebook is beginning to take more seriously.

The authors of the JAMA paper suggest that content producers ought to follow WHO media guidelines on suicide, but these guidelines only refer to news and documentary media — not to fictional content. Indeed, across the wider literature on how the media should cover suicide, guidance on fictional content is virtually non-existent.

Some of the non-fiction guidelines remain relevant to fictional content. Before or after the credits, films and dramas can provide helpline numbers and appropriate ‘factfiles’ that help balance views and educate the audience (examples of positive moderation). But leaving aside debates about the ‘philosophy of art’, morally dubious ideas and characters who engage in unwise activities (and say untrue things) are often seen as an important feature of artistic, fictional content. This is what sets fiction apart from non-fiction.

In developing guidance for fictional content, we might do well to look to parallel public health concerns, such as smoking. Studies over the past decade have linked exposure to smoking in films with increases in adolescent smoking. Subsequently, lobbying groups have been pushing the film industry to award R ratings to films that feature smoking prominently. While this remains an ongoing struggle, it seems reasonable that impressionable children should be protected by ‘parental guidance’ when it comes to exposure to potentially harmful health behaviours. As for how age restrictions can be enforced on digital devices, that is something for technology companies to figure out, and figure it out they will if regulators apply enough pressure.

There are no easy solutions for content moderation, but rather than panicking and sliding towards Chinese-style censorship, we ought to pursue a pragmatic middle ground based on evidence and compromise: one that champions emotionally challenging art while also providing protection, guidance and support for those who need it.

Facebook is getting serious about suicide prevention

This article was first published at HuffPost

Those with an interest in online safeguarding and recurring debates about social media moderation might have found an interesting anecdote hidden in the latest Facebook earnings call.

When asked about content moderation efforts, in light of recent incidents of troubling videos being broadcast live on Facebook, Mark Zuckerberg used the opportunity to discuss the company’s approach to suicide prevention:

“A lot of what we’re trying to do here is not just about getting content off Facebook. Last week there was this case where someone was using Facebook Live to broadcast – or was thinking about suicide. And we saw that video and actually didn’t take it down and helped get in touch with law enforcement who used that live video to communicate with that person and help save their life. So a lot of what we’re trying to do is not just about taking the content down, but also about helping people when they’re in need on the platform, and we take that very, very seriously.”

As Zuckerberg implies, the instinctive approach for administrators dealing with disturbing content has tended to be to remove the content and the line of communication immediately. This reduces the number of users exposed to the material and minimises the risk of a PR crisis. However, it doesn’t necessarily help the individual concerned or those already exposed to the content. One of the reasons for this is purely technical: for professionals to be able to de-escalate an incident, they need a direct line of communication with the individual affected, and, just as with emergency calls, keeping that individual communicating can often help emergency services to locate them.

This shift away from instinctively shutting down content reflects a more thoughtful, measured approach to safeguarding — one which I wrote about in a postgraduate public health thesis in 2015. Media depictions of the internet have often been marked by hysteria over outlier horror stories that are about as indicative of digital dangers as rare shark attacks are of the dangers of going for a swim. This isn’t to say that there are no dangers, but excessive fear doesn’t put us in a good position to make rational judgments about how to manage the opportunities and risks, let alone prepare for threats should they arise.

If we think of moderation as drawing on a finite set of resources, then, to date, internet administrators have tended to heavily favour what can be termed ‘negative moderation’ (as in subtractive): removing, blocking and banning offensive content and users.

Although it might seem intuitive that this prevents users from being exposed to troubling content, blocking content too liberally can just as easily push offensive content or conversations to the shadowy fringes of the internet where troubling behaviour can be normalised. Just as adolescents might discuss risky behaviours after school or in the playground that they were not allowed to discuss in class, the conversations can still take place — just in far murkier settings. A good school will recognise this and allow some issues to be discussed in the classroom so that tutors can monitor conversations and offer forms of ‘positive moderation’ through factual information, supportive resources, and balanced perspective.

Mark Zuckerberg’s anecdote, which he shared despite not being directly asked about suicide prevention, suggests both that Facebook takes the issue very seriously, and that it will not be cowed by media sensationalism into the simplistic route of trying to block every potentially troubling piece of content, irrespective of whether this approach helps or harms users. These are encouraging signs.

Perhaps the simplest example of positive moderation is the practice among media organisations of providing relevant factfiles and helplines at the end of content that features potentially upsetting themes (as recommended by Samaritans). In the case of social media and chat rooms, platforms can provide easily accessible links to authoritative content and resources, and can educate users about how to respond to content they find concerning or disagreeable. For example, the youth peer-support platform TalkLife is working to train volunteers to provide peer support and to signpost users to resources.

Where content does need to be removed, administrators can, rather than pretending it never existed, provide follow-up support to those affected and pre-emptive educational material in case similar incidents recur. This is difficult to do at scale, but it will become easier as platforms employ predictive algorithms and machine learning.

In his 1958 inaugural lecture at the University of Oxford, the political philosopher Isaiah Berlin introduced the concept of negative liberty to describe freedom from imprisonment and coercion. Negative moderation is the digital antithesis of negative liberty: it interferes with the behaviour of some, but on its own it’s a crude method that can harm as many as it helps. A mature philosophy of the internet must take a balanced approach towards digital content if it’s to allow individuals to express themselves creatively, to learn and grow intellectually, and to access help when needed.