This post was based on an article first published to HuffPost.
A study published last month sparked controversy over Netflix and one of its latest hit shows, suggesting that the teen drama ‘13 Reasons Why’ may have increased suicidal ideation in viewers.
Published in the Journal of the American Medical Association (JAMA), the study found that Google search queries involving suicide increased by 19% in the days following the release of 13 Reasons Why, which centres on a teen suicide.
Members of the media were quick to draw their own conclusions from the paper, with The Telegraph suggesting that the Netflix show “should be withdrawn” due to the possibility that it is “driving young people to consider suicide”. Meanwhile, the journal article’s authors suggested that shows ought to be evaluated before release to identify potential risks, and that troubling scenes ought to be removed retrospectively — something that is done in China.
Before calling for all-out censorship, it’s worth considering that the study’s findings were not all bad. Search queries following the show’s release also increased for ‘suicide hotlines’ and ‘suicide prevention’, indicating “elevated suicide awareness”. While various conclusions could be drawn from this, at the very least it suggests that the show increased interest in suicide prevention and help-seeking. This fits the narrative of public health campaigners who advocate openness with young people about challenging issues, such as those concerning sexual health, so long as balanced, good-quality information is made available. It would have been interesting, therefore, for the study to consider which particular resources the search queries led to, and how useful they might have been. That information could be used to funnel search queries (and distressed individuals) towards useful resources – something Google has shown interest in facilitating since as early as 2010, and that Facebook is beginning to take more seriously.
The authors of the Netflix paper suggest that content producers ought to follow WHO media guidelines on suicide, but these guidelines only refer to news and documentary media — not to fictional content. In fact, of all of the literature that guides news media on how to cover suicide, guidance around fictional content is virtually non-existent.
Some of the non-fiction guidelines remain relevant to fictional content. Before or after the credits, films and dramas can provide helpline numbers and appropriate ‘factfiles’ that balance views and educate the audience (examples of positive moderation). But leaving aside debates about the philosophy of art, morally dubious ideas and characters who engage in unwise activities (and say untrue things) are often seen as an important feature of artistic, fictional content. This sets it apart from non-fiction.
In developing guidance for fictional content, we might do well to look to parallel cases of other public health concerns, such as smoking. Studies over the past ten years have linked exposure to smoking in films with increases in adolescent smoking. In response, lobbying groups have pushed for the film industry to award R ratings to films that feature smoking prominently. While this remains an ongoing struggle, it seems reasonable that impressionable children should be protected by ‘parental guidance’ when it comes to exposure to potentially harmful health behaviours. As for how age restrictions can be enforced on digital devices, that is something for technology companies to figure out, and they will figure it out if regulators apply pressure.
There are no easy solutions for content moderation, but it should be clear that rather than panicking and sliding towards Chinese-style censorship, we ought to pursue a pragmatic middle ground based on evidence and compromise; one that champions emotionally challenging art while also providing protection, guidance and support for those who need it.