Do social media companies have a moral obligation to protect their users from harm? This debate has resurfaced in the wake of political commentator Ronan Farrow’s controversial Washington Post opinion piece “Why aren’t YouTube, Facebook, and Twitter doing more to stop terrorists from inciting violence?” and late July’s #twitterpurge debacle. Both cases raised the question of whether corporate oversight, as opposed to crowdsourced reporting of unlawful content, could engender a chilling climate of corporate-led censorship.
In his piece, Farrow – to the ire of thinkers such as Jillian York and Glenn Greenwald – argued in favor of social media corporations taking a more active, explicit role in curbing access to “terrorist” content, which he defined only vaguely as “material that drives ethnic conflict.” Stating that this content deserved the same clampdowns as child pornography, Farrow wondered why corporations haven’t employed every resource they possess to remove these incitements to violence.
Farrow acknowledged in passing the free speech abuses that could result from corporate content filtering, the pragmatic impossibility of monitoring voluminous content, and the notion that top-down restrictions will only inspire further resistance from offending parties. Still, he maintained that companies like Facebook, Twitter, and YouTube hold a moral obligation to their users that outweighs these concerns. To bolster his argument, he looked to the example of radio media’s complicity in fueling ethnic conflict in Rwanda.
Can one person’s terrorist be another person’s freedom fighter? It’s a question that has enlivened counterterrorism debates in the wake of 9/11. Corporate representatives responded to Farrow with this precise claim, justifying limited corporate interference in the name of protecting the fundamental principle of free speech.
Days later, York authored a scathing takedown of Farrow's piece. After pointing out that shutting down social media accounts is likely to bring more, rather than less, attention to offending causes, York critiqued Farrow’s call for an automatic algorithm to detect and remove calls for violence, noting that child pornography-detecting algorithms—on which Farrow bases his argument—often block legal content. York also pointed to something culturally problematic about Farrow’s suggestions: allowing these United States-based companies to become arbiters of what is and isn’t hate speech in contexts outside their borders may constitute something akin to cultural imperialism.
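York’s point about error-prone filters is easier to see with a toy sketch of hash-based content matching. The snippet below is purely illustrative and not any platform’s actual system; real tools such as Microsoft’s PhotoDNA use perceptual (similarity-based) hashes rather than cryptographic ones, precisely because exact hashes miss trivially altered copies. The catch York identifies is the flip side: the fuzzier the matching, the more lawful content gets swept up.

```python
import hashlib

# Illustrative only: a toy blocklist of SHA-256 digests standing in for a
# database of known-unlawful images. (Real systems use perceptual hashes,
# not cryptographic ones.)
BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def is_blocked(image_bytes: bytes) -> bool:
    """Exact-hash matching flags only byte-identical copies."""
    return hashlib.sha256(image_bytes).hexdigest() in BLOCKLIST

# An exact copy is caught...
print(is_blocked(b"known-bad-image-bytes"))   # True
# ...but a single changed byte evades the filter entirely, which is why
# deployed systems turn to fuzzier perceptual matching -- the very
# fuzziness that can also snare perfectly legal content.
print(is_blocked(b"known-bad-image-byteX"))   # False
```

The trade-off this sketch makes visible is the crux of the debate: tighten the match and abusers route around it with trivial edits; loosen it and the false positives York describes become inevitable.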
Similarly, Glenn Greenwald posed a provocative question to Farrow on Twitter – would a US-backed incitement to bomb an enemy state deserve the same restrictions Farrow proposed? Mike Masnick of TechDirt, too, noted that numerous calls by United States-based commentators for violence against Muslims would be casualties of Farrow's campaign to quell ethnic violence. Restrictions like these could also censor organizations that advocate for greater communal harmony, as happened in 2013 when YouTube took down the videos of a Syrian watchdog group on the grounds that they contained graphic violence.
Dear @RonanFarrow: should tech execs also remove anything advocating violence by US, Israel & allies, or is that OK? http://t.co/bGKXxHFUOk
— Glenn Greenwald (@ggreenwald) July 11, 2014
Glenn Greenwald addresses Ronan Farrow on Twitter; via Twitter.
The question of corporate – and, by extension, moral – obligation arose once again a week later with #twitterpurge. The Purge: Anarchy, sequel to last year's The Purge, is a film built on the premise that all crime is legal for one chaotic night of the year. In the run-up to the film’s release, a San Francisco minor found this premise so inspiring that he created several Twitter accounts dedicated to the film and jumpstarted the hashtag #twitterpurge, hoping to transpose the film's night of lawlessness onto the Twitterverse.
Soon, this promise of anarchy was fulfilled, with numerous netizens posting revenge porn of ex-girlfriends, many of them underage. That #twitterpurge came so shortly after the #IAmJada campaign, mobilized to support an underage sexual assault survivor who had a picture of her unconscious post-attack body mocked on Twitter, disturbed many Twitter users. Twitter responded by deleting the original account, but copycats sprang up repeatedly, often in protest.
Soon, the #twitterpurge trending topic had morphed into a beast too expansive for Twitter’s policy to handle; that policy asks users to report any content they believe sexually exploits children. Contrary to Farrow's assertion that all social media companies employ algorithms to filter out unlawful speech and imagery, Twitter relies on user complaints.
Because social media can showcase more than the worst in us. #IAmJada http://t.co/1uWwM1Gk2i
— Ronan Farrow (@RonanFarrow) July 14, 2014
Ronan Farrow tweets about the #IAmJada controversy; via Twitter.
As Twitter struggled to deal with the onslaught of #twitterpurge content, the trend bled into Facebook and Instagram. Many netizens and cultural critics spoke out, including Flavorwire's Tom Hawking, who argued that, even if Twitter possessed no “legal obligations” to police this unlawful content, “it sure as hell has moral ones,” especially when revenge porn has caused such pain, distress, and trauma that some victims have taken their own lives.
Hawking acknowledged that such a move would raise questions in media-activist circles, as corporate policing of sensitive content can quickly edge toward censorship of free speech. Still, he advocated for Twitter to play a more active ethical role in cases like #twitterpurge, claiming that “it’s both unfair and unethical to palm off the responsibility for policing Twitter onto individual users, when it’s Twitter itself that has the best resources to do so”. To strengthen his case, Hawking called upon the very arguments that Farrow had come under fire for just a week earlier, raising further questions about whether – and where – corporations should proactively monitor user-generated content.