Social Media and Sensitive Content

On: September 10, 2014
About Nora McLeese
Half-American and half-Dutch, Nora holds a bachelor's in Media Arts and Design from James Madison University and a master's in Journalism from the London School of Communications, and is interested in the effect Internet speech has on daily linguistics.

   

Since its inception, Twitter has considered itself a bastion of free speech. That stance has gotten the company into hot water in the past, including outcries over abusive Internet trolls, as well as the intricate balancing act of promoting open dialogue internationally while many countries do not share the free speech sentiment of the company's United States home.

Recently, two high-profile cases have again shone a light on the role of social media platforms, such as Twitter, in hosting and distributing sensitive or controversial content. The first involved Zelda Williams very publicly quitting Twitter after she was sent a tirade of abuse, including mock-up pictures of her deceased father, Robin Williams, in the wake of his passing.

https://twitter.com/zeldawilliams/status/499432576872755201

Twitter’s response was tepid at first, but a few days later the company announced a policy change: family members could report upsetting images of the deceased, and those images would be removed in certain circumstances. The full statement, as posted in Twitter’s guidelines section, is below:

In order to respect the wishes of loved ones, Twitter will remove imagery of deceased individuals in certain circumstances.  Immediate family members and other authorized individuals may request the removal of images or video of deceased individuals, from when critical injury occurs to the moments before or after death.

Of course, the wording immediately gives away that Twitter still has the final say in what is removed and what isn’t, keeping the door open for more debate in this gray area of free speech. The previous policy on policing trolls still stands as well. Emily Greenhouse explains the platform’s overarching approach to handling threatening and abusive tweets for The New Yorker:

Twitter’s official policy on violence and threats is, simply, “You may not publish or post direct, specific threats of violence against others.” For Twitter, the question of when to intervene comes in gauging what is “direct” and what is “specific.”

The clarity of the threat […] is what Twitter examines. If a user makes a specifically violent threat, Twitter will remove the threat, or even the user. Accounts that exist only to promote hate, exclusively tweeting “you deserve to die”-type messages, are barred.

However, as demonstrated in the case of Zelda Williams, the person receiving the abuse is more often chased off the platform than the person dishing it out. Using sensitive content to hurl abuse at another person certainly seems like grounds for banning the content along with the person posting it, but where is the line drawn in terms of which images should be allowed on social media?

Were the images of unarmed teenager Mike Brown lying in the street after being shot by a police officer in early August (days before Williams passed away) not equally upsetting and worthy of removal? Or do they fall under the jurisdiction of news value? Now that social media platforms are increasingly becoming the go-to source for breaking news, should they be expected to operate under the same kind of editorial judgement and journalistic integrity as traditional news outlets?

The other incident that challenged the distribution of sensitive content was the beheading – or, more specifically, the video thereof – of the American journalist James Foley by jihadists of the Islamic State. ISIS has harnessed social media in a way that poses a stark challenge for Western-owned companies: how to balance free speech while fighting such blatant fear-mongering and exploitation.


Journalist James Foley

Jeff Bercovici writes for Forbes:

For a group like ISIS, a video showing the beheading of an American captive is a twisted sort of win-win: Either it succeeds in turning the world’s most powerful and admired tech firms into distribution partners for a message of violent extremism, or those firms clamp down on the content, betraying their stated commitment to the American principle of free speech.

In the case of Foley – and more recently Steven Sotloff – YouTube also made its policies clear: beheading is not an act of free speech. Because the intention to intimidate through violent extremism is clear, such “speech” is afforded no protection on their platform, just as there are limitations to free speech in the real world.

YouTube seems better able to handle these kinds of free speech quandaries than Twitter, basing its policies on common sense and the interests of its business and its parent company, Google. Twitter, on the other hand, maintains a death grip on First Amendment absolutism. That grip may be loosening, but not by much.

 
