Images of Horror: Who Decides What We See Online?

David Haines family photo.

Written by Sam Gregory, Program Director at WITNESS. This post was originally published on the WITNESS blog.

In the last month, many people in the US and worldwide have been exposed to videos showing grotesque acts of violence, notably the killings by ISIS of two journalists, James Foley and Steven Sotloff, and British aid worker David Haines. Although sadly not exceptional in their graphic violence for those who follow the Twitter-stream of human rights and crisis footage, these videos received significant prominence in the mainstream media and have ignited an important conversation about roles and responsibilities in relation to graphic imagery and gross violations of human rights.

Social media platforms responded to the circulation of the videos created by ISIS by taking down the footage, while individual citizens, including family members of those murdered, started hashtag-driven movements and made personal appeals announcing their own decision not to watch and encouraging others not to watch.

[Featured image from Twitter user Sajad Riyad – part of an effort by supporters and colleagues of British aid worker David Haines to circulate imagery of Mr. Haines “as he would want us to remember him” rather than the video of ISIS murdering him which became an international news story, September 2014.]

Within the community of people WITNESS works with, these news events point to critical questions for commercial social media platforms – questions that we also need to engage with as citizens, users of social media, and consumers of information and imagery.

Consistent Standards on What Stays on Commercial Platforms

Although we often think of spaces like YouTube and Facebook as free and open public spaces, they are not. In Ethan Zuckerman's analogy they are more akin to malls, driven as much by the commercial imperatives underlying them as by any public-space role. They are governed by relatively broad guidelines on what content is acceptable (guidelines that allow the platforms a fair amount of discretion), and by differing perceptions, among viewers and internally, of what their audience and purpose is.

While it is not ideal that decisions on free speech are made by commercial platforms, given the role these platforms play in circulating critical information, WITNESS advocates for controversial content to be shareable and to remain on these platforms, and for as much clarity and consistency as possible on when and why material is removed.

YouTube, for instance, generally allows graphic footage only in contexts where it is framed as evidence of rights violations rather than as a glorification or validation of the act. So, in the case of the James Foley footage, YouTube referred a reporter for Slate to its content policies on gratuitous violence, incitement to violence, and hate speech:

“YouTube has clear policies that prohibit content like gratuitous violence, hate speech, and incitement to commit violent acts, and we remove videos violating these policies when flagged by our users,” a YouTube spokesperson told me over email. “We also terminate any account registered by a member of a designated Foreign Terrorist Organization and used in an official capacity to further its interests.”

Holding to these types of approaches consistently, irrespective of who is involved in violations, will help establish a more transparent standard.

Holding Onto Critical Evidence?

Even with clear parameters, some footage on YouTube and other social media spaces that has important value for justice and accountability will get taken down – either because it is not framed as evidence, or because of human judgement calls on whether material fits within a company's guidelines. Footage that shows human rights violations is often graphic and uncomfortable, sometimes re-victimizing, and frequently subject to removal, whether correctly by administrators applying guidelines or arbitrarily through concerted take-down attacks by outside actors (a relatively common problem on many social media spaces). When we analyzed the playlists of citizen video shared on the Human Rights Channel (which is hosted on YouTube), we found that of the almost 6,000 videos showing rights violations that we have shared, almost 5% are now missing – deleted by their uploaders, removed by the platform, or made private.
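As an illustration of how such a check can work – this is a hypothetical sketch, not the actual methodology behind the Human Rights Channel numbers – one can ask the YouTube Data API v3 which of a saved list of video IDs it still returns; IDs that do not come back have been deleted, removed, or made private. `API_KEY` and the video IDs below are placeholders you would supply:

```python
# Sketch: count how many archived video IDs are still retrievable via the
# YouTube Data API v3. Illustrative only; API_KEY and VIDEO_IDS are placeholders.
import requests

API_KEY = "YOUR_API_KEY"            # hypothetical credential
VIDEO_IDS = ["dQw4w9WgXcQ", "..."]  # IDs previously saved from playlists

def missing_videos(video_ids):
    missing = []
    # The videos.list endpoint accepts up to 50 comma-separated IDs per call.
    for i in range(0, len(video_ids), 50):
        batch = video_ids[i:i + 50]
        resp = requests.get(
            "https://www.googleapis.com/youtube/v3/videos",
            params={"part": "status", "id": ",".join(batch), "key": API_KEY},
        ).json()
        returned = {item["id"] for item in resp.get("items", [])}
        # IDs the API does not return are no longer publicly available.
        missing.extend(v for v in batch if v not in returned)
    return missing

gone = missing_videos(VIDEO_IDS)
print(f"{len(gone)} of {len(VIDEO_IDS)} videos are no longer available")
```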

[Image: a video removed by YouTube.]

One idea that has been circulating is a digital “evidence locker”: a system that would ensure powerful but offensive citizen media related to human rights is downloaded and saved in a way that preserves metadata and other important video information, so that it can potentially be used in future prosecutions and investigations by NGOs and human rights actors even if it is rapidly deleted from a social platform.
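To make the preservation idea concrete, here is a minimal, hypothetical sketch of what an evidence-locker entry might record for a video file that has already been saved: a cryptographic hash for later integrity verification, plus the provenance metadata that platforms typically strip. The file names and fields are illustrative, not a real WITNESS tool:

```python
# Hypothetical "evidence locker" entry: hash a saved video file and record
# provenance metadata alongside it. Paths and fields are illustrative.
import hashlib, json
from datetime import datetime, timezone

def archive_entry(video_path, source_url, notes=""):
    sha256 = hashlib.sha256()
    with open(video_path, "rb") as f:
        # Hash in 1 MB chunks so large video files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    entry = {
        "file": video_path,
        "sha256": sha256.hexdigest(),  # lets investigators verify integrity later
        "source_url": source_url,      # where this copy was retrieved from
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "notes": notes,                # context: who/what/where, if known
    }
    # Store the metadata as a sidecar file next to the video itself.
    with open(video_path + ".json", "w") as f:
        json.dump(entry, f, indent=2)
    return entry

archive_entry("protest_2014-09-01.mp4",
              "https://www.youtube.com/watch?v=EXAMPLE_ID",
              notes="Eyewitness upload; location not yet verified")
```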

Add to the mix conversations about how to build better human rights-dedicated archives and tools for the secure sharing of dangerous or compromising material. The idea behind some of these tools (which include projects like our own InformaCam app) is that they would allow a creator to send content directly to investigators and human rights groups.

Circulating and Re-circulating Perpetrator-shot Footage

For the past few years, human rights groups and journalists have grappled with an increasing flood of graphic imagery – some of it even more pervasive than what we have seen recently – that they can access as documentation, an investigatory lead, or potential news-story content.

Perpetrator footage forms a significant part of this content, and it has formed – and will form – a significant part of human rights campaigns, international criminal prosecutions, and news investigations. Police brutality imagery shot by police themselves has galvanized public campaigns in multiple countries, and the types of footage shot by groups like ISIS may one day be used as evidence in international criminal justice trials. Historically, in international trials, the most incriminating statements have often been made by perpetrators themselves, linking themselves to crimes. Yet, of course, by circulating this imagery we can play into the propaganda needs of human rights violators, we can feed what has emerged in some settings as a commercial market for terrible imagery of violations, and we can re-victimize – violating for a third time the dignity of people who have already faced a direct abuse and the humiliating violence of having it captured on camera.

There is a role that platforms, news outlets, and concerned citizens can play here. We've advocated for more social media and video-sharing platforms to incorporate tools for visual anonymity, such as the blurring function on YouTube. These enable people to share first-hand documentation of human rights violations while, to some extent, protecting the identity of victims and survivors. A tool like the face blur function or our ObscuraCam app can also enable activists to re-share a copy of perpetrator-shot footage while minimizing the re-victimization. The concern here is that by recirculating these images unaltered we create further harm to someone who has already been victimized and to those who are emotionally involved, such as a victim's family.
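As a rough sketch of the face-blurring idea – using OpenCV's bundled Haar-cascade face detector rather than ObscuraCam's or YouTube's actual implementations, and with illustrative file names – one could blur each detected face region in a frame before re-sharing it:

```python
# Sketch: blur detected faces in a single video frame (illustrative only).
import cv2

def blur_faces(frame):
    # Load OpenCV's bundled frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavy Gaussian blur.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(
            frame[y:y+h, x:x+w], (51, 51), 30)
    return frame

frame = cv2.imread("frame.jpg")  # one extracted video frame
cv2.imwrite("frame_blurred.jpg", blur_faces(frame))
```

A production tool would need to track faces across frames and handle profiles and occlusion, but the principle – detect, then irreversibly obscure before re-sharing – is the same.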

This was brought home for us last year by the sharing of images of brutality towards LGBT people in Russia, and the relentless recirculation of those images by news outlets and citizens. In that case, we strongly argued for activists and news outlets to, at a minimum, blur the faces of the victims, and to recognize their own role in creating precisely the effect the perpetrators of these crimes wanted: to dehumanize and humiliate their victims.

Building Dignity and Respect into the Conversation

We can also, in some circumstances, make conscious decisions not to watch and not to share. A broader issue of dignity and respect for our fellow humans underlies the questions of whether, how, and when to share. WITNESS believes in the right to free expression, which applies online as much as offline. But we also need to consider values around protecting people who have already faced horrible indignities and violence imposed upon them by human rights violators, and the ethics of increasing the visibility of such images.

A campaign to combat hate speech directed at Muslims in Burma. More about the campaign at https://www.facebook.com/supportflowerspeech. Image (c) Kenneth Wong.

These ethics questions are best settled by a cultural conversation that establishes broader consensus on when it's acceptable to share images that deeply compromise other people's basic humanity, and on how we recognize when basic human rights values of privacy, dignity, or consent are missing. They're also part of a conversation we can have proactively about counter-speech – how we use our own capacity to participate in online conversations to challenge debasing, violent, or hateful online speech or images, as we saw many people try to do in the wake of the widespread sharing of the ISIS images.

There is a nascent international movement focused on the power of positive speech to counter violence and hate speech – epitomized in groups like the Panzagar movement in Burma, which counters the vicious anti-Muslim vitriol and images on Facebook and in public discourse in that country, and, in a less coordinated way, in the choices we make to recirculate positive images of people whose lives have been lost to human rights violations, such as the image of David Haines with his family highlighted at the top of this post.

When Do We Need Videos? When Should Images Not Matter?

We also need to recognize the need for a public conversation on what images and types of images move people to action, when we “need” to see images, and what happens when dire situations have no images. Although WITNESS is an organization focused on the effective use of the moving image for human rights, the imbalance of images from some contexts (and some types of human rights situations) has been a consistent concern of ours in a world increasingly predicated on the visual image.

There are few images circulating online from the violence in the Central African Republic or rural Democratic Republic of Congo – should this make the crises in these countries any less newsworthy or actionable than the ceaselessly documented violence in Syria? And on human rights issues that are systemic – for example, the pervasive discrimination in access to education for Roma in Europe – it's hard to find a visual summation or sight-bite. On an ongoing issue like domestic violence against women, it's rare to have as clarion an image to crystallize an issue in the public consciousness as emerged in recent weeks in the US, with video showing a prominent American football player knocking out his fiancée in an elevator and dragging her out.

So as much as we celebrate the possibilities of accountability in a “cameras everywhere” world, we must also recognize the dangers of what this drives us to watch, share, and prioritize – and of what is excluded.
