Between religion and the right to free expression
by Sarah Myers West | Nov 18 2015
The violent attacks on the French satire magazine Charlie Hebdo last January only served to reinvigorate support for free expression, prompting many public figures and celebrities to voice support for the principles the weekly’s journalists lived by. Facebook’s CEO Mark Zuckerberg was among them, responding with a message of support: “You can’t kill an idea,” he said.
But supporting free expression takes more than words alone: weeks after writing that post, Facebook began to block pages in Turkey that “offended the Prophet Muhammad,” in accordance with a court order. A few months prior, Turkish authorities had blocked Twitter and YouTube after users posted leaked information implying corruption at the highest levels of government; Facebook’s lawyers likely deemed blocking the pages a necessary act of compliance with local law. But as Washington Post journalist Caitlin Dewey describes it, “It would be unfair to fault Facebook for complying with a legitimate foreign government request, regardless of how oppressive it may seem. But for Facebook to do that while simultaneously styling itself as the patron saint of political speech? It seems a little disingenuous, to say the least.”
It’s easier for Internet companies to draw the line against censorship when content is overtly political in nature, what First Amendment law terms “core political speech.” But when it comes to deciding whether to take down images of the Prophet Muhammad and other religious content, Internet companies like Facebook seem to have a harder time figuring out where that line falls.
Perhaps the most prominent case dealing with this issue followed the posting of the now-infamous “Innocence of Muslims” video on YouTube, which contained an offensive portrayal of the history of the Muslim faith. The video instigated a long battle over whether Google should be responsible for taking down the content, after it allegedly contributed to violent demonstrations held at US missions in Egypt and Libya.
By Google’s own assessment, the video did not violate the site’s terms of service: the company determined it did not constitute hate speech or otherwise exceed YouTube’s content guidelines. Despite this, Google temporarily blocked the video in Egypt and Libya “due to difficult circumstances” before complying with government requests in eight other countries. A court later ordered it to remove the video from YouTube entirely after one of the actresses appearing in the film asserted a copyright claim over the video. But on May 18, 2015, an appeals court overturned the order and rejected the claim, making it possible for the video to appear on YouTube once again.
However, Google’s transparency report reveals a more complicated picture—20 countries in total requested the video be removed, and it’s unclear how Google made the determination not to block it in the rest (Google did not comply in Australia, Bangladesh, Brazil, Brunei, Djibouti, Iran, Lebanon, Maldives, Pakistan, the United Arab Emirates, and the United States). Whatever criteria were involved, the decision had negative consequences for Google in several countries: YouTube was banned in Afghanistan, Bangladesh, Egypt, Jordan, Pakistan, and Sudan following the publication of the video, and it remains blocked in Iran and Pakistan.
Unquestionably these are challenging decisions for Google’s lawyers to make, in no small part because of their geopolitical consequences. But the transparency report’s lack of detail about how the decision was made is troubling: why was it acceptable for Australians and Pakistanis to see the video but not for Egyptians or Malaysians? How should the balance be struck between free expression as an essential human right and the religious values (as well as values of non-belief) that run deep for much of the world?
Ironically, Facebook censored a post addressing just this tension shortly after the Charlie Hebdo shootings. Pakistani actor Hamza Ali Abbasi wrote a post in response to the shootings condemning the violence while questioning how free expression has been interpreted in the West. In the post, Hamza said the “west needs to understand too that freedom of expression includes criticism, disagreement or even rejection of faiths or ideology…but should not and must not allow ‘insult’”, using several epithets to support his case. The post was removed as a violation of Facebook’s Community Standards, and Hamza’s account was shut down shortly afterwards.
After the issue was brought to Mark Zuckerberg’s attention, he apologized for what he called a “mistake” by Facebook’s team and asked Facebook Vice President of Global Operations and Media Partnerships Justin Osofsky to look into it. Osofsky later replied: “we made a mistake in taking this down. We try to do our best, but sometimes make mistakes. We apologize for this error, and hope that the author will re-post it”. In response, Hamza said “It’s just funny that their selective freedom of speech caused them to delete only this particular post of all things”.
Facebook’s guidelines on hate speech are somewhat complex. The company removes speech that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or serious disabilities or diseases. But it also allows the sharing of content that contains hate speech for the purpose of raising awareness or educating others about it, as Hamza’s post did. According to the policy, “when this is the case, we expect people to clearly indicate their purpose, which helps us better understand why they shared that content”.
Hamza’s celebrity status—he has a verified account on Facebook—may have worked both for and against him in this case. On the one hand, his highly visible profile may have made it more likely for the post to be flagged (though it’s not entirely clear what process led to the deletion of both the post and the account). On the other, it ensured the wrongful deletion reached Zuckerberg, who prompted a swift response from Facebook. At the moment, there’s little recourse available to users when content is wrongfully removed unless they have an inroad into the company, though Facebook does provide an appeals process for suspended accounts.
As these cases make clear, arbitrating between religious values and the right to free expression requires a careful balancing act. The two are often placed into tension, and Google, Facebook and other companies have to determine how to negotiate them.
In so doing, there’s more they could do to let users in on the process:
Provide greater transparency as to how decisions are actually made. For example, Facebook’s Community Standards provide helpful detail on how it assesses whether content is hate speech, and in what contexts such speech is allowable. But laws on hate speech and blasphemy differ widely from country to country—and both Facebook and Google are less transparent about how they decide whether or not to comply with them than they are about providing data on when they comply.
Provide clear, consistent and immediate explanations when a policy is enforced, and ensure that the explanations are given in the language in which the service is used. Vague policy procedures invite abuse; censorship can masquerade in many forms.
Create a path of recourse for users who feel their content has been wrongfully removed. At the moment, these decisions are made unilaterally by the company. But given the level of nuance these issues require and the growing volume of discussions, mistakes are likely. Users should have some system of accountability when they feel the company has incorrectly enforced its terms of service.
Engage locally with human rights groups and NGOs in addition to local governments when requests are made. It would be a challenge for even the most culturally sensitive of us to navigate the nuances of cultural and religious issues without the contextual knowledge that can only be built up over years living in a place. Listening to voices both inside and out of government can help ensure these choices are informed. However, maintaining a physical presence within the country may have legal jurisdictional consequences that should also be taken into consideration.