The seemingly overbroad application of Facebook’s Community Standards has drawn intensified scrutiny of the platform’s rules and procedures in recent months. Public outcry following the removal of Nick Ut’s Pulitzer-winning photograph of Kim Phúc fleeing a napalm attack, on the grounds that she was nude, highlighted the absurdity of the platform’s prudish content enforcement policy.

Last week, the platform’s policy team announced that the context of a post will now be taken into account in determining whether it is appropriate under the Community Standards. The change is likely a response to the removal of Ut’s The Terror of War, as well as several other high-profile content takedowns this year, including the removal of several videos depicting police brutality in the United States.

In a press release, Facebook wrote: “In the weeks ahead we’re going to begin allowing more items that people find newsworthy, significant, or important to the public interest—even if they might otherwise violate our standards.” Many questions remain, however: the company’s enforcement of its content standards, as well as its reliance on algorithms to filter and promote content, has proven consistently inadequate at parsing nuance or appreciating the gravity of a particular post.

Facebook has in several cases chosen to employ automated systems over humans. Earlier this year, Facebook fired the team of editors it had hired to manage its ‘trending’ news module, opting instead to rely on an algorithm that commentators have described as a “mess.” Facebook also admitted as recently as September that it employs systems that “try and automatically detect content that violates our standards,” taking human moderators further out of the loop. The disastrous consequences of this machine-over-human approach can be seen in the slew of fake news stories being promoted as legitimate news, as well as in the removal of a video showing the shooting death of Philando Castile by a police officer, a takedown the company blamed on an automated filtering system.

In August, when asked by an Italian student if he planned to become a news editor, Facebook CEO Mark Zuckerberg responded: “No, we are a tech company, not a media company.” But Facebook now hosts journalistic content created for the platform by well-respected publications and applies its standards to that content, making it far more than a neutral platform. As journalist Mathew Ingram asks: “Isn’t deciding what is newsworthy exactly the kind of thing media companies do?”

Facebook says that the intent of this shift is “to allow more images and stories without posing safety risks or showing graphic images to minors and others who do not want to see them,” yet the platform continues to remove public health campaigns, a practice that particularly affects women’s health advocacy and body-positive imagery. The company says its goal is “to channel our community’s values, and to make sure our policies reflect our community’s interests,” yet its CEO overruled his own policy team to allow a certain political candidate’s hate speech to remain on the platform.

Facebook, which boasts more than 1.5 billion users around the world, is no longer exclusively a tech company. It has inked exclusive content deals worth millions of dollars with media houses including Buzzfeed, the New York Times, CNN and others in a push to promote its Facebook Live service and bring traditional media into the Facebook ecosystem. The platform is now a place where news is captured and important events play out live. While Facebook is correct in asserting that “respecting local norms and upholding global practices often come into conflict,” the company must also institute policies to ensure that content of news value to the public interest is not removed simply because it falls outside of restrictions on broad categories like ‘graphic content’ and ‘nudity’. A nude breast in a campaign to prevent breast cancer or a small nude child fleeing war is not the same thing as pornography. Life happens in complex tones, and no one-size-fits-all content policy or automated algorithm can provide the nuance needed to determine what is truly in the public interest.

We’re encouraged to see Facebook’s interest in incorporating increased flexibility and contextualization as a tenet of a new content policy enforcement framework. The new enforcement rubric, however, has yet to be seen. This announcement, for all its merit, is only an acknowledgment that the current policy lacks nuance. We therefore look forward to engaging in constructive dialogue with the company to advocate for a policy that ultimately decreases censorship and increases freedom of expression on the platform.

Furthermore, more transparency is necessary, whether content is moderated by humans or by automated processes. The ‘black box’ nature of Facebook’s policies leaves free expression advocates in the dark about the underlying enforcement procedures dictated by the company’s policy team. The more information Facebook releases about its enforcement policy, the better monitors can bring to light cases where content policies were erroneously enforced. Retraining content moderation teams will also be necessary to ensure that any new enforcement policy does not fall victim to the same shortcomings as its predecessor. Single events have the capacity to shape the course of history, and neither ‘human error’ nor ‘technical glitches’ is an excuse for inadvertent censorship.


Here are some questions we have about Facebook’s latest announcement:

  • Does this announcement signal a change in policy or policy enforcement?

  • Will users have more control over how they report content?

  • How will these changes be made public through transparency reporting?

  • Will all users, Partners and politicians be held to the same standard?

  • How will users be able to appeal content removals in time-sensitive situations?