A Resource Kit for Journalists

Intro

1. About This Resource

An “unintended consequence” of our project has been the number of requests for comment and background we’ve received from journalists who see commercial content moderation as a crucial issue. We believe that our team’s expertise and combined experience working on this subject put us in a unique position to provide a robust resource for journalists, researchers, and others seeking more information on content moderation and online censorship.

We hope that this toolkit will be a one-stop resource for information on issues related to content moderation policies. In this toolkit, you will find analysis of the impact of a number of policies related to specific areas of content moderation across multiple networks. We have also created case studies that are designed to show how the arc of censorship is not isolated to a single issue. From censorship of the human body to controversial hate speech takedowns, individuals and communities around the world continue to be negatively impacted by corporate content decisions.

Onlinecensorship.org has become an important resource for journalists reporting on content moderation in social media. While we are always happy to comment on new issues that crop up, each controversy develops out of the unique interplay between individual sites, their moderation policies, and the types of public speech they are censoring. We hope that this toolkit will provide you the information you need to report your story with depth and precision. 

2. Who We Are

Onlinecensorship.org seeks to encourage social media companies to operate with greater transparency and accountability toward their users as they make decisions that regulate speech. We collect reports from users in an effort to shine a light on what content is taken down, why companies make certain decisions about content, and how content takedowns are affecting communities of users around the world.

After winning the 2014 Knight News Challenge, Onlinecensorship.org officially launched in 2015 with the aim of expanding transparency by collecting reports from social media users who felt they had experienced censorship. Over time, the project has naturally evolved to include other components aimed at educating and raising awareness amongst users, journalists, and other stakeholders.

Onlinecensorship.org is a project of the Electronic Frontier Foundation and Visualizing Impact.

3. Contact Information

Our team can connect you to reliable experts and other resources for your media stories. For press inquiries or to get in touch, visit our Contact Us page and put “media request” in the subject line so we can get to it right away.

FAQ

Last updated: September 2017

1. What is content moderation?

All social media companies make decisions about what kinds of content they allow on their site, outlined in general principles in their community guidelines. Though the community guidelines are designed to help users police themselves, in nearly all instances the companies also moderate content users post, after the fact. Typically, they rely on other users’ reports to identify which content allegedly violates the community guidelines and needs to be taken down. When users report other users’ content, it’s called flagging.

2. What is the difference between Terms of Service and Community Guidelines?

Terms of service (also known as “terms of use” or “terms and conditions,” commonly abbreviated as ToS, ToU, or T&C) are rules by which one must agree to abide in order to use a service. ToS documents are often very long, written in complicated legal jargon, and may be legally binding. They typically don’t spell out a company’s content policies in detail, but may name specific categories of speech (such as support for terrorism or violent threats) that are banned from a company’s platform(s). Furthermore, companies can adjust their ToS at any time and do not necessarily have to notify users of these changes.

Community Guidelines (also known as “community standards”), on the other hand, are rules set forth for users of a platform, and usually go further than ToS in outlining categories of unacceptable speech, giving a high-level explanation of how the company defines them. Community Guidelines may set out consequences for users who violate them, but often do not do so with any specificity. The most detailed operationalization of company policies takes the form of the documents provided to content moderators, which give very explicit and specific instructions for deciding whether or not a particular piece of content violates the content policy. These documents are not made public; however, leaked copies have sometimes surfaced, providing insight into how companies enforce their content policies. For more details, see ProPublica’s coverage and the Guardian’s Facebook Files.

3. How are Community Guidelines implemented?

As we’ve learned from media reports, investigative reporting and leaks, content moderators often have a more robust set of documents outlining the rules than do users. Facebook, YouTube, Instagram, and Twitter employ human content moderators to implement community guidelines, yet there is much we don't know about their internal policies.

None of the companies we currently track provide detailed reports on the percentage of reviewed content that is taken down. Without context and cultural insight, a content moderator may mistakenly remove a piece of content as a violation of community guidelines. This problem is especially acute for traditionally marginalized groups, such as communities of color (racial epithets) and LGBTQ people (reclaimed slurs). Other issues, such as satire and humor or the substitution of a slur with an innocuous word, require a human to make a judgment call.

The Guardian’s Facebook Files have given us some insight into how Facebook’s content moderation processes work. The company’s workers are given limited training, which some reports say may be as little as two weeks. One component of this training is a series of quizzes to test knowledge of platform rules. As The Guardian notes: “The guidelines also require moderators to learn the names and faces of more than 600 terrorist leaders, decide when a beheading video is newsworthy or celebratory, and allow Holocaust denial in all but four of the 16 countries where it’s illegal – those where Facebook risks being sued or blocked for flouting local law.” A source for the Facebook Files also told the Guardian that reviewers have less than ten seconds to decide whether a piece of content violates the rules. The rules that content moderators must memorize run to 44 pages. Facebook’s Monika Bickert told ProPublica that the company conducts a weekly audit of the work done by all Facebook reviewers.

Twitter recently rolled out new features, including an updated in-app reporting mechanism, while Instagram and YouTube are relying more heavily on algorithms to police content. With this growing reliance on automated systems, transparency will remain a priority for researchers and advocates. The difficulty of analyzing how community guidelines are implemented will continue as long as technology companies rely on opaque, internal guidelines that they shield from the public.

4. How does flagging work?

Typically, when a user flags a piece of content, it goes into a queue to be evaluated by a content moderator, either an employee of the company, a volunteer, or—most often—a contract worker who may operate outside of the country in which the content was posted. The moderator must quickly identify whether the flagged piece of content violates the site’s policies, and take some course of action. The possible types of action range widely from platform to platform—they may involve adding a filter that requires users to verify their age to view the content, removing the piece of content entirely, or shutting down the account of the user that posted it. 

In some cases, users have the option to appeal the action taken against them, though this option may be limited. Generally, users receive an email or in-app notification when their content is taken down which, when applicable, directs them to the next steps for an appeal. Often, an appeal reroutes the content back through the moderation process and requires a second moderator to assess whether the original decision was made in error.
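To make the workflow above concrete, here is a minimal, hypothetical sketch of the flag-queue-review-appeal pipeline in Python. The names (Flag, flag_content, review_next, appeal) and the list of possible actions are our own illustrations and do not correspond to any company's actual system.

```python
# A simplified, hypothetical sketch of the flag -> queue -> review -> appeal flow.
# Names and actions are illustrative only, not any platform's real implementation.
from collections import deque
from dataclasses import dataclass

# Possible moderator decisions, ranging from no action to an account suspension.
ACTIONS = ("no_action", "age_gate", "remove_content", "suspend_account")

@dataclass
class Flag:
    content_id: str
    reporter_id: str
    reason: str            # e.g. "nudity", "hate_speech", "spam"
    is_appeal: bool = False

review_queue = deque()     # flagged content waits here for a human moderator

def flag_content(content_id, reporter_id, reason):
    """A user report ("flag") simply places the content in the review queue."""
    review_queue.append(Flag(content_id, reporter_id, reason))

def review_next(decide):
    """A moderator takes the next flag and chooses one of the possible actions."""
    flag = review_queue.popleft()
    action = decide(flag)  # `decide` stands in for the human judgment call
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    return flag, action

def appeal(flag):
    """An appeal reroutes the same content back through the queue for a second reviewer."""
    review_queue.append(Flag(flag.content_id, flag.reporter_id, flag.reason, is_appeal=True))
```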

5. Do companies have to moderate their content?

Social media companies headquartered in the US are not required to moderate their content. They are protected under the intermediary liability provision of Section 230 of the Communications Decency Act, sometimes called CDA 230. Section 230 generally immunizes Internet intermediaries from legal liability for hosting user-generated content that others claim is offensive or unlawful. In other words, online platforms cannot be sued for content posted by users on their social networks, with a few exceptions for criminal and intellectual property claims. Importantly, they also cannot be sued if they do decide to moderate content. Furthermore, moderating some content does not create an obligation for them to moderate all content, as Section 230 includes a provision designed to encourage companies to self-police. 

6. Are companies employing algorithms in content moderation?

Companies are increasingly seeking ways to improve the efficiency and speed of content moderation. Moderation by humans is a resource-intensive process, and it results in significant burnout among the workers employed to do clickwork that involves viewing often violent and abusive content.

As a result, companies are turning to algorithmic means to automate the moderation process where possible. One example is PhotoDNA, a technology that computes hash values for images or audio files, reducing them to a digital signature that can be used to identify identical or near-identical files. PhotoDNA is used in Project Vic, an initiative of the National Center for Missing and Exploited Children, to check retrieved images of child sexual abuse and help law enforcement officials identify and locate missing children. Another approach involves systems like YouTube’s ContentID, which, similar to PhotoDNA, automatically identifies copyrighted material in user uploads and can remove it from YouTube. YouTube has also been piloting the use of AI to flag extremist content.
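For readers curious about the mechanics of hash matching, the short Python sketch below shows a generic perceptual “average hash,” one simple way to reduce an image to a signature and compare it against a database of known signatures. This is only an illustration of the general idea: PhotoDNA’s actual algorithm is proprietary and far more robust, and the function names and the use of the Pillow library here are our own choices.

```python
# Illustrative "average hash": reduce an image to a 64-bit signature and compare
# signatures by Hamming distance. This is NOT PhotoDNA, which is proprietary;
# it only demonstrates the general idea of signature-based matching.
from PIL import Image  # requires the Pillow library (pip install Pillow)

def average_hash(path, size=8):
    """Shrink and grayscale the image, then record which pixels are brighter than average."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint for the default 8x8 grid

def is_match(hash_a, hash_b, max_distance=5):
    """Two images 'match' if their fingerprints differ in only a few bits."""
    return bin(hash_a ^ hash_b).count("1") <= max_distance

# Hypothetical usage: compare a new upload against a database of known signatures.
# known_hashes = {average_hash("known_image.jpg")}
# if any(is_match(average_hash("upload.jpg"), h) for h in known_hashes):
#     ...  # route the upload to the takedown / reporting workflow
```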

In addition, companies including Facebook and Twitter have instituted automatic bans for certain types of content and for repeat offenses. These bans range from 12 hours to 30 days, depending on the offense, and automatically lock the user out of their account, essentially imposing a “cool-down period” before they are allowed back on the platform. Automatic bans cannot be appealed by the user.

7. Who moderates content on social media sites?

Most content moderation (sometimes called “commercial content moderation”) is performed by human workers in various cities around the world. Social media companies are typically not forthcoming about the fact that much of this work is outsourced to third-party companies, whose workers receive less pay and fewer benefits than full employees. Workers may have as little as ten seconds to adjudicate a piece of content, much of which is graphic in nature, and have complained about a lack of training and support.

We know that Facebook’s content moderation army—which the company calls “community operations analysts”—numbers around 4,500 people. Although this may be one of the largest operations of its kind, 4,500 people is a tiny fraction of the 1.8 billion people who use the site each month. With over 100 million pieces of flagged content to review each month, it’s hard to say whether Facebook’s initiative to hire 3,000 more moderators will significantly improve the system.

8. Can users appeal content moderation decisions?

Companies admit that they get some content decisions wrong, and as such, provide due process to users—but only in certain cases. For example, Facebook users cannot appeal content takedowns during an automatic ban period, and cannot appeal decisions about individual posts, photos, or videos. Instagram allows for appeals in the case of an account suspension, but not when content is removed. 

For more information, see our guide to appealing content decisions.

Issue Areas

1. The Human Body

Last updated: September 2017
Description of issue: A number of social media policies contain rules that ban certain depictions of the human body, particularly the nude body. 

Facebook
Facebook maintains a strict policy that forbids nearly all nude imagery from being shared on the platform, citing the “cultural background” and age of some of its users as justification. The policy explicitly forbids the display of genitals or fully exposed buttocks, while other aspects of the policy—such as when women’s nipples are allowed to be shown—remain open to interpretation by content moderators. Facebook’s nudity policy disproportionately impacts certain groups, namely women and trans individuals, as well as some indigenous communities.

In 2014, Facebook clarified its Community Guidelines to specifically allow breastfeeding and post-mastectomy images, but users still complain of such images being removed. Furthermore, the use of nude imagery for important purposes—including breast cancer awareness campaigns, advocacy around trans and indigenous rights issues, and childbirth education—is often not permitted. Nude art, while technically allowed, is frequently the target of censors, which has led some to ask whether Facebook is too conservative for artists.

Facebook’s content moderation system—through which users report other users for potentially violating company policies—conflates nudity with sexuality and pornography.

A May 2017 leak of internal documents from Facebook—dubbed the Facebook Files—showed how moderators use detailed implementation guidelines to determine whether nude and sexual content violates the rules, and how such decisions aren’t always simple. For example, child nudity “in the context of the Holocaust” is banned, but other newsworthy child nudity (such as the once famously-censored “Napalm Girl” photograph from the Vietnam War) should be permitted.

Instagram
Like its parent company Facebook, Instagram doesn’t allow nudity on its platform, stating so clearly in the short form of its community guidelines.

Instagram’s guidelines largely match Facebook’s, but in practice, Instagram frequently takes down content that seemingly complies with its guidelines. For example, while images showing women in bikinis or underwear are not explicitly banned, Instagram has in some cases removed such content, particularly when the individual in the image is fat. In another incident, Instagram censored a menstruation-themed series of photos by artist and poet Rupi Kaur, despite the fact that the images didn’t contain nudity.

Several campaigns have been aimed at changing Instagram’s policies. The Free the Nipple campaign started as a movement targeting public nudity laws in the United States, but began to criticize Facebook and Instagram’s nudity bans after Lina Esco, a member of the movement, was suspended from Facebook in 2013 for posting a trailer from a documentary where she runs topless through New York’s Central Park, where female toplessness is legal. The campaign has utilized innovative tactics in an effort to influence the company, and several celebrities (including Miley Cyrus, Lena Dunham, Chrissy Teigen, and Chelsea Handler) have joined the protest, sharing their own topless photographs.

The policy has also impacted transgender users, prompting several campaigns to raise awareness about how Facebook and Instagram treat the human body differently depending on its gender. One transgender woman, Courtney Demone, started a campaign called #DoIHaveBoobsNow to mark the point at which the companies began to view—and censor—her as a woman. The Instagram account Genderless Nipples posts images of nipples disconnected from the bodies they belong to.


2. "Real Name" Policies

Last updated: September 2017
Description of issue: Since its inception, Facebook has maintained a policy requiring the use of “real” or “authentic” names by its users. Other social media companies have followed suit. 

Facebook
Initially launched as a social networking website for Harvard students, Facebook has maintained some of its early elements, including a policy that requires its users to go by their “authentic” name. CEO Mark Zuckerberg has claimed that requiring users to stand behind their opinions with an “authentic name and reputation” makes their “community” more accountable.

Early violations of the policy led to users being suspended permanently from the platform without the right to appeal. Later, users were asked to submit identification via email, an insecure channel. Today, Facebook instead accepts several types of identification from users whose names have been challenged. Some users have reported being able to use fake documents.

As with its other policies, Facebook allows its users to report, or flag, others for using a “fake” name. This type of flagging is easily abused because doing so triggers a prompt for the reported user to submit identification, something that some users feel uncomfortable sharing with the company.

Over the years, the policy has negatively impacted a variety of individuals, including Native Americans; people who share names with famous fictional characters; drag queens and burlesque performers; and people, such as Tamils, who use mononyms.

In 2014, after a number of San Francisco Bay Area drag performers were suspended from Facebook for using their stage names, members of that community mobilized to create the Nameless Coalition and the MyNameIs campaign to protest the policy. Demonstrations were held outside of Facebook’s corporate headquarters. As a result, Facebook executives held meetings with activists and the policy was slightly adjusted, but opposition to the policy continues.

A year after the changes, in a Facebook Q&A, CEO Mark Zuckerberg defended the policy, writing:

“This is an important question. Real names are an important part of how our community works for a couple of reasons. 

First, it helps keep people safe. We know that people are much less likely to try to act abusively towards other members of our community when they're using their real names. There are plenty of cases -- for example, a woman leaving an abusive relationship and trying to avoid her violent ex-husband -- where preventing the ex-husband from creating profiles with fake names and harassing her is important. As long as he's using his real name, she can easily block him. 

Second, real names help make the service easier to use. People use Facebook to look up friends and people they meet all the time. This is easy because you can just type their name into search and find them. This becomes much harder if people don't use their real names.”

Zuckerberg’s first claim—that real names promote civil behavior—has been called into question by researchers on a number of occasions.


3. Live Streaming

Last updated: September 2017

Description of issue: Social media companies have transformed themselves from conversation platforms into media powerhouses. Twitter's Periscope service and Facebook Live are reshaping the way that video is captured, transmitted, and viewed. But companies’ content moderation processes aren’t always ready for new features.

Facebook

With the rollout of its “Live” video streaming service, Facebook has become a one-stop shop for both video production and distribution. Video posted by users can be viewed, shared, and commented upon during a broadcast as well as after the stream has ended, but the content remains subject to Facebook’s content policies, and live video can be interrupted mid-broadcast by content review teams.

Livestreaming of police violence on Facebook Live and Instagram has made global headlines. Facebook commented explicitly on the livestreaming of graphic content in the aftermath of a killing by US police. In July 2016, following the police shooting of Philando Castile, which was broadcast live on Facebook by Castile’s girlfriend and allegedly taken down due to a “technical glitch,” the company wrote: “One of the most sensitive situations involves people sharing violent or graphic images of events taking place in the real world. In those situations, context and degree are everything.” Content has been temporarily taken down even in jurisdictions where it is legal to film the police.

Facebook has gone to great lengths to allow content that is, on its face, a violation of its ToS or Community Guidelines, but is nonetheless newsworthy and in the public interest. This approach was outlined in an October 2016 policy update.

In September 2016, Facebook blocked a live stream of police arresting 22 people at a Dakota Access Pipeline protest. Facebook then restored the link and said the removal was due to a glitch. Given that law enforcement had created a limited access zone around the site of the protest, Facebook Live was one of the few ways that participants could broadcast scenes from the ground.

In January 2017, Russian media outlet RT was temporarily banned from posting videos, photos and links for 24 hours, which the channel attributed to their livestreaming of former US President Obama’s press conference. Facebook admitted that several takedowns and punitive measures—including the temporary ban received by Russia Today—were the result of its automated systems. 

Because Facebook “doesn’t want to censor or punish people in distress who are attempting suicide,” the platform does allow the livestreaming of “self-harm” in some cases; the company will remove the post if it is uploaded by someone other than the person attempting self-harm. In defense of the policy, Facebook points to a May 2017 case in which a user attempting to livestream a suicide attempt was seen by family and friends, who notified the police in time to save her life. Moderation teams have been instructed not to remove self-harm-related posts, but to escalate them and, in certain cases, perform a “welfare check” by contacting local authorities.

Violence on the platform remains an issue for Facebook Live. In April 2017, a man in Cleveland posted a video to Facebook announcing his intent to commit murder and subsequently posted another video of himself killing an elderly man. He then used Facebook Live to confess to the murder. Following the incident, Facebook claimed it had received no user reports before the murder took place and released a statement saying it would investigate its “reporting flows” to ensure that people can report videos and other material that violates Facebook’s standards quickly and easily. Facebook would later claim that it “lost reports” that came in during the Live broadcast.

In May 2017, a man in Thailand streamed the murder of his baby and his own suicide on Facebook Live. The video remained up for 24 hours, and by the time it was removed, it had already been viewed hundreds of thousands of times.

As the leak of Facebook’s internal rulebook for content moderation demonstrates, it is enormously difficult to regulate live video content at scale. In the wake of another violent incident captured live, Facebook pledged to hire 3,000 more content moderators to police the platform for abuse.

Twitter

Twitter has integrated livestreaming service Periscope into their mobile app, but Periscope retains its own Community Guidelines. Users are given an in-app option to report a broadcast, which is then referred to Periscope’s content team. 

Abuse of flagging procedures and trolling have been issues for Periscope, so the livestreaming service now uses a system in which users themselves decide which comments violate the platform’s rules. When a comment is reported, it is shown to five users, who decide whether it violates the rules. If a majority of them decide it does, the comment is removed and the user is temporarily banned, or kicked out of the stream entirely if they have multiple offenses. Although Periscope’s guidelines forbid sexually explicit content, Motherboard has reported that live sex is common on the platform.
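As a rough illustration of the majority-vote mechanism described above, here is a short Python sketch. The function names are hypothetical; Periscope has not published its actual implementation.

```python
# A minimal sketch of majority-vote comment moderation: a reported comment is
# shown to a small jury of viewers, and a majority verdict decides its fate.
# Names are illustrative only, not Periscope's real API.
import random

JURY_SIZE = 5  # a reported comment is reportedly shown to five viewers

def review_reported_comment(comment, viewers, ask_viewer):
    """Return True if a majority of a randomly chosen jury judges the comment abusive.

    `ask_viewer(viewer, comment)` stands in for the in-app voting prompt and
    should return True if that viewer thinks the comment violates the rules.
    """
    jury = random.sample(viewers, k=min(JURY_SIZE, len(viewers)))
    votes = sum(1 for viewer in jury if ask_viewer(viewer, comment))
    return votes > len(jury) // 2  # True => remove the comment and mute the commenter
```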

4. Hate Speech, Threats, and Extremism

Last updated: September 2017
Description of issue: With the rise of terrorism, anti-migrant sentiment, and right-wing extremism, companies have been called upon by governments and the public to tackle hate speech. Their efforts have been lauded by some and criticized by others, both for going too far and not going far enough to combat hate speech.

Facebook
Like many other companies, Facebook forbids hate speech, defined as content that attacks people based on their race; ethnicity; national origin; religious affiliation; sexual orientation; sex, gender or gender identity; or disabilities or diseases. Over the years, the company has been criticized by a number of different parties for its stance on particular types of hateful speech.

In 2009, TechCrunch founder Michael Arrington criticized the platform for allowing hateful speech toward Jews as well as Holocaust denial—the latter has been cause for criticism from many others. Similar criticism has come from Australian Aboriginal groups, Palestinians, women, and other groups who feel that the company has ignored hate speech directed at their communities.

In the past few years, with the rise of ethno-nationalism, terrorism, and right-wing extremism, Facebook has come under pressure from governments to remove terrorist content and hate speech from its platform. Germany has enacted a law requiring Facebook, Google, and Twitter to remove hate speech within a certain timeframe or face penalties, and the European Union has pressed the companies to commit to a code of conduct with similar removal timelines. Other countries have sought to strike similar deals with the companies.

The Facebook Files provide more insight into how the company moderates hate speech than do its Community Guidelines. From the leaked slides in the files, we know that Facebook considers certain classes of people to be protected groups (such as the followers of a religion, the inhabitants of a country, or sexual minorities) and not others (such as the members of a political party or social or economic class).

Curiously, Facebook has also implemented special classes of users, called Quasi Protected Categories (QPCs). This includes “people who cross an international border with intent to establish residency in a new country”—such as migrants, refugees, and asylum seekers. The slides—used as implementation guidelines by content moderators—offer complex calculations for what constitutes hate speech.

A leak of internal documents to ProPublica in June 2017 demonstrated that “white men” are considered a protected class by Facebook: A recreation of the leaked slides shows another calculation that categorizes “female drivers” and “black children” as quasi-protected groups because one of the characteristics in each (drivers, children) is not protected.

In recent months, a number of users have complained of being unfairly punished for violating Facebook’s hate speech policies, including queer activists, black women, and individuals sharing threats they’ve received.

Twitter

Although Twitter has long had a “hateful conduct policy,” the platform’s leadership once referred to the company as “the free speech wing of the free speech party” and was resistant to taking down content. Users have long complained that the company doesn’t do enough to moderate hate speech and harassment, especially when targeted at traditionally marginalized communities. Twitter acknowledged this issue in 2014, when the company partnered with Women, Action, and the Media (WAM!) in an effort to fix the platform’s harassment problem. The WAM report lists six recommendations to deal with targeted harassment including: “more broadly and clearly define what constitutes online harassment and abuse” and “develop new policies which recognize and address current methods that harassers use to manipulate and evade Twitter’s evidence requirements.”

Also beginning in 2014, the company put more resources into tackling online extremism. In March 2017, it was reported that Twitter had suspended 636,248 accounts for extremism between August 2015 and December 2016. Twitter has also permanently suspended several prominent users for hateful speech or harassment, including Azealia Banks, Milo Yiannopoulos, and Chuck Johnson. Since the 2016 US election, the company has been bombarded with calls to take right-wing extremism seriously. Amidst the rise of movements like Antifa, which overtly confront white supremacists and neo-Nazis, the company has come under fire for taking down posts and accounts engaged in counterspeech.

YouTube

Like other companies, YouTube prohibits hate speech, yet the Alphabet Inc.-owned company has gone back and forth over time on whether to allow graphic content such as beheading videos. In the early days of the Syrian war, the company decided to allow such content when posted for documentary purposes; after the rise of ISIS, it backtracked.

YouTube has become important for documenting human rights abuses and human trafficking, and even for open source intelligence gathering. In the second half of 2017, the company was again under fire for removing crucial videos documenting the war in Syria, as well as for shuttering a North Korean propaganda channel, both for apparent violations of community guidelines.


5. Fake News

Last updated: September 2017
Description of issue: Fake news involves the deliberate spreading of misinformation that is designed to manipulate. It often involves hyper-partisan sources, spam or scams, hoaxes, and rumors or accidental misinformation. 

Facebook
In December 2016, Facebook released a plan to “clear hoaxes spread by spammers for their own gain” by engaging both its community and third-party organizations.

Facebook stated that it plans to test ways to make it easier for users to report stories on the platform that they believe to be false.

Among its many efforts, the company has created partnerships with outside fact-checking organizations—such as FactCheck.org—that are signatories of Poynter’s International Fact Checking Code of Principles to help indicate when articles are false. It also plans to change some of its advertising practices to stop purveyors of fake news from profiting from it, and will more broadly roll out Related Articles, a feature that highlights additional fact-checked stories below news links that are suspected of being hoaxes.

Unfortunately, mistakes and inconsistencies in identifying fake news are inevitable, whether identification depends on algorithms or human labor. Social media platforms already fail to offer sufficient redress when mistakes in content moderation occur, and have made little effort to establish transparency toward their users in their content moderation practices. Efforts to step up moderation must be accompanied by efforts to improve transparency and avenues for recourse.

Furthermore, governments have a strong interest in the control of fake news, since fake news can often influence elections. Expecting social media companies to take responsibility for fake news may open them up to further pressure from governments, including non-democratic ones. At the same time, governments and government officials themselves are often purveyors of fake news and misinformation. While fake news is a problem, freedom of expression advocates are concerned that efforts to combat it may lead to increased censorship of real news and accurate information, and to the favoring of “trusted” mainstream media sources, which offer a limited and narrow lens on the world.


Glossary of Terms

Appeals: Many platforms offer a process through which users can appeal decisions made about their content or accounts. 
Account suspension / shutdown: When a user violates a platform’s community guidelines, they may be subject to punitive action, including the temporary or permanent suspension (shutdown) of their account.
Algorithmic filtering: Many companies (including Facebook, Instagram, and Twitter) use algorithms to filter the content users see in their feeds. 
Automatic ban: Some platforms institute automatic suspensions on users for violating their rules or guidelines. Such bans can be limited to periods of time, or can be permanent.
Blasphemy: The action or offense of speaking sacrilegiously about God or sacred things; profane talk. In a number of countries, posting content deemed ‘blasphemous’ to social media platforms is a criminal offense.
Community guidelines: In addition to Terms of Service, which may be legally binding, most platforms publish a set of rules or guidelines enumerating inappropriate content that may result in punitive action.
ContentID: YouTube uses the ContentID system to police uploaded videos for copyrighted content. ContentID creates a digital signature for the content and checks it against a database of content that is known to be copyrighted. Videos that match this database may be removed from YouTube’s systems automatically.
Content moderation:  The practice of monitoring and applying a pre-determined set of rules (community guidelines) to user-submitted content.
Content moderation teams: Groups of content moderators that may focus on a particular area of content. These moderators often work for contracting or outsourcing companies.
Content takedown / removal / page removals: Content takedowns or removals occur when a user is reported for violating a platform’s guidelines, and content moderators determine (correctly or otherwise) that the content in question is in violation of the guidelines.
DMCA: The Digital Millennium Copyright Act, a US law that criminalizes the circumvention of technological measures protecting copyrighted material.
Fake news: The purposeful spread of misinformation with the intent to deceive the public.
Flagging: A mechanism through which users can report offensive or rules-violating content to a platform for moderation. Often, companies rely on flags to identify which content should be removed from a platform.
Flagging abuse: Occurs when an individual or group of individuals intentionally reports a piece of content, often repeatedly, with the aim of having it removed. Such individuals or groups often intentionally use an inappropriate or unrelated category to flag content.
Hate speech: Hate speech is generally defined as speech which attacks a person or group on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender. Its legal status varies by jurisdiction.
Geo-blocking: Geo-blocking or geoblocking is a technological protection measure where access to Internet content is restricted based upon the user's geographical location.
Image hashing: A process to reduce a digital image to a single fingerprint or signature based on its visual appearance.
Intermediary liability: Refers to whether internet intermediaries (including social media companies, as well as other intermediaries such as ISPs) are liable for the content posted on their sites. In the US, social media companies are protected from liability under Section 230 of the Communications Decency Act.
PhotoDNA: A technology that computes hash values for images or audio files, reducing them to a digital signature that can be used to identify identical or near-identical images or files. It is used in Project Vic, an initiative of the National Center for Missing and Exploited Children, to check retrieved images of child sexual abuse and help law enforcement officials identify and locate missing children.
Real names: Facebook, like some other sites, has a policy that requires users to use their real or “authentic” name. While restrictions on real names have loosened slightly over the past few years, users are generally required to use at least part of their legal name on the site and, if the name they use is challenged by flagging, they may be required to present identification to Facebook.
Terms of Service: Terms of service (also known as terms of use or terms and conditions, commonly abbreviated as ToS or TOS) are rules by which one must agree to abide in order to use a service.
Transparency reporting: A transparency report is a statement issued on a regular basis by a company, disclosing a variety of statistics related to requests for user data, records, or content.
Vulnerable groups: Facebook categorizes particular categories of individuals as ‘vulnerable' or 'quasi-protected' groups, including heads of state, activists and journalists, and homeless people (a full list is available here). Content that relates to these groups may be more likely to be taken down or to be escalated to supervisors.


Team and Contributors

Staff

Jillian C. York, Co-Founder

Jillian C. York is a writer and activist whose work examines state and corporate censorship and its impact on culture and human rights. Based in Berlin, she is the Director for International Freedom of Expression at the Electronic Frontier Foundation, where she works on issues of free expression, privacy, and digital security. Jillian is particularly interested in the censorship of the human body and the right to anonymity on social platforms.

Ramzi Jaber, Co-Founder

Ramzi Jaber is the cofounder of Visualizing Impact, an organization that specializes in data visualization on social issues. In 2012, he was a Social Entrepreneur-in-Residence at Stanford University. He is a YGL and Ashoka Fellow and has been featured by Creative Commons, Policy Mic, Foreign Policy, and Fast Company. Ramzi is interested in the intersection of digital rights and marginalized communities—especially in the Middle East.

Sarah Myers West, Project Strategist

Sarah Myers West is a doctoral candidate at the Annenberg School for Communication and Journalism at the University of Southern California. Sarah is particularly interested in content moderation processes, the right to appeal content takedowns, and the impact of social media censorship on users.

Jessica Anderson, Project Manager

Jessica Anderson's work focuses on forced migration, data journalism, and MENA social entrepreneurship. She is currently Operations Manager at Visualizing Impact. Jessica is particularly interested in how content moderation affects political speech.

Matthew Stender, Project Strategist

Matthew Stender is a Tech Ethicist who examines the ethical and cultural implications of emerging technology. His research focuses on algorithm transparency, network bias and freedom of expression online, as well as political and socio-cultural dynamics of censorship by social media networks.

Kim Carlson, Staff

Kim Carlson is the international project manager at the Electronic Frontier Foundation. She holds a B.A. in Journalism and Mass Communication from the University of Wisconsin-Madison. Her work focuses on managing content localization and the digital security guide, Surveillance Self-Defense.

Learn more about the team here.


Contributors

Corey H. Abramson

Corey H. Abramson is a self-described media literacy advocate, privacy policy wonk, and digital content strategist. In his free time, Corey writes about media policy and lay user advocacy. Apart from digital literacy, Corey tends to focus on the consequences of modern marketing, pseudonymity, and online communities.

Ibrahim Altaweel

Ibrahim holds a bachelor's degree in computer science from UC Santa Cruz. He has co-authored various papers on privacy and has spoken at the FTC’s PrivacyCon about the current state of online privacy. He was a Google Public Policy Fellow at the Electronic Frontier Foundation in 2017.

Reem Farah

Reem Farah graduated from the University of Toronto with a bachelor's degree in International Relations and Peace, Conflict, and Justice Studies. She has worked in Bogota and Amman on civil society projects, and for the past two years has worked as a Research and Projects Coordinator with Visualizing Impact, particularly on portfolios of solidarity and activism.

Abeera Khan

Abeera holds a BA in Peace and Conflict Studies from the University of Toronto and an MA in Gender Studies at SOAS, University of London. She is currently a doctoral student at the Centre for Gender Studies at SOAS. Her research interests lie at the intersections of transnational feminism, queer of colour critique, and diaspora studies.

Tucker McLachlan 

Tucker McLachlan is a designer and writer in Toronto, Canada. He studied design at OCAD University and the Universität der Künste, Berlin.