Content Moderation: What is it?
Content moderation is the practice of applying a predetermined set of rules to user-generated content to determine whether that content is permissible. All kinds of platforms and internet service providers moderate content, according to varying rules. Those rules are usually some combination of legal obligations and other standards put in place to uphold the type of community or content the platform wants to foster. The documents that lay out those rules for users are usually called Terms of Service (ToS) agreements and Community Guidelines/Standards.
How does content get marked for removal or other action?
There are four primary ways in which content is marked for removal: user reporting, government requests, automated flagging, and DMCA requests.
In the first case, a user flags content as violating either the ToS or the community guidelines of a particular service, and that flag is sent for review. The user is often taken through a series of questions that helps identify which rule they believe the content violated.
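As an illustration only, such a report could be represented as a small record carrying the flagged item, the reporter, and the rule they selected in that question flow. The field names below are hypothetical and do not correspond to any platform's actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserReport:
    """Hypothetical shape of a user-submitted flag; field names are illustrative."""
    content_id: str      # the post, image, or video being reported
    reporter_id: str     # the account filing the report
    rule_cited: str      # e.g. "harassment", chosen via the reporting questionnaire
    details: str = ""    # optional free-text context from the reporter
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Reports accumulate in a queue that reviewers (human or automated) work through.
review_queue: list[UserReport] = []

def submit_report(report: UserReport) -> None:
    """Add a new flag to the review queue."""
    review_queue.append(report)

submit_report(UserReport("post-123", "user-456", rule_cited="harassment"))
```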
The second type, government requests, involves governments sending takedown requests directly to companies. Governments can submit judicial or law enforcement requests to companies, sometimes via official channels (such as an order issued by a competent judicial authority), and those requests are then reviewed internally for compliance with the law. Sometimes companies will also review government requests for removal under their own ToS or community guidelines, a practice sometimes referred to as “platform policy takedown requests” or “voluntary government takedowns.” Some governments also employ people to use the ordinary user reporting mechanisms in an effort to remove content. Platforms offer varying levels of transparency about how many requests they have received directly from governments and how many of those requests they complied with.
The third type is automated flagging and removal. Many companies use automation to flag content for later review by a human, while some employ machine-learning systems to detect and remove content before it is ever published; the latter practice is increasingly used against extremist content, for instance. Content subject to copyright restrictions is also routinely flagged by automated matching systems such as YouTube's Content ID.
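To make the idea concrete, here is a minimal sketch of one simple form of automated flagging: comparing an upload's hash against a database of files already identified as disallowed. This is an illustration only; real systems such as Content ID rely on perceptual fingerprints that can match altered copies rather than exact hashes, and the hash list below is hypothetical.

```python
import hashlib

# Hypothetical set of SHA-256 hashes of files already identified as disallowed.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_before_publication(upload: bytes) -> bool:
    """Return True if the upload exactly matches a known disallowed file."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BAD_HASHES

# An exact re-upload of a known file is caught before it ever appears publicly.
print(flag_before_publication(b"test"))   # True: its hash is in the set
print(flag_before_publication(b"novel"))  # False: it would be published normally
```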
Lastly, users who hold the copyright to a piece of content that has been uploaded by someone else, or who act as the copyright holder's agent, can file a takedown notice under the Digital Millennium Copyright Act (DMCA) if the upload is infringing.
After content is flagged, it will typically undergo review, though this may not apply to content flagged for copyright infringement, which platforms often remove upon receipt of a valid DMCA notice.
When a review process kicks off to determine whether there is a legitimate reason to take down content that has been flagged for violating ToS or community guidelines, two methods are most common: review performed by an actual person, and automated review. Human review is usually paid labor in which people work through a queue of assigned material and manually determine whether content does in fact violate community standards or legal rules and needs to be removed. Automated review processes are algorithmic mechanisms that check content against databases of known “bad” content and look for similarities. Sometimes these automated reviews also perform sentiment analysis, in which algorithms attempt to determine the emotional tone, or mood, of the content. Because both automated and human review are prone to error, there may be several rounds of review combining the two.
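As a rough illustration of the sentiment-analysis step, the sketch below scores text against tiny hand-made word lists and escalates strongly negative posts to a human reviewer rather than acting on them automatically. Production systems use trained models, not word lists; the words and threshold here are invented for the example.

```python
# Toy lexicon-based sentiment check; real deployments use trained models.
NEGATIVE_WORDS = {"hate", "awful", "disgusting"}
POSITIVE_WORDS = {"love", "great", "thanks"}

def sentiment_score(text: str) -> int:
    """Crude tone estimate: count of positive words minus count of negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def route_for_human_review(text: str, threshold: int = -1) -> bool:
    """Escalate strongly negative content to a person rather than acting automatically."""
    return sentiment_score(text) <= threshold

print(route_for_human_review("I love this, thanks"))        # False: stays in the automated flow
print(route_for_human_review("I hate this awful product"))  # True: sent to a human reviewer
```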
What happens to content that violates the rules?
After content has been flagged and reviewed, a decision is made about what to do with it. If the content is determined to be in violation of the ToS or community standards, it is most often removed from the platform, and the user who posted it is typically given some form of notice that action has been taken. Depending on the severity of the violation or the number of times the user has posted violating content, more punitive measures might be taken, such as temporary suspension of the account or an indefinite ban.
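That escalation logic might be sketched, purely as an illustration, along these lines; real platforms' thresholds and penalty names differ and are generally not public.

```python
# Hypothetical enforcement ladder; thresholds and penalty names are invented.
def choose_action(prior_violations: int, severe: bool) -> str:
    """Pick an enforcement action from the violation's severity and the user's history."""
    if severe:
        return "indefinite_ban"
    if prior_violations >= 3:
        return "temporary_suspension"
    if prior_violations >= 1:
        return "removal_with_warning"
    return "removal_with_notice"

print(choose_action(prior_violations=0, severe=False))  # removal_with_notice
print(choose_action(prior_violations=4, severe=False))  # temporary_suspension
```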
If the content is found to be within the rules laid out by the ToS and community guidelines, the platform will allow it to stay up. Whether the content is then “immunized” against further review if it is flagged again (in order to save time or money) is up to the platform itself.
Some companies also employ forms of content moderation that sit between “leave ups” and “take downs.” These enforcement mechanisms are often opaque, and users may not be given appropriate notice. They manifest in a variety of ways: for instance, content flagged by a user may simply no longer be shown to that user. Another known example is demonetization, whereby a user is allowed to continue posting but cannot host advertisements on their content. Because the outcomes of these measures are hard to observe or verify, they can inspire conspiracy theories about algorithmic wizardry suppressing or boosting particular kinds of content as a form of control.
Terms of Service vs Community Guidelines
As previously mentioned, the ToS document describes the responsibilities that the user agrees to when participating on a platform; it is usually presented to the user upon account registration. The ToS is usually supplemented by a community guidelines or standards document. This second document presents those rules in a more readable format, but it also often lays out additional rules that, beyond the platform's attempt to abide by the law, restrict behavior that is otherwise legal.
For example, some types of content, such as child sexual abuse material, are outright illegal and are therefore never tolerated. Anything that clearly falls into one of these categories is subject to immediate removal and sometimes to punitive measures.
Other content, such as hate speech, extremism, nudity, or pornography, is not necessarily illegal but is often prohibited by a platform's community guidelines. Some of these things may be unlawful in certain jurisdictions, whereas others may simply reflect the company's own values or adherence to norms. Platforms are often incentivized to be overly broad in their restrictive measures and to remove any content that would take more resources to categorize.
For information about appealing content that has been taken down, visit our appeals guides.