Facebook Reveals Its Censorship Rules


Eric Lieberman, Managing Editor

Facebook revealed its once-secret censorship rules Tuesday in an apparent act of transparency, as the company has recently come under public censure.

“We decided to publish these internal guidelines for two reasons,” Monika Bickert, vice president of global product management, wrote in a blog post Tuesday. “First, the guidelines will help people understand where we draw the line on nuanced issues. Second, providing these details makes it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines – and the decisions we make – over time.”

Being more open about its content moderation strategy and related processes may earn Facebook more support, yet new details about why and how it removes certain content will also likely invite more scrutiny from users. After all, Facebook has been accused of various forms of censorship in the past.

“Another challenge is accurately applying our policies to the content that has been flagged to us,” Bickert wrote. “In some cases, we make mistakes because our policies are not sufficiently clear to our content reviewers; when that’s the case, we work to fill those gaps. More often than not, however, we make mistakes because our processes involve people, and people are fallible.”

Topics covered in the company’s community standards, which run roughly 25 pages, include “adult nudity and sexual activity,” “graphic violence,” “hate speech,” and “cruel and insensitive” content and behavior, such as bullying, all of which are enumerated with criteria that can be interpreted in many ways.

For instance, Facebook defines “hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease.”

The company continues:

We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. We separate attacks into three tiers of severity, as described below.

Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others. Similarly, in some cases, words or terms that might otherwise violate our standards are used self-referentially or in an empowering way. When this is the case, we allow the content, but we expect people to clearly indicate their intent, which helps us better understand why they shared it. Where the intention is unclear, we may remove the content.

We allow humor and social commentary related to these topics. In addition, we believe that people are more responsible when they share this kind of commentary using their authentic identity.

Determining whether something is shared for amusement or to raise awareness, and whether it was received as such, is an inherently difficult task, especially given the billions of pieces of content shared on the platform.

That’s why there is an appeals procedure through which users who feel the rules were inappropriately applied to them can petition for a reversal. The appeals process, however, can take a very long time, if a response comes at all. (RELATED: Facebook Removed This Small Town Business’s Ads For American Flags Because It Sells Guns)

Facebook says it is working on extending the “process further, by supporting more violation types, giving people the opportunity to provide more context that could help us make the right decision, and making appeals available not just for content that was taken down, but also for content that was reported and left up.”

There are also rules against violence and criminal behavior, which cover dangerous individuals and organizations. Both Antifa, a loose far-left movement purporting to oppose fascism, and white nationalist groups have reportedly been allowed to operate on the platform, although both arguably threaten or engage in illicit or violent behavior.

Fake or false news is also addressed in the standards, though not banned outright.

“Reducing the spread of false news on Facebook is a responsibility that we take seriously. We also recognize that this is a challenging and sensitive issue,” the community standards for the “Integrity and Authenticity” section reads. “We want to help people stay informed without stifling productive public discourse. There is also a fine line between false news and satire or opinion. For these reasons, we don’t remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.”

Facebook has tried many times to combat fake news, launching several initiatives with varying degrees of success. Its “Disputed Flags” program, for example, was designed to attach warning labels to articles the company believed might contain fraudulent or misleading information. Those labels, however, ultimately backfired for many users, who distrusted them and became even more entrenched in their existing beliefs.

Facebook appears to be intensifying its content moderation, or at least the transparency around those systems. And if its new projects to combat actions and content deemed harmful succeed, the company will likely continue to catch flak for doing too much.

Follow Eric on Twitter

Send tips to eric@dailycallernewsfoundation.org.

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.