Facebook clarified its censorship rules Wednesday, saying it will remove certain content that could lead to violence.
Specifically, Facebook says that false information in some instances should be removed because it could spark a dangerous response, as has been seen in Myanmar and India in recent months.
“We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline,” Tessa Lyons, a Facebook product manager, told The New York Times. “We have a broader responsibility to not just reduce that type of content but remove it.”
The “new policy” was created because of “feedback” from community groups in areas where fake news has led to coordinated violence, Lyons told The Wall Street Journal.
The change, more an extension of existing protocol than a wholly new policy, came only hours after the publication of an interview in which CEO Mark Zuckerberg said that determining intent is a key factor in deciding whether to remove certain posts. His contentious example: Holocaust deniers.
“Let’s take this whole closer to home … I’m Jewish, and there’s a set of people who deny that the Holocaust happened,” Zuckerberg told Kara Swisher of Recode, delving into a highly controversial, albeit important, topic. “I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong.”
“I don’t think that they’re intentionally getting it wrong,” he said, with Swisher adding that “they might be.”
“It’s hard to impugn intent and to understand the intent,” Zuckerberg said.
In that same discussion, Zuckerberg said that while he is reluctant to go down the path of aggressive content removal, he also doesn't want such material to be as prominent on the platform.
“So you move them down? Versus, in Myanmar, where you remove it?” Swisher asked, to which Zuckerberg answered with a simple “yes.”
Some, including investigators for the United Nations, have accused Facebook of fueling attacks against a large Muslim population in the Southeast Asian nation, because the company offers its social media features to anyone and doesn't always step in when intervention is arguably appropriate. (RELATED: Facebook Releases Censorship Stats In First Ever Such Report)
The changes apply to Facebook’s primary platform and its subsidiary Instagram, but not WhatsApp, the encrypted messaging service, according to The New York Times. In India, false information spread through WhatsApp, which is highly popular in the country, has incited mob violence. Rumors about child abductors and organ harvesters, for example, led a mob to beat a young man. Others reportedly died after being falsely accused of such heinous activities.
Send tips to email@example.com.
All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact firstname.lastname@example.org.