A group tasked with auditing Facebook’s civil rights record published a report Sunday suggesting the company go much further in removing borderline white nationalist content.
Facebook’s current ban on explicit white nationalism should be expanded to include content that supports the ideology even if the term white nationalism is not used, the report argues. The company banned explicit expressions of white nationalism in March, marking a significant change in policy governing content on the platform.
“The narrow scope of the policy leaves up content that expressly espouses white nationalist ideology without using the term ‘white nationalist.’ As a result, content that would cause the same harm is permitted to remain on the platform,” the report notes.
Former American Civil Liberties Union legislative director Laura Murphy and 90 prominent civil rights groups, including Muslim Advocates, were involved in conducting the audit. The report was also designed to address complaints of conservative bias within the company. (RELATED: Facebook Bans ‘Praise, Support And Representation Of White Nationalism’)
“Facebook’s civil rights audit began in 2018 at the encouragement of the civil rights community and as part of the company’s commitment to advance civil rights on the platform,” Facebook said in a statement Sunday. “The audit is intended to create a forum for dialogue between the civil rights community and Facebook.”
Conservatives, meanwhile, say the company is targeting them.
President Donald Trump and Republican lawmakers are fleshing out ways to rein in Facebook, Google and Twitter before they become too big to regulate. Facebook banned Alex Jones and Infowars, political commentator Milo Yiannopoulos, radio host Paul Joseph Watson, activist Laura Loomer and former congressional candidate Paul Nehlen in May.
Some tech experts worry the tools Facebook uses to moderate such content are incapable of distinguishing the subtle differences between so-called borderline white supremacist content and legitimate conservative posts.
Facebook’s lack of transparency about the weaknesses of its deep-learning tools makes it difficult for people to understand why and how content is being throttled, Emily Williams, an artificial intelligence researcher based in San Francisco, told the Daily Caller News Foundation in March.
“If Facebook came out and was transparent, that would be one thing, but a lot of people are arrogant and don’t want to admit their algorithms are imperfect,” said Williams, who also believes the company’s algorithmic system is simply not up to the task of sussing out contextual clues.