Facebook Releases Censorship Stats In First Ever Such Report
Facebook removed millions of accounts and pieces of content in the past several months, the company stated in its first ever such report.
Specifically, it took down 837 million examples of “spam” in the first three months of 2018. It also purged 583 million fake accounts, “most of which were disabled within minutes of registration.”
Facebook, though, still wants to do more to combat content that violates its rules — content that its employees or artificial intelligence systems deem hate speech, abuse, graphic violence, sexual activity, or terrorist propaganda — while also ensuring that it doesn’t go too far in its censorship endeavors.
“As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse,” Guy Rosen, vice president of product management, wrote in a blog post announcing the report. “It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.”
Rosen even specifically admitted that “for hate speech, our technology still doesn’t work that well,” so Facebook employs review teams to fill in the purported gaps — a practice that feeds somewhat widespread concerns that the company is over-censoring. (RELATED: Zuckerberg: It’s Easier ‘To Detect A Nipple’ Than Spot Hate Speech)
“In addition, in many areas — whether it’s spam, porn or fake accounts — we’re up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts,” Rosen continued. “It’s why we’re investing heavily in more people and better technology to make Facebook safer for everyone.”
Still, even though Rosen said its systems leave something to be desired at the moment, Facebook removed 2.5 million pieces of content, 38 percent of which was flagged by its technology.
Facebook might never be able to fully satisfy two competing segments of the public: one arguing it should do even more to combat vitriolic content on the platform, the other urging the company to better promote a free speech ethos. (RELATED: Revenge Porn And The Tricky, Delicate Balance Between Freedom Of Speech And Freedom Of Privacy)
The move to divulge statistics and details of its removal measures is yet another attempt at transparency by a company that has been reeling from a spate of public backlash. The tech giant revealed its once-secret censorship rules in April, although the degree of specificity is debatable.
Overall, being more open about its content moderation strategy and similar processes could earn Facebook more approval and sympathy. On the other hand, new information about how and why it removes certain content and communications will almost certainly open up more opportunities for scrutiny from users — and even lawmakers.
After all, Facebook has been accused of various forms of censorship in the past, some ostensibly based on ideological reasoning, and others with no apparent cause beyond content perhaps being mistakenly flagged as inappropriate nudity.
Send tips to email@example.com.