Facebook Says A ‘Bug’ Accidentally Exposed Its Imminent ‘Hate Speech’ Finder
Many people scrolling through their Facebook news feeds Tuesday were shown an option to report whether a post included some form of “hate speech.” That new reporting feature, however, was not supposed to be made available, a Facebook spokeswoman told The Daily Caller News Foundation.
“This was an internal test we were working on to understand different types of speech, including speech we thought would not be hate,” a company representative said. “A bug caused it to launch publicly.”
This is weird.
Facebook is asking me about whether *my own post* on Facebook contains hate speech.
And also, the post is just a NYT story about restaurant trends in Austin, to which I’ve added no commentary. pic.twitter.com/ZIrUW4IW4q
— Joe Weisenthal (@TheStalwart) May 1, 2018
Several of the posts carrying the prompt appeared completely innocuous, leaving many to wonder whether the hate speech-reporting mechanism would be attached broadly, perhaps to nearly all posts, or whether it was simply malfunctioning because it wasn’t fully ready.
Guy Rosen, vice president of product at Facebook, shared a telling example of its faultiness, explaining that the feature was live for only around 20 minutes.
Some people saw ‘does this post contain hate speech’ today on some posts. This was a test – and a bug that we reverted within 20 mins. It was shown for a short time on posts regardless of their content (like this one). pic.twitter.com/iuNKSVTOqQ
— Guy Rosen (@guyro) May 1, 2018
The options shown after choosing “Yes” seemed unfinished and included the word “Test.” Regardless of what the initiative ultimately looks like, its accidental launch shows that Facebook is trying to do more to stop hate speech.
Facebook has felt pressure from large portions of the public and from lawmakers to purge its platform and services of vitriolic and nefarious content. At the same time, it has been repeatedly accused of going too far by censoring material that is harmless, that merely fails to conform to the apparent ideologies of the company’s leaders, or both.
Facebook is pushing ahead in content moderation endeavors, yet CEO Mark Zuckerberg has expressed some doubts over aggressively policing the platform.
The executive said in March that he’s “fundamentally uncomfortable” with the prospect of personally deciding what content is acceptable to be seen and engaged with. He also said more recently that identifying hate speech is very difficult, and far harder than detecting nudity, such as a bare nipple.
Fears of censoring conservatives and libertarians without due cause seem to outweigh fears of ambiguous forms of hate speech. That, or Zuckerberg and other Facebook higher-ups think they can do it all, satisfying concerns about both too much and too little content moderation on the platform.
Send tips to firstname.lastname@example.org.