
Facebook Users Look For Answers As Company’s AI Goes Haywire After Moderators Were Sent Home


Chris White, Tech Reporter

Facebook said Tuesday that a bug in the company’s anti-spam system, which has been randomly and mistakenly flagging user content, is unrelated to any changes in its workforce due to the coronavirus.

Twitter users tweeted images of warnings they received from Facebook suggesting their content violated company policies against spam. The content was flagged because of a bug, not because of a lack of human oversight caused by social distancing, according to a Facebook security official.

“We’re on this – this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce. We’re in the process of fixing and bringing all these posts back. More soon,” Guy Rosen, Facebook’s vice president of safety and integrity, said in a tweet addressing the complaints.

Rosen was responding to a tweet Tuesday night from Facebook’s former head of security, Alex Stamos, who said from his vantage point the problem looks like “an anti-spam rule at FB is going haywire.”

Stamos added: “We might be seeing the start of the ML going nuts with less human oversight.” He also reminded people on Twitter that Facebook sent home its content moderators Monday over concerns related to the coronavirus.

Facebook spokesman Andy Stone directed the Daily Caller News Foundation to Rosen’s tweet for further explanation. (RELATED: Twitter Says Social-Distancing Will Likely Mean Giving AI Much More Control Over Regulating User Content)

Twitter and Google’s YouTube were among the big tech companies to announce Monday that their artificial intelligence tools would take on more responsibility for content moderation due to social distancing.

“We’re working to improve our tech,” Twitter noted in a statement, adding that “this might result in some mistakes.” Big tech companies often blame artificial intelligence systems for mistakenly nixing or restricting user content that does not in any way violate their policies.

Twitter, for instance, suggested in April 2019 that its automated system was partially to blame for the suspension of a pro-life group.

“When an account violates the Twitter Rules, the system looks for linked accounts to mitigate things like ban evasion,” a company spokeswoman told the Daily Caller News Foundation in April 2019. “In this case, the account was mistakenly caught in our automated systems for ban evasion.”

The spokeswoman was referring to an account called “Unplanned,” which promoted a movie about a former abortion clinic director who became pro-life. The system is designed to suspend so-called sock-puppet accounts connected to a profile that violated company policies, according to the spokeswoman.
