Study: Facebook’s ‘Suggested Friends’ Are Helping Terrorists Connect
Facebook is indirectly helping thousands of terrorists connect through its “suggested friends” feature, according to The Telegraph, which cites a study set to be released in full soon.
As part of its mission to connect people throughout the world and build new global communities, Facebook uses proprietary algorithms to suggest friends who share certain interests. In doing so, it affords extremists who visit the site the same service, giving them an expedient means of finding others who share their goals of destruction and violence.
The study’s researchers are from the Counter Extremism Project, a nonprofit that urges tech companies to combat extremist content on their platforms more aggressively. According to The Telegraph, they found that Islamic State supporters from 96 countries were regularly being introduced to one another through the “suggested friends” widget that greets users repeatedly as they browse the platform. The full report is expected to be published by the end of the month.
One of the researchers, Robert Postings, says Facebook’s potentially substantial influence became even more stark after he read several articles detailing a radical Islamist insurrection in the Philippines.
Soon after clicking on such content, he received a number of friend suggestions from extremists in that area.
“Facebook, in their desire to connect as many people as possible have inadvertently created a system which helps connect extremists and terrorists,” Postings told The Telegraph.
Such accusations have been made several times before, but the Counter Extremism Project’s findings could further demonstrate the platform’s influence on terrorist proliferation, and specifically that of its friend-recommendation feature.
Facebook said in November 2017 that it uses artificial intelligence to remove 99 percent of terrorist content before anyone flags it.
“We do this primarily through the use of automated systems like photo and video matching and text-based machine learning,” Monika Bickert, head of global policy management, and Brian Fishman, head of counterterrorism policy, wrote in a blog post. “Once we are aware of a piece of terror content, we remove 83% of subsequently uploaded copies within one hour of upload.”
Still, even with billions of pieces of content on the platform every day, many think Facebook should use its vast resources to do more. (RELATED: Facebook Needs ‘To Be A Hostile Environment For Terrorists,’ Exec Says)
And it’s not just Facebook. Google, Twitter and Facebook were all summoned by Congress late last year to explain how their respective services are harnessed, even manipulated, by those who wish to stoke division or coordinate evil projects.
The three tech giants are also frequently sued for allegedly not doing enough, although many of those complaints do not survive legal scrutiny.
Families still grieving from the Orlando nightclub shooting that left dozens dead and wounded filed a federal suit in December 2016 against Twitter, Facebook, and Google. They blame the three platforms for their losses, arguing the companies “provided the terrorist group ISIS with accounts they use to spread extremist propaganda, raise funds, and attract new recruits.”
Several legal experts and lawyers told The Daily Caller News Foundation at the time that such lawsuits would either not proceed past the initial proceedings or would ultimately fail because of Section 230 of the Communications Decency Act. The statute shields websites from liability for what their users post, but was recently amended to make them more responsible for restricting sex trafficking.
Send tips to email@example.com.