Facebook Alerts Users Of ‘Extremist Content’


Lacey Kestecher Contributor

Facebook announced Thursday that the company is beginning to notify users when they view “extremist content.”

“This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk,” a Facebook spokesperson told Reuters.

The platform notifies users who have been exposed to content that violates Facebook’s policies, as well as users who have violated those policies in the past, according to the Washington Examiner.

Those users are then sent a message stating, “You may have been exposed to harmful extremist content recently. Violent groups try to manipulate your anger and disappointment. You can take action now to protect yourself and others.”

Users then have the option to “get support.” Clicking the link redirects them to Life After Hate, an organization that “is committed to helping people leave the violent far-right.”

Facebook stated that “extremist content” is currently being flagged only on its main platform. The U.S.-based test is the start of Facebook’s effort to deter extremism on its website globally.

These changes are part of Facebook’s Redirect Initiative, which began after the company joined the Christchurch Call, a group dedicated to removing terrorist content online.

“The Redirect Initiative helps combat violent extremism and dangerous organizations by redirecting hate and violence-related search terms towards resources, education, and outreach groups that can help,” Facebook states.

Facebook also recently made changes to its community standards for “Dangerous Individuals and Organizations.” Individuals, groups and organizations are placed into one of three tiers “based on their behavior both online and offline, most significantly, their ties to violence.” (RELATED: Facebook Lifts Ban On Posts About Possible Man-Made Origins Of COVID-19)

Tier one covers entities “involved in terrorism, organized hate, or organized crime.” It also allows Facebook to remove content promoting “hateful ideologies,” with the platform listing examples such as Nazism, white supremacy, white nationalism and white separatism.

Tier two covers “Violent Non-State Actors,” which Facebook describes as a group that “engages in purposive and planned acts of violence primarily against a government military or other armed groups.”

Tier three covers “Militarized Social Movements, Violence-Inducing Conspiracy Networks, and individuals and groups banned for promoting hatred.”