Opinion

Political Leaders Cannot Allow Slippery Tech Tycoons To Evade Tough Questions On Terrorist Propaganda


David Ibsen, Executive Director, Counter Extremism Project

In April, Facebook CEO Mark Zuckerberg made his first trip to Capitol Hill but avoided tough questions by relying on carefully crafted rhetoric and spin tactics. Just one month later, it was the leaders of the European Union who had a similarly unsatisfying experience, listening to Mr. Zuckerberg’s talking points and non-answers to their queries about data privacy, terror content, Facebook’s monopolistic practices, and other pressing issues.

Initially, Mr. Zuckerberg — a man whose declared mission is “connecting people” — wanted to meet EU leaders privately and only reluctantly allowed the session to be livestreamed for public viewing. Still, even after the effort it took to get the social media mogul in front of them, EU leaders were visibly frustrated at the end of the brief session.

Mr. Zuckerberg continued to rely on two claims to explain the ongoing existence of extremist content on his platform — that 99 percent of ISIS and al-Qaeda content is removed, and that artificial intelligence (AI) will be the panacea for everything from fake news to extremism. The facts suggest otherwise.

As early as 2014, the Western intelligence community raised concerns about ISIS using platforms like Facebook to spread propaganda and recruit members, yet ISIS networks on the social media site have only grown since then. A new report by the Counter Extremism Project (CEP), Spiders of the Caliphate: Mapping the Islamic State’s Global Support Network on Facebook, identified active Facebook accounts belonging to 1,000 ISIS supporters; six months after their discovery, Facebook had suspended fewer than half of them. In other cases, users who had posted terrorist content were allowed to remain on the platform once the content was removed, and some pro-ISIS accounts were even reinstated after users complained about the suspensions.

The EU and U.S. Congress deserve enormous credit for convincing the CEO of the most widely used social media site in the world to answer questions in the hopes of improving public safety and security. But granting too much deference and limiting each questioner to a few minutes made it too easy for Mr. Zuckerberg to rely on highly polished spin to deflect unwanted questions and prevent a thorough airing of issues like Facebook’s troubled business model and its insufficient efforts to crack down on the misuse of its platform.

The focus of policymakers during future hearings must change if Facebook and the tech industry are ever to be transparent and accountable to the public. To achieve effective oversight of the tech industry, U.S. and EU representatives — who are charged with protecting public safety — need not become the equivalent of Silicon Valley engineers. Rather, they must use their foundational knowledge of industry-generated problems to focus on the key questions and pursue them exhaustively with tech representatives until they receive clear answers.

Some of the most successful and revealing lines of questioning at the U.S. congressional hearings came from Sen. Richard Blumenthal, who zeroed in on Facebook’s terms of service and its policies on selling user information. Senator Blumenthal also noted that he is well aware of Facebook’s “apology tours,” and ultimately highlighted the company’s business model, which is to “monetize user information to maximize profit over privacy.”

Simple and direct questions can reveal the dubiousness of Mr. Zuckerberg’s two key claims. For instance, while the claim of taking down 99 percent of terrorist content sounds impressive, in reality Facebook is grading itself on a significant curve by counting only content from two groups — ISIS and al-Qaeda. A Bloomberg report highlights that at least a dozen U.S.-designated terror groups, from Hamas to Boko Haram and FARC, are active on Facebook and using the platform to recruit, including by posting images of grisly killings.

Additionally, the CEP report calls into question Facebook’s takedown efforts. The report highlights how Facebook’s “suggested friends” feature has successfully connected jihadists and how ISIS followers continue to exploit Facebook to host meetings, link to terrorist propaganda, and organize on the social media platform. One of the report’s authors, Gregory Waters, recounted how Facebook began to suggest pro-ISIS friends for him after he made contact with only one extremist on the site.

Dr. Hany Farid, CEP senior advisor and the world’s leading expert in digital forensics, has noted that Mr. Zuckerberg’s predictions for AI are overly optimistic and assume that advances will continue at their recent pace. History has shown that this is unlikely: Microsoft founder Bill Gates predicted in 2004 that spam would be eliminated within two years, and, as we all know, spam continues to be a problem. Mr. Zuckerberg also ignores the fact that these same AI systems can be, and are being, used by bad actors to inflict harm and to circumvent detection. The public and lawmakers need to maintain a critical eye when evaluating tech’s overly optimistic statements.

Policymakers in the U.S. and EU took a big step forward in bringing in Mr. Zuckerberg. Now, they must learn from that experience. We must carefully peel away the platitudes of Mr. Zuckerberg and other tech titans and insist on direct and transparent responses on behalf of a public that is rightfully concerned about tech’s real-life impacts on public safety, privacy and our democracies.

David Ibsen serves as Executive Director for the Counter Extremism Project (CEP), which works to combat the growing threat of extremism and extremist ideology.


The views and opinions expressed in this commentary are those of the author and do not reflect the official position of The Daily Caller.