British PM Wants More Internet Censorship: Delete ‘Extremist Content’ In 2 Hours

Ian Miles Cheong Contributor

British Prime Minister Theresa May on Wednesday called on executives from Microsoft, Facebook and Google to remove extremist content within two hours of notification. The move may push social media networks, search engines, and content platforms toward widespread censorship in order to comply with her request.

May’s request follows a UK Home Office report which found that links to “extremist content” remained online for an average of 36 hours before being removed. The Home Office analyzed 27,000 links to extremist content posted in the first five months of 2017.

The British prime minister made her statements on the sidelines of the United Nations meeting in New York on Wednesday. She is reported to have addressed these companies alongside France’s Emmanuel Macron and Italy’s Paolo Gentiloni.

At present, many platforms provide a 24-hour window (which tech websites like TechCrunch point out is “aggressive”) for their staff to manually review reported content and take it down. Given the sheer volume of content, taking it down within two hours would be impossible without sweeping automation.

The UK government is calling on tech companies to develop technology to spot extremist content and remove it before it is ever shared. The prime minister cited ISIS promotional videos and other material encouraging extremists to commit acts of violence involving vehicles and knives.

“Terrorist groups are aware that links to their propaganda are being removed more quickly, and are placing a greater emphasis on disseminating content at speed in order to stay ahead,” she said.

“Industry needs to go further and faster in automating the detection and removal of terrorist content online, and developing technological solutions that prevent it being uploaded in the first place.”

However, as past efforts by Google and YouTube have shown, corporate attempts to sanitize their platforms have only led to false positives and the removal of legitimate content.

In August, the monitoring organization Airwars had much of its content automatically flagged and removed as “extremist content” over its coverage of the war in Syria.

When the technology was first announced, YouTubers who spoke to The Daily Caller expressed fears that their content, some of which consisted of political debates and news, would be affected. They were not wrong: dozens of channels have since been demonetized, delisted, or outright suspended.

As past efforts show, attempts to aggressively take down “extremist content” without manual review will only result in censorship. It is a free speech disaster in the making, one likely to hurt news outlets and silence discussion of controversial topics.

The only way to prevent news organizations from being caught up in automated bans would be to create a whitelist, which would essentially leave alternative media organizations and citizen journalists out in the cold. Everyone else would have to abide by a list of acceptable, politically correct topics.

Ian Miles Cheong is a journalist and outspoken media critic. You can reach him through social media at @stillgray on Twitter and on Facebook.