Twitter launched a new election-inspired tool last week that will allow users to report tweets that could mislead voters with false information, but experts say it could lead to bigger problems.
The social media giant first rolled out the new tool in India on April 25 during the country’s elections. It was then introduced to the EU on April 29 ahead of its elections, which will be held during the last week of May.
Public conversation on Twitter is never more important than during elections. Today, we’re launching a new reporting feature to tackle deliberate attempts to mislead about voting. We’ll start with #LokSabhaElections2019 & #EUelections2019 https://t.co/rDdEwX3FcR pic.twitter.com/jrLOc3k1hC
— Twitter Safety (@TwitterSafety) April 24, 2019
Since the tool was implemented in India, “Twitter deleted or disabled 34 of the 49 tweets/accounts” flagged for violating the company’s code of conduct, the Economic Times reported. Eight of those posts were political tweets published during India’s “silent period,” a span of 48 hours during which all political advertising throughout the country must stop; six spread false voting information; and two were flagged as “hate speech.”
“This is in addition to our existing proactive approach to tackling malicious automation and other forms of platform manipulation on the service,” Twitter told the Caller in a statement regarding the new program.
According to the tech giant, content that violates the website’s rules includes misleading information about how to vote or register to vote, the rules and requirements for participating in an election, and the official date or time of an election. (RELATED: Twitter Restricts People Who Tweet, ‘Learn To Code’ — Even If They Aren’t Engaging In Harassment)
“For our part, we’re focused on proactive identification of problematic accounts and behaviors. Using machine-learning tools, we can spot and take action on entire networks of malicious automated accounts — across language, time zones, and cultural contexts — even before we get reports about them,” the statement continues.
In 2018, Twitter said it suspended 75% of the approximately 9 million accounts it “challenged … by requesting additional authentication details,” which brought the number of reports from concerned users down by about 364,000. This decline, according to Twitter, indicates that its “proprietary technology has been proactively spotting and challenging accounts at source and at scale.”
But Cornell University Professor Drew Margolin, who teaches and studies the way people communicate online, says “the devil is in the details”: Twitter’s new reporting tool may not prevent users from spreading misleading information and may even discredit real voting information.
Margolin told the Caller he believes Twitter’s new tool is the right step “to take explicit responsibility for making editorial judgments … not on what is posted, but on what is ultimately seen.” In a statement published to Newswise.com, however, he explained that users “should expect that legitimate voting information will be ‘flagged’ by those for whom confusion about when, where, or how to vote would be advantageous.”
In other words, those who spread false information will now be the ones flagging legitimate information.
“The key to keeping them from abusing this power is transparency,” he said. “They should report what was flagged and then, from among that set, what was taken down and what was allowed to pass through and why.”
He also told the Caller that in order to identify misleading information, Twitter needs a “ground truth” against which flagged posts can be compared. For instance, “identifying tweets that are misleading about polling places, etc., should be relatively straightforward, especially compared to other kinds of misinformation.”
Cornell SC Johnson College of Business Professor Shawn Mankad, who studies machine learning, innovation, entrepreneurship and technology, said in a statement to Newswise.com that the success of Twitter’s new reporting tool depends on the algorithm behind the program “and whether it strikes the right balance in catching as much misleading information as possible, without jeopardizing legitimate content.”
“Twitter’s new tool will surely be based on machine learning and therefore require careful tuning so that the tool balances how it labels content,” Mankad said. “If the algorithm is too strict, then legitimate content might mistakenly be labeled as misinformation. In the other direction, Twitter could err by not catching all misleading content and letting some of that misinformation go unflagged.”
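Mankad's two failure modes can be illustrated with a toy sketch. This is purely hypothetical — the scores, labels, and threshold are invented for illustration and say nothing about how Twitter's actual system works — but it shows how a single classification threshold trades wrongly flagged legitimate posts against missed misinformation:

```python
# Hypothetical "misinformation scores" from an imagined model
# (0 = clearly legitimate, 1 = clearly misleading), paired with
# ground-truth labels. None of this reflects Twitter's real system.
scored_tweets = [
    (0.10, "legit"), (0.35, "legit"), (0.55, "legit"),
    (0.45, "misinfo"), (0.70, "misinfo"), (0.95, "misinfo"),
]

def flag_counts(threshold):
    """Return (legitimate tweets wrongly flagged, misleading tweets missed)."""
    false_positives = sum(1 for score, label in scored_tweets
                          if score >= threshold and label == "legit")
    false_negatives = sum(1 for score, label in scored_tweets
                          if score < threshold and label == "misinfo")
    return false_positives, false_negatives

# A strict (low) threshold flags more legitimate content;
# a lenient (high) threshold lets more misinformation through.
print(flag_counts(0.30))  # strict: (2, 0) — two legitimate posts flagged
print(flag_counts(0.80))  # lenient: (0, 2) — two misleading posts missed
```

No single threshold in this toy data eliminates both error types at once, which is the balancing act Mankad describes.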
Professor Margolin said Twitter’s algorithm could operate on “how much false information [or] true information about the elections would have diffused” under past tactics to regulate data. “This won’t be perfect the first time, but this at least provides some basis.”
When asked whether the new election misinformation reporting tool would be implemented during the 2020 U.S. elections, a Twitter spokesperson said the company is “exploring this for critical elections outside the United States, and we’ll provide an update on 2020 if and when we have one.”
Twitter announced the new tool one day after President Donald Trump met with Twitter CEO Jack Dorsey to discuss “protecting the health of the public conversation ahead of the 2020 U.S. elections,” according to a spokesperson.
Many social media users and U.S. voters expressed concern after reports of Russian interference in U.S. elections through Facebook surfaced following the 2016 presidential election. Election interference also created problems during the U.K.’s Brexit campaign that same year.
Facebook, Google and Twitter are all participants in the European Commission’s Code of Practice against the spreading of false information, which requires each company to submit a monthly report ahead of the EU elections in May. (RELATED: Facebook, Google, Amazon Silent For Days After Twitter Drops SPLC)
A statement from the Commission published last week says, in part, that the three companies are making progress in stopping the spread of misinformation, but that “it is regrettable that Google and Twitter have not yet reported further progress regarding transparency of issue-based advertising, meaning issues that are sources of important debate during elections.”
On top of receiving criticism for not doing enough to stop or filter misinformation, Twitter, Facebook and Google have also come under fire for censoring conservative and liberal voices — sometimes for reasons unknown to the users who have been censored or banned.
During an interview on the Joe Rogan Experience podcast, journalist Tim Pool said de-platforming on social media, in the context of free speech, may be “one of the worst problems we’re facing right now politically.”
“Twitter is where public discourse is happening,” Pool said. “It’s where journalists are — and this is a problem — sourcing a lot of their stories. And if you have somebody who’s completely removed from public discourse, that’s exile.”
Pool went on to say that “there are people on the left who have been banned unjustly. I could name so many people. Jesse Kelly was banned for no reason. … Twitter said it was an accident.”
While Pool admitted that the bans could be genuine mistakes, he explained why he doesn’t believe they were: “For one, we can see the ideological bend toward their rules. But then you look at someone like Milo Yiannopoulos. I’m not a fan of Milo. I have to make sure everybody knows that. But just because I’m critical of the actions taken against him doesn’t mean I support him, but why was he banned? Because he tweeted at Leslie Jones?”
Pool suggested a “mass jury system” as a solution to mistrust that could be created as a result of the election misinformation tool potentially targeting true posts:
I don’t know if there is a line that can be drawn that would be satisfactory in terms of removing posts. Some tweets are obviously fake and the ones that matter are hard to spot. If we give the power to dictate truth to massive tech monopolies then, conveniently, any news about restricting them will cease. Twitter will have the power to remove posts about candidates they don’t like. If the issue is that they leave the post up but mark it as fake, then the problems still stand. People will assume posts that are true or not and disregard them. I’m not sure what the solution is but as of late a mass jury system makes the most sense.
Last Friday, Twitter banned the election campaign accounts of two U.K. Independence Party (UKIP) candidates who had previously been banned from the platform. Far-right, anti-Muslim activist Stephen Yaxley-Lennon (Tommy Robinson) and far-right YouTuber Carl Benjamin (Sargon of Akkad) were barred from their accounts, BuzzFeed News first reported.
In a statement to The Hill, Twitter said their accounts were suspended for violating the platform’s terms of service.
The user bans were likely not part of Twitter’s new flagging tool, though they do highlight how deeply the social media platform is involved in campaign activity online.