‘We Don’t Track Bots’: What The Media’s Russian Bot Coverage Is Getting All Wrong

  • Establishment media claims about “Russian bots” pushing conservative causes are littered with factual errors
  • Two groups at the heart of the Russian bot stories debunked media claims about their work
  • Multiple media outlets falsely claimed hordes of Russian bots were rallying around Fox News host Laura Ingraham

Media outlets are spreading “inherently inaccurate” stories about “Russian bots” on Twitter, according to the group behind the data cited to support many of those stories.

Outlets including The New York Times, The Washington Post and CNN have used the Alliance for Securing Democracy’s Hamilton 68 dashboard to blame “Russian bots” for boosting conservative issues and conspiracy theories on social media.

Most of the reporting on the dashboard is “inherently inaccurate,” the alliance’s communications director, Bret Schafer, told The Daily Caller News Foundation over the weekend.

“Most notably, and this is the most common error, we don’t track bots, or, more specifically, bots are only a small portion of the network that we monitor,” Schafer said.

“We’ve tried to make this point clear in all our published reporting, yet most of the third party reporting on the dashboard continues to appear with some variation of the headline ‘Russian bots are pushing X …'” he said. “This is inherently inaccurate.”

Examples abound of the kind of article Schafer described.

Business Insider on April 1 published an article titled, “Russian bots are rallying behind embattled Fox News host Laura Ingraham as advertisers dump her show.” The article racked up over 100,000 views.

Citing the BI report, The Washington Post told its readers the next day, in an article titled “Russian bots are tweeting their support of embattled Fox News host Laura Ingraham,” that “Russian bots have flooded Twitter with false information” about the Parkland school shooting.

The articles leaned on two sources, both of which were misrepresented.

The first was a 2,800 percent increase in a pro-Ingraham hashtag on the Hamilton dashboard. But the vast majority of the accounts tracked by the dashboard aren’t bots, according to Schafer.

Moreover, an increase in percentage says little about the actual number of tweets using that hashtag. A hashtag not commonly used by the monitored accounts (e.g. #IStandWithLaura) can see a huge percentage increase when tweeted a relatively small number of times.

Friday evening, for example, “#FridayFeeling” was the top trending hashtag among accounts on the dashboard, increasing in use by 5,200 percent. But that didn’t translate to very many real tweets.

The hashtag didn’t crack the top 10 hashtags used, falling short of the hashtag “#us,” which was used just 64 times within the previous 48 hours. In other words, an impressive surge in frequency — and rising to the top of the “trending hashtags list” — only translated to a few dozen tweets.

“#FridayFeeling” surged in percentage increase among monitored accounts but was tweeted only a few dozen times and didn’t crack the top 10. (Screenshot of the Hamilton 68 dashboard on Friday, April 6, 10:01 p.m. Eastern Time.)
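To see why, consider the arithmetic. The short Python sketch below uses hypothetical baseline counts (the exact figures are assumptions for illustration, not dashboard data) to show how a hashtag with almost no prior use can post a 2,800 percent “surge” on the strength of a couple dozen tweets, while a heavily used hashtag gaining hundreds of tweets barely moves in percentage terms.

```python
# Illustrative only: hypothetical counts showing why percent change can mislead.
# A hashtag that jumps from 1 use to 29 uses shows a far bigger "surge" than one
# that climbs from 1,000 uses to 1,500, despite involving far fewer actual tweets.

def percent_increase(before: int, after: int) -> float:
    """Percent change in hashtag mentions between two 48-hour windows."""
    return (after - before) / before * 100

examples = {
    "#IStandWithLaura (hypothetical)": (1, 29),   # tiny baseline, few tweets
    "#Syria (hypothetical)": (1_000, 1_500),      # large baseline, many tweets
}

for tag, (before, after) in examples.items():
    print(f"{tag}: +{percent_increase(before, after):,.0f}% "
          f"({after - before} additional tweets)")

# Output:
# #IStandWithLaura (hypothetical): +2,800% (28 additional tweets)
# #Syria (hypothetical): +50% (500 additional tweets)
```

Under these assumed numbers, the eye-catching 2,800 percent figure corresponds to roughly the “20-30 tweets” Schafer estimates below.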

“Trending hashtags are inherently problematic because that section only measures the percent change with a hashtag over a 48 hour period, so it’s always going to favor new hashtag campaigns over, say, #Ukraine, #US, etc.,” Schafer told TheDCNF.

“Obviously, no one was using #istandwithlaura before the David Hogg controversy, so the spike there is likely evidence of only around 20-30 tweets.”

“For that reason, I only pay attention to the trending section if, for instance, all the top 10 trending hashtags are focused on the exact same subject. That’s an indicator that I should watch that topic as there’s clearly some early interest, but even then, it’s not usually something that I would flag until I see more evidence,” Schafer said.

Journalists made a similar error after the Brexit vote in 2016, treating enormous percentage increases in Google searches for “What is the EU?” as proof that large numbers of people were searching for that phrase. They weren’t.

“Also, it’s important to stress that results on the dashboard are meant to be viewed in a nuanced way; i.e., not every URL or hashtag that appears on the dashboard should be interpreted as evidence that pro-Kremlin accounts favor or oppose a certain social or political position,” Schafer said.

“A lot of the partisan topics that appear on the dash are merely used to gain a following to push more targeted, Kremlin-friendly geopolitical content. Case in point, #Syria is far and away the most-used hashtag by monitored accounts since we started with the project, and #Skripal has been at or near the top of the dashboard every day for the past month,” he continued.

“This is rarely mentioned in reports, and without that as a baseline understanding, most of articles that focus on a single trending topic are losing the forest for the trees.”

BI and WaPo’s misleading claims that Russian bots boosted Ingraham came more than a month after a BuzzFeed report raised serious questions about the media’s recent coverage of the dashboard.

Two days after the inaccurate BI report, CNN’s John Avlon told viewers that “Over the weekend, Hamilton 68, a project that tracks Russian bots found a dramatic increase in Russian bots using the hashtag surrounding that debate; trying to elevate the debate on Ingraham’s side overwhelmingly.”

The second source BI and WaPo misrepresented is Botcheck.me, which two University of California, Berkeley students launched to track roughly 1,500 “propaganda bots.” Both news outlets claimed that Botcheck’s data showed “Russia-linked accounts” were backing Ingraham in large numbers.

But Botcheck doesn’t have any data on where the bots it tracks are located or who runs them, Ash Bhat, Botcheck’s cofounder, told TheDCNF on Sunday.

“We’ve been pretty vocal on this point. There isn’t any data that actually points to any specific location or group,” Bhat said. “We (being RoBhat Labs) don’t have the evidence to point to any individual group. Twitter may have that data but it’s not made publicly available.”

BI updated its article with an editor’s note on Monday in response to an inquiry from TheDCNF.  “This story has been updated to clarify what types of accounts Hamilton 68’s dashboard and botcheck.me track. We have clarified that Hamilton 68 tracks Russia-linked accounts, not all of which are bots, and that botcheck.me tracks propaganda bots, which aren’t necessarily Russia-linked,” the editor’s note reads.

However, the article still relies on misleading percentage increases, which Schafer said likely amount to only “20-30 tweets.” And researchers with Botcheck saw bots pushing both sides of the gun debate.

“We saw bots on both sides push #guncontrol and #guncontrolnow during Parkland,” Bhat told TheDCNF. “What worried us was that these bots were pushing a topic that clearly would be divisive instead of a hashtag like #mentalhealth, which would be a rallying point for the nation to come together around.”

After this article was published, WaPo corrected its article in response to TheDCNF’s inquiry. “This post incorrectly stated that botcheck.me tracks Russia-linked bots specifically. It has been corrected and updated,” the correction reads.

The article now states that “many Twitter bots and propaganda accounts, including some linked to Russia, have rallied around the conservative talk-show host.” But that’s still not accurate. The 600 monitored Hamilton accounts sent roughly 20-30 tweets supporting Ingraham, according to Schafer — not quite “rallying around” her.

“There are bots on both sides of the aisle and our hypothesis is that they serve the same purpose – to further divide us politically. Categorizing all bots as being ‘left’ or ‘right’ issue is wrong and only further divides us,” Bhat said.

That presents a sharp contrast with articles on “Russian bots” that cited Botcheck’s work.

After the Parkland shooting, for example, CNN ran an article on Feb. 16 claiming a sharp uptick in Russian bots pushing pro-gun messaging within “the past 48 hours.”

“Russia-linked bots are promoting pro-gun messages on Twitter in an attempt to sow discord in the aftermath of the Florida school shooting, monitoring groups say,” read the article’s lede.

Like other articles on the topic, the CNN article cited the Hamilton 68 dashboard and Botcheck. And, like other articles on the topic, it was misleading.

First, the Hamilton 68 data doesn’t support the claim that the Russian bots were pushing specifically pro-gun arguments.

As of 11:00 p.m. on Feb. 15, according to an archived version of the dashboard, the top two hashtags related to the shooting were “#fbi” and “#gunreformnow.” Neither of those hashtags can be accurately described as “pro-gun messages.”

The 600 monitored accounts combined to tweet the two hashtags 111 times in the previous 48 hours — barely more than two tweets per hour. And, as Schafer emphasized, bots only account for a “small portion” of those 600 accounts monitored on the dashboard.

Contrary to the CNN article, the pro-Russia accounts monitored on the Hamilton 68 dashboard barely engaged in the gun debate and pushed both sides of the argument, and bots accounted for only a fraction of what was tweeted on the dashboard.

Second, as noted above, Botcheck’s data doesn’t support the claim that Russian bots were only pushing pro-gun messaging after the shooting.

BI, WaPo, and CNN are the most recent outlets to warn their audiences about Russian bots, but they’re far from the only ones.

The New York Times claimed in February that an “army” of Russian bots went into action following the Parkland shooting. The article cited the Hamilton 68 dashboard but included no hard data backing up its thesis.

“Parkland is an instance where I was and am comfortable saying that there was Russian-linked activity that was attempting to inflame divisions. We saw a concentrated focus on the shooting in the immediate aftermath (as we did with Vegas), and a sustained promotion of divisive content related to gun control over a several week period,” Schafer told TheDCNF on Monday.

But TheNYT “again incorrectly labelled activity on the dashboard as being the work of ‘bots,'” Schafer said. BuzzFeed declared TheNYT article “total bullshit” in its piece criticizing the media’s sensationalized coverage of “Russian bots.”

“The dashboard is always going to show results from any major breaking news story, from Hurricane Harvey to the bridge collapse to the Austin bombings. But in those instances, the level of activity was not abnormal and died off after the normal 24-48 hour news cycle. We consider that level of activity to be largely irrelevant (other than in the big picture sense of using breaking news and trending topics to engage with users online),” he explained.

“If there are several thousand tweets on a subject or a continued focus on a topic over a several week period, that’s noteworthy. If there are a few hundred tweets (or less) on a subject over a day or two, that’s something that we dismiss as noise,” he added. “That’s the key point that is often lost. As researchers we look for trends over time or abnormal spikes in activity; therefore, it obviously is potentially problematic when someone looks at the dashboard for two minutes and cites one hashtag or URL as evidence of a campaign of influence.”
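Schafer’s rule of thumb can be expressed as a simple filter. The sketch below is purely illustrative: the is_noteworthy helper and its thresholds are assumptions meant to capture the distinction he describes, not the Alliance for Securing Democracy’s actual methodology.

```python
# A rough illustration (not the alliance's actual methodology) of the heuristic
# Schafer describes: flag a topic only if it draws sustained attention or a large
# absolute volume of tweets; treat short-lived, low-volume spikes as news-cycle
# noise. The thresholds below are hypothetical.

def is_noteworthy(daily_tweet_counts: list[int]) -> bool:
    """daily_tweet_counts: tweets per day on one topic from monitored accounts."""
    total = sum(daily_tweet_counts)
    active_days = sum(1 for n in daily_tweet_counts if n > 0)
    sustained = active_days >= 14        # continued focus over several weeks
    high_volume = total >= 2_000         # several thousand tweets overall
    return sustained or high_volume

# A two-day, few-hundred-tweet spike around a breaking story: dismissed as noise.
print(is_noteworthy([180, 120]))        # False

# Weeks of steady, divisive content (as described for Parkland): flagged.
print(is_noteworthy([150] * 21))        # True (3,150 tweets over three weeks)
```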

Other outlets, including The Hill, Axios, and Wired, have used the Hamilton 68 dashboard to claim Russian bots boosted conservative positions on social media.

Schafer believes “the vast majority of the misinterpretations and misunderstandings of Hamilton 68 would be solved if those reporting on the dashboard would read our original methodology paper and follow-up paper.”

The group’s methodology notes that the “Russian-linked accounts” it monitors include users who “may or may not understand themselves to be part of a pro-Russian social network.”

“They are not all in Russia,” Clint Watts, one of Hamilton 68’s co-founders, told BuzzFeed. “We don’t even think they’re all commanded in Russia — at all. We think some of them are legitimately passionate people that are just really into promoting Russia.”

Schafer couldn’t rule out that pro-Russia Americans could be among the accounts tracked on the Hamilton 68 dashboard. Researchers “weed out” any known American accounts from the list of monitored accounts, he told TheDCNF, and he said the list is believed to be 95-98 percent accurate.

Still, he said, “We can’t be certain whether some accounts are run by those impersonating average Americans or by average Americans who are heavily engaged with pro-Kremlin content.”

What Schafer is certain about: the Hamilton 68 dashboard doesn’t represent an army of Russian bots.

This article has been updated to include The Washington Post’s correction in response to TheDCNF’s inquiry.

Follow Hasson on Twitter @PeterJHasson

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.