
BuzzFeed’s ‘Fake News’ Survey Relies On Cheap Polling Methods Experts Say Are Unreliable



“Most Americans Who See Fake News Believe It, New Survey Says,” the BuzzFeed headline blared. The story’s lead declared that “Fake news headlines fool American adults about 75% of the time.”

The story appeared to play into a newly popular narrative among liberal journalists in which “fake news” is a widespread problem among American voters. But BuzzFeed’s fake news story relied on survey methodology that experts say is often unreliable, with a tiny sample size. (RELATED: FAKE NEWS FLASHBACK: CNN President Says BuzzFeed Not ‘Legitimate’ News Organization)

Early in the story, authors Craig Silverman and Jeremy Singer-Vine note that the data came “from an online survey of 3,015 US adults.” The 3,015 figure is misleading, however. While the survey started with that many participants, only respondents who were already familiar with a given headline were asked about it, for each of the 11 headlines tested.

As a result, the number of people actually asked about the headlines’ accuracy was nowhere close to 3,015, as the authors note further down the story. Just 348 people — less than 12 percent of the original pool — were asked about the accuracy of the fake headline, “Donald Trump Protester Speaks Out: ‘I Was Paid $3,500 to Protest Trump’s Rally.'” BuzzFeed, which has previously come under criticism for its methods of reporting on “fake news,” notes that that fake story “received a high accuracy rating” of 79 percent.

Just 186 people were asked about the accuracy of the fake headline, “FBI Director Comey Just Put a Trump Sign On His Front Lawn.” Of the 186 people asked, 81 percent said the headline was very or somewhat accurate, which BuzzFeed notes was “the highest overall accuracy rating” among fake headlines. (The 186 respondents represented the smallest pool asked about any of the fake headlines.)
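For rough context, the textbook margin-of-error formula for a simple random sample, z·sqrt(p(1−p)/n), gives a sense of how much uncertainty groups of 348 or 186 respondents would carry even under assumptions far more favorable than an opt-in online panel actually meets. The short sketch below is an illustration using that standard formula, not a calculation published by BuzzFeed or Ipsos.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Classical 95% margin of error for a simple random sample of size n
    with an observed proportion p (the worst case is p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Sub-samples cited above, plus the full starting pool for comparison.
for n, p in [(348, 0.79), (186, 0.81), (3015, 0.50)]:
    print(f"n = {n:>5}: +/- {100 * margin_of_error(n, p):.1f} points")

# n =   348: +/- 4.3 points
# n =   186: +/- 5.6 points
# n =  3015: +/- 1.8 points
```

Even treated as random samples, groups that small would carry roughly four to six points of uncertainty, well beyond the two-point figure attached to the full survey.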

In addition to the minuscule sample sizes, the methodology used in conducting the survey is questionable, according to experts on the subject. (RELATED: Journalists Struggle To Define ‘Fake News’ Even As They Declare War On It)

BuzzFeed notes that the survey, conducted on its behalf by Ipsos Polling, had a credibility interval of plus or minus 2 percentage points. That may sound like a more trustworthy result than it actually is.

A credibility interval, the American Association for Public Opinion Research explains, is not at all the same as the margin of error that accompanies typical electoral polling conducted over the phone.

“AAPOR would like to clarify for journalists and consumers of polls that the margin of sampling error and a credibility interval are not the same thing, even though both assign a ‘plus-or-minus’ to the accuracy of a poll,” the organization announced in a 2012 press release. (RELATED: BuzzFeed Editor-In-Chief Doubles Down On Hatchet Job Against Chip And Joanna Gaines)

“The credibility interval relies on assumptions that may be difficult to validate, and the results may be sensitive to these assumptions. So while the adoption of the credibility interval may be appropriate for nonprobability samples such as opt-in online polls, the underlying biases associated with such polls remain a concern,” the organization continued. The AAPOR has similarly cautioned about the reliability of nonprobability polls in general.

The problem with online non-probability samples, Rutgers professor Cliff Zukin cautioned in a 2015 New York Times article, is that they are unreliable. “These are largely unproven methodologically, and as a task force of the American Association of Public Opinion Research has pointed out, it is impossible to calculate a margin of error on such surveys.”

BuzzFeed’s survey method does have one thing going for it, according to Zukin: it’s cheap. “What they have going for them is that they are very inexpensive to do,” Zukin notes, which has “attracted a number of new survey firms to the game.” (RELATED: Website Labeled ‘Fake News’ Threatens To Sue WaPo For Defamation)

“While it’s true that some of the headline-specific numbers were based on small sample sizes — and we’re transparent about that — the big-picture findings were based on much larger sample sizes. The finding that fake-news headlines fooled respondents 75% of the time, for example, was based on 1,516 separate judgments,” Singer-Vine said. “I think the article is very clear and open about the methodology and survey structure.”
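Treating those 1,516 judgments as independent draws from a simple random sample (an assumption an opt-in panel does not meet, and not a calculation BuzzFeed or Ipsos published) would put roughly a two-point classical margin around the 75 percent figure; because several judgments can come from the same respondent, the real uncertainty is likely somewhat larger. A back-of-the-envelope check:

```python
import math

# Illustrative only: the 1,516 figure counts judgments, not distinct
# respondents, and the panel is not a probability sample.
n, p = 1516, 0.75
print(f"+/- {100 * 1.96 * math.sqrt(p * (1 - p) / n):.1f} points")  # ~ +/- 2.2
```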

Near the bottom of the article, while discussing fake news headlines, Silverman and Singer-Vine concede that “these percentages came from small groups of respondents and should be read cautiously.”

This echoes what the AAPOR has said about the methods used in BuzzFeed’s survey: “AAPOR urges caution when using credibility intervals or otherwise interpreting results from electoral polls using non-probability online panels. The Association continues to recommend the use of probability based polling to measure the opinions of the general public.”

This article was updated to include comment from Singer-Vine.

Follow Hasson on Twitter @PeterJHasson