Vapers frequently observe of tobacco controllers, “How can they believe that evidence supports [something negative, like the gateway effect] when they do not believe the evidence supports [something positive, like the low risk of vaping]?” This reflects an intuitive understanding of scientific reasoning: that a double standard is being employed. But a double standard for what? The “what” is scientific inference, and understanding more about what it means can be useful for honing that intuition.
Among those who do not really understand science, there is a common myth that science is some kind of objective process and that data “speaks for itself.” This is patently false. Science is a subjective human process, from start to finish. Observations only lead to conclusions, even simple ones, through the human thought process of inferring what the evidence shows. Often there are some technical steps along the way, like statistical analyses, but these too are subjective (and often intentionally biased) human choices. More important, they are still only an aid to the ultimate step, inference. Only individual or collective cognitive processes can decide what the evidence says about how the universe works. The historical progress of science is as much about improving that thought process as it is about collecting new data.
Those seeking to determine the truth do the best they can to draw valid inferences, though they still make mistakes. Sometimes available evidence points in the direction of a false conclusion. Sometimes blind spots in our collective understanding lead our thoughts down a wrong path. Scientific inference is a difficult art (not a science!), rife with uncertainty. (The uncertainty too should be assessed and reported, but that is a science lesson for another day.) What is not difficult, however, is for dishonest pseudo-scientists to take advantage of the uncertainties in the inference step to create propaganda.
Consider the recent National Academies report on vaping and vapor products. The authors concluded that teenage vaping causes smoking initiation, despite all the evidence being very weak or completely uninformative. That conclusion was highlighted in news stories about the report, though it was another highlight of those stories that represents the greater crime against scientific inference: The authors pretended that it was impossible to draw conclusions about just how low the risks from vaping are, and pretended that we are profoundly uncertain about the long-term effects. Neither of these is true. We know what the exposures from vaping are, and we have good evidence that those exposures pose trivial risk, including after long-term exposure. It is safe to infer that any risk from vaping is low enough that it will never be measured. It is possible that we will be shocked and later discover that vaping is 5 percent, or even 20 percent, as harmful as smoking. But that would be shocking because we are able to infer from what we know now, with great confidence, that the risk is far lower.
When someone simply refuses to engage in scientific inference they can claim “unknown” about whatever they want. We only know anything because of inference. The tactic usually involves suggesting that we do not know for sure, so we just do not know. That way of thinking would eliminate all scientific knowledge, since we never know anything for sure. Still, it is easy to trick non-experts with this tactic since the difference between “do not know for sure” and “do not know enough to draw any conclusions with confidence” depends on the subtle art of scientific inference. With this tactic, someone can make the argument that we do not know that vaccines are reasonably safe or that the Earth is round.
Science depends on the assumption that those involved are trying to figure things out. If someone is actively trying to avoid figuring things out, that is easy to do.
Another example occurred when the FDA’s Tobacco Products Scientific Advisory Committee reviewed the evidence on the iQOS heat-not-burn product and reached a pair of absurdly contradictory conclusions: They correctly concluded that the evidence shows that exposures to harmful chemicals from using the iQOS are less than those from smoking (overwhelmingly so). But they also incorrectly concluded that the evidence is insufficient to conclude that using the iQOS rather than smoking is less harmful. How could someone reconcile these two conclusions? By pretending to not be capable of making the inference, which accords with everything we know, that a dramatic reduction in harmful exposures results in a reduction in risk. This is exactly the inference found in almost every public health study of exposures, where the authors surmise that even modest increases in exposure probably result in an increase in risk.
Most of the problem here is intentional dishonesty, though not all of it. Most authors in public health, including most or all of those referenced above, have never given any serious thought to how scientific knowledge is created. Having no understanding of this, they defer to “authoritative” pronouncements of inference by whatever random researcher wrote them down.
Authors of research reports have no privileged position in drawing the ultimate scientific conclusions from their results. Their inferences have no more authority than those from any expert reader. They have one potential advantage, that they could go back and use their data to put a conclusion to a more focused test than their broad cut at the statistics did. But since this basically never happens in public health research, that potential advantage is moot. To a real expert, a research paper is something to read in order to draw one’s own inferences. The authors might make a convincing case about inferences to draw, but might not. To a non-expert, there is no choice but to just say “this is the conclusion of a peer-reviewed paper, and I do not know enough to question it.”
Many public health authors, and most tobacco controllers, act like non-experts (usually with good reason). The authors of the National Academies report acted as non-expert reviewers (as I previously noted, as if they were newspaper reporters or Wikipedia authors). They just parroted conclusions by research report authors — many of which are quite obviously wrong — and avoided doing any real scientific thinking. That report might as well have been assigned to a journalist or an undergraduate student.
A strong nihilistic take on science — that no matter what we see, we never really know anything for sure, other than our own thoughts — is an interesting philosophical exercise, illustrating what happens without scientific inference. But it is obviously no way to inform policy. It is even worse when this is used selectively to further a political position, asserting nihilism when a conclusion is inconvenient but happily engaging in (often incorrect) inference to reach desired conclusions. This is a favored tactic of tobacco controllers, including the FDA and the National Academies authors.
If someone wants to pretend that they are incapable of inferring obvious conclusions from the evidence, there is no way to be sure they do not really believe it. But it is easy to be sure that they are not good at science.