Daily Vaper

Science Lesson: Lamppost Epidemiology And The Lack Of Self-Correction

Carl V. Phillips, Contributor

Sophisticated epidemiologists often criticize what they call “lamppost epidemiology.” This is a reference to the old joke about someone searching the sidewalk under a lamppost at night:

“What are you doing?”

“Looking for my keys.”

“Oh, you dropped them here?”

“No, I dropped them half a block back, but the light is better here.”

The implication — which is accurate — is that very little epidemiology includes the hard work of searching where the best answers can be found. Instead, most researchers just look where the light is — doing whatever analysis is easy with whatever data they happen to have — and report whatever result they get from that. As long as a dataset includes any rough measures of the exposure and the outcome of interest, this will produce a result. That is the core problem. It is inevitable that the searcher will find something under the lamppost. It might be that the exposure and outcome measures are poor, there is unresolvable confounding, and the study population is poorly chosen. The search under the lamppost will still produce a result.

This metaphor should not be confused with the observation that tobacco controllers, along with other activist pseudo-scientists, use research like a stumbling drunk uses a lamppost: for support rather than illumination. Lampposts are a recurring theme in bad epidemiology. It turns out, however, that the problem of lamppost epidemiology facilitates this activist pseudo-science. It is easy to get away with intentionally biasing results in an environment where it is acceptable to crank out a study just because the calculation is possible, and then to draw conclusions with no pressure to produce accurate answers.

The practice of epidemiology takes the lamppost metaphor one step further than the joke. Researchers typically declare whatever they found to be their missing keys, even when it is a piece of worthless litter. They can do this because epidemiology results, unlike those in some areas of inquiry, contain no self-correction mechanism. Epidemiology is a science of measurement, and when you measure you will get some result, even if you did it very badly. Occasionally that result will be so absurd that it invalidates the study method, but that is fairly rare (and even then it is often presented as some exciting new discovery).

This contrasts with the areas of scientific research where imperfect results can be immediately recognized. No one ever says, “we discovered a new dinosaur species that had three legs and a large gap in its backbone” or “we discovered a new particle with .99 the charge of an electron.” Researchers who get such a result can surmise from it that their methods or data were imperfect. Similarly, many sciences involve multiple attempts to observe the same phenomenon, and if the results are inconsistent we conclude that someone did something wrong. But every epidemiology study looks at a different phenomenon, even when studies appear the same if you only read the headlines. Exposure and outcome measurements usually vary from one study to another, and populations always vary. Results would differ even if every study were done perfectly.

When the average person thinks about science, they tend to think about sciences of yes-no discovery, where it is basically impossible to be off by thirty percent and not realize it. Or they think of the “experiments” from grade school (which are really demonstrations) which either go boom, as planned, or not. Indeed, those “experiments” are usually engineering rather than science, and engineering usually involves a clear success or failure. Epidemiology is strongly influenced by medicine, which is a problem because medicine is a field of engineering (despite the popular misconception that it is science). The engineering research in medicine either finds a way to fix a problem or it does not. But public health research is a much more complicated science of measurement, and it fails when treated as if it were as simple as medical research.

For readers who did not follow all that, the bottom line is that when epidemiologists metaphorically search for their keys in the wrong place, the results look just as real as the results of proper research. Identifying any flaws requires digging into the methods.

Consider the example of the previously analyzed study that is currently being touted as showing vaping increases the risk of heart attack. As explained in the previous analysis, there is no way that data could offer a useful answer to the question. The outcome measure is terrible, failing to record when the heart attack took place. Confounding from smoking swamps any plausible risk from vaping, and the data did not include enough information about smoking to have any hope of controlling for that. This analysis was done because it was easy, not because it held any promise of being informative.
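To see concretely how this kind of confounding guarantees a "finding," here is a minimal simulation sketch in Python. The numbers are entirely made up for illustration (they are not the study's data); the only assumptions are that vaping is far more common among smokers than never-smokers and that smoking, not vaping, drives heart attack risk. Even so, a crude comparison produces an elevated odds ratio for vaping.

```python
# Illustrative toy simulation (made-up numbers, not the study's data):
# vaping has zero true effect on heart attack here, but because vapers are
# mostly smokers, a crude comparison still "finds" a risk.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

smoker = rng.random(n) < 0.20                       # 20% of the population smokes
vaper = rng.random(n) < np.where(smoker, 0.30, 0.02)  # vaping concentrated among smokers
mi = rng.random(n) < np.where(smoker, 0.06, 0.02)     # heart attack risk driven by smoking only

def crude_odds_ratio(exposure, outcome):
    a = np.sum(exposure & outcome)     # exposed cases
    b = np.sum(exposure & ~outcome)    # exposed non-cases
    c = np.sum(~exposure & outcome)    # unexposed cases
    d = np.sum(~exposure & ~outcome)   # unexposed non-cases
    return (a * d) / (b * c)

# Prints an odds ratio well above 1.0 even though vaping has no effect in this
# simulation; the "result" is entirely an artifact of confounding by smoking.
print("crude OR, vaping vs. heart attack:", round(crude_odds_ratio(vaper, mi), 2))
```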

The work was originally done by medical students as a class exercise. They used data they had at hand — in this case, a simple public dataset — despite it being fundamentally unsuited for answering the question. They calculated a few simple statistical correlations, performing no sensitivity analyses or deductive tests with the data. This gave them a result, because those steps always produce a result. That is fine as a class exercise. The problem is that in epidemiology, this simplistic recipe is how almost all published research is done. The easy search is done under the lamppost, and then someone declares the result to be the missing keys rather than the discarded used tissue that it really is.

It so happens that this declaration was made by an activist pseudo-scientist because, though this result could never enlighten, it supported his political agenda. He apparently also intentionally manipulated the results to further exaggerate the junk science result. But these political manipulations are not the core problem.

In a truth-seeking field, getting a result like this would be only the start of a scientific inquiry. If epidemiology were done correctly, the researchers would realize this calculation has little information value but presents an intriguing hypothesis. They would check it against other data they had available. If that were still uninformative, they would design methods that could test the hypothesis. They would check the validity of the data and methods in a hundred different ways. Only then would they publish (in any form) all of the results and claims. If other sciences were done the way epidemiology is, researchers would publish their lab notes every week and make definitive claims based on them, no matter how incomplete their work.
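As one small illustration of the kind of check a truth-seeking researcher would run, here is a continuation of the toy simulation above (again, purely illustrative and self-contained, not the study's data): simply stratifying on smoking status before looking at vaping makes the spurious association vanish.

```python
# Same toy data as the earlier sketch (same seed, same made-up numbers).
# Stratifying on the confounder is one basic validity check: within each
# smoking stratum, the vaping "effect" found by the crude analysis disappears.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
smoker = rng.random(n) < 0.20
vaper = rng.random(n) < np.where(smoker, 0.30, 0.02)
mi = rng.random(n) < np.where(smoker, 0.06, 0.02)   # risk depends on smoking only

def odds_ratio(exposure, outcome):
    a = np.sum(exposure & outcome)
    b = np.sum(exposure & ~outcome)
    c = np.sum(~exposure & outcome)
    d = np.sum(~exposure & ~outcome)
    return (a * d) / (b * c)

for label, stratum in [("smokers", smoker), ("never-smokers", ~smoker)]:
    or_in_stratum = odds_ratio(vaper[stratum], mi[stratum])
    # Both stratum-specific odds ratios come out close to 1.0 (up to sampling noise).
    print(f"OR for vaping within {label}: {or_in_stratum:.2f}")
```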

There is no stopping activists, conmen, and self-promoters from touting their pseudo-scientific claims to the press. Serious scientists in telegenic fields are stuck with this. But serious economists scoff at the pop “economists” making pronouncements on television, and you will hear from some of them if you follow the topic. The same is true for climate and energy research, political science, and security. Serious epidemiologists, however, are outnumbered a thousand-to-one by people chattering about the litter they found under a lamppost. You would be excused for not even realizing there are serious truth-seekers in the field.

Follow Dr. Phillips on Twitter