Since 1955, The Journal of Irreproducible Results has offered “spoofs, parodies, whimsies, burlesques, lampoons and satires” about life in the laboratory.
Among its greatest hits: “Acoustic Oscillations in Jell-O, With and Without Fruit, Subjected to Varying Levels of Stress” and “Utilizing Infinite Loops to Compute an Approximate Value of Infinity.” The good-natured jibes are a backhanded celebration of science. What really goes on in the lab is, by implication, of a loftier, more serious nature.
It has been jarring to learn in recent years that a reproducible result may actually be the rarest of birds. Replication, the ability of another lab to reproduce a finding, is the gold standard of science, reassurance that you have discovered something true. But that is getting harder all the time. With the most accessible truths already discovered, what remains are often subtle effects, some so delicate that they can be conjured up only under ideal circumstances, using highly specialized techniques.
Fears that this is resulting in some questionable findings began to emerge in 2005, when Dr. John P. A. Ioannidis, a kind of meta-scientist who researches research, wrote a paper pointedly titled “Why Most Published Research Findings Are False.”
Given the desire of ambitious scientists to break from the pack with a striking new finding, Dr. Ioannidis reasoned, many hypotheses already start with a high chance of being wrong. Otherwise, proving them right would not be so difficult, so surprising, or so helpful to a scientist's career. Add in the human tendency to see what we want to see, and unconscious bias becomes inevitable. Without any ill intent, a scientist may be nudged toward interpreting the data so it supports the hypothesis, even if just barely.
The effect is amplified by competition for a shrinking pool of grant money and by the design of so many experiments, with small sample sizes (cells in a lab dish or people in an epidemiological pool) and weak standards for what counts as statistically significant. That makes it all the easier to fool oneself.
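The arithmetic behind that worry can be sketched with a small simulation. The parameters here are illustrative choices, not figures from the article: assume a modest true effect of 0.2 standard deviations and ten subjects per group. Such a study rarely reaches significance, and the few results that do clear the p < 0.05 bar overstate the true effect severalfold, which is exactly how a small, noisy experiment fools its author.

```python
import random
import statistics

# Illustrative sketch: many small two-group experiments with a modest
# true effect, testing at the conventional p < 0.05 threshold.
random.seed(42)

TRUE_EFFECT = 0.2   # true difference between group means, in SD units
N = 10              # per-group sample size (a deliberately small study)
T_CRIT = 2.101      # two-sided critical t for alpha = 0.05, df = 18

def run_experiment():
    """Return (observed effect, significant?) for one simulated study."""
    a = [random.gauss(0.0, 1.0) for _ in range(N)]
    b = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    diff = statistics.mean(b) - statistics.mean(a)
    # pooled two-sample t statistic (equal group sizes)
    sp2 = (statistics.variance(a) + statistics.variance(b)) / 2
    t = diff / (2 * sp2 / N) ** 0.5
    return diff, abs(t) > T_CRIT

results = [run_experiment() for _ in range(20000)]
sig_effects = [d for d, sig in results if sig]

power = len(sig_effects) / len(results)
mean_sig = statistics.mean(abs(d) for d in sig_effects)
print(f"fraction of studies reaching significance: {power:.2f}")
print(f"mean effect size among significant results: {mean_sig:.2f}")
```

With these numbers, only a small fraction of studies find the (real) effect at all, and the ones that do report an effect far larger than the truth.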
Paradoxically, the hottest fields, with the most people pursuing the same questions, are the most prone to error, Dr. Ioannidis argued. If one of five competing labs is alone in finding an effect, that result is the one likely to be published. But there is a four-in-five chance that it is wrong. Papers reporting negative conclusions are more easily ignored.
Putting all of this together, Dr. Ioannidis devised a mathematical model supporting the conclusion that most published findings are probably incorrect.
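The core of that model is a positive predictive value (PPV) calculation: given prior odds R that a tested relationship is real, a significance threshold α (the false-positive rate), and a false-negative rate β, the fraction of "significant" findings that are actually true works out to (1 − β)R / ((1 − β)R + α). A minimal sketch, with example numbers chosen for illustration rather than taken from Dr. Ioannidis's paper:

```python
def ppv(R, alpha=0.05, beta=0.2):
    """Positive predictive value of a claimed finding.

    R     -- prior odds that a tested relationship is real
    alpha -- type I error rate (the significance threshold)
    beta  -- type II error rate (1 minus statistical power)
    """
    true_positives = (1 - beta) * R          # real effects, detected
    false_positives = alpha                  # null effects, "detected"
    return true_positives / (true_positives + false_positives)

# A well-powered test of a plausible hypothesis: most claims hold up.
print(round(ppv(R=1.0, beta=0.2), 2))   # → 0.94
# A long-shot hypothesis tested with low power: most claims are false.
print(round(ppv(R=0.1, beta=0.8), 2))   # → 0.29
```

The second case is the one the paper warns about: when surprising hypotheses meet underpowered studies, fewer than half of the published positive results are true.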