Reproducibility and replicability is a glossy science now so watch out for the hype
02 Mar 2017 By Jeff Leek

Reproducibility is the ability to take the code and data from a previous publication, rerun the code, and get the same results. Replicability is the ability to rerun an experiment on new data and get results "consistent" with the original study. Results that are not reproducible are hard to verify, and results that do not replicate in new studies are harder to trust. It is important that we aim for both reproducibility and replicability in science.
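To make the distinction concrete, here is a minimal sketch in Python using made-up data and numbers (not drawn from any particular study): rerunning the same code on the same data returns an identical result (reproducibility), while running the same analysis on a freshly collected sample returns a similar but not identical estimate (replicability).

```python
import numpy as np

def estimate_effect(measurements, seed=0):
    # Fixing the random seed makes the resampling step deterministic,
    # which is one ingredient of a reproducible analysis.
    rng = np.random.default_rng(seed)
    boot_means = [rng.choice(measurements, size=len(measurements)).mean()
                  for _ in range(1000)]
    return float(np.mean(boot_means))

# Hypothetical "original" and "new" samples from the same population.
original_data = np.random.default_rng(1).normal(loc=0.5, scale=1.0, size=100)
new_data = np.random.default_rng(2).normal(loc=0.5, scale=1.0, size=100)

# Reproducible: same code + same data -> identical output.
assert estimate_effect(original_data) == estimate_effect(original_data)

# Replicable: a new sample gives a similar, not identical, estimate
# of the same underlying effect.
print(estimate_effect(original_data), estimate_effect(new_data))
```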
Over the last few years there has been increasing concern about problems with reproducibility and replicability in science. There are a number of suggestions for why this might be:
- Papers published by scientists who lack training in statistics and computation
- Treating statistics as a second-class discipline that can be "tacked on" at the end of a scientific experiment
- Financial incentives for companies and others to publish desirable results.
- Academic incentives for scientists to publish desirable results so they can get their next grant.
- Incentives for journals to publish surprising/eye catching/interesting results.
- Over-hyped studies with weak statistical properties (small sample sizes, questionable study populations, etc.)
- TED-style sound bites of scientific results that are digested and repeated in the press despite limited scientific evidence.
- Scientists who refuse to consider alternative explanations for their data
The targets of discussion about reproducibility and replicability are usually highly visible scientific studies: papers in what are considered "top journals", such as Science and Nature, that seek to maximize visibility. More recently, entire widely publicized fields of science, like psychology and cancer biology, have been targeted for reproducibility and replicability studies.
These studies have pointed out serious issues with the statistics, study designs, code availability, and methods descriptions in the papers they examined. These are fundamental issues that deserve attention and should be taught to all scientists. As more papers pointing out potential problems have appeared, they have merged into what is being called "a crisis of reproducibility", "a crisis of replicability", "a crisis of confidence in science", or other equally strong statements.
As the interest around reproducibility and replicability has built to a fever pitch in the scientific community it has morphed into a glossy scientific field in its own right. All of the characteristics are in place:
- A big central "positive" narrative that science in general is not replicable, reproducible, or correct.
- Incentives to publish these types of results because they can appear in Nature/Science/other glossy journals. (I’m not immune to this)
- Strong and aggressive responses to papers that provide alternative explanations or don’t fit the narrative.
- Researchers whose careers depend on the narrative being true
- TED-style talks and sound bites ("most published research is false", "most papers don't replicate")
- Press hype, including for papers with statistical weaknesses (small sample sizes, weaker study designs)
Reproducibility and replicability research has "arrived" and become a field in its own right. That has both positives and negatives. On the positive side, it means critical statistical issues are now being discussed by a broader range of people. On the negative side, researchers now have to give the claims in reproducibility and replicability papers the same sober evaluation they give to any other scientific field: these papers must be judged with the same critical eye we apply to any other scientific study. That way we can sift through the hype and move science forward.