The Results of the Reproducibility Project Are In. They’re Not Good
Tom Bartlett | August 28, 2015
The results of the Reproducibility Project are in, and the news is not good. The goal of the project was to attempt to replicate findings in 100 studies from three leading psychology journals published in the year 2008. The very ambitious endeavor, led by Brian Nosek, a professor of psychology at the University of Virginia and executive director of the Center for Open Science, brought together more than 270 researchers who tried to follow the same methods as the original researchers — in essence, double-checking their work by painstakingly re-creating it.
<more at http://chronicle.com/article/The-Results-of-the/232695/; related links:
https://osf.io/sejcv/ (An Open, Large-Scale, Collaborative Effort to Estimate the Reproducibility of Psychological Science. Open Science Collaboration. [Abstract: Reproducibility is a defining feature of science. However, because of strong incentives for innovation and weak incentives for confirmation, direct replication is rarely practiced or published. The Reproducibility Project is an open, large-scale, collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science. So far, 72 volunteer researchers from 41 institutions have organized to openly and transparently replicate studies published in three prominent psychological journals from 2008. Multiple methods will be used to evaluate the findings, calculate an empirical rate of replication, and investigate factors that predict reproducibility. Whatever the result, a better understanding of reproducibility will ultimately improve confidence in scientific methodology and findings.])
and http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124 (Why Most Published Research Findings Are False. John P.A. Ioannidis. Published: August 30, 2005. DOI: 10.1371/journal.pmed.0020124. [Abstract: There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; when there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.]);
further: http://www.scientificamerican.com/article/massive-international-project-raises-questions-about-the-validity-of-psychology-research/?WT.mc_id=SA_SP_20150831 (Massive International Project Raises Questions about the Validity of Psychology Research; When 100 past studies were replicated, only 39 percent yielded the same results. August 27, 2015)>