Reproducibility Project
The Reproducibility Project: Psychology was a crowdsourced collaboration of 270 contributing authors who repeated 100 published experimental and correlational psychological studies.[1] The project was led by the Center for Open Science and its co-founder, Brian Nosek, who started it in November 2011; the results were published in August 2015. Reproducibility here means the ability to produce the same findings, using the same methodology as the original work, but on a different dataset (for instance, one collected from a different set of participants). The project illustrated the growing problem of failed reproducibility in social science, and it started a movement, since spread across the sciences, toward expanded testing of the reproducibility of published work.[2]
Results
Brian Nosek of the University of Virginia and colleagues set out to replicate 100 studies published in 2008 in three journals: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition.[3] The aim was to see whether the replications would produce the same results as the original studies. In their initial publications, 97 of the 100 studies reported statistically significant results.
The group took extensive measures to remain true to the original studies, including consultation with the original authors. Even with these steps to reproduce the original conditions, only 35 of the 97 studies with significant results (36.1%) replicated, and when an effect did replicate, it was usually smaller than in the original paper. The authors emphasized that the findings reflect a problem that affects all of science, not just psychology, and that there is room to improve reproducibility in psychology.
In 2021, a follow-up project, the Reproducibility Project: Cancer Biology, reported that of 193 experiments from 53 top cancer papers published between 2010 and 2012, only 50 experiments from 23 papers could be replicated. Moreover, the replicated effect sizes were on average 85% smaller than the original findings. None of the papers fully described its experimental protocols, and for 70% of the experiments the replicators had to ask the original authors for key reagents.[4][5]
Statistical relevance
Failure to replicate can have different causes. A replication may fail because of a type II error: the test fails to reject the null hypothesis even though it is false, yielding a false negative. Alternatively, the original finding may have been a type I error: the rejection of a null hypothesis that is in fact true, yielding a false positive.
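The distinction can be made concrete with a small simulation. The sketch below is a minimal illustration, assuming NumPy and SciPy are installed; the sample size (30 per group) and effect size (0.3 standard deviations) are hypothetical choices, not parameters from the project.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 10_000, 30, 0.05

# Type I error: both groups come from the same distribution (the null
# hypothesis is true), yet a fraction of tests rejects it anyway.
false_positives = sum(
    stats.ttest_ind(rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)).pvalue < alpha
    for _ in range(n_sims)
)
print(f"Type I error rate:  {false_positives / n_sims:.3f}")  # close to alpha (0.05)

# Type II error: a real but small effect (0.3 standard deviations) goes
# undetected because the test has little power at this sample size.
false_negatives = sum(
    stats.ttest_ind(rng.normal(0.0, 1.0, n), rng.normal(0.3, 1.0, n)).pvalue >= alpha
    for _ in range(n_sims)
)
print(f"Type II error rate: {false_negatives / n_sims:.3f}")  # well above 0.05
```

At this sample size the type II error rate is large, which illustrates one way a genuine effect can fail to replicate: the replication may simply be underpowered.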
Center for Open Science
The Center for Open Science was founded by Brian Nosek and Jeff Spies in 2013 with a $5.25 million grant from the Laura and John Arnold Foundation.[6] By 2017 the Foundation had provided an additional $10 million in funding.[6]
Outcome and importance
The Reproducibility Project has had multiple implications. Many people have begun to question the legitimacy of scientific studies published in esteemed journals. Journals tend to publish only articles with large effect sizes that reject the null hypothesis. Because failed studies typically leave no public record, researchers may unknowingly redo experiments that have already failed, and more false positives end up being published. It is unknown whether any of the original authors committed fraud in publishing their studies, but some of them are among the 270 contributors to this project.
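A small simulation shows why publishing only significant results inflates the share of false positives in the literature. This is an illustrative sketch, assuming NumPy and SciPy; the proportion of true effects (20%), effect size (0.3), and sample size (30) are hypothetical choices, not figures from the project.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n, alpha = 10_000, 30, 0.05
true_effect_share = 0.2  # hypothetical: only 20% of tested hypotheses are real

published_true = published_false = 0
for _ in range(n_studies):
    has_effect = rng.random() < true_effect_share
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.3 if has_effect else 0.0, 1.0, n)
    # Publication filter: only statistically significant results appear in print.
    if stats.ttest_ind(a, b).pvalue < alpha:
        if has_effect:
            published_true += 1
        else:
            published_false += 1

false_share = published_false / (published_true + published_false)
print(f"Share of 'published' findings that are false positives: {false_share:.2f}")
```

In this toy model each individual test keeps its false-positive rate at 5%, yet roughly half of the "published" results are false, because underpowered true effects are detected about as rarely as null effects are mistakenly flagged.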
An earlier study estimated that approximately $28 billion is spent per year in the United States on preclinical medical research that is not reproducible.[7]
The results of the Reproducibility Project might also affect public trust in psychology.[8][9] Lay people who learned about the low replication rate found in the Reproducibility Project subsequently reported lower trust in psychology, compared with people who were told that most of the studies had replicated.[10][8]
References
- Open Science Collaboration (28 August 2015). "Estimating the reproducibility of psychological science". Science. 349 (6251): aac4716. doi:10.1126/science.aac4716. hdl:10722/230596. PMID 26315443. S2CID 218065162.
- Jarrett, Christian (27 August 2015). "This is what happened when psychologists tried to replicate 100 previously published findings". BPS Research Digest. Retrieved 8 November 2016.
- Weir, Kirsten. "A reproducibility crisis?". American Psychological Association. Retrieved 24 November 2016.
- "Dozens of major cancer studies can't be replicated". Science News. 7 December 2021. Retrieved 19 January 2022.
- "Reproducibility Project: Cancer Biology". www.cos.io. Center for Open Science. Retrieved 19 January 2022.
- Apple, Sam (22 January 2017). "The Young Billionaire Behind the War on Bad Science". Wired.
- Freedman, L. P.; Cockburn, I. M.; Simcoe, T. S. (2015). "The Economics of Reproducibility in Preclinical Research". PLOS Biology. 13 (6): e1002165. doi:10.1371/journal.pbio.1002165. PMC 4461318. PMID 26057340.
- Wingen, Tobias; Berkessel, Jana B.; Englich, Birte (24 October 2019). "No Replication, No Trust? How Low Replicability Influences Trust in Psychology". Social Psychological and Personality Science. 11 (4): 454–463. doi:10.1177/1948550619877412. ISSN 1948-5506. S2CID 210383335.
- Anvari, Farid; Lakens, Daniël (19 November 2019). "The replicability crisis and public trust in psychological science". Comprehensive Results in Social Psychology. 3 (3): 266–286. doi:10.1080/23743603.2019.1684822. ISSN 2374-3603.
- "The Replication Crisis Lowers The Public's Trust In Psychology — But Can That Trust Be Built Back Up?". Research Digest. 31 October 2019. Retrieved 30 November 2019.