“Just Post It: The Lesson from Two Cases of Fabricated Data Detected by Statistics Alone” by Uri Simonsohn
The last two videos discussed studies in which Uri Simonsohn found statistical irregularities and eventually uncovered fraudulent data. In this article, he explains how he confirmed his suspicion that these irregularities were not merely reporting errors or unusual distributions. Simonsohn also argues that this was possible only because he had access to the studies’ raw data: when authors are required to post the raw data from their experiments, he says, fraud becomes much easier to detect and to prevent.
In both studies – “Rising up to higher virtues: Experiencing elevated physical height uplifts prosocial actions” by Lawrence Sanna et al. and “The effect of color (red versus blue) on assimilation versus contrast in prime-to-behavior effects” by Dirk Smeesters et al. – Simonsohn found the reported summary statistics to be too similar across conditions to have come from random samples. The first study, which comprised three experiments testing how participants’ moral inclinations were affected by their physical elevation, reported nearly identical standard deviations across the experiments. In the second study, which claimed to test how color priming affects assimilation or contrast for exemplars and stereotypes, the reported means from 12 different conditions were likewise suspiciously similar.
To assess the likelihood that the studies’ standard deviations and means would be as similar as reported, Simonsohn simulated data for both studies from normal distributions. He also requested the raw data from the studies’ authors, confirmed there were no reporting errors, and re-ran the simulations. For both articles, he found data “inconsistent with random sampling” and “excessive similarity” in standard deviations or means across conditions.
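Simonsohn’s paper gives the full details of his simulations; the sketch below is only a minimal illustration of the underlying logic, not his actual procedure. It simulates many honest data sets under the null hypothesis of random sampling from a normal distribution and asks how often the per-condition standard deviations come out at least as similar as some observed value. The sample size, number of conditions, and “observed” spread are made-up placeholders rather than figures from either retracted study.

```python
# Minimal sketch of a simulation-based test for "excessive similarity" of
# standard deviations across conditions. All numeric values here are
# hypothetical placeholders, not figures from the retracted papers.
import numpy as np

rng = np.random.default_rng(0)

n_per_condition = 15     # hypothetical sample size per condition
n_conditions = 3         # hypothetical number of conditions
observed_spread = 0.05   # hypothetical spread (SD) of the reported condition SDs
n_simulations = 100_000

def spread_of_sds(samples):
    """Standard deviation of the per-condition standard deviations."""
    sds = samples.std(axis=1, ddof=1)
    return sds.std(ddof=1)

# Under the null of honest random sampling, every condition is drawn from
# the same normal distribution; record how similar the condition SDs are.
simulated = np.empty(n_simulations)
for i in range(n_simulations):
    samples = rng.normal(loc=0.0, scale=1.0, size=(n_conditions, n_per_condition))
    simulated[i] = spread_of_sds(samples)

# How often does honest sampling produce SDs at least as similar as reported?
p_value = np.mean(simulated <= observed_spread)
print(f"P(spread of SDs <= {observed_spread}) under random sampling: {p_value:.5f}")
```

A tiny probability here would mean the reported standard deviations are more similar than random sampling plausibly produces – the “lack of statistical noise” Simonsohn describes.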
When asked about these abnormalities in a committee hearing investigating misconduct, Dirk Smeesters – the primary author of the second study – suggested that the excessive similarities could have been due to coding errors. However, the simulations showed that even sloppy coding could not account for the similarities: Simonsohn’s simulations produced more variation in the data, whereas Smeesters’s analysis showed what looked like too little variation, or “a lack of statistical noise.”
Both studies were eventually retracted from the journals in which they were published. Simonsohn credits his access to the raw data for the effectiveness of his analysis and for the subsequent retractions.
“The availability of raw data… causally led to the retraction of existing publications with invalid data and in all likelihood prevented additional ones from being published. The absence of raw data, in contrast, led to suspicions of fraud in another case not being acted on. If journals, granting agencies, universities, or other entities overseeing research promoted or required data posting, it is hard to imagine that fraud would not be reduced.”
Though Simonsohn was able to obtain raw data from both Sanna and Smeesters, he found that most of his data requests to other authors were met with reports that the data had been lost, accidentally destroyed, or were otherwise no longer available. He makes the case that posting data is both beneficial and feasible for the scientific community.
“The proposal of data posting for psychology is not a utopian one. Some behavioral-science journals have already implemented both this required default policy and a reasonable-exception clause (e.g., Judgment and Decision Making and the American Economic Review). Data posting is policy also in fields with vastly larger data sets that are more expensive to collect, and in which competing teams study closely related questions, such as gene sequencing (see e.g., the National Center for Biotechnology Information’s Sequence Read Archive, http://www.ncbi.nlm.nih.gov/sra/, and Gene Expression Omnibus, http://www.ncbi.nlm.nih.gov/geo/). There are no obvious features of psychological data that make them less postable than genomics data, and there are several that make them more so… It is the status quo that rests on utopian premises: As a discipline, psychology has no protection against fraud, researchers—especially incompetent ones—have tremendous incentives to commit it, and yet everyone conveniently assumes that it does not happen. As the examples that [precede] painfully demonstrate, this assumption is inconsistent with evidence published in some of psychology’s most respected journals.”
While Simonsohn concludes that detecting and eliminating fraud is an important goal, he also warns against the mistake of accusing innocent scholars, and he offers several measures for investigating suspicions responsibly:
- Replicate analyses across multiple studies before suspecting foul play by a given author.
- Compare suspected studies with similar ones by other authors.
- Extend analyses to raw data.
- Contact authors privately and transparently, and give them ample time to consider your concerns.
- Offer to discuss matters with a trusted, statistically savvy advisor.
- If suspicions remain, convey them only to entities tasked with investigating such matters, and do so as discreetly as possible.
If you want to dive deeper into the material, you can read the entirety of Uri Simonsohn’s paper, a great interview with him in Nature, and both retracted articles by clicking on the links in the SEE ALSO section at the bottom of this page.
Reference
Simonsohn, Uri. 2013. “Just Post It: The Lesson from Two Cases of Fabricated Data Detected by Statistics Alone.” Psychological Science 24 (10): 1875–88. doi:10.1177/0956797613480366.
© Center for Effective Global Action.