A study recently published in Science provides striking insights into publication bias in the social sciences:
Stanford political economist Neil Malhotra and two of his graduate students examined every study since 2002 that was funded by a competitive grants program called TESS (Time-sharing Experiments for the Social Sciences). TESS allows scientists to order up Internet-based surveys of a representative sample of U.S. adults to test a particular hypothesis […] Malhotra’s team tracked down working papers from most of the experiments that weren’t published, and for the rest asked grantees what had happened to their results.
What did they find?
There is a strong relationship between the results of a study and whether it was published, a pattern indicative of publication bias […] While around half of the total studies in [the] sample were published, only 20% of those with null results appeared in print. In contrast, roughly 60% of studies with strong results and 50% of those with mixed results were published […] However, what is perhaps most striking is not that so few null results are published, but that so many of them are never even written up (65%).
Why is this a problem?
The failure to write up null results is problematic for two reasons. First, researchers might waste effort and resources conducting studies that have already been executed and whose treatments proved inefficacious. Second, and more troubling, if future researchers conduct similar studies and obtain significant results by chance, the published literature on the topic will erroneously suggest stronger effects. Hence, even if null results reflect treatments that “did not work” and strong results reflect efficacious treatments, authors’ failures to write up null findings still adversely affect the universe of knowledge.
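The second point is easy to see in a quick simulation. The sketch below uses made-up numbers (a true effect of 0.1, samples of 100 per arm, a simple z-test), not figures from the TESS study: when only statistically significant studies reach print, the average published estimate lands well above the true effect, because only the studies that got lucky clear the significance bar.

```python
import random
import statistics

random.seed(0)

def run_study(true_effect, n=100):
    """Simulate one two-group study; return (estimated effect, significant?)."""
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    est = statistics.mean(treat) - statistics.mean(ctrl)
    se = (2 / n) ** 0.5          # SE of a difference of means when sd = 1
    significant = abs(est / se) > 1.96  # two-sided test at the 5% level
    return est, significant

true_effect = 0.1  # hypothetical small-but-real effect
results = [run_study(true_effect) for _ in range(5000)]

all_estimates = [est for est, _ in results]
published = [est for est, sig in results if sig]  # only "successes" publish

# Averaging every study recovers the true effect; averaging only the
# published (significant) ones overstates it substantially.
print(round(statistics.mean(all_estimates), 3))
print(round(statistics.mean(published), 3))
```

With these parameters only a minority of simulated studies reach significance, and their average estimate is several times the true effect, which is exactly the distortion the quoted passage warns about.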
What can we do about this?
Based on communications with the authors of many experiments that resulted in null findings, [Malhotra et al.] found that some researchers anticipate the rejection of such papers but also that many simply lose interest in “unsuccessful” projects. These findings show that a vital part of developing institutional solutions to improve scientific transparency would be to better understand the motivations of researchers who choose to pursue projects as a function of results […] Proposed solutions such as two-stage review (the first stage for the design and the second for the results), pre-analysis plans, and requirements to pre-register studies should be complemented by incentives not to bury insignificant results in file drawers. Creating high-status publication outlets for these studies could provide such incentives. The movement toward open-access journals may provide space for such articles. Further, the pre-analysis plans and registries themselves will increase researcher access to null results. Alternatively, funding agencies could impose costs on investigators who do not write up the results of funded studies. Finally, resources should be deployed for replications of published studies if they are unrepresentative of conducted studies and more likely to report large effects.