Conservative Tests under Satisficing Models of Publication Bias
Justin McCrary, Garret Christensen, Daniele Fanelli
Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that undo the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%—rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, 30% of published t-statistics fall between the standard and adjusted cutoffs.
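The rough magnitude of the adjustment can be illustrated with a minimal sketch, which is not the paper's exact derivation but follows the same file-drawer logic: assume that under the null the t-statistic is approximately standard normal and that only results significant at the conventional two-sided 5% level (|t| > 1.96) are published. Controlling size at 5% among the published results then requires a larger cutoff, close to 3.

```python
# Minimal sketch, assuming a pure file-drawer model in which only results
# with |t| > 1.96 appear in print; under the null, published t-statistics
# then follow a truncated standard normal distribution.
from scipy.stats import norm

alpha = 0.05                            # desired size among published results
c_standard = norm.ppf(1 - alpha / 2)    # conventional two-sided cutoff, ~1.96

# Size condition on the truncated null distribution:
#   P(|T| > c) / P(|T| > c_standard) = alpha
# so the adjusted cutoff solves P(|T| > c) = alpha * P(|T| > c_standard).
tail_prob = alpha * 2 * norm.sf(c_standard)   # = alpha**2 = 0.0025
c_adjusted = norm.isf(tail_prob / 2)          # two-sided adjusted cutoff

print(round(c_standard, 2), round(c_adjusted, 2))   # ~1.96 and ~3.02
```

Under these assumptions the adjusted critical value comes out near 3, roughly 50% larger than the standard cutoff, consistent with the figures reported in the abstract.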
Find the most recent version of this paper here.