New BITSS Paper on Publication Bias

Garret Christensen, BITSS Project Scientist


I’m happy to report that I just published a paper with Justin McCrary and Daniele Fanelli in PLOS ONE on publication bias, “Conservative Tests under Satisficing Models of Publication Bias.” In plain English, the question is: if I assume there’s a lot of publication bias, what t-statistics should I really interpret as significant? And the answer turns out to be roughly 3 instead of 2. For the common distributions we look at, our adjusted cutoffs for a 5% type I error rate are about 50% higher than the standard cutoff. We also examined several published collections of test statistics and found that roughly 30% of the reported statistics fall between the adjusted and unadjusted cutoffs.
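To give a concrete sense of what the adjustment means in practice, here is a minimal Python sketch. The cutoff values are just the rough headline numbers from the post (1.96 standard, about 3 adjusted); the exact adjusted cutoff in the paper depends on the assumed distribution, and the `example_t_stats` list is made-up illustrative data, not results from the paper.

```python
# Sketch: flag t-statistics under both the standard and the adjusted cutoff.
# Values are the rough headline numbers from the post, not the paper's
# distribution-specific cutoffs.

STANDARD_CUTOFF = 1.96  # usual two-sided 5% critical value
ADJUSTED_CUTOFF = 3.0   # ~50% higher, per the paper's headline result

def classify(t: float) -> str:
    """Classify a t-statistic under both cutoffs."""
    if abs(t) >= ADJUSTED_CUTOFF:
        return "significant under both cutoffs"
    if abs(t) >= STANDARD_CUTOFF:
        return "significant only under the standard cutoff"
    return "not significant under either cutoff"

example_t_stats = [1.5, 2.1, 2.6, 3.4]  # hypothetical published estimates
for t in example_t_stats:
    print(f"t = {t:.2f}: {classify(t)}")
```

The middle category, statistics between roughly 2 and 3, is the zone where about 30% of the published test statistics we examined fall: significant by the conventional standard, but not after adjusting for heavy publication bias.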

Of course, our paper is based on several assumptions (hopefully clearly stated), and the method has some clear limitations, perhaps most notably that if everyone used it, the result would be a t-ratio arms race and the method would break down. But for what it’s worth, if you have reason to believe a literature is rife with publication bias, you might want to try our method (in the back of your mind, without telling anybody else).
