The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):
A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.
In the following interview with the Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.
As I argued in a white paper, […] it is still too easy for publication bias to creep into decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.
Nyhan continues by discussing his thoughts on promoting replication as a tool to combat bias and finishes the interview by explaining some of the details of RRs.
What if more replications were published?
I’m of course a big supporter of replication in all of its forms […] I’m skeptical, though, that “expect[ing] researchers to provide multiple, independent tests of a claim before publishing work” is a realistic approach given the incentives scholars face. That’s what psychology did for many years and the result was a culture of tweaking data to find significant results and other questionable research practices such as suppressing unsuccessful replications.
I’m all in favor of “look[ing] more favorably on projects that attempt to replicate findings” but don’t know how to turn that sentiment into concrete changes in how articles are published. Most of our journals use article formats [that] are inappropriately long for direct replications, making them seem like a poor fit. In addition, editors are encouraged to maximize the impact factors of their journals, which creates a disincentive for them to publish replications—a type of article that tends to generate controversy (for unsuccessful replications) and relatively few citations (especially for successful replications). Still, I welcome efforts by new journals like the Journal of Experimental Political Science and Research & Politics to publish shorter articles, including more replications.
What are the general criteria for publishing an article as an RR?
Authors would be encouraged to submit articles for which the results would be informative about a question of interest to the field regardless of the outcome, while reviewers and editors would be forced to evaluate the value of a design independent of the empirical result. The result should be fewer false positives and more replicable findings. (For more details, see my white paper or the Registered Reports FAQ.)
Are the pre-analysis plans required by RRs overly restrictive?
I’m not suggesting that scholars be prevented from analyzing the data in ways that they did not initially anticipate. The format would instead distinguish more clearly between confirmatory and exploratory findings while putting the burden of proof on authors to justify deviations from analysis plans, which creates a disincentive against using specification searches to try to obtain statistically significant results.
Once an accepted study's final write-up is submitted, is it published automatically?
[W]hile studies submitted in this format that pass peer review would be accepted in principle before the data were collected and analyzed, the final write-up would still be peer-reviewed to ensure that it met the field’s standards for writing quality, data analysis, etc.
Find the full interview here.