AEA 2014: The Use of Pre-analysis Plans and Other Transparency Approaches in Economics

By Guillaume Kroll (CEGA)

BITSS held a session on research transparency at the 2014 Annual Meeting of the American Economic Association on January 5 in Philadelphia. UC Berkeley economist Ted Miguel, who presided over the session, kicked off the discussion by highlighting the growing interest in transparency across the social sciences. Drawing on influential work in economics, psychology, political science, and medical trials, Miguel argued that the use of rigorous experimental research designs, which has become more widespread over the last decade, may not be enough on its own to ensure a credible body of scientific evidence on which policy decisions can be based.

“The incentives, norms and institutions that govern social science research contribute to these problems”, says Miguel. “There is ample evidence of publication bias, with a large number of studies reporting p-values just below 0.05. Statistically significant, novel and theoretically ‘tidy’ results are published more easily than null, replication, and perplexing results, even conditional on the quality of the research design.” In addition, “social science journals too rarely require data-sharing or adherence to reporting standards”.
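To see how such bunching can arise, consider a stylized simulation of one form of specification search: “optional stopping”, where a researcher keeps collecting data and re-testing until the result crosses the significance threshold. The sketch below is purely illustrative (it is not from the session, and the sample sizes and number of studies are arbitrary); under a true null effect, the studies that reach significance tend to land just under 0.05, and a journal filter that favors significant results then fills the literature with them.

```python
# Illustrative simulation (not from the talk): "optional stopping" under a
# true null effect, one stylized form of specification search.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hacked_pvalue(start_n=20, max_n=100):
    """Collect observations on a true null effect, re-running a t-test
    after each new data point and stopping as soon as p < 0.05."""
    x = list(rng.normal(0.0, 1.0, start_n))
    while len(x) < max_n:
        p = stats.ttest_1samp(x, 0.0).pvalue
        if p < 0.05:
            return p  # stop the moment the result turns "significant"
        x.append(rng.normal(0.0, 1.0))
    return stats.ttest_1samp(x, 0.0).pvalue

pvals = np.array([hacked_pvalue() for _ in range(500)])
published = pvals[pvals < 0.05]  # journals favor significant results
near_cutoff = np.mean((published >= 0.04) & (published < 0.05))
print(f"{len(published)}/500 null studies reach p < 0.05; "
      f"{near_cutoff:.0%} of those fall in [0.04, 0.05)")
```

In this stylized setting the excess mass just below 0.05 emerges mechanically from the stopping rule; pre-registering the sample size and the test is exactly the kind of commitment device the pre-analysis plans discussed below are meant to provide.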

Miguel proposes the adoption of a new set of practices by the scientific community. Based on a recently published Science article he co-authored, he puts forward three mechanisms to increase transparency in scientific reporting: the disclosure of key details about data collection and analysis, the registration of pre-analysis plans, and open access to research data and materials. “The emerging registration system in social science may become a model for medical trials, where research plans have traditionally been much less detailed”, says Miguel. “We need to foster the adoption of new practices to improve the quality and credibility of the social science enterprise”.

The AEA registry is a registration platform for pre-analysis plans of RCTs in economics and other social science disciplines.

In the second presentation, Ben Olken from MIT focused on what he called “the most controversial of these new approaches”: pre-specification and the use of analysis plans. Olken weighs the costs of pre-analysis plans (PAPs) against their benefits, drawing on recent field experiments he conducted in Indonesia and Pakistan. “A fully specified pre-analysis plan means you need to essentially write every possible paper you could possibly write on the topic, which is virtually impossible”, argues Olken. “This generates boring papers, where much of what you show the reader isn’t really relevant to the key causal chain conditional on results”. Besides, “recent studies seeking to quantify the issues of specification mining and non-replicability suggest that the problem may not be that enormous after all”.

Olken praises the use of PAPs in specific situations: when an interested party is looking for a specific answer from the study, when the experiment is very simple, or when only key aspects of the study (balance tests, control variables, statistical specifications, etc.) are pre-specified. In other cases, Olken proposes a set of alternatives that sidestep the pitfalls of PAPs: robustness checks showing that results are not sensitive to the specification or choice of variables, conditional analysis plans covering only the first steps of the study, or the replication of important results by independent researchers.

Stanford political economist Kate Casey followed with a case study of when PAPs are most valuable, based on an impact evaluation of a community-driven development program in Sierra Leone. “The study showed no evidence that the intervention durably strengthens institutional performance or social capital”, says Casey. “With our hands bound against data mining, the use of a PAP helped mitigate the risks raised by the inconclusive impact of the program”. Those risks included the vested interest of donors and program managers in presenting the intervention as successful, and the large scope for cherry-picking among the many outcomes and sub-groups of interest.

Casey shows two different papers that she and her co-authors could have written in the absence of a PAP, each of which would have offered a more compelling, but erroneous, interpretation of program impacts. She advocates for limited flexibility with full transparency in the use of PAPs. “Flexibility is needed to counter the risks of a purist approach that stifles learning and creates excessive upfront costs, but it should be accompanied by complete transparency to maintain the credibility of the pre-specification process”.

In the last presentation of the day, Sarah Taubman from NBER shared her experience designing a PAP for a randomized evaluation of the effects of Medicaid coverage on clinical outcomes. “Our study relied on many different data sources, so we decided to archive our analysis plan in stages, as data became available”, says Taubman. “It’s not always clear what pre-specifying means, so further discussions like today’s about how to use PAPs and what they should entail are essential”.