The Controversy of Preregistration in Social Research

Guest post by Jamie Monogan (University of Georgia)

A conversation is emerging in the social sciences over the merits of study registration and whether it should be the next step we take in raising research transparency. The notion of study registration is that, prior to observing outcome data, a researcher can publicly release a data analysis plan that enables a correct test of a hypothesis.

Proponents argue that preregistration can curb four causes of publication bias, or the disproportionate publishing of positive, rather than null, findings:

  1. Preregistration would make evaluating the research design more central to the review process, reducing the importance of significance tests in publication decisions. The journal Cortex now allows for a publication decision based strictly on the research design (Chambers 2013), taking significance tests out of the publication decision. Whether the decision is made before or after observing results, releasing a design early would emphasize study quality.
  2. Preregistration would help address the problem of null findings that stay in the author’s file drawer, because the discipline would at least have a record of the registered study even if no publication emerged. This record would convey where past research was conducted, even when it did not prove fruitful (Monogan 2013, 23).
  3. Preregistration would reduce the ability to add observations until significance is achieved, because the registered design would signal the appropriate sample size in advance. Without registration, it is possible to monitor the analysis and stop data collection only once a positive result emerges.
  4. Preregistration can prevent fishing, or manipulating the model to achieve a desired result, because the researcher must describe the model specification ahead of time (Humphreys, de la Sierra, and van der Windt 2013). By sorting out the best specification of a model using theory and past work ahead of time, a researcher can commit to the results of a well-reasoned model.
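The optional-stopping problem in point 3 can be made concrete with a short simulation (my illustration, not from the post; all names here are made up for the sketch). It generates null data (fair-coin flips) and runs a normal-approximation z-test either once at the full sample size, or repeatedly after each new batch, stopping at the first significant result:

```python
import random
import math

def false_positive_rate(peeks, n_per_peek, trials=2000, seed=1):
    """Share of null-data trials that ever reach |z| > 1.96 when the
    test is run after each of `peeks` batches of `n_per_peek` flips."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% critical value
    hits = 0
    for _ in range(trials):
        heads, n = 0, 0
        for _ in range(peeks):
            heads += sum(rng.random() < 0.5 for _ in range(n_per_peek))
            n += n_per_peek
            z = (heads - 0.5 * n) / math.sqrt(0.25 * n)  # z-test vs. p = 0.5
            if abs(z) > z_crit:
                hits += 1  # "significant" result found; stop collecting
                break
    return hits / trials

# One test at the planned sample size vs. ten peeks at the same total n.
single_look = false_positive_rate(peeks=1, n_per_peek=200)
many_looks = false_positive_rate(peeks=10, n_per_peek=20)
```

Testing once at the preregistered sample size keeps the false-positive rate near the nominal 5%, while peeking after every batch inflates it well beyond that, which is why committing to a sample size in advance matters.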

In contrast, several arguments call for skepticism or opposition to making preregistration a new social science norm:

  1. Anderson (2013) offers replication as an alternative to registration. Beyond the value of replication projects themselves, the quality of published articles improves when replication materials are furnished, even if no replication is ever attempted. On this view, preregistration is not a viable substitute for sharing replication information.
  2. Gelman (2013) argues that preregistration would reduce the number of printed results that turn out to be false. However, preregistration would be problematic if it led to robotic data analysis in which the simple evaluation of hypotheses came at the expense of broader data exploration. For instance, studies ought to present visualizations of data and embrace the uncertainty of estimation.
  3. Laitin (2013) contends that study registration works well for clinical research, but the incentives are stacked stronger in that field than in social research. Clinical researchers often work in labs that rely on funding from companies that wish to market proposed treatments, and the relative costs of Type I errors differ when comparing clinical to social research. Also, context matters for field-based social research in ways that do not emerge in the clinical setting.

These arguments raise important issues with respect to study registration as a norm. However, I would argue that careful formulation of norms can address these concerns. First, on the notion that the required provision of replication information is a better path to transparency: Study registration should not replace the sharing of replication information, but should work to enhance it. As long as journals require the sharing of replication data, preregistration can further transparency because it also requires the author to illustrate more about the research process. Second, on the idea that preregistration may stifle reports from broader data exploration, Gelman notes that preregistration need not preclude such activity. Data visualization still is possible when completing the work of a preregistered study, and interesting auxiliary findings can be reported, even while the primary hypothesis is tested in accord with the registered design.

Finally, on the contention that social research fundamentally differs from medical research, the incentives certainly are more strongly stacked in clinical trials. However, social scientists are expected to generate findings that advance knowledge, and null findings rarely are regarded as novel, so there is pressure for positive results in social research as well. On the point that social contexts and external events can shape human behavior: it may be unreasonable to expect a design to anticipate every contingency of social research, so any registration regime must bear this in mind. Studies with well-reasoned justifications for deviating from a preregistered design should still be regarded highly.

The dialogue about preregistration is important because it will continue to shape any registration practices that are implemented. Any policy that emerges must generally improve research quality without excluding valuable studies and information. Perhaps if more researchers try preregistering a study (see: here or here), a broader community will be more ready to weigh in on the tradeoffs of this step in transparency.



About the author: Jamie Monogan is an Assistant Professor in the Department of Political Science at the University of Georgia.