Introduction to replication

This activity consists of a series of videos that examine the different types of replication and the inherent differences between replications in the social sciences and those in the life sciences or physical sciences. It also introduces a viewpoint article by Dr. Daniel Hamermesh that discusses the importance of replications and how the scientific community can incentivize their publication. The next two videos focus on different aspects of Dr. Hamermesh’s paper. We also go into more depth below.

Replicability and replication are central to ensuring the validity of scientific findings. Pure replications can verify that a past analysis was done correctly, and statistical replications provide some assurance that a study’s (or aggregated studies’) findings are valid.

In the viewpoint article “Replication in Economics,” economist Daniel Hamermesh discusses both types of replication and examines why replication is so rare in published social science literature.

Pure replications, which use the same data as previously published studies to check for errors, are disturbingly rare, at least in economics. Dr. Hamermesh sent a survey to authors of empirical studies published in two leading labor economics journals between 2002 and 2004: Industrial and Labor Relations Review (ILRR) and the Journal of Human Resources (JHR). He found that the vast majority of these authors never received requests of any kind for their data sets.

Hamermesh then gives three positive incentives for research replicability and the sharing of data:

  • Fear of embarrassment or retaliation by peers or providers of funding should incentivize “careful documentation and maintenance of one’s data sets”.
  • Trust or credibility in social science research, and thus its usefulness in policymaking, rests on replicability. Hamermesh points out that “[t]he incentives we face here are clear: our ideas are unlikely to be taken seriously if our empirical research is not credible, so that the likelihood of positive payoffs to our research is enhanced if we maintain our data and records and ensure the possibility of replication. … [T]he greater ease of communication worldwide may have enhanced these returns, particularly in the areas of influencing policy and stimulating students.”
  • Finally, the cost-benefit relationship of replications of highly visible studies, especially as technology continues to reduce the costs associated with sharing data, suggests that replications should be more common. “[T]he likelihood of somebody attempting replication rises with the visibility of the published study and its author and decreases with the visibility of the potential replicating author. Under those circumstances the benefits of replicating are greater and the costs are lower. Technology has diminished the costs of providing the materials necessary for replication at the same time that changes in the publication process in economics have increased the benefits to authors of maintaining the records that might make replication possible.”

Hamermesh also gives advice to both researchers seeking to perform replications and authors whose studies have been replicated. In general, authors should make their data and code readily available and usable; replicating authors should “take a gentle, restrained professional tone in the comment”; and replicated authors should admit mistakes honestly and swiftly.

He states that

“…given the media interest in reporting novel or titillating empirical findings and politicians’ desires to robe their proposals in scientific empirical cloth, however novel or inconsistent with prior research, it is crucial that as a profession we ensure that replication, or at least fear of replication, is our norm. Empirical economics is never going to become a laboratory science, but recognizing the role of replication can move us slightly in that direction by preventing us from propagating erroneous results.”

He also suggests that journal editors take the lead on this issue by authoring a few highly visible, high-quality replications of their own, both to provide a model for future replications and to normalize the practice.

Similarly rare are scientific replications (or statistical replications) that re-examine “an idea in some published research by studying it using a different data set chosen from a different population from that used in the original paper.” Such replications are extremely important to the external validity of studies. Hamermesh asserts:

“By far the most important justification for scientific replication in nonexperimental studies is that one cannot expect econometric results produced for one time period or for one economy to carry over to another. Temporal change in econometric structure may alter the size and even the sign of the effects being estimated, so that the hypotheses we are testing might fail to be refuted with data from another time. This alteration might occur because institutions change, because incentives that are not accounted for in the model change and are not separable from the behaviour on which the model focuses, or, crucially, that even without these changes the behaviour is dependent on random shocks specific to the period over which an economy is observed.”

Furthermore, “[i]f our theories are intended to be general, to describe the behaviour of consumers, firms, or markets independent of the social or broader economic context, they should be tested using data from more than just one economy.”

Within-study replications, studies that use multiple data sets drawn from temporally or geographically distant sources, are currently our best bet for scientific replication in the social sciences, since journal editors have few incentives to publish externally replicated research: “the profession puts a premium on the creativity and generality of the idea, not on verifying the breadth of its applicability.” Hamermesh concedes, however, that “the incentives for doing within-study scientific replication are non-existent.” Thus, he declares that “it is crucial that editors of the leading journals tilt the publishing process a bit more in favour of within-study scientific replication.”

What are the incentives for a researcher to replicate another’s data, code, or study?

You can read the full paper here.


Hamermesh, Daniel S. 2007. “Viewpoint: Replication in Economics.” Canadian Journal of Economics/Revue canadienne d’économique 40 (3): 715–33. doi:10.1111/j.1365-2966.2007.00428.x.