Recap of Research Integrity in Economics Session at ASSA 2015

By Garret Christensen (BITSS)


BITSS just got back from the ASSA conference, the major annual gathering of economists. The conference largely serves to help new PhD economists find jobs, but there are of course sessions of research presentations, a media presence, and sometimes big names, like the Chair of the Federal Reserve, in attendance. BITSS faculty Ted Miguel presided over a session on research transparency. The session featured presentations by Eva Vivalt (NYU), Brian Nosek (UVA), and Richard Ball (Haverford College).

Vivalt presented part of her job market paper, which shows that, at least in development economics, randomized trials seem to result in less publication bias and/or specification searching than other types of evaluations.

Nosek’s presentation covered a broad range of transparency topics, from his perspective as a psychologist. His discussant, economist Justin Wolfers, concurred completely and focused on how Nosek’s lessons could apply to economics.

As an economist myself, I thought a few of Wolfers's points were particularly interesting:

  1. The Quarterly Journal of Economics should really have a data-sharing requirement.
  2. Economists don’t do enough meta-analysis (Ashenfelter et al.’s paper on estimates of the returns to education is a great example of the work we could and should be doing).
  3. Somewhat tongue-in-cheek (I think), Wolfers discussed the fool/cheat paradox: anyone caught with a mistake in their research can either admit to having made an honest mistake or to having cheated. Nearly everyone chooses the “fool” option, which is puzzling, because there’s not much you can do to change your own intelligence. Why does nobody cop to having cheated? You could make a far more convincing case for mending your ways if you admitted to cheating.

Richard Ball spoke about Project TIER and his experience teaching empirical work to undergraduates. Two points are worth emphasizing:

  1. Yes, getting students to organize their work in a remotely reproducible fashion takes a bit of work up front. But it saves you time in the end, because you then have better than a snowball’s chance of understanding what the student was trying to do.
  2. If we only require that researchers share “final” data sets, we may be losing a lot of information. Richard asked, “What if we shared NO data, and instead required the entire process to be reproducible?” Thinking about my own research, I can certainly imagine judgment calls, or just plain mistakes, lurking in the thousands of lines of code I use to go from the raw data to the final data set. (By the way, I’m happy to share some of my raw data on the US military. Forcing others to go through the same FOIA process I did would be pointless and cruel.) A minimal sketch of what such a raw-to-final pipeline could look like follows this list.
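To make Richard's point a little more concrete, here is a minimal sketch of a "run everything" script, in Python rather than the Stata or R an economist would more likely use. Everything in it is hypothetical, invented purely for illustration: the file names (raw_survey.csv, clean_data.csv, results.txt), the cleaning rule, and the summary statistic. The point is only that a single script takes you from raw data to results, with every judgment call recorded in code instead of in an undocumented manual step.

```python
"""Minimal sketch: one script reruns the whole raw-to-final pipeline."""

import csv

RAW = "raw_survey.csv"      # hypothetical raw data file
CLEAN = "clean_data.csv"    # intermediate "final" data set
RESULTS = "results.txt"     # output of the analysis


def clean(raw_path: str, clean_path: str) -> None:
    """Build the clean data set from the raw file."""
    with open(raw_path, newline="") as src, open(clean_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["id", "income"])
        writer.writeheader()
        for row in reader:
            # Judgment call, documented where it happens: drop rows with
            # missing or negative income rather than imputing them.
            try:
                income = float(row["income"])
            except (KeyError, ValueError):
                continue
            if income >= 0:
                writer.writerow({"id": row["id"], "income": income})


def analyze(clean_path: str, results_path: str) -> None:
    """Compute a simple summary statistic from the cleaned file."""
    with open(clean_path, newline="") as f:
        incomes = [float(row["income"]) for row in csv.DictReader(f)]
    with open(results_path, "w") as out:
        out.write(f"n = {len(incomes)}\n")
        out.write(f"mean income = {sum(incomes) / len(incomes):.2f}\n")


if __name__ == "__main__":
    # One command regenerates everything: raw data in, results out.
    clean(RAW, CLEAN)
    analyze(CLEAN, RESULTS)
```

A reader who gets only raw_survey.csv and this script can reproduce both the clean data set and the results, and can see exactly which observations were dropped and why.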

I also attended the meetings and spent most of my time manning a booth in the exhibit hall on behalf of BITSS and the Center for Open Science. As long as you don’t mind introducing yourself to hundreds of strangers, this is a very reasonable way to get the word out. Interesting and relevant people I didn’t know existed are now on BITSS’s radar.

People I met include the replications editor at the Journal of Applied Econometrics, multiple people from AIRLEAP (the Association for Integrity and Responsible Leadership in Economics and Associated Professions), and folks from the St. Louis Fed, who have been fighting the good fight on data and code archives for decades (learn more about their work here).

Great to meet you all, and keep up the good work!