Our 2017 ASSA Session

Garret Christensen – BITSS Project Scientist


BITSS organized a session on meta-analysis and reproducibility in economics for the recent Allied Social Science Associations annual meeting (ASSA, better known as the AEA meetings, the largest annual economics conference and job market for PhD economists) in Chicago. Chicago, like much of the country, was in the midst of a cold snap, but at least it wasn’t snowing or windy, so the weather wasn’t too big a deal, though one of our presenters’ flights was cancelled and he was unfortunately unable to make it. We’re very happy that our session was one of at least three with a focus on replication and/or reproducibility.

[Photo caption: Add ice around the lakefront, make it 5 degrees outside]

Our session featured the work of three of our SSMART grant winners: Sean Tanner, Eva Vivalt, and Rachael Meager.

In light of evidence that interventions often show attenuated impacts when scaled up relative to the initial study, Sean uses the Department of Education’s What Works Clearinghouse (WWC) to test how representative the samples in education experiments are. His answer: schools in these studies tend to have much higher minority populations and lower socio-economic status than the national average, which may give us pause about external validity if we’re thinking about scaling the interventions up nationally. A rough sketch of this kind of representativeness check appears below.
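As a hedged illustration (not Sean’s actual code, and with made-up numbers), here is roughly how one might compare study-sample demographics to national benchmarks; the variable names and values are hypothetical:

```python
import numpy as np

# Hypothetical per-study school demographics (shares), NOT real WWC data.
study_share_minority = np.array([0.68, 0.74, 0.61, 0.70])
study_share_low_ses = np.array([0.70, 0.66, 0.73, 0.64])

# Assumed national benchmarks for the same variables (illustrative only).
national = {"share_minority": 0.47, "share_low_ses": 0.52}

for name, values in [("share_minority", study_share_minority),
                     ("share_low_ses", study_share_low_ses)]:
    gap = values.mean() - national[name]  # positive gap = over-represented in studies
    print(f"{name}: study mean {values.mean():.2f} vs national {national[name]:.2f} "
          f"(gap {gap:+.2f})")
```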

Eva’s work, joint with Aidan Coville, gathered priors on the likely effect and the minimum meaningful effect from World Bank researchers, then used Bayesian techniques to back out the false positive reporting rate and false negative reporting rate. At least in the context of conditional cash transfer (CCT) experiments, the false positive rate appears to be quite low.
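Their exact procedure is more involved, but a minimal sketch of how priors, significance thresholds, and power pin down a false positive reporting rate (the standard FPRP formula, with hypothetical inputs rather than the paper’s estimates) looks like this:

```python
def false_positive_reporting_rate(prior, alpha=0.05, power=0.8):
    """Share of 'significant' findings that are false positives, given the
    prior probability that a tested effect is real. Standard FPRP formula;
    the inputs below are hypothetical, not the paper's estimates."""
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects crossing the threshold
    return false_positives / (false_positives + true_positives)

# If only 1 in 10 tested CCT effects were real, at alpha = 0.05 and 80% power:
print(false_positive_reporting_rate(prior=0.10))  # ~0.36
```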

Rachael developed a new Bayesian hierarchical method for meta-analysis of distributional treatment effects. Often we’re interested only in average treatment effects, but sometimes, as appears to be the case in some studies of micro-finance, interventions benefit mainly those at the upper tail of the income distribution, so we want the treatment effect at particular points of the distribution. Aggregating these distributional effects over seven micro-finance RCTs, Rachael finds no effect below the 75th percentile and a lot of noise beyond that.
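Her model is fully Bayesian; as a simpler stand-in for it, here is classical random-effects pooling (DerSimonian–Laird) applied to quantile treatment effects at a single quantile, with made-up estimates and standard errors for seven hypothetical RCTs:

```python
import numpy as np

def random_effects_pool(estimates, ses):
    """DerSimonian-Laird random-effects pooling of study estimates.
    A classical stand-in for Meager's full Bayesian hierarchical model."""
    w = 1.0 / ses**2                                  # fixed-effect weights
    mu_fe = np.sum(w * estimates) / np.sum(w)
    q = np.sum(w * (estimates - mu_fe) ** 2)          # heterogeneity statistic
    k = len(estimates)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (ses**2 + tau2)                      # random-effects weights
    mu = np.sum(w_re * estimates) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se

# Hypothetical quantile treatment effects at the 75th percentile, 7 RCTs.
qte_75 = np.array([0.02, -0.01, 0.05, 0.00, 0.03, -0.02, 0.01])
se_75 = np.array([0.02, 0.03, 0.04, 0.02, 0.03, 0.02, 0.05])
mu, se = random_effects_pool(qte_75, se_75)
print(f"pooled QTE at p75: {mu:.3f} (SE {se:.3f})")
```

Running this separately at each quantile traces out the pooled distributional effect, which is the aggregation Rachael’s hierarchical model performs jointly.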

Tom Stanley, with John Ioannidis and Hristos Doucouliagos, collected 159 meta-analyses to assess the statistical power of a large body of economics research. The finding? In their own words:

Taking a ‘conservative’ approach (that is, one prone to over-estimate power), the median of these 159 median powers is no more than 18% and likely closer to 10%. Furthermore, 90% of reported findings are under-powered (relative to the widely-accepted 80% convention) in half of these areas of research, and 20% are comprised entirely of underpowered studies.

Their paper is forthcoming in the Economic Journal.
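For intuition about where such numbers come from, here is a minimal sketch of the underlying power calculation: treat the meta-analytic mean as the true effect and compute each study’s power from its standard error. All numbers below are hypothetical, and this is not the authors’ code.

```python
from scipy.stats import norm

def power(true_effect, se, alpha=0.05):
    """Power of a two-sided z-test to detect `true_effect` given standard error `se`."""
    z = abs(true_effect) / se
    crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(z - crit) + norm.cdf(-z - crit)

# Hypothetical literature: meta-analytic mean taken as the 'true' effect,
# with made-up standard errors for five studies.
true_effect = 0.1
study_ses = [0.08, 0.12, 0.05, 0.20, 0.15]
powers = sorted(power(true_effect, se) for se in study_ses)
median_power = powers[len(powers) // 2]
print(f"median power: {median_power:.0%}")
```

Repeating this within each of the 159 meta-analyses and taking the median of the resulting median powers is the spirit of the calculation behind the quoted 10–18% figure.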

Links:

Lastly, if you’re interested in presenting in a similar BITSS-organized economics and reproducibility session at the Western Economic Association International (WEAI) conference in San Diego, CA in June, e-mail me and let me know this week!

 
