Bias Minimization Lessons from Medicine – How We Are Leaving a $100 Bill on the Ground

By Alex Eble (Brown University), Peter Boone (Effective Intervention), and Diana Elbourne (London School of Hygiene and Tropical Medicine)

The randomized controlled trial (RCT) now has pride of place in much applied work in economics and other social sciences. Economists increasingly use the RCT as a primary method of investigation, and funders such as the World Bank and the Gates Foundation rely on RCT evidence to determine which projects to fund.

Motivating this use of the method is the notion that RCT results give us the “gold standard” of evidence on causal relationships, something that studies using observational data fall short of. Unfortunately, this is true only if certain conditions are satisfied in the design, conduct, and reporting of the trial, and we find that in a large proportion of RCTs run by economists, they aren’t.

The RCT has been used extensively in medicine for the past 70 years, with hundreds of thousands of trials registered at clinicaltrials.gov (the US national trial registry) alone. In the last two decades, a group of scientists has investigated how flaws in trial design, conduct, and reporting affect the treatment effect estimates that trials generate. They have found that a set of mistakes, which past trials have often stumbled into, leads directly to biased results. How biased? Estimated treatment effects in trials with these mistakes are 10-30% larger than in trials without them, and in several cases these mistakes have led to spurious conclusions of treatment efficacy.

The most prominent results of this work are the CONSORT Statement, a set of guidelines for trialists to follow in reporting trials, and the Cochrane Handbook, a tool that helps scholars performing meta-analyses evaluate the quality of evidence a study provides. These documents describe the mistakes made in past trials, help trialists design studies that avoid them, and give readers guidelines for identifying the mistakes in reports of RCTs.

These mistakes overlap with much of what has been discussed earlier on this blog – failure to register trials leads to reporting bias, and failure to register pre-analysis plans results in the same trial generating suspiciously large numbers of positive findings which may or may not be statistical artefacts – as well as many things which haven’t been discussed. The issues can be grouped into the following six categories:

  • Selection bias: does the trial achieve balance between treatment groups in key prognostic factors (both known and unknown) at randomization, or was the randomization corrupted/deterministic?
  • Performance bias: do the intervention providers in the trial know which treatment participants are getting, and if so, can this knowledge bias the care they give? Similarly, do participants know which treatment they are getting, and if so, can this knowledge affect their behavior or bias the data they provide in favor of or against the treatment?
  • Detection bias: do the data collectors have reason to skew the data collected in favor of/against the treatment?
  • Attrition bias: is the balance between treatment arms achieved at randomization maintained throughout the trial?
  • Reporting bias: are we seeing the full results of the trial, or just a subset of the analyses performed which have been cherry-picked because they are statistically significant?
  • Sample size bias: is the study adequately powered to conclusively answer the question it poses? (A minimal power calculation is sketched just after this list.)
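
To make the last of these concrete: the standard sample-size calculation for a two-arm trial with a continuous outcome takes only a few lines of code. The sketch below is illustrative only (the Python function n_per_arm and the example effect sizes are not taken from the working paper); it implements the usual normal-approximation formula, n per arm = 2(z_{1-alpha/2} + z_{power})^2 / d^2, where d is the smallest effect worth detecting, in standard-deviation units.

    # Illustrative power calculation for a two-arm trial with a continuous
    # outcome, using the normal-approximation sample-size formula:
    #   n per arm = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    import math
    from scipy.stats import norm

    def n_per_arm(effect_size_sd, alpha=0.05, power=0.80):
        z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
        z_power = norm.ppf(power)          # quantile matching the desired power
        return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size_sd ** 2)

    print(n_per_arm(0.2))  # 393 per arm to detect a 0.2 SD effect
    print(n_per_arm(0.1))  # 1570 per arm: halving the effect roughly quadruples n

A study enrolling far fewer participants than such a calculation implies cannot conclusively answer its own question, whatever its point estimates show.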

In our recent working paper, Risk and Evidence of Bias in Randomized Controlled Trials in Economics, we use this medical literature on bias arising from flaws in trial design, conduct, and reporting to evaluate the risk of these six biases in RCTs published in the economics literature between 2001 and 2011. We find that economists, despite professing to follow medicine in their use of the RCT, seem not to have taken advantage of this large literature on how to avoid bias. The papers we collected and evaluated often stumbled into the exact same potholes that plagued medical trialists in the past, and we find that, as a whole, trials in economics are at much greater risk of exaggerated treatment effects and biased conclusions than is necessary.

To avoid these mistakes and get the most precise, unbiased results possible from these studies, we propose that economists create and enforce a set of standards for trial design, conduct, and reporting that draws heavily on the resources in CONSORT and the Cochrane Handbook but is adapted to the specific circumstances of trials in economics. For these standards to have bite, adherence must be required for publication in journals; similar requirements have led to a large increase in the quality of reporting of trials in the medical literature. The standards could also be used to weight results from previous studies by the quality of evidence they provide.

Much of the work has been done for us – the lessons in the CONSORT Statement and Cochrane Handbook are the result of decades of experimentation, rigorous scrutiny, and broad consensus on how to avoid bias in trials. To the extent that we aren’t using these lessons to improve our own trials, we are leaving a $100 bill on the ground…and economists hate that.

About the authors:

Alex Eble is a PhD student in economics at Brown University. Before his PhD studies he worked as a Research Manager for Effective Intervention, a UK-based charity based at the Centre for Economic Performance at the London School of Economics, where he designed and managed two RCTs and a retrospective study evaluating primary health care and education interventions in rural India. His current work attempts to better understand how parents make decisions about their children’s education and health.

Peter Boone is the Chair of Effective Intervention, a UK-based charity created in 2005 that designs and implements programs to improve children’s health and education in Africa and India. Effective Intervention works closely with medical statisticians and other experts to rigorously measure the outcomes of all its aid projects. From 1997 to 2003 he was a Managing Partner and Research Director at Brunswick-UBS, an investment bank in Moscow, and from 1993 to 1997 he was a lecturer at the London School of Economics and Director of the Emerging Markets Finance Programme at the CEP. He completed a PhD in economics at Harvard University in 1990.

Diana Elbourne is Professor of Healthcare Evaluation in the Medical Statistics Department at the London School of Hygiene and Tropical Medicine. She has a background in social sciences and statistics, and has been conducting both methodological and applied research on randomized controlled trials (RCTs) for over 30 years. She has been a key member of the Consolidated Standards of Reporting Trials (CONSORT) group (http://www.consort-statement.org) since 1999, and is a co-author of both the 2001 and 2010 revisions of the CONSORT Statement and a lead author of its extensions to two RCT designs (cluster trials, and non-inferiority and equivalence trials, published in 2012).