Links: Blind Analysis and Pre-Analysis Plans, Replication Failure

Garret Christensen, BITSS Project Scientist

There’s an interesting new proposal, called blind analysis, for dealing with the problems of bias, p-hacking, and reproducibility failures. It comes from Saul Perlmutter, the Nobel laureate UC Berkeley physicist and director of the Berkeley Institute for Data Science (where I’m a fellow). Perlmutter and Robert MacCoun of Stanford have an article in Nature explaining how the method, which apparently isn’t all that new in particle physics, could be used in the social sciences.

The general idea is described as follows:

“Blind analysis ensures that all analytical decisions have been completed, and all programmes and procedures debugged, before relevant results are revealed to the experimenter. One investigator — or, more typically, a suitable computer program — methodically perturbs data values, data labels or both, often with several alternative versions of perturbation. The rest of the team then conducts as much analysis as possible ‘in the dark’. Before unblinding, investigators should agree that they are sufficiently confident of their analysis to publish whatever the result turns out to be, without further rounds of debugging or rethinking. (There is no barrier to conducting extra analyses once data are unblinded, but doing so risks bias, so researchers should label such further analyses as ‘post-blind’.)”
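
To make the mechanics concrete, here’s a minimal sketch of what such a blinding step might look like in Python. This is my illustration, not code from the paper, and every specific (the column names, the shift scale, the toy data) is hypothetical; the point is just that outcome values get perturbed and treatment labels get scrambled under a sealed key, so the whole pipeline can be written and debugged without anyone seeing the real result.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng()  # the seed is deliberately never recorded


def blind(df, outcome="y", treatment="treat", shift_sd=0.5):
    """Return a blinded copy of df plus the sealed key needed to unblind it.

    Two perturbations in the spirit of the quoted description:
    1. add a hidden arm-specific shift to the outcome, so the apparent
       treatment effect is off by an unknown amount;
    2. relabel the treatment arms under a secret permutation.
    """
    blinded = df.copy()
    arms = df[treatment].unique()

    # 1. Perturb data values: each arm's outcomes get their own hidden shift.
    shifts = {arm: rng.normal(0.0, shift_sd) for arm in arms}
    blinded[outcome] = df[outcome] + df[treatment].map(shifts)

    # 2. Perturb data labels: map the real arms to a random permutation.
    label_key = dict(zip(arms, rng.permutation(arms)))
    blinded[treatment] = df[treatment].map(label_key)

    # The key stays sealed (e.g. with one designated team member) until
    # everyone agrees the analysis is final and publishable as-is.
    return blinded, {"shifts": shifts, "label_key": label_key}


# Toy data: the team then develops its entire analysis against `blinded_df`.
df = pd.DataFrame({"y": rng.normal(size=100),
                   "treat": np.repeat(["control", "treated"], 50)})
blinded_df, sealed_key = blind(df)
```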

My first reaction is to wonder how this compares to pre-analysis plans. In his article on pre-analysis plans in the Journal of Economic Perspectives, Ben Olken mentions the idea of doing some analysis with masked data, hiding the treatment status of observations, which seems right in line with the “cell scrambling” of blind analysis. Perlmutter and MacCoun praise pre-analysis plans and other proposed methods, but prefer blind analysis, saying:

“[P]reregistration requires that data-crunching plans are determined before analysis, and offers some of the same benefits as blind analysis. But it also limits the scope of analysis. Because many analytical decisions (and computer programming bugs) cannot be anticipated, investigators will be forced to make some decisions knowing (consciously or unconsciously) how their choices affect the results. Blind analysis enables the investigator to engage in analysis, exploration and finalization without worrying about such bias.”
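
Olken-style masking is even simpler to sketch: develop the entire analysis against a placebo treatment column, and substitute the real assignment only once the specification is frozen. Again, this is a hypothetical illustration with invented column names, not anything from Olken’s article:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy data with a real treatment assignment we want to hide from ourselves.
df = pd.DataFrame({"y": rng.normal(size=100),
                   "treat": rng.integers(0, 2, size=100)})

# Masked copy: treatment status randomly reshuffled across observations, so
# any "effect" found while writing and debugging the code is spurious by
# construction, and tinkering with the code can't tilt the real estimate.
masked = df.assign(treat=rng.permutation(df["treat"].to_numpy()))

# ... develop and debug the full estimation pipeline on `masked` ...
# Only after the specification is final is it run once on `df`.
```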

Blind analysis sounds like a good idea to me. I’d be curious whether anyone knows of examples of its application in the social sciences. The authors allude to the practice in medicine, where it is sometimes called “triple-blinding,” but as far as I could tell they don’t cite examples.

In other, all-too-common news, a flashy paper covered in the media has failed to replicate, and the underlying data isn’t being made available. The replicator drew the following, perhaps sad, conclusion about replication:

“It’s a huge amount of trouble. I don’t know if I’ll ever do it again, because it’s exhausting to do, but I think it’s really important. I think what I come away thinking is the journal absolutely needs to figure out that everything they publish has to have materials that are shareable,” and if it is not, they should not publish the work, Winner said.