We had a lovely, foggy morning start to our Research Transparency and Reproducibility Training (RT2) here in Berkeley yesterday, and we have already had the opportunity to enjoy some great conversations with participants and faculty alike.
One early suggestion brought to us by Arnaud Vaganay was the need to first frame what we want to achieve with RT2 in a very practical and human sense. What is it we want to achieve with these three days of training – and through BITSS activities generally? Is it really about correcting “scientific misconduct”?
As actors working in social science research – whether as researchers, publishers, funders, educators, or other supporting staff – we inevitably deal with the fog of the day-to-day. Put simply, it is hard to attend comprehensively to everything a project demands when working in complex environments. As Sean Grant explained yesterday, research studies present similar challenges: they operate in a complex system with conflicting priorities and incentives.
We may be trained in methods – and just looking around the room at RT2, there are many disciplines, sub-disciplines, and therefore methods across the social science spectrum. But we are rarely equally trained in PRACTICE… until we roll up our sleeves and start working, learning by doing (and failing). That’s when the messy, devil-in-the-details work of taking our methods training from paper to practice happens.
The practice of research requires many, many decisions that we may not always be fully aware of – decisions about methods, outcome measurement, survey design, assumptions behind the minimum detectable effect for power calculations, sample design, and exposure to treatment – and then, of course, decisions related to analysis, such as which covariates to include. There are also decisions made elsewhere in the research system: which research to fund or not, which to publish or not, and so on. So. Many. Decisions.
We use RT2 to focus primarily on research degrees of freedom, which was presented yesterday by Don Moore. We want to acknowledge these vast degrees of freedom and that the decisions we make are based on our own biases, training, and experiences. As one of the research transparency pioneers and great influencers for BITSS, Dr. Robert Rosenthal, found nearly 50 years ago now, researcher bias can have real consequences on the results of our research.
So, yes, we do talk about scientific misconduct, but this isn’t only about fraud and nefarious researcher behavior, though of course that exists. It’s about bringing a little clarity (and transparency!) to that messy, gray period where countless decisions are made, from the design of a study through the publication of its results. We believe we can do this through disciplined, vigilant reflection on and documentation of the decisions we make and why we make them, as actors across the research system. As Courtney Soderberg noted in her discussion of replication, this improved documentation serves not just others, but your future self as well! The tools and practices presented at RT2 – including the Open Science Framework, Pre-Analysis Plans, Pre-Registration, GitHub, Dynamic Documents, and Disclosure Guidelines – are intended to support just that… and to help clear the fog.