The 2019 BITSS Annual Meeting: A barometer for the evolving open science movement

By Aleksandar Bogdanoski and Katie Hoeberling

Each year we look forward to our Annual Meeting as a space for showcasing new meta-research and discussing progress in the movement for research transparency. These meetings offer snapshots of the evolving priorities and challenges faced by the scientific community working to change research norms. BITSS’s newly reframed mission to generate evidence, provide access to educational resources, and support a growing open science ecosystem also reflects how these changes have impacted our growth as an initiative.

The growing importance of access and inclusion

While we’ve certainly paid attention to work around open access and open peer review, we’ve historically focused our efforts on research conduct, rather than on publishing. It’s become abundantly clear in recent years, however, that access is a critical component in the production and evaluation of social science. Acknowledging this, we’ve forged fruitful partnerships with stakeholders on the “other end” of the scholarly communication cycle, including the Journal of Development Economics and the Open Science Framework Preprints platform.

This realization was also behind our invitation to Elizabeth Marincola to keynote this year’s Annual Meeting. Marincola, formerly CEO of the Public Library of Science, is a Senior Advisor for the African Academy of Sciences (AAS) and supports the development of AAS Open Research, a platform for rapid, openly reviewed, and open access publication. Challenges around access to research, opportunities to publish, and knowledge of what makes research publishable are especially acute in the Global South, where research institutions are commonly under-resourced. AAS Open Research aims to sidestep the access- and transparency-related challenges conventional journals have faced and perpetuated. We’re excited to hear about the platform’s success, as well as the lessons it holds for publishing in the Global North!

Elizabeth Marincola discussed the AAS Open Research platform as the keynote speaker.

Wide access to training resources is similarly critical for normalizing open research practices. Open Science (with a capital O and S) has only recently entered the social science curriculum, and instructors who teach it often develop course materials in relative isolation, rarely having received formal training in open practices or pedagogy. The vast majority of students must also look outside of their core curriculum (or even their universities!) to find training. Making open science curricula more accessible can help reduce the burden on both teachers and students, many of whom are early in their careers, to master practices that are increasingly required of them. Speakers on the meeting’s final panel discussed the challenges they’ve faced in trying to institutionalize open science curricula and support their students in applying open principles outside of the classroom, as well as approaches and resources they’ve found helpful. Instructors and students looking to teach or learn transparent research practices can start at our Resource Library or this growing list of course syllabi on the OSF.

The evolving meta-research landscape 

Having organized eight of these annual events, we’ve begun to notice a few other patterns. One of the most exciting developments is that our meetings have shifted focus from diagnosing problems in research to testing interventions and assessing adoption and wider change. There is no longer a need, at least in this community, to debate whether publication bias exists or whether perverse incentives lead researchers to use questionable research practices. How to measure and correct for these problems, however, remains an open question. Questions like these were the focus of the first block of research presentations, which proposed sensitivity analyses for publication bias in meta-analysis, revised significance thresholds that account for researcher behavior, and a framework for translating open practices to observational research.

Relatedly, the use of pre-registration and pre-analysis plans (PAPs) is becoming more normative than cutting edge. Pre-registration and PAPs have been at the core of our work since day one. In fact, our first Annual Meeting in 2012 convened an interdisciplinary group of researchers to discuss the potential and limitations of pre-registration and PAPs, a relatively new concept at the time. BITSS was launched to coordinate related efforts (shout out to David Laitin for suggesting our name!). BITSS and the transparency movement have come a long way since then. The work presented in the meeting’s second block is a clear indicator of this evolution, assessing the effectiveness of these tools and adapting them to other disciplines.

Finally, it’s become clear that the reach and efficacy of many open science tools can benefit from, and often require, the support of diverse stakeholders, as well as rigorous evaluation components integrated into interventions from the beginning. The final block of presentations explored the application of open science principles in novel contexts, including Institutional Review Boards and qualitative research with sensitive data, and offered a general framework for designing and evaluating open science interventions.

We left this year’s Annual Meeting feeling equal parts energized and overwhelmed by the excitement and dedication of the speakers and participants. Like many of you, we hope that one day the phrase open science will become synonymous with good science—or just science. Until then, as the Annual Meeting reminded us, there is plenty of work to be done. If you missed the meeting, or want to revisit any of the sessions, you can find slides on this OSF page, watch videos on our YouTube channel, and find open access versions of the papers in the event agenda. The summaries of each session can be found below. Onward!

From left to right: Pascal Michaillat, Maya Mathur, and Cecilia Mo.

Presentation Block 1: Correcting Publication Bias and Specification Search

  • Maya Mathur (Stanford) presented a new method, developed jointly with Tyler J. VanderWeele (Harvard), for sensitivity analyses for publication bias in meta-analysis. Researchers can use it to estimate how severe publication bias would have to be to overturn the statistical significance of a meta-analytic estimate. They provide benchmarks describing plausible values of this estimate across disciplines and share an R package, PublicationBias, for applying the method.
  • Pascal Michaillat (Brown University) proposed “incentive-compatible” critical values, based on joint work with Adam McCloskey (University of Colorado-Boulder). Recognizing that publication bias gives researchers strong incentives to run multiple experiments until they find results that are statistically significant, they propose dynamic critical values that account for running n experiments and present readers with a more transparent and reliable measure of statistical significance (a toy illustration of the underlying problem follows this list).
  • Cecilia Mo discussed the application of open science tools and practices in observational research through a series of replications conducted with Matthew Graham and Gregory Huber (Yale), as well as Neil Malhotra (Stanford). They highlight that, in order to make replications publishable, replicators often face incentives to “null hack,” or to systematically search for specifications or cases that produce a null result. To guard against this, they propose an observational open science “toolkit” that recommends extending original time series, collecting data independently (rather than using replication datasets provided by the original authors), pre-registering analyses, running multiple simultaneous replications, and building teams with mixed incentives.
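The premise behind Michaillat and McCloskey’s proposal is that a researcher who keeps running experiments until one clears the conventional significance threshold will report far too many false positives, and a quick simulation makes this easy to see. The sketch below is our own illustration, not their method; the null effect, the sample sizes, and the five-attempt limit are all assumptions chosen for the example.

```python
# Toy simulation (not the authors' method): how running new experiments
# until one is "significant" inflates the false-positive rate.
# Assumes a true effect of zero and a conventional two-sided 5% test.
import numpy as np

rng = np.random.default_rng(0)
n_researchers = 10_000   # hypothetical researchers, each studying a null effect
max_experiments = 5      # each may run up to 5 experiments, stopping at p < .05
n_per_arm = 100          # assumed sample size per experiment arm
z_crit = 1.96            # conventional two-sided critical value

false_positives = 0
for _ in range(n_researchers):
    for _ in range(max_experiments):
        # Simulate a two-arm experiment with no true difference.
        treat = rng.normal(0, 1, n_per_arm)
        control = rng.normal(0, 1, n_per_arm)
        se = np.sqrt(treat.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm)
        z = (treat.mean() - control.mean()) / se
        if abs(z) > z_crit:          # "significant" result: stop and report it
            false_positives += 1
            break

# A single experiment would yield ~5%; selective stopping pushes this much higher.
print(f"False-positive rate: {false_positives / n_researchers:.1%}")
```

With up to five independent attempts at the 5% level, the chance of at least one false positive is 1 - 0.95^5, or roughly 23%, which is what the simulation reports; larger critical values for later attempts are one way to offset exactly this inflation.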
Dan Posner presents novel work on the use of pre-analysis plans in economics and political science.

Presentation Block 2: Assessing the Effectiveness of Registries and Pre-Analysis Plans

  • Dan Posner (UCLA) presented joint work with George Ofosu (LSE) taking stock of the effectiveness of PAPs. Analyzing a representative sample of PAPs from studies registered on the AEA and EGAP registries, as well as a subset of registrations that resulted in publicly available papers, they found that 83% of PAPs sufficiently limit researcher degrees of freedom and satisfy three of their four criteria for completeness. They also found significant room for improvement: the number of pre-specified control variables was unclear in 44% of the PAPs, and deviations were explicitly mentioned in only one of 19 papers that deviated from their PAPs. Given that this sample was collected between 2012 and 2016, PAP standards in political science and economics may have since changed. Learn more in their preprint.
  • Jessaca Spybrook (Western Michigan University) introduced the new Registry of Efficacy and Effectiveness Studies (REES) for research in education and related social sciences. Its disciplinary focus allows REES to use standardized domain-specific language for common concepts in the field and to rely on categorical responses for certain parts of registration entries. REES’s developers hope that this will facilitate meta-analyses and systematic reviews of impact evaluations of education interventions.
  • Eirik Strømland (University of Bergen) presented joint work with Amanda Kvarven (University of Bergen) and Magnus Johannesson (Stockholm School of Economics), which compared the results of meta-analyses to large-scale pre-registered replications in psychology carried out across multiple labs. They found that the meta-analytic effect sizes differed significantly from the replication effect sizes for 12 of the 17 meta-analysis–replication pairs. They argue that such differences are systematic: on average, meta-analytic effect sizes are about three times as large as the replication effect sizes (a toy illustration of one mechanism that can produce such gaps follows this list). Learn more in their preprint.
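One mechanism that can produce gaps like the one Strømland and coauthors document is selective publication of small, significant studies. The sketch below is purely illustrative and is not their analysis; the true effect size, the sample sizes, and the publish-only-if-significant rule are assumptions chosen for the example. It pools only the “significant” small studies and compares that pooled estimate with a single large pre-registered replication.

```python
# Toy simulation (not the authors' analysis): why a meta-analysis of selectively
# published small studies can overstate an effect relative to one large,
# pre-registered replication. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.15            # assumed true standardized effect
n_small, n_large = 30, 2000   # per-group sizes: small original studies vs. replication
n_studies = 200               # small studies attempted

# Simulate small studies; "publish" only those that reach p < .05.
se_small = np.sqrt(2 / n_small)
estimates = rng.normal(true_effect, se_small, n_studies)
published = estimates[np.abs(estimates / se_small) > 1.96]

# Naive fixed-effect meta-analysis of the published studies
# (equal standard errors, so it reduces to a simple mean).
meta_estimate = published.mean()

# One large pre-registered replication is approximately unbiased.
se_large = np.sqrt(2 / n_large)
replication_estimate = rng.normal(true_effect, se_large)

print(f"True effect:               {true_effect:.2f}")
print(f"Meta-analysis (published): {meta_estimate:.2f}")
print(f"Large replication:         {replication_estimate:.2f}")
```

Because only estimates large enough to clear the significance threshold are pooled, the naive meta-analytic average lands well above the assumed true effect, while the large replication does not.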

Presentation Block 3: Developing and Assessing Open Science Policies and Interventions

  • Sean Grant (Indiana University) presented joint work with Kathryn Bouskill (RAND Corporation) exploring the role of IRBs in enabling open science practices. Their surveys and interviews with IRB professionals found that they were generally supportive of open science practices, in terms of making publicly funded research publicly available, and that they provide some guidance to facilitate such practices. On the other hand, the majority of IRB professionals disagreed with the notion that all research involving human subjects should be registered in a public database, and lacked expertise in areas such as registering clinical trials, data sharing, and data privacy. They call for improved guidance and training to strengthen IRB capacity to enable open science practices while ensuring compliance with regulations for the protection of human subjects. Learn more in a related opinion piece by the authors.
  • Lori Frohwirth (Guttmacher Institute) presented joint work with colleagues Jennifer Mueller, Alicia VandeVusse, and Sebastian Karcher (Qualitative Data Repository). Reproducibility in qualitative research faces additional challenges due to high variability in how data are recorded (inter-coder reliability over 90% is rare, for example) and interpreted. However, Frohwirth and colleagues argue that it is possible to make qualitative research more transparent, particularly through data sharing (where appropriate) and transparency about how data are processed and analyzed. Learn more in their extended abstract.
  • Micah Altman (MIT Libraries) presented joint work with Phillip Cohen (University of Maryland) and Jessica Polka (ASAPbio), providing a conceptual framework for designing and evaluating open science interventions. Reflecting on recent discussions of interventions to reduce gender bias in the evaluation of economics research (Hospido and Sanz 2019), as well as successful open science interventions such as bioRxiv, the authors point out challenges such as partial or noisy outcome measures, non-generalizable samples, and uncertain mechanisms for change. To address these, they argue that open science interventions and projects should be developed with built-in evaluation components that are made explicit and are open by default.

Panel: Teaching Open Science

A panel on Teaching Open Science. From left to right: Fernando Hoces de la Guardia, Fernando Perez, Garret Christensen, Don Moore, and Simine Vazire.

This panel discussed experiences in open science education and the challenges of teaching open science with limited institutional support, demonstrating the many methods and venues through which open research principles can be applied.
