Resource Library

The BITSS Resource Library contains resources for learning, teaching, and practicing research transparency and reproducibility, including curricula, slide decks, books, guidelines, templates, software, and other tools. All resources are categorized by i) topic, ii) type, and iii) discipline. Filter results by applying criteria along these parameters or use the search bar to find what you’re looking for.

Know of a great resource that we haven’t included or have questions about the existing resources? Email us!


Videos: Research Transparency and Reproducibility Training (RT2) – Washington, D.C. | Data Management and De-identification

BITSS hosted a Research Transparency and Reproducibility Training (RT2) in Washington, D.C., September 11–13, 2019, the eighth training event of this kind organized by BITSS since 2014. RT2 provides participants with an overview of tools and best practices for transparent and reproducible social science research. Click here to view videos of presentations given during the training, and find slide decks and other useful materials on this OSF project page.

Course Syllabi for Open and Reproducible Methods | Anthropology, Archaeology, and Ethnography

A collection of course syllabi from across disciplines featuring content that examines or teaches open and reproducible research practices. Housed on the OSF.

rOpenSci Packages | Data Management and De-identification

These packages are carefully vetted, staff- and community-contributed R software tools that lower barriers to working with scientific data sources and data that support research applications on the web.

Impact Evaluation in Practice | Data Management and De-identification

The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policymakers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluation and the best ways to use impact evaluations to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation.


Improving Your Statistical Inference | Dynamic Documents and Coding Practices

This course aims to help you to draw better statistical inferences from empirical research. Students discuss how to correctly interpret p-values, effect sizes, confidence intervals, Bayes Factors, and likelihood ratios, and how these statistics answer different questions you might be interested in. Then, they learn how to design experiments where the false positive rate is controlled, and how to decide upon the sample size for a study, for example in order to achieve high statistical power. Subsequently, students learn how to interpret evidence in the scientific literature given widespread publication bias, for example by learning about p-curve analysis. Finally, the course discusses how to do philosophy of science, theory construction, and cumulative science, including how to perform replication studies, why and how to pre-register an experiment, and how to share results following Open Science principles.

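Several of the course's topics (power, sample size, and false positive control) come down to one standard calculation. As a minimal sketch, assuming the textbook normal-approximation formula for a two-sample comparison of means (an illustration, not course material):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sample test of means with standardized
    effect size d, via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = z.inv_cdf(power)          # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(sample_size_per_group(0.5))  # 63 per group for a medium effect
```

Smaller effects require dramatically more data: halving d roughly quadruples the required sample size.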

Nicebread | Data Management and De-identification

Dr. Felix Schönbrodt’s blog promoting research transparency and open science.

DeclareDesign | Dynamic Documents and Coding Practices

DeclareDesign is statistical software to aid researchers in characterizing and diagnosing research designs, including experiments, quasi-experiments, and observational studies. DeclareDesign consists of a core package, as well as three companion packages that stand on their own but can also be used to complement the core package:

  • randomizr: easy-to-use tools for common forms of random assignment and sampling

  • fabricatr: tools for fabricating data to enable frontloading analysis decisions in social science research

  • estimatr: fast estimators for social science research

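DeclareDesign itself is R software, but the design-diagnosis idea it implements, simulating a declared design many times and summarizing properties of the estimator such as bias and power, can be sketched in plain Python. This is an illustrative analogue under assumed normal outcomes, not the package's API:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def simulate_once(n=100, tau=0.3, seed=None):
    """One run of a two-arm experiment: complete random assignment,
    difference-in-means estimate, and a two-sided z-test p-value."""
    rng = random.Random(seed)
    assign = [1] * (n // 2) + [0] * (n - n // 2)
    rng.shuffle(assign)
    y = [rng.gauss(tau * a, 1.0) for a in assign]  # true effect = tau
    treated = [yi for yi, a in zip(y, assign) if a == 1]
    control = [yi for yi, a in zip(y, assign) if a == 0]
    est = mean(treated) - mean(control)
    se = sqrt(stdev(treated) ** 2 / len(treated)
              + stdev(control) ** 2 / len(control))
    p = 2 * (1 - NormalDist().cdf(abs(est / se)))
    return est, p

def diagnose(sims=1000, tau=0.3):
    """Monte Carlo 'diagnosis': bias and power of difference-in-means."""
    runs = [simulate_once(tau=tau, seed=s) for s in range(sims)]
    bias = mean(est for est, _ in runs) - tau
    power = mean(p < 0.05 for _, p in runs)
    return bias, power
```

Diagnosing before fielding a study reveals, for example, that 50 units per arm gives well under 50% power for an effect of 0.3 standard deviations.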

NeuroChambers | Issues with Transparency and Reproducibility

Chris Chambers is a psychologist and neuroscientist at the School of Psychology, Cardiff University. He created this blog after taking part in a debate about science journalism at the Royal Institution in March 2012. The aim of his blog is to give readers some insights from the trenches of science. He talks about a range of science-related issues and may even give up a trade secret or two.




The New Statistics (+OSF Learning Page) | Data Management and De-identification

This OSF project helps organize resources for teaching the “New Statistics” — an approach that emphasizes asking quantitative questions, focusing on effect sizes, using confidence intervals to express uncertainty about effect sizes, using modern data visualizations, seeking replication, and using meta-analysis as a matter of course.
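The core "New Statistics" move, reporting an effect size with a confidence interval rather than a bare p-value, can be sketched as follows. This uses a common normal-approximation standard error for Cohen's d and is an illustration, not material from the OSF page:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def cohens_d_ci(x, y, conf=0.95):
    """Cohen's d for two independent samples, with a
    normal-approximation confidence interval."""
    nx, ny = len(x), len(y)
    # pooled standard deviation
    sp = sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
              / (nx + ny - 2))
    d = (mean(x) - mean(y)) / sp
    # approximate large-sample standard error of d
    se = sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return d, (d - z * se, d + z * se)

d, (lo, hi) = cohens_d_ci([2.1, 2.5, 1.9, 2.3], [1.2, 1.4, 1.0, 1.6])
# with only four observations per group, the interval is very wide
```

The wide interval from tiny samples makes the uncertainty visible in a way a lone p-value does not, which is the approach's central argument.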


p-curve | Dynamic Documents and Coding Practices

P-curve is a tool for determining whether reported effects in the literature are true or merely reflect selective reporting. The p-curve is the distribution of statistically significant p-values (ps < .05) for a set of studies. Because only true effects are expected to generate right-skewed p-curves, containing more low (.01s) than high (.04s) significant p-values, only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file drawers of failed studies and analyses.

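The right-skew logic is easy to demonstrate by simulation. Under stated assumptions (two-group z-tests, a true standardized effect of 0.5, 50 subjects per group; an illustration, not the p-curve app itself):

```python
import random
from math import sqrt
from statistics import NormalDist

def significant_pvalues(d=0.5, n=50, sims=2000, seed=1):
    """Simulate two-group studies of a true effect d and keep the
    p-values that reach significance: a simulated p-curve."""
    rng = random.Random(seed)
    phi = NormalDist().cdf
    se = sqrt(2 / n)  # standard error of the mean difference
    ps = []
    for _ in range(sims):
        diff = rng.gauss(d, se)          # observed mean difference
        p = 2 * (1 - phi(abs(diff / se)))
        if p < 0.05:
            ps.append(p)
    return ps

ps = significant_pvalues()
low = sum(p < 0.025 for p in ps)
# a true effect yields far more significant p-values below .025 than above
```

Under the null (d = 0), the significant p-values would instead be uniform on (0, .05), with roughly half on each side of .025; that contrast is what p-curve exploits.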

Metalab | Data Visualization

MetaLab is a research tool for aggregating across studies in the language acquisition literature. Currently, MetaLab contains 887 effect sizes across meta-analyses in 13 domains of language acquisition, based on data from 252 papers covering 11,363 subjects. These studies can be used to obtain better estimates of effect sizes across different domains, methods, and ages. Using our power calculator, researchers can use these estimates to plan appropriate sample sizes for prospective studies. More generally, MetaLab can be used as a theoretical tool for exploring patterns in development across language acquisition domains.

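Aggregating effect sizes across studies, as MetaLab does, commonly uses inverse-variance weighting. A minimal fixed-effect sketch with made-up numbers (not MetaLab data):

```python
from math import sqrt

def fixed_effect_pool(effects, ses):
    """Inverse-variance-weighted (fixed-effect) pooled effect size
    and its standard error."""
    weights = [1 / se ** 2 for se in ses]  # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, sqrt(1 / sum(weights))

# three hypothetical studies: the small-SE study dominates the pool
effect, se = fixed_effect_pool([0.40, 0.25, 0.55], [0.10, 0.15, 0.20])
```

The pooled standard error is smaller than any single study's, which is what makes meta-analytic estimates useful inputs to prospective power calculations.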

pcpanel | Economics and Finance

This package performs power calculations for randomized experiments that use panel data. Unlike the existing programs “sampsi” and “power”, this package accommodates arbitrary serial correlation. The program “pc_simulate” performs simulation-based power calculations using a pre-existing dataset (stored in memory), and accommodates cross-sectional, multi-wave panel, difference-in-differences, and ANCOVA designs. The program “pc_dd_analytic” performs analytical power calculations for a difference-in-differences experimental design, applying the formula derived in Burlig, Preonas, and Woerman (2017) that is robust to serial correlation. Users may either input parameters to characterize the assumed variance-covariance structure of the outcome variable, or allow the subprogram “pc_dd_covar” to estimate the variance-covariance structure from pre-existing data.

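pcpanel is a Stata package, but the simulation-based approach it describes can be sketched in Python. This is an illustrative analogue, not pc_simulate: Monte Carlo power for a unit-randomized panel experiment whose errors follow an AR(1) process, analyzed conservatively on unit-level means:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def simulate_power(n_units=40, n_waves=5, tau=0.4, rho=0.7,
                   sims=500, alpha=0.05, seed=1):
    """Monte Carlo power for a unit-randomized panel experiment with
    AR(1) errors, testing the treatment effect on unit means."""
    rng = random.Random(seed)
    phi = NormalDist().cdf
    hits = 0
    for _ in range(sims):
        treat = [1] * (n_units // 2) + [0] * (n_units - n_units // 2)
        rng.shuffle(treat)
        unit_means = []
        for t_i in treat:
            e = rng.gauss(0, 1)  # stationary AR(1) start
            ys = []
            for _ in range(n_waves):
                e = rho * e + rng.gauss(0, sqrt(1 - rho ** 2))
                ys.append(tau * t_i + e)
            unit_means.append(mean(ys))
        m1 = [m for m, t_i in zip(unit_means, treat) if t_i == 1]
        m0 = [m for m, t_i in zip(unit_means, treat) if t_i == 0]
        se = sqrt(stdev(m1) ** 2 / len(m1) + stdev(m0) ** 2 / len(m0))
        z = (mean(m1) - mean(m0)) / se
        if 2 * (1 - phi(abs(z))) < alpha:
            hits += 1
    return hits / sims
```

With rho = 0.7, adding waves buys much less power than an i.i.d. formula would suggest, which is exactly why serial-correlation-robust calculations matter.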