Resource Library
The BITSS Resource Library contains resources for learning, teaching, and practicing research transparency and reproducibility, including curricula, slide decks, books, guidelines, templates, software, and other tools. All resources are categorized by i) topic, ii) type, and iii) discipline. Filter results by applying criteria along these parameters or use the search bar to find what you’re looking for.
Know of a great resource that we haven’t included or have questions about the existing resources? Email us!
Videos: Research Transparency and Reproducibility Training (RT2) – Washington, D.C.
Tags: Data Management, Interdisciplinary, Issues with transparency and reproducibility, Meta-Analyses, Power analysis, Pre-Analysis Plans, Preprints, Registries, Replications, Results-Blind Review & Registered Reports, Statistical Literacy, Transparent Reporting, Version Control
Course Syllabi for Open and Reproducible Methods
Tags: Anthropology, Archaeology, and Ethnography; Data Repositories; Data Visualization; Dynamic Documents and Coding Practices; Economics and Finance; Engineering and Computer Science; Health Sciences; Humanities; Interdisciplinary; Issues with transparency and reproducibility; Life Sciences; Linguistics; Meta-Analyses; Metascience (Methods and Archival Science); Open Publishing; Other Social Sciences; Political Science; Power analysis; Pre-Analysis Plans; Psychology; Public Policy; Registries; Replications; Sociology; Statistical Literacy; Statistics and Data Science; Transparent Reporting; Version Control
A collection of course syllabi from any discipline featuring content to examine or improve open and reproducible research practices. Housed on the OSF.
rOpenSci Packages
Tags: Data Management, Dynamic Documents and Coding Practices, Interdisciplinary, Meta-Analyses, Metascience (Methods and Archival Science), Power analysis, Replications, Statistics and Data Science, Version Control
These packages are carefully vetted, staff- and community-contributed R software tools that lower barriers to working with scientific data sources and data that support research applications on the web.
Impact Evaluation in Practice
Tags: Data Management, Health Sciences, Interdisciplinary, Power analysis, Public Policy
The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policymakers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluation and the best ways to use impact evaluations to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation.
Improving Your Statistical Inference
Tags: Dynamic Documents and Coding Practices, Issues with transparency and reproducibility, Power analysis, Psychology, Statistical Literacy
This course aims to help you draw better statistical inferences from empirical research. You will discuss how to correctly interpret p-values, effect sizes, confidence intervals, Bayes factors, and likelihood ratios, and how these statistics answer different questions you might be interested in. You will then learn how to design experiments in which the false positive rate is controlled, and how to decide on the sample size for a study, for example in order to achieve high statistical power. Subsequently, you will learn how to interpret evidence in the scientific literature given widespread publication bias, for example by learning about p-curve analysis. Finally, the course discusses philosophy of science, theory construction, and cumulative science, including how to perform replication studies, why and how to pre-register an experiment, and how to share results following Open Science principles.
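As a rough illustration of the sample-size step the course covers, a power calculation for a two-sample t-test might look like the short Python sketch below (using statsmodels; the effect size of d = 0.5 and the 90% power target are illustrative assumptions, not values taken from the course).

```python
# Sketch: a priori sample-size calculation for a two-sample t-test.
# The smallest effect size of interest (d = 0.5) and the 90% power
# target are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.90, alternative='two-sided')
print(f"Participants needed per group: {n_per_group:.1f}")  # roughly 86
```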
Nicebread
Tags: Data Management, Data Visualization, Dynamic Documents and Coding Practices, Interdisciplinary, Issues with transparency and reproducibility, Meta-Analyses, Open Publishing, Power analysis, Pre-Analysis Plans, Preprints, Psychology, Registries, Replications, Results-Blind Review & Registered Reports, Transparent Reporting, Version Control
Dr. Felix Schönbrodt’s blog promoting research transparency and open science.
DeclareDesign
Tags: Dynamic Documents and Coding Practices, Interdisciplinary, Political Science, Power analysis, Pre-Analysis Plans, Statistics and Data Science
DeclareDesign is statistical software to aid researchers in characterizing and diagnosing research designs — including experiments, quasi-experiments, and observational studies. DeclareDesign consists of a core package, as well as three companion packages that stand on their own but can also be used to complement the core package: randomizr: Easy-to-use tools for common forms of random assignment and sampling; fabricatr: Tools for fabricating data to enable frontloading analysis decisions in social science research; estimatr: Fast estimators for social science research.
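DeclareDesign itself is implemented in R; the simulation-based diagnosis it automates can be sketched in Python purely for illustration. Everything in the sketch below (sample size, effect size, noise level, number of simulations) is an assumption chosen only to show the idea of simulating a design and summarizing its power and bias.

```python
# Minimal Python sketch of the design-diagnosis idea: repeatedly simulate
# a two-arm experiment, re-estimate the effect each time, and summarize
# power and bias. (Not the DeclareDesign API; all numbers are assumptions.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, true_effect, sims = 100, 0.3, 2000
estimates, pvals = [], []

for _ in range(sims):
    treat = rng.permutation(np.repeat([0, 1], N // 2))  # complete random assignment
    y = true_effect * treat + rng.normal(size=N)        # simulated outcomes
    estimates.append(y[treat == 1].mean() - y[treat == 0].mean())
    pvals.append(stats.ttest_ind(y[treat == 1], y[treat == 0]).pvalue)

print("power:", np.mean(np.array(pvals) < 0.05))
print("bias :", np.mean(estimates) - true_effect)
```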
NeuroChambers
Tags: Issues with transparency and reproducibility, Open Publishing, Power analysis, Pre-Analysis Plans, Psychology, Replications, Results-Blind Review & Registered Reports, Transparent Reporting
Chris Chambers is a psychologist and neuroscientist at the School of Psychology, Cardiff University. He created this blog after taking part in a debate about science journalism at the Royal Institution in March 2012. The aim of his blog is to give readers some insights from the trenches of science. He talks about a range of science-related issues and may even give up a trade secret or two.
The New Statistics (+OSF Learning Page)
Tags: Data Management, Dynamic Documents and Coding Practices, Interdisciplinary, Meta-Analyses, Open Publishing, Power analysis, Pre-Analysis Plans, Psychology, Replications, Statistical Literacy, Statistics and Data Science, Transparent Reporting, Version Control
This OSF project helps organize resources for teaching the “New Statistics” — an approach that emphasizes asking quantitative questions, focusing on effect sizes, using confidence intervals to express uncertainty about effect sizes, using modern data visualizations, seeking replication, and using meta-analysis as a matter of course.
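As a minimal illustration of that emphasis, the Python sketch below reports a mean difference together with a 95% confidence interval rather than only a p-value. The two samples are simulated, and the group means, spreads, and sample sizes are assumptions made for the example.

```python
# Sketch of the "new statistics" emphasis: report the effect size with a
# confidence interval instead of only a p-value. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.4, scale=1.0, size=40)
group_b = rng.normal(loc=0.0, scale=1.0, size=40)

diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
df = len(group_a) + len(group_b) - 2           # simple equal-variance approximation
ci = stats.t.interval(0.95, df, loc=diff, scale=se)
print(f"Mean difference: {diff:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```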
p-curve
Tags: Dynamic Documents and Coding Practices, Issues with transparency and reproducibility, Metascience (Methods and Archival Science), Power analysis, Statistics and Data Science
P-curve is a tool for determining whether reported effects in the literature are true or whether they merely reflect selective reporting. P-curve is the distribution of statistically significant p-values for a set of studies (ps < .05). Because only true effects are expected to generate right-skewed p-curves – containing more low (.01s) than high (.04s) significant p-values – only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file-drawers of failed studies and analyses.
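The intuition can be illustrated with a small Python simulation (this is not the p-curve app itself; the effect size, sample size, and number of simulations are assumptions): when a true effect exists, the significant p-values pile up near zero, whereas under a null effect they are spread evenly below .05.

```python
# Illustration of the p-curve logic: compare the share of significant
# p-values below .01 when a true effect exists versus under the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def significant_pvalues(effect, n=30, sims=5000):
    """Collect p-values < .05 from simulated two-sample t-tests."""
    ps = []
    for _ in range(sims):
        a = rng.normal(effect, 1, n)
        b = rng.normal(0, 1, n)
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            ps.append(p)
    return np.array(ps)

for label, effect in [("true effect (d = 0.5)", 0.5), ("null effect", 0.0)]:
    ps = significant_pvalues(effect)
    print(f"{label}: {np.mean(ps < 0.01):.0%} of significant p-values fall below .01")
```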
Metalab
Tags: Data Visualization, Linguistics, Meta-Analyses, Metascience (Methods and Archival Science), Power analysis, Psychology
MetaLab is a research tool for aggregating across studies in the language acquisition literature. Currently, MetaLab contains 887 effect sizes across meta-analyses in 13 domains of language acquisition, based on data from 252 papers collecting 11363 subjects. These studies can be used to obtain better estimates of effect sizes across different domains, methods, and ages. Using our power calculator, researchers can use these estimates to plan appropriate sample sizes for prospective studies. More generally, MetaLab can be used as a theoretical tool for exploring patterns in development across language acquisition domains.
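As a sketch of how a meta-analytic effect-size estimate feeds prospective power analysis, the Python lines below compute the power of a typical group size and the sample needed for 80% power. The effect size (d = 0.35) and group sizes are illustrative assumptions, not estimates drawn from MetaLab.

```python
# Sketch: using a meta-analytic effect-size estimate to plan a new study.
# d = 0.35 and the group sizes are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

power_at_n20 = TTestIndPower().power(effect_size=0.35, nobs1=20, alpha=0.05)
needed_n = TTestIndPower().solve_power(effect_size=0.35, power=0.80, alpha=0.05)
print(f"Power with 20 participants per group: {power_at_n20:.2f}")
print(f"Per-group N for 80% power: {needed_n:.0f}")
```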
pcpanel
Tags: Economics and Finance, Power analysis, Pre-Analysis Plans, Statistics and Data Science
This package performs power calculations for randomized experiments that use panel data. Unlike the existing programs “sampsi” and “power”, this package accommodates arbitrary serial correlation. The program “pc_simulate” performs simulation-based power calculations using a pre-existing dataset (stored in memory), and accommodates cross-sectional, multi-wave panel, difference-in-differences, and ANCOVA designs. The program “pc_dd_analytic” performs analytical power calculations for a difference-in-differences experimental design, applying the formula derived in Burlig, Preonas, and Woerman (2017) that is robust to serial correlation. Users may either input parameters to characterize the assumed variance-covariance structure of the outcome variable, or allow the subprogram “pc_dd_covar” to estimate the variance-covariance structure from pre-existing data.
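The package's programs run in Stata; purely as an illustration of the simulation-based approach described above, the Python sketch below estimates power for a difference-in-differences panel design with serially correlated (AR(1)) errors. All design parameters (units, waves, effect size, AR(1) coefficient, number of simulations) are assumptions, not defaults of pcpanel.

```python
# Simulation-based power calculation for a difference-in-differences panel
# design with AR(1) errors and unit-clustered standard errors.
# All design parameters below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_units, n_waves, effect, rho, sims = 60, 6, 0.25, 0.5, 500
rejections = 0

for _ in range(sims):
    treat_unit = rng.permutation(np.repeat([0, 1], n_units // 2))
    rows = []
    for i in range(n_units):
        e = 0.0
        for t in range(n_waves):
            e = rho * e + rng.normal(scale=1.0)        # AR(1) error within unit
            post = int(t >= n_waves // 2)
            rows.append((i, treat_unit[i], post, effect * treat_unit[i] * post + e))
    df = pd.DataFrame(rows, columns=["unit", "treat", "post", "y"])
    fit = smf.ols("y ~ treat * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["unit"]})
    rejections += fit.pvalues["treat:post"] < 0.05

print(f"Simulated power: {rejections / sims:.2f}")
```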