The BITSS Resource Library contains resources for learning, teaching, and practicing research transparency and reproducibility, including curricula, slide decks, books, guidelines, templates, software, and other tools. All resources are categorized by i) topic, ii) type, and iii) discipline. Filter results by applying criteria along these parameters or use the search bar to find what you’re looking for.
Know of a great resource that we haven’t included or have questions about the existing resources? Email us!
Jupyter Notebooks Data Visualization
The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.
Docker Data Visualization
Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.
DeclareDesign Dynamic Documents and Coding Practices
DeclareDesign is statistical software to aid researchers in characterizing and diagnosing research designs — including experiments, quasi-experiments, and observational studies. DeclareDesign consists of a core package, as well as three companion packages that stand on their own but can also be used to complement the core package: randomizr: Easy-to-use tools for common forms of random assignment and sampling; fabricatr: Tools for fabricating data to enable frontloading analysis decisions in social science research; estimatr: Fast estimators for social science research.
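randomizr itself is an R package (with functions such as complete_ra()); purely as a conceptual illustration of what "complete random assignment" means, and not of randomizr's actual API, the idea can be sketched in Python:

```python
import random

def complete_random_assignment(n, m, seed=None):
    """Assign exactly m of n units to treatment. A conceptual sketch of
    complete random assignment only; this is not randomizr's actual API
    (randomizr is an R package, with complete_ra() and related functions)."""
    rng = random.Random(seed)
    assignment = [1] * m + [0] * (n - m)
    rng.shuffle(assignment)
    return assignment

z = complete_random_assignment(n=10, m=5, seed=42)  # exactly 5 treated units
```

Unlike flipping an independent coin per unit, this guarantees the exact treated count, which is what distinguishes complete from simple random assignment.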
NeuroChambers Issues with transparency and reproducibility
Chris Chambers is a psychologist and neuroscientist at the School of Psychology, Cardiff University. He created this blog after taking part in a debate about science journalism at the Royal Institution in March 2012. The aim of his blog is to give readers some insights from the trenches of science. He talks about a range of science-related issues and may even give up a trade secret or two.
The New Statistics (+OSF Learning Page) Data Management and De-identification
This OSF project organizes resources for teaching the "New Statistics," an approach that emphasizes asking quantitative questions, focusing on effect sizes, using confidence intervals to express uncertainty about effect sizes, using modern data visualizations, seeking replication, and using meta-analysis as a matter of course (Cumming, 2011).
JASP Dynamic Documents and Coding Practices
JASP is a cross-platform software program with a state-of-the-art graphical user interface. The JASP interface allows you to conduct statistical analyses in seconds, without having to learn programming or risk a programming mistake. JASP is statistically inclusive, offering both frequentist and Bayesian analysis methods. Open source and free of charge.
The p-uniform package provides meta-analysis methods that correct for publication bias. Three methods are currently included in the package. The p-uniform method can be used for estimating effect size, testing the null hypothesis of no effect, and testing for publication bias. The second is the hybrid method, a meta-analysis method for combining an original study and a replication while taking into account the statistical significance of the original study. The p-uniform and hybrid methods are based on the statistical theory that the distribution of p-values is uniform conditional on the population effect size. The third is the Snapshot Bayesian Hybrid Meta-Analysis Method, which computes posterior probabilities for four true effect sizes (no, small, medium, and large) based on an original study and a replication while taking into account publication bias in the original study. This method can also be used to compute the required sample size of the replication, akin to power analysis in null hypothesis significance testing.
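The uniformity property underlying these methods is easy to see by simulation: when the population effect is zero, test statistics follow their null distribution and their p-values are uniform on (0, 1). A minimal, stdlib-only Python sketch of that fact (the p-uniform package itself is written in R):

```python
import math
import random

def two_sided_p(z):
    # Two-sided p-value of a standard normal test statistic (stdlib only)
    cdf = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - cdf)

rng = random.Random(0)
# When the population effect is zero, z-statistics are standard normal,
# so their p-values are uniform on (0, 1): the mean is near 0.5 and
# about 5% fall below .05.
pvals = [two_sided_p(rng.gauss(0.0, 1.0)) for _ in range(20000)]
mean_p = sum(pvals) / len(pvals)
frac_sig = sum(1 for p in pvals if p < 0.05) / len(pvals)
```

p-uniform exploits the conditional version of this property: given the true effect size, p-values are uniform, so deviations from uniformity among published significant results signal publication bias.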
p-curve Dynamic Documents and Coding Practices
P-curve is a tool for determining whether reported effects in the literature are true or merely reflect selective reporting. A p-curve is the distribution of statistically significant p-values for a set of studies (ps < .05). Because only true effects are expected to generate right-skewed p-curves – containing more low (.01s) than high (.04s) significant p-values – only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file-drawers of failed studies and analyses.
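The binning step at the heart of this idea can be sketched in a few lines of Python. This is a simplified illustration only; the actual p-curve app performs formal statistical tests of skew, not a raw count:

```python
def p_curve_counts(pvals, alpha=0.05):
    """Bin statistically significant p-values; a right-skewed curve
    (more low than high significant p-values) suggests evidential value.
    Illustrative sketch only, not the p-curve app's actual procedure."""
    sig = [p for p in pvals if p < alpha]
    low = sum(1 for p in sig if p <= alpha / 2)   # e.g. p <= .025
    high = len(sig) - low
    return {"low": low, "high": high, "right_skewed": low > high}

result = p_curve_counts([0.003, 0.011, 0.021, 0.032, 0.049, 0.20])
```

Here three of the five significant p-values fall at or below .025, so the curve leans right-skewed, the pattern expected from a true effect.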
DMAS Economics and Finance
The Distributed Meta-Analysis System is an online tool to help scientists analyze, explore, combine, and communicate results from existing empirical studies. Its primary purpose is to support meta-analyses by providing a database of empirically estimated models and methods to integrate their results. The current version supports a range of tools useful for analyzing empirical climate impact results, but its creators intend to expand its applicability to other fields, including the social sciences, medicine, ecology, and geophysics.
Metalab Data Visualization
MetaLab is a research tool for aggregating across studies in the language acquisition literature. Currently, MetaLab contains 887 effect sizes across meta-analyses in 13 domains of language acquisition, based on data from 252 papers covering 11,363 subjects. These studies can be used to obtain better estimates of effect sizes across different domains, methods, and ages. Using the site's power calculator, researchers can use these estimates to plan appropriate sample sizes for prospective studies. More generally, MetaLab can be used as a theoretical tool for exploring patterns in development across language acquisition domains.
statcheck is an R package that checks for errors in statistical reporting in APA-formatted documents. It can help estimate the prevalence of reporting errors and is a tool to check your own work before submitting. The package can be used to automatically extract statistics from articles and recompute p values. It is also available as a web app.
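statcheck itself is an R package, but its core recomputation step is simple to illustrate. The hypothetical Python sketch below parses a reported z-test and checks the reported p-value against the recomputed one; statcheck's real parser covers t, F, r, and chi-square statistics in APA format:

```python
import math
import re

def two_sided_p_from_z(z):
    # Standard normal CDF via the error function (stdlib only)
    cdf = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - cdf)

def check_report(text, tol=0.001):
    """Parse a reported 'z = ..., p = ...' result and flag a mismatch.
    A sketch of the idea only: statcheck's real parser handles t, F, r,
    and chi-square statistics in APA format, not just z-tests."""
    m = re.search(r"z\s*=\s*(-?[\d.]+),\s*p\s*=\s*([\d.]+)", text)
    if m is None:
        return None  # no parsable test statistic found
    z, p_reported = float(m.group(1)), float(m.group(2))
    p_computed = two_sided_p_from_z(z)
    # tol allows for the reported p being rounded to three decimals
    return abs(p_computed - p_reported) < tol

ok = check_report("z = 2.20, p = .028")  # consistent report
```

Mismatches between the reported test statistic and the reported p-value, flagged exactly this way, are the "reporting errors" statcheck counts.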
pcpanel Economics and Finance
This package performs power calculations for randomized experiments that use panel data. Unlike the existing programs “sampsi” and “power”, this package accommodates arbitrary serial correlation. The program “pc_simulate” performs simulation-based power calculations using a pre-existing dataset (stored in memory), and accommodates cross-sectional, multi-wave panel, difference-in-differences, and ANCOVA designs. The program “pc_dd_analytic” performs analytical power calculations for a difference-in-differences experimental design, applying the formula derived in Burlig, Preonas, and Woerman (2017) that is robust to serial correlation. Users may either input parameters to characterize the assumed variance-covariance structure of the outcome variable, or allow the subprogram “pc_dd_covar” to estimate the variance-covariance structure from pre-existing data.
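pcpanel is a Stata package, but simulation-based power calculation is a general technique. The Python sketch below is illustrative only: it estimates power for a simple two-arm comparison of means and ignores the panel structure, ANCOVA designs, and serial correlation that pcpanel handles:

```python
import math
import random

def simulated_power(n_per_arm, effect, sd=1.0, sims=2000, seed=0):
    """Estimate power by simulation for a two-arm comparison of means.
    A stripped-down sketch of the general technique behind pc_simulate;
    the Stata package additionally handles panel data, ANCOVA designs,
    and arbitrary serial correlation, none of which appear here."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value of the normal
    se = sd * math.sqrt(2.0 / n_per_arm)
    hits = 0
    for _ in range(sims):
        treat = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        diff = sum(treat) / n_per_arm - sum(control) / n_per_arm
        if abs(diff / se) > z_crit:
            hits += 1
    return hits / sims

power = simulated_power(n_per_arm=64, effect=0.5)  # roughly 0.8
```

Power is simply the fraction of simulated experiments in which the test rejects; serially correlated outcomes inflate the standard error, which is why naive formulas overstate power for panel designs.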
Handbook of the Modern Development Specialist Data Management and De-identification
Created by the Responsible Data Forum, this handbook is offered as a first attempt to understand what responsible data means in the context of international development programming. The authors have taken a broad view of development, opting not to be prescriptive about who the perfect “target audience” for this effort is within the space. This book builds on a number of resources and strategies developed in academia, human rights and advocacy, but aims to focus on international development practitioners. The handbook includes chapters on project design, data management, collection, analysis, sharing, and more.
Dataverse Data Repositories Interdisciplinary
Dataverse is an open source web application to share, preserve, cite, explore, and analyze research data. It facilitates making data available to others, and allows you to replicate others’ work more easily. Researchers, data authors, publishers, data distributors, and affiliated institutions all receive academic credit and web visibility.
OSF Data Management and De-identification
Open Science Framework (OSF) is part version control system, part data repository, part collaboration software that allows researchers to move study materials to the cloud, share and find materials, detail individual contributions, make research design more visible, and register materials to certify research design was not modified to alter outcomes. To increase workflow flexibility OSF offers a system where researchers can register a description of their study and its goals. The OSF emphasizes versatility with a very wide range of tools and features including add-ons from other related sites such as Dataverse and Github. Uploaded materials can also be archived and receive a Digital Object Identifier (DOI) or Archival Resource Key (ARK).
Dryad Data Management and De-identification
Dryad is a curated repository of data underlying peer-reviewed scientific and medical literature, particularly data for which no specialized repository exists. All material in Dryad is associated with a scholarly publication. Its notable features include easy integration into the manuscript submission workflow of its partner journals, the flexibility to make data privately available during peer review, and allowing submitters to set limited-term embargoes post-publication.
ICPSR Data Repositories
The Inter-university Consortium for Political and Social Research (ICPSR) maintains and provides access to a vast archive of social science data for research and instruction (over 10,000 discrete studies and surveys with more than 65,000 datasets). ICPSR has been archiving data since 1962.
Qualitative Data Repository Data Management and De-identification
QDR selects, ingests, curates, archives, manages, durably preserves, and provides access to digital data used in qualitative and multi-method social inquiry. The repository develops and publicizes common standards and methodologically informed practices for these activities, as well as for the reusing and citing of qualitative data. Four beliefs underpin the repository’s mission: data that can be shared and reused should be; evidence-based claims should be made transparently; teaching is enriched by the use of well-documented data; and rigorous social science requires common understandings of its research methods.
re3data.org Data Repositories
The Registry of Research Data Repositories (re3data.org) is a global registry of research data repositories that covers research data repositories from different academic disciplines. It presents repositories for the permanent storage and access of data sets to researchers, funding bodies, publishers and scholarly institutions. re3data.org promotes a culture of sharing, increased access and better visibility of research data. The registry went live in autumn 2012 and is funded by the German Research Foundation (DFG).
Scan.R Data Management and De-identification Interdisciplinary
Scan.R searches all Stata (.dta), SAS (.sas7bdat), and comma-separated values (.csv) files in a specified directory for variables that may contain personally identifiable information (PII), flagging variable names or labels that include strings commonly associated with PII. (Note: Scan.R does not search labels in .csv files.) Results are displayed on screen and saved to a comma-separated values file in the current working directory listing the variables and data flagged as potential PII.
Transparent and Open Social Science Research Dynamic Documents and Coding Practices
Demand is growing for evidence-based policy making, but there is also growing recognition in the social science community that limited transparency and openness in research have contributed to widespread problems. With this course, you can explore the causes of limited transparency in social science research, as well as tools to make your own work more open and reproducible.
You can enroll in the full course for free and access hands-on and social activities on the FutureLearn platform during designated course runs, or access the course videos for self-paced learning on the BITSS website.
Manual of Best Practices Dynamic Documents and Coding Practices
The Manual of Best Practices, written by Garret Christensen (BITSS), is a working guide to the latest best practices for transparent quantitative social science research. The manual is also available, and occasionally updated, on GitHub. For suggestions or feedback, contact email@example.com.
Curate Science Issues with transparency and reproducibility
Curate Science is a crowd-sourced platform to track, organize, and interpret replications of published findings in the social sciences. Curated replication study characteristics include links to PDFs, open/public data, open/public materials, pre-registered protocols, independent variables (IVs), outcome variables (DVs), replication type, replication design differences, and links to associated evidence collections that feature meta-analytic forest plots.
Open Science Training Initiative Data Management and De-identification
The Open Science Training Initiative (OSTI) provides a series of lectures on open science, data management, licensing, and reproducibility for use with graduate students and postdoctoral researchers. The lectures can be used individually as one-off lectures on aspects of open science, or integrated into existing course curricula. Content, slides, and advice sheets for the lectures and other training materials are gradually being released on the GitHub repository as official release versions become available.
Reproducible Research Data Management and De-identification
Reproducible Research, taught by Roger D. Peng, Jeff Leek, and Brian Caffo of Johns Hopkins University, is a Coursera course on organizing data analysis so that it is reproducible and accessible to others. In this course students learn to write documents in R Markdown, integrate live R code into a literate statistical program, and compile R Markdown documents using knitr and related tools.
Implementing Reproducible Research Dynamic Documents and Coding Practices
Implementing Reproducible Research by Victoria Stodden, Friedrich Leisch, and Roger D. Peng covers many of the elements necessary for conducting and distributing reproducible research. The book focuses on the tools, practices, and dissemination platforms for ensuring reproducibility in computational science.
The Workflow of Data Analysis Using Stata Data Management and De-identification
The Workflow of Data Analysis Using Stata, by J. Scott Long, explains how to manage the data analysis workflow, including cleaning data; creating, renaming, and verifying variables; performing and presenting statistical analyses; and producing replicable results.
EGAP Economics and Finance
The Evidence in Governance and Politics (EGAP) Registry focuses on designs for experiments and observational studies in governance and politics. The registry allows users to submit an array of information via an online form. Registered studies can be viewed as PDFs on the EGAP site. The EGAP registry is straightforward and emphasizes simplicity for registering impact evaluations.
Promise and Perils of Pre-Analysis Plans Economics and Finance
Promise and Perils of Pre-analysis Plans, by Ben Olken lays out many of the items to include in a pre-analysis plan, as well as their history, the benefits, and a few potential drawbacks. Pre-analysis plans can be especially useful in reaching agreement about what will be measured and how when a partner or funder has a vested interest in the outcome of a study.
Reshaping Institutions Economics and Finance
Reshaping Institutions is a paper by Katherine Casey, Rachel Glennerster, and Edward Miguel that uses a pre-analysis plan to analyze the effects of a community-driven development program in Sierra Leone. They discuss the contents and benefits of a PAP in detail, and include a "cherry-picking" table that shows the wide flexibility of analysis that is possible without pre-specification. The PAP itself is included in Appendix A in the supplementary materials, available at the link above.
Pre-Analysis Plan Template Economics and Finance
Experimental Lab Standard Operating Procedures Data Management and De-identification
This standard operating procedure (SOP) document describes the default practices of the experimental research group led by Donald P. Green at Columbia University. These defaults apply to analytic decisions that have not been made explicit in pre-analysis plans (PAPs). They are not meant to override decisions that are laid out in PAPs. The contents of the lab's SOP are available for public use, and others are welcome to copy or adapt it to suit their research purposes.
Standardized Disclosure Peer Review Psychology Transparent Reporting
A standard statement developed for peer review in psychology.
“I request that the authors add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes. The authors should, of course, add any additional text to ensure the statement is accurate. This is the standard reviewer disclosure request endorsed by the Center for Open Science [see http://osf.io/project/hadz3]. I include it in every review.”
RIDIE Economics and Finance
The Registry of International Development Impact Evaluations (RIDIE), created in collaboration with the International Initiative for Impact Evaluation (3ie) and the RAND Corporation, is a prospective registry that enables researchers and evaluators to record information about their evaluation designs before conducting the analysis, as well as update information as the study proceeds and post findings upon study completion.