Resource Library

The BITSS Resource Library contains resources for learning, teaching, and practicing research transparency and reproducibility, including curricula, slide decks, books, guidelines, templates, software, and other tools. All resources are categorized by i) topic, ii) type, and iii) discipline. Filter results along any of these parameters, or use the search bar to find what you're looking for.

Know of a great resource that we haven’t included or have questions about the existing resources? Email us!


Meta (Life Sciences)

Meta is an interactive search tool that indexes research in the biomedical sciences. Users can create personalized news feeds by selecting from a range of concepts, journals, preprints, or papers of interest.

NRIN Collection of Resources on Research Integrity (Data Management and De-identification)

Curated by the Netherlands Research Integrity Network (NRIN), this collection contains literature, tools, guidelines, and educational media related to research integrity. Access the collection here.

PhD Course Materials: Transparent, Open, and Reproducible Policy Research (Data Management and De-identification)

BITSS Catalyst Sean Grant developed and delivered a PhD course on Transparent, Open, and Reproducible Policy Research at the Pardee RAND Graduate School. Find all course materials on the project's OSF page.

Course Syllabi for Open and Reproducible Methods (Anthropology, Archaeology, and Ethnography)

A collection of course syllabi from across disciplines featuring content that examines or promotes open and reproducible research practices. Housed on the OSF.

rOpenSci Packages (Data Management and De-identification)

These packages are carefully vetted, staff- and community-contributed R software tools that lower barriers to working with scientific data sources and data that support research applications on the web.

PRISMA (Interdisciplinary)

PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions.

Nicebread (Data Management and De-identification)

Dr. Felix Schönbrodt’s blog promoting research transparency and open science.

The New Statistics (+OSF Learning Page) (Data Management and De-identification)

This OSF project helps organize resources for teaching the "New Statistics," an approach that emphasizes asking quantitative questions, focusing on effect sizes, using confidence intervals to express uncertainty about effect sizes, using modern data visualizations, seeking replication, and using meta-analysis as a matter of course (Cumming, 2011).
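As a minimal illustration of this style of analysis (a sketch with simulated data, not material from the OSF project itself), the snippet below reports an effect size with a 95% confidence interval rather than only a p-value:

```python
# "New Statistics"-style reporting: estimate an effect size (Cohen's d)
# and a 95% confidence interval instead of reporting only a p-value.
# All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)
treatment = rng.normal(loc=0.5, scale=1.0, size=40)  # simulated outcomes
control = rng.normal(loc=0.0, scale=1.0, size=40)

n1, n2 = len(treatment), len(control)
# Pooled standard deviation, then Cohen's d.
sp = np.sqrt(((n1 - 1) * treatment.var(ddof=1)
              + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / sp

# Large-sample approximation to the standard error of d.
se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
print(f"d = {d:.2f}, 95% CI [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")
```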

JASP (Dynamic Documents and Coding Practices)

JASP is a free, open-source, cross-platform statistical software program with a state-of-the-art graphical user interface. The JASP interface lets you run statistical analyses in seconds, without having to learn programming or risk a programming mistake. JASP is statistically inclusive, offering both frequentist and Bayesian analysis methods.

p-uniform (Interdisciplinary)

The p-uniform package provides meta-analysis methods that correct for publication bias; three methods are currently included. The p-uniform method estimates effect size, tests the null hypothesis of no effect, and tests for publication bias. The second method, the hybrid method, combines an original study and its replication while taking into account the statistical significance of the original study. Both p-uniform and the hybrid method rest on the statistical theory that the distribution of p-values is uniform conditional on the population effect size. The third method, the Snapshot Bayesian Hybrid Meta-Analysis Method, computes posterior probabilities for four true effect sizes (no, small, medium, and large) based on an original study and its replication, taking into account publication bias in the original study; it can also compute the required sample size of the replication, akin to power analysis in null hypothesis significance testing.
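The conditional-uniformity idea can be seen in a short simulation. The sketch below is a Python illustration of the principle only (the p-uniform package itself is implemented in R, and all numbers here are simulated): among studies selected for significance, p-values computed conditional on significance are uniform under the true effect but pile up near zero when a null effect is assumed.

```python
# Principle behind p-uniform: conditional on the true population effect,
# p-values of statistically significant studies are uniformly distributed.
# Python illustration only; the p-uniform package itself is written in R.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, n = 0.3, 50                      # illustrative true effect and study size
z_crit = norm.isf(0.025)             # one-sided significance cutoff (~1.96)

# Simulate many study z-statistics; publication bias keeps only the
# significant ones.
z = rng.normal(mu * np.sqrt(n), 1.0, size=20_000)
z_sig = z[z > z_crit]

def conditional_p(z_obs, delta):
    """P-value truncated to the significant region, assuming effect delta."""
    shift = delta * np.sqrt(n)
    return norm.sf(z_obs - shift) / norm.sf(z_crit - shift)

# Uniform (mean ~0.5) at the true effect; piled up near zero under the null.
print("mean conditional p at true effect:", conditional_p(z_sig, mu).mean())
print("mean conditional p at zero effect:", conditional_p(z_sig, 0.0).mean())
```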

DMAS (Economics and Finance)

The Distributed Meta-Analysis System is an online tool that helps scientists analyze, explore, combine, and communicate results from existing empirical studies. Its primary purpose is to support meta-analyses by providing a database of empirically estimated models and methods to integrate their results. The current version supports a range of tools for analyzing empirical climate impact results, but its creators intend to expand its applicability to other fields, including the social sciences, medicine, ecology, and geophysics.
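At its simplest, the "integrate their results" step is an inverse-variance weighted pooling of per-study estimates. The sketch below shows that calculation with made-up numbers; it is not DMAS code (DMAS is an online system):

```python
# Fixed-effect, inverse-variance weighted pooling of per-study estimates,
# the simplest form of the result integration a meta-analysis database
# supports. The estimates and standard errors below are made up.
import numpy as np

estimates = np.array([0.12, 0.20, 0.08, 0.15])  # per-study effect estimates
std_errs = np.array([0.05, 0.08, 0.04, 0.06])   # their standard errors

weights = 1.0 / std_errs**2                     # inverse-variance weights
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled:.3f}, 95% CI "
      f"[{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```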

MetaLab (Data Visualization)

MetaLab is a research tool for aggregating across studies in the language acquisition literature. Currently, MetaLab contains 887 effect sizes across meta-analyses in 13 domains of language acquisition, based on data from 252 papers collecting 11,363 subjects. These studies can be used to obtain better estimates of effect sizes across different domains, methods, and ages. With its power calculator, researchers can use these estimates to plan appropriate sample sizes for prospective studies. More generally, MetaLab can be used as a theoretical tool for exploring patterns in development across language acquisition domains.
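As a rough sketch of the kind of prospective power calculation involved (using a hypothetical effect size and the statsmodels library, not MetaLab's own calculator):

```python
# Prospective power calculation of the kind MetaLab's calculator supports:
# given a meta-analytic effect-size estimate, solve for the per-group n of
# a two-sample t-test. The effect size here is hypothetical; in practice it
# would come from a MetaLab meta-analysis. Uses statsmodels, not MetaLab code.
from statsmodels.stats.power import TTestIndPower

d = 0.35  # hypothetical meta-analytic Cohen's d for some paradigm
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(f"n per group for 80% power: {n_per_group:.0f}")
```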

Curate Science (Issues with transparency and reproducibility)

Curate Science is a crowd-sourced platform to track, organize, and interpret replications of published findings in the social sciences. Curated replication study characteristics include links to PDFs, open/public data, open/public materials, pre-registered protocols, independent variables (IVs), outcome variables (DVs), replication type, replication design differences, and links to associated evidence collections that feature meta-analytic forest plots.

Experimental Lab Standard Operating Procedures (Data Management and De-identification)

This standard operating procedure (SOP) document describes the default practices of the experimental research group led by Donald P. Green at Columbia University. These defaults apply to analytic decisions that have not been made explicit in pre-analysis plans (PAPs); they are not meant to override decisions laid out in PAPs. The contents of the lab's SOP are available for public use, and others are welcome to copy or adapt it to suit their research purposes.

