Research Transparency in Brazilian Political and Social Science: A First Look (Political Science, SSMART)

George Avelino and Scott Desposato

We propose to conduct the first meta-analysis and reproducibility analysis of political science in Brazil. Funds will be used for graduate student support and other expenses to collect data and assess the reproducibility of all articles published in the last five years in the three leading Brazilian political science and general social science journals: the Brazilian Political Science Review, the Revista de Ciência Política, and Dados. A meta-analysis will test the relationship between type and field of study and reproducibility, and results will be presented at the annual meeting of the Brazilian National Association of Political Scientists, the Associação Brasileira de Ciência Política (ABCP).

Pre-Analysis Plans: A Stocktaking (Interdisciplinary, SSMART)

George Ofosu and Daniel Posner

The evidence-based community (including BITSS) has held up preregistration as a solution to the problem of research credibility, but—ironically—without any evidence that preregistration works. The goal of our proposed research is to provide an evidentiary base for assessing whether PAPs—as they are currently used—are effective in achieving their stated objectives of preventing “fishing,” reducing scope for the post-hoc adjustment of research hypotheses, and solving the “file drawer problem.” We aim to do this by analyzing a random sample of 300 studies that have been pre-registered on the AEA and EGAP registration platforms, evenly distributed across studies that are still in progress, studies that were completed and resulted in a publicly available paper, and studies that were completed but (as far as we can determine) did not result in a publicly available paper. Given the significant costs of researcher time and energy in preparing PAPs, and the implications that adhering to them may have for breakthroughs that come from unexpected results (Laitin 2013; Olken 2015), it is critical to take stock of whether PAPs are working.

Fostering Transparency in Government Institutions and Higher Education: A Research and Teaching Initiative (Catalyst Training Project, Interdisciplinary)

Nicole Janz and Dalson Figueiredo

Locations: University of Nottingham, UK; Recife, Brazil; Brasilia, Brazil

Research findings based on data that are not publicly accessible are hard to credit. Similarly, governments that withhold administrative information should not be trusted. We argue that deficits in government transparency and in research transparency are connected, and that both can be tackled by offering training on reproducibility. This project aims to foster transparency in scholarly research and in government institutions. In particular, we will conduct educational workshops that leverage insights used to increase governmental and research transparency in the UK in order to improve transparency in Brazil. Our target groups are 100 undergraduate and graduate students, 20 scholars, and 20 bureaucrats. The project will strengthen research skills and transparency norms that can contribute to scientific innovation, development, and social welfare.

The first workshop was part of a day-long conference: “The Gold Standard of Reproducible Research” at the University of Nottingham on March 9, 2017.

Best Practices of Openness for African Researchers and Research Transparency Workshops at Three Social Science Conferences (Catalyst Training Project, Interdisciplinary)

Dief Reagen Nochi Faha and Elise Wang Sonne

Locations: LSE-Africa Summit, London School of Economics, London, UK; Population Association of America, Chicago, IL; University of Dschang, Cameroon; UNU-MERIT, Maastricht, Netherlands

This project will communicate best practices for openness and reproducibility in research. We will hold a workshop for African researchers at the University of Dschang in Cameroon, focused on sensitizing researchers to the necessity of avoiding research problems such as p-hacking, publication bias, and failure to replicate, as well as on data management practices in Stata. A series of workshops will also be held for public policy graduate students, demographers, sociologists, economists, and public health professionals in the Netherlands, the USA, and the UK. In addition to covering research misconduct and data management, these workshops will include trainings on GitHub, the Open Science Framework, Project TIER, and dynamic documents using StatTag and MarkDoc.

The first workshop, “Research Transparency and Reproducibility in the Social Sciences,” will be held on March 31, 2017 at the 2017 LSE Africa Summit Research Conference in London.

Improving transparency of complex interventions through the facilitation of process evaluation training (Behavior Change, Catalyst Training Project)

Elaine Toomey

Locations: National University of Ireland, Galway (NUIG), Health Behaviour Change Research Group (HBCRG)

Process evaluation is a way of investigating how well an intervention, programme or treatment was implemented as intended. It is crucial for facilitating transparency in the development, conduct and reporting of interventions in numerous research fields, including psychology, social science and public health, as it helps to increase confidence that changes in study outcomes are due to the influence of the intervention being investigated and not to variability in its implementation. It is particularly important for complex interventions (interventions with several interacting components, such as multiple providers or intervention sites) because of the number of components that can be implemented variably and thus influence outcomes separately. Process evaluation increases scientific confidence in the results of an intervention, enhances the replication and implementation of effective interventions, facilitates the refinement or de-implementation of ineffective interventions, and overall serves to reduce research waste. However, specific process evaluation training is not currently available in Ireland, which represents a significant barrier to the transparency of Irish intervention research and the implementation of best-quality evidence into practice.

At present, the gold-standard for process evaluation training in Europe is run by the Centre for Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer) in the UK, an International Centre of Excellence. This project aims to facilitate world-class DECIPHer process evaluation training in Ireland to improve the transparency of complex interventions in psychology, public health or social science settings and overall enhance the quality and impact of this research. Subsequent dissemination of the workshop proceedings and materials will also promote further understanding of this work amongst the wider BITSS community.

This project is supported in part by the John Templeton Foundation.

Research Transparency in the Social Sciences Workshop, Second Edition (Catalyst Training Project, Interdisciplinary)

Zacharie Dimbuene

Location: University of Kinshasa (The Democratic Republic of the Congo)

Research Transparency is gaining attention in the scientific community around the world, including the United States, European countries, and Anglophone countries in sub-Saharan Africa; yet the concept is quite a “new world” in Francophone Africa. In my efforts to advance the movement in Francophone Africa, I successfully delivered the first Research Transparency Workshop at the University of Kinshasa (Democratic Republic of the Congo). This project is intended to sustain previous efforts to set up “Research Transparency in the Social Sciences” as a culture in the next generation of social scientists in Francophone Africa.

I will offer a training workshop for 60 graduate students, research staff, and professors at the University of Kinshasa to promote best practices for reproducible research with concrete guidance about how to make materials understandable for publication. The activities to be addressed during the workshop will include (1) organizing file structure; (2) creating understandable variable labels and value codes, as well as connecting variables to survey instruments through consistent labels and codebook creation; (3) version control of code and data; and (4) creating and maintaining documentation files about the survey and data, as well as data cleaning steps.

This workshop is supported in part by the John Templeton Foundation.

Development and implementation of a short course in Open Science (Catalyst Training Project, Interdisciplinary)

Arnaud Vaganay

Locations: Utrecht, Netherlands; Essex, UK; London School of Economics, UK

I will coordinate a multi-site, university-accredited, and extra-curricular summer course in Open Science to be held at the Utrecht Summer School, Essex Summer School, and LSE Summer School. This course will build on existing materials and programs (including those of BITSS and COS), has been endorsed by the Center for Open Science, and will provide ECTS credits to students taking an optional exam. The exam is being developed with Thomas Leeper (Assistant Professor in Political Behaviour at LSE) who will also assist in teaching the course at LSE.

The course aims to develop the perspectives, knowledge, and skills researchers need to make their research more open, i.e. more transparent and reproducible. Upon completion of the course, students will be able to (1) define open science and evaluate the openness of current research; (2) discuss the main drivers and obstacles to openness and critically assess solutions; (3) implement fundamental open science practices in their own workflows; and (4) apply these skills through the use of open science software and apps. Each course will target 20 graduate students and early career researchers.

Disseminating Research Transparency in Perú, Bolivia, and Chile (Catalyst Training Project, Interdisciplinary)

F. Hoces de la Guardia

Locations: Peru; Bolivia; Chile (final locations TBA)

The goal of this project is to bring to the attention of the academic communities in Peru, Bolivia, and Chile the recent developments in science regarding transparency and openness. This will be done in a two-fold format. First, a seminar-style talk will present the key issues (the reproducibility crisis, specification searching, and publication bias) and their solutions (pre-registration, the TOP guidelines, and other tools for reproducible research). Second, a day-long workshop aimed at students will present the main tools for reproducible research (including R, dynamic documents, Git, and OSF). Extending the research transparency community to this region can have additional benefits, as it would expose highly talented researchers and students to elements of frontier research practice that are usually not visible in published papers.

Introducing the Transparent and Reproducible Research Paradigm in Ugandan Higher Institutions of Learning (Catalyst Training Project, Interdisciplinary)

Jayne Byakika-Tusiime and Saint Kizito Omala

Locations: Universities across Uganda (final locations TBA)

The concept of transparent and reproducible research is neither known nor appreciated by researchers in Uganda and many other developing countries. Adoption of new knowledge and technologies is usually delayed in developing countries because of the slower flow of information in these regions. The concept of transparent and reproducible research is still relatively new even in the developed world and almost unknown in the developing world. As Catalysts of this paradigm shift, we wish to introduce this concept in Uganda. Groundbreaking research, especially in health, has been done in Uganda and much more research continues to be done. However, the practice of transparent and reproducible research is non-existent. We thus propose to start a project in Uganda to train 500 established and upcoming researchers in conducting transparent and reproducible research.

Our target population will be graduate and undergraduate students from 30 universities in Uganda, as well as faculty of both public and private universities that train students in research disciplines where theses, dissertations or journal article publications are required for either degree award or promotion. The objective of the project will be to sensitize and create awareness about conducting transparent and reproducible research. Specifically, we shall conduct ten regional workshops across Uganda. Following this introduction, we plan to design course modules to incorporate into existing academic programs at the participating universities.

Development of a Graduate Public Health Online Course in Research Integrity, Transparency, and Reproducibility (Catalyst Training Project, Epidemiology)

Dennis M. Gorman

Location: Texas A&M University, USA

There is now a growing recognition within the scientific community that flexibility in study design, data analysis, and the reporting of research findings is increasingly leading to the publication of misleading results that capitalize on chance and cannot be replicated. It has been suggested that the use of such practices, if not made apparent in a manuscript describing the results of a study, is a form of research misconduct. There is little doubt that the widespread use of such practices undermines the integrity of a scientific field as they produce a body of non-reproducible, random findings. Both epidemiology and general public health are among the fields of research in which questions have been raised about research integrity, transparency, and reproducibility.

This course will examine various threats to the integrity of research, the professional and organizational factors that produce these threats, and the solutions that have been suggested to improve research quality (such as registered reports, open data, and team of rivals). Upon completion of the course, students should have the ability to differentiate research that is conducted with integrity and capable of producing valid and reproducible findings from research that is conducted without integrity and produces chance results that are trivial and non-reproducible. Students should also have the ability to incorporate practices into their own research that will increase its transparency and ensure it is conducted with integrity.

Creating Pedagogical Materials to Enhance Research Transparency at UCSD (Catalyst Training Project, Interdisciplinary)

Scott Desposato and Craig McIntosh

Location: UC San Diego (UCSD), USA

We will develop a core of teaching material on transparency and reproducibility that can be incorporated into graduate courses across the social sciences at UCSD. This project will draw on the library of materials from BITSS, as well as on faculty at UCSD and the tools developed through the Policy Design and Evaluation Lab (PDEL)’s Data Replication service, to create a one-week short course that can be deployed across courses. Our goal is to educate every new social science PhD student at UCSD about the importance of transparent and replicable research and to empower them to incorporate transparency practices in their research from their first quarter at UCSD. Curricula will be made available in the BITSS library of educational materials as an alternative to the full-semester course, and we will encourage the development of a set of discipline-specific add-on modules.

After completing the module, students will understand the importance of transparency for the scientific enterprise, recognize the institutional and incentive challenges to replicable research, be equipped with appropriate tools to adopt replicable practices, and understand the career costs and benefits of transparency.

MetaLab: Paving the way for easy-to-use, dynamic, crowdsourced meta-analyses (Cognitive Science, Interdisciplinary, SSMART)

Christina Bergmann, Sho Tsuji, Molly Lewis, Mika Braginsky, Page Piccinini, Alejandrina Cristia, and Michael C. Frank

Aggregating data across studies is a central challenge in ensuring cumulative, reproducible science. Meta-analysis is a key statistical tool for this purpose. The goal of this project is to support meta-analysis using MetaLab, an interface and central repository. Our user-friendly, online platform supports collaborative hosting and creation of dynamic meta-analyses, based entirely on open tools. MetaLab implements sustainable curation and update practices, promotes community use for study design, and provides ready-to-use educational material. The platform will lower the workload for individual researchers (or groups of researchers) through a crowdsourcing infrastructure, thereby automating and structuring parts of the workflow. MetaLab is supported by a community of researchers and students who can become creators, curators, contributors, and/or users of meta-analyses. Visualization and computation tools allow instant access to meta-analytic results. The analysis pipeline is open and reproducible, enabling external scrutiny. Additionally, an infrastructure for meta-meta-analyses allows MetaLab to address broader conceptual and methodological questions across individual meta-analyses. MetaLab is initially being developed for the case study of early language development, a field in which meta-analytic methods are vastly underused and which will benefit greatly from the new meta-analyses created in this project; however, extensions to other subfields of the social sciences are straightforward.

Publication Bias and Editorial Statement on Negative Findings (Economics, SSMART)

Abel Brodeur and Cristina Blanco-Perez

The convention of flagging statistical significance at the 95 or 90 percent confidence level has led the academic community to accept “starry” stories with marginally significant coefficients more readily than “starless” ones with insignificant coefficients. In February 2015, the editors of eight health economics journals sent out an editorial statement encouraging referees to accept studies that “have potential scientific and publication merit regardless of whether such studies’ empirical findings do or do not reject null hypotheses that may be specified.” In this study, we test whether this editorial statement on negative results induced a change in the number of published papers not rejecting the null. More precisely, we collect p-values from health economics journals and compare the distribution of tests before and after the editorial statement.
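
A minimal sketch of the kind of before-and-after comparison described above, using made-up counts; the study itself compares full p-value distributions rather than only the share of insignificant results.

```python
# Illustrative sketch only: compare the share of published tests that fail to
# reject the null before vs. after the editorial statement. Counts are made up.
from statsmodels.stats.proportion import proportions_ztest

insignificant = [120, 180]    # hypothetical counts of tests with p > 0.05 (before, after)
total_tests   = [1000, 1000]  # hypothetical totals of published tests (before, after)

z, p = proportions_ztest(insignificant, total_tests)
print(f"z = {z:.2f}, p = {p:.3f}")
```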

Working Paper: https://osf.io/45eba/

Transparent and Reproducible Social Science Research (Textbook) (BITSS Scholars, Economics, Political Science, Psychology)

Garret Christensen, Edward Miguel

Psychology, political science, and economics have all recently taken their turn in the spotlight with instances of influential research that fell apart when scrutinized. Beyond scandals featuring deliberate fraud, there is growing evidence that much social science research features sloppy yet inadvertent errors, and a sense that many analyses produce statistically “significant” results only by chance. Due in part to a rising number of highly publicized cases, there is growing demand for solutions. A movement is emerging across social science disciplines, and especially in economics, political science and psychology, for greater research transparency, openness, and reproducibility. Our proposed textbook, Transparent and Reproducible Social Science Research, will be the first to crystallize the new insights, practices and methods in this growing interdisciplinary field.

Data Sharing and Citations: Causal Evidence (BITSS Scholars, Economics, Political Science)

Garret Christensen, Edward Miguel, and Allan Dafoe

This project attempts to estimate the causal effect of data sharing on citations. There is a fair amount of evidence that published academic papers that make their data publicly available have, on average, a higher number of citations, but ours is the first study that attempts to address the causal nature of this relationship.

Developing a Guideline for Reporting Mediation Analyses (AGReMA) in randomized trials and observational studies (Social Science, SSMART)

Hopin Lee, James H McAuley, Rob Herbert, Steven Kamper, Nicolas Henschke, and Christopher M Williams

Investigating causal mechanisms using mediation analysis is becoming increasingly common in psychology, public health, and social science. Despite this increasing popularity, the accuracy and completeness of reported mediation analyses are inconsistent. Inadequate and inaccurate reporting of research stifles replication, limits assessment of potential bias, complicates meta-analyses, and wastes resources. There is a pressing need to develop a reporting standard for mediation analyses. To date, there have been no registered initiatives on the “Enhancing the QUAlity and Transparency of health Research” (EQUATOR) network that guide the reporting of mediation analyses. Our proposed project aims to improve the reporting quality of future mediation analyses by developing a reporting guideline through a program of established methodologies (systematic review, Delphi survey, consensus meetings, and guideline dissemination). The development and implementation of this guideline will improve the transparency of research findings on causal mechanisms across multiple disciplines.

Working Paper: https://osf.io/g7s9s/

A Large-Scale, Interdisciplinary Meta-Analysis on Behavioral Economics Parameters (Economics, SSMART)

Colin Camerer and Taisuke Imai

We propose to conduct large-scale meta-analyses of published and unpublished studies in economics, psychology, and neuroscience (and any other fields which emerge) to cumulate knowledge about measured parameters explaining preferences over risk and time. In the risk section, we will locate and meta-analyze studies that estimate parameters associated with curvature of utility and loss aversion (the ratio between the disutility of loss and the utility of gain). In the time section, we will locate and meta-analyze studies that estimate parameters associated with near- and long-term discounting (exponential, hyperbolic, and quasi-hyperbolic parameters). Meta-analysis is the proper tool because it is technologically easier than ever before; there are standard methods to weight different estimates according to study quality; if unpublished work is found, it can help estimate whether there are biases in favor of publishing certain types of confirmatory results (or against publishing null results); and looking at a broad range of studies is the most efficient way to help resolve debates about how preference parameters vary across populations (e.g., how much less patient are young people?) and across methods (e.g., are time preferences inferred from monetary rewards different from those inferred from non-monetary rewards?).
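
For reference, a sketch of the standard parameterizations these meta-analyses target; these are common textbook forms, not necessarily the exact specifications used by each primary study.

```latex
% Utility curvature (CRRA) and loss aversion:
u(x) = \frac{x^{1-\gamma}}{1-\gamma}, \qquad
v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\alpha} & x < 0 \end{cases},
\qquad \lambda = \text{loss-aversion ratio}

% Discounting of a reward received at delay t:
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{quasi}}(t) = \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases}
```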

Bayesian Evidence Synthesis: New Meta-Analytic Procedures for Statistical Evidence (Psychology, SSMART)

Eric-Jan Wagenmakers, Raoul Grasman, Quentin F. Gronau, and Felix Schonbrodt

The proposed project will develop a suite of meta-analytic techniques for Bayesian evidence synthesis. The proposed suite addresses a series of challenges that currently constrain classical meta-analytic procedures. Specifically, the proposed techniques will allow (1) the quantification of evidence, both for and against the absence of an effect; (2) the monitoring of evidence as new studies accumulate over time; (3) the graceful and principled “model-averaged” combination of fixed-effects and random-effects meta-analysis; and (4) the principled planning of a new study in order to maximize the probability that it will lead to a worthwhile gain in knowledge.
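
One common way to formalize the model-averaged combination in (3), shown here only as an illustrative sketch rather than the project's final specification, is to place posterior probabilities on four meta-analytic models (fixed or random effects, crossed with null or alternative) and average over the fixed/random distinction:

```latex
p(\mathcal{M}_k \mid \text{data}) \propto p(\text{data} \mid \mathcal{M}_k)\, p(\mathcal{M}_k),
\qquad \mathcal{M}_k \in \{\mathrm{FE}_0, \mathrm{FE}_1, \mathrm{RE}_0, \mathrm{RE}_1\}

% Model-averaged (inclusion) Bayes factor for the presence of an effect:
\mathrm{BF}_{10} =
\frac{p(\mathrm{FE}_1 \mid \text{data}) + p(\mathrm{RE}_1 \mid \text{data})}
     {p(\mathrm{FE}_0 \mid \text{data}) + p(\mathrm{RE}_0 \mid \text{data})}
\Big/
\frac{p(\mathrm{FE}_1) + p(\mathrm{RE}_1)}{p(\mathrm{FE}_0) + p(\mathrm{RE}_0)}
```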

Assessing Bias from the (Mis)Use of Covariates: A Meta-Analysis (Political Science, SSMART)

Gabriel Lenz and Alexander Sahn

An important and understudied area of hidden researcher discretion is the use of covariates. Researchers choose which covariates to include in statistical models and these choices affect the size and statistical significance of estimates reported in studies. How often does the statistical significance of published findings depend on these discretionary choices? The main hurdle to studying this problem is that researchers never know the true model and can always make a case that their choices are most plausible, closest to the true data generating process, or most likely to rule out alternative explanations. We attempt to surmount this hurdle through a meta-analysis of articles published in the American Journal of Political Science (AJPS). In almost 40% of observational studies, we find that researchers achieve conventional levels of statistical significance through covariate adjustments. Although that discretion may be justified, researchers almost never disclose or justify it.

The preprint for this project can be found here.

Integrated Theoretical Model of Condom Use for Young People in Sub-Saharan Africa (Psychology, SSMART)

Cleo Protogerou, Martin Hagger, and Blair Johnson

Background: Sub-Saharan African nations continue to carry the substantive burden of the HIV pandemic, and therefore, condom use promotion is a public health priority in that region. To be successful, condom promotion interventions need to be based on appropriate theory.
 
Objective: To develop and test an integrated theory of the determinants of young peoples’ condom use in Sub-Saharan Africa, using meta-analytic path analysis.
 
Method: Theory development was guided by summary frameworks of social-cognitive health predictors and by research predicting condom use in SSA that adopted social-cognitive theories. Attitudes, norms, control, risk perceptions, barriers to condom use, intentions, and previous condom use were included in our condom use model. We conducted an exhaustive database search, with additional hand-searches and author requests. Data were meta-analyzed with Hedges and Vevea’s (1998) random-effects models. Individual, societal, and study-level parameters were assessed as candidate moderators. Included studies were also critically appraised.
 
Results: Fifty-five studies (N = 55,069), comprising 72 independent data sets and representing thirteen Sub-Saharan African nations, were meta-analyzed. Statistically significant corrected correlations were obtained among the majority of the included constructs and were of sufficient size to be considered non-trivial. Path analyses revealed (a) significant direct and positive effects of attitudes, norms, control, and risk perceptions on condom use intentions, and of intention and control on condom use; and (b) significant negative effects of perceived barriers on condom use. Indirect effects of attitudes, norms, control, and risk perceptions on condom use, mediated by intentions, were also obtained. The inclusion of a formative research component moderated eight of the 26 effect sizes. The majority of studies (88%) were of questionable validity.
 
Conclusion: Our integrated theory provides an evidence-based framework to study antecedents of condom use in Sub-Saharan African youth, and to develop targets for effective condom promotion interventions.
 

The preprint for this project can be found here.

Welfare Comparisons Across Expenditure Surveys (Economics, SSMART)

Elliott Collins, Ethan Ligon, and Reajul Chowdhury

This project aims to replicate and combine three recent experiments on capital transfers to poor households in two distinct phases. The first phase will produce three concise internal replications. These will be accompanied by a report detailing the challenges faced and tools and methods used. The final goal will be to produce practical insights useful to students and new researchers. The second phase will combine these experiments in an extended analysis to explore how economic theory can allow for meta-analysis and comparative impact evaluation among datasets that would otherwise be problematic or even impossible to compare.

Investigation of Data Sharing Attitudes in the Context of a Meta-Analysis (Psychology, SSMART)

Joshua R. Polanin and Mary Terzian

Sharing individual participant data (IPD) among researchers, upon request, is an ethical and responsible practice. Despite numerous calls for this practice to be standard, however, research indicates that primary study authors are often unwilling to share IPD, even for use in a meta-analysis. The purpose of this project is to understand the reasons why primary study authors are unwilling to share their data and to investigate whether sending a data-sharing agreement, in the context of updating a meta-analysis, affects participants’ willingness to share IPD. Our project corresponds to the third research question asked in the proposal: Why do some researchers adopt transparent research practices, but not others? To investigate these issues, we plan to sample and survey at least 700 researchers whose studies have been included in recently published meta-analyses. The survey will ask participants about their attitudes on data sharing. Half of the participants will receive a hypothetical data-sharing agreement, half of the participants will not, and we will compare their attitudes toward data sharing after distributing the agreement. The results of this project will inform future meta-analysis projects seeking to collect IPD and will inform the community about data-sharing practices and barriers in general.

Will knowledge about more efficient study designs increase the willingness to pre-register? (Psychology, SSMART)

Daniel Lakens

Pre-registration is a straightforward way to make science more transparent and to control Type 1 error rates. Pre-registration is often presented as beneficial for science in general, but rarely as a practice that leads to immediate individual benefits for researchers. One benefit of pre-registered studies is that they allow for non-conventional research designs that are more efficient than conventional designs. For example, by performing one-tailed tests and sequential analyses, researchers can perform well-powered studies much more efficiently. Here, I examine whether such non-conventional but more efficient designs are considered appropriate by editors under the pre-condition that the analysis plans are pre-registered, and if so, whether researchers are more willing to pre-register their analysis plan to take advantage of the efficiency benefits of non-conventional designs.
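
To illustrate the efficiency argument with a quick calculation (assumed effect size and error rates; not part of the project itself), a one-tailed test needs noticeably fewer participants than a two-tailed test for the same power:

```python
# Sample size per group for 80% power to detect d = 0.5 at alpha = .05,
# two-tailed vs. one-tailed. All values are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
n_two_sided = power_calc.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                     alternative='two-sided')
n_one_sided = power_calc.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                     alternative='larger')
print(f"two-tailed: {n_two_sided:.0f} per group; one-tailed: {n_one_sided:.0f} per group")
```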

The preprint for this project can be found here.

Transparency and Reproducibility in Economics Research (BITSS Scholars, Economics)

There is growing interest in research transparency and reproducibility in economics and other fields. We survey existing work on these topics within economics, and discuss the evidence suggesting that publication bias, inability to replicate, and specification searching remain widespread problems in the discipline. We next discuss recent progress in this area, including through improved research design, study registration and pre-analysis plans, disclosure standards, and open sharing of data and materials, and draw on the experience in both economics and other social science fields. We conclude with a discussion of areas where consensus is emerging on the new practices, as well as approaches that remain controversial, and speculate about the most effective ways to make economics research more accurate, credible and reproducible in the future.

Reporting Guidance for Trial Protocols of Social Science Interventions (Social Science, SSMART)

Sean Grant

Protocols improve reproducibility and accessibility of social science research. Given deficiencies in trial protocol quality, the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) Statement provides an evidence-based set of items to describe in protocols of clinical trials on biomedical interventions. This manuscript introduces items in the SPIRIT Statement to a social science audience, explaining their application to protocols of social intervention trials. Additional reporting items of relevance to protocols of social intervention trials are also presented. Items, examples, and explanations are derived from the SPIRIT 2013 Statement, other guidance related to reporting of protocols and completed trials, publicly accessible trial registrations and protocols, and the results of an online Delphi process with social intervention trialists. Use of these standards by researchers, journals, trial registries, and educators should increase the transparency of trial protocols and registrations, and thereby increase the reproducibility and utility of social intervention research.

Working Paper: https://osf.io/x6mkb/

Distributed meta-analysis: A new approach for constructing collaborative literature reviews (Economics, SSMART)

Solomon Hsiang and James Rising

Scientists and consumers of scientific knowledge can struggle to synthesize the quickly evolving state of empirical research. Even where recent, comprehensive literature reviews and meta-analyses exist, there is frequent disagreement on the criteria for inclusion and the most useful partitions across the studies. To address these problems, we create an online tool for collecting, combining, and communicating a wide range of empirical results. The tool provides a collaborative database of statistical parameter estimates, which facilitates the sharing of key results. Scientists engaged in empirical research or in the review of others’ work can input rich descriptions of study parameters and empirically estimated relationships. Consumers of this information can filter the results according to study methodology, the studied population, and other attributes. Across any filtered collection of results, the tool calculates pooled and hierarchically modeled common parameters, as well as the range of variation between studies.
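
As a rough sketch of the kind of pooled estimates the tool would report for a filtered set of results, the snippet below applies the standard inverse-variance fixed-effect and DerSimonian-Laird random-effects formulas to made-up study estimates; the tool's own hierarchical models may differ.

```python
# Fixed-effect and random-effects pooling via inverse-variance weighting.
# Study estimates and variances below are illustrative only.
import numpy as np

theta = np.array([0.30, 0.10, 0.45, 0.22, 0.05])   # study-level estimates
var   = np.array([0.02, 0.01, 0.05, 0.03, 0.02])   # their sampling variances

w = 1.0 / var
theta_fe = np.sum(w * theta) / np.sum(w)            # fixed-effect pooled estimate

# DerSimonian-Laird estimate of between-study variance tau^2
Q = np.sum(w * (theta - theta_fe) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(theta) - 1)) / C)

w_re = 1.0 / (var + tau2)
theta_re = np.sum(w_re * theta) / np.sum(w_re)      # random-effects pooled estimate

print(f"fixed effect: {theta_fe:.3f}, random effects: {theta_re:.3f}, tau^2: {tau2:.4f}")
```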

Working Paper: https://osf.io/rj8v9/

Panel Data and Experimental Design (Economics, SSMART)

Fiona Burlig, Matt Woerman, and Louis Preonas

How should researchers design experiments to detect treatment effects with panel data? In this paper, we derive analytical expressions for the variance of panel estimators under non-i.i.d. error structures, which inform power calculations in panel data settings. Using Monte Carlo simulation, we demonstrate that, with correlated errors, traditional methods for experimental design result in experiments that are incorrectly powered under proper inference. Failing to account for serial correlation yields overpowered experiments in short panels and underpowered experiments in long panels. Using both data from a randomized experiment in China and a high-frequency dataset of U.S. electricity consumption, we show that these results hold in real-world settings. Our theoretical results enable us to achieve correctly powered experiments in both simulated and real data. This paper provides researchers with the tools to design well-powered experiments in panel data settings.
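
A minimal Monte Carlo sketch of the phenomenon described above (illustrative assumptions only; this is neither the authors' code nor their analytical formula): with AR(1) errors, the power promised by a naive calculation that treats every unit-period observation as independent can differ sharply from the power an experiment actually delivers under valid, unit-clustered inference.

```python
# Simulate a short panel experiment with AR(1) errors and compare the realized
# rejection rate (using unit-level collapsing, which respects within-unit
# correlation) to a naive i.i.d. power calculation. Parameters are arbitrary.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(42)
n_units, n_pre, n_post = 60, 4, 4        # a short panel
effect, rho, sigma = 0.25, 0.5, 1.0      # treatment effect and AR(1) error process

def one_experiment():
    T = n_pre + n_post
    eps = np.empty((n_units, T))
    eps[:, 0] = rng.normal(0, sigma, n_units)
    for t in range(1, T):
        eps[:, t] = rho * eps[:, t - 1] + rng.normal(0, sigma * np.sqrt(1 - rho**2), n_units)
    treated = rng.permutation(n_units) < n_units // 2   # randomize half the units
    y = eps.copy()
    y[treated, n_pre:] += effect                        # effect turns on post-treatment
    # Collapse to one pre/post difference per unit, then compare treated vs. control.
    diff = y[:, n_pre:].mean(axis=1) - y[:, :n_pre].mean(axis=1)
    _, p_val = stats.ttest_ind(diff[treated], diff[~treated])
    return p_val < 0.05

simulated_power = np.mean([one_experiment() for _ in range(2000)])

# Naive calculation: pretend every unit-period observation is independent.
naive_power = TTestIndPower().power(effect_size=effect / sigma,
                                    nobs1=(n_units // 2) * (n_pre + n_post),
                                    alpha=0.05)
print(f"simulated power (clustered design): {simulated_power:.2f}")
print(f"naive i.i.d. power calculation:     {naive_power:.2f}")
```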

The preprint for this project can be found here.

Examining the Reproducibility of Meta-Analyses in Psychology (Psychology, SSMART)

Daniel Lakens

Meta-analyses are an important tool to evaluate the literature. It is essential that meta-analyses can easily be reproduced, both to allow researchers to evaluate the impact of subjective choices on meta-analytic effect sizes and to update meta-analyses as new data come in or as novel statistical techniques (for example, to correct for publication bias) are developed. Research in medicine has revealed that meta-analyses often cannot be reproduced. In this project, we examined the reproducibility of meta-analyses in psychology by reproducing twenty published meta-analyses. Reproducing published meta-analyses was surprisingly difficult. Of the meta-analyses published in 2013-2014, 96% did not adhere to reporting guidelines, and a third did not contain a table specifying all individual effect sizes. Five of the 20 randomly selected meta-analyses we attempted to reproduce could not be reproduced at all due to lack of access to raw data, no details about the effect sizes extracted from each study, or a lack of information about how effect sizes were coded. In the remaining meta-analyses, differences between the reported and reproduced effect sizes or sample sizes were common. We discuss a range of possible improvements, such as more clearly indicating which data were used to calculate an effect size, specifying all individual effect sizes, adding detailed information about the equations used and about how multiple effect size estimates from the same study are combined, and sharing raw data retrieved from original authors or unpublished research reports. This project clearly illustrates that there is a lot of room for improvement when it comes to the transparency and reproducibility of published meta-analyses.

The preprint for this project can be found here.

Aggregating Distributional Treatment Effects: A Bayesian Hierarchical Analysis of the Microcredit Literature (Economics, SSMART)

Rachael Meager

This paper develops methods to aggregate evidence on distributional treatment effects from multiple studies conducted in different settings, and applies them to the microcredit literature. Several randomized trials of expanding access to microcredit found substantial effects on the tails of household outcome distributions, but the extent to which these findings generalize to future settings was not known. Aggregating the evidence on sets of quantile effects poses additional challenges relative to average effects because distributional effects must imply monotonic quantiles and pass information across quantiles. Using a Bayesian hierarchical framework, I develop new models to aggregate distributional effects and assess their generalizability. For continuous outcome variables, the methodological challenges are addressed by applying transforms to the unknown parameters. For partially discrete variables such as business profits, I use contextual economic knowledge to build tailored parametric aggregation models. I find generalizable evidence that microcredit has negligible impact on the distribution of various household outcomes below the 75th percentile, but above this point there is no generalizable prediction. Thus, there is strong evidence that microcredit typically does not lead to worse outcomes at the group level, but no generalizable evidence on whether it improves group outcomes. Households with previous business experience account for the majority of the impact in the tails and see large increases in the upper tail of the consumption distribution in particular.
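
A stylized two-level sketch of this kind of hierarchical aggregation, written for a single quantile u across K studies (illustrative only; the paper's models additionally impose monotonicity across quantiles and use tailored likelihoods for partially discrete outcomes):

```latex
\hat{\tau}_k(u) \mid \tau_k(u) \sim \mathcal{N}\big(\tau_k(u),\ \widehat{se}_k^{\,2}(u)\big),
\qquad k = 1, \dots, K

\tau_k(u) \mid \tau(u), \sigma^2(u) \sim \mathcal{N}\big(\tau(u),\ \sigma^2(u)\big)

% \tau(u): generalizable ("parent") quantile treatment effect at quantile u
% \sigma(u): cross-study heterogeneity, which governs how far results generalize
```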

Working Paper: https://osf.io/rj9s4/

Using P-Curve to Assess Evidentiary Value of Social Psychology Publications (Psychology, SSMART)

Leif Nelson

The proposed project will utilize p-curve, a new meta-analytic tool, to assess the evidentiary value of studies from social psychology and behavioral marketing. P-curve differs from other meta-analytic methods by analyzing the distribution of significant p-values to determine the likelihood that a set of studies provides evidence for the existence of an effect; when a set of studies lacks evidentiary value, p-curve can also assess whether the studies are so underpowered that they would detect a true effect only 33% of the time. We will apply p-curve to each empirical paper in the first issue of 2014 in three top-tier journals: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Consumer Research. Additionally, we will conduct a direct replication of one study from each of these issues.
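
A minimal sketch of the core p-curve test for evidential value (the continuous Stouffer version), using made-up p-values; a full p-curve analysis involves additional steps, such as the test for inadequate (33%) power.

```python
# Sketch of the p-curve "evidential value" test. Under the null of no effect,
# significant p-values are uniform on (0, .05), so the pp-values p/.05 are
# uniform on (0, 1); a right-skewed p-curve (pp-values piling up near 0) yields
# a strongly negative Stouffer z, indicating evidential value. P-values are made up.
import numpy as np
from scipy import stats

p_values = np.array([0.001, 0.012, 0.024, 0.049, 0.003, 0.031])
sig = p_values[p_values < 0.05]          # p-curve uses significant results only

pp = sig / 0.05                          # probability of a p this small under the null
z = stats.norm.ppf(pp).sum() / np.sqrt(len(pp))   # Stouffer combination

print(f"Stouffer z = {z:.2f}, one-sided p = {stats.norm.cdf(z):.4f}")
```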

External Validity in U.S. Education Research (Economics, SSMART)

Sean Tanner

As methods for internal validity improve, methodological concerns have shifted toward assessing how well the research community can extrapolate from individual studies. Under recent federal granting initiatives, over $1 billion has been awarded to education programs that have been validated by a single randomized or natural experiment. If these experiments have weak external validity, scientific advancement is delayed and federal education funding might be squandered. By analyzing trials clustered within interventions, this research describes how well a single study’s results are predicted by additional studies of the same intervention, and how well study samples match the interventions’ target populations. I find that U.S. education trials are conducted on samples of students who are systematically less white and more socioeconomically disadvantaged than the overall student population. Moreover, I find that effect sizes tend to decay in the second and third trials of interventions.

Working Paper: https://osf.io/w5bjs/

Getting it Right with Meta-Analysis: Correcting Effect Sizes for Publication Bias in Meta-Analyses from Psychology and Medicine (SSMART, Statistics)

Robbie van Aert

Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existent effects. Although there is consensus that publication bias is widespread, how strongly it affects different scientific literatures is currently less well known. We examine evidence of publication bias in a large-scale data set of meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and the Cochrane Database of Systematic Reviews (representing meta-analyses from medical research). Psychology is compared to medicine because medicine has a longer history than psychology with respect to preregistration of studies as an effort to counter publication bias. The severity of publication bias and its inflating effects on effect size estimation were systematically studied by applying state-of-the-art publication bias tests and the p-uniform method for estimating effect size corrected for publication bias. Publication bias was detected in only a small number of homogeneous subsets. The lack of evidence of publication bias was probably caused by the small number of effect sizes in most subsets, which resulted in low statistical power for publication bias tests. Regression analyses showed that small-study effects were nonetheless present in the meta-analyses; that is, smaller studies were accompanied by larger effect sizes, possibly due to publication bias. Overestimation of effect size due to publication bias was similar for meta-analyses published in Psychological Bulletin and the Cochrane Database of Systematic Reviews.
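
A minimal sketch of an Egger-type small-study-effects regression of the sort referenced above (not the authors' exact specification); effect sizes and standard errors are made-up numbers.

```python
# Egger regression: regress the standardized effect on precision. An intercept
# far from zero signals funnel-plot asymmetry, i.e., smaller studies reporting
# systematically larger effects. Numbers below are illustrative only.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48, 0.09, 0.61, 0.20])
ses     = np.array([0.20, 0.15, 0.25, 0.08, 0.22, 0.07, 0.30, 0.10])

z = effects / ses
precision = 1.0 / ses
fit = sm.OLS(z, sm.add_constant(precision)).fit()

print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```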

The working paper for this project, titled “Publication bias in meta-analyses from psychology and medical research: A meta-meta-analysis,” can be found on BITSS Preprints here.

Open Science & Development Engineering: Evidence to Inform Improved Replication Models (Engineering, SSMART)

Paul Gertler

[Updated on April 12, 2017] Replication is a critical component of scientific credibility. Replication increases our confidence in the reliability of knowledge generated by original research. Yet replication is the exception rather than the rule in economics. Few replications are completed and even fewer are published. Indeed, in the last six years, only 11 replication studies were published in the top 11 (empirical) economics journals. In this paper, we examine why so little replication is done and propose changes to the incentives to replicate in order to solve these problems.

Our study focuses on code replication, which seeks to reproduce the results in the original paper using the same data as the original study. Specifically, these studies seek to replicate exactly the same analyses performed by the authors. The objective is to verify that the analysis code is correct and confirm that there are no coding errors. This is usually done in a two-step process. The first step is to reconstruct the sample and variables used in the analysis from the raw data. The second step is to confirm that the analysis code (i.e., the code that fits a statistical model to the data) reproduces the reported results. By definition, the results reported in the original paper can be replicated if this two-step procedure is successful. The threat of code replication gives authors an incentive to put more effort into writing the code to avoid errors, and an incentive not to purposely misreport results.
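
An illustrative sketch of the two-step check described above; the file name, variable names, model specification, and "reported" estimates are all hypothetical.

```python
# Step 1: reconstruct the analysis sample from the raw data.
# Step 2: re-run the reported specification and compare to the published estimate.
import pandas as pd
import statsmodels.formula.api as smf

raw = pd.read_csv("raw_survey.csv")                       # hypothetical raw data file
sample = raw.dropna(subset=["outcome", "treatment"])
sample = sample[sample["baseline_completed"] == 1]        # stated sample restrictions

fit = smf.ols("outcome ~ treatment + C(stratum)", data=sample).fit(
    cov_type="cluster", cov_kwds={"groups": sample["village_id"]}
)

reported_coef, reported_se = 0.137, 0.052                 # hypothetical numbers from the paper
tol = 1e-3
print("coefficient reproduced:", abs(fit.params["treatment"] - reported_coef) < tol)
print("standard error reproduced:", abs(fit.bse["treatment"] - reported_se) < tol)
```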

The preprint for this project can be found here.

How often should we believe positive results? (Economics, SSMART)

Eva Vivalt and Aidan Coville

High false positive and false negative reporting probabilities (FPRP and FNRP) reduce the veracity of the available research in a particular field, undermining the value of evidence to inform policy. However, we rarely have good estimates of false positive and false negative rates, since both the prior and study power are required for their calculation, and these are not typically available or directly knowable without making ad hoc assumptions. We will leverage AidGrade’s dataset of 647 impact evaluations in development economics and complement it by gathering estimates of priors and reasonable minimum detectable effects for various intervention-outcome combinations from policymakers, development practitioners, and researchers in order to generate estimates of FPRP and FNRP rates in development economics.
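
For reference, the standard definitions, which require exactly the inputs the authors propose to elicit: here π is the prior probability that the tested hypothesis is true, α the significance level, and 1 − β the study's power.

```latex
\mathrm{FPRP} = P(\text{null true} \mid \text{significant result})
= \frac{\alpha\,(1-\pi)}{\alpha\,(1-\pi) + (1-\beta)\,\pi}

\mathrm{FNRP} = P(\text{effect real} \mid \text{non-significant result})
= \frac{\beta\,\pi}{\beta\,\pi + (1-\alpha)\,(1-\pi)}
```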

Bangladesh Hygiene Study (2015) (Data Publication Grant, Economics)

David Levine (Professor of Business Administration, UC Berkeley) was awarded a Data Publication Grant in March 2015 in order to publish the data associated with the following paper: Credit Constraints, Discounting and Investment in Health: Evidence from Micropayments for Clean Water in Dhaka.
View the Data. Read the Paper.

US Electoral Study (2010) (Data Publication Grant, Political Science)

Spencer Piston (Assistant Professor of Political Science, Syracuse University) was awarded a Data Publication Grant in March 2015 in order to publish the data associated with the following papers: Why State Constitutions Differ in their Treatment of Same-Sex Couples and Is Implicit Racial Prejudice Against Blacks Politically Consequential? Evidence from the AMP.

Why State Constitutions Differ in their Treatment of Same-Sex Couples: 

View the Data. Read the Paper.

Is Implicit Racial Prejudice Against Blacks Politically Consequential? Evidence from the AMP:

View the Data. Read the Paper.

Sri Lanka Savings Study (2013) (Data Publication Grant, Economics)

Craig McIntosh (Professor of Economics, UC San Diego) was awarded a Data Publication Grant in March 2015 in order to publish the data associated with the following papers: Deposit Collecting: Unbundling the Role of Frequency, Salience, and Habit Formation in Generating Savings and What Are The Headwaters of Formal Savings? Experimental Evidence from Sri Lanka.

Deposit Collecting: Unbundling the Role of Frequency, Salience, and Habit Formation in Generating Savings:

View the Data. Read the Paper.

What Are The Headwaters of Formal Savings? Experimental Evidence from Sri Lanka:

View the Data. Read the Paper.

Ghana Electoral Fraud Study (2012) (Data Publication Grant, Political Science)

Miriam Golden (Professor of Political Science, UCLA) was awarded a Data Publication Grant in March 2015 in order to publish the data associated with the following papers: Biometric Identification Machine Failure and Electoral Fraud in a Competitive Democracy (Read the Paper); Political Parties and Electoral Fraud in Ghana’s Competitive Democracy (Read the Paper).
View the Data.

Conservative Tests under Satisficing Models of Publication Bias (BITSS Scholars, Economics, Social Science)

Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that undo the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%—rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, 30% of published t-statistics fall between the standard and adjusted cutoffs.
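
A back-of-the-envelope check of the rule of thumb quoted above, assuming an extreme file-drawer model in which only conventionally significant results are ever observed (a sketch under that assumption, not the paper's own derivation):

```python
# If only results with |t| > 1.96 survive the file drawer, then under the null
# the observed t-statistics follow a standard normal truncated to |t| > 1.96.
# The size-0.05 adjusted critical value c solves P(|Z| > c) / P(|Z| > 1.96) = 0.05.
from scipy import stats

p_publish_null = 2 * stats.norm.sf(1.96)    # chance a true null clears the significance filter
target = 0.05 * p_publish_null              # unconditional tail probability required
c = stats.norm.isf(target / 2)              # two-sided adjusted critical value
print(f"adjusted critical value = {c:.2f}") # about 3.0, matching the t > 3 rule of thumb
```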

Publication available at PLOS ONE – http://dx.doi.org/10.1371/journal.pone.0149590

Many analysts, one dataset: Making transparent how variations in analytical choices affect results (BITSS Scholars, Interdisciplinary)

In a standard scientific analysis, one analyst or team presents a single analysis of a data set. However, there are often a variety of defensible analytic strategies that could be used on the same data. Variation in those strategies could produce very different results.

In this project, we introduce the novel approach of “crowdsourcing a dataset.” We hope to recruit multiple independent analysts to investigate the same research question on the same data set in whatever manner they see as best. This approach should be especially useful for complex data sets in which a variety of analytic approaches could be used, and when dealing with controversial issues about which researchers and others have very different priors. If everyone comes up with the same results, then scientists can speak with one voice. If not, the subjectivity and conditionality on analysis strategy is made transparent.

Promoting an open research culture (BITSS Scholars, Interdisciplinary)

Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4–6).

Promoting Transparency in Social Science Research (BITSS Scholars, Interdisciplinary, Social Science)

There is growing appreciation for the advantages of experimentation in the social sciences. Policy-relevant claims that in the past were backed by theoretical arguments and inconclusive correlations are now being investigated using more credible methods. Changes have been particularly pronounced in development economics, where hundreds of randomized trials have been carried out over the last decade. When experimentation is difficult or impossible, researchers are using quasi-experimental designs. Governments and advocacy groups display a growing appetite for evidence-based policy-making. In 2005, Mexico established an independent government agency to rigorously evaluate social programs, and in 2012, the U.S. Office of Management and Budget advised federal agencies to present evidence from randomized program evaluations in budget requests (1, 2).