Research Transparency in Brazilian Political and Social Science: A First Look (Political Science, SSMART)

George Avelino and Scott Desposato

We propose to conduct the first meta-analysis and reproducibility analysis of political science in Brazil. Funds will be used for graduate student support and other expenses to collect data and assess the reproducibility of all articles published in the last five years in the three leading Brazilian political science and general social science journals: the Brazilian Political Science Review, the Revista de Ciência Política, and Dados. A meta-analysis will test the relationship between the type and field of a study and its reproducibility, and results will be presented at the annual meeting of the Brazilian National Association of Political Scientists, the Associação Brasileira de Ciência Política (ABCP).

Pre-Analysis Plans: A Stocktaking (Interdisciplinary, SSMART)

George Ofosu and Daniel Posner

The evidence-based community (including BITSS) has held up preregistration as a solution to the problem of research credibility, but—ironically—without any evidence that preregistration works. The goal of our proposed research is to provide an evidentiary base for assessing whether pre-analysis plans (PAPs)—as they are currently used—are effective in achieving their stated objectives of preventing “fishing,” reducing scope for the post-hoc adjustment of research hypotheses, and solving the “file drawer problem.” We aim to do this by analyzing a random sample of 300 studies that have been pre-registered on the AEA and EGAP registration platforms, evenly distributed across studies that are still in progress, completed and resulting in a publicly available paper, and completed but (as far as we can determine) not resulting in a publicly available paper. Given the significant costs of researcher time and energy in preparing PAPs, and the implications that adhering to them may have for opportunities for breakthroughs that come from unexpected, surprise results (Laitin 2013; Olken 2015), it is critical to take stock of whether PAPs are working.

MetaLab: Paving the way for easy-to-use, dynamic, crowdsourced meta-analyses (Cognitive Science, Interdisciplinary, SSMART)

Christina Bergmann, Sho Tsuji, Molly Lewis, Mika Braginsky, Page Piccinini, Alejandrina Cristia, and Michael C. Frank

Aggregating data across studies is a central challenge in ensuring cumulative, reproducible science. Meta-analysis is a key statistical tool for this purpose. The goal of this project is to support meta-analysis using MetaLab, an interface and central repository. Our user-friendly, online platform supports collaborative hosting and creation of dynamic meta-analyses, based entirely on open tools. MetaLab implements sustainable curation and update practices, promotes community use for study design, and provides ready-to-use educational material. The platform will lower the workload for individual researchers (or groups of researchers) through a crowdsourcing infrastructure, thereby automating and structuring parts of the workflow. MetaLab is supported by a community of researchers and students who can become creators, curators, contributors, and/or users of meta-analyses. Visualization and computation tools allow instant access to meta-analytic results. The analysis pipeline is open and reproducible, enabling external scrutiny. Additionally, an infrastructure for meta-meta-analyses allows MetaLab to address broader conceptual and methodological questions across individual meta-analyses. MetaLab is initially being developed for the case study of early language development, a field in which meta-analytic methods are vastly underused and which will benefit greatly from the new meta-analyses created in this project; however, extensions to other subfields of the social sciences are readily possible.

Publication Bias and Editorial Statement on Negative Findings (Economics, SSMART)

Abel Brodeur and Cristina Blanco-Perez

The convention of reporting results with 95 or 90 percent confidence has led the academic community to accept “starry” stories with marginally significant coefficients more readily than “starless” ones with insignificant coefficients. In February 2015, the editors of eight health economics journals sent out an editorial statement encouraging referees to accept studies that “have potential scientific and publication merit regardless of whether such studies’ empirical findings do or do not reject null hypotheses that may be specified.” In this study, we test whether this editorial statement on negative results induced a change in the number of published papers not rejecting the null. More precisely, we collect p-values from health economics journals and compare the distribution of tests before and after the editorial statement.
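
As a rough illustration of the comparison described above (my sketch, not the authors' code or data; the p-values, the 0.05 cutoff, and the two-proportion z-test are all illustrative assumptions), one could tabulate the share of published tests that fail to reject the null before and after the editorial statement:

```python
# Illustrative sketch: compare the share of "negative" (non-rejecting) tests
# before vs. after the February 2015 editorial statement.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical p-values collected from articles in the two periods.
pvals_before = np.array([0.001, 0.03, 0.04, 0.06, 0.21, 0.04, 0.02])
pvals_after  = np.array([0.001, 0.04, 0.12, 0.35, 0.07, 0.48, 0.02])

# Tests that do not reject the null at the 5% level.
neg = np.array([(pvals_before > 0.05).sum(), (pvals_after > 0.05).sum()])
n   = np.array([pvals_before.size, pvals_after.size])

stat, p = proportions_ztest(neg, n)
print(f"share of null results: before={neg[0]/n[0]:.2f}, "
      f"after={neg[1]/n[1]:.2f}, two-proportion z-test p={p:.3f}")
```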

Working Paper: https://osf.io/45eba/

Developing a Guideline for Reporting Mediation Analyses (AGReMA) in randomized trials and observational studies (Social Science, SSMART)

Hopin Lee, James H McAuley, Rob Herbert, Steven Kamper, Nicolas Henschke, and Christopher M Williams

Investigating causal mechanisms using mediation analysis is becoming increasingly common in psychology, public health, and social science. Despite this increasing popularity, the accuracy and completeness of reported mediation analyses are inconsistent. Inadequate and inaccurate reporting of research stifles replication, limits assessment of potential bias, complicates meta-analyses, and wastes resources. There is a pressing need to develop a reporting standard for mediation analyses. Up to now, there have been no registered initiatives on the “Enhancing the QUAlity and Transparency Of health Research” (EQUATOR) network that guide the reporting of mediation analyses. Our proposed project aims to improve the reporting quality of future mediation analyses by developing a reporting guideline through a program of established methodologies (systematic review, Delphi survey, consensus meetings, and guideline dissemination). The development and implementation of this guideline will improve the transparency of research findings on causal mechanisms across multiple disciplines.

Working Paper: https://osf.io/g7s9s/

A Large-Scale, Interdisciplinary Meta-Analysis on Behavioral Economics Parameters (Economics, SSMART)

Colin Camerer and Taisuke Imai

We propose to conduct large-scale meta-analyses of published and unpublished research in economics, psychology, and neuroscience (and any other fields which emerge) to accumulate knowledge about measured parameters explaining preferences over risk and time. In the risk section, we will locate and meta-analyze studies which estimate parameters associated with curvature of utility and loss aversion (the ratio between disutility of loss and utility of gain). In the time section, we will locate and meta-analyze studies which estimate parameters associated with near- and long-term discounting (exponential, hyperbolic, and quasi-hyperbolic parameters). Meta-analysis is the proper tool because: it is technologically easier than ever before; there are standard methods to weigh different estimates according to study quality; if unpublished work is found, it can help estimate whether there are biases in favor of publishing certain types of confirmatory results (or a bias against publishing null results); and looking at a broad range of studies is the most efficient way to help resolve debates about how preference parameters vary across populations (e.g., how much less patient are young people?) and across methods (e.g., are time preferences inferred from monetary rewards different from those inferred from non-monetary rewards?).
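
For readers unfamiliar with the parameters named above, the following are the standard textbook forms they refer to (a reference sketch only; the exact specifications pooled in the meta-analysis may differ):

```latex
% Quasi-hyperbolic (beta-delta) discounting of a reward at delay t:
\[
  D(t) =
  \begin{cases}
    1 & t = 0,\\
    \beta\,\delta^{t} & t \ge 1,
  \end{cases}
  \qquad 0 < \beta \le 1,\quad 0 < \delta \le 1,
\]
% where beta captures near-term (present-bias) discounting and delta
% captures long-run exponential discounting.

% Prospect-theory value function with curvature alpha and loss aversion lambda:
\[
  v(x) =
  \begin{cases}
    x^{\alpha} & x \ge 0,\\
    -\lambda\,(-x)^{\alpha} & x < 0,
  \end{cases}
  \qquad \lambda > 1 \text{ means losses loom larger than gains.}
\]
```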

Bayesian Evidence Synthesis: New Meta-Analytic Procedures for Statistical Evidence (Psychology, SSMART)

Eric-Jan Wagenmakers, Raoul Grasman, Quentin F. Gronau, and Felix Schönbrodt

The proposed project will develop a suite of meta-analytic techniques for Bayesian evidence synthesis. The proposed suite addresses a series of challenges that currently constrain classical meta-analytic procedures. Specifically, the proposed techniques will allow (1) the quantification of evidence, both for and against the absence of an effect; (2) the monitoring of evidence as new studies accumulate over time; (3) the graceful and principled “model-averaged” combination of fixed-effects and random-effects meta-analysis; and (4) the principled planning of a new study in order to maximize the probability that it will lead to a worthwhile gain in knowledge.
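
As a minimal illustration of points (1) and (2), and not the proposed suite itself, the sketch below assumes a fixed-effect normal model with a N(0, 1) prior on the common effect under H1 and computes the Bayes factor for the point null via the Savage-Dickey density ratio, updating it as each new (hypothetical) study arrives:

```python
import numpy as np
from scipy.stats import norm

def bf01_fixed_effect(estimates, std_errors, prior_sd=1.0):
    """BF01 for H0: delta = 0 vs. H1: delta ~ N(0, prior_sd^2),
    assuming study estimates are normal with known standard errors."""
    y = np.asarray(estimates, float)
    se = np.asarray(std_errors, float)
    prec = 1.0 / prior_sd**2 + np.sum(1.0 / se**2)   # posterior precision
    mean = np.sum(y / se**2) / prec                  # posterior mean (prior mean 0)
    post_sd = np.sqrt(1.0 / prec)
    # Savage-Dickey: posterior density at 0 divided by prior density at 0.
    return norm.pdf(0.0, mean, post_sd) / norm.pdf(0.0, 0.0, prior_sd)

# Hypothetical studies arriving over time: (effect estimate, standard error).
studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.12)]
for k in range(1, len(studies) + 1):
    est, se = zip(*studies[:k])
    print(f"after {k} study(ies): BF01 = {bf01_fixed_effect(est, se):.3f}")
```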

Assessing Bias from the (Mis)Use of Covariates: A Meta-Analysis (Political Science, SSMART)

Gabriel Lenz and Alexander Sahn

An important and understudied area of hidden researcher discretion is the use of covariates. Researchers choose which covariates to include in statistical models and these choices affect the size and statistical significance of estimates reported in studies. How often does the statistical significance of published findings depend on these discretionary choices? The main hurdle to studying this problem is that researchers never know the true model and can always make a case that their choices are most plausible, closest to the true data generating process, or most likely to rule out alternative explanations. We attempt to surmount this hurdle through a meta-analysis of articles published in the American Journal of Political Science (AJPS). In almost 40% of observational studies, we find that researchers achieve conventional levels of statistical significance through covariate adjustments. Although that discretion may be justified, researchers almost never disclose or justify it.
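
To make the kind of discretion at issue concrete, here is a small sketch (not the authors' procedure; the simulated data, candidate covariates, and 5% threshold are illustrative assumptions) of how one can check how often a key coefficient clears conventional significance across all possible covariate subsets:

```python
# Illustrative specification check: how often does the coefficient on x
# reach p < .05 across every subset of candidate covariates?
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "x":  rng.normal(size=n),   # variable of interest
    "z1": rng.normal(size=n),   # candidate covariates
    "z2": rng.normal(size=n),
    "z3": rng.normal(size=n),
})
df["y"] = 0.08 * df["x"] + 0.5 * df["z1"] + rng.normal(size=n)

candidates = ["z1", "z2", "z3"]
n_significant, n_specs = 0, 0
for k in range(len(candidates) + 1):
    for subset in itertools.combinations(candidates, k):
        formula = "y ~ x" + "".join(f" + {z}" for z in subset)
        fit = smf.ols(formula, data=df).fit()
        n_significant += fit.pvalues["x"] < 0.05
        n_specs += 1
print(f"{n_significant}/{n_specs} specifications give p < .05 on x")
```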

The preprint for this project can be found here.

Integrated Theoretical Model of Condom Use for Young People in Sub-Saharan Africa (Psychology, SSMART)

Cleo Protogerou, Martin Hagger, and Blair Johnson

Background: Sub-Saharan African nations continue to carry a substantial burden of the HIV pandemic, and therefore, condom use promotion is a public health priority in that region. To be successful, condom promotion interventions need to be based on appropriate theory.
 
Objective: To develop and test an integrated theory of the determinants of young peoples’ condom use in Sub-Saharan Africa, using meta-analytic path analysis.
 
Method: Theory development was guided by summary frameworks of social-cognitive predictors of health behavior and by research applying social-cognitive theories to predict condom use in Sub-Saharan Africa. Attitudes, norms, control, risk perceptions, barriers to condom use, intentions, and previous condom use were included in our condom use model. We conducted an exhaustive database search, with additional hand-searches and author requests. Data were meta-analyzed with Hedges and Vevea’s (1998) random-effects models. Individual, societal, and study-level parameters were assessed as candidate moderators. Included studies were also critically appraised.
 
Results: Fifty-five studies (N = 55,069), comprising 72 independent data sets and representing thirteen Sub-Saharan African nations, were meta-analyzed. Statistically significant corrected correlations were obtained among the majority of the included constructs, and these were of sufficient size to be considered non-trivial. Path analyses revealed (a) significant direct and positive effects of attitudes, norms, control, and risk perceptions on condom use intentions, and of intention and control on condom use; and (b) significant negative effects of perceived barriers on condom use. Significant indirect effects of attitudes, norms, control, and risk perceptions on condom use, mediated by intentions, were also obtained. The inclusion of a formative research component moderated eight of the 26 effect sizes. The majority of studies (88%) were of questionable validity.
 
Conclusion: Our integrated theory provides an evidence-based framework to study antecedents of condom use in Sub-Saharan African youth, and to develop targets for effective condom promotion interventions.
 

The preprint for this project can be found here.

Welfare Comparisons Across Expenditure Surveys (Economics, SSMART)

Elliott Collins, Ethan Ligon, and Reajul Chowdhury

This project aims, in two distinct phases, to replicate and combine three recent experiments on capital transfers to poor households. The first phase will produce three concise internal replications, accompanied by a report detailing the challenges faced and the tools and methods used. The goal of this phase is to produce practical insights useful to students and new researchers. The second phase will combine these experiments in an extended analysis to explore how economic theory can allow for meta-analysis and comparative impact evaluation among datasets that would otherwise be problematic or even impossible to compare.

Investigation of Data Sharing Attitudes in the Context of a Meta-Analysis (Psychology, SSMART)

Joshua R. Polanin and Mary Terzian

Sharing individual participant data (IPD) among researchers, upon request, is an ethical and responsible practice. Despite numerous calls for this practice to be standard, however, research indicates that primary study authors are often unwilling to share IPD, even for use in a meta-analysis. The purpose of this project is to understand the reasons why primary study authors are unwilling to share their data and to investigate whether sending a data-sharing agreement, in the context of updating a meta-analysis, affects participants’ willingness to share IPD. Our project corresponds to the third research question asked in the proposal: Why do some researchers adopt transparent research practices, but not others? To investigate these issues, we plan to sample and survey at least 700 researchers whose studies have been included in recently published meta-analyses. The survey will ask participants about their attitudes on data sharing. Half of the participants will receive a hypothetical data-sharing agreement and half will not; we will then compare the two groups’ attitudes toward data sharing after the agreement is distributed. The results of this project will inform future meta-analysis projects seeking to collect IPD and will inform the community about data-sharing practices and barriers in general.

Will knowledge about more efficient study designs increase the willingness to pre-register? (Psychology, SSMART)

Daniel Lakens

Pre-registration is a straightforward way to make science more transparent and to control Type 1 error rates. Pre-registration is often presented as beneficial for science in general, but rarely as a practice that leads to immediate individual benefits for researchers. One benefit of pre-registered studies is that they allow for non-conventional research designs that are more efficient than conventional designs. For example, by performing one-tailed tests and sequential analyses, researchers can run well-powered studies much more efficiently. Here, I examine whether such non-conventional but more efficient designs are considered appropriate by editors under the pre-condition that the analysis plans are pre-registered, and if so, whether researchers are more willing to pre-register their analysis plan to take advantage of the efficiency benefits of non-conventional designs.
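
The efficiency gain from a directional test alone is easy to quantify. The sketch below (my example, with an assumed effect size of d = 0.5 and 80% power, not figures from the proposal) uses a standard power calculation to compare required sample sizes:

```python
# Required sample size per group for an independent-samples t-test,
# two-sided versus directional (one-tailed), at alpha = .05 and 80% power.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
n_two_sided = power.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                alternative="two-sided")
n_one_sided = power.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                alternative="larger")
print(f"two-sided: {n_two_sided:.0f} per group; one-tailed: {n_one_sided:.0f} per group")
```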

The preprint for this project can be found here.

Reporting Guidance for Trial Protocols of Social Science Interventions (Social Science, SSMART)

Sean Grant

Protocols improve reproducibility and accessibility of social science research. Given deficiencies in trial protocol quality, the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) Statement provides an evidence-based set of items to describe in protocols of clinical trials on biomedical interventions. This manuscript introduces items in the SPIRIT Statement to a social science audience, explaining their application to protocols of social intervention trials. Additional reporting items of relevance to protocols of social intervention trials are also presented. Items, examples, and explanations are derived from the SPIRIT 2013 Statement, other guidance related to reporting of protocols and completed trials, publicly accessible trial registrations and protocols, and the results of an online Delphi process with social intervention trialists. Use of these standards by researchers, journals, trial registries, and educators should increase the transparency of trial protocols and registrations, and thereby increase the reproducibility and utility of social intervention research.

Working Paper: https://osf.io/x6mkb/

Distributed meta-analysis: A new approach for constructing collaborative literature reviews (Economics, SSMART)

Solomon Hsiang and James Rising

Scientists and consumers of scientific knowledge can struggle to synthesize the quickly evolving state of empirical research. Even where recent, comprehensive literature reviews and meta-analyses exist, there is frequent disagreement on the criteria for inclusion and the most useful partitions across the studies. To address these problems, we create an online tool for collecting, combining, and communicating a wide range of empirical results. The tool provides a collaborative database of statistical parameter estimates, which facilitates the sharing of key results. Scientists engaged in empirical research or in the review of others’ work can input rich descriptions of study parameters and empirically estimated relationships. Consumers of this information can filter the results according to study methodology, the studied population, and other attributes. Across any filtered collection of results, the tool calculates pooled and hierarchically modeled common parameters, as well as the range of variation between studies.
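
A minimal sketch of the pooling step such a tool needs (an assumed implementation, not the tool's actual code; the input estimates are made up) is a fixed-effect and DerSimonian-Laird random-effects combination over whatever set of parameter estimates a user has filtered:

```python
import numpy as np

def pool(estimates, std_errors):
    """Fixed-effect and DerSimonian-Laird random-effects pooled estimates."""
    y, se = np.asarray(estimates, float), np.asarray(std_errors, float)
    w = 1.0 / se**2
    fixed = np.sum(w * y) / np.sum(w)                    # fixed-effect mean
    q = np.sum(w * (y - fixed)**2)                       # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))   # between-study variance
    w_re = 1.0 / (se**2 + tau2)
    random = np.sum(w_re * y) / np.sum(w_re)             # random-effects mean
    return fixed, random, tau2

print(pool([0.20, 0.45, 0.10, 0.33], [0.08, 0.10, 0.06, 0.12]))
```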

Working Paper: https://osf.io/rj8v9/

Panel Data and Experimental Design (Economics, SSMART)

Fiona Burlig, Matt Woerman, and Louis Preonas

How should researchers design experiments to detect treatment effects with panel data? In this paper, we derive analytical expressions for the variance of panel estimators under non-i.i.d. error structures, which inform power calculations in panel data settings. Using Monte Carlo simulation, we demonstrate that, with correlated errors, traditional methods for experimental design result in experiments that are incorrectly powered under proper inference. Failing to account for serial correlation yields overpowered experiments in short panels and underpowered experiments in long panels. Using both data from a randomized experiment in China and a high-frequency dataset of U.S. electricity consumption, we show that these results hold in real-world settings. Our theoretical results enable us to achieve correctly powered experiments in both simulated and real data. This paper provides researchers with the tools to design well-powered experiments in panel data settings.
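
The simulation-based side of such a power calculation can be sketched as follows (an illustrative Monte Carlo check under assumed parameters, not the paper's analytical formulas or data): simulate a panel with AR(1) errors, estimate a difference-in-differences effect with unit-clustered standard errors, and record how often the true effect is detected:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulated_power(n_units=50, n_pre=5, n_post=5, effect=0.2, rho=0.5,
                    sigma=1.0, n_sims=200, seed=0):
    rng = np.random.default_rng(seed)
    T = n_pre + n_post
    rejections = 0
    for _ in range(n_sims):
        treated = np.repeat(rng.integers(0, 2, n_units), T)      # unit-level assignment
        post = np.tile(np.r_[np.zeros(n_pre), np.ones(n_post)], n_units)
        eps = np.zeros((n_units, T))                              # AR(1) errors within unit
        eps[:, 0] = rng.normal(0, sigma, n_units)
        for t in range(1, T):
            eps[:, t] = rho * eps[:, t - 1] + rng.normal(0, sigma, n_units)
        df = pd.DataFrame({
            "y": effect * treated * post + eps.ravel(),
            "treated": treated, "post": post,
            "unit": np.repeat(np.arange(n_units), T),
        })
        fit = smf.ols("y ~ treated * post", data=df).fit(
            cov_type="cluster", cov_kwds={"groups": df["unit"]})
        rejections += fit.pvalues["treated:post"] < 0.05
    return rejections / n_sims

print(f"simulated power: {simulated_power():.2f}")
```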

The preprint for this project can be found here.

Examining the Reproducibility of Meta-Analyses in Psychology (Psychology, SSMART)

Meta-analyses are an important tool to evaluate the literature. It is essential that meta-analyses can easily be reproduced, both to allow researchers to evaluate the impact of subjective choices on meta-analytic effect sizes and to update meta-analyses as new data come in or as novel statistical techniques (for example, to correct for publication bias) are developed. Research in medicine has revealed that meta-analyses often cannot be reproduced. In this project, we examined the reproducibility of meta-analyses in psychology by reproducing twenty published meta-analyses. Reproducing published meta-analyses was surprisingly difficult. 96% of meta-analyses published in 2013-2014 did not adhere to reporting guidelines. A third of these meta-analyses did not contain a table specifying all individual effect sizes. Five of the 20 randomly selected meta-analyses we attempted to reproduce could not be reproduced at all due to lack of access to raw data, no details about the effect sizes extracted from each study, or a lack of information about how effect sizes were coded. In the remaining meta-analyses, differences between the reported and reproduced effect size or sample size were common. We discuss a range of possible improvements, such as more clearly indicating which data were used to calculate an effect size, specifying all individual effect sizes, adding detailed information about the equations that are used and about how multiple effect size estimates from the same study are combined, and sharing raw data retrieved from original authors or unpublished research reports. This project clearly illustrates that there is a lot of room for improvement when it comes to the transparency and reproducibility of published meta-analyses.
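
One of the coding steps that reproduction hinges on is recomputing each effect size from the statistics reported in the primary study. The snippet below (standard conversion formulas, not the project's own scripts; the inputs are hypothetical) recomputes Cohen's d and its sampling variance from a reported t-statistic and group sizes so that it can be checked against the value used in the published meta-analysis:

```python
import math

def d_from_t(t, n1, n2):
    """Cohen's d and its sampling variance from an independent-samples t-test."""
    d = t * math.sqrt(1.0 / n1 + 1.0 / n2)
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2.0 * (n1 + n2))
    return d, var_d

d, var_d = d_from_t(t=2.31, n1=40, n2=38)
print(f"reproduced d = {d:.3f} (SE = {math.sqrt(var_d):.3f})")
```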

The preprint for this project can be found here.

Aggregating Distributional Treatment Effects: A Bayesian Hierarchical Analysis of the Microcredit Literature (Economics, SSMART)

Rachael Meager

This paper develops methods to aggregate evidence on distributional treatment effects from multiple studies conducted in different settings, and applies them to the microcredit literature. Several randomized trials of expanding access to microcredit found substantial effects on the tails of household outcome distributions, but the extent to which these findings generalize to future settings was not known. Aggregating the evidence on sets of quantile effects poses additional challenges relative to average effects because distributional effects must imply monotonic quantiles and pass information across quantiles. Using a Bayesian hierarchical framework, I develop new models to aggregate distributional effects and assess their generalizability. For continuous outcome variables, the methodological challenges are addressed by applying transforms to the unknown parameters. For partially discrete variables such as business profits, I use contextual economic knowledge to build tailored parametric aggregation models. I find generalizable evidence that microcredit has negligible impact on the distribution of various household outcomes below the 75th percentile, but above this point there is no generalizable prediction. Thus, there is strong evidence that microcredit typically does not lead to worse outcomes at the group level, but no generalizable evidence on whether it improves group outcomes. Households with previous business experience account for the majority of the impact in the tails and see large increases in the upper tail of the consumption distribution in particular.
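
For orientation, the basic partial-pooling structure that such aggregation builds on can be written, for a single quantile's treatment effect, as follows (a schematic only; the paper's models add monotonicity constraints, cross-quantile structure, and tailored likelihoods for partially discrete outcomes):

```latex
\[
  \hat{\tau}_k \mid \tau_k \sim \mathcal{N}\!\left(\tau_k,\ \widehat{se}_k^{\,2}\right),
  \qquad
  \tau_k \mid \mu, \sigma \sim \mathcal{N}\!\left(\mu,\ \sigma^{2}\right),
  \qquad k = 1, \dots, K,
\]
% Here \tau_k is the effect in study k, \mu is the general (generalizable)
% effect, and \sigma is the cross-study heterogeneity that governs how far
% a new setting may plausibly depart from \mu.
```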

Working Paper: https://osf.io/rj9s4/

Using P-Curve to Assess Evidentiary Value of Social Psychology Publications (Psychology, SSMART)

Leif Nelson

The proposed project will utilize p-curve, a new meta-analytic tool, to assess the evidentiary value of studies from social psychology and behavioral marketing. P-curve differs from traditional meta-analytic methods by analyzing the distribution of p-values to determine the likelihood that a study provides evidence for the existence of an effect; if a study lacks evidentiary value, p-curve can also determine whether it was powered such that it would detect an effect 33% of the time, given that the effect exists. We will apply p-curve to each empirical paper in the first issue of 2014 in three top-tier journals: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Consumer Research. Additionally, we will conduct a direct replication of one study from each of these issues.
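
For readers new to the method, the core p-curve logic can be sketched in a few lines (a stripped-down illustration, not the official p-curve app; the p-values and the Stouffer-style right-skew test shown here are assumptions of this sketch): under a true null, significant p-values are uniform on (0, .05), so an excess of very small p-values (right skew) signals evidential value.

```python
import numpy as np
from scipy.stats import norm

def p_curve_right_skew(p_values, alpha=0.05):
    """Test for right skew among significant p-values via Stouffer's method."""
    p = np.asarray(p_values, float)
    pp = p[p < alpha] / alpha            # significant p-values rescaled to (0, 1)
    z = norm.ppf(pp).sum() / np.sqrt(pp.size)
    return norm.cdf(z)                   # small value -> right skew -> evidential value

print(p_curve_right_skew([0.001, 0.004, 0.012, 0.030, 0.049]))
```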

External Validity in U.S. Education Research (Economics, SSMART)

Sean Tanner

As methods for internal validity improve, methodological concerns have shifted toward assessing how well the research community can extrapolate from individual studies. Under recent federal granting initiatives, over $1 billion has been awarded to education programs that have been validated by a single randomized or natural experiment. If these experiments have weak external validity, scientific advancement is delayed and federal education funding might be squandered. By analyzing trials clustered within interventions, this research describes how well a single study’s results are predicted by additional studies of the same intervention, and how well study samples match the target populations of those interventions. I find that U.S. education trials are conducted on samples of students who are systematically less white and more socioeconomically disadvantaged than the overall student population. Moreover, I find that effect sizes tend to decay in the second and third trials of interventions.

Working Paper: https://osf.io/w5bjs/

Getting it Right with Meta-Analysis: Correcting Effect Sizes for Publication Bias in Meta-Analyses from Psychology and Medicine (Statistics, SSMART)

Robbie van Aert

Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias is widespread, how strongly it affects different scientific literatures is currently less well known. We examine evidence of publication bias in a large-scale data set of meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and the Cochrane Database of Systematic Reviews (representing meta-analyses from medical research). Psychology is compared to medicine because medicine has a longer history than psychology with respect to preregistration of studies as an effort to counter publication bias. The severity of publication bias and its inflating effect on effect size estimation were systematically studied by applying state-of-the-art publication bias tests and the p-uniform method for estimating effect size corrected for publication bias. Publication bias was detected in only a small number of homogeneous subsets. The lack of evidence of publication bias was probably caused by the small number of effect sizes in most subsets, which results in low statistical power for publication bias tests. Regression analyses showed that small-study effects were present in the meta-analyses; that is, smaller studies were accompanied by larger effect sizes, possibly due to publication bias. Overestimation of effect size due to publication bias was similar for meta-analyses published in Psychological Bulletin and the Cochrane Database of Systematic Reviews.
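
The small-study-effects check mentioned above can be illustrated with an Egger-type regression (a sketch with made-up numbers, not the study's analysis code): regressing effect sizes on their standard errors, weighted by inverse variance, where a positive and significant slope suggests that smaller studies report larger effects.

```python
import numpy as np
import statsmodels.api as sm

effects = np.array([0.41, 0.35, 0.12, 0.30, 0.08, 0.55])   # hypothetical effect sizes
ses     = np.array([0.20, 0.18, 0.05, 0.15, 0.04, 0.25])   # their standard errors

X = sm.add_constant(ses)
fit = sm.WLS(effects, X, weights=1.0 / ses**2).fit()
print("slope on SE:", fit.params[1], "p-value:", fit.pvalues[1])
```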

The working paper for this project, “Publication bias in meta-analyses from psychology and medical research: A meta-meta-analysis,” can be found on BITSS Preprints here.

Open Science & Development Engineering: Evidence to Inform Improved Replication Models (Engineering, SSMART)

Paul Gertler

**Updated on April 12, 2017** Replication is a critical component of scientific credibility. Replication increases our confidence in the reliability of knowledge generated by original research. Yet, replication is the exception rather than the rule in economics. Few replications are completed and even fewer are published. Indeed, in the last six years, only 11 replication studies were published in the top 11 (empirical) economics journals. In this paper, we examine why so little replication is done and propose changes to the incentives to replicate in order to solve these problems.

Our study focuses on code replication, which seeks to replicate the results in the original paper using the same data as the original study. Specifically, these studies seek to replicate exactly the same analyses performed by the authors. The objective is to verify that the analysis code is correct and confirm that there are no coding errors. This is usually done in a two-step process. The first step is to reconstruct the sample and variables used in the analysis from the raw data. The second step is to confirm that the analysis code (i.e., the code that fits a statistical model to the data) reproduces the reported results. By definition, the results reported in the original paper can be replicated if this two-step procedure is successful. The threat of code replication provides an incentive for authors to put more effort into writing the code to avoid errors, and an incentive not to purposely misreport results.
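
The two-step procedure can be made concrete with a small verification harness (hypothetical file names and numbers; this is a sketch of the workflow described above, not the authors' materials): rebuild the analysis sample from raw data, re-run the reported specification, and compare the reproduced estimate to the published value within a tolerance.

```python
import subprocess

REPORTED_ESTIMATE = 0.142   # coefficient as printed in the paper (hypothetical)
TOLERANCE = 1e-3

# Step 1: reconstruct the sample and variables from the raw data.
subprocess.run(["python", "build_sample.py"], check=True)

# Step 2: re-run the analysis code and capture the headline estimate.
out = subprocess.run(["python", "run_analysis.py"], check=True,
                     capture_output=True, text=True)
reproduced = float(out.stdout.strip())

if abs(reproduced - REPORTED_ESTIMATE) < TOLERANCE:
    print("code replication successful: reported result reproduced")
else:
    print(f"discrepancy: reported {REPORTED_ESTIMATE}, reproduced {reproduced}")
```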

The preprint for this project can be found here.

How often should we believe positive results? (Economics, SSMART)

Eva Vivalt and Aidan Coville

High false positive and false negative reporting probabilities (FPRP and FNRP) reduce the veracity of the available research in a particular field, undermining the value of evidence to inform policy. However, we rarely have good estimates of false positive and false negative rates, since both the prior and study power are required for their calculation, and these are not typically available or directly knowable without making ad hoc assumptions. We will leverage AidGrade’s dataset of 647 impact evaluations in development economics and complement it by gathering estimates of priors and reasonable minimum detectable effects for various intervention-outcome combinations from policymakers, development practitioners, and researchers in order to generate estimates of FPRP and FNRP rates in development economics.
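
The calculation the abstract refers to can be summarized with the standard false positive reporting probability formula (written here as a reference point, not necessarily the exact estimator the authors will use):

```latex
\[
  \mathrm{FPRP} \;=\; \frac{\alpha\,(1-\pi)}{\alpha\,(1-\pi) \;+\; (1-\beta)\,\pi},
\]
% where \pi is the prior probability that the effect is real, \alpha is the
% significance threshold, and 1-\beta is the study's power. For example, with
% \pi = 0.10, \alpha = 0.05, and power of 0.80:
% FPRP = 0.045 / (0.045 + 0.080) = 0.36, so roughly a third of "positive"
% results would be expected to be false positives.
```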