Transparency and Trust in the Research Ecosystem

CEGA launched the Berkeley Initiative for Transparency in the Social Sciences (BITSS) based on the argument that more transparency in research could address the underlying factors driving publication bias, unreliable research findings, a lack of reproducibility in the published literature, and a problematic incentive structure within the research ecosystem. Meanwhile, an ever-increasing number of tools have emerged to improve how we do science: Collaboration platforms! Big Data! Machine Learning! So much software.

For this reason, we view the open social science movement as a means both to fuel this new world, by creating opportunities for shared research and learning, and to temper it, by leveraging new tools and methods for transparency to mitigate the publication and researcher biases that have driven the credibility crisis.

We believe it’s possible to accomplish both, particularly with the development and widespread adoption of study registration, pre-analysis plans (PAPs), and repositories for data + code, as well as publishing practices like registered reports, preprints, and open access journals. However, we also think the benefits of opening up how research is designed, implemented, and published can only be fully realized if we understand that those benefits are built on trust—public trust in the research ecosystem, and trust among those within it.  

While transparency can be a tool for reinforcing trust, it can also damage it if not wielded properly. With this in mind, we offer four reflections on how trust and transparency can operate in the current research ecosystem: 

  1. Transparency can mean increased scope, not scooping. The argument we often hear against more openness in study design is that researchers will be “scooped.” The scenario typically presented is: a junior researcher with an innovative idea posts their PAP and/or registration, only to have a more established (and better funded) researcher seize the idea, fund the study faster, and publish it before the original researcher has time to complete the study. This concern highlights a lack of trust in fellow researchers. But consider two counter-arguments. First, compared to the standard practice of presenting early-stage designs and ideas in academic conferences or funding proposals, a registered PAP (for example, on the OSF) has a stronger paper trail to ensure appropriate credit for the original researcher. Second, particularly when the idea is novel, why not treat publication of a PAP as an invitation to collaborate (when appropriate)? There is only so much the original researcher can do in terms of scope – sample size and power, geographic location, context, etc. – given available funding. But if publication of a PAP were used as an invitation to other researchers to collaborate (and to trust them as collaborators), the result could be a boost in sample size and/or a contribution to a meta-analysis that merges and leverages other resources to address constraints to statistical power, external validity, or other challenges of the original study. (With this in mind, we’re interested in work like StudySwap, but curious to hear any positive/negative experiences with such an approach!)
  2. Trust between researchers and journals needs to be rebuilt. It’s common to assume that journals only publish innovative results that are statistically significant – leading researchers to p-hack (see example here) to a certain statistical threshold (though there are great debates about whether that threshold should change from p≤.05 to p≤.005, or whether there should be a threshold at all!). It’s also common to assume journals will only publish replications that debunk original study findings. As a result, researchers may leave replicated results in file drawers (we’ll never know!) or actively seek out any differences. There is a lot of emphasis on how researchers must respond and adapt to the credibility crisis, but how much can be done if journals don’t also respond to rebuild trust? While there are calls for journals to complement researchers’ efforts to improve data and code availability for replication, other journal-led efforts – such as pilot testing pre-results review and publishing replication studies regardless of results – are encouraging because they show an appetite for journals to take on responsibility and risk, and to demonstrate that it’s not flashy results but sound science that gets published.
  3. How transparent we are should align with our ethical obligations. Much of science relies on the trust that research participants place in researchers to protect personal and/or sensitive data. This trust is built into the informed consent process and into how well participants understand what data they are asked to provide, how it will be used, and how it will be protected. Consider that there are (at least!) three groups of data: (i) data which is not public and can never be made public due to ownership, sensitivity, or other potential harm to participants; (ii) data which is not public but can be de-identified or otherwise managed in a way that allows for transparency without harm to the participants (see this useful visual regarding de-identification, and the short sketch after this list); and (iii) data that is public – or at least accessible – but whose use for research purposes should be carefully considered (think OKCupid or Facebook). Just how transparent we can be, and the ethics of that transparency, vary across these three (or more) groups – basically, there is a transparency spectrum. This is why we argue for a balanced approach to transparency in our trainings and are tracking evolving practices in informed consent and data sharing (like the revised Common Rule). To maintain trust in how we practice and open up science, researchers need to clearly define, from the beginning, where their study falls on the transparency spectrum based on expected benefits and risks. (With this in mind, we’re excited to follow work like how to open up qualitative and multi-method research, ethics in computational research through Pervade, and the Open Data Institute’s Data Ethics Canvas.)
  4. Maintaining and strengthening trust is a good defense against those who would weaponize transparency. There is a simple argument being made that should concern anyone who cares about good science: that research is only credible if it is 100% transparent. This argument ignores the spectrum discussed above. So, how do we defend ourselves? Well, a good place to start is Lewandowsky & Bishop’s summary of the ways in which transparency can be weaponized! Recognizing that transparency is a double-edged sword should inform our efforts to use it to improve science and not do more harm than good. Beyond that, everyone needs to play their part to rebuild and maintain trust in the research ecosystem. For example: if you’re a researcher, don’t falsely claim protection of human subjects as an excuse for not sharing the data underlying your analysis when sharing is possible (through de-identification or other methods); if you’re a funder or policymaker, don’t impose blanket requirements that researchers share all data without accounting for the transparency spectrum.
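
To make group (ii) a bit more concrete, below is a minimal, hypothetical sketch of one common de-identification pass: dropping direct identifiers, replacing participant IDs with salted hashes, and coarsening quasi-identifiers such as age. The column names and the salt are illustrative assumptions rather than recommendations from any particular study, and a real de-identification workflow should rest on a documented disclosure-risk assessment, not a single script.

```python
# Illustrative sketch only: hypothetical column names, not a BITSS/CEGA tool.
import hashlib
import pandas as pd

def deidentify(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Return a copy of df with direct identifiers removed and quasi-identifiers coarsened."""
    out = df.copy()

    # 1. Drop direct identifiers outright (names, contact details, GPS coordinates).
    out = out.drop(columns=["name", "phone", "gps_lat", "gps_lon"], errors="ignore")

    # 2. Replace the participant ID with a salted hash so records can still be
    #    linked across survey rounds without exposing the original ID.
    out["participant_id"] = out["participant_id"].astype(str).map(
        lambda pid: hashlib.sha256((salt + pid).encode()).hexdigest()[:12]
    )

    # 3. Coarsen quasi-identifiers: exact age becomes a five-year band, reducing
    #    re-identification risk from combinations of attributes.
    if "age" in out.columns:
        low = (out["age"] // 5) * 5
        out["age_band"] = low.astype(int).astype(str) + "-" + (low + 4).astype(int).astype(str)
        out = out.drop(columns=["age"])

    return out
```

Whether steps like these are sufficient (or even appropriate) depends on where a study sits on the transparency spectrum above; for group (i) data, no amount of scripting makes sharing acceptable.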

The bottom line: we are advocates for more transparency in social science (and the policy analysis it informs) insofar as we believe opening up research improves its rigor and credibility. But that assumption only holds when transparency builds and strengthens trust in the research ecosystem rather than diminishing it.


This post is authored by Jennifer Sturdy, BITSS Program Advisor. It is cross-posted with the CEGA Blog.
