An illustration of how the policy community can use OPA to strengthen the evidence-to-policy link.
By Fernando Hoces de la Guardia
What is the deworming OPA?
Opacity in policy analysis presents barriers to applying rigorously generated evidence to novel settings. Open Policy Analysis (OPA) is a framework for improving the transparency and reproducibility of policy analysis, based on three principles: (1) open output, where the result of a policy analysis is one clear display item (a table or figure) that presents the best representation of the facts and their dependence on underlying assumptions; (2) open analysis, where all the details behind the policy analysis are clearly documented; and (3) open materials, where the resources needed to reproduce the display item and the analysis are available, allowing third parties to independently reproduce all calculations with minimal barriers.
We’ve applied the principles of OPA to address this challenge in the context of deworming interventions. This OPA makes fully accessible the assumptions, analytic decisions, and data used in the analysis through an interactive app, an open policy report, and a GitHub repository.
See this companion blog post for details on how this new OPA can transform deworming policy.
Who should use the deworming OPA?
The Deworming OPA has three components, developed with three different audiences in mind. The first component—an app—is meant to inform policymakers, who would take what policy analysts consider to be the best representation of the facts and use those facts to inform their deliberation. In the context of deworming, policymakers might include decision-level officials in ministries of finance or health in one of the many countries stricken by worm infections, or NGOs involved in policy decision making (like Evidence Action). The app can also be used by advisors to policymakers interested in how different assumptions behind the analysis affect the final policy estimate. If different policymakers come to the table with different understandings of the best representation of the facts, they can use the OPA to clearly pin down which assumptions create the divergence, rather than talking past each other based on completely different reports.
The second component—the open policy report—is meant to show policy analysts and researchers all of the ins and outs of how the final policy estimate is obtained. Researchers in the development economics community or policy analysts in the ministries mentioned above might be particularly interested in these processes. Readers can focus just on the narrative of the report, or they can consult the equations that operationalize the narrative. (The goal of our extensive use of equations is not to deter readers with “mathiness,” but rather to add clarity by ensuring that all steps are stated clearly and transparently.) In this document, readers can also see the pieces of code that operationalize these equations. These three levels of information in the report (narrative, equations, and code) are presented in a layered fashion to avoid overwhelming the reader.
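To make this layering concrete, here is a hedged sketch of how a single, simplified step might appear at all three levels. The narrative statement, equation, and function below are illustrative inventions, not the report's actual model; the open policy report contains the real narrative, equations, and code.

```python
# Narrative layer (hypothetical): "the net present value per child treated is
# the discounted stream of benefits from treatment minus the discounted costs."
#
# Equation layer (simplified, hypothetical):
#   NPV = sum over t of (benefit_t - cost_t) / (1 + r)^t
#
# Code layer: a direct translation of the equation above.
def npv(benefits, costs, discount_rate):
    """Discounted sum of per-period net benefits (benefits and costs are lists)."""
    return sum(
        (b - c) / (1 + discount_rate) ** t
        for t, (b, c) in enumerate(zip(benefits, costs))
    )

# Example: three periods of benefits of 10, an upfront cost of 2, a 10% rate.
print(npv([10, 10, 10], [2, 0, 0], 0.10))
```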
The third component—the repository—is meant to be used by policy analysts who want to extend, update, or build on top of the current policy analysis. It contains all the materials necessary to reproduce the OPA documentation and the app with one click. All the analysis is written in open source statistical software to facilitate use in settings where licenses can be barriers to adoption. These policy analysts might be the same as those using the report, or they might be analysts at other organizations, such as the World Bank or the United States Agency for International Development, interested in conducting similar analyses to inform decision making in new contexts.
Why and how to use this OPA
The evidence-based policy community, which includes policymakers, policy analysts, and researchers, would benefit from adopting the OPA framework for deworming for three major reasons. First, enhanced transparency can help fight the emergence of alternative facts by shedding light on how competing analyses have reached different conclusions. To see this, we need only remove any one of the OPA’s three key principles—open output, open materials, or open analysis—to show how competing policy analyses can emerge. It is possible to imagine a policy analysis produced by a think tank that claims to be objective but, under the hood, chooses (consciously or not) assumptions that favor its position.
To illustrate this, we’ll simulate two policy analyses: one in favor of and one against deworming interventions (see Figure 1 below). Both have an app with one policy estimate and clear links to assumptions (following the OPA principle of open output) and a repository with all the materials to reproduce the results with one click (following the OPA principle of open materials). However, both are missing the detailed documentation described above (that is, neither follows the OPA principle of open analysis). As a result, the two policy analyses can claim to produce the same key policy estimate yet arrive at very different results: a net present value of $72 versus $659 per child treated. This example illustrates why creating user-friendly dashboards or making a project open source is not enough to prevent the emergence of alternative facts in policy analysis; a detailed open policy report is also needed.
A similar problem can emerge if there is no clear link between the final policy estimate and all of its underlying assumptions. Repeating the same strategy with two competing policy analyses, we can see that it is fairly easy to bury within the code small analytical choices that, combined, lead to large differences in the policy estimate (see Figure 2). Here, the two opposing policy analyses share all the materials and documentation, but their apps are incomplete (specifically, they are missing the “All Assumptions Tab” included in our OPA), so it is not possible to see where the small differences are.
Second, the OPA approach makes it possible to automate reports on deworming across different settings. Without the information made available by this OPA, analysts at different agencies and organizations would have to create their own policy analysis for their specific setting. Using this OPA, analysts can start from the same modeling choices and input the key contextual information (such as the prevalence of disease, the unit cost of treatment, and the length of treatment) to see how the results of the analysis change for their specific location. For example, policy analysts in India and Vietnam can produce context-specific estimates simply by adjusting these key assumptions, and they obtain very different results: a prevalence of 57% and a unit cost of $0.06 produce an NPV of $312 for India, while a prevalence of 14% and a unit cost of $0.52 produce an NPV of $56 for Vietnam (see the results of these two analyses in Figure 3).
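As a minimal sketch of what this parameterization could look like, assuming a deliberately simplified model (the function, default values, and time horizon below are hypothetical and are not the OPA's actual equations), the same code can be re-run with the context-specific inputs mentioned above:

```python
from dataclasses import dataclass

@dataclass
class Context:
    prevalence: float        # share of children infected (0 to 1)
    unit_cost: float         # cost of treatment per child per year, in USD
    years_of_treatment: int  # length of treatment

def policy_estimate(ctx: Context,
                    benefit_per_treated_case: float = 100.0,  # hypothetical
                    discount_rate: float = 0.10) -> float:    # hypothetical
    """Toy NPV per child: discounted, prevalence-scaled benefits minus costs."""
    return sum(
        (ctx.prevalence * benefit_per_treated_case - ctx.unit_cost)
        / (1 + discount_rate) ** t
        for t in range(ctx.years_of_treatment)
    )

# Same model, different contexts: the prevalence and unit-cost values are those
# cited in the text, but the outputs of this toy model will not match the OPA's
# Figure 3 because the model itself is a simplification.
india = Context(prevalence=0.57, unit_cost=0.06, years_of_treatment=5)
vietnam = Context(prevalence=0.14, unit_cost=0.52, years_of_treatment=5)
print(policy_estimate(india), policy_estimate(vietnam))
```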
Finally, OPA creates a much clearer connection between policy analysis and the research that informs it. In traditional policy analysis, it is quite hard to identify the specific estimates that came from original research. In this OPA, researchers can see that the analysis used a treatment effect of 79.51 (standard error of 76). Moreover, they can modify it and prospectively see how new findings would affect the current policy estimate, informing, for example, sample size calculations of future studies.
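Building on the toy policy_estimate() sketch above (again, a hypothetical stand-in for the OPA's real model), a researcher could sweep the treatment-effect input and read off the implied policy estimate, which is the kind of prospective exercise described here:

```python
# Prospective sensitivity check: reuse policy_estimate() and the india context
# from the previous sketch and vary the treatment-effect input. 79.51 is the
# point estimate cited above; the other values are arbitrary alternatives.
for effect in (40.0, 79.51, 120.0):
    estimate = policy_estimate(india, benefit_per_treated_case=effect)
    print(f"treatment effect {effect:6.2f} -> toy policy estimate {estimate:8.2f}")
```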
In light of its potential to mitigate the use of alternative facts, support automated analysis, and infuse policy analysis with rigorous evidence, it is worth emphasizing how OPA can be particularly helpful in the domain of development economics. OPA can better align the timing between researchers and policymakers described earlier; if it is a goal from the outset, the translation of research into policy can be expedited. For example, funders or policy partners can request that researchers develop an OPA at the beginning of a research project, making clear from the outset how the study may affect policy, and updating the OPA as new findings emerge to inform decision-making in real time.
The interventions best suited to OPA are those meant to inform recurrent policy decisions and/or those that could be implemented across contexts. For example, development financing agencies such as the Inter-American Development Bank, the World Bank, and the United States Agency for International Development conduct a large number of ex-ante economic analyses to inform their decisions to issue loans and grants (e.g., for infrastructure development, educational programs, or job training programs). Since many of these economic analyses share a similar structure, having a centralized OPA for each type of intervention that can be easily adapted to several settings—as opposed to dozens of one-off economic analyses—could be especially beneficial. Other development programs well suited to this novel approach include those that could benefit from a boost in credibility (e.g., politically sensitive minimum wage or tax policies) and/or from increased transparency on how results from research are actually used in policy analyses (e.g., cash transfers, microcredit, graduation programs).
We hope that this OPA will strengthen the evidence-to-policy link in deworming interventions, and that it will also encourage other researchers and policy analysts in the development community to adopt these practices where possible.