Forecasting Social Science Results

Policymakers often rely on expert forecasts when choosing between policy options, and academic researchers are increasingly integrating forecasts into their research designs and analyses. Yet there are currently neither common standards for eliciting and collecting forecasts nor any repository to catalog them. Systematic collection would help us learn how actual results relate to the prior beliefs of scientists (or of non-specialists), improve the accuracy of those forecasts, and identify best practices for forecast elicitation.

Led by Stefano DellaVigna (UC Berkeley) and Eva Vivalt (Australian National University), BITSS is supporting the development of the Social Science Prediction Platform (currently in beta), where researchers can systematically collect, catalog, and access forecasts. The platform has three main goals:

  1. Allow researchers and policymakers to credibly capture forecasts about the results of experiments before those results are known.
  2. Mitigate publication bias against null results by using the average expert forecast as a reference point for the kind of result that is expected, unsurprising, or uninteresting.
  3. Measure the accuracy of forecasts, drawing insights to improve research designs and the process of eliciting forecasts.

Learn more in this Science Policy Forum piece by the project PIs and Devin Pope (University of Chicago).

The development of the platform is made possible with joint support from the Laura and John Arnold Foundation and an anonymous donor. Its development was informed by an ideation workshop organized by BITSS in December 2018.

Have feedback on the platform? Get in touch!