Forecasting Social Science Results

Policymakers often rely on expert forecasts when choosing between policy options, and academic researchers are increasingly integrating forecasts into their research designs and analyses. Yet there are currently no common standards for eliciting or collecting forecasts, nor any repository to catalog them. Collecting predictions systematically would help us learn how actual results relate to the prior beliefs of scientists (or non-specialists), how to improve forecast accuracy, and how to identify best practices for forecast elicitation.

Led by Stefano DellaVigna (UC Berkeley) and Eva Vivalt (University of Toronto), BITSS is supporting the development of the Social Science Prediction Platform, where researchers can systematically collect, catalog, and access forecasts. This platform has three main goals:

  1. Allow researchers and policymakers to credibly capture forecasts of a study's results before those results are known.
  2. Mitigate publication bias against null results by using the average expert forecast as a reference point for the kind of result that is expected, unsurprising, or uninteresting.
  3. Measure the accuracy of forecasts, drawing insights to improve research designs and the process of eliciting forecasts (illustrated in the sketch after this list).
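
As a concrete illustration of goals 2 and 3, the short Python sketch below averages a set of hypothetical point forecasts to form a reference point and scores each forecast by absolute error. The numbers, variable names, and the choice of error metric are illustrative assumptions, not features of the platform.

```python
# Hypothetical illustration: forecasts and the realized result are point
# estimates of the same quantity (e.g., a treatment effect). The values
# and the absolute-error metric are assumptions for illustration only.
from statistics import mean

forecasts = [0.12, 0.30, 0.05, 0.18, 0.22]  # illustrative expert forecasts
realized = 0.08                             # illustrative study result

reference_point = mean(forecasts)           # benchmark for an "expected" result
errors = [abs(f - realized) for f in forecasts]

print(f"Average forecast (reference point): {reference_point:.2f}")
print(f"Mean absolute forecast error:       {mean(errors):.2f}")
print(f"Surprise vs. reference point:       {realized - reference_point:+.2f}")
```

A null or small result that lands close to the average forecast can then be framed as unsurprising relative to expert priors, rather than uninteresting.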
Video: Introducing the Social Science Prediction Platform.

Learn more in this Policy Forum piece in Science by the project PIs and Devin Pope (University of Chicago).

The platform's development is made possible by joint support from the Sloan Foundation and an anonymous donor, and was informed by an ideation workshop organized by BITSS in December 2018.

Have feedback on the platform? Get in touch!