What is forecast calibration?

The Ultimate Guide to Crowdsourced Forecasting


Many people look to prediction markets for accurate predictions about popular events, especially politics and elections. But each time an outcome that prediction markets deemed highly probable fails to occur, naysayers chirp about how prediction markets were "wrong" about that particular event or election. The chorus is especially loud after very high-probability events, like the 2016 presidential election, where prediction markets pegged Hillary Clinton as a 90+% favorite over Donald Trump. But even when prediction markets put the likelihood of an outcome in the mid-60% range, people tend to read that as saying the event will happen, when in reality it suggests that 3-4 times out of 10, the other outcome will occur.


To understand the accuracy of forecasts made by prediction markets, it's important to understand forecast calibration. Forecast calibration describes how well the forecasted probability of an outcome matches the actual, observed frequency of that outcome occurring. Said with numbers: if we forecast that something has a 60% chance of occurring, does it actually occur 6 times out of 10?


Another way to think about calibration is to consider rolling a 6-sided die. We can forecast that there is a 1-in-6 (about 16.7%) chance of rolling a 3. Then we roll the die 1,000 times. If a 3 comes up about 167 times, we would consider our original forecast "well calibrated" -- reality matched our forecast.
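
To make the die example concrete, here is a minimal sketch, assuming a fair die and Python's built-in random module, that simulates 1,000 rolls and compares the observed frequency of rolling a 3 to the 1-in-6 forecast:

```python
import random

# Minimal sketch: simulate 1,000 rolls of a fair six-sided die and compare
# the observed frequency of rolling a 3 to the forecast probability of 1/6.
forecast = 1 / 6
rolls = [random.randint(1, 6) for _ in range(1000)]
observed = rolls.count(3) / len(rolls)

print(f"Forecast probability: {forecast:.3f}")   # ~0.167
print(f"Observed frequency:   {observed:.3f}")   # close to ~0.167 if well calibrated
```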


This concept can be a little harder to follow when the event can't be replayed 1,000 times like a die roll. Obviously, you can't re-run the 2016 presidential election 1,000 times to see whether the original forecast was well calibrated. Instead, to measure the calibration of a prediction market, we can group different questions that received the same forecast. Suppose we run 100 questions on a wide array of topics, and 10 of them receive a forecast of 90%. If 9 of those 10 outcomes occur and 1 does not, the questions may cover very different events, but across those 10 the prediction market was perfectly calibrated.
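
The same grouping idea can be written as a short calibration check. The sketch below uses made-up forecasts and outcomes (1 means the predicted event occurred, 0 means it did not); it buckets resolved questions by their forecast and compares each forecast to the observed rate of occurrence:

```python
from collections import defaultdict

# Illustration data only: (forecast, outcome) pairs for resolved questions,
# where outcome is 1 if the predicted event occurred and 0 if it did not.
resolved_questions = [
    (0.9, 1), (0.9, 1), (0.9, 1), (0.9, 0), (0.9, 1),
    (0.9, 1), (0.9, 1), (0.9, 1), (0.9, 1), (0.9, 1),
    (0.6, 1), (0.6, 0), (0.6, 1), (0.6, 0), (0.6, 1),
]

# Group outcomes by the forecast they received.
buckets = defaultdict(list)
for forecast, outcome in resolved_questions:
    buckets[forecast].append(outcome)

# A forecast is well calibrated when the observed rate matches it.
for forecast, outcomes in sorted(buckets.items()):
    observed_rate = sum(outcomes) / len(outcomes)
    print(f"Forecast {forecast:.0%}: {observed_rate:.0%} of {len(outcomes)} questions occurred")
```

In practice, a real calibration check would bucket forecasts into ranges (say, 85-95%) rather than require exact matches, since market prices rarely land on round numbers.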


Often, people mistake a high-probability forecast for an assurance that an outcome will definitely occur. But this is largely because we're looking for assurances about uncertain events -- not because a 75% chance means the outcome is sure to happen. It's important to keep in mind that a 75% chance means that 25 times out of 100, the opposite outcome is expected to occur.

