How Trump Stumped the Prediction Markets

By Vanessa Pineda on November 22, 2016

The results of the 2016 Presidential election surprised the media, pollsters, and prediction markets. In the week leading up to November 8th, predictions[i] for the U.S. Presidential election on our public prediction market, Alphacast, oscillated quite a bit, but never gave Trump more than a 15% shot at winning. By the morning of November 8th, a handful of forecasters had pulled the projected probability of a Trump win down to just one percent.

Other prediction markets and crowdsourced forecasting sites similarly gave Trump a slim chance of becoming the next president. On the day before Election Day, PredictIt[ii] and Good Judgment Open[iii] gave Trump a 22% and 24% shot at winning the election, respectively. PredictWise,[iv] which combines polling and prediction market forecasts, gave Trump an 11% chance.

How did Alphacast and other crowdsourced forecasting sites get the election so wrong? Perhaps the biggest contributor goes back to the sources of information on which people based their predictions. We're now learning that much of that information was highly inaccurate, including news stories about candidates, poll results, and even opinions from people's social circles. Fake news stories infiltrated social media. People misrepresented their support for candidates, even to friends. The accuracy of the polls, in particular, likely had a strong influence on the prediction market, since people tend to base predictions on statistics. Many poll aggregators projected single-digit probabilities of a Trump presidency. For example, HuffPost Pollster[v] gave Trump a 2% chance, and the Princeton Election Consortium[vi] reported a 7% chance. The FiveThirtyEight[vii] model notoriously gave Trump a much higher chance than most poll aggregators, but still favored Clinton 2:1.

Setting aside the flood of inaccurate information, Trump's consensus probability of 1% on Alphacast was extreme. This can be attributed to a couple of reasons. For one, we are still new and need a larger, more diverse group of users to provide more accurate signals. Secondly, users who have earned more in-game influence on other questions have a greater effect on the projection than newer or less successful players. However, they may have earned this influence through forecasts on non-political questions or by exploiting opportunities afforded by day-trading-style strategies. So although their influence on Alphacast gave them a greater say in the aggregate prediction, it does not follow that they had superior acumen at predicting elections. Indeed, a handful of influence-rich Alphacast users, confident that Clinton would win, were able to push the likelihood of a Clinton win as high as 99%.
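To see how a few high-influence users can drag the consensus toward their own view, here is a minimal sketch of influence-weighted aggregation. This assumes a simple weighted average; Alphacast's actual scoring and aggregation rules are not described here, and all names and numbers below are hypothetical.

```python
# Minimal sketch: consensus as an influence-weighted average of forecasts.
# This is an illustrative assumption, not Alphacast's actual algorithm.

def weighted_consensus(forecasts):
    """forecasts: list of (probability, influence) pairs."""
    total_influence = sum(w for _, w in forecasts)
    return sum(p * w for p, w in forecasts) / total_influence

# Hypothetical example: three typical users give Clinton ~80%, while two
# users with influence earned on unrelated questions are confident at 99%.
crowd = [(0.80, 10), (0.85, 10), (0.75, 10)]   # modest influence
whales = [(0.99, 200), (0.99, 150)]            # large influence earned elsewhere

print(round(weighted_consensus(crowd + whales), 3))  # → 0.975
```

With these illustrative weights, two confident users pull the consensus from roughly 80% up to 97.5%, even though most participants disagree with them.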

Prediction markets may have gotten it partially 'wrong' on the US president-elect, but in fairness, Clinton did win the popular vote, and many prediction markets, including ours, did not pose a question on that technicality. Nonetheless, prediction markets provide crowd-generated signals in real time, and they will always be much faster than traditional polls at reflecting the impact of new information on a future event's outcome. For example, on Alphacast and other prediction markets, you can see the probability of a Clinton presidency climb after the first presidential debate on September 26th, and fall after FBI Director James Comey's letter to Congress was published on October 28th.

The bottom line: even when prediction market crowds are seemingly 'stumped' in the directional signal they provide for an event's outcome, there is still plenty of value in diagnosing how and why that signal came to be.







