Why We Stopped Supporting Prediction Markets

By Ben Roesch on August 04, 2022

For many years, Cultivate Forecasts supported two different forecasting interface modes: prediction markets and opinion pools (aka opinion surveys or probability surveys). In a prediction market, forecasters buy and sell shares of answer options using real or virtual/fantasy currency (i.e., I spend $10 to buy shares of “Yes” in the market “Will candidate X win the election?”). In an opinion pool, forecasters assign a probability to each potential answer (i.e., I forecast 75% Yes / 25% No on the question “Will candidate X win the election?”).

Supporting both mechanisms was an ongoing technical challenge that not only consumed time and resources but also hamstrung development of new features in our platform. To allow us to move faster and provide a single, high-quality experience (rather than two experiences that each received half of our attention), we decided to remove all prediction market features from the platform and focus exclusively on opinion pools.

While technical challenges initiated the discussion, the most important aspect of the decision may have been non-technical. Within the universe of crowdsourced forecasting, you’ll find people who are passionate about the prediction market vs. opinion pool debate, arguing that one is inherently better or more accurate than the other. Further, there’s a constant temptation within the world of forecasting to become enamored with complicated or impressive-sounding approaches to generating forecasts: prediction markets, AI, machine learning, Bayesian-something-or-other, whatever the latest buzzword might be.

Starting with Inkling Markets all the way back in 2006, our team has decades of collective experience running crowdsourced forecasting tournaments for governments and large organizations. That experience has taught us a clear but counterintuitive truth: the mathematical mechanism for generating forecasts doesn’t really matter that much. While there are steps we can (and do) take to improve the accuracy of a crowd’s forecasts (e.g. accuracy weighting), the accuracy of the forecasts themselves, as long as they’re relatively good, is almost always secondary to what decision makers and policymakers really use crowdsourced forecasting for: trends, rationales, early warning, or post-mortem lessons learned as part of their broader analysis or decision-making process.

Looking back at where programs have failed, many other elements are much more likely to determine a forecasting program’s success: forecaster engagement, effective communications, good forecasting question selection, and a plan for integrating forecasting results into decision-making processes.

With all that said, we still needed to decide which mechanism we would focus on: prediction markets or opinion pools. And while both mechanisms have strengths, we chose opinion pools for four main reasons:

1. We want flexibility in forecast aggregation.

In a prediction market, each trade (i.e., a forecast) is executed at the current market price for that answer option and results in an updated price (we used the logarithmic market scoring rule, or LMSR, to generate prices). Because of this, each forecast is mathematically dependent on the trades that came before it. This dependency made it effectively impossible to calculate a crowd forecast using only a subset of forecasts.
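
To make that dependency concrete, here is a minimal sketch of LMSR pricing and trade costs. This is an illustration rather than our production code; the function names, liquidity parameter `b`, and share quantities are all arbitrary:

```python
import math

def lmsr_prices(q, b=100.0):
    """Instantaneous LMSR prices: p_i = exp(q_i / b) / sum_j exp(q_j / b)."""
    exps = [math.exp(x / b) for x in q]
    total = sum(exps)
    return [e / total for e in exps]

def buy(q, option, shares, b=100.0):
    """Cost of buying `shares` of `option` is C(q_after) - C(q_before),
    where C(q) = b * ln(sum_j exp(q_j / b)). Returns (cost, new quantities)."""
    def C(quantities):
        return b * math.log(sum(math.exp(x / b) for x in quantities))
    after = list(q)
    after[option] += shares
    return C(after) - C(q), after

# A two-option ("Yes"/"No") market starting at equal odds.
q = [0.0, 0.0]
print(lmsr_prices(q))         # [0.5, 0.5]

cost1, q = buy(q, 0, 20)      # first trader buys 20 "Yes" shares
print(cost1, lmsr_prices(q))  # pays ~10.5; "Yes" price rises to ~0.55

cost2, q = buy(q, 0, 20)      # the identical trade now costs ~11.5, because
print(cost2, lmsr_prices(q))  # its price depends on the trade before it
```

Because every trade shifts the quantities that price the next trade, removing any one trade would change every price that follows, which is why subset aggregates are impractical in a market.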

In an opinion pool, this limitation does not exist. If we want to do an aggregate forecast for a subset of forecasters (e.g. what do my top 10% of forecasters predict on this question?), we can do that. In fact, our platform now supports numerous mechanisms for doing this, including forecasting teams, user cohorts, and accuracy-weighted crowd aggregations.
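
To sketch why this works: each opinion-pool forecast is an independent probability vector, so an aggregate is just a (possibly weighted) average over whichever forecasts you choose. The linear averaging and example weights below are illustrative assumptions, not necessarily the platform’s exact aggregation math:

```python
def pool_aggregate(forecasts, weights=None):
    """Aggregate probability forecasts as a weighted mean, renormalized so
    the result is still a probability distribution over the answer options.
    NOTE: illustrative only; the platform's actual methods may differ."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    totals = [0.0] * len(forecasts[0])
    for probs, w in zip(forecasts, weights):
        for i, p in enumerate(probs):
            totals[i] += w * p
    norm = sum(totals)
    return [t / norm for t in totals]

# Each forecast is an independent [p_yes, p_no] vector...
crowd = [[0.80, 0.20], [0.55, 0.45], [0.90, 0.10]]
print(pool_aggregate(crowd))                     # the whole crowd
print(pool_aggregate(crowd[:2]))                 # ...or any subset of it
print(pool_aggregate(crowd, weights=[2, 1, 1]))  # accuracy-style weighting
```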

2. Prediction market currency can be intimidating.

While a subset of the population enjoys the market mechanism and finds its currency motivating, we received consistent feedback that a much larger share of people found it confusing or intimidating. Rather than being a fun gamification element, the currency became an impediment to participation, even when our user experiences abstracted away the concept of making a trade. We heard this consistently about our own platform, and we hear it about other platforms. Submitting a basic probability (“I think it’s a 10% chance”) turns out to be easier for most people to understand.

3. Markets are ripe for manipulation.

Running a real-money prediction market requires regulatory approval, so many people turn to prediction markets that use virtual/fantasy currencies. Typically, each forecaster receives a currency allotment upon signing up, which they can then use to make trades/forecasts; leaderboards are then based on who has earned the most currency.

Occasionally, we would see users register for additional accounts, “dump” those accounts’ currency allotments to make shares of a certain answer very cheap, and then buy up the same answer using their “real” account to improve its leaderboard standing. Letting people create their own markets in which they can deliberately profit (because they know or control the outcome) is another easy form of prediction market manipulation.

In an opinion pool (and with some simple approval processes), these loopholes do not exist, since forecasts and leaderboards are not based on currency winnings. Instead, we use a proper scoring rule, which incentivizes each forecaster to report their true beliefs about a question.
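
As one concrete example, the Brier score is a classic proper scoring rule. The sketch below is for illustration and is not necessarily the exact formula the platform uses, but it shows the incentive at work:

```python
def brier_score(forecast, outcome_index):
    """Brier score: squared error between a probability forecast and the
    realized outcome (lower is better). Because it is a proper scoring
    rule, a forecaster minimizes their expected score by reporting their
    true beliefs rather than gaming the leaderboard."""
    outcome = [1.0 if i == outcome_index else 0.0 for i in range(len(forecast))]
    return sum((p - o) ** 2 for p, o in zip(forecast, outcome))

# Suppose "Yes" (index 0) occurs: an honest 75/25 forecast scores better
# than a hedged 50/50 one.
print(brier_score([0.75, 0.25], 0))  # 0.125
print(brier_score([0.50, 0.50], 0))  # 0.5
```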

4. We don't know what game people are playing.

The goal of an individual participating in a prediction market is to earn the most currency. Given that overriding incentive, we don’t know why you made the trade you did. Did you make it because that’s your prediction, or because you think that’s going to be everyone else’s prediction and you want to profit from it? Or perhaps you were awake at 2am when a new prediction market question was released and had a first-mover advantage to adjust the market away from its equal-odds starting point. Maybe there are a lot of questions and not enough participants to move prices to where they “should” be given the factual information available. Easy money, but less of a signal of how you think as a forecaster. Finally, because you’re competing with others, you have no incentive to act collaboratively and share information with anyone else. So while prediction markets are still very accurate, individual and cohort rationales for trades are far more opaque, and potentially untrustworthy.


We realize that forgoing prediction markets bucks the trend in the latest platform development, especially in the crypto space and among other well-funded efforts. Further, research has shown that prediction market prices (which equate to probabilities) are just as accurate as the other aggregation mechanisms we’re using. But in fulfilling our mission with our clients of increasing the rigor with which decisions and policies are made using crowdsourced forecasting, we gain far more insight, and make more of an impact on our stakeholders, by forgoing a mechanism only economists and self-declared “quants” have ever loved.


