Prediction Market Failure

By Ben Golden on October 02, 2015


Effective prediction markets require a certain amount of liquidity--meaning that after users invest points in making a forecast, and while the question is still running, they need a way to exit their investment. In financial markets, liquidity is provided by other traders--after I buy shares in GE, to exit my investment I need to find someone else to sell my shares to. On Inkling Markets, liquidity is provided by the market itself, usually via an algorithm called LMSR (the Logarithmic Market Scoring Rule). This means that users can always exit their positions, though doing so might be expensive. Because the market itself is providing liquidity, the creation of a market can alter the total amount of currency (Inkles) in circulation, usually increasing it. For a play-money market, increasing the money supply generally isn't a problem, but in one particular case it's caused a huge distortion for Inkling.
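To make that concrete, here's a minimal sketch of an LMSR market maker in Python. The liquidity parameter b, the share quantities, and the trade below are purely illustrative--they aren't Inkling's actual settings--but the key property is visible: the market maker's worst-case loss, which is the subsidy it injects into the economy, is bounded by b * ln(n) for a question with n answers.

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, b, i):
    """Current price of answer i, always between 0 and 1."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def lmsr_trade_cost(quantities, b, i, shares):
    """What the market maker charges for `shares` of answer i."""
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# A Yes/No question with an illustrative liquidity parameter of b = 100.
b = 100
q = [0.0, 0.0]                        # no shares sold yet
print(lmsr_price(q, b, 0))            # 0.5 -- the initial 50/50 forecast
print(lmsr_trade_cost(q, b, 0, 50))   # cost of buying 50 "Yes" shares

# The market maker's worst-case loss (the subsidy) is b * ln(n),
# here about 69 Inkles for a two-answer question.
print(b * math.log(2))
```

Because that bound depends only on b and the number of answers, a Yes/No or Options question can only inflate the money supply by a modest, predictable amount.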

I wrote that Inkling usually uses LMSR, but there are two cases where it doesn't. Most market questions have answers that range from 0 to 1; for the question: "Which Republican candidate will win the South Carolina Primary in 2016?", each candidate will either win (1) or not win (0). But sometimes question authors want to ask questions where the answers don't range from 0 to 1--for instance, "How many games will each NFL team win in the 2015 season?" For questions that ask for a specific point estimate (or for a specific date), we use a different algorithm called QMSR (the Quadratic Market Scoring Rule).

QMSR itself is a perfectly reasonable algorithm, but it requires some additional input from a question author. Because questions can have a variety of likely answer ranges--NFL team wins range from 0-16, but team points range from roughly 200 to 500--we asked authors to provide a scaling factor, which adjusts the amount of liquidity provided by the question. A question ranging from 0-16 requires far more liquidity (a larger scaling factor) than one ranging from 200-500, because the endpoints are closer together--each unit of movement needs to be worth enough Inkles to matter. Ideally, all questions would provide approximately the same amount of liquidity, regardless of question type. While this generally held for our client sites, it didn't on our public site, creating a huge distortion in user scores.
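Here is a sketch of how that scaling factor interacts with the subsidy, using the standard quadratic scoring rule formulation of QMSR; Inkling's exact implementation may differ, and the Inkle amounts are made up for illustration. The market maker's worst-case payout grows with the square of the answer range, so a scaling factor that is sensible for a 0-16 question becomes wildly generous on a 200-500 one.

```python
def qmsr_trader_payout(mu_old, mu_new, outcome, scale):
    """Payout to a trader who moved the market's estimate from mu_old to
    mu_new, under the quadratic scoring rule s(mu, x) = -scale * (x - mu)**2."""
    return scale * ((outcome - mu_old) ** 2 - (outcome - mu_new) ** 2)

def qmsr_max_subsidy(x_min, x_max, scale):
    """Worst-case loss for the market maker: traders move the estimate from
    one end of the allowed range to the other, and the outcome lands there."""
    return scale * (x_max - x_min) ** 2

# A trader who improves the estimate of a team's wins from 8 to 10, when the
# team actually wins 11 games, earns 3.9 * ((11-8)**2 - (11-10)**2) = 31.2.
print(qmsr_trader_payout(mu_old=8, mu_new=10, outcome=11, scale=3.9))

# Suppose an author wants each question to pay out at most ~1,000 Inkles
# (an arbitrary target). The scaling factor must shrink as the range widens:
print(1000 / (16 - 0) ** 2)      # ~3.9   for 0-16 NFL wins
print(1000 / (500 - 200) ** 2)   # ~0.011 for 200-500 team points

# Reusing the "reasonable-looking" factor of 3.9 on the wide-range question
# would quietly raise its subsidy to roughly 350,000 Inkles:
print(qmsr_max_subsidy(200, 500, 3.9))
```

The point isn't the specific formula--it's that, unlike LMSR's b * ln(n) bound, the QMSR subsidy depends on an author-chosen number that is easy to get badly wrong.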

The average Binary question (any Yes/No question) on Inkling has a subsidy of 538 Inkles, meaning that on average, each of these questions increases the total number of Inkles in circulation. Some users win, some lose, but in total, they come out 538 Inkles ahead. For Options questions, which often have more than two answers, the average subsidy is 965 Inkles. By comparison, whenever a new user joins Inkling, they receive 5,000 Inkles, so creating a question in one of these types has a relatively small effect. Unfortunately, this isn't the case for questions that ask for a point estimate--these questions have an average subsidy of 267,542 Inkles. Furthermore, these subsidies have risen steeply over time.
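Measured against the 5,000 Inkles a new user starts with, the gap is stark; a quick back-of-the-envelope comparison using the averages above:

```python
starting_balance = 5_000          # Inkles given to each new user

average_subsidy = {
    "Binary (Yes/No)": 538,
    "Options": 965,
    "Point estimate": 267_542,
}

for question_type, subsidy in average_subsidy.items():
    ratio = subsidy / starting_balance
    print(f"{question_type}: {ratio:.1f}x a new user's entire balance")

# Binary (Yes/No): 0.1x a new user's entire balance
# Options: 0.2x a new user's entire balance
# Point estimate: 53.5x a new user's entire balance
```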

Before 2011, the average subsidy for a point estimate question was 4,629, which is high, but not absurd. By 2012, that figure had increased to 73,142; in 2014 it was 730,727; and in 2015 it's at 4,158,425. This exponential growth has had a number of undesirable consequences for the market:

  • "Normal" prediction market questions have become mostly meaningless from a scoring perspective, since point estimate questions have a more than four thousand times larger effect.
  • Point estimate questions are created with increasingly large scaling factors, meaning that new users are unable to impact the forecasts on these questions.
  • The leaderboard has become incredibly skewed towards users who participate in (and create) point estimate questions.

We've now taken steps to limit the creation of point estimate questions with large scaling factors, but not before this loophole had a sizable effect on our markets. The important lesson is that prediction market administrators, even when running free prediction markets, need to monitor the subsidy being offered by each question. If certain question types are weighted more heavily than others, it can greatly skew user scores and distort the functioning of the market.

If you liked this post, follow us on Twitter at @cultivatelabs, and sign up for our monthly newsletter.

Ben Golden/@BenGoldn is an Engineer at Cultivate Labs.
