Joining a prediction market or prediction pool can be intimidating. You may feel like you've plunged into a world of impossible questions and ongoing arguments, with no idea where to start. You might think there's no way you could possibly add anything, or that your forecasts couldn't possibly be better than anyone else's. But prediction markets don't have to be intimidating; here are four tips to get started.
In more than 10 years of working with large companies on building prediction markets and other crowdsourced forecasting mechanisms, we've seen one common thread among the projects that are unsuccessful: project owners overestimate the technology and underestimate what it takes to engage people to make the technology successful. Avoid these five pitfalls to improve the success of your Cultivate Forecasts prediction market.
So, you asked a prediction market question, and the outcome is now known. The question has been resolved and winnings have been disbursed to the forecasters who held winning positions. Forecasters know how well they did based on their profits in the question, and you know who your good forecasters were too. But how accurate was your organization at answering the question itself? There are several things to consider when thinking about the accuracy of the prediction market.
A multinational energy company uses Cultivate Forecasts to predict economic and geopolitical events that impact oil and gas prices. We find out why they decided to incorporate internal crowdsourcing within their business and how they have been so successful at engaging employee participation.
I enjoyed John Horgan's piece on Bayes' theorem for Scientific American. Bayes' theorem and Bayesian reasoning are highly applicable when thinking about forecasting and prediction markets; indeed, one prediction market built a Bayes net into its platform. In this post I'll explain what Bayesian reasoning is, why it matters to prediction markets, and give a concrete (but semi-fictitious) example of how it's applied.
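As a quick refresher before you read that post, here is what a single Bayesian update looks like in code. The scenario and numbers are purely illustrative assumptions of mine, not the example from the post itself.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) = P(E | H) * P(H) / P(E), with P(E) expanded by total probability."""
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_evidence

# Illustrative numbers: a forecaster thinks an event is 30% likely, then sees a news
# report that is twice as likely to appear if the event really is going to happen.
posterior = bayes_update(prior=0.30, p_evidence_given_h=0.60, p_evidence_given_not_h=0.30)
print(round(posterior, 3))  # 0.462 -- the updated forecast after conditioning on the report
```

The point of the refresher: the evidence doesn't replace your prior, it shifts it, which is exactly how a good forecaster should adjust a market position.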
Our sites use a popular prediction market algorithm called LMSR (the logarithmic market scoring rule) to determine how markets adjust when someone makes a forecast, and how user scores are affected by making correct and incorrect forecasts.
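For the curious, here is a minimal sketch of how an LMSR market maker prices outcomes and charges for a trade. The liquidity parameter `b` and the quantities below are illustrative; our production implementation may differ in its details.

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Current price (probability) of each outcome: exp(q_i / b) / sum_j exp(q_j / b)."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def cost_of_trade(quantities, outcome, shares, b=100.0):
    """What a forecaster pays to buy `shares` of one outcome: C(q_new) - C(q_old)."""
    new_q = list(quantities)
    new_q[outcome] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# A fresh two-outcome market starts at 50/50; buying 20 shares of "yes" nudges it upward.
q = [0.0, 0.0]
print(lmsr_prices(q))              # [0.5, 0.5]
print(cost_of_trade(q, 0, 20.0))   # the price paid for the trade
q[0] += 20.0
print(lmsr_prices(q))              # "yes" now sits slightly above 0.5
```

The key property is that every forecast moves the price a little, and the cost of moving it grows as the market becomes more confident.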
I've been trying to pick NFL game winners. I'm not using any complex analytical model; rather, I'm making decisions the way most sports bettors do--I watch some games, read the news, and use my judgment. I make each of my picks on SportsCast, which allows me to track my performance, interact with other forecasters, and track the performance of the prediction market--that is, the collective performance of all the forecasters on SportsCast.
Joining a prediction market can be confusing and anxiety-inducing. It's easy to be overwhelmed by all the questions, to not understand the forecasting interface, or to have trouble forming opinions to base forecasts on. All of this is pretty natural; as a now-experienced forecaster, I can remember feeling all of these things the first time I joined a prediction market. In this post I'll address a few specific emotional barriers that make it difficult to start forecasting.
On a recent podcast, Jack Schultz and I discussed two razor companies that are poised to become unicorn companies. Unicorns--startups that grow to billion-dollar valuations while remaining private--are somewhat mysterious and the subject of continuous speculation.
One use of prediction markets I've been really excited about is forecasting individual players' performances in major sports. These predictions are incredibly useful when playing fantasy sports--both daily fantasy and season-long leagues--and the forecasts that currently exist tend to be, in my experience, pretty mediocre. Prediction markets present an opportunity for the wisdom of the crowds to intervene, and will likely lead to more accurate forecasts.
In my last post analyzing my own forecasting history on Inkling Markets, I showed that I was consistently identifying long-shot bets that were more likely to pay off than their existing probability would suggest. In this post, I'll look at how my forecasts improve the accuracy of these markets, calculating the change in component Brier score within different percentiles.
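For readers who want to follow along, the accuracy measure here is the Brier score: the squared error of a probability forecast, where lower is better. The sketch below is a hypothetical illustration of scoring one forecast against the market price it replaced, not the exact code behind the analysis.

```python
def brier(probability, outcome):
    """Brier score for a binary forecast: (p - outcome)^2, where outcome is 1 or 0."""
    return (probability - outcome) ** 2

def forecast_improvement(market_prob_before, my_prob, outcome):
    """Positive when my forecast scored better (lower Brier) than the price it replaced."""
    return brier(market_prob_before, outcome) - brier(my_prob, outcome)

# Illustrative: the market said 20%, I moved it to 35%, and the event happened.
print(forecast_improvement(0.20, 0.35, outcome=1))  # 0.2175 -- the market got more accurate
```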
We're developing some new tools to analyze forecasters' performance, biases, and ways to help them improve, and I've been digging into my own forecasting history on Inkling. I've focused on a set of 3,343 forecasts I've made in questions that use an LMSR algorithm and have already resolved. The first interesting finding is that most of my forecasts have been wrong.
(You know, besides how to cheat...) The recent hack of Ashley Madison--a dating service marketed towards married people--revealed that almost no women were actively using the site. Rather, the site's almost entirely male userbase was paying to interact with non-existent, fake, or inactive female accounts.
Prediction markets are generally very good at generating accurate forecasts, but a key secondary challenge is determining which forecasters contribute most to that accuracy. Users' scores are closely linked to their accuracy because the underlying market mechanism rewards them when they move a forecast closer to the actual result and penalizes them when they move it away.
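Here is a hedged sketch of why scores track accuracy under a market scoring rule like LMSR: a trader who moves a binary market's probability from p to p' earns b * ln(p'/p) if the event happens and b * ln((1-p')/(1-p)) if it doesn't, so moving toward the eventual outcome earns points and moving away loses them. The constant `b` and the numbers below are illustrative, not our production scoring.

```python
import math

def lmsr_trade_payoff(p_before, p_after, outcome_happened, b=100.0):
    """Profit for moving a binary LMSR market from p_before to p_after, settled at resolution."""
    if outcome_happened:
        return b * math.log(p_after / p_before)
    return b * math.log((1 - p_after) / (1 - p_before))

# Moving the forecast from 40% to 60% pays off if the event occurs...
print(round(lmsr_trade_payoff(0.40, 0.60, True), 1))   # +40.5
# ...and costs the forecaster points if it doesn't.
print(round(lmsr_trade_payoff(0.40, 0.60, False), 1))  # -40.5
```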
On December 7 of last year, the Carolina Panthers were 3-8-1 and I spent about 1100 Inkles forecasting that they would make the playoffs. Even at long odds, this may seem like wasted Inkles--I'm pretty sure that a 3-8-1 team had never gone on to make the playoffs. But there were a couple of other important factors.
As I've become more involved with prediction markets, I've grown increasingly frustrated with journalists who make predictions (aka pundits) without linking to prediction market questions. This is, in my opinion, lousy journalism, and insulting to readers.