One of the first and most important questions we get from clients, forecasters, and consumers of our data is: “How accurate are these forecasts?” To answer it, we have utilized and built upon a widely accepted proper scoring rule, i.e. a way to measure the accuracy of a probabilistic forecast.
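The classic example of a proper scoring rule is the Brier score: the mean squared error between forecast probabilities and the 0/1 outcomes. A minimal sketch (the forecasts and outcomes below are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better; 0 is a perfect forecast.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, mostly-correct forecaster scores near 0...
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
# ...while always guessing 50% scores 0.25.
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```

"Proper" means a forecaster minimizes their expected score by reporting their true belief, so the rule can't be gamed by hedging.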
Joining a prediction market can be confusing and anxiety-inducing. It's easy to be overwhelmed by all the questions, to not understand the forecasting interface, or to have trouble forming opinions to base forecasts on. All of this is pretty natural--as a now-experienced forecaster, I can remember these feelings from the first time I joined a prediction market. In this post I'll address a few specific emotional barriers that make it difficult to start forecasting.
On a recent podcast, Jack Schultz and I discussed two razor companies that are poised to become unicorn companies. Unicorns--startups that grow to billion-dollar valuations while remaining private--are somewhat mysterious and the subject of continuous speculation.
Wikipedia’s intro paragraph for prediction markets reads: “Prediction markets (also known as predictive markets, information markets, decision markets, idea futures, event derivatives, or virtual markets) are exchange-traded markets created for the purpose of trading the outcome of events.”
One use of prediction markets I've been really excited about is forecasting individual players' performances in major sports. These predictions are incredibly useful when playing fantasy sports--both daily fantasy and season-long leagues--and the forecasts that currently exist tend to be, in my experience, pretty mediocre. Prediction markets present an opportunity for the wisdom of the crowds to intervene, and will likely lead to more accurate forecasts.
Over at Grantland, Zach Lowe has published an article featuring 35 crazy predictions for the upcoming NBA season. Writes Lowe: “For this to be fun, we have to find the sweet spot between bat-crap crazy and probable. Let’s all be wrong together!”
Two key takeaways from the emerging scandal surrounding daily fantasy sites: one, gambling data can be extremely valuable, and two, the only thing Americans love more than gambling is hating on gambling. Taken together, these findings illustrate why large-scale prediction markets present a path toward improving human knowledge across a wide range of topics.
Amongst the leadership teams of the portfolio companies at any medium-to-large investment firm, there is an incredible amount of experience, wisdom, and perspective that is not being collectively tapped.
We recently started working with a Houston-based client in the Energy sector, who wanted to use a prediction market to help with internal operations, and to create greater transparency and communication within their company. We spent a couple months meeting with our client to learn about their business and objectives, and using test questions (e.g. asking about Houston sports teams) to help participants understand how prediction markets work. Our initial questions focused on specific operations
Have you ever been tasked with driving a project you’ve felt was going nowhere? Maybe you were a project manager or project owner, coordinating a team that was working on something you felt wasn’t gaining traction within the organization.
Effective prediction markets require a certain amount of liquidity--meaning that after users invest points in making a forecast, and while the question is still running, they need a way to exit their investment.
I've been following the development of TruthCoin, a platform for decentralized prediction markets, for a while now. Prediction markets at their core are about crowdsourcing forecasts. At Cultivate Labs, we've also built a platform to crowdsource question authorship. TruthCoin goes one step further and crowdsources question resolution.
In my last post analyzing my own forecasting history on Inkling Markets, I showed that I was consistently identifying long-shot bets that were more likely to pay off than their existing probability would suggest. In this post, I'll look at how my forecasts improved the accuracy of these markets, calculating the change in component Brier score within different percentiles.
We're developing some new tools to analyze forecasters' performance, biases, and ways to help them improve, and I've been digging into my own forecasting history on Inkling. I've focused on a set of 3,343 forecasts I've made in questions that use an LMSR algorithm and have already resolved. The first interesting finding is that most of my forecasts have been wrong.
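LMSR here is Hanson's logarithmic market scoring rule, the automated market maker behind these questions. A minimal sketch for a binary question, with an arbitrary illustrative liquidity parameter `b = 100` (not the value Inkling actually uses):

```python
import math

B = 100.0  # liquidity parameter; higher = prices move less per trade (illustrative)

def cost(q_yes, q_no):
    """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes, q_no):
    """Instantaneous YES price, which doubles as the market's probability."""
    return math.exp(q_yes / B) / (math.exp(q_yes / B) + math.exp(q_no / B))

# With no shares outstanding, YES trades at 50%.
print(price_yes(0, 0))  # 0.5
# Buying 50 YES shares costs the difference in the cost function
# and pushes the YES probability up.
print(cost(50, 0) - cost(0, 0))
print(price_yes(50, 0))  # ~0.62
```

The market maker charges each trader `C(q_after) - C(q_before)`, which is what makes the market's worst-case subsidy bounded and lets anyone trade at any time.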
(You know, besides how to cheat...) The recent hack of Ashley Madison--a dating service marketed towards married people--revealed that almost no women were actively using the site. Rather, the site's almost entirely male userbase was paying to interact with non-existent, fake, or inactive female accounts.
Prediction markets are generally very good at generating accurate forecasts, but a key secondary challenge is to determine which forecasters are contributing most to forecast accuracy. User scores are closely linked to their accuracy because the underlying market mechanism rewards users when they move a forecast closer to its actual result and penalizes them when they move the forecast away from the result.
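One common way to make this concrete (a sketch, not necessarily the exact formula any given platform uses) is to credit each trade with the change in log score it produced, evaluated against the realized outcome. The probabilities below are invented for illustration:

```python
import math

def trade_score(p_before, p_after, outcome):
    """Points for moving the YES probability from p_before to p_after.

    Positive when the trade moved the forecast toward the realized
    outcome, negative when it moved the forecast away. The factor of
    100 is an arbitrary scaling for readability.
    """
    if outcome == 1:
        return 100 * math.log(p_after / p_before)
    return 100 * math.log((1 - p_after) / (1 - p_before))

# Suppose the question resolves YES (outcome = 1):
print(trade_score(0.40, 0.60, 1))  # positive: moved the forecast toward YES
print(trade_score(0.60, 0.40, 1))  # negative: moved the forecast away
```

A nice property of this attribution is that it is zero-sum along the price path: moving the forecast up and then back down nets out to zero, so a user can't earn points by churning.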
What is Crowdsourced Forecasting and Why is it the Best Forecasting Tool Around? Francis Galton was an English scientist who believed in the stupidity of the average human. Galton believed that, to have a healthy society, you needed to concentrate power in the select few who didn't fit that bill.
On December 7 of last year, the Carolina Panthers were 3-8-1 and I spent about 1100 Inkles forecasting that they would make the playoffs. Even at long odds, this may seem like wasted Inkles--I'm pretty sure that a 3-8-1 team had never gone on to make the playoffs. But there were a couple other important factors.
As I've become more involved with prediction markets, I've grown increasingly frustrated with journalists who make predictions (aka pundits) without linking to prediction market questions. This is, in my opinion, lousy journalism, and insulting to readers.
In growing my Inkling score from five thousand to ten million Inkles, one of the most important questions was related to the number of points each team would score in the most recent NBA season. The question asked about the difference between each team's points and the average of all teams.
Barry Ritholtz has written a curious column titled The 'Wisdom of Crowds' Is Not That Wise for Bloomberg View, which criticizes prediction markets. This is not a new view for Ritholtz, as he reminds us by linking to six blog posts critical of prediction markets each written by...Barry Ritholtz. Indeed Ritholtz has made it his mission to find instances of prediction markets 'failing', and has found six of them.