Our sites use a popular prediction market algorithm called LMSR (the logarithmic market scoring rule) to determine how market prices adjust when someone makes a forecast, and how user scores are affected by correct and incorrect forecasts.
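The post doesn't walk through the math, but the core LMSR mechanics can be sketched in a few lines. This is a minimal illustration, not our production code; the liquidity parameter `b` and the function names are assumptions made for the example.

```python
import math

def lmsr_cost(quantities, b=100.0):
    # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    # Instantaneous probability of outcome i: exp(q_i/b) / sum_j exp(q_j/b)
    denom = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / denom

def trade_cost(quantities, i, shares, b=100.0):
    # Cost to buy `shares` of outcome i is the change in the cost function
    new_q = list(quantities)
    new_q[i] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)
```

Buying shares in an outcome raises its price, which is how a forecast "moves" the market.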
I've been trying to pick NFL game winners. I'm not using any complex analytical model; rather, I'm making decisions the way most sports bettors do--I watch some games, read the news, and use my judgment. I make each of my picks on SportsCast, which allows me to track my performance, interact with other forecasters, and track the performance of the prediction market--that is, the collective performance of all the forecasters on SportsCast.
One of the first and most important questions we get from clients, forecasters, and consumers of our data is: “How accurate are these forecasts?” To answer this question, we have used and built upon a widely accepted proper scoring rule--that is, a way to measure the accuracy of a probabilistic forecast.
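The best-known proper scoring rule is the Brier score: the squared error between a probability forecast and what actually happened, where lower is better. A minimal sketch (the function name is ours, chosen for the example):

```python
def brier_score(forecast_probs, outcome_index):
    # Squared error between the forecast vector and the outcome indicator:
    # a perfect forecast scores 0.0; a maximally wrong binary forecast scores 2.0
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(forecast_probs))
```

A 50/50 forecast on a binary question scores 0.5 no matter what happens, which is why confident, correct forecasts are the path to a low average score.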
Joining a prediction market can be confusing and anxiety-inducing. It's easy to be overwhelmed by all the questions, to not understand the forecasting interface, or to have trouble forming opinions to base forecasts on. All of this is pretty natural--as a now-experienced forecaster, I can remember these feelings the first time I joined a prediction market. In this post I'll address a few specific emotional barriers that make it difficult to start forecasting.
On a recent podcast, Jack Schultz and I discussed two razor companies that are poised to become unicorn companies. Unicorns--startups that grow to billion-dollar valuations while remaining private--are somewhat mysterious and the subject of continuous speculation.
Wikipedia’s intro paragraph for prediction markets is the following: “Prediction markets (also known as predictive markets, information markets, decision markets, idea futures, event derivatives, or virtual markets) are exchange-traded markets created for the purpose of trading the outcome of events.”
One use of prediction markets I've been really excited about is forecasting individual players' performances in major sports. These predictions are incredibly useful when playing fantasy sports--both daily fantasy and season-long leagues--and the forecasts that currently exist tend to be, in my experience, pretty mediocre. Prediction markets present an opportunity for the wisdom of crowds to intervene, and will likely lead to more accurate forecasts.
While I’ll grant the naysayers that some networking events end up being a waste of time, I’ve been pleasantly surprised to find one aspect universally helpful: the ability to have short, five-minute conversations with everyone I meet about our business.
Over at Grantland, Zach Lowe has published an article featuring 35 crazy predictions for the upcoming NBA season. Writes Lowe: “For this to be fun, we have to find the sweet spot between bat-crap crazy and probable. Let’s all be wrong together!”
Two key takeaways from the emerging scandal surrounding daily fantasy sites: one, gambling data can be extremely valuable; and two, the only thing Americans love more than gambling is hating on gambling. Taken together, these findings illustrate why large-scale prediction markets present a path toward improving human knowledge on a wide range of topics.
Among the leadership teams of the portfolio companies at any medium-to-large investment firm, there is an incredible amount of experience, wisdom, and perspective that is not being collectively leveraged.
We recently started working with a Houston-based client in the Energy sector, who wanted to use a prediction market to help with internal operations, and to create greater transparency and communication within their company. We spent a couple months meeting with our client to learn about their business and objectives, and using test questions (e.g. asking about Houston sports teams) to help participants understand how prediction markets work. Our initial questions focused on specific operations
Have you ever been tasked with driving a project you’ve felt was going nowhere? Maybe you were a project manager or project owner, coordinating a team that was working on something you felt wasn’t gaining traction within the organization.
Effective prediction markets require a certain amount of liquidity--meaning that after users invest points in making a forecast, and while the question is still running, they need a way to exit their investment.
I've been following the development of TruthCoin, a platform for decentralized prediction markets, for a while now. Prediction markets at their core are about crowdsourcing forecasts. At Cultivate Labs, we've also built a platform to crowdsource question authorship. TruthCoin goes one step further and crowdsources question resolution.
In society’s harsh klieg light, if you’re not a world class athlete with multiple Super Bowl rings, your company isn’t worth a billion dollars, you don't have rock hard abs 3 months after having a baby, or you’re a politician polling in single digits, you’re “middling.” 20 years ago, it was unheard of to meet anyone who had run a marathon, and rarer still to meet anyone who had completed an Ironman. Now you’re lucky to get a “nice job” on Facebook.
In my last post analyzing my own forecasting history on Inkling Markets, I showed that I was consistently identifying long-shot bets that were more likely to pay off than their existing probability would suggest. In this post, I'll look at how my forecasts improved the accuracy of these markets, calculating the change in each market's component Brier score across different percentiles.
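The measurement itself is simple: compare the market's Brier score just before and just after a forecast. A hypothetical binary-market sketch (function names are ours, not the analysis code referenced in the post):

```python
def brier(p, outcome_occurred):
    # Binary Brier score for probability p assigned to "yes"
    return (p - 1.0) ** 2 if outcome_occurred else p ** 2

def brier_change(p_before, p_after, outcome_occurred):
    # Negative values mean the forecast moved the market toward the truth,
    # i.e., the forecaster improved the market's accuracy
    return brier(p_after, outcome_occurred) - brier(p_before, outcome_occurred)
```

Aggregating these per-forecast changes within probability percentiles shows where a forecaster adds the most value.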
We're developing some new tools to analyze forecasters' performance, biases, and ways to help them improve, and I've been digging into my own forecasting history on Inkling. I've focused on a set of 3,343 forecasts I've made in questions that use an LMSR algorithm and have already resolved. The first interesting finding is that most of my forecasts have been wrong.
(You know, besides how to cheat...) The recent hack of Ashley Madison--a dating service marketed towards married people--revealed that almost no women were actively using the site. Rather, the site's almost entirely male userbase was paying to interact with non-existent, fake, or inactive female accounts.
Prediction markets are generally very good at generating accurate forecasts, but a key secondary challenge is to determine which forecasters are contributing most to forecast accuracy. User scores are closely linked to their accuracy because the underlying market mechanism rewards users when they move a forecast closer to its actual result and penalizes them when they move the forecast away from the result.
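Under LMSR, that reward structure has a clean closed form for a binary question: a trader who moves the probability of the eventual outcome from p_old to p_new earns b * ln(p_new / p_old) at resolution. This is the standard LMSR result, sketched here as an illustration; the parameter `b` and the exact scoring on our platforms may differ.

```python
import math

def lmsr_trade_payout(p_old, p_new, outcome_occurred, b=100.0):
    # Payout for moving the market from p_old to p_new ("yes" probabilities).
    # Moving the eventual outcome's probability up earns points; down loses them.
    p_before = p_old if outcome_occurred else 1.0 - p_old
    p_after = p_new if outcome_occurred else 1.0 - p_new
    return b * math.log(p_after / p_before)
```

Because opposite moves cancel exactly, the market as a whole only pays out for net movement toward the true outcome.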
When Ben and I started Cultivate and talked about the core tenets we wanted the company to follow, one was to have the people we worked with share in the profits the company generated, outside of any normal compensation. But that may be having some unintended consequences.