In my last post analyzing my own forecasting history on Inkling Markets, I showed that I was consistently identifying long-shot bets that were more likely to pay off than their existing probability would suggest. In this post, I'll look at how my forecasts improve the accuracy of these markets, calculating the change in component Brier score within different percentiles.
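As a rough sketch of the kind of calculation involved (Inkling's exact scoring internals aren't shown here, so the function names and the example probabilities are illustrative), a component Brier score is just the squared error between a forecast probability and the 0/1 resolution, and a trade's contribution is the change in that score:

```python
def brier_component(p, outcome):
    """Squared error between a forecast probability and the 0/1 outcome."""
    return (p - outcome) ** 2

def brier_improvement(p_before, p_after, outcome):
    """Positive when a trade moved the market closer to the eventual result."""
    return brier_component(p_before, outcome) - brier_component(p_after, outcome)

# Illustrative: moving a long shot from 10% to 30% on a question that resolved YES
# improves the component Brier score by (0.9^2 - 0.7^2) = 0.32.
print(brier_improvement(0.10, 0.30, 1))
```

Summed or percentile-bucketed across thousands of trades, this is one simple way to ask whether a forecaster's activity made the market more accurate.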
We're developing some new tools to analyze forecasters' performance, biases, and ways to help them improve, and I've been digging into my own forecasting history on Inkling. I've focused on a set of 3,343 forecasts I've made in questions that use an LMSR algorithm and have already resolved. The first interesting finding is that most of my forecasts have been wrong.
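For readers unfamiliar with the LMSR (Logarithmic Market Scoring Rule) mentioned above, here is a minimal sketch of the standard Hanson formulation; the parameter `b` and the helper names are my own, not Inkling's implementation:

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's LMSR cost function: C(q) = b * ln(sum(exp(q_i / b)))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price (probability) of outcome i given outstanding shares q."""
    total = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / total

def trade_cost(q_old, q_new, b=100.0):
    """What a trader pays to move the outstanding shares from q_old to q_new."""
    return lmsr_cost(q_new, b) - lmsr_cost(q_old, b)

q = [0.0, 0.0]           # a fresh binary question starts at 50/50
print(lmsr_price(q, 0))  # 0.5
```

The appeal of this mechanism for a corporate market is that the automated market maker always quotes a price, so trades (and therefore forecasts) can happen even with thin participation.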
(You know, besides how to cheat...) The recent hack of Ashley Madison--a dating service marketed toward married people--revealed that almost no women were actively using the site. Rather, the site's almost entirely male userbase was paying to interact with fake or inactive female accounts.
Prediction markets are generally very good at generating accurate forecasts, but a key secondary challenge is to determine which forecasters are contributing most to forecast accuracy. User scores are closely linked to their accuracy because the underlying market mechanism rewards users when they move a forecast closer to its actual result and penalizes them when they move the forecast away from the result.
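One way to see why scores track accuracy under a log-scoring market maker: a trader who moves the market probability of the realized outcome upward earns a positive settlement, and one who moves it away loses. A hedged sketch (the closed form below is the standard LMSR/log-scoring-rule result, not Inkling's published payout code, and `b` is an assumed liquidity parameter):

```python
import math

def lmsr_trader_profit(p_before, p_after, outcome_occurred, b=100.0):
    """Settled profit for a trader who moved the market probability of one
    binary outcome from p_before to p_after, under the LMSR's equivalence
    to a sequentially shared logarithmic scoring rule."""
    if outcome_occurred:
        return b * math.log(p_after / p_before)
    return b * math.log((1 - p_after) / (1 - p_before))
```

For example, pushing a question from 40% to 70% pays off if it resolves YES and costs the trader if it resolves NO, which is exactly the reward/penalty structure described above.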
When Ben and I started Cultivate and talked about the core tenets we wanted the company to follow, one was to have people we worked with share in the profit the company was generating outside of any normal compensation. But that may be having some unintended consequences.
What is Crowdsourced Forecasting and Why is it the Best Forecasting Tool Around? Francis Galton was an English scientist who believed in the stupidity of the average human. Galton believed that, to have a healthy society, you needed to concentrate power in the select few who didn't fit that bill.
“How do we get people to do things?” It was only the first week at my new job here at Cultivate Labs when a client asked about how to increase participation in their prediction market. It’s the million-dollar question that undoubtedly comes up in every work project that requires any sort of change management effort... ever.
On December 7 of last year, the Carolina Panthers were 3-8-1 and I spent about 1100 Inkles forecasting that they would make the playoffs. Even at long odds, this may seem like wasted Inkles--I'm pretty sure that a 3-8-1 team had never gone on to make the playoffs. But there were a couple of other important factors.
As I've become more involved with prediction markets, I've grown increasingly frustrated with journalists who make predictions (aka pundits) without linking to prediction market questions. This is, in my opinion, lousy journalism, and insulting to readers.
In growing my Inkling score from five thousand to ten million Inkles, one of the most important questions concerned how many points each team would score in the most recent NBA season. Specifically, the question asked for the difference between each team's point total and the average across all teams.
When my grandmother immigrated to the United States, she couldn't afford to call her family on the telephone. That was about 70 years ago. Today, I have a friend whose brother moved to Sri Lanka to become a Buddhist monk and literally lives in a cave. He and his family Skype. This is the power of the Internet--for a significant portion of the planet, it's now possible for any two people to communicate from anywhere, in real-time, basically for free.
Barry Ritholtz has written a curious column titled The 'Wisdom of Crowds' Is Not That Wise for Bloomberg View, which criticizes prediction markets. This is not a new view for Ritholtz, as he reminds us by linking to six blog posts critical of prediction markets, each written by...Barry Ritholtz. Indeed, Ritholtz has made it his mission to find instances of prediction markets 'failing', and has found six of them.