Well, that was embarrassing: lessons from forecasting the UK’s prime minister
By Zach Siewert on October 25, 2022
Cultivate employees are eager to exercise their forecasting muscles by participating in our public forecasting sites. It’s not just fun for us – it helps us understand the platform’s user experience and see what’s working and what can be improved. Here’s what Zach learned about forecasting blind spots after a recent foray with a question on the future of the British Prime Minister…
A key practice of a good forecaster is doing post-mortems on your forecasts. Whether the result was good or bad, a quality post-mortem can help you identify what you did well or poorly and what you can improve next time.
My recent forecasts about the future of Liz Truss’ premiership in the UK went… poorly.
After starting out with an initial forecast of a 10% chance of Truss’ premiership ending before 2024, I jumped up to consensus at 30% on September 26. Financial markets had demonstrated their displeasure with Truss’ economic plans, and in what seemed like uncertain times, I trusted the crowd until I could do more research on the mechanics of how Truss might be removed.
After a reasonable start, how did things go so wrong? I’ll let the
economists and political analysts tell you what went wrong for Truss, but as
for me, I’m embarrassed to report that I fell into several of the forecasting
traps that we routinely teach new forecasters to avoid:
1. Confirmation bias:
A week after adjusting toward consensus, I had time to do some basic
research, and I latched on to information that confirmed my initial
beliefs. Conservative Party rules don’t allow a vote of confidence in
the leader during their first year in office. Satisfied that the party
couldn’t remove her (and confident the Conservatives wouldn’t risk an
election with their polling figures in the basement), I dropped back to
just 20% and didn’t look further into the matter.
2. Overconfidence:
Between having had some success with political forecasting in the past and being a Canadian with a working knowledge of the parliamentary system, I assumed I understood the UK’s political culture. This led me to dismiss the possibility of what eventually happened: Truss resigning.
I had plenty of opportunity to observe the cultural differences between Canada and the UK – I even noted to friends and colleagues that the scandals that ended Boris Johnson’s premiership just months ago would have been but a minor speed bump for Canada’s present government.
Despite Johnson’s (and previously, May’s) resignations, I assumed that, as
with our politics in Canada, the storm would eventually pass. This assumption
ultimately led me to ignore what was so plain to everyone else: Truss simply
couldn’t continue as PM.
3. Desire to beat the crowd:
The best forecasters focus on getting the most accurate forecast, no matter what the crowd says. Sometimes this means being contrarian, as I was, and other times it means staying in line with the crowd, even if that’s not as thrilling.
When it comes to scoring, however, forecasters who are correct when the crowd is wrong are rewarded much more generously than those who get it right along with everyone else.
What I justified as independent thinking was indeed my true opinion, but I was also biased by a desire to beat the crowd. Any self-doubt from the crowd consensus moving inexorably away from me was outweighed by the thought that my forecast score relative to the crowd was getting better and better.
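To make that scoring incentive concrete, here’s a minimal sketch using the Brier score, a common proper scoring rule for probability forecasts – note this is an illustration, not the platform’s actual scoring formula:

```python
# Brier score: squared error of a binary probability forecast.
# 0 is a perfect score, 1 is the worst possible. (Illustrative only;
# the platform's real scoring formula may differ.)

def brier(p: float, outcome: int) -> float:
    """Squared error of a forecast p against an outcome of 0 or 1."""
    return (p - outcome) ** 2

def edge_over_crowd(p: float, crowd_p: float, outcome: int) -> float:
    """Positive means you beat the crowd on this question."""
    return brier(crowd_p, outcome) - brier(p, outcome)

# Contrarian who is right while the crowd is wrong: big reward.
# brier(0.30, 1) - brier(0.90, 1) = 0.49 - 0.01 ≈ +0.48
right_contrarian = edge_over_crowd(p=0.90, crowd_p=0.30, outcome=1)

# Right along with everyone else: almost no relative reward.
# 0.0225 - 0.01 ≈ +0.01
right_with_crowd = edge_over_crowd(p=0.90, crowd_p=0.85, outcome=1)

# Contrarian who is wrong (my Truss forecast): big penalty.
# 0.09 - 0.64 ≈ -0.55
wrong_contrarian = edge_over_crowd(p=0.20, crowd_p=0.70, outcome=1)

print(round(right_contrarian, 3), round(right_with_crowd, 3),
      round(wrong_contrarian, 3))
```

The asymmetry is the point: being right alongside the crowd earns almost nothing relative to it, while a successful contrarian call pays off handsomely – which is exactly the incentive that can quietly bias a forecaster toward staying contrarian too long.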
So what’s next? None of the “traps” above are new information to me, but having them laid bare in my own forecasting process is a great reminder for me to:
- Search for contrarian information that goes against my prior beliefs
- Operate with humility and test my biases
- Respect the crowd and seek to understand the consensus before straying too far from it
P.S. Not only did I make a mess of this forecast, but I also answered a
related question asking about the chances that Truss would be out of office by
April 2023. I forecast just 5%... Sigh.