With the rise of AI, disinformation in mass media will only continue to grow and evolve. Turning our attention to crowd forecasting is one way that we can better understand, monitor, and respond to global uncertainty – together.
As a company, we hold small group forecasting sessions regularly. We’ve learned a few tactics to help other teams surface diverse viewpoints and avoid groupthink. Here are some recommendations for any teams forecasting together.
Earlier this month, one of our client programs INFER – the crowd forecasting program supporting the U.S. Government – hosted a special forecasting event about the future of AI. The event was attended by select INFER forecasters and nearly two dozen forecasting enthusiasts who work at Google, joining in a personal capacity.
Cultivate has released a new “request for response” capability that allows you to send any audience a simple survey, in which they can submit forecasts that will automatically get aggregated with your crowd’s on your Forecasts platform.
One of the first steps in a crowdsourced forecasting effort is establishing a process for developing forecast questions that will deliver meaningful signals to decision-makers. We wanted to shed some light on our process, so we talked to a few of our team members who focus on developing questions for our client platforms.
We recently launched an update to our binary question interface – it will now show both “yes” and “no” answer options, rather than a single “yes” option. When you change the probability for one answer, the other will automatically update to ensure the probabilities always add to 100%.
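The complementary update described above can be sketched in a few lines of Python – a simplified model of the interface logic for illustration, not Cultivate's actual implementation:

```python
def update_binary_forecast(probs, answer, new_pct):
    """Set `answer` to new_pct and adjust the other option so the two
    probabilities always sum to 100 (percentages as integers)."""
    other = "no" if answer == "yes" else "yes"
    updated = dict(probs)
    updated[answer] = new_pct
    updated[other] = 100 - new_pct
    return updated

# Moving "yes" to 75% automatically moves "no" to 25%.
print(update_binary_forecast({"yes": 60, "no": 40}, "yes", 75))
```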
Our forecasters have been asking for a mobile app – and we are excited to share that it is now available for INFER, a crowd forecasting program in partnership with the University of Maryland’s ARLIS to support U.S. Government policymakers. You will now be able to submit forecasts on critical questions to support INFER’s government stakeholders directly from our native app on a phone or tablet.
tl;dr – Yes, of course it matters. But improving it at the expense of the other benefits crowdsourced forecasting can provide continues to receive an outsized share of attention when thinking about how to put crowdsourced forecasting to use.
"Manual" and "time-consuming" sound like a perfect use case for automated technology, which is why we were so interested in what ChatGPT could do. Ultimately, our goal was to do the same thing a human was doing: summarize the rationales representing different probabilistic judgments.
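As a rough illustration of the preprocessing such a workflow might involve – a hypothetical sketch, with the bucket size and data shape chosen for the example rather than taken from our pipeline – rationales could be grouped by probability range before being handed to a model for summarization:

```python
def build_summary_prompt(forecasts, bucket_size=25):
    """Group forecaster rationales into probability buckets and assemble
    a prompt asking an LLM to summarize each cluster of viewpoints.
    `forecasts` is a list of (probability_pct, rationale) pairs."""
    buckets = {}
    for prob, rationale in forecasts:
        lo = (prob // bucket_size) * bucket_size
        buckets.setdefault((lo, lo + bucket_size), []).append(rationale)
    lines = ["Summarize the main arguments in each probability range:"]
    for (lo, hi), rationales in sorted(buckets.items()):
        lines.append(f"\n{lo}-{hi}%:")
        lines.extend(f"- {r}" for r in rationales)
    return "\n".join(lines)

prompt = build_summary_prompt([(80, "strong polling"), (20, "base rates are low")])
print(prompt)
```

The resulting prompt could then be sent to any chat-completion API; the grouping step is the part that mirrors what a human summarizer does by hand.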
In a recent meeting in Washington, DC with a group responsible for continuously thinking about the needs of the U.S. Intelligence Community in order to improve its mission, I was asked how crowdsourced forecasting can help mitigate "gray rhinos." Gray rhinos are impactful, highly probable events that everyone knows are coming, but that are not acted upon.
We were thrilled to see a couple of different projects we have the opportunity to work on come together in a formal collaboration last week. Cosmic Bazaar and INFER – the UK and U.S. governments' forecasting efforts, respectively – have agreed to a partnership, asking similar questions and sharing data between the two platforms – a first-ever collaboration of its kind.
Crowd forecasting allows you to get signals about events before they happen. We're making it even easier to be alerted to important signals by introducing crowd forecast change alerts – which notify you of sudden shifts in the consensus forecast.
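One simple way such an alert could be triggered – a hypothetical sketch, not necessarily how the platform detects shifts – is to compare the consensus against its value a few readings earlier and flag any move beyond a threshold:

```python
def detect_shifts(history, threshold=15, window=3):
    """Return (index, delta) pairs where the consensus probability (in
    percentage points) moved at least `threshold` points within the
    last `window` readings of `history`."""
    alerts = []
    for i in range(window, len(history)):
        delta = history[i] - history[i - window]
        if abs(delta) >= threshold:
            alerts.append((i, delta))
    return alerts

# A consensus hovering near 50% that jumps to the low 70s trips the alert.
history = [50, 52, 51, 53, 70, 72]
print(detect_shifts(history, threshold=15, window=3))
```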
I wanted to do a brief year-end retrospective on what we’ve been focused on both for your sake and for ours. Sometimes we take for granted how much we’ve accomplished in a 12-month window. Taking a moment to pause and look back helps us appreciate that better.
Many of effective altruism's (EA) core values illustrate why EAs are enthusiastic about crowd forecasting methods. EAs seek to tackle problems of global significance, placing an emphasis not only on doing good, but on doing good effectively. "When decision-makers in government have to make high-stakes judgment calls, using rigorous forecasting techniques can improve our ability to predict the future and make better decisions."
A key practice of a good forecaster is doing post-mortems on your forecasts. Whether the result was good or bad, a quality post-mortem can help you identify what you did well or poorly and what you can improve on next time. See what Zach learned about his blind spots from a recent forecast about now-former UK Prime Minister Liz Truss.
Cultivate Labs is excited to announce a partnership with The Bertelsmann Stiftung, an independent foundation headquartered in Germany, and the Washington, DC-based Bertelsmann Foundation, part of the Stiftung's
Cultivate Labs and Pytho.io announced today the creation of a forecaster training curriculum for INFER (INtegrated Forecasting and Estimates of Risk), a crowdsourced forecasting program run by the Advanced Research Lab for Intelligence and Security (ARLIS) at the University of Maryland.
For many years, Cultivate Forecasts supported two different forecasting interface modes: prediction markets and opinion pools (aka opinion surveys or probability surveys). In a prediction market, forecasters buy and sell shares of answer options using real or virtual/fantasy currency (i.e., I spend $10 to buy shares of “Yes” in the market “Will candidate X win the election?”). In an opinion pool, forecasters assign a probability to each potential answer (i.e., I forecast a 70% probability of “Yes” in that same question).
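For illustration, an opinion pool's individual judgments might be combined with a simple median – one common, outlier-robust aggregation choice for this mode; Cultivate's actual aggregation algorithm may differ:

```python
from statistics import median

def aggregate_opinion_pool(forecasts):
    """Combine per-forecaster probability judgments into a consensus by
    taking the median for each answer option. `forecasts` is a list of
    dicts mapping answer -> probability."""
    answers = forecasts[0].keys()
    return {a: median(f[a] for f in forecasts) for a in answers}

forecasts = [
    {"Yes": 0.7, "No": 0.3},
    {"Yes": 0.6, "No": 0.4},
    {"Yes": 0.8, "No": 0.2},
]
print(aggregate_opinion_pool(forecasts))
```

A prediction market, by contrast, produces its consensus implicitly through share prices rather than through an explicit aggregation step like this one.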
As consumers learn to use these forecasts as part of their own analysis and decision-making, we've been thinking through how we can make sure they see a complete picture – not just the consensus tracked by the graph visualization, but also why that consensus may be wrong. We want the consumer to always question their assumptions and question the consensus.
Questioning the assumptions and probabilities of the consensus is a simple best practice of forecasting. Do I currently agree with the prevailing winds, or do I predict something different will occur? We've recently introduced the "contrarian sort..."