There is mounting evidence of the superiority of the Superforecasting approach, which Economics Explored hosts Gene Tunny and Tim Hughes discussed with Warren Hatch, CEO of Good Judgment, on an episode earlier this year (see How to be a superforecaster, or at least a better forecaster). Superforecasting is an approach to forecasting that, as the blurb for the 2015 book Superforecasting notes, “involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course.”
The success of Good Judgment’s superforecasters in forecasting the US Federal Reserve’s policy decisions was profiled in the New York Times last month. Good Judgment has been posing an ongoing series of questions to its superforecasters about the Fed’s next three meetings, asking whether the Fed will cut, hold, or raise rates. For the four meetings so far in 2023, the superforecasters were spot on with their probabilities for three hikes and a pause. For the next three meetings, they forecast two hikes followed by a longer pause.
Good Judgment data scientist Chris Karvetski has prepared an analysis showing the superforecasters’ extraordinary performance in forecasting the Federal Funds rate targeted by the Fed (see Superforecasting the Fed’s Target Range). He has calculated Brier scores of forecast accuracy, where 0 denotes perfect accuracy and 1 denotes perfect inaccuracy, for different sets of forecasts. The superforecasters are doing three times better than CME futures for the Federal Funds rate, with far less volatility.
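For readers curious how a Brier score works mechanically, here is a minimal sketch of the binary version: the mean squared difference between the probability a forecaster assigned to an event and whether it actually occurred. The numbers below are purely hypothetical illustrations, not the superforecasters’ actual Fed forecasts.

```python
def brier_score(forecasts, outcomes):
    """Binary Brier score: mean squared error of probability forecasts.

    forecasts: probabilities (0 to 1) assigned to the event occurring
    outcomes:  1 if the event occurred, 0 if it did not
    Returns a value between 0 (perfect accuracy) and 1 (perfect inaccuracy).
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: a forecaster put 90%, 80%, and 95% on a rate
# hike at three meetings, and a hike occurred each time.
print(brier_score([0.9, 0.8, 0.95], [1, 1, 1]))  # → 0.0175
```

A confident forecaster who is usually right scores close to 0; hedging everything at 50% yields 0.25; confident wrong calls push the score toward 1. This is what “keeping score” means in practice for the superforecasters.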
Separately, superforecasting pioneer and Good Judgment co-founder Philip Tetlock and his research colleagues just released a study on existential risk with interesting approaches to generating forecasts for low-probability but high-impact events, such as an AI apocalypse (see Results from the 2022 Existential Risk Persuasion Tournament). This study was summarised by The Economist earlier this month: What are the chances of an AI apocalypse? Thankfully, as The Economist observes:
Professional “superforecasters” are more optimistic about the future than AI experts.
For more information on the superforecasting approach, check out the Economics Explored podcast episode from earlier this year:
Several clips from the video of the interview are available via YouTube. The first clip is “What Makes a Superforecaster?”:
It identifies the importance of being cognitively reflective and having good pattern recognition skills. Incidentally, one way to identify people with good pattern recognition is to test them with Raven’s Progressive Matrices, as noted by Warren Hatch in this clip:
Another clip covers how we can overcome our own prejudices and biases to make better forecasts:
Tips from Warren in this regard include:
- getting feedback; and
- forecasting in teams whose members can interact with each other anonymously, so that everyone’s views are considered solely on their merits, free of prejudice.