Are Treasury forecasts credible?
In a recent opinion piece economics columnist Ross Gittins defended Treasury on the grounds that:
- It assesses its own performance and publishes the results
- It bases its forecasts on reasonable assumptions and sophisticated modeling
- Its critics don’t do either of these things
But still, are they credible? What does “credible” mean?
As evidence of Treasury’s credibility Gittins points to a new section of the recent Budget Papers, Statement 7: Forecasting Performance and Scenario Analysis, in which Treasury describes its own performance.
The first chart in that Statement shows Treasury GDP growth forecasts (dots) against actual GDP growth (columns). Eyeballing the chart suggests that Treasury is usually out by half a percentage point or more, and sometimes much more. But then, economic forecasting is notoriously difficult. Is this performance good, bad or OK? How do we tell?
One way is to compare against simple benchmarks. For example, what if I were to “compete” with Treasury by forecasting that growth one year will be the same as growth in the previous year? Clearly some years I’d do well, and some years I’d be way out. Would I be on average more or less accurate than Treasury? You can’t tell just by looking, but it can be calculated easily enough.
I took the data from the chart and calculated the mean squared error of Treasury's forecasts against actual growth, and of two "naive" strategies against actual growth: Naive 1, the strategy described in the previous paragraph, where next year's growth is forecast to equal this year's; and Naive 2, where next year's growth is forecast to equal the average of growth in all previous years. Here are the results:
Naive 1: 1.22
Naive 2: 0.93
Apparently, Treasury is doing about as well as one dumb strategy (Naive 1, same as last year) and worse than the other (Naive 2, average of prior years). (Note that for mean squared error, lower is better.)
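For the curious, the benchmark comparison is a few lines of code. This is a minimal sketch; the growth figures below are illustrative placeholders, not the actual numbers read off the Budget Papers chart.

```python
# Hypothetical annual GDP growth outcomes (%), for illustration only.
actual = [3.1, 2.7, 2.4, 2.8, 2.0, 2.2, 2.9]

def mse(forecasts, outcomes):
    """Mean squared error between paired forecasts and outcomes."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# Naive 1: forecast next year's growth to equal this year's.
naive1 = actual[:-1]

# Naive 2: forecast next year's growth as the mean of all years so far.
naive2 = [sum(actual[:i]) / i for i in range(1, len(actual))]

# Each forecast is scored against the following year's outcome.
outcomes = actual[1:]
print("Naive 1 MSE:", round(mse(naive1, outcomes), 2))
print("Naive 2 MSE:", round(mse(naive2, outcomes), 2))
```

Substitute the forecast and outcome series from the chart and you can reproduce the comparison, including Treasury's own score, in seconds.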
In other words, on the face of it, despite all its effort, intelligence, data and modeling, Treasury forecasts GDP growth worse than a simple extrapolation well within the ability of most high school students.
If that’s right, I’d say Treasury forecasts are not credible.
To be sure, this is very rough and ready. The analysis could be made more sophisticated in all sorts of ways. The key question however is whether Treasury is managing to outperform simple benchmarks. If they aren’t demonstrably doing so, why shouldn’t we ignore what they say and just go with the benchmarks?
Interestingly, Statement 7 of the Budget Papers makes no comparison with simple benchmarks. They tell us how well they did, and provide a long list of reasons why they think doing as well as they did is pretty hard. They don’t tell us how well they would have done if they had used some other less expensive strategy.
The other thing they don’t tell us is how their forecasts compare with how good it is possible to be. The implicit claim is that their forecasts are as good as anyone could do, but this is far from obvious.
I’m raising these points not to condemn Treasury forecasts but to throw out some challenges.
First, Treasury should compare how it is performing against simple benchmarks. Indeed, I’d be surprised if they weren’t doing this already in some cubicle somewhere. They should make the results easily accessible to the public.
Second, Treasury and its critics should enter independently run public forecasting competitions. These competitions should be open to anyone who'd like to try their hand, using whatever methods or data they like. Such competitions would be the best way to establish how much credibility Treasury forecasts really have.
The new site www.rba.tips – a “tipping competition” for RBA interest rate decisions – is an example of the kind of approach that could be used.
Such steps might help make Treasury, in Gittins’ phrase, “the only honest players in this game.”