Climate scientists have been getting a lot of flak. Something I repeatedly read is that their predictions of temperature rises have been wildly inaccurate, and that therefore we shouldn’t believe anything they say. This is simply not true. One forecast made in 1999 by the Oxford physicist Myles Allen has proved to be impressively spot-on. WottsUpWithThatBlog has already written a good post about this at Tests of a climate forecast, so I won’t go into too much detail here other than to show the forecast.
A paper published this year, Test of a decadal climate forecast, which tests the accuracy of this forecast, comes to the same conclusion:
Early climate forecasts are often claimed to have overestimated recent warming. However, their evaluation is challenging for two reasons. First, only a small number of independent forecasts have been made. And second, an independent test of a forecast of the decadal response to external climate forcing requires observations taken over at least one and a half decades from the last observations used to make the forecast, because internally generated climate fluctuations can persist for several years. Here we assess one of the first probabilistic climate forecasts with a full uncertainty assessment that was based on climate models and data up to 1996. Using observations of global temperature over the ensuing 16 years, we find that the original forecast is performing significantly better than a hypothetical alternative based on the assumption that decade-to-decade temperature fluctuations consist of a random walk, that is, a sequence of random fluctuations with no externally driven warming trend. The original climate forecast also outperforms a very simple interpretation of the climate models used for the latest Assessment of the Intergovernmental Panel on Climate Change (IPCC), supporting the conclusions of previous assessments that the spread of such an ensemble is not, on its own, an adequate measure of forecast uncertainty.
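The random-walk comparison in the abstract is worth unpacking: the question is whether a forecast with an externally driven warming trend beats a null model in which temperature anomalies just wander randomly with no trend. Here is a minimal sketch of that idea on synthetic data (this is my own toy illustration, not the paper’s actual statistical method, and the trend and noise values are made-up assumptions):

```python
import numpy as np

# Toy illustration (NOT the paper's method): compare a trend-based
# forecast against a random-walk null on synthetic temperatures.
rng = np.random.default_rng(0)

years = np.arange(1997, 2013)              # a 16-year verification period
trend = 0.02                               # assumed warming rate, degC/yr (made up)
noise = rng.normal(0, 0.1, years.size)     # stand-in for internal variability
observed = trend * (years - years[0]) + noise

# Forecast 1: externally forced warming at the assumed trend rate.
forecast_trend = trend * (years - years[0])

# Forecast 2: random-walk null -- with no forced trend, the best guess
# for every future year is the starting anomaly (0 here).
forecast_walk = np.zeros(years.size)

mse_trend = np.mean((observed - forecast_trend) ** 2)
mse_walk = np.mean((observed - forecast_walk) ** 2)
print(f"MSE, trend forecast:    {mse_trend:.4f}")
print(f"MSE, random-walk null:  {mse_walk:.4f}")
```

When the data really do contain a warming trend, the trend forecast has the smaller error, which is the flavour of result the paper reports for Allen's 1999 forecast against the random-walk alternative.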