Alex Sen Gupta, a biological scientist, has written an article (reviewed by John Cook) attempting to defend the validity of climate models that cannot predict the climate.
According to Gupta:
To understand what’s happening, it is critical to realise that the climate changes for a number of reasons in addition to CO₂. These include solar variations, volcanic eruptions and human aerosol emissions.
The influence of all these “climate drivers” is included in modern climate models. On top of this, our climate also changes as a result of natural and largely random fluctuations – like the El Niño Southern Oscillation (ENSO) and the Interdecadal Pacific Oscillation (IPO) – that can redistribute heat to the deep ocean (thereby masking surface warming).
Such fluctuations are unpredictable beyond a few months (or possibly years), being triggered by atmospheric and oceanic weather systems. So while models do generate fluctuations like ENSO and IPO, in centennial scale simulations they don’t (and wouldn’t be expected to) occur at the same time as they do in observations.
The problem with this claim is that, as Gupta says, the climate models are supposed to take these random fluctuations into account. Climate models are supposed to accommodate randomness by providing a range of predicted values – a range produced by plugging in different values for the random elements which cannot be predicted. However, observations sit right on the lower border of that range. The divergence between climate models and observations is now so great that climate models are on the brink of being incontrovertibly falsified.
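The notion of a predicted range produced by varying the unpredictable elements can be sketched with a toy Monte Carlo ensemble. This is purely illustrative: the trend, noise persistence and magnitudes below are invented numbers, not parameters of any real climate model, and the “observations” are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1980, 2015)        # a 35-year window
trend = 0.02 * (years - years[0])    # hypothetical forced trend, 0.02 units/yr

def internal_variability(rng, n, phi=0.7, sigma=0.1):
    """Simple AR(1) noise standing in for ENSO/IPO-like fluctuations."""
    noise = np.empty(n)
    noise[0] = rng.normal(0, sigma)
    for i in range(1, n):
        noise[i] = phi * noise[i - 1] + rng.normal(0, sigma)
    return noise

# Each ensemble member shares the same forced trend but carries its own
# random internal variability; the members differ only in the random part.
ensemble = np.array(
    [trend + internal_variability(rng, len(years)) for _ in range(100)]
)

# The "predicted range" is the spread across members at each year.
lower = np.percentile(ensemble, 5, axis=0)
upper = np.percentile(ensemble, 95, axis=0)

# A synthetic observation series that warms more slowly than the forced trend
obs = 0.012 * (years - years[0]) + internal_variability(rng, len(years))

inside = np.mean((obs >= lower) & (obs <= upper))
print(f"fraction of years inside the 5-95% envelope: {inside:.2f}")
```

With a slower-warming observation series, points tend to track the lower border of the envelope – the situation the paragraph above describes.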
As Judith Curry recently said, “If the pause continues for 20 years (a period for which none of the climate models showed a pause in the presence of greenhouse warming), the climate models will have failed a fundamental test of empirical adequacy.”
This is important, because it strikes at the heart of the claim that climate models can detect human influence on climate change. If climate models cannot model the climate, if the models cannot be reconciled with observations, how can the models possibly be useful for attributing the causes of climate change? If scientists defending the models claim the discrepancy is due to random fluctuations in the climate – fluctuations which have pushed the models to the brink of falsification – doesn’t this demonstrate that, at the very least, the models very likely underestimate the amount of randomness in the climate? Is it possible that the entire 20th century warming might be one large random fluctuation?
Nature is certainly capable of producing large, rapid climate fluctuations, such as the Younger Dryas, an abrupt return to ice age conditions which occurred 12,500 years ago. You can’t use climate models which demonstrably underestimate the randomness of climate change to calculate how much of the observed 20th century warming is not random.
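The statistical point that pure randomness can produce a sustained trend can be sketched with a toy random-walk experiment. Again, this is an illustration of a statistical possibility, not a claim about the actual climate: the step size and threshold are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# 10,000 century-long random walks with zero drift: no forced trend at all,
# only accumulated random steps.
n_walks, n_years = 10_000, 100
steps = rng.normal(0.0, 0.1, size=(n_walks, n_years))  # 0.1 units/yr step s.d.
walks = steps.cumsum(axis=1)

# Fraction of purely random walks that nevertheless end the century
# at least 0.8 units "warmer" than they started.
frac = float(np.mean(walks[:, -1] >= 0.8))
print(f"fraction of drift-free walks ending >= 0.8 units higher: {frac:.3f}")
```

A non-trivial fraction of drift-free walks end the century with what looks like a warming trend, which is the sense in which a large fluctuation can masquerade as a trend if the randomness is large enough.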
If current mainstream climate models cannot predict the climate, then scientists have to consider the possibility that other models, with different assumptions, can do a better job. It is no accident that Monckton, Soon, Legates and Briggs’ paper on an irreducibly simple climate model, which does a better job of hindcasting climate than mainstream models, has received over 10,000 downloads. As every scientific revolution in history has demonstrated, being right is ultimately more important than being mainstream, even if it sometimes takes a few years to win acceptance.
For now, mainstream climate scientists are mostly hiding in the fringes of their estimates. When they acknowledge it at all, they claim that the anomaly, the pause, is a low probability event which is still consistent with climate models. Hans von Storch, one of the giants of German climate research, claimed a few years ago that 98% of climate models cannot be reconciled with reality – which still, for now, leaves a 2% possibility that climate scientists are right.
Is the world really preparing to spend billions, trillions of dollars, on a 2% bet?