Guest post by Dr. David Stockwell
In Australia, the carbon-tax juggernaut rolls on, justified in part by fears that droughts will increase in frequency and severity as CO2 increases.
I have always found checking one’s assumptions to be good advice, and with that in mind I checked the models used in a major drought study by the CSIRO and the Australian BoM. The study, the Drought Exceptional Circumstances Report (DECR), was widely used to support the contention that major increases in drought frequency and severity in Australia will result from further increases in CO2.
My paper contributes in the areas of validation of climate models (the subject of a recent post at Climate, Etc.), and regional model disagreement with rainfall observations (see post by Willis Eschenbach).
Specifically, droughts decreased over the last century in line with increasing rainfall, but the climate models used in the DECR showed the opposite (and significantly so).
Overall, it is a case study demonstrating the need for more rigorous and explicit validation of climate models if they are to advise government policy.
It is reasonably well known that general circulation models are virtually worthless at projecting changes in regional rainfall; the IPCC says so, and the Australian Academy of Science agrees. The most basic statistical tests in the paper demonstrate this: the simulated drought trends are statistically inconsistent with the trend of the observations, a simple mean value shows more skill than any of the models, and drought frequency has dropped below the 95% confidence limits of the simulations (see Figure).
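The consistency check described above can be sketched very simply: compute the trend of the observations and ask whether it falls inside the 95% range of the ensemble of simulated trends. The function below is an illustrative sketch only, using a normal approximation to the ensemble spread and made-up numbers; it is not the DECR data or the paper’s exact test.

```python
# Sketch of a statistical-consistency check between an observed trend and an
# ensemble of simulated trends. Assumes the ensemble spread is roughly normal,
# so the 95% range is mean +/- 1.96 standard deviations. Illustrative only.
import statistics

def observation_consistent(observed_trend, simulated_trends):
    """Return True if the observed trend lies within the ensemble's 95% range."""
    mean = statistics.mean(simulated_trends)
    sd = statistics.stdev(simulated_trends)          # sample standard deviation
    lower, upper = mean - 1.96 * sd, mean + 1.96 * sd
    return lower <= observed_trend <= upper

# Hypothetical example: models project an increasing drought trend (~+1.0)
# while the observed trend is flat-to-decreasing (0.0).
model_trends = [1.0, 1.1, 0.9, 1.05, 0.95]
print(observation_consistent(0.0, model_trends))   # False: inconsistent
print(observation_consistent(1.0, model_trends))   # True: consistent
```

When the observed trend falls outside the simulated range, the observations and the ensemble are statistically inconsistent, which is the situation the paper reports for drought frequency.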
The larger issue is how to get people to accept that there will always be worthless models, and that it is the task of genuinely committed modellers to identify and eliminate them. It is not convincing to argue that validation is too hard for climate models, that they are the only ones we have, that they are justified by physical realism, or that they are ‘close enough’.
My study shows that obvious testing regimes, had they been applied, would have shown the drought models in the DECR study to be unfit for use. I asked the CSIRO, but no validation results were supplied.
The concerns of scientists differ from those of decision-makers. While scientists are mainly interested in the relative skill of models in order to gauge improvements, decision-makers are (or should be) concerned primarily with whether the models should be used at all (whether they are fit for use). Because of this, model-testing regimes for decision-makers must have the potential to reject some or all of the models outright if they do not rise above a predetermined standard, or benchmark.
There are a number of ways that benchmarking can be set up, which engineers and others in critical disciplines would be familiar with, usually involving a degree of independent inspection, documentation of expected standards, and so on. My favorite benchmark test is quick and easy: the Nash-Sutcliffe Efficiency, an indicator of whether a model shows more skill than a simple mean value.
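The Nash-Sutcliffe Efficiency is simple enough to compute in a few lines. It is defined as NSE = 1 − SSE(model) / SSE(climatological mean): a value above zero means the model out-predicts the simple mean of the observations, and a value at or below zero means it does not. The sketch below uses made-up numbers purely to show the mechanics.

```python
# Nash-Sutcliffe Efficiency: NSE = 1 - SSE(model) / SSE(mean).
# NSE = 1 is a perfect model; NSE <= 0 means the model is no better than
# simply predicting the mean of the observations (the benchmark failure case).

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse_model = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sse_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse_model / sse_mean

# Hypothetical series: a perfect model scores 1.0; predicting the
# observations' own mean scores exactly 0.0.
obs = [1.0, 2.0, 3.0, 4.0]
print(nash_sutcliffe(obs, [1.0, 2.0, 3.0, 4.0]))   # 1.0
print(nash_sutcliffe(obs, [2.5, 2.5, 2.5, 2.5]))   # 0.0
```

The appeal as a benchmark is precisely that zero is a natural, predetermined cut-off: any model scoring at or below it can be rejected without debate about relative skill.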
I believe that decision-makers should not take results seriously unless rigorous validation of the models is also demonstrated.
It is up to the customers of climate studies not to rely on the authority of the IPCC, the CSIRO and the BoM, but to demand “Show us your tests”, as would be expected with any economic, medical or engineering study where the costs of making the wrong decision are high. Duty of care requires confidence that all reasonable means have been taken to validate all of the models and assumptions that support the key conclusions.
About the author (from his website here)
After receiving a Ph.D. in Ecosystem Dynamics from the Australian National University in 1992, I worked as a consultant (WHO, Parks and Wildlife, Land and Natural Resources services) until moving to the San Diego Supercomputer Center at the University of California San Diego in 1997. There I helped to develop computational and data-intensive infrastructure for ecological niche modeling, mainly using museum collections data, with grants from the NSF, USGS and DOT. I developed the GARP (Genetic Algorithm for Rule-set Production) system, making contributions in many fields: modeling of invasive species, epidemiology of human diseases, the discovery of seven new species of chameleon in Madagascar, and the effects of climate change on species. I have published in major journals and was judged by the US Immigration Service to be an Outstanding Researcher, recognized internationally as outstanding in my academic field.