Richard Betts heads the Climate Impacts area of the UK Met Office. The first bullet point on his webpage under areas of expertise describes his work as a climate modeler. He was one of the lead authors of the IPCC’s 5th Assessment Report (WG2). On a recent thread at Andrew Montford’s BishopHill blog, Dr. Betts left a remarkable comment that downplayed the importance of climate models.
Dr. Betts originally left the Aug 22, 2014 at 5:38 PM comment on the It’s the Atlantic wot dunnit thread. Andrew found the comment so noteworthy he wrote a post about it. See the BishopHill post GCMs and public policy. In response to Andrew’s statement, “Once again this brings us back to the thorny question of whether a GCM is a suitable tool to inform public policy,” Richard Betts wrote:
Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy.
Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas.

Everyone* agrees that CO2 rise is anthropogenic.

Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either.

So climate mitigation policy is a political judgement based on what policymakers think carries the greater risk in the future – decarbonising or not decarbonising.
A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, years and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then further refining the models). Clearly, contingency planning and adaptation need to be done in the face of large uncertainty.
*OK so not quite everyone, but everyone who has thought about it to any reasonable extent
**Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence
As noted earlier, it appears extremely odd that a climate modeler is downplaying the role of—the need for—his products.
“…WE CAN’T PREDICT LONG-TERM RESPONSE OF THE CLIMATE TO ONGOING CO2 RISE WITH GREAT ACCURACY”
Unfortunately, policy decisions by politicians around the globe have been and are being based on the predictions of assumed future catastrophes generated within the number-crunched worlds of climate models. Without those climate models, there are no foundations for policy decisions.
“…CLIMATE MITIGATION POLICY IS A POLITICAL JUDGEMENT BASED ON WHAT POLICYMAKERS THINK CARRIES THE GREATER RISK IN THE FUTURE – DECARBONISING OR NOT DECARBONISING”
But policymakers—and more importantly the public who elect the policymakers—have not been truly made aware that there is great uncertainty in the computer-created assumptions of future risk. Remarkably, we now find a lead author of the IPCC stating (my boldface):
… we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know.
I don’t recall seeing the simple statement “We don’t know” anywhere in any IPCC report. Should “we don’t know” become the new theme of climate science, their mantra?
“THE OLD-STYLE ENERGY BALANCE MODELS GOT US THIS FAR”
Yet the latest and greatest climate models used by the IPCC for their 5th Assessment Report show no skill at simulating past climate…even during the recent warming period since the mid-1970s. So the policymakers—and, more importantly, the public—have been misinformed about the capabilities of climate models.
For much of the year 2013, we presented those model failings in dozens of blog posts, including as examples:
- Will their Failure to Properly Simulate Multidecadal Variations In Surface Temperatures Be the Downfall of the IPCC?
- Models Fail: Land versus Sea Surface Warming Rates
- Polar Amplification: Observations versus IPCC Climate Models
- Model-Data Comparison: Hemispheric Sea Ice Area
- Model-Data Precipitation Comparison: CMIP5 (IPCC AR5) Model Simulations versus Satellite-Era Observations
- Model-Data Comparison with Trend Maps: CMIP5 (IPCC AR5) Models vs New GISS Land-Ocean Temperature Index
In other words, the climate models presented in the IPCC’s 5th Assessment Report cannot simulate what most people would consider the basics: surface temperatures, sea ice area and precipitation.
Shameless Plug: These and other model failings were presented in my ebook Climate Models Fail.
“APART FROM A FEW WHO THINK THAT OBSERVATIONS OF A DECADE OR THREE OF SMALL FORCING CAN BE EXTRAPOLATED TO INDICATE THE RESPONSE TO LONG-TERM LARGER FORCING WITH CONFIDENCE”
A few? In effect, that’s all the climate models used by the IPCC do with respect to surface temperatures. Figure 1 shows the annual GISS Land-Ocean Temperature Index data for the Northern Hemisphere from 1975 to 2000, a period to which climate models are tuned, along with its linear trend (warming rate). That linear trend has also been extrapolated out to 2100. Also shown in the graph is the multi-model ensemble mean (the average of all of the individual climate model runs) of the simulated Northern Hemisphere surface temperature anomalies for the climate models stored in the CMIP5 archive, which was used by the IPCC for their 5th Assessment Report.
The model simulations of 21st Century surface temperature anomalies and their trends have been broken down into thirds to show that, despite the constantly increasing forcings, there is little increase in the simulated warming rate through the first two-thirds of the 21st Century. In other words, through about 2066 the models simply follow the trend extrapolated from the 1975-2000 data, even as the forcings rise. See Figure 2 for the forcings.
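The trend fit and extrapolation behind Figure 1 can be sketched in a few lines of Python. The anomaly series below is a synthetic stand-in for the GISS LOTI Northern Hemisphere data, rising at roughly the observed rate; the values are illustrative only, not the actual observations.

```python
# Sketch of the Figure 1 comparison: fit a linear trend to the 1975-2000
# "tuning period" and extrapolate it to 2100. The anomalies below are a
# synthetic stand-in for the GISS LOTI data, not the real observations.

def linear_fit(years, anomalies):
    """Ordinary least-squares slope (deg C/yr) and intercept."""
    n = len(years)
    mx = sum(years) / n
    my = sum(anomalies) / n
    slope = (sum((x - mx) * (a - my) for x, a in zip(years, anomalies))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

# Synthetic 1975-2000 anomalies rising at roughly 0.17 deg C/decade.
years_obs = list(range(1975, 2001))
anoms_obs = [0.017 * (y - 1975) for y in years_obs]

slope, intercept = linear_fit(years_obs, anoms_obs)

# Extrapolate the fitted line through 2100, as in Figure 1.
extrapolated = {y: slope * y + intercept for y in range(2001, 2101)}

# Trend over each third of the 21st Century; a model run that simply
# tracks the extrapolated line shows the same rate in all three.
for start, stop in ((2001, 2033), (2034, 2066), (2067, 2100)):
    yrs = list(range(start, stop + 1))
    s, _ = linear_fit(yrs, [extrapolated[y] for y in yrs])
    print(f"{start}-{stop}: {10 * s:.3f} deg C/decade")
```

Replacing the synthetic series with the actual GISS LOTI and CMIP5 ensemble-mean series (both available through the KNMI Climate Explorer) turns this sketch into the comparison plotted in Figure 1.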
So, Dr. Betts’s “a few” appears, in reality, to be the consensus of the climate science community…the central tendency of mainstream thinking about climate dynamics…the groupthink.
And the problem with the groupthink is that the climate science community tuned their models to a naturally occurring upswing in surface temperatures. See Figure 3.
Should the modelers have anticipated another cycle or two when making their pre-programmed prognostications of the future? Of course they should have. The models are out of phase with reality.
But why didn’t they tune their models to the long-term trend? If they had, there would be nothing alarming about a 0.07 deg C warming rate in Northern Hemisphere surface temperatures. Nothing alarming at all.
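The tuning-window argument can be illustrated numerically. The series below is entirely synthetic: it reads the 0.07 deg C figure as a per-decade rate (an assumption) and invents a 60-year cycle purely for illustration. A linear fit over just the 1975-2000 upswing then yields a much larger warming rate than a fit over the whole record.

```python
import math

def trend_per_decade(years, values):
    """Least-squares slope converted to deg C per decade."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    num = sum((x - mx) * (v - my) for x, v in zip(years, values))
    den = sum((x - mx) ** 2 for x in years)
    return 10 * num / den

# Synthetic series: a 0.07 deg C/decade underlying trend plus a 60-year
# cycle of 0.15 deg C amplitude peaking around 2000. These numbers are
# illustrative only; they are not the actual GISS data.
years = list(range(1900, 2014))
series = [0.007 * (y - 1900) + 0.15 * math.sin(2 * math.pi * (y - 1925) / 60)
          for y in years]

full_trend = trend_per_decade(years, series)

# Fit only over the 1975-2000 tuning window, which sits on an upswing.
window = list(range(1975, 2001))
window_trend = trend_per_decade(window, [series[y - 1900] for y in window])

print(f"1900-2013 trend: {full_trend:.3f} deg C/decade")
print(f"1975-2000 trend: {window_trend:.3f} deg C/decade")
```

The same two fits could be applied to the actual GISS LOTI data from the KNMI Climate Explorer; the point of the sketch is only that fitting over an upswing of a cyclical series inflates the apparent warming rate well above the underlying trend.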
You may be wondering why I focused on Northern Hemisphere surface temperatures. It’s well known that climate models cannot simulate the warming that took place in the Southern Hemisphere during the recent warming period. See Figure 4. The models show almost double the warming observed there since 1975.
Dr. Betts noted:
A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, years and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then further refining the models).
In order for the climate science community to create forecasts of regional climate on decadal timescales, the models will first have to be able to simulate coupled ocean-atmosphere processes. Unfortunately, with their politically driven focus on CO2, modelers are no closer now to being able to simulate those processes than they were two decades ago.
The GISS LOTI data and the climate model outputs are available through the KNMI Climate Explorer.