A low-sensitivity climate model that outperforms the Met Office's HadGEM2

Climate sensitivity is, IMHO, the most important unresolved issue in climate science. A number of recent papers, along with the leaked IPCC AR5 draft and the later AR5 draft leaked to The Economist, point to lower climate sensitivities than have been sold in the past. Now we have an older model that seems to do a better job of hindcasting than even the models run on the new Met Office supercomputer.

Steve McIntyre writes at his blog:

Results from a Low-Sensitivity Model

Anti-lukewarmers/anti-skeptics have a longstanding challenge to lukewarmers and skeptics to demonstrate that low-sensitivity models can account for 20th century temperature history as well as high-sensitivity models. (Though it seems to me that, examined closely, the supposed hindcast excellence of high-sensitivity models is salesmanship, rather than performance.)

Unfortunately, it’s an enormous undertaking to build a low-sensitivity model from scratch and the challenge has essentially remained unanswered.

Recently a CA reader, who has chosen not to identify himself at CA, drew my attention to an older generation low-sensitivity (1.65 deg C/doubling) model. I thought that it would be interesting to run this model using observed GHG levels to compare its success in replicating 20th century temperature history.

The author of this low-sensitivity model (denoted GCM-Q in the graphic below) is known to other members of the “climate community”, but, for personal reasons, has not participated in recent controversy over climate sensitivity. For the same personal reasons, I do not, at present, have permission to identify him, though I do not anticipate him objecting to my presenting today’s results on an anonymous basis.

In addition to the interest of a low-sensitivity model, there’s also an intrinsic interest in running an older model to see how it does, given observed GHGs. Indeed, it is a common complaint on skeptic blogs that we never get to see the performance of older models on actual GHGs, since the reported models are being constantly rewritten and re-tuned. That complaint cannot be made against today’s post.

The lower sensitivity of GCM-Q arises primarily because it has negligible net feedback from the water cycle (clouds plus water vapour). It also has no allowance for aerosols.

This is a must read. See the results here:

Results from a Low-Sensitivity Model

29 Comments
richard verney
July 22, 2013 4:09 am

Unless back radiation has no effect, it is impossible to resolve climate sensitivity without first fully resolving natural variation.
Any temperature change is either the product of some forcing encompassed within natural variation, and/or a factor of climate sensitivity to GHGs and/or to aerosols.
The holy grail of climate science should be to fully understand what natural variation encompasses: each and every constituent forcing making up natural variation, and the upper and lower bounds of each and every constituent forcing. In short, we need to know and understand everything there is to know and understand about natural variation. It is only once we possess the requisite knowledge and understanding of natural variation that we will be in a position to ascertain which temperature changes in the various thermometer temperature data sets are caused by natural variation and which are caused by a response to some manmade cause and/or other aerosol forcings. Until that time we shall not be in a position to separate the signal (if any) of CO2 from the noise of natural variation.
Presently, since there is no first-order correlation between temperature and CO2 in any of the thermometer data sets, or for that matter in the paleo proxy reconstructions, one would have to say that climate sensitivity is weaker than natural variation and is conceivably a factor of someone’s over-fertile imagination.
Any discussions at this stage regarding climate sensitivity are disingenuous. The empirical observational data is not sufficient (whether due to lack of resolution, and/or bastardisation resulting from endless adjustments the appropriateness of which is moot, and/or pollution due to UHI, siting issues and station drop-outs, and/or generally insufficient data length) to even begin to mount an assessment of so-called climate sensitivity.
Personally, I consider that sceptics should call this out for what it is, namely just a wild guess of a theoretical concept. That said, if climate sensitivity is low then CAGW cannot exist, and there could at most be only modest future warming resulting from continued manmade CO2 emissions. It is only for that reason that climate sensitivity warrants even a cursory comment (since presently, due to shortcomings in the data, we are wasting our time if we want to deduce climate sensitivity; maybe in 30 or so years’ time it will be possible, but that depends upon improving the various data sets and getting rid of all the bastardisation and other problems that beset them).

Bloke down the pub
July 22, 2013 4:10 am

It still shows temps rising inexorably, so my guess is correlation, not causation. Time will tell.
PS Anthony. Have WordPress been making changes? When I first tried to post this I got a message telling me to slow down!

Ian W
July 22, 2013 4:15 am

I have never understood why it is thought that there is a single figure for ‘climate sensitivity’. That is not how the water cycle works, and it is the water cycle that even the CAGW proponents need for CAGW to work. As a simple experiment, if you add heat to a pan of water, the temperature of the water will rise until there is a phase change to steam as the water boils; at that point (at sea level) the temperature will stay around 100C. So what is the sensitivity of the water to added heat? It is NOT linear. The atmosphere is full of phase changes and convective currents and circulations with geostrophic and Coriolis effects – there is no linearity in response. The idea that there is one sensitivity is a mathematician’s simplification of a problem that is too large – and the simplification has forced an error in logic.
“The specific heat is the amount of heat per unit mass required to raise the temperature by one degree Celsius. The relationship between heat and temperature change is usually expressed in the form shown below where c is the specific heat. The relationship does not apply if a phase change is encountered, because the heat added or removed during a phase change does not change the temperature.”
http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/spht.html
We are repeatedly told that these GCMs follow the laws of physics – they do not.
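[For reference, the textbook relation the quoted passage describes, alongside the latent-heat form that replaces it at a phase change; this is standard physics, not taken from the linked page:]

```latex
% Sensible heating: added heat Q raises the temperature
% (m = mass, c = specific heat, \Delta T = temperature change)
Q = m \, c \, \Delta T

% At a phase change the heat goes into the latent heat L of the
% transition instead, and the temperature does not change:
Q = m \, L , \qquad \Delta T = 0
```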

Nick Stokes
July 22, 2013 4:47 am

Ian W says: July 22, 2013 at 4:15 am
“We are repeatedly told that these GCMs follow the laws of physics – they do not.”

Proper GCMs do not employ any notion of sensitivity. That is something that people try to glean from the results.
That is why I am dubious of GCM-Q, with a stated sensitivity. But we are told so little about it that it’s barely worth wondering at this stage. I notice that Anthony’s question at CA asking who actually obtained the plotted results is not yet answered.

David Smith
July 22, 2013 5:31 am

I notice that arch-warmist Richard Telford was straight onto Steve’s blog to post a comment moanin’ and complainin’ that Steve has given no details about the model. It seems to have touched a nerve with Rikkie!
RT whinging about a lack of detail is rather ironic considering the refusal of the Climate Crew at UEA to reveal their data!

Australis
July 22, 2013 6:03 am

If you are a skeptic you have no idea what natural variation has been over the past 15+ years.
But if you are a believer – and you know that AGW forcing (net of aerosols) has added 0.3°C/decade since 1997 – then you can very accurately calculate what cooling effects have been produced by natural variation during that period.
Will solar cycle 24 (plus whatever) continue to force linear cooling at 0.3°C/decade?
Of course, if sensitivity turns out to be 33% lower than we thought, then natural cooling has been only 0.2°C/decade.

July 22, 2013 6:09 am

richard verney writes “Personally, I consider that sceptics should call this out for what it is, namely just a wild guess of a theoretical concept.”
I have been trying for years, without success, to do just that. Many years ago, I realized that, with current technology, it is impossible to measure climate sensitivity; one cannot do controlled experiments on the earth’s atmosphere. Hence, no one has any idea how accurate any of the hypothetical estimates are. ALL values of climate sensitivity are nothing more than guesses. And my guess, that the value of climate sensitivity is indistinguishable from zero, is just as good as anyone else’s.

commieBob
July 22, 2013 6:47 am

Anti-lukewarmers/anti-skeptics have a longstanding challenge to lukewarmers and skeptics to demonstrate that low-sensitivity models can account for 20th century temperature history as well as high-sensitivity models.

Scafetta claims to have met the above challenge.
Scafetta postulates a sensitivity to CO2 which is something like half that of the GCMs.
“Solar and planetary oscillation control on climate change” http://arxiv.org/abs/1307.3706

July 22, 2013 7:16 am

Since the canonical no-feedback doubling value is between 1.0 (black body) and 1.2 (grey body, agreed by Lindzen) degrees C, GCM-Q actually does have a net positive feedback of roughly 0.25 using Lindzen’s definition of S. That is fully consistent with meta-analysis of the full scientific literature (not the selection-biased AR4 and AR5 SOD) showing a lower water vapour feedback plus probably negative cloud feedbacks. It is fully consistent with Forster and Gregory (2006). Exhaustive details are contained in the climate chapter of The Arts of Truth ebook, published 2012.
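[For readers following the arithmetic: assuming the usual linear-feedback form, in which a net feedback f amplifies the no-feedback response S0 (my reconstruction of the implied calculation, not the commenter’s own working), the stated numbers do give a feedback near 0.25:]

```latex
S = \frac{S_0}{1 - f}
\quad\Longrightarrow\quad
f = 1 - \frac{S_0}{S} = 1 - \frac{1.2}{1.65} \approx 0.27
```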

July 22, 2013 7:48 am

Ian W says:
July 22, 2013 at 4:15 am
I have never understood why it is thought that there is a single figure for ‘climate sensitivity’
==========
The single figure for climate sensitivity is based on the 19th century “clockwork” view of physics: that you can set the parameters, wind up the model, and always obtain the same result.
The 19th century model in effect assumes “the future is written”: that the events of today fully determine what will happen tomorrow. However, if this were true, then today’s events were fully determined by yesterday’s, and therefore yesterday fully determined tomorrow. Which would mean that today’s events have absolutely no meaning or effect.
Extended to the limit, this view of reality means that everything that happens in the universe was pre-ordained at the moment of creation, and the actions each and every one of us take were determined at that point in time. In effect there is no such thing as freedom of choice; all actions are slaves to the past. We are all wind-up machines.
While many still have this view of the universe, they fail to grasp the wider implications. 20th century physics takes a much different view. The past does not determine a single future, but rather a near-infinite range of futures, of which today is the realization of one of the possibilities. Under this model of climate, it can be argued that climate sensitivity is not a single value; it is a probability density function. For any single set of forcings there are many different temperatures that could result, with different probabilities for each.
Thus, there is no way to realistically assign a single value to CO2 sensitivity and expect it to be correct. Each value is nothing more than a guess, some more likely to happen than others, and what actually will happen is beyond our ability to calculate in finite time. It takes longer to calculate the future than it does to arrive at the future. In effect, reality is simply the fastest known way to calculate what will actually happen tomorrow.
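[A toy Monte Carlo sketch of that “sensitivity as a probability density function” idea; every number below is invented purely for illustration:]

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: warming per CO2 doubling = no-feedback response / (1 - f),
# where the net feedback f is uncertain. All values are illustrative.
S0 = 1.2                                           # no-feedback response, deg C per doubling
f = rng.normal(loc=0.1, scale=0.15, size=100_000)  # uncertain net feedback
f = f[f < 0.9]                                     # guard against the unphysical runaway tail

sensitivity = S0 / (1.0 - f)   # one sensitivity per sampled feedback

# The output is a distribution, not a single number
print(f"median:      {np.median(sensitivity):.2f} C per doubling")
print(f"5-95% range: {np.percentile(sensitivity, [5, 95]).round(2)}")
```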

FrankK
July 22, 2013 8:04 am

Can’t say the comparison is all that much better, and it is particularly poor during the last 15 years. I’m giving it the thumbs down.

Alan the Brit
July 22, 2013 8:21 am

That is the whole point of it all: climate sensitivity. Until that is known, along with everything else SM notes, we’re whistling (euphemism) in the wind. So one has to invent the Precautionary Principle; that way you can create all kinds of scares & prove them right!!!!!

July 22, 2013 8:33 am

Climate sensitivity to CO2 doubling plays only a bit part on the larger stage of natural terrestrial and cosmic variation. Sensitivity can change under the influence of myriad factors that are not well understood. Puzzles within puzzles, with humans marveling at all the pretty colors on the outermost facade, and trying to make a fortune or a career out of quasi-mystical superficialities.

Peter Stroud
July 22, 2013 8:47 am

Any increase in surface temperature, whether the result of natural, or anthropogenic forcing, will increase the fraction of water vapour in the atmosphere. I don’t think any reasonable person will quarrel with this. Furthermore, if the additional water content remains in the vapour phase, then its greenhouse gas properties will lead to further warming.
However, if the water vapour is transformed into clouds, or precipitation, then the increasing content can, surely, either increase warmth, result in no change, or result in cooling. In other words, the feedback parameter will be either positive, neutral or negative.
My knowledge of climatology is less than sparse, so I have no idea whether the behaviour of atmospheric water is well understood or not. But if the complex behaviour is not well understood, then I find it difficult to understand how the ‘establishment’ climate modellers can possibly say just what the feedback parameter will be, either in sign or numerical value. In other words, how can they ascribe any sensible value to climate sensitivity?

gnomish
July 22, 2013 8:54 am

“The relationship does not apply if a phase change is encountered, because the heat added or removed during a phase change does not change the temperature. ”
and neither does Stefan–Boltzmann, does it now?
and water vapour is the lightest thing next to helium.
how can you get a hotspot with a stratospheric wetspot?

GaryM
July 22, 2013 10:08 am

“Climate sensitivity is, IMHO, the most important unresolved issue in climate science.”
“Climate sensitivity” is just shorthand for a model of the climate. If you don’t know enough to model the climate, you can’t determine what CS is. Only if you can model the climate does it even make sense to discuss how it will react to a doubling of CO2, or any other change in any other forcing. The term eliminates the “all other things being equal” aspect of discussing the greenhouse effect. It says that, in our chaotic, complex climate, a doubling of CO2 will result in this change in temperature.
When skeptics argue about what the “correct” climate sensitivity is, they are conceding the central issue in the climate debate. Do we know enough to model the climate?

Bill
July 22, 2013 11:21 am

>FrankK says:
>July 22, 2013 at 8:04 am
>Can’t say the comparison is all that much better, and it is particularly poor during the last 15 years. I’m >giving it the thumbs down.
It goes dead center through the average path over the entire time. The excursions of +/- 0.3 C may just be natural climate noise. You can find many, many places in the climate record where temperatures go up and down and then back over 5 year periods. The total range may be up to 0.8 C over a short time span but it may be 0.4 C down and then 0.3 up or vice versa. The point is to explain the actual data before you try predictions. I want more details but it is very interesting.

Bill Illis
July 22, 2013 11:25 am

It has now been 250 years since we started adding CO2 to the atmosphere. 250 years.
There has been enough time now to assess how much warming will result from it. The Earth system does not just switch on and off how it responds to its atmosphere. We are talking about energy moving at up to the speed of light here, and the Earth has already demonstrated over and over again what will happen as CO2 content increases further.
And that response is, certainly, well below the range of estimates produced by global warming theory.
0.7C in 250 years. The 1.65C doubling sensitivity built into this model is close, but it is also probably too high. The Earth system has responded so far, as if the CO2 sensitivity is only 1.4C per doubling.
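[The arithmetic behind that 1.4C figure, assuming the standard logarithmic CO2 relation, taking roughly 280 ppm pre-industrial against roughly 395 ppm in 2013 (my assumed concentrations; the comment does not state them), and attributing the whole 0.7C to CO2 as the comment does:]

```latex
S \approx \frac{\Delta T}{\log_2(C/C_0)}
  = \frac{0.7}{\log_2(395/280)}
  \approx \frac{0.7}{0.50}
  \approx 1.4\ ^{\circ}\mathrm{C} \text{ per doubling}
```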

beng
July 22, 2013 12:33 pm

***
Nick Stokes says:
July 22, 2013 at 4:47 am
Proper GCMs do not employ any notion of sensitivity. That is something that people try to glean from the results.
***
Word-smithing? OK, the implied CO2 sensitivity from the GCM results (GCM results from CO2 and its added-on, exaggerated water-vapour feedbacks).

Nick Stokes
July 22, 2013 1:44 pm

beng says: July 22, 2013 at 12:33 pm
“Word-smithing? OK, the implied CO2 sensitivity…”

No, it’s not word-smithing. Ian W said there are some difficulties with CS (no unique number, non-linearity etc), and there’s some truth in that. But he phrased it as a criticism of GCMs, and that’s wrong. They don’t use CS. Figuring out what CS really means is a problem for those who want to deduce it, whatever the source of data. It’s not a problem with the source.

richard verney
July 22, 2013 1:48 pm

Bill Illis says:
July 22, 2013 at 11:25 am
///////////////////////////
You are assuming that the entire 0.7degC is due to CO2 and that none of it is due to natural variation. How do you know that to be the case? If some of it is due to natural variation then Climate Sensitivity would be less than the 1.4degC figure that you have assessed.
You are also assuming that the thermometer record is accurate and has not been distorted by UHI, poor siting and station drop outs, nor for that matter by endless adjustments.
Whilst I consider that it has warmed since the LIA, I do not know by how much because of problems with the various data sets which are so significant that one can have no reasonable confidence in their accuracy, and accordingly the 0.7deg C figure you cite may be too high.

Janice Moore
July 22, 2013 2:51 pm

Re: “…challenge to lukewarmers and skeptics to demonstrate that low-sensitivity models can account for 20th century temperature history… .” [above article]
LOL, not so fast, Mr. Slick. WE have nothing to prove: YOU do.
Scientists:
1) All the data shows that rises in temperature PRECEDE rises in CO2 by a quarter cycle. [Source: Dr. Murry Salby, April 18, 2013, Hamburg lecture]
2) Native Sources of CO2 = 150 (96%) gigatons/yr — Human CO2 = 5 (4%) gtons/yr;
Native Sinks approximately* balance Native Sources.
*Re: “approximately,” even a small imbalance can overwhelm any human CO2. [Ibid]
We SCIENTISTS have presented a prima facie case, based on DATA. The burden of proof is now on you, Climatologists.
WE are not going to waste our time in a futile attempt to make YOUR case for YOU.
Climatologists: Models this, models that, blah, blah, blah.
Scientists: Where is the evidence that CO2 causes ANYTHING to happen with climate globally speaking?
Climatologists: mutter, mutter, mutter, models, ensemble mean, mean, mean, mean…spppeesssrrffweroidweyiouriogggwollll
[YEAR AFTER YEAR GOES BY…]
Scientists: Still waiting.

Bill Illis
July 22, 2013 6:05 pm

richard verney says:
July 22, 2013 at 1:48 pm
You are assuming that the entire 0.7degC is due to CO2 and that none of it is due to natural variation.
———————————–
I’ve been running the numbers on natural variation for a long time now, including ENSO, the AMO, the Southern AMO, volcanoes and the solar cycle, and can explain about 79% of the temperature changes on a monthly basis going back to the 1850s. At this level of explanation, CO2 is responsible for more or less the whole 0.7C temperature increase, since the natural cycles right now are basically zero (+0.078C for today, to be more precise, and around zero as well in the 1850s).
Then there is a question about how much the historical temperature record has been manipulated, which has increased the trend, and I fully believe that the temps have been screwed around with. I’m counting about 0.2C of manipulation in the sensitivity value of 1.4C per doubling I quoted. If I just used the satellite and weather balloon record (rather than also taking into account the numbers from the NCDC, GISS and Hadcrut), I would probably drop the sensitivity into the 1.2C per doubling range.
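[A minimal sketch of the kind of decomposition Bill Illis describes: regress monthly temperature anomalies on a set of natural indices and read off the explained variance. The function and data below are illustrative placeholders, not his actual method or series:]

```python
import numpy as np

def natural_variation_fit(temp, *indices):
    """OLS fit of monthly temperature anomalies on natural indices
    (e.g. ENSO, AMO, Southern AMO, volcanic aerosols, solar cycle).

    All inputs are 1-D arrays of equal length, one value per month.
    Returns fitted coefficients and R^2; an R^2 of 0.79 corresponds
    to "explaining about 79%" of the monthly variation.
    """
    X = np.column_stack([np.ones_like(temp), *indices])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ coef
    return coef, 1.0 - resid.var() / temp.var()

# Example with random placeholder data standing in for real indices:
rng = np.random.default_rng(0)
n = 12 * 160                                  # ~160 years of monthly data
indices = [rng.standard_normal(n) for _ in range(5)]
temp = 0.4 * indices[0] + 0.3 * indices[1] + 0.1 * rng.standard_normal(n)
coef, r2 = natural_variation_fit(temp, *indices)
print(f"fraction of variance explained: R^2 = {r2:.2f}")
```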

Brian H
July 22, 2013 8:58 pm

Now run it with 0 sensitivity, to form a model baseline.
I predict even the Emperor’s underwear will fade away.

Gary Pearse
July 23, 2013 6:41 am

richard verney says:
July 22, 2013 at 4:09 am
“Unless back radiation has no effect, it is impossible to resolve climate sensitivity without first fully resolving natural variation.”
Ian W says:
July 22, 2013 at 4:15 am
“The relationship does not apply if a phase change is encountered, because the heat added or removed during a phase change does not change the temperature.”
Part of what I commented at Steve M’s site (and embellished here) concerns the fact that the earth’s crust was thinner and hotter in the Archean, ~3B ybp, evidenced by the extrusion of komatiite lavas at 1700C compared to modern ~1100C basalts. The crust thickened and the surface cooled over the next 1B years or so, reaching an “equilibrium” around which natural variation of 7 to 9C has oscillated ever since. These limits on temperature appear to have had nothing, or very little, to do with CO2, and are supported by the unbroken chain of life. Supermodels should be constructed that give a horizontal trend with differences in extremes of 7-9C – probably a combination of orbitals and phase changes of water (as long as a substantial part of the ocean doesn’t freeze, the lower temperature is essentially capped; and, as apparent from the maximum achievable sea surface temperature of 31C, a la Willis’s Thermostat Hypothesis expanded, so is the upper limit). On the very long term, I can’t see any reasonable argument against negative feedbacks being the main players in constraining temperature change. If this is the case, then negative feedback on the micro scales of days to decades is logically acting too (max 31C SST). Even CO2 oscillations are feedbacks. If not, then there would be nothing to stop us from boiling the seas or freezing solid. These could happen, but it would be due to an astronomic cataclysm, not a puff of CO2.

David L.
July 23, 2013 11:43 am

If you take the model predictions at each point and subtract from them the actual data, you are left with the “residuals”. What a modeler would be looking for are residuals that are normally distributed about “zero” with small variance.
After digitizing the data in the figure (using Engauge) I performed this calculation for both models. Both gave residuals centered at about 0.11 with a standard deviation of 0.16. This tells me that both models are biased on the warm side by 0.11 (i.e., point for point, the models overpredict the actual temperature by 0.11 throughout the range of measured values).
Why is that?
If the model can’t at least predict the known range of measured values without a bias how can you trust forecasting with it?
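[A sketch of David L.’s residual check; the two short arrays below are placeholders, not his digitized values:]

```python
import numpy as np

def residual_summary(model, observed):
    """Mean and standard deviation of model-minus-observed residuals.

    A well-behaved hindcast gives residuals centred near zero with
    small spread; a positive mean is a systematic warm bias, which is
    what the reported 0.11 +/- 0.16 would indicate.
    """
    resid = np.asarray(model) - np.asarray(observed)
    return resid.mean(), resid.std()

# Placeholder series (David L. digitized the actual curves with Engauge):
model    = np.array([0.10, 0.25, 0.30, 0.45, 0.52])
observed = np.array([0.02, 0.10, 0.21, 0.30, 0.44])
bias, spread = residual_summary(model, observed)
print(f"bias = {bias:+.2f} C, std dev = {spread:.2f} C")
```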

Richard M
July 24, 2013 9:49 am

Bill Illis, first of all I appreciate all your hard work. I use many of your charts to inform alarmists of the truth. Thank you.
However, I think you are missing one natural change, because it is almost unmeasurable. What I’m referring to is a general equilibrium factor. Given we have seen many periods as warm as or warmer than the present over the last 10K years without any additional CO2, it’s not too difficult to believe there’s a baseline temperature determined in general by solar energy.
The LIA might very well have taken us below this equilibrium condition and part of the change over the last 250 years is simply a slow return, most likely due to a return to normal solar input. Since there’s no clear way to understand exactly why the LIA led to cooling (although an inactive sun during the Maunder or volcanoes could both be the culprit) it is difficult to determine what the equilibrium condition should be. If it is represented by the MWP, RWP, etc. then returning to this value needs to be factored into the determination of CS.
The point is, it doesn’t take a more active sun to cause warming from a point below the equilibrium. It only needs to be consistent. Time takes care of the rest.

Brian H
August 4, 2013 5:19 pm

And I betcha a model ignoring CO2 would outperform them both.