Dr. Kiehl's Paradox

Guest Post by Willis Eschenbach

Back in 2007, in a paper published in GRL entitled “Twentieth century climate model response and climate sensitivity”, Jeffrey Kiehl noted a curious paradox. All of the different climate models run by various groups were able to do a reasonable job of emulating the historical surface temperature record. In fact, much is made of this agreement by people like the IPCC, who claim that it shows the models are valid, physically based representations of reality.


Figure 1. Kiehl’s results, comparing equilibrium climate sensitivity (ECS) and total forcing.

The paradox is that the models report greatly varying climate sensitivities, yet they all give approximately the same answer … what’s up with that? Here’s how Kiehl described it in his paper:

[4] One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

[5] The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

How can that be? The models have widely varying sensitivities … but they are all able to replicate the historical temperatures? How is that possible?

Not to keep you in suspense, here’s the answer Kiehl gives (emphasis mine):

It is found that the total anthropogenic forcing for a wide range of climate models differs by a factor of two and that the total forcing is inversely correlated to climate sensitivity.

This kinda makes sense, because if the total forcing is larger, you’ll have to scale it down more (i.e., use a smaller sensitivity) to end up with a temperature result that fits the historical record. However, Kiehl was not quite correct.
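That compensation can be spelled out in symbols (a sketch only, using the lambda notation introduced below; ∆T_obs stands for the fixed historical warming that every model is tuned to reproduce):

```latex
\Delta T_{\mathrm{obs}} \approx \lambda_i \,\Delta F_i
\qquad\Longrightarrow\qquad
\lambda_i \approx \frac{\Delta T_{\mathrm{obs}}}{\Delta F_i}
```

With ∆T_obs pinned down, a model driven by twice the total forcing needs roughly half the sensitivity to land on the same record, which is the inverse correlation Kiehl found. Where he went astray is in which measure of the forcing does the compensating, as discussed next.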

My own research in June of this year, reported in the post Climate Sensitivity Deconstructed, showed that the critical factor is not the total forcing as Kiehl hypothesized. What I found was that the climate sensitivity of the models is emulated very accurately by a simple trend ratio: the trend of the model output (temperature) divided by the trend of the forcing.

Figure 2. Lambda compared to the trend ratio. Red shows the transient climate response (TCR) of four individual models plus one 19-model average. Dark blue shows the equilibrium climate sensitivity (ECS) of the same models. Light blue shows the results of the forcing datasets applied to actual historical temperature datasets.

Note that Kiehl’s misidentification of the cause of the variations is understandable. First, the outputs of the models are all fairly similar to the historical temperature. This allowed Kiehl to ignore the model output, which simplifies the question but increases the inaccuracy. Second, the total forcing is an anomaly which starts at zero at the beginning of the historical reconstruction, so the total forcing is roughly proportional to the trend of the forcing. Again, however, this increases the inaccuracy. But for a first cut at solving the paradox, and for being the first person to write about it, I give Dr. Kiehl high marks.

Now, I probably shouldn’t have been surprised to find that the sensitivity as calculated by the models is nothing more than the trend ratio. After all, the canonical equation of the prevailing climate paradigm holds that forcing is directly related to temperature by the climate sensitivity (lambda). In particular, they say:

Change In Temperature (∆T) = Climate Sensitivity (lambda) times Change In Forcing (∆F), or in short,

∆T = lambda ∆F

But of course, that implies that

lambda = ∆T / ∆F

And the right hand term, on average, is nothing but the ratio of the trends.

So we see that once we’ve decided which forcing dataset the model will use, and which historical temperature dataset the output is supposed to match, the climate sensitivity is baked in. We don’t even need the model to calculate it. It will be the trend ratio: the trend of the historical temperature dataset divided by the trend of the forcing dataset. It has to be, by definition.
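As a minimal illustration, here is the whole “calculation” in a few lines of Python. The series below are invented stand-ins, not actual forcing or temperature data:

```python
import numpy as np

# Invented stand-ins for a model's input forcing (W/m2) and the
# historical temperature record (deg C) it is tuned to reproduce.
years = np.arange(1900, 2001)
forcing = 0.02 * (years - 1900) + 0.1 * np.sin((years - 1900) / 11.0)
temperature = 0.008 * (years - 1900)

# Least-squares linear trend (slope per year) of each series.
forcing_trend = np.polyfit(years, forcing, 1)[0]
temperature_trend = np.polyfit(years, temperature, 1)[0]

# The diagnosed sensitivity is just the trend ratio,
# lambda = trend(T) / trend(F), in deg C per (W/m2).
lam = temperature_trend / forcing_trend
print(f"trend-ratio lambda = {lam:.2f} deg C per W/m2")
# With these made-up trends: roughly 0.008 / 0.02 = 0.4
```

Swap in a forcing dataset with a steeper trend, keep the target temperatures the same, and the diagnosed lambda drops in proportion; nothing inside the model is consulted.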

This completely explains why, after years of better and better computer models, the models can hindcast the historical record in ever more detail and complexity … yet they still don’t agree any better about the climate sensitivity.

The reason is that the climate sensitivity has nothing to do with the models, and everything to do with the trends of the inputs to the models (forcings) and outputs of the models (emulations of historical temperatures).

So to summarize, as Dr. Kiehl suspected, the variations in the climate sensitivity as reported by the models are due entirely to the differences in the trends of the forcings used by the various models as compared to the trends of their outputs.

Given all of that, I actually laughed out loud when I was perusing the latest United Nations Inter-Governmental Panel on Climate Change’s farrago of science, non-science, anti-science, and pseudo-science called the Fifth Assessment Report (AR5). Bear in mind that as the name implies, this is from a panel of governments, not a panel of scientists:

The model spread in equilibrium climate sensitivity ranges from 2.1°C to 4.7°C and is very similar to the assessment in the AR4. There is very high confidence that the primary factor contributing to the spread in equilibrium climate sensitivity continues to be the cloud feedback. This applies to both the modern climate and the last glacial maximum.

I laughed because crying is too depressing … they truly, truly don’t understand what they are doing. How can they have “very high confidence” (95%) that the cause is “cloud feedback” when they admit they don’t even understand the effects of the clouds? Here’s what they say about mere observations of clouds and related effects, never mind the models of those observations:

• Substantial ambiguity and therefore low confidence remains in the observations of global-scale cloud variability and trends. {2.5.7}

• There is low confidence in an observed global-scale trend in drought or dryness (lack of rainfall), due to lack of direct observations, methodological uncertainties and choice and geographical inconsistencies in the trends. {2.6.2}

• There is low confidence that any reported long-term (centennial) changes in tropical cyclone characteristics are robust, after accounting for past changes in observing capabilities. {2.6.3}

I’ll tell you, I have “very low” confidence in their analysis of the confidence levels throughout the documents …

But in any case, no, dear Inter-Governmental folks, the spread in model sensitivity is not due to the admittedly poorly modeled effects of the clouds. In fact, it has nothing to do with any of the inner workings of the models. Climate sensitivity is a function of the choice of forcings and of the desired output (the historical temperature dataset), and not a lot else.

Given that lack of understanding on the part of the Inter-Governments, it’s gonna be a long uphill fight … but I got nothing better to do.

w.

PS—me, I think the whole concept of “climate sensitivity” is meaningless in the context of a naturally thermoregulated system such as the climate. In such a system, an increase in one area is counteracted by a decrease in another area or time frame. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of these issues.

124 Comments
thingadonta
October 1, 2013 9:42 pm

Someone once said of Plato’s idea of a Republic, run by an unelected expert panel of philosophers, selected and trained for this panel from birth, who would decide how society was to be run and how everyone else was supposed to live:
“He never seemed to ask how such a social arrangement would affect the minds of those within it.”
When you quote the IPCC, for example: “There is very high confidence that the primary factor contributing to the spread in equilibrium climate sensitivity continues to be the cloud feedback”, I would suggest that what they are really showing is how the organisational culture they are involved with has affected their minds; i.e., they really mean: “pretty much (95%) the only thing the organisation concerns itself with in trying to explain the spread in equilibrium climate sensitivity is cloud feedback”. Note the difference: the organisation they are involved with has become the source of what is true, or likely, or relevant to be studied, not what goes on in the real external world.

October 1, 2013 9:47 pm

Trouble is, Willis, first they don’t understand and then they won’t understand. They don’t want to get it right, they want to get it scary.

Roger Cohen
October 1, 2013 9:54 pm

It’s really a “duh,” I’m afraid. Different models need different values of aerosol cooling to offset the excess warming arising from different (but too high) climate sensitivities. This makes aerosols basically an adjustable parameter.
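A back-of-envelope sketch of that adjustable parameter, with all numbers invented for illustration (none are from any actual model):

```python
# Invented numbers for illustration only: with the observed warming
# trend fixed, each assumed sensitivity dictates the net forcing
# trend, and the aerosol term soaks up the difference.
obs_trend = 0.008   # deg C per year, the target temperature record
ghg_trend = 0.025   # W/m2 per year, assumed greenhouse-gas forcing

for lam in (0.4, 0.6, 0.8):       # assumed sensitivities, deg C per (W/m2)
    net_trend = obs_trend / lam   # net forcing trend needed to match obs
    aerosol_trend = net_trend - ghg_trend   # the residual "free knob"
    print(f"lambda = {lam}: aerosol forcing trend = "
          f"{aerosol_trend:+.4f} W/m2 per year")
```

The more sensitive the model, the more aerosol cooling it needs to stay on the historical track, which is the inverse relationship in a different guise.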

October 1, 2013 9:57 pm

Great, Will,
now you see why I, as an expert reviewer of parts of the draft of AR5, decided to back out of the AGU, which openly declares that it fully backs the IPCC conclusions…

Amber
October 1, 2013 10:03 pm

How can real scientists condone the outright model fraud portrayed as credible science? At best it is outcome-driven programming, of the kind often found in commercial transactions.

Jknapp
October 1, 2013 10:06 pm

Willis, I think you and the IPCC are not really disagreeing on this. They say that net forcing is based on TSI, humidity, greenhouse gases, land use, clouds, etc. They then assume that they have all the rest correct, so that the trend difference in forcing is due to clouds, but that is still a trend difference.
Of course, since the different models get different climate sensitivities, they don’t have the other stuff right.
I think the bigger point is that since they all get different climate sensitivities yet still match historical temperatures, it is clear that they are “tuned” and are not really derived from physical principles as claimed.

kadaka (KD Knoebel)
October 1, 2013 10:08 pm

Where’s Nick Stokes to point out, again, that you’re mixing up transient and equilibrium climate sensitivities, again?
Oh wait, I don’t think he’s on the clock yet. Will we get a senior obfuscator like KR, or that trainee, Jai Mitchell?

Darren Potter
October 1, 2013 10:10 pm

“… were able to do a reasonable job of emulating the historical surface temperature record”
Stating the obvious…
Given that the various models are so good at emulating the historical surface temperature, the models should be equally good at emulating future surface temperatures.
Why then did the models so badly miss the actual global temperatures of the past decade?
Possibly because the models were designed to produce the future alarming results needed by Global Warming Scammers, then tweaked to produce results matching historical temperatures to boost credibility.

Darren Potter
October 1, 2013 10:16 pm

Jknapp says: “… the bigger point is that since they all get different climate sensitivities yet still match historical temperatures, it is clear that they are ‘tuned’ and are not really derived from physical principles as claimed.”
Bingo!
The models are incapable of predicting real-world values, and instead were/are “tuned” to provide Alarmist values. Models they aren’t.

Jquip
October 1, 2013 10:18 pm

Ah, brilliant timing. Thanks for this, Willis. This exactly answers the questions I had about the variations in the temperature trends after a step change in CO2 in the Back to the Future thread.

Tucci78
October 1, 2013 10:23 pm

Mr. Eschenbach:
It simply works like so:

First, draw your curve. Then plot your points.

Learned that one in high school, I did.

wayne
October 1, 2013 10:26 pm

“[5] The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?”
Conversely to that factor-of-2-to-3 spread: if the sensitivities can go from 4.5 to 1.5 with no apparent sizeable effect on the outcome, why are so many still not seeing that it could just as easily go from 1.5 to zero in like manner (properly parameter-“tuned”, that is)? Is that not the real possibility? I think Spencer and Monckton are firmly planted in the camp that CO2 sensitivity from trace amounts is real, though they continue to argue over just how much lower it is than the various published figures; but I see it as very close to zero, if not zero, at these concentrations, and I think time will add credence to that.

dp
October 1, 2013 10:34 pm

They say “We’re sure (95% confident) it’s something we know little about”, and I say “something they’re sure of is wrong”. The problem is that because lambda is what it is, they are already sure what ∆T should be. It is in their charter and in Mikey’s hockey stick. It is settled science, then, that ∆F is whatever is required to make lambda match the observed record. It’s sausage all the way down.

October 1, 2013 10:51 pm

Ah, but that would be science, W.
The critical variable here must be something which can, maybe, be affected politically. Tough to tax clouds, so CO2 has to be the driver. Which, in turn, means that sensitivity is the single most important number. So important that the SPM chose not to report it at all rather than drop the value to less-than-scary levels.

FrankK
October 1, 2013 10:51 pm

Let’s face it, the whole paradigm that atmospheric CO2 affects temperature is arse-up. Temperature affects the significant part of the atmospheric CO2. Human-created CO2 (5 GtC/yr), compared to approximately 150 GtC/yr from natural sources, has very little effect. The bulk of the CO2 in the atmosphere is related to the integral of temperature.
http://tallbloke.wordpress.com/2013/06/10/murray-salby-in-significant-part-co2-is-controlled-by-global-temperature-murray-salby/

October 1, 2013 10:52 pm

The net of all of this is that the so-called climate models are nothing but high-order curve-fit exercises. It is well known that high-order curve fits can often reproduce the reference data set rather well. However, when one starts to extrapolate in order to predict well outside of the reference data set, the “predictions” rather rapidly go off track. It is difficult to obtain accurate extrapolation beyond even a small fraction of the length of the reference data set. To extrapolate by a major fraction of the length of the reference data set is to guarantee a gross failure in prediction.
Reality doesn’t pay any attention to what the models say it must do. It simply does its thing in its own way. It always has and always will. This no matter what the size of the consensus is that says otherwise and without any regard to the certifications held by the members of the consensus.
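A toy demonstration of that extrapolation failure (synthetic data and an arbitrary polynomial order, purely illustrative):

```python
import numpy as np

# Synthetic "reference data": a gentle linear trend plus noise.
rng = np.random.default_rng(0)
x = np.arange(100.0)
y = 0.01 * x + 0.2 * rng.standard_normal(x.size)

# High-order polynomial fit (Polynomial.fit rescales the domain,
# which keeps the fit numerically stable).
p = np.polynomial.Polynomial.fit(x, y, deg=15)

# Inside the reference period the fit looks impressive...
rms = np.sqrt(np.mean((p(x) - y) ** 2))
print(f"RMS error over the reference data: {rms:.3f}")

# ...but extrapolating even 15% past the end veers off track.
x_future = np.arange(100.0, 115.0)
print("extrapolated values:", np.round(p(x_future), 2))
```

The fit hugs the reference series, then the extrapolated values swing far away from the modest underlying trend within a few steps past the end of the data.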

Richard111
October 1, 2013 11:02 pm

As a layman trying to puzzle his way through this morass, I end up thinking every CO2 molecule is the ‘surface’ of that mass of CO2 in any given volume of the atmosphere. Therefore every CO2 molecule will be at the local air temperature. If the sun is shining the CO2 molecules can absorb energy in the 2.7 and 4.3 micron bands and warm the air a bit. But when there is no sun every CO2 molecule is back to local air temperature and thus, within the troposphere, will be too warm to absorb any energy from the surface in the 15 micron band. By the same argument, every CO2 molecule is warm enough to radiate in the 15 micron band. Some of this radiation is reaching the surface. But how can it warm that surface since it is already radiating in the 15 micron band?
Any tutorials on this subject for baffled laymen?

Greg Goodman
October 1, 2013 11:26 pm

“PS—me, I think the whole concept of “climate sensitivity” is meaningless in the context of a naturally thermoregulated system such as the climate. In such a system, an increase in one area is counteracted by a decrease in another area or time frame. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of these issues.”
Sensitivity is not meaningless, Willis. Even accepting your tropical governor hypothesis, which, as you know, I am quite supportive of, no regulator is perfect. Every regulation system will maintain the control variable within certain limits, yet it needs an error signal to operate on.
The better (tighter) the regulation, the less sensitive the system will be to outside “forcings”.
It’s that simple. Sensitivity is a measure of how good the regulator is. So your emergent phenomena and the rest do not negate the concept of sensitivity; they rely on it.

Greg Goodman
October 1, 2013 11:29 pm

“Change In Temperature (∆T) = Climate Sensitivity (lambda) times Change In Forcing (∆F), or in short, ∆T = lambda ∆F”
You made this incorrect relationship the centre of your last post on the subject and then ignored it when I, Paul_K, and Frank pointed out that it was fundamentally wrong. Not only did you not address those issues, you now repeat the same mistake.

Greg Goodman
October 1, 2013 11:46 pm

As I pointed out here on your “Eruption” post:
http://wattsupwiththat.com/2013/09/22/the-eruption-over-the-ipcc-ar5/#comment-1425762
T2 = T1 + lambda (F2 – F1) (1 – exp(-1/tau)) + exp(-1/tau) (T1 – T0)
it seems your regression fit is:
T2 = T1 + lambda (F2 – F1) ; not the same thing.
The reason the results were similar when you did a more correct method (which unfortunately you did not report on, preferring to detail the incorrect method) is probably that the diff of an exponential decay is also an exponential decay.
Your truncated ∆T = lambda ∆F does not represent the linear feedback assumption, and the lambda it gives will not be the same as the climate sensitivity, though it will probably be of the same order.
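For the curious, here is a minimal sketch of the distinction being drawn, implementing the commenter’s discrete one-box form with invented values for lambda, tau, and the forcing (not Willis’s actual data or fits):

```python
import numpy as np

def one_box(F, lam, tau, T0=0.0, T1=0.0):
    """Goodman's lagged form:
    T2 = T1 + lam*(F2 - F1)*(1 - exp(-1/tau)) + exp(-1/tau)*(T1 - T0)"""
    a = np.exp(-1.0 / tau)
    T = np.zeros_like(F)
    T[0], T[1] = T0, T1
    for i in range(2, len(F)):
        T[i] = (T[i - 1] + lam * (F[i] - F[i - 1]) * (1.0 - a)
                + a * (T[i - 1] - T[i - 2]))
    return T

# Invented forcing: a ramp plus year-to-year noise.
rng = np.random.default_rng(1)
F = np.linspace(0.0, 3.0, 150) + 0.3 * rng.standard_normal(150)
T = one_box(F, lam=0.5, tau=10.0)

# The truncated regression dT = lambda * dF drops the lag term, so
# the "lambda" it recovers differs from the one actually used; how
# far off it is depends on the character of the forcing.
dT, dF = np.diff(T), np.diff(F)
lam_fit = float(np.sum(dT * dF) / np.sum(dF * dF))
print(f"lambda used = 0.5, truncated-fit lambda = {lam_fit:.2f}")
```

With this kind of noisy forcing the truncated fit comes out well below the lambda fed into the lagged model, illustrating why the two regressions are not the same thing.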

October 1, 2013 11:52 pm

“The reason is that the climate sensitivity has nothing to do with the models, and everything to do with the trends of the inputs to the models (forcings) and outputs of the models (emulations of historical temperatures).”
yes. this comes as a surprise? check the relationship between aerosol forcing (a free knob) and the sensitivity of the models.

Greg Goodman
October 1, 2013 11:53 pm

“Climate sensitivity is a function of the choice of forcings and desired output (historical temperature dataset), and not a lot else.”
The above notwithstanding, that is basically true, if we understand “choice of forcings” to include any _assumed_ feedbacks that are added to the equations inside the models. The cloud feedback is the key issue, and it is still unknown whether it is even positive or negative as a feedback.

October 1, 2013 11:56 pm

Roger Cohen says:
October 1, 2013 at 9:54 pm
It’s really a “duh,” I’m afraid. Different models need different values of aerosol cooling to offset the excess warming arising from different (but too high) climate sensitivities. This makes aerosols basically an adjustable parameter.
###########
Oops, sorry, I missed your comment.
Yes, the historical forcings for aerosols have large uncertainties. Depending on how you set them, you can get higher or lower sensitivity.
The clue is that folks like Hansen think that models are a poor source of information about sensitivity; paleo and observations are better. Models don’t really give you any “new” information about sensitivity.

temp
October 1, 2013 11:57 pm

So, a question. Since you’ve pretty much got this figured out, can you apply it to the models?
Such as, make one that produces perfect hindcasts but has, say, a −5 climate BS factor?
This gutter trash love the models. If you can take one of their models and get an ice age out of it, you can force them to dump the models, and then they’ve got nothing.
Since it seems the models are simple trends and such, can you not find the “inverse” factor, send it into an ice age, and post that the Hansen model now says we’re all doomed by a future ice age, with an R of 0.99, blah blah blah?
