Model Climate Sensitivity Calculated Directly From Model Results

Guest Post by Willis Eschenbach

[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]

In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in a different way but they get the same answers, so they are functionally equivalent.

The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.
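For concreteness, here is a minimal sketch of a lagged model of this general shape. It is a reconstruction for illustration only; the exact equation and fitted values appear in Figure 3 below and in the linked posts.

```python
# A minimal one-box "lagged linear" sketch: temperature responds to net forcing
# with sensitivity lambda and an exponential lag with time constant tau (years).
# This is a reconstruction for illustration, not the post's exact implementation.
import numpy as np

def lagged_temperature(forcing, lam, tau):
    """forcing: annual net forcing anomalies (W/m2); returns temperature anomalies (deg C)."""
    T = np.zeros(len(forcing))
    decay = np.exp(-1.0 / tau)
    for i in range(1, len(forcing)):
        # carry over last year's anomaly, damped by the lag,
        # plus this year's forcing scaled by the sensitivity
        T[i] = decay * T[i - 1] + (lam / tau) * forcing[i]
    return T
```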

However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al. paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about the Otto paper in a future post.

Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION: The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).

In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models they studied, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.

Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.

So I digitized the data, because trying to get the underlying results from the authors of a scientific paper is a long and troublesome process, and may not be successful for valid reasons. Digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screenshot of part of the process:

Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use “GraphClick” software on my Mac; assuredly there is a PC equivalent.

Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:

Figure 3. The blue line shows the average hindcast temperature from the 19 models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.

As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which as I said above I’ll discuss in a subsequent post.
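For anyone who wants to redo this step, here is a sketch of the coefficient fit. The file and column names are hypothetical stand-ins for the digitized Forster series, and the model form is the reconstructed one-box lag sketched earlier, so treat it as an outline of the procedure rather than the spreadsheet calculation itself.

```python
# Sketch: fit the two coefficients (lambda, tau) of the lagged model to the
# digitized model-mean forcing and temperature series by least squares.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

data = pd.read_csv("forster_digitized.csv")        # hypothetical file: year, forcing, temp
forcing = data["forcing"].to_numpy(dtype=float)
temp = data["temp"].to_numpy(dtype=float)

def lagged_temperature(forcing, lam, tau):
    T = np.zeros(len(forcing))
    decay = np.exp(-1.0 / tau)
    for i in range(1, len(forcing)):
        T[i] = decay * T[i - 1] + (lam / tau) * forcing[i]
    return T

(lam, tau), _ = curve_fit(lagged_temperature, forcing, temp, p0=[0.5, 3.0])
fit = lagged_temperature(forcing, lam, tau)
print(f"lambda = {lam:.2f} degC per W/m2, tau = {tau:.1f} years")
print(f"correlation with model-mean temperature: {np.corrcoef(fit, temp)[0, 1]:.3f}")
```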

So what can we conclude from all of this?

1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling); see the worked conversion just after this list.

2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.

3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.

4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have information for the last 50 years or so, so anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years. It is fairly constant on a decadal basis, slowly rising from zero in 1950 to about 0.5 W/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period. Finally, the analysis does very, very well without it …

5. These results are the sensitivity of the models with respect to their own outputs, not the sensitivity of the real earth. It is their internal sensitivity.
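To make point 1 concrete, here is the worked conversion referred to above, under two stated assumptions: that the fitted lambda is expressed in °C per W/m2, and that a doubling of CO2 adds roughly 3.7 W/m2 of forcing. The lambda value below is a hypothetical illustration, not the number from the fit.

```python
# Worked conversion (illustrative; lam is a hypothetical value in degC per W/m2)
F_2X = 3.7                     # approximate forcing from a doubling of CO2, W/m2
lam = 0.42                     # hypothetical fitted sensitivity, degC per W/m2
tcr = lam * F_2X               # transient response per doubling of CO2
ecs = 1.3 * tcr                # using the ~1.3 ECS/TCR ratio from the update above
print(f"TCR ~ {tcr:.1f} degC/doubling, ECS ~ {ecs:.1f} degC/doubling")
```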

Does this mean the models are useless? No. But it does indicate that they are pretty worthless for calculating the global average temperature. Since all the millions of calculations that they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.

Finally, let me reiterate that I think that this current climate paradigm, that the global temperature is a linear function of the forcings and they are related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.

Regards to everyone, more to come,

w.

DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.



133 Comments
Greg Goodman
May 22, 2013 3:03 am

richard: “it is difficult to conclude that temperatures today truly are warmer than those either in the 1930s or 1880s, such that the present temperature record does not show the full extent of variability in past temperatures.”
I did point out over a year ago that Hadley adjustments were removing 2/3 of the variability from the earlier half of the record and that this was done based on rather speculative reasoning not fact.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-188237
Those adjustments helped make the data better fit the model.
If that was not the conscious intent of those adjustments, it was certainly the effect.
There is also a post-war adjustment of -0.5C which makes the late 20th c. rise look more like CO2.
However, the Ice Classic record would suggest that the wartime bump was a real event, not something that requires one-sided correction.
This is why I do most of my work using ICOADS SST not Hadley’s adjusted datasets.

Greg Goodman
May 22, 2013 3:08 am

Note the 1939-40 bump that usually gets “corrected”, leaving just the post-war drop. This was Folland’s folly, introduced in the late ’80s IIRC. It has been smoothed out a bit in HadSST3 but is still there and still -0.5 deg C.
http://climategrog.wordpress.com/?attachment_id=258

Greg Goodman
May 22, 2013 3:14 am

Bloke down the pub says: “If the organisations that funded those models saw that they could be replicated by a couple of lines of equations, do you think they might ask for their money back?”
Why would they do that? They are using the results to set up a $100bn slush fund with no legal auditing or accountability.
I’m sure they are very happy with the return on investment.

May 22, 2013 3:14 am

Did anyone else get a broken link at cell G3 of Mr. Eschenbach’s spreadsheet?

agricultural economist
May 22, 2013 3:27 am

Willis,
you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.). Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.
To many commenters here: climate models try to simulate physical processes, so they are called process models. Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors. The fact that this is possible does not mean that the process models are worthless. Willis has just simplified the models tremendously, thereby losing a lot of information that went into these models.
The problem with process models is that they cannot simulate processes which you do not program into them, either because you don’t know them or you find them insignificant or inconvenient for the outcome. This means that you really have to look at such models in detail to be able to fundamentally criticize them. Due to the complexity of the models this is very difficult even for scientifically literate persons.
Says an economic modeler …

Evan Jones
Editor
May 22, 2013 3:28 am

Well, that appears to show that a simple top-down analysis beats sickeningly complex bottom-up every time. As a wargame designer, this comes as no surprise. If you want to “simulate” the Russian Front, you do it by looking at Army Groups and fronts, not man-to-man, for heaven’s sake.

Greg Goodman
May 22, 2013 3:35 am

agricultural economist says:
you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.).
===
Yes, that confirms my comment above.
” Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.”
Ah, expectations. That is indeed the primary forcing in climate modelling.

Greg Goodman
May 22, 2013 4:13 am

agricultural economist says: Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors.
What Willis shows, I think, is that after all the supposedly intricate modelling of basic physics and guessed forcings and parameters, all that comes out is a simple solution to the Laplace equation.
The equation he has used is basically a solution to the equation:
http://en.wikipedia.org/wiki/Heat_equation
Now once models work they will tell us a lot of detail on a regional scale, but IMO we are decades away from that level of understanding, where we know enough to make a first-principles approach work.
A climate model that cannot produce its own tropical storms is, frankly, worthless as it stands.
The problem now is that we know lots of the processes in great detail, but the ones that really matter are still guesswork. All the guesses are adjusted until the output is about what is “expected”.
At which point we may as well use Willis’ equation.

Greg Goodman
May 22, 2013 4:23 am

Since Willis’ approach seems to capture the behaviour of the GCMs I would suggest he digitises all the individual ‘forcings’, reduces volcanism to something that he feels to be more in line with observation, puts CO2 at its value based on radiation physics (1.3 from memory) and sees whether his model produces a better post-2000 result.
If I understand what is being shown in the graphs, without reading the paper, I think the GHG plot will be real forcing calculations plus hypothetical water vapour feedback.

May 22, 2013 4:24 am

agricultural economist: “Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors. The fact that this is possible does not mean that the process models are worthless. Willis has just simplified the models tremendously, thereby losing a lot of information that went into these models.”
But the fact that dispensing with that information had little effect on the results shows how rapidly the returns to adding that information diminished–i.e., how little the modelers really accomplished with all their teraflops of computer power.

Bill Illis
May 22, 2013 4:44 am

The issue is how the “Historical_nonGHGs” are used to offset the HistoricalGHGs in the hindcast.
Essentially, the Aerosols and other negative forcings like Land-Use, increasingly offset the warming caused by GHGs in the hindcast.
The GHG temperature impact will follow a formula something like X.X * ln(CO2ppm) – 2y.yC. The 2003 version of GISS ModelE was 4.053 * ln(CO2ppm) – 23.0 [which is just a small 7 year lag from an instantaneous temperature response which is 4.33 * ln(CO2) – 24.8], but each individual model will have a different sensitivity to GHGs and then a different offset from aerosols etc. The higher the GHG sensitivity, the higher the aerosol offset in the hindcast.
http://img183.imageshack.us/img183/6131/modeleghgvsotherbc9.png
In the future, of course, no climate model is building in an increase in the negative forcing from aerosols. IPCC AR5 assumptions have the (direct and indirect) aerosols offset becoming smaller in the future, changing from -1.1 W/m2 today to -0.6 W/m2 by 2100. The GHG temperature impact becomes dominant.
http://s13.postimg.org/rx2hw6s1j/IPCC_AR5_Aerosols.png
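For reference, a quick numerical sketch of the ModelE-style fit quoted above (the coefficients are the ones given in this comment; the snippet is illustrative only):

```python
# Evaluate the quoted lagged 2003 GISS ModelE fit: impact (degC) = 4.053*ln(CO2 ppm) - 23.0
import math

def ghg_temp_impact(co2_ppm, a=4.053, b=23.0):
    return a * math.log(co2_ppm) - b

for ppm in (280, 400, 560):
    print(ppm, round(ghg_temp_impact(ppm), 2))
# a doubling (280 -> 560 ppm) adds a*ln(2), i.e. about 2.8 degC with these coefficients
```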

Richard Smith
May 22, 2013 4:52 am

Willis, is it not logical to assume that the forcings are actually a back calculation? Where does one get the data to calculate the forcings back in 1850?

Greg Goodman
May 22, 2013 5:17 am

Bill Illis: “IPCC AR5 assumptions have the (direct and indirect) aerosols offset becoming smaller in the future”
Thanks, what is the basis of that assumption?
Presumably the volcanic forcing is about zero already by about 2000. Are they assuming the Chinese are going to stop using coal?
What is the big drop in aerosols, especially direct, from 2000 onwards caused by?
Are you aware of what factor in HadGEM3 caused it to give less warming?
thx

May 22, 2013 5:18 am

Willis,
I was wondering about your basis for interpreting λ as climate sensitivity. But before I could figure that out, the units don’t seem right. λ does seem to have the units ΔT/ΔF, so λ ΔF has units T, but then it is divided by τ, which has units of years? It’s also unexpected to see an exponential with a dimensional argument.
But anyway, what sort of CS do you mean? This seems to be a transient version. In fact, with your model, if you make a one-time increase of 1 in ΔF, T increases in that step by λ/τ, but then it goes on increasing for a while, in fact to λ/τ/(1-exp(-1/τ)). Which actually is pretty close to λ, but the unit issues are a puzzle.
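A quick numerical check of that limit, assuming the recurrence has the form T(n) = exp(-1/tau)*T(n-1) + (lam/tau)*dF(n) (a reading of the equation, not confirmed; the lam and tau values below are arbitrary):

```python
# Unit step in dF: the cumulative response converges to lam/tau/(1 - exp(-1/tau))
import math

lam, tau = 0.5, 3.0            # arbitrary illustrative values
decay = math.exp(-1.0 / tau)
T = 0.0
for _ in range(200):           # dF held at 1 after the step
    T = decay * T + lam / tau
print(T)                           # ~0.588
print(lam / tau / (1.0 - decay))   # same limit, ~0.588
```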

agricultural economist
May 22, 2013 5:21 am

Greg Goodman:
“Ah, expectations. That is indeed the primary forcing in climate modelling.”
It’s not that I myself expect a higher sensitivity for CO2. But the climate modelers do.
But I would like to stress that I find Willis’ effort highly useful … given he manages to isolate different forcings in a multivariate regression.

Bill_W
May 22, 2013 5:26 am

I agree that this seems to be some sort of net forcing, not just due to CO2.
Nick Stokes, we need to give this “goes on increasing for a while” a name.
I suggest the climate multiplier effect. Then we can get Paulie Krugnuts to come on board and join the conversation.

Bill Illis
May 22, 2013 5:38 am

Greg Goodman says:
May 22, 2013 at 5:17 am
Are you aware of what factor in HadGEM3 caused it to give less warming?
——————————
Obviously fudge factors.
HadGEM2 submitted to the upcoming IPCC AR5 report has a very high GHG sensitivity.
http://s2.postimg.org/6uehe2sdl/Had_GEM2_2100.png
Aerosols decline because they assume we are going to increasingly regulate sulfate emissions from all sources. It’s mostly cleaned up already.

kadaka (KD Knoebel)
May 22, 2013 5:48 am

Joe Born said on May 22, 2013 at 3:14 am:

Did anyone else get a broken link at cell G3 of Mr. Eschenbach’s spreadsheet?

=CORREL(#REF!,G8:G163))
And an extra right parenthesis.

CORREL
Returns the correlation coefficient between two data sets.
Syntax
CORREL(Data1; Data2)
Data1 is the first data set.
Data2 is the second data set.
Example
=CORREL(A1:A50;B1:B50) calculates the correlation coefficient as a measure of the linear correlation of the two data sets.

Willis, excuse me, gotta question. Correlation command at E4 is “=CORREL(D10:D163,E10:E163)”
Why not start at row 8, start of data? Result is still 0.991.

T. G. Brown
May 22, 2013 5:55 am

Willis,
At risk of oversimplification, you have shown that the models provide an integrated (globally averaged temperature) response that is a linear function of the perturbation (forcing due to CO2). There are many examples of this in physics and allied fields. For example, one can take a solid beam that is a very inhomogeneous collection of atoms, molecules, and even randomly oriented crystalline domains (as with a metal beam) and reduce its mechanical response to several simple macroscopic material parameters (Young’s modulus, Poisson ratio, etc). The reason these work is that when one considers weak perturbations about a mean value, the first terms in a Taylor expansion are usually linear and decoupled. The coupling — which is the more complex behavior — doesn’t usually start until the nonlinear terms are considered.
What would be interesting (and quite a bit harder) is to see if a simple physical descriptor also governs the mean-square fluctuation. (Systems like yours usually follow something called a fluctuation-dissipation theorem.) However, you may not have access to the data necessary to do that analysis.

May 22, 2013 5:56 am

Willis,
Paraphrasing von Neumann elsewhere: “Climate change is a bit like an elephant’s jungle trail (randomness with a purpose), whereby the elephant is a Milankovic cycle, the far north Atlantic the elephant’s trunk (slowly up/down and sideways), ENSO its tail (swishing around back and forth), and CO2 analogous to a few fleas that come and go, with the least of a consequence.”
Modellers are only sketching the Hindustan plain thrusting into the Himalayas, with no elephants roaming around.

Quinn the Eskimo
May 22, 2013 6:02 am

Patrick Frank’s article “A Climate of Belief”, http://wattsupwiththat.com/2010/06/23/a-climate-of-belief/, makes a very similar point. I can’t cut and paste, but he shows that a simple passive warming model, just as simple as Willis’, is very, very close to the model mean projections.

May 22, 2013 6:03 am

Willis, you’re saying something I’ve said for the last 10 years: the models are programmed for a specific CS. In fact, I remember reading an early Hansen paper where he says exactly the same thing.
As far as the code goes, I think they are all derivatives of the same code, and are almost functionally equivalent. The cell model is pretty simple, the complexity is around processing and managing all of the cell data as it runs.
Lastly, the code is easy to get (well, NASA’s is); I just downloaded the source code for ModelII and ModelE1. It is in Fortran though; you’d think someone would bother to rewrite it in a modern language. And if you think about it, since I think they’re all Fortran, it shows their common roots; no one has bothered to rewrite it.
Finally, this NASA link points out that the models aren’t so accurate when you don’t use real SSTs.

cd
May 22, 2013 6:19 am

Greg
By ecologists I mean biologists whose specialism is in ecology. They are not the only ones: botanists, zoologists, etc.
I don’t agree that just because of one modeller – did Hansen actually design/write the models? – they are all bad. I would suspect few of the actual people who do this would actually suggest that the models should be used for predictions. You’ll probably find it is the team leaders who did very little in the design or implementation phase who are making all the shrill predictions.

May 22, 2013 6:40 am

Kasuha says:
May 21, 2013 at 10:26 pm
What you have just proved is that Nic Lewis’ and Otto et al.’s analyses are worthless.
========
That doesn’t follow. Sounds more like wishful thinking on your part without any evidence in support.

May 22, 2013 6:55 am

agricultural economist says:
May 22, 2013 at 3:27 am
This means that you really have to look at such models in detail to be able to fundamentally criticize them.
====
A mathematician says otherwise. Looking at the details is a fool’s errand when it comes to validating complex models; you will miss the forest for the trees.
There are very simple tests based on the use of “hidden data” – data that is kept hidden from both the model builders and the model itself – that can be used to test the skill of any complex model. If the model cannot predict the hidden data better than chance, it has no skill, regardless of what the model details may be telling you.
Where the model builders go off the rails is in assuming that the models are predicting the future. They are almost certainly not, because predicting the future is quite a complex problem. What the models are almost certainly predicting is what the model builders believe the future will be. This is a much simpler problem, because the model builders will tell you what they believe the future will be if you ask them.
And this is what the models are doing when they print out a projection: they are asking the model builders “is this correct? is this what you believe?” If the answer is “yes”, then the model builders will leave the model as is. If the model builders say “no”, then they will change the model. In this fashion the models are always asking the model builders what they believe to be correct.
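A toy sketch of the kind of hidden-data test described here (my own construction for illustration, not anyone’s actual verification procedure): fit on part of a series, then compare skill on the withheld part against a trivial baseline.

```python
# Hold out the last part of a series, fit a simple trend on the rest, and see
# whether it beats a naive "last value persists" baseline on the hidden data.
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.0, 0.1, 160))      # synthetic stand-in for 160 years of data

train, hidden = series[:120], series[120:]         # last 40 values kept hidden

slope, intercept = np.polyfit(np.arange(len(train)), train, 1)
pred_model = intercept + slope * np.arange(len(train), len(series))
pred_naive = np.full_like(hidden, train[-1])

rmse = lambda p, o: float(np.sqrt(np.mean((p - o) ** 2)))
print("trend model RMSE on hidden data:", rmse(pred_model, hidden))
print("naive baseline RMSE on hidden data:", rmse(pred_naive, hidden))
```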