Model Climate Sensitivity Calculated Directly From Model Results

Guest Post by Willis Eschenbach

[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]

In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that, regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in different ways but get the same answers, so they are functionally equivalent.

The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.
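For concreteness, here is a minimal sketch in Python of that two-parameter equation, the same recurrence written on the Figure 3 graph and quoted in the comments below, T(n+1) = T(n) + lambda * ΔF(n+1) / tau + ΔT(n) * exp(-1/tau), with annual steps so that Δt = 1 year. The function and variable names are mine, for illustration only; this is not the author’s code.

import numpy as np

def emulate_temperature(forcing, lam, tau, t0=0.0):
    """Lagged linear emulator:
    T[n] = T[n-1] + lam * (F[n] - F[n-1]) / tau + (T[n-1] - T[n-2]) * exp(-1/tau)
    with annual steps (dt = 1 year). lam is the sensitivity in degrees C per W/m2,
    tau is the time constant in years."""
    T = np.full(len(forcing), t0, dtype=float)
    for n in range(1, len(forcing)):
        dF = forcing[n] - forcing[n - 1]                    # this year's change in forcing
        dT_prev = T[n - 1] - T[n - 2] if n >= 2 else 0.0    # last year's temperature change
        T[n] = T[n - 1] + lam * dF / tau + dT_prev * np.exp(-1.0 / tau)
    return T

Fed a forcing series, a routine like this produces the kind of emulated temperature shown as the red line in Figure 3 below.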

However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al. paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about it in a future post.

Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION: The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).

In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models they studied, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.

Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.

So I digitized the data, because trying to get the data behind someone’s scientific paper can be a long and troublesome process, and may not be successful for valid reasons. Digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screen shot of part of the process:

Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use the “GraphClick” software on my Mac; assuredly there is a PC equivalent.

Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:

Figure 3. The blue line shows the average hindcast temperature from the 19 models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.

As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which as I said above I’ll discuss in a subsequent post.
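As an aside, here is one way such a fit could be done (a minimal sketch, not the actual spreadsheet method): run the digitized average forcing through the emulator sketched earlier and minimize the RMS error against the digitized average model temperature. The starting guesses and the figure of roughly 3.7 W/m2 of forcing per doubling of CO2 are my own assumptions, not values taken from the spreadsheet.

import numpy as np
from scipy.optimize import minimize

def fit_emulator(forcing, temps):
    """Find the (lam, tau) pair that makes the emulator best track the multi-model mean."""
    def rms_error(params):
        lam, tau = params
        return np.sqrt(np.mean((emulate_temperature(forcing, lam, tau) - temps) ** 2))
    result = minimize(rms_error, x0=[0.4, 3.0], method="Nelder-Mead")  # rough starting guesses
    lam, tau = result.x
    # lam in deg C per W/m2, tau in years; per-doubling sensitivity assumes ~3.7 W/m2 per doubling
    return lam, tau, lam * 3.7

Here forcing and temps would be numpy arrays of the digitized Forster multi-model means, and emulate_temperature is the sketch given earlier in the post.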

So what can we conclude from all of this?

1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).

2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.

3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.

4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have information for the last 50 years or so, so anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years. It is fairly constant on a decadal basis, slowly rising from zero in 1950 to about half a watt/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period. Finally, the analysis does very, very well without it …

5. These results are the sensitivity of the models with respect to their own outputs, not the sensitivity of the real earth. It is their internal sensitivity.

Does this mean the models are useless? No. But it does indicate that they are pretty worthless for calculating the global average temperature. Since all the millions of calculations that they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.

Finally, let me reiterate that I think that this current climate paradigm, that the global temperature is a linear function of the forcings and they are related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.

Regards to everyone, more to come,

w.

DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.

133 Comments
Greg Goodman
May 22, 2013 7:00 am

Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-t / tau) for this to make sense.

May 22, 2013 7:04 am

cd says:
May 22, 2013 at 6:19 am

I don’t agree that just because of one modeller – did Hansen actually design/write the models? – they are all bad.

I don’t know, I think they’re all derived from the same GCM code.
I read something, which I haven’t been able to find lately, that said the early models would not generate rising temps as the climate was actually doing; rising CO2 wasn’t doing it. Hansen (who was a planetary scientist studying Venus for NASA) added a CS factor to the models, which of course made the temps rise to match what the weather station data were reporting.

May 22, 2013 7:08 am

ferd berple says:
May 22, 2013 at 6:55 am

……

This is modeler’s bias: the model is written in such a way that its results match what the modeler thinks it’s supposed to do. Complex models have to be compared to the system they’re modeling. The issue is that climate is chaotic, and there’s neither a lab model nor actual measurements to compare to (because it isn’t deterministic).

Greg Goodman
May 22, 2013 7:11 am

Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-dt / tau) for this to make sense.
Similarly, I think the other term should be lambda.dF.dt / tau
In the spreadsheet it is annual data and hence dt = 1 so that’s the way the cells are coded. I think Willis made some transcription errors converting his spreadsheet cells to the analytic formula.
Well spotted, though. That sort of dimension checking is an essential step and can catch careless errors.

May 22, 2013 7:15 am

Greg Goodman says:
May 22, 2013 at 3:35 am
Ah, expectations. That is indeed the primary forcing in climate modelling.
=========
exactly. the most important feedback in climate science is not water vapor, it is the model builder-model feedback loop. it is the existence of this feedback loop that ensures that what the models are predicting is the expectations of the model builder.
the model builder creates the model. the results of the model are then fed back to the model builder, and based on these results the model builder makes changes to the model. unless very careful experimental design is used to break the feedback loop, you are highly unlikely to ever create a complex model that does more than model the beliefs and expectations of the model builders.
we see this problem all the time in the design of computer software. it is one of the reasons for breaking development teams into coders and testers. people are very poor at proofreading their own material. their eyes see what they expect to see, not what is written. some words, such as “to”, are basically invisible to humans.

This Years Model
May 22, 2013 7:21 am

An artful demonstration.
Of the paucity of art.
In the state of the art.
By the ‘artless’ amateur.

Richard M
May 22, 2013 7:30 am

agricultural economist says:
May 22, 2013 at 3:27 am
Willis,
you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.). Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.

Exactly, the value Willis got is the sensitivity to all forcings. Now, consider what this means. They most likely use a CO2 forcing close to 3C/doubling. If the bottom line is 1.2-1.6 then that means the other forcings are in the neighborhood of -1.6C during the time period a CO2 doubling occurs. Now, if we were to drop CO2 forcing to something like a no feedback 1.2C (the low end of the Otto paper) then all the warming goes away.

Greg Goodman
May 22, 2013 7:51 am

“… Now, if we were to drop CO2 forcing to something like a no feedback 1.2C (the low end of the Otto paper) then all the warming goes away.”
I think you would have strong cooling if you did just that. You need to look at the strong volcanic forcing too.
This may not be as simple as just scaling down. What is seen in the temp plot without GHG is that volcanoes produce a strong _and permanent_ negative offset. This is what is not matched by real climate data.
What this means is that climate is compensating for the effects of volcanoes, and this is not in the models at all. This may well be due to the lack of Willis’ “governor”: tropical storms.
As he has discussed elsewhere, they have the means and the power to adjust on an almost hourly level and modulate solar input in the tropics.
Now unless someone with an expectation that this is the case incorporates this into the “parametrisation” of clouds, it will never happen in a model which cannot produce its own tropical storms.
And since modellers know that this will kill the need for the hypothesised water vapour feedback to GHG and kill off CAGW and kill the goose that lays those golden eggs, I don’t see that happening any time soon.
They will likely come up with another combination of fudge factors that will need another 10 years of data to invalidate and so on until a well earned retirement day.
Unfortunately the motivations to destroy your own grant source are not strong.

Greg Goodman
May 22, 2013 8:07 am

Much has been said of the lack of the predicted mid-tropo hot spot. In fact there is some trace of it, but it’s far too weak.
If you get rid of volcanic cooling, because the climate system largely compensates, and reduce the CO2 effect to what the physics indicates, it may all start to work.
Thunderstorms would also act to evacuate the reduced CO2 warming and produce a reduced hot spot (which is what is actually seen). Dubious ‘missing’ heat could be forgotten, and post-2000, with no volcanoes and a reduced CO2 effect, it may actually start to match reality.

May 22, 2013 8:09 am

with the OHC component you are calculating TCR.
write this down

May 22, 2013 9:24 am

Rather than a true black box, I believe Willis has discovered exactly how the modellers operate. Use the total forcings from all sources combined to give a fit, and then, going with your sacrosanct 3C per doubling of CO2, adjust all other factors to trim the overall figure to 1.2. There is no real complexity in the models – they are Rube Goldberg devices.
http://en.wikipedia.org/wiki/Rube_Goldberg_machine

Steve McIntyre
May 22, 2013 9:55 am

Willis, nice spotting with the digitization and the fitting of the function. That there was a relatively simple relationship between model forcing and model global temperature is something that has been chatted about from time to time, but the fit here is really impressive. Wigley and Raper’s MAGICC program, used in past IPCC studies, also emulated key model outputs from forcings: I wonder if it does something similar.

Greg Goodman
May 22, 2013 10:08 am

To be fair, Willis, you did confuse the two:

So what can we conclude from all of this?
1. The models themselves show a much lower climate sensitivity (1.2°C to 1.6°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).

[Thanks, Greg, see the update to the head post and my comment below. -w.]

Ashby
May 22, 2013 10:08 am

If the modeled time lag is roughly 3 years, flat temperatures for a decade and a half with rising CO2 must really have them sweating. They know their models are broken.

Greg Goodman
May 22, 2013 10:15 am

I wonder whether it is that surprising that things work out this way.
If the longer term temperature difference did not match this kind of heat equation, it would signify that they had got the energy budget significantly wrong.
I think this is exactly why we have the talk of “missing heat” since Y2K: when the incorrect volcanics are no longer there to balance the incorrect GHG forcing, the energy budget goes wrong and the temperature change goes with it.

Greg Goodman
May 22, 2013 10:24 am

http://climategrog.wordpress.com/?attachment_id=258
If our newly adopted friend in Alaska is to be believed, that volcanic effect is very short-lived and there is no permanent offset.
Willis does not think even that is attributable, so presumably he would want to put volcanic “forcing” at zero.
Either way, I think there is an enormous lack of evidence to support the idea that volcanoes exert a permanent drop in temperature. I think Willis’ equatorial governor takes care of it.

Greg Goodman
May 22, 2013 10:27 am

Not sure whether you can have an “enormous” lack (but it is pretty big 😉 )

kadaka (KD Knoebel)
May 22, 2013 11:05 am

Greg Goodman said on May 22, 2013 at 7:11 am:

Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-dt / tau) for this to make sense.

In the spreadsheet it is annual data and hence dt = 1 so that’s the way the cells are coded. I think Willis made some transcription errors converting his spreadsheet cells to the analytic formula.
Well spotted, though. That sort of dimension checking is an essential step and can catch careless errors.

From spreadsheet, at E12, calculated temperature for year 1854:
=E11+$E$1*H12/$E$2+I11*EXP(-1/$E$2)
= Previous calculated temp + lambda*(G12-G11)/tau + (E11-E10)*e^(-1/tau)
= Prev.calc.temp + lambda*(average model forcing 1854 – average model forcing 1853)/tau
+ (calculated temperature change from 1852 to 1853)*e^(-1/tau)
Equation on graph translates as:
E12 = E11 + $E$1*H12/$E$2 + I11*EXP(-1/$E$2)
So equation on graph matches spreadsheet, no transcription errors.
He’s given the formula before:
T(n+1) = T(n) + λ ΔF(n+1) / τ + ΔT(n) exp(-1/τ)
So why assume now that there are transcription errors?
Oh, the exponent should be dimensionless, and the 1 can be read as Δt, set at 1 year, so that little bit is fine.

May 22, 2013 11:12 am

FerdBerple says
“where the model builders go off the rails is in assuming that the model are predicting the future. They are almost certainly not, because predicting the future is quite a complex problem. what the models are almost certainly predicting is what the model builders believe the future will be. this is a much simpler problem because the model builders will tell you what they believe the future will be if you ask them.”
An excellent proof of mental dishonesty/fantasy/hope.
These are speculators who assume that their speculations are not speculations. They know the actual future; so they believe. Almost a claim of psychic power, backed by “consensus” where doubt is forbidden.
Maybe it would be healthier to be trying to build a real time machine, rather than claiming to have proven the existence of a desperately needed, preimagined future. Or if you’re going to be a psychic, you might want to be good at it first before making predictions.