Model Climate Sensitivity Calculated Directly From Model Results

Guest Post by Willis Eschenbach

[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]
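(To illustrate the TCR-to-ECS conversion mentioned above, assuming the stated 1.3 ratio: a TCR of about 1.5°C per doubling of CO2 corresponds to an ECS of roughly 1.3 × 1.5 ≈ 2.0°C per doubling.)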

In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in a different way but they get the same answers, so they are functionally equivalent.

The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.
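For readers who want to see the shape of such a black-box model, here is a minimal sketch in Python of one common form of a two-parameter lagged response. This is an assumed form for illustration only, not necessarily the exact equation I fitted (that one is shown in Figure 3 and in the spreadsheet linked at the end of the post); the function name and the 3.7 W/m2 taken as the forcing from a doubling of CO2 are likewise choices made for the example.

```python
import numpy as np

def black_box_temperature(forcing, lam, tau, f2x=3.7):
    """Minimal sketch of a one-box lagged linear response.

    forcing : annual net forcings (W/m2)
    lam     : climate sensitivity (deg C per doubling of CO2)
    tau     : time constant (years)
    f2x     : forcing from a doubling of CO2, taken here as ~3.7 W/m2
    """
    forcing = np.asarray(forcing, dtype=float)
    carryover = np.exp(-1.0 / tau)          # fraction of last year's response retained
    temps = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        t_eq = lam * forcing[i] / f2x       # equilibrium response to this year's forcing
        temps[i] = carryover * temps[i - 1] + (1.0 - carryover) * t_eq
    return temps
```

With only lambda and tau free, that really is the whole black box: forcings in, temperatures out.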

However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al. paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about the Otto paper in a future post.

Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION: The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).

In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models studied in Forster 2013, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.

Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.

So I digitized the data, because trying to get the results from someone’s scientific paper is a long and troublesome process, and may not be successful for valid reasons. The digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screen shot of part of the process:

Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use “GraphClick” software on my Mac; assuredly there is a PC equivalent.
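For the curious, the guts of the digitizing step is nothing more than a linear mapping from pixel coordinates to data coordinates, calibrated from two known points on each axis. GraphClick does this for you; the little function below is only an illustrative sketch, and the names and calibration numbers in it are made up.

```python
def pixel_to_data(px, py, x_cal, y_cal):
    """Map a clicked pixel (px, py) to data coordinates.

    x_cal, y_cal : ((pixel_a, value_a), (pixel_b, value_b)) calibration pairs,
                   e.g. read off two labelled ticks on each axis.
    Assumes linear (non-logarithmic) axes, as in Figure 1.
    """
    (pxa, xa), (pxb, xb) = x_cal
    (pya, ya), (pyb, yb) = y_cal
    x = xa + (px - pxa) * (xb - xa) / (pxb - pxa)
    y = ya + (py - pya) * (yb - ya) / (pyb - pya)
    return x, y

# Hypothetical calibration: x-axis ticks at 1860 and 2000, y-axis ticks at 0 and 2 W/m2
# pixel_to_data(512, 233, x_cal=((120, 1860), (880, 2000)), y_cal=((400, 0), (150, 2)))
```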

Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:

Figure 3. The blue line shows the average hindcast temperature from the nineteen models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.

As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which as I said above I’ll discuss in a subsequent post.
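For completeness, here is a sketch of how the two coefficients can be recovered by a least-squares fit of the digitized forcings against the digitized model-mean temperatures. It assumes the lagged-response form sketched earlier; the actual fit behind Figure 3 was done in the Excel spreadsheet linked below, so treat this as illustration rather than the method itself.

```python
import numpy as np
from scipy.optimize import minimize

def fit_black_box(forcing, model_mean_temp):
    """Fit lambda (deg C per doubling) and tau (years) by least squares,
    using the black_box_temperature() sketch defined above."""
    forcing = np.asarray(forcing, dtype=float)
    model_mean_temp = np.asarray(model_mean_temp, dtype=float)

    def rmse(params):
        lam, tau = params
        predicted = black_box_temperature(forcing, lam, tau)
        return np.sqrt(np.mean((predicted - model_mean_temp) ** 2))

    # Starting guesses of 2 deg C/doubling and 3 years, refined by Nelder-Mead
    result = minimize(rmse, x0=[2.0, 3.0], method="Nelder-Mead")
    lam, tau = result.x
    return lam, tau, result.fun
```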

So what can we conclude from all of this?

1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).

2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.

3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.

4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have ocean heat content data for the last fifty years or so, so anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years. It is fairly constant on a decadal basis, slowly rising from zero in 1950 to about half a watt/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period. Finally, the analysis does very, very well without it …

5. These results are the sensitivity of the models with respect to their own outputs, not the sensitivity of the real earth. It is their internal sensitivity.

Does this mean the models are useless? No. But it does indicate that they are pretty worthless for calculating the global average temperature. Since all the millions of calculations that they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.

Finally, let me reiterate that I think the current climate paradigm, that the global temperature is a linear function of the forcings with the two related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.

Regards to everyone, more to come,

w.

DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.



133 Comments
Admin
May 21, 2013 10:01 pm

Hilarious Willis – reminds me of one of my favourite scenes out of the Chevy Chase / Dan Aykroyd movie “Spies Like Us”.

Admin
May 21, 2013 10:05 pm

Here’s a better version of the clip – “Break it down again with the machine!” 🙂

RockyRoad
May 21, 2013 10:08 pm

Something I’ve always felt was nothing but a bunch of bloviating, hyperventilating, egomaniacal climate scientists hooked on petaflops, and Willis has proven my hunch to be correct.
They should have stuck with the KISS* principle long ago. Sure, they wouldn’t have had the luxury and notoriety of playing with some of the most powerful computers on the planet, but they’d be honest in their frugality, which is always a better approach.
* KISS–Keep it simple, stupid (climate scientists)!

Kasuha
May 21, 2013 10:26 pm

What you have just proved is that Nic Lewis’ and Otto et al analyses are worthless. Climate sensitivity is a parameter of models so if it cannot be reliably established from their results, the method of establishing the climate sensitivity from results (or actual data) is wrong.
But I rather believe that your analysis is wrong. For instance, the only model you are analysing is your linear model and its regression to multi-model mean. And in the end you declare that linear model (i.e. your model) is wrong. Well, duh.

May 21, 2013 10:30 pm

“Any intelligent fool can make things bigger and more complex. It takes a touch of genius – and a lot of courage – to move in the opposite direction.”
– A. Einstein

Rud Istvan
May 21, 2013 10:37 pm

Willis, we disagree on many things (yup, future fossil fuel max annual energy extraction) but my compliments to you on this. Man, you appear to have nailed this thesis (meaning my formerly well trained (supposedly) mind can find no flaw, so MUST AGREE!). Good job!
(You might even get the irony of that, not directed at you at all, given other comments elsewhere this date.)

Lance Wallace
May 21, 2013 10:52 pm

I tried looking at the forcing and exponential pieces of your equation separately. They are graphed in your modified Excel file. The five sharp drops in the temperature are associated with the forcing term, which always recovers either 1 or 2 years later to a strong positive value. What this means I have no idea, but you might be interested in looking at it.
https://dl.dropboxusercontent.com/u/75831381/CMIP5%20black%20box%20reconstruction–law.xlsx

Peter Miller
May 21, 2013 11:16 pm

The net result of all this is that ‘climate scientists’ are more likely to hide their models’ research findings behind paywalls and increase their obfuscation in regards to providing access to raw data and the methodology in how it is processed. In their eyes, complexity for its own sake is a virtue.
One analogy to climate models is geological models. There is always a very strong temptation to generate increasingly complex models when you do not really understand what you are looking at. Then, all of a sudden, one day some bright spark summarises it into something relatively simple which fits all the known observations and bends no laws of science. At first, this is ignored or attacked, but eventually it is accepted – but this can take several years.
If today’s ‘climate scientists’ were to derive Einstein’s Law of Relativity, E=mc2, they would end up with a formula spread over several pages.
In any scientific field, unless there are strong commercial or strategic issues, those involved should strive for KISS and be totally open about how they arrive at their conclusions. There is not much of either of these in climate science.
In any event, attacks on Willis’ findings are likely to be mostly of the “It’s far too simplistic” variety.

Stephen
May 21, 2013 11:34 pm

I think we can conclude something else profound about the models:
If CO2-sensitivity and lag-time are the only effective parameters, then those models do not effectively consider any other processes which might affect the global mean temperature. Either other drivers are assumed to be small or all of these models predict feedbacks which drive them to zero. That is an unbelievable coincidence, if it is one. It seems more likely that those who constructed the models simply did not effectively include any other long-term processes (like these: http://wattsupwiththat.com/reference-pages/research-pages/potential-climatic-variables/). Their central result, “confirmation” that warming will continue (to one degree or another) as overall CO2-concentration increases seems to be pure circular logic, rendering them worthless.

Manfred
May 21, 2013 11:42 pm

As absolutely nothing in the forcings covers AMO and PDO, these climate models will fail, or actually did fail.

James Bull
May 21, 2013 11:43 pm

This sort of thing happens in big companies as well: management spends large sums on studies to find out how to make things more efficient (sack people), and when they’re done, all that happens is what those who make the products said would happen before the start of it all.
It does beg the question: what are these supercomputers doing if climate calcs are so easy? It must be one heck of a game of Tetris.
James Bull

Ben D.
May 21, 2013 11:53 pm

Is this an example of Occam’s razor?

tty
May 22, 2013 12:13 am

Willis, have you tried fiddling with tau and lambda to match the actual historical record? You might produce a very superior GCM that you could flog to some credulous government for a billion dollars.

Frank
May 22, 2013 12:14 am

Stephen: if I understand this post correctly, the climate models estimate a much higher climate sensitivity parameter than the same parameter derived from the model’s predictions. This means, I think, that the other parameters, feedbacks etc. in the models force the model to derive a higher value for climate sensitivity in order to backfit the known data. So the other parameters and feedbacks do have an effect.

May 22, 2013 12:32 am

Willis, congratulations. Excellent work and a very significant finding.
I’d make a few points.
Complexity in science is a bad thing. I’d go as far as to say that all scientific progress occurs when simple explanations are proposed. Occam’s Razor, etc.
Predictive models of the climate do not require physically plausible mechanisms as part of the predictive process, i.e. the model, in order to produce valid predictions. Trying to simulate the physics of the climate at our current level of understanding is a fool’s errand.
People have a partly irrational faith in what computers tell them. Try walking into your local bank and telling them the computer is wrong. This faith is justified with commercial applications, which are very extensively tested against hard true/false reality. But there is no remotely comparable testing of climate models, for a number of reasons, most of which you are doubtless aware of. However, this faith in computer outputs was the main way in which the climate models were sold to the UN, politicians, etc.
Further, increasing complexity was the way to silence any dissenting sceptical voices, by making the models increasingly hard to understand (complex) and hence to criticize. You have cut that Gordian Knot.
Again, congratulations.

May 22, 2013 1:30 am

Sweet, so sweet! Looking back from the perspective of a poor old downtrodden Generation VW scientist at these Gen X model exercises in circular logic, I can only say…bravo. You have nailed them. I take it you are aware that the ~3 year lag factor has authoritative precedents too, in terms of the mean e-folding time for recirculation of CO2 between atmosphere and oceans, and hence is a primary reflection of the responses of the real earth (AMO, ENSO etc.)

May 22, 2013 1:43 am

Sorry I meant recirculation of heat (not CO2). See Schwartz, 2007 etc.

cd
May 22, 2013 2:05 am

Willis
Do you think we give too much credence to the models by even discussing their results? They are trained on historical data that has huge associated errors.
3) Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.
But if I understand you correctly, the forcings are derived from the model runs. So one of the inputs to your function has to be derived first. So although the output of the collated models can be expressed as a simple algorithm (as you have done here, and as the models commonly do), they still need to do the runs in order to define the inputs to the final model. Do you not think that you need a caveat here – give them their due. That the cleverness is in the derivation, not the output. Perhaps if we massage the egos of the people who design/write the models they might be less adversarial?
Finally, and perhaps I am wrong here also, but the models also have evolving feedbacks, so one would expect dependence on previous results and hence the lagged dependence in your function.

Richard LH
May 22, 2013 2:23 am

Hmm. So if the output is a lagged response of today +2 to 3 years then you should be able to predict the future up to that point in time from already measured values.
So what does the future hold (for the models anyway)?

cd
May 22, 2013 2:28 am

Steve Short
I think there is a lot of hatred toward the people who generate these models.
We only have one climate system on Earth, so we can hardly do experiments like a “Generation VW scientist” would like ;). Computers do give us an opportunity to at least have a play and see what might happen in a simplified world, and as we increase the complexity and computational power, the models may converge on the real world. So I think we are attacking the wrong people. The problem lies with those (such as ecologists, say) who use the model outputs to do impact assessments/predictions without taking the time to understand the model limitations (if not the underlying theory) and then support alarmist nonsense with unfounded confidence in the press.
BTW, most engineering projects (big and small) depend on computer models, as do many scientific fields where experiments are very expensive or impractical.

Greg Goodman
May 22, 2013 2:38 am

Good work Willis. This is a good way to analyse the behaviour of the models. Amazingly good demonstration that despite all the complexity they are telling us nothing more than the trivially obvious about the key question.
However, unless I have misunderstood, what you are extracting is the overall sensitivity to all forcings, not the CS to CO2, i.e. it is the sum of volcanic and CO2 over a period that corresponds to a doubling of CO2.
This is what I’ve been saying for a while: exaggerated volcanics allows exaggerated GHG. Since there was lots of volcanism during the period when they wanted lots of GHG forcing, it works … up to 2000.
The last big volcano was Mt Pinatubo, and then once the dust settled the whole thing goes wrong. Post-y2k is the proof that they have both volcanoes and GHG too strong.
The orange line shows the huge and permanent offset that Mt Agung is supposed to have made, yet this is not shown in any real data, as you have pointed out on many occasions.
Since clouds, precipitation and ocean currents are not understood and modelled but are just guesswork “parameters”, this is all the models can do.
Does your digitisation allow us to put volcanics to net zero and see whether GHG still gives about the right curve?
[Sorry, I have not been able to look at your xlsx file, it crashes LibreOffice. It’s flaky.]

richard verney
May 22, 2013 2:39 am

Willis
It would be useful to list the major volcanic eruptions since 1850 and the claimed negative feedback with respect to each.
Does anyone really think that Pinatubo (1991) had the same impact as Krakatoa (1883)? Without digitizing, to my unaided eye, the negative forcings appear similar.
Does anyone really have any confidence in the temperatures? Surely few people hold the view that today is about 0.6degC warmer than it was in the 1930s, or for that matter some 0.8degC warmer than the 1980s. I would have thought that within the margins of error, it is difficult to conclude that temperatures today truly are warmer than those either in the 1930s or 1880s, such that the present temperature record does not show the full extent of variability in past temperatures.
As far as I am concerned, models are simply GIGO, and one cannot even begin to properly back-tune them until one has a proper record of past temperatures.
Just to point out the obvious, if they are tuned to incorrect past temperatures, going forward they will obviously be wrong. It is now more difficult to fudge current temperature anomaly changes because of the satellite data sets, and this is one reason why we are seeing divergence on recent timescales.

richard verney
May 22, 2013 2:41 am

The 3rd paragraph in my above post contains typo and should have read:
“Does anyone really have any confidence in the temperatures? Surely few people hold the view that today is about 0.6degC warmer than it was in the 1930s, or for that matter some 0.8degC warmer than the 1880s?”

Bloke down the pub
May 22, 2013 2:48 am

If the organisations that funded those models saw that they could be replicated by a couple of lines of equations, do you think they might ask for their money back?

Greg Goodman
May 22, 2013 2:52 am

cd “So I think we are attacking the wrong people. The problem lies with those (such as ecologists say) who use the model outputs to do impact assessments/predictions without taking the time to understand the model limitations ”
The problem is that a lot of research groups are filled with your “ecologists” (by which I presume you mean environmentalists).
If the modellers were honest about their state of development they would be saying: don’t use them for prediction, we are about 50 years away from being able to do that. They are not. They are on the gravy train by playing down the uncertainties and promoting their use now. Many seem to be filled with some environmentalist zeal that is influencing not only their presentation of suitability but the actual “parameters” that are chosen to get an expected or politically desired result out of the model.
Hansen is a self-declared activist and his volcanic forcings are bigger than anyone else’s.
When scientists attempt to use (abuse) their authority as experts to be non scientific and push an agenda, this does create a certain animosity.
