Guest Post by Willis Eschenbach
[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]
In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in a different way but they get the same answers, so they are functionally equivalent.
The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.
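To make the black-box idea concrete, here is a minimal sketch in Python of such a two-parameter emulator. This is my own illustration, not Willis’s actual spreadsheet: the parameter values and forcing series are placeholders, and the discrete form used is the corrected recursion Willis gives later in the comments.

```python
import numpy as np

def one_box_emulator(forcing, lam, tau):
    """Emulate a temperature anomaly series (deg C) from an annual forcing series (W/m2).

    lam -- climate sensitivity, deg C per W/m2
    tau -- time constant, years (an annual time step is assumed)

    Each year's temperature *increment* responds to that year's forcing
    increment, relaxing with e-folding time tau.
    """
    decay = np.exp(-1.0 / tau)       # per-year relaxation factor
    temps = np.zeros_like(forcing)
    dT = 0.0                         # previous year's temperature increment
    for n in range(1, len(forcing)):
        dF = forcing[n] - forcing[n - 1]
        dT = lam * dF * (1.0 - decay) + dT * decay
        temps[n] = temps[n - 1] + dT
    return temps

# Toy example: a smooth forcing ramp (a placeholder, not the Forster data)
forcing = np.linspace(0.0, 2.5, 150)    # W/m2 over 150 "years"
print(one_box_emulator(forcing, lam=0.54, tau=2.9)[-5:])
```

With lam = 0.54°C per W/m2, a doubling of CO2 (about 3.7 W/m2 of forcing) gives roughly 2°C, in the range discussed below.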
However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al. paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about the Otto paper in a future post.
Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION. The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).
In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models studied in Forster 2013, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.
Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.
So I digitized the data, because trying to get the results from someone’s scientific paper is a long and troublesome process, and may not be successful for valid reasons. The digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screen shot of part of the process:
Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use “GraphClick” software on my Mac, assuredly there is a PC equivalent.
Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:
Figure 3. The blue line shows the average hindcast temperature from the 19 models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.
As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which as I said above I’ll discuss in a subsequent post.
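For anyone who wants to reproduce this kind of two-parameter fit, here is a minimal sketch, reusing the one_box_emulator function sketched near the top of the post. The arrays shown are synthetic placeholders standing in for the digitized Forster forcing and multi-model-mean temperature series.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder series -- substitute the digitized Forster data here.
# one_box_emulator is the function from the sketch near the top of the post.
years = np.arange(1850, 2006)
forcing = 0.012 * (years - 1850) + 0.1 * np.sin((years - 1850) / 8.0)
model_temps = one_box_emulator(forcing, lam=0.54, tau=2.9)

def rms_error(params):
    """RMS mismatch between the emulator and the model-mean temperatures."""
    lam, tau = params
    if tau <= 0.0:
        return 1e6   # keep the search away from unphysical tau
    return np.sqrt(np.mean((one_box_emulator(forcing, lam, tau) - model_temps) ** 2))

fit = minimize(rms_error, x0=[0.5, 3.0], method="Nelder-Mead")
lam_fit, tau_fit = fit.x
print(f"lambda = {lam_fit:.2f} deg C per W/m2, tau = {tau_fit:.1f} years")
print(f"sensitivity = {3.7 * lam_fit:.1f} deg C per doubling of CO2")
```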
So what can we conclude from all of this?
1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).
2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.
3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.
4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have information for the last 50 years or so, so anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years. It is fairly constant on a decadal basis, slowly rising from zero in 1950 to about half a watt/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period. Finally, the analysis does very, very well without it …
5. These results are the sensitivity of the models with respect to their own outputs, not the sensitivity of the real earth. It is their internal sensitivity.
Does this mean the models are useless? No. But it does indicate that they are pretty worthless for calculating the global average temperature. Since all the millions of calculations that they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.
Finally, let me reiterate that I think that this current climate paradigm, that the global temperature is a linear function of the forcings and they are related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.
Regards to everyone, more to come,
w.
DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.
@Willis Eschenbach on May 22, 2013 at 9:03 am:
Thanks Willis. I noticed that later when I was hacking through the sheet; I should have said right away that I had figured it out back then. My apologies.
It’s models all the way down, even to the centre of the Earth.
Greg Goodman says:
May 22, 2013 at 7:00 am
Thanks, Greg. You are correct. However, since this is a yearly iteration, t=1.
w.
Willis –
Like this. The black box calculation equivalency is disturbing in that it says all those expensive computers turn out to be unnecessary: so much for the UK Met Office’s excuse.
The calculation equivalency proof also legitimizes the knowledgeable amateur in his hindcasting and forecasting work: a 99% match has to mean a good equivalency … unless you are saying that the future will not be a match for even the recent past. Which I wouldn’t be surprised to hear said.
When you hold that something is “special”, you can say anything is reasonable because neither past nor process are limiting factors.
Along this equivalency: recently on WUWT there was discussion of actual ocean heating vs projected ocean heating. If we view ocean heating processes as another black box in which there is only one item of concern, the radiative forcing of CO2, the ratio of observed to modelled heating gives us a correction factor for the principal forcing. All it takes is digitizing two trends, the modeled and the measured, graphing them and taking a linear trend of the results.
In the case referenced, I think the correction factor is about 4/7 (taken from energy measurements rather than temperature from 2005 to 2013), i.e. measured additional energy/time = 0.57 X modelled additional energy/time.
If the radiative forcing of CO2 is the fundamental variable in the black box for oceanic heating, and taken to be 3.0C/doubling of CO2, then the corrected forcing is 1.7C/2XCO2.
So Willis, you have this equation:-
T(n+1) = T(n) + lambda * deltaF(n+1) / tau + deltaT(n) * exp(-1/tau)
And T(n) is presumably Temperature at some time epoch, and of course T(n+1) the Temperature at the next time value.
lambda is the climate sensitivity, deltaT per CO2 doubling; tau is some delay time; and that leaves only the need for some data input, which presumably is deltaF, some watts/m^2 of “forcing”.
Now of course , exp (-1/tau) is meaningless; there’s a slight dimensions disparity there somewhere.
exp (-1/ square feet) is similarly meaningless.
Now if the very famous Andy Grove can accept a transistor noise equation that is not dimensionally balanced, we shouldn’t worry about a simple nonsense exponential.
But now, what haven’t we assumed here?
For starters we have not assumed the validity of the expression:-
T2 – T1 = lambda * log2(CO2,2 / CO2,1), as being more valid than (T2 – T1) / (CO2,2 – CO2,1) = lambda, or even more valid than CO2,2 – CO2,1 = lambda * log2(T2 / T1).
Nor have we asserted that exp(-1/tau) is any different from -1/tau, or different from log(1 – 1/tau).
I daresay a curve-fitting process such as you have followed would yield equally impressive correlation numbers with any one of the alternative possible assumptions I have outlined above.
I’m not impressed with correlation numbers, especially really rough correlation numbers like 0.991.
How about a correlation number like 0.99999999; would that grab your attention?
I’m aware of at least one instance of a mathematical equation fit to the experimentally determined value of a fundamental constant of Physics, the fine structure constant, which predicts the correct value to within 30% of the standard deviation of the very best experimentally measured value.
Yet that mathematical expression is quite bogus, and was one of about a dozen similarly bogus equations, each of which scored a hit inside that standard deviation.
Just goes to show that quite complex functions can be modeled to high precision, by much simpler functions.
But I do prefer to see equations that are at least dimensionally balanced.
Greg Goodman says:
Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-t / tau) for this to make sense.
Willis: Thanks, Greg. You are correct. However, since this is a yearly iteration, t=1.
No worries.
What do you make of the other term? The equation cannot be correct as shown because each term must have the same physical dimensions. (You can’t add kelvin to kelvin per year.)
I’m not sure how you derived it but I think it’s the same thing: (t / tau) is your scaled time variable. Your cell does the calculations correctly because dt is always one, but your equation is an invalid transcription.
It would be good if it were correct. That sort of thing dents the cred of what you are presenting.
Sorry Willis, I’m confusing my T’s. I’ll have another think, but something is astray.
Someone explain the difference between TCR and ECS in plain, simple, easy-to-understand terms, please.
Greg Goodman says: May 22, 2013 at 7:11 am
“Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-dt / tau) for this to make sense.”
Greg, I agree. But in terms of approximating a differential equation, if that is what it is doing, it still doesn’t seem right. It looks like it is related to the model of Schwartz. But that was a response to a one-off rise in F, and the exponential was exp(-t/τ), not exp(-dt/τ). And if the ΔF relates to a derivative, then it should be divided by dt too.
None of this disputes the fact that the model fits. But the question is still there – how do we know λ is climate sensitivity? And is it ECS?
Willis said: I use “GraphClick” software on my Mac, assuredly there is a PC equivalent.
Engauge Digitizer. Freeware. Download here, Windows exe and Linux versions.
Willis, thank you! It’s already in the Debian Linux distribution, I just installed it and used digitizing software for the first time. Wonderful!
But it’s version 4.1 through the package manager, Debian itself just went to Version 7 and I haven’t yet, and the latest Engauge version is 5.1. Not so wonderful!
Willis,
Interesting post. Here is a table of the actual TCR and ECS in various climate models: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2-3.html#table-8-2, at least as of the AR4, and they are different from what you report. For example, the CCSM3 and GISS models have an ECS of 2.7 C and a TCR of 1.5-1.6 C.
So, your simple lagged linear transformation model of their model is not diagnosing the ECS or TCR of the climate models quite correctly (worse on the ECS than the TCR). You can presumably read there how the modeling groups generally diagnose the ECS and TCR in the models, but it is presumably by some method that is more rigorous than fitting the models’ reproduction of the historical temperature record to this sort of simple exponential.
We’ve talked before about the fact that there is a slower relaxation component in climate models so they really have to be fit by at least a 2-box rather than 1-box model (which is essentially what yours is…or a close variant on a 1-box model).
“how do we know λ is climate sensitivity? And is it ECS?”
I just noticed the update. But I still don’t see the basis for saying it is TCR either.
The model doesn’t include Mosh’s distinction, because the ocean isn’t part of it.
Yes indeed Nick. I had noted that Willis’ delta T was the same thing as T(n)-T(n+1) and had rearranged it to find something like equation 6 in your link.
Of course Schwartz calls sensitivity 1/lambda but that’s just definition.
However, that seems to suggest that the tau in that term is spurious. Even while suggesting it needed to be t/tau from dimensional arguments, I could not see why it was there.
Mosh pointed out above that this will give transient CS, not equilibrium CS.
Perhaps Willis could help out and explain where the 1/tau came from.
(Sorry Frank, but I don’t know how to properly quote comments.)
If the other mechanisms have effects in the models, for the models to be parametrizable with just the CO2-sensitivity and time-lag, then those must be proportional to the effects of CO2. They may drive up the internal number (to get the backfit) by cooling the planet historically, but the only way that backfit could work is by having them all fail to actually produce additional degrees of freedom. Unless the effect of CO2 is built backwards from the total temperature-change, it would be an unbelievable coincidence that their impacts cumulatively scale directly with that of CO2 throughout the temperature-record.
” But I still don’t see the basis for saying it is TCR either.”
The equation assumes a fixed heat capacity that is what is changing in temperature. So the oceans are there; they are the major part of the fixed heat capacity.
To the extent that this is the “total” CS in K per W/m2 (not the 2xCO2 CS), this ignores any longer-scale exchanges with the deeper ocean. Hence TCR rather than ECS, as Mosh noted.
Hi Willis,
Thanks for an excellent post. It would be good if some people who do discrete modeling could weigh in on your discretizing, because your time constant is not a large multiple of your time step. I am no expert, but here is the way I would have done it:
T(n+1) = T(n) + [ lambda * average(F(n+1), F(n)) – T(n) ] / ( tau + 0.5 )
This gives tau = 3.5 years and lambda = 0.56 (transient sensitivity of 1.7 degC/doubling) with an error of 0.184.
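A note on that update rule: if I’ve read it right, it appears to be the trapezoidal discretization of the one-box equation tau*dT/dt = lambda*F – T, which is presumably why it behaves well even when tau is only a few time steps. A quick sketch of the check against the exact step response (my own illustration, with placeholder parameter values):

```python
import numpy as np

lam, tau = 0.5, 3.5            # placeholder parameter values
n_years = 40

# Trapezoidal update: T(n+1) = T(n) + [lam*avg(F) - T(n)] / (tau + 0.5)
T = np.zeros(n_years)
for k in range(n_years - 1):
    F_avg = 1.0                # unit step forcing, so average(F(n+1), F(n)) = 1
    T[k + 1] = T[k] + (lam * F_avg - T[k]) / (tau + 0.5)

# Exact solution of tau*dT/dt = lam*F - T for the same unit step
t = np.arange(n_years)
T_exact = lam * (1.0 - np.exp(-t / tau))

print(np.max(np.abs(T - T_exact)))   # ~1e-3: close even with tau only ~3 steps
```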
Stephen said on May 22, 2013 at 2:46 pm:
Dear Stephen, go to top of this page. Click on “Test”, that’s the page with the formatting help, and a comment section where you can try out various HTML tricks for your comments.
But here’s the quick version:
<blockquote>began quoted text
more text
end of quote</blockquote>
Continue comment right after “closing tag”, no line space, or WordPress will display something strange.
This yields:

    began quoted text
    more text
    end of quote

Comment continues.
Later on the Test page you can experiment with nested quotes. Very fun!
Stephen wrote: “Unless the effect of CO2 is built backwards from the total temperature-change, it would be an unbelievable coincidence that their impacts cumulatively scale directly with that of CO2 throughout the temperature-record.”
Exactly. Curious, isn’t it?
This thread is a nice example of true ‘peer’ review in quasi real time. Wonder if the IPCC will take any note of the similar process that followed the leak of AR5 SOD. Actually, that was a rhetorical question. Likely not.
Willis,
Not sure if we have directed you to this before, but here is a post by Isaac Held covering a subject similar to what you are doing here (the emulation of GCMs by simpler models like 1-box models): http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/ Note that he finds the same sort of thing that you do: he computes a time scale for relaxation of about 4 years and a “climate sensitivity” (again, a sort of transient sensitivity, I think) of 1.5 K, but notes that the actual ECS in the CM2.1 model he is emulating is ~3.4 K.
“””””…..Greg Goodman says:
May 22, 2013 at 12:15 pm
Greg Goodman says:
Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-t / tau) for this to make sense.
Willis: Thanks, Greg. You are correct. However, since this is a yearly iteration, t=1…..”””””
NO IT ISN’T !!
t = 1 year; it does NOT = 1
Hi Willis,
I’m sure we’ve been here before…
See my comment of June 4 at 07:02am in your previous post:
http://wattsupwiththat.com/2012/05/31/a-longer-look-at-climate-sensitivity/
In the present article, you seem to have reverted to your “old” formula before the correction we discussed. This gives you values of lambda which are too small by a factor of tau*(1-exp(-1/tau)). Specifically, in this study, this means a factor of 0.846.
More importantly, after you make this correction, you will be calculating the unit climate sensitivity (not TCR) under the assumption of a constant linear feedback, and you ARE taking ocean heat content into account, albeit under the very simple assumption of a constant heat capacity. So if you apply a forcing of 3.7 W/m2 you should get an approximation of the ECS for a doubling of CO2, again under the assumption of a constant linear feedback.
With the correction, the formula is the numerical solution of the linear feedback equation given by:-
C dT/dt = F(t) – T(t)/lambda
Rate of heat gain by oceans = Cumulative forcing at time (t) LESS (Temperature change from t=0)/climate sensitivity
The above is a two parameter equation in C and lambda. By setting tau = C*lambda it becomes a two parameter equation in lambda and tau, but obviously the apparent heat capacity can always be back-calculated if values of tau and lambda are known. The total heat gain in the oceans at time t is just given by C*T(t) in units of watt-years.
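As a numerical sanity check on Paul_K’s equation, here is a sketch of my own, with made-up values of C and lambda chosen so that tau comes out near 2.9 years (close to the fitted value). A fine-step Euler integration confirms both the e-folding time tau = C*lambda and the tau*(1 – exp(-1/tau)) correction factor he quotes:

```python
import numpy as np

lam = 0.54                 # climate sensitivity, K per W/m2 (placeholder)
C = 5.4                    # heat capacity, watt-years/m2 per K (placeholder)
tau = C * lam              # e-folding time in years, here about 2.9

# Fine-step Euler integration of C*dT/dt = F - T/lam for a unit step forcing
dt = 0.001
t = np.arange(0.0, 30.0, dt)
T = np.zeros_like(t)
for i in range(len(t) - 1):
    T[i + 1] = T[i] + dt * (1.0 - T[i] / lam) / C

# At t = tau the response should reach (1 - 1/e) of the final value lam
i_tau = int(round(tau / dt))
print(T[i_tau] / lam, 1.0 - np.exp(-1.0))     # both about 0.632

# Cumulative ocean heat content at time t is C*T(t), in watt-years/m2
print(C * T[-1])

# Paul_K's correction factor between the old and corrected lambda fits
print(tau * (1.0 - np.exp(-1.0 / tau)))       # about 0.846 for tau ~ 2.9
```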
People have commented that I must have made a transcription error, and it’s true, I did. The actual formula should be:
T(n+1) = T(n) + λ ∆F(n+1) * (1 – exp(-1/τ)) + ΔT(n) * exp(-1/τ)
This is the proper form and fixes the units problem. The full equation is
T(n+1) = T(n) + λ ∆F(n+1) * (1 – exp(-∆t/τ)) * ∆t + ΔT(n) * exp(-∆t/τ) * ∆t
This simplifies to the equation above, since ∆t = 1. The two terms on the right are in degrees.
Including units, the full equation is
T(n+1) [degrees] = T(n) [degrees] + λ [degrees/W m-2] * ∆F(n+1) [W m-2/year] * (1 – exp(-∆t [years] / τ [years])) * ∆t [years] + ΔT(n) [degrees/year] * exp(-∆t [years] / τ [years]) * ∆t [years]
This can be written in units alone, as
[degrees] = [degrees] + [degrees/W m-2] * [W m-2/year] * (1 – exp(-[years]/[years])) * [years] + [degrees/year] * exp(-[years]/[years]) * [years]
Once again, I’ve corrected the head post and the graphic. The error translates to an increase of about 25% in the calculated climate sensitivity, but doesn’t change the time constant. My thanks to those who noticed the error.
w.
I haven’t confirmed the iterative equation this time, but, if it’s otherwise correct, shouldn’t
“T(n+1) = T(n) + λ ∆F(n+1) * (1 – exp(-∆t/τ)) * ∆t + ΔT(n) * exp(-∆t/τ) * ∆t”
be
“T(n+1) = T(n) + λ ∆F(n+1) * (1 – exp(-∆t/τ)) + ΔT(n) * exp(-∆t/τ)”,
i.e. shouldn’t you drop a couple of ∆t’s to make the dimensional analysis work? This is because ∆F(n+1) is in W m-2, not W m-2/year.
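Joe Born’s point can be checked numerically. With the stray ∆t’s dropped, the increment recursion ΔT(n+1) = λ ∆F(n+1) (1 – exp(-1/τ)) + ΔT(n) exp(-1/τ) is exactly the year-to-year difference of the analytic solution of τ dT/dt = λF – T when the forcing is held constant within each year. A minimal sketch of that check (my own, with an arbitrary synthetic forcing path, not the Forster data):

```python
import numpy as np

lam, tau = 0.54, 2.9                          # placeholder parameters
rng = np.random.default_rng(0)
F = np.cumsum(rng.normal(0.02, 0.05, 160))    # arbitrary forcing path, W/m2
F = F - F[0]                                  # start from equilibrium (F=0, T=0)

decay = np.exp(-1.0 / tau)

# (a) The corrected recursion in Joe Born's form (no trailing dt factors)
T_rec = np.zeros_like(F)
dT = 0.0
for n in range(1, len(F)):
    dT = lam * (F[n] - F[n - 1]) * (1.0 - decay) + dT * decay
    T_rec[n] = T_rec[n - 1] + dT

# (b) Exact solution of tau*dT/dt = lam*F - T, forcing constant each year
T_exact = np.zeros_like(F)
for n in range(1, len(F)):
    T_exact[n] = lam * F[n] + (T_exact[n - 1] - lam * F[n]) * decay

print(np.max(np.abs(T_rec - T_exact)))        # agrees to machine precision (~1e-15)
```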
Reminds me of something I heard long long ago… that one could be a near expert weather predictor in many parts of the country simply by saying tomorrow would be very much like today. Something like 95%+ accuracy in Phoenix, IIRC. When wrong, it’s a regime change of some sort, but the next day you tend to return to accuracy…
So for climate you just lag it a bit, eh? Easy peasy…
😉