Guest Post by Willis Eschenbach
[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]
In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that, regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in different ways but get the same answers, so they are functionally equivalent.
The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.
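For concreteness, here is a minimal sketch in Python of a lagged linear response of this general form. It is not necessarily the exact discretization used in the earlier posts; the function and variable names are mine, and 3.7 W/m2 per doubling of CO2 is the conventional conversion between lambda (in °C per W/m2) and sensitivity per doubling.

```python
import numpy as np

def one_box_response(forcing, lam, tau):
    """Lagged linear ("one-box") response of temperature to forcing.

    forcing : annual forcing anomalies (W/m^2)
    lam     : climate sensitivity, deg C per W/m^2
    tau     : time constant, years
    """
    decay = np.exp(-1.0 / tau)  # fraction of last year's anomaly retained
    temps = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        # each year, temperature relaxes toward the equilibrium
        # response lam * F with e-folding time tau
        temps[i] = decay * temps[i - 1] + (1.0 - decay) * lam * forcing[i]
    return temps
```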
However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al. paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about the Otto paper in a future post.
Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION. The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).
In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models studied in Forster 2013, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.
Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.
So I digitized the data, because trying to obtain the underlying numbers from the authors of a scientific paper is a long and troublesome process, and may not succeed, for entirely valid reasons. Digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screen shot of part of the process:
Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use “GraphClick” software on my Mac, assuredly there is a PC equivalent.
Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:
Figure 3. The blue line shows the average hindcast temperature from the 19 models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.
As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which as I said above I’ll discuss in a subsequent post.
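For anyone who wants to replicate the coefficient determination, here is a hedged sketch of one way to do the fit, reusing the one_box_response function sketched above. The forcing and temps arrays here are synthetic stand-ins; in practice you would substitute the digitized Forster series from the spreadsheet.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-ins for the digitized Forster series -- replace with
# the real annual forcing (W/m^2) and multi-model mean temperature (deg C).
forcing = np.linspace(0.0, 2.0, 150)
temps = one_box_response(forcing, 0.54, 3.0)

# Fit lambda and tau by least squares, with initial guesses p0
(lam_fit, tau_fit), _ = curve_fit(one_box_response, forcing, temps,
                                  p0=[0.5, 3.0])
print("time constant: %.1f years" % tau_fit)
print("sensitivity: %.1f deg C per doubling"
      % (lam_fit * 3.7))  # 3.7 W/m^2 per doubling of CO2
```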
So what can we conclude from all of this?
1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).
2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.
3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.
4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have information for the last 50 years or so; anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years, varying smoothly on a decadal basis as it rose from near zero in 1950 to about half a watt/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period (see the sketch after this list). Finally, the analysis does very, very well without it …
5. These results are the sensitivity of the models with respect to their own outputs, not the sensitivity of the real earth. It is their internal sensitivity.
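On point 4, a hedged sketch of how one might test the ocean-heat question: subtract an assumed uptake ramp (zero in 1950, rising linearly to about 0.5 W/m2 today, per the figures above) from the post-1950 forcing and refit as before. The array forcing_1950on is a hypothetical stand-in for the post-1950 slice of the digitized forcing.

```python
import numpy as np

years = np.arange(1950, 2013)
# assumed uptake ramp: 0 W/m^2 in 1950 rising linearly to 0.5 W/m^2 today
ocean_uptake = 0.5 * (years - 1950) / (years[-1] - 1950)

# hypothetical stand-in for the post-1950 slice of the digitized forcing
forcing_1950on = np.linspace(0.5, 2.0, len(years))

# forcing net of ocean heat uptake; refit lam and tau against this
adjusted_forcing = forcing_1950on - ocean_uptake
```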
Does this mean the models are useless? No. But it does indicate that their complexity buys little when calculating the global average temperature: since all the millions of calculations they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.
Finally, let me reiterate that I think that this current climate paradigm, that the global temperature is a linear function of the forcings and they are related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.
Regards to everyone, more to come,
w.
DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.
Hi Paul… good to see you’re back online.
I think the commitment scenario is a fair comparison to make. Looking at the spreadsheet a little more closely, I see the sum of delta F over the last 4 years was -0.10 W/m^2, so this probably explains why I found zero rise post-commitment. What little was in the pipeline was subtracted at the end of the period.
I guess the clarification is that Willis’s model can produce seemingly functionally equivalent results over the instrumental period, but will probably diverge from the MMM in future projections.
I have my own made-up model which is capable of producing MMM-like hindcasts and projections under the commitment scenario. My interest is in projected trends given a constant delta F of 0.02 W/m^2 over the rest of the century. This is my own made-up BAU scenario. Depending on assumptions and data used, I get decadal trends in the range of 0.10°C to 0.16°C. So I would likely be on the low end of the IPCC’s range, and Willis’s model is on the low end of my range. I just like to compare results.
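For what it’s worth, a quick sketch of where decadal trends like that come from in a lagged linear model: under a steady forcing ramp, the one-box model’s warming trend settles at lambda times the ramp rate. The lambda values below are purely illustrative, and I’m reading the constant delta F as 0.02 W/m^2 per year.

```python
dF_dt = 0.02              # W/m^2 per year (the BAU assumption above)
for lam in (0.5, 0.8):    # illustrative sensitivities, deg C per W/m^2
    # asymptotic one-box trend under a linear forcing ramp is lam * dF/dt
    print("lambda = %.1f -> %.2f deg C/decade" % (lam, lam * dF_dt * 10))
```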
Your point regarding the curvilinear relationship is taken. IIRC this was due to the slow albedo impacts in the high latitudes. I’m not really very interested in ECS or very long-term impacts, and I certainly wouldn’t use my model or Willis’s to project multi-century scenarios. Given uncertainties in forcings, I also consider the MMM projections useless.
What Paul_K says in this post http://wattsupwiththat.com/2013/05/21/model-climate-sensitivity-calculated-directly-from-model-results/#comment-1314851 seems right to me.
So, the lambda is an estimate of the equilibrium sensitivity but under the 1-box assumption of there being just a single relaxation timescale. As Isaac Held explains in the post I referenced ( http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/11/3-transient-vs-equilibrium-climate-responses/ ), fitting to such a 1-box model is known to underestimate the actual ECS in the models because of the existence of longer timescales in the models.
Willis, in your most recent post, you say:
I couldn’t find where Mosh referred to a paper by Held. Could you provide the reference to the paper that you are talking about? Thanks.
Joel, here’s the Isaac Held paper. Note that he has specified the time constant at four years rather than fitting it. There’s a typo in my note you quoted, ECS should be 3.4°C, not 3.3°C.
w.
Hi Willis,
No, he didn’t. Held uses the term lambda for the total feedback, which is the inverse of your lambda. The climate sensitivity he obtained was therefore 0.435 deg C/(W/m^2) = 1/2.3.
The ECS he obtained under the assumption of linear feedback was 1.5 deg C.
The ECS of the GFDL2.1 model is 3.4 deg C.
Once again, Willis, please note that the difference between the 3.4 deg C and the 1.5 deg C is due to the fact that GFDL2.1 displays a strongly curvilinear relationship between net flux and temperature when in prediction mode. It has nothing to do with the ratio of ECS to TCR, which is completely different.
joeldshore,
I had a go at the subject of ECS from models vs the apparent climate sensitivity over the historical period in a post here:-
http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/
Basically, the ECS displayed by the GCMs works out to be about twice the ECS value obtained by fitting a constant linear feedback model to the GCM data over the instrumental period.
Joe Born,
I think you are a brilliant engineer masquerading as a lawyer masquerading as a brilliant engineer.
Let me try to give you a very short version of Poseidon’s vagaries.
Textually the energy balance equation can be written as:-
the rate of change of ocean heat energy = the net incoming radiative flux imbalance
The RHS is given by F – T/lambda, using Willis’s definition of lambda.
On the RHS of this equation, we note that the net flux depends on the surface (and atmospheric) STATE, but it does NOT depend in any way on historic ocean heat uptake. The radiative response doesn’t care what the history of ocean heat uptake was to get to the particular state; it is only interested in the state itself. This is true EVEN IF we change out the assumption of constant lambda for something more sophisticated.
Now, one definition of ECS is the temperature change at which the net radiative imbalance goes to zero. From this we see that we can set the LHS of the equation to zero to estimate the climate sensitivity. Note that we have done this without talking at all about the ocean model, so what are we missing? We are missing the behaviour of temperature in the time domain, since this is controlled by the ocean heat uptake – the LHS of the equation.
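Spelling that step out in the same plain-text style as the equation below: setting the LHS to zero gives 0 = F – T_eq/lambda, i.e. T_eq = lambda x F; with the conventional ~3.7 W/m^2 per doubling of CO2, ECS is roughly 3.7 x lambda.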
OK, so now we can consider the LHS of the equation. You (quite reasonably) are scathing about the choice of a constant heat capacity for the ocean/mixed layer or sumpn. Agreed, it is a model with very limited applicability. Well, suppose then that we don’t try to define the ocean model at all. We have estimates of the OHC uptake (energy units/m^2), and the LHS of the equation can be expressed as d(OHC)/dt. Voila, we can write the energy balance without making any assumptions about the ocean dynamics:
d(OHC)/dt = F(t) – T/lambda
If the values of OHC, F(t) and T(t) are known, then we can estimate lambda directly, with no assumptions about the ocean dynamics. Suppose we apply this equation to the historical data in a GCM: what do we find? We find that the values of lambda work out to be typically around 0.45 deg C per unit of forcing, giving an ECS of around 1.5 deg C under the assumption of a linear feedback – typically about half the declared ECS for the GCM. The difference is explained by the fact that the GCMs do not adhere to the assumption of a constant linear feedback, for reasons which are still not completely clear. (Try an interesting paper by Kyle Armour et al. 2012 on exactly this subject.)
If we do the same thing with observational data, i.e. using measurements of OHC and temperature together with estimates of forcing, then the ECS values come out a bit higher – modal values of ECS are around 1.7 deg C. The recent Otto et al study did something very similar to come up with a ML ECS value of 1.9 deg C.
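A hedged sketch of this estimation in Python. The series here are synthetic stand-ins (built so the answer comes out near the values quoted above); in practice you would substitute real annual OHC (J/m^2), forcing (W/m^2) and temperature anomaly (deg C) series.

```python
import numpy as np

SECONDS_PER_YEAR = 3.156e7

# synthetic stand-ins -- replace with real annual series
years = np.arange(1955, 2012)
F = np.linspace(0.5, 2.3, len(years))                # forcing, W/m^2
T = 0.45 * (F - 0.25)                                # temperature, deg C
ohc = np.cumsum(np.full(len(years), 0.25)) * SECONDS_PER_YEAR  # OHC, J/m^2

# ocean heat uptake d(OHC)/dt, converted back to W/m^2
dOHC_dt = np.gradient(ohc) / SECONDS_PER_YEAR

# Rearranging d(OHC)/dt = F - T/lambda gives T = lambda * (F - d(OHC)/dt),
# so lambda is the least-squares slope of T on (F - d(OHC)/dt):
x = F - dOHC_dt
lam = np.sum(x * T) / np.sum(x * x)
print("lambda = %.2f deg C/(W/m^2), ECS ~ %.1f deg C"
      % (lam, lam * 3.7))  # 3.7 W/m^2 per doubling of CO2
```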
Now I can instead substitute a more sophisticated ocean model – a two-slab or an upwelling-diffusion model – for the LHS of the equation, and it really doesn’t change the estimates of lambda very much, provided that I simultaneously fit both temperature AND OHC. It does, however, tend to give rise to larger estimates of the system response time. Even so, these estimates are nowhere near the response times observed in the GCMs when in predictive mode. Are the GCMs more correct than the simple analytic models, so that we should really scale up estimates of sensitivity obtained from the linear feedback assumption? That is a completely separate question, and one which I am still working on.
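For illustration, a minimal sketch of the sort of two-slab model mentioned: a mixed layer exchanging heat with a deep layer. The parameterization and names are mine, not Paul_K’s, and the choice of an annual Euler step is a simplifying assumption.

```python
import numpy as np

def two_box_response(forcing, lam, gamma, c_mix, c_deep, dt=1.0):
    """Two-slab ocean: mixed layer T1 exchanges heat with deep layer T2.

    lam    : climate sensitivity, deg C per W/m^2
    gamma  : mixed-to-deep exchange coefficient, W/m^2 per deg C
    c_mix, c_deep : heat capacities, W*yr/m^2 per deg C
    """
    T1 = np.zeros(len(forcing))
    T2 = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        deep_flux = gamma * (T1[i - 1] - T2[i - 1])  # heat into the deep slab
        T1[i] = T1[i - 1] + dt * (forcing[i] - T1[i - 1] / lam - deep_flux) / c_mix
        T2[i] = T2[i - 1] + dt * deep_flux / c_deep
    return T1, T2
```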
Paul_K,
Thanks a lot for taking the time to provide such a clear response (and the compliment–although if I really were brilliant my head wouldn’t hurt so much when I try to figure this stuff out). Much of my career was spent asking dumb questions of experts, and my position was such that they had to humor me. Now that I’m retired, getting a good answer is a luxury–which I really appreciate on the odd occasions when it happens.
Paul_K says:
I assume the paper Paul is speaking of is the one available here: http://earthweb.ess.washington.edu/roe/GerardWeb/Publications_files/Armouretal_EffClimSens.pdf
This blog posting by Isaac Held also seems relevant: http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/19/time-dependent-climate-sensitivity/