**Guest Post by Willis Eschenbach**

[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]

In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in a different way but they get the same answers, so they are functionally equivalent.

The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.

However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about the Otto paper in a future post.

*Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION: The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).*

In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models studied there, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.

Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.

So I digitized the data, because trying to get the results from someone’s scientific paper is a long and troublesome process, and may not be successful for valid reasons. The digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screen shot of part of the process:

*Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use “GraphClick” software on my Mac, assuredly there is a PC equivalent.*

Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:

*Figure 3. The blue line shows the average hindcast temperature from 19 models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.*

As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which as I said above I’ll discuss in a subsequent post.
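For readers who want to experiment, here is a minimal sketch of this kind of lagged one-box response. It assumes a discrete form of the two-parameter equation, T(n+1) = T(n)·exp(−1/τ) + (λ/τ)·F(n+1) — one plausible reading of the equation in the graph, not necessarily the exact implementation — and the forcing series below is synthetic, purely for illustration, not the digitized Forster data.

```python
import math

def lagged_response(forcings, lam, tau):
    """One-box lagged linear response: each year the temperature
    anomaly decays by exp(-1/tau) and gains (lam/tau) * forcing.
    lam is in K per (W/m2), tau in years, forcings in W/m2."""
    temps = []
    t = 0.0
    decay = math.exp(-1.0 / tau)
    for f in forcings:
        t = t * decay + (lam / tau) * f
        temps.append(t)
    return temps

# Synthetic example: a sudden, sustained 1 W/m2 step in forcing.
step = [0.0] * 5 + [1.0] * 60
out = lagged_response(step, lam=0.9, tau=3.0)

# The response approaches the analytic long-run value
# (lam/tau) / (1 - exp(-1/tau)) rather than jumping instantly.
equilibrium = (0.9 / 3.0) / (1.0 - math.exp(-1.0 / 3.0))
print(round(out[-1], 3), round(equilibrium, 3))  # both round to 1.058
```

Fitting the two parameters to a digitized forcing and temperature series is then just a matter of minimizing the squared error between this output and the model hindcast, which is exactly the kind of thing a spreadsheet solver can do.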

So what can we conclude from all of this?

1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).

2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.

3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.

4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have information for the last 50 years or so, so anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years. It is fairly constant on a decadal basis, slowly rising from zero in 1950 to about half a watt/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period. Finally, the analysis does very, very well without it …

5. These results are the sensitivity of the models *with respect to their own outputs*, not the sensitivity of the real earth. It is their internal sensitivity.

Does this mean the models are useless? No. But it does indicate that they are pretty worthless for calculating the global average temperature. Since all the millions of calculations that they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.

Finally, let me reiterate that I think that this current climate paradigm, that the global temperature is a linear function of the forcings and they are related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.

Regards to everyone, more to come,

w.

DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.

Hilarious Willis – reminds me of one of my favourite scenes out of the Chevy Chase / Dan Aykroyd movie “Spies Like Us”.

Here’s a better version of the clip – “Break it down again with the machine!” :-)

Something I’ve always felt was nothing but a bunch of bloviating, hyperventilating, egomaniacal climate scientists hooked on petaflops, and Willis has proven my hunch to be correct.

They should have stuck with the KISS* principle long ago. Sure, they wouldn’t have had the luxury and notoriety of playing with some of the most powerful computers on the planet, but they’d be honest in their frugality, which is always a better approach.

* KISS–Keep it simple, stupid (climate scientists)!

What you have just proved is that Nic Lewis’s and Otto et al.’s analyses are worthless. Climate sensitivity is a parameter of the models, so if it cannot be reliably established from their results, then the method of establishing climate sensitivity from results (or actual data) is wrong.

But I rather believe that your analysis is wrong. For instance, the only model you are analysing is your linear model and its regression to the multi-model mean. And in the end you declare that the linear model (i.e. your model) is wrong. Well, duh.

“Any intelligent fool can make things bigger and more complex. It takes a touch of genius – and a lot of courage – to move in the opposite direction.”– A. Einstein

Willis, we disagree on many things (yup, future fossil fuel max annual energy extraction) but my compliments to you on this. Man, you appear to have nailed this thesis (meaning my formerly well trained (supposedly) mind can find no flaw, so MUST AGREE!). Good job!

(You might even get the irony of that, not directed at you at all, given other comments elsewhere this date.)

I tried looking at the forcing and exponential pieces of your equation separately. They are graphed in your modified Excel file. The five sharp drops in the temperature are associated with the forcing term, which always recovers either 1 or 2 years later to a strong positive value. What this means I have no idea, but you might be interested in looking at it.

https://dl.dropboxusercontent.com/u/75831381/CMIP5%20black%20box%20reconstruction–law.xlsx

The net result of all this is that ‘climate scientists’ are more likely to hide their models’ research findings behind paywalls and increase their obfuscation in regards to providing access to raw data and the methodology in how it is processed. In their eyes, complexity for its own sake is a virtue.

One analogy to climate models is geological models. There is always a very strong temptation to generate increasingly complex models when you do not really understand what you are looking at. Then, all of a sudden, one day some bright spark summarises it into something relatively simple which fits all the known observations and bends no laws of science. At first, this is ignored or attacked, but eventually it is accepted – but this can take several years.

If today’s ‘climate scientists’ were to derive Einstein’s mass-energy relation E=mc2, they would end up with a formula spread over several pages.

In any scientific field, unless there are strong commercial or strategic issues, those involved should strive for KISS and be totally open about how they arrive at their conclusions. There is not much of either of these in climate science.

In any event, attacks on Willis’ findings are likely to be mostly of the “It’s far too simplistic” variety.

I think we can conclude something else profound about the models:

If CO2-sensitivity and lag-time are the only effective parameters, then those models do not effectively consider any other processes which might affect the global mean temperature. Either other drivers are assumed to be small or all of these models predict feedbacks which drive them to zero. That is an unbelievable coincidence, if it is one. It seems more likely that those who constructed the models simply did not effectively include any other long-term processes (like these: https://wattsupwiththat.com/reference-pages/research-pages/potential-climatic-variables/). Their central result, “confirmation” that warming will continue (to one degree or another) as overall CO2-concentration increases seems to be pure circular logic, rendering them worthless.

As absolutely nothing in the forcings covers AMO and PDO, these climate models will fail, or actually did fail.

This sort of thing happens in big companies as well: management spends large sums on studies to find out how to make things more efficient (sack people), and when they’re done, all that happens is what those who make the products said would happen before the start of it all.

It does beg the question: what are these supercomputers doing if climate calcs are so easy? It must be one heck of a game of Tetris.

James Bull

Is this an example of Occam’s razor?

Willis, have you tried fiddling with tau and lambda to match the actual historical record? You might produce a very superior GCM that you could flog to some credulous government for a billion dollars.

Stephen: if I understand this post correctly, the climate models estimate a much higher climate sensitivity parameter than the same parameter derived from the model’s predictions. This means, I think, that the other parameters, feedbacks etc. in the models force the model to derive a higher value for climate sensitivity in order to backfit the known data. So the other parameters and feedbacks do have an effect.

Willis, congratulations. Excellent work and a very significant finding.

I’d make a few points.

Complexity in science is a bad thing. I’d go as far as to say, all scientific progress occurs when simple explanations are proposed. Occam’s Razor, etc.

Predictive models of the climate do not require physically plausible mechanisms as part of the predictive process, ie the model, in order to produce valid predictions. Trying to simulate the physics of the climate at our current level of understanding is a fool’s errand.

People have a partly irrational faith in what computers tell them. Try walking into your local bank, and telling them the computer is wrong. This faith is justified with commercial applications which are very extensively tested against hard true/false reality. But there is no remotely comparable testing of climate models, for a number of reasons, most of which you are doubtless aware of. However, this faith in computer outputs was the main way in which the climate models were sold to the UN, politicians, etc.

Further, increasing complexity was the way to silence any dissenting sceptical voices, by making the models increasingly hard to understand (complex) and hence to criticize. You have cut that Gordian Knot.

Again, congratulations.

Sweet, so sweet! Looking back from the perspective of a poor old downtrodden Generation VW scientist at these Gen X model exercises in circular logic, I can only say… bravo. You have nailed them. I take it you are aware that the ~3 year lag factor has authoritative precedents too, in terms of the mean e-folding time for recirculation of CO2 between atmosphere and oceans, and hence is a primary reflection of the responses of the real earth (AMO, ENSO etc.)

Sorry I meant recirculation of heat (not CO2). See Schwartz, 2007 etc.

Willis

Do you think we give too much credence to the models by even discussing their results? They are trained on historical data that has huge associated errors.

“3) Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.”

But if I understand you correctly, the forcings are derived from the model runs. So one of the inputs to your function has to be derived first. So although the output of the collated models can be expressed as a simple algorithm (as you have done here and the models commonly do), they still need to do the runs in order to define the inputs to the final model. Do you not think that you need a caveat here – give them their dues.

That the cleverness is in the derivation, not the output. Perhaps if we massage the egos of the people who design/write the models they might be less adversarial?

Finally, and perhaps I am wrong here also, but the models also have evolving feedbacks, so one would expect dependency on previous results and hence the lagged dependence in your function.

Hmm. So if the output is a lagged response of today +2 to 3 years then you should be able to predict the future up to that point in time from already measured values.

So what does the future hold (for the models anyway)?

Steve Short

I think there seems to be a lot of hatred toward the people who generate these models.

We only have one climate system on Earth, so we can hardly do experiments like a “Generation VW scientist” would like ;). Computers do give us an opportunity to at least have a play and see what might happen in a simplified world, and as we increase the complexity and computational power, the models may converge on the real world. So I think we are attacking the wrong people. The problem lies with those (such as ecologists, say) who use the model outputs to do impact assessments/predictions without taking the time to understand the model limitations (if not the underlying theory), and then support alarmist nonsense with unfounded confidence in the press.

BTW, most engineering projects (big and small) depend on computer models; as do many scientific fields where experiments are very expensive or impractical.

Good work Willis. This is a good way to analyse the behaviour of the models. Amazingly good demonstration that despite all the complexity they are telling us nothing more than the trivially obvious about the key question.

However, unless I have misunderstood, what you are extracting is the overall sensitivity to all forcings, not the CS to CO2, i.e. it is the sum of volcanic and CO2 over a period that corresponds to a doubling of CO2.

This is what I’ve been saying for a while: exaggerated volcanics allow exaggerated GHG. Since there was lots of volcanism during the period when they wanted lots of GHG forcing, it works … up to 2000.

Last big volcano was Mt P and then once the dust settled the whole thing goes wrong. Post y2k is the proof that they have both volcanoes and GHG too strong.

The orange line shows the huge and permanent offset that Mt Agung is supposed to have made, yet this is not shown in any real data, as you have pointed out on many occasions.

Since clouds , precipitation and ocean currents are not understood and modelled but are just guesswork “parameters” this is all the models can do.

Does your digitisation allow us to set volcanics to net zero and see whether GHG alone still gives about the right curve?

[Sorry, I have not been able to look at your xlsx file; it crashes Libre Office. It’s flaky.]

Willis

It would be useful to list the major volcanic eruptions since 1850 and the claimed negative feedback with respect to each.

Does anyone really think that Pinatubo (1991) had the same impact as Krakatoa (1883)? Without digitizing, to my unaided eye, the negative forcings appear similar.

Does anyone really have any confidence in the temperatures? Surely few people hold the view that today is about 0.6degC warmer than it was in the 1930s, or for that matter some 0.8degC warmer than the 1980s. I would have thought that within the margins of error, it is difficult to conclude that temperatures today truly are warmer than those either in the 1930s or 1880s, such that the present temperature record does not show the full extent of variability in past temperatures.

As far as I am concerned, models are simply GIGO, and one cannot even begin to properly back-tune them until one has a proper record of past temperatures.

Just to point out the obvious, if they are tuned to incorrect past temperatures, going forward they will obviously be wrong. It is now more difficult to fudge current temperature anomaly changes because of the satellite data sets, and this is one reason why we are seeing divergence on recent timescales.

The 3rd paragraph in my above post contains a typo and should have read:

“Does anyone really have any confidence in the temperatures? Surely few people hold the view that today is about 0.6degC warmer than it was in the 1930s, or for that matter some 0.8degC warmer than the 1880s?”

If the organisations that funded those models saw that they could be replicated by a couple of lines of equations, do you think they might ask for their money back?

cd “So I think we are attacking the wrong people. The problem lies with those (such as ecologists say) who use the model outputs to do impact assessments/predictions without taking the time to understand the model limitations ”

The problem is that a lot of research groups are filled with your “ecologists” (by which I presume you mean environmentalists).

If the modellers were honest about their state of development, they would be saying: don’t use them for prediction, we are about 50 years away from being able to do that. They are not. They are on the gravy train, playing down the uncertainties and promoting their use now. Many seem to be filled with an environmentalist zeal that is influencing not only their presentation of suitability but the actual “parameters” that are chosen to get an expected or politically desired result out of the model.

Hansen is a self-declared activist, and his volcanic forcings are bigger than anyone else’s.

When scientists attempt to use (abuse) their authority as experts to be non-scientific and push an agenda, this does create a certain animosity.

richard: “it is difficult to conclude that temperatures today truly are warmer than those either in the 1930s or 1880s such that the present temperature record does not show the full extent of variability in past temperatures.”

I did point out over a year ago that Hadley adjustments were removing 2/3 of the variability from the earlier half of the record and that this was done based on rather speculative reasoning not fact.

http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-188237

Those adjustments helped make the data better fit the model.

If that was not the conscious intent of those adjustments, it was certainly the effect.

There is also a post-war adjustment of -0.5C which makes the late 20th c. rise look more like CO2.

However, the Ice Classic record would suggest that the wartime bump was a real event, not something that requires one-sided correction.

This is why I do most of my work using ICOADS SST not Hadley’s adjusted datasets.

Note the 1939-40 bump that usually gets “corrected”, leaving just the post-war drop. This was Folland’s folly, introduced in the late 80’s IIRC. It has been smoothed out a bit in HadSST3 but is still there and still -0.5 deg C.

http://climategrog.wordpress.com/?attachment_id=258

Bloke down the pub says: “If the organisations that funded those models saw that they could be replicated by a couple of lines of equations, do you think they might ask for their money back?”

Why would they do that? They are using the results to set up a $100bn slush fund with no legal auditing or accountability.

I’m sure they are very happy with the return on investment.

Did anyone else get a broken link at cell G3 of Mr. Eschenbach’s spreadsheet?

Willis,

you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.). Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.

To many commenters here: climate models try to simulate physical processes, so they are called process models. Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors. The fact that this is possible does not mean that the process models are worthless. Willis has just simplified the models tremendously, thereby losing a lot of information that went into these models.

The problem with process models is that they cannot simulate processes which you do not program into them, either because you don’t know them or you find them insignificant or inconvenient for the outcome. This means that you really have to look at such models in detail to be able to fundamentally criticize them. Due to the complexity of the models, this is very difficult even for scientifically literate persons.

Says an economic modeler …

Well, that appears to show that a simple top-down analysis beats sickeningly complex bottom-up every time. As a wargame designer, this comes as no surprise. If you want to “simulate” the Russian Front, you do it by looking at Army Groups and fronts, not man-to-man, for heaven’s sake.

agricultural economist says:

you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.).

===

Yes, that confirms my comment above.

“Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.”

Ah, expectations. That is indeed the primary forcing in climate modelling.

agricultural economist says: Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors.

What Willis shows, I think, is that after all the supposedly intricate modelling of basic physics and guessed forcings and parameters, all that comes out is a simple solution to the Laplace equation.

The equation he has used is basically a solution to the equation:

http://en.wikipedia.org/wiki/Heat_equation

Now once models work they will tell us a lot of detail on a regional scale, but IMO we are decades away from that level of understanding, where we know enough to make a first-principles approach work.

A climate model that cannot produce its own tropical storms is, frankly, worthless as it stands.

The problem now is that we know lots of the processes in great detail, but the ones that really matter are still guesswork. All the guesses are adjusted until the output is about what is “expected”.

At which point we may as well use Willis’ equation.

Since Willis’ approach seems to capture the behaviour of the GCMs, I would suggest he digitises all the individual ‘forcings’, reduces volcanism to something that he feels to be more in line with observation, puts CO2 at its value based on radiation physics (1.3 from memory) and sees whether his model produces a better post-2000 result.

If I understand what is being shown in the graphs , without reading the paper, I think the GHG plot will be real forcing calculations plus hypothetical water vapour feedback.

agricultural economist: “Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors. The fact that this is possible does not mean that the process models are worthless. Willis has just simplified the models tremendously, thereby losing a lot of information that went into these models.”

But the fact that dispensing with that information had little effect on the results shows how rapidly the returns to adding that information diminished–i.e., how little the modelers really accomplished with all their teraflops of computer power.

The issue is how the “Historical_nonGHGs” are used to offset the HistoricalGHGs in the hindcast.

Essentially, the Aerosols and other negative forcings like Land-Use, increasingly offset the warming caused by GHGs in the hindcast.

The GHG temperature impact will follow a formula something like X.X * ln(CO2ppm) – 2y.yC. The 2003 version of GISS ModelE was 4.053 * ln(CO2ppm) – 23.0 [which is just a small 7 year lag from an instantaneous temperature response which is 4.33 * ln(CO2) – 24.8] but each individual model will have a different sensitivity to GHGs and then a different offset from aerosols etc. The higher the GHG sensitivity, the higher the aerosols offset there is in the hindcast.

In the future, of course, no climate model is building in an increase in the negative from aerosols. IPCC AR5 assumptions have the (direct and indirect) aerosols offset becoming smaller in the future, changing from -1.1 W/m2 today to -0.6 W/m2 by 2100. The GHG temperature impact becomes dominant.
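Taking the commenter’s quoted GISS ModelE fit at face value, the per-doubling sensitivity falls straight out of the logarithm, since doubling CO2 always adds the same increment: ln(2x) − ln(x) = ln(2). A quick check (the function name is just for illustration, using the instantaneous-response coefficients quoted above):

```python
import math

def modele_2003_fit(co2_ppm):
    """The instantaneous-response fit quoted above for the 2003
    GISS ModelE: T = 4.33 * ln(CO2 ppm) - 24.8 (degrees C)."""
    return 4.33 * math.log(co2_ppm) - 24.8

# Doubling CO2 adds a fixed amount regardless of the starting level,
# because ln(2x) - ln(x) = ln(2).
warming_per_doubling = modele_2003_fit(560) - modele_2003_fit(280)
print(round(warming_per_doubling, 2))  # -> 3.0
```

So that particular fit implies about 3.0°C per doubling before the aerosol offset is applied, which illustrates the commenter’s point that a high GHG sensitivity needs a correspondingly large aerosol offset in the hindcast.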

Willis, is it not logical to assume that the forcings are actually a back calculation? Where does one get the data to calculate the forcings back in 1850?

Bill Illis: “IPCC AR5 assumptions have the (direct and indirect) aerosols offset becoming smaller in the future”

Thanks, what is the basis of that assumption?

Presumably the volcanic forcing is about zero already by about 2000. Are they assuming the Chinese are going to stop using coal ?

What is the big drop in aerosols, especially direct, from 2000 onwards caused by ?

Are you aware of what factor in hadGEM3 caused it to give less warming?

thx

Willis,

I was wondering about your basis for interpreting λ as climate sensitivity. But before I could figure that, the units don’t seem right. λ does seem to have the units ΔT/ΔF, so λ ΔF has units T, but then it is divided by τ which has units years? It’s also unexpected to see an exponential with a dimensional argument.

But anyway, what sort of CS do you mean? This seems to be a transient version. In fact, with your model, if you make a one-time increase of 1 in ΔF, T increases in that step by λ/τ, but then it goes on increasing for a while, in fact to λ/τ/(1-exp(-1/τ)). Which actually is pretty close to λ, but the unit issues are a puzzle.
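Nick’s closed-form limit is easy to check numerically. The ratio of the long-run step response (λ/τ)/(1−exp(−1/τ)) to λ depends only on τ; for the τ ≈ 3 values found in the post it is within roughly 20% of 1, and it approaches 1 as τ grows, which is the sense in which the limit is “pretty close to λ”. A quick sketch:

```python
import math

def step_limit_ratio(tau):
    """Ratio of the long-run step response (lam/tau)/(1-exp(-1/tau))
    to lam itself; lam cancels, so only tau matters."""
    return (1.0 / tau) / (1.0 - math.exp(-1.0 / tau))

# Ratios for the tau values found in the post, plus larger taus
# to show convergence toward 1 (i.e. toward lambda).
for tau in (2.6, 3.1, 10.0, 100.0):
    print(tau, round(step_limit_ratio(tau), 4))
```

For τ = 2.6 and 3.1 the ratio is about 1.20 and 1.17 respectively, so equating the fitted λ with the equilibrium step response overstates it slightly at these short lag times.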

Greg Goodman:

“Ah, expectations. That is indeed the primary forcing in climate modelling.”

It’s not that I myself expect a higher sensitivity for CO2. But the climate modelers do.

But I would like to stress that I find Willis’ effort highly useful … given he manages to isolate different forcings in a multivariate regression.

I agree that this seems to be some sort of net forcing, not just due to CO2.

Nick Stokes, we need to give this “goes on increasing for a while” a name.

I suggest the climate multiplier effect. Then we can get Paulie Krugnuts to come on board and join the conversation.

Greg Goodman says:

May 22, 2013 at 5:17 am

Are you aware of what factor in hadGEM3 caused it to give less warming?

——————————

Obviously fudgefactors.

HadGEM2 submitted to the upcoming IPCC AR5 report has a very high GHG sensitivity.

Aerosols decline because they assume we are going to increasingly regulate sulfate emissions from all sources. It’s mostly cleaned up already.

Joe Born said on May 22, 2013 at 3:14 am:

=CORREL(#REF!,G8:G163))

And an extra right parenthesis.

Willis, excuse me, gotta question. Correlation command at E4 is “=CORREL(D10:D163,E10:E163)”

Why not start at row 8, start of data? Result is still 0.991.

Willis,

At risk of oversimplification, you have shown that the models provide an integrated (globally averaged temperature) response that is a linear function of the perturbation (forcing due to CO2). There are many examples of this in physics and allied fields. For example, one can take a solid beam that is a very inhomogeneous collection of atoms, molecules, and even randomly oriented crystalline domains (as with a metal beam) and reduce its mechanical response to several simple macroscopic material parameters (Young’s modulus, Poisson ratio, etc). The reason these work is that when one considers weak perturbations about a mean value, the first terms in a Taylor expansion are usually linear and decoupled. The coupling — which is the more complex behavior — doesn’t usually start until the nonlinear terms are considered.

What would be interesting (and quite a bit harder) is to see if a simple physical descriptor also governs the mean-square fluctuation. (Systems like yours usually follow something called a fluctuation-dissipation theorem.) However, you may not have access to the data necessary to do that analysis.

Willis,

Paraphrasing Von Neumann elsewhere: “Climate change is a bit like an elephant’s jungle trail (randomness with a purpose), whereby the elephant is a Milankovic cycle, far north Atlantic the elephant’s trunk (slowly up/down and sideways), ENSO its tail (swishing around back and forth), and CO2 analogous to few flees that come and go.with the least of a consequence”

Modellers are only sketching the Hindustan plain thrusting into the Himalayas, with no elephants roaming around.

Patrick Frank’s article “A Climate of Belief”, https://wattsupwiththat.com/2010/06/23/a-climate-of-belief/, makes a very similar point. I can’t cut and paste, but he shows that a simple passive warming model, equally as simple as Willis’, is very very close to model mean projections.

Willis, you’re saying something I’ve said for the last 10 years, the models are programmed for a specific CS, in fact I remember reading an early Hansen paper where he says exactly the same thing.

As far as the code goes, I think they are all derivatives of the same code, and are almost functionally equivalent. The cell model is pretty simple, the complexity is around processing and managing all of the cell data as it runs.

Lastly, the code is easy to get (well, NASA’s is). I just downloaded the source code for ModelII and ModelE1; it is in Fortran, though. You’d think someone would bother to rewrite it in a modern language. And if you think about it, since I think they’re all Fortran, it shows their common roots; no one has bothered to rewrite it.

Finally, this NASA link points out that the models aren’t so accurate when you don’t use real SSTs.

Greg

By ecologists I mean biologists whose specialism is in ecology. They are not the only ones; there are botanists, zoologists, etc.

I don’t agree that just because of one modeller (did Hansen actually design/write the models?) they are all bad. I would suspect few of the actual people who do this would suggest that the models should be used for predictions. You’ll probably find it is the team leaders, who did very little in the design or implementation phase, who are making all the shrill predictions.

Kasuha says:

May 21, 2013 at 10:26 pm

What you have just proved is that Nic Lewis’ and Otto et al analyses are worthless.

========

That doesn’t follow. Sounds more like wishful thinking on your part without any evidence in support.

agricultural economist says:

May 22, 2013 at 3:27 am

This means that you really have to look at such models in detail to be able to fundamentally criticize them.

====

a mathematician says otherwise. looking at the details is a fool’s errand when it comes to validating complex models. you will miss the forest for the trees.

there are very simple tests based on the use of “hidden data” – data that is kept hidden from both the model builders and the model itself – that can be used to test the skill of any complex model. if the model cannot predict the hidden data better than chance it has no skill, regardless of what the model details may be telling you.

where the model builders go off the rails is in assuming that the models are predicting the future. they are almost certainly not, because predicting the future is quite a complex problem. what the models are almost certainly predicting is what the model builders believe the future will be. this is a much simpler problem, because the model builders will tell you what they believe the future will be if you ask them.

and this is what the models are doing when they print out a projection. they are asking the model builders “is this correct? is this what you believe?” if it is, if the answer is “yes”, then the model builders will leave the model as is. if the model builders say “no”, then the model builders will change the model. in this fashion the models are always asking the model builders what they believe to be correct.
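The “hidden data” test described above can be sketched in a few lines: hold back a segment of data that the model never saw, then ask whether the model beats a no-skill baseline on it. The data and “model” below are invented purely for illustration; they are not any real climate model or series.

```python
# Minimal sketch of a hidden-data skill test, per the comment above.
# Hold out the last 20 points; the no-skill baseline is the training mean.
# Series and "model" are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)
observed = 0.01 * t + rng.normal(0, 0.05, t.size)   # hypothetical trending series

train, hidden = observed[:80], observed[80:]        # hidden data the model never sees

baseline_pred = np.full(hidden.size, train.mean())  # no-skill "chance" baseline
model_pred = 0.01 * np.arange(80, 100)              # a model that captured the trend

def rmse(pred, truth):
    return np.sqrt(np.mean((pred - truth) ** 2))

# Positive skill means the model predicts the hidden data better than chance.
skill = 1 - rmse(model_pred, hidden) / rmse(baseline_pred, hidden)
print(round(skill, 2))
```

The point of the test is that it needs no knowledge of the model internals at all, which is exactly the argument being made.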

Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-t / tau) for this to make sense.

cd says:

May 22, 2013 at 6:19 am

I don’t know, I think they’re all derived from the same GCM code.

I read something, which I haven’t been able to find lately, that said early models would not generate rising temps as the climate was actually doing; rising CO2 wasn’t doing it. Hansen (who was a planetary scientist studying Venus for NASA) added a CS factor to the models, which of course made the temps rise to match what weather station data was reporting.

ferd berple says:

May 22, 2013 at 6:55 am

This is modeler’s bias: the model is written in such a way that its results match what the modeler thinks it’s supposed to do. Complex models have to be compared to the system they’re modeling. The issue is that climate is chaotic, and there’s neither a lab model nor actual measurements to compare to (because it isn’t deterministic).

Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-dt / tau) for this to make sense.

Similarly, I think the other term should be lambda.dF.dt / tau

In the spreadsheet it is annual data and hence dt = 1 so that’s the way the cells are coded. I think Willis made some transcription errors converting his spreadsheet cells to the analytic formula.

Well spotted, though. That sort of dimension checking is an essential step and can catch careless errors.

Greg Goodman says:

May 22, 2013 at 3:35 am

Ah, expectations. That is indeed the primary forcing in climate modelling.

=========

exactly. the most important feedback in climate science is not water vapor, it is the model builder-to-model feedback loop. it is the existence of this feedback loop that ensures that what the models are predicting is the expectations of the model builder.

the model builder creates the model. the results of the model are then fed back to the model builder, and based on these results the model builder makes changes to the model. unless very careful experimental design is used to break the feedback loop, you are highly unlikely to ever create a complex model that does more than model the beliefs and expectations of the model builders.

we see this problem all the time in the design of computer software. it is one of the reasons for breaking development teams into coders and testers. people are very poor at proofreading their own material. their eyes see what they expect to see, not what is written. some words, such as “to”, are basically invisible to humans.

An artful demonstration.

Of the paucity of art.

In the state of the art.

By the ‘artless’ amateur.

agricultural economist says: May 22, 2013 at 3:27 am

Willis,

you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.). Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.

Exactly, the value Willis got is the sensitivity to all forcings. Now, consider what this means. They most likely use a CO2 forcing close to 3C/doubling. If the bottom line is 1.2-1.6 then that means the other forcings are in the neighborhood of -1.6C during the time period a CO2 doubling occurs. Now, if we were to drop CO2 forcing to something like a no feedback 1.2C (the low end of the Otto paper) then all the warming goes away.
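The arithmetic in that reply is easy to check: if the fitted total sensitivity is 1.2 to 1.6 °C per doubling while CO2 alone is assumed to contribute about 3 °C, the implied net effect of all other forcings spans roughly -1.8 to -1.4 °C, centred near the -1.6 °C quoted. A quick check (the 3 °C figure is the comment's assumption, not a measured value):

```python
# Implied net non-CO2 forcing contribution over a CO2 doubling, given
# the comment's assumed 3 C/doubling for CO2 alone and a fitted total
# sensitivity of 1.2-1.6 C/doubling. All figures are the comment's.
co2_alone = 3.0                       # C per doubling, assumed for CO2 alone
for total in (1.2, 1.6):
    other = total - co2_alone         # net contribution of all other forcings
    print(total, round(other, 1))
```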

“… Now, if we were to drop CO2 forcing to something like a no feedback 1.2C (the low end of the Otto paper) then all the warming goes away.”

I think you would have strong cooling if you did just that. You need to look at the strong volcanic forcing too.

This may not be as simple as just scaling down. What is seen in the temp plot without GHG is that volcanoes produce a strong _and permanent_ negative offset. This is what is not matched by real climate data.

What this means is that climate is compensating for the effects of volcanoes and this is not in the models at all. This may well be due to the lack of Willis’ “governor’ : tropical storms.

As he has discussed elsewhere , they have the means and the power to adjust on an almost hourly level and modulate solar input in the tropics.

Now unless someone with an expectation that this is the case incorporates this into the “parametrisation” of clouds, it will never happen in a model which cannot produce its own tropical storms.

And since modellers know that this will kill the need for the hypothesised water vapour feedback to GHG and kill off CAGW and kill the goose that lays those golden eggs, I don’t see that happening any time soon.

They will likely come up with another combination of fudge factors that will need another 10 years of data to invalidate and so on until a well earned retirement day.

Unfortunately the motivations to destroy your own grant source are not strong.

Much has been said of the lack of the predicted mid-tropo hot spot. In fact there is some trace of it, but it’s far too weak.

If you get rid of volcanic cooling because the climate system largely compensates, and reduce the CO2 effect to what the physics indicates, it may all start to work.

Thunderstorms would also act to evacuate the reduced CO2 warming and produce a reduced hot spot (which is what is actually seen). Dubious ‘missing’ heat could be forgotten, and post-2000, with no volcanoes and a reduced CO2, might actually start to match reality.

with the OHC component you are calculating TCR.

write this down

Joe Born says:

May 22, 2013 at 3:14 am

Sorry, broken calculation from an earlier incarnation, delete it.

w.

agricultural economist says:

May 22, 2013 at 3:27 am

Climate sensitivity, while often expressed as the response to a doubling of CO2, is actually the (very theoretical) response to any kind of forcing. It can just as well be expressed in degrees per a 1 watt/m2 increase in forcing, it’s just different units.

w.

Richard Smith says:

May 22, 2013 at 4:52 am

Indeed, the forcings “data” is a mishmash of various guesses, facts, and assumptions that have been carefully picked to provide the desired outcome …

w.

kadaka (KD Knoebel) says:

May 22, 2013 at 5:48 am

Because building the answer using the formula uses two years of prior data, the new estimate only starts in the third cell …

w.

Rather than a true black box, I believe Willis has discovered exactly how the modellers operate. Use the total forcings from all sources combined to give a fit, and then going with your sacrosanct 3C per doubling of CO2, adjust all other factors to trim the overall figure to 1.2. There is no real complexity in the models – they are Rube Goldberg devices.

http://en.wikipedia.org/wiki/Rube_Goldberg_machine

Willis, nice spotting with the digitization and the fitting of the function. That there was a relatively simple relationship between model forcing and model global temperature is something that has been chatted about from time to time, but the fit here is really impressive. Wigley and Raper’s MAGICC program, used in past IPCC studies, also emulated key model outputs from forcings: I wonder if it does something similar.

To be fair, Willis, you did confuse the two:

[Thanks, Greg, see the update to the head post and my comment below. -w.]

If the modeled time lag is roughly 3 years, flat temperatures for a decade and a half with rising CO2 must really have them sweating. They know their models are broken.

Steven Mosher says:

May 22, 2013 at 8:09 am

Thanks, Steven. I think you meant without the OHC component, but other than that you are correct. As usual, when I can understand your comments they are on point. I’ve pulled the graph of your equations into your comment so folks won’t miss it.

I have mentioned above my reasons for leaving out the ocean data.

1. We have almost no ocean temperature data before 1950.

2. What we have is very spotty until recently.

3. The splicing of the Argo float data onto previous data has led to a large unphysical jump in the claimed ocean heat content, with a couple of transition years claiming a heat increase of an amazing 2.3 W/m2.

However, we can make a back-of-the-envelope estimate, and I should have done so. The recent Otto paper gives decadal values for total earth system heat uptake, along with the corresponding net radiative forcing values. Solving your equations above for the four decades given in the Otto data, this averages out to about

ECS = 1.3 TCR

So … you could multiply the values I gave above by 1.3 to give a reasonable guess at the equilibrium climate sensitivity. This would make the equilibrium climate sensitivity of the average climate model 2.1°C per doubling of CO2, and the GISS and CCSM3 individual models I analyzed earlier have an ECS of 1.6 °C per doubling … which still leaves the question.

Given that the climate models themselves put the climate sensitivity at 2.1°C per doubling, with some models above that and some models below that, why does the IPCC say the likely value is 3°C with a confidence interval of 2°C to 4.5°C?

Thanks for pointing that out, I’ve added an update to the head post.

w.
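The 1.3× scaling in Willis’ update is simple to verify against the figures he quotes. A quick check, using the TCR values implied by the post (model average about 1.6 °C, GISS and CCSM3 about 1.2 °C per doubling):

```python
# Willis' update: ECS is roughly 1.3 x TCR. Scaling the TCR values
# implied by the post reproduces his quoted ECS figures
# (model average ~2.1 C, GISS/CCSM3 ~1.6 C per doubling of CO2).
ECS_OVER_TCR = 1.3

tcr = {"model average": 1.6, "GISS & CCSM3": 1.2}   # C per doubling (TCR)
ecs = {name: round(ECS_OVER_TCR * v, 1) for name, v in tcr.items()}
print(ecs)
```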

I wonder whether it is that surprising that things work out this way.

If the longer term temperature difference did not match this kind of heat equation it would signify that they had got the energy budget significantly wrong.

I think this is exactly why we have the talk of “missing heat” since y2k, because when the incorrect volcanics are no longer there to balance the incorrect GHG forcing the energy budget goes wrong and temp change goes with it.

http://climategrog.wordpress.com/?attachment_id=258

If our newly adopted friend in Alaska is to be believed, that volcanic effect is very short-lived and there is no permanent offset.

Willis does not think even that is attributable so presumably would want to put volcanic “forcing” at zero.

Either way, I think there is an enormous lack of evidence to support the idea that volcanoes exert a permanent drop in temperature. I think Willis’ equatorial governor takes care of it.

Not sure whether you can have an “enormous” lack (but it is pretty big ;) )

Steve McIntyre says:

May 22, 2013 at 9:55 am

Thanks, Steve. For those not familiar with the name, Steve McIntyre is the founder of the climate ur-blog, ClimateAudit. After my previous analyses of the GISS and the CCSM3 model, finding that the average of the model outputs is functionally equivalent to a lagged linear transformation of the inputs was the icing on the cake.

I have wondered about the MAGICC program before. For those who object to my “black box” style of analysis, the IPCC does the same thing. Many of their results are not from the climate models themselves, but from a simplified program called MAGICC. This program is functionally equivalent to the climate models themselves, because it is able to emulate the results of each individual climate model quite accurately. A quick search finds their model parameters. I count 2 variables and 5 discrete parameters. The model is an iterative one.

To simulate the various global climate models used by the IPCC, the outputs of the MAGICC program are then combined with a library of the various model characteristics by another model called SCENGEN, which then generates the emulation of the model outputs themselves for various future climate “scenarios”.

w.

Greg Goodman said on May 22, 2013 at 7:11 am:

From spreadsheet, at E12, calculated temperature for year 1854:

=E11+$E$1*H12/$E$2+I11*EXP(-1/$E$2)

= Previous calculated temp + lambda*(G12-G11)/tau + (E11-E10)*e^(-1/tau)

= Prev.calc.temp + lambda*(average model forcing 1854 – average model forcing 1853)/tau

+ (calculated temperature change from 1852 to 1853)*e^(-1/tau)

Equation on graph translates as:

E12 = E11 + $E$1*H12/$E$2 + I11*EXP(-1/$E$2)

So equation on graph matches spreadsheet, no transcription errors.

He’s given the formula before:

T(n+1) = T(n)+λ ∆F(n+1) / τ + ΔT(n) exp( -1 / τ )

So why assume now there’re transcription errors?

Oh, the exponent should be dimensionless, thus it can be Δt, set at 1 year, so that little bit is fine.
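For anyone who wants to play with the recursion outside a spreadsheet, the formula kadaka traced above can be run as a short loop. The λ, τ, and forcing series below are placeholders for illustration, not Willis’ fitted values:

```python
# Sketch of the spreadsheet recursion traced above:
#   T(n+1) = T(n) + lam*dF(n+1)/tau + dT(n)*exp(-dt/tau),  dt = 1 year.
# lam, tau, and the forcing series are hypothetical placeholders,
# not the values fitted in the post.
import math

lam, tau, dt = 0.3, 2.9, 1.0               # hypothetical sensitivity, time constant
forcing = [0.0, 0.5, 1.0, 1.2, 1.2, 1.2]   # hypothetical forcings, W/m^2

T = [0.0, 0.0]                             # two prior years needed to start
for n in range(1, len(forcing) - 1):
    dF = forcing[n + 1] - forcing[n]       # change in forcing this year
    dT = T[-1] - T[-2]                     # last year's temperature change
    T.append(T[-1] + lam * dF / tau + dT * math.exp(-dt / tau))

print([round(x, 3) for x in T])
```

With a rising forcing series the iteration produces the lagged, smoothed warming the post describes.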

FerdBerple says

“where the model builders go off the rails is in assuming that the models are predicting the future. they are almost certainly not, because predicting the future is quite a complex problem. what the models are almost certainly predicting is what the model builders believe the future will be. this is a much simpler problem, because the model builders will tell you what they believe the future will be if you ask them.”

An excellent proof of mental dishonesty/fantasy/hope.

These are speculators who assume that their speculations are not speculations. They know the actual future; so they believe. Almost a claim of psychic power, backed by “consensus” where doubt is forbidden.

Maybe it would be healthier to be trying to build a real time machine, rather than claiming to have proven the existence of a desperately needed, preimagined future. Or if you’re going to be a psychic, you might want to be good at it first before making predictions.

@ Willis Eschenbach on May 22, 2013 at 9:03 am:

Thanks Willis. I noticed that later when I was hacking through the sheet; I should have said right away that I had figured it out back then. My apologies.

It’s models all the way down, even to the centre of the Earth.

Greg Goodman says:

May 22, 2013 at 7:00 am

Thanks, Greg. You are correct. However, since this is a yearly iteration, t=1.

w.

Willis –

Like this. The black box calculation equivalency is disturbing in that it says all those expensive computers turn out to be unnecessary: so much for the UK Met Office’s excuse.

The calculation equivalency proof also legitimizes the knowledgeable amateur in his hindsight and forward casting work: a 99% match has to mean a good equivalency … unless you are saying that the future will not be a match for even the recent past. Which I wouldn’t be surprised to hear said.

When you hold that something is “special”, you can say anything is reasonable because neither past nor process are limiting factors.

Along this equivalency: recently on WUWT there was discussion of actual ocean heating vs projected ocean heating. If we view ocean heating processes as another black box in which there is only one item of concern, the radiative forcing of CO2, the ratio of observed to modelled heating gives us a correction factor for the principal forcing. All it takes is digitizing two trends, the modeled and the measured, graphing them and taking a linear trend of the results.

In the case referenced, I think the correction factor is about 4/7 (taken from energy measurements rather than temperature from 2005 to 2013), i.e. measured additional energy/time = 0.57 X modelled additional energy/time.

If the radiative forcing of CO2 is the fundamental variable in the black box for oceanic heating, and taken to be 3.0C/doubling of CO2, then the corrected forcing is 1.7C/2XCO2.
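The back-of-the-envelope in that comment is a two-line calculation. Using the comment’s own figures (the 4/7 ratio and the assumed 3.0 °C per doubling are the commenter’s numbers, not measured values):

```python
# The comment's back-of-the-envelope: observed ocean heat uptake runs at
# about 4/7 of the modelled rate, so scale the assumed 3.0 C/doubling
# sensitivity by that ratio. Figures are the comment's, for illustration.
ratio = 4 / 7                  # observed / modelled energy accumulation rate
assumed_sensitivity = 3.0      # C per doubling of CO2, as assumed above
corrected = ratio * assumed_sensitivity
print(round(ratio, 2), round(corrected, 1))
```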

So Willis, you have this equation:-

T(n+1) = T(n) + (lambda). (deltaF)(n+1) / tau + deltaT(n) exp (-1/tau)

And T(n) is presumably Temperature at some time epoch, and of course T(n+1) the Temperature at the next time value.

lambda is the climate sensitivity, deltaT per CO2 doubling, and tau is some delay time, and that leaves only the need for some data input, which presumably is deltaF, some Watts/m^2 “forcing”.

Now of course, exp (-1/tau) is meaningless; there’s a slight dimensional disparity there somewhere.

exp (-1/ square feet) is similarly meaningless.

Now if the very famous Andy Grove can accept a transistor noise equation that is not dimensionally balanced, we shouldn’t worry about a simple nonsense exponential.

But now, what haven’t we assumed here?

For starters we have not assumed the validity of the expression:-

T2 – T1 = lambda. log2(CO2,2 / CO2,1) as being more valid, than (T2-T1)/(CO2,2-CO2,1)=lambda or even more valid than: CO2,2 – CO2,1 = lambda.log2(T2/T1)

Nor have we asserted that exp (-1/tau) is any different from -1/tau, or different from log(1-1/tau).

I daresay a curve-fitting process such as you have followed would yield equally impressive correlation numbers with any one of the alternative possible assumptions I have outlined above.

I’m not impressed with correlation numbers, especially really rough correlation numbers like 0.991.

How about a correlation number like 0.99999999 ; would that grab your attention ?

I’m aware of at least one instance of a mathematical equation fit to the experimentally determined value of a fundamental constant of Physics; the fine structure constant, that predicts the correct value to within 30% of the standard deviation of the very best experimentally measured value.

Yet that mathematical expression is quite bogus, and was one of about a dozen similarly bogus equations, each of which scored a hit inside that standard deviation.

Just goes to show that quite complex functions can be modeled to high precision, by much simpler functions.

But I do prefer to see equations that are at least dimensionally balanced.

Greg Goodman says:

Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-t / tau) for this to make sense.

Willis: Thanks, Greg. You are correct. However, since this is a yearly iteration, t=1.

No worries..

What do you make of the other term? The equation cannot be correct as shown, because each term must have the same physical dimensions. (You can’t add kelvin to kelvin per year.)

I’m not sure how you derived it, but I think it’s the same thing: (t / tau) is your scaled time variable. Your cell does the calculation correctly because dt is always one, but your equation is an invalid transcription.

It would be good if it were correct. That sort of thing dents the cred of what you are presenting.

Sorry Willis, I’m confusing my T’s. I’ll have another think, but something is astray.

Someone explain the difference between TCR and ECS in plain simple easy to understand terms please.

Greg Goodman says: May 22, 2013 at 7:11 am

“Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-dt / tau) for this to make sense.”

Greg, I agree. But in terms of approximating a differential equation, if that is what it is doing, it still doesn’t seem right. It looks like it is related to the model of Schwartz. But that was a response to a one-off rise in F, and the exponential was exp(-t/τ), not exp(-dt/τ). And if the ΔF relates to a derivative, then it should be divided by dt too.

None of this disputes the fact that the model fits. But the question is still there – how do we know λ is climate sensitivity? And is it ECS?

Willis said:

“I use “GraphClick” software on my Mac, assuredly there is a PC equivalent.”

Engauge Digitizer. Freeware. Download here, Windows exe and Linux versions.

Willis, thank you! It’s already in the Debian Linux distribution, I just installed it and used digitizing software for the first time. Wonderful!

But it’s version 4.1 through the package manager, Debian itself just went to Version 7 and I haven’t yet, and the latest Engauge version is 5.1. Not so wonderful!

Willis,

Interesting post. Here is a table of the actual TCR and ECS in various climate models: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2-3.html#table-8-2 , at least as of the AR4, and they are different from what you report. For example, the CCSM3 and GISS models have an ECS of 2.7 C and a TCR of 1.5-1.6 C.

So, your simple lagged linear transformation model of their model is not diagnosing the ECS or TCR of the climate models quite correctly (worse on the ECS than the TCR). You can presumably read there how the modeling groups generally diagnose the ECS and TCR in the models, but it is presumably by some method more rigorous than fitting the models’ reproduction of the historical temperature record to this sort of simple exponential.

We’ve talked before about the fact that there is a slower relaxation component in climate models so they really have to be fit by at least a 2-box rather than 1-box model (which is essentially what yours is…or a close variant on a 1-box model).

“how do we know λ is climate sensitivity? And is it ECS?”

I just noticed the update. But I still don’t see the basis for saying it is TCR either.

The model doesn’t include Mosh’s distinction, because the ocean isn’t part of it.

Yes indeed Nick. I had noted that Willis’ delta T was the same thing as T(n)-T(n+1) and had rearranged it to find something like equation 6 in your link.

Of course Schwartz calls sensitivity 1/lambda but that’s just definition.

However, that seems to suggest that the tau in that term is spurious. Even while suggesting it needed to be t/tau on dimensional grounds, I could not see why it was there.

Mosh pointed out above that this will give transient CS, not equilibrium CS.

Perhaps Willis could help out and explain where the 1/tau came from .

(Sorry Frank, but I don’t know how to properly quote comments.)

If the other mechanisms have effects in the models, then for the models to be parametrizable with just the CO2-sensitivity and time-lag, those effects must be proportional to the effects of CO2. They may drive up the internal number (to get the backfit) by cooling the planet historically, but the only way that backfit could work is by having them all fail to actually produce additional degrees of freedom. Unless the effect of CO2 is built backwards from the total temperature-change, it would be an unbelievable coincidence that their impacts cumulatively scale directly with that of CO2 throughout the temperature-record.

“But I still don’t see the basis for saying it is TCR either.”

The equation assumes a fixed heat capacity, which is what is changing in temperature. So the oceans are there; they are the major part of the fixed heat capacity.

To the extent that this is the “total” CS in K per W/m2 (not CO2x2 CS), this ignores any longer-scale exchanges with the deeper ocean. Hence TCS rather than ECS, as Mosh noted.

Hi Willis,

Thanks for an excellent post. It would be good if some people who do discrete modeling could weigh in on your discretizing, because your time constant is not a large multiple of your time step. I am no expert, but here is the way I would have done it:

T(n+1) = T(n) + [ lambda * average(F(n+1), F(n)) – T(n) ] / ( tau + 0.5 )

This gives tau = 3.5 years and lambda = 0.56 (transient sensitivity of 1.7 degC/doubling) with an error of 0.184.
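The suggested scheme can be sanity-checked by driving it with a step in forcing: the temperature should relax toward λ·F. A minimal sketch, using the commenter’s fitted τ = 3.5 yr and λ = 0.56 with a hypothetical 1 W/m² step:

```python
# Sketch of the alternative discretization suggested above:
#   T(n+1) = T(n) + [lam*avg(F(n+1), F(n)) - T(n)] / (tau + 0.5)
# tau and lam are the commenter's fitted values; the step forcing is
# hypothetical. The response should relax toward lam * F.
tau, lam = 3.5, 0.56
forcing = [0.0] + [1.0] * 60          # hypothetical 1 W/m^2 step, held 60 years

T = [0.0]
for n in range(len(forcing) - 1):
    F_avg = 0.5 * (forcing[n + 1] + forcing[n])   # trapezoidal-style averaging
    T.append(T[-1] + (lam * F_avg - T[-1]) / (tau + 0.5))

print(round(T[-1], 3))                # approaches lam * 1.0 = 0.56
```

Averaging the forcing over the step is what makes this scheme behave better when τ is not a large multiple of the time step.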

Stephen said on May 22, 2013 at 2:46 pm:

Dear Stephen, go to top of this page. Click on “Test”, that’s the page with the formatting help, and a comment section where you can try out various HTML tricks for your comments.

But here’s the quick version:

<blockquote>begin quoted text

more text

end of quote</blockquote>

Continue comment right after “closing tag”, no line space, or WordPress will display something strange.

This yields:

Comment continues.

Later on the Test page you can experiment with nested quotes. Very fun!

Stephen wrote: “Unless the effect of CO2 is built backwards from the total temperature-change, it would be an unbelievable coincidence that their impacts cumulatively scale directly with that of CO2 throughout the temperature-record.”

Exactly. Curious, isn’t it?

This thread is a nice example of true ‘peer’ review in quasi real time. Wonder if the IPCC will take any note of the similar process that followed the leak of AR5 SOD. Actually, that was a rhetorical question. Likely not.

Willis,

Not sure if we have directed you to this before, but here is a post by Isaac Held covering a subject similar to what you are doing here (the emulation of GCMs by simpler models like 1-box models): http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/ Note that he finds the same sort of thing that you do: he computes a time scale for relaxation of about 4 years and a “climate sensitivity” (again, a sort of transient sensitivity, I think) of 1.5 K, but notes that the actual ECS in the CM2.1 model he is emulating is ~3.4 K.

“””””…..Greg Goodman says:

May 22, 2013 at 12:15 pm

Greg Goodman says:

Nick, you are correct about what is written on the graph; it seems the text is wrong. The exp must be exp(-t / tau) for this to make sense.

Willis: Thanks, Greg. You are correct. However, since this is a yearly iteration, t=1…..”””””

NO IT ISN’T !!

t = 1 year; it does NOT = 1

Hi Willis,

I’m sure we’ve been here before…

See my comment of June 4 at 07:02am in your previous

https://wattsupwiththat.com/2012/05/31/a-longer-look-at-climate-sensitivity/

In the present article, you seem to have reverted to your “old” formula before the correction we discussed. This gives you values of lambda which are too small by a factor of tau*(1-exp(-1/tau)). Specifically, in this study, this means a factor of 0.846.

More importantly, after you make this correction, you will be calculating the unit climate sensitivity (not TCR) under the assumption of a constant linear feedback, and you ARE taking ocean heat content into account, albeit under the very simple assumption of a constant heat capacity. So if you apply a forcing of 3.7 W/m2 you should get an approximation of the ECS for a doubling of CO2, again under the assumption of a constant linear feedback.


With the correction, the formula is the numerical solution of the linear feedback equation given by:-

C dT/dt = F(t) – T(t)/lambda

Rate of heat gain by oceans = Cumulative forcing at time (t) LESS (Temperature change from t=0)/climate sensitivity

The above is a two parameter equation in C and lambda. By setting tau = C*lambda it becomes a two parameter equation in lambda and tau, but obviously the apparent heat capacity can always be back-calculated if values of tau and lambda are known. The total heat gain in the oceans at time t is just given by C*T(t) in units of watt-years.
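Paul_K’s feedback equation can be integrated numerically and checked against the familiar analytic step response T(t) = λ·F·(1 − exp(−t/τ)). A quick sketch with placeholder values (λ, τ, and F below are not fitted to anything in the thread):

```python
# Paul_K's linear feedback equation, C*dT/dt = F(t) - T(t)/lam, with
# tau = C*lam, has the analytic step response T(t) = lam*F*(1 - exp(-t/tau)).
# Quick Euler-integration check with hypothetical placeholder values.
import math

lam, tau = 0.5, 3.0            # hypothetical sensitivity and time constant
C = tau / lam                  # implied heat capacity, since tau = C*lam
F = 3.7                        # hypothetical constant forcing, W/m^2

dt, T, t_end = 0.001, 0.0, 10.0
for _ in range(int(t_end / dt)):       # explicit Euler on C*dT/dt = F - T/lam
    T += dt * (F - T / lam) / C

analytic = lam * F * (1 - math.exp(-t_end / tau))
print(round(T, 3), round(analytic, 3))
```

The ocean heat gain at any time is then just C·T(t), as the comment notes, which is why the two-parameter (λ, τ) form carries the heat capacity implicitly.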

People have commented that I must have made a transcription error, and it’s true, I did. The actual formula should be:

T(n+1) = T(n) + λ ∆F(n+1) * (1-exp(-1/tau)) + ΔT(n) exp( -∆T / τ )

This is the proper form and fixes the units problem. The full equation is

T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-∆t / τ )) ∆t+ ΔT(n) exp( -∆t / τ ) ∆t

This simplifies to the equation above, since ∆t = 1. The two terms on the right are in degrees.

Including units, the full equation is

T(n+1) [degrees] = T(n) [degrees] + λ [degrees/W m-2] * ∆F(n+1) [W m-2/year] * (1-exp(-∆t [years] / τ [years]) * ∆T [years] + ΔT(n) [degrees/year] * exp( -∆T [years]/ τ [years] )

This can be written in units alone, as

[degrees] = [degrees] + [degrees/W m-2] * [W m-2/year] * [years] * (1 - exp( -[years] / [years] )) + [degrees/year] * exp( -[years] / [years] ) * [years]

Once again, I’ve corrected the head post and the graphic. The error translates to an increase of about 25% in the calculated climate sensitivity, but doesn’t change the time constant. My thanks to those who noticed the error.

w.

I haven’t confirmed the iterative equation this time, but, if it’s otherwise correct, shouldn’t

“T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-∆t / τ )) ∆t+ ΔT(n) exp( -∆t / τ ) ∆t”

be

“T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-∆t / τ )) + ΔT(n) exp( -∆t / τ ) ,”

i.e. shouldn’t you drop a couple of ∆t’s to make the dimensional analysis work? This is because ∆F(n+1) is in W m-2, not W m-2/year.

Reminds me of something I heard long long ago… that one could be a near expert weather predictor in many parts of the country simply by saying tomorrow would be very much like today. Something like 95%+ accuracy in Phoenix, IIRC. When wrong, it’s a regime change of some sort, but the next day you tend to return to accuracy…

So for climate you just lag it a bit, eh? Easy peasy…

;-)

From Willis Eschenbach on May 22, 2013 at 7:13 pm:

No it doesn’t.

Your “full equation” with bold added:

T(n+1) [degrees] = T(n) [degrees] + λ [degrees/W m-2] * ∆F(n+1) [W m-2/year] * (1-exp(-∆t [years] / τ [years]) * ∆T [years] + ΔT(n) [degrees/year] * exp( -∆T [years]/ τ [years] )

ΔF is change in forcing. From your Figure 1, “Adjusted Forcing” was “W sq.m”, obviously a slash is missing.

Why the hell would you be throwing in time for your change in forcing value? Forcing is a light bulb, on/off, gives Watts per square meter, which is Joules per second per square meter.

Now go back to your original equation, as found in “Black box of Chocolates”:

T(n+1) = T(n)+λ ∆F(n+1) / τ + ΔT(n) exp( -1 / τ )

What was the problem? Exponent had to be dimensionless, and the “1” was standing in for a Δt of one year.

So that was the problem! Where you said “1/τ” it should be “Δt/τ”, where Δt = 1 yr.

So try that for the middle term. Make it λ*ΔF(n+1)*Δt/τ, see how that works.

PS: If you still decide you need to so drastically change that equation, then please change it in “Black Box of Chocolates” as well, for consistency’s sake.

KDK: “Make it λ*ΔF(n+1)*Δt/τ, see how that works.”

I agree with that. When it’s sorted out, it says that

dT/dt = λ*ES(dF/dt,τ), where ES means exponentially smoothed dF/dt, characteristic time period τ. And that seems like a reasonable model, and λ is acting as a sensitivity. I’m still not convinced that it can be equated with ECS or TCR. That needs to be shown.
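Nick’s reading of the recurrence as exponential smoothing is easy to check numerically. A minimal sketch in Python (the function name and parameter values are mine, purely for illustration): feed it a unit pulse of ∆F and the response decays geometrically with factor a = exp(-∆t/τ).

```python
import math

def delta_T_series(dF, lam, tau, dt=1.0):
    """Iterate dT(n+1) = lam*dF(n+1)*(1 - a) + dT(n)*a, with a = exp(-dt/tau).

    This is an exponential smoothing of the forcing changes dF, scaled by lam."""
    a = math.exp(-dt / tau)
    dT, out = 0.0, []
    for f in dF:
        dT = lam * f * (1.0 - a) + dT * a
        out.append(dT)
    return out

# A unit pulse in dF produces a response that decays by the factor a each step.
pulse = [1.0, 0.0, 0.0, 0.0, 0.0]
resp = delta_T_series(pulse, lam=0.3, tau=2.9)
print(resp)
```

With λ = 0.3 and τ = 2.9, the first response is λ(1 − a) ≈ 0.0875 and each later value is the previous one multiplied by a ≈ 0.708.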

The actual formula should be:

T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-1/tau)) + ΔT(n) exp( -∆T / τ )

Sorry Willis, not sure this is correct either. I’m not trying to be picky, but I assume you want to get this right.

Dimensions are good (as long as you explicitly put ∆T in both expressions), but if you recognise that T(n+1) – T(n) = ΔT(n) and rearrange, you get (1-exp(-∆T/tau)) on both sides and it disappears!

You are left with ΔT = λ ∆F and the exponential behaviour has disappeared.

I think my original suggestion is the most likely: the scaling factor should be ∆T / tau, but I can’t be sure because you have not given any indication of your starting point.

I suggest you always explicitly write ∆T, even if ∆T = 1 year, if that is what the formula uses. Obviously optimise the calculation in the spreadsheet cells, but make sure when you write a mathematical formula that the terms are correct.

Paul K says:

https://wattsupwiththat.com/2012/05/31/a-longer-look-at-climate-sensitivity/#comment-1000758

If you want to apply a single formula for temperature, which DOES represent the solution to the linear feedback equation, you need, using your definition of lambda:-

T(k) = Fk*α *λ + (1-α) * T(k-1)

where Fk is the cumulative forcing at the kth time step,

α = 1 – exp(-DELT/τ)

===

That brings us back to Schwartz eqn 6, which was mentioned above, and then the tau is spurious, as I previously pointed out.

Maybe you need to say how you got to that formula so we can find out what the correct form is rather than just guessing.

Joe Born says:

May 22, 2013 at 8:05 pm

I was thinking of the ∆F term as the change in forcing from one year to the next, so it would have units of W/m2 per year … no? If not, then you are correct.

w.

Greg G,

” T(n+1) – T(n) = ΔT(n)”

As used in the spreadsheet, T(n) – T(n-1) = ΔT(n), so they aren’t the same.

I think Willis’ new version, with typos and units fixed, is OK. This is how it works. We have:

ΔT(n+1) = λ ∆F(n+1) * (1-exp( -∆t / τ )) + ΔT(n) exp( -∆t / τ )

Solving:

ΔT(n+1)=λ *(1-a)*(∆F(n+1)+a*∆F(n)+a^2*∆F(n-1)+…) where a= exp( -∆t / τ )

This is exactly the formula ΔT(n+1)=λ *ES(∆F(n+1),a) where ES is exponential smoothing, a as factor

or ΔT=λ *ES(∆F,a). The (1-a) is what is needed to normalize the sum (to have area 1).

In the limit of small ∆t, 1-a = ∆t / τ, which is where the original 1 / τ came from.
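Nick’s expansion can be verified directly: iterate the recurrence and compare it against the closed-form weighted sum. A quick sketch (random forcing steps; the λ and τ values are illustrative only):

```python
import math
import random

lam, tau, dt = 0.5, 2.9, 1.0
a = math.exp(-dt / tau)
random.seed(1)
dF = [random.uniform(-1.0, 1.0) for _ in range(50)]

# Recurrence: dT(n+1) = lam*dF(n+1)*(1-a) + dT(n)*a
dT_rec, dT = [], 0.0
for f in dF:
    dT = lam * f * (1.0 - a) + dT * a
    dT_rec.append(dT)

# Closed form: dT(n) = lam*(1-a) * (dF(n) + a*dF(n-1) + a^2*dF(n-2) + ...)
dT_sum = [lam * (1.0 - a) * sum(a**k * dF[n - k] for k in range(n + 1))
          for n in range(len(dF))]

assert all(abs(x - y) < 1e-9 for x, y in zip(dT_rec, dT_sum))
print("recurrence and exponentially smoothed sum agree")
```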

I think none of this ensures that λ is numerically equivalent to ECS or TCR.

“T(n) – T(n-1) = ΔT(n), so they aren’t the same.” My bad, thanks.

“Solving: ΔT(n+1)=λ *(1-a)*(∆F(n+1)+a*∆F(n)+a^2*∆F(n-1)+…) where a= exp( -∆t / τ )”

Ah, so now we get to see where it came from. Good work.

“In the limit of small ∆t, 1-a = ∆t / τ, which is where the original 1 / τ came from.”

And knowing where it comes from shows us the correct term is ∆t / tau, which was my original suggestion.

So finally Willis’ original figures were correct.

In the limit of small ∆t, 1-a = ∆t / τ, which is where the original 1 / τ came from.

That explains the original “1/tau”, but since ∆t is comparable to tau, it should be used in the full form.

Nick: “I think none of this ensures that λ is numerically equivalent to ECS or TCR.”

T is the temperature of a constant heat capacity, so I go with Mosh’s argument that this is TCS; but since F and dF are the total forcing, this CS is in the sense of its sensitivity to total forcing, not CO2x2 sensitivity.

Jeez,

What a pile of confusion!

Let me start again so that everyone can see where everything comes from, AND what the underlying assumptions are.

Firstly, a statement of energy balance relative to a pseudo steady state condition at time t = 0:-

Change in Net flux at top of atmosphere = change in Forcing – change in outgoing radiation (shortwave plus longwave).

Assume that (a) the ocean has a constant heat capacity, C, in units of watt-years/deg C/m2 and that (b) all of the net energy gained must end up in the oceans.

We write the equation as:

C dT/dt = F(t) – T(t)/λ (1)

Equation (1) is the most common form of the “linear feedback equation”.

Note that T is the total change in surface temperature from time t = 0.

λ is the climate sensitivity expressed in units of deg C/(W/m^2)

F(t) is the change in cumulative forcing from time t = 0 to time t

Now consider a fixed step forcing, F, applied at time t = 0, that is F(t) = F, constant. Equation (1) can be solved analytically using an Integrating Factor to yield

T(t) = F*(1-exp(-t/(Cλ))) /λ (2)

For convenience only, we can make the substitution τ = Cλ, and Eq (2) then becomes

T(t) = F*(1-exp(-t/τ)) /λ (3)

OK, then with the above solution available for a FIXED step forcing, we can apply either a convolution integral or a superposition solution to develop the solution for the more general case where F(t) is varying with time.

The superposition solution is given by

T(k) = Fk*α *λ + (1-α) * T(k-1) (4)

where Fk is the cumulative forcing applied at the start of the kth time step,

and α = 1 – exp(-DELT/τ)

where DELT = the timestep length

Equation (4) yields an accurate numerical solution to Equation (1) when F(t) is arbitrarily varying in time. To avoid a half-timestep displacement, the actual cumulative forcing data at time t=k should be replaced by mid-timestep estimates, Fk = (F(t=k+1) + F(t=k))/2, but that is a refinement.
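Eq (4) is straightforward to implement, and for a fixed step forcing it reproduces the analytic solution T(t) = F*λ*(1 − exp(−t/τ)) exactly at the timestep boundaries. A sketch (the parameter values are illustrative only):

```python
import math

def superpose(F, lam, tau, delt=1.0):
    """Paul_K's Eq (4): T(k) = F(k)*alpha*lam + (1-alpha)*T(k-1),
    with alpha = 1 - exp(-delt/tau) and F the cumulative forcing per step."""
    alpha = 1.0 - math.exp(-delt / tau)
    T, out = 0.0, []
    for f in F:
        T = f * alpha * lam + (1.0 - alpha) * T
        out.append(T)
    return out

# Fixed step forcing: T after k steps should equal F0*lam*(1 - exp(-k/tau)).
F0, lam, tau = 3.7, 0.5, 2.9
T = superpose([F0] * 100, lam, tau)
for k in (1, 10, 100):
    analytic = F0 * lam * (1.0 - math.exp(-k / tau))
    assert abs(T[k - 1] - analytic) < 1e-9
print("T approaches F0*lam =", F0 * lam)
```

Note how T tends to F0*λ as k grows, which is the equilibrium response discussed further down the thread.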

This post is already too long, so I’ll show in a second post how to get from Equation (4) to Willis’s solution, and the implications for ECS.

Further to my last post, if we expand Eq (4) by swapping out alpha, we obtain

T(k) = Fk* α*λ + exp(-DELT/τ) * T(k-1) (5)

Rewriting for the next timestep, we obtain:-

T(k+1) = Fk+1 * α*λ + exp(-DELT/τ) * T(k) (6)

Subtracting Eq (5) from Eq (6), we obtain:

ΔT(k+1) = ΔFk+1 * α*λ + exp(-DELT/τ) * ΔT(k) (7)

Where the Δ values represent incremental values of T and F between timesteps.

Finally, we set the timestep value equal to 1 (year), and expand the first alpha value to obtain:-

ΔT(k+1) = ΔFk+1 * (1 – exp(-1/τ))*λ + exp(-1/τ) * ΔT(k) Eq (8)

Break open the LHS and voila, we obtain:

T(k+1) = T(k) + ΔFk+1 * (1 – exp(-1/τ))*λ + exp(-1/τ) * ΔT(k) Eq (9)

I notice that Willis has already now corrected to this form. (Thanks, Willis.) One brief diversion: if the “alpha” term is expanded using a Taylor series, we obtain

α = (1 – exp(-1/τ)) = 1 – (1 – 1/τ + (1/τ)^2/2 – … )

This is roughly equal to (1/τ) if the higher-order terms are dropped, and this would give the formula which Willis first reported. This approximation is not recommended: it introduces the “dimension” error which people picked up on, but more importantly, while the approximation is not too bad for large values of τ, it introduces a substantial error for small values (like 2.9).
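The size of that approximation error is easy to quantify: with τ = 2.9, using 1/τ in place of 1 − exp(−1/τ) overstates alpha by roughly 18%, while for τ = 30 the error is under 2%. A quick check:

```python
import math

# Compare the exact alpha = 1 - exp(-1/tau) with the first-order
# approximation 1/tau, for small and large time constants.
for tau in (2.9, 10.0, 30.0):
    exact = 1.0 - math.exp(-1.0 / tau)
    approx = 1.0 / tau
    rel_err = approx / exact - 1.0
    print(f"tau={tau:5.1f}  exact={exact:.4f}  1/tau={approx:.4f}  "
          f"relative error={100.0 * rel_err:+.1f}%")
```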

I’ll do one further post to try to untangle the issue of ECS vs Transient Climate Response vs transient climate sensitivity, if I can.

ERRATUM

In trying to dispel some confusion, I have added to it by incorrectly writing down Equations (2) and (3).

It should read as follows:- QUOTE

T(t) = F*(1-exp(-t/(Cλ))) *λ (2)

For convenience only, we can make the substitution τ = Cλ, and Eq (2) then becomes

T(t) = F*(1-exp(-t/τ)) *λ (3)

ENDQUOTE

My apologies for any puzzlement caused.

Willis,

Steven Mosher differentiated between EFFECTIVE Climate Sensitivity and Transient Climate Response, and this seems to have led to a lot of confusion, including inducing you to modify your calculation. Let me have a go at this subject.

Let’s assume that you use the values of τ and λ derived from the “corrected” form of numerical solution, then we can return to the original linear feedback equation and its assumptions to work out what can and can’t be deduced from the values.

The critical assumptions are that the heat capacity C is invariant and that the temperature-dependent radiative feedback varies linearly with temperature. WITH these assumptions, it is perfectly reasonable to say that λ multiplied by a forcing of 3.7 W/m2 (which corresponds to a doubling of CO2) should yield an estimate of the Equilibrium Climate Sensitivity. NO CORRECTION IS REQUIRED to this calculation, but it is imperative to state the assumptions.

In fact we can see this directly from Equation (3) in my post above. For a fixed forcing F the temperature goes to F* λ as time goes to infinity. Alternatively, we see from Equation 1 that as the net flux (imbalance) goes to zero, then we have 0 = F – T/ λ , which also tells us that T-> F* λ as the system approaches equilibrium. So F* λ is indeed an estimate of the Equilibrium Climate Sensitivity if F is set equal to 3.7 W/m2.

The problem that I think Mosh is highlighting is a different one. In most of the GCMs, there is a curvature seen in the net flux response to temperature change. This means that the assumption that λ is a constant is not supported by the GCMs. In general, values of lambda estimated from plots of net flux vs temperature tend to increase with time and temperature. Opinions differ about whether this is a real-world phenomenon or an artifact of the GCMs, but it leaves open the possibility that the EFFECTIVE climate sensitivity estimated from “short-term” observational datasets with relatively small temperature change (a sort of transient estimate of climate sensitivity based on linear extrapolation of net flux vs temperature behaviour to the zero net flux line) may be lower than the true value of Equilibrium Climate Sensitivity.

One of the things you have confirmed with this work is that this effect is not small. (I already knew that for the several GCMs tested – you have extended that observation to the set of results.) The ECS estimates from the GCMs have a median value around 3.2 deg C. Using the same inputs and outputs over the instrumental period, you have deduced a value about half of that.

For those of you interested in how the numerical (superposition) solution is derived, check out Appendix A in this post

http://rankexploits.com/musings/2011/noisy-blue-ocean-blue-suede-shoes-and-agw-attribution/

I wouldn’t necessarily call this functionally equivalent. I tested Willis’s model on the “commitment” scenario, where delta F is zero from 2006 to 2100. A paltry 0.00002C came out of the pipeline post-2005, whereas the CMIP3 MMM is about 0.3C. When I specified delta F as 0.02W/m^2 per year, the calculated temperature increase was about 0.10C/dec in the 2nd half of this century. That didn’t seem unreasonable, but my guess is that it would be low compared to the models.
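AJ’s “pipeline” point follows directly from the single-box recurrence: once ∆F goes to zero, the remaining warming is just the last ∆T decaying geometrically, so with τ near 3 years almost nothing is left in the pipeline. A sketch with made-up forcing numbers (a 0.03 W/m²/yr ramp for 50 years, then frozen; λ and τ are illustrative, not a fit to anything):

```python
import math

lam, tau = 0.3, 2.9          # illustrative values, not fitted to any dataset
a = math.exp(-1.0 / tau)

# 50 years of steadily rising forcing, then a "commitment" run with dF = 0.
dF = [0.03] * 50 + [0.0] * 95
T, dT, T_at_freeze = 0.0, 0.0, None
for n, f in enumerate(dF):
    dT = lam * f * (1.0 - a) + dT * a
    T += dT
    if n == 49:
        T_at_freeze = T

pipeline = T - T_at_freeze
print(f"warming after the forcing freeze: {pipeline:.4f} C")
```

In this one-box world the committed warming is roughly the last annual ∆T times a/(1−a), a few hundredths of a degree, nothing like the ~0.3C the CMIP3 MMM carries, which is AJ’s point.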

Paul_K:

Just in case you’re wondering whether anyone is listening, I’ll raise my hand. My initial (and current) opinion is consistent with yours that the equilibrium T equals lambda times steady-state F under the linear-relationship assumptions.

“””””……Greg Goodman says:

May 23, 2013 at 12:04 am

The actual formula should be:

T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-1/tau)) + ΔT(n) exp( -∆T / τ )……”””””

No no no ! A thousand times NO !

(T) is Temperature; (t) is time; tau is time

So exp (-delta t /tau )

George, how about you read more than the first two lines before sounding off? Note the line directly AFTER where you cut me off to criticise:

===

Greg Goodman says:

May 23, 2013 at 12:04 am

The actual formula should be:

T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-1/tau)) + ΔT(n) exp( -∆T / τ )

Sorry Willis, not sure this is correct either. I’m not trying to be picky but I assume you wnat to get this right.

====

I was pointing out to Willis that the revised form he had just posted was still wrong.

Anyway, that is ancient history at this stage; why bother bringing it back in now? The discussion’s moved on.

Seems like you and KDK are out to snipe.

How about we just try to get the maths right first.

Paul_K says:

For those of you interested in how the numerical (superposition) solution is derived, check out Appendix A in this post

http://rankexploits.com/musings/2011/noisy-blue-ocean-blue-suede-shoes-and-agw-attribution/

===

An excellent couple of articles there, Paul. In particular, your SURE ties in with a couple of things I’ve found in analysing the data. I’ll ask Lucia to reopen comments rather than mixing it in here.

Very solid work.

Paul_K says:

May 23, 2013 at 9:52 am

First, Paul, my thanks as always for your very detailed and supportive posts.

Regarding the claim above about ECS and TCR, let me see if I can clarify it. The key issue is that the lambda’s aren’t the same. From Mosh’s graphic, the difference between the ECS and the TCR is the difference in lambda. In the TCR, lambda is T/F.

But in the ECS, lambda is T/(F-Q), where Q is the heat going into/out of the ocean. So you are right that as time goes to infinity, the final value is F*λ … but which lambda? All I have is the TCR lambda.

I solved the problem by noting that

ECS/TCR = ∆F/(∆F-∆Q)

In the Otto paper there are calculated ECS and TCR values for four decades. One decade is suspiciously high from the uncorrected shift in the Levitus data due to the introduction of the Argo floats. It has an ECS/TCR value of about 1.5. The other three decades have an ECS/TCR value of about 1.3, so that’s what I used to convert my TCR values (from my TCR lambda) into ECS values. It could be more, might be 1.4, but in any case it is remarkably stable over the last forty years. That should be no surprise, because the ocean is a huge place, and lambda is a function of the rate at which the deep layers of the ocean exchange heat with the surface … I don’t see that changing a lot.

Isaac Held, in the paper Mosh cited, got a TCS of 2.3 and an ECS of 3.3, again a ratio of about 1.5. However, he was using a four-year relaxation time. So in answer to your question, the ECS is about 1.3 to 1.5 times the TCS. I suspect I’ll use 1.4 in the future.

w.

Joe Born, Greg Goodman,

Thanks for your comments. Good to know someone read it!

Hi Willis,

Thanks for your response.

I think there is still some confusion here though, and it is arising from definitional differences.

I will start by repeating that you should not correct your value of lambda, at least not to account for the difference between TCR and ECS. The Equilibrium Climate Sensitivity from your scheme is unambiguously your derived value of lambda times the forcing associated with a doubling of CO2.

Let’s try to distinguish between four distinct definitions which I think is where the confusion is arising.

A) Equilibrium Climate Sensitivity. Units deg C. This is the temperature achieved after an infinite time following a forcing corresponding to a doubling of CO2. It can also be defined as the temperature achieved when the net flux goes to zero following a forcing corresponding to a doubling of CO2. With your definitions, this is just 3.7 times lambda.

B) Effective Climate Sensitivity. Units deg C/(W/m^2). This is an estimate of climate sensitivity per unit of forcing made when you only have transient information available. Suppose that you run a numerical experiment where you impose a fixed forcing ∆F corresponding to a doubling of CO2. At some arbitrary time, t, you make an observation of the change in temperature, T, and the residual net flux , ∆Q. Theory says that the change in net flux since t=0 was (∆F-∆Q). The Effective Climate Sensitivity is then estimated by

Effective CS = T/(∆F-∆Q) = lambdadash, say

Note that this is a method for estimating lambda, using your definitions. The Equilibrium Climate Sensitivity estimated from this calculation is then 3.7 * lambdadash.

C) Transient Climate Response. Units deg C. This is the transient temperature observed in climate models at the point in time when CO2 reaches a doubling during the 1% per year CO2 growth experiment (i.e. after about 70 years). In any given model, it is typically about 70% of the final ECS value for that model. THIS VALUE HAS NOTHING TO DO WITH WHAT YOU ARE DOING HERE, and there is no justification for using the ratio of ECS/TCR to correct your estimate of lambda.

To summarise, your uncorrected value of lambda when multiplied by the forcing corresponding to a doubling of CO2 yields a perfectly valid estimate of Equilibrium Climate Sensitivity under the assumption of a constant linear climate feedback. Hope this helps, seriously. Paul
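Paul_K’s definitions (A) and (B) reduce to two lines of arithmetic. The numbers below are invented for illustration (they are not taken from the Otto paper or any dataset); only the formulas come from the comment above:

```python
# Made-up illustrative observations; only the formulas are from Paul_K's comment.
dT = 0.75      # change in temperature, deg C
dF = 1.95      # change in forcing, W/m^2
dQ = 0.65      # residual net flux (ocean heat uptake), W/m^2
F_2XCO2 = 3.7  # forcing for a doubling of CO2, W/m^2

# (B) Effective Climate Sensitivity per unit of forcing ("lambdadash"):
lambdadash = dT / (dF - dQ)

# (A) Equilibrium Climate Sensitivity implied by it:
ecs = F_2XCO2 * lambdadash
print(f"lambdadash = {lambdadash:.3f} degC/(W/m^2), ECS = {ecs:.2f} degC")
```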

“””””……Greg Goodman says:

May 23, 2013 at 3:11 pm

George, how about your read more that the first two lines before sounding off.? Note the line directly AFTER where you cut me off to criticise:…..””””””

Greg, I did not cut you off to criticize.

I excerpted enough of your post to point readers to where my comment applied; then I merely pointed out that it needed to be lower case (t) and not upper case (T).

And it was not a criticism; I assumed a priori, that of course you knew which was correct, and it was just a typo.

Yes maybe with a drum roll, but that was for those who might not understand any of it; not for you; because by your very post, you demonstrated that you clearly understood the exponent needed to be dimensionless.

Hi AJ,

Given the very low values of tau found by Willis, anything left in the pipeline from Willis’s model would be resolved in about 6 or 7 years, so it is very important to compare exact dates when making a comparison. Having said that, I found some time ago that the temperature left in the pipeline from the “20th century commitment” runs in the AOGCMs was always higher than that gained from this linear feedback model. I now know that the reason for this is that the GCMs exhibit a curvature in the net-flux vs temperature relationship which has the effect of adding future temperature gain relative to the linear feedback model. As to which is more correct, the jury is still out, but it should be emphasised that you are comparing a PREDICTION from the GCMs with a PREDICTION from the linear model. I don’t think it is entirely fair therefore to argue that the linear feedback model is not a good emulator of the GCMs. It clearly is – over the entire instrumental record. As a matter of record, the prediction from the linear feedback model turns out to have been a lot more accurate than the GCMs in terms of PREDICTED temperature gain since 2000, although I wouldn’t make too much of that.

Paul_K:

Thanks very much for the enlightening comment defining terms. I for one had not (at least recognized that I had) encountered such a beast as “transient climate response.”

That said, for me at least your comment did not address Mr. Eschenbach’s point about the heat that “disappears” into the oceans. To caricature your positions, you see (surface) temperature as the total response to the forcing, whereas Mr. Eschenbach sees it as only part of the response, the other part being the warming of the deep oceans–although that warming will at some time outside of our experiment influence the temperature.

To state the caricature differently, you see what we’re looking at as the lumped-parameter system that you (correctly) say the differential equation dT/dt = (lambda / tau) F – T / tau expresses; restated in terms of heat capacity, that equation subsumes the (whole? well-mixed part of the?) ocean as well as the surface. He says that you must look beyond this equation because it does not reflect Poseidon’s vagaries.

Obviously, neither of you will agree with my characterization of your positions, and at the post here https://wattsupwiththat.com/2012/07/13/of-simple-models-seasonal-lags-and-tautochrones/ I demonstrated that the question is beyond what little remains of my once-passable mathematics, so I won’t hazard a position of my own. Since you have contributed constructively to the discussion, however, you may want to correct my characterization of where deep-ocean heat enters your view.

(And I believe it goes without saying–even though Mr. Eschenbach did say so explicitly–that none of us believes this linear-system exercise has much to do with what the real response to a doubling of CO2 would be.)

Hi Paul… good to see you’re back online.

I think the commitment scenario is a fair comparison to make. Looking at the spreadsheet a little closer, I see the sum of delta F over the last 4 years was -0.10W/m^2, so this probably explains why I found zero rise post commitment. What little was in the pipeline was subtracted at the end of the period.

I guess the clarification is that Willis’s model can produce seemingly functionally equivalent results over the instrumental period, but will probably diverge from the MMM in future projections.

I have my own made up model which is capable of producing MMM-like hindcasts and projections under the commitment scenario. My interest is in projected trends given a constant delta F of 0.02W/m^2 over the rest of the century. This is my own made up BAU scenario. Depending on assumptions and data used, I get decadal trends in the range of 0.10C to 0.16C. So I likely would be on the low end of the IPCC’s range and Willis’s model is on the low end of my range. I just like to compare results.

Your point regarding the curvilinear relationship is taken. IIRC this was due to the slow albedo impacts in the high latitudes. I’m not really very interested in ECS or very long term impacts and I certainly wouldn’t use my model or Willis’s to project multi-century scenarios. Given uncertainties in forcings, I also consider the MMM projections useless.

What Paul_K says in this post https://wattsupwiththat.com/2013/05/21/model-climate-sensitivity-calculated-directly-from-model-results/#comment-1314851 seems right to me.

So, the lambda is an estimate of the equilibrium sensitivity but under the 1-box assumption of there being just a single relaxation timescale. As Isaac Held explains in the post I referenced ( http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/11/3-transient-vs-equilibrium-climate-responses/ ), fitting to such a 1-box model is known to underestimate the actual ECS in the models because of the existence of longer timescales in the models.

Willis, in your most recent post, you say:

I couldn’t find where Mosh referred to a paper by Held. Could you provide the reference to the paper that you are talking about? Thanks.

Joel,

Here’s the Isaac Held paper. Note that he has specified the time constant at four years rather than fitting it. There’s a typo in my note you quoted: ECS should be 3.4°C, not 3.3°C.

w.

Hi Willis,

No he didn’t. Held uses the term lambda as total feedback which is the inverse of your lambda. The climate sensitivity he obtained was therefore 0.435 deg C/(W/m^2) = 1/2.3.

The ECS he obtained under the assumption of linear feedback was 1.5 deg C .

The ECS of the GFDL2.1 model is 3.4 deg C.

Once again, Willis, please note that the difference between the 3.4 deg C and the 1.5 deg C is due to the fact that GFDL2.1 displays a strong curvilinear relationship between net flux and temperature when in prediction mode. It has nothing to do with the ratio of ECS to TCR, which is completely different.

joeldshore,

I had a go at the subject of ECS from models vs the apparent climate sensitivity over the historical period in a post here:-

http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/

Basically, the ECS displayed by the GCMs works out to be about twice the ECS value obtained by fitting a constant linear feedback model to the GCM data over the instrumental period.

Joe Born,

I think you are a brilliant engineer masquerading as a lawyer masquerading as a brilliant engineer.

Let me try to give you a very short version of Poseidon’s vagaries.

Textually the energy balance equation can be written as:-

the rate of change of ocean heat energy = the net incoming radiative flux imbalance

The RHS is given by F – T/lambda, using Willis’s definition of lambda.

On the RHS of this equation, we note that the net flux is dependent on the surface (and atmospheric) STATE, but it is NOT dependent in any way on historic ocean heat uptake. The radiative response doesn’t care what the history of ocean heat uptake was to get to the particular state; it is only interested in the state itself. This is true EVEN IF we change out the assumption of constant lambda for something more sophisticated.

Now one definition of ECS is the temperature at which the net radiative imbalance goes to zero. From this we see that we can set the LHS of the equation to zero to estimate the climate sensitivity. Note that we have done this without talking at all about the ocean model, so what are we missing? Well we are missing the behaviour of temperature in the time domain, since this is controlled by the ocean heat uptake – LHS of the equation.

OK, so now we can consider the LHS of the equation. You (quite reasonably) are scathing about the quality of choice of a constant heat capacity for ocean/mixed layer or sumpn. Agreed, it is a model with very limited applicability. Well suppose then that we don’t try to define the ocean model at all? We have estimates of the OHC uptake (energy units/m^2) and the LHS of the equation can be expressed as d(OHC)/dt. Voila, we can write the energy balance without making any assumptions about the ocean dynamics:

d(OHC)/dt = F(t) – T/lambda

If the values of OHC, F(t) and T(t) are known, then we can estimate lambda directly with no assumptions about the ocean dynamics. Suppose we apply this equation to the historical data in a GCM; what do we find? Well, we find that the values of lambda work out to be typically around 0.45 deg C/unit of forcing, giving an ECS of around 1.5 deg C under the assumption of a linear feedback – typically about half the declared ECS for the GCM. The difference is explained by the fact that the GCMs do not adhere to the assumption of a constant linear feedback, for reasons which are still not completely clear. (Try an interesting paper by Kyle Armour et al 2012 on exactly this subject.)
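The procedure Paul_K describes, estimating lambda straight from d(OHC)/dt = F(t) − T/lambda without committing to an ocean model, can be sketched with synthetic data. Here a one-box ocean with a known lambda generates T, F and the heat-uptake rate, and a least-squares fit of T against (F − d(OHC)/dt) recovers that lambda (exactly, since the data are noise-free by construction; the lambda and C values are invented for the sketch):

```python
# Synthetic-data sketch: true_lam and C below are invented, not observed values.
true_lam, C = 0.45, 8.0   # sensitivity [degC/(W/m^2)] and heat capacity
T = 0.0
Ts, Fs, dOHC = [], [], []
for t in range(200):
    F = 0.02 * t                    # slowly ramping forcing, W/m^2
    uptake = F - T / true_lam       # d(OHC)/dt = net flux imbalance
    Fs.append(F); Ts.append(T); dOHC.append(uptake)
    T += uptake / C                 # C dT/dt = net flux

# Least-squares slope of T on (F - d(OHC)/dt) is the estimate of lambda.
x = [f - q for f, q in zip(Fs, dOHC)]
lam_est = sum(xi * ti for xi, ti in zip(x, Ts)) / sum(xi * xi for xi in x)
print(f"recovered lambda: {lam_est:.4f}")
```

With real OHC, forcing and temperature series the fit would of course be noisy, but the structure of the estimate is the same.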

If we do the same thing with observational data, i.e. using measurements of OHC and temperature together with estimates of forcing, then the ECS values come out a bit higher – modal values of ECS are around 1.7 deg C. The recent Otto et al study did something very similar to come up with a ML ECS value of 1.9 deg C.

Now I can (instead) substitute the LHS of the equation with a more sophisticated ocean model – a two-slab or an upwelling-diffusion model – and it really doesn’t change the estimates of lambda very much, provided that I simultaneously fit both temperature AND OHC. It does however tend to give rise to larger estimates of the system response time. However, these estimates are nowhere near the response times observed in the GCMs when in predictive mode. Are the GCMs more correct than the simple analytic models, so that we should really scale up estimates of sensitivity obtained from the linear feedback assumption? A completely separate question, and one which I am still working on.

Paul_K,

Thanks a lot for taking the time to provide such a clear response (and the compliment–although if I really were brilliant my head wouldn’t hurt so much when I try to figure this stuff out). Much of my career was spent asking dumb questions of experts, and my position was such that they had to humor me. Now that I’m retired, getting a good answer is a luxury–which I really appreciate on the odd occasions when it happens.

Paul_K says:

I assume the paper Paul is speaking of is the one available here: http://earthweb.ess.washington.edu/roe/GerardWeb/Publications_files/Armouretal_EffClimSens.pdf

This blog posting by Isaac Held also seems relevant: http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/19/time-dependent-climate-sensitivity/