Guest Post by Willis Eschenbach
David Rose has posted this, from the unreleased IPCC Fifth Assessment Report (AR5):
‘ECS is likely in the range 1.5C to 4.5C… The lower limit of the assessed likely range is thus less than the 2C in the [2007 report], reflecting the evidence from new studies.’ SOURCE
I cracked up when I read that … despite the IPCC’s claim of even greater certainty, it’s a step backwards.

You see, back around 1980, about 33 years ago, we got the first estimate from the computer models of the “equilibrium climate sensitivity” (ECS). This is the estimate of how much the world will warm if CO2 doubles. At that time, the range was said to be from 1.5°C to 4.5°C.
However, that was reduced in the Fourth Assessment Report, to a narrower, presumably more accurate range of 2°C to 4.5°C. Now they’ve backed away from that, and retreated to their previous estimate.
Now consider: the first estimate was done in 1980, using a simple computer and a simple model. Since then, there has been a huge, almost unimaginable increase in computer power. There has been a correspondingly huge increase in computer speed. The number of gridcells in the models has gone up by a couple orders of magnitude. Separate ocean and atmosphere models have been combined into one to reduce errors. And the size of the models has gone from a few thousand lines of code to millions of lines of code.
And the estimates of climate sensitivity have not gotten even the slightest bit more accurate.
Can anyone name any other scientific field that has made so little progress in the last third of a century? Anyone? Because I can’t.
So … what is the most plausible explanation for this ludicrous, abysmal failure to improve a simple estimate in a third of a century?
I can give you my answer. The models are on the wrong path. And when you’re on the wrong path, it doesn’t matter how big you are or how complex you are or how fast you are—you won’t get the right answer.
And what is the wrong path?
The wrong path is the ludicrous idea that the change in global temperature is a simple function of the change in the “forcings”, which is climatespeak for the amount of downward radiation at the top of the atmosphere. The canonical (incorrect) equation is:
∆T = lambda ∆F
where T is temperature, F is forcing, lambda is the climate sensitivity, and ∆ means “the change in”.
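As a minimal worked example of how that relation gets used (assuming the commonly cited 3.7 W/m^2 forcing for a doubling of CO2; the lambda values below are illustrative placeholders, not numbers from the post), a few lines of Python reproduce roughly the familiar 1.5°C to 4.5°C spread:

```python
# The canonical linear relation, delta_T = lambda * delta_F, applied to a CO2 doubling.
# The 3.7 W/m^2 forcing per doubling is the commonly cited figure; the lambda values
# are illustrative placeholders chosen to span the quoted ECS range.

def delta_T(delta_F, lam):
    """Temperature change (deg C) for a change in forcing delta_F (W/m^2)."""
    return lam * delta_F

F_2xCO2 = 3.7                  # W/m^2, canonical forcing for doubled CO2
for lam in (0.4, 0.8, 1.2):    # deg C per W/m^2
    print(f"lambda = {lam}: ECS = {delta_T(F_2xCO2, lam):.1f} deg C")
```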
I have shown, in a variety of posts, that the temperature of the earth is not a function of the change in forcings. Instead, the climate is a governed system. As an example of another governed system, consider a car. In general, other things being equal, we can say that the change in speed of a car is a linear function of the change in the amount of gas. Mathematically, this would be:
∆S = lambda ∆G
where S is speed, G is gas, and lambda is the coefficient relating the two.
But suppose we turn on the governor, which in a car is called the cruise control. At that point, the relationship between speed and gas consumption disappears entirely—gas consumption goes up and down, but the speed basically doesn’t change.
Note that this is NOT a feedback, which would just change the coefficient “lambda” giving the linear relationship between the change in speed ∆S and the change in gas ∆G. The addition of a governor completely wipes out that linear relationship, de-coupling the changes in gas consumption from the speed changes entirely.
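To make the analogy concrete, here is a toy numerical sketch (nothing below is a climate model, and every number in it is invented for illustration): without the governor, the one-step change in speed scales linearly with the gas; with the “cruise control” engaged, the gas wanders up and down with the load while the speed barely moves.

```python
# Toy sketch of the governor argument. All coefficients are made up for illustration.
import random

def step(speed, gas, load):
    """Crude one-step vehicle response: more gas -> faster, more load (hills, wind) -> slower."""
    return speed + 0.2 * gas - 0.2 - load

# Ungoverned: from the same starting state, the one-step speed change tracks the gas
# linearly, i.e. delta_S = lambda * delta_G.
speed = 100.0
for gas in (1.0, 2.0, 3.0):
    print(f"ungoverned: gas={gas}  one-step delta_speed={step(speed, gas, 0.0) - speed:+.2f}")

# Governed ("cruise control"): the load varies randomly, the governor trims the gas to
# hold the setpoint, and the speed stays put even though gas consumption wanders.
setpoint, speed = 100.0, 100.0
speeds, gases = [], []
for _ in range(500):
    load = random.uniform(-0.5, 0.5)        # hills, headwinds, etc.
    gas = 1.0 + 5.0 * (setpoint - speed)    # simple proportional governor
    speed = step(speed, gas, load)
    speeds.append(speed)
    gases.append(gas)

print(f"governed:   speed range={max(speeds) - min(speeds):.2f}  "
      f"gas range={max(gases) - min(gases):.2f}")
```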
The exact same thing is going on with the climate. It is governed by a variety of emergent climate phenomena such as thunderstorms, the El Nino/La Nina warm water pump, and the PDO. And as a result, the change in global temperature is totally decoupled from the changes in forcings. This is why it is so hard to find traces of e.g. solar and volcano forcings in the temperature record. We know that both of those change the forcings … but the temperatures do not change correspondingly.
To me, that’s the Occam’s Razor explanation of why, after thirty years, millions of dollars, millions of man-hours, and millions of lines of code, the computer models have not improved the estimation of “climate sensitivity” in the slightest. They do not contain or model any of the emergent phenomena that govern the climate, the phenomena that decouple the temperature from the forcing and render the entire idea of “climate sensitivity” meaningless.
w.
PS—I have also shown that despite their huge complexity, the global temperature output of the models can be emulated to a 98% accuracy by a simple one-line equation. This means that their estimate of the “climate sensitivity” is entirely a function of their choice of forcings … meaning, of course, that even on a good day with a following wind they can tell us nothing about the climate sensitivity.
Nick Stokes says:
September 15, 2013 at 4:27 am
Don’t pretend you haven’t heard of the “Eschenbach Effect”. It’s peer reviewed, and as you can appreciate, if a skeptic gets something published in climate science, it has run a severe gauntlet to get there. I can’t believe you haven’t read it and the other related articles here.
It is telling if you haven’t. Can you explain why, no matter how high the temperature may go, ocean temperatures cannot (do not!) exceed 31ºC, and often hover within a couple of degrees below that 31ºC ceiling?
I was blown away by Willis’s statement that climate models have had millions of lines of code written for them! Eventually you will be forced to let go of the C-phlogiston2 theory of climate, and will see that enthalpy changes in H2O and all its phases, plus convection and clouds, overpower all else. Even if the ECS were 10 or 20, the Eschenbach Effect would operate to counteract the temperature rise.
http://wattsupwiththat.com/2009/06/14/the-thermostat-hypothesis
Imagine a system with an SST max of 31ºC, and at the poles, ice and variable temperatures. If the planet warms significantly, the ITCZ stays near 31ºC on average and the polar areas warm up (polar amplification is easier to understand with the governor system); ice melts, etc.
Now, imagine a cooling world where the polar amplification goes the other way, cooling down dramatically. The water and air exchange with the tropics results in a drop of tropical SST to something, say, in the mid-twenties. ITCZ cumulus formation arrives later and later in the day, to the point that the clouds do not form at all, giving the sun’s insolation unimpeded access to the surface to try to get the ITCZ temperature back up.
Depending on how cold the polar regions are, the SST may not be able to rise much; this is when all stops are open to incoming solar to no great avail, and we go into an ice age. I can imagine a string of temperature buoys that moves along with the ITCZ through the seasons being all we need to tell which way the climate is going over the globe. No mention of CO2 here. The Thermostat (operating on the well-known thermodynamics of water being heated and cooled) doesn’t care what is causing the temporary forcing. The forcing involved in stoking the climate steam engine to the limit is counteracted by changes in water’s liquid-vapour phases, by the work done in using that energy up, and by cloud formation and thunderstorms that stop the stoking from being overdone.
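For what it’s worth, that clamp can be caricatured in a few lines of Python; everything below (the gain, the onset of convective cooling, the arbitrary “forcing” units, the relaxation loop) is invented purely for illustration, with only the roughly 31ºC ceiling taken from the comment above.

```python
# A cartoon of the "thermostat" idea: SST responds to an imposed forcing, but once it
# nears ~31 C, convective cooling (cumulus, then thunderstorms) ramps up steeply and
# caps it. All numbers are invented for illustration.

def equilibrium_sst(forcing, t0=27.0, cap=31.0, gain=0.5, steps=200):
    sst = t0
    for _ in range(steps):
        convective_cooling = max(0.0, 10.0 * (sst - (cap - 1.0)))   # steep onset just below the cap
        sst += 0.1 * (gain * forcing - (sst - t0) - convective_cooling)
    return sst

for f in (0, 2, 5, 10, 20):    # arbitrary "forcing" units
    print(f"forcing={f:>2}: equilibrium SST ~ {equilibrium_sst(f):.1f} C")
```

However hard the system is pushed in this cartoon, the equilibrium SST never gets past about 31ºC.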
Nick, don’t end up being one of the 100 scientists against the modern “General Relativity” of climate.
Eric1skeptic says: September 15, 2013 at 6:29 am
‘Nick Stokes asks “You’ve said this before, but again not quoting anyone. Whose ludicrous idea is it? Whose canonical equation? What did they say?”
Here’s one that is close:’
Well, it’s hardly close. There are four simultaneous equations, with temperature derivatives as well as temperatures, and several forcing terms. And they call that a minimalist model.
There is no such thing as ‘greenhouse gases’. A GAS absorbs and radiates energy, proportional to its local thermal limits. IT CANNOT STORE ENERGY. Now nitrogen, oxygen and argon — gases which make up 99.9% of the atmosphere — do not absorb or radiate. They are considered TRANSPARENT.
So what happens to that energy once those gases have been heated? The gas expands, and rises. Has it cooled? NO, IT HASN’T! But those gases can pass energy by conduction to the ‘greenhouse gases’ which can then RADIATE that energy away. A small amount of energy can be passed to a cool surface but the cooled gas molecules will insulate much further energy transfer unless there is considerable turbulence near the surface. Cooled air sinks.
There you have it. Only the major atmospheric gases that must be warmed by conduction can store that heat energy until the so-called ‘greenhouse gases’ radiate the energy away.
The wrong gases have been blamed for the ‘greenhouse effect’. If it wasn’t for those trace gases radiating away to space, the atmosphere would be so hot life would never have moved out of the sea.
Tom in Florida says: September 15, 2013 at 5:29 am
“What is the estimated gigatons of the entire atmosphere?”
Nick Stokes says:
September 15, 2013 at 5:51 am
“500,000. Basically, surface atm pressure * surface area. Varies a bit with humidity.”
So basically about 0.6% is CO2, and the CO2 we add annually is about 0.006% of the atmosphere (30 gigatons). Considering better living conditions, lifestyle and conveniences, I can live with that very happily and without remorse. So can my kids, my grandkids, my great grandkids, my great great grandkids and everyone after that.
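As a quick sanity check, here is the arithmetic as quoted in this exchange, in a few lines of Python (the 3,000 Gt CO2 figure appears a little further down the thread). For reference, the commonly cited total mass of the atmosphere is closer to 5.1 million Gt, which would make both percentages roughly ten times smaller.

```python
# Back-of-envelope check using the figures quoted in this thread.
atmosphere_gt = 500_000     # Gt, the figure quoted above
co2_gt = 3_000              # Gt, quoted later in the thread
annual_co2_gt = 30          # Gt per year, as quoted

print(f"CO2 as a fraction of the atmosphere: {co2_gt / atmosphere_gt:.1%}")            # ~0.6%
print(f"Annual human addition as a fraction: {annual_co2_gt / atmosphere_gt:.3%}")     # ~0.006%
```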
Dear Mr. Eschenbach:
The reason growth in computing power hasn’t affected the accuracy of model output is that a mistake is a mistake – breaking it into smaller pieces doesn’t affect its nature. If y is a function of some set of x sub i, then increasing i has no positive effect on the accuracy of the output if your functional specification is wrong to begin with.
And, on an unrelated topic: cruise control is not a governor in Newtonian physics; gas consumption varies with terrain, crosswinds, and so on. A better example would be the role of wetlands in reducing the downstream effects of upstream flooding. However, I (and I assume most others) take your point: discount the limiters and you can imagine a runaway climate vehicle, but only if you have no idea how it works.
Nick Stokes says:
September 15, 2013 at 4:27 am
Thanks for the reminder, Nick. I’ve cited this so many times it’s getting old, so sometimes I forget to cite it, and I’m more than happy to continue doing so.
To start with, I have shown (here among other places) that there is a simple equation that emulates the global temperature results from the climate models, either individually or on average, to about 98% accuracy. The equation is:
T2 = T1 + lambda (F2 – F1) (1-a) + a (T1 – T0)
where T is temperature, F is forcing, a is exp(-1/tau), and tau is the lag time constant.
What you may not have noticed is that that equation is simply a lagged version of what I’ve described as the “canonical equation” … and since it fits all the models to a “T”, I’d say it’s pretty darn canonical, wouldn’t you?
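For the curious, that one-liner translates directly into code. The forcing ramp, lambda, and tau below are placeholders chosen for illustration, not fitted values.

```python
# Direct transcription of the lagged emulation equation above.
import math

def emulate(forcings, lam=0.5, tau=3.0, t_start=0.0):
    """Iterate T2 = T1 + lambda*(F2 - F1)*(1 - a) + a*(T1 - T0), with a = exp(-1/tau)."""
    a = math.exp(-1.0 / tau)
    temps = [t_start, t_start]                  # seed values for T0 and T1
    for f_prev, f_curr in zip(forcings, forcings[1:]):
        t0, t1 = temps[-2], temps[-1]
        temps.append(t1 + lam * (f_curr - f_prev) * (1 - a) + a * (t1 - t0))
    return temps[1:]                            # one temperature per forcing value

forcing = [0.02 * step for step in range(100)]  # a made-up slow ramp in forcing
print(f"final temperature change: {emulate(forcing)[-1]:.2f}")
```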
As to its origin, I’ve gone into that in great detail in a post called “The Cold Equations“, q.v.
Best regards,
w.
Nick, it is a nice example of why the complex GCMs have failed, so in that sense you are right. The main difference from the GCMs is that they lumped all the parameters into a single parameter. Typically the GCM will parameterize some of the weather and model some of it. Here they placed it all in a single parameter to determine by regression with empirical data. That is why this model outperforms the complex GCMs, it simply doesn’t pretend that the “physics” of weather can be modeled.
But in fact equation 1 is their complete model, and the other 3 equations are included only to provide the one month seasonal lag from land and ocean surface thermal inertia.
I say again, has anyone run a climate model with a LOW climate sensitivity?
edcaryl says:
September 15, 2013 at 7:00 am
The sensitivity is not an inherent property of the model. It is determined by the forcings. If you assume large forcings, then the model will show low sensitivity, and vice versa. This was first noted by Kiehl in this paper in 2007, and has been completely explained by my work cited above.
w.
Nick says: “Why is El-Nino a forcing?”
I don’t believe Willis said ENSO was a forcing. He listed it under one of the governing factors.
Regards
“Can anyone name any other scientific field that has made so little progress in the last third of a century? Anyone? Because I can’t.”
Oh, I can think of quite a few! UFOlogy, Sasquatchology, Phrenology, the Pernicious Evil of Chemtrails, and Who Was on the Grassy Knoll?
What’s interesting is that in 1979, there were two groups doing the early modelling: one managed by Syukuro Manabe at NOAA’s GFDL lab and one managed by James Hansen at NASA’s GISS. Hansen somehow got Manabe pushed to the sidelines immediately after this time, the start of maintaining/bullying the consensus.
Manabe’s models had the doubling sensitivity at 2.0C, 3.0C and 3.0C. Hansen’s models were 3.5C and 3.9C. The earliest consensus number as a result of these five models was 3.0C with an uncertainty of +/- 1.5C and, thus, the earliest range was given as 1.5C to 4.5C.
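A two-line check of the arithmetic behind that consensus number, using the figures given in this comment (the rounding to 3.0C +/- 1.5C is as described above):

```python
# The five early model sensitivities quoted above, in deg C per CO2 doubling.
sensitivities = [2.0, 3.0, 3.0, 3.5, 3.9]        # Manabe's three and Hansen's two
mean = sum(sensitivities) / len(sensitivities)
print(f"mean of the five models: {mean:.2f} C")  # ~3.08 C, quoted as 3.0 C +/- 1.5 C
```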
The Charney Report from 1979 outlines the science at the time (just 22 short pages). It’s amazing how little the science has changed from this report. And it certainly talks a lot about the points Willis raised, with the forcing-lambda shortcut which took everyone down the wrong path.
Have a read of the Charney Report. It’s short, provides a better explanation of the theory than you will see from any climate scientist, and it highlights how little has changed. You’d think we would try to collect some data regarding the main uncertainties to try to reduce them, rather than just drumbeat “warming” over and over again for 34 years.
http://www.atmos.ucla.edu/~brianpm/download/charney_report.pdf
edcaryl:
At September 15, 2013 at 7:00 am you ask
OK. I will answer that yet again, and I hope all who are bored with this answer will skip over it and forgive my posting it again.
The models cannot be run with low or zero climate sensitivity because they would be unstable.
I yet again explain this as follows.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch between the global warming it hindcasts and the global warming observed over the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
And, importantly, Kiehl’s paper says:
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
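To put rough numbers on that point, here is an illustrative sketch (not Kiehl’s calculation): if every model has to reproduce roughly the same observed twentieth-century warming, taken here as an assumed round 0.7°C and treated as a crude equilibrium response with no ocean lag, then the larger a model’s net anthropogenic forcing, the smaller its implied sensitivity must be. Only the 0.80 to 2.02 W/m^2 span is taken from the comment above; everything else is invented for illustration.

```python
# Same assumed observed warming, different net forcings, hence different implied
# sensitivities, i.e. the inverse relation described above. The 0.7 C warming and
# the 3.7 W/m^2 per doubling are assumed figures, not values from Kiehl's paper.

observed_warming = 0.7                           # deg C over the 20th century (assumed)

for total_forcing in (0.80, 1.2, 1.6, 2.02):     # W/m^2, spanning the quoted range
    sensitivity = observed_warming / total_forcing           # deg C per W/m^2
    print(f"net forcing {total_forcing:.2f} W/m^2 -> sensitivity {sensitivity:.2f} C/(W/m^2)"
          f" ~ {sensitivity * 3.7:.1f} C per CO2 doubling")
```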
Richard
***
Can anyone name any other scientific field that has made so little progress in the last third of a century? Anyone? Because I can’t.
***
And yet, so many “scientists”, so much money, so many computer models, so much media & political frenzy, and so many new regulations & restrictions.
Pesadia,
“Not much has happened in physics, unless you believe that the god particle has been discovered,
which I don’t”
I was going to say the ITER project, but I agree, physics is a better example because it is pure science. Physics has been dwelling on string theory for the last 30 years and has gotten nowhere. Stephen Hawking is still looking for his theory of everything, and quantum theory still cannot be reconciled with GR.
So Mr Stokes
3,000 gigatonnes currently, 30 gigatonnes being added annually by mankind.
Can I ask how much is produced through natural factors and how much is absorbed by natural cycles?
Willis, your theory of thunderstorms as governors is impressive. I wonder what kind of empirical evidence it would take to confirm it, or do you think of it as already confirmed? Seems to me that we have two types of candidates to explain recent non-warming, one, that heat is being vented into deep space (which I think is essentially yours) and the other, that heat is being vented into deep oceans. Although the second seems to me to be a piece of ad-hockery, it’s possibly true. Do we presently have the kind of observational capability that could decide either of these theories?
Nick Stokes says:
September 15, 2013 at 4:49 am
//////////////////
AND? So WHAT?
Whilst on the point:
1. how many tonnes of GHGs do ants and termites emit into the atmosphere each year?
2. has the quantity they emit been steady since the pre-industrial era, or has it in fact increased these past 150 years or so?
richardscourtney says:
September 15, 2013 at 7:17 am
/////////////
This follows from the fact that just when CO2 emissions began to rise significantly (circa the 1940s), rather than there being a corresponding temperature increase (which CO2 GHG theory requires), there was a temperature decline until about the mid-1970s.
This anomalous result required a negative forcing greater than the positive forcing of CO2; hence the application of aerosols and the highly negative forcings associated with them. None of this was based on real observational data, nor upon scientific experiment; it was just a fudge.
“Can anyone name any other scientific field that has made so little progress in the last third of a century? Anyone? Because I can’t.
So … what is the most plausible explanation for this ludicrous, abysmal failure to improve a simple estimate in a third of a century?”
While your answer is true, I think it’s incomplete. It’s “consensus science”, or the politicization of science, or however you want to term the hijacking of science for a particular agenda, that has ensured that only one path could be explored with any reasonable support. In real science progress is often made through failure, but failure is not an option in climate science. Failure would undermine the support for action (the cause); therefore there can be no admitted failures. Everything must be “consistent with” the theory. Models, no matter how useless, cannot be termed “failures” but are averaged into ensembles; lessons therefore cannot be taken from the failures and applied to new and improved models. Since the models are purported to be useful and the important “forcings” are said to be well known (hence the certainty of the need for action), only minor tweaks can be made to them, or one is guilty of high treason against the consensus. Anyone who commits high treason against the consensus can kiss his/her career goodbye; therefore only the retired or nearly retired scientists, or those outside of the climate science “community” (who are the least likely to be actively engaged in climate science research), can possibly challenge said consensus. The more senior scientists can be easily marginalized as being behind on current research, or even portrayed as senile, and those outside of the climate science clique can similarly be dismissed publicly. It’s a sad state of affairs.
James Strom says:
September 15, 2013 at 7:26 am
Thanks, James, good question. I’ve provided a variety of empirical evidence, including in my original post here, and also here and here. See also my post, “It’s Not About Feedback“.
w.
Great analogy, climate analogized to a speed-governed vehicle. You have to be careful assigning the corresponding parts, though. Obviously temperature is analogized to speed, but at first I wanted to analogize the sun or insolation to fuel flow; then I realized nothing on earth could control that flow the way a governor controls fuel flow. So I decided the sun is more appropriately likened to the environmental factors, e.g. hills, wind speed and direction, road friction, etc., that tend to speed up or slow down a car, just as sun/insolation fluctuations would tend to heat up or cool the climate. Then the car’s speed governor must be compared to the various climate feedbacks, e.g. cloud formation, convection and heat transport, particulates, ocean oscillations, other albedo factors, etc., that in combination govern the flow of heat, i.e. the flow of fuel. It works for me. Thanks.
Two things. 1) typo alert “the El Nino/La Lina warm water pump” should be “La Nina”.
2) “Can anyone name any other scientific field that has made so little progress in the last third of a century? Anyone? Because I can’t.” I nominate fusion research — we’ve been 10 to 20 years away from limitless fusion energy for the past 40 years now. Probably spent more money there than in climate research.
richard verney:
re your post at September 15, 2013 at 7:37 am.
Yes, quite so. And it is that balance between warming and cooling periods, combined with the different amounts of ‘run hot’ of each climate model, which defines the unique climate sensitivity of each climate model.
Richard
It is also likely that CO2 is an emergent phenomenon of climate, specifically of temperature over time (being the integral thereof), as Prof. Salby has explained.