Guest Post by Willis Eschenbach
[Update: I have found the problems in my calculations. The main one was I was measuring a different system than Kiehl et al. My thanks to all who wrote in, much appreciated.]
The IPCC puts the central value for the climate sensitivity at 3°C per doubling of CO2, with lower and upper limits of 2° and 4.5°.
I’ve been investigating the implications of the canonical climate equation. I find it much easier to understand an equation describing the real world if I can draw a picture of it, so I made Figure 1 below to illustrate it.
To be clear, Figure 1 does not represent my equation. It represents the central climate equation of mainstream climate science (see e.g. Kiehl). Let us accept, for the purpose of this discussion, that the canonical equation shown at the bottom left of Figure 1 is a true representation of the average system over some suitably long period of time. If it is true, then what can we deduce from it?
Figure 1. A diagram of the energy flowing through the climate system, as per the current climate paradigm. I is insolation, the incoming solar radiation, and it is equal to the outgoing energy. L, the system loss, is shown symbolically as lifting over the greenhouse gases and on to space. Q is the total downwelling radiation at the top of the atmosphere. It is composed of what is a constant (in a long-term sense) amount of solar energy I plus T/S, the amount of radiation coming from the sadly misnamed “greenhouse effect”. T ≈ 288 K, I ≈ 342 W m-2. Units of energy are watts per square metre (W m-2) or zetta-joules (10^21 joules) per year (ZJ yr-1). These two units are directly inter-convertible, with one watt per square metre of constant forcing = 16.13 ZJ per year.
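The unit conversion in the caption is easy to verify. Here is a quick sketch; the Earth surface area used is my own assumed round figure, not a number from the post:

```python
# Sanity-check the caption's conversion:
# 1 W/m^2 of constant forcing over the whole Earth, for one year, in zettajoules.
EARTH_AREA_M2 = 5.11e14            # assumed round figure for Earth's surface area
SECONDS_PER_YEAR = 365.25 * 24 * 3600

joules_per_year = 1.0 * EARTH_AREA_M2 * SECONDS_PER_YEAR
zj_per_year = joules_per_year / 1e21
print(f"1 W/m^2 ~ {zj_per_year:.2f} ZJ/yr")   # ~16.13
```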
In the process of looking into the implications of this equation, I’ve discovered something interesting that bears on this question of sensitivity.
Let me reiterate something first. There are a host of losses and feedbacks that are not individually represented in Figure 1. Per the assumptions made by Kiehl and the other scientists he cites, these losses and feedbacks average out over time, and thus they are all subsumed into the “climate sensitivity” factor. That is the assumption made by the mainstream climate scientists for this situation. So please, no comments about how I’ve forgotten the biosphere or something. This is their equation, and I haven’t forgotten those kinds of things. I’m simply exploring the implications of their equation.
This equation is the basis of the oft-repeated claim that if the TOA energy goes out of balance, the only way to re-establish the balance is to change the temperature. And indeed, for the system described in Figure 1, that is the only way to re-establish the balance.
What I had never realized until I drew up Figure 1 was that L, the system loss, is equal to the incoming solar I minus T/S. And it took even longer to realize the significance of this finding. Why is this relationship so important?
First, it’s important because (I – Losses)/ I is the system efficiency E. Efficiency measures how much bang for the buck the greenhouse system is giving us. Figure 1 lets us relate efficiency and sensitivity as E = (T/I) / S, where T/I is a constant equal to 0.84. This means that as sensitivity increases, efficiency decreases proportionately. I had never realized they were related that way, that the efficiency E of the whole system varies as 0.84 / S, the sensitivity. I’m quite sure I don’t yet understand all the implications of that relationship.
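That inverse proportionality is easy to check numerically. A quick sketch, using the post’s round values for T and I:

```python
T = 288.0    # surface temperature, K
I = 342.0    # average incoming solar, W/m^2

k = T / I    # the constant T/I, ~0.84

def efficiency(S):
    """System efficiency E = (T/I)/S for sensitivity S in K per W/m^2."""
    return k / S

print(round(k, 2))                         # 0.84
print(efficiency(1.0), efficiency(2.0))    # doubling S halves E
```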
And more to the point of this essay, what happens to the system loss L is important because the system loss can never be less than zero. As Bob Dylan said, “When you got nothin’, you got nothin’ to lose.”
And this leads to a crucial mathematical inequality: T/S, temperature divided by sensitivity, can never be greater than the incoming solar I. When T/S equals I, the system is running with no losses at all, and you can’t do better than that. This is an important and, as far as I know, unremarked inequality:
I ≥ T/S
or
Incoming Solar I (W m-2) ≥ Temperature T (K) / Sensitivity S (K (W m-2)-1)
Rearranging terms, we see that
S ≥ T/I
or
Sensitivity ≥ Temperature / Incoming Solar
Now, here is the interesting part. We know the temperature T, 288 K. We know the incoming solar I, 342 W m-2. This means that to make the system of Figure 1 physically possible on Earth, the climate sensitivity S must be at least T/I = 288/342 = 0.84 degrees C of temperature rise for each additional watt per square metre of forcing.
And in more familiar units, this inequality is saying that the sensitivity must be greater than 3° per doubling of CO2. This is a very curious result. This canonical climate science equation says that given Earth’s insolation I and surface temperature T, climate sensitivity could be more, but it cannot be less than three degrees C for a doubling of CO2 … but the IPCC gives the range as 2°C to 4.5°C for a doubling.
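The conversion behind that “3° per doubling” figure uses the standard value of roughly 3.7 W/m² of forcing per doubling of CO2; that conversion factor is my assumption here, not a number stated in the post:

```python
T = 288.0            # surface temperature, K
I = 342.0            # average incoming solar, W/m^2
F_DOUBLING = 3.7     # assumed forcing per CO2 doubling, W/m^2

S_min = T / I                          # minimum sensitivity, K per W/m^2
per_doubling = S_min * F_DOUBLING      # in degrees C per doubling
print(round(per_doubling, 1))          # ~3.1
```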
But wait, there’s more. Remember, I just calculated the minimum sensitivity (3°C per doubling of CO2). As such, it represents a system running at 100% efficiency (no losses at all). But we know that there are lots of losses in the whole natural system. For starters, there is about 100 W m-2 lost to albedo reflection from clouds and the surface. Then there is the 40 W m-2 loss through the “atmospheric window”. Then there are the losses through sensible and latent heat, which total another 50 W m-2 net. Atmospheric absorption of incoming sunlight accounts for about 35 W m-2 more. That totals 225 W m-2 of losses. So we’re at an efficiency of E = (I – L) / I = (342-225)/342 ≈ 34%. (This is not an atypical efficiency for a natural heat engine.) Using the formula above that relates efficiency and sensitivity, S = 0.84/E, if we reduce efficiency to roughly one-third of its value, the sensitivity roughly triples. That gives us 9°C as a reasonable climate sensitivity figure for a doubling of CO2. And that’s way out of the ballpark as far as other estimates go.
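The loss tally and the resulting sensitivity can be run straight through. As before, the ~3.7 W/m² per doubling used to convert to degrees per doubling is my assumed figure, not stated in the post:

```python
I = 342.0
losses = {                          # W/m^2, as itemized above
    "albedo reflection": 100.0,
    "atmospheric window": 40.0,
    "sensible + latent heat": 50.0,
    "absorbed incoming sunlight": 35.0,
}
L = sum(losses.values())            # 225 W/m^2
E = (I - L) / I                     # about one-third efficiency
S = (288.0 / I) / E                 # sensitivity from S = (T/I)/E

print(round(E, 2))                  # ~0.34
print(round(S * 3.7, 1))            # ~9.1 C per doubling of CO2
```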
So that’s the puzzle, and I certainly don’t have the answer. As far as I can understand it, Figure 1 is an accurate representation of the canonical equation Q = T/S + ∆H. It leads to the mathematically demonstrable conclusion that given the amount of solar energy entering the system and the temperature attained by the system, the climate sensitivity must be greater than 3°C for a doubling of CO2, and is likely on the order of 9°C per doubling. This is far above the overwhelming majority of scientific studies and climate model results.
So, what’s wrong with this picture? Problems with the equation? It seems to be working fine; all necessary energy balances are satisfied, as is the canonical equation — Q does indeed equal T/S plus ∆H. It’s just that, because of this heretofore unnoticed inequality, it gives unreasonable results in the real world. Am I leaving something out? Problems with the diagram? If so, I don’t see them. What am I missing?
All answers gratefully considered. Once again, all other effects are assumed to equal out, please don’t say it’s plankton or volcanoes.
Best wishes for the New Year,
w.

I always thought that climate sensitivity was the derivative of the Stefan-Boltzmann equation and equal to 0.21. Merely dividing T by I [global temperature by insolation] ignores the albedo and emissivity terms, not to mention the Stefan-Boltzmann constant. Thus I suspect your figure of 0.84 could be wrong.
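For reference, the no-feedback Stefan-Boltzmann sensitivity this commenter alludes to is dT/dQ = 1/(4εσT³). The exact number depends on which temperature and emissivity you plug in, so the 0.21 figure presumably reflects a particular choice of inputs; here is the general formula as a sketch:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_sensitivity(T, emissivity=1.0):
    # From Q = eps * sigma * T^4, the derivative gives
    # dT/dQ = 1 / (4 * eps * sigma * T^3), in K per W/m^2.
    return 1.0 / (4.0 * emissivity * SIGMA * T**3)

print(round(sb_sensitivity(255.0), 2))          # ~0.27 at the effective emission temperature
print(round(sb_sensitivity(288.0, 0.612), 2))   # ~0.30 with this commenter's emissivity
```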
Mind you I appear to be approaching the problem from a very different perspective; so may be misinterpreting. [still need to look at your paper in more detail].
One interesting aspect I have noted is that on the current perceived values of albedo and emissivity, 0.3 and 0.612 respectively, their ratio of 0.49 determines the equilibrium temperature, for constant insolation. Thus should this ratio rise, the planet cools, and vice versa.
It also puzzles me why we are all chasing temperature measurements when we should be looking at Albedo and Emissivity. After all it is these two elements that determine the temperature not the other way round.
I do concede, however, that this would be a very difficult task.
In fact, the Warmists should be delighted with the recent cold patch, as it has been very busy COOLING the earth with all that albedo cluttering up our roads.
Is that not what they want?
Nullius in Verba says:
January 6, 2011 at 4:34 pm
“CO2 is an insulator not a heater. Write that down and pass the note along to anyone else who just doesn’t get it.”
CO2 is a thermostat, not an insulator.
LOL.
Anyone who thinks CO2 is an insulator needs to look up the definition of an insulator. A vacuum (the absence of matter) insulates because there is no possibility of thermal conduction. Fiberglass and other materials insulate because they have low thermal conductivities. The last time I read about CO2, it absorbs energy quite well in the IR. That’s exactly what you don’t want in an insulator.
Dave says: CO2 is an insulator not a heater. Write that down and pass the note along to anyone else who just doesn’t get it.
Sorry Dave, CO2 is a superb conductor, not an insulator, and *especially* at temps where IR is near its absorption/emission bands. Have you bought any CO2-filled multi-layer glass panes lately? I hear you can have them installed basically for free thanks to friendly utility-company-sponsored discounts (their sales are hurting a bit lately with high energy costs). Or, you could just open the windows this winter.
Is Willis Eschenbach out there? No responses since January 4th. Preparing a new story which somehow continues this one, I guess?
Dave Springer says:
I don’t think albedo is a constant in the models. In fact, in the Lacis et al. paper that I linked to, it is one of the things that changes when they remove CO2. I agree that it is one of the things that can effectively be tuned, but I think that is by adjusting other parameters. The +/-4% sounds high to me (where did that come from?)…Do you mean +/-4% as in it could be anywhere from 26% to 34%, or do you mean that it is +/- 0.04 times the central estimate of ~30%?
I’m not sure where you get your 0.2 kilowatts per square meter number…and I also disagree with your claim that if you don’t know the values of certain things perfectly then the model will give total garbage. In fact, I might expect that if you only know the albedo to 4% then perhaps you would get a similar error because of that in your climate sensitivity calculation. Surely, the error bars around climate sensitivity are a lot greater than that from other uncertainties. (I did a web search and did find one paper that investigated the effect of tuning the albedo to one data set or another for the NCAR Community Model and reported “a small but statistically significant difference in the model’s equilibrium climate sensitivity”. Unfortunately, I could only find the abstract, so I couldn’t see what that meant quantitatively, but given the spread in climate sensitivity between different models, I doubt that is a particularly large source of uncertainty.)
There is no doubt that significant uncertainties exist in climate modeling. However, that does not mean the models are garbage. It seems only in a politically-charged atmosphere, as exists in this case because so many people don’t like the policy implications of the science, that people seem to adopt a black-white view that the models are either perfect or useless. In the real world, models are always wrong and yet often still very useful.
That is not what Trenberth was actually saying.
Mr Eschenbach:
I applaud your common-sense approach. You have an intuitive grasp of feedback systems. That clouds and storms are relatively quick is what keeps the system stable; stability requires that feedback be faster than the process being controlled. Slow ocean and quick atmosphere is why it works.
If I’m correct in thinking that ocean heat is the bulk of climate heat, there’s one more beauty to the natural balance you propose. I’ll use the analogy of a control system because that’s what I am familiar with.
To be stable, a thermostat needs to sense temperature fairly near the heat source, at least timewise, else the process will get ahead of the thermostat and divergent oscillation will result.
To that end industrial controls often have a sensor wrapped right around the heating element. This gives a little bit of “feed-forward” or “anticipation” which counteracts the natural delays in the process (thermal inertias).
You have picked up on this with your use of thunderstorms to improve the convection process, which is what determines lapse rate which in turn determines surface temperature.
I point out that tropical cyclones, being close to the heat source (tropical oceans) of earth’s heat engine, provide that feedforward. A hurricane moves about 5.2 X 10^19 from the ocean surface to the stratosphere [see http://www.aoml.noaa.gov/hrd/tcfaq/D7.html, about the middle of the article]. A hundred hurricane-days, I think, would amount to 5.2 zettajoules, which is a significant chunk of your delta-H. A typical season might have twice that many hurricane-days just in the Atlantic: twenty named storms at ten days apiece.
I believe tropical cyclones are Mother Nature’s feed-forward mechanism and it is you who have made this clear to laymen such as myself. Thank you.
Your closed loop control system approach to climate is the right one. See my comment on your Thermostat paper, it’s the last one on the 2009 post at WUWT.
Water vapor is faster than CO2 and that’s why it’ll win.
Sincerely,
old jim hardy (retired power plant engineer who’s moved a lot of heat in his day)
Oops! Make that “A hurricane moves about 5.2 X 10^19 JOULES PER DAY.”
Sorry, I was checking my units.
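With the corrected units, the commenter’s back-of-envelope tally checks out. A sketch using the NOAA per-hurricane figure cited above:

```python
HURRICANE_J_PER_DAY = 5.2e19    # NOAA's figure for heat moved by one hurricane per day
hurricane_days = 100

total_zj = HURRICANE_J_PER_DAY * hurricane_days / 1e21
print(total_zj, "ZJ")           # ~5.2 ZJ for a hundred hurricane-days
```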
jim hardy says:
January 7, 2011 at 9:40 am
“If i’m correct in thinking that ocean heat is the bulk of climate heat”
You betcha! All divers know that 33 feet of water is equal to 1 atmosphere. The global ocean averages 12,000 feet deep. Take 71% of that to account for land surface and we get the ocean with a mass of over 250 atmospheres. Water has a specific heat 4 times that of dry air so the final number (close enough for government work) is:
The global ocean has 1000 times the heat capacity of the atmosphere.
You’ve heard of the tail wagging the dog? Well, a tail is around 1-2% the weight of the dog. The atmosphere has only 0.1% the thermal mass of the oceans.
The top 1200 feet is pretty much the only part of the ocean with a temperature that ever exceeds 3C. Everything below that, or 90% of the thermal mass, is 3C. Factoring in the average temperature of the top 1200 feet gives us a global average ocean temperature of 4C.
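The “close enough for government work” heat-capacity arithmetic above can be reproduced directly; all inputs are the commenter’s own round numbers:

```python
FT_SEAWATER_PER_ATM = 33.0   # feet of water per atmosphere of pressure
ocean_depth_ft = 12000.0     # average ocean depth
ocean_fraction = 0.71        # fraction of Earth's surface that is ocean
cp_ratio = 4.0               # specific heat of water vs. dry air (approx.)

# Ocean mass expressed in "atmospheres" of equivalent mass
mass_ratio = ocean_depth_ft / FT_SEAWATER_PER_ATM * ocean_fraction
heat_capacity_ratio = mass_ratio * cp_ratio
print(round(mass_ratio), round(heat_capacity_ratio))   # ~258 and ~1000x
```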
I gots me a couple questions for the climate boffins. How come the ocean is so dang cold? Is that the real global average temperature when you factor in 100,000 years of ice age along with 10,000 years of interglacial?
Note to climate boffins: Please don’t try to tell me the ocean is 3C below 1200 feet because that’s where water reaches its maximum density. OCEAN water is a 3.5% saline solution which has a freezing point of -1.8C and continues to get denser all the way to the freezing point. See:
http://www.nature.com/nature/journal/v428/n6978/fig_tab/428031a_F1.html
Joel Shore says:
January 7, 2011 at 8:08 am
“I don’t think albedo is a constant in the models. In fact, in the Lacis et al. paper that I linked to, it is one of the things that changes when they remove CO2.”
It changes for snow/ice but it’s a constant otherwise.
See GISS GCM ModelE description and reference manual:
http://www.giss.nasa.gov/tools/modelE/modelE.html
@joel (continued)
“I’m not sure where you get your 0.2 kilowatts per square meter number…”
Sorry. Didn’t state that right. TOA incoming and outgoing are approximately equal at 1366 W/m2 (the solar constant).
If earth’s average albedo has an error bar of +/-5%, then the amount of energy at TOA that is reflected, and thus never makes it to the surface to warm the ocean, has an error range of 136 watts. I guess I rounded the wrong way. I should have said they don’t know how much insolation is reflected except to the nearest 100 watts.
“I doubt that is a particularly large source of uncertainty.”
Plus or minus 68 watts/m2 at the surface isn’t particularly large? Anthropogenic CO2 (and equivalents) they say cause an additional 2W/m2 at the surface. If the uncertainty in global average albedo is taken into account that would be 2W/m2 plus or minus 50 watts. Seems pretty significant to me.
“people seem to adopt a black-white view that the models are either perfect or useless”
Not me. I’m of the opinion that a model that is wrong is worse than useless – it inspires potentially disastrous decisions in the real world: bridges collapsing, airplanes crashing, and a million other things that involve death and destruction. Maybe even something as bad as making a decision to limit greenhouse gases because it might get too warm, and instead your efforts bring on an ice age. That’s called “learning the hard way” that your model is broken.
“And we all know the models are a “travesty” thanks to a private candid admission from Trenberth that went embarrassingly public 14 months ago.”
“That is not what Trenberth was actually saying.”
Actually it was. He said they couldn’t explain the lack of significant warming in the past decade. The GCMs predicted warming. He couldn’t explain why they predicted warming that didn’t happen, and that was the travesty. It was an indictment of the models.
dave springer says:
I don’t see where it says that. And, are you sure that you don’t just mean surface albedo (i.e., not including clouds)? I think at least some models now have different albedos for different vegetation and can track changes in vegetation. I am not sure if GISS Model E does this.
Yow! Lots of errors here. First, it makes no sense to multiply the total solar constant by 5% and then compare it to the forcing due to CO2. They both have to be measured as W/m^2 of earth’s surface, i.e., the solar constant has to be divided by 4. So that brings us down to +/- 17 W/m^2. Second, you still haven’t justified where you got the +/-5% number, and in particular whether your source really meant the albedo could be anywhere from 25% to 35%, or rather meant a 5% error in the 30% value (i.e., it could be 28.5% or 31.5%). Maybe your interpretation is correct, but if the latter one is, then your number is further reduced to +/- 5 W/m^2.
Third, you are mixing up changes vs absolute values. If, in the modeling that I had done (including 13 years of working for industry), I always had to have my model within a certain accuracy in predicting something in order to predict a change on that order as I varied a parameter then I would have been unable to do very much in many cases.
In other words, it is not necessary to have all the absolute numbers for all of the energy flows known to a W/m^2 in order to predict how a change in forcing by 1 W/m^2 should roughly change things. In particular, the climate models are always compared to control runs in order to figure out how much warming a certain change produces. That is why the paper that I referenced found that tuning to one measurement vs. another for the model’s albedo only made a small difference in the climate sensitivity that the model predicted.
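The first two corrections in the reply above run out numerically like this (a sketch; the divide-by-4 is the geometric factor converting the solar constant on a disc to an average over the whole sphere):

```python
SOLAR_CONSTANT = 1366.0
avg_insolation = SOLAR_CONSTANT / 4.0   # ~341.5 W/m^2, averaged over the sphere
albedo = 0.30

# Interpretation 1: albedo known to +/- 5 percentage points (0.25 .. 0.35)
err_points = 0.05 * avg_insolation
# Interpretation 2: +/- 5% relative error on the 0.30 value (0.285 .. 0.315)
err_relative = 0.05 * albedo * avg_insolation

print(round(err_points, 1), round(err_relative, 1))   # ~17.1 vs ~5.1 W/m^2
```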
Well, then if you are never willing to accept a model or prediction as long as the model is not 100% correct, you might as well just simplify this and state your positions thusly: “I do not believe in making decisions based on science at all. I am so dead-set against policies to reduce carbon dioxide emissions that there are no circumstances under which I would support them” because that is the honest characterization of your position given that the models will ALWAYS be imperfect. You are demanding absolute certainty in an uncertain world.
By the way, the one point that you (at least implicitly) make in your statement that I do sort of agree with: One thing that makes geoengineering a questionable strategy is that they are much more reliant on the models being absolutely accurate. I.e., the models can already tell us that the perturbation that we are producing through greenhouse gas emissions is dangerous and should be mitigated by reducing our perturbation but I don’t think they are yet accurate enough to tell us exactly what sort of “counter-perturbation” we might want to make (through shooting aerosols into the stratosphere for instance) to counter the effects and not introduce new problems!
If that was really the point that he was making then he was simply misinformed since the GCMs clearly show similar periods of temperatures being steady. It is simply a facet of any system having a slow (approximately) linear trend with superimposed fluctuations and measuring trends over time periods where the underlying trend cannot accurately be determined.
Since I don’t think Trenberth is misinformed and since he wrote a paper that spelled out in more detail what he found to be a problem, I think that we can safely say that wasn’t really the issue that he had. Rather, he wanted to really be able to nail down energy flows to the point where we could understand the details of what was happening during these fluctuations, i.e., is the heat going into the oceans (and how deep?), are there fluctuations in the albedo, … Trenberth was expressing the frustration that we still aren’t able to measure these flows to the accuracy that allows us to answer these questions.
My opinion on climate change is naturally long-term, based on the slow pace of geologic events. It is believed that plate tectonics had a hand in the glacial environment we are in today when, 50 million years ago, the Indian plate impacted southern Asia, closing the Tethys seaway and thereby disrupting a long-standing equatorial warm ocean current that in the Mesozoic Era produced a dominant reptilian fauna, keeping even our polar areas quite warm. Recent core drilling in the Arctic Ocean evidenced temperatures as high as 23 deg C into early Tertiary times; the temperature of the Arctic today is 0 deg C. The past 50 my show a steadily declining temperature. 32 million years ago the Antarctic continent iced up. 14 my ago the Arctic Ocean froze over. 1 3/4 my ago the first Pleistocene Ice Ages began, the first several with 10,000-year periods of ice; the past 5 have been approximately 100,000 yrs in duration with variable interglacials. The present interglacial (Holocene) began melting 14,000 yrs ago, but this warming was interrupted by a 1,300-year cold spell (Younger Dryas), and it required 6,000 yrs to melt.
There is nothing unusual in our climate today despite Dr. Mann’s phoney “hockey stick” graph, which omitted the Middle Ages warm period and the following Little Ice Age and compounded the mythical warming beginning with the Industrial Revolution around 1850 AD, when human use of carbon-based fuels began in earnest, for the blade. All of these fabrications have been of enormous concern to the AGW crowd. We have a world financial disaster brewing because of political and environmental support for this false POLICY. But my crowning concern is this mass distraction from the real geologic situation: the Pleistocene Ice Ages, as long as we have polar ice caps.
Our planet has been in the grip of the Pleistocene Ice Ages for over a million years, and in that time the northern hemisphere has been subject to 5 major continental glaciations. The present interglacial (Holocene) still has ice in our polar regions. As long as these conditions endure, it is counterproductive, and enormously expensive, to use heroic measures to preserve the status quo. If we do not warm up a few degrees we will be back in the ice sooner or later.
Just to clarify, that post by “wrt104” was actually mine…I was just logged into someone else’s WordPress account.
wrt104 says:
January 8, 2011 at 10:13 am
dave springer says:
It changes for snow/ice but it’s a constant otherwise.
See GISS GCM ModelE description and reference manual:
http://www.giss.nasa.gov/tools/modelE/modelE.html
“I don’t see where it says that.”
Rather than giving you a fish I’m going to teach you how to fish. Search the manual for the word “albedo”. Let me know what you find.
wayne says:
January 6, 2011 at 10:44 pm
Prove it and get a Nobel prize for overturning basic thermodynamic properties of gases established by experiment 150 years ago.
David L.
An insulator restricts the flow of energy across a boundary. It can do it by restricting conduction, convection, radiation, or all three.
It’s you who needs to learn what an insulator does and how it works. CO2 is an insulator and it works via impeding long wave infrared radiation energy flow from surface to space while doing nothing to impede the flow of short wave energy from sun to surface. It isn’t rocket science and was demonstrated experimentally 150 years ago. Maybe you should reproduce Tyndall’s experiments and try to prove him wrong. You’d be the first to do it.
Dave Springer says:
The only reference I find to the word “albedo” is this paragraph under Sea Ice Model:
Am I missing something?
Hey Springer, if you still “have your ears on:”
“jae says:
January 6, 2011 at 5:31 am
“You, and nearly everyone else, are arguing that a “greenhouse effect” explains that extra heating.”
Been skiing, so not paying much attention. Just in case you are still looking at this, YOU STILL DON’T GET THE POINT! There is NO greenhouse effect!!!!
jae says:
Well, maybe not in jae’s universe there isn’t, but there is in the actual universe that the rest of us inhabit. But, hey, continue to marginalize yourself by arguing scientifically-ridiculous things if you want to!