Guest Post by Willis Eschenbach
“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.
There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere”, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity.
Now, you can’t just look at the direct change in solar forcing versus the change in temperature to get the long-term sensitivity. All that will give you is the “instantaneous” climate sensitivity. The reason is that it takes a while for the earth to warm up or cool down, so the immediate change from an increase in forcing will be smaller than the eventual equilibrium change if that same forcing change is sustained over a long time period.
However, all is not lost. Figure 1 shows the annual cycle of solar forcing changes and temperature changes.
Figure 1. Lissajous figure of the change in solar forcing (horizontal axis) versus the change in temperature (vertical axis) on an annual average basis.
So … what are we looking at in Figure 1?
I began by combining the NASA solar data, which shows month-by-month changes in the solar energy hitting the earth, with the albedo data. The solar forcing in watts per square metre (W/m2) times (1 minus albedo) gives us the amount of incoming solar energy that actually makes it into the system. This is the actual net solar forcing, month by month.
Then I plotted the changes in that net solar forcing (after albedo reflections) against the corresponding changes in temperature, by hemisphere. First, a couple of comments about that plot.
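As a sketch of that first step (using made-up illustrative numbers, not the actual NASA or Hatzianastassiou values), the net solar forcing is just the top-of-atmosphere solar input times one minus the albedo:

```python
def net_solar_forcing(toa_solar_wm2, albedo):
    """Solar energy actually absorbed by the system, in W/m^2."""
    return toa_solar_wm2 * (1.0 - albedo)

# e.g. a hemisphere receiving 340 W/m^2 at TOA with an albedo of 0.30
absorbed = net_solar_forcing(340.0, 0.30)
print(absorbed)  # roughly 238 W/m^2 makes it into the system
```

The real calculation simply applies this month by month, per hemisphere, using the monthly albedo from the study.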
The Northern Hemisphere (NH) has larger temperature swings (vertical axis) than does the Southern Hemisphere (SH). This is because more of the NH is land and more of the SH is ocean … and the ocean has a much larger specific heat. This means that the ocean takes more energy to heat it than does the land.
We can also see the same thing reflected in the slope of the ovals. The slope of the ovals is a measure of the “lag” in the system. The harder it is to warm or cool the hemisphere, the larger the lag, and the flatter the slope.
So that explains the red and the blue lines, which are the actual data for the NH and the SH respectively.
For the “lagged model”, I used the simplest of models. This uses an exponential function to approximate the lag, along with a variable “lambda_0” which is the instantaneous climate sensitivity. It models the process in which an object is warmed by incoming radiation. At first the warming is fairly fast, but then as time goes on the warming is slower and slower, until it finally reaches equilibrium. The length of time it takes to warm up is governed by a “time constant” called “tau”. I used the following formula:
ΔT(n+1) = λ ΔF(n+1)/τ + ΔT(n) exp(-1/τ)
where ΔT is the change in temperature, ΔF is the change in forcing, lambda (λ) is the instantaneous climate sensitivity, “n” and “n+1” are the times of the observations, and tau (τ) is the time constant. I used Excel’s “Solver” tool to calculate the values that give the best fit for both the NH and the SH. The fit is actually quite good, with an RMS error of only 0.2°C and 0.1°C for the NH and the SH respectively.
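For readers who would rather see the lag model in code than in a spreadsheet, here is a minimal sketch in Python. The crude grid search stands in for Excel’s Solver, and the forcing series below is synthetic; the actual fit used the monthly NASA/albedo forcing and the hemispheric temperatures.

```python
import math

def lagged_response(dF, lam, tau, dT0=0.0):
    """One-box lag model: dT[n+1] = lam*dF[n+1]/tau + dT[n]*exp(-1/tau)."""
    decay = math.exp(-1.0 / tau)
    dT, out = dT0, []
    for f in dF:
        dT = lam * f / tau + dT * decay
        out.append(dT)
    return out

def rms_error(obs, model):
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, model)) / len(obs))

def fit(dF, dT_obs):
    """Brute-force grid search over (lambda_0, tau), a stand-in for Solver."""
    best = (None, None, float("inf"))
    for lam in [x / 1000 for x in range(10, 200)]:   # 0.010 .. 0.199
        for tau in [x / 10 for x in range(5, 60)]:   # 0.5 .. 5.9 months
            err = rms_error(dT_obs, lagged_response(dF, lam, tau))
            if err < best[2]:
                best = (lam, tau, err)
    return best
```

Feeding `fit` a synthetic seasonal forcing cycle generated with known (λ, τ) recovers those values, which is a useful sanity check on the procedure.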
Now, as you might expect, we get different numbers for both lambda_0 and tau for the NH and the SH, as follows:
Hemisphere    lambda_0    tau (months)
NH            0.08        1.9
SH            0.04        2.4
Note that, as expected, it takes longer for the SH to warm or cool than for the NH (tau is larger for the SH). Also as expected, the SH temperature changes less for a given amount of heating.
Now, bear in mind that lambda_0 is the instantaneous climate sensitivity. However, since we also know the time constant, we can use that to calculate the equilibrium sensitivity. I’m sure there is some easy way to do that, but I just used the same spreadsheet. To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.
The result is an equilibrium climate sensitivity to the forcing change from a doubling of CO2 (3.7 W/m2) of 0.4°C in the Northern Hemisphere and 0.2°C in the Southern Hemisphere. This gives an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.
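As it happens, the “easy way” alluded to above exists: under a sustained forcing the recurrence has a closed-form fixed point, ΔT_eq = λ ΔF / [τ (1 − exp(−1/τ))]. A quick check, stepping the recurrence the way the spreadsheet does and comparing it to the closed form, using the fitted values above:

```python
import math

def equilibrium_dT(lam, tau, dF):
    """Closed-form fixed point of dT = lam*dF/tau + dT*exp(-1/tau)."""
    return lam * dF / tau / (1.0 - math.exp(-1.0 / tau))

def simulated_dT(lam, tau, dF, months=120):
    """Step the recurrence itself under a sustained forcing, spreadsheet-style."""
    dT, decay = 0.0, math.exp(-1.0 / tau)
    for _ in range(months):
        dT = lam * dF / tau + dT * decay
    return dT

# Sustained 3.7 W/m^2 (a CO2 doubling), with the fitted NH and SH values
print(round(equilibrium_dT(0.08, 1.9, 3.7), 1))  # NH: 0.4 C
print(round(equilibrium_dT(0.04, 2.4, 3.7), 1))  # SH: 0.2 C
print(round(simulated_dT(0.08, 1.9, 3.7), 1))    # matches the closed form
```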
Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.
w.
NOTE: The spreadsheet used to do the calculations and generate the graph is here.
NOTE: I also looked at modeling the change using the entire dataset which covers from 1984 to 1998, rather than just using the annual averages (not shown). The answers for lambda_0 and tau for the NH and the SH came out the same (to the accuracy reported above), despite the general warming over the time period. I am aware that the time constant “tau”, at only a few months, is shorter than other studies have shown. However … I’m just reporting what I found. When I try modeling it with a larger time constant, the angle comes out all wrong, much flatter.
While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.

Sherlock says:
May 29, 2012 at 8:35 am
Yes, it is accounted for. The NASA solar data that I used includes all of that.
w.
Steven Mosher says:
May 29, 2012 at 9:46 am
Since I mentioned that difference in responses, I’m not sure what you mean. And as usual, your twitter style of posting is useless.
Please, Steven, either make your point or stop with your teasing “I’m so much smarter than you” type of posts.
I suspect you have something to say, and knowing you, I suspect it might be true and valuable and accurate. However, I cannot glean it from your mini-post, and so I invite you to either tell us what it is, or go away. I’m not willing to play your game.
w.
My issue [or one of them] with “climate sensitivity” is this: What questions are better answered or described by calculating ‘sensitivity’, especially if it is not constant? To what use will it be put?
It seems it is an attempt to reduce an embarrassment of complexity to a single quotable number which is the first derivative of something called “forcing”. This could then be used/abused by the disingenuous. So, if global CO2 levels are falling during the Northern Hemisphere summer, then could someone legitimately claim that “climate sensitivity” is actually increasing?
It reminds me of the story that President Nixon became the first President to use the third derivative of prices [with respect to time] when he said:
“The rate of increase in inflation is falling”. True, maybe. But not very helpful.
Eric Adler says:
May 29, 2012 at 10:38 am
Your comment is a joke, right?
I say that because claiming something is “nonsense”, with no attempt to even begin to tell us why it is “nonsense”, is a joke in the world of science. I may well be wrong, I have been many times, but you have not said one word about where or how I might have gone off the rails.
w.
PS—As far as I can find out, Schwartz didn’t “take it back”. He modified his estimate to make it ≈ 8 years.
I would also note that the climate models I’ve analyzed use a time constant “tau” which is on the order of two to three years. The question is obviously still open. Note that I make no claims above that this is the only time constant in the system.
Philip Bradley:
Repeating an error does not stop it being an error however many times it is repeated.
At May 29, 2012 at 7:37 am you say;
NO! You are absolutely wrong.
The absence of correlation disproves causation, but the presence of correlation indicates nothing about causation except the possibility of a causal link. This is a matter of logic and statistics have nothing to do with it.
Try this. Say out loud to yourself 100 times
“Correlation is not evidence of causality”.
Richard
OOps! of course I intended to write
The absence of correlation disproves causation, but the presence of correlation indicates nothing about causation except the possibility of a causal link.
Sorry.
Richard
[Fixed. -w.]
slp says:
May 29, 2012 at 4:31 am
I will disagree with that. Assume that we move from winter forcing to summer forcing in one step. After that, ΔF = 0. However, the new forcing will still cause things to warm up. Instead of “a change in forcing”, perhaps what you mean is “the difference between incoming energy and energy emitted by the surface”. A value defined like that might make sense if it was the difference of the fourth roots. At any rate, according to Stefan’s equation, there should be a fourth power in there somewhere.
“This is because more of the NH is land and more of the SH is ocean … and the ocean has a much larger specific heat.” i.e. Btu/(lb·°F) or kcal/(kg·°C)
Which simply points out why any warming from any variable will always show up first in the NH: it has a low specific heat, meaning any sensible heat change will be larger than in the SH. More importantly, it demonstrates the fallacy of measuring heat in units of temperature (partial heat) instead of units of total heat in Btu/lb or kcal/kg.
Comparing the SH and NH is an apples-and-oranges fallacy as well, given that an ocean-dominated hemisphere retains more heat latently than sensibly. This is why Hansen isn’t a scientist at all; he is an incompetent boob for making such an elemental mistake, basing assertions on partial heat measurements, unlike reputable scientists who recognize that heat/energy has two components.
jorgekafkazar says:”Why an exponential function?”
Because the model for temperature response to forcing being used is basically a first-order linear time-invariant differential equation. The generic solution for an unperturbed system that is not at the equilibrium state (which we define as zero) is the initial value times e^(-t/tau). That is, the system tends to approach its equilibrium state when undisturbed, decaying exponentially. Starting away from equilibrium, it approaches equilibrium asymptotically; it never quite reaches it (that would take infinite time), but it can get arbitrarily close. The system likewise asymptotically approaches a “new equilibrium” state when forced by a step function, either above or below the original equilibrium. This kind of system is encountered fairly frequently by engineers, especially electrical engineers, who will probably recognize it from modeling circuits, for example. It also occurs in modeling convective heat transfer.
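A quick numerical illustration of that generic solution (the tau and step size here are arbitrary, chosen only for the demonstration): Euler-stepping dT/dt = −T/τ tracks the analytic exp(−t/τ) decay.

```python
import math

tau, dt = 2.0, 0.001   # time constant and step size (arbitrary, for illustration)
T = 1.0                # initial departure from equilibrium (equilibrium defined as zero)
analytic = lambda t: math.exp(-t / tau)

# Euler-step dT/dt = -T/tau for 4 time units; it should track exp(-t/tau)
for _ in range(4000):
    T += dt * (-T / tau)

print(T, analytic(4.0))  # both ~0.135
```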
W.;
Even on Twitter, Mosh could have spared you ~28 more characters!
Some interjections seem intended only to spread CUD (Confusion, Uncertainty, and Doubt). Mosh certainly likes chewing his!
As for transients, etc., am I wrong in thinking that this has much to do with the finding that OLR varies positively and monotonically with surface temperature, rather than negatively as AGW requires? My take-away from that is that there is no “blanketed heat” available to power the GE. It leaves on the first available radiative train.
“”””” Re John West…Could this be thermostats at work? This also aligns with the Stefan-Boltzmann Law, whereby as the temperature increases it takes less and less heat flux for the same increment of temperature increase. “””””
Well that statement is exactly wrong; it takes ever higher amounts of radiant energy to increase the Temperature by some designated increment; NOT less.
The amount of energy it takes to raise a system Temperature from 1 K to 2 K is microscopic, compared to that required to take a system from 1,000,000 K up to 1,000,001 K
The standard definitions:
Instantaneous sensitivity – Immediate changes upon forcing changes. Given the time constants and thermal inertia for the climate as a whole, seasonal responses are very close to this. Short term up/down changes in temperature don’t penetrate far into the oceans – the thermal content of the top 2.5 meters of sea water is equal to that of the entire atmosphere.
Transient sensitivity – As defined, and as used in the literature, the response over about 20 years. Currently the median estimate of this is 3-3.5C/doubling of CO2. To a first pass estimate this can be considered the time for the well mixed layer of the ocean (75-100 meters) to adjust to a sustained +/- change in forcings.
Equilibrium sensitivity – Hundreds of years out, when changes in forcing have fully percolated/circulated into the deep oceans.
—-
Willis Eschenbach – the calculations you did in this post are most relevant to the “instantaneous sensitivity”, not the “transient sensitivity”. Apples and oranges…
Again, I would refer you to Knutti and Meehl 2006 (http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3865.1), “Constraining Climate Sensitivity from the Seasonal Cycle in Surface Temperature”, where they look at various models and compare the seasonal responses to observational data in multiple (25?) regions to constrain transient sensitivity. If the climate had the transient sensitivity you compute it simply wouldn’t have the instantaneous sensitivity to produce the observed seasonal responses.
There is no backradiation or backscattering or whatever. Gases (whatever gas) simply don’t emit any IR radiation unless they are very hot or burning. Atmospheric gases are at so low a temperature that they simply can’t emit any energy (IR radiation). There is no energy source in the atmosphere.
richardscourtney says: “The absence of correlation disproves causation”
If by “disproves causation” you mean “shows there is no dependence of either variable on the other” this is not necessarily the case. For example, if they are both marginally normally distributed and jointly normally distributed, then the two random variables are independent if their covariance is zero (ie they are uncorrelated) BUT, if they are not jointly normal, they can be uncorrelated while not being independent:
http://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent
Oh, how I’d love to see the radiation “equilibrium” cartoons redone using enthalpy!
It wad frae monie a blunder free us,
An’ foolish notion
Isn’t the average warming per the land- and satellite-based measurements already since 1979 roughly +0.3C?
With only an increase of atmospheric CO2 of less than 20%?
Re: Willis Eschenbach
May 29, 2012 at 11:16 am
Stephen Schwartz revised his original estimate of the effective time constant from ~5 years to 8.5+/-2.5 years. His corresponding estimate of equilibrium sensitivity (final temperature response to doubling CO2) based on that time constant was revised to 1.9+/-1 C. 0.3C is way lower than every estimate I have read, and much lower than even a simple blackbody response (about 1C per doubling). For what it is worth, Schwartz agreed with critics that his first estimate was biased low due to inclusion of too much very short-term response in the autocorrelation calculation, but he continued to defend the revised value (~1.9C per doubling) as reasonably accurate, in spite of howls from many in the climate science establishment (of which aerosol specialist Stephen Schwartz certainly is a member!). (http://www.ecd.bnl.gov/pubs/BNL-80226-2008-JA.pdf)
1.8C to 1.9C per doubling corresponds to the expected warming if the atmosphere’s relative humidity remains constant as the temperature rises (that is, if total atmospheric moisture content rises on average with warmer temperatures to maintain constant relative humidity). Anything higher than 1.9C per doubling requires substantial positive “cloud feedback”. Anything lower than 1.8C requires substantial negative “cloud feedback”. Cloud feedback is uncertain; the available data are just not very good.
“”””” Billy Liar says:
May 29, 2012 at 10:16 am
Steven Mosher says:
May 29, 2012 at 9:46 am
“not even close to right. the instantaneous response is different from the transient response and the equillbrium [sic] response.”……
For the benefit of the people who are wondering what you are talking about; can you define the terms:
instantaneous response
transient response
equilibrium response
and the system to which you are applying these terms. “””””
Well Billy, I am not sure exactly how Mosh defines those terms; particularly “instantaneous response”; NO physical system responds “instantaneously” ; but let me give it a shot at what I “think” he means, and what those terms mean.
First of all, we can dispense with “equilibrium response”; that’s an oxymoron. A system in equilibrium is a stationary system, and it isn’t “responding” to anything, or else it wouldn’t be in equilibrium. Since earth is never even remotely in a state of equilibrium, I suspect that Steven really meant a “Steady State Response”. This would often be referred to as the “Frequency Response”: the result of applying a ‘small’ sinusoidal cyclic disturbance to the system; small, so the system remains ‘linear’. In some sense, Willis’ “Lissajous figures” are a pretty good example of a steady state response, although in his chosen “system”, the whole planet, the varying signal is not exactly small and the system not too linear; but the cyclic return to a familiar state is characteristic of a steady state response, as is the phase shift between drive signal and system response.
For “Instantaneous response”, which I have explained doesn’t exist, a likely substitute would be the “Impulse Response”. Mathematically, an “Impulse” is a short (zero-length) application of a high (infinite) “force”, such that the product of the high force and the small time, which is THE measure of impulse, has a finite value. A perfect example of an impulse response is yesterday’s photo Anthony posted of the USS Iowa’s 16-inch broadside. The gun powder applies an astronomical force to the shell for a very short time, during which the round transits the barrel, and in turn the recoil from that applies an equal force for the same time, in the other direction, to the ship. Long after the shell has reached its target, the ship will still be “responding” to the applied impulse, and will eventually settle down to a new position somewhere to port of where it started. You could do the same thing by swimming up to the Iowa abeam and pushing on the side of the ship as you swim. Sometime next year, if you survive, the ship might move to the same place; well, maybe next century.
Impulse response of physical systems is a well understood discipline. Note it is a TIME RESPONSE unlike the steady state FREQUENCY RESPONSE. Frequency is a figment of our imagination; but “time” actually happens in real time.
So what of “Transient Response”? Well, impulse response most certainly IS a transient response; a response to a transient signal, namely the impulse.
But usually people who mean impulse response, use THAT term specifically, so when they say “Transient Response”, they most likely mean the response to a STEP FUNCTION.
Whereas an impulse signal immediately goes away, a step function changes value and stays there till the end of the universe.
Mathematically, step function and impulse responses are related, at least for linear systems in their linear operating regimes. So they are also related to the steady state response; but you have to know the steady state amplitude relationship AND ALSO the steady state phase relationship.
Now I don’t KNOW that Mosh really meant those things as I described them; but I wouldn’t take any bets against my belief that he does.
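The impulse/step distinction described above is easy to see numerically. A minimal sketch (arbitrary τ, step size, and forcing values, purely for illustration) of the same first-order system driven both ways:

```python
# First-order system dT/dt = (F - T)/tau, Euler-stepped; illustrative only.
def respond(forcing, tau=2.0, dt=0.01):
    T, out = 0.0, []
    for F in forcing:
        T += dt * (F - T) / tau
        out.append(T)
    return out

n = 1000
impulse = [100.0] + [0.0] * (n - 1)   # a brief kick, then the signal goes away
step    = [1.0] * n                   # changes value and stays there

imp = respond(impulse)
stp = respond(step)
# The impulse response jumps, then decays back toward the old equilibrium;
# the step response settles asymptotically toward the new forcing level.
```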
Magic Turtle says:
May 29, 2012 at 10:51 am
1. The amount of radiative forcing produced by a doubling of the CO2 concentration is not a regular fixed amount. The IPCC’s logarithmic formula from which the fixed amount results is false. The correct formula can be derived from the Beer-Lambert law of optics and it follows an inverse exponential law, not a logarithmic one. Consequently, repeatedly doubling the amount of CO2 in the atmosphere produces progressively smaller increments of radiative forcing at each repetition, not equal ones as the IPCC’s formula pretends.
Not true: the logarithmic dependence does not come from the Beer-Lambert Law, which applies in optically thin situations, but from the optically thick situation which applies to CO2 absorption in the IR in our atmosphere. Look up ‘Curve of Growth’ to see a derivation.
Basically, weak lines have a linear dependence on concentration, moderately strong lines have a logarithmic dependence and very strong lines a square root dependence.
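As an aside, the commonly quoted 3.7 W/m2 per doubling falls straight out of the widely used simplified logarithmic fit ΔF = 5.35 ln(C/C0); the 5.35 coefficient is the standard value from the literature, not something derived in this thread. Under a log law every doubling adds the same increment, contrary to the “progressively smaller increments” claim above:

```python
import math

# Widely used simplified CO2 forcing fit: dF = 5.35 * ln(C/C0), in W/m^2
def co2_forcing(c, c0):
    return 5.35 * math.log(c / c0)

print(round(co2_forcing(560.0, 280.0), 1))   # one doubling -> ~3.7 W/m^2
print(round(co2_forcing(1120.0, 560.0), 1))  # the next doubling -> also ~3.7
```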
George E. Smith; says:
“ it takes ever higher amounts of radiant energy to increase the Temperature by some designated increment; NOT less.”
True that. But also true is that it takes less incremental temperature increase to increase heat flux emission by some incremental amount as the temperature increases in a black/grey body in the range of Earthly temperatures.
For example: a black body has to increase 1.9 degrees to increase emission by 10 W/m2 if going from 370 to 380 W/m2, but only a 1.8 degree increase is required to emit an additional 10 W/m2 from 400 to 410 W/m2, and only a 1.7 degree increase for a 10 W/m2 increase from 440 to 450 W/m2.
BB emission increase from 370→380 W/m2 = 1.9 K increase from 284.2K to 286.1K
BB emission increase from 400→410 W/m2 = 1.8 K increase from 289.8K to 291.6K
BB emission increase from 440→450 W/m2 = 1.7 K increase from 296.8K to 298.5K
The temperature increase gets smaller for same heat flux emission increase as the temperature of the body increases.
So, as the temperature of the surface warms it can radiate more heat with less incremental increase in temperature.
In other words the “sensitivity” (temperature increase per unit heat flux) of a black/grey body goes down as the temperature goes up in the Earthly temperature range.
excel graph
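Those blackbody numbers are easy to verify from the Stefan-Boltzmann law, T = (F/σ)^(1/4); the sketch below reproduces the three temperature increments quoted above:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def bb_temperature(flux):
    """Blackbody temperature (K) needed to emit the given flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

# Temperature rise needed for each extra 10 W/m^2 of emission:
# the increment shrinks as the body gets hotter.
for lo, hi in [(370, 380), (400, 410), (440, 450)]:
    dT = bb_temperature(hi) - bb_temperature(lo)
    print(f"{lo}->{hi} W/m^2: {dT:.1f} K")
```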
KR,
Thanks for those definitions. What is the ‘standard’ system model to which these are applied?
George E Smith,
Many thanks for your explanation. I guess like me you have some control engineering background. I often wonder whether anyone in climate science knows anything about control systems.
The conventional definition of transient sensitivity is the surface temperature increase at the time of doubling of CO2 (assumed ~3.7 W/m^2 forcing) if the CO2 concentration increases by 1% per year. Doubling then takes place after 70 years; the temperature increase at that time is the “transient sensitivity”.
The conventional definition of equilibrium sensitivity is the temperature a nearly infinite time after CO2 doubles. What counts as a reasonable approximation of “infinite time” depends on the heat capacity of the deep ocean and how quickly the deep ocean can warm, but the response at 1,000-2,000 years is often considered a good approximation. Almost all of the response is (of course) MUCH quicker than this. The equilibrium sensitivity value is (I think) becoming recognized as not a very useful number, since CO2 is never going to stay doubled for 1,000 years (the oceans will absorb most of it over a couple of centuries or less). The “transient sensitivity” is probably a more meaningful number in terms of estimating how much future warming might actually take place in response to forcing.
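A quick sanity check on the “1% per year for 70 years” convention: compounding 1% annual growth for 70 years does indeed roughly double the concentration.

```python
# 1% per year compounded over 70 years: (1.01)^70 is approximately 2
growth = 1.01 ** 70
print(round(growth, 2))  # ~2.01, i.e. one doubling
```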
There is a possible problem with this analysis in that some of the energy may go into the phase change of water (i.e. melting snow and freezing ice). I seem to recall that surface heat penetrates around 4 m into the rock each year. If snow or ice were of a similar thickness, then large amounts of the yearly energy could go into melting and cooling ice, giving the appearance that the solar radiation was having little effect, whereas the phase change of water involves large amounts of energy for almost no temperature change.
Billy Liar:
Global mean temperatures (not sea surface) peak in the same month as aphelion and trough in the same month as perihelion. Run the analysis again using the lowest atmospheric layer.
Billy Liar – “What is the ‘standard’ system model to which these are applied?”
Those are the climate sensitivity terms in use in the literature – the system is the entire physical climate, both in terms of the observations and in the models.