Guest Post by Willis Eschenbach
“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.
There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere”, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity.
Now, you can’t just look at the direct change in solar forcing versus the change in temperature to get the long-term sensitivity. All that will give you is the “instantaneous” climate sensitivity. The reason is that it takes a while for the earth to warm up or cool down, so the immediate change from an increase in forcing will be smaller than the eventual equilibrium change if that same forcing change is sustained over a long time period.
However, all is not lost. Figure 1 shows the annual cycle of solar forcing changes and temperature changes.
Figure 1. Lissajous figure of the change in solar forcing (horizontal axis) versus the change in temperature (vertical axis) on an annual average basis.
So … what are we looking at in Figure 1?
I began by combining the NASA solar data, which shows month-by-month changes in the solar energy hitting the earth, with the albedo data. The solar forcing in watts per square metre (W/m2) times (1 minus albedo) gives us the amount of incoming solar energy that actually makes it into the system. This is the actual net solar forcing, month by month.
Then I plotted the changes in that net solar forcing (after albedo reflections) against the corresponding changes in temperature, by hemisphere. First, a couple of comments about that plot.
The Northern Hemisphere (NH) has larger temperature swings (vertical axis) than does the Southern Hemisphere (SH). This is because more of the NH is land and more of the SH is ocean … and the ocean has a much larger specific heat, meaning it takes more energy to warm the ocean than to warm the land.
We can also see the same thing reflected in the slope of the ovals. The slope of the ovals is a measure of the “lag” in the system. The harder it is to warm or cool the hemisphere, the larger the lag, and the flatter the slope.
So that explains the red and the blue lines, which are the actual data for the NH and the SH respectively.
For the “lagged model”, I used the simplest of models. It uses an exponential function to approximate the lag, along with a variable lambda_0 (λ), which is the instantaneous climate sensitivity. It models the process in which an object is warmed by incoming radiation: at first the warming is fairly fast, but as time goes on the warming gets slower and slower, until it finally reaches equilibrium. The length of time it takes to warm up is governed by a “time constant” called tau (τ). I used the following formula:
ΔT(n+1) = λ·ΔF(n+1)/τ + ΔT(n)·exp(−1/τ)
where ∆T is the change in temperature, ∆F is the change in forcing, lambda (λ) is the instantaneous climate sensitivity, “n” and “n + 1” are the times of the observations, and tau (τ) is the time constant. I used Excel’s “Solver” tool to calculate the values that give the best fit for both the NH and the SH. The fit is actually quite good, with an RMS error of only 0.2°C for the NH and 0.1°C for the SH.
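For those who would rather see the recursion in code than in a spreadsheet, here is a minimal sketch of the lag model and of the least-squares fitting step (which is essentially what the “Solver” fit does). The synthetic forcing series, starting guesses, and function names below are illustrative assumptions, not the actual NASA/albedo data or the spreadsheet’s internals.

```python
# Minimal sketch of the one-box lag model and the least-squares fit.
# The forcing series here is synthetic and purely illustrative.
import numpy as np
from scipy.optimize import minimize

def lagged_response(dF, lam, tau):
    """Iterate ΔT(n+1) = λ·ΔF(n+1)/τ + ΔT(n)·exp(−1/τ), month by month."""
    dT = np.zeros_like(dF)
    decay = np.exp(-1.0 / tau)
    for n in range(1, len(dF)):
        dT[n] = lam * dF[n] / tau + dT[n - 1] * decay
    return dT

def fit_lambda_tau(dF, dT_obs):
    """Least-squares fit of (lambda_0, tau), as done with Excel's Solver."""
    def rms(params):
        lam, tau = params
        return np.sqrt(np.mean((lagged_response(dF, lam, tau) - dT_obs) ** 2))
    return minimize(rms, x0=[0.05, 2.0], method="Nelder-Mead").x

# Illustrative usage with a made-up annual forcing cycle (hypothetical numbers):
months = np.arange(120)
dF = 60.0 * np.sin(2 * np.pi * months / 12)   # W/m2, assumed amplitude
dT_obs = lagged_response(dF, 0.08, 1.9)       # pretend "observations"
print(fit_lambda_tau(dF, dT_obs))             # recovers roughly [0.08, 1.9]
```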
Now, as you might expect, we get different numbers for both lambda_0 and tau for the NH and the SH, as follows:
Hemisphere    lambda_0    tau (months)
NH            0.08        1.9
SH            0.04        2.4
Note that (as expected) it takes longer for the SH to warm or cool than for the NH (tau is larger for the SH). In addition, as expected, the SH changes less with a given amount of heating.
Now, bear in mind that lambda_0 is the instantaneous climate sensitivity. However, since we also know the time constant, we can use that to calculate the equilibrium sensitivity. I’m sure there is some easy way to do that, but I just used the same spreadsheet. To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.
The result is that the equilibrium climate sensitivity to a change in forcing from a doubling of CO2 (3.7 W/m2) is 0.4°C in the Northern Hemisphere and 0.2°C in the Southern Hemisphere. This gives us an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.
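The “easy way” does exist, for what it’s worth: setting ΔT(n+1) = ΔT(n) in the recursion above gives the equilibrium in closed form, ΔT_eq = λ·ΔF/(τ·(1 − exp(−1/τ))). Here is a minimal sketch that checks the closed form against simply iterating a sustained 3.7 W/m2 step, as the spreadsheet does; the lambda_0 and tau values are taken from the table above.

```python
# Check the closed-form equilibrium against iterating the recursion.
import math

def equilibrium_dT(lam, tau, dF=3.7):
    """ΔT_eq from setting ΔT(n+1) = ΔT(n) in the lag recursion."""
    return lam * dF / (tau * (1.0 - math.exp(-1.0 / tau)))

for name, lam, tau in [("NH", 0.08, 1.9), ("SH", 0.04, 2.4)]:
    dT, decay = 0.0, math.exp(-1.0 / tau)
    for _ in range(100):                  # iterate a sustained 3.7 W/m2 step
        dT = lam * 3.7 / tau + dT * decay
    print(name, round(dT, 2), round(equilibrium_dT(lam, tau), 2))
# Prints roughly: NH 0.38 0.38 and SH 0.18 0.18, i.e. ~0.4°C and ~0.2°C.
```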
Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.
w.
NOTE: The spreadsheet used to do the calculations and generate the graph is here.
NOTE: I also looked at modeling the change using the entire dataset which covers from 1984 to 1998, rather than just using the annual averages (not shown). The answers for lambda_0 and tau for the NH and the SH came out the same (to the accuracy reported above), despite the general warming over the time period. I am aware that the time constant “tau”, at only a few months, is shorter than other studies have shown. However … I’m just reporting what I found. When I try modeling it with a larger time constant, the angle comes out all wrong, much flatter.
While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.

stevefitzpatrick says: “The conventional definition of equilibrium sensitivity is the temperature a nearly infinite time after CO2 doubles.”
There is no such thing as “nearly infinite”, so I assume you mean the time at which the remaining response is sufficiently negligible that it would not change the last digit of the reported value if added.
Since the models should be giving an exponentially decaying rate of change in response to an instantaneous doubling (i.e. dT/dt = (C/τ)·e^(−t/τ), so T = C − C·e^(−t/τ)), they wouldn’t ever actually reach a true equilibrium state, but they can get arbitrarily close to it (meaning they are “at” it to within the accuracy desired). Of course, if one knows the exact equations rather than having results pop out of a model, one can calculate the value of the asymptote exactly.
Allan MacRae said on May 29, 2012 at 3:01 am
Newton’s Law of Universal Gravitation states that every point mass in the universe attracts every other point mass with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. Newton had no Theory of Gravitation and categorically rejected the idea. He published this in 1686.
The first generally accepted Theory of Gravitation was Einstein’s General Relativity where gravity is a consequence of the curvature of spacetime governing the motion of inertial objects. This was published in 1916.
How on Earth (or off it if you prefer) did a Theory published in 1916 evolve into a Law published in 1686?
Geoff Alder said on May 29, 2012 at 8:15 am
You are missing nothing. The obvious follow-on question to ask is: “Why don’t “climatologists” use an appropriate measure?”
Willis,
You have only calculated the high-frequency response of the climate system, which acts as a low-pass filter, i.e. slow perturbations have a bigger impact than fast perturbations.
see http://members.casema.nl/errenwijlens/co2/Climate_sensitivity_period.gif
Billy Liar said on May 29, 2012 at 1:33 pm
It would appear that those in climate “science” have a sound grasp of control systems; they have been controlling what we read about climate in the MSM and climatology journals for a quarter of a century now. Unfortunately…
This result is very different from observationally based estimates in the scientific literature.
For the first couple of hits on Google Scholar:
An Observationally Based Estimate of the Climate Sensitivity: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%282002%29015%3C3117%3AAOBEOT%3E2.0.CO%3B2
“From the probability distribution of T2 we obtain a 90% confidence interval, whose lower bound (the 5-percentile) is 1.6 K. The median is 6.1 K, above the canonical range of 1.5–4.5 K; the mode is 2.1 K.”
Using multiple observationally-based constraints to estimate climate sensitivity: http://www.image.ucar.edu/idag/Papers/Annan_Constraints.pdf
“The resulting distribution can be represented by (1.7, 2.9, 4.9) in the format used throughout this paper. That is to say, it has a maximum likelihood value of 2.9°C, and […] a […] range of 1.7–4.9°C (95%).”
If my cursory reading of your article gleaned your methods correctly, then I suspect that by analysing annual cycles you’re not picking up all the elements of lag in the system. Longer ones, such as the propagation of heat into the deep ocean, take 50 years or so and therefore do not appear in annual data. Because the annual analysis doesn’t find most of that lag, your time constant is very low, and I suspect you’re missing most of the warming due to the increase in CO2.
Andrew says: “Since the models should be giving an exponentially decaying rate of change in response to an instantaneous doubling (i.e. dT/dt = (C/τ)·e^(−t/τ), so T = C − C·e^(−t/τ)), they wouldn’t ever actually reach a true equilibrium state, but they can get arbitrarily close to it (meaning they are ‘at’ it to within the accuracy desired).”
But ΔT is in fact (Tmax+Tmin)/2; Tmax is a function of an influx on the order of 880 W/m2, and Tmin is where the influx is about 150 W/m2. The system oscillates daily and yearly.
richardscourtney says:
May 29, 2012 at 11:34 am
Philip Bradley:
NO! You are absolutely wrong.
Go read the wikipedia page. And note I said a casual relationship, rather than A causes B. The latter being the logical fallacy.
http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
To my saying (May 29, 2012 at 10:51 am):
“1. The amount of radiative forcing produced by a doubling of the CO2 concentration is not a regular fixed amount. The IPCC’s logarithmic formula from which the fixed amount results is false. The correct formula can be derived from the Beer-Lambert law of optics and it follows an inverse exponential law, not a logarithmic one.”
Phil says (May 29, 2012 at 12:56 pm):
‘ Not true, the logarithmic dependence does not come from the Beer-Lambert Law which applies in optically thin situations, but from the optically thick situation which applies to CO2 absorption in the IR in our atmosphere. Look up ‘Curve of Growth’ to see a derivation.
Basically, weak lines have a linear dependence on concentration, moderately strong lines have a logarithmic dependence and very strong lines a square root dependence.’
What I said is true. I never said that the logarithmic relationship is derived from the Beer-Lambert law, which I agree does apply to optically-thin lines. But it also applies to groups of optically-thin lines and for that reason it can be applied to entire absorption/emission wavebands. The thicknesses of the individual lines become irrelevant in that case. Such an application provides a straightforward method of deriving a general formula for the absorption of terrestrial surface radiance by atmospheric CO2. This has the form:
A = S.k.[1 – exp(-g.C)]
where A is the mean radiance absorbed by the atmospheric CO2 (in W/sq.m);
S is the mean surface radiance (in W/sq.m);
k is the fraction of S that is emitted on the GHG’s absorption wavelengths;
g is a constant that is specific to CO2 (ie. different for different GHGs);
C is the concentration of CO2 (in ppmv, although this is merely a convenient approximate metric to use in place of the specific mass of CO2 in units of kg/sq.m).
If we say that a fraction (p) of this absorbed power is returned to the planet’s surface by whatever means, then the amount of surface radiance being thus recycled (R) is simply the product (p.A) and this leads us to the basic equation:
R = p.S.k.[1 – exp(-g.C)]
Suppose now that the CO2 concentration is increased to a new level C’ and that this causes the amount of power being recycled back to the surface to increase to R’. Then the formula just derived tells us that
R’ = p.S.k.[1 – exp(-g.C’)]
The amount of ‘radiative forcing’ (F, say) produced at the surface by the increase in CO2 concentration from C to C’ is simply R’ – R, and hence we obtain:
F = p.S.k.[1 – exp(-g.C’)] – p.S.k.[1 – exp(-g.C)]
= p.S.k.[exp(-g.C) – exp(-g.C’)]
Compare this with the IPCC’s logarithmic formula for radiative forcing from CO2:
F = 5.35.Ln(C’/C)
As you can see, they are completely different formulae following different mathematical laws that imply different physical laws as their bases.
I looked up ‘Curve of Growth’ as per your instruction and could not find any derivation of the IPCC’s logarithmic formula anywhere. Perhaps you could provide one, or at least a link to one?
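To make the disagreement concrete, here is a minimal numerical sketch comparing the two functional forms. The parameter values p, S, k and g below are hypothetical placeholders, chosen only to show that the shapes diverge; no one in this thread has claimed those numbers.

```python
# Compare the exponential-saturation form with the IPCC logarithmic form.
import math

p, S, k, g = 0.5, 390.0, 0.2, 0.005       # hypothetical values, illustration only

def F_exponential(C0, C1):
    """F = p.S.k.[exp(-g.C0) - exp(-g.C1)], the form derived above."""
    return p * S * k * (math.exp(-g * C0) - math.exp(-g * C1))

def F_logarithmic(C0, C1):
    """F = 5.35.Ln(C1/C0), the IPCC formula."""
    return 5.35 * math.log(C1 / C0)

for C0 in (140, 280, 560):                # successive doublings
    print(C0, round(F_exponential(C0, 2 * C0), 2), round(F_logarithmic(C0, 2 * C0), 2))
# The logarithmic form gives the same forcing (~3.71 W/m2) for every doubling;
# the exponential form gives less and less forcing as C grows.
```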
DocMartyn says: “But ΔT is in fact (Tmax+Tmin)/2; Tmax is a function of an influx on the order of 880 W/m2, and Tmin is where the influx is about 150 W/m2. The system oscillates daily and yearly.”
I was making a comment about the functional form of a climate model’s response to a step change forcing. Not about how the real world reacts to the insolation cycles within the day and the year.
But, by the way, one could use this same kind of model to simulate a day-and-night cycle. It is just that the solution to the differential equation (i.e. dT/dt + T/τ = f(t)) is not the same as the solution for a step change. In this case, T refers to the, say, minute-to-minute or hourly temperature observed within the day, not the “daily mean” (i.e. (Tmax+Tmin)/2), and f(t), rather than being a Heaviside function, is the time evolution of the daily insolation cycle. How well this model would work depends on how well the differential equation describes the system. But climate models give no indication that their makers expect this form to be inadequate, since this kind of differential equation can easily be fit to climate model output if you know the sensitivity and response time. So the functional form I describe is basically what models do. Whether that is how reality works is another question.
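As a rough illustration of that point, here is a minimal sketch of dT/dt + T/τ = f(t) driven by an idealized daily insolation cycle instead of a step; the forcing shape and the value of τ are assumptions chosen for illustration, not climate-model output.

```python
# Forward-Euler integration of dT/dt = f(t) - T/tau with a daily forcing cycle.
import math

tau = 6.0                                  # hours; hypothetical response time
dt = 0.1                                   # hours per step
T = 0.0
for step in range(int(48 / dt)):           # integrate over two days
    t = step * dt
    f = max(0.0, math.sin(2 * math.pi * t / 24))  # daylight-only forcing, idealized
    T += dt * (f - T / tau)
    if step % int(6 / dt) == 0:
        print(f"t = {t:5.1f} h, T = {T:.3f}")
# T lags the forcing and its swing is damped: low-pass behaviour, as described.
```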
Philip Bradley said on May 29, 2012 at 3:12 pm
While casual relationships are not necessarily causal, marriages usually are, hence the popular sobriquet: She Who Must be Obeyed. Even though marriage and death rates are well-correlated, we do not believe that marriages cause death, or vice versa. That would be fallacious.
Guest Post by Willis Eschenbach
“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.
…Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.
w.
===============================================================
One important hole is in the phrase “net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation”. Longwave SOLAR radiation has somehow, unfortunately, disappeared through that hole. Let us reinstate it.
Of course, we’ll immediately face the problem that the so-called “greenhouse gases” also block a part of the incoming longwave SOLAR radiation, thus contributing to cooling. This is a severe blow to the AGW hypothesis, although the question about the net effect remains open. The AGW people would not like it.
People need to keep in mind that this is real data about how the real climate system operates.
This is a complicated system and I will take real data over theory anytime.
Compiled from high-resolution data boiled down to monthly averages over 14 years. What else are we looking for? The infinite/10,000 years it takes for equilibrium climate sensitivity to occur?
For example, the global albedo values have declined by more in relative terms than the global temperature series has increased (global brightening?). But that means the net/total greenhouse effect (about 150 W/m2) has declined by 0.13 W/m2 over the period, not risen by 5.35 ln(CO2,1998/CO2,1984).
Dig into it – this is an important dataset.
we do not believe that marriages cause death, or vice versa. That would be fallacious.
I’ll suggest the causal relationship involves birth.
Funny how other peoples typos are glaring, but you often can’t see your own.
KR,
Thanks. So where is the model (in control-system terms) documented? Any idea?
Quoting Andrew (May 29, 2012 at 2:07 pm): stevefitzpatrick says: “The conventional definition of equilibrium sensitivity is the temperature a nearly infinite time after CO2 doubles.”
In simple exponential decay situations, the residual amount halves every increment of time equal to t × ln(2) = 0.6931 × t (which you are supposed to learn in 4-H club), where t is the decay time constant. The decay time constant is the time it would take to reach zero if the rate of decay stayed at its original value at the start of the decay. And you are also supposed to know that the amount decays to 5% in three time constants, and to 1% in 5 time constants. So in 10 time constants the remainder would be 0.01%, close enough for most people.
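Those rules of thumb are easy to check with a few lines:

```python
# Remaining fraction of a simple exponential decay after n time constants.
import math
for n in (1, 3, 5, 10):
    print(f"{n} time constants: {100 * math.exp(-n):.4f}% remaining")
# 1τ → ~36.8%, 3τ → ~4.98%, 5τ → ~0.67%, 10τ → ~0.0045%
```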
You can get an even lower sensitivity if you calculate it with the diurnal cycle, and that points to a strong frequency dependence of the response. This is what Hansen means by the response function (or the Green’s function, if you are mathematical). If you add an impulse, the immediate response is quite slow, but over time it gets to equilibrium. Long-period cycles like the 11-year solar cycle have sensitivities that would imply several degrees per doubling. Basically, by oscillating the system, it can’t respond as fully as if you just gave it a steady forcing in one direction, and the higher the frequency, the less the response. I am sure there is an engineering analogy with harmonic oscillators.
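The engineering analogy is a first-order low-pass filter: a sinusoidal forcing of angular frequency ω sees its equilibrium response reduced by the factor 1/sqrt(1 + (ωτ)²). A minimal sketch, with an assumed illustrative time constant:

```python
# Amplitude response of a first-order low-pass system to periodic forcing.
import math

tau_years = 5.0                            # hypothetical response time
for period_years in (1.0, 11.0, 100.0):    # annual, solar-cycle, secular
    omega = 2 * math.pi / period_years
    gain = 1.0 / math.sqrt(1.0 + (omega * tau_years) ** 2)
    print(f"period {period_years:6.1f} yr: fraction of full response = {gain:.2f}")
# Fast cycles are strongly attenuated; slow forcings see nearly the full response.
```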
Climate Weenie says:
May 29, 2012 at 1:55 pm
Ah, you meant global atmospheric temperatures. Not much heat in the atmosphere compared to the ocean – all the missing heat is stored there.
Philip Bradley says:
May 29, 2012 at 7:37 am
…
Jeez. For the hundredth time, correlation is proof of causation. With the usual statistical caveats.
Use of the phrase ‘Correlation is not evidence of causation.’ is proof the utterer doesn’t understand science or statistics.
Things like this always leave a smile on my face, as either the poster is joking or they simply don’t understand. Causality is a very well understood concept. Correlation’s relationship to causality is less so. This (causality) is decidedly not a statistical phenomenon, but its impact (observed correlation) may be seen in observational data.
Sadly, I think the conflation of these is due in large part to the propensity to replace a real theoretical construction with an observational statistical analysis. The latter has no real conception of causality, only correlation. The former makes (hopefully) testable predictions. The predictions ought to demonstrate some amount of correlation. Moreover, if you remove the inputs from the model and have it generate noise, the output ought not to be correlated in that case; otherwise all you’ve demonstrated is an inability to distinguish between signal and noise, which also means your theory (or mathematical model) is junk.
Which I think happened at some point to some hockey stick thingy.
The logarithmic “climate sensitivity” is a point where Phil and I part company.
A fixed temperature increase per doubling of CO2 means a change of ΔT for CO2 going from 280 ppm to 560 ppm. It also means the same change going from 1 ppm to 2 ppm, or from one CO2 molecule per cubic metre to two CO2 molecules per cubic metre.
“Approximately logarithmic” doesn’t mean anything; “logarithmic” has a precise meaning.
Approximately logarithmic is also approximately linear, or it could be fitted to the function:
Y = exp(−1/x²)
There is NO earth climate experimental data that allows an unambiguous decision between these three possibilities, or many other possible mathematical functions.
As for Beer’s Law, sometimes referred to as the Beer-Lambert Law: it was an approximate relationship from chemistry for absorption in DILUTE solutions of a given solute, and some will argue that it has an optical application as well, in that if a given thickness (t) of an absorbing optical glass (or other specimen) attenuates an optical beam of a given input wavelength by 1/2, then double that thickness will leave only 1/4 remaining.
But a necessary condition for the application of Beer’s law is that the output is identical to the input except for quantity. Photons go in and some of them die; PERMANENTLY.
Many optical materials fail to obey Beer’s law because the photons don’t stay dead; they resurrect at some other wavelength, so the transmitted POWER is much greater than what is calculated using Beer’s law.
And that is what happens in the atmosphere with CO2 and other GHGs. The long-wavelength IR absorbed by GHG molecules doesn’t stay absorbed; the molecule “fluoresces” at some other wavelength, so the power transmission does not decay exponentially.
That’s why the “CO2 saturation” notion is a non-starter. CO2 absorbs LWIR; re-emits some other wavelength, and then is ready to grab another LWIR photon from the surface; so it never really “saturates”.
But regardless of how atmospheric CO2 absorbs, and emits LWIR photons; that’s a far cry from showing a resultant increase in the Temperature of the rest of the planet.
Physical systems which appear to follow logarithmic relationships (the same as exponential in reverse), such as radioactive decay, do so in a statistically noisy manner. The exp(−t/τ) is only the statistical average relationship; it does not predict when the very next decay event will occur.
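A tiny sketch of that last point: individual decay waits are random, and exp(−t/τ) only emerges as the ensemble average (the τ below is arbitrary):

```python
# Individual exponential waiting times are random; only the average is exp(-t/tau).
import math
import random

tau = 10.0
waits = [random.expovariate(1.0 / tau) for _ in range(100_000)]
print(sum(waits) / len(waits))             # ≈ tau, the mean lifetime
print(sorted(waits)[len(waits) // 2])      # ≈ tau·ln(2), the half-life
```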
Guest Post by Willis Eschenbach
…forcing from a doubling of CO2 (3.7 W/m2)…
==========================================================
This is another hole. The 3.7 W/m2 is derived from the notion that the “greenhouse gases” warm the surface from a freezing −18 degrees Celsius on average to a warm +15 degrees Celsius on average. −18 degrees Celsius is the temperature in the freezer. I suggest you turn off the freezer, open it, then close the opening with a glass lid (blocking the IR radiation) and wait till the temperature rises to +15 degrees Celsius by means of back radiation.
Wait, I have a better idea. Let us just read about Professor R. W. Wood’s experiment with back radiation: http://www.wmconnolley.org.uk/sci/wood_rw.1909.html . No warming effect through back radiation! Now we know the truth. And the SOLAR IR radiation is there! The glass lid cooled the box by 10 degrees by blocking a good portion of the SOLAR IR radiation! The AGW people would not like it.
Jim D says:
Well put…and thus bears repeating. I think that this is indeed likely the major problem with Willis’s analysis. Actually, the analysis isn’t too dissimilar to this analysis that someone pointed me to by some guy named George White: http://www.palisad.com/co2/eb/eb.html
As I told that person, the first thing that you should do is repeat the analysis using a climate model and see what sensitivity you predict for the model on the basis of this analysis. The advantage of this is that in the model you know what the answer is because most of the climate models have had their sensitivity well-determined. If your method correctly diagnoses the model’s climate sensitivity (which I am quite sure Willis’s method…and this other guy George White’s method won’t)…then it at least MIGHT work in the more complicated real world. However, if it doesn’t work in the simplified world of a climate model, it is quite unlikely that it will magically work in the real world!
By the way, there has been some work on looking at the seasonal cycle and comparing it to climate models in order to try to diagnose the climate sensitivity; see here: http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3865.1 . Their conclusion is “Subject to a number of assumptions on the models and datasets used, it is found that climate sensitivity is very unlikely (5% probability) to be either below 1.5–2 K or above about 5–6.5 K, with the best agreement found for sensitivities between 3 and 3.5 K.”
George E. Smith; says:
May 29, 2012 at 6:27 pm
That’s why the “CO2 saturation” notion is a non-starter. CO2 absorbs LWIR; re-emits some other wavelength, and then is ready to grab another LWIR photon from the surface; so it never really “saturates”.
=========
Why assume that the absorbed energy will come in the form of a photon from the surface? It is more likely that the CO2 molecule will absorb kinetic energy from N2 and O2 and convert this into radiation, about half of which will then be radiated into space; energy that would otherwise remain in the atmosphere and be conducted back to the surface.
Back radiation? That isn’t radiation being reflected back. It is energy from N2/O2 being radiated back to the surface. The more back radiation from CO2, the more it is cooling the atmosphere.
George E. Smith; says: “And you are also supposed to know that the amount decays to 5% in three time constants, and to 1% in 5 time constants. So in 10 time constants the remainder would be 0.01%, close enough for most people.”
The model’s response time is not necessarily known ahead of time, so they do long runs, and when the computer can no longer register a change, that is the time to “equilibration” (or something statistically indistinguishable from it), from which the most accurate value for the model’s sensitivity can be determined. Given the lengths they run the models for, either they think the response time is centuries or longer, or they have no idea, or they want ridiculous accuracy.
Isn’t the average warming per the land- and satellite-based measurements since 1979 already roughly +0.3°C? … With only an increase in atmospheric CO2 of less than 20%?
Please, tell me that I’m full of sh!t…