An Observational Estimate of Climate Sensitivity

Guest Post by Willis Eschenbach

“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.

There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere”, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity.

Now, you can’t just look at the direct change in solar forcing versus the change in temperature to get the long-term sensitivity. All that will give you is the “instantaneous” climate sensitivity. The reason is that it takes a while for the earth to warm up or cool down, so the immediate change from an increase in forcing will be smaller than the eventual equilibrium change if that same forcing change is sustained over a long time period.

However, all is not lost. Figure 1 shows the annual cycle of solar forcing changes and temperature changes.

Figure 1. Lissajous figure of the change in solar forcing (horizontal axis) versus the change in temperature (vertical axis) on an annual average basis.

So … what are we looking at in Figure 1?

I began by combining the NASA solar data, which shows month-by-month changes in the solar energy hitting the earth, with the albedo data. The solar forcing in watts per square metre (W/m2) times (1 minus albedo) gives us the amount of incoming solar energy that actually makes it into the system. This is the actual net solar forcing, month by month.

Then I plotted the changes in that net solar forcing (after albedo reflections) against the corresponding changes in temperature, by hemisphere. First, a couple of comments about that plot.

The Northern Hemisphere (NH) has larger temperature swings (vertical axis) than does the Southern Hemisphere (SH). This is because more of the NH is land and more of the SH is ocean, and the ocean has a much larger specific heat, so it takes more energy to warm the ocean than it does to warm the land.

We can also see the same thing reflected in the slope of the ovals. The slope of the ovals is a measure of the “lag” in the system. The harder it is to warm or cool the hemisphere, the larger the lag, and the flatter the slope.
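As a toy illustration (not the actual data), here is a short Python sketch that plots one sinusoidal forcing cycle against two hypothetical responses, one with a short lag and one with a longer lag; the longer lag gives the flatter, more tilted oval of the kind seen in Figure 1.

```python
# Toy sketch only: illustrative lags and amplitudes, not the NASA/albedo data.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 500)                       # one year
forcing = np.cos(2 * np.pi * t)                  # normalized annual forcing cycle
for lag_months in (1.0, 2.5):
    phase = 2 * np.pi * lag_months / 12.0
    # Longer lag -> response is both delayed and damped (amplitude chosen arbitrarily).
    response = np.cos(2 * np.pi * t - phase) / (1.0 + lag_months)
    plt.plot(forcing, response, label=f"lag = {lag_months} months")
plt.xlabel("forcing (normalized)")
plt.ylabel("response (normalized)")
plt.legend()
plt.show()
```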

So that explains the red and the blue lines, which are the actual data for the NH and the SH respectively.

For the “lagged model”, I used the simplest of models. This uses an exponential function to approximate the lag, along with a variable “lambda_0” which is the instantaneous climate sensitivity. It models the process in which an object is warmed by incoming radiation. At first the warming is fairly fast, but then as time goes on the warming is slower and slower, until it finally reaches equilibrium. The length of time it takes to warm up is governed by a “time constant” called “tau”. I used the following formula:

ΔT(n+1) = λ ΔF(n+1)/τ + ΔT(n) exp(−1/τ)

where ΔT is the change in temperature, ΔF is the change in forcing, lambda (λ) is the instantaneous climate sensitivity, “n” and “n + 1” are the times of the observations, and tau (τ) is the time constant. I used Excel’s “Solver” tool to calculate the values of lambda and tau that give the best fit for both the NH and the SH. The fit is actually quite good, with an RMS error of only 0.2°C and 0.1°C for the NH and the SH respectively.
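For readers who prefer code to a spreadsheet, here is a minimal Python sketch of the same lagged model and a Solver-style fit. The data arrays are synthetic stand-ins; the real inputs would be the monthly changes in net solar forcing (after albedo) and in hemispheric temperature described above.

```python
# Minimal sketch of the lagged model and a least-squares fit for lambda and tau.
# The "observations" below are synthetic, generated from known parameters, so the
# fit simply recovers them; the real fit would use the monthly data from the post.
import numpy as np
from scipy.optimize import minimize

def lagged_model(dF, lam, tau):
    """Step ΔT(n+1) = λ·ΔF(n+1)/τ + ΔT(n)·exp(−1/τ) month by month."""
    dT = np.empty_like(dF)
    prev = 0.0
    for i, f in enumerate(dF):
        prev = lam * f / tau + prev * np.exp(-1.0 / tau)
        dT[i] = prev
    return dT

def rms_error(params, dF, dT_obs):
    lam, tau = params
    return np.sqrt(np.mean((lagged_model(dF, lam, tau) - dT_obs) ** 2))

months = np.arange(36)
dF = 100.0 * np.sin(2 * np.pi * months / 12.0)   # illustrative forcing changes, W/m2
dT_obs = lagged_model(dF, 0.08, 1.9)             # pretend "observed" ΔT, °C

fit = minimize(rms_error, x0=[0.05, 2.5], args=(dF, dT_obs), method="Nelder-Mead")
lam_fit, tau_fit = fit.x
print(lam_fit, tau_fit)   # recovers ~0.08 and ~1.9 for this synthetic example
```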

Now, as you might expect, we get different numbers for both lambda_0 and tau for the NH and the SH, as follows:

Hemisphere    lambda_0    tau (months)
NH            0.08        1.9
SH            0.04        2.4

Note that (as expected) it takes longer for the SH to warm or cool than for the NH (tau is larger for the SH). In addition, as expected, the SH changes less with a given amount of heating.

Now, bear in mind that lambda_0 is the instantaneous climate sensitivity. However, since we also know the time constant, we can use that to calculate the equilibrium sensitivity. I’m sure there is some easy way to do that, but I just used the same spreadsheet. To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.
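Here is a sketch of that equilibrium calculation in code, under my reading that the 3.7 W/m2 is held in the ΔF term of the recursion at every monthly step, using the fitted lambda_0 and tau values from the table above (the linked spreadsheet has the author’s actual calculation).

```python
# Sketch only: assumes the 3.7 W/m2 step is held in ΔF at every step of the recursion.
import numpy as np

def equilibrium_dT(lam, tau, dF_step=3.7, n_months=120):
    """Iterate ΔT(n+1) = λ·ΔF/τ + ΔT(n)·exp(−1/τ) until it levels off."""
    dT = 0.0
    for _ in range(n_months):
        dT = lam * dF_step / tau + dT * np.exp(-1.0 / tau)
    return dT

for name, lam, tau in [("NH", 0.08, 1.9), ("SH", 0.04, 2.4)]:
    closed_form = lam * 3.7 / (tau * (1.0 - np.exp(-1.0 / tau)))   # same steady state
    print(name, round(equilibrium_dT(lam, tau), 2), round(closed_form, 2))
# Gives roughly 0.4 °C for the NH and 0.2 °C for the SH, matching the results below.
```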

The result is that the equilibrium climate sensitivity to a change in forcing from a doubling of CO2 (3.7 W/m2) is 0.4°C in the Northern Hemisphere and 0.2°C in the Southern Hemisphere. This gives us an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.

Comments and criticisms gladly accepted; this is how science works. I put my ideas out there, and y’all try to find holes in them.

w.

NOTE: The spreadsheet used to do the calculations and generate the graph is here.

NOTE: I also looked at modeling the change using the entire dataset which covers from 1984 to 1998, rather than just using the annual averages (not shown). The answers for lambda_0 and tau for the NH and the SH came out the same (to the accuracy reported above), despite the general warming over the time period. I am aware that the time constant “tau”, at only a few months, is shorter than other studies have shown. However … I’m just reporting what I found. When I try modeling it with a larger time constant, the angle comes out all wrong, much flatter.

While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.



252 Comments
BarryW
May 29, 2012 7:14 pm

Two questions come to mind. The first is that it’s unclear whether you can assume that the albedo remains constant. The second is that since you are using an average, are you losing information due to a change in phase angle over time? In other words, shouldn’t the figure show a rotation over time if the response changes and none if it doesn’t?

Matthew R Marler
May 29, 2012 7:23 pm

Now you have: ΔT(n+1) = λ∆F(n+1)/τ + ΔT(n) exp(-1/ τ)
which could as easily have been written as: ΔT(n+1) = a∆F(n+1) + bΔT(n).
This way a and b are not confounded as λ and τ were. Their estimates are correlated, with the correlation depending on the correlation between ∆F(n+1) and ΔT(n), which are probably uncorrelated (? — except maybe in seasons of the year when both decline (fall) or both increase (spring), producing a small positive correlation).
Why is the change in mean temperature a linear function of the change in forcing over the interval, instead of something like the mean (or integrated) forcing?
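A minimal sketch of the reparameterisation suggested above, assuming hypothetical monthly series dF and dT: with a = λ/τ and b = exp(−1/τ), the recursion becomes an ordinary linear regression, so a and b can be estimated directly and λ and τ recovered afterwards. Applied to noiseless data generated by the recursion itself, this recovers the same λ and τ as the direct fit.

```python
# Sketch of fitting a and b by ordinary least squares; dF and dT are assumed to be
# the monthly forcing-change and temperature-change series (hypothetical here).
import numpy as np

def fit_ab(dF, dT):
    y = dT[1:]                                   # ΔT(n+1)
    X = np.column_stack([dF[1:], dT[:-1]])       # [ΔF(n+1), ΔT(n)], no intercept
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    tau = -1.0 / np.log(b)                       # since b = exp(-1/tau)
    lam = a * tau                                # since a = lambda/tau
    return a, b, lam, tau
```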

May 29, 2012 7:46 pm

The Pompous Git says: May 29, 2012 at 2:37 pm
You are well-named, sir. What a load of pedantic nonsense.
Apart from totally missing the point, do you have anything of value to add?

joeldshore
May 29, 2012 9:00 pm

Willis,
I also think you are confused about what your tau represents…It is not the relaxation time for equilibration of the system. In fact, in the limit that the time constant for equilibration of the system approached infinity, the lag time that you compute would approach 3 months. That is to say, in the limit of a very large heat capacity for the system, the temperature would be 90deg out-of-phase with the cyclical forcing. (In other words, the temperature would reach its peak when the value of the forcing crossed through zero…i.e., the maximum temperature would occur at the equinoxes.) In the limit of zero heat capacity in the system, the temperature would be in-phase with the cyclical forcing. (The maximum temperature would occur at the summer solstice.) Hence, the tau that you have computed would always fall in the range of 0 to 3 months, no matter how long the relaxation time of the system is!
You can see this already with a simple one-box model given by a differential equation of the form:
c dT/dt = F(t) – (1/lambda)*T
where c is the heat capacity, T is the temperature (relative to the equilibrium temperature in the absence of any forcing), lambda is the climate sensitivity, t is the time, and F(t) is the forcing [which, for the seasonal cycle, will have a form like F(t) = F_0 * cos(omega*t) where omega = 2*pi if you measure t in years]. I just set up a MATLAB function to solve this equation numerically (although I imagine that it may not be too hard to solve analytically if I had thought about it a little more).
As an example, I find that in this simple model, a lag time of 2.4 months occurs when the relaxation time (given by c*lambda) is 6 months. [In order to get considerably larger relaxation times, say of a few years or more, this simple model requires the temperature data to have a lag time very close to 3 months, but I think a more realistic model with more than one relaxation time could show a lag time of, say, 2.4 months and still have a much slower relaxation to equilibrium.]
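A rough numerical version of that one-box model (the original was done in MATLAB; the parameters here are illustrative). It integrates c·dT/dt = F(t) − T/λ with a one-year sinusoidal forcing and reports the lag between the forcing peak and the temperature peak, which indeed stays between 0 and 3 months however long the relaxation time c·λ is made.

```python
# Illustrative sketch, not the original MATLAB code. Units: time in years,
# forcing amplitude and sensitivity set to 1 so only the relaxation time matters.
import numpy as np

def phase_lag_months(relax_time_years, years=30, steps_per_year=3600):
    lam = 1.0                          # sensitivity (absorbed into the units)
    c = relax_time_years / lam         # heat capacity chosen so c*lambda = relaxation time
    omega = 2.0 * np.pi                # one forcing cycle per year
    dt = 1.0 / steps_per_year
    t = np.arange(0.0, years, dt)
    T = np.zeros_like(t)
    for i in range(1, len(t)):         # simple forward-Euler integration
        F = np.cos(omega * t[i - 1])
        T[i] = T[i - 1] + dt * (F - T[i - 1] / lam) / c
    last = slice(-steps_per_year, None)              # compare over the last full year
    lag_years = t[last][np.argmax(T[last])] - t[last][np.argmax(np.cos(omega * t[last]))]
    return (lag_years % 1.0) * 12.0

print(phase_lag_months(0.5))    # 6-month relaxation time -> lag of about 2.4 months
print(phase_lag_months(50.0))   # very long relaxation time -> lag approaching 3 months
```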

George E. Smith;
May 29, 2012 9:11 pm

“”””” Andrew says:
May 29, 2012 at 6:54 pm
George E. Smith; says: “And you are also supposed to know that the amount decays to 5% in three time constants, and to 1% in 5 time constants. So in 10 time constants the remainder would be 0.01%, close enough for most people.”
The model’s response time is not necessarily known ahead of time, so they do long runs and then when the computer can no longer register a change, that is the time to “equilibration” or statistically indistinguishable from such, from which the most accurate value for the model’s sensitivity can be determined. “””””
Sorry Andrew, if the model postulates some thermal time delay process, then the rate of rise (or fall) can be determined in the very first time intervals of the “simulation”, as the initial rate of change, and from the known disturbance from the steady state, the time constant can immediately be obtained. You don’t have to run a process into the ground to finally figure out how long that will take.
Besides earth is never in equilibrium, so it never does stop changing.

George E. Smith;
May 29, 2012 9:13 pm

“”””” ferd berple says:
May 29, 2012 at 6:53 pm
George E. Smith; says:
May 29, 2012 at 6:27 pm
That’s why the “CO2 saturation” notion is a non-starter. CO2 absorbs LWIR; re-emits some other wavelength, and then is ready to grab another LWIR photon from the surface; so it never really “saturates”.
=========
Why assume that the absorbed energy will come in the form of a photon from the surface? “””””
I didn’t!

Magic Turtle
May 29, 2012 9:20 pm

George E. Smith says (May 29, 2012 at 6:27 pm):
‘As for Beer’s Law, sometimes referred to as the Beer-Lambert Law, it was an approximate relationship for absorption in DILUTE solutions of a given solute, from chemistry, and some will argue that it has an optical application as well,…’
Actually the Beer-Lambert law is a theoretically-ideal expression for radiative absorption in relatively transparent media in all phases of matter, ie. solid, liquid and gas, as the Wikipedia article on it (here: http://en.wikipedia.org/wiki/Beer-Lambert_law#Derivation ) makes clear.

mb
May 29, 2012 9:42 pm

I think I understand your model now. The formulation is a little unusual, but it seems perfectly valid. It cannot handle a constant forcing (Delta F=0), but behaves correctly for dealing with changes of forcing, which is what we are interested in anyhow.
I’m not sure how you get the values for Delta F for the two hemispheres. I can imagine several methods, but for comparison I would prefer to know which method you used. The NASA table gives average insolation according to latitude and time, say s(theta,t). For simplicity of formulas, let’s give the latitude theta in radians.
To compute the hemispherical forcing F, we also need the albedo as a function of latitude and time. Let’s say it’s a(theta,t), where a(theta,t) is a number between 0 and 1. The average forcing of the hemisphere at time t would, I think, be given as the integral over theta from 0 to pi/2 of (1-a(theta,t))s(theta,t)cos(theta) dtheta.
The problem I have is that the paper by Hatzianastassiou et al. only seems to give graphs of the computed latitude dependent, time averaged albedo, and the time dependent, space averaged albedo for each hemisphere. So how exactly did you proceed from there? Did you get the actual values somewhere, or did you do some approximation?
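A small sketch of that cosine-weighted hemispheric average, assuming a(theta,t) and s(theta,t) are available on a latitude grid for a given month (the arrays below are placeholders, not the Hatzianastassiou et al. values). Because the integral of cos(theta) from 0 to pi/2 equals 1, the weighted integral is just the area-weighted hemispheric mean, which is what the discrete version computes.

```python
# Placeholder inputs only; the real a(theta) and s(theta) would come from the
# albedo and insolation datasets discussed in the post.
import numpy as np

def hemispheric_forcing(theta, albedo, insolation):
    """Area-weighted hemispheric mean of (1 - a)*s, with theta in radians (0 = equator)."""
    w = np.cos(theta)                                    # area weight per latitude band
    return np.sum((1.0 - albedo) * insolation * w) / np.sum(w)

theta = np.linspace(0.0, np.pi / 2.0, 91)                # equator to pole
albedo = np.full_like(theta, 0.3)                        # illustrative constant albedo
insolation = 340.0 * np.cos(theta)                       # illustrative insolation, W/m2
print(hemispheric_forcing(theta, albedo, insolation))    # roughly 160 W/m2 for these values
```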

May 29, 2012 10:12 pm

Robbie says:
May 29, 2012 at 4:22 am
Robbie,
It is not enough that your model is based on physics. It actually needs to be based on the appropriate physics. When subtracting one part in 2500 (all the CO2) from the model produces several degrees of GMAT cooling in the first year, I humbly submit that the appropriate physics is not included in the model.

May 30, 2012 12:50 am

Allan MacRae said May 29, 2012 at 7:46 pm

The Pompous Git says: May 29, 2012 at 2:37 pm
You are well-named, sir. What a load of pedantic nonsense.
Apart from totally missing the point, do you have anything of value to add?

I thought your point was: “In science, first there is Hypothesis, then Theory (Evolution) and finally Law (Gravity).” I do not believe that is true and gave an example that contradicts your assertion. Just in case you found the Newton/Einstein example too difficult to follow, consider Snell’s Law.
Snell’s Law (la loi de Descartes if you are French) describes the relationship between the angles of incidence and refraction when a pencil of light passes through the boundary between two different isotropic media, such as water and glass. It was first described by the Arab philosopher Ibn Sahl in 984. The corpuscular theory of light was developed by the Frenchman Pierre Gassendi and the wave theory of light by the Dutchman Christiaan Huygens in the 18th C.
I do not see how the wave, or corpuscular theories of light could have evolved into Snell’s Law half a millennium prior. Nor do I know of any theory that ever evolved into a physical law. So far, you have given no example of this evolution. I invite you to do so, or explain why I am mistaken in the examples I give. Perhaps then we can proceed to any point that I may be missing. Historians and philosophers are rather averse to arguments proceeding from false premisses.
Glad you like my moniker BTW. I stole it from Stuart Littlemore after an ABC budget-cut reduced him to being Stuart Littleless 😉

May 30, 2012 2:34 am

Willis, OT, apologies for that – my piece on Graeff is up at Tallbloke’s blog and if you want to reply but not there you can always email me direct. OTOH it would be nice to see that piece here – when my computer is back functioning so I can cope with replies…

George E. Smith;
May 30, 2012 3:15 am

“”””” Magic Turtle says:
May 29, 2012 at 9:20 pm
George E. Smith says (May 29, 2012 at 6:27 pm):
‘As for Beer’s Law, sometimes referred to as the Beer-Lambert Law, it was an approximate relationship for absorption in DILUTE solutions of a given solute, from chemistry, and some will argue that it has an optical application as well,…’
Actually the Beer-Lambert law is a theoretically-ideal expression for radiative absorption in relatively transparent media in all phases of matter, ie. solid, liquid and gas, as the Wikipedia article on it (here: http://en.wikipedia.org/wiki/Beer-Lambert_law#Derivation ) makes clear. “””””
“”””” the Beer-Lambert law is a theoretically-ideal expression for radiative absorption “””””
So what does “theoretically ideal expression” mean? Would it also be a practically ideal expression?
You also state (or does Wikipedia also stand in for you here?) that it is an expression for “radiative absorption”; OF WHAT? The incident photons, or the energy they carry? If it is the latter, then B-L could also be used to compute THE ENERGY TRANSMISSION of solids, liquids and gases.
As for Beer’s law, it IS, as I stated, an approximate law for the absorption of a dilute solution of a solute as a function of the concentration of the solute. And the same holds for dye-containing optical glasses, as a function of the dye concentration in the glass. Only when the solute concentration is fixed, in either liquid or solid solutions, does the absorption follow an exponential-with-THICKNESS law; and for a great many such solutions, even then the exponential law holds ONLY for the original input beam wavelength, i.e. the input photons; it does not yield a correct result for the energy or power transmission.
For example, all of the Schott sharp-cut long-wave-pass optical filter glasses have steep cutoffs versus wavelength that give attenuations of the input wavelength of 10^4 or 10^5 just beyond the cutoff wavelength, and you can prove that for yourself with a laser and a double monochromator tuned to the laser wavelength. But if you remove the monochromator, to eliminate the restriction to the original input wavelength, a power meter will show that a large fraction of the input power is still being transmitted; so Beer’s law is not being obeyed for the power transmission.
And the same goes for the LWIR transmission of the atmosphere in the presence of CO2.
Sure the CO2 absorbs surface emitted LWIR photons; but they don’t stay absorbed; the energy is re-emitted at some other wavelength so it is not stopped by the CO2. As a result, the energy transmission is greater than the Beer law would claim; both as a function of CO2 abundance, and also as a function of distance. I happen to have a full set of all Schott optical filter glasses in the standard 50 x 50 x 3 mm size, and very few of them follow the exponential decay law for the total transmitted power. And of course they are all constant concentration samples, so they can’t be checked for linearity of the concentration exponential.
Just because something is in Wikipedia does not make it reliable information.

Galane
May 30, 2012 3:30 am

“So … what are we looking at in Figure 1?” Looks like four rubber bands on a piece of paper. 😉

George E. Smith;
May 30, 2012 3:33 am

And if you read your own reference, you will see that it is far from a theoretically ideal law, and even Wiki erroneously ascribes it to the transmission, instead of the absorption. See also the notes on deviations from the approximate law, and the conditions under which it applies, especially that one about the radiation not affecting the atoms. They fail to mention that the material must not be fluorescent, so the absorbed photons have to stay dead. And that’s impossible, because if they do, then the sample will heat, and since it is a solid, liquid, or gas above absolute zero, it must radiate a thermal spectrum to get rid of the temporarily absorbed energy.
Once again, wiki lives up to its reputation.

Robbie
May 30, 2012 3:58 am

Gymnosperm says:
On May 29, 2012 at 10:12 pm
I am not basing my model on physics. It’s just using some common sense. Eschenbach is claiming a negative forcing by water vapor of more than 75%. Yes, more than 75%, because an increase in water vapor due to increased CO2 warming (1-1.2°C by 2xCO2) makes that value more than 75%. That’s simply wrong. It isn’t happening now and it won’t happen into the future.
Negative forcing by water vapor and mainly clouds is just 45% in the real world at this moment. In the long run (which takes a few hundred years) equilibrium will be reached when CO2 stays doubled and doesn’t increase anymore. But then the temperature will have risen at least somewhere between 1.5-2°C, and not what Eschenbach is claiming.
He is simply 100% wrong in his 0.3°C sensitivity claim.

May 30, 2012 5:05 am

scalability.org says:
May 29, 2012 at 6:18 pm
Things like this always leave a smile on my face, as either the poster is joking or they simply don’t understand. Causality is a very well understood concept. Correlation’s relationship to causality is less so. This (causality) is decidedly not a statistical phenomenon, but its impact (observed correlation) may be seen in observational data.

Correlation is a statistical phenomenon. Hence the statistical caveat.
From wikipedia,
“Correlation does not imply causation” is a phrase used in science and statistics to emphasize that correlation between two variables does not automatically imply that one causes the other (though correlation is necessary for linear causation in the absence of any third and countervailing causative variable).
If various critics above have a problem with this, I suggest you edit the entry.

joeldshore
May 30, 2012 5:36 am

Willis Eschenbach says:

Thanks, Joel, but … I’ve already done that. Twice. Here and here. In both cases I found, using the same lagged model I used above, that the climate sensitivity is ≈ 0.3 … curiously, it’s the same result I get above.

…Which shows that your result is wrong, since that is most assuredly not the correct climate sensitivity for the model. The climate sensitivity in the models is not hard to determine by putting a certain forcing in and seeing how much temperature change you get when you run the model out for a long time. The answer obtained is very different from the answer that you obtain. Therefore, your method is very poor at diagnosing the climate sensitivity of climate models.

Despite the fact that I’m using a much lower value for the sensitivity than they claim for the model, I’m able to reproduce the model output to a very high degree of fidelity (correlation = 0.995) … which means that in fact I am using the correct climate sensitivity.

No…It does not show that at all, any more than Nikolov and Zeller’s fit shows that they are correct. In fact, it shows that your method of diagnosing climate sensitivity DOES NOT WORK in a case where the answer is known.

I fear your argument is like that of the communists, who used to say “That works well in practice, comrade … but it will never work in theory!”.

No…You have shown no evidence that it works well in practice. Being able to fit a bit of data for the seasonal cycle is not evidence that your method is effective in diagnosing the climate sensitivity! And, the fact that it is known to work extremely poorly in diagnosing the sensitivity in a climate model shows that it works very poorly in theory. Techniques that work poorly in theory (i.e., the simplified world of a model) seldom end up working well in practice (the real world).

joeldshore
May 30, 2012 5:48 am

Willis: Now that I have looked back at the two previous posts of yours that I linked to, I have two more comments:
(1) In those posts, you found a climate sensitivity of ~0.3 C per W/m^2 for the GISS climate model. So, no, that is not particularly close to the value that you have found here of ~0.3 C for a change in forcing of 3.7 W/m^2. Rather, it is about 4 times as large.
(2) For those posts, you were not looking at the seasonal cycle in the models…You were looking at the long term warming trend over the last century or so. Hence, it is not surprising that you got a higher answer there (closer to the correct value for equilibrium sensitivity in that model but still too low because of the difference between transient climate response and equilibrium climate sensitivity). As myself, Jim D, and Hans Erren ( http://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/#comment-996383 ) are all trying to point out to you, your method of diagnosing climate sensitivity will give you a lower sensitivity for higher frequency forcings.

May 30, 2012 6:43 am

George E. Smith; says: “Sorry Andrew, if the model postulates some thermal time delay process, then the rate of rise (or fall) can be determined in the very first time intervals of the “simulation,”, as the initial rate of change.”
That might be the case…if the models were exactly as neat as the equations that simulate their long term behavior quite well. The problem is that there is a lot of noise about those neat functional forms that hides them. So the very first time step will show something different from that correct “initial rate of change”…if one averages a large number of models, the model’s noise begins to cancel out and approach a neat functional form almost exactly. But I do think they probably run the models excessively. Unless they are unaware of the simplicity of the underlying functional form the models approach.
