An Observational Estimate of Climate Sensitivity

Guest Post by Willis Eschenbach

“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.

There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere”, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity.

Now, you can’t just look at the direct change in solar forcing versus the change in temperature to get the long-term sensitivity. All that will give you is the “instantaneous” climate sensitivity. The reason is that it takes a while for the earth to warm up or cool down, so the immediate change from an increase in forcing will be smaller than the eventual equilibrium change if that same forcing change is sustained over a long time period.

However, all is not lost. Figure 1 shows the annual cycle of solar forcing changes and temperature changes.

Figure 1. Lissajous figure of the change in solar forcing (horizontal axis) versus the change in temperature (vertical axis) on an annual average basis.

So … what are we looking at in Figure 1?

I began by combining the NASA solar data, which shows month-by-month changes in the solar energy hitting the earth, with the albedo data. The solar forcing in watts per square metre (W/m2) times (1 minus albedo) gives us the amount of incoming solar energy that actually makes it into the system. This is the actual net solar forcing, month by month.
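That step is just a pointwise multiplication. As a minimal sketch of the calculation (the monthly flux and albedo numbers below are made up for illustration, not the actual NASA or Hatzianastassiou values):

```python
# Hypothetical monthly values for illustration only -- not the real data.
solar_flux = [340.0, 345.2, 348.1]   # incoming solar flux, W/m2
albedo     = [0.31, 0.30, 0.29]      # hemispheric albedo, dimensionless

# Net solar forcing: the fraction of incoming sunlight not reflected away.
net_forcing = [s * (1.0 - a) for s, a in zip(solar_flux, albedo)]
print(net_forcing)
```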

Then I plotted the changes in that net solar forcing (after albedo reflections) against the corresponding changes in temperature, by hemisphere. First, a couple of comments about that plot.

The Northern Hemisphere (NH) has larger temperature swings (vertical axis) than does the Southern Hemisphere (SH). This is because more of the NH is land and more of the SH is ocean … and the ocean has a much larger specific heat. This means that the ocean takes more energy to heat it than does the land.

We can also see the same thing reflected in the slope of the ovals. The slope of the ovals is a measure of the “lag” in the system. The harder it is to warm or cool the hemisphere, the larger the lag, and the flatter the slope.

So that explains the red and the blue lines, which are the actual data for the NH and the SH respectively.

For the “lagged model”, I used the simplest of models. This uses an exponential function to approximate the lag, along with a variable “lambda_0” which is the instantaneous climate sensitivity. It models the process in which an object is warmed by incoming radiation. At first the warming is fairly fast, but then as time goes on the warming is slower and slower, until it finally reaches equilibrium. The length of time it takes to warm up is governed by a “time constant” called “tau”. I used the following formula:

ΔT(n+1) = λ ΔF(n+1)/τ + ΔT(n) exp(−1/τ)

where ΔT is the change in temperature, ΔF is the change in forcing, lambda (λ) is the instantaneous climate sensitivity, “n” and “n + 1” are the times of the observations, and tau (τ) is the time constant. I used Excel to calculate the values that give the best fit for both the NH and the SH, using the “Solver” tool. The fit is actually quite good, with an RMS error of only 0.2°C and 0.1°C for the NH and the SH respectively.
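For anyone who wants to reproduce the recursion outside of Excel, here is a minimal Python sketch of the same lag model. The forcing series below is an illustrative sinusoid, not the actual hemispheric data, and the parameter values are just placeholders for whatever the fit returns:

```python
import math

def lagged_response(forcing, lam, tau, t0=0.0):
    """One-box lag model: dT(n+1) = lam * dF(n+1) / tau + dT(n) * exp(-1/tau).

    forcing -- list of forcing anomalies dF, one per month (W/m2)
    lam     -- instantaneous climate sensitivity (deg C per W/m2)
    tau     -- time constant (months)
    """
    temps = []
    t = t0
    decay = math.exp(-1.0 / tau)
    for df in forcing:
        t = lam * df / tau + t * decay
        temps.append(t)
    return temps

# Illustrative annual forcing cycle (a 50 W/m2 amplitude sinusoid), ten years long.
forcing = [50.0 * math.sin(2.0 * math.pi * m / 12.0) for m in range(120)]
temps = lagged_response(forcing, lam=0.08, tau=1.9)
```

A fitting tool such as Excel’s Solver (or a grid search over lam and tau) can then be used to minimize the RMS error between this modeled series and the observed temperature changes.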

Now, as you might expect, we get different numbers for both lambda_0 and tau for the NH and the SH, as follows:

Hemisphere         lambda_0     Tau (months)

    NH               0.08           1.9

    SH               0.04           2.4

Note that (as expected) it takes longer for the SH to warm or cool than for the NH (tau is larger for the SH). In addition, as expected, the SH changes less with a given amount of heating.

Now, bear in mind that lambda_0 is the instantaneous climate sensitivity. However, since we also know the time constant, we can use that to calculate the equilibrium sensitivity. I’m sure there is some easy way to do that, but I just used the same spreadsheet. To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.
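There is in fact an easy closed form: under a sustained step forcing ΔF, the recursion above settles to ΔT_eq = (λ ΔF/τ) / (1 − exp(−1/τ)). A quick Python check of the spreadsheet result, using the fitted values from the table:

```python
import math

def equilibrium_sensitivity(lam, tau, dF=3.7):
    """Steady state of dT(n+1) = lam*dF/tau + dT(n)*exp(-1/tau)
    under a sustained step forcing dF -- the closed form of the
    spreadsheet iteration."""
    return lam * dF / tau / (1.0 - math.exp(-1.0 / tau))

# Fitted values from the table above, with dF = 3.7 W/m2 for a CO2 doubling.
print(round(equilibrium_sensitivity(0.08, 1.9), 1))  # NH -> 0.4
print(round(equilibrium_sensitivity(0.04, 2.4), 1))  # SH -> 0.2
```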

The result is that the equilibrium climate sensitivity to a change in forcing from a doubling of CO2 (3.7 W/m2) is 0.4°C in the Northern Hemisphere and 0.2°C in the Southern Hemisphere. This gives us an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.

Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.

w.

NOTE: The spreadsheet used to do the calculations and generate the graph is here.

NOTE: I also looked at modeling the change using the entire dataset which covers from 1984 to 1998, rather than just using the annual averages (not shown). The answers for lambda_0 and tau for the NH and the SH came out the same (to the accuracy reported above), despite the general warming over the time period. I am aware that the time constant “tau”, at only a few months, is shorter than other studies have shown. However … I’m just reporting what I found. When I try modeling it with a larger time constant, the angle comes out all wrong, much flatter.

While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.

KR
May 29, 2012 7:39 am

Seasonal changes, while interesting, are not a direct comparison to the usual definition of climate sensitivity:
“A model estimate of equilibrium sensitivity thus requires a very long model integration. A measure requiring shorter integrations is the transient climate response (TCR) which is defined as the average temperature response over a twenty year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year. The transient response is lower than the equilibrium sensitivity, due to the “inertia” of ocean heat uptake. Fully equilibrating ocean temperatures requires integrations of thousands of model years.” http://en.wikipedia.org/wiki/Climate_sensitivity – emphasis added.
Given the thermal inertia of the oceans, estimates derived directly from seasonal changes (short term) are always going to be underestimates of the transient climate sensitivity (~20 years).
You might be interested in Knutti and Meehl 2006 (http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3865.1), “Constraining Climate Sensitivity from the Seasonal Cycle in Surface Temperature”, where they examine seasonal signals in ~25 regions of the world, and find a transient sensitivity from 1.5-2°C to 5-6.5°C, most likely 3-3.5°C/doubling of CO2.

Gail Combs
May 29, 2012 7:40 am

Willis, several people bring up “only the shortwave radiation” but from what you said, “…There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere“, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity….”
It looks like you are using the albedo by hemisphere from the N. Hatzianastassiou et al. study to modify the NASA solar data and estimate the total energy at the earth’s surface at each point in time. Is that correct?
Also, I am assuming the albedo numbers would include the effects of volcanoes, or is it only clouds?

ferd berple
May 29, 2012 7:42 am

The method appears essentially correct to me, in that it gives a measure of how much global temperature is OBSERVED to change as a result of a change in solar radiation.
So, if 3.7 W/m2 is the effect of a doubling of CO2, then the result must follow: a warming of 0.3°C based on observation. The question is whether 3.7 W/m2 is correct, as it can be shown that CO2 has both a warming and a cooling effect, and the absence of the atmospheric hotspot, contrary to theory, indicates that the cooling effect of CO2 may predominate over the warming effect. As such, 3.7 W/m2 may well be much too high.
This is further suggested by the leveling of temperatures post-WWII and at present. Both were periods of rapid CO2 increase as a result of increased economic activity: the post-war reconstruction and the current industrialization of India and China, the most populous nations on earth. As such, the assumption that CO2 has a net warming effect of 3.7 W/m2 is not supported by the OBSERVATIONAL evidence.
The only weakness I can see offhand in the method is in Tau
Hemisphere lambda_0 Tau (months)
NH 0.08 1.9
SH 0.04 2.4
The values for Tau look like the seasonal lag – for example, June 21 is the first day of NH summer, but the hottest days lag it. Mid August should be hottest in the NH according to your numbers, which is pretty close to what is observed. The argument might be made that your model is understating the longer-term lags and only modelling the seasonal ones, which would make 0.3°C the minimum.

eyesonu
May 29, 2012 7:49 am

Quoted from Willis:
“Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.”
====================
There are no holes in that statement above.
I commend your approach. An open and transparent peer review will begin. Knowledge will be gained and shared. Tribal agendas will be exposed. Maybe proper scientific methods will be learned from this example.

May 29, 2012 7:57 am

richard telford says:
May 29, 2012 at 6:27 am
A simple test of whether Eschenbach’s method is junk is to apply it to climate model output. The climate sensitivity of the models is known, so if Eschenbach’s method massively underestimates it in the model, we can be confident that it will massively underestimate it in reality.

That statement is junk. All it says is, if the climate models are correct, then the climate models are correct.

ferd berple
May 29, 2012 8:04 am

ps: CO2 cools the atmosphere by radiating away energy from N2 and O2. This increases the vertical circulation to restore the lapse rate, reducing surface temperatures. The observational evidence suggests this effect is greater than any temperature increase due to CO2.
It is the change in vertical circulation that heats and cools real greenhouses and is the real greenhouse effect. Radiation is not the mechanism that heats and cools real greenhouses, thus it is unscientific for science to pretend that the mechanisms are somehow related or equivalent. Maintaining that two dissimilar processes have a similar effect because they have the same name is not science.

May 29, 2012 8:05 am

KR says:
May 29, 2012 at 7:39 am
Given the thermal inertia of the oceans, estimates derived directly from seasonal changes (short term) are always going to be underestimates of the transient climate sensitivity (~20 years).

I agree, but you need longer term ocean effects to be a factor 10 greater than seasonal changes to reach the IPCC prediction. Care to suggest a physical mechanism?

Babsy
May 29, 2012 8:13 am

eyesonu says:
May 29, 2012 at 7:49 am
“I commend your approach. An open and transparent peer review will begin. Knowledge will be gained and shared. Tribal agendas will be exposed. Maybe proper scientific methods will be learned from this example.”
Nah! That’s WAAAAAAAY too simple! Bawhahahaha!

Geoff Alder
May 29, 2012 8:15 am

Dear Willis
Maybe somewhat OT, and sorry for that. But I don’t know who else to contact with my point. The temperatures initially stirring the pot were (dry bulb) surface temperatures. My question: Why temperature? Why not enthalpy? To illustrate my point, sea level air at 25°C and 10% RH has an enthalpy value of 30.4 kJ/kg. The same air at 25°C and 90% RH has an enthalpy value of 71.8 kJ/kg. An enormous difference for the same dry bulb temperature. Am I missing something, or is everyone else missing something?

Sherlock
May 29, 2012 8:35 am

As an amateur astronomer I am aware that due to the eccentricity of its orbit the Earth is closer to the Sun in Southern Hemisphere Summer and conversely farther away from the Sun in Northern Hemisphere Summer. I do not know the relative difference in distances, but has this been accounted for in the estimates of solar radiation for the two hemispheres?

Allan MacRae
May 29, 2012 8:45 am

Willis says:
While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.
tallbloke says: May 29, 2012 at 4:04 am
IIRC the SST in the northern hemisphere reaches a maximum some time after the longest day of the year. This is indicative of a lag in the system caused by the time it takes for heat to re-emerge from the ocean on a seasonal basis.
I think there are some much longer lags, but these are to do with runs of highly active or less active solar cycles at the multi-decadal scale rather than CO2.
____________________
Some Thoughts Regarding the Evidence of Longer Cycles and Lags:
We know there is a ~9 month lag of atmospheric CO2 concentration after temperature on a ~~4 year cycle of natural global temperature variation.
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/
We also know that CO2 lags temperature by ~800 years on a much longer time cycle (ice core data).
As you suggest tallbloke, there is probably at least one intermediate lag, and quite possibly several, between these two – perhaps associated with the Wolf-Gleissberg Cycle, Hale Polarity Cycle, etc., AND-OR with the PDO, etc.
The lag of CO2 after temperature observed in these longer cycles is probably mostly physical in origin, related to ocean solution and exsolution of CO2, but also includes a long term biological component.
Willis’s analysis deals with the seasonal (annual) cycle, in which the biological component of the CO2 lag is comparatively much greater.
I have the opinion that we are looking at several natural cycles of varying duration in which there are external natural drivers (Sun, Earth orbits, stars), then some randomization associated with large ocean phenomena (PDO, etc.); these drive Earth’s natural temperature cycles at all time scales, and result in a series of related CO2 lags after temperature.
Finally:
Atmospheric CO2 variation is primarily a result, not a driver of temperature, and human fossil fuel combustion is probably NOT causing the recent increases in atmospheric CO2 – it is more likely the result of the cumulative impact of all these aforementioned natural cycles – for example, the Medieval Warm Period was ~~800 years ago.

Slartibartfast
May 29, 2012 9:01 am

Jeez. For the hundredth time, correlation is proof of causation. With the usual statistical caveats.
Use of the phrase ‘Correlation is not evidence of causation.’ is proof the utterer doesn’t understand science or statistics.

Someone may not have seen the global warming vs number of pirates chart.

pochas
May 29, 2012 9:06 am

Willis,
You wrote “This gives us an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.”
Congratulations, Willis! This is the correct number! (vigorous and lengthy applause)

George E. Smith;
May 29, 2012 9:26 am

Well I think you are barking up the wrong tree Willis.
Plotting the “average” of some data set of Temperature (global) against the varying (very predictable) TSI variation, which presumably results in an annual variation in the solar energy stored in the oceans and rocks (as well as the atmosphere) will of course yield an open cyclic plot such as those you have, because of the very well recognized thermal delays. So ho hum; maybe we haven’t seen your graph before, but the data and its relationship is well known. So no problem there; a picturesque way of showing thermal delay time effects.
But none of that has anything to do with carbon dioxide, the effect of which, seems to be, to warm the atmosphere, but simultaneously cooling the ocean and rocks ( by blocking some solar spectrum energy from EVER reaching those energy storage sites).
There’s no relationship at all between the processes by which the sun heats the planet, and the quite different mechanisms by which CO2 and other GHGs like H2O and O3 warm the ATMOSPHERE, or the minute degree to which the atmosphere via LWIR radiation can effect the Temperature of the much greater thermal mass of the non gaseous part of the planet.
And there still is no evidence, observational or theoretical, that something called “climate sensitivity”, evidently invented by the late Stephen Schneider, even exists. No such logarithmic relationship can be shown, and trying to perpetuate that myth through ordinary cyclic variation of instantaneous TSI, is a wasted effort. Sorry, that’s about as polite as I can put it Willis.

DavidA
May 29, 2012 9:26 am

“To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.”
Is this a reliable estimate? And am I right that we could triple that and still be in lukewarmer territory?
Models sure must be doing some funky stuff with clouds and water.

May 29, 2012 9:39 am

The response time for the annual cycle definitely needs to be short. This paper found the same thing:
http://www.pas.rochester.edu/~douglass/papers/DBK%20Physics%20Letters%20A%20.pdf
However, I don’t think that this captures the way the system would respond to different sorts of perturbations: for one thing, the annual cycle in solar insolation varies strongly with latitude and flips sign in opposite hemispheres. Compare that to a relatively uniform forcing effect from say CO2. Of course the reaction of the system will be very different: one will act much more to alter atmospheric circulation than the other, for one thing.
That being said, if you look at responses on the inter-annual (as opposed to intra-annual) timescale to things like volcanoes, the response times that work best are still shorter than would be required get a high sensitivity.

May 29, 2012 9:46 am

not even close to right. the instantaneous response is different from the transient response and the equillbrium response.

jorgekafkazar
May 29, 2012 9:57 am

Why an exponential function?

Billy Liar
May 29, 2012 10:01 am

Climate Weenie says:
May 29, 2012 at 6:32 am
I don’t believe this is a valid assessment of sensitivity.
By comparing only shortwave, you are missing the amount of thermal energy that’s either going into or coming out of the oceans and the amount of shortwave that’s either going into the oceans, or heating the atmosphere.
On a global average, after all, the earth is warmest at aphelion and coolest at perihelion.
So it turns out that the oceanic buffers are almost completely out of phase with the solar forcing – over the seasonal variation.

I believe you are wrong. Take a look at global sea surface temperatures at:
http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps
They peak at approximately 2.5 months after perihelion (~0100UTC 5 Jan 2012) and are lowest about 5 months after aphelion.

Matthew R Marler
May 29, 2012 10:01 am

From the abstract: “At pixel level, the OSR differences between model and ERBE are mostly within ±10 W m−2, with ±5 W m−2 over extended regions, while there exist some geographic areas with differences of up to 40 W m−2, associated with uncertainties in cloud properties and surface albedo.”

Are you using the authors’ model values or the data that they cite? Their model seems to have larger error than the effect you are trying to estimate, and you usually deprecate the use of model output.
Why do you assume that lambda is constant over the recording interval? Is the recording interval long enough for what you want to estimate?
I think that a plot of predicted vs actual temperatures, in chronological order, NH and SH separately would add to the exposition — sorry, I always seem to recommend more work.
You wrote: “∆F is change in forcing,” but your model has F, not ∆F. I think that the model has it right, with change in T being proportional to F over the time interval, not change in F. That said, I think that tau in the first term on the right is redundant; estimation of tau and lambda is confounded.
So, you have a first order autoregressive model for unexplained variation in delta T with an exogenous linear function of F. From seat of the pants, I expect that the error in the estimate of lambda is about +/- 10.
I think your approach is defensible, but that you need many more data points than what are available.

Billy Liar
May 29, 2012 10:16 am

Steven Mosher says:
May 29, 2012 at 9:46 am
not even close to right. the instantaneous response is different from the transient response and the equillbrium [sic] response.

For the benefit of the people who are wondering what you are talking about; can you define the terms:
instantaneous response
transient response
equilibrium response
and the system to which you are applying these terms.

Eric Adler
May 29, 2012 10:38 am

This post is a joke right?
Climate is a complex system. Reducing the modeled time dependence to a single time constant based on an oscillating forcing is nonsense. A paper by Schwartz based on historical data estimated a single time constant of 5 years, and had to take it back. If there is a single time constant that could describe what is happening it is more like 80 years.

Magic Turtle
May 29, 2012 10:51 am

That’s an ingenious approach Willis. However, I fear it was doomed to failure from the outset because the basic concept of ‘climate sensitivity’ to a greenhouse gas (CO2 in this case), defined by the IPCC as the temperature-increase at the earth’s surface due to a doubling of its atmospheric concentration, is essentially meaningless. There are two reasons for this.
1. The amount of radiative forcing produced by a doubling of the CO2 concentration is not a regular fixed amount. The IPCC’s logarithmic formula from which the fixed amount results is false. The correct formula can be derived from the Beer-Lambert law of optics and it follows an inverse exponential law, not a logarithmic one. Consequently, repeatedly doubling the amount of CO2 in the atmosphere produces progressively smaller increments of radiative forcing at each repetition, not equal ones as the IPCC’s formula pretends.
2. The relationship between the amount of radiation absorbed at the earth’s surface and the global mean temperature is not a linear one but is the end-product of numerous factors including the Stefan-Boltzmann law, the specific heats and latent heats of surface substances and so on. This relationship implies that constant increments of incident radiation will produce progressively smaller temperature increases at the surface as it warms.
These two factors combine to produce rapidly diminishing increments of surface temperature with each proportional increase of CO2-concentration. Consequently, your estimate of 0.3°C for a doubling of CO2 could only apply to the unique situation where the initial surface temperature and the initial CO2 concentration have particular unique values. It cannot be a general rule.

John West
May 29, 2012 10:56 am

Sniff test:
Stefan-Boltzmann warming at 0.97 emissivity from … (say) … 396 to 399.7 W/m2 = 0.18 K per W/m2, or 0.68 K per doubling of CO2e (3.7 W/m2), if the surface weren’t in thermal contact with anything else (which of course it is) and the energy didn’t perform any work (which of course it does), such that 0.68 K per 2xCO2e would be more of an estimate of an actually impossible maximum.
Since 0.3 is less than 0.68 and the same order of magnitude, I’m calling it a PASS.
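That back-of-envelope arithmetic can be checked directly by inverting the Stefan-Boltzmann law; a minimal sketch, assuming the same emissivity (0.97) and flux values (396 and 399.7 W/m2) used above:

```python
# Invert Stefan-Boltzmann: T = (F / (eps * sigma)) ** 0.25,
# then take the difference at 396 vs 399.7 W/m2 (a 3.7 W/m2 step).
sigma = 5.670e-8   # Stefan-Boltzmann constant, W/m2/K^4
eps = 0.97         # assumed surface emissivity, as in the comment above

def surface_temp(flux):
    return (flux / (eps * sigma)) ** 0.25

dT = surface_temp(396.0 + 3.7) - surface_temp(396.0)
print(round(dT, 2))  # -> 0.68 K per 3.7 W/m2, matching the sniff test
```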
Therefore, I like the 0.3°C per 2xCO2e estimate. I also like the novel approach and graphical representation of the data, although a couple of month markers (at the intercepts) would have made it easier to determine whether clockwise or counterclockwise was the appropriate way to read it. Assuming I’ve had enough coffee and clockwise is indeed the proper way to follow the graph, note that the “sensitivity” changes toward the “peaks”; especially at the top, where the temperature more or less maxes out: although the heat flux is still increasing, the temperature isn’t increasing much anymore. Could this be thermostats at work? This also aligns with the Stefan-Boltzmann law, whereby as the temperature increases it takes more and more heat flux for the same increment of temperature increase.
Nice work. Plenty here for me to ponder over.