Updated: Low Climate Sensitivity Estimated from the 11-Year Cycle in Total Solar Irradiance
By Dr. Roy W. Spencer

NOTE: This has been revised since finding an error in my analysis, so it replaces what was first published about an hour ago.
As part of an e-mail discussion on climate sensitivity I have been having with a skeptic of my skepticism, he pointed me to a paper by Tung & Camp entitled Solar-Cycle Warming at the Earth’s Surface and an Observational Determination of Climate Sensitivity.
The authors try to determine just how much warming has occurred as a result of changing solar irradiance over the period 1959-2004. It appears that they use both the 11-year cycle and a small increase in TSI over the period as signals in their analysis. The paper purports to come up with a fairly high climate sensitivity that supports the IPCC’s estimated range, which then supports forecasts of substantial global warming from increasing greenhouse gas concentrations.
The authors start out in their first illustration with a straight comparison between yearly averages of TSI and global surface temperatures during 1959 through 2004. But rather than do a straightforward analysis of the average solar cycle to the average temperature cycle, the authors then go through a series of statistical acrobatics, focusing on those regions of the Earth which showed the greatest relationship between TSI variations and temperature.
I’m not sure, but I think this qualifies as cherry picking — only using those data that support your preconceived notion. They finally end up with a fairly high climate sensitivity, equivalent to about 3 deg. C of warming from a doubling of atmospheric CO2.
Tung and Camp claim their estimate is observationally based, free of any model assumptions. But this is wrong: they DO make assumptions based upon theory. For instance, it appears that they assume the temperature change is an equilibrium response to the forcing. Just because they used a calculator rather than a computer program to get their numbers does not mean their analysis is free of modeling assumptions.
But what bothers me the most is that there was a much simpler, and more defensible way to do the analysis than they presented.
A Simpler, More Physically-Based Analysis
The most obvious way I see to do such an analysis is to do a composite 11-year cycle in TSI (there were 4.5 solar cycles in their period of analysis, 1959 through 2004) and then compare it to a similarly composited 11-year cycle in surface temperatures. I took the TSI variations in their paper, and then used the HadCRUT3 global surface temperature anomalies. I detrended both time series first, since it is the 11-year cycle which should be a robust solar signature; any long-term temperature trends in the data could potentially be due to many things, and so should not be included in such an analysis.
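The detrend-then-composite step described above can be sketched in a few lines. This is an illustrative sketch, not the actual analysis code: the series below are synthetic stand-ins for the real TSI reconstruction and HadCRUT3 annual anomalies for 1959-2004.

```python
# Sketch: detrend two annual series, then fold them onto an assumed 11-year cycle.
import numpy as np

years = np.arange(1959, 2005)            # 46 annual values, 1959-2004
# Stand-in data with an 11-year wiggle plus a trend (placeholders only):
tsi = 1366.0 + 0.5 * np.sin(2 * np.pi * (years - 1959) / 11.0)
temp = 0.01 * (years - 1959) + 0.05 * np.sin(2 * np.pi * (years - 1960) / 11.0)

def detrend(y):
    """Remove the best-fit straight line, leaving only the cyclic signal."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def composite_cycle(y, period=11):
    """Average the detrended series by phase within the assumed cycle length."""
    phase = np.arange(len(y)) % period
    return np.array([y[phase == p].mean() for p in range(period)])

tsi_cycle = composite_cycle(detrend(tsi))
temp_cycle = composite_cycle(detrend(temp))
```

With 46 years and an 11-year fold, each phase bin averages 4 or 5 samples, which is why the "4.5 solar cycles" matters for how noisy the composite is.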
The following plot shows in the top panel my composited 11-year cycle in global average solar flux, after applying their correction for the surface area of the Earth (divide by 4) and correcting for UV absorption by the stratosphere (multiply by 0.85). The bottom panel shows the corresponding 11-year cycle in global average surface temperatures. I have done a 3-year smoothing of the temperature data to help smooth out El Nino and La Nina related variations, which usually occur in adjacent years. I also took out the post-Pinatubo cooling years of 1992 and 1993, and interpolated back in values from the bounding years, 1991 and 1994.
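The two corrections just mentioned reduce a top-of-atmosphere TSI swing to a global-average forcing. The ~1 W per sq. meter solar-max-to-min swing used below is a typical solar-cycle figure assumed for illustration, not a number taken from the paper:

```python
# Geometric and UV corrections: TOA TSI swing -> global-average forcing.
delta_tsi = 1.0                    # W/m^2, illustrative max-to-min TSI swing
forcing = delta_tsi / 4.0 * 0.85   # divide by 4 (sphere/disk), times 0.85 (UV)
# forcing is ~0.21 W/m^2, of the same order as the "<0.2" quoted later on
```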
Note there is a time lag of about 1 year between the solar forcing and the temperature response, as would be expected since it takes time for the upper ocean to warm.
It turns out this is a perfect opportunity to use the simple forcing-feedback model I have described before to see which value for the climate sensitivity provides the best fit to the observed temperature response to the 11-year cycle in solar forcing. The model can be expressed as:
Cp[dT/dt] = TSI – lambda*T,
where Cp is the heat capacity of the climate system (dominated by the upper ocean), dT/dt is the change in temperature of the system with time, TSI represents the 11-year cycle in energy-imbalance forcing of the system, and lambda*T is the net feedback upon temperature. It is the feedback parameter, lambda, that determines the climate sensitivity, so our goal is to find the best value for lambda.
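This model can be integrated numerically with a simple forward-Euler step. The sketch below is not Dr. Spencer’s actual code: the 25 m mixed-layer depth and lambda = 2.2 mirror the best-fit values quoted below, while the 0.1 W per sq. meter forcing amplitude is an illustrative stand-in for the composited TSI cycle.

```python
# Forward-Euler integration of Cp*dT/dt = F(t) - lambda*T.
import math

def run_model(depth_m=25.0, lam=2.2, amp=0.1, period_yr=11.0,
              years=44, dt_yr=1.0 / 12.0):
    rho, c = 1025.0, 3990.0          # seawater density (kg/m^3), heat capacity (J/kg/K)
    cp = rho * c * depth_m           # heat capacity per unit area, J/m^2/K
    sec_per_yr = 365.25 * 24.0 * 3600.0
    n = int(round(years / dt_yr))    # monthly steps over ~4 solar cycles
    T, out = 0.0, []
    for i in range(n):
        t = i * dt_yr
        F = amp * math.sin(2.0 * math.pi * t / period_yr)  # 11-yr forcing cycle
        T += (F - lam * T) / cp * dt_yr * sec_per_yr       # Euler step
        out.append(T)
    return out

temps = run_model()
```

For these illustrative numbers the analytic phase lag of the sinusoidal response, atan(omega*Cp/lambda)/omega with omega = 2*pi/11 yr, works out to roughly a year, consistent with the ~1-year lag noted above.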
I ran the above model for a variety of ocean depths over which the heating/cooling is assumed to occur, and a variety of feedback parameters. The best fits between the observed and model-predicted temperature cycle (an example of which is shown in the lower panel of the above figure) occur for assumed ocean mixing depths around 25 meters, and a feedback parameter (lambda) of around 2.2 Watts per sq. meter per deg. C. Note the correlation of 0.97; the standard deviation of the difference between the modeled and observed temperature cycle is 0.012 deg. C.
My best fit feedback (2.2 Watts per sq. meter per degree) produces a higher climate sensitivity (about 1.7 deg. C for a doubling of CO2) than what we have been finding from the satellite-derived feedback, which runs around 6 Watts per sq. meter per degree (corresponding to about 0.55 deg. C of warming).
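A feedback parameter converts to a doubled-CO2 sensitivity as dT = F_2x / lambda. The ~3.7 W per sq. meter doubled-CO2 forcing used below is the commonly cited value, an assumption of this sketch rather than a number given in the post (a slightly smaller F_2x would reproduce the 0.55 deg. C figure for lambda = 6):

```python
# Translating feedback parameters (W/m^2 per deg C) into 2xCO2 warming (deg C).
F_2x = 3.7                        # assumed forcing per CO2 doubling, W/m^2
for lam in (2.2, 6.0, 1.25):      # this post, the satellite estimate, Tung & Camp
    print(lam, round(F_2x / lam, 2))
# lambda = 2.2 gives 1.68, matching the ~1.7 deg C quoted above;
# lambda = 1.25 gives 2.96, matching Tung & Camp's ~3.0 deg C.
```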
Can High Climate Sensitivity Explain the Data, Too?
If I instead run the model with the lambda value Tung and Camp get (1.25), the modeled temperature exhibits too much time lag between the solar forcing and temperature response, about double that produced with a feedback of 2.2.
Discussion
The results of this experiment are pretty sensitive to errors in the observed temperatures, since we are talking about the response to a very small forcing — less than 0.2 Watts per sq. meter from solar max to solar min. This is an extremely small forcing to expect a robust global-average temperature response from.
If someone else has published an analysis similar to what I have just presented, please let me know; I find it hard to believe someone has not done this before. It would be nice if someone else went through the same exercise and got the same answers. Similarly, let me know if you think I have made an error.
I think the methodology I have presented is the most physically-based and easiest way to estimate climate sensitivity from the 11-year cycle in solar flux averaged over the Earth, and the resulting 11-year cycle in global surface temperatures. It conserves energy, and makes no assumptions about the temperature being in equilibrium with the forcing.
I have ignored the possibility of any Svensmark-type mechanism of cloud modulation by the solar cycle…this will have to remain a source of uncertainty for now.
The bottom line is that my analysis supports a best-estimate 2XCO2 climate sensitivity of 1.7 deg. C, which is little more than half of that obtained by Tung & Camp (3.0 deg. C), and approaches the lower limit of what the IPCC claims is likely (1.5 deg. C).

Re: Bart says: June 5, 2010 at 10:54 pm
A cross correlation analysis might show definitively whether water vapor is a positive or negative feedback, irrespective of radiative or non-radiative forcing. All you should need to do is look for either a positive or negative phase slope in the frequency band of interest.
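A time-domain toy version of this idea (the phase-slope test proper would use the cross-spectrum): the sign of the lag at peak cross-correlation shows which series leads. The data below are synthetic, purely for illustration.

```python
# Toy lead/lag detection via cross-correlation of two synthetic series.
import numpy as np

n = 200
t = np.arange(n)
x = np.sin(2 * np.pi * t / 20.0)          # "forcing"
y = np.sin(2 * np.pi * (t - 3) / 20.0)    # "response", lagging x by 3 steps

corr = np.correlate(y - y.mean(), x - x.mean(), mode="full")
lags = np.arange(-n + 1, n)               # lag axis for mode="full"
best_lag = lags[np.argmax(corr)]
# best_lag is positive (+3 here): y lags x, i.e. x leads.
```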
—
This intrigues me. I’ve been curious whether TSI is too blunt an instrument to draw meaningful conclusions from. My background is in telecoms, where there are dispersion challenges in fibre, particularly around the water peak. Launch power may be constant, while received power may vary depending on band and the dispersion characteristics of the fibre. Luckily, lasers and fibre are usually more constant than the Sun and atmosphere. Surely it’s the energy variations in the bands of interest that are more significant, e.g. the water and CO2 absorption peaks, or PAR?
1.7 deg C for CO2 doubling is invalidated by observations: the PDO/AMO warm period in the 1940s was just 0.3 deg C colder than the similar warm peak in the 2000s (and that’s per HadCRUT with its known warm biases). Had the relation between CO2 and temperature been, for simplicity, linear, the CO2 increase of 80 ppm since 1940 should have delivered 0.5 deg C.
Roy Spencer: You wrote, “Note there is a time lag of about 1 year between the solar forcing and the temperature response, as would be expected since it takes time for the upper ocean to warm.”
The Southern Hemisphere Sea Surface temperatures show a seasonal peak in February and March, leading to a lag from the Southern Hemisphere summer solstice of two to three months:
http://i49.tinypic.com/qriywl.jpg
But the surface does not reflect the inertia of the water column. So let’s look at steric sea level. In “Assessing the Globally Averaged Sea Level Budget on Seasonal to Interannual Time Scales”…
http://ecco.jpl.nasa.gov/~jwillis/willis_sl_budget_final.pdf
…Willis et al (2008) write, “Steric sea level has a seasonal amplitude of 3.7 +/- 0.8 mm, peaking in early April. Since two-thirds of the world’s oceans lie in the southern hemisphere, the phase reflects the peak warming in Austral summer.”
That’s about a 3- to 4-month lag from solstice to steric sea level peak.
So the one-year ocean lag is closer to these observations than the multiyear lags suggested in other papers.
Nicola Scafetta says:
June 5, 2010 at 9:52 pm
… Therefore the climate sensitivity to doubling of CO2 cannot be calculated by simply using a regression model between the 11-year TSI cycle and the equivalent cycle found in the temperature. The two things are apples and oranges.
Glad you pointed that out. Dr. Spencer seemed to be “out there” on this one. Roy, maybe you can clarify how you justify tying solar forcing to CO2 forcing.
tallbloke: You commented to Roy Spencer, “This is way beyond anything co2 can do, so it must be down to the sun, amplified by cloud cover variation IMO.”
IMO those variations in cloud amount for the period you were discussing (1992-2002) would have resulted from the unusually strong trade winds associated with the 1995/96 La Niña and the sheer length of the 1998/99/00/01 La Niña. Both had significant impacts on tropical Pacific OHC:
http://i36.tinypic.com/eqwdvl.png
And the impact of the 1998/99/00/01 La Niña is also visible in Indian Ocean OHC:
http://i35.tinypic.com/2pphbf4.png
Is the curious rise in the tropical Atlantic in 2003 a lagged response?
http://i38.tinypic.com/2me2vc1.png
Can we put this in perspective?
A paper was done showing a high climate sensitivity, placing it in warmist territory, which then was used to support (C)AGW. Dr. Spencer took his own simple model, matched it to HadCRUT3 which is warmist-approved, and came up with a much lower climate sensitivity near the low end of the IPCC-provided range. He used the Tung and Camp number in his model and found it didn’t work.
So a warmist paper was debunked using Dr. Spencer’s model, which was tuned to warmist-approved data and had yielded a result within the IPCC-specified range. Using the enemy’s own weapons against themselves, if you wish to view it that way.
It was noted the figure still didn’t match the real-world observations, namely from satellites which showed a much lower climate sensitivity, and that the model was simple and left out “…any Svensmark-type mechanism…” which are looking to be very important in regulating global temperatures.
It’s all there, written in a gentlemanly scientific non-confrontational manner. ‘This ain’t reality, but if it was you’d still be wrong.’ What more do you want? 😉
Juraj V. says:
June 6, 2010 at 2:11 am
1.7 deg C for CO2 doubling is invalidated by observations: the PDO/AMO warm period in the 1940s was just 0.3 deg C colder than the similar warm peak in the 2000s (and that’s per HadCRUT with its known warm biases). Had the relation between CO2 and temperature been, for simplicity, linear, the CO2 increase of 80 ppm since 1940 should have delivered 0.5 deg C.
The HadCRUT linear trend since 1940 is ~0.08 deg per decade. This gives an overall warming of 0.56 deg. The decadal average for the 1940s is approx -0.07; the average since 2000 is ~0.42, i.e. a difference of ~0.5 deg.
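The arithmetic in this comment is easy to check:

```python
# ~0.08 deg C/decade over the seven decades from the 1940s to the 2000s.
trend_per_decade = 0.08
decades = 7
warming = round(trend_per_decade * decades, 2)   # 0.56 deg C, as stated
```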
What did you do – pick the highest number from the 1940s and compare it with the lowest from the 2000s?
Dr. Spencer
I am odd one out here.
I think there is a strong possibility that the oceans’ conveyor-belt currents are affected (depending on their direction) by twists and turns of the Earth’s magnetic field, the evolution of which (during the last 400 years, for selected latitudes) is shown here:
http://www.vukcevic.talktalk.net/NFC6.htm
Since the Earth’s magnetic field and solar activity show a certain degree of synchronisation http://www.vukcevic.talktalk.net/LFC1.htm
then it is tempting to mistake one for the other.
Norman Page –
Agree with your statement on cloud albedo effect. This effect is in the same direction as the solar cycle TSI variation effect. When sunspots are at a minimum, solar insolation is also at a minimum (cooling effect), cosmic rays are at a maximum (allowing for some months lag time) making the cloud albedo effect also at a maximum (cooling effect).
Dr. Spencer’s analysis lumps the TSI and cloud albedo effects together and we can have no idea how much of the warming response is due to which. Therefore there must be great uncertainty or “error bars” on his estimated climate sensitivity value for TSI.
Nevertheless, his calculation does indicate the magnitude of combined solar effects.
Leif Svalgaard says:
June 5, 2010 at 9:25 pm
“The Sun’s magnetic field right now is what it was 108 years ago. So, based on your ‘logic’ the climate should be the same today as back then…”
That does not follow. Let’s start with the fact that the temperature was much higher before the decrease in the Sun’s magnetic field this time around. A forcing should be expected to exert the same influence on temperature, but not to result in the same temperature, unless the initial conditions, and the associated forcings, were also the same. In that connection, current temperatures are also under the strong influence of an El Nino. I am not a solar-forcing enthusiast, and I respect your solar expertise, but I think your argument is unsound.
So it is suggested that the oceans and the atmosphere integrate the total solar effect in some so far uncharacterized manner.
More data. Send shysters, gats, and loot.
=========================
Question?
Since water can have and hold trace elements, can CO2 do the same?
Roy,
So it sounds like you have 0.2 w/m^2 +/- 5.0 W/m^2 uncertainty in albedo variation. Wouldn’t it make more sense to look for sensitivity in the albedo variation?
Someone, about a year ago or so, posted a chart on this blog showing the actual length of each solar cycle. I seem to remember seeing that the actual lengths varied from about 9.5 years to 13.5 years. This report compares 11-year solar cycles with corresponding 11-year cycles in temperature (“The most obvious way I see to do such an analysis is to do a composite 11-year cycle in TSI (there were 4.5 solar cycles in their period of analysis, 1959 through 2004) and then compare it to a similarly composited 11-year cycle in surface temperatures.”).
It would seem to me that one should take the actual solar cycle length for each specific cycle and compare it with the exact same time period for temperature (with the starting point adjusted for lag if any) rather than using the 11 year average cycle for the entire period.
I still don’t understand how a temperature series can be used for wiggle matching. The temperature response to weather (an incredibly strong driver) is a chaotic, noisy data set. Any filtered smoothing based on a set time period (i.e. 11 years) can likely show whatever “hump” or “trend” you want based on the filter you use, but is it related to your proposed driver, or is it still emphasizing noisy temperature responses to the influence of weather?
It seems to me that one would have to first subtract somewhat predictable temperature responses to every pressure system impinging upon every sensor on at least a week by week basis in order to find temperature response to something other than weather, before then applying smoothing and filtering.
In my opinion, trying to find trends related to any other forcing outside of weather systems using quasi-raw data (IE week by week weather related temperature response has not been removed) makes no sense whatsoever. Weather systems are an overwhelmingly powerful driver of temperature, are not anywhere near white noise, and cannot simply be “smoothed” out, or averaged out, to show some other driver. As it stands, chances are you will find a temperature response trend near your desired filter window, and will mistakenly call that something other than a weather related temperature response.
In my mind, the only correlation that makes sense, and the only conclusion we can make, given the data set we have, has to do with the drivers of weather systems, the oceans. Until we can remove the oceanic driven weather system affect from every sensor on a weekly basis, we will not be able to find any other match to anything else.
Charles Higley,
You have an interesting viewpoint, but I believe we have proxies telling us atmospheric CO2 has been more than double its current level in millennia past. If it has happened before, why not in the future?
One such way to subtract the weather response would be to subtract month-to-month predictions of weather-related “change from a long-term (i.e. over the life of the record, not some artificial time span) average”. These predictions are probably stored somewhere in agricultural departments at state universities. For example, see:
http://www.oregon.gov/ODA/NRD/docs/pdf/dlongrange.pdf
It would be interesting to do this for the state of Oregon and see if some kind of long term trend shows up after weather related month to month changes are removed.
But even then, in some parts of Oregon weather-related changes can occur within 10 minutes, not predicted by the local weather service. One minute we are under a pressure system coming from the West and the next minute we are under a pressure system coming from the North. It is not unusual to experience sudden and drastic drops or climbs in temperature, completely unpredicted. I believe that the record range for temperature change in a 24-hour period in Oregon is 70+ degrees.
The more I study this issue, the more I am convinced that you cannot determine the difference between weather system temperature response from anthropogenic response. The weather noise is just too powerful.
Bob Tisdale says:
June 6, 2010 at 3:15 am
tallbloke: You commented to Roy Spencer, “This is way beyond anything co2 can do, so it must be down to the sun, amplified by cloud cover variation IMO.”
IMO those variations in cloud amount for the period you were discussing (1992-2002) would have resulted from the unusually strong trade winds associated with the 1995/96 La Niña and the sheer length of the 1998/99/00/01 La Niña.
Hi Bob, thanks for the observation. I wasn’t trying to imply the cloud variation must be solely down to heightened solar activity. I don’t have a problem with other factors being involved as well. Any idea why the trade winds were unusually strong? Because temps were up and the engine was running on high pressure, maybe?
Is the curious rise in the tropical Atlantic in 2003 a lagged response?
http://i38.tinypic.com/2me2vc1.png
More likely a bad splice in the XBT/ARGO data, IMO. With the degree to which Levitus et al have been playing fast and loose with estimates of OHC in the last ten years, who knows. What I do know is that they revised their OHC estimates downward all the way back when they realised the big jump couldn’t be accounted for by co2 in their Levitus et al 2000 figures.
Doug in Seattle says:
June 6, 2010 at 1:15 am
stephan says:
June 5, 2010 at 11:32 pm
who is lying?
http://weather.unisys.com/surface/sst_anom.html
or
http://www.osdpd.noaa.gov/data/sst/anomaly/2010/anomnight.6.3.2010.gif
They both show the same thing but use different color scales. The NOAA one uses yellow (a warm color) starting at 0 degrees, while the Unisys one has green (a cooler color). The end result is that the NOAA map “looks” warmer than the Unisys map.
In a way the NOAA map tries to fool the reader into thinking there is warming going on where there is not, so in that sense it is not entirely honest.
I wouldn’t go so far though as to say they are lying. All one has to do is look at the color scale to see which colors mean warming. But if one doesn’t look at the scale (most people?) one would get the impression of a world on fire.
_________________________________________________________________________
Doug, I do not think he was referring to the colors. I put the graphics up on two different screens and the areas do not match. Look at the area off the tip of South America or the area up near Alaska.
@Charles Higley
Hi Charles
>>With the 50 to 1 partitioning between sea and air, we would be hard put to raise CO2 by 20% if we tried by burning all our available carbon.<<
Prof. Tom V. Segalstad (Oslo, Norway) made that calculation a long time ago, when the CO2 level in the atmosphere was 380 ppmv: if you burn all known fossil fuel, it will end up at 456 ppmv in the atmosphere (a rise of 20%), so a doubling of the CO2 in the atmosphere is not possible due to burning fossil fuel.
But a CO2 release from the oceans (because of warming) can do it without any doubt.
Here is the good news :
http://tomnelson.blogspot.com/2010/06/breaking-fourth-grade-climate-realist.html
Kind regards from Greenland
Bob Tisdale says:
June 6, 2010 at 2:52 am
Not just inertia, Bob. Heat in water moves by convection from hottest to coldest, so the sea at great depth gradually gets warmer if the sea surface warms, and more slowly burps it out to space to reduce the heat sink (e.g. El Nino). But there must be a huge lag due to the protection of the halocline and thermocline strata, especially during the potential heat build-up cycles. El Nino may be one of the rejection mechanisms, when the thermocline interface rejects the heat movement? These interfaces, whatever their real operational aspects, serve as elastic protective interfaces which must have huge lag times, barring any turnover caused by volcanic upheaval, for instance. No one puts these in models, because no one could get a fix on what a meaningful lag time would be, and any correlation would be blunted and lost over long times, eroded by other influences.
Nothing on earth is homogeneous or simple, yet models rely on homogeneity and simplicity for setting and choosing parameters.
That is why Spencer uses simplistic TSI from the sun, leaving out more powerful UV, cosmic ray, radio, solar wind, and magnetic field (protective) influences: there aren’t enough Cray computers in the universe to string together to get any meaningful result from these variables. Too many variables, and only speculation for the (too few) equations.
Let’s spend our (to be sharply reduced in the near future, of course, now that we see the ignominious waste and fraud involved) taxpayer money on useful enterprises, like actually doing real measurements in the field, and then we will be justified in spending a little capital crunching the results.
The best fit being for an effective (thermal) ocean depth of 25 meters, which is way too shallow, fairly well screams that the results are very doubtful. The temperature data over a few solar cycles is almost certainly too noisy to do this kind of analysis. I expect that if error bars were calculated, it is likely that the uncertainty would be comparable to or larger than the effect. If the analysis were extended to many solar cycles then the results may be more robust, but uncertainty in the early part of the temperature record would then be greater, and you would just be guessing at solar energy variation anyway…. more uncertainty. This may be tilting at windmills.
Tom in Florida says:
June 6, 2010 at 6:24 am
” I seem to remember seeing that the actual lengths varied from about 9.5 years to 13.5 years. ”
“It would seem to me that one should take the actual solar cycle length for each specific cycle and compare it with the exact same time period for temperature ”
Hear, Hear!!! Normalize the time scale to be 1 cycle, based on the observed length, and then compare the relative variations of TSI and Temperature. The variable cycle length introduces unnecessary uncertainty.
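A sketch of the normalization being suggested: resample each cycle, whatever its actual length, onto a common 0-to-1 phase axis before compositing. The series and the cycle-boundary years below are illustrative stand-ins, not actual solar-minimum dates.

```python
# Phase-normalize variable-length cycles onto a common axis, then composite.
import numpy as np

def normalize_cycle(years, values, start, end, n_phase=50):
    """Interpolate one cycle (start..end, in years) onto n_phase phase points."""
    mask = (years >= start) & (years <= end)
    phase = (years[mask] - start) / (end - start)   # 0..1 within this cycle
    grid = np.linspace(0.0, 1.0, n_phase)
    return np.interp(grid, phase, values[mask])

years = np.arange(1959.0, 2005.0)
temps = np.sin(2 * np.pi * years / 10.5)            # stand-in annual series
# Hypothetical cycle boundaries; a real analysis would use observed minima.
cycles = [(1964, 1976), (1976, 1986), (1986, 1996), (1996, 2008)]
resampled = [normalize_cycle(years, temps, a, b) for a, b in cycles if b <= 2004]
composite = np.mean(resampled, axis=0)              # average cycle in phase space
```

The incomplete final cycle is dropped rather than extrapolated, mirroring the "4.5 cycles" caveat in the post.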
It was 105 years ago that Roald Amundsen sailed through the Northwest Passage. No one duplicated that feat without an icebreaker until 2007, and again in 2008. The passage has since frozen back up again.
It looks like history did indeed repeat itself at least where the arctic ice mass & extent is concerned.
It could be just coincidence that the Northwest Passage opened up briefly only during periods of very very low sunspot minima but it’s enough to raise suspicion of a causal link.