The Search for a Short Term Marker of Long Term Climate Sensitivity
By Dr. Roy Spencer. October 4th, 2009
[This is an update on research progress we have made into determining just how sensitive the climate system is to increasing atmospheric greenhouse gas concentrations.]

While published studies are beginning to suggest that net feedbacks in the climate system could be negative for year-to-year variations (e.g., our 2007 paper, and the new study by Lindzen and Choi, 2009), there remains the question of whether the same can be said of long-term climate sensitivity (and therefore, of the strength of future global warming).
Even if we find observational evidence of an insensitive climate system for year-to-year fluctuations, it could be that the system's long-term response to more carbon dioxide is very sensitive. I'm not saying I believe that is the case – I don't – but it is possible. This question of a potentially large difference between the short-term and long-term responses of the climate system has been bothering me for many months.
Significantly, as far as I know, the climate modelers have not yet demonstrated that there is any short-term behavior in their models which is also a good predictor of how much global warming those models project for our future. It needs to be something we can measure, something we can test with real observations. Just because all of the models behave more-or-less like the real climate system does not mean the range of warming they produce encompasses the truth.
For instance, computing feedback parameters (a measure of how much the radiative balance of the Earth changes in response to a temperature change) would be the most obvious test. But I've diagnosed feedback parameters from 7- to 10-year subsets of the models' long-term global warming simulations, and they have virtually no correlation with those models' known long-term feedbacks. (I am quite sure I know the reason for this…which is the subject of our JGR paper now being revised…I just don't know a good way around it.)
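To make that test concrete for readers: a feedback parameter is conventionally diagnosed as the regression slope of net top-of-atmosphere radiative flux anomalies against surface temperature anomalies, in Watts per sq. meter per degree. Here is a minimal sketch of that diagnosis in Python; the "true" feedback value and the noise term are illustrative assumptions, not actual model output, but the noise represents exactly the kind of internal radiative variability that de-correlates short-term diagnoses from the long-term value.

```python
import numpy as np

def diagnose_feedback(temp_anom, net_flux_anom):
    """Regress net TOA radiative flux anomalies (W/m^2, positive = more
    energy lost to space) against surface temperature anomalies (deg C).
    The slope is the diagnosed feedback parameter in W/m^2 per deg C."""
    slope, _intercept = np.polyfit(temp_anom, net_flux_anom, 1)
    return slope

# Illustrative stand-in data: an assumed "true" feedback of 2.0 W/m^2/K
# plus unforced radiative noise (e.g., internal cloud variations).
rng = np.random.default_rng(0)
t = rng.normal(0.0, 0.3, size=120)          # 120 months of temperature anomalies
flux = 2.0 * t + rng.normal(0.0, 1.0, 120)  # feedback response + radiative noise

print(diagnose_feedback(t, flux))  # recovers ~2.0 only when the noise is weak
```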
But I refuse to give up searching. This is because the most important feedbacks in the climate system – clouds and water vapor – have inherently short time scales…minutes for individual clouds, to days or weeks for large regional cloud systems and changes in free-tropospheric water vapor. So, I still believe that there MUST be one or more short term “markers” of long term climate sensitivity.
Well, this past week I think I finally found one. I’m going to be a little evasive about exactly what that marker is because, in this case, the finding is too important to give away to another researcher who will beat me to publishing it (insert smiley here).
What I will say is that the marker ‘index’ is related to how the climate models behave during sudden warming events and the cooling that follows them. In the IPCC climate models, these warming/cooling events typically have time scales of several months, and are self-generated as ‘natural variability’ within the models. (I’m not concerned that I’ve given it away, since the marker is not obvious…as my associate Danny Braswell asked, “What made you think of that?”)
The following plot shows how this ‘mystery index’ is related to the net feedback parameters diagnosed in those 18 climate models by Forster and Taylor (2006). As can be seen, it explains 50% of the variance among the different models. The best I have been able to do up to this point is less than 10% explained variance, which for a sample size of 18 models might as well be zero.
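(For anyone who wants the statistic spelled out: "explained variance" is simply the squared correlation between the index and the diagnosed feedback parameters across the 18 models. A quick sketch, with made-up stand-in values rather than the actual model results:)

```python
import numpy as np

# Hypothetical stand-in values, one per model, for illustration only.
rng = np.random.default_rng(1)
feedback = rng.uniform(0.9, 1.9, size=18)          # diagnosed feedbacks (W/m^2/K)
index = feedback + rng.normal(0.0, 0.35, size=18)  # a partly correlated "marker" index

r = np.corrcoef(index, feedback)[0, 1]
print(f"explained variance = {r**2:.0%}")  # r^2 is the fraction of variance explained
```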
Also plotted is the range of values of this index from 9 years of CERES satellite measurements computed in the same manner as with the models’ output. As can be seen, the satellite data support lower climate sensitivity (larger feedback parameter) than any of the climate models…but not nearly as low as the 6 Watts per sq. meter per degree found for tropical climate variations by us and others.
For a doubling of atmospheric carbon dioxide, the satellite measurements would correspond to about 1.6 to 2.0 deg. C of warming, compared to the 18 IPCC models’ range shown, which corresponds to warming of from about 2.0 to 4.2 deg. C.
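For those who want to check that conversion: equilibrium warming for doubled CO2 is the radiative forcing divided by the net feedback parameter, and the commonly used forcing for a doubling is about 3.7 Watts per sq. meter. A feedback parameter of 2.3 then gives 3.7/2.3 ≈ 1.6 deg. C, and 1.85 gives 3.7/1.85 ≈ 2.0 deg. C; the models' 2.0 to 4.2 deg. C range correspondingly implies feedback parameters of roughly 0.9 to 1.85.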
The relatively short length of record of our best satellite data (9 years) appears to be the limiting factor in this analysis. The model results shown in the above figure come from 50 years of output from each of the 18 models, while the satellite range of results comes from only 9 years of CERES data (March 2000 through December 2008). The index needs to be computed from as many strong warming events as can be found, because the marker only emerges when a number of them are averaged together.
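The averaging itself is ordinary event compositing: locate strong warming events, extract a fixed window of months around each one, and average the windows so that noise uncorrelated with the events tends to cancel. A minimal sketch of the idea follows; the detection threshold and window length are placeholders, not the actual method.

```python
import numpy as np

def composite_events(series, event_idx, half_window):
    """Average fixed-length windows of `series` centered on each event.
    Noise that is uncorrelated across events averages toward zero as
    more events are included in the composite."""
    windows = [series[i - half_window : i + half_window + 1]
               for i in event_idx
               if half_window <= i < len(series) - half_window]
    return np.mean(windows, axis=0)

# Illustrative usage on a synthetic monthly series:
rng = np.random.default_rng(2)
series = rng.normal(0.0, 1.0, size=600)
events = np.where(series > 1.5)[0]               # crude stand-in for "strong warming events"
composite = composite_events(series, events, 6)  # 13-month composite around each event
```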
Despite this drawback, the finding of this short-term marker of long-term climate sensitivity is at least a step in the right direction. I will post progress on this issue as the evidence unfolds. Hopefully, more robust markers can be found that show even a stronger relationship to long-term warming in the models, and which will produce greater confidence when tested with relatively short periods of satellite data.

The current Arctic sea ice extent is not a fossil. It has an exact number as I type.
It does? Where exactly is it? And how does it measure up when compared to an alternate number derived by another method? How large, exactly, are the error bars?
These are important questions to answer at a time when the data is affected by splicing and sensor failures. And they are certainly important given the revelation that the Polar 5 measurements showed that the expected ice thickness provided by the satellites was off by 100%.
Vangel (16:50:51) :
Exactly. I was going to get to the accuracy part and the "relentless decline" part once we had established a volume number. bill (10:02:53), I suppose it's too early for Kwok et al. to have published a calculation for the 2009 minimum volume ??
Of course, yes Antarctica too. How come the Catlin crew haven’t gone swimming down there yet ??
So why are you and danappaloupe having such difficulty in typing one ??
Here you go:
Expected thickness (from satellite data): two meters.
Measured thickness (Polar 5): four meters.
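Just to spell out the arithmetic behind "off by 100%": an expected two meters against a measured four meters is an error of (4 − 2)/2 = 100% of the expected value.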
Of course, this is all BS because you do not have sufficient satellite data points over several cycles to come up with anything definitive about the long-term trend. And if you look at the entire data set and go back far enough you find no evidence of statistically significant changes. Below is the abstract from Polyakov et al., 2003: Long-term ice variability in Arctic marginal seas. J. Climate, 16, 2078–2085.
Examination of records of fast ice thickness (1936–2000) and ice extent (1900–2000) in the Kara, Laptev, East Siberian, and Chukchi Seas provide evidence that long-term ice thickness and extent trends are small and generally not statistically significant, while trends for shorter records are not indicative of the long-term tendencies due to large-amplitude low-frequency variability. The ice variability in these seas is dominated by a multidecadal, low-frequency oscillation (LFO) and (to a lesser degree) by higher-frequency decadal fluctuations. The LFO signal decays eastward from the Kara Sea where it is strongest. In the Chukchi Sea ice variability is dominated by decadal fluctuations, and there is no evidence of the LFO. This spatial pattern is consistent with the air temperature–North Atlantic Oscillation (NAO) index correlation pattern, with maximum correlation in the near-Atlantic region, which decays toward the North Pacific. Sensitivity analysis shows that dynamical forcing (wind or surface currents) dominates ice-extent variations in the Laptev, East Siberian, and Chukchi Seas. Variability of Kara Sea ice extent is governed primarily by thermodynamic factors.
Please note the evidence that natural oscillations dominate the ice extent and thickness observations.
Using ICESat measurements, scientists found that overall Arctic sea ice thinned about seven inches a year, for a total of 2.2 feet over four winters….
Four years? That is entirely meaningless. We need accurate and complete data over several cycles to find anything meaningful. The simple fact is that we have seen low ice cover before and will see it again, because the natural oscillations are the drivers of thickness. Until those natural oscillations are no longer in place, the planet will work pretty much as it has.
Vangel, have you seen the date of the Polyakov article? It is from 2003. How have the authors reacted to the sudden, dramatic melt as of autumn 2005? In particular, the melt happened mainly in the areas least affected, or not affected at all, by the LFO.
Apparently the cycles are being modulated by a different and new development that is gaining dominance.
Yes, I am aware of the publication date of the Polyakov article. And I am aware that he has published other articles later and that his conclusions still stand because we have documented evidence of lower ice cover in the Arctic and of rapid melting during previous periods.
From the abstract of a December 2004 article: "Recent observations show dramatic changes of the Arctic atmosphere–ice–ocean system, including a rapid warming in the intermediate Atlantic water of the Arctic Ocean. Here it is demonstrated through the analysis of a vast collection of previously unsynthesized observational data, that over the twentieth century Atlantic water variability was dominated by low-frequency oscillations (LFO) on time scales of 50–80 yr. Associated with this variability, the Atlantic water temperature record shows two warm periods in the 1930s–40s and in recent decades and two cold periods earlier in the century and in the 1960s–70s. Over recent decades, the data show a warming and salinification of the Atlantic layer accompanied by its shoaling and, probably, thinning. The estimate of the Atlantic water temperature variability shows a general warming trend; however, over the 100-yr record there are periods (including the recent decades) with short-term trends strongly amplified by multidecadal variations. Observational data provide evidence that Atlantic water temperature, Arctic surface air temperature, and ice extent and fast ice thickness in the Siberian marginal seas display coherent LFO. The hydrographic data used support a negative feedback mechanism through which changes of density act to moderate the inflow of Atlantic water to the Arctic Ocean, consistent with the decrease of positive Atlantic water temperature anomalies in the late 1990s. The sustained Atlantic water temperature and salinity anomalies in the Arctic Ocean are associated with hydrographic anomalies of the same sign in the Greenland–Norwegian Seas and of the opposite sign in the Labrador Sea. Finally, it is found that the Arctic air–sea–ice system and the North Atlantic sea surface temperature display coherent low-frequency fluctuations. Elucidating the mechanisms behind this relationship will be critical to an understanding of the complex nature of low-frequency variability found in the Arctic and in lower-latitude regions."
What you are missing is the evidence of the low-frequency fluctuations, which are natural factors and not attributable to man's emissions of CO2. Also, Polyakov makes it clear that the factors affecting ice cover are numerous and complex, which indicates that arguing that man's emissions of CO2 are a primary factor won't work very well on the scientific level and can only be an unsubstantiated narrative.
And I hope that you do not believe that Polyakov is the only person finding this natural variability. In the abstract of their 2005 JGR paper, Historical variability of sea ice edge position in the Nordic Seas, Dmitry Divine and Chad Dick write, "Historical ice observations in the Nordic Seas from April through August are used to construct time series of ice edge position anomalies spanning the period 1750–2002. While analysis showed that interannual variability remained almost constant throughout this period, evidence was found of oscillations in ice cover with periods of about 60 to 80 years and 20 to 30 years, superimposed on a continuous negative trend. The lower frequency oscillations are more prominent in the Greenland Sea, while higher frequency oscillations are dominant in the Barents. The analysis suggests that the recent well-documented retreat of ice cover can partly be attributed to a manifestation of the positive phase of the 60–80 year variability, associated with the warming of the subpolar North Atlantic and the Arctic. The continuous retreat of ice edge position observed since the second half of the 19th century may be a recovery after significant cooling in the study area that occurred as early as the second half of the 18th century."
Once again we have researchers noting the existence of natural variability of long duration, which means that looking at any short period of time is scientifically meaningless.
Of course, we could look at the short period that you are probably talking about, but even there we see no dangerous trend. While there was a low in 2007, the record covers only a very short period of time, and ice cover is now recovering strongly. And as Anthony points out, if you look at the reporting by NANSEN, NSIDC, Cryosphere Today, and DMI you see the use of SSM/I data, which has reliability problems due to sensor drift, malfunctions, and outright failure. If you look at data from the newer AMSR-E sensor, which does not have those problems, you see that 2009 ice cover is in the middle of the range, not far from the eight-year average. How you can interpret that data as some crisis is beyond me.
Then there is the SH data, which clearly shows that this year's ice melt was the lowest in thirty years and that, once again, ice cover is not far off the mean. While you might feel comfortable jumping up and down over short-term trends that are indistinguishable from noise, some of us are actually interested in seeing what the signal is telling us. And as Polyakov, Divine, and Dick point out, there is nothing much to see other than the natural variation due to low-frequency oscillations. With ice cover increasing and NH temperatures falling, I believe that your side is about to drop this argument very soon.
[REPLY – Note the NOAA adjustment graph is in °F, not °C. Even so, the adjustment almost doubles the warming trend. ~ Evan]
Thanks. I screwed up by typing too fast. Note that the adjustment is about the same size as the increase reported in the global data, which you can find in Figure 1 of the release in which Hansen, Ruedy, Sato, and Glascoe admitted there was no American warming since the 1940s. The divergence between the US and global data is striking but cannot be examined because CRU claims to have lost the unadjusted data set. So while we have a slight decrease in the US data, there is a 0.3 °C increase since the 1930s in the global data.
danappaloupe (19:42:47) :
NOAA uses 30 years of data to define an average temperature.
The number 30 also shows up in statistics as the rule-of-thumb sample size at which sample statistics begin to approximate the population mean and standard deviation. Usually when n − 1 > 30, the sample standard deviation s is taken as a good approximation of the population standard deviation σ.
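(A quick numerical illustration of that rule of thumb, assuming a normal population with a known standard deviation of 1.0; the distribution and its parameters are just for the demo:)

```python
import numpy as np

# Watch the sample standard deviation s settle toward the population
# value (sigma = 1.0) as the sample size n grows past roughly 30.
rng = np.random.default_rng(3)
for n in (5, 10, 31, 100, 1000):
    s = rng.normal(0.0, 1.0, size=n).std(ddof=1)  # ddof=1 gives the sample SD
    print(f"n = {n:4d}  s = {s:.3f}")
```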
I don’t know what else to say…
I see this same lack of understanding of weather and climate from AGW skeptics, and no one ever corrects them. It drives me insane when AGW supporters try to do it, because it makes everyone look uneducated.
I wonder what you think of your previous postings now that we know that the temperature reconstructions were based on ‘adjusted’ data.