The Search for a Short Term Marker of Long Term Climate Sensitivity
By Dr. Roy Spencer. October 4th, 2009
[This is an update on research progress we have made into determining just how sensitive the climate system is to increasing atmospheric greenhouse gas concentrations.]

While published studies are beginning to suggest that net feedbacks in the climate system could be negative for year-to-year variations (e.g., our 2007 paper, and the new study by Lindzen and Choi, 2009), there remains the question of whether the same can be said of long-term climate sensitivity (and therefore, of the strength of future global warming).
Even if we find observational evidence of an insensitive climate system for year-to-year fluctuations in the climate system, it could be that the system’s long term response to more carbon dioxide is very sensitive. I’m not saying I believe that is the case – I don’t – but it is possible. This question of a potentially large difference in short-term and long-term responses of the climate system has been bothering me for many months.
Significantly, as far as I know, the climate modelers have not yet demonstrated that there is any short-term behavior in their models which is also a good predictor of how much global warming those models project for our future. It needs to be something we can measure, something we can test with real observations. Just because all of the models behave more-or-less like the real climate system does not mean the range of warming they produce encompasses the truth.
For instance, computing feedback parameters (a measure of how much the radiative balance of the Earth changes in response to a temperature change) would be the most obvious test. But I’ve diagnosed feedback parameters from 7- to 10-year subsets of the models’ long-term global warming simulations, and they have virtually no correlation with those models’ known long-term feedbacks. (I am quite sure I know the reason for this…which is the subject of our JGR paper now being revised…I just don’t know a good way around it).
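As a rough illustration of the sort of diagnosis being described here (not Dr. Spencer's actual procedure, and using purely synthetic data), a net feedback parameter is often estimated as the regression slope of global-mean net radiative flux anomalies against surface temperature anomalies over a short window:

```python
# Minimal sketch, assuming monthly global-mean anomalies; all numbers below are
# synthetic stand-ins, not model or satellite output.
import numpy as np

def diagnose_feedback(temp_anom, flux_anom):
    """Regression slope (W/m^2 per deg C) of net outgoing flux on temperature.

    A larger slope means more energy is shed per degree of warming, i.e. a less
    sensitive climate; a smaller slope implies higher sensitivity.
    """
    slope, _intercept = np.polyfit(temp_anom, flux_anom, 1)
    return slope

# Ten years of synthetic monthly data with an assumed "true" feedback of
# 3 W/m^2 per deg C plus unforced radiative noise (e.g. cloud variability),
# which is one reason short subsets can fail to recover a model's long-term feedback.
rng = np.random.default_rng(0)
temp = rng.normal(0.0, 0.2, 120)
flux = 3.0 * temp + rng.normal(0.0, 1.0, 120)
print(diagnose_feedback(temp, flux))   # scatters around 3.0 from one draw to the next
```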
But I refuse to give up searching. This is because the most important feedbacks in the climate system – clouds and water vapor – have inherently short time scales…minutes for individual clouds, to days or weeks for large regional cloud systems and changes in free-tropospheric water vapor. So, I still believe that there MUST be one or more short term “markers” of long term climate sensitivity.
Well, this past week I think I finally found one. I’m going to be a little evasive about exactly what that marker is because, in this case, the finding is too important to give away to another researcher who will beat me to publishing it (insert smiley here).
What I will say is that the marker ‘index’ is related to how the climate models behave during sudden warming events and the cooling that follows them. In the IPCC climate models, these warming/cooling events typically have time scales of several months, and are self-generated as ‘natural variability’ within the models. (I’m not concerned that I’ve given it away, since the marker is not obvious…as my associate Danny Braswell asked, “What made you think of that?”)
The following plot shows how this ‘mystery index’ is related to the net feedback parameters diagnosed in those 18 climate models by Forster and Taylor (2006). As can be seen, it explains 50% of the variance among the different models. The best I have been able to do up to this point is less than 10% explained variance, which for a sample size of 18 models might as well be zero.
Also plotted is the range of values of this index from 9 years of CERES satellite measurements computed in the same manner as with the models’ output. As can be seen, the satellite data support lower climate sensitivity (larger feedback parameter) than any of the climate models…but not nearly as low as the 6 Watts per sq. meter per degree found for tropical climate variations by us and others.
For a doubling of atmospheric carbon dioxide, the satellite measurements would correspond to about 1.6 to 2.0 deg. C of warming, compared to the 18 IPCC models’ range shown, which corresponds to warming of from about 2.0 to 4.2 deg. C.
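For reference, the arithmetic behind those numbers is simply the doubling forcing divided by the feedback parameter; the sketch below assumes the commonly cited value of roughly 3.7 W/m² for doubled CO2 (a figure not given in the post itself):

```python
# Back-of-envelope check: equilibrium warming = forcing / feedback parameter.
F_2XCO2 = 3.7  # W/m^2, approximate commonly cited forcing for doubled CO2

def warming_for_doubling(feedback_param):
    """Equilibrium warming (deg C) implied by a feedback parameter in W/m^2 per deg C."""
    return F_2XCO2 / feedback_param

for lam in (6.0, 2.3, 1.85, 0.9):
    print(f"lambda = {lam:4.2f} W/m^2/degC  ->  dT ~ {warming_for_doubling(lam):.1f} deg C")
# lambda ~ 6 gives ~0.6 deg C; lambda ~ 1.85-2.3 gives the 1.6-2.0 deg C quoted
# above; lambda ~ 0.9-1.85 roughly spans the models' 2.0-4.2 deg C range.
```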
The relatively short length of record of our best satellite data (9 years) appears to be the limiting factor in this analysis. The model results shown in the above figure come from 50 years of output from each of the 18 models, while the satellite range of results comes from only 9 years of CERES data (March 2000 through December 2008). The index needs to be computed from as many strong warming events as can be found, because the marker only emerges when a number of them are averaged together.
Despite this drawback, the finding of this short-term marker of long-term climate sensitivity is at least a step in the right direction. I will post progress on this issue as the evidence unfolds. Hopefully, more robust markers can be found that show even a stronger relationship to long-term warming in the models, and which will produce greater confidence when tested with relatively short periods of satellite data.

RR Kampen (07:47:22) :
This is why ice volume is the only parameter that counts. Extent hides it. It means that this coming spring the extent could be close to normal, while three months later all that ice might disappear in a matter of only two weeks; that is what happens when the threshold thickness has been crossed and wholesale breakup occurs – like half the Arctic Sea showed in 2007.
And less of it melted in 2008, and even less in 2009. Do you think the ice has been getting thicker or thinner in the last 2 years?
RR Kampen,
Then provide citations showing that you were likely “the first to pay attention” to ice thickness. If you go through the archives here you will see that this has been regularly discussed.
And your claim @05:50:56 almost makes it appear that CO2 has its own conscious motivations, forcing warming when convenient, then stepping back and watching other factors take over.
Please provide empirical evidence to back those claims. Real world evidence, not computer model-generated speculation. Thanks in advance.
Re: tallbloke (07:51:10) :
“And the evidence supporting this story can be found where?”
—
E.g., http://icebubbles.ucsd.edu/Publications/CaillonTermIII.pdf .
Any short search will yield results, by the way.
There needs to be an IQ test to post on here….
Re: Smokey (08:09:29) :
“Then provide citations showing that you were likely ‘the first’ to pay attention to ice thickness. ”
Might have been the first. Do you read Dutch? The citations are on Dutch fora.
“If you go through the archives here you will see that this has been regularly discussed.”
Then I might not have been the first. Unless the archive starts after autumn 2005, then I still might have been the first.
I agree with Leif that a “teaser” like this isn’t appropriate scientific behavior.
What catches my attention is the non-overlap between the observed range and the range in the models. IF the index/marker has some physical significance, then it should lead to a revision of the various models.
Re: tallbloke (08:09:19) :
“And less of it melted in 2008, and even less in 2009. Do you think the ice has been getting thicker or thinner in the last 2 years?”
In 2008, much thinner than in 2007.
In 2009, wait until the numbers come out. Couple of weeks. Given the unique hole that existed this year to the north of the gap between Ellesmere and Greenland, I expect another season of decay for the multiyear ice (thus the thickness) to have happened.
MattN (08:12:50) : “There needs to be an IQ test to post on here….”
Yes, but do we only allow the average IQ’s or just the endpoint IQ’s?
Some folks miss what AGW skepticism entails. It does not mean a priori rejection of the AGW CO2 hypothesis. Nor does it mean stubbornly insisting that the Earth is not warming in the face of substantive contrary data. Skepticism should maintain an open mind toward data that refute any personal convictions or previously held hypotheses. Of course, such new data needs to be corroborated in case it’s a noise spike or anomaly.
What’s best about this blog besides Leif’s tutorials is Anthony’s policy of encouraging and welcoming contrary opinions.
Slightly O/T but I think that SM’s latest thread on CA is possibly even more explosive than the initial exposure of the Yamal series. It seems that Gavin at RC has adopted “Tom P” as his new ‘guru’ on dendrochronology. It also seems that the mast Gavin is now pinning his “robustness” to – namely selection criteria via Tom P – is just as shaky as all those defunct hockey sticks RC put up recently, presumably as a smoke screen whilst they try and get their stories right….
http://www.climateaudit.org/?p=7278
Steve’s laconic description of this latest debacle is just GOLD!!!!!!! Perhaps he would do a guest post on WUWT and precis the current state of the “Emperor’s Clothes”?
************************
RR Kampen (07:47:22) :
I might have been the first to pay attention to thickness of ice. Rest followed finally after the disaster of 2007.
******************************
By what stretch of the imagination was 2007 a disaster WRT ice? Were people killed? Did polar bears go extinct? Did a bunch of animals die or any species go extinct? What’s up with that??
RR Kampen says:
“About two thirds of melting happens from below. Right now, even.”
But that is not why Arctic ice thins. Melting has only a minor effect. Wind is the cause of ice loss and consequently, of sea ice thinning, because new ice forms over open water.
Polar ice is thinner now because the older, multi-year ice was blown out of the Arctic and melted in warmer waters. The ARGO buoys show that the deep ocean is cooling. Water temperature under the Polar ice really has little to do with thinning ice.
AGW has nothing to do with ice thickness.
Kampen 3:53
Please look at the JAXA chart for today. 2009 sea ice is flat lining and not growing. 2009 may soon start to fall behind the 2008 and 2007 sea ice extents and the 2009 plot looks very atypical compared to the last 10 years of data.
Thanks
William
Also, the Antarctic — which must be taken equally into account with the Arctic when talking about ‘global’ sea ice — is well above average: click.
Global sea ice fluctuations have nothing to do with AGW. Further, current conditions are well within historical norms. It is alarmist speculation to claim that a minor trace gas rules the planet’s oceans, when there are perfectly normal explanations for what is happening.
Speaking of warming, has anyone seen that the Arctic temps on the DMI failed to go down significantly and have bounced back up since that big uptick? Could it be related to the refreezing rate on JAXA not being very high right now and getting closer to the 2008 line? Does this mean the Arctic temperature trend for October is going to look like a hockey stick unless it goes down significantly over the next few weeks?
A Comparison of EMD & Fourier Filtering for Long Term Global Temperatures.
The following is a comparison of the Fourier Convolution methods and the Empirical Mode Decomposition (EMD), used in signal analysis. The Fourier procedures were those recommended by Blackman & Tucky’s book “Measurement of Power Spectra”. These methods were later refined by Cooley & Tucky’s presentations on the Fast Fourier Transform, which they developed. This method was compared with the EMD analysis “On the Trend, Detrending and Variability of Nonlinear and Nonstationary Time Series” by Wu, Huang, Long and Peng. One of the advantages of the EMD method is the ability to evaluate stationary, non-stationary, linear and non-linear systems, while the Fourier methods are better suited to stationary processes. If there are wide variations between the EMD and Fourier results, it might indicate the data set is highly non-stationary or non-linear. If there are close comparisons, it might imply the data set is “nearly stationary”, in which case Fourier methods would be applicable.
The data set used was the 1856-2003 CRU set found at ftp://ftp.cru.uea.ac.uk/data/gat.csv
Figure 1 shows the CRU temperature, along with a “de-trend” line. This de-trend line intersects both end points of the data set and is used to avoid “leakage” problems with Fourier convolution. Figure 2 (blue line) shows the difference, or error, between the de-trend line and the raw data, noting that both end points are equal to zero.
Figure-1 http://www.imagenerd.com/uploads/cru-fig-1-VMn7J.gif
Figure-2 http://www.imagenerd.com/uploads/cru-fig-2-ZnKrn.gif
Figure-3 http://www.imagenerd.com/uploads/cru-fig-3-3ww7z.gif
The Fourier convolution is performed on this “error” from the de-trend line. In this case, the error, or difference, is inserted into a sample frame of 512 samples (a power-of-2 requirement). The rest of the frame is padded with zeros so that no discontinuity is present at the sample end points, so “leakage” was not considered significant. In order to check the computational accuracy, the input was “echoed” back. That is, the input was converted to the frequency domain and back to the time domain with no filtering, or “mask”. Figure 2 (red line) also shows the reconstruction of the original signal (blue line). In this case the reconstructed signal matched the input, indicating computational and convolution integrity.
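A minimal sketch of the detrend-and-echo step just described, with a synthetic series standing in for the CRU data (the variable names and numbers below are illustrative only):

```python
import numpy as np

# Synthetic annual series standing in for the 1856-2003 CRU data.
rng = np.random.default_rng(1)
years = np.arange(1856, 2004)
temps = (0.004 * (years - 1856)
         + 0.2 * np.sin(2 * np.pi * (years - 1856) / 60.0)
         + rng.normal(0.0, 0.1, years.size))

# De-trend line through the two end points, so the residual is zero at both
# ends (this is what limits spectral "leakage").
slope = (temps[-1] - temps[0]) / (years[-1] - years[0])
detrend_line = temps[0] + slope * (years - years[0])
resid = temps - detrend_line

# Zero-pad the residual into a 512-point frame (power-of-2 requirement).
frame = np.zeros(512)
frame[:resid.size] = resid

# "Echo" check: forward FFT then inverse FFT with no mask should return the
# input, confirming the computational integrity of the convolution machinery.
spec = np.fft.rfft(frame)
echo = np.fft.irfft(spec, n=512)[:resid.size]
print(np.allclose(echo, resid))   # True
```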
Figure 3 is the power spectral density plot (actually half of it), which shows the energy contained in the various frequencies. Note that amplitudes get smaller at higher frequencies, while more energy is concentrated in the lower ones, starting at about 0.14 cycles per year, or 8-year periods. This tapering off of energy at the higher frequencies also indicates that “pre-whitening” is not needed.
Figure 4 shows the effect of a “mask” that removes frequencies above 0.025 cycles/year. Basically a low pass filter. Figure 5 shows the PSD filter or “masking” effect in the frequency domain.
Figure-4 http://www.imagenerd.com/uploads/cru-fig-4-uFtYj.gif
Figure-5 http://www.imagenerd.com/uploads/cru-fig-5-ZTrvV.gif
The resultant signal is a smoothed line that shows the lower-frequency content of the input, uncluttered by the higher frequencies.
The last step is to use the de-trend line and the filtered signal to reconstruct the final filtered signal, as shown below in Figure 6.
Figure-6 http://www.imagenerd.com/uploads/cru-fig-6-NMyC0.gif
Here the Fourier-filtered signal (red line) is compared to the multidecadal EMD results (thick gray line). Note that there is very good agreement between the two plots, especially at the end points. Also, the plot title should read “Fourier Reconstruction of CRU Data”.
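A sketch of the masking and reconstruction steps (Figures 4-6), again on a synthetic stand-in series; the 0.025 cycles/year cutoff and 512-point frame follow the description above:

```python
import numpy as np

# Same synthetic stand-in series and detrend step as in the earlier sketch.
rng = np.random.default_rng(1)
years = np.arange(1856, 2004)
temps = (0.004 * (years - 1856)
         + 0.2 * np.sin(2 * np.pi * (years - 1856) / 60.0)
         + rng.normal(0.0, 0.1, years.size))
slope = (temps[-1] - temps[0]) / (years[-1] - years[0])
detrend_line = temps[0] + slope * (years - years[0])
resid = temps - detrend_line

frame = np.zeros(512)
frame[:resid.size] = resid

# Low-pass "mask": zero out everything above 0.025 cycles/year (periods
# shorter than ~40 years), then transform back to the time domain.
spec = np.fft.rfft(frame)
freqs = np.fft.rfftfreq(512, d=1.0)            # cycles per year, annual sampling
smoothed_resid = np.fft.irfft(spec * (freqs <= 0.025), n=512)[:resid.size]

# Final reconstruction: de-trend line plus the low-pass-filtered residual.
smoothed = detrend_line + smoothed_resid
print(smoothed[0], smoothed[-1])
```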
One of the stated advantages of the EMD method is that it works on stationary and non-stationary, linear and non-linear data sets. Based on the similarity of results, one could conclude that the CRU data set is, to use Blackman and Tukey’s words from “Measurement of Power Spectra”, “nearly stationary”.
Also shown is an example using the more recent Hadcet3 data set, shown in Figure 7, along with the same 0.025 cycles/year Fourier filter.
Figure-7 http://www.imagenerd.com/uploads/hadcet-fig-7-BGOn6.gif
In this case, there is more of a flattening at the end of the curve. Both data sets seem to show the ~ 60 year curve. From these two data sets, it would appear that we are in the beginning of a downward global temperature cycle.
300 YEAR DATA SETS
Two other data sets are also included for analysis. The first is a composite from the East English data (1659-2008). The second, Ave14, is an average of thirteen of the longest western European temperature records plus the East English data set. These records were from the Rimfrost site: http://www.rimfrost.no/
A more detailed description of this latter average is in the WUWT thread Forecasting the Earth’s Temperature, 9/9/2009. Figures 8 and 9 show the Ave14 composite average analysis. Figure 9 also shows a 40-year moving average and a 4-pole Chebyshev filter (fc = 0.025 cycles/year). Shifting the Chebyshev output back 180 deg., or 20 years, results in a curve almost on top of the Fourier line, except at the final end point. One method to partially correct for the phase shift is to run the analysis forward in time with a 2-pole filter, and then reverse it in time with the same filter. This is the “filtfilt” option in MATLAB. This method works well if the informative part of the record is in the middle. Unfortunately, the point of interest (say the last 10-20 years) is at the end point, where this method breaks down. This is shown in Figure 10.
Figure-8 http://www.imagenerd.com/uploads/ave14-hadcet-raw-0Fio2.gif
Figure-9 http://www.imagenerd.com/uploads/ave14-raw-smoothed-huBw1.gif
Figure-10 http://www.imagenerd.com/uploads/ave14-smoothed-rev_cheb-j0m9Y.gif
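A sketch of the Chebyshev comparison just described, using Python/SciPy equivalents of the MATLAB routines on a synthetic 350-year stand-in series (the cutoff and filter orders follow the text; everything else here is illustrative):

```python
import numpy as np
from scipy import signal

# Synthetic annual series standing in for the 1659-2008 composite.
rng = np.random.default_rng(2)
years = np.arange(1659, 2009)
temps = (9.0 + 0.3 * np.sin(2 * np.pi * (years - 1659) / 60.0)
         + rng.normal(0.0, 0.5, years.size))

nyq = 0.5                # Nyquist frequency in cycles/year for annual sampling
wn = 0.025 / nyq         # fc = 0.025 cycles/year, normalized to Nyquist

# One-way 4-pole Chebyshev type-I low-pass: smooth but phase-lagged
# (the ~20-year shift mentioned above).
b4, a4 = signal.cheby1(4, 1, wn, btype='low')
one_way = signal.lfilter(b4, a4, temps)

# Forward-backward 2-pole filtering ("filtfilt"): zero phase lag, but the
# values near the final end point depend on how the edges are handled.
b2, a2 = signal.cheby1(2, 1, wn, btype='low')
zero_phase = signal.filtfilt(b2, a2, temps)

print(one_way[-3:])
print(zero_phase[-3:])
```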
Figure 11, shown below, is the East English data alone.
Figure-11 http://www.imagenerd.com/uploads/t_est_28-bGGxs.gif
In this case a more defined downward trend is seen in the recent years.
FINAL CONCLUSIONS
Fourier convolution filtering compared well with the EMD analysis.
A long-term ~60-year cycle is present in the temperature records.
There were some fairly warm temperatures in the late 1700s in Europe.
There appears to be the beginning of a cooling trend in global temperatures.
From the above analysis, it would appear, at least to this author, that the Fourier filtering does a very good job of matching the CRU EMD analysis. Hence it would also appear that, unless some drastic changes take place in similar temperature data sets, the Fourier analysis would work just as well and produce reasonably accurate results. This would be true especially at the most recent end point.
The second point is the persistent ~60-year oscillation that continues to show up even in the earliest records. So something was going on before the “greenhouse gasses” showed up.
The third point is that temperatures, at least in western Europe, went through periods that were very close to the present. If one looks at a trend analysis, the long-term slopes are also much flatter than the 1850-2008 trend slopes.
The last point is the question of the flattening, or downward turn, at the end of the ~60-year plot. Assuming this behavior continues while CO2 levels keep going up, it is hard to justify a correlation between the two.
Anyway, it appears that the Earth does have a very slow long-term increase, and the more recent changes are part of some “to be determined” natural cycle.
That’s one person’s train of thought.
The President’s science czar John P. Holdren should be fired for incompetence and failing to inform the President that the Earth has been getting colder. We have global cooling now since the Sun has been asleep with very few sunspots indicating very low Sun activity for more than 2 years now. Any high school student could figure this out. The satellite data and ARGOS ocean data confirm the planet is cooling down rapidly.
The President’s science czar is making Obama look like a dope and a buffoon. In addition, Al Gore’s movie “An Inconvenient Truth” has been shown to be a work of science fiction, with the infamous hockey stick graph being completely disproven. Further proof that people are aware of the science: carbon credits that were trading at $7 in 2008 are now going for 10 cents each.
Educate yourself, the MSM won’t.
“”” danappaloupe (23:56:14) :
I did some thinking about the discussion about the area of sea ice. Although you never clarified which numbers you are using to make you claim of a 28.7% increase in area, you still say I am wrong.
It doesn’t matter anyway. The real important characteristic of ice as it applies to climate change is the volume of ice, expressed in thickness. “””
That last statement of yours, danappaloupe, is that a statement of your opinion, or do you have observational data to support it?
I’m not sure everyone would agree with the statement; opinion or not.
I for one would never express the volume of anything as a thickness; they don’t even have the same dimensions. One is L^3 and the other is L^1, so they can’t possibly be the same.
But to the more important point of the ice effect on climate, it is claimed that the open water produces warming because the ocean absorbs almost as a black body; while the sea ice reflects sunlight and increases the albedo thus producing cooling.
Both of those phenomena are purely optical surface phenomena, and have very little to do with either thickness of the ice or the open water.
So show us some climate data that proves it is the volume, and NOT the ice coverage area, that is the major climate effect of Arctic sea ice. Volume does not affect either the albedo or the ocean’s absorption of sunlight; it is merely an indicator of how long some particular Arctic condition has persisted. Given that we have seen massive short-term changes, from the ice advances of the 1979 era to the retreats of the 2007 minimum, and now a significant return towards normalcy, I don’t think that ice volume is a good indicator of anything but natural variability.
But that is just my opinion; I await your data supporting your claim.
Then I hear that there is a possible new marker for short-term change. So I wonder if the marker is based upon something biological.
I think the marker is for longer term changes (years?). Otherwise I too thought it would be a manifestation of a biological process. We know that organisms are adapted to take advantage of natural cycles, e.g. coral. My guess would be biological processes over months to years affecting ocean albedo and the reflection spectrum, perhaps because they correlate with how the organisms affect ocean evaporation or CO2 absorption.
RR Kampen (08:10:43) :
Re: tallbloke (07:51:10) :
“And the evidence supporting this story can be found where?”
—
E.g., http://icebubbles.ucsd.edu/Publications/CaillonTermIII.pdf .
Any short search will yield results, by the way.
And this one confirms what all the others say: that co2 lags temperature all the way to the top of the curve. Fig 4 shows the 40Ar curve and the co2 coincident, but this is because the co2 curve has been shifted 800 years to the left.
Next!
Also, I looked at Cryosphere’s ice graphic, and it seems like the winds are at it again, blowing ice away from Siberia and losing ice extent that way, only partly made up by ice growing towards Alaska and eastern Siberia.
On the upside, most of the area taken up by the ice is at 90 percent concentration or higher.
“”” J. Bob (08:57:54) :
A Comparison of EMD & Fourier Filtering for Long Term Global Temperatures.
The following is a comparison of the Fourier Convolution methods and the Empirical Mode Decomposition (EMD), used in signal analysis. The Fourier procedures were those recommended by Blackman & Tucky’s book “Measurement of Power Spectra”. These methods were later refined by Cooley & Tucky’s presentations on the Fast Fourier Transform, which they developed. “””
Not to be picky J.Bob, since typos are built into everybody’s keyboard; BUT
You are presumably referring to the Cooley-Tukey Fast Fourier Transform.
I had a mental blockage until I figured out what you were referring to. I imagine that Google would direct you to many thousands of references to the Cooley-Tukey FFT. It is really more of a computing-efficiency algorithm than any new methodology, since Fourier transform methods were well known long before the CT-FFT. I use it daily in Optical Modulation Transfer Function calculations, but generally rely on Huygens rigorous integration methods for MTF determination.
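A tiny sketch of that point (illustrative only): a naive O(N²) DFT and numpy’s FFT compute the same transform; the FFT just does it far more efficiently.

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) discrete Fourier transform for comparison with the FFT."""
    n = np.arange(x.size)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / x.size) @ x

x = np.random.default_rng(3).normal(size=256)
print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True: identical transform
```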
One can always tell when the faithful feel threatened – they snarl and snap like a cornered animal.
RR Kampen
Since you claim to have been paying attention to ice thickness for so long, why don’t you post some numbers relating to ice volume ??
a – b = c
You claim to know something about “c”. Can you please post what “a” and “b” are in cubic meters, so we can check your math !!!
Arctic sea ice????
Watching the Russian RT channel in English on Saturday, I saw a one-hour program on a team that has just returned from the (magnetic) North Pole, driving 2 homemade trucks with big flotation tires and trailers. They never found any open water but did encounter lots of broken pack ice. I have given up on the BBC; RT gives you the world news without the BS.