"Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2"

Readers may recall Pat Frank’s excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, suggesting I cover it at WUWT; I regret that it got lost in my firehose of daily email. Here it is now. – Anthony

Future Perfect

By Pat Frank

In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves.  It turned out they have a lot to reveal.

As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.

Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, it is also likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend, diminish it, or switch back-and-forth between these two effects in some unpredictable way. Since the systematic error arises from the effects of weather on the temperature sensors, it will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
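To make the point concrete, here is a toy sketch (illustrative only, with made-up magnitudes; it is not drawn from the published error analysis): a skewed, weather-correlated systematic error whose envelope drifts over time can inflate a fitted trend, and an opposite drift would deflate it.

# Toy sketch: a one-sided, weather-correlated systematic error whose
# envelope widens with time biases a fitted temperature trend.
# All magnitudes here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2011, dtype=float)
true_slope = 0.006                                   # 0.06 C/decade "true" warming
signal = true_slope * (years - years[0])

fitted = []
for _ in range(1000):
    weather = rng.normal(size=years.size)            # stand-in for weather state
    envelope = 0.2 + 0.004 * (years - years[0])      # error envelope widens with time
    sys_err = envelope * np.clip(weather, 0.0, None) # asymmetric (one-sided) error
    noise = rng.normal(scale=0.1, size=years.size)   # ordinary random error
    obs = signal + sys_err + noise
    fitted.append(np.polyfit(years, obs, 1)[0] * 10.0)   # fitted slope, C/decade

print(f"true: {true_slope * 10:.3f} C/decade   "
      f"fitted: {np.mean(fitted):.3f} +/- {np.std(fitted):.3f} C/decade")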

For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.

I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.

The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
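For readers who want to experiment, a minimal sketch of this kind of fit follows (not the original analysis code; the file name, column layout, and starting guesses are placeholders, and scipy’s nonlinear least squares stands in for the fitting routine actually used).

# Sketch: fit an annual anomaly series with a cosine oscillation plus a line.
# "anomaly.txt" is a placeholder for a GISS or CRU series: columns year, anomaly (C).
import numpy as np
from scipy.optimize import curve_fit

year, anom = np.loadtxt("anomaly.txt", unpack=True)

def cos_plus_line(t, a, period, phase, slope, intercept):
    """Cosine oscillation riding on a linear trend."""
    return a * np.cos(2 * np.pi * (t - phase) / period) + slope * t + intercept

# Rough starting guesses: ~0.1 C amplitude, ~60 yr period, ~0.006 C/yr slope
p0 = [0.1, 60.0, 1940.0, 0.006, -12.0]
popt, pcov = curve_fit(cos_plus_line, year, anom, p0=p0)

a, period, phase, slope, intercept = popt
print(f"period = {period:.1f} yr, amplitude = {a:.3f} C, "
      f"trend = {slope * 10:.3f} C/decade")

residual = anom - cos_plus_line(year, *popt)       # should show no net trend
print("residual slope:", np.polyfit(year, residual, 1)[0])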

Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like?  Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.

Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.

Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.

Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.

Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are the OLS fits to the linearized trends and the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.

Figure 3: Comparison of the GISS and CRU fitted cosines.

The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).

The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.

In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Prior and up to 1999, the GISS surface anomaly data included only the land surface temperatures.

So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a warming trend slower by about 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.

But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?

The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.

Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.

But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010.  In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.

Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
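Continuing the sketch shown earlier, the two-segment comparison can be done along these lines (again an illustration, not the original code; `year`, `anom`, and `popt` are carried over from that sketch).

# Sketch: subtract only the cosine term, then fit the early and late
# segments separately, as in Figure 4.
import numpy as np

a, period, phase, slope, intercept = popt
linearized = anom - a * np.cos(2 * np.pi * (year - phase) / period)

def decadal_rate(t, y, lo, hi):
    """OLS slope over the years [lo, hi], in C per decade."""
    m = (t >= lo) & (t <= hi)
    return np.polyfit(t[m], y[m], 1)[0] * 10.0

early = decadal_rate(year, linearized, 1880, 1940)
late = decadal_rate(year, linearized, 1960, 2010)
print(f"early: {early:.3f} C/decade   late: {late:.3f}   diff: {late - early:.3f}")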

Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.

 

Table 1: Decadal Warming Rates for the Early and Late Periods.

Data Set    C/d (1880-1940)    C/d (1960-2010)    (late minus early)
GISS        0.056              0.087              0.031
CRU         0.044              0.073              0.029

“C/d” is the slope of the fitted lines in Celsius per decade.

So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (data column 3) are identical.

If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.

Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.

Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.

I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact, so they were ignored.

All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2).  That plot is in Figure 5.
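A sketch of this step is below (illustrative only: it uses the common simplified CO2-only forcing expression dF = 5.35·ln(C/C0) W/m^2, whereas the analysis above also includes CH4 and N2O; the concentration file name is a placeholder).

# Sketch: regress the 0.03 C/decade excess warming against new GHG forcing,
# 1960-2010. "co2.txt" is a placeholder with columns: year, CO2 (ppmv).
import numpy as np

year, co2 = np.loadtxt("co2.txt", unpack=True)
m = (year >= 1960) & (year <= 2010)
forcing = 5.35 * np.log(co2[m] / co2[m][0])        # W/m^2 of new forcing since 1960
excess_warming = 0.003 * (year[m] - 1960.0)        # the 0.03 C/decade excess trend

sensitivity = np.polyfit(forcing, excess_warming, 1)[0]
print(f"apparent sensitivity ~ {sensitivity:.3f} C per W/m^2")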

Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.

So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.

Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.

The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.

Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
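The arithmetic behind these comparisons is short enough to check directly; in the sketch below, the 1.5–4.5 C spread is an assumed reading of the IPCC range behind the 4.4x to 13x comparison.

# Quick check of the numbers quoted above.
dF_2xCO2 = 3.8                      # W/m^2 forcing for doubled CO2 (IPCC figure)
ipcc_mid_dT = 3.0                   # C, IPCC mid-range warming for doubled CO2
empirical = 0.090                   # C per W/m^2, the Figure 5 slope

print(ipcc_mid_dT / dF_2xCO2)                       # ~0.79 C per W/m^2
print((ipcc_mid_dT / dF_2xCO2) / empirical)         # ~8.8x larger than empirical
print(empirical * dF_2xCO2)                         # ~0.34 C for doubled CO2 alone
print(1.5 / (empirical * dF_2xCO2),
      4.5 / (empirical * dF_2xCO2))                 # ~4.4x and ~13x smaller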

The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?

But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.

But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.

So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.

If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.

It couldn’t be  the negative feedback of Roy Spencer’s clouds, could it?

The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.

Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.

The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.

The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.

If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.

Semi-empirical linear model: 0.84 C warmer by 2100.

Fully empirical negative feedback model: 0.42 C warmer by 2100.

And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.

So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.

The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.

And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.

So, there’s a nice lesson for the IPCC and the AGW modelers about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.

So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.

Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.

But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.

The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.

In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

June 5, 2011 11:02 pm

Apologies for neglecting to close that link. 🙁

Bart
June 5, 2011 11:26 pm

I second Alan’s motion. I always feel like I need to bathe after I look in over there.
Spector says:
June 5, 2011 at 6:46 pm
“…but if you ask an optimization program to find the best fitting sine curve, it may give you a good answer.”
I’m just saying, it would be nice to ask it to optimize something which could be pondered as having physical significance. You could use the proxy data. If you claim it’s crap, I won’t disagree. But, it might be interesting to see what a long term cyclic expansion predicts.

Ryan
June 6, 2011 3:43 am

@Matt, @Leif Svalgaard,
Well of course you are correct that it may not be “proper” to fit a sine to a dataset just because its tempting peaks and troughs more or less beg you to do so. When you really only have two cycles to go on that isn’t really enough. You need more data to be sure.
The obvious source of more data is the Central England Temperature Record:-
http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png
You can see the same peaks and troughs in the Central England Temperature Record from 1880 to 2007, i.e. the same 60 yr cycle is apparent, but sadly if you go back in time you can see that cycle breaks down. However, if you take this chart as a means of disputing Pat Frank’s claims you are in trouble if you are hoping to see AGW, since this dataset clearly shows that there is nothing special about temperatures post 1950. Thanks to the LIA recovery, temperature records will likely get broken by 0.2 Celsius every 60 yrs or so, and for the UK the last time just happened to be in 2002 by just that amount.
Oh by the way, it was 25 Celsius in Southern UK on Saturday – had a nice BBQ and set up the kids’ trampoline. Sadly today it has clouded over and the wind is blowing from the north – it might reach 15 Celsius today if we are lucky. Something must have sucked all the CO2 out of the atmosphere over the weekend……

Spector
June 6, 2011 5:37 am

RE: Bart: (June 5, 2011 at 11:26 pm)
“You could use the proxy data. …”
Do you have a preferred realistic public proxy? I used the HadCRUT3v data because it has the longest official record based on measured data and is at least similar to one of the data sets used in the main article.

JOhn H
June 6, 2011 11:27 am

After reading the thoughtful responses of commenters here, and Tamino’s analysis (you may not like the tone, but the substance of his argument is valid), it seems like Frank needs to consider a thorough revision of his essay.

Bart
June 6, 2011 1:32 pm

Spector says:
June 6, 2011 at 5:37 am
“Do you have a preferred realistic public proxy?”
Not really. They’re all dubious. But, at least including it in the analysis might give an idea of how sensitive the prediction going forward is to what was modeled coming before.
JOhn H says:
June 6, 2011 at 11:27 am
OK, I looked. And, I need a shower. His claims have no merit. Two full cycles is statistically significant. It is too much of a coincidence. In 1940, if you had said, “temperatures have risen in apparently cyclical fashion, and we should hit another peak in about the year 2000,” it would have been proper to say “there is not enough data to say that with any confidence.” But, when the second rise is confirmed on schedule, it is clear that there is something to it.
I looked further at his link here. Jeez, he’s using periodograms. How elementary and jejune. And, he fails utterly to make his case. The higher peaks in the graph following “The periodogram looks like this:” are at too low a frequency to inspire any confidence, given the time span of the data. The others are clearly not Cauchy peaks. Fail.

Bart
June 6, 2011 1:59 pm

There is an additional point to make. There are some plots where he uses ridiculous piecewise fits and such and says that, since these cannot be said to reflect the underlying processes, neither can the sinusoidal fits. But, this ignores the ubiquity of cyclical processes in the entire panorama of natural processes in every field of science and engineering. This ubiquity comes about because A) trig functions form a functional basis, and can be used in an expansion to describe any bounded process and B) because every physical process anywhere in the universe depends on vector projections, which are always proportional to a cosine function.
This is not numerology in any way, shape, or form. It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. This is why Fourier analysis is such a powerful tool for ferreting out the underlying principles governing any natural time series, and one of the first things an investigator should look at when attempting to do so.
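As an illustration of the point, a bare-bones periodogram of a synthetic series (made-up data, not any of the series discussed here) shows both the utility of the approach and how coarsely a ~60-yr peak is resolved in only 130 years of annual data.

# Sketch: periodogram of a synthetic 60-yr oscillation plus trend plus noise,
# detrended before the FFT to limit leakage from the trend.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1880, 2011, dtype=float)
x = (0.1 * np.cos(2 * np.pi * t / 60.0)
     + 0.006 * (t - t[0])
     + rng.normal(0, 0.1, t.size))

x = x - np.polyval(np.polyfit(t, x, 1), t)         # detrend
spec = np.abs(np.fft.rfft(x)) ** 2 / x.size        # periodogram
freq = np.fft.rfftfreq(x.size, d=1.0)              # cycles per year

peak = freq[np.argmax(spec[1:]) + 1]               # skip the zero-frequency bin
print(f"dominant period ~ {1.0 / peak:.0f} yr")    # near 60 yr, coarsely resolved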

June 7, 2011 11:38 am

A very powerful riposte, Bart, thanks.

June 8, 2011 8:32 am

Bart says:
June 6, 2011 at 1:59 pm
This is not numerology in any way, shape, or form. It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. This is why Fourier analysis is such a powerful tool for ferreting out the underlying principles governing any natural time series
The numerology is in the assumption. BTW, Fourier analysis on global temperatures [e.g. Loehle’s] show no power at 60 years, or any other period for that matter:
http://www.leif.org/research/FFT%20Loehle%20Temps.png

June 8, 2011 11:06 am

Leif Svalgaard says:
June 8, 2011 at 8:32 am
Fourier analysis on global temperatures [e.g. Loehle’s] show no power at 60 years, or any other period for that matter: http://www.leif.org/research/FFT%20Loehle%20Temps.png
For periods around 60 years.
Now, there is a curious sequence of peaks at higher frequencies: http://www.leif.org/research/FFT%20Loehle%20Temps-Freq.png
The spacing between the peaks is 0.0345 [in frequency per year]. This corresponds to a period of 29.0 years and is likely due to Loehle’s data being 30-yr averages sampled every year, creating appropriate fake periods. Again an example of how Fourier analysis misleads you.

Bart
June 8, 2011 11:57 am

“The numerology is in the assumption.”
“It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. “
Sorry. I don’t see it.
“BTW, Fourier analysis on global temperatures [e.g. Loehle’s] show no power at 60 years, or any other period for that matter:”
This is a naive analysis. All you’ve got here is essentially a reflection of the offset and trend in the data and a bunch of noise. PSD estimation is a lot more involved than that. I almost posted the below, but decided people probably wouldn’t be interested. Now, I think I will go ahead.
If you would like me to take a crack at it, let me know where I can get the data.
——————————-
Some pointers about constructing a PSD estimate, for those who might be interested. PSDs are well-defined only for stationary processes. People sometimes use them for quantifying non-stationary processes, but it’s not generally a good idea for a variety of reasons. Thus, before performing a PSD on data with both stationary and non-stationary components, some pre-treatment is advisable.
For FFT based methods, detrending, or subtracting out other higher order polynomial fits, is often useful for diminishing the impact of non-stationary components. However, one must be aware that this does introduce bias into the PSD estimate, particularly at low frequencies.
A PSD is the Fourier transform of the autocorrelation function. A periodogram is an estimator of the PSD calculated as the magnitude squared of the FFT of the data, divided by the data record length. As such, it is a biased estimator, though the bias decreases for stationary processes as the length of the data record increases. Furthermore, it is highly variable, and the variance does not go down as the length of the data record increases. Averaging together periodograms of chunks of data is a common method employed to reduce the variance, but at the cost of greater bias. Bias becomes particularly bad when those chunks are shorter than the longest significant correlation time.
A better FFT method is first to compute the autocorrelation estimate. By inspection, you can then see where the function is well behaved, how long the longest correlation time is, and where the autocorrelation estimate starts to lose coherence. You window it to that time with an appropriate window function (see, e.g., the classic text by Papoulis) and then compute the PSD. This method is generally far superior to averaging windowed periodograms, where one goes in blind without knowing any of the correlation details.
The FFT is actually a sampled version of a continuous function, the discrete Fourier Transform, where the frequency samples are spaced proportional to 1/N, where N is the length of the input record. A simple method to sample with higher density is to “zero pad” the autocorrelation estimate past the point where the window function goes to zero.
Once one determines parameters for the higher frequency content of the signal, an ARMA (autoregressive moving-average) model can be constructed for it, and this can be used to aid more sophisticated estimation methods, as desired.
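A compact sketch of that procedure (Blackman–Tukey style: autocorrelation estimate, taper, zero-pad, transform) is below, run on synthetic data for illustration only; the window length and padding are arbitrary choices here.

# Sketch: PSD estimate via a windowed autocorrelation (Blackman-Tukey method).
import numpy as np

def blackman_tukey_psd(x, max_lag, nfft=4096):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                                   # remove the offset
    n = x.size
    # biased autocorrelation estimate at lags 0..max_lag
    acf = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    acf *= np.hanning(2 * max_lag + 1)[max_lag:]       # taper (window) the ACF
    # build a symmetric, zero-padded sequence and transform it
    sym = np.concatenate([acf, np.zeros(nfft - 2 * max_lag - 1), acf[1:][::-1]])
    psd = np.real(np.fft.fft(sym))[: nfft // 2]
    freq = np.fft.fftfreq(nfft, d=1.0)[: nfft // 2]
    return freq, psd

# Example on synthetic data with a 60-yr cycle:
rng = np.random.default_rng(2)
t = np.arange(1880, 2011, dtype=float)
x = 0.1 * np.cos(2 * np.pi * t / 60.0) + rng.normal(0, 0.1, t.size)
freq, psd = blackman_tukey_psd(x, max_lag=64)
print(f"peak period ~ {1.0 / freq[np.argmax(psd[1:]) + 1]:.0f} yr")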

June 8, 2011 12:28 pm

Bart says:
June 8, 2011 at 11:57 am
This is a naive analysis. All you’ve got here is essentially a reflection of the offset and trend in the data and a bunch of noise.
The temperature reconstruction is so noisy [and uncertain] that a more sophisticated analysis is hardly worth the effort, but have a go at it. The data is here: http://www.ncasi.org/publications/Detail.aspx?id=3025

Bart
June 8, 2011 1:26 pm

It isn’t pretty. I didn’t realize this was proxy reconstruction. But, I do discern peaks at 88, 62, and 23 years.
There are others, but these seem to have the most significant energy. A couple of apparent peaks also occur at 52 and 43 years, but they’re kind of ambiguous.

Bart
June 8, 2011 2:48 pm

Could have sworn I posted back on this, but it has disappeared.
I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.

Bart
June 8, 2011 2:50 pm

Well, now it’s there. I was looking back because I wanted to add the 134 year one in.

June 8, 2011 3:18 pm

Bart says:
June 8, 2011 at 1:26 pm
It isn’t pretty. I didn’t realize this was proxy reconstruction. But, I do discern peaks at 88, 62, and 23 years.
The time resolution is in reality 30 years. The 30-yr averages were then re-sampled every year, but that does not really create any new data.

June 8, 2011 4:04 pm

Bart says:
June 8, 2011 at 2:48 pm
I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.
for 30-year data, you cannot pick out anything below 2*30 years [remember Nyquist?]. “most significant” should not be conflated with ‘just the largest’ peaks. a peak can be the largest, yet not be significant.

June 8, 2011 4:20 pm

Bart says:
June 8, 2011 at 2:48 pm
I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.
for 30-year data, you cannot pick out anything below 2*30 years [remember Nyquist?]. “most significant” should not be conflated with ‘just the largest’ peaks. a peak can be the largest, yet not be significant.
A standard ‘naive’ method of getting a handle on significance is simply to calculate the FFT for the two halves of the data. Here is what you get: httt://www.leig.org/research/FFT%20Loehle%20Temps-2%20Halves.png
You can see the effect of the oversampling in the dips and peaks below 30 years. Above 30 [or 60] there are no consistent peaks. This is not rocket science.

June 8, 2011 4:20 pm

httt://www.leif.org/research/FFT%20Loehle%20Temps-2%20Halves.png

June 8, 2011 4:23 pm

Leif Svalgaard says:
June 8, 2011 at 4:20 pm
http://www.leif.org/research/FFT%20Loehle%20Temps-2%20Halves.png
I’m extra fat-fingered today

Bart
June 8, 2011 6:53 pm

That’s not how filters work. It isn’t a sharp cutoff. A 30 year average has its first zero at 1/30 years^-1. The next one is at 1/15 years^-1, then at 1/10 years^-1, and so on. At 1/23 years^-1, the gain is about 0.2. So, all this means is that the energy in the component at 23 years is, in reality, 25X larger than it appears in my PSD. And, the center of the peak could be a little shifted by the filter lobe, so it might really be +/- a couple of years.
It is the resampling which allows me to see the 23 year cycle. Otherwise, it would have been aliased to 98.6 years.
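A quick numerical check of those two figures (an illustrative sketch, with assumed values matching the ones quoted above):

# Gain of a 30-yr running mean at a 23-yr period, and where a 23-yr cycle
# aliases if the series is only sampled every 30 years.
import numpy as np

T, period = 30.0, 23.0
f = 1.0 / period
gain = abs(np.sinc(f * T))                       # np.sinc(x) = sin(pi*x)/(pi*x)
print(f"gain at 1/{period:.0f} yr^-1: {gain:.2f}")        # ~0.2

fs = 1.0 / 30.0                                   # one sample per 30 years
alias = abs(f - round(f / fs) * fs)               # fold back into [0, fs/2]
print(f"aliased period: {1.0 / alias:.1f} yr")    # ~98.6 yr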

Bart
June 8, 2011 7:07 pm

It may be of interest to note that 20 to 23 year cyclic behavior commonly crops up in environmental variables, as “Spector” found above in his fit. There is also a significant roughly 21 year cycle in the MLO CO2 measurements. Those measurements have significant sinusoidal components at roughly 1/4, 1/3, 1/2, 1, 3.6, 8.5, and 21 years.

Bart
June 8, 2011 7:12 pm

Leif… stop. Read my note above. You are incorrect.
‘“most significant” should not be conflated with ‘just the largest’ peaks’
Indeed. Which is why I wrote “these seem to have the most significant energy”.

June 8, 2011 7:42 pm

Bart says:
June 8, 2011 at 7:12 pm
‘“most significant” should not be conflated with ‘just the largest’ peaks’
Indeed. Which is why I wrote “these seem to have the most significant energy”.

“Energy”? Perhaps you mean ‘power’? ‘Seem to have’? Either they have or they don’t. The 30-year average is a running average.
The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half. All the peaks and valleys you see below 30 years are not real. The resampling does not help you here. I could resample with one-month resolution and study the annual variation, right?

June 8, 2011 8:07 pm

Leif Svalgaard says:
June 8, 2011 at 7:42 pm
The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.
The FFT gives the amplitude of the sine wave. Your 22-yr period [when present, before 1000 AD] has an amplitude of 0.01 C, which is way below the accuracy of the reconstruction. http://www.leif.org/research/FFT%20Loehle%20Temps%20Comp.png
As I said: numerology.
