Periodic climate oscillations

Guest essay by:

Horst-Joachim Lüdecke, EIKE, Jena, Germany

Alexander Hempelmann, Hamburg Observatory, Hamburg, Germany

Carl Otto Weiss, Physikalisch-Technische Bundesanstalt Braunschweig, Germany

In a recent paper [1] we Fourier-analyzed central European temperature records dating back to 1780. Contrary to expectations, the Fourier spectra consist of spectral lines only, indicating that the climate is dominated by periodic processes (Fig. 1, left). Nonperiodic processes appear absent, or at least weak. In order to test for nonperiodic processes, the 6 strongest Fourier components were used to reconstruct a temperature history.


Fig. 1: Left panel: DFT of the average from 6 central European instrumental time series. Right panel: same for an interpolated time series of a stalagmite from the Austrian Alps.

Fig. 2 shows the reconstruction together with the central European temperature record smoothed over 15 years (boxcar). The remarkable agreement suggests the absence of any warming due to CO2 (which would be nonperiodic) or of other nonperiodic phenomena related to human population growth or industrial activity.

For clarity, we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed in amplitude and phase by the Fourier transform, so the reconstruction involves no free (fitted) parameters.
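For readers who want to experiment, a minimal Python/NumPy sketch of this kind of reconstruction (not the authors' code; the input series below is a synthetic placeholder for the instrumental record) could look as follows:

import numpy as np

def reconstruct_top_k(temps, k=6):
    """Inverse DFT of `temps` keeping only the mean and the k strongest
    positive-frequency components, with amplitude and phase exactly as
    delivered by the DFT, i.e. without any fitted parameters."""
    spectrum = np.fft.rfft(temps)            # one-sided DFT of the record
    keep = np.zeros_like(spectrum)
    keep[0] = spectrum[0]                    # retain the mean
    idx = np.argsort(np.abs(spectrum[1:]))[-k:] + 1   # k largest lines
    keep[idx] = spectrum[idx]
    return np.fft.irfft(keep, n=len(temps))

years = np.arange(1780, 2011)                              # annual sampling
temps = np.random.default_rng(0).normal(size=years.size)   # placeholder data
reconstruction = reconstruct_top_k(temps, k=6)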

However, one has to be cautious about artefacts. An obvious one is the limited length of the records: the dominant ~250 year peak in the spectrum results from only one period in the data, which is clearly insufficient to prove periodic dynamics. Longer temperature records therefore have to be analyzed. We chose the temperature history derived from a stalagmite in the Austrian Spannagel cave, which extends back 2000 years. Its spectrum (Fig. 1, right) indeed shows the ~250 year peak in question. The wavelet analysis (Fig. 3) indicates that this periodicity is THE dominant one in the climate history. We also ascertained that a minimum of this ~250 year cycle coincides with the 1880 minimum of the central European temperature record.


Fig. 2: 15 year running average from 6 central European instrumental time series (black). Reconstruction with the 6 strongest Fourier components (red).


Fig. 3: Wavelet analysis of the stalagmite time series.
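A hedged sketch of such a continuous wavelet analysis, assuming the PyWavelets package and using a synthetic stand-in for the stalagmite series (the scale range and wavelet choice here are illustrative, not necessarily those behind Fig. 3):

import numpy as np
import pywt   # PyWavelets, assumed available

dt = 1.0                                      # one proxy value per year
t = np.arange(2000)                           # synthetic 2000-year series
signal = np.sin(2*np.pi*t/250) + 0.3*np.random.default_rng(1).normal(size=t.size)

periods = np.arange(30, 500, 5)               # target periods in years
scales = pywt.central_frequency('morl') * periods / dt   # convert period to scale
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=dt)
power = np.abs(coeffs)**2                     # wavelet power vs. scale and time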

Thus the overall temperature development since 1780 is part of periodic temperature dynamics that has prevailed for ~2000 years. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2 but clearly results from the ~250 year cycle. It also applies to the temperature drop from 1800 (when the temperature was roughly as high as today, Fig. 4) to 1880, which in all official statements is tacitly swept under the carpet. One may also note that the temperature at the 1935 maximum was nearly as high as today. This is shown in particular by a high-quality Antarctic ice core record in comparison with the central European temperature records (Fig. 4, blue curve).


Fig. 4: Central European instrumental temperatures, averaged over the records of Prague, Vienna, Hohenpeissenberg, Kremsmünster, Paris, and Munich (black). Antarctic ice core record (blue).

As a note of caution, we mention that a small influence of CO2 could have escaped this analysis. Such a small influence could have been absorbed by the Fourier transform into the ~250 year cycle, slightly shifting its frequency and phase. However, since the period of substantial industrial CO2 emissions is the one after 1950, it spans only 20% of the central European temperature record length and can therefore only weakly influence the parameters of the ~250 year cycle.

An interesting feature reveals itself on closer examination of the stalagmite spectrum (Fig. 1, right). The lines with frequency ratios of 0.5, 0.75, 1, and 1.25 with respect to the ~250 year periodicity are prominent. This is precisely the signature spectrum of a period-doubling route to chaos [2]. Indeed, the wavelet diagram (Fig. 3) indicates a first period doubling from 125 to 250 years around 1200 AD. The conclusion is that the climate, presently dominated by the ~250 year cycle, is close to the point at which it will become nonperiodic, i.e. “chaotic”. We have in the meantime established the period doubling more clearly and in more detail.
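Purely as an illustration of the period-doubling signature, and not taken from the paper: the textbook period-doubling system, the logistic map, shows the same appearance of a subharmonic at half the original frequency when its period doubles. A minimal sketch:

import numpy as np

def logistic_spectrum(r, n=4096, transient=1000, x0=0.4):
    """Amplitude spectrum of the logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(transient):                # discard the transient
        x = r * x * (1 - x)
    orbit = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        orbit[i] = x
    freqs = np.fft.rfftfreq(n)                # cycles per iteration
    spec = np.abs(np.fft.rfft(orbit - orbit.mean()))
    return freqs, spec

f2, s2 = logistic_spectrum(3.2)   # period 2: single line at f = 0.5
f4, s4 = logistic_spectrum(3.5)   # period 4: new subharmonic at f = 0.25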

In summary, we trace the temperature history of the last centuries back to periodic (and thus “natural”) processes. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of anthropogenic global warming. The dominant period of ~250 years is presently at its maximum, as is the 65 year period (the well-known Atlantic/Pacific decadal oscillations).

Cooling, as indicated in Fig. 2, can therefore be predicted for the near future, in complete agreement with the lack of temperature increase over the past 15 years. Based on the knowledge of the Fourier components, temperatures can be predicted to continue decreasing further into the future. We also note that our analysis is compatible with the analysis of Harde, who reports a CO2 climate sensitivity of ~0.4 K per CO2 doubling from model calculations [3].

Finally, we note that our analysis is seamlessly compatible with the analysis of P. Frank, in which the Atlantic/Pacific decadal oscillations are removed from the world temperature record and the increase of the remaining slope after 1950 is ascribed to anthropogenic warming [4], resulting in a temperature increase of 0.4 K per CO2 doubling. The slope increase after 1950 turns out, in our analysis, to be simply the shape of the ~250 year sine wave. A comparably small climate sensitivity is also found by the model calculations [3].

[1] H.-J. Lüdecke, A. Hempelmann, and C.O. Weiss, Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records, Clim. Past, 9, 447-452, 2013, doi:10.5194/cp-9-447-2013, www.clim-past.net/9/447/2013/cp-9-447-2013.pdf

[2] M.J. Feigenbaum, Universal behavior in nonlinear systems, Physica D, 7, 16-39, 1983

[3] H. Harde, How much CO2 really contributes to global warming? Spectroscopic studies and modelling of the influence of H2O, CO2 and CH4 on our climate, Geophysical Research Abstracts, Vol. 13, EGU2011-4505-1, 2011, http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf

101 Comments
J. Bob
May 7, 2013 9:03 am

rgbatduke,
tamino & I disagreed with his CEL projection, in his 08 post, which is why I use additional methods as a fundamental cross check. Here is the one from back then showing an apparent downturn.
http://dc594.4shared.com/download/Lyy028dy/Ave1-CEL-2008-3fil-40yr.JPG?tsid=20130507-152145-cb7c6690
Looking at current CEL data, I would say my estimation is a bit better than tamino's.
As far as competence in this area, DOD & NASA haven't complained. As a side issue, spectral methods were part of the very basic predictor estimation work, such as with Norbert Wiener & Charlie Draper.
You might want to consult "The Measurement of Power Spectra", Blackman & Tukey, on sampling periods for measuring signals in your frequency window.

Editor
May 7, 2013 9:43 am

rgbatduke, as usual, is right on the money (emphasis mine):

This curve does contain important lessons for the antiwarmist and warmist alike. First of all, the global temperature curve has the same general shape as this but plotting only the anomaly, and on a normal scale, not semilog. This is what is so astounding — the climate is enormously stable. Forget 3dB corrections on the kelvin scale. One cannot even use decibels to describe the fluctuations visible in the anomaly.
It is very, very instructive to plot global temperatures not as anomalies but on the actual Kelvin scale, TO scale. Sadly, woodsfortrees doesn’t appear to let you control the axes — it has one of the ways to lie with statistics essentially built into its display (I mean this literally, a method explicitly described in the book “How to Lie with Statistics”). Honestly plotted, with error bars, it would look like a more or less flat line where the entire width of the line would be error bar. After all, the entire variation in degrees K is 3 tenths of one percent from 1880 to the present.

We’re a full service website here, so I’ve provided your graph for you. The error bars are invisible on this scale, so I’ve left them out.

This is the idea I find I have to keep repeating. The surprising thing about the climate is not that the temperature of the globe varied by 0.6°C over the last century.
The surprise is that it ONLY varied by 0.6° over the last century.
I have messed around a whole lot, both with heat engines of a variety of types, and with iterative climate models. I can assure folks that getting a free-running system to exhibit that kind of stability, either in the real world or on a computer, is a difficult task.
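For anyone who wants to see this for themselves, here is an illustrative matplotlib sketch; the 288 K baseline and the synthetic 0.6 K ramp are stand-ins, not any particular dataset:

import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1880, 2014)
anomaly = 0.6 * (years - 1880) / (2013 - 1880)   # ~0.6 K rise as a stand-in
kelvin = 288.0 + anomaly                         # rough global mean in kelvin

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(years, anomaly)
ax1.set_ylabel("anomaly (K)")                    # the familiar anomaly plot
ax2.plot(years, kelvin)
ax2.set_ylim(0, 300)                             # zero-based kelvin axis
ax2.set_ylabel("temperature (K)")                # the curve becomes a flat line
plt.show()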
If the interests of the climate science community had been dedicated to understanding the stability of the climate, rather than obsessing about tiny fluctuations in global temperature, we might actually have learned something in the last 30 years. Instead, we have followed this idiotic idea down the rabbit hole:
ΔT = λ ΔF
This is the foolish claim that temperature depends on one thing and one thing only—the amount of change in the “forcing”, the total downwelling radiation at the top of the atmosphere. Not only that, but the relationship is bog-standard linear.
No other chaotic natural system that I know of has such a bozo-simple linear relationship between input and output, so it’s a mystery to me why people claim the climate follows such an unlikely simplistic rule.
Actually, it’s not a mystery. They have to believe that. The other option is too horrible to contemplate—they can’t stand the idea that the Thermageddon™ may not be just around the bend, it may have only existed in their fevered imaginations …
w.

James Smyth
May 7, 2013 10:11 am

[Bart said] Zero padding does not produce the sinc convolution – the finite data record does that. Zero padding simply interpolates more data points, making a plot produced with linear interpolation between data points, as most plotting routines use, appear smoother.
Not only have I been staring at the phenomenon in practice for the last two days, but I know it's true because a window transforms to a sinc, and multiplication in time is convolution in frequency.

Bart
May 7, 2013 11:39 am

James Smyth says:
May 7, 2013 at 10:11 am
“…a window transforms to a sinc, and multiplication in time is convolution in frequency…”
Yes, but the “window” is always the same – it is defined by the boundary of your data. Extending the dataset with zeros does not add new data. It just creates a finer grid upon which the FFT evaluates the continuous-in-frequency DTFT (Discrete Time Fourier Transform). The DFT is merely a sampled version of the DTFT.
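(An illustrative numerical check of this point, with arbitrary test data: the zero-padded FFT reproduces the DTFT of the original record exactly, only sampled on a finer grid.)

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)                     # the actual data record, N = 64
pad = 8 * len(x)                            # zero-pad to 8x the record length

X_padded = np.fft.fft(x, n=pad)             # FFT of the zero-extended record

k = np.arange(pad)
w = 2 * np.pi * k / pad                     # frequencies of the padded grid
n = np.arange(len(x))
X_dtft = np.exp(-1j * np.outer(w, n)) @ x   # DTFT of the ORIGINAL 64 points

print(np.allclose(X_padded, X_dtft))        # True: same DTFT, denser sampling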

James Smyth
May 7, 2013 12:23 pm

Sorry, I should not have used the word “window” in such an imprecise manner. The “rectangle” associated w/ zero-padding is what gives you the sinc convolution. Contra your statement that “Zero padding does not produce the sinc convolution”, it does.
And, the fact is, not only is there no need to do zero-padding, but zero-padding does not introduce “harmonics”. Both of these were common misconceptions and misstatements on this thread.
I have some other thoughts about the granularity of the f-domain, but it's mostly personal interest (I think we all agree that the general conclusion of the paper is overreaching). But, I want to think about the math, compare it to the paper, and maybe try to get the paper's data.

Bart
May 7, 2013 1:00 pm

James Smyth says:
May 7, 2013 at 12:23 pm
“Sorry, I should not have used the word “window” in such and imprecise manner. The “rectangle” associated w/ zero-padding is what gives you the sinc convolution.”
No, the rectangle associated with the data record gives you the sinc convolution. Extending the record with zeros does not give you a bigger data window. It just allows you to sample more points of the DTFT.
I’m not going to argue with you. These things are well known and established. Good luck, and happy hunting.

James Smyth
May 7, 2013 1:17 pm

But, you (and I) are arguing 🙂 and I’m sitting here staring at very simple examples that show I’m correct and you are wrong. Is it possible that you are just used to old-school FFT requirements (which are antiquated) of power-of-2 zero-padding?
Or is there something about R’s FFT implementation that is misleading me? That’s possible.

Bart
May 7, 2013 4:15 pm

“Is it possible that you are just used to old-school FFT requirements (which are antiquated) of power-of-2 zero-padding?”
Mixed radix algorithms have been standard for decades. I’ve no doubt you are seeing an effect, it just isn’t what you think it is.
Come on, think. What happens to the Fourier sum when you increase N, but the sum is still over the same non-zero set of points?

Bart
May 7, 2013 4:21 pm

BTW, you do realize a wider time window gives you a convolution in the frequency domain with a narrower sinc function, right? Make it wider in the time domain, it becomes narrower in the frequency domain, and vice versa.
Fun fact: that is the basis of Heisenberg’s Uncertainty Principle in Quantum Mechanics, because momentum and position are Fourier transformable variables. The better you know the position, the worse you know the momentum, and vice versa.

James Smyth
May 7, 2013 5:34 pm

It might help if you made statements rather than ask questions.
When you increase N, you are changing the basis elements in the frequency domain. Each exp(-i*2*pi*k/N) is a different basis vector when you increase N. So, while you are summing over the same x(n), the X(k) are completely different. What point are you trying to make?

James Smyth
May 7, 2013 5:44 pm

I'm going back through your recent contributions, and the only definitive statement I can see you've made, “Zero padding does not produce the sinc convolution – the finite data record does that,” is simply wrong. Zero padding does produce the sinc convolution and the finite data record (w/out padding) does not.

Bart
May 7, 2013 6:26 pm

James Smyth says:
May 7, 2013 at 5:34 pm
“It might help if you made statements rather than ask questions.”
I want you to think it through. That is the best way to fix the concepts in your mind.
“Each exp(-i*2*pi*k/N) is a different basis vector when you increase N.”
The Fourier transform kernel is exp(-i*2*pi*k*n/N). The quantity w = 2*pi*k/N is the radial frequency associated with the value of k, so you could write this as exp(-i*w*n).
Suppose w were a continuous variable. Performing the Fourier sum over n then gives you the DTFT, which is a continuous function of w. So, you see, the DFT is the DTFT sampled at the points 2*pi*k/N.
If you increase N, but are summing over the same set of non-zero points, then what you have done is increased your sample resolution by sampling more closely spaced values of w. As you increase the zero padding, you will begin to see more features of the DTFT come into view. But, it is not because of any convolution. It is because you are looking at more samples of the DTFT.
” Zero padding does produce the sinc convolution and the finite data record (w/out padding) does not.”
No, it just increases the resolution of the sinc function inherent from the finite data window because you are sampling the DTFT more densely.

May 7, 2013 6:31 pm

“Zero padding does produce the sinc convolution and the finite data record (w/out padding) does not.”
I believe it is the finite data record. It’s as if you had an infinite record and multiplied it by a gate function. The spectrum is then convolved with the sinc function which is the FT of the gate function. If you’re saying that the zero padding creates the gate function, then OK, but the sinc itself is independent of how much padding.
If the original signal had been the 254 block repeated indefinitely, then the FT of that would be a set of delta functions at the harmonics of period=254. If you then gate it to one block of 254, each of those deltas is convolved with the gate function's sinc FT. If you then cut back to a finite interval, the spectrum is discrete, with the sinc function represented by a set of spikes. As the padding is reduced, the spikes get further apart.

James Smyth
May 8, 2013 12:47 am

If you increase N, but are summing over the same set of non-zero points, then what you have done is increased your sample resolution by sampling more closely spaced values of w. As you increase the zero padding, you will begin to see more features of the DTFT come into view. But, it is not because of any convolution. It is because you are looking at more samples of the DTFT.
I get the change in granularity. In fact, I was one of the original people to note that in order to suss out a 248 year cycle, you need data whose duration is at least that long (a sort of companion/corollary to the Nyquist rate). But when you change N, you are now summing over a different set of basis elements, even though their coefficients x(n) are the same. And I think the different values reflect a change in the nature of the time-domain signal (i.e. you've changed its period by the addition of one zero point). So, it's not clear to me why this higher granularity would be giving you “better” (more accurate?) information about the original, non-zero-padded signal. Sorry, that's very qualitative, but I'm pretty burnt at this point.
Look at these examples and try to tell me that the zero-padded examples are just showing “more features” due to “more samples”: Zero Padding Examples.

James Smyth
May 8, 2013 1:07 am

but the sinc itself is independent of how much padding.
(sigh). No, it's not. The size of the window/padding determines the character of the sinc. I can't be bothered to look it up, but it's something like ~ sin(W*f)/(W*sin(f)), where W is the window/non-zero width.

James Smyth
May 8, 2013 1:11 am

I added a larger zero-pad to the end of my examples. And zoomed in to show the sinc.

Nick Stokes
May 8, 2013 2:39 am

James,
I think your example is focussing on the wrong frequency range. You have a relatively high “carrier” and a very high sampling rate. Ludecke’s has no carrier, and a sampling rate, in your terms, of 1 Hz (to scale, his yr = your sec). And he is looking at frequencies around .01 Hz (yr^-1). You’re looking at about 100x higher. So yes, your small end padding acts as a gate function multiplier, and makes a sinc function a few Hz wide.
Ludecke shows a continuous plot at around .01 yr^-1. To get that, he would need a lot of low frequencies in his DFT, which implies padding out to many millennia.
The math – if you have a function which is 1 from -a to a, and you add zero padding b on each side, at infinite sampling rate, then the DFT is
sin(2πfa)/(2πf)
where f is frequency. It’s independent of b, but b enters into the discrete frequencies of the DFT, which occur at n/(a+b), n=0,+-1,+-2 etc. So the padding doesn’t change the sinc, but increases sampling in the freq domain.
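(A quick numerical check of this, as a sketch with a block of ones: the DFT magnitudes for any amount of padding all lie on the same Dirichlet-kernel envelope |sin(πfW)/sin(πf)| fixed by the block length W; padding only changes how densely that envelope is sampled.)

import numpy as np

W = 32                                        # length of the non-zero block
block = np.ones(W)

def envelope(f, W):
    # Dirichlet kernel magnitude; equals W at f = 0, ~ W*sinc(f*W) for small f
    with np.errstate(divide="ignore", invalid="ignore"):
        e = np.abs(np.sin(np.pi * f * W) / np.sin(np.pi * f))
    return np.where(f == 0, float(W), e)

for N in (W, 4 * W, 32 * W):                  # no padding, 4x, 32x padding
    X = np.abs(np.fft.fft(block, n=N))
    f = np.fft.fftfreq(N)                     # frequencies in cycles per sample
    print(N, np.allclose(X, envelope(f, W)))  # True for every padding length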

Bart
May 8, 2013 7:13 am

Nick Stokes says:
May 8, 2013 at 2:39 am
“So the padding doesn’t change the sinc, but increases sampling in the freq domain.”
Exactly. It’s the same sinc function every time, dependent on the length of the data record, just sampled at different values. With no zero padding at all, you are sampling frequency at the nulls of the sinc except for the central point.

Bart
May 8, 2013 7:22 am

“So, it’s not clear to me why this higher granularity would be giving you “better” (more accurate?) information about the original, non-zero padded signal.”
It doesn’t in reality, but for the way we humans process visual information, it is better. Plotting software generally interpolates data points with linear interpolation. Zero padding imposes a finer grid so that the plot looks smoother to us, and it is easier to recognize patterns which correspond to specific, frequently encountered forms.
E.g., the Cauchy peak of a second order damped oscillator driven by broadband noise, which is useful for modeling many climate variables such as sunspot numbers.
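(A sketch of that last example, with arbitrary parameters: a lightly damped second-order system driven by white noise, whose spectrum shows a broad resonance peak rather than a sharp line.)

import numpy as np

rng = np.random.default_rng(2)
n = 8192
f0, damping = 1 / 11.0, 0.05                # ~11-step resonance, light damping

# AR(2) form of a damped oscillator driven by broadband noise
a1 = 2 * np.exp(-damping) * np.cos(2 * np.pi * f0)
a2 = -np.exp(-2 * damping)
x = np.zeros(n)
noise = rng.normal(size=n)
for i in range(2, n):
    x[i] = a1 * x[i-1] + a2 * x[i-2] + noise[i]

freqs = np.fft.rfftfreq(n)                  # cycles per step
power = np.abs(np.fft.rfft(x))**2           # broad peak near f0, not a sharp line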

James Smyth
May 8, 2013 10:02 am

I think your example is focussing on the wrong frequency range. You have a relatively high “carrier” and a very high sampling rate.
Well, I was trying to illustrate something specific. And I chose the durations and freqs that I did to illustrate the sinc at a very high resolution. And to show how adding a relatively tiny window (i.e. one which only very slightly increases the frequency resolution) imposes an obvious sinc convolution on the results. I made it so I couldn't be any clearer. BTW, the carrier is not “relatively high”; it's actually pretty low, but the spectrum is zoomed in close (again, to see that resolution).
I did play around w/ numbers like the paper, and it illustrates our original point about the gross nature of a 248 year signal in 25x-duration data. But, I started to realize that I could goof around with that forever. Also, I wanted to find the smoothing function they used, in order to try to determine whether it was invertible (i.e. should I waste my time trying to find other solutions to their equation/fitting, or is it one-to-one and onto).
So the padding doesn’t change the sinc, but increases sampling in the freq domain.
Seriously? You've left out a function of the relative window size in your numerator. The zeros change w/ more padding. See here: Padding Goes to Town. I'm sorry, I can't be any clearer than that and hope to get anything done at work today 🙂

Bart
May 8, 2013 10:13 am

James Smyth says:
May 8, 2013 at 10:02 am
“…imposes an obvious sinc convolution on the results…”
“Seriously? You’ve left out a function of the relative window size in your numerator. The zeros change w/ more padding.”
Zero padding does not induce a convolution in the frequency domain. I’ve spoon fed it to you, and apparently wasted my time. If you are not even going to bother to try to understand, there is no point in carrying on. You are wrong.

James Smyth
May 8, 2013 12:12 pm

If you are not even going to bother to try to understand, there is no point in carrying on
I’ve actually gone to great lengths to try to understand your points. I’ve noodled w/ the math and experimented with a wide range of sample data. And you haven’t spoon fed me anything. I’d be happy to look at something specific (and at this point it really is time for me to consult the literature), but your talk about increased granularity struck me as hand-waving. Still does.
I don't see (or didn't yesterday), either qualitatively or mathematically, any difference between taking data (which is assumed to have persistent characteristics) and multiplying by a window to induce zeroes, or taking a shorter version of that data and padding zeros. Surely, in the former you'd agree that multiplication by the window induces convolution by the sinc. It's not a great leap from there to what happens when you take the original data and pad zeros. [I'm tempted to omit this b/c it is hand-waving]
I'm also open to the possibility that there is something about the examples I've posted that is misleading me, but I went to great lengths to make sure there wasn't something about the combination of numbers (rounding errors, low-resolution sampling, interpolation, plotting issues, zoom level, etc) that was throwing me off.

May 8, 2013 1:53 pm

James,
As often, I think there’s a duality here – we’re just looking at different sides of it. That’s why the frequency range is important, because the merits of the two ways of looking at it depend on that.
I think my math is correct – here’s the interpretation re your latest diagrams. In the first you have no padding and see a bare carrier frequency. My formula would say that there “is” a sinc function with zeroes at multiples of 1/192, but the DFT only samples at those freq multiples, so you don’t see it.
Then you put in 1 sec padding. That shifts the freq sampling and you now see a bit of the sinc function as a beat freq. Put in 2 sec, and the beats go to 1/2 Hz, the spacing of your zeroes.
This all seems artificial, but as you keep increasing the padding that underlying sinc starts to emerge as a real picture as you sample it more often in the freq domain. And it is the sinc of the signal block.
I think Ludecke is at that high padding end.

James Smyth
May 8, 2013 6:02 pm

we’re just looking at different sides of it.
This may be the case. I found an old IEEE article that I glanced at and it got me thinking about the limit of W -> infinity and sampling that.
And I also think the seemingly radical change of the spectrum in going from this unpadded to this one-padded really throws me off.

RACookPE1978
Editor
May 8, 2013 6:37 pm

James Smyth says:
May 8, 2013 at 6:02 pm

we’re just looking at different sides of it.
This may be the case. I found an old IEEE article that I glanced at and it got me thinking about the limit of W -> infinity

James! Surely you jest! There is no limit to “W” in the warmists' minds. It will always be Bush's fault.