Guest essay by:
Horst-Joachim Lüdecke, EIKE, Jena, Germany
Alexander Hempelmann, Hamburg Observatory, Hamburg, Germany
Carl Otto Weiss, Physikalisch-Technische Bundesanstalt Braunschweig, Germany
In a recent paper [1] we Fourier-analyzed central-European temperature records dating back to 1780. Contrary to expectations, the Fourier spectra consist of spectral lines only, indicating that the climate is dominated by periodic processes (Fig. 1, left). Nonperiodic processes appear to be absent or at least weak. In order to test for nonperiodic processes, the 6 strongest Fourier components were used to reconstruct a temperature history.
Fig. 1: Left panel: DFT of the average from 6 central European instrumental time series. Right panel: same for an interpolated time series of a stalagmite from the Austrian Alps.
Fig. 2 shows the reconstruction together with the central European temperature record smoothed over 15 years (boxcar). The remarkable agreement suggests the absence of any warming due to CO2 (which would be nonperiodic) or of other nonperiodic phenomena related to human population growth or industrial activity.
For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed in amplitude and phase by the Fourier transform, so the reconstruction involves no free (fitted) parameters.
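For illustration, here is a minimal sketch of this kind of reconstruction in code, applied to synthetic stand-in data rather than the actual station records (the generated series and variable names are purely illustrative):

```python
# Minimal sketch (synthetic stand-in data, not the station records of [1]):
# reconstruct a series from its six strongest Fourier components.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1780, 2011)                      # an illustrative 231-point record
temps = (0.6 * np.sin(2 * np.pi * years / 230.0)
         + 0.3 * np.sin(2 * np.pi * years / 65.0)
         + 0.2 * rng.standard_normal(years.size))  # stand-in "temperature anomaly"

spectrum = np.fft.rfft(temps)                      # one-sided DFT
keep = np.argsort(np.abs(spectrum))[-6:]           # indices of the 6 strongest lines
filtered = np.zeros_like(spectrum)
filtered[keep] = spectrum[keep]                    # amplitudes and phases kept as-is
reconstruction = np.fft.irfft(filtered, n=temps.size)
# Every retained component keeps the amplitude and phase fixed by the DFT,
# so the reconstruction involves no fitted parameters.
```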
However, one has to be cautious about artefacts. An obvious one is the limited length of the records: the dominant ~250-year peak in the spectrum results from only one period in the data, which is clearly insufficient to prove periodic dynamics. Longer temperature records therefore have to be analyzed. We chose the temperature history derived from a stalagmite in the Austrian Spannagel cave, which extends back 2000 years. The spectrum (Fig. 1, right) indeed shows the ~250-year peak in question. The wavelet analysis (Fig. 3) indicates that this periodicity is THE dominant one in the climate history. We also ascertained that a minimum of this ~250-year cycle coincides with the 1880 minimum of the central European temperature record.
Fig. 2: 15-year running average of the 6 central European instrumental time series (black). Reconstruction with the 6 strongest Fourier components (red).
Fig. 3: Wavelet analysis of the stalagmite time series.
Thus the overall temperature development since 1780 is part of periodic temperature dynamics that have prevailed for ~2000 years. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2 but clearly results from the ~250-year cycle. It also applies to the temperature drop from 1800 (when the temperature was roughly as high as today, Fig. 4) to 1880, which in all official statements is tacitly swept under the carpet. One may also note that the temperature at the 1935 maximum was nearly as high as today. This is shown in particular by a high-quality Antarctic ice core record in comparison with the central-European temperature records (Fig. 4, blue curve).
Fig. 4: Central European instrumental temperatures averaged from the records of Prague, Vienna, Hohenpeissenberg, Kremsmünster, Paris, and Munich (black). Antarctic ice core record (blue).
As a note of caution we mention that a small influence of CO2 could have escaped this analysis. Such a small influence could have been incorporated by the Fourier transform into the ~250-year cycle, slightly altering its frequency and phase. However, since the period of substantial industrial CO2 emission is the one after 1950, it covers only about 20% of the central European temperature record length and can therefore only weakly influence the parameters of the ~250-year cycle.
An interesting feature reveals itself on closer examination of the stalagmite spectrum (Fig. 1, right). The lines with frequency ratios of 0.5, 0.75, 1, and 1.25 with respect to the ~250-year periodicity are prominent. This is precisely the signature spectrum of a period-doubling route to chaos [2]. Indeed, the wavelet diagram in Fig. 3 indicates a first period doubling from 125 to 250 years around 1200 AD. The conclusion is that the climate, presently dominated by the 250-year cycle, is close to the point at which it will become nonperiodic, i.e. “chaotic”. We have in the meantime ascertained the period doubling more clearly and in more detail.
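As a generic illustration of such a signature (a toy model, not the stalagmite data), the logistic map shows how each period doubling adds a spectral line at half the lowest frequency already present:

```python
# Toy illustration (logistic map, not climate data) of the spectral signature
# of a period-doubling cascade.
import numpy as np

def logistic_series(r, n=4096, burn=2000, x0=0.3):
    """Iterate x -> r*x*(1-x) and return the trajectory after transients."""
    x = x0
    out = np.empty(n)
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            out[i - burn] = x
    return out

for r in (3.2, 3.5, 3.55):                 # period-2, period-4, period-8 regimes
    x = logistic_series(r)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size)        # in cycles per iteration
    lines = freqs[power > 1e-6 * power.max()]
    print(f"r = {r}: spectral lines at f =", np.round(lines, 3))
# Each doubling adds a new line at half the lowest existing frequency, so the
# spectrum fills in at fractional multiples of the original fundamental.
```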
In summary, we trace the temperature history of the last centuries back to periodic (and thus “natural”) processes. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of anthropogenic global warming. The dominant period of ~250 years is presently at its maximum, as is the 65-year period (the well-known Atlantic/Pacific decadal oscillations).
Cooling, as indicated in Fig. 2, can therefore be predicted for the near future, in complete agreement with the lack of temperature increase over the past 15 years. Further into the future, temperatures can be predicted to continue decreasing, based on knowledge of the Fourier components. We also note that our analysis is compatible with that of Harde, who reports a CO2 climate sensitivity of ~0.4 K per CO2 doubling from model calculations [3].
Finally, our analysis is seamlessly compatible with the analysis of P. Frank, in which the Atlantic/Pacific decadal oscillations are removed from the global temperature record and the increase of the remaining slope after 1950 is ascribed to anthropogenic warming [4], resulting in a temperature increase of 0.4 °C per CO2 doubling. In our analysis, the slope increase after 1950 turns out to be simply the shape of the 250-year sine wave. A comparably small climate sensitivity is also found by the model calculations [3].
[1] H.-J. Lüdecke, A. Hempelmann, and C.O. Weiss, Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records, Clim. Past, 9, 447-452, 2013, doi:10.5194/cp-9-447-2013, www.clim-past.net/9/447/2013/cp-9-447-2013.pdf
[2] M.J. Feigenbaum, Universal behavior in nonlinear systems, Physica D, 7, 16-39, 1983
[3] H. Harde, How much CO2 really contributes to global warming? Spectroscopic studies and modelling of the influence of H2O, CO2 and CH4 on our climate, Geophysical Research Abstracts, Vol. 13, EGU2011-4505-1, 2011, http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf
“Contrary to expectations the Fourier spectra consist of spectral lines only”
The figures show “peaks” rather than “lines”. That statement is either wrong or it needs explanation.
They’ve done some kind of smoothing of the data using a higher-granularity frequency domain, which may possibly be legitimate. Per my comment above, I don’t think there is any legitimate way to resolve data at K=1 (1/230 years) and K=2 (1/115 years) into a signal with a peak at a precision of 248 years.
Or I could be wrong; there may be some legitimate transform analysis/technique that I’m ignorant of.
Sure this is just another climate model curve fit to the data. But, it does have the advantage of being ridiculously simple compared to most. If they are actually on to something, it should be comparatively easy to identify the causes of 6 variables, and go on to establish a relatively convincing argument. There are no guarantees when performing research, but if I were them I would keep pursuing this path.
So frequencies are extracted from the data, then plugged back in to re-create the original curves. This is just a routine mathematical trick — entirely unproven unless each extracted frequency is mapped to a physical cause, or if it builds a successful track record of prediction. I see Richard Telford has said as much above. Pretty, but a crowd-pleaser only.
I love this type of completely different analysis! It demonstrates (IMHO) that the power of the large, supercomputer-based models is virtually worthless. And as for whether all parameters have a physical foundation: show me the parameters of the large GCM models. None of the parameters have any physical meaning, which is why they are presented as “ensembles”. WTF? Ensembles? Why? Because none of the individual models or individual runs is capable of matching the observed global temperature.
Models: GDI:GBO
Back to the original, hand-written, pristine observations, without the corrections and adjustments.
The dominant period of ~250 years is presently at its maximum
I wonder if either of the two sharp-shooters, rgbatduke or Matthew R Marler, in view of the sudden up/down discrepancy of 125 years shown here:
http://www.vukcevic.talktalk.net/2CETs.htm
(half of the all-important ‘dominant period of ~250 years’), would care to make any additional comment on the effect on the FT output.
Thanks RGB
Highlighted periods of 80, 61, 47, and 34 years seem to be just harmonics of the “base period” of 248; 248 is rather close to 3 x 80, 4 x 61, 5 x 47, and 7 x 34.
And a period of 248 years is too long to be extracted reliably from the whole length of the analyzed data. Also, the stalagmite record does not seem to match the temperature record very well; some peaks seem to be close, but trying to reproduce the temperature record using frequencies that peak in the stalagmite record might be quite hard.
The fact that there are peaks in the spectrum does not, in my opinion, mean the temperature is driven solely by cycles; it only proves the temperature record is not completely chaotic or random. There are processes which are cyclic (PDO, AMO, ENSO, …) but they do not have to be periodic with a fixed period. Just one such process is bound to wreak havoc and produce lots of false matches in any Fourier analysis.
Steve from Rockwood says: May 4, 2013 at 9:55 am
“I thought that if you were going to use the FFT the data should be periodic. If you graft the year 2000 onto the start of the record at 1750 you have a serious discontinuity.”
In fact, the DFT interprets the data as periodic, and reports accordingly. As you note, there is a discontinuity. It’s made worse by the use of zero padding, presumably to a 256 year period as the FFT requires. That means that, as the FFT sees it, every 256 years, the series rises to a max of 1.5 (their Fig 4, end value), drops to zero for six years, then rises to their initial value of 0.5.
That is seen as real, and it’s a very large effect. That’s where their 248-year peak comes from. It’s the FT of that periodic pulse. It’s entirely spurious.
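A quick illustrative check of this mechanism, on a stand-in ramp rather than the actual record:

```python
# Illustrative only: a 230-point series that ends high, zero-padded to 256
# points, puts a large component in the lowest nonzero bin, i.e. at a
# "period" of roughly the padded record length.
import numpy as np

record = np.linspace(0.5, 1.5, 230)                  # rises from 0.5 to 1.5
padded = np.concatenate([record, np.zeros(256 - record.size)])

amp = np.abs(np.fft.rfft(padded - padded.mean()))
k = 1 + int(np.argmax(amp[1:]))                      # strongest non-DC bin
print(f"strongest line: bin k = {k}, period = {256 / k:.0f} samples")
# The drop from 1.5 back to 0 at the pad boundary is what the transform has
# to represent, and it does so mainly with the base sinusoid, so a cycle of
# roughly the record length appears whether or not one exists in the data.
```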
vukcevic: would care to make any additional comment,
I am glad you asked. I have always wished that you would put more labeling and full descriptions of whatever it is that you have in your graphs. Almost without exception, and this pair is not an exception, I do not understand what you have plotted.
James Smyth, you and I are in agreement about the need for at least 2 full cycles for any period that must be estimated from the data. Only if the period is known exactly a priori can you try to estimate the phase and amplitude from data. Also see Nick Stokes on the effects of padding, which the authors had to do for an FFT algorithm. The paper (and this applies to the original) is too vague on details.
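In DFT terms the resolution issue is simply this (standard textbook material, not from the paper):

$$
f_k = \frac{k}{T}, \qquad k = 0, 1, 2, \ldots
$$

For a record of length T, the lowest nonzero-frequency component has a period equal to the record length itself; nothing longer can be represented, and a component that fits only once into the record is hard to distinguish from a trend, which is why one wants at least two full cycles before trusting a periodicity.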
My friends, I fear I don’t understand Figure 2. You’ve taken the Fourier analysis of the European data, then used the six largest identified cycles to reconstruct the rough shape of the European data …
So what?
How does that explain or elucidate anything? Were you expecting that the reconstruction would have a different form? Do you think the Fourier reconstruction having that form actually means something? Your paper contains the following about Figure 2:
Say what? Let me count the problems with that statement.
1. The “remarkable” agreement you point out is totally expected and absolutely unremarkable. That’s what you get every time when you “reconstruct” some signal using just the larger longer-term Fourier cycles as you have done … the result looks like a smoothed version of the data. Which is no surprise, since what you’ve done is filtered out the high-frequency cycles, leaving a smoothed version of the data.
And you have compared it to … well, a smoothed version of the data, using a “boxcar” filter.
You seem impressed by the “remarkable agreement” … but since they are both just smoothed versions of the underlying data, the agreement is predictable and expected, and not “remarkable” in the slightest.
Let me suggest that this misunderstanding of yours reveals a staggering lack of comprehension of what you are doing. In the hackers’ world you’d be described as “script kiddies”. A “script kiddie” is an amateur hacker using methods (“scripts” designed to gain entry to a computer) by rote, without any understanding of what the methods are actually doing. You seem to be in that position vis-a-vis Fourier analysis.
I am speaking this bluntly for one reason only—you truly don’t seem to understand the magnitude of your misunderstanding …
2. The Fourier analysis does NOT “suggest the absence of any warming”. Instead, it just suggests the limitations of Fourier analysis. A couple of years after I was born, Claude Shannon pointed out the following:
This means that the longest cycle you can hope to resolve in a dataset N years long is maybe N / 2 years. Any cycle longer than that you’d be crazy to trust, and even that long can be sketchy.
3. The Fourier analysis does NOT “suggest the absence of any … other nonperiodic phenomena.” There’s lots of room in there for a host of other things to go on. For example, try adding an artificial trend to your data of say an additional 0.5°C per century, starting in 1850. Then redo your Fourier analysis, and REPORT BACK ON YOUR FINDINGS, because that’s how science works …
Or not, blow off all the serious comments and suggestions, and walk away … your choice. I’m easy either way.
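For what it’s worth, here is a rough sketch of the experiment suggested in point 3, on synthetic stand-in data (not the actual station records):

```python
# Sketch of the suggested experiment (synthetic stand-in data): add an
# artificial trend of 0.5 C per century from 1850 and see where it lands
# in the Fourier spectrum.
import numpy as np

years = np.arange(1780, 2011)
rng = np.random.default_rng(1)
base = 0.5 * np.sin(2 * np.pi * years / 65.0) + 0.2 * rng.standard_normal(years.size)
trend = np.where(years >= 1850, 0.005 * (years - 1850), 0.0)   # +0.5 C / century

for label, series in (("without trend", base), ("with trend   ", base + trend)):
    amp = np.abs(np.fft.rfft(series - series.mean()))
    print(label, "amplitudes of bins 1-5:", np.round(amp[1:6], 1))
# The aperiodic trend loads mostly into the first few bins, which are exactly
# the "strongest components" a reconstruction like Fig. 2 would keep and read
# as a long natural cycle.
```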
Finally, you claim to extract a 248-year cycle from data that starts in 1780, although the picture is a bit more complex than that. In your paper (citation [1]) you say:
The period of overlap between all of these records (the latest start date) is 1781, for Hohen-whateversburg and Kremsmunster. They all end in 2010, so rather than being 250 years long, your dataset covers 1781-2010, only 230 years.
So my question is … how did you guys get a 248-year cycle out of only 230 years of data?
Regards,
w.
PS—My summary of this analysis? My friends, I fear it is as useless as a trailer hitch on a bowling ball. It’s as bad as the previous exercise in meaningless curve fitting. The fact that the curves have been fit using Fourier analysis doesn’t change the problems with the analysis.
In fact, the DFT interprets the data as periodic, and reports accordingly. As you note, there is a discontinuity.
This is picking nits, but I think that’s a poor verbalization of the underlying mathematics: the DFT is a linear transform from C^N to C^N. The DFT does no interpreting, and there is no assumption of periodicity of the domain data in the definition itself. Of course, the DFT itself has the property that X(N + k) = X(k), but that is not based on an assumption that x(N + k) = x(k); it is based on properties of the range basis elements. So you can say with some meaningfulness that “the DFT is periodic.”
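For reference, the standard definition under discussion:

$$
X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n / N}, \qquad k = 0,\dots,N-1,
$$

and since $e^{-2\pi i (k+N) n / N} = e^{-2\pi i k n / N}\, e^{-2\pi i n} = e^{-2\pi i k n / N}$ for integer $n$, it follows that $X_{k+N} = X_k$ without any assumption about $x_n$ outside $0 \le n \le N-1$.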
Now, as to the question of zero-padding for the FFT … I don’t even see the use of the term FFT in the excerpt. Which begs the question … Does anyone have a handy reference to modern computing time required for a true DFT of (what I would bet is) not that big of a sample set? A quick google search is not finding anything.
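For what it’s worth, a back-of-envelope timing sketch (illustrative, assuming NumPy is available) suggests the direct transform is effectively free at this size, so the FFT is a convenience here, not a necessity:

```python
# Rough timing of a direct O(N^2) DFT versus the FFT for N = 256.
import time
import numpy as np

def naive_dft(x):
    """Directly evaluate X_k = sum_n x_n * exp(-2*pi*i*k*n/N)."""
    N = x.size
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N matrix of basis exponentials
    return W @ x

x = np.random.default_rng(2).standard_normal(256)

t0 = time.perf_counter()
X_direct = naive_dft(x)
t1 = time.perf_counter()
X_fast = np.fft.fft(x)
t2 = time.perf_counter()

print(f"direct DFT: {1e3 * (t1 - t0):.2f} ms   FFT: {1e3 * (t2 - t1):.3f} ms")
print("max |difference|:", float(np.max(np.abs(X_direct - X_fast))))
```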
Matthew R Marler says: May 4, 2013 at 2:09 pm
…………..
Ah…, my trade mark…
Let’s have a go
Graph
http://www.vukcevic.talktalk.net/2CETs.htm
Top chart:
- black line: Central European Temperature as shown in Fig. 2 of the article (thread); slightly stretched horizontally for better resolution.
- green line: the CET (shown in absolute values); scales are equalised to 0.5 C steps.
Bottom chart: same annotation as above, plus
- red up/down arrows: marking the start and end of a ~125-year-long period during which, if the European temps are lifted by ~0.5 C, there is an almost perfect match to the CET over the same period.
This strikes me as an unnatural event, particularly at the start: a shift that takes place over just a few years and then persists for another ~125 years or so.
The two areas, the CET region and central Germany (Europe), lie at similar latitudes and are about 500 miles apart.
If one assumes that the CET trend is more representative of the N. Hemisphere, then I would expect the ‘dominant period of ~250 years’ to disappear.
Looking forward to your comment. Tnx.
The “remarkable” agreement you point out is totally expected and absolutely unremarkable. That’s what you get every time when you “reconstruct” some signal using just the larger longer-term Fourier cycles as you have done … the result looks like a smoothed version of the data. Which is no surprise, since what you’ve done is filtered out the high-frequency cycles, leaving a smoothed version of the data.
LOL. Again, I haven’t read the paper, and I’m way out of practice with this stuff, but this is really hilarious. It’s like, holy cow, the inverse of an invertible transform is the original data? Who’d have thunk it?
James Smyth says: May 4, 2013 at 2:56 pm
“Now, as to the question of zero-padding for the FFT … I don’t even see the use of the term FFT in the excerpt.”
They don’t mention it, though they do mention N=254. But why on earth would they be zero-padding to N=254?
The more I read the paper, the more bizarre it gets. They have no idea what they are doing, and I can only assume Zorita is clueless too. They have a DFT which represents the data as harmonics of the base frequency, and they actually show that breakdown in Table 1 (relative to N=254) with the relevant maths in Eq 4. But then they show a continuous (smoothed) version in Fig 3, marking the harmonics with periods that are different (but close). Then they try to tell us that these frequencies have some significance.
But of course they don’t. They are simply the harmonics of the base frequency, which is just the period they have data for. The peak at 248 (or 254) years is just determined by the integral of the data multiplied by the base sinusoid. Of course there will be a peak there.
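Spelled out in standard DFT terms (my paraphrase of the point above): the lowest nonzero-frequency coefficient is

$$
X_1 = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i n/N},
$$

i.e. the correlation of the data with a single sinusoid spanning exactly the (padded) record length. Any series with a net rise, fall, or U-shape over its length produces a large |X_1|, so a “peak” at the record length is not by itself evidence of a physical cycle of that period.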
I agree with most of the comments made here.
Transformation from the time domain to the frequency domain means very little UNLESS there are multiple records that allow averaging of the power spectrum. All that has been done here is to apply a (crude) low-pass filter. So what? Obviously you can reconstruct the signal from its Fourier components.
I’ve done a lot of work on chaotic dynamics and the period-doubling route to chaos. Try as I might, it just doesn’t look like period doubling to me. The subharmonics of the doubled period should have amplitudes that are smaller than the amplitude of the main peak. The peak with a period of 341 years is larger than the putative fundamental with a 234-year period. See, for example, Fig. 2 of P.S. Linsay, Phys. Rev. Lett. 47, 1349 (1981).
Your true frequency resolution may also cause problems separating peaks that close together; they are only two to three bins apart in frequency space. Without knowing your windowing function, it’s hard to know what the true resolution is.
The fact of the matter is that period doubling is a very fragile route to chaos and usually shows up only in dynamical systems with a small number of degrees of freedom. As much as the AGW crowd would like us to believe that is the case, with CO2 driving everything, the reality points to a very complicated system with many degrees of freedom. That is what one should expect from a system with multiple coupled fluids and components that can change phase.
vukcevic: -Red up/down arrows
You get a slightly better fit if you move the left-end red line leftward a bit and the right-end red line rightward a bit. That suggests that during an epoch of about 160 years, central Europe (is Paris “central”?) cooled more than central England. It’s hard to get away from epochs that are approximately some multiple or fraction of some period in a Fourier transform.
They are simply the harmonics of the base frequencies, which is just the period they have data for. The peak at 248 (or 254) years is just determined by the integral of the data multiplied by the base sinusoid.
I would need to see your math; those words don’t translate into anything meaningful for me. It sounds like you are implying that zero padding introduces peak frequencies, which I don’t think is true. Multiplication by a window is convolution with a sinc: it will spread existing frequency peaks out, convolved with the sinc function. Is it possible for the sinc’d components to add up in such a way that they introduce new peaks? I suppose.
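A small illustration of the leakage point, on a synthetic sinusoid (nothing to do with the actual data):

```python
# Illustrative only: a rectangular window smears a single spectral line into
# a sinc-shaped pattern, and zero-padding only interpolates that pattern.
import numpy as np

N = 230                                          # record length in samples
t = np.arange(N)
x = np.sin(2 * np.pi * t / 65.0)                 # a cycle that does not fit the record evenly

padded = np.zeros(2048)
padded[:N] = x                                   # zero-pad onto a fine frequency grid
amp = np.abs(np.fft.rfft(padded))
freqs = np.fft.rfftfreq(2048)

peak = int(np.argmax(amp))
print(f"true frequency 1/65 = {1/65:.5f}, peak of the smeared line at {freqs[peak]:.5f}")
print(f"half-width of the main lobe stays ~1/N = {1/N:.5f} regardless of padding")
```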
Thank goodness for RGB!!
I read this article and alarm bells went off in my head everywhere. Spectral analysis is fraught with artifacts. It is also very important what type of “window” you apply to the data prior to analysis.
I would treat all of these “conclusions” with EXTREME caution.
Thank you RGB for pointing this out.
For those who want to know more
http://en.wikipedia.org/wiki/Window_function
For those whose brains are about to explode … 😉
I highly recommend the following (free) book:
The thing about this book is that it is written from the standpoint of one who might actually want to do some digital signal processing. There are a few serious traps the naive can fall into, and this is the best book I’ve seen for pointing them out.
I think the most succinct definition of the problem is as follows:
rgbatduke you are indeed a master of understatement.
The symmetric “U”-shaped curve of figure 2, with a maximum at either end of the record, is highly suspicious. I realize you are using central European records and not NH records, but what happened to the LIA, which was certainly dominant in the first part of the record? In 1780, heavy cannon were hauled on the ice from Jersey City to New York, and people could walk from New York to Staten Island on 8-foot-thick ice! One third of all Finns died, grape vines died in continental Europe…
http://query.nytimes.com/mem/archive-free/pdf?res=9C06EED81E31E233A25757C2A9649D946096D6CF
http://green.blogs.nytimes.com/2012/01/31/in-the-little-ice-age-lessons-for-today/
The many criticisms above of a DFT of the record, and the tautological result you obtained, notwithstanding: shouldn’t you, at your pay scale (shame on you all), have made some attempt to identify what the individual cycles were caused by in reality? Was it six cycles? I can’t be bothered to go back and check. There is a virtually infinite number of possible sine-wave bundles that could give essentially as good a fit (am I wrong here? I’m harking back to mid-20th-century, pre-computer summing of curves; such a work as yours might have been done in a grade 12 high school class). And you find in all this a corroboration of a 0.4 C climate sensitivity of CO2? That would require an FT along an up-sloping trend line. We’ve been jumping all over failed climate models, but they are far superior to yours in their attempt to model actual effects.
The authors appear to have taken the signal, transformed it, removed some high-frequency components, back-transformed it, and drawn conclusions based on how well the result matches the original. I do not see how that would be valid.
I would have thought that before applying the FT the data should be de-trended and windowed. There is no mention of any windowing in the description.
The Fourier transform assumes the signal is periodic, that is, that it recurs infinitely. So without de-trending and windowing, an implicit periodic saw-tooth signal spanning the data, with its harmonics, is added to the output. Windowing limits the frequency resolution but greatly reduces the saw-tooth artifacts.
I do not understand how the FT can be used to discover what the aperiodic components of a signal are. Rather, it is used to decompose a signal into sine waves, on the assumption that it is periodic.
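For completeness, a sketch of that preprocessing on synthetic data (illustrative only, not the paper’s procedure):

```python
# De-trend and taper before the FT, then compare the lowest-frequency
# content with the raw transform (synthetic stand-in series).
import numpy as np

n = 230
t = np.arange(n)
rng = np.random.default_rng(3)
series = 0.004 * t + 0.3 * np.sin(2 * np.pi * t / 65.0) + 0.1 * rng.standard_normal(n)

raw = np.abs(np.fft.rfft(series - series.mean()))

detrended = series - np.polyval(np.polyfit(t, series, 1), t)   # subtract best-fit line
tapered = detrended * np.hanning(n)                            # Hann window on the ends
clean = np.abs(np.fft.rfft(tapered))

print("lowest nonzero bin, raw vs detrended+windowed:",
      round(float(raw[1]), 2), "vs", round(float(clean[1]), 2))
# The end-to-end jump the raw transform must represent behaves like a sawtooth
# whose harmonics fall in exactly the low bins of interest; de-trending and
# tapering suppress that artifact, at the cost of some frequency resolution.
```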