Periodic climate oscillations

Guest essay by:

Horst-Joachim Lüdecke, EIKE, Jena, Germany

Alexander Hempelmann, Hamburg Observatory, Hamburg, Germany

Carl Otto Weiss, Physikalisch-Technische Bundesanstalt Braunschweig, Germany

In a recent paper [1] we Fourier-analyzed central European temperature records dating back to 1780. Contrary to expectations the Fourier spectra consist of spectral lines only, indicating that the climate is dominated by periodic processes (Fig. 1, left). Nonperiodic processes appear absent or at least weak. In order to test for nonperiodic processes, the 6 strongest Fourier components were used to reconstruct a temperature history.


Fig. 1: Left panel: DFT of the average of 6 central European instrumental time series. Right panel: same for an interpolated time series of a stalagmite from the Austrian Alps.

Fig. 2 shows the reconstruction together with the central European temperature record smoothed over 15 years (boxcar). The remarkable agreement suggests the absence of any warming due to CO2 (which would be nonperiodic) or other nonperiodic phenomena related to human population growth or industrial activity.

For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free (fitted) parameters.
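
For concreteness, a minimal sketch of this kind of reconstruction follows (Python with a synthetic stand-in series; the data and the NumPy-based approach are assumptions for illustration, not the authors' code):

```python
import numpy as np

# Synthetic annual series standing in for the averaged station record;
# purely illustrative, not the actual data.
rng = np.random.default_rng(0)
years = np.arange(1780, 2011)
temps = np.sin(2 * np.pi * (years - 1780) / 230) + 0.3 * rng.standard_normal(years.size)

# DFT: amplitude and phase of every component are fixed by the data,
# which is the sense in which the reconstruction has no fitted parameters.
spec = np.fft.rfft(temps)

# Keep the mean plus the 6 strongest nonzero-frequency components.
mag = np.abs(spec)
mag[0] = 0.0
keep = np.argsort(mag)[-6:]
filtered = np.zeros_like(spec)
filtered[keep] = spec[keep]
filtered[0] = spec[0]

reconstruction = np.fft.irfft(filtered, n=temps.size)
```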

However, one has to be cautious about artefacts. An obvious one is the limited length of the records. The dominant ~250 year period peak in the spectrum results from only one period in the data. This is clearly insufficient to prove periodic dynamics. Longer temperature records have therefore to be analyzed. We chose the temperature history derived from a stalagmite in the Austrian Spannagel cave, which extends back by 2000 years. The spectrum (Fig. 1, right) shows indeed the ~250 year peak in question. The wavelet analysis (Fig. 3) indicates that this periodicity is THE dominant one in the climate history. We also ascertained that a minimum of this ~250 year cycle coincides with the 1880 minimum of the central European temperature record.
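
The wavelet analysis referred to can be sketched along the following lines (the paper does not specify the implementation; PyWavelets with a Morlet wavelet and a synthetic 2000-point series are assumptions here):

```python
import numpy as np
import pywt

# Synthetic stand-in for the evenly interpolated stalagmite series.
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 250) + 0.5 * np.sin(2 * np.pi * t / 125)

# Continuous wavelet transform: rows of |coef| correspond to scales,
# columns to time, as in a Fig. 3-style power plot.
scales = np.arange(2, 400)
coef, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1.0)
power = np.abs(coef) ** 2  # plot as a heat map over (time, 1/freqs)
```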


Fig. 2: 15 year running average of 6 central European instrumental time series (black). Reconstruction with the 6 strongest Fourier components (red).


Fig. 3: Wavelet analysis of the stalagmite time series.

Thus the overall temperature development since 1780 is part of periodic temperature dynamics prevailing already for ~2000 years. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2, but clearly results from the 250 year cycle. It also applies to the temperature drop from 1800 (when the temperature was roughly as high as today, Fig. 4) to 1880, which in all official statements is tacitly swept under the carpet. One may also note that the temperature at the 1935 maximum was nearly as high as today. This is shown in particular by a high-quality Antarctic ice core record compared with the central European temperature records (Fig. 4, blue curve).


Fig. 4: Central European instrumental temperatures, averaged over the records of Prague, Vienna, Hohenpeissenberg, Kremsmünster, Paris, and Munich (black). Antarctic ice core record (blue).

As a note of caution we mention that a small influence of CO2 could have escaped this analysis. Such a small influence could have been incorporated by the Fourier transform into the ~250 year cycle, slightly influencing its frequency and phase. However, since the period of substantial industrial CO2 emission is the one after 1950, it amounts to only 20% of the central European temperature record length and can therefore only weakly influence the parameters of the ~250 year cycle.

An interesting feature reveals itself on closer examination of the stalagmite spectrum (Fig. 1, right). The lines with frequency ratios of 0.5, 0.75, 1, and 1.25 with respect to the ~250 year periodicity are prominent. This is precisely the signature spectrum of a period-doubling route to chaos [2]. Indeed, the wavelet diagram Fig. 3 indicates a first period doubling from 125 to 250 years around 1200 AD. The conclusion is that the climate, presently dominated by the 250 year cycle, is close to the point at which it will become nonperiodic, i.e. “chaotic”. We have in the meantime ascertained the period doubling more clearly and in more detail.
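
For readers unfamiliar with the period-doubling signature cited from [2], here is a generic numerical illustration using the logistic map (this has nothing to do with the climate data; it merely shows how each doubling inserts new subharmonic lines into the spectrum):

```python
import numpy as np

def logistic_orbit(r, n=4096, burn=1000):
    """Iterate x -> r*x*(1-x), discarding the transient."""
    x = 0.5
    for _ in range(burn):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

# r = 3.2, 3.5, 3.55 sit in the period-2, period-4, and period-8 windows:
# each doubling inserts new spectral lines halfway between the old ones.
for r in (3.2, 3.5, 3.55):
    x = logistic_orbit(r)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size)
    lines = freqs[spec > 0.01 * spec.max()]   # weak lines below 1% are ignored
    print(f"r={r}: spectral lines at f = {np.round(lines, 3)}")
```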

In summary, we trace the temperature history of the last centuries back to periodic (and thus “natural”) processes. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of anthropogenic global warming. The dominant period of ~250 years is presently at its maximum, as is the 65 year period (the well-known Atlantic/Pacific decadal oscillations).

Cooling as indicated in Fig. 2 can therefore be predicted for the near future, in complete agreement with the lack of temperature increase over the past 15 years. Temperatures further into the future can be predicted to continue decreasing, based on the knowledge of the Fourier components. Finally we note that our analysis is compatible with the analysis of Harde who reports a CO2 climate sensitivity of ~0.4 K per CO2 doubling by model calculations [3].

We note also that our analysis is seamlessly compatible with the analysis of P. Frank, in which the Atlantic/Pacific decadal oscillations are eliminated from the world temperature and the increase of the remaining slope after 1950 is ascribed to anthropogenic warming [4], resulting in a 0.4 deg temperature increase per CO2 doubling. The slope increase after 1950 turns out in our analysis to be simply the shape of the 250 year sine wave. A comparably small climate sensitivity is also found by the model calculations [3].

[1] H.-J. Lüdecke, A. Hempelmann, and C.O. Weiss, Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records, Clim. Past, 9, 447-452, 2013, doi:10.5194/cp-9-447-2013, www.clim-past.net/9/447/2013/cp-9-447-2013.pdf

[2] M.J. Feigenbaum, Universal behavior in nonlinear systems, Physica D, 7, 16-39, 1983

[3] H. Harde, How much CO2 really contributes to global warming? Spectroscopic studies and modelling of the influence of H2O, CO2 and CH4 on our climate, Geophysical Research Abstracts, Vol. 13, EGU2011-4505-1, 2011, http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf


The only variables I would consider for determining natural oscillation periods (in the N. Hemisphere at least) are the CET and tectonic records, and here is what they show:
http://www.vukcevic.talktalk.net/NVp.htm
with future extrapolation (not a prediction!)

resulting in a 0.4 deg temperature increase per CO2 doubling
Henry says
oh, this is nonsense again, to try to get the paper past the censors.
there is no actual scientific evidence (from actual tests and measurements) for that statement
problem with old records:
they are old.
a 0.4 or 0.5 K error from the accuracy and the way of recording (a few times per day? hopefully)
is quite normal.
the 210 (248) and 88 (100) year cycle: could well be
(*) = accounting for lag either way
I identified the 88 year cycle in the current records from 1973
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/

This is another curve-fitting exercise, as ridiculous as R J Salvador’s random walk: a herd of von Neumann’s elephants at WUWT this week.
“For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free (fitted) parameters.”
Rubbish – the frequency, amplitude, phase of each of the six Fourier components are all chosen to optimise the fit. 18 parameters – enough to make this elephant dance.
And worse, the old data are known to be biased – see http://rabett.blogspot.no/2013/02/rotten-to-core.html

Horst-Joachim Lüdecke et al.:
Thank you for this article.
Whatever merit your model turns out to have, I gained one thing I did not know and I am very grateful for it.
You say,

Finally we note that our analysis is compatible with the analysis of Harde who reports a CO2 climate sensitivity of ~0.4 K per CO2 doubling by model calculations.

http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf
A CO2 climate sensitivity of ~0.4 K per CO2 doubling obtained from your study and from model calculations!
All the empirical derivations of climate sensitivity of which I am aware also indicate a climate sensitivity of ~0.4 K per CO2 doubling.
These are
Idso from surface measurements
http://www.warwickhughes.com/papers/Idso_CR_1998.pdf
and Lindzen & Choi from ERBE satellite data
http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf
and Gregory from balloon radiosonde data
http://www.friendsofscience.org/assets/documents/OLR&NGF_June2011.pdf
You now say your analysis also provides the same indication of climate sensitivity.
And until reading your article I was not aware that any model study had also provided that value: I thought they all gave higher values.
Thank you.
Richard

A couple or three quick comments. First, rather than interpret the spectrum as the climate BECOMING chaotic, it is more likely (given a 2000 year record, especially) evidence that the climate already IS chaotic, as indeed one would expect for a highly nonlinear multivariate coupled system with strong feedbacks. A glance at the data itself suffices to indicate that this is quite likely. Second, the authors note the possibility of artifacts (well done!) but do not present e.g. the FT of suitable exponentials for comparison or attempt to correct for a presumed exponential growth period. As is well known, the FT of an exponential is a Lorentzian, and to my eye at least their spectra very much have the shape of a Lorentzian convolved with the chaotic quasi-periodicities. Third, the difference between the station transform and the stalagmite transform is a bit puzzling. In a few cases one can imagine pairs of lines from the latter combining to form peaks in the former, but most of the decadal-scale peaks are different and difficult to resolve in this way. This makes one question the robustness of the result, or the “global” versus local character of the transforms.
If one takes ANY temperature record and FT it — or any irregular curve and FT it — one will of course get a few peaks that probably ARE artifacts associated with the base length of the series being FT’d, plus a lot of peaks at shorter times. If those short-time peaks completely change, though, if one e.g. doubles the length of the time being fit, one has to imagine that they are irrelevant noise, not any sort of signal of actual causality with that periodicity.
In other words, with a 2000 year record I might well still mistrust a 250 year peak or set of peaks. I would certainly mistrust 1000 year or 500 year peaks — one expects Gibbs ringing on the base interval and integer divisors thereof. It might be wiser to take e.g. a 1700 year window (or any number far from a binary multiple of 250) out of the 2000 years and see if the 250 year signal survives. Similarly it might be good to “average” sliding window transforms to see what of the decadal signal is just irrelevant noise or if there is anything (not a binary divisor of the window length) that survives.
Fourier transforms of really long/infinite series are great. For finite series the question of artifacts is a very serious one, and the authors haven’t quite convinced me that any of the peaks they propose as “signal” of underlying actual long period causal variation are robust, as opposed to either being artifacts or pure noise, something that will shift all over the place any time one alters the length of the timeseries being fit, irrelevant stuff needed to fit THIS particular curve but not indicative of any actual underlying periodicity in hypothetical causes of the curve.
So, “interesting” but no cigar, at least not yet.
rgb
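
rgb’s window-length robustness test is easy to sketch (illustrative Python with a red-noise stand-in for the proxy series, an assumption here): a real periodicity should report similar periods across windows of awkward lengths, while an artifact will wander with the window.

```python
import numpy as np

def top_periods(series, k=3):
    """Periods (in years) of the k strongest nonzero DFT components."""
    spec = np.abs(np.fft.rfft(series - series.mean()))
    freqs = np.fft.rfftfreq(series.size)
    idx = np.argsort(spec[1:])[-k:] + 1   # skip the zero-frequency bin
    return np.sort(1.0 / freqs[idx])

# Red-noise stand-in for the 2000-year proxy record.
rng = np.random.default_rng(1)
proxy = np.cumsum(rng.standard_normal(2000))

# Windows deliberately far from binary multiples of the candidate period.
for start, length in [(0, 2000), (0, 1700), (300, 1700), (150, 1500)]:
    print(start, length, top_periods(proxy[start:start + length]))
```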

This post is a bit late. Other blogs have already discussed this paper in February. For example at Open Mind:
http://tamino.wordpress.com/2013/02/25/ludeckerous/
A review of this paper in German can be found on:
http://scienceblogs.de/primaklima/2013/02/22/artikel-von-eike-pressesprecher-ludecke-et-al-veroffentlicht-in-climate-of-the-past/

I have some trouble with:

The dominant ~250 year period peak in the spectrum results from only one period in the data. This is clearly insufficient to prove periodic dynamics. Longer temperature records have therefore to be analyzed. We chose the temperature history derived from a stalagmite in the Austrian Spannagel cave, which extends back by 2000 years. The spectrum (Fig. 1, right) shows indeed the ~250 year peak in question.

The CET spike is 248, the stalagmite spike is 234 with nearby periods of 342 and 182. Not all that great a match to my eye, especially with how narrow the spikes are.
In http://wattsupwiththat.com/2011/12/07/in-china-there-are-no-hockey-sticks/ , their power spectrum shows spikes at 110, 199, 800, and 1324 years, a substantial disagreement with this new paper. (The China record is 2485 years long, a projection of its record calls for cooling between 2006 and 2068.)
BTW, more about the stalagmite record is at http://www.uibk.ac.at/geologie/pdf/vollweiler06.pdf. They claim it shows the Little Ice Age and Medieval Warm Period. A summary is at http://www.worldclimatereport.com/index.php/2006/11/15/stalagmite-story/

rgbatduke

This post is a bit late. Other blogs have already discussed this paper in February. For example at Open Mind:
http://tamino.wordpress.com/2013/02/25/ludeckerous/

And yes, this is entirely well-taken, including a lot of the blog comments (not so much the silly poems, but I especially resonate with the bit about the elephant wiggling its trunk and the stupidity of “just” fitting curves using arbitrary basis functions). Why bother using periodic functions (Fourier series or transforms)? Why bother making up obscure sums of unmotivated functional forms? Go for the cheese, build a neural network that fits the data. That way NOBODY will have any idea what the weights mean, and by adding more hidden-layer neurons one can even fit the noise as closely as one likes.
Of course, no matter how one does this one will have a very hard time differentiating any kind of signal that can extrapolate outside of the training set to predict a trial set, the bête noire of climate modeling as soon as that trial set is more than a few hundred years long if not much less. In the meantime, one might as well do what Tamino does, only even more brutally. Take a coordinate axis. Have your six year old draw a wiggly curve on it following the rules for making a function. Then orient it via symmetry transformations until it ends with an upturn, give it to anybody you like as a supposed trace of the climate record, and ask them to explain it. Or give it to people you don’t like. It won’t matter.
This would make a great experiment in practical psychology and the systematic abuse of scientific methodology. Give it to Mann, and he’ll turn it into a hockey stick. Give it to Tamino, and he’ll find a “warming signal” at the end that can be attributed to CO2. Give it to Dragonslayers and they’ll either turn the curve upside down and claim that is the REAL curve or they’ll attribute the warming to thermonuclear fusion occurring inside the Earth’s core and tell you that CO2 is obviously a cooling gas and without it the Earth would resemble Venus.
Give it to an honest man — note well, not Mann — and they’ll analyze it and tell you something like “I haven’t got a clue, because correlation is not causality and hence monotonic covariation of two curves doesn’t prove anything.” At which point Tamino’s point about needing to include physics is well-taken, but begs the question about whether or not we know ANY physics or physical model that can quantitatively/accurately predict or explain the thermal record of the Pleistocene. I would tend to say ha ha no, don’t make me laugh. He might assert something else — but can he prove it by (for example) rigorously predicting the MWP through the LIA and Dalton minimum to the present? Or even just the last 16,000 years, including the Younger Dryas?
Warming due to CO2 is warming on TOP of an unknown, unpredictable secular variation. “Climate sensitivity” (needed to make warming due to CO2 significant enough to become “catastrophic” over the next century) is warming on TOP of the warming due to CO2 on TOP of an unknown, unpredictable secular variation. The secular trend visible in the actual climate trace from the last 130 years is almost unchanged in the present, and we cannot explain the trend itself or the significant variations around that trend.
Until we can do better, fitting arbitrary bases to climate data is an exercise in futility, especially when one doesn’t even bother with the standard rules of predictive modeling theory, in particular the one that says that you’re a fool to trust a model (physically motivated or otherwise) just because it works for the training set. A model that works for not only the training set it was built with but ALSO for all trial data available (not just a carefully cherrypicked subset of it) might be moderately trustworthy, although there is still the black swan to be feared by all righteous predictive modelers, especially those modeling systems with high multivariate nonlinear complexity known to exhibit chaotic dynamics dominated by a network of constantly shifting attractors and multiple competing locally semistable states.
Like the climate.
rgb

Steve from Rockwood

I thought that if you were going to use the FFT the data should be periodic. If you graft the year 2000 onto the start of the record at 1750 you have a serious discontinuity.

Stephen Wilde

One of Leif Svalgaard’s objections to a solar influence on climate was his observation that solar activity in the late 1700s was comparable to today.
This research shows that the temperature then was about the same as today.
It appears that there was a rapid solar recovery from the depths of the LIA to the late 1700s, then a dip to about 1880, and a recovery since then, all of which is reflected in the level of solar activity.
Leif thought that by highlighting the high solar activity of the late 1700s he was delivering a fatal blow to a proposed solar influence on temperature changes since the LIA.
However a more detailed examination of the temperature changes since the LIA appears to match solar activity very closely.
As the authors say:
“the overall temperature development since 1780 is part of periodic temperature dynamics prevailing already for ~2000 years. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2, but clearly results from the 250 year cycle. It also applies to the temperature drop from 1800 (when the temperature was roughly as high as today, Fig. 4) to 1880, which in all official statements is tacitly swept under the carpet”
The period 1800 to 1880 appears to represent a reversal of the previous recovery from the LIA and that reversal appears to correlate with a decline in solar activity.
Leif’s work actually supports the solar / climate link once the temperature record is properly examined.

jorgekafkazar

rgbatduke says: “…First, rather than interpret the spectrum as the climate BECOMING chaotic, it is more likely (given a 2000 year record, especially) evidence that the climate already IS chaotic, as indeed one would expect for a highly nonlinear multivariate coupled system with strong feedbacks. Indeed, a glance at the data itself suffices to indicate that this is indeed quite likely….So, “interesting” but no cigar, at least not yet.”
Nicely put, as usual, rgb.

Matthew R Marler

For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free (fitted) parameters.
There is a major misunderstanding: DFTs compute parameter estimates from the data. Why you think that the Fourier transform of the data does not compute parameters is a mystery. If you claim that the model makes an accurate forecast, then you claim that the coefficients are accurate representations of the process, and that means that they are estimates.
This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2, but clearly results from the 250 year cycle.
You have less than 1 full cycle of the 250 year cycle on which to base this estimate, so it should not be accorded a lot of credence.
Finally we note that our analysis is compatible with the analysis of Harde who reports a CO2 climate sensitivity of ~0.4 K per CO2 doubling by model calculations [3].
If your model includes no representation of CO2, then if it is sufficiently accurate it is “compatible” with a CO2 climate sensitivity of exactly 0 K per CO2 doubling. If you accept that 0.4K per doubling is accurate, then you need to include a term for CO2 in your model and forecast.
Contrary to expectations the Fourier spectra consist of spectral lines only
The figures show “peaks” rather than “lines”. That statement is either wrong or needs explanation.
An interesting feature reveals itself on closer examination of the stalagmite spectrum (Fig. 1, right). The lines with frequency ratios of 0.5, 0.75, 1, and 1.25 with respect to the ~250 year periodicity are prominent. This is precisely the signature spectrum of a period-doubling route to chaos [2]. Indeed, the wavelet diagram Fig. 3 indicates a first period doubling from 125 to 250 years around 1200 AD. The conclusion is that the climate, presently dominated by the 250 year cycle, is close to the point at which it will become nonperiodic, i.e. “chaotic”. We have in the meantime ascertained the period doubling more clearly and in more detail.
What is represented by the color code in Figure 3? It would be helpful to have more information on how you estimated a time-varying spectrum — merely referring to wavelet analysis is incomplete because there are so many wavelet bases. I think it requires a visionary to read period-doubling off that graph; the claim that climate has been periodic but will become chaotic is extremely weak. Period-doubling is a route to chaos, but that doesn’t imply that the next doubled period, should it occur, will mark the onset of chaos. And period-doubling is not the sine qua non of chaos.
I am no less happy with this post-hoc model and its estimated parameters than with all the others that have been presented here, but it is an estimated post-hoc model. The test will, as with all the others, be how well its projection into the future fits future data.

John Tillman

Stephen Wilde says:
May 4, 2013 at 10:00 am
————————————
The Little Ice Age of course wasn’t uniformly cold, but consisted of its own arguably cyclic ups & downs of temperature on decadal periods.
The latter part of the 18th century was, like the period since the late 1940s, a generally warmer spell within the remarkably cold LIA. It ended with the Dalton Minimum, aggravated by the 1815 Tambora explosion. It was preceded by the Maunder Minimum in the depths of the LIA, which followed the Spoerer & Wolf Minima.
http://en.wikipedia.org/wiki/File:Carbon14_with_activity_labels.svg
At the end of the Maunder occurred the coldest European year on record. It was an extreme weather event, but also possibly owed some of its ferocity to the preceding decades of prevailing cold.
http://en.wikipedia.org/wiki/Great_Frost_of_1709
Charles XII’s Sweden might have defeated Peter’s Russia in the Great Northern War but for this event.
I’m willing to be convinced that Holocene climate is chaotic, but haven’t seen compelling statistical & evidential demonstration of this hypothesis. I may well have missed something.

Matthew R Marler

In a recent paper [1] we Fourier-analyzed central European temperature records dating back to 1780.
Figure 1 shows data earlier than 1780. Why the apparent discrepancy?

Rud Istvan

Well done alternative curve fit, but that is all. Dubious predictive power into the future for any meaningful period of time.
Any study, including Lindzen and Choi 2011, which concludes equilibrium sensitivity is negative (0.4C versus 1.0 for a ‘black’ earth and 1.1 to 1.2 for a grey earth, with Lindzen himself using 1.2 for the no-feedback case) is suspect. There is simply too much data from too many approaches on the other side. Annan’s informed priors give 1.9; Nic Lewis and others get anything between 1.5 and 1.9. Various ocean heat methods give 1.8 to 1.9. The most recent batches of paleo sensitivities give 2 to 2.2. Posted last year elsewhere, and in the book on Truth.
Negative sensitivity says that negative cloud feedback is greater than water vapor positive feedback. That is very unlikely. And no other feedback is thought to be nearly as great as those two (at least no one has yet credibly argued any such thing). Far more likely, based on multiple observational papers, is that the water vapor feedback is less positive than in the models. Reasons include the inability to model convective cloud formation and precipitation on sub-grid scales. Evidence includes declining UTrH since 1975 to about 200x, and the absence of the tropical troposphere hotspot. The mechanism is probably Lindzen’s adaptive iris. Cloud feedback is probably negative, not positive as in almost all CMIP5 GCMs. The combination halves sensitivity from 3 to about 1.5, right in the ballpark of many new sensitivity estimates since AR4. Adding a cloud superparameterization to a GCM increased clouds and precipitation, lowered UTrH, and halved sensitivity. Published in 2009.
The climate gang just does not want to fix the situation because it puts the lie to past proclamations and to AR4.
Sensitivity of 1.5 is sufficient to say CO2 does something, but not very much. Either no action, or only adaptive change, suffices. CAGW remains hogwash. And natural variability still predominates in the second half of the 20th century: the old attribution problem.

Matthew@11:21
I don’t think the first sentence was exclusive.
wrt. Figure 1:-
“. Right panel: same for an interpolated time series of a stalagmite from the Austrian Alps.”

george e. smith

The authors state that the Fourier transform defines the frequency components in amplitude and phase, so no tweaking is involved. I’ll take their word for that.
What I WOULD find interesting, then, is a sequential reconstruction plot: start with a plot of the single largest component (or maybe start from the lowest frequency), then in sequence add one more component at a time. In any case, I do find their result interesting, although I don’t know how robust (mandatory adjectival comment) their projection of a 0.4 deg C per doubling climate sensitivity is; a concept for which I see no demonstrable evidence, either experimental or theoretical; in fact I find that notion laughable.
I don’t have much hesitation in accepting ML CO2 data, other than wondering if ML is not unlike a Coca Cola bottling plant as regards its local atmosphere. But I can see no effectively monotonic increase in global Temperature regardless of any time offset, plus or minus. Remember that over the ML record interval there is no material difference between a linear relationship and a logarithmic one. The sunspot cycle data since the IGY in 1957-58 seem better linked to the Temperature (whatever that is) than does the CO2, and those have been going counter to each other now for 17-18 years.
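
The sequential reconstruction asked for above is straightforward to produce; a minimal sketch (synthetic stand-in series, illustrative only, not the authors' data or code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the averaged station record.
rng = np.random.default_rng(2)
temps = np.sin(2 * np.pi * np.arange(230) / 230) + 0.3 * rng.standard_normal(230)

spec = np.fft.rfft(temps)
order = np.argsort(np.abs(spec[1:]))[::-1] + 1   # strongest nonzero bins first

# Add one component at a time, strongest first, replotting each partial sum.
partial = np.zeros_like(spec)
partial[0] = spec[0]   # keep the mean
for step, k in enumerate(order[:6], start=1):
    partial[k] = spec[k]
    plt.plot(np.fft.irfft(partial, n=temps.size), label=f"{step} component(s)")
plt.plot(temps, "k", alpha=0.3, label="data")
plt.legend()
plt.show()
```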

bones

rgb is correct about artifacts. Anyone who has used Fourier Transforms to any extent will surely have been bitten by spurious peaks. Some additional tests with record lengths that are nowhere near multiples of any interesting periodicities are needed.

Rud Istvan:
At May 4, 2013 at 11:36 am you assert

Negative sensitivity says that negative cloud feedback is greater than water vapor positive feedback. That is very unlikely.

Really? And your evidence for that is?
Richard

PMHinSC

Judging from many of these comments, any analysis that isn’t perfect is greeted with instant criticism and dismissal. Yes, FFTs, like all analyses, presuppose knowledge of the data set that often doesn’t exist. And yes, even the authors caution about artefacts, among other limitations of this approach. The FFT is a proven tool for teasing information from a data set. I don’t throw away results because they are imperfect, have limitations, or I don’t like them. If someone has a better approach I look forward to reading your essay prominently by-lined on WUWT. I would suggest that there is something of value here for those with an open mind and who are inclined to learn.

Matthew R Marler

Table 1. Frequencies, periods, and the coefficients a_j and b_j of the reconstruction RM6 due to Eqs. (1), (3), and (4) (N = 254).

 j   j/N (yr^-1)   Period (yr)     a_j        b_j
 0   0             –               0          0
 1   0.00394       254             0.68598   −0.12989
 2   0.00787       127             –          –
 3   0.01181        85             0.19492   −0.14677
 4   0.01575        64             0.17465   −0.22377
 5   0.01968        51             0.14730   −0.10810
 6   0.02362        42            −0.02510   −0.12095
 7   0.02756        36             0.12691    0.01276
 8   0.03150        32             –          –

That’s Table 1 from the original. They seem to have estimated 27 parameters, 9 triplets (period/frequency, sine and cosine coefficients). The columns are j, frequency, period, cosine parameter, and sine parameter, for the equation

RM6(t) = Σ_j [ a_j cos(2π j t / N) + b_j sin(2π j t / N) ],  j = 0, 1, 3, 4, 5, 6, 7.

They claim only 7, but period 127, period 32, and period infinity (freq 0) seem to have coefficients estimated to be 0. All without estimated standard errors. As with a standard FFT, the frequencies are chosen based on the length of the data record. Perhaps that is why the authors think that the frequencies are not “estimated”.
rgbatduke: Until we can do better, fitting arbitrary bases to climate data is an exercise in futility, especially when one doesn’t even bother with the standard rules of predictive modeling theory, … .
It’s a sort of standard operating procedure. We have to do something while waiting for the teams of empiricists to record and verify all of the data of the next 3 decades.
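
For concreteness, RM6 can be rebuilt directly from the quoted Table 1 coefficients (a minimal sketch; rows j = 2 and j = 8 list no coefficients and are omitted):

```python
import numpy as np

N = 254
coeffs = {   # j: (a_j, b_j), copied from Table 1 above
    1: (0.68598, -0.12989),
    3: (0.19492, -0.14677),
    4: (0.17465, -0.22377),
    5: (0.14730, -0.10810),
    6: (-0.02510, -0.12095),
    7: (0.12691, 0.01276),
}

t = np.arange(N)
RM6 = np.zeros(N)
for j, (a, b) in coeffs.items():
    RM6 += a * np.cos(2 * np.pi * j * t / N) + b * np.sin(2 * np.pi * j * t / N)
```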

Matthew R Marler

oops. In “Figure 1 shows data earlier than 1780. Why the apparent discrepancy?” I meant Figure 2. Figure 1 in their paper shows data before 1775 for Paris and Prague, but not Hohenpeissenberg or Vienna. Perhaps Figure 2 of the post shows the mean of available data back as early as the earliest record, while the data used in the modeling exclude the dates for which complete data are unavailable.

Matthew R Marler

Rud Istvan: Negative sensitivity says that negative cloud feedback is greater than water vapor positive feedback. That is very unlikely. And no other feedback is thought to be nearly as great as those two (at least no one has yet credibly argued any such thing).
Likely or not, negative future CO2 sensitivity to a doubling is not unreasonable or ruled out by data or what is sometimes referred to as “the physics”. Equilibrium is an abstract concept, so equilibrium climate sensitivity is suspect. At least as suspect is the idea that CO2 sensitivity is a constant instead of being dependent on the climate at the start of the doubling. If it is true that increased CO2 causes increased downwelling long-wave infrared radiation, then it is quite possible that increased CO2 will cause more rapid (earlier in the day) formation of clouds in the tropics and in the mid-latitudes, especially in hot weather. Combined with increased upward radiation by high-altitude CO2, the net effect could be cooling of the lower troposphere. It can’t be ruled out on present theory and data if you abandon equilibrium assumptions and examine what is known about actual energy flows.

I am not convinced about the value of this reconstruction even for the Northern Hemisphere, let alone global temperature
http://www.vukcevic.talktalk.net/2CETs.gif
However, you pays your money and you takes your choice.

James Smyth

In a recent paper [1] we Fourier-analyzed central European temperature records dating back to 1780.
I haven’t read the paper in detail (and I haven’t studied the topic in years), but the granularity of the f-axis in a DFT is determined by the duration of the time signal. The idea that, in a 230 year sample set, you can see a 250 year cycle with any clarity is dubious. In this case, (frequency sample) K = 1 is the 230 year bin, K = 2 is the 115 year bin, and so on. A wide range of cycle durations will be accumulated in those bins.
Many people don’t seem to realize that there is a kind of corollary to the Nyquist theorem when doing DFTs: just as you need to sample at 1/(2F) intervals in order to find frequency-F components, you need to sample for a 2Y duration in order to distinguish duration-Y cycles.
There is an obvious issue in that our most reliable satellite data is only about 30 years. I don’t believe it’s possible to really see any long duration cycles in that data.
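
The bin-granularity point is easy to verify (illustrative Python; annual sampling is assumed): no DFT bin of a 230-year record falls anywhere near a 248- or 250-year period.

```python
import numpy as np

freqs = np.fft.rfftfreq(230, d=1.0)   # d = 1 year sampling interval
print(1.0 / freqs[1:6])               # -> [230.  115.  76.67  57.5  46.]
```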

James Smyth

“Contrary to expectations the Fourier spectra consist of spectral lines only”

The figures show “peaks” rather than “lines”. That statement is either wrong or needs explanation.

They’ve done some kind of smoothing of the data using a higher-granularity frequency domain than can possibly be legitimate. Per my above, I don’t think there is any legitimate way to resolve data at K=1 (1/230 years) and K=2 (1/115 years) into a signal with a peak at a precision of 248 years.
Or I could be wrong; there may be some legitimate transform analysis/technique that I’m ignorant of.


steve

Sure, this is just another climate model curve-fit to the data. But it does have the advantage of being ridiculously simple compared to most. If they are actually on to something, it should be comparatively easy to identify the causes of the 6 variables and go on to establish a relatively convincing argument. There are no guarantees when performing research, but if I were them I would keep pursuing this path.

NZ Willy

So frequencies are extracted from the data, then plugged back in to re-create the original curves. This is just a routine mathematical trick — entirely unproven unless each extracted frequency is mapped to a physical cause, or if it builds a successful track record of prediction. I see Richard Telford has said as much above. Pretty, but a crowd-pleaser only.

Harry

I love this type of completely different analysis! It demonstrates (IMHO) that the output of the large, supercomputer-based models is virtually worthless. And as for whether all the parameters have a physical foundation: show me the parameters of the large GCM models. None of the parameters have any physical meaning, which is why they are presented as “ensembles”. WTF? Ensembles? Why? Because none of the individual models or individual runs is capable of catching the observed global temperature.
Models: GDI:GBO
Back to the original, hand-written, pristine observations, without the corrections and adjustments.

The dominant period of ~250 years is presently at its maximum
I wonder if either of the two sharp-shooters
rgbatduke
or
Matthew R Marler
in view of the sudden up/down discrepancy of 125 years
http://www.vukcevic.talktalk.net/2CETs.htm
(half of the all-important ‘dominant period of ~250 years’) would care to make any additional comment on the effect on the FT output.
Thanks RGB

Kasuha

Highlighted periods of 80, 61, 47, and 34 years seem to be just harmonics of the “base period” of 248; 248 is rather close to 3 x 80, 4 x 61, 5 x 47, and 7 x 34.
And the period of 248 is too long to be extracted reliably from the whole length of the analyzed data. Also, the stalagmite record does not seem to match the temperature record very well; some peaks seem to be close, but trying to reproduce the temperature record using frequencies which peak in the stalagmite record might be quite hard.
The fact that there are peaks in the spectrum in my opinion does not mean the temperature is solely based on cycles; it only proves the temperature record is not completely chaotic or random. There are processes which are cyclic (PDO, AMO, ENSO, …) but they do not have to be periodic with a fixed period. Just one such process is bound to wreak havoc and produce lots of false matches in any Fourier analysis.

Steve from Rockwood says: May 4, 2013 at 9:55 am
“I thought that if you were going to use the FFT the data should be periodic. If you graft the year 2000 onto the start of the record at 1750 you have a serious discontinuity.”

In fact, the DFT interprets the data as periodic, and reports accordingly. As you note, there is a discontinuity. It’s made worse by the use of zero padding, presumably to a 256 year period as the FFT requires. That means that, as the FFT sees it, every 256 years, the series rises to a max of 1.5 (their Fig 4, end value), drops to zero for six years, then rises to their initial value of 0.5.
That is seen as real, and it’s a very large effect. That’s where their 248-year peak comes from. It’s the FT of that periodic pulse. It’s entirely spurious.
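
Nick Stokes’s point can be demonstrated with a pure trend containing no cycles at all (an illustrative sketch; the 230-point ramp and the padding to 256 mirror the numbers in the discussion and are assumptions):

```python
import numpy as np

# A monotonic, aperiodic ramp zero-padded to a power-of-two length.
trend = np.linspace(0.5, 1.5, 230)
padded = np.concatenate([trend, np.zeros(26)])

spec = np.abs(np.fft.rfft(padded - padded.mean()))
freqs = np.fft.rfftfreq(padded.size, d=1.0)
k = 1 + np.argmax(spec[1:])           # strongest nonzero bin
print("spurious period ~", 1.0 / freqs[k], "years")   # ~256, from the padding discontinuity
```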

Matthew R Marler

vukcevic: would care to make any additional comment,
I am glad you asked. I have always wished that you would put more labeling and full descriptions of whatever it is that you have in your graphs. Almost without exception, and this pair is not an exception, I do not understand what you have plotted.

Matthew R Marler

James Smyth, you and I are in agreement about the need for at least 2 full cycles for any period that must be estimated from the data. Only if the period is known exactly a priori can you try to estimate the phase and amplitude from data. Also see Nick Stokes on the effects of padding, which the authors had to do for an FFT algorithm. The paper (and this applies to the original) is too vague on details.

Willis Eschenbach

My friends, I fear I don’t understand Figure 2. You’ve taken the Fourier analysis of the European data, then used the six largest identified cycles to reconstruct the rough shape of the European data …
So what?
How does that explain or elucidate anything? Were you expecting that the reconstruction would have a different form? Do you think the Fourier reconstruction having that form actually means something? Your paper contains the following about Figure 2:

Fig. 2 shows the reconstruction together with the central European temperature record smoothed over 15 years (boxcar). The remarkable agreement suggests the absence of any warming due to CO2 (which would be nonperiodic) or other nonperiodic phenomena related to human population growth or industrial activity.

Say what? Let me count the problems with that statement.
1. The “remarkable” agreement you point out is totally expected and absolutely unremarkable. That’s what you get every time when you “reconstruct” some signal using just the larger longer-term Fourier cycles as you have done … the result looks like a smoothed version of the data. Which is no surprise, since what you’ve done is filtered out the high-frequency cycles, leaving a smoothed version of the data.
And you have compared it to … well, a smoothed version of the data, using a “boxcar” filter.
You seem impressed by the “remarkable agreement” … but since they are both just smoothed versions of the underlying data, the agreement is predictable and expected, and not “remarkable” in the slightest.
Let me suggest that this misunderstanding of yours reveals a staggering lack of comprehension of what you are doing. In the hackers’ world you’d be described as “script kiddies”. A “script kiddie” is an amateur hacker using methods (“scripts” designed to gain entry to a computer) by rote, without any understanding of what the methods are actually doing. You seem to be in that position vis-a-vis Fourier analysis.
I am speaking this bluntly for one reason only—you truly don’t seem to understand the magnitude of your misunderstanding …
2. The Fourier analysis does NOT “suggest the absence of any warming”. Instead, it just suggests the limitations of Fourier analysis. A couple of years after I was born, Claude Shannon pointed out the following:

If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

This means that the longest cycle you can hope to resolve in a dataset N years long is maybe N / 2 years. Any cycle longer than that you’d be crazy to trust, and even that long can be sketchy.
3. The Fourier analysis does NOT “suggest the absence of any … other nonperiodic phenomena.” There’s lots of room in there for a host of other things to go on. For example, try adding an artificial trend to your data of say an additional 0.5°C per century, starting in 1850. Then redo your Fourier analysis, and REPORT BACK ON YOUR FINDINGS, because that’s how science works …
Or not, blow off all the serious comments and suggestions, and walk away … your choice. I’m easy either way.
Finally, you claim to extract a 248-year cycle from data that starts in 1780, although the picture is a bit more complex than that. In your paper (citation [1]) you say:

The details of the records used in this paper are as follows: Kremsmünster, monthly (1768–2010) (Auer et al., 2007); Hohenpeißenberg, monthly (1781–2010) (CRU, 2012); Prague, monthly (1770–2010) (CHMI, 2012); Paris-Le-Bourget, monthly (1757–1993) (Météo France, 2012); Paris-Montsouris, monthly (1994–2011) (Météo France, 2012); Munich-Riem, monthly (1781–2007) (Auer et al., 2007); Munich-Airport, monthly (2007–2011) (DWD, 2012); Vienna, monthly (1775–2010) (CRU, 2012)

The period of overlap between all of these records (the latest start date) is 1781, for Hohen-whateversburg and Kremsmunster. They all end in 2010, so rather than being 250 years long, your dataset covers 1781-2010, only 230 years.
So my question is … how did you guys get a 248-year cycle out of only 230 years of data?
Regards,
w.
PS—My summary of this analysis? My friends, I fear it is as useless as a trailer hitch on a bowling ball. It’s as bad as the previous exercise in meaningless curve fitting. The fact that the curves have been fit using Fourier analysis doesn’t change the problems with the analysis.
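
Willis’s suggested experiment, injecting an artificial trend and redoing the analysis, can be sketched as follows (synthetic stand-in for the station average, illustrative only). The nonperiodic trend is absorbed almost entirely by the longest-period bins rather than showing up as anything new, which is exactly the ambiguity at issue:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1781, 2011)
temps = np.sin(2 * np.pi * (years - 1781) / 230) + 0.3 * rng.standard_normal(years.size)

# Artificial 0.5 deg C per century trend starting in 1850.
trend = np.where(years >= 1850, 0.5 * (years - 1850) / 100.0, 0.0)

for label, series in [("original", temps), ("with trend", temps + trend)]:
    spec = np.abs(np.fft.rfft(series - series.mean()))
    freqs = np.fft.rfftfreq(series.size)
    k = 1 + np.argmax(spec[1:])        # strongest nonzero bin
    print(label, "strongest period:", round(1.0 / freqs[k], 1), "years")
```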

James Smyth

In fact, the DFT interprets the data as periodic, and reports accordingly. As you note, there is a discontinuity.
This is picking nits, but I think that’s a poor verbalization of the underlying mathematics as a linear transform from C(N) to C(N). The DFT does no interpreting, and there is no assumption of periodicity of the domain data in the definition itself. Of course, the DFT itself has the property that X(N + k) = X(k), but that is not based on an assumption of x(N+k) = x(k). It is based on properties of the range basis elements. So, you can say with some meaningfulness that “the DFT is periodic.”
Now, as to the question of zero-padding for the FFT … I don’t even see the use of the term FFT in the excerpt. Which begs the question … Does anyone have a handy reference to modern computing time required for a true DFT of (what I would bet is) not that big of a sample set? A quick google search is not finding anything.

Matthew R Marler says: May 4, 2013 at 2:09 pm
…………..
Ah…, my trade mark…
Let’s have a go
Graph
http://www.vukcevic.talktalk.net/2CETs.htm
Top chart
-black line
Central European temperature as shown in Fig. 2 of the article (thread); it is slightly stretched horizontally for better resolution.
-green line
the CET (shown in absolute values); the scales are equalised in 0.5 C steps.
Bottom chart
Same annotation as in the above.
-Red up/down arrows
show the start and end of a ~125 year long period during which, if the European temps are lifted by ~0.5 C, there is an almost perfect match to the CET for the same period.
This strikes me as an unnatural event, particularly at the start: taking place over just a few years and then lasting another ~125 or so years.
Two areas CET and Central Germany (Europe) are on similar latitudes and about 500 miles apart.
If one assumes that the CET trend is more representative of the N. Hemisphere, then I would expect the ‘dominant period of ~250 years’ to disappear.
Looking forward to your comment. Tnx.

James Smyth

The “remarkable” agreement you point out is totally expected and absolutely unremarkable. That’s what you get every time when you “reconstruct” some signal using just the larger longer-term Fourier cycles as you have done … the result looks like a smoothed version of the data. Which is no surprise, since what you’ve done is filtered out the high-frequency cycles, leaving a smoothed version of the data.
LOL. Again, I haven’t read the paper; and I’m way out of practice w/ this stuff, but this is really hilarious. It’s like, holy cow, the inverse of an invertible transform is the original data? Whoddathunkit???

James Smyth says: May 4, 2013 at 2:56 pm
“Now, as to the question of zero-padding for the FFT … I don’t even see the use of the term FFT in the excerpt.”

They don’t mention it, and in fact they do mention N=254. But why on earth would they be zero-padding to N=254?
The more I read the paper, the more bizarre it gets. They have no idea what they are doing, and I can only assume Zorita is clueless too. They have a DFT which represents the data as harmonics of the base frequency, and they actually show that breakdown in Table 1 (relative to N=254) with the relevant maths in Eq 4. But then they show a continuous (smoothed) version in Fig 3, marking the harmonics with periods that are different (but close). Then they try to tell us that these frequencies have some significance.
But of course they don’t. They are simply the harmonics of the base frequency, which is just the period they have data for. The peak at 248 (or 254) years is just determined by the integral of the data multiplied by the base sinusoid. Of course there will be a peak there.

RCSaumarez

I agree with most of the comments made here.
Transformation from the time domain to the frequency domain means very little UNLESS there are multiple records that allow averaging of the power spectrum. All that has been done is to use a (crude) low pass filter. So what? Obviously you can reconstruct the signal from its Fourier components.

Paul Linsay

I’ve done a lot of work on chaotic dynamics and the period-doubling route to chaos. Try as I might, it just doesn’t look like period doubling to me. The subharmonics of the doubled period should have amplitudes that are smaller than the amplitude of the main peak. The peak with a period of 341 years is larger than the putative fundamental with a 234 year period. See for example Fig. 2, P.S. Linsay, Phys. Rev. Letters, 47, 1349 (1981).
Your true frequency resolution may also cause problems separating peaks so close together; they are only two to three bins apart in frequency space. Without knowing your windowing function it’s hard to know what your true resolution is.
The fact of the matter is that period doubling is a very fragile route to chaos and usually only shows up in dynamical systems with only a small number of degrees of freedom. As much as the AGW crowd would like us to believe that’s true with CO2 driving everything, the reality points to a very complicated system with many degrees of freedom. That is what one should expect from a system with multiple coupled fluids and components that can change phase.

Matthew R Marler

vukcevik: -Red up/down arrows
You get a slightly better fit if you move the left-end red line leftward a bit, and the right-end red line rightward a bit. That suggests that during an epoch of about 160 years central Europe (is Paris “central”?) cooled more than central England. It’s hard to get away from epochs that are approximately some multiple or fraction of some period in a Fourier transform.

James Smyth

“They are simply the harmonics of the base frequency, which is just the period they have data for. The peak at 248 (or 254) years is just determined by the integral of the data multiplied by the base sinusoid”

I would need to see your math. These words don’t translate into anything meaningful for me. It sounds like you are implying that zero padding introduces peak frequencies, which I don’t think is true. Multiplication by a window is convolution with a sinc; it will spread existing frequency peaks out. Is it possible to get the sinc’d components to add up in such a way that they introduce new peaks? I suppose.

Jeremy

Thank goodness for RGB!!
I read this article and alarm bells went off in my head everywhere. Spectral analysis is fraught with artifacts. It is also very important what type of “window” you apply to the data prior to analysis.
I would treat all of these “conclusions” with EXTREME caution.
Thank you RGB for pointing this out.

Jeremy

For those who want to know more
http://en.wikipedia.org/wiki/Window_function

commieBob

For those whose brains are about to explode … 😉
I highly recommend the following (free) book:

The Scientist and Engineer’s Guide to Digital Signal Processing
By Steven W. Smith, Ph.D.
http://www.dspguide.com/

The thing about this book is that it is written from the standpoint of one who might actually want to do some digital signal processing. There are a few serious traps the naive can fall into and this is the best book I’ve seen for pointing them out.
I think the most succinct definition of the problem is as follows:

rgbatduke says:
May 4, 2013 at 8:47 am
… Fourier transforms of really long/infinite series are great. For finite series the question of artifacts is a very serious one, …

rgbatduke you are indeed a master of understatement.

Gary Pearse

The symmetric “U” shaped curve of Figure 2, from one max to another max located at either end of the record, is highly suspicious. I realize you are using central Europe records and not NH records, but what happened to the LIA, which was certainly dominant in the first part of the record? In 1780, heavy cannon were hauled on the ice from Jersey City to New York and people could walk from New York to Staten Island on 8 foot-thick ice! One third of all Finns died, grape vines died in continental Europe…..
http://query.nytimes.com/mem/archive-free/pdf?res=9C06EED81E31E233A25757C2A9649D946096D6CF
http://green.blogs.nytimes.com/2012/01/31/in-the-little-ice-age-lessons-for-today/
The many criticisms above of a DFT of the record and the tautological result you obtained notwithstanding: shouldn’t you, at your pay-scale (shame on you all), have made some attempt to identify what the individual cycles were caused by in reality? Was it six cycles? I can’t be bothered to go back and check. There are virtually an infinite number of possible sine wave bundles that could give you essentially as good a fit (am I wrong here? I’m harkening back to mid-20thC, pre-computer-days summing of curves; such a work as yours might have been done in a grade 12 high school class). And you find in all this a corroboration of a 0.4 C climate sensitivity of CO2? This would require an FT along an up-sloping trend line. We’ve been jumping all over failed climate models but they are far superior to yours for their attempt to model actual effects.

peterg

The authors appear to have taken the signal, transformed it, removed some high frequency components, back-transformed, and drawn conclusions based on how well the result matches the original. I do not see how that would be valid.
I would have thought that before applying the FT, the data should be de-trended and windowed. No mention of any windowing in the description.
The Fourier transform assumes the signal is periodic, that is, it recurs infinitely. So without de-trending and windowing, an implicit periodic sawtooth spanning the data adds its harmonics to the output. Windowing limits the frequency resolution, but greatly reduces the sawtooth artifacts.
I do not understand how the FT can be used to discover what the aperiodic components of a signal are. Rather it is used to decompose a signal into sine waves, assuming it is periodic.
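
A minimal sketch of the de-trend-and-window procedure described above (synthetic data; a Hann window is one common choice and an assumption here):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(230)
x = 0.01 * t + np.sin(2 * np.pi * t / 64) + 0.3 * rng.standard_normal(230)

# Remove the linear trend, then taper with a Hann window to suppress
# the sawtooth harmonics from the implicit periodic extension.
detrended = x - np.polyval(np.polyfit(t, x, 1), t)
windowed = detrended * np.hanning(230)

raw_spec = np.abs(np.fft.rfft(x - x.mean()))
win_spec = np.abs(np.fft.rfft(windowed))
# raw_spec carries strong low-frequency leakage from the trend;
# win_spec concentrates power near the true 64-year line.
```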