Guest essay by:

Horst-Joachim Lüdecke, EIKE, Jena, Germany

Alexander Hempelmann, Hamburg Observatory, Hamburg, Germany

Carl Otto Weiss, Physikalisch-Technische Bundesanstalt Braunschweig, Germany

In a recent paper [1] we Fourier-analyzed central European temperature records dating back to 1780. Contrary to expectations, the Fourier spectra consist of spectral lines only, indicating that the climate is dominated by periodic processes (Fig. 1, left). Nonperiodic processes appear absent, or at least weak. To test for nonperiodic processes, the six strongest Fourier components were used to reconstruct a temperature history.

*Fig. 1: Left panel: DFT of the average from 6 central European instrumental time series. Right panel: same for an interpolated time series of a stalagmite from the Austrian Alps.*

Fig. 2 shows the reconstruction together with the central European temperature record smoothed over 15 years (boxcar). The remarkable agreement suggests the absence of any warming due to CO_{2} (which would be nonperiodic) or of other nonperiodic phenomena related to human population growth or industrial activity.

For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free (fitted) parameters.
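To make the distinction concrete, the reconstruction step can be sketched in a few lines of Python. This is a toy illustration with synthetic data, not the authors' code or records: the DFT fixes every component's amplitude and phase, and "keeping the 6 strongest lines" is then a selection, not a fit.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 233                        # roughly the length of a 1780-2012 record, in years
t = np.arange(N)
# synthetic stand-in for a temperature series: two sinusoids plus noise
y = (0.7 * np.sin(2 * np.pi * t / 64)
     + 0.3 * np.cos(2 * np.pi * t / 36)
     + 0.2 * rng.standard_normal(N))

Y = np.fft.rfft(y)             # DFT: amplitude and phase of every line fixed by the data
strongest = np.argsort(np.abs(Y[1:]))[::-1][:6] + 1  # indices of the 6 strongest lines

Y6 = np.zeros_like(Y)
Y6[0] = Y[0]                   # keep the mean
Y6[strongest] = Y[strongest]   # copy the 6 components unchanged: nothing is tuned
recon = np.fft.irfft(Y6, n=N)  # reconstruction from the 6 strongest Fourier components

print(round(float(np.corrcoef(y, recon)[0, 1]), 2))
```

Whether such a reconstruction tracking the record implies periodic dynamics is, of course, a separate question from the mechanics shown here.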

However, one has to guard against artefacts. An obvious one is the limited length of the records. The dominant ~250 year peak in the spectrum results from only one period in the data. This is clearly insufficient to prove periodic dynamics. Longer temperature records therefore have to be analyzed. We chose the temperature history derived from a stalagmite in the Austrian Spannagel cave, which extends back 2000 years. The spectrum (Fig. 1, right) indeed shows the ~250 year peak in question. The wavelet analysis (Fig. 3) indicates that this periodicity is THE dominant one in the climate history. We also ascertained that a minimum of this ~250 year cycle coincides with the 1880 minimum of the central European temperature record.

*Fig. 2: 15 year running average from 6 central European instrumental time series (black). Reconstruction with the 6 strongest Fourier components (red).*

*Fig. 3: Wavelet analysis of the stalagmite time series.*

Thus the overall temperature development since 1780 is part of periodic temperature dynamics that have prevailed for ~2000 years. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO_{2} but clearly results from the 250 year cycle. It also applies to the temperature drop from 1800 (when the temperature was roughly as high as today, Fig. 4) to 1880, which in all official statements is tacitly swept under the carpet. One may also note that the temperature at the 1935 maximum was nearly as high as today. This is shown in particular by a high-quality Antarctic ice core record, compared in Fig. 4 (blue curve) with the central European temperature records.

*Fig. 4: Central European instrumental temperatures, averaged over the records of Prague, Vienna, Hohenpeissenberg, Kremsmünster, Paris, and Munich (black). Antarctic ice core record (blue).*

As a note of caution we mention that a small influence of CO_{2} could have escaped this analysis. Such a small influence could have been absorbed by the Fourier transform into the ~250 year cycle, slightly shifting its frequency and phase. However, since substantial industrial CO_{2} emission only set in after 1950, it covers only 20% of the central European temperature record length and can therefore only weakly influence the parameters of the ~250 year cycle.

An interesting feature reveals itself on closer examination of the stalagmite spectrum (Fig. 1, right). The lines with frequency ratios of 0.5, 0.75, 1, and 1.25 with respect to the ~250 year periodicity are prominent. This is precisely the signature spectrum of a period-doubling route to chaos [2]. Indeed, the wavelet diagram (Fig. 3) indicates a first period-doubling from 125 to 250 years around 1200 AD. The conclusion is that the climate, presently dominated by the 250 year cycle, is close to the point at which it will become nonperiodic, i.e. "chaotic". We have in the meantime ascertained the period-doubling more clearly and in more detail.
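For readers unfamiliar with reference [2]: the period-doubling cascade is easiest to see in the logistic map, the standard textbook example. The sketch below is illustrative only and unrelated to the stalagmite data; it shows the settled orbit doubling its period as the control parameter grows, until it becomes chaotic.

```python
import numpy as np

def logistic_orbit(r, n_transient=5000, n_keep=64, x0=0.5):
    """Iterate the logistic map x -> r*x*(1-x) and return the settled orbit."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(x)
    return np.array(orbit)

def period(orbit, tol=1e-6):
    """Smallest p with orbit[i] ~= orbit[i+p] for all i, else None."""
    for p in range(1, len(orbit) // 2):
        if np.all(np.abs(orbit[:-p] - orbit[p:]) < tol):
            return p
    return None                        # no short period found: chaotic

print(period(logistic_orbit(3.2)))     # 2
print(period(logistic_orbit(3.5)))     # 4
print(period(logistic_orbit(3.5546)))  # 8
print(period(logistic_orbit(3.9)))     # None (chaos)
```

A period-doubled oscillation also puts spectral lines at half-integer multiples of the original frequency, which is the kind of subharmonic signature referred to above.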

In summary, we trace the temperature history of the last centuries back to periodic (and thus "natural") processes. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of anthropogenic global warming. The dominant period of ~250 years is presently at its maximum, as is the 65 year period (the well-known Atlantic/Pacific decadal oscillations).

Cooling, as indicated in Fig. 2, can therefore be predicted for the near future, in complete agreement with the lack of temperature increase over the past 15 years. Further into the future, temperatures can be predicted to continue decreasing, based on the knowledge of the Fourier components. Finally, we note that our analysis is compatible with the analysis of Harde, who reports a CO_{2} climate sensitivity of ~0.4 K per CO_{2} doubling from model calculations [3].

We also note that our analysis is seamlessly compatible with the analysis of P. Frank, in which the Atlantic/Pacific decadal oscillations are eliminated from the world temperature and the increase of the remaining slope after 1950 is ascribed to anthropogenic warming [4], resulting in a 0.4 deg temperature increase per CO2 doubling. The slope increase after 1950 turns out in our analysis to be simply the shape of the 250 year sine wave. A comparably small climate sensitivity is also found by the model calculations [3].

[1] H.-J. Lüdecke, A. Hempelmann, and C.O. Weiss, Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records, Clim. Past, 9, 447-452, 2013, doi:10.5194/cp-9-447-2013, www.clim-past.net/9/447/2013/cp-9-447-2013.pdf

[2] M.J. Feigenbaum, Universal behavior in nonlinear systems, Physica D, 7, 16-39, 1983

[3] H. Harde, How much CO2 really contributes to global warming? Spectroscopic studies and modelling of the influence of H_{2}O, CO_{2} and CH_{4} on our climate, Geophysical Research Abstracts, Vol. 13, EGU2011-4505-1, 2011, http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf

The only variables I would consider for determining natural oscillation periods (in the N. Hemisphere at least) are the CET and tectonic records, and here is what they show:

http://www.vukcevic.talktalk.net/NVp.htm

with future extrapolation (not a prediction!)

"resulting in a 0.4 deg temperature increase per CO2 doubling"

Henry says

oh, this is nonsense again, to try to get the paper past the censors.

there is no actual scientific evidence (from actual tests and measurements) for that statement

problem with old records:

they are old.

0.4 or 0.5 K error on the accuracy and the way of recording (a few times per day? hopefully)

is quite normal.

the 210 (248) and 88 (100) year cycle: could well be

(*) = accounting for lag either way

I identified the 88 year cycle in the current records from 1973

http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/

This is another curve-fitting exercise, as ridiculous as R J Salvador’s random walk: a herd of von Neumann’s elephants at WUWT this week.

“For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free ( fitted ) parameters.”

Rubbish – the frequency, amplitude, and phase of each of the six Fourier components are all chosen to optimise the fit. 18 parameters – enough to make this elephant dance.

And worse, the old data are known to be biased – see http://rabett.blogspot.no/2013/02/rotten-to-core.html

Horst-Joachim Lüdecke et al.:

Thank you for this article.

Whatever merit your model turns out to have, I gained one thing I did not know, and I am very grateful for it.

You say,

http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf

A CO2 climate sensitivity of ~0.4 K per CO2 doubling obtained from your study and from model calculations! All the empirical derivations of climate sensitivity of which I am aware also indicate a climate sensitivity of ~0.4 K per CO2 doubling. These are:

Idso from surface measurements
http://www.warwickhughes.com/papers/Idso_CR_1998.pdf

Lindzen & Choi from ERBE satellite data
http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf

and Gregory from balloon radiosonde data
http://www.friendsofscience.org/assets/documents/OLR&NGF_June2011.pdf

You now say your analysis also provides the same indication of climate sensitivity.

And until reading your article

I was not aware that any model study had also provided that value: I thought they all gave higher values. Thank you.

Richard

A couple or three quick comments. First, rather than interpret the spectrum as the climate BECOMING chaotic, it is more likely (given a 2000 year record, especially) evidence that the climate already IS chaotic, as indeed one would expect for a highly nonlinear multivariate coupled system with strong feedbacks. Indeed, a glance at the data itself suffices to indicate that this is quite likely.

Second, the authors note the possibility of artifacts (well done!) but do not present e.g. the FT of suitable exponentials for comparison, or attempt to correct for a presumed exponential growth period. As is well known, the FT of an exponential is a Lorentzian, and to my eye at least their spectra very much have the shape of a Lorentzian convolved with the chaotic quasi-periodicities.

Third, the difference between the station transform and the stalagmite transform is a bit puzzling. In a few cases one can imagine pairs of lines from the latter combining to form peaks in the former, but most of the decadal-scale peaks are different and difficult to resolve in this way. This makes one question the robustness of the result, or the “global” versus local character of the transforms.

If one takes ANY temperature record and FTs it — or any irregular curve, for that matter — one will of course get a few peaks that probably ARE artifacts associated with the base length of the series being FT’d, plus a lot of peaks at shorter times. If those short-time peaks completely change when one e.g. doubles the length of the time being fit, one has to imagine that they are irrelevant noise, not any sort of signal of actual causality with that periodicity.

In other words, with a 2000 year record I might well still mistrust a 250 year peak or set of peaks. I would certainly mistrust 1000 year or 500 year peaks — one expects Gibbs ringing on the base interval and integer divisors thereof. It might be wiser to take e.g. a 1700 year window (or any number far from a binary multiple of 250) out of the 2000 years and see if the 250 year signal survives. Similarly it might be good to “average” sliding-window transforms to see what part of the decadal signal is just irrelevant noise, or if there is anything (not a binary divisor of the window length) that survives.
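The window test described above is easy to run on synthetic data. The sketch below (my own illustration, not from the paper) applies it to red noise, which has no true periodicity at all; the "dominant period" nevertheless wanders as the window length changes, which is exactly the behavior of an artifact:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
y = np.zeros(T)
for i in range(1, T):                 # AR(1) "red noise" stand-in for a proxy record
    y[i] = 0.9 * y[i - 1] + rng.standard_normal()

def top_period(series):
    """Period (in steps) of the strongest nonzero-frequency DFT line."""
    spec = np.abs(np.fft.rfft(series - series.mean()))
    return len(series) / (1 + int(np.argmax(spec[1:])))

# window lengths deliberately far from simple multiples of one another
results = {n: top_period(y[:n]) for n in (2000, 1700, 1300)}
for n, p in results.items():
    print(n, round(p, 1))             # the "dominant period" shifts with the window
```

A peak that survives such changes of window length is a candidate signal; one that moves around with the window is noise or an artifact of the record length.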

Fourier transforms of really long/infinite series are great. For finite series the question of artifacts is a very serious one, and the authors haven’t quite convinced me that any of the peaks they propose as “signal” of underlying actual long period causal variation are robust, as opposed to either being artifacts or pure noise, something that will shift all over the place any time one alters the length of the timeseries being fit, irrelevant stuff needed to fit THIS particular curve but not indicative of any actual underlying periodicity in hypothetical causes of the curve.

So, “interesting” but no cigar, at least not yet.

rgb

This post is a bit late. Other blogs have already discussed this paper in February. For example at Open Mind:

http://tamino.wordpress.com/2013/02/25/ludeckerous/

A review of this paper in German can be found on:

http://scienceblogs.de/primaklima/2013/02/22/artikel-von-eike-pressesprecher-ludecke-et-al-veroffentlicht-in-climate-of-the-past/

I have some trouble with:

The CET spike is at 248 years, the stalagmite spike at 234, with nearby periods of 342 and 182. Not all that great a match to my eye, especially given how narrow the spikes are.

In http://wattsupwiththat.com/2011/12/07/in-china-there-are-no-hockey-sticks/ , their power spectrum shows spikes at 110, 199, 800, and 1324 years, a substantial disagreement with this new paper. (The China record is 2485 years long, a projection of its record calls for cooling between 2006 and 2068.)

BTW, more about the stalagmite record is at http://www.uibk.ac.at/geologie/pdf/vollweiler06.pdf . They claim it shows the Little Ice Age and Medieval Warm Period. A summary is at http://www.worldclimatereport.com/index.php/2006/11/15/stalagmite-story/

“This post is a bit late. Other blogs have already discussed this paper in February. For example at Open Mind: http://tamino.wordpress.com/2013/02/25/ludeckerous/”

And yes, this is entirely well-taken, including a lot of the blog comments (not so much the silly poems, but I especially resonate with the bit about the elephant wiggling its trunk and the stupidity of “just” fitting curves using arbitrary basis functions). Why bother using periodic functions (Fourier series or transforms)? Why bother making up obscure sums of unmotivated functional forms? Go for the cheese, build a neural network that fits the data. That way NOBODY will have any idea what the weights mean, and by adding more hidden-layer neurons one can even fit the noise as closely as one likes.

Of course, no matter how one does this one will have a very hard time finding any kind of signal that can extrapolate outside of the training set to predict a trial set, the bête noire of climate modeling as soon as that trial set is more than a few hundred years long, if not much less. In the meantime, one might as well do what Tamino does, only even more brutally. Take a coordinate axis. Have your six year old draw a wiggly curve on it following the rules for making a function. Then orient it via symmetry transformations until it ends with an upturn, give it to anybody you like as a supposed trace of the climate record, and ask them to explain it. Or give it to people you don’t like. It won’t matter.

This would make a great experiment in practical psychology and the systematic abuse of scientific methodology. Give it to Mann, and he’ll turn it into a hockey stick. Give it to Tamino, and he’ll find a “warming signal” at the end that can be attributed to CO_{2}. Give it to Dragonslayers and they’ll either turn the curve upside down and claim that is the REAL curve, or they’ll attribute the warming to thermonuclear fusion occurring inside the Earth’s core and tell you that CO_{2} is obviously a cooling gas and without it the Earth would resemble Venus.

Give it to an honest man — note well, not Mann — and they’ll analyze it and tell you something like “I haven’t got a clue, because correlation is not causality and hence monotonic covariation of two curves doesn’t prove anything.” At which point Tamino’s point about needing to include physics is well-taken, but begs the question of whether we know ANY physics or physical model that can quantitatively/accurately predict or explain the thermal record of the Pleistocene. I would tend to say ha ha no, don’t make me laugh. He might assert something else — but can he prove it by (for example) rigorously predicting the MWP through the LIA and Dalton minimum to the present? Or even just the last 16,000 years, including the Younger Dryas?

Warming due to CO_{2} is warming on TOP of an unknown, unpredictable secular variation. “Climate sensitivity” (needed to make warming due to CO_{2} significant enough to become “catastrophic” over the next century) is warming on TOP of the warming due to CO_{2}, on TOP of an unknown, unpredictable secular variation. The secular trend visible in the actual climate trace from the last 130 years is almost unchanged in the present, and we cannot explain the trend itself or the significant variations around that trend.

Until we can do better, fitting arbitrary bases to climate data is an exercise in futility, especially when one doesn’t even bother with the standard rules of predictive modeling theory, in particular the one that says that you’re a fool to trust a model (physically motivated or otherwise) just because it works for the training set. A model that works for not only the training set it was built with but ALSO for all trial data available (not just a carefully cherrypicked subset of it) might be moderately trustworthy, although there is still the black swan to be feared by all righteous predictive modelers, especially those modeling systems with high multivariate nonlinear complexity known to exhibit chaotic dynamics dominated by a network of constantly shifting attractors and multiple competing locally semistable states.

Like the climate.

rgb

I thought that if you were going to use the FFT, the data should be periodic. If you graft the year 2000 onto the start of the record at 1750, you have a serious discontinuity.
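That wrap-around assumption is exactly what produces spectral leakage. A small sketch with synthetic data (illustrative only): when a whole number of cycles fits the record, the spectrum is a single clean line; when the record cuts off mid-cycle, the mismatch at the wrap-around smears energy across many bins.

```python
import numpy as np

N = 240
t = np.arange(N)
good = np.sin(2 * np.pi * t / 60)   # 60-step period: exactly 4 cycles fit the record
bad = np.sin(2 * np.pi * t / 64)    # 64-step period: 3.75 cycles, ends don't match up

def leakage(y):
    """Fraction of spectral power lying outside the single strongest bin."""
    p = np.abs(np.fft.rfft(y))[1:] ** 2
    return 1 - p.max() / p.sum()

print(round(leakage(good), 3))      # essentially 0: one clean spectral line
print(round(leakage(bad), 3))       # substantial: energy smeared into neighboring bins
```

Tapering (windowing) the record before transforming is the usual mitigation, at the cost of frequency resolution.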

One of Leif Svalgaard’s objections to a solar influence on climate was his observation that solar activity in the late 1700s was comparable to today.

This research shows that the temperature then was about the same as today.

It appears that there was a rapid solar recovery from the depths of the LIA to the late 1700s, then a dip to about 1880, and a recovery since then, all of which is reflected in the level of solar activity.

Leif thought that by highlighting the high solar activity of the late 1700s he was delivering a fatal blow to a proposed solar influence on temperature changes since the LIA.

However a more detailed examination of the temperature changes since the LIA appears to match solar activity very closely.

As the authors say:

“the overall temperature development since 1780 is part of periodic temperature dynamics prevailing already for ~2000 years. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2, but clearly results from the 250 year cycle. It also applies to the temperature drop from 1800 ( when the temperature was roughly as high as today, Fig. 4 ) to 1880, which in all official statements is tacitly swept under the carpet”

The period 1800 to 1880 appears to represent a reversal of the previous recovery from the LIA and that reversal appears to correlate with a decline in solar activity.

Leif’s work actually supports the solar / climate link once the temperature record is properly examined.

rgbatduke says: “…First, rather than interpret the spectrum as the climate BECOMING chaotic, it is more likely (given a 2000 year record, especially) evidence that the climate already IS chaotic, as indeed one would expect for a highly nonlinear multivariate coupled system with strong feedbacks. Indeed, a glance at the data itself suffices to indicate that this is indeed quite likely….So, “interesting” but no cigar, at least not yet.”

Nicely put, as usual, rgb.

“For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free ( fitted ) parameters.”

There is a major misunderstanding: DFTs compute parameter estimates from the data. Why you think that the Fourier transform of the data does not compute parameters is a mystery. If you claim that the model makes an accurate forecast, then you claim that the coefficients are accurate representations of the process, and that means that they are estimates.

“This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2, but clearly results from the 250 year cycle.”

You have less than 1 full cycle of the 250 year cycle on which to base this estimate, so it should not be accorded a lot of credence.

“Finally we note that our analysis is compatible with the analysis of Harde who reports a CO2 climate sensitivity of ~0.4 K per CO2 doubling by model calculations [3].”

If your model includes no representation of CO2, then if it is sufficiently accurate it is “compatible” with a CO2 climate sensitivity of exactly 0 K per CO2 doubling. If you accept that 0.4 K per doubling is accurate, then you need to include a term for CO2 in your model and forecast.

“Contrary to expectations the Fourier spectra consist of spectral lines only”

The figures show “peaks” rather than “lines”. That statement is either wrong or it needs explanation.

“An interesting feature reveals itself on closer examination of the stalagmite spectrum ( Fig.1 right ). The lines with a frequency ratio of 0.5, 0.75, 1, 1.25 with respect to the ~250 year periodicity are prominent. This is precisely the signature spectrum of a period-doubling route to chaos [2]. Indeed, the wavelet diagram Fig. 3 indicates a first period-doubling from 125 to 250 years around 1200 AD. The conclusion is that the climate, presently dominated by the 250 year cycle is close to the point at which it will become nonperiodic, i.e. “chaotic”.”

What is represented by the color code in Figure 3? It would be helpful to have more information on how you estimated a time-varying spectrum — merely referring to wavelet analysis is incomplete because there are so many wavelet bases. I think it requires a visionary to read period-doubling off that graph; the claim that the climate has been periodic but will become chaotic is extremely weak. Period-doubling is a route to chaos, but that doesn’t imply that the next doubled period, should it occur, will mark the onset of chaos. And period-doubling is not the sine qua non of chaos.

I am no less happy with this post-hoc model and its estimated parameters than with all the others that have been presented here, but it is an estimated post-hoc model. The test will, as with all the others, be how well its projection into the future fits future data.

Stephen Wilde says:

May 4, 2013 at 10:00 am

————————————

The Little Ice Age of course wasn’t uniformly cold, but consisted of its own arguably cyclic ups & downs of temperature on decadal periods.

The latter 18th century, like the period since the late 1940s, was a generally warmer spell within the remarkably cold LIA. It ended with the Dalton Minimum, aggravated by the 1815 Tambora explosion. It was preceded by the Maunder Minimum in the depths of the LIA, which followed the Spoerer & Wolf Minima.

http://en.wikipedia.org/wiki/File:Carbon14_with_activity_labels.svg

At the end of the Maunder Minimum occurred the coldest European year on record. It was an extreme weather event, but it also possibly owed some of its ferocity to the preceding decades of prevailing cold.

http://en.wikipedia.org/wiki/Great_Frost_of_1709

Charles XII’s Sweden might have defeated Peter’s Russia in the Great Northern War but for this event.

I’m willing to be convinced that Holocene climate is chaotic, but haven’t seen compelling statistical & evidential demonstration of this hypothesis. I may well have missed something.

“In a recent paper [1] we Fourier-analyzed central-european temperature records dating back to 1780.”

Figure 1 shows data earlier than 1780. Why the apparent discrepancy?

Well done alternative curve fit, but that is all. Dubious predictive power into the future for any meaningful period of time.

Any study, including Lindzen and Choi 2011, which concludes equilibrium sensitivity is negative (0.4 C versus 1.0 for a ‘black’ earth and 1.1 to 1.2 for a grey earth, with Lindzen himself using 1.2 for the no-feedback case) is suspect. There is simply too much data from too many approaches on the other side. Annan’s informed priors give 1.9; Nic Lewis and others get anything between 1.5 and 1.9. Various ocean heat methods give 1.8 to 1.9. The most recent batches of paleo sensitivities give 2 to 2.2. Posted last year elsewhere, and in the book on Truth.

Negative sensitivity says that negative cloud feedback is greater than positive water vapor feedback. That is very unlikely. And no other feedback is thought to be nearly as great as those two (at least no one has yet credibly argued any such thing). Far more likely, based on multiple observational papers, is that the water vapor feedback is less positive than in the models. Reasons include the inability to model convective cloud formation and precipitation on sub-grid scales. Evidence includes declining UTrH since 1975 at about 200x, and the absence of a tropical troposphere hotspot. The mechanism is probably Lindzen’s adaptive iris. Cloud feedback is probably negative, not positive as in almost all CMIP5 GCMs. The combination halves sensitivity from 3 to about 1.5, right in the ballpark of many new sensitivity estimates since AR4. Adding a cloud superparameterization to a GCM increased clouds and precipitation, lowered UTrH, and halved sensitivity. Published in 2009.

The climate gang just does not want to fix the situation because it puts the lie to past proclamations and to AR4.

Sensitivity of 1.5 is sufficient to say CO2 does something, but not very much. Either no action, or only adaptive change, suffices. CAGW remains hogwash. And natural variability still predominates in the second half of the 20th century – the old attribution problem.

Matthew@11:21

I don’t think the first sentence was exclusive.

wrt. Figure 1:-

“. Right panel: same for an interpolated time series of a stalagmite from the Austrian Alps.”

The authors state that the Fourier transform defines the frequency components in amplitude and phase, so no tweaking is involved. I’ll take their word for that.

What I WOULD find interesting, then, is a sequential reconstruction plot. Start with a plot of the single largest component, or maybe start from the lowest frequency, then add one more component at a time. In any case, I do find their result interesting, although I don’t know how robust (mandatory adjectival comment) their projection of a 0.4 deg C per doubling climate sensitivity is; it is a concept for which I see no demonstrable evidence, either experimental or theoretical; in fact I find the notion laughable.

I don’t have much hesitation in accepting ML CO2 data, other than wondering if ML is not unlike a Coca-Cola bottling plant as regards its local atmosphere. But I can see no effectively monotonic increase in global Temperature, regardless of any time offset, plus or minus. Remember that over the ML record interval, there is no material difference between a linear relationship and a logarithmic one. The sunspot cycle data since the IGY in 1957-58 seem better linked to the Temperature (whatever that is) than does the CO2, and the two have been going counter to each other now for 17-18 years.

rgb is correct about artifacts. Anyone who has used Fourier Transforms to any extent will surely have been bitten by spurious peaks. Some additional tests with record lengths that are nowhere near multiples of any interesting periodicities are needed.

Rud Istvan:

At May 4, 2013 at 11:36 am you assert

Really? And your evidence for that is?

Richard

Judging from many of these comments, any analysis that isn’t perfect is greeted with instant criticism and dismissal. Yes, FFTs, like all analyses, presuppose knowledge of the data set that often doesn’t exist. And yes, even the authors “… caution for artefacts”, among other limitations of this approach. The FFT is a proven tool for teasing information from a data set. I don’t throw away results because they are imperfect, have limitations, or I don’t like them. If someone has a better approach, I look forward to reading your essay prominently bylined on WUWT. I would suggest that there is something of value here for those with an open mind and who are inclined to learn.

Table 1. Frequencies, periods, and the coefficients aj and bj of the reconstruction RM6 due to Eqs. (1), (3), and (4) (N = 254).

j   j/N (yr−1)   Period (yr)   aj         bj
0   0            –             0          0
1   0.00394      254           0.68598    −0.12989
2   0.00787      127           –          –
3   0.01181      85            0.19492    −0.14677
4   0.01575      64            0.17465    −0.22377
5   0.01968      51            0.14730    −0.10810
6   0.02362      42            −0.02510   −0.12095
7   0.02756      36            0.12691    0.01276
8   0.03150      32            –          –

That’s Table 1 from the original. They seem to have estimated 27 parameters: 9 triplets (period/frequency, sine and cosine coefficients). Columns are j, frequency, period, cosine parameter, sine parameter, for the equation:

RM6(t) = Sum over j of [ aj cos(2π j t / N) + bj sin(2π j t / N) ], j = 0, 1, 3, 4, 5, 6, 7

They claim only 7, but period 127, period 32, and period infinity (freq 0) seem to have coefficients estimated to be 0. All without estimated standard errors. As with a standard FFT, the frequencies are chosen based on the length of the data record. Perhaps that is why the authors think that the frequencies are not “estimated”.
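For what it's worth, the quoted equation and the coefficients in Table 1 can be evaluated directly. A sketch (using only the coefficients quoted above; rows with no quoted nonzero coefficients are omitted, and the time axis is simply t = 0..N−1):

```python
import numpy as np

N = 254  # record length in years, per Table 1
# (j, a_j, b_j) from Table 1; j = 0, 2, 8 carry no nonzero quoted coefficients
coeffs = [
    (1, 0.68598, -0.12989),
    (3, 0.19492, -0.14677),
    (4, 0.17465, -0.22377),
    (5, 0.14730, -0.10810),
    (6, -0.02510, -0.12095),
    (7, 0.12691, 0.01276),
]

def rm6(t):
    """RM6(t) = sum_j [ a_j cos(2*pi*j*t/N) + b_j sin(2*pi*j*t/N) ]"""
    t = np.asarray(t, dtype=float)
    return sum(a * np.cos(2 * np.pi * j * t / N)
               + b * np.sin(2 * np.pi * j * t / N)
               for j, a, b in coeffs)

curve = rm6(np.arange(N))
print(round(float(curve.max()), 2), round(float(curve.min()), 2))
```

Note that at t = 0 the sines vanish and RM6(0) is just the sum of the aj, which makes a quick sanity check on transcription of the table.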

rgbatduke:

“Until we can do better, fitting arbitrary bases to climate data is an exercise in futility, especially when one doesn’t even bother with the standard rules of predictive modeling theory, …”

It’s a sort of standard operating procedure. We have to do something while waiting for the teams of empiricists to record and verify all of the data of the next 3 decades.

oops. In “Figure 1 shows data earlier than 1780. Why the apparent discrepancy?” I meant Figure 2. Figure 1 in their paper shows data before 1775 for Paris and Prague, but not Hohenpeissenberg or Vienna. Perhaps Figure 2 of the post shows the mean of available data back as early as the earliest, while the data used in the modeling exclude the dates for which complete data are unavailable.

Rud Istvan:

“Negative sensitivity says that negative cloud feedback is greater than water vapor positive feedback. That is very unlikely. And no other feedback is thought to be nearly as great as those two ( at least no one has yet credibly argued any such thing).”

Likely or not, negative future CO2 sensitivity to a doubling is not unreasonable or ruled out by data or by what is sometimes referred to as “the physics”. Equilibrium is an abstract concept, so equilibrium climate sensitivity is suspect. At least as suspect is the idea that CO2 sensitivity is a constant instead of being dependent on the climate at the start of the doubling. If it is true that increased CO2 causes increased downwelling long-wave infrared radiation, then it is quite possible that increased CO2 will cause more rapid (earlier in the day) formation of clouds in the tropics and in the mid-latitudes, especially in hot weather. Combined with increased upward radiation by high-altitude CO2, the net effect could be cooling of the lower troposphere. It can’t be ruled out on present theory and data if you abandon equilibrium assumptions and examine what is known about actual energy flows.

I am not convinced about the value of this reconstruction even for the Northern Hemisphere, let alone global temperature.

However, you pays your money and you takes your choice.

“In a recent paper [1] we Fourier-analyzed central-european temperature records dating back to 1780.”

I haven’t read the paper in detail (and I haven’t studied the topic in years), but the granularity of the f-axis in a DFT is determined by the duration of the time signal. The idea that, in a 230 year sample set, you can see a 250 year cycle with any clarity is dubious. In this case, (frequency sample) K = 1 is the 230 year bin, K = 2 is the 115 year bin, and so on. A wide range of cycle durations will be accumulated in those bins.

Many people don’t seem to realize that there is a kind of corollary to the Nyquist theorem when doing DFTs: just as you need to sample at intervals of 1/(2F) in order to find frequency F components, you need to sample for a duration of 2Y in order to distinguish cycles of duration Y.

There is an obvious issue in that our most reliable satellite data is only about 30 years long. I don’t believe it’s possible to really see any long-duration cycles in that data.

“Contrary to expectations the Fourier spectra consist of spectral lines only”

The figures show “peaks” rather than “lines”. That statement is either wrong or it needs explanation.

They’ve done some kind of smoothing of the data using a higher-granularity frequency domain than can possibly be legitimate. Per my above, I don’t think there is any legitimate way to resolve data at K=1 (1/230 years) and K=2 (1/115 years) into a signal w/ a peak at a precision of 248 years.

Or I could be wrong; there may be some legitimate transform analysis/technique that I’m ignorant of.

Sorry, I goofed a closing italics tag. Everything after “they’ve done” was mine.

so, let’s try it again…

“Contrary to expectations the Fourier spectra consist of spectral lines only”

The figures show “peaks” rather than “lines”. That statement is either wrong or it needs explanation. They’ve done some kind of smoothing of the data using a higher-granularity frequency domain than can possibly be legitimate. Per my above, I don’t think there is any legitimate way to resolve data at K=1 (1/230 years) and K=2 (1/115 years) into a signal w/ a peak at a precision of 248 years.

Or I could be wrong; there may be some legitimate transform analysis/technique that I’m ignorant of.

Sure, this is just another climate-model curve fit to the data. But it does have the advantage of being ridiculously simple compared to most. If they are actually on to something, it should be comparatively easy to identify the causes of the 6 variables and go on to establish a relatively convincing argument. There are no guarantees when performing research, but if I were them I would keep pursuing this path.

So frequencies are extracted from the data, then plugged back in to re-create the original curves. This is just a routine mathematical trick — entirely unproven unless each extracted frequency is mapped to a physical cause, or if it builds a successful track record of prediction. I see Richard Telford has said as much above. Pretty, but a crowd-pleaser only.

I love this type of completely different analysis! It demonstrates (IMHO) that the power of the large, supercomputer-based models is virtually nil. And as for whether all parameters have a physical foundation: show me the parameters of the large GCM models. None of the parameters have any physical meaning, which is why they are presented as “ensembles”. WTF? Ensembles? Why? Because none of the individual models, none of the individual runs, is capable of catching the observed global temperature.

Models: GDI:GBO

Back to the original, hand-written, pristine observations, without the corrections and adjustments.

“The dominant period of ~250 years is presently at its maximum”

I wonder if either of the two sharp-shooters

rgbatduke

or

Matthew R Marler

in view of the sudden up/down discrepancy of 125 years

http://www.vukcevic.talktalk.net/2CETs.htm

half of the all-important ‘dominant period of ~250 years’, would care to make any additional comment on the effect on the FT output.

Thanks RGB

Highlighted periods of 80, 61, 47, and 34 years seem to be just harmonics of the “base period” of 248; 248 is rather close to 3 × 80, 4 × 61, 5 × 47, and 7 × 34.
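The near-multiple arithmetic is easy to check in plain Python; the period values below are the ones quoted in the comment:

```python
# For each highlighted period, find the nearest integer multiple of it to 248.
base = 248.0
nearest = {p: round(base / p) * p for p in (80, 61, 47, 34)}
print(nearest)   # every product lands within a few percent of 248
```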

And the period of 248 years is too long to be extracted reliably from the whole length of the analyzed data. Also, the stalagmite record does not seem to match the temperature record very well; some peaks seem to be close, but trying to reproduce the temperature record using frequencies which peak in the stalagmite record might be quite hard.

The fact that there are peaks in the spectrum, in my opinion, does not mean the temperature is solely based on cycles; it only proves the temperature record is not completely chaotic or random. There are processes which are cyclic (PDO, AMO, ENSO, …) but they do not have to be periodic with a fixed period. Just one such process is bound to wreak havoc and produce lots of false matches in any Fourier analysis.

Steve from Rockwood says: May 4, 2013 at 9:55 am

“I thought that if you were going to use the FFT the data should be periodic. If you graph the year 2000 onto the start of the record at 1750 you have a serious discontinuity.”

In fact, the DFT interprets the data as periodic, and reports accordingly. As you note, there is a discontinuity. It’s made worse by the use of zero padding, presumably to a 256-year period as the FFT requires. That means that, as the FFT sees it, every 256 years the series rises to a max of 1.5 (their Fig 4, end value), drops to zero for six years, then rises to their initial value of 0.5.

That is seen as real, and it’s a very large effect. That’s where their 248-year peak comes from. It’s the FT of that periodic pulse. It’s entirely spurious.
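The padding artifact described here is easy to reproduce with a toy series (illustrative values only, not the paper's record): a trending series padded with zeros acquires a large, entirely spurious component at the base period.

```python
import numpy as np

# A pure trend rising from 0.5 to 1.5 -- no cycles at all -- zero-padded to 256.
n, total = 230, 256
x = np.linspace(0.5, 1.5, n)
padded = np.concatenate([x, np.zeros(total - n)])

spectrum = np.abs(np.fft.rfft(padded))
k_peak = int(np.argmax(spectrum[1:])) + 1   # strongest non-mean component
print(k_peak, total / k_peak)               # a spurious "cycle" at the base period
```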

vukcevic:

“would care to make any additional comment,”

I am glad you asked. I have always wished that you would put more labeling and full descriptions of whatever it is that you have in your graphs. Almost without exception, and this pair is not an exception, I do not understand what you have plotted.

James Smyth, you and I are in agreement about the need for at least 2 full cycles for any period that must be estimated from the data. Only if the period is known exactly a priori can you try to estimate the phase and amplitude from data. Also see Nick Stokes on the effects of padding, which the authors had to do for an FFT algorithm. The paper (and this applies to the original) is too vague on details.

My friends, I fear I don’t understand Figure 2. You’ve taken the Fourier analysis of the European data, then used the six largest identified cycles to reconstruct the rough shape of the European data …

So what?

How does that explain or elucidate anything? Were you expecting that the reconstruction would have a different form? Do you think the Fourier reconstruction having that form actually means something? Your paper contains the following about Figure 2:

Say what? Let me count the problems with that statement.

1. The “remarkable” agreement you point out is totally expected and absolutely unremarkable. That’s what you get every time when you “reconstruct” some signal using just the larger longer-term Fourier cycles as you have done … the result looks like a smoothed version of the data. Which is no surprise, since what you’ve done is filtered out the high-frequency cycles, leaving a smoothed version of the data.

And you have compared it to … well, a smoothed version of the data, using a “boxcar” filter.

You seem impressed by the “remarkable agreement” … but since they are both just smoothed versions of the underlying data, the agreement is predictable and expected, and not “remarkable” in the slightest.

Let me suggest that this misunderstanding of yours reveals a staggering lack of comprehension of what you are doing. In the hackers’ world you’d be described as “script kiddies”. A “script kiddie” is an amateur hacker using methods (“scripts” designed to gain entry to a computer) by rote, without any understanding of what the methods are actually doing. You seem to be in that position vis-à-vis Fourier analysis.

I am speaking this bluntly for one reason only—you truly don’t seem to understand the magnitude of your misunderstanding …

2. The Fourier analysis does NOT “suggest the absence of any warming”. Instead, it just suggests the limitations of Fourier analysis. A couple of years after I was born, Claude Shannon pointed out the following:

This means that the longest cycle you can hope to resolve in a dataset N years long is maybe N / 2 years. Any cycle longer than that you’d be crazy to trust, and even that long can be sketchy.

3. The Fourier analysis does NOT “suggest the absence of any … other nonperiodic phenomena.” There’s lots of room in there for a host of other things to go on. For example, try adding an artificial trend to your data of say an additional 0.5°C per century, starting in 1850. Then redo your Fourier analysis, and REPORT BACK ON YOUR FINDINGS, because that’s how science works …

Or not, blow off all the serious comments and suggestions, and walk away … your choice. I’m easy either way.
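For what it's worth, the suggested trend experiment is a one-screen exercise with synthetic numbers. Nothing here is the actual record: a made-up 62-year cycle plus a 0.5 °C/century trend switched on partway through.

```python
import numpy as np

t = np.arange(230)
periodic = np.sin(2 * np.pi * t / 62.0)              # a pure 62-yr cycle
trended = periodic + 0.005 * np.maximum(t - 70, 0)   # + 0.5 C/century later on

peaks = {}
for name, series in (("periodic", periodic), ("trended", trended)):
    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    peaks[name] = int(np.argmax(spectrum))
print(peaks)   # the same ~62-yr line dominates both spectra
```

The trended spectrum still looks "line-dominated", so a spectrum of lines cannot, by itself, rule out a nonperiodic warming component.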

Finally, you claim to extract a 248-year cycle from data that starts in 1780, although the picture is a bit more complex than that. In your paper (citation [1]) you say:

The period of overlap between all of these records (the latest start date) is 1781, for Hohen-whateversburg and Kremsmunster. They all end in 2010, so rather than being 250 years long, your dataset covers 1781-2010, only 230 years.

So my question is … how did you guys get a 248-year cycle out of only 230 years of data?

Regards,

w.

PS—My summary of this analysis? My friends, I fear it is as useless as a trailer hitch on a bowling ball. It’s as bad as the previous exercise in meaningless curve fitting. The fact that the curves have been fit using Fourier analysis doesn’t change the problems with the analysis.

“In fact, the DFT interprets the data as periodic, and reports accordingly. As you note, there is a discontinuity.”

This is picking nits, but I think that’s a poor verbalization of the underlying mathematics as a linear transform from C(N) to C(N). The DFT does no interpreting, and there is no assumption of periodicity of the domain data in the definition itself. Of course, the DFT itself has the property that X(N + k) = X(k), but that is not based on an assumption of x(N + k) = x(k). It is based on properties of the range basis elements. So, you can say with some meaningfulness that “the DFT is periodic.”

Now, as to the question of zero-padding for the FFT … I don’t even see the use of the term FFT in the excerpt. Which begs the question …
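On the computing-time question this raises: at this sample size a brute-force O(N²) DFT is effectively instant on modern hardware, so nothing forces padding to an FFT-friendly length. A quick sketch with random data:

```python
import time
import numpy as np

N = 254
x = np.random.default_rng(0).standard_normal(N)

# Brute-force DFT: build the full N x N matrix of complex exponentials.
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
t0 = time.perf_counter()
X_direct = W @ x
elapsed = time.perf_counter() - t0

X_fft = np.fft.fft(x)                 # numpy's FFT for comparison
print(np.allclose(X_direct, X_fft), elapsed < 1.0)
```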

Does anyone have a handy reference to modern computing time required for a true DFT of (what I would bet is) not that big of a sample set? A quick Google search is not finding anything.

Matthew R Marler says: May 4, 2013 at 2:09 pm

…………..

Ah…, my trade mark…

Let’s have a go

Graph

http://www.vukcevic.talktalk.net/2CETs.htm

Top chart

-black line

Central European Temperature as shown in Fig 2 from the article (thread); it is slightly stretched horizontally for better resolution.

-green line

the CET (shown in absolute values); the scales are equalised to 0.5 C steps.

Bottom chart

Same annotation as in the above.

-Red up/down arrows

show the start and end of a ~125-year-long period during which, if the European temps are lifted by ~0.5 C, there is an almost perfect match to the CET for the same period.

This strikes me as an unnatural event, particularly at the start: to take place over just a few years and then last another ~125 or so years.

The two areas, CET and Central Germany (Europe), are at similar latitudes and about 500 miles apart.

If one assumes that the CET trend is more representative of the N.Hemisphere, then I would expect that the ‘dominant period of ~250 years’ to disappear.

Looking forward to your comment. Tnx.

“The ‘remarkable’ agreement you point out is totally expected and absolutely unremarkable. That’s what you get every time when you ‘reconstruct’ some signal using just the larger longer-term Fourier cycles as you have done … the result looks like a smoothed version of the data. Which is no surprise, since what you’ve done is filtered out the high-frequency cycles, leaving a smoothed version of the data.”

LOL. Again, I haven’t read the paper, and I’m way out of practice w/ this stuff, but this is really hilarious. It’s like, holy cow, the inverse of an invertible transform is the original data? Whoddathunkit???
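The "smoothed version" point can be demonstrated on any wiggly series at all. Here is a sketch on a seeded random walk, deliberately containing no true cycles:

```python
import numpy as np

# An arbitrary cycle-free "record": a seeded random walk.
rng = np.random.default_rng(1)
x = 0.1 * np.cumsum(rng.standard_normal(230))

# Keep only the mean and the 6 lowest harmonics, then invert.
X = np.fft.rfft(x)
X[7:] = 0
recon = np.fft.irfft(X, n=230)

# Compare against a plain 15-point boxcar smooth of the same data.
boxcar = np.convolve(x, np.ones(15) / 15, mode="same")
r = np.corrcoef(recon, boxcar)[0, 1]
print(round(r, 2))   # strong agreement, with no periodicity in the data at all
```

A six-harmonic "reconstruction" of pure noise closely matches a boxcar smooth of that noise, which is exactly why the agreement in Fig. 2 carries no evidential weight.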

James Smyth says: May 4, 2013 at 2:56 pm

“Now, as to the question of zero-padding for the FFT … I don’t even see the use of the term FFT in the excerpt.”

They don’t mention it, and in fact they do mention N=254. But why on earth would they be zero-padding to N=254?

The more I read the paper, the more bizarre it gets. They have no idea what they are doing, and I can only assume Zorita is clueless too. They have a DFT which represents the data as harmonics of the base frequency, and they actually show that breakdown in Table 1 (relative to N=254) with the relevant maths in Eq 4. But then they show a continuous (smoothed) version in Fig 3, marking the harmonics with periods that are different (but close). Then they try to tell us that these frequencies have some significance.

But of course they don’t. They are simply the harmonics of the base frequencies, which is just the period they have data for. The peak at 248 (or 254) years is just determined by the integral of the data multiplied by the base sinusoid. Of course there will be a peak there.
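The "integral of the data multiplied by the base sinusoid" remark can be made concrete (toy drifting series, not the paper's data): bin 1 of the DFT is literally that sum, and it is generically large for any record with net drift over its length.

```python
import numpy as np

N = 254
t = np.arange(N)
# A drifting series: linear rise plus some noise. Illustrative values only.
x = np.linspace(-0.3, 1.2, N) + 0.1 * np.random.default_rng(5).standard_normal(N)

# "The integral of the data multiplied by the base sinusoid":
X1_direct = np.sum(x * np.exp(-2j * np.pi * t / N))
X1_fft = np.fft.fft(x)[1]
print(np.allclose(X1_direct, X1_fft), abs(X1_fft) > 10)
```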

I agree with most the comments made here.

Transformation from the time domain to the frequency domain means very little UNLESS there are multiple records that allow averaging of the power spectrum. All that has been done is to use a (crude) low-pass filter. So what? Obviously you can reconstruct the signal from its Fourier components.

I’ve done a lot of work on chaotic dynamics and the period-doubling route to chaos. Try as I might, it just doesn’t look like period doubling to me. The subharmonics of the doubled period should have amplitudes that are smaller than the amplitude of the main peak. The peak with a period of 341 years is larger than the putative fundamental with a 234-year period. See for example Fig. 2, P. S. Linsay, Phys. Rev. Letters, 47, 1349 (1981).

Your true frequency resolution may also cause problems separating peaks this close together; they are only two to three bins apart in frequency space. Without knowing your windowing function it’s hard to know what that resolution is.

The fact of the matter is that period doubling is a very fragile route to chaos and usually only shows up in dynamical systems with only a small number of degrees of freedom. As much as the AGW crowd would like us to believe that’s true with CO2 driving everything, the reality points to a very complicated system with many degrees of freedom. That is what one should expect from a system with multiple coupled fluids and components that can change phase.

vukcevic:

“-Red up/down arrows”

You get a slightly greater fit if you move the left-end red line leftward a bit, and the right-end red line rightward a bit. That suggests that during an epoch of about 160 years central Europe (is Paris “central”?) cooled more than central England. It’s hard to get away from epochs that are approximately some multiple or fraction of some period in a Fourier transform.

“They are simply the harmonics of the base frequencies, which is just the period they have data for. The peak at 248 (or 254) years is just determined by the integral of the data multiplied by the base sinusoid”

I would need to see your math. These words don’t translate into anything meaningful for me. It sounds like you are implying that zero padding introduces peak frequencies, which I don’t think is true. Multiplication by a window is convolution w/ a sinc; it will spread existing frequency peaks out as convolved w/ a sinc function. Is it possible to get the sinc’d components to add up in such a way that it introduces new peaks? I suppose.

Thank goodness for RGB!!

I read this article and alarm bells went off in my head everywhere. Spectral analysis is fraught with artifacts. It is also very important what type of “window” you apply to the data prior to analysis.

I would treat all of these “conclusions” with EXTREME caution.

Thank you RGB for pointing this out.

For those who want to know more

http://en.wikipedia.org/wiki/Window_function

For those whose brains are about to explode … ;-)

I highly recommend the following (free) book:

The thing about this book is that it is written from the standpoint of one who might actually want to do some digital signal processing. There are a few serious traps the naive can fall into, and this is the best book I’ve seen for pointing them out.

I think the most succinct definition of the problem is as follows:

rgbatduke you are indeed a master of understatement.

The symmetric “U”-shaped curve of figure 2, with one max at either end of the record, is highly suspicious. I realize you are using central European records and not NH records, but what happened to the LIA, which was certainly dominant in the first part of the record? In 1780, heavy cannon were hauled on the ice from Jersey City to New York and people could walk from New York to Staten Island on 8-foot-thick ice! One third of all Finns died, grape vines died in continental Europe…

http://query.nytimes.com/mem/archive-free/pdf?res=9C06EED81E31E233A25757C2A9649D946096D6CF

http://green.blogs.nytimes.com/2012/01/31/in-the-little-ice-age-lessons-for-today/

The many criticisms above of a DFT of the record, and the tautological result you obtained, notwithstanding: shouldn’t you, at your pay-scale (shame on you all), have made some attempt to identify what the individual cycles were caused by in reality? Was it six cycles? I can’t be bothered to go back and check. There is a virtually infinite number of possible sine-wave bundles that could give you essentially as good a fit (am I wrong here? I’m harkening back to mid-20thC, pre-computer-days summing of curves; such a work as yours might have been done in a grade 12 high school class). And you find in all this a corroboration of a 0.4 C climate sensitivity of CO2? This would require an FT along an up-sloping trend line. We’ve been jumping all over failed climate models, but they are far superior to yours for their attempt to model actual effects.

The authors appear to have taken the signal, transformed it, removed some high frequency components, back-transformed, and made conclusions based on how well the result matches the original. I do not see how that would be valid.

I would have thought that before applying the FT, the data should be de-trended and windowed. No mention of any windowing in the description.

The Fourier transform assumes the signal is periodic; that is, it recurs infinitely. So without de-trending and windowing, an implicit periodic saw-tooth signal covering the data, with its harmonics, is added to the output. Windowing limits the frequency resolution, but greatly reduces the saw-tooth artifacts.

I do not understand how the FT can be used to discover what the aperiodic components of a signal are. Rather it is used to decompose a signal into sine waves, assuming it is periodic.
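The detrend-and-window recipe this commenter asks about is standard practice. A minimal sketch (a made-up series: a linear trend hiding a 23-year cycle, nothing from the actual record) shows why it matters:

```python
import numpy as np

t = np.arange(230)
x = 0.008 * t + 0.3 * np.sin(2 * np.pi * t / 23.0)   # trend + 23-yr cycle (bin 10)

def strongest_bin(series):
    spectrum = np.abs(np.fft.rfft(series))
    return int(np.argmax(spectrum[1:])) + 1          # ignore the mean bin

raw_peak = strongest_bin(x)                          # trend leakage dominates

detrended = x - np.polyval(np.polyfit(t, x, 1), t)   # remove the linear trend
windowed = detrended * np.hanning(len(t))            # Hann window tapers the ends
clean_peak = strongest_bin(windowed)                 # the real 23-yr line emerges
print(raw_peak, clean_peak)
```

Without detrending, the spectrum's biggest "cycle" is just the trend; after detrending and windowing, the genuine 23-year line dominates.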

Matthew R Marler says:

May 4, 2013 at 5:00 pm

“You get a slightly greater fit if you move the left-end red line leftward a bit, and the right-end red line rightward a bit. That suggests that during an epoch of about 160 years central Europe (is Paris “central”?) cooled more than central England. It’s hard to get away from epochs that are approximately some multiple or fraction of some period in a Fourier transform.”

http://www.vukcevic.talktalk.net/2CETs.htm

Thanks. I agree. However, I am not convinced (the London–Paris distance is only 200 miles) that such a sudden change could happen within a few short years and then endure for 160 years at precisely the same 0.5 C difference, while otherwise the temperature changes agree in every minute detail. I suspect that the final conversion from the old European temperature scales may be the critical factor here.

The Réaumur scale saw widespread use in Europe, particularly in France and Germany http://en.wikipedia.org/wiki/Réaumur_scale

James Smyth says: May 4, 2013 at 5:43 pm

“I would need to see your math. These words don’t translate into anything meaningful for me. It sounds like you are implying that zero padding introduces peak frequencies; which I don’t think is true. Multiplication by a window is convolution w/ a sinc. It will spread existing frequency peaks out as convolved w/ a sinc function.”

Nothing complicated. Just Fourier series math as in their Eq 4. I’m not implying anything about zero padding; in fact I think my earlier statement about it was exaggerated – it probably isn’t the major influence on the 248 yr peak.

Their basic framework is just a DFT – about 254 data points and 254 frequencies. Invert the DFT restricting to the first 6 harmonics and you get the best available LS fit to the data of those functions. That’s just Fourier math.
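The "that's just Fourier math" claim checks out numerically. A sketch with random data and H = 6 harmonics (purely illustrative): truncating the DFT and inverting gives exactly the least-squares fit of those sinusoids.

```python
import numpy as np

rng = np.random.default_rng(4)
N, H = 254, 6
x = rng.standard_normal(N)
t = np.arange(N)

# Route 1: truncate the DFT to the mean plus the first H harmonics, invert.
X = np.fft.rfft(x)
X[H + 1:] = 0
recon = np.fft.irfft(X, n=N)

# Route 2: ordinary least squares on a constant plus H cos/sin pairs.
cols = [np.ones(N)]
for k in range(1, H + 1):
    cols += [np.cos(2 * np.pi * k * t / N), np.sin(2 * np.pi * k * t / N)]
A = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(A, x, rcond=None)
fit = A @ beta

print(np.allclose(recon, fit))   # identical: DFT truncation *is* the LS fit
```

The two agree because discrete sinusoids at exact harmonics are orthogonal over N samples, so the Fourier partial sum and the least-squares projection span the same subspace.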

They say

“Contrary to expectations the Fourier spectra consist of spectral lines only, indicating that the climate is dominated by periodic processes ( Fig. 1 left ).”

but that just shows ignorance of the process. A DFT has to yield spectral lines; it is discrete. And a simple DFT would just give lines at the harmonics. It’s not clear what they have done to smooth the spectrum. It’s possible that they have done a huge amount of zero padding, in which case, as you say, they’ll get the harmonic lines convolved with a sinc function. But they also mention a Monte Carlo process that adds noise. Whatever, they have followed the basic process of DFT, truncate, invert, which tells nothing about special periodic behaviour here. The smoothing of the spectrum adds no information.

The wonderful thing about WUWT is that even work which seems to support CAGW skepticism, and thus soothe our cognitive dissonance and reinforce our confirmation bias, is analysed with as much rigour as that which claims the opposite. That is science!

Not being a mathematician I couldn’t possibly comment on the ins and outs here. But as a social scientist my eyebrows did go up a bit when you reassembled the extracted frequency peaks and seemed delighted that they bore a striking resemblance to the data from which they were extracted.

I do, however, appreciate what you’re trying to do. So much of the science in this field seems dependent upon linear regression which, in a system which is clearly cyclical, seems absurd. There are obvious periodicities outside the basic diurnal and annual cycles — which is obviously why many of them are dubbed “oscillations” — despite their frequencies being notoriously unreliable. I suppose that the search for relatively exact periodicities implies you’re looking for extra-terrestrial drivers for climate.

This study, as it stands, is interesting but not particularly persuasive. My immediate reaction was to question why you didn’t use the long-span proxies to derive your frequency peaks and then reassemble them to see how they compare to the thermometer record?

richard telford says:

May 4, 2013 at 8:34 am

Rubbish

========

Whether the paper is rubbish or not cannot be determined at present. You may believe that to be the case, but that doesn’t make it so.

The only valid test for any scientific theory or hypothesis – in this case that climate can be predicted from its spectral components – is in its ability to predict the unknown.

If we see a drop in temperatures as predicted by the model for the near future, then this is reasonably strong validation of the model. If the temperatures rise as predicted by the CO2 models, then that is reasonably strong validation for the CO2 models.

As I noted in the previous paper, curve fitting in itself does not invalidate the ability of a model to predict the unknown with skill. Mathematically, the solution of a neural network by computers (artificial intelligence) is indistinguishable from curve fitting. It is highly likely that human intelligence works in a similar fashion.

The problem is that many curves fit the data, but few if any have predictive skill. The challenge for any intelligent machine or organism is to correctly identify the few curves that have predictive ability and reject those curves that do not.

Two basic assumptions are presented in the technique. First, that the frequencies discovered through spectral analysis represent a physical reality of some sort. That climate is cyclical on longer time scales, and these cycles repeat in a regular fashion. Second, that the technique is correctly identifying these fundamental frequencies.

If there is an error in technique, then this can be discovered though analysis of the mathematics. Were the figures calculated correctly under the rules of mathematics? If not, then we cannot place much trust in the result.

If it has been established that the math is correct (a big if), then we are left with the first assumption, that climate is cyclical on longer time periods. This assumption is harder to argue against, because we see clear evidence of cyclical behavior on the longer timescales in climate.

One obvious test, as Willis pointed out in the previous paper, is to repeat the exercise with part of the temperature record hidden from the model, to see if the model has any skill at predicting the hidden portion of the data. If not, this would be a strong indication that the technique has no value.
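That held-out-data test is easy to sketch on synthetic series (a pure 45-year sinusoid versus a seeded random walk; neither is the actual record, and `fourier_predict` is my own hypothetical helper):

```python
import numpy as np

def fourier_predict(train, horizon, n_comp=3):
    """Fit the n_comp strongest non-mean DFT components, extrapolate forward."""
    N = len(train)
    X = np.fft.rfft(train)
    strongest = np.argsort(np.abs(X[1:]))[::-1][:n_comp] + 1
    t = np.arange(N + horizon)
    pred = np.full(N + horizon, train.mean())
    for k in strongest:
        pred += (2.0 / N) * (X[k].real * np.cos(2 * np.pi * k * t / N)
                             - X[k].imag * np.sin(2 * np.pi * k * t / N))
    return pred[N:]

t = np.arange(230)
periodic = np.sin(2 * np.pi * t / 45.0)                  # truly periodic
walk = 0.1 * np.cumsum(np.random.default_rng(2).standard_normal(230))

# Train on the first 180 "years", score on the hidden last 50.
err_p = np.sqrt(np.mean((fourier_predict(periodic[:180], 50) - periodic[180:]) ** 2))
err_w = np.sqrt(np.mean((fourier_predict(walk[:180], 50) - walk[180:]) ** 2))
print(err_p, err_w)   # the cycle is predicted almost exactly; the walk is not
```

A genuinely periodic series passes this test; an aperiodic one fails it, which is precisely what would distinguish the authors' hypothesis from curve fitting.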

One of my favorite examples of the value of curve fitting is the game of chess. There are 64 squares on the board. Each square has a value: 0 for unoccupied, -1 through -6 for black, +1 through +6 for white. 13 possible values: 4 bits, with space left over for housekeeping, such as whose move it is next. This gives us an array of 64 values, 4 bits each: 256 bits, without any attempt to shrink the problem size.

This tells us that without any effort we can represent any position on the chess board as a number in the range of 0 thru 2^256. A big number, but we are quickly approaching machines with a word size of 256 bits, so what is science fiction today will likely not be in the future. Let this big number be your X axis, where each discrete value represents a chess position. Now, many of these will never be reached in any game, but it tells us all the possibilities.

Now let the Y axis be the value of the game position to each player. 0 represents a game where no player has any advantage. A positive value, white has the advantage. The larger the value, the greater the advantage. The opposite is true for black. Negative values, black has the advantage, and the more negative the value, the greater the advantage to black.

Now play millions and millions of games of chess and use the winning histories of each game to score the value of each board position reached. If the game ends in a draw, nothing changes. If black wins, subtract 1 from every Y value for every board position reached during the game. If white wins, add 1 to every board position reached during the game. Over millions of games this gives you the Y values, for each of the possible X values that are reached. Normalize these Y values, based on the number of times each position was reached. These can then be graphed as a series of X and Y values.

Now plot a curve through these X and Y values. If you have the curve that correctly describes the game of chess, then for those values of X that were never reached in previous games, the curve itself will tell you the Y value of the board position. Thus, when you want to calculate your next move, examine all legal moves (linear time required) and find the corresponding X values. Compare the Y values for these X values, and select the move with the greatest positive or negative value, depending on whether you are white or black.

By this technique you can then play chess without any need to resort to the time consuming forward search approach (exponential time required) more typically used in chess. With the correct curve to describe the game of chess, you can very quickly calculate the best Y value for any possible X value.

Something very similar to this must take place in intelligent organisms, because no organism can afford the time required to exhaustively search all possibilities in real time, when a decision is required. Instead they must process the possibilities in an offline mode (sleep) where they consider all possibilities in an exhaustive fashion and create a shorthand technique (curve fit) to calculate the optimal response when confronted with similar problems in real time.

The problem then becomes one of selecting the correct curve to describe the game of chess, from all possible curves that fit the known X,Y points on the graph. What you want is a curve that predicts a very close to correct Y value for any X value, regardless of whether this X value has ever been seen before in all the millions and millions of games played. This is a surprisingly difficult problem.
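The 256-bit packing described above can be sketched directly. The piece codes are my own hypothetical convention (1=pawn … 6=king, negative for black, 0 for empty), not anything from the comment:

```python
def encode(board):
    """board: list of 64 ints in -6..6; pack 4 bits per square into one integer."""
    x = 0
    for square in board:
        x = (x << 4) | (square + 6)   # shift to 0..12 so it fits in 4 bits
    return x

def decode(x):
    """Recover the 64 square values from the packed integer."""
    return [((x >> (4 * (63 - i))) & 0xF) - 6 for i in range(64)]

# The standard starting position under this hypothetical coding.
start = [0] * 64
start[0:8]   = [-4, -2, -3, -5, -6, -3, -2, -4]   # black back rank
start[8:16]  = [-1] * 8                            # black pawns
start[48:56] = [1] * 8                             # white pawns
start[56:64] = [4, 2, 3, 5, 6, 3, 2, 4]            # white back rank

key = encode(start)
print(key.bit_length() <= 256, decode(key) == start)
```

Any position becomes a single integer key under 2^256, exactly the X-axis value the comment describes.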

Consider repeating your analysis for the last 215 years rather than 230 years. 215 isn’t near a multiple of 80, 61, or 47 years. Do these peaks disappear? If so, they are an artifact of the length of the data, not the data itself. (If Nick Stokes is right about zero padding to a multiple of 256, this still may not expose your problem.)

richard telford says:

May 4, 2013 at 8:34 am

“For clarity we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free ( fitted ) parameters.”

Rubbish …

—————————————————

Afraid I have to agree with Richard here, much as it pains me. The DFT is a curve fit. Each bin is nothing more than a measure of the correlation of the input time series with a sinusoid at that frequency. If one naively constructs a truncated sinusoidal expansion from the output of the DFT, it can match the function over the data interval with arbitrary precision, depending on the number of components in the truncated series, but it does not have predictive power in general.

Successful DFT analysis generally requires some degree of smoothing, along with a seasoned analyst who can recognize the morphology arising from widely encountered physical processes, and thereby construct a model which generally produces a good approximation to commonly encountered system behavior.

@Nick Stokes

Their basic framework is just a DFT – about 254 data points and 254 frequencies. Invert the DFT restricting to the first 6 harmonics and you get the best available LS fit to the data of those functions. That’s just Fourier math.

I don’t want to sound argumentative, but have you heard of the Nyquist theorem? If there are 254 data points, the maximum number of observable frequencies is 254/2 = 127.

RCS,

If you want to argue so, then there are 127 positive frequencies and 127 negative.

But you have to have the same number of frequencies, otherwise the DFT is not invertible (http://en.wikipedia.org/wiki/Discrete_Fourier_transform#Completeness).
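Both halves of this exchange can be verified in numpy with random real data: a real series of N = 254 points yields 254 complex DFT outputs, but conjugate symmetry leaves only 127 distinct positive frequencies (plus the mean and the Nyquist bin).

```python
import numpy as np

N = 254
x = np.random.default_rng(3).standard_normal(N)

X = np.fft.fft(x)            # all N complex outputs
half = np.fft.rfft(x)        # the non-redundant half: N/2 + 1 bins

# The negative-frequency bins are forced: X[N-k] = conj(X[k]).
sym = np.allclose(X[1:N // 2], np.conj(X[-1:N // 2:-1]))
print(len(X), len(half), sym)   # 254, 128, True
```

So both commenters are right: only 127 frequencies carry independent information, yet the full 254-point transform is what makes the DFT invertible.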

ferd berple says:

May 5, 2013 at 8:15 am

richard telford says:

May 4, 2013 at 8:34 am

Rubbish

========

“Whether the paper is rubbish or not cannot be determined at present. You may believe that to be the case, but that doesn’t make it so. The only valid test for any scientific theory or hypothesis – in this case that climate can be predicted from its spectral components – is in its ability to predict the unknown. If we see a drop in temperatures as predicted by the model for the near future, then this is reasonably strong validation of the model. If the temperatures rise as predicted by the CO2 models, then that is reasonably strong validation for the CO2 models.”

Fred’s moderate tone seems right here. Does the study make a prediction? If so, how will it turn out? Much of the strong criticism here (with more than a little whiff of territoriality) seems to imply the conclusion “you can’t use FFT in climate series”. Why not? Is the idea of cyclical phenomena in climate such an intolerable blasphemy? Is FFT really such a bad method to look for periodicities? More testing is needed on different climate data sets, that’s all.

Bart says:

May 5, 2013 at 12:31 pm

Afraid I have to agree with Richard here, much as it pains me. The DFT is a curve fit.

==========

Folks are making a mistake in assuming that because something is a curve fit that it is automatically wrong.

In general a curve fit will not provide predictive value, because there are a whole lot of meaningless curves that will fit the same data. However, there are also some curves that will fit the data and provide predictive value, if the data itself is predictable. So, if you simply select one of many possible curves at random, you are likely to get a curve that has no value.

In this case we do not know for sure if climate is predictable. Lots of Climate Scientists point to the Law of Large Numbers and the Central Limit Theorem and say that this makes Climate predictable, but that doesn’t make it so. Just ask the folks that play the stock market.

Weather is known to be Chaotic. Yet Climate Scientists assume that the average of Chaos is no longer Chaotic. That seems on the face of it to be highly unlikely, because it would imply that all sorts of Chaotic processes in fields outside of climate would suddenly become predictable through simple averaging. Is the Dow Jones Average predictable?

Now, if Climate is predictable (and that is a BIG if), have the authors produced a curve fit that has predictive power? Likely not, but none of us here can say for sure. We may well believe they haven’t, because the odds are against them, but believing doesn’t make it so.

That is seen as real, and it’s a very large effect. That’s where their 248-year peak comes from. It’s the FT of that periodic pulse. It’s entirely spurious.

Or, more precisely, it may be entirely spurious, as they even note (inadequately) in their discussion of artifacts. It is also fair to say that some unknown fraction of it is spurious (up to the whole thing). The 2000 year data, however, shows some transform strength across the same frequency/period. That too could be spurious, a Gibbs phenomenon on a power-of-two integer divisor of the 2000 year period. To show that it is not, they should take some prime multiple of 250 < 2000 and redo the FT and show that the peaks survive. Or, better than prime, some irrational multiple of 250 not too near a power of two. Or do an e.g. 7*250 = 1750 year sliding window across the 2000 year data.

But yes, absolutely, these errors reflect a certain rush to publish, indicative of confirmation bias, which is just as much a sin when it is committed by antiwarmists as when it is committed by warmists.

Ah, for the days when people did science in something like a disinterested way, openly acknowledging the limitations of knowledge and the finite range of conclusions that can be drawn from any argument. Ah, for the days eventually to come when crowdsourcing science replaces “just” peer review, and papers like this don’t just get posted to a blog, they get redone and corrected in real time until all of the issues raised are addressed.

I’m willing to be convinced – or anti-convinced – of their claim of a 250 year signal, sure, but their computation so far is not convincing. Or anti-convinced – it may NOT be the case that all of their ~250 year peak is artifact. It’s easy enough to find out, given the data in hand, for at least the 2000 year stalagmite dataset. And there are many more datasets.

rgb

A few years back, I thought looking at global temperature, using Fourier Convolution Filtering (FCF) might be interesting. That is, convert the signal into the frequency domain, remove selected freq. & see if anything interesting might be seen. I liked the FCF, in that end pts. were not cut off, as happens using a moving average.

To make matters more interesting, I took the CEL, De Bilt, Stockholm-GML & Berlin Tempelhof data sets, computed their respective anomalies, & averaged them. Nothing elegant, as some would like, but it gives the basic information.

In addition, the FCF was compared to a MOV as well as to the forward & reverse recursive filter (“filtfilt” in MATLAB terms). That is, the filter is run forward in time, but then is run backwards, last to first. The purpose is to reduce the phase lag due to the inherent characteristics of the filter. End points have transient effects, & care is required in re-initializing the backward computations. But most of the signal is present. In this case I used a 2-pole Chebyshev.
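The forward-backward trick is easy to sketch. This toy uses a one-pole smoother in plain numpy rather than MATLAB’s filtfilt with a 2-pole Chebyshev (the function names and parameters here are illustrative, not the commenter’s code), but it shows how the second, reversed pass cancels the phase lag of the first:

```python
import numpy as np

def one_pole(x, alpha):
    """Simple causal one-pole low-pass: y[n] = alpha*x[n] + (1-alpha)*y[n-1]."""
    y = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc = alpha * v + (1 - alpha) * acc
        y[i] = acc
    return y

def forward_backward(x, alpha):
    """Run the filter forward, then run it again over the reversed output.
    The phase lag of the two passes cancels, giving zero phase overall."""
    return one_pole(one_pole(x, alpha)[::-1], alpha)[::-1]

t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 0.5 * t)   # slow sine, well inside the passband

fwd = one_pole(x, 0.05)           # forward only: output lags the input
both = forward_backward(x, 0.05)  # forward-backward: lag cancels

# the zero-phase output peaks where the input peaks; the one-pass output lags
lag_fwd = np.argmax(fwd[:600]) - np.argmax(x[:600])
lag_both = np.argmax(both[:600]) - np.argmax(x[:600])
assert lag_fwd > 5 and abs(lag_both) < abs(lag_fwd)
```

The end-point transients the commenter mentions show up here too: each pass starts from an arbitrary initial state, which is why care in re-initializing the backward pass matters.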

The FCF requires a little work in order to reduce discontinuity at the end pts., hence the signal was “detrended” at the end points. This was inserted into a zero-series window whose length was based on the powers of 2; hence the signal was in the middle, with ends padded with zeros, & no data lost. (Getting good test data can be very expensive.) This results in reduced “leakage”, characteristic of this method. The FFT transformed the signal into the freq. domain, & selected freq. were masked off. The inverse FFT was then performed, & the result “trended” back for comparison to the original data set.
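Putting the described steps together (detrend at the end points, center in a zero-padded power-of-2 window, FFT, mask, inverse FFT, retrend), a minimal numpy sketch of such a pipeline might look like this; fcf_lowpass and its keep_frac parameter are my own illustrative names, not the commenter’s code:

```python
import numpy as np

def fcf_lowpass(x, keep_frac):
    """Sketch of the described FCF pipeline: detrend at the end points,
    center in a zero-padded power-of-2 window, FFT, mask off the high
    frequencies, inverse FFT, and add the trend back (no end points lost)."""
    n = len(x)
    trend = np.linspace(x[0], x[-1], n)    # line through the end points
    y = x - trend                          # both ends now sit near zero
    m = 1 << int(np.ceil(np.log2(2 * n)))  # power-of-2 padded length
    left = (m - n) // 2
    padded = np.zeros(m)
    padded[left:left + n] = y              # signal centered, zeros both sides
    Y = np.fft.rfft(padded)
    Y[int(keep_frac * len(Y)):] = 0.0      # mask: keep only the lowest freqs
    return np.fft.irfft(Y, m)[left:left + n] + trend

# toy check: a slow sine plus fast jitter comes back close to the slow sine
t = np.linspace(0, 1, 512)
slow = np.sin(2 * np.pi * 2 * t)
x = slow + 0.5 * np.sin(2 * np.pi * 100 * t)
out = fcf_lowpass(x, 0.12)

assert np.std(out - slow) < 0.5 * np.std(x - slow)  # jitter strongly reduced
assert np.std(x - out) > 0.2                        # something was filtered
assert np.allclose(fcf_lowpass(x, 1.0), x)          # mask nothing -> unchanged
```

The last assertion is the same cross-check the commenter describes: running the pipeline with no frequencies masked should hand back the original data.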

These procedures are noted in http://www.dspguide.com as mentioned in an above comment.

As a cross check, a run was made, where no freq. were “masked” to compare how the procedure affected the original data.

The following 2 figures show the results, using a 10 & 20 year filter:

http://dc497.4shared.com/download/Bp5DxL3b/Ave4_10yr_3fil.gif?tsid=20130506-023010-c8122ef

http://dc497.4shared.com/download/l2jEHPxe/Ave4_20yr_3fil.gif?tsid=20130506-023324-5fa34b39

A second data set was used, just with the CEL data set, as shown below:

http://dc456.4shared.com/download/AgE-cCa2/Ave1_2010_FF_25yr.jpg?tsid=20130506-024001-cd51647f

A third run was made, using CRU data, comparing the FCF & the EMD filtering.

http://dc541.4shared.com/download/2foIw4k7/CRU-Fig-6a.gif?tsid=20130506-024316-2b67baa7

The EMD (Empirical Mode Decomposition) method handles non-stationary processes better than the FCF method.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1986583/

In retrospect the FCF method held up quite well, & highlighted a number of the cyclic, or almost periodic cycles shown in the data. While the FCF is a very powerful tool, it is but one tool, along with others that present different views of features in data analysis, in hopes of providing insight into what the data is telling us.

And that’s where tamino & yours truly had a difference of opinion a few years back over which way the CEL data was going.

It should also be noted that there is a 2D FCF, used in image processing, “The Handbook of Astronomical Image Processing”.

“become predictable through simple averaging. Is the Dow Jones Average predictable?”

Of course. To a point. In fact, it very reliably appreciates at some multiple of the rate of inflation plus a smidgen. Take a look here:

http://stockcharts.com/freecharts/historical/djia1900.html

Oh, you mean is the chaotic, random noise – are the black swan events – predictable? Of course not. But yes, the DJA does, in fact, become remarkably reliably predictable. Note well the semilog scale – the actual average experiences slightly better than exponential growth. Note also on the semilog scale the difference between the Great Depression and the crash a few years ago. The crash a few years ago was basically a 3 decibel event (decibels being appropriate for log scale curves). The Great Depression was more like 15 decibels – and STILL made little real difference in the fact that the overall curve appears to be decently fittable with an exponential of an exponential (or an exponential fit on a semilog scale).

But this fit isn’t MEANINGFUL, and while nearly the entire world bets on it continuing, with some justification, it is the VARIATIONS in this behavior that are interesting. Also – not unlike the climate data – this century-plus curve can equally well be interpreted in terms of “eras” with quantitatively distinct behavior: pre-Depression (note the sharp upturn right before the crash, where market margin rules were generating the illusion of wealth unbacked by the real thing!), post-Depression pre-1960s, a curiously flat stretch across the late 60s into the 80s (the era of runaway inflation, Viet Nam, the cold war), a “hockey stick” inflation from the Reagan years through Clinton, the chilling effect of the triple whammy of dotcom collapse (IT companies pretending they are banks but without any regulation), the perpetual, expensive middle east wars sucking all of the free money out of the Universe, and the savings and loan collapse, and the most recent correction due to the cumulative doldrums from all of the above.

A clever eye might see a pattern of 20-30 year prosperous growth followed by 20-30 year correction and little to no growth, one strangely synchronized to at least some of the climate variations! Or to sunspots. Or to the length of women’s dresses. Or to any variable you like that monotonically increases in time.

So sure, I can predict the DJA – and have been, for years. Just not the DJA today, or this month, or this year. But predict it within a few dB over twenty or thirty years? A piece of cake! Even Taleb doesn’t properly note this in his book – it would be very interesting indeed to plot the distribution of black swan events (or the spectrum of market fluctuations in general) to see what kind of law they describe. I’d bet on a power law of some sort, but the fact that only one 16 dB correction is present over more than a century of data (with a big gap to the next largest correction, a jump from 16 dB down to, what, 3 dB excursions pretty consistently) makes it difficult to even assert this.

This curve does contain important lessons for the antiwarmist and warmist alike. First of all, the global temperature curve has the same general shape as this (but plotting only the anomaly, and on a normal scale, not semilog). This is what is so astounding – the climate is enormously stable. Forget 3 dB corrections on the kelvin scale. One cannot even use decibels to describe the fluctuations visible in the anomaly. It is very, very instructive to plot global temperatures not as anomalies but on the actual Kelvin scale, TO scale. Sadly, woodfortrees doesn’t appear to let you control the axes – it has one of the ways to lie with statistics essentially built into its display (I mean this literally, a method explicitly described in the book “How to Lie with Statistics”). Honestly plotted, with error bars, it would look like a more or less flat line where the entire width of the line would be error bar. After all, the entire variation in degrees K is 3 tenths of one percent from 1880 to the present.

Not so with the DJA. Not even after renormalizing for inflation. The real wealth of the world has exponentially grown with population and technology, and no amount of market shenanigans or political incompetence can completely mask that overwhelming effect.

rgb

rgbatduke says:

May 6, 2013 at 8:37 am

become predictable through simple averaging. Is the Dow Jones Average predictable?

Of course. To a point. In fact, it very reliably appreciates at some multiple of the rate of inflation plus a smidgen.

Actually the development of “technical analysis” of stock charts and of commodity price charts over the past 30 yrs or so has made them more predictable (with the caveat that economic bombs disrupt this stuff). Technical analysis, with its moving-average curves being crossed by the daily price curves as an indication to buy or sell, is not scientific but is a kind of self-fulfilling exercise. Books on technical analysis “educate” investors in the technique, and once a large enough segment of investors (or, more effectively, their brokers) are employing this, the market responds “predictably”. Bullion dealers have trained the market to behave according to these techniques. Having said all this, rgbatduke is correct about the long term – it is a trace of technical and economic progress. Put your retirement money into a stock index fund for long enough and you have a high probability of at least preserving your buying power. Remarkably, left to their own devices, individuals tend to buy high and sell low (withdrawing from a disappointing turn).

“And that’s where tamino & yours truly had a difference of opinion a few years back over which way the CEL data was going.”

Well, if you’re going to do numerology, at least one should do it competently, and it appears that you have done so. It is interesting to note that your power spectrum looks very nearly Lorentzian, although one probably cannot resolve long-period features, or linear from slow exponential, on this sort of interval. There is also clearly nothing (resolvable) that is special about 250 years in the 400 year data. I’m not so sure about the higher frequency structure – the filter you apply probably gets rid of ENSO altogether and most of the PDO. Or is that what the graph is indicating, that the short period stuff was the transform of the raw data?

I personally would argue that we have no idea which way the CEL data is “going”, fit or not. Even with your filtering, I expect that there are still artifacts near the ends.

rgb

rgbatduke says:

May 6, 2013 at 8:37 am

“A clever eye might see a pattern of 20-30 year prosperous growth followed by 20-30 year correction and little to no growth, one strangely synchronized to at least some of the climate variations!”

I would suggest that this is the fundamental period of human generational turnover. It fundamentally reflects the bandwidth of institutional memory. One generation decides it has learned everything it needs to know, forgetting the lessons of the past, which sets us up for decline. The next generation relearns the lessons which promote prosperity anew, and things pick up, only to fail when institutional memory lapses once again.

It is, I think, no coincidence that every thirty or so years, we get a new climate panic, as the latest generation forgets the panic their forebears endured. It seems a grim practical joke by the Gods of weather to have synchronized the variation of the climate with the generational turnover. And, like the cycles of a resonance being driven by a sympathetic beat, succeeding peaks exhibit exponential growth in the level of alarm.

I am hopeful that the internet, as a vehicle for extending institutional memory, will arrest this damaging cycle. When the Global Cooling Panic of 2030 hits, there will be a plethora of documentation we can refer back to, and explain, yes, we have been through this before and yes, they really were serious about it last time, too.

rgbatduke says:

May 6, 2013 at 9:52 am

“Even with your filtering, I expect that there are still artifacts near the ends.”

It is inherent. There is no magic in these methods, and you can’t get something for nothing. In general, filtering via FFT-based manipulation is an inferior technique. It was recognition that seemingly straightforward weighting of the FFT produced bizarre actual responses which drove the development of the Parks-McClellan algorithm for the design of optimal equi-ripple filters.

“It is inherent. There is no magic in these methods, and you can’t get something for nothing.”

You betcha. TANSTAAFL, especially in (predictive or otherwise) modeling. In fact, in many cases there are no-free-lunch theorems.

It reminds me of somebody (don’t remember who) who posted an article bemoaning the perpetual fitting of this or that straight line to e.g. UAH LTT or the end of HADCRUTX, not JUST because of the open opportunity to cherrypick ends to make the line come out however you wanted it to, but because in the end, there was really no information content in the line that wasn’t immediately visible to the eye without the line. Fit lines, in fact, are a way to lie with data – they are deliberately designed and drawn to make your eye see a pattern that may or may not be there, sort of like using magic marker on a piece of toast to help somebody see the face of Jesus there, or the way Moms point out a cloud that, if you squint and pretend just right, looks like a fluffy little sheep (and not a great white shark, the alternative).

Lines drawn to physically motivated models, OTOH, have at least the potential for meaning, and can be either weakly verified or weakly falsified by the passage of time or training/trial set comparisons. Smoothing with a running weighted filter is still smoothing, and since the filter one applies is a matter of personal choice without any sort of objective prescription, all one ends up with is a smoothed curve based on parameters that make the curve tell the story the curve-maker wants told. If there are 50 ways to leave your lover, there are 500 ways to lie to yourself and others (and many more ways to simply be mistaken) in this whole general game: curve fitting, predictive modelling, smoothing, data transforming, data infilling (ooo, I hate that word :-), and so on. I like Willis’ idea best. Look at the RAW data first, all by itself, unmeddled with. Everything after that has the potential for all sorts of bias, deliberate and unconscious both. The data itself may well be mistaken, but at least it is probably HONESTLY mistaken, RANDOMLY mistaken.

rgb

rgbatduke says:

May 6, 2013 at 12:10 pm

“I like Willis’ idea best. Look at the RAW data first, all by itself, unmeddled with. Everything after that has the potential for all sorts of bias, deliberate and unconscious both. The data itself may well be mistaken, but at least it is probably HONESTLY mistaken, RANDOMLY mistaken.” – rgb

Like this study of 2249 raw unadjusted surface temperature records, for instance?

I’ve been revisiting the math, and taught myself enough R in the last day to test a pretty wide array of examples, and I’ve confirmed my position that windowing/padding does not introduce noticeable harmonics of existing frequencies. I say “noticeable” b/c it’s possible something hidden even by logarithmic scaling is not just rounding/precision noise.

In fact, as I additionally suspected, you don’t even need the traditional pad to a power-of-2 length to do a DFT on a modern CPU (at least for these data set sizes). R’s FFT claims to work best on “highly composite” data lengths, and I had to come up w/ some very large and very much not composite data lengths to get it to run for so long that I had to kill it due to lack of patience (1999999 samples, which has 3 prime and 5 unique factors, seems to be a good one). Typical variants on 250 years of daily data (365*250 = 91250 samples) run in seconds. Even the convolution w/ a sinc from padding is only pronounced when the padding is relatively large (like several factors). But I’ve only run a few test cases to verify that.

rgbatduke says:

May 6, 2013 at 12:10 pm

Notwithstanding that there is no further information with which to extrapolate the data beyond its boundaries within the data itself, it is possible to use other information to do so. That information is the vast well of knowledge of how natural systems typically evolve which has been compiled over the last century, particularly since about 1960 when widely applicable methods of finite element modeling, system identification, and optimal filtering were introduced. Sadly, I have not observed or encountered anyone on either side of the debate in the climate sciences who seems to be particularly conversant in these subjects.

James Smyth says:

May 6, 2013 at 11:30 pm

“Even the convolution w/ a sinc from padding is only pronounced when relatively large (like several factors) padding.”

Zero padding does not produce the sinc convolution – the finite data record does that. Zero padding simply interpolates more data points, making a plot produced with linear interpolation between data points, as most plotting routines use, appear smoother.

I recommend you try putting in other functions than pure sinusoids, and show yourself that, while you can reconstruct your input by summing the major sinusoids indicated by the DFT, suitably scaled by the peaks and shifted by the phases, there is no general predictive value in the reconstruction, and you can find cases where it is just flat out wrong.

Then, you should try adding random noise into your input data series, and notice what that does to your DFT. Methods for dealing with that particular ugliness fall under the heading of spectral density estimation.
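That warning is easy to demonstrate: a reconstruction from the largest DFT components reproduces the record it was fitted to, but its extension is just a periodic repeat, which fails badly on a nonperiodic signal. A numpy sketch with a made-up trend-plus-noise series (nothing here is the paper’s data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(2 * n)
# a purely nonperiodic signal: a linear trend with a little noise
x = 0.01 * t + 0.05 * rng.standard_normal(2 * n)

# "fit" the first half with its 6 largest DFT components
X = np.fft.rfft(x[:n])
keep = np.argsort(np.abs(X))[-6:]
Xk = np.zeros_like(X)
Xk[keep] = X[keep]
recon = np.fft.irfft(Xk, n)

# inside the fitted window the reconstruction tracks the data...
in_err = np.mean((recon - x[:n]) ** 2)
# ...but the "forecast" is the same curve repeated, and misses the trend
out_err = np.mean((recon - x[n:]) ** 2)
assert out_err > 5 * in_err
```

The reconstruction involves no free parameters, exactly as in the paper, and it still has no predictive value here: the trend continues, the periodic extension does not.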

rgbatduke,

tamino & I disagreed over his CEL projection, in his ’08 post. Which is why I use additional methods as a fundamental cross check. Here is the one from back then, showing an apparent downturn.

http://dc594.4shared.com/download/Lyy028dy/Ave1-CEL-2008-3fil-40yr.JPG?tsid=20130507-152145-cb7c6690

Looking at current CEL data, I would say my estimation is a bit better than tamino’s.

As far as competence in this area, DOD & NASA haven’t complained. As a side issue, spectral methods were part of the very basic predictor estimation work, such as with Norbert Wiener & Charlie Draper.

You might want to consult “The Measurement of Power Spectra” by Blackman & Tukey, on sampling periods for measuring signals in your freq. window.

rgbatduke, as usual, is right on the money (emphasis mine):

We’re a full service website here, so I’ve provided your graph for you. The error bars are invisible on this scale, so I’ve left them out.

This is the idea I find I have to keep repeating. The surprising thing about the climate is not that the temperature of the globe varied by 0.6°C over the last century.

The surprise is that it ONLY varied by 0.6° over the last century.

I have messed around a whole lot, both with heat engines of a variety of types, and with iterative climate models. I can assure folks that getting a free-running system to exhibit that kind of stability, either in the real world or on a computer, is a difficult task.

If the interests of the climate science community had been dedicated to understanding the stability of the climate, rather than obsessing about tiny fluctuations in global temperature, we might actually have learned something in the last 30 years. Instead, we have followed this idiotic idea down the rabbit hole:

This is the foolish claim that temperature depends on one thing and one thing only—the amount of change in the “forcing”, the total downwelling radiation at the top of the atmosphere. Not only that, but the relationship is bog-standard linear.

No other chaotic natural system that I know of has such a bozo-simple linear relationship between input and output, so it’s a mystery to me why people claim the climate follows such an unlikely simplistic rule.

Actually, it’s not a mystery. They have to believe that. The other option is too horrible to contemplate—they can’t stand the idea that the Thermageddon™ may not be just around the bend, it may have only existed in their fevered imaginations …

w.

[Bart said] “Zero padding does not produce the sinc convolution – the finite data record does that. Zero padding simply interpolates more data points, making a plot produced with linear interpolation between data points, as most plotting routines use, appear smoother.”

Not only have I been staring at the phenomenon in practice for the last two days, but I know it’s true because a window transforms to a sinc, and multiplication in time is convolution in frequency.

James Smyth says:

May 7, 2013 at 10:11 am

“…a window transforms to a sinc, and multiplication in time is convolution in frequency…”

Yes, but the “window” is always the same – it is defined by the boundary of your data. Extending the dataset with zeros does not add new data. It just creates a finer grid upon which the FFT evaluates the continuous-in-frequency DTFT (Discrete Time Fourier Transform). The DFT is merely a sampled version of the DTFT.

Sorry, I should not have used the word “window” in such an imprecise manner. The “rectangle” associated w/ zero-padding is what gives you the sinc convolution. Contra your statement that “Zero padding does not produce the sinc convolution”, it does.

And, the fact is, not only is there no need to do zero-padding, but zero-padding does not introduce “harmonics”. Both of these were common misconceptions and misstatements on this thread.
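The no-new-harmonics claim can be checked directly in numpy: zero-pad a pure sine and look for energy at integer multiples of its frequency (the bin numbers below follow from the lengths chosen for this sketch):

```python
import numpy as np

n = 1000
t = np.arange(n)
x = np.sin(2 * np.pi * 10 * t / n)     # exactly 10 cycles in the record

X = np.abs(np.fft.rfft(x, 4 * n))      # zero-pad to 4x the record length

# the peak stays at the original frequency (bin 40 on the 4x-padded grid)...
assert np.argmax(X) == 40
# ...and the energy at the would-be "harmonics" (bins 80, 120) is sidelobe
# leakage from the finite record, orders of magnitude below the peak,
# not new spectral lines created by the padding
assert X[80] < 0.01 * X[40] and X[120] < 0.01 * X[40]
```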

I have some other thoughts about the granularity of the f-domain, but it’s mostly personal interest (I think we all agree that the general conclusion of the paper is overreaching). But, I want to think about the math, compare it to the paper, and maybe try to get the paper’s data.

James Smyth says:

May 7, 2013 at 12:23 pm

“Sorry, I should not have used the word ‘window’ in such an imprecise manner. The ‘rectangle’ associated w/ zero-padding is what gives you the sinc convolution.”

No, the rectangle associated with the data record gives you the sinc convolution. Extending the record with zeros does not give you a bigger data window. It just allows you to sample more points of the DTFT.

I’m not going to argue with you. These things are well known and established. Good luck, and happy hunting.

But, you (and I) are arguing :-) and I’m sitting here staring at very simple examples that show I’m correct and you are wrong. Is it possible that you are just used to old-school FFT requirements (which are antiquated) of power-of-2 zero-padding? Or is there something about R’s FFT implementation that is misleading me? That’s possible.

“Is it possible that you are just used to old-school FFT requirements (which are antiquated) of power-of-2 zero-padding?”

Mixed-radix algorithms have been standard for decades. I’ve no doubt you are seeing an effect, it just isn’t what you think it is.

Come on, think. What happens to the Fourier sum when you increase N, but the sum is still over the same non-zero set of points?

BTW, you do realize a wider time window gives you a convolution in the frequency domain with a narrower sinc function, right? Make it wider in the time domain, it becomes narrower in the frequency domain, and vice versa.

Fun fact: that is the basis of Heisenberg’s Uncertainty Principle in Quantum Mechanics, because momentum and position are Fourier-transformable variables. The better you know the position, the worse you know the momentum, and vice versa.
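The width trade-off is easy to verify: the transform of a length-W rectangular window has its first null at a frequency of 1/W cycles per sample, so doubling W halves the main-lobe width. A numpy sketch (mainlobe_width is an illustrative helper, not a library function):

```python
import numpy as np

def mainlobe_width(W, m=8192):
    """Index of the first spectral null of a length-W rectangular window,
    measured on a fixed m-point zero-padded grid (the dense grid just
    samples the window's DTFT finely enough to locate the null)."""
    X = np.abs(np.fft.rfft(np.ones(W), m))
    k = 1
    while X[k + 1] < X[k]:   # walk down the main lobe to its first minimum
        k += 1
    return k

# the first null sits at bin m/W, so doubling W halves the main-lobe width
assert mainlobe_width(64) == 8192 // 64     # bin 128
assert mainlobe_width(128) == 8192 // 128   # bin 64: half as wide
```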

It might help if you made statements rather than ask questions.

When you increase N, you are changing the basis elements in the frequency domain. Each exp(-i*2*pi*k/N) is a different basis vector when you increase N. So, while you are summing over the same x(n), the X(k) are completely different. What point are you trying to make?

I’m going back through your recent contributions, and the only definitive statement I can see you’ve made – “Zero padding does not produce the sinc convolution – the finite data record does that.” – is simply wrong. Zero padding does produce the sinc convolution, and the finite data record (w/out padding) does not.

James Smyth says:

May 7, 2013 at 5:34 pm

“It might help if you made statements rather than ask questions.”

I want you to think it through. That is the best way to fix the concepts in your mind.

“Each exp(-i*2*pi*k/N) is a different basis vector when you increase N.”

The Fourier transform kernel is exp(-i*2*pi*k*n/N). The quantity w = 2*pi*k/N is the radial frequency associated with the value of k, so you could write this as exp(-i*w*n). Suppose w were a continuous variable. Performing the Fourier sum over n then gives you the DTFT, which is a continuous function of w. So, you see, the DFT is the DTFT sampled at the points w = 2*pi*k/N.

If you increase N, but are summing over the same set of non-zero points, then what you have done is increased your sample resolution by sampling more closely spaced values of w. As you increase the zero padding, you will begin to see more features of the DTFT come into view. But, it is not because of any convolution. It is because you are looking at more samples of the DTFT.

“Zero padding does produce the sinc convolution and the finite data record (w/out padding) does not.”

No, it just increases the resolution of the sinc function inherent in the finite data window, because you are sampling the DTFT more densely.
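This is straightforward to verify: evaluate the DTFT of a short record directly as a sum, and check that both the plain FFT and the zero-padded FFT land exactly on samples of that one function. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(32)    # the finite data record, N = 32

def dtft(x, w):
    """Direct evaluation of the DTFT: X(w) = sum_n x[n] * exp(-i*w*n)."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * np.outer(w, n)), axis=1)

# the unpadded DFT samples the DTFT at w = 2*pi*k/32 ...
w32 = 2 * np.pi * np.arange(32) / 32
assert np.allclose(np.fft.fft(x), dtft(x, w32))

# ... and the zero-padded DFT samples the SAME DTFT, just more densely
w128 = 2 * np.pi * np.arange(128) / 128
assert np.allclose(np.fft.fft(x, 128), dtft(x, w128))
```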

“Zero padding does produce the sinc convolution and the finite data record (w/out padding) does not.”

I believe it is the finite data record. It’s as if you had an infinite record and multiplied it by a gate function. The spectrum is then convolved with the sinc function which is the FT of the gate function. If you’re saying that the zero padding creates the gate function, then OK, but the sinc itself is independent of how much padding.

If the original signal had been the 254 block repeated indefinitely, then the FT of that would be a set of delta functions at the harmonics of period = 254. If you then gate it to one block of 254, each of those deltas is convolved with the gate function’s sinc FT. If you then cut back to a finite interval, the spectrum is discrete, with the sinc function represented by a set of spikes. As the padding is reduced, the spikes get further apart.

“If you increase N, but are summing over the same set of non-zero points, then what you have done is increased your sample resolution by sampling more closely spaced values of w. As you increase the zero padding, you will begin to see more features of the DTFT come into view. But, it is not because of any convolution. It is because you are looking at more samples of the DTFT.”

I get the change in granularity. In fact, I was one of the original people to note that in order to suss out a 248 year cycle, you need data whose duration is at least that long (a sort of companion/corollary to the Nyquist rate). But when you change N, you are now summing over a different set of basis elements, even though their coefficients x(n) are the same. And I think the different values reflect a change in the nature of the time-domain signal (i.e. you’ve changed its period by the addition of one zero point). So, it’s not clear to me why this higher granularity would be giving you “better” (more accurate?) information about the original, non-zero-padded signal. Sorry, that’s very qualitative, but I’m pretty burnt at this point.

Look at these examples and try to tell me that the zero-padded examples are just showing “more features” due to “more samples”: Zero Padding Examples.

“…but the sinc itself is independent of how much padding.”

(sigh) No, it’s not. The size of the window/padding determines the character of the sinc. I can’t be bothered to look it up, but it’s something like ~ sin(W*f)/(W*sin(f)), where W is the window/non-zero width.

I added a larger zero-pad to the end of my examples. And zoomed in to show the sinc.

James,

I think your example is focussing on the wrong frequency range. You have a relatively high “carrier” and a very high sampling rate. Ludecke’s has no carrier, and a sampling rate, in your terms, of 1 Hz (to scale, his yr = your sec). And he is looking at frequencies around .01 Hz (yr^-1). You’re looking at about 100x higher. So yes, your small end padding acts as a gate function multiplier, and makes a sinc function a few Hz wide.

Ludecke shows a continuous plot at around .01 yr^-1. To get that, he would need a lot of low frequencies in his DFT, which implies padding out to many millennia.

The math – if you have a function which is 1 from -a to a, and you add zero padding b on each side, at infinite sampling rate, then the FT is

sin(2πfa)/(πf)

where f is frequency. It’s independent of b, but b enters into the discrete frequencies of the DFT, which occur at n/(2(a+b)), n = 0, ±1, ±2, etc. So the padding doesn’t change the sinc, but increases sampling in the freq domain.
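As a numerical check of the transform of the unit pulse on [-a, a]: the integral works out to sin(2πfa)/(πf), and a midpoint Riemann sum reproduces it at any frequency, independent of how much padding surrounds the pulse. A numpy sketch:

```python
import numpy as np

a = 1.0
dt = 1e-4
t = np.arange(-a + dt / 2, a, dt)   # midpoint grid over the pulse support
f = np.linspace(0.05, 5.0, 200)     # frequencies to test (avoid f = 0)

# midpoint Riemann sum for  integral_{-a}^{a} exp(-2*pi*i*f*t) dt
ft = np.array([np.sum(np.exp(-2j * np.pi * fi * t)) * dt for fi in f])

analytic = np.sin(2 * np.pi * f * a) / (np.pi * f)
assert np.max(np.abs(ft - analytic)) < 1e-3
```

The padding b never enters this computation; it only determines which discrete frequencies of that fixed envelope the DFT reports.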

Nick Stokes says:

May 8, 2013 at 2:39 am

“So the padding doesn’t change the sinc, but increases sampling in the freq domain.”

Exactly. It’s the same sinc function every time, dependent on the length of the data record, just sampled at different values. With no zero padding at all, you are sampling frequency at the nulls of the sinc, except for the central point.

“So, it’s not clear to me why this higher granularity would be giving you ‘better’ (more accurate?) information about the original, non-zero padded signal.”

It doesn’t in reality, but for the way we humans process visual information, it is better. Plotting software generally interpolates data points with linear interpolation. Zero padding imposes a finer grid so that the plot looks smoother to us, and it is easier to recognize patterns which correspond to specific, frequently encountered forms.

E.g., the Cauchy peak of a second order damped oscillator driven by broadband noise, which is useful for modeling many climate variables such as sunspot numbers.
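Such a driven damped oscillator can be sketched as a discrete second-order (AR(2)) process excited by white noise; averaging periodograms over segments (a crude spectral density estimate) shows the broad resonance peak rather than a spectral line. All parameters here are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(2): y[n] = 2*r*cos(w0)*y[n-1] - r^2*y[n-2] + noise
# a damped resonance at w0, with bandwidth set by the pole radius r
r, w0 = 0.95, 2 * np.pi * 0.1
a1, a2 = 2 * r * np.cos(w0), -r * r
n = 1 << 16
e = rng.standard_normal(n)
y = np.zeros(n)
for i in range(2, n):
    y[i] = a1 * y[i - 1] + a2 * y[i - 2] + e[i]

# average periodograms over 64 segments to beat down the noise
segs = y.reshape(64, -1)
psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)

# the spectrum peaks near the resonant frequency 0.1 cycles/sample,
# as a broad (roughly Lorentzian/Cauchy-shaped) peak, not a line
seg_len = segs.shape[1]
peak = np.argmax(psd)
assert abs(peak / seg_len - 0.1) < 0.015
```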

“I think your example is focussing on the wrong frequency range. You have a relatively high ‘carrier’ and a very high sampling rate.”

Well, I was trying to illustrate something specific. And I chose the durations and freqs that I did to illustrate the sinc at a very high resolution. And to show how adding a relatively tiny window (i.e. one which only very slightly increases the frequency resolution) imposes an obvious sinc convolution on the results. I made it so I couldn’t be any clearer. BTW, the carrier is not “relatively high”; it’s actually pretty low, but the spectrum is zoomed in close (again, to see that resolution).

I did play around with numbers like the paper’s, and it illustrates our original point about the gross nature of a 248 year signal in 25x duration data. But I started to realize that I could goof around with that forever. Also, I wanted to find the smoothing function they used, in order to try to determine whether it was invertible (i.e. should I waste my time trying to find other solutions to their equation/fitting, or is it one-to-one and onto?).

“So the padding doesn’t change the sinc, but increases sampling in the freq domain.”

Seriously? You’ve left out a function of the relative window size in your numerator. The zeros change with more padding. See here: Padding Goes to Town. I’m sorry, I can’t be any clearer than that and hope to get anything done at work today :)

James Smyth says:

May 8, 2013 at 10:02 am

“…imposes an obvious sinc convolution on the results…”

“Seriously? You’ve left out a function of the relative window size in your numerator. The zeros change with more padding.”

Zero padding does not induce a convolution in the frequency domain. I’ve spoon-fed it to you, and apparently wasted my time. If you are not even going to bother to try to understand, there is no point in carrying on. You are wrong.

“If you are not even going to bother to try to understand, there is no point in carrying on.”

I’ve actually gone to great lengths to try to understand your points. I’ve noodled with the math and experimented with a wide range of sample data. And you haven’t spoon-fed me anything. I’d be happy to look at something specific (and at this point it really is time for me to consult the literature), but your talk about increased granularity struck me as hand-waving. Still does.

I don’t see (or didn’t yesterday), either qualitatively or mathematically, any difference between taking data (which is assumed to have persistent characteristics) and multiplying by a window to induce zeroes, versus taking a shorter version of that data and padding zeros. Surely, in the former you’d agree that multiplication by the window induces convolution by the sinc. It’s not a great leap from there to what happens when you take the original data and pad zeros. [I'm tempted to omit this b/c it is hand-waving]
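The windowing picture in this paragraph can be checked numerically (my own construction; a complex carrier is used so only a single spectral line appears): keeping M of N samples of a carrier is multiplication by a rectangular window, and the magnitude of the result is exactly the window's Dirichlet kernel (the periodic sinc) centered on the carrier bin.

```python
import numpy as np

# Truncate-and-pad equals windowing: the DFT magnitude of the truncated
# carrier matches the analytic Dirichlet kernel of an M-sample window.
N, M = 1024, 256
k0 = 128                                # carrier sits on bin k0 of the N-point DFT
n = np.arange(N)
x = np.exp(2j * np.pi * k0 * n / N)     # complex carrier: one line only
x[M:] = 0.0                             # truncate and zero-pad in one step

X = np.fft.fft(x)

def dirichlet(k):
    """|DFT of an M-sample rectangular window| at bin offset k."""
    num = np.sin(np.pi * k * M / N)
    den = np.sin(np.pi * k / N)
    on_peak = np.abs(den) < 1e-12       # the offset-0 bin, where den -> 0
    return np.where(on_peak, float(M), np.abs(num / np.where(on_peak, 1.0, den)))

assert np.allclose(np.abs(X), dirichlet(np.arange(N) - k0), atol=1e-6)
```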

I’m also open to the possibility that there is something about the examples I’ve posted that is misleading me, but I went to great lengths to make sure there wasn’t something about the combination of numbers (rounding errors, low-resolution sampling, interpolation, plotting issues, zoom level, etc.) that was throwing me off.

James,

As often, I think there’s a duality here – we’re just looking at different sides of it. That’s why the frequency range is important, because the merits of the two ways of looking at it depend on that.

I think my math is correct – here’s the interpretation re your latest diagrams. In the first you have no padding and see a bare carrier frequency. My formula would say that there “is” a sinc function with zeroes at multiples of 1/192, but the DFT only samples at those freq multiples, so you don’t see it.

Then you put in 1 sec padding. That shifts the freq sampling and you now see a bit of the sinc function as a beat freq. Put in 2 sec, and the beats go to 1/2 Hz, the spacing of your zeroes.

This all seems artificial, but as you keep increasing the padding that underlying sinc starts to emerge as a real picture as you sample it more often in the freq domain. And it is the sinc of the signal block.

I think Ludecke is at that high padding end.
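A small illustration of the "emerging sinc" picture (parameters are my own, not tied to any record here): count how many DFT bins land inside the main lobe of the block's sinc, i.e. below its first null at f = 1/(2a) = 0.5 Hz, as the padding grows.

```python
import numpy as np

# As padding increases, more and more frequency-grid points fall inside
# the fixed main lobe of the block's sinc, so its shape becomes visible.
fs = 100
a = 1.0
pulse_len = int(2 * a * fs)             # a 2 s block of ones

def bins_in_mainlobe(pad_seconds):
    n = pulse_len + 2 * int(pad_seconds * fs)
    f = np.fft.rfftfreq(n, d=1.0/fs)    # DFT frequency grid in Hz
    return int(np.sum(f < 0.5))         # bins below the first null

counts = [bins_in_mainlobe(p) for p in (0, 2, 8, 32)]

# with no padding only DC falls inside the lobe; padding fills it in
assert counts == [1, 3, 9, 33]
```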

“we’re just looking at different sides of it.”

This may be the case. I found an old IEEE article that I glanced at, and it got me thinking about the limit of W -> infinity and sampling that.

And I also think the seemingly radical change of the spectrum in going from this unpadded to this one-padded really throws me off.

James Smyth says:

May 8, 2013 at 6:02 pm

James! Shirley you jest! There is no limit to “W” in the warmists’ minds. It will always be Bush’s fault.

I keep meaning to update this with some kind of mea culpa. I don’t have any memory (grad school was only 15+ years ago) of understanding the DTFT as it relates to this. It’s all so obvious now: the zero-padded DFTs are, by definition, samples of the DTFT, and so is the original DFT. Bart did spoon-feed it to me. Live and learn, I guess.