Periodic climate oscillations

Guest essay by:

Horst-Joachim Lüdecke, EIKE, Jena, Germany

Alexander Hempelmann, Hamburg Observatory, Hamburg, Germany

Carl Otto Weiss, Physikalisch-Technische Bundesanstalt Braunschweig, Germany

In a recent paper [1] we Fourier-analyzed central European temperature records dating back to 1780. Contrary to expectations, the Fourier spectra consist of spectral lines only, indicating that the climate is dominated by periodic processes (Fig. 1, left). Nonperiodic processes appear absent, or at least weak. To test for nonperiodic processes, the 6 strongest Fourier components were used to reconstruct a temperature history.


Fig. 1: Left panel: DFT of the average from 6 central European instrumental time series. Right panel: same for an interpolated time series of a stalagmite from the Austrian Alps.

Fig. 2 shows the reconstruction together with the central European temperature record smoothed over 15 years (boxcar). The remarkable agreement suggests the absence of any warming due to CO2 (which would be nonperiodic) or of other nonperiodic phenomena related to human population growth or industrial activity.

For clarity, we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free (fitted) parameters.
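For readers who want to try the procedure themselves, here is a minimal sketch in Python (a synthetic series stands in for the station data; this is an illustration of the DFT-reconstruction idea, not the code or data of [1]):

```python
# Sketch: take the DFT of a (synthetic) annual temperature series, keep only
# the 6 strongest components with the amplitudes and phases the transform
# delivers, and invert. Nothing is fitted.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1780, 2011)
temps = np.sin(2 * np.pi * (years - 1780) / 248.0) + 0.3 * rng.standard_normal(years.size)

spectrum = np.fft.rfft(temps)              # one-sided DFT
power = np.abs(spectrum)
power[0] = 0.0                             # ignore the mean term
strongest = np.argsort(power)[-6:]         # indices of the 6 largest peaks

filtered = np.zeros_like(spectrum)
filtered[strongest] = spectrum[strongest]  # amplitude AND phase kept as-is
reconstruction = np.fft.irfft(filtered, n=temps.size)
# 'reconstruction' can now be compared with a 15-year boxcar smoothing of 'temps'.
```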

However, one has to guard against artefacts. An obvious one is the limited length of the records: the dominant ~250 year peak in the spectrum results from only one period in the data, which is clearly insufficient to prove periodic dynamics. Longer temperature records therefore have to be analyzed. We chose the temperature history derived from a stalagmite in the Austrian Spannagel cave, which extends back 2000 years. Its spectrum (Fig. 1, right) indeed shows the ~250 year peak in question. The wavelet analysis (Fig. 3) indicates that this periodicity is THE dominant one in the climate history. We also ascertained that a minimum of this ~250 year cycle coincides with the 1880 minimum of the central European temperature record.


Fig. 2: 15 year running average of 6 central European instrumental time series (black). Reconstruction with the 6 strongest Fourier components (red).


Fig. 3: Wavelet analysis of the stalagmite time series.

Thus the overall temperature development since 1780 is part of periodic temperature dynamics that have prevailed for ~2000 years. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of global warming due to CO2 but clearly results from the 250 year cycle. It also applies to the temperature drop from 1800 (when the temperature was roughly as high as today, Fig. 4) to 1880, which is tacitly swept under the carpet in all official statements. One may also note that the temperature at the 1935 maximum was nearly as high as today. This is shown in particular by a high-quality Antarctic ice core record in comparison with the central European temperature records (Fig. 4, blue curve).


Fig. 4: Central European instrumental temperatures, averaged over the records of Prague, Vienna, Hohenpeissenberg, Kremsmünster, Paris, and Munich (black). Antarctic ice core record (blue).

As a note of caution, we mention that a small influence of CO2 could have escaped this analysis. Such a small influence could have been incorporated by the Fourier transform into the ~250 year cycle, slightly shifting its frequency and phase. However, since the period of substantial industrial CO2 emission is the one after 1950, it covers only 20% of the central European temperature record length and can therefore only weakly influence the parameters of the ~250 year cycle.

An interesting feature reveals itself on closer examination of the stalagmite spectrum (Fig. 1, right). The lines with frequency ratios of 0.5, 0.75, 1, and 1.25 with respect to the ~250 year periodicity are prominent. This is precisely the signature spectrum of a period-doubling route to chaos [2]. Indeed, the wavelet diagram in Fig. 3 indicates a first period doubling from 125 to 250 years around 1200 AD. The conclusion is that the climate, presently dominated by the 250 year cycle, is close to the point at which it will become nonperiodic, i.e. “chaotic”. We have in the meantime confirmed the period doubling more clearly and in more detail.

In summary, we trace the temperature history of the last centuries back to periodic (and thus “natural”) processes. This applies in particular to the temperature rise since 1880, which is officially claimed as proof of anthropogenic global warming. The dominant period of ~250 years is presently at its maximum, as is the 65 year period (the well-known Atlantic/Pacific decadal oscillations).

Cooling, as indicated in Fig. 2, can therefore be predicted for the near future, in complete agreement with the lack of temperature increase over the past 15 years. Temperatures further in the future can be predicted to continue decreasing, based on the knowledge of the Fourier components. We note that our analysis is compatible with the analysis of Harde, who reports a CO2 climate sensitivity of ~0.4 K per CO2 doubling from model calculations [3].

Finally, we note that our analysis is seamlessly compatible with the analysis of P. Frank, in which the Atlantic/Pacific decadal oscillations are eliminated from the world temperature record and the increase of the remaining slope after 1950 is ascribed to anthropogenic warming [4], resulting in a 0.4 deg temperature increase per CO2 doubling. The slope increase after 1950 turns out, in our analysis, to be simply the shape of the 250 year sine wave. A comparably small climate sensitivity is also found by the model calculations [3].

[1] H.-J. Lüdecke, A. Hempelmann, and C.O. Weiss, Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records, Clim. Past, 9, 447-452, 2013, doi:10.5194/cp-9-447-2013, www.clim-past.net/9/447/2013/cp-9-447-2013.pdf

[2] M.J. Feigenbaum, Universal behavior in nonlinear systems, Physica D, 7, 16-39, 1983

[3] H. Harde, How much CO2 really contributes to global warming? Spectroscopic studies and modelling of the influence of H2O, CO2 and CH4 on our climate, Geophysical Research Abstracts, Vol. 13, EGU2011-4505-1, 2011, http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf

101 Comments
May 5, 2013 12:34 am

Matthew R Marler says:
May 4, 2013 at 5:00 pm
You get a slightly greater fit if you move the left-end red line leftward a bit, and the right-end red line rightward a bit. That suggests that during an epoch of about 160 years central Europe (is Paris “central”?) cooled more than central England. It’s hard to get away from epochs that are approximately some multiple or fraction of some period in a Fourier transform.
http://www.vukcevic.talktalk.net/2CETs.htm
Thanks. I agree. However, I am not convinced (the London-Paris distance is only 200 miles) that such a sudden change could happen in a few short years and then endure for 160 years at precisely the same 0.5 C difference, while otherwise the temperature changes agree in every minute detail. I suspect that the final conversion from the old European temperature scales may be the critical factor here.
The Réaumur scale saw widespread use in Europe, particularly in France and Germany http://en.wikipedia.org/wiki/Réaumur_scale

Nick Stokes
May 5, 2013 12:42 am

James Smyth says: May 4, 2013 at 5:43 pm
“I would need to see your math. These words don’t translate into anything meaningful for me. It sounds like you are implying that zero padding introduces peak frequencies, which I don’t think is true. Multiplication by a window is convolution w/ a sinc; it will spread existing frequency peaks out as convolved w/ a sinc function.”

Nothing complicated. Just Fourier series math as in their Eq 4. I’m not implying anything about zero padding; in fact I think my earlier statement about it was exaggerated – it probably isn’t the major influence on the 248 yr peak.
Their basic framework is just a DFT – about 254 data points and 254 frequencies. Invert the DFT restricting to the first 6 harmonics and you get the best available LS fit to the data of those functions. That’s just Fourier math.
They say “Contrary to expectations, the Fourier spectra consist of spectral lines only, indicating that the climate is dominated by periodic processes (Fig. 1, left)”, but that just shows ignorance of the process. A DFT has to yield spectral lines; it is discrete. And a simple DFT would just give lines at the harmonics. It’s not clear what they have done to smooth the spectrum – it’s possible that they have done a huge amount of zero padding, in which case, as you say, they’ll get the harmonic lines convolved with a sinc function. But they also mention a Monte Carlo process that adds noise. Whatever, they have followed the basic process of DFT, truncate, invert, which tells nothing about special periodic behaviour here. The smoothing of the spectrum adds no information.
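For what it is worth, a quick numerical check of that point, on synthetic data of roughly the same length (not the paper's series):

```python
# Keeping the first 6 DFT harmonics and inverting gives exactly the
# least-squares fit of those sinusoids to the data: ordinary Fourier math,
# with no free parameters and no extra information.
import numpy as np

rng = np.random.default_rng(1)
N, k = 254, 6
x = rng.standard_normal(N)

# Truncated inverse DFT (harmonics 0..k kept, the rest zeroed).
X = np.fft.rfft(x)
X[k + 1:] = 0.0
trunc = np.fft.irfft(X, n=N)

# Direct least-squares fit of the same harmonics.
t = np.arange(N)
cols = [np.ones(N)]
for m in range(1, k + 1):
    cols += [np.cos(2 * np.pi * m * t / N), np.sin(2 * np.pi * m * t / N)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, x, rcond=None)

print(np.allclose(trunc, A @ coef))        # True: identical fits
```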

Réaumur
May 5, 2013 2:37 am

The wonderful thing about WUWT is that even work which seems to support CAGW skepticism and thus soothe our cognitive dissonance and reinforce our confirmation bias, is analysed with as much rigour as that which claims the opposite. That is science!

Braqueish
May 5, 2013 3:55 am

Not being a mathematician I couldn’t possibly comment on the ins and outs here. But as a social scientist my eyebrows did go up a bit when you reassembled the extracted frequency peaks and seemed delighted that they bore a striking resemblance to the data from which they were extracted.
I do, however, appreciate what you’re trying to do. So much of the science in this field seems dependent upon linear regression which, in a system which is clearly cyclical, seems absurd. There are obvious periodicities outside the basic diurnal and annual cycles — which is obviously why many of them are dubbed “oscillations” — despite their frequencies being notoriously unreliable. I suppose that the search for relatively exact periodicities implies you’re looking for extra-terrestrial drivers for climate.
This study, as it stands, is interesting but not particularly persuasive. My immediate reaction was to question why you didn’t use the long-span proxies to derive your frequency peaks and then reassemble them to see how they compare to the thermometer record?

ferd berple
May 5, 2013 8:15 am

richard telford says:
May 4, 2013 at 8:34 am
Rubbish
========
Whether the paper is rubbish or not cannot be determined at present. You may believe that to be the case, but that doesn’t make it so.
The only valid test for any scientific theory or hypothesis – in this case that climate can be predicted from its spectral components – is in its ability to predict the unknown.
If we see a drop in temperatures as predicted by the model for the near future, then this is reasonably strong validation of the model. If the temperatures rise as predicted by the CO2 models, then that is reasonably strong validation for the CO2 models.
As I noted in the previous paper, curve fitting in itself does not invalidate the ability of a model to predict the unknown with skill. Mathematically, the solution of a neural network by computers (artificial intelligence) is indistinguishable from curve fitting. It is highly likely that human intelligence works in a similar fashion.
The problem is that many curves fit the data, but few, if any, have predictive skill. The challenge for any intelligent machine or organism is to correctly identify the few curves that have predictive ability and reject those curves that do not.

May 5, 2013 8:39 am

Two basic assumptions underlie the technique. First, that the frequencies discovered through spectral analysis represent a physical reality of some sort: that climate is cyclical on longer time scales, and that these cycles repeat in a regular fashion. Second, that the technique is correctly identifying these fundamental frequencies.
If there is an error in technique, then this can be discovered though analysis of the mathematics. Were the figures calculated correctly under the rules of mathematics? If not, then we cannot place much trust in the result.
If it has been established that the math is correct (a big if), then we are left with the first assumption, that climate is cyclical on longer time periods. This assumption is harder to argue against, because we see clear evidence of cyclical behavior in climate on many of the longer timescales.
One obvious test, as Willis pointed out in the previous paper, is to repeat the exercise with part of the temperature record hidden from the model, to see if the model has any skill at predicting the hidden portion of the data. If not, this would be a strong indication that the technique has no value.
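A sketch of that hold-out test (a synthetic series stands in for the real record; the split and the component count are arbitrary choices):

```python
# Derive the 6 strongest Fourier components from the first 80% of the series
# only, extend them forward as sinusoids, and compare with the held-out 20%.
import numpy as np

rng = np.random.default_rng(2)
N = 230
t = np.arange(N)
series = np.sin(2 * np.pi * t / 65.0) + 0.5 * rng.standard_normal(N)

split = int(0.8 * N)
train = series[:split]

spec = np.fft.rfft(train)
freqs = np.fft.rfftfreq(split)             # cycles per time step
keep = np.argsort(np.abs(spec))[-6:]

def model(tt):
    # Rebuild the kept components analytically so they can be evaluated
    # beyond the training interval (an inverse FFT only covers the fitted span).
    out = np.zeros(tt.size)
    for k in keep:
        amp, phase = np.abs(spec[k]), np.angle(spec[k])
        scale = 1.0 if k in (0, split // 2) else 2.0
        out += scale * amp / split * np.cos(2 * np.pi * freqs[k] * tt + phase)
    return out

pred = model(t[split:])
rmse = np.sqrt(np.mean((pred - series[split:]) ** 2))
print(rmse)                                # skill (or lack of it) on unseen data
```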

May 5, 2013 9:20 am

One of my favorite examples of the value of curve fitting is the game of chess. There are 64 squares on the board. Each square has a value: 0 for unoccupied, -1 thru -6 for black, +1 thru +6 for white. 13 possible values. 4 bits, with space left over for housekeeping, such as whose move it is next. This gives us an array of 64 values, 4 bits each. 256 bits. Without any attempt to shrink the problem size.
This tells us that without any effort we can represent any position on the chess board as a number in the range of 0 thru 2^256. A big number, but we are quickly approaching machines with a word size of 256 bits, so what is science fiction today will likely not be so in the future. Let this big number be your X axis, where each discrete value represents a chess position. Many of these will never be reached in any game, but it tells us all the possibilities.
Now let the Y axis be the value of the game position to each player. 0 represents a game where no player has any advantage. A positive value, white has the advantage. The larger the value, the greater the advantage. The opposite is true for black. Negative values, black has the advantage, and the more negative the value, the greater the advantage to black.
Now play millions and millions of games of chess and use the winning histories of each game to score the value of each board position reached. If the game ends in a draw, nothing changes. If black wins, subtract 1 from the Y value of every board position reached during the game. If white wins, add 1 to the Y value of every board position reached during the game. Over millions of games this gives you the Y values for each of the possible X values that are reached. Normalize these Y values based on the number of times each position was reached. These can then be graphed as a series of X and Y values.
Now plot a curve through these X and Y values. If you have the curve that correctly describes the game of chess, then for those values of X that were never reached in previous games, the curve itself will tell you the Y value of the board position. Thus, when you want to calculate your next move, examine all legal moves (linear time required) and find the corresponding X values. Compare the Y values for these X values, and select the move with the greatest positive or negative value, depending on whether you are black or white.
By this technique you can then play chess without any need to resort to the time consuming forward search approach (exponential time required) more typically used in chess. With the correct curve to describe the game of chess, you can very quickly calculate the best Y value for any possible X value.
Something very similar to this must take place in intelligent organisms, because no organism can afford the time required to exhaustively search all possibilities in real time, when a decision is required. Instead they must process the possibilities in an offline mode (sleep) where they consider all possibilities in an exhaustive fashion and create a shorthand technique (curve fit) to calculate the optimal response when confronted with similar problems in real time.
The problem then becomes one of selecting the correct curve to describe the game of chess, from all possible curves that fit the known X,Y points on the graph. What you want is a curve that predicts a very close to correct Y value for any X value, regardless of whether this X value has ever been seen before in all the millions and millions of games played. This is a surprisingly difficult problem.
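To make the encoding arithmetic concrete, a tiny sketch (a hypothetical helper, not anything from an actual chess engine):

```python
# 64 squares, 4 bits each, packed into one 256-bit integer that indexes
# the position (0 empty, +1..+6 white, -1..-6 black on each square).
def encode_board(squares):
    """squares: list of 64 ints in the range -6..6."""
    code = 0
    for value in squares:
        nibble = value + 6                 # shift -6..6 into 0..12, fits in 4 bits
        code = (code << 4) | nibble
    return code                            # 0 <= code < 2**256

empty_board = [0] * 64
print(encode_board(empty_board))           # every nibble holds the value 6
```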

Frank
May 5, 2013 12:03 pm

Consider repeating your analysis for the last 215 years rather than 230? years. 215 isn’t near a multiple of 80, 61 or 47 years. Do these peaks disappear? If so they are an artifact of the length of the data, not the data itself. (If Nick Stokes is right about zero padding to a multiple of 256, this still may not expose your problem.)

Bart
May 5, 2013 12:31 pm

richard telford says:
May 4, 2013 at 8:34 am
“For clarity, we note that the reconstruction is not to be confused with a parameter fit. All Fourier components are fixed by the Fourier transform in amplitude and phase, so that the reconstruction involves no free (fitted) parameters.”
Rubbish …
—————————————————
Afraid I have to agree with Richard here, much as it pains me. The DFT is a curve fit. Each bin is nothing more than a measure of the correlation of the input time series with a sinusoid at that frequency. If one naively constructs a truncated sinusoidal expansion from the output of the DFT, it can match the function over the data interval with arbitrary precision, depending on the number of components in the truncated series, but it does not have predictive power in general.
Successful DFT analysis generally requires some degree of smoothing, along with a seasoned analyst who can recognize the morphology arising from widely encountered physical processes, and thereby construct a model which generally produces a good approximation to commonly encountered system behavior.
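A two-line check of that description (generic data, plain numpy):

```python
# Each DFT bin is just the inner product of the series with a complex
# sinusoid at that bin's frequency.
import numpy as np

rng = np.random.default_rng(3)
N = 64
x = rng.standard_normal(N)
n = np.arange(N)

k = 5                                      # any bin
by_hand = np.sum(x * np.exp(-2j * np.pi * k * n / N))
print(np.allclose(by_hand, np.fft.fft(x)[k]))   # True
```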

RCSaumarez
May 5, 2013 3:27 pm

@Nick Stokes
Their basic framework is just a DFT – about 254 data points and 254 frequencies. Invert the DFT restricting to the first 6 harmonics and you get the best available LS fit to the data of those functions. That’s just Fourier math.
I don’t want to sound argumentative, but have you heard of the Nyquist Theorem? If there are 254 data points, the maximum number of observable frequencies is 254/2 = 127.

Nick Stokes
May 5, 2013 4:03 pm

RCS,
If you want to argue so, then there are 127 positive frequencies, and 127 negative.
But you have to have the same number of frequencies, otherwise the DFT is not invertible (http://en.wikipedia.org/wiki/Discrete_Fourier_transform#Completeness).
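In numpy terms (nothing specific to the paper), the bookkeeping of the two views looks like this:

```python
# A real series of 254 points has 254//2 + 1 = 128 one-sided bins (mean,
# 126 positive frequencies, Nyquist), while the full complex DFT carries
# 254 coefficients, which is what makes it exactly invertible.
import numpy as np

x = np.random.default_rng(4).standard_normal(254)
print(np.fft.rfft(x).size)                 # 128 one-sided bins
print(np.fft.fft(x).size)                  # 254 coefficients in the full DFT
print(np.allclose(np.fft.ifft(np.fft.fft(x)).real, x))   # True: exact inversion
```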

phlogiston
May 5, 2013 5:27 pm

ferd berple says:
May 5, 2013 at 8:15 am
richard telford says:
May 4, 2013 at 8:34 am
Rubbish
========
Whether the paper is rubbish or not cannot be determined at present. You may believe that to be the case, but that doesn’t make it so.
The only valid test for any scientific theory or hypothesis – in this case that climate can be predicted from its spectral components – is in its ability to predict the unknown.
If we see a drop in temperatures as predicted by the model for the near future, then this is reasonably strong validation of the model. If the temperatures rise as predicted by the CO2 models, then that is reasonably strong validation for the CO2 models.

Fred’s moderate tone seems right here – does the study make a prediction? If so, how will it turn out? Much of the strong criticism here (with more than a little whiff of territoriality) seems to imply the conclusion “you can’t use FFT on climate series”. Why not? Is the idea of cyclical phenomena in climate such an intolerable blasphemy? Is FFT really such a bad method to look for periodicities? More testing is needed on different climate data sets, that’s all.

May 6, 2013 6:42 am

Bart says:
May 5, 2013 at 12:31 pm
Afraid I have to agree with Richard here, much as it pains me. The DFT is a curve fit.
==========
Folks are making a mistake in assuming that because something is a curve fit that it is automatically wrong.
In general a curve fit will not provide predictive value, because there are a whole lot of meaningless curves that will fit the same data. However, there are also some curves that will fit the data and provide predictive value – if the data itself is predictable. So, if you simply select one of many possible curves at random, you are likely to get a curve that has no value.
In this case we do not know for sure if climate is predictable. Lots of Climate Scientists point to the Law of Large Numbers and the Central Limit Theorem and say that this makes Climate predictable, but that doesn’t make it so. Just ask the folks that play the stock market.
Weather is known to be Chaotic. Yet Climate Scientists assume that the average of Chaos is no longer Chaotic. That seems on the face of it to be highly unlikely, because it would imply that all sorts of Chaotic processes in fields outside of climate would suddenly become predictable through simple averaging. Is the Dow Jones Average predictable?
Now, if Climate is predictable (and that is a BIG if), have the authors produced a curve fit that has predictive power? Likely not, but none of us here can say for sure. We may well believe they haven’t, because the odds are against them, but believing doesn’t make it so.

rgbatduke
May 6, 2013 7:57 am

That is seen as real, and it’s a very large effect. That’s where their 248-year peak comes from. It’s the FT of that periodic pulse. It’s entirely spurious.
Or, more precisely, it may be entirely spurious. As they even note (inadequately) in their discussion of artifacts. It is also fair to say that some unknown fraction of it is spurious (up to the whole thing). The 2000 year data, however, shows some transform strength across the same frequency/period. That too could be spurious, a Gibbs phenomenon on a power of two integer divisor of the 2000 year period. To show that it is not, they should take some prime multiple of 250 < 2000 and redo the FT and show that the peaks survive. Or better than prime, some irrational multiple of 250 not too near a power of two. Or do an e.g. 7*250 = 1750 year sliding window across the 2000 year data.
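A sketch of that sliding-window check, with a stand-in series where the stalagmite record would go:

```python
# Slide a 1750-year window across a 2000-year record and see whether the
# ~250-year peak persists in every window; an artifact of record length
# would move around or vanish.
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(2000)
proxy = np.sin(2 * np.pi * years / 250.0) + rng.standard_normal(2000)

window = 1750
for start in range(0, 2000 - window + 1, 50):
    seg = proxy[start:start + window]
    seg = seg - seg.mean()
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(window, d=1.0)      # cycles per year
    peak_period = 1.0 / freqs[np.argmax(spec[1:]) + 1]
    print(start, round(peak_period, 1))         # ~250 in each window if the peak is real
```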
But yes, absolutely, these errors reflect a certain rush to publish indicative of confirmation bias, which is just as much a sin when it is committed by antiwarmists as it is when it is by warmists.
Ah, for the days when people did science in something like a disinterested way, openly acknowledging the limitations of knowledge and the finite range of conclusions that can be drawn from any argument. Ah, for the days eventually to come when crowdsourcing science replaces "just" peer review and papers like this don't just get posted to a blog, they get redone and corrected in real time until all of the issues raised are addressed. I’m willing to be convinced — or anti-convinced — of their claim of a 250 year signal, sure, but their computation so far is not convincing. Or anticonvincing — it may NOT be the case that all of their ~250 year peak is artifact. It’s easy enough to find out, given the data in hand, for at least the 2000 year stalagmite dataset. And there are many more datasets.
rgb

J. Bob
May 6, 2013 8:29 am

A few years back, I thought looking at global temperature, using Fourier Convolution Filtering (FCF) might be interesting. That is, convert the signal into the frequency domain, remove selected freq. & see if anything interesting might be seen. I liked the FCF, in that end pts. were not cut off, as happens using a moving average.
To make matters more interesting, I took the CEL, DeBuilt, Stockholm-GML & Berlin Tempelhof data sets, computed their respective anomalies, & averaged them. Nothing elegant as some would like, but gives the basic information.
In addition, the FCF was compared to a MOV as well as the forward & reverse recursive filter (“filtfilt” in MATLAB terms). That is, the filter is run forward in time, but then is run backwards, last to first. The purpose is to reduce the phase lag due to the inherent characteristics of the filter. End points have transient effects, & care is required in re-initializing the backward computations. But most of the signal is present. In this case I used a 2-pole Chebyshev.
The FCF requires a little work in order to reduce discontinuity at the end pts., hence the signal was “detrended” at the end points. It was then inserted into a zero-series window, whose length was a power of 2, so the signal sat in the middle with the ends padded with zeros & no data lost. (Getting good test data can be very expensive.) This results in reduced “leakage”, characteristic of this method. The FFT transformed the signal into the freq. domain, & selected freq. were masked off. The inverse FFT was then performed, & the result “trended” back for comparison to the original data set.
These procedures are noted in http://www.dspguide.com as mentioned in an above comment.
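For anyone who wants to experiment, a rough Python rendering of that recipe (J. Bob worked in MATLAB; the function name, padding choice and cutoff below are placeholders, not his code):

```python
# Fourier Convolution Filtering sketch: detrend so the endpoints match,
# centre the signal in a zero-padded power-of-two window, mask the high
# frequencies, invert, and restore the trend.
import numpy as np

def fourier_convolution_filter(x, cutoff_years=10.0, dt=1.0):
    n = x.size
    # Remove the straight line joining the endpoints to limit edge discontinuity.
    trend = np.linspace(x[0], x[-1], n)
    detrended = x - trend

    # Centre the detrended signal in a zero-padded window of power-of-two length.
    nfft = 1 << int(np.ceil(np.log2(2 * n)))
    padded = np.zeros(nfft)
    start = (nfft - n) // 2
    padded[start:start + n] = detrended

    # Mask all frequencies above the cutoff (periods shorter than cutoff_years).
    spec = np.fft.rfft(padded)
    freqs = np.fft.rfftfreq(nfft, d=dt)
    spec[freqs > 1.0 / cutoff_years] = 0.0

    smoothed = np.fft.irfft(spec, n=nfft)[start:start + n]
    return smoothed + trend                     # "trend back" for comparison
```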
As a cross check, a run was made, where no freq. were “masked” to compare how the procedure affected the original data.
The following 2 figures show the results, using a 10 & 20 year filter:
http://dc497.4shared.com/download/Bp5DxL3b/Ave4_10yr_3fil.gif?tsid=20130506-023010-c8122ef
http://dc497.4shared.com/download/l2jEHPxe/Ave4_20yr_3fil.gif?tsid=20130506-023324-5fa34b39
A second data set was used, just with the CEL data set, as shown below:
http://dc456.4shared.com/download/AgE-cCa2/Ave1_2010_FF_25yr.jpg?tsid=20130506-024001-cd51647f
A third run was made, using CRU data, comparing the FCF & the EMD filtering.
http://dc541.4shared.com/download/2foIw4k7/CRU-Fig-6a.gif?tsid=20130506-024316-2b67baa7
The EMD (Empirical Mode Decomposition) method handles non-stationary processes better than the FCF method.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1986583/
In retrospect the FCF method held up quite well, & highlighted a number of the cyclic, or almost-periodic, features shown in the data. While the FCF is a very powerful tool, it is but one tool, along with others that present different views of features in data analysis, in hopes of providing insight into what the data is telling us.
And that’s where tamino & yours truly had a difference of opinion a few years back over which way the CEL data was going.
It should also be noted that there is a 2D FCF, used in image processing, “The Handbook of Astronomical Image Processing”.

rgbatduke
May 6, 2013 8:37 am

become predictable through simple averaging. Is the Dow Jones Average predictable?
Of course. To a point. In fact, it very reliably appreciates at some multiple of the rate of inflation plus a smidgen. Take a look here:
http://stockcharts.com/freecharts/historical/djia1900.html
Oh, you mean is the chaotic, random noise, are the black swan events predictable? Of course not. But yes, the DJA does, in fact, become remarkably reliably predictable. Note well the semilog scale — the actual average experiences slightly better than exponential growth. Note also, ON the semilog scale, the difference between the Great Depression and the crash a few years ago. The crash a few years ago was basically a 3 decibel event (decibels being appropriate for log scale curves). The Great Depression was more like 15 decibels — and STILL made little real difference in the fact that the overall curve appears to be decently fittable with an exponential of an exponential (or an exponential fit on a semilog scale).
But this fit isn’t MEANINGFUL, and while nearly the entire world bets on it continuing, with some justification, it is the VARIATIONS in this behavior that are interesting. Also — not unlike the climate data — this century-plus curve can equally well be interpreted in terms of “eras” with quantitatively distinct behavior — pre-Depression (note the sharp upturn right before the crash where market margin rules were generating the illusion of wealth unbacked by the real thing!), post-Depression pre-1960s, a curiously flat stretch across the late 60s into the 80s (the era of runaway inflation, Viet Nam, the cold war), a “hockey stick” inflation from the Reagan years through Clinton, the chilling effect of the triple whammy of the dotcom collapse (IT companies pretending they are banks but without any regulation), the perpetual, expensive middle east wars sucking all of the free money out of the Universe, the savings and loan collapse, and the most recent correction due to the cumulative doldrums from all of the above.
A clever eye might see a pattern of 20-30 year prosperous growth followed by 20-30 year correction and little to no growth, one strangely synchronized to at least some of the climate variations! Or to sunspots. Or to the length of women’s dresses. Or to any variable you like that monotonically increases in time.
So sure, I can predict the DJA — and have been, for years. Just not the DJA today or this month or this year. But predict it within a few dB over twenty or thirty years? A piece of cake! Even Taleb doesn’t properly note this in his book — it would be very interesting indeed to plot the distribution of black swan events (or the spectrum of market fluctuations in general) to see what kind of law they describe. I’d bet on a power law of some sort, but the fact that only one 16 dB correction is present over more than a century of data (with a big gap to the next largest correction, a jump from 16 dB down to what, 3dB excursions pretty consistently) makes it difficult to even assert this.
This curve does contain important lessons for the antiwarmist and warmist alike. First of all, the global temperature curve has the same general shape as this but plotting only the anomaly, and on a normal scale, not semilog. This is what is so astounding — the climate is enormously stable. Forget 3dB corrections on the kelvin scale. One cannot even use decibels to describe the fluctuations visible in the anomaly.
It is very, very instructive to plot global temperatures not as anomalies but on the actual Kelvin scale, TO scale. Sadly, woodsfortrees doesn’t appear to let you control the axes — it has one of the ways to lie with statistics essentially built into its display (I mean this literally, a method explicitly described in the book “How to Lie with Statistics”). Honestly plotted, with error bars, it would look like a more or less flat line where the entire width of the line would be error bar. After all, the entire variation in degrees K is 3 tenths of one percent from 1880 to the present.
Not so with the DJA. Not even after renormalizing for inflation. The real wealth of the world has exponentially grown with population and technology, and no amount of market shenanigans or political incompetence can completely mask that overwhelming effect.
rgb

May 6, 2013 9:27 am

rgbatduke says:
May 6, 2013 at 8:37 am
become predictable through simple averaging. Is the Dow Jones Average predictable?
Of course. To a point. In fact, it very reliably appreciates at the some multiple of the rate of inflation plus a smidgen.
Actually the development of “technical analysis” of stock charts and of commodity price charts over the past 30 yrs or so has made them more predictable (with the caveat that economic bombs disrupt this stuff). Technical analysis, with its “moving average curves being crossed by the daily price curves” as an indication to buy or sell, is not scientific but is a kind of self-fulfilling exercise. Books on technical analysis “educate” investors in the technique and once a large enough segment of investors (or more effectively their brokers) are employing this, the market responds “predictably”. Bullion dealers have trained the market to behave according to these techniques. Having said all this, rgbatduke is correct about the long term – it is a trace of technical and economic progress. Put your retirement money into a stock index fund for long enough and you have a high probability of at least preserving your buying power. Remarkably, left to their own devices, individuals tend to buy high and sell low (withdraw from a disappointing turn).

rgbatduke
May 6, 2013 9:52 am

And that’s where tamino & yours truly had a difference of opinion a few years back over which way the CEL data was going.
Well, if you’re going to do numerology, one should at least do it competently, and it appears that you have done so. It is interesting to note that your power spectrum looks very nearly Lorentzian, although one probably cannot resolve long-period features, or linear from slow exponential, on this sort of interval. There is also clearly nothing (resolvable) that is special about 250 years in the 400 year data. I’m not so sure about the higher frequency structure — the filter you apply probably gets rid of ENSO altogether and most of the PDO. Or is that what the graph is indicating, that the short period stuff was the transform of the raw data?
I personally would argue that we have no idea which way the CEL data is “going”, fit or not. Even with your filtering, I expect that there are still artifacts near the ends.
rgb

Bart
May 6, 2013 10:06 am

rgbatduke says:
May 6, 2013 at 8:37 am
“A clever eye might see a pattern of 20-30 year prosperous growth followed by 20-30 year correction and little to no growth, one strangely synchronized to at least some of the climate variations!”
I would suggest that this is the fundamental period of human generational turnover. It fundamentally reflects the bandwidth of institutional memory. One generation decides it has learned everything it needs to know, forgetting the lessons of the past, which sets us up for decline. The next generation relearns the lessons which promote prosperity anew, and things pick up, only to fail when institutional memory lapses once again.
It is, I think, no coincidence that every thirty or so years, we get a new climate panic, as the latest generation forgets the panic their forebears endured. It seems a grim practical joke by the Gods of weather to have synchronized the variation of the climate with the generational turnover. And, like the cycles of a resonance being driven by a sympathetic beat, succeeding peaks exhibit exponential growth in the level of alarm.
I am hopeful that the internet, as a vehicle for extending institutional memory, will arrest this damaging cycle. When the Global Cooling Panic of 2030 hits, there will be a plethora of documentation we can refer back to, and explain, yes, we have been through this before and yes, they really were serious about it last time, too.

Bart
May 6, 2013 10:12 am

rgbatduke says:
May 6, 2013 at 9:52 am
“Even with your filtering, I expect that there are still artifacts near the ends.”
It is inherent. There is no magic in these methods, and you can’t get something for nothing. In general, filtering via FFT-based manipulation is an inferior technique. It was the recognition that seemingly straightforward weighting of the FFT produced bizarre actual responses that drove the development of the Parks-McClellan algorithm for the design of optimal equi-ripple filters.
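For reference, the Parks-McClellan algorithm Bart names is available as scipy.signal.remez; a minimal equi-ripple low-pass design might look like this (the band edges are arbitrary illustration values):

```python
# Equi-ripple FIR low-pass: pass periods longer than about 12 samples, stop
# shorter ones, designed by the Parks-McClellan (remez) exchange algorithm.
import numpy as np
from scipy.signal import remez, freqz

fs = 1.0                                   # one sample per year
taps = remez(numtaps=101,
             bands=[0.0, 0.08, 0.12, 0.5], # passband below 0.08, stopband above 0.12 cyc/yr
             desired=[1.0, 0.0],
             fs=fs)

w, h = freqz(taps, worN=1024, fs=fs)
print(np.max(np.abs(np.abs(h[w < 0.08]) - 1.0)))   # equi-ripple passband error
```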

rgbatduke
May 6, 2013 12:10 pm

It is inherent. There is no magic in these methods, and you can’t get something for nothing.
You betcha. TANSTAAFL, especially in (predictive or otherwise) modeling. In fact, in many cases there are no-free-lunch theorems.
It reminds me of somebody (don’t remember who) who posted an article bemoaning the perpetual fitting of this or that straight line to e.g. UAH LTT or the end of HADCRUTX, not JUST because of the open opportunity to cherrypick ends to make the line come out however you wanted it to, but because in the end, there was really no information content in the line that wasn’t immediately visible to the eye without the line. Fit lines, in fact, are a way to lie with data — they are deliberately designed and drawn to make your eye see a pattern that may or may not be there, sort of like using magic marker on a piece of toast to help somebody see the face of Jesus there, or the way Moms point out a cloud that if you squint and pretend just right looks like a fluffy little sheep (and not a great white shark, the alternative).
Lines drawn to physically motivated models, OTOH, have at least the potential for meaning, and can be both weakly verified or weakly falsified by the passage of time or training/trial set comparisons. Smoothing with a running weighted filter is still smoothing, and since the filter one applies is a matter of personal choice without any sort of objective prescription, all one ends up with is a smoothed curve based on parameters that make the curve tell the story the curve-maker wants told. If there are 50 ways to leave your lover, there are 500 ways to lie to yourself and others (and many more to simply be mistaken) in this whole general game of curve fitting, predictive modelling, smoothing, data transforming, data infilling (ooo, I hate that word :-), and so on. I like Willis’ idea best. Look at the RAW data first, all by itself, unmeddled with. Everything after that has the potential for all sorts of bias, deliberate and unconscious both. The data itself may well be mistaken, but at least it is probably HONESTLY mistaken, RANDOMLY mistaken.
rgb

phlogiston
May 6, 2013 2:43 pm

rgbatduke says:
May 6, 2013 at 12:10 pm
I like Willis’ idea best. Look at the RAW data first, all by itself, unmeddled with. Everything after that has the potential for all sorts of bias, deliberate and unconscious both. The data itself may well be mistaken, but at least it is probably HONESTLY mistaken, RANDOMLY mistaken.
rgb

Like this study of 2249 raw unadjusted surface temperature records, for instance?

James Smyth
May 6, 2013 11:30 pm

I’ve been revisiting the math, and taught myself enough R in the last day to test a pretty wide array of examples, and I’ve confirmed my position that windowing/padding does not introduce noticeable harmonics of existing frequencies. I say “noticeable” b/c it’s possible something hidden even by logarithmic scaling is not just rounding/precision noise.
In fact, as I additionally suspected, you don’t even need the traditional pad to a power-of-2 length to do a DFT on a modern CPU (at least for these data set sizes). R’s FFT claims to work best on “highly composite” data lengths, and I had to come up w/ some very large and very much not composite data lengths to get it to run for so long that I had to kill it due to lack of patience (1999999 samples, which has 3 prime and 5 unique factors, seems to be a good one). Typical variants on 250 years of daily data (365*250 = 91250 samples) run in seconds.
Even the convolution w/ a sinc from padding is only pronounced when the padding is relatively large (like several factors). But I’ve only run a few test cases to verify that.
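James Smyth's tests were in R; a rough Python analogue of the same check (an assumed setup, not his script) gives the same picture: only leakage sidelobes near the would-be harmonic, no genuine peak:

```python
# Zero-pad a pure sinusoid and look at the spectrum around twice its
# frequency: the padding spreads the single peak (sinc-like leakage) but
# does not create a harmonic there.
import numpy as np

N = 1000
t = np.arange(N)
x = np.sin(2 * np.pi * 0.05 * t)           # one frequency only

spec_padded = np.abs(np.fft.rfft(x, n=4 * N))    # pad to 4x the record length
freqs = np.fft.rfftfreq(4 * N)

peak = np.argmin(np.abs(freqs - 0.05))     # the real peak
second = np.argmin(np.abs(freqs - 0.10))   # where a 2nd harmonic would sit
sidelobe = spec_padded[second - 5:second + 6].max()
print(sidelobe / spec_padded[peak])        # ~1%: leakage, not a harmonic
```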

Bart
May 7, 2013 12:24 am

rgbatduke says:
May 6, 2013 at 12:10 pm
Notwithstanding that there is no further information with which to extrapolate the data beyond its boundaries within the data itself, it is possible to use other information to do so. That information is the vast well of knowledge of how natural systems typically evolve which has been compiled over the last century, particularly since about 1960 when widely applicable methods of finite element modeling, system identification, and optimal filtering were introduced. Sadly, I have not observed or encountered anyone on either side of the debate in the climate sciences who seems to be particularly conversant in these subjects.

Bart
May 7, 2013 12:48 am

James Smyth says:
May 6, 2013 at 11:30 pm
“Even the convolution w/ a sinc from padding is only pronounced when relatively large (like several factors) padding.”
Zero padding does not produce the sinc convolution – the finite data record does that. Zero padding simply interpolates more data points, making a plot produced with linear interpolation between data points, as most plotting routines use, appear smoother.
I recommend you try putting in other functions than pure sinusoids, and show yourself that, while you can reconstruct your input by summing the major sinusoids indicated by the DFT, suitably scaled by the peaks and shifted by the phases, there is no general predictive value in the reconstruction, and you can find cases where it is just flat out wrong.
Then, you should try adding random noise into your input data series, and notice what that does to your DFT. Methods for dealing with that particular ugliness fall under the heading of spectral density estimation.
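As a concrete version of that last suggestion (generic synthetic data, nothing from the thread), scipy's spectral density estimators show how segment averaging tames the noise that a raw DFT passes straight through:

```python
# Compare the raw periodogram of a noisy sinusoid with a Welch estimate,
# which averages over segments and trades resolution for variance.
import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(6)
N = 2048
t = np.arange(N)
x = np.sin(2 * np.pi * t / 64.0) + 2.0 * rng.standard_normal(N)

f_raw, p_raw = periodogram(x, fs=1.0)
f_w, p_w = welch(x, fs=1.0, nperseg=256)

print(1.0 / f_raw[np.argmax(p_raw[1:]) + 1])   # period picked by the raw periodogram
print(1.0 / f_w[np.argmax(p_w[1:]) + 1])       # period picked by the Welch estimate
```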