Well, Color Me Gobsmacked!

Guest Post by Willis Eschenbach

I’ve developed a curious kind of algorithm that I’ve humorously called “slow Fourier analysis”, but which might be better described as a method for direct spectral analysis. The basic concept is quite simple. I fit a sine wave with a certain cycle length, say 19 months, to the dataset of interest, and I note the peak-to-peak amplitude of this best-fit sine wave. I repeat this process for the various cycle lengths of interest, and put that all together as a periodogram. The periodogram shows, for each cycle length, how large a sine wave best fits the data. This method has worked a treat, but yesterday, this odd method of mine handed me a big surprise.
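To make that concrete, here’s a bare-bones R sketch of the core fit (a simplification for illustration, not the actual function linked at the end of this post). Fitting a sine term and a cosine term by ordinary least squares amounts to fitting a single sine wave with a free phase:

# Fit a sine wave of a fixed period to a time series by least squares, and
# return its peak-to-peak amplitude. The sine and cosine regressors together
# are equivalent to a single sine wave with a free phase.
fit_ptp <- function(y, t, period) {
  sw <- sin(2 * pi * t / period)
  cw <- cos(2 * pi * t / period)
  fit <- lm(y ~ sw + cw)
  2 * sqrt(coef(fit)["sw"]^2 + coef(fit)["cw"]^2)  # peak-to-peak amplitude
}

# Toy example: an 11-unit cycle plus noise, scanned over a range of periods
set.seed(1)
t <- 1:400
y <- 50 + 30 * sin(2 * pi * t / 11) + rnorm(length(t), sd = 10)
periods <- 3:(length(t) %/% 3)   # start above the 2-sample Nyquist period
ptp <- sapply(periods, function(p) fit_ptp(y, t, p))
plot(periods, ptp, type = "l", xlab = "cycle length", ylab = "peak-to-peak amplitude")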

Now, I have been doing these periodograms using only the time steps inherent in the data. So if the data was monthly, I’ve looked at all possible integral cycle lengths, starting with two-month, three-month, four-month, and five-month cycles, and so on up to cycles with periods as long as a third of the number of months in the data.

Similarly, if the data was yearly, I’ve looked at all possible integral cycle lengths, starting with two-year cycles, three-year cycles, four-year cycles, and so on up to a third of the number of years in the data.

(I mistrust reports of the longer cycles, and any cycle with a period longer than a third of the dataset is not really worth calculating—it will fool you every time.)

So I was going along in blissful ignorance when an alert WUWT reader, Greg Goodman, pointed out that there was no reason to restrict myself to integral periods. He noted that I could subdivide the results and gain greater resolution. I was dismissive of his idea. I agreed that you could do it, and that it was an interesting idea, but I said I didn’t think you would gain any real resolution by doing it. Hang on, let me get my exact words and the graphic I used in that thread to illustrate my point … here’s what I said:

In any case, I’m still not convinced that the procedure will give any real increase in information. Here’s the difference, for example, between monthly and annual data:

[Figure: periodograms of monthly and annual sunspot data]

I think that if I increase the resolution on the sunspot periodogram while still using the annual data, it won’t look anything like the real monthly data … I’ll report back on that one.

Well … this is the promised “report back”. As you can see, both the monthly and the annual data show the peak at 11 years. The monthly data shows more detail, as it resolves the individual peaks at 10, 11, and ~11.9 years. The test I outlined seemed a good one: I’d see what happened when I sampled annual data at monthly intervals. So here’s what I did. I took the SIDC monthly sunspot data. For the annual data, I didn’t use the SIDC annual data. Instead, I averaged the SIDC monthly data to give me annual data that I knew was a true annual average of that particular monthly dataset. Figure 2 shows the monthly data in red, and the annually averaged monthly data in black.
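(For anyone playing along at home, that averaging step is a one-liner in R. Here’s a sketch using R’s built-in sunspot.month series as a stand-in for the SIDC file:)

# Block-average monthly values into annual values. Assumes the record starts
# in January; R's built-in sunspot.month series stands in for the SIDC data.
monthly <- as.numeric(sunspot.month)
monthly <- monthly[1:(12 * (length(monthly) %/% 12))]  # trim to whole years
annual  <- colMeans(matrix(monthly, nrow = 12))        # one column per year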

Figure 2. Monthly SIDC sunspot data (red), and the same monthly SIDC data annually averaged (black).

So to be clear, the data shown as the black line is the annual average of the monthly data shown as the red line.

Next, I modified my periodogram-generating computer function to allow for fractional time sampling of the data.

Then I calculated three periodograms. The first is the periodogram of the annually averaged data, shown in yellow/black below. The second is the periodogram of the underlying monthly data, shown in blue.

Finally, I calculated the third periodogram (shown in cyan) using the annually averaged data, but sampled on a monthly basis … Figures 3 and 4 show those results. I must confess, when I saw them my jaw hit the floor.

[Figures 3 and 4: periodograms of SIDC sunspot data, annual, monthly, and monthly-sampled annual; full view and closeup]

Figures 3 and 4. Upper panel shows the periodogram of annual sunspot data (yellow/black), monthly sunspot data (blue), and monthly-sampled annual data (cyan). Lower panel is a closeup of the upper panel, showing the period from seven to twenty-five years.

You can see why my jaw hit the floor. The blue line (periodogram of actual monthly data) is almost identical to the cyan line (periodogram of annual data sampled at monthly intervals). Who knew?

It turns out that, contrary to my expectation, the information about the strength of the monthly cycles is NOT lost when the individual monthly values are subsumed into annual averages. Somehow, that information still exists and can be retrieved. When I take the annual data and analyze it at monthly frequencies, the information is still there. Not exactly, to be sure; as you can see, the process is not perfect or precise.

But it is astoundingly accurate. I absolutely didn’t expect that.

Now, I suppose some signal analysis guy will pop up and say he knew that all along and abuse me for my colossal ignorance … but I taught myself this stuff, and all I can say is, it sure was a surprise to me.

Truly, I don’t understand this result. For many, many years I’ve thought that when you average data in chunks, like say averaging monthly data into yearly data, that all of the monthly information was gone. Lost. Kaput. Irretrievable.

But that doesn’t seem to be the case. It seems that very little of the power spectrum information was lost as a result of the annual averaging process.

Naturally, this brings up the next question: is this procedure invertible? That is to say, given a periodogram such as the one above, can I run the process in reverse? Can I start from the annual data, calculate the monthly periodogram from that annual data, and then invert the periodogram to give me back the monthly data? That would be really amazing … but I’m pessimistic.

My guess is no, the periodogram can’t be inverted to give me back the monthly data … but given how poor my last pathetic guess (at the head of this post) was, I’ll certainly give it a try. Any assistance gladly accepted.

Like I said above … once again all I can say is, I’ll report back on that one.

Regards to all,

w.

Acknowledgement: My thanks to WUWT reader Greg Goodman for the suggestion to investigate fractional time periods.

For Clarity: If you disagree with something someone says, please quote the exact words you disagree with. It avoids all kinds of misunderstandings.

Data and Code: I’ve collected the R code, the R functions, and the data into a small (22KB) zipped folder called “Oversampling Folder”. It includes the monthly and annual sunspot data as CSV files. If you change your R workspace to the folder, it should be turnkey.

For the original data:

SIDC Annual Sunspot Data

SIDC Monthly Sunspot Data

Note that, following the advice of Leif Svalgaard, I have increased all of the pre-1947 SIDC sunspot values by 20%, to correct for the change in counting methods at that time. The change is immaterial for this analysis.

Changes In The R Function: The function to generate the periodogram, named “sineptop” (for sine peak-to-peak), was previously called like this:

sineptop(annual_data, frequency = 1)

or like this:

sineptop(monthly_data, frequency = 12)

The “frequency” variable identifies the data as having that many samples per year.

It can still be called like that. In addition, however, the new sineptop syntax for e.g. monthly sampling an annual dataset looks like this:

sineptop(annual_data, frequency = 1, by = 1/12)

If the “by” variable is not specified it is assumed to be 1, so you don’t need to specify it, and the original syntax still works.
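For reference, stripped of bells and whistles, the logic boils down to something like this sketch (simplified for illustration; the zipped folder has the full function):

# Sketch of a sineptop-style periodogram function. "frequency" is the number
# of samples per year; "by" is the step, in years, between tested periods,
# so by = 1/12 scans annual data over a monthly grid of periods.
sineptop_sketch <- function(data, frequency = 1, by = 1) {
  t <- seq_along(data) / frequency                   # time in years
  periods <- seq(3, length(data) / frequency / 3, by = by)
  ptp <- sapply(periods, function(p) {
    sw <- sin(2 * pi * t / p)
    cw <- cos(2 * pi * t / p)
    fit <- lm(data ~ sw + cw)
    2 * sqrt(coef(fit)["sw"]^2 + coef(fit)["cw"]^2)  # peak-to-peak amplitude
  })
  data.frame(period = periods, ptp = ptp)
}

# e.g. sineptop_sketch(annual_data, frequency = 1, by = 1/12)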

167 Comments

Greg Locock
May 26, 2014 7:39 pm

You cannot use this method. The reason is that the resolution of a Fourier analysis is limited to B = 1/T, where T is the total length of the sample. Therefore in a 1000 year sample your frequency lines are spaced at 1/1000 y⁻¹. The only valid frequency lines are for integer numbers of cycles in the total sample.
I suggest you read up on Fourier.

wsbriggs
May 26, 2014 7:48 pm

I salute your intellectual integrity in disclosing this. Far too many people would kick the rock off the peak and pretend that there was nothing to see.
I follow your posts as much for the writing as for the science. I always learn something, and I’m always entertained by the responses. It seems the mouth breathers aren’t as prevalent in the responses these days, or maybe I’m missing something.

intrepid_wanders
May 26, 2014 7:54 pm

Greg Locock says:
May 26, 2014 at 7:39 pm
You cannot use this method…

Would you care to explain how Willis reconstructed a fairly complicated waveform from averaged data with a flawed method?

May 26, 2014 7:56 pm

Send this off to Steve McIntyre

Grant Foster
May 26, 2014 7:57 pm

The method you describe is very clever. It’s also known (in the astronomical literature at least) as the Date-Compensated Discrete Fourier Transform, or DCDFT (Ferraz-Mello, S. 1981, Astron. J., 86, 619). It’s related to the more popular (but in my opinion less effective) Lomb-Scargle modified periodogram (Scargle, J. 1982, Astrophys. J., 263, 835). You have also chosen to plot the *amplitude* spectrum rather than the *power* spectrum, a habit which is uncommon but certainly valid.
It’s no surprise that the fine detail in the periodogram is not lost when passing from monthly to annual averages. The “density” of information in the frequency domain depends on 1/T, with T the total time span covered by the data (actually, T = N dt with N the number of data and dt their time spacing). Transforming from monthly to annual averages doesn’t change the total time span T, so it doesn’t remove information which seems to you to be “in between” the frequencies you tested when restricting to periods which are integer multiples of a single year. It was a mistake to restrict to periods which are integer multiples of a year in the first place.
What *is* lost when passing from monthly to annual averages is the *high* frequency information. With evenly sampled data, the entire information content of the periodogram is restricted to the frequency range from 0 to 1 /(2 dt) (the “Nyquist frequency”). For monthly data, dt = 1/12 so the Nyquist frequency is 6, but for annual data, dt = 1 so the Nyquist frequency is 0.5. Passing from monthly to annual averages therefore loses the information in the frequency range from 1 to 6 cycles/year, but does not lose any of the information in the frequency range from 0 to 1 cycle/year.
The information content in a periodogram is evenly distributed in frequency space, but not in period (inverse frequency) space. Therefore I recommend plotting periodograms with frequency rather than period as the abscissa.
Incidentally, the DCDFT isn’t really necessary when the data are evenly sampled in time and there are no missing values; in that case the plain old DFT (discrete Fourier transform) is just fine (and is faster to compute, a palpable benefit with large data sets). Methods like the DCDFT and modified periodogram were developed specifically to address issues which arise when the time sampling is uneven.
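A quick synthetic-data check of the low-frequency point, in R:

# Toy check: annual averaging leaves the low-frequency spectrum essentially
# intact. Make 100 years of monthly data with an 11-year cycle plus noise,
# then compare fitted peak-to-peak amplitudes before and after averaging.
set.seed(42)
t_mon <- seq(1/12, 100, by = 1/12)
y_mon <- 50 + 30 * sin(2 * pi * t_mon / 11) + rnorm(length(t_mon), sd = 15)
y_ann <- colMeans(matrix(y_mon, nrow = 12))  # annual averages
t_ann <- (1:100) - 0.5                       # centre each average mid-year

ptp <- function(y, t, p) {
  fit <- lm(y ~ sin(2 * pi * t / p) + cos(2 * pi * t / p))
  2 * sqrt(sum(coef(fit)[-1]^2))
}
ptp(y_mon, t_mon, 11)  # ~60 peak-to-peak from the monthly data
ptp(y_ann, t_ann, 11)  # nearly the same from the annual averages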

intrepid_wanders
May 26, 2014 8:02 pm

Willis,
I was in your boat concerning the resolution loss with averages. Incredible.
Very Nice sir, very nice.

Michael D
May 26, 2014 8:04 pm

hi Willis,
The Convolution Theorem says that the Fourier Transform of convolved functions, F(A * B * C), is equal to the frequency-by-frequency product F(A) x F(B) x F(C). Let A be the monthly data, B the one-year-averaging boxcar filter, and C one of your “slow Fourier” sine waves. Unless C has quite a short period, running a one-year averaging filter over it will not change it significantly, so F(B) x F(C) ~ F(C). Thus F(A * B * C) ~ F(A * C), which is what you observed.
You wonder, “My guess is no, the periodogram can’t be inverted to give me back the monthly data.” I predict that if your Slow Fourier is densely sampled, you should be able to invert it (after it was run on the annual data) to get something like what you would have got if you had done a sliding average on the monthly data. Thus the month-to-month variations may be present (because a sliding boxcar average is a blunt instrument) but will be very attenuated.
Filters are fun.
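To put a number on “will not change it significantly”: the gain of an N-point boxcar at a period of P samples is sin(Nπ/P) / (N sin(π/P)). A quick check in R:

# Gain of an N-point moving average at a period of P samples
boxcar_gain <- function(P, N = 12) abs(sin(pi * N / P) / (N * sin(pi / P)))
boxcar_gain(132)  # 11 years of monthly data: ~0.99, passes almost untouched
boxcar_gain(24)   # 2-year cycle: ~0.64
boxcar_gain(6)    # 6-month cycle: 0, removed entirely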

phlogiston
May 26, 2014 8:05 pm

This finding makes sense intuitively. In image analysis people are repeatedly surprised at how little analytic power is lost from 3d image data after it is repeatedly downsized by binning and averaging.

ossqss
May 26, 2014 8:08 pm

[Embedded video: Max Tegmark on whether the universe is entirely mathematical]
Michael D
May 26, 2014 8:08 pm

Have you looked at wavelets? They could allow you to see whether the energy at a specific frequency is changing with time.

Michael D
May 26, 2014 8:11 pm

Thanks, OSSQSS for this fun video about “Nothing Buttery.”

Nick Stokes
May 26, 2014 8:15 pm

Willis,
I think all that is happening here is that your SFT sums annual values as a form of integration. That is actually a poor integration formula, and it hurts when you mix in a high-frequency sinusoid. When you interpolate first to monthly, as I assume you are doing, that’s equivalent to a trapezoidal integration, or maybe better.

James
May 26, 2014 8:42 pm

The reason that this works is because it is, as Willis plainly admits (and shows in his code), not really a Fourier transform (and therefore doesn’t have all of the assumptions). It’s just a series of curve fits using defined sinusoidal forms, the outputs of which you use to define the strength of the period.
I think the reason you get the same results with the monthly sampling (side note: I don’t think that’s the best word here. Data are sampled. Your method fits over a grid of periods, and the grid isn’t “sampled” in the usual sense) is because your fits will use some kind of sum-squared-error term. You can aggregate chunks of the data by their mean values and see mostly identical fit results (within reason, of course) when using that optimization.
I’ve seen this with my own playing around with temperature sets, for example. You can see it on this plot, for example. The OLS fit for the monthly data is basically the same as the OLS fit for a 2 year running mean. You can kick it up to 5 years and only see a slight divergence.
This follows from intuition. Small aggregations of the data (relative to the total size of it) will appear very similar in terms of error to the optimizer. You can show this in excel quickly with some simple linear functions with normal error, then aggregate the points in certain ways and look at the results. You’ll first see how the fit line moves through the normal errors without aggregation as it tries to find the mean value at the center of the normal error. Then you’ll see that aggregation pulls the points with error closer to the true mean of the line, essentially doing the work the optimizer is trying to do (which is pointing out the mean of the line’s values at each point).
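A minimal R version of that aggregation experiment, with a made-up trend and noise:

# A linear trend fit barely moves when the data are annually averaged
set.seed(7)
t_mon <- seq(1/12, 60, by = 1/12)  # 60 years of monthly samples
y_mon <- 2 + 0.5 * t_mon + rnorm(length(t_mon), sd = 3)
y_ann <- colMeans(matrix(y_mon, nrow = 12))
t_ann <- (1:60) - 0.5
coef(lm(y_mon ~ t_mon))  # slope ~0.5 from the monthly data
coef(lm(y_ann ~ t_ann))  # essentially the same slope from the annual averages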

Nick Stokes
May 26, 2014 9:22 pm

“Truly, I don’t understand this result. For many, many years I’ve thought that when you average data in chunks, like say averaging monthly data into yearly data, that all of the monthly information was gone.”
Yes, it is lost. But your result depends on what you replace it with. A numerical integration always involves some implied approximating function. Your SFT implicitly treats it as locally constant – as if it had the average value through the year, then suddenly changed. Like a histogram. But the monthly data changes smoothly.
When you interpolate, you aren’t regaining lost data. But you are replacing it by a much better approximation. Although you don’t know what it is, you know that it is smoother than the chunk approx.

May 26, 2014 9:49 pm

Your averaging is a digital low-pass filter.
You’ve used 12 taps, with uniform coefficients of 1/12th. Off the top of my head, you should only see a very slight reduction at periods around 120 samples. IIRC, the -6 dB point is at 24 samples, and anything longer than that will see a smaller reduction.

Cees
May 26, 2014 9:50 pm

The monthly variation shows up in the high frequencies (except for a bit of leakage) not shown in your plots. The variation you see between the “standard” frequencies of the discrete Fourier transform (DFT) is mainly related to the beginning and end of your time series; you can modify it by applying a taper. So that variability is largely deterministic. It has nothing to do with your “least squares” approach: for the standard DFT frequencies, least squares gives the same result as the DFT (or FFT, which is just an implementation) because of orthogonality. Least-squares approaches are normally used for irregularly sampled data; see http://en.wikipedia.org/wiki/Least-squares_spectral_analysis#The_Lomb.E2.80.93Scargle_periodogram. Success.

Michael 2
May 26, 2014 9:52 pm

“information about the strength of the monthly cycles is NOT lost when the individual monthly values are subsumed into annual averages. Somehow, that information still exists and can be retrieved.”
This information preservation feature is pretty much why MP3 music sounds reasonably good (assuming you like what was encoded), and JPEG photographs work well.
Nature is rich with periodic functions of short duration (music) or limited dimensions (jpeg photographs). These can be represented with a discrete cosine transform.
As others point out, wavelets are like a Fourier Transform but of limited duration. A wavelet image compression algorithm was invented, is probably superior, but patented so hardly anyone ever used it (including me — I have access to it but why use what no one else is using?).
Suppose you plot a sine wave of frequency 10 and you sample it at frequency 44. The dots (sample points) are not going to land on the inflection points of your sine wave but will “walk” up and down the slopes, only rarely hitting either a peak or an inflection point. These points *describe* the sine wave, but they are NOT the actual sine wave.
Because these samples are not the actual signal, you can do things to them that are probably not possible in analog signal systems. Sharp cutoff filters are easy to implement in a DSP (Digital Signal Processor) by converting to the frequency domain, chop off what you don’t want, then convert back to the time domain. Doing this introduces artifacts of course that have analog analogues.
If you smooth data, the missing data points nevertheless make their influence felt in the resulting smoothed curve. Because of that, your curve fitting algorithm will “peak” on the frequencies (or periods) of the missing data, but reduced in amplitude by pretty much exactly the amount of smoothing that was applied. That’s experience talking, I certainly cannot prove it mathematically but it would probably be amenable to an experiment.
Anything that alters the shape of the data will produce a measurable artifact. For instance, if you have a 15 month rolling average, I would expect to see a 30 month “signal” in the data as new “outliers” come into the window and then drop off 15 months later, producing a 15-month long “step” in amplitude. Using a 12 month rolling average will create an artifact but will doubtless be masked by common annual signals in many aspects of nature.

CC Squid
May 26, 2014 10:05 pm

So you have answered your question on 11-year sunspot cycles? ((2010-1750)/25 = 10.4) Does this mean that the Schwabe cycles are a fact? I love this blog and get great pleasure from your posts. Congratulations!

May 26, 2014 10:29 pm

You’re still not applying a window before doing the convolving, so you’re convolving in the edges (start/end data points), which line up pretty well here, so the artifacts are smaller. That’s just luck, not proper methodology …
Boxcar averages are also not much used in modern signal processing because they have all sorts of aliasing and distortion properties.

Bernie Hutchins
May 26, 2014 11:03 pm

In the last few months I have posted a number of Application Notes relating to “smoothing” and its counterpart, “unfiltering”, more traditionally called deconvolution or equalization. Above some folks wonder if a moving average can be inverted. Yes it can. Pretty much as you invert an integration with a differentiation:
“Inverting a Moving Average Operation”
http://electronotes.netfirms.com/AN407.pdf
Two other notes are AN408 “Examples of Inverting of Smoothers” and AN409 “Changing the Moving Average Length – A Recipe”. These can be had by just changing the number in the URL.
This practitioner-level material is really “Intermediate Digital Signal Processing” – too complicated for an “introduction” but short of research level. No one has written, or likely ever will write, a book on intermediate DSP! Everyone is a beginner, or an expert.
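For the special case of a 12-point trailing average, the recursion can be written down directly; a toy R check (the notes above cover the general case):

# Inverting a 12-point trailing moving average: from
# 12*(y[n] - y[n-1]) = x[n] - x[n-12], we get
# x[n] = x[n-12] + 12*(y[n] - y[n-1]).
set.seed(3)
x <- rnorm(120)
y <- stats::filter(x, rep(1/12, 12), sides = 1)  # trailing average; NA for n < 12
x_rec <- c(x[1:12], rep(NA, 108))                # seed with the true first year
for (n in 13:120) x_rec[n] <- x_rec[n - 12] + 12 * (y[n] - y[n - 1])
max(abs(x_rec - x))                              # ~1e-15: exact up to rounding

The catch is the seed: without the first 12 true values, the recursion has a twelve-dimensional family of solutions, which is exactly the information the averaging threw away.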

RalphB
May 26, 2014 11:27 pm

“ossqss” posted a breezy video suggesting that “the universe is entirely mathematical.”
After naming some of the useful things one can do with mathematical descriptions of nature, the narrator, Max Tegmark, remarks, “Some think it means nothing profound or that math is something we’ve made up to be useful, others think it means there is something fundamentally mathematical about nature, and the most extreme possibility is that our universe is completely mathematical in the sense that it has no properties except mathematical properties.”
The argument is that since everything is made of elementary particles and that such particles only have (conflation warning) “properties” such as -1, 1/2, and 1, i.e. properties called charge, spin, and lepton number but the particle (hand waving warning) “doesn’t care” what we call the properties, they are [“really”, as Plato would say] just numbers — mathematical properties — and they have no properties except mathematical properties.
Now, I don’t have space and time to thoroughly ridicule the philosophic blunders in this view, starting with the fallacies of reductionistic materialism, but it originated not with modern physics, but rather with a cult that combined mysticism and mathematics, founded about 2530 years ago by Pythagoras of Samos. Some two hundred years later, Aristotle took on the Pythagorean idea and summarized his analysis as follows:
“But the Pythagoreans, because they saw many attributes of numbers belonging to sensible bodies, supposed real things to be numbers — not separable numbers, however, but numbers of which real things consist. But why? Because the attributes of numbers are present in a musical scale and in the heavens and in many other things. But those who say that mathematical number alone exists cannot according to their hypotheses say anything of this sort; indeed, they used to say that those numbers could not be objects of the sciences. But we maintain that they are, as we said before. And it is evident that the objects of mathematics do not exist apart; for if they existed apart their attributes would not have been present in bodies. The Pythagoreans in this point are open to no objection; but in that they construct natural bodies out of numbers, things that have lightness and weight out of things that have not weight or lightness, they seem to speak of another heaven and other bodies, not of the sensible. But those who make number separable assume that it exists and is separable because the axioms would not be true of sensible things, while the statements of mathematics are true and delight the soul; and similarly with the magnitudes of mathematics. It is evident, then, both that our contrary theory will say the contrary of this, and that the difficulty we raised just now, why if numbers are in no way present in sensible things their attributes are present in sensible things, is solved for those who hold our views.”
Aristotle anticipates the problem of conflating the measurement of a property with the property itself: you can measure mass with a number but you cannot replace it with a number!
A further objection along Aristotelian lines is that (applied) mathematics *only describes* things, i.e., applied mathematics is predicated of things and things are particulars, whereas what can describe things are universals, abstract concepts that apply to an indeterminate number of particulars of a given kind. It would be hard to imagine anything less particular than a mathematical object.
In short, mathematical objects are abstract objects — concepts — and assuming abstract objects have the power to do real things in the real world is a classic example of the fallacy of reification. Speaking of which, treating climate models as if they are equivalent to what they attempt to explain, i.e. the particular known as Earth’s climate, also commits the fallacy of reification.

Greg
May 27, 2014 1:05 am

Grant Foster and Nick give the correct explication. It’s the short periods that you lose with the averaging, not the 11-year ones.
As I pointed out before, the three peaks here indicate something like a 132 y modulation of the amplitude of the basic 11 y cycle. Seen like that, you would not expect the presence or the separation of the peaks to be lost by the annual averaging. The 11 y cycle is still there at the same period, and how its amplitude changes long term will not be affected.
My suggestion actually was to calculate with smaller frequency steps in the circa 11 y range, to enable a more precise read-off of where the peaks lie.
Ultimately the resolution is limited by the frequency resolution in the data (given by the sample interval and the length of the data, as explained by Mr T), but you want to read this off as accurately as possible. Currently you’re plotting at frequency intervals of 0.083 (year)^-1, which is a bit crude and jagged.
When you get down to periods around 5 years (e.g. to find the 5.5 y harmonic of the 11 y peak), the data has far better resolution than what you’re currently plotting.
Ultimately it would make sense to use a different interval at each step, related to a frequency interval rather than a time interval. That would be pretty simple to do and would save a lot of time wasted doing very closely spaced points at the long periods.
For the same run time you could reduce the wastage at the long end and extract more of the high resolution that the data can offer at the short end.
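Concretely, something like this in R (record length made up for illustration):

# Scan on an even grid in frequency, then convert to periods; this puts the
# closely spaced points at the short periods, where the data can support them.
T_total <- 300                           # record length in years (assumed)
f_step  <- 1 / T_total                   # the natural frequency resolution
freqs   <- seq(1/100, 1/2, by = f_step)  # cycles per year: periods 2 to 100 y
periods <- 1 / freqs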
