**Guest Post by Willis Eschenbach**

I’ve developed a curious kind of algorithm that I’ve humorously called “slow Fourier analysis”, but which might be better described as a method for direct spectral analysis. The basic concept is quite simple. I fit a sine wave with a certain cycle length, say 19 months, to the dataset of interest, and I note the peak-to-peak amplitude of this best-fit sine wave. I repeat this process for the various cycle lengths of interest, and put that all together as a periodogram. The periodogram shows how large a sine wave best fits the data at each cycle length. This method has worked a treat, but yesterday, this odd method of mine handed me a big surprise.
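In outline, the method is just a least-squares sine fit repeated over a grid of trial periods. Here is a minimal sketch of the idea in Python (illustrative only — the names are mine, not the R code used for the figures below):

```python
import numpy as np

def best_fit_amplitude(t, y, period):
    """Peak-to-peak amplitude of the best-fit sine wave with the given period."""
    w = 2 * np.pi / period
    # Fit a*sin + b*cos + offset by ordinary least squares
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    a, b, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return 2 * np.hypot(a, b)           # peak-to-peak = twice the amplitude

def periodogram(t, y, periods):
    """Best-fit amplitude at each trial period."""
    return np.array([best_fit_amplitude(t, y, p) for p in periods])

# Sanity check: a pure 11-unit cycle with amplitude 3 (peak-to-peak 6)
t = np.arange(300.0)
y = 3 * np.sin(2 * np.pi * t / 11)
print(best_fit_amplitude(t, y, 11))     # ≈ 6.0
```

Because the sine and cosine terms together absorb any phase, the fit recovers the amplitude regardless of where the cycle starts.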

Now, I have been doing these periodograms using only the time periods inherent in the data. So if the data was monthly, I’ve looked at all possible integral lengths of monthly cycles, starting with two month, three month, four month, five month cycles, and so on up to cycles with periods as long as a third of the number of months in the data.

And on the other hand, if the data was yearly, I’ve looked at all possible integral lengths of yearly cycles, starting with two year cycles, three year cycles, four years, and so on up to a third of the number of years in the data.

(I mistrust reports of the longer cycles, and any cycle with a period longer than a third of the dataset is not really worth calculating—it will fool you every time.)

So I was going along in blissful ignorance when an alert WUWT reader, Greg Goodman, pointed out that there was no reason to restrict myself to integral periods. He noted that I could subdivide the results and gain greater resolution. I was dismissive of his idea. I agreed that it could be done, and that it was an interesting idea, but I said I didn’t think you would gain any real resolution by doing that. Hang on, let me get my exact words and the graphic I used in that thread to illustrate my point … here’s what I said:

In any case, I’m still not convinced that the procedure will give any real increase in information. Here’s the difference, for example, between monthly and annual data:

I think that if I increase the resolution on the sunspot periodogram while still using the annual data, it won’t look anything like the real monthly data … I’ll report back on that one.

Well … this is the promised “report back”. As you can see, both the monthly and the annual data show the peak at 11 years. The monthly data shows more detail, as it resolves the individual peaks at 10, 11, and ~11.9 years. I thought that the test I outlined would be a good one: I’d see what happened when I sampled annual data at monthly intervals. So here’s what I did. I took the SIDC monthly sunspot data. For the annual data, I didn’t use the SIDC annual data. Instead, I averaged the SIDC monthly data to give me annual data that I knew was a true annual average of that particular monthly dataset. Figure 2 shows the monthly data in red, and the annually averaged monthly data in black.

*Figure 2. Monthly SIDC sunspot data (red), and the same monthly SIDC data annually averaged (black).*

So to be clear, the data shown as the black line is the annual average of the monthly data shown as the red line.

Next, I modified my periodogram-generating computer function to allow for fractional time sampling of the data.

Finally, I calculated three periodograms. One is the periodogram of the annually averaged data, shown in yellow/black below. The second is the periodogram of the underlying monthly data, shown in blue.

And finally, I calculated the third periodogram (shown in cyan) using the annually averaged data, *but sampled on a monthly basis* … Figures 3 and 4 show those results. I must confess, when I saw them my jaw hit the floor.

*Figures 3 and 4. Upper panel shows the periodogram of annual sunspot data (yellow/black), monthly sunspot data (blue), and monthly-sampled annual data (cyan). Lower panel is a closeup of the upper panel, showing the period from seven to twenty-five years.*

You can see why my jaw hit the floor. The blue line (periodogram of actual monthly data) is almost identical to the cyan line (periodogram of annual data sampled at monthly intervals). Who knew?

It turns out that contrary to my expectation, the information about the strength of the monthly cycles is NOT lost when the individual monthly values are subsumed into annual averages. Somehow, that information still exists and can be retrieved. When I take the annual data and analyze it at monthly frequencies, the information is still there. Not exactly, to be sure; as you can see, the process is not perfect or precise.

But it is astoundingly accurate. I absolutely didn’t expect that.

Now, I suppose some signal analysis guy will pop up and say he knew that all along and abuse me for my colossal ignorance … but I taught myself this stuff, and all I can say is, it sure was a surprise to me.

Truly, I don’t understand this result. For many, many years I’ve thought that when you average data in chunks, like say averaging monthly data into yearly data, that all of the monthly information was gone. Lost. Kaput. Irretrievable.

But that doesn’t seem to be the case at all. It seems that very little of the power spectrum information was lost as a result of the annual averaging process.

Naturally, this brings up the next question—is this procedure invertible? That is to say, given a periodogram such as the one above, can I run the process in reverse? Can I start from the annual data, calculate the monthly periodogram from that annual data, and then invert the periodogram to give me back the monthly data? That would be really amazing … but I’m pessimistic.

My guess is no, the periodogram can’t be inverted to give me back the monthly data … but given how poor my last pathetic guess (at the head of this post) was, I’ll certainly give it a try. Any assistance gladly accepted.

Like I said above … once again all I can say is, I’ll report back on that one.

Regards to all,

w.

**Acknowledgement:** My thanks to WUWT reader Greg Goodman for the suggestion to investigate fractional time periods.

**For Clarity:** If you disagree with something someone says, **please quote the exact words you disagree with.** It avoids all kinds of misunderstandings.

**Data and Code:** I’ve collected the R code, the R functions, and the data into a small (22KB) zipped folder called “Oversampling Folder”. It includes the monthly and annual sunspot data as CSV files. If you change your R workspace to the folder, it should be turnkey.

For the original data:

SIDC Monthly Sunspot Data. Note that, following the advice of Leif Svalgaard, I have increased all of the pre-1947 SIDC sunspot values by 20%, to correct for the change in counting methods at that time. The change is immaterial for this analysis.

**Changes In The R Function:** The function to generate the periodogram, named “sineptop” (for sine peak-to-peak), was previously called like this:

sineptop(annual_data, frequency = 1)

or like this:

sineptop(monthly_data, frequency = 12)

The “frequency” variable identifies the data as having that many samples per year.

It can still be called like that. In addition, however, the new sineptop syntax for e.g. monthly sampling an annual dataset looks like this:

sineptop(annual_data, frequency = 1, by = 1/12)

If the “by” variable is not specified it is assumed to be 1, so you don’t need to specify it, and the original syntax still works.

You cannot use this method. The reason is that the resolution of a Fourier analysis is limited to B=1/T where T is the total length of the sample. Therefore in a 1000 year sample your frequency lines are spaced at 1/1000 y-1. The only valid frequency lines are for integer numbers of cycles in the total sample.

I suggest you read up on Fourier.

I salute your intellectual integrity in disclosing this. Far too many people would kick the rock off the peak and pretend that there was nothing to see.

I follow your posts as much for the writing as for the science. I always learn something, and I’m always entertained by the responses. It seems the mouth breathers aren’t as prevalent in the responses these days, or maybe I’m missing something.

Would you care to explain how Willis reconstructed a fairly complicated waveform from averaged data with a flawed method?

Greg Locock says:

May 26, 2014 at 7:39 pm

Thanks, Greg. Your objection reminds me of the old Soviet joke, where the Commissar looks at some brand new idea and says “It works perfectly in practice, Comrade, but I assure you it will never work in theory …”

You see, my method works quite well in practice, no matter how much you claim I “cannot use” it in theory …

w.

Send this off to Steve McIntyre

The method you describe is very clever. It’s also known (in the astronomical literature at least) as the Date-Compensated Discrete Fourier Transform, or DCDFT (Ferraz-Mello, S. 1981, Astron. J., 86, 619). It’s related to the more popular (but in my opinion less effective) Lomb-Scargle modified periodogram (Scargle, J. 1982, Astrophys. J., 263, 835). You have also chosen to plot the *amplitude* spectrum rather than the *power* spectrum, a habit which is uncommon but certainly valid.

It’s no surprise that the fine detail in the periodogram is not lost when passing from monthly to annual averages. The “density” of information in the frequency domain depends on 1/T, with T the total time span covered by the data (actually, T = N dt with N the number of data and dt their time spacing). Transforming from monthly to annual averages doesn’t change the total time span T, so it doesn’t remove information which seems to you to be “in between” the frequencies you tested when restricting to periods which are integer multiples of a single year. It was a mistake to restrict to periods which are integer multiples of a year in the first place.

What *is* lost when passing from monthly to annual averages is the *high* frequency information. With evenly sampled data, the entire information content of the periodogram is restricted to the frequency range from 0 to 1/(2 dt) (the “Nyquist frequency”). For monthly data, dt = 1/12 so the Nyquist frequency is 6, but for annual data, dt = 1 so the Nyquist frequency is 0.5. Passing from monthly to annual averages therefore loses the information in the frequency range from 1 to 6 cycles/year, but does not lose any of the information in the frequency range from 0 to 1 cycle/year.
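This point is easy to verify numerically. In the hedged Python sketch below (toy data, not the SIDC series), an 11-year cycle survives annual averaging almost untouched, while a half-year cycle, which lies above the annual Nyquist frequency, vanishes entirely:

```python
import numpy as np

# Toy monthly series: a slow 11-year cycle plus a fast half-year cycle
months = np.arange(12 * 240)                 # 240 years of monthly samples
t = months / 12.0                            # time in years
slow = 50 * np.sin(2 * np.pi * t / 11.0)
fast = 30 * np.sin(2 * np.pi * t / 0.5)
y_monthly = slow + fast

# Annual averages: block means of each calendar year
y_annual = y_monthly.reshape(-1, 12).mean(axis=1)
t_annual = np.arange(240) + 0.5              # mid-year time stamps

def amplitude(t, y, period):
    """Amplitude of the best-fit sine at a given period (least squares)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0][:2]
    return np.hypot(a, b)

print(amplitude(t, y_monthly, 11.0))         # ≈ 50: from the monthly data
print(amplitude(t_annual, y_annual, 11.0))   # ≈ 49.3: survives the averaging
# The fast cycle averages to zero within each year, so it is simply gone:
print(np.max(np.abs(y_annual - slow.reshape(-1, 12).mean(axis=1))))  # ≈ 0
```

The 11-year amplitude is attenuated only by the factor sin(π/11)/(12 sin(π/132)) ≈ 0.987, which is why the annual and monthly periodograms agree so closely at the periods of interest.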

The information content in a periodogram is evenly distributed in frequency space, but not in period (inverse frequency) space. Therefore I recommend plotting periodograms with frequency rather than period as the abscissa.

Incidentally, the DCDFT isn’t really necessary when the data are evenly sampled in time and there are no missing values; in that case the plain old DFT (discrete Fourier transform) is just fine (and is faster to compute, a palpable benefit with large data sets). Methods like the DCDFT and modified periodogram were developed specifically to address issues which arise when the time sampling is uneven.

Sean Peake says:

May 26, 2014 at 7:56 pm

Thanks, Sean. I suspect he’ll see it here …

w.

Willis,

I was in your boat concerning the resolution loss with averages. Incredible.

Very Nice sir, very nice.

hi Willis,

The Convolution Theorem says that the Fourier transform of convolved functions, F(A * B * C), is equal to the frequency-by-frequency product F(A) x F(B) x F(C). Let A be the monthly data, B the one-year-averaging boxcar filter, and C one of your “slow Fourier” sine waves. Unless C has quite a short period, running a one-year averaging filter over it will not change it significantly, so F(B) x F(C) ~ F(C). Thus F(A * B * C) ~ F(A * C), which is what you observed.
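The argument can be checked directly: a one-year boxcar barely dents an 11-year sine but annihilates a half-year one. A quick illustrative sketch (toy numbers, not the sunspot data):

```python
import numpy as np

t = np.arange(1200) / 12.0                  # 100 years of monthly samples
boxcar = np.ones(12) / 12.0                 # B: the one-year averaging filter

slow = np.sin(2 * np.pi * t / 11.0)         # C with a long period
fast = np.sin(2 * np.pi * t / 0.5)          # C with a short period

# Running the one-year filter over the slow sine barely changes it,
# so F(B) x F(C) ~ F(C) for the periods of interest ...
print(np.convolve(slow, boxcar, mode='valid').max())          # ≈ 0.987
# ... but it wipes out cycles shorter than the averaging window:
print(np.abs(np.convolve(fast, boxcar, mode='valid')).max())  # ≈ 0
```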

You wonder:

“My guess is no, the periodogram can’t be inverted to give me back the monthly data.”

I predict that if your Slow Fourier is densely sampled, you should be able to invert it (after it was run on the annual data) to get something like what you would have got if you had done a sliding average on the monthly data. Thus the month-to-month variations may be present (because a sliding boxcar average is a blunt instrument) but will be very attenuated. Filters are fun.

This finding makes sense intuitively. In image analysis people are repeatedly surprised at how little analytic power is lost from 3d image data after it is repeatedly downsized by binning and averaging.

Have you looked at wavelets? They could allow you to see whether the energy at a specific frequency is changing with time.

Thanks, OSSQSS, for this fun video about “Nothing Buttery.”

Willis,

I think all that is happening here is that your SFT sums annual values as a form of integration. That is actually a poor integration formula, and it hurts when you mix in a high-frequency sinusoid. When you interpolate first to monthly, as I assume you are doing, that’s equivalent to a trapezoidal integration, or maybe better.

The reason that this works is because it is, as Willis plainly admits (and shows in his code), not really a Fourier transform (and therefore doesn’t have all of the assumptions). It’s just a series of curve fits using defined sinusoidal forms, the outputs of which you use to define the strength of the period.

I think the reason you get the same results with the monthly sampling (side note: I don’t think that’s the best word here. Data are sampled. Your method fits over a grid of periods, and the grid isn’t “sampled” in the usual sense) is because your fits will use some kind of sum squared error term. You can aggregate chunks of the data by its mean value and see mostly identical fit results (within reason, of course) when using that optimization.

I’ve seen this with my own playing around with temperature sets, for example. You can see it on this plot, for example. The OLS fit for the monthly data is basically the same as the OLS fit for a 2 year running mean. You can kick it up to 5 years and only see a slight divergence.

This follows from intuition. Small aggregations of the data (relative to the total size of it) will appear very similar in terms of error to the optimizer. You can show this in excel quickly with some simple linear functions with normal error, then aggregate the points in certain ways and look at the results. You’ll first see how the fit line moves through the normal errors without aggregation as it tries to find the mean value at the center of the normal error. Then you’ll see that aggregation pulls the points with error closer to the true mean of the line, essentially doing the work the optimizer is trying to do (which is pointing out the mean of the line’s values at each point).

“Truly, I don’t understand this result. For many, many years I’ve thought that when you average data in chunks, like say averaging monthly data into yearly data, that all of the monthly information was gone.”

Yes, it is lost. But your result depends on what you replace it with. A numerical integration always involves some implied approx function. Your SFT implicitly treats it as locally constant – as if it has the average value through the year, then suddenly changed. Like a histogram. But the monthly data changes smoothly.

When you interpolate, you aren’t regaining lost data. But you are replacing it by a much better approximation. Although you don’t know what it is, you know that it is smoother than the chunk approx.

Your averaging is a digital low-pass filter.

You’ve used 12 taps, with uniform coefficients of 1/12th. Off the top of my head, you should only see a very slight reduction at periods around 120 samples. IIRC, the -6 dB point is 24 samples, and anything longer than that will have a smaller reduction.
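The frequency response of an N-tap uniform moving average is the Dirichlet kernel sin(πfN)/(N sin(πf)). A short sketch to put numbers on this (periods measured in samples, i.e. months here):

```python
import numpy as np

def boxcar_gain(period_samples, taps=12):
    """Gain of an N-tap uniform moving average at the given period
    (the Dirichlet kernel), with period measured in samples."""
    f = 1.0 / period_samples                     # frequency, cycles/sample
    return abs(np.sin(np.pi * f * taps) / (taps * np.sin(np.pi * f)))

print(round(boxcar_gain(132), 3))   # 11 years of monthly data: 0.987
print(round(boxcar_gain(24), 3))    # 2-year period: 0.638
print(round(boxcar_gain(12), 3))    # 1-year period: 0.0 (a null of the filter)
```

By this formula the 24-sample point is nearer -4 dB than -6 dB, but the qualitative picture is as described: cycles much longer than the averaging window pass through almost unattenuated.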

The monthly variation shows up in the high frequencies (except for a bit of leakage) not shown in your plots. The variation you see between the “standard” frequencies of the discrete Fourier transform (DFT) is mainly related to the beginning and end of your time series; you can modify it by applying a taper. So that variability is largely deterministic. It has nothing to do with your “least squares” approach: for the standard DFT frequencies, least squares gives the same result as the DFT (or FFT, which is just an implementation) because of orthogonality. Least-squares approaches are normally used for irregularly sampled data; see http://en.wikipedia.org/wiki/Least-squares_spectral_analysis#The_Lomb.E2.80.93Scargle_periodogram. Success.
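The orthogonality point is easy to demonstrate: at a standard DFT frequency k/N, a least-squares sin/cos fit returns exactly the DFT amplitude. A small sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
t = np.arange(N)
y = rng.normal(size=N)

# Least-squares sin/cos fit at a standard DFT frequency k/N ...
k = 5
w = 2 * np.pi * k / N
X = np.column_stack([np.cos(w * t), np.sin(w * t)])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]

# ... gives exactly the DFT amplitude at bin k (which is 2|Y_k|/N)
Yk = np.fft.fft(y)[k]
print(np.hypot(a, b) - 2 * abs(Yk) / N)   # ~0: identical by orthogonality
```

At frequencies *between* DFT bins the sin/cos columns are no longer orthogonal to the other components, and the two approaches part ways.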

“information about the strength of the monthly cycles is NOT lost when the individual monthly values are subsumed into annual averages. Somehow, that information still exists and can be retrieved.”

This information preservation feature is pretty much why MP3 music sounds reasonably good (assuming you like what was encoded), and JPEG photographs work well.

Nature is rich with periodic functions of short duration (music) or limited dimensions (jpeg photographs). These can be represented with a discrete cosine transform.

As others point out, wavelets are like a Fourier transform but of limited duration. A wavelet image compression algorithm was invented and is probably superior, but it was patented, so hardly anyone ever used it (including me — I have access to it, but why use what no one else is using?).

Suppose you plot a sine wave of frequency 10 and you sample it at frequency 44. The dots (sample points) are not going to land on the inflection points of your sine wave but will “walk” up and down the slopes, only rarely hitting either a peak or an inflection point. These points *describe* the sine wave, but they are NOT the actual sine wave.

Because these samples are not the actual signal, you can do things to them that are probably not possible in analog signal systems. Sharp cutoff filters are easy to implement in a DSP (Digital Signal Processor) by converting to the frequency domain, chop off what you don’t want, then convert back to the time domain. Doing this introduces artifacts of course that have analog analogues.

If you smooth data, the missing data points nevertheless make their influence felt in the resulting smoothed curve. Because of that, your curve fitting algorithm will “peak” on the frequencies (or periods) of the missing data, but reduced in amplitude by pretty much exactly the amount of smoothing that was applied. That’s experience talking, I certainly cannot prove it mathematically but it would probably be amenable to an experiment.

Anything that alters the shape of the data will produce a measurable artifact. For instance, if you have a 15 month rolling average, I would expect to see a 30 month “signal” in the data as new “outliers” come into the window and then drop off 15 months later, producing a 15-month long “step” in amplitude. Using a 12 month rolling average will create an artifact but will doubtless be masked by common annual signals in many aspects of nature.

Michael D says:

May 26, 2014 at 8:04 pm

Thanks, Michael. I’m not sure I understand that. For me, “convolution” means sweeping a filter along the length of a dataset, with the value of the filter applied to each point in sequence. But although the annual averaging is a boxcar average, is that convolution as well?

Like I say, I taught myself all of this, I’ve had one year of college calculus and that’s it, so there are large holes in my mathematical education …

That makes it sound like you could “un-average” the data, which still seems like magic to me … but I was born yesterday, what do I know? I was thinking maybe my method wasn’t invertible because the decomposition is not orthogonal … any clues on how to do the inversion with a non-orthogonal decomposition?

Also, what do you mean by “densely sampled”?

w.

So you have answered your question on 11-year sunspot cycles? ((2010-1750)/25 = 10.4) Does this mean that the Schwabian cycles are a fact? I love this blog and get great pleasure from your posts. Congratulations!

You’re still not applying a window before doing the convolving, so you’re convolving in the edges (start/end data points), which line up pretty well, so the artifacts are smaller here. That’s just luck, not proper methodology …

Boxcar averages are also not much used in modern signal processing because they have all sorts of aliasing and distortion properties.

In the last few months I have posted a number of Application Notes relating to “smoothing” and its counterpart, “unfiltering”, more traditionally called deconvolution or equalization. Above, some folks wonder if a moving average can be inverted. Yes it can. Pretty much as you invert an integration with a differentiation:

“Inverting a Moving Average Operation”

http://electronotes.netfirms.com/AN407.pdf

Two other notes are AN408 “Examples of Inverting of Smoothers” and AN409 “Changing the Moving Average Length – A Recipe”. These can be had by just changing the number in the URL.

This practitioner-level material is really “Intermediate Digital Signal Processing” – too complicated for an “introduction” but short of research-level. No one has written, or likely ever will write, a book on intermediate DSP! Everyone is a beginner, or an expert.
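The invertibility claim, at full data density, can be sketched in a few lines (an illustrative Python toy, not the method in the Application Notes): each new output of an N-point moving average brings in exactly one new input sample, so given the first N−1 inputs you can peel the rest back out recursively.

```python
import numpy as np

def moving_average(x, n=12):
    """Causal n-point moving average: output[k] = mean(x[k : k+n])."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[n:] - c[:-n]) / n

def invert_moving_average(y, x_head, n=12):
    """Recover x from its n-point moving average, given the first n-1
    original samples (the initial state the filter consumed)."""
    x = list(x_head)
    for k in range(len(y)):
        # y[k] averages x[k] .. x[k+n-1], so the newest sample is:
        x.append(n * y[k] - sum(x[k:k + n - 1]))
    return np.array(x)

rng = np.random.default_rng(1)
x = rng.normal(size=120)
y = moving_average(x, 12)
x_rec = invert_moving_average(y, x[:11], 12)
print(np.max(np.abs(x_rec - x)))   # ~0: exact recovery up to rounding
```

Note the catch for the annual-average case: annual values keep only one of every twelve filter outputs, so each step of this recursion would then have eleven unknowns, and the trick no longer works.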

“ossqss” posted a breezy video suggesting that “the universe is entirely mathematical.”

After naming some of the useful things one can do with mathematical descriptions of nature, the narrator, Max Tegmark, remarks, “Some think it means nothing profound or that math is something we’ve made up to be useful, others think it means there is something fundamentally mathematical about nature, and the most extreme possibility is that our universe is completely mathematical in the sense that it has no properties except mathematical properties.”

The argument is that since everything is made of elementary particles and that such particles only have (conflation warning) “properties” such as -1, 1/2, and 1, i.e. properties called charge, spin, and lepton number but the particle (hand waving warning) “doesn’t care” what we call the properties, they are ["really", as Plato would say] just numbers — mathematical properties — and they have no properties except mathematical properties.

Now, I don’t have space and time to thoroughly ridicule the philosophic blunders in this view, starting with the fallacies of reductionistic materialism, but it originated not with modern physics, but rather with a cult that combined mysticism and mathematics founded about 2530 years ago by Pythagoras of Samos. Some two hundred years later, Aristotle took on the Pythagorean idea and summarized his analysis as follows:

“But the Pythagoreans, because they saw many attributes of numbers belonging to sensible bodies, supposed real things to be numbers — not separable numbers, however, but numbers of which real things consist. But why? Because the attributes of numbers are present in a musical scale and in the heavens and in many other things. But those who say that mathematical number alone exists cannot according to their hypotheses say anything of this sort; indeed, they used to say that those numbers could not be objects of the sciences. But we maintain that they are, as we said before. And it is evident that the objects of mathematics do not exist apart; for if they existed apart their attributes would not have been present in bodies. The Pythagoreans in this point are open to no objection; but in that they construct natural bodies out of numbers, things that have lightness and weight out of things that have not weight or lightness, they seem to speak of another heaven and other bodies, not of the sensible. But those who make number separable assume that it exists and is separable because the axioms would not be true of sensible things, while the statements of mathematics are true and delight the soul; and similarly with the magnitudes of mathematics. It is evident, then, both that our contrary theory will say the contrary of this, and that the difficulty we raised just now, why if numbers are in no way present in sensible things their attributes are present in sensible things, is solved for those who hold our views.”

Aristotle anticipates the problem of conflating the measurement of a property with the property itself: you can measure mass with a number but you cannot replace it with a number!

A further objection along Aristotelian lines is that (applied) mathematics *only describes* things, i.e., applied mathematics is predicated of things and things are particulars, whereas what can describe things are universals, abstract concepts that apply to an indeterminate number of particulars of a given kind. It would be hard to imagine anything less particular than a mathematical object.

In short, mathematical objects are abstract objects — concepts — and assuming abstract objects have the power to do real things in the real world is a classic example of the fallacy of reification. Speaking of which, treating climate models as if they are equivalent to what they attempt to explain, i.e. the particular known as Earth’s climate, also commits the fallacy of reification.

Grant Foster and Nick give the correct explication. It’s the short periods that you lose with the averaging, not the 11-year ones.

As I pointed out before, the three peaks here indicate something like a 132y modulation of the amplitude of the basic 11y cycle. Seen like that, you would not expect the presence or the separation of the peaks to be lost by the annual averaging. The 11y is still there at the same period, and how its amplitude changes long term will not be affected.

My suggestion actually was to calculate with smaller frequency steps in the circa 11y range to enable a more precise read-off of where the peaks lie.

Ultimately the resolution is limited by the frequency resolution in the data (given by sample interval and length of data, as explained by Mr T), but you want to read this off as accurately as possible. Currently you’re plotting at frequency intervals of 0.083 (year)^-1, which is a bit crude and jagged.

When you get down to around 5 (e.g. to find the 5.5 y harmonic of the 11y peak), the data has way better resolution than what you’re currently plotting.

Ultimately it would make sense to use a different interval at each step, related to a frequency interval rather than a time interval. That would be pretty simple to do and would save a lot of time wasted doing very close points for the long periods.

For the same run time you could reduce the wastage at the long end and extract more of the high resolution that the data can offer at the short end.

Willis, I think Fred Haynie did something very similar a year or two ago. He fitted sine waves to the past temp record to show the possible future. Not sure though. Everything’s a bit hazy at my age.

Bernie Hutchins says: “Inverting a Moving Average Operation”

http://electronotes.netfirms.com/AN407.pdf

Thanks. I will read that in detail later. I’m currently working on an optical problem which is equivalent to two running means blurring an optical spectrogram. I’m currently deconvoluting with sinc squared (the two being nominally equal). Deconvolution tends to blow up the noise, and it’s already rather a noisy spectrum. I’ll see whether I can find a way to improve it from your paper.

Do you have anything more advanced on that technique?

Willis “But although the annual averaging is a boxcar average, is that convolution as well?”

Very much so. A running mean is one of the most simple convolutions. It is equivalent to moving your hand when taking a photo: it blurs the result. An annual average is exactly the same as a running mean that is then sampled once per year.
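This equivalence is exact and easy to confirm (a sketch with random toy data):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=120)            # ten years of toy monthly data

# Annual (block) averages of the monthly data ...
block = x.reshape(-1, 12).mean(axis=1)
# ... equal a 12-point running mean sampled once every 12 months
running = np.convolve(x, np.ones(12) / 12.0, mode='valid')
print(np.max(np.abs(block - running[::12])))   # ≈ 0 (up to rounding)
```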

Of general interest, the same thing applies to de-blurring and refocusing photos. A good explanation of blurring and de-blurring by convolution is given here.

http://www.fmwconcepts.com/imagemagick/cameradeblur/index.php

BTW one trick I’ve used in the past is convert time series data to 1 pixel high grayscale image and pass it to Imagemagick for “deblurring” or other FFT processing. It’s a useful cross-check on results.

CC Squid says:

May 26, 2014 at 10:05 pm

So you have answered your question on 11-year sunspot cycles? ((2010-1750)/25 = 10.4) Does this mean that the Schwabian cycles are a fact? I love this blog and get great pleasure from your posts. Congratulations!

====

Ah, thank you. That probably explains the poorly defined bump on the side of the central peak I noticed in Willis’ early plots, which I don’t find by more conventional processing. This is what windowing or “taper” functions are for. I should have thought of that straight away.

That was one reason why I was suggesting better plotting resolution would be good, since I could not determine clearly where that peak was.

You can see this bump in figure 4

Greg Locock says:

May 26, 2014 at 7:39 pm

You cannot use this method. The reason is that the resolution of a Fourier analysis is limited to B=1/T where T is the total length of the sample. Therefore in a 1000 year sample your frequency lines are spaced at 1/1000 y-1. The only valid frequency lines are for integer numbers of cycles in the total sample.

I suggest you read up on Fourier.

………………

Hi Greg, it is useless. Willis is not a reader. He is a writer.

Willis, you can recover *exactly* the monthly data if you deconvolve it. Because you know *exactly* the filter you convolved it with, you can exactly deconvolve it and get your original data back. That is, if you keep the density of data. If you throw out 11/12ths of the data then you lose some information.

Willis, I do have a challenge for you, though. You seem to be utterly convinced this method can give you data that the FT can’t (it cannot). Go back and do EXACTLY what you just did with the DFT and I think you’ll get similar results.

Willis,

You said:

[quote]

That makes it sound like you could “un-average” the data, which still seems like magic to me … but I was born yesterday, what do I know? I was thinking maybe my method wasn’t invertible because the decomposition is not orthogonal … any clues on how to do the inversion with a non-orthogonal decomposition?

[/quote]

I would agree with you: you can’t just add up the components after your projection, since the signals (basis vectors) you’re projecting on are not orthogonal. However, in practice, this is also not the case for Fourier transforms. In an FT, the components are only orthogonal if you have an infinite time window, so you get spurious frequencies there as well in practical applications.

A mathematically stricter solution, I guess, is a kind of principal component analysis; i.e. you find the strongest sine wave (its amplitude and phase), subtract it, find the second strongest signal of what’s left, subtract it, etc. You stop when there is nothing left but noise. All the components you find through this procedure can be added back together, even though they are not orthogonal.
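This greedy scheme is essentially what signal-processing folks call matching pursuit, and it can be sketched in a few lines (a hedged toy example, not production code):

```python
import numpy as np

def fit_sine(t, y, period):
    """Best-fit sin/cos pair at a fixed period; returns (fitted wave, amplitude)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef, float(np.hypot(*coef))

def iterative_decompose(t, y, periods, n_components=2):
    """Find the strongest best-fit sine, subtract it, and repeat on the
    residual; works even when the candidate periods are not orthogonal."""
    residual = y - y.mean()
    found = []
    for _ in range(n_components):
        fits = [fit_sine(t, residual, p) for p in periods]
        best = int(np.argmax([amp for _, amp in fits]))
        wave, amp = fits[best]
        found.append((float(periods[best]), amp))
        residual = residual - wave
    return found, residual

# Toy data: two cycles whose periods are not orthogonal over this record
t = np.arange(200.0)
y = 5 * np.sin(2 * np.pi * t / 11) + 2 * np.sin(2 * np.pi * t / 7 + 1.0)
found, _ = iterative_decompose(t, y, np.arange(2.0, 40.0, 0.5))
print(found)   # ≈ [(11.0, 5.0), (7.0, 2.0)]
```

The recovered components can then be summed back together, exactly as the comment suggests, even though the underlying sines are not mutually orthogonal.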

Frank

***

tamino says:

May 26, 2014 at 7:57 pm

***

I’m also “gobsmacked”. A reasonable & readable reply from tamino.

wsbriggs: “It seems the mouth breathers aren’t as prevalent in the responses these days, or maybe I’m missing something.”

yes it seems that the volume of political yabooing has dropped considerably. They must have got bored or worked it out of their systems.

If you convert the years to months, an 11-year cycle requires 132 sample points, whereas a year of data is 12 points. You would have to examine your graph from 2 months to 24 months to see any loss of resolution. Of course you couldn’t do this, because the yearly data does not contain this information, having a cut-off (shortest resolvable period) of 24 months. That is why your periodogram starts at 24 months (2 years). Any difference would be to the left of your starting axis.

Greg Locock says: The only valid frequency lines are for integer numbers of cycles in the total sample. I suggest you read up on Fourier.

“Valid” for what ?!

So if you have 100 years of data with a pure 30-year cycle, you would not regard fitting 30 years as “valid”. The only “valid” periods in the data would be 50, 25, 12.5 …

Merrick says: Willis, you can recover *exactly* the monthly data if you deconvolve it.

Have you ever actually done a deconvolution?

An excellent reply by Tamino, instructive and informative, as well as courteous. Imho he nails it, with extra insight on the applicability of this method.

Dare we hope that this cooperative spirit is an indication of more harmonious interactions in the future?

Annually-averaged data is just a bad low-pass filter. It attenuates high frequencies, but the frequencies you are looking at are low.

Filters have an effect in the time domain, but you can also consider them in the frequency domain: averaging -> attenuating high frequencies.

Willis, I don’t see how an ≈ 11 year cycle could have blossomed from the data unless it were there already. Give the data to Mikey…he’ll turn it into a hokey stick.

Now there’s some magic.

cn

Willis: “Thanks, Michael. I’m not sure I understand that. For me, “convolution” means sweeping a filter along the length of a dataset, with the value of the filter applied to each point in sequence. But although the annual averaging is a boxcar average, is that convolution as well? ”

What Michael D was trying to say is multiplication in time domain = convolution in freq domain and conversely multiplication in FD = convolution in TD.

Both multiplication and convolution can be done in any order.

It’s quite a neat result once you know what it all means.
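The duality Greg describes can be checked numerically with a toy example. This is my own sketch (not code from the thread), using a naive, dependency-free DFT:

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform (dependency-free)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circ_conv(x, h):
    """Circular convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

# Convolution in the time domain should equal multiplication in the
# frequency domain: DFT(x conv h) == DFT(x) * DFT(h), bin by bin.
x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 0.0, 0.0, 1.0]
lhs = dft(circ_conv(x, h))
rhs = [a * b for a, b in zip(dft(x), dft(h))]
```

The two lists agree to floating-point precision, which is the convolution theorem in miniature.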

Boxcar filter is a convolution. The annual sampling is not. The sampling is multiplication by a train of impulses (called the Shah function). In effect this convolves the frequency spectrum with the FT of the Shah, which, as it happens, is also a Shah but with reciprocal frequency spacing. What this does when you convolve the real spectrum is superimpose lots of copies of the real FT of the signal side by side. This is why you need to anti-alias filter BEFORE re-sampling. Once you’ve done it, the mix is done and you can’t filter it back out.

The problem with averaging is that the running-mean (RM) part, which is implicit, has its zero at twice the Nyquist frequency required for anti-alias filtering. Not only that, but it does not cut off cleanly below that; it leaks an INVERTED signal which will also alias.

Now, if all you have is random noise you can hope it all washes out, but if there is some periodic content between the Nyquist frequency and the RM’s zero, or in the leaky inverting lobe, it will alias.

This is why averaging (e.g. the ubiquitous monthly and annual means in climate) is not good practice.

Sometimes it’s small and does not matter but when you see a “periodic” in the result you never know.
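Greg’s point about the running mean’s leaky, inverting lobes can be seen directly in its frequency response (the standard Dirichlet-kernel form). This is a sketch of my own, not code from the thread:

```python
import math

def running_mean_response(f, m=12):
    """Signed frequency response of an m-point running mean at
    frequency f (cycles per sample): sin(pi*f*m) / (m*sin(pi*f)).
    Negative values are the INVERTED leakage Greg mentions."""
    if f == 0.0:
        return 1.0
    return math.sin(math.pi * f * m) / (m * math.sin(math.pi * f))

# For a 12-month running mean: first null at a 12-month period, but
# annual sub-sampling has a Nyquist period of 24 months -- content
# between the two leaks through, partly inverted.
null = running_mean_response(1 / 12)      # ~0: the filter's first zero
nyquist = running_mean_response(1 / 24)   # ~0.64: far from fully attenuated
lobe = running_mean_response(1.5 / 12)    # negative: inverted leakage
```

The response at the Nyquist frequency is still about 0.64, not zero, which is why the 12-month mean makes a poor anti-alias filter for annual sampling.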

I like Willis’ preemptive requirement.

“For Clarity: If you disagree with something someone says, please quote the exact words you disagree with. It avoids all kinds of misunderstandings.”

Of course it can avoid more than “misunderstandings”.

Hopefully it will also prevent the purposefully mendacious use of,

“So what you are really saying is…” method of distortion and straw man diversion.

The frequency sampling of the FFT is sample_rate/fft_size.

If you want a finer sampling frequency you can zero-pad the FFT, thereby making it longer.

The DFT (slow FFT) can do this, but the calculations are proportional to N^2 vs. N*log(N).

By way of example, say your data is 372 points long. Left-justify this data in a buffer that is 1024 points long and do the FFT. Your frequency sampling will be finer by the ratio of 1024/372.

In Matlab this would be something like this:

rbuf = zeros(1024,1);        % zero-filled 1024-point buffer

rbuf(1:372) = mydata;        % left-justify the 372 data points

freq_resp = abs(fft(rbuf));  % magnitude spectrum with finer frequency sampling

You can zero-pad out as far as you want. Just take care of the normalization (scaling) factor.

For some FFT’s you will have to correct the amplitude by the zero-padding ratio.

hi Willis,

What Greg Goodman just wrote makes sense to me. FWIW, I’m largely self-taught in filtering, but had lots of math education with grad degrees in physics. Other commenters are thus able to speak with more authority on this.

Imagine doing a full convolution using the annual-average “boxcar” filter: that would mean sweeping the boxcar month by month, taking the average from Jan01-Dec01, then from Feb01-Jan02, then Mar01-Feb02, etc. I think I called that (above) a “sliding boxcar average”. If you sub-sample that convolution data by saving only the points representing Jan-Dec (thus tossing out 11 of the 12 data points for each year) then that is (I think) a standard annual average, which is what you were working with, so that’s why I was thinking of your data as a “convolution” with the boxcar filter. It would be more accurate to call it a subsampled boxcar convolution, I suppose.

As a thought experiment, imagine “reconstructing” the sliding boxcar average from the subsampled boxcar. The best you could do would be to do a linear interpolation between the Jan01-Dec01 value and the Jan02-Dec02, etc. to estimate the intermediate months (Feb01-Jan02, Mar01-Feb02, etc.) values. The difference between the original sliding boxcar and the reconstructed boxcar represents the information that has been lost by subsampling. It is in the nature of a boxcar average that the difference will typically be about 1/12 of the signal in this case.

If you use a non-subsampled (i.e. “dense”) boxcar average, followed by your Slow Fourier, you probably haven’t lost much information (though the boxcar average is blind to certain frequencies – there are zeros in its frequency response). To reconstruct the original: arrange your Slow Fourier frequencies to be spaced evenly (i.e: 0, 1, 2, 3,…N) [cycles/(N years)] then do a Fourier transform on that set (if you are not working with complex numbers, the inverse Fourier is identical to the forward Fourier). With some re-scaling (maybe just 1/N) you should see the original pattern come back. You may need to create a “cosine” version of Slow Fourier, repeat the above using cosine Slow Fourier, and then add the sine and cosine results.

I don’t buy the “one year of college calculus” excuse – you are obviously a math whiz and you are close to doing “standard” Fourier-domain stuff; you’d probably enjoy delving into that some more. And that would connect you to a bunch of theorems that are helpful.

It is quite incorrect (as some are suggesting) to believe that frequencies are restricted to integer multiples, except as a particular analysis tool (such as FFT) might require this, and the “uncertainty principle” (limit of resolution) likewise depends on the analysis tool. “Rules” (and limitations) that apply to one tool may not apply to another tool, which has its own limitations.

Consider the three time samples:

[0 0.01745240643728 0.03489949670250]

These might look like a straight line, but the third one tips down ever so slightly. So you might “suppose” that they are the first few samples of a very low frequency sinewave. (They are in fact the sines of 0, 1, and 2 degrees.) If you try to fit them to [Sin(0) Sin(theta) Sin(2 theta) ] (i.e., solve for theta) you discover there are enough equations, but they are non-linear.

So try the FFT. The FFT lies! It says that there is a DC term and a frequency of 1/3 (red curve here). It never heard of the correct frequency 1/360 (blue curve).

Prony’s method (1795) allows us to solve for the exactly correct frequency in terms of the parameters of a network that could have produced the samples. Any bandlimited frequency is possible. For example, sqrt(2)/360, (or 1/3 for that matter).

Limitations on Prony? You have to guess the order correctly, and it doesn’t like noise, but it has no preferred frequencies. Each analysis method should be considered individually.
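For this tiny three-sample case, the Prony-style recovery can be sketched in a few lines (my own illustration, not the commenter’s code): equally spaced samples of sin(n·theta) obey the recurrence x[n+1] = 2·cos(theta)·x[n] − x[n−1], and with x[0] = 0 that solves directly for theta.

```python
import math

# The three samples quoted above: sin(0 deg), sin(1 deg), sin(2 deg)
x0, x1, x2 = 0.0, 0.01745240643728, 0.03489949670250

# Samples of sin(n*theta) satisfy x[n+1] = 2*cos(theta)*x[n] - x[n-1].
# With x0 = 0 this reduces to cos(theta) = x2 / (2*x1).
theta = math.acos(x2 / (2.0 * x1))
freq = math.degrees(theta)  # degrees per sample; period = 360/freq samples
```

This recovers 1 degree per sample, i.e. the period of 360 samples that the FFT “never heard of”, with no preferred frequency grid.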

As I re-read what I just wrote, I realize that I’ve been assuming that your “best fit” of the sine wave was akin to a convolution of the sine and cosine waves with the signal. But there are many ways to do best fits, so that is probably a bad assumption. Also if you are fitting the phase of the sine wave, then you are fitting both a cosine and a sine wave, so you don’t need a “cosine Slow Fourier” you just need to split the sine wave into cos and sin waves with different amplitudes and zero phase:

sin(wt + phi) = cos(phi)*sin(wt) + sin(phi)*cos(wt)

Good Day Willis,

You seem to be falling into the “average kid trap”. The average 5th grader is 5 feet tall and weighs 110 pounds. So, we put 30 desks of the appropriate size in a classroom and expect them to fit the 30 kids that show up. When it is a bad fit, blame the kids, the parents, the budget, the voters…

You are looking for an 11 year sun/climate correlation, because the average solar cycle is 11 years. Solar cycles come in a variety of lengths and intensities. (And, probably vary in other ways we don’t know about or measure yet, but, I digress) If I was going to look for a correlation between solar cycles and climate I would look at the type of cycle.

I would start by categorizing the cycles into 3 groups: shorter than average, average and longer than average. Do groups of shorter cycles lead to warming? Just eyeballing the data, it looks like it might. This also requires looking at how we measure the length of a cycle (since they overlap). Does it matter how we choose the dates a cycle begins and ends? Does it matter how long the overlap is?

I would also look at the number of sunspots smoothed over 11 years. In case I’m not using the term correctly: sunspots for 2005 = the annual spots for 2000 through 2010 divided by 11. Just eyeballing it, sustained low sunspot activity seems to link to cold or cooling periods.

I would also look at where the earth was in relationship to the sun at solar maximum. Does it matter if the solar max happens during summer or winter solstice? (Given that the southern hemisphere is covered by mostly ocean and the northern by mostly land.)

I would also look at the length and intensity of the solar minimums.

PS. Whatever drives global climate is certainly more than just the sun, but when a unified theory of climate is finally established, I do expect it will begin with the sun.

*I’m a civilian and not a scientist, if common sense has led me astray, I would appreciate correction.

this post should have a warning … no science follows but statistics in spades …

“Thanks, Michael. I’m not sure I understand that. For me, “convolution” means sweeping a filter along the length of a dataset, with the value of the filter applied to each point in sequence. But although the annual averaging is a boxcar average, is that convolution as well?”

You actually have to apply two steps – first convolve with the boxcar filter, then multiply by a function that contains a unit impulse (delta function) in the middle of every year. This will sample the running boxcar filter annually.

As noted above, convolution and multiplication are Fourier transforms of one another, so the effect on the frequency spectrum is to first multiply by the transform of the one-year boxcar (a sinc function) and then convolve with the transform of the sampling function (which is another sampling function with inverse frequency). The sinc function is broad, and only cuts off the highest frequencies (although it will distort lower frequencies because it’s not constant), and the transformed sample function has widely separated peaks, which pastes copies of the entire spectrum at regular intervals in frequency. Both of these only mess up the high-frequency end, which you have cut off anyway, since you’re only plotting periods of 2 years and above.

Both convolutions and multiplications can be inverted *so long as they are not zero*. (You just look at it in whichever domain involves multiplication by a fixed function, and divide by it instead.) The Sinc function crosses the axis (is zero) at regular intervals, and the sampling function is zero along most of its length. Because of the latter, you can only invert if the bandwidth is less than the Nyquist frequency, and because of the former, you can’t reconstruct frequency components with periods that are exact integer fractions of a year, although you can theoretically reconstruct the rest of the signal.

Because your signal is of only finite duration, which is equivalent to taking an infinitely long signal and multiplying by a boxcar function the length of your data, this has the effect of convolving your spectrum with a correspondingly narrow Sinc function, blurring the fine details. This blurring spreads the nulls out somewhat, and means it’s difficult to reconstruct frequencies very close to the nulls, as well. And if there’s any measurement noise or rounding, this gets magnified around the nulls by deconvolution, spoiling your result.

{ for a time series x with n elements and filter series f with m elements, circular convolution }

{ I typed this from memory… }

{ m should be odd }

{ if f[..] coefficients are set to 1/m we have a box-car filter }

m2 := m div 2;

for i := 0 to n-1 do
begin
  xf[i] := 0;
  for j := -m2 to +m2 do
    xf[i] := xf[i] + x[(i+j+n) mod n] * f[j+m2];
end;

C I miss FORTRAN.

Another rather excellent thread Willis!

It certainly stirred more dim memories for me than many other articles have.

Grant Foster (Tamino): Thank you for dropping in and leaving a solid explanation!

At the danger of getting my head blown off:

The method of interpolation from a DFT is to expand the transform by adding zeros to the high frequency end of the spectrum, effectively increasing the sampling rate (care has to be taken with end effects here). This is equivalent to using a filter that has a rectangular, wholly real frequency response.

It is easily shown that this is equivalent to convolving the signal with a sinc function:

sin(x)/x

whose frequency is chosen so that the spacing of the sinc function’s zeros equals the original inter-sample period. This means that you will be adding zero at the actual samples but reconstructing the gaps. The sample itself is not altered, because sinc(x) tends to 1 as x tends to zero (though sin(x)/x is undefined at that point). One can show that this is valid using distribution theory. If you have an irregularly sampled signal, this method breaks down.

However, as regards inverting the “transform”, if you have calculated the coefficients correctly, you should be able to reconstruct the original signal EXACTLY, simply by summing the Fourier series or by inserting your calculated values into a DFT and inverting it. You should take care with this, because the Fourier transform is represented with the DC component at zero and the sine/cosine components represented at 0.5 of their amplitudes at +- the frequency in question, with the sine component at negative frequency being the complex conjugate of that at positive frequency. (Because cos(wt)=0.5*(exp(jwt)+exp(-jwt)) and sin(wt)=-0.5*j*(exp(jwt)-exp(-jwt)).)

I feel that there may be problems in doing this, because what you are doing is not a DFT at frequencies that are not an exact multiple of the fundamental frequency (i.e. 1/signal length).

Michael D says: May 27, 2014 at 10:19 am

“It would more accurate to call it a subsampled boxcar convolution I suppose.”

Yes, that is how I think of it. The SSN is convolved with a gate function, then multiplied by a shah function (Dirac comb of delta functions), then Fourier transformed.

You can see this in the freq domain, and see where error comes in. The convolution becomes multiplication by a sinc function (FT of gate). This represents sampling inaccuracy on the year/month scale. Mult by shah becomes (approx) convolution by FT of shah, which is itself a shah. But it isn’t exactly a shah; it had been truncated, ie multiplied by another gate, this time the data length. So you get another sinc function – this time very broad in the freq domain, representing limited data error.

I see I’m just echoing Nullius in Verba (May 27, 2014 at 10:59 am) here.

I know in signal processing you can retrieve information that has been “lost” by averaging, or has a very slight SNR, even to the point of being indistinguishable from noise, so long as you know the length of the period(s) in which the data was encoded and have enough data … so thinking about it I’m not too surprised. But this post is an excellent example of DIY science and enthusiasm!

RC Saumarez says:

May 27, 2014 at 2:06 pm

Yes, this is basically the same as you get with zero padding in the usual FFT for evenly spaced data. If you truncate the functional basis to reconstruct the monthly values from inverse transforming, you are essentially performing sinc interpolation. It doesn’t really provide any new information; it’s just more pleasing to our eyes and thought patterns.

Nick Stokes says:

May 27, 2014 at 2:23 pm

I see I’m just echoing

Nullius in Verba says: May 27, 2014 at 10:59 am

here

…..

who was echoing …..

“Nullius in Verba”, roughly meaning: take nobody’s word for it.

@Bart

If you have a properly sampled signal, you have the information, and obviously Fourier interpolation doesn’t add more information in a strict sense. Padding a DFT is exactly equivalent to convolution with a sinc function.

If you want to know, say, the amplitude of a peak that lies between samples and its position, this can be recovered (subject to some constraints) and is therefore useful.

Merrick says:

May 27, 2014 at 3:36 am

I’ve said nothing of the sort. The advantages that my method has are:

1. It doesn’t care about missing data. In climate science this is a huge plus.

2. It is linear in period, making the interpretation simple.

3. It is easy to calculate the statistical significance of the results, including adjusting for autocorrelation.

4. It gives answers in the units of the underlying data (°C, mm, etc.).

5. It is computationally straightforward, and despite the name, certainly fast enough for the sizes of datasets commonly found in climate science.

6. It works, not just with missing data, but with irregularly spaced datasets. So if for example you have monthly data mixed in with bi-weekly data, plus scattered hourly data, it doesn’t matter. It just uses the times of observations that it is given.

Other than that, it is, as I said in the head post, just another among the many methods of spectrum analysis, in this case a very direct sort of steam-powered, brute-force method that is easy to implement and understand.

w.

RC says: The method of interpolation from a DFT is to expand the transform by adding zeros to the high frequency end of the spectrum, effectively increasing the sampling rate (care has to be taken with end effects here). This is equivalent to using a filter that has a rectangular, wholly real frequency response.

It is easily shown that this is equivalent to convolving the signal with a sinc function:

sin(x)/x

====

The rectangular window is applied as soon as you take a finite sample of continuous data and try to analyse it. It’s nothing to do with the zeroes.

If you just add zeros you could be increasing the discontinuities at the end. It’s often best to subtract the mean so that the ‘zeros’ are in fact padding with the mean.

Then there’s the question of a “taper function” to avoid the discontinuities (that is done before adding zeros). This scales the signal down to zero (hopefully the mean) at start and end. This itself distorts the data but is usually preferable to the gross effects of just wrapping the end on the start.

The whole idea of adding zeros is to get around the fixed intervals of the DFT. It does not even apply to Willis’s code, where he is free to choose any period or interval he wishes.

Willis: “1. It doesn’t care about missing data. In climate science this is a huge plus.”

Does it care about missing heat , that would be definitely useful.

Michael D says:

May 27, 2014 at 10:32 am

Thanks, Michael. I started by using a double optimization method to adjust the phase and amplitude for the best-fit sine wave, but that was slow. In the end, I simply made up a cosine wave and a sine wave of the desired frequency that were the length of the dataset, both of them starting at t = 0.

Then I did a least-squares linear analysis of the form

a * coswave + b * sinwave + c = modeled value

and used the resulting fitted curve without getting into considerations of phase.

I was a happy man when I thought of that way around the optimization. I got the inspiration from Fourier’s original method, which (from memory from a while ago) used sine/cosine pairs rather than complex numbers, combined with a comment from a reader. Gotta love doing science online.

w.
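Willis’s description above can be sketched directly in code. The following is my own illustrative Python (function and variable names are mine, not his): for one trial period, least-squares fit a*cos(wt) + b*sin(wt) + c via the normal equations, and report the peak amplitude sqrt(a^2 + b^2) without ever handling phase explicitly.

```python
import math

def fit_sine(t, y, period):
    """Best-fit sine for one trial period, per the approach Willis
    describes: least-squares fit of a*cos(wt) + b*sin(wt) + c.
    Works with irregular or gappy sampling, since it only uses the
    observation times it is given."""
    w = 2.0 * math.pi / period
    cols = ([math.cos(w * ti) for ti in t],
            [math.sin(w * ti) for ti in t],
            [1.0] * len(t))
    # Normal equations A p = v for the design matrix [cos sin 1]
    A = [[sum(ui * vi for ui, vi in zip(u, v)) for v in cols] for u in cols]
    v = [sum(ui * yi for ui, yi in zip(u, y)) for u in cols]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting
    M = [row + [vi] for row, vi in zip(A, v)]
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            M[r] = [mr - f * mk for mr, mk in zip(M[r], M[k])]
    sol = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        sol[k] = (M[k][3] - sum(M[k][j] * sol[j] for j in range(k + 1, 3))) / M[k][k]
    a, b, c = sol
    return math.hypot(a, b), c  # (peak amplitude of best-fit sine, offset)
```

Sweeping `period` over the range of interest and plotting the returned amplitude gives the periodogram described in the head post; the identity sin(wt + phi) = cos(phi)*sin(wt) + sin(phi)*cos(wt) is why no separate phase optimization is needed.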

Don’t you think it odd that we still use months as a timebase when they are of different lengths, so that a 28-day month carries the same weight as a 31-day month?

justaknitter says:

May 27, 2014 at 10:48 am

You seem to be falling into the “I’ll provide advice instead of doing it myself” trap. This was from my last post, but your comment seems more related to that post, so I’ll repeat my words here, they’re apropos:

I don’t care about your theories about what I’m doing wrong, and about just how and where I should look for the 11-year cycle, justaknitter. That’s your job. Get back to me when you’ve actually knitted something solid following your own good advice, and have it in hand to show us …

w.

Nullius in Verba says:

May 27, 2014 at 10:59 am

Thanks, Nullius, your comment helped greatly.

w.

Nick Stokes says:

May 27, 2014 at 2:21 pm

Interesting, Nick, and my thanks to you and all the others. Always more to learn.

w.

You seem to be falling into the “average kid trap”.

==============

Agreed. The assumption underlying the analysis is that average temperature varies as average sunspot numbers do. If so, then the cycle lengths should align.

But what if temperature and sunspot numbers both vary with cycle length? Shorter cycles, more spots, temps increase over time. In this case would you expect the cycle lengths to align?

What if temp = ratio of short cycles to long cycles? For example, take a period of roughly 220 years:

10 short and 10 long cycles = 10/10 = 1.0 temp (unchanged)

8 short and 12 long cycles = 8/12 = 0.67 temp (temp decrease)

12 short and 8 long cycles = 12/8 = 1.5 temp (temp increase)

Would you expect to see amplitude of temp cycle at the same frequency as average solar cycle length? I don’t see why it would, except by accident.

RC Saumarez said in part May 27, 2014 at 2:06 pm:

“…The method of interpolation from a DFT is to expand the transform by adding zeros to the high frequency end of the spectrum, effectively increasing the sampling rate. (care has to be taken with end effects here). …”

I pretty much agree. Thanks RC. But to be clear the “high end of the spectrum” is the MIDDLE of the DFT. In fact this is how I interpolated the three points in my graph posted above at 10:27 AM, repeated here:

I did a three point FFT X(k) of the three original time points x(n). Then I put 97 zeros in the middle:

XX(k) = [X(0), X(1), zeros(1,97), X(2)]

and took 100 times the real part of the inverse FFT of the now length-100 XX. (There is always a tiny imaginary roundoff in the inverse that needs to be discarded. )

This interpolated the three points to 100 time points to better show the higher frequency (1/3, rather than 1/360) that the FFT “had in mind”. So I had not been thinking of it this way (too busy calling the FFT a liar!), but perhaps the FFT frequency-domain interpolation “recovers” the frequencies lost to low-pass averaging. Neat.

[Incidentally, what if the FFT length is even? For example, if N = 8. In this case, if things are properly bandlimited, the exact middle is X(4), half the sampling rate, and it should be zero. If not, you split the X(4) term, which must be real, in two between the padded zeros, half staying down and half going up. Really – we can show this is true.]

Re zero padding

https://ccrma.stanford.edu/~jos/st/Zero_Padding_Applications.html

” Sometimes people will say that zero-padding in the time domain yields higher spectral resolution in the frequency domain. However, signal processing practitioners should not say that, because “resolution” in signal processing refers to the ability to “resolve” closely spaced features in a spectrum analysis (see Book IV [70] for details). The usual way to increase spectral resolution is to take a longer DFT without zero padding–i.e., look at more data. In the field of graphics, the term resolution refers to pixel density, so the common terminology confusion is reasonable. However, remember that in signal processing, zero-padding in one domain corresponds to a higher interpolation-density in the other domain–not a higher resolution. ”

consider the simple signal 1,1,1,1

now zero pad it to give twice the frequency resolution

1,1,1,1,0,0,0,0

If you are trying to tell me that the FFT of a DC signal is identical to that of a step function then something is very wrong. And a quick check says it ain’t. Even if you use two leading zeroes and two after it still ain’t. (OK I agree that if you detrend the original signal and then zero pad you get the ‘right’ answer, but B=1/T is a law, not an opinion, there is no more information in the sample than that)

So not very surprisingly we discover that there is no free lunch. Yes, you will increase your apparent frequency resolution but in doing so you will be altering the displayed spectrum in ways that you may not be able to predict.

As to the fake extra resolution you get by curve fitting a sine/cosine pair of arbitrary wavelength to a signal.

I agree that at first sight, for instance, an 11.9 Hz sine wave would fit an 11.9 Hz sine wave signal perfectly, even for very short sample lengths. But B=1/T says that in a .1 second sample, you can’t actually tell the difference between that and a signal that consists of a strong 12 Hz and a weaker contribution from other frequencies. You are forcing the fit of the 11.9 and assuming that there are no other cycles of roughly the same frequency. That may be true for a particular system but is not true for a general signal.

WM Briggs advised: Do not smooth times series, you hockey puck! September 2008

There is an intrinsic relation between sinusoids and time invariance. A system is said to be time invariant if its response to a delayed excitation is the same as the response to the excitation delayed.

The time delay operator is a linear one (sum of two delayed functions is the sum delayed).

Now, eigenfunctions of the time delay operator are helices (screws), expressed as cos(wt)+j*sin(wt), that is, complex valued functions of the real variable t, their geometry is that of a screw. Projection of a screw to a plane, parallel to its axis is a sinusoid.

Multiplication by a complex number of unit absolute value is simply a rotation (in the complex plane).

The secret is that you just rotate the screwdriver, while the screw itself goes straight ahead in the plank; a really mysterious phenomenon, if you gave it a thought.

Filters are linear operators that commute with the time delay operator. If two linear operators commute, there is a base built of their common eigenvectors. This is why Fourier transform is useful in analysing filters.

For systems with no time delay symmetry Fourier transform does not make much sense.

sorry make that a 1 second sample, not .1

Poor Fourier must be rolling over in his grave. Few here comprehend that his series of harmonically-spaced, finite-amplitude sinusoids provides a valid decomposition only for L-periodic signals, with 1/L as the fundamental frequency. Any other set of LS-fitted sinusoids fails to add up to the signal, especially when the underlying signal isn’t periodic. That’s what makes sinusoidal regression so devastating in the hands of novice signal analysts, captivated by its simplicity and other putative “advantages”.

@Bernie Hutchins

Those notes were very interesting. Thanks.

The effect of bringing the poles just inside the unit circle is a way of raising the notches just off the floor in the FD. When doing the deconvolution in the FD, what I usually do is just add a small offset to ensure the division does not explode the noise at those points.

Are you aware of a less arbitrary way of doing that? Or since it’s pretty arbitrary, maybe that’s as good as anything.
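For what it’s worth, one common, slightly less arbitrary version of the offset trick Greg describes is Tikhonov/Wiener-style regularisation: divide by H where |H| is healthy and damp the division where it isn’t. A toy sketch of mine, using a naive DFT (not anyone’s code from the thread):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def deconvolve(y, h, eps=1e-9):
    """Regularised frequency-domain deconvolution:
    X = Y * conj(H) / (|H|^2 + eps).  The eps keeps near-zero bins of H
    from exploding the noise, much like the small offset Greg adds."""
    Y, H = dft(y), dft(h)
    X = [Yk * Hk.conjugate() / (abs(Hk) ** 2 + eps) for Yk, Hk in zip(Y, H)]
    return [v.real for v in idft(X)]
```

With noise present, eps becomes a knob trading recovery accuracy against noise amplification around the nulls; the “right” value depends on the noise level, so some bubble gum and baling wire remains.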

Greg Locock on May 27, 2014 at 4:44 pm suggests a test signal 1,1,1,1 and notes correctly that zero-padding it does not give the correct answer. Indeed true. But this is not a good example because it IS bandlimited – bandlimited to 0 in fact. It certainly looks like DC and anything else is wrong.

Something closer to his second suggestion is much more revealing. Consider 1,1,1,1,1,-1,-1,-1,-1,-1. This APPEARS to be a square wave. A true square wave is NOT bandlimited. But the FFT only “understands” bandlimited (to 1/2 the sampling frequency) signals. We may feel that the 10 samples given “ought to be” enough to TELL the FFT it is a true square wave. What does the FFT “think” it is? Indeed the same interpolation scheme I mentioned at 4:24PM gives us the answer (top panel):

The red waveform is exactly the unique function that IS bandlimited to available FFT frequencies (DC and five others in this case) which goes through the 10 points. It wiggles! That nice square wave was just in your mind.

[Incidentally, the same problem comes up in trying to use the FFT to calculate Fourier Series coefficients. Here (blue dots in bottom panel) we have a fundamental k=1, a third harmonic at k=3, roughly 1/3 the fundamental, and a fifth harmonic k=5, roughly 1/5. Crude. If we want more FS coefficients, and more ACCURATE ones, we need a lot more samples, and it eventually always falls short for high enough k (due to aliasing). ]

The FFT is exactly symmetric with regard to time and frequency. Thus we CAN interpolate the spectrum by putting zeros in the MIDDLE of the time series (not at the end) as shown by the red curve in the second panel. Nothing remarkable here as far as I can see.

Not enough room here for all the details.

1sky1 says: May 27, 2014 at 5:57 pm

“Poor Fourier must be rolling over in his grave. Few here comprehend that his series of harmonically-spaced, finite amplitude sinusoids provides a valid decomposition only for L-periodic signals, with 1/L as the fundamental frequency. Any other set of LS-fitted sinusoids fails to add up to the signal, especially when the underlying signal isn’t periodic.”

But Willis isn’t looking for a decomposition. He’s looking for a band-limited filter. And getting a displaced sinc function, which isn’t optimal, but works.

“If you are trying to tell me that the FFT of a DC signal is identical to that of a step function then something is very wrong.”

You can pad with anything. Periodic extension if you want. That would answer this objection. But it doesn’t help much where padding is used in practice.

“Greg Locock on May 27, 2014 at 4:44 pm suggests a test signal 1,1,1,1 and notes correctly that zero-padding it does not give the correct answer.”

Hardly surprising. Doing that changes the mean, the variance and the stationarity of the data. Not bad for a few zeros.

Greg asks on May 27, 2014 at 6:01 pm:

“Are you aware of a less arbitrary way of doing that? Or since it’s pretty arbitrary maybe that’s a good as anything.”

If you are less concerned with fighting off noise (not always an option – and your applications seem not to appreciate noise!) and are not averse to that “big echo” that eventually comes back, the inverse filter, where stable, is “perfect” and not arbitrary. But I agree that a certain amount of “bubble gum and baling wire” is usually necessary in practice.

From your questions and comments on this blog I infer that you are pretty good at appreciating and attacking the difficulties that always come up in practice. The flip side is that useful suggestions may be few and far between!

If I can be of help, feel free to email me (just Google my name).

Bernie

“If I can be of help…”

Thanks, will do.

Is that a “Versalog” rule?

tamino says:

May 26, 2014 at 7:57 pm

Dang, Grant, my hat is off to you. First off, it sounds like you’ve actually found the prior art regarding my method. I’m always glad to see that someone previously invented what I came across myself; it verifies that I’m on the right path.

Next, thanks for a most complete and clear explanation of the answer to my perplexity. Much appreciated.

w.

There seems to be some dispute over Fourier interpolation. It is simply time domain interpolation achieved by artificially increasing the sampling rate. If one has a correctly sampled signal, it is exact provided that the fundamental signal is an exact period of the transform length. If not, there are potential errors: if there is a component at, say, 1.5 times the fundamental frequency, the last point in the signal record will not predict the first point (by Taylor’s theorem), introducing a discontinuity that results in a Gibbs phenomenon when one tries to interpolate. This is simply a reflection of the fact that a DFT is not a Fourier transform but a series, and that the sinc function extends to ± infinity. Obviously, in practice one uses compromises with windowing, long records, etc.

The issue here is more subtle: can one reconstruct the original data series from the “slow Fourier transform”? This is completely straightforward. Once the coefficients of the series have been computed, simply sum the series, either by brute-force evaluation at each (irregular) point in the time series or by approximation to that point by substitution of the coefficients into a DFT (with as high a sampling frequency as one likes) and inversion of the transform.

I have my doubts about the theory of the “slow Fourier Transform”, since the orthogonality property is not satisfied for non-integer multiples of the fundamental frequency. It is not a DFT, and although it is notionally continuous in frequency, it is not a true Fourier transform, because the time period of integration is limited.
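The Fourier-interpolation mechanism discussed above is easy to check directly. This is my own illustrative sketch, not any commenter's code; the test signal is chosen so both components are exact harmonics of the record length, which is precisely the case where the interpolation is exact:

```python
import numpy as np

# Zero-pad the SPECTRUM in the middle, between the positive- and
# negative-frequency halves, then inverse-FFT to interpolate in time.
N, M = 16, 64                        # original and interpolated lengths
n = np.arange(N)
x = np.sin(2*np.pi*n/N) + 0.5*np.cos(2*2*np.pi*n/N)

X = np.fft.fft(x)
# the components here don't occupy the Nyquist bin, so it needn't be split
Xpad = np.concatenate([X[:N//2], np.zeros(M - N), X[N//2:]])
y = np.fft.ifft(Xpad).real * (M / N)   # rescale for the longer transform

m = np.arange(M)
exact = np.sin(2*np.pi*m/M) + 0.5*np.cos(2*2*np.pi*m/M)
print(np.allclose(y, exact))           # True: interpolation is exact here
```

If a component is not an exact period of the record, the implied wrap-around discontinuity produces the Gibbs-type errors described above.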

“I have my doubts about the theory of the “slow Fourier Transform” ”

Willis called this SFT but is aware it’s tongue in cheek. For those who draw too close a parallel with the DFT: what Willis is doing is not a “transform” but a spectral analysis. It’s more akin to the chirp-z analysis I usually do.

If he used equal frequency steps, rather than period steps, it should be possible to select the subset of his amplitudes that match DFT frequencies; they should be close to a DFT and hence form a transform that could be used to reconstruct the original.

If all you want to know is where the peaks lie, this spectral analysis is fine and has the advantages Willis states. But not to be confused with a frequency domain _transformation_ that is equivalent to the time domain representation.
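As described in the thread, the procedure amounts to sinusoidal regression at each trial period. A minimal sketch of that idea follows; this is my reconstruction from the description, not Willis's actual code, and the 19-month test cycle is an assumed example:

```python
import numpy as np

# For each trial period, least-squares fit a*sin + b*cos + c and record
# the peak-to-peak amplitude 2*sqrt(a^2 + b^2) of the best-fit sinusoid.
def sine_fit_amplitude(t, y, period):
    A = np.column_stack([np.sin(2*np.pi*t/period),
                         np.cos(2*np.pi*t/period),
                         np.ones_like(t)])        # constant absorbs the mean
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 2*np.hypot(coef[0], coef[1])           # peak-to-peak amplitude

t = np.arange(240.0)                              # say, 240 "months"
y = 1.5*np.sin(2*np.pi*t/19.0 + 0.3) + 0.2        # known 19-month cycle
periods = np.arange(2.0, 80.0, 0.25)              # non-integer steps allowed
amps = [sine_fit_amplitude(t, y, p) for p in periods]
print(periods[np.argmax(amps)])                   # peaks near 19
```

Note the period steps need not be integers, which is the point Greg raised: the fit is defined for any trial period, even though the resulting amplitudes are not an orthogonal decomposition.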

“Now, I suppose some signal analysis guy will pop up and say he knew that all along and abuse me for my colossal ignorance … but I taught myself this stuff, and all I can say is, it sure was a surprise to me.”

Well, take a class or read a book or something. There are plenty of surprising results out there. Don’t you want to get to them as soon as possible? This post comes across like a retired guy tinkering with his 1953 Desoto and complaining that the neighborhood kids don’t care.

Dinostratus said in part May 28, 2014 at 4:21 am:

“… Don’t you want to get to them as soon as possible? This post comes across like a retired guy tinkering with his 1953 Desoto and complaining that the neighborhood kids don’t care. …”

You make a fair point. But……

While it is always presumptuous (and dangerous) to confuse one’s self with Feynman, who thrived on figuring everything out for himself, there is great merit in getting a hint of an idea from one’s own thinking, or from another source, and developing it carefully. Later you can enjoy the fact that someone else figured it out TOO, the relative timing becoming less important.

Too often (especially in climate?) people have embraced a shiny new tool (perhaps only new to them) and applied it to “real” data before seeing how it worked with simple “test signals” like a pulse, a sinewave, white noise, red noise, etc. Elementary “bench-testing” is always advisable and builds confidence.

This “retired guy” supposes that your “as soon as possible” should be “as soon as it is carefully considered and understood”. Some folks do not thrive on books and classes, many of which leave students with incomplete understanding, or even misled. Back on May 21, Dinostratus showed just such a misunderstanding of the FT vs. FFT on this blog.

Listen to the guy with the 53 Desoto!

Agreed, Willis’ “beginner’s mind” approach is good.

I was always slow in my studies because I insisted on understanding what I was doing, not just learning to repeat it. That meant I was always explaining stuff to the guys who got the top marks.

Dinostratus says:

May 28, 2014 at 4:21 am

Jeez, Dinostratus, I hope you aren’t married … I feel sorry for anyone who might have to put up with that kind of nasty negativity for long.

As to why I don’t “take a class or read a book or something”, I read lots of books and papers on all kinds of subjects. I’ve learned enough from them, it seems, to have independently come up with what tamino says is the “Date-Compensated Discrete Fourier Transform, or DCDFT” … and you?

Regarding taking classes, unfortunately, I have this ugly habit called a “day job” that kind of monopolizes my time, the nearest school is a long drive away, and they feature little in the way of night classes in Fourier analysis …

Next, I seem to be one of the few guys on the web who has the balls to admit when he is surprised by some result, and generally some jerkwagon like you comes along to tell me I shouldn’t be surprised, and inform me (usually in an unpleasant manner) that I should know all this stuff already … and here you are, just as I predicted. Look, I know I’m surrounded by genius sophisticates like yourself, to whom nothing is ever a surprise, but my world is not like that. I get surprised by things, and I’m fool enough to admit it when I am … so sue me.

As to whether this post sounds like some old guy complaining, was there some part of this paragraph in the head post that escaped your short attention span?

If you wish to accuse me of complaining … point out where.

Or better yet, just go back to sleep, and when we want your opinion, we’ll rouse you in some manner, bang on the side of your crib or something, and ask for your opinion. Until then … not so much …

w.

Padding a time series with zeros would lead to a mess of high frequencies unless the time series was first detrended, the DC component removed AND then made periodic by a tapering filter, such that the end of the time series was continuous with the start. This is why Fourier analysis of temperature trends is such a useless exercise. You have to throw the baby out with the bath water in order to follow best practice.

Three apparent impossibilities:

“Nixon goes to China.”

“Hell freezes over.”

“Tamino offers a reasoned and positive contribution.”

What an unexpected development.

Willis, I have experienced the same feelings of accomplishment that one gets when a self-taught person invents a better way to do something, only to be told by an expert in the field that it isn’t ‘new’. As a hardware engineer, I was given access to a mainframe computer (a mighty one, in the early eighties) just because it was there. It came with an operating system and a file editor. As a means to an end – I had no budget to hire a programmer or buy software – I taught myself to write programs, and “invented” a method of dividing a huge file into halves, enabling me to find an entry in fewer “read” operations. It greatly speeded up my program’s execution. I showed the method to others, and was told I had reinvented the “binary” search, something that is taught to programmers in their first year. I taught myself so much stuff, I later had a second career as a programmer, achieving the title of SENIOR applications developer… Never stop learning.
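The method rediscovered above is the classic binary search: repeatedly halve a sorted range, so an N-entry file takes about log2(N) "reads" instead of up to N. A minimal sketch (illustrative only, not the original mainframe program):

```python
# Probe the midpoint of the remaining range, keep the half that can
# still contain the target, and count one "read" per probe.
def binary_search(sorted_entries, target):
    lo, hi = 0, len(sorted_entries) - 1
    reads = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        reads += 1                       # one "read" operation per probe
        if sorted_entries[mid] == target:
            return mid, reads
        if sorted_entries[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, reads

idx, reads = binary_search(list(range(0, 2000, 2)), 1234)
print(idx, reads)                        # index 617, found within ~10 reads
```

For a 1000-entry file, about ten probes suffice, which is exactly the speed-up described.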

Steve from Rockwood said:

May 28, 2014 at 1:28 pm

“Padding a time series with zeros would lead to a mess of high frequencies unless the time series was first detrended, the DC component removed AND then made periodic by a tapering filter, such that the end of the time series was continuous with the start. This is why Fourier analysis of temperature trends is such a useless exercise. You have to throw the baby out with the bath water in order to follow best practice.”

Exactly HOW are you padding the time series with zeros? There are AT LEAST three meanings of “zero-padding”. So if the time series is

[1, 1, 1, -1, -1, -1, -1, -1, 1, 1]

What does it look like padded with 10 zeros, ready for the FFT?

Thanks Steve, a specific example should help a lot.

Also I guess I don’t understand what “made periodic by a tapering filter” means.

Nick Stokes:

The very fact that Willis uses an equally-spaced SET of periods and calls this methodology either “slow F. transform” or “periodogram” belies your contention that he’s looking for a “band-limited filter.” The latter would produce a time-VARYING output, as in wavelet analysis; what he obtains for each member of that set by the ages-old curve-fitting procedure called sinusoidal regression is nothing more than a CONSTANT amplitude and phase. Nowhere does he concern himself with the NON-orthogonality of his results, which do NOT “work” with signals encountered in geophysics in any reliable–let alone “advantageous”–way.

Bernie Hutchins:

Since you seem to have the time for much numerical experimentation, I’d like to send you a brief segment of real-world data for a blind test of Prony’s method, whose basis functions strike me as being quite suitable.

1sky1 says: May 28, 2014 at 5:12 pm

“The latter would produce a time-VARYING output, as in wavelet analysis; what he obtains for each member of that set by the ages-old curve-fitting procedure called sinusoidal regression is nothing more than the CONSTANT amplitude and phase.”

It’s the ultimate in band-limited filters; it produces a single frequency response.

It’s as if he used his signal to excite a whole range of perfect tuning forks, and identified periodicity by a strong response.

Nick Stokes:

Your fantastic analogy with “perfect tuning forks” indicates scant mathematical comprehension of spectral windows in finite-length records and the demonstrable absence of strictly periodic components in geophysical signals not driven by astronomical factors.

1sky1 said May 28, 2014 at 5:19 pm

“Bernie Hutchins:

Since you seem to have the time for much numerical experimentation, I’d like to send you a brief segment of real-world data for a blind test of Prony’s method, whose basis functions strike me as being quite suitable.”

The situation may not be overly promising. If the data is noisy, we would have to try it provisionally. Also I am not sure how much a “brief segment” is. I only need 2N samples to try up to order N, and that should be easy to try – glad to. Presumably this could be numbers posted here or by email to me. I have very little experience (or confidence with) data files other than .mat. Most of my stuff is done with test signals generated by the program itself. No recent experience with Prony and real stuff – last time was pulses propagated in dry and (then) wet sand some years ago. But sure – let’s consider what can be tried.

1sky1 says: May 28, 2014 at 6:01 pm

“Your fantastic analogy with “perfect tuning forks” indicates scant mathematical comprehension of spectral windows in finite-length records”

My mathematical comprehension of spectral windows etc is fine. A lossless tuning fork has highly frequency dependent acoustic impedance. The amount of energy that it can pick up from a finite burst of signal is a very reasonable measure of periodicities in that signal. And that is what he is measuring.

1sky1 said, replying to Nick Stokes May 28, 2014 at 6:01 pm:

“Your fantastic analogy with “perfect tuning forks” indicates scant mathematical comprehension of spectral windows in finite-length records and the demonstrable absence of strictly periodic components in geophysical signals not driven by astronomical factors.”

A bit over the top perhaps? As one who has actually built filter banks, I am not averse to the notion of, at least, tuning forks of high Q. Before the FFT (etc.) this was a major way of doing spectral analysis – the other being heterodyning through a single sharp bandpass.

When 1sky1 says “indicates scant mathematical comprehension of spectral windows in finite-length records” we are left to contemplate all those “records” that are not finite length. None!

Additionally, 1sky1 says: “… and the demonstrable absence of strictly periodic components in geophysical signals not driven by astronomical factors”. Again “strictly periodic” (astronomical or otherwise) suggests to me infinite duration. And don’t we always try to show that something is absent by first looking for it by conventional means?

For myself, I have a pretty good idea what 1sky1 is trying to say, and believe it could be brought forward as a valid point. But it needs considerable elaboration. To many it probably sounds like a tricky gotcha “bar bet” engineers supposedly pose to each other: no signals are bandlimited – all signals are bandlimited.

Bernie Hutchins says: May 28, 2014 at 7:33 pm

“the other being heterodyning through a single sharp bandpass.”

Actually, it could also be said that Willis is heterodyning. The regression basically multiplies the signal by the sinusoid and sums (integrates). If the signal has a near-periodic component, you get a low frequency response. The summation (integration) is a low pass filter, so the result is again a measure of the strength of the periodicity and how close the test frequency is to it.

I’m trying to write a post on this. It’s interesting stuff.

Nick Stokes said in part May 28, 2014 at 8:23 pm:

“…..Actually, it could also be said that Willis is heterodyning….”

Note also that Greg said earlier, May 28, 2014 at 2:18 am

” … It’s more akin the the chirp-z analysis I usually do….”

Quite so. That brings back memories. The term “Chirp” kind of says it all.

I’ve discussed some of these ideas in a post here.

1sky1 says:

May 28, 2014 at 5:12 pm

As usual, 1sky1 has shown up to tell me all about how I’m doing everything the wrong way. 1sky1 never shows us how to do it the right way, of course.

I asked him if he’d do that once, so we could learn from him. Boy, was that a mistake!

He sent back a very nasty refusal, filled with spleen and vituperation, with a claim that I was trying to get him to perform valuable work for nothing … yeah, right, that’s my secret plan, to get 1sky1 to do “valuable work” for nothing …

As to the non-orthogonality of my results, I have indeed discussed it before, even in this thread, but for the current quest … so what? I’m using it to look for strong signals, not to decompose the signal into its component parts. As I’m sure 1sky1 will understand if he takes the time to think about it (which likely means he won’t understand it at all), in such a situation non-orthogonality is meaningless. Strong signals are strong signals, orthogonal or not …

w.

1sky1 says:

May 28, 2014 at 6:01 pm

Case in point for 1sky1’s habit of only telling people that they’re doing it totally wrong, but never revealing his secret plan for doing it right … instead, we just get his self-assessment of his own abilities, which, needless to say, are stupendous and fantastic; he’s a legend in his own mind …

Of course, as is far too common in such cases, despite his huge claims, 1sky1 doesn’t have the balls to sign his own words, so he can walk away from them tomorrow and come back as “out2sea” and no one will be the wiser.

Pathetic …

w.

Nick Stokes says:

May 28, 2014 at 9:23 pm

Thanks for that, Nick, it’s an interesting post.

w.

Proper “Zero-Padding” of a time sequence to interpolate the FFT spectrum. Put the zeros in the middle, not at the end.

Here is my simple experiment. We start with 10 samples, n=0 to 9, Sin(2*pi*n/10) + Sin(2*2*pi*n/10) + Sin(3*2*pi*n/10). That’s three sinewaves of frequency 1/10, 2/10, and 3/10. The FFT magnitude has just these three frequencies (plus upper sides of course). See top panel of figure here.

Adding 90 zeros to the END of the original time sequence of 10 points, we get the FFT magnitude as shown in the middle panel (blue circles – original spectral points have red centers). Kind of wild oscillations, but it does go through original points. NOW, putting the 90 zeros in the MIDDLE gives a much smoother result (lower panel).

The horrible middle panel (zeros at end) looks remarkably like designing an FIR filter and forgetting to put in the linear phase. I suspect it’s the same thing somehow.
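The experiment above can be reproduced in a few lines. This is a Python sketch of it as I understand it, not Bernie's MATLAB: 10 samples of three sines, with 90 zeros added either at the END or in the MIDDLE of the time sequence.

```python
import numpy as np

# Both 100-point spectra pass exactly through the original 10 FFT bins;
# they differ only in the interpolated values between those bins.
n = np.arange(10)
x = (np.sin(2*np.pi*n/10) + np.sin(2*2*np.pi*n/10)
     + np.sin(3*2*np.pi*n/10))
X10 = np.abs(np.fft.fft(x))                 # original 10-point magnitudes

end_pad = np.concatenate([x, np.zeros(90)])
mid_pad = np.concatenate([x[:5], np.zeros(90), x[5:]])
E = np.abs(np.fft.fft(end_pad))
Mspec = np.abs(np.fft.fft(mid_pad))

k = 10*np.arange(10)           # bins of the 100-pt FFT matching the 10-pt bins
print(np.allclose(E[k], X10))       # True: end padding hits the originals
print(np.allclose(Mspec[k], X10))   # True: so does middle padding
```

So the dispute that follows is not about the original bins, which both paddings preserve exactly, but about which interpolation between them is meaningful.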

@Bernie Hutchins following your invitation, I sent you an email yesterday to a yahoo address. Is that still valid?

Sin(2*pi*n/10) + Sin(2*2*pi*n/10) + Sin(3*2*pi*n/10)

Your test fn is basically a negative ramp repeating every 10 pts

If you end pad it you still have the ramp but with a pause between each cycle.

If you pad the middle you have a ramp with a long flat halfway down. As you have shown, the FFT is very different from that of the original.

You seem to think this result is “better” in some way because it looks nicer?

From your comparison you can see that you have drastically reduced the resolution, and have two peaks instead of three. The middle frequency is still there as a point but is now indistinguishable from any other point in the spectrum.

You have lost one of the three input signals. How is this better?

However, I think your graph is a good illustration of how, despite giving better plotting resolution, padding does not improve the accuracy of the result. The basic uncertainty due to insufficient sampling of longer cycles remains, and care is needed not to attribute undue accuracy to the results.

Sorry, it does improve the accuracy but does not give the correct peak.

It would be interesting to do comparative plots padding with, say, 95, 135, or 155 zeros (rather arbitrary choices, trying to avoid integer multiples of the signals and the original sample length).

I ran a sine wave with a period of 11 years through the function “sineptop”.

There are all sorts of artifacts that have side lobes that are 20% the size of the main lobe. The effect of what you’re doing is creating some gnarly window and filtering effects. Partly because you are not windowing (high-passing) the sample, and partly because your function does something funny to the sine wave itself (haven’t figured out what yet – probably aliasing because you didn’t low-pass filter the interpolation).

Please test with basic known waveforms that have known transform results before continuing. You aren’t getting useful information out of these transforms.

picture of result:

test data (11 year sine wave): https://www.dropbox.com/s/ieglk36hnxa430f/test-sin.csv

Yep, that’s magnitude of sinc fn.

What the periodogram for an 11 year sine wave should look like

red = unwindowed

blue = kaiser window with Beta = 2

picture: https://www.dropbox.com/s/cy5f0ahxa9i0uog/11-year-sine-wave-scilab.jpeg

source: https://www.dropbox.com/sh/5wh9dbja6x37nfa/AADhfZFr6JXWF2vCDAj407Hoa

first min 11+60mo second 11+12mo

sin(pi*x)/(pi*x); 12m= 2*pi ;

What the periodogram for an 11 year sine wave should look like

red = unwindowed

whoever told you the red line is unwindowed lied.

It has the same rectangular window Willis used.

;)

Just in case that is too subtle for anyone: the rectangular window producing the sinc-shaped artefacts is the finite length of the data sample itself.

This distortion will be applied to all peaks in the spectrum and then overlaid.

Where the peaks overlap they will add to make something looking more significant.

I pointed out several days ago that triplets in Willis’ periodogram of SSN around 9 and 18y were probably an echo of the main 11y triplet.

I know this is going to go against Willis’ gut instincts not to mess with the data, but I think distorting the input data with a window fn of some sort is an essential compromise to avoid the finite sample period snapshot (rectangular window) throwing artefacts everywhere.

Sadly these window functions also reduce frequency resolution :(

I see lots of hard sums here.

So are we consigned to a fiery end on an arid planet burned to death by our own waste products or not?

:) <- note smiley

Bernie Hutchins says:

May 28, 2014 at 3:37 pm

Steve from Rockwood said:

May 28, 2014 at 1:28 pm

Exactly HOW are you padding the time series with zeros? There are AT LEAST three meanings of “zero-padding”. So if the time series is

[1, 1, 1, -1, -1, -1, -1, -1, 1, 1]

What does it look like padded with 10 zeros, ready for the FFT?

====================================================

1,1,1,-1,-1,-1,-1,-1,1,1,0,0,0,0,0,0,0,0,0,0

Not a great example, Bernie, but imagine starting with 2,2,2,0,0,0,0,0,2,2 which has a DC offset of 1.0. Padding with zeros 2,2,2,0,0,0,0,0,2,2,0,0,0,0,0,0 to produce 16 terms would produce high frequencies at the 2,0 transition that was introduced by you. Removing the DC offset would reduce this to a 1,0 transition – not a great example data set, but I think you know what I mean.

In the case where the first element is not equal to the last, this produces high frequencies because the data set is not periodic – a requirement of the FFT.

2,1,1,-1,-2,-1,-1,-1,1,1 for example needs to repeat as 2,1,1,-1,-2,-1,-1,-1,1,1,2,1,1 etc.

The start and rear ends of the time series would have to be tapered to match before padding.

e.g. 0.5,1,1,-1,-2,-1,-1,-1,1,0.5 then remove the DC offset, then pad with zeroes. You end up with a transition of 1.0, 0.5, 0.0 instead of 0.0, 2.0, 2.0 or 2.0, 2.0, 0.0.

The above is not a great example. Imagine a straight line of temperature increases.

0,0.5,1.0,1.5,2.0,2.5,3.0,3.5 etc…

When you put something like that into an FFT, it is assumed to be periodic, so the large offset between the first value 0.0 and the last value 3.5 will require a lot of high frequencies to reproduce the discontinuity, when in fact there is no high-frequency content in the time series – it is a straight line.

An FFT is band-limited. Any trend longer than half the length of the time series must be detrended. If you have 128 years of annual data you have to remove any trend longer than 64 years. The fact that subsequent warmings from 1850 onward ended with successively higher temperatures would have to be removed from the data set as the baby in the bath water. What would be left would be the oscillations (peak in the 40s, low in the 70s, peak in the 2000s), but no overall increase.
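The straight-line case above is easy to demonstrate. This is an illustrative example of my own, not Steve's code:

```python
import numpy as np

# A linear trend fed to the DFT implies a wrap-around discontinuity,
# which spreads energy into high-frequency bins; detrending removes it.
t = np.arange(128.0)
trend = 0.5*t                                   # steadily rising "temperature"
spec = np.abs(np.fft.rfft(trend - trend.mean()))
high = spec[32:]                                # bins above 1/4 cycle/sample
print(high.max() > 1.0)                         # True: strong spurious highs

# remove the best-fit line, as recommended above
detrended = trend - np.polyval(np.polyfit(t, trend, 1), t)
print(np.abs(np.fft.rfft(detrended)).max() < 1e-7)   # True: nothing left
```

Of course, with real temperature data the detrending also removes any genuine long-period variation, which is the "baby with the bath water" point being made.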

Willis, I posted this on another page. The pdf makes an interesting read and to me confirms what you posted about reducing the climate models to simple black boxes.

http://wattsupwiththat.com/2014/05/26/quote-of-the-week-howler-from-the-world-meteorological-organization-what-warming/#comment-1648819

Ferdinand Engelbeen says:

See: http://www.economics.rpi.edu/workingpapers/rpi0411.pdf

========

…

This confirms what Willis posted awhile back, that the climate models can all be reduced to a simple “black box” equation.

Greg and Steve from Rockwood –

Greg – I have your email and will get to it later today: it’s 11 AM here in Ithaca – first coffee!

Greg – the test signal is not a ramp, it’s the sum of three sines.

Steve – I intentionally chose the example I proposed because it automatically met two of your criteria. You did however clearly indicate that you are adding zeros to the end, which is one sense of “zero-padding”. Thanks. The part about filtering (windowing?) and the need for it escapes me still. All FFT pairs are periodic automatically.

Steve and Greg –

Isn’t it the case that zero padding the time sequence interpolates the spectrum? It adds more frequency points but does NOT increase the resolution. No information is added – so how could it? It’s just computing many samples of the DTFT using an FFT. You CAN put the zero string anywhere, and the interpolation will go through the original FFT points as in the jpg I posted above (repeated here):

BUT it makes a HUGE difference what happens between the original points. Putting the 90 zeros between points 2 and 3 (not in the jpg) gives an intermediate case between the two lower panels. Which is right? I certainly prefer the bottom panel.

Working on these ideas later today. It’s not simple. Thanks for the feedback.

It appears that interpolating an FFT is more complicated than interpolating a time signal because the signal is assumed real while the FFT is in general complex. It seems that it does involve putting in a correct linear phase. Working on it. Probably too involved to try to post here directly.

Bernie: “Which is right? I certainly prefer the bottom panel.”

You again suggest it’s better but don’t say why, and fail to reply to my noting that you’ve lost one of the peaks. Which is more accurate or whatever is secondary; you now have two identifiable peaks instead of 3. The excursions are smaller – does that make it more pleasing to the eye? I’d rather keep the third peak, and above all the fact that there are 3, not 2, frequencies present.

Consider this: the wrap-around that the DFT does is artificial but can’t be helped. What you are proposing makes an artificial break in real continuous data and inserts something. That means TWO more artificial breaks with accompanying steps (unless you’re really lucky).

I don’t see how this can do anything but degrade the signal.

Your example is non typical in that the split is near a zero crossing, and it still loses important info.

“Greg – the test signal is not a ramp, it’s the sum of three sines.”

What I said is: “Your test fn is basically a negative ramp repeating every 10 pts” – say ramp-like if you prefer. I also noted loss of resolution and loss of one of the three test frequencies. That would seem more important than whether it is a ramp or ramp-like. You avoid addressing that.

I’d never heard of this idea before so thought it may be some technique I was unaware of. Having looked I don’t think I’ll be using it.

@Bernie.

You cannot add information to an FFT by padding a time series. When you pad with zeroes you can only add to the start, to the end, or a little of both (the FFT does not care because the time series has no start or end point, only frequency content). You cannot insert zeroes into a time series; it corrupts the real frequency content. The main reason for padding with zeroes that I am aware of is for the convenience of using the N log N FFT that requires a power-of-two number of points. If you have a 484-point time series, add enough zeroes to get to 512, for example. Also you may want FFTs from different time series to have the same number of points, to compare the spectra at the same frequencies: one series with 3800 points is padded to 4096 and another with 5000 is truncated to 4096. You did not add any frequency content to the padded time series. You did possibly take away a little low-frequency content by truncating, but not much.

@Bernie. Zero padding extends the lower frequency spectrum (for a given sample rate) but does not add any frequency content. You cannot add frequency content but you can introduce frequencies through bad practice.

Sure would improve your essays if you learned to use a decent image format for line drawings.

Greg said in part May 29, 2014 at 11:17 am

“….You again suggest it’s better but don’t say why and fail to reply to my noting that you’ve lost one of the peaks. Which is more accurate or whatever is secondary, you now have two identifiable peaks instead of 3. The excursions are less , does that make it more pleasing to the eye? I’d rather keep the third peak. Above all the fact that there are 3 and not 2. frequencies present……”

Sorry – I did not take the loss of one local max to mean that the spectral energy there is lost; indeed, it is still there EXACTLY as it was originally. (Actually, there are many more frequencies, each of the blue circles.) The three frequencies 1/10 (10/100), 2/10 (20/100), and 3/10 (30/100) are three of them, exactly as in the original. All that has been added is the suggestion that the region of spectral support may be continuous, but about the same width (from 1/10 to 3/10 or so).

This is interpolation of the existing spectral information. No new information is added. The idea of putting the zeros in the middle, and of splitting any number exactly in the center (where present) is firmly established. Here it is for the time domain:

http://electronotes.netfirms.com/AN398.pdf

The useful point of view is to consider the INVERSE of interpolation, which is downsampling (resampling).

Greg also said: “Your example is non typical in that the split is near a zero crossing, and it still loses important info.”

I chose the example to be a sum of sines so that it would be zero at the center. (My program actually splits the center with 0/2 going both ways.) Again, absolutely nothing is lost in my view.

I believe this is the same problem we have with the “Frequency Sampling” method of FIR filter design where you have to get the linear phase right:

http://electronotes.netfirms.com/AN337.pdf

Lots of pieces to put together.

Steve from Rockwood said in part May 29, 2014 at 12:35 pm

“You cannot add information to an FFT by padding a time series.”

Absolutely true. I hope I never suggested you could. Some people claim “more resolution” while it is just INTERPOLATION of the spectrum in the frequency domain. Note that my interpolations go EXACTLY through the FFT points of the length 10 case.

Interpolation in the TIME domain by inserting zeros in the MIDDLE of the FFT is well established:

http://electronotes.netfirms.com/AN398.pdf

and you will agree (I hope) that the FFT is mathematically symmetric with regard to time-frequency. So I can do the same to interpolate a spectrum. I am just finding a smooth curve that fits the frequency samples. The complication seems to be that most time signals are assumed to be real, while most FFT’s are complex. I believe this makes it necessary to pay more attention to a correct phase. I haven’t fixed this problem (or attempted to do so yet). I think it is the same as the problem with Frequency Sampling FIR filter design:

http://electronotes.netfirms.com/AN337.pdf

Inasmuch as I am talking about interpolation, the standard (at least as a start) should be: does resampling back to the lower rate give me the original answer? It does – no loss.

Nick Stokes:

Your mathematics is not fine. No matter how high the Q of a physical tuning fork, its frequency response function never assumes the form of a cardinal sine function, sinc(x), with its negative side lobes. And your computational exercise with a pure finite-amplitude sinusoid totally misses the fundamental objection to using sinusoidal regression on data time-series that contain NONE, but are generated by RANDOM signals of varying bandwidths, whose power density occupies a CONTINUUM of frequencies.

Bernie Hutchins:

I remember quite well when physical spectrum analyzers consisted of a bank of narrow band-pass filters. I’ve already stated my objection to the tuning-fork analogy in response to Stokes. To address your obvious objection of the lack of INFINITELY-long records, one need only consider practical periodicity over the available record length.

BTW, since I work in an environment that discourages revelation of results, I’d be willing to send you a plain-text file of <100 data points for testing Prony's method, only on the condition of strict confidentiality.

Willis:

That “your algorithm” fails to come to grips with the intrinsic character of most geophysical signals is plain as day to those with professional experience in signal analysis. My persistent criticism has always been directed at that issue–not your person. It’s utterly dishonest of you to accuse me now of that which you have persistently done. The marriage of ignorance with arrogance is what has driven scientifically qualified discussion largely elsewhere.

1sky1 said in part May 29, 2014 at 5:00 pm:

“…BTW, since I work in an environment that discourages revelation of results, I’d be willing to send you a plain-text file of <100 data points to test Prony's method, only on the condition of strict confidentiality. …”

Inasmuch as you are asking for MY help, I am reluctant to agree to any conditions on your part.

More to the point perhaps, my Matlab code is available and you can easily run the data yourself, and you know it is not a complicated method in the first place. You can probably code it better yourself.

Here is the code:

http://electronotes.netfirms.com/prony.txt

and all you would need to do is rename it prony.m and run a data vector x and a proposed order N. (Check your Matlab for existing functions prony.m. I think there is a GIT file floating around, and others, that are quite different from mine.) The example in the header should get you properly oriented. Some discussion is at:

http://electronotes.netfirms.com/AN365.pdf

It appears that the earlier discussions (before AN365) are not currently online, but I will be happy to post them if you need more.

1sky1 says:

May 29, 2014 at 5:25 pm

Well, let’s see what this charming comment contains. We have:

• No quotations to let us know what the author is talking about.

• No links to any studies.

• No support in the way of logic, math, data, or computer code for the author’s claims.

• In fact, no explanations of the claims at all.

• Claims of superior insight on the author’s part.

• A puerile accusation that I am dishonest, again without any quotation of my words to let us know what the author considers “dishonest”.

• Allied with the claim that I’m “dishonest”, a claim that no personal attacks are being made on me, that the author has directed his criticism at the science.

• An unsupported accusation that my actions are costing the site serious scientific discussion, again without specification of the purported damaging actions.

Yep … that has the hallmarks and signs of a 1sky1 comment, all right, it hits all the right notes.

w.

1sky1 says: May 29, 2014 at 4:49 pm: “Your mathematics is not fine. No matter how high the Q of a physical tuning fork, its frequency response function never assumes the form of a cardinal sine function, sinc(x), with its negative side lobes.”

OK. Take a series LC circuit with voltage V(t) at one end, and V=0 at the other. V(t)=I=0 for t<0. To avoid sqrt signs, let L=C=1.

The current I(t) is

I(t) = -sin(t)∫ cos(τ) V(τ) dτ - cos(t)∫ sin(τ) V(τ) dτ

The ∫ has limits 0 to t, or to tm if V(t)=0 after tm.

So the LC circuit settles into pure oscillation with frequency = 1 after t=tm, since the integrals remain constant, and the amplitude is Willis’ measure of periodicity. The LC is a filter which produces a single-frequency output after tm.
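Nick's formula is easy to check numerically. Here is a sketch (mine, not from the thread) using a made-up Gaussian-pulse drive that dies out before t_m = 10 and a simple running-sum quadrature; the point is that once V ends, the two integrals freeze and the current rings at ω = 1 with a fixed amplitude:

```python
import numpy as np

# Numerical check of the LC formula above (L = C = 1, so resonance at w = 1):
# I(t) = -sin(t) * integral(cos(tau) V(tau)) - cos(t) * integral(sin(tau) V(tau)),
# integrals running from 0 to t.  Once the drive V ends at t_m the integrals
# freeze, and I(t) rings at w = 1 with fixed amplitude sqrt(c1^2 + c2^2).
t = np.linspace(0.0, 40.0, 8001)
dt = t[1] - t[0]
tm = 10.0
V = np.where(t < tm, np.exp(-(t - 5.0) ** 2), 0.0)  # finite-duration drive

c1 = np.cumsum(np.cos(t) * V) * dt  # running integral of cos(tau) V(tau)
c2 = np.cumsum(np.sin(t) * V) * dt  # running integral of sin(tau) V(tau)
I = -np.sin(t) * c1 - np.cos(t) * c2

A = np.hypot(c1[-1], c2[-1])        # predicted ringing amplitude
peak = np.max(np.abs(I[t > tm]))    # observed peak after the drive ends
print(A, peak)                      # the two agree
```

The predicted amplitude sqrt(c1² + c2²) is exactly the quantity a best-fit sine at that frequency would report for this record.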

Bernie: The idea of putting the zeros in the middle, and of splitting any number exactly in the center (where present) is firmly established. Here it is for the time domain:

http://electronotes.netfirms.com/AN398.pdf

It is possible to interpolate time-domain samples using the FFT. The procedure is to first take the FFT of the time sequence, zero-pad the middle of this FFT, and then take the inverse FFT of the larger FFT to obtain the longer time-sequence. For example, a length 20 time signal could be interpolated to a length 100 time sequence by inserting 80 zeros in the middle.

====

OH man. This is not the same at all. The FFT is symmetric, and thus adding “in the middle” is adding zero-valued high-frequency data points, thus increasing the frequency resolution of the spectrum without changing the data. It is, in effect, end padding!

Now try splitting halfway between min and max freq and inserting 80 zeros. That is comparable to what you are doing in the time domain.

Your PDF files are very informative and I thank you, but I don’t think you are making much sense on the central-padding issue and you are not addressing the comments I raise, so I’ll leave it there. I’ve been down this path before with others.

1sky1: “BTW, since I work in an environment that discourages revelation of results, I’d be willing to send you a plain-text file of <100 data points to test Prony's method, only on the condition of strict confidentiality."

OMG , you sound really important. Do you work for some black govt agency or something?

And what would happen if this anonymous "plain-text" file of less than 100 anonymous data points from an anonymous poster got into the wrong hands?

I do hope Bernie can be trusted and he's not going to do a Snowden and disappear to Russia with your "confidential" data points.

LOL What a jerk.

@Bernie. If you are using your own FFT algorithm you have problems beyond real and imaginary components. To convert from time to frequency you start with 2 arrays, XTR and XTI, where all XTI elements (the imaginary part of the series in the time domain) are set to 0.0. Once in the frequency domain the FFT will produce two arrays XFR and XFI that represent frequencies up to the Nyquist (half your sample rate). These elements are often arranged in order from (0) being the DC component toward increasing frequency until the Nyquist, and then back down to the lowest frequency again. To find the middle you have to find the Nyquist, which will likely be at the N2+1 element of the array, for a 1-based array where X(1) is the DC, X(2) the lowest frequency, X(N2) the second highest, X(N2+1) the Nyquist (highest), and X(N2+2) the second highest again. From there you would expand the array with zero frequency components (real and imaginary). For example, to fill a 256 point FFT with zeroes (I did not test this code – just trying to be helpful):

XTR(1..512) real, time domain

XTI(1..512) imaginary, time

XFR(1..512) real frequency

XFI(1..512) imaginary frequency

for i:=1 to 256 do

begin

XTR(i) = some function varying with (i)

XTI(i) = 0.0

end

CALL NLOGN(XTR,XTI,256,+1,XFR,XFI)

These arrays now have the FFT coefficients in them for 256 points but you want to interpolate to twice the sample rate or 512 points.

for i:=386 to 512 do

begin

XFR(i)=XFR(i-256)

XFI(i)=XFI(i-256)

end

The above code copies the old negative-frequency half of the 256-point FFT (elements 130 to 256) into the upper part of the new array, because FFT arrays hold two sets of frequencies (-w) and (+w) in addition to RE (cos) and IM (sin) components.

for i:=130 to 385 do

begin

XFR(i)=0.0

XFI(i)=0.0

end

The above code zeroes the inserted middle (high) frequencies. (Strictly, the old Nyquist value X(129) should first be split half-and-half between X(129) and X(385), and the final time series scaled by 2 to make up for the doubled length.)

CALL NLOGN(XFR,XFI,512,-1,XTR,XTI)

-1 means from (f) to (t) domain. If things went well XTI should have near zero values throughout and XTR should be sampled at twice the rate you started with.
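For anyone without an NLOGN routine handy, the same mid-zero-padding interpolation can be sketched in a few lines of numpy (my own function name and scaling, assuming an even-length real input; numpy's fft puts DC in bin 0 and the Nyquist in bin N/2, so the "middle" is easy to find):

```python
import numpy as np

# FFT interpolation by zero-padding the middle of the spectrum:
# FFT the real sequence, split it at the Nyquist bin, insert zeros
# between the two halves, inverse-FFT, and rescale.
def fft_interpolate(x, factor):
    n = len(x)                     # assumed even
    X = np.fft.fft(x)
    m = n // 2                     # Nyquist bin
    pad = n * (factor - 1)
    # Split the Nyquist bin evenly between the two halves:
    Xp = np.concatenate([X[:m], [X[m] / 2], np.zeros(pad - 1),
                         [X[m] / 2], X[m + 1:]])
    return np.fft.ifft(Xp).real * factor

t = np.arange(20)
x = np.sin(2 * np.pi * 3 * t / 20)   # 3 cycles in 20 samples
y = fft_interpolate(x, 5)            # 100-point interpolation
t5 = np.arange(100) / 5
# y lands on the underlying sine at the finer sampling:
print(np.max(np.abs(y - np.sin(2 * np.pi * 3 * t5 / 20))))
```

Splitting the Nyquist bin in half and scaling by the length ratio are the two details that keep the result real and at the original amplitude, which are the same points Steve and Bernie are circling around.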

Greg said in part May 30, 2014 at 2:15 am:

“…OH man. This is not the same at all. The FFT is symmetric, and thus adding “in the middle” is adding zero-valued high-frequency data points, thus increasing the frequency resolution of the spectrum without changing the data. It is, in effect, end padding!…”

Nope! You have forgotten something! The FFT is NOT in general symmetric. Make the signal complex, such as x = xr + j*xi where xr and xi are both random (just for a general test). Take X = fft(x) and neither its real part, nor its imaginary part, nor the magnitude is symmetric. Do actually try this. The FFT of x = xr (even if xr is random) IS of course symmetric, the basis for my interpolation.
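Bernie's distinction can be confirmed in a couple of lines (a numpy sketch; “symmetric” here means conjugate-symmetric, X[k] = conj(X[N-k])):

```python
import numpy as np

# FFT symmetry check: real input gives a conjugate-symmetric spectrum,
# a general complex input does not.
rng = np.random.default_rng(0)
xr = rng.standard_normal(16)
xi = rng.standard_normal(16)

Xr = np.fft.fft(xr)                  # FFT of a real sequence
real_sym = np.allclose(Xr[1:], np.conj(Xr[1:][::-1]))

Xc = np.fft.fft(xr + 1j * xi)        # FFT of a general complex sequence
cplx_sym = np.allclose(Xc[1:], np.conj(Xc[1:][::-1]))

print(real_sym, cplx_sym)            # True False
```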

We will also almost certainly agree that there is a complete mathematical symmetry between the FFT and IFFT equations. Whatever I do in time I can do in frequency. That means that the procedure either works in both domains, or fails in both domains.

Interpolation in the frequency domain works as in my jpg bottom panel. BUT this is because (by chance at first) the FFT was purely imaginary (I chose three sines). If I switch one to cosine, the interpolation is less smooth. Making this work robustly seems possible, fixing the “phase problem” as I suggested, or perhaps doing real/imaginary parts separately. I may well have done this 25 years ago.

But right now I have a program with 13 figures stacked up. I need a blank file and time – longer than the life of most postings here! And we are boring most folks if any are left here. Please watch for a future app note in my Electronotes site (tentatively AN-413 ???).

Steve from Rockwood replied to me May 30, 2014 at 8:12 am

Steve – I read what you wrote and don’t see any disagreement between us.

I saw your comment after replying to Greg at 8:53AM and thought you were making the same point about the real/imaginary parts, correctly pointing out that the imaginary part of the time sequence is assumed zero.

I use Matlab exclusively, which has an excellent FFT. But in the “old days” it was a DFT in BASICA (anyone remember that) with nested for loops and separate real/imaginary arrays.

@Bernie. We are in agreement. The phasing problem is likely a result of not treating the R and I components identically or not shifting both halves of the frequency spectrum at the array element next to the Nyquist. Cheers!

“It has the same rectangular window Willis used.”

ROFL. Yes doing nothing is a rectangular window.

We are up against fundamental constraints on information – kind of like the Heisenberg uncertainty principle. When examining the frequency components of signals at low frequencies you have a choice of windows that trades off frequency accuracy, amplitude accuracy, and phase accuracy. You also have a fundamental 1/x accuracy issue with the period. If you only have 2 periods of your signal, your error on the period seen in the periodogram is on the order of 50%! If you want a 5% error then you need 20 periods. No amount of magic padding is going to help – the information isn’t there in the original signal.

Try sweeping the frequency of a sine wave through the FFT, it’s easy to see the behavior.
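A sketch of that sweep (mine, with made-up numbers): five sines whose true periods range from 28 to 36 samples all land in the same bin of a 64-sample periodogram, so the raw DFT reports 32 for every one of them:

```python
import numpy as np

# Sweep the true period of a sine across a 64-sample record and note
# which period the raw FFT periodogram reports: the answer is quantized
# to the Fourier harmonics 64/1, 64/2, 64/3, ...
n = 64
t = np.arange(n)
reported = []
for true_period in (28.0, 30.0, 32.0, 34.0, 36.0):
    x = np.sin(2 * np.pi * t / true_period)
    spectrum = np.abs(np.fft.rfft(x))
    k = np.argmax(spectrum[1:]) + 1      # skip the DC bin
    reported.append(n / k)
print(reported)                          # every sweep step reports 32.0
```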

And there lies my fundamental disagreement with trying to determine trends off of historical climate data. We have 11 year, 60 year, and other long cycles happening in nature, and we simply don’t have enough data to accurately estimate what those periods actually are. Just to see two periods of the PDO or AMO we need 120 years of satellite data. Call me when it’s the year 2099….

Mann’s latest estimate based on a SINGLE period of the AMO is a complete joke. It just doesn’t pass the basic “do we have enough information” test. AFAICT the climate scientists who are peer reviewing this material would have failed a graduate-level course in digital signal processing…

Nick Stokes:

Shifting your mathematical formulation from the band-pass characteristic of spectral windows to that of low-pass RL or RC filters hardly changes the fact that ONLY with purely sinusoidal input will the result be sinusoidal also. This still fails to address the crux of the matter: what is obtained with RANDOM signals? In the former case, pure gaussian white noise will be shaped into autocorrelated oscillatory “waves” at the characteristic bandwidth, which in short records may strike the eye as “periodic.” In the latter case, there will be diffusion-like autocorrelated fluctuations. In neither case will a sinusoid with FIXED amplitude and phase be obtained. Nevertheless, sinusoidal regression will still produce a misleading numerical result.

All this is at the heart of the well-established systematic BIAS and asymptotic INCONSISTENCY of raw periodograms as estimators of power density spectra, usually covered in Signal Analysis 101 (see, e.g., O&S pp. 733-734). Sadly, those who rely primarily on computation for insight remain dismally ignorant in their unicorn hunt for “periodicity.”

Bernie:

Actually, I was not seeking your “help” in my work, but trying to set you up for a potentially lucrative consulting opportunity elsewhere.

I did like your demonstration of how simplistic emulation of bandlimited interpolation by zero-padding the FFT leads to unexpected results. The notion that sinusoidal regression can be used effectively for interpolation in the general case is patently absurd.

As an EE, perhaps you can explain to all of us how the old analog spectrum analyzers set up their bank of band-pass filters in the frequency domain. They were, of course, equally spaced in their center frequencies, but was the Rayleigh criterion for resolution strictly observed?

Greg:

It’s not surprising that a programmer who can’t label the frequency axis correctly, has scant clue that the purpose of power spectrum analysis is to estimate the density of signal variance over frequency, obscures this by plotting as a function of period, and talks nonsensically of “the power spectrum of cross-correlation” would indulge himself in fantasies about my work and his qualifications as a signal analyst.

Plato said it best: “Wise men speak when they have something to say; fools speak to say something.”

1sky1 says: May 30, 2014 at 5:23 pm: “Shifting your mathematical formulation from the band-pass characteristic of spectral windows to that of low-pass RL or RC filters hardly changes the fact that ONLY with purely sinusoidal input will the result be sinusoidal also.”

Not true at all. I’m using an LC filter, which is the equivalent of a tuning fork, and is a band-pass filter (freq 1/sqrt(LC)). And the integral formulation is for arbitrary driving voltage V(t). If it is time-bounded, then afterwards the LC circuit is in pure sinusoidal oscillation, at the resonant frequency.

Actually I made an error writing it down – in the integrals it should be dV/dτ. But that doesn’t change the result, which is that after the input has passed, the circuit goes on ringing (for ever with no R) at the resonant frequency with an amplitude that depends on V(t), and is high when V was oscillatory and strong in that band.

Peter Sable says:

May 29, 2014 at 1:52 am

Thanks much, Peter. I’ve been thinking about that same question as I shoveled gravel today. I’ll need to take a close look at what you’ve done.

Appreciated,

w.

1sky1 commented to me May 30, 2014 at 5:24 pm

I guess that lucrative consulting contract was just not in the cards! (Tomorrow I will wander the streets – tin cup in hand.) As for paragraph 2, I haven’t a clue what you were trying to say. As for the spectrum analyzer, no, the one I made did not have equally spaced frequencies. I could elaborate, but following your lead, no particulars will be offered. I have no idea what a Rayleigh criterion of resolution is – never heard of it.

1sky1> Signal Analysis 101 (see, e.g., O&S pp.733-734).

Therein lies a serious problem. It’s not “101” if it’s page 733, and it’s certainly not 101 if it’s O&S – that’s a graduate-level text. The undergrad text, which I also have, is terrible.

This field could benefit from a well written textbook that a normal engineer or climate scientist can use. O&S is a terrible textbook. The book would focus not just on the math but the process for not making horrible mistakes. O&S barely skims over that process, they assume if you can grok the math you can grok the process. Typical of too-smart people…

“…obscure this by plotting as a function of period…”

There’s nothing inherently wrong with this, period is merely 1/f. In either case the points should be plotted so that the resolution limitations are obvious.

Peter Sable said in part May 30, 2014 at 8:07 pm

“… O&S is a terrible textbook. …”

Yea – I think 97% of students would agree. I first learned DSP using a mimeographed edition probably about 1973, and have seen three published editions, and worked with students who were required to have it. (When I taught a course myself, I never used any textbook.) I have heard it was always (or at least originally) intended to be a summary collection rather than a text, which makes sense for the earliest editions – there weren’t many courses. Most students did not like it at all. Later they were more apt to agree that if you had learned the material once, it was a reference they needed on their shelves. But true – you can’t just send someone right to page 733 (of 870 pages!) and expect anything useful to occur.

Nick Stokes:

I appreciate your clarification regarding the bandpass characteristic of LC circuits, as seen in its freq. response. But we’re back to the misleading tuning-fork analogy. To be sure, it will respond to any finite-duration excitation and “ring” thereafter at its resonant frequency. However, that is far removed from straightforward heterodyning of a PRE-CHOSEN sinusoid with a data record–especially when that record LACKS any finite-amplitude sinusoidal components.

That is the usual case in climate “anomaly” series, wherein diurnal and annual cycles have been removed. Being the product of chaotic physical processes governed by hyperbolic PDEs, these are oscillatory–but NOT periodic. The power density spectrum is continuous, instead of concentrated in discrete lines. Some number will, of course, be produced with ANY chosen period, but its essential MEANING is not the same. In practice, sinusoidal regression remains a curve-fitting method suitable ONLY for periodic signals with KNOWN periods, such as the tidal components.

Bernie:

My conditional offer was not meant to suggest that you stand destitute, although if cutbacks continue some co-workers may have to purchase tin cups. ;-)

What I liked was the demo of the Gibbs effect you presented here:

which should remind all practitioners of the pitfalls of truncation in finite-length DFTs. The Rayleigh criterion, a resolution concept borrowed from physics, effectively specifies that sinusoidal orthogonality is achieved when the first zero of the spectral window centered on a frequency band coincides with the peak response of the adjoining band. Thus sinusoidal regression in keeping with this criterion will simply replicate what is obtained by FFTs.

Peter Sable:

While we all agree that O&S may be far from the ideal teaching text, Chapter 11 does cover the critical differences in analyzing periodic vs. random signals with many examples and is advertised as suitable in part for undergrads. Bendat & Piersol may be a better read for some minds. It also includes a chapter on cepstrum analysis, useful for sonar and radar work.

My objection to plotting power density curves v. period is the destruction of any sense of the AREA under the density curve, which in many cases with short records is more important than the exact location of a peak. On the other hand, periodograms (suitable primarily for analyzing periodic records, not random series) should be presented as discrete lines, rather than curves.

1sky1 says: May 31, 2014 at 2:21 pm: “However, that is far removed from straightforward heterodyning of a PRE-CHOSEN sinusoid with a data record”

No, it’s the same. Look again at that time-varying integral solution, written with general L and C, and integrated by parts to get back to V(t):

L*I(t) = cos(ωt)∫ cos(ωτ) V(τ) dτ - sin(ωt)∫ sin(ωτ) V(τ) dτ

again int over all real, but with V(t) of finite duration. ω=1/sqrt(LC)

Each coefficient is exactly the heterodyning integral – multiply the signal by the prechosen (tuning fork) test sinusoid trig(ωτ).

“again int over all real” – wrong, the upper limit is t. And the lower limit might as well be 0, since V(t)=0 outside a finite range.

“The power density spectrum is continuous, instead of concentrated in discrete lines.”

Which is exactly why seeking more plotted resolution than is provided by the DFT is desirable.

There have been several comments here to the effect that there is no spectral information available in the data between the Fourier periods ( frequencies) and that zero padding does not provide any more information.

This is a false conclusion that is based on the failure to realise the difference between a spectral analysis and a TRANSFORMATION.

FT in all its variants provides an F.dom representation of the data that is fully equivalent to the T.dom representation and allows it to be reconstructed by iFT. This is not a spectral analysis; it is a mathematically equivalent transformation. You can then perform a spectral analysis by looking at the FT, but for medium and long periods it will be a crude approximation. That makes it a poor analysis.

The Fourier frequencies are all harmonics of the length of the data sample. It is patently false to suggest that there is no frequency information in the data for other values. The only true limitation is the Nyquist cut-off.

If you do the FFT of 200y of data of a pure 140y sine the FT will not contain a 140y period. But it will contain N ‘incorrect’ frequency components that enable a reconstruction of the correct periodic form over the sample range.

Now if you zero pad to 280y you will find the FT is a lot simpler ;) and the amplitude stronger than the largest of the N previous frequency components.
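Greg's example can be reproduced in a few lines (a numpy sketch, assuming yearly sampling):

```python
import numpy as np

# 200 "years" of a pure 140-year sine.  The unpadded DFT bins sit at
# periods 200/k, so 140 is not among them and the peak lands elsewhere;
# zero-padding to 280 samples makes 1/140 = 2/280 an exact bin.
n, period = 200, 140
t = np.arange(n)
x = np.sin(2 * np.pi * t / period)

raw = np.abs(np.fft.rfft(x))
padded = np.abs(np.fft.rfft(x, 280))     # zero-pad to 280 samples

p_raw = 1 / np.fft.rfftfreq(n)[np.argmax(raw)]
p_pad = 1 / np.fft.rfftfreq(280)[np.argmax(padded)]
print(p_raw, p_pad)                      # the padded peak sits at 140.0
```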

This is how progressive padding, by shifting the fixed FT frequencies, does allow a more precise detection of where peaks lie. This concerns the resolution of the analysis. It will never increase the resolution of the data. Many seem to have similar problems differentiating the two. Of course adding zeros cannot improve the resolution of the data. It can improve the resolution of the analysis.


Re the universe as numbers: to be able to ascribe number and dimension to any object or phenomenon’s properties, it is only necessary that it be consistent from instance to instance. So discovering numbers is discovering consistency. The consistency is not the thing, per se, but it is essential to identification, otherwise it would not be recognizable in the first place. So calling the universe one of numbers is a kind of tautology: “consistencies exist”.

“Of course adding zeros can not improve the resolution of the data. It can improve the resolution of the analysis.”

Greg: References?

I find there is a dearth of references on how good low frequency information is with an FFT. I would appreciate some references so I can do further reading.

I tried padding with zeros. Yes, I got a higher resolution plot in the low frequency (high period) section of the graph, but the energy was smeared across just as large a width as before – just with less jagged lines.

To show this I tried two sine waves one year apart in period: 32 and 33 years. Can’t tell the difference.

Further messing around shows that at 16 years, I can discern 2/3 year of a difference in overlapping peaks. At 32 years, it’s about 1/4 that. At 64 years it’s about 1/16th the resolution as at 16 years. Not unexpected. (Note: it was actually about 1/18th, so rather than numerical guessing I should try to derive an actual function symbolically… but I have a day job.)

“I tried padding with zeros. Yes, I got a higher resolution plot in the low frequency (high period) section of the graph, but the energy was smeared across just as large a width as before ”

In fact you got higher plotting frequency resolution _throughout_ the spectrum. This was most obvious in long periods when plotting period, not frequency.

Of course it does not change energy distribution, it just shows it in more detail. That is what I was saying about improving the resolution of the analysis.

“Can’t tell the difference.” They have a frequency separation of 1/32-1/33=0.000947 per year, sampled once per month or whatever you did. That is what is meant by the resolution of the data, which you can’t improve unless you have a longer record, not just more samples.

Sorry, I don’t have any links to hand and I too have a ‘day job’ so I’m not going to do your googling.

“I find there is a dearth of references on how good low frequency information is with an FFT. I would appreciate some references so I can do further reading.”

There’s a dearth because you’re asking the wrong question. The _frequency_ resolution is the same across the spectrum.

Nick Stokes:

Your analogy with a tuning fork or LC circuit subject to arbitrary TRANSIENT excitation continues to mislead you. That’s a RESONANT physical system, which acquires and stores energy from TIME-domain input, concentrating it into periodic output at the resonance frequency.

By contrast, the record-length-dependent spectral window is a mathematical construct, whose NON-RESONANT bandpass frequency response is entirely different in form and operates in the FREQUENCY domain, simply integrating the spectral content of NON-transient signals in a variance-preserving manner. The spectral window never disappears and you cannot get a pure sinusoid unless the input signal consists of spectral lines. Period!

Peter Sable:

You are entirely correct! Zero-padding adds no signal INFORMATION; it merely increases the COMPUTATIONAL resolution in the frequency domain, which amateurs mistake for ANALYTIC improvement. This can be seen by considering the zero effect that padding has upon the sample acf. The only way that the resolution limitation of the spectral window can be overcome is by acquiring more data. By supposing a pre-set period, sinusoidal regression produces a number, but its significance is dependent entirely upon the validity of that supposition.

An elementary demonstration of this can be made by adding all the sinusoids that Willis finds for the SSN data. The result is horribly out of scale, because of the non-orthogonality of the decomposition. And if the largest-amplitude “periodicity” alone is compared to the data, the mean-square error is huge, rendering that periodicity quite useless in signal identification or prediction. The claim of “advantages” is demonstrably pathetic.

Why is an increase in computational resolution not, at least potentially, an analytical improvement?

Increasing the computational resolution can help to correctly identify a period that does not have the good fortune to fall exactly on one of the Fourier harmonics, as I explained here:

http://wattsupwiththat.com/2014/05/26/well-color-me-gobsmacked/#comment-1651187

” And if the largest-amplitude “periodicity” alone is compared to the data, the mean-square error is huge, rendering that periodicity quite useless ”

Quelle surprise! You take one component out of a spectrum and it has a “huge” residual. To follow that logic the whole process is “quite useless”.

However, if you take the largest component from a swept analysis found by progressively increased padding, which finds where the largest signal actually is in the data rather than hoping it lies near a Fourier harmonic, you will in general get a smaller residual.

Again to take my example, the largest unpadded F cmpt would have a “huge” residual, the 280y padded analysis would have zero residual. Where the input is not itself a pure harmonic function, the result will be somewhere between these two extremes.

“The _frequency_ resolution is the same across the spectrum.”

Ah, and the derivative, expressed as a percentage error, changes with the frequency, which is what I was intuiting but not explaining well. Okay, I have enough clues to figure out the equation for that.
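For what it's worth, the equation falls straight out of the constant bin spacing (a sketch, assuming DFT bin spacing Δf = 1/T with record length T):

```python
# With bin spacing df = 1/T and P = 1/f, the period uncertainty is
#   dP = P**2 * df = P**2 / T,
# so the relative error dP/P = P/T = 1/(number of periods in the record).
def relative_period_error(period, record_length):
    return period / record_length

print(relative_period_error(32, 64))    # 2 periods observed  -> 0.5 (50%)
print(relative_period_error(16, 320))   # 20 periods observed -> 0.05 (5%)
```

This reproduces Peter's 50%/5% figures above, and his empirical roughly-1/16th scaling: dP grows as P², so quadrupling the period costs a factor of sixteen in period resolution.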

Thanks, I’m catching up after a 15 year hiatus in signal processing. Between the day job and the grayer cells it takes a bit…