Crowdsourcing A Full Kernel Cascaded Triple Running Mean Low Pass Filter, No Seriously…

Fig 4-HadCrut4 Monthly Anomalies with CTRM Annual, 15 and 75 years low pass filters

Image Credit: Climate Data Blog

By Richard Linsley Hood  – Edited by Just The Facts

The goal of this crowdsourcing thread is to present a 12 month/365 day Cascaded Triple Running Mean (CTRM) filter, inform readers of its basis and value, and gather your input on how I can improve and develop it. A 12 month/365 day CTRM filter completely removes the annual ‘cycle’, as the CTRM is a near Gaussian low pass filter. In fact it is slightly better than Gaussian in that it completely removes the 12 month ‘cycle’ whereas true Gaussian leaves a small residual of that still in the data. This new tool is an attempt to produce a more accurate treatment of climate data and see what new perspectives, if any, it uncovers. This tool builds on the good work by Greg Goodman, with Vaughan Pratt’s valuable input, on this thread on Climate Etc.

Before we get too far into this, let me explain some of the terminology that will be used in this article:

—————-

Filter:

“In signal processing, a filter is a device or process that removes from a signal some unwanted component or feature. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies and not others in order to suppress interfering signals and reduce background noise.” Wikipedia.

Gaussian Filter:

A Gaussian filter is probably the ideal filter in time domain terms. That is, if you think of the graphs you are looking at as traces on an oscilloscope, then a Gaussian filter is the one that adds the least distortion to the signal.

Full Kernel Filter:

Indicates that the output of the filter will not change when new data is added (except to extend the existing plot). It does not extend up to the ends of the data available, because the output is in the centre of the input range. This is its biggest limitation.

Low Pass Filter:

A low pass filter is one which removes the high frequency components in a signal. One of its most common usages is in anti-aliasing filters for conditioning signals prior to analog-to-digital conversion. Daily, Monthly and Annual averages are low pass filters also.

Cascaded:

A cascade is where you feed the output of the first stage into the input of the next stage and so on. In a spreadsheet implementation of a CTRM you can produce a single average column in the normal way and then use that column as an input to create the next output column and so on. The value of the inter-stage multiplier/divider is very important. It should be set to 1.2067. This is the precise value that makes the CTRM into a near Gaussian filter. It gives values of 12, 10 and 8 months for the three stages in an Annual filter for example.
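
As a sketch of that stage-length rule (this is my own Python illustration, not the author's spreadsheet; the function name is hypothetical), each successive window is the previous one divided by 1.2067 and rounded:

```python
# A minimal sketch (mine, not the article's spreadsheet): derive the
# three running-mean window lengths from the 1.2067 inter-stage ratio.

def ctrm_stage_lengths(first_window, ratio=1.2067):
    """Each successive stage is the previous window divided by the
    ratio, rounded to the nearest whole sample."""
    second = round(first_window / ratio)
    third = round(second / ratio)
    return first_window, second, third

print(ctrm_stage_lengths(12))   # (12, 10, 8) for an Annual filter
```

Starting from 12 months this reproduces the 12, 10 and 8 month stages quoted above; the same rule scales to longer filters (e.g. a first window of 180 months for a 15 year filter).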

Triple Running Mean:

The simplest method to remove high frequencies or smooth data is to use moving averages, also referred to as running means. A running mean filter is the standard ‘average’ that is most commonly used in Climate work. On its own it is a very bad form of filter and produces a lot of arithmetic artefacts. Adding three of those ‘back to back’ in a cascade, however, allows for a much higher quality filter that is also very easy to implement. It just needs two more stages than are normally used.
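
To make the recipe concrete, here is a minimal sketch in Python (the article itself supplies a spreadsheet and R code; this version and its function names are my own illustration, not the author's implementation):

```python
# A sketch of the cascade: three plain running means, windows 12, 10
# and 8 months, applied back to back with no padding, so the result is
# a centred, 'full kernel' output that shrinks at both ends.
import math

def running_mean(x, window):
    """Plain boxcar average; output shrinks by window - 1 points."""
    return [sum(x[i:i + window]) / window
            for i in range(len(x) - window + 1)]

def ctrm_annual(x, windows=(12, 10, 8)):
    """Cascaded triple running mean: each stage's output feeds the next."""
    for w in windows:
        x = running_mean(x, w)
    return x

# A pure 12-month cycle is removed completely by the 12-wide first stage:
cycle = [math.sin(2 * math.pi * k / 12) for k in range(10 * 12)]
print(max(abs(v) for v in ctrm_annual(cycle)))  # ~0 (round-off only)
```

The demonstration at the end shows the headline property claimed above: any average over a whole number of annual cycles sums the cycle to zero, so the 12 month component vanishes at the first stage.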

—————

With all of this in mind, a CTRM filter, used either at 365 days (if we have that resolution of data available) or 12 months in length with the most common data sets, will completely remove the Annual cycle while retaining the underlying monthly sampling frequency in the output. In fact it is even better than that, as it does not matter whether the data used has already been normalised or not. A CTRM filter will produce the same output on either raw or normalised data, with only a small offset to account for whatever ‘Normal’ period the data provider has chosen. There are no added distortions of any sort from the filter.
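
The raw-versus-normalised point follows from linearity, and can be sketched as below (my own illustration with hypothetical numbers; a single running mean is shown for brevity, and the same holds for the full cascade since each stage is linear):

```python
# Sketch: running means are linear, so filtering raw data and filtering
# anomalies (raw minus a constant 'Normal') differ only by that constant.

def running_mean(x, window):
    return [sum(x[i:i + window]) / window
            for i in range(len(x) - window + 1)]

normal = 14.0                                # hypothetical baseline
raw = [14.0 + 0.01 * k for k in range(48)]   # hypothetical raw series
anom = [v - normal for v in raw]             # the 'normalised' version

diff = [r - a for r, a in zip(running_mean(raw, 12),
                              running_mean(anom, 12))]
# Every output point differs by exactly the 'Normal' offset:
print(all(abs(d - normal) < 1e-9 for d in diff))  # True
```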

Let’s take a look at what this generates in practice. The following are UAH Anomalies from 1979 to Present with an Annual CTRM applied:

Fig 1-Feb UAH Monthly Global Anomalies with CTRM Annual low pass filter

Fig 1: UAH data with an Annual CTRM filter

Note that I have just plotted the data points. The CTRM filter has removed the ‘visual noise’ that month-to-month variability causes. This is very similar to the 12 or 13 month single running mean that is often used, however it is more accurate as the mathematical errors produced by those simple running means are removed. Additionally, the higher frequencies are completely removed while all the lower frequencies are left completely intact.

The following are HadCRUT4 Anomalies from 1850 to Present with an Annual CTRM applied:

Fig 2-Jan HadCrut4 Monthly Anomalies with CTRM Annual low pass filter

Fig 2: HadCRUT4 data with an Annual CTRM filter

Note again that all the higher frequencies have been removed and the lower frequencies are all displayed without distortions or noise.

There is a small issue with these CTRM filters: being ‘full kernel’ filters, as mentioned above, their outputs will not change when new data is added (except to extend the existing plot). However, because the output is centred on the input data, it does not extend to the ends of the available data, as can be seen above. Overcoming this issue will require some additional work.

The basic principles of filters work over all timescales, thus we do not need to constrain ourselves to an Annual filter. We are, after all, trying to determine how this complex load that is the Earth reacts to the constantly varying surface input and surface reflection/absorption with very long timescale storage and release systems including phase change, mass transport and the like. If this were some giant mechanical structure slowly vibrating away we would run low pass filters with much longer time constants to see what was down in the sub-harmonics. So let’s do just that for Climate.

When I applied a standard time/energy low pass filter sweep against the data I noticed that there is a sweet spot around 12-20 years where the output changes very little. This looks like it may well be a good stop/pass band binary chop point, so I chose 15 years as the roll-off point to see what happens. Remember this is a standard low pass/band-pass filter, similar to the one that splits telephone from broadband to connect to the Internet. Using this approach, all frequencies with periods above 15 years are preserved in the output and all frequencies below that point are removed.
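
A rough sketch of that pass/stop behaviour (my own illustration, not the author's sweep; windows are derived with the 1.2067 ratio as above): a ~60 year sine passes through a 15 year CTRM largely intact, while a 2 year sine is almost entirely removed.

```python
# Feed pure sines of known period through a 15 year CTRM on monthly
# samples and measure what amplitude survives.
import math

def running_mean(x, window):
    return [sum(x[i:i + window]) / window
            for i in range(len(x) - window + 1)]

def ctrm(x, first_window, ratio=1.2067):
    w = first_window
    for _ in range(3):          # three cascaded stages: 180, 149, 123
        x = running_mean(x, w)
        w = round(w / ratio)
    return x

def surviving_amplitude(period_months, cutoff_years=15, years_of_data=150):
    n = years_of_data * 12
    sine = [math.sin(2 * math.pi * k / period_months) for k in range(n)]
    return max(abs(v) for v in ctrm(sine, cutoff_years * 12))

print(round(surviving_amplitude(60 * 12), 2))  # ~0.8: the wriggle survives
print(round(surviving_amplitude(2 * 12), 2))   # 0.0: short cycles removed
```

Note that, like any real low pass filter, the roll-off is gradual rather than a brick wall, which is why the 60 year component comes through at roughly 0.8 of its amplitude rather than exactly 1.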

The following are HadCRUT4 Anomalies from 1850 to Present with a 15 year CTRM and a 75 year single running mean applied:

Fig 3-Jan HadCrut4 Monthly Anomalies with CTRM Annual, 15 and 75 years low pass filters

Fig 3: HadCRUT4 with additional greater than 15 year low pass. Greater than 75 year low pass filter included to remove the red trace discovered by the first pass.

Now, when reviewing the plot above some have claimed that this is a curve fitting or a ‘cycle mania’ exercise. However, the data hasn’t been fitted to anything; I just applied a filter. Then out pops a wriggle, at around ~60 years, which the data draws all on its own. It’s the data what done it – not me! If you see any ‘cycle’ in the graph, then that’s your perception. What you can’t do is say the wriggle is not there. That’s what the DATA says is there.

Note that the extra ‘greater than 75 years’ single running mean is included to remove the discovered ~60 year line, as one would normally do to get whatever residual is left. Only a single stage running mean can be used as the data available is too short for a full triple cascaded set. The UAH and RSS data series are too short to run a full greater than 15 year triple cascade pass on them, but it is possible to do a greater than 7.5 year which I’ll leave for a future exercise.

And that Full Kernel problem? We can add a Savitzky-Golay filter to the set, which is the Engineering equivalent of LOWESS in Statistics, so it should not meet too much resistance from statisticians (want to bet?).
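
A sketch of why S-G solves the end problem (my own pure-Python illustration, not the article's R code; the window width and the clamped end-window handling are my assumptions, one common choice among several): at each point a quadratic is fitted by least squares to the nearest full window and evaluated there, so the output runs all the way to both ends of the data.

```python
# Second-order Savitzky-Golay smoothing, full-length output.

def quad_fit_eval(xs, ys, x_eval):
    """Least-squares quadratic through (xs, ys), evaluated at x_eval,
    via the 3x3 normal equations and Cramer's rule."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(M)
    coeffs = []
    for col in range(3):
        a = [row[:] for row in M]
        for r in range(3):
            a[r][col] = b[r]
        coeffs.append(det3(a) / d)
    return coeffs[0] + coeffs[1] * x_eval + coeffs[2] * x_eval ** 2

def savgol2(y, half_width):
    """S-G smooth, order 2, window 2*half_width + 1. Near the ends the
    window is clamped inside the data, so every sample gets a value."""
    m, n = half_width, len(y)
    out = []
    for j in range(n):
        lo = min(max(j - m, 0), n - (2 * m + 1))   # clamp window at ends
        xs = [k - (lo + m) for k in range(lo, lo + 2 * m + 1)]  # centred
        out.append(quad_fit_eval(xs, y[lo:lo + 2 * m + 1], j - (lo + m)))
    return out

# Smoothing an exact quadratic reproduces it, ends included:
data = [0.5 + 0.02 * k - 0.0001 * k * k for k in range(100)]
smooth = savgol2(data, 10)
print(len(smooth) == len(data))                              # True
print(max(abs(a - b) for a, b in zip(smooth, data)) < 1e-6)  # True
```

In practice one would use a library implementation; the point of the sketch is that, unlike the full kernel CTRM, the local-polynomial fit yields an estimate right up to the latest data point.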

Fig 4-Jan HadCrut4 Monthly Anomalies with CTRM Annual, 15 and 75 years low pass filters and S-G

Fig 4: HadCRUT4 with additional S-G projections to observe near term future trends

We can verify that the parameters chosen are correct because the line closely follows the full kernel filter if that is used as a training/verification guide. The latest part of the line should not be considered an absolute guide to the future. Like LOWESS, S-G will ‘whip’ around on new data like a caterpillar searching for a new leaf. However, it tends to follow a similar trajectory, at least until it runs into a tree. While this is only a basic predictive tool, which estimates that the future will be like the recent past, it estimates that we are over a local peak and headed downwards…

And there we have it. A simple data treatment for the various temperature data sets, a high quality filter that removes the noise and helps us to see the bigger picture. Something to test the various claims made as to how the climate system works. Want to compare it against CO2? Go for it. Want to check SO2? Again, fine. Volcanoes? Be my guest. Here is a spreadsheet containing UAH and an Annual CTRM, and R code for a simple RSS graph. Please just don’t complain if the results from the data don’t meet your expectations. This is just data and summaries of the data. Occam’s Razor for a temperature series. Very simple, but it should be very revealing.

Now the question is how I can improve it. Do you see any flaws in the methodology or tool I’ve developed? Do you know how I can make it more accurate, more effective or more accessible? What other data sets do you think might be good candidates for a CTRM filter? Are there any particular combinations of data sets that you would like to see? You may have noted the 15 year CTRM combining UAH, RSS, HadCRUT and GISS at the head of this article. I have been developing various options at my new Climate Data Blog and based upon your input on this thread, I am planning a follow up article that will delve into some combinations of data sets, some of their similarities and some of their differences.

About the Author: Richard Linsley Hood holds an MSc in System Design and has been working as a ‘Practicing Logician’ (aka Computer Geek) to look at signals, images and the modelling of things in general inside computers for over 40 years now. This is his first venture into Climate Science and temperature analysis.

355 Comments
March 18, 2014 8:03 am

Dr Norman Page says:
March 18, 2014 at 7:56 am
Jeff – there are obviously many motives for forecasting future climate without regard to CO2. If for example we have reason to believe that we might expect a significant cooling, then this would suggest various actions in mitigation or adaptation.
Perhaps but I doubt that the uncertainties will be low enough for actionable predictions until we have amassed another couple of hundred years worth of reliable data. Until then, my primary concern is derailing the destructive agenda of the warming hysterics. After that threat is removed, knock yourselves out.

Bernie Hutchins
March 18, 2014 8:13 am

RichardLH at March 18, 2014 at 2:16 am quotes me SELECTIVELY as saying:
“No one will argue information loss if this is your angle. Because no information IS lost.”
Note I said “If this is your angle” but you never get around to saying it is. With FILTERING (processing of a signal) information IS lost. That’s the purpose of filtering. With multi-band ANALYSIS (like with an FFT), information is retained in different outputs. The purpose is insight.
Although analysis is apparently your viewpoint (you at 5:28PM yesterday, and Greg at 5:31PM yesterday), I’m not sure you understand this is what you are contending, or the important difference (in concept) from filtering.
It is somewhat disconcerting to have to keep mentioning this here.

RichardLH
March 18, 2014 8:15 am

Willis:
So that you can get an idea of what the HadCRUT 15 year CTRM de-trended with a 75 year S-G looks like
http://snag.gy/47xt0.jpg
A very nice 65 year (well one ‘cycle’ of anyway) wriggle. 🙂
And, no, I do not expect that underlying trend of the 75 year S-G to continue in the manner suggested, an inflexion in that is due fairly soon I suspect, but will take a few more years to be visible (before you ask)

RichardLH
March 18, 2014 8:18 am

Jeff Patterson says:
March 18, 2014 at 7:57 am
“Some suggestions for RichardLH
1) Plot the frequency response in dB. This allows much better dynamic range in observing how far down the sidelobes and passband ripples are.”
That would emphasise what is happening but would make the plots SO much different to other existing plots out there. I’ll stick with a straight linear scale for now.
“2) Run randomly generated data through the filter and look at the power spectral density of the output. If the PSD is flat in the passband, the filter is exonerated as the source of the wiggles.”
It is (very near to) Gaussian. It will have a Gaussian response. No work required.
“3) Show the step response of the filter, (not the theoretical) by running a step through the actual filter and plotting the result to prove there is no error in the implementing code and to show there is no substantial overshoot and/or ringing. This step is necessary because the temperature data is not WGN and contains large short-term excursions which will ring the filter if it is underdamped. ”
It is (very near to) Gaussian. It will have a Gaussian response. No work required.
“4) I think I saw somewhere that Greg had a high-pass version of the filter. Set the cut-off of the HP version as high as possible while still passing the 65 year cycle unattenuated. Subtract the HP filter output from the input data after lagging the input data by the (N-1)/2 samples (the filter delay) where N is the number of taps (coefficients) in the filter (N should be odd). The result will be the trend, which I suspect will show a remarkably low variance about a ~.5 degC/century regression.”
Did the high pass version as a quick piece of work and posted the results above. Only from the clipboard but that is the fastest way.

RichardLH
March 18, 2014 8:20 am

Bernie Hutchins says:
March 18, 2014 at 8:13 am
“Note I said “If this is your angle” but you never get around to saying it is. With FILTERING (processing of a signal) information IS lost. That’s the purpose of filtering. ”
Wrong. Filtering is about selecting, not losing, data. The ‘other’ data is still there, just in a different plot as I showed above.

March 18, 2014 8:27 am

Jeff You are too pessimistic – there will be straws in the wind by 2020 and by 2040 predictions until 2100 will probably be actionable. It is already clear that CO2 is of minor significance and that inaction in that regard is the best course.

Matthew R Marler
March 18, 2014 8:29 am

Jeff Patterson: Your first sentence is indecipherable (the signal definition depends on future data??).
I subverted my point by being too succinct. It isn’t the “definition” of the “signal” that depends (among other things) on future data. Rather, it is an evaluation and decision about what persists (hence might be called “signal”) and what is transient (hence might be called “noise”). If the 60 year oscillation persists, it might be called “signal”; if it does not persist, it is simply another instance of the fact that a finite time series can be well-modeled by a variety of methods: piecewise polynomials, polynomials, trig polynomials, smoothing splines, regression splines, and so forth.
You still misunderstand the difference between filtering and curve fitting but no matter.
What I wrote, and repeat, is that they are “indistinguishable” in practice. All “filtering” entails “curve-fitting”; then identification of some of the “fitting” results as “signal”, and other results as “noise”. “Band-pass filtering”, for example, is approximately equivalent to subtracting out the part of the data that is well fitted by the functions thought to represent the noise, which have been derived from the assumption about the noise (i.e., the coefficients are calculated to give the hoped for “appropriate” transfer function); there is no universal a priori reason for calling either the “low” frequency components or the “high” frequency components the “signal”; it depends on context.
Prediction of the climate is a fool’s errand. Since there is no detectable effect of CO2 in the temperature record the motive for forecasting is obviated. The climate will do what it has always done- meander about.
If it were possible, and it may become so, to predict climate (the distribution of the weather over space and time) that would be valuable. At the present time, the GCMs have a record of over-predicting the mean; they are not ready to be relied upon, but failure up til now does not imply failure in perpetuity (paraphrase of quotes by Thomas Edison and many others.) Perhaps (this is what I expect) in 20 years they’ll be accurate enough at predicting the seasonal distribution of the weather in reasonably sized regions (it would have been nice, 6 months ago, to have predicted the US combination of Western warmth and Eastern cold and snow.) Whether CO2 concentration will have to be included in order to make accurate enough forecasts is an open research question.
The assertion that there is a hard and fixed distinction between “curve fitting” and “filtering” I think results from a narrow view of all the ways and contexts in which these related computational techniques are used.

Matthew R Marler
March 18, 2014 8:32 am

Richard LH : OK – definition time (for this article anyway)
Signal = anything longer than 15 years in period
Noise = anything shorter than 15 years in period

With that definition set, your post illustrates what I wrote: for any definition, using a finite data set, you can find representations of the noise and signal. Deciding or learning what represents a persistent process requires more than that.

Bernie Hutchins
March 18, 2014 8:34 am

Richard –
You say your SG filter is length 75, but you did not specify the order of the fit.
And is this a cascade or just one pass?
Nice result (good picture), but we need specific details to verify and continue.

RichardLH
March 18, 2014 8:37 am

Matthew R Marler says:
March 18, 2014 at 8:32 am
“With that definition set, your post illustrates what I wrote: for any definition, using a finite data set, you can find representations of the noise and signal. Deciding or learning what represents a persistent process requires more than that.”
True. Except for the fact that a 15 year corner exposed just a ~60 year wriggle. That in itself is interesting.
There is nothing of any real power above 15 years and below 75 years except that rather cyclic ~60 year wriggle.
http://snag.gy/47xt0.jpg

RichardLH
March 18, 2014 8:40 am

Bernie Hutchins says:
March 18, 2014 at 8:34 am
“You say your SG filter is length 75, but you did not specify the order of the fit.”
Second order
“And is this a cascade or just one pass?”
5 pass, multi pass (See the R code and comments on my blog). The reference to Nate Drake PhD is because he put his PhD on the table to bet that a CTRM was not an FIR filter :-). See the Nature Missing Heat thread for all the gory details.
“Nice result (good picture), but we need specific details to verify and continue.”
Thanks 🙂

Matthew R Marler
March 18, 2014 8:46 am

Jeff Patterson: “Whether the “65 year cycle” is a near line spectrum of astronomical origin, a resonance in the climate dynamics, a strange attractor of an emergent, chaotic phenomena or a statistical anomaly, what it isn’t is man-made. It is however, responsible for the global warming scare. If it had not increased the rate of change of temperature during its maximum slope period of 1980′s and 90s, the temperature record would be unremarkable (see e.g. the cycle that changed history) and we wouldn’t be here arguing the banalities of digital filter design.”
Only in part. Scientists in the 19th century warned of the potential of increasing CO2 to cause increasing climate warmth. Numerous textbooks (e.g. “Principles of Planetary Climate” by Raymond T. Pierrehumbert) have calculated the amount by which the “equilibrium” temperature of the Earth should increase in response to a doubling of CO2, and some other calculations based on a hypothetical H2O feedback have raised the estimate above that. All these calculations have their problems, but the overall increase in temp since the mid 1800s, despite the oscillation in the rise, are what was predicted. All the data analyses to date do not lead to a very clear-cut result; accumulating CO2 might or might not be warming the Earth climate, and the possible effect might or might not continue into the future (that is, even if CO2 has increased temps up til now, some negative feed back of the sort presented by Willis Eschenbach might prevent warming in the future.)
After reviewing this evidence for years, I am persistently dismayed by people who confidently believe that CO2 does or does not have an effect of a particular size.

RichardLH
March 18, 2014 8:53 am

Matthew R Marler says:
March 18, 2014 at 8:46 am
“After reviewing this evidence for years, I am persistently dismayed by people who confidently believe that CO2 does or does not have an effect of a particular size.”
I am cautious about ascribing cause without taking properly into account what looks to be very significant natural variations, possibly of a quite cyclic nature.
As far as I am concerned, the data is not yet strong enough to be able to properly decide what proportions are assigned correctly to what.
Mostly it is just speculation and models. I have spent my life in computing trying to understand what assumptions people have made and what turns out to be wrong in those assumptions.

Matthew R Marler
March 18, 2014 8:56 am

Richard LH: True. Except for the fact that a 15 year corner exposed just a ~60 year wriggle. That in itself is interesting.
There is nothing of any real power above 15 years and below 75 years except that rather cyclic ~60 year wriggle.

That reminds me: did I thank you for your work and for a good, stimulating post? Thank you for your work, and thank you for your post.
You understand, I hope, that a new method to reveal a 60 year “oscillation” that had already been revealed by many others makes your result look post-hoc. Scafetta, Pratt and others have found something similar by different methods. This constant reworking of a well-studied finite data set doesn’t reveal what we want (well, what I want) to know: what mathematical result is a reasonably accurate representation of a process that will persist approximately the same for the next few decades? That’s the part that requires “out of sample data”.

March 18, 2014 9:00 am

Jeff Patterson says:
March 18, 2014 at 7:57 am
“Some suggestions for RichardLH
1) Plot the frequency response in dB. This allows much better dynamic range in observing how far down the sidelobes and passband ripples are.”
>That would emphasise what is happening but would make the plots SO much different to other >existing plots out there. I’ll stick with a straight linear scale for now.
Plotting in dB is the standard method in DSP circles. “Showing what is happening” would seem most important in light of your primary claim that it is near Gaussian.
“2) Run randomly generated data through the filter and look at the power spectral density of the output. If the PSD is flat in the passband, the filter is exonerated as the source of the wiggles.”
>It is (very near to) Gaussian. It will have a Gaussian response. No work required.
So you say but without the log plots it is impossible to say how close and whether it is coded correctly.
“3) Show the step response of the filter, (not the theoretical) by running a step through the actual filter and plotting the result to prove there is no error in the implementing code and to show there is no substantial overshoot and/or ringing. This step is necessary because the temperature data is not WGN and contains large short-term excursions which will ring the filter if it is underdamped. ”
>It is (very near to) Gaussian. It will have a Gaussian response. No work required.
Gaussian filters have no overshoot. Close-to-Gaussian will have some. How much?
>Did the high pass version as a quick piece of work and posted the results above. Only from the clipboard but that is the fastest way.
Did you subtract it from the signal? If so I couldn’t find the plot you referenced.

RichardLH
March 18, 2014 9:05 am

Matthew R Marler says:
March 18, 2014 at 8:56 am
“That reminds me: did I thank you for your work and for a good, stimulating post? Thank you for your work, and thank you for your post.”
Thank you. Discussing what is observed is thanks enough.
“You understand, I hope, that a new method to reveal a 60 year “oscillation” that had already been revealed by many others makes your result look post-hoc.”
I am all too well aware that what I am showing has been done elsewhere by others previously, and probably better. I just hope that by using well known data sources and an irrefutable methodology I have got the observation of some form of ‘natural cycle’ to stick.
” Scafetta, Pratt and others have found something similar by different methods.”
Nicola was nice enough to say that I had confirmed some of his work. That is only partially true. I confirmed the presence of the ~60 year cycle. I have made no real offering as to why it is there yet. More observations needed before we go there 🙂
“This constant reworking of a well-studied finite data set doesn’t reveal what we want (well, what I want) to know: what mathematical result is a reasonably accurate representation of a process that will persist approximately the same for the next few decades? That’s the part that requires “out of sample data”.”
All we need to do is wait. Hard in this Internet age but it WILL answer all the questions 🙂

RichardLH
March 18, 2014 9:14 am

Jeff Patterson says:
March 18, 2014 at 9:00 am
“Plotting in dB is the standard method in DSP circles. “Showing what is happening” would seem most important in light of your primary claim that it is near Gaussian. ”
It is near Gaussian. Vaughan Pratt is the author of that statement and I’m not going to fight with him!
“So you say but without the log plots it is impossible to say how close and whether it is coded correctly.”
Again, I am only repeating work already done and discussed at much length by Greg and Vaughan. They (and me) are confident that it is Gaussian. If you believe that statement is wrong I would love to see some plots.
“Gaussian filters have no overshoot. Close-to-guassian will have some. How much?”
I suspect that you will find it hard to distinguish the two (as Willis did for the frequency plots).
“Did you subtract it from the signal? If so I couldn’t find the plot you referenced.”
No I just showed that it was possible to create the Annual High Pass output. There is a lot of stuff in there which, being sub-Annual, is probably mostly reflected values from the ‘normal’ and other very short term variations.
I did do the rather more interesting 15-75 to get the ‘cycle’ clearly though 🙂

Bernie Hutchins
March 18, 2014 9:18 am

Richard –
A second-order fit, length 75 SG has impulse response and frequency response as shown here:
http://electronotes.netfirms.com/SG2-75.jpg
The red line shows 60 years, and note that it is somewhat down the slope at about 1/2. Five passes would take it to about 3%. Not good. I think we are not on the same page with respect to sampling rate (I assumed yearly) while you have a denser sampling. So – how long is your impulse response in terms of samples? Is it 75 x 12 or something like that?
If my impulse response is correct (?) then exactly what does YOUR impulse response look like? It is, as you say, still FIR. Is there a math description of what you do or just a spread sheet?
Thanks for any specific information.

RichardLH
March 18, 2014 9:31 am

Bernie Hutchins says:
March 18, 2014 at 9:18 am
“A second-order fit, length 75 SG has impulse response and frequency response as shown here:
http://electronotes.netfirms.com/SG2-75.jpg
Thanks for that plot.
“The red line shows 60 years, and note that it is somewhat down the slope at about 1/2. Five passes would take it to about 3%. Not good. I think we are not on the same page with respect to sampling rate (I assumed yearly) while you have a denser sampling. So – how long is your impulse response in terms of samples? Is it 75 x 12 or something like that? ”
The sampling always stays at the underlying sample rate, in this case Monthly. In fact in order to match to the kernel size of the Annual CTRM which is 12 + 10 + 8 = 30 wide I used a factor of 2 to get an appropriate Annual S-G, i.e. 12 * 2 + 1. The 15 (and 75) year wide CTRM and S-G are similarly scaled. The results were verified by observing the closeness of the resultant 15 year curves during the whole of their overlap period. I suspect that it would be possible to get an even better fit by slightly modifying the *2 but the results were good enough ‘as is’.
“If my impulse response is correct (?) then exactly what does YOUR impulse response look like? It is, as you say, still FIR. Is there a math description of what you do or just a spread sheet? ”
For CTRM it is as close to Gaussian as you could possibly wish, with a much simpler implementation. In any case, digitisation/sampling/rounding/truncation errors dominate over just about all the filter maths anyway, as Vaughan was kind enough to agree when I pointed it out. There are only some 200 possible input values for UAH and some 2000 for HadCRUT because of the number of decimal places reported. That means that in most cases even a quite truncated Gaussian will have weighting factors that are below those.
“Thanks for any specific information.”
NP

Bart
March 18, 2014 9:31 am

Dr Norman Page says:
March 18, 2014 at 7:56 am
“Sophisticated statistical analysis actually doesn’t add much to eyeballing the time series.”
I would go further, and say statistical analysis should never, ever, be applied without eyeballing the time series to provide what is commonly referred to as a sanity check.
The utility of statistics is mostly in data compression, in distilling large amounts of information into a few representative numbers. Canned statistical routines, like computer simulations, are very susceptible to GIGO. They are only as good as the assumptions built into them.
The flip side of the old caveat of seeing patterns where they do not exist is having an analysis tool which misses patterns which are obviously there. The analysis tool is just a dumb algorithm. You have a brain. Use it. (not you, specifically, Dr. Page – you need no admonition).

RichardLH
March 18, 2014 9:35 am

Bernie Hutchins says:
March 18, 2014 at 9:18 am
I should point out that the correct way to get the various inter-stage values for the CTRM is next=round(prev*1.2067) which is the value that Vaughan provided. Both Greg and I had used 1.3371 previously, though from completely different sources.

Bart
March 18, 2014 9:37 am

You really don’t need a filter at all. The ~60 year periodic behavior is readily apparent in the raw data. Filtering it out for better resolution is really just gilding the lily.

RichardLH
March 18, 2014 9:38 am

Bart says:
March 18, 2014 at 9:37 am
“You really don’t need a filter at all. The ~60 year periodic behavior is readily apparent in the raw data. Filtering it out for better resolution is really just gilding the lily.”
You would think so. But I am told it is all down to co-incidental Volcanos, SO2 and CO2 in just the right mixture and timing 🙂

March 18, 2014 9:55 am

Bart Thanks for your very pertinent comment – the capacity of the establishment IPCC contributing modelers to avoid the blindingly obvious natural periodicities in the temperature record is truly mind blowing.

Bernie Hutchins
March 18, 2014 10:21 am

Richard –
I guess I still don’t have much of a clue what you are doing!
What you do is really quite (unnecessarily?) complicated, more complicated than SG or Gaussian, etc., but still FIR, and you should be able to provide the impulse responses of the individual stages you are cascading. That would be complete, concise, well defined, and unambiguous. Is that available?