Image Credit: Climate Data Blog
By Richard Linsley Hood – Edited by Just The Facts
The goal of this crowdsourcing thread is to present a 12 month/365 day Cascaded Triple Running Mean (CTRM) filter, inform readers of its basis and value, and gather your input on how I can improve and develop it. A 12 month/365 day CTRM filter completely removes the annual ‘cycle’, as the CTRM is a near Gaussian low pass filter. In fact it is slightly better than a true Gaussian, which would leave a small residual of the 12 month ‘cycle’ in the data. This new tool is an attempt to produce a more accurate treatment of climate data and see what new perspectives, if any, it uncovers. This tool builds on the good work by Greg Goodman, with Vaughan Pratt’s valuable input, on this thread on Climate Etc.
Before we get too far into this, let me explain some of the terminology that will be used in this article:
—————-
Filter:
“In signal processing, a filter is a device or process that removes from a signal some unwanted component or feature. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies and not others in order to suppress interfering signals and reduce background noise.” Wikipedia.
Gaussian Filter:
A Gaussian Filter is probably the ideal filter in time domain terms. That is, if you think of the graphs you are looking at as traces on an oscilloscope, a Gaussian filter is the one that adds the least distortion to the signal.
Full Kernel Filter:
Indicates that the output of the filter will not change when new data is added (except to extend the existing plot). It does not extend up to the ends of the data available, because the output is in the centre of the input range. This is its biggest limitation.
Low Pass Filter:
A low pass filter is one which removes the high frequency components in a signal. One of its most common usages is in anti-aliasing filters for conditioning signals prior to analog-to-digital conversion. Daily, Monthly and Annual averages are low pass filters also.
Cascaded:
A cascade is where you feed the output of the first stage into the input of the next stage, and so on. In a spreadsheet implementation of a CTRM you produce a single average column in the normal way, then use that column as the input for the next output column, and so on. The value of the inter-stage multiplier/divider is very important: it should be set to 1.2067, the precise value that makes the CTRM into a near Gaussian filter. For an Annual filter, it gives window lengths of 12, 10 and 8 months (after rounding) for the three stages.
Triple Running Mean:
The simplest method to remove high frequencies or smooth data is to use moving averages, also referred to as running means. A running mean filter is the standard ‘average’ that is most commonly used in Climate work. On its own it is a very bad form of filter and produces a lot of arithmetic artefacts. Adding three of those ‘back to back’ in a cascade, however, allows for a much higher quality filter that is also very easy to implement. It just needs two more stages than are normally used.
—————
With all of this in mind, a CTRM filter, used either at 365 days (if we have data at that resolution) or 12 months in length with the most common data sets, will completely remove the Annual cycle while retaining the underlying monthly sampling frequency in the output. In fact it is even better than that, as it does not matter whether the data has already been normalised or not. A CTRM filter will produce the same output on either raw or normalised data, apart from a small offset that reflects whatever ‘Normal’ period the data provider chose. There are no added distortions of any sort from the filter.
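For readers who would rather see code than spreadsheet columns, here is a minimal sketch of the cascade in Python (the article's own implementations are a spreadsheet and R; this translation, and the synthetic test series, are mine):

```python
import numpy as np

def running_mean(x, window):
    """Full-kernel centered running mean: output shrinks by window - 1 samples."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def ctrm(x, period=12, ratio=1.2067):
    """Cascaded Triple Running Mean: three running means whose window
    lengths shrink by ~1.2067 per stage (12 -> 10 -> 8 for monthly data)."""
    w1 = int(round(period))
    w2 = int(round(w1 / ratio))   # 12 -> 10
    w3 = int(round(w2 / ratio))   # 10 -> 8
    y = np.asarray(x, dtype=float)
    for w in (w1, w2, w3):
        y = running_mean(y, w)
    return y

# A pure 12-month cycle is removed completely; a constant offset passes through.
months = np.arange(120)
annual = np.sin(2 * np.pi * months / 12)
print(np.max(np.abs(ctrm(annual + 5.0) - 5.0)))  # ~0: only the offset survives
```

Note that the output is 27 samples shorter than the input: that is the full-kernel limitation described in the terminology section.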
Let’s take a look at what this generates in practice. The following are UAH Anomalies from 1979 to Present with an Annual CTRM applied:
Fig 1: UAH data with an Annual CTRM filter
Note that I have just plotted the data points. The CTRM filter has removed the ‘visual noise’ that month-to-month variability causes. This is very similar to the 12 or 13 month single running mean that is often used, but more accurate, as the arithmetic errors produced by those simple running means are removed. Additionally, the higher frequencies are completely removed while all the lower frequencies are left completely intact.
The following are HadCRUT4 Anomalies from 1850 to Present with an Annual CTRM applied:
Fig 2: HadCRUT4 data with an Annual CTRM filter
Note again that all the higher frequencies have been removed and the lower frequencies are all displayed without distortions or noise.
There is a small issue with these CTRM filters in that CTRMs are ‘full kernel’ filters as mentioned above, meaning their outputs will not change when new data is added (except to extend the existing plot). However, because the output is in the middle of the input data, they do not extend up to the ends of the data available as can be seen above. In order to overcome this issue, some additional work will be required.
The basic principles of filters work over all timescales, thus we do not need to constrain ourselves to an Annual filter. We are, after all, trying to determine how this complex load that is the Earth reacts to the constantly varying surface input and surface reflection/absorption with very long timescale storage and release systems including phase change, mass transport and the like. If this were some giant mechanical structure slowly vibrating away we would run low pass filters with much longer time constants to see what was down in the sub-harmonics. So let’s do just that for Climate.
When I applied a standard time/energy low pass filter sweep to the data I noticed that there is a sweet spot around 12-20 years where the output changes very little. This looks like it may well be a good stop/pass band binary chop point, so I chose 15 years as the roll-off point to see what happens. Remember this is a standard low pass/band-pass filter, similar to the one that splits telephone from broadband to connect to the Internet. Using this approach, all components with periods longer than 15 years are fully preserved in the output and all shorter periods are completely removed.
The following are HadCRUT4 Anomalies from 1850 to Present with a 15 year CTRM and a 75 year single running mean applied:
Fig 3: HadCRUT4 with additional greater than 15 year low pass. Greater than 75 year low pass filter included to remove the red trace discovered by the first pass.
Now, when reviewing the plot above, some have claimed that this is a curve fitting or ‘cycle mania’ exercise. However, the data hasn’t been fitted to anything; I just applied a filter. Then out pops some wriggle, at around ~60 years, which the data draws all on its own. It’s the data what done it – not me! If you see any ‘cycle’ in the graph, then that’s your perception. What you can’t do is say the wriggle is not there. That’s what the DATA says is there.
Note that the extra ‘greater than 75 years’ single running mean is included to remove the discovered ~60 year line, as one would normally do to get whatever residual is left. Only a single stage running mean can be used as the data available is too short for a full triple cascaded set. The UAH and RSS data series are too short to run a full greater than 15 year triple cascade pass on them, but it is possible to do a greater than 7.5 year which I’ll leave for a future exercise.
And that Full Kernel problem? We can add a Savitzky-Golay filter to the set, which is the Engineering equivalent of LOWESS in Statistics, so should not meet too much resistance from statisticians (want to bet?).
Fig 4: HadCRUT4 with additional S-G projections to observe near term future trends
We can verify that the parameters chosen are correct because the line closely follows the full kernel filter when that is used as a training/verification guide. The latest part of the line should not be considered an absolute guide to the future. Like LOWESS, S-G will ‘whip’ around on new data like a caterpillar searching for a new leaf. However, it tends to follow a similar trajectory, at least until it runs into a tree. While this is only a basic predictive tool, which assumes that the near future will be like the recent past, it estimates that we are over a local peak and headed downwards…
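For anyone who wants to try the S-G step, SciPy ships an implementation. This Python sketch is illustrative only: the 61-month window, the quadratic order and the synthetic series are my assumptions, not the parameters behind Fig 4. Unlike the full-kernel CTRM, S-G fits a local polynomial and so produces output right up to the ends of the data.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(42)
months = np.arange(240)
series = 0.005 * months + 0.1 * rng.normal(size=months.size)  # noisy trend

# Fit a quadratic in a sliding 61-month window; the output covers the full
# range, including the endpoints where the full-kernel CTRM falls silent.
smoothed = savgol_filter(series, window_length=61, polyorder=2)
```

A useful property for checking the parameters: S-G reproduces any polynomial up to its fitted order exactly, so a pure quadratic passes through unchanged.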
And there we have it. A simple data treatment for the various temperature data sets, a high quality filter that removes the noise and helps us to see the bigger picture. Something to test the various claims made as to how the climate system works. Want to compare it against CO2? Go for it. Want to check SO2? Again, fine. Volcanoes? Be my guest. Here is a spreadsheet containing UAH and an Annual CTRM, and R code for a simple RSS graph. Please just don’t complain if the results from the data don’t meet your expectations. This is just data, and summaries of the data. Occam’s Razor for a temperature series. Very simple, but it should be very revealing.
Now the question is how I can improve it. Do you see any flaws in the methodology or tool I’ve developed? Do you know how I can make it more accurate, more effective or more accessible? What other data sets do you think might be good candidates for a CTRM filter? Are there any particular combinations of data sets that you would like to see? You may have noted the 15 year CTRM combining UAH, RSS, HadCRUT and GISS at the head of this article. I have been developing various options at my new Climate Data Blog and based upon your input on this thread, I am planning a follow up article that will delve into some combinations of data sets, some of their similarities and some of their differences.
About the Author: Richard Linsley Hood holds an MSc in System Design and has been working as a ‘Practicing Logician’ (aka Computer Geek) to look at signals, images and the modelling of things in general inside computers for over 40 years now. This is his first venture into Climate Science and temperature analysis.

Bart, you are right about there being no precise cycles in nature. The 60 year cycle is very obvious as a modulation of the approximate 1000 year quasi-periodicity, which comes and goes as it beats with other cycles. See Fig 4 at
http://climatesense.norpag.blogspot.com
It looks good at 10000, 9000, 8000, 5000(?), 2000, 1000 and 0 (the present warming).
The same link provides an estimate of the possible coming cooling, based on the 60 and 1000 year periodicities and the neutron count as the best proxy for solar “activity”.
Richard –
Unless you can construct a proper inverse filter, smoothing (filtering, reduction of bandwidth) will remove information, and here you can’t construct such an inverse. You say that you have not lost information because you kept the input separately. That doesn’t count!
Nobody is talking about numerical issues in the processing. Systematic measurement issues are a problem, as is often discussed on this blog, and it is very difficult to see how smoothing helps with those.
And we all already agree we can see the apparent 60 year periodicity. What does your method offer that is better, let alone new?
If you are instead suggesting that you are not discarding any important information, then make that case, both in showing that you have chosen the correct parameters for a correct model, and that you understand the errors involved.
Bart says:
March 17, 2014 at 11:26 am
That’s a marvelous anecdote, Bart, but kinda short on links. The high point of the detrended HadCRUT4 data you are using occurred in 2003, which means that “predictions” after that point are hindcasts. So to examine your claim that “a lot” of people predicted what happened “more than a decade ago”, what we need are links to contemporaneous accounts of say three of your and other folks’ predictions made pre-2003 that global temperatures would peak by “mid-decade”.
I mean if “a lot” of you were making such predictions as you claim, surely linking to two or three of them isn’t a big ask.
w.
PS—Here is what I think about cycles:
Does the temperature go up and down? Yes, it does, on all timescales. They are best described as wiggles.
Do these wiggles imply the existence of cycles stable enough to have predictive value? I’ve never found one that could survive the simplest out-of-sample testing, and believe me, I’ve looked very hard.
Y’all seem to think I’m refusing to look at cycles. I’m not.
I’m refusing to look at cycles AGAIN, because over the decades I’ve looked at, tested, and tried to extrapolate dozens and dozens of putative cycles, and investigated reports of others’ attempts as well. I wrote an Excel spreadsheet to figure out the barycentric orbit of the sun, for heaven’s sake, you don’t get more into cycles than that. I was as serious a cyclomaniac as the next man, with one huge, significant exception … I believe in out-of-sample testing …
Net result?
Nothing useful for predictions. After testing literally hundreds of natural timeseries datasets for persistent cycles, I found nothing that worked reliably out-of-sample, and I found no credible reports of anyone else finding anything that performed well out-of-sample. Not for lack of trying or lack of desire to find it, however. I’d love to be shown something like that the Jupiter-Saturn synodic period was visible in climate data, and I’ve looked hard for that very signal … but I never encountered anything like it, or any other cycles that worked for that matter.
So, as an honest scientist, I had to say that if the cycles are there, nobody’s found them yet, and turn to more productive arenas.
w.
Willis,
I’m still waiting [for] someone to produce a hunk of hide or bone from a Sasquatch, as well.
“Where has all the rigor gone? Long time pa-a-ssing …”
” … for someone …”
RichardLH says:
March 17, 2014 at 12:56 pm
That doesn’t seem right to me. Any filter which is not uniquely invertible loses information.
For example, while we can get from the numbers {1,2,3,4,5} to their mathematical average of 3, there is no way to be given the average of “3” and work backwards to {1,2,3,4,5} as a unique answer. So even ordinary averaging is not uniquely invertible, and thus it loses information. The same is true for the gaussian average, or a boxcar average, or most filters.
w.
Willis I repeat my comment to Bart from above
“Bart, you are right about there being no precise cycles in nature. The 60 year cycle is very obvious as a modulation of the approximate 1000 year quasi-periodicity, which comes and goes as it beats with other cycles. See Fig 4 at
http://climatesense.norpag.blogspot.com
It looks good at 10000, 9000, 8000, 5000(?), 2000, 1000 and 0 (the present warming).
The same link provides an estimate of the possible coming cooling, based on the 60 and 1000 year periodicities and the neutron count as the best proxy for solar “activity”.”
These quasi periodicities in the temperature data suggest what periodicities in the driver data are worth investigating. You do not need to understand the processes involved in order to make reasonable forecasts – see the link above.
The chief uncertainty in my forecasts is in knowing the timing of the peak in the 1000 year quasi periodicity .As Bart says- there no precise cycles – this one seems to oscillate from say 950 – 1050
. Looking at the state of the sun I think we are just past a synchronous peak in both the 60 and 1000 year periodicities. I’d appreciate your view on this .
Bernie H: “As for Lanczos, if I recall correctly it is an impulse response that is nothing more than a truncated sinc. As such it is subject to Gibbs Phenomenon transition band ripples – perhaps 23% or so – hardly a flat passband. ”
I went into some detail about all this on the CE thread that Richard linked at the top. I included links to frequency spectra of all the discussed filters, including Lanczos, and provided an example code implementation in the update, if anyone wants to try it.
Here is a comparison of the frequency response of the symmetric triple RM that Richard is providing, a Lanczos, and a slightly different asymmetric triple RM that minimises the negative lobe that inverts part of the data. (It’s a fine difference, but while we’re designing a filter, we may as well minimise its defects.)
http://climategrog.wordpress.com/2013/11/28/lanczos-filter-script/
Close-up of the stop-band ripple:
http://climategrog.wordpress.com/?attachment_id=660
There’s a link in there that provides a close-up of the detail of the lobes, for those interested.
Yes there is a little bump in the pass band, but it’s not the gregarious ripple that is seen in SG, Butterworth, Chebychev, etc., and yes, it’s a symmetric kernel, so it has a linear phase response.
I used a three lobe Lanczos because it does not have too much Gibbs effect. You can obtain a narrower transition band at the expense of accepting more ripple and over-shoot. Horses for courses, as always in filter design.
The basic idea with Lanczos is to start with the ideal step function frequency response: the ‘ideal’ filter. But this requires an infinitely long ‘sinc function’ as the kernel to achieve it. As soon as you truncate the kernel, you smear (convolve) the nice, clean step you started with with another sinc fn.
Lanczos minimises the latter defect by fading out the truncation of the kernel rather than cutting it off abruptly. How quickly you cut it off, or fade it out, determines the balance between the ideal step and the ringing. Lanczos was quite a genius at maths and calculated this as the optimum solution. So it is somewhat more subtle than “nothing more than a truncated sinc”.
You can’t do it in three columns in a spreadsheet but it can be defined fairly easily as a macro function and it’s not that hard to code. It’s still a simple kernel based convolution. In fact it is no more complicated to do than a gaussian.
Gaussian or 3RM are useful, well-behaved low-pass filters but they make a pretty poor basis for high-pass or band-pass since they start to attenuate right from zero. If you want to make compound filters something with at least some portion of flat passband is much better. That is why I provided a high-pass Lanczos in my article too.
@Willis:
Comparison of freq resp of gaussian and triple RM:
http://climategrog.wordpress.com/?attachment_id=424
It can be seen that gauss leaks about 4% at the nominal cut-off, which the 3RM blocks completely. This can be a useful difference in the presence of a strong annual cycle. If you use a thick enough pencil, and scale your graph so you can see the cycle that was so big you wanted to get rid of it in the first place, it may not be that noticeable.
When the 4% does not matter, there’s nothing wrong with gaussian, it’s a good filter.
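The leakage figure above is easy to check analytically: a length-w running mean has magnitude response |sin(πfw)/(w·sin(πf))|, with an exact null at f = 1/w, while a Gaussian only decays there. A Python sketch (matching the Gaussian to the cascade by total variance is my choice of comparison; other matchings give slightly different percentages):

```python
import numpy as np

def boxcar_gain(w, f):
    # Magnitude response of a length-w running mean (a Dirichlet kernel).
    return abs(np.sin(np.pi * f * w) / (w * np.sin(np.pi * f)))

f_annual = 1.0 / 12          # the 12 month cycle, in cycles per sample
windows = (12, 10, 8)

# The cascade multiplies the stage responses; the 12 month stage
# has an exact null at f = 1/12, so the product is zero there.
gain_3rm = np.prod([boxcar_gain(w, f_annual) for w in windows])

# A Gaussian with the same total variance as the three boxcars combined.
sigma = np.sqrt(sum((w ** 2 - 1) / 12.0 for w in windows))
gain_gauss = np.exp(-2 * (np.pi * sigma * f_annual) ** 2)

print(gain_3rm)    # ~0 (an exact null, up to floating point)
print(gain_gauss)  # a few percent leaks through
```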
RichardLH:
Excellent post. A good description of an easy-to-use filter to help expose underlying behaviour.
I find it sad that some complain about cycles when you said: “If you see any ‘cycle’ in graph, then that’s your perception. What you can’t do is say the wriggle is not there”
I hope the crowd sourcing aspect will enable the multiplier (1.2067) to be refined more quickly with more R users on to it.
I find it sad that people say “where is your code and where is the data you used” when it is on your linked website.
I hope you continue to develop these ideas.
If it is significantly better than FFTs, as you suggest, then I can see its usage moving into other technical areas.
Willis: “So even ordinary averaging is not uniquely invertible, and thus it loses information. The same is true for the gaussian average, or a boxcar average, or most filters.”
That is true for straight average and RM since all the weights are equal. It is not the case for a gaussian or other kernel were the weights are unique.
The reverse process is called deconvolution. It’s basically a division in the frequency domain in the same way a convolution is multiplication in the freq domain. For example you can ‘soften’ a digital image with a gaussian ‘blur’ ( a 2-D gaussian low-pass filter ). You can then ‘unblur’ with a deconvolution. There are some artefacts due to calculation errors but in essence it is reversible.
This can be taken further to remove linear movement blur, or to refocus an out of focus photo. Yeah, I’d spent years telling people once you had taken a blurred shot there was NOTHING you could do with photoshop or anything else to fix it because the information just wasn’t there to recover. Wrong!
It is even possible to remove quite complex hand-shake blurs, if you can find a spot in the photo that gives you a trace of the movement. This is also the way they initially corrected the blurring caused by the incorrect radius on the Hubble space telescope mirror. They found a dark bit of sky with just one star in the field of view, took a (blurred) image of it, and then deconvolved all further shots using the FT of the blurred star image. That got them almost back to design spec and tided them over until they could get someone up there to install corrective optics.
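A quick numerical sketch of that reversibility, in Python (the 1-D signal, the narrow sigma and the circular convolution are illustrative choices; a wider Gaussian drives some frequency bins so close to zero that round-off swamps the division, which is why practical deblurring needs regularised methods):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)  # stand-in for a row of image pixels

# Circular Gaussian kernel centred on sample 0, sigma = 1 sample.
n = np.arange(64)
dist = np.minimum(n, 64 - n)
kernel = np.exp(-0.5 * (dist / 1.0) ** 2)
kernel /= kernel.sum()

K = np.fft.fft(kernel)  # transfer function: no zero bins for this sigma
blurred = np.fft.ifft(np.fft.fft(x) * K).real       # convolution = multiply
recovered = np.fft.ifft(np.fft.fft(blurred) / K).real  # deconvolution = divide

print(np.max(np.abs(recovered - x)))  # tiny: this blur was invertible
```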
One way to get a better look at how regular the supposed 60 year cycle is, would be to ‘detrend’ by fitting, say a 2nd order polynomial to the unfiltered data, subtract it and then do the filter. This should produce a levelled out wiggle which can be compared to a pure cosine or peaks and troughs picked out to see how stable the cycle length and amplitude is.
Just in case anyone is inclined, I have something else I’d rather work on now.
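For anyone so inclined, the detrending step described above is only a few lines; a Python sketch with an invented series (quadratic drift plus a ~60 year wiggle) standing in for the real data:

```python
import numpy as np

t = np.arange(1980)  # months, roughly the HadCRUT4 span
# Hypothetical series: a slow quadratic drift plus a ~60-year (720-month) wiggle.
series = 1e-7 * (t - 990) ** 2 + 0.1 * np.cos(2 * np.pi * t / 720)

coeffs = np.polyfit(t, series, 2)           # fit a 2nd-order polynomial
levelled = series - np.polyval(coeffs, t)   # subtract it before filtering

# The drift is gone (a re-fit finds essentially nothing) but the wiggle
# survives, ready for filtering and peak/trough inspection.
```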
Greg Goodman says:
March 17, 2014 at 2:47 pm
Greg – Regarding Lanczos: thanks for that – I learned something. It is not a truncated sinc but the truncated product of two sincs.
The two sincs have different widths, so their product in the time domain corresponds to the convolution of two unequal-width rectangles in frequency – which is a trapezoid. This is in turn truncated (by a rectangular window applied to the sinc product in time), so the trapezoid in frequency is smeared by a frequency domain sinc. [We use this (it’s an old friend), but don’t give it a special name – it’s a “linear transition region”.]
Because of the gradual transition, the truncation does not give the full 23% Gibbs overshoot but rather a much smaller bump. The result would look very much like a trapezoid with small bumps on both sides of the linear slope. In fact it would look like – EXACTLY what you plotted! [Good homework problem. In practice we would probably truncate with a Hamming window rather than with a rectangle – with virtually all the Gibbs gone. Or, best of all, minimize the frequency domain squared error and make that transition region a “don’t care” band – as with Parks-McClellan.]
I like that filter pretty well. No significant passband ripple, and a Hamming window would fix even that. Nice control over roll-off if that is important. Good candidate along with SG. Thanks for the reply.
I am not sure, however, why you suggest “gregarious ripple that is seen in SG Butterworth, Chebychev, etc”. Certainly SG and BW have no passband ripple. Chebyshev certainly does, but note that SG looks a lot like inverse Chebyshev.
Bernie
Jeff Patterson says:
March 17, 2014 at 1:46 pm
“A LPF by definition, passes frequencies below it’s cut-off frequency. Since D.C. (i.e. frequency = 0) is always below the cut-off, the above statement by RLH is incorrect. A ramp-like input into a LPF will cause a lagged ramp-like output, with the lag determined by the filters group delay characteristic.”
That is true but not what was asked which was.
Lance Wallace says:“Can the Triple whatsis somehow compare the two curves and derive either a lag time or an estimate of how much of the CO2 emissions makes it into the observed atmospheric concentrations?”
I gave an answer that I intended to indicate that a low pass filter is incapable of such a connection.
I apologise if the words I used failed to convey that meaning.
P.S. The lag in this case would be one month, i.e. the sample frequency. AFAIK.
Bernie Hutchins says:
March 17, 2014 at 2:00 pm
“And we all already agree we can see the apparent 60 year periodicity.”
If I thought that the statement was true and universally accepted then I would probably not have constructed the post.
“What does your method offer that is better, let alone new?”
Only a very simple way to construct a Gaussian filter. Two extra columns in a spreadsheet and a few lines of R.
The reasoning about using high quality filters, rather than statistical analysis, to examine the temperature traces holds regardless of the particular filter employed.
I believe that Gaussian is the best. I believe that a 15 year low pass uncovers details that others are missing/ignoring.
Greg Goodman says:
March 17, 2014 at 3:06 pm
@Willis:
Thanks for that, Greg. However, as you might imagine (and as I mentioned above) I also looked at the residuals from the process. There is no annual signal in the residual (CTRM minus gaussian).
As a result, once again I reject the claim of both yourself and Richard that the Gaussian doesn’t remove the annual signal. Both of them remove it.
Finally, although the difference at a given point might be 4%, the difference between the filters is the difference in the integrals … and that looks by eye to be much less than 4%
w.
RichardLH
Nice article, I like the simplicity of getting a high quality filter/splitter from just three spreadsheet columns, easy to understand and easy to apply. And simple to get the low and high pass parts without much loss. Neat.
Concerning the quasi-60 year periodicity, I was reminded to go and dig out a spreadsheet I made where I was looking at the periodicity in the temperature data too. My method was based on sine wave cross-correlation, looking for the peak cross-correlation as I swept the frequency up. Bit laborious in a spreadsheet, but I get a period of 66 years using that method. Takes more than 3 columns though…:-)
The other interesting data set to look at, which is a long reconstruction (but no tree rings!) using only other published proxies, is Loehle (2007). On that data set the peak cross-correlation period (with the swept-frequency cross-correlation I used) is 1560 years.
Willis Eschenbach says:
March 17, 2014 at 2:02 pm
“So, as an honest scientist, I had to say that if the cycles are there, nobody’s found them yet, and turn to more productive arenas.”
So, as an honest scientist, I observe that there is a wriggle in the current data which needs explaining and cannot just be swept under the carpet.
Something caused what we see. Any offers as to what? Coincidence?
Greg Goodman replied to Willis March 17, 2014 at 3:28 pm in part:
“ Willis: “So even ordinary averaging is not uniquely invertible, and thus it loses information. The same is true for the gaussian average, or a boxcar average, or most filters.” Greg replied: “That is true for straight average and RM since all the weights are equal. It is not the case for a gaussian or other kernel were the weights are unique.” “
I believe the real issue is whether or not the filter has notches. Students are often surprised that so many digital filters have unit circle zeros, and ask why: Because so many useful filters (nearly all?) have stopbands and we go through zero to be reasonably close to zero. But inverting unit circle zeros does not work. If you notch out something, it’s gone and you can’t get it back. That is, you have poles on the unit circle (infinite response at the corresponding, former, notch frequencies).
Further, it has nothing to do with unequal weights. RM has unit circle zeros, as do SG, CTRM, Lanczos, and most other filters except the Gaussian. Occasionally filters don’t have stopbands (such as audio equalizers – like tone controls) and avoid unit-circle zeros. It is always possible to specify such a filter – it just may not have useful applications.
Willis Eschenbach says:
March 17, 2014 at 2:29 pm
“That doesn’t seem right to me. Any filter which is not uniquely invertible loses information. ”
Your intuition is wrong, however. There is no data loss, only removal from this plot. The complementary high pass plot, showing the removed information, can be constructed by subtracting this trace from the input trace. Together they always add to the original input.
It is possible that some frequencies around the corner value will be distributed into both high and low in various mixtures, but when added back together the answer will still be correct.
Greg
It is not the case for a gaussian or other kernel were the weights are unique.
Sorry, as someone who has to program professional applications, that is just wrong. Sometimes I wonder whether you read something online, only to spout some more “smoothed” and rather “convoluted” version here…
How on earth, with an overlapping kernel, does one untangle the interference? How would one do it EVEN IN A DISCRETE VERSION WITH A GAUSSIAN FILTER KERNEL!
Image processing is a very easy and an accessible way to test this:
You cannot apply a Gaussian filter to an image, save the image, then pass it to someone without the original image and expect them, given just the kernel design, to reproduce the original image. There are many “deconvolution” methods but they never recreate the original image exactly; that would require assuming continuity (and determinism) in a discretised series, where the best one can hope for is semi-continuity in a stochastic series.
You, sir, are full of BS! And I am starting to wonder whether I’ve given Richard too much kudos: his answers to my questions – hardly technically demanding – have been neither here nor there.
Personally I don’t see cycles, I see damping. In my days of training for my airplane pilot license I was taught about the stability of the controlling surfaces of the airplane, ailerons, rudder, the pitch, yaw and roll axis. What happens when you apply power, pitch the nose up, or down, and fatefully when you drop like a rock when you put yourself into a stall (I pursued other interests after that exercise). And one thing that I remember is the stability of flight from the damping of the flight path due to how airplanes are designed.
Years later, while studying calculus, I once again came across the physics of damping, but this time from a deep mathematical perspective and analysis. The graph produced looks like damping to me. I’ll have to revisit the math for damping before I can do any further analysis. Others have compared it to feedbacks in electronic circuitry.
– – –
On the other hand, the filtered graph looks like the path of a zombie chase with a large lag following a drunk walker, or a Lizzy looking at the flowers.
RichardLH says:
March 17, 2014 at 4:40 pm
So you are telling us that you can flawlessly reconstruct the original input trace, given only the result of a gaussian smoothing … but only if you have access to the original input trace??
Dang, that’s some fine detective work …
w.
Willis Eschenbach says:
March 17, 2014 at 4:34 pm
“As a result, once again I reject the claim of both yourself and Richard that the Gaussian doesn’t remove the annual signal. Both of them remove it.”
Are you saying that this plot of the various frequency responses is wrong?
http://climatedatablog.files.wordpress.com/2014/02/fig-2-low-pass-gaussian-ctrm-compare.png
Or that you were able to find that difference (admittedly tiny, 4% Greg puts it at) when you did your work?
As I observed, it might depend on which source data set you use, as digitisation/sampling errors dominate in most cases. UAH has a range of fewer than 200 possible sample input values and HadCRUT fewer than 2000. 4% of those can be easy to miss.
RichardLH says:
March 17, 2014 at 4:36 pm
Richard, that is the underlying question in climate science—what causes the gradual swings in temperature. So indeed we study that.
The idea that we can squint at a smoothed temperature record and say what has caused the swings, however, is a bridge far too far.
w.
RichardLH
I think you are playing fast and loose with the rules here and being quite “obfuscatious” in your interpretation of Willis’ comments.
Any filter will result in loss of information – that is without doubt. Are you relying on the notion that all filters preserve the data (in modified form) but not the information? (That much is correct.) I have always assumed you to be honest and decent, but I cannot reconcile that notion with your response to Willis’ point. You either knew exactly what he meant yet decided to dismiss it with pedantic obfuscation, or you didn’t understand it.