This DSP engineer is often tasked with extracting spurious signals from noisy data. He submits this interesting result of applying these techniques to the HadCRUT temperature anomaly data. Digital Signal Processing analysis suggests cooling ahead in the immediate future with no significant probability of a positive anomaly exceeding 0.5°C between 2023 and 2113. See figures 13 and 14. Code and data are made available for replication. – Anthony
Guest essay by Jeffery S. Patterson, DSP Design Architect, Agilent Technologies
Harmonic Decomposition of the Modern Temperature Anomaly Record
Abstract: The observed temperature anomaly since 1900 can be well modeled with a simple harmonic decomposition of the temperature record based on a fundamental period of 170.7 years. The goodness-of-fit of the resulting model significantly exceeds the expected fit to a stochastic AR sequence matching the general characteristic of the modern temperature record.
Data
I’ve used the monthly Hadcrut3 temperature anomaly data available from http://woodfortrees.org/data/hadcrut3vgl/every as plotted in Figure 1.
Figure 1 – Hadcrut3 Temperature Record 1850-Present
To remove seasonal variations while avoiding spectral smearing and aliasing effects, the data was box-car averaged over a 12-month period and decimated by 12 to obtain the average annual temperature plotted in Figure 2.
Figure 2 – Monthly data decimated to yearly average
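For readers who want to replicate this preprocessing step, a minimal Wolfram Language sketch is given below. The variable names are mine, and it assumes monthly holds the flat list of monthly anomaly values (the second column of the woodfortrees file after dropping its comment rows); the fits later in the post use the 113 yearly values from 1900 onward.
(* 12-month box-car average followed by decimation by 12 *)
smoothed = MovingAverage[monthly, 12];   (* box-car (rectangular) average *)
yearly = smoothed[[1 ;; -1 ;; 12]];      (* keep every 12th sample *)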
A Power Spectral Density (PSD) plot of the decimated data reveals harmonically related spectral peaks.
Figure 3 – PSD of annual temperature anomaly in dB
To eliminate the possibility that these are FFT (Fast Fourier Transform) artifacts while avoiding the spectral leakage associated with data windowing, we use a technique called record periodization. The data is regressed about a line connecting the record endpoints, dropping the last point in the resulting residual. This process eliminates the endpoint discontinuity while preserving the position of the spectral peaks (although it does attenuate the amplitudes at higher frequencies and modifies the phase of the spectral components). The PSD of the residual is plotted in Figure 4.
Figure 4 – PSD of the periodized record
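The periodization step itself is easy to reproduce. A sketch under the same assumptions (yearly is the decimated series; this is not necessarily the author's exact code):
(* Subtract the straight line joining the first and last samples, then drop the last point, *)
(* so the residual starts and ends at the same value (removing the endpoint discontinuity). *)
n = Length[yearly];
line = Table[yearly[[1]] + (yearly[[-1]] - yearly[[1]]) (t - 1)/(n - 1), {t, n}];
periodized = Most[yearly - line];
Periodogram[periodized]   (* PSD of the periodized record, cf. Figure 4 *)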
Since the spectral peaking is still present we conclude these are not record-length artifacts. The peaks are harmonically related, with odd harmonics dominating until the eighth. Since spectral resolution increases with frequency, we use the eighth harmonic of the periodized PSD to estimate the fundamental. The following Mathematica (Mma) code finds the 5th peak (8th harmonic) and estimates the fundamental.
(* Find the 5th spectral peak (8th harmonic) above w = 0.25 and divide by 8 to estimate the fundamental *)
wpkY1=Abs[ArgMax[{psdY,w>.25},w]]/8
0.036811
The units are radian frequency across the Nyquist band, mapped to ±π (the plots are zoomed to 0 < w < 1 to show the area of interest). To convert to years, invert wpkY1 and multiply by 2π, which yields a fundamental period of 170.7 years.
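For concreteness, the conversion is a one-liner on the wpkY1 value computed above:
(* Period in years corresponding to the estimated fundamental *)
2 Pi/wpkY1
(* 2 Pi/0.036811 ≈ 170.7 *)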
From inspection of the PSD we form the harmonic model (note all of the radian frequencies are harmonically related to the fundamental):
(*Define the 5th order harmonic model used in curve fit*)
model = AY1*Sin[wpkY1 t + phiY1] + AY2*Sin[2*wpkY1*t + phiY2] +
   AY3*Sin[3*wpkY1*t + phiY3] + AY4*Sin[4*wpkY1*t + phiY4] +
   AY5*Sin[5*wpkY1*t + phiY5];
vars= {AY1,phiY1,AY2,phiY2,AY3,phiY3,AY4,phiY4,AY5,phiY5 }
and fit the model to the original (unperiodized) data to find the unknown amplitudes, AYx, and phases, phiYx.
fitParms1 = FindFit[yearly, model, vars, t]
fit1 = Table[model /. fitParms1, {t, 0, 112}];
residualY1 = yearly - fit1;

{AY1→-0.328464, phiY1→1.44861, AY2→-0.194251, phiY2→3.03246, AY3→0.132514,
 phiY3→2.26587, AY4→0.0624932, phiY4→-3.42662, AY5→-0.0116186, phiY5→-1.36245,
 AY8→0.0563983, phiY8→1.97142, wpkY1→0.036811}
The fit is shown in Figure 5 and the residual error in Figure 6.
Figure 5 – Harmonic model fit to annual data
Figure 6 – Residual Error
Figure 7 – PSD of the residual error
The residual is nearly white, as evidenced by Figure 7, justifying use of the Hodrick-Prescott filter on the decimated data. This filter is designed to separate cyclical, non-stationary components from data. Figure 8 shows an excellent fit with a smoothing factor of 15.
Figure 8 – Model vs. HP Filtered data (smoothing factor=3)
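For reference, the Hodrick-Prescott trend is the smooth series that minimizes the sum of squared deviations from the data plus lambda times the sum of squared second differences of the trend; the closed-form solution is tau = Inverse[IdentityMatrix[n] + lambda Transpose[D].D].y, with D the second-difference operator. A generic textbook sketch of that filter (not necessarily the routine used to produce Figure 8):
(* Hodrick-Prescott trend via the closed-form solution; d is the (n-2) x n second-difference matrix *)
hpFilter[y_List, lambda_] := Module[{n = Length[y], d},
  d = Table[Which[j == i, 1, j == i + 1, -2, j == i + 2, 1, True, 0], {i, n - 2}, {j, n}];
  LinearSolve[IdentityMatrix[n] + lambda Transpose[d].d, y]];

hpTrend = hpFilter[yearly, 15];   (* smoothing factor 15, as quoted in the text *)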
Stochastic Analysis
The objection that this is simple curve fitting can rightly be raised. After all, harmonic decomposition is a highly constrained form of Fourier analysis, which is itself a curve fitting exercise that yields the harmonic coefficients (where the fundamental is the sample rate) which recreate the sequence exactly in the sample domain. That does not mean, however, that any periodicity found by Fourier analysis (or, by implication, harmonic decomposition) is not present in the record. Nor, as will be shown below, is it true that harmonic decomposition on an arbitrary sequence would be expected to yield the goodness-of-fit achieved here.
The 113 sample record examined above is not long enough to attribute statistical significance to the fundamental 170.7 year period, although others have found significance in the 57-year (here 56.9 year) third harmonic. We can, however, estimate the probability that the results are a statistical fluke.
To do so, we use the data record to estimate an AR process.
(* Estimate a 5th-order AR process matching the yearly temperature record *)
procY=ARProcess[{a1,a2,a3,a4,a5},v];
procParamsY = FindProcessParameters[yearlyTD["States"],procY]
estProcY= procY /. procParamsY
WeakStationarity[estProcY]   (* check that the estimated process is weakly stationary *)
{a1→0.713,a2→0.0647,a3→0.0629,a4→0.181,a5→0.0845,v→0.0124391}
As can be seen in Figure 9 below, the process estimate yields a reasonable match to the observed power spectral density and covariance function.
Figure 9 – PSD of estimated AR process (red) vs. data
Figure 9b – Correlation function (model in blue)
Figure 10 – 500 trial spaghetti plot
Figure 10b – Three paths chosen at random
As shown in 10b, the AR process produces sequences whose general character matches the temperature record. Next we perform a fifth-order harmonic decomposition on all 500 paths, taking the variance of the residual as a goodness-of-fit metric. Of the 500 trials, harmonic decomposition failed to converge 74 times, meaning that no periodicity could be found which reduced the variance of the residual (this alone disproves the hypothesis that any arbitrary AR sequence can be decomposed). To these failed trials we assigned the variance of the original sequence. The scattergram of results is plotted in Figure 11 along with a dashed line representing the variance of the model residual found above.
Figure 11 – Variance of residual; fifth order HC (Harmonic Coefficients), residual 5HC on climate record shown in red
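A sketch of how such a trial loop can be set up, reusing estProcY, model, and vars from above. The helper name is mine, and the post's handling of non-convergent fits (those trials are assigned the raw path variance) is only noted in a comment:
(* Draw 500 paths from the estimated AR(5) process, fit the 5th-order harmonic  *)
(* model to each, and record the residual variance. The branch that assigns the *)
(* raw path variance to non-convergent fits is omitted here for brevity.        *)
paths = RandomFunction[estProcY, {0, 112}, 500]["Paths"][[All, All, 2]];

hdResidualVar[path_] := Module[{fp, resid},
  fp = Quiet@FindFit[path, model, vars, t];
  resid = path - Table[model /. fp, {t, 1, Length[path]}];
  Variance[resid]];

trialVars = hdResidualVar /@ paths;   (* compare against Variance[residualY1] *)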
We see that the fifth-order fit to the actual climate record produces an unusually good result. Of the 500 trials, 99.4% resulted in residual variance exceeding that achieved on the actual temperature data. Only 1.8% of the trials came within 10% and 5.2% within 20%. We can estimate the probability of achieving this result by chance by examining the cumulative distribution of the results plotted in Figure 12.
Figure 12 – CDF (Cumulative Distribution Function) of trial variances
The CDF estimates the probability of achieving these results by chance at ~8.1%.
Forecast
Even if we accept the premise of statistical significance, without knowledge of the underlying mechanism producing the periodicity, forecasting becomes a suspect endeavor. If, for example, the harmonics are being generated by a stable non-linear climatic response to some celestial cycle, we would expect the model to have skill in forecasting future climate trends. On the other hand, if the periodicities are internally generated by the climate itself (e.g. feedback involving transport delays), we would expect both the fundamental frequency and, importantly, the phase of the harmonics to evolve with time, making accurate forecasts impossible.
Nevertheless, having come thus far, who could resist a peek into the future?
We assume the periodicity is externally forced and the climate response remains constant. We are interested in modeling the remaining variance so we fit a stochastic model to the residual. Empirically, we found that again, a 5th order AR (autoregressive) process matches the residual well.
(* Fit a 5th-order AR process to the harmonic-model residual *)
tDataY=TemporalData[residualY1-Mean[residualY1],Automatic];
yearTD=TemporalData[residualY1,{ DateRange[{1900},{2012},"Year"]}]
procY=ARProcess[{a1,a2,a3,a4,a5},v];
procParamsY = FindProcessParameters[yearTD["States"],procY]
estProcY= procY /. procParamsY
WeakStationarity[estProcY]
A 100-path, 100-year run combining the paths of the AR model with the harmonic model derived above is shown in Figure 13.
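A sketch of how that projection can be assembled from the two pieces, the AR(5) residual process estProcY and the deterministic harmonic model. The 100-year horizon and the t = 113 starting index (the first year after the 1900-2012 record) are my reading of the setup above:
(* 100 stochastic residual paths, each added to the harmonic model extended past the record *)
horizon = 100;
arPaths = RandomFunction[estProcY, {0, horizon - 1}, 100]["Paths"][[All, All, 2]];
harmonic = Table[model /. fitParms1, {t, 113, 113 + horizon - 1}];
forecasts = (# + harmonic) & /@ arPaths;
ListLinePlot[forecasts, PlotRange -> All]   (* cf. the spaghetti of Figure 13 *)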
Figure 13 – Projected global mean temperature anomaly (centered 1950-1965 mean)
Figure 14 – Survivability at 10 (Purple), 25 (Orange), 50 (Red), 75 (Blue) and 100 (Green) years
The survivability plots predict no significant probability of a positive anomaly exceeding 0.5°C between 2023 and 2113.
Discussion
With a roughly one-in-twelve chance that the model obtained above is the manifestation of a statistical fluke, these results are not definitive. They do however show that a reasonable hypothesis for the observed record can be established independent of any significant contribution from greenhouse gases or other anthropogenic effects.
Greg Goodman says:
September 11, 2013 at 12:52 pm
“This DSP engineer is often tasked with extracting spurious signals from noisy data. He submits this interesting result of applying these techniques to the HadCRUT temperature anomaly data.”
-Anthony
Freudian slip?
No, my day job is signal generation via direct digital synthesis. In such a system, the carrier is clear, unambiguous and easy to measure. Very small (< -90 dBc) spurious tones are also generated due to system non-idealities. We need to extract these spurs from the noise produced by DACs and measurement floors and characterize their amplitudes.
I have a couple of large objections to this procedure.
The first is that he has not divided the data in half, used his whizbang method to determine cycles much longer than the data itself, and then shown that when extended, his procedure closely matches the other half of the data. This is extremely basic testing, and the fact it wasn’t done is very worrisome, particularly given the claimed expertise of the author.
The second is that as far as anyone has ever determined, the climate is chaotic … how does he reconcile this with his claim that it is deterministic? Extraordinary claims (such as the climate being deterministic in nature) require extraordinary evidence … and he hasn’t even provided the most basic of tests, using out-of-sample data.
The third is that his “Monte Carlo” analysis is extremely simplistic. I have provided a lot of evidence that the climate is governed (regulated) by a variety of emergent phenomena (thunderstorms, El Nino/La Nina, PDO, etc.). Accordingly, to investigate the system, among other approaches you need to generate synthetic data for such a governed system to use in the Monte Carlo analysis.
Monte Carlo analysis is easy to do … but very difficult to do well. The problem is that you are begging the question.
For example, in his case he is ASSUMING that AR data is what we are actually looking at, so he uses AR data to test his theory, and then he proves that if it is AR data we are seeing, then his result is significant.
I am sure that folks can see the circularity in that argument … to do a Monte Carlo analysis, you have to either:
a) Know the underlying form of the data (which we don’t), OR
b) Use a variety of assumptions about the data to try to cover the actual possibilities.
He has done neither.
Finally, I am completely unconvinced that you can determine a 170.7 year cycle (including the ludicrous .7 year decimal part) from a hundred and sixty years of data. The mere fact that he has included the .7 is a huge danger sign to me, it indicates that he is used to dealing with definitive numbers, not error-filled data like we have in climate science.
So, despite his claimed expertise … color me completely unimpressed by his claims. He has not done even a small part of the work necessary to substantiate his claims, he has not tested his method on out-of-sample data, he is claiming that the climate is deterministic (Mandelbrot among others disagrees) and he has done a childishly simplistic Monte Carlo analysis.
Note that these are all correctable errors … it remains to be seen if he corrects them.
w.
Testing the model's ability to fore-/backcast, i.e., its usefulness, is easy: just use a subset of the time-series and see how the model's extension matches the rest.
I suspect that if this is done, it will produce a completely different model for each subset, and all of them will fail miserably at modelling anything outside of their respective known boxes.
We know that the hadcru, giss and noaa data is severely adjusted to reduce the late 30’s early 40’s warming, to reduce the cooling from 1945 to 1975 and to enhance the warming from 1980.
The data sets also fail to account properly for urban heat island effect, for station deterioration and for the removal of many cooler stations from the dataset.
The question is: do these massive, questionable adjustments, coupled with the failure to account for the factors mentioned above, affect the sort of analysis you have performed here?
Ok, now what if you tried again, but this time with unadjusted data ?
See – owe to Rich says: September 11, 2013 at 12:06 pm
Goodman and Suamarez have many criticisms of this article, but no one has raised the things which immediately troubled me. First, given that HadCRUT3 starts in 1850, why did JP throw away 50 years and start at 1900? This is not explained, and is fishy. Second, if the model is good at predicting forwards, how well does it do in predicting backwards (hindcasting) the said 1850-1899 period? Third, since the data length is 113 years, is not a harmonic of 56.9 years (almost half of the 113) extremely fishy?
Using the five sinusoids (amplitude, frequency, phase) given in JP’s paper, the match (sinusoids versus measured data) going back to 1850 is not very good. You ask a good question, and I’d like to hear JP’s answer. I’d put my plot of that data in this comment, but I don’t know how to include figures in a comment.
Patterson presents yet another simplistic modeling study of Hadcrut3 in which DSP methods are applied to make predictions with scant comprehension of the stochastic nature of geophysical signals. There should be little doubt that, once the diurnal and seasonal cycles are effectively removed in decimating the data to yearly averages, what remains should be treated as a wide-band stochastic process. Unlike the aforementioned cycles, there are no known physical forcings that are strictly periodic aside from the astronomical tides. It is only in the contrary case that any model consisting of a finite set of pure sinusoids can be expected to be effective in making predictions.
Although a ~170yr oscillation does indeed appear, inter alia, in the power density of GISP2 oxygen isotope data, Kalman-Bucy filters fail to produce close, practically useful predictions. The author’s computational demonstration of the poorer fit of independent realizations of nominally the same process is ipso facto irrelevant when only tested against the realization present in the manufactured Hadcrut3 data series. It is on this point, rather than the many other objections raised by others, that Patterson’s predictions are likely to fail.
I think that this is a most useful, reasoned and practical contribution to the overall debate. Sure, it hasn’t ‘proved’ anything one way or the other (my math is far too weak and my neurons retiring too rapidly to follow the fine grain of it, so use the salt shaker here…).
But the attempt is praiseworthy in itself: sticking one’s head over the parapet is always a risky move, and I congratulate Jeff for having done so.
One way to end the debate about the merits of this technique is to divide the data into two 65 year time periods and see if one predicts the other without changing the methodology.
If the method can predict the full time series with 1/2 the data from either end, without some magical adjustable “aerosol” parameters, then you either have the basis of a successful prediction or you should be buying lottery tickets.
However, if the method cannot predict the missing 1/2, then it is time to look for a new methodology.
If the methodology can predict the full time series from 1/2 the series, then it is worthy of a second, separate WUWT article because that would be strong evidence the author is on the right track.
Willis Eschenbach says:
September 11, 2013 at 2:59 pm
The first is that he has not divided the data in half, used his whizbang method to determine cycles much longer than the data itself, and then shown that when extended, his procedure closely matches the other half of the data.
========
I’d also like to see this and encourage the author to post the results.
Discerning a 170 year period from HadCRUT alone sounds to me far-fetched considering past global temperature approximations.
What I see more is that HadCRUT, especially HadCRUT3 (which I see as correlating with UAH and RSS better than HadCRUT4), has a visible periodic cycle.
http://www.metoffice.gov.uk/hadobs/hadcrut3/diagnostics/global/nh+sh/
The periodic cycle there, assuming it’s sinusoidal, appears to me to have a period of 64 years, with peaks in 2005, 1941, and 1877. Its amplitude appears to me as peaking at nearly 0.11 degree C above or below the longer-term trend. I got this by trying my hand at Fourier for several trials of 2 cycles, cosine component only, with start dates around 1877 and end dates around 2005. (I did not do well with figuring the sine component, due to contamination by the linear trend, due to the simple methods I used.)
The nearly +/- 0.11 degree C periodic factor explains about 40% of the warming in HadCRUT3 from the early 1970s to 2005. Manmade direct increase of greenhouse gases other than CO2, which was largely stalled in the 1990s, appears to me to explain a little less than 10% of the early 1970s to 2005 warming. That means the anthropogenic part of the reported warming rate from the early 1970s to 2005 is about or slightly over half the total in that time period.
The anthropogenic part includes effects of bias in generating the HadCRUT3 index, such as adjustments with insufficient consideration for growth of urban effects around surface stations – but that appears to me fairly small – in light of HadCRUT4, current and recent past versions of GISS, and NCDC, and their divergences from UAH and RSS.
Thankfully HadCRUT, especially HadCRUT3, has sea surface temperatures fully considered. The sea surface consideration accounts for about 2/3 of HadCRUT and is essentially uncontaminated by growth of locally nearby airports or cities.
Overall, it appears to me that anthropogenic global warming is for real, but degree of its existence appears to me as something like 35% of the “center track” reported by IPCC in AR4.
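For what it's worth, the single-component estimate described above (a 64-year cosine peaking in 1941, fit over roughly two cycles from 1877 to 2005) can be sketched as follows. Here annual is assumed to be a list of {year, anomaly} pairs, and the names and the linear detrending step are illustrative choices, not necessarily the commenter's exact procedure:
(* Amplitude of a 64-year cosine (peaks at 1941) over the 1877-2005 window *)
window = Select[annual, 1877 <= First[#] <= 2005 &];
years = window[[All, 1]]; vals = window[[All, 2]];
trend = Fit[window, {1, x}, x];            (* remove the linear trend first *)
detrended = vals - (trend /. x -> years);
amp = 2 Mean[detrended Cos[2 Pi (years - 1941)/64.]]   (* cosine-component amplitude *)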
Willis Eschenbach says:
“The second is that as far as anyone has ever determined, the climate is chaotic …”
You should have dropped in on me when you were in the West Country, I could have shown you how I do deterministic weather forecasts. At the noise level it is not chaotic, so the sum of weather at the climatic scale cannot be either.
Donald, you focus on the linear trend, which is an artifact of the statistical analysis. As such, a linear trend cannot be proof of cause, nor even suggest it. In addition, a linear trend line has yet to be shown to be an effective method of demonstrating warming or cooling. If it were, the models would be no different than observation (the models use that statistic). Because they are not at all the same, the trend line itself is suspect in terms of calculating a meaningful trend (except for very short ones), regardless of its cause, in the past and certainly in the future.
Dunno. Maybe it’s like saying todays temperatures are unprecedented for 1000 years while hiding the decline.
Willis Eschenbach: “The first is that he has not divided the data in half, used his whizbang method to determine cycles much longer than the data itself, and then shown that when extended, his procedure closely matches the other half of the data.”
I chimed in with others to ask for this, and, although he did not do it in precisely the way in which I asked, he did hindcast to the pre-1900 period. To these old eyes the results weren’t impressive. I’m perfectly open to others’ convincing me otherwise, but as it stands I don’t find the results very interesting from a climate perspective.
That said, I’m not sure that all the criticisms are well taken. Those based on obtaining a long fundamental period from a short record, for instance, are understandable but perhaps misapprehend what he actually did. He used the DFT to look for a clue as to what the fundamental might be, by observing peaks at frequencies that were integer multiples of a common fundamental. But it was only after he determined how much of the record could be “explained” by a periodic signal having that fundamental that he adopted it. Of course, the predictive value of the fact that such an explanation was achieved for that single record is not apparent.
And the criticism that “he is claiming that the climate is deterministic” is puzzling. Perhaps someone could point out where he claims that–or where the cite for Mandelbrot contradicts it.
Whatever the post’s other shortcomings are, I for one am grateful for not having had to grope through yet another fog of impressionism; the author actually laid out what he did.
Willis Eschenbach says:
September 11, 2013 at 2:59 pm
Hmm, I seem to have struck a nerve; that was not my intent. I long ago lost interest in the climate debate, knowing from the outset that model-based science is an oxymoron. My interest was re-sparked recently when I stumbled in here while looking for some work-related info on Wiener processes. There was an interesting post speculating on whether the climate was a random walk process. There was a link to the dataset, so I pulled it in and ran a unit root test, which failed with p=.7. The plot by eye looked to have some periodic patterns and, knowing that periodic forcing functions can make a stationary process appear non-stationary, I looked at the PSD. I noticed the harmonic relationship between the spectral peaks, so it seemed like a good candidate for harmonic decomposition, a technique we use widely for analyzing non-linear processes. After HD, the residual passed the URT at p=10^-12. I was also surprised by how few terms it took to get a good fit, and that set me wondering if the periodicity was real or simply a curve fitting artifact.
Although I recognized the short record problem, it’s not one I normally encounter (if we need longer records we simply acquire more data – at 10 GSa/sec it doesn’t take long :>). I am an engineer and not a statistician, so I used the tools I am familiar with to devise a significance test. The reasoning was simple: asserting that there is no significance to the goodness of fit achieved is equivalent to asserting that HD on any sequence of similar BW and variance would yield similar results. I do not see the flaw in the logic, nor in the method for testing the assertion. A “simplistic” test is not necessarily a flawed test.
Perhaps the problem is revealed in your mischaracterization of my use of the AR model here:
“For example, in his case he is ASSUMING that AR data is what we are actually looking at, so he uses AR data to test his theory, and then he proves that if it is AR data we are seeing, then his result is significant.”
Nowhere do I assume that it is AR data that we are seeing, nor do I anywhere attempt to prove that it is AR data we are seeing. The AR process was just a convenient way of generating random sequences with bandwidths and variances similar to the temperature record. If the HD fit were insignificant, it seems to me that one should easily achieve similar GOF results on the AR process data. As was shown in my post, this is not the case. It seemed an interesting result so I shared it.
As for halving the data, extracting a 170+ year period from a 55-point record looks pretty hopeless. The HD algorithm fails to converge, as it does on the AR-generated sequences 95% of the time. I may fool around with it some more if I get some time. As for hindcasting, I posted a link to a fifty year hindcast on the monthly data prior to your post.
Anyway, take it for what it’s worth (perhaps nothing). I need to get back to my day job.
Cheers.
I, for one, am not going to bet the house on the “predictions”. For me, I think that tomorrow is going to be pretty much like today, and this year is going to be pretty much like last year… So far, this seems to fit the data in my lifetime (79 years, thank you.).
@ Jeffery S. Patterson
If you happen to be reading this, could you reprocess it after first passing it through a high-pass filter set at zero to remove the offset? I would love to see what it shows.
It’s the same trick you do in your field to remove a DC voltage offset, if my language is confusing.
A useful reference that might bridge the gap between electrical engineering digital signal processing and climatology (hydrology and hydraulics) is “Stochastic Processes in Hydrology” by Vujica Yevjevich. The book can be obtained from several sources, including these:
http://www.amazon.com/Stochastic-Processes-Hydrology-Vujica-Yevjevich/dp/0918334012
http://books.google.com/books/about/Stochastic_processes_in_hydrology.html?id=uPJOAAAAMAAJ
http://trove.nla.gov.au/work/21245724?q&versionId=25362456
http://www.barnesandnoble.com/w/stochastic-processes-in-hydrology-vujica-m-yevjevich/1001110582?ean=9780918334015
A tribute to Prof. Yevjevich is located here:
http://www.engr.colostate.edu/ce/facultystaff/yevjevich/Yevjevich_index.htm
Ulric Lyons says:
September 11, 2013 at 6:00 pm
Are you a rich man, Ulric?
Because if not … why not? Anyone on this planet who could make weather predictions with the accuracy you claim, and at the distance out that you claim, could make millions.
Given that you haven’t done so, I fear that I greatly doubt the claims that you so confidently put forward.
w.
Jeff Patterson says:
September 11, 2013 at 7:53 pm
Struck a nerve? No, you’ve just done a poor and incomplete analysis, for the reasons I stated. If you want to pretend that you somehow irritated people, sorry … the objections were scientific.
w.
Jeff Patterson says:
September 11, 2013 at 7:53 pm
Indeed, you are correct—you’re not a statistician. While your reasoning is “simplistic”, it’s also wrong.
Huh? You’ve admitted you used an AR model for your Monte Carlo test … but since you are not a statistician, you failed to realize that the choice of the model for the Monte Carlo test is a make-or-break decision for the validity of the test. You can’t just grab any data with similar bandwidth and variance as you have done and claim you’ve established your claims, that’s a joke.
Bad news on that front, Jeff. Extracting a 170+ year cycle from a 110 year dataset is equally hopeless. Me, I don’t trust any cycle I find in climate data that’s more than a third of the length of my dataset, and I would strongly encourage you to do the same.
Sorry, not impressed. Do it properly or not at all.
I see you have a keen eye for the estimation, not only of your own statistical abilities, but of the value of your work.
Thanks for the reply, Jeff. Truly, you seem to be totally at sea in this particular corner of signal processing. It’s a recondite corner, with many pitfalls for the unwary, and full of poor data.
In addition, there is general agreement that the signal is chaotic in nature (I gave a reference above). If your analysis is correct, it is deterministic … which is a huge claim that seems to have escaped your notice. To show that it is not chaotic, I fear you need more than an un-testable (by your own admission) extraction of a 170-year cycle from 110 years of data.
All the best,
w.
JP: “Although I recognized the short record problem, it’s not one I normally encounter …”
This is a key problem with using DSP techniques and experience from other fields like electrical engineering and acoustics, where adequate samples can usually be made available (and if not, this is recognized).
Most climate data is fundamentally inadequate to determine the nature of longer term variability. Whether century scale change is periodic, linear AGW, random walk, or some other stochastic process cannot be determined from the available data.
The big lie is that the IPCC is now, apparently, 95% certain that it can.
The last 17 years is the proof that they can’t.
That they become more certain in the face of increasing discrepancy shows how detached they have become from scientific reality and how dominated this intergovernmental body is by preconceived policy objectives.
In the end, Digital signal processing, is just following a mathematical algorithm; applying it to a set of numbers.
So it differs from computing an average, only in that the mathematical algorithm is different.
You can take the very same set of numbers, and calculate the average, using that particular algorithm, or you can do the more complex DSP algorithms, and get something else from exactly the same set of numbers.
Well of course, you can also apply the same mathematics to the set of numbers you can find in your local telephone directory and get an average, or whatever the DSP algorithms produce.
The mathematics is quite rigorous; but that doesn’t mean the result has any meaning.
In the case of DSP, there usually is a starting assumption that there IS a “signal” there to be found in the set of numbers. Some procedures make use of prior knowledge of what the signal is supposed to be, and when it is supposed to be present, so you aren’t looking for something at times when it is not there and merely gathering noise; such as locking onto a Loran C signal or, these days, a set of GPS signals.
But if you apply these methods to sets of numbers which are acquired without regard to the rules of sampled data systems (which all our measurements are), then there may be no signal there to acquire.
That’s the problem with “climate data”. It doesn’t even come close to satisfying the Nyquist criterion, particularly for spatial sampling of Temperatures, and the common method of Temporal sampling also fails on that score. A daily min/max Temperature sampling only suffices if the diurnal Temperature variation is strictly sinusoidal with a 24 hour period. Now in climatism, the aim is not to recover the continuous Temperature function for processing, but just to get the average, which is the zero frequency component of the spectrum. You can’t even recover the average if the daily Temperature cycle has even a second harmonic component, let alone any higher frequency components, such as the results of cloud cover variations during the day.
So the mathematics may be quite elegant; but the input “data” is just garbage.
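The min/max point can be checked with a toy example: compare the true daily mean of a waveform containing a second harmonic with the (Tmax + Tmin)/2 estimate. The amplitudes below are arbitrary, made-up numbers:
(* Toy diurnal cycle: fundamental plus a second harmonic *)
temp[t_] := 10 Sin[2 Pi t/24] + 3 Sin[4 Pi t/24 + 1];
trueMean = NIntegrate[temp[t], {t, 0, 24}]/24;       (* true daily average (zero here) *)
tmax = NMaxValue[{temp[t], 0 <= t <= 24}, t];
tmin = NMinValue[{temp[t], 0 <= t <= 24}, t];
{trueMean, (tmax + tmin)/2}   (* the min/max estimate misses the true mean by about 2 degrees *)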
chaotic != whimsical
ever seen a fetus with ultrasound?
dsp is abstruse. snarking in a corner is obtuse.