This DSP engineer is often tasked with extracting spurious signals from noisy data. He submits this interesting result of applying these techniques to the HadCRUT temperature anomaly data. Digital Signal Processing analysis suggests cooling ahead in the immediate future, with no significant probability of a positive anomaly exceeding 0.5°C between 2023 and 2113. See figures 13 and 14. Code and data are made available for replication. – Anthony
Guest essay by Jeffery S. Patterson, DSP Design Architect, Agilent Technologies
Harmonic Decomposition of the Modern Temperature Anomaly Record
Abstract: The observed temperature anomaly since 1900 can be well modeled with a simple harmonic decomposition of the temperature record based on a fundamental period of 170.7 years. The goodness-of-fit of the resulting model significantly exceeds the expected fit to a stochastic AR sequence matching the general characteristic of the modern temperature record.
Data
I’ve used the monthly Hadcrut3 temperature anomaly data available from http://woodfortrees.org/data/hadcrut3vgl/every as plotted in Figure 1.
Figure 1 – Hadcrut3 Temperature Record 1850-Present
To remove seasonal variations while avoiding spectral smearing and aliasing effects, the data was box-car averaged over a 12-month period and decimated by 12 to obtain the average annual temperature plotted in Figure 2.
Figure 2 – Monthly data decimated to yearly average
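For replication, this step is a one-liner; a minimal sketch, assuming the monthly anomalies have been read into a list called monthly (a name of my choosing, not from the original code):

(* box-car average over non-overlapping 12-month blocks and decimate by 12 *)
yearly = Mean /@ Partition[monthly, 12];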
A Power Spectral Density (PSD) plot of the decimated data reveals harmonically related spectral peaks.
Figure 3 – PSD of annual temperature anomaly in dB
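A periodogram is one way to produce such a plot; here is a sketch (my own reconstruction, not the author's code), assuming yearly holds the decimated annual series. The psdY expression used in the peak-finding snippet below comes from the author's full code, presumably an interpolation of a spectrum like this over w.

(* periodogram estimate of the PSD in dB; w is radian frequency per annual sample, 0 < w < Pi *)
n = Length[yearly];
spec = Abs[Fourier[yearly - Mean[yearly]]]^2;
psdData = Table[{2 Pi (k - 1)/n, 10 Log10[spec[[k]]]}, {k, 2, Floor[n/2]}]; (* skip the DC bin *)
ListLinePlot[psdData, AxesLabel -> {"w (rad/sample)", "PSD (dB)"}]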
To eliminate the possibility that these are FFT (Fast Fourier Transform) artifacts while avoiding the spectral leakage associated with data windowing, we use a technique called record periodization. The line connecting the record endpoints is subtracted from the data, and the last point of the resulting residual is dropped. This process eliminates the endpoint discontinuity while preserving the position of the spectral peaks (although it does attenuate the amplitudes at higher frequencies and modifies the phase of the spectral components). The PSD of the residual is plotted in Figure 4.
Figure 4 – PSD of the periodized record
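The periodization step itself is straightforward to reproduce; a sketch, again assuming yearly is the annual series:

(* subtract the line joining the first and last samples, then drop the last point *)
n = Length[yearly];
ramp = Table[yearly[[1]] + (yearly[[n]] - yearly[[1]]) (t - 1)/(n - 1), {t, 1, n}];
periodized = Drop[yearly - ramp, -1];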
Since the spectral peaking is still present we conclude these are not record-length artifacts. The peaks are harmonically related, with odd harmonics dominating until the eighth. Since spectral resolution increases with frequency, we use the eighth harmonic of the periodized PSD to estimate the fundamental. The following Mathematica (Mma) code finds the 5th peak (8th harmonic) and estimates the fundamental.
wpkY1=Abs[ArgMax[{psdY,w>.25},w]]/8
0.036811
The units are radian frequency across the Nyquist band, mapped to ±π (the plots are zoomed to 0 < w < 1 to show the area of interest). To convert to years, invert wpkY1 and multiply by 2π, which yields a fundamental period of 170.7 years.
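As a quick check of that conversion (using the value printed above):

(* period in years = 2 Pi divided by the radian frequency per annual sample *)
2 Pi/wpkY1 // N
(* ≈ 170.7 *)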
From inspection of the PSD we form the harmonic model (note all of the radian frequencies are harmonically related to the fundamental):
(* Define the 5th-order harmonic model used in the curve fit *)
model = AY1*Sin[wpkY1 t + phiY1] + AY2*Sin[2*wpkY1 t + phiY2] +
   AY3*Sin[3*wpkY1 t + phiY3] + AY4*Sin[4*wpkY1 t + phiY4] +
   AY5*Sin[5*wpkY1 t + phiY5];
vars = {AY1, phiY1, AY2, phiY2, AY3, phiY3, AY4, phiY4, AY5, phiY5}
and fit the model to the original (unperiodized) data to find the unknown amplitudes, AYx, and phases, phiYx.
fitParms1 = FindFit[yearly, model, vars, t]
fit1 = Table[model /. fitParms1, {t, 0, 112}];
residualY1 = yearly - fit1;

{AY1→-0.328464, phiY1→1.44861, AY2→-0.194251, phiY2→3.03246, AY3→0.132514, phiY3→2.26587, AY4→0.0624932, phiY4→-3.42662, AY5→-0.0116186, phiY5→-1.36245, AY8→0.0563983, phiY8→1.97142, wpkY1→0.036811}
The fit is shown in Figure 5 and the residual error in Figure 6.
Figure 5 – Harmonic model fit to annual data
Figure 6 – Residual Error
Figure 7 – PSD of the residual error
The residual is nearly white, as evidenced by Figure 7, justifying use of the Hodrick-Prescott filter on the decimated data. This filter is designed to separate cyclical, non-stationary components from data. Figure 8 shows an excellent fit with a smoothing factor of 15.
Figure 8 – Model vs. HP Filtered data (smoothing factor=3)
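Mathematica has no built-in Hodrick-Prescott filter that I am aware of, so for readers replicating Figure 8 here is a minimal matrix-form sketch (hpTrend and its argument names are mine; lambda is the smoothing factor):

(* trend = (I + lambda D2'.D2)^-1 . data, where D2 is the second-difference operator *)
hpTrend[data_List, lambda_] := Module[{n = Length[data], d2},
  d2 = SparseArray[{Band[{1, 1}] -> 1, Band[{1, 2}] -> -2, Band[{1, 3}] -> 1}, {n - 2, n}];
  LinearSolve[IdentityMatrix[n] + lambda Transpose[d2].d2, N[data]]]

hpY = hpTrend[yearly, 15]; (* smoothing factor as quoted in the text *)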
Stochastic Analysis
The objection that this is simple curve fitting can rightly be raised. After all, harmonic decomposition is a highly constrained form of Fourier analysis, which is itself a curve-fitting exercise that yields the harmonic coefficients (where the fundamental period is the record length) which recreate the sequence exactly in the sample domain. That does not mean, however, that any periodicity found by Fourier analysis (or, by implication, harmonic decomposition) is not present in the record. Nor, as will be shown below, is it true that harmonic decomposition of an arbitrary sequence would be expected to yield the goodness-of-fit achieved here.
The 113-sample record examined above is not long enough to attribute statistical significance to the fundamental 170.7-year period, although others have found significance in the 57-year (here 56.9-year) third harmonic. We can, however, estimate the probability that the results are a statistical fluke.
To do so, we use the data record to estimate an AR process.
procY=ARProcess[{a1,a2,a3,a4,a5},v];
procParamsY = FindProcessParameters[yearlyTD["States"],procY]
estProcY= procY /. procParamsY
WeakStationarity[estProcY]
{a1→0.713,a2→0.0647,a3→0.0629,a4→0.181,a5→0.0845,v→0.0124391}
As can be seen in Figure 9 below, the process estimate yields a reasonable match to the observed power spectral density and covariance function.
Figure 9 – PSD of estimated AR process (red) vs. data
Figure 9b – Correlation function (model in blue)
Figure 10 – 500-trial spaghetti plot
Figure 10b – Three paths chosen at random
As shown in 10b, the AR process produces sequences whose general character matches the temperature record. Next we perform a fifth-order harmonic decomposition on all 500 paths, taking the variance of the residual as a goodness-of-fit metric. Of the 500 trials, harmonic decomposition failed to converge 74 times, meaning that no periodicity could be found which reduced the variance of the residual (this alone disproves the hypothesis that any arbitrary AR sequence can be decomposed). To these failed trials we assigned the variance of the original sequence. The scattergram of results is plotted in Figure 11 along with a dashed line representing the variance of the model residual found above.
Figure 11 – Variance of residual; fifth order HC (Harmonic Coefficients), residual 5HC on climate record shown in red
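A sketch of how such a trial loop might look (my reconstruction, not the published code; it reuses model and vars from the harmonic fit and scores each trial by the variance of its fit residual):

(* generate 500 record-length paths from the estimated AR process and harmonically decompose each *)
trials = RandomFunction[estProcY, {1, 113}, 500];
paths = trials["Paths"][[All, All, 2]]; (* value sequences only *)
gofs = Table[
   Module[{fp, resid},
     fp = Quiet[FindFit[path, model, vars, t]];
     resid = path - Table[model /. fp, {t, 1, Length[path]}];
     (* non-improving (failed) fits are scored with the variance of the raw path, as described above *)
     Min[Variance[resid], Variance[path]]],
   {path, paths}];
ListPlot[gofs]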
We see that the fifth-order fit to the actual climate record produces an unusually good result. Of the 500 trials, 99.4% resulted in residual variance exceeding that achieved on the actual temperature data. Only 1.8% of the trials came within 10% and 5.2% within 20%. We can estimate the probability of achieving this result by chance by examining the cumulative distribution of the results plotted in Figure 12.
Figure 12 – CDF (Cumulative Distribution Function) of trial variances
The CDF estimates the probability of achieving these results by chance at ~8.1%.
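How the ~8.1% figure was computed is not shown; one way to estimate it from the trial results above is the following sketch, using the gofs list from the trial loop and the residual variance of the actual fit:

(* probability that a random AR trial does at least as well as the real data *)
CDF[EmpiricalDistribution[gofs], Variance[residualY1]]
(* or, with a smoothed estimate of the distribution *)
CDF[SmoothKernelDistribution[gofs], Variance[residualY1]]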
Forecast
Even if we accept the premise of statistical significance, without knowledge of the underlying mechanism producing the periodicity, forecasting becomes a suspect endeavor. If, for example, the harmonics are being generated by a stable non-linear climatic response to some celestial cycle, we would expect the model to have skill in forecasting future climate trends. On the other hand, if the periodicities are internally generated by the climate itself (e.g. feedback involving transport delays), we would expect both the fundamental frequency and, importantly, the phase of the harmonics to evolve with time, making accurate forecasts impossible.
Nevertheless, having come thus far, who could resist a peek into the future?
We assume the periodicity is externally forced and the climate response remains constant. We are interested in modeling the remaining variance, so we fit a stochastic model to the residual. Empirically, we found that, again, a 5th-order AR (autoregressive) process matches the residual well.
tDataY=TemporalData[residualY1-Mean[residualY1],Automatic];
yearTD=TemporalData[residualY1,{ DateRange[{1900},{2012},"Year"]}]
procY=ARProcess[{a1,a2,a3,a4,a5},v];
procParamsY = FindProcessParameters[yearTD["States"],procY]
estProcY= procY /. procParamsY
WeakStationarity[estProcY]
A 100-path, 100-year run combining the paths of the AR model with the harmonic model derived above is shown in Figure 13.
Figure 13 – Projected global mean temperature anomaly (centered 1950-1965 mean)
Figure 14 – Survivability at 10 (Purple), 25 (Orange), 50 (Red), 75 (Blue) and 100 (Green) years
The survivability plots predict no significant probability of a positive anomaly exceeding 0.5°C between 2023 and 2113.
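A sketch of how the Figure 13 projection could be assembled (my reconstruction; estProcY here is the AR(5) fit to the residual from the code block above, and model/fitParms1 are the harmonic fit):

(* 100 stochastic paths, 100 years past the end of the record, added to the deterministic harmonic model *)
horizon = 100;
tEnd = 112; (* last fitted year index, matching Table[model /. fitParms1, {t, 0, 112}] *)
arPaths = RandomFunction[estProcY, {1, horizon}, 100]["Paths"][[All, All, 2]];
harm = Table[model /. fitParms1, {t, tEnd + 1, tEnd + horizon}];
projections = (# + harm) & /@ arPaths;
ListLinePlot[projections, PlotRange -> All]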
Discussion
With a roughly one-in-twelve chance that the model obtained above is the manifestation of a statistical fluke, these results are not definitive. They do however show that a reasonable hypothesis for the observed record can be established independent of any significant contribution from greenhouse gases or other anthropogenic effects.
We can test for this effect by subtracting a line that connects the data record endpoints and dropping the last point from the result to avoid repeating the zero-error point at the ends. The result is a record which has no discontinuity distinguishable from the noise present in the data. If the PSD peaks remain after this process, it indicates the spectral peaks are not record-length artifacts.
Ok. But then your line that you subtract from the time-based data is exquisitely sensitive to just two sample points, Point 1 and Point 113. I'll go along with the method if, and only if, you use it on at least 9 other subsets of the data: Points 1-112, 1-111, 1-110, 2-113, 2-112, 2-111, 3-113, 3-112, 3-111. Now you have 10 different base lines to remove from the data and 10 different resulting PSDs. What are the consistent peaks, what are the inconsistent peaks? You gotta admit, it adds robustness and "fold" to the results.
One reader asked "why use the eighth harmonic?". Because there are more cycles of the eighth harmonic present in the record, and so it provides a better estimate of the fundamental.
There is no free lunch. There might be more cycles, but fewer samples per cycle. Frankly it makes it more sensitive to the choice of data length.
Your response to the Fig. 10 question didn’t answer what was asked. What was randomized? Just the phase of the spectral components?
Like a few others above, I am highly skeptical of a fundamental cycle length longer than the original time series. From what I learned in geophysics, that is a violation of the Fourier theorems. Your support of this long period seems to come from "harmonic decomposition". I am very skeptical. It sounds like circular logic. Are you in effect padding the time series with a variable number of zeros until you get your peaks to resonate? 32 years out of my Ph.D. [you listening, Gail?] I'm still willing to learn, but like I said, "I need to see more."
Climate scientists got off on the wrong foot three decades ago, by prematurely assuming that there were just two important drivers of climate – namely the sun (insolation) and a CO2-driven greenhouse effect. These, most reckoned, were the “first-order” influences and all of the other factors were second-order or third-order and therefore could be largely ignored. CO2, they said, was the “control knob” of climate variability.
Mother Nature has unkindly demonstrated their basic error, by providing fifteen years of almost unchanged world average temperatures while the atmospheric CO2 level, for whatever reason, continued to increase significantly. The control knob was turned, and almost nothing happened! The most bizarre and ingenious ecclesiastical apologia forthcoming from the Jesuit intelligentsia would pale by comparison with some of the climate establishment’s amazing “explanations” for this hiatus. We wait with bated breath for the IPCC’s upcoming summary to see if it “explains” or simply ignores what nature has done to them.
This is why Jeff Patterson’s post is so exciting. It assumes absolutely nothing about the theories or explanations of climate scientists – or anything else – and simply mines the available data for plausible patterns which could provide some insight into future trends or actual mechanisms. And, unlike most other such attempts at naive analysis, it is performed by somebody who clearly understands the false moire and other mirages which can emerge from this type of statistical treatment. Such trend analysis is surely the best starting point for the badly-needed “reboot” of climate science.
Allow me to join the several other commenters who asked that you perform the last part of the analysis again. No, I’m not asking that you again seek the fundamental frequency. I’m just asking that, taking the fundamental you’ve already identified, you use just the part of the record prior to, say, 1945, to determine the harmonics’ amplitudes and phases. Then, using those amplitudes and phases, project forward through the present. (Although my guess is that your computer time required to identify what by my count are ten different parameters will be significant, you wouldn’t have to rewrite the program, of which even just the part we’re requesting you re-run would probably take plodders like this writer more than an hour to write and debug.)
I recognize that this is a slight diversion, but it is indeed only slight, and the results could support the following interesting question: If the climate's behavior in the record's latter portion, which represents a time when CO2 concentration was rising significantly, is predictable from the record's former portion, which represents a period when little CO2 increase occurred, without any adjustment for CO2 (and on what's little more than a the-trend's-your-friend basis), shouldn't the proposition that CO2's effects are significant bear a heavy burden of proof?
Allow me also to join others who have thanked you for a clear post and responsive follow-ups.
Henry@all
I really feel sorry for you all because you are all missing an important point:
where would we be without the sun? We'd be dead, would we not?
We know that earth has a number of factors (built-in) that confuse matters,
on mean average temp.
like
its own volcanic action,
turning of the iron core,
magnetic fields/ clouds
lunar interaction
to name but a few.
I am sure there a lot more.
So why do you all keep staring at those stupid figures for mean average temps. & their anomalies?
Surely anyone with brains must have figured out by now that maximum temperatures are a much better proxy to look at, as I found. But don’t look at only one station, because if you were to look at CET maxima alone you might get confused: they have a lot of weather…..there in London.
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
crosspatch says: The problem with this sort of analysis, while interesting, is that it can only show what HAS happened and not what WILL happen.
But showing what has happened is important, is it not? The AGW proponents are convinced what _has_ happened is due to CO2 emissions. I've forwarded an alternate hypothesis which, if it could be verified, would alleviate these concerns. In that case we could confidently predict that the future climate will change, just as it has in the past, and these natural changes will likely be benign, just as they have been in the past.
So, richardscourtney, how does this jive with Akasofu’s linear “recovery” from the LIA?
Echoing others: Thanks for this article – for writing clearly and explaining, for acknowledging problems, for being bold and trying stuff out, for providing the code, and for being around to discuss issues with others. Even if your post were complete nonsense (I'm not in a position to judge) it's still a good example of what we should all be doing here.
Take a look at: CET Patterns, 1659-2007, Wikipedia Graph
If we were to guess about the next two centuries based upon historical patterns in the CET record, we could guess that the current pause in rising temperatures will be merely a pause; and that over the next 200 years, GMT will continue its long-term rise while following a jagged pattern of localized up-and-down trends, doing so at a linearized trend over a period of two centuries of about + 0.3 C per century.
Certainly, one or more localized downward trends in GMT could occur within that 200 year period, and probably will.
But in the mode of saying that when in doubt, predict that past trends will continue, we could also guess that a jagged up-and-down pattern — one with a gradually rising long-term linearized trend — will continue until the maximum of the Medieval Warm Period is reached.
A reasonable estimate of GMT at the height of the Medieval Warm Period is needed. Is there anyone with the appropriate credentials working on that estimate?
In the meantime, what about the next 100 years, as opposed to the next 200? Make your own guess about GMT in the year 2100 using Beta Blocker’s CET Pattern Picker.
Stephen Rasey says:
September 11, 2013 at 10:31 am
“There is no free lunch. There might be more cycles, but fewer samples per cycle. Frankly it makes it more sensitive to the choice of data length.”
True, but even the eighth harmonic is highly oversampled. Nyquist's theorem proves that increasing the number of samples per cycle (beyond two per the shortest period) adds no information.
“Your response to the Fig. 10 question didn’t answer what was asked. What was randomized? Just the phase of the spectral components?”
As was stated in the paper and elsewhere, figure 10 is the output of an AR process designed to provide random sequences which mimic (in character) the observed climate record. An AR process is just an IIR filter. The input to the filter in this case is random noise with the specified variance. Each path in figure 10 is the "filter's" output with a different, random input sequence (and initial state). We then perform the same harmonic decomposition on these "climate-like" paths to test the null hypothesis that any old randomly produced sequence which looks kinda like temperature data would yield a goodness-of-fit similar to that achieved with the real data. While we cannot quite reject the null hypothesis (p=.081), we can state that the achieved GOF is not to be expected, and in fact the achieved residual variance is near the 3-sigma point of the distribution of the randomly generated results.
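To make the "an AR process is just an IIR filter" point concrete, here is a toy sketch (the coefficients are the estimates printed earlier in the post; the zero initial state and everything else is illustrative):

(* one "climate-like" path: white noise of variance v filtered recursively through the AR(5) coefficients *)
a = {0.713, 0.0647, 0.0629, 0.181, 0.0845}; v = 0.0124391;
path = Module[{x = ConstantArray[0., 5], new},
   Table[
     new = a.Reverse[Take[x, -5]] + RandomVariate[NormalDistribution[0, Sqrt[v]]];
     AppendTo[x, new]; new,
     {113}]];
ListLinePlot[path]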
@Sam The First says: September 11, 2013 at 5:57 am
Meanwhile, in the broadsheet papers here in the UK, notably The Independent and The Guardian, which are read by liberal/left opinion formers (inc. teachers at all levels), the comments to this article below demonstrate that none of their readers is taking any notice of the long pause in warming and the overall cyclical record.
Will someone with the time and expertise please add a comment or two to explain what is really going on?
========================================================================
I've long been banned from CiF for dissent. The Indy, well, it's best for me not to go there as it enrages me in no time at all. Occasionally, I will employ my rapier-like wit to puncture some balloons, but ultimately, it's a waste of time talking to such as comment there.
Jeff Patterson says
(future climate change) will likely be benign, just as they have been in the past.
henry says
Dear Jeff, unfortunately I think it will not be benign
Under normal circumstances, like you, I would have let things rest and just be happy to know the truth for myself. Indeed, I let things lie a bit. However, chances are that humanity will fall into the pit of global cooling, and I would later blame myself for not having done enough to try to safeguard food production for 7 billion people and counting.
It really was very cold in the 1940s… The Dust Bowl drought of 1932-1939 was one of the worst environmental disasters of the Twentieth Century anywhere in the world. Three million people left their farms on the Great Plains during the drought and half a million migrated to other states, almost all to the West. Please see here:
http://www.ldeo.columbia.edu/res/div/ocp/drought/dust_storms.shtml
I found confirmation in certain other graphs, that as we are moving back, up, from the deep end of the 88 year sine wave,
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
there will be a standstill in the speed of cooling, at the bottom of the wave, and therefore, naturally, there will also be a lull in the pressure difference at those latitudes (>40), where the Dust Bowl droughts took place, meaning: no wind and no weather (read: rain). According to my calculations, which include certain planetary positions, this will start around 2020 or 2021… and last until 2028.
Danger from global cooling is documented and provable. We have only ca. 7 “fat” years left……
1) We urgently need to develop and encourage more agriculture at lower latitudes, like in Africa and/or South America. This is where we can expect to find warmth and more rain during a global cooling period.
2) We need to tell the farmers living at the higher latitudes (>40) who already suffered poor crops due to the cold and/ or due to the droughts that things are not going to get better there for the next few decades. It will only get worse as time goes by.
3) We also have to provide more protection against more precipitation at certain places of lower latitudes (FLOODS!).
Reading 170 year cycles… I knew I read about this before!
People, try a quick blast on Google and you will be decimated by the amount of similar findings from differing sources. From war to famine. Even the 10Be records have something to say
How can one decimate by 12 ?
I’m delighted to see that the head posting confirms – and by similar methodology – work being done elsewhere which has come to very much the same conclusion – that there may be a 0.5 K drop in global temperatures within the next few years. I mentioned this briefly in an earlier posting, and the usual suspects elsewhere began sniffing and sneering and offering me bets.
Fourier analysis, and particularly the subset of it that is used here, is an appropriate method. But best results are obtained not so much by looking at the past temperature record in isolation and over little more than a century, as here, but in association with possible aetiologies and their datasets, and over all timescales. Murry Salby’s lecture takes this approach, and is worthy of study. The Akasofu and Scafetta approach, extrapolating from previous cycles, is also interesting, as is Tsonis on the ocean oscillations, whose 58-year cycles are close enough to be harmonics of the 171-year period mentioned in the head posting as having been detected by the author’s harmonic decomposition.
If the IPCC did science this way, openly and inviting public scrutiny, it would have been out of business long ago: it is only by concealment that they get away with continuing this scam. If Mann, Bradley and Hughes had done science this way, they would never have dared to foist the “hockey stick” on the world.
Finally, at Bishop Hill there is a hilarious debate in one of the House of Commons committee rooms about global warming, in which the true-believers take quite a pasting.
There is a 170-year cycle that is based upon the conjunctions of Uranus and Neptune: 1992, 1821, 1648, 1470, 1300, etc.
This paper by Coughlin and Tung 2004 employs the nonlinear EMD (empirical mode decomposition) analysis to stratospheric temperatures, and finds a strong 11 year solar cycle.
This is the right kind of analysis to look for nonlinear forcing of the climate system by solar and other astrophysical cycles. Check out figure 3.
Stephen Rasey says:September 11, 2013 at 10:31 am
“Ok. But then your line that you subtract from the time-based data is exquisitely sensitive to just two sample points,”
Actually quite the opposite. This is because the Fourier Transform of a sum is equal to the sum of the FTs. Thus we can reconstruct the _exact_ spectrum of the unperiodized record by subtracting out the known transform of the line (ramp), regardless of the slope of the line. Here this compensation step was unnecessary as we were only interested in determining that the observed peaks were not record-length artifacts.
Chris Schoneveld:
At September 11, 2013 at 11:04 am you ask me
Among the many possible meanings of your question, one is,
‘How do the studies of Patterson and Akasofu relate?’
And, of course, the answer to that question is,
‘They don’t relate because they are different analyses conducted for different purposes although they are conducted on the same data’.
Patterson attempts to use signal processing to determine underlying frequencies in the global temperature time series. It remains to be seen if his analysis can provide useful information, but if he has identified frequencies which are ‘real’ then consideration of those frequencies may induce studies for provision of information concerning significant climate processes.
Akasofu observes that the global temperature time series can be matched by assuming a linear recovery from the LIA combined with a sinusoidal oscillation (possibly related to ocean behaviour) which provides alternate periods of warming and cooling, each of 30 years' duration. He extrapolates that pattern with a view to determining when it changes, and upon observation of a change (which will surely eventuate) it will then be reasonable to infer that whatever has induced that pattern has changed.
I don’t know how either analysis could jive or perform any other dance. But I am pleased with my jiving and have medals for my ability on the dance floor.
Richard
RC Saumarez says:
September 11, 2013 at 9:49 am
"I can take broadband noise and then pass it through a low-pass system. Provided the system has persistence, I will get a signal containing trends and this may appear quasi-periodic. I can fit a Fourier series to this signal and calculate an "input" that will create the output for a short length of recorded signal."
But that is precisely what I’ve done in the section entitled “Stochastic Analysis”! The AR process used is a low-pass filter with coefficients adjusted to produce “climate-like” outputs from white noise inputs. When we do harmonic decomp on these random sequences we see that in general we cannot get close to the goodness-of-fit (as measured by the variance of the residual error) achieved with the actual climate data.
Mr. Patterson,
The quality of a five-harmonically-related-frequency model fit to the data surprises me. I'd be interested in seeing the quality of a similar fit using a five-frequency model whose frequencies are not required to be harmonically related. That is, if the software routine you used to determine the values of harmonic sinewave amplitudes and phases that best fit the data can accommodate sinewave frequency as well, I'd be interested in seeing (a) how the best-fit estimated frequencies differ from their harmonic counterparts, and (b) the change, if any, of the residuals and residual spectral content. If the software you employed uses a weighted-least-squares cost function, my guess would be that, given an initial guess corresponding to your harmonic analysis results, the software would converge to a solution with frequencies, amplitudes, and phases differing only slightly from their "harmonic analysis" equivalents.
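For anyone who wants to try the experiment described above, the frequencies can be freed in FindFit and seeded at the harmonic solution; a sketch (the names and starting-value list are mine, not from the original code):

(* five sinusoids with independent frequencies, started at the fitted harmonic solution *)
freeModel = A1 Sin[w1 t + p1] + A2 Sin[w2 t + p2] + A3 Sin[w3 t + p3] +
   A4 Sin[w4 t + p4] + A5 Sin[w5 t + p5];
startVals = {{A1, AY1 /. fitParms1}, {p1, phiY1 /. fitParms1}, {w1, wpkY1},
   {A2, AY2 /. fitParms1}, {p2, phiY2 /. fitParms1}, {w2, 2 wpkY1},
   {A3, AY3 /. fitParms1}, {p3, phiY3 /. fitParms1}, {w3, 3 wpkY1},
   {A4, AY4 /. fitParms1}, {p4, phiY4 /. fitParms1}, {w4, 4 wpkY1},
   {A5, AY5 /. fitParms1}, {p5, phiY5 /. fitParms1}, {w5, 5 wpkY1}};
freeFit = FindFit[yearly, freeModel, startVals, t];
Variance[yearly - Table[freeModel /. freeFit, {t, 1, Length[yearly]}]]

Comparing the fitted w1…w5 and this residual variance with the harmonic results would answer (a) and (b).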
JP: "[I] forgot to add a comment on forecasting: In retrospect I wish I had left this section out"
Yep, forgetting the title of the post may be the best approach.
"All modeling, including GCMs, no matter how complex, are simply calculations. The results are completely determined by the input and initial state. They are thus only useful in the hypothesis-forming portion of the scientific method (something the climate modelers seem to have forgotten). Once formed, the hypothesis must be verified by empirical observation."
A very good point.
JP: “First, the concerns about aliasing are IMHO unwarranted. The monthly data record is highly low-passed, being averaged in both space and time. ….The power spectral density of the unaltered Hadcrut data shows the high frequency floor to be down some 60dB (1/1000) from the spectral peaks we are trying to extract.”
That is precisely where the problem lies. It _had been_ heavily low-pass filtered and the evidence is that it was not correctly anti-aliased before that was done.
http://judithcurry.com/2011/10/18/does-the-aliasing-beast-feed-the-uncertainty-monster/
The fact that there is NOW very little H.F. tells us _nothing_ about how much there was in the data that is now aliased elsewhere in the spectrum. And that could be multi-year or decadal in scale.
The papers on Hadley SST processing sketch out the process to build the global ‘climatology’ grid, which involves running averages in adjacent grid cells across latitude and longitude. This is repeated in a loop until the result ‘converges’. These 5×5 degree grid cells of 5 day averages are then processed into calendar monthly means.
The papers do not make any mention of anti-alias filtering prior to re-sampling .
So, yes, there is lots of low-pass filtering going on; that accounts for the low amplitude of h.f. signals that you comment on. That in no way indicates that the power in those poorly filtered and re-sampled frequencies is not now lying in the multi-year to decadal frequency bands.
RC Saumarez showed that the frequency spectrum of that data suggests rather clearly the presence of aliasing.
my comparison of ICOADS to hadSST:
http://climategrog.files.wordpress.com/2013/03/icoad_v_hadsst3_ddt_n_pac_chirp.png
http://climategrog.wordpress.com/2013/03/01/61/
…shows some limited but significant changes to the frequency content of the data due to Hadley processing. Whether this is an improvement or a degradation of the data is not my job to establish but I am unaware of any account that this was even measured as part of Hadley’s QA on their data processing.
Maybe it was, maybe I missed it. Maybe it wasn't.
Caveat Emptor.
@Jeff Patterson at 11:31 am
Thus we can reconstruct the _exact_ spectrum of the unperiodized record by subtracting out the known transform of the [known] line (ramp),
You don’t know the line. 113 points in the dataset is just an artifact of how long someone was recording data. When the data starts and ends is similarly an artifact. Different starts and different ends give different lines and therefore different transforms. I expect differences in the high frequencies and power at the low frequencies.
Also, the choice of a base “line” is arbitrary. Simplistic, but not necessarily the best choice. That base line might be a linear function of CO2 concentration, logarithmic function of CO2, or a non-linear function of sunspot number.
The observed temperature anomaly since 1900 can be well modeled with a simple harmonic decomposition of the temperature record based on a fundamental period of 170.7 years.
That’s dear: 4 significant figures with a time series less than one full period.
Anthony: This DSP engineer is often tasked with extracting spurious signals from noisy data.
Is this “signal” spurious?
His model does not fit the data as well as Vaughan Pratt’s model. It should be clear by now that experienced data analysts can get just about any result they want from the extant data. Now that he has published his model, it can be tested by future data.
Goodman and Saumarez have many criticisms of this article, but no-one has raised the things which immediately troubled me. First, given that HadCRUT3 starts in 1850, why did JP throw away 50 years and start at 1900? This is not explained, and is fishy. Second, if the model is good at predicting forwards, how well does it do in predicting backwards (hindcasting) the said 1850-1899 period? Third, since the data length is 113 years, is not a harmonic of 56.9 years (almost half of the 113) extremely fishy?
I also think that some (albeit minor) global cooling is going to occur, because of the Sun, but I do not trust this model to tell me why or by how much. But I did enjoy the mathematics!
Rich.
For those interested, a fifty-year hindcast back to 1850 is available here (the model was created using only data from 1900 to present because of missing data samples prior to that time)
http://montpeliermonologs.wordpress.com/2013/09/11/hindcast-of-model-back-to-1850/
The plotted data is the unaltered monthly Hadcrut data.