Guest essay by Jeffery S. Patterson
My last post on this site examined the hypothesis that the climate is dominated by natural, harmonically related periodicities. As they say in the business, the critics were not kind. Some of the criticisms were due to a misunderstanding of the methodology and others stemmed from an underappreciation of the tentativeness of the conclusions, especially with respect to forecasting. With respect to the sparseness of the stochastic analysis, the critics were well founded. This lack of rigor is why it was submitted as a blog post and not a journal paper. Perhaps it served to spark the interest of someone who can do the uncertainty analysis properly, but I have a day job.
One of the commentators suggested I repeat the exercise using a technique called Singular Spectrum Analysis, which I have done in a series of posts starting here. In this post, I turn my attention away from cycles and modeling and towards signal detection. Can we find a signature in the temperature data attributable to anthropogenic effects?
Detecting small signals in noisy data is something I am quite familiar with. I work as a design architect for a major manufacturer of test equipment, half of whose products are dedicated to the task of finding tiny signals in noisy data. These instruments can measure signals on the order of −100 dBm (about 10⁻¹³ watts). Detecting the AGW signal should be a piece of cake (tongue now removed from cheek).
Information theory (and a moment's reflection) tells us that in order to communicate information we must change something. When we speak, we modulate the air pressure around us, and others within shouting distance detect the change in pressure and interpret it as sound. When we send a radio signal, we must modulate its amplitude and/or phase in order for those at the other end to receive any information. The formal way of saying this is that ergodic processes (i.e. processes whose statistics do not change with time) cannot communicate information. Small-signal detection in noise, then, is all about separating the non-ergodic sheep from the ergodic goats. Singular Spectrum Analysis excels at this task, especially when dealing with short time series.
Singular Spectrum Analysis is really a misnomer, as it operates in the time domain rather than the frequency domain to which the term spectrum normally applies. It allows a time series to be split into component parts (called reconstructions or modes) and sorts them in amplitude order, with the mode contributing most to the original time series first. If we use all of the modes, we get back the original data exactly. Or we can choose to use just some of the modes, rejecting the small, wiggly ones, for example, to recover the long-term trend. As you may have guessed by now, we're going to use the non-ergodic, information-bearing modes and relegate the ergodic, noisy modes to the dust bin.
SSA normally depends on two parameters: a window length L, which can't be longer than ½ the record length, and a mode-selection parameter k (k is sometimes a multi-valued vector if the selected modes aren't sequential, but here they are). This can make the analysis somewhat subjective and arbitrary. Here, however, we are deconstructing the temperature time series into only two buckets. Since the non-ergodic components contribute most to the signal characteristics, they will generally be in the first k modes, and the ergodic components will be sequential, starting from mode k+1 and including all L−k remaining modes. Since in this method L only controls how much energy leaks from one of our buckets to the other, we set it to its maximum value to give the finest-grain resolution to the division between our two buckets. Thus our analysis depends only on a single parameter k, which is set to maximize the signal-to-noise ratio.
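For readers who want to poke at the method themselves, here is a minimal sketch of the SSA machinery just described: embed the series in a lagged (trajectory) matrix, decompose it, and rebuild a chosen set of modes. It is Python/NumPy; the function name is mine, and the example call (L=55, first four modes, an `sst` array of annual anomalies) is illustrative only, not the exact code behind the figures below.

```python
import numpy as np

def ssa_reconstruct(x, L, modes):
    """Basic SSA: embed x in a trajectory matrix, SVD it, rebuild selected modes."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: each column is a length-L lagged window of x
    X = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Sum the rank-one pieces for the selected modes
    Xr = sum(s[m] * np.outer(U[:, m], Vt[m]) for m in modes)
    # Average along anti-diagonals to get back a time series
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return rec / cnt

# Illustrative usage (array names are hypothetical):
# signal   = ssa_reconstruct(sst, L=55, modes=range(4))   # the "signal" bucket
# residual = sst - signal                                  # the "noise" bucket
```

With all modes included, `rec` reproduces `x` exactly; dropping modes is what splits the series into the two buckets described above.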
That’s a long time to go without a picture so here are the results of the above applied to the Northern Hemisphere Sea Surface Temperature data.
Figure 1 – SST data (blue) vs. a reconstruction based on the first four eigenmodes (L=55, k=1–4)
The blue curve is the data and the red curve is our signal “bucket” reconstructed from the first four SSA modes. Now let’s look in the garbage pail.
Figure 2 – Residual after signal extraction
We see that the residual indeed looks like noise, with no discernible trend or other information. The distribution looks fairly uniform; the slight double peak is probably due to the fact that the early data is noisier than the more recent data. Remembering that the residual and the signal sum to the original data, and since there is no discernible AGW signal in the residual, we can state without fear of contradiction that any sign of AGW, if one is to be found, must be found in the reconstruction built from the non-ergodic modes plotted in red in Figure 1.
What would an AGW signal look like? The AGW hypothesis is that the exponential rise in CO2 concentrations seen since the start of the last century should give rise to a linear temperature trend impressed on top of the climate's natural variation. So we are looking for a ramp or, equivalently, a step change in the slope of the temperature record. Here's an idea of what a trendless climate record (with natural variation and noise similar to the observed SST record) might look like, with (right) and without (left) a 4 °C/century AGW component. The four curves on the right represent four different points in time at which the AGW component first becomes detectable: 1950, 1960, 1970 and 1980.
Figure 3 – Simulated de-trended climate record without an AGW component (left) and with 4 °C/century AGW components (right)
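For concreteness, here is one way such a toy record could be generated. The post does not give the actual generator behind Figure 3, so the functional form and every parameter value below (a ~65-year sine, white noise, a ramp switching on at a chosen year) are stand-in assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2014)

def toy_record(onset_year=None, agw_rate=0.04, period=65.0, amp=0.2, noise_sd=0.1):
    """Quasi-periodic 'natural' variation plus an optional linear AGW ramp.

    agw_rate is in deg C per year (0.04 = 4 deg C/century). All values here are
    illustrative guesses, not the parameters used for Figure 3.
    """
    natural = amp * np.sin(2.0 * np.pi * (years - years[0]) / period)
    noise = rng.normal(0.0, noise_sd, size=years.size)
    ramp = np.zeros(years.size)
    if onset_year is not None:
        ramp = agw_rate * np.clip(years - onset_year, 0, None)
    return natural + noise + ramp

# no_agw   = toy_record()                                        # left panel
# with_agw = [toy_record(y) for y in (1950, 1960, 1970, 1980)]   # right panel
```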
Clearly a linear AGW signal of the magnitude suggested by the IPCC should be easily detectable within the natural variation. Here's the real de-trended SST data. Which plot above does it most resemble?
Figure 4 – De-trended SST data
Note that for the "natural variation is temporarily masking AGW" meme to hold water, the natural variation during the AGW observation window would have to be an order of magnitude higher than that which occurred previously. SSA shows that not to be the case. Here is the de-trended signal decomposed into its primary components. (Note: for reasons too technical to go into here, SSA modes occur in pairs. The two signals plotted below include all four signal modes, constituted as pairs.)
Figure 5 – Reconstruction of SSA modes 1,2 and 3,4
Note that the peak-to-peak variation has remained remarkably constant across the entire data record.
OK, so if it's not 4 °C/century, what is it? Remember, we are looking for a change in slope caused by the AGW component. The plot below shows the slope of our signal reconstruction (which contains the AGW component, if any) over time.
Figure 6 – Year-to-year difference of reconstructed signal
We see two peaks, one in 1920, well before the effects of AGW are thought to have been detectable, and one slightly higher around 1995. Let's zoom in on the peaks.
Figure 7 – Difference in signal slope potentially attributable to AGW
The difference in slope is 0.00575 °C/year, or ~0.6 °C/century. No smoothing was applied to the first-difference plot above, as would normally be required, because we have already eliminated the noise component that makes smoothing necessary.
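Here is one way the numbers could be read off, assuming the `years` and `signal` arrays from the SSA sketch earlier in the post; the 1920 and 1995 peak years are taken from the figures above, and the helper name is mine.

```python
import numpy as np

def slope_change(years, signal, y0=1920, y1=1995):
    """Year-to-year slope of the reconstruction and its change between two peak years."""
    slope = np.diff(signal)          # deg C per year; no smoothing needed on the
    yrs = np.asarray(years)[1:]      # noise-free reconstruction
    d = slope[yrs == y1][0] - slope[yrs == y0][0]
    return d, 100.0 * d              # per year and per century

# e.g. d_per_year, d_per_century = slope_change(years, signal)
```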
Returning to our toy climate model of Figure 3, here's what it looks like with a 0.6 °C/century slope (left), with the de-trended real SST data on the right for comparison.
Figure 8 – Simulated de-trended climate record with 0.6 °C/century linear AGW components (see Figure 3 above), left; de-trended SST (Northern Hemisphere) data, right
Fitting the SST data on the right to a sine-wave-plus-ramp model yields a period of ~65 years with the AGW corner at 1966, about where climatologists would expect it. The slope of the AGW fit? 0.59 °C/century, arrived at completely independently of the SSA analysis above.
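A hedged sketch of that fit using SciPy's curve_fit: the model form (a single sine plus a ramp that turns on at a "corner" year) follows the description above, but the starting guesses and array names are mine, not the exact code used for the numbers quoted.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_plus_ramp(t, amp, period, phase, offset, rate, corner):
    """Sine wave plus a linear ramp (deg C per year) that begins at the 'corner' year."""
    return (amp * np.sin(2.0 * np.pi * t / period + phase) + offset
            + rate * np.clip(t - corner, 0, None))

# 'years' and 'detrended_sst' are assumed to hold the Figure 4 series.
# p0 = [0.2, 65.0, 0.0, 0.0, 0.006, 1966.0]      # rough starting guesses
# popt, _ = curve_fit(sine_plus_ramp, years, detrended_sst, p0=p0)
# amp, period, phase, offset, rate, corner = popt
# print(f"period ~{period:.0f} yr, corner ~{corner:.0f}, slope ~{100*rate:.2f} C/century")
```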
Conclusion
As Monk would say, "Here's what happened." During the global warming scare of the 1980s and 1990s, the quasi-periodic modes comprising the natural temperature variation were both in their phase of maximum slope (see Figure 5). This naturally occurring phenomenon was mistaken for a rapid increase in the persistent warming trend and attributed to the greenhouse gas effect. When these modes reached their peaks approximately 10 years ago, their slopes abated, resulting in the so-called "pause" we are currently enjoying. This analysis shows that the real AGW effect is benign and much more likely to be less than 1 °C/century than the 3+ °C/century given as the IPCC's best guess for the business-as-usual scenario.
Related articles
- ‘Mind blowing paper’ blames ENSO for Global Warming Hiatus (wattsupwiththat.com)
- Signal Detection: An Important Skill in a Noisy World (perceptualedge.com)
- Where’s the Magic? (EMD and SSA in R) (r-bloggers.com)
Nancy
dBm is dB relative to a milliwatt, more likely in the context of 50-ohm RF environments.
The 600-ohm reference once common in balanced audio environments dates both of us.
Dr Norman Page: The global temperature has been within narrow limits for at least 500 million years.
That is compatible with a chaotic system that generates a bounded distribution of states.
davidmhoffer: The global temperature has been within narrow limits for at least 500 million years.
I read it. Thanks.
oops. I quoted the wrong text for davidmhoffer. It’s supposed to be this: In part please see my response to Willis above.
Notwithstanding all the work done on facts to disprove the IPCC, Al Gore, Michael Mann, Joe Romm et al., best to keep in mind [you're] not in a haystack of hay, [you're] in a haystack of needles made of lies and fraud.
What they have is well-honed lies, protected by liars telling lies to keep you busy deconstructing the last pack of lies. It is just busy work.
It is about getting the low-information voters to vote for the redistribution of wealth.
If you prove up that it is a lie and [you're] 1000% correct, they will remain standing telling new lies about another "ice age" or "wild winds of change" or "no water vapor left for [clouds]" or some such line of bull.
Pull their taxpayer-paid tickets with votes.
Dr Norman Page says:
September 26, 2013 at 3:57 pm
Norman, seriously … that is not a forecast. “Modest cooling” and “more significant cooling” are not falsifiable.
Again, no numbers, not falsifiable, useless.
Starting when? How much of a trend?
0.15 or more? 0.15 or less? How long a moving average?
How long an average? 0.5 or more? 0.5 or less?
Numbers! All of this needs numbers!
Oh, please. The forecast for 2100 was bad enough. This is a joke. Meaningless. Fluff.
Not falsifiable.
Norman, think of it as a bet. If you were going to bet, would you bet on something as vague as whether there would be “modest cooling”? No way, you’d be arguing endlessly about whether -0.1°C per decade is “modest” or not.
As it stands, I wouldn’t bet on a single one of those “forecasts”, because I couldn’t tell if I’d won or lost.
Best regards,
w.
davidmhoffer says:
September 26, 2013 at 5:17 pm
In my Bible, “Climate Near The Ground”, they say that 72% of the DLR is coming from the bottom 87 metres, 6.4% from the next 89 metres, 4% from the next 90 metres, and so on.
w.
Willis;
In my Bible, “Climate Near The Ground”, they say that 72% of the DLR is coming from the bottom 87 metres, 6.4% from the next 89 metres, 4% from the next 90 metres, and so on.
>>>>>>>>>>>>>>>>>>
Well that would leave precious little to come from CO2, the bulk of which is well above the first 300 meters or so, assuming it is reasonably well mixed? I tried to look this up in my own bible, which I think is different from yours because it just said something about going forth and multiplying upon the face of the earth, so I went outside and scratched 6×7=42 in the dirt. Then I just contemplated for a time on life, the universe, and everything.
I have long been interested in the metrics of global warming claimants’ refrains. Only a couple of days ago I saw Ars Technica mentioned as a bastion of censoring in the name of CO2, went over there and saw for myself the zeal with which any reference to instrumental readings is treated as absolute anathema by warmers.
The absolute inability of the climate changers to predict which way a thermometer will move has always been the end of their claim. They can’t, it’s that simple, and t.h.e.y. d.e.s.p.i.s.e. instruments.
For this reason only hypotheticals and politics are allowed at warmer sites in general; at least that’s the way it had been going and how I saw it over there most recently.
It might have been here that I actually saw them mentioned as a warmer battalion bastion.
Surely this is recurrent periodic Scafettian cyclomania? Even Anthony Watts said so.
Of course any post-1950 step function or singularity COULD be simply the influence of a decadal, millennial, centennial and epochal cycle all coinciding (or not).
Sadly we don’t have trustworthy data, especially with high frequencies in it, going back very far.
“I agree with those who have written here that climatology would be better served by trying to discover possible causes for these observed quasi-cycles instead of constructing ever more Ptolemaic epicycles-like GIGO models.”
But you neglect to point out that it was only many centuries after the Ptolemaic epicycles had been thoroughly mapped that the simplification to 9–10 elliptical orbits was possible.
Look: I do understand signal analysis. I don’t understand the mind set of the posters here.
All Fourier and related analyses do is transform the data from one based around a time axis to one based around a frequency axis, so that if the data does represent a series of superposed cycles, it should become fairly obvious by inspection. It is not even a curve FITTING exercise; it is simply the data represented in a different way. If Ptolemy had done Fourier analysis on planetary motions he would have seen orbital periods staring him in the face.
The value of doing this is to establish first of all whether there are cyclical variations, and to demonstrate their fit to the data by isolating the dominant ones and reconstructing a time series.
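A minimal illustration of that point, with made-up periods and amplitudes that have nothing to do with the SST data: two superposed cycles buried in noise show up directly as peaks in the amplitude spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(260)                                   # "years", arbitrary length
x = (0.2 * np.sin(2.0 * np.pi * t / 65.0)            # two superposed cycles...
     + 0.1 * np.sin(2.0 * np.pi * t / 20.0)
     + rng.normal(0.0, 0.1, t.size))                 # ...buried in noise

amp = np.abs(np.fft.rfft(x)) / t.size                # amplitude spectrum
freq = np.fft.rfftfreq(t.size, d=1.0)                # cycles per year
top = np.argsort(amp[1:])[::-1][:2] + 1              # two strongest non-DC bins
print("dominant periods ~", np.round(1.0 / freq[top], 1), "years")   # ~65 and ~20
```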
Unsurprisingly, the fit is good. But all that demonstrates is that there is a cyclically modellable component in the data.
There may also be a residual error that relates to something like CO2. Or perhaps another much longer-period cycle. The technique can’t, without extension into deep time, establish the difference.
But what it CAN do, and HAS done, is to say that once the data is presented as the sum of cyclic components, the case for at least some cyclical drivers of climate cannot be avoided.
Reconstruction of the most important 4 frequency ‘bins’ gives a good fit. That would not happen if those bins held only a small percentage of the total spectral energy.
The fact that it’s only 4 is encouraging. The more chaotic the signal, the more the energy is not coherent with respect to specific frequencies. That is a point people do not seem to understand. That is, this technique will not properly reconstruct a chaotic signal using just a few bins out of the many available. Ergo the point made here: temperature is not wholly chaotic; it has a clearly discernible cyclicity.
Likewise the existence of a slow influence – such as that presumed for CO2 – is in fact indistinguishable from a very long-period cyclic variation over the period for which we have measured the (CO2) rise.
So sadly we can’t say whether it’s CO2 or not by this method, but what we can say is that IF the cyclic nature of the data, as evinced by the data itself, is considered to represent something beyond mere coincidence, then it tells us that whatever the cycles are, they can account for the larger part of recent climate variation, and CO2 has an upper bound set on its actual and potential effects. Which is important.
In short, cyclomania does have sensible implications. It is turning unknown unknowns into known unknowns.
Something is happening here, even if we don’t know what it is, Mr Jones.
Leo Smith (September 27, 2013 at 3:31 am) wrote:
“The more chaotic the signal, the more the energy is not coherent with respect to specific frequencies. That is a point people do not seem to understand.”
Sensible discussion of Jeffery S. Patterson’s contributions is impossible here due to the level of ignorance &/or deception.
@Jeffrey S Patterson
Thanks for your response. I think that there is a very interesting conceptual problem. I developed a somewhat bastardised version of a principal component method to extract signal from noise, and I think that one of the conclusions I developed applies here. The problem stems from how we think about orthogonality, particularly if you have gone through the classical DSP training in the mediaeval period (i.e. the 1970s) as I did.
In SSA one forms the covariance matrix of the lagged signal. Since this is real and symmetric, the eigenvectors are real and orthogonal, and you then compose signal components on this basis. The problem, which took me ages to see, is that the extraction of orthogonal components is based on energy in the signal, which in this case is not cross-spectral power. As a result, this does not imply that the components one extracts are orthogonal in the conventional sense that the integral of their product is zero.
One of the quoted properties of SSA is that given a signal:
f(t) = exp(−kt)·sin(at)
SSA can decompose the signal into the exponential decay and the sine wave. (I’ve just tried it and it appears to.) However, exp(−kt) and sin(at) are NOT orthogonal functions and the decomposition is not unique. For example, if one expands this by Maclaurin’s theorem one could generate a family of functions that describe it, and if one expands the cross components to 7 terms one would get a pretty good representation of the signal, each with appropriate coefficients – but the even terms would not be part of the sin(at) component.
This is why I am rather sceptical about being able to distinguish an anthropogenic component by SSA, simply because I do not think one can assume that there MUST be separation between them.
In fact, unless the anthropogenic component were a very peculiar shape, such as a series of pulses, I am sceptical that one can separate it out using DSP methods at all, and it is more likely that a model-based approach is more sound. I hasten to add that one would need a rigorously verified model before making any pronouncements, and the current crop of models have some pretty severe predictive limitations!
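A reader who wants to probe the separability question numerically could do something along these lines, reusing the `ssa_reconstruct` sketch from the article above (so this is an assumption-laden experiment, not a proof either way): decompose the damped sine and check whether the leading reconstructed components are orthogonal in the conventional inner-product sense.

```python
import numpy as np

# Requires the ssa_reconstruct() sketch from the article above; the decay rate,
# frequency and time grid are arbitrary choices for the experiment.
t = np.linspace(0.0, 10.0, 400)
f = np.exp(-0.2 * t) * np.sin(2.0 * np.pi * t)       # f(t) = exp(-kt) * sin(at)

c1 = ssa_reconstruct(f, L=200, modes=[0])            # leading component
c2 = ssa_reconstruct(f, L=200, modes=[1])            # next component

# Conventional orthogonality check: is the normalized inner product near zero?
cosine = np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
print("normalized inner product of components 1 and 2:", round(float(cosine), 4))
```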
Willis, you are being deliberately obtuse or misleading, or misunderstanding the numbers. The forecast temperatures clearly refer to the Fig 8 SST Global Temperature anomaly in the last post at http://climatesense-norpag.blogspot.com
Thus the 2035 anomaly number is minus 0.15 and the 2100 number is minus 0.5
For some reason you didn’t recognize the minus sign.
The earlier 3 forecasts are general trends and events in the context of the forecast of the minus 0.15 anomaly in 2035. The 2650 comment follows logically from a repeat of the 1000 to 2000 cycle.
I say later that at this time this forecast is speculative, but it is by no means meaningless.
You can’t replace something with nothing. What would your best shot at the Global HadSST3 numbers for 2035, 2100 and 2650 be?
RC Saumarez says:
September 27, 2013 at 5:32 am
Thank you for your remarks. Unfortunately I can’t parse a key paragraph.
I think you are saying that the extracted eigenvectors are orthogonal (which of course they are) but the reconstruction may not be. This is the leakage issue I wrote of in the article and which I don’t think is an issue here. SSA is just a type of filter, and just as there is no unique way to set the characteristic polynomial of an FIR, and there is leakage between the pass band and stop bands, so too in SSA there are many adaptations possible and the residuals along each eigenvector (which are minimized by the algorithm) are not necessarily independent. Those effects are mitigated here by fixing L to its maximum value and setting k to provide the best SNR. The ACF of the residual is very impulsive (implying minimal cross-mode correlation) and it passes the unit root test with p=0.
It is, I think, important that we be able to derive the SSA’s impulse response to ensure it is not ringing. One can’t do this in the normal way because the filter readapts to whatever signal you feed it! I’m working out a technique to use the extracted eigensystem to calculate the transfer function directly.
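For completeness, the residual diagnostics mentioned above (impulsive ACF, unit root test) can be reproduced along these lines with statsmodels; `residual` is assumed to be the noise bucket from the SSA sketch in the article, and the lag count is an arbitrary choice.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, adfuller

def residual_checks(residual, nlags=20):
    """ACF of the residual plus an augmented Dickey-Fuller (unit root) test."""
    r = acf(np.asarray(residual, dtype=float), nlags=nlags, fft=True)
    adf_stat, p_value = adfuller(residual)[:2]   # small p-value => reject a unit root
    return r, adf_stat, p_value

# r, stat, p = residual_checks(residual)
# An approximately white residual shows r ~ 0 beyond lag 0 and a small p-value.
```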
In my last comment I said, “I think you are saying that the extracted eigenvectors are orthogonal (which of course they are) but the reconstruction may not be.” I meant that the components comprising the reconstruction may not be orthogonal. I’m not sure this is correct, but I do know that the residuals of each component are not guaranteed to be independent. Perhaps those are two ways of saying the same thing – I’ll have to cogitate on that a bit. Filters introduce sample-to-sample correlation; there’s no way around that. That doesn’t mean the passband signal doesn’t represent the input signal with improved SNR.
Leo Smith says:
September 27, 2013 at 2:43 am
Surely this is recurrent periodic Scafettian cyclomania?
How so? We’ve said nothing about cycles and made no projections. We’ve only separated, to the greatest extent possible, those components which could contain an AGW component from those which could not, and examined the resulting slope over time. I’m not seeing your analogy with Scafetta.
There is nothing wrong with Scafetta’s cycles except that he hasn’t gone to low enough frequencies, i.e. the 1000-year cycle. If he included that, his projections would, I think, using the useful eyeball method, be very similar to my own – see above. I think Jeff’s approach is helpful – and urge him again to use it on the 2000 year Christiansen proxy temperature reconstruction data – which I believe is archived on line. See Fig 3 and accompanying link on last post at http://climatesense-norpag.blogspot.com
Sorry y’all, the Fig number in my last comment (7:11) should be 7.
Dr Norman Page says:
September 27, 2013 at 7:11 am
..[I] urge him again to use it on the 2000 year Christiansen proxy temperature reconstruction data – which I believe is archived on line.
I’ve looked for the dataset but haven’t been able to locate it. Any pointers?
Reblogged this on The Montpelier Monologues and commented:
Here’s a reblog of my article of 9/25/2013 posted on WUWT. I’ll follow up here with discussions of the method and follow-on analysis.
Jeff, I’m pretty sure this is it. You need to read the Christiansen paper carefully and check the NOAA archive to make sure this is what was used in my Fig 7
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/ljungqvist2009/ljungqvist2009recons.txt
Looking more closely, I think the NOAA data is the original from which my Fig 7 was compiled, as described in the paper. You might consider emailing the authors and politely requesting the actual annual data file for my Fig 7, i.e. Fig 5 in the original paper.
Willis Eschenbach: In my Bible, “Climate Near The Ground”, they say that 72% of the DLR is coming from the bottom 87 metres, 6.4% from the next 89 metres, 4% from the next 90 metres, and so on.
I want to thank you and davidmhoffer for your comments, but I am stuck where I began. Doubling the atmospheric CO2 concentration will produce some increase in the net downwelling IR at the surface. If it doesn’t (and some people do indeed argue that it can’t, but I don’t believe them), then doubling CO2 will not change the (equilibrium) surface temperature. Now over dry land the effect is to increase surface temperature. So my question remains: over ocean and wet land, how much of the increased downwelling energy goes to sensible heating, and how much to vaporization of the water?
davidmhoffer’s argument is that there is so much more water vapor than CO2, even after doubling CO2, that the increased downwelling of IR at the surface can’t be very much. Indeed, no one has argued that it is very much: the predicted equilibrium effect is only about ½% (1.3 K/288 K), but the case that it is 0 is neither complete nor believable.
Jeffery S. Patterson, you are a good sport to put your work up here for examination and criticism, and to defend it.
Dr. Norman Page: You can’t replace something with nothing. What would your best shot at the Global HadSST3 numbers for 2035, 2100 and 2650 be?
How about the statement that, on present knowledge, the Global HadSST3 numbers for 2035, 2100 and 2650 can’t confidently be predicted with sufficient accuracy for the prediction to matter? A clear statement of what isn’t known is not “nothing”.