Guest essay by Jeffery S. Patterson
My last post on this site examined the hypothesis that the climate is dominated by natural, harmonically related periodicities. As they say in the business, the critics were not kind. Some of the criticisms were due to a misunderstanding of the methodology, and others stemmed from an underappreciation of the tentativeness of the conclusions, especially with respect to forecasting. As for the sparseness of the stochastic analysis, the critics were well founded. This lack of rigor is why it was submitted as a blog post and not a journal paper. Perhaps it served to spark the interest of someone who can do the uncertainty analysis properly, but I have a day job.
One of the commentators suggested I repeat the exercise using a technique called Singular Spectrum Analysis (SSA), which I have done in a series of posts starting here. In this post, I turn my attention away from cycles and modeling and towards signal detection. Can we find a signature in the temperature data attributable to anthropogenic effects?
Detecting small signals in noisy data is something I am quite familiar with. I work as a design architect for a major manufacturer of test equipment, half of which is dedicated to the task of finding tiny signals in noisy data. These instruments can measure signals on the order of -100 dBm (1 part in 10^13). Detecting the AGW signal should be a piece of cake (tongue now removed from cheek).
Information theory (and a moment’s reflection) tells us that in order to communicate information we must change something. When we speak, we modulate the air pressure around us, and others within shouting distance detect the change in pressure and interpret it as sound. When we send a radio signal, we must modulate its amplitude and/or phase in order for those at the other end to receive any information. The formal way of saying this is that ergodic processes (i.e., processes whose statistics do not change with time) cannot communicate information. Small-signal detection in noise, then, is all about separating the non-ergodic sheep from the ergodic goats. Singular Spectrum Analysis excels at this task, especially when dealing with short time series.
Singular Spectrum Analysis is really a misnomer, as it operates in the time domain rather than the frequency domain to which the term “spectrum” normally applies. It allows a time series to be split into component parts (called reconstructions or modes) and sorts them in amplitude order, with the mode contributing most to the original time series first. If we use all of the modes, we get back the original data exactly. Or we can choose to use just some of the modes, rejecting the small, wiggly ones, for example, to recover the long-term trend. As you may have guessed by now, we’re going to use the non-ergodic, information-bearing modes and relegate the ergodic, noisy modes to the dust bin.
SSA normally depends on two parameters: a window length L, which can’t be longer than ½ the record length, and a mode selection parameter k (k is sometimes a multi-valued vector if the selected modes aren’t sequential, but here they are). This can make the analysis somewhat subjective and arbitrary. Here, however, we are deconstructing the temperature time series into only two buckets. Since the non-ergodic components contribute most to the signal characteristics, they will generally be in the first k modes, and the ergodic components will be sequential, starting from mode k+1 and including all L-k remaining modes. Since in this method L only controls how much energy leaks from one of our buckets to the other, we set it to its maximum value to give the finest-grained resolution to the division between our two buckets. Thus our analysis depends only on a single parameter k, which is set to maximize the signal-to-noise ratio.
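For readers who want to try this at home, here is a minimal sketch of the two-bucket SSA split described above (plain numpy; the variable names `sst` and `years` are placeholders, not the actual data handling used for the figures):

```python
import numpy as np

def ssa_reconstruct(x, L, modes):
    """Reconstruct the part of series x carried by the listed SSA modes."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # 1. Embedding: L x K trajectory (Hankel) matrix of lagged windows
    X = np.column_stack([x[i:i + L] for i in range(K)])
    # 2. Decomposition: singular value decomposition of the trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # 3. Grouping: sum the rank-one terms for the selected modes
    Xr = sum(s[m] * np.outer(U[:, m], Vt[m, :]) for m in modes)
    # 4. Diagonal averaging (Hankelization) back to a length-N series
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

# The two-bucket split: "signal" = first k modes, "noise" = everything else
# (sst stands in for the annual NH SST anomaly series)
# signal = ssa_reconstruct(sst, L=55, modes=range(4))   # k = 4, as in Figure 1
# residual = sst - signal                               # Figure 2
```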
That’s a long time to go without a picture, so here are the results of the above procedure applied to the Northern Hemisphere Sea Surface Temperature data.
Figure 1 – SST data (blue) vs. a reconstruction based on the first four eigenmodes (L=55, k=1-4)
The blue curve is the data and the red curve is our signal “bucket” reconstructed from the first four SSA modes. Now let’s look in the garbage pail.
Figure 2 – Residual after signal extraction
We see that the residual indeed looks like noise, with no discernible trend or other information. The distribution looks fairly uniform, with the slight double peak probably due to the early data being noisier than the more recent data. Remembering that the residual and the signal sum to the original data, and since there is no discernible AGW signal in the residual, we can state without fear of contradiction that any sign of AGW, if one is to be found, must be found in the reconstruction built from the non-ergodic modes, plotted in red in Figure 1.
What would an AGW signal look like? The AGW hypothesis is that the exponential rise in CO2 concentrations seen since the start of the last century should give rise to a linear temperature trend impressed on top of the climate’s natural variation. So we are looking for a ramp, or equivalently a step change in the slope of the temperature record. Here’s an idea of what a trendless climate record (with the natural variation and noise similar to the observed SST record) might look like, with (right) and without (left) a 4 °C/century AGW component. The four curves on the right represent four different points in time where the AGW component first becomes detectable: 1950, 1960, 1970 and 1980.
Figure 3 – Simulated de-trended climate record without an AGW component (left) and with 4 °C/century AGW components (right)
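For the curious, a toy record along these lines can be cooked up in a few lines (the ~65-year period, amplitudes and noise level below are rough, assumed values chosen only to look SST-like; they are not the parameters actually used for Figure 3):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1870, 2014)

def toy_record(agw_start=None, slope=0.04):
    """De-trended toy climate: a ~65-year cycle plus white noise, plus an
    optional linear AGW ramp of `slope` degC/yr beginning at `agw_start`."""
    natural = 0.15 * np.sin(2 * np.pi * (years - 1870) / 65.0)
    noise = 0.10 * rng.standard_normal(len(years))
    ramp = np.where(years > agw_start, slope * (years - agw_start), 0.0) \
        if agw_start is not None else 0.0
    return natural + noise + ramp

no_agw = toy_record()                                          # left-hand panel
with_agw = [toy_record(y) for y in (1950, 1960, 1970, 1980)]   # right-hand panels
```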
Clearly a linear AGW signal of the magnitude suggested by the IPCC should be easily detectable within the natural variation. Here’s the real de-trended SST data. Which plot above does it most resemble?
Figure 4 – De-trended SST data
Note that for the “natural variation is temporarily masking AGW” meme to hold water, the natural variation during the AGW observation window would have to be an order of magnitude higher than that which occurred previously. SSA shows that not to be the case. Here is the de-trended signal decomposed into its primary components. (Note: for reasons too technical to go into here, SSA modes occur in pairs. The two signals plotted below include all four signal modes, combined as pairs.)
Figure 5 – Reconstruction of SSA modes 1,2 and 3,4
Note the peak-to-peak variation has remained remarkably constant across the entire data record.
OK, so if it’s not 4 °C/century, what is it? Remember we are looking for a change in slope caused by the AGW component. The plot below shows the slope of our signal reconstruction (which contains the AGW component, if any) over time.
Figure 6 – Year-to-year difference of reconstructed signal
We see two peaks, one in 1920 well before the effects of AGW are thought to have been detectable and one slightly higher in 1995 or so. Let’s zoom in on the peaks.
Figure 7 – Difference in signal slope potentially attributable to AGW
The difference in slope is 0.00575 °C/year, or ~0.6 °C/century. No smoothing was done on the first-difference plot above, as would normally be required, because we have already eliminated the noise component that makes smoothing necessary.
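For completeness, the slope comparison is nothing more exotic than a first difference of the reconstruction (a sketch, assuming the `signal` and `years` arrays from the SSA sketch earlier; the peak-search windows are assumptions chosen to bracket the two peaks in Figure 6):

```python
import numpy as np

# signal and years as in the SSA sketch above (annual sampling assumed)
slope = np.diff(signal)        # year-to-year change in degC/yr (Figure 6)
mid_years = years[1:]          # year associated with each difference

early_peak = slope[(mid_years >= 1910) & (mid_years <= 1930)].max()
late_peak = slope[(mid_years >= 1985) & (mid_years <= 2005)].max()
print(late_peak - early_peak)  # the text above reports ~0.00575 degC/yr (~0.6 degC/century)
```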
Returning to our toy climate model of Figure 3, here’s what it looks like with a 0.6 °C/century slope (left), with the de-trended real SST data on the right for comparison.
Figure 8 – Simulated de-trended climate record with 0.6 °C/century linear AGW component (see Figure 3 above) left, de-trended SST (Northern Hemisphere) data right
Fitting the SST data on the right to a sine-wave-plus-ramp model yields a period of ~65 years with the AGW corner at 1966, about where climatologists expect it. The slope of the AGW fit? 0.59 °C/century, arrived at completely independently of the SSA analysis above.
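The fit itself is straightforward; a sketch of one plausible parameterization is below (the exact model form and starting values are assumptions, since only the fitted period, corner and slope are quoted above):

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_plus_ramp(t, amp, period, phase, corner, slope, offset):
    """~65-year sinusoid plus a linear ramp that switches on at `corner`."""
    ramp = np.where(t > corner, slope * (t - corner), 0.0)
    return amp * np.sin(2 * np.pi * (t - phase) / period) + ramp + offset

# detrended_sst and years stand in for the data plotted in Figure 8 (right)
p0 = [0.15, 65.0, 1880.0, 1966.0, 0.006, 0.0]   # rough starting guesses
popt, _ = curve_fit(sine_plus_ramp, years, detrended_sst, p0=p0)
period, corner, slope = popt[1], popt[3], popt[4]
print(period, corner, slope * 100)  # period (yr), AGW corner (yr), slope (degC/century)
```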
Conclusion
As Monk would say, “Here’s what happened.” During the global warming scare of the 1980s and 1990s, the quasi-periodic modes comprising the natural temperature variation were both in their phase of maximum slope (see Figure 5). This naturally occurring phenomenon was mistaken for a rapid increase in the persistent warming trend and attributed to the greenhouse gas effect. When these modes reached their peaks approximately 10 years ago, their slopes abated, resulting in the so-called “pause” we are currently enjoying. This analysis shows that the real AGW effect is benign and much more likely to be less than 1 °C/century than the 3+ °C/century given as the IPCC’s best guess for the business-as-usual scenario.
Sorry, again, I also don’t understand your numbers.
10,000,000V is 160 dB, so if you said dBm was referenced to mV then 160dBm is 3.1623 x 10^9 mV
10 microvolt is -100dB, so -100dBm is 3.162 x 10^-4 mV
I still want to know how -130dBm relates to the signal (what S/N does that represent?) as opposed to 1W.
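For reference, dBm under the usual convention is power in decibels relative to 1 mW, so it is an absolute power level rather than a signal-to-noise ratio. A quick conversion sketch:

```python
def dbm_to_watts(dbm):
    """dBm is power in decibels referenced to 1 mW (standard convention)."""
    return 1e-3 * 10 ** (dbm / 10.0)

print(dbm_to_watts(-100))   # 1e-13 W, i.e. the "1 part in 10^13" relative to 1 W
print(dbm_to_watts(-130))   # 1e-16 W
```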
What I fail to understand is the emphasis the warmists place on “Peer Reviewed Papers” and the notion that, having been peer reviewed, the conclusions are 100% correct. “Peer review” just means the whole paper is consistent, clear, free of ambiguities and won’t make you look like a complete idiot if you publish it. In no way does it imply that the conclusions of the paper are correct.
Jeffrey S. Patterson:
Thank you for the interesting essay. I’m aware of a barrier to accomplishing what you are trying to accomplish that I would like to share.
The notion of an anthropogenic “signal” submerged in “noise” is of political importance, as it appears in the IPCC’s assessment reports as well as your essay. However, while control theory sets a requirement for information to travel from the future to the present if a system is to be controlled, relativity theory denies that a physical signal can do so, for to do so this “signal” would have to travel at superluminal speed. As “information” is only a measure, there is no bar in relativity theory to information traveling from the future to the present.
The information that is needed in order to control the climate cannot be carried by a signal. One of the consequences is that for a control system for the climate there is no such thing as a “signal-to-noise ratio.”
Currently, there is a bar to controlling the climate but it is not relativity theory. Instead, it is the non-existence of the events in the statistical population underlying the climate models. Absent these events, “information” does not exist as a concept.
Willis Eschenbach says:
September 26, 2013 at 7:55 am
###
You need to learn something about Information Theory. Statistical methods are completely inadequate for dealing with chaotic signals. Information Theory, on the other hand, was developed specifically for this. The signal received by your cell phone is far more chaotic than a temperature record. BTW, many of the major errors with classical statistical methods were discovered by Information Theorists.
You know all those pretty pictures we get from Mars? Well, if Jeffery’s techniques did not work, we would not be getting them. Nothing he did is unusual or exotic. In fact they are pretty standard techniques.
A cheap book on the subject, very old but still a solid introduction that does not skimp.
http://www.amazon.com/Introduction-Information-Theory-Dover-Mathematics/dp/0486682102
JA says:
September 26, 2013 at 7:43 am
CO2 comprises 0.04 PERCENT of the atmosphere; it is an atmospheric trace gas.
Of this 0.04 PERCENT of CO2, about 5 PERCENT is produced as a result of human activity.
Of this trace gas, some 30% is produced as a result of human activity: from 0.03% to 0.04%.
The 5% per year human contribution is additional, the 95% natural CO2 is simply going in and out, with 2.5% more going out than going in…
Matthew R Marler says:
September 26, 2013 at 9:19 am
Actually, the numbers indicate that ON AVERAGE the increased forcing can’t be “mostly increased vaporization”.

Global evaporation is estimated by the fact that on average the annual rainfall averaged over the surface of the planet (including snow water equivalent) is about a metre (39″) per year. It takes about 70 W/m2 to evaporate that much water. And from things like the steam you see rising off the pavement after a rain, much of that evaporation is from the sun. So maybe half of that 70 W/m2 is coming from longwave.
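A back-of-envelope check of that ~70 W/m2 figure (assuming a latent heat of vaporization of roughly 2.45 MJ/kg; one metre of rain over a square metre is about 1000 kg of water per year):

```python
latent_heat = 2.45e6            # J/kg, latent heat of vaporization (assumed value)
mass_per_m2_per_year = 1000.0   # kg/m^2 -- one metre of water over one square metre
seconds_per_year = 3.156e7

flux = latent_heat * mass_per_m2_per_year / seconds_per_year
print(flux)                     # ~78 W/m^2, the same order as the ~70 W/m^2 quoted above
```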
Downwelling longwave radiation, on the other hand, is measured around the globe, and averages about 330 W/m2.
This means that the majority (about 90%) of the longwave is NOT going into evaporation.
But of course, it’s not that simple. That’s just the average. In fact, you are correct on the local level in the tropics. There, because of the high temperatures, further increases in temperature have a greater effect on evaporation. There’s something called the “Bowen Ratio”, which is the ratio of heat loss through convection (sensible heat flux) to heat loss from evaporation (latent heat flux). Evaporation generally is much larger than sensible heat loss. Globally the Bowen Ratio is about 0.25, representing the ratio between global sensible (17 W/m2) and evaporative (80 W/m2) heat loss. In the tropics, the Bowen Ratio drops to .05 or so.
And the evaporation, as you point out, does lead to clouds which cut down the incoming energy. In fact, my analysis of the TAO buoy data shows that when the clouds form (typically around 11 AM) the change in incoming sunlight is so large that the surface actually cools.
SOURCE: The TAO That Can Be Spoken
See also:
Cloud Radiation Forcing in the TAO Dataset
TAO/TRITON TAKE TWO
All the best,
w.
Cool Stuff .. but I have a few questions.
Point 1). Your presentation is expressed as a 0.6C change per century. And thus, you express that this is evidence that the increase in temp over the next 100 years would be closer to 1C than 3C. However, the basis of the 3C claim is a doubling of CO2 in conjunction with time. I can see where this type of analysis can tease out a 0.6C change per century in a hindcast, but that does not say what the effect will be from a doubling of CO2 or an increased rate of CO2 concentration.
Point 2). Your analysis shows a 0.6C/century increase, but such analysis takes into account all trending changes. Is it not correct to say that the 0.6C increase could have been due to CO2 .. or changes in cloud cover .. or an increase in SWR reaching the ground? I don’t see how this analysis serves as evidence for a particular “cause” outside of the noise. To assume that it is CO2 is to assume that ALL other influences are just noise, which is not a supportable argument.
Comments??
I’m working on a “paper” that will show the global trend is not due to a global warming signal at all, but the averaging of separate warming trends in different parts of the globe. Global warming is definitely not a global response, but regional warming driven by SST’s. I still have a few more regions to finish analyzing, I should be done soon.
Willis and Bob T, you’re going to love this……
Matthew R Marler says:
September 26, 2013 at 9:19 am
Here is IMO an honest attempt to summarize & analyze what is known & not known about changes in cloudiness over the past 60 years:
http://meteora.ucsd.edu/~jnorris/presentations/Caltechweb.pdf
I recommend it.
Isn’t a better analogy finding a specific needle in a needle stack?
JA See my comment at 8:45 above and check the series of posts at
http://climatesense-norpag.blogspot.com
The changing surface temperature of the earth has been accepted as a convenient metric for climate change. It is obvious that there are quasi-cyclic, quasi-repetitive patterns in the data. Some of these periodicities relate to the orbital relationships between the earth and the sun. If you want to “understand” where we are in relation to climate trends, first figure out where you are relative to these Milankovitch cycles. At this time we are several thousand years past the peak of the current interglacial and are headed toward the next ice age. The temperature changes which would be caused by these orbital changes then resonate and convolve with quasi-cyclic changes in solar “activity”. These include changes in solar magnetic field strength and solar wind speed, which lead to changing GCR influx and probable associated changes in cloud cover and atmospheric chemistry; there are also changes in TSI and, perhaps more importantly, in the solar radiation spectrum, which changes the ozone layer. These drivers then work through the great ocean and air systems – ENSO, PDO, AMO, AO, etc. – to produce the climate and weather.
The exact mechanisms are extremely complicated and hard to disentangle . However it is not necessary to understand these processes in order to make perfectly useful forecasts.
There are obvious patterns in the temperature data which can be reasonably projected forward for some fairly short time ahead. Here is the conclusion to my latest post.
“6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed,
7 By 2650 earth could possibly be back to the depths of the little ice age.
8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling and more CO2 would help maintain crop yields.
9 Warning !!
The Solar Cycles 2, 3, 4 correlation with cycles 21, 22, 23 would suggest that a Dalton minimum could be imminent. The Livingston and Penn solar data indicate that a faster drop to the Maunder Minimum Little Ice Age temperatures might even be on the horizon. If either of these actually occurs, there would be a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best-case scenario.
How confident should one be in the above predictions? The pattern method doesn’t lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigor for the uninitiated, and in relation to the IPCC climate models they are entirely misleading because they make no allowance for the structural uncertainties in the model setup. This is where scientific judgment comes in – some people are better at pattern recognition and meaningful correlation than others. A past record of successful forecasting such as indicated above is a useful but not infallible measure. In this case I am reasonably sure – say 65/35 for about 20 years ahead. Beyond that, certainty drops rapidly. I am sure, however, that it will prove closer to reality than anything put out by the IPCC, Met Office or the NASA group. In any case this is a Bayesian-type forecast, in that it can easily be amended on an ongoing basis as the temperature and solar data accumulate.
DesertYote says:
September 26, 2013 at 9:40 am
Yote, if you had a specific objection to what I said, you would have quoted the words I said, and raised your specific objection. That would have been helpful.
Instead, you wave your hands and say I need education … well, I do, and I always have. But saying something on the order of “Jeffrey is right because Information Theory, so there!!” is not an adequate teaching method. Nor does it address my objections in the slightest.
Finally, I think you may mean “noisy” when you claim that cell phone signals are chaotic, whereas I mean chaotic as in Mandelbrot.
w.
JA says:
September 26, 2013 at 9:11 am
So, everybody; what caused the Little Ice Age??
What caused the Medieval Warming Period?
Does ANYBODY KNOW?
If a Singular Spectrum Analysis had been carried out of the first 100 years of the Medieval Warming Period, what exactly would that tell us of THE CAUSE of that warming??
Would that analysis shed any light about the subsequent Little Ice Age?
Hello. !!!!!! Anybody there??
—————————————
Please see:
Dr Norman Page says:
September 26, 2013 at 10:13 am
Thanks Jeffrey for showing the very clear 60 year cycle.
For another 60-year correlation, see: “ENSO and PDO Explain Tropical Average SSTs during 1950-2013”, September 26th, 2013, by Roy W. Spencer, Ph.D.
I tend to agree. We need to concentrate on getting the measurements correct going forward; let’s let our great-grandchildren concentrate on the short-term analysis. The last hundred years of data is compromised, and the reconstructions before that have anecdotal value; let’s ensure the next hundred years of data is good.
Matthew R Marler;
The 3.7W/m^2 of forcing that results from CO2 doubling falls mostly on water and other wet surfaces
>>>>>>>>>>>>>>>>>>
Actually it doesn’t; it never even makes it to the surface for the most part. Radiative forcing from CO2 runs into a layer of water vapour close to the surface that absorbs and re-radiates it. Very little gets to the surface. You need not believe me; I’m just citing the IPCC explanation of the difference between radiative forcing and surface forcing:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-2-23.html
Willis: Actually, the numbers indicate that ON AVERAGE the increased forcing can’t be “mostly increased vaporization”.
I really am not addressing averages, but what happens at particular places and times. I don’t think the rest of your post rules out the possibility that I mentioned, or the possibility that there is an increase in the rate of the hydrological cycle without an increase in the area of cloud cover.
Downwelling longwave radiation, on the other hand, is measured around the globe, and averages about 330 W/m2.
This means that the majority (about 90%) of the longwave is NOT going into evaporation.
The question I posed is: What does the additional radiation do given the processes that happen every morning and evening, in that 70% and more of the surface that is wet and already experiencing daily evaporation and condensation?
It is related to another question: Given that a non-negligible fraction of surface heat is carried to the upper atmosphere by convection of wet and dry air, how does the transport of heat by convection of wet air get affected by increased downwelling IR? Surely (?) the answer can’t be “no effect at all.”
Willis Eschenbach says:
September 26, 2013 at 10:16 am
###
No. I mean chaotic, e.g. the RF signal your cell phone receives in the Santa Rosa Mall. You should see what a grid of metal beams does to an RF signal! Though to be sure, the signal is very noisy also, just like the temperature record.
You have been consistently objecting to Jeffery’s methodology. Everything you have written is to this effect. No need to quote a specific. I am making a general observation, pointing out that his methods are successfully used to actually do stuff in the real world. Don’t be a Mosher, reading into my comment things I did not say, such as “Jeffery is right because of Information Theory”. What I said was that Jeffery’s methods work, and then I cite a few real examples of them working.
I do not write essays. I don’t have the time. I have a demanding job and cannot afford the hours it takes to serialize out my thoughts (which is difficult for me). I keep my comments short and general. I expect people who are interested to go out and get their own knowledge. That’s what I do. That’s how I found out the things I now know about economics. I made a stupid comment, a regular here blasted me, so I did some research.
Willis Eschenbach says:
September 26, 2013 at 10:16 am
###
Forgot to add full disclosure:
10 years HP/Agilent, mostly Microwave Spectrum Analyzers. I also know who Jeffery Patterson is. So I might be a bit defensive.
Merrick says:
September 26, 2013 at 9:33 am
Sorry, again, I also don’t understand your numbers.
That’s because I totally mucked them up (that’s why they don’t let us DSP guys talk about analog front-end specs). Our spectrum analyzers have about a 10 dB noise figure, which puts the noise floor at about -165 dBm. With noise correction and averaging we can get an additional 6 dB for a stable signal. At that attenuation setting our front-end mixer can handle about +10 dBm (we’d get some harmonic distortion, but we’re talking about noise-limited dynamic range). So the NFDR is about -181 dB, or about 1 part in 10^9, not 1 part in 10^13 as I erroneously stated in the article. I apologize for the error.
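As a sanity check on those corrected numbers (assuming the usual conventions: power ratio = 10^(dB/10), amplitude ratio = 10^(dB/20)):

```python
floor_dbm = -165 - 6       # noise floor, improved ~6 dB by correction and averaging
max_dbm = 10               # front-end mixer limit before distortion dominates
dynamic_range_db = max_dbm - floor_dbm            # 181 dB
amplitude_ratio = 10 ** (dynamic_range_db / 20.0)
print(dynamic_range_db, amplitude_ratio)          # 181 dB, ~1.1e9 ("1 part in 10^9")
```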
Willis Eschenbach says:
September 26, 2013 at 7:55 am
So finding new methods to filter out the short-term variation doesn’t impress me much. There is very little difference, for example, between the red SSA line in your Figure 1, and either a Gaussian or a Loess filter applied to the same data.
This is not surprising; both the Loess filter and SSA are adaptive filters whose characteristic polynomial (i.e. the filter pole and zero placement) is determined by the data to optimize the signal-to-noise ratio. The similarity between the red curve in my Figure 1 and the brown curve in “An impartial look at global warming…” by M.S. Hodgart a few days ago, in my mind, reinforces rather than detracts from their validity.
On the scientific dodge part we agree. Kevin Trenberth, in a refreshing moment of candor, said natural variation is just another way of saying we don’t know. I shouldn’t have used such a loaded term when what I really meant was the natural, unaltered (by man) climate.
To me, the notion that an emergent, natural phenomenon showed up just at the right time and with the right slope to mask the AGW signal seems far-fetched.
Best regards,
Jeff
Dr. Deanster says:
September 26, 2013 at 9:50 am
The projected rise in CO2 under the business-as-usual scenario is, to my understanding, more or less exponential. Since forcing varies with the logarithm of concentration, the logarithmic CO2 saturation curve applied to an exponential concentration path gives rise to the expected linear trend.
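A tiny illustration of why those two facts together imply a linear trend (the growth rate and forcing coefficient below are assumed values for the sketch only):

```python
import numpy as np

years = np.arange(1900, 2101)
C = 280.0 * np.exp(0.005 * (years - 1900))   # assumed exponential CO2 path (ppm)
F = 5.35 * np.log(C / 280.0)                 # standard logarithmic forcing form (W/m^2)

# The forcing increment per year is constant, i.e. the trend is exactly linear
print(np.allclose(np.diff(F), np.diff(F)[0]))   # True
```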
A fair point which I should have highlighted more in my post. The detected signal matches the expected AGW signal in characteristic (roughly linear trend) and detection corner (mid-1960s), but that is not sufficient for attribution. It does, I think, show the unlikelihood of the IPCC business-as-usual projection and places an upper bound on any AGW component.
More important in my mind than determining the exact number is the qualitative conclusion that there is no 3-6 °C/century crisis, and that spending untold billions on mitigations that will negatively impact the quality of life for billions of people in the developing world is unwarranted. This and many other analyses like it show we have time to gather the data we need to make a rational decision. Heck, even the warmists are saying the trend is likely down for at least another decade. In the meantime, we should be spending as much or more on improving the quality of data and on signal detection than on refining models whose usefulness for this task is, in my estimation, at least five decades away.
DesertYote says:
September 26, 2013 at 11:15 am
Sorry DY, but I don’t think it is correct to say that a Faraday shield (your metal grid) induces chaos in a cell phone signal. That would require a high-order non-linearity which I’m not seeing in your analogy. Your main point however is correct. A type of adaptive filter similar to the SSA used here is what achieves the high signal-to-noise ratio that makes cell phone communication (and high-speed modems) possible. The difference is that the Kalman filter in your cell phone is dynamically adapting to the data continuously, while the SSA filter is adapted to the entire dataset just once.
Regards,
Jeff
Jeff Patterson
Very interesting analysis, thank you.
The “AGW signal” could also be a longer term oscillation on the hundreds of years scale.
Have you thought of running a similar analysis on SSN?
David Ball: “Isn’t a better analogy finding a specific needle in a needle stack?”
That’s really good, David. thanks for thinking it and writing it. Bravo