The Story Told by the Southern Oscillation Index

Guest post by David Archibald

Bob Tisdale’s post on ENSO on 19th November prompted me to see what I could find in the Southern Oscillation Index (SOI) data. The SOI is calculated from the monthly or seasonal fluctuations in the air pressure difference between Tahiti and Darwin. Sustained negative values of the SOI often indicate El Nino episodes. These negative values are usually accompanied by sustained warming of the central and eastern tropical Pacific Ocean, a decrease in the Pacific Trade Winds, and a reduction in rainfall over eastern and northern Australia. Following is a graph of the SOI on a monthly basis from 1876 to 2010. The major El Ninos are discernible; otherwise it looks like a lot of noise.

The graph following shows the cumulative monthly SOI from 1876 to 2010.
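As a minimal sketch (my own, not the author's code), the cumulative series is just a running total of the monthly index. The standardization here assumes Troup's convention for the SOI; the monthly values are illustrative only, not real BOM data:

```python
# Sketch of the cumulative SOI, assuming Troup's convention:
# SOI = 10 * (Pdiff - mean(Pdiff)) / std(Pdiff), where Pdiff is the
# Tahiti-minus-Darwin mean sea level pressure difference.
import numpy as np

def troup_soi(pdiff, clim_mean, clim_std):
    """Standardize a monthly Tahiti-Darwin pressure difference (hPa)."""
    return 10.0 * (pdiff - clim_mean) / clim_std

# Illustrative monthly SOI values only:
soi = np.array([3.0, -12.0, 21.0, -5.0, -23.0, 11.0])
cum_soi = np.cumsum(soi)  # sustained negative (El Nino) months drag this downward
```

In the cumulative plot, a La Nina-dominated era shows up as a rising run and an El Nino-dominated era as a falling one, which is what makes the trends discussed below visible.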

The SOI does tell a story. It was non-trending for the last of the Little Ice Age and then from 1920 went into a long La Nina-dominated trend that ended with the Great Pacific Climate Shift of 1976. The planet warmed into the 1930s at the beginning of this trend, but then cooled, as it should have done in a La Nina-dominated trend, from the 1940s to the 1970s.

The subsequent El Nino-dominated trend from 1976 to 1995 fell almost three times as fast as the preceding rise. The Climategate emails show that Phil Jones was aware that global warming ended in 1995. The end of the El Nino-dominated trend in 1995 might be the physical cause of that cessation of warming. The SOI has been non-trending since.

This might have been a very neat story if the world had cooled instead of warmed into the 1930s. The 20 years of El Nino-dominant trend from 1976 to 1995 produced the late 20th century warming that got so many people hot and bothered. The story told by the SOI also reinforces how important the Great Pacific Climate Shift of 1976 was. The climate system turned on a dime for some as yet unknown reason.

Roy Martin
December 10, 2010 12:11 am

Oops, the system did not like my symbols. Third line in second para. should read: Scanning periods showed that it was not less than 17.9 nor greater than 18.1 yrs.

December 10, 2010 2:33 am

Paul Vaughan says: “A question I’ve been meaning to ask you: Where was the ~1940 upturn in SST most distinct?”
That spike is most prominent in the western tropical Indian and the Arabian Sea:
http://bobtisdale.blogspot.com/2008/09/indian-ocean-more-detailed-look.html

December 10, 2010 2:37 am

P. Solar says: “I’ve grabbed the data you indicated but it does not really look anything like your plot. Could you cast an eye over this snippet and tell me if that’s the data you intended to point me to.”
That’s it with the default base years, and the default base years as far as I can tell are the term of the data, in this case 1900 to 2010.
Regards

December 10, 2010 5:48 am

After reproducing Dr Archibald’s graph, it is obvious that, considered purely as a pressure quantity, the cumulative sum has no physical meaning.
However, that said, if the pressure time series is regarded as the short-term time derivative of another physical process, then such a cumulative sum makes sense as a way of identifying that process.
That may be the case here, since the cumulative SOI shows a reasonable correlation, over the last 100 years, with a signal I identified as one of five critical ‘Pacific gateways’.
The correlation excludes the 10-year period 1960-70, possibly related to the Pacific island nuclear tests (which ended in 1972).
http://www.vukcevic.talktalk.net/NPG.htm

Paul Vaughan
December 10, 2010 9:04 am

vukcevic, the 1960-70 correlations/anti-correlations one finds when comparing terrestrial time series relate to rapid changes in persistence of AO/NAO & AAO/SAM and an associated spike in LOD’. In short: a reconfiguration of circulation. In layman’s terms: All one has to do is change the angle of a jet to make the eddies swirl in the opposite direction. Complex correlations help avoid Simpson’s Paradox (a trap for overly-linear thinking that is blind to deterministic-relationship sign-reversals). Cheers.

P. Solar
December 10, 2010 5:56 pm

Bob Tisdale said:
“And when I use the 121-month running mean in a post, I usually preface it with something along the lines of, it’s the same filter used by the NOAA ESRL for their AMO data.”
Yes, I find it a bit worrying how commonly these running means are used in view of how badly they behave, and above all that many seem to use them with unquestioning faith that a mean just filters out the noise.
This page is a quick intro to gaussian filters and compares the frequency response of G and RM filters. http://homepages.inf.ed.ac.uk/rbf/HIPR2/gsmooth.htm
The large and repeated side lobes let significant amounts of higher frequencies through that we tend to imagine we are filtering out using RM. These can sometimes come back to bite.
To illustrate the point with the Nino 3.4 data, here is a comparison of the 121-month RM to a gaussian filter. Though there is “some” similarity, there are many peaks that are significantly displaced, and periods like the ’80s when the amplitude is very different and the two plots are 180 degrees out of phase!
http://i56.tinypic.com/24dr8eg.png
So, which is a better representation of the raw data? Let’s look closer.
http://i51.tinypic.com/2ypkodf.png
Looking at 1985, 1989 and 2000 we see that the gaussian is following the raw data and the RM is going the wrong way. It is out of phase. There must be frequency components in the surrounding data that are getting through.
BTW, the Met Office/Hadley also use a 21 month RM when presenting their NH/SH and global temperature data.
Describing a running mean as a filter gives the impression that what comes out the other end is somehow cleaner or purer, that some noise has been removed. However, to someone with a signal processing background the word “filter” immediately asks the question: what is its frequency response?
It is better to think of filters as distorting the data rather than cleaning it up and ask if the distorted version accurately shows the features you are trying find or is displaying spurious artifacts.
This may not matter too much if you are just trying to spot Nino/Nina periods, but I would not choose to use a “filter” that gets both magnitude and phase so badly wrong.
The implementation of a gaussian filter is no more than a weighted mean. It is barely any harder, or more computationally heavy, than plotting a running mean.
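To make that last point concrete, here is a minimal sketch (mine, with an arbitrary illustrative sigma and truncation) showing that a gaussian filter really is just a running mean with non-uniform weights:

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Smooth 1-D array x with a normalized gaussian kernel, truncated at 4 sigma."""
    half = int(4 * sigma)
    t = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (t / sigma) ** 2)
    w /= w.sum()              # weights sum to 1 -- it is still a (weighted) mean
    return np.convolve(x, w, mode="same")

def running_mean(x, n):
    """Plain n-point running mean: the same convolution with uniform weights."""
    return np.convolve(x, np.ones(n) / n, mode="same")
```

Both are single convolutions; the only difference is the weight vector, so there is no computational excuse for preferring the boxcar.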

December 11, 2010 3:09 am

P. Solar: Many thanks for the detailed discussion and links on gaussian and RM filters.
You wrote, “Looking at 1985, 1989 and 2000 we see that the gaussian is following the raw data and the RM is going the wrong way. It is out of phase.”
But if my interest is to show, for example, that the frequency and amplitude of El Nino events outweigh the frequency and magnitude of La Nina events for specific periods of time, then the gaussian filter is picking up the high-frequency component and it’s out of phase. In other words, it depends on what one is interested in presenting.
Regards

P. Solar
December 11, 2010 8:12 am

“But if my interest is to show, for example, that the frequency and amplitude of El Nino events outweigh the frequency and magnitude of La Nina events for specific periods of time, then the gaussian filter is picking up the high-frequency component and it’s out of phase. In other words, it depends on what one is interested in presenting.”
If the peaks are shifted, this will affect the frequency, and possibly even the number, of events you are going to identify. Some may be spurious artifacts due to the filter letting through some frequencies and not others. If you are interested in the magnitude, that magnitude is wrong, even inverted in some cases.
I appreciate that you are only looking very grossly at larger trends but even that may be affected.
I don’t understand your comment about the gaussian “picking up the high-frequency component”. It has no ringing or pass bands in its frequency response (which is itself a gaussian profile). In fact, if you look at the plots you link here you will see a lot of detail in some areas that should not have got past a 5-year filter. These are artifacts. As seen in the years I noted, this can even lead to a trough being plotted where there is a peak in the data. That would worry me.
What do you think the gaussian result is “out of phase” with? Certainly not the data.
I had a quick look at your blog and you seem to be doing a lot of work looking for trends and causal links between different data sets. If you are using RM (which you are) you may well be missing some useful correlations or inferring ones that do not exist.
I would suggest you try re-plotting some of those comparative graphs using G instead of RM. You may find some interesting differences. Maybe some correlations will appear stronger; some lags may (will) change in size or may become leads. It could be very enlightening.
“In other words, it depends on what one is interested in presenting.”
Which I am sure M. Mann, Steig et al would agree with entirely.

December 11, 2010 12:10 pm

Mr. Tisdale
It is your graph and expertise in the Pacific Oscillations that inspired me to look for possible causes, and for that I thank you.
I think I may have finally ‘cracked’ the PDO (graph No. 4):
http://www.vukcevic.talktalk.net/NPG.htm
Thanks again
Vuk(cevic)

Paul Vaughan
December 11, 2010 12:18 pm

There are no “bad” “filters” & “good” “filters”, just goofy interpretations. Without good instinct about the effect of integrating across harmonics, sensible interpretation might not be possible. “Frequency response” graphs won’t help with interpretation if one lacks basic conceptual understanding.

Say one has a symmetrical (& sinusoidal) valley between 2 peaks. A repeat narrow-band smooth will see the same basic pattern. A single wide boxcar from peak-to-peak is obviously going to put a peak over the middle of the valley. That’s not a problem if the goal is to show the average elevation for a boxcar centered on that point.

Does the general public have the skills necessary to sensibly interpret? No. Worse than that: I know career academic statisticians who get tangled in knots thinking about this stuff (because it isn’t necessarily something they’ve ever thought about carefully …certainly not because they are incapable – quite the contrary). There is no substitute for taking whatever amount of time is necessary to build fundamental conceptual understanding from scratch. “Frequency response” plots are not a magical shortcut to enlightenment. A background in signal processing is not necessary for developing conceptual understanding of fundamentals (nor is it a guarantee of natural instinct).
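The peak-over-the-valley effect is easy to reproduce. In this sketch (mine, with made-up numbers), two narrow peaks flank a valley; a boxcar wider than the peak-to-peak separation, centered on the valley, averages in both peaks at once and so plots its maximum where the data has its minimum:

```python
import numpy as np

x = np.arange(400)
# Two narrow gaussian peaks at x = 150 and x = 250, with a valley at x = 200:
y = np.exp(-0.5 * ((x - 150) / 5) ** 2) + np.exp(-0.5 * ((x - 250) / 5) ** 2)

n = 151  # boxcar wider than the 100-sample peak-to-peak distance
smooth = np.convolve(y, np.ones(n) / n, mode="same")
# Centered on the valley, the window spans both peaks; centered on a peak,
# it spans only one -- so the smoothed curve peaks over the raw-data valley.
```

Whether that smoothed maximum is “wrong” depends, as said above, on whether the goal is to show average elevation over the window or the shape of the underlying data.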

Baa Humbug
December 12, 2010 6:28 am

Bob Tisdale says:
December 9, 2010 at 2:45 pm

Ian L. McQueen says: “Caveat: Though solar insolation or cloud cover do not cause ENSO events, they may well modulate the strength of these events. High insolation during El Nino strengthens it, low insolation weakens it. Vice versa for la Nina.”
Do you have a link to a paper or data that supports this?

Actually, Ian was quoting my comment, Bob, and sorry for the late response.
No I don’t have any papers or data to point to. My comment was my opinion/idea that I wanted to contribute.
However, if I inadvertently just added “noise”, I apologise.