Keeping track of NOAA’s ENSO data changes

The NOAA Weekly ENSO Sea Surface Temperature Indices Webpage Has Changed Location

By Bob Tisdale

Concern has been expressed recently around the blogosphere that NOAA hasn’t updated its weekly El Niño-Southern Oscillation (ENSO) sea surface temperature-based indices webpage since May 16, 2012. I use the NOAA NOMADS website (alternate NOAA computer here) for my sea surface temperature updates, so this didn’t present any difficulties for me. It did, however, cause problems for the WattsUpWithThat ENSO meter, since it had been linked to that NOAA SST indices webpage. To remedy that problem, the WUWT ENSO meter is now updated from an Australian Bureau of Meteorology webpage here.

What happened to the NOAA weekly updates?

NOAA changed climatologies (base years for anomalies) from 1971-2000 to 1981-2010 and provided a new webpage, which is currently up to date. The standard climatology for the Reynolds OI.v2 sea surface temperature anomalies continues to be 1971-2000, and that’s what the NOMADS and BOM use. As a result, there will be differences between the anomalies on the NOAA indices webpage and on the NOMADS and BOM webpages. And since the NOAA Oceanic NINO Index (ONI) now uses a sliding series of 30-year climatologies, updated every 5 years (see here), it won’t agree with any of them.

Through a link on their Monthly Atmospheric & SST Indices webpage, NOAA explains the reason for the change in base years on a webpage titled Climate Diagnostics Bulletin Updates to Climatologies and Indices Beginning with January 2011 Data:

Beginning with the January 2011 monthly data, all climatologies, anomalies, and indices presented within and related to the monthly Climate Diagnostics Bulletin will be updated according to current WMO standards. For datasets that span at least the past 30 years (such as atmospheric winds and pressure), the new anomalies will be based on the most recent 30-year climatology period 1981-2010.

Well, there it is.

For those monitoring the weekly NOAA ENSO SST indices webpage, don’t forget to update your favorites with the new webpage.


21 thoughts on “Keeping track of NOAA’s ENSO data changes”

  1. Of what possible use is a 30-year data set? Hopelessly short. Meaningless. Why is it being used?
    Even worse, they keep adjusting the data.

  2. The old NOAA page stopped updating a couple of times before; I’d send them a note and it would start updating again. The last stoppage, stuck at a 0.0 anomaly, had some amusement value. However, I switched to the BoM data at :


    And 0.4 is what the ENSO meter is displaying now.

    This is the first I’ve heard of the new NOAA page, which shows:

     Weekly SST data starts week centered on 3Jan1990
                    Nino1+2      Nino3        Nino34        Nino4
     Week          SST SSTA     SST SSTA     SST SSTA     SST SSTA
     03JAN1990     23.4-0.4     25.1-0.3     26.6 0.0     28.6 0.3
     09MAY2012     25.7 1.2     27.3 0.1     27.8 0.0     28.4-0.3
     16MAY2012     25.2 1.0     27.2 0.1     27.8 0.0     28.5-0.3
     23MAY2012     24.8 0.9     27.1 0.2     27.8 0.0     28.6-0.2
     30MAY2012     24.5 0.9     27.2 0.4     27.9 0.2     28.7-0.1
     06JUN2012     24.7 1.5     27.1 0.4     27.8 0.1     28.6-0.2
     13JUN2012     24.4 1.5     27.1 0.6     27.9 0.3     28.6-0.2

    In your May update on June 11th, you note

    The NINO3.4 Sea Surface Temperature anomalies based on the week centered on June 6, 2012 are well above zero. They are presently at +0.248 deg C.

    I tried to find that at the NOMADS site but got lost with the various plotting options when all I wanted was a simple table that was easy to download.

    So I gave up and went with the BoM data.

    I’m happy to use any source I can readily “scrape.”
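    For what it’s worth, here is a minimal parsing sketch for the fixed-width table quoted above. The column layout is assumed from the sample shown here, not from any official NOAA format specification, and `parse_weekly_sst` is a hypothetical helper, not NOAA code.

```python
# Illustrative sketch: parse the weekly NOAA SST table quoted above into
# {week: {region: (SST, anomaly)}}.
import re

SAMPLE = """\
 03JAN1990     23.4-0.4     25.1-0.3     26.6 0.0     28.6 0.3
 09MAY2012     25.7 1.2     27.3 0.1     27.8 0.0     28.4-0.3
"""

REGIONS = ["Nino1+2", "Nino3", "Nino34", "Nino4"]

def parse_weekly_sst(text):
    rows = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        week = parts[0]
        # Values like "23.4-0.4" have no space before a negative anomaly,
        # so pull the numbers out with a regex instead of whitespace splits.
        numbers = [float(n) for n in re.findall(r"-?\d+\.\d+", line)]
        pairs = zip(numbers[0::2], numbers[1::2])
        rows[week] = dict(zip(REGIONS, pairs))
    return rows

rows = parse_weekly_sst(SAMPLE)
print(rows["03JAN1990"]["Nino34"])   # (26.6, 0.0)
```

    The same approach should work on the BoM table, adjusting the region names and sample layout to whatever that page actually serves.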

  3. I noticed this as soon as they changed it. ENSO measurement should be static (not sliding) and based on one set of base climatological years. Needless to say, a few months ago, I emailed a professor of mine who has been published for his climatology work. I don’t think he was a fan either.

  4. Another alternate source of ENSO indices is from the NOAA Earth System Research Laboratory,

    I particularly like the Multivariate ENSO index, which combines normalized departures of a variety of correlated parameters, including SST and wind fields. That index has gone positive, indicating, in the couched terms of a scientist, “the MEI reached ENSO-neutral conditions two months ago, and may have transitioned to El Niño conditions”. More discussion at:

  5. Excuse my ignorance but:
    If, as the CAGW crowd keep telling us, the oceans are heating up, then the 1981-2010 ‘average’ is going to be higher than the 1971-2000 ‘average’. Hence, should we now see a down-shift in ALL of the anomalies, as they are being measured against a higher base value?

    If so, are we now all saved from thermageddon?

    Is moving the goalposts like this good or accepted ‘science’? I guess it’s only the fate of mankind that relies on these figures, so it won’t really matter.

    Does it not muddy the waters (pun intended) when we have to ask, before any discussion on SST, “Which base period are you using”?

  6. As A result of globalization we see and learn more about what is happening around the globe. Social media programs show us targets and ways of living that we have never seen before and due to on how to be social, not on how to do social. For some people this has created the need to keep in touch with people all over the world we encounter in our travels. As I’m writing this article, I am a Dutch woman, lives in Switzerland, working on an international project. The technique is only satisfy the need to stay in touch with people, cultures and societies around the world.

  7. Thank you Bob. I always enjoy reading your posts which are very important in the overall scheme of things.

    Since the oceans for all practical purposes drive the climate, the key to understanding the climate is understanding what is going on in the oceans and if there are any genuine changes why these changes are taking place.

    Like in all aspects of climate science, the problem is the data: insufficient spatial coverage to know what is going on, and insufficient duration from which to draw any conclusions. Further, the land-based air temperature data set does not even measure the correct metric, so it is simply incapable of answering whether changes may be taking place in the energy budget. Given the numerous flaws (some of which are fundamental) in that data set, the land-based air temperature record would be best ditched altogether and redesigned from scratch. The problem is that we would have to come back in a few hundred (possibly thousand) years’ time.

    If the so-called scientists practicing in this field were genuine scientists, they would put their hands up and acknowledge that we simply do not possess adequate reliable data from which to draw any firm conclusions.

    In reality, we do not really need to know whether there is truly any shift in climate and, if so, the cause(s) behind any climatic changes (whether natural or manmade); the mature and honest position would be to simply keep matters under review and to adapt to the extent that genuine problems truly manifest themselves.

    Concluding that we need to take steps to mitigate a problem when we do not know whether it truly is a problem, whether we are the cause behind it, or whether we can genuinely do something about it, is simply madness.

  8. Your commenter “emediacoder” has posted the same drivel on at least 2 stories, I would reckon it is a spambot. Keep up the good work!

  9. In a perfect world, the baseline should reflect neutral PDO conditions. But since the PDO flips from the cold state to the warm state quite quickly, then this doesn’t seem feasible. Alternatively, the baseline should reflect either the warm state or the cold state. If they aren’t doing that, then they should continue to use the previous baseline. Moving the goalposts does nothing except muddy the message.

    Of course, that is probably the point.

  10. When they change baselines, do they go back and re-calculate previous anomalies? If not, we will get step changes that screw up any long-term viewpoints. This is my problem with using anomalies at all, as they tend to make small changes look massive (0.7 deg C is a lot when moving around zero, but pretty puny when it is a change from 287.5 K to 288.2 K).

    And surely, with something like the ENSO index which uses a particular level of anomaly as a classification bin, changing the baseline will mean re-binning all previous ENSO events. I appreciate that an interpretation of ENSO is a certain level of sea surface temperature above an average, but if that is the case the average used would have to be continuously adjusted to avoid this re-binning issue.
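    The re-binning concern can be sketched with invented numbers. Everything below is hypothetical (the SSTs and baselines are made up); only the +/-0.5 deg C binning convention for classifying events is real.

```python
# Hypothetical illustration: the same SST series classified against two
# different climatological baselines. A constant baseline shift re-bins
# events near the threshold.
monthly_sst = [26.1, 26.4, 26.9, 27.1, 27.3, 26.8, 26.5, 26.3]

base_old = 26.5   # assumed colder, earlier climatology (made-up value)
base_new = 26.7   # assumed warmer, later climatology (made-up value)

def classify(anomaly, threshold=0.5):
    """Standard +/-0.5 deg C binning for El Nino / La Nina conditions."""
    if anomaly >= threshold:
        return "El Nino"
    if anomaly <= -threshold:
        return "La Nina"
    return "neutral"

bins_old = [classify(sst - base_old) for sst in monthly_sst]
bins_new = [classify(sst - base_new) for sst in monthly_sst]
print(bins_old)
print(bins_new)
```

    With the warmer baseline, one formerly neutral month drops into the La Nina bin and one El Nino month disappears, even though the underlying temperatures never changed.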

  11. Is this another example of Lucia’s Hockey Stick Screening Fallacy? If some of the data does not fit your hypothesis, look only for data that does.

  12. Nice to know that through careful manipulation of the data we don’t know what or who to believe.

  13. Now some of you people are beginning to see why the use of anomalies is a bad idea.
    You never know where you stand.

  14. I have no problem reading anomalies. What is also needed is some metric of dispersion. The warming cultists don’t like this idea, yet many of us appreciate it in another context: the 100-year flood plain concept.

    Getting a summer that is unusually warm, but happens once every 100 years, is similar to having a flood that comes up to your home once every 100 years. You know it is rare, but is to be expected.

  15. Bob Tisdale and others,

    Hi. Have you read my published paper “Separation of a Signal of Interest from a Seasonal Effect in Geophysical Data: I. El Niño/La Niña Phenomenon” ?
    [International Journal of Geosciences, 2011, 2, 414-419, doi:10.4236/ijg.2011.24045, published online November 2011]
    Geophysical signals N of interest are often contained in a parent signal G that also contains a seasonal signal X at a known frequency fX. The general issues associated with identifying N and X and their separation from G are considered for the case where G is the Pacific sea surface temperature monthly data, SST3.4; N is the El Niño/La Niña phenomenon and the seasonal signal X is at a frequency of 1/(12 months). It is shown that the commonly used climatology method of subtracting the average seasonal values of SST3.4 to produce the widely used anomaly index Nino3.4 does not remove the seasonal signal. Furthermore, it is shown that the climatology method will always fail. An alternative method is presented in which a 1/fX (= 12 months) moving average filter F is applied to SST3.4 to generate an El Niño/La Niña index NL that does not contain a seasonal signal. Comparison of NL and Nino3.4 shows, among other things, that estimates of the relative magnitudes of El Niños from index NL agree with observations but estimates from index Nino3.4 do not. These results are applicable to other geophysical measurements.
    As I pointed out, any ENSO index based upon the climatology method (including the one that has been newly introduced) will fail to completely remove the seasonal effect.
    My NL index does the job.
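    The contrast between the two approaches can be sketched on synthetic data. Everything below is invented for illustration and is not the paper’s code: the series is a pure seasonal cycle whose amplitude drifts slowly, which is exactly the situation where a fixed climatology leaves a seasonal residue while a centered 12-month mean removes any signal at 1/(12 months).

```python
# Synthetic illustration: climatology anomalies vs. a 12-month moving
# average on a seasonal cycle with slowly drifting amplitude.
import math

months = range(120)                      # ten years of monthly data

def sst(t):
    amplitude = 1.0 + 0.002 * t          # slowly drifting seasonal amplitude
    return 27.0 + amplitude * math.sin(2 * math.pi * t / 12)

series = [sst(t) for t in months]

# Climatology method: subtract the long-term mean of each calendar month.
climatology = [sum(series[m::12]) / len(series[m::12]) for m in range(12)]
anom_climatology = [x - climatology[t % 12] for t, x in enumerate(series)]

# Moving-average method: a centered 12-month mean filters out any signal
# at a frequency of 1/(12 months), leaving the slowly varying component.
def moving_avg(xs, t, width=12):
    window = xs[t - width // 2 : t + width // 2]
    return sum(window) / len(window)

nl = [moving_avg(series, t) for t in range(6, len(series) - 6)]

# The fixed climatology leaves a growing seasonal residue, while the
# filtered index stays near the 27.0 baseline for this synthetic series.
print(max(abs(a) for a in anom_climatology))
print(max(abs(x - 27.0) for x in nl))
```

    On this toy series the climatology residual is an order of magnitude larger than the departure of the filtered index from its baseline, which is the qualitative point of the paper’s argument.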

    David Douglass
    Dept of Physics
    University of Rochester

  16. David Douglass: Thanks for the heads up on the paper. A couple of questions. The first question relates to one of your references. The abstract concludes with “Comparison of NL and Nino3.4 shows, among other things, that estimates of the relative magnitudes of El Niños from index NL agree with observations but estimates from index Nino3.4 do not.” Your reference for this was a NOAA webpage that showed the relative impacts of the top 10 El Nino events. How did NOAA account for the impact of the eruption of El Chichon on global surface temperatures for the 1982/83 El Nino? I find no mention of volcano or El Chichon in the paper NOAA referenced on the webpage.

    Second, the NINO3.4-based sea surface temperature indices fail to capture the differences between East Pacific and Central Pacific El Nino events. This is understood. The NINO3.4 region-based indices also fail to capture the differences in the strengths of the East Pacific El Nino events. The 1997/98 El Nino event was a much stronger East Pacific El Nino than the 1982/83 El Nino.

    Even when adjusted for the additional month’s duration of the 1982/83 El Nino, the average East Equatorial Pacific sea surface temperatures (outside of the NINO3.4 region) over the course of the 1997/98 El Nino were considerably higher than during the 1982/83 El Nino. The NL(NINO3.4) index seems to account for this without measuring it. How can that be?


  17. Bob,

    In my paper I accepted the Barnston et al. (one of my references) conclusion that region SST3.4 can be used to study ENSO phenomena. Barnston et al. go on to define the NINO3.4 index, which is the average of SST3.4 minus the “climatology,” as a proxy. The implication is that NINO3.4 is free of the annual signal that is in the SST signal. I showed that NINO3.4 still contains a considerable annual signal because the climatology method did not remove it, and, in addition, that no future version of the climatology would work either. My NL index is, in fact, free of the annual effect.

    I used the example of the relative strengths (given by the NOAA reference) of the two El Ninos to show that my NL index gives a better estimate than does the NOAA NINO3.4 index. I do not quite understand your question on this point. Do you disagree with the NOAA statement on the relative magnitudes?

    I considered not including this sentence on the two El Ninos because it might detract attention from the main conclusion about the climatology. Please read my paper with this statement removed and tell me what you think. Perhaps we should continue the discussion of these details via email (mine can be found at my department website).

    David Douglass

  18. David Douglass: I disagree with the NOAA paper IF they did not account for the impact of the El Chichon eruption in their results. It would skew the relative magnitudes of the responses of global temperatures to the two El Nino events. And I find nothing that says they accounted for the volcanic eruption.

    Second, the 1997/98 El Nino WAS much stronger than the 1982/83 El Nino, but the difference in strength was not in the NINO3.4 region. It was east of it. Yet your results account for the difference without sampling where they were different. I’ll ask my question again. How can that be?


  19. Bob

    I cannot defend the NOAA paper on the relative strengths of the two El Ninos and wish that I had not put that statement in the paper. The claim that any ENSO index based upon the climatology method is contaminated with the annual signal stands alone.

    The new paper that I am writing on the most recent [futile] adjustments of the climatology in the definition of the ONI index will show that the index is still not free of the seasonal signal. I will not mention the two El Ninos.

