
I got an email today from Barry Hearn asking me if I knew what was going on with the NCDC data set. It seems that it has started to diverge from GISS, and is now significantly warmer in April 2009.
What is interesting is that while NCDC went up in April, UAH and GISS both went down. RSS went up slightly, but is still much lower in magnitude, about a third that of NCDC. HadCRUT is not out yet.
Here is a look at the most recent NCDC data plotted against GISS data:

Here is a list of April Global Temperature Anomalies for all four major datasets:
NCDC 0.605 °C
GISS 0.440 °C
RSS 0.202 °C
UAH 0.091 °C
It is quite a spread, a whole 0.514°C difference between the highest (NCDC) and the lowest (UAH), and a 0.165°C difference now between GISS and NCDC. We don’t know where HadCRUT stands yet, but it typically comes in somewhere between GISS and RSS values.
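For anyone checking the arithmetic, the spread figures above follow directly from the table:

```python
# Worked check of the April 2009 spread quoted above (values in °C).
april = {"NCDC": 0.605, "GISS": 0.440, "RSS": 0.202, "UAH": 0.091}

spread = max(april.values()) - min(april.values())  # highest minus lowest
giss_ncdc = april["NCDC"] - april["GISS"]           # NCDC vs GISS gap

print(round(spread, 3))     # 0.514
print(round(giss_ncdc, 3))  # 0.165
```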
Source data sets here:
NCDC
ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land_and_ocean.90S.90N.df_1901-2000mean.dat
Previous NCDC version to 2007 here: ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land_and_ocean.90S.90N.df_1961-1990mean.dat
GISS
http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt
RSS
UAH
http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2
While it is well known that GISS has been using an outdated base period (1951-1980) for calculating its anomaly, Barry points out that the two datasets have been tracking together fairly well, which is not unexpected, since GISS uses data from NCDC's USHCN and COOP weather station network, along with GHCN data.

NCDC made the decision last year to update to a century-long base period; this is what Barry Hearn's junkscience.com page said about it at the time:
IMPORTANT NOTE May 16, 2008: It has been brought to our attention that NCDC have switched mean base periods from 1961-90 to 1901-2000. This has no effect on absolute temperature time series with the exception of land based temperatures. The new mean temperature base is unchanged other than land based mean temperatures for December, January and February (the northern hemisphere winter), with each of these months having their historical mean raise 0.1 K.
At this time raising northern winter land based temperatures has not altered published combined annual means but we anticipate this will change and the world will get warmer again (at least on paper, which appears to be about the only place that is true).
So even with this switch a year ago, the data still tracked until recently. Yet all of a sudden, in the past couple of months, NCDC and GISS have started to diverge, and now NCDC is the “warm outlier”.
Maybe Barry’s concern in the second paragraph is coming true.
So what could explain this? At the moment I don’t know. I had initially thought perhaps the switch to USHCN2 might have something to do with this, but that now seems unlikely, since the entire data set would be adjusted, not just a couple of months.
The other possibility is a conversion error or failure somewhere. Being a USA government entity, NCDC works in Fahrenheit on input data, while the other data sets work in Centigrade. Converting NCDC’s April value of 0.605 (assuming it may be °F) to Centigrade results in 0.336°C, which seems more reasonable.
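Since an anomaly is a difference of two temperatures, the 32° offset in the usual F-to-C formula cancels, and only the 5/9 scale factor applies. A quick check of the figure above:

```python
def f_anomaly_to_c(anom_f):
    """Convert a temperature *anomaly* from °F to °C.

    Because an anomaly is a difference of two temperatures, the 32°
    offset cancels and only the 5/9 scale factor applies.
    """
    return anom_f * 5.0 / 9.0

print(round(f_anomaly_to_c(0.605), 3))  # 0.336
```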
Unfortunately, NCDC provides no notes on this particular dataset at the FTP site, and the readme file there is rather weak, saying nothing relevant about units, so it is hard to know what we are dealing with. They have plenty of different datasets here: ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/
What is clear though is that there has been a divergence in the last couple of months, and NCDC’s data went up when other datasets went down.
So, I’m posting this to give our readers a chance to analyze and help solve this puzzle. In the meantime I have an email into NCDC to inquire.
What REALLY needs to happen is that our global temperature data providers need to get on the same base period so that these data sets presented to the public don’t have such significant differences in anomaly.
Standardized reporting of global temperature anomaly data sets would be good for climate science, IMHO.
Re: GK (02:02:17) :
“…what is the difference between GISS, HadCRUT and NCDC ? Do they use data from different monitoring stations ? Do they have differnt methods of making up their data….oops, I mean different methods for analysing their data ? What`s the difference ?”
They all use the same underlying raw surface station temperature data, but have different ways of gridding that data, filling in the gaps and getting to the Global Average Temperature (whatever that is). They also typically employ black box “correction schemes”, which attempt to correct for Urban-Heat-Island and other effects. But Steve McIntyre and others have shown that (at least in the case of the GISS correction) these corrections can actually make recent temperatures hotter, not cooler (i.e. they don’t actually do what they claim to do).
Bottom line: the Global Average Temperature as reported by GISS, HadCRUT and NCDC are many steps removed from raw data.
O.K.
Somebody a few days back posted a chart of just the straight temperatures, NOT anomalies.
Would anybody happen to have a link to that?
Thank you, very much.
[snip OT]
[snip OT]
If that trend continues….. 🙂
[snip OT]
Anthony
You should consider introducing a numbering system so that each comment entry has a number from 1 to whatever. That would make it much easier to refer to previous entries.
REPLY: Yes and I could offer free prizes too, except that this blog is hosted on WordPress.com and I have little control over features. – Anthony
I am sure that someone has a list of the zeros for each reference period. It would be helpful for readers to report comparisons using just one standard reference period, even if the various reporting agencies use their own.
The change in the difference is what is important, regardless of base period.
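The rebaselining these comments describe is just subtracting each series’ own mean over the chosen reference period, which shifts every dataset to average zero over that period without changing any trend. A minimal sketch, with invented numbers rather than real GISS values:

```python
# Sketch: put an anomaly series on a chosen base period by subtracting
# the series' own mean over that period. The input data are made up.
def rebaseline(series_by_year, base_start, base_end):
    base = [v for y, v in series_by_year.items() if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return {y: v - offset for y, v in series_by_year.items()}

# Hypothetical anomalies, rebaselined so 2005-2007 averages to zero:
giss = {2005: 0.62, 2006: 0.58, 2007: 0.66}
shifted = rebaseline(giss, 2005, 2007)
```

Because only a constant offset is subtracted, the differences between any two series, and their trends, are unchanged, which is the commenter’s point: the change in the difference matters regardless of base period.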
I’d be interested in seeing the actual temperatures, NOT anomalies, published on one graph.
http://www.woodfortrees.org/plot/gistemp/from:1999/to:2009/plot/gistemp/from:1999/to:2009/trend/plot/rss/from:1999/to:2009/plot/rss/from:1999/to:2009/trend/plot/uah/from:1999/to:2009/plot/uah/from:1999/to:2009/trend/plot/hadcrut3gl/from:1999/to:2009/plot/hadcrut3gl/from:1999/to:2009/trend
There seems to be a gap developing between the other organizations that measure global temperatures as well. For the period 1999-2009, the last 10 years, here are the least-squares trend line slopes per the Wood for Trees interactive graphs:
GISS 0.01865/year
UAH 0.01212/year
RSS 0.01048/year
CRU 0.00721/year [hadcrut3gl]
COMPOSITE 0.01198/year [WOOD FOR TREES Temperature index]
The interesting note about the HadCRUT3 figure is that it is almost identical to the slope for the period 1900-2009, namely 0.00727/year, so where is all the global warming that AGW science claims?
I have rechecked when all the slopes went negative for the four organizations and found that atmospheric cooling started in 2001, including in the composite of the four data sets. Global oceans started cooling in 2000 [per hadsst2gl]. Northern Hemisphere oceans started cooling in 2002. Southern Hemisphere ocean cooling started in 2000.
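The slopes quoted above can be reproduced with a plain ordinary-least-squares fit, which is what Wood for Trees plots as its trend lines. A minimal sketch, run here on a synthetic series rather than actual data:

```python
# Minimal ordinary-least-squares trend, of the kind Wood for Trees
# draws as a trend line. The series below is synthetic, not real data.
def ols_slope(t, y):
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den  # °C per year if t is in years

# A perfectly linear series recovers its slope exactly:
years = [1999 + m / 12 for m in range(120)]          # ten years, monthly
anoms = [0.2 + 0.0187 * (t - 1999) for t in years]   # 0.0187 °C/yr by construction
print(round(ols_slope(years, anoms), 4))  # 0.0187
```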
Graph the differences. I’ll bet my hat the recent difference is not significant in light of past differences.
Highlander (01:52:08) :
Neil,
While I also agree with that thought, I’d like to suggest that at least two —and perhaps more— methods be employed for obvious reasons.
Having everybody reading from the same script might sound nice, but systemic errors can become difficult to detect as a result.
I took Anthony to simply mean using the same base period, not the same exact “method.” Has GISS ever explained why they have failed to update their base period, which is now two decades out of date? There is, after all, something of an international standard here — the current WMO period for calculating climate normals is 1971-2000. Now, that doesn’t work for the MSU based data (UAH and RSS). So we still have to “adjust” for any land-sea versus satellite comparisons. But there is no reason for the land-sea datasets not to be on the same page, and the WMO 30 year normal is a recognized standard.
REPLY: Exactly. Climate periods are a moving 30 year window. GISS is behind the times and I think I know why. The base period they chose starting in 1951 is a cooler period globally, and thus warm anomalies of the near present would appear higher using a 1951-1980 baseline than a more recent period. GISTEMP gets cited worldwide, often by news organizations and people that really don’t understand the concept of anomaly and base periods, and thus to change it to reflect proper base period reporting would cause the slope of the GISTEMP graph to drop, and look “less alarming”. – Anthony
RW (03:33:53) :
**It is not terribly interesting, in any case, to look in detail into a minor difference between two datasets. Look at the graph in mid-2008 and you’ll see a similar example of one going up and one going down. These things happen. It didn’t mean anything then, and it probably doesn’t mean anything now.**
It is called “details, details”. I disagree, I think it is important. You have to catch errors because they can grow larger. On CA, Gavin quit watching the Superbowl when one of the readers noted an error in far-away Antarctica. This error is larger than Mann’s temperature error of a Millenniuuumm ago.
Want to see something “Hinky”?
Differences between NOAA and Unisys appear to reach new heights!
http://www.klimadebat.dk/forum/klimadebatens-fordrejninger-og-forfalskninger-d12-e556-s200.php#post_12170
Check out the picture where I compare the Pacific.
In NOAA’s version the PDO is almost neutral, but in the Unisys version the PDO is very, very strong. You have a cold Pacific per Unisys and a warm Pacific per NOAA.
Grotesque.
Okay, after re-baselining GISS, this is what I got for GISS-NCDC (black is a 12 month moving average):
http://i23.photobucket.com/albums/b370/gatemaster99/GISSminusNCDC.png
REPLY: We know that GISS and NCDC have tracked in the past, what is of most interest is the last three months. – Anthony
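A sketch of that GISS-minus-NCDC comparison: difference the two anomaly series, then smooth with a 12-month moving average, as in the linked chart. The series here are placeholders, not real data:

```python
# Difference two anomaly series and smooth with a 12-month moving
# average, as in the GISS-minus-NCDC chart. Data here are placeholders.
def moving_average(xs, window=12):
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

giss = [0.1 * i for i in range(24)]          # invented anomaly series
ncdc = [0.1 * i + 0.05 for i in range(24)]   # invented, offset by 0.05

diff = [g - n for g, n in zip(giss, ncdc)]   # GISS minus NCDC, monthly
smooth = moving_average(diff)                # 12-month moving average
```

A divergence like the one discussed in the post would show up as the smoothed difference drifting away from its historical level in the most recent months.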
This business of reporting “anomalies” is biased by definition since the baseline is arbitrary. It gives the misleading impression that the baseline is the target or ideal whereas it is nothing more than a reference point used to make the temperature variations more significant than they really are. Even the use of the word “anomaly” is misleading because it presumes both the premise that the baseline is normal and the conclusion that differences from the baseline are unexpected. A more unbiased designation would be “variation” instead of “anomaly”. It is a good example of how you can make data appear to mean anything you want just by how you present it. And as others have pointed out, comparing anomalies from data sets that use different baselines is even more meaningless.
One of the big problems with this idea of baseline is that it is presented as a line, as if it is something perfect and ideal. I would very much like to see these graphs with lines representing plus or minus two standard deviations of the data that are used to calculate the baseline. This would give a far more useful view of what is being compared. A person would then be able to get a feel for the variation in the data that make up the baseline instead of just the variation of the data compared to an idealized baseline.
Anthony, can you remind us what these baselines are based upon? I’m assuming they are averages over some period of data and not an arbitrary point in time, but I can’t remember.
I notice that Cryosphere Today hasn’t been updating its graphs – “tale of the tape”, NH curve, SH curve (behind 2 wks). I’ve been waiting with bated breath wondering what’s up with that while a foot-plus of snow has fallen across the Canadian prairies, Montana and adjacent northern Ontario for the May long weekend.
My biggest problem with all of these adjustments (TOB, homogeneity, etc.) is that they almost always lead to either more “warming” or support for the Alarmist narrative (e.g. “cool” the 1930s and “warm” the 1990s). One rarely, if ever, sees an adjustment that goes against the Alarmist zeitgeist. One wonders how many adjustments these organizations calculate, and how many are thrown out. It is just a little too coincidental that whether they are adjusting for time of observation or using a different time interval for the mean, we always get a little bit warmer.
Science is rarely, if ever, that simple.
Chris regarding TSI: Look at the scale. From trough to peak is .01 of 1 %
Not to worry.
[SNIP – totally off topic and not one iota of relevance to this thread – people PLEASE stop posting wildly OT stuff, Anthony]
“”” CodeTech (02:00:56) :
Incidentally, and admittedly off topic, [SNIP]
[snip totally pointlessly OT – Anthony]
Re: Frank Lansner (08:05:08) :
“Check out the picture where I compare the Pacific.
In NOAA’s version the PDO is almost neutral, but in the Unisys version the PDO is very, very strong. You have a cold Pacific per Unisys and a warm Pacific per NOAA.”
Nathan Mantua’s page here shows the official monthly CDC value for the PDO index…
http://jisao.washington.edu/pdo/PDO.latest It has kept the PDO strongly negative through April. Note that on the SST graphics you link to, a negative PDO is characterized by a “warm horseshoe” in the North Pacific. That is, a warm tongue extending eastwards from Japan (warmer here = more negative PDO) with cold water south of Alaska and off the British Columbia coast (colder here = more negative). By that definition the NOAA image probably shows a more strongly negative PDO (stronger warm tongue) even though the cool anomalies near the North American coast look weaker.
“”” MarcH (02:17:17) :
Anthony,
We routinely see the graphs without error ranges. Do GISS, UAH etc. report these? Is it possible to show them, to provide a more realistic indication of uncertainty, or at least the error range for monthly values? I guess the range between NCDC and UAH does provide some insight on this.
Cheers
MarcH “””
What would be the purpose of error bands? Presumably, GISStemp anomaly values are calculated using some computer AlGorythm applied to a set of numbers they are provided with. I’m sure the computer can do arithmetic to 10 or 16 digits or whatever you want.
It’s not as if the supplied data are some real physical value of some measured variable.
My computer doesn’t spit out any error bands, every time I use the Windows scientific calculator to process some numbers I type in.