Something hinky this way comes: NCDC data starts diverging from GISS

I got an email today from Barry Hearn asking me if I knew what was going on with the NCDC data set. It seems that it has started to diverge from GISS, and is now significantly warmer for April 2009.

What is interesting is that while NCDC went up in April, UAH and GISS both went down. RSS went up slightly, but is still much lower in magnitude, about 1/3 that of NCDC. HadCRUT is not out yet.

Here is a look at the most recent NCDC data plotted against GISS data:

[Figure: NCDC vs. GISS anomalies (click for larger image)]

Here is a list of April global temperature anomalies for the four major datasets that have reported so far:

NCDC 0.605 °C
GISS 0.440 °C
RSS 0.202 °C
UAH 0.091 °C

It is quite a spread, a whole 0.514°C difference between the highest (NCDC) and the lowest (UAH), and a 0.165°C difference now between GISS and NCDC. We don’t know where HadCRUT stands yet, but it typically comes in somewhere between GISS and RSS values.

Source data sets here:

NCDC

ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land_and_ocean.90S.90N.df_1901-2000mean.dat

Previous NCDC version to 2007 here: ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land_and_ocean.90S.90N.df_1961-1990mean.dat

GISS

http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt

RSS

ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_2.txt

UAH

http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2

While it is well known that GISS has been using an outdated base period (1951-1980) for calculating the anomaly, Barry points out that the two series have been tracking together fairly well. That is not unexpected, since GISS uses data from NCDC’s USHCN and COOP weather station network, along with GHCN data.


NCDC made the decision last year to update to a century-long base period. Here is what Barry Hearn’s junkscience.com page said about it at the time:

IMPORTANT NOTE May 16, 2008: It has been brought to our attention that NCDC have switched mean base periods from 1961-90 to 1901-2000. This has no effect on absolute temperature time series with the exception of land based temperatures. The new mean temperature base is unchanged other than land based mean temperatures for December, January and February (the northern hemisphere winter), with each of these months having their historical mean raised 0.1 K.

At this time raising northern winter land based temperatures has not altered published combined annual means but we anticipate this will change and the world will get warmer again (at least on paper, which appears to be about the only place that is true).

So even with this switch a year ago, the data still tracked until recently. Yet suddenly, in the past couple of months, NCDC and GISS have started to diverge, and NCDC is now the “warm outlier”.

Maybe Barry’s concern in the second paragraph is coming true.

So what could explain this? At the moment I don’t know. I had initially thought perhaps the switch to USHCN2 might have something to do with it, but that now seems unlikely, since that would have adjusted the entire data set, not just the last couple of months.

The other possibility is a conversion error or failure somewhere. Being a US government entity, NCDC works in Fahrenheit on input data, while the other data sets work in Centigrade. Converting NCDC’s April value of 0.605 (assuming it were degrees F) to Centigrade results in 0.336°C, which seems more reasonable.
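For what it’s worth, converting an anomaly between the two scales uses only the 5/9 scale factor; the 32° offset cancels because an anomaly is a difference of two temperatures. A quick check of the arithmetic (plain Python; the 0.605 figure is from the table above):

```python
def f_anomaly_to_c(anomaly_f):
    """Convert a temperature anomaly (a difference, not an absolute
    temperature) from Fahrenheit to Celsius. Only the scale factor
    applies; the 32-degree offset cancels out."""
    return anomaly_f * 5.0 / 9.0

print(round(f_anomaly_to_c(0.605), 3))  # 0.336, the value quoted above
```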

Unfortunately, since NCDC makes no notes whatsoever on the data it provides on the FTP site, it is hard to know what units we are dealing with. They have plenty of different datasets here: ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/

But the readme file is rather weak.

What is clear though is that there has been a divergence in the last couple of months, and NCDC’s data went up when other datasets went down.

So, I’m posting this to give our readers a chance to analyze and help solve this puzzle. In the meantime I have an email into NCDC to inquire.

What REALLY needs to happen is for our global temperature data providers to get on the same base period, so that the data sets presented to the public don’t show such significant differences in anomaly.

Standardized reporting of global temperature anomaly data sets would be good for climate science, IMHO.
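Putting two anomaly series on a common base period is mechanically simple: subtract each series’ own mean over the chosen window, which moves the zero line without changing the shape or trend of either series. A minimal sketch (the array and variable names here are mine, purely for illustration):

```python
import numpy as np

def rebaseline(anomalies, years, base_start, base_end):
    """Re-express an anomaly series on a new base period by
    subtracting its mean over that period. Shape and trend are
    unchanged; only the zero line moves."""
    anomalies = np.asarray(anomalies, dtype=float)
    years = np.asarray(years)
    mask = (years >= base_start) & (years <= base_end)
    return anomalies - anomalies[mask].mean()

# e.g. put GISS and NCDC on a common 1979-2008 base before comparing:
# giss_common = rebaseline(giss, giss_years, 1979, 2008)
# ncdc_common = rebaseline(ncdc, ncdc_years, 1979, 2008)
```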

May 18, 2009 12:30 am

>Standardized reporting of global temperature anomaly data sets would be good for climate science, IMHO.<
Agreed.
Keep up the good work.

Highlander
May 18, 2009 1:52 am

Neil,
While I also agree with that thought, I’d like to suggest that at least two —and perhaps more— methods be employed for obvious reasons.
Having everybody reading from the same script might sound nice, but systemic errors can become difficult to detect as a result.

Richard Heg
May 18, 2009 2:00 am

OT so delete if you wish.
ScienceDaily (May 18, 2009) — A team of UC San Diego-led atmospheric chemistry researchers moved closer to what is considered the “holy grail” of climate change science when it made the first-ever direct detection of biological particles within ice clouds.
http://www.sciencedaily.com/releases/2009/05/090517143334.htm
Another article in the “related stories”
“Evidence Of ‘Rain-making’ Bacteria Discovered In Atmosphere And Snow”
http://www.sciencedaily.com/releases/2008/02/080228174801.htm
Made me think, if bacteria can make clouds and UV kills bacteria and UV changes intensity with the solar cycle, could there be a link?
From the article: “bacteria form little groups on the surface of plants. Wind then sweeps the bacteria into the atmosphere, and ice crystals form around them. Water clumps on to the crystals, making them bigger and bigger. The ice crystals turn into rain and fall to the ground. When precipitation occurs, then, the bacteria have the opportunity to make it back down to the ground. If even one bacterium lands on a plant, it can multiply and form groups, thus causing the cycle to repeat itself.”

CodeTech
May 18, 2009 2:00 am

Yeah… as I was reading this I was thinking, why aren’t these things standardized? I can’t even imagine what my employer’s reaction would be to me releasing poorly documented product. I imagine there are several people working on this stuff, do they not have job descriptions? Or is this being produced for some particular purpose, and releasing it to the public is an afterthought?
Incidentally, and admittedly off topic, the phrase is “all of a sudden”, and it is preferable to use the word “suddenly” instead. Sorry, and no I’m not the language police, but that is one of my very few pet peeves (the other English one being the misuse of “loose” instead of “lose”).

GK
May 18, 2009 2:02 am

Dumb question – I realise GISS, HadCRUT and NCDC all use surface temps from monitoring stations, whereas UAH and RSS are satellite based.
But what is the difference between GISS, HadCRUT and NCDC? Do they use data from different monitoring stations? Do they have different methods of making up their data….oops, I mean different methods for analysing their data? What’s the difference?
I would have thought there is only the need for one group to monitor the world’s surface stations?? I’m sure there’s a good reason?

tallbloke
May 18, 2009 2:07 am

Perhaps it’s all part of a subtle plan to underline the need for a single US climate and weather service, or as I dubbed it, The Ministry of G!$$ and Wind…

May 18, 2009 2:17 am

Anthony,
We routinely see the graphs without error ranges. Do GISS, UAH etc report these? Is it possible to show these to provide a more realistic indication of uncertainty, or at least the error range for monthly values. I guess the range between NCDC and UAH does provide some insight on this.
Cheers
MarcH

SpecialEd
May 18, 2009 2:24 am

>Standardized reporting of global temperature anomaly data sets would be good for climate science, IMHO.<
Or increased public access to raw data and methods so errors could be found.
Publishing this data does not take much these days (online is easy and cheap).
And there are decent free tools available (Octave, Python, others).
There is no reason the analysis could not be done completely in the open with help from interested members of the community.

Konrad
May 18, 2009 2:28 am

“The other possibility is a conversion error or failure somewhere.”
I am guessing this is the probable answer. The possibility that GISS is no longer the warmest temperature anomaly data set seems, on previous evidence, to be unlikely. Having said that, I am not too concerned about divergence in data sets using surface station data. I believe that Anthony’s surface stations work shows conclusively that this data source is irrevocably compromised for stations rated below CRN-2. I personally pay more attention to satellite data, specifically UAH. UAH has greater coverage than RSS and better satellite altitude maintenance.

SpecialEd
May 18, 2009 2:29 am

Public access to simple tools like Wood for Trees is nice, but you still don’t have access to full data records. You get a few records to compare and trend with some limited tools, but you are still limited.
BTW, check out the new hockey stick at WfT:
http://tinyurl.com/cjguzg

Sven
May 18, 2009 2:41 am

“while NCDC went up in April, RSS, UAH, and GISS all went down”
No they did not, GISS went slightly down (0.47 to 0.44), UAH went sharply down (0.206 to 0.091), but RSS went slightly up (0.194 to 0.202)
REPLY: You are correct, this is what I get for writing after midnight. Fixed, thank you. – Anthony

May 18, 2009 3:20 am

It is quite a spread, a whole 0.514°C difference between the highest (NCDC) and the lowest (UAH), and a 0.165°C difference now between GISS and NCDC. We don’t know where HadCRUT stands yet, but it typically comes in somewhere between GISS and RSS values.
Anthony (or whoever)
You simply can’t make a direct comparison between the anomalies. They all use different base periods. For example the GISS anomaly relative to the satellite base period (1979-1998) is ~+0.20 deg, i.e. about the same as RSS.
Re: GISS divergence/convergence
GISS seems to have dropped back towards the pack (Hadley, UAH, RSS) since the arctic temperatures started to cool. This makes sense, as GISS estimates the arctic by extrapolation.
This doesn’t explain the NCDC anomaly, though.

RW
May 18, 2009 3:33 am

“It is quite a spread, a whole 0.514°C difference between the highest (NCDC) and the lowest (UAH), and a 0.165°C difference now between GISS and NCDC”
This statement is meaningless. The numbers are not comparable because they are anomalies relative to different base periods.
“While it is well known that GISS has been using an outdated base period (1951-1980) for calculating the anomaly…”
This statement is also meaningless. How can any base period be ‘outdated’? If you want to renormalise to a different one, it’s trivial to do so.
It is not terribly interesting, in any case, to look in detail into a minor difference between two datasets. Look at the graph in mid-2008 and you’ll see a similar example of one going up and one going down. These things happen. It didn’t mean anything then, and it probably doesn’t mean anything now.

Reid
May 18, 2009 3:58 am

The NCDC is detecting a leading hot air mass from the Copenhagen Alarmfest. The other temperature series are not as politically sensitive.

May 18, 2009 4:13 am

Some years ago, GISS started to diverge from the other 4 data sets, and became warmer. There were two possible explanations for this: GISS was cooking the books, or, by extrapolating to the poles, GISS was measuring a significant difference. I always hoped the latter was correct, and GISS were doing proper science. Now the data indicates that the poles are cooling. So what is being observed is a return of the GISS data to where the other data sets have always been. The key issue to me is, however, quite different. RealClimate and the warmaholics have nailed their colors to the mast in using GISS data to support the idea that global temperatures are still rising. The GISS data was the last of the 5 to show that global temperature anomalies were falling. Now it is going to be much more difficult for the warmaholics to explain that temperature anomalies are really rising, when they have been falling for several years.

May 18, 2009 4:22 am

Anthony: As you’re aware, GISS uses NCDC’s OI.v2 (Optimum Interpolation) SST anomaly data from December 1981 to present, while the NCDC uses its ERSST (Extended Reconstructed) data. But there was a recent change in one. For its global surface temperature anomaly product, the NCDC recently switched versions of SST anomaly data from ERSST.v2 to ERSST.v3b.
There are differences between OI.v2 (used by GISS) and ERSST.v3b (used by NCDC). Here’s a comparative graph of the two global SST anomaly datasets from November 1981 to present.
http://i41.tinypic.com/vdz3h3.jpg
And here’s a graph of the difference (ERSST.v3b MINUS OI.v2 SST).
http://i44.tinypic.com/jt5ldc.jpg
Some of the difference should be due to the use of satellite data. OI.v2 uses satellite, buoy and ship sampling. ERSST.v3 (no “b” in the suffix) originally used the satellite data, primarily to supplement buoy and ship data where they’re sparse in the high latitudes of the Southern Hemisphere (in other words, the Southern Ocean). Users must have complained about the early timing of the drop in the Southern Ocean data, because the NCDC deleted the satellite data in the recently released ERSST.v3b data.
I wrote a quick post on the ERSST.v3 versus ERSST.v3b data a couple of months ago, for those who are interested:
http://bobtisdale.blogspot.com/2008/12/unheralded-changes-in-ersstv3-data.html
And two more posts that take a closer look at the ERSST.v3b data:
http://bobtisdale.blogspot.com/2009/03/knmi-added-ersstv3b-data-to-climate.html
http://bobtisdale.blogspot.com/2009/04/closer-look-at-ersstv3b-southern-ocean.html
I’m trying to finish a post that I’ve been working on for over a week (Why Did the Ocean Heat Content of the Atlantic Rise So Fast? Its Rise More Than Doubled Those of the Indian and Pacific Oceans). It’s a follow-up to my look at the Levitus et al (2009) OHC data on a per ocean basis.
http://bobtisdale.blogspot.com/2009/05/levitus-et-al-2009-ocean-heat-content.html
But when I’m done with that post, I’ll work on one that examines the differences between the OI.v2 and ERSST.v3b SST anomaly datasets. I’ll email you when I’m done with it.
Regards
REPLY: Thanks Bob, I’ll look forward to your analysis and what it may reveal. – Anthony

Sam the Skeptic
May 18, 2009 4:36 am

Stupid non-scientist question!!
In my young days I had a boss who taught us all that the business bottom line was all about cash. Percentages were a guide but what mattered to him at the end of the day was having the money. “I can’t buy food with percentages,” he used to say.
And I can’t plan next week’s trips to the coast or wherever on “anomalies”. I need to know what the real temperature is or is going to be but all I get is a lot of very clever people choosing a set of figures that suits them and then trying to tell me that this month is a small fraction of a degree more or less than the small fraction of a degree more than it was last month or last year.
It seems we don’t know what the earth’s actual temperature is or we have no way of measuring what it is (which is something that even Hansen admits) or even what it ought to be and we have five different organisations telling us five different things based on five different sets of figures and none of those tell us anything of practical use about the state of the climate or the weather.
Where am I going wrong?
I only ask because it would be nice to know.

Richard111
May 18, 2009 4:58 am

Good thing these “adjustments” only apply to the paperwork and not to the real world.

May 18, 2009 5:00 am

I prefer having differences between the datasets. If you understand the differences and the reasons for them, you can determine why one is used for a particular study and not the others.
Example, global ERSST (NCDC) data (all versions) has a dip and rebound from the 1880s to 1940s, where global HADSST data does not. HADSST has a more continuous rise. In other words, there’s no significant difference in the global ERSST data values in 1880 and in 1976. Global ERSST anomalies dropped and came back up–no big deal. So when climate modellers or the IPCC try to reproduce the global temperature record, they use Hadley Centre data because it doesn’t include that inconvenient dip and rebound and is therefore easier to “duplicate” with contrived forcings.
Also, to standardize the GISS, Hadley Centre and NCDC products, you’d have to get them to agree on infilling, smoothing, etc. Fat chance of that.

Bill Illis
May 18, 2009 5:23 am

I have a version from a few months ago and there is almost no change between the older and newer series until the last decade or so.
Then the changes seem to grow a little on an increasing trendline (0.003C per decade) versus the older series. The changes are still small with the biggest positive change being +0.062 in Jan 2006 and the biggest reduction being -0.015C in Nov 2008.
As these series are updated over time with new temps and the discovery of errors, this kind of change would be expected, although these changes are a little larger than they should be and there shouldn’t be an increasing trendline. (Theoretically, of course, but this is climate science, and the majority of adjustments made to the temperature record have been positive to date.)
But it does look like something might be becoming unstable, as the swings are rising in amplitude beyond what should be occurring.

Steve Fitzpatrick
May 18, 2009 5:29 am

Anthony,
Off topic. I would like to offer you a guest post. How would I go about this?

Jennyinoz
May 18, 2009 5:34 am

Reid,
It’s either that or they are using measurements from instruments located in Washington DC. All that hot air is bound to cause anomalies and an increase in NCDC readings.

MartinGAtkins
May 18, 2009 5:35 am

Sven (02:41:37) :

“while NCDC went up in April, RSS, UAH, and GISS all went down”
No they did not, GISS went slightly down (0.47 to 0.44), UAH went sharply down (0.206 to 0.091), but RSS went slightly up (0.194 to 0.202)

Confirmed. The article needs adjusting.

Dave
May 18, 2009 5:43 am

Highlander,
I have to agree with you. Better to allow different measurements of temperature to be presented, using different methods and systems to derive their results. That way it does become much easier to spot and analyse systemic problems.
It also encourages a healthy debate and much needed scepticism.
So in this case there is every probability that NCDC have made a mistake somewhere.
There remains though, a small possibility, that everyone else has it wrong.
Best to go back to the raw data and check.

May 18, 2009 5:43 am

The difference between 1880 and 2009 is 0.050 to 0.150 = 0.100, a tenth of a degree!! That is NANO GLOBAL WARMING!!!!

realitycheck
May 18, 2009 5:56 am

Re: GK (02:02:17) :
“…what is the difference between GISS, HadCRUT and NCDC ? Do they use data from different monitoring stations ? Do they have differnt methods of making up their data….oops, I mean different methods for analysing their data ? What`s the difference ?”
They all use the same underlying raw surface station temperature data, but have different ways of gridding that data, filling in the gaps and getting to the Global Average Temperature (whatever that is). They also typically employ black box “correction schemes”, which attempt to correct for Urban-Heat-Island and other effects. But Steve McIntyre and others have shown that (at least in the case of the GISS correction) these corrections can actually make recent temperatures hotter, not cooler (i.e. they don’t actually do what they claim to do).
Bottom line: the Global Average Temperature as reported by GISS, HadCRUT and NCDC are many steps removed from raw data.

Pofarmer
May 18, 2009 6:00 am

O.K.
Somebody a few days back posted a chart of just the straight temperatures, NOT anomalies.
Would anybody happen to have a link to that?????
Thank you, very much.

Cathy
May 18, 2009 6:01 am

[snip OT]

Chris
May 18, 2009 6:12 am

[snip OT]

Chris
May 18, 2009 6:13 am

If that trend continues….. 🙂

John Peter
May 18, 2009 6:14 am

[snip OT]

John Peter
May 18, 2009 6:16 am

Anthony
You should consider introducing a numbering system so that each comment has a number from 1 to whatever. That would make it much easier to refer to previous entries.
REPLY: Yes and I could offer free prizes too, except that this blog is hosted on WordPress.com and I have little control over features. – Anthony

deadwood
May 18, 2009 6:34 am

I am sure that someone has a list of the zeros for each reference period. It would be helpful for readers if comparisons were reported using just one standard reference period, even if the various reporting agencies use their own.

Steve Keohane
May 18, 2009 6:35 am

The change in the difference is what is important, regardless of base period.

John W.
May 18, 2009 6:48 am

I’d be interested in seeing the actual temperatures, NOT anomalies, published on one graph.

matt v.
May 18, 2009 7:17 am

http://www.woodfortrees.org/plot/gistemp/from:1999/to:2009/plot/gistemp/from:1999/to:2009/trend/plot/rss/from:1999/to:2009/plot/rss/from:1999/to:2009/trend/plot/uah/from:1999/to:2009/plot/uah/from:1999/to:2009/trend/plot/hadcrut3gl/from:1999/to:2009/plot/hadcrut3gl/from:1999/to:2009/trend
There seems to be a gap developing between the other organizations that measure global temperatures as well. For the period 1999-2009, or the last 10 years, here are the least squares trend line slopes per the WOOD FOR TREES interactive graphs:
GISS 0.01865/year
UAH 0.01212/year
RSS 0.01048/year
CRU 0.00721/year [hadcrut3gl]
COMPOSITE 0.01198/year [WOOD FOR TREES temperature index]
The interesting note about the HadCRUT3 figure is that it is almost identical to the slope for the period 1900-2009, namely 0.00727/year, so where is all the global warming that AGW science claims?
I have rechecked when all the slopes went negative for the four organizations and found that atmospheric cooling started in 2001, including in the composite of the four data sets. Global oceans started cooling in 2000 [per hadsst2gl]. Northern Hemisphere oceans started cooling in 2002. Southern Hemisphere ocean cooling started in 2000.

timetochooseagain
May 18, 2009 7:22 am

Graph the differences. I’ll bet my hat the recent difference is not significant in light of past differences.

Basil
Editor
May 18, 2009 8:02 am

Highlander (01:52:08) :
Neil,
While I also agree with that thought, I’d like to suggest that at least two —and perhaps more— methods be employed for obvious reasons.
Having everybody reading from the same script might sound nice, but systemic errors can become difficult to detect as a result.

I took Anthony to simply mean using the same base period, not the same exact “method.” Has GISS ever explained why they have failed to update their base period, which is now two decades out of date? There is, after all, something of an international standard here — the current WMO period for calculating climate normals is 1971-2000. Now, that doesn’t work for the MSU based data (UAH and RSS). So we still have to “adjust” for any land-sea versus satellite comparisons. But there is no reason for the land-sea datasets not to be on the same page, and the WMO 30 year normal is a recognized standard.
REPLY: Exactly. Climate periods are a moving 30 year window. GISS is behind the times and I think I know why. The base period they chose starting in 1951 is a cooler period globally, and thus warm anomalies of the near present would appear higher using a 1951-1980 baseline than a more recent period. GISTEMP gets cited worldwide, often by news organizations and people that really don’t understand the concept of anomaly and base periods, and thus to change it to reflect proper base period reporting would cause the slope of the GISTEMP graph to drop, and look “less alarming”. – Anthony

Gerald Machnee
May 18, 2009 8:03 am

RW (03:33:53) :
**It is not terribly interesting, in any case, to look in detail into a minor difference between two datasets. Look at the graph in mid-2008 and you’ll see a similar example of one going up and one going down. These things happen. It didn’t mean anything then, and it probably doesn’t mean anything now.**
It is called “details, details”. I disagree, I think it is important. You have to catch errors because they can grow larger. On CA, Gavin quit watching the Superbowl when one of the readers noted an error in faraway Antarctica. This error is larger than Mann’s temperature error of a Millenniuuumm ago.

Frank Lansner
May 18, 2009 8:05 am

Want to see something “Hinky”?
Differences between NOAA and Unisys appear to reach new heights!
http://www.klimadebat.dk/forum/klimadebatens-fordrejninger-og-forfalskninger-d12-e556-s200.php#post_12170
Check out the picture where I compare the Pacific.
In NOAA’s version the PDO is almost neutral, but in the Unisys version the PDO is very, very strong. You have a cold Pacific from Unisys and a warm Pacific from NOAA.
Grotesque!

timetochooseagain
May 18, 2009 8:12 am

Okay, after re-baselining GISS, this is what I got for GISS-NCDC (black is a 12 month moving average):
http://i23.photobucket.com/albums/b370/gatemaster99/GISSminusNCDC.png
REPLY: We know that GISS and NCDC have tracked in the past, what is of most interest is the last three months. – Anthony

Richard Wright
May 18, 2009 8:16 am

This business of reporting “anomalies” is biased by definition since the baseline is arbitrary. It gives the misleading impression that the baseline is the target or ideal whereas it is nothing more than a reference point used to make the temperature variations more significant than they really are. Even the use of the word “anomaly” is misleading because it presumes both the premise that the baseline is normal and the conclusion that differences from the baseline are unexpected. A more unbiased designation would be “variation” instead of “anomaly”. It is a good example of how you can make data appear to mean anything you want just by how you present it. And as others have pointed out, comparing anomalies from data sets that use different baselines is even more meaningless.
One of the big problems with this idea of baseline is that it is presented as a line, as if it is something perfect and ideal. I would very much like to see these graphs with lines representing plus or minus two standard deviations of the data that are used to calculate the baseline. This would give a far more useful view of what is being compared. A person would then be able to get a feel for the variation in the data that make up the baseline instead of just the variation of the data compared to an idealized baseline.
Anthony, can you remind us what these baselines are based upon? I’m assuming they are the averages of some period of data and not an arbitrary point in time, but I can’t remember.
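One way to get the view Richard is asking for: compute the mean and spread of the base-period data, then plot the series against ±2 standard deviation bands around the baseline. A rough sketch (NumPy arrays and names are assumed here, not any agency’s actual code):

```python
import numpy as np
import matplotlib.pyplot as plt

def anomalies_with_bands(monthly_temps, base_mask):
    """Return anomalies relative to the base-period mean, plus the
    base-period standard deviation for drawing +/-2 sigma bands."""
    base = monthly_temps[base_mask]
    mu, sigma = base.mean(), base.std(ddof=1)
    return monthly_temps - mu, sigma

# anoms, sigma = anomalies_with_bands(temps, mask)
# plt.plot(anoms)
# plt.axhline(2 * sigma, linestyle="--")
# plt.axhline(-2 * sigma, linestyle="--")
# plt.show()
```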

May 18, 2009 8:21 am

I notice that Cryosphere Today hasn’t been updating its graphs – “tail of the tape”, NH curve, SH curve (behind 2 wks). I’ve been waiting with bated breath wondering what’s up with that, while a foot-plus of snow has fallen across the Canadian prairies, Montana and adjacent northern Ontario for the May long weekend.

JP
May 18, 2009 8:27 am

My biggest problem with all of these adjustments (TOB, homogeneity, etc.) is that they almost always lead to either more “warming”, or support the Alarmists’ narrative (e.g. “cool” the 1930s, and “warm” the 1990s). One rarely, if ever, sees an adjustment that goes against the Alarmists’ zeitgeist. One wonders how many adjustments these organizations calculate, and how many are thrown out. It is just a little too coincidental that whether they are adjusting for time of observation, or using a different time interval for the mean, we always get a little bit warmer.
Science is rarely, if ever, that simple.

jack mosevich
May 18, 2009 8:37 am

Chris, regarding TSI: look at the scale. From trough to peak is .01 of 1%.
Not to worry.

Ed Scott
May 18, 2009 8:39 am

[SNIP – totally off topic and not one iota of relevance to this thread – people PLEASE stop posting wildly OT stuff, Anthony]

George E. Smith
May 18, 2009 8:56 am

“”” CodeTech (02:00:56) :
Incidentally, and admittedly off topic, [SNIP]

Ed Scott
May 18, 2009 8:57 am

[snip totally pointlessly OT – Anthony]

realitycheck
May 18, 2009 9:05 am

Re: Frank Lansner (08:05:08) :
“Check out the picture where I compare the pacific.
In NOAA´s version the PDO is almost neutral, but in the Unisys version the PDO is very very strong. You have a cold pacifis by Unisys and a warm pacific by NOAA.”
Nathan Mantua’s page here shows the official monthly CDC value for the PDO index…
http://jisao.washington.edu/pdo/PDO.latest It has kept the PDO strongly negative through April. Note that on the SST graphics you link to, a negative PDO is characterized by a “warm horseshoe” in the North Pacific. That is, a warm tongue extending eastwards from Japan (warmer here = more negative PDO) with cold water south of Alaska and off the British Columbia coast (colder here = more negative). By that definition the NOAA image probably shows a more strongly negative PDO (stronger warm tongue) even though the cool anomalies near the North American coast look weaker.

George E. Smith
May 18, 2009 9:05 am

“”” MarcH (02:17:17) :
Anthony,
We routinely see the graphs without error ranges. Do GISS, UAH etc report these? Is it possible to show these to provide a more realistic indication of uncertainty, or at least the error range for monthly values. I guess the range between NCDC and UAH does provide some insight on this.
Cheers
MarcH “””
What would be the purpose of error bands ? Presumably, GISStemp anomaly values are calculated using some computer AlGorythm applied to a set of numbers they are provided with. I’m sure the computer can do arithmetic to 10 or 16 digits or whatever you want.
It’s not as if the supplied data are some real physical value of some measured variable.
My computer doesn’t spit out any error bands, every time I use the Windows scientific calculator to process some numbers I type in.

David L. Hagen
May 18, 2009 9:08 am

Anthony
I endorse John Peter’s recommendation on numbering.
I have not used any, but there may be some WordPress numbering plugins or methods to do it. e.g.
K.S. suggests it is possible: “Yes you can definitely do this. The easiest way to do it would be to look into themes that have numbered comments, or a plugin that does the same. Otherwise you could manually adjust your comments.php page.”
Greg’s Threaded Comment Numbering: Number Your WordPress 2.7 Comments the Easy Way claims to add numbering.
WordPress @ T2 gives code examples of the steps involved.
ClimateAudit provides comment numbers, and I thought it was using WordPress.
Maybe some reader has the expertise/experience to help.

REPLY: I’ve dealt with this issue before; the answer is simply “no”. I can’t choose a theme like CA uses, and I can’t easily make my own customizations. wordpress.com does not offer these things unless I move WUWT to either a VIP server at $600 per month, or a separate external server, which I don’t have the time nor inclination to manage.
The same goes for the other most popular request, comment preview, though I’ve asked for it from WP.com. I use this free service gratefully, but it has limitations that I have no control over. – Anthony

MartinGAtkins
May 18, 2009 9:11 am

GISS Surface Temperature Analysis
Updated Jan. 13, 2009, with calendar year data.
Given our expectation of the next El Niño beginning in 2009 or 2010, it still seems likely that a new global temperature record will be set within the next 1-2 years, despite the moderate negative effect of the reduced solar irradiance.
This has probably been covered by an article on WUWT but I probably missed it. It will be interesting if it unfolds thus.
http://data.giss.nasa.gov/gistemp/2008/

don rayburg
May 18, 2009 9:12 am

I smell a rat or rats! I simply don’t trust any governmental agency anymore, for good reason I think!

RW
May 18, 2009 9:21 am

“GISS is behind the times…”
This is simply not meaningful. The choice of reference period is completely irrelevant to any analysis. You can trivially renormalise to whatever period you choose. If their reference period was 1880-1910, or if it was 1979-2009, it would make absolutely no difference to anything.
“The base period they chose starting in 1951 is a cooler period globally”
Not really. Cooler than today, that’s for sure. It’s a thing called ‘global warming’. Not cooler than times before it. In fact, the mean anomaly during the base period (zero, by definition) and during the 20th century (-0.02°C) are almost equal.
“to change it to reflect proper base period reporting would cause the slope of the GISTEMP graph to drop, and look “less alarming”.”
There is no ‘proper’ base period, and changing it would make no difference at all to the slope of the graph. It would make no difference to anything.

George E. Smith
May 18, 2009 9:22 am

“”” Sam the Skeptic (04:36:33) :
Stupid non-scientist question!!
In my young days I had a boss who taught us all that the business bottom line was all about cash. Percentages were a guide but what mattered to him at the end of the day was having the money. “I can’t buy food with percentages,” he used to say.
And I can’t plan next week’s trips to the coast or wherever on “anomalies”. I need to know what the real temperature is or is going to be but all I get is a lot of very clever people choosing a set of figures that suits them and then trying to tell me that this month is a small fraction of a degree more or less than the small fraction of a degree more than it was last month or last year.
It seems we don’t know what the earth’s actual temperature is or we have no way of measuring what it is (which is something that even Hansen admits) or even what it ought to be and we have five different organisations telling us five different things based on five different sets of figures and none of those tell us anything of practical use about the state of the climate or the weather.
Where am I going wrong?
I only ask because it would be nice to know. “””
Well more importantly Sam; even if we could measure the real surface or lower troposphere mean global temperature; it would tell us exactly nothing about climate. It won’t even tell us whether the earth is heating up or cooling down; because energy transport is not simply related to temperature. Some thermal processes are linear with temperature differences; while others may follow fourth power or fifth power of temperature. There are no temperature differences in an average anomaly number. Then the actual energy processes are very dependent on location and terrain; not to mention the local biology.
So as RW said above; it is not very interesting to talk about minor small differences in different data sets; and even less so to report in millidegrees of some unspecified scale when global surface temperature can cover 150 deg C total range at some instant of time.
I heard a report on this morning’s news about artificial turf being used as an Astroturf substitute; made out of old tire rubber, and the report claims that the surface temperature on that stuff reaches 140 deg F; which is +60 deg C. So I would expect real ground surfaces to exceed that in the hottest places.

Jurinko
May 18, 2009 9:27 am

Have you read the Monckton´s report? It says something about NCDC..
http://scienceandpublicpolicy.org/images/stories/papers/reprint/markey_and_barton_letter.pdf
Mann Hockey stick: disgusting
GISTEMP record manipulated back in time: outrageous
NCDC boss Tom Karl lying at Senate Committee hearing: priceless

steven mosher
May 18, 2009 9:29 am

Anthony,
A simple check says there is nothing here. It’s trivial to rebaseline the series to the same period, but you don’t even have to do that. An anomaly series is merely the raw series shifted up or down by a constant, and what you are looking for is some trend in the difference of the anomaly series.
From 1880 to 2009 the difference between GISS and NCDC looks like this:
MAXIMUM = 0.2298
MINIMUM = -0.3437
AVERAGE = -0.030139369
STANDARD DEV = 0.073151392
SLOPE of GISS-NCDC = 3.82089E-05
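For anyone wanting to reproduce this sort of check, the diagnostics are straightforward once the two series are aligned month-for-month. A minimal sketch (NumPy; the input arrays are assumed, not any particular agency’s files):

```python
import numpy as np

def diff_stats(a, b):
    """Summarize the difference between two aligned monthly anomaly
    series: extremes, mean, spread, and the trend of the difference."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    months = np.arange(d.size)
    slope = np.polyfit(months, d, 1)[0]  # per-month trend of the difference
    return {
        "max": d.max(),
        "min": d.min(),
        "mean": d.mean(),
        "stdev": d.std(ddof=1),
        "slope_per_month": slope,
    }

# stats = diff_stats(giss_monthly, ncdc_monthly)
```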

May 18, 2009 9:39 am

Hmmm! I may have to retract my earlier comment. I’m having trouble identifying the SST anomaly data used by NCDC in the global temperature product. Their Global Surface Temperature Anomalies webpage says it’s SR05 data:
http://www.ncdc.noaa.gov/oa/climate/research/anomalies/index.php
But the SR05 webpage says it’s experimental and there’s no data available through the links:
http://www.ncdc.noaa.gov/oa/climate/research/anomalies/anomalies-experimental.html#anomalies
Somewhere along the line I read or was given the impression that they were now using ERSST.v3b data, and that they had discontinued updating their ERSST.v2 data, though I can’t find any reference to either. I’ll keep looking.

crosspatch
May 18, 2009 9:40 am

One difference with GISS is that they don’t actually use any polar data measurements. The temperature reported in the polar regions is extrapolated from Hansen’s own models according to the last information I have. One source of a difference could be actual readings vs. calculated fill data.

May 18, 2009 9:46 am

GISS is out of step with the real world: click
Everyone else shows declining temperatures. When GISS “adjusts” their raw data, they show non-existent warming. Not surprising, with Hansen at the controls.

crosspatch
May 18, 2009 9:48 am

Another possibly interesting source of difference could be the GISS adjustment mechanism. GISS uses averages to fill in missing values from the past. When temperatures are warming, it means that with each monthly data set, the past is also adjusted warmer when calculating fill values from average. The current warmer sample increases the “average” and so that value is plugged into the past.
When temperatures are cooling, the reverse is true. The average declines and so calculated fill values representing past temperatures decline. So if this month cools, any missing values for last month that were calculated from averages also cool a little making the divergence greater this month than it was last month. The GISS adjustment mechanism is a positive feedback of sorts that causes changes today to “change” the past. In a period of cooling temperatures, we should see a cooling of all past temperatures where missing values are calculated from a declining average and incorporated into the gridded averages.
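The feedback crosspatch describes is easy to demonstrate with a toy example: if a gap is filled with the mean of the observed values, every new observation retroactively changes the filled “past” value. (Illustrative only; this is not the actual GISS procedure, and the numbers are made up.)

```python
import numpy as np

def fill_missing_with_mean(values):
    """Fill gaps (NaNs) with the mean of the observed values. Each
    new month changes that mean, and therefore changes every filled
    value in the past."""
    v = np.array(values, dtype=float)
    v[np.isnan(v)] = np.nanmean(values)
    return v

series = [0.40, np.nan, 0.45, 0.50]     # one missing month
print(fill_missing_with_mean(series))    # gap filled with 0.45
series.append(0.20)                      # a cooler new month arrives
print(fill_missing_with_mean(series))    # the same gap now fills at 0.3875
```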

Basil
Editor
May 18, 2009 9:49 am

realitycheck (09:05:05) :
Re: Frank Lansner (08:05:08) :
“Check out the picture where I compare the Pacific.
In NOAA’s version the PDO is almost neutral, but in the Unisys version the PDO is very, very strong. You have a cold Pacific from Unisys and a warm Pacific from NOAA.”
Nathan Mantua’s page here shows the official monthly CDC value for the PDO index…
http://jisao.washington.edu/pdo/PDO.latest It has kept the PDO strongly negative through April.

I’m glad someone has brought this up — that the canonical measure of the PDO is still strongly negative, and the latest — April — even went more negative just a bit.
I, too, have been pondering the difference between the Unisys page and NOAA’s image. Again, as in GISS vs. NCDC, or whatever, we have to ask what the base period is. I’m going to look into this further, as far as NOAA vs. Unisys is concerned. I suspect that Unisys is using a standard WMO climatological baseline of 1971 to 2000. But NOAA? We’ll see.

steven mosher
May 18, 2009 9:53 am

And…. if you rebaseline GISS to the same time period as NCDC, you get the following for GISS-NCDC:
MAXIMUM = 0.2492417
MINIMUM = -0.3242583
AVERAGE = -0.010697669
STDEV = 0.073151392
SLOPE of GISS-NCDC = 3.82089E-05
For April 2009, on a rebaselined basis, GISS-NCDC = -0.1455583.
Ya, NCDC is warmer. It’s a 2-sigma event, nothing to write home about, and the trend of GISS-NCDC is still zero. Now, this is just a simple test, so one might want to investigate whether there is some pattern to the differences or time dependencies. And it’s kinda interesting that the negative tail is longer.

crosspatch
May 18, 2009 9:55 am

And my comment immediately preceding can sort of be checked, to see if last month’s divergence from NOAA is greater this month than it was last month.
In other words, did the amount of the previous month’s divergence increase this month?

Sam the Skeptic
May 18, 2009 10:00 am

George E. Smith (09:22:41)
Thanks for that reply. I just about understand the science of that (thanks to this site, mainly) but it only makes things more puzzling for the layman.
I know that an anomaly of +.02 compared with last year’s +.025 means that things are cooler, but only if the period of comparison is the same; if the period of comparison is changing, then surely the figures become meaningless.
You can pick what figures you like to prove what you like and that goes for the skeptics as much as for the alarmists.
I also understand that temperature is not the be-all and end-all of the argument but until the man-in-the-street understands, for example, that heat and temperature aren’t the same thing we’re going to keep on getting suckered by the warm-mongers and their doomsaying.

MartinGAtkins
May 18, 2009 10:13 am

I noticed some time ago that the NCDC monthly data for land and ocean has adjustments that often go back many years. I matched up their latest offering with my plotting data and the earliest change goes back to 12/1999.
It’s by a trivial amount -0.0001, but I’m mystified as to why they do it. This may be true of their other data sets but I haven’t had time to check.

May 18, 2009 10:14 am

I thought that was “Something Wicki This Way Comes”.

May 18, 2009 10:17 am

[snip OT]

Mike Bryant
May 18, 2009 10:19 am

Here are monthly temperatures … not anomalies:
http://junkscience.com/GMT/NCDC_absolute.gif
I think I stole this from Smokey…

May 18, 2009 10:24 am

Anthony,
This divergence is not particularly noteworthy (and not beyond 2 standard deviations in the variance between the two data series). There are months where they diverge just as much (or more) in 2007, 2005, 2004, 2003, 2002, 2001, 2000, etc. Your initial graph is slightly misleading because you do not put both series on a common baseline.
Here is 2000 to present standardized on a 1979-2008 baseline:
http://i81.photobucket.com/albums/j237/hausfath/Picture55.png

John F. Hultquist
May 18, 2009 10:31 am

First: This has been a good post with good comments. Thanks to all of you with info and insight.
Second: It took me just a few reads here on WUWT to feel comfortable with the limitations some are complaining about. I now even enjoy it when someone corrects the grammar and spelling (even their own) and sometimes I even learn something new. And on some other slightly more complicated sites I still don’t know what is happening when I hit the submit button. Simple is better – and cheaper. Everyone just relax.
Third: OT – Mitch Daniels, Republican governor of Indiana had a great opinion piece in Friday’s WSJ (May 15) which they put under the title “Indiana Says ‘No Thanks’ to Cap and Trade.”

May 18, 2009 10:35 am

An interesting note on the importance of using a common baseline.
Here is my version of Anthony’s graph in the original post. In it, the current divergence seems unusual, at least since 2007:
http://i81.photobucket.com/albums/j237/hausfath/Picture56.png
Here is a correction of Anthony’s graph that puts both data series on the same baseline (1979-2008 in this case). Now we see that the current divergence is smaller than a divergence that occurred in 2007:
http://i81.photobucket.com/albums/j237/hausfath/Picture57.png

REPLY: Thanks for doing that. The real question is, which of these is the correct global temperature anomaly for April?:
NCDC 0.605 °C
GISS 0.440 °C
RSS 0.202 °C
UAH 0.091 °C
HadCRUT ?? (not yet published)
The general public has not the skill to discern the nuances of baselines and methods. – Anthony

Basil
Editor
May 18, 2009 10:43 am

Frank Lansner (08:05:08) :
Want to see something “Hinky”?
Differences between NOAA and Unisys appear to reach new heights!
http://www.klimadebat.dk/forum/klimadebatens-fordrejninger-og-forfalskninger-d12-e556-s200.php#post_12170
Check out the picture where I compare the Pacific.
In NOAA’s version the PDO is almost neutral, but in the Unisys version the PDO is very, very strong. You have a cold Pacific from Unisys and a warm Pacific from NOAA.

Frank,
If I’m reading the provenance of the NOAA pic correctly, it is this:
The original 36 km satellite-only reprocessed SST data used for creating the climatologies were generated from the Multi-Channel SSTs (MCSSTs) by the Rosenstiel School of Marine and Atmospheric Science (RSMAS) of the University of Miami (Gleeson and Strong, 1995). In-situ SSTs from drifting and moored buoys were used to remove any biases, and statistics were compiled with time to derive the reprocessed SSTs. The monthly mean SST climatologies were then derived by averaging these satellite SSTs during the time period of 1985-1993, with observations from the years 1991 and 1992 omitted due to the aerosol contamination from the eruption of Mt. Pinatubo. These climatologies were developed at NOAA/NESDIS/STAR (then ORA) before being delivered to NESDIS/OSDPD for implementation. The 36 km climatologies were finally interpolated into 0.5-degree (50-km) resolution to match the resolution of the operational SST analysis field. These operational monthly mean climatologies are used for producing our operational SST anomaly products.
Source: http://coralreefwatch.noaa.gov/satellite/methodology/methodology.html#clim
I added the bold for emphasis.
The Unisys product, on the other hand, uses a standard 1971-2000 climatology. Whereas the NOAA product is using 1985-1993, with two cool years removed, as its “normal” in the NOAA graphs.

John F. Hultquist
May 18, 2009 10:45 am

I believe there is some confusion regarding “measurement error” and “error bands.” The temperatures being discussed would have measurement error for all the reasons explained in the “surface stations” reporting Anthony has conducted and reported on. An extension of a “best fit” line to a data set (a projection of future temperatures, perhaps) could have error bands. I don’t believe these are the same. [Someone with better expertise than I have might offer a proper exposition of this issue.]

John Boy
May 18, 2009 11:00 am

[snip OT]

Ron de Haan
May 18, 2009 11:11 am

[snip OT]

David L. Hagen
May 18, 2009 11:25 am

Anthony
Thanks for explaining the server issues and cost constraints.

oms
May 18, 2009 11:38 am

Richard Wright (08:16:18) :

This business of reporting “anomalies” is biased by definition since the baseline is arbitrary. It gives the misleading impression that the baseline is the target or ideal whereas it is nothing more than a reference point used to make the temperature variations more significant than they really are.

Often, a small change from a reference value is easier to measure than an absolute value. I can estimate how much water is in this lake, but I can tell you with better precision how much water there is compared to last year using a set of marks I made around the perimeter.
The technical jargon used in geophysical sciences terms this kind of measurement an “anomaly.” Value judgments about what is more desirable or ideal are due to the interpreter, not due to the label on the measurements.
It’s true that anomalies can’t be compared without specifying respective reference values, but that’s akin to saying a measurement is useless without the units.
George E. Smith (09:05:16) :

What would be the purpose of error bands ? … I’m sure the computer can do arithmetic to 10 or 16 digits or whatever you want.
It’s not as if the supplied data are some real physical value of some measured variable.

Reminder: formal error and numerical error are two different things.
George E. Smith (09:22:41) :

…even if we could measure the real surface or lower troposphere mean global temperature; it would tell us exactly nothing about climate.

Well, one way to find out more about something is to measure it.
Climate is usually described in terms of the mean and variability of temperature, precipitation and wind over a period of time, ranging from months to millions of years (the classical period is 30 years).
Surface and lower troposphere temperature time series seem as good a place to start as any.

May 18, 2009 11:50 am

Anthony asks “The real question is, which of these is the correct global temperature anomaly for April ?”
Well, the real real question is which of these (on a common baseline, this time using the 1979-1998 standard for UAH/RSS) is the correct global temperature anomaly for April?
NCDC 0.3498 °C
GISS 0.2519 °C
RSS 0.202 °C
UAH 0.091 °C
Though we have to remember that, despite considerable variability for single months, the long-term trends of NCDC, RSS, GISS, and HadCRUT are nearly identical (though UAH trends a bit lower):
http://i81.photobucket.com/albums/j237/hausfath/Picture22.png

May 18, 2009 12:17 pm

A quick followup, while April temps vary a bit across series, March was surprisingly uniform. Using the same 1979-2008 baseline, we have:
GISS 0.202 °C
HadCRUT 0.207 °C
RSS 0.194 °C
UAH 0.208 °C
NCDC 0.284 °C

Fuelmaker
May 18, 2009 12:52 pm

Is there any detail on the average anomalies? The differences would be a lot more obvious if they weren’t averaged. Certainly, the data for where most people live is a lot more important to us than sea surface temperatures and empty deserts.
I presume that all the averages are weighted by area, but certain sets must have different interpolations. The unexpected differences will lead us to a lot more insight than meaningless precision.

May 18, 2009 1:03 pm

Zeke Hausfather (12:17:58) :
A quick followup, while April temps vary a bit across series, March was surprisingly uniform. Using the same 1979-2008 baseline, we have:

I’ve been posting on WUWT about the use of common base periods for months. No-one takes a blind bit of notice. I hope you have more luck than me.

Harold Ambler
May 18, 2009 1:07 pm

George Smith’s lesson on “all of a sudden” is, perhaps, not entirely complete.
The following entry comes from entymology.com:
SUDDEN: c.1290 (implied in suddenly), perhaps via Anglo-Fr. sodein, from O.Fr. subdain “immediate, sudden,” from V.L. subitanus, variant of L. subitaneus “sudden,” from subitus “come or go up stealthily,” from sub “up to” + ire “come, go.” Phrase all of a sudden first attested 1681, earlier of a sudayn (1596), upon the soden (1558). Sudden death, tie-breakers in sports, first recorded 1927; earlier in ref. to coin tosses (1834).
—————————————–
So, first documented use of the expression comes from 1681. Qualifies as the King’s English to me … but of course I’m even soft on Shakespeare and don’t correct his dozen-plus different spellings of his own name when I encounter it, either.

Richard Wright
May 18, 2009 1:09 pm

oms (11:38:04) :
Often, a small change from a reference value is easier to measure than an absolute value. I can estimate how much water is in this lake, but I can tell you with better precision how much water there is compared to last year using a set of marks I made around the perimeter.

But that’s not what is happening in temperature measurements. They are, in fact, measuring temperature, not temperature variations. (To use your analogy, they are indeed measuring the amount of water in the lake, not the level compared to last year.) They measure temperature but report “anomalies”. The use of the term is misleading (anomaly: “something that deviates from what is standard, normal or expected”). The presumption in these graphs is that the baseline is standard and anything that deviates from it is anomalous. It makes the word “anomalous” meaningless in a scientific sense, but it is useful in the hands of propagandists.

Harold Ambler
May 18, 2009 1:20 pm

Sorry, source should be:
http://www.etymonline.com/index.php?search=sudden&searchmode=none
and should also be “when I encounter them”

George E. Smith
May 18, 2009 1:24 pm

I don’t get this whole argument about the importance of the baseline in anomaly measurements. I would think the baseline is totally irrelevant; particularly since we apparently don’t know anything more precise about the baseline than we do about the current measurements.
Surely the choice of baseline simply sets the value of zero anomaly for that data set.
If you simply look at changes in anomaly rather than the anomaly itself, then any baseline ought to suffice.
I prefer the baseline that takes zero as 273.15 Celsius degrees below the freezing point of water. Well, I’d even accept the freezing point of water as a suitable baseline.
When I look at RSS/UAH/GISS/HAD, the only thing that registers in my mind is how they change relative to themselves; not how they compare with each other.

oms
May 18, 2009 1:34 pm

Richard, if the analysis method contains biases, then this will be reflected in the “global temperature index.” In the limit of “everything else remains the same,” the “anomaly” (!) will be more precise.
To continue with the lake analogy, the real measurement might be akin to a series of soundings which are then mapped to a volume “model.” If you are concerned with something slightly more complicated than density, say the mass of the water, then you will have to use even more modeling.
“Anomaly” might be a funny word, but so are the words “normalized,” “mean,” “standard deviation,” and even “normal distribution.” They are all words with precise meanings within the sciences and not chosen for any nefarious purposes.

Fluffy Clouds (Tim L)
May 18, 2009 1:35 pm

snip away if needed!
my take is this…. lies upon lies, compounding lies, and more lies.
why any of these releases are even near the same is amusing!
TX boss
P.S. u get my e-mail on solar graphs?

May 18, 2009 1:44 pm

NOAA is increasing the difference with HadCRUT3 and ERSST (land+ocean); it’s not an isolated difference, it’s a different trend in the last few years.
NOAA-Hadley:
http://globalwarming.blog.meteogiornale.it/files/noaa-hadley-70-18-5-09.JPG
NOAA-ERSST:
http://globalwarming.blog.meteogiornale.it/files/noaa-ersst-70-18-5-09.JPG
The post is in Italian; you can translate it using Google Translate:
http://globalwarming.blog.meteogiornale.it/2009/05/18/qualcosa-non-va-con-i-dati-gw-del-noaa/
Someone says the trends are similar, and this is true if we look at the long range, but it is not true if we look at the medium range. These are the trends over the last 30 years (NOAA, Hadley, ERSST): http://globalwarming.blog.meteogiornale.it/files/surf99-09-11-05-09.JPG
NOAA: +0.132°/decade
Hadley: +0.06°
ERSST: +0.096°.

Manfred
May 18, 2009 1:45 pm

Somebody recently posted that ALL land based systems are outliers, because they should show significantly lower trends than the satellite based systems.
I think this statement is based on the model expectation that higher altitudes, in most regions and particularly in the tropics, should warm faster than sea level.
http://www.realclimate.org/images/2xCO2_tropical_enhance.gif
Integrating the above data for the 1000 mbar sea-level and UAH 600 mbar levels would require an approx. 1.5 times steeper trend for satellite data compared to land based measurements.
We know that instead the slope for GISS over land isn’t 1.5 times smaller than UAH; it is even higher, which allows only 3 conclusions:
– the GISS temperature trend over land is way too high
– the models are wrong
– both

HarryL
May 18, 2009 1:51 pm

I came across this on Accuweather’s GW blog. It’s from the NASA Earth Observatory home page.
http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt
There is a huge positive temp increase that begins around 1980 and lasts up to winter 2009. WUWT Anthony?

May 18, 2009 2:00 pm

Mutual grooming and caressing, when gratified through tipping, produce a great variety of “scientific truths”… :-)

Pofarmer
May 18, 2009 2:24 pm

“I don’t get this whole argument about the importance of the baseline in anomaly measurements.”
Maybe not, but to compare measurements, it would at least be helpful if they all used the same baseline.
“To continue with the lake analogy, the real measurement might be akin to a series of soundings which are then mapped to a volume “model.” If you are concerned with something slightly more complicated than density, say the mass of the water, then you will have to use even more modeling.”
Lake levels are normally reported as feet above sea level for the pool. I don’t care how deep the pool is, but, you don’t report the anomoly, you report the level. Same with rivers, etc. The reason I beleive that they use anomolies in Climate reporting is because they can use a really, really small scale that makes changes of .1degree C look positively huge, where if you showed a graph of daily min, max, and average, it wouldn’t really look all that scary.

May 18, 2009 2:26 pm

I agree with the poster who isn’t bothered by the reference period. Readers at CA occasionally raise this issue and I’m unsympathetic to these concerns. Whatever is causing the divergence, it will be in the data, not in the reference period.
I’ve discussed the NOAA-GISS divergence at CA on a few occasions – NOAA runs hotter than GISS. GISS makes its own attempt to adjust US data for UHI – I do not regard the GISS ROW algorithm as a UHI adjustment attempt: it’s more of a random permutation of data. The difference between GISS and NOAA in the US is as high as 0.5 deg C since the 1930s.
If a new divergence has developed, it might be to do with the new changepoint adjustment.

Jeff Alberts
May 18, 2009 3:16 pm

RW (03:33:53) :
This statement is meaningless. The numbers are not comparable because they are anomalies relative to different base periods.

This statement is also meaningless. How can any base period be ‘outdated’? If you want to renormalise to a different one, it’s trivial to do so.

Sounds like trying to measure Global Mean Temperature is meaningless. So why bother replying?

David
May 18, 2009 3:29 pm

It seems to me that using the term “anomaly” to describe the difference between a temperature reading and some reference point is incorrect. The term implies that change is anomalous, whereas we know that the climate is always changing. How about using “deviation” instead?

John Tofflemire
May 18, 2009 3:55 pm

I noticed the NCDC anomaly seemed unexpectedly high compared with the GISS figure when it came out the other day. Since I knew the NCDC anomaly is based on the 1901-2000 period while the GISS anomaly is based on the 1951-1980 period I compared the anomalies of the two datasets in the overlapping period and found that the difference between the two is only about .01 degrees Celsius. So the difference is due to something other than base period. NCDC does frequently make significant adjustments to their figures in the weeks and even months following the initial release so it may be best to see where their final figure ends up.
Regarding the NCDC base period, I have been following this time series fairly closely over the past two years and recall that its base period has been 1901-2000 throughout this time. The time series was significantly adjusted in December of 2007, an adjustment which affected the values throughout the entire time series, but this adjustment had nothing to do with a change in the base period.
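The overlap comparison described here is easy to script. A minimal sketch, assuming both series have been loaded into dictionaries keyed by (year, month); the names and data structures are placeholders, not the agencies’ file formats.

def mean_offset(a, b, start=1951, end=1980):
    # Average difference (a - b) over the months both series cover in the window.
    common = [k for k in a if k in b and start <= k[0] <= end]
    return sum(a[k] - b[k] for k in common) / len(common)

# A result near 0.01 deg C for NCDC minus GISS over 1951-1980 would match the
# finding above that base periods cannot explain a 0.165 deg C April gap.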

pft
May 18, 2009 4:17 pm

John W. (06:48:04) :
“I’d be interested in seeing the actual temperatures”
Then it would be harder to deceive us.

oms
May 18, 2009 4:18 pm

Pofarmer (14:24:00) :

Lake levels are normally reported as feet above sea level for the pool. I don’t care how deep the pool is, but, you don’t report the anomoly, you report the level. Same with rivers, etc.

That’s interesting: the lake level is reported as a difference from sea level (a reference level which needs to be specified to be completely meaningful).
David (15:29:56) :

How about using “deviation” instead?

Well, since there is already a “standard” deviation, I suppose this deviation would be… anomalous? (Sorry, I couldn’t resist).
Anyhow, I’m with George E. Smith and Steve McIntyre on this: the cause of the divergence should be examined in the data. Disagreements about reference data sets or what you call the difference from the reference level are interesting but somewhat beside the point.

May 18, 2009 4:34 pm

David (15:29:56) :
It seems to me that using the term “anomaly” to describe the difference between a temperature reading and some reference point is incorrect. The term implies that change is anomalous, whereas we know that the climate is always changing. How about using “deviation” instead?
In my articles, I always try to use “change,” “fluctuation” or “oscillation” instead of “anomaly” or “anomalous.” “Anomaly” and “anomalous” imply standard, smooth and fixed conditions, which is not what we find in nature. It seems that climatology and ecology are two disciplines whose language has lately been manipulated toward a fixed goal external to authentic science.

noaaprogrammer
May 18, 2009 4:38 pm

JP wrote: “One wonders how many adjustments these organizations calculate, and how many are thrown out. It is just a little too coincidental that whether they are adjusting for time of observation, or using a different time interval for the mean, we always get a little bit warmer. Science is rarely, if ever, that simple.”
au contraire – consider how simple this is:
#include <manipulate I/O>
void AGW( float dataset[], float runningaverage )
{ // This AlGorythm virtually proves that man can
  // indeed warm the globe by cooking datasets:
  int i = 0;
  int inflate = 1;
  float CO2fudgefactor;
  while (!endofglobe())
  { if (dataset[i] < runningaverage)            // the "<", "+=" and "++" here
      dataset[i] += CO2fudgefactor * inflate++; // were eaten by the blog software
    if (++i > sizeof(dataset))                  // and are restored by guesswork
    { i = 0;
      inflate = 1;
    }
  }
}
Seriously though, there should be Benford-type tests for the 1st, 2nd, 3rd, … leading digits in meteorological data sets that can detect cooked data, just as there are in the auditing of financial records. (See http://www.mathpages.com/HOME/kmath302/kmath302.htm ) However, because any one meteorological statistic spans a physically restricted numeric range, the distributions of these digits will first have to be established so that a normalization can be carried out.
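The mechanics of such a first-digit test are only a few lines of Python. A sketch, with noaaprogrammer’s caveat in mind: a meaningful test on meteorological data would need an empirically established digit baseline, not the pure Benford distribution used here.

import math
from collections import Counter

def leading_digit(x):
    return int(("%e" % abs(x))[0])  # first significant digit of x

def benford_chi2(values):
    # Chi-square statistic of observed leading digits vs. log10(1 + 1/d).
    digits = [leading_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1.0 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2  # compare against a chi-square table with 8 degrees of freedom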

noaaprogrammer
May 18, 2009 4:40 pm

Note: The special symbols in the core of the while-loop that showed how to increase the data set values did not come through on my previous post.

May 18, 2009 4:45 pm

Anthony Watts
Re: base periods and anomalies
This issue seems to crop up every month. I’m wondering if it’s worth WUWT publishing its own adjusted anomalies for each of the 4 main metrics. They could all use the same common base period as the satellites, i.e. 1979-1998. It’s a relatively trivial operation for anyone who’s already got the data in a spreadsheet; in fact, GISS has a tool which does it for you.
On the other hand, perhaps we’d better leave it, as I’ve just noticed I disagree with Zeke’s GISS adjusted anomaly for April. See
Zeke Hausfather (11:50:56) :
…GISS 0.2519 °C

Pamela Gray
May 18, 2009 5:18 pm

The NWS (related to NOAA) has CONSISTENTLY predicted warmer temperatures than what actually occurred in rural NE Oregon. Their dynamical prediction model, methinks, has a Kool-Aid hue. You don’t suppose they have decided to adjust temperature sensor data with some kind of “global warming” signature that they felt was somehow missed by rural stations? Could it be some kind of perverse UHI effect applied to low-temperature outliers?

henry
May 18, 2009 5:56 pm

RW (09:21:38) :
“This is simply not meaningful. The choice of reference period is completely irrelevant to any analysis. You can trivially renormalise to whatever period you choose. If their reference period was 1880-1910, or if it was 1979-2009, it would make absolutely no difference to anything.”
I’ve been asking about the differences in the reporting periods for a while now, and I always get the same reply: it’s the TREND that’s important, not the zero.
Now this may be true, but it’s still nice to know if a 0.6 degree upward trend is rising from zero, or from 0.7 below zero.
People see the “zero” point as the “normal” point. So the choice of zero is important to see whether we’re rising from normal, or returning to normal.
This echoes previous replies, in that the choice of zero is used to create as large a positive (above zero) change as possible. The more above zero we get, the scarier the news sounds.

May 18, 2009 6:10 pm

John W. (06:48:04) :
“I’d be interested in seeing the actual temperatures”

Probably you will never see them; chances are there is no such raw data, because if they are changing it all the time, by now they surely aren’t able to find out what the original data was.

norah4you
May 18, 2009 6:46 pm

By changing from 1960 to 1900 as a start year for readings worldwide, NCDC can’t have helped getting into trouble. More than 1/3 of the station sites, and about 95% of those in the Arctic and Antarctic, had no humans living there or keeping any kind of station in the same place for more than two months before 1959; the inner parts of the huge areas towards the North Pole and the South Pole had never been seen by a human at all in 1900…
No weather stations had been established before the 1920s in many places where there are stations today, in South America and in Australia; some came even much later than the 1950s.
This means that data before 1950 are in most cases completely computer-estimated, extrapolated, etc., and have no connection to real values at all.
But there is more. Take Sweden, for example. In 1995 scholars were given the opportunity to have correct readings for water areas such as lakes and rivers. They refused them, since it was easier to do computer-based calculations for the years before 1989… How do I know? My father, retired by then and gone since last year, was one of those who offered detailed studies and values from 1957…

steven mosher
May 18, 2009 7:36 pm

If you want to know the “real” temp as opposed to the anomaly, just add
about 14C to the anomaly…. So if the anomaly is .5
you get 14+.5 = 14.5
so.
Raw temp =14.5
Anomaly =.5
There, feel better?
Nothing whatsoever turns upon the anomaly period. Nothing. It just doesn’t matter. It’s a waste of time to even raise it as an issue, a distraction from the real problems in the temperature series.

REPLY:
mosh did you get the email I sent earlier? – Anthony

Richard Wright
May 18, 2009 7:47 pm

It seems to me that using the term “anomaly” to describe the difference between a temperature reading and some reference point is incorrect. The term implies that change is anomalous, whereas we know that the climate is always changing. How about using “deviation” instead?

Yes, that’s exactly my point. No one, not even Hansen, “expects” (from the definition of anomaly) future temperatures to be exactly flat and coincident with the baseline. Therefore, the use of the term “anomaly” is inappropriate and misleading. How, for example, does one describe an outlying data point when all data points are referred to as anomalies? Is the outlier an unexpected anomaly (unexpectedly unexpected)? Just because climate researchers choose to misuse a word doesn’t mean it is a good choice.
My point is simply that the term is misleading and I think the concept is as well, because it implies, through the use of a baseline, that global temperatures should be flat, which is baseless. A plot of actual temperatures would show the same variations as these plots of anomalies but without the bias of the baseline assumption. That is the bias I’m talking about and I think it is very important to understand.
One could show actual temperatures and adjust the scale so that the same plus or minus 1°C spread is shown. But consider that without this artificial, horizontal baseline, the discussions of the data would, I think, be much different. Because then one has to ask the question, what should the temperatures be? Should they be flat? Should they be rising after an ice age? Putting in a baseline is an assumption at best or a conclusion at worst. It is not necessary in order to analyze the data and in fact confuses any analysis.

Richard Wright
May 18, 2009 7:52 pm

Here’s a very simple example to show the folly of the baseline. Global temperature varies throughout any year from Winter to Summer and back to Winter. Yet average monthly temperatures are compared to a flat baseline. What could possibly be the point of doing that?

Just The Facts
May 18, 2009 10:50 pm

John Peter (06:16:53) :
David L. Hagen (09:08:20) :
In terms of being able to search WUWT for old comments, for now the best tool is Google’s new Blog Search:
http://blogsearch.google.com/?hl=en&ned=us&tab=nb
For example, if you wanted to see every comment by me you could use the advanced search feature like this:
http://blogsearch.google.com/blogsearch/advanced_blog_search?as_q=&num=10&hl=en&client=news&um=1&ie=UTF-8&ctz=240&c2coff=1&as_oq=&as_eq=&as_drrb=q&as_qdr=a&as_mind=1&as_minm=1&as_miny=2000&as_maxd=18&as_maxm=5&as_maxy=2009&lr=&safe=active&q=%22just+the+facts%22+blogurl:wattsupwiththat.com
And then once you find and select the thread you are looking for hit Ctrl F to do a “find” on an Explorer or Firefox window to find the full comment within the thread.

Chriscafe
May 18, 2009 11:31 pm

Surely the major problem is that NOAA and GISS manipulate the raw data using unpublished algorithms. Occasionally these algorithms change and we are treated to a recasting of the temperature/time series, usually with older temperatures lowered to increase the upward trend.
Although they say they eliminate the effects of UHI, only spaghetti code has been published by GISS to provide a clue as to how. The Climate Audit analysis of this code, mentioned in this thread by Steve McIntyre, shows that the claimed adjustment is not achieved.
Another major problem is that the unmanipulated raw data do not appear to be available.

David
May 19, 2009 12:00 am

“Here’s a very simple example to show the folly of the baseline. Global temperature varies throughout any year from Winter to Summer and back to Winter. Yet average monthly temperatures are compared to a flat baseline. What could possibly be the point of doing that?”
If all temperature readings were taken in one hemisphere then that point would be valid – but they are not. Whether the readings in each hemisphere are equally weighted is another matter.

Evan Jones
Editor
May 19, 2009 12:07 am

But what is the difference between GISS, HadCRUT and NCDC? Do they use data from different monitoring stations? Do they have different methods of making up their data… oops, I mean different methods for analysing their data? What’s the difference?
I would have thought there is only the need for one group to monitor the world’s surface stations? I’m sure there’s a good reason?

Well, believe it or not, it’s like this: NCDC takes its data and adjusts it (much controversy there). GISS takes the fully adjusted NCDC data and “unadjusts it” via some strange algorithm and then applies its own adjustments to the mangled results. Why they do not simply start off with NCDC raw data is a mystery for the ages.
HadCRUT, as I understand it, starts off with NCDC raw data, but does not reach all over the North Pole by extrapolating from the “Siberian Thought Criminal” stations, so it generally clocks in a bit cooler than NCDC or GISS.
Satellites measure the lower troposphere using microwave proxies; these are not surface temps. They should (according to AGW theory) be warming faster than the surface, yet they don’t. The sats are in pole-to-pole orbit and their cameras stick out the sides, so they can’t look directly down at the poles (I also think there is a problem measuring reflections off the ice). So the N/S polar temps are a lot less certain than they otherwise might be.

norah4you
May 19, 2009 3:48 am

Well, fictive data, corrected for this and that instead of being based on real temperature readings, has been used more than once by the so-called scholars.
In “Fiction or facts climate threats readings” I present some close-to-home examples from Sweden that say more than the so-called scholars probably understood while writing their papers…

RW
May 19, 2009 4:35 am

henry:
“People see the “zero” point as the “normal” point. So the choice of zero is important to see if we’re rising from normal, or returning to normal.”
There is no meaningful definition of ‘normal’ in this context and hence no non-arbitrary choice of reference period.
“This echos previous replies, in that the choice of zero is used to create as large a positive (above zero) change as possible. The more above zero we get, the scarier the news sounds.”
If a difference of 0.1°C makes you scared, you’re a bit more timid than most people! I find the suggestion that the baseline was chosen in the way you suggest to be obviously absurd. Given that the GISTEMP record was first produced in the early 1980s, using the previous three decades as the base period seems rather obvious. If the desire was to produce larger positive figures, why didn’t they use 1880-1910 as the reference period?
chriscafe:
“Surely the major problem is that NOAA and GISS manipulate the raw data using unpublished algorithims”
Have you ever looked at any of the papers mentioned here or here? What definition of ‘unpublished’ are you using?
evanmjones:
“Well, believe it or not, it’s like this: NCDC takes its data and adjusts it (much controversy there). GISS takes the fully adjusted NCDC data and “unadjusts it” via some strange algorithm and then applies its own adjustments to the mangled results.”
That’s pure fiction. Have you ever read GISS’s own description of the actual procedure?

Steve Keohane
May 19, 2009 5:25 am

Anthony, OT
In case you haven’t caught this yet: Scientific American 5/18/09
Trees boost air pollution–and cool temperatures–in southeast U.S
Why is the southeastern U.S. getting cooler while the rest of the globe is warming? Thank the trees, say some researchers.

May 19, 2009 5:59 am

David (00:00:48) :

“Here’s a very simple example to show the folly of the baseline. Global temperature varies throughout any year from Winter to Summer and back to Winter. Yet average monthly temperatures are compared to a flat baseline. What could possibly be the point of doing that?”

If all temperature readings were taken in one hemisphere then that point would be valid – but they are not. Whether the readings in each hemisphere are equally weighted is another matter.
The point is not valid either way. Monthly anomalies are temperature departures relative to the mean temperature for that month during a given ‘base period’. GISS use the 1951-1980 period. So, if the 1951-1980 mean temperature for May is 14 deg and the temperature for May 2009 is 14.5, then the May anomaly will be 0.5. The base period is irrelevant. If you don’t like the GISS base period then use one of your own. Use the satellite base period (1979-1998) if you prefer. This gives a GISS April anomaly of ~0.2 deg, i.e. similar to the RSS anomaly.
If you have the GISS data, converting from one base period to another is trivial. But you don’t even need to do that, because GISS will do it for you. Click here ->
http://data.giss.nasa.gov/gistemp/maps/
Then select Hadl/Reyn_v2 in the Ocean drop-down menu, enter your preferred base period (e.g. Begin: 1979, End: 1998), then click on Make Map. You should end up with this ->
http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2009&month_last=04&sat=4&sst=1&type=anoms&mean_gen=04&year1=2009&year2=2009&base1=1979&base2=1998&radius=1200&pol=reg
which, if it has worked, will show a global anomaly map. In the top right-hand corner you will find the monthly “anomaly” relative to the chosen base period. In this example it is 0.19 deg.
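The same operation is easy to script. A minimal sketch, assuming the series has been read into a dictionary mapping (year, month) to the published values; the data structure is hypothetical, not GISS’s file format. Since re-baselining only subtracts a constant from each calendar month, it applies equally to anomalies or absolute temperatures.

def rebase(series, base_start=1979, base_end=1998):
    # Re-express a monthly series as departures from a new base period.
    out = {}
    for month in range(1, 13):
        base = [v for (y, m), v in series.items()
                if m == month and base_start <= y <= base_end]
        month_mean = sum(base) / len(base)
        for (y, m), v in series.items():
            if m == month:
                out[(y, m)] = v - month_mean
    return out

Feeding a GISS anomaly series (1951-1980 base) through this with the 1979-1998 window should reproduce the ~0.2 deg April figure quoted above.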

Steve Keohane
May 19, 2009 7:23 am

Anthony, OT, still: from the comments section of the above Scientific American post, someone posted this:
http://www.ncdc.noaa.gov/img/climate/research/2009/apr/05_04_2009_DvTempRank_pg.gif
It shows a year-long cooler-than-normal northwest, central and eastern US.

steven mosher
May 19, 2009 8:37 am

Anthony,
Naw, I didn’t read the mail. I’m ears deep in alligators. 20-hour days are killer.
Stuck in Asia with a bad connection. I’ll check later tonight, when I finish
work. I’m grumpy.

Richard Wright
May 19, 2009 10:26 am

The point is not valid either way. Monthly anomalies are temperature departures relative to the mean temperature for that month during a given ‘base period’. GISS use the 1951-1980 period. So, if the 1951-1980 mean temperature for May is 14 deg and the temperature for May 2009 is 14.5, then the May anomaly will be 0.5.

Thanks for clearing that up. Every time I have seen an explanation of the baseline, it is always something like “the average global temperature from 1951-1980,” which is very different from what you are describing. However, I still maintain that the term “anomaly” is poorly chosen, for the reasons I previously mentioned. And I still would like to see error bars or something that gives an idea of the spread of the data used to calculate the “baseline.” How are we to know if a 0.2° difference is significant? If the standard deviation of the data used to calculate the baseline were 0.2, for example, then it would not be significant. Without this information, we have no way of knowing whether we are looking at real differences or just noise (even if the data itself were not suspect).

RW
May 19, 2009 2:18 pm

“And I still would like to see error bars or something that gives an idea of the spread of data ”
For GISTEMP, look here. Had you ever looked for this kind of information? It has been there all along and very easy to find.

Richard Wright
May 19, 2009 7:16 pm

“And I still would like to see error bars or something that gives an idea of the spread of data ”
For GISTEMP, look here. Had you ever looked for this kind of information? It has been there all along and very easy to find.

The graph at your link has 3 green error bars with no explanation of what they mean. Each data point should have error bars and we need to know what the error bars mean. And how would we know how much error is due to the baseline data and how much is due to the ongoing monthly measurements? Graphs that show averages without defined error bars do not allow an assessment of the meaning of the data.
I looked up Hansen’s Global Temperature Change paper that appears to be the source of the graph you linked to. Here is his explanation of error bars:

Estimated 2 sigma error (95% confidence) in comparing nearby years of global temperature (Fig. 1A), such as 1998 and 2005, decreases from 0.1°C at the beginning of the 20th century to 0.05°C in recent decades (4). Error sources include incomplete station coverage, quantified by sampling a model-generated data set with realistic variability at actual station locations (7), and partly subjective estimates of data quality problems (8).

I particularly like the last part about subjective estimates. Boy, that sounds precise and repeatable. So I guess his error bars are of the same quality as his data, which is not surprising. I would just like to see plus or minus 2 sigma based on the measurements that were averaged so we could see the spread of the data.
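If the individual values behind each monthly mean were published, the display being asked for would be trivial to produce. A sketch with made-up numbers standing in for the unavailable raw data; note that the 2-sigma spread of the inputs, which is what is being requested, is much wider than the sampling error of the mean itself (roughly sigma over the square root of n).

import statistics

def mean_with_2sigma(values):
    # Return (mean, low, high), where low/high bracket +/- 2 standard deviations.
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)
    return mean, mean - 2 * sigma, mean + 2 * sigma

m, lo, hi = mean_with_2sigma([0.41, 0.55, 0.38, 0.62, 0.47])
print("%.2f deg C (2-sigma spread %.2f to %.2f)" % (m, lo, hi))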

Just The Facts
May 19, 2009 9:47 pm

There’s no need to quibble about the differences between anomalies, 0.605 °C versus 0.440 °C, adjustments, base periods and all that, because “Earth’s median surface temperature could rise 9.3 degrees F (5.2 degrees C) by 2100, the scientists at the Massachusetts Institute of Technology found”…
http://www.reuters.com/article/latestCrisis/idUSN19448608
Can we get a copy of this “study” and have a thread where we give it a good peer review?

Chriscafe
May 19, 2009 9:51 pm

RW
Yes and Yes. Where is the complete set of manipulating algorithms published including data and code as required in all other branches of science?
Is the raw data still available or are we left with processed data which reflects the desired conclusions?

May 20, 2009 3:29 am

I knew I wasn’t too far off the mark with my earlier comment. In a post at Climate Audit, Steve McIntyre noted that NOAA has stated, regarding their ERSST data, that “V3b is now the official version. V2 will no longer be updated. It will still be available in our subdirectory /Datasets/noaa.ersst/V2/.”
http://www.cdc.noaa.gov/data/gridded/data.noaa.ersst.html
Steve’s comment on the ClimateAudit post is here:
http://www.climateaudit.org/?p=6038#comment-342160

Mike Bryant
May 21, 2009 3:23 pm

Hmmm, is the sea ice at the central east coast of Greenland there:
http://saf.met.no/p/ice/nh/conc/conc.shtml
or not?
http://nsidc.org/data/seaice_index/daily.html
Looks like we have satellite problems again. There are also other missing swaths…
Mike

Bob Levinstein
May 25, 2009 12:04 pm

Completely off topic, but had to share:
http://comics.com/reality_check/2009-05-25/