And now, the most influential station in the GISS record is …

Guest post by John Goetz

#17 - Selinsgrove, PA (in 2003)

The GISS temperature record, with its various adjustments, estimations, and re-estimations, has drawn my attention since I first became interested in the methods used to measure a global temperature. In particular, I have wondered how the current global average can even be compared with that of 1987, which was produced using between six and seven times more stations than today. Commenter George E. Smith noted accurately that it is a “simple failure to observe the standard laws of sampled data systems.” GISS presents so many puzzles in this area, it is difficult to know where to begin.

My recent post on the June 2009 temperature found that the vast majority of temperatures were taken from airports and urban stations. This would cause some concern if the urban heat island (UHI) effect were not accounted for in those stations. GISS does attempt to filter out UHI from urban stations by using “nearby” rural stations – “nearby” meaning anything within 1,000 km. No attempt is made to filter UHI from airports not strictly listed as urban.

If stations from far, far away can be used to filter UHI, then it stands to reason some stations may be used multiple times as filters for multiple urban stations. I thought it would be amusing to list which stations were used the most to adjust for UHI. Fortunately, NASA prints that data in the PApars.statn.use.GHCN.CL.1000.20 log file.
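To make the 1,000 km rule concrete, here is a minimal sketch of the neighbor-selection step only. The station names and coordinates below are invented for illustration, and this is not GISS’s actual adjustment code (which goes on to fit a two-leg trend to a distance-weighted rural mean); it merely shows how generous a 1,000 km radius is:

```python
import math

def neighbors_within(urban, rural_stations, radius_km=1000.0):
    """Return (distance_km, name) for rural stations within radius_km
    of an urban station, sorted nearest-first. Positions are
    (latitude, longitude) in degrees."""
    def haversine(a, b):
        # Great-circle distance on a sphere of Earth's mean radius.
        lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    hits = [(haversine(urban, pos), name)
            for name, pos in rural_stations.items()]
    return sorted((d, n) for d, n in hits if d <= radius_km)
```

With an urban station in central Pennsylvania, a rural station 200 km away qualifies just as readily as one 950 km away; both get a say in the urban station’s adjusted history.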

The results were as I expected – amusing. Here are the top ten, ranked in order of the number of urban stations they help adjust:

Usage Station Name Location From To Note
251 BRADFORD/FAA AIRPORT PA / USA 1957 2004 Airport
249 DUBOIS/FAA AIRPORT PA / USA 1962 1994 Airport
249 ALLEGANY STATE PARK PA / USA 1924 2007 Admin Building
246 PHILIPSBURG/MID-STATE AP PA / USA 1948 1986 Airport
243 WELLSBORO 4SSE PA / USA 1880 2007 Various Farms
243 WALES NY / USA 1931 2007 Various Homes
241 MANNINGTON 7WNW WVa / USA 1901 2007 Various Homes
241 PENN YAN 8W NY / USA 1888 1994 Various Homes
237 MILLPORT 2NW OH / USA 1893 2007 Various Farms
235 HEMLOCK NY / USA 1898 2007 Filtration Plant

Unfortunately, having three of the top four stations located at airports was the sort of thing I expected.

Looking a little further, it turns out all of the top 100 stations are in either the US or Canada, and none of those 100 stations have reported data since 2007. (By the way, #100 is itself used 147 times.) Several of the top-100 stations have been surveyed by surfacestations.org volunteers who have documented siting issues, such as the following:

  • Mohonk Lake, N.Y. (197 times) – much too close to ground, shading issues, nearby building
  • Falls Village, Conn. (193 times) – near building and parking lot
  • Cornwall, Vt. (187 times) – near building
  • Northfield, Vt. (187 times) – near driveway, building
  • Enosburg Falls, Vt. (180 times) – adjacent to driveway, nearby building
  • Greenwood, Del. (171 times) – sited on concrete platform
  • Logan, Iowa (164 times) – near building, concrete slabs
  • Block Island, R.I. (150 times) – adjacent to parking lot and aircraft parking area

The current state of a rural station, however, is an insufficient criterion for deciding to use it to adjust the history of one or more other urban stations. The rural station’s history must be considered as well, with equipment record and location changes being two of the most important considerations.

Take for example good ‘ole Crawfordsville, which came in at #23, having been used 219 times. As discussed here, Crawfordsville’s station lives happily on a farm, and does seem to enjoy life in the country. However, up until 16 years ago the station lived in the middle of Crawfordsville, spending over 100 years at Wabash College and at the town’s power plant.

July 21, 2009 8:10 am

Mary Hinge (07:27:35) :
Are you one of the famous Spoonerisms? (The other being ‘Betty Swollocks’.)

David Snyder
July 21, 2009 8:16 am

It looks like Allegany has not been surveyed; I wonder about the others. I’ll have to take a stab at it while I’m there in August.

John Shoesmith
July 21, 2009 8:30 am

Take a temperature reading – record it
Add 37
Multiply by 8
Divide by 12
Add 16
Throw the result away
Pick a number that fits your theory – report it
Call it science

July 21, 2009 8:37 am

Mary Hinge (08:05:32),
I simply provided the dictionary definition of “bugger.” That’s what you asked for, isn’t it? So why the emotional response? Was it due to hormones?

Jim
July 21, 2009 8:57 am

Mary Hinge (03:29:38) : The current uptick in AMSU may or may not be meaningful, but that does not negate the fact that GISS shows a much greater uptrend over the past several years than does either RSS or UAH. It also does not validate the GISS methodology. Anyway, warm is better than cold any day. Warmists are wrong-headed on so many levels.

DR
July 21, 2009 9:08 am

@Mary Hinge,
What makes you think satellites would not be sensing effects from UHI?

Rod Smith
July 21, 2009 9:09 am

E.M.Smith (04:41:58) :
“There was one systematic issue (a use of a non-standard “extension” to the language that some systems allow, but most do not) and there were a couple of cases of “type mismatch” in parameters. A couple of those were just parameters being passed to “write” statements, so they ought not to cause too much mischief. Another was more worrying in that a REAL data type was passed to a significant zonal subroutine with an INTEGER type for the variable.”
This is interesting. I’ve been retired for nearly two decades, but all the Federal Government software contracts during my working days specified that compilers used must meet ANSI Standards. Surely the extensions you noticed don’t meet ANSI standards.
I remember vividly that compilers for code developed for NASA had to meet that specification.
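The REAL-passed-to-INTEGER mismatch quoted above is worth illustrating. Classic Fortran passes arguments by reference with no cross-checking between separately compiled program units, so the callee simply re-reads the caller’s four bytes under the wrong type. A Python sketch of that bit-level reinterpretation (illustrative only; the exact outcome in a real program depends on the compiler and platform):

```python
import struct

def reinterpret_real_as_integer(x):
    """Re-read the raw bits of a 32-bit REAL as a 32-bit INTEGER,
    mimicking a Fortran callee that receives a REAL actual argument
    through an INTEGER dummy argument: no conversion happens, the
    same four bytes are just interpreted under the wrong type."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

# A modest temperature-like value becomes garbage, not 12 or 13:
print(reinterpret_real_as_integer(12.5))
```

This is why such a mismatch is “more worrying” than a bad write statement: the callee gets a wildly wrong value, not a rounded one.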

July 21, 2009 9:16 am

Smokey . And moderators.
This ‘Mary Hinge’ character is just trying to be funny. His/her ‘name’ is a Spoonerism. If you ‘undo’ the spoonerism, you’ll see what I mean, with the ‘name’ being UK slang for an intimate part of the female anatomy; a part which is not mentioned in polite discussion.
I’d get him/her to come up with a real name to prove he/she is a serious contributor.

Greg S
July 21, 2009 9:23 am

“Check this on Google scholar. Quite a few of these studies are available as PDFs. Some US lakes, such as Lake Mendota (WI), have data on the date of ice cover and ice out since about 1850. One study, cited below, has long term data on ice duration for 62 lakes in the Great Lakes region (see below).” – Bill D.
Gosh….
That’s what a lot of people thought when they headed to Northern Minnesota for the fishing opener last year, only to find the lakes solid with ice in mid-May.
This year, we are wearing jackets in July.
When is the media going to recognize and acknowledge that climate changes from decade to decade?
The “average” for freeze-up and ice-out includes ’98, a very odd year.

Bill D
July 21, 2009 9:45 am

Jim (07:28:15) :
Bill D (06:32:29) : It’s been said before, but I’ll say it again … evidence of warming is not the same thing as evidence that man made CO2 is responsible for that warming. It also is not an established fact that warming is bad. To take it even further, I would say cooling is bad. In fact, cooling is much, much worse than warming. So, what’s your point?
Jim:
Of course, I agree that evidence for climate warming and the cause of climate warming are separate issues. The main concern on this particular post is whether temperature increases indicated by surface stations in the US are real or are artifacts due to poor siting and UHI effects. All that I am saying is that data such as lake temperature and studies of the ice-out dates on lakes (in the US and worldwide) are fully supportive of the temperature increases suggested by the surface station data. The ice-out data are especially good for documenting warming during winter and early spring.

Mary Hinge
July 21, 2009 10:01 am

Smokey (08:37:18) :
Mary Hinge (08:05:32),
I simply provided the dictionary definition of “bugger.” That’s what you asked for, isn’t it? So why the emotional response? Was it due to hormones?

You provided one definition but not the original meaning, but by the by…
I was actually hoping you might have something relevant to say…still waiting!

Mary Hinge
July 21, 2009 10:10 am

Jim (08:57:51) :

Thanks for your response, at last an actual response to the original point.
Can you tell me and others here the source for this greater uptrend in the past years? I can offer you this link to the superb site wood for trees and I think you will find that there is no difference in trends at all. http://www.woodfortrees.org/plot/gistemp/from:1980/normalise:+2.4/plot/uah/from:1980/plot/wti/from:1980
In the context of recent articles written here the current AMSU uptick is very meaningful. It fully supports GISS’s figures from last month and shows that the now seemingly feeble attempts to discredit GISS are not worth the blogosphere bits they are written on.

Steven Hill
July 21, 2009 11:03 am

The coolest July in Kentucky history is on the way. Hats off to Roy Spencer, the clouds roll in every afternoon. We are saving money and using less electricity Mr. Gore, no AC needed and it’s July! WooHoo, I love your global warming! Can you turn it off in October, however? We don’t want snow that soon.

E.M.Smith
Editor
July 21, 2009 11:48 am

Robert A Cook PE (07:30:02) :
EM Smith – Thank you for the service(s)!

You are most welcome! (Back after 5 hours sleep… hoping to get the first STEP0 run / debugged shortly…)
“Another was more worrying in that a REAL data type was passed to a significant zonal subroutine with an INTEGER type for the variable. ”
If I recall my FORTRAN properly, this would “replace” the numeric (eight digit powers of ten format) result from the calculation of the various input numbers with an “integer” (single digit precision) in the output.

Thanks for that. You’ve saved me a bit of time. I thought it was something like that, but your statement joggled a few 30 year old memories and it’s easy enough to test with a stub.
Could be significant. Could be insignificant. Could be trivial. Could be disastrous.
And that pretty much sums up GIStemp.
From the odd coding style (scribbling temp data in with the source files, in line compile, run delete…) to the strange algorithmic choices (why ‘unadjust’ via the last 10 years “offset” when NOAA has an ‘unadjusted’ dataset available? why re-write history? why fill in missing data with fantasies based on something that happened 1500 km away? ignoring Nyquist… How much does Fargo really reflect Dallas?) to the lack of basic math skills (exactly HOW do you get 1/100 C precision out of 1 F data?) it is just full of such “stuff”.
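On the precision point, a small worked example (the readings are invented): averaging whole-degree Fahrenheit data and reporting hundredths of a degree Celsius manufactures precision the instrument never had, since each raw reading already carries roughly ±0.5 °F (about ±0.28 °C) of quantization uncertainty.

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32.0) * 5.0 / 9.0

# Whole-degree F readings: each is only good to about +/- 0.5 F.
readings_f = [71, 72, 72, 73]
mean_c = round(sum(f_to_c(f) for f in readings_f) / len(readings_f), 2)
print(mean_c)
```

The arithmetic happily produces two decimal places of Celsius, but the quantization error in the inputs dwarfs that implied precision; averaging many readings narrows the uncertainty only slowly, and never below the systematic siting errors discussed above.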

A few fundamental questions: We now have GISS’s “official” list of 100 “rural” (uncontaminated!) stations.
Realize that the “list” is not official; it is an artifact of the particular “run” of the program. It will change each time GIStemp is run – perhaps slowly, perhaps fast. If some stations are deleted from the record, or some added, or just some artifact of this year’s data changes the ranking, the list will change based on the new data in the run.
These are not well-selected “uncontaminated” or “rural” stations. They are selected based on the character of the data fed into GIStemp, and AFTER the earlier steps of GIStemp have already partly re-imagined the data. So, for example, the GHCN – USHCN blending / “de-offsetterizing” step gets done in STEP0 before the “reference station method” gets applied the first time, IIRC (of several…). So it’s a computer-selected list, partly selected based on things the computer has already done to the data, that changes with each run…
1) What are their (uncorrected!) temperature histories?
2) What are their (contaminated (er, corrected)) temperature histories?

You can directly download this data from NOAA:
GHCN = Global Historical Climate Network (NOAA)
USHCN = US Historical Climate Network (NOAA)
Basic data set: GHCN – ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2
v2.mean.Z (data file)
v2.temperature.inv.Z (station information file)
For US: USHCN – ftp://ftp.ncdc.noaa.gov/pub/data/ushcn
hcn_doe_mean_data.Z
station_inventory
Note that GHCN also has v2.mean_adj.Z (and max and min) datasets available. And USHCN has doe_min and doe_max sets too.
Since the USHCN data are reflected in the GHCN set, it really is a bit murky why we need to go through the machinations of “unadjusting” the data in the first place. It just introduces a big change in valid historical data for no good reason, IMHO.
BTW, for most browsers you can just paste the “ftp://ftp…” line from above into the browser and get the directory presented for drag / drop copy of the data.
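Once downloaded, v2.mean is a fixed-width text file. Here is a parsing sketch; the column layout assumed below (11-character station ID, 1-character duplicate flag, 4-digit year, then twelve 5-character monthly means in tenths of °C, with -9999 meaning missing) follows the common description of the v2 format, but verify it against NOAA’s README before relying on it:

```python
def parse_v2_mean_line(line):
    """Parse one fixed-width record of GHCN v2.mean.
    Assumed layout (check against NOAA's v2 README):
      cols 0-10   station id
      col  11     duplicate flag
      cols 12-15  year
      then 12 five-char monthly means in tenths of deg C,
      with -9999 marking a missing month."""
    station = line[0:11]
    dup = line[11]
    year = int(line[12:16])
    temps = []
    for m in range(12):
        raw = int(line[16 + 5 * m : 21 + 5 * m])
        temps.append(None if raw == -9999 else raw / 10.0)
    return station, dup, year, temps
```

Reading the raw file this way, before any blending or homogenizing, is the simplest baseline against which to compare what GIStemp later produces.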
3) Since these 100 stations are supposedly correcting for UHI in thousands of other heat-affected stations, why does he (Hansen) not just simply use these stations by themselves – no area adjustments at all?
IMHO, the “game” being played here is all about making up data where there are none due to the coverage of thermometers being too sparse both in time and in space. Disjoint bits of data for a given station are glued together from USHCN and GHCN, missing chunks are made up based on “nearby” stations, “anomaly” maps are made with zones whose contents are made up based on what’s available within 1500 km (or more), and with the ocean boxes filled in via an “anomaly map” that is already based on gluing together several sets of measurements (ships, buoys, an already processed satellite anomaly map, and more…) etc.
All of this to give the impression that we have a global coverage for 100 years when the reality is that we have a lot of coverage for the U.S.A. and Western Europe for fragmented chunks of time, and darned near nothing for most of the planet for most of time / history.
http://chiefio.wordpress.com/2009/02/24/so_many_thermometers_so_little_time/
It is all an attempt to ignore the fact that they violate Nyquist and their results are meaningless because of that.
3) Why “fill in” data in missed days, months, years – which will invisibly contaminate previously valid rural station data with the nearest URBAN data for the missing days – and not just plot and use what is actually present in the record?
I think I covered that above. I suspect, but can never prove, that they started with plotting the real data. Then discovered that they didn’t have the data needed to make any conclusions. So they headed down the path of “filling in the gaps with guesses” and believe that their guesses are valid (when they are not).
So we need to blend data sets with different “adjustment” histories… (make up a way to conform these to each other, but ignore the side effects). We need to fill in missing blocks of data (guess based on what you do have, and don’t look too close at the quality of the guesses…) We need to fill in long spans of history of large parts of the globe (make “zone boxes” that are really big, then blend whatever data you have in the box over the whole thing, ignore that Kona may have little to do with Hilo and the rainy side of Kauai – Kilauea may be dramatically different from both, just use them to make up data for the surrounding 1500 km radius of ocean…).
4) Why back date old records if the algorithm for filling in continuously and erroneously adjusted literally hundreds of years of historical records with continuously adjusted new data?
This, IMHO, is a very important place to “stick a fork in it, Pablo!”
Simply take a GISSified station, blink it with the GHCN adjusted data, and ask Why?
All the difference is from the GIStemp magic sauce, and nothing more. They start with GHCN (which gets its data for US stations from USHCN as I understand it) then “adjust it” in strange and wondrous ways. So want to know how much GISS is “making up”? Just compare the two sets.
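That comparison is easy to script once both series are loaded. A hedged sketch, with the data structure invented for illustration (year mapped to twelve monthly values, None for missing):

```python
def adjustment_delta(raw, adjusted):
    """Per-month difference (adjusted - raw) for every year present
    in both series: a quick view of what the processing changed.
    raw / adjusted: {year: [12 monthly temps, None = missing]}."""
    delta = {}
    for year in sorted(set(raw) & set(adjusted)):
        delta[year] = [
            None if a is None or r is None else round(a - r, 1)
            for r, a in zip(raw[year], adjusted[year])
        ]
    return delta
```

Feeding this the unadjusted and adjusted versions of one station’s record, month by month, gives the “blink comparison” in numeric form.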
On my “someday” goal list is to take the GIStemp code and shut off parts of the “homogenized processed data food product” process one at a time to find the sensitivity of the output to the different parts of the GIStemp blender.
I’m well on my way (now that it compiles) but it will likely take me a few more months to get there. But, I have to make money some time or other or my kids don’t eat… so I work it in around the edges. (If any big oil company would like to hire me to work on GIStemp, I’d love to do it, but that magic money spigot the AGW crowd keep saying is buying folks off just never seems to be running when I’m around 8-}

Leon Brozyna
July 21, 2009 11:52 am

Just goes to show – some averages are more average than others.

E.M.Smith
Editor
July 21, 2009 12:02 pm

Rod Smith (09:09:19) : This is interesting. I’ve been retired for nearly two decades, but all the Federal Government software contracts during my working days specified that compliers used must meet ANSI Standards. Surely the extensions you noticed don’t meet ANSI standards.
I remember vividly that compilers for code developed for NASA had to meet that specification.

I believe that the compiler has to provide all the features specified in ANSI, but can also have “extensions” beyond the standard. Programmers are “encouraged” not to use those non-ANSI extensions, but folks use them anyway.
In this case, it’s just an easy way to preload an array with data.
The standard is:
INTEGER FOO(3)
DATA FOO / 1, 2, 3 /
The extension is:
INTEGER FOO(3)/ 1, 2, 3 /
so you can see how folks would take the “shortcut”. Once I figured it out, it didn’t take long to “fix” even if it was in about 1/3 of the programs… The g95 compile error messages pointed me right at it in each program.

tim maguire
July 21, 2009 12:51 pm

Judging by the names and locations, I’d expect to find most of these airports are for small, generally private planes, are grassy, don’t see a lot of activity or generate a lot of heat, and would qualify as rural or semi-rural (quite different from the big city airports we use to get around this great country).
My question is more basic and you have probably already addressed it somewhere. Is there any validity at all to these “adjustments”? Is there any reason to believe that taking a temperature reading and then smoothing it out based on other area temperature readings will get you a more meaningful (or even just meaningful) number?

Mary Hinge
July 21, 2009 1:40 pm

DR (09:08:30)
@Mary Hinge,
What makes you think satellites would not be sensing effects from UHI?
Are you being serious? Are you suggesting that the current sharp rise in lower atmospheric temperatures is due to a sudden increase in urbanization? Isn’t it infinitely more likely that last month’s high surface temperature anomalies from GISS (which include the oceans, and as far as I know there have been no Jules Verne-type cities built anywhere on this 70% of the Earth’s surface) are now showing up in the lower atmosphere? This is still my main point, and despite the bluster nobody here has yet come up with an alternative mechanism, but will still be saying how ‘criminally’ wrong GISS was last month.
GISS was right, you are wrong, time to apologise to them maybe?

Jim
July 21, 2009 1:49 pm

Mary Hinge (10:10:56) : That was a nice try, but typical of warmers, you did a shake and bake on the data. You can’t use normalize only on GISS … we’ve been down this road before.
Normalise – Scales and offsets all samples so they fall into the range 0..1
See the fairly rendered chart instead:
http://www.woodfortrees.org/plot/gistemp/from:1980/plot/uah/from:1980/plot/wti/from:1980
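For reference, a minimal version of that “normalise” operation as described above, which shows why applying it to one series but not the others invalidates a trend comparison: the rescaling multiplies the series’ slope by 1/(max − min), so a normalised GISS series cannot be compared unit-for-unit against un-normalised UAH.

```python
def normalise(xs):
    """Scale and offset samples so they span the range 0..1
    (the woodfortrees description of 'normalise')."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

series = [0.0, 0.2, 0.4]   # anomaly-like values, trend 0.2 per step
print(normalise(series))   # trend is now 0.5 per step, units gone
```

The data are unchanged in shape, but the slope (and the units) are not, which is the “shake and bake” being objected to.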

sky
July 21, 2009 1:51 pm

E. M. Smith:
This question has to be asked: have you found anything in GISTEMP that is done right?

sky
July 21, 2009 2:10 pm

Mary Hinge (13:40:59):
Let’s look closely at the June global anomalies: UAH=.001, RSS=.075, GISS=.640, GISS(land)=.730. Satellites cover virtually the entire globe; land stations and SST reports from ships of opportunity do not. So where do you get the unhinged notion that “GISS was right?”

DJ
July 21, 2009 2:15 pm

My posts are based in science and my own personal interpretations. Perhaps rather than slaps you deal with the substance.
Sure it was fun being a “sceptic” while the La Nina and solar minimum coincided (and the Eurasian snow storms of Jan 2008 were a well timed freak event on which to anchor silly stories of ice ages), but all the data are going in the wrong direction and fast.
REPLY: And people like yourself are driving it there, fast. Take a look at all the adjustments being made in the POSITIVE direction. No complaints from you about those; why is that? Confirmation bias. It’s OK then, by your view, to remove datasets that go against what you believe? NCDC thinks so.
http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/merged-product-v3.pdf
“In the ERSST version 3 on this web page we have removed satellite data from ERSST and the merged product. The addition of satellite data caused problems for many of our users (WHO???). Although, the satellite data were corrected with respect to the in situ data as described in reprint, there was a residual cold bias that remained as shown in Figure 4 there. The bias was strongest in the middle and high latitude Southern Hemisphere where in situ data are sparse. The residual bias led to a modest decrease in the global warming trend and modified global annual temperature rankings.”
if you were a scientist of integrity you’d
1- use your name here instead of hiding behind BoM IP addresses while at work
2- take the issue of dataset removal seriously.
But you won’t do either of those things. Feel free to prove me wrong. – Anthony Watts

Brandon Dobson
July 21, 2009 2:16 pm

To tty, AllanM et al, regarding medieval maps and the colonization of Greenland:
If any significant research is done on the topic, the statement “As for 16th century maps of Antarctica, may I point out that nobody visited Antarctica until 1820.”
has no more weight than Columbus’ claim that he “discovered America”, even though there were clearly long-established Native Americans prior to his landing. Such attitudes reflect the ignorance of Europeans in the Dark Ages, and the widely held belief that only white-skinned Christians have any valid claims to discovery. A thorough discussion of the medieval maps can be found at:
http://www.saudiaramcoworld.com/issue/198001/piri.reis.and.the.hapgood.hypotheses.htm
The alleged “discovery” of Antarctica in 1820 is, of course, the main point of contention. Quoting from the above website:
“The cartography of the Age of Discovery, for instance, often seems to have been independent of the voyages themselves; that is, certain early maps of America contain features before their supposed date of discovery.
The most notable example of this is the map of America made by Glareanus, a famous Swiss poet, mathematician and theoretical geographer, in the year 1510. This map, which was probably based on the 1504 de Canerio map, clearly shows the west coast of America 12 years before Magellan passed through the strait that bears his name. In other words, Piri Reis was not the only one to include anachronous information.
The map of Glareanus, furthermore, was reproduced in Johannes de Stobnicza’s famous 1512 Cracow edition of Ptolemy and is unquestionably similar to the map of Piri Reis. Did Piri Reis have a copy of this early printed edition of Ptolemy before him when he drew his map? Is this what Piri Reis meant by “maps drawn in the time of Alexander the Great”?
Aware that ideas that deviate from traditional scientific beliefs get short shrift in the scientific community – as did, for instance, Wegener’s theory of continental drift, now widely accepted – Hapgood therefore pointed out in Maps of the Ancient Sea Kings that civilizations have vanished before. No one knew where Sumer, Akkad, Nineveh and Babylon were until 19th-century archeologists dug them up. And as late as 1970 – only 10 years ago – no one even suspected the existence of a civilization called Ebla (See Aramco World, March–April 1978). It had existed. It was real. But it vanished without a trace. Why then, argue Hapgood advocates, couldn’t there have been other civilizations that vanished?
The same is true of Hapgood’s unspecified advanced technology. Greek fire – something like napalm – was developed in the ninth century but its composition has never been duplicated. Arab scientists of the Golden Age were able to perform delicate eye surgery – using advanced instruments – but these skills were later lost. And in 1900, according to Scientific American, archeologists discovered an astoundingly advanced gearing system in a Greek navigational instrument. It dated back to 65 B.C. and its existence had never been suspected.
Although unquestionably an amateur theoretician, he did do his homework and had it thoroughly checked by professionals. The U.S. Air Force SAC cartographers, for example, worked with him for two years and fully endorsed his conclusions about Antarctica.
Furthermore, the Hapgood team identified 50 geographical points on the Finaeus map, as re-projected, whose latitudes and longitudes were located quite accurately in latitude and longitude, some of them quite close to the pole. “The mathematical probability against this being accidental,” says Hapgood, “is astronomical””
Scientific studies have been done in the Ross Sea, which show the effects of river-borne sedimentation:
“In 1949 coring was done to take samples of the ice and sediment at the bottom of the Ross Sea. They clearly showed several layers of stratification, meaning the area went through several environmental changes. Some of the sediments were of the type usually brought down to the sea by rivers. Tests done at the Carnegie Institute in Washington DC, which date radioactive elements found in sea water, dated the sediments at about 4000 BC, which would mean the area was ice free with flowing rivers up until that time – exactly what is recorded on the Reis and Finaeus maps.”
If science means anything anymore, these findings cannot be dismissed by armchair theorists.
Whether Greenland’s naming was to promote colonization, or as other sources claim, as an apt description, the point is moot because no one disputes that Greenland’s climate was warmer around 1000 AD, and that farming once supported colonists. The ruins of Hvalsey Church are clearly visible, and are one of the best-preserved signs of Middle Age settlements in Greenland.
And I should point out that the central theme of WUWT, that climate change is natural, cyclical, and not necessarily human-caused, is a “deviation from traditional scientific beliefs” at this period of time. To dismiss anything that is outside the realm of consensus undermines the whole purpose of finding the truth.

Alexej Buergin
July 21, 2009 2:28 pm

To calm down the gentleman who calls himself the m-word, the b-word is only indecent when it is used alone or in combination with -you or -up. In combination with -OFF it simply means “go away”.

Ron de Haan
July 21, 2009 3:18 pm

There is a new report that provides evidence connecting the earth’s magnetic field to warming: http://www.appinsys.com/GlobalWarming/index.htm?nn=2009072101
You can download two PDF Files, the original report and the latest magnetic field map.