Early in the project, one of the criticisms leveled against the surfacestations.org effort was that there had been “cherry picking” in the selection of stations to survey, and that the project wasn’t reaching a wide area. In this first map of its kind, one can clearly see how the quality distribution of the 460 of 1,221 stations surveyed so far looks, and that those claims weren’t valid. The results clearly show that the majority of USHCN stations surveyed so far have compromised measurement environments. The question, then, is this: have these microsite biases been adequately accounted for in the surface temperature record?
As you can see below, there is some clustering near population centers and a volume bias toward the east and west coasts. There are sparse areas in the Midwest that I hope can be surveyed soon, but the distribution is nationwide. What really stands out, though, is that there are few sites rated CRN 1/2 and many more rated CRN 3/4/5. This speaks to the concern that our measurement network is broadly affected by microsite biases and urbanization encroachment.
Here is how this map came about: Henry suggested in comments that a map showing the distribution of the CRN ratings would be useful. I agreed, but lamented that I’m overloaded with work at the moment. The beauty of this project, though, is its capable volunteers.
Volunteer Gary Boden came to the rescue and built the map below from the Excel spreadsheet of ratings that I’ve made publicly available for some time now. You can download my data set in Excel format at www.surfacestations.org. See his plot below:
Here is the same data presented in Pie Chart Form:

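If you want to reproduce this kind of breakdown yourself from the spreadsheet, here is a minimal sketch in Python. It is not how Gary built his map; the file name and the column names (CRN Rating, Lat, Lon) are assumptions on my part, so check the actual workbook from www.surfacestations.org for its real layout.

```python
# Minimal sketch: CRN rating pie chart and a simple scatter "map" from the
# surfacestations.org spreadsheet. File name and column names are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

stations = pd.read_excel("surfacestations_ratings.xls")   # hypothetical file name
surveyed = stations.dropna(subset=["CRN Rating"])         # keep only rated (surveyed) sites

# Pie chart of the rating distribution
counts = surveyed["CRN Rating"].value_counts().sort_index()
counts.plot.pie(autopct="%1.0f%%", title="Surveyed USHCN stations by CRN rating")
plt.ylabel("")
plt.show()

# Scatter plot of surveyed sites, colored by rating
colors = {1: "blue", 2: "green", 3: "yellow", 4: "orange", 5: "red"}
plt.scatter(surveyed["Lon"], surveyed["Lat"],
            c=surveyed["CRN Rating"].map(colors), s=10)
plt.title("Surveyed USHCN stations by CRN rating")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.show()
```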
For reference, as originally defined in the NOAA Climate Reference Network Handbook, here are the site quality rating descriptions:
Class 1 – Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19 degrees). Grass/low vegetation ground cover <10 centimeters high. Sensors located at least 100 meters from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots. Far from large bodies of water, except if it is representative of the area, and then located at least 100 meters away. No shading when the sun elevation is >3 degrees.
Class 2 – Same as Class 1 with the following differences: surrounding vegetation <25 centimeters; no artificial heating sources within 30 meters; no shading for a sun elevation >5 degrees.
Class 3 (error >= 1C) – Same as Class 2, except no artificial heating sources within 10 meters.
Class 4 (error >= 2C) – Artificial heating sources within 10 meters.
Class 5 (error >= 5C) – Temperature sensor located next to/above an artificial heating source, such as a building, rooftop, parking lot, or concrete surface.
Given that the generally agreed-upon rise in surface temperature over the last century is approximately 0.8 degrees Celsius, and seeing that the majority of the climate monitoring stations surveyed have nominal errors nearly equal to or larger than that value, these microsite bias errors are a cause for concern.
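To make that arithmetic explicit, here is a tiny sketch that simply writes out the class error bounds quoted above and compares each one to the roughly 0.8C century trend. Classes 1 and 2 carry no stated error bound in the excerpt, so they are treated as zero here.

```python
# Nominal siting-error bounds (degrees C) from the CRN class descriptions above.
# Classes 1 and 2 have no stated error bound in the excerpt, so use 0.0 here.
CRN_ERROR_C = {1: 0.0, 2: 0.0, 3: 1.0, 4: 2.0, 5: 5.0}

CENTURY_TREND_C = 0.8  # approximate agreed-upon surface warming over the last century

for crn_class, error in CRN_ERROR_C.items():
    verdict = "at or above" if error >= CENTURY_TREND_C else "below"
    print(f"CRN {crn_class}: nominal error >= {error} C ({verdict} the century trend)")
# Classes 3, 4, and 5 -- the majority of stations surveyed so far -- all carry
# nominal error bounds at or above the size of the century-scale trend itself.
```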
We need more stations surveyed; this upcoming Christmas travel season would be a perfect opportunity to help us fill in the Midwest. If you’d like to volunteer and survey a station or two, visit www.surfacestations.org and sign up.

Anthony –
Thanks to you and Gary (and all the volunteers doing the surveys).
With this chart, people will now see what the whole “how not to measure” series is about. No cherry picking, but rather an overabundance of poor sites.
And the US is supposed to have the highest quality sites (makes you wonder what the ROW really looks like).
Let’s see how long it takes for THIS graphic to make the rounds…
BTW, I realize that you’re only doing the lower 48, but here’s a website showing some of the Alaskan COOP sites (just for fun).
http://ccc.atmos.colostate.edu/Alaskacoopsites.php
OK, so what is NOAA’s or GISS’s response to this so far? Are they waiting for you to finish before doing anything, or are they busy revising the numbers according to the ratings you are documenting? I see a big fight over the ratings issue versus whatever so-called UHI adjustment Hansen made. In fact, in order to make any UHI adjustment, Hansen of necessity would have had to have documentation on the level of compliance of each site. So when will his documentation be forthcoming, or is this another set of data that is off limits to the scientific community?
Henry, thanks for that link to Alaska stations.
What is sad is that every one of those stations is out of compliance. I was particularly dismayed by the MMTS temperature sensor less than a foot away from the newly constructed building. In winter, do you think that will read warmer?
Thought you’d get a kick out of the stations “up north”. There, they’re REALLY placed for observer convenience.
I suppose as long as you give proper credit, you could probably add an album to surfacestations.
I have been struck over the months of your project, Anthony, with the complexity of the task Hansen has attempted. As I look at this graph, the temp. error is at best a guess.
I have been thinking for some time about how to quantify the errors your research (and researchers) have discovered. Suppose you were able to find a large plot of land in the Midwest (relatively flat, with large areas without trees) with a sizeable building on it. Multiple MMTS units could be sited there, one at each of the CRN levels, to determine the actual error induced. That would also introduce other variations, i.e., on the north side of the building, on the south side, north by the AC, south by the AC, with asphalt or without, ad infinitum. By maintaining this over time (several years), you could get a realistic level of error, one I believe would be higher than the conservative numbers you are using.
Now that my head is throbbing, I will go back to my private thoughts on how it was the Chinese who took advantage of the movement to disable GOES 12.
I agree with Henry. If one were to try to “adjust for microsite/UHI conditions,” he would need, for GISS alone, 1,221 applicable algorithms. Then what happens when something changes? Hansen can’t write programs that fast. I just don’t think that we can trust the numbers coming out of the system.
Nope, the science isn’t even close to properly begun; we haven’t even learned how to collect the data properly yet.
Bill
Anthony,
It is shocking to see how much red, orange, and yellow there is on the US map. It is one thing to look at the pie charts, but one does not get the real magnitude of the problem until you see the dots spread across all the states. How can anyone look at this chart and say, “Well, there may be problems with individual stations, but overall we have captured the global warming trend”? No, what they have captured is the output of some very biased stations.
As a suggestion to Gary:
Consider plotting all of the non-surveyed sites as empty circles to show that the distribution of all USHCN sites is not very even (show that it is not simply a “cherry-picked” distribution).
Any chance someone could email that map to the guys sunning themselves in Bali while they decide the fate of Mankind?
Very interesting plots.
A small comment: the orange and yellow colors in the map and the pie chart do not correspond to the same CRN classes.
REPLY: I posted the older version of the pie chart graphic by mistake; that’s fixed now. Thanks for pointing it out.
Have CRNs 3 and 4 been mixed up in either the map or the pie chart? The colours seem to have been swapped.
REPLY: I posted the older version of the pie chart graphic by mistake; that’s fixed now. Thanks for pointing it out.
The other telling part is the following from the charts:
CRN5 = 55%
Error for CRN5 = 5C
More than half the USHCN stations have an error equal to or greater than 5C? And we’re only 0.6C above a 27-year-old reference (NASA is still using 1951–1980 as its base period).
Try putting THAT error bar over the “surface record” line grafted to the charts…
It appears that what we have isn’t AGW, but DGW (Data Global Warming).
REPLY: Henry, I think you mean CRN 4; there isn’t a 55% CRN 5 volume.
Anthony, I linked to this post at another blog I frequent, and I got the following response (actually from a guy that I know and respect):
I scanned CA’s archives and couldn’t find any such reference. Any idea what he’s talking about?
REPLY: Yes, it’s John V’s analysis using opentemp, but it was done very early in the game, when there were only 17 CRN1 stations and a lopsided survey distribution, and I don’t think there was a valid sample size to detect anything. I think he was a bit too eager. We’ll run it again when we get a better volume. Another problem is that he ran the test using data all the way back to 1920, while the CRN ratings are for our current time frame. A run over the last 10 years might yield more relevant results.
I’ve made it a point not to do any analysis beyond the posting of census and distribution data, because I think it’s premature to go looking for timeline divergence signatures until we get a significant majority of the network surveyed. Right now we are at 37.5%.
Looks like we need some people in Texas.
Iowa looks nekked too.
I’ll send out some feelers. Got family in Oklahoma.
Stan –
John V’s website might get you more info/answers:
http://www.opentemp.org/main/
I scrolled through the Alaska photos, and it occurred to me that those sites and the various CRN-rated ones are likely just fine for average civilian weather reporting and forecasting. Where the stupid occurs is when supposedly educated people think any of this data is useful for fractional-degree analysis or prediction, even with all the hand-waving “adjustments.” Predicting temperature changes to the accuracy of a hundredth of a degree in time frames out to 2010, never mind 2100, seems, well, just nonsensical. One problem with modern digital instrumentation is that the readout is often NOT indicative of the actual instrument accuracy. Old analog meters used to be the limiting factor, as were LIG thermometers. I have seen 0.1% readouts on digital instruments with 3% basic accuracy. Maybe the MMTS units are better; nevertheless, this can lead to all sorts of numerical foolishness, and in this case, upon which gazillion-dollar world policy is to be based? Gimme a break!
REPLY Henry, I think you mean CRN 4, there isn’t a 55% CRN 5 volume
Well, there WAS, before the chart changed…
If I was able to edit, I would.
But it still says (including CRN 4/5) that 69% of the stations could have an error of 2C or greater, with 14% being 5C or greater.
It’s times like this, a statistician comes in handy.
Because a quick question is: Based on 37.5% of the network, with the percentages shown, and the errors listed for each percentage, what is the projected error for the network?
That NOAA US surface temp chart has never listed a +/- value. I’m beginning to see why not.
REPLY: Hi Henry, I’m sorry, but there was never a 55% share of CRN 5 stations. There WAS and IS a 55% share of CRN 4. The only thing that has changed is the color scheme on the pie chart for CRN 3/4. Gary Boden’s colors were reversed from what I normally use for CRN 3/4, and I initially posted the pie chart using the older color scheme. The colors changed; the numbers did not. Not trying to pick a fight, just trying to clarify.
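For what it’s worth, here is one rough way to read the “projected error” question above, using only the figures cited in this thread (55% CRN 4 at >= 2C, 14% CRN 5 at >= 5C). It is just a weighted average of the handbook’s nominal per-class error bounds: it assumes the 37.5% surveyed so far is representative of the whole network, counts the CRN 1/2/3 share as zero (so it understates the total), and says nothing about how these nominal siting errors actually show up in the published record.

```python
# Back-of-the-envelope reading of the "projected error" question above.
# Uses only the shares cited in this thread: 55% CRN 4 (error >= 2C) and
# 14% CRN 5 (error >= 5C). The remaining 31% (CRN 1/2/3) is counted as zero,
# and the surveyed 37.5% of the network is assumed to be representative.
shares_and_errors = {
    "CRN 4": (0.55, 2.0),
    "CRN 5": (0.14, 5.0),
}

weighted_floor = sum(share * error for share, error in shares_and_errors.values())
print(f"Weighted lower bound on nominal siting error: {weighted_floor:.1f} C")
# -> about 1.8 C, before the CRN 1/2/3 share contributes anything
```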
Anthony,
This is off-topic and I tried to e-mail you privately but it was chucked back. Anyway, I picked this up today and thought it might be of interest.
http://formerspook.blogspot.com/2007/11/when-tropical-storm-isnt-tropical-storm.html
Regarding: Bob L. (18:25:18) :
That is an excellent idea. Take one location, a few hundred yards in radius, and place monitors over pavement, near buildings, by AC exhaust, and at other common violations, versus a properly placed monitor, and record data for a year. It should be done in several climate types, such as the upper Midwest, Southeast, desert Southwest, etc. It would be very interesting to see the differences in readings between proper and improper measurements. Anthony, make it so!
Clayton B:
Including the unsurveyed sites would clutter this plot. Its intention is only to show the distribution of rankings so far. The surfacestations site has plots showing both surveyed and unsurveyed sites.
I think it is already possible to conclude that satellite (MSU) data is our best available record for the last 28 years.
It would, however, be very interesting to sort out the good stations and look at their combined trends. That would give us a reasonable measure of Hansen et al.’s work on adjustments. Similar trends would strengthen Hansen’s case, and vice versa.
Anthony,
I live in Dallas, TX, have relatives in Tulsa, OK, and travel to Austin, TX from time to time. I would be willing to survey any nearby stations, or stations between these locations, that you can identify. I would need instructions, and any necessary equipment identified.
I would like to volunteer to help in this project. I live in Houston, TX. It appears that most of the unsurveyed TX sites are outside my easy commute, but I may be able to get to a few. TX is a very large state, and gasoline prices discourage longggg side trips. Santa Claus is bringing me the electronics.
I am planning a trip to Iowa in late April 2008. I could probably make it to seven of the sites in the SE quadrant of Iowa. Is this too late to help?
Cherrypicking, is it?
So quick to accuse!
So slow to check it out!
More dim bulbs from the “Lights=0” side of the aisle.
Especially as the original breakdown was much the same as the current lot.
One Freudian Word: PROJECTION