Equipment Distribution in the US Climate Network

I just finished several days of data compilation and cross-checking in preparation for release of the first set of numbers from my www.surfacestations.org project.

One of the things I’m doing is looking at what kind of equipment is used and how widely distributed it is. Here are some numbers that illustrate the makeup of the USHCN network of 1221 weather stations:

NIMBUS 196
MMTS 674
CRS w/ MAX-MIN 251
ASOS HYGROTHERM 64
THERMOGRAPH 5
OTHER NS EQUIP 19
UNKNOWN 12
Total: 1221

[Figure: USHCN equipment distribution pie chart (USHCN_equipment_piechart.png)]

Source Data: NCDC MMS

Note that the vast majority of the temperature sensors are now the MMTS / Nimbus electronic type, comprising 71% combined, with the older Cotton Region Shelters and mercury max-min thermometers comprising only 21% of the network now. ASOS systems, mostly at airports, comprise 64 stations, or 5%. There are 19 official climate stations where nonstandard consumer-level equipment has been substituted, comprising 2% of the network.
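The percentages above follow directly from the station counts in the table; a quick sketch of the arithmetic (counts taken verbatim from the table above):

```python
# Station counts by equipment type, from the NCDC MMS table above
counts = {
    "NIMBUS": 196,
    "MMTS": 674,
    "CRS w/ MAX-MIN": 251,
    "ASOS HYGROTHERM": 64,
    "THERMOGRAPH": 5,
    "OTHER NS EQUIP": 19,
    "UNKNOWN": 12,
}
total = sum(counts.values())  # 1221 stations in the USHCN network

# Combined share of the electronic MMTS/Nimbus sensor types
electronic = counts["MMTS"] + counts["NIMBUS"]
print(f"MMTS + Nimbus: {100 * electronic / total:.0f}%")                # 71%
print(f"CRS max-min:   {100 * counts['CRS w/ MAX-MIN'] / total:.0f}%")  # 21%
```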

This is an important thing to know; keep it in mind because it goes hand in hand with the upcoming station site quality analysis, based on the 25% of the total network that has been surveyed.


21 thoughts on “Equipment Distribution in the US Climate Network”

  1. Anthony,
    For the sake of accuracy and consistency, can you provide the data sources for this chart, currency of the data, and any other relevant information such as the differences between the types of equipment? Thanks.

  2. When you get ready to do
     the quality analysis, I have some suggestions for
     procedures and protocols.
     Others from CA will chime in, I am sure.

  3. Steve – don’t wait – chime in now as it is in progress.

    Gary, there is a link to the data source under the chart at NCDC. The data is current as of this week in terms of accessing it, but I can’t speak for NCDC on its currency, since they provide no easy-to-extract record for it.

    The differences between the equipment is coming, patience.

  4. Keep it in mind? I thought that equipment was supposed to be consistent.

    Yes, I am eagerly awaiting preliminary results. (Will you be using the “1-5” method, or holding off until all the results are in?)

  5. Hmmm…I half-expected to see “coin toss” as one of the methodologies, based on the photographs at some of those sites.

  6. Ideally you would want exemplars of class 1, 2, 3, 4, 5.

    You would use these to train and normalize the people doing the assessment.

    Then you have evaluation sheets for every site with the criteria listed.
    (vegetation height, etc.) so that the assessment has traceability. Site X was ranked a 3 because of x, y and z.

    Then you’d have a number of “raters”. Each site gets a rating from each rater.

    Randomize the order of presentation, as raters can often regress to the mean during long periods of rating (everything becomes a 3).

    Then you have to check for homogeneity between raters.
    If two people call it a 5 and one calls it a 1, then you have to address that issue.

    If you don’t want to be that elaborate, then at least keep eval sheets for every site. People will question the rating.
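The inter-rater consistency check described in this comment could be sketched along these lines (the station names, raters, and scores here are made-up examples, not project data):

```python
# Hypothetical ratings: three raters each score a site on the 1-5 scale
ratings = {
    "Site A": [3, 3, 4],
    "Site B": [5, 5, 1],   # one rater strongly disagrees
    "Site C": [2, 2, 2],
}

# Flag any site where raters disagree by more than one class,
# so the discrepancy can be reviewed against the eval sheets.
for site, scores in ratings.items():
    spread = max(scores) - min(scores)
    if spread > 1:
        print(f"{site}: ratings {scores} disagree (spread {spread}), needs review")
```

Running this flags only Site B, where two raters call it a 5 and one calls it a 1, which is exactly the case the comment says must be addressed.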

  7. Bill H, you are correct, and that’s been fixed. Thanks for pointing it out!

    In my late night haste to be done and get to bed I missed an error that prevented 4 stations from being accounted for.

    The problem had to do with manual data entry at NCDC. One case had a “Sixes thermometer” in place of the standard temperature equipment (a Sixes thermometer is used in the evapotranspiration pan), and the other three were entered as HYGROTHERMOGRAPH rather than THERMOGRAPH. I searched on THERMOGRAPH, which just happened to be the one entry they did differently.

    All fixed now, and new results with total displayed.

  8. “If you don’t want to be that elaborate, then at least keep eval sheets for every site. People will question the rating.”

    Naturally. One would expect nothing less.

    (Too bad it’s come to the point where you actually have to spell it out.)

  9. The beauty of having the photos and survey data – and presumably the eventual ratings themselves – posted openly on the web (surfacestations.org) is that anyone can come in and “peer review” the ratings. That is assuming, of course, they are familiar with the standards, which are also available on the site.

    A wiki, comment log, message board, or something of that ilk could be used as the rating and peer review mechanism. It would certainly provide an open, reviewable record of why people think a “controversial” station should be rated as it is.

    Of course some will argue that mere amateurs are doing the rating, not professionals. But again, the openness and availability of the record allows the pros out there to do the rating themselves.

  10. Oh, don’t fool yourself. Realclimate et al. will attack every little bit of minutiae they can. First and foremost they will argue that your sample wasn’t purely random, and probably that there was some malfeasance in the sample selection as well. I calculated the confidence interval of the project a while ago and ended up in a wiki argument that lasted paragraphs. Ironically, people who live and die by their faith in statistical analysis will not allow statistical analysis to be used when it contradicts them. I’m sorry, Anthony, but you have no other option than to survey every single last station. Even then they won’t be satisfied, but it will take away many of their knee-jerk arguments.
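For what a confidence-interval calculation on a sample of this size looks like, here is a rough sketch. It assumes simple random sampling (exactly the assumption the comment above says critics will dispute) and uses a worst-case proportion; the sample size of 305 is just 25% of 1221, rounded:

```python
import math

N = 1221   # total USHCN stations
n = 305    # roughly 25% surveyed (illustrative)
p = 0.5    # worst-case proportion for margin-of-error purposes

# Standard error of a proportion, with the finite population correction
# that applies when sampling a large fraction of a fixed network
se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
margin = 1.96 * se  # 95% confidence level
print(f"95% margin of error: +/-{100 * margin:.1f} percentage points")  # +/-4.9
```

So under a random-sampling assumption, a 25% survey already pins any network-wide proportion down to within about five percentage points.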

  11. GTTofAK: With every survey we do, we take another small bite out of that objection. They seem to be rolling in steadily, lately. I’d wager we’re closer to ~28%, now. I have some more time next week – should be good for 2-3 more surveys. Onward!

    Anyway, I’m greatly looking forward to seeing your first pass of 25%, Anthony.

  12. Ideally, the ranking would be done by people trained in a protocol with no knowledge of the import of their ranking. Absent that, transparency is a good antiseptic.

    If you are diligent and fair, others merely expose themselves with their objections.

  13. I’m sort of on the fence at the moment on releasing what I have so far because I’m somewhat worn down by the energy I have to keep expending to ward off the naysayers.

    Beyond my own surveys and a couple of suggested sites to look at USHCN2, I’ve never had any influence over what sites have been selected for survey. It’s pure luck of the draw as to who signs up and what opportunities they have.

    The data and method of course will be public and replicable by anybody, even those folks who hide behind rocks and take pot shots on the Internet. So it’s a tough call: release it now and take the criticisms that will inevitably come, or refine methods further, keep collecting, and release later when it’s all done, with less criticism since they can’t claim cherry-picking or selective sampling.

    It’s taken two months to get 25%; I figure the final 75% will take 8-10 because there’s always going to be those problem stations we can’t get into easily.

  14. “It’s taken two months to get 25%; I figure the final 75% will take 8-10 because there’s always going to be those problem stations we can’t get into easily.”

    What makes you think this? Has there been a drop in momentum? I would anticipate the opposite given all of the publicity.

    Of course there will be a few stations that will take an extended effort to survey but I wouldn’t expect the number to be that significant.

  15. “Has there been a drop in momentum? I would anticipate the opposite given all of the publicity.”

    Well, it depends on where the “momentum” lives. If the stations are “remote”, I think it will take a while. Station surveying is not done spur of the moment.

    I would anticipate that the team gets to 50% on a similar or better trajectory, then takes maybe 7 or 8 months to get to 90%, and will accelerate to 100%. The last few stations will go quickly as news gets out that they are the only ones remaining.

  16. Anthony,

    Is there enough information from MMS to get the instrument distribution over an extended history? From my experience, the data are thin before 1948, but even from that time it would be interesting to compare the thermometers used then vs. now as well as through the intervening decades.

    This is probably on your to do list already, but thought I’d ask.

  17. John Goetz, I was looking over the surveys for Indiana to see if there were any near me that hadn’t been done. I noticed that the station in Huntington was surveyed by you. Small world — I live about 30 miles from Huntington. Do you live in Roanoke by any chance?

  18. Anthony,

    Biggest issues, I think, will be all the AFBs.

    Perhaps there is an angle to get them all.

  19. Re: AFBs – maybe a call to Inhofe’s office to see if he could pull some strings? A blanket authorization would be handy, I’m sure.

  20. Chris D,

    That’s a good idea.

    Anthony, perhaps we come up with a list of AFB locations and a package, and make it easy for the AF to comply.

    Same with the FAA.
