Given all of the recent discussion about issues with the surface network, I thought it would be a good idea to present this excellent paper by Lin and Hubbard and what they discovered about the accuracy, calibration, and maintenance of the different sensors used in the climatic networks of the USA. Pay particular attention to the errors cited for the ASOS and AWOS aviation networks, which are heavily used by GHCN.
X. LIN AND K. G. HUBBARD
School of Natural Resource Sciences, University of Nebraska at Lincoln, Lincoln, Nebraska
ABSTRACT
The biases of four commonly used air temperature sensors are examined and detailed. Each temperature transducer consists of three components: temperature sensing elements, signal conditioning circuitry, and corresponding analog-to-digital conversion devices or dataloggers. An error analysis of these components was performed to determine the major sources of error in common climate networks. It was found that, regardless of microclimate effects, sensor and electronic errors in air temperature measurements can be larger than those given in the sensor manufacturer’s specifications. The root-sum-of-squares (RSS) error for the HMP35C sensor with the CR10X datalogger was above 0.2°C and increased rapidly at both lower (<-20°C) and higher (>30°C) temperatures. Likewise, the largest errors for the maximum–minimum temperature system (MMTS) were at low temperatures (<-40°C). The temperature linearization error in the HO-1088 hygrothermometer produced the largest errors when the temperature was lower than -20°C. For the temperature sensor in the U.S. Climate Reference Network (USCRN), the error was found to be 0.2° to 0.33°C over the range -25° to 50°C. The results presented here are applicable when data from these sensors are applied to climate studies and should be considered in determining air temperature data continuity and climate data adjustment models.
Introduction
A primary goal of air temperature measurement with weather station networks is to provide temperature data of high quality and fidelity that can be widely used for atmospheric and related sciences. Air temperature measurement is a process in which an air temperature sensor measures an equilibrium temperature of the sensor’s physical body, which is optimally achieved through complete coupling between the atmosphere and air temperature sensor.
The process accomplished in the air temperature radiation shield is somewhat dynamic, mainly due to the heat convection and heat conduction of a small sensor mass. Many studies have demonstrated that both good radiation shielding and ventilation are necessary to reach higher accuracy in air temperature measurements (Fuchs and Tanner 1965; Tanner 1990; Quayle et al. 1991; Guttman and Baker 1996; Lin et al. 2001a,b; Hubbard et al. 2001; Hubbard and Lin 2002). Most of these studies are strongly associated with air temperature biases or errors caused by microclimate effects (e.g., airflow speed inside the radiation shields, radiative properties of the sensor surface and radiation shields, and effectiveness of the radiation shields). Essentially, these studies have assumed the equation governing the air temperature to be absolutely accurate, and the investigations have focused on the measurement accuracy and its dependence on how well the sensor is brought into equilibrium with the atmospheric temperature. Such findings are indeed very important for understanding air temperature measurement errors in climate monitoring, but it is well known that, in addition to microclimate-induced biases or errors, measurements also include the electronic biases or errors embedded in the temperature sensors and their corresponding data acquisition system components.
Three temperature sensors are commonly used in the weather station networks: a thermistor in the Cooperative Observing Program (COOP), which was formally recognized as a nationwide federally supported system in 1980; a platinum resistance thermometer (PRT) in the Automated Surface Observing System (ASOS), a network that focuses on aviation needs; and a thermistor in the Automated Weather Station (AWS) networks operated by states for monitoring evaporation and surface climate data.
Each of these sensors has been used to observe climate data over at least a ten-year period in the U.S. climate monitoring networks. The U.S. Climate Reference Network (USCRN) was established in 2001 and has gradually been deployed nationwide for monitoring long-term, high-quality surface climate data. In the USCRN system, a PRT sensor was selected for the air temperature measurements. All sensing elements in these four climate monitoring networks are temperature-sensitive resistors, and the temperature sensors are referred to as the maximum–minimum temperature system (MMTS) sensor, HMP35C, HO-1088, and USCRN PRT sensors, respectively, in the COOP, AWS, ASOS, and USCRN networks (see Table 1).
The basic specifications of each sensor system including operating temperature range, static accuracy, and display/output resolution can be found in operation manuals. However, these specifications do not allow a detailed evaluation, and some users even doubt the stated specifications and make their own calibrations before deploying sensors in the network. In fact, during the operation of either the MMTS sensor in the COOP or HO-1088 hygrothermometer in the ASOS, both field and laboratory calibrations were made by a simple comparison using one or two fixed precision resistors (National Weather Service 1983; ASOS Program Office 1992).
This type of calibration is only effective under the assumption of time-invariant sensors with a purely linear relation of resistance versus temperature. For the HMP35C, some AWS networks may regularly calibrate the sensors in the laboratory, but these calibrations are static (e.g., calibration at room temperature for the data acquisition system).
It is not generally possible to detect and remove temperature-dependent bias and sensor nonlinearity with static calibration. In the USCRN, the PRT sensor was strictly calibrated from -50° to +50°C each year in the laboratory. However, this calibration does not include its corresponding datalogger. To accurately trace air temperature trends over the past decades or in the future in the COOP, AWS, ASOS, and USCRN and to reduce the influence of time-variant biases in air temperature data, a better understanding of electronic bias in air temperature measurements is necessary.
The objective of this paper is to carefully analyze the sensor and electronic biases/errors induced by the temperature sensing element, signal conditioning circuitry, and data acquisition system.
…
This implies that the MMTS temperature observations are unable to discriminate ±0.25°C changes in the lower temperature ranges (Fig. 5 and Table 2). The interchangeability of the MMTS thermistors is ±0.2°C from -40° to +40°C and ±0.45°C elsewhere (Fig. 4). Two fixed resistors (R2 and R3) with a 0.02% tolerance produced larger temperature measurement errors at low temperatures, but the error caused by the fixed resistor R19 in Fig. 1 can be ignored. Therefore, the RSS errors in the MMTS range from 0.31° to 0.62°C over temperatures from -40° to -50°C (Fig. 5).
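As a quick illustration of how the root-sum-of-squares bookkeeping works, here is a minimal sketch in Python. Only the ±0.2°C/±0.45°C interchangeability figures come from the text above; the other component values are hypothetical placeholders standing in for the fixed-resistor and quantization terms the paper tabulates in Table 2.

```python
import math

def rss(errors):
    """Root-sum-of-squares combination of independent error components (degC)."""
    return math.sqrt(sum(e * e for e in errors))

# Illustrative MMTS-style error budgets at a moderate and a very low temperature.
# Only the interchangeability values (+/-0.2 degC within -40..+40 degC,
# +/-0.45 degC elsewhere) are taken from the text; the other entries are
# hypothetical placeholders for fixed-resistor and LSB (quantization) terms.
budget_mild = {"interchangeability": 0.20, "fixed_resistors": 0.05, "lsb": 0.03}
budget_cold = {"interchangeability": 0.45, "fixed_resistors": 0.30, "lsb": 0.25}

for name, budget in [("within -40..+40 degC", budget_mild),
                     ("below -40 degC", budget_cold)]:
    print(f"{name}: RSS = {rss(budget.values()):.2f} degC  from {budget}")
```

With these placeholder inputs the cold-end RSS comes out near 0.6°C, which is the right order of magnitude for the 0.31°–0.62°C range cited above; the actual component values are in the paper's Table 2.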
…
The major errors in the HO-1088 (the ASOS temperature/dewpoint sensor) are interchangeability, linearization error, fixed-resistor error, and self-heating error (Table 2 and Fig. 7). The linearization error in the HO-1088 is relatively serious because the analog signal (Fig. 3) is simply linearized from -50° to 50°C versus -2 to 2 V. The maximum magnitude of the linearization error exceeded 1°C (Fig. 7). There are four fixed precision resistors, R13, R14, R15, and R16, with a 0.1% tolerance. However, the temperature measurement error caused by R14, R15, and R16 can be eliminated by the adjustment of amplifier gain and offsets during onboard calibration operations in the HO-1088.
The error caused by the input fixed resistor R13 is illustrated in Fig. 7. Since this error is nearly constant, varying only from -0.2° to -0.3°C, it can be cancelled during the onboard calibration. It is obvious that a 5-mA current flowing through the PRT in the HO-1088 is not appropriate, especially because it has a small sensing element (20 mm in length and 2 mm in diameter). The self-heating factor for the PRT in the HO-1088 is 0.25°C mW⁻¹ at 1 m s⁻¹ airflow (Omega Engineering 1995), corresponding to a self-heating error of 0.5°C when the self-heating power is 2 mW (Table 2 and Fig. 7). Compared to the linearization error and self-heating error, the interchangeability and LSB errors in the HO-1088 sensor are relatively small, ±0.1° and ±0.01°C, respectively (Table 2).
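To see where the 0.5°C self-heating figure comes from, here is a hedged sketch multiplying the power dissipated in the PRT by the quoted 0.25°C/mW factor. The PT100-class nominal resistance and the IEC 60751 coefficients are assumptions added for illustration; the excerpt itself states only the 5-mA excitation current and the self-heating factor.

```python
# Hedged sketch: estimate PRT self-heating error from excitation current.
# Assumptions (not stated in the excerpt): a PT100-class element (R0 = 100 ohm)
# following the standard IEC 60751 Callendar-Van Dusen coefficients.
R0 = 100.0                    # ohm at 0 degC (assumed PT100)
A, B = 3.9083e-3, -5.775e-7   # IEC 60751 coefficients (assumed)
I = 5e-3                      # A, excitation current quoted for the HO-1088
K = 0.25                      # degC per mW self-heating factor at 1 m/s (from the paper)

def prt_resistance(t_c):
    """PRT resistance (ohm); the extra sub-zero C term is negligible for this rough estimate."""
    return R0 * (1.0 + A * t_c + B * t_c * t_c)

for t in (-50, 0, 25, 50):
    p_mw = (I ** 2) * prt_resistance(t) * 1e3   # dissipated power in mW
    print(f"T = {t:4d} degC: P = {p_mw:.2f} mW -> self-heating error ~ {K * p_mw:.2f} degC")
```

Under these assumptions the dissipated power is roughly 2-2.5 mW across the range, giving a self-heating error of about 0.5°C and upward, consistent with the figure quoted above.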
…
Conclusions and discussion
This study provides a better understanding of temperature measurement errors caused by the sensor, analog signal conditioning, and data acquisition system. The MMTS sensor and the HO-1088 sensor use the ratiometric method to eliminate voltage reference errors. However, the RSS errors in the MMTS sensor can reach 0.3°–0.6°C at temperatures outside the range -40° to +40°C. Only with yearly replacement of the MMTS thermistor, together with a calibrated MMTS readout, can errors be constrained to within ±0.2°C over the temperature range from -40° to +40°C. Because the MMTS is a calibration-free device (National Weather Service 1983), testing one or a few fixed resistors for the MMTS cannot verify the nonlinear resistance–temperature relation of the MMTS thermistor. For the HO-1088 sensor, the self-heating error is quite serious and can make the measured temperature 0.5°C higher under a 1 m/s airflow, which is slightly less than the normal ventilation rate in the ASOS shield (Lin et al. 2001a). The simple linearization method used for the PRT of the HO-1088 causes unacceptable errors that are more serious in the low temperature range. These findings are helpful for explaining the ASOS warm biases found by Kessler et al. (1993) in their climate data and by Gall et al. (1992) in the climate data archives. For the dewpoint temperature measurements in the ASOS, such self-heating effects might be cancelled out by the chilled-mirror mechanism: heating or cooling the chilled mirror body (which conductively contains the dewpoint PRT inside) to reach an equilibrium thin dew layer at the dewpoint temperature.
Thus, in this case, the self-heating error for dewpoint temperature measurements might not be as large as that for air temperature after correct calibration adjustment. However, the relative humidity data from the ASOS network, derived from air temperature and dewpoint temperature, is likely to be contaminated by the biased air temperature.
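To illustrate the last point about derived relative humidity, the following sketch uses the Magnus approximation for saturation vapor pressure to show how a warm bias in the air temperature alone pulls the computed RH low even when the dewpoint is measured correctly. The +0.5°C bias and the example T/Td pair are illustrative values, not numbers from the paper.

```python
import math

def e_sat(t_c):
    """Saturation vapor pressure (hPa) via the Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def rh_from_t_td(t_c, td_c):
    """Relative humidity (%) derived from air temperature and dewpoint."""
    return 100.0 * e_sat(td_c) / e_sat(t_c)

# Illustrative case (values assumed, not from the paper): true T = 20 degC,
# Td = 10 degC, and an ASOS-style warm bias of +0.5 degC in T only.
t_true, td, bias = 20.0, 10.0, 0.5
print(f"RH with unbiased T:          {rh_from_t_td(t_true, td):.1f}%")
print(f"RH with T biased +{bias} degC: {rh_from_t_td(t_true + bias, td):.1f}%")
```

In this example the 0.5°C warm bias alone lowers the derived RH by roughly 1.5 percentage points, with the dewpoint held fixed.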
The resistances of both the HMP35C and USCRN PRT sensors are interrogated by their dataloggers. The HMP35C is delivered by Campbell Scientific, Inc., with recommended measurement methods. Even so, the HMP35C sensor in the AWS network can experience errors of more than 0.2°C at temperatures from -30° to +30°C. Beyond this range, the RSS error increases from 0.4° to 1.0°C due to thermistor interchangeability, polynomial error, and CR10X datalogger inaccuracy. For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2°–0.34°C due to the inaccuracy of the CR23X datalogger, which suggests that the configuration of the USCRN PRT and the measurement method used in the CR23X could be improved if higher accuracy is needed. Since the USCRN network is a new setup, the current configuration of the USCRN PRT temperature sensor could be reconstructed for better measurements. This reconstruction should focus on increasing signal sensitivity, selecting fixed resistor(s) with a smaller temperature coefficient of resistance, and decreasing the self-heating power, so that the sensor is more compatible with the CR23X for long-term climate monitoring.

These findings are applicable to future temperature data generated from the USCRN network and to possible modification of the PRT sensor for higher-quality measurements in the reference climate network.
The complete Lin-Hubbard paper (PDF) is available here.

Walt The Physicist (11:07:03) :
“When I tried to ascertain the accuracy of temperature measurement and asked Mr. Gavin (Real Climate), the answer was reference to GRL v.28,n13, pp.2621-2624 (2001). In this article it is shown that the accuracy of average monthly temperature is 0.2C/Square root of (60). The 0.2C was determined to be the accuracy of one temperature measurement with a thermometer. SQRT(60) comes from twice a day temperature recording – 60 data points. So, the accuracy is 0.0255C. This is an elegant solution to the accuracy problem – even with a crappy measuring device just make thousands of measurements and your accuracy will be fantastic. All that is ironically speaking, of course.”
I find it hard to believe that anyone could be that daft. Did he really mean ACCURACY?!! Dividing by the square root of 60 is the correct way of handling random measurement errors to improve PRECISION. But it will NOT improve ACCURACY where you have a non-random bias. Bias is bias – you can’t get rid of it by statistical manipulation. Sure, you can improve the variance/SD, but that says nothing about bias on the mean.
Think about it – if you put the measuring device in a controlled oven whose temperature you know and control perfectly, then take 60 measurements, you can determine the BIAS to a PRECISION of 0.0255°C, but the bias could be 2 degrees! You can only improve the ACCURACY by compensating for this bias. If you don’t know what the bias is, all you are doing by taking more and more samples and performing statistical manipulation is getting a better estimate of a biased mean, which tells you nothing about the extent of the bias in the mean.
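A quick numerical sketch of the point being made above (illustrative values only): averaging 60 noisy readings shrinks the scatter of the monthly mean by roughly the square root of 60, but a fixed instrument bias passes straight through to the mean.

```python
import random
import statistics

random.seed(42)
TRUE_T = 15.0   # degC, the "oven" truth (illustrative)
BIAS = 0.7      # degC, a fixed instrument bias (illustrative)
SIGMA = 0.2     # degC, random error of one reading (as in the quoted GRL figure)

def monthly_mean(n=60):
    """Mean of n biased, noisy readings of the same true temperature."""
    return statistics.mean(TRUE_T + BIAS + random.gauss(0.0, SIGMA) for _ in range(n))

means = [monthly_mean() for _ in range(1000)]
print(f"scatter of the monthly mean: {statistics.stdev(means):.4f} degC"
      f"  (~ 0.2/sqrt(60) = {0.2 / 60 ** 0.5:.4f})")
print(f"error of the mean of means:  {statistics.mean(means) - TRUE_T:+.3f} degC"
      f"  (the bias of {BIAS} degC does not average away)")
```

The scatter of the monthly mean indeed comes out near 0.026°C, but the mean itself still sits a full 0.7°C off the truth: precision improves, accuracy does not.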
In forty years of flying, the only thing we ever used ASOS and AWOS for was current wind direction and velocity — we were specifically briefed by the FAA that all other measurements were “approximate”…
Hearing protection ended up causing a stir in similar fashion. Foam plugs were born and raised under lab conditions. Lab technicians inserted foam plugs in the same way with the same insertion pressure and then measured sound attenuation. Great. Those foam plugs really do the job. Manufacturers stamped a 40 dB (or more) rating on the package because under laboratory conditions, that was the attenuation. Then foam plugs got sent out to workers. Workers still complained of noise. So a “field” researcher decided to measure attenuation under field conditions (workers inserted plugs, worked all day, etc). Turns out the little plugs weren’t performing like they did in the lab. The industry was forced to lower the ratings on the packages to reflect real world attenuation.
What. A. surprise.
So, are electronic temperature measurements more or less reliable than the traditional glass thermometer with mercury in it?
Also note the follow-up paper by Dr. Lin on the bias that may be introduced by snow on the sensors. Perhaps this will be especially important this year? The abstract is here.
Sorry, try again, abstract here.
JDN (14:06:06) :
My conclusion is that quality control is necessary. Someone needs to pull the entire enclosure into a lab, calibrate it against known temperatures, and return it. The conclusion that the actual temperature may be higher/lower under certain conditions is suggested by their analysis but no conclusion can really be drawn without direct monitoring of real-world performance.
Am I safe in assuming that nobody checks up on the performance of these stations?
JDN: All of the AWOS/ASOS stations are checked on a quarterly basis against a NIST-certified standard. However, at best the tech is only checking that the temp/dew point is within a +/-2F reading of the standard. I can personally attest that the AWOS sensors do walk all over the place compared to the standard. I have also seen techs that don’t allow a proper length of time before making the comparison. I’ve always felt that whatever “scientist” was using the AWOS/ASOS record for climate mongering should be fired.
I’d also like to point out that your thought of pulling the whole thing into a QC’ed lab and recalibrating it really isn’t feasible, as a number of variables would be changed.
ScientistForTruth (04:34:37) :
It is worse than that. The 60 measurements are not of the same thing, i.e., the temperature changes from day to night and from day to day. The humidity changes. The barometer changes. The angle of the sun changes, etc.
You only get the improvement if you are measuring the same thing with the same instrument. i.e. your measurement of a regulated oven. And in such a case a noisy instrument (within limits) improves the accuracy of your average.
But as you point out it does nothing for your precision.
Now these folks, supposedly knowledgeable of the most arcane statistical procedures, don’t even get the basics right. It is a travesty.
JDN (14:06:06) :
“… All of the AWOS/ASOS stations are checked on a quarterly basis against a NIST-certified standard. However, at best the tech is only checking that the temp/dew point is within a +/-2F reading of the standard. I can personally attest that the AWOS sensors do walk all over the place compared to the standard. I have also seen techs that don’t allow a proper length of time before making the comparison. I’ve always felt that whatever “scientist” was using the AWOS/ASOS record for climate mongering should be fired….”
So no matter what this study shows the actual BEST accuracy is +/-2F since that is the allowable drift per the Quality Control Specification – and that is only if the tech does the calibration correctly. As a lab manager I did in-house calibration quarterly or better but the lab was required to have all the instrumentation certified at minimum once a year or once every six months by an outside trained and certified technician. If the government required industry to do this WHY are they not held to the same standards??? Heck, FARM STANDS are required to have their balances and scales tested and certified by an outside party here in North Carolina.
SHEESH, the more you dig the worse it stinks.
OT but breaking news I suppose
http://news.bbc.co.uk/1/hi/sci/tech/8561004.stm
Ed
Gail Combs (10:31:31) :
So no matter what this study shows the actual BEST accuracy is +/-2F since that is the allowable drift per the Quality Control Specification – and that is only if the tech does the calibration correctly. As a lab manager I did in-house calibration quarterly or better but the lab was required to have all the instrumentation certified at minimum once a year or once every six months by an outside trained and certified technician. If the government required industry to do this WHY are they not held to the same standards???
Gail: Actually the comment was mine, not JDN’s. The calibration cycle on the temp/dew pt/barometer *standards* is one year by an outside lab.
When you consider that pilots really don’t care (usually) if the temp is off +/-2F, the AWOS/ASOS system does indeed do its job. The problem comes about when scientists try to take a station that is not intended for high accuracy/precision and use it as if it were.
wmsc (11:30:55) :
“….Gail: Actually the comment was mine, not JDN’s. The calibration cycle on the temp/dew pt/barometer *standards* is one year by an outside lab.
When you consider that pilots really don’t care (usually) if the temp is off +/-2F, the AWOS/ASOS system does indeed do its job. The problem comes about when scientists try to take a station that is not intended for high accuracy/precision and use it as if it were.”
Thank you and that does make me feel better. I am well aware of the accuracy/use factor. I do not use an analytical balance good to 0.000gr to measure out feed for my sheep, but I might use it for measuring out medicine. On the other hand using a bathroom scale to measure out a dose of Cydectin for a small lamb could get the poor animal killed. Seems the Climate Scientists are using a bathroom scale and trying to tell us it is an analytical balance.
OK, so perhaps if we wanted to reliably measure the Earth’s temperature and the up/down trends, we should bury our sensor about a metre underground on the shady side of a building away from car parks or drains. Then allow a week for stabilisation, and measure at several times per day, always at the same times…………….our logged data would be worth looking at methinks, and would show any trends over a long period of time.
Calibration would require removal and re-installation, along with the same settling period each year.
Perhaps this would have better integrity….
M. Simon (09:17:12) :
“You only get the improvement if you are measuring the same thing with the same instrument. i.e. your measurement of a regulated oven. And in such a case a noisy instrument (within limits) improves the accuracy of your average.
But as you point out it does nothing for your precision.”
I’m sure you meant “And in such a case a noisy instrument (within limits) improves the precision of your average.
But as you point out it does nothing for your accuracy.”
Regardless of how it is measured, live analysis of raw data continues to demonstrate the high degree of variability that describes our home (Earth, and in particular the shared climate zones around Washington, Oregon, Idaho, and California). In the space of 3 days, we have set record highs (March 3) and record lows (March 6). Homogenize that!
ScientistForTruth (14:54:43) :
Thanks,
I hadn’t had my tenth cup of coffee by then.
Surely, I can’t be the only one wondering about traceability.
A search for “traceability” (to known standards) comes up negative, both in this thread and the original paper. To leave out traceability while discussing measurement errors is worse than not publishing error bars.
They are both insidious, no? Wonderful paper otherwise.
Well the first thing that I noticed was that the sensors used for the benefit of pilot operations (hey, they really want to know) are Platinum Resistance thermometers. The ones available for climate studies are “thermistors”.
Well “thermistors” are generally high positive temperature coefficient “resistors”, and their TC is generally much higher than that of any pure metal. As a result, the out-of-balance analog signals tend to be larger than you can get with, say, a PRT.
That means you can get away with cheaper analog electronics to detect those temperature signals. That does not necessarily translate into more accurate thermometer readings; just cheaper thermometer readings.
I’ve never heard it claimed that these high PTC materials are stable over geologic time scales, or even climate time frames.
I’m reasonably comfortable with properly designed and constructed PRTs, and also with the availability of good analog circuitry and processing means to detect those signals accurately; certainly more comfortable than I would be reading a mercury-in-glass thermometer over the same temperature range.
Considering all the palaver they go to with their house and grounds, and gardens, and other accoutrements, it seems pretty cheesy to me that they even consider skimping on the guts of the operation, which is the sensor and its setup.
That’s like spending education funding on “administrative overhead” rather than on classroom school teachers.
I’m prepared to grant that, given the best temperature-reading thermometer known to man, that does not necessarily translate into reading the right temperature. The instrument installation problem, ensuring the thermometer sees only the real temperature you are interested in, is not at all trivial. But at least the basic sensor ought to be as “robust” as technology knows how to make it, so we don’t have to retrofit these things from time to time (not that I’m suggesting they actually do that).
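For readers who want numbers behind the thermistor-versus-PRT point made in the comment above, here is a hedged sketch comparing fractional temperature coefficients. The beta value is a generic assumption for a typical bead thermistor (most meteorological bead thermistors are NTC), and the PRT coefficient is the standard ~0.385%/°C; neither number is taken from the paper.

```python
# Hedged comparison of fractional temperature coefficients (|dR/dT| / R).
# Assumptions (generic, not from the paper): a typical NTC bead thermistor
# with beta = 3500 K, versus a standard PRT with alpha ~= 0.00385 per degC.
BETA = 3500.0          # K, assumed thermistor beta constant
ALPHA_PRT = 0.00385    # 1/degC, standard platinum temperature coefficient

def thermistor_tc(t_c):
    """Fractional TC of a beta-model thermistor: d(lnR)/dT = -beta / T_K**2."""
    t_k = t_c + 273.15
    return -BETA / (t_k ** 2)

for t in (-30, 0, 25):
    print(f"T = {t:4d} degC: thermistor TC ~ {thermistor_tc(t) * 100:6.2f} %/degC, "
          f"PRT TC ~ {ALPHA_PRT * 100:+.3f} %/degC")
```

Under these assumptions the thermistor's coefficient is roughly ten times the PRT's in magnitude, which is the commenter's point about larger analog signals and cheaper readout electronics.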
“”” Gail Combs (11:57:24) :
wmsc (11:30:55) :
“….Gail: Actually the comment was mine, not JDN’s. The calibration cycle on the temp/dew pt/barometer *standards* is one year by an outside lab.
When you consider that pilots really don’t care (usually) if the temp is off +/-2F, the AWOS/ASOS system does indeed do its job. The problem comes about when scientists try to take a station that is not intended for high accuracy/precision and use it as if it were.”
Thank you and that does make me feel better. I am well aware of the accuracy/use factor. I do not use an analytical balance good to 0.000gr to measure out feed for my sheep, but I might use it for measuring out medicine. On the other hand using a bathroom scale to measure out a dose of Cydectin for a small lamb could get the poor animal killed. Seems the Climate Scientists are using a bathroom scale and trying to tell us it is an analytical balance. “””
Now take it easy on that poor bathroom scale Gail; those things can actually be used to weigh your luggage before you head off to the airport.
I’ve even used my bathroom scale to weight the entire planet earth; yes you can do that.
You need something like a bucket or a couple of cinder blocks that you can put the scale on so it is up off the ground; this will weigh the cinder blocks along with the planet.
So you turn your bathroom scale upside down, and put it down on top of the cinder blocks or bucket, so the readout is hanging out clear of the blocks, so you can read it upside down with a mirror placed on the floor under the scale.
Then you climb up on top of the scale, so you can look over the edge, and read the scale in the mirror.
And voilà, if you look at the reading, you will see that it is reading the weight of the entire planet (plus the cinder blocks).
I would guess if you do this at your place, you will probably find that the earth weighs about 96-97 pounds or so.
Over at my house, the gravity is a bit higher, and I always get just a tad under 180 pounds, whenever I weigh the earth.
But try it; you’ll see I am right.
But no, I wouldn’t use the bathroom scale to weigh out the medicine for one of those Mexican Rat Dogs; if it gives a reading for the weight of the earth that could be 97 or 180 pounds, you don’t want to use it to weigh medicine doses.