Accuracy of climate station electronic sensors – not the best

Given all of the discussions recently on issues with the surface network, I thought it would be a good idea to present this excellent paper by Lin and Hubbard and what they discovered about the accuracy, calibration and maintenance of the different sensors used in the climatic networks of the USA. Pay particular attention to the errors cited for the ASOS and AWOS aviation networks, which are heavily used by GHCN.

Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks*

X. LIN AND K. G. HUBBARD

School of Natural Resource Sciences, University of Nebraska at Lincoln, Lincoln, Nebraska

ABSTRACT

The biases of four commonly used air temperature sensors are examined and detailed. Each temperature transducer consists of three components: temperature sensing elements, signal conditioning circuitry, and corresponding analog-to-digital conversion devices or dataloggers. An error analysis of these components was performed to determine the major sources of error in common climate networks. It was found that, regardless of microclimate effects, sensor and electronic errors in air temperature measurements can be larger than those given in the sensor manufacturer’s specifications. The root-sum-of-squares (RSS) error for the HMP35C sensor with the CR10X datalogger was above 0.2°C and increased rapidly at both lower (<-20°C) and higher (>30°C) temperatures. Likewise, the largest errors for the maximum–minimum temperature system (MMTS) were at low temperatures (<-40°C). The temperature linearization error in the HO-1088 hygrothermometer produced the largest errors when the temperature was lower than -20°C. For the temperature sensor in the U.S. Climate Reference Network (USCRN), the error was found to be 0.2° to 0.33°C over the range -25° to 50°C. The results presented here are applicable when data from these sensors are applied to climate studies and should be considered in determining air temperature data continuity and climate data adjustment models.

Introduction

A primary goal of air temperature measurement with weather station networks is to provide temperature data of high quality and fidelity that can be widely used for atmospheric and related sciences. Air temperature measurement is a process in which an air temperature sensor measures an equilibrium temperature of the sensor’s physical body, which is optimally achieved through complete coupling between the atmosphere and air temperature sensor.

The process accomplished in the air temperature radiation shield is somewhat dynamic, mainly due to the heat convection and heat conduction of a small sensor mass. Many studies have demonstrated that both good radiation shielding and ventilation are necessary to reach higher accuracy in air temperature measurements (Fuchs and Tanner 1965; Tanner 1990; Quayle et al. 1991; Guttman and Baker 1996; Lin et al. 2001a,b; Hubbard et al. 2001; Hubbard and Lin 2002). Most of these studies are strongly associated with air temperature biases or errors caused by microclimate effects (e.g., airflow speed inside the radiation shields, radiative properties of the sensor surface and radiation shields, and effectiveness of the radiation shields). Essentially, these studies have assumed the equation governing the air temperature to be absolutely accurate, and the investigations have focused on the measurement accuracy and its dependence on how well the sensor is brought into equilibrium with the atmospheric temperature. Such findings are indeed very important for understanding air temperature measurement errors in climate monitoring, but measured air temperatures also carry the electronic biases or errors embedded in the temperature sensors and their corresponding data acquisition system components.

Three temperature sensors are commonly used in the weather station networks: a thermistor in the Cooperative Observing Program (COOP), which was formally recognized as a nationwide federally supported system in 1980; a platinum resistance thermometer (PRT) in the Automated Surface Observing System (ASOS), a network that focuses on aviation needs; and a thermistor in the Automated Weather Station (AWS) networks operated by states for monitoring evaporation and surface climate data.

Each of these sensors has been used to observe climate data over at least a ten-year period in the U.S. climate monitoring networks. The U.S. Climate Reference Network (USCRN) was established in 2001 and is being gradually deployed nationwide for monitoring long-term, high-quality surface climate data. In the USCRN system, a PRT sensor was selected for the air temperature measurements. All sensing elements in these four climate monitoring networks are temperature-sensitive resistors, and the sensors are referred to as the maximum–minimum temperature system (MMTS), HMP35C, HO-1088, and USCRN PRT sensors in the COOP, AWS, ASOS, and USCRN networks, respectively (see Table 1).

The basic specifications of each sensor system including operating temperature range, static accuracy, and display/output resolution can be found in operation manuals. However, these specifications do not allow a detailed evaluation, and some users even doubt the stated specifications and make their own calibrations before deploying sensors in the network. In fact, during the operation of either the MMTS sensor in the COOP or HO-1088 hygrothermometer in the ASOS, both field and laboratory calibrations were made by a simple comparison using one or two fixed precision resistors (National Weather Service 1983; ASOS Program Office 1992).

This type of calibration is only effective under the assumption of time-invariant sensors with a purely linear relation of resistance versus temperature. For the HMP35C, some AWS networks may regularly calibrate the sensors in the laboratory, but these calibrations are static (e.g., calibration at room temperature for the data acquisition system).

It is not generally possible to detect and remove temperature-dependent bias and sensor nonlinearity with static calibration. In the USCRN, the PRT sensor was strictly calibrated from -50° to +50°C each year in the laboratory. However, this calibration does not include its corresponding datalogger. To accurately trace air temperature trends over the past decades or in the future in the COOP, AWS, ASOS, and USCRN and to reduce the influence of time-variant biases in air temperature data, a better understanding of electronic bias in air temperature measurements is necessary.
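The limitation of static calibration described above can be sketched numerically. The error model below is hypothetical (a fixed offset plus a small quadratic term, with made-up coefficients, not values from the paper); it simply shows why a single-point calibration at room temperature removes an offset but leaves the temperature-dependent nonlinearity untouched:

```python
# Sketch with an assumed error model: fixed offset plus a small
# quadratic (temperature-dependent) term. Coefficients are
# illustrative only, not taken from Lin and Hubbard.
def sensor_reading(true_t):
    return true_t + 0.15 + 2e-4 * true_t ** 2

# Single-point "static" calibration near room temperature:
# whatever error we see at 20 degC is treated as a constant offset.
offset = sensor_reading(20.0) - 20.0

for t in (-40.0, 0.0, 20.0, 40.0):
    residual = sensor_reading(t) - offset - t
    print(f"{t:+5.0f} degC  residual after static cal: {residual:+.2f} degC")
```

The residual vanishes at the calibration point but grows back toward the ends of the range, which is exactly the temperature-dependent bias a static calibration cannot detect.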

The objective of this paper is to carefully analyze the sensor and electronic biases/errors induced by the temperature sensing element, signal conditioning circuitry, and data acquisition system.

This implies that the MMTS temperature observations are unable to discriminate ±0.25°C changes in the lower temperature ranges (Fig. 5 and Table 2). The interchangeability of the MMTS thermistors is ±0.2°C from -40° to +40°C and ±0.45°C elsewhere (Fig. 4). Two fixed resistors (R2 and R3) with a 0.02% tolerance produced larger temperature measurement errors at low temperatures, but the error caused by the fixed resistor R19 in Fig. 1 can be ignored. Therefore, the RSS errors in the MMTS range from 0.31° to 0.62°C over the temperature range -40° to -50°C (Fig. 5).
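The root-sum-of-squares combination used throughout the paper is simple to reproduce. The individual error terms below are illustrative placeholders (the paper's per-component values are in its Table 2), chosen only so the combined value lands near the cold-end MMTS range cited above:

```python
import math

def rss(errors):
    """Combine independent error sources by root-sum-of-squares."""
    return math.sqrt(sum(e * e for e in errors))

# Illustrative component errors (degC) at a cold temperature:
# thermistor interchangeability, fixed-resistor error, and
# quantization. These are placeholder values, not the paper's.
terms = [0.45, 0.40, 0.10]
print(round(rss(terms), 2))  # -> 0.61
```

RSS is appropriate only when the component errors are independent; a shared systematic bias would add linearly instead.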

The major errors in the HO-1088 (ASOS Temp/DP sensor) are interchangeability, linearization error, fixed resistor error, and self-heating error (Table 2 and Fig. 7). The linearization error in the HO-1088 is relatively serious because the analog signal (Fig. 3) is simply linearized from -50° to 50°C versus -2 to 2 V. The maximum magnitude of the linearization error exceeded 1°C (Fig. 7). There are four fixed precision resistors, R13, R14, R15, and R16, with a 0.1% tolerance. However, the temperature measurement error caused by R14, R15, and R16 can be eliminated by the adjustment of amplifier gain and offsets during onboard calibration operations in the HO-1088.
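The origin of a linearization error can be illustrated with standard PRT physics. A PRT's resistance follows the slightly nonlinear Callendar-Van Dusen relation, so any straight-line conversion leaves a residual. The sketch below uses standard IEC 60751 coefficients and a simple endpoint line fit over -50° to +50°C; the HO-1088's actual voltage-domain circuit is not specified here, and per the paper its errors are larger still (over 1°C) at cold temperatures:

```python
# Standard IEC 60751 Callendar-Van Dusen coefficients for a 100-ohm PRT.
# The HO-1088's actual circuit constants are assumptions not given here.
R0, A, B = 100.0, 3.9083e-3, -5.775e-7
C = -4.183e-12  # applies only below 0 degC

def prt_resistance(t):
    r = R0 * (1 + A * t + B * t * t)
    if t < 0:
        r += R0 * C * (t - 100) * t ** 3
    return r

# Straight line through the endpoints, mimicking a naive
# resistance-to-temperature linearization over -50..+50 degC.
r_lo, r_hi = prt_resistance(-50), prt_resistance(50)
def linearized_t(r):
    return -50 + (r - r_lo) * 100 / (r_hi - r_lo)

for t in (-50, -25, 0, 25, 50):
    err = linearized_t(prt_resistance(t)) - t
    print(f"{t:+4d} degC  residual of linear fit: {err:+.3f} degC")
```

Even this idealized endpoint fit leaves residuals of several tenths of a degree in mid-range, showing why a "simply linearized" analog signal cannot be accurate everywhere.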

The error caused by the input fixed resistor R13 is illustrated in Fig. 7. Since this error was nearly constant, varying only from -0.2° to -0.3°C, it can be cancelled during the onboard calibration. It is obvious that a 5-mA current flowing through the PRT in the HO-1088 is not appropriate, especially because it has a small sensing element (20 mm in length and 2 mm in diameter). The self-heating factor for the PRT in the HO-1088 is 0.25°C mW⁻¹ at 1 m s⁻¹ airflow (Omega Engineering 1995), corresponding to a self-heating error of 0.5°C when the self-heating power is 2 mW (Table 2 and Fig. 7). Compared to the linearization error and self-heating error, the interchangeability and LSB errors in the HO-1088 sensor are relatively small, ±0.1° and ±0.01°C, respectively (Table 2).
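The self-heating arithmetic above is worth making explicit. The dissipated power is P = I²R; the 5-mA excitation and the 0.25°C/mW factor come from the paper, while the 80-ohm resistance is an assumed value (roughly a 100-ohm PRT near the cold end of its range, where R varies with temperature):

```python
# Worked self-heating estimate. Excitation current and the
# 0.25 degC/mW factor at 1 m/s are from the paper; the 80-ohm
# PRT resistance is an assumption (R varies with temperature).
i = 5e-3                   # excitation current, A
r = 80.0                   # assumed PRT resistance, ohms
p_mw = i * i * r * 1e3     # dissipated power, mW (P = I^2 * R)
factor = 0.25              # degC per mW at 1 m/s airflow
print(f"power {p_mw:.1f} mW -> self-heating error {factor * p_mw:.1f} degC")
# -> power 2.0 mW -> self-heating error 0.5 degC
```

Because P scales with I², dropping the excitation to 1 mA would cut the self-heating power by a factor of 25, which is why the paper questions the 5-mA choice.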

Conclusions and discussion

This study provides a better understanding of temperature measurement errors caused by the sensor, analog signal conditioning, and data acquisition system. The MMTS sensor and the HO-1088 sensor use the ratiometric method to eliminate voltage reference errors. However, the RSS errors in the MMTS sensor can reach 0.3°–0.6°C at temperatures beyond the -40° to +40°C range. Only with yearly replacement of the MMTS thermistor and a calibrated MMTS readout can errors be constrained within ±0.2°C over the temperature range from -40° to +40°C. Because the MMTS is a calibration-free device (National Weather Service 1983), testing of one or a few fixed resistors for the MMTS cannot verify the nonlinear resistance-temperature relation of the MMTS thermistor. For the HO-1088 sensor, the self-heating error is quite serious and can make the temperature 0.5°C higher under 1 m/s airflow, which is slightly less than the actual normal ventilation rate in the ASOS shield (Lin et al. 2001a). The simple linear method for the PRT of the HO-1088 causes unacceptable errors that are more serious in the low temperature range. These findings are helpful for explaining the ASOS warm biases found by Kessler et al. (1993) in their climate data and Gall et al. (1992) in the climate data archives. For the dewpoint temperature measurements in the ASOS, such self-heating effects might be cancelled out by the chill mirror mechanism: heating or cooling the chill mirror body (which conductively couples to the dewpoint PRT inside) to reach an equilibrium thin dew layer at the dewpoint temperature.

Thus, in this case, the self-heating error for dewpoint temperature measurements might not be as large as for the air temperature after correct calibration adjustment. Likewise, the relative humidity data from the ASOS network, derived from air temperature and dewpoint temperature, is likely to be contaminated by the biased air temperature.

Both resistance measurements in the HMP35C and USCRN PRT sensors are interrogated by dataloggers. The HMP35C is delivered from Campbell Scientific, Inc., with recommended measurement methods. Even so, the HMP35C sensor in the AWS network can experience errors of more than 0.2°C at temperatures from -30° to +30°C. Beyond this range, the RSS error increases from 0.4° to 1.0°C due to thermistor interchangeability, polynomial error, and CR10X datalogger inaccuracy. For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2°–0.34°C due to the inaccuracy of the CR23X datalogger, which suggests that the configuration of the USCRN PRT and the measurement method in the CR23X could be improved if higher accuracy is needed. Since the USCRN network is a new setup, the current configuration of the USCRN PRT temperature sensor could be reconstructed for better measurements.

This reconstruction should focus on increasing signal sensitivity, selecting fixed resistor(s) with a smaller temperature coefficient of resistance, and decreasing the self-heating power, so that the sensor could be more compatible with the CR23X for long-term climate monitoring.

These findings are applicable to future temperature data generated from the USCRN network and to possible modification of the PRT sensor for higher-quality measurements in the reference climate network.

The complete Lin-Hubbard paper (PDF) is available here.

kadaka
March 9, 2010 12:22 pm

Smokey (10:11:31) :
Yup, there it goes. Solid reasons to question the AMSU numbers from UAH, as the AMSU unit is calibrated on-board by staring at a warm target whose temp is measured by multiple PRT’s. Which are not getting regular periodic calibration. And being on a satellite, they likely see a good deal of thermal cycling over a rather wide range. Plus, I see mention of self-heating. What currents/wattages do the satellites run them at?

March 9, 2010 12:42 pm

In relation to climate change, this is a great fuss over nothing.
It seems to me that the obvious conclusion from figure 5 is that over the range of ordinary real-world conditions the instrument error is easily corrected. Even outside that range the worst-case inaccuracy is only on the order of 0.5%.
Also, to draw an analogy: which quality, “accurate” or “consistent”, would be better in a train station clock? After all, the debate is over temperature trends.

Stephen Brown
March 9, 2010 12:51 pm

This is a good point. The data is everything; from whence the data is obtained is the very crux of the discussion. If the instrumentation cannot be trusted then nothing flowing from this now-known-to-be erroneous data can be trusted.
It’s back to square one. We are going to have to start all over again, but this time let’s get some engineers and electricians in on the act and get the job done properly.

janama
March 9, 2010 12:58 pm

as I’ve mentioned before I have found a situation where two Stevenson screens exist within 200m of each other, one is automated and one is read twice a day. The data for each is published by BoM – the variance ranges around 0.5–0.7°C!

Frank
March 9, 2010 1:00 pm

This paper doesn’t tell us what these laboratory measurements mean for temperatures recorded in the field and the climate record. If a daily high and low is recorded to the nearest 1 degF or 0.5 degC, many of these errors aren’t large enough to be important. When we want to know about how climate changes, long-term stability, not absolute accuracy is the most important factor. As long as a high of 30.5 degC three decades ago at a station still gets recorded as 30.5 degC today, there isn’t a big problem if the actual temperature is 30.12 or 30.87. On the other hand, if the average instrument drifts up or down by 0.1 degC/decade as the instrument ages, there is a big problem.
However, this is a good reminder of how small global warming of 0.2 degC/decade really is.
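The drift concern raised in that comment can be put into rough numbers (illustrative values only, not measurements):

```python
# Illustrative only: an undetected instrument drift adds directly
# to whatever real trend the station records.
drift_per_decade = 0.1   # assumed degC/decade instrument drift
true_trend = 0.2         # assumed degC/decade actual warming
years = 30

apparent = (true_trend + drift_per_decade) * years / 10
print(f"apparent warming over {years} yr: {apparent:.1f} degC "
      f"(true: {true_trend * years / 10:.1f} degC)")
```

A drift of the same order as the trend being measured inflates (or masks) half again as much signal over three decades, which is why long-term stability matters more than absolute accuracy for trend work.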

RockyRoad
March 9, 2010 1:01 pm

OT: Listening to 9:54 of Phil Jones provided by climategate2009 (09:25:14) was pretty difficult, not because my headphones didn’t work but because of the obfuscation and redefinition of “climate science”. UEA definitely has a problem.

Rod Smith
March 9, 2010 1:01 pm

My questions:
– Who in our federal government authorized purchase of equipment with such poor performance? How did it pass IOC checks?
– What is the cost to upgrade the equipment with accurate sensors?
– Is anyone investigating this acquisition, and if not, why not?

GAZ from Sydney
March 9, 2010 1:04 pm

In the absence of specific mention to the contrary I assume that:
1. They have carried out the analysis on a single sensor of each type
2. The analysis was done on new instruments.
Which raises the questions:
1. What sort of variability is there between similar sensors?
2. Does the error range change over time, and the need for calibration?
I wonder if anyone can respond to these.

March 9, 2010 1:06 pm

Here’s a nice introduction to electronic temp measurement:
Improving The Accuracy of Temperature Measurements
Happy reading–
Pete Tillman

MikeC
March 9, 2010 1:09 pm

Not sure if I understand what all the excitement is about. USHCN v2 is good at catching these kinds of errors, such as the ones related to MMTS. And the study says this is mainly applicable to CRN, and each CRN station has 3 HGO’s hanging from them specifically to catch drift or other equipment problems.
REPLY: USHCN2 is good at catching transient and short period errors. It will not catch long period errors such as instrument calibration drift or gradually increasing UHI, or gradual land use change. I confirmed this with the author of the method, Matt Menne, when I visited NCDC – Anthony

D. King
March 9, 2010 1:13 pm

The problem with Ponzi schemes is a crash or time will expose the scheme.
With Ponzitemps, you can only add heat until time or a super El Nino gives
you 15 years of no statistically significant increases. Then you’re left holding the
bag and having to explain your methodology. OOPS.

March 9, 2010 1:21 pm

After reading this about the PRTs I too immediately thought about the satellite PRTs, kadaka. You beat me to a post.
I don’t know the orbital altitude of the satellites in question but it would seem that they might be temperature cycled at least once an hour, and in the vacuum environment self heating might be a large problem. It isn’t just the PRTs themselves of course but all of the associated electronics, and temperature cycling isn’t the only problem. What about the radiation environment?
The satellite sensors were designed to provide what amounts to an enhanced and denser weather balloon network for short term forecasting purposes. For this they probably work well.
The weather balloon network also suffers from lack of post calibration of the sensors immediately after a flight although I have thought of a way of doing this.
I’m beginning to think that the entire field of climatology has elements of high farce.

Pops
March 9, 2010 1:37 pm

Problems copying and pasting text? Try this:
http://www.stevemiller.net/puretext/
REPLY: I downloaded it. Thanks great FREE tool!
– Anthony

Ray
March 9, 2010 1:37 pm

In the lab I use some of the best thermometers and thermocouples available on the market for my temperature measurements. Even with those I still need to check their calibrations regularly and make the necessary corrections. How can some temperature measuring devices be left for decades out there without proper and regular calibrations? Not only do they have significant errors, but their precision and exactitude also deviate over time.

pat
March 9, 2010 1:40 pm

climategate2009 –
the clip leaves out the most damning and under-reported moment of the entire hearing:
Q119 Dr Harris: You cannot speak for other fields of science I guess but do you have any idea whether, in other fields of science, the data is sent out on request? In clinical trials I have not seen photocopies of anonymised patient data being sent out on request. If peer reviewers ask to see the raw data, is there a different situation there or do they never ask for that?
Professor Jones: We would probably send them that then, but they have never asked for it.
Q120 Dr Harris: You would not object to sending peer reviewers or editors that data?
Professor Jones: No, but they have never asked.
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/uc387-i/uc38702.htm
extraordinary article in the Guardian:
9 March: Guardian: Alan Nelson: How to avoid your own ‘climategate’ scandal
Alan Nelson is a senior associate at law firm Dundas & Wilson
So how do universities and academics ensure that their correspondence does not become the “smoking gun” that turns a simple FoI request into an international scandal?…
For sensitive information that you would not want in the public domain, rather than putting it in email or in a document, it may be better to discuss it face-to-face or on the phone.
Careful consideration should also be given to how long emails are saved and when they are deleted. In some fields of work, there will be regulatory reasons for keeping emails (clinical work, for example) but do they all need to be retained and archived? A periodic review should be performed to ensure that, wherever possible and lawful, emails that could be that smoking gun are deleted.
When making handwritten notes or comments on documents, staff need to be aware that those scribbles could enter the public domain in response to a FoI request. Do you really want someone to see your exclamations of “Idiot!!!” or “Rubbish!!!” on a note? Probably not, so take care – and shred your notes once they have served their useful purpose…
http://www.guardian.co.uk/education/2010/mar/09/avoid-climategate-foi-requests-academics
Nelson also says: ” Jones admitted to the Commons committee that he had not dealt with requests for data “in the right way”. His detractors accuse him of a reluctance to reveal his data and research, and both Jones and the UEA of a desire to avoid complying with FoI requests.”
those terrible ‘detractors’!

Pops
March 9, 2010 1:41 pm

Henry chance (09:37:23): Blizzards in Spain. Do wind turbines attract snow?
No, only huge subsidies.

jaymam
March 9, 2010 1:42 pm

Climate station electronic sensors clearly need to be checked regularly (e.g. monthly) against mercury thermometers, and a log kept of the readings. If this has not been done, the data from sites with electronic sensors is suspect and should not be used.

March 9, 2010 1:43 pm

Understanding Climate’s Influence on Human Evolution
The hominin fossil record documents a history of critical evolutionary events that have ultimately shaped and defined what it means to be human, including the origins of bipedalism; the emergence of our genus Homo; the first use of stone tools; increases in brain size; and the emergence of Homo sapiens, tools, and culture. The geological record suggests that some of these evolutionary events were coincident with substantial changes in African and Eurasian climate, raising the intriguing possibility that key junctures in human evolution and behavioral development may have been affected or controlled by the environmental characteristics of the areas where hominins evolved.
new web book:
http://books.nap.edu/openbook.php?record_id=12825&page=R1

J Zulauf
March 9, 2010 1:47 pm

Would it be interesting to point a web cam at an old-fashioned glass thermometer colocated with the fancy kind and compare? The accuracy required appears to be ±50° to within 0.1°, i.e. 1000 states. 1-2 Mpixels would do (or a 1-2K element 1D optical sensor, with lens calibrations of course). Automate it and hook it up wirelessly…
Hmmm, that just about maps to the iPhone specs… “Calibrate weather stations, there’s an app for that” — or there could be.

Allen Ford
March 9, 2010 1:56 pm

“The people making temperature measurements might learn something if they talked to people specializing in metrology.”
Just like they might learn something from competent specialists in computer programming and statistics!

Espen
March 9, 2010 2:00 pm

Tom in Texas: Exactly my thoughts. A lot of the warming is in the winter, and in cold places like northern Canada and the interior of Russia. If the readings of the minimum temperatures in winter for a lot of these places are wrong, it could explain a significant part of the difference between the current warm period and the warm period in the 30s-40s.

Paul Linsay
March 9, 2010 2:00 pm

The satellites also calibrate against the 2.7 K background from the Big Bang, about as absolute a temperature standard as possible.

Gail Combs
March 9, 2010 2:01 pm

vukcevic (13:43:47) :
Understanding Climate’s Influence on Human Evolution
http://books.nap.edu/openbook.php?record_id=12825&page=R1
Thanks, you come up with some very interesting stuff.

JDN
March 9, 2010 2:06 pm

Nice paper as far as it goes. But as it admits, it’s a reconstruction of how well these devices perform.
My conclusion is that quality control is necessary. Someone needs to pull the entire enclosure into a lab, calibrate it against known temperatures, and return it. The conclusion that the actual temperature may be higher/lower under certain conditions is suggested by their analysis but no conclusion can really be drawn without direct monitoring of real-world performance.
Am I safe in assuming that nobody checks up on the performance of these stations?

MikeC
March 9, 2010 2:07 pm

REPLY: USHCN2 is good at catching transient and short period errors. It will not catch long period errors such as instrument calibration drift or gradually increasing UHI, or gradual land use change. I confirmied this with the author of the method, Matt Menne when I visited NCDC – Anthony
Anthony, You’re confusing two different issues. The MMTS errors will show as a step change. Therefore the USHCN v2 will catch it. It was probably one of the contributing factors in the MMTS adjustment that Quayle never researched.
The HGO sensor, on the other hand, will drift. But it is on the CRN station which has 3 HGO’s hanging down specifically to catch drift.
And by the way, Kristen’s page still has the link to Claude William’s presentation to the AMA where he says UHI is retained in USHCN v2 (that it was their goal), “and that’s a good thing”. If you can get a copy and send it to PtM I would appreciate it. I can’t copy that format for the life of me and cannot get anyone else who can either. It would probably make a good post too.

REPLY:

The MMTS errors, such as an offset from sensor replacement, would indeed be caught by USHCN2. A long term sensor drift over several years would not be caught by USHCN2 methods.
Can you point me to the link you are referring to? – A