Accuracy of climate station electronic sensors – not the best

Given all of the discussions recently on issues with the surface network, I thought it would be a good idea to present this excellent paper by Lin and Hubbard and what they discovered about the accuracy, calibration and maintenance of the different sensors used in the climatic networks of the USA. Pay particular attention to the errors cited for the ASOS and AWOS aviation networks, which are heavily used by GHCN.

Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks*

X. LIN AND K. G. HUBBARD

School of Natural Resource Sciences, University of Nebraska at Lincoln, Lincoln, Nebraska

ABSTRACT

The biases of four commonly used air temperature sensors are examined and detailed. Each temperature transducer consists of three components: temperature sensing elements, signal conditioning circuitry, and corresponding analog-to-digital conversion devices or dataloggers. An error analysis of these components was performed to determine the major sources of error in common climate networks. It was found that, regardless of microclimate effects, sensor and electronic errors in air temperature measurements can be larger than those given in the sensor manufacturer’s specifications. The root-sum-of-squares (RSS) error for the HMP35C sensor with the CR10X datalogger was above 0.2°C and increased rapidly for both lower (<-20°C) and higher (>30°C) temperatures. Likewise, the largest errors for the maximum–minimum temperature system (MMTS) were at low temperatures (<-40°C). The temperature linearization error in the HO-1088 hygrothermometer produced the largest errors when the temperature was lower than -20°C. For the temperature sensor in the U.S. Climate Reference Network (USCRN), the error was found to be 0.2° to 0.33°C over the range -25° to 50°C. The results presented here are applicable when data from these sensors are applied to climate studies and should be considered in determining air temperature data continuity and climate data adjustment models.

Introduction

A primary goal of air temperature measurement with weather station networks is to provide temperature data of high quality and fidelity that can be widely used for atmospheric and related sciences. Air temperature measurement is a process in which an air temperature sensor measures an equilibrium temperature of the sensor’s physical body, which is optimally achieved through complete coupling between the atmosphere and air temperature sensor.

The process accomplished in the air temperature radiation shield is somewhat dynamic, mainly due to the heat convection and heat conduction of a small sensor mass. Many studies have demonstrated that to reach a higher measurement accuracy both good radiation shielding and ventilation are necessary for air temperature measurements (Fuchs and Tanner 1965; Tanner 1990; Quayle et al. 1991; Guttman and Baker 1996; Lin et al. 2001a,b; Hubbard et al. 2001; Hubbard and Lin 2002). Most of these studies are strongly associated with the study of air temperature bias or errors caused by microclimate effects (e.g., airflow speed inside the radiation shields, radiative properties of the sensor surface and radiation shields, and effectiveness of the radiation shields). Essentially, these studies have assumed the equation governing the air temperature to be absolutely accurate, and the investigations have focused on the measurement accuracy and its dependence on how well the sensor is brought into equilibrium with the atmospheric temperature. Such findings are indeed very important for understanding air temperature measurement errors in climate monitoring, but it is well known that all microclimate-induced biases or errors also include the electronic biases or errors embedded in the temperature sensors and their corresponding data acquisition system components.

Three temperature sensors are commonly used in the weather station networks: a thermistor in the Cooperative Observing Program (COOP), which was formally recognized as a nationwide federally supported system in 1980; a platinum resistance thermometer (PRT) in the Automated Surface Observing System (ASOS), a network that focuses on aviation needs; and a thermistor in the Automated Weather Station (AWS) networks operated by states for monitoring evaporation and surface climate data.

Each of these sensors has been used to observe climate data over at least a ten-year period in the U.S. climate monitoring networks. The U.S. Climate Reference Network (USCRN) was established in 2001 and is gradually being deployed nationally for monitoring long-term, high-quality surface climate data. In the USCRN system, a PRT sensor was selected for the air temperature measurements. All sensing elements in these four climate monitoring networks are temperature-sensitive resistors, and the temperature sensors are referred to as the maximum–minimum temperature system (MMTS), HMP35C, HO-1088, and USCRN PRT sensors, respectively, in the COOP, AWS, ASOS, and USCRN networks (see Table 1).

The basic specifications of each sensor system including operating temperature range, static accuracy, and display/output resolution can be found in operation manuals. However, these specifications do not allow a detailed evaluation, and some users even doubt the stated specifications and make their own calibrations before deploying sensors in the network. In fact, during the operation of either the MMTS sensor in the COOP or HO-1088 hygrothermometer in the ASOS, both field and laboratory calibrations were made by a simple comparison using one or two fixed precision resistors (National Weather Service 1983; ASOS Program Office 1992).

This type of calibration is only effective under the assumption of temporally invariant sensors with a purely linear relation of resistance versus temperature. For the HMP35C, some AWS networks may regularly calibrate the sensors in the laboratory, but these calibrations are static (e.g., calibration at room temperature for the data acquisition system).

It is not generally possible to detect and remove temperature-dependent bias and sensor nonlinearity with static calibration. In the USCRN, the PRT sensor was strictly calibrated from -50° to +50°C each year in the laboratory. However, this calibration does not include its corresponding datalogger. To accurately trace air temperature trends over the past decades or in the future in the COOP, AWS, ASOS, and USCRN and to reduce the influence of time-variant biases in air temperature data, a better understanding of electronic bias in air temperature measurements is necessary.
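A small numerical sketch may help show why a static, single-point check cannot catch this kind of bias. The snippet below is illustrative only: it assumes a hypothetical 10 kΩ NTC thermistor described by the simple beta equation (these are not the actual MMTS or HMP35C characteristics) and shows that a 1% drift in the shape of the resistance–temperature curve is invisible at the 25°C check point yet produces roughly half a degree of error at -40°C.

```python
# Sketch: a one-point (static) calibration cannot detect a nonlinear,
# temperature-dependent drift.  Hypothetical 10 kOhm NTC thermistor using the
# simple beta equation; constants are illustrative, not the MMTS values.
import math

R25 = 10_000.0          # resistance at 25 C, ohms (assumed)
BETA_NOMINAL = 3950.0   # nominal beta constant, kelvin (assumed)

def thermistor_resistance(temp_c, beta):
    t_k, t25_k = temp_c + 273.15, 298.15
    return R25 * math.exp(beta * (1.0 / t_k - 1.0 / t25_k))

def reported_temperature(resistance, beta=BETA_NOMINAL):
    """Temperature the datalogger infers, assuming the nominal curve."""
    t_k = 1.0 / (math.log(resistance / R25) / beta + 1.0 / 298.15)
    return t_k - 273.15

beta_drifted = BETA_NOMINAL * 1.01   # a 1% change in the curve shape

for true_t in (25.0, 0.0, -40.0):
    r = thermistor_resistance(true_t, beta_drifted)
    error = reported_temperature(r) - true_t
    print(f"true {true_t:6.1f} C -> error {error:+.2f} C")
# The 25 C check point shows ~0.00 C error, yet the -40 C reading is off by
# about half a degree -- exactly what a fixed-resistor test cannot see.
```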

The objective of this paper is to carefully analyze the sensor and electronic biases/errors induced by the temperature sensing element, signal conditioning circuitry, and data acquisition system.

This implies that the MMTS temperature observations are unable to discriminate ±0.25°C changes in the lower temperature ranges (Fig. 5 and Table 2). The interchangeability of the MMTS thermistors is ±0.2°C from -40° to +40°C and ±0.45°C elsewhere (Fig. 4). Two fixed resistors (R2 and R3) with a 0.02% tolerance produced larger measurement errors at low temperatures, but the error caused by the fixed resistor R19 in Fig. 1 can be ignored. Therefore, the RSS errors in the MMTS range from 0.31° to 0.62°C over the temperature range -40° to -50°C (Fig. 5).
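For readers unfamiliar with the convention, the RSS figure is simply the quadrature sum of the individual error components, which are assumed independent. A minimal sketch of that bookkeeping, with placeholder magnitudes rather than the paper's actual Table 2 values, looks like this:

```python
# Root-sum-of-squares (RSS) combination of independent error components.
# Component magnitudes below are placeholders; the paper tabulates the
# actual MMTS terms in its Table 2.
import math

def rss(components):
    """Combine independent error terms in quadrature."""
    return math.sqrt(sum(c * c for c in components))

components_deg_c = {
    "thermistor interchangeability": 0.45,  # hypothetical
    "fixed-resistor tolerance":      0.30,  # hypothetical
    "A/D quantization (LSB)":        0.10,  # hypothetical
}

print(f"RSS error: {rss(components_deg_c.values()):.2f} C")  # 0.55 C here
```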

The major errors in the HO-1088 (ASOS Temp/DP sensor) are interchangeability, linearization error, fixed-resistor error, and self-heating error (Table 2 and Fig. 7). The linearization error in the HO-1088 is relatively serious because the analog signal (Fig. 3) is simply linearized over -50° to 50°C versus -2 to 2 V. The maximum magnitude of the linearization error reached over 1°C (Fig. 7). There are four fixed precision resistors, R13, R14, R15, and R16, with a 0.1% tolerance. However, the temperature measurement error caused by R14, R15, and R16 can be eliminated by adjustment of the amplifier gain and offsets during onboard calibration operations in the HO-1088.

The error caused by the input fixed resistor R13 is illustrated in Fig. 7. Since this error varied only from -0.2° to -0.3°C, it can be cancelled during the onboard calibration. It is obvious that a 5-mA current flowing through the PRT in the HO-1088 is not appropriate, especially because it has a small sensing element (20 mm in length and 2 mm in diameter). The self-heating factor for the PRT in the HO-1088 is 0.25°C mW⁻¹ at 1 m s⁻¹ airflow (Omega Engineering 1995), corresponding to a self-heating error of 0.5°C when the self-heating power is 2 mW (Table 2 and Fig. 7). Compared to the linearization error and self-heating error, the interchangeability and LSB errors in the HO-1088 sensor are relatively small, ±0.1° and ±0.01°C, respectively (Table 2).
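The self-heating number can be sanity-checked from the quantities quoted above: dissipated power (excitation current squared times element resistance) multiplied by the 0.25°C per mW factor. The sketch below assumes a standard IEC 60751 Pt100 curve and continuous 5-mA excitation, so it is a rough upper-bound estimate rather than the HO-1088's exact figure, but it lands in the same few-tenths-of-a-degree range as the 0.5°C at 2 mW cited in the paper.

```python
# Self-heating error of a PRT: dissipated power times the self-heating factor.
# Assumes a standard IEC 60751 Pt100 element and continuous excitation; the
# HO-1088's actual element and excitation duty cycle may differ.
A, B, R0 = 3.9083e-3, -5.775e-7, 100.0   # Callendar-Van Dusen coefficients
                                         # (the extra below-zero term is negligible here)

def pt100_resistance(temp_c):
    return R0 * (1.0 + A * temp_c + B * temp_c ** 2)

EXCITATION_A = 5e-3              # 5 mA, as cited for the HO-1088
SELF_HEATING_C_PER_MW = 0.25     # 0.25 C/mW at 1 m/s airflow (Omega Engineering 1995)

for t in (-40.0, 0.0, 40.0):
    power_mw = EXCITATION_A ** 2 * pt100_resistance(t) * 1e3
    error = power_mw * SELF_HEATING_C_PER_MW
    print(f"{t:6.1f} C: {power_mw:.1f} mW dissipated -> ~{error:.2f} C warm bias")
```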

Conclusions and discussion

This study provides a better understanding of temperature measurement errors caused by the sensor, analog signal conditioning, and data acquisition system. The MMTS sensor and the HO-1088 sensor use the ratiometric method to eliminate voltage reference errors. However, the RSS errors in the MMTS sensor can reach 0.3°–0.6°C at temperatures outside the range from -40° to +40°C. Only with yearly replacement of the MMTS thermistor, together with a calibrated MMTS readout, can errors be constrained within ±0.2°C over the temperature range from -40° to +40°C. Because the MMTS is a calibration-free device (National Weather Service 1983), testing one or a few fixed resistors for the MMTS cannot verify the nonlinear resistance–temperature relation of the MMTS thermistor. For the HO-1088 sensor, the self-heating error is quite serious and can make the reported temperature 0.5°C too high under 1 m/s airflow, which is slightly less than the actual normal ventilation rate in the ASOS shield (Lin et al. 2001a). The simple linearization method for the PRT of the HO-1088 causes unacceptable errors that are more serious in the low temperature range. These findings help explain the ASOS warm biases found by Kessler et al. (1993) in their climate data and by Gall et al. (1992) in the climate data archives. For the dewpoint temperature measurements in the ASOS, such self-heating effects might be cancelled out by the chilled-mirror mechanism: the chilled-mirror body (which holds the dewpoint PRT in conductive contact) is heated or cooled to reach an equilibrium thin dew layer at the dewpoint temperature.

Thus, in this case, the self-heating error for dewpoint temperature measurements might not be as large as that for air temperature after correct calibration adjustment. Likewise, the relative humidity data from the ASOS network, derived from air temperature and dewpoint temperature, is likely to be contaminated by the biased air temperature.
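The ratiometric method mentioned at the start of these conclusions deserves a brief illustration, because it explains why the analysis dwells on the fixed resistors rather than the voltage reference: when the datalogger effectively measures the ratio of the divider output to the excitation, drift in the excitation cancels. The sketch below is a generic half-bridge example, not the actual MMTS or HO-1088 signal conditioning.

```python
# Ratiometric resistance measurement: the sensor forms a divider with a known
# fixed resistor, and the ADC effectively measures Vout / Vexcitation, so any
# drift in the excitation (reference) voltage cancels out.  Generic half-bridge
# illustration only, not the specific MMTS or HO-1088 circuit.

R_FIXED = 10_000.0   # hypothetical precision divider resistor, ohms

def divider_output(r_sensor, v_excitation):
    return v_excitation * r_sensor / (r_sensor + R_FIXED)

def sensor_resistance(v_out, v_excitation):
    ratio = v_out / v_excitation          # the excitation magnitude drops out
    return R_FIXED * ratio / (1.0 - ratio)

r_true = 9_500.0                          # ohms, the quantity we actually want
for v_exc in (2.500, 2.450, 2.600):       # reference drifting by a few percent
    r_est = sensor_resistance(divider_output(r_true, v_exc), v_exc)
    print(f"Vexc = {v_exc:.3f} V -> inferred R = {r_est:.1f} ohm")
# All three cases recover 9500.0 ohm; what remains is the tolerance and
# temperature coefficient of R_FIXED itself, hence the attention paid to
# resistors R2, R3, R13, etc.
```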

Both the HMP35C and USCRN PRT sensors have their resistances interrogated directly by dataloggers. The HMP35C is delivered by Campbell Scientific, Inc., with recommended measurement methods. Even so, the HMP35C sensor in the AWS network can experience errors of more than 0.2°C at temperatures from -30° to +30°C. Beyond this range, the RSS error increases from 0.4° to 1.0°C due to thermistor interchangeability, polynomial error, and CR10X datalogger inaccuracy. For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2°–0.34°C due to the inaccuracy of the CR23X datalogger, which suggests that the configuration of the USCRN PRT and the measurement made in the CR23X could be improved if higher accuracy is needed. Since the USCRN network is a new setup, the current configuration of the USCRN PRT temperature sensor could be reconstructed for better measurements.

This reconstruction should focus on increasing the signal sensitivity, selecting fixed resistor(s) with a smaller temperature coefficient of resistance, and decreasing the self-heating power, so that the sensor is more compatible with the CR23X for long-term climate monitoring.

These findings are applicable to future temperature data generated by the USCRN network and to possible modification of the PRT sensor for higher-quality measurements in the reference climate network.

The complete Lin–Hubbard paper (PDF) is available here.

Jan Curtis
March 9, 2010 9:08 am

Great work Ken. I must read.

Lance Wallace
March 9, 2010 9:10 am

Anthony, the temperatures got screwed up here and make no sense. They appear to be thousands of degrees. The actual numbers might be -20 [superscript] degrees C but it gets changed to 2058 or some such thing.
REPLY: I noticed that as soon as I published. Some sort of translation error in ASCII character sets from the PDF to HTML. Fixed even before you wrote this. Refresh.- A

jack mosevich
March 9, 2010 9:11 am

This is a problem I have always wondered about. Have any calibration/accuracy studies been performed? This paper answers that question and I am humbled by how complicated the answer to a simple question can be. I also wonder how accurate mercury/alcohol (older) thermometers are?
Maybe the errors cancel and have a mean of zero. Do the GISS personnel care, I wonder?

March 9, 2010 9:18 am

Interesting stuff. In Brohan et al. 2006 (the HadCRUT3 and CRUTEM3 paper) the error associated with a single thermometer reading is estimated at 0.2C.

John Diffenthal
March 9, 2010 9:24 am

Garbage in – garbage out! It’s difficult to be amusing about this litany of poor design and configuration.
Has there ever been a systematic check of mercury thermometers against the data log?

March 9, 2010 9:25 am

“But that’s just a fact of life in the climate sciences. [No data for you!]”
Professor Phil Jones giving evidence at the Parliamentary inquiry into Climategate.

woodNfish
March 9, 2010 9:25 am

I’ve said all along that the error range of the equipment is large enough that there is no way to determine the small trends we were being told were occurring. All this furor over AGW for nothing, yet the juggernaut for regulating carbon continues unabated.

March 9, 2010 9:26 am

The tests need to be made in a humidity chamber so you can go from cold and dry to warm and humid quickly to see if condensation can be a factor.

Pops
March 9, 2010 9:27 am

It seems to me that you can have all the electronic gizmos and gadgets you like in order to accurately collect the data – even correlate it – but you still have the problem of human bias prior to the publication of the desired “end-product”, as some people refer to it these days. Until honest (new) hands are back at the tiller, can anyone trust any numbers published by anyone in the (climate) scientific world these days?

Philhippos
March 9, 2010 9:28 am

Two points:
1. I can still only see things like this in the conclusions: However, the RSS errors in the MMTS sensor can reach 0.3–0.6 under temperatures beyond 2408 to 1408C. Only under yearly replacement of the MMTS thermistor with the calibrated MMTS readout can errors be constrained within 60.28C under the temperature range from 2408 to 1408C.
2. I hoped to make contact via Tips & Notes to ask about the possibility of checking out UK/European stations and where to find info, but cannot get that link to work – only a blank page that takes no typing.

March 9, 2010 9:29 am

Also PRT (thermistors somewhat less so) are subject to stress changes. To detect them a number of cycles from cold to hot (to the hot limit) should be tried to see if there is a shift.

DR
March 9, 2010 9:31 am

We still keep a good ole mercury thermometer in our lab.

Gary Hladik
March 9, 2010 9:36 am

Uh-oh. I anticipate even more undocumented spaghetti “correction” code from GISS.

Henry chance
March 9, 2010 9:37 am

Blizzards in Spain. Do wind turbines attract snow? I wonder if the temps in Spain are high above freezing.

March 9, 2010 9:38 am

Also: .1% resistors are subject to drift over their life. It can be accelerated by cooking them at the max inst temp for 1,000 hours or so. This is even more true for electronics.
I’ll come back and see what others say.
Let me leave you with this: electronic accuracy to 1% is not too tough. Initial calibration to .1% is not too tough. Holding .1% accuracy over life without frequent recalibration is very tough.
Then you have to go back and see how that relates to temp. BTW setting accuracy by manual trim pots is very bad. It should be done in software with values held in a local EEPROM. Second best is digital pots of sufficient resolution and stability.

Chris
March 9, 2010 9:41 am

I have wondered about the color stability of the plastic MMTS enclosures. Certain plastics such as PVC and perhaps others tend to yellow or gray slightly as they are exposed to weather and sunlight over several years. I guess this is a similar issue to weathering and paint issues on the Stevenson screens. This reflectance change might be enough to slightly increase heat absorption in the sensors and the resultant daytime readings. Don’t know if this has been considered a factor in long-term temperature trends. Don’t know if this is an issue with the materials used in the MMTS units.

March 9, 2010 9:49 am

As a layman who knows that when one is in doubt, try to find and work from first principles, I became a full-on sceptic when I enquired if there was a standard for climate thermometers, their siting, housing etc on a Guardian blog and got jumped all over as a “troll” and a “denier” by one of that paper’s more ardent Greenpeace activist/bloggers for my enquiry. In that short time my knowledge of climate has increased exponentially, but it now seems that my original question was very sensible, if a little innocent considering what we all know now about standards of data-selection and archiving in some quarters.

Tom in Texas
March 9, 2010 9:51 am

It appears that sensors in low temperature regions (Siberia, etc.) have the largest errors. Aren’t these regions showing the most warming?
“Do the GISS personel care I wonder?”
GISS has analyzed this problem and have come up with a correction factor:
Lower earlier temperatures and raise later ones.

Curiousgeorge
March 9, 2010 9:51 am

@ jack mosevich (09:11:08) :
Measurement system error is a very common concern throughout industry. Google “Gage Reproducibility and Repeatability”, or visit NIST ( http://www.nist.gov/index.html ) to learn more about it. In my experience in aerospace such errors are rarely evenly distributed. The distribution for a particular system tends to be skewed in one direction or the other, and knowledge of this behavior is usually incorporated by design engineers when determining tolerances.

March 9, 2010 9:55 am

Crikey !!!
it just keeps getting worse then we thought !!!

Max Anderson
March 9, 2010 9:57 am

Take a look at figure 3 in the paper. The signal conditioning circuit for ASOS stations uses variable resistors (potentiometers) to adjust gain and offset of the measurement. After working in the world of electronic measurements for 30+ years, I can tell you this is asking for trouble. Eventually these devices start to drift and/or become noisy which will cause more problems with the temperature data.
In today’s world of digital data processing there is seldom any excuse for using potentiometers in this sort of equipment.
REPLY: I’ve used hundreds of those devices, and yes, they are a source of drift – A

Tom Bakewell
March 9, 2010 9:59 am

Esteemed Antony, I still see text errors with the temp measurements as noted by Lance Wallace above.
What a great paper! Data is everything, and understanding what it is and how it is collected is my kind of science work.
Tom Bakewell

Lance Wallace
March 9, 2010 9:59 am

Some errors corrected but there are still strange numbers like
3rd paragraph below Table 1: “2508C to 1508C”
5th para below Table 1: “unable to discriminate 60.258C…”
Conclusions line 5 “2408 to 1408C”
REPLY: It is strange that copy/paste from PDF does this in this case, but I think I’ve fixed them all. – A

March 9, 2010 10:06 am

It’s still wrong after refresh:
In the USCRN, the PRT sensor was strictly calibrated from 2508 to 1508C each year in the laboratory.
The linearization error in the HO-1088 is relatively serious because the analog signal (Fig. 3) is simply linearized from 2508 to 508C versus 22 to 2 V. [the ’22’ is wrong there as well]
Since this error was constantly varied from 20.28 to 20.38C, it can be cancelled
Even so, the HMP35C sensor in the AWS network can experience more than 0.28C errors in temperatures from 2308 to 1308C.
Beyond this range, the RSS error increases from 0.48 to 1.08C due to thermistor interchangeability,
The self-heating factor for the PRT in the HO-1088 is 0.258C mW21 at 1 m s21 airflow (Omega Engineering 1995),
or the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.28–0.348C due to
etc
REPLY: Yes strange that copy/paste from PDF results in this. But I think I’ve fixed them all now. -A

JonesII
March 9, 2010 10:08 am

Who cares! if afterwards any recorded temperatures will be “adjusted” to fulfill their global warming crazy dreams.

Mark Wagner
March 9, 2010 10:10 am

Maybe the errors cancel and have a mean of zero
unlikely given that:
some of the error is inherent in the component manufacturing. resistors of a given resistance, for example, are “mixed” with a certain ratio of the various “ingredients.” modern manufacturing processes ensure that each lot is virtually identical to the next, thus any fluctuation in purity would tend to always be off in the same direction. although components made by different mfgs could be different.
some of the error is due to degradation of the physical substance in the components, which would always be the same direction for a particular component type, although they would degrade at different rates due to differences in environment.
probably a few more I could think of

March 9, 2010 10:11 am

M. Simon (09:29:41) (09:38:41):
“…PRT (thermistors somewhat less so) are subject to stress changes. To detect them a number of cycles from cold to hot (to the hot limit) should be tried to see if there is a shift.”
That is certainly true of PRTs. The technical term is hysteresis. It means that if you cycle from a known baseline temperature, raise the temperature, then lower it back to the original temperature repeatedly, the PRT reading drifts around the baseline, never coming back to the exact same reading.
The effect isn’t great, but it’s definitely there. Hysteresis occurs in all thermocouples to varying degrees. That is why Mil-Spec government defense contracts require regular periodic calibration of thermocouples, RTDs, PRTs, etc.

John F. Hultquist
March 9, 2010 10:18 am

Anthony, FYI — I just visited the new (Sept. 09) weather station site in Cle Elum, WA (the one in the database now is defunct as it was when John S. did it). He alerted me to the altered-new Lat/Long and suggested I go have a look, which I did. Our families are both dealing with ill family members but we will get this done in a week or so. John H.
REPLY: Thank you for the effort. -A

JonesII
March 9, 2010 10:18 am

The gulf deep cold…so the gulf current would change in the near future:
http://weather.unisys.com/surface/sst_anom.gif

geo
March 9, 2010 10:35 am

When they were talking about outside -40C/+40C, I was thinking “that’s not good, but not terrible. . . problematic regularly in some parts of the southwest on the high side, and occasionally in parts of the upper midwest on the low side”.
Then they got to talking about outside -30C/+30C, and I just sighed. . . .

Steven Hill
March 9, 2010 10:46 am

If it’s low, throw it out. If it’s high, it’s climate change.

March 9, 2010 10:48 am

REPLY: Yes strange that copy/paste from PDF results in this. But I think I’ve fixed them all now. -A
Still plenty need fixing – at least 19:
In the USCRN, the PRT sensor was strictly calibrated from 2508 to 1508C each year in the laboratory.
This implies that the MMTS temperature observations are unable to discriminate 6 [should be +/-]
0.258C changes in the lower temperature ranges (Fig. 5 and Table 2).
The interchangeability of the MMTS thermistors is from 60.28C from temperature 2408 to 1408C and 60.458C elsewhere (Fig. 4).
versus 22 [should be -2] to 2 V.
The maximum magnitude of linerization error reached over 18C (Fig. 7).
Since this error was constantly varied from 20.28 to 20.38C, it can be cancelled during the onboard calibration.
The self-heating factor for the PRT in the HO-1088 is 0.258C
mW21 at 1 m s21 airflow
corresponding to the selfheating errors 0.58C
Even so, the HMP35C sensor in the AWS network can experience more than 0.28C errors
the RSS error increases from 0.48
in the USCRN network, the RSS errors can reach 0.28

March 9, 2010 10:52 am

Dear Patchy:
Just to say thanks to everyone at Teri for the truck loads of cash over the years. I concluded my work and closed my office. In my grant app. i indicated i’d be studying the connection between CO2 and AGW. Well, I did and there isn’t one.
Cheers Mate.
Johnnny

March 9, 2010 10:52 am

A – don’t hate me for this, but…
>> 60.28C from temperature 2408 to 1408C and 60.458C elsewhere
there’s still one pesky gremlin wandering around just above the second table…

Brian D
March 9, 2010 10:54 am

Can this issue get any crazier? Can we get a true temp anymore?
Well, it’s in the low 40’s and snow is melting away quite nicely. At least I think it’s in the low 40’s. Can’t be sure now. But the snow is melting and it feels warm out. I’ll go with that.

Fred Harwood
March 9, 2010 10:56 am

And did I read that PRTs are used to calibrate the temperature satellites?

John F. Hultquist
March 9, 2010 10:57 am

I wonder when the official temperature of the USA, like the time, will become the responsibility of the National Institute of Standards and Technology (NIST)? http://www.nist.gov/index.html
The issues raised by this paper are not going to get addressed in a large network of stations at airports, sewage treatment facilities, other NGO, and the backyards of hundreds of interested volunteers. However, I don’t believe this issue will have legs in the current cycle of politics and the need to find vast sums of money to support the ongoing spending.

Joe
March 9, 2010 11:00 am

Anthony,
We are putting all the weather data together to understand the world’s climate.
This is absolutely incorrect and will only produce confusing contradictions.
We have actually two weather systems.
The northern hemisphere has the most land mass and population density.
And the southern hemisphere with the most ocean mass and less animal and plant life.
These systems converge at the equator to roll back on themselves back to the poles. The bulge in our atmosphere at the equator is no coincidence, as heated air rises then moves back to the poles, rotationally cooling down.
It is impossible for a hurricane to cross. If it did, it would lose all its energy very quickly as the rotational spin would be backwards to it and to the energy the oceans are producing across the equator.

Annabelle
March 9, 2010 11:00 am

Thanks climategate2009 for that YouTube link of Jones giving evidence to the parliamentary enquiry.
It shows climate science in a terrible light – using Jones’s own words. Jones comes across as very scared and shifty – even now he is trying to obfuscate and not answer questions directly.

March 9, 2010 11:04 am

So, for both monitoring types over the range of temperatures found in nearly all habitable regions, -20degC to +50 degC, the bias is always positive. And as much as +1 degC bias at the higher temperatures.
Now, why am I surprised?
[not that I’m bothered about 1 degC either way – but that seems enough to have the world going crazy wasting a few more trillion on the carbon folly]

Walt The Physicist
March 9, 2010 11:07 am

When I tried to ascertain the accuracy of temperature measurement and asked Mr. Gavin (Real Climate), the answer was reference to GRL v.28,n13, pp.2621-2624 (2001). In this article it is shown that the accuracy of average monthly temperature is 0.2C/Square root of (60). The 0.2C was determined to be the accuracy of one temperature measurement with a thermometer. SQRT(60) comes from twice a day temperature recording – 60 data points. So, the accuracy is 0.0255C. This is an elegant solution to the accuracy problem – even with a crappy measuring device just make thousands of measurements and your accuracy will be fantastic. All that is ironically speaking, of course.

March 9, 2010 11:13 am

the……..’worse than we thought’ is the new baseline. When plotted, using this new baseline, the IPCC is building a ski jump!

Eddie
March 9, 2010 11:15 am

not to be a pain but i caught another translation error…
It is not generally possible to detect and remove temperature-dependent bias and sensor nonlinearity with static calibration. In the USCRN, the PRT sensor was strictly calibrated from 2508 to 1508C each year in the laboratory.
REPLY: Its the character formatting of the AMS document that is the pain…never had one do this before. I think I’ve caught them all now. -A

dearieme
March 9, 2010 11:17 am

OK, so these instruments should be error-analysed as we were taught in freshman physics. Why had it not been done before?

March 9, 2010 11:25 am

As a couple of people pointed out, error biases all tend to drift in the same direction. Was there any work done based on drift over time? Not just the sensors themselves either. The wires they are attached to will oxidize over time and their resistance goes up. Small vibrations transmitted through the earth from a nearby highway produce tiny stress on the wires that also drives resistance up over time. If there is a capacitor in the circuit, its value will also change as it degrades over time.
I know a lot of people don’t believe the vibration thing. Might I remind you that those are human induced vibrations. They are 300 times as powerful as naturally induced vibrations. This has been known for several decades due to research done by some boys on a beach who published considerable work on good vibrations.

Urederra
March 9, 2010 11:40 am

[quote]Henry chance (09:37:23) :
Blizzards in Spain. Do wind turbines attract snow? I wonder if the temps in Spain are high above freezing.[/quote]
There were below freezing here at night but not right now, but just for 1 degree. more freezing is coming tonight. I am living where the first wind turbines were placed in Spain. And, well, they are working OK right now. they were placed in a good windy spot.

James Chamberlain
March 9, 2010 11:50 am

accuracy smaccuracy. as long as it’s hot.

Steve in SC
March 9, 2010 11:52 am

Couple of things.
1) Thermistors are the absolute worst choice you could make for a temperature sensor for anything other than VERY VERY short term measurements. They (all models) have proven to be very drifty over time and temperature excursions. Nonlinear to boot.
2) The best choice would be RTDs. (you can tell these folks have not dealt with instrumentation very much at all just by their terminology) The second best choice would be thermocouples. Various types (E, K, J, etc)
3) The standard error for RTDs is +/- 1 deg F and for thermocouples is +/- 2 deg F. You can do better than that by on spot calibration of the “SYSTEM” which includes sensor, electronics, readout device, and any wire leads or batteries involved. You can generally expect 50% to 75% better accuracy with calibration. If you really want fine accuracy to .1 C or better, plan on calibrating for every reading.
4) The temperature sensor of choice in the pharmaceutical industry is the RTD. Calibrations are normally at 90 day intervals with some non critical monitoring temps out at 180 or yearly. None, I repeat NONE go beyond that for any reason. If anybody dies because of bad drugs caused by sloppy manufacture, heads roll, money is lost, the ax falls, and you know the rest.
5) While it is possible to resolve down to 0.01 deg C or F, getting inaccuracies down to that level is extremely expensive. (note the proper use of terminology)
6) It looks like (if the table is to be believed) that the ASOS would be the best setup of the bunch.
7) For those wondering the accuracy statement of a good calibrated lab grade mercury thermometer is in the neighborhood of +/- 1/2 degree.
The International Practical Temperature Scale of 1991(IPTS91) would be better than referring to Omega Engineering who are nothing but marketers and resellers. I can probably fish out some good references on how you actually do this stuff if anybody really needs it.

Douglas Hoyt
March 9, 2010 12:04 pm

Instead of using PRT sensors which tend to drift, I would suggest that using sonic temperature measurements would give better results. A pdf file describing the theory is at http://www.wmo.int/pages/prog/www/IMOP/publications/IOM-82-TECO_2005/Posters/P3(09)_Germany_4_Lanzinger.pdf
They use a 20 cm baseline and get 0.3 C accuracy or better. If one built an instrument in the shape of a cross with the sound source in the center and 4 sensors on the outside, one could measure temperature and wind velocity very accurately. With a 5 meter baseline, a 0.01 C accuracy or better should be obtainable. The major error source then will probably be relative humidity and pressure. In order to get accurate temperatures below -25 C, they need to protect the electronics more, perhaps by burying it.
The people making temperature measurements might learn something if they talked to people specializing in metrology.

Jerry
March 9, 2010 12:06 pm

I guess this has been my whole beef with the AGW theory: being an engineer who uses pressure and temperature gauges in a closed system every day, the purported accuracy never made any sense to me. The range of what the temperature of the earth (if there really is such a thing) could be exceeds the purported warming for the last century. No one will ever be able to convince me that measuring temperature is in any way a good proxy for what happened in the past or where we will go in the future.
As Pete in Oh Brother Where Art Thou Said ” That Don’t Make No Sense”

kadaka
March 9, 2010 12:22 pm

@ Smokey (10:11:31) :
Yup, there it goes. Solid reasons to question the AMSU numbers from UAH, as the AMSU unit is calibrated on-board by staring at a warm target whose temp is measured by multiple PRT’s. Which are not getting regular periodic calibration. And being on a satellite, they likely see a good deal of thermal cycling over a rather wide range. Plus, I see mention of self-heating. What currents/wattages do the satellites run them at?

March 9, 2010 12:42 pm

In relation to climate change, this is a great fuss over nothing.
It seems to me that the obvious conclusion from figure 5 is that over the range of ordinary real-world conditions the instrument error is easily corrected. Even outside that range the worst-case inaccuracy is only on the order of 0.5%.
Also to draw an analogy; which quality, “accurate” or “consistent”, would be better in a train station clock? After-all, the debate is over temperature trends.

Stephen Brown
March 9, 2010 12:51 pm

This is a good point. The data is everything; from whence the data is obtained is the very crux of the discussion. If the instrumentation cannot be trusted then nothing flowing from this now-known-to-be erroneous data can be trusted.
It’s back to square one. We are going to have to start all over again, but this time let’s get some engineers and electricians in on the act and get the job done properly.

janama
March 9, 2010 12:58 pm

as I’ve mentioned before I have found a situation where two Stevenson screens exist within 200m of each other, one is automated and one is read twice a day. The data for each is published by BoM – the variance ranges around .5 – .7C!

Frank
March 9, 2010 1:00 pm

This paper doesn’t tell us what these laboratory measurements mean for temperatures recorded in the field and the climate record. If a daily high and low is recorded to the nearest 1 degF or 0.5 degC, many of these errors aren’t large enough to be important. When we want to know about how climate changes, long-term stability, not absolute accuracy is the most important factor. As long as a high of 30.5 degC three decades ago at a station still gets recorded as 30.5 degC today, there isn’t a big problem if the actual temperature is 30.12 or 30.87. On the other hand, if the average instrument drifts up or down by 0.1 degC/decade as the instrument ages, there is a big problem.
However, this is a good reminder of how small global warming of 0.2 degC/decade really is.

RockyRoad
March 9, 2010 1:01 pm

OT: Listening to 9:54 of Phil Jones provided by climategate2009 (09:25:14) was pretty difficult, not because my headphones didn’t work but because of the obfuscation and redefinition of “climate science”. UEA definitely has a problem.

Rod Smith
March 9, 2010 1:01 pm

My questions:
– Who in our federal government authorized purchase of equipment with such poor performance? How did it pass IOC checks?
– What is the cost to upgrade the equipment with accurate sensors?
– Is anyone investigating this acquisition, and if not, why not?

GAZ from Sydney
March 9, 2010 1:04 pm

In the absence of specific mention to the contrary I assume that:
1. They have carried out the analysis on a single sensor of each type
2. The analysis was done on new instruments.
Which raises the questions:
1. What sort of variability is there between similar sensors?
2. Does the error range change over time, and the need for calibration?
I wonder if anyone can respond to these.

March 9, 2010 1:06 pm

Here’s a nice introduction to electronic temp measurement:
Improving The Accuracy of Temperature Measurements
Happy reading–
Pete Tillman

MikeC
March 9, 2010 1:09 pm

Not sure if I understand what all the excitement is about. USHCN v2 is good at catching these kinds of errors, such as the ones related to MMTS. And the study says this is mainly applicable to CRN, and each CRN station has 3 HGO’s hanging from them specifically to catch drift or other equipment problems.
REPLY: USHCN2 is good at catching transient and short period errors. It will not catch long period errors such as instrument calibration drift or gradually increasing UHI, or gradual land use change. I confirmed this with the author of the method, Matt Menne, when I visited NCDC – Anthony

D. King
March 9, 2010 1:13 pm

The problem with Ponzi schemes is a crash or time will expose the scheme.
With Ponzitemps, you can only add heat until time or a super El Nino gives
you 15 years of no statistically significant increases. Then you’re left holding the
bag and having to explain your methodology. OOPS.

March 9, 2010 1:21 pm

After reading this about the PRTs I too immediately thought about the satellite PRTs, kadaka. You beat me to a post.
I don’t know the orbital altitude of the satellites in question but it would seem that they might be temperature cycled at least once an hour, and in the vacuum environment self heating might be a large problem. It isn’t just the PRTs themselves of course but all of the associated electronics, and temperature cycling isn’t the only problem. What about the radiation environment?
The satellite sensors were designed to provide what amounts to an enhanced and denser weather balloon network for short term forecasting purposes. For this they probably work well.
The weather balloon network also suffers from lack of post calibration of the sensors immediately after a flight although I have thought of a way of doing this.
I’m beginning to think that the entire field of climatology has elements of high farce.

Pops
March 9, 2010 1:37 pm

Problems copying and pasting text? Try this:
http://www.stevemiller.net/puretext/
REPLY: I downloaded it. Thanks great FREE tool!
– Anthony

Ray
March 9, 2010 1:37 pm

In the lab I use some of the best thermometers and thermocouples available on the market for my temperature measurements. Even with those I still need to check their calibrations regularly and make the necessary corrections. How can some temperature measuring devices be left for decades out there without proper and regular calibrations? Not only do they have significant errors, but their precision and exactitude also deviate over time.

pat
March 9, 2010 1:40 pm

climategate2009 –
the clip leaves out the most damning and under-reported moment of the entire hearing:
Q119 Dr Harris: You cannot speak for other fields of science I guess but do you have any idea whether, in other fields of science, the data is sent out on request? In clinical trials I have not seen photocopies of anonymised patient data being sent out on request. If peer reviewers ask to see the raw data, is there a different situation there or do they never ask for that?
Professor Jones: We would probably send them that then, but they have never asked for it.
Q120 Dr Harris: You would not object to sending peer reviewers or editors that data?
Professor Jones: No, but they have never asked.
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/uc387-i/uc38702.htm
extraordinary article in the Guardian:
9 March: Guardian: Alan Nelson: How to avoid your own ‘climategate’ scandal
Alan Nelson is a senior associate at law firm Dundas & Wilson
So how do universities and academics ensure that their correspondence does not become the “smoking gun” that turns a simple FoI request into an international scandal?…
For sensitive information that you would not want in the public domain, rather than putting it in email or in a document, it may be better to discuss it face-to-face or on the phone.
Careful consideration should also be given to how long emails are saved and when they are deleted. In some fields of work, there will be regulatory reasons for keeping emails (clinical work, for example) but do they all need to be retained and archived? A periodic review should be performed to ensure that, wherever possible and lawful, emails that could be that smoking gun are deleted.
When making handwritten notes or comments on documents, staff need to be aware that those scribbles could enter the public domain in response to a FoI request. Do you really want someone to see your exclamations of “Idiot!!!” or “Rubbish!!!” on a note? Probably not, so take care – and shred your notes once they have served their useful purpose…
http://www.guardian.co.uk/education/2010/mar/09/avoid-climategate-foi-requests-academics
Nelson also says: ” Jones admitted to the Commons committee that he had not dealt with requests for data “in the right way”. His detractors accuse him of a reluctance to reveal his data and research, and both Jones and the UEA of a desire to avoid complying with FoI requests.”
those terrible ‘detractors’!

Pops
March 9, 2010 1:41 pm

Henry chance (09:37:23): Blizzards in Spain. Do wind turbines attract snow?
No, only huge subsidies.

jaymam
March 9, 2010 1:42 pm

Climate station electronic sensors clearly need to be checked regularly (e.g. monthly) against mercury thermometers, and a log kept of the readings. If this has not been done, the data from sites with electronic sensors is suspect and should not be used.

March 9, 2010 1:43 pm

Understanding Climate’s Influence on Human Evolution
The hominin fossil record documents a history of critical evolutionary events that have ultimately shaped and defined what it means to be human, including the origins of bipedalism; the emergence of our genus Homo; the first use of stone tools; increases in brain size; and the emergence of Homo sapiens, tools, and culture. The geological record suggests that some of these evolutionary events were coincident with substantial changes in African and Eurasian climate, raising the intriguing possibility that key junctures in human evolution and behavioral development may have been affected or controlled by the environmental characteristics of the areas where hominins evolved.
new web book:
http://books.nap.edu/openbook.php?record_id=12825&page=R1

J Zulauf
March 9, 2010 1:47 pm

would it be interesting to see what pointing a web cam at an old fashioned glass thermometer colocated with the fancy kind and compare. The accuracy required appears to be +/-50. to within .1 — 1000 states. 1-2Mpixels would (or a 1-2K element 1D optical sensor, with lens calibrations of course) — automate it and hook it up wirelessly…
Hmmm, that just about maps to the iPhone specs… “Calibrate weather stations, there’s an app for that” — or there could be.

Allen Ford
March 9, 2010 1:56 pm

“The people making temperature measurements might learn something if they talked to people specializing in metrology.”
Just like they might learn something from competent specialists in computer programming and statistics!

Espen
March 9, 2010 2:00 pm

Tom in Texas: Exactly my thoughts. A lot of the warming is in the winter, and in cold places like northern Canada and the interior of Russia. If the readings of the minimum temperatures in winter for a lot of these places are wrong, it could explain a significant part of the difference between the current warm period and the warm period in the 30s-40s.

Paul Linsay
March 9, 2010 2:00 pm

The satellites also calibrate against the 2.7 K background from the Big Bang, about as absolute a temperature standard as possible.

Gail Combs
March 9, 2010 2:01 pm

vukcevic (13:43:47) :
Understanding Climate’s Influence on Human Evolution
http://books.nap.edu/openbook.php?record_id=12825&page=R1
Thanks, you come up with some very interesting stuff.

JDN
March 9, 2010 2:06 pm

Nice paper as far as it goes. But as it admits, it’s a reconstruction of how well these devices perform.
My conclusion is that quality control is necessary. Someone needs to pull the entire enclosure into a lab, calibrate it against known temperatures, and return it. The conclusion that the actual temperature may be higher/lower under certain conditions is suggested by their analysis but no conclusion can really be drawn without direct monitoring of real-world performance.
Am I safe in assuming that nobody checks up on the performance of these stations?

MikeC
March 9, 2010 2:07 pm

REPLY: USHCN2 is good at catching transient and short period errors. It will not catch long period errors such as instrument calibration drift or gradually increasing UHI, or gradual land use change. I confirmied this with the author of the method, Matt Menne when I visited NCDC – Anthony
Anthony, You’re confusing two different issues. The MMTS errors will show as a step change. Therefore the USHCN v2 will catch it. It was probably one of the contributing factors in the MMTS adjustment that Quayle never researched.
The HGO sensor, on the other hand, will drift. But it is on the CRN station which has 3 HGO’s hanging down specifically to catch drift.
And by the way, Kristen’s page still has the link to Claude William’s presentation to the AMA where he says UHI is retained in USHCN v2 (that it was their goal), “and that’s a good thing”. If you can get a copy and send it to PtM I would appreciate it. I can’t copy that format for the life of me and cannot get anyone else who can either. It would probably make a good post too.

REPLY:

The MMTS errors, such as an offset from sensor replacement, would indeed be caught by USHCN2. A long term sensor drift over several years would not be caught by USHCN2 methods.
Can you point me to the link you are referring to?-A

kadaka
March 9, 2010 2:19 pm

@ J Zulauf (13:47:52) :
For electronic surface temperature monitoring, I think I would rather trust a system that optically measures the level of a mercury thermometer. Very low-power LED laser, directly taking the reading against a marked scale (bar-coded graduations perhaps)… We can do that.

Steve Koch
March 9, 2010 2:21 pm

I’ve designed/implemented software systems that acquired realtime measurements from surface sensors. My experience was that it was unusual to find a properly calibrated surface sensor in the field. After a while, I tried to automate calibrating (and detection of poorly calibrated) sensors as much as possible. When reviewing data after the fact, I wrote software to analyze the sensor data to see if it looked properly calibrated. We automatically stored in the computer the results, timestamp, and name of the guy who did the calibrating for each calibration done in the field.
It would be an excellent idea to review the calibration records for these sensors. The measurements have no credibility without calibration records.

Gail Combs
March 9, 2010 2:29 pm

Steve Koch (14:21:23) :
“….It would be an excellent idea to review the calibration records for these sensors. The measurements have no credibility without calibration records.”
Truer words were never spoken.
How the heck can you call it science if you do not even bother to do the first step any legitimate scientist would do – Calibrate your instruments and then place them on a reasonable recalibration schedule.

March 9, 2010 2:31 pm

Anthony, you’re probably fed up with hearing from me on this…but there are still some errors from the copying of the PDF. It doesn’t look too good if this gets copied on elsewhere – for example
The maximum magnitude of linerization error reached over 18C (Fig. 7).
Should be 1 degC (big difference!).
Others:
versus 22 to 2 V. [Should be -2, not 22]
Since this error was constantly varied from 20.28 to 20.38C,
0.58C higher under 1 m s21 airflow
0.258C mW21 at 1 m s21 airflow (Omega Engineering 1995), corresponding to the selfheating errors 0.58C
AWS network can experience more than 0.28C errors
in the USCRN network, the RSS errors can reach 0.28

D. King
March 9, 2010 2:37 pm

J Zulauf (13:47:52) :
would it be interesting to see what pointing a web cam at an old fashioned glass thermometer colocated with the fancy kind and compare. The accuracy required appears to be +/-50. to within .1 — 1000 states. 1-2Mpixels would (or a 1-2K element 1D optical sensor, with lens calibrations of course) — automate it and hook it up wirelessly…
That’s a very good idea. Maybe a glass thermometer, CCD, LED for
light (no heat), automated and only on when sampling, All available
technology, easy to fabricate and calibrate.

Geoff Sherrington
March 9, 2010 2:41 pm

It is a source of constant bewilderment for me that casual readers of this page can comment easily on instrumental errors like hysteresis in Pt resistance etc., yet the Lin & Hubbard article above implies that such errors persist.
In some climate science, as we have now amply heard, one response to the discovery of an error is to smooth it or average it or taper it, so that it is hidden. The proper approach is to find the reason, replace the defective step, then publish a detailed description to help others who might be susceptible.
In the early 70s when I had a large laboratory, we spent about 10% of our analysis on quality control, both through internal replication of samples and instruments and methods, and on inter-laboratory comparisons and use of certified calibration material. Even then, there were errors too large to be comfortable.
In a paper like the one above, it is essential to dissect accuracy errors (bias) from precision errors because the consequence and corrections are different. It can be the case that precision errors tend to balance each other and tend to a minimum as the number of observations increase, but that has to be tested – it cannot be assumed. Accuracy is far more difficult to achieve.
If it is good enough for aircraft to use redundancy, like a mix of GPS systems with INS, more than one compass, radar and visual range finding, air and ground speed indicators, etc, then it should be good enough to use multiple recording devices at high quality network weather stations.
The initial measurement is at the base of the pyramid of knowledge in a measurement system. If it is wrong, all of the building above it is wrong.
Does anyone have a PDF copy of “Evaluation of lunar elemental analyses”
George Harold Morrison
Anal. Chem., 1971, 43 (7), pp 22A–31a
Publication Date: June 1971
The top labs of the world did their best to arrive at a close set of analysis results on these historically unprecedented first lunar samples. The published comparison results by George Morrison will give readers some indication of what to expect from the world’s best. The error bars have not retreated much in the last 40 years. The masters of accuracy grow old and retire, to be replaced by fresh young faces brimming with knowledge – and a fresh set of personal errors.

Allen Ford
March 9, 2010 2:57 pm

“How the heck can you call it science if you do not even bother to do the first step any legitimate scientist would do – Calibrate your instruments and then place them on a reasonable recalibration schedule.”
Surely the first thing you learn in practical science classes. I know I did, as we all did in the ’50s. This point has always bothered me regarding the climate gang; now we know. Jones’ admission in the video is that it is beneath the dignity of the gangsters to deal with the mundane business of getting down and dirty with collecting actual data, relying instead on the fudged stuff from the likes of him and his cronies.
On all accounts, the whole field of climate research, with a few exceptions, should be junked until they can come up with some hard core, remedial retraining.

Al Gored
March 9, 2010 3:30 pm

But I’m sure that the instruments used in the 1800’s to get their baseline temperatures were much more accurate than these modern ones.

March 9, 2010 3:44 pm

Paul Linsay (14:00:14) :
“The satellites also calibrate against the 2.7 K background from the Big Bang, about as absolute a temperature standard as possible.”
Very nice. Now what part of the atmosphere is at around that temperature?

March 9, 2010 3:50 pm

For heaven’s sake, when I was using this equipment I never assumed it could be any better than +/-1C. If this really is the equipment being used for the global temperature measurement, then to be quite frank it is crap. OK for a repeatable rough temperature, but if you want to be measuring to 0.1C, you need equipment with an initial tolerance around 0.01C-0.02C. By the time you’ve added sensor, leads, sensor current, amp, A/D, and temperature sensitivity, the total error isn’t going to allow much better than +/-0.1C. So, an accuracy of 0.1C is only possible with precision equipment designed and calibrated for precision use, not off the shelf stuff – however good Campbell are at producing the stuff, it just isn’t suitable for the purpose!
What kind of idiot set up this global temperature monitoring system? What kind of bizarre logic made them think any kind of temperature change was anything to do with the environment and not everything to do with inherent biases in the equipment. The more I hear about global temperature monitoring, the more I think this is criminal incompetence by those involved.

JMANON
March 9, 2010 4:12 pm

One way to limit self heating that is used in industry is to only power the sensor when a measurement is being recorded.
In a typical set of industrial signal conditioning electronics, the reading might be taken every 500ms but the power is applied to the sensor fleetingly, just as long as necessary to make a measurement.
So is there a truly continuous measurement made by these stations or do they log data at suitable intervals… what intervals? 1 per second? 1 per minute? how much detail is needed? how much can the actual temperature change from one second to the next?
How long does the sensor circuit have to be energised to make a measurement?
The sort of self heating errors mentioned here seem enormous compared to what is acceptable in industry as precision.
OK, no single measurement is satisfactory as it can suffer both random and systematic errors. Systematic errors you would hope to eliminate through calibration and proper curve fitting. Random errors you minimise by taking multiple measurements.
So, for a single measurement you might collect 10 sequential readings, discard the highest and lowest and average the rest. Much more sophisticated than that would probably be a waste.
Time taken? anywhere from 1-5seconds with the sensor powered only briefly for each reading. Repeat once a minute? once every 10 minutes?
The question is, were these stations designed or intended to be used for climate studies or simply for weather observations?
Seems to me that climate studies would require a purpose constructed network of stations and that they are trying to define miniscule temperature changes.

March 9, 2010 4:12 pm

humm, if the devices start drifting with regular exposure over +40C or even +30C for some, then no wonder parts of Australia are “heating up” – even in coastal Perth where I am, we have entire summers of mid to high 30’s and more than a few over 40C days. Up in the Pilbara and Kimberley regions, they have 80+ days in a row over 40.
We’re trying to pick up fractions of a degree/decade as evidence of an AGW signal, but the basic measuring devices have normal operational errors of similar magnitudes.
Wonderful.

March 9, 2010 5:18 pm

Hi, I just placed a note on ‘Tips and Notes’ about sensor calibration.
I do all the calibrations for the company I work for and tear my hair out when I come across ‘scientists’ who are doing so-called important research with equipment malfunctions or anomalies, only to find that the gear was bought in 2005 and has never been back for calibration… 5 years of work with no integrity? And we have to buy it? Some of this work we are actually paying for, because a Govt body is using the gear and loggers.
There’s more but I can’t say too much without advertising the company and this may not be allowed…. Cheers, Ira

March 9, 2010 6:37 pm

I would consider ±0.5°C to be pretty good for a weather station. As we know from this site the likely errors due to how the instruments are sited as well as from changing instrument types over time is probably a lot more than that.

March 9, 2010 6:56 pm

Couldn’t agree more, a lot depends on how astute your researcher really is… but if you want to look at trends and slight temperature changes over time, I am able to look at thousandths (0.001) of a degree of change.
I suspect that guys doing research just do not know what can be done with the gear today.
Since readings are taken every now and then, at predetermined intervals, and they are taken for just 12 milliseconds, there is no sensor heating over time.
Ira
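As a back-of-envelope check on the pulsed-reading point above (12 ms per reading), here is a sketch in Python. The excitation voltage, sensor resistance, dissipation constant and sampling interval are all assumptions for illustration, not figures from any particular sensor or datalogger.

# Rough self-heating estimate for a pulsed resistance sensor.
# Every numeric value below is an assumption for illustration only.
excitation_v = 2.0       # volts applied while a reading is taken
sensor_r     = 1000.0    # ohms, nominal sensor resistance
dissipation  = 0.004     # W per degC, assumed dissipation constant in still air
pulse_s      = 0.012     # 12 ms energised per reading
interval_s   = 60.0      # one reading per minute

peak_power = excitation_v ** 2 / sensor_r      # W while energised
duty_cycle = pulse_s / interval_s
avg_power  = peak_power * duty_cycle

print(f"Peak power while energised: {peak_power * 1e3:.2f} mW")
print(f"Average power:              {avg_power * 1e6:.2f} microW")
print(f"Self-heating if left energised continuously: {peak_power / dissipation:.2f} degC")
print(f"Self-heating at the pulsed average power:    {avg_power / dissipation:.4f} degC")

With these assumed numbers, continuous excitation would warm the element by about a degree, while pulsing it for 12 ms once a minute brings the average self-heating down to a few ten-thousandths of a degree – which is presumably why industry powers sensors only while reading them.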

Bill
March 9, 2010 8:08 pm

Wow.
I am an electrical engineer who used to service industrial chillers. Before I put them in service, I would calibrate the thermistors by making ice cubes from distilled water, making a distilled water/ice slush, letting it settle back to exactly 32°F, and then calibrating the thermistor. The thought of weather thermometers just being stuck out there with no calibration other than a calibration resistor just blows my mind.

March 9, 2010 8:56 pm

Hi Bill,
Life was so much easier then… But how would you calibrate for -40 through +120°C?
Cheers, Ira

DesertYote
March 9, 2010 9:12 pm

I quickly scanned this paper, but that is all. It’s too much like work. I’ve been involved with Test and Measurement for 35 years: 10 doing level S qualification of ICs for NASA, 10 years with one of the biggest names in T and M, and another 10 years doing IE of equipment deployed in the field (including meteorological instrumentation!). This paper probably doesn’t even begin to tell the story. There is absolutely no excuse for our authoritative weather data collection processes to be so out of control. Though I am not surprised.
I think that one thing missing is an analysis of the performance of these sensors over variations in pressure, humidity, and contaminants (salt, dust, icing). Did I read correctly that some of the instruments are adjusted with pots? And some of them use thermistors? YIKES! Of course, what really needs to be done is to audit the quality of all of the deployed instruments and their calibration processes. It would be instructive to see what the distribution of errors is for actual in-use sensors. I have a feeling, though, that trying to gather that data would be resisted.
BTW, I have never assumed that the reported temps had anything better than +/-1°C accuracy.

DesertYote
March 9, 2010 9:39 pm

Ira Quirke (20:56:19) :
The Level S range is -50 to +125°C. But I used to calibrate starting with -72 as my first point because it is easy 🙂

rbateman
March 9, 2010 11:22 pm

I’m thinking we were better off with the bulb thermometers.
Anybody still make the high/low bulb thermometers?

March 10, 2010 2:21 am

JMANON (16:12:02) :”So is there a truly continuous measurement made by these stations or do they log data at suitable intervals… what intervals? 1 per second?”
As far as I remember (though it’s not obvious rereading the manual), the temperature measurement is done using a pulsed sensor current on the Campbell equipment. But this is just half of it; e.g. let me quote:
“Assume a limit of 0.05C over a 0-40C range is established for the transient settling error. This limit is a reasonable choice since it approximates the linearisation error over that range”.
Or to put it in other words: “there’s not a hope in hell of getting anything like a 0.05°C total error”, and this is going to be small compared to the total instrumentation errors of sensor and datalogger. What that means is that there are probably half a dozen small errors like this that all add up to around 0.3°C total error. Which means it simply isn’t the right equipment for measuring something that is intended to give a global figure to 0.1°C.
And forget the nonsense about “averaging out errors”: many of the errors will be one-sided. And that is with brand-new equipment and sensors, without the effects of solar heating on the temperature enclosure, without spiders nesting in the temperature sensor enclosure, without horizontal rain soaking the sensor, without a host of real errors which the academics in their ivory towers seem to have not the slightest clue exist.
And to cap it all, there really isn’t any unit called “temperature”: what gets averaged is a meaningless mish-mash of unrelated pseudo-proxies for temperature, from gas bulbs to platinum resistance probes – a bastardised unit!
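The “transient settling error” quoted from the manual above can also be put into rough numbers with a first-order lag model. The sketch below is Python; the 20-second time constant and the 2°C step are assumptions for illustration, not values taken from the paper or the Campbell manual.

import math

# First-order sensor response: after a step change of size `step`,
# the residual settling error decays as step * exp(-t / tau).
tau  = 20.0   # assumed sensor/shield time constant, seconds
step = 2.0    # assumed step change in air temperature, degC

for t in (5, 10, 30, 60, 120):
    residual = step * math.exp(-t / tau)
    print(f"after {t:3d} s the settling error is still {residual:.3f} degC")

Under these assumptions the reading is still about 0.1°C off a full minute after a 2°C step, so a 0.05°C settling allowance is easily consumed before the other error terms are even counted.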

michaelozanne
March 10, 2010 3:23 am

“Mike Haseler (15:50:00) :
What kind of idiot set up this global temperature monitoring system? What kind of bizarre logic made them think any kind of temperature change was anything to do with the environment and not everything to do with inherent biases in the equipment. The more I hear about global temperature monitoring, the more I think this is criminal incompetence by those involved.”
Okay, breathe, again, hold it in let it out slowly…there you go
Now it’s kind of the point that there isn’t a global temperature system. There is a set of weather and other stations, each of which was set up for its own purpose, completely distinct from calculating Global Temperature. Airport stations, for example, exist for the purpose of determining that safe conditions exist for operating aircraft. Most met stations exist for trying to guess tomorrow’s weather, etc. Trying to compose a global control datum from the individual stations is a relatively recent idea, and I’m not convinced that the quality control aspects of doing it were given sufficient thought.

OceanTwo
March 10, 2010 3:48 am

Brian D (10:54:33) :
Can this issue get any crazier? Can we get a true temp anymore?

To (mis?)quote: A man with one watch knows the time; a man with two is never quite sure.
I, also, have worked in the electronics/data acquisition/automation field for a good few years. I would say that discrete electronic devices (resistors, pots, etc.) can be as reliable and accurate as digital systems, if not more so.
Both digital and discrete systems have their flaws; specifically, digital systems have a finite resolution which, depending on the application, may or may not be relevant. Analog systems require more maintenance, more often.
In this situation, accuracy and resolution would favor an analog system, because of the emphasis placed on both accuracy and resolution as requirements. However, these systems, it appears, are treated as modern digital systems – fire and forget – which just cannot be done. Simply, you cannot treat an analog system as a digital system and vice versa.
Having said that, modern digital systems do have the capability (quite obviously) of monitoring surface temperatures over the full range, and yet there are issues. It’s like taking our $200,000 Bentley to be used in a NASCAR race – it’s a fine machine but perhaps it’s not quite appropriate for the application.
Both analog and digital measurement systems require calibration based on the expected measurement outcome, and it must be noted when the temperature exceeds the calibration range. Calibration certificates must be kept for ever and a day (and always with a name associated with them).
This is a great paper demonstrating that errors exist in numerous places when taking a temperature reading, that all errors are cumulative, and that errors are dynamic, depending both on the value being measured and on time. And this is saying nothing of any human error introduced into the measurement system.

OceanTwo
March 10, 2010 4:09 am

I do find a bit of amusement in the want of man to find linear relationships – particularly linear trends – in inherently non-linear, non-discrete, complex systems.

OceanTwo
March 10, 2010 4:21 am

Oh, to add my own personal quip:
Engineers (and scientists) are lazy. They are so lazy, in fact, that they will spend 6 months of hard time to create something that saves them 5 minutes of work.
Not sure if it’s relevant, but I think it does hold true that a lot of engineers and scientists cannot see the wood for the trees. Reminds me of the Gary Larson Far Side cartoon with the kid trying to push the door to the Midvale School for The Gifted open – while it is clearly marked pull.

March 10, 2010 4:34 am

Walt The Physicist (11:07:03) :
“When I tried to ascertain the accuracy of temperature measurement and asked Mr. Gavin (Real Climate), the answer was reference to GRL v.28,n13, pp.2621-2624 (2001). In this article it is shown that the accuracy of average monthly temperature is 0.2C/Square root of (60). The 0.2C was determined to be the accuracy of one temperature measurement with a thermometer. SQRT(60) comes from twice a day temperature recording – 60 data points. So, the accuracy is 0.0255C. This is an elegant solution to the accuracy problem – even with a crappy measuring device just make thousands of measurements and your accuracy will be fantastic. All that is ironically speaking, of course.”
I find it hard to believe that anyone could be that daft. Did he really mean ACCURACY?!! Dividing by the square root of 60 is the correct way of handling random measurement errors to improve PRECISION. But it will NOT improve ACCURACY where you have a non-random bias. Bias is bias – you can’t get rid of it by statistical manipulation. Sure, you can improve the variance/SD, but that says nothing about bias on the mean.
Think about it – if you put the measuring device in a controlled oven whose temperature you know and control perfectly, then take 60 measurements, you can determine the BIAS to a PRECISION of 0.0255°C, but the bias could be 2 degrees! You can only improve the ACCURACY by compensating for this bias. If you don’t know what the bias is, all you are doing by taking more and more samples and performing statistical manipulation is getting a better estimate of a biased mean, which tells you nothing about the extent of the bias in the mean.
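That distinction is easy to demonstrate numerically. The sketch below (Python) uses the 0.2°C single-reading scatter and the hypothetical 2°C bias from the comment above, with 60 readings of a perfectly known oven temperature; everything else is invented for illustration.

import random
import statistics

random.seed(1)

true_temp = 100.0   # known, controlled oven temperature (degC)
bias      = 2.0     # fixed instrument bias (degC), unknown to the observer
sigma     = 0.2     # random scatter of a single reading (degC)
n         = 60      # number of readings, as in the sqrt(60) argument

readings = [true_temp + bias + random.gauss(0.0, sigma) for _ in range(n)]

mean = statistics.mean(readings)
sem  = statistics.stdev(readings) / n ** 0.5   # standard error of the mean

print(f"Mean of {n} readings: {mean:.3f} degC  (precision about +/-{sem:.3f})")
print(f"Error versus the true value: {mean - true_temp:+.3f} degC  -- the bias survives")

The mean comes out tightly pinned near 102°C: the random scatter has indeed shrunk by roughly sqrt(60), but the 2°C bias is untouched. Averaging buys precision, not accuracy.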

March 10, 2010 4:51 am

In forty years of flying, the only thing we ever used ASOS and AWOS for was current wind direction and velocity — we were specifically briefed by the FAA that all other measurements were “approximate”…

Pamela Gray
March 10, 2010 5:55 am

Hearing protection ended up causing a stir in similar fashion. Foam plugs were born and raised under lab conditions. Lab technicians inserted foam plugs in the same way with the same insertion pressure and then measured sound attenuation. Great. Those foam plugs really do the job. Manufacturers stamped a 40 dB (or more) rating on the package because under laboratory conditions, that was the attenuation. Then foam plugs got sent out to workers. Workers still complained of noise. So a “field” researcher decided to measure attenuation under field conditions (workers inserted plugs, worked all day, etc.). Turns out the little plugs weren’t performing like they did in the lab. The industry was forced to lower the ratings on the packages to reflect real-world attenuation.
What. A. surprise.

rbateman
March 10, 2010 7:07 am

So, are electronic temperature measurements more or less reliable than the traditional glass thermometer with mercury in it?

Geoff
March 10, 2010 7:27 am

Also note the follow-up paper by Dr. Lin on the bias that may be introduced by snow on the sensors. Perhaps this will be especially important this year? The abstract is here.

Geoff
March 10, 2010 7:33 am

Sorry, try again, abstract here.

wmsc
March 10, 2010 8:03 am

JDN (14:06:06) :
My conclusion is that quality control is necessary. Someone needs to pull the entire enclosure into a lab, calibrate it against known temperatures, and return it. The conclusion that the actual temperature may be higher/lower under certain conditions is suggested by their analysis but no conclusion can really be drawn without direct monitoring of real-world performance.
Am I safe in assuming that nobody checks up on the performance of these stations?
JDN: All of the AWOS/ASOS stations are checked on a quarterly basis against a NIST-certified standard. However, at best the tech is only checking that the temp/dew point is within +/-2°F of the standard. I can personally attest that the AWOS sensors do walk all over the place compared to the standard. I have also seen techs who don’t allow a proper length of time before making the comparison. I’ve always felt that whatever “scientist” was using the AWOS/ASOS record for climate mongering should be fired.
I’d also like to point out that your thought of pulling the whole thing into a QC’ed lab and recalibrating it really isn’t feasible, as a number of variables would be changed.

March 10, 2010 9:17 am

ScientistForTruth (04:34:37) :
It is worse than that. The 60 measurements are not of the same thing, i.e. the temperature changes from day to night and from day to day. The humidity changes. The barometer changes. The angle of the sun changes, etc.
You only get the improvement if you are measuring the same thing with the same instrument. i.e. your measurement of a regulated oven. And in such a case a noisy instrument (within limits) improves the accuracy of your average.
But as you point out it does nothing for your precision.
Now these folks, supposedly knowledgeable of the most arcane statistical procedures, don’t even get the basics right. It is a travesty.

Gail Combs
March 10, 2010 10:31 am

JDN (14:06:06) :
“… All of the AWOS/ASOS stations are checked on a quarterly basis against a NIST-certified standard. However, at best the tech is only checking that the temp/dew point is within a +/-2F reading of the standard. I can personally attest that the AWOS sensors do walk all over the place compared to the standard. I have also seen techs that don’t allow a proper length of time before making the comparison. I’ve always felt that whatever “scientist” was using the AWOS/ASOS record for climate mongering should be fired….”
So no matter what this study shows, the actual BEST accuracy is +/-2°F, since that is the allowable drift per the Quality Control Specification – and that is only if the tech does the calibration correctly. As a lab manager I did in-house calibration quarterly or better, but the lab was required to have all the instrumentation certified at minimum once a year, or once every six months, by an outside trained and certified technician. If the government requires industry to do this, WHY are they not held to the same standards??? Heck, FARM STANDS are required to have their balances and scales tested and certified by an outside party here in North Carolina.
SHEESH, the more you dig the worse it stinks.

Edbhoy
March 10, 2010 10:48 am

OT but breaking news I suppose
http://news.bbc.co.uk/1/hi/sci/tech/8561004.stm
Ed

wmsc
March 10, 2010 11:30 am

Gail Combs (10:31:31) :
So no matter what this study shows the actual BEST accuracy is +/-2F since that is the allowable drift per the Quality Control Specification – and that is only if the tech does the calibration correctly. As a lab manager I did in-house calibration quarterly or better but the lab was required to have all the instrumentation certified at minimum once a year or once every six months by an outside trained and certified technician. If the government required industry to do this WHY are they not held to the same standards???
Gail: Actually the comment was mine, not JDN’s. The calibration cycle on the temp/dew pt/barometer *standards* is one year by an outside lab.
When you consider that pilots really don’t care (usually) if the temp is off +/-2°F, the AWOS/ASOS system does indeed do its job. The problem comes about when scientists try to take a station that is not intended for high accuracy/precision and use it as if it were.

Gail Combs
March 10, 2010 11:57 am

wmsc (11:30:55) :
“….Gail: Actually the comment was mine, not JDN’s. The calibration cycle on the temp/dew pt/barometer *standards* is one year by an outside lab.
When you consider that pilots really don’t care (usually) if the temp is off +/-2F, the AWOS/ASOS system does indeed do it’s job. The problem comes about when scientists try to take a station that is not intended for high accuracy/precision and use it as if it was.”

Thank you and that does make me feel better. I am well aware of the accuracy/use factor. I do not use an analytical balance good to 0.000gr to measure out feed for my sheep, but I might use it for measuring out medicine. On the other hand using a bathroom scale to measure out a dose of Cydectin for a small lamb could get the poor animal killed. Seems the Climate Scientists are using a bathroom scale and trying to tell us it is an analytical balance.

March 10, 2010 1:22 pm

OK, so perhaps if we wanted to reliably measure the Earth’s temperature and the up/down trends, we should bury our sensor about a metre underground on the shady side of a building, away from car parks or drains. Then allow a week for stabilisation, and measure several times per day, always at the same times… our logged data would be worth looking at, methinks, and would show any trends over a long period of time.
Calibration would require removal and re-installation, along with the same settling period each year.
Perhaps this would have better integrity….

March 10, 2010 2:54 pm

M. Simon (09:17:12) :
“You only get the improvement if you are measuring the same thing with the same instrument. i.e. your measurement of a regulated oven. And in such a case a noisy instrument (within limits) improves the accuracy of your average.
But as you point out it does nothing for your precision.”
I’m sure you meant “And in such a case a noisy instrument (within limits) improves the precision of your average.
But as you point out it does nothing for your accuracy.”

Pamela Gray
March 10, 2010 5:22 pm

Regardless of how it is measured, live analysis of raw data continues to demonstrate the high degree of variability that describes our home (Earth, and in particular the shared climate zones around Washington, Oregon, Idaho, and California). In the space of 3 days, we have set record highs (March 3) and record lows (March 6). Homogenize that!

March 10, 2010 6:09 pm

ScientistForTruth (14:54:43) :
Thanks,
I hadn’t had my tenth cup of coffee by then.

Roy
March 11, 2010 1:06 am

Surely, I can’t be the only one wondering about traceability.
A search for “traceability” (to known standards) comes up negative, both in this thread and in the original paper. To leave out traceability while discussing measurement errors is worse than not publishing error bars.
They are both insidious, no? Wonderful paper otherwise.

George E. Smith
March 11, 2010 4:21 pm

Well, the first thing that I noticed was that the sensors used for the benefit of pilot operations (hey, they really want to know) are Platinum Resistance Thermometers. The ones available for climate studies are “thermistors”.
Well, “thermistors” are generally “resistors” with a large temperature coefficient – typically negative – whose magnitude is much higher than that of any pure metal. As a result, the out-of-balance analog signals tend to be higher than you can get with, say, a PRT.
That means you can get away with cheaper analog electronics to detect those temperature signals. That does not necessarily translate into more accurate thermometer readings; just cheaper thermometer readings.
I’ve never heard it claimed that these high-TC materials are stable over geologic time scales, or even climate time frames.
I’m reasonably comfortable with properly designed and constructed PRTs, and also with the availability of good analog circuitry and processing means to detect those signals accurately. Certainly more comfortable than say reading a mercury in glass thermometer over the same temperature range would be.
Considering all the palaver they go to with their house and grounds, and gardens, and other accoutrements, it seems pretty cheesy to me that they even consider skimping on the guts of the operation, which is the sensor and its setup.
That’s like spending education funding on “administrative overhead” rather than on classroom school teachers.
I’m prepared to grant that having the best temperature-reading thermometer known to man does not necessarily translate into reading the right temperature. The instrument installation problem – ensuring the thermometer sees only the real temperature you are interested in – is not at all trivial. But at least the basic sensor ought to be as “robust” as technology knows how to make it, so we don’t have to retrofit these things from time to time (not that I’m suggesting they actually do that).

George E. Smith
March 11, 2010 4:37 pm

“”” Gail Combs (11:57:24) :
wmsc (11:30:55) :
“….Gail: Actually the comment was mine, not JDN’s. The calibration cycle on the temp/dew pt/barometer *standards* is one year by an outside lab.
When you consider that pilots really don’t care (usually) if the temp is off +/-2F, the AWOS/ASOS system does indeed do it’s job. The problem comes about when scientists try to take a station that is not intended for high accuracy/precision and use it as if it was.”
Thank you and that does make me feel better. I am well aware of the accuracy/use factor. I do not use an analytical balance good to 0.000gr to measure out feed for my sheep, but I might use it for measuring out medicine. On the other hand using a bathroom scale to measure out a dose of Cydectin for a small lamb could get the poor animal killed. Seems the Climate Scientists are using a bathroom scale and trying to tell us it is an analytical balance. “””
Now take it easy on that poor bathroom scale Gail; those things can actually be used to weigh your luggage before you head off to the airport.
I’ve even used my bathroom scale to weigh the entire planet Earth; yes, you can do that.
You need something like a bucket or a couple of cinder blocks that you can put the scale on so it is up off the ground; this will weigh the cinder blocks along with the planet.
So you turn your bathroom scale upside down, and put it down on top of the cinder blocks or bucket, so the readout is hanging out clear of the blocks, so you can read it upside down with a mirror placed on the floor under the scale.
Then you climb up on top of the scale, so you can look over the edge, and read the scale in the mirror.
And voilà, if you look at the reading, you will see that it is reading the weight of the entire planet (plus the cinder blocks).
I would guess if you do this at your place, you will probably find that the earth weighs about 96-97 pounds or so.
Over at my house, the gravity is a bit higher, and I always get just a tad under 180 pounds, whenever I weigh the earth.
But try it; you’ll see I am right.
But no, I wouldn’t use the bathroom scale to weigh out the medicine for one of those Mexican Rat Dogs; if it gives readings for the weight of the Earth that could be 97 or 180 pounds, you don’t want to use it to weigh medicine doses.