Accuracy of climate station electronic sensors – not the best

Given all of the discussions recently on issues with the surface network, I thought it would be a good idea to present this excellent paper by Lin and Hubbard and what they discovered about the accuracy, calibration and maintenance of the different sensors used in the climatic networks of the USA. Pay particular attention to the errors cited for the ASOS and AWOS aviation networks, which are heavily used by GHCN.

Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks*

X. LIN AND K. G. HUBBARD

School of Natural Resource Sciences, University of Nebraska at Lincoln, Lincoln, Nebraska

ABSTRACT

The biases of four commonly used air temperature sensors are examined and detailed. Each temperature transducer consists of three components: temperature sensing elements, signal conditioning circuitry, and corresponding analog-to-digital conversion devices or dataloggers. An error analysis of these components was performed to determine the major sources of error in common climate networks. It was found that, regardless of microclimate effects, sensor and electronic errors in air temperature measurements can be larger than those given in the sensor manufacturer’s specifications. The root-sum-of-squares (RSS) error for the HMP35C sensor with CR10X datalogger was above 0.2°C and increased rapidly for both lower (<-20°C) and higher (>30°C) temperatures. Likewise, the largest errors for the maximum–minimum temperature system (MMTS) were at low temperatures (<-40°C). The temperature linearization error in the HO-1088 hygrothermometer produced the largest errors when the temperature was lower than -20°C. For the temperature sensor in the U.S. Climate Reference Network (USCRN), the error was found to be 0.2° to 0.33°C over the range -25° to 50°C. The results presented here are applicable when data from these sensors are applied to climate studies and should be considered in determining air temperature data continuity and climate data adjustment models.
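The root-sum-of-squares (RSS) figure quoted in the abstract is the standard way of combining independent error components into one total. A minimal sketch of the combination (the component values below are made up for illustration, not the paper's figures):

```python
import math

def rss_error(*component_errors_degC):
    """Combine independent error components by root-sum-of-squares."""
    return math.sqrt(sum(e * e for e in component_errors_degC))

# Hypothetical components (interchangeability, polynomial fit, ADC error);
# three independent 0.1 degC errors combine to well under their 0.3 degC sum.
total = rss_error(0.1, 0.1, 0.1)
print(round(total, 3))  # -> 0.173
```

Note that RSS is only valid when the components are independent; systematic errors that share a cause must be summed directly.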

Introduction

A primary goal of air temperature measurement with weather station networks is to provide temperature data of high quality and fidelity that can be widely used for atmospheric and related sciences. Air temperature measurement is a process in which an air temperature sensor measures an equilibrium temperature of the sensor’s physical body, which is optimally achieved through complete coupling between the atmosphere and air temperature sensor.

The process accomplished in the air temperature radiation shield is somewhat dynamic, mainly due to the heat convection and heat conduction of a small sensor mass. Many studies have demonstrated that to reach a higher measurement accuracy both good radiation shielding and ventilation are necessary for air temperature measurements (Fuchs and Tanner 1965; Tanner 1990; Quayle et al. 1991; Guttman and Baker 1996; Lin et al. 2001a,b; Hubbard et al. 2001; Hubbard and Lin 2002). Most of these studies are strongly associated with the study of air temperature bias or errors caused by microclimate effects (e.g., airflow speed inside the radiation shields, radiative properties of the sensor surface and radiation shields, and effectiveness of the radiation shields). Essentially, these studies have assumed the equation governing the air temperature to be absolutely accurate, and the investigations have focused on the measurement accuracy and its dependence on how well the sensor is brought into equilibrium with the atmospheric temperature. Such findings are indeed very important for understanding air temperature measurement errors in climate monitoring, but it is well known that all microclimate-induced biases or errors also include the electronic biases or errors embedded in their temperature sensors and their corresponding data acquisition system components.

Three temperature sensors are commonly used in the weather station networks: a thermistor in the Cooperative Observer Program (COOP), which was formally recognized as a nationwide, federally supported system in 1890; a platinum resistance thermometer (PRT) in the Automated Surface Observing System (ASOS), a network that focuses on aviation needs; and a thermistor in the Automated Weather Station (AWS) networks operated by states for monitoring evaporation and surface climate data.

Each of these sensors has been used to observe climate data over at least a ten-year period in the U.S. climate monitoring networks. The U.S. Climate Reference Network (USCRN) was established in 2001 and is being gradually deployed nationwide for monitoring long-term, high-quality surface climate data. In the USCRN system, a PRT sensor was selected for the air temperature measurements. All sensing elements in these four climate monitoring networks are temperature-sensitive resistors, and the temperature sensors are referred to as the maximum–minimum temperature system (MMTS), HMP35C, HO-1088, and USCRN PRT sensors, respectively, in the COOP, AWS, ASOS, and USCRN networks (see Table 1).

The basic specifications of each sensor system including operating temperature range, static accuracy, and display/output resolution can be found in operation manuals. However, these specifications do not allow a detailed evaluation, and some users even doubt the stated specifications and make their own calibrations before deploying sensors in the network. In fact, during the operation of either the MMTS sensor in the COOP or HO-1088 hygrothermometer in the ASOS, both field and laboratory calibrations were made by a simple comparison using one or two fixed precision resistors (National Weather Service 1983; ASOS Program Office 1992).

This type of calibration is only effective under the assumption of time-invariant sensors with a purely linear relation of resistance versus temperature. For the HMP35C, some AWS networks may regularly calibrate the sensors in the laboratory, but these calibrations are static (e.g., calibration at room temperature for the data acquisition system).

It is not generally possible to detect and remove temperature-dependent bias and sensor nonlinearity with static calibration. In the USCRN, the PRT sensor was strictly calibrated from -50° to +50°C each year in the laboratory. However, this calibration does not include its corresponding datalogger. To accurately trace air temperature trends over the past decades or in the future in the COOP, AWS, ASOS, and USCRN and to reduce the influence of time-variant biases in air temperature data, a better understanding of electronic bias in air temperature measurements is necessary.
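To see why a spot check against one or two fixed resistors cannot catch temperature-dependent bias, consider the nonlinear response of an NTC thermistor as described by the Steinhart-Hart equation. The coefficients below are generic textbook values for a 10 kOhm part, assumed for illustration only; they are not the MMTS or HMP35C thermistor's coefficients.

```python
import math

# Steinhart-Hart coefficients for a typical 10 kOhm NTC thermistor
# (illustrative textbook values, not those of any network sensor).
A, B, C = 1.129148e-3, 2.341250e-4, 8.767410e-8

def thermistor_temp_C(R_ohms):
    """Temperature from resistance via the Steinhart-Hart equation."""
    lnR = math.log(R_ohms)
    T_kelvin = 1.0 / (A + B * lnR + C * lnR**3)
    return T_kelvin - 273.15

# A single fixed precision resistor only verifies the curve at one point;
# it says nothing about drift or nonlinearity over the rest of the range.
print(round(thermistor_temp_C(10000.0), 2))  # about 25 C for this part
print(round(thermistor_temp_C(35000.0), 2))  # a much colder reading
```

Because the resistance-temperature curve is strongly nonlinear, a calibration that matches at one resistance can still be biased by several tenths of a degree at the ends of the operating range.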

The objective of this paper is to carefully analyze the sensor and electronic biases/errors induced by the temperature sensing element, signal conditioning circuitry, and data acquisition system.

This implies that the MMTS temperature observations are unable to discriminate ±0.25°C changes in the lower temperature ranges (Fig. 5 and Table 2). The interchangeability of the MMTS thermistors is ±0.2°C from -40° to +40°C and ±0.45°C elsewhere (Fig. 4). Two fixed resistors (R2 and R3) with a 0.02% tolerance produced larger temperature measurement errors at low temperatures, but the error caused by the fixed resistor R19 in Fig. 1 can be ignored. Therefore, the RSS errors in the MMTS are from 0.31° to 0.62°C from -40° to -50°C (Fig. 5).

The major errors in the HO-1088 (ASOS Temp/DP sensor) are interchangeability, linearization error, fixed resistor error, and self-heating error (Table 2 and Fig. 7). The linearization error in the HO-1088 is relatively serious because the analog signal (Fig. 3) is simply linearized from -50° to 50°C versus -2 to 2 V. The maximum magnitude of linearization error reached over 1°C (Fig. 7). There are four fixed precision resistors: R13, R14, R15, and R16 with a 0.1% tolerance. However, the error of temperature measurement caused by the R14, R15, and R16 can be eliminated by the adjustment of amplifier gain and offsets during onboard calibration operations in the HO-1088.

The error caused by the input fixed resistor R13 is illustrated in Fig. 7. Since this error was nearly constant, varying only from -0.2° to -0.3°C, it can be cancelled during the onboard calibration. It is obvious that a 5-mA current flowing through the PRT in the HO-1088 is not appropriate, especially because it has a small sensing element (20 mm in length and 2 mm in diameter). The self-heating factor for the PRT in the HO-1088 is 0.25°C per mW at 1 m/s airflow (Omega Engineering 1995), corresponding to a self-heating error of 0.5°C when the self-heating power is 2 mW (Table 2 and Fig. 7). Compared to the linearization error and self-heating error, the interchangeability and LSB errors in the HO-1088 sensor are relatively small, ±0.1° and ±0.01°C, respectively (Table 2).
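The self-heating arithmetic is simple: the excitation current dissipates I²R watts in the sensing element, and the self-heating factor converts that power to a temperature rise. A sketch using the paper's 5 mA excitation and 0.25°C/mW factor; the element resistance is an assumed Pt100-style value, not a documented HO-1088 parameter:

```python
def self_heating_error_C(current_A, resistance_ohm, factor_C_per_mW=0.25):
    """Temperature rise of a resistive sensor due to excitation current.

    power dissipated = I^2 * R, converted to mW, times the self-heating
    factor (degC per mW) at the stated airflow.
    """
    power_mW = current_A**2 * resistance_ohm * 1000.0
    return factor_C_per_mW * power_mW

# At roughly 80 ohms (a Pt100-class element near -50 C, assumed here),
# a 5 mA current dissipates about 2 mW:
print(round(self_heating_error_C(5e-3, 80.0), 2))  # -> 0.5
```

This is why instrumentation practice favors sub-milliamp excitation, or pulsing the current only during the measurement, for small PRT elements.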

Conclusions and discussion

This study provides a better understanding of temperature measurement errors caused by the sensor, analog signal conditioning, and data acquisition system. The MMTS sensor and the HO-1088 sensor use the ratiometric method to eliminate voltage reference errors. However, the RSS errors in the MMTS sensor can reach 0.3°–0.6°C at temperatures outside the range -40° to +40°C. Only with yearly replacement of the MMTS thermistor and a calibrated MMTS readout can errors be constrained within ±0.2°C over the range -40° to +40°C. Because the MMTS is treated as a calibration-free device (National Weather Service 1983), testing against one or a few fixed resistors cannot verify the nonlinear resistance-versus-temperature relation of the MMTS thermistor. For the HO-1088 sensor, the self-heating error is quite serious and can make the temperature reading 0.5°C higher under 1 m/s airflow, which is slightly less than the actual normal ventilation rate in the ASOS shield (Lin et al. 2001a). The simple linearization method for the PRT of the HO-1088 causes unacceptable errors that are most serious in the low temperature range. These findings are helpful for explaining the ASOS warm biases found by Kessler et al. (1993) in their climate data and by Gall et al. (1992) in the climate data archives. For the dewpoint temperature measurements in the ASOS, such self-heating effects might be cancelled out by the chilled mirror mechanism: heating or cooling the mirror body (which conductively contains the dewpoint PRT inside) to reach an equilibrium thin dew layer at the dewpoint temperature.

Thus, in this case, the self-heating error for dewpoint temperature measurements might not be as large as for the air temperature after correct calibration adjustment. Likewise, the relative humidity data from the ASOS network, derived from air temperature and dewpoint temperature, is likely to be contaminated by the biased air temperature.
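The contamination of derived relative humidity is easy to illustrate with the Magnus approximation for saturation vapor pressure (a standard formula, assumed here for illustration; it is not necessarily the one ASOS uses). A warm bias in air temperature with an unchanged dewpoint pulls the derived RH down:

```python
import math

def sat_vapor_pressure_hPa(T_C):
    """Magnus approximation for saturation vapor pressure over water."""
    return 6.112 * math.exp(17.62 * T_C / (243.12 + T_C))

def relative_humidity(T_C, Td_C):
    """RH (%) derived from air temperature and dewpoint."""
    return 100.0 * sat_vapor_pressure_hPa(Td_C) / sat_vapor_pressure_hPa(T_C)

# Illustrative values: a 0.5 C warm bias in the air temperature alone
# lowers the derived RH by roughly two percentage points.
print(round(relative_humidity(20.0, 15.0), 1))
print(round(relative_humidity(20.5, 15.0), 1))
```

The same propagation works in reverse: any correction model for the ASOS air temperature bias would also need to re-derive the archived RH values.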

Both the HMP35C and USCRN PRT sensors have their resistance measurements interrogated by dataloggers. The HMP35C is delivered by Campbell Scientific, Inc., with recommended measurement methods. Even so, the HMP35C sensor in the AWS network can experience more than 0.2°C error at temperatures from -30° to +30°C. Beyond this range, the RSS error increases from 0.4° to 1.0°C due to thermistor interchangeability, polynomial error, and CR10X datalogger inaccuracy. For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2°–0.34°C due to the inaccuracy of the CR23X datalogger, which suggests that the configuration of the USCRN PRT and the measurements taken in the CR23X could be improved if higher accuracy is needed. Since the USCRN network is a new setup, the current configuration of the USCRN PRT temperature sensor could be reconstructed for better measurements.

This reconstruction should focus on increasing the signal sensitivity, selecting fixed resistor(s) with a smaller temperature coefficient of resistance, and decreasing the self-heating power, so that the sensor is more compatible with the CR23X for long-term climate monitoring.

These findings are applicable to the future of temperature data generated from the USCRN network and possible modification of the PRT sensor for higher quality measurements in the reference climate network.

The complete Lin-Hubbard paper (PDF) is available here.

119 Comments
kadaka
March 9, 2010 2:19 pm

J Zulauf (13:47:52) :
For electronic surface temperature monitoring, I think I would rather trust a system that optically measures the level of a mercury thermometer. Very low-power LED laser, directly taking the reading against a marked scale (bar-coded graduations perhaps)… We can do that.

Steve Koch
March 9, 2010 2:21 pm

I’ve designed/implemented software systems that acquired realtime measurements from surface sensors. My experience was that it was unusual to find a properly calibrated surface sensor in the field. After a while, I tried to automate calibrating (and detection of poorly calibrated) sensors as much as possible. When reviewing data after the fact, I wrote software to analyze the sensor data to see if it looked properly calibrated. We automatically stored in the computer the results, timestamp, and name of the guy who did the calibrating for each calibration done in the field.
It would be an excellent idea to review the calibration records for these sensors. The measurements have no credibility without calibration records.

Gail Combs
March 9, 2010 2:29 pm

Steve Koch (14:21:23) :
“….It would be an excellent idea to review the calibration records for these sensors. The measurements have no credibility without calibration records.”
Truer words were never spoken.
How the heck can you call it science if you do not even bother to do the first step any legitimate scientist would do – Calibrate your instruments and then place them on a reasonable recalibration schedule.

March 9, 2010 2:31 pm

Anthony, you’re probably fed up with hearing from me on this…but there are still some errors from the copying of the PDF. It doesn’t look too good if this gets copied on elsewhere – for example
The maximum magnitude of linerization error reached over 18C (Fig. 7).
Should be 1 degC (big difference!).
Others:
versus 22 to 2 V. [Should be -2, not 22]
Since this error was constantly varied from 20.28 to 20.38C,
0.58C higher under 1 m s21 airflow
0.258C mW21 at 1 m s21 airflow (Omega Engineering 1995), corresponding to the selfheating errors 0.58C
AWS network can experience more than 0.28C errors
in the USCRN network, the RSS errors can reach 0.28

D. King
March 9, 2010 2:37 pm

J Zulauf (13:47:52) :
would it be interesting to see what pointing a web cam at an old fashioned glass thermometer colocated with the fancy kind and compare. The accuracy required appears to be +/-50°C to within 0.1°C — 1000 states. 1-2 Mpixels would do (or a 1-2K element 1D optical sensor, with lens calibrations of course) — automate it and hook it up wirelessly…
That’s a very good idea. Maybe a glass thermometer, CCD, LED for
light (no heat), automated and only on when sampling, All available
technology, easy to fabricate and calibrate.

Geoff Sherrington
March 9, 2010 2:41 pm

It is a source of constant bewilderment for me that casual readers of this page can comment easily on instrumental errors like hysteresis in Pt resistance etc., yet the Lin & Hubbard article above infers that such errors persist.
In some climate science, as we have now amply heard, one response to the discovery of an error is to smooth it or average it or taper it, so that it is hidden. The proper approach is to find the reason, replace the defective step, then publish a detailed description to help others who might be susceptible.
In the early 70s when I had a large laboratory, we spent about 10% of our analysis on quality control, both through internal replication of samples and instruments and methods, and on inter-laboratory comparisons and use of certified calibration material. Even then, there were errors too large to be comfortable.
In a paper like the one above, it is essential to dissect accuracy errors (bias) from precision errors because the consequence and corrections are different. It can be the case that precision errors tend to balance each other and tend to a minimum as the number of observations increase, but that has to be tested – it cannot be assumed. Accuracy is far more difficult to achieve.
If it is good enough for aircraft to use redundancy, like a mix of GPS systems with INS, more than one compass, radar and visual range finding, air and ground speed indicators, etc, then it should be good enough to use multiple recording devices at high quality network weather stations.
The initial measurement is at the base of the pyramid of knowledge in a measurement system. If it is wrong, all of the building above it is wrong.
Does anyone have a PDF copy of “Evaluation of lunar elemental analyses”
George Harold Morrison
Anal. Chem., 1971, 43 (7), pp 22A–31a
Publication Date: June 1971
The top labs of the world made their best effort to arrive at a close set of analysis results on these historically unprecedented first lunar samples. The published comparison results by George Morrison will give readers some indication of what to expect from the world’s best. The error bars have not retreated much in the last 40 years. The masters of accuracy grow old and retire, to be replaced by fresh young faces brimming with knowledge – and a fresh set of personal errors.

Allen Ford
March 9, 2010 2:57 pm

“How the heck can you call it science if you do not even bother to do the first step any legitimate scientist would do – Calibrate your instruments and then place them on a reasonable recalibration schedule.”
Surely the first thing you learn in practical science classes. I know I did, as we all did in the ’50s. This point has always bothered me regarding the climate gang, and now we know: Jones’ admission in the video that it is beneath the dignity of the gangsters to deal with the mundane business of getting down and dirty with collecting actual data; they rely instead on the fudged stuff from the likes of him and his cronies.
On all accounts, the whole field of climate research, with a few exceptions, should be junked until they can come up with some hard core, remedial retraining.

Al Gored
March 9, 2010 3:30 pm

But I’m sure that the instruments used in the 1800’s to get their baseline temperatures were much more accurate than these modern ones.

March 9, 2010 3:44 pm

Paul Linsay (14:00:14) :
“The satellites also calibrate against the 2.7 K background from the Big Bang, about as absolute a temperature standard as possible.”
Very nice. Now what part of the atmosphere is at around that temperature?

March 9, 2010 3:50 pm

For heaven’s sake, when I was using this equipment I never assumed it could be any better than +/-1C. If this really is the equipment being used for the global temperature measurement, then to be quite frank it is crap. OK for a repeatable rough temperature, but if you want to be measuring to 0.1C, you need equipment with an initial tolerance around 0.01C-0.02C. By the time you’ve added sensor, leads, sensor current, amp, A/D, and temperature sensitivity, the total error isn’t going to allow much better than +/-0.1C. So, an accuracy of 0.1C is only possible with precision equipment designed and calibrated for precision use, not off the shelf stuff – however good Campbell are at producing the stuff, it just isn’t suitable for the purpose!
What kind of idiot set up this global temperature monitoring system? What kind of bizarre logic made them think any kind of temperature change was anything to do with the environment and not everything to do with inherent biases in the equipment. The more I hear about global temperature monitoring, the more I think this is criminal incompetence by those involved.

JMANON
March 9, 2010 4:12 pm

One way to limit self heating that is used in industry is to only power the sensor when a measurement is being recorded.
In a typical set of industrial signal conditioning electronics, the reading might be taken every 500ms but the power is applied to the sensor fleetingly, just as long as necessary to make a measurement.
So is there a truly continuous measurement made by these stations or do they log data at suitable intervals… what intervals? 1 per second? 1 per minute? how much detail is needed? how much can the actual temperature change from one second to the next?
How long does the sensor circuit have to be energised to make a measurement?
The sort of self heating errors mentioned here seem enormous compared to what is acceptable in industry as precision.
OK, no single measurement is satisfactory as it can suffer both random and systematic errors. Systematic errors you would hope to eliminate through calibration and proper curve fitting. Random errors you minimise by taking multiple measurements.
So, for a single measurement you might collect 10 sequential readings, discard the highest and lowest and average the rest. Much more sophisticated than that would probably be a waste.
Time taken? anywhere from 1-5seconds with the sensor powered only briefly for each reading. Repeat once a minute? once every 10 minutes?
The question is, were these stations designed or intended to be used for climate studies or simply for weather observations?
Seems to me that climate studies would require a purpose constructed network of stations and that they are trying to define miniscule temperature changes.
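The burst-sampling scheme JMANON describes (collect several brief readings, discard the highest and lowest, average the rest) can be sketched as follows; the reading values here are invented for illustration:

```python
def trimmed_mean(readings):
    """Average a burst of readings after discarding the single highest
    and single lowest value, as in the sampling scheme described above."""
    if len(readings) < 3:
        raise ValueError("need at least three readings to trim")
    ordered = sorted(readings)
    kept = ordered[1:-1]  # drop one extreme at each end
    return sum(kept) / len(kept)

# A hypothetical 10-reading burst with two outliers (e.g. electrical noise);
# the trim removes 19.40 and 20.95 before averaging.
burst = [20.12, 20.15, 20.11, 20.14, 20.95, 20.13, 20.12, 19.40, 20.16, 20.14]
print(round(trimmed_mean(burst), 3))  # -> 20.134
```

Note this suppresses random spikes only; a systematic bias such as self-heating shifts every reading in the burst and survives the averaging untouched.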

March 9, 2010 4:12 pm

humm, if the devices start drifting with regular exposure over +40C or even +30C for some, then no wonder parts of Australia are “heating up” – even in coastal Perth where I am, we have entire summers of mid to high 30’s and more than a few over 40C days. Up in the Pilbara and Kimberley regions, they have 80+ days in a row over 40.
We’re trying to pick up fractions of a degree/decade as evidence of an AGW signal, but the basic measuring devices have normal operational errors of similar magnitudes.
Wonderful.

March 9, 2010 5:18 pm

Hi, I just placed a note on ‘Tips and Notes’ about sensor calibration.
I do all the calibrations for the company I work for and tear my hair out when I come across ‘scientists’ who are doing so called important research, with equipment malfunctions or anomalies, only to find that the gear was bought in 2005 and has never been back for calibration……….5 years of work with no integrity? And we have to buy it? Some of this work, we are actually paying for because a Govt body is using the gear and loggers.
There’s more but I can’t say too much without advertising the company and this may not be allowed…. Cheers, Ira

March 9, 2010 6:37 pm

I would consider ±0.5°C to be pretty good for a weather station. As we know from this site the likely errors due to how the instruments are sited as well as from changing instrument types over time is probably a lot more than that.

March 9, 2010 6:56 pm

Couldn’t agree more, a lot depends on how astute your researcher really is…..but if you want to look at trends and slight temperature changes over time, I am able to look at thousandths (0.001) of a degree of change.
I suspect that guys doing research just do not know what can be done with the gear today.
Since readings are taken every now and then, at predetermined intervals, and they are taken for just 12 milliseconds, there is no sensor heating over time.
Ira

Bill
March 9, 2010 8:08 pm

Wow.
I am an electrical engineer who used to service industrial chillers. Before I put them in service, I would calibrate the thermistors by making ice cubes from distilled water, making a distilled water/ice slush, letting it settle back to exactly 32°F (0°C), and then calibrating the thermistor. The thought of weather thermometers just being stuck out there with no calibration other than a calibration resistor just blows my mind.

March 9, 2010 8:56 pm

Hi Bill,
Life was so much easier then…But how would you calibrate for -40 through +120 deg C. ?
Cheers, Ira

DesertYote
March 9, 2010 9:12 pm

I quickly scanned this paper, but that is all. It’s too much like work. I’ve been involved with Test and Measurement for 35 years: 10 doing level S qualification of ICs for NASA, 10 years with one of the biggest names in T and M, and another 10 years as IE of equipment deployed in the field (including meteorological instrumentation!). This paper probably doesn’t even begin to tell the story. There is absolutely no excuse for our authoritative weather data collection processes to be so out of control. Though I am not surprised.
I think that one thing missing is an analysis of the performance of these sensors over variations in pressure, humidity, and contaminants (salt, dust, icing). Did I read correctly that some of the instruments are adjusted with pots? And some of them use thermistors? YIKES! Of course, what really needs to be done is to audit the quality of all of the deployed instruments and their calibration processes. It would be instructive to see what the distribution of errors is for actual in-use sensors. I have a feeling, though, that trying to gather that data would be resisted.
BTW, I have never assumed that the reported temps had anything better than +/- 1°C accuracy.

DesertYote
March 9, 2010 9:39 pm

Ira Quirke (20:56:19) :
The Level S range is -50 to +125°C. But I used to calibrate starting with -72 as my first point because it is easy 🙂

rbateman
March 9, 2010 11:22 pm

I’m thinking we were better off with the bulb thermometers.
Anybody still make the high/low bulb thermometers?

March 10, 2010 2:21 am

JMANON (16:12:02) :”So is there a truly continuous measurement made by these stations or do they log data at suitable intervals… what intervals? 1 per second?”
As far as I remember (though it’s not obvious rereading the manual), the temperature measurement is done using a pulsed-on sensor current on the Campbell equipment. But this is just half of it, e.g. let me quote:-
“Assume a limit of 0.05C over a 0-40C range is established for the transient settling error. This limit is a reasonable choice since it approximates the linearisation error over that range”.
Or to put it in other words: “there’s not a hope in hell of getting anything like a 0.05C total error”, and this is going to be small compared to the total instrumentational errors of sensor & datalogger. What that means is that there are probably half a dozen small errors like this that all add up to around 0.3C total error. Which means it simply isn’t the right equipment to be measuring something that is intended to give a global figure of 0.1C.
And forget the nonsense about “averaging out errors”; many of the errors will be one-sided. And that is with brand new equipment and sensors, without the effects of solar heating on the temperature enclosure, without spiders nesting in the temperature sensor enclosure, without horizontal rain soaking the sensor, without a host of real errors which the academics in their ivory towers seem to have not the slightest clue exist.
And to cap it all, there really isn’t any unit called “temperature”, temperature is a meaningless mish-mash of unrelated pseudo proxies for temperature from gas bulbs to platinum resistance probes: a bastardised unit!

michaelozanne
March 10, 2010 3:23 am

“Mike Haseler (15:50:00) :
What kind of idiot set up this global temperature monitoring system? What kind of bizarre logic made them think any kind of temperature change was anything to do with the environment and not everything to do with inherent biases in the equipment. The more I hear about global temperature monitoring, the more I think this is criminal incompetence by those involved.”
Okay, breathe, again, hold it in let it out slowly…there you go
Now it’s kind of the point that there isn’t a global temperature system. There is a set of weather and other stations, each of which was set up for its own purpose, completely distinct from calculating Global Temperature. Airport stations, for example, exist for the purpose of determining that safe conditions exist for operating aircraft. Most met stations exist for trying to guess tomorrow’s weather, etc. Trying to compose a global control datum from the individual stations is a relatively recent idea, and I’m not convinced that the quality control aspects of doing it were given sufficient thought.

OceanTwo
March 10, 2010 3:48 am

Brian D (10:54:33) :
Can this issue get any crazier? Can we get a true temp anymore?

To (mis?)quote: A man with one watch knows the time; a man with two is never quite sure.
I, also, have worked in the electronics/data acquisition/automation field for a good few years. I would say that discrete electronic devices (resistors, pots, etc.) can be as reliable and accurate if not more reliable and accurate than digital systems.
Both digital and discrete systems have their flaws; specifically, digital systems have a finite resolution which, depending on the application, may or may not be relevant. Analog systems require more maintenance more often.
In this situation, accuracy and resolution would favor an analog system, because of the emphasis placed on both accuracy and resolution as requirements. However, these systems, it appears, are treated as modern digital systems – fire and forget – which just cannot be done. Simply, you cannot treat an analog system as a digital system and vice versa.
Having said that, modern digital systems do have the capability (quite obviously) of monitoring surface temperatures over the full range, and yet there are issues. It’s like taking our $200,000 Bentley to be used in a NASCAR race – it’s a fine machine but perhaps it’s not quite appropriate for the application.
Both analog and digital measurement systems require calibration based on the expected measurement outcome, and noted when the temperature exceeds the calibration range. Calibration certificates must be held for ever and a day (and always with a name associated with it).
This is a great paper demonstrating that errors exist in numerous places when taking a temperature reading, that all errors are cumulative, and that errors are dynamic, depending both on the value being measured and on time. This is saying nothing of any human error introduced into the measurement system.

OceanTwo
March 10, 2010 4:09 am

I do find a bit of amusement in the want of man to find linear relationships – particularly linear trends – in inherently non-linear, non-discrete, complex systems.

OceanTwo
March 10, 2010 4:21 am

Oh, to add my own personal quip:
Engineers (and scientists) are lazy. They are so lazy, in fact, that they will spend 6 months of hard time to create something that saves them 5 minutes of work.
Not sure if it’s relevant, but I think it does hold true that a lot of engineers and scientists cannot see the wood for the trees. Reminds me of the Gary Larson Far Side cartoon with the kid trying to push the door to the Midvale School for the Gifted open – while it is clearly marked pull.