For those who don’t notice: this is about metrology, not meteorology, though meteorology uses the final product. Metrology is the science of measurement.
Since we had this recent paper from Pat Frank that deals with the inherent uncertainty of temperature measurement, establishing a new minimum uncertainty value of ±0.46°C for the instrumental surface temperature record, I thought it valuable to review the uncertainty associated with the act of temperature measurement itself.
As many of you know, the Stevenson Screen, aka Cotton Region Shelter (CRS), such as the one below, houses Tmax and Tmin recording thermometers (mercury and alcohol, respectively).
They look like this inside the screen:

Reading these thermometers would seem to be a simple task. However, that’s not quite the case. Adding to the statistical uncertainty derived by Pat Frank, as we see below in this guest re-post, measurement uncertainty, both in the long and the short term, is also an issue. The following appeared on the blog “Mark’s View”, and I am reprinting it here in full with permission from the author. There are some enlightening things to learn about the simple act of reading a liquid-in-glass (LIG) thermometer that I didn’t know, as well as some long-term issues (like the hardening of the glass) with magnitudes about as large as the climate change signal for the last 100 years, ~0.7°C. – Anthony
==========================================================
Metrology – A guest re-post by Mark of Mark’s View
This post is actually about the poor quality and processing of historical climatic temperature records rather than metrology.
My main points are that in climatology many important factors that are accounted for in other areas of science and engineering are completely ignored by many scientists:
- Human Errors in accuracy and resolution of historical data are ignored
- Mechanical thermometer resolution is ignored
- Electronic gauge calibration is ignored
- Mechanical and Electronic temperature gauge accuracy is ignored
- Hysteresis in modern data acquisition is ignored
- Conversion from Degrees F to Degrees C introduces false resolution into data.
Metrology is the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology. Believe it or not, the metrology of temperature measurement is complex.
It is actually quite difficult to measure things accurately, yet most people just assume that information they are given is “spot on”. A significant number of scientists and mathematicians also do not seem to realise that the data they are working with is often not very accurate. Over the years, as part of my job, I have read dozens of papers based on pressure and temperature records where no reference is made to the instruments used to acquire the data, or to their calibration history. The result is that many scientists frequently reach incorrect conclusions about their experiments and data because they do not take into account the accuracy and resolution of that data. (It seems this is especially true in the area of climatology.)
Do you have a thermometer stuck to your kitchen window so you can see how warm it is outside?
Let’s say you glance at this thermometer and it indicates about 31 degrees centigrade. If it is a mercury or alcohol thermometer you may have to squint to read the scale. If the scale is marked in 1°C steps (which is very common), then you probably cannot extrapolate between the scale markers.
This means that this particular thermometer’s resolution is 1°C, which is normally stated as plus or minus 0.5°C (±0.5°C).
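To see what a 1°C scale does to the numbers, you can quantize a batch of hypothetical “true” temperatures and look at the errors. A minimal Python sketch (the temperature range and sample size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_temps = rng.uniform(20.0, 40.0, 100_000)  # hypothetical "true" temperatures

# Reading a scale marked in 1 C steps effectively rounds to the nearest degree.
observed = np.round(true_temps)
errors = observed - true_temps

print(f"largest single reading error: {np.abs(errors).max():.3f} C")  # approaches 0.5 C
```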
This example of resolution assumes that the temperature is observed under perfect conditions, by someone properly trained to read a thermometer. In reality you might glance at the thermometer, or have to use a flashlight to look at it, or it may be covered in a dusting of snow, rain, etc. Mercury forms a pronounced meniscus in a thermometer that can exceed 1°C, and many observers incorrectly read the temperature at the base of the meniscus rather than its peak. (This picture shows an alcohol meniscus; a mercury meniscus bulges upward rather than down.)
Another major common error in reading a thermometer is the parallax error.
Image courtesy of Surface Meteorological Instruments and Measurement Practices by G.P. Srivastava (with a mercury meniscus!). Parallax error is where refraction of light through the glass of the thermometer exaggerates any error caused by the eye not being level with the surface of the fluid in the thermometer.
If you are using data from hundreds of thermometers scattered over a wide area, with data being recorded by hand by dozens of different people, the effective observational resolution should be downgraded further. In the oil industry, for example, it is common to accept an error margin of 2-4% when using manually acquired data.
As far as I am aware, historical raw temperature data from multiple weather stations has never been adjusted to account for observer error.
We should also consider the accuracy of the typical mercury and alcohol thermometers that have been in use for the last 120 years. Glass thermometers are calibrated by immersing them in ice/water at 0°C and a steam bath at 100°C. The scale is then divided equally into 100 divisions between zero and 100. However, a glass thermometer at 100°C is longer than a thermometer at 0°C. This means that the scale on the thermometer gives a false high reading at low temperatures (between 0 and 25°C) and a false low reading at high temperatures (between 70 and 100°C). This process is also followed with weather thermometers with a range of -20 to +50°C.
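To see how a two-point calibration can leave exactly this kind of sign-changing residual, here is a sketch. The cubic deviation term and its coefficient are invented purely for illustration, not a measured property of any real thermometer; the point is only that calibrating at 0°C and 100°C forces the error to zero at those two points while leaving it uncorrected everywhere in between:

```python
def column_position(t, a=1e-5):
    # Hypothetical mercury-column response: linear in temperature plus a
    # small cubic deviation that vanishes at 0 C and 100 C (a is invented).
    return t + a * t * (t - 100) * (t - 50)

def indicated_temp(t):
    # Two-point calibration: force the scale to read 0 at 0 C and 100 at
    # 100 C, then divide it into equal steps.
    p0, p100 = column_position(0.0), column_position(100.0)
    return 100.0 * (column_position(t) - p0) / (p100 - p0)

for t in (10, 25, 50, 75, 90):
    # Reads high below mid-scale, low above it.
    print(f"true {t:3d} C -> reads {indicated_temp(t):7.3f} C")
```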
25 years ago, very accurate mercury thermometers used in labs (0.01°C resolution) came with a calibration chart or graph to convert the temperature observed on the thermometer scale to the actual temperature.
Temperature cycles harden the glass of a thermometer’s bulb and shrink it over time; a 10-year-old -20 to +50°C thermometer will give a false high reading of around 0.7°C.
Over time, repeated high-temperature cycles cause alcohol thermometers to evaporate vapour into the vacuum at the top of the tube, creating false low temperature readings of up to 5°C. (That is 5.0°C, not 0.5; it’s not a typo...)
Electronic temperature sensors have been used more and more in the last 20 years for measuring environmental temperature. These also have their own resolution and accuracy problems. Electronic sensors suffer from drift and hysteresis and must be calibrated annually to remain accurate, yet most weather station temperature sensors are NEVER calibrated after they have been installed.
Drift is where the recorded temperature creeps steadily up or down even while the real temperature is static, so the recording error gradually gets larger and larger over time. This is a quantum-mechanics effect in the metal parts of the temperature sensor that cannot be compensated for, and it is a fundamental characteristic of all electronic devices. Typical drift of a -100°C to +100°C electronic thermometer is about 1°C per year, and the sensor must be recalibrated annually to fix this error.
Hysteresis is a common problem as well. This is where increasing temperature has a different mechanical effect on the thermometer than decreasing temperature: for example, if the ambient temperature increases by 1.05°C the thermometer reads an increase of 1°C, but when the ambient temperature drops by 1.05°C the same thermometer records a drop of 1.1°C. (This is a VERY common problem in metrology.)
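Hysteresis is often modelled as mechanical backlash: the reading trails the true value by up to half a dead-band width, with the sign set by whether the temperature is rising or falling. A minimal sketch (the 0.2°C dead-band width is an invented figure, not from any instrument datasheet); note how it clips recorded maxima low and recorded minima high:

```python
import numpy as np

def backlash(series, width=0.2):
    """Reading lags the true value by up to width/2, with the sign set by
    the direction of approach (a simple dead-band hysteresis model)."""
    reading = series[0]
    out = []
    for t in series:
        if t > reading + width / 2:    # rising: reading trails below
            reading = t - width / 2
        elif t < reading - width / 2:  # falling: reading trails above
            reading = t + width / 2
        out.append(reading)
    return np.array(out)

hours = np.arange(0, 48, 0.25)
ambient = 20 + 5 * np.sin(2 * np.pi * hours / 24)  # idealized diurnal cycle
obs = backlash(ambient)
print(f"true max {ambient.max():.2f} C, observed max {obs.max():.2f} C")  # max reads low
print(f"true min {ambient.min():.2f} C, observed min {obs.min():.2f} C")  # min reads high
```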
Here is a typical food temperature sensor’s behaviour compared to a calibrated thermometer, without even considering sensor drift: [chart: thermometer calibration offset versus measured temperature]. Depending on the measured temperature, the offset in this high-accuracy gauge runs from -0.8 to +1°C.
But on top of these issues, the people who make these thermometers and weather stations state the accuracy of their instruments clearly, yet scientists ignore it! The packaging of a -20°C to +50°C mercury thermometer will state, for example, that the accuracy of the instrument is ±0.75°C, yet frequently this information is not incorporated into the statistical calculations used in climatology.
Finally we get to the infamous conversion of degrees Fahrenheit to degrees Centigrade. Until the 1960s almost all global temperatures were measured in Fahrenheit; nowadays all the proper scientists use Centigrade, so all old data is routinely converted. Take the original temperature, subtract 32, multiply by 5, and divide by 9:
C = ((F - 32) x 5) / 9
Example: the original reading from a 1950 data file is 60F. This data was eyeballed by the local weatherman and written into his tallybook. Fifty years later, a scientist takes this figure and converts it to Centigrade:
60-32 =28
28×5=140
140/9= 15.55555556
This is usually (and incorrectly) truncated to two decimal places: 15.55°C, with no explanation of why this level of resolution has been selected.
The correct mathematical method of handling this issue of resolution is to look at the original resolution of the recorded data. Typically, old Fahrenheit data was recorded in increments of 2 degrees F, e.g. 60, 62, 64, 66, 68, 70. Very rarely on old data sheets do you see 61, 63, etc. (although 65 is slightly more common).
If the original resolution was 2 degrees F, the resolution used for the same data converted to Centigrade should be 1.1°C.
Therefore, mathematically:
60F = 16C
61F = 16C
62F = 17C
etc.
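A resolution-aware conversion is easy to write down. Here is a minimal Python sketch of the idea (the function name and the default 2°F increment are just for illustration): it carries the uncertainty implied by the original resolution alongside the converted value instead of inventing six decimal places.

```python
def f_to_c(temp_f, resolution_f=2.0):
    """Convert Fahrenheit to Celsius without inventing false resolution.

    resolution_f is the increment the original data was recorded in.
    Returns the value rounded to the nearest whole degree C (as in the
    table above), plus the +/- half-width implied by the original data.
    """
    temp_c = (temp_f - 32) * 5 / 9
    resolution_c = resolution_f * 5 / 9   # 2 F steps become ~1.1 C steps
    return round(temp_c), resolution_c / 2

for f in (60, 61, 62):
    c, half = f_to_c(f)
    print(f"{f}F -> {c}C (+/- {half:.2f} C)")
```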
In conclusion, when interpreting historical environmental temperature records one must account for the accuracy errors built into the thermometer and the resolution limits of the instrument, as well as errors of observation and recording of the temperature.
In a high-quality glass environmental thermometer manufactured in 1960, the accuracy would be ±1.4F (2% of range).
The resolution of an astute and dedicated observer would be around ±1F.
Therefore the total error margin of all observed weather station temperatures would be a minimum of ±2.5F, or about ±1.4°C...
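As an aside, metrologists usually combine independent error sources in quadrature (root-sum-square) rather than by straight addition. A sketch using the figures above, treating each as an independent symmetric bound (an assumption on my part, not a claim about how the author combined them):

```python
import math

instrument_accuracy_f = 1.4  # the 1960 thermometer figure above
observer_error_f = 1.0       # the astute-observer figure above

linear = instrument_accuracy_f + observer_error_f
rss = math.hypot(instrument_accuracy_f, observer_error_f)

print(f"straight addition: +/-{linear:.1f} F ({linear * 5 / 9:.2f} C)")
print(f"root-sum-square:   +/-{rss:.1f} F ({rss * 5 / 9:.2f} C)")
```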
===============================================================
UPDATE: This comment below from Willis Eschenbach, spurred by Steven Mosher, is insightful, so I’ve decided to add it to the main body – Anthony
===============================================================
Willis Eschenbach says:
As Steve Mosher has pointed out, if the errors are random normal, or if they are “offset” errors (e.g. the whole record is warm by 1°), increasing the number of observations helps reduce the size of the error. All that matters is anything that causes a “bias”, a trend in the measurements. There are some caveats, however.
First, instrument replacement can certainly introduce a trend, as can site relocation.
Second, some changes have hidden bias. The short maximum length of the wiring connecting the electronic sensors introduced in the late 20th century moved a host of Stevenson Screens much closer to inhabited structures. As Anthony’s study showed, this has had an effect on trends that I think is still not properly accounted for, and certainly wasn’t expected at the time.
Third, in lovely recursiveness, there is a limit on the law of large numbers as it applies to measurements. A hundred thousand people measuring the width of a hair by eye, armed only with a ruler measured in mm, won’t do much better than a few dozen people doing the same thing. So you need to be a little careful about saying problems will be fixed by large amounts of data.
Fourth, if the errors are not random normal, your assumption that everything averages out may (I emphasize may) be in trouble. And unfortunately, in the real world, things are rarely that nice. If you send 50 guys out to do a job, there will be errors. But these errors will NOT tend to cluster around zero. They will tend to cluster around the easiest or most probable mistakes, and thus the errors will not be symmetrical.
Fifth, the law of large numbers (as I understand it) refers to either a large number of measurements made of an unchanging variable (say hair width or the throw of dice) at any time, or it refers to a large number of measurements of a changing variable (say vehicle speed) at the same time. However, when you start applying it to a large number of measurements of different variables (local temperatures), at different times, at different locations, you are stretching the limits …
Sixth, the method usually used for ascribing uncertainty to a linear trend does not include any adjustment for known uncertainties in the data points themselves. I see this as a very large problem affecting all calculation of trends. All that is ever given is the statistical error in the trend, not the real error, which perforce must be larger.
Seventh, there are hidden biases. I have read (but haven’t been able to verify) that under Soviet rule, cities in Siberia received government funds and fuel based on how cold it was. Makes sense, when it’s cold you have to heat more, takes money and fuel. But of course, everyone knew that, so subtracting a few degrees from the winter temperatures became standard practice …
My own bozo cowboy rule of thumb? I hold that in the real world, you can gain maybe an order of magnitude by repeat measurements, but not much beyond that, absent special circumstances. This is because despite global efforts to kill him, Murphy still lives, and so no matter how much we’d like it to work out perfectly, errors won’t be normal, and biases won’t cancel, and crucial data will be missing, and a thermometer will be broken and the new one reads higher, and …
Finally, I would back Steven Mosher to the hilt when he tells people to generate some pseudo-data, add some random numbers, and see what comes out. I find that actually giving things a try is often far better than profound and erudite discussion, no matter how learned.
w.
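In that spirit, here is one way to run the pseudo-data experiment (a minimal sketch; the hair width and error sigma are invented). It illustrates Willis’s third and fourth points: identical coarse rulers gain nothing from sheer numbers, while independent random errors do average away:

```python
import numpy as np

rng = np.random.default_rng(42)
true_width = 0.0637  # hypothetical hair width in mm

# 100,000 observers with identical 1 mm rulers and no other error:
# everyone reads the same tick, so averaging gains nothing.
print(np.round(np.full(100_000, true_width)).mean())      # 0.0 exactly

# Independent random per-observer errors (invented 0.5 mm sigma) act as
# dither, so the average now creeps toward the true value.
print(np.round(true_width + rng.normal(0, 0.5, 100_000)).mean())  # ~0.06
```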
Very interesting post, thanks.
As an engineer, most of the ideas here were pretty familiar to me. I find it almost unbelievable that the climate records aren’t being processed in a way which reflects the uncertainty of the data, as is stated in the article. What evidence is there that this is the case? (I think I mean: is the processing applied by GISS or CRU clearly described, and is there any kind of audit trail? Have the academics who processed the data published the processing method?)
I think the F to C conversion issue is tricky. I think the conversion method you favour (using significant figures in C to reflect uncertainty) would lead to a bias which varies with temperature. What is actually needed is proper propagation of the known uncertainty through the calculation, rather than using the implied accuracy of the number of significant figures. So the best conversion from 60F to C would be 15.55+/-1.1C (in your example above). But obviously, promulgating 15.55 is fraught with the danger of the known 1.1C uncertainty being forgotten, and the implied 0.005C certainty used instead. Which would be bad.
Is the Hanksville photo one of those “how many mistakes” competitions?
I once designed a precision temperature “oven” which had a display showing the temperature to 0.01C. In practice the controller for the device was only accurate to 0.1C at best, and with a typical lab thermometer there was at least another 0.1C error. Then there was the fact that you were not measuring the temperature at the centre of the oven, and drift and even mains supply variation had a significant effect!
All in all, the error of this device, which might appear to be accurate to 0.01C, could have been as bad as the total so-called “global warming signal”.
I’ve also set up commercial weather stations using good commercial equipment which I believe is also used by many meteorological stations and the total error is above +/-1C even on this “good” equipment.
As for your bog-standard thermometer from a DIY shop: go to one, take a reading from them all, and see how much they vary... it’s normally as much as 2C or even 3C from highest to lowest.
Basically, the kind of temperature error being quoted by the climategate team is only possible in a lab with regularly calibrated equipment.
I think Hanksville was just a pretty typical set-up. Some of those stations in the study were in even worse shape from what I saw. I particularly liked the one in the junkyard. Okay… maybe it wasn’t a junkyard, but that was what it looked like.
As for the Mark’s View post, I read this over at his site and thought it was fascinating. I knew about the calibration requirements for electronic test equipment, but had no idea of the vagaries behind the simple mechanical/visual thermometer.
Are those thermometers nailed horizontal onto the Stevenson Screen?
Surely a horizontal thermometer, without the gravitational component, will read differently to a vertical one. And if it is touching the wood, then surely there is radiative cooling of the wood at night and warming in the day. Surely the thermometers should be insulated from the screen by a few centimeters. How much error would this give??
I actually ran a spreadsheet back in December showing how just 3 instrument changes over a long-term historical record, with each instrument having better resolution, change the trend of the data (over 1°F of change to the USHCN data):
http://boballab.wordpress.com/2010/12/06/do-we-really-know-what-the-temperature-is/
The picture of the inside of a Stevenson Screen raised a couple of questions in my mind that perhaps a professional can answer:
Does the orientation of a glass thermometer affect its reading? I have thought of all thermometers as being used approximately vertically but those in the screen are shown as horizontal.
Is the vertical position inside the screen relevant? Even though there are ventilation louvres, I would expect some sort of vertical temperature gradient within the enclosure, reporting higher temperatures closer to the top of the enclosure. This, of course would be especially bad when exposed to full sun, with a tendency therefore to over-report temperatures. I would also expect this problem to increase with time as the reflectivity of the white paint drops with flaking, build up of dust and dirt etc.
So in reality the feared 0.6°C temperature rise could mean that statistically there has been no temperature change. And all the models give the wrong answer because their temperature inputs are incorrect.
Very interesting article. Is a pdf copy possible please, Anthony?
pedants’ corner: ‘extrapolate’ between the scale markers – or ‘interpolate’?
As long as the errors don’t trend in a biased way over time, the fact that there are thousands of sensors should make standard errors small (variance divided by square root no. of observations).
Of course, one biased sensor in the middle of nowhere would have a disproportionate effect – although I’m not clear how the interpolations are done.
I mean standard deviation (or put square root around the brackets)…
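For concreteness, a short sketch of that formula and its main caveat (invented figures): the standard error of the mean shrinks like 1/sqrt(n), but only for the random part of the error; a bias common to every sensor is untouched by averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
random_errors = rng.normal(0, 0.5, n)  # independent per-sensor errors (invented sigma)
common_bias = 0.3                       # a bias shared by every sensor (invented)

print(f"standard error of the mean: {random_errors.std(ddof=1) / np.sqrt(n):.4f}")  # ~0.005
print(f"actual error of the mean:   {(random_errors + common_bias).mean():.4f}")    # ~0.3
```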
As the earth’s surface area is roughly 196,935,000 square miles and its geological parameters are as diverse as it is possible to imagine, we have intelligent people declaring that the world is heating by as much as 2.0°C per 100 years, with officially recorded temperatures covering perhaps 1% of the total area of the earth. I fully agree with the above article. We as humans are all idiots; some of us think we understand what we are trying to do, but even reading a temperature looks beyond the scope of those whose job it is.
Italy is a prime example of human intelligence, knowledge, and understanding; it is in that country for all to see, and Italians have in the past produced some of the world’s most outstanding thinkers. BUT!!!! if you put 100 Italians into a room and ask them to form a political party, at the end of a month you would find that they have formed 100-plus political parties and thousands of political ideologies, none of which address the problems facing the country.
And Italy is one of the world’s better countries; it had inside baths and running hot water when the English were still learning to make fire. But here we are 3,000 years later and 99.9% of the world’s population still cannot read a thermometer. Like I said above, we are all idiots.
Anthony,
You missed a few big thermometer errors, known only from experience with them:
Having them measure in direct sunlight: the material around the thermometer absorbs heat and gives a falsely warm temperature reading.
Some thermometers are pressure-fit or have a fastener and can slide in the sleeve.
Having the thermometer snow-covered in heavy blowing wind.
Moisture on the thermometers.
“The correct mathematical method of handling this issue of resolution is to look at the original resolution of the recorded data. Typically, old Fahrenheit data was recorded in increments of 2 degrees F, e.g. 60, 62, 64, 66, 68, 70. Very rarely on old data sheets do you see 61, 63, etc. (although 65 is slightly more common).
If the original resolution was 2 degrees F, the resolution used for the same data converted to Centigrade should be 1.1°C.
Therefore, mathematically:
60F = 16C
61F = 16C
62F = 17C
etc.
In conclusion, when interpreting historical environmental temperature records one must account for the accuracy errors built into the thermometer and the resolution limits of the instrument, as well as errors of observation and recording of the temperature.”
In GHCN, observers recorded F by rounding up or down:
Tmax to 1 degree F
Tmin to 1 degree F.
Then the result is averaged and rounded: (Tmax + Tmin) / 2.
Now, if you think that changing from F to C is an issue, you can do the following: calculate the trend in F, then convert F to C and calculate the trend.
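That comparison takes only a few lines; here is a minimal sketch (the trend, noise sigma, and station values are invented). Because the F-to-C conversion is linear, it rescales the fitted trend by exactly 5/9 and changes nothing else:

```python
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(36_500)                      # ~100 years of daily records
base_f = 55 + 0.01 * days / 365.25            # built-in trend of +0.01 F/year
tmax = np.round(base_f + 10 + rng.normal(0, 5, days.size))  # recorded to 1 F
tmin = np.round(base_f - 10 + rng.normal(0, 5, days.size))
daily_f = np.round((tmax + tmin) / 2)         # averaged, then rounded

years = days / 365.25
trend_f = np.polyfit(years, daily_f, 1)[0]
trend_c = np.polyfit(years, (daily_f - 32) * 5 / 9, 1)[0]
print(f"trend in F: {trend_f:.5f} F/yr  (x 5/9 = {trend_f * 5 / 9:.5f} C/yr)")
print(f"trend in C: {trend_c:.5f} C/yr")      # identical: the conversion is linear
```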
Also, “observer error” and transcription errors are all addressed in the literature; see Brohan 06.
The other thing that is instructive is to compare two thermometers that are within a few km of each other over a period of, say, 100 years, and look at the correlation: 98% plus.
Or you can write a simulation of a sensor with very gross errors: simulate daily data for 100 years, assume small errors and calculate the trend, then assume large errors and calculate the trend.
Result? No difference. The error structure of individual measurements doesn’t impact your estimation of the long-term trends. NOW, if many thermometers all had BIASES (not uncertainty), and if those biases were skewed hot or cold, and if those biases changed over time, then your trend estimation would be impacted.
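That simulation is short enough to show in full (a sketch with invented numbers): gross random error leaves a century-scale trend essentially untouched, while a slowly drifting bias goes straight into it.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(36_500) / 365.25
signal = 10 + 0.007 * years + 8 * np.sin(2 * np.pi * years)  # true 0.7 C/century trend

def trend_per_century(series):
    return np.polyfit(years, series, 1)[0] * 100

print(trend_per_century(signal + rng.normal(0, 0.5, years.size)))  # ~0.7 (small errors)
print(trend_per_century(signal + rng.normal(0, 2.0, years.size)))  # ~0.7 (gross errors)
print(trend_per_century(signal + 0.01 * years))                    # ~1.7 (drifting bias)
```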
Steven;
You don’t get to just wave away the possibility of systematic error. That’s the whole point of error bars. You can’t know anything about systematic error WITHOUT ACTUALLY CHECKING FOR IT. And such checking has not been done. Ergo ….
Steven Mosher
I agree with you, but only if all biases occur at the same time, in the same direction, and by the same amount. And that is never, ever the case. One doesn’t even know how much, which, and where that kind of bias appears until you look at every station and take a deep look at its history, which only Anthony’s volunteers did. But at sea (7/10 of the earth’s surface) the situation is much worse. The literature is full of this issue. The metadata necessary to estimate, and perhaps luckily correct, that kind of bias are not available. So what is left is: you should mention them and increase your range of uncertainty accordingly.
Michael
Thanks – a very enlightening post!
I was most interested in the way that looking at the meniscus from different angles will give you different results. Ages ago, we students were shown how to do this, in an introductory practical about metrology.
In my naivety I assumed that this care was taken by all who record temperatures, and that the climate scientists using those data were aware of the possibility of measuring errors … apparently not.
So even if sloppiness here or there doesn’t influence the general trend, as Steven Mosher points out above – should we not ask what other measurements used by the esteemed computer models are equally sloppy, and do not even address the question of metrological quality control?
I saw the uniform 0.46 number for error over the instrumental record, and my first thoughts were on the estimates of sea surface temperature. The oceans cover 70 percent of the globe, yet until about 30-40 years ago temperature measurement followed shipping lanes and the spatial coverage was poor. Land temperature measurements were also weighted heavily toward more developed regions. So to me it is absurd to think the accuracy can be that consistent, given only how the spatial distribution of where the temperature has been measured has changed.
Your article is “spot on”. I have various mercury-bulb and alcohol-bulb thermometers around my house, as well as type K thermocouples and RTDs. On any given day you can’t really get better than 1 or 2 deg F agreement between them all.
As a person versed in both thermodynamics and statistics I find it amusing every time I see precision and accuracy statements from the climate community.
Besides the paint chipping off over time, if it’s really cold and the thermometer reader takes some time to read the thermometer, or gets really close to it due to 1) bad eyesight or 2) to get that “perfect” measurement in the interest of science, either breathing on the thermometer or body heat would affect the reading to the upside wouldn’t it? The orientation of the thermometer in the picture shows that a person would be facing the thermometer and breathing in that direction for however long it took to take the reading. And if it was very cold and windy that sure would make a nice little shelter to warm up for a moment and scrape the fog off the glasses, maybe have a sip of coffee before taking the reading.
Another example of measurement errors to the upside. Why oh why are they always to the upside?
Great post. As a practicing engineer involved in design and analysis of precision measurement equipment I am well aware of the challenges in making measurements and interpreting data. This post is spot-on in its observation that many data users put little thought into the accuracy of the measurements.
Especially in cases where one is searching for real trends, the presence of measurement drift (as opposed to random errors) can create huge problems. The glass hardening issue is therefore huge here.
Any chance of being allowed to make an icewater measurement with one of these old thermometers? I’m sure that result would be fascinating.
There is one more source of uncertainty which is not mentioned in this excellent article: changes in observation times. Different daily average calculation methods could create a significant warm or cold bias compared to the true 24-hour average temperature of any day. The difference will be different in each station and in each historical measurement site, because the average daily temperature curve is determined by microclimatic effects.
In most of the cases, climatologists try to account for these biases with monthly mean adjustments calculated from 24-hour readings. However, it is impossible to adjust these errors correctly with a single number, when the deviations are not the same in each observing site.
Let me show you an example for this issue:
Between 1780 and 1870, Hungarian sites observed the outdoor temperature at 7-14-21h, 6-14-22h or 8-15-22h Local Time, depending on location. How can anyone compare these early readings with contemporary climatological data? (The National Met. Service defines the daily mean temperature as the average of 24 hourly observations.)
The average annual difference between 7-14-21h LT means and true 24-hour means, calculated from over a million automatic measurements, is -0.283°C. This old technique causes a warm bias, which is most pronounced in early summer (-0.6°C in June) and negligible in late winter/early spring. Monthly adjustments lie between 0.0 and -0.6°C. The accuracy of these adjustments differs from month to month; the 1-sigma standard error varies between 0.109 and 0.182°C. Instead of a single value, we can only define an interval for each historical monthly and annual mean.
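A toy version of that comparison (a sketch assuming an idealized sinusoidal diurnal cycle; real diurnal curves vary by site and season, which is precisely the point): sampling at three fixed hours picks up a bias relative to the 24-hour mean.

```python
import numpy as np

hours = np.arange(24)
# Assumed diurnal shape: minimum around 2h, maximum around 14h local time.
diurnal = 15 + 5 * np.cos(2 * np.pi * (hours - 14) / 24)

mean_24h = diurnal.mean()                 # the "true" daily mean
mean_obs = diurnal[[7, 14, 21]].mean()    # three fixed observation hours
print(f"24-hour mean:  {mean_24h:.3f} C")
print(f"7-14-21h mean: {mean_obs:.3f} C  (bias {mean_obs - mean_24h:+.3f} C)")
```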
I am wondering what exactly CRU does when they incorporate 19th-century Hungarian data into their CRUTEM3 global land temperature series. Observation-time problems (bias and random error) are just one source of uncertainty. What about different site exposures (Stevenson screens didn’t exist at that time), or the old thermometers that were scaled in Réaumur degrees instead of Celsius? These issues are well documented in hand-written station diaries, 19th-century yearbooks and other occasional publications. Such information is only available in Hungarian; did Phil Jones ever read it? 🙂
Great Post, Mark, and thanks for giving it the air it deserves, Anthony. I read this on Mark’s blog a couple of days ago and was sure it was worth wider promulgation.
My memory from running a university met site 40 yrs ago (University of Papua New Guinea, Port Moresby) is that the ‘almost horizontal’ orientation commented on and queried by a couple of posters is because the max and min thermometers have little indicator rods inside the bore, which get pushed up to the max by the meniscus and stick there (mercury), or get pulled down to the min and stick there (alcohol). The weather observer resets them by turning them to the vertical: upright for the mercury (max), so the rod slides back down to touch the meniscus again; upside down for the alcohol (min), whereupon the rod slides back up, inside the liquid, till it touches the inside of the meniscus. Both are then replaced on their near-horizontal stands.
By the way, the Stevenson screen pictured is in an atrocious condition, and the surrounds are far out of compliance with the WMO specs, which require, from memory, ‘100 feet of surrounding mown but not watered grass, and no close concrete or other paved surface’ or similar...
This alone could surely give a degree or so of temperature enhancement, and I suspect that this sort of deterioration from site spec over time has, on average, added somewhat to the global average *as recorded*, giving a secular trend which really just reflects the increasing crappiness of the average met station over time...
Not helped by the gradual secular transition to electronic sensors, ’cos they apparently are almost always within spitting distance of a (!!!) building... thus giving another time trend pushing up the average.
Mark
Mosher has it summed up pretty well. The overall error for a single instrument would impact local records, as has been seen quite a few times; for the global temperature record, not so much. Bias is the major concern when instrumentation is changed at a site or the site is relocated.
Adjustments to the temperature record are more of a problem for me. The UHI adjustment is pretty much a joke. I still have a problem with the magnitude of the TOBS adjustment. It makes sense when trying to compare absolute temperatures (where the various errors do count), but not so much where anomalies are used for the global average temperature record. Perhaps Mosh would revive the TOBS adjustment debate.