For those who didn’t notice: this is about metrology, not meteorology, though meteorology uses the final product. Metrology is the science of measurement.
Since we recently had this paper from Pat Frank dealing with the inherent uncertainty of temperature measurement, which establishes a new minimum uncertainty of ±0.46°C for the instrumental surface temperature record, I thought it valuable to review the uncertainty associated with the act of temperature measurement itself.
As many of you know, the Stevenson Screen, aka Cotton Region Shelter (CRS), such as the one below, houses Tmax (mercury) and Tmin (alcohol) recording thermometers.
They look like this inside the screen:

Reading these thermometers would seem to be a simple task. However, that’s not quite the case. Adding to the statistical uncertainty derived by Pat Frank, as we see below in this guest re-post, measurement uncertainty in both the long and the short term is also an issue.

The following appeared on the blog “Mark’s View”, and I am reprinting it here in full with permission from the author. There are some enlightening things to learn about the simple act of reading a liquid-in-glass (LIG) thermometer that I didn’t know, as well as some long-term issues (such as the hardening of the glass) whose magnitudes are about as large as the climate change signal for the last 100 years, ~0.7°C. – Anthony
==========================================================
Metrology – A guest re-post by Mark of Mark’s View
This post is actually about the poor quality and processing of historical climatic temperature records rather than metrology.
My main points are that in climatology many important factors that are accounted for in other areas of science and engineering are completely ignored by many scientists:
- Human Errors in accuracy and resolution of historical data are ignored
- Mechanical thermometer resolution is ignored
- Electronic gauge calibration is ignored
- Mechanical and Electronic temperature gauge accuracy is ignored
- Hysteresis in modern data acquisition is ignored
- Conversion from Degrees F to Degrees C introduces false resolution into data.
Metrology is the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology. Believe it or not, the metrology of temperature measurement is complex.
It is actually quite difficult to measure things accurately, yet most people just assume that information they are given is “spot on”. A significant number of scientists and mathematicians also do not seem to realise that the data they are working with is often not very accurate. Over the years, as part of my job, I have read dozens of papers based on pressure and temperature records in which no reference is made to the instruments used to acquire the data or to their calibration history. The result is that many scientists frequently reach incorrect conclusions about their experiments and data because they do not take into account the accuracy and resolution of the data. (It seems this is especially true in the area of climatology.)
Do you have a thermometer stuck to your kitchen window so you can see how warm it is outside?
Let’s say you glance at this thermometer and it indicates about 31 degrees centigrade. If it is a mercury or alcohol thermometer you may have to squint to read the scale. If the scale is marked in 1°C steps (which is very common), then you probably cannot interpolate between the scale markers.
This means that this particular thermometer’s resolution is 1°C, which is normally stated as plus or minus 0.5°C (±0.5°C).
This example of resolution assumes the temperature is observed under perfect conditions and that you have been properly trained to read a thermometer. In reality you might just glance at the thermometer, or you might have to use a flashlight to look at it, or it may be covered in a dusting of snow, rain, etc. Mercury forms a pronounced meniscus in a thermometer that can span more than 1°C of the scale, and many observers incorrectly read the temperature at the base of the meniscus rather than its peak. (This picture shows an alcohol meniscus; a mercury meniscus bulges upward rather than down.)
Another major common error in reading a thermometer is the parallax error.
Image courtesy of Surface Meteorological Instruments and Measurement Practices by G.P. Srivastava (with a mercury meniscus!). Parallax error occurs where refraction of light through the glass thermometer exaggerates any error caused by the eye not being level with the surface of the fluid in the thermometer.
If you are using data from hundreds of thermometers scattered over a wide area, with data recorded by hand by dozens of different people, the effective observational resolution should be degraded accordingly. In the oil industry, for example, it is common to accept an error margin of 2-4% on manually acquired data.
As far as I am aware, historical raw temperature data from multiple weather stations has never been adjusted to account for observer error.
We should also consider the accuracy of the typical mercury and alcohol thermometers that have been in use for the last 120 years. Glass thermometers are calibrated by immersing them in an ice/water bath at 0°C and a steam bath at 100°C; the scale is then divided equally into 100 divisions between zero and 100. However, a glass thermometer at 100°C is longer than the same thermometer at 0°C. This means that the scale gives a false high reading at low temperatures (between 0 and 25°C) and a false low reading at high temperatures (between 70 and 100°C). The same process is followed for weather thermometers with a range of -20 to +50°C.
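To make the shape of that calibration error concrete, here is a minimal Python sketch (the nonlinearity coefficient is invented purely for illustration; real glass will differ): a thermometer pinned to the truth at the two fixed points but slightly nonlinear in between reads high at low temperatures and low at high ones, the pattern described above.

```python
# Hypothetical liquid-in-glass thermometer: exact at the 0 and 100 C
# fixed points, but with an invented cubic nonlinearity in between,
# so it reads high below ~50 C and low above, per the text.
def indicated(t_true):
    a = 2e-5  # illustrative coefficient, not a measured value
    return t_true + a * t_true * (t_true - 50.0) * (t_true - 100.0)

for t in (0, 10, 25, 50, 75, 90, 100):
    err = indicated(t) - t
    print(f"true {t:3d} C -> indicated {indicated(t):6.2f} C (error {err:+.2f} C)")
```

This sort of in-between deviation is exactly what the correction charts mentioned next were for.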
25 years ago, very accurate mercury thermometers used in labs (0.01°C resolution) came with a calibration chart or graph to convert the temperature observed on the thermometer scale to the actual temperature.
Temperature cycling hardens the glass of a thermometer bulb and shrinks it over time; a 10-year-old -20 to +50°C thermometer will give a false high reading of around 0.7°C.
Over time, repeated high-temperature cycles cause alcohol thermometers to lose vapour into the vacuum at the top of the column, creating false low temperature readings of up to 5°C. (That is 5.0°C, not 0.5; it’s not a typo…)
Electronic temperature sensors have been used more and more over the last 20 years for measuring environmental temperature. These also have their own resolution and accuracy problems. Electronic sensors suffer from drift and hysteresis and must be calibrated annually to remain accurate, yet most weather station temperature sensors are NEVER calibrated after they have been installed.
Drift is where the recorded temperature steadily increases or decreases even when the real temperature is static, so the recording error gradually grows over time. It is a fundamental characteristic of all electronic devices, a quantum-mechanical effect in the metal parts of the sensor that cannot be compensated for. Typical drift of a -100 to +100°C electronic thermometer is about 1°C per year, and the sensor must be recalibrated annually to correct this error.
Hysteresis is a common problem as well. This is where increasing temperature has a different mechanical effect on the thermometer than decreasing temperature, so, for example, if the ambient temperature increases by 1.05°C the thermometer reads an increase of 1°C, but when the ambient temperature drops by 1.05°C the same thermometer records a drop of 1.1°C. (This is a VERY common problem in metrology.)
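A minimal sketch of how that asymmetry biases a record over repeated cycles, using the text’s own numbers:

```python
# The text's hysteresis example: the true temperature repeatedly rises
# and falls by 1.05 C, but the gauge registers +1.0 C on the way up
# and -1.1 C on the way down.
true_t, indicated_t = 20.0, 20.0
for cycle in range(10):
    true_t += 1.05; indicated_t += 1.0   # warming half-cycle
    true_t -= 1.05; indicated_t -= 1.1   # cooling half-cycle

print(f"true temperature after 10 cycles:      {true_t:.2f} C")       # 20.00
print(f"indicated temperature after 10 cycles: {indicated_t:.2f} C")  # 19.00
```

The true temperature ends where it started, yet the gauge has quietly walked a full degree away from it.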
Here is a typical food temperature sensor’s behaviour compared to a calibrated thermometer, without even considering sensor drift. [Figure: thermometer calibration curve] Depending on the measured temperature, the offset in this high-accuracy gauge ranges from -0.8 to +1°C.
On top of these issues, the people who make these thermometers and weather stations state clearly the accuracy of their instruments, yet scientists ignore it! The packaging of a -20 to +50°C mercury thermometer will state, for example, that the accuracy of the instrument is ±0.75°C, yet frequently this information is not incorporated into the statistical calculations used in climatology.
Finally we get to the infamous conversion of degrees Fahrenheit to degrees Centigrade. Until the 1960s almost all global temperatures were measured in Fahrenheit; nowadays all the proper scientists use Centigrade, so all old data is routinely converted. Take the original temperature, subtract 32, multiply by 5, and divide by 9:
C= ((F-32) x 5)/9
Example: the original reading from a 1950 data file is 60°F. This figure was eyeballed by the local weatherman and written into his tally book. Fifty years later a scientist takes this figure and converts it to Centigrade:
60-32 =28
28×5=140
140/9= 15.55555556
This is usually (incorrectly) rounded to two decimal places: 15.56°C, without any explanation as to why this level of resolution has been selected.
The correct mathematical method of handling this issue of resolution is to look at the original resolution of the recorded data. Typically, old Fahrenheit data was recorded in increments of 2 degrees F, e.g. 60, 62, 64, 66, 68, 70. Very rarely on old data sheets do you see 61, 63, etc. (although 65 is slightly more common).
If the original resolution was 2°F, the resolution used for the same data converted to Centigrade should be 1.1°C.
Therefore, mathematically:
60°F = 16°C
61°F = 16°C
62°F = 17°C
etc.
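As a sketch of the honest way to do this conversion (the rounding convention here is my assumption; the point is only that the output resolution should match the 2°F input resolution):

```python
def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

# Data recorded to the nearest 2 F has a resolution of about 1.1 C,
# so reporting the converted value to better than 1 C adds spurious
# precision. Rounding to whole degrees C is one honest convention.
for f in (60, 61, 62, 64):
    c = f_to_c(f)
    print(f"{f} F -> {c:.8f} C -> report as {round(c)} C")
```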
In conclusion, when interpreting historical environmental temperature records one must account for the errors of accuracy and resolution built into the instrument, as well as the errors made in observing and recording the temperature.
In a high quality glass environmental thermometer manufactured in 1960, the accuracy would be +/- 1.4F. (2% of range)
The resolution of an astute and dedicated observer would be around +/-1F.
Therefore the total error margin of all observed weather station temperatures would be a minimum of ±2.4°F, or ±1.3°C…
===============================================================
UPDATE: This comment below from Willis Eschenbach, spurred by Steven Mosher, is insightful, so I’ve decided to add it to the main body – Anthony
===============================================================
Willis Eschenbach says:
As Steve Mosher has pointed out, if the errors are random normal, or if they are “offset” errors (e.g. the whole record is warm by 1°), increasing the number of observations helps reduce the size of the error. What matters is anything that causes a “bias”, a trend in the measurements. There are some caveats, however.
First, instrument replacement can certainly introduce a trend, as can site relocation.
Second, some changes have hidden bias. The short maximum length of the wiring connecting the electronic sensors introduced in the late 20th century moved a host of Stevenson Screens much closer to inhabited structures. As Anthony’s study showed, this has had an effect on trends that I think is still not properly accounted for, and certainly wasn’t expected at the time.
Third, in lovely recursiveness, there is a limit on the law of large numbers as it applies to measurements. A hundred thousand people measuring the width of a hair by eye, armed only with a ruler marked in mm, won’t do much better than a few dozen people doing the same thing. So you need to be a little careful about saying problems will be fixed by large amounts of data.
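A quick Monte Carlo of that hair-width example (the 0.02 mm “eyeball noise” figure is invented): because every observer’s reading snaps to the same 1 mm ruler marks, and the noise is far too small to dither a reading across a mark, the mean of 100,000 readings is no closer to the truth than the mean of 10.

```python
import random

# The true width is far below the 1 mm ruler marks, and the (assumed)
# 0.02 mm eyeball noise almost never changes which mark gets picked.
true_width_mm = 0.07

def observe():
    noisy = true_width_mm + random.gauss(0.0, 0.02)
    return round(noisy)  # every observer snaps to the nearest mm mark

for n in (10, 1000, 100000):
    mean = sum(observe() for _ in range(n)) / n
    print(f"n = {n:6d}: mean reading = {mean:.4f} mm (true = {true_width_mm} mm)")
```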
Fourth, if the errors are not random normal, your assumption that everything averages out may (I emphasize may) be in trouble. And unfortunately, in the real world, things are rarely that nice. If you send 50 guys out to do a job, there will be errors. But these errors will NOT tend to cluster around zero. They will tend to cluster around the easiest or most probable mistakes, and thus the errors will not be symmetrical.
Fifth, the law of large numbers (as I understand it) refers to either a large number of measurements made of an unchanging variable (say hair width or the throw of dice) at any time, or it refers to a large number of measurements of a changing variable (say vehicle speed) at the same time. However, when you start applying it to a large number of measurements of different variables (local temperatures), at different times, at different locations, you are stretching the limits …
Sixth, the method usually used for ascribing uncertainty to a linear trend does not include any adjustment for known uncertainties in the data points themselves. I see this as a very large problem affecting all calculation of trends. All that is ever given is the statistical error in the trend, not the real error, which perforce must be larger.
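To illustrate the point (all numbers invented), here is a sketch comparing the statistical error of an ordinary least-squares trend with the effect of a slow instrumental drift that the residuals never reveal:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100.0)
temps = 0.007 * years + rng.normal(0.0, 0.2, size=years.size)

# The usually quoted uncertainty: the statistical standard error of
# the OLS slope, derived from the scatter of the residuals alone.
coeffs, cov = np.polyfit(years, temps, 1, cov=True)
print(f"fitted trend {coeffs[0]:+.4f} +/- {np.sqrt(cov[0, 0]):.4f} C/yr")

# A slow instrumental drift of 0.3 C over the century, well inside a
# +/-0.5 C spec and invisible in the residuals, moves the trend far
# outside that quoted error bar.
drifted, _ = np.polyfit(years, temps + 0.003 * years, 1, cov=True)
print(f"with drift   {drifted[0]:+.4f} C/yr")
```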
Seventh, there are hidden biases. I have read (but haven’t been able to verify) that under Soviet rule, cities in Siberia received government funds and fuel based on how cold it was. Makes sense, when it’s cold you have to heat more, takes money and fuel. But of course, everyone knew that, so subtracting a few degrees from the winter temperatures became standard practice …
My own bozo cowboy rule of thumb? I hold that in the real world, you can gain maybe an order of magnitude by repeat measurements, but not much beyond that, absent special circumstances. This is because despite global efforts to kill him, Murphy still lives, and so no matter how much we’d like it to work out perfectly, errors won’t be normal, and biases won’t cancel, and crucial data will be missing, and a thermometer will be broken and the new one reads higher, and …
Finally, I would back Steven Mosher to the hilt when he tells people to generate some pseudo-data, add some random numbers, and see what comes out. I find that actually giving things a try is often far better than profound and erudite discussion, no matter how learned.
w.
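Following that suggestion, here is a minimal pseudo-data experiment (station count, trend, and error magnitudes are arbitrary assumptions): fixed per-station offsets and random read errors do wash out of a mean-anomaly trend, which is exactly why the caveats above concern biases rather than noise.

```python
import numpy as np

# Pseudo-data: 1000 stations, 100 years, true trend of 0.007 C/yr,
# each station given a fixed offset error plus yearly random read
# errors (all magnitudes are arbitrary choices).
rng = np.random.default_rng(42)
years = np.arange(100)
true = 0.007 * years
offsets = rng.normal(0.0, 1.0, size=(1000, 1))         # per-station bias
noise = rng.normal(0.0, 0.5, size=(1000, years.size))  # read errors
data = true + offsets + noise

# Offsets and random noise wash out of the mean station anomaly:
anomalies = data - data.mean(axis=1, keepdims=True)
recovered = np.polyfit(years, anomalies.mean(axis=0), 1)[0]
print(f"recovered trend: {recovered:.4f} C/yr (true 0.0070)")
```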
Something that is interesting here: although error does tend to “wash out” when using large amounts of data, the error bands will still tend to be large. The larger impact is in fact the human element in the actual measuring. To say that you can detect noise within the noise of measurements such as these boggles the mind. The “noise” in this case would be the human impact on the climate, which will be undetectable against other causes over time. You simply cannot torture the data enough to extract that signal, so to speak.
But the largest issue is that even with the understanding of error presented here, many people do not understand the actual implications. The odds are that the temperature increase of 0.7°C is correct (this is the actual observed change), but the fact that the error is, say, ±1.3°C does tell us something we should be aware of.
It means that, above all else, we have probably seen warming, which few people doubt. Whether this is natural or man-made is the actual debate now, and this result seriously puts a damper on the assertion that it is man-made, since with such large error bands you cannot be as sure of the trends over smaller periods. And since the signal comes from shorter trends as a rule, you may be able to compute a precise trend over a short period, but whether it is noise or caused by humans? No chance of telling.
The implications are somewhat large in that sense. If you ran the models many times, randomly adding “possible” error on each run, they should show that if the CO2 signal exists, the measurement error wouldn’t matter. To put that into practice would involve randomly shifting the observed temperatures up and down somewhat and re-tuning the GCMs on that new assumption.
This is something the GCMs are weak on, since they use temperatures to tune themselves (along with other climate variables, from solar influences on down), and to my knowledge the actual instrument error has never been included in model runs to date. This is an issue that bears some exploring.
Overall, a very good article, although I question the larger possible error; I would hazard a guess that 1.3°C is the actual limit of the error, since we would assume that observer bias and the C/F conversion have been considered (it is fairly obvious, and although I find most climate scientists to be mostly incompetent, I would find it hard to believe that they didn’t figure this one out). The actual error from the instruments would also be obvious, but shucks, it’s something I can see them overlooking, as they simply assume it washes out, so to speak.
The fact that so much error is possible merits a large study in itself. If we could make the temperature record more accurate, it would help a lot in our study of the climate overall.
Steven Mosher
“The other thing that is instructive is to compare two thermometers that are within a few km of each other, over a period of say 100 years. Look at the correlation.
98% plus.”
I never tried it over a period of 100 years, but over about 100 days there was not much correlation between the trends in the max/min readings from the thermometer at my school and the one at my club. But perhaps 20 km is more than “a few”. Or am I cheating because one was in the desert and the other in an urban area?
When I was at school, I recorded the temperatures for years from the school’s Stevenson screen. We were a sub-recording station for the local RAF base, so I assume it was set up correctly. We had four thermometers inside: a high-precision Tmax, which was mercury; a high-precision Tmin, which was alcohol; and a mid-precision wet & dry. The Tmax and Tmin were angled at about 20 degrees, with the bulbs at the bottom of the slope, and the wet & dry was vertical. Two things strike me about the photos of the screen. The thermometers are fixed directly onto a piece of horizontal wood, so there is no free airflow round the bulbs. And standing the screen over a gravestone must make the nighttime minimum temperature totally inaccurate, as it will be affected by re-radiated heat from the block of stone below, the warmth rising straight upwards towards the box in the cool night air. At our school site, the screen was located in a fenced-off, grassed area which the school gardener was told not to cut. The grass was quite long there, and contained a final thermometer, the grass minimum, which lay horizontally on two small supports such that the bulb just touched the blades of grass. This often used to record ground frosts which didn’t appear as air frosts.
An 8 inch rain gauge and a Fortin barometer completed the equipment – we also recorded cloud cover and type, and estimated cloud base. We used to record all this daily, draw up graphs and charts, and work out the humidity from the wet & dry readings, using a book of tables. It was a very good grounding in Physics, Physical Geography, and the methods of recording and presenting data over a long period. Many, many years before computers!!
I second the request for a pdf document. It would be great in my file.
This post highlights and confirms with numbers something I have believed for a long time. How does this fit with the past discussions of the fact that temperature recording at airports has changed (the M for minus thing)? More and more errors introduced and unaccounted for, I suppose.
Here is the paper Steve Mosher referred to in his post:
“Uncertainty estimates in regional and global observed temperature changes: a new data set from 1850,” Brohan, P., et al., J. Geophysical Research 111, 2006
In the 1960s I used to wander round the UK with a team engaged in commissioning turbine-generator units before they were handed over to the CEGB.
We got to one power station where the oil return drains from the turbine-generator bearings were fitted with dial thermometers (complete with alarm contacts) and witnessed yet another round in the endless battle between the “sparks” and “hiss & piss” departments.
The electrical engineers in the generator department at head office had specified temperature scales in degrees C, whereas the mechanical engineers in the turbine department (on the floor below) had specified scales in degrees F.
How we laughed (when the CEGB guys were not looking).
Another thing ignored, or never even thought of, is the response time of modern electronic sensors compared to liquid in glass thermometers. A wind shift resulting in a sudden movement of warm air towards the temperature station, say warm air from the airport tarmac, would register a lower peak temperature on a glass thermometer than it would on a modern electronic sensor. This biases the Tmax upwards with modern instrumentation compared with what would have been measured by past instrumentation.
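A rough sketch of that effect (both time constants are assumptions, not measured values): modelling each instrument as a first-order lag, a two-minute warm gust registers a noticeably higher Tmax on a fast electronic sensor than on a sluggish liquid-in-glass thermometer.

```python
import numpy as np

# Each instrument modelled as a first-order lag, sampled once per
# second, responding to a 5 C warm gust lasting two minutes. Assumed
# time constants: ~10 s for an electronic probe, ~60 s for LIG.
t = np.arange(0, 600, 1.0)
true = 20.0 + np.where((t >= 200) & (t < 320), 5.0, 0.0)

def respond(tau):
    out = np.empty_like(t)
    out[0] = 20.0
    for i in range(1, t.size):
        out[i] = out[i - 1] + (true[i] - out[i - 1]) / tau
    return out

print(f"Tmax, fast electronic sensor: {respond(10.0).max():.2f} C")
print(f"Tmax, sluggish LIG:           {respond(60.0).max():.2f} C")
```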
I am appalled to see that the climate scientists do not seem to have included any of these basic metrology considerations in their work. And to give significance to results quoted at a resolution finer than the basic resolution of the measuring process is not even wrong (to borrow a phrase).
If you look at just one aspect of the chain of electronic temperature recording, the quantisation of the reading of the analog temperature element, there are potentially serious sources of error which must be understood by any serious user of such systems.
Any electronics engineer will confirm that analog-to-digital conversion devices are prone to a host of potential error sources (try googling for them; you will be amazed).
They are very difficult to calibrate across their range, and also across the range of environmental variations, in order to check that the various compensations remain within spec.
I would add that there is an insidious problem in integrative averaging of many readings, which comes from the non-linearities in each individual measurement system.
If you use the “first-difference” method to get a long-term average “trend”, then non-linearity will cause a continual upward (or downward) drift in the result. The problem worsens with more readings. For example, comparing the “average” from a series of weekly readings with the “average” from daily readings will reveal this source of drift.
And when we come to the Mauna Loa CO2 series, we have another area where the claimed “results” of the manipulation of the instrument readings are given in ppm with decimal places! (How DO you get 0.1 of a part?)
Anyway, as I understand it, the measurement and reporting of atmospheric CO2 is dominated by a single linear process, from the production and testing of the calibration gases through to the analysis of the average of the results. I should like to see a thorough critical analysis of every stage of this process, to ensure that we are not looking at an artefactual result.
Go, Michael!
Being that this subject is sort of in my wheelhouse, I would like to add a few items to the uncertainty picture.
Calibration of mercury/alcohol thermometers:
The 0 degree C (ice point) is somewhat dependent on the purity of the water.
Regular tap water from a municipal supply can throw the ice point off by as much as 2 degrees C with the presence of naturally occurring salts/minerals. The same is true to a lesser extent with the boiling point. Also the air (barometric) pressure has a marked effect on the boiling point.
The accuracy and stability of electronic temperature measurement devices is largely dependent on the purity of the components involved, as well as on the metallurgical chemistry; oxidation and nitrogen embrittlement are both factors over time in metal-based devices.
You have basically three classes of temperature measurement contrivances.
Those that rely on the coefficient of thermal expansion.
Mercury/alcohol thermometers and bi-metal dial thermometers are examples of CTE devices.
Those that generate an EMF due to temperature differences.
Thermocouples are the best examples of these.
Those that vary resistance with temperature.
Thermistors and RTDs are examples of these.
Then there are the electronic types that rely on radiation/optics.
These are non-contact and depend on the emissivity of the object whose temperature is to be measured. (It is a fourth class, so sue me.) Various techniques are used, such as thermopiles and photodetectors. These are generally not as accurate as direct-contact devices. Your handy-dandy satellites use variants of this technique.
About the only stable temperature point available to calibrate anything with is the freezing point of gold. It is stable because of its chemistry. Gold does not readily combine chemically with anything.
You can obtain remarkable resolution from almost all of the contrivances, all it takes is large sums of money, time, and pathological attention to detail.
Errors for thermocouples are ±1.5°C.
Errors for RTDs are ±1°C.
As stated, errors for standard thermometers are ±1°C but can be as large as 3 or 4°C depending on factors.
Regarding climatology, the error of temperature measurement is somewhat cumulative, so that over time the uncertainty levels should increase. This is of course ignored by the climate community. That, and the ludicrous claim that they can reconstruct temperature to within 0.1°C, is an indication that they do not know what they are talking about and are fumbling around in the dark.
just my $.02 worth which may not be much due to inflation.
Steven Mosher says:
January 22, 2011 at 3:23 am
The point that you have missed here, Steve Mosher, is that the margin of error is practically twice the claimed “global warming signal” of 0.7°C. Add in some biased, agenda-driven human homogenisation, and what have you got?
The oldest temperature record in the world is only 352 years old. Based on the Central England Temperature record, over the last 15 years there has been a cooling trend:
http://c3headlines.typepad.com/.a/6a010536b58035970c0147e1680aac970b-pi
So Steve Mosher’s point sounds all well and good until one actually examines it more closely, at which point it becomes utterly meaningless. The reason it becomes meaningless is that the “global warming signal” is half the margin of error in the “official” data. It makes no difference which trend you prefer, the warming or the cooling; both are meaningless because both are approximately half the margin of error. So the whole temperature issue is a red herring.
It is an unprovable and un-winnable faux debate that serves the “warmists” and the “gatekeepers” both, by keeping everyone distracted from the real issue, CO2.
I have seen this line of reasoning before, and I believe it is an incorrect application of the statistics of large numbers. If you had 100 thermometers measuring temperature in the same small area at the same time, you would be correct. This is how the satellite measurements of sea level can get millimetre accuracy with only 3 cm resolution: they take tens of thousands of measurements of the same area in a short space of time. A time series of measurements from a single thermometer in one location doesn’t, I believe, meet the criteria for this statistical method.
In an engineering sense, the climate scientists’ measurement regime is fine if one were cutting 2x4s and plywood for shelves in the garage. You definitely wouldn’t want these guys designing an airframe for a supersonic jet fighter!……:-o
….. and to be honest, I think they knocked together the climate science club house that they alone are playing in. All that’s missing is the hand painted, “NO GURLS” sign….(that’d be us skeptics)…..;-)
Thanks for posting this, Anthony. I would like to see a lot more posts on this topic. I do not agree with Mr Mosher that the result is “no difference”. If we accepted his dismissive argument about the unimportance of calibrating the instruments (all of them), then why do other disciplines make a big fuss about correct readings? Is it not important when we are talking about only 0.7 deg per century, and the economies of many countries being trashed for the sake of it? If instrument calibration adds a degree or two to the “noise”, then the 0.7 is meaningless.
John Marshall says:
January 22, 2011 at 1:35 am
So in reality the feared 0.6°C temperature rise could mean that statistically there has been no temperature change. And all the models give the wrong answer because their temperature inputs are incorrect.
Very interesting article. Is a pdf copy possible please, Anthony?
____________________________________________________________
Frank, there is no need to bother Anthony. Just copy and paste into your word processor and export or save as a pdf. Use MS Word, WordPerfect, an open-source program, or whatever shareware you want; they pretty well all do it. I don’t know about copyright.
I remember having thermometer correction sheets in the labs when calibrating thermometers, and measurements with old steel survey tapes all had to be adjusted for temperature; yes, even temperature. 😉 Even modern electronic distance-measuring devices have a temperature bias that needs to be adjusted, and although newer multifrequency devices have self-correcting circuits, you still “need to know the change in elevation of two points (slope correction), air temperature, atmospheric pressure, and the water vapour amount in the air”… all of which can affect the measurement.
In other words, EVERYTHING requires adjustments to correct for site conditions. All instruments and observers have built in biases and inaccuracies and NOTHING is absolute.
It seems to me that there are more positive biases than negative. Consider: glass hardening always increases over time. At the beginning of a temperature record the thermometers are new; 100 years later most of them are old to very old, and are reading high by 0.7 degrees. The enclosures do the same thing. At the beginning all are shiny and new; after 100 years most are in bad shape. Even if they have been repainted, it was with modern paint, not the old whitewash, which was more reflective. You need not invoke UHI problems or siting difficulties to explain all the temperature rise seen over time.
This doesn’t indict the temperature record.
Accuracy of thermometers matters hardly at all, because the acquired data in absolute degrees is used to generate data on change over time. If a thermometer or observer is off by 10 whole degrees, it won’t matter so long as the error is consistently 10 degrees day after day; the change over time will still be accurate.
Precision is a similar story. There would have to be a bias that changes over the years that somehow makes the thermometers or observers record an ever growing higher temperature as the years go by. Urban heat islands are perfect for that but instrument/recording error just doesn’t work that way. There are thousands of instruments in the network each being replaced at random intervals so the error from age/drift is averaged out because there is an equal distribution of old and new instruments.
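That staggered-replacement argument is easy to simulate (the drift rate below is the article’s 0.7°C per decade of instrument age; the replacement intervals are assumed): with randomized instrument ages the network-mean aging offset stays roughly constant and contributes almost no trend, whereas a network installed all at once, as at the start of a record, accumulates the full drift until replacements begin.

```python
import numpy as np

rng = np.random.default_rng(1)
n_st, n_yr, drift = 500, 100, 0.07  # aging drift: the article's 0.7 C/decade

# Staggered network: random initial ages, each instrument replaced
# (age reset to zero) on its own 10-30 year interval.
ages = rng.uniform(0, 20, n_st)
life = rng.uniform(10, 30, n_st)
bias = []
for _ in range(n_yr):
    bias.append(drift * ages.mean())  # network-mean aging offset
    ages += 1.0
    ages = np.where(ages >= life, 0.0, ages)

trend = np.polyfit(np.arange(n_yr), bias, 1)[0]
print(f"staggered replacement: spurious trend {trend:+.4f} C/yr")
# A network installed all at once instead accumulates the full
# 0.07 C/yr of drift until the first replacements arrive.
```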
This might be interesting in an academic way but isn’t productive in falsifying the CAGW hypothesis. The instrumentation and observation methods are adequate and trying to paint them as less than adequate only appears to be an act of desperation – if the job is botched blaming the tools is no excuse.
So the old “trees make lousy thermometers” has now become “thermometers make lousy thermometers”.
Another problem comes from taking the average temperature to be halfway between Tmin and Tmax. This may well be the case if temperatures rise and fall in a cyclical fashion in a 24-hour period. However, an anomalous event, such as hot aircraft exhaust blowing in the direction of the station, can increase Tmax considerably, thereby creating a considerable error in the ‘average’. And because such ‘hot’ anomalies are more likely than ‘cold’ ones, and also increasingly likely over time, the bias is likely upwards.
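A toy illustration of that midpoint bias (the diurnal shape and spike size are invented numbers): a single one-hour spike moves (Tmin+Tmax)/2 by roughly ten times more than it moves the true daily mean.

```python
import numpy as np

# A smooth diurnal cycle sampled hourly (shape and spike size invented).
hours = np.arange(24)
temps = 15.0 + 8.0 * np.sin((hours - 9) * np.pi / 12.0)

print(f"clean day:  true mean {temps.mean():5.2f} C, "
      f"(Tmin+Tmax)/2 {(temps.min() + temps.max()) / 2:5.2f} C")

spiked = temps.copy()
spiked[14] += 4.0  # one brief hot anomaly, e.g. jet exhaust at 14:00
print(f"spiked day: true mean {spiked.mean():5.2f} C, "
      f"(Tmin+Tmax)/2 {(spiked.min() + spiked.max()) / 2:5.2f} C")
```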
xyzlatin says: January 22, 2011 at 6:57 am
[…] If instrument calibration adds a degree or two to the “noise”, then the .7 is meaningless.
Many good points above, I think this sentence says it all. Trying to parse fractions of a degree change from the current system is not possible. Temperature is very difficult to measure accurately. I know, I successfully measured and controlled temperatures in an IC process to +/-0.1°F in the eighties when linewidths went sub-micron.
Steven Mosher says:
January 22, 2011 at 3:23 am
Absolutely right, Steve.
Skeptics are no better than CAGW alarmists in their willingness to believe anything which supports their own beliefs or disputes the beliefs of the other side. It’s sad. Objectivity is a rare and precious commodity.
Great bunch of real data in one post… First time I ever heard of the following:
[…]”However, a glass thermometer at 100c is longer than a thermometer at 0c.”
The 5-degree cold record set in International Falls (posted below) pretty much eliminates the problem of the thermometer’s resolution.
Look for records like this in the future. Brrr.
We need a NEW name for the next minimum we are moving into.
My vote is for “David Archibald Minimum”.
Well maybe not, having one’s name attached to miserable weather, may not be the best way to be remembered.
Steven Mosher,
You say,
“NOW, if many thermomemters all has BIASES ( not uncertainty) and if those biases were skewed hot or cold, and if those biases changed over time, then your trend estimation would get impacted”
Could you confirm that this is what you meant?
(And, I would be grateful if you could comment on the effect of transducer non-linearities, too)
This is why climatologists work with anomalies rather than estimates of absolute temperature. And they do so on long time scales using lots of instruments. Instrument error is of interest only if a general bias becomes significantly greater over time.
John McManus:
They do, when you’re trying to measure fractions of a degree change over a period of decades to centuries.
The fact that tree rings etc make such lousy temperature proxies is probably due to the fact that nothing in nature exhibits any great sensitivity to small temperature changes – especially to warming changes. Most plants and animals do better with warmer temperatures. So if nature isn’t particularly perturbed by small temp increases, why should we be?
Anyone interested in measurement standards and calibration of their instruments can call up the NIST website for information that includes certified calibration labs and the traceability of measurements to a national or international standard.
There is a paper presented in 1999 by Dr Henrik S. Nielsen titled “What is Traceability and Why Do We Calibrate?”
In it he defines traceability:
“property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties.”
He also adds:
“The concept of traceability requires more than just a calibration sticker. It requires an uncertainty budget for the measurement process and traceable calibration of all the instrument and environmental attributes that have a significant influence on the uncertainty.
In addition to identifying the instrument attributes that need to be calibrated to establish traceability, the uncertainty budget also allows for the optimization of measurement processes both in terms of uncertainty and dollars and cents”.
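For a concrete sense of what such an uncertainty budget involves, here is a toy example in Python (every component value is invented) that combines independent standard uncertainties in quadrature, as the ISO Guide to the Expression of Uncertainty in Measurement (GUM) prescribes:

```python
import math

# Toy uncertainty budget (all component values invented), combining
# independent standard uncertainties in quadrature per the GUM.
budget_c = {
    "reference calibration": 0.20,
    "readout resolution":    0.29,  # 1.0 C steps: 0.5 / sqrt(3)
    "drift since last cal":  0.50,
    "siting / self-heating": 0.30,
}
u = math.sqrt(sum(v ** 2 for v in budget_c.values()))
print(f"combined standard uncertainty: {u:.2f} C")
print(f"expanded uncertainty (k=2):    {2 * u:.2f} C")
```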
In short, if the measurements are bad everything determined from the measurements is bad.