For those who didn’t notice, this is about metrology, not meteorology, though meteorology uses the final product. Metrology is the science of measurement.
Since we have this recent paper from Pat Frank dealing with the inherent uncertainty of temperature measurement, which establishes a new minimum uncertainty of ±0.46 C for the instrumental surface temperature record, I thought it valuable to review the uncertainty associated with the act of temperature measurement itself.
As many of you know, the Stevenson Screen, aka Cotton Region Shelter (CRS), such as the one below, houses Tmax and Tmin recording thermometers (mercury and alcohol, respectively).
They look like this inside the screen:

Reading these thermometers would seem to be a simple task. However, that’s not quite the case. Adding to the statistical uncertainty derived by Pat Frank, as we see below in this guest re-post, measurement uncertainty in both the long and short term is also an issue. The following appeared on the blog “Mark’s View”, and I am reprinting it here in full with permission from the author. There are some enlightening things to learn about the simple act of reading a liquid-in-glass (LIG) thermometer that I didn’t know, as well as some long-term issues (like the hardening of the glass) with magnitudes about as large as the climate change signal for the last 100 years, ~0.7°C – Anthony
==========================================================
Metrology – A guest re-post by Mark of Mark’s View
This post is actually about the poor quality and processing of historical climatic temperature records rather than metrology.
My main points are that in climatology many important factors that are accounted for in other areas of science and engineering are completely ignored by many scientists:
- Human Errors in accuracy and resolution of historical data are ignored
- Mechanical thermometer resolution is ignored
- Electronic gauge calibration is ignored
- Mechanical and Electronic temperature gauge accuracy is ignored
- Hysteresis in modern data acquisition is ignored
- Conversion from Degrees F to Degrees C introduces false resolution into data.
Metrology is the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology. Believe it or not, the metrology of temperature measurement is complex.
It is actually quite difficult to measure things accurately, yet most people just assume that information they are given is “spot on”. A significant number of scientists and mathematicians also do not seem to realise that the data they are working with is often not very accurate. Over the years, as part of my job, I have read dozens of papers based on pressure and temperature records where no reference is made to the instruments used to acquire the data, or to their calibration history. The result is that many scientists frequently reach incorrect conclusions about their experiments and data because they do not take into account the accuracy and resolution of their data. (It seems this is especially true in the area of climatology.)
Do you have a thermometer stuck to your kitchen window so you can see how warm it is outside?
Let’s say you glance at this thermometer and it indicates about 31 degrees centigrade. If it is a mercury or alcohol thermometer you may have to squint to read the scale. If the scale is marked in 1c steps (which is very common), then you probably cannot extrapolate between the scale markers.
This means that this particular thermometer’s resolution is 1c, which is normally stated as plus or minus 0.5c (+/- 0.5c).
This example of resolution assumes observing the temperature under perfect conditions, by someone properly trained to read a thermometer. In reality you might glance at the thermometer, or you might have to use a flash-light to look at it, or it may be covered in a dusting of snow, rain, etc. Mercury forms a pronounced meniscus in a thermometer that can exceed 1c, and many observers incorrectly read the temperature at the base of the meniscus rather than its peak. (This picture shows an alcohol meniscus; a mercury meniscus bulges upward rather than down.)
Another major common error in reading a thermometer is the parallax error.
Image courtesy of Surface Meteorological Instruments and Measurement Practices by G.P. Srivastava (with a mercury meniscus!). This is where refraction of light through the glass thermometer exaggerates any error caused by the eye not being level with the surface of the fluid in the thermometer.
If you are using data from hundreds of thermometers scattered over a wide area, with data recorded by hand by dozens of different people, the effective observational resolution should be widened further. In the oil industry, for example, it is common to accept an error margin of 2-4% when using manually acquired data.
As far as I am aware, historical raw temperature data from multiple weather stations has never been adjusted to account for observer error.
We should also consider the accuracy of the typical mercury and alcohol thermometers that have been in use for the last 120 years. Glass thermometers are calibrated by immersing them in an ice/water bath at 0c and a steam bath at 100c. The scale is then divided equally into 100 divisions between zero and 100. However, a glass thermometer at 100c is longer than the same thermometer at 0c. This means that the scale gives a false high reading at low temperatures (between 0 and 25c) and a false low reading at high temperatures (between 70 and 100c). This process is also followed for weather thermometers with a range of -20 to +50c.
Twenty-five years ago, very accurate mercury thermometers used in labs (0.01c resolution) came with a calibration chart/graph to convert the observed temperature on the thermometer scale to the actual temperature.
Temperature cycles harden the glass of a thermometer’s bulb, which shrinks over time; a 10-year-old -20 to +50c thermometer will give a false high reading of around 0.7c.
Over time, repeated high temperature cycles cause alcohol thermometers to evaporate vapour into the vacuum at the top of the thermometer, creating false low temperature readings of up to 5c. (That is 5.0c, not 0.5c; it’s not a typo…)
Electronic temperature sensors have been used more and more in the last 20 years for measuring environmental temperature. These have their own resolution and accuracy problems. Electronic sensors suffer from drift and hysteresis and must be calibrated annually to stay accurate, yet most weather station temperature sensors are NEVER calibrated after they have been installed.
Drift is where the recording error gradually gets larger and larger over time, so the recorded temperature creeps steadily up or down even when the real temperature is static; it is a fundamental characteristic of the metal parts of the sensor that cannot be compensated for. Typical drift for a -100c to +100c electronic thermometer is about 1c per year, and the sensor must be recalibrated annually to fix this error.
Hysteresis is a common problem as well. This is where increasing temperature has a different mechanical effect on the thermometer than decreasing temperature, so, for example, if the ambient temperature increases by 1.05c the thermometer reads an increase of 1c, but when the ambient temperature drops by 1.05c the same thermometer records a drop of 1.1c. (This is a VERY common problem in metrology.)
Here is a typical food temperature sensor’s behaviour compared to a calibrated thermometer, without even considering sensor drift (see the “Thermometer Calibration” chart): depending on the measured temperature, the offset in this high-accuracy gauge runs from -0.8c to +1c.
But on top of these issues, the people who make these thermometers and weather stations clearly state the accuracy of their instruments, yet scientists ignore it! The packaging of a -20c to +50c mercury thermometer will state an instrument accuracy of +/-0.75c, for example, yet frequently this information is not incorporated into the statistical calculations used in climatology.
Finally we get to the infamous conversion of Degrees Fahrenheit to Degrees Centigrade. Until the 1960s almost all global temperatures were measured in Fahrenheit. Nowadays all the proper scientists use Centigrade, so all old data is routinely converted: take the original temperature, subtract 32, multiply by 5, and divide by 9.
C= ((F-32) x 5)/9
example- original reading from 1950 data file is 60F. This data was eyeballed by the local weatherman and written into his tallybook. 50 years later a scientist takes this figure and converts it to centigrade:
60-32 =28
28×5=140
140/9= 15.55555556
This is usually (incorrectly) rounded to two decimal places: 15.55c, without any explanation as to why this level of resolution has been selected.
The correct mathematical way to handle this issue of resolution is to look at the original resolution of the recorded data. Typically, old Fahrenheit data was recorded in increments of 2 degrees F, e.g. 60, 62, 64, 66, 68, 70. Very rarely on old data sheets do you see 61, 63, etc. (although 65 is slightly more common).
If the original resolution was 2 degrees F, the resolution used for the same data converted to Centigrade should be 1.1c.
Therefore mathematically :
60F=16C
61F=16C
62F=17C
etc
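A minimal Python sketch of this resolution-aware conversion (whole-degree output, matching the ~1.1c resolution argued above; the 2-degree-F input grid is from the old data sheets described earlier):

```python
def f_to_c(f):
    """Exact Fahrenheit-to-Centigrade conversion."""
    return (f - 32) * 5 / 9

# Old sheets recorded whole even degrees F (2 F steps, ~1.1 C),
# so the converted values should carry whole degrees C at most
for f in (60, 62, 64, 66, 68, 70):
    print(f"{f}F = {round(f_to_c(f))}C")
```

Running this reproduces the 60F=16C, 62F=17C pattern above without ever writing down a false 15.55.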
In conclusion, when interpreting historical environmental temperature records one must account for errors of accuracy built into the thermometer and errors of resolution built into the instrument as well as errors of observation and recording of the temperature.
In a high quality glass environmental thermometer manufactured in 1960, the accuracy would be +/- 1.4F. (2% of range)
The resolution of an astute and dedicated observer would be around +/-1F.
Therefore the total error margin of all observed weather station temperatures would be a minimum of +/-2.4F, or about +/-1.3c…
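As a side note, straight addition of the two margins gives the worst case; when the error sources are independent, metrologists often combine them in quadrature (root-sum-square) instead. A sketch using the figures above:

```python
import math

accuracy = 1.4    # glass thermometer accuracy, +/- deg F (from the text above)
resolution = 1.0  # observer reading resolution, +/- deg F

worst_case = accuracy + resolution                        # errors stack the same way
independent = math.sqrt(accuracy ** 2 + resolution ** 2)  # root-sum-square

print(f"worst case: +/-{worst_case:.1f}F, quadrature: +/-{independent:.2f}F")
```

Either way, the combined margin dwarfs hundredths-of-a-degree claims.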
===============================================================
UPDATE: This comment below from Willis Eschenbach, spurred by Steven Mosher, is insightful, so I’ve decided to add it to the main body – Anthony
===============================================================
Willis Eschenbach says:
As Steve Mosher has pointed out, if the errors are random normal, or if they are “offset” errors (e.g. the whole record is warm by 1°), increasing the number of observations helps reduce the size of the error. All that matters is whatever causes a “bias”, a trend in the measurements. There are some caveats, however.
First, instrument replacement can certainly introduce a trend, as can site relocation.
Second, some changes have hidden bias. The short maximum length of the wiring connecting the electronic sensors introduced in the late 20th century moved a host of Stevenson Screens much closer to inhabited structures. As Anthony’s study showed, this has had an effect on trends that I think is still not properly accounted for, and certainly wasn’t expected at the time.
Third, in lovely recursiveness, there is a limit on the law of large numbers as it applies to measurements. A hundred thousand people measuring the width of a hair by eye, armed only with a ruler measured in mm, won’t do much better than a few dozen people doing the same thing. So you need to be a little careful about saying problems will be fixed by large amounts of data.
Fourth, if the errors are not random normal, your assumption that everything averages out may (I emphasize may) be in trouble. And unfortunately, in the real world, things are rarely that nice. If you send 50 guys out to do a job, there will be errors. But these errors will NOT tend to cluster around zero. They will tend to cluster around the easiest or most probable mistakes, and thus the errors will not be symmetrical.
Fifth, the law of large numbers (as I understand it) refers to either a large number of measurements made of an unchanging variable (say hair width or the throw of dice) at any time, or it refers to a large number of measurements of a changing variable (say vehicle speed) at the same time. However, when you start applying it to a large number of measurements of different variables (local temperatures), at different times, at different locations, you are stretching the limits …
Sixth, the method usually used for ascribing uncertainty to a linear trend does not include any adjustment for known uncertainties in the data points themselves. I see this as a very large problem affecting all calculation of trends. All that is ever given is the statistical error in the trend, not the real error, which perforce must be larger.
Seventh, there are hidden biases. I have read (but haven’t been able to verify) that under Soviet rule, cities in Siberia received government funds and fuel based on how cold it was. Makes sense, when it’s cold you have to heat more, takes money and fuel. But of course, everyone knew that, so subtracting a few degrees from the winter temperatures became standard practice …
My own bozo cowboy rule of thumb? I hold that in the real world, you can gain maybe an order of magnitude by repeat measurements, but not much beyond that, absent special circumstances. This is because despite global efforts to kill him, Murphy still lives, and so no matter how much we’d like it to work out perfectly, errors won’t be normal, and biases won’t cancel, and crucial data will be missing, and a thermometer will be broken and the new one reads higher, and …
Finally, I would back Steven Mosher to the hilt when he tells people to generate some pseudo-data, add some random numbers, and see what comes out. I find that actually giving things a try is often far better than profound and erudite discussion, no matter how learned.
w.
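Willis’s third point, the limit of the law of large numbers against a coarse scale, can be demonstrated in a few lines (the 0.07 mm hair width and 0.02 mm eye-judgement noise are made-up illustration values):

```python
import random

random.seed(1)
true_width = 0.07  # mm, hypothetical hair width
graduation = 1.0   # mm: observers can only report whole millimetres

def observe():
    # eye-judgement noise far smaller than the graduation, then forced rounding
    return graduation * round((true_width + random.gauss(0, 0.02)) / graduation)

readings = [observe() for _ in range(100_000)]
mean = sum(readings) / len(readings)
print(mean)  # every reading rounds to 0 mm, so the average stays 0, not 0.07
```

Because the noise never pushes a reading past the half-millimetre mark, a hundred thousand observers do no better than one: averaging only beats quantisation when the scatter spans the graduations.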
It is even worse than that. Speaking of thermometers circa 1953, the cross-section of the mercury tube was not perfectly constant. With a tiny bit of “pinch” the readings above it would run high, and a bit of “wow” would make readings above the “wow” run lower than real.
As for it all “averaging out” with many readings, my chem prof demonstrated that it was as likely that the average would be on the low side or the high side.
People should not treat rounded numbers as if they were integers, because they are no such thing: rounded numbers represent ranges, not distinct numbers. Write the readings in a center column. Add the high-end margin to each reading and write the result in a column to the left; subtract the low-end margin from each reading and write that in a column to the right. Then average the left column and average the right column. The two averages bracket the range of the readings, the high-temperature possibility on the left and the low-temperature possibility on the right, and the actual temperature will fall somewhere in between.
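The column bookkeeping described above is easy to mechanise; a sketch (the sample readings and the +/-1 degree margin are illustrative):

```python
def range_of_averages(readings, margin):
    """Average the high ends and the low ends of each rounded reading's
    range; the true mean lies somewhere between the two results."""
    n = len(readings)
    high_avg = sum(r + margin for r in readings) / n  # left-hand column
    low_avg = sum(r - margin for r in readings) / n   # right-hand column
    return low_avg, high_avg

lo, hi = range_of_averages([60, 62, 64, 62], margin=1.0)
print(lo, hi)  # 61.0 63.0: the actual average is somewhere in between
```

Note the bracket never narrows with more readings; averaging rounded values inherits the full rounding range.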
For sea-surface temperatures, an ordinary seaman, under the direction of a low-ranking ship’s officer, would toss out a bucket on a rope, allow it to sink to an unknown depth (the ship was moving), pull the bucket up, put a thermometer in (the time of immersion varied considerably), and then read the thermometer. It is highly unlikely that the ordinary seaman was ever trained to properly read a thermometer, and it is a toss-up whether the thermometer stayed immersed long enough to stabilize to the temperature of the water in the bucket. Most probably it depended on how busy the ship’s crew was. I can assure all that getting the temperature right was no priority, and very much a bother.
To actually attempt to read a mercury thermometer to an accuracy of 0.1 degree is foolhardy, and to even think that averaging temperatures with a margin of error of a degree to an accuracy of 0.01 degree will result in any reality is beyond belief stupid. These folks should take grade-school arithmetic over again.
More than that, in the older days a ship’s course through a sea lane could vary by up to at least 50 miles on either side of the intended track.
John Andrews says:
January 22, 2011 at 11:09 am
In all the comments to this point, only one mentions relative humidity measurement, and one other mentions relative humidity at all. It seems to me that if we are to measure temperature for the purpose of estimating global warming, we should be looking for the temperature of dry air at sea level, or the heat contained in one cubic meter of dry air at 1000 mbar, with appropriate 1-sigma error bars for each calculation, of course.
Thanks for noticing – everyone else is away arguing about the incorrect metric.
One would almost think that this was a deliberate ploy to ensure everyone is involved in more and more detailed debate about the incorrect metric.
Always remember though that thermometers correlate better to temperature than trees do 🙂
Dave Springer says:
January 22, 2011 at 7:24 am
Steven Mosher says:
January 22, 2011 at 3:23 am
Result? The error structure of individual measurements doesn’t impact your estimation of the long-term trends.
NOW, if many thermometers all had BIASES (not uncertainty), and if those biases were skewed hot or cold, and if those biases changed over time, then your trend estimation would be impacted.
Result? no difference.
Absolutely right, Steve.
Skeptics are no better than CAGW alarmists in their willingness to believe anything which supports their own beliefs or disputes the beliefs of the other side. It’s sad. Objectivity is a rare and precious commodity.
———————————–
Dave,
This observation and other similar ones are made with tedious frequency. Please be assured that I believe that you think you are much more objective than everyone you disagree with.
—————————
Alfred Burdett says:
January 22, 2011 at 9:00 am
Has anyone investigated the possible impact of observer preconception bias?
In particular, does positive bias in temperature readings rise and fall with belief in AGW?
Would this not be a worthy topic for investigation.
—————————-
I think it’s a very significant factor that is probably impossible to quantify.
As a young man in Canada’s frozen North in the seventies, I was one of many clueless adventurers working in the mines, mills and such of the Yukon. There was no AGW talk then but there was a fierce desire in our hearts to see ourselves as latterday Jack Londons, scorning the bitter cold as much as we disdained those softies in the South.
Everybody exaggerated the temperatures. It was always said to be colder than it really was. In the winter, at least. In the summer we just bragged about the bugs and how many days we’d paddled on bannock leavened with wood ash.
To this day, forty below is the legendary temperature that people put in books and songs. Never mind that they’ve never come close to it. Nowadays, we just throw in the “wind chill factor” and everybody’s impressed.
So, imagine you’re 21, from Vancouver, you have a job at the Dawson City airport and you’re looking at a thermometer that reads -39. What do you write down?
Steve in SC
thank you Sir, your post shows a brilliant light in a very dark hole.
George
Do things like meniscus really affect trends? Somehow I think not since the meniscus is the same regardless of who is reading the instrument or what the current temperature is.
Since average human height has been increasing since the late 1800s then one would expect an ever increasing temperature from the parallax error.
Or are one of my assumptions in error here?
About a year ago I asked on one of these threads how often the calibration of these NOAA thermometers was checked. Now I know: Never.
Man, that would never cut it in the nuclear power industry. In a nuke plant, every single one of the ~10,000 instruments in the plant was required to have its calibration checked periodically, anywhere from monthly to every five years.
The back-end issue that seems to keep getting glossed over as people argue and counter-argue is quite simple.
Ben D.
“…although I question the larger possible error and I would hazzard to guess that 1.3C is the actual limit of the error since we would assume that the observer bias…”
Dave Springer
“…Accuracy of thermometers matters hardly at all because the acquired data in absolute degrees is used to generate data which is change over time. If a thermometer or observer is off by 10 whole degrees it won’t matter so long as the error is consistently 10 degrees day after day – the change over time will still be accurate.”
There is a lot of assuming going on.
The point is, that NO ONE seems to have taken the error rates or bias into account.
Well, at least no one who actually cares that businesses are being driven into extinction because of this flawed theology.
Leave it to mankind to make Extinction Level Events a practical business model.
Although only partially related to this story, I’ve written a short post on my blog about the nature of a “proxy”. In particular, how is the rate of growth of a tree fundamentally different from a spreadsheet entry from an RTD? They are both recordings of approximations of a temperature over a period of time. For an RTD, the time can be very short. In addition, an RTD is much more precise. Accurate too, if it is calibrated properly. Nonetheless, I would argue the term “proxy” is a misdirection. Tree rings are thermometers. Or at least they are being used as thermometers in Mann’s and others’ various papers.
[snip – taunt]
Talking of folks in the Yukon back in the 70’s, Oliver Ramsay says with reference to the question I raised about observer preconception bias (OPB):
“Everybody exaggerated the temperatures. It was always said to be colder than it really was. … So, imagine you’re 21, from Vancouver, you have a job at the Dawson City airport and you’re looking at a thermometer that reads -39. What do you write down?”
This explains well the kind of underlying psychology. However, to be clear, I was not suggesting fraud, but simply the possibility that unconscious factors, including preconceptions about climate change, will skew observational data when it is based on a tricky assessment, e.g., when the meniscus sits about halfway between two marks on the thermometer.
And that raises another question. If we are contemplating the expenditure of $trillions to avert AGW, why not spend a few billion on a new global land-surface meteorological recording network of frequently calibrated, properly sited, high-precision instruments that transmit data in real time via satellite to a computerized analytical system? At least then, if 2011 is reported to be the hottest, or the coldest, or the wettest year on record, we could be fairly sure about it.
Ian W says:
January 22, 2011 at 8:58 am
“…Atmospheric Temperature does NOT EQUAL Atmospheric Heat Content.
The entire claim of ‘greenhouse gases’ (sic) causing [global warming|Climate Change|climate catastrophe|climate disruption|tipping points] is based on the hypothesis that these gases trap heat in the atmosphere. To show this is the case, the climatologists measure atmospheric temperature
BUT
Atmospheric Temperature does NOT EQUAL Atmospheric Heat Content.
This reality remains the same however accurately you quantify the incorrect metric as it ignores the huge effect on atmospheric enthalpy from water vapor.
The heat content of the Earth is far more accurately measured by measuring the temperature of the oceans as ocean temperature is closely equivalent to ocean heat content and the top 2 or 3 meters of ocean holds as much heat as the entire atmosphere….”
——————————————
Ian is right. I have long argued that we should ditch the land based temperature record; it was never designed for the purpose of assessing global warming and it is now too corrupted to be relied upon and, in any event, it is essentially of minor importance.
It is the heat content of the oceans that is important. Given the entire volume of the oceans, they probably account for about 99% of the total heat content of the earth (ignoring the core/mantle). If the oceans are not warming, global warming is not happening. If the oceans are heating, this is most probably due to changes in cloud albedo, since only solar radiation (or geothermal energy) can effectively heat the oceans.
The oceans are the key to this debate for another reason. Namely, one fundamental problem for AGW is whether back radiation from increased CO2 in the atmosphere can in practice warm the oceans. Due to the wavelength of back radiation, it cannot effectively heat the oceans. It is all fully absorbed within about the first 10 microns and any heat absorbed by this layer either boils off, or is thrown to the air as spray and/or, in any event, cannot transfer its heat downwards to reach the depths required for circulation admixture.
While Mosh is correct that random observation errors will tend to average out when computing long-run trends, problems arise when they are not random.
Besides the drift issues Mark discusses, changes in rounding rules come to mind: if the temperature is between the 60 mark and the 62 mark, are observers trained to read this as 60, as 62, or to the nearer one? If the rule (or custom) was different in the late 19th century than in the mid-20th century, it could make a big difference in the trend. And if it’s about halfway, should it be rounded up or down? My father, an engineer, taught me that good engineering practice is to round ties to the nearest even value, so that on average they will cancel out, though that wouldn’t work if the marks are every 2 dF.
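That round-ties-to-even rule is standard enough that Python’s built-in round() implements it for exact ties:

```python
# Round-half-to-even ("banker's rounding"): exact ties go to the even
# neighbour, so tie-breaking errors cancel on average instead of piling up
assert round(60.5) == 60  # tie rounds down to even
assert round(61.5) == 62  # tie rounds up to even
assert round(61.4) == 61  # non-ties are unaffected
```

(This only bites on exact ties; values like 15.5555… are not ties and round normally.)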
Mark, for his part, rounds 15.5555… to 15.55, even though .0055… is clearly greater than .005, so that he is rounding down rather than off. In this example .01 makes no difference, but the same rule would lead one to round 15.5555… to 15 rather than 16 when rounding to integers.
But when historical records have been converted from integer F to integer C, it wouldn’t surprise me if the software truncated down to the lower integer in one decade, but rounded to the nearer integer in another decade (with time-varying tie-breaker rules — Matlab, for example, rounds exact ties away from 0).
When converting from F to C, I would carry at least one decimal place, just so the conversion doesn’t introduce additional meaningful error. The last digit has to be meaningless to prevent loss of precision. But even there it could make a perceptible difference whether the one decimal place is the result of truncation (which is easy in Fortran, say) or rounding (which requires a little more effort).
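The truncation-versus-rounding difference described above can be a whole degree on some values, easy to see in a few lines (int() truncates toward zero, like Fortran’s INT):

```python
def f_to_c(f):
    return (f - 32) * 5 / 9

for f in (60, 61, 62):
    c = f_to_c(f)
    # int() truncates toward zero; round() goes to the nearest integer
    print(f"{f}F -> truncated {int(c)}C, rounded {round(c)}C")
```

For 60F the two methods differ by a full degree (15C versus 16C), exactly the kind of silent decade-to-decade software inconsistency worried about above.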
I should have ended my last post with a final paragraph:
This explains why Trenberth cannot find his missing heat in the oceans. Due to the wavelength of back radiation, it is incapable of getting there!
Steve in SC:
You brought up a good point, but with regard to thermocouples and RTDs you left out a few issues.
Thermocouples and RTDs have an internationally recognized temperature/voltage curve that is based on statistical samples of many devices of each sensor type. All thermocouples and RTDs are required to be within a predefined error of those curves. These errors can be as much as 1.5 degrees C. Some devices are tested and have tighter tolerances.
To read a thermocouple or RTD, you need a temperature indicator. These devices have additional errors that get added to the reading.
I was responsible for the design of a temperature meter some years back. The unit was designed with a ±0.1% of reading ±1 count accuracy relative to the NIST thermocouple and RTD curves.
What this means is a system using this meter reading 100°C would have an accuracy of ±2.6°C (±1.5° for the thermocouple, ±0.1° for the % of reading and ±1° for the count uncertainty) while the same unit configured to read 100.0°C would have an accuracy of ±1.7°C (±1.5° for the thermocouple, ±0.1° for the % of reading and ±0.1° for the count uncertainty) .
The only time you could say you actually knew the temperature was 100°C was when you had a precision temperature standard to compare the system to. Any other time, you had system uncertainty as part of the reading.
When using these devices, you must pay close attention to the accuracy specifications of the device and the accuracy specifications for the indicator. They all add up.
Based on that experience, I can argue that anyone who claims they know the temperature of anything (except a NIST or BSI traceable standard) to better than ±1° or ±2°C is fooling themselves.
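The way those contributions stack can be sketched in a few lines (tolerances taken from the figures above; worst-case straight addition assumed):

```python
def system_accuracy(reading_c, sensor_tol=1.5, pct_of_reading=0.001, one_count=1.0):
    """Worst-case error budget: sensor curve tolerance + the meter's
    0.1% of reading + one count of display resolution (all in deg C)."""
    return sensor_tol + pct_of_reading * reading_c + one_count

print(system_accuracy(100.0))                 # "100" C display: about +/-2.6 C
print(system_accuracy(100.0, one_count=0.1))  # "100.0" C display: about +/-1.7 C
```

Note how the extra displayed decimal shrinks only the count term; the sensor tolerance still dominates the budget.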
As soon as I see the “I am a technical expert and I know that all scientists are idiots” theme, I begin to suspect how this is going to turn out.
And after a bit more reading we come up with
——-
If the scale is marked in 1c steps (which is very common), then you probably cannot extrapolate between the scale markers.
——-
The correct word is interpolate not extrapolate.
The claim “you probably cannot interpolate between the scale markers” is false. Anyone trained to read scales properly can do this. Although I cannot answer for the training and conscientiousness of the people who make meteorology measurements.
REPLY: According to the info you provide WUWT with comments, you’re a software developer who does typing tutor programs and some other educational K-12 programs for the Macintosh; what makes you an expert on thermometers/meteorological measurement and everybody else here not? Provide a citation. – Anthony
Accurate digital thermometers and data systems are readily available, for a price. They are used in medical and physiological research. See —
http://www.physitemp.com/
NIST traceable accuracy of 0.1° C is routine.
Wayne Delbeke says:
January 22, 2011 at 10:42 am
See National Institute of Standards and Technology.
http://www.temperatures.com/Papers/nist%20papers/2009Cross_LiG_Validation.pdf
There aren’t any experiments or references to experiments in this document that have any bearing on glass properties of thermometers. It is merely asserted. So, I challenge the NIST as well.
Proof is going to be hard to come by. It requires a thermometer to be made, to be measured accurately w.r.t. volume of bulb, diameter of capillary, calibration, etc. prior to service, to undergo service for ~20-50 years and then to be measured again. Simulated aging is not acceptable. I say that this whole article and the NIST document backing it up are hearsay and could very easily be wrong. And please, let’s not have an argument along the lines of “how dare you argue with a large government agency full of experts”. We’ve seen how well that holds up.
Oliver Ramsay says:
January 22, 2011 at 11:51 am
Tedious frequency huh? I don’t think so. Each side accuses the other of it. That’s a given. But you’ll have to show me where someone in one camp points out that both camps do it.
p says:
January 22, 2011 at 12:05 pm (Edit)
Do things like meniscus really affect trends? Somehow I think not since the meniscus is the same regardless of who is reading the instrument or what the current temperature is.
#####
For the meniscus to affect the trend, you would have to have the following. Let us say that the bottom of the curve was read 100% of the time for the first few years of a 100-year record: every day, every month, every year. Then suppose that the top of the curve was read for the last few records, and suppose the difference between these was 1 degree.
You’d see a false trend. But if the observer always records the top, or always records the bottom, you see zero trend; you just have a bias in the absolute temp. If the observer switches back and forth randomly, you’ll also see no trend bias.
You can write little simulations of this if you like and “model” observer behavior.
Or you can note that there is no reason to assume anything other than a normal distribution of reading practice and add the proper quantity to your error budget.
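A minimal version of the simulation suggested above (the 1-degree meniscus offset and the flat 20-degree “true” temperature are illustrative values):

```python
import random

def simulate_record(p_top_by_year, years=100, meniscus=1.0, true_temp=20.0):
    """Flat true temperature; each year the observer reads the meniscus
    top with probability p_top_by_year(year), adding a fixed offset."""
    random.seed(0)
    return [true_temp + (meniscus if random.random() < p_top_by_year(y) else 0.0)
            for y in range(years)]

always_top = simulate_record(lambda y: 1.0)             # offset only, zero trend
switch_mid = simulate_record(lambda y: float(y >= 50))  # false 1-degree step
```

always_top is a constant 21.0, so any trend fit through it is flat; switch_mid jumps from 20.0 to 21.0 at year 50, a spurious “warming” created purely by a change in observer practice.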
[snip – Wow, such hypocrisy – see your comment below – simply saying something is “bogus” doesn’t mean it to be so, since you are in the mode of demanding citations, provide one – Anthony]
Laurence M. Sheehan, PE says:
January 22, 2011 at 11:40 am
“As for it all “averaging out” with many readings, my chem prof demonstrated that it was as likely that the average would be on the low side or the high side.”
Did your chem professor teach in a school similar to the school where Michael Mann teaches? Just because someone is a professor doesn’t mean crap.
Once again – thousands of instruments, changing numbers not absolute numbers, the imprecision averages out and accuracy doesn’t really matter for finding trends.
Temperature cycles in the glass bulb of a thermometer harden the glass and shrink over time, a 10 yr old -20 to 50c thermometer will give a false high reading of around 0.7c
—–
I don’t believe you. Provide a citation.
Hoser says:
January 22, 2011 at 11:11 am
Thermometer data is corroborated by a lot of different proxies. Ya think whether the meniscus is read at the top or bottom affects arctic ice melt, tree rings, glacier retreat, rising sea levels, ice core gas/isotope ratios, and things of that nature? Ya think the age of the glass affects the temperature recorded by weather satellites? Would a thermometer used one time and discarded in a radiosonde be affected by any of that?
The instrumental temperature record is not a point of weakness for the CAGW hypothesis. The CAGW hypothesis has more holes in it than swiss cheese, but the surface thermometer record just isn’t one of them. The adjustments to the raw data and the lack of adequate accounting for urban heat islands might be weaknesses, but those aren’t problems with the instruments or the manner in which the instruments are read.