The Metrology of Thermometers

For those who might not notice, this is about metrology, not meteorology, though meteorology uses the final product. Metrology is the science of measurement.

Since we recently had this paper from Pat Frank dealing with the inherent uncertainty of temperature measurement, which establishes a new minimum uncertainty of ±0.46°C for the instrumental surface temperature record, I thought it valuable to review the uncertainty associated with the act of temperature measurement itself.

As many of you know, the Stevenson Screen, aka Cotton Region Shelter (CRS), such as the one below, houses Tmax and Tmin recording thermometers (mercury for the maximum, alcohol for the minimum).

Hanksville, UT USHCN climate monitoring station with Stevenson Screen - sited over a gravestone. Photo by surfacestations.org volunteer Juan Slayton

They look like this inside the screen:

NOAA standard issue max-min recording thermometers, USHCN station in Orland, CA - Photo: A. Watts

Reading these thermometers would seem to be a simple task. However, that’s not quite the case. Adding to the statistical uncertainty derived by Pat Frank, as we see below in this guest re-post, measurement uncertainty in both the long and the short term is also an issue. The following appeared on the blog “Mark’s View”, and I am reprinting it here in full with permission from the author. There are some enlightening things to learn about the simple act of reading a liquid-in-glass (LIG) thermometer that I didn’t know, as well as some long-term issues (like the hardening of the glass) with values about as large as the climate change signal for the last 100 years, ~0.7°C. – Anthony

==========================================================

Metrology – A guest re-post by Mark of Mark’s View

This post is actually about the poor quality and processing of historical climatic temperature records, rather than metrology per se.

My main points are that in climatology many important factors that are accounted for in other areas of science and engineering are completely ignored by many scientists:

  1. Human errors in the accuracy and resolution of historical data are ignored
  2. Mechanical thermometer resolution is ignored
  3. Electronic gauge calibration is ignored
  4. Mechanical and electronic temperature gauge accuracy is ignored
  5. Hysteresis in modern data acquisition is ignored
  6. Conversion from degrees F to degrees C introduces false resolution into the data

Metrology is the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology. Believe it or not, the metrology of temperature measurement is complex.

It is actually quite difficult to measure things accurately, yet most people just assume that information they are given is “spot on”. A significant number of scientists and mathematicians also do not seem to realise that the data they are working with is often not very accurate. Over the years, as part of my job, I have read dozens of papers based on pressure and temperature records where no reference is made to the instruments used to acquire the data, or to their calibration history. The result is that many scientists frequently reach incorrect conclusions about their experiments and data because they do not take into account the accuracy and resolution of their data. (It seems this is especially true in the area of climatology.)

Do you have a thermometer stuck to your kitchen window so you can see how warm it is outside?

Let’s say you glance at this thermometer and it indicates about 31 degrees Centigrade. If it is a mercury or alcohol thermometer you may have to squint to read the scale. If the scale is marked in 1°C steps (which is very common), then you probably cannot extrapolate between the scale markers.

This means that this particular thermometer’s resolution is 1°C, which is normally stated as plus or minus half a degree (±0.5°C).
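To make that concrete, here is a minimal Python sketch of the quantization a 1°C scale imposes; the read_lig_thermometer helper and the “true” temperature value are illustrative assumptions, not a model of any actual instrument:

```python
# Sketch: a liquid-in-glass thermometer with 1 degree C scale markings.
# The observer can only record the nearest scale mark, so every reading
# carries an unavoidable quantization error of up to +/-0.5 C.

def read_lig_thermometer(true_temp_c: float, resolution_c: float = 1.0) -> float:
    """Return what an observer would record: the nearest scale mark."""
    return round(true_temp_c / resolution_c) * resolution_c

true_temp = 30.7  # assumed true air temperature, for illustration
reading = read_lig_thermometer(true_temp)
print(f"recorded: {reading} C, error: {reading - true_temp:+.1f} C")
# recorded: 31.0 C, error: +0.3 C
```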

This example of resolution assumes you are observing the temperature under perfect conditions and have been properly trained to read a thermometer. In reality you might glance at the thermometer, or you might have to use a flashlight to look at it, or it may be covered in a dusting of snow or rain. Mercury forms a pronounced meniscus in a thermometer that can exceed 1°C, and many observers incorrectly read the temperature at the base of the meniscus rather than its peak (this picture shows an alcohol meniscus; a mercury meniscus bulges upward rather than down):

Another major common error in reading a thermometer is parallax error.

Image courtesy of Surface Meteorological Instruments and Measurement Practices by G.P. Srivastava (with a mercury meniscus!). This is where refraction of light through the glass of the thermometer exaggerates any error caused by the eye not being level with the surface of the fluid in the thermometer.


If you are using data from hundreds of thermometers scattered over a wide area, with readings recorded by hand by dozens of different people, the assumed observational resolution should be coarsened further. In the oil industry, for example, it is common to accept an error margin of 2-4% when using manually acquired data.

As far as I am aware, no attempt has ever been made to account for observer error in the historical raw temperature data from weather stations.

We should also consider the accuracy of the typical mercury and alcohol thermometers that have been in use for the last 120 years. Glass thermometers are calibrated by immersing them in an ice/water bath at 0°C and a steam bath at 100°C. The scale is then divided equally into 100 divisions between zero and 100. However, a glass thermometer at 100°C is longer than the same thermometer at 0°C. This means the scale gives a false high reading at low temperatures (between 0 and 25°C) and a false low reading at high temperatures (between 70 and 100°C). The same process is followed for weather thermometers with a range of -20 to +50°C.
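The effect is easy to sketch. In the toy model below, the cubic distortion term in column_height is an invented stand-in for glass/bulb expansion, chosen only so the sign of the error matches the description above; real corrections come from the instrument’s calibration chart:

```python
# Sketch: a two-point (0 C / 100 C) calibration with equally spaced divisions.
# The distortion term is an assumption for illustration only.

def column_height(t: float) -> float:
    """Assumed mercury column position (arbitrary units) vs true temperature.
    The cubic term stands in for glass/bulb expansion effects."""
    return t + 1e-5 * t * (t - 50.0) * (t - 100.0)

def scale_reading(t: float) -> float:
    """Reading off a scale dividing the 0-100 span into 100 equal steps."""
    h0, h100 = column_height(0.0), column_height(100.0)
    return 100.0 * (column_height(t) - h0) / (h100 - h0)

for t in (10.0, 50.0, 90.0):
    print(f"true {t:5.1f} C -> scale reads {scale_reading(t):6.2f} C")
# true  10.0 C -> scale reads  10.36 C   (false high at low temperatures)
# true  50.0 C -> scale reads  50.00 C
# true  90.0 C -> scale reads  89.64 C   (false low at high temperatures)
```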

Twenty-five years ago, very accurate mercury thermometers used in labs (0.01°C resolution) came with a calibration chart/graph to convert the temperature observed on the thermometer scale to the actual temperature.

Temperature cycles harden the glass of a thermometer bulb and shrink it over time; a 10-year-old -20 to +50°C thermometer will give a false high reading of around 0.7°C.

Over time, repeated high temperature cycles cause alcohol thermometers to evaporate vapour into the vacuum at the top of the thermometer, creating false low temperature readings of up to 5°C. (That’s 5.0°C, not 0.5; it’s not a typo…)

Electronic temperature sensors have been used more and more in the last 20 years for measuring environmental temperature. These also have their own resolution and accuracy problems. Electronic sensors suffer from drift and hysteresis and must be calibrated annually to stay accurate, yet most weather station temperature sensors are NEVER calibrated after they have been installed. Drift, a steady rise or fall in the recorded temperature even when the real temperature is static, is a fundamental characteristic of all electronic devices.

Drift is where a recording error gradually gets larger and larger over time. This is a quantum mechanics effect in the metal parts of the temperature sensor that cannot be compensated for. Typical drift of a -100°C to +100°C electronic thermometer is about 1°C per year! The sensor must be recalibrated annually to fix this error.

Hysteresis is a common problem as well. This is where increasing temperature has a different mechanical effect on the thermometer than decreasing temperature does: for example, if the ambient temperature increases by 1.05°C, the thermometer reads an increase of 1°C, but when the ambient temperature drops by 1.05°C, the same thermometer records a drop of 1.1°C. (This is a VERY common problem in metrology.)
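A minimal sketch of that asymmetry, with the up/down responses taken from the figures in the example (the HystereticSensor class itself is illustrative):

```python
# Sketch of hysteresis: the sensor tracks rising temperature differently
# from falling temperature (gains taken from the example above:
# +1.05 C true reads as +1.00 C, -1.05 C true reads as -1.10 C).

class HystereticSensor:
    def __init__(self, start_c: float):
        self.reading = start_c
        self.last_true = start_c

    def update(self, true_c: float) -> float:
        delta = true_c - self.last_true
        gain = 1.00 / 1.05 if delta >= 0 else 1.10 / 1.05
        self.reading += gain * delta
        self.last_true = true_c
        return self.reading

s = HystereticSensor(20.0)
print(f"{s.update(21.05):.2f}")   # up 1.05 C   -> 21.00
print(f"{s.update(20.00):.2f}")   # down 1.05 C -> 19.90
```

Note that one full up-and-down cycle leaves this sensor reading 0.1°C low, so repeated cycles accumulate a bias instead of averaging away.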

Here is the behaviour of a typical food temperature sensor compared to a calibrated thermometer, without even considering sensor drift (see the Thermometer Calibration chart): depending on the measured temperature in this high-accuracy gauge, the offset runs from -0.8 to +1°C.

But on top of these issues, the people who make these thermometers and weather stations state clearly the accuracy of their instruments, yet scientists ignore them! A -20°C to +50°C mercury thermometer’s packaging will state an accuracy of ±0.75°C, for example, yet frequently this information is not incorporated into statistical calculations used in climatology.

Finally we get to the infamous conversion of degrees Fahrenheit to degrees Centigrade. Until the 1960s almost all global temperatures were measured in Fahrenheit; nowadays all the proper scientists use Centigrade, so all old data is routinely converted: take the original temperature, subtract 32, multiply by 5, and divide by 9.

C = ((F - 32) x 5) / 9

Example: the original reading from a 1950 data file is 60F. This figure was eyeballed by the local weatherman and written into his tally book. Fifty years later a scientist takes this figure and converts it to Centigrade:

60 - 32 = 28

28 x 5 = 140

140 / 9 = 15.55555556

This is usually (incorrectly) rounded to two decimal places: 15.55°C, without any explanation as to why this level of resolution has been selected.

The correct mathematical method of handling this issue of resolution is to look at the original resolution of the recorded data. Typically, old Fahrenheit data was recorded in increments of 2 degrees F, e.g. 60, 62, 64, 66, 68, 70. Very rarely on old data sheets do you see 61, 63, etc. (although 65 is slightly more common).

If the original resolution was 2 degrees F, the resolution used for the same data converted to Centigrade should be 1.1°C.

Therefore, mathematically:

60F=16C

61F=17C

62F=17C

etc.
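Here is a sketch of that conversion done with the original resolution in mind. Note that plain nearest-degree rounding gives 61F → 16C rather than the 17C in the table above, a discrepancy picked up in the comments below:

```python
# Sketch: convert old Fahrenheit readings to Celsius without inventing
# resolution. Data recorded in 2 F steps has a resolution of about 1.1 C,
# so reporting whole degrees C (not two decimal places) is appropriate.

def f_to_c_whole_degrees(temp_f: float) -> int:
    """Convert and round to the nearest whole degree C."""
    return round((temp_f - 32.0) * 5.0 / 9.0)

for f in (60, 61, 62):
    exact = (f - 32.0) * 5.0 / 9.0
    print(f"{f}F = {exact:.8f}C -> {f_to_c_whole_degrees(f)}C")
# 60F = 15.55555556C -> 16C
# 61F = 16.11111111C -> 16C
# 62F = 16.66666667C -> 17C
```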

In conclusion, when interpreting historical environmental temperature records one must account for errors of accuracy built into the thermometer, errors of resolution built into the instrument, and errors of observation and recording of the temperature.

In a high-quality glass environmental thermometer manufactured in 1960, the accuracy would be ±1.4°F (2% of range).

The resolution of an astute and dedicated observer would be around +/-1F.

Therefore the total error margin of all observed weather station temperatures would be a minimum of ±2.5°F, or ±1.4°C…
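For what it’s worth, the ±2.5°F figure appears to be the linear (worst-case) sum of ±1.4°F and ±1°F, rounded up. Below is a sketch comparing that convention with the root-sum-square combination used when the two error sources can be assumed independent:

```python
# Sketch: combining the accuracy margin (+/-1.4 F) and the observer
# resolution margin (+/-1 F) quoted above. Linear addition is the worst
# case; root-sum-square (RSS) assumes the two sources are independent.

import math

accuracy_f = 1.4    # instrument accuracy, 2% of range (stated above)
observer_f = 1.0    # astute-observer reading resolution (stated above)

worst_case_f = accuracy_f + observer_f        # 2.4 F
rss_f = math.hypot(accuracy_f, observer_f)    # ~1.72 F

for label, margin in (("worst case", worst_case_f), ("RSS", rss_f)):
    print(f"{label}: +/-{margin:.2f} F = +/-{margin * 5.0 / 9.0:.2f} C")
# worst case: +/-2.40 F = +/-1.33 C
# RSS: +/-1.72 F = +/-0.96 C
```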

===============================================================

UPDATE: This comment below from Willis Eschenbach, spurred by Steven Mosher, is insightful, so I’ve decided to add it to the main body – Anthony

===============================================================

Willis Eschenbach says:

As Steve Mosher has pointed out, if the errors are random normal, or if they are “offset” errors (e.g. the whole record is warm by 1°), increasing the number of observations helps reduce the size of the error. All that matters are things that cause a “bias”, a trend in the measurements. There are some caveats, however.

First, instrument replacement can certainly introduce a trend, as can site relocation.

Second, some changes have hidden bias. The short maximum length of the wiring connecting the electronic sensors introduced in the late 20th century moved a host of Stevenson Screens much closer to inhabited structures. As Anthony’s study showed, this has had an effect on trends that I think is still not properly accounted for, and certainly wasn’t expected at the time.

Third, in lovely recursiveness, there is a limit on the law of large numbers as it applies to measurements. A hundred thousand people measuring the width of a hair by eye, armed only with a ruler marked in mm, won’t do much better than a few dozen people doing the same thing. So you need to be a little careful about saying problems will be fixed by large amounts of data.

Fourth, if the errors are not random normal, your assumption that everything averages out may (I emphasize may) be in trouble. And unfortunately, in the real world, things are rarely that nice. If you send 50 guys out to do a job, there will be errors. But these errors will NOT tend to cluster around zero. They will tend to cluster around the easiest or most probable mistakes, and thus the errors will not be symmetrical.

Fifth, the law of large numbers (as I understand it) refers to either a large number of measurements made of an unchanging variable (say hair width or the throw of dice) at any time, or it refers to a large number of measurements of a changing variable (say vehicle speed) at the same time. However, when you start applying it to a large number of measurements of different variables (local temperatures), at different times, at different locations, you are stretching the limits …

Sixth, the method usually used for ascribing uncertainty to a linear trend does not include any adjustment for known uncertainties in the data points themselves. I see this as a very large problem affecting all calculation of trends. All that is ever given is the statistical error in the trend, not the real error, which perforce must be larger.

Seventh, there are hidden biases. I have read (but haven’t been able to verify) that under Soviet rule, cities in Siberia received government funds and fuel based on how cold it was. Makes sense, when it’s cold you have to heat more, takes money and fuel. But of course, everyone knew that, so subtracting a few degrees from the winter temperatures became standard practice …

My own bozo cowboy rule of thumb? I hold that in the real world, you can gain maybe an order of magnitude by repeat measurements, but not much beyond that, absent special circumstances. This is because despite global efforts to kill him, Murphy still lives, and so no matter how much we’d like it to work out perfectly, errors won’t be normal, and biases won’t cancel, and crucial data will be missing, and a thermometer will be broken and the new one reads higher, and …

Finally, I would back Steven Mosher to the hilt when he tells people to generate some pseudo-data, add some random numbers, and see what comes out. I find that actually giving things a try is often far better than profound and erudite discussion, no matter how learned.

w.
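In that spirit, here is a minimal pseudo-data sketch of the experiment Mosher describes; the trend, noise, and drift magnitudes are illustrative assumptions. Large i.i.d. noise barely moves a fitted century-scale trend, while a slow uncorrected drift goes straight into it:

```python
# Pseudo-data experiment in the spirit of Mosher's suggestion.
# All numbers are illustrative assumptions.

import random

random.seed(42)
years = 100
true_trend = 0.007                    # deg C per year, ~0.7 C per century
true = [15.0 + true_trend * y for y in range(years)]

def ols_slope(series):
    """Ordinary least-squares slope in deg C per year."""
    n = len(series)
    xbar = (n - 1) / 2.0
    ybar = sum(series) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Case 1: large i.i.d. random errors -- the fitted trend survives
noisy = [t + random.gauss(0.0, 0.5) for t in true]
print(f"i.i.d. noise: {ols_slope(noisy):.4f} C/yr (true: {true_trend})")

# Case 2: slow instrument drift -- the error itself has a trend,
# and no amount of averaging over time removes it
drifty = [t + random.gauss(0.0, 0.5) + 0.005 * y for y, t in enumerate(true)]
print(f"with drift:   {ols_slope(drifty):.4f} C/yr (true: {true_trend})")
```

The first case should recover a slope near the true 0.007°C/yr (up to sampling noise), while the drifting case comes out near 0.012°C/yr, since the drift adds directly to the trend: exactly the kind of “bias” error flagged above.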

240 Comments
Dave Springer
January 22, 2011 1:59 pm

John Andrews says:
January 22, 2011 at 11:09 am
“Over all, I don’t see much change in the global climate during my lifetime (I’m 76).”
I’m 54 and I’ve noticed the winters are generally much milder than when I was a kid. When my mom was a kid the river in my hometown completely froze over every winter and they’d plow it for miles and ice skate down it. It hasn’t frozen over like that even once in my lifetime.
There is no doubt the climate has gotten warmer. The first days of spring weather when certain plants spring up and migrating birds return have been getting earlier in the year and the dates of the first and last frosts have been changing. Some people note these things. Evidently you aren’t one of those people. Not a farmer are ya?

LazyTeenager
January 22, 2011 2:00 pm

Over time, repeated high temperature cycles cause alcohol thermometers to evaporate vapour into the vacuum at the top of the thermometer, creating false low temperature readings of up to 5°C. (That’s 5.0°C, not 0.5; it’s not a typo…)
———-
I don’t believe this. The vacuum would be very temporary at the time of manufacture. After that the region above the liquid would be filled with saturated vapour till the end of time.
The amount of vapour would vary directly with the temperature. This effect should/would be incorporated into the scale since it is reproducible.

Dave
January 22, 2011 2:02 pm

Dave Springer and Mosh>
Can you explain further for me?

January 22, 2011 2:07 pm

Michael Moon.
The climate does not exist as a phenomenon that can be measured. I should link up the nice video from the Newton Institute workshop on uncertainty in climate science, because the guy does a really good job of explaining it.
http://sms.cam.ac.uk/media/1083858;jsessionid=71886089203AF0121AED826A772E901C?format=flv&quality=high&fetch_type=stream
Average heights do not exist. Average weight does not exist. We never observe averages.
They are mathematical constructs that serve a useful purpose.
The temperature in SF today is 60F. That’s observable. That’s the weather.
What’s the climate for SF on Jan 22? Well, collect the weather for the past, say, 30 years (assume stationarity over those years) and do a thing called averaging. This construct is called the climatology. The digits in this construct have nothing to do with the accuracy of the instrument recording the temp. Let’s say that construct is 54.8888771633784959.
Do the math and you have another construct telling you how much warmer it is today than “normal”. And in this context the word “normal” has nothing to do with being “normal”; that’s just shorthand for the computation of the “average”.
If you ask me how warm it was 15 years ago, I will estimate “54.8888771633784959”. That estimate will be the best estimate, given no other information than knowledge of the average. That estimate will minimize my error. It will also be wrong. But it will be the best estimate, and in a betting game if you bet something different you are more likely to lose the bet to me than win it.

Mark T
January 22, 2011 2:29 pm

The other thing that is instructive is to compare two thermometers that are within a few km of each other over a period of, say, 100 years. Look at the correlation.
98% plus.

If both thermometers were affected by the same sort of physical process that was causing a degradation of accuracy over time, you would expect them to have highly correlated data, too.

Or you can write a simulation of a sensor with very gross errors. Simulate daily data for 100 years. Assume small errors. Calculate the trend. Assume large errors. Calculate the trend.

If you’re drawing your “errors” using independent trials from the same distribution, of course this will work. That is a trivial application of the CLT that proves nothing other than the fact that the CLT works if you meet all the requirements.
In general, I don’t think anybody in here actually understands how the CLT or LLN work. A few came close. You do not need a normal distribution for the CLT to work. You need independent and identically distributed (i.i.d.) error distributions for errors to cancel with the sqrt(N). It is my hope that at some point everyone will figure out how much of a limit i.i.d. really is.
Generally speaking, independence is not really required (independence is calculated over all time, which is not possible), just orthogonality (uncorrelatedness), but the errors do need to be drawn from an identical distribution if you want the cancellation property to apply. That implies the same mean and variance, btw. The mean and variance need to exist, obviously, and they also should be stationary (unless they all vary identically over time), which is not as obvious but easy to figure out. That also implies that if the errors are a function of the thing you’re measuring, e.g., a percentage, then the CLT will not apply. Sorry. Get over it. The same applies to situations in which the error distributions are unknown, which clearly applies to temperature measurements.
Increased uncertainty in the data itself implies increased uncertainty in any calculations done with the data. If the i.i.d. requirement is not met, then you have no choice but to assume the errors do not cancel… anywhere. It sucks, I know, but them’s the breaks. Stay away from statistical endeavors if you cannot wrap your head around this very basic concept.
Mark
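A quick numerical check of that point; the error magnitudes below are assumptions for illustration. Averaging N stations shrinks i.i.d. errors roughly as 1/sqrt(N), but a bias shared by every instrument never averages out:

```python
# Sketch: i.i.d. station errors cancel as roughly 1/sqrt(N);
# a shared (perfectly correlated) bias does not cancel at all.
# Error magnitudes are illustrative assumptions.

import random
import statistics

random.seed(1)
TRUE_TEMP = 15.0

def mean_abs_error(n_stations: int, shared_bias: float, trials: int = 1000) -> float:
    errs = []
    for _ in range(trials):
        readings = [TRUE_TEMP + shared_bias + random.gauss(0.0, 0.5)
                    for _ in range(n_stations)]
        errs.append(statistics.fmean(readings) - TRUE_TEMP)
    return statistics.fmean(abs(e) for e in errs)

for n in (10, 100, 1000):
    iid = mean_abs_error(n, shared_bias=0.0)
    biased = mean_abs_error(n, shared_bias=0.3)
    print(f"N={n:4d}  i.i.d. only: {iid:.3f} C   with shared 0.3 C bias: {biased:.3f} C")
```

The first column falls as N grows; the second stays pinned near the 0.3°C bias no matter how many stations are averaged.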

BioBob
January 22, 2011 2:33 pm

“Therefore the total error margin of all observed weather station temperatures would be a minimum of ±2.5°F, or ±1.4°C”
=======================================
This seems to be a good first step. Can anyone provide an analysis of how this uncertainty should be translated into the error margin for larger composites of temperature measurements or global anomaly measures? Is the error margin the same no matter how we transform the individual stations’ data?
Thanks!

Mark T
January 22, 2011 2:35 pm

Dave Springer says:
January 22, 2011 at 1:41 pm

Once again – thousands of instruments, changing numbers not absolute numbers, the imprecision averages out and accuracy doesn’t really matter for finding trends.

Can you prove all the errors are i.i.d.?
I dare you to prove it. Until then, you are just plain wrong.
Mark

LazyTeenager
January 22, 2011 2:41 pm

Drift is where a recording error gradually gets larger and larger over time. This is a quantum mechanics effect in the metal parts of the temperature sensor that cannot be compensated for. Typical drift of a -100°C to +100°C electronic thermometer is about 1°C per year! The sensor must be recalibrated annually to fix this error.
———
Quantum — I am about to talk rubbish warning.
Misleading. Drift can mean different things and this explanation does not capture that.

Ike
January 22, 2011 2:46 pm

Here’s the bottom line: if you have a claimed increase of +.7 degree C and a margin of error of +-1.4, that doesn’t mean that “most likely” the temperature change was a positive .7 degree. What it does mean is that the temperature change was +2.1 degree or -.7 degree or anywhere in between and – here’s the punch line – we have no idea what the actual temperature was between those two values. Nothing more, nothing less. When the margin of error is larger than the claimed measurement, that conclusion is possible. And that is the stake in the heart of all the claims of global warming, no matter what term or terms is substituted for it.

Ike
January 22, 2011 2:47 pm

Apologies. The next-to-last sentence should read: “When the margin of error is larger than the claimed measurement, only that conclusion is possible.” The word “only” was omitted in the original post.

Dave Springer
January 22, 2011 2:49 pm

I know more than anyone should ever have to know about evolutionary psychology. Talk about a theory that explains everything and hence explains nothing. It’s almost like climate change science in that regard. I bet if we were to ask those EvP boys they could spin us a yarn about how there’s a psychological reason why people 100 years ago used to read a thermometer one way and how they read it differently now. That’s how evolution works… when an explanation is needed it never fails to produce one. Just don’t bring up pesky points like falsification or the scientific method because these sciences are so advanced they don’t need that stuff because it’s just never wrong anymore.

LazyTeenager
January 22, 2011 2:50 pm

Here is the behaviour of a typical food temperature sensor compared to a calibrated thermometer, without even considering sensor drift (see the Thermometer Calibration chart): depending on the measured temperature in this high-accuracy gauge, the offset runs from -0.8 to +1°C.
———–
This is not relevant. A food process thermometer is not a professional level meteorological thermometer.

Dave Springer
January 22, 2011 2:51 pm

Mark T
Prove it in a blog post? Hardly. I’ve been an engineer all my life. A very successful one. I know how these things work. If I didn’t I never would have been able to outperform my peers.

LazyTeenager
January 22, 2011 2:53 pm

yet frequently this information is not incorporated into statistical calculations used in climatology.
———–
Probably because the accuracy is not relevant to the determination of trends. As long as the thermometer is not changed.

Philip Shehan
January 22, 2011 2:56 pm

Firstly, I hate to nitpick, but since this whole section is about accuracy and precision, the author should not have talked about “extrapolating” between markers. One extrapolates beyond data points, but interpolates between data points.
The discussion of errors is correct as far as it goes, but in the context of discussing climatic changes these are mostly random errors in readings which cancel out over the long run. For example, for every reading that is in parallax error due to the observer eyeballing from above the parallel, there will be another from below the parallel.
Systematic errors occur in the same direction and are not cancelled. In this example it is claimed that 10 year old mercury thermometers give a 0.7 C high reading but old alcohol thermometers can read 5 C low.
As for conversion from F to C, quoting too many significant figures at the end of the process gives a misleading impression of the precision (or resolution) of the result, claiming we know the result more precisely than we actually do, but it does not affect the accuracy of the measurement, accuracy being how close the quoted figure is to the true figure.
Strings of zeros immediately before the decimal point are non-significant. All zeros after the decimal point are significant. Thus 1500 has a precision of two sig figs, generally taken as meaning the true figure is between 1450 and 1550. 1503 has 4 figs (1502.5 – 1503.5). 1503.0 has 5 figs (1502.95 – 1503.05). 1503.0026173 – well, you get the drift. The number of significant figures is a claim to the precision with which we know the result.
At the end of a calculation you are not justified in claiming more than the least number of figures in any of the numbers in the calculation.
The conversion example given actually shows another problem with significant figures.
The F to C conversion function on my HP calculator gives the following results (to four decimal places).
60 F = 15.5556 C = 16 C to two significant figures
61 F = 16.1111 C = 16 C
62 F = 16.6667 C = 17 C
Note that my calculator gives a value for 61 F as 16 C, not 17 C as calculated by the author. I assume the difference arises because he has taken the 2 F uncertainty that he says the thermometer readers use, converted that to 1.1 C, and adjusted the final figures accordingly. But that is not the real problem.
Whereas 61 and 62 F have two significant figures, 60 F has one significant figure (55-65). So should not the Centigrade conversion be given to one significant figure, 20 C (15-25)? Strictly speaking yes, but the context makes it clear that in this case the zero is intended to be significant. Such ambiguity is avoided by using power-of-ten or “scientific” notation:
60 = 6 x 10^1 is one significant figure.
60 = 6.0 x 10^1 is two significant figures.
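Those conversions are easy to check mechanically. In the sketch below, sig_round is an illustrative helper (not a standard library function) that rounds to a given number of significant figures:

```python
# Sketch: F-to-C conversion reported to two significant figures,
# reproducing the worked examples above. sig_round is illustrative.

from math import floor, log10

def sig_round(x: float, figs: int) -> float:
    """Round x to `figs` significant figures."""
    if x == 0:
        return 0.0
    return round(x, figs - 1 - floor(log10(abs(x))))

for f in (60, 61, 62):
    c = (f - 32) * 5 / 9
    print(f"{f} F = {c:.4f} C = {sig_round(c, 2):g} C to two significant figures")
# 60 F = 15.5556 C = 16 C to two significant figures
# 61 F = 16.1111 C = 16 C
# 62 F = 16.6667 C = 17 C
```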

Mark T
January 22, 2011 3:03 pm

Dave Springer says:
January 22, 2011 at 2:51 pm

Prove it in a blog post? Hardly. I’ve been an engineer all my life. A very successful one.

Good for you though I did not realize they were handing out degrees to newborns. Learn something new every day I guess. I am unimpressed by your “authority.” I have pretty significant qualifications myself but I have never claimed that is why I am right. I am right because you have not proven i.i.d., nor can you, not in a blog post nor anywhere else. Just knowing that the errors are potentially a function of temperature immediately invalidates any attempt.

I know how these things work. If I didn’t I never would have been able to outperform my peers.

Wow, now I’m really unimpressed. So, what, because you are sooooo good you can magically violate the requirements for fairly well established theory and make it work anyway?
It doesn’t matter how good you are, you clearly do not understand the CLT nor the LLN nor the concept of i.i.d.
What a joke.
Mark

Dave Springer
January 22, 2011 3:09 pm

Mark T
On second thought, the claim to being an engineer all my life isn’t quite true. From age 18 to 22 I was a metrology technician in the military, responsible for calibration, maintenance, and repair of all the weather forecasting gear at USMCAS El Toro, California. So I know a lot more than the average engineer about all the gimcracks used by meteorologists. For the 30-odd years since then I’ve been a hardware/software design engineer. My life has been consumed by knowing how to read instrumentation and knowing the limits therein.

Thousands of people reading thousands of different thermometers for hundreds of years won’t give you the confidence to say it was 70.2 degrees ±1 degree on April 4th, 1880 in Possum Trot, Kentucky, but it will allow you to say the average temperature for April in Kentucky was 0.5 degrees ±0.1 degrees cooler in 1880 than in 1980. That’s just how these things work out as a practical matter. Trends from thousands of samples from thousands of different instruments are generally reliable. One sample from one instrument can be catastrophically wrong. There’s a continuum of increasing reliability with increasing number of observations, instruments, and observers.

LazyTeenager
January 22, 2011 3:15 pm

Therefore, mathematically:
60F=16C
61F=17C
62F=17C
—-
No. Mathematically 61F = 16C if you round it correctly. But that is just nitpicking.
The problem with the article is that there are two issues being confused in this conversion argument:
1. The use of significant digits conveys the degree of uncertainty in the final result of a calculation. Hence the author is correct in that sense.
2. But if values are to be used in subsequent calculations, then it is common to retain guard digits to avoid biasing the final result. That is why it is an acceptable practice here.
The author’s insistence on his superiority relative to climate scientists is not justified.

Mark T
January 22, 2011 3:19 pm

Thousands of people reading thousands of different thermometers for hundreds of years won’t give you the confidence to say it was 70.2 degrees ±1 degree on April 4th, 1880 in Possum Trot, Kentucky, but it will allow you to say the average temperature for April in Kentucky was 0.5 degrees ±0.1 degrees cooler in 1880 than in 1980.

No, they won’t, not unless you can prove the errors are drawn from independent and identically distributed distributions.

That’s just how these things work out as a practical matter.

Wow. My jaw is on the floor. So, what you’re saying is that this happens because you “just know it happens.” You don’t even know the theory behind it? Holy s***t, you really need to educate yourself. You clearly do not know what you are talking about, seriously.

Trends from thousands of samples from thousands of different instruments are generally reliable. One sample from one instrument can be catastrophically wrong. There’s a continuum of increasing reliability with increasing number of observations, instruments, and observers.

Again, not unless you meet the requirements of independence and identical distributions. You can certainly do the averages and get all sorts of extra digits, but they are meaningless.
Mark

Philip Shehan
January 22, 2011 3:20 pm

Ike, you are correct that an increase of 0.7 ± 1.4, whether temperature or something else, is not statistically meaningful. But it is not necessarily true that the true value must lie between 2.1 and -0.7, nor that 0.7 is not the most probable value. Uncertainties are often quoted as 95% confidence limits of a normal or bell-shaped probability curve. In this case that would mean there is a 95% chance the true figure is between 2.1 and -0.7, and the most likely figure is at the top of the bell curve, at +0.7.
And in terms of global warming, the measurements are the statistical average of thousands of measurements and probably hundreds of studies, using different methods (including satellite) so again the error averages out. Whereas you may get a measurement of +0.7 ± 1.4 for a single measurement at one station (I assume you are taking the error as the quoted one for a glass thermometer) it will simply not carry over to the global picture.

LazyTeenager
January 22, 2011 3:24 pm

Therefore the total error margin of all observed weather station temperatures would be a minimum of ±2.5°F, or ±1.4°C…
———
This claim is ambiguously expressed in multiple ways and makes no sense.
What exactly is the total error margin of all observed weather stations?
Why “observed”?
Why “total”?
Why minimum instead of maximum?
Why is this relevant to climatology?
Why assume 1960 thermometers are relevant to the current network?
REPLY: Try to collect all of your thoughts into one post instead of serial thread bombing – penalty box assigned to you – first warning – Anthony

Dave Springer
January 22, 2011 3:33 pm

Ike says:
January 22, 2011 at 2:46 pm
“Here’s the bottom line: if you have a claimed increase of +.7 degree C and a margin of error of +-1.4, that doesn’t mean that “most likely” the temperature change was a positive .7 degree. What it does mean is that the temperature change was +2.1 degree or -.7 degree or anywhere in between and – here’s the punch line – we have no idea what the actual temperature was between those two values. Nothing more, nothing less. When the margin of error is larger than the claimed measurement, that conclusion is possible. And that is the stake in the heart of all the claims of global warming, no matter what term or terms is substituted for it.”
That’s all sorts of wrong. It applies to single measurements. It’s a different ballgame when you have thousands of measurements from thousands of instruments and thousands of observers with dozens of different instrument manufacturers and changing technologies over the course of hundreds of years and then on top of that you have proxies totally unrelated to the instruments and those proxies are in general agreement. THAT’s the bottom line.

Mark T
January 22, 2011 3:35 pm

And in terms of global warming, the measurements are the statistical average of thousands of measurements and probably hundreds of studies, using different methods (including satellite) so again the error averages out.

What???
For chrissakes… go back and read the couple posts I just made regarding the conditions that are required for this to be true. Then go find a suitable text and read up on the LLN and the CLT which will demonstrate that what I just wrote is indeed required. Then, probably in the same text, read and understand the concept of independence (really orthogonality) and attempt to understand the concept of identical distributions.
Really, c’mon folks. Where on earth do you get this nonsense?
Mark

Dave Springer
January 22, 2011 3:38 pm

Mark T
Prove it doesn’t work the way I said it does. Maybe you can share a Nobel Peace prize for proving that the instrumental temperature record for the past 200 years is worthless and you can, all by your lonesome, end the biggest scientific hoax in history. Good luck.

Mark T
January 22, 2011 3:45 pm

Dave Springer says:
January 22, 2011 at 3:38 pm

Prove it doesn’t work the way I said it does.

You’re joking, right? YOU MADE THE CLAIM, you need to prove it, not me.
I have already given you the requirements for the LLN, do you deny that?

Maybe you can share a Nobel Peace prize for proving that the instrumental temperature record for the past 200 years is worthless and you can, all by your lonesome, end the biggest scientific hoax in history. Good luck.

Where on earth did you get this from? Who said it is worthless? I only noted that you cannot arbitrarily cancel errors, and I am correct in that statement.
Mark
