Surface temperature uncertainty, quantified

There is a new paper out that investigates something that has not previously been well dealt with in the surface temperature record (at least as far as the author knows): sensor measurement uncertainty. The author has defined a lower limit to the uncertainty in the instrumental surface temperature record.


Figure 3. (•), the global surface air temperature anomaly series through 2009, as updated on 18 February 2010, (http://data.giss.nasa.gov/gistemp/graphs/). The grey error bars show the annual anomaly lower-limit uncertainty of ±0.46 C.


UNCERTAINTY IN THE GLOBAL AVERAGE SURFACE AIR TEMPERATURE INDEX: A REPRESENTATIVE LOWER LIMIT

Patrick Frank, Palo Alto, CA 94301-2436, USA, Energy and Environment, Volume 21, Number 8 / December 2010 DOI: 10.1260/0958-305X.21.8.969

Abstract

Sensor measurement uncertainty has never been fully considered in prior appraisals of global average surface air temperature. The estimated average ±0.2 C station error has been incorrectly assessed as random, and the systematic error from uncontrolled variables has been invariably neglected. The systematic errors in measurements from three ideally sited and maintained temperature sensors are calculated herein. Combined with the ±0.2 C average station error, a representative lower-limit uncertainty of ±0.46 C was found for any global annual surface air temperature anomaly. This ±0.46 C reveals that the global surface air temperature anomaly trend from 1880 through 2000 is statistically indistinguishable from 0 C, and represents a lower limit of calibration uncertainty for climate models and for any prospective physically justifiable proxy reconstruction of paleo-temperature. The rate and magnitude of 20th century warming are thus unknowable, and suggestions of an unprecedented trend in 20th century global air temperature are unsustainable.

INTRODUCTION

The rate and magnitude of climate warming over the last century are of intense and continuing international concern and research [1, 2]. Published assessments of the sources of uncertainty in the global surface air temperature record have focused on station moves, spatial inhomogeneity of surface stations, instrumental changes, and land-use changes including urban growth.

However, reviews of surface station data quality and time series adjustments, used to support an estimated uncertainty of about ±0.2 C in a centennial global average surface air temperature anomaly of about +0.7 C, have not properly addressed measurement noise and have never addressed the uncontrolled environmental variables that impact sensor field resolution [3-11]. Field resolution refers to the ability of a sensor to discriminate among similar temperatures, given environmental exposure and the various sources of instrumental error.

In their recent estimate of global average surface air temperature and its uncertainties, Brohan, et al. [11], hereinafter B06, evaluated measurement noise as discountable, writing, “The random error in a single thermometer reading is about 0.2 C (1σ) [Folland, et al., 2001] ([12]); the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/√60 = 0.03 C and this will be uncorrelated with the value for any other station or the value for any other month.”

Paragraph [29] of B06 rationalizes this statistical approach by describing monthly surface station temperature records as consisting of a constant mean plus weather noise, thus, “The station temperature in each month during the normal period can be considered as the sum of two components: a constant station normal value (C) and a random weather value (w, with standard deviation σi).” This description, plus the use of a 1/√60 reduction in measurement noise, together indicate a signal-averaging statistical approach to monthly temperature.
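For reference, here is a minimal sketch (mine, not from the paper) of the arithmetic behind B06's 0.2/√60 figure; it holds only under the stated assumption that each reading's error is independent and purely random:

```python
# A minimal sketch of the signal-averaging model B06 assumes:
# monthly temperature = constant normal value C + independent random noise per reading.
# Under that assumption the standard error of the monthly mean shrinks as 1/sqrt(N).

import math

sigma_reading = 0.2   # B06's per-reading random error, 1-sigma, in deg C
n_readings = 60       # at least two readings per day for a month

standard_error_of_mean = sigma_reading / math.sqrt(n_readings)
print(f"{standard_error_of_mean:.3f} C")   # ~0.026 C, quoted as "at most 0.03 C"
```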

The volunteers and I get some mention:

The quality of individual surface stations is perhaps best surveyed in the US by way of the commendably excellent independent evaluations carried out by Anthony Watts and his corps of volunteers, publicly archived at http://www.surfacestations.org/ and approaching in extent the entire USHCN surface station network. As of this writing, 69% of the USHCN stations were reported to merit a site rating of poor, and a further 20% only fair [26]. These and more limited published surveys of station deficits [24, 27-30] have indicated far from ideal conditions governing surface station measurements in the US. In Europe, a recent wide-area analysis of station series quality under the European Climate Assessment [31], did not cite any survey of individual sensor variance stationarity, and observed that, “it cannot yet be guaranteed that every temperature and precipitation series in the December 2001 version will be sufficiently homogeneous in terms of daily mean and variance for every application.”

Thus, there apparently has never been a survey of temperature sensor noise variance or stationarity for the stations entering measurements into a global instrumental average, and stations that have been independently surveyed have exhibited predominantly poor site quality. Finally, Lin and Hubbard have shown [35] that variable field conditions impose non-linear systematic effects on the response of sensor electronics, suggestive of likely non-stationary noise variances within the temperature time series of individual surface stations.

The ±0.46 C lower limit of uncertainty shows that between 1880 and 2000, the trend in averaged global surface air temperature anomalies is statistically indistinguishable from 0 C at the 1σ level. One cannot, therefore, avoid the conclusion that it is presently impossible to quantify the warming trend in global climate since 1880.

The journal paper is available from Multi-Science Publishing here.

I ask anyone who values this work and wants to know more to support this publisher by purchasing a copy of the article at the link above.

Congratulations to Mr. Frank for his hard work and successful publication. I know his work will most certainly be cited.

Jeff Id at the Air Vent has a technical discussion going on about this as well, and it is worth a visit.

What Evidence for “Unprecedented Warming”?


101 Comments
DirkH
January 20, 2011 11:48 pm

Hansen will have to double his adjustment efforts.

Robert L
January 21, 2011 12:15 am

Are you all aware that the commonly used airport METAR temp measurements are rounded up to the nearest degree, introducing an artificial warming of 0.5°?
While anomaly analyses could deal with this, I can also imagine that there may be circumstances where it is not properly accounted for (e.g. if a weather bureau switched from using its own recording at an airport to using METAR data without realising the rounding difference).
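A minimal sketch of the rounding arithmetic (my own illustration; it does not verify how METAR reports are actually produced, only what always rounding up would do to the mean):

```python
# Compare the mean bias introduced by two rounding rules applied to the same readings:
# always rounding up adds about half a degree on average, rounding to the nearest
# degree adds essentially nothing. The temperature spread is arbitrary.

import numpy as np

rng = np.random.default_rng(2)
true_temps = rng.uniform(-10.0, 30.0, 1_000_000)   # assumed spread of true readings

bias_round_up = np.ceil(true_temps).mean() - true_temps.mean()
bias_round_nearest = np.round(true_temps).mean() - true_temps.mean()

print(f"round up to next degree:  mean bias {bias_round_up:+.3f} C")      # ~ +0.5 C
print(f"round to nearest degree:  mean bias {bias_round_nearest:+.3f} C") # ~  0.0 C
```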

Peter Plail
January 21, 2011 12:51 am

I am sure it has been covered before, but can anyone tell us how independent satellite measurements are of surface station temperatures?
As I understand it satellites don’t measure temperatures directly, but measure other factors from which temperatures can be derived. Is this an absolute calculation or does it depend on calibrating against “known” measured surface temperatures?
To put it more directly, does this mean that satellite temperatures have their own errors on top of the surface temperature errors, so are even more inaccurate?

January 21, 2011 1:23 am

. . . the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/√60 = 0.03 C . . .

Allow me to quibble here.
I may be wrong, but using many measurements to increase accuracy only applies to measurements of the same thing. They claim accuracy of the monthly average at 0.03 °C, but not a single one of those 60 measurements is of the monthly average. Each reading is of a temperature at a different time, each with a 0.2 ° error. That does nothing to reduce the monthly error, which should be just the average error of the individual readings, or 0.2 °C.
The monthly error is understated by an order of magnitude.
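To make the point of contention concrete, here is a hedged simulation sketch (my own illustration, not taken from the paper or the comment). It suggests that with purely independent reading errors the monthly mean does tighten toward 0.2/√60 even though each reading samples a different temperature, whereas an error component shared by all readings in the month, i.e. a systematic effect, does not average away:

```python
# Each "month" is 60 readings of different true temperatures.
# Case 1: each reading gets independent random error of 0.2 C (1-sigma).
# Case 2: each reading also gets a systematic offset shared by the whole month.

import numpy as np

rng = np.random.default_rng(0)
n_months, n_readings, sigma = 20_000, 60, 0.2

true_temps = rng.uniform(5.0, 25.0, size=(n_months, n_readings))  # different quantities
true_monthly_mean = true_temps.mean(axis=1)

# Case 1: independent random errors only
noisy = true_temps + rng.normal(0.0, sigma, size=true_temps.shape)
err_random = noisy.mean(axis=1) - true_monthly_mean
print(f"independent errors only:        {err_random.std():.3f} C")   # ~0.026 = 0.2/sqrt(60)

# Case 2: add a systematic offset that is identical for every reading in a month
offset = rng.normal(0.0, sigma, size=(n_months, 1))
err_systematic = (noisy + offset).mean(axis=1) - true_monthly_mean
print(f"plus shared systematic offset:  {err_systematic.std():.3f} C")  # ~0.2, no reduction
```

Which of the two cases better describes real station records is exactly what the paper and this thread are arguing about.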

January 21, 2011 1:26 am

Calibration of meteorological thermometers.
Can somebody tell me how the calibration of electronic meteorological thermometers is performed?
An electronic thermometer consists basically of two parts: a sensor (sometimes a platinum resistor, Pt100, 100 ohms at 0°C) and an electronic unit that converts the resistance variation to a voltage and displays it as temperature.
My main concern is that calibration is often performed by putting the sensor in a liquid at some well-known temperature, and the electronic unit is adjusted until the reading is correct at two or three different temperatures. The electronic unit is, however, kept at a more or less constant temperature during calibration.
The problem is that the electronic unit also acts as a thermometer, because all have some temperature drift. This drift can be considerable if the electronic unit is installed outside, where temperature variations are great.
Is this the case with electronic meteorological thermometers?
(Besides this, the sensor is never totally linear. Even a platinum sensor (Pt100) is somewhat nonlinear and the electronic unit must have a built-in linearizer.)
Agust
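On the linearity question, a Pt100's resistance follows the Callendar-Van Dusen equation rather than a straight line, so a converter calibrated at only two points leaves a small residual error in between. A minimal sketch (my own illustration; the 0-40 C calibration range is an arbitrary choice, not a description of any particular station's electronics):

```python
# Pt100 resistance vs. temperature, IEC 60751 coefficients for T >= 0 C, compared
# with a purely linear readout calibrated at two points (0 C and 40 C).

R0, A, B = 100.0, 3.9083e-3, -5.775e-7   # ohms, 1/C, 1/C^2

def pt100_resistance(t_c):
    """Pt100 resistance in ohms for t_c >= 0 C (Callendar-Van Dusen)."""
    return R0 * (1.0 + A * t_c + B * t_c * t_c)

# Two-point "calibration": a straight temperature-vs-resistance line through 0 C and 40 C
r_lo, r_hi = pt100_resistance(0.0), pt100_resistance(40.0)
slope = (40.0 - 0.0) / (r_hi - r_lo)

def linear_readout(resistance):
    """Temperature inferred by a purely linear converter with no linearizer."""
    return (resistance - r_lo) * slope

for t in (0.0, 10.0, 20.0, 30.0, 40.0):
    t_read = linear_readout(pt100_resistance(t))
    print(f"true {t:5.1f} C  linear readout {t_read:6.3f} C  error {t_read - t:+.3f} C")
```

The residual is only a few hundredths of a degree over this narrow range, but it grows with the span and comes on top of the drift of the electronics that Agust describes.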

January 21, 2011 1:49 am

Slightly off subject, but I always love the way the y-axis on warmist graphs has to be stretched beyond what is measurable to show anything discernible.

EFS_Junior
January 21, 2011 1:54 am

Pat Frank says:
January 20, 2011 at 9:49 pm
Thanks, everyone, for your comments and interest.
Ira, the meaning of the error bars is that we don’t know where the true temperature trend is, within the (+/-)0.46 C envelope. The physically real trend just has a 68% chance of being within it.
What’s going on is that the individual thermometer readings are bouncing around, and deviating from the “true” air temperature mostly due to systematic effects. The systematic effects are mostly uncontrolled environmental variables. Depending on how the thermometers bounce, one could get this trend, or that one, or another, all of which would deviate from the “true” trend and all of which would be uncertain by (+/-)0.46 C. But, readings were taken, so we’ve gotten one of all the possible trends. It has some shape. But one should resist the temptation to read too much into it.
We know the climate has warmed from other observations — in the article I mention the northern migration of the arctic tree line as indicating warming. But putting a number on that warming is evidently impossible.
_____________________________________________________________
Hello Pat (if you are still reading this forum),
Is there any possibility of obtaining a copy of your paper directly from you?
I’m very curious about the derivation of the temperature noise relationship shown in your
4. Summary and Conclusions
http://wattsupwiththat.files.wordpress.com/2011/01/frank2010_summary.png?w=640&h=151
The σ/√N versus σ derivation: is this explained more fully elsewhere in your paper?
Anyway, if you can, could you post your paper here or provide an email address where those who are interested in the details might request a copy?
Thanks.

Alexander K
January 21, 2011 2:41 am

Anthony, congrats on getting your real science in Surfacestations noted and complimented. Mark Cooper, your info on metrology is excellent and worthy of wide promulgation. I would have liked to leave a complimentary comment, but as I do not have a blog this was not possible.
I have said this before but it’s worth repeating; I found WUWT after being very rudely accused of being a troll on the Guardian CiF blog by a feral Greenpeace activist when I asked a sincere and innocent question – is there a common standard for siting and housing environmental thermometers? This came from my long involvement with agriculture and other industries where measurement of various phenomena, such as the moisture content in grain to be harvested, is vital to economic survival. Those measurements, too, are subject to minor error for all sorts of reasons.
As an illustration of the problems inherent in any form of measurement, I once assisted in the construction of the steelwork of a free-standing internal staircase for a public building. The treads and risers, crafted in very expensive native timbers, were built by a separate team of expert woodworkers. We installed the steelwork, then the woodworkers arrived with the beautiful oiled woodwork; nothing fitted! The two teams had agreed to purchase new steel tape measures especially for the contract, but, sadly, the teams bought different brands, both with excellent reputations, one made in England and one made in America. The tapes had a difference of 5mm over 4 metres! The problem was resolved by the woodworkers carting the stairs back to their factory, planing down the outer rails to reduce the width (using one of the steelworkers’ measuring tapes to check), then re-oiling. The second attempt at installation went without a hitch.
I am still amazed that such a simple concept as that of errors in readings from many kinds of measurement devices has been consistently ignored by most of the clisci community.

Curiousgeorge
January 21, 2011 4:29 am

Agust Bjarnason says:
January 21, 2011 at 1:26 am
Calibration of meteorological thermometers.
Can somebody tell me how the calibration of electronic meteorological thermometers is performed?

Your best bet is NIST – http://www.nist.gov/index.html. You will find much information at their website – simply search on “thermometer calibration”. Or contact them via phone or email. They are the US authority for all instrument calibration.

C P
January 21, 2011 6:11 am

The thing that amazes me is that I attended a physics colloquium on Global Warming at the University of Colorado in about 1990 that showed global temperature data with uncertainty bars. The data range was the same as shown here and the uncertainty bars were very similar except that the uncertainty decreased by about 50 % from the oldest to the newest data. That presentation of the data (just like this one) made it clear that there was no provable trend in the data.
I wonder when and why climate scientists decided to stop showing the uncertainty bars?

JJ
January 21, 2011 6:42 am

eadler,
“It seems to me that if there is a systematic error in a thermometer, as long as it does not vary systematically with time, it should not contribute to a significant error in a trend, which is what is used to calculate the global temperature anomaly.”
That statement assumes that the instrumentation of the system doesn’t change over time. That isn’t the case. The number of instruments and the mix of instrumentation has changed over time. If thermometer type B has a consistent warm bias vs type A, and if the mix of thermometers shifts from predominantly type A to type B, the trend will bias warm. That is one way that consistent individual error can still affect the trend in the aggregate.
That said, what Pat has thus far failed to do is demonstrate how his assertion about the uncertainty of the individual measurements operates in the aggregate to produce the observed trend. Unless the individual errors are correlated across all stations over time (thus far not shown) the uncertainty about the trend will be much smaller than the uncertainty of the individual measurements. I doubt this paper has much longevity.
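A minimal simulation of the instrument-mix effect JJ describes (my own sketch; the +0.3 C bias, the station count, and the changeover schedule are invented for illustration, not measured values):

```python
# The true regional temperature is flat, type-A sensors read it without bias, type-B
# sensors carry a constant +0.3 C warm bias, and the network gradually swaps A for B.
# The aggregate then shows a spurious warming trend even though nothing warmed.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
n_stations = 100
true_anomaly = np.zeros(len(years))          # assume no real change at all
bias_type_b = 0.3                            # assumed constant warm bias of type B

fraction_type_b = np.linspace(0.0, 1.0, len(years))   # gradual changeover A -> B
network_mean = []
for frac_b, truth in zip(fraction_type_b, true_anomaly):
    n_b = int(round(frac_b * n_stations))
    biases = np.concatenate([np.zeros(n_stations - n_b), np.full(n_b, bias_type_b)])
    readings = truth + biases + rng.normal(0.0, 0.2, n_stations)   # per-station noise
    network_mean.append(readings.mean())

trend_per_decade = np.polyfit(years, network_mean, 1)[0] * 10.0
print(f"spurious trend: {trend_per_decade:+.3f} C/decade")   # roughly +0.05 C/decade
```

Whether real networks changed over in a way that produces a net bias is a separate, empirical question; the sketch only shows the mechanism.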

beng
January 21, 2011 7:32 am

This is OK statistically — it deals w/ uncertainty in the GISS numbers themselves (right or wrong). But GISS, as we know, doesn’t realistically account for UHI & changes in local site conditions. Bottom line, GISS in its present form can’t tell us much of anything.

Bernd Felsche
January 21, 2011 8:09 am

Keith Minto raises a salient point:

may be hard to sell to journalists and the general public. In Fig. 3 above, it certainly looks as if the temperature has risen since 1960.

So let’s re-plot the data in terms of e.g. the annual temperature range experienced by people in temperate climates, which’d be a scale of ±25°C and a seasonal temperature fluctuation shown as a background band. Then show the “anomaly” (what an intimidating, horrible word to the untrained) curve.
There: You get a sense of proportion. Something that is absent from the palaver and fear-mongering of the alarmists.

Steve Keohane
January 21, 2011 8:28 am

This is just what I suspected from looking at the surface station project, i.e., we don’t have the data to tell what is going on relative to any other time because of the error levels of the measurement system.

eadler
January 21, 2011 9:05 am

JJ says:
January 21, 2011 at 6:42 am
“eadler,
“It seems to me that if there is a systematic error in a thermometer, as long as it does not vary systematically with time, it should not contribute to a significant error in a trend, which is what is used to calculate the global temperature anomaly.”
That statement assumes that the instrumentation of the system doesn’t change over time. That isn’t the case. The number of instruments and the mix of instrumentation has changed over time. If thermometer type B has a consistent warm bias vs type A, and if the mix of thermometers shifts from predominantly type A to type B, the trend will bias warm. That is one way that consistent individual error can still affect the trend in the aggregate.
That said, what Pat has thus far failed to do is demonstrate how his assertion about the uncertainty of the individual measurements operates in the aggregate to produce the observed trend. Unless the individual errors are correlated across all stations over time (thus far not shown) the uncertainty about the trend will be much smaller than the uncertainty of the individual measurements. I doubt this paper has much longevity.”
The various agencies which chart the global trends have studied the change in instrumentation over time, and have adjusted the temperature trends at the various locations used for the documented equipment changes at each location.

Domenic
January 21, 2011 9:53 am

I have 20+ years in professional temperature measurement management, including all kinds of temp sensors, from mercury thermometer to thermocouples to RTDs to resistance platinum to infrared detectors.
Another source of error not yet mentioned here is the ‘time constant’ error inherent in any temp sensor’s ability to measure. For example, the time constant of a glass mercury thermometer, or a bimetallic one (common many years ago), is quite long, a minute or more, depending on circumstances (just from memory). A platinum RTD with very thin sheathing will change in just seconds.
This is significant because it has a direct effect on ‘record low’ and ‘record high’ data sets. I suspect that a lot of ‘record high’ data comes from this error source.
This error source can make it ‘appear’ as the temps are swinging more wildly now than they have ever been. But that is not necessarily true. The wild swings in temp were always there, but the measuring systems could not capture it. Now they can.
For example, when the sun moves in and out of the clouds, outdoor air temperatures can change quite quickly. The amount of that change recorded by the sensor is completely dependent on its time constant. So, ‘record high’ daily data is very susceptible to this. Record lows would not be as sensitive to time constant error because the sun is not dominant at night…so temps simply do not jump around as much.
As I don’t know all the details of what specific temp sensors AND THE RECORDING SYSTEMS have been used historically in all the reporting stations, and whether they are using fast electronic ‘peak picker’ temp sensor systems now to get daily highs or lows, I cannot really know with any certainty how much the error is.
My guess, though, is that they are now using fast electronic ‘peak picker’ systems to measure daily highs and lows, as it is simply so easy to do now. But keep in mind, what is easy to do now was very difficult to do many years ago.
And there are additional possible errors in the entire measurement systems between today and in the past.
As a rough guess, in my professional opinion, the error bars for global temp data are much, much greater than most can imagine, more likely to be around +/-2 deg C.
And this does not include the UHI effect, which I have been pleased to see carefully examined here. That is indeed a VERY large effect. I have observed it many times first hand.
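A hedged sketch of the time-constant effect Domenic describes (my own illustration, not a model of any actual station hardware): a sensor behaving as a first-order lag smooths a brief warm spike, so a slow sensor records a lower daily maximum than a fast one exposed to exactly the same air.

```python
# First-order lag response: dT_sensor/dt = (T_air - T_sensor) / tau.
# The assumed air signal is a 1.5 C spike lasting one minute (sun briefly out).

import numpy as np

dt = 1.0                                     # time step, seconds
t = np.arange(0, 1800, dt)                   # half an hour
air = 20.0 + 1.5 * ((t > 600) & (t < 660))   # assumed brief 1.5 C spike

def first_order_response(air_temp, tau):
    """Sensor temperature for a first-order lag with time constant tau (seconds)."""
    sensor = np.empty_like(air_temp)
    sensor[0] = air_temp[0]
    for i in range(1, len(air_temp)):
        sensor[i] = sensor[i - 1] + (dt / tau) * (air_temp[i - 1] - sensor[i - 1])
    return sensor

for tau in (5.0, 60.0, 180.0):               # fast RTD-like vs slower thermometer-like
    recorded_max = first_order_response(air, tau).max()
    print(f"tau = {tau:5.0f} s   recorded max = {recorded_max:.2f} C "
          f"(true max {air.max():.2f} C)")
```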

BillT
January 21, 2011 10:18 am

Years ago I posted on another forum that the margin of error is far greater than any claimed “warming”.
This work confirms that, but I would go further in saying that even the plus or minus 0.46 is too LOW.

JJ
January 21, 2011 10:29 am

Eadler,
“The various agencies which chart the global trends have studied the change in instrumentation over time, and have adjusted the temperature trends at the various locations used for the documented equipment changes at each location.”
In the broad view, that is not accurate. To the contrary, station metadata globally are so incomplete and unreliable that the latest version of GHCN abandons metadata based adjustments in favor of various model-based corrections. Those corrections are the source of some controversy.
That said, my point was that there are ways that a fixed bias in single instruments can translate into a trending bias when those individual instruments are aggregated spatially and temporally. Instrument mix was just an easy-to-understand example.
Note too that I was agreeing with your larger point. Pat has not demonstrated how he believes the uncertainty of individual measurements translates to uncertainty in the trend across time of the spatially weighted mean of those individual measurements. He asserts that it is possible to fit many different trend lines through the uncertainty band about the individual measurements, which is true. It is also true that statistics is not about the possible, but the probable. Missing is the justification for the assertion that all of those possible trend lines are equally probable. Again, from what is presented thus far, this paper does not appear to have legs. WUWT readers should be cautious about investing too much in its conclusions until they have been further vetted.

January 21, 2011 11:46 am

Thank you so much for this.
O/T: I wish I had one thin dime for every time Arctic conditions are cited as evidence of AGW while Antarctic conditions are omitted. Bias much?

January 21, 2011 12:39 pm

[snip – if you have something substantial to say, say it]

Günther Kirschbaum
January 21, 2011 1:00 pm

Galileo Galilei, Albert Einstein, Patrick Frank…
All men who saw things nobody before them had seen.

January 21, 2011 1:44 pm

The more I read about these global temperature trends and annual comparisons, the more they resemble sausages. Like the butcher says, “You really don’t want to know what is in there.”

January 21, 2011 1:57 pm

Uncertainty in the temperature records and trends is the weak point in all the temperature-based climate change models. Uncertainty has two parts, precision and accuracy. The first involves how finely we can measure something, and the other how well what we measure is “true”. This report says that what we are measuring as a temperature change is not greatly different from the uncertainty of that measurement, or the error bar. Yet the CAGW position does not rest on the size of the error bar, whether it is 0.46K or even more. The huge error bar of the Hockey Stick was irrelevant in the political and public belief that the global temperature was rising to an alarming level. What everyone saw was the change of the mid-point of the error bar. Unless there is more reason to pick the top of the error bar in the past, and the bottom of the error bar in the present, the same result will occur even if reports like this double the width of the error bar: the global temperatures will be seen to have gone up by what Hansen and Gore say, about +0.8K.
However, as many have described on WUWT, this rise of about 0.8K is made up of about 0.4K of raw data and 0.4K of adjustments. The adjustments are an attempt to counter the probable error in the raw data described in this paper. This report speaks more to the raw data than it does to the adjustments; the focus needs to move now to the adjustments.
Below I discuss the five areas in which I see uncertainty affecting the data record and the trends deduced from it: instrumental precision, instrumental accuracy, observer precision, observer accuracy and observer consistency. Each one requires a response, whether that is to do nothing or something, and introduces a place for Hansen et al to have adjusted the record. The nut to be cracked lies within these areas.
1. Instrumental precision, i.e. the sensor’s ability to register fine points of difference, is immaterial when the trends, i.e. the anomalies, are reviewed, as long as the level stays the same over the temperature range and is the same across the network. With enough data, as I understand error analysis, a square root function can drop the +/- bar of the group far below that of the individual station. The magnitude of the “imprecision” has decreased with time if you believe that instrumentation is more sensitive now than in the past. In the aggregate, improved precision through better instruments will not significantly affect temperatures or trends. There should be no adjustment for this.
2. Instrumental accuracy is more significant to the records than precision due to technological advances. Even assuming that the levels of accuracy are randomly distributed within the network, accuracy uncertainty in 1880 would be worse than that in 2010. Both raw temperatures and trends would be affected. The changes, moreover, would be non-random, progressive over time with either a warming or cooling pattern. This is because we “improved” sensors over time rather than reinventing them, retaining their basic way of operating. The modern digital/computer system is a reinvention and could have introduced a step change, but at the time of introduction calibrations were made to get its readings to conform to “known” reality. Future functioning may, however, have still revealed a step-change due to new styles of measurement, with the step-change either a warming or a cooling event. Justification for adjustments for changes in instrumental accuracy would come through a comparison of historical and modern equipment. They would not be progressive but step-changes, and once done, should not have to be done again, unless the reference equipment was changed. There would have to be studies for this adjustment.
Note that the UHIE introduces a type of instrumental inaccuracy, in that local heating distorts the “natural” world we are trying to measure. The instrument is reading properly, but what it is reading is not an accurate reflection of what you are trying to measure. It is a proxy for the untainted world, requiring adjustment. Both data and trends would be affected, and progressively, with a warming bias, as urbanization has increased through time. The effect should be to decrease modern temperatures while leaving older temperatures alone.
3 & 4. Observer precision is today about the same as yesterday, at about 0.5K (probably because that is about 1°F for old thermometers). Observer accuracy may easily be +/- 5C AT TIMES: in reading and then writing a number down, it is easy to put an 18 instead of a 13. These types of error, though, are fairly easy to recognize as outliers, and discounted. Further, over time they should be random and average out with enough data. Observer precision and accuracy should not affect either the temperature records or trends over time. No adjustment should be necessary.
5. Observer consistency is a different matter. I understand that readings are now taken earlier in the day than before, such that a reading of 14.0C in 1880 would have been 13.9C under the 2010 protocol. If this is true, older temperatures would need to be pushed down relative to what the raw data shows. The adjustment here would be a one-time, step-wise function using local data correlating day-of-the-year variation in daily temperatures. A broad-brush approach might be easier to do, but if you really want to reduce data error, a very short time comparison would be necessary.
The 0.4K of adjustments by Hansen/GISS has been to decrease older temperatures relative to the present day. This is reasonable only if you believe a) that older sensors were less accurate than those of today, b) that the older sensors had a warm bias, and c) that they were read at a later time in the day than presently. The magnitude of such necessary adjustments is empirically determinable. These problems cannot, however, in good faith be given as much significance for the modern era as for the past. The changes from 1880 to 1980 have to be far more significant than from 1980 to 2010.
The recent changes are also different from the older ones. From 1980 to 2010 the profound changes are in station numbers and locations, and the magnitude of the UHIE. A more urban record and a larger urban presence introduce higher temperatures. To correct for these, the modern temperatures (and anomalies) will have to be pushed down, i.e. cooling recent data. Older temperatures should be unaffected. But, strangely enough, what we see is the reverse, right up to the adjustments revealed in 2010 to the 2007 data and trends.
Recent records have been made (even if in a minor way) warmer, and older ones still cooler. Since 2000, additional adjustments have pushed pre-1980 data and trends further down, post-1980 temperatures down, and modern ones up. This can only be justified if you claim that the prior, 1980 reference equipment was warm-biased relative to that of 2010. It can be CREATED, however, by changing the basic data involved, i.e. the current database has lost warmer pre-1980 data stations (or introduced more pre-1980, cooler stations) than previously used. This would be a nefarious way to improve your results, but not entirely unjustifiable. A neat “trick”, to be sure, but the group got away with tricks like this before.
If, as suggested, the raw data increase of about 0.4K has an uncertainty due to technical issues of +/-0.46K, that by itself changes nothing in the warmist argument. A 0.8K +/- 0.46K temperature increase since 1880 is enough to get them in a CO2 frenzy, as it matches their models “well enough”. So the focus has to be on the rationale for the 0.4K of adjustments. I suggest this be done by checking the justification for:
1) older, less accurate instrumentation,
2) older, warm-biased instrumentation,
3) older, warmer time periods when temperatures were recorded, and
4) a (probably progressive) change in the raw data being used.
Uncertainty in those four areas will then be attributable to the other 0.4K of temperature rise. As these are not multiple repeats of the same data type, the errors add.
It could be that instrumentation plus adjustment uncertainties are nearly equal to the 0.8K claimed temperature rise. This situation has a possible answer: that the current world is changing mainly by swapping heat around, rather than increasing or decreasing its total heat content. The temperature anomalies are nothing but a measurement of the temperature “noise” of the world. It is where the “noise” is located that determines whether the region (like the Arctic) is warming or (like the Antarctic) cooling.
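As an illustration of the step-wise, one-time correction described in point 5 above, here is a hedged sketch (the change year, the 0.1 C offset, and the series itself are invented, not actual GISS or USHCN adjustments):

```python
# If a documented protocol change at some year is known to make the old readings 0.1 C
# warmer than the new protocol would have recorded for the same weather, the pre-change
# part of the series is lowered by that offset so both halves are on the same footing.

import numpy as np

years = np.arange(1880, 2011)
raw = np.full(len(years), 14.0)       # invented raw series: identical readings throughout

def step_adjust(series, yrs, change_year, offset):
    """Add a constant offset to all values recorded before a documented change year."""
    adjusted = series.copy()
    adjusted[yrs < change_year] += offset
    return adjusted

# Invented example: protocol change in 1960, old protocol read 0.1 C warm -> lower it.
adjusted = step_adjust(raw, years, change_year=1960, offset=-0.1)

print(f"apparent 1880-2010 change, raw:      {raw[-1] - raw[0]:+.2f} C")        # +0.00 C
print(f"apparent 1880-2010 change, adjusted: {adjusted[-1] - adjusted[0]:+.2f} C")  # +0.10 C
```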

George Steiner
January 21, 2011 2:44 pm

It is worse than you thought. It is a climatological fiction that there is such a measure as average global temperature. It is a mathematical artifact only. You can only say what the temperature is at the measurement point. Simpleminded engineering types such as I would not dream of implying what the temperature is at a distance of 10 feet, 1000 feet, or 1000 miles from the measurement point. Climatologists have for twenty years milked this one number and built a house of cards on it.
But as a house it stood up remarkably well. Anthony Watts thinks that there is such a number; Lindzen thinks there is such a number. All everybody is arguing about is how big it is. Or rather how much it deviates from an arbitrary reference.
Let me predict the near future. The house of global average temperature will collapse.

Scott Covert
January 21, 2011 3:03 pm

Some stations are better than others. I think this article is judging the “best case”.
Military readings are often done by 19- or 20-something young people. The readings are probably more accurate during fair weather and poorly measured or outright “fudged” when it is unpleasant to go out and check the Stevenson screen. Military sensors are used mostly to determine short-term weather conditions and were never intended for use as climate data.
Small stations at lighthouses, observatories, schools, etc. are usually read by students, interns, and “your cousin Bob”, who have no idea how the data will be used and no idea what accurate readings are.
The Surface Station Project showed how poorly and unscientifically the stations are maintained.
I think 0.46 C is REALLY generous.
I think the readings we are getting today are much better due to automation, but there are still UHI and siting issues that blow 0.46 C out of the water.