Surface temperature uncertainty, quantified

There is a new paper out that investigates something that has not previously been well dealt with in the surface temperature record (at least as far as its author knows): sensor measurement uncertainty. The author derives a lower limit to the uncertainty in the instrumental surface temperature record.


Figure 3. (•): the global surface air temperature anomaly series through 2009, as updated on 18 February 2010 (http://data.giss.nasa.gov/gistemp/graphs/). The grey error bars show the annual anomaly lower-limit uncertainty of ±0.46 C.


UNCERTAINTY IN THE GLOBAL AVERAGE SURFACE AIR TEMPERATURE INDEX: A REPRESENTATIVE LOWER LIMIT

Patrick Frank, Palo Alto, CA 94301-2436, USA. Energy & Environment, Volume 21, Number 8, December 2010. DOI: 10.1260/0958-305X.21.8.969

Abstract

Sensor measurement uncertainty has never been fully considered in prior appraisals of global average surface air temperature. The estimated average ±0.2 C station error has been incorrectly assessed as random, and the systematic error from uncontrolled variables has been invariably neglected. The systematic errors in measurements from three ideally sited and maintained temperature sensors are calculated herein. Combined with the ±0.2 C average station error, a representative lower-limit uncertainty of ±0.46 C was found for any global annual surface air temperature anomaly. This ±0.46 C reveals that the global surface air temperature anomaly trend from 1880 through 2000 is statistically indistinguishable from 0 C, and represents a lower limit of calibration uncertainty for climate models and for any prospective physically justifiable proxy reconstruction of paleo-temperature. The rate and magnitude of 20th century warming are thus unknowable, and suggestions of an unprecedented trend in 20th century global air temperature are unsustainable.

INTRODUCTION

The rate and magnitude of climate warming over the last century are of intense and continuing international concern and research [1, 2]. Published assessments of the sources of uncertainty in the global surface air temperature record have focused on station moves, spatial inhomogeneity of surface stations, instrumental changes, and land-use changes including urban growth.

However, reviews of surface station data quality and time series adjustments, used to support an estimated uncertainty of about ±0.2 C in a centennial global average surface air temperature anomaly of about +0.7 C, have not properly addressed measurement noise and have never addressed the uncontrolled environmental variables that impact sensor field resolution [3-11]. Field resolution refers to the ability of a sensor to discriminate among similar temperatures, given environmental exposure and the various sources of instrumental error.

In their recent estimate of global average surface air temperature and its uncertainties, Brohan, et al. [11], hereinafter B06, evaluated measurement noise as discountable, writing, “The random error in a single thermometer reading is about 0.2 C (1σ) [Folland, et al., 2001] ([12]); the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/√60 = 0.03 C and this will be uncorrelated with the value for any other station or the value for any other month.”

Paragraph [29] of B06 rationalizes this statistical approach by describing monthly surface station temperature records as consisting of a constant mean plus weather noise, thus, “The station temperature in each month during the normal period can be considered as the sum of two components: a constant station normal value (C) and a random weather value (w, with standard deviation σi).” This description, plus the use of a 1/√60 reduction in measurement noise, together indicate a signal-averaging statistical approach to monthly temperature.
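To make the disputed step concrete, here is B06’s arithmetic as a minimal Python sketch. The two input numbers are the ones quoted above; everything else is illustration, and the caveat in the final comment is the paper’s central objection, not B06’s.

```python
import math

sigma_single = 0.2      # C, 1-sigma random error of one thermometer reading (B06)
n_readings = 60         # two readings a day over a month

# Standard error of the mean: valid only if the 60 errors are random,
# independent, and identically distributed.
sem = sigma_single / math.sqrt(n_readings)
print(f"monthly-mean error: +/-{sem:.3f} C")   # ~0.026 C, quoted as 0.03 C in B06

# A systematic offset shared by all 60 readings would pass through the
# average untouched -- this is the step the paper disputes.
```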

The volunteers and I get some mention:

The quality of individual surface stations is perhaps best surveyed in the US by way of the commendably excellent independent evaluations carried out by Anthony Watts and his corps of volunteers, publicly archived at http://www.surfacestations.org/ and approaching in extent the entire USHCN surface station network. As of this writing, 69% of the USHCN stations were reported to merit a site rating of poor, and a further 20% only fair [26]. These and more limited published surveys of station deficits [24, 27-30] have indicated far from ideal conditions governing surface station measurements in the US. In Europe, a recent wide-area analysis of station series quality under the European Climate Assessment [31], did not cite any survey of individual sensor variance stationarity, and observed that, “it cannot yet be guaranteed that every temperature and precipitation series in the December 2001 version will be sufficiently homogeneous in terms of daily mean and variance for every application.”

Thus, there apparently has never been a survey of temperature sensor noise variance or stationarity for the stations entering measurements into a global instrumental average, and stations that have been independently surveyed have exhibited predominantly poor site quality. Finally, Lin and Hubbard have shown [35] that variable field conditions impose non-linear systematic effects on the response of sensor electronics, suggestive of likely non-stationary noise variances within the temperature time series of individual surface stations.
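For readers wondering what a noise-variance stationarity check would even look like, here is a crude, purely illustrative sketch on synthetic residuals. No real station data or survey is involved, and every number in it is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
months = 1200                          # a century of monthly, de-seasonalized residuals
resid = rng.normal(0.0, 0.2, months)   # +/-0.2 C reading noise
resid[months // 2:] *= 2.0             # inject a non-stationarity to detect

# Compare noise variance across the two halves of the record; a ratio far
# from 1 flags the kind of non-stationary noise variance that Lin and
# Hubbard's results suggest.
ratio = resid[months // 2:].var() / resid[:months // 2].var()
print(f"variance ratio, 2nd half / 1st half: {ratio:.1f}")   # ~4 here; ~1 if stationary
```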

The ±0.46 C lower limit of uncertainty shows that between 1880 and 2000, the trend in averaged global surface air temperature anomalies is statistically indistinguishable from 0 C at the 1σ level. One cannot, therefore, avoid the conclusion that it is presently impossible to quantify the warming trend in global climate since 1880.

The journal paper is available from Multi-Science publishing here

I ask anyone who values this work and wants to know more to support this publisher by purchasing a copy of the article at the link above.

Congratulations to Mr. Frank for his hard work and successful publication. I know his work will most certainly be cited.

Jeff Id at the Air Vent has a technical discussion going on about this as well, and it is worth a visit.

What Evidence for “Unprecedented Warming”?

carbon-based life form

Here is a link to a letter to the APS editorial page from this author that may be the genesis of his published study. http://www.aps.org/units/nes/newsletters/fall09.cfm

etudiant

Bravo!
At last some glimmer of reality. Now stand by for heavy flak.

Al Gored

Wow. Finally. Uncertainty officially acknowledged at this level. Combined with all the station problems you documented Anthony, and related adjustments, this should pretty much put all the quibbling about one-tenth degree changes, etc. to an end… but it won’t of course.

richard verney

Well, it is good to see that this has been published. This paper is bound to get quite some reaction. I know many who have long considered that the claimed warming is indistinguishable from the noise. Given all the uncertainties, it is ridiculous to claim measurements to 1/100ths of a degree.
Now if they could only clean up the UHI problem and bias through station drop out, we may be better able to consider the significance of any observed temperature change and to put it in its proper context.

SomeJerk

It won’t stop the alarmists because it isn’t about AGW. It never was. It has always been about crisis, any crisis, that can be used to motivate the people into whatever action the political higher-ups want.

Duster

More significantly, the study concludes that the trend in surface temperatures since 1880 is statistically indistinguishable from a 0-degree C trend. That is, although it has been widely acknowledged, even among skeptics of AGW, that the globe has warmed over the last century, there is no statistically significant support in the data at even a one-sigma level for such warming. There may, in short, have been no warming pattern at all, simply shorter-term noise. The fault may not be in our star but in our numbers.

ThomasU

Wow, this “settled science” is always good for a big surprise. There has never ever been a thorough scientific assessment of station errors?! Extraordinary! I have learned a lot since I first became aware of WUWT, and I knew very well that “climate science” is full of holes. But to read now that nobody ever even bothered to assess the quality of the temperature measurements properly is still somewhat shocking. The whole “climatism” (or “alarmism” or “warmism”) scam has to be brought to an end, the sooner the better!
I can only thank Anthony Watts for bringing all this information to the wider public. Here in Germany we still have a long way to go before the scam ends. There seems to be an “unholy alliance” of MSM, politics, NGOs, “climate scientists” and “green energy” profiteers, which all work hard to promote ever more regulations, taxes, fees and research grants. Until now the people unwillingly bear the burden…

mike g

Just curious, now that the Hudson Bay has frozen over, have BBQ conditions set in for the UK?

kadaka (KD Knoebel)

Has Mount Romm erupted?
It’d be a big volcanic eruption, lots of sulfate aerosols dispersed up high, lots of dust from the disintegrating gray dense stony matter. Mount Romm will be venting steam for weeks, spewing spurts of fire and brimstone for months.
I feel a chill in the air, like the world has gotten colder…

coaldust

I don’t believe man is responsible for the temperature rise, but we do see the following evidence that it has risen:
Less ice in the Arctic
Similar temperature measurements from satellites
Deviations with good explanations (Pinatubo, El Niño, La Niña)
probably more…
IMO the argument that we don’t know that the temperature has risen is not a good one. The evidence just doesn’t point that way.
These blow holes in CAGW though:
MWP
Cloud albedo feedback uncertainty
The way the “science” was hijacked
probably more…

Mark Cooper

Here is some relevant info about Metrology. I sent this to Mr McIntyre in 2008 but he never replied, and eventually I got around to sticking it on a blog for posterity…
http://pugshoes.blogspot.com/2010/10/metrology.html

rsteneck

[snip – sorry, your topic is way off topic; this sort of thing is better published here. I’m sure they’ll take it. -Anthony ]

It’s as bad as we thought.

richard verney

What’s the betting that in about two weeks’ time, Dr Spencer will be reporting a global temperature anomaly of about zero against the 30-year trend.

François

Anything about large lakes’ surface temperature uncertainty? NASA’s JPL gives figures way above Frank’s noise variance.

Keith Minto

Good thoughtful article, however…

The ±0.46 C lower limit of uncertainty shows that between 1880 and 2000, the trend in averaged global surface air temperature anomalies is statistically indistinguishable from 0 C at the 1σ level. One cannot, therefore, avoid the conclusion that it is presently impossible to quantify the warming trend in global climate since 1880.

…may be hard to sell to journalists and the general public. In Fig. 3 above, it certainly looks as if the temperature has risen since 1960. The general public may think that the strongest signal is the mean, with the error bars indicating a gradient of uncertainty.

David L. Hagen

The full uncertainty (Type A + Type B) of ±0.46 C vs Type A of ±0.2 C is typical of the difference between a full uncertainty analysis versus the commonly reported “accuracy”.
NIST provides an introduction:
Essentials of expressing measurement uncertainty
For details see: B.N. Taylor and C.E. Kuyatt, Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, NIST TN 1297
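As a minimal sketch of the NIST-style combination David describes, assuming independent components that add in quadrature. The 0.41 C Type B value is back-calculated from the paper’s stated ±0.46 C, not quoted from it:

```python
import math

u_a = 0.2                     # C, Type A (statistical): the reported station error
u_b = 0.41                    # C, Type B (systematic): assumed, back-calculated
u_c = math.hypot(u_a, u_b)    # combined standard uncertainty (quadrature sum)
U = 2 * u_c                   # expanded uncertainty, coverage factor k = 2

print(f"u_c = +/-{u_c:.2f} C")       # ~0.46 C
print(f"U (k=2) = +/-{U:.2f} C")     # ~0.91 C
```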

Scott

It’ll be interesting to see what Tamino’s response to this is (if any). Isn’t this supposed to be right up his alley?
-Scott

latitude

130 years of data, and a straight line is still within the noise…
==============================================
coaldust says:
January 20, 2011 at 3:19 pm
I don’t believe man is responsible for the temperature rise
=================================================
Man is not responsible at all coal…..
…..if man had anything to do with it, it would be a lot warmer

Curiousgeorge

My main issue with “average” (or mean), in this or many other contexts, is that most people who are not statistically literate equate that term with “majority”. Not true of course, but even fewer people have a clue what mode or median is, much less how those and other terms (skew, kurtosis, etc.) are used to describe a distribution, which would be far more meaningful than a simple point estimate.

Thanks Anthony, and of course Patrick Frank,
I am quoting from this article at http://www.oarval.org/ClimateChange.htm and http://www.oarval.org/CambioClima.htm (Spanish).

Dave

To Anthony and the Surface Stations team… thank you, and congratulations.
By the way, I hear that Vegas bookies have your team leading the hockey team by (whatever is statistically more significant than) 6 Sigma!!!

JRR Canada

So is there any science left in the CAGW concept? It’s incontravertable, eh?
However you spell absolutely for sure the only answer.

Dr A Burns

Temperatures are now measured electronically to 0.1 degrees, but have been recorded to only +/-0.5 degrees:
http://www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf … page 11
This doesn’t seem to have been considered.
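A worked number to go with that: recording to the nearest whole degree leaves a rounding error uniform on ±0.5 degrees, whose standard deviation is 1/√12. This is textbook quantization arithmetic, not anything from the paper:

```python
import math

# Standard deviation of an error uniformly distributed on [-0.5, +0.5] deg:
quantization_sigma = 1.0 / math.sqrt(12)
print(f"1-sigma rounding error: +/-{quantization_sigma:.2f} deg")   # ~0.29

# That single term already dwarfs the 0.03 C monthly figure from the
# 1/sqrt(60) argument, unless rounding errors are truly independent
# from reading to reading.
```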

George E. Smith

Well, the paper is about sensor errors. How about the errors that occur for about 70% of the surface area, which happens to be water? I don’t see any attempt to correct the erroneous data gathered prior to 1980, which in January 2001 was found not to correlate with temperatures measured on land, i.e. from the atmosphere rather than from the water, as the older oceanic readings were.
These global Temperature anomaly proxies have more changes than any carillon on earth.

terrybixler

Draw a line at zero and it barely escapes the error bars in 2010 (the hottest adjustment ever), interesting.

Mike Edwards

mike g says:
January 20, 2011 at 3:17 pm
Just curious, now that the Hudson Bay has frozen over, have BBQ conditions set in for the UK?

Nice of you to ask, Mike G. January has seen the UK get a more customary spell of wet & windy weather, with temperatures a more tolerable 8 to 10 C (46-50F), and a short spell up to 13 C (almost springlike). Things have returned to a chilly, dry theme now, which looks like it will last for a week or so, with night frosts and daytime temps in the 2-6 C range.
So no BBQ, just more like a regular winter than the icy blast we received before Christmas…

xyzlatin

Mark Cooper says:
January 20, 2011 at 3:20 pm
Here is some relevant info about Metrology. I sent this to Mr McIntyre in 2008 but he never replied, and eventually I got around to sticking it on a blog for posterity…
Mark, I found your blog a while ago but lost it again, thanks for posting the link. Your article was really good and I will print it out and show it to everyone who thinks that it is possible to show from thermometer records that the temperature has risen at all. I have known for many years of the errors in thermometers, and I have just been unable to fathom why everyone is arguing about a supposed rise of 0.7 deg and so on.
I have heard many skeptics agreeing that the temperature has risen; I think they don’t want to be put in the category of “flat earther”, which is worse than “denier”. Yet the basis of argument has to be a complete analysis of the instruments measuring the data, and then of the people taking the measurements.
Mark, you excellently sum up the problems in your blog article.
1. Human errors in accuracy and resolution of historical data are ignored
2. Mechanical thermometer resolution is ignored
3. Electronic gauge calibration is ignored
4. Mechanical and electronic temperature gauge accuracy is ignored
5. Hysteresis in modern data acquisition is ignored
6. Conversion from degrees F to degrees C introduces false resolution into data (see the sketch after this comment)
You are obviously someone who has knowledge about this subject, how about posting more on WUWT?
P.S. Mark, it is impossible to post a comment on your blog unless one already has a blog. It might be the reason there are no comments (I found it impossible to leave one).
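Item 6 in the list above is easy to demonstrate; the whole-degree Fahrenheit readings below are invented for illustration:

```python
def f_to_c(f):
    """Convert Fahrenheit to Celsius."""
    return (f - 32.0) * 5.0 / 9.0

# Readings recorded to the nearest 1 F...
for f in (67, 68, 69):
    print(f"{f} F -> {f_to_c(f):.2f} C")   # 19.44, 20.00, 20.56 C

# ...come out with two decimal places of apparent precision, though the
# underlying data only ever resolved steps of 5/9 ~ 0.56 C.
```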

It's always Marcia, Marcia

No statistically significant warming in 130 years.

Beesaman

As someone who spent twenty years as an instrument engineer I’d love to see the field instruments that are doing better than ±0.5 Deg C. I’d also like to see the calibration records of the standards used and the calibration reports and methodology.
I noticed NOAA gives a tolerance of ± one degree for the MAX/MIN temperatures, and for the Palmer soil thermometer even more; to quote from
http://www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf
V. CALIBRATION – CONTINUED:
When comparing Palmer Model 35B readings with those of a check thermometer, remember two things:
A) tolerance of the Palmer (approx. 2 degrees F.) and the check thermometer (generally 1% of scale) may be additive.
B) a seemingly slight difference in exposure between the two may contribute to a variation in the readings. A spread of up to 4 degrees F. between the two readings can be considered satisfactory. If using method 3, a reading between 29 degrees F. and 35 degrees F. can be considered sufficiently accurate at the ice-point.
Ho hum…

Ira Glickstein, PhD

I don’t want to bring a skunk to this picnic, but look again at the figure. It says it is equally likely that global temperatures, since 1880, have cooled by as much as 0.2ºC (the difference between the high limit for 1880 and the low limit for 2009) -OR- warmed by as much as 1.8ºC (the difference between the low limit for 1880 and the high limit for 2009).
Since the error bars are at least approximately normally distributed, the best bet would be that the actual warming is somewhere in the middle, nearer to 0.8ºC than either of the extremes. Yes, it COULD be that it has cooled 0.2ºC since 1880, or, conversely, that it has warmed an astounding 1.8ºC, but the best bet is near the middle.
Now, I think the most likely net warming since 1880 is 0.5ºC, so this paper seems to run 0.3ºC warmer than my estimate.
The message of this paper, for skeptics like me, is that the uncertainty is about as large as the supposed 0.8ºC official Team warming estimate. That supports the view that we should not adopt any drastic, economy-wrecking public policy changes in the “environmental” domain based on the error-prone temperature record we have in hand.
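Ira’s arithmetic made explicit. The 1880 and 2009 anomalies below are assumed round numbers rather than the exact GISTEMP values, which is why the extremes land slightly off his -0.2 and +1.8:

```python
a_1880, a_2009 = -0.3, 0.5    # C, assumed GISTEMP-like anomalies (illustrative)
u = 0.46                      # C, the paper's lower-limit uncertainty per anomaly

central = a_2009 - a_1880                  # ~0.8 C, the "best bet" warming
warmest = (a_2009 + u) - (a_1880 - u)      # ~1.7 C, extreme warming case
coolest = (a_2009 - u) - (a_1880 + u)      # ~-0.1 C, i.e. possible net cooling

print(f"central {central:.2f} C, range {coolest:.2f} to {warmest:.2f} C")
```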

GregO

xyzlatin, thanks for giving Mark the H/T, and Mark – right on.
After Climategate I got interested in so-called man-made global warming, and step one for me was to check out the thermometers. It took me a while to find this site, and when I found out about Anthony’s surface station project it just blew me away. Sure, there are other proxies, like sea level – guess what – that’s tough to measure too!
There just isn’t much delta T to work with; if there really were runaway warming, we’d all be debating how much, how fast, and what to do right now. The problem remains that the delta T is just tiny compared to our measuring technique/technology. There just isn’t much to get excited about that can be directly measured.
So now the argument is that there is this mysterious “heat in the pipeline” ala Trenberth that is somehow lurking, just waiting to destroy mankind. Really? Just show me the data. I’ll make up my own mind about how bad it is.

geo

I feel a small touch of pride today. 🙂
Anthony, were you consulted in advance? Your blessing secured? I ask because I well remember your reaction to how NOAA treated you over use of the data that you were the driving, organizing, and evaluating force behind.

vigilantfish

Wow Anthony, congratulations. Nice to see your real science is getting some official recognition! This is nice news to counterbalance the claptrap coming from the UN weather agency, the World Meteorological Organization, which claims 2010 is now tied as the hottest year in history.
http://www.forexyard.com/en/news/2010-ties-for-hottest-year-UN-2011-01-20T115020Z-FACTBOX

Curiousgeorge

@ xyzlatin says:
January 20, 2011 at 5:08 pm
Mark Cooper says:
January 20, 2011 at 3:20 pm
Here is some relevant info about Metrology. I sent this to Mr McIntyre in 2008 but he never replied, and eventually I got around to sticking it on a blog for posterity…
Mark, I found your blog awhile ago but lost it again, thanks for posting the link. Your article was really good and I will print it out and show it to everyone who thinks that it is possible to show from thermometer records that the temperature has risen at all. I have known for many years of the errors in thermometers, and I have just been unable to fathom why everyone is arguing about a supposed rise of .7 deg and so on.

I agree. Mark’s article was exactly on the money. Having had some experience with metrology myself in an industrial environment, though, I can testify that the users of measurement instruments absolutely hate it when anybody raises the issue, and immediately point the finger at the calibration laboratory. The reason is that on the floor (or in the field) they have no capability (and usually no training or tools) to establish or compensate for that instrument/human error. They have to believe what their eyes tell them, whether it’s a tape measure, caliper, torque wrench, or thermometer. So when the data gets to the analyst, he has no way of determining the instrument/human error either. Questioning the readings just makes everyone look bad to the boss, who has other management concerns besides some piddly problem with instrument error, and no one will admit to human error. In some industries, admission of this uncertainty carries legal and contractual risk as well. It’s a real can of squirmy worms.
There are statistical techniques for dealing with this; notably a Gage Repeatability and Reproducibility (GRR) study, which is a form of ANOVA. But again that requires a level of training and so on, which may not be available, and the results can be hazardous to your business health.

Wally

Not only is the temperature trend uncertain, any modeling using these temperatures is even more uncertain.

magellan

Mark Cooper says:
January 20, 2011 at 3:20 pm
Here is some relevant info about Metrology. I sent this to Mr McIntyre in 2008 but he never replied, and eventually I got around to sticking it on a blog for posterity…
http://pugshoes.blogspot.com/2010/10/metrology.html

Mark Cooper: AMEN! I’ve been howling about this very same thing for some time. Metrology takes a back seat in climate science when it should be the first and foremost concern before any measurements are taken or reports written, no matter what the field of work. How any scientist or organization can be qualified to use data without studying, understanding and applying fundamental metrology is mind-boggling, IMO. If industry ran like climate science, most of it would have been put out of business 20 years ago.
I chuckle when reading statements like “data was put through quality control” or such. That they can, even with a straight face, publish resolutions to .001, or even .1 for that matter, is ludicrous, not to mention the error bars, which have bothered me for some time. Then GISS extrapolates/interpolates and otherwise morphs what used to resemble “data” into nothing more than near-meaningless values. The Met Office does the same thing with HadCRUT results by modeling them into their own image. These steps should add to the uncertainty of the measurements on their own, but we’re told don’t worry, it’s all been accounted for. Yeah, right.
Where are the calibration procedures and maintenance records for the surface stations? Where are the GR&R studies? Where are the system audits?
I’ll be purchasing this paper. Thanks Pat Frank for bringing this to light. It’s long overdue.

Jeff Alberts

Ira Glickstein, PhD says:
January 20, 2011 at 6:16 pm
I don’t want to bring a skunk to this picnic, but look again at the figure. It says it is equally likely that global temperatures, since 1880, have cooled by as much as 0.2ºC (the difference between the high limit for 1880 and the low limit for 2009) -OR- warmed by as much as 1.8ºC (the difference between the low limit for 1880 and the high limit for 2009).
Since the error bars are at least approximately normally distributed, the best bet would be that the actual warming is somewhere in the middle, nearer to 0.8ºC than either of the extremes. Yes, it COULD be that it has cooled 0.2ºC since 1880, or, conversely, that it has warmed an astounding 1.8ºC, but the best bet is near the middle.

The real problem, Ira, is that there is no “it”. Some locations have warmed, some have cooled, and some have remained relatively static over that time span. But, because most of the sensors are now in built-up areas, the average gets boosted.
Nothing “Global” has occurred, lots of regional though, in all directions.

GaryW

I’m not sure folks are understanding the full implications of this paper. Remember, the climate models are adjusted to match the digital version of a temperature record. They are being tweaked to match noise. The trend projected from this record is only about as much as the noise level. In instrumentation terms, that is no trend. That does not give us much confidence in climate model output.
Besides, +/- 0.46 degrees accuracy is awfully optimistic. That is lab instrument accuracy.

magellan

Beesaman,
Are you familiar with the HO83 issue?
http://climateaudit.org/2007/08/22/the-ho-83-hygrothermometer/

stephan

Once temperatures are seen to be not rising due to adjustments etc… plus a bit of truth, probably due to natural causes, the WMO will have to be closed down and reassessed in its totality after their incredibly stupid press release that 2010 has been the hottest year in history LOL

SABR Matt

Remember the IPCC report that the uncertainty in global temperature was +/- 0.05 (!) C?
I am far more inclined to believe THIS scale…though I actually think the uncertainty may be higher.

eadler

The conclusions of Patrick Frank are counter to the generally accepted consensus. It is common sense to demand that some logic be presented that shows why the generally accepted theory is wrong. This was not done in the fragment of the article posted above. We don’t have the author’s arguments, which show that his estimate of uncertainty is correct, and the common estimate of random error is wrong. We only have his conclusions.
People who call themselves skeptics should be skeptical of this until they see the entire argument and can confirm that the author’s conclusions are correct. If copyright forbids direct reproduction of the key argument, it would have been good to see the argument paraphrased by someone.
It will be interesting to see some critical analysis of the author’s thesis by experts. I am not going to purchase the article to read it for myself.
It seems to me that if there is a systematic error in a thermometer, as long as it does not vary systematically with time, it should not contribute to a significant error in a trend, which is what is used to calculate the global temperature anomaly.
REPLY: Well bucko, buck up and buy the paper or quit your whining. – Anthony
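For what it’s worth, eadler’s closing point is easy to illustrate: a bias constant in time shifts every anomaly but leaves a fitted trend untouched. This toy demonstration says nothing about the paper’s actual claim, which is that the systematic errors vary with uncontrolled field conditions and so would not cancel this way:

```python
import numpy as np

years = np.arange(1880, 2001)
series = 0.005 * (years - 1880)          # assumed 0.5 C/century series

slope_raw = np.polyfit(years, series, 1)[0]
slope_biased = np.polyfit(years, series + 1.23, 1)[0]   # add a constant offset

print(np.isclose(slope_raw, slope_biased))   # True: constant bias leaves trend alone
```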

Beesaman says:
January 20, 2011 at 5:49 pm
I noticed NOAA give a tolerance of ± one degree for the MAX/MIN temperatures and for the Palmer Soil thermometer
Unbelievable! And here we are sitting on the edge of our seats over what’s going on month to month. Richard Lindzen said it nicely:
“…..each of these points has a fairly large error bar….. we’re turning normal variations into the ancient notion of an omen. We’re scanning this small residue for small changes and speaking of them as though they were ominous signs of something or other.”

James Smyth

I was just reading that Metrology article and had a thought. In the mid-’80s (I think it was) there was a movement to change the rounding convention for .5: round to the closest even number, rather than up to the next number. The idea was that always rounding up was biasing numbers up, while parity doesn’t really matter. I wonder a) whether someone’s reading of a thermometer is influenced by the current rounding convention (***), and b) whether anyone has ever studied the effect of a convention shift on anything important.
(*** the Metrology article seemed to indicate that recordings of even numbers are more common than odd ones, but it’s not clear whether that came from rounding convention, thermometer design, or some other innate preference for even numbers)
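On James’s points (a) and (b), the bias from always rounding .5 upward is easy to reproduce. A minimal sketch; Python 3’s built-in round() already uses ties-to-even, while the decimal module supplies the old half-up rule:

```python
import decimal

halves = [d + 0.5 for d in range(10)]    # 0.5, 1.5, ..., 9.5: exact ties

half_even = [round(x) for x in halves]   # ties-to-even ("banker's rounding")
half_up = [int(decimal.Decimal(x).quantize(decimal.Decimal("1"),
                                           rounding=decimal.ROUND_HALF_UP))
           for x in halves]              # the old round-half-up convention

print(sum(half_up) - sum(halves))        # +5.0: half-up biases the total upward
print(sum(half_even) - sum(halves))      # 0.0: ties-to-even cancels on average
```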

Pat Frank

Thanks, everyone, for your comments and interest.
Ira, the meaning of the error bars is that we don’t know where the true temperature trend is, within the (+/-)0.46 C envelope. The physically real trend just has a 68% chance of being within it.
What’s going on is that the individual thermometer readings are bouncing around, and deviating from the “true” air temperature mostly due to systematic effects. The systematic effects are mostly uncontrolled environmental variables. Depending on how the thermometers bounce, one could get this trend, or that one, or another, all of which would deviate from the “true” trend and all of which would be uncertain by (+/-)0.46 C. But, readings were taken, so we’ve gotten one of all the possible trends. It has some shape. But one should resist the temptation to read too much into it.
We know the climate has warmed from other observations — in the article I mention the northern migration of the arctic tree line as indicating warming. But putting a number on that warming is evidently impossible.
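Pat’s 68% figure is just the one-sigma coverage of a normal error distribution, which a toy simulation confirms. Treating the combined error as normal with sigma = 0.46 C is an assumption here, not something established in the thread:

```python
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 0.46, 100_000)          # simulated anomaly errors, C
coverage = np.mean(np.abs(errors) <= 0.46)       # fraction inside +/-0.46 C
print(f"fraction within one sigma: {coverage:.2f}")   # ~0.68
```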

Ira Glickstein, PhD

Jeff Alberts says:
January 20, 2011 at 7:44 pm
… The real problem, Ira, is that there is no “it”. Some locations have warmed, some have cooled, and some have remained relatively static over that time span. But, because most of the sensors are now in built-up areas, the average gets boosted.
Nothing “Global” has occurred, lots of regional though, in all directions.

Jeff, you are certainly correct that the average measured temperatures have been boosted by urban heat island effects in developed areas. However, I do not see how the paper that is the subject of this thread supports your statement that “Nothing ‘Global’ has occurred”.
All the paper seems to do (and I have only read the abstract and looked at the figure) is to show that the error sources are about as large as the thing the climate Team claims to have measured. Assuming the Abstract faithfully represents the paper, and further that the paper is valid, which I accept for argument’s sake, there are three conclusions, none of which seem to help our skeptic position:
(1) One possible conclusion supported by this paper, is that there has been no net average warming since 1880, and possibly even, as I showed, 0.2ºC of cooling! Most commenters grabbed onto that as a refutation of CAGW. (I personally reject CAGW due to other evidence, but I cannot see how this paper, per se, sheds any light on the argument against CAGW.)
(2) A second, equally possible conclusion, supported by this paper, is that there has been an astounding amount of average warming since 1880, of as much as 1.8ºC. That could be used to support CAGW. My fellow skeptics need to know that before they cite this paper as a refutation of CAGW. Do they not?
(3) Either of these extreme conclusions from this paper is unlikely. Furthermore, the best bet based on the paper, namely the average of the two extremes, about 0.8ºC, is 0.3ºC more average warming than I think occurred since 1880, for the very reasons you state: the location of many measurement stations in areas encroached by urban development, and also the “adjustments” made by the official climate Team that appear to be purposefully cooling datapoints prior to 1940 or 1950 and warming the points afterwards, with no good justification.
So, given the above, what value is this paper to the skeptic position? The only one I can think of is, as I wrote: “… the uncertainty is about as large as the supposed 0.8ºC official Team warming estimate. That supports the view that we should not adopt any drastic, economy-wrecking public policy changes in the ‘environmental’ domain based on the error-prone temperature record we have in hand.”
Do you disagree with that?

Anyone care to comment on the uncertainty bars on the satellite temperatures and give us a number for them?

Lonnie Schubert

I trust Mosh will weigh in. My experience tells me that temperature is almost never known at better than ±½°F. ±½°C for weather data seems sound and conservative to me. I suspect ±2°C is more typical, particularly when working with averages, even when working with anomalies.

ge0050

http://pugshoes.blogspot.com/2010/10/metrology.html
“Temperature cycles in the glass bulb of a thermometer harden the glass and shrink over time, a 10 yr old -20 to +50c thermometer will give a false high reading of around 0.7c”
Isn’t 0.7C about what is claimed for global warming over the past 100 years?