NCDC's USHCN2 paper: some progress, some duck and cover

The fact that my work is mentioned by NCDC at all is a small miracle, even if it is “muted”, as Roger says. However, I’m pleased to get a mention and I express my thanks to Matt Menne for doing so. Unfortunately, they ducked the issue of the contribution of long-term site bias and UHI to the surface record. But we’ll address that later – Anthony

bert_the_turtle
Yes, find the temperature shelters, I did.

From Roger Pielke Sr.’s Climate Science website

Comments On The New Paper “The United States Historical Climatology Network Monthly Temperature Data – Version 2” By Menne Et Al 2009

There is a new paper on the latest version of the United States Historical Climatology Network (USHCN). This dataset is used to monitor and report on surface air temperature trends in the United States. The paper is

Matthew J. Menne, Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2 (PDF). Bulletin of the American Meteorological Society (in press). [URL for a copy of the paper added with thanks and h/t to Steve McIntyre and RomanM on Climate Audit.]

The abstract reads

“In support of climate monitoring and assessments, NOAA’s National Climatic Data Center has developed an improved version of the U.S. Historical Climatology Network temperature dataset (U.S. HCN version 2). In this paper, the U.S. HCN version 2 temperature data are described in detail, with a focus on the quality-assured data sources and the systematic bias adjustments. The bias adjustments are discussed in the context of their impact on U.S. temperature trends from 1895-2007 and in terms of the differences between version 2 and its widely used predecessor (now referred to as U.S. HCN version 1). Evidence suggests that the collective impact of changes in observation practice at U.S. HCN stations is systematic and of the same order of magnitude as the background climate signal. For this reason, bias adjustments are essential to reducing the uncertainty in U.S. climate trends. The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum minimum temperature sensor (MMTS). With respect to version 1, version 2 trends in maximum temperatures are similar while minimum temperature trends are somewhat smaller because of an apparent over correction in version 1 for the MMTS instrument change, and because of the systematic impact of undocumented station changes, which were not addressed version 1.”

I was invited to review this paper, and, to the authors’ credit, they did make some adjustments to their paper in their revision. Unfortunately, however, they did not adequately discuss a number of remaining bias and uncertainty issues with the U.S. HCN version 2 data.

The United States Historical Climatology Network Monthly Temperature Data – Version 2 still contains significant biases.

My second review of their paper is reproduced below.

Review By Roger A. Pielke Sr. of Menne et al 2009.

Dear Melissa and Chet

I have reviewed the responses to the reviews of the Menne et al paper, and, while the authors are clearly excellent scientists and have provided further useful information, they still did not adequately respond to several of the issues that have been raised. I have summarized these issues below:

1. With respect to the degree of uncertainty associated with the homogenization procedure, they misunderstood the comment. The issue is that the creation of each adjustment [time-of-observation bias, change of instrument] relies on a regression relationship. Each of these regression relationships has an r-squared and a standard deviation associated with it, which arise from the evaluation of the adjustment regression. These values (standard deviation and r-squared) need to be provided for each formula that they use.

Their statement that

“Based on this assessment, the uncertainty in the U.S. average temperature anomaly in the homogenized (version 2) dataset is small for any given year but contributes to an uncertainty to the trends of about (0.004°C)”

is not the correct (complete) uncertainty analysis.
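The kind of regression diagnostics the review asks for can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the actual TOB or instrument-change adjustment regression: the idea is simply that an adjustment fit carries an explained variance (r-squared) and a residual standard deviation, and both should accompany the adjusted trends.

```python
import numpy as np

# Minimal sketch: fit an adjustment regression and report its
# explained variance (r-squared) and residual standard deviation,
# not just the adjusted values.  All data here are synthetic
# stand-ins, NOT the actual Karl et al. TOB regression.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)                # stand-in predictor
y = 0.3 * x + rng.normal(0.0, 0.2, x.size)    # stand-in adjustment + noise

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

ss_res = float(np.sum(residuals ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot
resid_sd = float(residuals.std(ddof=2))       # 2 fitted parameters

print(f"r-squared = {r_squared:.3f}, residual sd = {resid_sd:.3f}")
```

The residual standard deviation is what would feed into an honest +/- range on any trend built from the adjusted series.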

2.

i) With respect to their recognition of the pivotal work of Anthony Watts: while they are clear on this contribution in their response, i.e.

“Nevertheless, we have now also added a citation acknowledging the work of Anthony Watts whose web site is mentioned by the reviewer. Note that we have met personally with Mr. Watts to discuss our homogenization approach and his considerable efforts in documenting the siting characteristics of the HCN are to be commended. Moreover, it would seem that the impetus for modernizing the HCN has come largely as a reaction to his work. “

the text itself is much more muted on this. The above text should, appropriately, be added to the paper.

Also, the authors bypassed the need to provide the existing photographic documentation (as a URL) for each site used in their study. They can clearly link in their paper to the website http://www.surfacestations.org/ for this documentation. Ignoring this source of information in their paper is inappropriate.

ii) On the authors’ response that

“Moreover, it does not necessarily follow that poorly sited stations will experience trends that disagree with well-sited stations simply as a function of microclimate differences, especially during intervals in which both sites are stable. Conversely, the trends between two well-sited stations may differ because of minor changes to the local environment or even because of meso-scale changes to the environment of one or both stations..”

they are making an unsubstantiated assumption about the “stability” of well-sited and poorly-sited stations. What documentation do they have that determines when “both sites are stable”? As has been clearly shown on Anthony Watts’ website, it is unlikely that any of the poorly sited locations have time-invariant microclimates.

Indeed, their claim that

“We have documented the impact of station changes in the HCN on calculations of U.S. temperature trends and argue that homogenized data are the only way to estimate the climate signal at the surface (which can be important in normals calculations etc) for the full historical record “

is not correct. Without photographs of each site (which now exists for many of them), they have not adequately documented each station.

iii) The authors are misunderstanding the significance of the Lin et al paper. They state

“Moreover, the homogenized HCN minimum temperature data can be thought of as a fixed network (fixed in both location and height). Therefore, the mix of station heights can be viewed as constant throughout the period of record and therefore as providing estimates of a fixed sampling network albeit at 1.5 and 2m (not at the 9m for which differences in trends were found in Oklahoma). Therefore, these referenced papers do not add uncertainty to the HCN minimum temperature trends per se. “

First, as clearly documented on the Anthony Watts website, many of the observing sites are not at the same height above the ground (i.e. not at 1.5m or 2m). Thus, particularly for the minimum temperatures, which vary more with height near the ground, the height matters in patching all of the data together to create long-term temperature trends. Even more significant is that the trend will be different if the measurements are at different heights. For example, if there has been overall long-term warming in the lower atmosphere, the trend of the minimum temperature at 2m will be significantly larger than when it is measured at 4m (or another higher level). Combining minimum temperature trends measured at different heights will result in an overstatement of the actual warming.

The authors need to discuss this issue. Preliminary analyses have suggested that this warm bias can overstate the reported warming trend by tenths of a degree C.

iv) While the authors seek to exclude themselves from attribution; i.e.

“Our goal is not to attribute the cause of temperature trends in the U.S. HCN, but to produce time series that are more generally free of artificial bias.”

they need to include a discussion of land use/land cover change effects on long-term temperature trends, which now has a rich literature. The authors are correct that there are biases associated with non-climatic and microclimate effects in the immediate vicinity of the observation sites (which they refer to as “artificial bias”), and real effects such as local and regional landscape change. However, they need to discuss this issue more completely than they do in their paper, since, as I am sure the Editors are aware, this data is being used to promote the perspective that the radiative effect of the well-mixed greenhouse gases (i.e. “global warming”) is the predominant reason for the positive temperature trends in the USA.

v) Neglecting to use a complementary data analysis (the NARR) because it only begins in 1979 is not appropriate. The more recent years in the HCN analyses would provide an effective cross-comparison. Also, even if the NARR does not separate maximum and minimum temperatures, the comparison could still be completed using the mean temperature trends.

Their statement that

“Given these complications, we argue that a general comparison of the HCN trends to one of the reanalysis products is inappropriate for this manuscript (which is already long by BAMS standards)”

therefore, is not supportable as part of any assessment of the robustness of the trends that they compute. The length issue is clearly not a justifiable reason to exclude this analysis.

In summary, the authors should include the following:

1. In their section “Bias caused by changes to the time of observation”

the regression relationship used in

“…the predictive skill of the Karl et al. (1986) approach to estimating the TOB was confirmed using hourly data from 500 stations over the period 1965-2001 (whereas the approach was originally developed using data from 79 stations over the period 1957-64)”

should be explicitly included, with the value of explained variance (i.e. the r-squared value) and the standard deviation, rather than referring the reader to an earlier paper. This uncertainty in the adjustment process has been neglected in presenting the trend values with their +/- ranges.

2. In their section “Bias associated with other changes in observation practice”

there is the same need to present the regression relationship that is used to adjust the temperatures for instrument changes; i.e. from

“Quayle et al. (1991) concluded that this transition led to an average drop in maximum temperatures of about 0.4°C and to an average rise in minimum temperatures of 0.3°C for sites with no coincident station relocation.”

What is the r-squared and the standard deviation from which these “averages” were obtained?

3. With respect to “Bias associated with urbanization and nonstandard siting”,

as discussed earlier in this e-mail, the link to the photographs for each site needs to be included, and the citation to Anthony Watts’ work on this subject more appropriately highlighted.

Regarding the statement “In contrast, no specific urban correction is applied in HCN version 2”, this conclusion conflicts with quite a number of urban-rural studies. The authors assume “that adjustments for undocumented changepoints in version 2 appear to account for much of the changes addressed by the Karl et al. (1988) UHI correction used in version 1.”

The use of text concluding that this adjustment process “appear[s]” to account for the urban correction of Karl et al. (1988) indicates some uneasiness by the authors on this issue. They need more text as to why they assume their adjustment can accommodate such urban effects. Moreover, the urban correction in Karl et al. is also based on a regression assessment with an explained variance and standard deviation; the same data Karl used should be applied to ascertain whether the new “undocumented changepoint adjustment” can reproduce the Karl et al. results.

The authors clearly recognize this limitation also in their paragraph that starts with

“It is important to note, however, that while the pairwise algorithm uses a trend identification process to discriminate between gradual and sudden changes, trend inhomogenieties in the HCN are not actually removed with a trend adjustment..

and ends with

“This makes it difficult to robustly identify the true interval of a trend inhomogeneity (Menne and Williams 2008).”

Yet, despite this clear and serious limitation on the ability to quantify long-term temperature trends to tenths of a degree C with uncertainties, they present such precise quantitative trends; e.g.

“0.071° and 0.077°C dec-1, respectively” (on page 15).

They also write that

“…there appears to be little evidence of a positive bias in HCN trends caused by the UHI or other local changes”

which ignores detailed local studies that clearly show positive temperature biases; e.g.

Brooks, Ashley Victoria. M.S., Purdue University, May, 2007. Assessment of the Spatiotemporal Impacts of Land Use Land Cover Change on the Historical Climate Network Temperature Trends in Indiana.

Christy, J.R., W.B. Norris, K. Redmond, and K.P. Gallo, 2006, Methodology and results of calculating Central California surface temperature trends: Evidence of human-induced climate change?, J. Climate, 19, 548-563.

Hale, R. C., K. P. Gallo, and T. R. Loveland (2008), Influences of specific land use/land cover conversions on climatological normals of near-surface temperature, J. Geophys. Res., 113, D14113, doi:10.1029/2007JD009548.

4. On the claim that

“However, from a climate change perspective, the primary concern is not so much the absolute measurement bias of a particular site, but rather the changes in that bias over time, which the TOB and pairwise adjustments effectively address (Vose et al. 2003; Menne and Williams 2008) subject to the sensitivity of the changepoint tests themselves.”

this is a circular argument. While I agree it is the changes in bias over time that matter most, without an independent assessment, there is no way for the authors to objectively conclude that their adjustment procedure captures these changes of bias in time.

Their statement that

“Instead, the impact of station changes and non-standard instrument exposure on temperature trends must be determined via a systematic evaluation of the observations themselves (Peterson 2006).”

is fundamentally incomplete. The assessment of the impact “of station changes and non-standard instrument exposure on temperature trends” must be assessed from the actual station location and its changes over time! To rely on the observations to extract this information is clearly circular reasoning.

As a result of these issues, their section “Temperature trends in U.S. HCN” overstates the confidence that should be given to the quantitative values of the trends and the statistical uncertainty in those values.

If this paper is published, the issues raised in this review need to be more objectively and completely presented. It should not be accepted until the authors do so.

I would be glad to provide further elaboration on the subjects I have presented in this review of their revised paper, if requested.

Best Regards

Roger A. Pielke Sr.

62 Comments
Antonio San
May 12, 2009 5:10 pm

[SNIP – way way Off Topic, and I’m really growing weary of people putting OT stuff in the very first comment. Just saying “OT” and then posting something completely unrelated is not a license to OT. I busted my butt over many months to get a mention in this paper, so I ask that you have a little respect for the content and focus please – Anthony]

Ron de Haan
May 12, 2009 5:20 pm

This is a typical example of a dog turning in circles biting its own tail.
It takes a lot of patience and perseverance before you can teach the dog to sit and give a paw.

Gary
May 12, 2009 5:32 pm

Having tried to understand the histories of several USHCN sites for the SurfaceStations survey, I have serious doubts that it is possible to detect and correct for station relocations. The metadata on such moves are sparse, confusing, and essentially unverifiable. We have discovered that many current stations have large microsite issues, but what of stations that moved – sometimes several miles – 20, 40, or 60 years ago? Ignoring this problem of NON-systematic bias is a major flaw in any recalculation/adjustment of the dataset.

Bill Illis
May 12, 2009 5:40 pm

This paper confirms that the total adjustment made to the raw data is +0.425°C, or +0.765°F (from 1920 to today, the period of the maximum adjustment).
I calculate the total temperature change over this same period at 0.53°C, which leaves just about 0.1°C over nine decades once the adjustments are excluded.

Graeme Rodaughan
May 12, 2009 6:04 pm

Ron de Haan (17:20:30) :
This is a typical example of a dog turning in circles biting its own tail.
It takes a lot of patience and perseverance before you can teach the dog to sit and give a paw.

Yeah the tail is so exciting, there it is… it’s gone, ahh it’s back again, bite it whoops, its gone again…. (endless fun in a loop).

Tom in Texas
May 12, 2009 6:25 pm

If I read the review correctly, it sounded like “start over”.

Ron de Haan
May 12, 2009 6:31 pm

Read this article, watch the video:
http://penoflight.com/climatebuzz/?p=543

May 12, 2009 6:32 pm

With all their money, wouldn’t you think auditing the very sites and instruments they depend on would be first priority?

FredG
May 12, 2009 6:43 pm

Other Topic…
[snip]

Ron de Haan
May 12, 2009 6:51 pm

Well, it is clear that Anthony and his team did the job for them.
It will be handled like every “good idea”.
At first the idea is ignored, in the next phase it is attacked and in the end they will try to steal it.
It’s good that we all know what an immense piece of work has been performed (and still has to be performed) and how important this work will be in all the discussions in the near future about the real impact of AGW/Climate Change.
Correct data is the basis for all science, let alone political decisions that will determine the economic and political future of the USA and the world.
That is what this is all about. Our future, our freedom and our prosperity.

deadwood
May 12, 2009 7:15 pm

Tom in Texas (18:25:22) :
If I read the review correctly, it sounded like “start over”.

I don’t read that at all. I believe that Professor Pielke is providing the authors valuable constructive criticism on some fairly important deficiencies in what is otherwise a good paper.
This is fairly common practice for reviewers of science papers in respectable journals. This is peer review.

DR
May 12, 2009 7:34 pm

I’m not so sure they have much money allocated to properly audit. That there has never been a program of maintenance and calibration for the surface station network, let alone an observational survey like the one undertaken by Anthony Watts, speaks volumes.
RPS sums it all up in this sentence:
“The assessment of the impact “of station changes and non-standard instrument exposure on temperature trends” must be assessed from the actual station location and its changes over time! To rely on the observations to extract this information is clearly circular reasoning.”
I could only imagine what an auditor would do if my lab were run like NCDC 🙂

FatBigot
May 12, 2009 7:37 pm

It might just be an American-English v English-English thing, but I have difficulty with the problems identified by Mr Watts’ survey being described as “systematic” rather than “systemic”.
To describe something as systematic means that it follows a system, where “system” is used to mean a pattern or a particular way of doing things. There is an element of stability to systematic problems because the system stays the same. Apparent anomalies in the results can be adjusted for because you can calculate a theoretical margin of error from a close examination of the established system. All results are subject to that margin of error.
Where problems arise because a system (meaning, in this context, an arrangement of things or rules) contains inherent flaws, those problems are systemic not systematic. Where the system (that is, the arrangement of things or rules) is not constant you cannot calculate a single margin for error for the system as a whole and at all times. A separate adjustment is required for each change in the system.
The crucial difference between systematic and systemic errors is illustrated very well by the surfacestations project. That project has established, for the 70% or so of stations surveyed to date, that the vast majority have changed in location or surroundings to a very substantial extent since they were first established. The problem is systemic in that the system of surface stations today is radically different from the system in place just thirty years ago, perhaps even ten years ago.
Since I first came across the surfacestations project a year or so ago, I have tried to wrap what I laughably call my mind around the concept of calculating an average surface temperature for the USA, or any individual state thereof, and calculating a trend when the margin of error in each measurement is incalculable because of the changing physical conditions under, over and around each measuring point.
There is always an answer to calculating averages and trends from an unstable body of measuring devices and that is to acknowledge the need for a very wide margin of error. On any view of it, it seems to me that the necessary margin of error for the whole wobbly mass of US surface stations must be so high that no narrowly-defined conclusion can be drawn.

rbateman
May 12, 2009 8:01 pm

I have a historical bias question:
I just received my daily weather records for Weaverville Ranger Station from 1894 to 2009 from the Western Regional Climate Center in Reno, NV.
From the Wiki History of SR299 (US 299 prior to 1934) the highway next to the weather station was originally concrete instead of asphalt.
How much difference would concrete vs asphalt make for a UHI?

skepticus
May 12, 2009 8:15 pm

“On any view of it, it seems to me that the necessary margin of error for the whole wobbly mass of US surface stations must be so high that no narrowly-defined conclusion can be drawn.” Aye, there’s the rub. Propagate those errors through non-linear recursive models and the predictions diverge so rapidly that any realistic temperature is equally possible along with a raft of unrealistic ones. So we see Faux Errors being reported in the form of ensemble means and ensemble ranges where the mutual consistency of the models is said to prove something about the real world instead of something about the models.

joshv
May 12, 2009 8:26 pm

@FatBigot – since my days in undergrad physics, “systematic error” has always meant basically non-random error. Error arising from miscalibration, or other properties of the system that usually lead to a consistent measurement bias in one direction or the other.
I don’t know about the brit vs. american English angle, but we use “systemic” on this side of the pond as well – I’d consider them basically synonymous – though I’ve never heard anyone use the term “systemic error”.

Eric Naegle
May 12, 2009 8:31 pm

I’m confused. Mr. Pielke begins his review with: “they are clearly excellent scientists” and then takes their work apart with criticisms like, “fundamentally incomplete,” etc. etc.. Mr. Pielke’s criticisms were clearly stated and, as far as I can see, this study is seriously flawed. After reading Anthony’s report and other blog posts too, I can’t believe that these “scientists” could have any certainty, at all, as to the accuracy of the historical temperature data. Is Mr. Pielke just being nice in his remark or, can you write a paper this bad and still be an “excellent scientist”?

kim
May 12, 2009 8:35 pm

Wow, now I really hate to tiptoe off topic, but this can’t wait. Released today is a blockbuster memo from the White House’s Office of Management and Budget alleging that the Precautionary Principle has been stretched beyond the science, that the EPA’s finding of CO2 endangerment is insufficiently documented, and that EPA regulation of CO2 would lead to serious economic damage. Apparently in response, EPA Director Lisa Jackson told Congress today that the EPA endangerment finding won’t necessarily lead to regulation.
This memo gives heavy legal ammunition to anyone suing the EPA over regulations, it pressurizes the debate in Congress over Cap and Trade, and, by tomorrow, ought to give the opponents of Carbon encumbrance in Congress some heavy artillery.
Code Blue, Code Blue, the CO2=AGW paradigm is flat-lining.
========================================

E.M.Smith
Editor
May 12, 2009 8:45 pm

Their statement that
“Based on this assessment, the uncertainty in the U.S. average temperature anomaly in the homogenized (version 2) dataset is small for any given year but contributes to an uncertainty to the trends of about (0.004°C)”

So now we’re calculating things out to 4/1000 C and asserting that is the most uncertainty possible when the raw data are in full degrees F.
Sigh. Does no one “get it” that you can not take a set of full degree F single samples (for a single location at a single time – no oversampling) average them all together and get any more precision than full degrees F?
It’s called FALSE PRECISION and it’s WRONG.
If any 60 samples for one month for a place average together to 12.00001, all I can know is that it could be 11.00002, or 13.00000, or anywhere in between. I can say that I have “12 +/- 1 F” but I cannot say that I have 12.00001 F.
The “Monthly Average Temperature” is not a physical thing that can be over sampled (measured several times to create better precision than the measuring device itself supports). It is a mathematical creation and so is limited to the original precision of the raw data and can never have a higher precision than that.
Further, any calculations that use that monthly average will also be limited to whole degree F precision. That applies to the calculated anomalies. But they ought to have less precision than whole degrees F due the large number of calculations done to get to that anomaly result. Error accumulates.
Just because your calculator has 10 digits of precision it doesn’t mean they have any meaning.
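For what it’s worth, the quantization effect discussed in this comment can be poked at numerically. This is a toy sketch with synthetic data; the 60 readings and their uniform spread between 50 and 70 °F are my assumptions, not actual station records:

```python
import numpy as np

# How much can whole-degree rounding shift a monthly mean?
# Toy check: 60 synthetic "daily" readings (uniform 50-70 F, an
# assumption), recorded to the nearest whole degree F.
rng = np.random.default_rng(0)
true_daily = rng.uniform(50.0, 70.0, 60)
recorded = np.round(true_daily)          # whole-degree F records

true_mean = float(true_daily.mean())
recorded_mean = float(recorded.mean())
shift = abs(true_mean - recorded_mean)

# Each reading's rounding error is at most 0.5 F, so the mean's
# worst-case shift is also 0.5 F; independent rounding errors tend
# to partially cancel, so the typical shift is much smaller.
print(f"true mean {true_mean:.3f} F, recorded mean {recorded_mean:.3f} F, "
      f"shift {shift:.3f} F")
```

The worst-case bound on the mean is the rounding half-width itself; how far the errors actually cancel depends on assumptions about their independence, which is part of what the debate here is about.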

John F. Hultquist
May 12, 2009 9:00 pm

First, that is a well articulated review and Roger Pielke Sr. is thanked, by me at least, for the effort this involved. These things take a lot of time reading, checking, reading again, and so on. Impressive!
Second, Anthony – “muted” is way better than “ignored.” They have problems with this data and they admit knowing it thanks to the Surface Station Project. Bravo!
Third, they ought to publish this with a black-box warning on the front and back covers that, while the database is improved, it is still not adequate for global, regional, and long-term comparisons. Also, values such as averages should not be presented with digits to the right of the decimal. Any published work using these data should carry this warning.
Fourth, they should commit to at least a 10% sample of stations in an attempt to figure out what has gone on and what, if anything, can be done about it. They may find out they can’t make this rattletrap collection of stations into a scientific instrument at any cost.
Fifth, any researcher using this data should have to sign a statement that they have read and understand your recent Heartland publication:
http://www.heartland.org/books/SurfaceStations.html

Konrad
May 12, 2009 9:11 pm

Anthony,
Maybe you have been asked this before, but is there any work underway to create a temperature record using just CRN-1 / 2 surface stations? Looking at your map of surveyed stations, while there are very few of these, they do seem more evenly distributed and greater in number than the stations used in a recent Antarctic temperature reconstruction attempt.
The multiple issues you have identified with stations rated below CRN-2 would indicate that TOB and UHI adjustments would be unlikely to make data from these stations usable. However it seems possible that such adjustments could be applied to CRN-1 / 2 data. The GISS “Lights” adjustment may even be usable.

Evan Jones
Editor
May 12, 2009 9:39 pm

Never judge a duck by its cover.

May 12, 2009 10:02 pm

Eric Naegle (20:31:08):
I’m confused. Mr. Pielke begins his review with: “they are clearly excellent scientists” and then takes their work apart with criticisms like, “fundamentally incomplete,” etc. etc.. Mr. Pielke’s criticisms were clearly stated and, as far as I can see, this study is seriously flawed. After reading Anthony’s report and other blog posts too, I can’t believe that these “scientists” could have any certainty, at all, as to the accuracy of the historical temperature data. Is Mr. Pielke just being nice in his remark or, can you write a paper this bad and still be an “excellent scientist”?
Roger Pielke is applying the rules of etiquette and good manners to writing. In any formal writing, the author first mentions something favorable and sensitively acceptable; after the pleasing introduction, the author mentions the inauspicious aspects.

Kazinski
May 12, 2009 10:12 pm

Great work Anthony,
I had a suggestion for a practical use for the work you’ve done grading the USHCN stations. Since you have survey data on 80% of the stations, and 11% of those are CRN-1 and 2, it seems to me that computing the temperature anomaly using just those stations could be used to gauge the UHI signal in the total USHCN network.

Leon Brozyna
May 12, 2009 10:32 pm

Thanks for sharing this demonstration of how peer review ought to work. Of course peer review is not just about pre-publication review of a paper; the review continues on (or it should) well after a paper is published. And then there is the other shocker – scientists are people too – they’re human; and when you raise a criticism of their work, it’s best to do a little ego stroking.
I was especially pleased to see this recognition of the work done by Anthony and the Surfacestations Project, “… Moreover, it would seem that the impetus for modernizing the HCN has come largely as a reaction to his work. “
I think it entirely appropriate that Dr. Pielke calls for greater reference to the work done by the project – links to the site, for example.
This paper sounds like it is another step in the right direction, albeit a slow, tentative step. I guess we’ll just have to wait for the hard part to get done – just how much of a bias has been introduced due to bad siting, etc.

May 12, 2009 11:14 pm

E.M.Smith (20:45:12) :
. . . So now we’re calculating things out to 4/1000 C and asserting that is the most uncertainty possible when the raw data are in full degrees F.
Sigh. Does no one “get it” that you can not take a set of full degree F single samples (for a single location at a single time – no oversampling) average them all together and get any more precision than full degrees F? . . .
The “Monthly Average Temperature” is not a physical thing that can be over sampled (measured several times to create better precision than the measuring device itself supports). It is a mathematical creation and so is limited to the original precision of the raw data and can never have a higher precision than that.
Further, any calculations that use that monthly average will also be limited to whole degree F precision. That applies to the calculated anomalies. But they ought to have less precision than whole degrees F due the large number of calculations done to get to that anomaly result. Error accumulates.

Hi E.M.
Recalling from my geodesy days, accuracy and precision are two different things. Precision for any instrument is half its smallest scale division. The two stations that I visited had MMTS units reading down to 0.1°, so they gave a precision of 0.05°. Accuracy is a whole ‘nuther thing, depending on how well (or recently) they were calibrated, and they could be degrees off and not know it.
You can get accuracy greater than your precision if you take multiple measurements of the same thing, with different observers and different instruments. But with the station temps, while we’re getting 60 readings a month, those are all of different things – the temps for each different day – and all with one instrument, so we know accuracy cannot be better than 0.05°, and is probably worse.
The Monthly Average Temperature is indeed a creation, meaningful only for trends, and their 0.004° uncertainty is at least an order of magnitude too optimistic. The Stevenson screen thermometers read only to full degrees, plus they are subject to observer bias, so that also kicks in to degrade the overall accuracy.
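For what it’s worth, the quantization piece of this argument can be explored with a quick simulation (all numbers here are synthetic, not station data). Rounding each daily reading to the nearest whole degree perturbs the monthly mean by much less than a whole degree, because the rounding errors partly cancel across 30 days; systematic instrument bias, by contrast, does not average out.

```python
import random
import statistics

random.seed(42)

def monthly_mean_error(round_to, n_days=30, n_trials=5000):
    """Mean absolute error in a 30-day average caused solely by
    rounding each daily reading to the nearest `round_to` degrees."""
    errs = []
    for _ in range(n_trials):
        true = [60 + random.gauss(0, 5) for _ in range(n_days)]
        logged = [round(t / round_to) * round_to for t in true]
        errs.append(abs(statistics.mean(logged) - statistics.mean(true)))
    return statistics.mean(errs)

# The rounding error of the mean shrinks roughly as 1/sqrt(n_days):
print(monthly_mean_error(1.0))   # whole-degree readings: a few hundredths of a degree
print(monthly_mean_error(0.1))   # tenth-degree readings: a few thousandths
```

Note this says nothing about accuracy: a miscalibrated sensor shifts every reading the same way, and no amount of averaging removes that.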
It’s amazing that a couple tenths of a degree anomaly can drive species to extinction, while they survive day/night differences of 10 to 20 degrees.
As an aside, Glenn Beck mentioned Anthony’s report on the air the other day.

Phillip Bratby
May 12, 2009 11:24 pm

What I would like to see, and it should be a formal part of the peer-review process, is publication of the reviewers’ comments together with the authors’ responses as to how they have addressed them. The responses would include full justification for why any comments have been rejected. It appears from the above that many of the review comments have been ignored without justification. Ideally, both the authors and reviewers should ultimately sign to accept that the comments have been addressed satisfactorily or that there remain points of disagreement. These should be appended to the paper so that anybody reading the paper, or using information in it, understands the points of disagreement.

Roger Knights
May 13, 2009 12:06 am

“What I would like to see, and it should be a formal part of the peer-review process, is publication of the reviewers’ comments together with the paper’s authors responses”
That’ll happen once journals move online, where space is limitless and costless.

May 13, 2009 1:29 am

My friend Bratby (23:24:59) said:
“What I would like to see, and it should be a formal part of the peer-review process, is publication of the reviewers’ comments together with the authors’ responses as to how they have addressed them. The responses would include full justification for why any comments have been rejected.”
I hate to disagree with someone who has been kind enough to comment on my modest excuse for a blog, but I have to disagree with Mr Bratby on this.
Peer review does not give the “peer” a right of veto. He or she can raise objections or suggest improvements but it is for the authors to decide what changes, if any, they make as a result of the reviewer’s comments. Even if the reviewer is correct according to everyone other than the authors, they are entitled to stick to their guns and say “this is our take on the issue, like it or lump it”.
Publishing reviewer’s comments and the response to them would undermine one of the central roles of academic papers on any subject, namely to expose a new thought and leave the reader to form his own view of whether it is sound or unsound. Others will respond if they disagree, that’s what academic debate is all about. An honest author confronted by a comment “don’t you think you’re wrong about X because of Y” will acknowledge that the same argument was raised by Professor Whoever in his pre-publication review and will either change his position or maintain it and debate the point.

May 13, 2009 2:47 am

I very much appreciate the WUWT site and have even referred the site to a UK journalist, which resulted in a WUWT-inspired newspaper article. However, there are, quite naturally, times when I disagree with the WUWT argument (e.g. significant solar cycle influence). I’m also less than convinced about the surface station survey. I just can’t see where it’s heading, and I think a number of posters are confused about what it might achieve.
First, let’s be clear that none of the temperature data organisations (GISS, Hadley, RSS & UAH) are claiming that they have an accurate and precise figure for the average temperature of the surface or troposphere. What they are saying is that, within certain error bounds, they are able to provide a figure for how much the temperature has changed. As an analogy, consider a huge lake. We might not have a clue about the depth and volume of the lake, but with enough samples we could estimate the change in the water level to a reasonable level of accuracy. In other words, we could provide an anomaly relative to a time period of our choosing.
Of course, it’s possible that temperature measurements could be influenced by non-climatic factors, e.g. urban heat. But for this to affect the overall trend, the trend in urban heat would need to change. If urban heat at a station is contributing 2 deg warming in 1970, and is also contributing 2 deg in 2008, then urban heat should have no effect on the 1970-2008 station trend.
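The arithmetic behind this can be illustrated with made-up numbers (a sketch, not real station data): a constant urban offset drops out of a least-squares trend entirely, while a growing offset adds directly to it.

```python
def trend(years, temps):
    """Least-squares slope in deg per year."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    var = sum((x - mx) ** 2 for x in years)
    return cov / var

years = list(range(1970, 2009))                  # 1970-2008 inclusive
background = [0.02 * (y - 1970) for y in years]  # hypothetical 0.02 deg/yr climate signal
constant_uhi = [t + 2.0 for t in background]     # 2 deg of urban heat, unchanging
# Urban heat growing linearly from 1 deg to 2 deg over the period:
growing_uhi = [t + 1.0 + (y - 1970) / 38 for y, t in zip(years, background)]

print(round(trend(years, background), 3))    # 0.02
print(round(trend(years, constant_uhi), 3))  # 0.02  -- the constant offset cancels
print(round(trend(years, growing_uhi), 3))   # 0.046 -- UHI growth inflates the trend
```

So the question, as the replies below take up, is not whether urban heat exists at a station but whether it has grown over the trend period.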
So is there a UH trend in the US record? Maybe. It’s perfectly possible that a UH signal is present if you go back far enough (e.g. 1900), but I’m not convinced UH is significant over the past few decades. These are the temperature trends for the past 30 years (1979-2008) for contiguous 48 states.
GISS +0.25 deg per decade
UAH +0.26 deg per decade
The fact that the warming rate at the surface is so similar to the warming in the lower atmosphere means that it is not going to be easy to convince anyone that urban heat had a significant impact on the US temperature record. What’s more, because of the land/ocean ratio and the fact that the US is only 2% of the earth’s surface, urban heat will have even less impact on the global record.
Anthony’s survey may produce some interesting results, but it’s not going to somehow demolish the global warming case.

Bob B
May 13, 2009 3:27 am

Anthony, great post. BTW did you get to read any of the other reviewers comments?

GregS
May 13, 2009 4:41 am

“If urban heat at a station is contributing 2 deg warming in 1970, and is also contributing 2 deg in 2008, then urban heat should have no effect on the 1970-2008 station trend.” – John Finn
This assumes that all heat islands are created equal and that city growth has neither sprawled nor intensified in those 38 years.
It also assumes that micro-climates have not changed in 38 years, i.e. that asphalt parking lots were not built, or acres of shingled roofs added, within the vicinity of a surface station.
A station that I am studying is located at a sewage treatment plant that underwent a major expansion in the 1990s. It is also located downwind from the county fairgrounds, which were paved two years ago.

hunter
May 13, 2009 5:47 am

The apologists think they can isolate a mistake in this system to 3 decimal places?
Pure bunkonium.
And it is increasingly difficult to grant people who write reports like this good faith in their motives.

pyromancer76
May 13, 2009 6:06 am

To John Finn (02:47:32): Nothing new will demolish the global warming case. It has already been scientifically demolished many times over. And UHIs are the least of the egregious errors. At present AGW in its many guises continues mainly as a political club, but also as a religion for the “masses” of followers, however many remain devoted. It is difficult to understand why you do not see the need for accurate, first-world, scientifically valid gathering of temperature data with which many government policies are made.
As I understand it, Anthony Watts’ final version of the Surfacestations report will identify those stations that (might) have remained accurate over the historical time period this paper addresses. Then we might have the data to support a report like “The U.S. Historical Climatology Network Monthly Temperature Data — Version 2.” Right now we do not — and that stinks. Thanks to Dr. Pielke, Sr. for the excellent peer review. The authors, Menne, Williams, and Vose, are very fortunate Anthony gave them so much input. He and his crew are the ones with the expertise that we need in our government.
I remember the trial of O.J. Simpson. That was the first time I got a glimpse of a major city (Los Angeles) in the U.S. of A. functioning like the stereotype of a third-world country. What a shock. And the shocks keep coming. I am so grateful for Anthony’s, Dr. Pielke’s, and others’ efforts.

Matt Bennett
May 13, 2009 6:11 am

John Finn,
EXACTLY. None of this is going anywhere, at least as far as disproving that CO2 is currently driving climate and that the temperature direction is up. It’s all smoke and mirrors; the outcome will still be the same because the CO2 effect is indisputably real and, perhaps most pertinently of all, this is a sampling of an area much less than one twentieth of the global surface.
Won’t stop them though.

Matt Bennett
May 13, 2009 6:14 am

Hunter,
Try reading John’s post above – if you don’t even realise the aim is to establish an anomaly value and not an absolute value, you’ve got a lot of reading to do.

Rob
May 13, 2009 6:32 am

John Finn said,
Of course, it’s possible that temperature measurements could be influenced by non-climatic factors, e.g. urban heat. But for this to affect the overall trend, the trend in urban heat would need to change. If urban heat at a station is contributing 2 deg warming in 1970, and is also contributing 2 deg in 2008 , then urban heat should have no effect on the 1970-2008 station trend.
So you are suggesting that all urban development stopped in 1970? Try China and UHI.
http://icecap.us/images/uploads/URBANIZATION_IN_THE_TEMPERATURE_DATA_BASES.pdf

MattN
May 13, 2009 7:05 am

Did Pielke find anything correct in the document? That might have been a shorter write-up…

TerryBixler
May 13, 2009 7:09 am

Thank you, Anthony and Roger, for your continued efforts. I am not optimistic, but I hope that actual science will prevail in what feels like a Salem witch hunt with accompanying trials.

hunter
May 13, 2009 7:17 am

Matt,
If the anomaly value is far below the margin of error, it is meaningless.
I would suggest that GIGO is one of the main underpinnings of AGW.
How can you defend such garbage with any seriousness?

Evan Jones
Editor
May 13, 2009 7:18 am

Even more significant is that the trend will be different if the measurements are at different heights.
And if station height affects trends, one can reasonably presume that far more egregious violations may do so as well.
Yilmaz (2008). Let us not forget the lessons of Yilmaz . . .
This is a typical example of a dog turning in circles biting its own tail.
Is he channeling us, Anthony?
The metadata on such moves are sparse, confusing and essentially unverifiable.
And just plain wrong.
I calculate the total temperature change over this same time period at 0.53C or just 0.1C over 9 decades excluding the adjustments.
For US data, weighting each station equally, raw trends show +0.14C. For USHCN1 TOBS, it’s +0.31
Looking at your map of surveyed stations, while there are very few of these, they do seem more evenly distributed and greater in number than the stations used in a recent Antarctic temperature reconstruction attempt.
Maybe so. But they are not evenly distributed by any means. They are disproportionately concentrated in naturally warming areas.
the aim is to establish an anomaly value and not an absolute value
Yes. A problem arises, however, when offset data becomes conflated with trend, which, for example, is what happened in the case of Lampasas, TX, or Chama, NM.

hunter
May 13, 2009 7:26 am

Matt,
More importantly, no one is disputing that CO2 has an effect. What has been falsified is the IPCC/Hansen/Gore hype that the effect of the change in CO2 is going to be apocalyptic.
Until AGW believers can learn to distinguish the facts of CO2 from their faith-based beliefs about what CO2 is doing, they will continue to believe untrue things and push for unwise policies. And they will continue to embarrass themselves defending Mann, Hansen, Lovelock, Gore, the IPCC, etc.

Evan Jones
Editor
May 13, 2009 7:50 am

The fact that the warming rate at the surface is so similar to the warming in the lower atmosphere means that it is not going to be easy to convince anyone that urban heat had a significant impact on the US temperature record.
However, it is a basic premise of AGW theory that, given a warming trend, lower troposphere warms at a faster rate than surface. (FWIW, the issue is not mainly UHI, but microsite and station moves.)
None of this is going anywhere, at least as far as disproving that CO2 is currently driving climate and that the temperature direction is up.
But surely this is not about CO2, per se, in the first place. It is about the 100+ year history of US climate, most of which occurred before the era of “Big CO2”.

TJ Overton
May 13, 2009 8:03 am

The problem is sheer laziness. It would actually involve work to manually look at each of the site histories, analyze siting differences for each one, determine the appropriate “fix” for it, and generate a data set. Especially since you might actually have to do some “science” in the process. Like, I don’t know, maybe set up some test stations which match conditions of the actual stations to verify that things work the way you think they do and that your fixes really do correct for problems.
Nah, it’s so much more fun and a lot less time consuming to play computer games. But it isn’t science.

May 13, 2009 8:24 am

I’m confused. Mr. Pielke begins his review with “they are clearly excellent scientists” and then takes their work apart with criticisms.
When you are about to hang a man it never hurts to be polite.

May 13, 2009 8:31 am

Matt Bennett:

“None of this is going anywhere, at least as far as disproving that CO2 is currently driving climate and that the temperature direction is up. It’s all smoke and mirrors…”

You have it exactly backwards, and you are the one using smoke and mirrors.
No one has to “disprove” that CO2 is driving the climate. It is preposterous to believe that a very minor trace gas is driving much of anything.
It is not up to normal skeptics to “disprove” anything. It is up to the purveyors of the CO2=AGW silliness to prove their case. They have utterly failed to do so.
If increases in CO2 caused global warming, the globe would be getting warmer. Instead, despite a steady rise in CO2, the planet has been cooling for most of the last decade. Therefore, any insignificant warming due to CO2 is inconsequential, and can be completely disregarded.

May 13, 2009 8:41 am

To E.M. Smith and Mike McMillan, re precision and accuracy:
I marvel at the contrived level of precision applied to the temperature record. I am a cooperative observer at a station that is not part of the USHCN, but it has identical equipment, and is actually sited pretty well.
Mike, when you saw MMTS readings to 0.1 deg precision, what you saw was a false precision of the display. The accuracy of the Nimbus instrument is actually 0.3 deg, by its published specs. Anyhow, we round off to one degree when filling in the paper record, at the direction of our local NWS supervisor. I leave it to you guys to figure out what .3, .1, and rounding do to the error rate.
The Nimbus MMTS has a feature that allows retrieval of previous days’ highs and lows, for stations like ours that are not manned continually. That feature is problematic. The retrieval method is non-intuitive, and the purported stored readings seldom pass a sanity test. I wonder how much error creeps in because of this.
Of course, before the MMTS came along, it was fluid filled glass thermometers, no two of which ever agreed closer than two degrees F when kept side by side in the same enclosure.
Dan
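The compounding Dan describes can be sketched numerically. The 0.3 deg spec, the 0.1 deg display, and the whole-degree paper record come from his comment; the error model itself (Gaussian instrument error, uniform quantization) is an assumption for illustration only.

```python
import random
import statistics

random.seed(0)

def logged_reading_error(n_trials=50000):
    """Spread (1-sigma) of a single logged observation when a sensor
    with ~0.3 deg accuracy is read from a 0.1 deg display and then
    rounded to the whole degree on the paper form."""
    errors = []
    for _ in range(n_trials):
        true = random.uniform(20.0, 90.0)
        sensed = true + random.gauss(0, 0.3)   # instrument accuracy (assumed 1-sigma)
        displayed = round(sensed, 1)           # 0.1 deg display resolution
        logged = round(displayed)              # whole-degree paper record
        errors.append(logged - true)
    return statistics.pstdev(errors)

print(logged_reading_error())   # roughly 0.4 deg of scatter per observation
```

Under this model a single logged value carries roughly 0.4 deg of scatter, which is the raw material from which 0.004 deg claims are later derived.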

HarryL
May 13, 2009 9:49 am

Anthony, I live in south Jersey, 30 miles south of Philadelphia. I’m told that Philly’s official temps are taken from the Philadelphia airport. The difference in temps is sometimes 20 degrees, especially during winter. My question is: where is the temp station located at the airport? It sure does seem to be in a UHI location.

AnonyMoose
May 13, 2009 10:55 am

It does seem likely there is a systematic error, based upon the effort required to install MMTS cable. That effort increases more than linearly with distance, as fatigue and complications mount the farther the trench runs. So MMTS sensors tend to be close to the building which contains their operator.
The sensor height is an interesting issue when trying to deal with fractions of degrees.

Evan Jones
Editor
May 13, 2009 11:17 am

The problem is sheer laziness. It would actually involve work to manually look at each of the site histories, analyze siting differences for each one, determine the appropriate “fix” for it, and generate a data set.
It would be pretty much pointless. The site histories are wrong. There are station moves on record where no moves occurred. There are station moves that are not on record. Coordinates for stations are imprecise to the point of uselessness, and get (much) worse the further back one goes.
Even if one reconstructed all of the site histories from the B-91 forms, there would be many gaps and no record whatever of changes in local environment.

Evan Jones
Editor
May 13, 2009 11:22 am

I’m told that Philly’s official temps are taken from the Philadelphia airport. The difference in temps is sometimes 20 degrees, especially during winter.
From 1950 to 1980, when stations were moved from urban areas to airports, the airports were cooler. (“Adjustments were made.”) Since then airports have warmed at a considerably faster rate than the original locations, for obvious reasons. (Adjustments were not made.)
Like, I don’t know, maybe set up some test stations which match conditions of the actual stations to verify that things work the way you think they do and that your fixes really do correct for problems.
That would do for a start.

Frank Perdicaro
May 13, 2009 2:02 pm

On the UHI effect:
UHI at local, state, and federal facilities has clearly increased in the past few decades. A key factor has been the ADA, the Americans with Disabilities Act. Try, just try, to find a government office that does not have an excellent AC system. All that heat and humidity is pumped outside, where the temperature equipment is. The ADA has also forced just about everything to be paved and re-graded.
An excellent example is the Cogswell Station here in SoCal. It is WAY out on a closed road, 7 miles from a main road. The station has been around since the 1930s, but now the pavement of the parking lot extends to a few feet from the temperature sensor.
40 years ago there was almost no chance a small government outpost would have a large AC system. Today it is required.

Matt Bennett
May 13, 2009 4:44 pm

Hunter,
“I would suggest that GIGO is one of the main underpinnings of AGW.”
And you would be wrong because to even begin to think that the reality of the major disturbance caused to climate by CO2 relies solely on computer models is to already fool yourself with the comfort offered by a lovely straw-man pillow. Be my guest, but physics, thermodynamics and real world observation tell us otherwise.
“no one is disputing that CO2 has an effect”
You wouldn’t surmise that from Smokey’s ranting post above. It shows he has no idea how much work across multiple disciplines by thousands of scientists has gone into showing just exactly how wrong he is. But I guess he’s smarter’n all them ivory tower guys (who actually apply science correctly and are all too aware of the error bars and limitations in their own work). I’d recommend he actually read some reputable journals.
And finally, my point stands that we are talking about a temp record of less than 5% of the earth’s surface. Like MWP proponents, they just take one part of the globe that suits their cause and extrapolate. Hardly good science. As I said, if you want to convince yourselves of the reality of AGW, I know it’s dry and not always delivered in sound bites, but look to the scientific papers themselves, not some hack’s interpretation of them.

May 13, 2009 6:22 pm

I guess my 08:31:58 post made someone mad. But it wasn’t a ‘rant,’ read it again and you’ll see. I’m capable of ranting on occasion, but that particular post was made to set the record straight for the umpteenth time: skeptics have no duty to prove anything.
I don’t know why it’s so hard for some folks to get their heads around that concept, which is a basis of the Scientific Method. Those promoting a new hypothesis have the burden of showing that it explains reality better than the theory they are trying to displace, in this case natural climate variability around a gradually rising trend line.
There is no empirical proof that CO2 has any discernible, measurable, real world effect. I understand the assumed forcing, although I disagree with the IPCC’s inflated guesstimate; CO2 is clearly a weak sister compared with other climate effects. If CO2 forcing weren’t so weak, the climate would be getting warmer as CO2 rises. In fact, show us some real world proof that CO2 is not a negative forcing along with water vapor.
When someone says: “None of this is going anywhere, at least as far as disproving that CO2 is currently driving climate…”, they just don’t get the Scientific Method. No one has falsified the theory of natural climate variability, and no one has been able to point to a chart and say, “That part of the [very minor] temperature rise over the past century is due to CO2,” and not simply to natural climate fluctuations above and below trend — which it has been doing naturally, for thousands of years, and within the same historical parameters.
The planet has been gradually warming in fits and starts since the last great Ice Age, and although CO2 has risen, the current climate is well within the parameters of natural fluctuations, which have taken place many times, and in the same way, since well before the first SUV appeared on the scene. The current climate is entirely normal.
“Never increase, beyond what is necessary, the number of entities required to explain anything” — William of Ockham (1285-1349) Natural climate variability explains the climate, and there is no need to add CO2 to the explanation.
Finally, the assumption that CO2 is driving the climate has been shown to be false by the planet itself: as CO2 steadily rises, the climate steadily cools.
Of course the believers in the failed CO2=AGW hypothesis are now saying that global warming causes global cooling. They can believe that nonsense if they want, I’ll listen to what planet Earth is telling us.

Matt Bennett
May 13, 2009 7:48 pm

Smokey,
I agree wholeheartedly that skeptics don’t have to PROVE the negative. You have that spot on. Apologies for calling it a rant, by the way, you’re right, it probably doesn’t qualify. But I’ll get to the meat of the issue and your points again a bit later when I have another break.

Evan Jones
Editor
May 13, 2009 10:38 pm

Faith (n): Belief without evidence in someone who speaks without knowledge of things without parallel. (Ambrose Bierce)
It is a well known truism that 90% of everything is BS. (The IPCC AR4 is 90% certain about CO2-caused AGW.)

May 13, 2009 11:59 pm

evanmjones (21:39:50) :
Never judge a duck by its cover.

Never judge a crock by its cover.

May 14, 2009 12:05 am

None of this is going anywhere, at least as far as disproving that CO2 is currently driving climate and that the temperature direction is up. It’s all smoke and mirrors, the outcome will still be the same because the CO2 effect is indisputably real and, perhaps most pertinently of all, this is a sampling of an area much less than one twentieth of the global surface.
If it is real doesn’t it have to explain the cooling trend for the last eight or ten years?

May 14, 2009 2:21 am

Matt Bennett (06:11:58) :
John Finn,
EXACTLY. ……because the CO2 effect is indisputably real and, perhaps most pertinently of all,…….

Not what I said, by the way. We’ve had (or have) a similar decadal warming trend (~0.16 deg) to that in the 1915-1945 period (~0.14 deg).

hunter
May 14, 2009 6:18 am

Matt,
The only relationship between what AGW predicts about CO2 on the climate and reality are strictly in your mind.
AGW predictions have been falsified. Period.
Find new explanations of how the physics of CO2 are manifested in the climate.
Your side was wrong.
Deal with it.

hunter
May 14, 2009 6:26 am

And Matt,
AS to your weak argument that the US temp record is only 5% of the globe, so what?
It is also the best record kept.
The fundamental fallacy of using a system that is not accurate to within 1° to claim changes of 0.001° is 8th-grade science lab stuff.
No matter how much data Mann & pals have claimed to wring out of the record, their work has never withstood reasonable scrutiny.
The only reasonable explanation for the tenacious nature of AGW is that it is a social movement. It is certainly not supported by the climate science.
Now that the new thread about Arctic cyclic temps is posted, can you reasonably still claim that AGW theory predicted anything useful about the Arctic?
Only if your belief in AGW is non-falsifiable.
Face it:
The heat content of the oceans is vastly different from AGW theory.
The troposphere has not behaved as AGW predicted.
ACE is not anywhere close to what AGW predicted.
World temps are not only not reliable, they are going in the opposite direction.
World Sea Ice is not as AGW predicted.
There is not one bit of weather worldwide that is not well within historical norms.
In real world science, when facts do not support theory, the theory gets changed.
In AGW, no matter the failure, it is the wickedness of those who point out AGW is wrong that is important.

wisc.edu
May 29, 2009 7:49 pm

Menne and Williams have done very solid work here. This promises to be one of the most important papers on climate published this year. The self-righteous tone of so many comments on this blog is astounding. It’s easy to anonymously criticize someone’s work that you don’t actually have to dialogue with. I suggest that the critics read the paper carefully once it’s published, and research some of the elements that they don’t understand before launching into ill-informed diatribes about its faults.