NCDC writes ghost "talking points" rebuttal to surfacestations project

UPDATE: The “ghost author” has been identified, see the end of the article.

When I first saw it, I laughed.

When I saw the internal memo circulated to top managers at NOAA, I laughed even more.

Why? Because NOAA and NCDC are rebutting an analysis which I have not even written yet, using old data, and nobody at NOAA or NCDC had the professionalism to put their name to the document.

First let’s have a look at the National Climatic Data Center’s web page from a week ago:

[Image: NCDC web page, June 12, 2009]

I was quite surprised to find that my midterm census report on the surfacestations.org project evoked a response from NCDC. I suppose they are getting some heat from the citizenry and some congress critters over lack of quality control. I was even more surprised to see that they couldn’t even get the title right, particularly since the title of my report defines most of what NCDC is all about: surface temperature measurement.

[Image: Surface Stations report cover]

Here’s the title of my report released in March.

“Is The U.S. Surface Temperature Record Reliable?”

But NCDC calls it: “Is the U.S. Temperature Record Reliable?”

True, a small omission, the word “surface”. But remember, this is a scientific organization that writes papers for peer reviewed journals, where accuracy in citation is a job requirement. Plus, the director of NCDC is Thomas Karl, who is now president of the American Meteorological Society. The Bulletin of the American Meteorological Society is considered a premier peer reviewed journal, and Karl has written several articles for it. For him to allow a botched citation like this is pretty embarrassing.

[NOTE: For those that just want to read my report, please feel free to download and read the free copy here (PDF, 4 MB)]

But the citation error is not just on the NCDC webpage, it is in the PDF document that NOAA and/or NCDC wrote up. I can’t be sure which, since they cite no named author.

[Image: NCDC “talking points” document]

You can download it here (PDF 91KB)

I had a few people point out the existence of the NCDC rebuttal to me over the last week, and I’ve been biding my time. I wanted to see what they’d do with it.

Over the weekend I discovered that NOAA had widely circulated NCDC’s “talking points” document to top level division managers in NOAA. I was given this actual internal email by someone who appears not to agree with the current NOAA/NCDC thinking.

Date: Tue, 16 Jun 2009 16:26:48 -0600

From: Andrea Bair <Andrea.Bair@noaa.gov>

Subject: Talking Points on SurfaceStations.org

To: _NWS WR Climate Focal Points <WR.Climate.Focal.Points@noaa.gov>,

_NWS WR MICs HICs DivChiefs <wr.mics.hics.divchiefs@noaa.gov>,

_NWS WR DAPM-OPL <Wr.Dapm.Opl@noaa.gov>,

Susan A Nelson <Susan.A.Nelson@noaa.gov>,

Jeff Zimmerman <Jeff.Zimmerman@noaa.gov>, Matt Ocana <Matt.Ocana@noaa.gov>

User-Agent: Thunderbird 2.0.0.21 (Windows/20090302)

Recently I was asked if we had any official talking points on the surfacestations.org report that came out recently.  Attached are some talking points from NOAA that we can use.

AB

Note the “NWS WR MICs HICs DivChiefs” list. It seems pretty much everyone in management at NOAA got this email, yet a week later the citation error remains. Nobody caught it.

I find it pretty humorous that NOAA felt that a booklet full of photographs that many said at the beginning “don’t matter” required an organization wide notice of rebuttal. Note also some big names there. Senior NOAA scientist Susan (1000 year CO2) Solomon got a copy. So did Matt Ocana, Western Region public affairs officer for the National Weather Service. Along with Jeff Zimmerman who appears to be with the NWS Southern Region HQ. The originator, Andrea Bair, is the Climate Services Program Manager, NWS Western Region HQ.

There are lots of other curious things about that NCDC “Talking Points” document.

1. They give no author for the talking points memo. An inquiry about the author’s name, sent to my regular contact at NCDC a week ago when I first learned of this, has gone unanswered. Usually I have gotten answers within a day.

2. They think they have the current data; they do not. They have data from when the network was about 40% surveyed. They cite 70 CRN1/2 stations when we actually have 92 now. Additionally, some of the ratings have been changed as new/better survey information has come to light. They did their talking points analysis with old data and apparently didn’t know it.

3. They never asked me for a current data set. They know how to contact me; in fact they invited me to give a presentation at NCDC last year, which you can read about here in part 1 and part 2.

Normally when a scientific organization prepares a rebuttal, it is standard practice to at least ask the keeper of the data if they have the most current data set, and if any caveats or updates exist, and to make the person aware of the issues so that questions can be answered. I received no questions, no request for data and no notice of any kind.

This is not unlike NCDC’s absurd closing of my access to parts of their station meta database in the summer of 2007 without notice just a few weeks after I started the project:

http://wattsupwiththat.com/2007/07/07/noaa-and-ncdc-restore-data-access/

4. They cite USHCN2 data in their graph, but they can’t even get the number of stations correct in USHCN2. The correct number from their AMS publication is 1218 stations; they list 1228 on the graph. While the error is a simple one, it shows the person doing the talking points was probably not fully familiar with the USHCN2.

[Image: from NCDC's "talking points" rebuttal - click for larger image]

On page 6 of Matthew J. Menne, Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2 (PDF), Bulletin of the American Meteorological Society (in press), there is this sentence:

As a result, HCN version 2 contains 1218 stations, 208 of which are composites; relative to the 1996 release, there have been 62 station deletions and 59 additions.

Sure, maybe it is a typo, but add the fact that they couldn’t cite my report title correctly either, and it looks pretty sloppy, especially when you can’t count your own stations.

When I was invited to speak at NCDC last year, I had a lengthy conversation with Matt Menne, the lead author of the USHCN2 method and peer reviewed paper here:

http://wattsupwiththat.com/2009/05/12/ncdcs-ushcn2-paper-some-progress-some-duck-and-cover/

What I learned was this:

a) The USHCN2 is designed to catch station moves and other discontinuities, such as we see in Lampasas, TX.

b) It will NOT catch long term trend issues, like UHI encroachment. Low frequency/long period biases pass unobstructed/undetected. Thus a station that started out well sited, but has had concrete and asphalt built up around it over time (such as the poster child for badly sited stations Marysville, now closed by NOAA just 3 months after I made the world aware of it) would not be corrected or even noted in USHCN2.

5. They give no methodology or provenance for the data shown in their graph. For all I know, they could be comparing homogenized data from CRN1 and 2 (the best stations) to homogenized data from CRN 3, 4, and 5 (the worst stations), which of course would show nearly no difference. Our study is focusing on the raw data and the differences that emerge after adjustments are applied by NCDC. Did they use 1228 stations or 1218? Who knows? There’s no work shown. You can’t even get away with not showing your work in high school algebra class. WUWT?

For NCDC not to cite the data and methodology for the graph is simply sloppy “public relations” driven science. But most importantly, it does not tell the story accurately. It is useful to me however, because it demonstrates what a simple analysis produces.
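For the record, the comparison their graph implies is simple enough to express in code. Below is a minimal sketch, using entirely synthetic data, of one plausible "same gridding and area averaging technique" applied to two station groups. The function name, grid size, trends, and noise levels here are all my own assumptions for illustration, not NCDC's actual method or data:

```python
import numpy as np

def national_series(lats, lons, anoms, cell=5.0):
    """Average station anomaly series within cell-degree grid boxes,
    then average the boxes into one national value per year.
    anoms has shape (n_stations, n_years)."""
    boxes = {}
    for la, lo, series in zip(lats, lons, anoms):
        boxes.setdefault((int(la // cell), int(lo // cell)), []).append(series)
    box_means = np.array([np.mean(v, axis=0) for v in boxes.values()])
    return box_means.mean(axis=0)

# Synthetic demo: "good" stations warm at 0.01 C/yr; "poor" stations
# carry an extra 0.02 C/yr siting bias on top of the same signal.
rng = np.random.default_rng(0)
years = np.arange(30)
good = 0.01 * years + rng.normal(0, 0.05, (10, 30))
poor = 0.03 * years + rng.normal(0, 0.05, (30, 30))
lat_g, lon_g = rng.uniform(30, 48, 10), rng.uniform(-120, -80, 10)
lat_p, lon_p = rng.uniform(30, 48, 30), rng.uniform(-120, -80, 30)

s_good = national_series(lat_g, lon_g, good)
s_poor = national_series(lat_p, lon_p, poor)
trend_good = np.polyfit(years, s_good, 1)[0]
trend_poor = np.polyfit(years, s_poor, 1)[0]
print(trend_good, trend_poor)  # the biased group shows the steeper trend
```

The point is not the numbers, which are made up, but that every choice in such an analysis (raw vs. homogenized input, grid size, which stations land in which group) changes the answer, and NCDC disclosed none of those choices.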

6. They cite 100 year trends in the data/graph they present. However, our survey most certainly cannot account for changes to the station locations or station siting quality any further back than about 30 years. By NCDC’s own admission, (see Quality Control of pre-1948 Cooperative Observer Network Data PDF) they have little or no metadata posted on station siting much further back than about 1948 on their MMS metadatabase. Further, as we’ve shown time and again, siting is not very static over time. More on the metadata issue here.

While we have examined 100 year trends also, our study focus is different in time scale and in scope. If I were to claim that the surfacestations.org survey represented siting conditions at a weather station 50 or 100 years ago, without supporting metadata or photographs, I would be roundly criticized by the scientific community, and rightly so.

We believe most of the effect has occurred in the last 30 years, much of it due to the introduction of the MMTS electronic thermometer into the network around 1985, with gradual replacement since then. The MMTS cable-length issue has forced official temperature sensors closer to buildings and human habitation as that gradual changeover has progressed.

NCDC’s new USHCN2 method will not detect this long period signal change introduced by the gradual introduction of the MMTS electronic thermometer, nor do they even address the issue in their talking points, which is central to the surfacestations project.

7. In the references section they don’t even cite my publication!

References

Menne, Matthew J., Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2. Bulletin of the American Meteorological Society, in press.

Peterson, Thomas C., 2006: Examination of Potential Biases in Air Temperature Caused by Poor Station Locations. Bulletin of the American Meteorological Society, 87, 1073-1080. It is available from http://ams.allenpress.com/archive/1520-0477/87/8/pdf/i1520-0477-87-8-1073.pdf.

Yet they cite Menne’s USHCN2 publication, where the 1218 USHCN2 station number is clearly found.

It seems as if this was a rush job, and in the process mistakes were made and common courtesy was tossed aside. I suppose I shouldn’t be upset at the backlash; after all, bureaucrats don’t like to be embarrassed by people like me when it is pointed out what a lousy job has been done at temperature measurement nationwide.

I’m working on a data analysis publication with authors that have published in peer reviewed climate and meteorology journals. After learning from John V’s crash analysis in summer 2007 (when we had about 30% of the network done, few CRN1/2 stations, and poor spatial distribution) that people would try to analyze incomplete data anyway, I’ve kept the rating data and other gathered data private until such time as a full analysis and publication can be written.

As NCDC demonstrated, it seems many people just aren’t willing to wait, or even to respect the primary researcher’s right to first publication of a data analysis.

By not even so much as giving me a courtesy notice or even requesting up to date data, it is clear to me that they don’t think I’m worthy of professional courtesy, yet they’ll gladly publish error laden and incomplete conclusions written by a ghost writer in an attempt to disparage my work before I’ve even had a chance to finish it.

This is the face of NCDC today.

UPDATE:

WUWT commenter Scott Finegan notes that Adobe PDF files have a “properties” section, and that the authors name was revealed there. Here is a screencap:

[Image: PDF document properties showing the author field]

Thomas C. Peterson is the author.

181 Comments
Richard Percifield
June 24, 2009 1:28 pm

oakgeo (09:15:35) :
Wrote:
“I think that boat sailed long ago. Data as we understand it is different in the politicized climate science arena, with GCM projections being afforded the same official confidence (and sometimes more) as directly quantifiable, empirical results. It’s a travesty.”
I would agree with you that the science is very politicized at this moment. Being only an “Applied Scientist” I can name several times I have been unpleasantly surprised by data that did not match my prediction. So given those humbling experiences I do not rule out AGW, but I question the magnitude of the effect. I certainly do not see in the data support for the dire predictions given by the proponents.
In my world data trumps theory any day. In electrical engineering we have modeling programs that try to predict how a system responds to stimuli. I can tell you that many times the output from these programs bears no resemblance to reality, and we know most of the parameters that affect the system. How can a model that does not take into account a vast majority of the variables affecting the climate work? Answer: it doesn’t.
The problem for the NCDC is that eventually the cognitive dissonance will be readily apparent to the point that it cannot be ignored. Here in the central US the temperatures are moderating. Planting is happening later and killing frosts are occurring later. So no matter how skewed the data becomes personal experience will eventually prevail. The only issue is what damage will be levied upon us because of this bad data.

geo
June 24, 2009 1:32 pm

“Talking Points Not Responding to that-which-we-cannot-name by he-who-must-not-be-named” would make a great SNL skit.

Peter Hearnden
June 24, 2009 1:56 pm

‘Pingo’ wrote of me:
where he makes a habit of not contributing to debates apart from inciting moderators to get involved by making despicably false accusations of “lying” when he has lost the debate.
OT but just to be clear since you bring a matter from another forum here:
I DID NOT ask for you to be banned from said site or incite moderators so to do. OK?
Reply: And that will be the last comment on a dispute from another time and place, from either side, and as always I DON’T CARE WHO STARTED IT! ~ CHARLES THE SHOUTING MODERATOR

June 24, 2009 2:18 pm

[I meant it, no more comments on the subject] ~ ctm

June 24, 2009 2:28 pm

But their chart reveals another problem – or the UHI coverup itself.
1) They select (cherry pick) 70 “best sited” stations – but (very cleverly) don’t identify what those sites are. But – equally obviously, they “know” (or have some unidentified process for) what distinguishes a “good site” from a “bad site”. Coincidentally, 70 might be the total number of sites that are actually correctly sited: figure that over 90 percent of all 1218 are improperly sited, leaving less than 10% (close to or less than 100?) that “might” be correctly sited.
2) They plot some temperature trend from those 70 sites, but don’t say what the raw data originally was, or whether the plotted data has been “corrected” for regional UHI from OTHER stations. So, comparing an unknown line based on unknown sources with a known (but corrupted (er, corrected) by some unknown OTHER series of factors!) national trend – why should we be surprised that the two match?
3) Most important: Note HOW the HI effect is supposedly “corrected” – all 1218 stations are reported.
All of the stations within xxx number of kilometers (!) of the current station are compared against the current station, and a “UHI correction factor” is applied against the current station.
They’ve “assured” us that 70 stations are “valid” – uncontaminated by UHI if you will.
Assume you have 122 stations (out of 1218 total) with a +5C UHI. (Reasonable: downtown temperatures are actually +10 degrees F higher than “out in the suburbs” stations in every city radio report every night of the year. In fact, my assumed 122 stations might itself be “low”, but it’s something to start with.)
Assume another 244 have +2.5C UHI.
Assume 244 have “only” +1.5C UHI, and another 488 have “just” .5C UHI. 50 then are “almost correct” with only .25C UHI correction, and that .25C difference must be in only the last few years as development comes closer. (Alt explanation: Those 50 are at airports, and their UHI is grossly distorted by runways and wind direction over the nearest runway or grassy area.)
So, we should see 70 stations with NO change at all. But we don’t – we see ALL stations get “corrected” by the average of the “closest” area temperatures: High heat loads downtown or near an airport corrupt the LOW temperature stations “up” – regardless of the few “correct” stations across the nation. The relatively few 122 very high temperature stations are dropped “down” a little – but their correction is dominated by the great mass of “nearby” 488 (30-60 mile area) “almost as hot” stations. Since the regional averaging is so large (instead of the only 5-10 mile UHI limit) the “region” is dominated by the hotter stations. Regions are also by nature grouped around the older (hotter) towns and urban centers, not the cooler rural thermometer stations.
A guaranteed AGW-caused-by-man result.
So, by their own definition, the first 70 must not “need” any UHI correction, and every one of the remaining 1148 MUST BE corrected for varying amounts of UHI bias.
Further, the government’s UHI bias must be corrected (be different) every year since the various ground stations began recording.
So, now we only have to find out how they actually “correct” (corrupt??) the raw data nationally and compare it to the regionally-averaged UHI-corrected official results.
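A toy calculation makes the averaging argument above concrete. This is emphatically not NOAA's actual adjustment algorithm, just a sketch with made-up numbers showing what happens when each station is nudged toward the mean of mostly-biased neighbors:

```python
import numpy as np

# Eight stations all measuring the same true anomaly (0.0), but six
# carry a warm UHI bias. Only the first two are "good" (no bias).
readings = np.array([0.0, 0.0, 2.5, 2.5, 2.5, 2.5, 5.0, 5.0])

def neighbor_adjust(readings, weight=0.5):
    """Nudge each station partway toward the mean of all the others."""
    adjusted = readings.copy()
    for i in range(len(readings)):
        others = np.delete(readings, i)
        adjusted[i] = (1 - weight) * readings[i] + weight * others.mean()
    return adjusted

adj = neighbor_adjust(readings)
print(adj[0])  # the "good" station no longer reads 0.0 - it is pulled warm
```

With these invented numbers the two good stations end up reading roughly 1.4C warm, while the hottest stations only come partway down: the biased majority dominates the regional average.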

henry
June 24, 2009 3:12 pm

But the “talking paper” itself raises new questions:
“We are limited in what we can say due to limited information about station siting. Surfacestations.org has examined about 70% of the 1221 stations in NOAA’s Historical Climatology Network (USHCN). According to their web site of early June 2009, they classified 70 USHCN version 2 stations as good or best (class 1 or 2).”
Work the numbers (using their own data). 70 stations out of the 1221 stations could be classified as good or best.
Still leaves 1151 stations (about 94% or so) listed as either poor (3, 4 or 5) or with unknown siting problems.
So the study still didn’t answer the main question: Is the U.S. Temperature Record Reliable? Well, 6% of it appears to be…

kim
June 24, 2009 3:29 pm

You know, ‘doctor’ is an honorific fundamentally meaning ‘teacher’. I don’t mean to pile on so casually about a matter that has little meaning at all, but I don’t honor this author. Oh, no, quite the contrary.
You better think twice, Santa Claus is coming….to town.
H/t Patrick Sullivan
==============================

kim
June 24, 2009 3:33 pm

Oops, I jumped to the conclusion it was Thomas Karl. Oh well, I’ll take that name, Thomas C. Peterson. I’ve got someone checking it twice, too.
========================================

June 24, 2009 4:33 pm

John Galt (11:39:38) : There are various AGW myths and memes propagated by the “How to talk to a Skeptic” sites that claim to debunk all the skeptics’ arguments. Unfortunately, those sites do no such thing and have themselves been debunked over and over.
But the claim of debunking skeptics lives on. Whenever somebody calls into question the quality of the data, they will inevitably reference this document and claim there is no problem with the data whatsoever.

There still is NOT a comprehensive single rebuttal of Coby Beck’s army of straw dogs at Gristmill, or of Skeptical Science’s ditto. Rebuttals exist but only in fragmented form. IMHO these two websites in particular, plus RC’s “info” pages, plus New Scientist’s equivalent pages need integrated rebuttals to douse the AGW wildfire claims. IMHO, this is a job that a skeptics wiki (written by blog-peer-reviewed skeptics) could, should and would undertake, over time.

Leon Brozyna
June 24, 2009 4:34 pm

I took another look at the talking points document:
Talking Points related to: Is the U.S. Temperature Record Reliable?
Q. Do many U.S. stations have poor siting by being placed inappropriately close to trees, buildings, parking lots, etc.?
A. Yes. The National Weather Service has station siting criteria, but they were not always followed. That is one reason why NOAA created the Climate Reference Network, with excellent siting and redundant sensors. It is a network designed specifically for assessing climate change. http://www.ncdc.noaa.gov/oa/climate/uscrn/. Additionally, an effort is underway to modernize the Historical Climatology Network, though funds are currently available only to modernize and maintain stations in the Southwest. Managers of both of these networks work diligently to put their stations in locations not only with excellent current siting, but also where the site characteristics are unlikely to change very much over the coming decades.

While the surfacestations project is focusing on the condition of the USHCN, the ‘rebuttal’ makes it sound like the project is about all temperature data, then raises the high quality of the USCRN as proof that the data is good. So maybe they deliberately chose to misrepresent what the project is about. It’s bait-and-switch – a shell game – now you see it, now you don’t.
So, in my mind, the citation error and the choice of phrasing in the talking points memo is deliberate.

Phil
June 24, 2009 4:56 pm

Quoting from the talking points:
“Two national time series were made using the same gridding and area averaging technique.”
In order to be able to replicate their graph, it would be necessary to know what technique exactly was used.
Second, the analysis makes a fundamental error IMHO. The stations are split into two groups based on a survey done at the end of the trend period. The assumption that this split is valid for the entire period of the trend is unsupported. In other words, if a comprehensive station survey had been done say every 5 years, there might be a significant change in which stations were in categories 1 or 2 vs 3, 4 and 5 over the trend period, thus making the comparisons between the two station groups a little different than the graph shows. In fact, if there were non-trivial changes in station groupings over the trend period, might one not expect that there would be no overall difference between the two station groupings based only on the evaluations done at the end of the trend period?
Unfortunately, the talking points admit to lack of quality control of station siting over the history of the data, when stating:
“We are limited in what we can say due to limited information about station siting.”
Thus, would it not be true that it would be impossible to evaluate any differences between “good” and “bad” stations over the trend period, since there is no information as to station siting except at the very end of the trend period (and that due to Anthony’s work)?
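A quick simulation illustrates the point. The churn rate and bias numbers below are entirely hypothetical; the sketch only shows that classifying stations by a single end-of-period survey compresses the apparent difference between "good" and "bad" groups when siting actually changes over time:

```python
import numpy as np

rng = np.random.default_rng(1)
years, n = 30, 200
bias_rate = 0.02  # extra warming (C/yr) a station accrues WHILE poorly sited

state = rng.integers(0, 2, n)        # 0 = good siting, 1 = poor siting
bias = np.zeros(n)
series = np.zeros((n, years))
for t in range(years):
    flip = rng.random(n) < 0.05      # 5% of stations change siting each year
    state = np.where(flip, 1 - state, state)
    bias += bias_rate * state        # bias accumulates only while poorly sited
    series[:, t] = 0.01 * t + bias   # common 0.01 C/yr background trend

# Classify by the FINAL year's siting only, as a one-time survey would.
final_good = state == 0
trend = lambda mask: np.polyfit(np.arange(years), series[mask].mean(axis=0), 1)[0]
gap = trend(~final_good) - trend(final_good)
print(gap)  # much smaller than the 0.02 C/yr the bias actually injects
```

Because station histories churn, both groups carry accumulated bias, and the end-of-period split recovers only a fraction of the true good/bad difference.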

June 24, 2009 5:06 pm

McIntyre’s Climate Audit blog has a link to a news report from May that has a headline you might recognize:
http://wbztv.com/local/surface.stations.weather.2.1008615.html
“Is The U.S. Temperature Record Reliable?”
Apparently, the talking points were prepared after that article or another like it made the rounds.

June 24, 2009 5:34 pm

From Wiki here:
http://en.wikipedia.org/wiki/Talking_points

A talking point is a neologism for an idea which may or may not be factual, usually compiled in a short list with summaries of a speaker’s agenda for public or private engagements. Public relations professionals, for example, sometimes prepare “talking points memos” for their clients to help them more effectively conform public presentations with this advice.
A political think tank will strategize the most effective informational attack on a target topic and launch talking points from media personalities to saturate discourse in order to frame a debate in their favor, standardizing the responses of sympathizers to their unique cause while simultaneously co-opting the language used by those discussing the specific subject. When used politically in this way, the typical purpose of a talking point is to propagandize, specifically using the technique of argumentum ad nauseam, i.e. continuous repetition within media outlets until accepted as fact.

Propaganda: non-factual or selective idea dissemination, loaded messages in order to further a political agenda. A corollary to censorship.
How low has NCDC sunk? All the way to the bottom.

Ted Annonson
June 24, 2009 6:04 pm

May I nominate this for “Quote of the week”?
Mark Young (05:07:45) :
It’s not the heat, it’s the stupidity!

ohioholic
June 24, 2009 6:46 pm

“Managers of both of these networks work diligently to put their stations in locations not only with excellent current siting, but also where the site characteristics are unlikely to change very much over the coming decades.”
The part that I find interesting is the subtle concession that changes in the stations surroundings may have had an effect.
“but also where the site characteristics are unlikely to change very much over the coming decades.”

Evan Jones
Editor
June 24, 2009 7:45 pm

McIntyre’s Climate Audit blog has a link to a news report from May that has a headline you might recognize:
http://wbztv.com/local/surface.stations.weather.2.1008615.html
“Is The U.S. Temperature Record Reliable?”

And be sure to watch the 2-minute video on the right of the page (grin)!
BTW, good call. NWS was well aware of that broadcast, and the similar omission of “Surface” is probably no coincidence.

Neo
June 24, 2009 8:25 pm

It’s long past time to “abandon wornout dogmas,” as President Obama said in his inauguration. One of the dogmas, held by too many scientists and leaders, is that no matter what facts you find, you must report only those that support the current political narrative.

Fluffy Clouds (Tim L)
June 24, 2009 8:44 pm

Jeff Id (10:34:17) :
Dr. Thomas C. Peterson is a research meteorologist at NOAA’s National Climatic Data Center in Asheville, North Carolina. After earning his Ph.D. in Atmospheric Science from Colorado State University in 1991, Tom primarily engaged in creating NCDC’s global land surface data set used to quantify long-term global climate change. Key areas of his expertise include data archaeology,
*********quality control,************
homogeneity testing, international data exchange and global climate analysis using both in situ and satellite data.
I’m glad I don’t have his job. He seems to be the head doc in charge of everything known to be f’d up.
Jeff beat me to it LOL
ALL too nice, A.W.

June 24, 2009 9:10 pm

If we had elected representatives who paid attention to reality instead of the latest polls, we could get this AGW sham behind us and move on to the bright future ahead. Unfortunately, we have to go through a period of doubt and uncertainty about the rightness of western culture and values every couple of decades.
At some point, there will be a reversal, and these bureaucrats will be tossed out on their ears, and relegated to the dustbin of history.
The air traffic controllers acted similarly back in the ’80s and thought they were indispensable. They weren’t.

Roger Knights
June 24, 2009 9:17 pm

Lucy Skywalker (16:33:05) :
John Galt (11:39:38) : There are various AGW myths and memes propagated by the “How to talk to a Skeptic” sites that claim to debunk all the skeptics’ arguments. Unfortunately, those sites do no such thing and have themselves been debunked over and over.
But the claim of debunking skeptics lives on. Whenever somebody calls into question the quality of the data, they will inevitably reference this document and claim there is no problem with the data whatsoever.

There still is NOT a comprehensive single rebuttal of Coby Beck’s army of straw dogs at Gristmill, or of Skeptical Science’s ditto. Rebuttals exist but only in fragmented form. IMHO these two websites in particular, plus RC’s “info” pages, plus New Scientist’s equivalent pages need integrated rebuttals to douse the AGW wildfire claims. IMHO, this is a job that a skeptics wiki (written by blog-peer-reviewed skeptics) could, should and would undertake, over time.
=====================
Hear, hear!

Adam Grey
June 24, 2009 9:56 pm

Normally when a scientific organization prepares a rebuttal, it is standard practice to at least ask the keeper of the data if they have the most current data set, and if any caveats or updates exist, and to make the person aware of the issues so that questions can be answered. I received no questions, no request for data and no notice of any kind.
This brings to mind Lindzen using Wong et al 2001 for his Iris hypothesis after they’d revised it in 2005/6. Lindzen was updated here recently (nearly 4 years later) with no chiding from the commenters. Perhaps Anthony could be as politely helpful to the NOAA. And perhaps the commenters could apply standards and approbation equally.

Jeremy
June 24, 2009 10:55 pm

Peter Hearnden (02:18:21) :
Anthony, I ALWAYS post using my real name, would that many of your most outspoken correspondents, notably the people who ad hom James Hansen, here showed that same ‘professionalism’…

That’s actually quite amusing. You’re actually defending a civil servant for abusing a position of trust to affect politics.

dennis ward
June 25, 2009 12:02 am

I wish to thank all the people for their polite responses to my question.

Perry Debell
June 25, 2009 12:05 am

Jimmy Haigh (10:58:17) :
Billy Connolly is a comedian, Rabid AGWarmist William Connolley is a joke.
See his bio at http://en.wikipedia.org/wiki/William_Connolley
There is even a picture of his non listening ear’ole & the back of his head. What’s that all about??
http://en.wikipedia.org/wiki/File:William_Connolley.jpg

Robert W
June 25, 2009 12:35 am

Anthony and/or Moderator,
If you have already seen this USCRN document “CURRENT CONFIGURATION OF US CLIMATE REFERENCE NETWORK STATIONS” please disregard.
I found this info in Data Example 7: “In systems with single sensors, this small temperature error would be very difficult or impossible to detect. Such time-dependent biases affect the fidelity of the climate record.”
I have been reading with much interest your work on measuring surface temperature. It appears from this article that a station with a single sensor could easily give incorrect data and NOT be detected, thus incorrectly affecting climate data.
Here are the web paths to the article:
1. http://www.ncdc.noaa.gov/crn/research.html
2. http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/research/CurrentConfigUSCRN-AMSmtg-Jan04-final.pdf
3. Within the PDF document under 2, go to the last page (“4”), item 7.
7. DATA EXAMPLE
The figure below shows a real world example of temperature data differences from three probes at one site. One sensor developed a problem leading to an error of about 1C. Because of the redundant sensors, the close initial calibration of the sensors, and the hourly automated quality control review process, it was immediately clear when this occurred and which sensor was at fault. Repairs were quickly accomplished. In systems with single sensors, this small temperature error would be very difficult or impossible to detect. Such time-dependent biases affect the fidelity of the climate record. In the USCRN these can be detected quickly, and thus reduce uncertainty in the quality of the climate record provided to decision makers.
The following figure shows 8 days of temperature difference data from three normally performing sensors. The temperature differences rarely exceed 0.2C.
The author of this PDF document is R. P. Hosker, Jr., and it was created on 11/3/03.
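The triple-redundancy logic that data example describes can be sketched in a few lines. This is a hypothetical illustration of the idea, not NOAA's actual USCRN quality control code:

```python
def check_triplet(t1, t2, t3, tol=0.3):
    """Given three simultaneous readings (deg C), return None if all
    pairwise differences are within tol, else the index (0-2) of the
    sensor farthest from the median, i.e. the likely faulty one."""
    temps = [t1, t2, t3]
    pairs = ((0, 1), (0, 2), (1, 2))
    if max(abs(temps[i] - temps[j]) for i, j in pairs) <= tol:
        return None  # all three agree; nothing to flag
    median = sorted(temps)[1]
    return max(range(3), key=lambda i: abs(temps[i] - median))

# Normal operation: differences stay under ~0.2 C, nothing is flagged.
print(check_triplet(20.1, 20.0, 20.2))  # None
# One sensor drifts by about 1 C, as in the USCRN example: it is flagged.
print(check_triplet(20.1, 20.0, 21.1))  # 2
```

A single-sensor station has no equivalent cross-check, which is the document's point: a drift of this size would simply enter the climate record undetected.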