NCDC writes ghost "talking points" rebuttal to surfacestations project

UPDATE: The “ghost author” has been identified, see the end of the article.

When I first saw it, I laughed.

When I saw the internal memo circulated to top managers at NOAA, I laughed even more.

Why? Because NOAA and NCDC are rebutting an analysis that I have not even written yet, using old data, and nobody at NOAA or NCDC had the professionalism to put their name to the document.

First let’s have a look at the National Climatic Data Center’s web page from a week ago:

[Image: screenshot of the NCDC web page]

I was quite surprised to find that my midterm census report on the surfacestations.org project evoked a response from NCDC. I suppose they are getting some heat from the citizenry and some congress critters over lack of quality control. I was even more surprised to see that they couldn’t even get the title right, particularly since the title of my report defines most of what NCDC is all about: surface temperature measurement.

[Image: Surface Stations report cover] Here’s the title of my report, released in March.

“Is The U.S. Surface Temperature Record Reliable?”

But NCDC calls it: “Is the U.S. Temperature Record Reliable?”

True, it is a small omission, just the word “surface”. But remember, this is a scientific organization that writes papers for peer reviewed journals, where accuracy in citation is a job requirement. Moreover, the director of NCDC is Thomas Karl, who is now president of the American Meteorological Society. The Bulletin of the American Meteorological Society is considered a premiere peer reviewed journal, and Karl has written several articles for it. For him to allow a botched citation like this is pretty embarrassing.

[NOTE: For those who just want to read my report, please feel free to download and read the free copy here (PDF, 4 MB)]

But the citation error is not just on the NCDC web page; it is in the PDF document that NOAA and/or NCDC wrote up. I can’t be sure which, since they cite no named author.

[Image: NCDC “talking points” document]

You can download it here (PDF, 91 KB).

I had a few people point out the existence of the NCDC rebuttal to me over the last week, and I’ve been biding my time. I wanted to see what they’d do with it.

Over the weekend I discovered that NOAA had widely circulated NCDC’s “talking points” document to top level division managers in NOAA. I was given this actual internal email by someone who appears not to agree with the current NOAA/NCDC thinking.

Date: Tue, 16 Jun 2009 16:26:48 -0600

From: Andrea Bair <Andrea.Bair@noaa.gov>

Subject: Talking Points on SurfaceStations.org

To: _NWS WR Climate Focal Points <WR.Climate.Focal.Points@noaa.gov>,

_NWS WR MICs HICs DivChiefs <wr.mics.hics.divchiefs@noaa.gov>,

_NWS WR DAPM-OPL <Wr.Dapm.Opl@noaa.gov>,

Susan A Nelson <Susan.A.Nelson@noaa.gov>,

Jeff Zimmerman <Jeff.Zimmerman@noaa.gov>, Matt Ocana <Matt.Ocana@noaa.gov>

User-Agent: Thunderbird 2.0.0.21 (Windows/20090302)

Recently I was asked if we had any official talking points on the surfacestations.org report that came out recently.  Attached are some talking points from NOAA that we can use.

AB

Note the “NWS WR MICs HICs DivChiefs” address list. It seems pretty much everyone in management at NOAA got this email, yet a week later the citation error remains. Nobody caught it.

I find it pretty humorous that NOAA felt that a booklet full of photographs that many said at the beginning “don’t matter” required an organization-wide notice of rebuttal. Note also some big names there. Senior NOAA scientist Susan (1000 year CO2) Solomon got a copy. So did Matt Ocana, Western Region public affairs officer for the National Weather Service, along with Jeff Zimmerman, who appears to be with the NWS Southern Region HQ. The originator, Andrea Bair, is the Climate Services Program Manager at NWS Western Region HQ.

There are lots of other curious things about that NCDC “Talking Points” document.

1. They give no author for the talking points memo. An inquiry about the author’s name, which I sent to my regular contact at NCDC a week ago when I first learned of this, has gone unanswered. Usually I get answers within a day.

2. They think they have the current data; they do not. They have data from when the network was about 40% surveyed. They cite 70 CRN1/2 stations when we actually have 92 now. Additionally, some of the ratings have been changed as new and better survey information has come to light. They did their talking points analysis with old data and apparently didn’t know it.

3. They never asked me for a current data set. They know how to contact me; in fact, they invited me to give a presentation at NCDC last year, which you can read about here in part 1 and part 2.

Normally when a scientific organization prepares a rebuttal, it is standard practice to at least ask the keeper of the data whether they have the most current data set and whether any caveats or updates exist, and to make the person aware of the issues so that questions can be answered. I received no questions, no request for data, and no notice of any kind.

This is not unlike NCDC’s absurd closing of my access to parts of their station meta database in the summer of 2007 without notice just a few weeks after I started the project:

http://wattsupwiththat.com/2007/07/07/noaa-and-ncdc-restore-data-access/

4. They cite USHCN2 data in their graph, but they can’t even get the number of stations in USHCN2 correct. The correct number, from their own AMS publication, is 1218 stations; they list 1228 on the graph. While the error is a simple one, it suggests the person doing the talking points was probably not fully familiar with the USHCN2.

from NCDC's "talking points" rebuttal - click for larger image

On page 6 of Matthew J. Menne, Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2 (PDF), Bulletin of the American Meteorological Society (in press), there is this sentence:

As a result, HCN version 2 contains 1218 stations, 208 of which are composites; relative to the 1996 release, there have been 62 station deletions and 59 additions.

Sure, maybe it is a typo, but add it to the fact that they couldn’t cite my report title correctly either, and it looks pretty sloppy, especially when you can’t count your own stations.

When I was invited to speak at NCDC last year, I had a lengthy conversation with Matt Menne, the lead author of the USHCN2 method and its peer reviewed paper, which I discussed here:

http://wattsupwiththat.com/2009/05/12/ncdcs-ushcn2-paper-some-progress-some-duck-and-cover/

What I learned was this:

a) The USHCN2 is designed to catch station moves and other discontinuities, such as the one we see in Lampasas, TX.

b) It will NOT catch long term trend issues, like UHI encroachment. Low frequency, long period biases pass unobstructed and undetected. Thus a station that started out well sited but has had concrete and asphalt built up around it over time (such as Marysville, the poster child for badly sited stations, closed by NOAA just three months after I made the world aware of it) would not be corrected or even noted in USHCN2.
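To make that distinction concrete, here is a minimal sketch with made-up numbers. This is not NCDC’s actual algorithm (which compares each station against many neighbors); it only illustrates the principle: a simple mean-shift test on a station-minus-neighbor difference series flags an abrupt one-degree station move, but a gradual drift of the same total size slips through.

```python
import numpy as np

def max_mean_shift(d, min_seg=10):
    """Largest absolute difference in segment means over all split points."""
    best = 0.0
    for k in range(min_seg, len(d) - min_seg):
        best = max(best, abs(d[:k].mean() - d[k:].mean()))
    return best

years = np.arange(100)
step  = np.where(years >= 50, 1.0, 0.0)  # abrupt station move: +1 C step
drift = years / years[-1]                # gradual UHI-style ramp, 0 to 1 C

THRESH = 0.75  # illustrative detection threshold
print(max_mean_shift(step)  > THRESH)    # True:  the step change is flagged
print(max_mean_shift(drift) > THRESH)    # False: same-size drift slips through
```

The drift’s best split never exceeds about half its total size, so any threshold tuned to catch real station moves passes it by; that is the low-frequency blind spot described in point b) above.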

5. They give no methodology or provenance for the data shown in their graph. For all I know, they could be comparing homogenized data from CRN1/2 (the best stations) to homogenized data from CRN3/4/5 (the worst stations), which of course would show nearly no difference. Our study focuses on the raw data and the differences that emerge after adjustments are applied by NCDC. Did they use 1228 stations or 1218? Who knows? There’s no work shown. You can’t even get away with not showing your work in high school algebra class. WUWT?

For NCDC not to cite the data and methodology for the graph is simply sloppy “public relations” driven science. But most importantly, it does not tell the story accurately. It is useful to me however, because it demonstrates what a simple analysis produces.
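What showing the work would look like, in minimal form: compute trends from raw data separately for the well-sited and poorly sited classes and compare them. Everything below is synthetic (a hypothetical 0.1 °C/decade siting bias is deliberately built in) purely to illustrate the mechanics, not to assert any real-world magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)

def make_station(trend_per_decade):
    """Synthetic raw annual anomalies: a linear trend plus weather noise."""
    return trend_per_decade / 10 * (years - years[0]) + rng.normal(0, 0.2, years.size)

# Hypothetical: CRN1/2 stations see 0.15 C/decade; CRN3-5 add a 0.1 C/decade siting bias.
good = np.mean([make_station(0.15) for _ in range(20)], axis=0)
bad  = np.mean([make_station(0.25) for _ in range(70)], axis=0)

trend = lambda y: np.polyfit(years, y, 1)[0] * 10  # C per decade
print(f"CRN1/2 raw trend: {trend(good):.2f} C/decade")
print(f"CRN3-5 raw trend: {trend(bad):.2f} C/decade")
```

If the same comparison were run on homogenized rather than raw data, adjustments that blend neighboring stations could shrink exactly the difference being tested for, which is the concern raised above.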

6. They cite 100 year trends in the data/graph they present. However, our survey most certainly cannot account for changes to station locations or station siting quality any further back than about 30 years. By NCDC’s own admission (see Quality Control of pre-1948 Cooperative Observer Network Data, PDF), they have little or no metadata posted on station siting much further back than about 1948 in their MMS metadatabase. Further, as we’ve shown time and again, siting is not static over time. More on the metadata issue here.

While we have examined 100 year trends also, our study focus is different in time scale and in scope. If I were to claim that the surfacestations.org survey represented siting conditions at a weather station 50 or 100 years ago, without supporting metadata or photographs, I would be roundly criticized by the scientific community, and rightly so.

We believe most of the effect has occurred in the last 30 years, much of it due to the introduction of the MMTS electronic thermometer into the network around 1985, with gradual replacement since then. The MMTS cable issue has forced official temperature sensors closer to buildings and human habitation as that gradual changeover progressed.

NCDC’s new USHCN2 method will not detect this long period signal change introduced by the gradual introduction of the MMTS electronic thermometer, nor do they even address the issue in their talking points, which is central to the surfacestations project.

7. In the references section they don’t even cite my publication!

References

Menne, Matthew J., Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2. Bulletin of the American Meteorological Society, in press.

Peterson, Thomas C., 2006: Examination of Potential Biases in Air Temperature Caused by Poor Station Locations. Bulletin of the American Meteorological Society, 87, 1073-1080. It is available from http://ams.allenpress.com/archive/1520-0477/87/8/pdf/i1520-0477-87-8-1073.pdf.

Yet they cite Menne’s USHCN2 publication, where the 1218 USHCN2 station count is clearly found.

It seems as if this was a rush job, and in the process mistakes were made and common courtesy was tossed aside. I suppose I shouldn’t be upset at the backlash; after all, bureaucrats don’t like to be embarrassed by people like me when it is pointed out what a lousy job has been done at temperature measurement nationwide.

I’m working on a data analysis publication with authors who have published in peer reviewed climate and meteorology journals. John V’s crash analysis in summer 2007, when we had about 30% of the network done, few CRN1/2 stations, and poor spatial distribution, taught me that people would try to analyze incomplete data anyway, so I’ve kept the rating data and other gathered data private until a full analysis and publication can be written.

As NCDC demonstrated, it seems many people just aren’t willing to wait, or even to respect the primary researcher’s right to first publication of a data analysis.

By not so much as giving me a courtesy notice or requesting up-to-date data, they have made it clear that they don’t think I’m worthy of professional courtesy; yet they’ll gladly publish error-laden and incomplete conclusions written by a ghost writer in an attempt to disparage my work before I’ve even had a chance to finish it.

This is the face of NCDC today.

UPDATE:

WUWT commenter Scott Finegan notes that Adobe PDF files have a “properties” section, and that the author’s name was revealed there. Here is a screen capture:

[Image: NCDC document properties panel showing the author]

Thomas C. Peterson is the author.



181 Comments
RoyFOMR
June 25, 2009 4:02 am

Jimmy Haigh (10:58:17) :
Slightly off topic but as I was walking home this evening I could see a beautiful crescent moon, about 2 days after new, and the brightest “old moon in the new moon’s arm” I have ever seen! This is caused by light reflected back from the earth. Perhaps the earth is very cloudy and reflective at the moment?
Thank you, I’d never made that connection myself. Maybe there’s a Nobel Prize (or maybe just the odd PhD – undergrads, take note) waiting for an astronomer who can come up with a genuine ‘hockey stick’ to show how Earth’s albedo has increased (or not), using lunar brightness as a proxy for cloud cover.
There must be loads of raw, uncontaminated data out there, just waiting to be downloaded and pored over. Better hurry up though, folks. The data-retentive lunatics may get their hands on it first, and we all know what that means. Don’t we?
Once again, Mr. Haigh, many thanks.

RoyFOMR
June 25, 2009 4:05 am

As a rider. Just think of the funding channels that will open up by making the connection between CC and astronomy.
Go get those generous grants!

June 25, 2009 5:55 am

Phil (16:56:38) :
Quoting from the talking points:
“Two national time series were made using the same gridding and area averaging technique.”
In order to be able to replicate their graph, it would be necessary to know what technique exactly was used.

See, that makes my point earlier: If the 70 “best case” stations (those with no location errors – but with UHI-affected raw data????) have been “corrupted” with gridding and area averaging, then they MUST follow the same trend lines as the rest of the 94% bad location stations.
The “area gridding” IS what smears the highest and medium UHI hot spots over on to the adjacent non-UHI sites that are correctly sited.
Again: what are the 70 “good” sites, and what was their raw data?
How many sites are non-UHI-contaminated AND have good site locations?
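For readers unfamiliar with the technique being questioned here, a minimal sketch of gridding and area averaging. The 5-degree cells and cosine-latitude weights below are assumptions based on common practice; the talking points memo itself gives no details of its method.

```python
import numpy as np

def grid_average(lats, lons, anoms, cell=5.0):
    """Area-weighted mean: bin stations into cell x cell degree boxes,
    average within each box, then weight boxes by cos(latitude)."""
    cells = {}
    for lat, lon, a in zip(lats, lons, anoms):
        key = (int(lat // cell), int(lon // cell))
        cells.setdefault(key, []).append(a)
    w_sum = a_sum = 0.0
    for (ilat, _), vals in cells.items():
        w = np.cos(np.radians((ilat + 0.5) * cell))  # weight at cell-center latitude
        a_sum += w * np.mean(vals)
        w_sum += w
    return a_sum / w_sum

# Three clustered warm stations fall in one cell; one cool station elsewhere.
lats  = [40.1, 40.2, 40.3, 45.0]
lons  = [-105.1, -105.2, -105.3, -93.0]
anoms = [1.0, 1.0, 1.0, 0.0]
print(round(grid_average(lats, lons, anoms), 2))  # -> 0.52
```

The three clustered stations collapse into one cell, so the gridded mean is about 0.52 versus 0.75 for a naive station average. Note also that averaging within a cell mixes whatever stations fall in it, well sited or not, which is the mechanism the comment above is pointing at.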

kim
June 25, 2009 5:59 am

RoyFOMR, 04:02:45, I believe albedo’s been measured from moonglow for awhile, and I believe there is a suggestion of increasing albedo, but I don’t think the precision is to the point you could hang your hat on it. Careful, that point may be the oncoming view of a line and is sharp and may hurt if it is moving with any velocity.

Marion Delgado
June 25, 2009 8:51 am

You are yesterday’s news, phlogiston boy.
REPLY: You mean like the “Fascist Oar“? 😉 Last updated Sep. 2008
Folks have a look at the face of alarmism today. This is one of Tamino’s buddies. As for the claim, wait for the paper and we’ll see. – Anthony

gary gulrud
June 25, 2009 9:27 am

“For [director Karl] to allow a botched citation like this is pretty embarrassing.”
Reminiscent of the spurious title “Dr.”.

Marshall Hopkins
June 25, 2009 10:01 am

The NCDC/NOAA temperature record is extremely unreliable. For example, Coalinga, CA almost always reports as the hottest spot in the central/southern San Joaquin Valley, but it is not. The station is at a fire station surrounded by asphalt, concrete, and brick, with very restricted airflow. When I went by there back in 2003 with a NIST-traceable thermometer, I was reading at least 3 degrees cooler than what the station was reporting, yet they still use this as a forecasting guide for predicting temps in the San Joaquin Valley. I even question the ASOS readings, as very often Vacaville, Stockton, and Modesto will report to be just as warm as or warmer than Fresno during heat waves of over 100 degrees. I just don’t buy that.

RoyFOMR
June 25, 2009 10:12 am

Stargazer (08:55:03) :
Albedo
http://www.bbso.njit.edu/Research/EarthShine/
http://current.com/items/89433545_earths-albedo-tells-an-interesting-story-why-the-earth-may-stop-global-warming-alone.htm
Thanks for the links Stargazer. This site and the people it attracts never cease to amaze.

Chris Knight
June 25, 2009 10:14 am

There have been some comments from and about a certain Peter Hearnden.
He is no stranger to managing a (personal) weather station and knows the limitations of changing from traditional to modern systems, as this note from his weather website shows:
http://www.bridford.metsite.com/notes.html

RoyFOMR
June 25, 2009 10:32 am

Stargazer (08:55:03) :
Albedo
http://current.com/items/89433545_earths-albedo-tells-an-interesting-story-why-the-earth-may-stop-global-warming-alone.htm
That link also links back to WUWT
http://wattsupwiththat.com/2007/10/17/earths-albedo-tells-a-interesting-story/
That was nearly two years ago, Anthony, time for an update perhaps.
Maybe a re-examination of albedo and Earthshine may help illustrate why Al Gore is full of Moonshine.

wisc.edu
June 25, 2009 9:09 pm

Anthony,
You seem upset that some talking points were developed by NCDC (anonymously) based on your incomplete survey of the HCN. Yet, last month you published your Heartland Institute paper with the very damning conclusion that both the U.S. and global surface temperature records were unreliable. If it is so critical to do an analysis based on a complete survey of the HCN, why then did you publish your paper last month before your survey was complete, and, more importantly, before you did any analysis of the observations themselves? How can you be sure that an analysis of the data will support your conclusions?
You also stated in your Heartland publication that “Since these MMTS/Nimbus electronic thermometers have been gradually phased in since their inception in the mid-1980s, the bias trend that likely results from the thermometers being closer to buildings, asphalt, etc. would be gradual, and likely not noticed in the data.” If that is true, why do Menne et al. state in the abstract of their forthcoming BAMS paper that “ The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum minimum temperature system (MMTS).”? Plus, the impact of the MMTS has been studied by others (Hubbard and Lin, 2006; Quayle et al. 1991). Have you ignored those studies for some reason?
REPLY: Simply put, they used old data, never contacted me to ask for current data, listed no author, no citation of my publication, and showed no data or methods to arrive at the graph. Let me ask you: can you get a paper published with such techniques?
Finding all of the best sites is the critical issue of the survey. Menne et al have not looked at all of the site biases. They know they can’t spot all these, and know they can’t assign a magnitude, so they don’t try. I published my census report 1) because I had an interest and offer 2) to help build interest in finding the last few best stations. We picked all the low hanging fruit already. A full paper with data analysis follows when I’m confident we’ve gotten all of the best CRN1/2 sites. – Anthony

GlennB
June 25, 2009 9:59 pm

wisc.edu (21:09:28) :
“Anthony,”
snip
Looks to me like your knee-jerk reaction resulted in a load of strawmen, non sequiturs and red herrings. Who are you?

tulbobroke
June 26, 2009 6:02 am

david johnson (06:18:21) :
The key point in the study is the graph showing that the 70 stations classified (by you) as “good” or “best” show almost exactly the same temperature trends as the set of all stations put together.
You have validated their work!!
Now it is true that this list of 70 stations dates from “early June” (i.e., a few weeks ago). And it’s also true that you’ve added another 20 stations or so, and perhaps changed some ratings. But you don’t indicate that this makes any difference at all to the bottom line, or to the important work you’ve done to validate the excellent work performed at NOAA and NCDC.
So here’s a public challenge: does your latest data show that there is any systematic departure between the best stations (as determined by you) and the overall record?
If the answer is yes, you will have something interesting to write about.
I agree with the above: if the best 70 stations (selected from Mr. Watts’ surfacestations.org site) show no significant difference from the whole network, where’s the problem?
And what’s the response to the public challenge issued above?

tulbobroke
June 26, 2009 10:37 am

Dear Mr. Watts,
Above you say, “5. … For all I know, they could be comparing homogenized data from CRN1 and 2 (best stations) to homogenized data from CRN 345 (the worst stations), which of course would show nearly no difference.”
Doesn’t that imply that the homogenisation process works?

Evan Jones
Editor
June 26, 2009 1:26 pm

No, it just means they’re spreading the error around and blurring the differences.

tulbobroke
June 27, 2009 6:56 am

“evanmjones (13:26:24) :
No, it just means they’re spreading the error around and blurring the differences.”
You must be kidding: that would only work if they used cooling and warming adjustments.

stan
June 27, 2009 7:15 am

My son was just watching a Mythbusters episode on Netflix. They did a piece on whether it was better to run or walk through the rain. They flew to N.C. to interview Thomas Peterson of the NCDC because Peterson and a friend had conducted an experiment during a rain storm.
Perhaps Peterson might be more effective in advancing science if he focused on his real job instead of drafting BS memos and running through the rain.

Evan Jones
Editor
June 27, 2009 11:16 am

You must be kidding: that would only work if they used cooling and warming adjustments.
#B^1
#P^U

Evan Jones
Editor
June 27, 2009 11:18 am

And what’s the response to the public challenge issued above?
It’s in preparation. Patience required. All of these issues will be directly addressed.

Bill P
June 27, 2009 11:42 pm

Gina Becker (04:58:10) :
I wish we could find money to run a television advertising campaign, showing the broad public…
1. Photos of all the poorly sited stations, including “rural” ones which are supposedly free from heat islands
2. Urban heat island growth around stations
3. Visuals and explanations on how the gradual MMTS changeover corresponds with temperature rise
4. And the point: this is the USA, which has the most extensive, longest, best temperature record. Think what the rest of the world relies on.

I’ll second this. Too bad such a campaign couldn’t have run prior to the passage of the Waxman – Markey bill, but perhaps it’s not too late.

Phil
June 29, 2009 1:52 pm

david johnson (06:18:21) :
tulbobroke (06:02:39) :

“So here’s a public challenge: Does your latest data show that there is any systematic departure between the best stations (as determined by you) and the overall record?”

A trend difference between “good” and “bad” stations has already been identified early on, as follows:
The methodology used to “compare” the “good” to the “bad” stations in the talking points memo appears to be very similar to that used by John V at climateaudit.org, posted at http://www.opentemp.org/_results/ very early on, when fewer than 20 “good” rural stations had been identified at surfacestations.org (only rural stations were used, in an attempt to avoid urban heat island, or UHI, contamination). Similarly to the claim in your comments and in the talking points memo now, the claim then was that there did not appear to be much of a difference when comparing trends between the “good” and the “bad” stations.
However, in this comment (http://www.climateaudit.org/?p=3169#comment-267760) Kenneth Fritsch did a multiple regression of population, altitude, latitude, longitude and CRN (quality) rating against anomaly temperature trends for 1920-2005. In this comment (http://www.climateaudit.org/?p=3169#comment-268119), he found that the trend in degrees C per century increased significantly as the station rating declined (i.e. the trend is greatest for the lowest quality stations – see graph).
On that same thread, commenter RomanM in this comment (http://www.climateaudit.org/?p=3169#comment-270488) did an analysis of covariance “…with the variables altitude, longitude and elevation as covariates, but with CRN rating and population as categorical” and also obtained similar results of a clear increase in trend as the quality of the station decreased. His results can be found at http://www.math.unb.ca/~roman/graphs/trendout.pdf and in the comment referenced above.
Please refer to the referenced comments for more exact language. I may have unintentionally phrased things incorrectly in trying to summarize the analysis. Updating of these results may be difficult because unadjusted station data is no longer available in the new USHCN data sets. The publicly available USHCN data sets apparently use information from “not good” stations to adjust the temperatures of the “good” stations, so trend information may be all mixed up. Unfortunately, the computer code that shows exactly how these adjustments are done has apparently not been released yet either (see http://www.climateaudit.org/?p=6370 for more information and more comments). In closing, RomanM says in this comment (http://www.climateaudit.org/?p=6370#comment-347325), that his analysis “…should not be looked at as definitive in any particular way, but rather more as an example of what should be done in assessing the effect of station quality on trends given the presence of other factors.” In short, the talking points memo analysis appears to ignore other factors that may influence the trend comparisons.

Phil
June 29, 2009 2:08 pm

david johnson (06:18:21) :
tulbobroke (06:02:39) :

“So here’s a public challenge: Does your latest data show that there is any systematic departure between the best stations (as determined by you) and the overall record?”

The talking points memo analysis appears to ignore other factors that may influence the trend comparisons, such as population, altitude, longitude and latitude of the “good” stations. A multiple regression of population, altitude, latitude, longitude and CRN (quality) rating against anomaly temperature trends for 1920-2005 showed significant trend differences in an analysis done early on (1) by Kenneth Fritsch. Similarly, an analysis of covariance “…with the variables altitude, longitude and elevation as covariates, but with CRN rating and population as categorical” done at the same time by RomanM also showed significant trend differences (2).
Unfortunately, updating these results may be difficult because unadjusted station data is no longer available in the new USHCN data sets. In addition, the publicly available USHCN data sets apparently use information from “not good” stations to adjust the temperatures of the “good” stations, so trend information may be all mixed up (3). Although these analyses may not be definitive, I think they do show that it is important not to ignore other factors when making trend comparisons.
(1) http://www.climateaudit.org/?p=3169#comment-267760 and http://www.climateaudit.org/?p=3169#comment-268119
(2) http://www.climateaudit.org/?p=3169#comment-270488 and http://www.math.unb.ca/~roman/graphs/trendout.pdf
(3) see discussion at http://www.climateaudit.org/?p=6370
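For the mechanics, the kind of multiple regression described above (trend against population, altitude, latitude, longitude and CRN rating) can be sketched like this. All data below are synthetic, with a 0.15 °C/century effect per CRN class deliberately built in; the actual analyses at the links used real station metadata.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # synthetic stations

# Hypothetical covariates: log-population, altitude (km), latitude, CRN rating 1-5
pop = rng.normal(8, 2, n)
alt = rng.normal(0.5, 0.3, n)
lat = rng.uniform(25, 49, n)
crn = rng.integers(1, 6, n).astype(float)

# Synthetic trend (C/century): population and siting-quality effects plus noise
trend = 0.5 + 0.02 * pop + 0.15 * crn + rng.normal(0, 0.1, n)

# Ordinary least squares fit with an intercept column
X = np.column_stack([np.ones(n), pop, alt, lat, crn])
beta, *_ = np.linalg.lstsq(X, trend, rcond=None)
print(f"estimated effect per CRN class: {beta[4]:.3f} C/century")
```

A positive CRN coefficient with the other covariates held in the model is the pattern both linked analyses reported: the worse the siting rating, the larger the century trend.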

Evan Jones
Editor
June 29, 2009 2:14 pm

NOAA in all probability uses adjusted data, therefore homogenized. If so, comparisons are worthless.
Analysis will be done using raw data (and TOBS). There are also angles that John V never considered.
Patience, please.
P.S., it is a scandal in and of itself if NOAA does not make raw data available.

Hu McCulloch
July 1, 2009 1:27 pm

Anthony — The Talking Points memo at http://www.ncdc.noaa.gov/oa/about/response-v2.pdf now (7/1) clearly lists your study, with you as author, in its references. It’s still dated 6/9, however.