GHCN V3 temperature data errors spotted within a day of release

From the Inconvenient Sceptic, word that the new Global Historical Climatological Network (GHCN) Version 3 may not have significant quality control.

He writes:

I have been looking at tweaking the blended temperature set that I have been using. My intent has been to replace the Hadley data with the NCDC data (Global, Land & Sea). While I was looking through the data I found some bizarre discrepancies. The problem might be due to the presentation, but so far I have not found another source for their data, so I cannot verify this.

The data from their site is tedious to extract, as it is available only month by month from 1880 to the present. After I built my tables of data I did a comparison between the Beta-V3 and the V2 data. The primary difference I have found is that the difference between the two sets is not being calculated correctly. The problem is most evident in April, September and December. In places there are enormous differences between the versions, but they do not show up in the stated difference.

For example, April of 1996 has V2: 0.71 °C and Beta-V3: 0.24 °C. The stated difference is 0.02 °C instead of the correct 0.47 °C. April and December are full of such errors. Above is a chart of the errors for these two months.

Needless to say, it is difficult to trust the value of data from a source that cannot even correctly state the difference between the two sets it is publishing.

Read more here
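For readers who want to reproduce the check, here is a minimal sketch of the V2-versus-Beta-V3 comparison described above. Only the April 1996 row (V2 = 0.71 °C, Beta-V3 = 0.24 °C, stated difference = 0.02 °C) is taken from the post; the second row and the rounding tolerance are hypothetical:

```python
# Minimal sketch of the consistency check described in the post.
# Only the April 1996 row is from the post; the other row is hypothetical.

rows = [
    # (label, v2_anomaly, v3_anomaly, stated_difference) in degrees C
    ("Apr 1996", 0.71, 0.24, 0.02),
    ("Apr 1997", 0.33, 0.30, 0.03),  # hypothetical, self-consistent row
]

TOLERANCE = 0.005  # allow for values rounded to two decimal places

for label, v2, v3, stated in rows:
    actual = v2 - v3
    flag = "MISMATCH" if abs(actual - stated) > TOLERANCE else "ok"
    print(f"{label}: stated {stated:+.2f}, actual {actual:+.2f}  {flag}")
```

Run against the full 1880–present tables, a check like this would flag every month where the stated difference disagrees with the subtraction.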

Here’s the announcement Monday for dataset availability of GHCN V3:

—–Original Message—–

Date: Mon, 22 Nov 2010 09:05:05 -0600
From: CLIMLIST <climlist@wku.edu>
To: <climlist@lists.wku.edu>
Subject: [CLIMLIST] Announcement: Global Historical Climatology Network-Monthly (GHCNM) v3 beta dataset announcement

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
CLIMLIST Mailing Number 10-11-58
Origin: Byron.Gleason <Byron.Gleason [at] noaa.gov>
***** DO NOT USE REPLY FUNCTION *****
***** REPEAT – DO NOT USE REPLY! *****
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Dataset Announcement:

The U.S. National Climatic Data Center in Asheville, NC has released Version 3.0 beta of the Global Historical Climatology Network-Monthly (GHCNM).

This dataset currently consists of monthly mean temperature data for 7280 stations and includes both raw and adjusted data. The dataset is hosted at the following web sites:

<http://www.ncdc.noaa.gov/ghcnm/>
and
<ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/>

Users are encouraged to provide feedback for this beta version at:
<NCDC.GHCNM@noaa.gov>

Thank You,

Byron Gleason
Physical Scientist
Climate Analysis Branch
National Climatic Data Center

48 Comments
November 23, 2010 5:14 pm

At least you can see the (supposedly) raw data versus the adjusted and be aware of the problems. Previously it was all a black box of faith.

Stephan
November 23, 2010 5:29 pm

Have a look; maybe it's time to report? AMSU temps are going way, way down, to below the anomaly.

SS
November 23, 2010 5:30 pm

That’s cold.
Speaking of cold… looks like the AMSU global temperatures at 14,000 feet just slipped below 2007 and the average…

David A. Evans
November 23, 2010 5:46 pm

Would I be correct in saying that successive data sets have shown increased warming?
DaveE.

Robert of Ottawa
November 23, 2010 5:47 pm

What we have here is incompetence piled upon incompetence. They were caught out with their first attempts at global temperature measurements; they try again and demonstrate why they should just be ignored. Shameful, given the amount of government money (our money) they get to produce the politically desired results.

R. Shearer
November 23, 2010 5:55 pm

Only an agency of the Federal government could handle this junk.

MattN
November 23, 2010 5:56 pm

You can’t get away with that kind of stuff these days. We’re watching…

Paul Vaughan
November 23, 2010 6:41 pm

One of the sites I’ve studied in a fair amount of detail has data available in both raw & GHCN versions. The homogenization procedure makes assumptions about the relationship between TMax & TMin that are beyond ridiculous. At this stage it is not clear what should be done about this, in part because it appears that the folks implementing these algorithms lack the awareness & judgement needed to understand how & why they are going wrong. Sorting this business out is going to be an exceptionally tedious exercise in data analysis, diagnostics, and communication. My advice in the meantime: Consider completely avoiding the GHCN data unless you are both equipped & prepared to run painstaking conditional multivariate diagnostics.

Mike Jowsey
November 23, 2010 7:13 pm

@ Robert of Ottawa: November 23, 2010 at 5:47 pm
Maybe they should follow the New Zealand data-collecting authority (NIWA) and simply declare that they don't have to follow best practice and that, in fact, their temperature record is not an official temperature record.

November 23, 2010 7:25 pm

Is V3 raw data the same as V2 & V1 raw data?

Lawrie Ayres
November 23, 2010 7:25 pm

Is there an independent analysis mechanism to which this information and your concerns can be addressed? We have a similar problem in Australia with BoM and their adjustments that go only one way: up. The only complaints department is BoM itself and the responsible minister, who would simply forward the request to, you guessed it, BoM.

November 23, 2010 8:27 pm

I was not even aware they had just released a new data set; I was just comparing the data I could get access to, and nothing was making sense. I was baffled when the charted difference between the sets didn't match. I kept double-checking to find where I had made my mistake.
Finally I went back and started checking the website. Sure enough, there was no error in what I did; it was all on their side. It makes me hesitant to use any of their data if they can't even subtract two numbers correctly before releasing it.
The new set does show more warming than the last set. That is also no surprise.
John Kehr
The Inconvenient Skeptic

Al Gored
November 23, 2010 8:40 pm

Great work John!

Rob R
November 23, 2010 11:33 pm

Mike Jowsey
How can NIWA follow best practice in temperature data archiving and analysis when it has been proven time and again that those who should be setting the standards of best practice keep failing to meet them? How can NIWA be expected to follow “best practice” when the Australian BOM can't get within a verifiable sight of it? When NOAA can't achieve it? When the Hadley Centre can't achieve it? When CRU and GISS never got close enough to read the cover of the manual?
Sadly, it appears that NIWA have a point that could almost be valid. There is no such standard in climate science. But at least NIWA does have a database that is reasonably well maintained, open, and fully searchable via the internet at no charge. The database contains detailed data on a whole host of climatic variables. For NZ it is possible for the public to retrieve virtually all the monthly data (unfortunately not the daily data). The basic data does not appear to have been manipulated in secret and seems to be the same version as it was decades ago.
So for NZ, if an individual wants to invest the time, the data can be accessed and analysed independently. In fact, even though NIWA does not say so explicitly, they are effectively inviting others to have a crack at it. So I would suggest to any complainants that they stop grousing and actually do something about it, regardless of whether NIWA (the National Institute of Water and Atmospheric Research) is being funded to carry out this function.

val majkus
November 24, 2010 12:40 am

Anthony, I’m sure your readers have heard about the New Zealand coal mining disaster
http://www.climateconversation.wordshine.co.nz/open-threads/climate/climate-science/energy-and-fuel/
please leave a message
Condolences for the Pike River victims
Send your condolences to the families of the victims of the Pike River coal mine disaster.
To leave a message for their loved ones please use the comment field at the foot of this page.
24/11/2010 – Stuff
The Pike River coal mine victims: Conrad John Adams, Malcolm Campbell, Glen Peter Cruse, Allan John Dixon, Zen Wodin Drew, Christopher Peter Duggan, Joseph Ray Dunbar, John Leonard Hale, Daniel Thomas Herk, David Mark Hoggart, Richard Bennett Holling, Andrew David Hurren, Jacobus (Koos) Albertus Jonker, William John Joynson, Riki Steve Keane, Terry David Kitchin, Samuel Peter Mackie, Francis Skiddy Marden, Michael Nolan Hanmer Monk, Stuart Gilbert Mudge, Kane Barry Nieper, Peter O’Neill, Milton John Osborne, Brendan John Palmer, Benjamin David Rockhouse, Peter James Rodger, Blair David Sims, Joshua Adam Ufer, Keith Thomas Valli.

Geoff Sherrington
November 24, 2010 1:41 am

The GHCN version seems to rely upon countries sending raw data to be pooled into a calculation. However, it is very difficult to discover which version of the data is being sent. The Australian data are being upgraded continuously as more old paper metadata sheets are processed, but the processing at many sites goes back only a few years. Thus, there is the possibility that the BoM in Australia is sending revised data from time to time. I don't know if this is the case or not, and I cannot get a straight answer.
Presumably the same applies to many other countries. They might have been caught with records in a poor state when global warming took off a few years ago, and they are still only part way through doing the necessary quality control to make the data good enough, for example, to calibrate proxies. And of course, this could be an explanation for the changes shown above.
I keep raising the matter that a proxy analysis calibrated on one set of temperature data will not give the same result as one calibrated on another set. Who knows if we have a single reliable set? Where are the corrections to past proxy papers whose authors now should know that their calibrations were wrong?
How wrong? Here, as a reminder, is one reconstruction of Darwin, Australia. A person calibrating proxies might, unwittingly, choose any one of these public graph lines.
http://i260.photobucket.com/albums/ii14/sherro_2008/Darwindifferencespaghetti.jpg

stephen richards
November 24, 2010 2:12 am

They do at least ask for feedback. I know they should have done a better job before release, but sometimes it's quicker and easier to get someone you know will find the errors to do it for you. And boy, will they find the errors!

Steven Mosher
November 24, 2010 2:14 am

The errors are not “in the temperatures”; the error is in the web presentation of the difference between the two data sets.

D. Patterson
November 24, 2010 3:26 am

Steven Mosher says:
November 24, 2010 at 2:14 am
The errors are not “in the temperatures”; the error is in the web presentation of the difference between the two data sets.

I expect you will find there are errors “in the temperatures” after they have been reviewed for a while. They are still applying adjustments and homogenizations as part of their quality assurance procedures. The dataset overview says in part: “This new versioning format will facilitate improved documentation and communication of updates and modifications that occur as a normal part of the life of a climate dataset.” The dataset is not a fixed set of data memorializing exactly what was observed in the past, whatever the general public may imagine. Instead, the dataset is under continuing change and “development”, with an automated adjustment life of its own. Suffice it to observe that experimenters still do not have access to the original raw data needed to independently validate and verify the origins, identities, quality, and extent of the original observational data records.

kcrucible
November 24, 2010 4:23 am

“Finally I went back and started checking the website. Sure enough, there was no error in what I did, it was all on their side.”
So, V2 was much higher than the new V3 set but they’re claiming only minor adjustment? Sounds like they fixed some “problems” with the data/calculations but basically don’t want to admit it.

John Christy
November 24, 2010 4:57 am

Note in April, the v2 line skipped 1901, so data values are not aligned properly. It’s simply a web issue.

Bad Andrew
November 24, 2010 5:35 am

“the errors are not ‘in the temperatures’”
I submit that Steven Mosher couldn't detect errors in the temperatures, because he didn't collect the data.
Andrew

woodNfish
November 24, 2010 6:46 am

…may not have significant quality control.
That's a joke, right? These guys have NO quality control and have never had any quality control. In fact, proper QC would destroy their ability to continue the big lie they are pushing.
The blogosphere is the only QC going on for this crap, which is why they despise it so much.

RomanM
November 24, 2010 7:44 am

Some of the “errors” could very well be a numerical rounding issue. For example, 0.044 and 0.036 will both round to 0.04, yet their difference, 0.008, will round to 0.01.
There seem to be many places in the link given by Steven above where this occurs.
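RomanM's point is easy to demonstrate: rounding the inputs and rounding the difference are not the same operation. A quick illustration using his values:

```python
x, y = 0.044, 0.036

# Round the inputs first (what a two-decimal display table shows) ...
diff_of_rounded = round(x, 2) - round(y, 2)   # 0.04 - 0.04 = 0.0

# ... versus rounding the true difference (what an upstream
# "difference" column would show).
rounded_diff = round(x - y, 2)                # round(0.008, 2) = 0.01

print(diff_of_rounded, rounded_diff)          # 0.0 0.01
```

So a reader subtracting the displayed values can legitimately disagree with the displayed difference by a whole rounding unit, with no data error anywhere.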

LearDog
November 24, 2010 8:16 am

At least they had the wisdom to call it a Beta and request valuable feedback like this … Positive steps.
Great job !

Steven Mosher
November 24, 2010 9:46 am

Bad Andrew,
You have a presentation that shows the difference between two numbers:
X − Y = Z
In some cases X − Y = Z is calculated correctly.
In other cases X − Y is calculated incorrectly.
In no case do you know whether X and Y are accurate. In some cases you know that X, Y, or Z is wrong.
So the title “temperature data errors” is really not entirely accurate. Now I would hazard that since the calculations are performed accurately for nine months and inaccurately for three, the error is in the Z variable, and rather than being an error in how v3 is calculated, it's an error in the web presentation. Not too unusual.

Steven Mosher
November 24, 2010 9:49 am

D. Patterson,
A bunch of people have been working with v3 for a while. You are welcome to join.

Colin Aldridge
November 24, 2010 10:21 am

The purpose of issuing a beta version is surely to get independent users to do some testing and QA. No need to gripe about errors; only gripe if they don't fix them!

Bad Andrew
November 24, 2010 10:47 am

Steven Mosher,
Thanks for responding. You just further clarified that there are the errors you can detect and the errors you wouldn’t be able to, given the way you are looking at the numbers after the fact. You have no idea how accurately any of the numbers you like to use describe some real temperature measurement. Any number of them could be spot on, or way off, or a mixture. They all could be inaccurate and you wouldn’t know the difference.
Andrew

Enneagram
November 24, 2010 11:31 am

The whole problem begins when temperature is considered a cause and not a result.

D. Patterson
November 24, 2010 11:38 am

Steven Mosher says:
November 24, 2010 at 9:49 am
D. Patterson,
A bunch of people have been working with v3 for a while. You are welcome to join.

Steven, thanks for the invitation. I do encourage all efforts to IV&V GHCNv3. It needs to be done to evaluate GHCNv3 on its own terms of reference.
However, I also believe it is no less important to help the general public understand what the GHCNv3 data does and does not represent with respect to observations of actual land surface air temperatures. In particular, most people do not know that the air temperature record for any single observation in GHCNv3 is subject to subsequent adjustments that change the data value away from the value in the original weather log. Most people think the “scientists” wrote down the air temperature observations, and that the datasets are those same observations with a few expert-determined adjustments to take care of some errors in the data handling. They most often do not realize how little idea the “scientists” themselves have of the difference between the original air temperature observation and the air temperatures they are adjusting in the datasets.
Worse, the destruction of the original manuscript weather record forms may make it impossible to ever determine what the original observed air temperature values were for some time periods of some weather stations. I hope the praiseworthy efforts to work with GHCNv3 do not have an unintended and undesirable consequence of misleading the public into believing GHCNv3 is preserving the original observational data. Original record forms of an undetermined nature have been and are being allowed to mold, mildew, and be consumed by insect infestations without being digitally imaged or otherwise preserved. This destruction of the original observations can be halted in the NCDC archives. Those observational records which have been preserved and/or digitally imaged can be liberated from their present paywall prisons. Then it would become possible for future volunteer efforts to reconcile the original observational records to the USHCN and GHCN datasets. I believe the differences between the original observations versus the HCN datasets cannot be overstressed for the general public.

Steven Mosher
November 24, 2010 1:53 pm

Bad Andrew says:
November 24, 2010 at 10:47 am
Steven Mosher,
Thanks for responding. You just further clarified that there are the errors you can detect and the errors you wouldn’t be able to, given the way you are looking at the numbers after the fact. You have no idea how accurately any of the numbers you like to use describe some real temperature measurement. Any number of them could be spot on, or way off, or a mixture. They all could be inaccurate and you wouldn’t know the difference.
Andrew
###########
“You just further clarified that there are the errors you can detect and the errors you wouldn’t be able to, given the way you are looking at the numbers after the fact.”
Even if you took the measurements yourself you couldn't be certain they are correct.
You couldn't be certain that you hadn't lost your mind, or that your thermometer was off, or that you wrote it down wrong, or if, or if, or if. That's the problem with skepticism in sheep's clothing. So yes, at some deep philosophical level we are sure of nothing. That's a pretty thin skepticism. Put another way, you are always and forever looking at the numbers “after the fact.” The question is what kind of errors creep in, how do we estimate those errors, can we “correct” those errors, and do those errors bias the result, or do they just make the result less certain?
“You have no idea how accurately any of the numbers you like to use describe some real temperature measurement. ”
Actually, we do have an idea how accurate they are. That is, we can estimate uncertainty bounds. For example, by looking at written records we can get an indication of the frequency of “digit flipping”, that is, the frequency with which the person taking the measurement writes down 91 instead of 19. But you first have to start by identifying the error modes.
“Any number of them could be spot on, or way off, or a mixture. They all could be inaccurate and you wouldn’t know the difference.”
Any number of them are spot on, and some may be way off. If a reading is too far off, say a 5-sigma event, then we can check other thermometers in the area. If Grand Rapids, Michigan shows 120 °F for December while Detroit shows 32, Holland shows 25, and Rockford shows 28, then there is a good chance 120 °F is wrong. But GHCN goes a bit further than this and checks other sources like local papers. Also, we have to know more than that they are inaccurate; we have to know that the error is biased.
If a few thousand are high and a few thousand are low, the bias in that distribution matters: do they even out? Absent any information to the contrary, our best estimate of the error is that it is normally distributed with mean = 0.
We can also test that by resampling the data. Take 7000 stations, compute the mean. Now take subsamples and do the same thing. We can also test it by taking MORE stations. For example, from 1950 or so we have up to 40,000 stations that supply daily data. We can check those. We did. The answer doesn't change.
We can also check using UAH or RSS.
You have a land trend from the surface since 1979; call it 0.20 degrees per decade. Now compare an independent source, UAH, and use it to calculate a trend: 0.17 degrees per decade. Hmm. That tells me the errors in the surface record are bounded by the UAH record. So it's not likely that the true surface trend is 0 degrees per decade. My best evidence says it's between 0.20 and 0.17.
I can also check that by looking at the SST record. I see it has a lower slope than the land, which makes sense given the thermal capacity of water. One can pretty much figure that if the ocean warms by 0.15 °C, the air over land did not warm by only 0.1 °C.
So it's trivially true that you cannot know for each and every thermometer whether it is accurate or not. What you can know is that the average of them all is consistent with other records: consistent with sea level rise, consistent with ice melting, consistent with animal migration, consistent with plant migration, and so on.
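The neighbour check described in that comment can be sketched in a few lines. The station readings below are the ones from the comment; the flagging rule is a crude stand-in for the standardized-anomaly tests a real QC system would use:

```python
from statistics import median

# December readings (deg F) from the comment's example.
neighbours = {"Detroit": 32, "Holland": 25, "Rockford": 28}
candidate_name, candidate_value = "Grand Rapids", 120

typical = median(neighbours.values())                        # 28
spread = max(abs(v - typical) for v in neighbours.values())  # 4

# Flag the reading if it sits far outside the local spread. Real QC
# would compare against the station's own climatology, not this
# crude multiple of the neighbour spread.
if abs(candidate_value - typical) > 5 * max(spread, 1):
    print(f"{candidate_name}: {candidate_value}F is suspect; "
          f"neighbours cluster near {typical}F")
```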

Steven Mosher
November 24, 2010 1:58 pm

D. Patterson,
Yes, when one sees how many records remain undigitized, I want to say: more money for data. WRT adjustments: if the adjustments threw the series out of whack with other records (say UAH or CRN), then the problem would be noteworthy. The biggest issue in adjustments is the failure to carry adjustment uncertainty forward.

Bad Andrew
November 24, 2010 3:24 pm

Steven Mosher,
You use the word ‘estimate’ 3 times in your comment. Interesting choice of words for someone who is trying to sell the public on how accurate the numbers are.
You also say:
“If a few thousand are high and a few thousand are low, the bias in that distribution matters: do they even out? Absent any information to the contrary, our best estimate of the error is that it is normally distributed with mean = 0.”
You don't know what proportion are high or low. That they magically even out for you just goes to show how strictly imaginary your conclusions are.
Andrew

Zeke Hausfather
November 24, 2010 10:33 pm

D. Patterson,
GHCN v3 (and v2 for that matter) contains both raw unadjusted and inhomogeneity-adjusted data. If you do not like the adjusted data, simply use the raw.
Overall, the difference between GHCN v2 and v3 is mostly negligible: http://rankexploits.com/musings/2010/ghcn-version-3-beta/
V3.1 will be a bit more interesting, as they plan to significantly update the number of station records available.
Also, I’m not sure what to make of the original post. What exactly were you trying to do when you got that odd initial graph? Were you working with the raw or adjusted dataset, or just the web tool?

Zeke Hausfather
November 24, 2010 10:38 pm

Ahh, I followed the link to the original blog article and I see that he was using the web tool. You can access the actual data here; it will be much easier to work with than manually copying values: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/
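For anyone going straight to the FTP files, here is a rough parsing sketch. The fixed-width layout assumed below (11-character station ID, 4-character year, 4-character element, then twelve 5-character values each followed by three flag characters, with temperatures in hundredths of a degree and -9999 meaning missing) is my reading of the v3 format; check the README on the FTP site before relying on it, and note the file name is illustrative:

```python
def parse_ghcnm_line(line):
    """Parse one GHCN-M v3 .dat record into (station, year, element, values).

    Assumed layout: station ID in columns 1-11, year in 12-15, element
    in 16-19, then 12 blocks of a 5-character value plus 3 flag
    characters. Values are hundredths of degrees C; -9999 is missing.
    """
    station = line[0:11]
    year = int(line[11:15])
    element = line[15:19]
    values = []
    for m in range(12):
        start = 19 + m * 8  # each month occupies 8 columns
        raw = int(line[start:start + 5])
        values.append(None if raw == -9999 else raw / 100.0)
    return station, year, element, values

# Illustrative usage with a downloaded file (name will vary by release):
# with open("ghcnm.tavg.v3.0.0.qca.dat") as f:
#     for line in f:
#         print(parse_ghcnm_line(line))
```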

Nick Stokes
November 24, 2010 11:50 pm

I see the Inconvenient Skeptic now has an update saying that the website issue has been fixed and the errors have gone away.
REPLY: Thanks, I’ll make an update tomorrow. – Anthony

Geoff Sherrington
November 25, 2010 1:18 am

Nick Stokes,
Given the knowledge that the temperature record is ever-changing, do you know of a mathematical method to calibrate a proxy so that it gives an unbiased result?

John Marshall
November 25, 2010 2:39 am

Temperature is not a reliable source of information as to climate change, given all these discrepancies. Thermodynamically, temperature of a system can only be taken when that system is at equilibrium. The atmosphere is never at equilibrium.

D. Patterson
November 25, 2010 4:33 am

Zeke Hausfather says:
November 24, 2010 at 10:33 pm
D. Patterson,
GHCN v3 (and v2 for that matter) contains both raw unadjusted and inhomogeneity-adjusted data. If you do not like the adjusted data, simply use the raw.

Your comment illustrates the problem: the public fails to understand that the GHCN datasets, both the so-called raw and the adjusted versions, do not necessarily report the original air temperature value recorded by an observer on the observation form. The so-called raw GHCN records have in some instances been adjusted prior to being ingested into the GHCN raw files. NCDC warns users, to take one of many possible examples:

Therefore, users must be aware that if an element in DSI-3210 was flagged as suspicious or in error and an estimated value is included, the estimated value is entered into DSI-3200 as an “original” value.

There are other instances in which the data has been adjusted, altered, or changed in some other manner to differ from what the observer reported on the original observation form.

Overall, the difference between GHCN v2 and v3 is mostly negligible: http://rankexploits.com/musings/2010/ghcn-version-3-beta/

Try to calculate the difference between the air temperature value the observer reported on the observation form and the air temperature value in the GHCN dataset. Repeat for all of the relevant records, and determine whether the GHCN values are less than, equal to, or greater than the original observed values for given temporal periods. Then compare those temporal differences to the alleged rates of global warming, to determine whether the differences between the original observations and the GHCN values are greater than, equal to, or less than the alleged rates of global warming.

V3.1 will be a bit more interesting, as they plan to significantly update the number of station records available.

How many digitized images of the original observation forms will be made available to the public?

Also, I’m not sure what to make of the original post. What exactly were you trying to do when you got that odd initial graph? Were you working with the raw or adjusted dataset, or just the web tool?

What graph, and what do you think you are talking about?

Nick Stokes
November 25, 2010 7:38 am

Geoff,
A proxy is used to infer temperatures that you can’t directly measure (by instrument), but is calibrated relative to temperatures that you can. So yes, you can make it unbiased relative to that calibration period, but you can’t test whether it is unbiased relative to the temperatures that you can’t measure directly.

kadaka (KD Knoebel)
November 25, 2010 8:08 am

How can one write a paper, or even a newspaper article, using such data? We keep seeing changes, major and minor; noticeable flaws are never admitted to, though sometimes corrections just show up; and now comes this major revision. Can you simply refer to the database as it appeared on this date at this time, and know that someone checking your work can easily access exactly the same data from that point in time? Or does one have to indefinitely archive one's own copy and provide ready access for checking, and how does one prove all of the info is exactly as it was in the database at that time?
What happens when the database changes subsequently change one’s results? Does one have to provide some sort of regular updates to their work? What happens when the changes invalidate their conclusions? Do they have to publish something like: “The conclusions were correct with the data that was correct, but now are not correct with the data that is now even more correct”?
Is there anywhere in science where it is permissible for a researcher to continually correct and adjust their database of original observations at will, while repeatedly publishing it as correct and suitable for others to use for scientific work as is? And when those changes screw up the work of others, with that database being the foundation for business and government decisions worth millions to many billions of dollars, not only do they freely escape any legal action against them, they don’t even have to apologize!
Let me know when this “scientific database” becomes suitable for use in science, okay?

D. Patterson
November 25, 2010 6:16 pm

Steven Mosher says:
November 24, 2010 at 1:58 pm
D. Patterson
Yes, when one sees how many records remain undigitized, I want to say: more money for data. WRT adjustments: if the adjustments threw the series out of whack with other records (say UAH or CRN), then the problem would be noteworthy. The biggest issue in adjustments is the failure to carry adjustment uncertainty forward.

Define “out of whack” in your context?

dscott
November 26, 2010 8:30 am

A few points to consider here:
Have you ever considered that many of these discrepancies are a result of mixing artificial man-made time series with natural cycles?
I noticed that the graphic is using a 30-day calendar time period; April has 30 days and December has 31.
The sun has a 24.5-day rotation period, and we know that the sun is the source of energy input to Earth's climate system. TSI and UV are not uniform throughout the solar rotation.
Furthermore, don't you think it is odd that a comparison is being made between a spring month and a month that is mostly in the fall? April is in the middle of spring, and winter does not start until around December 21st. Orbital position (distance from the sun) and obliquity absolutely influence the energy input from the sun.

Smokey
November 26, 2010 9:19 am

dscott,
Here is an apples-to-apples comparison of December temperatures.

Pamela Gray
November 26, 2010 9:46 am

Digit flipping? We have digit flipping!?!?!?! Oh. Could I ever joke-jam on that phrase.
But to be serious, on tripcheck.com one of the pull-down road-hazard blue-button windows recorded 121 degrees F in Meacham while the other displayed minus 21.1 degrees. While tripcheck is not an official weather data site, it is hilarious to count all the typing/data errors in each pull-down window. And I now have a new term for it: digit flipping.

dscott
November 26, 2010 11:42 am

Smokey, nice December anomaly graph. Interesting that a December in the 1930s shows the highest temperature departure of the time series, instead of the end of the century where AGW is supposed to be occurring.