Guest essay by Dan Travers
On Thursday, March 13, 2014, the U.S. National Climatic Data Center switched the data set it uses to report long-term temperature trends to one based on gridded GHCN-D data – with a resulting dramatic change in the official U.S. climate record. As seems always to happen when somebody modifies the temperature record, the new version of the record shows a significantly stronger warming trend than the unmodified or, in this case, discarded version.
The new dataset, called “nClimDiv,” shows the per decade warming trend from 1895 through 2012 for the contiguous United States to be 0.135 degrees Fahrenheit. The formerly used dataset, called “Drd964x,” shows the per decade warming trend over the same period to be substantially less – only 0.088 degrees. Which is closer to the truth?
As will be illustrated below in the side by side comparison graphs, the increase in the warming trend in the new data set is largely the consequence of significantly lowering the temperature record in the earlier part of the century, thereby creating a greater “warming” trend.
This particular manipulation has a long history. For an outstanding account of temperature record alterations, tampering, modifications and mutilations across the globe, see Joseph D’Aleo and Anthony Watts’ Surface Temperature Records: Policy-Driven Deception?
It should be noted that the 0.088 degree figure above was never reported by the NCDC. The Center’s previous practice was to use one data set for the national figures (nClimDiv or something similar to it) and a different one for the state and regional figures (Drd964x or something similar). To get a national figure using the Drd964x data, one has to derive it from the state data. This is done by taking the per decade warming trend for each of the lower forty-eight states and calculating a weighted average, using each state’s geographical area as its weighting.
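To make the arithmetic concrete, below is a minimal sketch of that weighted average. The trend values are invented placeholders, not actual Drd964x figures; a real computation would use all forty-eight states and their actual land areas.

```python
# Minimal sketch of the area-weighted national trend described above.
# The trend values are hypothetical; the areas are approximate land areas.

state_trends = {          # deg F per decade, 1895-2012 (hypothetical)
    "Texas": 0.05,
    "California": 0.07,
    "Maine": -0.03,
}
state_areas = {           # square miles (approximate)
    "Texas": 268_596,
    "California": 163_695,
    "Maine": 35_380,
}

total_area = sum(state_areas.values())
national_trend = sum(
    state_trends[s] * state_areas[s] for s in state_trends
) / total_area

print(f"Area-weighted national trend: {national_trend:.3f} F/decade")
```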
The chart below shows a state by state comparison for the lower forty-eight states of the per decade warming trend for 1895-2012 under both the old and the new data sets.
In the past, alterations and manipulations of the temperature record have been made frequently and are often poorly documented. See D’Aleo and Watts. In this instance, it should be noted, the NCDC made considerable effort to be forthcoming about the data set change. The change was planned and announced well in advance. An academic paper analyzing the major impacts of the transition was written by NOAA/NCDC scientists and made available on the NCDC website. See Fenimore, et al., 2011. A description of the Drd964x dataset, the nClimDiv dataset, and a comparison of the two was put on the website and can be seen here.
The relatively forthcoming approach of the NCDC in this instance notwithstanding, looking at the temperature graphs side by side for the two datasets is highly instructive and raises many questions – the most basic being which of the two data sets is more faithful to reality.
Below are side by side comparisons under the two data sets for California, Maine, Michigan, Oregon and Pennsylvania for the period 1895-2009, with the annual data points being for the twelve month period in the respective year ending in November. The right-side box is the graph under the new nClimDiv dataset; the left-side box is the graph for the same period using the discarded Drd964x dataset. (The reason this particular period is shown is that it is the only one for which I have the data to make the presentation. In December 2009, I happened to copy from the NCDC website the graph of the available temperature record for each of the lower forty-eight states, and the data from 1895 through November 2009 was the most recent that was available at that time.)
I will highlight a few items for each state comparison that I think are noteworthy, but there is much that can be said about each of these. Please comment!
California
[Graphs omitted. Left: Before (Drd964x); Right: After (nClimDiv).]
- For California, the change in datasets results in a lowering of the entire temperature record, but the lowering is greater in the early part of the century, resulting in the 0.07 degree increase per decade in the Drd964x data becoming a 0.18 degree increase per decade under the nClimDiv data.
- Notice the earliest part of the graphs, up to about 1907. In the graph on left, the data points are between 59 and 61.5 degrees. In the graph on the right, they are between 56 and 57.5 degrees.
- The dips at 1910-1911 and around 1915 in the left graph are between 57 and 58 degrees. In the graph on the right they are between 55 and 56 degrees.
- The spike around 1925 is above 61 degrees in the graph on the left, and is just above 59 degrees in the graph on the right.
Maine
· The change in Maine’s temperature record from the dataset switch is dramatic. The Drd964x data shows a slight cooling trend of negative 0.03 degrees per decade. The nClimDiv data, on the other hand, shows a substantial 0.23 degrees per decade warming.
· Notice the third data point in the chart (1898, presumably). On the left it is between 43 and 44 degrees. On the right it is just over 40 degrees.
· Notice the three comparatively cold years in the middle of the decade between 1900 and 1910. On the left the first of them is at 39 degrees and the other two slightly below that. On the right, the same years are recorded just above 37 degrees, at 37 degrees and somewhere below 37 degrees, respectively.
· The temperature spike recorded in the left graph between 45 and 46 degrees around 1913 is barely discernible on the graph at the right and appears to be recorded at 41 degrees.
Michigan
- Michigan’s temperature record went from the very slight cooling trend under Drd964x data of -0.01 degrees per decade to a warming trend of 0.21 degrees per decade under nClimDiv data.
- In Michigan’s case, the differences between the two data sets are starkly concentrated in the period between 1895 and 1930, where for the entire period the temperatures are on average about 2 degrees lower in the new data set, with relatively modest differences in years after 1930.
Oregon
· Notice the first data point (1895). The Drd964x dataset records it at slightly under 47.5 degrees. The new dataset puts it at slightly over 45 degrees, almost 2.5 degrees cooler.
· The first decade appears, on average, to be around 2.5 degrees colder in the new data set than the old.
· The ten years 1917 to 1926 are on average more than 2 degrees colder in the new data set than the old.
· As is the case with California, the entire period of the graph is colder in the new data set, but the difference is greater in the early part of the twentieth century, resulting in the 0.09 degrees per decade increase shown by the Drd964x data becoming a 0.20 degree increase per decade in the nClimDiv data.
Pennsylvania
· Pennsylvania showed no warming trend at all in the Drd964x data. Under the nClimDiv data, the state experienced a 0.10 degree per decade warming trend.
· From 1895 through 1940, the nClimDiv data shows on average about 1 degree colder temperatures than the Drd964x data, followed by increasingly smaller differences in later years.
Nick, concerning TOB adjustments, as several links I have given you show, the adjustments do not start there, or end there. In fact they are changing the current record as well, as the link in my post above shows with regard to Illinois. Also, there are reasonable arguments that show the TOB issue as overstated.
Globally the ice thermometers have been trending up for some time. Ice is not a perfect thermometer, but I think it is better than tree rings.
http://stevengoddard.wordpress.com/2014/04/30/april-28-global-sea-ice-area-third-highest-on-record-highest-in-22-years/
One of the reasons for changing the temperature record and pushing down past temperatures is that it also biases the proxy record. Often, proxy records are calibrated by comparing them to past temperatures. Pushing down past temperatures in the official record therefore also pushes down proxy estimates of past temperatures and makes them unreliable.
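A toy illustration of that calibration argument, assuming the simple case of a linear regression of proxy values against instrumental temperatures (every number here is invented):

```python
# Toy illustration: a proxy calibrated against a lowered instrumental
# record yields lower reconstructed past temperatures. All values invented.
import numpy as np

rng = np.random.default_rng(0)
true_temp = np.linspace(50.0, 52.0, 30)            # calibration period, deg F
proxy = 2.0 * true_temp + rng.normal(0, 0.2, 30)   # proxy tracks temperature

def calibrate(instrumental):
    # Least-squares fit proxy = a*T + b, inverted to reconstruct T.
    a, b = np.polyfit(instrumental, proxy, 1)
    return lambda p: (p - b) / a

recon_original = calibrate(true_temp)
recon_lowered = calibrate(true_temp - 0.5)         # record pushed down 0.5 F

medieval_proxy = 104.0                             # hypothetical old proxy value
print(recon_original(medieval_proxy))              # ~52.0
print(recon_lowered(medieval_proxy))               # ~51.5: the past shifts down too
```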
One of the major goals of climate change and global warming believers is to make it appear that the current temperature is warmer than past periods like the Medieval period. Pushing down past temperatures to bias the calibrations is a good way to do that.
How is temperature measured to 1/1000 of a degree going back to 1895? What is the error band?
Bob Kutz says:
April 30, 2014 at 7:14 am
Nick, your understanding of TOBS differs from mine in such a way that one of us is just making up a story that does not reflect reality.
From GHCN;
The TOB software is an empirical model used to estimate the time of observation biases associated with different observation schedules and the routine computes the TOB with respect to daily readings taken at midnight. Details on the procedure are given in “A Model to Estimate the Time of Observation Bias Associated with Monthly Mean Maximum, Minimum, and Mean Temperatures,” by Karl, Williams, et al., 1986, Journal of Climate and Applied Meteorology 25: 145-160.
SO . . . Mr. Stokes, since your story varies wildly from what the GHCN claims on their own website, maybe you ought to think about looking into things before making assertions.
“From the previously set 5pm to a now common 9am or 10 am.”???
Well if you read the paper you referenced, for example Table 1, you’ll see that in 1931 14% of the stations used AM vs 79% who used PM, whereas in 1984 the split was 42% vs 47%.
Which seems to agree with what Nick said.
It also reports that “it is rare to find cooperative stations with uniform observing throughout their history, and in many instances observers have changed their time of observation several times in a single decade”.
Nick, the data set without TOBS shows about half the warming of the data set with TOBS added. The most recent data set increases the difference. How is it that our understanding of temperature observations 100 years ago now allows us to more accurately adjust them ALL DOWN?
That the mean trend changes doesn’t mean that all the past observations are adjusted downwards. Clearly, if one wants to accurately present past data along with today’s, it’s necessary to put them on a common basis; this is what the TOBS adjustment does.
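For what the “common basis” point means in practice, here is a minimal sketch: a station series with a steady underlying trend acquires a nonclimatic cooling step when the observation time switches from PM to AM, and a trend fitted through the uncorrected series comes out too low. The step size and switch date are assumed for illustration; this is synthetic data, not NOAA’s actual TOB procedure.

```python
# Sketch: a nonclimatic step (PM -> AM observation switch) biases the fitted
# trend; removing the estimated step restores it. Synthetic data only.
import numpy as np

years = np.arange(1895, 2013)
true = 50.0 + 0.013 * (years - 1895)           # 0.13 F/decade underlying trend
step = np.where(years >= 1955, -0.45, 0.0)     # assumed TOB step at the switch
observed = true + step

def trend_per_decade(t, y):
    return 10 * np.polyfit(t, y, 1)[0]

print(trend_per_decade(years, observed))           # biased low
print(trend_per_decade(years, observed - step))    # step removed: ~0.13 F/decade
```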
I just can’t see calling this “climate change” any more. We need to call it “Climate alteration” and Catastrophic Anthropogenic Climate Alteration.
You can easily figure out the acronym for that one.
Does anyone have access to the TOBS adjustment algorithms that NOAA uses? I can’t access the software at the link Zeke H. posted earlier. I’m looking for actual code. Thanks.
@Phil “That the mean trend changes doesn’t mean that all the past observations are adjusted downwards. Clearly, if one wants to accurately present past data along with today’s, it’s necessary to put them on a common basis; this is what the TOBS adjustment does.”
You’re adjusting the temperatures, pure and simple.
Wouldn’t it be cleaner to just admit the historical temperature records are a mess ?
You can’t accurately present past data with present data if the data isn’t similar or a match; otherwise you’re making subjective adjustments toward what you think the data should be, not what the data was.
The fact is, you can’t really compare past data to present, because when you do…. it doesn’t show the global warming trend.
Thus you have to adjust the past data to make it “comparable” and hey presto, a slight warming trend appears.
Why on earth should a temperature on a given date and time be adjusted?
That was the temperature recorded accurately then. You can’t legitimately say because it was X degrees at 9pm it should be Y degrees at 3pm. That’s impossible.
The NOAA and NCDC data then becomes a temperature estimate and not a temperature record….. no error bars are shown and the entire GHCN data set is a complete mess… but no one has enough balls to say this.
The adjustments aren’t really legitimate, and deep down I think all you guys know that.
It’s your opinion that they’re legitimate, based on a few peer-reviewed papers.
Go ask an accountant what would happen if he started adjusting a company’s books….
Bean says: April 30, 2014 at 9:14 am
How is temperature measured to 1/1000 of a degree going back to 1895? What is the error band?
>>>>>>>>>>>>>>>>>
That is one of the big LIES.
The data point represents one instrument, one point in time, taken at one location. It is a SINGLE, unique, unreplicated data point. However, they then apply statistics for multiple readings taken of the same thing. (Think 100 readings of a bathtub full of water.) This is how they get the dubious extra precision.
See: Australian temperature records shoddy, inaccurate, unreliable. Surprise!
In other words the data is good to the nearest whole integer… Maybe.
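A worked illustration of the precision point above: averaging N readings of the SAME quantity shrinks the standard error by a factor of sqrt(N), but a single historical station reading has no replicates. The instrument error below is an assumed figure for illustration.

```python
# sqrt(N) error reduction applies to repeated readings of the SAME quantity
# (the bathtub); it cannot sharpen one unreplicated historical reading.
import numpy as np

rng = np.random.default_rng(1)
instrument_sigma = 0.5                      # deg F, assumed single-reading error

# 100 readings of one bathtub of water: the mean is far more precise.
bathtub = 70.0 + rng.normal(0, instrument_sigma, 100)
print(bathtub.mean())                       # close to 70.0
print(instrument_sigma / np.sqrt(100))      # standard error of the mean: 0.05 F

# One reading, one station, one time: the uncertainty stays the full 0.5 F.
```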
An example of how dubious the readings actually are:
Data from Wunderground for near Raleigh, North Carolina:
Difference in location at the same time:
Silverton Subdivision, Cary, NC…………. 71.6 °F
Westborough, Raleigh, NC…………………72.3 °F
Ashworth, RALEIGH, NC……………………71.2 °F
Hearthstone Farms, Cary, NC…………….71.0 °F
Twin Lakes, Cary, NC………………………..71.8 °F
Brier Creek Country Club, Raleigh, NC…71.4 °F
Westwind Subdivision, Raleigh, NC……….74.8 °F
Westwind- Fountain Park, Raleigh, NC…71.4 °F
Village at Westgate, Raleigh, NC………..71.9 °F
Leesville Hollow, Raleigh, NC ……………70.3 °F
Chapel Hill, NC…………………………………77.0 °F
>>>>>>>>>>>>>>>>>
Data over time: (Chapel Hill, NC)
10:56 AM …………… 75.0 °F
11:31 AM …………… 75.9 °F
11:56 AM …………… 75.9 °F
12:07 PM …………… 75.9 °F
12:09 PM …………… 75.9 °F
12:28 PM …………… 75.9 °F
12:56 PM …………… 77.0 °F
>>>>>>>>>>>>>>>
Data over time: ( Raleigh-Durham Airport, NC)
11:48 AM ……………75.2 °F
11:51 AM ……………75.9 °F
12:04 PM ……………75.2 °F
12:22 PM ……………71.6 °F
12:51 PM ……………71.1 °F
The distance between the Raleigh-Durham Airport, NC and the hospital in Chapel Hill is 14.7 miles.
As I said, the readings are good to the nearest integer – maybe – and that does not even get into the fact that you should be taking the amount of water vapor in the air into account if you want to look at energy/heat.
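Taking the simultaneous readings listed above at face value, the spread is easy to compute directly from the transcribed values:

```python
# Spread of the simultaneous Wunderground readings listed above.
import statistics

readings_f = [71.6, 72.3, 71.2, 71.0, 71.8, 71.4, 74.8, 71.4, 71.9, 70.3, 77.0]
print(max(readings_f) - min(readings_f))    # 6.7 F range across nearby sites
print(statistics.stdev(readings_f))         # ~1.9 F sample standard deviation
```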
Phil. says: April 30, 2014 at 9:25 am
….That the mean trend changes doesn’t mean that all the past observations are adjusted downwards. Clearly, if one wants to accurately present past data along with today’s, it’s necessary to put them on a common basis; this is what the TOBS adjustment does.
>>>>>>>>>>>>>>>>
The Six min/max thermometer, invented in 1782, has been around for over two hundred years. A TOBS adjustment that creates “warming,” which is what the raw vs. “adjusted” data shows, therefore just does not make sense. All you are doing is assigning Thursday’s “high reading” that was read on Thursday morning back to Wednesday. Only when the pattern of reading was changed should an adjustment actually matter, and given all the other blanks, station moves… it is a really minor factor unless operators changed frequently.
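A small simulation of that carryover effect, assuming a min/max thermometer reset at the observation hour and a purely synthetic diurnal cycle. It sketches the mechanism under discussion; it is not a reproduction of the Karl et al. model.

```python
# With a PM reset, one hot afternoon's residual warmth can supply the "daily
# max" for two observational days in a row, warming the mean of the maxima.
import numpy as np

rng = np.random.default_rng(2)
days = 90
daily_anom = rng.normal(0, 5, days)                     # day-to-day weather, F
t = np.arange(days * 24)
diurnal = 10 * np.sin(2 * np.pi * ((t % 24) - 9) / 24)  # peak mid-afternoon
temps = 60 + diurnal + np.repeat(daily_anom, 24)

def mean_of_daily_max(obs_hour):
    # Daily max over consecutive 24-hour windows ending at the observation hour.
    windows = temps[obs_hour:obs_hour + (days - 1) * 24].reshape(days - 1, 24)
    return windows.max(axis=1).mean()

print(mean_of_daily_max(17))   # 5 pm reset: warm-biased mean of maxima
print(mean_of_daily_max(7))    # 7 am reset: each afternoon counted once, lower mean
```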
By the way, for those of you who would like to study temperature trends back before the **current** global warming scare, I give you the Monthly Weather Review, Vol. 61, No. 9:
http://www.odlt.org/dcd/docs/kincer-061-09-0251.pdf
Article Title: “IS OUR CLIMATE CHANGING? A STUDY OF LONG-TIME TEMPERATURE TRENDS”
Money quote:
“In concluding this study, other weather features directly related to general temperature conditions were examined such as the occurrence of frost in the fall and spring, the number of days in winter with certain low temperatures, the occurrence of freezing weather in the fall and spring seasons, the length of the winters, as indicated by the first frost in fall and the last in spring, etc. All of these confirm the general statement that we are in the midst of a period of abnormal warmth, which has come on more or less gradually for many years.”
Published: September 1933
Frank K. says:
April 30, 2014 at 9:28 am
Does anyone have access to the TOBS adjustment algorithms that NOAA uses? I can’t access the software at the link Zeke H. posted earlier? I’m looking for actual code. Thanks.
=============================================
Try EM Smith’s site; on this subject he has at least two main headers with over fifty posts. Some of them dealt extensively with just this, I think. He may be starting to post more, and he responds to over 90 percent of comments.
BTW Frank, the old records, even Jim Hansen’s through the early 1990s, confirm the very warm US of the 1930s.
USHCN stations in continuous operation from the early 1900s to now set the vast majority of their record highs in the 1930s and 40s. (Same station, and there is no adjustment on the record highs.) I have linked these to Nick, Zeke and Mosher several times. They never comment.
@David A.
Thanks. I want to see if I can get a hold of the actual code NOAA uses to do their raw data adjustments such as TOBS, station moves, UHI, etc. TOBS is a big one, and it’s empirical (which is to say, it’s a model). While it is described in papers, no one (as far as I know) has published any of the actual code used to calculate the TOBS adjustment at NOAA/NCDC.
I find the 1933 article amusing since apparently we were going through our first “global warming” crisis back then. It’s also neat to see annual temperature averages reported for places like Philadelphia and New Haven all the way back to the late 1700s! (and no TOBS)
Frank K. says:
April 30, 2014 at 12:42 pm
@David A.
Thanks. I want to see if I can get a hold of the actual code NOAA uses to do their raw data adjustments such as TOBS, station moves, UHI, etc. TOBS is a big one, and it’s empirical (which is to say, it’s a model). While it is described in papers, no one (as far as I know) has published any of the actual code used to calculate the TOBS adjustment at NOAA/NCDC.
The authors of the method referred to by NOAA used to provide copies of the original FORTRAN program.
http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281986%29025%3C0145%3AAMTETT%3E2.0.CO%3B2
The formula used is given in the paper.
JustAnotherPoster says:
April 30, 2014 at 9:39 am
Why on earth should a temperature on a given date and time be adjusted?
Read the paper and you’ll find out why and how.
David A says: April 30, 2014 at 7:53 am
“I also challenged you or Mr. Mosher to just explain the Iceland adjustments.”
I did a post on that topic here.
“However the GHCN stations CONSISTENTLY show a warmer past, and a cooler present”
It isn’t the stations – it’s the subset average, which just depends on what Steven Goddard has put in the subset. I’ve made a Google Maps based station display here. You can zoom in on Michigan to see the station distribution – sparser in the North. You can sub-select time periods to see how it changed. It’s a relative change – I don’t have a similar gadget for USHCN, but they probably have a south bias too. The comparison just depends on which is more biased at any point in time.
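A toy demonstration of that subset-composition point: a plain average over a subset whose warm/cool station mix changes over time shows a spurious shift even when no individual station changes at all. The two stations below are invented, and per-station anomalies are shown as the standard remedy.

```python
# Changing station mix creates a spurious shift in a plain average even
# when every station is flat; per-station anomalies remove the artifact.
import numpy as np

years = np.arange(1990, 2011)
north = np.full(len(years), 45.0)      # flat, cool northern station
south = np.full(len(years), 60.0)      # flat, warm southern station

# Early years: both stations in the subset; later the north one drops out.
subset_avg = np.where(years < 2000, (north + south) / 2, south)
print(subset_avg[0], subset_avg[-1])   # 52.5 -> 60.0: a fake 7.5 F "warming"

# Anomalies from each station's own baseline stay flat throughout.
anoms = np.where(years < 2000, (north - 45.0 + south - 60.0) / 2, south - 60.0)
print(anoms[0], anoms[-1])             # 0.0 -> 0.0
```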
JustAnotherPoster says: April 30, 2014 at 8:04 am
“The pure unadjusted data is loaded in from raw CSV files, the methodology is open…. and guess what. The data shows absolutely zero warming trend at all.”
I run a program which every month analyses the raw GHCN data. Here is an early post, which also compares the results of some other bloggers, and the main indices. No adjustments used at all, and very little different from the main indices.
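For readers curious what a no-adjustments analysis can look like, here is a bare-bones sketch in the same spirit: per-station anomalies against each station’s own baseline, averaged within grid cells and then across cells. The station records are hypothetical, and this is not Nick Stokes’s actual program.

```python
# Bare-bones unadjusted index: station anomalies -> grid-cell means -> index.
import numpy as np

# station -> (grid_cell, {year: raw annual temp, deg F}); all hypothetical
stations = {
    "A": ((10, 20), {1900: 50.1, 1901: 50.4, 1902: 50.3}),
    "B": ((10, 20), {1900: 58.0, 1901: 58.2, 1902: 58.5}),
    "C": ((11, 20), {1900: 41.0, 1901: 41.1, 1902: 41.4}),
}

def index(year):
    cells = {}
    for cell, record in stations.values():
        baseline = np.mean(list(record.values()))   # station's own mean
        cells.setdefault(cell, []).append(record[year] - baseline)
    # Average stations within each cell, then average across cells.
    return np.mean([np.mean(v) for v in cells.values()])

print([round(index(y), 3) for y in (1900, 1901, 1902)])  # small upward drift
```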
Bob Kutz says: April 30, 2014 at 7:14 am
“Nick, your understanding of TOBS differs from mine in such a way that one of us is just making up a story that does not reflect reality.”
You can read more about my misunderstanding here.
“From the previously set 5pm to a now common 9am or 10 am.”???
Yes. Here is Vose of NOAA:
“[6] There has been a systematic change in the preferred observation time in the U.S. Cooperative Observing Network over the past century. Prior to the 1940s most observers recorded near sunset in accordance with U.S. Weather Bureau instructions, and thus the U.S. climate record as a whole contains a slight warm bias during the first half of the century. A switch to morning observation times has steadily occurred during the latter half of the century to support operational hydrological requirements, resulting in a broad-scale nonclimatic cooling effect. In other words, the systematic change in the time of observation in the United States in the past 50 years has artificially reduced the temperature trend in the U.S. climate record”
I wrote a post trying to reply to everyone, which went in to moderation (probably links). Hopefully soon.
Frank K. says:
April 30, 2014 at 12:42 pm
@David A.
Thanks. I want to see if I can get a hold of the actual code NOAA uses
======================================================
I think EM did that. Besides being an economist, he is a computer geek as well. You can look at Phil’s source, but I am confident EM can show you many issues with the code.
Hi David A.
I think you’re confusing the awful GISTEMP code (which of course is a NASA GISS product) with the NOAA/NCDC codes, which nobody has seen (yet). I want to see the NOAA/NCDC codes. NOAA/NCDC supplies the data used to churn out the ever-changing temperature “anomaly” plots…
Lots of justifiable arguing here over exactly how to measure the temps. Here are a few points that come to mind…
… virtually all the warming before 1945 had to be natural. This almost always gets lumped in as AGW but that’s very unlikely considering the low CO2 emissions compared to now. So, lowering the pre-WWII temps only makes the previous NATURAL warming stronger meaning more recent AGW is less extreme by comparison.
… even with all the people working on this, the errors are as big as the signals. That’s reason enough to be a skeptic.
… humans are only sensitive to about a 3 deg difference in temperatures. So, in my lifetime, the temperature has changed only a fraction of what I can even sense, and that fraction is based on a rather dubious data set. That makes me realize that exactly no one would have noticed global warming at all if not for the hype.
… why is it that climate model forecasts have always busted high and observed data has always been adjusted to make the trend bigger? An unbiased forecast or data adjustment should have a 50-50 probability of either direction, so what is the probability that ALL went in the direction of the CAGW enthusiasts?
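The arithmetic implied by that last point, stated explicitly: if each of N independent, unbiased adjustment decisions were equally likely to raise or lower the trend, the probability that all N land on the warming side is 0.5 to the power N. The values of N below are purely illustrative; the true count of independent decisions is not established here.

```python
# Probability that N independent 50-50 adjustments all fall on one
# specified side (illustrative N only).
for n in (5, 10, 20):
    print(n, 0.5 ** n)
# 5 -> 0.03125, 10 -> ~0.001, 20 -> ~0.000001
```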
Frank K says…
Hi David A.
I think you’re confusing the awful code GISTEMP (which of course is a NASA GISS product) with the NOAA/NCDC codes, which nobody has seen (yet).
====================================
Frank, you are exactly correct, and when I checked out Nick Stokes’ link where he says this..
Nick Stokes says:
April 30, 2014 at 3:24 pm
David A says: April 30, 2014 at 7:53 am
“I also challenged you or Mr. Mosher to just explain the Iceland adjustments.”
============================
Nick responds, “I did a post on that topic here.”
========================================
Nick has a curious way sometimes, because that post confirms the following about the Iceland adjustments….
1. Regarding the GISS adjustments, the current situation is absurd. GISS starts from the GHCN adjusted data, which includes (erroneous) adjustments for breaks and homogeneity adjustments. It then applies its own algorithm to remove ‘suspicious records’, and then applies its own homogeneity adjustment!
2. The data for Reykjavik shows that the v2 adjustments cool by about 0.9 in the 60s and 70s, increasing to 1.3 before 1940 and 1.7 before 1920.
So the adjustment-fabricated warming in v2 is similar in magnitude to that shown in the graph for v3.1, though more monotonic. It is a similar picture for Stykkisholmur, with downward adjustments of around a degree before 1970.
3. The counter argument is that the GHCN algorithm is all about neighboring station records. That’s how it works. I recommend you read the papers on it.
4. Which clearly fails because…in all 8 of the Iceland stations’ unadjusted data, there is a sharp drop around 1965, and this is well established from a number of sources.
In 7 of the 8 cases the GHCN adjustment algorithm incorrectly puts in a break here and gets rid of this sharp drop.
Similarly, the raw data consistently shows a warm period around 1930-1940 which is also established in the literature on air temps and SST temps (papers by Hanna et al). In most cases the GHCN algorithm adjusts these downwards.
So whatever is flagging the GHCN adjustments, it’s not nearby neighbors. If the algorithm looked at neighbors in any sensible way it would know that the raw data was valid. (A bare-bones sketch of the neighbor-difference idea follows this list.)
5. Here’s what the Iceland Met Office says: “The GHCN ‘corrections’ are grossly in error in the case of Reykjavik but not quite as bad for the other stations. But we will have a better look. We do not accept these ‘corrections’.” Of course, if they would publish the code, the error could be found fairly easily.
6. In the absence of complete metadata as to the changes at each station, there will be inevitable uncertainty in how to do the adjustments. (I just want, initially, to have it explained how the adjustments for ONE AREA, Iceland, are done.)
7. It is all mixed up. the GHCN adjustment code is NOT available for download. The GISS code is, and gets changed periodically. We do not know how they interact. The interaction itself possibly creates further erroneous artifacts.
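Here is the bare-bones sketch of the neighbor-difference idea referred to above: subtract a composite of neighbors from the target series and look for a step in the difference. The real GHCN pairwise algorithm is far more elaborate; this only shows why a genuine regional shift, felt by all neighbors alike, should not be flagged as a station break.

```python
# Neighbor-difference break test in miniature: a regional 1965 drop shared
# by all neighbors cancels in the difference series and is NOT flagged.
import numpy as np

years = np.arange(1950, 1981)
regional_drop = np.where(years >= 1965, -1.5, 0.0)     # real climate shift
target = 40.0 + regional_drop
neighbors = np.stack([40.0 + regional_drop + d for d in (0.2, -0.1, 0.3)])

diff = target - neighbors.mean(axis=0)                 # difference series

def biggest_step(series):
    # Largest jump in mean between the two sides of any split point.
    return max(abs(series[k:].mean() - series[:k].mean())
               for k in range(5, len(series) - 5))

print(biggest_step(diff))      # ~0: the shared 1965 drop cancels, no break
print(biggest_step(target))    # ~1.5: a naive single-station test would flag it
```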
——————————————————————————————————
Nick misdirects when he pretends that his post solved my challenge to explain just the Iceland adjustments. The truth is those adjustments clearly illustrate the problem and the answer is FUBAR.
I wish to take a brief sidebar with regard to Nick’s lowball tactic of starting his post with hurled insults at WUWT. WUWT is a community far larger than any that will ever read Nick’s posts. On any given thread it is common to see more than a hundred comments. To cherry-pick some uninformed and uneducated comments is the same as making unjustified adjustments to Iceland, or starting Arctic ice coverage graphs in 1979; it is cherry-picking for the purpose of creating a false impression. WUWT has, in every post, very good and very poor comments from a very wide audience.
Finally, Nick responded to this comment of mine, “However the GHCN stations CONSISTENTLY show a warmer past, and a cooler present,” by stating, “It isn’t the stations – it’s the subset average, which just depends on what Steven Goddard has put in the subset.”
Nick ignored entirely my direction to numerous graphs which clearly are direct station-to-station comparisons, raw vs. adjusted. Those graphs, comparing each station to the SAME station, show the raw data warmer in the past and cooler in the present; versus the “official record” they are also consistently warmer in the past and cooler in the present. (If location were the cause, they would not flip-flop, but would remain warmer in the present as well.)
All the best
David A
Here is one state, SAME station raw, compared to the SAME station with “corrections”…
http://stevengoddard.wordpress.com/2014/04/26/noaa-blowing-away-all-records-for-data-tampering-in-2014/
I am open to explanations.
Still no NOAA data access:
“NCDC recently experienced an IT failure leading to a degradation of services. We are returning services to full functionality in a methodical fashion. We expect full services to be available by Thursday, May 1, however the exact date and time cannot be predicted with full confidence. Thank you for your patience.”
David A says:
May 1, 2014 at 4:58 am
7. It is all mixed up. the GHCN adjustment code is NOT available for download. The GISS code is, and gets changed periodically. We do not know how they interact. The interaction itself possibly creates further erroneous artifacts.
First, forget about the GISS code – it’s just one interpretation of the GHCN data. As they are fond of saying, they don’t do anything but process the NOAA data, which anyone can do on their own. What I am more interested in is how the “raw” data gets processed by NOAA, and particularly how the TOBS model is implemented. Did you know that TOBS only affects the U.S. data? That is, every other climate/weather station around the world, from 1880 to the present day, took its measurements at the same clock time every day of every year of every decade. And we KNOW this is true because…well, just because! Of course, the U.S. is just 2% of the earth’s surface, so maybe all of this doesn’t matter…
NCDC is abiding by an old saying in the garment district: “The man wants a blue suit, turn on the blue lights.” Nothing short of criminal penalties for forging government data will curtail this nonsense.