Guest essay by Dan Travers
On Thursday, March 13, 2014, the U.S. National Climatic Data Center switched the data set it uses to report long-term temperature trends to a gridded data set based on GHCN-D station data – with a resulting dramatic change in the official U.S. climate record. As seems always to happen when somebody modifies the temperature record, the new version of the record shows a significantly stronger warming trend than the unmodified or, in this case, discarded version.
The new dataset, called “nClimDiv,” shows the per decade warming trend from 1895 through 2012 for the contiguous United States to be 0.135 degrees Fahrenheit. The formerly used dataset, called “Drd964x,” shows the per decade warming trend over the same period to be substantially less – only 0.088 degrees. Which is closer to the truth?
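For readers who want to check such figures themselves, a per-decade trend of this kind is ordinarily just the least-squares slope of annual mean temperature against year, scaled to a decade. Here is a minimal sketch in Python; the temperature series is invented for illustration and is not NCDC data:

```python
# Sketch: deriving a per-decade trend (like 0.135 or 0.088 °F) as an
# ordinary least-squares fit of annual mean temperature against year,
# with the slope scaled from per-year to per-decade.
def per_decade_trend(years, temps):
    """Return the OLS slope of temps vs. years, in degrees per decade."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    cov = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    var = sum((y - mean_y) ** 2 for y in years)
    return 10.0 * cov / var  # slope is per year; scale to per decade

# Hypothetical example: a series warming exactly 0.01 °F/year
# should report 0.1 °F/decade.
years = list(range(1895, 2013))
temps = [50.0 + 0.01 * (y - 1895) for y in years]
print(round(per_decade_trend(years, temps), 3))  # → 0.1
```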
As will be illustrated below in the side by side comparison graphs, the increase in the warming trend in the new data set is largely the consequence of significantly lowering the temperature record in the earlier part of the century, thereby creating a greater “warming” trend.
This particular manipulation has a long history. For an outstanding account of temperature record alterations, tampering, modifications and mutilations across the globe, see Joseph D’Aleo and Anthony Watts’ Surface Temperature Records: Policy-Driven Deception?
It should be noted that the 0.088 degree figure above was never reported by the NCDC. The Center’s previous practice was to use one data set for the national figures (nClimDiv or something similar to it) and a different one for the state and regional figures (Drd964x or something similar). To get a national figure using the Drd964x data, one has to derive it from the state data. This is done by taking the per decade warming trend for each of the lower forty-eight states and calculating a weighted average, using each state’s respective geographical area as the weighting.
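The area-weighting calculation described above can be sketched in a few lines of Python. The state labels, trends and areas below are hypothetical placeholders, not the actual figures behind the 0.088 degree estimate:

```python
# Sketch of the area-weighted national trend: average the per-state
# trends, weighting each state by its geographical area.
# All values below are made up for illustration.
def national_trend(state_trends, state_areas):
    """Weighted average of per-state trends; weights are state areas."""
    total_area = sum(state_areas.values())
    return sum(state_trends[s] * state_areas[s] for s in state_trends) / total_area

trends = {"A": 0.10, "B": -0.02, "C": 0.05}         # °F per decade (made up)
areas  = {"A": 100_000, "B": 50_000, "C": 150_000}  # square miles (made up)
print(round(national_trend(trends, areas), 4))  # → 0.055
```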
The chart below shows a state by state comparison for the lower forty-eight states of the per decade warming trend for 1895-2012 under both the old and the new data sets.
In the past, alterations and manipulations of the temperature record have been made frequently and are often poorly documented. See D’Aleo and Watts. In this instance, it should be noted, the NCDC made a considerable effort to be forthcoming about the data set change. The change was planned and announced well in advance. An academic paper analyzing the major impacts of the transition was written by NOAA/NCDC scientists and made available on the NCDC website. See Fenimore et al., 2011. A description of the Drd964x dataset, the nClimDiv dataset, and a comparison of the two was put on the website and can be seen here.
The relatively forthcoming approach of the NCDC in this instance notwithstanding, looking at the temperature graphs side by side for the two datasets is highly instructive and raises many questions – the most basic being which of the two data sets is more faithful to reality.
Below are side by side comparisons under the two data sets for California, Maine, Michigan, Oregon and Pennsylvania for the period 1895-2009, with the annual data points being for the twelve month period in the respective year ending in November. The right-side box is the graph under the new nClimDiv dataset, the left-side box is the graph for the same period using the discarded Drd964x dataset. (The reason this particular period is shown is that it is the only one for which I have the data to make the presentation. In December 2009, I happened to copy from the NCDC website the graph of the available temperature record for each of the lower forty-eight states, and the data from 1895 through November 2009 was the most recent that was available at that time.)
I will highlight a few items for each state comparison that I think are noteworthy, but there is much that can be said about each of these. Please comment!
California
Left: Before, Right: After
- For California, the change in datasets results in a lowering of the entire temperature record, but the lowering is greater in the early part of the century, resulting in the 0.07 degree increase per decade in the Drd964x data becoming a 0.18 degree increase per decade under the nClimDiv data.
- Notice the earliest part of the graphs, up to about 1907. In the graph on left, the data points are between 59 and 61.5 degrees. In the graph on the right, they are between 56 and 57.5 degrees.
- The dips at 1910-1911 and around 1915 in the left graph are between 57 and 58 degrees. In the graph on the right they are between 55 and 56 degrees.
- The spike around 1925 is above 61 degrees in the graph on the left, and is just above 59 degrees in the graph on the right.
Maine
· The change in Maine’s temperature record from the dataset switch is dramatic. The Drd964x data shows a slight cooling trend of negative 0.03 degrees per decade. The nClimDiv data, on the other hand, shows a substantial 0.23 degrees per decade warming.
· Notice the third data point in the chart (1898, presumably). On the left it is between 43 and 44 degrees. On the right it is just over 40 degrees.
· Notice the three comparatively cold years in the middle of the decade between 1900 and 1910. On the left the first of them is at 39 degrees and the other two slightly below that. On the right, the same years are recorded just above 37 degrees, at 37 degrees and somewhere below 37 degrees, respectively.
· The temperature spike recorded in the left graph between 45 and 46 degrees around 1913 is barely discernible on the graph at the right and appears to be recorded at 41 degrees.
Michigan
- Michigan’s temperature record went from the very slight cooling trend under Drd964x data of -0.01 degrees per decade to a warming trend of 0.21 degrees per decade under nClimDiv data.
- In Michigan’s case, the differences between the two data sets are starkly concentrated in the period between 1895 and 1930, where for the entire period the temperatures are on average about 2 degrees lower in the new data set, with relatively modest differences in years after 1930.
Oregon
· Notice the first datapoint (1895). The Drd964x dataset records it at slightly under 47.5 degrees. The new dataset puts it at slightly over 45 degrees, almost 2.5 degrees cooler.
· The first decade appears, on average, to be around 2.5 degrees colder in the new data set than the old.
· The ten years 1917 to 1926 are on average greater than 2 degrees colder in the new data set than the old.
· As is the case with California, the entire period of the graph is colder in the new data set, but the difference is greater in the early part of the twentieth century, resulting in the 0.09 degrees per decade increase shown by the Drd964x data becoming a 0.20 degree increase per decade in the nClimDiv data.
Pennsylvania
· Pennsylvania showed no warming trend at all in the Drd964x data. Under the nClimDiv data, the state experienced a 0.10 degree per decade warming trend.
· From 1895 through 1940, the nClimDiv data shows on average about 1 degree colder temperatures than the Drd964x data, followed by increasingly smaller differences in later years.
When you see a temperature data set expressed in °F, one which averages temperatures across regions without using anomalies, it’s likely to be carrying a lot of baggage. And that’s the case with Drd964x. To understand why they need to do what they are doing, you need to read the Fenimore et al. paper.
Drd964x has been built up over years, primarily for agricultural uses and the like. It’s not intended for climate science. So it isn’t even internally consistent. It uses different methods for pre-1931 data than for later data. Why? Because that’s how states used to do it. Continuity was more important than physical climate accuracy.
Not using anomalies caused problems with inhomogeneity. They describe “valley effects” – regions where stations were lower than average topography. And “ridge effects”.
The change in absolute values is inevitable when you change the station set. We saw that trying to compare USHCN and USCRN. USCRN reports cooler temperatures. Why? Their stations are, on average, higher. And there are latitude effects etc.
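The elevation point can be made concrete with a toy calculation. Everything here is invented: the lapse rate is a rough textbook-style assumption and the station elevations are hypothetical; the only point is that a higher network reads cooler in absolute terms even when nothing about the climate differs:

```python
# Toy illustration: two station networks report different absolute
# temperatures purely because their stations sit at different
# elevations. All numbers are invented for illustration.
LAPSE_F_PER_1000FT = 3.5  # rough average lapse rate, an assumption

def station_temp(sea_level_temp, elevation_ft):
    """Cool the sea-level temperature by a fixed lapse rate."""
    return sea_level_temp - LAPSE_F_PER_1000FT * elevation_ft / 1000.0

old_elevations = [500, 1000]   # ft, hypothetical old network
new_elevations = [2000, 3000]  # ft, hypothetical new network (higher up)

sea_level = 60.0
old_mean = sum(station_temp(sea_level, e) for e in old_elevations) / 2
new_mean = sum(station_temp(sea_level, e) for e in new_elevations) / 2
# The higher network reads several degrees cooler in absolute terms,
# with no difference in the underlying climate.
print(old_mean, new_mean)
```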
– – – – – – – – – –
John McClure,
I quite agree about your caution in attributing anything to the scientists that is pejorative. But consider the possibility that they think that what they are doing is good for saving the planet; which is one of my six ‘Possible Cause Categories’ (PCC).
I do not know that it is true. But it is a fact that all the major institutional bodies making the datasets have consistently (always?) increased the historical warming rate when they make changes. Why? See my other 5 PCCs.
John
Richard M says:
April 29, 2014 at 2:07 pm
I completely agree if we’re discussing Pharma, but climate science isn’t a hard science. It’s a soft science based more on “educated guess” than scientific fact. It’s in its infancy, yet evolving well thanks to all the global milk.
The issue isn’t the real need to understand; the issue is the rush to judgement! Scientific dialogue over an issue we can share globally is what we need – not action, as it’s poorly defined. Stupid solutions to an issue that has yet to be properly defined is a Carnie hire.
Hi Steve. If a station at elevation x reports a temperature of 59 degrees based on an elevation of 3000 feet, then that is what the temperature is at that station and elevation. The world is a 3D place.
Doing the maths and stating if the station was at sea level the temperature would be 57 degrees at that location isn’t valid. It’s an assumption.
The data is what it is..
The temperature at place A with an elevation of 3000 feet was 59 degrees.
You can adjust it all you want but fundamentally any data post adjustment isn’t a record.
At nick stokes…. That’s an assumption of what they need to do. Just because it’s a paper doesn’t make it a valid adjustment
Steven Mosher says:
If the results disagree with your theory, your theory is busted.
That is what we’ve been pointing out incessantly here: the catastrophic AGW conjecture is well and thoroughly busted, because it doesn’t agree with empirical evidence. It’s a dead parrot.
It has warmed so much that Lake Superior is still 60% ice covered, probably the highest level for this time of year since 1895. How can it be so much warmer? The freezing temperature of water has not varied in 13 billion years, so it must be colder!!!
We cannot rely on the numbers produced by the NCDC. They have adjusted the records about 10 times now. Are they saying that the 9th adjustment, made last year, was not accurate (at the time, they said it made the record perfect)? Are they saying that the 1st adjustment in 1988 was not accurate (at the time, they said it was fully corrected at that point)?
The climate is not different than it was in 1900. There is no record from 1900 produced by the NCDC that we can rely on anymore.
We need a forensic audit team to go in and fix our temperature history.
JustAnotherPoster says: April 29, 2014 at 2:28 pm
“If a station at elevation x reports a temperature of 59 degrees based on an elevation of 3000 feet, then that is what the temperature is at that station and elevation. The world is a 3D place.”
Yes, it is. And you can find all that in the GHCN-D data, unadjusted. But it’s not what people generally want to know. If you want to know the average temperature in Maine, or the US, you then have to worry about which set of stations you have used. If your 3000 ft station in Maine misses a month, the state’s average goes up. Doesn’t mean it was warmer. That’s why you need anomalies for regional averaging.
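That dropout effect is easy to demonstrate. Below is a minimal sketch, with invented station baselines and readings, showing that an absolute-temperature average jumps when the cold station goes missing, while an anomaly average (departures from each station’s own long-term mean) does not:

```python
# Minimal sketch of station-dropout bias: averaging absolute
# temperatures jumps when a cold station misses a reading, while
# averaging anomalies stays stable. All numbers are invented.
baseline = {"valley": 50.0, "mountain": 30.0}  # long-term station means, °F

def abs_mean(readings):
    """Plain average of absolute temperatures."""
    return sum(readings.values()) / len(readings)

def anom_mean(readings):
    """Average of departures from each station's own baseline."""
    return sum(t - baseline[s] for s, t in readings.items()) / len(readings)

full    = {"valley": 51.0, "mountain": 31.0}  # both stations ran +1 °F
dropout = {"valley": 51.0}                    # cold mountain station missing

print(abs_mean(full), abs_mean(dropout))    # 41.0 vs 51.0: spurious 10° jump
print(anom_mean(full), anom_mean(dropout))  # 1.0 vs 1.0: anomaly is stable
```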
– – – – – – – – – –
Bill Illis,
I agree.
My view: The audit team cannot depend in any way on government funding or have any connection to the major institutional bodies that make the datasets. It needs to be privately funded and to include statisticians; people with medical and software research experience; professional industry QA/QC experts; professional IT data handling experts; professional auditing company personnel; and, of course, some skeptical climate scientists.
From my perspective, that is the only way to increase the confidence in climate science from its low level.
John
Bad Andrew says: April 29, 2014 at 8:50 am
Mosher Drive By in 3…2…1…
Andrew
Too easy.
John McClure: “I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain. There simply isn’t any gain to be had and it flies in the face of reason to think otherwise.”
That is the mindset that is responsible for so much acceptance of stuff like “The brutal cold is a result of global warming; the scientists say so.”
In the first place, there’s plenty of gain on offer, since producing results helpful to the skeptic cause can be a career-shortening move in this administration. Listen to John Christy recount an example in his interview over at Alex Epstein’s site.
But there’s something else, something that I do myself all the time. When a computer program I’ve written gives me a result that makes sense to me, I don’t much look for problems. Otherwise, I look for the bug. Many’s the time I’ve gone round and round to find the bug, only to realize eventually that the program was actually giving me the right answer; it just hadn’t seemed right to me. So I’m biased, and only because I’m trying to get it right. Feynman’s recounting of the elementary-charge experiments comes to mind in this context.
So if you’re convinced – because other scientists who you think must be right have told you so – that the physics tells us warming has to be occurring, you think your results are wrong if that’s not what you see.
And, by the way, there’s something that I’ve seen first hand a lot. Those PhDs routinely make errors first-year undergraduates could detect. (Or even lawyers; I once had to send a tiny, around fifteen-line, central-to-the-system swatch of C code back to a couple of–very bright–PhDs three times before they finally got it right.) So there are a lot of things to correct all the time and therefore plenty of opportunity for (even unintentionally) selective correction.
And, judging by how misleading so much that comes out of the IPCC and other catastrophic-global-warming proponents is, I also wouldn’t discount the possibility of intentional falsehoods just because the folks involved are scientists.
Bill Illis,
You said, “We cannot rely on the numbers produced by the NCDC.” When one sees this level of adjustments, I tend to agree with you. The new apparent warming rates by state have changed in a major way. About a quarter of the 48 states had their apparent warming rate increased by a factor of 2, 4, 12 and even 26 times. Another quarter had their warming rate stay the same or slightly reduced. The rest all had their warming rate increased, but by lesser amounts. How can one go to states like Maine, Michigan or Texas and say, oops, your climate has actually warmed 26, 12 or 9 times faster since 1895? Eventually NCDC may get to the point where the public no longer believes or trusts any of their data. To announce this apparent 50% increase in the warming trend of the entire contiguous US just after the US experienced its 34th coldest winter, and when its annual and winter temperatures have been declining for 17 years, is even more peculiar.
John McClure says:
April 29, 2014 at 2:43 pm
@John Whitman on April 29, 2014 at 2:18 pm
John,
Thanks for the comment but the part no one seems to understand or refuses to is the reality. Scientists are free thinkers here and bound by nothing political.
No question we have the characters who game the system but they lose all merit when they do.
I think the distinction you’re looking for is the notion “Scientist” in soft science in relation to the real thing?
– – – – – – – – – – – –
John McClure,
You point out the possibility of making a distinction between ‘real’ and ‘soft’ science. ‘Real’ scientist as in one who conforms to an objective concept of science and ‘soft’ scientist as in one who conforms to a subjective concept of science? I would say that is the distinction.
With that distinction in mind, the point is, prima facie, that there are reasonable issues raised by a broader science community and by responsible citizens interested in the science. The reasonable issues are about whether the major institutional bodies doing datasets are ‘soft’ (subjective) scientists doing ‘soft’ (subjective) science or ‘real’ (objective) scientists doing ‘real’ (objective) science.
In order to establish which kind of scientists and science is involved, would you agree with Bill Illis (April 29, 2014 at 2:41 pm) there should be a forensic audit team to go over the processes and products of the major institutional bodies doing the datasets?
John
The graph for Maine seems to show a cooling trend which became a warming trend. How much more of a cooling trend would there have been without past data changes?
The clever thing about increasing the heating trend by cooling the past is that today’s temperatures can be verified, those of the past cannot, and have larger error bars as an additional excuse. Given the useful nature of the modifications for the Warmistas, I can only conclude it is a deliberate, dishonest, strategy.
An ongoing discussion here about the integrity of scientists. The NCDC/NOAA/NASA/Met Office etc. scientists have political paymasters and often are only engaged in bureaucratic functions.
Did they offer a justification for this adjustment? Whatever justifies the adjustment also justifies the claim that their original data was not trustworthy. They have thereby established that the original data was not trustworthy. Why was it not trustworthy? What guarantee can they give us that they have eliminated the problem that made the original data untrustworthy? To cut to the chase, what are their criteria for making adjustments in the future? Could they please state those? Honest people would. Then we could use their criteria to make our own assessments of how well they make adjustments. Honest people would welcome such assessments.
I think you folks are all missing the most important post in this thread, by JonF, I believe….
Let them make 1895 cooler by whatever means they choose.
This introduces a HUGE rise in temperatures well before manmade CO2 could have possibly played an influence on our climate. It simply destroys the theory that the 1970-1997 rise HAS to be CO2….
Game over…..
Joe Born says:
April 29, 2014 at 3:10 pm
; )
You’re running logic traps. Let me know when preconceived notions turn into an honest question.
This data churning is the equivalent of ‘confusion marketing’.
Anyone who has tried to study cellphone/internet sales literature will accept that it is not possible to come to firm conclusions, on the basis of the information given, about which is the best deal.
With all the major data running against their modelled scenarios they are throwing in another dollop of confusion in an attempt to maintain their failed narrative.
Someone tell me how, in Michigan where I live, the ice is still at record levels, yet NCDC claims it was colder in the past, that being the previous 100 years.
As observed by Steven Goddard, there have been massive adjustments for Michigan temperatures.
http://stevengoddard.wordpress.com/2014/04/17/march-madness-in-michigan/
Steve Mosher and Nick Stokes, please explain 5 degrees of adjusting the past cooler and the present warmer. How is that justified given the amount of ice on the Great Lakes this past Winter and Spring.
This is where I doubt either one of you will respond because there is no logical explanation, no double speaking barf mulch that can have any explanation other than the temperatures are 100% phony and it likely applies to virtually all of the “adjustments” coming out of the temperature gatekeepers.
It took until April 29 10:41 am for a commenter (evanmjones) to point out the obvious: The changes have made the slope steeper prior to 1950. But it is not until 1950 that CO2 becomes a significant player. So it means that climate sensitivity to GHGs must be lower than previously thought. As evanmjones says, it adds support to the “recovery from the LIA by unknown mechanism” theory.
charles nelson says:
April 29, 2014 at 3:40 pm
This data churning is the equivalent of ‘confusion marketing’
==========
The marketing aspect has been an issue from day one. The initial IPCC effort was the basis for the creation of the UNFCCC. Stupid Is as Stupid Tells It To Do ; )
Please note, based on my last comment:
The scientific community is held harmless in this UN mess. Scientists are the hero – the UN and political goofs the ignorant mistake!