One-Way Adjustments: The Latest Alteration to the U.S. Climate Record

Guest essay by Dan Travers

On Thursday, March 13, 2014, the U.S. National Climatic Data Center switched to a gridded GHCN-D data set for reporting long-term temperature trends – with a resulting dramatic change in the official U.S. climate record. As seems always to happen when somebody modifies the temperature record, the new version shows a significantly stronger warming trend than the unmodified or, in this case, discarded version.

The new dataset, called “nClimDiv,” shows the per decade warming trend from 1895 through 2012 for the contiguous United States to be 0.135 degrees Fahrenheit. The formerly used dataset, called “Drd964x,” shows the per decade warming trend over the same period to be substantially less – only 0.088 degrees. Which is closer to the truth?
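A per-decade trend figure like 0.135 °F is conventionally the slope of an ordinary least-squares fit through the annual series, scaled from per-year to per-decade. The sketch below illustrates that computation on synthetic data; the series is invented for illustration and is not the actual NCDC record.

```python
import numpy as np

# Synthetic annual mean temperatures (deg F) for 1895-2012, built with a
# known underlying slope of 0.0135 deg/year plus noise. Illustrative only.
rng = np.random.default_rng(0)
years = np.arange(1895, 2013)
temps = 52.0 + 0.0135 * (years - 1895) + rng.normal(0, 0.8, years.size)

# Least-squares slope in degrees per year, scaled to degrees per decade.
slope_per_year = np.polyfit(years, temps, 1)[0]
trend_per_decade = slope_per_year * 10
print(f"Trend: {trend_per_decade:.3f} deg F per decade")
```

With 118 years of data, the fitted trend recovers the underlying slope to within a few hundredths of a degree per decade, which is why small systematic shifts in the early record can move the headline figure noticeably.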

As will be illustrated below in the side by side comparison graphs, the increase in the warming trend in the new data set is largely the consequence of significantly lowering the temperature record in the earlier part of the century, thereby creating a greater “warming” trend. 

This particular manipulation has a long history. For an outstanding account of temperature record alterations, tampering, modifications and mutilations across the globe, see Joseph D’Aleo and Anthony Watts’ Surface Temperature Records: Policy-Driven Deception?

It should be noted that the 0.088 degree figure above was never reported by the NCDC. The Center’s previous practice was to use one data set for the national figures (nClimDiv or something similar to it) and a different one for the state and regional figures (Drd964x or something similar). To get a national figure using the Drd964x data, one has to derive it from the state data. This is done by taking the per decade warming trend for each of the lower forty-eight states and calculating a weighted average, using each state’s respective geographical area as the weighting.
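The area-weighted average described above can be sketched in a few lines. The state trends below are hypothetical placeholders, not the actual Drd964x figures; the land areas are approximate.

```python
# Area-weighted national trend from per-state trends -- a minimal sketch.
# Trend values are illustrative placeholders, not actual Drd964x figures.
state_trends = {            # deg F per decade (hypothetical)
    "California": 0.07,
    "Oregon": 0.09,
    "Maine": -0.03,
}
state_areas = {             # land area in square miles (approximate)
    "California": 163_695,
    "Oregon": 98_379,
    "Maine": 35_380,
}

total_area = sum(state_areas.values())
national_trend = sum(
    state_trends[s] * state_areas[s] / total_area for s in state_trends
)
print(f"Area-weighted national trend: {national_trend:.3f} deg F/decade")
```

Extending the dictionaries to all forty-eight contiguous states yields the national figure; because the weights are areas rather than station counts, large sparsely instrumented states pull the average more than small densely instrumented ones.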

The chart below shows a state by state comparison for the lower forty-eight states of the per decade warming trend for 1895-2012 under both the old and the new data sets.

[Chart: state-by-state comparison of 1895-2012 per-decade warming trends, Drd964x vs. nClimDiv]

In the past, alterations and manipulations of the temperature record have frequently been made and are often poorly documented. See D’Aleo and Watts. In this instance, it should be noted, the NCDC made considerable effort to be forthcoming about the data set change. The change was planned and announced well in advance. An academic paper analyzing the major impacts of the transition was written by NOAA/NCDC scientists and made available on the NCDC website. See Fenimore et al., 2011. A description of the Drd964x dataset, the nClimDiv dataset, and a comparison of the two was put on the website and can be seen here.

The relatively forthcoming approach of the NCDC in this instance notwithstanding, looking at the temperature graphs side by side for the two datasets is highly instructive and raises many questions – the most basic being which of the two data sets is more faithful to reality.

Below are side-by-side comparisons under the two data sets for California, Maine, Michigan, Oregon and Pennsylvania for the period 1895-2009, with the annual data points being for the twelve-month period in the respective year ending in November. The right-side box is the graph under the new nClimDiv dataset; the left-side box is the graph for the same period using the discarded Drd964x dataset. (The reason this particular period is shown is that it is the only one for which I have the data to make the presentation. In December 2009, I happened to copy from the NCDC website the graph of the available temperature record for each of the lower forty-eight states, and the data from 1895 through November 2009 was the most recent available at that time.)

I will highlight a few items for each state comparison that I think are noteworthy, but there is much that can be said about each of these. Please comment!

 

California

[Graphs: California annual temperatures, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

Left: Before, Right: After

  • For California, the change in datasets results in a lowering of the entire temperature record, but the lowering is greater in the early part of the century, resulting in the 0.07 degree increase per decade in the Drd964x data becoming a 0.18 degree increase per decade under the nClimDiv data.
  • Notice the earliest part of the graphs, up to about 1907. In the graph on left, the data points are between 59 and 61.5 degrees. In the graph on the right, they are between 56 and 57.5 degrees.
  • The dips at 1910-1911 and around 1915 in the left graph are between 57 and 58 degrees. In the graph on the right they are between 55 and 56 degrees.
  • The spike around 1925 is above 61 degrees in the graph on the left, and is just above 59 degrees in the graph on the right.

Maine

[Graphs: Maine annual temperatures, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

· The change in Maine’s temperature record from the dataset switch is dramatic. The Drd964x data shows a slight cooling trend of negative 0.03 degrees per decade. The nClimDiv data, on the other hand, shows a substantial 0.23 degrees per decade warming.

· Notice the third data point in the chart (1898, presumably). On the left it is between 43 and 44 degrees. On the right it is just over 40 degrees.

· Notice the three comparatively cold years in the middle of the decade between 1900 and 1910. On the left the first of them is at 39 degrees and the other two slightly below that. On the right, the same years are recorded just above 37 degrees, at 37 degrees and somewhere below 37 degrees, respectively.

· The temperature spike recorded in the left graph between 45 and 46 degrees around 1913 is barely discernible on the graph at the right and appears to be recorded at 41 degrees.

Michigan

[Graphs: Michigan annual temperatures, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

  • Michigan’s temperature record went from the very slight cooling trend under Drd964x data of -0.01 degrees per decade to a warming trend of 0.21 degrees per decade under nClimDiv data.
  • In Michigan’s case, the differences between the two data sets are starkly concentrated in the period between 1895 and 1930, where for the entire period the temperatures are on average about 2 degrees lower in the new data set, with relatively modest differences in years after 1930.

Oregon

[Graphs: Oregon annual temperatures, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

· Notice the first data point (1895). The Drd964x dataset records it at slightly under 47.5 degrees. The new dataset records it at slightly over 45 degrees, almost 2.5 degrees cooler.

· The first decade appears, on average, to be around 2.5 degrees colder in the new data set than the old.

· The ten years 1917 to 1926 are on average greater than 2 degrees colder in the new data set than the old.

· As is the case with California, the entire period of the graph is colder in the new data set, but the difference is greater in the early part of the twentieth century, resulting in the 0.09 degrees per decade increase shown by the Drd964x data becoming a 0.20 degree increase per decade in the nClimDiv data.

Pennsylvania

[Graphs: Pennsylvania annual temperatures, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

 

· Pennsylvania showed no warming trend at all in the Drd964x data. Under the nClimDiv data, the state experienced a 0.10 degree per decade warming trend.

· From 1895 through 1940, the nClimDiv data shows on average about 1 degree colder temperatures than the Drd964x data, followed by increasingly smaller differences in later years.

 

224 Comments
Rhoda R
April 29, 2014 11:33 am

I thought one of the early Hansen adjustments was to remove the higher elevation reporting stations and increase the number of lower elevation stations — so shouldn’t the downward adjustment for elevation be made on the younger data rather than on the older data?

April 29, 2014 11:34 am

Guest essayist Dan Travers said,
“As seems to always happen when somebody modifies the temperature record, the new version of the record shows a significantly stronger warming trend than the unmodified or, in this case, discarded version.”

– – – – – – – – – – –
Dan Travers,
Yes, your quoted statement above does seem to be the case. Changes to datasets from the major institutional bodies who make them do seem to have the effect of showing higher warming rates after the changes compared to before the changes.
POSSIBLE CAUSE CATEGORIES (PCC) – these are the possible reasons all the major institutional bodies who make GAST datasets always increase the rate of historical warming when they make changes to their datasets
PCC #1 – It is just a coincidence
PCC #2 – Climate science is a young developing science so all changes are improvements from the earlier state of the dataset construction techniques and data measurement analysis. Improved science is just finding more warming rates in reality, so it is expected that all the major institutional bodies will always have increases in the historical warming rate.
PCC #3 – The major bodies are just professionally collaborating with each other closely and they are all openly and professionally coordinating closely the changes. So it is expected that the changes are in the same direction across all dataset changes and all major institutional bodies.
PCC #4 – outside pressure on each of the major bodies is causing creeping warming rate exaggerationism in their datasets
PCC #5 – the scientists involved with the major institutional bodies have a commonly shared consensus that there is a need to communicate better to save the planet. They are increasingly changing the historical warming rate because they think it is a better way to communicate climate science in order to save the planet.
PCC #6 – it’s karma
So, which of those PCCs are the more likely reasons and which are less likely reasons that all the changes to datasets from the major institutional bodies who make them do seem to have the effect of showing higher historical warming rates after the changes compared to before the changes?
Take your pick.
For me it is a toss-up between PCC #2 and PCC #5 as the reason the changes are always increasing the historical warming rate. Although PCC #6 should be given some focus.
John

John McClure
April 29, 2014 11:37 am

Poster Child years for Much Above Normal Contiguous United States Annual Temperature Anomalies (values above 1.0): 2012, 2006, 1999, 1998, and 1934. However, values Above Normal include 1998-2007 and 2010-2012.

E. Martin
April 29, 2014 11:44 am

Wouldn’t reporting the names and positions of the “adjustments” perpetrators at NCDC inspire them to justify their changes?

M Seward
April 29, 2014 11:47 am

I have had it with this sort of politically corrected data crap. Surely it must be official now that AGW is a pure, unadulterated HOAX and accordingly I formally declare myself a denier. This latest boondoggle with the numbers is risible.
Jo Nova’s current post about sea level rise numbers looking like they need an upward fiddle ( cos they are going against the AGW thesis) is getting ahead of the curve. I am sure they will be politically “corrected” shortly.
The closest analogy to this sort of figure fiddling I can think of comes from the porn industry where they have girls help the guys “keep the wood” between scenes. That relegates AGW to science porn for the political raincoat brigade which is about right.

John McClure
April 29, 2014 11:52 am

E. Martin says:
April 29, 2014 at 11:44 am
Wouldn’t reporting the names and positions of the “adjustments” perpetrators at NCDC inspire them to justify their changes?
=======
This may have less to do with the actual data, it looks like the criteria used for Anomalies is of interest.

April 29, 2014 12:17 pm

It is unfair to make a prior time colder when the people living then went out without wearing a coat on the basis of how warm it felt at the time.

Henry Clark
April 29, 2014 12:40 pm

This is a good illustration for the U.S., and the like, far beyond U.S. data alone, is shown at http://hidethedecline.eu
Rewriting of data by the usual suspects is paramount, as without it climate history is consistent, logical, and believable, not having events occurring without a cause (from the “pause,” to the global cooling scare of the 1960s-1970s, to the LIA), basically ending up with my usual illustration: http://tinyurl.com/nbnh7hq .

FlyingFox
April 29, 2014 12:51 pm

Time for a class action. Are you listening CEI?

LarryMc
April 29, 2014 12:53 pm

This is more evidence that as someone on WUWT said a few months ago: “In climate science only the past is uncertain.” Does anyone think this will be the last attempt to modify the temp records? Has the record been perfected now or will there be more adjustments to it?

Jaakko Kateenkorva
April 29, 2014 12:57 pm

The semantics are evolving rapidly in the area of anthropogenic sins, but this rings a bell. Isn’t this how the Intentionally Pathetic Central Committee is mandated to appease the catastrophically Angry God of Weather?

Jaakko Kateenkorva
April 29, 2014 1:01 pm

Illustration http://saraegoodman.files.wordpress.com/2011/02/vectorstock-174507-angry-greek-god-vector.jpg, although I’m convinced that Josh could better that.

April 29, 2014 1:05 pm

Even before climate “science”, this pattern was noticed in Russia.
As a Soviet joke had it: ”The only thing that is more uncertain than the future is the past”

John Slayton
April 29, 2014 1:09 pm

A matter of considerable popular interest investigated during the year was the proposed establishment of a number of suburban meteorological stations around large cities with a view to getting at truer records. It appears, however, that, with the shelters now in use and considering the various corrections applied, the readings are substantially the same. The question was submitted to observers at a number of selected stations, and lengthy and valuable reports made thereon. It has long been established in meteorology that the average temperatures observed in large cities are higher than those observed in the country nearby….
-The Report of the Chief of the Weather Bureau for 1892, p. 577

What means “the various corrections applied”? Anybody know?

stamper44
April 29, 2014 1:20 pm

This technique of adjusting the early years down is very familiar here in New Zealand.
http://quadrant.org.au/opinion/doomed-planet/2010/05/crisis-in-new-zealand-climatology/
The wheels are falling off the hoax

Steve from Rockwood
April 29, 2014 1:30 pm

Boy were those climate scientists from 1895 ever stupid. They couldn’t even read a thermometer.

Steve from Rockwood
April 29, 2014 1:33 pm

If they drop those historical temperatures any further my ancestors may not survive.

JustAnotherPoster
April 29, 2014 1:37 pm

Hi Steve Mosher…
Very simple question. How on earth can you ‘add’ stations to the past ?
Can you just explain this one statement please. It makes no logical sense. At all.
There is another argument as well….
Data is just pure data. Climate science makes its adjustments valid as they are peer reviewed adjustments.
If a business adjusted its books….. It would get done for fraud.
I genuinely don’t see how any adjustments of the climate record are valid at all.
The data is what it is….. You can’t justify modifying it because it doesn’t fit your theory.
If the temperature in place a was recorded at 110 degrees 20 years ago. That’s the only valid data we have.
Why do climate scientists think that just because their methodology of adjusting the data has been peer reviewed it makes it valid ?

April 29, 2014 1:44 pm

M Seward says:
I have had it with this sort of politically corrected data crap. Surely it must be official now that AGW is a pure, unadulterated HOAX and accordingly I formally declare myself a denier. This latest boondoggle with the numbers is risible.
That recalls the “Harry_read_me” file, where programmer Harry makes some comments about the preposterous state of whatever passes for ‘data’ in mainstream climatology. A few excerpts:

Just how off-beam are these datasets?!!
I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh! There truly is no end in sight… I suspected a couple of stations were being counted twice, so using ‘comm’ I looked for identical headers. Unfortunately there weren’t any!! So I have invented two stations, hmm… Who added those two series together? When? Why? Untraceable, except anecdotally… I am beginning to wish I could just blindly merge based on WMO code. the trouble is that then I’m continuing the approach that created these broken databases.
Here, the expected 1990-2003 period is MISSING – so the correlations aren’t so hot! Yet the WMO codes and station names /locations are identical (or close). What the hell is supposed to happen here? Oh yeah – there is no ‘supposed’, I can make it up. So I have 🙂
You can’t imagine what this has cost me – to actually allow the operator to assign false WMO codes!! But what else is there in such situations? Especially when dealing with a ‘Master’ database of dubious provenance (which, er, they all are and always will be).
False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding 1 at a time until a number is found with no matches in the database… there is no central repository for WMO codes – especially made-up ones – we’ll have to chance duplicating one that’s present in one of the other databases…
I added the nuclear option – to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don’t think people care enough to fix ‘em…
Oh, GOD. What is going on? Are we data sparse and just looking at the climatology? How can a synthetic dataset derived from tmp and dtr produce the same statistics as an ‘real’ dataset derived from observations?
I disagree with publishing datasets that are simple arithmetic derivations of other datasets published at the same time, when the real data could be published instead.. but no.
The suggested way forward is to not use any observations after 1989, but to allow synthetics to take over.
Oh, ****. It’s the bloody WMO codes again. **** these bloody non-standard, ambiguous, illogical systems. Amateur hour again.

The entire carbon scare is based on fakery and fabrications.

JJ
April 29, 2014 1:48 pm

Mark Albright says:

The reason the entire record cools is NCDC is now adding an adjustment for elevation.

Doesn’t explain the difference in trend …

herkimer
April 29, 2014 1:48 pm

Any major adjustment of US temperature records that reflects an increase in the apparent warming trend for the Contiguous US by some 50% (0.088F/decade to 0.136F/decade between 1895 and 2012) deserves close scrutiny. The adjustments for states like Maine, Michigan, Texas, Pennsylvania, and North Carolina are especially troublesome. The Maine change from .01 to 0.26 is a 26 times greater warming trend than before. These call into question the entire process. Either we have bad previous records or bad current adjustments, or both. Who said that climate science is sound, when we are now making such huge corrections to our base data going back a century, not to mention the failed models and bad predictions for the last 17 years?

John McClure
April 29, 2014 1:52 pm

Thanks Mod for the correction to my last post — it was very kind!
Reading over the comments, I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain. There simply isn’t any gain to be had and it flies in the face of reason to think otherwise.
If alterations to data sets, collected over time, are required, then there is a logical reason for doing so. They didn’t destroy the original data sets.
I do have an issue with projections based on an evolving understanding of control data, but you’ll find Scientists never do this.
Your target(s) for the misplaced anger should be more insightful and should be based on the “conclusions” from those who release “results”.

April 29, 2014 1:57 pm

“I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain.”
So you think that NOAA/NCDC/NASA are an arrangement of magical characters that confer innocence on people?
Has anyone ever talked to you about how human beings behave? Did you listen?
Andrew

Richard M
April 29, 2014 2:02 pm

There’s only one reason I can come up with for why older averages would get cooler while newer ones get warmer. Good old UHI or micro site contamination (MSC). In the past there were more rural stations because more of the US was rural. So, the odds are better that any added station will show less UHI/MSC. For recent times the odds now shift to more stations being urban or at a minimum having some MSC problems. Hence, when these are added in they increase the trend. The bottom line is the increase is purely phantom based on the problems we all have seen over the years.
If there was some kind of systematic problem it should impact old and new records about the same. You’d think this logic should have been easy to figure out by the scientists. I wonder why they continue to ignore simple logic.

Richard M
April 29, 2014 2:07 pm

John McClure says:
April 29, 2014 at 1:52 pm
Reading over the comments, I find it absurd to believe any Scientist in NOAA/NCDC, NASA, or any other scientific agency of the US government would intentionally falsify data for gain. There simply isn’t any gain to be had and it flies in the face of reason to think otherwise.

Yes but … the problem is simple researcher bias. They have no motive to look for anything that lowers the trend and completely accept anything that increases the trend. So, when they find something that increases the trend they stop thinking immediately and accept it. When they find anything that reduces the trend they either just assume it’s an error or figure out some reason (whether true or not) to ignore it.
Researcher bias is the entire reason for double blind studies in medical research. They found it was almost a 100% rule. It isn’t necessarily intentional, it just happens.
