One-Way Adjustments: The Latest Alteration to the U.S. Climate Record

Guest essay by Dan Travers

On Thursday, March 13, 2014, the U.S. National Climatic Data Center switched the data set it uses to report long-term temperature trends to a gridded data set built from GHCN-Daily (GHCN-D) station data – with a resulting dramatic change in the official U.S. climate record. As seems to always happen when somebody modifies the temperature record, the new version of the record shows a significantly stronger warming trend than the unmodified or, in this case, discarded version.

The new dataset, called “nClimDiv,” shows the per decade warming trend from 1895 through 2012 for the contiguous United States to be 0.135 degrees Fahrenheit. The formerly used dataset, called “Drd964x,” shows the per decade warming trend over the same period to be substantially less – only 0.088 degrees. Which is closer to the truth?
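
As a quick sanity check on what those per-decade figures imply, here is a back-of-envelope calculation (mine, not NCDC’s) of the total warming each trend would produce over the 1895-2012 period:

```python
# Back-of-envelope check: total implied contiguous-US warming over
# 1895-2012 under each data set's per-decade trend (figures quoted above).
decades = (2012 - 1895) / 10          # about 11.7 decades
print(round(0.135 * decades, 2))      # nClimDiv: ~1.58 degrees F total
print(round(0.088 * decades, 2))      # Drd964x:  ~1.03 degrees F total
```

In other words, the change of data sets adds roughly half a degree Fahrenheit to the total warming reported for the contiguous United States over the period.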

As the side-by-side comparison graphs below illustrate, the stronger warming trend in the new data set is largely the consequence of significantly lowering the temperature record in the earlier part of the century.

This particular manipulation has a long history. For an outstanding account of temperature record alterations, tampering, modifications and mutilations across the globe, see Joseph D’Aleo and Anthony Watts’ Surface Temperature Records: Policy-Driven Deception?

It should be noted that the 0.088 degree figure above was never reported by the NCDC. The Center’s previous practice was to use one data set for the national figures (nClimDiv or something similar to it) and a different one for the state and regional figures (Drd964x or something similar). To get a national figure using the Drd964x data, one has to derive it from the state data. This is done by taking the per-decade warming trend for each of the lower forty-eight states and calculating a weighted average, using each state’s geographical area as the weighting.
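
To make that derivation concrete, here is a minimal sketch of the kind of calculation involved. It is not NCDC code: the function names and inputs are placeholders, and treating each state’s per-decade trend as an ordinary least-squares slope of its annual means is my assumption, not a documented NCDC choice.

```python
# Minimal sketch (not NCDC code): derive a national per-decade trend from
# state-level annual mean temperatures by (1) fitting a least-squares slope
# for each state and (2) area-weighting the state trends.
import numpy as np

def per_decade_trend(years, annual_means_f):
    """Least-squares slope of annual mean temperature vs. year, in deg F per decade."""
    slope_per_year = np.polyfit(years, annual_means_f, 1)[0]
    return slope_per_year * 10.0

def national_trend(state_series, state_area_sq_mi):
    """Area-weighted average of the per-decade trends of the lower forty-eight states."""
    trends = {st: per_decade_trend(yrs, temps)
              for st, (yrs, temps) in state_series.items()}
    total_area = sum(state_area_sq_mi[st] for st in trends)
    return sum(trends[st] * state_area_sq_mi[st] for st in trends) / total_area
```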

The chart below shows a state by state comparison for the lower forty-eight states of the per decade warming trend for 1895-2012 under both the old and the new data sets.

[Chart: state-by-state comparison of per-decade warming trends (1895-2012), Drd964x vs. nClimDiv]

In the past, alterations and manipulations of the temperature record have been made frequently and have often been poorly documented (see D’Aleo and Watts). In this instance, it should be noted, the NCDC made a considerable effort to be forthcoming about the data set change. The change was planned and announced well in advance. An academic paper analyzing the major impacts of the transition was written by NOAA/NCDC scientists and made available on the NCDC website (see Fenimore et al., 2011). A description of the Drd964x data set, the nClimDiv data set, and a comparison of the two was put on the website and can be seen here.

The NCDC’s relatively forthcoming approach in this instance notwithstanding, looking at the temperature graphs side by side for the two data sets is highly instructive and raises many questions – the most basic being which of the two is more faithful to reality.

Below are side-by-side comparisons under the two data sets for California, Maine, Michigan, Oregon and Pennsylvania for the period 1895-2009, with the annual data points covering the twelve-month period ending in November of the respective year. The right-side box is the graph under the new nClimDiv data set; the left-side box is the graph for the same period using the discarded Drd964x data set. (The reason this particular period is shown is that it is the only one for which I have the data to make the presentation. In December 2009, I happened to copy from the NCDC website the graph of the available temperature record for each of the lower forty-eight states, and the data from 1895 through November 2009 was the most recent available at that time.)

I will highlight a few items for each state comparison that I think are noteworthy, but there is much that can be said about each of these. Please comment!

 

California

[Graphs: California temperature record, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

Left: Before, Right: After –  Click to enlarge graphs

  • For California, the change in data sets results in a lowering of the entire temperature record, but the lowering is greater in the early part of the century, resulting in the 0.07 degree per decade increase in the Drd964x data becoming a 0.18 degree per decade increase under the nClimDiv data.
  • Notice the earliest part of the graphs, up to about 1907. In the graph on the left, the data points are between 59 and 61.5 degrees. In the graph on the right, they are between 56 and 57.5 degrees.
  • The dips at 1910-1911 and around 1915 in the left graph are between 57 and 58 degrees. In the graph on the right they are between 55 and 56 degrees.
  • The spike around 1925 is above 61 degrees in the graph on the left, and is just above 59 degrees in the graph on the right.

Maine

[Graphs: Maine temperature record, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

· The change in Maine’s temperature record from the data set switch is dramatic. The Drd964x data shows a slight cooling trend of negative 0.03 degrees per decade. The nClimDiv data, on the other hand, shows a substantial 0.23 degrees per decade warming.

· Notice the third data point in the chart (1898, presumably). On the left it is between 43 and 44 degrees. On the right it is just over 40 degrees.

· Notice the three comparatively cold years in the middle of the decade between 1900 and 1910. On the left the first of them is at 39 degrees and the other two slightly below that. On the right, the same years are recorded just above 37 degrees, at 37 degrees and somewhere below 37 degrees, respectively.

· The temperature spike recorded in the left graph between 45 and 46 degrees around 1913 is barely discernible on the graph at the right and appears to be recorded at 41 degrees.

Michigan

[Graphs: Michigan temperature record, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

  • Michigan’s temperature record went from a very slight cooling trend of -0.01 degrees per decade under the Drd964x data to a warming trend of 0.21 degrees per decade under the nClimDiv data.
  • In Michigan’s case, the differences between the two data sets are starkly concentrated in the period between 1895 and 1930, over which temperatures are on average about 2 degrees lower in the new data set, with relatively modest differences after 1930.

Oregon

[Graphs: Oregon temperature record, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

· Notice the first data point (1895). The Drd964x data set records it at slightly under 47.5 degrees. The new data set puts it at slightly over 45 degrees, almost 2.5 degrees cooler.

· The first decade appears, on average, to be around 2.5 degrees colder in the new data set than the old.

· The ten years 1917 to 1926 are on average more than 2 degrees colder in the new data set than in the old.

· As is the case with California, the entire period of the graph is colder in the new data set, but the difference is greater in the early part of the twentieth century, resulting in the 0.09 degrees per decade increase shown by the Drd964x data becoming a 0.20 degree increase per decade in the nClimDiv data.

Pennsylvania

[Graphs: Pennsylvania temperature record, 1895-2009 – Drd964x (left) vs. nClimDiv (right)]

 

· Pennsylvania showed no warming trend at all in the Drd964x data. Under the nClimDiv data, the state experienced a 0.10 degree per decade warming trend.

· From 1895 through 1940, the nClimDiv data shows temperatures on average about 1 degree colder than the Drd964x data, with the differences growing smaller in later years.

 

I have to admit the global warming indeed is 100% CAUSED and controlled by HUMANS…….. they figured out that no matter what the weather does, WE CONTROL the records and can make them say whatever we desire!

This of course being completely pointless, since we all agree that the USA is only 3% of the planet ((c) J Hansen)

Dan in Nevada

“He who controls the past controls the future, and he who controls the present controls the past” – George Orwell

dp

Which is closer to the truth??? Trick question? If the truth is known why not publish that? If the truth is not known there is no valid answer to the question.

Todd

Do they ever give reasons for this?
On its face, this is nothing more than data manipulation.

Tim Crome

Just a minor adjustment of the temperature record to better fit the CO2 record. What’s the problem!

Cold in Wisconsin

How do they justify the lower/older temperatures in the new models? Are we saying that “gridding” the data applies lower temperature readings to a greater geographical area and thus brings about a lower average temperature? Has anyone tried to reproduce the grids to see whether there is any “gerrymandering” taking place? Moving the grid around slightly could definitely change the outcome of this type of analysis, I would think. That would be a type of sensitivity analysis that I think should be done to determine whether there is an impact from where the grid is placed.
I would be interested to hear whether other more knowledgeable readers see this as a change promoting consistency, which could at least be defended, or a cynical form of data manipulation. The most important information would be why the authors or producers of this data believe that it is important to produce a gridded product versus a data set that is closer to the raw data. I would also expect that the authors would have to show some form of analysis on different grid placements. I think that it is clear that when you start to average things, you run the risk of completely muddying the waters so that no useful information can be discerned.
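
On the gridding question above, here is a small illustrative sketch – not the NCDC algorithm, and with invented station locations and temperatures – showing how a plain average of stations and a grid-cell average can diverge when stations cluster in one part of a region, and how shifting the grid origin changes the gridded result:

```python
# Illustrative only -- not NCDC's gridding procedure. Compares a simple
# station mean with a grid-cell (equal-cell-weight) mean when stations
# cluster, and shows sensitivity to where the grid is placed.
import numpy as np

# Hypothetical stations: (x, y, temperature in deg F); four cluster in a warm corner.
stations = np.array([
    (0.10, 0.10, 62.0), (0.20, 0.10, 61.5), (0.15, 0.20, 61.8), (0.25, 0.25, 62.2),
    (0.80, 0.80, 55.0), (0.90, 0.70, 54.5),
])

def simple_mean(st):
    """Plain average over all stations, regardless of where they sit."""
    return st[:, 2].mean()

def gridded_mean(st, cell=0.5, origin=0.0):
    """Average stations within each occupied grid cell, then average the cells."""
    ix = np.floor((st[:, 0] - origin) / cell)
    iy = np.floor((st[:, 1] - origin) / cell)
    cells = {}
    for cx, cy, t in zip(ix, iy, st[:, 2]):
        cells.setdefault((cx, cy), []).append(t)
    return np.mean([np.mean(v) for v in cells.values()])

print("simple station mean:", round(simple_mean(stations), 2))       # pulled up by the warm cluster
print("gridded mean:", round(gridded_mean(stations), 2))             # warm and cool cells count equally
print("gridded, shifted grid:", round(gridded_mean(stations, origin=0.25), 2))  # placement sensitivity
```

Whether the real nClimDiv grid shows any comparable sensitivity to placement is exactly the kind of check the commenter is asking for.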

Greg

Many of the “click to enlarge” features are not working.

Ralph Kramden

Is this part of the new “lie and exaggerate” policy of the alarmists?

They are both wrong and that’s the truth

Admad

Well, it’s the right time of year for cherries.
Many more of these “revisions” and the 19th century will become an ice age.

cirby

I’m just trying to figure out why, when you look at those graphs as a whole, you can’t clearly see World War I, the Depression, or World War II.
You can see the Depression in the Michigan chart (but not the others), but if temps are tied so strongly to CO2, you’d think a couple of world wars (with attendant increases in overall production and emission of CO2) would show up, at least a little.

Greg

As with all such changes, whatever the data, this puts the lie to the uncertainty they were claiming before or after, or maybe both.
“the difference is greater in the early part of the twentieth century, resulting in the 0.09 degrees per decade increase shown by the Drd964x data becoming a 0.20 degree increase per decade in the nClimDiv data.”
That’s over a full degree per century. What was the claimed accuracy of the data beforehand and what is it now? Has the uncertainty been reduced by 1 degree per century?

George Orwell had this in 1984:
Re-write history into a version of events that suits the current government’s needs.

Bernd Palmer

I challenge anybody to give a valid reason for changing the temperature values in 1895. Did it take until 2014 before they found out that the thermometers were badly calibrated?

Dave in Canmore

According to the abstract, these adjustments were intended to fix:
“1. For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially undersampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).
2. For the TCDD, all divisional values before 1931 stem from state averages published by the U.S. Department of Agriculture (USDA) rather than from actual station observations, producing an artificial discontinuity in both the mean and variance for 1895-1930 (Guttman and Quayle, 1996).
3. In the TCDD, many divisions experienced a systematic change in average station location and elevation during the 20th Century, resulting in spurious historical trends in some regions (Keim et al., 2003; Keim et al., 2005; Allard et al., 2009).
4. Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).”
The land-based temperature data no longer has any meaning. Good stations without siting problems don’t show these same trends, thus invalidating this exercise. The foxes are guarding the henhouse.

How can climatologists predict the future when they can’t even predict the past?

Greg

Well I was assuming that they actually provided an error estimation for their data. So far I have not been able to even find one!
However, this is interesting on the removal of degree-day fields, since the underlying data no longer exist 😕 Huh?
>>
Qualitative Note on Degree Day Values: Preliminary analysis of degree day
data indicate that some values may be noticeably different from current
values. There are two primary reasons for this:
1) General: New dataset and methodology (see prior notes on changes to how
climate division data are computed)
2) Specific to the degree-day products: Population weights have been updated
to use 2010 Census data.
>>
Run that again? Degree-day products depend on population count? WTF?
If any aspect of your temperature record moves because of a change in “population weights” you have a problem with your data processing!
The closer you look the worse it gets.

Richard M

I suspect gridding ends up applying more homogenization resulting in upward adjustments to the very best, rural data. Once that data has been “upgraded”, the UHI contamination is complete. This is not science.

Greg

Michael E. Newton says:
April 29, 2014 at 8:38 am
How can climatologists predict the future when they can’t even predict the past?
>>
Sure they can. We can safely predict that in the next few years the past will get colder.
This is obvious from the lack of warming or “pause”. Since there is no warming this decade, the effect of GHG forcing is to make the past cooler.
You have no grasp of how climatology works. Where have you been???

Jimbo

For all the temperature tampering over the years see below.
http://stevengoddard.wordpress.com/?s=ncdc
http://stevengoddard.wordpress.com/?s=tampering

herkimer

If the observed data does not match the predicted data, then just change the observed data sets to match the predicted data. Most would call this pure corruption of data, while others seem to pass it off as sound climate science. This has little resemblance to any science.

Mosher Drive By in 3…2…1…
Andrew

heysuess

Well then. The ‘climate change’ we are currently not experiencing must be that much worse due to the steepness of this new incline.

Frank K.

I am sure the NCDC will provide the computer codes that they used to calculate all of the cited “adjustments” (particularly TOBS). /sarc

Steven Mosher

Look guys, I’ve been telling you this stuff FOR YEARS.
1. GHCN-D is daily data.
2. There are more daily stations in the US than monthly (GHCN-M).
3. Daily data is not adjusted. Monthly data is adjusted.
4. D’Aleo and Watts looked at GHCN-M, not GHCN-D.
5. Based on my experience with daily data (since 2007) and based on all my work with GHCN-D, the following is true:
A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.
The skeptical theory was this: if we use more data (D’Aleo and Watts specifically complained about the great thermometer drop-out) and if we use unadjusted data, then you will see that warming trends are diminished.
Nice theory. That theory has been tested, as far back as 2010 when I processed all the GHCN-D data. That theory was tested. That theory is busted.
When you switch to daily data, which is not adjusted, and use it all, you find that the past tends to cool and the present tends to warm or stay the same. This leads to a slightly warmer trend.
Calling Dr. Feynman. He would say that if you have a theory (the great thermometer drop-out and the use of adjusted data skew the record warm), you test that theory (add more stations, use daily data).
If the results disagree with your theory, your theory is busted.
The skeptical theory about adding more stations and moving to unadjusted data is busted.
It’s been busted since 2010, not that folks noticed.

Greg

ftp://ftp.ncdc.noaa.gov/pub/data/cirs/climdiv/nCLIMDIV-briefing-cirs.pdf
“Errors in ɳClimDiv are likely less than 0.5°C for temperature ”
a “likely” uncertainty estimation, in the pdf presentation.
So a century-long trend would presumably get attributed 0.5 °C / sqrt(120) = 0.045 °C per century.
That makes a jump of 1F per century pretty hard to swallow.

Eliza

This is the real McCoy for all you lukewarmers and skeptics who actually believe some warming is occurring.
http://www.drroyspencer.com/global-microwave-sea-surface-temperature-update-for-feb-2013/
Zero, zilch, nada – please wake up and stop feeding the warmist trolls LOL

pottereaton

Perhaps they are preparing for that 20 year cold spell that may be coming. Temperatures are trending flat now. What if they start to trend down? Big trouble in Big Green.
Adjusting downward becomes a method by which you retain the illusion of global warming, even if it’s not happening.
Call me a “conspiracy ideationist,” if you’d like.

Eliza

I reckon Mosher will not make any comment here

Gary Pearse

Bernd Palmer says:
April 29, 2014 at 8:30 am
“I challenge anybody to give a valid reason for changing the temperature values in 1895.”
Bernd, they are constrained at the present end by satellite data, so if they want to steepen the hockey stick blade, they are obliged to lower the earlier data. It does reveal their nefarious motives. The last fiddle with recent data was HadCRUT going from its 3rd to 4th version. They warmed it marginally, but they’ve now pretty well run out of headroom at this end of the record. The thing started with Hansen at GISS in 1998, when he pushed the still-standing record high temp for the US in 1937 down several tenths of a degree because he realized that, if his dream of a new world record was to come in his lifetime, he had better take advantage of the strong 1998 El Nino.

Mike Bromley the Kurd

Galling. These keepers of the record are simply lying. There is no other way of telling the story. Unless you fake it. And they continue to sneer and snide and condescend.

I started to avoid Anglo-Saxon stations as soon as I figured out that they were cooking the books to save their jobs.
My own data sets show substantial global cooling of around 0.17 degrees C or K per decade from 2000 until 2012. However, as I am updating (to add 2012 and 2013) I find global cooling has increased…..
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/

Oldseadog

“Lies, damned lies and statistics … ”
Seems we have all three here.
On the other hand, maybe they borrowed Dr. Who’s tardis from the BBC and went back to record the temperatures all over again.

Eliza

These postings + S Goddard’s will hopefully be used in court as prima facie evidence of fraudulent tampering with data, and then used to assess liability and damages deliberately caused by these institutions.

pablo an ex pat

And they expect to be credible how, exactly?

hunter

Steve Mosher did comment here, and he offered more than rhetoric or cryptic remarks.
One observation on this would be: even if the results for the US show slightly more warming, what has been the actual impact of these changes?
Not much.
Nothing historically unusual. No increase in extreme weather. No huge or dangerous changes.
Let us not do as the climate obsessed do and reject the facts and data.
This small change does not make wind power any less wasteful. It does not make a CO2 tax any less insane. It does not make the hockey stick less hokey.

Latitude

The Hockey Stick Is Real
Posted on April 18, 2014 by stevengoddard
NOAA has set an all-time world record for data tampering in 2014. So far this year they have adjusted US temperature upwards by an average of 1.185 degrees at all USHCN stations, creating a genuine hockey stick.
http://stevengoddard.wordpress.com/2014/04/18/the-hockey-stick-is-real-3/

David Jay

Michiganders, celebrate: we are the winners!
A trend of 0.02 has increased an order of magnitude to 0.24. I feel warmer already.
But how does this explain this year’s crocus blooms that (at my home) will survive past May 1st? Oh, that’s right. Everything cold is “weather” and everything hot is “climate”

Eliza

Yes, I see that Mosher commented above – same BEST drivel as usual. He hasn’t read the posting, evidently. Doesn’t want to; I would not blame him, he probably gets paid by BEST etc. to keep the AGW industry going. BTW no offense meant, I have appreciated much of his delving work into Gleick especially LOL. He should stick to that.

Latitude

United States to be 0.135 degrees Fahrenheit. The formerly used dataset, called “Drd964x,” shows the per decade warming trend over the same period to be substantially less – only 0.088 degrees. Which is closer to the truth?
=====
Probably neither one…..

Mosher,
“A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.”
No, when YOU comment on temperatures we find that YOU always conform to the Warmer narrative.
So, we can either decide that temperatures always conform to the Warmer narrative, or we can decide that YOU conform to the Warmer narrative.
Which is more likely?
Andrew

Frank K.

So, where is the NCDC GHCN data adjustment software??? Can we see the calculation algorithms that they used for ourselves? They have nothing to hide…right?

James the Elder

Greg says:
April 29, 2014 at 8:41 am
Run that again? Degree-day products depend on population count? WTF?
==================================================================
By God man, you found it!!! AGW is PEOPLE!!!!! Remove the people, all the body heat dissipates into space in a few years and the problem is solved— once the Eloi decide on how many Morlocks are actually required in Utopia.

MattN

Certainly does appear to be warming caused by man.

Greg

ftp://ftp.ncdc.noaa.gov/pub/data/cirs/drd/divisional.README
“Differences of the biases were small ( < 0.3 Deg. F.)”
So no clear statement of the uncertainty in the data presented, but an implicit statement that there remain some biases of the order of 2 deg F, and that an error of 0.3 is regarded as being “small”, presumably meaning in relation to other errors like the 2 F ones.
There’s an error in my last: sqrt(120) should have been sqrt(1200).
Since Dan Travers gives us decadal trends, let’s stick with sqrt(120) as an uncertainty scaling for 120 monthly readings.
Now taking the undeclared but inferred uncertainty to be +/-2 F, that gives, for the old DRD data:
2/sqrt(120) = +/-0.18 F/dec on a decadal trend.
“Errors in ɳClimDiv are likely less than 0.5°C for temperature” – rather unhelpful for data given in Fahrenheit. Did they even mean 0.5°C??? Taking it as stated, that’s 0.9 F.
0.5*9/5 / sqrt(120) = +/-0.082 F/dec
California goes from 0.09 +/-0.18 F/dec to 0.22 +/-0.08 F/dec
a change of 0.11 F/dec for a drop in uncertainty of 0.1 F/dec
i.e. all the errors in the earlier version were in the same direction, and then some.
So we’re back to the same old story: all errors that were ever made in temperature measurement were masking the true horrors of global warming, and only the new “algorithms” and “bias corrections” allow us to see the true scale of what is happening to the planet.
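
For anyone who wants to check the arithmetic, the snippet below reproduces the commenter’s back-of-envelope scaling (a per-reading uncertainty divided by the square root of 120 monthly readings). This follows the commenter’s own simplifying assumption rather than a formal treatment of trend uncertainty, and the input figures (±2 F inferred for Drd964x, the quoted “likely less than 0.5°C” for nClimDiv) come from the comment itself:

```python
# Reproduces the commenter's back-of-envelope scaling: per-reading
# uncertainty divided by sqrt(120 monthly readings). This mirrors the
# commenter's assumption, not a formal trend-uncertainty calculation.
import math

N_MONTHLY_READINGS = 120

def decadal_uncertainty_f(sigma_f, n=N_MONTHLY_READINGS):
    """Scale a per-reading uncertainty (deg F) down by sqrt(n)."""
    return sigma_f / math.sqrt(n)

old_sigma_f = 2.0          # +/- 2 F inferred for the Drd964x data
new_sigma_f = 0.5 * 9 / 5  # the quoted "likely less than 0.5 C", converted to F

print(round(decadal_uncertainty_f(old_sigma_f), 3))  # ~0.183 F/decade
print(round(decadal_uncertainty_f(new_sigma_f), 3))  # ~0.082 F/decade
```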

Kev-in-Uk

Steven Mosher says:
April 29, 2014 at 8:54 am
quote:
A) when you add more stations, you will find the past is cooler than previously estimated.
B) when you add more stations, you will find that the present is warmer than you think.
endquote
Yes, Steven – there are logical reasons for both these statements, such as:
1) if, for example, there has been some global recovery from the last ice age – this will show up AND will be increased by the addition of more (later) stations, especially in any area-weighted gridding system.
and
2) UHI – a known and significant effect – will also cause the same effect when adding more stations, as again, more stations simply exaggerate the differences introduced by UHI. If there was one New York station in 1850, compared to 10 today, obviously one expects the 10 of today to show the UHI more significantly!
and equally
3) if there is any actual measurable anthropogenic GLOBAL effect on temperatures (whether caused by CO2, deforestation or even cow farts!) – this too will be further exaggerated by the addition of more stations.
Logically, if, as is obviously the case, there has been an ever increasing number of stations, we all know that there will be an increase in any measured trend due to the number-of-stations ‘bias’ that is introduced into the weighting/gridding scheme. Is this subsequently accounted for and ‘removed’? You would think something like the BEST project would have done this, but AFAIK no major datasets make this adjustment as part of their weighting adjustment. Do you know if they do within the gridding code?
Whilst I agree with your observation, I’d like to know if this is being considered in the actual adjustment procedure – by HadCRUT, GISS, BEST or whoever?

John McClure

Steven Mosher says:
April 29, 2014 at 8:54 am

Didn’t they also reduce the number of stations over time which would achieve the same result (adding more data to the past)?

Greg

WP screwing around again:
“Differences of the biases were small ( < 0.3 Deg. F.)”

Rick Morcom

I love the name at the top – National CLIMACTIC Data Center. Is the debate reaching a climax? That would be fun!