A new paper comparing NCDC rural and urban US surface temperature data

Note: See update below, new graph added.

There’s a new paper out by Dr. Edward Long that makes some interesting comparisons using NCDC data for rural and urban stations in the CONUS, both raw (prior to adjustments) and adjusted.

The paper is titled Contiguous U.S. Temperature Trends Using NCDC Raw and Adjusted Data for One-Per-State Rural and Urban Station Sets. In it, Dr. Long states:

“The problem would seem to be the methodologies engendered in treatment for a mix of urban and rural locations; that the ‘adjustment’ protocol appears to accent to a warming effect rather than eliminate it.  This, if correct, leaves serious doubt for whether the rate of increase in temperature found from the adjusted data is due to natural warming trends or warming because of another reason, such as erroneous consideration of the effects of urban warming.”

Here is the comparison of raw rural and urban data:

And here is the comparison of adjusted rural and urban data:

Note that even the adjusted urban data has as much as a 0.2 °C offset from the adjusted rural data.

Dr. Long suggests that NCDC’s adjustments eradicated the difference between rural and urban environments, thus hiding urban heating.  The consequence:

“…is a five-fold increase in the rural temperature rate of increase and a slight decrease in the rate of increase of the urban temperature.”

The analysis concludes that NCDC “…has taken liberty to alter the actual rural measured values”.

Thus the adjusted rural values show a systematic increase over the raw values, more and more so going back in time, and a decrease for the more recent years. At the same time, the urban temperatures were little, or not at all, adjusted from their raw values. The result is an implication of warming that has not occurred in nature, but has indeed occurred in urban surroundings as people gathered more into cities and cities grew in size and became more industrial in nature. So, recognizing this aspect, one has to say there has been warming due to man, but it is an urban warming. The temperatures due to nature itself, at least within the Contiguous U. S., have increased at a non-significant rate and do not appear to have any correspondence to the presence or lack of presence of carbon dioxide.

The paper’s summary reads:

Both raw and adjusted data from the NCDC have been examined for a selected Contiguous U. S. set of rural and urban stations, 48 each or one per State. The raw data provide 0.13 and 0.79 °C/century temperature increases for the rural and urban environments. The adjusted data provide 0.64 and 0.77 °C/century respectively. The rates for the raw data appear to correspond to the historical change of rural and urban U. S. populations and indicate warming is due to urban warming. Comparison of the adjusted data for the rural set to that of the raw data shows a systematic treatment that causes the rural adjusted set’s temperature rate of increase to be 5-fold more than that of the raw data. The adjusted urban data set’s and raw urban data set’s rates of temperature increase are the same. This suggests the consequence of the NCDC’s protocol for adjusting the data is to cause historical data to take on the time-line characteristics of urban data. The consequence, intended or not, is to report a false rate of temperature increase for the Contiguous U. S.
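
For readers who want to check arithmetic like this themselves, here is a minimal sketch of how a °C/century rate is typically obtained: an ordinary least-squares fit of annual means against year, with the slope scaled to a century. The series below is synthetic; nothing here is the paper’s actual data or code.

```python
import numpy as np

def trend_per_century(years, temps):
    """Least-squares slope of annual temperature vs. year,
    scaled from degrees per year to degrees per century."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 100.0

# Synthetic stand-in for an annual-mean series (e.g. a 48-station average)
years = np.arange(1900, 2001)
temps = 0.0013 * (years - years[0]) + np.random.normal(0.0, 0.3, years.size)

print(f"trend: {trend_per_century(years, temps):+.2f} °C/century")
```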

The full paper may be found here: Contiguous U.S. Temperature Trends Using NCDC Raw and Adjusted Data for One-Per-State Rural and Urban Station Sets (PDF) and is freely available for viewing and distribution.

Dr. Long also recently wrote a column for The American Thinker titled: A Pending American Temperaturegate

As he points out in that column, Joe D’Aleo and I raised similar concerns in Surface Temperature Records: Policy Driven Deception? (PDF)

UPDATE: A reader asked why divergence started in 1960. Urban growth could be one factor, but given that the paper is about NCDC adjustments, this graph from NOAA is likely germane:

http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif


Louis Hissink
February 27, 2010 2:11 am

I don’t know how to write this politely, but obsessing over temperature variations smaller than the inherent detection limit of the equipment used to collect the data smacks of crass scientific incompetence.
It seems the past use of chicken entrails and other signs hasn’t been divorced from the scientific method.

JMR
February 27, 2010 2:15 am

It looks like USHCN made a basic mathematical mistake: they added a bias to the rural numbers without a physical explanation for doing so. If instead they had subtracted a bias from the urban stations, we would have a physical justification: elimination of the bias due to recent urbanization, such as increased pavement and additional buildings. You can’t claim one is equal to the other if you don’t have a good rational explanation.
The only explanation they give for their choice is that it matches tree-ring growth. Mind you, most forests in the last 50 years have been managed to increase tree growth by eliminating fires, planting faster-growing trees, and other farming techniques. Until we can properly eliminate the effects of forest management on tree growth, we have a problem with what they did.
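
A minimal sketch of the distinction JMR is drawing, using synthetic series (this is illustrative only, not NCDC’s adjustment code): to remove an urbanization bias you subtract the estimated offset from the urban record; adding it to the rural record instead grafts the urban trend onto clean data.

```python
import numpy as np

years = np.arange(1950, 2001)
rural = np.random.normal(0.0, 0.2, years.size)   # hypothetical clean rural anomalies
urban = rural + 0.004 * (years - years[0])       # same climate plus a growing UHI bias

uhi = urban - rural                              # offset attributed to urbanization

urban_adjusted = urban - uhi   # physically justified: strip the urban bias
rural_inflated = rural + uhi   # unjustified: rural record inherits the urban trend

# Both choices make the two series agree; only the first preserves
# the uncontaminated rural trend.
```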

wayne
February 27, 2010 2:39 am

E.M.Smith (01:46:24) :
Good point, E.M. Boolean and sign errors are sometimes the hardest to see and catch when there is no blatant crash!
I was wondering what kind of machine these programs were originally written for. You are a programmer like myself; if this was written on very old hardware, you know the government, limited in all aspects, and never have been re-written from the ground up, this could feasibly be at the core of this type of programming error, if an error. You know what I mean. Keep adding and modifying ancient code written for legacy hardware until the maintenance is taxing and breaking their back, bugs can get buried very, very deep. You saw Harry’s code!
That was just a thought that passed through my mind a few days ago. However, I just can’t believe that, if that were the case, someone at some time would not have noticed the error. They would have to never really look at their end product from a scientific viewpoint. Could they possibly be in that bad shape? Papers were written by scientists years ago, the programmers code it until accepted, the scientists go away, the programmers are there to merely maintain the product with minor modifications, and no one even REALLY looks at the end product? Like, it’s just a paycheck. It would be a huge, huge stretch.

E.M.Smith
Editor
February 27, 2010 4:09 am

wayne (02:39:58) : I was wondering what kind of machine these programs were originally written for.
Well, it looks to me like it’s Sun-friendly code (though a couple of the f90 constructs look more like Cray Fortran of about that vintage). Some of that could just be the programmer’s personal style, though. I’m not familiar enough with SGI Fortran to know its fingerprints. FWIW, about the era in question, I hired some folks from NASA. Required experience was Cray and Sun, and they had it…
if this was written on very old hardware, you know the government, limited in all aspects, and never have been re-written from the ground up, this could feasibly be at the core of this type of programming error, if an error. You know what I mean. Keep adding and modifying ancient code written for legacy hardware until the maintenance is taxing and breaking their back, bugs can get buried very, very deep. You saw Harry’s code!
I’ve also ported GIStemp to run on Linux. Yeah, you described it right. You can tell the “era” of the particular steps of the code and can even spot some of the (limited) maintenance done. The oldest bits require the f77 compiler and are written in THE ALL CAPS STYLE mandated by that era. The newer bits require the g95 / f90 compiler and are lower case. In between are some that work with both compilers and often have mixed case (especially in ‘maintained’ parts). All of it shows its age.
So I don’t think a bit will have been changed once it was assumed to work, unless the researcher wanted to try a new “trick”.
That was just a thought that passed through my mind a few days ago. However, I just can’t believe that, if that were the case, someone at some time would not have noticed the error.
And for how many years have folks pointed out that the averaging of an intensive variable is meaningless, and it still is done? (i.e., counting the coins in your pocket without looking at the denominations. Averaging temperatures does not tell you how much heat is accumulating or leaving.)
And for how many years have folks said the thermometers are in poor locations and badly sited and don’t give good data?
And for how many years has the code had a compiler-dependent failure that makes it warm 1/10 of the records by 1/10 °C in the F-to-C conversion? (See the sketch after this comment.)
And for how many years did it have an “off by one” read error before it was found a year or two ago?
And for…
No, there is no decent QA suite or validation process run on this code and there is no ‘code review’ worth the name. This isn’t Engineering, after all, it’s only “climate science”…
They would have to never really look at their end product from a scientific viewpoint. Could they possibly be in that bad shape?
Never underestimate the power of self confirmation bias. Once you see what you expect, you assume you were right and nothing is wrong. End of QA process. Close can, ship it. Never look back.
I’ve seen far too many folks do exactly that.
Papers were written by scientists years ago, the programmers code it until accepted, the scientists go away, the programmers are there to merely maintain the product with minor modifications, and no one even REALLY looks at the end product?
I would only add that the programmers are likely not professional programmer staff, but fellow researchers, and they would not be going into the code unless something needed to be changed for some reason. As a result of some new research and even then they most likely do not look at any part other than what they are adding to support the new paper…
There is a reason the QA department is a separate group in professional software shops. You really need a different mind set. And a benchmark / QA / validation suite that is often larger than the code base itself. And have you noticed that NONE of the NOAA / NCDC / NASA / GISS web sites, or people, talk about their work in QA or have titles that look like it? There are no indicia of a QA department or process. Just “one guy” who is listed as the maintenance contact… with a ‘scientist’ title…
So yeah, I could easily see it being that bad.
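
To make the F-to-C rounding failure mentioned above concrete, here is a toy illustration. It is Python, not the actual GIStemp Fortran, and the specific failure mode shown (the classic “add 0.5 and truncate” rounding idiom, which truncates toward zero and therefore warms sub-freezing values) is an assumption standing in for whatever the real compiler-dependent behavior was.

```python
def f_to_c_buggy(f):
    """Round to tenths with the add-0.5-and-truncate idiom;
    int() truncates toward zero, so negative values are nudged warm."""
    c = (f - 32.0) * 5.0 / 9.0
    return int(c * 10.0 + 0.5) / 10.0

def f_to_c_nearest(f):
    """Round to the nearest tenth, for comparison."""
    return round((f - 32.0) * 5.0 / 9.0, 1)

for f in (70.1, 21.9, 10.3, -2.1):
    b, n = f_to_c_buggy(f), f_to_c_nearest(f)
    flag = "  <- warmed by the bug" if b > n else ""
    print(f"{f:6.1f} F -> {b:6.1f} C (buggy) vs {n:6.1f} C (nearest){flag}")
```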

DirkH
February 27, 2010 4:40 am

“E.M.Smith (04:09:39) :
[…]
I would only add that the programmers are likely not professional programmer staff, but fellow researchers, and they would not be going into the code unless something needed to be changed for some reason. As a result of some new research and even then they most likely do not look at any part other than what they are adding to support the new paper… ”
Gotta second E.M. here. I once had to port a photogrammetrics package written by photogrammetrics researchers from Fortran to C++. The researchers were very good in their domain but had neither the slightest clue about modern software architecture nor any interest in refactoring old working code just to make it easier to maintain. It was the worst thing I ever did for money.

DirkH
February 27, 2010 4:46 am

“JMR (02:15:20) :
It looks like USHCN made a basic mathematical mistake: they added a bias to the rural numbers without a physical explanation for doing so.”
I object. This is not “a mathematical mistake”. Deciding whether to add an offset to rural stations or subtract an offset from urban stations is directly in the “core competence” region of the climatologist domain. We can’t let them off the hook on this. OK, they made a dumb mistake when converting from Fahrenheit to Celsius; we can blame that on programming incompetence. But this adding/subtracting of an offset is a core methodology decision.

DirkH
February 27, 2010 5:00 am

“Phil M (21:05:51) :
To say that this “paper” is amateurish is being too kind. I produced far better work as an undergraduate when I had long hair, blood-shot eyes, and a 48-hour hangover.”
Care to give a link to a paper you produced when you had long hair, blood-shot eyes, and a 48-hour hangover? If it’s even better than the posted paper, I’m very eager to have a look at it, let alone the papers you produce with short hair, clear vision and no hangover.

BBk
February 27, 2010 7:22 am

Phil M.
“Picking two sites (one rural, one urban) from each state? Since when is it best practice to use less data? I don’t know if anyone has looked at a map of the U.S. lately, but states in the West are large and states in the Northeast are small. Before you even started loading data into Excel you’ve already biased your results geographically. Someone wasn’t paying attention during quantitative spatial analysis. ”
Spatial analysis doesn’t matter. The point here, and in another recent analysis, is to pick out flaws in the methodology and show that it is incorrect in some instances. Whether it is incorrect in ALL instances is rather irrelevant. It points out that the models being used to derive data sets are junk and that the mathematical models need to be revisited.
Refuting a point is much easier than making a point.
Simplistic Example:
Assertion – All green apples are sour.
Refutation – Find a single green apple that isn’t sour.
The point of a refutation is not to come up with an alternative theory, but to show that a theory isn’t valid and needs adjustment. Here’s the real scandal: why aren’t any of the “peer reviewers” poking holes in the data being used? What is the value of peer review if they don’t try to refute a point but instead just rubber-stamp an assertion they agree with or spike an article they disagree with?

February 27, 2010 9:13 am

Regarding Robert,
I would like to see him back, as long as he has learned his lesson about manners. I have no idea what he was saying about Anthony in that thread, but I think any of us who have been around here for a while (I’ve lurked on and off for 2 years, maybe longer, before I started following closely about 6 months ago) know that the real beauty of this site is the comments section, and that as the purveyor of this site Anthony is not one to get bent out of shape and overreact to valid criticism.
What is posted here, unlike at RC and Tamino, is the starting point for a conversation, not the final word. The comments will challenge it and try to poke holes in it. Don’t get me wrong, there is plenty of cheerleading here too, sometimes more than warranted (Lacis’ thread comes to mind again, and I’m talking about the comments, not the original posts), but I guess what I’m getting at is that there is open debate here, and I have always appreciated that.
We need to both challenge and be challenged, but civility and respect, at some level, must be maintained.

Gareth
February 27, 2010 11:22 am

If you were to start from an assumption that UHI is a minor issue you could end up convinced that the reliable stations are the urban ones.

Manny
February 27, 2010 8:22 pm

Video of a boy called Peter who did the same study over a year ago, with the same conclusion, but without a PhD or government money:

aMINO aCIDS iN mETEORITES
February 27, 2010 11:56 pm

Looks like UHI started showing up in the Roaring ’20s.

kwik
February 28, 2010 3:21 am

E.M.Smith (01:46:24) :
…..Something like:…..
“variance = rural – urban”
“rural = rural-variance”
E.M., I think you are right.
I think this is what has happened:
Someone made a piece of code with an error like the one you suggest above.
The result confirms what you expect (global warming) => hurray!!!
So no one sees there is something wrong, and the years go by.
As you know:
-Detecting an error in software means you need a test setup.
-Finding and correcting it means stepwise debugging.
-Knowing where to look means you need to use Visual SourceSafe or the like, in order to trace who did what and when….
When making a change, no matter how minuscule it is, you run the test setup. All still well?
Maybe there are too many managers (driving big SUVs) in these institutions, and too few who are actually doing science (driving Volvos)….
hehe!
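
For what it’s worth, the sign error hypothesized above is trivial to demonstrate with synthetic series (the two trend rates are taken from the paper’s raw figures; nothing here is NCDC code):

```python
import numpy as np

years = np.arange(1900, 2001)
rural = 0.0013 * (years - years[0])   # ~0.13 °C/century, noise-free for clarity
urban = 0.0079 * (years - years[0])   # ~0.79 °C/century

variance = rural - urban              # "variance = rural - urban"
rural_adj = rural - variance          # "rural = rural - variance"

# The "adjusted" rural series is now identical to the urban series,
# so it silently inherits the urban trend.
print(np.allclose(rural_adj, urban))  # True
```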

juliandroms
February 28, 2010 3:30 am

George Turner said:
> I’m not so sure on this one. If the adjusted graph is correct,
> every year since 1990 should’ve been proclaimed hotter
> than 1934, which still stands as the record even in their
> own adjusted stats.
The plot is of the 11-year moving average, not the yearly average. So on its face, it looks about right.
Full report here:
http://scienceandpublicpolicy.org/originals/temperature_trends.html
If you look at the graphs which show both annual and 11-year averages, a few years in the 1930s are well above the 11-year moving average, but it’s funny that 1934 does not come out as one of the highest-temperature years. Is it because the temps are for the contiguous 48 and do not include Hawaii or Alaska? Someone help me out here…
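
On the 1934 question: a centered 11-year moving average dilutes a single hot year by roughly a factor of eleven, so a record individual year need not stand out in the smoothed curve at all. A minimal sketch, assuming a simple centered window (the paper does not say exactly which smoother it used):

```python
import numpy as np

def moving_average_11(values):
    """Centered 11-year moving average; the 5 points at each end
    are left as NaN rather than padded."""
    out = np.full(len(values), np.nan)
    for i in range(5, len(values) - 5):
        out[i] = np.mean(values[i - 5 : i + 6])
    return out

temps = np.zeros(31)
temps[15] = 1.2                                  # one standout hot year, like 1934
print(round(moving_average_11(temps)[15], 3))    # 0.109: the spike is diluted 11-fold
```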

juliandroms
February 28, 2010 3:44 am

Oh, I see… there are no graphs shown of the combined raw or adjusted annual contiguous US temperature data sets, either above or in the paper. The graphs are broken down as rural only or urban only. Presumably, once you combine the two adjusted data sets from NCDC, you get what the NCDC publishes here:
http://www.ncdc.noaa.gov/img/climate/research/2006/ann/Reg110Dv00Elem02_01122006_pg-v2.gif

LearDog
February 28, 2010 7:38 am

EM & Anthony –
You guys really have a deep understanding of the sources of data and the various ‘corrections’ made to these data.
As a newb to this, I would LOVE to be able (for family and friends) to access data from a rural station my family knows about (and whose quality isn’t in doubt) and to demonstrate the ‘corrections’ applied.
The Long paper highlights the problem (must be the luminosity = heat correction, idk?), but this now obviously needs to be done on a comprehensive and publishable basis (a PDF on an advocacy website doesn’t scratch my scientific itch, I’m afraid, even if I agree with it).
Anthony and Steve obviously have a great start here, but this kind of Virginia analysis can be repeated on a site-by-site basis by interested others, given a bit of a roadmap. I’m sure all the data have been the subject of various posts and discussions before, but they are hard to find as a newb.
A post that describes:
1) the data (what is available: min? max? avg? TOB?, how it is used, when it was released),
2) how to access the data (where to go),
3) what it is that one is accessing (is it raw, really?),
4) where the data are from (how the station code list ties to geography),
5) the surfacestation.org code and overview, and
6) what adjustments were applied and when
would be a fantastic roadmap for this corps of citizen scientists. You might be able to harness the power of the internet in an analytic way.
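
As a possible first step on that roadmap, here is a bare-bones sketch for demonstrating the ‘corrections’ at a single station. The file names and the two-column year/temperature CSV layout are hypothetical stand-ins for whatever form you obtain the raw and adjusted series in.

```python
import csv

def load_series(path):
    """Read a two-column CSV of (year, annual mean temperature)."""
    with open(path, newline="") as f:
        return {int(year): float(temp) for year, temp in csv.reader(f)}

raw = load_series("station_raw.csv")            # hypothetical file names
adjusted = load_series("station_adjusted.csv")

# Show the adjustment applied in each year the two series share.
for year in sorted(raw.keys() & adjusted.keys()):
    delta = adjusted[year] - raw[year]
    print(f"{year}: raw {raw[year]:6.2f}, adjusted {adjusted[year]:6.2f}, delta {delta:+.2f}")
```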

Frozen man
February 28, 2010 11:56 pm

Ummmm… rural-made warming…

lang
March 1, 2010 3:06 am

What I don’t understand is what possible reason they could give for adjusting the rural data. To me it is the only uncontaminated data; the only thing affecting it is the change in temperature.
What they should do is completely disregard the urban readings and take any trends from the unaffected urban data only. This way there is no need to adjust anything.

lang
March 1, 2010 3:08 am

take any trends from the unaffected urban data only
Of course, I meant the unaffected rural data.

monckhausen
March 4, 2010 2:53 pm

Let’s say the urban heat island/microsite effects are real! How would that change GLOBAL warming, i.e., Ts on a global scale? The US occupies 2% of the earth’s surface, and the cities are small islands within it. Even more so considering that the reported data are T differences and not absolute Ts. As long as the site conditions do not change, the T differences should be real. And, on top of that, the T increase in the US over the last decades is much smaller than the T increase in the Arctic and Siberia. Considering all this, the surface station quibbles are a bunch of hot air.

monckhausen
March 4, 2010 2:57 pm

Oh, and the so-called paper is distributed by some obscure science and policy institute (I guess more policy than science). It is not in a peer-reviewed journal. That’s not a paper; that’s a note. Hey, my letters to the editor are also scientific papers.
Wonder what would happen if Mann and Jones published their stuff unreviewed on a science and policy institute website…
