Note: See update below, new graph added.
There’s a new paper out by Dr. Edward Long that makes some interesting comparisons of NCDC data for rural and urban stations in the CONUS, both raw (prior to adjustments) and adjusted.
The paper is titled Contiguous U.S. Temperature Trends Using NCDC Raw and Adjusted Data for One-Per-State Rural and Urban Station Sets. In it, Dr. Edward Long states:
“The problem would seem to be the methodologies engendered in treatment for a mix of urban and rural locations; that the ‘adjustment’ protocol appears to accent to a warming effect rather than eliminate it. This, if correct, leaves serious doubt for whether the rate of increase in temperature found from the adjusted data is due to natural warming trends or warming because of another reason, such as erroneous consideration of the effects of urban warming.”
Here is the comparison of raw rural and urban data:
And here is the comparison of adjusted rural and urban data:
Note that even the adjusted urban data has as much as a 0.2 °C offset from the adjusted rural data.
Dr. Long suggests that NCDC’s adjustments eradicated the difference between rural and urban environments, thus hiding urban heating. The consequence:
“…is a five-fold increase in the rural temperature rate of increase and a slight decrease in the rate of increase of the urban temperature.”
The analysis concludes that NCDC “…has taken liberty to alter the actual rural measured values”.
Thus the adjusted rural values show a systematic increase relative to the raw values, larger the further back in time, and a decrease for the more recent years. At the same time the urban temperatures were little, or not at all, adjusted from their raw values. The result is an implication of warming that has not occurred in nature, but has indeed occurred in urban surroundings as people gathered into cities and the cities grew in size and became more industrial. So, in recognizing this aspect, one has to say there has been warming due to man, but it is an urban warming. The temperatures due to nature itself, at least within the Contiguous U.S., have increased at a non-significant rate and do not appear to have any correspondence to the presence or absence of carbon dioxide.
The paper’s summary reads:
Both raw and adjusted data from the NCDC have been examined for a selected Contiguous U.S. set of rural and urban stations, 48 each, or one per State. The raw data provides 0.13 and 0.79 °C/century temperature increases for the rural and urban environments, respectively. The adjusted data provides 0.64 and 0.77 °C/century, respectively. The rates for the raw data appear to correspond to the historical change of rural and urban U.S. populations and indicate the warming is due to urban warming. Comparison of the adjusted data for the rural set to that of the raw data shows a systematic treatment that causes the rural adjusted set's temperature rate of increase to be 5-fold more than that of the raw data. The adjusted urban data set's and raw urban data set's rates of temperature increase are the same. This suggests the consequence of the NCDC's protocol for adjusting the data is to cause historical data to take on the time-line characteristics of urban data. The consequence, intended or not, is to report a false rate of temperature increase for the Contiguous U.S.
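For readers who want to check the arithmetic, the rates quoted in the summary are simple least-squares slopes expressed per century. Here is a minimal sketch of that kind of raw-versus-adjusted comparison in Python, using placeholder series only (substitute the actual NCDC rural averages from the paper to reproduce its numbers):

import numpy as np

def trend_per_century(years, temps_c):
    # Least-squares slope of an annual series, expressed in degrees C per century.
    return np.polyfit(years, temps_c, 1)[0] * 100.0

# Illustrative placeholder data -- not the paper's station set.
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
rural_raw = 10.0 + 0.0013 * (years - 1900) + rng.normal(0.0, 0.1, years.size)
rural_adj = 10.0 + 0.0064 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

raw_rate = trend_per_century(years, rural_raw)   # roughly 0.13 C/century here
adj_rate = trend_per_century(years, rural_adj)   # roughly 0.64 C/century here
print(f"raw {raw_rate:.2f}, adjusted {adj_rate:.2f} C/century, ratio {adj_rate / raw_rate:.1f}x")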
The full paper may be found here: Contiguous U.S. Temperature Trends Using NCDC Raw and Adjusted Data for One-Per-State Rural and Urban Station Sets (PDF) and is freely available for viewing and distribution.
Dr. Long also recently wrote a column for The American Thinker titled: A Pending American Temperaturegate
As he points out in that column, Joe D’Aleo and I raised similar concerns in: Surface Temperature Records: Policy Driven Deception? (PDF)
UPDATE: A reader asked why divergence started in 1960. Urban growth could be one factor, but given that the paper is about NCDC adjustments, this graph from NOAA is likely germane:
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Thank you for the third graph. I hope NCDC will take this paper seriously enough to produce a response for why it is “wrong” or otherwise misleading. Unless they can show the 48/48 sites have been cherry picked inappropriately (heckifino), I should certainly like to hear what other rationale they can come up with as to why this paper misrepresents reality. The paper claims “more than 5x”, but it seems by my math to actually be “more than 6x” difference for urban vs rural, at least in the US.
Anthony, you and Evan are the Lords of Siting – see anything about those 48/48 that gives you pause?
James Chamberlain (12:21:12) : “Does anyone miss Robert?”
James,
Actually, I immensely enjoy the give and take between some of the WUWT regulars and any commenter like him.
So, yes. I miss him in that regard.
John
I can only look at this sad saga from a layman's point of view. So let's see.
Across 2 ends of the world, we have 2 wolves who predicted doom and gloom due to AGW. (Jones at one end, Hansen at the other)
We put these wolves in charge of the sheep (temp data) and now find our sheep have been disappearing at a great rate.
Any surprises?
Interesting study that I should read. Why select 1 or 2 sites per state?
I worry about being open to the characterization of a cherry-picked dataset.
What I would like to understand is how the GISS corrections are made. If the protocol is actually to AVERAGE the data in one of their 5×5 grids – one could imagine the urban sites swamping the rural.
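If the protocol really is a straight average within each grid cell, the arithmetic of that concern is easy to see. A toy sketch with hypothetical station values (this is not GISS's actual procedure, just the worry stated in numbers):

# Hypothetical grid cell containing four urban stations and one rural one.
urban_anoms = [0.9, 0.8, 1.0, 0.7]   # degrees C, running warm from UHI
rural_anom = 0.1

simple_mean = (sum(urban_anoms) + rural_anom) / 5        # 0.70 C: urban swamps rural
class_mean = (sum(urban_anoms) / 4 + rural_anom) / 2     # 0.48 C: average each class first
print(simple_mean, class_mean)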
It sure does look suspicious though. The correction for the urban heat island effect doesn't ratchet urban areas down – it ratchets other areas UP!
Their thinking must be: differences minimized, mission accomplished!
Sloppy sloppy work
crosspatch (09:02:30) :
That answers a question I never got around to asking when I was there: the name of Knott's Berry Farm amusement park.
DaveE.
There's a kind of levelling off of urban raw temperatures in the 1970s, before the rise begins again in the 1980s. Since some of us here are trying to explain the rise in urban temperatures via demographics, the increase in asphalt, concrete and other hard surfaces, population shifts, and the intensification of heat-emitting technologies, this temporary plateau might reflect to some extent the energy conservation that followed the 1973–74 Oil Crisis, the government conservation initiatives, and the stagnant economy. I wonder how widespread the temporary shift to wood-burning stoves in the late 1970s was, as ordinary people tried to cut their heating bills? Would this be reflected in any of the data?
Why did the divergence start around 1965? Doesn't anyone remember the “Great Society”? People started flocking to the cities, effectively killing an agrarian society. Small farms were left vacant to become pastures. Small towns dwindled away to nothing. The people migrating to the cities created conditions that in turn drove flight to the suburbs, still near the cities. Conditions were ideal for the growth of UHI. At first nature managed the heat bias by itself, until some opportunists decided to help nature along. The urban graph mirrors these results perfectly.
[I’ll try to keep this short…]
I think this sums up what most of us see as the reality that ought to be observed and measured.
The rural sites, in toto, should be as unbiased a dataset as we can find. If the urban heat islands are affecting the overall record, then it should be reflected in the rural dataset to some degree. If the assumption is that rural temps, less the urban contribution to them, are completely stable (which is not really a correct basic assumption, but we have to have some basis to start from), then any rise in the rural dataset can be said to be from bleed-through from the urban heat islands.
I tend strongly to agree with wayne: If the claim is that the average is rising, then that should be best seen where spikes are not present, out away from them, where the blending has smoothed everything out.
The scientists want to homogenize the data? The atmosphere itself homogenizes the temps, and the resultant is the rural data.
Reading ONLY the rural data is the only way to get a true reading – as long as the rural data is still rural. If we are warming the WHOLE planet, shouldn’t that show in the moderate to remote corners of the planet?
If I have one caveat on this, it is that the population of data points is so small. As a true study, I think this means only that a more in-depth study should be undertaken. There is every probability that these results do reflect something incorrect in the methodology, but it needs a far wider study. Replication, replication, replication…
I've been arguing that point for quite a while: that this is maybe a major source of UHI. When the indoor heat is displaced to the outside, the overall heat content is unchanged, but on top of it comes the extra heat generated by the air conditioner's own inefficiency, which is given off as waste heat. The removed heat and the added heat together end up outside the buildings, creating micro-climates that often overlap. Yes, there are other factors. This one, IMHO, does not get the attention I think it deserves.
I am glad someone else is considering this, for whatever that is worth.
Frank (13:07:13) :
“Dr. Long’s paper is intriguing, but meaningless.”
I’m sorry but I must disagree. If this is taken as a preliminary study it indicates that there may be a significant problem and that further study is warranted. Also, we do not know the detailed design of the study.
From experience, in an industrial setting for internal use only, the definition and design of a study takes longer, and takes up more of the report, than doing the study and reporting the results. If the initial design and definition are inadequate, then any study may be worse than useless; it may be so misleading as to result in actions that cause a bad situation to deteriorate into disaster. This is, of course, the major problem with many of the “peer reviewed” studies in the literature. They lack the essential detail and make it impossible to confirm or deny the results, or even to confirm that they actually investigated the problem the authors claim to be studying. They have the potential to turn a possibly bad situation, CAGW, into a financial and humanitarian disaster far worse than any the world has seen. This is especially true when one considers that the corrective action does not appear to have the potential to have any detectable effect.
A little elaboration on/restating of my point about UHI possibly affecting rural data:
If we assume, for the sake of argument, that the work of CRU/NOAA/NCDC/GISS is all crap, then we need to begin assessing what is really happening: whether warming is small, large, catastrophic, or not happening at all.
If global warming happens in the real world of raw data, then rural areas should be the last to see it happening – but they MUST be seeing it happen IN THE RAW DATA. That is IF it is happening. Since they are farthest from the warming – whether the warming is from UHI, CO2, air conditioners, automobiles, or land use – the rural areas are the canary in the mine. Being remote, they should not NEED any adjustments, since everything else should warm (or cool) before they do. They should be the base line against which trends are measured.
The 0.11ºC/Century MAY be coming from bleed-across from urban areas. If that is the case, it is something to keep an eye on, and something that is easy to check. At the same time, 0.11º/century is an awfully small rate of increase, so other than a check every 25-50 years it shouldn’t be much to worry about.
Again, if the urban areas are affecting the entire globe, then we should be seeing it in the raw rural temps.
Something I am surprised no one has pointed out (and which WILL be pointed out by AGWers) is that this is “only” the U.S., and therefore is only essentially a local phenomenon, at most.
But it is a very important thing that the best measured place in the world is the one being checked out. That means there is no need for reaching across hundreds of miles for reference rural stations (this one intentionally does not use them, anyway), and no large areas unrepresented.
If you can’t look at the well-documented areas, what good can ANY study do?
It is a START. It by itself is not enough. These trends need to be verified with other studies (all easy enough to do), wider studies.
But the real point of this study is that THESE stations have been adjusted weirdly – and THAT is an important observation.
Why have THESE stations been adjusted in this way?
Is there any reason to think that no other stations have had this kind of weird adjustments? Of course not.
The next step is to ask those next questions and find out how widespread this is.
It can’t stop here.
feet2thefire (18:26:05) :
The scientists want to homogenize the data? The atmosphere itself homogenizes the temps, and the resultant is the rural data.
Excellent point.
That is why I am very encouraged to see many folks and scientists out there digging into the rural data.
To say that this “paper” is amateurish is being too kind. I produced far better work as an undergraduate when I had long hair, blood-shot eyes, and a 48-hour hangover.
Picking two sites (one rural, one urban) from each state? Since when is it best practice to use less data? I don't know if anyone has looked at a map of the U.S. lately, but states in the West are large and states in the Northeast are small. Before you even start loading data into Excel you've already biased your results geographically. Someone wasn't paying attention during quantitative spatial analysis.
It has been pointed out over and over and over again that time of observation (TOBS) is a known bias in these data. One of your fellow skeptics even points it out above. It must always be accounted for. Why on Earth would someone ignore this? (Because they can…because this will never be peer-reviewed).
I see all these references to “fitted linear regressions”. Where are the model diagnostics? What scientist would discuss a linear regression model and not provide a p-value? Alarm bells should be ringing for anyone that is still conscious.
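For what it's worth, the diagnostics being asked for take only a few lines to produce. A minimal sketch, assuming the annual anomalies sit in a plain array (illustrative values, not the paper's data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
anoms = 0.0013 * (years - 1950) + rng.normal(0.0, 0.2, years.size)  # made-up anomalies

fit = stats.linregress(years, anoms)
print(f"slope:     {fit.slope * 100:+.2f} C/century")
print(f"std error: {fit.stderr * 100:.2f} C/century")
print(f"p-value:   {fit.pvalue:.3f}")
print(f"r-squared: {fit.rvalue ** 2:.3f}")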
I would plead with anyone reading this that has anything approximating an open mind to please disregard this paper in its entirety. I won’t ask you to change your opinion; you’re certainly entitled to it.
But PLEASE, this type of “science” must be ignored, even if you agree with the conclusions. There are far more thorough analyses of surface temperature records out there.
One more point – and I really think this is important:
AGW says that increasing CO2 is causing an increasing greenhouse effect for the entire globe. If so, the rural and urban temps should reflect this, very nearly equally. Different large regions may vary from each other, but within smaller regions rural and urban should be very nearly equal in their increases.
1. The adjusted data seems to support this. If it is true.
2. The raw data should also reflect this. But it doesn’t. We can’t even say here, “If it is true.” How can raw data not be true?
But this study, as everyone here is noting, casts a very sinister eye on the NCDC adjustments – on the values used, on the trends in those values, and even on their directions.
IF THIS STUDY IS ALLOWED TO STAND, IT REFUTES THE CO2 GREENHOUSE EFFECT THEORY. If urban areas are warmer and climbing at a different rate than rural areas, then something else is happening, not a greenhouse effect. It is a local effect, not a global one.
If so, the global in “global warming” is shot in the foot.
A point many here have noted.
If you read the paper, you’d see that the researcher specifically addressed this question.
As I see it, the point about THIS data is not necessarily where it fits into the larger picture, measured against other data. The point really is: Why has THIS data been treated this way?
The time of observation is a constant differential for a single station, compared to itself and against the standard set by the NCDC. If the time is off by a constant 5 hours, for example, that variation does not lend itself to increasing over time. Why would the adjustment need to keep on increasing? Once determined, the adjustment must remain constant, barring time of observation or other changes.
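To put that in concrete terms: adding a fixed offset to every reading shifts the series up or down but leaves the fitted slope exactly where it was; only an adjustment that itself changes over time can change the trend. A quick sketch with made-up numbers:

import numpy as np

years = np.arange(1950, 2001)
temps = 12.0 + 0.005 * (years - 1950)            # made-up series with a 0.5 C/century trend

constant_offset = temps + 0.8                    # fixed time-of-observation correction
ramping_adjust = temps + 0.01 * (years - 1950)   # correction that grows year after year

slope = lambda y: np.polyfit(years, y, 1)[0] * 100.0   # degrees C per century
print(slope(temps))            # 0.5 -- the underlying trend
print(slope(constant_offset))  # 0.5 -- a constant offset leaves the trend unchanged
print(slope(ramping_adjust))   # 1.5 -- a time-varying adjustment is what changes the trend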
In pointing at the way this data was treated, it raises the question of whether the rest of the data was treated in the same way.
If any group of data is adjusted in a weird way, it is legitimate to ask why. And if that data is not stand-alone, but part of a larger population of data, then it draws attention to the entire treatment and possible errors. In a perfect, non-political scientific world, the NCDC would look at this and review their data – especially for time of observation and urban heat island effect.
Small studies don’t solve anything; they can only point at what may have been done in error. They can only point to the need to look deeper.
feet2thefire (21:14:51) :
The raw data may need to be adjusted to properly reflect nature. Station moves, for instance, can have an effect on the data that hides a trend. Of course, you could just measure the trend up to the move and the trend after the move, and I believe that would have the same effect with no need for adjustment. You would just have to move the baseline for the anomaly. You could make a whole new dataset for the new location, which seems to me like the option that reflects reality best, but hey, who am I to say all these government scientists are making their own lives too hard?
Of course, what is the point of adjusting for a station move (10km) when you can use a station 100km away to homogenize the temperature?
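A rough numerical sketch of that "two segments, two baselines" idea: fit the trend separately before and after the move and the relocation step drops out, whereas a single fit across the raw series is badly biased by the step. Made-up numbers, not a real station:

import numpy as np

years = np.arange(1950, 2001)
raw = 12.0 + 0.005 * (years - 1950)   # hypothetical station, steady 0.5 C/century warming
raw[years >= 1980] -= 1.0             # a move to a cooler site in 1980 drops readings 1.0 C

slope = lambda x, y: np.polyfit(x, y, 1)[0] * 100.0   # degrees C per century
pre, post = years < 1980, years >= 1980

print(slope(years, raw))               # about -2.4: the step swamps the real trend
print(slope(years[pre], raw[pre]))     # 0.5 C/century before the move
print(slope(years[post], raw[post]))   # 0.5 C/century after the move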
Ummm…. I’ll go with the middle cup this time?!
Steveta, the proper perspective (in the U.S.) is this: people who live in urban areas during the summer experience much cooler temperatures inside air-conditioned buildings than my grandfather did working as a farmer all day out under the sun (early 1900s). As these people walk the few minutes from their offices to their air-conditioned cars, they experience on average a 0.6 degree higher temperature because of all the heat energy pumped out of buildings and cars – and it doesn't take any advanced degree to think this through.
“Re: Alexander (01:45:18) :
Another shoe drops…”
Hooray, Alex!
From an old woman in the still almost empty
and extremely beautiful Arkansas Ozarks!
feet2thefire (18:26:05) :
Thanks for the reply. It sure sounds like you have a grounding in physics and have your head screwed on correctly to boot.
I see you got my point. The logic being used today by the agencies seems totally screwed up.
Any logical person would say that, at most, you should weight the rural and urban sites according to the respective land areas they occupy. Part of the reason is the increased variance spoken of below.
What caused the 0.11 slope in the rural data is unanswerable; the possibilities are numerous. But a good collection of rural stations on a global scale would give the closest answer to what has happened to the global temperature over the last decades, for any excess heat created by urban heat islands would quickly (within hours) move and disperse over the rural sites, continuously. That is why you really only need the rural sites, and that would be the most accurate approach possible.
I do see where the agencies are coming from. They have had papers written saying that it doesn't matter; they had them peer-reviewed. The problem is they are still wrong. It does matter. That is also why you see the variance going up. At an urban site a good wind comes up, blows over the city, and it gets cooler. Thirty minutes later it's still again and the heat gradually builds back up. Now, when did you say the temperature was taken? See, the exact time does matter, and the variance can be large (up to the magnitude of the UHI effect itself). Rural sites do not do this (or only to a much reduced degree), because the wind blowing in is at or close to the local rural temperature to begin with.
I keep saying: personal computers and easy statistics came in, and proper logic and reason went out of the window!
Also, the rural sites would need to be distributed as evenly as possible in all three dimensions to approach the closest measurement of the anomaly. The ultimate would be measuring ALL variations of temperature everywhere there is land or sea, which of course is impossible.
If you were to take this and write a paper, then you would delve into the depths of the data and statistics necessary to back up your hypothesis but at least it would have solid logic and reason behind it.
Of course, that is only my viewpoint. This could be too simplistic, or just not possible at all, but approaching that ideal seems to be a better way.
Do you see the breach in basic logic and reason being made by the agencies, as I do? You can see it in graphs such as the ones in this article.
feet2thefire (19:52:31) :
Something I am surprised no one has pointed out (and which WILL be pointed out by AGWers) is that this is “only” the U.S., and therefore is only essentially a local phenomenon, at most.
I forgot to mention that point you made. You expect they will say this is only local, but this same type of problem is popping up all over the globe at other stations. This article was only on one station, but look at the whole U.S. anomaly last month: the rural was 0.11 and the urban 0.79, untouched. See, the same discrepancy applies when looking at all stations in the U.S., and most likely it is a system-wide error occurring globally in the computations, because the logic itself is flawed. That is what needs the focus, to answer that question. If it can't logically be correct, you shouldn't care what the numbers say. Don't let them concern you with their authority game.
I read years ago how a young boy, something like 13 years old, tore down a major scientific paper purely at the logic level. Wish I could remember what that was. You don't need letters to be a scientist or have an impact, for that is truly what science is: proper thought, followed by numbers to back it up.
(Some of the fluff I throw in is not necessarily for you but for young people following along; science needs all the help it can get these days, especially future climatologists, don't you think?)
feet2thefire (19:52:31) :
Do read the article on “Contribution of USHCN and GISS bias in long-term temperature records for a well-sited rural weather station”. Some of my comments were addressing it when speaking of a single station (kind of a comment-ahead, since I had already read it and was mixing the two to make my point clear to you).
DD More (07:22:27) :
I could never understand how UHI could be treated as minimal. Look at New York City as an example.
Area, including water: 468.9 sq mi (about 1.21e9 sq m)
Power used (2008): 54,869 GWh (http://www.nyc.gov/html/planyc2030/downloads/pdf/progress_2008_energy.pdf)
Averaged over the year and over the city's area, that works out to roughly 5 W/sq m. The Mayor's report says 80 percent is used by buildings, and essentially all of that energy ends up as heat loss, so the building share alone is a forcing of about 4 W/sq m over the city.
The report also remarks that the city has seen a 23 percent increase in the last 10 years, which is close to the increase showing up in the charts.
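As a sanity check on those back-of-the-envelope figures, here is the same arithmetic in a short Python sketch, using only the report numbers quoted above:

SQ_MILE_M2 = 2.589988e6        # square metres per square mile
HOURS_PER_YEAR = 8766

area_m2 = 468.9 * SQ_MILE_M2   # about 1.21e9 m2, NYC including water
energy_gwh = 54_869            # 2008 city-wide energy use from the PlaNYC report
building_share = 0.80          # fraction attributed to buildings

avg_power_w = energy_gwh * 1e9 / HOURS_PER_YEAR
flux = avg_power_w / area_m2               # about 5.2 W/m2 averaged over the city
building_flux = building_share * flux      # about 4.1 W/m2 from buildings alone
print(f"{flux:.1f} W/m2 total, {building_flux:.1f} W/m2 from buildings")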
Interesting to see the figures for NYC. I did some rough calculations for London a while back and came to the conclusion that UHI from anthropogenic heat alone (not including the component from concrete/asphalt and other land-use changes) could increase the average temperature of the whole atmosphere in and above the city by 1 °C after about 4 days of windless weather (assuming no losses to space from the extra 1 °C of heating). Here are the calculations (pasted from a spreadsheet, so forgive the rough layout):
London warming / UHI calculations:
With 6.67 billion people on the planet, average energy use is about 2,400 W per person; Europe is about 6,000 W per person and the USA about 10–12 kW per person (see http://www.carboncommentary.com/2008/07/18/86).
London population: about 8,000,000
Total energy use for London: about 48,000,000,000 W (48,000,000 kW)
London area: 1600 km2, which is 1.6e9 m2, so London sits under about 3.03e-6 of the Earth's atmosphere
Mass of atmosphere above London: about 1.6e13 kg
Energy per unit area: 0.03 kW/m2 (30 W/m2), roughly 8% of the Sun's average output
Time to heat that air mass by 1 °C: about 335,700 seconds (0.0106 years), which is 3.88 days
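The same arithmetic in a short Python sketch; the per-capita power, area fraction, and heat capacity are the rough figures assumed above, and, as in the original, losses to space are ignored:

C_P_AIR = 1004.0                 # J/(kg K), specific heat of air at constant pressure
ATMOSPHERE_MASS_KG = 5.3e18      # total mass of Earth's atmosphere
LONDON_AREA_FRACTION = 3.03e-6   # London's 1600 km2 as a fraction of Earth's surface

power_per_person_w = 6000.0      # rough European per-capita figure used above
population = 8_000_000
area_m2 = 1.6e9

total_power_w = power_per_person_w * population            # about 4.8e10 W
air_mass_kg = ATMOSPHERE_MASS_KG * LONDON_AREA_FRACTION    # about 1.6e13 kg above the city
seconds_to_1c = air_mass_kg * C_P_AIR / total_power_w      # ignoring losses to space

print(f"{total_power_w / area_m2:.0f} W/m2 over the city")
print(f"about {seconds_to_1c / 86400.0:.1f} days to warm that air column by 1 C")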
My conclusion was that what I grandly/pretentiously termed Anthropogenic Atmospheric Heating (AAH) is insignificant in global terms (averaged globally it is roughly only 1/8000th of the heat we receive from the Sun) but that it is of a magnitude which can significantly increase ambient atmospheric temperatures in large urban areas, e.g. central London, Paris, Tokyo, etc.
However, it is also possible that AAH is of sufficient magnitude to have a noticeable effect at a regional level, e.g. South-East England, Florida, or other highly developed areas. And all this is before we add in the well understood UHI component from land use changes.
Alexander Feht (00:58:08) : A very graphical example of fraud, obvious even to a child. Or should we call it politely “a convenient lie”?
It’s one of those pesky irregular verbs that we have in English:
I Adjust
You lie
He commits fraud
😉 (Stolen in concept from a Yes Minister line in another posting)
BTW, it looks to me like a programmatic adjustment has the urban and rural variables swapped in an equation somewhere. Something like:
variance = rural – urban
rural = rural-variance
Now, if urban is hotter, the variance will be a negative number, and if someone thought they were adjusting things down with a minus, they would find they were actually moving them up with a minus minus.
If you made it instead:
variance = urban – rural
urban = urban-variance
you would have a positive offset for the variance and subtracting a positive from urban would be exactly what you wanted to do.
While this looks like a silly error with variable names like “urban” and “rural”, some programmers use names like annom_adj, annom_out and annom_in so written as:
annom_adj=annom_in-annom_out
annom_in=annom_in-annom_adj
it would be far less obvious that the programmer had gotten it the wrong way round… I’ve seen far worse done in programs. By good programmers.
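Here is a minimal sketch of how that swap would play out, with made-up numbers rather than anything from the actual adjustment code (which none of us have seen):

urban, rural = 15.0, 14.0   # made-up annual means; the urban site runs warmer

# Intended correction: pull the urban reading down toward the rural one.
variance = urban - rural    # +1.0
print(urban - variance)     # 14.0 -- urban adjusted down, as intended

# Swapped version: the same pattern applied with the variables reversed.
variance = rural - urban    # -1.0
print(rural - variance)     # 15.0 -- rural adjusted UP by the urban excess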
Usually that kind of thing is caught with QA suites and sample data runs. But I’ve not seen a lot (or, frankly, any) evidence of QA suites in the “scientific programming” done by “climate scientists”. Nor any written acceptance test criteria either. So if they didn’t do much testing, I could easily see that kind of error going through unchallenged.
Just a thought…