Warming in the USHCN is mainly an artifact of adjustments

Dr. Roy Spencer proves what we have been saying for years: the USHCN (U.S. Historical Climatology Network) is a mess, compounded by a bigger mess of adjustments.

==============================================================

USHCN Surface Temperatures, 1973-2012: Dramatic Warming Adjustments, Noisy Trends

Guest post by Dr. Roy Spencer.

Since NOAA encourages the use of the USHCN station network as the official U.S. climate record, I have analyzed the average [(Tmax+Tmin)/2] USHCN version 2 dataset in the same way I analyzed the CRUTem3 and International Surface Hourly (ISH) data.
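For readers who want to reproduce the basic bookkeeping, here is a minimal Python sketch of that [(Tmax+Tmin)/2] convention. The function and variable names are illustrative, not Dr. Spencer's actual code.

```python
import numpy as np

def station_mean_temp(tmax, tmin):
    """USHCN-style 'average' temperature: the midpoint of the
    daily maximum and minimum, (Tmax + Tmin) / 2."""
    return (np.asarray(tmax, dtype=float) + np.asarray(tmin, dtype=float)) / 2.0

# e.g. station_mean_temp([20.1, 22.4], [8.3, 9.0]) -> array([14.2, 15.7])
```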

The main conclusions are:

1) The linear warming trend during 1973-2012 is greatest in USHCN (+0.245 C/decade), followed by CRUTem3 (+0.198 C/decade), then my ISH population density adjusted temperatures (PDAT) as a distant third (+0.013 C/decade); a short sketch of how such trends are computed follows this list.

2) Virtually all of the USHCN warming since 1973 appears to be the result of adjustments NOAA has made to the data, mainly in the 1995-97 timeframe.

3) While there seems to be some residual Urban Heat Island (UHI) effect in the U.S. Midwest, and even some spurious cooling with population density in the Southwest, for all of the 1,200 USHCN stations together there is little correlation between station temperature trends and population density.

4) Despite homogeneity adjustments in the USHCN record to increase agreement between neighboring stations, USHCN trends are actually noisier than what I get using 4x per day ISH temperatures and a simple UHI correction.
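As noted under conclusion 1, the trend figures above are linear fits. Here is a minimal sketch of the usual computation: ordinary least squares on an anomaly series, with the slope converted from C/year to C/decade. This is a generic illustration, not the exact code behind the numbers above.

```python
import numpy as np

def trend_c_per_decade(years, anomalies):
    """Least-squares linear trend of a temperature anomaly series.
    np.polyfit returns the slope in C/year; multiply by 10 for C/decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0

# Illustrative use with monthly data spanning 1973-2012:
# t = 1973 + np.arange(480) / 12.0   # fractional years
# print(trend_c_per_decade(t, anoms))
```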

The following plot shows 12-month trailing average anomalies for the three different datasets (USHCN, CRUTem3, and ISH PDAT)…note the large differences in computed linear warming trends (click on plots for high res versions):

The next plot shows the differences between my ISH PDAT dataset and the other 2 datasets. I would be interested to hear opinions from others who have analyzed these data which of the adjustments NOAA performs could have caused the large relative warming in the USHCN data during 1995-97:

From reading the USHCN Version 2 description here, it appears there are really only 2 adjustments made in the USHCN Version 2 data which can substantially impact temperature trends: 1) time of observation (TOB) adjustments, and 2) station change point adjustments based upon rather elaborate statistical intercomparisons between neighboring stations. The 2nd of these is supposed to identify and adjust for changes in instrumentation type, instrument relocation, and UHI effects in the data.
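To give a feel for the second kind of adjustment, the sketch below is a deliberately crude change point scan on a target-minus-neighbors difference series: it finds the split that produces the largest jump in mean level. It is a toy stand-in for the idea, not NOAA's actual pairwise homogenization algorithm.

```python
import numpy as np

def largest_mean_shift(target, neighbor_mean):
    """Scan a target-minus-neighbors monthly difference series for the
    single split point with the largest jump in mean level -- a crude
    stand-in for a station change point test."""
    d = np.asarray(target, dtype=float) - np.asarray(neighbor_mean, dtype=float)
    n = len(d)
    best_i, best_jump = None, 0.0
    for i in range(12, n - 12):  # require at least a year on each side
        jump = abs(d[:i].mean() - d[i:].mean())
        if jump > best_jump:
            best_i, best_jump = i, jump
    return best_i, best_jump
```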

We also see in the above plot that the adjustments made in the CRUTem3 and USHCN datasets are quite different after about 1996, although they converge to about the same answer toward the end of the record.

UHI Effects in the USHCN Station Trends

Just as I did for the ISH PDAT data, I correlated USHCN station temperature trends with station location population density. For all ~1,200 stations together, we see little evidence of residual UHI effects:
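The correlation itself is straightforward to compute once per-station trends and population densities are in hand. A hedged sketch (taking the log of density is my illustrative choice, since density spans orders of magnitude):

```python
import numpy as np

def trend_density_correlation(trends, pop_density):
    """Correlate per-station temperature trends (C/decade) with station
    population density; a positive r would suggest residual UHI warming."""
    x = np.log10(np.asarray(pop_density, dtype=float) + 1.0)
    return np.corrcoef(x, np.asarray(trends, dtype=float))[0, 1]
```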

The results change somewhat, though, when the U.S. is divided into 6 subregions:

Of the 6 subregions, the 2 with the strongest residual effects are 1) the North-Central U.S., with a tendency for higher population stations to warm the most, and 2) the Southwest U.S., with a rather strong cooling effect with increasing population density. As I have previously noted, this could be the effect of people planting vegetation in a region which is naturally arid. One would think this effect would have been picked up by the USHCN homogenization procedure, but apparently not.

Trend Agreement Between Station Pairs

This is where I got quite a surprise. Since the USHCN data have gone through homogeneity adjustments with comparisons to neighboring stations, I fully expected the USHCN trends from neighboring stations to agree better than station trends from my population-adjusted ISH data.

I compared all station pairs within 200 km of each other to get an estimate of their level of agreement in temperature trends. The following 2 plots show the geographic distribution of the ~280 stations in my ISH dataset, and the ~1200 stations in the USHCN dataset:

I took all station pairs within 200 km of each other in each of these datasets, and computed the average absolute difference in temperature trends for the 1973-2012 period across all pairs. The average station separation in the USHCN and ISH PDAT datasets was nearly identical: 133.2 km for the ISH dataset (643 pairs), and 132.4 km for the USHCN dataset (12,453 pairs).
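A sketch of that pairwise computation, using the standard haversine great-circle distance; the coordinate and trend arrays are assumed inputs (illustrative names, not the actual analysis code):

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p = np.pi / 180.0
    a = (np.sin((lat2 - lat1) * p / 2) ** 2
         + np.cos(lat1 * p) * np.cos(lat2 * p) * np.sin((lon2 - lon1) * p / 2) ** 2)
    return 6371.0 * 2.0 * np.arcsin(np.sqrt(a))

def mean_abs_trend_diff(lats, lons, trends, max_km=200.0):
    """Average absolute trend difference over all station pairs within max_km."""
    diffs = []
    for i in range(len(trends)):
        for j in range(i + 1, len(trends)):
            if haversine_km(lats[i], lons[i], lats[j], lons[j]) <= max_km:
                diffs.append(abs(trends[i] - trends[j]))
    return np.mean(diffs), len(diffs)   # (avg abs difference, number of pairs)
```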

But the ISH trend pairs had about 15% better agreement (avg. absolute trend difference of 0.143 C/decade) than did the USHCN trend pairs (avg. absolute trend difference of 0.167 C/decade).

Given the amount of work NOAA has put into the USHCN dataset to increase the agreement between neighboring stations, I don’t have an explanation for this result. I have to wonder whether their adjustment procedures added more spurious effects than they removed, at least as far as their impact on temperature trends goes.

And I must admit it is disconcerting that those adjustments constitute virtually all of the warming signal in the last 40 years. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.

Latitude

Dr. Spencer, will these people do bank accounts?
😉

Jimbo

Envisat and now this. Why do the great majority of adjustments mean more warmth, rising sea levels, etc.? If the data does not match the predictions then the data has to be faulty and requires ‘necessary adjustments’. Don’t believe me? Ask Dr. James (Coal Death Trains) Hansen. The past isn’t what it used to be.

Jimmy Haigh

Without all the adjustments it would be obvious that temperatures are falling and have been for a while. Global warming? Bollocks!

Upjusting … the adjustment of data used to increase the rate of rise.

NZ Willy

It’s well-remarked-on that El Ninos seem to be followed by warmer average temperatures. I speculated that these events were being used as an opportunity to tweak upwards. This article’s analysis looks to support that idea. Under the cover of a sudden change, “the team” jumps into action, adjusting upwards, ever upwards. The “taxing the plebs into the dirt” part comes later — er, now, actually.

kadaka (KD Knoebel)

ISH refers to the raw International Surface Hourly data at NCDC; the data and its subsequent processing into a usable dataset are described by Dr. Roy Spencer in a guest post here:
http://wattsupwiththat.com/2012/03/30/spencer-shows-compelling-evidence-of-uhi-in-crutem3-data/

Andrew30

Jimbo says: April 13, 2012 at 3:07 pm
“The past isn’t what it used to be.”
It never was.
Climate Science has a great future behind it.

Interstellar Bill

“And I must admit it is disconcerting that those adjustments constitute virtually all of the warming signal in the last 40 years. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.”
Suspicious? More like a total ‘Guilty’ verdict!
Neither the satellites nor the ‘unadjusted’ surface record shows ‘Warming!’

Gail Combs

Given that it is almost April 15th, perhaps we should send a copy of this to the IRS. They are the people who really appreciate creative number juggling. A. P. Ershov’s logic would seem appropriate:
“Finding errors in a computer program is like finding mushrooms in the forest: having found one, look for others in the same place.” ~ A. P. Ershov

Victor Venema

Dear Dr. Roy Spencer PhD, the axis in your first plot is called “Temperature Departure from Avg.”, but at least the red and the green line are on average way above zero, which is not possible by definition. Can you explain the reason for this problem?

Len

It is a shame that when the global warming alarmist-modelers find their predictions do not match observed data, they choose the wrong approach. Choice 1: models in error wrt observations; continue refining and correcting the models. Choice 2: models in error wrt observations; go back and adjust (falsify) the observations until they are in better agreement with model predictions.
Choice 1 is honest and advances knowledge of the physical processes to be included in models, and thus advances science. This is good for everyone.
Choice 2 is dishonest and freezes improvements in knowledge of physical processes and the models used for prediction. In addition, it corrupts historical data and precludes the opportunity for future scientists to compare their theories of physical processes and modeling with real historical data. It is a sin to lie, and it is a sin to add corruption of historical data to the previous lies. It is also a sin to rob future scientists of the chance to advance our knowledge using historical observations and data. Everyone here loses, and it is good for no one.
Do not think it helps the AGW crooks who seek power and money over truth and honesty. They may gain temporary advantages in positions, power, and money, but the truth will come out; attempts to cover up crimes always fail. In the long run they will be exposed as crooks and fools. Moreover, they will forever be damned by honest scientists who honor and need historical data to test theories and hypotheses. The crooked AGW tinkerers will see their legacy judged as more and more evil over time.
And finally, the damage they do to scientists’ credibility will make billions of people suffer forever after.

Thanks, Dr. Spencer.
Another eye-opener!
Yes AGW is man-made; totally made up.
Great article, I think.

Keith W

USHCN is useless either through gross professional incompetence or malfeasance. Zero out the budgets or eliminate the agencies involved as a budget cost-saving measure.

Ally E.

“And I must admit it is disconcerting that those adjustments constitute virtually all of the warming signal in the last 40 years. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.”
*
Got it in one.

Doug in Seattle

Documenting this is important. Sadly however, getting it into the published record may not happen.

David A. Evans

Victor Venema says:
April 13, 2012 at 3:43 pm

Dear Dr. Roy Spencer PhD, the axis in your first plot is called “Temperature Departure from Avg.”, but at least the red and the green line are on average way above zero, which is not possible by definition. Can you explain the reason for this problem?

Both sceptical and alarmist climate scientists do this without defining what they mean, which is the departure from a previously defined 30-year baseline average.
DaveE.

KR

Hold on, I thought Fall et al 2011 (http://www.landsurface.org/publications/J108.pdf) concluded that the basic trends in the USHCN records were reliable?

old construction worker

“When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments”
You could say the missing ocean “heat” was found on shore.

NASA says 7 C to 9 C of UHI in urban areas of the Northeast.

John M

USHCN uses 1961-1990 average as the baseline.
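Putting DaveE's and John M's points together: an anomaly is simply the departure from the mean over a stated baseline window. A minimal sketch (the 1961-1990 default follows John M's note; names are illustrative):

```python
import numpy as np

def anomalies(temps, years, base=(1961, 1990)):
    """Departures from the mean over a fixed baseline period. A series can
    sit well above zero if the analyzed window is warmer than the baseline
    years, or if the series has been vertically offset for comparison."""
    temps, years = np.asarray(temps, dtype=float), np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    return temps - temps[in_base].mean()
```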

MIndBuilder

Is there any explanation for why these results seem to disagree with previous studies, including the BEST study and the study done by Anthony Watts? If I recall, those studies found little difference between the trends for adjusted and unadjusted and for quality rural and bad urban sites. Are these results consistent or conflicting with the satellite data for this area?

Neville.

If this really is so simple, then why can’t you expose this nonsense as soon as possible? Also, if the US station adjustments are so prone to error, then what about the rest of the planet? Australia is about to introduce a CO2 tax of $23 a tonne on July 1st and will forever waste countless billions of dollars on this fantasy of trying to change the temp and climate.
This must be the greatest con/fraud in history, certainly the most costly. Hundreds of billions of dollars already spent, totally wasted for a zero return.

Eric Barnes

Thanks Dr.Spencer.
Who thought this …
http://stevengoddard.files.wordpress.com/2010/10/1998changesannotated.gif?w=500&h=355
was based on sound science?

Andrew

Roy, would you please note the time period used to define the baseline 30-year average against which these data are compared (see David A Evans above). I have looked but I cannot find it (perhaps I wasn’t looking in the right place?).
This may be quite important in interpreting your findings on pop density (as a proxy for UHI) if most of the changes in pop density at monitoring sites occurred prior to or early on in the monitoring period (1973-2012)…

Victor, you are right. I failed to mention that after computing the CRUTem3 and USHCN anomalies, I offset them vertically so all 3 datasets’ averages matched each other over the first 3 years (1973-75).
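In code terms, that vertical offset is a one-line shift; a sketch of the alignment Dr. Spencer describes (function and argument names are illustrative):

```python
import numpy as np

def align_to_reference(series, years, ref, ref_years, window=(1973, 1975)):
    """Vertically offset an anomaly series so its mean over an early window
    matches a reference series over the same window."""
    years, ref_years = np.asarray(years), np.asarray(ref_years)
    m = (years >= window[0]) & (years <= window[1])
    rm = (ref_years >= window[0]) & (ref_years <= window[1])
    offset = np.asarray(ref)[rm].mean() - np.asarray(series)[m].mean()
    return np.asarray(series) + offset
```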

Andrew, the anomalies are relative to 1973 through 2011….but also see my comment above.

Willis Eschenbach

Well done, Dr. Roy. A couple of comments. First, you say:

But the ISH trend pairs had about 15% better agreement (avg. absolute trend difference of 0.143 C/decade) than did the USHCN trend pairs (avg. absolute trend difference of 0.167 C/decade).
Given the amount of work NOAA has put into the USHCN dataset to increase the agreement between neighboring stations, I don’t have an explanation for this result. I have to wonder whether their adjustment procedures added more spurious effects than they removed, at least as far as their impact on temperature trends goes.

It is not generally realized that good correlation between the data does not mean agreement between the trends. For example, consider the following graphs:

Now, the correlation of all of these is greater than 0.9 … but their trends are all over the map. I discuss this chart further in my post “GISScapades”.
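Willis's point is easy to demonstrate numerically: two series that share a seasonal cycle correlate almost perfectly even when their trends differ substantially. A self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(480) / 12.0                    # 40 years of monthly time steps
seasonal = 10.0 * np.sin(2 * np.pi * t)      # shared annual cycle

a = seasonal + rng.normal(0, 0.5, t.size)               # no trend
b = seasonal + 0.03 * t + rng.normal(0, 0.5, t.size)    # +0.3 C/decade

r = np.corrcoef(a, b)[0, 1]
trend_a, trend_b = (np.polyfit(t, y, 1)[0] * 10 for y in (a, b))
print(f"r = {r:.3f}; trends = {trend_a:.2f} vs {trend_b:.2f} C/decade")
# correlation is ~0.99, yet the trends differ by ~0.3 C/decade
```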

And I must admit it is disconcerting that those adjustments constitute virtually all of the warming signal in the last 40 years. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.

Indeed. My point of view about “adjusted” data is that if you adjust data, your confidence interval must include the original data, the adjusted data, and in addition it must encompass the original data with each adjustment added in separately. In the case where the adjustments are about equal to the final trend, of course, this means that the trend will likely be within the error bars …
w.
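Willis's rule of thumb can be restated as arithmetic: compute the trend for the raw series, the fully adjusted series, and the raw series with each adjustment applied on its own, and let the envelope of those trends inform the error bars. A sketch, assuming the individual adjustment series are available:

```python
import numpy as np

def trend(t, y):
    return np.polyfit(t, y, 1)[0] * 10.0   # C/decade

def adjustment_trend_envelope(t, raw, adjustments):
    """Range of trends spanned by: raw data, fully adjusted data, and raw
    data plus each adjustment separately. `adjustments` is a list of series
    (same length as raw) that sum to the total adjustment."""
    raw = np.asarray(raw, dtype=float)
    variants = [raw, raw + np.sum(adjustments, axis=0)]
    variants += [raw + np.asarray(adj, dtype=float) for adj in adjustments]
    trends = [trend(t, v) for v in variants]
    return min(trends), max(trends)
```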

What, pray tell, is a legitimate “Station Location Quality” adjustment that is (a) not a UHI effect, and (b) a net positive 0.20 C over 60 years? That means an average station’s reading today must be RAISED by 0.20 C to make it functionally equivalent to the same station 60 years before.
As I understand the history, Anthony Watts started his sleigh ride by investigating a change from whitewash to white latex paint that would require a 1.00 C negative adjustment, not a positive one. Encroaching parking lots? That’s another negative adjustment.
Oh, I know! There are fewer incinerators today than 60 years ago. /sarc
Let’s see the list of all possible positive and negative site location adjustments. Their number and gross sizes should amount to some staggering statistical error bars.

Sean

I am not a fan of any “adjustments” to experimental data.
As far as I am concerned “adjusting data” is the same as making up your results.
Instead, what should be done, if climatology is a proper science and if they cannot better control their experimental process, is simply to increase the stated error for the data set. All of these data quality problems (changing measurement technique, location problems causing UHI, time-of-observation inconsistencies) are effectively instrumentation and measurement errors and should be stated as such. Anything else is just manufacturing results and hiding the real confidence level in the data set.

Nick Stokes

I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.

Sean

In other words, instead of admitting up front that they really do not have useful data on which to draw the kind of conclusions they are making, due to the poor and inconsistent experimental methods used to gather the data, climatology is just making things up and lying. My only conclusion is that the field of climatology is currently not a science any more than alchemy is a science.

Willis Eschenbach

Nick Stokes says:
April 13, 2012 at 5:48 pm

I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.

Nick, how did you average data to avoid overweighting the east coast where there are lots of stations?
Thanks,
w.

Geoff Sherrington

Nick Stokes says: April 13, 2012 at 5:48 pm I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.
Nick, many of us have done similar calculations, but the open question is still: did the Met Station Country Authority adjust the data before sending it to GHCN?
It is actually quite hard to find useful data sets from this country that can confidently be authenticated as “RAW”. If you have a cache, do let us know. Also, can you tell us if this RAW data is the same as received by GHCN?

Ian W

Nick Stokes says:
April 13, 2012 at 5:48 pm
I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.

You found a trend in a compound metric.
Now do the same but use humidity to calculate the atmospheric enthalpy, and from that the average kilojoules per kilogram of atmosphere. You may be surprised at what you find, as (contrary to the AGW hypothesis) global humidity has been dropping. You will also then be using the correct metric for measuring heat content in a gas. Atmospheric temperature alone is meaningless; average global temperature is like an average telephone number.
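For the curious, a first-order version of the metric Ian W describes is the specific moist enthalpy, roughly h ≈ cp·T + Lv·q. The sketch below uses textbook constants and treats 0 C dry air as the zero point; it is adequate for comparing air samples, not a rigorous thermodynamic treatment.

```python
def moist_enthalpy_kj_per_kg(temp_c, specific_humidity):
    """Approximate specific moist enthalpy of near-surface air, in kJ/kg.
    Two samples at the same temperature but different humidity carry
    different heat content, which is the point being made above."""
    cp = 1.006    # kJ/(kg K), dry air at constant pressure
    lv = 2501.0   # kJ/kg, latent heat of vaporization near 0 C
    return cp * temp_c + lv * specific_humidity

# 30 C air at q = 0.010 vs q = 0.020 kg/kg:
# ~55.2 kJ/kg vs ~80.2 kJ/kg -- same temperature, very different heat content
```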

Andrew

RE
Willis Eschenbach says:
@ April 13, 2012 at 5:21 pm
“My point of view about “adjusted” data is that if you adjust data, your confidence interval must include the original data, the adjusted data, and in addition it must encompass the original data with each adjustment added in separately”.
—————–
Agreed, but of course, you’re preaching to the converted. The official adjusters claim that the unadjusted data are an inherently biased version of reality, and all they’re seeking to do is to remove those biases to provide an uncorrupted version of reality (a Utopian version of reality, some might say).
So then, no need for them to calculate error bars using unadjusted (nasty, biased) data… Isn’t that really what they’re saying? In other words, it is they who decide which reality we get to call reality. It’s purely Orwellian.
On a related question, can I ask you or Roy to briefly address the idea that linear interpolation of data in the land surface temperature record over time provides an efficient mechanism to propagate and transmit localised spatial-temporal biases (e.g. from UHIs) systematically throughout the temperature record, under the cloak of “homogenization”.
And, whatever the expressed justification for it might be, linear interpolation over time will link most if not all data through space and time. The statistical pre-requisite of independence cannot be satisfied, invalidating attempts that seek to measure or compare temperature trends through space and time using these data.
The data are not fit for the purposes to which they are directed.
Where have I gone astray in my thinking? Where are the holes in the argument?
Thanks.

Brian H

Display note: Pleazze, do not, evah, plot lines or dots or labels in yellow. Really. It’s display screen invisibility ink.


Tim Clark

[Nick Stokes says:
April 13, 2012 at 5:48 pm
I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.]
Do you consider an increased temperature of 1.61 C/century by 2073 to be CAGW?

Dr. Spencer, thanks for, once again, casting light on this subject. It needs to be hammered on, over and over again.
I do agree with Brian H: please avoid yellow on graphs if at all possible.

RoHa

In case you have forgotten, I’d like to remind you that we’re doomed.

Andrew

RE
Andrew says:
@ April 13, 2012 at 6:29 pm
…or indeed anyone who can enlighten me…
PS. I wonder if the land surface temperature records are not more appropriately addressed using statistics better suited to the analysis of neural networks – or other techniques that can accommodate sampling dependencies/ data linkages…?

jorgekafkazar

Gail Combs says: “Finding errors in a computer program is like finding mushrooms in the forest: having found one, look for others in the same place.” ~ A. P. Ershov
In my country we say it shorter: “Where bug is, bugs are.”

Andrew

Although I hadn’t intended the pun, come to think of it, this might have inadvertently hit the mark: the work of fiction known as the land surface temperature record is perhaps better suited to statistical techniques capable of probing the workings of the human brain…

I’m sorry, I know this will seem trollish, but every time I see the good ol’ average = [(Tmax+Tmin)/2], I can’t help but think that whoever thought that up wouldn’t have done very well on the TV show “Are You Smarter Than a 5th Grader”.

edbarbar

How do the satellite temps compare? Are these adjusted too? And what about BEST?

Nick Stokes

Willis Eschenbach says: April 13, 2012 at 5:57 pm
“Nick, how did you average data to avoid overweighting the east coast where there are lots of stations?”

I used inverse density weighting, measured by 5×5° cells. That does a fairly good job of balancing. But prompted by your query, I ran a triangular mesh weighting, which places each station in a unique cell, and weights by the area. Running a monthly model, as with the first example, the trend came down to 0.42°C/decade. But with an annual model, it went back to 0.161.
I’ll write a blog post with more details.
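For anyone wanting to replicate the flavor of Nick's weighting, here is a sketch of the simplest variant: average stations into 5x5 degree cells first, then average the cells with a cosine-of-latitude area weight, so a dense cluster of stations counts as one cell rather than many votes. This illustrates the idea only; it is not TempLS.

```python
import numpy as np

def gridded_mean(lats, lons, anoms, cell_deg=5.0):
    """Cell-average station anomalies, then area-weight the cells."""
    cells = {}
    for lat, lon, a in zip(lats, lons, anoms):
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells.setdefault(key, []).append(a)
    means, weights = [], []
    for (ilat, _ilon), vals in cells.items():
        means.append(np.mean(vals))
        weights.append(np.cos(np.radians((ilat + 0.5) * cell_deg)))  # cell-center latitude
    return np.average(means, weights=weights)
```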

SirCharge

I had assumed that USHCN’s homogenization technique was mostly to increase rural temperatures until they match urban ones. Then they just generally decrease the overall temperature by a flat 0.004 C per decade (the IPCC’s officially sanctioned estimate of UHI).

hillbilly33

Apologies for being O/T [snip . . please repost on Tips & Notes . . kbmod]

Nick Stokes

Geoff Sherrington says: April 13, 2012 at 6:14 pm
“Nick, many of us have done similar calculations, but the open question is still: Did the Met Station Country Authority adjust the data before sending it to GHCN?”

Unlikely. Most adjustment is done years later, following some perceived discrepancy. But the GHCN data is as recorded within a month or so.
You can see this with data from our town. Within minutes, readings are online. Within hours, they are on the monthly record. And at the end of the month they go on the CLIMAT form, from which they are transcribed directly into GHCN adjusted. You can check: they don’t change.
Even before the internet, the GHCN data was distributed to thousands of people by CD. You can’t adjust that.