Warming in the USHCN is mainly an artifact of adjustments

Dr. Roy Spencer proves what we have been saying for years: the USHCN (U.S. Historical Climatology Network) is a mess compounded by a bigger mess of adjustments.

==============================================================

USHCN Surface Temperatures, 1973-2012: Dramatic Warming Adjustments, Noisy Trends

Guest post by Dr. Roy Spencer PhD.

Since NOAA encourages the use of the USHCN station network as the official U.S. climate record, I have analyzed the average [(Tmax+Tmin)/2] USHCN version 2 dataset in the same way I analyzed the CRUTem3 and International Surface Hourly (ISH) data.

The main conclusions are:

1) The linear warming trend during 1973-2012 is greatest in USHCN (+0.245 C/decade), followed by CRUTem3 (+0.198 C/decade), then my ISH population density adjusted temperatures (PDAT) as a distant third (+0.013 C/decade).

2) Virtually all of the USHCN warming since 1973 appears to be the result of adjustments NOAA has made to the data, mainly in the 1995-97 timeframe.

3) While there seems to be some residual Urban Heat Island (UHI) effect in the U.S. Midwest, and even some spurious cooling with population density in the Southwest, for all of the 1,200 USHCN stations together there is little correlation between station temperature trends and population density.

4) Despite homogeneity adjustments in the USHCN record to increase agreement between neighboring stations, USHCN trends are actually noisier than what I get using 4x per day ISH temperatures and a simple UHI correction.

The following plot shows 12-month trailing average anomalies for the three different datasets (USHCN, CRUTem3, and ISH PDAT)…note the large differences in computed linear warming trends (click on plots for high res versions):

The next plot shows the differences between my ISH PDAT dataset and the other 2 datasets. I would be interested to hear opinions from others who have analyzed these data on which of the adjustments NOAA performs could have caused the large relative warming in the USHCN data during 1995-97:

From reading the USHCN Version 2 description here, it appears there are really only 2 adjustments made in the USHCN Version 2 data which can substantially impact temperature trends: 1) time of observation (TOB) adjustments, and 2) station change point adjustments based upon rather elaborate statistical intercomparisons between neighboring stations. The 2nd of these is supposed to identify and adjust for changes in instrumentation type, instrument relocation, and UHI effects in the data.

We also see in the above plot that the adjustments made in the CRUTem3 and USHCN datasets are quite different after about 1996, although they converge to about the same answer toward the end of the record.
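
The change-point machinery NOAA describes is elaborate; the sketch below is not that algorithm, only a minimal illustration of the underlying idea it builds on, using synthetic data: difference a candidate station against a neighbor composite, then scan the difference series for a sustained step.

```python
# A minimal, illustrative sketch of neighbor-based change-point screening.
# NOT NOAA's actual USHCN pairwise algorithm, just the general idea: a
# station-minus-neighbors difference series should be roughly flat, so a
# sustained step in it suggests an inhomogeneity at the candidate station.
import numpy as np

def step_scan(candidate, neighbor_mean):
    """Return (best_index, step_size, t_stat) for the single most likely
    step change in the candidate-minus-neighbors difference series."""
    d = candidate - neighbor_mean
    n = len(d)
    best = (None, 0.0, 0.0)
    for k in range(12, n - 12):          # require a year of data each side
        a, b = d[:k], d[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(b.mean() - a.mean()) / se
        if t > best[2]:
            best = (k, b.mean() - a.mean(), t)
    return best

# Synthetic demo: 40 years of monthly anomalies, a +0.5 C jump at month 300
rng = np.random.default_rng(0)
months = 480
regional = rng.normal(0, 0.5, months)            # shared regional signal
neighbor_mean = regional + rng.normal(0, 0.1, months)
candidate = regional + rng.normal(0, 0.2, months)
candidate[300:] += 0.5                           # e.g. a station move

k, step, t = step_scan(candidate, neighbor_mean)
print(f"step of {step:+.2f} C detected at month {k} (t = {t:.1f})")
```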

UHI Effects in the USHCN Station Trends

Just as I did for the ISH PDAT data, I correlated USHCN station temperature trends with station location population density. For all ~1,200 stations together, we see little evidence of residual UHI effects:

The results change somewhat, though, when the U.S. is divided into 6 subregions:

Of the 6 subregions, the 2 with the strongest residual effects are 1) the North-Central U.S., with a tendency for higher population stations to warm the most, and 2) the Southwest U.S., with a rather strong cooling effect with increasing population density. As I have previously noted, this could be the effect of people planting vegetation in a region which is naturally arid. One would think this effect would have been picked up by the USHCN homogenization procedure, but apparently not.
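
For readers who want to reproduce the flavor of this test, here is a minimal sketch, with synthetic stand-ins for the station trends and population densities (none of the numbers below come from the actual analysis):

```python
# Sketch of the residual-UHI test described above: regress each station's
# temperature trend on log population density and look at the correlation.
# All data below are synthetic placeholders, not Dr. Spencer's.
import numpy as np

rng = np.random.default_rng(1)
n_stations = 1200
pop_density = 10 ** rng.uniform(-1, 3, n_stations)   # persons/km^2
# Trends as noise plus a small assumed UHI component, for illustration only
trend = 0.15 + 0.01 * np.log10(pop_density) + rng.normal(0, 0.1, n_stations)

x = np.log10(pop_density)
slope, intercept = np.polyfit(x, trend, 1)
r = np.corrcoef(x, trend)[0, 1]
print(f"trend = {intercept:.3f} + {slope:.3f} * log10(density); r^2 = {r**2:.3f}")
```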

Trend Agreement Between Station Pairs

This is where I got quite a surprise. Since the USHCN data have gone through homogeneity adjustments with comparisons to neighboring stations, I fully expected the USHCN trends from neighboring stations to agree better than station trends from my population-adjusted ISH data.

I compared all station pairs within 200 km of each other to get an estimate of their level of agreement in temperature trends. The following 2 plots show the geographic distribution of the ~280 stations in my ISH dataset, and the ~1200 stations in the USHCN dataset:

I took all station pairs within 200 km of each other in each of these datasets, and computed the average absolute difference in temperature trends for the 1973-2012 period across all pairs. The average station separations in the two datasets were nearly identical: 133.2 km for the ISH dataset (643 pairs), and 132.4 km for the USHCN dataset (12,453 pairs).
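
A minimal sketch of this pair statistic, with synthetic station locations and trends standing in for the real ISH/USHCN data:

```python
# For every station pair closer than 200 km, take the absolute difference of
# their trends, then average over pairs. Synthetic stand-in data throughout.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def pair_trend_agreement(lats, lons, trends, max_km=200.0):
    diffs, seps = [], []
    n = len(trends)
    for i in range(n):
        for j in range(i + 1, n):
            d = haversine_km(lats[i], lons[i], lats[j], lons[j])
            if d <= max_km:
                diffs.append(abs(trends[i] - trends[j]))
                seps.append(d)
    return np.mean(diffs), np.mean(seps), len(diffs)

rng = np.random.default_rng(2)
lats = rng.uniform(30, 48, 300)                  # rough ConUS box
lons = rng.uniform(-120, -75, 300)
trends = rng.normal(0.2, 0.1, 300)               # C/decade, synthetic
mad, sep, npairs = pair_trend_agreement(lats, lons, trends)
print(f"{npairs} pairs, mean separation {sep:.1f} km, "
      f"mean |trend diff| {mad:.3f} C/decade")
```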

But the ISH trend pairs had about 15% better agreement (avg. absolute trend difference of 0.143 C/decade) than did the USHCN trend pairs (avg. absolute trend difference of 0.167 C/decade).

Given the amount of work NOAA has put into the USHCN dataset to increase the agreement between neighboring stations, I don’t have an explanation for this result. I have to wonder whether their adjustment procedures added more spurious effects than they removed, at least as far as their impact on temperature trends goes.

And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.

Latitude
April 13, 2012 3:03 pm

Dr. Spencer, will these people do bank accounts?
😉

Jimbo
April 13, 2012 3:07 pm

Envisat and now this. Why do the great majority of adjustments mean more warmth, rising sea levels, etc.? If the data does not match the predictions then the data has to be faulty and requires ‘necessary adjustments’. Don’t believe me? Ask Dr. James (Coal Death Trains) Hansen. The past isn’t what it used to be.

April 13, 2012 3:20 pm

Without all the adjustments it would be obvious that temperatures are falling and have been for a while. Global warming? Bollocks!

Scottish Sceptic
April 13, 2012 3:22 pm

Upjusting … the adjustment of data used to increase the rate of rise.

NZ Willy
April 13, 2012 3:25 pm

It’s well-remarked-on that El Ninos seem to be followed by warmer average temperatures. I speculated that these events were being used as an opportunity to tweak upwards. This article’s analysis looks to support that idea. Under the cover of a sudden change, “the team” jumps into action, adjusting upwards, ever upwards. The “taxing the plebs into the dirt” part comes later — er, now, actually.

kadaka (KD Knoebel)
April 13, 2012 3:28 pm

ISH refers to the raw International Surface Hourly data at NCDC; the data and its subsequent processing into a usable dataset are described by Dr. Roy Spencer in a guest post here:
http://wattsupwiththat.com/2012/03/30/spencer-shows-compelling-evidence-of-uhi-in-crutem3-data/

Andrew30
April 13, 2012 3:32 pm

Jimbo says: April 13, 2012 at 3:07 pm
“The past isn’t what it used to be.”
It never was.
Climate Science has a great future behind it.

Interstellar Bill
April 13, 2012 3:39 pm

“And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.”
Suspicious? More like a total ‘Guilty’ verdict!
Neither the satellites nor the ‘unadjusted’ surface record shows ‘Warming!’

Gail Combs
April 13, 2012 3:41 pm

Given that it is almost April 15th, perhaps we should send a copy of this to the IRS. They are the people who really appreciate creative number juggling. A. P. Ershov’s logic would seem appropriate.
“Finding errors in a computer program is like finding mushrooms in the forest: having found one, look for others in the same place.” ~ A. P. Ershov

Victor Venema
April 13, 2012 3:43 pm

Dear Dr. Roy Spencer PhD, the axis in your first plot is called “Temperature Departure from Avg.”, but at least the red and the green line are on average way above zero, which is not possible by definition. Can you explain the reason for this problem?

Len
April 13, 2012 3:57 pm

It is a shame that when the global warming alarmist-modelers find their predictions do not match observed data they choose the wrong approach. Choice 1: models in error wrt observations; continue refining and correcting the models. Choice 2: models in error wrt observations; go back and adjust (falsify) the observations until they are in better agreement with model predictions.
Choice 1 is honest and advances knowledge of the physical processes to be included in models, and thus advances science. This is good for everyone.
Choice 2 is dishonest and freezes improvements in knowledge of physical processes and the models used for prediction. In addition it corrupts historical data and precludes the opportunity for future scientists to compare their theories of physical processes and modeling with real historical data. It is a sin to lie, and it is a sin to add corruption of historical data to the previous lies. Also it is a sin to rob future scientists of a chance to advance our knowledge using historical observations and data. Everyone here loses, and it is good for no one.
Do not think it helps the AGW crooks who seek power and money over truth and honesty. They may gain temporary advantages in positions, power, and money, but the truth will come out; attempts to cover up crimes always fail, and in the long run they will be exposed as crooks and fools. Moreover, they will forever be damned by honest scientists who honor and need historical data to test theories and hypotheses. The crooked AGW tinkers will have their legacy become recognized as more and more evil over time.
And finally, the damage they do to scientists’ credibility will make billions of people suffer forever after.

April 13, 2012 4:01 pm

Thanks, Dr. Spencer.
Another eye-opener!
Yes AGW is man-made; totally made up.
Great article, I think.

Keith W
April 13, 2012 4:05 pm

USHCN is useless either through gross professional incompetence or malfeasance. Zero out the budgets or eliminate the agencies involved as a budget cost-saving measure.

Ally E.
April 13, 2012 4:11 pm

“And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.”
*
Got it in one.

Doug in Seattle
April 13, 2012 4:21 pm

Documenting this is important. Sadly however, getting it into the published record may not happen.

David A. Evans
April 13, 2012 4:24 pm

Victor Venema says:
April 13, 2012 at 3:43 pm

Dear Dr. Roy Spencer PhD, the axis in your first plot is called “Temperature Departure from Avg.”, but at least the red and the green line are on average way above zero, which is not possible by definition. Can you explain the reason for this problem?

Both sceptical and alarmist climate scientists do this without defining what they mean, which is the departure from a previously defined 30-year baseline average.
DaveE.
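
In code, the convention DaveE describes looks roughly like this (synthetic data; the 1961-1990 base period is an assumption for illustration):

```python
# "Departure from average" normally means: subtract each calendar month's
# mean over a fixed base period from every observation of that month.
import numpy as np

years = np.arange(1950, 2013)
rng = np.random.default_rng(3)
# monthly_T[y, m]: fake absolute temperatures, seasonal cycle + small trend
seasonal = 10 + 12 * np.sin(2 * np.pi * (np.arange(12) - 3) / 12)
monthly_T = (seasonal[None, :] + 0.02 * (years - 1950)[:, None]
             + rng.normal(0, 1, (len(years), 12)))

base = (years >= 1961) & (years <= 1990)
baseline = monthly_T[base].mean(axis=0)        # one mean per calendar month
anomaly = monthly_T - baseline[None, :]        # departure from 1961-90 normal
print("1961-90 anomaly mean (should be ~0):", anomaly[base].mean().round(3))
```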

KR
April 13, 2012 4:26 pm

Hold on, I thought Fall et al 2011 (http://www.landsurface.org/publications/J108.pdf) concluded that the basic trends in the USHCN records were reliable?

old construction worker
April 13, 2012 4:32 pm

“When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments”
You could say the missing ocean “heat” was found on shore.

April 13, 2012 4:34 pm

NASA says 7C – 9C UHI in urban areas of the northeast.

John M
April 13, 2012 4:37 pm

USHCN uses 1961-1990 average as the baseline.

MIndBuilder
April 13, 2012 4:41 pm

Is there any explanation for why these results seem to disagree with previous studies, including the BEST study and the study done by Anthony Watts? If I recall, those studies found little difference between the trends for adjusted and unadjusted data, or between good-quality rural and poor urban sites. Are these results consistent or conflicting with the satellite data for this area?

Neville.
April 13, 2012 4:41 pm

If this really is so simple then why can’t you expose this nonsense as soon as possible? Also if the US station adjustments are so prone to error then what about the rest of the planet? Australia is about to introduce a co2 tax of $23 a tonne on July 1st and will forever waste countless billions $ on this fantasy of trying to change the temp and climate.
This must be the greatest con/fraud in history, certainly the most costly. Hundreds of billions $ already spent, totally wasted for a zero return.

Eric Barnes
April 13, 2012 4:42 pm

Thanks Dr.Spencer.
Who thought this …
http://stevengoddard.files.wordpress.com/2010/10/1998changesannotated.gif?w=500&h=355
was based on sound science?

April 13, 2012 4:49 pm

One of my fave USHCN charts:
http://icecap.us/images/uploads/USHCNvsCO2.jpg

Andrew
April 13, 2012 4:57 pm

Roy, would you please note the time period used to define the baseline 30-year average against which these data are compared (see David A Evans above). I have looked but I cannot find it (perhaps I wasn’t looking in the right place?).
This may be quite important in interpreting your findings on pop density (as a proxy for UHI) if most of the changes in pop density at monitoring sites occurred prior to or early on in the monitoring period (1973-2012)…

April 13, 2012 5:11 pm

Victor, you are right. I failed to mention that after computing the CRUTem3 and USHCN anomalies, I offset them vertically so all 3 datasets’ averages matched each other over the 1st 3 years (1973-75).
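
In other words, the series are re-anchored to a common 1973-75 mean. A minimal sketch of that offsetting, with random placeholder series rather than the actual datasets:

```python
# Shift each anomaly series so its mean over the first 36 months (1973-75)
# matches a common reference. Series below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
series = {name: rng.normal(0, 0.3, 480).cumsum() * 0.01 + rng.normal(0, 0.2)
          for name in ("USHCN", "CRUTem3", "ISH_PDAT")}

first36 = slice(0, 36)                        # Jan 1973 .. Dec 1975
ref = series["ISH_PDAT"][first36].mean()      # align everything to one set
aligned = {name: s - s[first36].mean() + ref for name, s in series.items()}
for name, s in aligned.items():
    print(name, "1973-75 mean:", s[first36].mean().round(3))
```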

April 13, 2012 5:13 pm

Andrew, the anomalies are relative to 1973 through 2011… but also see my comment above.

Editor
April 13, 2012 5:21 pm

Well done, Dr. Roy. A couple of comments. First, you say:

But the ISH trend pairs had about 15% better agreement (avg. absolute trend difference of 0.143 C/decade) than did the USHCN trend pairs (avg. absolute trend difference of 0.167 C/decade).
Given the amount of work NOAA has put into the USHCN dataset to increase the agreement between neighboring stations, I don’t have an explanation for this result. I have to wonder whether their adjustment procedures added more spurious effects than they removed, at least as far as their impact on temperature trends goes.

It is not generally realized that good correlation between the data does not mean agreement between the trends. For example, consider the following graphs:

Now, the correlation of all of these is greater than 0.9 … but their trends are all over the map. I discuss this chart further in my post “GISScapades”.
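
The point is easy to reproduce numerically: give two series a large shared component plus different small trends, as in this synthetic sketch:

```python
# Two series sharing most of their variance can correlate above 0.9 and
# still have very different linear trends. Synthetic demonstration.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(480) / 120.0                    # time in decades
common = np.sin(2 * np.pi * t * 10) + rng.normal(0, 0.3, t.size)
a = common + 0.05 * t                         # +0.05 C/decade
b = common + 0.30 * t                         # +0.30 C/decade

r = np.corrcoef(a, b)[0, 1]
trend_a = np.polyfit(t, a, 1)[0]
trend_b = np.polyfit(t, b, 1)[0]
print(f"r = {r:.3f}, trends = {trend_a:.3f} vs {trend_b:.3f} C/decade")
```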

And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.

Indeed. My point of view about “adjusted” data is that if you adjust data, your confidence interval must include the original data, the adjusted data, and in addition it must encompass the original data with each adjustment added in separately. In the case where the adjustments are about equal to the final trend, of course, this means that the trend will likely be within the error bars …
w.
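
A sketch of the envelope idea above, using placeholder adjustment series (the names “TOB” and “changepts” are illustrative, not NOAA’s actual products):

```python
# Willis's proposal: the confidence interval on an adjusted series' trend
# should at least span the trends of the raw data, the fully adjusted data,
# and the raw data with each adjustment applied alone.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(480) / 120.0                    # decades
raw = rng.normal(0, 0.3, t.size)              # placeholder anomaly series
adjustments = {                               # hypothetical named adjustments
    "TOB":       np.linspace(0.0, 0.3, t.size),
    "changepts": np.where(t > 2.0, 0.15, 0.0),
}

def trend(y):
    return np.polyfit(t, y, 1)[0]             # C/decade

candidates = [trend(raw), trend(raw + sum(adjustments.values()))]
candidates += [trend(raw + adj) for adj in adjustments.values()]
print(f"trend envelope: {min(candidates):.3f} to {max(candidates):.3f} C/decade")
```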

April 13, 2012 5:38 pm

What, pray tell, is a legitimate “Station Location Quality” adjustment that is (a) not a UHI effect, and (b) a net positive 0.20 C over 60 years? By this, it means that an average station’s reading today must be RAISED by 0.20 to make it functionally equivalent to the same station 60 years before.
As I understand the history, Anthony Watts started his sleigh ride by investigating a change from white-wash to white latex paint that would require a 1.00 negative adjustment, not a positive one. Encroaching parking lots? That’s another negative adjustment.
Oh, I know! There are fewer incinerators today than 60 years ago. /sarc
Let’s see that list of all positive and negative site location adjustments that are possible. Their number and gross sizes should amount to some staggering statistical error bars.

Sean
April 13, 2012 5:43 pm

I am not a fan of any “adjustments” to experimental data.
As far as I am concerned “adjusting data” is the same as making up your results.
Instead what should be done, if climatology is a proper science and if they cannot better control their experimental process, is to just increase the stated error for the data set, since all of these data quality problems (changing measurement technique, location problems causing UHI, inconsistent time-of-day measurement) are effectively instrumentation and measurement errors and should be stated as such. Anything else is just manufacturing results and hiding the real confidence level in the data set.

Nick Stokes
April 13, 2012 5:48 pm

I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.

Sean
April 13, 2012 5:53 pm

In other words, instead of admitting up front that they really do not have useful data on which to draw the kind of conclusions that they are making, due to the poor and inconsistent experimental method used to gather this data, climatology is just making things up and lying. My only conclusion is that field of climatology is currently not a science any more than alchemy is a science.

Editor
April 13, 2012 5:57 pm

Nick Stokes says:
April 13, 2012 at 5:48 pm

I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.

Nick, how did you average data to avoid overweighting the east coast where there are lots of stations?
Thanks,
w.

Geoff Sherrington
April 13, 2012 6:14 pm

Nick Stokes says: April 13, 2012 at 5:48 pm I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.
Nick, many of us have done similar calculations, but the open question is still: did the Met Station Country Authority adjust the data before sending it to GHCN?
It is actually quite hard to find useful data sets from this country that can confidently be authenticated as “RAW”. If you have a cache, do let us know. Also, can you tell us if this RAW data is the same as received by GHCN?

Ian W
April 13, 2012 6:29 pm

Nick Stokes says:
April 13, 2012 at 5:48 pm
I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.

You found a trend in a compound metric.
Now do the same but use humidity to calculate the atmospheric enthalpy, and then from that the average kilojoules per kilogram of atmosphere. You may be surprised at what you find, as (contrary to the AGW hypothesis) global humidity has been dropping. You will also then be using the correct metric for measuring heat content in a gas. Atmospheric temperature alone is meaningless; an average global temperature is like an average telephone number.

Andrew
April 13, 2012 6:29 pm

RE
Willis Eschenbach says:
@ April 13, 2012 at 5:21 pm
“My point of view about “adjusted” data is that if you adjust data, your confidence interval must include the original data, the adjusted data, and in addition it must encompass the original data with each adjustment added in separately”.
—————–
Agreed, but of course, you’re preaching to the converted. The official adjusters claim that the unadjusted data are an inherently biased version of reality, and all they’re seeking to do is to remove those biases to provide an uncorrupted version of reality (a Utopian version of reality, some might say).
So then, no need for them to calculate error bars using unadjusted (nasty, biased) data… Isn’t that really what they’re saying? In other words, it is they who decide which reality we get to call reality. It’s purely Orwellian.
On a related question, can I ask you or Roy to briefly address the idea that linear interpolation of data in the land surface temperature record over time provides an efficient mechanism to propagate and transmit localised spatial-temporal biases (e.g. from UHIs) systematically throughout the temperature record, under the cloak of “homogenization”.
And, whatever the expressed justification for it might be, linear interpolation over time will link most if not all data through space and time. The statistical prerequisite of independence cannot be satisfied, invalidating attempts that seek to measure or compare temperature trends through space and time using these data.
The data are not fit for the purposes to which they are directed.
Where have I gone astray in my thinking? Where are the holes in the argument?
Thanks.
Thanks.

Brian H
April 13, 2012 6:44 pm

Display note: Pleazze, do not, evah, plot lines or dots or labels in yellow. Really. It’s display screen invisibility ink.

Tim Clark
April 13, 2012 6:46 pm

[Nick Stokes says:
April 13, 2012 at 5:48 pm
I did a TempLS run using monthly GHCN unadjusted data, ConUS. This data is as it says, unadjusted – as reported by the met stations. I got a trend 1973-2011 (actually Jan 2012) of 0.161 C/decade. A bit less than CRUTem 3, but not nothing.]
Do you consider an increased temperature of 1.61C/century by 2073 as CAGW?

April 13, 2012 7:09 pm

Dr. Spencer, thanks for, once again, casting light on this subject. It needs to be hammered on, over and over again.
I do agree with Brian H: please avoid yellow on graphs if at all possible.

RoHa
April 13, 2012 7:12 pm

In case you have forgotten, I’d like to remind you that we’re doomed.

Andrew
April 13, 2012 7:37 pm

RE
Andrew says:
@ April 13, 2012 at 6:29 pm
…or indeed anyone who can enlighten me…
PS. I wonder if the land surface temperature records are not more appropriately addressed using statistics better suited to the analysis of neural networks, or other techniques that can accommodate sampling dependencies/data linkages…?

jorgekafkazar
April 13, 2012 7:44 pm

Gail Combs says: “Finding errors in a computer program is like finding mushrooms in the forest: having found one, look for others in the same place.” ~ A. P. Ershov
In my country we say it shorter: “Where bug is, bugs are.”

Andrew
April 13, 2012 7:53 pm

Although I hadn’t intended the pun, come to think of it, this might have inadvertently hit the mark: the work of fiction known as the land surface temperature record is perhaps better suited to statistical techniques capable of probing the workings of the human brain…

April 13, 2012 9:59 pm

I’m sorry, I know this will seem trollish, but every time I see the good ol’ average = [(Tmax+Tmin)/2], I can’t help but think whoever thought that up wouldn’t have done very well on the TV show “Are You Smarter Than a 5th Grader”.

edbarbar
April 13, 2012 10:12 pm

How do the satellite temps compare? Are these adjusted too? And what about BEST?

Nick Stokes
April 13, 2012 10:20 pm

Willis Eschenbach says: April 13, 2012 at 5:57 pm
“Nick, how did you average data to avoid overweighting the east coast where there are lots of stations?”

I used inverse density weighting, measured by 5×5° cells. That does a fairly good job of balancing. But prompted by your query, I ran a triangular mesh weighting, which places each station in a unique cell, and weights by the area. Running a monthly model, as with the first example, the trend came down to 0.142°C/decade. But with an annual model, it went back to 0.161.
I’ll write a blog post with more details.
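
For the simpler of the two schemes, here is a sketch of cell-averaging with 5×5° cells and cosine-latitude area weights, on synthetic stations (a generic gridding recipe, not the TempLS code):

```python
# Average stations within each 5x5 degree cell, then average the cells with
# cos(latitude) area weights. This is the usual guard against overweighting
# station-dense regions. Station data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
lats = rng.uniform(25, 50, n)
lons = rng.uniform(-125, -65, n)
vals = rng.normal(0.16, 0.1, n)               # e.g. station trends, C/decade

cell_ids = (np.floor(lats / 5).astype(int), np.floor(lons / 5).astype(int))
cells = {}
for la, lo, v in zip(*cell_ids, vals):
    cells.setdefault((la, lo), []).append(v)

num = den = 0.0
for (la, _), v in cells.items():
    w = np.cos(np.radians((la + 0.5) * 5))    # area weight at cell center
    num += w * np.mean(v)
    den += w
print(f"naive mean {vals.mean():.3f}, gridded mean {num / den:.3f} C/decade")
```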

SirCharge
April 13, 2012 10:26 pm

I had assumed that USHCN’s homogenization technique was mostly to increase rural temperatures until they match urban. Then they just generally decrease the overall temperature by a flat 0.004 C per decade (the IPCC’s officially sanctioned estimate of UHI).

hillbilly33
April 13, 2012 10:33 pm

Apologies for being O/T [snip . . please repost on Tips & Notes . . kbmod]

Nick Stokes
April 13, 2012 10:42 pm

Geoff Sherrington says: April 13, 2012 at 6:14 pm
“Nick, many of us have done similar calculations, but the open question is still: did the Met Station Country Authority adjust the data before sending it to GHCN?”

Unlikely. Most adjustment is done years later, following some perceived discrepancy. But the GHCN data is as recorded within a month or so.
You can see this with data from our town. Within minutes, readings are on line. Within hours, they are on the monthly record. And at the end of the month they go on the CLIMAT form, from which they are transcribed directly into GHCN unadjusted. You can check. They don’t change.
Even before the internet, the GHCN data was distributed to thousands of people by CD. You can’t adjust that.

Nick Stokes
April 13, 2012 10:49 pm

Tim Clark says: April 13, 2012 at 6:46 pm
“Do you consider an increased temperature of 1.61C/century by 2073 as CAGW?”

Well, it’s not sustainable indefinitely. We have to figure out where we’re going with it.

don penman
April 13, 2012 11:13 pm

Constant adjustment of data does not inspire confidence in the accuracy of the data collected. I would expect very early data to be less reliable than present data, but it is the present data that is being adjusted, and it never seems that we get it right and no more adjustments are needed. All these adjustments to try to make average temperatures more accurate make the local temperature changes unclear. The UHI effect is part of these local changes, and if the local temperature is measured consistently we should see the way local temperatures are trending. The local weather stations were not meant to measure global or regional temperature, only local temperature.

Andrew
April 13, 2012 11:34 pm

RE
Nick Stokes says:
@ April 13, 2012 at 10:49 pm
Tim Clark says: April 13, 2012 at 6:46 pm
“Do you consider an increased temperature of 1.61C/century by 2073 as CAGW?”
Well, it’s not sustainable indefinitely. We have to figure out where we’re going with it.
——————-
Nick: presumably, a rate of 0.0000000000161C per century isn’t sustainable indefinitely… but do I take it this figure represents the “consensus” estimate of global climate sensitivity? And can you elaborate on your final sentence. Not sure I get the drift…

TheInqjirer
April 14, 2012 12:13 am

One of the claims against AGW scientists is that they have apparently started with a presupposition (AGW) and massaged the data to fit.
We all know Dr Spencer’s position on AGW – he’s written books setting it out. How is it he escapes a similar accusation from “sceptics”?
If Dr Spencer is claiming that climate scientists have “fudged the data” he should be having a field day in the journals deconstructing the hypothesis. Why is he not doing so? Are “sceptics” going to run with the conspiracy theory racket they seem to be reliant upon and claim he is being thwarted by vested interests?
I’m sure my own skepticism will be unpopular here, but I suspect Spencer, profiting from contrarian books as he does, also appears to have a vested interest in maintaining his stance, despite the science.
And yet the satellite dataset Spencer maintains clearly shows the trend he seems to want to deny in the surface record, despite the polynomial fit he applies “for entertainment purposes” but which was faithfully reproduced as scientifically based by a contrarian journalist recently.
I think that if there are true sceptics here, perhaps Spencer deserves some of their attention and scrutiny.
[this sounds very like “trolling” so if you don’t wish to be considered a troll perhaps you could include some proof of your suspicions . . kbmod]

April 14, 2012 12:36 am

I can foresee entries in reference books from around 2200 AD: “Climate Science: a branch of psychology from the late 20th/early 21st century, which sought to use statistical manipulation of unrelated, incomplete and low quality data sets to induce mass hysteria in the consumers of the results, and neurosis in the practitioners, ultimately affecting government policies and raising vast amounts of unwarranted and unjustified tax revenues. See also Atlantis, Hollow Earth, Flying Saucers, ChemTrails, TinFoil Hats, Doomsday Cults, Fraud.”

Geoff Sherrington
April 14, 2012 12:37 am

Roy, you note that “I took all station pairs within 200 km of each other in each of these datasets, and computed the average absolute difference in temperature trends for the 1973-2012 period across all pairs. The average station separations in the two datasets were nearly identical: 133.2 km for the ISH dataset (643 pairs), and 132.4 km for the USHCN dataset (12,453 pairs).”
A significant reason for the lack of agreement is that such UHI as took place did not occur at the same time at each observing station. Another significant reason, one that shows in Australia, is that the noise in the data is very large compared with the signal being sought. I’ll start to believe UHI estimates when the baseline, unaffected record for a station can be measured for a couple of decades, so that any UHI change can be measured relative to the baseline. I can’t find a baseline here. All I get is noise which does not correlate with any factor I have available.

April 14, 2012 12:46 am

Nick Stokes says: April 13, 2012 at 10:42 pm Re raw data and GHCN
Nick, when the BOM send data to NOAA or Met Office or whomever, do they currently send Tmax and Tmin as read from Min-max thermometers read once a day, or as calculated from many readings per day? Before the BOM had many-per-day capability, they used to send Tmin and Tmax from thermometers. Has there been a change and if so, have you ever seen details of a splice as the instrumentation changed?

Richard S Courtney
April 14, 2012 12:59 am

Doug in Seattle:
At April 13, 2012 at 4:21 pm you say;
“Documenting this is important. Sadly however, getting it into the published record may not happen.”
I wish to correct that to ‘getting it into the published record IS NOT POSSIBLE’.
This matter was the subject of the ‘Climategate’ email from me and, therefore, the subject of my submission to the Parliamentary investigation which whitewashed ‘The Team’.
I have repeatedly reported on WUWT that it proved impossible to publish a paper which provided an intercomparison of data sets for mean global temperature (MGT). I was its lead author.
The intercomparison showed that the differences in trends between the MGT data sets mean the data are of unknowable accuracy and, therefore, worthless as indicators of global and hemispheric temperature changes. It considered reasons for this according to two different understandings of the nature of MGT, and it recommended changes which would enable the data sets to be useful indicators.
But the data in the data sets kept changing between submission and publication of the paper. This required rejection of the paper so its analysed data could be corrected. Oh, and Nature rejected it because the Editor said, “Nature does not publish comparisons of data”.
Eventually, we gave up attempting to get it published (with repeated rejection because of the need to update its data and, therefore, its analysis). I then attempted to get ‘The Team’ to agree on a proper procedure which would stop the continual adjustments of the MGT data sets.
So, the publication of Roy Spencer’s work on WUWT is important. In science only the evidence matters, not who published it or where. His work is published above and can be referenced here. But it cannot be published in ‘conventional’ journals because the data will change between its submission and agreement to publish.
Richard

Nick Stokes
April 14, 2012 1:12 am

Andrew says: April 13, 2012 at 11:34 pm
“but do I take it this figure represents the “consensus” estimate of global climate sensitivity?”

None of these figures relate to anything global at all. They are about ConUS. And I do not propose my figure as the correct one – it is the unadjusted result. In fact, the USHCN adjustments are well justified, especially TOB, which is a big one. The times of min/max obs are recorded; they have drifted from evening to morning, and that has a clear and calculable trend biasing effect. They would be clearly wrong not to adjust for it.
We have to figure out where we’re going with adding carbon to the atmosphere because if we keep doing it, it’s going to get hot. We’ve burnt about a tenth of readily available C. And it will take a long time to get an agreement to get it under control, so we’d better start.
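
The TOB effect described above can be demonstrated with a toy simulation: reset a min/max thermometer at different hours of the day and compare the resulting (Tmax+Tmin)/2 means. The diurnal-cycle numbers below are invented for illustration, not a model of any real station:

```python
# Toy time-of-observation bias demo. Resetting a min/max thermometer near
# the afternoon peak lets one hot afternoon be counted in two successive
# "days" (warm bias); a morning reset double-counts cold mornings (cool bias).
import numpy as np

rng = np.random.default_rng(8)
days, hpd = 3650, 24
# hourly temps: day-to-day weather + diurnal cycle peaking near 15:00
daily_mean = 15 + rng.normal(0, 4, days).repeat(hpd)
hour = np.tile(np.arange(hpd), days)
hourly = daily_mean + 8 * np.sin(2 * np.pi * (hour - 9) / 24)

def tmean_at_reset(hourly, reset_hour):
    """Mean (Tmax+Tmin)/2 when the thermometer is reset daily at reset_hour."""
    x = hourly[reset_hour:reset_hour + (days - 1) * hpd]  # whole 24 h windows
    w = x.reshape(-1, hpd)                                # one row per "day"
    return ((w.max(axis=1) + w.min(axis=1)) / 2).mean()

midnight = tmean_at_reset(hourly, 0)
for h in (7, 17):
    print(f"reset at {h:02d}:00 -> bias {tmean_at_reset(hourly, h) - midnight:+.2f} C")
```

A drift of observation times from evening to morning therefore injects a spurious cooling step into a raw record, which is the effect the TOB adjustment is meant to remove.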

Kasuha
April 14, 2012 1:49 am

I think we should not welcome something just because it provides pleasant results. Dr. Spencer’s way of doing population density adjustments is suspicious at best and in my opinion wrong. An urban heat island does not introduce trend changes; it introduces a temperature shift which changes with many factors, generally summed up as station siting quality, which may (and does) change over time. In Dr. Spencer’s work, station siting is approximated by population density, and no changes to it over time are considered. If that were true, then assuming there was no warming, badly sited stations would not produce a different data trend; they would just produce higher average temperatures with the same (zero) trend. Yet the adjustment assumes there are no changes to station siting while still being done by manipulating station trends. Dr. Spencer also didn’t care to explain his method in sufficient detail, and based on the result I assume he successfully decreased trends even for the lowest population stations – no wonder he got no warming as a result.
USHCN adjustments are suspicious, but not because they differ in a strange way from Dr. Spencer’s data.

Almah Geddon
April 14, 2012 1:49 am

One question I have is what is the meaning of [(Tmax+Tmin)/2] in relation to the temperature at a site? What is the ‘average temperature’ on any given day? And for long term temperature analysis, what exactly is it telling you? If the average temperature as defined by [(Tmax+Tmin)/2] goes up, it could be either: both min and max going up; min alone; max alone; or min going down at a lesser rate than max is going up. Surely only Tmax and Tmin have a meaning, not the average of the two.
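
The ambiguity is easy to show numerically: two synthetic stations with opposite day/night behaviour produce the identical mean trend:

```python
# Very different Tmax/Tmin behaviour can produce the identical (Tmax+Tmin)/2
# trend, so the mean alone hides which of the two changed. Synthetic demo.
import numpy as np

t = np.arange(480) / 120.0                     # decades
# Station A: all of the warming in the nights (Tmin)
tmin_a, tmax_a = 5 + 0.4 * t, 20 + 0.0 * t
# Station B: all of the warming in the days (Tmax)
tmin_b, tmax_b = 5 + 0.0 * t, 20 + 0.4 * t

for name, tmin, tmax in (("A", tmin_a, tmax_a), ("B", tmin_b, tmax_b)):
    mean_trend = np.polyfit(t, (tmax + tmin) / 2, 1)[0]
    print(f"station {name}: mean-T trend {mean_trend:.2f} C/decade "
          f"(Tmin {np.polyfit(t, tmin, 1)[0]:.2f}, "
          f"Tmax {np.polyfit(t, tmax, 1)[0]:.2f})")
```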

Richard S Courtney
April 14, 2012 2:06 am

Nick Stokes:
At April 14, 2012 at 1:12 am you say and assert;
“We have to figure out where we’re going with adding carbon to the atmosphere because if we keep doing it, it’s going to get hot.”
Really? You know that? How?
Please prove it because atmospheric CO2 concentration has been increasing since at least 1958 and to date there is no evidence of any kind that this has had any effect on making the world “hot”.
And please note that this is important to the present discussion because – as Dr Spencer’s above article demonstrates – any changes to the world getting “hot” are so small that they are difficult to discern at ground level. Also, the satellite-derived data show the Earth has not been getting “hot” to a significant degree in recent decades.
Richard

Almah Geddon
April 14, 2012 2:10 am

I have given my above statement some more thought. By definition Tmax is usually recorded on day n, Tmin is usually recorded on day n+1. This is because the meteorological day is defined as 9am to 9am. Is there any correlation between Tmax and Tmin apart from a seasonal one?

The old Seadog.
April 14, 2012 2:12 am

Well, they have to adjust everything now; because they have found Antarctic Urban Heat Centres are twice what they thought they were….
http://www.businessweek.com/news/2012-04-13/penguin-count-doubles-as-satellite-spies-on-birds-poop-stains

Nick Stokes
April 14, 2012 2:33 am

Richard S Courtney says: April 14, 2012 at 2:06 am
“Really? You know that? How?”

Greenhouse effect. Putting CO2 in the air impedes outgoing IR. Heat accumulates, temperature rises. Flux balance then restored (until more CO2 accumulates).

Peter Miller
April 14, 2012 2:48 am

As a typical geologist practicing in the private sector, I am deeply sceptical of all things CAGW.
When trying to figure out geological structures, mineralisation trends etc., I obviously construct models in an attempt to explain what is happening. However, the golden rule is: IF THE DATA DOESN’T FIT THE MODEL, THEN THE MODEL IS WRONG, NOT THE DATA.
In ‘climate science’ it is the exact opposite: IF THE DATA DOESN’T FIT THE MODEL, THEN THE DATA IS WRONG, NOT THE MODEL. So, the data has to be adjusted/manipulated/tortured to fit the model. Roy Spencer’s analysis of USHCN data here is yet another classic example of this.
If geologists followed the so called logic of ‘climate scientists’, we would very rapidly run out of oil, gas and most metals.
If the world follows the logic of ‘climate scientists’, which requires pouring trillions of dollars into a futile attempt to solve a non-problem, it will be an economic disaster.

KnR
April 14, 2012 2:56 am

Adjusting the data is not automatically bad; what is all-important is that the justification for doing so is made clear and is valid. Too often it simply is not, and then, to compound the problem, the raw data ‘goes missing’, so you can’t check what has been done and whether it makes sense.
However, this is merely in line with the first rule of climate science: if the model and reality differ in value, it is reality which is in error. If you have faith in ‘the cause’, all of these types of problems disappear.

TheInquirer
April 14, 2012 3:00 am

[snip . . please post with content, thank you . . kbmod]

Phil Clarke
April 14, 2012 3:04 am

The graph is taken from Warren Meyer’s site.
http://www.coyoteblog.com/coyote_blog/2007/07/an-interesting-.html
Typically for Meyer, it is wrong. The graph is in degrees F, the captions are in C, leading him, and now you, to overstate the trends.
More diligence, more genuine scepticism required.

Urederra
April 14, 2012 3:05 am

Nick Stokes says:
We have to figure out where we’re going with adding carbon to the atmosphere because if we keep doing it, it’s going to get hot. We’ve burnt about a tenth of readily available C. And it will take a long time to get an agreement to get it under control, so we’d better start.

No sir, Goodridge already proved that CO2 has no effect on temperatures by demonstrating that rural stations do not show any warming over the last 100 years despite the global CO2 increase. http://wattsupwiththat.com/2012/03/30/spencer-shows-compelling-evidence-of-uhi-in-crutem3-data/
It has also been known since the 19th century that plants need CO2 to grow, and the more CO2 in the atmosphere, the more plants grow (classic chemical kinetics). So the most sensible thing to do is to pump CO2 into the atmosphere.
However, if you want to live in a cave with no light and no running water, be my friend and go. Don’t expect me to join, thanks. But please, don’t try to impose a global dictatorship over this.

Lars P.
April 14, 2012 3:25 am

Hm, they could simply paint the temperature charts directly in PowerPoint or the tool of their choice. This would reduce costs (why maintain all those stations?) and we would get the same results.
It is post-modern science, continuously changing historical records. Year after year history gets “improved”: the 30s-40s get colder, now almost aligned with the 60s-70s, and the record year 1998 slowly surpasses 1934 while also getting a little colder. At the time of the record, no 200x year managed to beat 1998, but now they are all slowly growing, or rather 1998 is slowly sinking.
It is not that new data adds at the end of the graph; the whole graph moves like a living snake to fit whatever the adjusters adjust.
http://suyts.wordpress.com/2012/04/11/this-isnt-about-the-climate/

April 14, 2012 3:59 am

edbarbar says: April 13, 2012 at 10:12 pm
How do the satellite temps compare? Are these adjusted too? And what about BEST?
_____________
In 2008 I found Hadcrut3 ST was warming at ~0.07C/decade faster than UAH LT. Different altitudes, but much less warming was observed from satellites.
See Fig. 1 at
http://icecap.us/images/uploads/CO2vsTMacRae.pdf
Satellite temperatures are often adjusted a few months after readings – my observation from 2008-2009 is that the UAH adjustments are typically very small.
No personal opinion on BEST, except that the name may be a misnomer.
http://wattsupwiththat.com/2011/11/03/a-considered-critique-of-berkley-temperature-series/

Jim
April 14, 2012 4:26 am

Kasuha says:
April 14, 2012 at 1:49 am
*****
It makes sense that urbanization adds some amount of temperature, an offset, at the current time. This is a static view of urbanization. But as the population grew, it would actually add to the trend. This is easier to imagine if you mentally start with two identical places, then imagine one begins to experience population growth. The temperature trend will be greater in the place with population growth.

April 14, 2012 4:30 am

I should add that I believe there is ample evidence that there is a strong warming bias in most surface temperature (ST) datasets.
Perhaps someone else would be kind enough to provide all the references. From memory:
Anthony Watts surface stations work
Michaels and McKitrick paper
Roy Spencer paper(s)
etc.
_____________________
http://wattsupwiththat.com/2009/02/05/fear-and-loathing-for-california/#comment-82352
Please look at the first graph (to mid-2008) at:
http://www.iberica2000.org/Es/Articulo.asp?Id=3774
This graph suggests there has been no net global warming since 1940, despite an ~800% increase in humanmade CO2 emissions.
I used Hadcrut3 ST from 1940 (despite its warming bias), and UAH LT thereafter.
This is the result when one plots the FULL PDO cycle, instead of attempting to extrapolate the WARMING HALF-CYCLE, as many warmists do.
_____________________
In 2003 I wrote that global cooling would soon recur. Bundle up!

April 14, 2012 4:43 am

Nick Stokes says: April 14, 2012 at 2:33 am
Richard S Courtney says: April 14, 2012 at 2:06 am
“Really? You know that? How?”
Greenhouse effect. Putting CO2 in the air impedes outgoing IR. Heat accumulates, temperature rises. Flux balance then restored (until more CO2 accumulates).

Past evidence wouldn’t precisely support your claim. In early 18th-century England (before industrialisation) the increase in CO2 was negligible, and yet if we compare two 50-year temperature records for now and then we get
http://www.vukcevic.talktalk.net/CET1690-1960.htm
a great deal of similarity, except that the CET then rose faster than now.
You may dismiss this as a local event, but it can be seen that the current CET correlates well with both the Northern Hemisphere’s and the global temperatures.
A local natural event then, CO2 now?
Not so. The same natural event appears to be responsible for the rise then and now, at least as far as the CET is concerned. And it is all to do with the North Atlantic-Arctic warm/cold currents balance
http://www.vukcevic.talktalk.net/CGNh.htm
And what is the natural force changing that balance?
Not easy to prove, but it looks like something to do with solar activity, though not to the degree that the classic sunspot count would suggest
http://www.vukcevic.talktalk.net/SSN-NAP.htm
Data? Yes, all available.
Published? No, there is no interest, but the ‘climate’ is changing, as ever.

Robertvdl
April 14, 2012 4:46 am

I saw this graph on a Dutch website
http://www.klimaatfraude.info/images/sverdrup.gif
from
http://www.klimaatfraude.info/oceaanopwarming-of-zeespiegelstijging-door-co2-is-niet-mogelijk_193094.html
“The solar radiation penetrates the ocean to 100 metres at visible wavelengths but to much shallower depth as wavelength increases. Back radiation in the far infra-red from the Greenhouse Effect occurs at wavelengths centred around 10 micrometres and CANNOT penetrate the ocean beyond the surface ‘skin’.”
If the oceans warm the Earth, but CO2 cannot warm the oceans, and we know that in recent years there has been no warming in the oceans
http://www.klimaatfraude.info/flitspost/images/2011-05-30_021050.jpg
then warming must be mainly an artifact of adjustments.

Nick Stokes
April 14, 2012 5:03 am

Urederra says: April 14, 2012 at 3:05 am
“No sir, Goodridge already proved that CO2 has no effect on temperatures by demonstrating that rural stations do not show any warming over the last 100 years despite the global CO2 increase.”

He didn’t prove that at all. He showed in 1996 that some rural counties in California showed little increase. He gave no statistics of the effect of this small sample.
In fact globally there is very little difference between the trends of rural and urban stations.

Jean Parisot
April 14, 2012 5:07 am

Why adjust at all? The data set is large enough, and encompasses an enormous number of natural variables – should not these measurement variables just be averaged in with the rest? Either drop down to a micro-climate level and record the data properly OR smash it all together and establish an error-bounded “average”.
Once you start adjusting, where do you stop?

Gail Combs
April 14, 2012 5:28 am

Neville. says:
April 13, 2012 at 4:41 pm
If this really is so simple then why can’t you expose this nonsense as soon as possible? Also if the US station adjustments are so prone to error then what about the rest of the planet?…..
________________________________
The same types of problems are being seen elsewhere. This is highlighting just a few.
New Zealand
New Zealand’s NIWA sued over climate data adjustments: http://wattsupwiththat.com/2010/08/16/new-zealands-niwa-sued-over-climate-data-adjustments/
The Goat ate my homework: NIWA’s confession that it lost the Schedule of Adjustments (SOA) for the official New Zealand temperature record is the latest event in a long-running scandal…
AUSTRALIA
http://notalotofpeopleknowthat.wordpress.com/2012/03/15/an-adjustment-like-alice/
Australian temperature records shoddy, inaccurate, unreliable. Surprise!
RUSSIA
IEA: Hadley Center “probably tampered with Russian climate data”
On Dec 15, 2009, it was reported that the Moscow-based Institute of Economic Analysis (IEA) issued a report “claiming that the Hadley Center for Climate Change based at the headquarters of the British Meteorological Office in Exeter (Devon, England) had probably tampered with Russian-climate data.”
CHINA

Wei-Chyung Wang fabricated some scientific claims
[Important Urban Heat Island Effect Study] …a crucial paper, first published in 1991, in which temperature data from 84 meteorological stations in China was compiled by Professor Wei-Chyung Wang, professor at Albany, State University of New York, with particular reference to urban heat island (UHI) effects. Changes in urbanization around temperature measurement stations can increase temperatures by several degrees. It has long been the contention of the global warming sceptics that much of the alleged global warming of the late 20th century is nothing more than increasing urbanization around temperature measurement stations. The anthropogenists, of course, deny this strenuously, claiming that all of the temperature data has been corrected for UHI consequences. The sceptics remained unsatisfied because in particular there seemed to be little correspondence between the temperature record at known stations and the global data published by the Hadley Centre…

There is also the question of how good the data is during the time of Red China’s “purges”, when up to 80 million were killed. (While reading that link, it pays to remind ourselves that this is the same country our leaders are handing world economic leadership to, using CAGW and the World Trade Agreement.)

Luther Wu
April 14, 2012 5:37 am

Nick Stokes says:
April 14, 2012 at 1:12 am
Andrew says: April 13, 2012 at 11:34 pm
“but do I take it this figure represents the “consensus” estimate of global climate sensitivity?”
None of these figures relate to anything global at all. They are about ConUS.
_________________________
Exactly.
You don’t realize it, but you just gave your whole game away.
Almost all (and quite possibly all) of the “global warming” touted in datasets comes from adjusted ConUS datasets.
The cooler the Southern Hemisphere and other locales, the more adjustments to the ConUS records.

Nick Stokes
April 14, 2012 5:37 am

Geoff Sherrington says: April 14, 2012 at 12:46 am
“Nick, when the BOM send data to NOAA or Met Office or whomever, do they currently send Tmax and Tmin as read from Min-max thermometers read once a day, or as calculated from many readings per day?”

Geoff, here’s a typical BOM monthly summary. You’ll see that they list the max and min for each day. This is based on their daily summary, in which they list the max and min and the exact time observed. They give an average max and min in their monthly table, and I believe those are the numbers that go into the CLIMAT form.
As to splicing, that is part of the larger issue of matching MMTB readings to the earlier thermometer readings.

Tom in Florida
April 14, 2012 5:49 am

Nick Stokes says:
April 14, 2012 at 2:33 am
“Greenhouse effect. Putting CO2 in the air impedes outgoing IR. Heat accumulates, temperature rises. Flux balance then restored (until more CO2 accumulates).”
I thought it was the positive feedback of a warmer atmosphere holding more water vapor that was the major factor in global warming. Are you now not including this feedback or did you just forget to mention it?

Gail Combs
April 14, 2012 5:57 am

Sean says:
April 13, 2012 at 5:53 pm
In other words, instead of admitting up front that they really do not have useful data on which to draw the kind of conclusions that they are making, due to the poor and inconsistent experimental method used to gather this data, climatology is just making things up and lying. My only conclusion is that field of climatology is currently not a science any more than alchemy is a science.
_____________________________________
BINGO!
Especially when much of the early data was to the nearest whole degree, or as one commenter noted, intentionally rounded up to give pilots an added safety margin. How the heck anyone can get a trend of “0.161 C/decade” using a sample size of n=1 from that type of data is beyond me.
When we did statistics we separated out the different cavities on each mold for each molding machine as different sets of data that should not be mixed. These alchemists lump data from different days and different locations together using their magic wand called “anomalies” and somehow create accuracy and precision where there was none before.

Nick Stokes
April 14, 2012 5:57 am

Tom in Florida says: April 14, 2012 at 5:49 am
“I thought it was the positive feedback of a warmer atmosphere holding more water vapor that was the major factor in global warming. Are you now not including this feedback or did you just forget to mention it?”

Yes, water vapor feedback amplifies the temperature increase due to any forcing.

April 14, 2012 5:59 am

I have carried out the same analysis for individual states (see the series at Bit Tooth Energy, which gives the result by state, listed on the RHS of the blog, comparing time-of-observation-corrected data against final adjusted values and GISS temperatures for each state). Given that the population changes with time, one has to be careful as to which intervals one compares with current population. Given also that the larger population centers tend to lie at the lower elevations, and that temperature is sensitive both to elevation and latitude, it is really a 3-factor dependence. But in many states the correlation runs at an r^2 of around 0.14 when a log function is used for population (which is the correlation that has most often been cited in the past) and a five-year average temperature is run against current populations.
There is an interesting temperature drop of around 4 degF that happened in about 1950 in the North East states, from which they have only just recovered, and it had an interesting effect on the habits of the Black-capped Chickadee.

Gail Combs
April 14, 2012 6:07 am

RoHa says:
April 13, 2012 at 7:12 pm
In case you have forgotten, I’d like to remind you that we’re doomed.
____________________________________
You are correct but it is doom via greedy politicians and the Regulating Class and not doom via climate.

Gail Combs
April 14, 2012 6:16 am

TheInqjirer says:
April 14, 2012 at 12:13 am
…..If Dr Spencer is claiming that climate scientists have “fudged the data” he should be having a field day in the journals deconstructing the hypothesis…..
_________________________
Dr Spencer tried. The journal’s Editor-in-Chief resigned because he published Dr Spencer’s paper: http://www.drroyspencer.com/2011/09/editor-in-chief-of-remote-sensing-resigns-from-fallout-over-our-paper/

beng
April 14, 2012 6:16 am

Without reading the replies yet, I predict this will anger the warmers a lot.
0.013C/decade just doesn’t cut it for proper alarmism.

Evan Jones
Editor
April 14, 2012 6:22 am

Nick Stokes: Yes, ambient water vapor is a positive feedback, but if the vapor instead assumes the form of low cloud cover, then it’s a negative feedback.
Gail: Actually, you can get more precise readings from crude data, by “oversampling”. If you take hundreds of readings that are accurate to within a degree C and average them, you wind up with an accuracy finer than a degree C.
The problem occurs, however, when NOAA claims the readings are accurate to within a degree C, and it transpires that they are not. #B^j
“0.013C/decade just doesn’t cut it for proper alarmism.”
McIntyre’s USHCN1 raw data figures average out to 0.14C per century, while the adjusted come out to 0.59. He worked this out around 2007. (USHCN2-adjusted is a good chunk higher.)
When you grid the raw data to 5-degree boxes, however, it comes out to 0.25C. That’s still less than half the adjusted rate, but it is a bit warmer than the ungridded average.
(I’ve been up to my eyeballs in these stats for the last four years.)
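
The oversampling claim can be checked directly: quantize synthetic readings to whole degrees and watch the error of the mean shrink with sample size, provided the underlying values are spread across the 1 C step so the rounding errors can average out:

```python
# Readings quantized to whole degrees can still yield a mean good to much
# better than a degree, because independent rounding errors average out.
import numpy as np

rng = np.random.default_rng(9)
for n in (10, 100, 1000, 10000):
    errs = []
    for _ in range(500):                      # repeat the experiment
        true = rng.normal(15.37, 3.0, n)      # spread >> 1 C rounding step
        recorded = np.round(true)             # thermometer read to 1 C
        errs.append(recorded.mean() - true.mean())
    print(f"n={n:5d}: std of mean error = {np.std(errs):.4f} C")
```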

rgbatduke
April 14, 2012 6:47 am

OK, that screwed up again. Rats. For some reason the WordPress interface is actually getting worse than it was three or four months ago.
I will quickly summarize my partial post in case it went through instead of away. I’m having trouble with the final two figures and the conclusion that average neighboring station distances are the same for the two datasets. The reason is that the continental US has an area of 8×10^6 km^2. This works out to 2.8×10^4 km^2 per station in the first case, and 6.7×10^3 km^2 per station in the second. Taking the square root and multiplying by 2 in both cases leads to a crude estimate of the (root mean) average distance between stations of 330 km and 160 km, respectively. This makes sense – there are roughly four times as many stations in the second set, so the distance between stations is half as great.
What this means is that a lot more stations in the second set have neighbors within 200 km, simply because the average distance between stations is less than this cutoff and most stations indeed have multiple neighbors within the cutoff. This is reflected in the numbers – on average every station in the first set has four neighbors within the cutoff (two pairs), while every station in the second set has twenty neighbors inside the cutoff (ten pairs).
And here’s the rub. This means that one packs 20 stations into a circle of radius 200 km and ends up with the same mean distance between them that you end up with packing 4 into the same circle. This is at least a bit odd. I do realize that the stations are hardly randomly distributed (far from it) and that in all likelihood they were selected with a minimum distance criterion, so that they are spatially antibunched on the short end of the length scale, while on the other hand humans live in a highly bunched environment along artery roads and in or near urban centers, so that they are bunched on the long end of the length scale. But one would still expect something like a √5 shorter mean distance between the stations in the denser set instead of identical means.
I think that this means that one cannot fairly compare the two spatial autocorrelation “corrections” (where I actually think that trying to compute such a correction is such a horrendous abuse of statistics as to beggar the imagination, by the way — I can see why one has to apply some corrections to site data if it is known that the data is recorded differently between sites, although this correction comes at the expense of increasing the error bars on the final result (that never seem to get plotted, why is that one wonders) but one is never justified in trying to “correct” one site’s temperature on the basis of temperature measurements from neighboring sites.
Once one has the best guess for a site’s max/min/mean temperature, there is only one reasonable way to transform that data into a continentally averaged temperature and that is to perform a numerical integral of the temperature field over the area. There is no possible justification for smoothing the data on any length scale before doing this integral — the whole point of doing the integral is that it is the only unbiased way of doing the smoothing, given that one has no theoretical basis for cutting off any interpolating polynomial representation at some spatial length that is longer than the distance between sites.
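As a minimal sketch of that direct integral, assuming planar station coordinates and a piecewise-linear interpolant (the function name and synthetic data are illustrative, not anyone’s actual method):

```python
# Sketch: area-averaged temperature by integrating the piecewise-linear
# interpolant of station values over a Delaunay triangulation, with no prior
# smoothing. Coordinates are treated as planar (x, y) km for simplicity; a
# real analysis would project lon/lat onto an equal-area map first.
import numpy as np
from scipy.spatial import Delaunay

def area_averaged_temperature(xy, temps):
    """Integrate T over the convex hull of the stations and divide by area."""
    tri = Delaunay(xy)
    total_area = 0.0
    total_integral = 0.0
    for simplex in tri.simplices:
        p0, p1, p2 = xy[simplex]
        # Triangle area from the cross product of two edge vectors.
        area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                         - (p2[0] - p0[0]) * (p1[1] - p0[1]))
        # Exact integral of a linear interpolant over a triangle:
        # triangle area times the mean of the three vertex values.
        total_integral += area * temps[simplex].mean()
        total_area += area
    return total_integral / total_area

# Demonstration with synthetic data: a north-south gradient plus noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(200, 2))          # station positions, km
temps = 15 + 0.01 * xy[:, 1] + rng.normal(0, 1, 200)
print(area_averaged_temperature(xy, temps))
```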
I sometimes wonder if anybody who works in climate science has actually taken a course in numerical methods and learned about numerical quadrature and the problems attendant thereupon (or ODE solution methods, or if they actually understand statistics, or…)
What is really needed in this field is a double blind experiment. After all, an unbiased observer, performing a double blind analysis of the data, would not know what is being computed and would not know what direction “up” and “down” were for the variable. They would simply be given a pile of data presumably sampled from a wide range of sites around the US and asked to turn it into an average and trend. It might represent average exam scores of students in high school, it might represent a normalized estimate of the prevalence of drug abuse, it might represent average wind speed, it might represent average income per household (all on suitably obfuscated scales). Note that some of these might have the same kinds of problems that temperature has (only bigger) — the exam itself might have changed over the time studied, or high school students might be getting more intelligent (Flynn effect), or high school exam scores could be confounded by prevalence of drug abuse. However, “correcting” for these things using some sort of prior knowledge is dangerous because one of the things one might wish to do is infer the effect of an exam change, the Flynn effect, or drugs from the result, so “correcting” for them a priori simply makes the result useless, a self-fulfilling or obscuring prophecy.
Fortunately, for climate science, a number of such blind tests potentially exist. I offer one of them up for consideration:
Suppose one takes all of the station data and inverts it with respect to its (station) mean, and then passes all of the data through the same “correction” process. Note that this is a pure symmetry operation; if one is looking for a temperature anomaly inverting with respect to the station means had better produce a perfectly symmetric downward trend in temperatures!
That is, if one has the UHI effect and all other corrections right, then an inversion of the data must lead to a perfectly symmetric inversion of the trend. This seems like an absolutely trivial test that can be performed with almost no additional programming effort to any of the algorithms used to transform temperature data into anomaly. If inversion of the data is not symmetric, the algorithm is biased. It is as simple as that. Symmetry is a necessary condition of an unbiased algorithm.
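A toy rendering of that test, with `adjust` standing in as a hypothetical placeholder for whatever homogenization algorithm is under scrutiny (not any real NOAA or CRU code):

```python
# Sketch of the inversion-symmetry check: an unbiased adjustment algorithm
# applied to data reflected about each station's own mean should produce a
# trend that is the exact mirror of the original. station_data is assumed to
# be a (stations x years) array; `adjust` is a placeholder.
import numpy as np

def trend_per_decade(series, years):
    return 10.0 * np.polyfit(years, series, 1)[0]

def inversion_asymmetry(adjust, station_data, years):
    """Trend of adjusted data plus trend of adjusted mean-inverted data.
    An unbiased algorithm should return approximately zero."""
    t_orig = trend_per_decade(adjust(station_data).mean(axis=0), years)
    means = station_data.mean(axis=1, keepdims=True)
    inverted = 2.0 * means - station_data   # reflect each station about its mean
    t_inv = trend_per_decade(adjust(inverted).mean(axis=0), years)
    return t_orig + t_inv

# Trivial demonstration: a do-nothing "adjustment" passes the test exactly.
rng = np.random.default_rng(1)
years = np.arange(1973, 2013)
data = 0.02 * (years - years[0]) + rng.normal(0, 0.5, (100, years.size))
identity = lambda d: d
print(inversion_asymmetry(identity, data, years))   # ~0 up to rounding
```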
There are several other ways to effectively “double blind” temperature analysis. For example, give the data to a professional statistician but lie about what it represents. Tell them that it is data representing the average count of golliwogs per square hectare, that you suspect that it is biased by golliwogs being kept in urban zoos so that those numbers may be up a bit and you want to correct for that in the spatial average as you really want only the wild golliwog count, and that some of your golliwog counters did a better job than others (but you aren’t sure who is who). Let them take it away, crank up SAS or R and come back in six weeks with a Grand Average Golliwog Population of the US curve. I’ll bet that given exactly the same data their curve would look nothing like any of the “official” temperature curves.
I would be very, very interested in how CRU or GISS would respond to a demonstration that their algorithms do not possess inversion symmetry, if it were to be clearly demonstrated that this is the case. Personally, I think that would be the end of the game. How is it possible to argue that they should not possess this symmetry? If there isn’t every bit as much cooling apparent from inverted data as there is warming now, how exactly could we observe cooling in the future? Inversion symmetry must be an absolute constraint on any average temperature estimate, a sign that the UHI correction (or any other corrections applied) is plausible.
rgb

April 14, 2012 6:51 am

Mr. Watts, Taking into consideration the early comments of Richard S Courtney, and further considering the numerous qualified contributors to this site and your universal coverage and appeal, might I suggest that you add a section to this site for ‘peer reviewed climatology papers’ that are ‘impossible to get published anywhere else’. You have a much broader base and readership than the official journals and a h.ll of a lot more credibility. You could start with Richard S. Courtney’s paper. I can feel the frisson going through the ‘Team’ part of the ranks of climatologists, and cheers going up from the other part.

richard verney
April 14, 2012 7:06 am

Robertvdl says:
April 14, 2012 at 4:46 am
///////////////////////////////////////////////
I wish that I could understand the Dutch site. It is probably very interesting.
As I see matters there is a considerable problem with DWLWIR and the oceans which has not been fully nor properly thought through.
First, as you note, due to its wavelength, DWLWIR is fully absorbed within the first 10 microns of the ocean. However, of more significance, 20% of all DWLWIR is absorbed within 1 micron and 60% within 4 microns. If DWLWIR has the intensity suggested by the K & T energy diagram, it would mean that from an optical physics perspective there would be so much energy being absorbed in the first couple of microns that it would lead to rampant evaporation. There is a problem here, since if there was such evaporation, the energy would not enter the ocean but would end up in the atmosphere (as a consequence of the evaporation and latent heat change). We are not seeing rampant evaporation, and this therefore suggests that DWLWIR is not of the order of magnitude claimed, or lacks sensible energy, or is merely nothing more than a signal incapable of performing sensible work, or that much of the DWLWIR is either blocked from reaching the ocean or is simply reflected by the ocean.
Second, water is essentially an LWIR block (this follows from the fact that LWIR is fully absorbed within 10 microns and 60% of all LWIR within 4 microns). If there is a very thin veil of sea swept mist/spume/spray of just a couple of microns thickness, lying immediately above the oceans, then this very thin veil would be sufficient to effectively block DWLWIR from penetrating the oceans below. For much of the time over much of the oceans there will inevitably be such a thin veil of wind swept mist/spray/spume.
Third, the K & T energy diagram shows solar irradiance being reflected from the surface. This is predominantly from the oceans reflecting solar rays at low angles of incidence. Why does the diagram not show any reflection of DWLWIR? Since GHGs radiate in all directions, it follows that some DWLWIR must be bombarding the oceans at a low angle of incidence. Why is this not reflected? Can DWLWIR not be reflected? Can the oceans not reflect DWLWIR? Insufficient consideration has been given to the reflection of DWLWIR.
Fourth, at the very top micron level of the ocean, the energy flow is upward. The top micron layer is colder than the 4 to 8 micron layer. It follows from this and from the first two points that there appears to be no effective mechanism whereby DWLWIR could warm the bulk ocean. Given that the energy flow is upwards, heat cannot run against it, and thus any energy absorbed in the first few microns could not find its way downwards into the bulk ocean and thereby cannot heat the bulk ocean.
The upshot of the above is that ocean heat is almost certainly driven simply by solar irradiance, and an increase in ocean heat is a function of an increase in solar energy, possibly (I would say probably) due to a reduction in cloudiness.

April 14, 2012 7:22 am

Anyone who thinks a minus 0.05 degree C adjustment for UHI is adequate need only consider the previous post concerning peak temperatures at airports. Recall as well that even in the early 1950s commercial jets were uncommon.

pochas
April 14, 2012 7:33 am

evanmjones says:
April 14, 2012 at 6:22 am
“When you grid the raw data to 5-degree boxes, however, it comes out to 0.25C. That’s still less than half the adjusted rate, but it is a bit warmer than the ungridded average.
(I’ve been up to my eyeballs in these stats for the last four years.)”
Less than half a degree? My work here is done.

oMan
April 14, 2012 7:36 am

rgbatduke: I am no scientist but boy does your “inversion symmetry test” sound like a good idea. Thanks for a great comment…

Gail Combs
April 14, 2012 7:41 am

evanmjones says:
April 14, 2012 at 6:22 am
Gail: Actually, you can get more precise readings from crude data — by “oversampling”. If you take hundreds of readings that are accurate to within a degree C and average them, you wind up with an accuracy finer than a degree C….
________________________________
I am well aware of that. If I take a sample, mix well, divide into ten samples and do my chemical analysis on each of the ten samples, I can come up with a better estimate of the true value. (BTDT) However, that is not applicable here because, as I said, the sample size is ONE.
You are not taking several readings with calibrated thermometers at the same time at the same place. Instead you are doing a bunch of number juggling, but the sample size is STILL ONE; therefore the estimate of the true value of the temperature for that specific location at that specific time has the error bars for a sample size of one. You cannot get better accuracy or precision into that specific record by comparing it to the reading from the next town over, any more than I could by combining the measurements of a widget from the machine next to the first machine. Heck, by combining widgets from several cavities in just one machine I would INCREASE the error, not decrease it.
We know darn well the Atlantic Ocean modifies the temperatures along the Eastern Seaboard, and you can see it in the station records:
Raw 1856 to current Atlantic Multidecadal Oscillation
Norfolk City VA: http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425723080040&data_set=1&num_neighbors=1
Elisabeth City NC
Wilmington NC
The same holds true for the West Coast and the Pacific. The location of the jets determines the location of the Polar Express, the Arctic air that gets sucked down over the USA. Therefore you cannot just group a bunch of temperature data together to get a larger sample size.

Lars P.
April 14, 2012 7:43 am

TheInqjirer says:
April 14, 2012 at 12:13 am
“One of the claims against AGW scientists is that they have apparently started with a preposition (AGW) and massaged the data to fit….”
You got it totally wrong. Did you read the article? Did you try to think logically?
The skeptics observe the adjustments and realize these are almost all done in one direction. They realize the adjustments are bigger than the signal, the adjustments becoming the signal itself. The new graphs go outside the error bars of the previous graphs; they are not compatible.
Only then, because of this, do skeptics ask themselves whether the “AGW scientists” massaged the data to get the results they wanted, knowingly or not.
Your post is a pure smokescreen to distract from the discussion.

Matt in Houston
April 14, 2012 7:44 am

More good work Dr. Spencer, thank you!

Gail Combs
April 14, 2012 7:45 am

rgbatduke says:
April 14, 2012 at 6:47 am
OK, that screwed up again. Rats. For some reason the wordpress interface is actually getting worse than it was three or four months ago.
____________________________
Boy you can say that again! I get to see about two lines at a time. It is a real pain when trying to edit and I do not have the computing power to have a word program working well when WUWT is open.

rgbatduke
April 14, 2012 7:48 am

Anyone who thinks a minus 0.05 degree C adjustment for UHI is adequate need only consider the previous post concerning peak temperatures at airports. Recall as well that even in the early 1950s commercial jets were uncommon.
It isn’t the average correction that matters — that SHOULD be rather small if there are enough rural sites that don’t need it. The size of the correction at individual sitings should, in most cases, increase with urban density, land use change, and so on, and as Roy pointed out, the real correction need not be just “urban” and may not have a uniform sign — land use change can net cool a site, e.g. by planting trees and grass and importing irrigation water from rivers far away at a site that properly speaking should be, and once was, desert.
However, the real test of any such correction is the inversion test I mentioned above (which is one of a family of symmetries one can test for). One can take the original data, subtract your UHI correction, form the now “corrected” sample means, and invert around them. What happens to the temperatures now? One can apply the UHI correction with the opposite sign (and then invert). The point is that in the end, all of the algorithms must possess an inversion symmetry of some sort or another or they are biased.
rgb

Lance Hilpert
April 14, 2012 7:59 am

The Future Is Certain; It’s the Past Which Is Constantly Changing

Pamela Gray
April 14, 2012 8:08 am

Gail, you bring up a very important point in this debate. On any given day, all temperature readings have immediate natural drivers that are well known and completely unrelated to AGW (except for jet engines, blacktop, brick walls, BBQs, burn barrels, and overturned aluminum boats). A hot day is hot because of local atmospheric conditions which can be traced back to pressure systems and jetstream interplay. Those can be traced back to systems that are created over oceans and then meander their way to land. The creation of those conditions is brought about by the Coriolis effect, Earth’s tilt, land masses, and oceans swirling the pattern of the atmosphere this way and that in a somewhat choreographed dance. A cold day is likewise. Logically, you cannot therefore deduce an AGW signal by gridding up the resulting temperature readings, adding adjustments, data averaging, and plotting.
Straw cannot be spun into gold. Natural temperature readings cannot be spun into another signal of a different nature.

April 14, 2012 8:15 am

Lance Hilpert says: April 14, 2012 at 7:59 am
The Future Is Certain; It’s the Past Which Is Constantly Changing
…and if you wait about 270 years there is a chance that you can see the past nearly repeating itself.
http://www.vukcevic.talktalk.net/CET1690-1960.htm

Chuck Nolan
April 14, 2012 8:40 am

Gail, can I just write my comments on a word processor then paste?

Chuck Nolan
April 14, 2012 8:47 am

I see. You get the same words with no formatting.

Ian W
April 14, 2012 9:10 am

Budgenator says:
April 13, 2012 at 9:59 pm
I’m sorry, I know this will seem trollish, but every time I see the good ol’ average = [(Tmax+Tmin)/2], I can’t help but think whoever thought that up wouldn’t have done very well on the TV show “Are You Smarter Than a 5th Grader”.

Budgenator – unfortunately you are so right.
Firstly, atmospheric temperature is NOT a measure of atmospheric heat content.
Note that Nick Stokes did not respond to my proposal that he use the humidity metrics to calculate the enthalpy of the atmosphere at the time of the measurements. This is because the AGW hypothesis fails totally if the correct metric of kilojoules per kilogram of atmosphere is used.
The AGW hypothesis is that CO2 traps HEAT — CO2 cannot trap TEMPERATURE, so stop measuring temperature and start measuring heat content!!!
The heat content in kJ/kg at each hourly observation should be calculated from the humidity and temperature at the time of observation, and then a graph of the atmospheric heat content would show whether there was any ‘heat being trapped’ or not. A cool humid misty morning at 60F that turns into a low-humidity 100F afternoon may actually show a totally level heat content profile. These buffoons would have generated a totally meaningless ‘average temperature of 80F’ and then compounded the unreality by averaging all those averages.
We appear to be joining the climate ‘scientists’ and their associated useful idiots and trolls in their deliberate use of incorrect metrics. This must stop.
If they can only think in temperatures then use the ocean temperature – ideally before they have adjusted and overwritten the raw data there as well.
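For the curious, a rough sketch of what the kJ/kg metric described above could look like, using standard psychrometric approximations (the Magnus formula for saturation vapor pressure; the function name and defaults are illustrative):

```python
# Sketch: moist-air enthalpy per kilogram of dry air from temperature,
# relative humidity, and pressure, using standard psychrometric constants.
import math

def moist_enthalpy_kj_per_kg(temp_c, rel_humidity, pressure_hpa=1013.25):
    # Saturation vapor pressure in hPa (Magnus approximation).
    e_sat = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))
    e = rel_humidity * e_sat                    # actual vapor pressure, hPa
    w = 0.622 * e / (pressure_hpa - e)          # mixing ratio, kg vapor / kg dry air
    # Sensible heat of dry air plus latent and sensible heat of the vapor.
    return 1.006 * temp_c + w * (2501.0 + 1.86 * temp_c)

# Compare a humid 60 F (15.6 C) morning with a dry 100 F (37.8 C) afternoon:
print(moist_enthalpy_kj_per_kg(15.6, 0.95))
print(moist_enthalpy_kj_per_kg(37.8, 0.15))
```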

drobin9999
April 14, 2012 9:13 am

Nick, as sea level and temp rise are not accelerating AT ALL, as both metrics continue to defy climate models by staying below their central estimates, isn’t it becoming obvious that climate sensitivity is between 1.2 and 3 C? What data currently supports your fear that it will become ‘hot’?

Richard S Courtney
April 14, 2012 9:35 am

Nick Stokes:
In reply to my request for you to justify your unfounded assertion, you reply at April 14, 2012 at 2:33 am saying:
“Richard S Courtney says: April 14, 2012 at 2:06 am
“Really? You know that? How?”
Greenhouse effect. Putting CO2 in the air impedes outgoing IR. Heat accumulates, temperature rises. Flux balance then restored (until more CO2 accumulates).”
Oh dear, No! That is a ‘schoolboy error’.
The atmosphere is a complex system so a change to one parameter (e.g. atmospheric CO2 concentration) alters everything else. You made the assertion that the net effect of all those changes would be to make the world “hot”. I asked you how you could know that. Your answer says you don’t have any reason to suppose that: you merely know of one change that would occur; i.e. “impedance of outgoing IR”.
To date there is no evidence of any kind to support your assertion, which you have now demonstrated you cannot justify, but which you used as a putative reason to change the economic and energy policies of the entire world.
Read e.g. the above article by Roy Spencer and learn how little evidence there is to support your assertion (hint, there is none).
Richard

Editor
April 14, 2012 9:42 am

Nick Stokes says:
April 13, 2012 at 10:20 pm

Willis Eschenbach says: April 13, 2012 at 5:57 pm

“Nick, how did you average data to avoid overweighting the east coast where there are lots of stations?”

I used inverse density weighting, measured by 5×5° cells. That does a fairly good job of balancing. But prompted by your query, I ran a triangular mesh weighting, which places each station in a unique cell, and weights by the area. Running a monthly model, as with the first example, the trend came down to 0.42°C/decade. But with an annual model, it went back to 0.161.
I’ll write a blog post with more details.
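A minimal sketch of the cell-weighting idea described here, assuming plain arrays of station latitudes, longitudes, and anomalies (the binning and weights are schematic, not Nick’s actual code):

```python
# Sketch: bin stations into 5x5 degree cells, average within each cell, then
# average the cells with a cos(latitude) area weight so that regions crowded
# with stations (e.g. the east coast) don't dominate the national mean.
import numpy as np

def gridded_mean(lats, lons, values, cell=5.0):
    cells = {}
    for lat, lon, v in zip(lats, lons, values):
        key = (int(np.floor(lat / cell)), int(np.floor(lon / cell)))
        cells.setdefault(key, []).append(v)
    num = den = 0.0
    for (lat_idx, _), vals in cells.items():
        w = np.cos(np.radians((lat_idx + 0.5) * cell))  # approximate cell area
        num += w * np.mean(vals)                        # one vote per cell
        den += w
    return num / den

# Demonstration with synthetic stations scattered over the lower 48:
rng = np.random.default_rng(0)
lats, lons = rng.uniform(25, 49, 500), rng.uniform(-125, -67, 500)
print(gridded_mean(lats, lons, rng.normal(0.2, 1.0, 500)))
```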

Thanks, Nick, that was my point. When you say “I’ll write a blog post”, where will it be posted? I’m interested to read it.
w.

Steve Garcia
April 14, 2012 10:15 am

Doc –
“When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.”
That was my general impression from long ago, and I have seen little to disabuse myself of that since. Your take on it makes it more likely to me to be true.
Thanks!

April 14, 2012 10:20 am

Gail Combs says: April 14, 2012 at 5:28 am
There is also the question of how good the data is during the time of Red China’s “Purges” where up to 80 million were killed.
http://www.paulbogdanor.com/left/china/deaths2.html
Thank you Gail for remembering these tortured millions. Below is an excerpt from that article.
The CAGW scam is not, as many of us originally believed, the innocent errors of a close-knit team of highly dyslexic scientists. The evidence from the ClimateGate emails and many other sources, and the intransigence of these global warming fraudsters when faced with the overwhelming failure of their scientific predictions, suggest much darker motives.
The lesson of Mao, Hitler and Stalin is that one should not trust, and one should never cede power, to those who have no moral compass.
Best regards, Allan
Scholars Continue to Reveal Mao’s Monstrosities
Exiled Chinese historians emerge with evidence of cannibalism and up to 80 million deaths under the communist leader’s regime.
Beth Duff-Brown,
Los Angeles Times,
November 20, 1994
Gong Xiaoxia recalls the blank expression on the man’s face as he was beaten to death by a Chinese mob.
He died without a name, becoming another statistic among millions.
“I remember him so vividly, he really had no expression on his face,” Gong said. “After about 10 or 20 minutes, God knows how long, someone took out a knife and hit him right into the heart.”
He was then strung on a pole and left dangling and rotting for two months.
“I think the most terrible thing, when I recall that period, the most terrible thing that struck me was our indifference,” said Gong, today a 38-year-old graduate student at Harvard researching her own history.
That terrible period was China’s 1966-1976 Cultural Revolution. The blinding indifference was in the name of Chairman Mao Tse-tung and the Communist Party.
Gong is among a new wave of scholars and intellectuals, both Western and Chinese, who believe modern Chinese history needs rewriting.
While the focus of many books and articles today is on China’s successful economic reforms, dramatic new figures for the number of people who died as a result of Mao Tse-tung’s policies are surfacing, along with horrifying proof of cannibalism during the Cultural Revolution.
It is now believed that as many as 60 million to 80 million people may have died because of Mao’s policies, making him responsible for more deaths than Adolf Hitler and Josef Stalin combined.
Gong said killer is not a strong enough word to describe Mao. “He was a monster,” she said.
*******************

Ed Barbar
April 14, 2012 10:31 am

OK, I have found a trend line for the UAH data set that shows warming of approximately 0.5 degrees C over the last 30 years.
From listening to Lord Monckton, and as I recall from posts by Anthony Watts, the question of global warming is not so much “if” but “how much.” So the question is: say the earth warmed another 1 degree C over the next 60 years, how bad/good is that?
My view is the models are proving themselves to not be worth much. It’s not a surprise, as I think of Climate science as nascent, and perhaps well beyond humans and their tools to really understand for who knows how long. The truth is in the “muck and mire.”
Is it equally futile to attempt to understand what different increases in temperature would mean? What would a 1 degree C increase mean, for instance? I suppose it is a lot easier if you say “Temperatures are going to increase 5 degrees C,” because then major systems would be impacted, and we can all make the case about nasty humans changing mother Gaia Earth’s system.
I feel I’m wasting my time trying to understand this stuff. I wish our leaders would take a step back, and re-focus the discussion to something meaningful. I’m tired of hearing how global warming causes an increase in toe fungus.

Steve Garcia
April 14, 2012 11:19 am

@Ed Barbar: “I’m tired of hearing how global warming causes an increase in toe fungus.”
Mountains out of molehills, and very likely imaginary molehills. “Cartoon molehills” may be too harsh, but perhaps “creatively rendered molehills” is not.
One of the genuine problems/concerns seems to be that some people are actively engaged in demonizing our society’s very existence. The case has slowly built here that what used to be called “Progress” – and which is now labeled “planet killing” – really and truly has improved life for humankind, spreading a more hygienic, less threatened, better fed, healthier, and more enjoyable kind of life to more and more people. Should that progress be achieved with a minimum of negative impact? Certainly. But the “let’s all humans commit suicide so that plants and animals can live in peace and harmony” school of thought is pretty much insane.
Skeptics, almost to a man/woman, all know we must not injure the planet beyond a certain point. Much of the alarmist-vs-skeptical argument really stems from our differences over where the point of no return is. Alarmists believe it is upon us. Skeptics are confused how they can assert such things. The world environmentally was in much worse shape half a century ago, and we’ve done wonders improving it and pushing back any day of reckoning.
We don’t have killer fogs in London anymore, and even Mexico City has gone a long way toward cleaning its air. And China imposed extra export tariffs four years ago or so, to fund cleaning up their air. If São Paulo does the same, it is a good thing. Industrial cities as long ago as 100 years had much worse air and environmental hazards than today. We can’t rest on our laurels, but neither should we shut down the industries that support our society.
When toe fungus becomes the most important issue, we will deal with it. Warming, like toe fungus, is far from our most pressing issue. When it DOES become important, we won’t need screamers like James Hansen to tell us – it will be when our environment is worse off than in 1910 and 1960 – and that will not be for a very long time, if ever. We showed in the 1970s and 1980s that we can fix what we screwed up. Since then nothing has happened to make us drop everything else and fix warming.

April 14, 2012 11:32 am

Ed Barbar says: April 14, 2012 at 10:31 am
OK, I have found a trend line for UAH data set that shows warming of approximately .5 degrees C over the last 30 years.
_____________
Ed – please see my above post.
There is probably NO trend if you look at a FULL PDO cycle rather than the warming HALF, as you have done.
Weather satellites only went up in 1979, about the start of the recent warming.
There was global cooling from ~1940 to 1975. The Surface Temperature data from that time is also probably warm-biased by UHI, etc.

Urederra
April 14, 2012 12:06 pm

Nick Stokes says:
April 14, 2012 at 5:03 am
He didn’t prove that at all. He showed in 1996 that some rural counties in California showed little increase. He gave no statistics of the effect of this small sample.
In fact globally there is very little difference between the trends of rural and urban stations.

What? A free Google Blogger site with some graphs and numbers and a broken link called “document repository”? What do you want to prove with that?
Here is a more useful link. IMHO.
http://www.retrojunk.com/tv/quotes/343-spongebob-squarepants/

Crispin in Johannesburg
April 14, 2012 2:05 pm


Think about the difference between measuring the temperature 20 times a day at one station vs. once per day at 20 nearby stations. Each reading has a 0.5 degree precision. Which is more accurate? Now think of measuring one station once. The precision of the measurements remains 0.5 degrees no matter how many times they are read.
What you can say about the multiple readings is that the Mean is known with more confidence. It says nothing about the accuracy. They might all be quite inaccurate. Multiple readings don’t improve precision or accuracy, as those remain properties of the measurement system and directly give the error bars that have to be placed around a more or less well-known Mean.
You can quickly see the meaningfulness of saying the ConUS temp went up 0.161 plus or minus 0.5 degrees.
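A toy simulation of this point (the bias and precision figures are made up): averaging more readings narrows the spread of the mean, but a shared instrument bias does not average away.

```python
# Toy Monte Carlo: a systematic error shared by every reading survives
# averaging, even as the standard error of the mean shrinks like 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
true_temp = 20.0
bias = 0.3        # systematic error shared by every reading, degrees C
precision = 0.5   # random read error, 1 sigma, degrees C

for n in (1, 20, 400):
    readings = true_temp + bias + rng.normal(0.0, precision, size=n)
    print(f"n={n:4d}  mean={readings.mean():.3f}  "
          f"std err of mean={precision / np.sqrt(n):.3f}")
# Every mean stays ~0.3 C above the true value of 20.0:
# precision improves with n, accuracy does not.
```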

Nick Stokes
April 14, 2012 2:42 pm

Willis Eschenbach says: April 14, 2012 at 9:42 am
“Thanks, Nick, that was my point. When you say “I’ll write a blog post”, where will it be posted? I’m interested to read it.”

Thanks, Willis. It’s here.

Gail Combs
April 14, 2012 3:47 pm

Chuck Nolan says:
April 14, 2012 at 8:40 am
Gail, can I just write my comments on a word processor then paste?
_______________________
Yes. At least I do it with LibreOffice on Ubuntu and Firefox or Opera. It makes editing much easier. You can use the WUWT Test page to see if the HTML markup is OK.

old construction worker
April 14, 2012 4:48 pm

“Nick Stokes says:
April 14, 2012 at 5:57 am
Yes, water vapor feedback amplifies the temperature increase due to any forcing.”
That’s like saying when I bake bread, the water in the bread which turns into water vapor increases the oven temperature from 375 to 380. I have my doubts.
You are telling me that the water vapor in Shreveport, LA is making it hotter than Yuma, AZ. Both cities have about the same population, are at about the same altitude, and are ninety-some miles from a large body of water.
http://www.climate-zone.com/climate/united-states/louisiana/shreveport/
http://www.climate-zone.com/climate/united-states/arizona/yuma/

Gail Combs
April 14, 2012 4:55 pm

Crispin in Johannesburg says: @ April 14, 2012 at 2:05 pm
Think about the difference between measuring the temperature 20 times a day at one station v.s. once per day at 20 nearby stations. Each reading has a 0.5 degree precision. Which is more accurate? Now think of measuring one station once. The precision of the measurements remains 0.5 degrees no matter how many times they are read….
What you can say about the multiple readings is that the Mean is known with more confidence. It says nothing about the accuracy…..
_____________________________________
Actually it is WORSE. I look at it the same way I would for sampling a batch or continuous process in industry.
Your best accuracy/precision (tightest distribution) is multiple samples from several different points in a well-mixed batch, with testing performed by the same TRAINED tech with the same CALIBRATED equipment. This gives you the best shot at getting something close to the “true value”.
As soon as you add another tech the distribution is not going to be as tight (larger standard deviation); add different equipment and again the standard deviation increases. Add different batches, different mix equipment, different plants and different raw material lots, and the distribution becomes wider and wider. I really do not care how many samples you take; that standard deviation is going to reflect the differences in equipment and technicians as added error. At this point coming up with the “true value” of, say, the average mg of codeine per gram of liquid in cough syrups on the shelf in the USA, by just using batch test records from selected manufacturers, gets a lot harder and the standard deviation wider. This is what I see as the equivalent of what climate scientists are trying to do with temperature.
The 95% confidence interval is where 95% of the data falls within 1.96 standard deviations of the mean; normally plus or minus 2 standard deviations is used (at least in chemistry).
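A toy variance-components rendering of that argument (the numbers are invented for illustration): independent error sources add in quadrature, so each extra technician, instrument, or plant widens the overall distribution.

```python
# Independent variance components add in quadrature; every added source of
# variation widens the spread around the "true value". Figures are made up.
import math

sources = {"within-batch": 0.10, "technician": 0.15,
           "instrument": 0.20, "plant": 0.25}
total_sd = math.sqrt(sum(sd ** 2 for sd in sources.values()))
print(f"combined sd = {total_sd:.3f}, 95% interval = +/- {1.96 * total_sd:.3f}")
```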

Nick Stokes
April 14, 2012 5:03 pm

Urederra says: April 14, 2012 at 12:06 pm
‘What? a free google blogger with some graphs and number with a broken link called “document repository”?’

Well, Roy, Anthony, me, we’re all bloggers. Paying a hosting fee doesn’t guarantee validity. The post you read is about two years old, and drop.io has gone out of business. But the repository with the code is still available – just look at “Document Store” in the resources list top right.
But it’s not just me. Lots of people have studied the global temp record. Not just a few counties in California.

Nick Stokes
April 14, 2012 5:07 pm

Ian W says: April 14, 2012 at 9:10 am
“Note that Nick Stokes did not respond to my proposal that he used the humidity metrics to calculate the enthalpy of the atmosphere at the time of the measurements.”

No, because it is OT. This post is about temperature metrics. Why not suggest to Roy and Anthony that they use humidity? Then we could compare.

Nick Stokes
April 14, 2012 5:17 pm

feet2thefire says: April 14, 2012 at 11:19 am
“When toe fungus becomes the most inportant issue, we will deal with it. “

feet2thefire?

Ian W
April 14, 2012 7:09 pm

Nick Stokes says:
April 14, 2012 at 5:07 pm
Ian W says: April 14, 2012 at 9:10 am
“Note that Nick Stokes did not respond to my proposal that he used the humidity metrics to calculate the enthalpy of the atmosphere at the time of the measurements.”
No, because it is OT. This post is about temperature metrics. Why not suggest to Roy and Anthony that they use humidity? Then we could compare.

It is NOT off topic – and if you bothered to read the posts that I made I said:
We appear to be joining the climate ‘scientists’ associated useful idiots and trolls in their deliberate use of incorrect metrics. This must stop.
Atmospheric temperature metrics do not measure the amount of heat ‘trapped’ by the ‘green house effect’. So everyone can discuss temperature metrics as much as they like – they are the incorrect metric.
I am sorry if you don’t understand the concept of enthalpy – you should really learn what it means. Then you might not be parading your ignorance quite as loudly.

April 15, 2012 1:24 am

“Nick Stokes says:
April 14, 2012 at 5:57 am
Yes, water vapor feedback amplifies the temperature increase due to any forcing.”

The rather obvious problem with the +ve WV feedback is that the oceans are an unlimited source of WV. A +ve WV feedback would immediately cause runaway warming. Which means even a small net +ve WV feedback is impossible (at current temperatures).

Brian H
April 15, 2012 2:58 am

Let us not forget that any large sample of adjustments for instrument error and external factors should average out to zero, meaning about half up and half down, by similar spreads. No such thing is happening; the adjustments display clear patterns, always in the direction of propping up AGW.
There is only one plausible cause for this. The adjusters want it that way.

Nick Stokes
April 15, 2012 4:14 am

Philip Bradley says: April 15, 2012 at 1:24 am
“A +ve WV feedback would immediately cause runaway warming.”

No, that is a misunderstanding of the meaning of positive feedback. The current estimate is that the feedback factor due to water vapor is about 2. You can think of it working this way. If an initial forcing increment of 1 W/m2 is applied, that will increase wv, producing an initial 0.5 W/m2 added forcing. That extra 0.5 ups the wv further, producing an extra 0.25. And so on. The sum of the added increments is 1; the total forcing with wv feedback is 2 W/m2.
Of course if the wv initially produced by 1 W/m2 extra forcing itself adds more than 1 W/m2 forcing, then you get runaway.
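In symbols, this is just the standard geometric series for a feedback gain g (here g = 1/2):

$$\Delta F_{\mathrm{total}} \;=\; \Delta F_0 \sum_{n=0}^{\infty} g^{n} \;=\; \frac{\Delta F_0}{1-g}, \qquad 0 \le g < 1,$$

which converges to twice the initial forcing for g = 1/2, and diverges (runaway) only when g ≥ 1.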

izen
April 15, 2012 5:10 am

Removing the rate of growth of the population density from the US temperature data is a rather complicated way of demonstrating a correlation between temperature trends and US population.
The correlation does not prove causation.
Looking at another measure of temperature where changes in population are very unlikely to influence the trend might indicate if the trend is real or just a matter of local causation from population changes.
Is the UAH LT record affected by the US population density increase, or would it make any sense to correct the UAH record with the population metric Roy Spencer has used ?

Lars P.
April 15, 2012 5:50 am

Nick Stokes says:
April 14, 2012 at 5:03 pm
“But it’s not just me. Lots of people have studied the global temp record. Not just a few counties in California.”
Nick, the good thing about UHI is that population growth has almost stopped in the temperate and arctic regions where it matters most.
We all know the UHI effect is real; you and your kind want to deny that UHI has any statistical effect on the measured temperature trend. In the work linked by you, Berkeley denies that the growth of human population from 1 billion to 7 billion has any effect on the temperature trend in human locations – which I simply do not believe. Just look at their map of the USA temperature trends and you will simply identify locations where population has increased, based on red spot accumulation.
Here is how I try to explain why they got erroneous results: they compare a localized rural trend against a global trend (mostly North America, Scandinavia, Russia and some Australian locations). Furthermore, the selection of “very rural” locations is enough to introduce bias – as was shown here at WUWT – so I trust their work is in error. It does not compare rural versus urban trends for the same region with clearly identified no-UHI locations.
I trust that if we measured city by city we would find a growing difference between the temperature outside the city and in the city, which would depend on the city population. Of course we will also need to compare like with like, as it will very much depend on location, country, and type of development.
In the coming years population growth will settle further, and urbanization has reached very high levels in many countries, so there will be less and less UHI effect on the trend. The trouble you and those who think like you will have is to explain the cooling effect of big cities, as UHI will simply remain stationary for most big cities; only some small locations will grow, increasing their UHI in the temperate and arctic regions, and this is also limited.
The “cooling effect” is simply the absence of further warming.
So basically the discussion is really less and less relevant for future temperature measurement, and another self-inflicted shot in the knee for CAGW.

rgbatduke
April 15, 2012 6:12 am

Philip Bradley says:
April 15, 2012 at 1:24 am
“Nick Stokes says:
April 14, 2012 at 5:57 am
Yes, water vapor feedback amplifies the temperature increase due to any forcing.”
The rather obvious problem with the +ve WV feedback is that the oceans are an unlimited source of WV. A +ve WV feedback would immediately cause runaway warming. Which means even a small net +ve WV feedback is impossible (at current temperatures).

Such a brilliant, succinct statement of something I’ve been trying to articulate, you leave me stunned. People who postulate positive feedback for a stable system simply don’t understand what the word means. And in this case there is coupled positive feedback — the warmer the oceans the more CO_2 and/or methane they release to complete a vicious cycle of self-amplifying runaway positive feedback.
However, the conclusion is still not precisely correct, because WV feedback could be positive but overall feedback could still be negative. Also, because the Earth’s climate is a self-organized chaotic system, altering WV concentrations could move stable attractors up or down or sideways (and probably does). Still, this argument places a strong burden of proof on those who assert positive feedback to explain in detail why there isn’t an upward trail to Venus-like conditions with the oceans boiled away precisely like Hansen has been idiotic enough to assert as his worst-case doomsday scenario! Oh, wait, this is what they think will happen.
Which is why they should not be taken seriously by anyone with common sense. It’s all well and good to make B-grade science fiction movies that show the Earth’s climate self-organizing overnight into a giant refrigerator that causes glaciers to instantly form, or into a hothouse that boils away the oceans, all because Humans Did It with their silly little thing called “civilization”. Sensible people look at the actual data from the paleontological climate record over the last billion or so years as recorded in life forms and other proxies, and recognize that, first of all, the Earth itself does have the capacity for catastrophic climate change all by itself on hundred-million-year timescales due to forces or forcings over which we have not the slightest shred of control.
Second, the “catastrophes” visible in that record are without exception in one of two categories — cold phase/ice age, a disaster beyond our imagining — and atmospheric climate catastrophes brought about by things like the Siberian Traps or supervolcanoes, million-year-long extrusions of magma dumping enormous amounts of aerosols and dust into the air, or asteroid collisions ditto. Even the latter have not managed to kick the Earth into an ocean-boiling state, with many times as much CO_2 and other GHGs in the atmosphere, and the only thing that humans can do that might trigger a similar event is start a global thermonuclear war or go into space and push an asteroid down ourselves deliberately as an act of war.
The probability of global thermonuclear war is at its lowest point since the invention of thermonuclear weapons — at the moment, it is utterly implausible even if the North Koreans or Iranians or Pakistanis lob a bomb at South Korea or the US, at Israel, at India. It would also be much more likely to trigger a nuclear winter, not runaway heating. Yellowstone could, naturally, decide to wake up and spew magma for 50,000 years at any moment, but there is damn all we can do about it if it decides to do so. Similarly, if a rogue asteroid appears, all we’ll be able to do — if we see it in time — is draft Bruce Willis, send up a thermonuclear-armed space shuttle to try to deflect it (oh wait, we don’t HAVE a space shuttle any more, damn) and pray.
Beyond that, we are in the only visible warm phase in the climate record. It gets as much as 2 or 3 K warmer, depending on parameters we do not understand, possibly things like the actual positions of the continents and oceanic currents; however, those warmer temperatures appeared to become completely and systematically unstable over a stretch that ran from 4 to 1 million years ago, and the climate is now almost completely stable in an ice age. Our current warm phase is part of a transient, obviously geologically unstable excursion and is very likely approaching its end, although we cannot really predict this without knowing why the Earth warms up in this way in the first place (or why it went cold in the first place), and we haven’t a clue about either one.
We are obsessing over the wrong catastrophe. There is essentially zero chance of a still hotter stable state, and if there were — and it were truly stable, the premature end of the current ice age — it would be the best thing possible for the human race and all species of animal life! If you want to see extinction events galore, bring back the glaciers. Humanity came of age in the Holocene, and the advent of Fimbulwinter once again, ice giants and all, would condemn countless species to a massive die-off as the temperate zones they now inhabit become intemperate and frost reaches towards the equator.
Stability analysis is such an obvious step in climate science, and yet all we have there is not “science”, it is the mad ravings of Hansen.
rgb

Dr. Lurtz
April 15, 2012 6:48 am

“Ian W says: ” – kJ/kg at a temperature.
I agree completely. kJ/kg, at a temperature, is the only accurate way to measure atmospheric energy/heat content. Just measuring temperature as a proxy for energy is silly. It takes energy to create storms, not just temperature.
One problem: if they can’t measure temperature accurately, how would they be able to accurately measure energy, a more complex measurement involving two parameters [humidity and temperature]?

Jim Clarke
April 15, 2012 9:22 am

Hey Anthony,
I would like to request that you include a correction to the graph that leads off this article. As Phil Clarke has pointed out, the graph is in degrees F, but the explanation below the NOAA graph gives the amounts in degrees C. I know that the explanation is part of the same image as the graph and cannot be easily edited, but a simple note of the labeling error would suffice.
While I hardly believe that the original creator of the image mislabeled the explanation with the intent to inflate the adjustments, that accusation follows the graph and gives the warmists an excuse to throw away every argument that follows.
Furthermore, Nick Stokes has taken the opportunity to proclaim that climate change crisis skeptics aren’t learning anything new and have nothing but the same disingenuous arguments:
“But I agree about the lack of progress. As Phil Clarke pointed out,15 hours ago, the graphic which cites the adjustment slopes as if they were Celsius (though they were Fahrenheit) is from a 2007 post. The false claim was criticised then, but here it is, reappearing as if nothing had happened. And still uncorrected.”
(From: http://moyhu.blogspot.com.au/2012/04/us-temperature-trends.html )
While it is obvious that the error has no real impact or importance to the discussion, it is like throwing a little chum into shark-infested waters.

Mickey Reno
April 15, 2012 10:49 am

Enthalpy vs. temperatures when making atmospheric measurements seems so obviously superior a method. Let’s see, now how should we adjust those humidity levels…
I understand “raw” station measurement data is something of a dying breed. This is a crime against science. The raw data should be maintained above all else. Derived data can be discarded and regenerated at will as long as someone simply saves the formula.

barry
April 15, 2012 10:54 am

Has Dr Spencer commented on the fact that his own data set (UAH) gives a statistically significant warming of the US at 0.22C/decade (1979 – present), and is thus quite similar to the CRUtem (0.198C) and USHCN (0.245C) trends, and quite dissimilar from his population density trend (0.013C/decade)?
http://vortex.nsstc.uah.edu/public/msu/t2lt/uahncdc.lt – (UAH decadal trend for ‘US48’ is given bottom right hand corner of this data page)

Philip Clarke
April 15, 2012 3:24 pm

Josh: “While I hardly believe that the original creator of the image mislabeled the explanation with the intent to inflate the adjustments.”
Nor do I. It was likely a schoolboy error that Meyer missed because it suited his prejudices. However, it was pointed out in the very first comment and Meyer did not bother to issue a correction. Embarrassing enough for him if he wants to be taken seriously; arguably even more embarrassing for a ‘science’ site that copies the error without noticing and leaves it uncorrected after it has been pointed out.
Once that has been put right, can we look forward to this ‘paper’ being corrected and withdrawn?
http://scienceandpublicpolicy.org/originals/policy_driven_deception.html
After all, its headline premise has been falsified, not least by the BEST project, which one of the coauthors contributed to and endorsed.
Don’t hold your breath.

edbarbar
April 15, 2012 4:45 pm

Allan McRae:
I’ve seen lots of temperature graphs. I think knowing the earth’s temperature is tricky business, and I applaud Anthony Watts’ efforts here. I admire that and many other things Anthony does.
So, yes I looked at your graph. Does the PDO change the Earth’s temperature? Could be. How is it meant to transport more heat into space? I have no clue.
Regards,
Ed

April 16, 2012 5:42 am

I’m in what appears to be a growing majority that’s concluded that the posted article is wrong. I think a blog update noting the errors pointed out in the comment thread is in order.
Nick has taken the opportunity to slam this blog for it. There is a cure for that. If you don’t like being criticized for making mistakes, stop making so many of them.

April 16, 2012 5:50 am

rgb:

People who postulate positive feedback for a stable system simply don’t understand what the word means. And in this case there is coupled positive feedback — the warmer the oceans the more CO_2 and/or methane they release to complete a vicious cycle of self-amplifying runaway positive feedback.

That doesn’t happen here. Positive feedback systems don’t run away if they are stabilized by a dissipative nonlinearity, such as the Stefan–Boltzmann sigma T^4 factor here.
A simple example of this is the Van der Pol oscillator. This linear equation runs away:
x''(t) - x'(t) + x(t) = 0, x(0) = 1, x'(0) = 0.
This equation is stable and gives sinusoidal oscillations:
x''(t) + (x(t)^2 - 1) x'(t) + x(t) = 0.
Just for clarity, this equation is not stable:
x''(t) - x'(t) + x(t) + x(t)^3 = 0.
(So just having a nonlinearity in the system doesn’t imply it will be stable.)
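A quick numerical check of the middle case, as a sketch (the time span and initial condition are arbitrary choices): the Van der Pol equation stays bounded even though it has negative damping, i.e. positive feedback, near x = 0.

```python
# Integrate the Van der Pol oscillator x'' + (x^2 - 1) x' + x = 0 as a
# first-order system and confirm that the solution stays on a bounded
# limit cycle rather than running away.
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y):
    x, v = y
    return [v, -(x**2 - 1.0) * v - x]

sol = solve_ivp(van_der_pol, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
print(f"max |x| over the run: {np.abs(sol.y[0]).max():.2f}")  # stays ~2, bounded
```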

Editor
April 16, 2012 11:01 am

Nick Stokes says:
April 14, 2012 at 2:42 pm (Edit)

Willis Eschenbach says: April 14, 2012 at 9:42 am

“Thanks, Nick, that was my point. When you say “I’ll write a blog post”, where will it be posted? I’m interested to read it.”

Thanks, Willis. It’s here.

Much appreciated. I’m back to work, but it looks fascinating; I love a chance to learn more about R and about averaging.
w.

Jim Clarke
April 16, 2012 2:47 pm

Phillip Clarke Says:
“Once that has been put right, can we look forward to this ‘paper’ being corrected and withdrawn?
http://scienceandpublicpolicy.org/originals/policy_driven_deception.html
After all its headline premise has been falsified, not least by the BEST project that was contributed to, and endorsed by one of the coauthors.
Don’t hold your breath.”
Now hold on there a second…
I agree that Meyer’s graph at the top of this article has an inconsequential labeling error that should be corrected, but that has nothing to do with the report you mention above.
First of all, the surface data record is highly complex and the debate on adjustments is far from over. I doubt that the BEST project is the definitive word on the subject, nor do I see how it falsifies the idea that some of the adjustments have been policy driven. The BEST project is not immune to the same policy environment that brought us a lot of the adjustments in question.
Secondly, should Watts and D’Aleo be held to a much higher standard than the authors of the many ridiculous papers that appear in the journals supporting the AGW theory? How many of those have been ‘corrected and withdrawn’ after being almost instantly falsified the minute they went public? Have you called for any of those to be corrected and withdrawn?
If not, then I think you should correct and withdraw your request.
I won’t hold my breath.

Eric in CO
April 18, 2012 2:46 pm

I’ll bet in 50 years they will adjust today’s temps down again while increasing 2062 temps. Of course eventually they’ll be lowering 1930s Iceland temps to absolute zero to make the data fit the theory.

John@EF
April 20, 2012 9:51 am

I find it very curious that Dr. Spencer hasn’t even attempted to square-the-circle between his claims here and his UAH satellite data. It appears he cannot.

Editor
April 20, 2012 1:29 pm

John@EF says:
April 20, 2012 at 9:51 am

I find it very curious that Dr. Spencer hasn’t even attempted to square-the-circle between his claims here and his UAH satellite data. It appears he cannot.

I find it very curious that if someone doesn’t choose to do something according to your personal schedule of what you think they should do and when, you assume that they cannot do so …
w.

John@EF
April 20, 2012 3:50 pm

@Willis Eschenbach says: April 20, 2012 at 1:29 pm,
It’s been a week, Willis. That question has been asked by many over the past seven days. Do you have an opinion as to why the glaring discrepancy? It seems reasonable that Dr. Spencer would anticipate being challenged on that obvious point and have a ready answer.

Editor
April 20, 2012 5:20 pm

John@EF says:
April 20, 2012 at 3:50 pm

@Willis Eschenbach says: April 20, 2012 at 1:29 pm,
It’s been a week, Willis. That question has been asked by many over the past seven days. Do you have an opinion as to why the glaring discrepancy? It seems reasonable that Dr. Spencer would anticipate being challenged on that obvious point and have a ready answer.

Near as I can tell, the question has been asked once, here, five days ago. Maybe it has been echoed by someone else.
Dr. Spencer has not posted to this thread since two days before that. Even if he had posted, that doesn’t mean he’s read every comment. As a result, I have no idea whether Dr. Spencer has even seen the question, and more to the point, neither do you.
You are accusing someone of deliberately ignoring something that you don’t know if he has ever seen. That is underhanded, unfair, and completely unacceptable. All you’ve done is proven that you are wildly biased, which won’t help you get traction for your underlying claim regardless of whether or not it is valid.
You may be right, and I’d be interested in an answer. I’d also be interested in Dr. Spencer’s comments on my post above. But I’m not going to accuse him of anything because he hasn’t answered my or any other post. For all I know, he’s on holiday, or has other work, or hasn’t seen it, or a dozen other reasons. I don’t have a clue, and I’m not guessing.
w.

John@EF
April 21, 2012 10:47 am

@Willis Eschenbach says: April 20, 2012 at 5:20 pm
The question has been asked several times and under multiple original posts on Dr. Spencer’s own site, Willis. There are relatively few responses to wade through in those threads. This is in addition to questions posed at multiple related sites across the blogosphere. The chances of Dr. Spencer not having seen some of these comments are slim. He’ll be compelled to respond eventually. I’m just surprised he apparently wasn’t armed with a ready response.

Editor
April 21, 2012 1:47 pm

John@EF says:
April 21, 2012 at 10:47 am

@Willis Eschenbach says: April 20, 2012 at 5:20 pm
The question has been asked several times and under multiple original posts on Dr. Spencer’s own site, Willis. There are relatively few responses to wade through in those threads. This is in addition to questions posed at multiple related sites across the blogosphere. The chances of Dr. Spencer not having seen some of these comments are slim. He’ll be compelled to respond eventually. I’m just surprised he apparently wasn’t armed with a ready response.

Thanks, John, with that additional information it makes more sense.
w.

April 23, 2012 2:53 pm

Given that there were major changes in population in the US at the county level and lower between 1970 and 2010 (major parts of the Midwest west of the Mississippi had changes of -50%, the Mountain West had major increases, etc.), the idea of using population density from a single year is risible.

John@EF
May 5, 2012 9:20 pm

Dr. Spencer finally replied on his blog regarding why his new population-adjusted US land temperature trend differs so starkly from the UAH LT trend. He claimed ignorance. No real explanation. He claims he never considered the UAH LT data. Does anyone here not find that seriously odd and more than just a bit disappointing?

Editor
May 5, 2012 11:25 pm

Can’t say I’m happy about it at all. I do like the fact that he says “I don’t know” rather than make something up. However, it certainly does put his UHI claims into limbo until he can explain it.
w.