Contribution of USHCN and GISS bias in long-term temperature records for a well-sited rural weather station

Guest post by David W. Schnare, Esq. Ph.D.

When Phil Jones suggested that if folks didn’t like his surface temperature reconstructions, then perhaps they should do their own, he was right. The SPPI analysis of rural versus urban trends demonstrates the nature of the overall problem. It does not, however, go into sufficient detail. A close examination of the data suggests three areas needing attention. Two involve the adjustments made by NCDC (NOAA) and by GISS (NASA). Each agency makes its own adjustments, and these are typically serial: the GISS adjustments are applied on top of the NCDC ones. The third problem is organic to the raw data and has been highlighted by Anthony Watts in his Surface Stations project: the “micro-climate” biases in the raw data.

As Watts points out, while there are far too many biased weather station locations, there remain some properly sited ones. Examination of the data representing those stations provides a clean basis by which to demonstrate the peculiarities in the adjustments made by NCDC and GISS.

One such station is Dale Enterprise, Virginia. The Weather Bureau has reported raw observations and summary monthly and annual data from this station from 1891 through the present, a 119-year record. From 1892 to 2008, there are only 9 months of missing data during this 1,404-month period, a missing-data rate of about 0.64 percent. The analysis below interpolates for this missing data by using an average of the 10 years surrounding the missing value, rather than back-filling from other sites. This correction method minimizes the inherent uncertainties associated with other sites for which there is no micro-climate guarantee of unbiased data.
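The gap-filling rule described above (replace a missing month with the average of the same calendar month over the ten surrounding years) can be sketched as follows. This is a minimal illustration of the stated method, not the author’s actual code; the array layout and function name are assumptions:

```python
import numpy as np

def fill_missing(monthly, n_years=10):
    """Fill missing monthly values (NaN) with the mean of the same
    calendar month over roughly the n_years surrounding years.

    monthly: 2-D array of shape (years, 12); NaN marks a missing month.
    """
    filled = monthly.copy()
    n_rows = monthly.shape[0]
    for y in range(n_rows):
        for m in range(12):
            if np.isnan(monthly[y, m]):
                # take up to n_years/2 years on each side of the gap
                lo = max(0, y - n_years // 2)
                hi = min(n_rows, y + n_years // 2 + 1)
                window = monthly[lo:hi, m]
                # nanmean ignores the missing value itself
                filled[y, m] = np.nanmean(window)
    return filled
```

Because the window is drawn from the same station, no assumption about neighboring sites is needed, which is the point of the method.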

The site itself is in a field on a farm, well away from buildings or hard surfaces. The original thermometer remains at the site as a back-up to the electronic temperature sensor that was installed in 1994.

The Dale Enterprise station site is situated in the rolling hills east of the Shenandoah Valley, more than a mile from the nearest suburban style subdivision and over three miles from the center of the nearest “urban” development, Harrisonburg, Virginia, a town of 44,000 population.

Other than the shift to an electronic sensor in 1994, and the need to fill in the 9 months of missing reports, there is no reason to adjust the raw temperature data as reported by the Weather Bureau.

Here is a plot of the raw data from the Dale Enterprise station.

There may be a step-wise drop in reported temperature in the post-1994 period. Virginia has no other rural stations that operated electronic sensors over a meaningful period before and after the equipment change at Dale Enterprise, nor is there publicly available data comparing the thermometer and electronic sensor readings at this station. Comparison with urban stations would introduce a potentially large warm bias over the 20-year period from 1984 to 2004. This is especially true in Virginia, as most such urban sites are at airports, where the aircraft equipment in use and the pace of operations changed dramatically over this period.

Notably, neither NCDC nor GISS adjusts for this equipment change. Thus, any bias due to the 1994 equipment change remains in the record for the original data as well as the NCDC and GISS adjusted data.

The NCDC adjustment

Although many have focused on the changes GISS made from the NCDC data, the NCDC “homogenization” is equally interesting, and as shown in this example, far more difficult to understand.

NCDC takes the originally reported data and adjusts it into a data set that becomes part of the United States Historical Climatology Network (USHCN). Most researchers, including GISS and the University of East Anglia Climatic Research Unit (CRU), begin with the USHCN data set. Figure 2 documents the changes NCDC made to the original observations and suggests why, perhaps, one ought to begin with the original data.

The red line in the graph shows the changes made to the original data. Considering the location of the Dale Enterprise station and the lack of micro-climate bias, one has to wonder why NCDC would make any adjustment whatever. The shape of the red delta line indicates these are not adjustments made for purposes of correcting missing data, or for any other obvious bias. Indeed, with the exception of 1998 and 1999, NCDC adjusts the original data in every year! [Note: when a 62-year-old Ph.D. scientist uses an exclamation point, the statement is to be read with some extraordinary attention.]
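The red delta line is, in effect, just the year-by-year difference between the adjusted series and the original observations. A minimal sketch of that comparison (function and variable names are illustrative, not from any NCDC code):

```python
import numpy as np

def adjustment_delta(raw, adjusted):
    """Year-by-year adjustment applied to a station record:
    adjusted minus raw, for annual means aligned on the same years.

    Returns the delta series and the indices of years left untouched
    (which show up as exact zeros in the difference).
    """
    raw = np.asarray(raw, dtype=float)
    adjusted = np.asarray(adjusted, dtype=float)
    delta = adjusted - raw
    untouched = np.flatnonzero(delta == 0.0)
    return delta, untouched
```

Run against the Dale Enterprise record, a delta series like this would make the near-universal adjustments described above immediately visible.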

This graphic makes clear the need to “push the reset button” on the USHCN. Based on this station alone, one can argue the USHCN data set is inappropriate for use as a starting point for other investigators, and fails to earn its self-applied moniker of a “high quality data set.”

The GISS Adjustment

GISS states that their adjustments correct for the urban heat island bias in station records. In theory, they adjust stations based on the nighttime luminosity of the area within which the station is located. This broad-brush approach appears to have failed with regard to the Dale Enterprise station. There is no credible basis for adjusting data from a station with no micro-climate bias, located on a farm more than a mile from the nearest suburban community, more than three miles from a town, and more than 80 miles from a population center of greater than 50,000, the standard definition of a city. Harrisonburg, the nearest town, has a single large industrial operation, a quarry, and is home to a medium-sized (but hard-drinking) university, James Madison University. Without question, the students at JMU have never learned to turn the lights out at night. Based on personal experience, I’m not sure most of them even go to bed at night. This raises the potential for a luminosity error we might call the “hard drinking, hard partying, college kids” bias. Whether it is possible to correct for that in the luminosity calculations I leave to others. In any case, the layout of the town is traditional small-town America, dominated by single-family homes and two- and three-story buildings. The true urban core of the town is approximately six square blocks, and other than the grain tower there are fewer than ten buildings taller than five stories. Even within this “urban core” there are numerous parks. The rest of the town is quarter-acre and half-acre residential, except for the University, which has copious open ground (for when the student union and the bars are closed).

Despite the lack of a basis for suggesting the Dale Enterprise weather station is biased by urban heat island conditions, GISS has adjusted the station data as shown below. Note, this is an adjustment to the USHCN data set. I show this adjustment as it discloses the basic nature of the adjustments, rather than their effect on the actual temperature data.

While only the USHCN and GISS data are plotted, the graph includes the (blue) trend line of the unadjusted actual temperatures.

The GISS adjustments to the USHCN data at Dale Enterprise follow a well-recognized pattern. GISS pulls the early part of the record down and mimics the most recent USHCN records, thus imposing an artificial warming bias. The trend lines are somewhat difficult to compare in the graphic. The trends for the original data, the USHCN data and the GISS data are 0.24, -0.32, and 0.43 degrees C per century, respectively.
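For readers who want to check numbers like these, a trend in degrees C per century is simply a linear least-squares slope scaled by 100. A sketch, assuming annual means in NumPy arrays (illustrative, not the author’s actual calculation):

```python
import numpy as np

def trend_per_century(years, temps):
    """Linear least-squares slope of annual mean temperature,
    expressed in degrees C per century."""
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return slope_per_year * 100.0
```

Applying this to the raw, USHCN, and GISS versions of the same station record is all that is needed to reproduce a three-way trend comparison like the one above.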

If one presumes the USHCN data reflect a “high quality data set”, then the GISS adjustment does more than produce a faster rate of warming; it actually reverses the sign of the trend of this “high quality” data. Notably, compared to the true temperature record, the GISS trend nearly doubles the actual observed warming.

This data presentation constitutes only the beginning analysis of Virginia temperature records. The Center for Environmental Stewardship of the Thomas Jefferson Institute for Public Policy plans to examine the entire data record for rural Virginia in order to identify which rural stations can serve as the basis for estimating long-term temperature trends, whether local or global. Only a similar effort nationwide can produce a true “high quality” data set upon which the scientific community can rely, whether for use in modeling or to assess the contribution of human activities to climate change.

David W. Schnare, Esq. Ph.D.

Director

Center for Environmental Stewardship

Thomas Jefferson Institute for Public Policy

Springfield Virginia

===================================

UPDATE: readers might be interested in the writeup NOAA did on this station back in 2002 here (PDF, second story). I point this out because initially NCDC tried to block the surfacestations project saying that I would compromise “observer privacy” by taking photos of the stations. Of course I took them to task on it when we found personally descriptive stories like the one referenced above and they relented. – Anthony

February 26, 2010 4:06 pm

Thanks for this–we keep seeing more and more of these individual station histories. Hope someone’s keeping score.
BTW, you have some issues with graphics not appearing in this post…

EdB
February 26, 2010 4:17 pm

I agree.. we need to go back to the raw data at each site and adjust if need be, openly, with full disclosure as to why.
The above adjustments to the raw data are simply not credible.
Thank you for your hard work on this.

February 26, 2010 4:35 pm

So is it a fact that the USHCN records available here:
http://cdiac.ornl.gov/epubs/ndp/ushcn/ushcn_map_interface.html
are adjusted and not the raw data? If that’s so, then even the adjusted data from my favorite rural station (Pineville, WV, settled in 1853 and incorporated in 1917, and with only 715 souls recorded by the 2000 Census, after a big population loss in the 1990s) shows no sign of any warming.

kwik
February 26, 2010 4:39 pm

“Note, when a 62 year old Ph.D. scientist uses an exclamation point….”
Good point! Never believed I would ever get interested in something as boring as the climate.
So much more other things to be interested in. But when you smell a fishy stench, you want to investigate….

James Sexton
February 26, 2010 4:46 pm

It is pretty much as I expected. I was mildly surprised to see the USHCN adjustments move downward, but not at all surprised by the arbitrary upward (or historical downward) adjustments by GISS. How does GISS justify the historical downward adjustment for UHI bias? Even if it became necessary to apply an adjustment of that nature, they are applying it backwards. It doesn’t take a PhD to realize urbanization typically occurs in a forward time step. I suppose it is possible for a large thriving metropolis to later diminish to a suburban or rural area, but I can’t think of one example in the U.S., and that would be the only case where a historical downward adjustment would be appropriate.
It’s a bit puzzling that they have the backup mercury thermometer but didn’t record the readings for comparison to the electronic device. On a lighter note, maybe when they delete all other stations from the data, they can use this one for the global thermometer.
Dr. Schnare, I truly hope you continue with your stated endeavor. So far, in all of the individual station data I’ve seen, the GISS backward adjustment is applied. The question that comes to my mind is do they apply this to all stations or just a select few?

Dan
February 26, 2010 4:48 pm

“Only a similar effort nationwide can produce a true “high quality” data set upon which the scientific community can rely”
Hasn’t this just been done?
http://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf
The authors found that the warming trend was actually slightly stronger when using only those sites rated highly by http://www.surfacestations.org. It seems we need to move on from the hypothesis that the microclimate station siting issue is causing an artificial warming trend.
REPLY: The NCDC study is flawed, and used pilfered data with only 43% of the network surveyed, see why here.
I can say with confidence that the study at 88% surveyed shows a much different picture, and that paper is being written now. – Anthony

Berényi Péter
February 26, 2010 4:50 pm

Average NCDC adjustment for the 1850-2010 period for GHCN rural and non-rural sites. Difference of v2.mean_adj & v2.mean (raw data).
Suggests some bizarre algorithm.
http://ber.parawag.net/images/GHCN_adjustments.jpg
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/

February 26, 2010 4:55 pm

Dr Schnare,
Thank you for this illuminating work & the further Virginia investigations.
I have seen other analysis like this, for instance; Darwin Zero & some work around West Point, NY compared to the adjusted data at New York City.
Do I understand this correctly; these are serial adjustments? The first is made by NCDC & then another by GISS?

February 26, 2010 4:57 pm

If
1: this almost 1 deg C negative bias for pre-1990 temp records by the NCDC is common
and
2: this almost 1.5 deg C negative bias for pre-1900 temp records by the GISS is common
then the whole claim of AGW is a red herring.
Once that is established… follow the money. You will find the crooks & their enablers.

John F. Hultquist
February 26, 2010 4:59 pm

I didn’t have a graphics issue and when I sometimes do I can refresh and get them or get them one at a time.
As for the post, I think it has been clear for a while now that the temperature records are not good enough for the purpose to which they have been put. This is another fine example.
Line 2 mentions SPPI and an analysis of rural versus urban trends. Maybe there should be a link.
http://scienceandpublicpolicy.org/originals/temperature_trends.html
and
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/

Robert of Ottawa
February 26, 2010 5:07 pm

Yet another example of the earlier temperatures being lowered. This is becoming a pattern. It is a very clever device for amplifying a rising trend. The current temps are “accurate” so the non-inquisitive enviro-journalist will be convinced the temperature trends must be correct as well.
I am having severe doubts concerning the honesty of these crimatologists; up ’til now, I have always been willing to give them the benefit of the doubt concerning deliberate manipulation. Now, I am seeing this lowering of early temperatures too often; perhaps incompetence is not the cause.

Steve Keohane
February 26, 2010 5:09 pm

Thank you for this clear exposition. Starting over with an examination of each and every station record is a must. I suspect Anthony’s work will provide the foundation.

February 26, 2010 5:11 pm

Ok, I reread the post. They are serial… I just can’t believe it! (Exclamation point by a 60’ish former technologist.)
If we just concentrate on your red adjustment lines, do we know when these adjustments were made? Another way of asking this: did a past year ever get adjusted more than once?
Senator Inhofe ought to convene a NOAA & NASA executive panel to testify before Congress on how this “process” is managed. Don’t ask them why they did it, don’t ask if it’s correct to do it; ask how it was conceived, quality assured, & how they satisfied themselves that it didn’t get out of hand.
Once this “assurance” review is complete, then you can ask if it was scientifically correct. If you really want to get a knot tied in their collective underwear… this is the way to do it.

Steve Keohane
February 26, 2010 5:12 pm

Dan (16:48:35) : I think Anthony has pointed out that ‘paper’ used the surfacestations.org data at 45% of completion.

February 26, 2010 5:12 pm

Garbage in, Garbage out, true enough. But in this example of long-term and well sited Climatological GOLD is transmuted into POOP… Poop, being much warmer than gold, is also very suitable for smearing “deniers,” so watch where you step, y’all! LOL
Great article and so on the money! All you other states, start your engines!! There is a juggernaut of good data out there to smash the AGW house of cards – let’s GO and find it!

John Blake
February 26, 2010 5:13 pm

We suspect that re-evaluating these data-sets in detail will confirm what has been increasingly evident for quite some time: Adjustments are made without scientific warranty, solely for the purpose of propagandizing discredited AGW hypotheses. But these spurious time-series are but a symptom of Green Gang disease… the fundamental lack of integrity displayed invalidates every single measurement ever reported by these wretched ideologues.
As ever, the burden of proving AGW lies solely with its advocates; dissenters may point out flaws ’til doomsday without ever refuting Warmist orthodoxy as such. Yet as years go by, as a “dead sun” tips Earth from her Long Summer to the overdue end of our Holocene Interglacial Epoch, treating rebounds from a 500-year Little Ice Age as “global warming” from c. 1890 will stand revealed as willful self-deception.

KimW
February 26, 2010 5:15 pm

Seminal work like this, is blithely ignored by the media, the ‘environmental reporters’, and those obsessed by CO2. The problem is that they would rather go down in flames than admit that AGW is the biggest scam in history.

TanGeng
February 26, 2010 5:22 pm

Wow nice methodological review.
Is there any insight into GISS’s “adjustment” process? Seems like a totally unwarranted effort to cool the past.

Dan
February 26, 2010 5:24 pm

Discussion over surface station data quality should be placed in the context of the satellite data record, which has nothing to do with surface station records and which shows a slightly weaker but still similar global temperature trend over the satellite record: GISS = .168 C/decade vs. RSS (satellite) = .156 C/decade, and UAH (satellite) = .132 C/decade.

c james
February 26, 2010 5:26 pm

James Sexton (16:46:41)
“I suppose, it is possible to have had a large thriving metropolis to later diminish in size to a sub-urban or rural area, but I can’t think of one example in the U.S.”
Detroit is surely headed this way. Population has dropped by 50% from 1.8 million to 900,000. Empty lots are now being turned into urban farms. http://www.npr.org/templates/story/story.php?storyId=91354912

Suzanne
February 26, 2010 5:33 pm

Another issue to look at is “how long have the conditions of this weather station been in effect?” I have looked at the pre-2006 USHCN weather station data for Wyoming and Nevada in comparison with the change in population around the sites. What I found was that for cities like Lander, Wyo., which hasn’t grown much, the temperatures were lower than those of the 1930’s. At the same time, rapidly growing areas, like Riverton, showed considerable warming to levels above those of the 1930’s. In both states, the average temperature of rapidly growing cities showed warming of over 1 degree F, while the average of those with stagnant populations, or with weather stations well away from the city, showed little or no warming trend over the past 70 years.

Bruce King
February 26, 2010 5:47 pm

It is easy enough to see. They let UHI increases take care of the heat bias until the rural sites needed some personal attention to get the heat bias. Then they could let their imagination run rampant. Wonder if they had a competition to determine the most exotic “fix”?

RuhRoh
February 26, 2010 5:47 pm

Dear Sir;
Could you please clarify whether you are reporting on USHCNv1 or USHCNv2 ‘adjusted’ (value-added?) data?
Perhaps you might re-run the analysis with both flavors of adjusted (and “raw” ) data to highlight the differences if there are any.
What is the source of the purportedly ‘raw’ data?
I have seen some blinker graphs where ‘raw’ data (downloaded 6 months apart) looked VERY different.
As far as I can tell, both are now available online.
Thanks for this fine expositional effort.
RR

Earle Williams
February 26, 2010 5:49 pm

Dan,
The Menne paper uses adjusted data. See the post at:
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/
to see how the adjustment process makes the rural data indistinguishable from urban data.

February 26, 2010 5:56 pm


Dan (17:24:15) :
Discussion over surface station data quality should be placed in the context of the satellite data record,

Dan, what do the satellites measure (i.e., what do they ‘see’)?
Surface temp?
How is this accomplished through overcast skies?
Like we have had in North Central Texas for the last four weeks nearly continuously …

Mooloo
February 26, 2010 6:07 pm

Discussion over surface station data quality should be placed in the context of the satellite data record
The opaque satellite data? For which we get only a “it’s showing definite warming”?
We’ve seen the robust “hockey stick” turn out to be dodgy.
We’ve seen the robust GISS data to be dodgy.
A cynical person might be worried that the satellite data is also not pristine.
I could be wrong, of course. NASA only have to release the original, unedited data, to allow qualified people to have a look. Do you reckon they will?

Eric Gamberg
February 26, 2010 6:19 pm

NOAA seem to do at least some adjustments on a seasonal basis. Looking at the monthlies would seem to be the type of review required for understanding “climate”.
ref:
http://gallery.surfacestations.org/main.php?g2_itemId=35159&g2_imageViewsIndex=3

Enginear
February 26, 2010 6:23 pm

Thanks for the contribution. This is just another example of how this internet option of review is working out. I’m new at this and have been carefully observing since the “Climategate” incident, noting which side appears to be looking for truth. From my perspective, at least the skeptical side for the most part admits when they are wrong, but I have never seen any of that from the Warmist group. I even once tried posting a question on Realclimate but it never made it to the screen. That could be due to my inexperience so I don’t get excited about it.
This temperature data issue, including the paleo work, is fascinating to me. As an engineer I have had to get data for some projects and know how difficult it is to get anything useful. The work I see here is way out of my league and I am thankful that such distinguished people have contributed to the conversation.
Keep up the good work and thank you again.
Barry Strayer

February 26, 2010 6:26 pm

JamesS (16:35:58) :
So is it a fact that the USHCN records available here:
http://cdiac.ornl.gov/epubs/ndp/ushcn/ushcn_map_interface.html
are adjusted and not the raw data? If that’s so, then even the adjusted data from my favorite rural station (Pineville, WV, settled in 1853 and incorporated in 1917, and with only 715 souls recorded by the 2000 Census, after a big population loss in the 1990s) shows no sign of any warming.
———————-
Not quite clear on your point, but if you’re saying that the adjustments are ‘hiding the decline’ I’d agree that that’s a strong possibility. Great work, Dr. Schnare! I hope you pull out all the stops in continuing with other station records.
Dan:
I’m not an expert about the satellite data records, but I’ll take the liberty of copying a response from the Dr. Long thread from today, with the added comment that at this point, few people here, after experiencing the growing evidence for the pile of crap that has been passed off as climate science, trust the people or institutions that are ‘recording’ the data:
TKL (08:50:20) :
Several people here have called attention to the recent satellite-based temperature data. Going from the data produced by the satellite radiation sensors to an estimate of the earth’s atmosphere and surface temperatures is an “ill-posed” mathematical problem. This means that small random errors in the satellite sensor measurements, and these sorts of errors are always present, they can’t be avoided, lead to big, odd-looking, and obviously wrong temperature estimates unless the computer program estimating these temperatures makes some assumptions about what the satellite sensors are really looking at. These assumptions could be that the actual temperatures are not too far from the climate average expected for the place on the earth and the time of year where the satellite is taking data, or that temperatures close together in the atmosphere or at the surface cannot be different by more than a certain amount, and so on.

Then, always insisting that these assumptions are satisfied, the computer programs attempt to find the temperatures that do the best job of matching the radiation measurements. Change those assumptions and the programs will produce different temperatures for the same radiation data coming down from the satellite. People who run these large and complicated programs do not like doing this, because it’s all too easy to introduce bugs that result in no temperature estimates at all being produced, but I would not be surprised to find that under the right sort of outside “encouragement” the programmer would be told to make the effort.

All the data coming from the satellite systems is highly digitized, making it easy to produce cool graphics and so on, but given the ill-posed nature of the mathematical problem they are solving I would be wary of treating that temperature data as gospel. What skeptics should really be looking at is the raw radiation sensor data coming from the satellites.
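The “ill-posed” behavior described in the quoted comment, where tiny sensor errors explode into wildly wrong estimates unless prior assumptions constrain the answer, can be demonstrated with a toy Tikhonov (ridge) regularization. This is a generic textbook sketch, not any actual satellite retrieval code:

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations.

    lam > 0 pulls the solution toward zero (in a real retrieval, toward
    a climatological prior), taming the noise amplification that a
    nearly singular A would otherwise produce."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# A nearly singular forward model: two almost identical "channels"
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
x_true = np.array([1.0, 1.0])
b_noisy = A @ x_true + np.array([1e-4, -1e-4])  # tiny sensor noise

x_naive = np.linalg.solve(A, b_noisy)   # unregularized: blows up
x_ridge = ridge_solve(A, b_noisy, 1e-3) # regularized: stays sensible
```

With noise of only 1e-4, the unregularized solution lands far from the true values while the regularized one stays close; change the assumption (the value of `lam`) and the answer changes, which is exactly the sensitivity the comment warns about.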

February 26, 2010 6:39 pm

Is this how the rest of NASA works? If so, how do they ever successfully launch a satellite? Do their rockets veer all over according to the ad hoc adjustment of real data in real time? Are there data adjustment officers at Cape Kennedy twiddling with dials while the space shuttle is taking off?
Why is the American taxpayer being jerked around by NASA data adjustment officers? Maybe we should adjust their paychecks. I volunteer for that job. Sorry, Jim, I just tweaked your salary this month to negative $50G’s. Fork it over or my cousin Luigi is gonna pay you a visit you’ll never recover from.
Good golly Miss Molly. And they want us to trust them. “Why don’t you trust us? Is it something we said?”

AnonyMoose
February 26, 2010 6:46 pm

Anthony and helpers might be a bit distracted, and the graphic/image issue might be related. I also didn’t see the graphics until I refreshed. He’s mentioned that his surfacestations.org is flooded with traffic, probably due to the Fox News report about it. I wouldn’t be surprised if there is web server juggling going on, although I have no idea whether this server is directly involved in the other activities.
Consider it as an easy article for Anthony… whatever he’s wrestling with will make for a short story that is writing itself without needing a trip to a library or swamp.

Patrick Davis
February 26, 2010 6:47 pm
John C
February 26, 2010 6:52 pm

How do we know the satellite data is accurate? Because it closely matches the questionable surface data?

Eric Rasmusen
February 26, 2010 7:08 pm

Which stations in the world are the ones driving the average global warming result? It would be useful to focus in on those, and see which ones in those regions are like the one in this post, not needing adjustment, and then see how they are adjusted.

Ed
February 26, 2010 7:12 pm

If you do this again, it would be interesting to see the step by step instructions. You could use the viral nature of the internet to distribute the work of looking at all the weather stations.

Editor
February 26, 2010 7:14 pm

Dan (16:48:35) : edit

“Only a similar effort nationwide can produce a true “high quality” data set upon which the scientific community can rely”

Hasn’t this just been done?
http://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf
The authors found that the warming trend was actually slightly stronger when using only those sites rated highly by http://www.surfacestations.org. It seems we need to move on from the hypothesis that the microclimate station siting issue is causing an artificial warming trend.

Dan, you have been suckered by the usual kind of shabby science done by mainstream climate scientists. Despite being warned by Anthony that not all of the stations had been surveyed, and that the results had not yet been quality controlled, the authors went ahead and wrote up the piece of junk that you cited.
Can’t blame you for being duped, that was their intention. See the full details here.

Antonio San
February 26, 2010 7:20 pm

Data and those who understand them… so much better than politics!

February 26, 2010 7:26 pm

“This graphic makes clear the need to ‘push the reset button’ on the USHCN.” – David W. Schnare
Push.
John

Stephen N Cal
February 26, 2010 7:30 pm

For the sake of keeping the Stephens separate, I will now start using Stephen N Cal
While it shows a warming trend from 1885 to present, notice, as near as I can determine from the Dale Enterprise station graph, that there is a cooling trend since about 1920. The way I see it, as the rural areas go, so goes the planet, no matter what the heat islands are doing! I have seen this cooling trend at stations around the USA, several here in Calif. If this cooling trend can be seen at stations on other continents, then it must be global. Heat islands only reflect micro locations and, in my opinion, have no relationship to global climate; they should not be used for global climate monitoring unless the heat island effect can be totally removed from them.

S. Massey
February 26, 2010 7:34 pm

Seems like the scientists moved too fast on this one. We need more data before jumping into the AGW craziness!

Claude Harvey
February 26, 2010 7:38 pm

Re: John C (18:52:56) :
“How do we know the satellite data is accurate? Because it closely matches the questionable surface data?”
I think those who question AGW theory should be careful to not duplicate the fundamental error they accuse the “true believers” of having made. The damning charge against AGW proponents is that they ignored data that did not support their theory and “adjusted” some of that data into a more supportive form. The satellite data seemed quite good enough for the “doubters” so long as it told them what they wished to see and hear. Now that the satellite data shows heating, dramatically so over the past nine months, the satellites are suddenly unreliable?
I think everyone would do well to drop back a couple of paces and remember that the AGW argument is not settled by a degree or so of average global temperature up or down over the past century. The heart of the AGW argument is that whatever has occurred over the past century is abnormal in the ordinary scheme of things.
Take a look at either the past 1,000 years of reasonably and rationally reconstructed global temperature, or the past 450,000 years of reconstructed history, and you must conclude that present arguments over a degree or so of average global temperature are simply ludicrous in the overall scheme of things.

geo
February 26, 2010 7:40 pm

I’d be interested to see what Forestburg, SD would show if someone with the talent to look cares to. I tried just now to see how long the station record is to confirm it would be worth looking at, but http://www.surfacestations.org seems to be down (the gallery is up tho) which deprived me of my usual link to get to the station record finder. I can just say I know from personal experience it is as rural as you can get, even if the mmts is too close to the house now. 160 acre farms to the horizon in all directions.

Steve Oregon
February 26, 2010 7:43 pm

Good grief, is there global warming or not?
How much global warming is there?
Is there any reliable measurement of global temperature?
How much is the margin of error?
Is it at all possible that there has been cooling and not warming over the past 100 years?
How does the AGW alarm keep sounding with such a convoluted quagmire of stuff that’s supposed to be science?

Eric Gamberg
February 26, 2010 7:47 pm

Willis Eschenbach (19:14:08) :
“Dan, you have been suckered by the usual kind of shabby science done by mainstream climate scientists. Despite being warned by Anthony that not all of the stations had been surveyed, and that the results had not yet been quality controlled, the authors went ahead and wrote up the piece of junk that you cited.”
Don’t forget that the surfacestations.org survey results are really only addressing the conditions at the site at the time of the survey. The history of the site and its locations and equipment are not rigorously examined.
REPLY: We may be able to do this if NCDC will give me access to B44 forms which are top view site sketches and description of surroundings, but so far they have not made them available. -A

u.k.(us)
February 26, 2010 7:47 pm

Get ready cause, the weather, it’s gonna change.
Throwing money at it won’t help.
But, if you throw some my way…..

geo
February 26, 2010 7:54 pm

@Willis Eschenbach (19:14:08) :
I do get a grin going whenever the vilifiers get going about how the “deniers don’t do any original work”. My response is usually “Scoreboard, mother****er” with a link to http://www.surfacestations.org. It is hard to judge which is greater on an absolute scale: the shame due the establishment for not doing it themselves, or the honor due Anthony and his merry band for taking on the task.

February 26, 2010 7:56 pm

Without accurate data, no endeavor is scientific.
Satellite raw data is not temperature, it is EM radiation of varying amplitude across the EM spectrum—-it can be converted to “temperature” only by correlation with actual contemporaneous temperature measurements from the same site from which the radiation is transmitted. But as we have seen, those measurements are problematic.
As one example, even if a satellite could resolve its reception down to the size of an airfield, where so many ground stations are found, can it separate radiation from paved surfaces, building exhausts, planes, etc., from some nearby grassy area where the ground station is found? If not, then “adjustments” based on some rationale must be made. After “Climategate” and the other frauds and errors documented at this web site, the basis for all such “adjustments” must be re-examined.
KW

Jerry Gustafson
February 26, 2010 7:57 pm

When we are looking at the actual raw temperature data from these stations, what are we seeing: the daily high, the daily low, or what? Could someone please enlighten me?
Thanks.

February 26, 2010 7:57 pm

Dan (17:24:15) :
Discussion over surface station data quality should be placed in the context of the satellite data record, which has nothing to do with surface station records and which shows a slightly weaker but still similar global temperature trend over the satellite record: GISS = .168 C/decade vs. RSS (satellite) = .156 C/decade, and UAH (satellite) = .132 C/decade.

Given the satellite record is only 30 years old.
Given that we are assuming it’s more accurate (and yes, what we wind up seeing as an end-product is “adjusted”).
Given that the planet is over 4 billion years old.
Methinks we need to collect a hell of a lot more satellite data before we draw any conclusions on long-term warming or cooling trends.
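For anyone who wants to see where a figure like “.156 C/decade” comes from, here is a minimal sketch of an ordinary least-squares decadal trend. The 30-year series below is synthetic (a clean 0.015 °C/yr ramp), not real GISS, RSS or UAH data.

```python
def decadal_trend(years, anomalies):
    """Least-squares slope of anomaly vs. year, scaled to degrees per decade."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
             / sum((x - mean_x) ** 2 for x in years))
    return slope * 10.0

# Synthetic 30-year "satellite era": a noise-free 0.015 C/yr warming ramp.
years = list(range(1979, 2009))
anomalies = [0.015 * (y - 1979) for y in years]
print(round(decadal_trend(years, anomalies), 3))  # -> 0.15
```

On real, noisy monthly anomalies the same fit applies; only the uncertainty of the slope grows, which is part of why a 30-year record supports such limited conclusions.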

igloowhite
February 26, 2010 7:59 pm

Good thing these fools are not calling in airstrikes in Afghanistan or Iraq; their fugged data just could cause them to bomb themselves, or have they done so with their pencils.

Basil
Editor
February 26, 2010 8:10 pm

I’m confused. Where is the “raw” data coming from here? Can you point me to a .gov source where you get your “raw” data?

February 26, 2010 8:16 pm

Rural Beeville, Texas (Class 1 or 2) 1896 – 2005
NOAA adds 2.8°F / Century to Tmean raw data linear trend.

u.k.(us)
February 26, 2010 8:19 pm

Steve Oregon (19:43:13) :
You just asked the 10 trillion dollar question.
Read Anthony’s archives, then you can tell us 🙂

February 26, 2010 8:20 pm


Claude Harvey (19:38:19) :
Now that the satellite data shows heating, dramatically so over the past nine months, the satellites are suddenly unreliable?

Asked and unanswered; what are they measuring ( ‘ looking at ‘ )?
(See also here: A statistically significant cooling trend in RSS and UAH satellite data)
.
.

February 26, 2010 8:22 pm

Link to Beeville graph:
http://tinypic.com/2mz0dqu/

February 26, 2010 8:24 pm

Sorry, new at this tinypic thingy: Try this:
http://tinypic.com/r/2mz0dqu/6

Eric Gamberg
February 26, 2010 8:27 pm

I’d just like to point out the Hay Springs 12S, NE station is NOT currently a well-sited rural station.
I’d also note that NOAA has published an article in their COOP newsletter about the numerous locations of the Academy, SD station, covering the early period of the station that is not included in the MMS database. The station’s volunteer operator had a copy that I quickly read before moving on that particular day.
Both of these stations are included in this study. Neither station’s condition nor history is addressed with respect to the selection criteria for this paper.

igloowhite
February 26, 2010 8:27 pm

Steve Oregon,
First Know thyself.
Yours true,
igloowhite

D. King
February 26, 2010 8:28 pm

David,
Here is NOAA talking about the new sensors.
Watch from 3:00 to 3:24

David Schnare
February 26, 2010 8:29 pm

Data sources for the analysis.
The “raw” data come through the NOAA Locate Station portal at:
http://www.ncdc.noaa.gov/oa/climate/stationlocator.html
I pulled from “Daily/Monthly/Annual Virginia Climatological Data” which are the original reports from the various weather stations, raw and unadjusted. As you work back in time, it is interesting to see who was in charge of this data. It all began in the Department of Agriculture, where the “Weather Bureau” was located. The key data were about precipitation, not temperature. As my grandfather, an Iowa farmer, oft explained, water makes the crop; there will always be plenty of sun. Today the raw station data come from NCDC reports which remain remarkably similar in form to the original efforts from the 1890s.
The USHCN adjusted data are pulled from the GISS site. GISS states that it uses the newest available data from the USHCN data set. Here is Hansen’s (GISS) description:
“The current analysis uses surface air temperatures measurements from the following data sets: the unadjusted data of the Global Historical Climatology Network (Peterson and Vose, 1997 and 1998), United States Historical Climatology Network (USHCN) data, and SCAR (Scientific Committee on Antarctic Research) data from Antarctic stations. ”
You can find this quote and more at:
http://data.giss.nasa.gov/gistemp/
He does not state which version of the data he uses. I pulled directly from NCDC to compare with the GISS data and have not had time to compare them directly. The data are not in the same format, and it requires many separate data pulls from NCDC to get data comparable to that provided in text form from GISS. Our full report this summer will take a look at that and other similar issues. If you have suggestions on other issues, do let me know.
The GISS and USHCN data come through the GISS portal at:
http://data.giss.nasa.gov/gistemp/station_data/
I used the “raw GHCN data + USHCN corrections” as the USHCN data, and used “after homogeneity adjustment” as the GISS data.
GISS offers a third option: “after combining sources at same location”. In the Dale Enterprise case study, this data was identical to the “raw GHCN data + USHCN corrections”, which would be expected for a true rural data set, as there are no other sources at the same location in such a case.
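As an illustration of the comparison described above, here is a rough sketch of computing annual-mean differences between two of the GISS station-data downloads (say, “raw GHCN + USHCN corrections” versus “after homogeneity adjustment”). The whitespace-delimited YEAR JAN..DEC layout and the 999.9 missing-value sentinel are assumptions about the text files, so check the actual headers before relying on this.

```python
def parse_station_text(text, missing=999.9):
    """Return {year: [12 monthly values, None where missing]} from a
    GISS-style station text dump (assumed layout: YEAR JAN .. DEC)."""
    records = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 13 and fields[0].isdigit():
            year = int(fields[0])
            records[year] = [None if abs(float(f) - missing) < 1e-6 else float(f)
                             for f in fields[1:]]
    return records

def annual_difference(adjusted, raw):
    """Adjusted-minus-raw annual means, only for years complete in both."""
    diffs = {}
    for year in adjusted.keys() & raw.keys():
        a, r = adjusted[year], raw[year]
        if None not in a and None not in r:
            diffs[year] = sum(a) / 12 - sum(r) / 12
    return diffs

# Tiny invented example: adjustment adds a uniform 1 degree in 1901.
raw = parse_station_text("1901 1 1 1 1 1 1 1 1 1 1 1 1")
adj = parse_station_text("1901 2 2 2 2 2 2 2 2 2 2 2 2")
print(annual_difference(adj, raw))  # -> {1901: 1.0}
```

Plotting those annual differences over the whole record is one quick way to see where, and by how much, the serial adjustments diverge from the raw reports.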

February 26, 2010 8:37 pm


Eric Gamberg (19:47:25) :
Don’t forget that the surfacestations.org survey results are really only addressing the conditions at the site at the time of the survey. The history of the site and its locations and equipment are not rigorously examined.

Survey parties anxiously await the invention of/the discovery of/the release from EVT (Engineering Verification Testing) of the first time-travel machines available for rent or lease for prioritized civilian purposes …
.
.

Frank Schroeder
February 26, 2010 8:38 pm

“Notably, neither NCDC nor GISS adjusts for this equipment change. Thus, any bias due to the 1994 equipment change remains in the record for the original data as well as the NCDC and GISS adjusted data.”
Is this true? Does it not seem as if there is just such an adjustment in the NCDC data set? They have a step adjustment at a time point that looks to be the early 90s, perhaps related to the equipment change?

Roger Knights
February 26, 2010 8:43 pm

MODS: TYPO under the first chart in the article:
Change to “most” in: “as must such urban sites “

D. King
February 26, 2010 8:55 pm

Frank Schroeder (20:38:59) :
see the video here:
D. King (20:28:50) :

Mark C
February 26, 2010 9:20 pm

I’m presuming this nighttime luminosity data is from the DMSP satellites, which carry (as far as I know) the only sensor measuring nighttime lights.
The OLS sensor uses a photo-multiplier tube, something like night-vision goggles, to amplify the available light. Anyone who has seen this imagery can see considerable “bleeding” of bright areas into surrounding dark areas. Some of this can be removed with post-processing and long-term averaging, but it’s probably not enough in areas with sharp gradients from “urban” to “rural”. I suppose it serves as a gross filter between rural and urban sites, but it’s not good enough: it will probably misclassify sites that are near urban areas as falling within the “bright” areas, even though they are still far enough away that the thermometer record would be nearly unaffected.
I think the only proper way is a manual analysis using higher-resolution imagery such as what Google uses. Another substitute might be Landsat, using it to classify how well vegetated the surrounding area is. Urban desert vs. rural desert might be tougher, but I bet it’s doable.
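The “gross filter” described above amounts to a brightness threshold at each station’s coordinates. The cutoffs and brightness values in this sketch are invented for illustration, not GISS’s actual procedure; the bleeding problem means a well-sited dark station near a city can still land in the “bright” bin.

```python
def classify(brightness, dark_cutoff=10, bright_cutoff=35):
    """Bucket a station by the nightlights brightness at its location.
    Cutoff values are hypothetical, chosen only to illustrate the idea."""
    if brightness <= dark_cutoff:
        return "rural"
    if brightness < bright_cutoff:
        return "small town"
    return "urban"

print(classify(5), classify(20), classify(60))  # -> rural small town urban
```

The manual-imagery approach suggested above replaces the single brightness number with an actual inspection of what surrounds the station, which is exactly what a one-pixel threshold cannot capture.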

pat
February 26, 2010 9:20 pm

This really evidences fraud at the highest levels of governmental science. I think an investigation must begin. There is something seriously wrong with scientific reportage. And I suspect it may span many more fields than climate.

jorgekafkazar
February 26, 2010 9:22 pm

Tom in Texas (20:16:05) : “Rural Beeville, Texas (Class 1 or 2) 1896 – 2005. NOAA adds 2.8°F / Century to Tmean raw data linear trend.”
Isn’t there some sort of airport there? Or an Air Force base? I had a pizza there about 1976, seem to recall something of the sort.

February 26, 2010 9:31 pm

Very interesting and informative post, Dr. Schnare. Thank you for posting this, and to Anthony for hosting it. I can only guess that the data-manipulators never imagined that one day their work would be exposed.
For those who may be interested, I posted some graphs of the U.S. cities’ temperature records that were made available from Hadley late in 2009, as their attempt to show they had nothing to hide after the Climate-gate fiasco.
There is only one city in that data for Virginia, and that is Richmond. It is not a rural site at all, but a large city, with monthly average temperatures going back to 1911.
http://sowellslawblog.blogspot.com/2010/02/usa-cities-hadcrut3-temperatures.html

Bruce of Newcastle
February 26, 2010 9:38 pm

I like to extrapolate the GISS corrections backwards. In this case Harrisonburg clearly reappeared from beneath its glacier around 1200 AD.
This is slightly later than the hitherto unrecognised ice age in tropical North Queensland, where the Mackay sugar mill emerged from under the ice in 900 AD:
http://kenskingdom.wordpress.com/2010/02/05/giss-manipulates-climate-data-in-mackay/
I can’t do better than Ken Stewart’s own words “Wow- when they adjust, they don’t muck around!”

Reed Coray
February 26, 2010 9:53 pm

Steve (19:43:13), here’s my answer to your question: How does the AGW alarm keep sounding with such a convoluted quagmire of stuff that’s supposed to be science?
Most people want to feel good about themselves; I know I do. One way to accomplish that goal is to participate in saving the world. Jumping on the CAGW bandwagon was a way to do your part to save the world. As a consequence, the mainstream media and much of the populace climbed aboard. Once on the bandwagon, it’s hard to be one of the first people to jump off, because to do so would be to admit not only that you haven’t been saving the world, but also that you were duped. So an early bailout means you have to admit you were a dupe, not a world saver. That’s a hard jump for anyone. However, as the number of jumpers increases (as I hope and believe it will), it becomes more palatable to bail out. If you don’t join the mass of jumpers, you become a “double dupe”: you got duped when you got on the bandwagon, and you weren’t smart enough to get off before the wagon went over the cliff. IMO we’ve seen the start of the mass bailout, but only time will tell.

victor meldrew
February 26, 2010 9:56 pm

I really don’t believe what I’ve just seen on BBC News:
Our World
The Rise of the Sceptics
So far, well balanced, informative and non-judgemental (I kid you not). Excerpts from Lord Christopher Monckton’s Aussie tour, interviews with sceptical scientists and politicos, the whole lot. I suspect there will be some qualifiers and pro-AGW views before the end of the programme. You may not be able to watch this outside the UK without a proxy, but, GOOD GRIEF, the buggers have blinked!

hotrod ( Larry L )
February 26, 2010 10:05 pm

Very interesting. If you tried to screw up the data, you could hardly come up with a better method than to pile undocumented adjustment on top of undocumented adjustment. In addition, in the examples we have seen so far of these forensic examinations of a handful of rural stations, the adjustments appear to be somewhat arbitrary (i.e., what happens at location x does not happen at location y, even though they are both high-quality, long-duration rural stations).
Given they are adjusting for UHI based on luminosity — Wouldn’t it be funny if the sudden move to high efficiency lighting is increasing luminosity of cities even though population is nearly stable. It could be a classic example of unintended consequences.
Folks used to put 100 W incandescent bulbs in their porch lights. As power costs went up, they tended not to leave them on all night but turned them off. Then, as security concerns increased, folks went out and bought high-efficiency mercury vapor, low-pressure sodium vapor and high-pressure sodium vapor yard lights (even in very rural areas; many farms have pole-mounted yard lights that are activated at sunset). Likewise, you now have thousands of solar-powered sidewalk lights. In homes, people are now lighting patios with multiple 12 W CFL bulbs that they think nothing of leaving on for hours; even though the bulbs individually have lower power consumption, their luminosity has gone up. Cities and businesses have done the same thing for crime prevention: over the last few decades they have added hundreds of high-power street and area lights using high-efficiency sodium vapor lamps in the name of improved citizen safety and traffic safety. Now we have a move underway to high-intensity LED lighting.
It would be very interesting if the astronomy community interested in “Dark skies”, have collected data regarding local back scatter light levels in communities that can be compared to local population levels.
Larry

Cassandra King
February 26, 2010 10:10 pm

Climate science is a money-making machine like any other, wholly dependent on lavish funding to find and quantify a link between human activity and climate. These people were certain to find the link they were looking for, whether it existed or not.
If your livelihood depends on seeing something, then chances are you see it whether it exists or not, and if continuation of the funding stream is dependent on finding something, then find it you will.
Science has always depended on financial support. Control the grant bodies and you effectively control the scientists and the science; since those who demand proof of AGW/MMCC/AAM now control who gets funding, it becomes clear that scientists will follow the funding stream and look to provide results that will allow the continuation of that funding stream.
Climate science grew too big too fast, fed with too much money that was streamed to those scientists who could find evidence of warming and human involvement whether it existed or not. Human nature was understood by the funding agencies and exploited with a ruthless determination.
Human nature is quantifiable and easily understood; the people who have used science for their own ends need to be unmasked and brought to book.

Robert Kral
February 26, 2010 10:10 pm

Bravo! This is exactly the kind of analysis we need to see repeated over and over. I work in a different field of science, in which arbitrary “adjustments” to perfectly credible raw data would result in, at best, failure and, at worst, criminal prosecution. This study begs the most essential question of all: what do the raw data records of truly rural weather stations around the world show? We have to begin there and figure out the rest. If the raw data show nothing outside the realm of known natural variation, then AGW immediately becomes a mythical construct devised by people who have an agenda that does not involve objective truth.

Ian H
February 26, 2010 10:18 pm

It seems to me that we are observing the birth of a new science, the scientific study of the methods of climate scientists.

RockyRoad
February 26, 2010 10:26 pm

Although the following was published two months ago on Dec 17, 2009, it is even more applicable today:
http://www.climategate.com/u-s-lawyers-get-their-legal-briefs-in-order
Indeed, I’d say the lawyers on these cases are having trouble keeping up with all the juicy revelations coming from so many directions. RICO strikes again!
In the meantime, the Warmers will WISH it was only as bad as the Big Tobacco cases (and how ironic–the Warmers are the equivalent of Big Tobacco, not the realists/dissidents).

kadaka
February 26, 2010 10:53 pm

Eric Gamberg (19:47:25) :
Don’t forget that the surfacestations.org survey results are really only addressing the conditions at the site at the time of the survey. The history of the site and its locations and equipment are not rigorously examined.
REPLY: We may be able to do this if NCDC will give me access to B44 forms which are top view site sketches and description of surroundings, but so far they have not made them available. -A

Will this provide what you need?

You may obtain copies of station history documents submitted prior to January 2001, with the personally identifiable information obliterated, at the NCDC’s cost of reproduction. This is the same information accessible via MMS at no charge. For station history submitted after January 2001, MMS (https://mi3.ncdc.noaa.gov/mi3qry) is the only delivery option.

It says above that:

Individual NWS B44 forms are not publicly accessible online because they contain personally identifiable information such as observer name, address, phone number and gender. The cooperative observers are volunteers who donate their time in the interests of the public good with a reasonable expectation that their personal information will remain private.

So, does that mean one can or cannot get the personal-info-redacted B44 forms online, and would they have the sketches and descriptions you need?

John F. Hultquist
February 26, 2010 11:01 pm

c james (17:26:25) :
James Sexton (16:46:41) growth and decline of towns
There are quite a number of towns of historical interest, although not necessarily relevant to the current weather station kerfuffle. Try these,
Silver —- City, Nevada
http://en.wikipedia.org/wiki/Virginia_City,_Nevada
Oil —- Pithole, Pennsylvania
http://en.wikipedia.org/wiki/Pithole,_Pennsylvania

John F. Hultquist
February 26, 2010 11:02 pm

Virginia City, Nevada

D. Patterson
February 26, 2010 11:05 pm

David Schnare (20:29:56) :
Data sources for the analysis.
The “raw” data come through the NOAA Locate Station portal at:
http://www.ncdc.noaa.gov/oa/climate/stationlocator.html
I pulled from “Daily/Monthly/Annual Virginia Climatological Data” which are the original reports from the various weather stations, raw and unadjusted.

The climate science community (CRU, NOAA, GISS, et al) has a distressingly bad habit of using terminology to describe data as “robust”, “raw”, and “unadjusted” when it is, in fact, anything but robust, raw, or unadjusted. Consequently, every time I see such terminology being used, I have to sit up and take notice, because it usually simply isn’t so. The USHCN and GHCN are compiled from daily summaries that include extensive adjustments to the raw, meaning original, observations. In this example, I have to note that the daily summary is itself a summary of supposedly “raw and unadjusted” observations for a single daily period, but is actually a summary of previously adjusted observation records which are too often not actual raw observation values.
The number and type of adjustments used to compile the daily summary varies considerably according to the specific dataset, observation series, and stations being used. Which dataset and dataset number are you referring to in the above comment? Are they perhaps Dataset 3200, Dataset 3210, or another dataset?
Readers need to understand and approach claims of “raw and unadjusted” observations of surface, marine, and upper air observations with extreme caution and skepticism. Actual “raw and unadjusted” observations used in the USHCN and GHCN datasets are extremely difficult to access, and in certain instances such “raw observations” may have been permanently destroyed and are now unrecoverable. Sources at the National Climatic Data Center (NCDC) have reported that certain unidentified original manuscript observation records have been destroyed by water damage and bookworm type insect infestations. In the event the destroyed documents may include observational records incorporated as adjusted values in the USHCN and GHCN datasets, the actual “raw” data may not ever be recoverable in those instances for the purpose of reconstructing those datasets with the actual raw observations. The extent of this problem is unknown.
What is needed is a well researched diagram detailing every step involved in capturing the temperature observation, recording the observation, performing QC (Quality Control) corrections and adjustments, and all subsequent adjustments and changes applied for summarizations, transcriptions, digitizations, and compilation of the datasets. Doing so will no doubt astound most unsuspecting people when they can see just how many alterations have been applied to a single true “raw” temperature observation taken in one minute, of one hour, of one day, in one year.
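The diagram asked for above is essentially an audit trail. As a toy sketch (not NCDC’s code, and with invented adjustment names and magnitudes), one can model each alteration as a named step and record every intermediate value, so the path from the raw observation to the published number is visible:

```python
def apply_pipeline(raw_value, steps):
    """steps: list of (name, function); returns the final value plus a full
    audit trail of (step name, value after step)."""
    value = raw_value
    trail = [("raw observation", raw_value)]
    for name, func in steps:
        value = func(value)
        trail.append((name, value))
    return value, trail

# Hypothetical adjustments for illustration only (not real NCDC magnitudes):
steps = [
    ("time-of-observation bias", lambda t: t + 0.2),
    ("equipment change (MMTS)",  lambda t: t - 0.1),
    ("station move",             lambda t: t + 0.3),
]
final, trail = apply_pipeline(20.0, steps)
for name, value in trail:
    print(f"{name}: {value:.1f}")
```

The point of the exercise is the trail itself: with serial, undocumented adjustments the intermediate values are lost, and only a bookkeeping discipline like this would let anyone reconstruct what was done to a single observation.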

Pete H
February 26, 2010 11:10 pm

“based on the night time luminosity of the area within which the station is located.”
A couple of guys have picked up on the above statement.
I sat back for a while and tried to work out what the person who came up with this one was …struggling… to do! Was it just some AGW modeller trying to fit his/her data into the argument put forward by Anthony, or is it a true, peer-reviewed piece of research? After all, surely no real research could be done just by assuming lights on equals effect.
Come on RC people. Show us the research linking “luminosity of the area with relation to UHI effect”.
I am not being sarcastic, I really am interested, as I am sure are many here, in the research, if it is available.

D. Patterson
February 26, 2010 11:23 pm

Pete H (23:10:24)
Details of the night lights was discussed extensively at Climateaudit some time ago.

Nick Stokes
February 26, 2010 11:29 pm

Re: RedS10 (Feb 26 17:11),
I believe the post is wrong. The sources of data for GIStemp are:
1. Raw GHCN (v2.mean)
2. USHCN noFIL – ie without FILnet adjustment. It includes TOBS, maxmin and SHAP adjustments, according to the code. TOBS, time of observation, is directly based on the station records, as is SHAP. SHAP adjusts for known events in the station history, and would include an adjustment for MMTS.
DALE is in both sets, so I’m not sure which set of data was used. But the “NCDC adjustments” which I presumes means GHCN, would not have been used. In fact, I don’t know of any published index that uses the GHCN adjustments.

juan
February 26, 2010 11:32 pm

The NOAA /GISS data records are byzantine and I am struggling to understand them. Still, mere ignorance never yet kept an American quiet, and I am sure I will be promptly set straight by my fellow commenters.
MMS marks the beginning of COOP 44208 in 1948. During the following 62 years it shows 2 location changes, one of nearly half a mile.
boballab has written a clear exposition of how to look up station data on the USHCN site, with emphasis on flags that show data has been estimated rather than measured: http://boballab.wordpress.com/2010/02/19/before-using-temperature-data-read-the-fine-print/
The two flags showing estimation are E and X. The record for Dale Enterprise shows the following:
oct 1913 E
sep 1916 to nov 1917 X
mar 1932 to sep 1932 X
dec 1949 to sep 1951 X
sep 1959 X
mar 1971 E
sep 1972 to jan 1973 X
jan 1983 E
may 2007 X
jun 2007 X
jul 2007 to dec 2008 X
This adds up to 73 months of infilled data, not 9. Have I got this wrong? If not, how does it affect the analysis?
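The tally above is easy to mechanize. A sketch, with made-up (year, month, flag) tuples standing in for the real USHCN monthly records that boballab’s post walks through:

```python
# Flags that mark a monthly value as estimated/infilled rather than measured,
# per the USHCN convention described in the linked post.
ESTIMATION_FLAGS = {"E", "X"}

def count_estimated(records):
    """records: iterable of (year, month, flag); returns the number of
    months whose value carries an estimation flag."""
    return sum(1 for _year, _month, flag in records if flag in ESTIMATION_FLAGS)

# Invented sample covering a few of the periods listed above:
sample = [(1913, 10, "E"), (1916, 9, "X"), (1916, 10, ""), (1932, 3, "X")]
print(count_estimated(sample))  # -> 3
```

Run over the full Dale Enterprise record, a count like this is what would settle the 9-months-versus-73-months question raised here.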

Jeff B.
February 26, 2010 11:38 pm

Frantic attempts by the NOAA and NASA to make the data fit their hypothesis. Thankfully the truth from folks like Schnare and Watts, is coming out.

E.M.Smith
Editor
February 26, 2010 11:43 pm

Dan (17:24:15) : Discussion over surface station data quality should be placed in the context of the satellite data record,
Which is really kind of a dumb thing to do, given that the recent past is when the USHCN and GHCN data are left alone, and the period prior to the satellite era is where all the “cooking by cooling the far past” takes place.
So I’d MUCH rather see what the author has done, exposing the impact of the “rewrite the pre-satellite past” than admire the period where things are left more alone…
BTW, to understand why the “average temperature” now can be up a smidgeon but the world is cooling do this:
Take 10 coins. 2 nickels and 8 pennies. Now, put the nickels in your right pocket and the pennies in your left. Which pocket has the most COINS? That is the temperature measurement. Which pocket has the most VALUE? That is the HEAT measurement.
The “still warm from 20 years of hot sun and warming PDO” is putting a lot of warm air pennies into the air over the large ocean area. It then runs up to the poles and dumps a bucket load of heat out to space. This cold air then heads south over the land and freezes a boat load of water into snow. That “several feet of snow” is the nickels.
So your satellite is just counting the coins, but the FEET of snow and the snow extent all the way to Florida is looking at the denominations…
Or put less metaphorically: The heat content of the air over an ocean is not as much as the heat lost making tons of snow over the land, even if the average temperatures x area is greater over the ocean.
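The coins analogy can be put in numbers. In this invented two-region example, the area-weighted mean temperature change is positive while total heat falls, because freezing water into snow sheds roughly 334 kJ per kilogram of latent heat that no thermometer average captures. All areas, masses and temperature changes below are illustrative only.

```python
LATENT_HEAT_FUSION = 334e3   # J per kg of water frozen (approximate)
AIR_HEAT_CAPACITY = 1005.0   # J per kg per kelvin (dry air, approximate)

def mean_temperature_change(regions):
    """Area-weighted average temperature change, as a thermometer index sees it."""
    total_area = sum(r["area"] for r in regions)
    return sum(r["area"] * r["dT"] for r in regions) / total_area

# Invented regions: a large warming ocean, a smaller cooling land mass
# that also freezes a lot of water into snow.
ocean = {"area": 8.0, "dT": +0.5, "air_mass": 1e12, "snow_kg": 0.0}
land  = {"area": 2.0, "dT": -0.5, "air_mass": 1e12, "snow_kg": 5e9}
regions = [ocean, land]

dT_mean = mean_temperature_change(regions)   # the "coin count": +0.3
heat_change = sum(r["air_mass"] * AIR_HEAT_CAPACITY * r["dT"] for r in regions)
heat_change -= land["snow_kg"] * LATENT_HEAT_FUSION  # heat shed making snow

print(round(dT_mean, 2), heat_change < 0)  # -> 0.3 True
```

So the average reads warmer (more coins in one pocket) even as the heat budget (the value of the coins) goes down, which is the metaphor’s whole point.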

John F. Hultquist
February 26, 2010 11:53 pm
hotrod ( Larry L )
February 26, 2010 11:59 pm

There have been a number of historical mass migrations of populations here in the U.S. that would affect population density on a regional level over the last century.
Among these would be the dust bowl period, when large numbers of people moved out of rural mid-America in the dust bowl regions (Colorado, Kansas, Oklahoma, Texas, etc.).
In the late 1970’s (about the time the satellite records began) there was a substantial shift of people out of the north eastern states (rust belt) both due to economic issues from the high inflation and job losses in that period and the severe cold winters that area experienced in the 1970’s when the popular press was pitching global cooling and a coming ice age.
Census records give snap shots on a decadal scale, but a higher resolution proxy might be local housing starts, school enrollments, building permits, sewage flow, utility usage, retail grocery sales volumes etc. to try to give a cross check on local population trends. The most likely time for sudden population movements would be the deep recession periods with high layoff rates (plant shut downs).
In the Eastern plains of Colorado there are a lot of small towns that have essentially become ghost towns population wise, but they still have high lighting levels in some areas due to interstate businesses serving travelers. Also many farmers and ranchers put up mercury vapor yard lights.
In the mid-1970s I had occasion to drive from Denver down to Rocky Ford, Colorado on a regular basis. I took I-70 to Highway 71 south out of Limon, Colorado. There is an area south of the intersection of Highway 71 and Highway 94 where you could stop the car, get out and turn in a full circle, and not see a single artificial light late at night. But you could see the sky glow of both Colorado Springs and the Denver metro area, which are 50-100 miles away over the horizon.
Last time I was down there in the 1990’s that was no longer the case, due to farm yard lights which come on at dusk and go off at dawn. The actual population has not changed a great deal, but the local light environment has.
Larry

pwl
February 27, 2010 12:07 am

Wow, barnacles layered upon barnacles! No wonder there is an alleged warming!
Is it possible they didn’t realize that they were layering warming distortion after warming distortion upon the data? How could they have missed this, messing it up so much?
NCDC (NOAA) and GISS (NASA) had better hope that they didn’t leave an audit trail otherwise they’ve been caught with yet another fraudulent invention of data. I wonder what they say now about this?
pwl
http://PathsToKnowledge.NET

steven mosher
February 27, 2010 12:10 am

The simple spot check folks did of nightlights more than a year ago confirmed that it did not “pick out” stations that were rural; it classified some rural sites as “small town” and some urban sites as rural. Nightlights is a PROXY for population density, and population density is a proxy for UHI. The only method that has a chance at working with high quality is a PHYSICAL INSPECTION of the site.
It’s tedious work. It’s not as sexy as GCM work. It’s nice to try to do this work by looking at satellite photos in your cozy ivory tower. But nothing beats feet on the street.

E.M.Smith
Editor
February 27, 2010 12:14 am

David Schnare (20:29:56) : Data sources for the analysis.
The USHCN adjusted data are pulled from the GISS site. GISS states that it uses the newest available data from USHCN data set.

Be careful with that. It would be better to get the USHCN data directly from NCDC at their FTP site. The data available from that stage of GIStemp is the combination of GHCN and USHCN so for the (about 136) USA stations that are in both you will be getting an odd ‘sort of an average’ merger of the two (that have different modification histories). For the other stations, you ought to be OK (and since the GHCN stations are highly biased toward urban and airports, I doubt you would hit one in a rural pristine selection criteria).
More importantly, GISS swapped from the USHCN (that ends in 5/2007) to USHCN Version 2 (which has a different and ‘warmer’ adjustment history) on about 15 Nov 2009. So depending on when you grabbed your data, you got either USHCN or USHCN.v2 with no obvious notice (you had to read the updates log about software changes).

I used the “raw GHCN data + USHCN corrections” as the USHCN data, and used “after homogeneity adjustment” as the GISS data.

This is pulling GISS data from two different points in the GIStemp processing. The first is ‘after STEP0’ that mostly just merges USHCN.v2 and GHCN and tosses away anything from prior to 1880. (It does a very odd merger. If it has only one, it passes that one through unchanged. If it has both, it “unadjusts” the USHCN and then semi-averages it smoothing any splices with the GHCN for that station). If your station is not in GHCN (and it ought not to be IMHO) then you ought to be getting the USHCN (if prior to Nov 2009 pull date) or USHCN.v2 (if done after Nov 2009) with only the truncate at 1880 change.
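For readers who want to see what an offset-style merger does to a series, here is a sketch (invented station values, and this is the general technique, not the actual GIStemp STEP0 code, which is more involved): shift one record so the two agree on average over their overlap years, then average them where both exist.

```python
def combine(a, b):
    """a, b: dicts mapping year -> annual mean temp (deg C).
    Merge b into a after removing the mean offset over the overlap."""
    overlap = sorted(set(a) & set(b))
    if not overlap:
        return dict(a, **b)          # no overlap: simple splice
    # offset that makes b agree with a on average over the overlap
    offset = sum(a[y] - b[y] for y in overlap) / len(overlap)
    merged = {}
    for y in sorted(set(a) | set(b)):
        vals = []
        if y in a:
            vals.append(a[y])
        if y in b:
            vals.append(b[y] + offset)
        merged[y] = sum(vals) / len(vals)
    return merged

ushcn = {1990: 11.2, 1991: 11.4, 1992: 11.1}
ghcn  = {1991: 11.9, 1992: 11.6, 1993: 11.3}   # runs ~0.5 C warm
print(combine(ushcn, ghcn))
```

Note that where only one source exists the value passes through with the offset baked in, which is why the modification history of each input matters so much.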
“After homogenizing” is pulled from after the STEP1 homogenizing and, I believe, after the STEP2 UHI as well, and will include various infill and related impacts.

GISS offers a third option: “after combining sources at same location”. In the Dale Enterprise case study, this data was identical to the “raw GHCN data + USHCN corrections”, which would be expected for a true rural data set, as there are no other sources at the same location in such a case.

The ‘as combined’ ought to give you after STEP1 and before STEP2 I think… it’s a bit of a guess since they don’t document it exactly on the web site.
But for your station the only “combining” would be if there are multiple “modification history flags” for that site. That is, if the 12th digit of the StationID comes in more varieties than “0”. That ought to have happened at the time the instrument was changed from LIG to electronic, but the GISS site shows only the one 12 digit StationID, so you are getting no ‘combining’.
GISS just picks up the USHCN.v2 data from NCDC and changes the StationID and format, so it’s basically the same data, but the provenance has another step in it… so things like that swap from USHCN to USHCN.v2 can happen to you. FWIW, I’ve heard an assertion that GHCN is supposed to get some kind of swap “soon” too. So keep eyes open and watch the change log at GISS…
(They sure do like to keep those walnut shells moving… )

E.M.Smith
Editor
February 27, 2010 12:18 am

OT, but there has been an 8.8 Quake in Chile and there are Tsunami watches being issued for all over the place in the pacific… be advised.

kwik
February 27, 2010 12:29 am

So Global Warming IS happening?
But it’s actually NASA that is doing it, not CO2?
That means NASA doesn’t deserve any funding.
They deserve to be taxed.

Nick Stokes
February 27, 2010 12:32 am

Re: hotrod ( Larry L ) (Feb 26 22:05),
“undocumented adjustment on top of undocumented adjustment”
The GHCN, USHCN and GISS datasets and their adjustments are thoroughly documented. The problem is that people who write posts here on the topic don’t seem to read it.
GHCN documentation is here. USHCN documentation is here. GIStemp documentation is here (with over 20 years of papers), and you can get their code there as well.

Ruhroh
February 27, 2010 1:02 am

Hey Cheif,
What’s your take on that Raw vs. Raw blinker from rockyhigh66?
i.e.,
was there some kind of labelling accident?
was one v1 and the other v2?
The graphics just come roaring directly through when making a blinking GIF, so, not much opportunity for miscreants to monkey around with labels.
My consensus bro is very dubious of the provenance of those blinkers, although perhaps not for the reasons that might concern the midnight aspie crew…
RR

Tenuc
February 27, 2010 1:31 am

Thank you Dr. Schnare for an interesting and easy to understand explanation of why the current ‘standard’ historic temperature data sets like GISS, NCDC and (I suspect) Hadcrut are not fit for purpose.
Your choice of Dale Enterprise, Virginia as an example was very good, because as well as having an almost complete long-term continuous record, the same family had been reporting the temperature there since 1868 (as of Spring 2002).
Details of the family history are available on the link below. This ‘provenance’ for Dale Enterprise weather station is very rare, and I’m sure the family would be interested to see your results – who knows, they may still be using the old mercury thermometer to record the temperature there?
http://origin.www.erh.noaa.gov/lwx/reporter/Spring2002.pdf

Alan S
February 27, 2010 1:51 am
steven mosher
February 27, 2010 2:06 am

Hansen’s nightlights: Imhoff 1997 is a must read if you want to understand the stuff. also, try this
http://www.ngdc.noaa.gov/dmsp/pubs/ElvidgeEtAl-Global_urban_mapping_20090618.pdf
ISA might be better than nightlights as a UHI proxy
http://www.ngdc.noaa.gov/dmsp/download_global_isa.html

Peter Plail
February 27, 2010 2:07 am

What Dr Schnare has done for Dale Enterprise ( and what others have done for other individual sites) can presumably be repeated for other sites using the same data sources, methods, calculations and presentation of results.
This would be a massive undertaking for an individual, but on this site I suspect you would have a lot of willing and able volunteers who just need guidance as to how to accomplish the work.
Would someone consider publishing a document to guide such willing volunteers, together with any particular formulae or algorithms used in processing the data?
Such a methodology could be openly debated here until there is consensus.
At this stage individuals could then volunteer to analyse a specific site or sites. Initially, some degree of quality control can be achieved by having more than one volunteer analysing each site and then comparing results.
I am sure that between us we have sufficient skills and knowledge to plan, set up and implement such an “open source” project that warmists and sceptics alike can contribute to. My quest is to achieve some level of truth that is acceptable to both sides of the debate.

brc
February 27, 2010 2:25 am

I’ve got a question: which came first, the warming bias adjustments or the AGW theory? Is it possible that the adjustments were made, then people looked at the temperature records and went ‘hey, that’s warming up. Maybe CO2 is causing it? I can’t think of anything else it could be.’

JustPassing
February 27, 2010 2:33 am

Panel Discussion on “Climategate” – Haas School
What Should We Learn from Climategate? Panel discussion with Maximilian Auffhammer, Associate Professor, Agricultural & Resource Economics; Bill Collins, Department Head, Climate Science, Lawrence Berkeley National Laboratory; Rich Muller, Professor of Physics, author, Physics for Future Presidents; Margaret Torn, Program Head, Climate and Carbon Sciences, Lawrence Berkeley National Laboratory on questions about the integrity of the peer review process for climate change research. Presented at the Haas School, UC Berkeley, by the Energy Institute at Haas, Berkeley Energy & Resources Collaborative, and Climate & Energy Policy Institute. Moderated by Severin Borenstein, Co-Director, Energy Institute at Haas. (January 26, 2010)

jaymam
February 27, 2010 2:37 am

Jerry Gustafson (19:57:21)
Yes it is important to know how the temperature is measured. That should be stated on graphs and almost never is.
At sites I’ve looked at, for some unexplained reason the daily mean temperature trends upward while the 9am temperature remains the same.
i.e. if 9am temperatures are used there is no warming. Surely this is important and requires further study?

E.M.Smith
Editor
February 27, 2010 2:39 am

Ruhroh (01:02:41) :
Hey Cheif,

Assuming that is me (Chief though…)

What’s your take on that Raw vs. Raw blinker from rockyhigh66?

I don’t know to what this refers. I did a search of the page for rockyhigh66 and it is only in your posting. So I’m not sure what to look at…
Is it on a different thread?

rbateman
February 27, 2010 2:42 am

Speaking of California’s distant past:
http://www.kcbs.com/bayareanews/Scientists-Say-Storms-With-Devastating-as-Earthqua/6456044
The winter of 1861-62 was known as the “Inland Sea”. So much water came out of the hills so fast that it was months before the waters went down in the Central Valley. 300-400% of normal precip.
Which has exactly nothing whatsoever to do with CO2.

rbateman
February 27, 2010 2:52 am

E.M.Smith (00:18:54) :
Yes, and the faults in Calif. are in overdrive right now.

jmrSudbury
February 27, 2010 3:00 am

Looking at that satellite shot, I wondered if the green arrow pointed to the weather station and if that was a bush or a tree beside it. I thought I would check out the Dale Enterprise, Virginia location at the http://www.surfacestations.org/ site, but it says, “No web site is configured at this address.”
John M Reynolds

Adam Gallon
February 27, 2010 3:29 am

http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc3902.htm
“Memorandum submitted by the Institute of Physics (CRU 39)”
This may have been noted before, if so, please snip!

E.M.Smith
Editor
February 27, 2010 3:32 am

juan (23:32:33) : The two flags showing estimation are E and X. The record for Dale Enterprise shows the following:
Ah, yes… one of the things that just drives me up a wall. That whole “what data is real and what is fabricated” problem.
BTW, I spent a long time learning that “raw” means “cooked” and “robust” means “close enough or guessed well”… Along the way I figured out that none of the data used by all the temperature series folks can possibly be “raw”.
It’s really fairly simple once you think about it. They all use a “monthly mean” that is the average of (MIN+MAX)/2 over the days available, so by definition those monthly means are CONSTRUCTED and not “raw”. It’s also a bit unclear how many days can have missing values or estimated values and still give you a Monthly Mean value in the “raw” dataset.
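To make the construction concrete: the monthly value is just the average of daily (MIN+MAX)/2 over whichever days are present. A sketch with invented values; the missing-day threshold here is an assumption for illustration, not NCDC’s actual rule.

```python
def monthly_mean(days, max_missing=9):
    """days: one entry per day, either a (tmin, tmax) tuple or None
    for a missing observation. The max_missing threshold is an
    assumption, not the official NCDC rule."""
    present = [d for d in days if d is not None]
    if len(days) - len(present) > max_missing:
        return None                     # too many holes: month is "missing"
    daily = [(tmin + tmax) / 2 for tmin, tmax in present]
    return sum(daily) / len(daily)

# a 31-day month with 2 missing days still yields a "raw" monthly mean
march = [(2.0, 12.0)] * 28 + [None, (3.0, 15.0), None]
print(round(monthly_mean(march), 2))    # -> 7.07
```

The point is that even before any adjustment, the “raw” monthly figure is already a derived quantity from whatever days happened to be observed.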
Then it goes on to another step where some things are E (Estimated) and others are X, a different kind of estimated. But in both of those cases the data is not flagged as “missing”, since it is present, just fabricated….
FWIW, the NOAA guidelines allow observers to “estimate” a value if there is no reading for that day and just put it on the paper form. One hopes that one of the “Estimated” flags is telling you it was just made up on the spot, while others might tell you if it was made up after the fact by programmatic means like FILnet.
All this “data” is what gets fed into GIStemp, which uses it to “in-fill” missing values (that isn’t an estimate, it’s an in-fill, and that is different from the FILnet filling in of missing values, that one presumes is not an estimate, since it’s a fill in … ) but that then eventually has some homogenizing done, that isn’t an estimate either (but can be based on estimated data), but also isn’t a FILnet… but does add data that are missing to segments that need it. And eventually this goes to the anomaly mapping step that adds sea surface anomalies from “optimal interpolations” that are not estimates, nor FILnet, nor infill, but just fill in the missing grid boxes…
Sadly, I must report that the prior paragraph is NOT humor, nor is it overstating things. In fact, I suspect I’ve forgotten a few steps of data fabrication…
And folks wonder why I don’t trust the 1/10 C place in GIStemp product…
At any rate, by the time the data are in the GISS web site, all that has been swept under the rug and all that is available on the “data” page is the data that is present at that STEP of GIStemp with no provenance information about what bits are fabricated by various means in some prior step of the process. This gets worse the more STEPS you go through. In some cases I’ve seen what look like whole years of data made up from whole cloth for individual sites. Part of why I’d advise to start from as far upstream as possible for an analysis.
With all that said, I doubt that it has any impact on the analysis or the conclusions presented here. The fill in process looks to add ‘reasonable’ values with the major fault being that it clips peaks. That is, it will never estimate or fill in something like the 1998 hot spikes.
But yeah, the data are “holey data” even for ideal stations. Just leaves you to wonder how bad it is for the bad stations…
BTW, in making anomaly maps, GIStemp makes ‘seasonal means’ then uses them to make annual means. As I read the code, a seasonal mean can be missing one whole month and an annual mean can be missing one whole season. IIRC, with all the rules accounted for, you need a minimum of 7 months of data to make an annual mean, but you don’t have to have an actual winter season… So by the time you reach annual anomalies, the data have been stretched Waaaayyyy thin. Even the estimated FILnet filled in homogenized in-fill optimal interpolation “data”…
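A toy version of that season/annual cascade, under assumed thresholds (2 of 3 months per season, 3 of 4 seasons per year; check the GIStemp source before relying on these exact numbers): a year with an entire season absent can still produce an “annual mean”.

```python
SEASONS = {"DJF": (12, 1, 2), "MAM": (3, 4, 5),
           "JJA": (6, 7, 8),  "SON": (9, 10, 11)}

def seasonal_mean(monthly, months):
    """monthly: dict month-number -> mean temp; None/absent = missing.
    A season qualifies with 2 of its 3 months (assumed threshold)."""
    vals = [monthly[m] for m in months if monthly.get(m) is not None]
    return sum(vals) / len(vals) if len(vals) >= 2 else None

def annual_mean(monthly):
    """A year qualifies with 3 of 4 seasons (assumed threshold)."""
    svals = [seasonal_mean(monthly, m) for m in SEASONS.values()]
    svals = [s for s in svals if s is not None]
    return sum(svals) / len(svals) if len(svals) >= 3 else None

# a year with no December, January or February data at all...
monthly = {m: 10.0 for m in range(3, 12)}  # Mar..Nov only
print(annual_mean(monthly))                # -> 10.0, winter never observed
```

With these thresholds a qualifying “annual mean” can rest on substantially fewer than twelve real months, which is the thinness being described above.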
These folks:
http://diggingintheclay.blogspot.com/2010/02/of-missing-temperatures-and-filled-in.html
find that the percentage of missing data runs about 50% especially in recent decades for most regions of the world. It’s well worth a read. Just make sure the dishes are put away, the dog is out of the room, and you have a stiff drink handy…
But I think they only looked at the -9999 missing data flag too. And have not accounted for the E and X records as ‘missing’…
Now where did I leave that mug…

David
February 27, 2010 3:42 am

No Nick, the problem is that the outcome is utterly inconsistent with the apparent aim of the documentation, which by the way has not been updated in 13 years.

David
February 27, 2010 3:58 am

…and the explanation is completely generic, with no usable audit trail to individual sites, so to all intents and purposes the process is undocumented. If my business documentation looked like this, with a 13-year-old general statement of intent and no specific explanation of individual entries in my ledgers, I would be in jail.

February 27, 2010 4:10 am

Is it common practice to apply statistical testing to the linear regression? I would be curious about the p-values on those trends.
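That check is easy to bolt on: an ordinary least-squares slope and its t-statistic from first principles (the series below is synthetic, not the Dale Enterprise record); for roughly 100 annual values, |t| > 2 corresponds to about p < 0.05.

```python
def trend_with_t(years, temps):
    """OLS slope (deg per year) and its t-statistic."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    sxx = sum((y - my) ** 2 for y in years)
    sxy = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    slope = sxy / sxx
    intercept = mt - slope * my
    resid = [t - (intercept + slope * y) for y, t in zip(years, temps)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, slope / se if se > 0 else float("inf")

years = list(range(1900, 2000))
temps = [10 + 0.005 * (y - 1900) + ((y * 37) % 10 - 4.5) * 0.1
         for y in years]                 # weak trend + deterministic "noise"
slope, t_stat = trend_with_t(years, temps)
```

A small per-decade trend can still be highly significant if the residual scatter is modest, which is exactly why the p-value (or t-statistic) should accompany any quoted trend.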

CPT. Charles
February 27, 2010 4:45 am

Whoa. Mag 8.8 earthquake in Chile
http://kore.us/52ArTw
So, yesterday we had a quake off Okinawa, today in Chile.
That’s quicker than usual …. [the ‘Ring of Fire’ samba]

stephan
February 27, 2010 4:54 am

OT, but the recent spate of earthquakes (Haiti and now Chile): betcha anything it’s due to solar status.

February 27, 2010 5:33 am

We know there is a seemingly criminal aspect to the reporting, and the “team” of scientists must have a “money” reason for doing it, but, once again, show me the prosecuted ones. Mann, Hansen, IPCC, Jones (is he back yet?). Oh, and Gore. Until something happens to these guys we will still get the same old same old.
Money talks and right now the good ole boys are rolling in it and we are paying for their schemes. I’m frustrated. And angry. I despise being taken.

old construction worker
February 27, 2010 5:39 am

REPLY: We may be able to do this if NCDC will give me access to B44 forms which are top view site sketches and description of surroundings, but so far they have not made them available. -A
‘The Dale Enterprise………… electronic temperature sensor that was installed in 1994’
After reading about all the site problems, my construction experience kicks in. Knowing the government: 1) a letter went out to the sites informing personnel of the pending updates; 2) the site personnel were asked to submit a plot plan according to “code”; 3) after plot plan approval, site personnel were asked to acquire x number of bids; 4) award bid; 5) install equipment; 6) submit invoice for payment.
Questions
Did the contractor follow the site plan?
If changes were made, was the USHCN informed?
When did the USHCN start writing their adjustment programs?
Did the USHCN run a control study of “warming bias” from a cross section of sites?
I could go on and on with questions that need to be answered concerning equipment placement and the birth of “warming bias adjustment”.
I’m not saying there was any fraud. The whole situation reminds me of PPP.
PPP= Piss Poor Planning

maz2
February 27, 2010 6:10 am

“BBC tells the truth – shock horror! – iceberg not caused by global warming”
“So, why did they specifically rule out global warming in this report? Because the BBC is no longer unchallenged, is the answer. Along with Al Gore, the IPCC and those dedicated academics at the “University” of East Anglia and Pennsylvania State, they know that every claim they make is now scrutinised by experts and deconstructed virally on the Internet.
AGW sceptics are now connected globally. They know the websites to trust, they can draw on a huge team of specialists and experts, many of them better qualified than the scare-mongers in white coats. This network of sceptics has a global outreach far beyond the scope of the clapped-out BBC. In the case of the iceberg, the science was so basic and indisputable that even Auntie BBC, the wizened old crone who steals our money in the form of the licensing fee, dared not expose herself to world ridicule by making false claims. Instead, she opted to gain credit for honesty on this issue, while continuing to promote the AGW scam elsewhere.
In itself, it is a very small victory for the truth; but its implications are enormous. It tells us the scam merchants are on the back foot; they are in retreat; it will still require trench warfare for years to dislodge them, but the tide has turned. Just one sentence, almost a throwaway line, in a news report, but it signals an awareness that we are on their case. The AGW hysterics have irretrievably lost the battle for public opinion and now it is time to peel their layers of fabrication and falsehood like an onion.”
http://blogs.telegraph.co.uk/news/geraldwarner/100027663/bbc-tells-the-truth-shock-horror-iceberg-not-caused-by-global-warming/

Chris D.
February 27, 2010 6:39 am

Should check out what they’ve done to the Walhalla, SC record this time. Bizarre.
Meanwhile, they don’t touch the record for Saluda, SC at all. That MMTS is 10′ from the AC unit and right next to the parking lot. Came across that one by accident. I’ll upload a cell phone pic – need to return with a better camera.

Green Sand
February 27, 2010 6:41 am

O/T but I think of interest Times-online story University ‘tried to mislead MPs on climate change e-mails’
http://www.timesonline.co.uk/tol/news/environment/article7043566.ece
Even now they still cannot tell it as it is!

starzmom
February 27, 2010 6:46 am

This may be slightly off topic, but I want to thank Anthony and the moderators and everyone else who posts at this site. You all are a bunch of incredibly smart, educated and witty people. Most especially you have given the courage and the information to be much more willing to speak out about this issue in my daily life. Not only do I feel that I have learned a great deal, I have the tools in my brain to support my argument, and the confidence to carry it out.
THANK YOU!!

pyromancer76
February 27, 2010 6:49 am

@jothi85 (16:57:03) :
“If
1: this almost 1 deg C negative bias for pre 1990 temp records by the NCDC is common
and
2: this almost 1.5 deg C negative bias for pre 1900 temp records by the GISS is common
then the whole claim of AGW is a red herring.
once that is established…. follow the money. you will find the crooks & their enablers”
Yes, this is the essential next step. There must be no misplaced “mercy” without complete establishment of the principles of Transparency and Accountability or we can have no hope for Truth and Justice.
Those who lied, those who pushed draconian public policies, and those who aided and abetted these crimes, must be held accountable and punished. If the punishment fits the crime, then those in leadership positions of this conspiracy deserve not only to be fired but to be fined the amounts of the grants with which they carried out this nefarious business and to lose their pensions/retirement (from the public purse). Jail time is also indicated after proper adjudication.
How about all those so-called professional organizations? How about George Soros and the billions he has put into becoming global emperor through the “green movement” and “globalization”? How about the money behind the frauds that put an unqualified president at the head of the USA. (This is not a “birther” argument. This is straightforwardly constitutional. Anyone who runs for president must be a Natural Born Citizen — born of two American citizens. No parent of a US president can be a foreign national, period. This is the Commander-In-Chief, after all).
Please note: EARTH DAY –April 22 — IS ON LENIN’S BIRTHDAY. Wonder how that coincidence happened? This year is the 40th anniversary of Earth Day. From earthday.com
“Forty years after the first Earth Day, the world is in greater peril than ever. While climate change is the greatest challenge of our time, it also presents the greatest opportunity – an unprecedented opportunity to build a healthy, prosperous, clean energy economy now and for the future.
Earth Day 2010 can be a turning point to advance climate policy, energy efficiency, renewable energy and green jobs. Earth Day Network is galvanizing millions who make personal commitments to sustainability. Earth Day 2010 is a pivotal opportunity for individuals, corporations and governments to join together and create a global green economy. Join the more than one billion people in 190 countries that are taking action for Earth Day. ”
Do we have our work cut out for us! Glenn Reynolds suggests we might be ready for another Great Awakening–an American tradition (as is Arbor Day, I think).

JerryB
February 27, 2010 7:08 am

GISS uses NCDC temperature adjustments only for USHCN station data, not for the data of any other stations.
The Dale Enterprise station times of observation were either sunset or 18:00 hours for most years. In either case a TOB (time of observation bias), relative to midnight readings, would occur. A study of several Virginia stations’ data suggests that a TOB adjustment of at least 0.5 C would be appropriate for observations at 18:00 hours at such stations.
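Why an evening observation time biases the record warm shows up even in a toy simulation (all numbers invented; this illustrates the mechanism, not the Virginia analysis): a min/max thermometer reset at 18:00 can carry a hot afternoon into the next “day’s” maximum as well, while a midnight reset sits far from both daily extremes.

```python
import math

def hourly_temp(h):
    """Synthetic temperature for hour h: a diurnal cycle peaking
    near 16:00 plus a slow multi-day warm/cool swing."""
    day = h / 24
    return (10.0
            - 8.0 * math.cos(2 * math.pi * ((h % 24) - 4) / 24)
            + 6.0 * math.sin(2 * math.pi * day / 15))

def mean_of_daily_means(obs_hour, ndays=150):
    """Average of (min+max)/2 over ndays, with the min/max
    thermometer reset once a day at obs_hour."""
    means = []
    for d in range(ndays):
        start = d * 24 + obs_hour
        window = [hourly_temp(start + k) for k in range(24)]
        means.append((min(window) + max(window)) / 2)
    return sum(means) / len(means)

midnight = mean_of_daily_means(0)
evening = mean_of_daily_means(18)
# the 18:00 observer reads warmer on average than the midnight observer
```

During cooling spells, the warm 18:00 reading left on the thermometer can exceed the following afternoon’s peak, so the same warm air gets counted in two successive “days”; that is the double-counting a TOB correction is meant to remove.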

RockyRoad
February 27, 2010 7:09 am

Jack Morrow RE: Show me the prosecuted ones…
Consider the following (from my earlier post; although somewhat dated, it is just the beginning):
http://www.climategate.com/u-s-lawyers-get-their-legal-briefs-in-order
Problem is, the discovery process is being overwhelmed with evidence, but that is a good thing. Not a day goes by but some new damning evidence is exposed. I suspect “homogenization” of temperature data as illustrated in this thread will be one key exhibit once the actual algorithms used are found. You’re going to see trial lawyers have a big hand in all this, and fraudulent scientists and RICO targets will get taken down.
It was feared right after Climategate that the story wouldn’t have legs. Well, I believe it is running pretty fast right now and expanding daily. These are indeed exciting times!

Basil
Editor
February 27, 2010 7:16 am

David Schnare (20:29:56) :
Data sources for the analysis.
The “raw” data come through the NOAA Locate Station portal at:
http://www.ncdc.noaa.gov/oa/climate/stationlocator.html

Thank you for the response. If you used the monthly data, it is stated to have undergone some “quality control,” correct? I’m okay with that, and am not trying to be critical, but this is “raw” only by way of comparison to the subsequent processing the data is subjected to by NCDC or GISS. I’ve searched for an official explanation of what the “quality control” is that this monthly data has gone through, but have never found one. Would you happen to know what it means?
To others, the monthly data in these records (at least the ones I’ve examined), contain the following caveat:
“These data are quality controlled and may not be identical to the original observations”
Does anybody know what “quality controlled” at this stage of the data represents? Again, this is before all the NCDC/GISS type of adjustments.

Richard Garnache
February 27, 2010 7:17 am

Robert of Ottawa
This probably doesn’t need to be said to this group, but I have heard several questions about why they apply the UHI correction the way they do. Thank you for providing the correct answer. I’m ashamed I didn’t see that for myself.

MIke O
February 27, 2010 7:18 am

What is interesting about the UHI effect is that the suburban-area temperature lows are much lower than the urban ones. I live just outside Detroit (yes, it is still urban) and the nighttime temperature where I am can be 4 to 5 degrees cooler than in the city (or more). The interesting thing about this is that I live in a township-sized city (6 mi x 6 mi) with a population of 100,000 people, surrounded for 10 miles by similar communities and densities. This is in a county of 1.3 million people. What would the difference be between where I am and a well-sited rural station (Hmmm, I believe the Univ of Michigan maintains one of these not too far away)?
Bottom line, UHI is being significantly understated.

David, UK
February 27, 2010 7:41 am

OT, but as per Tom Fuller’s comment, I too am not seeing any graphics. I wonder if it’s an issue with the image host? I’ve noticed it on previous posts too.

Kevin Kilty
February 27, 2010 7:51 am

Robert of Ottawa (17:07:15) :
Yet another example of the earlier temperatures being lowered. This is becoming a pattern. It is a very clever device for amplifying a rising trend. The current temps are “accurate” so the non-inquisitive enviro-journalist will be convinced the temperature trends must be correct as well.
I am having severe doubts concerning the honesty of these crimatologists; up ’til now, I have always been willing to give them the benefit of doubt concerning deliberate manipulation. Now, I am seeing this lowering of early temperatures too often; perhaps incompetence is not the cause.

There is nothing inherently wrong with lowering earlier temperatures, as it mitigates having to endlessly apply corrections going forward. Remember, it is the temperature trend that gets everyone excited, and the trend would be unaffected by whichever end of the time series one adjusts. The real issue is the appropriateness of the corrections. One does not have to do a lot of research to figure out that things have gone wrong; for example, doing homogenization before UHI corrections is simply wrong, and this is what NCDC does.
That being said, making the temperature set reliable is a much more difficult task; as Steve Mosher says, you’ve got to visit the sites.

February 27, 2010 7:56 am

Regarding the 1994 change of equipment, I am not sure the step-change involved really cannot be identified, despite the (regrettable) absence of direct comparisons (i.e. running both thermometer and sensor in parallel for some time) and of other rural stations nearby.
All that matters in this situation is the difference between 1993 and 1995. No matter if surrounding stations are rural or urban, their status will not have changed a lot between these two years. What did change is the equipment in one place: thermometer throughout 1993, sensor throughout 1995, with 1994 as an ill-defined intermediate (that is, unless the method was changed on Jan. 1st).
So why not look at the average difference *between these two years* for the surrounding stations and compare it with that at Dale? If, say, 1993 was on average 0.5 degrees warmer than 1995 elsewhere in Virginia, but (apparently) 0.5 degrees colder in Dale, wouldn’t this justify saying all the readings from 1995 onwards should be raised by 1 degree, and 1994 by about half that amount? If the exact changeover date is known, one might even look at the very last thermometer reading and the very first sensor reading only, without any averaging, and compare these with simultaneous readings at nearby locations where the daily temperature could reasonably be expected to change in a similar way.
IMHO one does not necessarily need to take into account the (possibly UHI-contaminated) *trends* left and right of the change point, as we are not trying to correct for an ongoing change like UHI but a sudden switch at one point, with constant (but different) “before” and “after” conditions. Think of a sound engineer switching to a different mic in mid-recording, with the result of level dropping at that point in the recording – to correct for this, you would apply a constant boost from the moment of change onwards, not a continuous adjustment that changes and falsifies the dynamics of the recorded signal; and this stepwise adjustment would not depend on how these dynamics happened to be behaving throughout the recording.
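The 1993-versus-1995 comparison proposed above is a one-line difference-of-differences. A sketch with invented numbers (these are not real Dale Enterprise or Virginia values):

```python
def step_change(target, neighbors):
    """target, neighbors: dicts with annual means for 1993 and 1995.
    Returns the part of the target's 1993->1995 change not shared
    with the regional average, i.e. the candidate instrument step."""
    regional = sum(s[1995] - s[1993] for s in neighbors) / len(neighbors)
    local = target[1995] - target[1993]
    return local - regional

dale = {1993: 12.0, 1995: 11.5}                 # apparent 0.5 C drop
nearby = [{1993: 11.0, 1995: 11.5},
          {1993: 12.5, 1995: 13.0},
          {1993: 10.8, 1995: 11.3}]             # region warmed 0.5 C
print(step_change(dale, nearby))                # -> -1.0
```

Here the station dropped 0.5 C while the region warmed 0.5 C, so the new sensor appears to read about 1 C low, and a constant boost of that size from the changeover onward (the “new mic” correction described above) would be the candidate fix.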

hotrod ( Larry L )
February 27, 2010 8:00 am

Nick Stokes (00:32:07) :
Re: hotrod ( Larry L ) (Feb 26 22:05),
“undocumented adjustment on top of undocumented adjustment”
The GHCN, USHCN and GISS datasets and their adjustments are thoroughly documented. The problem is that people who write posts here on the topic don’t seem to read it.
GHCN documentation is here. USHCN documentation is here. GIStemp documentation is here (with over 20 years of papers), and you can get their code there as well.
David (03:42:40) :
No Nick, the problem is that the outcome is utterly inconsistent with the apparent aim of the documentation, which by the way has not been updated in 13 years.
David (03:58:44) :
…and the explanation is completely generic, with no usable audit trail to individual sites, so to all intents and purposes the process is undocumented. If my business documentation looked like this, with a 13-year-old general statement of intent and no specific explanation of individual entries in my ledgers, I would be in jail.

Nick I think you are missing my point here. Just because physical documentation exists, does not mean that the actual changes are effectively documented. The documentation only matters if it accurately describes what actually happens to the data in every case.
For example, if a person could pick a random station and predict before doing a forensic analysis what the adjustments will be, based on the above mentioned documentation, then you would have documented changes. However, it appears from what I have seen with these forensic examinations, that in every single case, the adjustment profile makes absolutely no rational sense when you look at it.
Old well maintained and sited locations which should have minimal or no adjustments for UHI, have all sorts of odd adjustments that defy explanation.
Old data might be adjusted up or down or left alone, not too old data might have something else done to it, and new temp data might be adjusted way up or not. The station might have bleed in from nearby stations as the numerical steps smear data into holes in the temperature map etc.
As E. M. Smith notes above, by the time you add in the unknowns about E and X flagged data, you have no clue about what is really going on with a station without doing a full autopsy on the station data, and even then you end up scratching your head about why a certain date range of data is adjusted one way or the other.
This reminds me of places I have worked where you had computer program run books on the shelf that meticulously documented what the program does, but the steps the documentation describes only actually existed and were performed for a brief window in time. The documentation was sometimes a figment of the imagination of the programmer regarding the steps he intended to implement, and when the code really hit the fan some steps got dropped, some got added, sometimes the code did not really do what the programmer intended it to do, and sometimes it got “fixed” at a later date so the documentation no longer has any meaningful relationship to what is actually happening. In that sort of situation you have the documentation referring to calls to databases that are no longer used, or other chapters of the document that no longer say what they did when they were referred to in the document you are reading.
In engineering and the mechanical trades like machinists it is very common to have blue prints that show you how the object was intended to be built but you many times find that the “as built” device or installation is totally different in important regards. I strongly suspect that the same sort of situation exists here, that the “as performed” adjustments are not the same as the intended and documented adjustments.
The only way to know for sure is to take a few well-sited and well-maintained reporting sites like this one, use the written documentation to describe what should happen, then do a detailed analysis and compare that to what really happens.
Larry

Kevin Kilty
February 27, 2010 8:02 am

Quite a large fraction of NCDC’s correction comes from trying to fix the time-of-observation bias (TOB). Unfortunately, the proper correction depends on each station’s actual temperature record, which is not how the correction is done. A person could get past the TOB correction by using first-order stations, as these are read on schedule, but then one encounters the UHI effect, which is most prominent in those records. However, an examination of just the best first-order stations across the U.S. ought to show some interesting results. I have looked at a handful of such records; in eastern Wyoming, for example, they show warming until the mid 20th century, mainly from increasing minimum temperatures, and just about dead flat afterward.

RockyRoad
February 27, 2010 8:05 am

Maybe the next set of stations investigated should be in California, since that’s considered the battleground for Cap and Trade.
http://www.ocregister.com/opinion/-236562–.html
“California has the most destructive and costly global warming law in the nation, if not the world. In a perverse way, it’s the governor’s crowning achievement. ”
The economic impact to the state of CA is HUGE! If it can be demonstrated that California temperatures show little or no warming and the data have been fudged, it might help them repeal AB32.

harrywr2
February 27, 2010 8:06 am

John C (18:52:56) :
“How do we know the satellite data is accurate? Because it closely matches the questionable surface data?”
The satellites don’t measure surface temperature; they measure the temperature of the lower troposphere.
The satellite trends tend to be lower than the other data sets. They don’t do UHI correction; as I believe, their sampling footprint is on the order of square miles, as opposed to some surface thermometers that stand in for areas of tens of thousands of square miles and sit at the airport.

Joshua Jones
February 27, 2010 8:10 am

To those suggesting this is evidence of fraudulent data manipulation,
Before you begin throwing out wild accusations, you really need to do a lot more analysis. Dr. Schnare did not answer the question of why this adjustment may have occurred, he only raised it. Now you need to investigate the situation more closely and see if there are any plausible reasons for the adjustment that do not amount to fraud.
For my money, I am betting the reason this station data was adjusted in this way is because it was out of synch with the data from the surrounding stations. If the trend from it is wildly different from the trend at stations just 30 or 40 miles away, whatever algorithms NASA uses to adjust the data may have identified this station as having a bias.
I did a quick check of the USHCN data for three of the stations closest to Dale Enterprise to see if this could possibly be the case. The trends at the surrounding stations are as follows:
Woodstock 2ne = +0.7 degrees per century
Charlottesville 2w = +0.7 degrees per century
Staunton Sewage Plant = +0.2 degrees per century
These are all positive, which leaves Dale Enterprise an obvious outlier with -0.3 degrees per century. So even if the adjustment is in error, there is at least one plausible explanation for it that doesn’t involve fraud.
Dr. Schnare never said this was evidence of fraud, only that he thinks this provides evidence that USHCN is not a high quality set of data. The rest of you guys should take a page from his book and not start slinging accusations of misconduct until you are sure there is no other explanation.
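For what it’s worth, the neighbor-comparison idea in this comment is easy to sketch. The snippet below uses the per-century trends quoted above; the z-score rule and the 2-sigma threshold are my own invention for illustration, not NASA’s or NOAA’s actual homogenization test.

```python
import statistics

# Trends (degrees per century) as quoted in the comment. The flagging rule
# below (a crude 2-sigma test against the neighbor spread) is a hypothetical
# stand-in, not the real GISS/USHCN algorithm.
neighbor_trends = {
    "Woodstock 2ne": 0.7,
    "Charlottesville 2w": 0.7,
    "Staunton Sewage Plant": 0.2,
}
target_name, target_trend = "Dale Enterprise", -0.3

mean = statistics.fmean(neighbor_trends.values())    # ~0.53 deg/century
spread = statistics.pstdev(neighbor_trends.values()) # ~0.24
z = (target_trend - mean) / spread                   # ~ -3.5
flagged = abs(z) > 2.0                               # Dale Enterprise is flagged
```

Under this toy rule, Dale Enterprise sits about 3.5 sigma below its neighbors, which is consistent with the comment’s guess that an automated consistency check, rather than fraud, could have triggered the adjustment.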

beng
February 27, 2010 8:36 am

*******
Dan (17:24:15) :
Discussion over surface station data quality should be placed in the context of the satellite data record, which has nothing to do with surface station records and which shows a slightly weaker but still similar global temperature trend over the satellite record: GISS = .168 C/decade vs. RSS (satellite) = .156 C/decade, and UAH (satellite) = .132 C/decade.
*******
But comparing surface with sat temps is apples and oranges to some extent. If you go by classical GHG theory, the temp trends both up and down in the upper troposphere (where satellites measure) are magnified compared to surface trends — nearly twice as great.

James Sexton
February 27, 2010 8:45 am

John F. Hultquist (23:01:36) :
“c james (17:26:25) :
James Sexton (16:46:41) growth and decline of towns
There are quite a number of towns of historical interest, although not necessarily relevant to the current weather station kerfuffle. Try these,
Silver — Virginia City, Nevada
http://en.wikipedia.org/wiki/Virginia_City,_Nevada
Oil — Pithole, Pennsylvania
http://en.wikipedia.org/wiki/Pithole,_Pennsylvania
True, Detroit is losing people in droves, but aren’t most of them moving to the outlying “greater Detroit area” as opposed to moving away entirely? As for Virginia City and Pithole, yes, at the time they were considered large, but only relative to the time period, with top population counts of 30,000 and 15,000 respectively. While obviously there would be some heat bias (if temps from those areas were used), the heat bias wouldn’t be as much as for towns with those populations in the present time (no air-conditioners, cars, etc.). So, no, I don’t think those examples apply. Of course, this leads to the question of adjustments being applied in an apparent “one size fits all” manner. Are global temp adjustments made based only on population in respect to the UHI? At any rate, the latter two examples would be very interesting if they had temp records going back to before, during, and after the booms and busts, but that is likely to be academic only.

Brian Dodge
February 27, 2010 8:46 am

@ _Jim (17:56:31) :
“Dan, what do the satellites measure (i.e., what do they ’see’)?
Surface temp?
How is this accomplished through overcast skies?”
From http://daac.gsfc.nasa.gov/AIRS/documentation/amsu_instrument_guide.shtml
“AMSU-A is primarily a temperature sounder that provides atmospheric information in the presence of clouds, which can be used to correct the infrared measurements for the effects of clouds. This is possible because microwave radiation passes, to a varying degree, through clouds – in contrast with visible and infrared radiation, which are stopped by all but the most tenuous clouds.”
@ keith winterkorn (19:56:07) : “it can be converted to “temperature” only by correlation with actual contemporaneous temperature measurements from the same site from which the radiation is transmitted.”
No, it doesn’t require calibration against measurements of its target.
“The second segment is a rapid scan covering a cold space view and an internal (warm) blackbody calibration target. ”
“can it separate radiation from paved surfaces, building exhausts, planes, etc., from some nearby grassy area where the ground station is found?” Yes. It looks at air temperature.
“AMSU-A1 has 12 channels in the 50-58 GHz oxygen absorption band which provide the primary temperature sounding capabilities.” If paved surfaces, exhausts, or grassy areas change the air temperature, the radiation in the oxygen bands will change and be detected by the AMSU receiver. It also measures other bands which provide surface, water vapor, and cloud-top info.

keith in hastings UK
February 27, 2010 8:57 am

Re: Dan (17:24:15) :
“Discussion over surface station data quality should be placed in the context of the satellite data record,”
Dan, you might want to do some reading about the very real challenges the satellite folk have in producing their temperature records. Unless I misremember, there are at least the following:
a) the sensors need repeated calibration, by pointing at known-temperature targets on the satellite. Small errors?
b) whether measuring slantwise or straight down, the processing assumes standard atmospheric temperature lapse rates. Localities do vary, so there are errors, especially in which layer of air has what temperature.
c) the time series is stitched together from records from different satellite systems, sensors, etc., so there are discontinuities.
d) because of the difficulty in converting the radiation measures to temperatures (see earlier posts), satellite measures have to some degree been cross-checked against surface thermometer data – I don’t know if this was a reliable exercise.
We are looking at tiny tiny trends – or trying to – so I’m not convinced yet about satellites!
But do your own research! Glad you posted.

JimAsh
February 27, 2010 9:11 am

This is going to take some screaming.
Average people (with whom I discuss these things) are propagandized to the degree that they are convinced that CO2 is a deadly poisonous gas, and do not know the difference between CO2 and CO.
These folks think they are being smothered by a deadly pollutant.
A counter-general education effort is called for.

RockyRoad
February 27, 2010 9:20 am

Jack Morrow RE: Show me the prosecuted ones…
Consider the following (from my earlier post; although somewhat dated, it is just the beginning):
http://www.climategate.com/u-s-lawyers-get-their-legal-briefs-in-order
Problem is, the discovery process is being overwhelmed with evidence but that is a good thing. Not a day goes by but some new damning evidence is exposed. I suspect “homogenization” of temperature data as illustrated in this thread will be one key exhibit once the actual algorithms used are found. You’re going to see trial lawyers have a big hand in all this and fraudulent scientists and RICO targets will get taken down.
It was feared right after Climategate that the story wouldn’t have legs. Well, I believe it is running pretty fast right now and expanding daily. These are indeed exciting times especially if one considers the latest pronouncement about climategate from the IOP.

John F. Hultquist
February 27, 2010 9:20 am

E.M.Smith (03:32:19) : E.M., you wrote:
Sadley, I must report that prior paragraph is NOT humor
Okay, that’s not fair. You had me laughing. Then this “not humor” bit. Now I feel like I just laughed at a funeral.
I’ll throw out this: Folks talk of urban and rural or even place X or place Y but such things are ill defined also. Boundaries change – such as when a city annexes a parcel of land that is near and, perhaps, being developed into a shopping mall. Say that is done in 1995. In the USA a census of April 2000 will have a different spatial base than the census of April 1990. These changes are documented but seldom used when a study is done trying to find relationships between population and some other factor, say luminosity over time. There are country to country variations to worry about if the study is international in scope.

February 27, 2010 9:29 am


Kevin Kilty (08:02:31) :
Quite a large fraction of NCDC’s correction comes from trying to fix the time of observation (TOB) bias.

To this engineer’s mind, peak detect means to detect the peak (as min-max thermometry would seem to do) –
Can you explain in just a few sentences why TOB (time of observation bias) is needed in light of peak detect methodology?
.
.
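To sketch an answer to the question above: min-max thermometry does capture the peak within each observation window, but the window boundary matters. If the observer resets the instrument near the time of the daily peak, one hot afternoon can set the maximum for two consecutive windows. The simulation below is purely illustrative (synthetic temperatures, invented amplitudes), not NCDC’s actual TOB method.

```python
import math
import random

random.seed(0)
HOURS, DAYS = 24, 365

# Synthetic hourly temperatures: random daily base plus a fixed diurnal
# cycle peaking mid-afternoon (hour 15). All numbers are invented.
base = [random.gauss(15.0, 4.0) for _ in range(DAYS)]
temps = [base[d] + 8.0 * math.sin(2 * math.pi * (h - 9) / HOURS)
         for d in range(DAYS) for h in range(HOURS)]

def window_maxes(series, reset_hour):
    """Daily maxima as a min-max thermometer reset at reset_hour records them."""
    maxes, start = [], reset_hour
    while start + HOURS <= len(series):
        maxes.append(max(series[start:start + HOURS]))
        start += HOURS
    return maxes

midnight = window_maxes(temps, 0)   # reset at midnight: one peak per window
afternoon = window_maxes(temps, 17) # reset at 5 pm: near the diurnal peak

mean_midnight = sum(midnight) / len(midnight)
mean_5pm = sum(afternoon) / len(afternoon)
tob_bias = mean_5pm - mean_midnight  # positive: hot afternoons counted twice
```

In this toy setup the 5 pm observer’s mean maximum comes out roughly a degree or two warmer than the midnight observer’s, from exactly the same thermometer readings, which is the effect the TOB correction tries to undo.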

John F. Hultquist
February 27, 2010 9:37 am

David, UK (07:41:43) : OT, but as per Tom Fuller’s comment, I too am not seeing any graphics.
Try the following one at a time, or all at once.
Try shutting down your computer and do a restart. Try shutting off your connection to the internet. Shut off your browser. Use a different one.
I’ve had the problem before, but not on this post. Thus, I think these are local issues. I’ve just quit worrying about it and search for a fix.

Bob Maginnis
February 27, 2010 9:50 am

Note that there are 7 irrigation companies near Harrisonburg, VA, googling ‘irrigation near Harrisonburg, VA.’ As more farmland has been irrigated over the years, the evaporation of water has skewed the temperature readings cooler (ICE, irrigation cooling effect.)

February 27, 2010 9:50 am


Brian Dodge (08:46:31) :

Thanks, Brian, but that question was for Dan, from whom we have not heard back …
Now, let me pose a question your direction.
Are we (the satellites and the statistical processing applied by RSS and UAH) measuring increased convective activity via higher reported satellite temperatures, at the possible ‘cost’ of energy removed from the surface and boundary-layer air masses?
Also bear in mind the MSU’s aboard those sats are also going to see the result of convective activity, i.e., precipitation in its varied forms, which are more reflective of temperature seen at altitude and not the boundary layer or ground.
.
.

February 27, 2010 9:59 am


John F. Hultquist (09:37:14) :
Try the following one at a time, or all at once.
Try shutting down your ..

How about: Try a different DNS (Domain Name Server)?
I’ve had really good luck/faster response/no missing images using Google: 8.8.8.8 or 8.8.4.4 per: Google Public DNS
I had *trouble* using the DNS server simply served up on my present at-home connection … to change your DNS go to Settings, Network, Local Area xxxx, Properties, TCP/IP, Properties, “Use the following DNS …” and enter the above addresses.
This can be done separately from automatically obtaining an IP address BTW.
.
.

February 27, 2010 10:02 am

Kevin Kilty (08:02:31) :
Quite a large fraction of NCDC’s correction comes from trying to fix the time of observation (TOB) bias.
Kevin,
As we do not have the temperature–datetime series, I am not sure where we go with TOB logic; all we seem to have is the average temp for each day.
Obviously, there is no need for a TOB correction on this site for the average daily temp series.
If this site had been as pristine 100 years ago as it looks to be today, there is no need to apply the UHI correction either.
The correction in the USHCN data seems to be arbitrary. The GISS correction seems to be goal-oriented, purely to get the temperature up for recent times: decrease the older temps by as much as 1.0 deg C. There is no way the folks who did these corrections were ignorant of what they were trying to achieve, and have achieved, by the correction.

D. Patterson
February 27, 2010 10:08 am

Joshua Jones (08:10:25) :
To those suggesting this is evidence of fraudulent data manipulation,
[….]

Most such critics are concerned about fraud in relation to an apparent intent to deceive an unknowing general public by deliberately and knowingly misrepresenting the character and reliability of the data handling and adjustments. Disclosures of the communications between key Alarmist climate scientists reveal a pattern of abusing the data handling methods being used, regardless of whether or not those methods have any plausible scientific legitimacy. A given mathematical and/or scientific procedure does not have to be inherently false or fraudulent to be used in a fraud to deceive people. Snake-oil salesmen often sold perfectly legitimate remedies by fraudulently misrepresenting their appropriate applications and efficacies.
Plausible deniability beyond a reasonable doubt may be an appropriate standard for use in criminal law, but plausible certainty beyond a reasonable doubt is more the standard to be used in science.

February 27, 2010 10:14 am

JustPassing: (02:33:36) :
My blood is boiling listening to Margaret Torn at the Climategate panel in Berkeley. Speaker No. 1, Maximilian Auffhammer, has spoken about his experience of having his and his Korean co-worker’s study of the use of tree-ring proxies rejected by Jones et al. because it contradicted the Michael Mann and Briffa agendas. All junior academics will recognize the personal pain experienced when their own work is rejected – especially if they have a good understanding that the work has merit. This leads to all sorts of self-doubt: academia is a blood-sport and is very hard on the ego for those of us who are not academic stars. Yet just 20 minutes later Torn recites the AGW litany: the ‘theft’ of the Climategate e-mails; the death threats to the scientists involved, which mean scientists now have to worry about their personal safety in embarking on their careers; and how the e-mails have been taken out of context. She and the other climate scientist, Bill Collins, are more interested in buttressing their field than addressing the problems that have been revealed, and both continue to defend the robustness of the evidence.
I notice that Collins is now willing to allow that AGW has only broken temperature records going back 400–500 years; perhaps (?) it was warmer in the time of Charlemagne. But Collins believes that the correction of the Mann hockey stick was achieved by proper scientific self-correction, while the Climategate scandals reveal there’s a ‘cancer’ that needs to be excised from the science. Collins also believes the ‘theft’ of the e-mails is unacceptable, but believes an investigation at UEA is needed.
Still listening…

Tim Channon
February 27, 2010 10:14 am

Here is a link to an article where I show the WMO data surrounding Dale Enterprise.
Expanding this to the whole period would be a major job – an awful lot of data.
The earliest data starts in 1859 but is bad.
So I have concentrated on a brief period where all stations have data, to give a flavour of the variations.
http://daedalearth.wordpress.com/wmo-72417-and-surroundings/

hotrod ( Larry L )
February 27, 2010 10:18 am

Bottom line: they are trying to make a silk purse out of a sow’s ear.
The original data quality is simply not adequate to derive trends to a fraction of a degree even in the well sited monitor locations. They were intended for a totally different purpose.
They are heaping correction on top of correction modified by subjective analysis, depending on local variables they have no control over and in most cases no knowledge of.
They are coming up with a meaningless metric that in my view has no validity for the purpose they are trying to use it.
This is even before you begin discussing issues like how useful it is to figure average temperatures from a mean value (which might be computed in at least 101 different ways, according to their paper).

Peterson, T.C., and R.S. Vose, 1997: An overview of the Global Historical Climatology Network temperature database. Bulletin of the American Meteorological Society, 78 (12), 2837-2849.

“The procedure for duplicate elimination with mean temperature was more complex. The first 10 000 duplicates (out of 30 000+ source time series) were identified using the same methods applied to the maximum and minimum temperature datasets. Unfortunately, because monthly mean temperature has been computed at least 101 different ways (Griffiths 1997), digital comparisons could not be used to identify the remaining duplicates. Indeed, the differences between two different methods of calculating mean temperature at a particular station can be greater than the temperature difference from two neighboring stations.”

“Probable duplicates were assigned the same station number but, unlike the previous cases, not merged because the actual data were not exactly identical (although they were quite similar). As a result, the GHCN version 2 mean temperature dataset contains multiple versions of many stations.”

Bulletin of the American Meteorological Society, p. 2841
“All the homogeneity testing was done with annual time series because annual reference series are more robust than monthly series. However, the effects of most discontinuities vary with the season. Therefore, monthly reference series were created and differences in the difference series for each month were calculated both before and after the discontinuity. These potential monthly adjustments were then smoothed with a nine-point binomial filter and all the months were adjusted slightly so the mean of all the months equaled the adjustment determined by the annual analysis.”
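The smoothing step described in that quote is simple enough to sketch. The following is a guess at what it looks like in code: smooth twelve monthly adjustments with a nine-point binomial filter (weights from row 8 of Pascal’s triangle), then shift them so their mean equals the annual adjustment. The sample numbers and the wrap-around edge handling are my assumptions, not details from the paper.

```python
from math import comb

# Hypothetical monthly adjustments (deg C) and the annual-analysis adjustment.
monthly = [1.2, 1.0, 0.6, 0.3, 0.1, 0.0, -0.1, 0.0, 0.2, 0.5, 0.9, 1.1]
annual = 0.5

# Nine-point binomial filter: weights 1 8 28 56 70 56 28 8 1, summing to 256.
weights = [comb(8, k) for k in range(9)]
wsum = sum(weights)

def smooth(series):
    # Wrap around the year so January and December have full neighborhoods
    # (an assumption; the paper does not say how edges were handled).
    n = len(series)
    return [sum(w * series[(i + k - 4) % n] for k, w in enumerate(weights)) / wsum
            for i in range(n)]

smoothed = smooth(monthly)
mean = sum(smoothed) / len(smoothed)
adjusted = [m - mean + annual for m in smoothed]  # force mean == annual
```

After the final shift, the twelve monthly adjustments average exactly to the annual adjustment, which is the constraint the quote states.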

“Our approach to adjusting historical data is to make them homogeneous with present-day observations, so that new data points can easily be added to homogeneity-adjusted time series. Since the primary purpose of homogeneity-adjusted data is long-term climate analysis, we only adjusted time series that had at least 20 yr of data. Also, not all stations could be adjusted. Remote stations for which we could not produce an adequate reference series (the correlation between first-difference station time series and its reference time series must be 0.80 or greater) were not adjusted. The homogeneity-adjusted version of GHCN includes only those stations that were deemed homogeneous and those stations we could reliably adjust to make them homogeneous.”

Bulletin of the American Meteorological Society, p. 2845
“A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b). However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous.

These adjustments are the best we could do given the paucity of historical station history metadata on a global scale. But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments. Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b).”

Bulletin of the American Meteorological Society, Vol. 78, No. 12, December 1997, p. 2846
I am not suggesting that these corrective measures were necessarily malicious!
These are just representative quotes snagged out of a single paper that jumped out at me as flashing warning signs about the inherent limitations of the data. Some of which are plainly stated by the authors of the paper itself.
I just think they were over-optimistic attempts to do something which is practically impossible. By the time you twiddle around numbers for thousands of stations with a great number of sometimes arbitrary adjustments based on “assumptions” and judgment, you may have a pretty data set, but is it really useful for the purpose you are trying to develop it for?
I think they are fooling themselves into thinking they have improved the value of the data, and are also grossly overestimating the statistical significance and precision that it is appropriate to assign to the output.
In my judgment (I am not a statistics whiz – just practical good common sense), they are lucky if they can accurately detect a trend smaller than +/- 1 to 2 deg C in the data series they have developed. If some statistics whiz can show me an error budget that supports higher precision than plus or minus 1 or 2 deg C, I would like to see it.
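For readers unfamiliar with the term: an “error budget” usually lists the independent uncertainty components and combines them in quadrature (root-sum-of-squares). The sketch below shows only the mechanics; the component names and magnitudes are invented for illustration and are not measured properties of USHCN.

```python
import math

# Hypothetical per-station uncertainty components (deg C). These values are
# illustrative placeholders, not an actual USHCN error analysis.
components = {
    "instrument": 0.5,
    "siting/microclimate": 1.0,
    "TOB adjustment": 0.3,
    "homogenization": 0.4,
}

# Independent errors combine in quadrature.
total = math.sqrt(sum(v ** 2 for v in components.values()))  # ~1.22 deg C

# Averaging N independent stations shrinks the *random* part by sqrt(N)...
n_stations = 1000
random_part = components["instrument"] / math.sqrt(n_stations)
# ...but any bias shared across stations (e.g. a systematic adjustment step)
# does not average out, and so sets the floor of the budget.
```

The point of such an exercise is the one Larry makes: whether trend precision finer than a degree or two is defensible depends entirely on how large the shared, non-averaging terms are.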
Larry

February 27, 2010 10:52 am

Still watching that video: (02:33:36) :
Rich Muller, Professor of Physics, author, Physics for Future Presidents, is visibly angry at the end of his response to Collins and Torn. His response begins around 42 minutes into the video: an IPCC skeptic is born!
Muller has a true understanding of the nature of real science. He now doubts the AGW claims, which have exaggerated any risks. He is outraged that the models ignored the contribution of cloud cover and negative feedbacks, and is equally outraged at the economic risks being promoted by climate ‘science’. He is emphatic that the e-mails HAVE NOT been taken out of context. Whew! Great to hear a real scientist debating the phonies. My skin is crawling at their squirming response to his criticisms.

February 27, 2010 10:58 am

Joshua Jones says:
February 27, 2010 at 8:10 am
To those suggesting this is evidence of fraudulent data manipulation,….
I doubt it’s fraud as well. It’s more likely that a lot of these scientists are fairly inept, especially in statistics. These linear trends are based on the false proposition that there is a linear trend. Temperature is cyclic (from night to day, from month to month, and probably year to year and decade to decade, etc.). There should be a lot more Fourier transforms, multivariate analysis, and statistical hypothesis testing going on than generic linear regression in Excel. Here’s an example of the danger: roll dice and plot the numbers over time. Now do a linear regression. It will show a slope. Would you take this slope as evidence of a true trend? No. How can you be sure? What if three out of four dice showed a positive trend? Would you adjust the fourth to match the first three? No, you’d say it was random chance. Same with temperature trends: take all stations unadjusted, calculate trends, then statistically test whether the distribution of trends differs from the null hypothesis.
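The dice example in this comment can be run directly. The sketch below (my own toy numbers: 120 “monthly” rolls per station) fits a least-squares slope to pure dice rolls: any single series almost always shows some nonzero slope, while across many independent series the slopes scatter around zero, which is the null-distribution point the comment is making.

```python
import random
import statistics

def ols_slope(ys):
    """Least-squares slope of ys regressed against the index 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(range(n), ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

random.seed(42)

# One "station": 120 monthly dice rolls yield a small spurious slope.
single_slope = ols_slope([random.randint(1, 6) for _ in range(120)])

# 500 independent "stations": the slopes scatter around zero, as the
# null hypothesis says they should.
slopes = [ols_slope([random.randint(1, 6) for _ in range(120)])
          for _ in range(500)]
mean_slope = statistics.fmean(slopes)
```

A trend in any one record therefore proves nothing by itself; the interesting test is whether the whole distribution of station trends is shifted away from what chance alone would produce.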

JimAsh
February 27, 2010 10:59 am

“If some statistics whiz can show me an error budget that supports higher precision than plus or minus 1 or 2 deg C. I would like to see it.”
I think we’d all like to see that.
Allow me to try to summarize with my high school education.
The homogenized data have been adjusted.
The adjustments appear to include (at a number of examined stations):
lowering of past reported temperatures, exaggerating any warming trend,
and
adjusting for the Urban Heat Island effect not by adjusting urban stations down, but by adjusting adjacent rural stations UP.

Roger Knights
February 27, 2010 11:15 am

D. Patterson (10:08:16) :
A given mathematical procedure and/or scientific procedure does not have to be inherently false or fraudulent to be used in a fraud to deceive people. Snake oil salesmen often sold perfectly legitimate remedies by fraudulently misrepresenting their appropriate applications and efficacies.

E.g., Lydia Pinkham’s Vegetable Compound, “Efficacious in every case.” (You betcha, at 15 proof alcohol!)

kwik
February 27, 2010 11:16 am

The strangest thing is that over at RealClimate, they don’t even seem to want this to be fixed. It is indeed very strange.
Hopefully more and more people will realise the situation.
It looks like an awful mess, seen from my standpoint.

February 27, 2010 11:21 am

Dr. Schnare, this is the approach I like to see. There’s a lot of overreaching at WUWT.

Roger Knights
February 27, 2010 11:54 am

SteveS (03:12:58) :
“On a more serious note: I don’t feel optimistic about this at all.”

RK: Give it time. The Team has made many enemies in their field. (For instance, that guy Karlen from Scandinavia who got brushed off by T____.) They have been waiting for an opportune moment to strike back and be heard. Now they have it. As some of them speak up, others will be encouraged to come forward. And the strength of their condemnation of the IPCC and the Team will rise, and lots more dirty linen will be aired. It’ll develop along the lines of Watergate, with the public getting hooked on their weekly scandal, and the defenders in the bunker getting more and more implausible and desperate.
As that happens, the media will be more inclined to pay attention to dissenters. There will be articles exploring, with the aid of graphics, the links (remember that word?) between the Team, the IPCC, and the various gatekeepers in the field. There may even be articles exploring topics like, “What is climate science all about anyway?”
A tectonic shift is underway. The media’s current silence is an indication that they are re-assessing the situation, and that their treatment in the future will be less outrageous. Even if you don’t grant them any sense of fairness at all, which is silly, you should realize that they have to be concerned about not alienating the skeptical portion of their readership too badly, now that noticeable segments of it are sounding off in their online comments sections. Previously, they were only getting badgered by the enviros, whenever they failed to toe the line. Now, the forces on the contrarians side are coalescing, mainly thanks to internet sites like this, and making their impact felt.

RockyRoad (07:09:36) :
It was feared right after Climategate that the story wouldn’t have legs. Well, I believe it is running pretty fast right now and expanding daily. These are indeed exciting times!

Not by me! Here’s what I posted here during Weeks 1 & 2, in exchanges with Brendan_H (primarily). (These aren’t all in chronological order.)
(However, I was a bit “off” in the specifics. I thought there would be more disaffected insiders like Georg Kaser coming forward. Instead, what primarily emerged were new “gate” scandals, in AR4. Nevertheless, I still think that the “tectonic shift” of insider opinion is occurring quietly and that it will be the “sea change” in the debate over warming that will make the difference, not these recent scandals. I.e., even if the recent “gates” blow over in a few months, as warmist forces hope, probably with good reason, the former consensus has been shattered and won’t be able to regroup well enough to effectively marginalize and intimidate cautionary and contrary voices in the scientific community and in the media. Warmist momentum and solidarity has been lost. Hence, appreciation of the wobbliness of its case will grow over the years.)

Brendan H (15:52:09) :

Roger Knights: “It’ll develop along the lines of Watergate, with the public getting hooked on their weekly scandal…”
———-

It’s also doubtful that many other whistle-blowers are waiting in the wings. Over the years there has been ample encouragement, and opportunity and outlets, for people to spill their beans, so anyone who has wanted to speak up has most likely done so.

On the contrary, there has been ample discouragement for people to spill their beans, and the encouragement offered hasn’t been much. An op-ed in a right-wing newspaper, or a some-expenses paid trip to an NIPCC conference, or an online petition to sign? These merely brought down abuse and shunning. Some encouragement!
That’s probably why that Scandinavian Karlen, who was brushed off by CRU, didn’t make a public stink about it. My inference is that there are more like him out there with stories to tell, and that they will come forward now that people are willing to give credence to what they have to say. In addition, critics who have already spoken out but been generally ignored and/or scoffed at will have gained credibility in light of what has been revealed, and hence will be de-marginalized by being interviewed on TV, etc. Here’s what I prophesied:
RK: “The Team has made many enemies in their field. … They [those enemies] have been waiting for an opportune moment to strike back and be heard. Now they have it. As some of them speak up, others will be encouraged to come forward. And the strength of their condemnation of the IPCC and the Team will rise, and lots more dirty linen will be aired. It’ll develop along the lines of Watergate, with the public getting hooked on their weekly scandal, …”

Brendan H (15:52:09) :
“Given the timing of the hack/leak, what we have seen is probably the leaker’s best shot, and there’s unlikely to be much in the way of additional material to maintain a media drip-feed.”

I wouldn’t be too sure. Additional e-mails involving the team will be subpoenaed by Inhofe’s committee. There’s likely to be embarrassing material in them that will titillate the public, and whet their blood-lust for more.

Brendan_H: “Remember that the central media drama of Watergate was the gradual exposure of conspiracy and cover-up, and the nightly revelations that followed. In the current situation, no such conspiracy and cover-up has been alleged [sure it has] or shown, nor even any compelling evidence of wrongdoing.”

Spoken like another Ron Ziegler or Rabbi Korff (remember him?)! I hope you get interviewed as a CRUgate defender on TV: there’s a need for someone to fill those roles, to heighten the absurdity of it all.
Unlike you, Monbiot has recognized that there is plenty of evidence of wrongdoing, collusion, and butt-covering among The Team, that the public is going to see it that way, and therefore that a timely abandonment of them and their indefensible activity is the only way for warmism to salvage some credibility from this train wreck. The truth of his insight should be obvious, but if you and your brethren would rather be oblivious, I’m fine with that. Don’t give up the [glug ….]!

“Just as importantly from a media perspective, the material is being used to support a pre-existing narrative, that climate scientists have engaged in corruption and fraud. That is probably one reason why the wider media is treating the issue with caution.”

Sure. But now some of them are beginning to think, “Maybe we were wrong to dismiss the pre-existing narrative. That’s what we did with Watergate. We ignored McGovern’s pre-election charges that the break-in had been orchestrated from above because it sounded partisan, outrageously unlikely, and would have brought down obloquy on us if we had entertained the possibility publicly. Mutatis mutandis …”
Now that there’s been some “hard” confirmation of outsiders’ charges of smug, thuggish groupthink that has been “leaning” on the peer review process, climate critics no longer can be dismissed as cranks. They are going to be given a respectful hearing, at least in a fair number of venues.
Similarly, scientific societies are going to have to take a serious second look at this controversy, instead of just rubber-stamping the “correct” opinion. Every time one of them distances itself from the consensus, it’ll be newsworthy. Every time a warmist becomes a turncoat, or even merely criticizes an outrageous defense of the Team (like the absurd defenses of Nixon that were offered), it’ll be newsworthy.
The dam is cracking, the increased waterflow will widen the cracks, the media will like the ratings the drama is getting, more blood will get into the water, the feeding frenzy will intensify, more countries will put a hold on their anti-carbon legislation, more prestigious scientific statesmen and popularizers will weigh in on the side of caution or contrarianism, more hapless/ludicrous defenses of the consensus will be made, and the whole world will grab some popcorn and watch with glee.
Over the next few years, the warmists will be in retreat and on the defensive, despite occasional blips. The warm has turned. All the sanctimonious viciousness and hypocrisy (“we’re doing real science”) of the enviro-nuts to date will make them wonderful targets for popular scorn and down-peg-pulling.
“Dr Phil Jones says this has been the worst week of his professional career.”
So far.

Brendan H (01:57:26) :

Roger Knights: “There’s likely to be embarrassing material in them that will titillate the public, and whet their blood-lust for more.”

“Titillation is one thing, conspiracy another. To get a Watergate situation, you need actual scientific wrongdoing, strong evidence of fraud and collusion, and to date there’s been none of that, nor any reason to suspect any more such evidence in future.”

The Team stands accused of that and more. Look at some of the bills of particulars that others here have posted, and several newspaper columnists too. Argue with them. I’m convinced.

RK: “The dam is cracking, the increased waterflow will widen the cracks…”

Brendan_H: “At the beginning of 2009 a former NASA administrator came out in opposition to AGW, causing enormous excitement among sceptics. As one poster opined: “We are finally witnessing the last gasps of a dying theory.”
The death of AGW has been regularly predicted for a good while now. Certainly, this email leak is a more serious matter than the views of a retired scientist, and is a setback to climate science, but beware of confirmation bias. My suggestion is that celebration is premature, and may well lead to serious disappointment.”

I wasn’t among those who made such claims. I’m not inclined to such over-optimism. I can tell that this is different–I can smell blood. A brick has been removed from the wall that protected the team, and it will be much easier to pry out further bricks as a result. Now there is justification for congressional hearings examining the machinations of the IPCC and its failure to behave fairly. For instance, veteran IPCC expert reviewer Vincent Gray has complained that he submitted over 1100 comments to the IPCC, all of which were ignored. The IPCC might soon have to justify those refusals. No doubt there are dozens of other scientists whose skeptical contributions were ignored, or whose drafts were high-handedly revised. The IPCC will have to justify those as well. It won’t come out looking good. Thereafter, its endorsement of alarmist findings won’t carry nearly as much weight among the innocent public and opinion-leaders as heretofore.
This is like the moment Nixon’s taping system was revealed. Until Butterfield revealed that to the committee, it looked as though Nixon would be able to wiggle out of the affair. After that, the pursuit went into high gear. I was watching at the time and realized instantly, “Now they’ve got him. He can run, but he can’t hide.” I have a similar feeling about this business. Until now the Team was Teflon: accusations slid off them, because of their presumptive objectivity and high-mindedness. Now they are under a cloud of suspicion, subject to subpoena and testimony under oath; they won’t be able to keep their misdeeds concealed from all but their victims. They’re on the run.
As Monbiot has said, persons like you who don’t/won’t realize the dreadfulness of this situation for your side are living in a fool’s paradise.

RK: “Over the next few years, the warmists will be in retreat and on the defensive, despite occasional blips.”

Brendan_H: “You’re assuming that the CRU emails have disconfirmed AGW.”

On the contrary, I’ve made several posts in the past few days stating that I think the effect of the Team’s fiddling with the measured temperature data is likely minor, and that the overall shape of the blade of the hockey stick won’t be changed much. (I’ve also said repeatedly that there are likely innocent explanations for much of the awkward material in the e-mails.) So I don’t think that AGW has been disproved.
I see what’s happened as the first step in a lengthy process of objectively and scientifically reexamining the data and reasoning behind warmism, after the Team and the IPCC and peer review have had their halos removed. Their prestige, plus their power and willingness to enforce groupthink by any means necessary, will no longer be factors. Doubters will feel safe to speak out.

Brendan_H: “But the current crop of climate scientists remain convinced that their science is correct. Whatever happens in the political sphere will not affect the scientific findings, nor, for that matter, the actual climate.”

That’s naive. Academic and social “politics” already taints their judgment. Until now climate science has been politicized, in the sense that the Team’s paradigm was “enforced” by their mafia tactics and by madly warmist funding agencies, journal editors, and journalists. In such an environment, “Reason comes running / Eager to ratify.”
Once de-politicization occurs and marginalized voices can be heard and harkened to without penalty, and non-warmist research can get funded, opinions among climate scientists are likely to shift substantially.
Of course, for many it will be too awkward to change, because they are so complicit in the shiftiness of warmism’s history. They will hang tough, like the tiny crew of post-Watergate Nixon loyalists.
PS: I should have said above that the main outcome of Climategate, IMO, is that the Team is no longer trustworthy in the public eye, and that a cloud of suspicion has fallen over peer review, the IPCC, and the consensus, which seems to have been engineered or manufactured. This is where the real damage has occurred, on an intangible level. Therefore, a re-do of the case for CAGW, under neutral scientific auspices, is needed. Plus more transparency, etc.

Brendan_H: “It’s going to take some time for the ‘temperature to cool’ so to speak. Meanwhile it will cost a fortune.”

Not necessarily. Give Climategate a while to sink in, and for additional dirty laundry to come to light, and for additional scientists to weigh in against the consensus. The pendulum of alarmism has reached its apogee and is poised to swing the other way. Copenhagen is a dead man walking.
There is no chance now that the US will pass any major carbon tax without a lot of hearings and scientific investigations first that produce findings supporting alarmism–and that is impossible, if neutral scientists oversee the process, similar to the Wegman investigation.
And if the US won’t get on board, neither will China and India. So the only money lost will be in Europe, similar to what happened post-Kyoto.

Roger Knights
February 27, 2010 12:28 pm

Oops!! — I forgot to include the introduction to my long post above. Here it is, for context:

RockyRoad (09:20:03) :
It was feared right after Climategate that the story wouldn’t have legs. Well, I believe it is running pretty fast right now and expanding daily. These are indeed exciting times especially if one considers the latest pronouncement about climategate from the IOP.

Not by me! Here’s what I wrote in Weeks 1 and 2 after Climategate:

Roger Knights
February 27, 2010 12:33 pm

OOPs: I accidentally placed the introduction to my long post above beneath the quote from Steve and my response to it.
(Mods: Please delete my prior, incorrect correction?! TIA)
REPLY: Unclear what you want – comment stands, -A

Rhoda R
February 27, 2010 1:54 pm

There is, in the US Govt circles, a concept known as “error so egregious as to constitute deliberate fraud.” It’s not necessary to prove deliberate fraud if it can be shown that there were so many ‘mistakes’ or ‘errors’ that they cannot be reasonably ignored.

February 27, 2010 2:01 pm

My commentary on the Climategate Panel at Berkeley posted by JustPassing (02:33:36) is somewhat OT, but I want to make a final observation that brings it into the relevant discussion on this thread.
David L. suggests that what has occurred in the USHCN and GISS data sets is not fraud, but incompetence. He states: David L (10:58:42) : “I doubt it’s fraud as well. It’s more likely the case that a lot of these scientists are fairly inept especially in statistics. These linear trends are based on a false proposition that there is a linear trend…” This sounds like a likely scenario to me.
If you happen to watch the entire YouTube video (runs to nearly 2 hours), you will see the panel discussion go through several cycles. The physicist Rich Muller at times appears to defend the IPCC 4th report, and soft-soaps his criticisms, but then the anger re-emerges and at one point he says there’s no reason to trust there will be no further ‘dirty laundry’ in the report. When the panelists are asked how Mann and the guilty CRU parties’ academic misconduct should be punished, they all agree that no crime occurred! Although Muller is on the attack, there is a high degree of academic circling of wagons going on: it is all very polite. The worst punishment that would be possible: ban Mann et al. from further IPCC reports!!!!?!
It appears, therefore, that a far more apposite punishment, if that is the route that must be followed, should be meted out. These incompetent clowns should be obliged to take some advanced stats courses from Ross McKitrick. Problem solved – although probably not to RM’s taste.
P.S. Non-academics here, if they can stomach watching Bill Collins and Margaret Torn, will get a glimpse as to how academics conduct themselves in a situation where there is a major case of academic misconduct that must be resolved and at the same time the academic community wants to save face.
This is why, Roger Knights, your observations are correct that it will take time for the full implications of Climategate and all the other alarmist-gates to cause the AGW establishment to crumble. Keep chipping away!

Nick Stokes
February 27, 2010 2:32 pm

Re: _Jim (Feb 27 09:29),
Can you explain in just a few sentences why TOB (time of observation bias) is needed in light of peak detect methodology?
Jim, the old thermometers could tell you the peak in a 24hr period. But when they are read has an effect. The NWS originally asked that the reading happen late afternoon.
That meant that on a hot day, the peak would be counted twice, with a genuine peak in mid-afternoon say, and the temp just after the reading (and re-setting) being the peak for the next day. Cold mornings, however, were only ever counted once.
Over time, observers shifted to reading and resetting in the morning, since that is when they had to read the rain gauges. Then the opposite happened. Warm peaks only counted once, while cold mornings were often counted twice.
Since the average daily cycle is fairly well measured at each site, there’s a good basis for making a correction.
REPLY: But the real problem is, TOBS imparts a significant warm bias to the entire record post 1960, so if the intent was to prevent counting double peaks and having them bias the record upwards (or downwards), it seems to have failed miserably. BTW the “new” thermometers (MMTS) can also “tell you the peak in a 24hr period”. There’s no operational difference between reading “old” Mercury thermometers and newer electronic “MMTS” thermometers. They are both read manually, at a preferred time of day, rounded to the nearest whole degree, hand recorded on a B91 form, and the form mailed into NCDC once a month. Not sure where you get the erroneous idea that “old” thermometers are operationally different in some way other than the sensing element/enclosure.
The TOBS bias is shown here; as you can see, TOBS is the lion’s share:
all adjustments
Source: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_pg.gif
The final total bias of all adjustments is here:
USHCN total bias
Source: http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
-Anthony
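The double-counting mechanism Nick describes, and the sign question Anthony raises, can be sketched with a toy simulation. Everything below is hypothetical: a sinusoidal diurnal cycle riding on a random day-to-day mean, with made-up reset hours of 17:00 and 07:00; it is not NCDC’s method or real station data.

```python
import math
import random

random.seed(42)

def hourly_temps(days):
    """Hypothetical hourly series: a sinusoid peaking at 15:00 local,
    riding on a random day-to-day mean to mimic changing weather."""
    temps = []
    for _ in range(days):
        base = random.gauss(15.0, 5.0)      # each day's mean level
        for h in range(24):
            temps.append(base + 8.0 * math.sin(math.pi * (h - 9) / 12.0))
    return temps

def daily_maxes(temps, reset_hour):
    """Simulate a max-register thermometer read and reset once a day at
    reset_hour: each observed 'day' is the 24 h window ending at the reset."""
    n_days = len(temps) // 24
    return [max(temps[(d - 1) * 24 + reset_hour:(d - 1) * 24 + reset_hour + 24])
            for d in range(1, n_days)]

temps = hourly_temps(3650)                  # ~10 years of synthetic hours
mean = lambda xs: sum(xs) / len(xs)

midnight = mean(daily_maxes(temps, 0))      # true calendar-day maxima
evening = mean(daily_maxes(temps, 17))      # 5 PM observer
morning = mean(daily_maxes(temps, 7))       # 7 AM observer

print("mean daily max, midnight reset: %.2f" % midnight)
print("mean daily max, 5 PM reset:     %.2f" % evening)
print("mean daily max, 7 AM reset:     %.2f" % morning)
```

Under these assumptions the 5 PM observer’s average of daily maxima comes out a couple of degrees warmer than the midnight reference, because a hot afternoon is counted again as the next window’s maximum; the 7 AM observer’s maxima are nearly unbiased, with the matching cold double-count showing up in the minima instead.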

Brian Dodge
February 27, 2010 3:59 pm

@ _Jim (09:50:56) :
“Are we…measuring increased convective activity vis-a-vis higher reported satellite temperatures…?”
“…MSU’s aboard those sats are also going to see the result of convective activity, i.e., precipitation in its varied forms, which are more reflective of temperature seen at altitude and not the boundary layer or ground.”
Yes, we do see increased convective activity highly correlated with SST. The powerful advance of the multiple-spectral-channel data from the Aqua and Terra Atmospheric Infrared Sounder (AIRS) and Advanced Microwave Sounding Unit (AMSU) is that we see the temperature from the bottom up, and can tell which signal is coming from where. The algorithms to sort it all out aren’t trivial, and the volume of data is pretty daunting: according to Aumann, H. H., D. T. Gregorich, S. E. Broberg, and D. A. Elliott (2007), “Seasonal correlations of SST, water vapor, and convective activity in tropical oceans: A new hyperspectral data set for climate model testing,” Geophys. Res. Lett., 34, L15813, doi:10.1029/2006GL029191 (trs-new.jpl.nasa.gov/dspace/bitstream/2014/40966/1/06-4186.pdf), a “small subset” of the available data, the AIRS Climate Data Set, is 300 Mbytes per day.

Kevin Kilty
February 27, 2010 4:59 pm

_Jim (09:29:06) :
Kevin Kilty (08:02:31) :
Quite a large fraction of NCDC’s correction comes from trying to fix the time of observation (TOB) bias.
To this engineer’s mind, peak detect means to detect the peak (as min-max thermometry would seem to do) –
Can you explain in just a few sentences why TOB (time of observation bias) is needed in light of peak detect methodology?

A large fraction of the data was recorded by volunteers who changed their observation schedules away from midnight to some other more convenient time. As one does not know the time at which the maximum and minimum occur, just the maximum and minimum values, reading at a time other than midnight produces an ambiguity as to what day the true maximum or minimum occurred. For example, reading in the afternoon results in an upward bias, as the maximum could have occurred on the current day or on the previous day; that is, the maximum temperature reported for two consecutive days could have come from a single day. Reading in the morning produces a cool bias for similar reasons.
My point is that the true correction involves actual temperature as read at the station, but NCDC uses a statistical/geographical model to estimate the bias.

REPLY:
Important to note: the observers state the time the temperatures were taken, right on the B91 form.
For example here:
Hand written example: http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=28573
Typed example: http://wattsupwiththat.files.wordpress.com/2010/02/bartow_b91_aug09.pdf

Yet NCDC discards that time of observation info in the transcription-to-digital process. Here is the ASCII data from the B91 form for that station for August 09, same as the typed form example above. Even though the form says “1700” (5PM), note that the “tobs” listed in the ASCII data is the temperature at observation time, NOT the “time of observation”:
Coopid,Year,Month,Day,tmax,tmin,tobs,prcp,snow,snwd,wdmv,evap,sgc4,sxy4,sny4,sgc8,sxy8,sny8,meanmax,meanmin,sumprcp,sumsnow
080478,2009,07,1,78,74,77,0.35,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,2,87,73,85,0.18,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,3,92,73,89,0.25,0,999999, ,9999.99, , , , , ,
080478,2009,07,4,91,74,79,0.70,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,5,93,75,91,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,6,92,77,89,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,7,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,8,91,76,83,0.02,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,9,90,74,75,0.25,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,10,89,72,87,0.16,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,11,96,69,85,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,12,91,69,83,1.78,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,13,92,74,83,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,14,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,15,95,74,93,0.27,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,16,96,77,92,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,17,93,77,92,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,18,94,77,86,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,19,90,71,88,0.48,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,20,90,69,86,0.38,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,21,91,71,90,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,22,92,74,90,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,23,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,24,94,75,93,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,25,94,73,91,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,26,93,74,83,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,27,89,72,79,0.10,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,28,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,29,95,73,91,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,30,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,31,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , , ,91.5,73.5,4.92,
Just in case anyone wants to see it, here is what the station looks like:
http://wattsupwiththat.com/2007/10/01/how-not-to-measure-temperature-part-32/
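The point about the “tobs” column can be checked mechanically. This is an illustrative parse of two rows of the transcription above, trimmed to the first eight columns for readability; the field names come straight from the header line in the data.

```python
import csv
from io import StringIO

# Two rows copied from the transcribed B91 data above, trimmed to the
# first eight columns for readability.
raw = """Coopid,Year,Month,Day,tmax,tmin,tobs,prcp
080478,2009,07,1,78,74,77,0.35
080478,2009,07,2,87,73,85,0.18
"""

rows = list(csv.DictReader(StringIO(raw)))
for r in rows:
    tmax, tmin, tobs = int(r["tmax"]), int(r["tmin"]), int(r["tobs"])
    # If "tobs" carried the observation time from the form, it would read
    # 1700 on every row; instead it always falls between tmin and tmax,
    # i.e. it is the temperature at observation time.
    assert tmin <= tobs <= tmax
    print(r["Day"], tmax, tmin, tobs)
```

On every row the value sits between tmin and tmax, consistent with a temperature at observation time, and never reads 1700 as a recorded time would.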
Oddly, the time of observation seems less important than one might think. NWS advises observers that if they have to deviate from the 7AM time, well that’s OK too. See this:
http://www.srh.noaa.gov/ohx/dad/coop/WxEye2.pdf

What if You Take your Ob at a Different Time Than Scheduled?

No problem! That is why there is a remarks section! Just note in the remarks if you take your ob late or early. Use the remarks section to also indicate any missing obs and why. For example, you may go on vacation for a week and miss a whole week of data. You can use remarks to let us know why data is missing. Sometimes equipment is faulty. Jot this down in remarks too.
I can see why it’s “no problem”: NCDC doesn’t give a hoot what time the observation is taken; they simply throw that information away!

Kevin Kilty
February 27, 2010 5:11 pm

jothi85 (10:02:51) :
Kevin Kilty (08:02:31) :
Quite a large fraction of NCDC’s correction comes from trying to fix the time of observation (TOB) bias.
kevin,
As we do not have the temperature/date-time series, I am not sure where we go with the TOB logic. All we seem to have is the average temp for each day.
Obviously, there is no need for a TOB correction on this site for the average daily temp series.
If this site had been as pristine 100 years ago as it looks to be today, there is no need to apply the UHI correction either.

What we have is (max+min)/2, referred to as the average temperature for each day. Anthony and Nick, above, have covered this concept pretty well, so I won’t continue the whipping of a dead horse.
Indeed, we cannot go back and look at individual stations to see if the statistical/geographical model that NCDC uses to correct TOB is appropriate or not. We have to take what they hand us, or start over with some alternative model of estimating average temperature versus time. This is why using first-order stations would be useful–there is no TOB to correct. However, before starting such a task, I’d like to know whether mean temperature is a useful concept in the first place. I can come up with reasons why it is not.

Kevin Kilty
February 27, 2010 5:17 pm

I would just like to add one more comment to what Nick Stokes said above. The NCDC data set comes from COOP stations (at least that is my interpretation) and we do not know the daily cycle at these stations. All we have is a time series of max-min values. This is why I am pretty skeptical of the TOB bias correction that NCDC applies. The bias results from the actual series of daily cycles, but this isn’t available, and instead they use a method of correction due to Tom Karl, which makes some sense physically, but which has no associated measure of its “robustness”, to use a trendy phrase.

Eric Gamberg
February 27, 2010 5:57 pm

_Jim (20:37:49) :
“Survey parties anxiously await the invention of/the discovery of/the release from EVT (Engineering Verification Testing) of the first time-travel machines available for rent or lease for prioritized civilian purposes …”
I think you miss the point that to be able to use data from a given station to determine “climate” trends, the station must have a consistent and appropriate micro-climate, instrumentation and time of observation.
The SurfaceStations exercise only examines the current microclimate to see if it meets the standards. The intent of the USHCN is apparently to use stations that both currently and in the past meet the standards of a weather station capable of evaluating climate.
It is a necessary condition that a station is currently meeting the criteria of an acceptable microclimate before even examining its history of meeting the criteria.
After eliminating the USHCN stations that are not currently meeting the acceptable criteria, examination of the history of the stations results in only a potential few that are viable candidates for climate change evaluation (Walhalla, SC is the only one that comes to mind).
Using the 100 yr data record to determine the trends within less than a degree of uncertainty for a given station is a fool’s errand.
The paleo-climate situation regarding sites is similar with no sites likely suitable for sub degree resolution.

Roger Knights
February 27, 2010 7:05 pm

Instead of saying that we want the “raw” data, we might occasionally say:
We want the uncooked data.
We want the “sushi” data.

“Uncooked” conveys a neat double entendre. (I.e., unslanted.)

steven mosher
February 27, 2010 9:21 pm

JerryB (07:08:41) :
good to see you again. TOBS will come around for yet another discussion

steven mosher
February 27, 2010 9:30 pm

People who want to understand TOBS should go to CA and read everything JerryB has written. He also has pointers to data, so you can play to your heart’s content.

steven mosher
February 27, 2010 9:34 pm

Kevin Kilty (17:17:30) :
Karl’s TOBs adjustment is an empirical model with data held out for verification, as I recall. It’s been a couple of years since I read it.

Nick Stokes
February 27, 2010 9:45 pm

Re: Nick Stokes (Feb 27 14:32),
But the real problem is, TOBS imparts a significant warm bias to the entire record post 1960, so if the intent was to prevent counting double peaks and having them bias the record upwards (or downwards), it seems to have failed miserably.
Anthony, I don’t get the logic there. If the trend in TOBS was towards morning rather than evening reading, then that would have biased the observations downward. And so, when corrected, the corrections will have an upward trend. I don’t see how that invalidates them.
REPLY: Read my reply to Kevin Kilty. NCDC doesn’t even record the time of observation in the transcribed data. The TOBS correction thus appears to be applied without any actual basis in the hour of observation. How can there be a basis if they don’t use the data provided by the observer? It’s insanely sloppy. The point of a correction is to get the record back in line with nature, not to throw a random element of chance into the mix, which is what it appears to do. – A

D. Patterson
February 28, 2010 4:08 am

Many readers will be stunned when they discover how the practice of using the minimum and maximum daily air temperatures from a single observation each day artificially distorts the actual record of the location’s air temperature/s and thermal state for the day, month, and year, even BEFORE any TOBS or other adjustments are applied. Reducing the reported daily temperatures to only two extreme points of observation disregards all of the actual intermediate air temperatures which occurred in the air mass throughout the day. This results in a false representation of the true average air temperature, with variations in whole degrees F/C.
Find weather stations which make intra-hourly special observations, hourly observations, 3-hourly observations, and 6-hourly observations. Compare (1) the official MIN, MAX, and Daily Average/MEAN air temperatures to the computed daily averages of (2) all reported air temperatures from 0000 Hours local time to 2359 Hours local time, (3) only the hourly air temperatures, (4) only the 3-hourly air temperatures, and (5) only the 6-hourly air temperatures. For additional interest, repeat the exercise while shifting the 24 hour period one hour or other period of time throughout the 24 hour day and observe the effect it has upon the computed averages versus the MIN-Max method of computing the daily average.
Note how the additional special observations taken at regular and irregular intervals between the hourly observations can shift the daily average air temperature even more than the regularly scheduled hourly observations do with respect to the single daily observation of the MIN-MAX air temperatures.
Then ask the IPCC climate experts how they intend to prove that a global change in total average air temperature of less than one degree C exists, on the basis of a surface weather observation method which varies by one or more whole degrees from the actual air temperatures when sampled more than twice a day.
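This exercise can be approximated without station data. The sketch below uses a hypothetical asymmetric diurnal curve (fast morning warming, slow exponential overnight cooling; every constant is made up) and compares the min-max daily mean against means of 1-, 3-, and 6-hourly samples.

```python
import math

def diurnal(h):
    """Hypothetical asymmetric diurnal cycle (deg F): fast morning warming,
    slow exponential overnight cooling, peak at 15:00."""
    tmin, tmax = 55.0, 85.0
    t_dawn = tmin + (tmax - tmin) * math.exp(-15.0 / 7.0)  # cooling curve at 06:00
    if 6 <= h <= 15:                    # warming branch, dawn to mid-afternoon
        return t_dawn + (tmax - t_dawn) * math.sin(math.pi * (h - 6) / 18.0)
    dt = (h - 15) % 24                  # hours since the 15:00 peak
    return tmin + (tmax - tmin) * math.exp(-dt / 7.0)

# Minute-resolution samples stand in for the continuous record.
minutes = [diurnal(m / 60.0) for m in range(24 * 60)]
minmax_mean = (max(minutes) + min(minutes)) / 2.0   # what a min-max station reports

def sampled_mean(step_hours):
    """Daily mean from observations taken every step_hours hours."""
    obs = [diurnal(h) for h in range(0, 24, step_hours)]
    return sum(obs) / len(obs)

print("min-max mean: %.2f" % minmax_mean)
for step in (1, 3, 6):
    print("%d-hourly mean: %.2f" % (step, sampled_mean(step)))
```

With this shape the (max+min)/2 figure runs well over a degree warmer than the hourly-sampled mean, because the day spends far more hours near its minimum than near its maximum; that is the whole-degree-scale distortion described above.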

February 28, 2010 5:50 am


Eric Gamberg (17:57:08) :
I think you miss the point that to be able to use data from a given station to determine

Not at all Eric; levity, (excessive or unseemly frivolity) yes, miss the point, no.

February 28, 2010 6:15 am


Nick Stokes (14:32:28) :
Jim, the old thermometers could tell you the peak in a 24hr period.

Do tell; I think this is painfully obvious to all but a novice in this field.
A reference would be appreciated; not for me, but for the novices …


But when they are read has an effect.

Please explain this, for the novices; this isn’t RC.

Tim Channon
February 28, 2010 7:09 am

What I think they are getting at…
At 4pm you read the max thermometer and then reset it.
The following day is colder.
At 4pm the next day you read the thermometer, which is showing the max since 4pm the previous day, the same as when you reset it: yesterday was hotter at 4pm than the peak today.

February 28, 2010 8:06 am


Nick Stokes (14:32:28) :

That meant that on a hot day, the peak would be counted twice, with a genuine peak in mid-afternoon say, and the temp just after the reading (and re-setting) being the peak for the next day.

Oh, okay. I was hoping for a little more supporting logic and rationale (a ‘value added post’ rather than regurgitation of texts) but I can see it’s the old “hot day bleed-over effect,” bleeding from one day’s noted observation to the next.
Let’s look at the case where … each day has the same profile for ‘warming’ in the diurnal cycle (each day) AND the fixed point in time for the reading was carefully chosen. The result: equal _peak_ temperatures for adjacent days on our min-max thermo … again, if things were *perfect* (no chaotic weather).
I’ll grant you: adjacent days could see equal peaks.
This falls apart as far as ‘gathering data’ is concerned in the real world though –
Consider a long time series (one where the “law of large numbers” would apply), with a rigidly adhered-to sampling schedule (or even one random about a centroid point in time), with each time period representing a ‘bin’: would recording the min and max for that bin period result in systematic skewing of the data (computed averages) to one side or the other?
EVEN IF that theoretical point for reading the temperature in the afternoon could be chosen at the inflection point where the temperature is at its peak, there would be steadily decreasing (or increasing) sets of ‘peak’ temperatures throughout the year … where does this skew the results, given the LLN (probability: law of large numbers)?
Correct me where I am wrong, but please do it with reference to sampling theory, noted observations in nature or demonstration by experiment.

February 28, 2010 8:15 am


steven mosher (21:34:44) :

Karl’s TOBs adjustment is an empirical model with data held out for verification as I recall. been a couple years since I read it

Hide the recline; a fall-back position (“It’s not who votes that counts. It’s who counts the votes.” – As seen in Diebold ad parody) …

February 28, 2010 8:45 am


Kevin Kilty (17:17:30) :
I would just like to add one more comment to what Nick Stokes said above. The NCDC data set comes from COOP stations (at least that is my interpretation) and we do not know the daily cycle at these stations. All we have is a time series of max-min values. This is why I am pretty skeptical of the TOB bias correction that NCDC applies.

WHEN would I (were I a volunteer reader) read (and record) my min max thermo, gee, in the evening, after getting home (were it not in the morning) and that has worked out to sometime after 7 PM local, which is well after the assumed (and not in dispute) peak temp of the day.
Quoting from Aguado and Burt “Understanding Weather & Climate” 3rd edition, pg 81, “the warmest period of the day … is sometime in the early afternoon, usually between 2:00 and 4:00 PM.”
So, the assertion above by some of the min/max reading in late afternoon (an assumption, perhaps a gross assumption, “the assumption of scale”) would result in a recorded peak less than the actual peak that day …
Also, I think COMMON SENSE (stemming from in situ observation of the data) would prevail after a while, after discussion amongst peers, ag agents, et al. These volunteers didn’t just fall off a turnip truck and start recording temperature; they have/had an interest in weather and in the measurement of its parameters, like temperature.
.
.

Kevin Kilty
February 28, 2010 10:26 am

steven mosher (21:34:44) :
Kevin Kilty (17:17:30) :
Karl’s TOBs adjustment is an empirical model with data held out for verification, as I recall. It’s been a couple of years since I read it.

Your memory is correct, but the period of study was still just a few-year period in the 1960s, and I have wondered how well this makes the adjustment for all time periods.
Anthony: Your comments tacked onto mine have shown me this TOB bias correction is more insane than I thought. I wonder if we can find someone to help out who has cross-disciplinary work in both psychology and climatology? What’s more, there are other problems beyond TOB, like making adjustments out of order and smearing error all over via homogenization.

Tim Channon (07:09:06) : and _Jim:

Yes, you guys have this correct now. Maybe I could have explained this better, but the concept is not all that easy to explain. The correction NCDC tries to apply is even harder to visualize; read Tom Karl’s paper: “A Model to Estimate the Time of Observation Bias Associated with Monthly Mean Maximum, Minimum, and Mean Temperatures,” Karl, Williams, et al., 1986, Journal of Climate and Applied Meteorology 25: 145-160.
Actually, a good place to start is Donald G. Baker, 1975, “Effect of Observation Time on Mean Temperature Estimation,” J. Applied Meteorology, 14, 471-476.

_Jim (08:45:43) :
Kevin Kilty (17:17:30) :
I would just like to add one more comment to what Nick Stokes said above. The NCDC data set comes from COOP stations (at least that is my interpretation) and we do not know the daily cycle at these stations. All we have is a time series of max-min values. This is why I am pretty skeptical of the TOB bias correction that NCDC applies.

WHEN would I (were I a volunteer reader) read (and record) my min/max thermometer? Gee, in the evening, after getting home (were it not in the morning), and that has worked out to sometime after 7 PM local, which is well after the assumed (and not in dispute) peak temp of the day.
Quoting from Aguado and Burt “Understanding Weather & Climate” 3rd edition, pg 81, “the warmest period of the day … is sometime in the early afternoon, usually between 2:00 and 4:00 PM.”
So, the assertion above by some of a min/max reading in late afternoon (an assumption, perhaps a gross assumption, “the assumption of scale”) would result in a recorded peak lower than the actual peak of that day …

Have a look at Tom Channon’s posting. He is dead-on about the reason and sign of the bias. The thermometers catch the max and min values of a 24-hour period, but because of individual reading schedules, the 24-hour periods do not coincide among all stations.
Better yet look at Donald Baker’s paper, which I referenced above in this post, and you’ll see how the examination of actual data shows the sign of the bias. It is very interesting.

Also, I think COMMON SENSE (stemming from in situ observation of the data) would prevail after a while, after discussion among peers, ag agents, et al. These volunteers didn’t just fall off a turnip truck and start recording temperature; they have (or had) an interest in weather and in its measurable parameters, like temperature.

I am not saying these people were ignorant in any way. However, they were collecting data for a very different purpose than the climate researchers are putting the data to now. The biases mattered not one jot to them, nor did anyone even imagine how this bias could enter long-run records at the time. The new use for this data makes the bias important.
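The double-counting mechanism described in this exchange can be sketched with a toy simulation. This is my own construction for illustration only, not NCDC's actual TOB model: every fifth day in the synthetic series runs hot, and a max/min thermometer reset at 5 PM lets a hot afternoon set the maximum both before the reset and (via the still-warm post-reset hours) in the following 24-hour window.

```python
import math

def hourly_temp(day, hour):
    """Toy temperature: a diurnal cycle peaking at 3 PM, with every
    5th day running 6 degrees hot (shape chosen only for illustration)."""
    base = 10.0 + (6.0 if day % 5 == 0 else 0.0)
    return base + 8.0 * math.cos((hour - 15) * math.pi / 12)

def mean_of_max(reset_hour, days=30):
    """Mean daily max when the observer reads and resets the max/min
    thermometer at reset_hour; each 24-hour window ends at that hour."""
    maxima = []
    for day in range(days):
        window = [hourly_temp(day - 1 + (reset_hour + h) // 24,
                              (reset_hour + h) % 24)
                  for h in range(24)]
        maxima.append(max(window))
    return sum(maxima) / days

print("midnight reset: %.2f" % mean_of_max(0))
print("5 PM reset:     %.2f" % mean_of_max(17))
```

In this toy series the 5 PM reset comes out warmer than the midnight reset by the better part of a degree, purely from the choice of reading hour; that is the sign of the bias Baker's paper documents in real data.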

Kevin Kilty
February 28, 2010 10:29 am

Sorry Tim, “Tim Channon” not Tom Channon in the previous post.

February 28, 2010 11:02 am

Thank you for a substantive post Brian.

@ _Jim (09:50:56) : “Are we…measuring increased convective activity vis-a-vis higher reported satellite temperatures…?”
“…MSU’s aboard those sats are also going to see the result of convective activity, i.e., precipitation in its varied forms, which are more reflective of temperature seen at altitude and not the boundary layer or ground.”
Brian Dodge (15:59:08) : Yes we do see increased convective activity highly correlated with SST.

Well, not arguing that point, but rather: is the satellite AMSU measuring the increased tropospheric temperature as a result of the release of latent heat during the processes that result in precipitation (wholesale condensation, etc.)?


The powerful advance in technology of the multiple-spectral-channel data from the Aqua and Terra Atmospheric Infrared Sounder (AIRS) and Advanced Microwave Sounding Unit (AMSU) is that we see the temperature from the bottom up, and can tell which signal is coming from where. The algorithms to sort it all out aren’t trivial, and …

Amen.
Quoting Shakespeare “ay, there’s the rub”; the tuning, the tweaking, the adjustments of coefficients and factors (and perhaps powers) in those non-trivial algorithms …
.
.

steven mosher
February 28, 2010 9:09 pm

_Jim (08:06:40) :
You can see how TOBS works by going over to CA.
climateaudit.org
search on TOBS
find the thread named TOBS
Look at the comments.
Find my comment commenting on JerryBs work.
Download the data.
Analyze for yourself.
The data is there. the work is simple.
we’ve been over this ground before.

steven mosher
February 28, 2010 9:11 pm

Kevin Kilty (10:26:59) :
For a long time back in the day I kinda lobbied for people to have another look at TOBS. Not because it doesn’t make sense … in theory.

steven mosher
February 28, 2010 9:15 pm

Nick Stokes (21:45:44) :
I was doing some background reading on TOBS the other day and ran across an approach that did the adjustment WITHOUT ANY METADATA.
But my old brain is failing and I can’t recall the paper. Anyway, I thought that would be a cool thing to look at. Not much of a hint on where to look, sorry.

Eric Gamberg
February 28, 2010 11:04 pm

” _Jim (05:50:21) :
Eric Gamberg (17:57:08) :
I think you miss the point that to be able to use data from a given station to determine
Not at all Eric; levity, (excessive or unseemly frivolity) yes, miss the point, no.”
I still think you are totally missing the point. Certainly only humorous to you.

Eric Gamberg
February 28, 2010 11:23 pm

” steven mosher (21:11:25) :
Kevin Kilty (10:26:59) :
For a long time back in the day I kinda lobbied for people to have another look at TOBS. Not because it doesn’t make sense … in theory.

Given that most sites in the NH exhibit a bigger max-min difference in the Spring than in the Fall, and that there is a daily temp rise in the Spring and a fall in the Fall (!!! who knew?), the TOB adjustment should easily be a function of the day of the year for a given lat. and long.

E.M.Smith
Editor
March 1, 2010 12:16 am

Kevin Kilty (07:51:41) : One does not have to do a lot of research to figure that things have gone wrong–for example doing homogenization before UHI corrections is simply wrong and this is what NCDC does.
It is also what GIStemp does. “Homogenization” is done in STEP1 and UHI is done in STEP2… I had not thought about it, but you are quite right. That is a very silly order of action…
John F. Hultquist (09:20:08) : Okay, that’s not fair. You had me laughing. Then this “not humor” bit. Now I feel like I just laughed at a funeral.
I think of it more like laughing at one of those “funniest videos” films where someone “not so bright” managed to mangle various body parts, repeatedly… You don’t want to laugh. You know someone was injured. But when they launch off the ramp on a bike into a mud pit with gators and hit the 4 x 4 next to the ramp, then spin into the mud, landing ON the bike… well, it’s just a bit hard not to snicker… but it’s wrong 😉
So while they did not intend their actions to be humor, it’s perfectly OK to laugh in one of those “What WERE you thinking?” kind of ways… after all, the alternative is weeping uncontrollably…

D. Patterson
March 1, 2010 2:37 am

steven mosher (21:11:25) :
Kevin Kilty (10:26:59) :
For a long time back in the day I kinda lobbied for people to have another look at TOBS. Not because it doesn’t make sense … in theory.

Steve, I would argue that you don’t even need to look at TOBS at all to conclude that the Daily Summary air temperatures are too erroneous and unreliable to use for a determination of a global mean air temperature (assuming, for the sake of argument, such a global mean can exist).
Using only the 24-hour minimum and maximum air temperatures, with fixed or variable observation periods, is non-representative of the true average daily air temperature. The TOBS adjustments, no matter how well accomplished, can only compound the already egregious errors in the daily observation of only the two extremes.
This problem is easily illustrated by simply taking stations with more than 24 observations of air temperature per day (36 or more are better) and computing the simple averages of all air temperature observations in the daily period, and then the typical synoptic subsets of the same observations for the 1-Hourly, 3-Hourly, 6-Hourly, and once-daily MIN-MAX-MEAN datasets. The different methods typically yield differences in daily average air temperature of up to 1°F or more before any TOBS or other adjustments are applied. TOBS simply adds to the errors and the unrealities when used with only a single daily observation of MIN-MAX-MEAN.

1DandyTroll
March 1, 2010 8:11 am

How much, if any, does the luminosity change when street-lamp design is changed to lessen light pollution, especially in the upward direction, and also when switching to more energy-efficient but more environmentally hazardous lamps?
Is the negative change as dramatic as the additive change when they first light up a newly built 50 km freeway?

March 1, 2010 5:17 pm


Eric Gamberg (23:04:42) :

I still think you are totally missing the point. Certainly only humorous to you.

Eric, on the slimmest of evidence, on the basis of a single post evidently used as a ‘proxy’ for the sum of my life (and endeavors, discoveries) to date, you arrive at this conclusion … so, I have to ask: “Climate Science much?”
A little trick I picked up from Mosher (A/K/A Moshpiy); a YouTube video, since Klaus sez (sic) it better:
[youtube=http://www.youtube.com/watch?v=d-Yrg9xNSS0&hl=en_US&fs=1&]
.
.

March 1, 2010 5:26 pm


steven mosher (21:09:05) :
You can see how TOBS works by going over to CA.
climateaudit.org
search on TOBS
find the thread named TOBS
Look at the comments.
Find my comment commenting on JerryBs work.
Download the data.
Analyze for yourself.
The data is there. the work is simple.
we’ve been over this ground before.

Then … there must be a theorem worked up, other than the RC-style ‘go find it in the references’ answer you just gave.
Something based on sampling or binning theory, min-max peak-detect concepts? C’mon … you’re a systems-analyst kind of guy …
Any accommodation for instrumentation response (over time; as when a min/max thermometer is reset, how much time does it take before it can repeat, if any)?
How about the recognition by field personnel of this facet, leading to self-directed resetting of (some) instruments? (Argument getting thin at this point.)
.
.


Mike Rankin
March 1, 2010 8:50 pm

After reading this post and some comments I did some investigation of my own. JamesS at 16:35:58 on 02/26/2010 gave a link to
http://cdiac.ornl.gov/epubs/ndp/ushcn/ushcn_map_interface.html
I used the link, selected Virginia on the drop-down menu, and clicked “Map Sites”. This shows the cluster of surface stations in VA and brings up a listing of the stations on the right. I clicked on Dale Enterprise, and the map brought up a bubble for Dale Enterprise showing key info plus some options. I clicked “Get Daily Data”, which brought up a page with numerous further options. At the bottom of that page is a means to obtain a CSV file of the data. I checked the boxes for Temperature min and Temperature max, as well as the data-quality flags for each, and clicked the submit button. After a few seconds I was taken to a page with a link to the file containing the data; clicking it brought up a form that let me select “Save File”. I did so and clicked “OK”, which copied the file to my Download folder (XP). I then started a session in OpenOffice Calc and opened the downloaded file.
The file had daily records for max and min. According to some information on other pages, this data is not adjusted … yet. I imagine that adjusting daily data taken in integer degrees F would not be easy. Over the past three days I have spent a lot of effort examining the data up close and personal. The raw data certainly shows major weather patterns constantly changing. I used OpenOffice to re-arrange the data into yearly sheets.
I subsequently downloaded a file of the monthly data for Dale Enterprise using the link provided by David Schnare at (20:29:56), 02/26/2010. I selected “Raw GHCN data + USHCN corrections” and entered “Dale Enterprise” for the station, then clicked “Download monthly data as text”. I imported the data into OpenOffice and programmed it to compute an equivalent of the “Raw GHCN data + USHCN corrections” series from the daily data, and then to give a month-by-month difference. There appears to be a substantial pattern in the differences. I found a huge difference in the data for Jan 1904. I would share the difference report but have no means of presenting it.
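For anyone who would rather script this comparison than build it in a spreadsheet, here is a minimal sketch of the daily-to-monthly step. The DATE (YYYYMMDD), TMAX, and TMIN column names and the -9999 missing-value flag are my assumptions; the actual CDIAC CSV export may use different headers, so adjust accordingly:

```python
import csv
from collections import defaultdict

def monthly_means(path):
    """Monthly mean of daily (TMAX + TMIN) / 2 from a daily CSV.
    Assumes columns DATE (YYYYMMDD), TMAX, TMIN; skips missing days."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["TMAX"] in ("", "-9999") or row["TMIN"] in ("", "-9999"):
                continue                      # skip flagged/missing observations
            key = row["DATE"][:6]             # YYYYMM bucket
            sums[key] += (float(row["TMAX"]) + float(row["TMIN"])) / 2
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# usage (hypothetical file name):
# daily = monthly_means("dale_enterprise_daily.csv")
# then subtract the published monthly values month by month
# to expose any pattern in the differences.
```

Differencing this against the published monthly series month by month is exactly the check described above; a systematic pattern in those residuals is what suggests an adjustment rather than noise.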

March 11, 2010 5:44 pm

For those interested, here are the papers on how the GISS surface temperature analysis was originally designed and how – and why – raw station data are adjusted:
http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf
http://pubs.giss.nasa.gov/docs/1999/1999_Hansen_etal.pdf
http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf and
http://data.giss.nasa.gov/gistemp/updates/
Makes for dense reading, but if you want to tell the Climate Change wheat from the chaff, you gotta RTFR!