Guest post by David W. Schnare, Esq. Ph.D.
When Phil Jones suggested that if folks didn’t like his surface temperature reconstructions, then perhaps they should do their own, he was right. The SPPI analysis of rural versus urban trends demonstrates the nature of the overall problem. It does not, however, go into sufficient detail. A close examination of the data suggests three areas needing attention. Two involve the adjustments made by NCDC (NOAA) and by GISS (NASA). Each agency made its own adjustments, and typically these are serial, with the GISS adjustments applied on top of the NCDC ones. The third problem is organic to the raw data and has been highlighted by Anthony Watts in his Surface Stations project: the “micro-climate” biases in the raw data.
As Watts points out, while there are far too many biased weather station locations, there remain some properly sited ones. Examination of the data representing those stations provides a clean basis by which to demonstrate the peculiarities in the adjustments made by NCDC and GISS.
One such station is Dale Enterprise, Virginia. The Weather Bureau has reported raw observations and summary monthly and annual data from this station from 1891 through the present, a 119-year record. From 1892 to 2008, there are only 9 months of missing data in this 1,404-month period, a missing-data rate of about 0.64 percent. The analysis below interpolates for this missing data by using an average of the 10 years surrounding the missing value, rather than back-filling from other sites. This correction method avoids the inherent uncertainties associated with other sites, for which there is no micro-climate guarantee of unbiased data.
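As a hedged illustration of this in-fill, here is a minimal sketch in Python. The post does not specify the exact windowing, so the sketch assumes “the 10 years surrounding” means the same calendar month in the five years before and the five years after the gap:

```python
import numpy as np

def fill_missing(monthly, half_window=5):
    """Fill gaps using the same calendar month in nearby years.

    `monthly` is an (n_years, 12) array of monthly means with NaN
    marking the missing months; `half_window` years on each side
    of a gap are averaged (an assumption, see above).
    """
    filled = monthly.copy()
    for y, m in zip(*np.where(np.isnan(monthly))):
        lo = max(0, y - half_window)
        hi = min(monthly.shape[0], y + half_window + 1)
        neighbors = np.delete(monthly[lo:hi, m], y - lo)  # drop the gap itself
        filled[y, m] = np.nanmean(neighbors)
    return filled
```

Note that only the station’s own record enters the fill, so no assumption about the micro-climate of neighboring sites is needed.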
The site itself is in a field on a farm, well away from buildings or hard surfaces. The original thermometer remains at the site as a back-up to the electronic temperature sensor that was installed in 1994.
The Dale Enterprise station site is situated in the rolling hills east of the Shenandoah Valley, more than a mile from the nearest suburban-style subdivision and over three miles from the center of the nearest “urban” development, Harrisonburg, Virginia, a town with a population of 44,000.
Other than the shift to an electronic sensor in 1994, and the need to fill in the 9 months of missing reports, there is no reason to adjust the raw temperature data as reported by the Weather Bureau.
Here is a plot of the raw data from the Dale Enterprise station.
There may be a step-wise drop in reported temperature in the post-1994 period. Virginia has no other rural stations that operated electronic sensors over a meaningful period before and after the equipment change at Dale Enterprise, nor is there publicly available data comparing the thermometer and electronic sensor readings for this station. Comparison with urban stations introduces a potentially large warm bias over the 20-year period from 1984 to 2004. This is especially true in Virginia, where most such urban sites are at airports, where the aircraft equipment in use and the pace of operations changed dramatically over this period.
Notably, neither NCDC nor GISS adjusts for this equipment change. Thus, any bias due to the 1994 equipment change remains in the record for the original data as well as the NCDC and GISS adjusted data.
The NCDC Adjustment
Although many have focused on the changes GISS made to the NCDC data, the NCDC “homogenization” is equally interesting and, as shown in this example, far more difficult to understand.
NCDC takes the originally reported data and adjusts it into a data set that becomes part of the United States Historical Climatology Network (USHCN). Most researchers, including GISS and the University of East Anglia’s Climatic Research Unit (CRU), begin with the USHCN data set. Figure 2 documents the changes NCDC made to the original observations and suggests why, perhaps, one ought to begin with the original data.
The red line in the graph shows the changes made to the original data. Considering the location of the Dale Enterprise station and the lack of micro-climate bias, one has to wonder why NCDC would make any adjustment whatever. The shape of the red delta line indicates these are not adjustments made to correct for missing data, or for any other obvious bias. Indeed, with the exception of 1998 and 1999, NCDC adjusts the original data in every year! [Note: when a 62-year-old Ph.D. scientist uses an exclamation point, the statement should be read with extraordinary attention.]
This graphic makes clear the need to “push the reset button” on the USHCN. Based on this station alone, one can argue that the USHCN data set is inappropriate for use as a starting point by other investigators, and fails to earn its self-applied moniker of “high quality data set.”
The GISS Adjustment
GISS states that its adjustments correct for the urban heat island bias in station records. In theory, GISS adjusts stations based on the night-time luminosity of the area within which the station is located. This broad-brush approach appears to have failed with regard to the Dale Enterprise station. There is no credible basis for adjusting data from a station with no micro-climate bias, located on a farm more than a mile from the nearest suburban community, more than three miles from a town, and more than 80 miles from a population center of greater than 50,000, the standard definition of a city. Harrisonburg, the nearest town, has a single large industrial operation, a quarry, and is home to a medium-sized (but hard-drinking) university, James Madison University. Without question, the students at JMU have never learned to turn the lights out at night. Based on personal experience, I’m not sure most of them even go to bed at night. This raises the potential for a luminosity error we might call the “hard drinking, hard partying, college kids” bias. Whether it is possible to correct for that in the luminosity calculations I leave to others.

In any case, the layout of the town is traditional small-town America, dominated by single-family homes and two- and three-story buildings. The true urban core of the town is approximately six square blocks, and other than the grain tower, there are fewer than ten buildings taller than five stories. Even within this “urban core” there are numerous parks. The rest of the town is quarter-acre and half-acre residential, except for the University, which has copious open ground (for when the student union and the bars are closed).
Despite the lack of any basis for suggesting the Dale Enterprise weather station is biased by urban heat island conditions, GISS has adjusted the station data as shown below. Note: this is an adjustment to the USHCN data set. I show it because it discloses the basic nature of the adjustments, rather than their effect on the actual temperature data.
While only the USHCN and GISS data are plotted, the graph includes the (blue) trend line of the unadjusted actual temperatures.
The GISS adjustments to the USHCN data at Dale Enterprise follow a well-recognized pattern. GISS pulls the early part of the record down and mimics the most recent USHCN records, thus imposing an artificial warming bias. The trend lines are somewhat difficult to compare in the graphic. The trends for the original data, the USHCN data and the GISS data are 0.24, -0.32, and 0.43 degrees C per century, respectively.
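For readers who want to check such figures themselves, here is a hedged sketch of how a trend in degrees C per century is ordinarily computed from annual means. The series below is a synthetic placeholder (a 0.24 deg C/century trend plus noise), not the actual Dale Enterprise record:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1892, 2009)

# Placeholder annual means (deg C): an assumed 0.24 deg/century trend
# plus noise, standing in for the real record.
annual_means = 12.0 + 0.0024 * (years - years[0]) + rng.normal(0, 0.5, years.size)

slope_per_year = np.polyfit(years, annual_means, 1)[0]  # least-squares slope
print(f"Trend: {slope_per_year * 100:+.2f} deg C per century")
```

The same fit applied to the original, USHCN, and GISS series yields the three trends quoted above.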
If one presumes the USHCN data reflect a “high quality data set,” then the GISS adjustment does more than produce a faster rate of warming; it actually reverses the sign of the trend in this “high quality” data. Notably, compared to the true temperature record, the GISS trend nearly doubles the actual observed warming.
This data presentation constitutes only the beginning of the analysis of Virginia temperature records. The Center for Environmental Stewardship of the Thomas Jefferson Institute for Public Policy plans to examine the entire data record for rural Virginia in order to identify which rural stations can serve as the basis for estimating long-term temperature trends, whether local or global. Only a similar effort nationwide can produce a true “high quality” data set upon which the scientific community can rely, whether for use in modeling or to assess the contribution of human activities to climate change.
David W. Schnare, Esq. Ph.D.
Director
Center for Environmental Stewardship
Thomas Jefferson Institute for Public Policy
Springfield, Virginia
===================================
UPDATE: readers might be interested in the write-up NOAA did on this station back in 2002 here (PDF, second story). I point this out because initially NCDC tried to block the surfacestations project, saying that I would compromise “observer privacy” by taking photos of the stations. Of course I took them to task on it when we found personally descriptive stories like the one referenced above, and they relented. – Anthony
OT, but as per Tom Fuller’s comment, I too am not seeing any graphics. I wonder if it’s an issue with the image host? I’ve noticed it on previous posts too.
There is nothing inherently wrong with lowering earlier temperatures, as it avoids having to endlessly apply corrections going forward. Remember, it is the temperature trend that gets everyone excited, and the trend would be unaffected by whichever end of the time series one adjusts. The real issue is the appropriateness of the corrections. One does not have to do a lot of research to figure out that things have gone wrong; for example, doing homogenization before UHI corrections is simply wrong, and this is what NCDC does.
That being said, making the temperature set reliable is a much more difficult task; as Steve Mosher says, you’ve got to visit the sites.
Regarding the 1994 change of equipment, I am not sure that the step-change involved really cannot be identified, despite the (regrettable) absence of direct comparisons (i.e., running both thermometer and sensor in parallel for some time) and of other rural stations nearby.
All that matters in this situation is the difference between 1993 and 1995. Whether surrounding stations are rural or urban, their status will not have changed much between these two years. What did change is the equipment in one place: thermometer throughout 1993, sensor throughout 1995, with 1994 as an ill-defined intermediate (that is, unless the method was changed on Jan. 1st).
So why not look at the average difference *between these two years* for the surrounding stations and compare it with that at Dale? If, say, 1993 was on average 0.5 degrees colder than 1995 elsewhere in Virginia, but (apparently) 0.5 degrees warmer in Dale, wouldn’t this justify raising all the readings from 1995 onwards by 1 degree, and those of 1994 by about half that amount? If the exact changeover date is known, one might even look at only the very last thermometer reading and the very first sensor reading, without any averaging, and compare these with simultaneous readings at nearby locations where the daily temperature could reasonably be expected to change in a similar way.
IMHO one does not necessarily need to take into account the (possibly UHI-contaminated) *trends* on either side of the change point, as we are not trying to correct for an ongoing change like UHI but for a sudden switch at one point, with constant (but different) “before” and “after” conditions. Think of a sound engineer switching to a different mic in mid-recording, with the result that the level drops at that point in the recording. To correct for this, you would apply a constant boost from the moment of change onwards, not a continuous adjustment that changes and falsifies the dynamics of the recorded signal; and this stepwise adjustment would not depend on how those dynamics happened to behave throughout the recording.
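A minimal sketch of the year-difference comparison proposed in this comment, with hypothetical numbers chosen to match its example (this is the commenter’s suggestion, not NCDC’s or GISS’s actual method):

```python
import statistics

# Hypothetical annual means (deg C); illustrative numbers only.
neighbors_1993 = [11.6, 11.3, 11.9]   # nearby stations, thermometer era
neighbors_1995 = [12.1, 11.8, 12.4]   # same stations, two years later
dale_1993, dale_1995 = 12.5, 12.0     # Dale Enterprise before/after the sensor swap

# How much the region as a whole changed between the two years.
regional_change = statistics.mean(
    b - a for a, b in zip(neighbors_1993, neighbors_1995)
)

# The same change at Dale; the shortfall relative to the region is
# attributed to the 1994 equipment change.
dale_change = dale_1995 - dale_1993
step_offset = regional_change - dale_change

print(f"Correction to add from 1995 onward: {step_offset:+.2f} deg C")
print(f"Suggested 1994 (transition year) correction: {step_offset / 2:+.2f} deg C")
```

With these numbers the region warmed 0.5 degrees while Dale apparently cooled 0.5 degrees, giving the 1-degree step correction the comment describes.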
Nick, I think you are missing my point here. Just because physical documentation exists does not mean that the actual changes are effectively documented. The documentation only matters if it accurately describes what actually happens to the data in every case.
For example, if a person could pick a random station and predict, before doing a forensic analysis, what the adjustments will be based on the above-mentioned documentation, then you would have documented changes. However, it appears from what I have seen of these forensic examinations that in every single case the adjustment profile makes absolutely no rational sense when you look at it.
Old, well-maintained and well-sited locations, which should have minimal or no adjustments for UHI, have all sorts of odd adjustments that defy explanation.
Old data might be adjusted up or down or left alone, not-so-old data might have something else done to it, and new temperature data might be adjusted way up or not at all. The station might have bleed-in from nearby stations as the numerical steps smear data into holes in the temperature map, etc.
As E. M. Smith notes above, by the time you add in the unknowns about E- and X-flagged data, you have no clue about what is really going on with a station without doing a full autopsy on the station data, and even then you end up scratching your head about why a certain date range of data is adjusted one way or the other.
This reminds me of places I have worked where you had computer program run books on the shelf that meticulously documented what the program does, but the steps the documentation describes only actually existed, and were performed, for a brief window in time. The documentation was sometimes a figment of the programmer’s imagination regarding the steps he intended to implement; when the code really hit the fan, some steps got dropped, some got added, sometimes the code did not really do what the programmer intended, and sometimes it got “fixed” at a later date so that the documentation no longer had any meaningful relationship to what was actually happening. In that sort of situation you have the documentation referring to database calls that are no longer used, or to other chapters of the document that no longer say what they did when first referenced.
In engineering and mechanical trades like machining, it is very common to have blueprints that show how the object was intended to be built, but you often find that the “as built” device or installation is totally different in important regards. I strongly suspect that the same sort of situation exists here: the “as performed” adjustments are not the same as the intended and documented adjustments.
The only way to know for sure is to take a few well-sited and well-maintained reporting sites like this one, use the written documentation to describe what should happen, and then do a detailed analysis and compare that to what really happens.
Larry
Quite a large fraction of NCDC’s correction comes from trying to fix the time-of-observation bias (TOB). Unfortunately, this correction depends on the actual temperature record at each station, which is not how the correction is done. A person could get past the TOB correction by using first-order stations, as these are read on schedule, but then one encounters the UHI effect most prominently in these records. However, an examination of just the best first-order stations across the U.S. ought to show some interesting results. I have looked at a handful of such records; in eastern Wyoming, for example, they show warming until the mid 20th century, mainly from increasing minimum temperatures, and just about dead flat afterward.
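To make the TOB mechanism concrete, here is a toy simulation (synthetic data; this is not NCDC’s actual correction procedure) showing how a late-afternoon reset of a max/min thermometer inflates the mean of the daily maxima, because one hot afternoon can set the recorded maximum of two successive observation days:

```python
import numpy as np

rng = np.random.default_rng(0)
days, H = 365, 24

# Synthetic hourly record: a diurnal cycle peaking mid-afternoon, plus a
# persistent day-to-day anomaly (warm/cold spells) and hourly noise.
hour = np.tile(np.arange(H), days)
diurnal = 8 * np.sin(2 * np.pi * (hour - 9) / H)
daily_anom = np.repeat(rng.normal(0, 3, days), H)
temps = 10 + diurnal + daily_anom + rng.normal(0, 1, days * H)

def mean_daily_max(reset_hour):
    """Mean of daily maxima when the thermometer is reset at reset_hour."""
    shifted = np.roll(temps, -reset_hour)  # each 24-h window now ends at reset_hour
    return shifted.reshape(days, H).max(axis=1).mean()

print("midnight reset:", round(mean_daily_max(0), 2))   # unbiased reference
print("5 pm reset:    ", round(mean_daily_max(17), 2))  # reads warm
```

The size of the bias depends on the local diurnal cycle and weather persistence, which is why a proper correction really should depend on the actual temperature record at each station, as noted above.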
Maybe the next set of stations investigated should be in California, since that’s considered the battleground for Cap and Trade.
http://www.ocregister.com/opinion/-236562–.html
“California has the most destructive and costly global warming law in the nation, if not the world. In a perverse way, it’s the governor’s crowning achievement. ”
The economic impact on the state of California is HUGE! If it can be demonstrated that California temperatures show little or no warming and the data have been fudged, it might help repeal AB32.
John C (18:52:56) :
“How do we know the satellite data is accurate? Because it closely matches the questionable surface data?”
The satellites don’t measure surface temperature; they measure the temperature of the lower troposphere.
The satellite trends tend to be lower than those of the other data sets. They don’t do UHI correction; as I understand it, the satellite ‘sampling rate’ is on the order of square miles, as opposed to some of the surface thermometers, which are taken to represent areas of tens of thousands of square miles and sit at airports.
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/institute_of_physics_damns_the_climategaters_science#67690
To those suggesting this is evidence of fraudulent data manipulation,
Before you begin throwing out wild accusations, you really need to do a lot more analysis. Dr. Schnare did not answer the question of why this adjustment may have occurred; he only raised it. Now you need to investigate the situation more closely and see if there are any plausible reasons for the adjustment that do not amount to fraud.
For my money, I am betting the reason this station’s data was adjusted in this way is that it was out of sync with the data from the surrounding stations. If its trend is wildly different from the trend at stations just 30 or 40 miles away, whatever algorithms NASA uses to adjust the data may have identified this station as having a bias.
I did a quick check of the USHCN data for three of the stations closest to Dale Enterprise to see if this could possibly be the case. The trends at the surrounding stations are as follows:
Woodstock 2ne = +0.7 degrees per century
Charlottesville 2w = +0.7 degrees per century
Staunton Sewage Plant = +0.2 degrees per century
These are all positive, which leaves Dale Enterprise an obvious outlier with -0.3 degrees per century. So even if the adjustment is in error, there is at least one plausible explanation for it that doesn’t involve fraud.
Dr. Schnare never said this was evidence of fraud, only that he thinks this provides evidence that USHCN is not a high quality set of data. The rest of you guys should take a page from his book and not start slinging accusations of misconduct until you are sure there is no other explanation.
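A sketch of the kind of neighbor screen this commenter hypothesizes, using the trend values quoted above (the 0.5-degree threshold and the median rule are illustrative assumptions, not NASA’s documented algorithm):

```python
# Century trends (deg C / century) quoted in the comment above.
trends = {
    "Woodstock 2NE": 0.7,
    "Charlottesville 2W": 0.7,
    "Staunton Sewage Plant": 0.2,
    "Dale Enterprise": -0.3,
}

def flag_outliers(trends, threshold=0.5):
    """Flag stations whose trend departs from the median of their neighbors'."""
    flagged = []
    for name, t in trends.items():
        others = sorted(v for k, v in trends.items() if k != name)
        median = others[len(others) // 2]
        if abs(t - median) > threshold:
            flagged.append((name, t, median))
    return flagged

for name, t, med in flag_outliers(trends):
    print(f"{name}: {t:+.1f} vs. neighbor median {med:+.1f} deg C/century")
```

With these numbers only Dale Enterprise is flagged, which is consistent with the comment’s plausible, non-fraudulent explanation for the adjustment.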
*******
Dan (17:24:15) :
Discussion over surface station data quality should be placed in the context of the satellite data record, which has nothing to do with surface station records and which shows a slightly weaker but still similar global temperature trend over the satellite record: GISS = .168 C/decade vs. RSS (satellite) = .156 C/decade, and UAH (satellite) = .132 C/decade.
*******
But comparing surface with satellite temps is apples and oranges to some extent. If you go by classical GHG theory, the temperature trends both up and down in the upper troposphere (where the satellites measure) are magnified compared to surface trends, by nearly a factor of two.
John F. Hultquist (23:01:36) :
“c james (17:26:25) :
James Sexton (16:46:41) growth and decline of towns
There are quite a number of towns of historical interest, although not necessarily relevant to the current weather station kerfuffle. Try these,
Silver: Virginia City, Nevada
http://en.wikipedia.org/wiki/Virginia_City,_Nevada
Oil: Pithole, Pennsylvania
http://en.wikipedia.org/wiki/Pithole,_Pennsylvania”
True, Detroit is losing people in droves, but aren’t most of them moving to the outlying “greater Detroit area” as opposed to moving away entirely? As for Virginia City and Pithole, yes, at the time they were considered large, but only relative to the period, with top population counts of 30,000 and 15,000 respectively. While obviously there would be some heat bias (if temps from those areas were being used), the heat bias wouldn’t even be as much as for towns with those populations at the present time (no air-conditioners, cars, etc.). So, no, I don’t think those examples apply. Of course, this leads to the question of adjustments being properly applied in the apparent “one size fits all” manner. Are global temperature adjustments made based only on population in respect to the UHI? At any rate, the latter two examples would be very interesting if they had temperature records going back to before, during, and after the booms and busts, but that is likely to be academic only.
_Jim (17:56:31) :
“Dan, what do the satellites measure (i.e., what do they ’see’)?
Surface temp?
How is this accomplished through overcast skies?”
From http://daac.gsfc.nasa.gov/AIRS/documentation/amsu_instrument_guide.shtml
“AMSU-A is primarily a temperature sounder that provides atmospheric information in the presence of clouds, which can be used to correct the infrared measurements for the effects of clouds. This is possible because microwave radiation passes, to a varying degree, through clouds – in contrast with visible and infrared radiation, which are stopped by all but the most tenuous clouds.”
keith winterkorn (19:56:07) : “it can be converted to “temperature” only by correlation with actual contemporaneous temperature measurements from the same site from which the radiation is transmitted.”
No, it doesn’t require calibration against measurements of its target.
“The second segment is a rapid scan covering a cold space view and an internal (warm) blackbody calibration target. ”
“can it separate radiation from paved surfaces, building exhausts, planes, etc., from some nearby grassy area where the ground station is found?” Yes. It looks at air temperature.
“AMSU-A1 has 12 channels in the 50-58 GHz oxygen absorption band which provide the primary temperature sounding capabilities.” If paved surfaces, exhausts, or grassy areas change the air temperature, the radiation in the oxygen bands will change and be detected by the AMSU receiver. It also measures other bands, which provide surface, water vapor, and cloud-top information.
Re: Dan (17:24:15) :
“Discussion over surface station data quality should be placed in the context of the satellite data record,”
Dan, you might want to do some reading about the very real challenges the satellite folk have in producing their temperature records. Unless I misremember, there are at least the following:
a) the sensors need repeated calibration, by pointing at known-temperature targets on the satellite (small errors?);
b) whether measuring slantwise or straight down, the processing assumes standard atmospheric lapse rates; localities do vary, so there are errors, especially in which layer of air has which temperature;
c) the time series is stitched together from records from different satellite systems, sensors, etc., so there are discontinuities;
d) because of the difficulty of converting the radiation measures to temperatures (see earlier posts), satellite measures have to some degree been cross-checked against surface thermometer data; I don’t know whether this was a reliable exercise.
We are looking at tiny, tiny trends (or trying to), so I’m not convinced yet about satellites!
But do your own research! Glad you posted.
This is going to take some screaming.
Average people (those with whom I discuss these things) are propagandized to the degree that they are convinced that CO2 is a deadly poisonous gas, and do not know the difference between CO2 and CO. These folks think that they are being smothered by a deadly pollutant.
A counter-general education effort is called for.
Jack Morrow RE: Show me the prosecuted ones…
Consider the following (from my earlier post; although somewhat dated, it is just the beginning):
http://www.climategate.com/u-s-lawyers-get-their-legal-briefs-in-order
The problem is that the discovery process is being overwhelmed with evidence, but that is a good thing. Not a day goes by without some new damning evidence being exposed. I suspect “homogenization” of temperature data as illustrated in this thread will be one key exhibit once the actual algorithms used are found. You’re going to see trial lawyers have a big hand in all this, and fraudulent scientists and RICO targets will get taken down.
It was feared right after Climategate that the story wouldn’t have legs. Well, I believe it is running pretty fast right now and expanding daily. These are indeed exciting times, especially if one considers the latest pronouncement about Climategate from the IOP.
E.M.Smith (03:32:19) : E.M., you wrote:
Sadly, I must report that prior paragraph is NOT humor
Okay, that’s not fair. You had me laughing. Then this “not humor” bit. Now I feel like I just laughed at a funeral.
I’ll throw out this: Folks talk of urban and rural or even place X or place Y but such things are ill defined also. Boundaries change – such as when a city annexes a parcel of land that is near and, perhaps, being developed into a shopping mall. Say that is done in 1995. In the USA a census of April 2000 will have a different spatial base than the census of April 1990. These changes are documented but seldom used when a study is done trying to find relationships between population and some other factor, say luminosity over time. There are country to country variations to worry about if the study is international in scope.
To this engineer’s mind, peak detect means to detect the peak (as min-max thermometry would seem to do).
Can you explain in just a few sentences why a TOB (time of observation bias) correction is needed in light of peak-detect methodology?
David, UK (07:41:43) : OT, but as per Tom Fuller’s comment, I too am not seeing any graphics.
Try the following one at a time, or all at once.
Try shutting down your computer and restarting. Try shutting off your connection to the internet. Shut off your browser, or use a different one.
I’ve had the problem before, but not on this post; thus, I think these are local issues. I’ve just quit worrying about it and simply search for a fix when it happens.
Note that googling ‘irrigation near Harrisonburg, VA’ turns up 7 irrigation companies near Harrisonburg. As more farmland has been irrigated over the years, the evaporation of water has skewed the temperature readings cooler (ICE, the irrigation cooling effect).
Thanks, Brian, but that question was for Dan, from whom we have not heard back …
Now, let me pose a question in your direction.
Are we (the satellites, and the statistical processing applied by RSS and UAH) measuring increased convective activity, reported as higher satellite temperatures, at the possible ‘cost’ of energy removed from the surface and boundary-layer air masses?
Also bear in mind that the MSUs aboard those satellites will also see the results of convective activity, i.e., precipitation in its varied forms, which are more reflective of temperatures at altitude than of the boundary layer or the ground.
How about: Try a different DNS (Domain Name Server)?
I’ve had really good luck/faster response/no missing images using Google: 8.8.8.8 or 8.8.4.4 per: Google Public DNS
I had *trouble* using the DNS server simply served up on my present at-home connection. To change your DNS, go to Settings, Network, Local Area xxxx, Properties, TCP/IP, Properties, “Use the following DNS …” and enter the above addresses.
This can be done separately from automatically obtaining an IP address BTW.
Kevin Kilty (08:02:31) :
Quite a large fraction of NCDC’s correction comes from trying to fix the time of observation (TOB) bias.
Kevin,
as we do not have the temperature vs. date-time series, I am not sure where we go with the TOB logic. All we seem to have is the average temperature for each day.
Obviously, there is no need for a TOB correction on this site for the average daily temperature series.
If this site was as pristine 100 years ago as it looks to be today, there is no need to apply the UHI correction either.
The correction in the USHCN data seems to be arbitrary. The GISS correction seems to be goal-oriented, purely to get the temperature up for recent times: decrease the older temps by as much as 1.0 deg C. There is no way the folks who did these corrections were ignorant of what they were trying to achieve, and have achieved, by the correction.
Most such critics are concerned about fraud in relation to an apparent intent to deceive an unknowing general public by deliberately and knowingly misrepresenting the character and reliability of the data handling and adjustments. Disclosures of the communications between key alarmist climate scientists reveal a pattern of abusing the data-handling methods being used, regardless of whether or not those methods have any plausible scientific legitimacy. A given mathematical or scientific procedure does not have to be inherently false or fraudulent to be used in a fraud to deceive people. Snake-oil salesmen often sold perfectly legitimate remedies by fraudulently misrepresenting their appropriate applications and efficacies.
Plausible deniability beyond a reasonable doubt may be an appropriate standard for use in criminal law, but plausible certainty beyond a reasonable doubt is more the standard to be used in science.
JustPassing: (02:33:36) :
My blood is boiling listening to Margaret Torn at the Climategate panel in Berkeley. Speaker No. 1, Maximilian Auffhammer, has spoken about his experience of having his and his Korean co-worker’s study of the use of tree-ring proxies rejected by Jones et al. because it contradicted the Michael Mann and Briffa agendas. All junior academics will recognize the personal pain experienced when their own work is rejected, especially if they have a good understanding that the work has merit. This leads to all sorts of self-doubt: academia is a blood sport and is very hard on the ego for those of us who are not academic stars. Yet just 20 minutes later Torn recites the AGW litany: the ‘theft’ of the Climategate e-mails; the death threats to the scientists involved, which means scientists now have to worry about their personal safety in embarking on their careers; and how the e-mails have been taken out of context. She and the other climate scientist, Bill Collins, are more interested in buttressing their field than in addressing the problems that have been revealed, and both continue to defend the robustness of the evidence.
I notice that Collins is now willing to allow that AGW has only broken temperature records going back 400-500 years; perhaps (?) it was warmer in the time of Charlemagne. But Collins believes that the correction of the Mann hockey stick was achieved by proper scientific self-correction, even while the Climategate scandals reveal there is a ‘cancer’ that needs to be excised from the science. Collins also believes the ‘theft’ of the e-mails is unacceptable, but that an investigation at UEA is needed.
Still listening…
Here is a link to an article where I show the WMO data surrounding Dale Enterprise.
Expanding this to the whole period would be a major job; there is an awful lot of data.
The earliest data starts in 1859 but is bad.
So I have concentrated on a brief period where all stations have data, to give a flavour of the variations.
http://daedalearth.wordpress.com/wmo-72417-and-surroundings/