Guest post by David W. Schnare, Esq. Ph.D.
When Phil Jones suggested that if folks didn’t like his surface temperature reconstructions, then perhaps they should do their own, he was right. The SPPI analysis of rural versus urban trends demonstrates the nature of the overall problem, but it does not go into sufficient detail. A close examination of the data suggests three areas that need to be addressed. Two involve the adjustments made by NCDC (NOAA) and by GISS (NASA). Each agency makes its own adjustments, and these are typically applied in series, with the GISS adjustments layered on top of the NCDC adjustments. The third problem is organic to the raw data and has been highlighted by Anthony Watts in his Surface Stations project: the “micro-climate” biases in the raw data.
As Watts points out, while there are far too many biased weather station locations, there remain some properly sited ones. Examination of the data representing those stations provides a clean basis by which to demonstrate the peculiarities in the adjustments made by NCDC and GISS.
One such station is Dale Enterprise, Virginia. The Weather Bureau has reported raw observations and summary monthly and annual data from this station from 1891 through the present, a 119-year record. From 1892 to 2008 there are only 9 months of missing data in this 1,404-month period, a missing-data rate of about 0.64 percent. The analysis below interpolates for this missing data by using an average of the 10 years surrounding the missing value, rather than back-filling from other sites. This correction method avoids the uncertainties inherent in relying on other sites, for which there is no micro-climate guarantee of unbiased data.
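For readers who want to reproduce this step, here is a minimal sketch of the gap-filling described above, assuming monthly values keyed by (year, month) and averaging the same calendar month across the surrounding years; the function name and data layout are illustrative, not the author’s actual code.

```python
import numpy as np

def fill_missing_months(monthly, window=5):
    """Fill missing monthly means (stored as NaN) with the average of the
    same calendar month in the `window` years before and after the gap."""
    filled = dict(monthly)
    for (year, month), value in monthly.items():
        if np.isnan(value):
            neighbours = [
                monthly.get((year + offset, month), np.nan)
                for offset in range(-window, window + 1)
                if offset != 0
            ]
            neighbours = [v for v in neighbours if not np.isnan(v)]
            if neighbours:
                filled[(year, month)] = float(np.mean(neighbours))
    return filled

# Example: a missing March 1953 value would be filled from March 1948-1952
# and March 1954-1958, with no reference to any other station.
```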
The site itself is in a field on a farm, well away from buildings or hard surfaces. The original thermometer remains at the site as a back-up to the electronic temperature sensor that was installed in 1994.
The Dale Enterprise station site is situated in the rolling hills east of the Shenandoah Valley, more than a mile from the nearest suburban-style subdivision and over three miles from the center of the nearest “urban” development, Harrisonburg, Virginia, a town of 44,000.
Other than the shift to an electronic sensor in 1994, and the need to fill in the 9 months of missing reports, there is no reason to adjust the raw temperature data as reported by the Weather Bureau.
Here is a plot of the raw data from the Dale Enterprise station.
There may be a step-wise drop in reported temperature in the post-1994 period. Virginia has no other rural stations that operated electronic sensors for a meaningful period both before and after the equipment change at Dale Enterprise, nor is there publicly available data comparing the thermometer and electronic sensor readings for this station. Comparison with urban stations would introduce a potentially large warm bias over the 20-year period from 1984 to 2004. This is especially true in Virginia, as most such urban sites are at airports, where the aircraft equipment in use and the pace of operations changed dramatically over this period.
Notably, neither NCDC nor GISS adjusts for this equipment change. Thus, any bias due to the 1994 equipment change remains in the record for the original data as well as the NCDC and GISS adjusted data.
The NCDC Adjustment
Although many have focused on the changes GISS made from the NCDC data, the NCDC “homogenization” is equally interesting, and as shown in this example, far more difficult to understand.
NCDC takes the originally reported data and adjusts it into a data set that becomes part of the United States Historical Climatology Network (USHCN). Most researchers, including GISS and the Climatic Research Unit (CRU) at the University of East Anglia, begin with the USHCN data set. Figure 2 documents the changes NCDC made to the original observations and suggests why, perhaps, one ought to begin with the original data.
The red line in the graph shows the changes made to the original data. Considering the location of the Dale Enterprise station and the lack of micro-climate bias, one has to wonder why NCDC would make any adjustment whatever. The shape of the red delta line indicates these are not adjustments made to correct for missing data, or for any other obvious bias. Indeed, with the exception of 1998 and 1999, NCDC adjusts the original data in every year! [Note: when a 62-year-old Ph.D. scientist uses an exclamation point, the statement is to be taken with some extraordinary attention.]
This graphic makes clear the need to “push the reset button” on the USHCN. Based on this station alone, one can argue that the USHCN data set is inappropriate for use as a starting point by other investigators, and that it fails to earn its self-applied moniker of a “high quality data set.”
The GISS Adjustment
GISS states that its adjustments reflect corrections for the urban heat island bias in station records. In theory, stations are adjusted based on the nighttime luminosity of the area within which they are located. This broad-brush approach appears to have failed with regard to the Dale Enterprise station. There is no credible basis for adjusting data from a station with no micro-climate bias conditions, located on a farm more than a mile from the nearest suburban community, more than three miles from a town and more than 80 miles from a population center of greater than 50,000, the standard definition of a city.

Harrisonburg, the nearest town, has a single large industrial operation, a quarry, and is home to a medium-sized (but hard-drinking) university, James Madison University. Without question, the students at JMU have never learned to turn the lights out at night. Based on personal experience, I’m not sure most of them even go to bed at night. This raises the potential for a luminosity error we might call the “hard drinking, hard partying, college kids” bias. Whether it is possible to correct for that in the luminosity calculations I leave to others. In any case, the layout of the town is traditional small-town America, dominated by single-family homes and two- and three-story buildings. The true urban core of the town is approximately six square blocks, and other than the grain tower, there are fewer than ten buildings taller than five stories. Even within this “urban core” there are numerous parks. The rest of the town is quarter-acre and half-acre residential lots, except for the university, which has copious pervious open ground (for when the student union and the bars are closed).
Despite the lack of a basis for suggesting the Dale Enterprise weather station is biased by urban heat island conditions, GISS has adjusted the station data as shown below. Note, this is an adjustment to the USHCN data set. I show this adjustment as it discloses the basic nature of the adjustments, rather than their effect on the actual temperature data.
While only the USHCN and GISS data are plotted, the graph includes the (blue) trend line of the unadjusted actual temperatures.
The GISS adjustments to the USHCN data at Dale Enterprise follow a well-recognized pattern. GISS pulls the early part of the record down and mimics the most recent USHCN records, thus imposing an artificial warming bias. Comparison of the trend lines is somewhat difficult to see in the graphic. The trends for the original data, the USHCN data and the GISS data are 0.24, -0.32, and 0.43 degrees C per century, respectively.
If one presumes the USHCN data reflect a “high quality data set”, then the GISS adjustment does more than produce a faster rate of warming; it actually reverses the sign of the trend in this “high quality” data. Notably, compared to the trend in the original observations, the GISS trend nearly doubles the observed warming.
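For anyone who wants to check figures like these, a trend in degrees C per century is just a least-squares slope scaled by 100. The sketch below illustrates that arithmetic; it is not a claim about the exact method used for the numbers above, and `raw_annual_means` is a hypothetical list of annual mean temperatures.

```python
import numpy as np

def trend_per_century(years, annual_means):
    """Ordinary least-squares slope of annual mean temperature,
    expressed in degrees C per century."""
    slope_per_year = np.polyfit(np.asarray(years, float),
                                np.asarray(annual_means, float), 1)[0]
    return slope_per_year * 100.0

# e.g. trend_per_century(range(1892, 2009), raw_annual_means) for the raw
# series, and the same call on the USHCN and GISS adjusted series.
```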
This data presentation is only the beginning of an analysis of Virginia temperature records. The Center for Environmental Stewardship of the Thomas Jefferson Institute for Public Policy plans to examine the entire data record for rural Virginia in order to identify which rural stations can serve as the basis for estimating long-term temperature trends, whether local or global. Only a similar effort nationwide can produce a true “high quality” data set upon which the scientific community can rely, whether for use in modeling or to assess the contribution of human activities to climate change.
David W. Schnare, Esq. Ph.D.
Director
Center for Environmental Stewardship
Thomas Jefferson Institute for Public Policy
Springfield, Virginia
===================================
UPDATE: readers might be interested in the writeup NOAA did on this station back in 2002 here (PDF, second story). I point this out because initially NCDC tried to block the surfacestations project saying that I would compromise “observer privacy” by taking photos of the stations. Of course I took them to task on it when we found personally descriptive stories like the one referenced above and they relented. – Anthony





Bottom line, they are trying to make a silk purse out of a sow’s ear.
The original data quality is simply not adequate to derive trends to a fraction of a degree, even at the well-sited monitoring locations. The stations were intended for a totally different purpose.
They are heaping correction on top of correction, modified by subjective analysis and dependent on local variables they have no control over and, in most cases, no knowledge of.
They are coming up with a meaningless metric that, in my view, has no validity for the purpose they are trying to use it for.
This is even before you begin discussing issues like how useful it is to figure average temperatures from a mean value (which, according to their own paper, might be computed in more than 100 different ways):
“The procedure for duplicate elimination with mean temperature was more complex. The first 10 000 duplicates (out of 30 000+ source time series) were identified using the same methods applied to the maximum and minimum temperature datasets. Unfortunately, because monthly mean temperature has been computed at least 101 different ways (Griffiths 1997), digital comparisons could not be used to identify the remaining duplicates. Indeed, the differences between two different methods of calculating mean temperature at a particular station can be greater than the temperature difference from two neighboring stations.”
“Probable duplicates were assigned the same station number but, unlike the previous cases, not merged because the actual data were not exactly identical (although they were quite similar). As a result, the GHCN version 2 mean temperature dataset contains multiple versions of many stations.”
Bulletin of the American Meteorological Society, p. 2841
“All the homogeneity testing was done with annual time series because annual reference series are more robust than monthly series. However, the effects of most discontinuities vary with the season. Therefore, monthly reference series were created and differences in the difference series for each month were calculated both before and after the discontinuity. These potential monthly adjustments were then smoothed with a nine-point binomial filter and all the months were adjusted slightly so the mean of all the months equaled the adjustment determined by the annual analysis.”
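For concreteness, here is a hedged sketch of the smoothing step that quoted passage describes: a nine-point binomial filter applied to twelve monthly adjustments, followed by a shift so that their mean equals the annual adjustment. The wrap-around at the ends of the calendar year and the additive rescaling are my assumptions; the paper does not spell out those details.

```python
import numpy as np

# Nine-point binomial filter weights (binomial coefficients of order 8),
# normalized to sum to 1.
BINOMIAL_9 = np.array([1, 8, 28, 56, 70, 56, 28, 8, 1], dtype=float)
BINOMIAL_9 /= BINOMIAL_9.sum()

def smooth_monthly_adjustments(monthly_adjustments, annual_adjustment):
    """Smooth 12 raw monthly adjustments with a 9-point binomial filter
    (wrapping around the year) and shift them so their mean equals the
    adjustment determined by the annual analysis."""
    m = np.asarray(monthly_adjustments, dtype=float)
    smoothed = np.array([
        np.dot(BINOMIAL_9, np.take(m, np.arange(i - 4, i + 5), mode="wrap"))
        for i in range(12)
    ])
    return smoothed + (annual_adjustment - smoothed.mean())
```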
“Our approach to adjusting historical data is to make them homogeneous with present-day observations, so that new data points can easily be added to homogeneity-adjusted time series. Since the primary purpose of homogeneity-adjusted data is long-term climate analysis, we only adjusted time series that had at least 20 yr of data. Also, not all stations could be adjusted. Remote stations for which we could not produce an adequate reference series (the correlation between first-difference station time series and its reference time series must be 0.80 or greater) were not adjusted. The homogeneity-adjusted version of GHCN includes only those stations that were deemed homogeneous and those stations we could reliably adjust to make them homogeneous.”
Bulletin of the American Meteorological Society, p. 2845
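The 0.80 screening criterion quoted above is easy to state in code. This is a minimal, illustrative sketch; how the reference series itself is built from neighbouring stations is not shown.

```python
import numpy as np

def passes_reference_check(station_series, reference_series, threshold=0.80):
    """Return True if the correlation between the first-difference
    (year-to-year change) series of a station and of its reference
    series meets the quoted GHCN screening threshold."""
    ds = np.diff(np.asarray(station_series, dtype=float))
    dr = np.diff(np.asarray(reference_series, dtype=float))
    return np.corrcoef(ds, dr)[0, 1] >= threshold
```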
“A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b). However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous. These adjustments are the best we could do given the paucity of historical station history metadata on a global scale. But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments. Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b).”
Bulletin of the American Meteorological Society, Vol. 78, No. 12, December 1997, p. 2846
I am not suggesting that these corrective measures were necessarily malicious!
These are just representative quotes snagged out of a single paper that jumped out at me as flashing warning signs about the inherent limitations of the data, some of which are plainly stated by the authors of the paper itself.
I just think these were over-optimistic attempts to do something which is practically impossible. By the time you twiddle around numbers for thousands of stations with a great number of sometimes arbitrary adjustments based on “assumptions” and judgment, you may have a pretty data set, but is it really useful for the purpose you are trying to develop it for?
I think they are fooling themselves into thinking they have improved the value of the data, and also grossly overestimating the statistical significance and precision that it is appropriate to assign to the output.
In my judgment (I am not a statistics whiz – just practical good common sense), they are lucky if they can accurately detect a trend smaller than +/- 1 to 2 deg C in the data series they have developed. If some statistics whiz can show me an error budget that supports higher precision than plus or minus 1 or 2 deg C, I would like to see it.
Larry
Still watching that video: (02:33:36) :
Rich Muller, professor of physics and author of Physics for Future Presidents, is visibly angry at the end of his response to Collins and Torn. His response begins around 42 minutes into the video: an IPCC skeptic is born!
Muller has a true understanding of the nature of real science. He now doubts the AGW claims, which have exaggerated any risks. He is outraged that the models ignored the contribution of cloud cover and negative feedbacks, and is equally outraged at the economic risks being promoted by climate ‘science’. He is emphatic that the e-mails HAVE NOT been taken out of context. Whew! Great to hear a real scientist debating the phonies. My skin is crawling at their squirming response to his criticisms.
Joshua Jones says:
February 27, 2010 at 8:10 am
To those suggesting this is evidence of fraudulent data manipulation…
I doubt it’s fraud as well. It’s more likely the case that a lot of these scientists are fairly inept, especially in statistics. These linear trends are based on the false proposition that there is a linear trend. Temperature is cyclic (from night to day, from month to month, and probably year to year and decade to decade). There should be a lot more Fourier transforms, multivariate analysis, and statistical hypothesis testing going on than generic linear regression in Excel. Here’s an example of the danger: roll dice and plot the numbers over time. Now do a linear regression. It will show a slope. Would you take this slope as evidence that there is a true trend? No. How can you be sure? What if three out of four dice showed a positive trend? Would you adjust the fourth to match the first three? No, you’d say it was random chance. Same with temperature trends: take all stations unadjusted, calculate the trends, then statistically test whether the distribution of trends differs from the null hypothesis.
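The dice analogy can be made concrete in a few lines; this is purely an illustration of the statistical point about spurious slopes, not an analysis of any station data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_years = 1000, 100

# Each "station" is a series of fair dice rolls: no real trend at all.
slopes = np.array([
    np.polyfit(np.arange(n_years), rng.integers(1, 7, size=n_years), 1)[0]
    for _ in range(n_stations)
])

# Roughly half the fitted slopes are positive purely by chance.
print("positive 'trends':", (slopes > 0).mean())
# A genuine signal would shift the whole slope distribution away from zero
# by much more than its sampling spread, std(slopes) / sqrt(n_stations).
print("mean slope:", slopes.mean(), "spread:", slopes.std() / np.sqrt(n_stations))
```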
“If some statistics whiz can show me an error budget that supports higher precision than plus or minus 1 or 2 deg C, I would like to see it.”
I think we’d all like to see that.
Allow me to try to summarize with my high school education.
The homogenized data have been adjusted.
The adjustments appear to include (on a number of examined stations):
lowering of past reported temperatures, exaggerating any warming trend,
and
adjusting for the urban heat island effect not by adjusting urban stations down, but by adjusting adjacent rural stations UP.
E.g., Lydia Pinkham’s Vegetable Compound, “Efficacious in every case.” (You betcha, at 15 proof alcohol!)
The strangest thing is that over at RealClimate, they don’t even seem to want this to be fixed. It is indeed very strange.
Hopefully more and more people will realise the situation.
It looks like an awful mess, seen from my standpoint.
Dr. Schnare, this is the approach I like to see. There’s a lot of overreaching at WUWT.
RK: Give it time. The Team has made many enemies in their field. (For instance, that guy Karlen from Scandinavia who got brushed off by T____.) They have been waiting for an opportune moment to strike back and be heard. Now they have it. As some of them speak up, others will be encouraged to come forward. And the strength of their condemnation of the IPCC and the Team will rise, and lots more dirty linen will be aired. It’ll develop along the lines of Watergate, with the public getting hooked on their weekly scandal, and the defenders in the bunker getting more and more implausible and desperate.
As that happens, the media will be more inclined to pay attention to dissenters. There will be articles exploring, with the aid of graphics, the links (remember that word?) between the Team, the IPCC, and the various gatekeepers in the field. There may even be articles exploring topics like, “What is climate science all about anyway?”
A tectonic shift is underway. The media’s current silence is an indication that they are re-assessing the situation, and that their treatment in the future will be less outrageous. Even if you don’t grant them any sense of fairness at all, which is silly, you should realize that they have to be concerned about not alienating the skeptical portion of their readership too badly, now that noticeable segments of it are sounding off in their online comments sections. Previously, they were only getting badgered by the enviros, whenever they failed to toe the line. Now, the forces on the contrarians side are coalescing, mainly thanks to internet sites like this, and making their impact felt.
Not by me! Here’s what I posted here during Weeks 1 & 2, in exchanges with Brendan_H (primarily). (These aren’t all in chronological order.)
(However, I was a bit “off” in the specifics. I thought there would be more disaffected insiders like Georg Kaser coming forward. Instead, what primarily emerged were new “gate” scandals, in AR4. Nevertheless, I still think that the “tectonic shift” of insider opinion is occurring quietly and that it will be the “sea change” in the debate over warming that will make the difference, not these recent scandals. I.e., even if the recent “gates” blow over in a few months, as warmist forces hope, probably with good reason, the former consensus has been shattered and won’t be able to regroup well enough to effectively marginalize and intimidate cautionary and contrary voices in the scientific community and in the media. Warmist momentum and solidarity have been lost. Hence, appreciation of the wobbliness of its case will grow over the years.)
On the contrary, there has been ample discouragement for people to spill their beans, and the encouragement offered hasn’t been much. An op-ed in a right-wing newspaper, a some-expenses-paid trip to an NIPCC conference, or an online petition to sign? These merely brought down abuse and shunning. Some encouragement!
That’s probably why that Scandinavian Karlen, who was brushed off by CRU, didn’t make a public stink about it. My inference is that there are more like him out there with stories to tell, and that they will come forward now that people are willing to give credence to what they have to say. In addition, critics who have already spoken out but been generally ignored and/or scoffed at will have gained credibility in light of what has been revealed, and hence will be de-marginalized by being interviewed on TV, etc. Here’s what I prophesied:
RK: “The Team has made many enemies in their field. … They [those enemies] have been waiting for an opportune moment to strike back and be heard. Now they have it. As some of them speak up, others will be encouraged to come forward. And the strength of their condemnation of the IPCC and the Team will rise, and lots more dirty linen will be aired. It’ll develop along the lines of Watergate, with the public getting hooked on their weekly scandal, …”
I wouldn’t be too sure. Additional e-mails involving the team will be subpoenaed by Inhofe’s committee. There’s likely to be embarrassing material in them that will titillate the public, and whet their blood-lust for more.
Spoken like another Ron Ziegler or Rabbi Korff (remember him?)! I hope you get interviewed as a CRUgate defender on TV: there’s a need for someone to fill those roles, to heighten the absurdity of it all.
Unlike you, Monbiot has recognized that there is plenty of evidence of wrongdoing, collusion, and butt-covering among The Team, that the public is going to see it that way, and therefore that a timely abandonment of them and their indefensible activity is the only way for warmism to salvage some credibility from this train wreck. The truth of his insight should be obvious, but if you and your brethren would rather be oblivious, I’m fine with that. Don’t give up the [glug ….]!
Sure. But now some of them are beginning to think, “Maybe we were wrong to dismiss the pre-existing narrative. That’s what we did with Watergate. We ignored McGovern’s pre-election charges that the break-in had been orchestrated from above because it sounded partisan, outrageously unlikely, and would have brought down obloquy on us if we had entertained the possibility publicly. Mutatis mutandis …”
Now that there’s been some “hard” confirmation of outsiders’ charges of smug, thuggish groupthink that has been “leaning” on the peer review process, climate critics no longer can be dismissed as cranks. They are going to be given a respectful hearing, at least in a fair number of venues.
Similarly, scientific societies are going to have to take a serious second look at this controversy, instead of just rubber-stamping the “correct” opinion. Every time one of them distances itself from the consensus, it’ll be newsworthy. Every time a warmist becomes a turncoat, or even merely criticizes an outrageous defense of the Team (like the absurd defenses of Nixon that were offered), it’ll be newsworthy.
The dam is cracking, the increased waterflow will widen the cracks, the media will like the ratings the drama is getting, more blood will get into the water, the feeding frenzy will intensify, more countries will put a hold on their anti-carbon legislation, more prestigious scientific statesmen and popularizers will weigh in on the side of caution or contrarianism, more hapless/ludicrous defenses of the consensus will be made, and the whole world will grab some popcorn and watch with glee.
Over the next few years, the warmists will be in retreat and on the defensive, despite occasional blips. The warm has turned. All the sanctimonious viciousness and hypocrisy (“we’re doing real science”) of the enviro-nuts to date will make them wonderful targets for popular scorn and down-peg-pulling.
“Dr Phil Jones says this has been the worst week of his professional career.”
So far.
The Team stands accused of that and more. Look at some of the bills of particulars that others here have posted, and several newspaper columnists too. Argue with them. I’m convinced.
I wasn’t among those who made such claims. I’m not inclined to such over-optimism. I can tell that this is different–I can smell blood. A brick has been removed from the wall that protected the team, and it will be much easier to pry out further bricks as a result. Now there is justification for congressional hearings examining the machinations of the IPCC and its failure to behave fairly. For instance, NASA scientist Vincent Gray has complained that he submitted over 1100 comments to the IPCC, all of which were ignored. The IPCC might soon have to justify those refusals. No doubt there are dozens of other scientists whose skeptical contributions were ignored, or whose drafts were high-handedly revised. The IPCC will have to justify those as well. It won’t come out looking good. Thereafter, its endorsement of alarmist findings won’t carry nearly as much weight among the innocent public and opinion-leaders as heretofore.
This is like the moment Nixon’s taping system was revealed. Until Butterfield revealed that to the committee, it looked as though Nixon would be able to wiggle out of the affair. After that, the pursuit went into high gear. I was watching at the time and realized instantly, “Now they’ve got him. He can run, but he can’t hide.” I have a similar feeling about this business. Until now the Team was Teflon: accusations slid off them, because of their presumptive objectivity and high-mindedness. Now they are under a cloud of suspicion, subject to subpoena and testimony under oath; they won’t be able to keep their misdeeds concealed from all but their victims. They’re on the run.
As Monbiot has said, persons like you who don’t/won’t realize the dreadfulness of this situation for your side are living in a fool’s paradise.
On the contrary, I’ve made several posts in the past few days stating that I think the effect of the Team’s fiddling with the measured temperature data is likely minor, and that the overall shape of the blade of the hockey stick won’t be changed much. (I’ve also said repeatedly that there are likely innocent explanations for much of the awkward material in the e-mails.) So I don’t think that AGW has been disproved.
I see what’s happened as the first step in a lengthy process of objectively and scientifically reexamining the data and reasoning behind warmism, after the Team and the IPCC and peer review have had their halos removed. Their prestige, plus their power and willingness to enforce groupthink by any means necessary, will no longer be factors. Doubters will feel safe to speak out.
That’s naive. Academic and social “politics” already taints their judgment. Until now climate science has been politicized, in the sense that the Team’s paradigm was “enforced” by their mafia tactics and by madly warmist funding agencies, journal editors, and journalists. In such an environment, “Reason comes running / Eager to ratify.”
Once de-politicization occurs and marginalized voices can be heard and harkened to without penalty, and non-warmist research can get funded, opinions among climate scientists are likely to shift substantially.
Of course, for many it will be too awkward to change, because they are so complicit in the shiftiness of warmism’s history. They will hang tough, like the tiny crew of post-Watergate Nixon loyalists.
PS: I should have said above that the main outcome of Climategate, IMO, is that the Team is no longer trustworthy in the public eye, and that a cloud of suspicion has fallen over peer review, the IPCC, and the consensus, which seems to have been engineered or manufactured. This is where the real damage has occurred, on an intangible level. Therefore, a re-do of the case for CAGW, under neutral scientific auspices, is needed. Plus more transparency, etc.
Not necessarily. Give Climategate a while to sink in, and for additional dirty laundry to come to light, and for additional scientists to weigh in against the consensus. The pendulum of alarmism has reached its apogee and is poised to swing the other way. Copenhagen is a dead man walking.
There is no chance now that the US will pass any major carbon tax without a lot of hearings and scientific investigations first that produce findings supporting alarmism–and that is impossible, if neutral scientists oversee the process, similar to the Wegman investigation.
And if the US won’t get on board, neither will China and India. So the only money lost will be in Europe, similar to what happened post-Kyoto.
Oops!! — I forgot to include the introduction to my long post above. Here it is, for context:
Not by me! Here’s what I wrote in Weeks 1 and 2 after Climategate:
OOPs: I accidentally placed the introduction to my long post above beneath the quote from Steve and my response to it.
(Mods: Please delete my prior, incorrect correction?! TIA)
REPLY: Unclear what you want – comment stands, -A
There is, in the US Govt circles, a concept known as “error so egregious as to constitute deliberate fraud.” It’s not necessary to prove deliberate fraud if it can be shown that there were so many ‘mistakes’ or ‘errors’ that they cannot be reasonably ignored.
My commentary on the Climategate Panel at Berkeley posted by JustPassing (02:33:36) is somewhat OT, but I want to make a final observation that brings it into the relevant discussion on this thread.
David L. suggests that what has occurred in the USHCN and GISS data sets is not fraud, but incompetence. He states: David L (10:58:42): “I doubt it’s fraud as well. It’s more likely the case that a lot of these scientists are fairly inept, especially in statistics. These linear trends are based on the false proposition that there is a linear trend…” This sounds like a likely scenario to me.
If you happen to watch the entire YouTube video (runs to nearly 2 hours), you will see the panel discussion go through several cycles. The physicist Rich Muller at times appears to defend the IPCC 4th report, and soft-soaps his criticisms, but then the anger re-emerges and at one point he says there’s no reason to trust there will be no further ‘dirty laundry’ in the report. When the panelists are asked how Mann and the guilty CRU parties’ academic misconduct should be punished, they all agree that no crime occurred! Although Muller is on the attack, there is a high degree of academic circling of wagons going on: it is all very polite. The worst punishment that would be possible: ban Mann et al. from further IPCC reports!!!!?!
It appears, therefore, that a far more apposite punishment, if that is the route that must be followed, should be meted out. These incompetent clowns should be obliged to take some advanced stats courses from Ross McKitrick. Problem solved – although probably not to RM’s taste.
P.S. Non-academics here, if they can stomach watching Bill Collins and Margaret Torn, will get a glimpse of how academics conduct themselves in a situation where there is a major case of academic misconduct that must be resolved and, at the same time, the academic community wants to save face.
This is why, Roger Knights, your observations are correct that it will take time for the full implications of Climategate and all the other alarmist-gates to cause the AGW establishment to crumble. Keep chipping away!
Re: _Jim (Feb 27 09:29),


Can you explain in just a few sentences why a TOB (time of observation bias) adjustment is needed in light of peak-detect methodology?
Jim, the old thermometers could tell you the peak in a 24-hour period, but when they are read has an effect. The NWS originally asked that the reading happen in the late afternoon.
That meant that on a hot day the peak could be counted twice: a genuine peak in mid-afternoon, say, and the temperature just after the reading (and re-setting) becoming the recorded peak for the next day. Cold mornings, however, were only ever counted once.
Over time, observers shifted to reading and resetting in the morning, since that is when they had to read the rain gauges. Then the opposite happened: warm peaks were counted only once, while cold mornings were often counted twice.
Since the average daily cycle is fairly well measured at each site, there’s a good basis for making a correction.
REPLY: But the real problem is, TOBS imparts a significant warm bias to the entire record post 1960, so if the intent was to prevent counting double peaks and having them bias the record upwards (or downwards), it seems to have failed miserably. BTW, the “new” thermometers (MMTS) can also “tell you the peak in a 24-hour period”. There’s no operational difference between reading the “old” mercury thermometers and the newer electronic “MMTS” thermometers. They are both read manually, at a preferred time of day, rounded to the nearest whole degree, and hand-recorded on a B91 form, and the form is mailed in to NCDC once a month. Not sure where you get the erroneous idea that “old” thermometers are operationally different in some way other than the sensing element/enclosure.
The TOBS bias is shown here; as you can see, TOBS is the lion’s share:
Source: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_pg.gif
The final total bias of all adjustments is here:
Source: http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
-Anthony
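As a toy illustration of the double-counting mechanism Nick describes above, the sketch below generates synthetic hourly temperatures and compares the long-term mean of (max+min)/2 for observation days ending at midnight, in the morning, and in the late afternoon. The diurnal shape and the day-to-day “weather” are invented for the demonstration; only the sign of the shifts is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 3650
hours = np.arange(n_days * 24)

# Synthetic hourly temperatures: a diurnal cycle peaking mid-afternoon plus
# day-to-day "weather" that changes from one calendar day to the next.
diurnal = 6.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)
weather = np.repeat(rng.normal(15.0, 4.0, n_days), 24)
temps = diurnal + weather

def mean_of_minmax(temps, reset_hour):
    """Average of (max+min)/2 over observation 'days' that start at
    reset_hour, mimicking a once-a-day read-and-reset thermometer.
    (The final window wraps around; negligible for this illustration.)"""
    windows = np.roll(temps, -reset_hour).reshape(n_days, 24)
    return np.mean((windows.max(axis=1) + windows.min(axis=1)) / 2.0)

for reset in (0, 7, 17):   # midnight, morning, and late-afternoon resets
    print(f"reset at {reset:02d}:00 ->", round(mean_of_minmax(temps, reset), 2))
# Typical result: the afternoon reset runs warm and the morning reset runs
# cool relative to midnight, even though the underlying data are identical.
```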
_Jim (09:50:56) :
“Are we…measuring increased convective activity vis-a-vis higher reported satellite temperatures…?”
“…MSU’s aboard those sats are also going to see the result of convective activity, i.e., precipitation in its varied forms, which are more reflective of temperature seen at altitude and not the boundary layer or ground.”
Yes, we do see increased convective activity highly correlated with SST. The powerful advance in technology of the multiple spectral channel data from the Aqua and Terra Atmospheric Infrared Sounder (AIRS) and Advanced Microwave Sounding Unit (AMSU) is that we see the temperature from the bottom up, and can tell which signal is coming from where. The algorithms to sort it all out aren’t trivial, and the volume of data is pretty daunting. According to Aumann, H. H., D. T. Gregorich, S. E. Broberg, and D. A. Elliott (2007), “Seasonal correlations of SST, water vapor, and convective activity in tropical oceans: A new hyperspectral data set for climate model testing,” Geophys. Res. Lett., 34, L15813, doi:10.1029/2006GL029191 (trs-new.jpl.nasa.gov/dspace/bitstream/2014/40966/1/06-4186.pdf), a “small subset” of the available data, the AIRS Climate Data Set, is 300 Mbytes per day.
A large fraction of the data was recorded by volunteers who changed their observation schedules away from midnight to some other, more convenient time. As one does not know the time at which the maximum and minimum occur, just the maximum and minimum values, reading at a time other than midnight produces an ambiguity as to what day the true maximum or minimum occurred. For example, reading in the afternoon results in an upward bias, as the maximum could have occurred on the current day or on the previous day; that is, the maximum temperature reported for two consecutive days could have come from a single day. Reading in the morning produces a cool bias for similar reasons.
My point is that the true correction involves the actual temperatures as read at the station, but NCDC uses a statistical/geographical model to estimate the bias.
REPLY: Important to note: the observers state the time the temperatures were taken, right on the B91 form.
For example here:
Hand written example: http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=28573
Typed example: http://wattsupwiththat.files.wordpress.com/2010/02/bartow_b91_aug09.pdf
Yet NCDC discards that time-of-observation info in the transcription-to-digital-data process. Here is the ASCII data from the B91 form for that station for August 09, same as the typed form example above. Even though the form says “1700” (5 PM), note that the “tobs” listed in the ASCII data is the temperature at observation time, NOT the “time of observation.”
Coopid,Year,Month,Day,tmax,tmin,tobs,prcp,snow,snwd,wdmv,evap,sgc4,sxy4,sny4,sgc8,sxy8,sny8,meanmax,meanmin,sumprcp,sumsnow
080478,2009,07,1,78,74,77,0.35,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,2,87,73,85,0.18,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,3,92,73,89,0.25,0,999999, ,9999.99, , , , , ,
080478,2009,07,4,91,74,79,0.70,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,5,93,75,91,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,6,92,77,89,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,7,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,8,91,76,83,0.02,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,9,90,74,75,0.25,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,10,89,72,87,0.16,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,11,96,69,85,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,12,91,69,83,1.78,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,13,92,74,83,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,14,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,15,95,74,93,0.27,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,16,96,77,92,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,17,93,77,92,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,18,94,77,86,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,19,90,71,88,0.48,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,20,90,69,86,0.38,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,21,91,71,90,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,22,92,74,90,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,23,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,24,94,75,93,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,25,94,73,91,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,26,93,74,83,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,27,89,72,79,0.10,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,28,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,29,95,73,91,0,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,30,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , ,
080478,2009,07,31,999999,999999,999999,9999.99,99999.9,999999, ,9999.99, , , , , , ,91.5,73.5,4.92,
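For anyone who wants to work with the dump above, here is a minimal, illustrative parser (parse_b91_ascii is a hypothetical helper, not NCDC software). It shows that the tobs field is a temperature reading, and that the 999999-style values are missing-data flags.

```python
import csv
import io

MISSING = {"999999", "99999.9", "9999.99"}   # sentinel values in the dump

def parse_b91_ascii(text):
    """Parse the comma-separated daily records shown above into a list of
    dicts, converting the missing-data sentinels to None."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        def num(field):
            value = (rec.get(field) or "").strip()
            return None if (not value or value in MISSING) else float(value)
        rows.append({
            "day": int(rec["Day"]),
            "tmax": num("tmax"),
            "tmin": num("tmin"),
            # "tobs" is the temperature AT observation time; the observation
            # hour itself (e.g. 1700) never makes it into this file.
            "tobs": num("tobs"),
        })
    return rows
```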
Just in case anyone wants to see it, here is what the station looks like:
http://wattsupwiththat.com/2007/10/01/how-not-to-measure-temperature-part-32/
Oddly, the time of observation seems less important than one might think. NWS advises observers that if they have to deviate from the 7AM time, well that’s OK too. See this:
http://www.srh.noaa.gov/ohx/dad/coop/WxEye2.pdf
What if You Take your Ob at a Different Time Than Scheduled?
No problem! That is why there is a remarks section! Just note in the remarks if you take your ob late or early. Use the remarks section to also indicate any missing obs and why. For example, you may go on vacation for a week and miss a whole week of data. You can use remarks to let us know why data is missing. Sometimes equipment is faulty. Jot this down in remarks too.
I can see why it’s “no problem”: NCDC doesn’t give a hoot as to what time the observation is taken; they simply throw that information away!
What we have is (max+min)/2, referred to as the average temperature for each day. Anthony and Nick, above, have covered this concept pretty well, so I won’t continue whipping a dead horse.
Indeed, we cannot go back and look at individual stations to see whether the statistical/geographical model that NCDC uses to correct TOB is appropriate or not. We have to take what they hand us, or start over with some alternative model of estimating average temperature versus time. This is why using first-order stations would be useful–there is no TOB to correct. However, before starting such a task, I’d like to know whether mean temperature is a useful concept in the first place. I can come up with reasons why it is not.
I would just like to add one more comment to what Nick Stokes said above. The NCDC data set comes from COOP stations (at least that is my interpretation) and we do not know the daily cycle at these stations. All we have is a time series of max-min values. This is why I am pretty skeptical of the TOB bias correction that NCDC applies. The bias results from the actual series of daily cycles, but this isn’t available, and instead they use a method of correction due to Tom Karl, which makes some sense physically, but which has no associated measure of its “robustness”, to use a trendy phrase.
_Jim (20:37:49) :
“Survey parties anxiously await the invention of/the discovery of/the release from EVT (Engineering Verification Testing) of the first time-travel machines available for rent or lease for prioritized civilian purposes …”
I think you miss the point that to be able to use data from a given station to determine “climate” trends, the station must have a consistent and appropriate micro-climate, instrumentation and time of observation.
The SurfaceStations exercise only examines the current microclimate to see if it meets the standards. The intent of the USHCN is apparently to use stations that both currently and in the past meet the standards of a weather station capable of evaluating climate.
It is a necessary condition that a station is currently meeting the criteria of an acceptable microclimate before even examining its history of meeting the criteria.
After eliminating the USHCN stations that are not currently meeting the acceptable criteria, examination of the history of the remaining stations leaves only a potential few that are viable candidates for climate-change evaluation (Walhalla, SC is the only one that comes to mind).
Using the 100-year data record to determine trends to within less than a degree of uncertainty for a given station is a fool’s errand.
The paleo-climate situation regarding sites is similar, with no sites likely suitable for sub-degree resolution.
Instead of saying that we want the “raw” data, we might occasionally say:
We want the uncooked data.
We want the “sushi” data.
“Uncooked” conveys a neat double entendre. (I.e., unslanted.)
JerryB (07:08:41) :
Good to see you again. TOBS will come around for yet another discussion. People who want to understand TOBS should go to CA and read everything JerryB has written. He also has pointers to data, so you can ply them to your heart’s content.
Kevin Kilty (17:17:30) :
Karl’s TOBS adjustment is an empirical model with data held out for verification, as I recall. It’s been a couple of years since I read it.
Re: Nick Stokes (Feb 27 14:32),
But the real problem is, TOBS imparts a significant warm bias to the entire record post 1960, so if the intent was to prevent counting double peaks and having them bias the record upwards (or downwards), it seems to have failed miserably.
Anthony, I don’t get the logic there. If the trend in TOBS was towards morning rather than evening reading, then that would have biased the observations downward. And so, when corrected, the corrections will have an upward trend. I don’t see how that invalidates them.
REPLY: Read my reply to Kevin Kilty. NCDC doesn’t even record the time of observation in the transcribed data. The TOBS correction thus appears to be applied without an actual basis in the hour of observation. How can there be a basis if they don’t use the data provided by the observer? It’s insanely sloppy. The point of a correction is to get the record back in line with nature, not to throw a random element of chance into the mix, which is what it appears to do. – A
Many readers will be stunned when they discover how the practice of using the minimum and maximum daily air temperatures from a single observation each day artificially distorts the actual record of the location’s air temperature and thermal state for the day, month, and year, even BEFORE any TOBS or other adjustments are applied. Reducing the reported daily temperatures to only two extreme points of observation disregards all of the actual intermediate air temperatures which occurred in the air mass throughout the day. This results in a false representation of the true average air temperature, with variations of whole degrees F/C.
Find weather stations which make intra-hourly special observations, hourly observations, 3-hourly observations, and 6-hourly observations. Compare (1) the official MIN, MAX, and daily average/MEAN air temperatures to the computed daily averages of (2) all reported air temperatures from 0000 hours local time to 2359 hours local time, (3) only the hourly air temperatures, (4) only the 3-hourly air temperatures, and (5) only the 6-hourly air temperatures. For additional interest, repeat the exercise while shifting the 24-hour period by one hour or some other interval throughout the day, and observe the effect it has upon the computed averages versus the MIN-MAX method of computing the daily average.
Note how the additional special observations taken at regular and irregular intervals between the hourly observations can shift the daily average air temperature even more than the regularly scheduled hourly observations do with respect to the single daily observation of the MIN-MAX air temperatures.
Then ask the IPCC climate experts how they intend to prove that a global change in total average air temperature of less than one degree C exists, on the basis of a surface weather observation method which varies by one or more whole degrees from the actual air temperatures when those are sampled more than twice a day.
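A minimal sketch of the comparison described above, assuming a single day of 24 hourly readings; the helper name and the subsampling choices are illustrative only.

```python
import numpy as np

def compare_daily_means(hourly_temps):
    """Compare the (MIN+MAX)/2 'daily mean' against means computed from
    progressively sparser subsamples of the same 24 hourly readings."""
    t = np.asarray(hourly_temps, dtype=float)   # 24 values, local hours 0-23
    return {
        "min_max": (t.min() + t.max()) / 2.0,
        "hourly": t.mean(),
        "3_hourly": t[::3].mean(),
        "6_hourly": t[::6].mean(),
    }

# With an asymmetric diurnal cycle the "min_max" figure commonly differs
# from the 24-hour mean by several tenths of a degree, which is the point
# being made about sub-degree "global" trends.
```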
Not at all, Eric; levity (excessive or unseemly frivolity), yes; miss the point, no.