How not to measure temperature, part 10

Russ Steele, a blogger in Nevada County at NCWatch, is volunteering to do weather station site surveys as I’ve been doing. Yesterday Russ visited Petaluma, California, to see the USHCN climate station of record there. The station used to be at the city fire station but has since been moved to the airport; apparently the NASA climate database hasn’t caught up with the move, as it still lists “fire station” as the location.

OK, we have a temperature sensor strapped to a wooden deck, near a sea of tarmac.

[Image: petaluma_east.jpg]

And not only that, the building with the deck is only six feet away and has air conditioners exhausting hot air on its south side. The prevailing wind direction in that area is from the south, so when the wind hits that wall, it will spread the hot a/c exhaust east and west.

[Image: petaluma_west.jpg]

Prevailing southerly winds will transfer heat from the burgeoning suburbs to the south, and when the wind reverses and comes from the north (after a frontal passage for example) it will transfer heat from the acres of tarmac to the sensor.

[Image: Petaluma_AP_Google_Earth.jpg]

So it’s really no surprise to see this plot. But not to worry: the climate modeler Dr. James Hansen at NASA has it all mathematically accounted for, except he still doesn’t know the station is at the airport. He should try visiting weather stations someday.

[Image: Petaluma_station_plot.gif]

This station data is in fact used in climate modeling to predict our climate future.

18 Comments
June 18, 2007 12:28 pm

Forgive me if you’ve already covered this somewhere on the site, but the stats all still seem to trend upwards, with occasional outliers. I guess I would expect more erratic measurements with the faulty device placements.
And do you have well-placed stations with static data? Just curious about some things that would compel me in one direction or the other. I don’t really have a strong opinion on the matter at hand.

June 18, 2007 1:58 pm

When you collect pictures of a good sample (or better yet, all of them), it would be good to rank them in order from worthwhile to absurd as sites for measuring temperature change (without looking at the charts). Then cut off at some point and compute the average temperature graph for the stations above the cutoff. That’s a blind way of assessing the magnitude of the problem.
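To make that blind-cutoff procedure concrete, here is a minimal sketch; the station names, quality scores, and temperature series are hypothetical placeholders, not survey data:

```python
# A minimal sketch of the blind-cutoff idea: score each site's quality
# from the photos alone (before looking at any charts), keep only sites
# at or above a cutoff, and average their temperature series.
import numpy as np

# Hypothetical siting-quality scores, 1 (absurd) to 5 (worthwhile),
# assigned without seeing the data.
quality = {"petaluma": 1, "orland": 5, "marysville": 2, "colfax": 4}

# Hypothetical annual mean temperatures (deg F), same years for each site.
series = {
    "petaluma":   np.array([57.1, 57.4, 57.9, 58.3]),
    "orland":     np.array([62.0, 61.9, 62.1, 62.0]),
    "marysville": np.array([60.2, 60.8, 61.1, 61.6]),
    "colfax":     np.array([54.3, 54.2, 54.5, 54.4]),
}

CUTOFF = 3  # keep only sites scoring at or above this
kept = [sid for sid, score in quality.items() if score >= CUTOFF]

# Average the series of the surviving stations, year by year.
average = np.mean([series[s] for s in kept], axis=0)
print("stations kept:", kept)
print("average graph for well-sited stations:", average)
```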

paul
June 18, 2007 2:58 pm

thanks for providing this useful service. it might also be worth comparing this dataset to others out there, such as the first-order weather stations in the US (typically found at airports). of course you may have already done that.
here’s some relevant text at:
http://www.ncdc.noaa.gov/oa/climate/onlineprod/drought/ftppage.html
regarding ushcn vs. palmer drought index data, which use another dataset (see below):
(begin quote)
The Climate Division database’s nationally averaged temperatures (records beginning with “11002”) in the file “drd964x.tmpst.txt” that were provided via this web page from late 2001 through February 2003 had been partially overwritten with data from the USHCN database. This error resulted in a national trend for the contiguous US that was similar to the trend based on USHCN data. The trend for the contiguous US calculated from climate division data is approximately 0.03F/decade less than the trend calculated using USHCN adjusted data. All nationally averaged climate division temperatures were recalculated and placed in file “drd964x.tmpst.txt” on February 28, 2003.
(end quote)
so the ushcn data do show a significant, albeit small, trend difference vs. these data, which come from the Cooperative Network (COOP) Summary of the Day dataset (TD-3200), according to http://www.ncdc.noaa.gov/oa/climate/research/2006/nadm-workshop/20061019/1161283800-abstract.pdf
do the first order stations suffer the same issues as ushcn? i do know that they’ve had to toss data from the san diego airport when planes idle next to the station there.
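Incidentally, the kind of trend comparison paul suggests is easy to sketch; the two series below are hypothetical stand-ins for the NCDC files, not real data:

```python
# A minimal sketch of comparing decadal trends between two national
# temperature series (e.g. climate-division vs. USHCN). Both series
# here are synthetic placeholders, purely for illustration.
import numpy as np

years = np.arange(1950, 2001)

def trend_per_decade(values):
    """Least-squares slope of values vs. years, scaled to per-decade."""
    slope = np.polyfit(years, values, 1)[0]
    return slope * 10.0

rng = np.random.default_rng(1)
climdiv = 52.0 + 0.010 * (years - 1950) + rng.normal(0, 0.3, years.size)
ushcn   = 52.0 + 0.013 * (years - 1950) + rng.normal(0, 0.3, years.size)

diff = trend_per_decade(ushcn) - trend_per_decade(climdiv)
print(f"USHCN minus climate-division trend: {diff:.3f} F/decade")
# Compare with the ~0.03 F/decade difference quoted above.
```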

Eric P
June 18, 2007 4:28 pm

The presentation of the site pictures with the accompanying temperature graph is extremely misleading, as not all the data in the graph is from the pictured site. In fact, the data for Petaluma is from at least 3 different sites (reference NCDC station metadata), with only the recent few years from the pictured airport site. Thus a large portion of the warming displayed in the graph took place before the station was located at the pictured location. While the current location is certainly poor, one must remember that almost all climate stations have relocated several times in their history. In fact many pre-WWI weather observations were taken on the roofs of Post Offices, which were certainly not ideal locations for temperature measurement.
While I can agree that there are multiple sources of error in various temperature records, the cherry-picking of sites and the misleading presentation of data on this blog does little to advance the argument that the global climate is not warming. In reality, the various temperature graphs presented, which all show warming despite the various biases and errors in the data, probably do more to confirm the global warming argument than disprove it.

Anthony Watts
June 18, 2007 4:37 pm

The main website has examples of both good and bad stations, so the claim of “cherry picking” is not valid. The blog is to highlight ones we’ve found problems with, not to prove or disprove global warming.
There are lots of sites that don’t have such problems that show a cooling trend. They’ll all be available when the surfacestations.org website comes back up, thanks for your patience.

Eric P
June 18, 2007 4:53 pm

Your quote from the Pittsburgh TR “I believe we will be able to demonstrate that some of the global warming increase is not from CO2 but from localized changes in the temperature-measurement environment” certainly sounds like someone out to disprove global warming!!

Anthony Watts
June 18, 2007 8:02 pm

***REPLY to Eric P
I’m only trying to illustrate that some sites (and certainly not all of them, because some are in good working order) are contributing to the data set indicating that there is a warming trend, which may be interpreted as global warming. The surface data record is only part of the entire global warming equation. There is satellite data, sea surface temperature data, and radiosonde (weather balloon) data.
When the good surface sites and the bad ones are tagged, then analyzed again, there may be no net difference at all. We don’t know yet and won’t until the survey is finished.
But right now a statistically significant number of the total stations surveyed thus far seem to have issues that have not been dealt with. That percentage may not hold as the survey progresses, or it may get larger. Again, we won’t know until we do them all.
But don’t you think it’s a good idea to flag the obviously bad sites that don’t meet the NOAA standards for temperature measurement? By doing so, they can be removed or adjusted in the data set so that whatever conclusions are drawn are not based on faulty data.
Flagging sites that have car radiators six feet from the sensor or air conditioners blowing on them seems like an obvious step that is just basic science.

GoingGreen
June 19, 2007 5:30 am

Anthony,
Great work! I’ve known about this problem of bad measurement for over ten years now, and it has been hard to get people to take notice. You are doing a valuable service. (Would love to compare notes sometime.) Keep up the great work.

Frank
June 19, 2007 6:49 am

Eric,
Fundamentalists do not like to accept evidence that the earth is older than the Bible would seem to indicate, either.
As their faith is supposedly scientifically based, Global Warmists should be prepared to accept that non-conformists are going to evaluate their claims and the means used to arrive at these claims, as well as the extrapolations drawn from them.
I find it interesting that “Doubters” have taken on the role of Galileo, whilst Al Gore & Heidi Cullen play the Vatican hierarchy.
Frank

Lon
June 19, 2007 3:29 pm

Eric P. makes a good point. The poor location of the existing Petaluma site might be an improvement over the previous sites. For example, the previous sites where surface temperature increases were noted may have been in a barbeque pit, or perhaps located on top of a pottery kiln.
It seems somewhat misleading to use temperature measurements from a variety of locales that have all been “adjusted for error” when nobody knows what the error sources are.
Lon

Ryan
June 20, 2007 7:04 am

In response to Eric’s comments, I agree with Lon. It’s true that we don’t know the exact locations of the previous readings. But if the site selection is presently so poor, why should we expect that the previous locations were any better? Unfortunately, neither you nor I can say definitively either way; we can only say that the current location is poor.

kirby
June 20, 2007 9:59 am

Someone must have addressed this, but many of these trend graphs show near constant rise in temp over 80-100 years.
Somehow I have a hard time believing that the air conditioners, automobile parking, asphalt, etc. that are suggested as contaminating said data have been around that long.
If these sources are measurably influencing temperature at the compromised locations, I would expect to see a step function increase that could be tied to the point in time when the site became compromised.
*** REPLY: Perhaps, but at some sites many things are changing at once: build-up around the station, changes in trees, grass, asphalt, etc. in nearby areas, so sometimes the step gets masked. The point is, these weather stations are out of spec by NOAA’s own published standards.

June 20, 2007 11:52 am

Don’t let me get in the way of your efforts here, but please stop saying that “This data is in fact used in climate modeling to predict our climate future”.
This is simply not so.
You’ve downloaded the GISS model – perhaps you’d like to show me where these station data are used? You won’t be able to because they aren’t.
Observational data at large scale (not individual stations) are used to evaluate the models after they’ve been run – but again generally only at the continental scale and above. The evaluation is not just with trends but with patterns of variability (El Nino responses, NAO etc.) and obviously, the better the data the more reliable the evaluation.
Note that the climate model hindcasts for this area are around 0.5 over the 20th Century – significantly less than this individual station. Should this record therefore be shown to be contaminated, it would actually improve our confidence in the models, not lessen it!
MODERATOR NOTE: see my reply in this new thread here
http://www.norcalblogs.com/watts/2007/06/a_note_from_a_nasa_climate_res.html

Michael Jankowski
June 20, 2007 12:16 pm

Eric P said: “Your quote…certainly sounds like someone out to disprove global warming!!”
His quote referred to explaining “SOME” global warming as attributable to these biases, not MOST, and certainly not ALL.
Exactly why did you leap to the extreme when reading/representing the quote in question?

pat
June 20, 2007 2:03 pm

Eric,
There is a difference between cherry picking and cherry pruning. If someone was motivated to make a case by purposefully selecting outliers he would be guilty of cherry picking – a form of deceit. But if someone examines outliers for purposes of quality control, that is cherry pruning and is a positive contribution to the state of science.

Nick
June 20, 2007 6:09 pm

It would seem to me that even if these sensors are badly installed, unless their setup is worsening at a rate that corresponds to the rise in temperature, then the trend is still relevant. Are you suggesting that each of these weather stations is subject to a gradually increasing bias?
***REPLY: Some may in fact have a gradually increasing bias. Take, for example, the Marysville fire station. It used to be grass in the rear where the sensor was; then it was changed to concrete, and then changes around the station started, such as cell phone towers. Nearby, a new city government building went up; 200 yards away, a strip mall, then a second strip mall on the other side of the street where there was parkland.
My point is that humans have built up around what used to be rural weather stations, and while some effects can be accounted for and mathematically adjusted based on population growth, others, like the fireman’s BBQ, the close-by parking, and the a/c units from the cell phone towers exhausting hot air, are random occurrences that researchers don’t know about and haven’t adjusted for.

DemocracyRules
June 21, 2007 2:58 pm

The presumption made by some posters here is that the basic and initiating hypothesis is that the globe is warming. Any data which questions this must then single-handedly disprove global warming.
This turns science on its head. The BASE hypothesis for science is always the NULL hypothesis. The null hypothesis states that with respect to the apparent phenomenon in question, there is no phenomenon. It does not exist. The researcher is wrong, and there is not enough evidence to prove the hypothesis that the phenomenon exists. This approach is an irreducible tenet of scientific methodology that is centuries old, and there is no reason to change the history of scientific methodology for the sake of global warming theory.
Therefore, the null hypothesis is that the global temperature of the planet has not changed in the last century. Period. Unless data can be marshalled and organized to disprove the null hypothesis, then we are compelled to accept the null hypothesis. If the existing data can convincingly disprove the null hypothesis, then we accept that the phenomenon exists, with the caveat that new data must always be considered. As scientists, we all know of apparent phenomena which were thought to exist, but were subsequently invalidated by new evidence. That is, the new data made it impossible to disprove the null hypothesis.
THEREFORE, THE BURDEN OF PROOF, ladies and gentlemen, lies on the theory of global warming, not on the null hypothesis. If the surface temperature sensors are faulty, then this is a problem for global warming theorists, not for the null hypothesis.
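As an aside, here is a minimal sketch of what a test against that null hypothesis looks like in practice; the anomaly series is synthetic, for illustration only:

```python
# A minimal sketch of testing the null hypothesis "no temperature trend"
# with ordinary least squares. The anomalies are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
# Hypothetical anomalies: 0.5 deg C per century plus year-to-year noise.
anomalies = 0.005 * (years - years[0]) + rng.normal(0.0, 0.2, years.size)

# linregress reports a p-value for the null hypothesis slope == 0.
result = stats.linregress(years, anomalies)
print(f"trend: {result.slope * 10:.3f} C/decade, p = {result.pvalue:.4f}")
# Only if p is small do we reject "no trend"; otherwise the null stands.
```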
More specifically, about this UPWARD TREND, others have pointed out that there are flaws in the design and maintenance of common and ubiquitous surface temperature devices. These flaws are prone to produce falsely high temperature readings. The measurement bias continues an upward trend as the device becomes progressively more faulty.
From: http://www.realclimate.org/
“Some considerations regarding the surface temperature record relate to the measurement techniques themselves. The thermometer radiation screens affect the sensor readings. A factory-new Stevenson standard screen still allows solar radiation impact on the temperature measurement of more than one degree. Accumulation of dirt, growth of mould, even flaking of paint, is naturally occurring over the life cycle of some 15 years that the product is used in the field. The radiation error then multiplies…
Comment by Pejk — 27 Dec 2004 @ 5:10 am”
Two other factors could produce upward trends: (1) the Urban Heat Island effect is larger than we once thought, and can be expected to become more extreme as the expanding urban area more fully engulfs the temperature sensor, and (2) as noted by Pejk immediately above, solar radiation is known to contaminate air temperature readings. Impinging solar radiation appears to be increasing on the earth (as discussed in detail by others), and reducing cloud cover. More solar radiation would tend to bias the temperature readings upwards. Note that this would be a solar radiation increase. It would not be a real air temperature increase.
There is another key dictum of science which is being ignored on this thread: ‘Extraordinary claims must be supported by extraordinary proof.’ Systematic global warming is an extraordinary claim, and the assertion that it is caused by humanity is very extraordinary indeed. I, for one, need to see more clean and credible data.

DemocracyRules
June 21, 2007 3:21 pm

The idea that there is an almost universal upward trend in surface temperature data is untrue. Here is a sample of actual surface temperature trend data taken from remote places on earth. Many places appear to be on a cooling trend, and there is certainly a huge amount of error variance in these temperature readings.
From: http://www.realclimate.org
“Three years ago, I tried to get a handle on whether UHI [Urban Heat Island] was responsible for the recent warming trend in most of the temperature datasets by comparing the trends for the UAH/MSU 2LT channel and the Jones et al. surface data for some of the world’s “empty places”.
Location              MSU (deg C/decade)   Jones et al. (deg C/decade)
N Quebec                    0.317                 0.327
N Ontario                   0.413                 0.533
N Alb/Sask/Man              0.424                 0.470
E Yukon and Nunavut         0.101                 0.666
N Alaska                    0.191                -0.008
SW Alaska                   0.196                -0.013
Arabian peninsula           0.021                 0.328
Sahara                      0.105                 0.346
W China                     0.335                 0.328
Outback                     0.007                -0.057
Amazon Basin               -0.183                 0.171
Patagonia                  -0.013                 0.049
All trends from the Idsos’ website world temperature calculator at http://www.co2science.com. The MSU and Jones et al. trends for these empty places are weakly positively correlated with r = 0.426. Notable in this data is how strong the warming in the North is and how different the Southern Hemisphere is.
Comment by Jim Dukelow – 15 Dec 2004 @ 5:08 pm”
NOTE that the correlation between these two data sets is distressingly poor. If r = 0.426, then each data set explains only about 18% of the variance of the other (r² = 0.426² ≈ 0.18)!
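For anyone who wants to check that arithmetic, the correlation can be recomputed directly from the table above:

```python
# Recomputing Jim Dukelow's numbers: Pearson r between the MSU and
# Jones et al. trends in the table above, and the shared variance r^2.
import numpy as np

msu   = np.array([0.317, 0.413, 0.424, 0.101, 0.191, 0.196,
                  0.021, 0.105, 0.335, 0.007, -0.183, -0.013])
jones = np.array([0.327, 0.533, 0.470, 0.666, -0.008, -0.013,
                  0.328, 0.346, 0.328, -0.057, 0.171, 0.049])

r = np.corrcoef(msu, jones)[0, 1]
print(f"r = {r:.3f}, r^2 = {r**2:.3f}")  # r = 0.426, r^2 ~= 0.18
```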