Sometimes, words fail me in describing the absolute disregard shown in the placement of NOAA official climate monitoring sites. For example, this one in Clarinda, Iowa, submitted by surfacestations volunteer Eric Gamberg:
Click for larger image
The MMTS temperature sensor is on the short pole next to the half-visible pickup truck.
For those of you who don’t know, this station is located at the wastewater treatment plant there. I’ve written many times about the placement of stations at WWTPs being a bad idea due to the localized heat bubble created by all the effluent coming through. The effect is especially noticeable in winter. Often you’ll see steam/water vapor in the air around these sites in winter, and more than one COOP observer has told our volunteers that snow sometimes does not stick to the ground at WWTPs.
The larger pole appears to be a gas burnoff torch for excess methane. I can’t say how often it is activated (note the automatic ignitor circuit on the pole), but I can tell you that putting an official NOAA climate thermometer within a few feet of such a device is one of the worst examples of thoughtless station placement on the part of NOAA I’ve ever seen. Here is an example of a methane burn-off device at another WWTP.
We’ll probably never know what the true temperature is in Clarinda, because untangling a measurement mess like this is next to impossible. How many days were Tmin and/or Tmax affected at this location by gas burnoff, and to what magnitude? We shouldn’t have to ask these questions.
And, adding insult to stupidity, the GISTEMP Homogenization adjustment makes the trend go positive, especially in recent years:

According to the NCDC MMS database for this station, the MMTS was installed on October 1, 1985. Who knows what the data would have looked like if somebody had thought through the placement. Whether or not the temperature sensor has been significantly affected by this placement is not the issue; the violation of basic common-sense siting guidelines, which brings the data into question, is. Anything worth measuring using our public tax dollars is worth measuring correctly.
Dr. Hansen and Mr. Karl – welcome, feast your eyes on the source of your data. You might want to think about changing this description on the NCDC website for USHCN:
The United States Historical Climatology Network (USHCN) is a high quality, moderate-sized data set of daily and monthly records of basic meteorological variables from over 1000 observing stations across the 48 contiguous United States.
I suggest to NCDC that “high quality” doesn’t really apply in the description anymore.
I really could use some help, especially in Texas, Oklahoma, Alabama, Mississippi, and Arkansas to get the USHCN nationwide climate network survey completed. If you have a digital camera and can follow some simple instructions, why not visit www.surfacestations.org and sign up as a volunteer surveyor. If you can’t help that way, donations to help fund trips such as these that I’ve been doing are greatly appreciated.
UPDATE 11/20 4:20 PM PST: Some commenters, such as Krysten Byrnes and Steve, have suggested that the blink comparator above is wrong because the scale on the left changes in offset. I realize that may create some confusion. A couple of clarifications are needed to address that.
First, these graphs are generated by the GISTEMP database, not me. I simply copied both from the GISTEMP website into my animation program. This includes the scale offset, which is part of the difference in the original GISTEMP-generated images. You can do the same thing by visiting here: http://data.giss.nasa.gov/gistemp/station_data/ and putting Clarinda in the search box. Use the pulldown menu to select whichever data set you want. The above uses both the “combined sources” and the “after homogeneity adjustment” versions.
Second, what is important to note here is that the slope of the trend changes as a result of the adjustment applied by GISS. It becomes more positive in the “homogenized” data set.
Third, in the “homogenized” data set, the past has been cooled and the present made warmer, making the slope more positive over the timeline. Here is the Clarinda GISTEMP homogenized data plot overlaid on the “raw” data plot. Again, these are the original unmodified GISTEMP-generated graphs, combined using a simple cut-and-paste with transparent background technique:
Click graph for full sized image
Note how the hinge point appears around 1980, where the data appear to match. Note also how the divergence between the two data sets increases in either direction from this hinge point.
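To see how a hinge-style adjustment changes a trend, here is a toy calculation with entirely made-up numbers (not the actual Clarinda data): start from a perfectly flat record, cool the years before a 1980 hinge and warm the years after it, and the least-squares slope turns positive.

```python
def ols_slope(years, temps):
    """Ordinary least-squares slope, in degrees per year."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1950, 2009))
raw = [10.0] * len(years)            # a perfectly flat hypothetical "raw" record

# A hypothetical hinge adjustment: cool years before 1980, warm years after,
# in proportion to their distance from the hinge.
adjusted = [t + 0.01 * (y - 1980) for y, t in zip(years, raw)]

assert abs(ols_slope(years, raw)) < 1e-12       # raw trend: zero
assert abs(ols_slope(years, adjusted) - 0.01) < 1e-9   # adjusted trend: positive
```

Nothing here says this is what GISS actually does; it only illustrates why a past-cooling, present-warming adjustment necessarily steepens the trend.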

Drew Latta wrote:
Well, the methane burn off thing is interesting, but it might never be activated. A lot of these WWTPs use excess methane from sludge digestion to heat the digester and other processes. Depending on the temperature their digester runs at it might be heated most or all of the time, and they might hardly ever flare the methane.
Given the discoloration on the top of the burnoff torch, I think “never” is out of the question.
B Kerr (14:12:35) :
Where are all the UK sites?
Can anyone help?
You may like to check this page out
http://badc.nerc.ac.uk/data/ukmo-midas/
Scroll down to “Find your weather station” and download
Alternatively goto
http://badc.nerc.ac.uk/googlemap/midas_googlemap.cgi
With regard to sampling errors and possible misreadings, the following CNN report contains testimonials from Inuit concerning the “decline” in Polar Bear numbers as being a sampling error on the part of briefly visiting researchers:
http://edition.cnn.com/video/#/video/tech/2008/11/20/mcginty.can.polar.bears.pt.3.itn
Obviously, being hunters, they may be biased, but the testimony at least seems sincere and heartfelt.
George E. Smith (20:06:47), you raise a number of points.
1. There is a different rationale for each adjustment, and some are better documented than others.
The basic problem is that we cannot go back and redo temperature measurements from the past. Therefore, in order to determine the temperature in the past, the existing temperature measurements (at the existing sites) must be used, and any deficiencies ‘fixed’ by adjustments. (A rationale I don’t agree with. See below.)
2. You then get into gridded temperature averages.
Determining the average temperature and trend of an area by random sampling, and by weighting and adjusting measurements across the area, are different and, as far as I am concerned, mutually exclusive methods.
Random sampling by definition deals with any and all biases that affect station siting, and hence you need to know nothing about the biases. Weighting and adjusting assumes you know what the biases are, in order to weight and adjust for them.
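As a toy illustration of why random sampling needs no bias corrections (made-up numbers, nothing to do with any real station data): imagine a region at 15 C except for scattered warm spots at 18 C covering 10% of the area. Randomly placed stations land in warm and normal spots in the right proportion on average, so the naive mean is unbiased without knowing where the warm spots are.

```python
import random

random.seed(0)

# Hypothetical toy "region": temperature is 15 C, except warm spots
# at 18 C that cover 10% of the area (e.g. near towns or WWTPs).
def true_temp(x):                                  # x is a position in [0, 1)
    return 18.0 if (x * 50) % 10 < 1 else 15.0     # warm in 10% of the area

true_mean = 0.1 * 18.0 + 0.9 * 15.0                # = 15.3 C, the area average

# Random sampling: average many surveys of 1000 randomly placed stations.
estimates = []
for _ in range(200):
    xs = [random.random() for _ in range(1000)]
    estimates.append(sum(true_temp(x) for x in xs) / len(xs))
avg_estimate = sum(estimates) / len(estimates)

# No bias adjustment was applied, yet the estimate recovers the true mean.
assert abs(avg_estimate - true_mean) < 0.05
```

Stations placed deliberately near the warm spots would instead need an explicit adjustment, and the adjustment requires knowing the bias; that is the asymmetry between the two methods.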
3. I agree with you that the Forcings model is invalid and the GCMs are based on invalid premises.
BTW, the way to approximate random siting is to use only sites that are remote from human influences, what I call pristine locations. Stations on remote islands are especially good candidates, because the locations of remote islands are close to ‘random’ for the purposes of determining the Earth’s climate.
Great article Evan!
To: Ray Reynolds (14:33:12) and others.
I wouldn’t worry about sprinkler systems. Look at that fire hose lying in the grass. Doesn’t the near end look like it was just disconnected from the adjacent hydrant and dropped in the grass, after the other end was used to water the grass, and the MMTS and everything else in the area? Ever tried to hold a fire hose single-handedly? Why else would it not be rolled up neatly, like the one in the lower left corner of the photo, if it wasn’t used regularly? And that sure is lush green grass. I wonder where they get their fertilizer?
I can think of about a hundred ways that a site could produce anomalous warmer readings. Is there any way that a site could produce anomalous cooler readings?
REPLY: Fast-growing bushes/trees near the sensor could produce shade (we’ve seen some of those too), the sensor could be relocated to a shaded area, or the sensor could malfunction.
Water from a lawn sprinkler near the area has the effect of reducing Tmax if run during the day, but the increased humidity at night will elevate Tmin.
There are not many cool biases. – Anthony
OT headsup: “WASHINGTON (Reuters) – Much of the United States can anticipate a mild winter, with warmer-than-normal weather forecast from the Rocky Mountains to the Appalachian Mountains through February, government forecasters said on Thursday.
The National Oceanic and Atmospheric Administration said the greatest chance for above normal temperatures was expected in Missouri, eastern Kansas, Oklahoma and Arkansas, with a lesser probability extending into southern Wisconsin, western Ohio and Texas.
The agency’s winter forecast also predicted an equal chance for temperatures to be normal, above normal or below normal on the East Coast and in the western United States, extending into Montana and northern Minnesota.
The forecast could be good news for the Northeast and the Midwest, which are the largest users of heating oil and natural gas, respectively, in the United States.
http://www.reuters.com/article/domesticNews/idUSTRE4AJ4V020081120 ”
Of course, right now here in Indiana we are running 5-10 degrees below normal. I have changed sides. I am PRAYING for global warming.
Grant
I don’t know exactly how Dr. James Hansen computes GISStemp anomalies or even why he does that.
But determining the mean surface temperature of the earth is inherently very simple. You put some thermometers in the ground (solid and liquid ground), and then you read them all at the same time, multiply each by the area it represents, add them up, then divide by the total surface area of the earth to get the mean. You then repeat the process periodically to update the temperature, which will change because the sun moves to a different place over the earth. So a complete year of records would seem to fairly represent the global average for climate purposes (whatever climate purposes there may be in even doing this).
Now the SI units of length and time are the metre and the second, so I recommend you put one thermometer at the center of each square metre of the surface, and you read them all at the same time, once per second. How easy is that?
Well, that’s a lot of thermometers, and if you calculate how fast the sun is moving, I think at the equator it moves about 500 metres per second or thereabouts, so it hops over a lot of thermometers. So maybe it is more suitable to place one thermometer at the center of each 1000-metre square on the surface rather than each metre. Now the sun is in each square for about two successive readings, and this has the advantage of reducing the number of thermometers by a factor of a million; very good for the budget.
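The area-weighted bookkeeping described above (multiply each reading by the area it represents, then divide by the total area) can be sketched in a few lines. This is a toy sketch with assumed one-degree latitude bands, not any real GISTEMP code; on a sphere, a lat/lon cell’s area shrinks as the cosine of latitude.

```python
import math

def area_weighted_mean(readings):
    """readings: list of (latitude_deg, temperature) pairs, one per cell.

    Each cell is weighted by its relative area, which for a fixed-size
    lat/lon cell on a sphere is proportional to cos(latitude).
    """
    total_w = total = 0.0
    for lat, t in readings:
        w = math.cos(math.radians(lat))   # relative area of the cell
        total_w += w
        total += w * t
    return total / total_w

# Sanity check: equal temperatures everywhere give back that temperature.
readings = [(lat, 15.0) for lat in range(-89, 90)]
assert abs(area_weighted_mean(readings) - 15.0) < 1e-9
```

The point is only that the arithmetic itself is trivial; everything contentious lives in how few cells actually contain a thermometer.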
Well, that is still one hell of a lot of thermometers, and people don’t tend to notice how much the temperature changes in just one second, so maybe we don’t need to read them that often.
So how can we determine how often we really need to read each thermometer, and how many we really need?
What I am hearing from these discussions is that the climatology community thinks it is quite sufficient to read the thermometers twice a day, or just get the max and min readings each day, and not worry about when you actually read them. Well, now you never would actually know what the mean temperature was at any moment, because you never ever have a true current set of readings. They also seem to think that a thousand, or 10,000, or even 100,000 thermometers globally are more than enough.
And if you distribute them randomly, they think you get a more believable result. Nobody seems to realize that one perfectly reasonable result of a completely random positioning would be if every thermometer ended up in exactly the same place. That is no more unlikely than any other random placement.
What we are talking about here is a continuous function of basically two variables; one is time, and the other is spatial location, and it isn’t possible to measure that function at every possible place and time; so we have to go to a SAMPLED DATA SYSTEM. We have to sample this two variable function, in time and space; and we have to do it in such a way that we don’t lose any important information.
The theory of sampled data systems, which apparently climatologists know absolutely nothing about, is a well-studied branch of mathematics, and the modern world wouldn’t work without it. Our entire global communications networks and hardware depend on sampled data systems, and the communications engineering community lives and breathes sampled data theory.
Sampled data system theory says that it is possible to sample a continuous function in such a way that the continuous function can be completely reconstructed from the recorded samples, PROVIDED YOU COMPLY WITH CERTAIN RULES.
Now when I say the continuous function can be completely reconstructed, that means you will be able to determine the value of that continuous function not only for the specific points at which it was sampled, but for ANY other point in the continuous function; there is NO loss of information.
So WHAT ARE THE RULES? Well, first of all, the continuous function must be “band limited”. That means it may only contain signal frequencies (cyclic changes) less than some maximum frequency called the bandwidth of the signal. For our climate puzzle, those frequencies would be cyclic changes in time and also in space.
For example, if we wanted to sample, say, an audio-frequency continuous signal from a symphony orchestra, we might limit the bandwidth of the signal to 20 kHz, in the belief that any signals at higher frequencies would be inaudible, so eliminating them would result in no loss to the music listener.
The second, and most important, rule says that the band-limited signal must be sampled at a rate (frequency) that is at least twice the frequency of the band limit. So our orchestra signal would have to be sampled at a rate of at least 40 kHz.
Now, the samples don’t have to be uniformly spaced along the independent-variable axis; but if they are not, you still have to be sure that at least one sample is taken during each and every half cycle of the highest signal frequency (the band limit). So that is what is wrong with random sample placement: it does not allow you to take fewer samples; in fact it demands that you take more samples, because the samples can never be further apart than half the wavelength of the highest frequency of signal change, but they can be closer, and that means more samples.
So the most economical sampling regimen is to take uniformly spaced samples.
If you do that, SDS theory says you can completely recover the original continuous function from just those discrete samples. In the communications industry the gaps between those regular samples don’t go to waste; they simply put samples of a completely different signal or thousands of different signals, into the gaps, and then sort them out to distribute to wherever each signal was supposed to go.
Now the problem of high fidelity reconstruction is itself quite complicated; but it is a well understood discipline.
The rule that says you need (at least) one sample per half cycle of the highest frequency in the band-limited signal is called the Nyquist criterion, after the scientist (Bell Labs, I seem to recall) who first announced it.
So what happens if you don’t comply with the Nyquist criterion in your sampling regimen? In other words, what if you sample a signal at the Nyquist rate, but your signal contains information at frequencies higher than half that rate?
If the band limit is B and you sample at a 2B rate, suppose you have a signal at a frequency B+b, outside the band limit you designed for.
After you sample that signal at a rate 2B, you will find on reconstruction that you now have a completely fictitious (noise) signal at a new frequency of B-b. It is total garbage, and since its frequency is B-b, it is inside your signal bandwidth, so it is impossible to remove without throwing away useful real signal information.
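A quick sketch, with made-up numbers (B = 100 Hz, b = 30 Hz, sampling at 2B = 200 Hz), shows the effect: the out-of-band tone at B+b = 130 Hz produces exactly the same samples as an in-band tone at B-b = 70 Hz, so after sampling the two are indistinguishable.

```python
import math

B = 100.0          # assumed band limit, Hz (hypothetical numbers)
b = 30.0
fs = 2 * B         # sampling at exactly the Nyquist rate for band limit B

# Sample an out-of-band tone at B+b and an in-band tone at B-b.
n = range(32)
out_of_band = [math.cos(2 * math.pi * (B + b) * k / fs) for k in n]
in_band     = [math.cos(2 * math.pi * (B - b) * k / fs) for k in n]

# The two sampled sequences are identical: the 130 Hz tone has
# "aliased" down to 70 Hz and cannot be told apart after sampling.
assert all(abs(x - y) < 1e-9 for x, y in zip(out_of_band, in_band))
```

No reconstruction filter, however clever, can undo this: the information distinguishing the two tones was lost at the moment of sampling.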
Suppose your errant out of band signal is at a frequency of 2B, the same as the sampling rate, and twice the band limit.
Well, now you have an erroneous noise signal at a frequency of B-B, which is zero; and the zero-frequency signal is in fact the average value of your recovered signal.
So you only have to violate Nyquist’s criterion by a factor of two, and you can no longer correctly compute the average value of your recovered continuous function; it is permanently corrupted by “aliasing” noise.
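The corruption of the mean is just as easy to demonstrate. With made-up numbers, a tone at the sampling frequency itself has a true time average of zero, yet every sample lands at the same phase, so it aliases to a DC offset that shifts the computed mean.

```python
import math

fs = 200.0                 # sampling rate, Hz (hypothetical numbers)
f = fs                     # out-of-band tone at the sampling rate itself
phase = 1.0                # arbitrary phase of the tone

samples = [math.cos(2 * math.pi * f * k / fs + phase) for k in range(1000)]
mean = sum(samples) / len(samples)

# The tone's true time average is 0, but every sample hits the same
# phase, so the tone aliases to DC and biases the computed mean.
assert abs(mean - math.cos(phase)) < 1e-9
assert abs(mean) > 0.5     # badly wrong: the true mean is 0
```

The size of the spurious offset depends entirely on the phase of the out-of-band signal relative to the sampling clock, which is exactly why the error cannot be removed after the fact.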
All practical sampled-data signal processing systems include an “anti-aliasing filter”, which limits the bandwidth of the signals to be processed to less than half the system sampling rate.
Movies, television, hi-fi CDs, and digital cameras are all examples of sampled data systems, and many of them exhibit aliasing noise. In the movie or TV horse opera, the wagon wheels go backwards because the frame repetition rate is too low to correctly sample the moving wheel-spoke images.
So now, would somebody like to try and convince me that GISStemp actually derives anything vaguely akin to the global mean temperature of any climatically useful portion of the planet’s surface (five feet over the weber or wherever)?
It is total garbage; and even if the sampling system were correct, the result you might obtain has absolutely no scientific validity or meaning whatsoever, as far as the global climate is concerned.
Hansen’s religious ritual, if rigidly applied, determines occasional new values of a quantity called the GISStemp anomaly, which has no scientific connection to anything else, including global climate.
George
B Kerr, Colin Aldridge, M White
For anyone interested in discussing UK sites I have set up a thread at
http://www.climateaudit.org/phpBB3/viewforum.php?f=6
The locations of the sites GISS uses are interesting.
The obviously incompetent and fraudulent temperature sampling, manipulation, and publication are unacceptable.
We need a program to STOP this COUP and restore faith and trust in science.
OT ?, Science is supposed to be a pure pursuit of the TRUTH… do we have anything like that now?
Also, if you are trying to control the masses… revising history is imperative to support your goals.
Temperature data, ocean water levels, glacial masses, etc., are being sampled for such an insignificant slice of time that NO ONE, especially a real SCIENTIST, can be even remotely honest in presenting any of this as anything more than an academic exercise.
“Retroproxy (22:00:55) :
Slightly off topic, but it’s another example of the nonsense behind AGW, I actually heard Arnold Schwarzenegger blame the recent wildfires in Southern California on global warming. Then we just learned that a bonfire set by college students caused one of the fires. But the other ones must have been caused by global warming!”
Arnold Schwarzenegger chose to ignore L.A.’s Historical Santa Ana Winds AND the Historical Fires they support – at least 10,000 years.
The Science of CO2 migration in ICE CORES prevents ANY honest presentation of that data.
The science of soils along the coastlines does not seem well understood, and the sinking (subsidence) of soils is not included in calculations.
George E. Smith
Nobody seems to realize that one perfectly reasonable result of a completely random positioning,, would be if every thermometer ended up in exactly the same place. that is no more unlikely than any other random placement.
The above statement is false: while no single exact configuration is more likely than any other, the overwhelming majority of random configurations are well spread out, so all the thermometers landing in the same place is effectively impossible. The rest of your arguments are logical, though. Garbage in = garbage out, as we say in the land of data collection. We seem to be spending a lot of time trying to optimize our computer climate models based on suspect data.
To Paul M re UK sites
Thanks. I am registering and will contribute.
“…one of the worst examples of thoughtless station placement…”
The assumption of “thoughtless station placement” may be as misplaced as the station itself.
You know, the idiotic placement of some sensors aside, one thing I have considered extensively is this: the US Navy, Royal Navy, and the former USSR’s military have been making fairly accurate measurements of weather, temperature, and polar ice density since at least the ’50s.
Why has neither side of the ‘debate’ published any findings that are based on this? As far as I can tell, no one has viewed this data extensively on either side of the aisle, and one would think that the eternal naval fixation with weather would produce some findings one way or the other.
BTW: Personally, I find the evidence of man-made global warming somewhat compelling, though not concrete. Trying to draw ridicule to the idea based on the fact that some weather sensors are poorly placed does not mean that every weather sensor is poorly placed. There is one not far from my parents’ house that is in an open field with nothing but hay for company. Does that make it more or less accurate?
Further, yes, we can alter the planet in large-scale and drastic ways, and the idea that our atmosphere might react in a similar manner to other planets when the same situation is applied is only common sense. Given that we can observe the effects of large amounts of carbon dioxide gas on other worlds (Venus, Mars), would it not stand to reason that increases in our own levels might have similar results?
I doubt that any argument of mine will convince people of my views, as I’ve seen science and cynicism confused on several occasions (I recall with glee the man in the Royal Society who oh so loudly proclaimed that no amount of evidence would convince him that the supernatural was real). While debunking and fresh views are a requirement for good science, there comes a point where it becomes like the scientists in the employ of the tobacco companies trying to convince people that smoking is healthy and cancer is good for you.
I was told that hot air tends to RISE. Did anything change recently?
Well, GreenBaron, just what is your evidence that large amounts of carbon dioxide on Venus and Mars are having effects? By the way, what are those effects that “we can observe”?
Venus is about 2/3 of the earth’s distance from the sun, so it receives about 1.9 times as intense sunlight, and it has only 90% of the earth’s gravity, yet its atmospheric pressure is many times that of earth. It clearly isn’t composed of the same gases as earth’s atmosphere. Its surface temperature is so high that the CO2 absorption mechanism is quite different from what happens on earth.
CO2 is quite transparent to visible light, yet we don’t see the Venus surface, so those clouds that completely cover Venus all the way to the ground are clearly not CO2, and they can’t be water either, because the temperature is too high for any liquid or solid water to exist.
I could go on and on; but I am sure you can google Venus as easily as I could, and find out the properties of that planet.
We have absolutely no historic data for either Mars or Venus, that would show that CO2 has had any historic effect on either planet’s climate.
If you find the evidence for man made global warming somewhat compelling, perhaps you could enlighten the rest of us who see no evidence whatsoever; let alone anything compelling.
ALL of the peer-reviewed evidence relating earth’s global surface temperatures and atmospheric CO2 concentration shows unequivocally that it is the warming or cooling that produces the increased or decreased CO2, and not the other way round.
You see the wolf and the lamb drinking in the river, and you blame the lamb for the disturbed muddy water, even though it is the wolf that is upstream.
How compelling is that kind of argument?
George
Yes, SOD, hot air still rises… at least on earth, without microclimate-related downdrafts.
sod (07:34:01) :
I was told that hot air tends to RISE. Did anything change recently?
________________________________________
Also, George stated it very well indeed…
George E. Smith (08:38:15) :
Well, GreenBaron, just what is your evidence that large amounts of carbon dioxide on Venus and Mars are having effects? By the way, what are those effects that “we can observe”?
_____________________________________
I would like to add that “compelling” is quite easy for those unwilling to observe and review with an open mind.
Tom
Yet another case of “make the past cooler, to make the present seem warmer.” I don’t suppose that GISS has ever released the algorithm used to “homogenize” the data.