
Temperature averages of continuously reporting stations from the GISS dataset
Guest post by Michael Palmer, University of Waterloo, Canada
Abstract
The GISS dataset includes more than 600 stations within the U.S. that have been
in operation continuously throughout the 20th century. This brief report looks at
the average temperatures reported by those stations. The unadjusted data of both
rural and non-rural stations show a virtually flat trend across the century.
The Goddard Institute for Space Studies provides a surface temperature data set that
covers the entire globe, but for long periods of time contains mostly U.S. stations. For
each station, monthly temperature averages are tabulated, in both raw and adjusted
versions.
One problem with the calculation of long term averages from such data is the occurrence of discontinuities; most station records contain one or more gaps of one or more months. Such gaps could be due to anything from the clerk in charge being a quarter drunkard to instrument failure and replacement or relocation. At least in some examples, such discontinuities have given rise to “adjustments” that introduced spurious trends into the time series where none existed before.
1 Method: Calculation of yearly average temperatures
In this report, I used a simple procedure to calculate yearly averages from raw GISS monthly averages; it deals with gaps without making any assumptions or adjustments.
Suppose we have 4 stations, A, B, C and D. Each station covers 4 time points, without
gaps:
In this case, we can obviously calculate the average temperatures as:
A more roundabout, but equivalent scheme for the calculation of T1 would be:
With a complete time series, this scheme offers no advantage over the first one. However, it can be applied quite naturally in the case of missing data points. Suppose now we have an incomplete data series, such as:
…where a dash denotes a missing data point. In this case, we can estimate the average temperatures as follows:
The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average Δtemperature of the other stations.
One advantage that may not be immediately obvious is that this scheme also removes systematic errors due to a change of instrument or instrument siting that may have occurred concomitantly with a data gap.
Suppose, for example, that data point B1 went missing because the instrument in station B broke down and was replaced, and that the calibration of the new instrument was offset by 1 degree relative to the old one. Since B2 is never compared to B0, this offset will not affect the calculation of the average temperature. Of course, spurious jumps not associated with gaps in the time series will not be eliminated.
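The actual code is available upon request; in the meantime, a minimal sketch of the scheme as just described might look like the following. The array layout and numbers are purely illustrative, mirroring the four-station example above with data point B1 missing:

```python
import numpy as np

def anomaly_from_deltas(temps):
    """Average anomaly series from a (stations x time) array with NaN gaps.

    For each time step, take the temperature change (delta) of every station
    that reported at both adjacent time points, average those deltas over the
    reporting stations, then cumulatively sum the averaged deltas.  A delta
    touching a gap is simply dropped, so a calibration jump hidden inside a
    gap never enters the result.
    """
    deltas = np.diff(temps, axis=1)           # NaN wherever a gap touches
    mean_delta = np.nanmean(deltas, axis=0)   # average over reporting stations
    return np.concatenate([[0.0], np.cumsum(mean_delta)])

# Four stations A-D, four time points; station B misses time point 1
temps = np.array([
    [10.0, 11.0,   10.5, 11.5],
    [ 9.0, np.nan,  9.5, 10.5],
    [12.0, 13.0,   12.5, 13.5],
    [ 8.0,  9.0,    8.5,  9.5],
])
print(anomaly_from_deltas(temps))
```

Note that B's missing point removes two deltas (B1−B0 and B2−B1) from the averaging, so B's later readings are never compared across the gap, which is exactly what makes a gap-coincident calibration offset harmless.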
In all following graphs, the temperature anomaly was calculated from unadjusted
GISS monthly averages according to the scheme just described. The code is written in
Python and is available upon request.
2 Temperature trends for all stations in GISS
The temperature trends for rural and non-rural US stations in GISS are shown in Figure
1.

This figure resembles other renderings of the same raw dataset. The most notable
feature in this graph is not in the temperature but in the station count. Both to the
left of 1900 and to the right of 2000 there is a steep drop in the number of available
stations. While this seems quite understandable before 1900, the even steeper drop
after 2000 seems peculiar.
If we simply lop off these two time periods, we obtain the trends shown in Figure
2.

The upward slope of the average temperature is reduced; this reduction is more
pronounced with non-rural stations, and the remaining difference between rural and
non-rural stations is negligible.
3 Continuously reporting stations
There are several examples of long-running temperature records that fail to show any substantial long-term warming signal, such as the Central England Temperature record and the one from Hohenpeissenberg, Bavaria. It therefore seemed of interest to look for long-running US stations in the GISS dataset. Here, I selected stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.
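The selection criterion can be sketched as follows. The data layout here (a dict mapping a station id to per-year lists of monthly values) is a hypothetical illustration for clarity, not the actual GISS file format:

```python
def continuously_reporting(stations, start=1900, end=2000):
    """Keep stations with at least one monthly value in every year of range.

    `stations` maps a station id to {year: list of monthly averages};
    years with no reported months are simply absent from the dict.
    """
    keep = []
    for sid, years in stations.items():
        if all(len(years.get(y, [])) >= 1 for y in range(start, end + 1)):
            keep.append(sid)
    return keep

demo = {
    # complete record 1900-2000: qualifies
    "A": {y: [5.0] for y in range(1900, 2001)},
    # one entirely missing year (1950): does not qualify
    "B": {y: [5.0] for y in range(1900, 2001) if y != 1950},
}
print(continuously_reporting(demo))
```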
The temperature trends of these stations are shown in Figure 3.

While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.
Figure 3 also shows the average monthly data point coverage, which is above 90%
for all but the first few years. The less than 10% of all raw data points that are missing
are unlikely to have a major impact on the calculated temperature trend.
4 Discussion
The number of US stations in the GISS dataset is high and reasonably stable during the 20th century. In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out, to the point that the GISS dataset no longer seems to offer a valid basis for comparison of the present to the past. If we confine the calculation of average temperatures to the 20th century, there remains an upward trend of approximately 0.35 degrees.

Interestingly, this trend is virtually the same with rural and non-rural stations.
The slight upward temperature trend observed in the average temperature of all stations disappears entirely if the input data is restricted to long-running stations only, that is, those stations that have reported monthly averages for at least one month in every year from 1900 to 2000. This discrepancy remains to be explained.
While the long-running stations represent a minority of all stations, they would
seem most likely to have been looked after with consistent quality. The fact that their
average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.
Disclaimer
I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetic. In case I have overlooked equivalent previous work, this is due to my ignorance of the field, is not deliberate, and will be amended upon request.



Brian,
Try this shoe on and tell me if it fits:
“there are… numerous well meaning individuals who have allowed propagandists to convince them that in accepting the alarmist view of anthropogenic climate change, they are displaying intelligence and virtue. For them, their psychic welfare is at stake.”
The source is M.I.T. Climatologist Dr. Richard Lindzen:
http://thegwpf.org/opinion-pros-a-cons/2229-richard-lindzen-a-case-against-precipitous-climate-action.html
Dr. Lindzen goes on to say:
“With all this at stake, one can readily suspect that there might be a sense of urgency provoked by the possibility that warming may have ceased and that the case for such warming as was seen being due in significant measure to man, disintegrating. For those committed to the more venal agendas, the need to act soon, before the public appreciates the situation, is real indeed. However, for more serious leaders, the need to courageously resist hysteria is clear. Wasting resources on symbolically fighting ever present climate change is no substitute for prudence. Nor is the assumption that the earth’s climate reached a point of perfection in the middle of the twentieth century a sign of intelligence.”
Do I hear the sound of a pseudo-scientific religious cult crumbling?
http://sbvor.blogspot.com/p/climate-change-science-overview.html
The post-2000 dying of the thermometers, coming so soon after the Great Dying of the Thermometers in about 1990, reminds me of a short story I read back in the early 1960s, when I was a mere lad, as they say across the pond NW of France.
It was the era of Reader’s Digest’s greatest popularity, including hardbound volumes with truncated versions of novels great and not-so-great, about four or five to the volume. Someone tongue-in-cheek (and well over my young literary-virginal head) wrote about the trend to digest books more and more, more and more, and, taking it to its logical but extreme limit, told a tale of a book that had been digested all the way down to a single word. I believe that word was the name of the story, but it HAS been a long time, and I was ever so young.
One might suppose that that is what the climate establishment is aiming for – to digest all current thermometer readings to one special one that represents the entire globe.
And why not? Why should they have to go through all that tedious data assembling – instruments and proxies, tree rings, ice cores, varves, corals, and the various thermometer types? Wouldn’t there be far less disagreement and more settled science that way? Shouldn’t that wonder of our two most recent decades – science – digest down all their data collecting to one and only one reading per day? Wouldn’t all this NH/SH, El Niño-La Niña, AMO, SST, confusion be done away with, not to mention the problem of semi-drunken local temperature readers – FINALLY! – so that the experts can sit back, in the full glory of their expertship, puffing on their Meerschaums and Marlboros (and self-rolled Zig-Zags and whatever might find itself therein), blowing rings and smiling like the Cheshire cat – and fading blissfully from our sight, into the upper reaches of yon ivory towers of yore? Isn’t that what we really want our scientists to do?
If science is at its core about improving life and making life easier and simpler, well why shouldn’t the climatologists partake of that life of Riley, too? They pay taxes, too, after all. We should nod our heads in agreement at such a development – this perfect, singular global temperature data point from the one perfect temperature point on Earth – as the apex of science’s great accomplishments on behalf of homo sapiens sapiens. Rather than bemoaning the defuncting of the confusion-engendering Yamal or Polar Ural tree rings, the obfuscating UAH satellite blather, the TOB changes, the TOB differences, UHI adjustments, petulant declines, the PDO, solar irradiance, and cosmic rays, we should be having a rousing wake, celebrating the part all of them had done for us in the past, when our climate folks were getting their feet under themselves, and we should toast to the new age of Unitemp. Gone will be all the confusion and gone will be all the endless ragging on each other over what graph and what data set is BEST – and most especially the endless tug-of-war over what it all means.
Let us revel in our oneness of agreement. The one temperature cannot be confused, and isn’t that better? Unity beyond complexity. War is peace. Simple is more complete.
We can just hand over a small scrap of paper with one number on it, once a year, and so let Congress or the European Parliament get on with their job of whatever it is they do. Isn’t that so much more efficient and civilized?
There! I feel better, just for having digested it all down for your reading enjoyment…
Smokey:
one thing you are is consistent (with a capital ‘C’). Others like Brian claiming otherwise are lost and wrong. This is a great science site.
I wanted to show you a couple of new things I have stumbled upon for you are one person here I know will not forget, you are very persistent too! 😉
A traipse through a search of all of “water vapor”, Spencer, and Miskolczi led me to an article by Dr. Curry that I had missed many months ago at http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/ and the related http://judithcurry.com/2011/09/25/trends-in-tropospheric-humidity/ . And, within, it points to a paper by Kratz et al. ( http://miskolczi.webs.com/p27jqsrt.pdf ) where I came upon a statement I found very interesting.
One was:
“The far-infrared, speci[fi]ed here to cover the spectral range with wave-numbers less than
650 cm−1, is dominated by the pure rotation band of water vapor, and has been shown to account for
over 40% of the energy emitted to space by the Earth’s atmosphere-surface system for clear-sky conditions
[1].”
I always suspected that but had never explicitly read “40%” in a paper. There it is.
Second, since you have read in some of my comments (or I think you have) of how 1/6th of the 390 Wm-2 downwelling IR displayed in the Kiehl-Trenberth graphic is real, that is, when all figures on the energy budget balance, well, I also found, in a summary of Miskolczi’s papers at http://www.friendsofscience.org/assets/documents/The_Saturated_Greenhouse_Effect.htm , this statement:
“He applies the Virial Theorem to the atmosphere, which states that the kinetic energy of a system is half of the potential energy. The internal kinetic energy is taken as the upward long wave energy flux at the top of the atmosphere, and the potential energy is the upward radiation flux from the surface. This result is used to determine the fraction of the upward radiation from the surface that is transmitted directly to space (rather than absorbed by the atmosphere), which is 1/6.”
That to me is very curious, 1/6. He is stating that the upward LW is 1/6 while I have been pointing out the 1/6 downward. Four sixths is of course always horizontal in three dimensions. I had never read that explicitly stated either. When you carry this into the Virial Theorem, in terms of what energy exactly supports the mass of our atmosphere every second of every day, it now all seems to finally make perfect sense.
Thought this a good time to pass that and think about it.
One thing which I find intriguing is that global temperatures in 1959 are shown to be somewhere in the region of 287.22 K (I say in the region of 287.22 K because there are a few different figures “on the market”). The chart I chose, at random, started in 1959 and ended in 2004 and showed a global temperature of 287.77 K for the last of those two years.
There were a few “ups and downs” mainly behind the decimal point during those years but the 287 K bit was reproduced for every year (almost).
So, the intriguing bit for me is: if thermometers placed at a few places on the Earth’s dry surface, plus a lot of highly unreliable temperature reports from the world’s merchant and military shipping, can be accurate to within 0.55 K or better, then why the Ken-Nell do we spend not just millions but billions or more of $€£ on satellite measurements?
Jerome, DirkH
“I have to disagree. He is using the temperature delta (Δtemperature) to average with other deltas. That makes much more sense than what you have assumed.”
A fundamental problem I see with this method is that the deltas propagate forward. Any error that will unavoidably occur if a station is missing from sample n then gets carried forward into the calculation for samples n+1, n+2, etc. Ultimately, all future deltas contain some effect from all past errors. In addition, there is the problem of propagating inaccuracies in the performance of the calculation. Computers do not calculate to infinite precision, and since most of this is about calculating differences between larger values to produce much smaller differences, then continually summing these differences, the finite accuracy of each stage of arithmetic will propagate forward in the result. It would take some serious analysis to work out whether the net effect of this over time will cancel out or be cumulative.
In contrast, the method used by the mainstream analyses, of comparing each reading for each station against its own long-term average, means that the anomaly (the delta, if you like) is always calculated against a fixed reference. So any issues that might occur as a result of problems at one sample point don’t automatically propagate to future samples.
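To illustrate the propagation point with a toy example (made-up numbers, not either method's actual code): a single corrupted delta offsets every later point of a cumulatively summed series, while the same one-sample error against a fixed baseline stays confined to that sample.

```python
import numpy as np

# True per-step deltas and the cumulative series built from them
deltas = np.array([0.1, -0.2, 0.3, 0.0, 0.1])
series = np.cumsum(deltas)

# A single bad delta at step 1 (e.g. from a station dropping out)...
bad = deltas.copy()
bad[1] += 0.5
# ...shifts every later point of the cumulative series by the same amount
print(np.cumsum(bad) - series)

# The same one-sample error in a fixed-baseline anomaly stays local
temps = np.array([10.1, 9.9, 10.2, 10.2, 10.3])
baseline = 10.0                 # fixed long-term reference value
bad_t = temps.copy()
bad_t[1] += 0.5
print((bad_t - baseline) - (temps - baseline))
```

Whether the many forward-propagated errors tend to cancel or accumulate in practice is, as said, a separate analysis.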
Also, Michael’s method does not use area weighting. He is either doing one calculation for the whole US or he is subdividing the country; this isn’t clear. But his method of simply dividing by the available station count means that the weighting for the stations effectively changes each time there is a missing reading. That is over and above the fact that he is not area weighting at all. If, for example, you are using 10 stations in Texas and 10 stations in Vermont, by his method the climate change in Vermont is given equal weight to that in Texas, even though Texas is a much larger proportion of the US. Then, if you are looking at stations over time, you can introduce a time bias towards the climate changes in a region where the number of stations has grown over time. In my example, if Texas had 3 stations in 1900 and Vermont 6 because it was more densely populated, and the counts then changed to those of my first example by 2000, that introduces a time bias towards the Texas climate over time.
Area weighting ensures that these geographic biases don’t occur.
“But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around, like in GISS. Is this what you consider a better approach? ”
The key question here is how many stations you need to adequately characterise the CHANGE in a region’s climate. Note this is not the same as characterising that region’s climate. Read the posts I put up earlier on teleconnection and where the GISS range of 1200 km comes from; sorry I couldn’t include the graphics, WUWT would be more effective as a platform for discussion if commenters could illustrate their points.
It seems to me that a common misperception many people have is that to adequately observe any climate change we need lots of stations. The climate of 2 locations may be quite different, even if they are relatively close, due to altitude differences. But locations that are at similar altitudes can have very similar climates over quite long distances. And when we look at how the climates of 2 locations CHANGE relative to each other, they are often quite well correlated over long distances, particularly over land. Thus the 1200 km figure used by GISS. This isn’t based on speculation, but on observation: looking at the correlation between large numbers of station pairs and seeing how that correlation varies with separation.
So the case of a truly isolated station that is the only one within 1200 km would be problematic. But there aren’t many situations like that. However failing to area weight in calculations means that in effect every single station is producing a bias. And these biases have a definite pattern. Regions with dense station counts will bias the result towards them.
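A toy calculation (all numbers hypothetical, including the trends and station counts) shows how a flat station average and an area-weighted average can diverge when station density is uneven:

```python
# Two regions with unequal area and unequal station counts
texas = {"area_km2": 696_000, "trend": 0.2, "stations": 3}    # trend in deg/century
vermont = {"area_km2": 25_000, "trend": 0.8, "stations": 10}

# Flat station average: Vermont's 10 stations dominate the result
n = texas["stations"] + vermont["stations"]
flat = (texas["trend"] * texas["stations"]
        + vermont["trend"] * vermont["stations"]) / n

# Area-weighted average: each region contributes in proportion to its area
total_area = texas["area_km2"] + vermont["area_km2"]
weighted = (texas["trend"] * texas["area_km2"]
            + vermont["trend"] * vermont["area_km2"]) / total_area

print(flat, weighted)   # the flat average sits far closer to Vermont's trend
```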
It also makes manipulations much simpler. The PNS approach. Drop the thermometers that don’t confess.
Well with all this talk about missing data, and thermometers dying (none of Mother Gaia’s thermometers die; so she always knows the Temperature) and methods of diddling the averages to substitute for the missing data.
It’s kind of a lost cause; there’s this thing called the Nyquist Sampling Theorem, and it says you don’t have anywhere near enough global stations, and never have had. You don’t actually need to reconstruct the original continuous Temperature map, but you do need the ability to reconstruct the original continuous Temperature function, in order to extract even a correct average of the values, whether you recreate the values or not.
So whatever you have as a calculation for the average of the data set, whether data samples are missing or not, the zero-frequency signal, which is another name for the average value, is itself corrupted with aliasing noise; so what one does to fix it is somewhat irrelevant.
And the twice-a-day, min-max Temperature readings are already in violation of the Nyquist criterion for the temporal sampling, since the daily temperature variation is not a simple sinusoid with no harmonic content; at least a second harmonic with a 12-hour period must be present, and sampling twice in 24 hours will result in the daily average calculation also containing aliasing noise. Not to mention that any varying cloud cover during the day will totally bamboozle the min-max thermometer (but not Mother Gaia’s thermometers).
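To illustrate the min-max point with a synthetic diurnal cycle (a toy signal, not real station data): as soon as a 12-hour harmonic with an arbitrary phase is present, the (min+max)/2 daily-mean convention differs systematically from the true daily average.

```python
import numpy as np

# One day at one-minute resolution
t = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)

# 24-hour fundamental plus a phase-shifted 12-hour harmonic
temp = (10.0
        + 5.0 * np.sin(2 * np.pi * t / 24.0)
        + 2.0 * np.sin(4 * np.pi * t / 24.0 + 1.0))

true_mean = temp.mean()                        # sines average out: ~10.0
minmax_mean = (temp.min() + temp.max()) / 2.0  # the min-max convention

print(true_mean, minmax_mean)   # the two estimates disagree noticeably
```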
Glenn Tamblyn says:
As Requested:
October 24, 2011 at 3:38 pm – Of Averages & Anomalies, Part 1A
October 24, 2011 at 3:40 pm – Of Averages & Anomalies, Part 1B
October 24, 2011 at 3:41 pm – Of Averages & Anomalies, Part 2A
October 24, 2011 at 3:42 pm – Of Averages & Anomalies, Part 2B
About ten screens for each of these posts. And the whole shebang is a humongous diversion.
Glenn, we’re questioning your unstated assumptions: that the basic data, including the so-called rural, are free of UHI / or that the UHI is accurately enough known – and that the corrections applied are appropriate.
The longstanding records that are the subject of this post, are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.
Moderators: should Glenn’s posts be raised to their own thread, to assemble a fuller answer from WUWT? At least, Glen could not say on SkS that nobody here even pays attention to his “real” science. Yes I know, we get the traffic at WUWT. But the politicians and the science academies are still playing poker with our finances, and putting on shows like the real skeptics’ science doesn’t exist.
Looks like we have an un-closed italics tag up above.
[Fixed. ~dbs]
Wayne,
Thanks for the info. Interestingly, Dr Miskolczi’s estimate of climate sensitivity is… zero.
Dave Springer,
And others talking about adjustments…
Are you aware that many studies, including Fall et al. (Anthony Watts’ study), have corroborated the average temperature record for the US? There have also been many blog initiatives taking raw, adjusted, rural, airport, and urban data, processing them in different ways, and basically corroborating that the US record for average temperatures is robust.
Here is an excellent breakdown of US adjustment choices, focussing on UHI, over at The Blackboard, Lucia’s blog.
Last year Steve Mosher and Zeke Hausfather wrote a seminal post at WUWT discussing numerous attempts at reconstructing global temperatures.
http://wattsupwiththat.com/2010/07/13/calculating-global-temperature/
These were worked up from raw data and from adjusted data, and with various filters and processes, the general result being that the official temperature records are robust. Mosh and Zeke made an excellent case to move on from questions about the need for adjustments to more incisive enquiry about the robustness of some of them (like UHI).
(Check the link above, because it points to numerous projects from both sides of the aisle reaching pretty good agreement on basic ideas.)
So many issues have been investigated – we must not lose sight of all the work that has been done. For example, here is a link to an experiment taking 60 rural stations from around the globe with at least 90 years of uninterrupted data. Result? Good agreement with official (land-only) records.
For US rural/urban comparisons, there have been several blog attempts, which conclude that the difference is negligible.
Recall also the global temperature record from raw data at The Air Vent (just one of the skeptical sites I have cited here). Time and again both sides of the aisle have tested the data thoroughly and found the official records to be fairly robust.
Regarding the top post, there is good agreement between rural and urban temps, from independent analyses, in the literature, and even according to Michael Palmer’s work above. There is no need to discard recent data, although it would be good to learn why rural stations have dropped off lately – and remember the last time station drop-out was thought to be an issue: it turned out not to be anything nefarious, and it didn’t make a difference to the temp records anyway.
barry,
Three “robusts” and one “robustness”! That word always sounds faintly ridiculous to me, like those using it are trying to make their argument stronger.
Here’s the real argument, which is avoided as much as possible by the robust crowd: “Carbon” [by which they mean carbon dioxide, a gas] has been demonized as something harmful that will cause bad things to happen, like climate disruption, runaway global warming, coral bleaching, etc.
The truth is this: there is no evidence to support those conclusions. The only evidence we have is that CO2 is harmless and beneficial. Falsify that testable hypothesis, if you can.
Yes, it would appear the good doctor does not believe we’ve experienced global ice ages over the past million years.
Brian says:
October 24, 2011 at 1:19 pm
“Amazing that the “politicians and environmental promoters” have managed to convince 97% of climatologists and essentially every major scientific organization that AGW is real.”
So, essentially, you mean that all those people and organizations who depend on the money, essentially, coming from “politicians and environmental promoters”, cave to those “politicians and environmental promoters’” demands? In your reality: I wonder why?
Back to reality: do you happen to be able to supply proof, or do you just like to blow smoke from your bong every which where?
barry says:
October 24, 2011 at 5:48 pm
Very interesting post, barry, but it is incomplete. Lucy Skywalker sets forth what you need to address as follows:
“Glenn, we’re questioning your unstated assumptions: that the basic data, including the so-called rural, are free of UHI / or that the UHI is accurately enough known – and that the corrections applied are appropriate.
The longstanding records that are the subject of this post, are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.”
In addition, I am questioning the entire framework that Mosher and friends inherited but never questioned. As scientists, it is not enough to draw your diagrams across maps of Earth’s surface and apply your inherited statistical techniques when you have empirical evidence of far greater importance staring you in the face. The long-standing records from what I call the “well managed” stations are powerful evidence that the records from the other stations are questionable. Shame on you for ignoring those long-standing stations. I take it that you cannot think outside the box enough to address this empirical matter. But it is your duty now as scientists to explain empirically the differences between the long-standing stations and the others.
Coming back to the thread after a few days, I was shocked at some of the comments directed at “Brian”, since they didn’t apply to anything I’d said. I immediately checked for other Brians, of course, and came across the d****l “Brian” was posting.
He dishonours our shared given name! Glad I appended my surname initial to my tag from the get-go.
SBVOR;
Thanks for the additional Lindzen links. He’s becoming ever more accurate and forthright in his diagnoses.
My dear Smokey,
I’d rather lick my way to the centre of the earth than attempt to make points through your endlessly shifting goal posts. Been there before, bubba. Bought the T-shirt, got back on the boat.
You’re welcome to respond to the content of my post rather than snarking about a word in it. It would be on-topic to boot. I may even reply with more consideration.
🙂
Ahh, but what if you do have stations that have reliable data for the entire period? Well, then you do not need to do all this complicated stuff to make up for that, do you? And this post is about exactly such stations. And this post shows that the data of these reliable stations disagree with the data of stations that are less reliable and that rely on a lot of complicated math to supposedly make up for it. In the scientific method, this is called “falsification”. This shows that the method described in “Of Averages & Anomalies” has been falsified.
The Scientific method:
Hypothesis: some stations have unreliable data some of the time, which introduces error. If we use the method described in “Of Averages & Anomalies”, we will be able to screen out that error.
Test of Hypothesis- Compare the output of this method to known stations that do have reliable data, and see if there is a significant difference. If none, the method works and can be used to eliminate error, if a significant difference, the method is falsified and the current reported temperatures contain error.
Result of Experiment- There is a very considerable difference in the known reliable stations and output of the method that claims to be able to eliminate this unreliability.
Conclusion- This method does not eliminate the error, it has been falsified (assuming it was even used correctly and honestly in the first place).
Also, the idea that you can eliminate most of the stations and still achieve a reliable record of whether the temperature is rising or not is incorrect. If you give me control over which stations are included in the temperature record, and which are dropped, I can get you warming regardless of what method you use to screen out my deliberate and false warming. All I need to do is carefully select stations that show a gradual and steady warming. I would almost certainly have to eliminate most stations; using too many stations would make this too hard for me, as there would only be so many stations with this kind of record. There are, after all, only so many cities, and some stations may have been too carefully maintained and calibrated to make this possible (they may have compensated for the UHI effect by careful siting and re-siting). These can be stations that are in growing urban areas. The above article shows that those are the stations kept. I would also eliminate most rural stations, except for a few where the local environment right around the station itself contributed to a slowly growing heat. In all cases, what I am looking for is a slow rise of reported temperatures due to a slowly increasing Urban Heat Island Effect. If I choose those stations where the increase is slow and gradual enough, and drop all stations where there is not such a gradual rise, no amount of fancy math will correct for my deliberately introduced warm bias.
I notice a few things here (“you” is the author of “Of Averages & Anomalies”):
*The number of stations dropped off the temperature record is huge, that is exactly what I would need to falsify the temperature record by including only those stations that show rising temperatures, and dropping all others. The number of stations that would show such a gradual rising temperature may be far smaller than the total number of stations, thus, I would need to drop most of them. Thus, this huge drop of stations must raise serious suspicion.
*I paid for these stations with my tax dollars, many still exist and are still reporting temperatures, yet are not being entered into the global temperature record, why not? I paid for them, you are wasting my tax dollars by not using them for the purpose I paid for them. Where is the money going, if it is not being used for these stations?
*You claim to have a method here which will eliminate bias and error. You drop a huge number of stations, which still exist and report. You cannot claim you dropped those stations because they report in error, since you claim to have a method to eliminate this error. So why have you dropped these stations?
*If I were to deliberately introduce a warm bias, I would wish to drop most rural stations. We now have no more rural stations reporting than we had some 150 years ago. This should make anyone suspicious, whatever the excuse made.
*If I were to deliberately introduce a warm bias, I would drop far more rural than urban stations, and this is exactly what has happened. Urban stations now make up a far greater percentage of the total than at any time in history, including 150 years ago.
*If I were to deliberately introduce a warm bias recently, I would expect to see “rising temperatures” right around the time I eliminate most stations. That is exactly what we do see.
*If I have 1/3 of stations reporting warming, 1/3 reporting cooling, and 1/3 reporting steady temperatures, can this method fail to show warming if I drop the cooling stations and most of the steady ones? Will it correct for that error? Will it even warn you of it? What if only 5 or 10% of the stations give me the slowly rising temperatures I want, and I drop most of the rest: can this method fail to report warming?
*You claim not to have carefully selected for warm-biased stations; the above article shows that this claim is suspect, at the very least. Therefore I would want to see proof of it; your word alone is no longer enough. The fact that “global warming” being true results in greater budgets, job security, and prestige for you has to make me very suspicious: you have a clear conflict of interest.
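The equal-thirds hypothetical in the list above can be sketched with a toy simulation. Everything below is synthetic and invented purely for illustration (the station counts, trends, and noise level are arbitrary assumptions, not real GISS data): with equal numbers of warming, cooling, and flat stations the network average is flat, but culling the cooling stations and half the flat ones produces a spurious warming trend that no averaging of the surviving stations can detect or remove.

```python
import random

random.seed(0)
YEARS = list(range(1900, 2001))

def station_series(trend_per_decade):
    """Synthetic yearly anomalies: a linear trend plus weather noise."""
    return [trend_per_decade * (y - 1900) / 10 + random.gauss(0, 0.3)
            for y in YEARS]

# Equal thirds: warming, cooling, and flat stations (deg C per decade).
stations = ([station_series(+0.15) for _ in range(100)] +
            [station_series(-0.15) for _ in range(100)] +
            [station_series(0.0) for _ in range(100)])

def mean_series(subset):
    """Network average: mean anomaly across stations, year by year."""
    return [sum(s[i] for s in subset) / len(subset)
            for i in range(len(YEARS))]

def trend_per_decade(series):
    """Ordinary least-squares slope, scaled to deg C per decade."""
    n = len(series)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(series) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, series)) /
             sum((x - mx) ** 2 for x in xs))
    return slope * 10

all_trend = trend_per_decade(mean_series(stations))

# Cull the coolers and half the flat stations, keep all the warmers.
kept = stations[:100] + stations[200:250]
kept_trend = trend_per_decade(mean_series(kept))

print(f"all stations:  {all_trend:+.3f} C/decade")
print(f"after culling: {kept_trend:+.3f} C/decade")
```

With these assumed numbers the full network comes out essentially flat, while the culled network shows on the order of +0.1 °C/decade, simply because the surviving stations were chosen for their trend. Averaging only ever sees the stations it is given.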
“Could it also be that the people responsible for the ongoing temperature record realise that you don’t need that many stations for a reliable result and thus aren’t concerned about the decline in station numbers – why keep using stations that aren’t needed if they are harder to work with?”
“Could it also be”: now there is a definitive, scientific phrase, sure to fill me with unbounded confidence! If you are going to say that we don’t need these stations, I am going to need something more definitive than a “could”. I have a much more likely idea: since we know that there used to be far more temperature stations reporting than now, we know they can indeed be used, because they were. But now you say “it’s too hard!” Well then, how about we fire your lazy ass and get someone out there who will do the work! You say it’s too expensive? Well, how much has your budget dropped, if at all? Unless you can show that your budget has dropped a lot, this is just an excuse. And you are asking us to expend trillions of dollars to combat “global warming” while trying to skimp a few bucks here? Before I am willing to do all the hard work and expense to combat something, you had better do the far less hard work and expense to show me I need to.
Here’s an idea. Before we just accept that it is OK for you to drop all these stations (many of which still exist and still report temperatures, showing there is no need not to use them; it is not too hard, because someone is doing it right now), how about we try adding back in the temperatures they still report? You can then use all your fancy math to screen out the errors (I have no objection to honest error screening, after all), and then we can see if the record still shows the same thing. Or we could try using only the most reliable, long-term stations, and see if they concur with your method. You know, like is done above. Oh wait: they do not concur. Conclusion: all the verbiage in “Off Averages & Anomalies” illustrates the old saying, if you can’t dazzle them with brilliance, baffle them with BS. In fact, the pro-AGW camp can go further: you can baffle each other. Each of you need only do a little dishonesty, while telling each other how diligent you are being with the truth. So long as you compartmentalize it, say with only a few key players adding in just a little dishonesty at key steps (with lots of rationalizations for it), why, you can continue to believe that your record is honest. The above actual use of the scientific method to test that record, and the finding that it is wanting, should give you pause…
The fact that a number of the chief proponents of AGW have actually been caught monkeying with the temperature trend and “adjusting” it well after the fact (despite not having a time machine to tell whether they needed to; an actual example is how 1934 used to be the highest recorded US temperature, yet was adjusted downward in increments until 1998 was), as well as actual criminal behavior and deliberately not using the scientific method (such as not releasing their data and code, and even threatening to destroy it rather than do so, so that their work cannot be replicated or even checked), also means that the claim that they are not up to anything is either suspect in the extreme or a flat-out lie. Note that it is quite possible, in a large, loose organization like that, to believe you are telling the truth simply because everyone else around you assures you that you are. Enough compartmentalization of little lies here and there and you can collect them together into one huge whopper and never know it. Throw in a bit of “noble cause corruption” as a rationalization and there you go, conscience cleared!
BTW, one thing I would surely like to see, as an amendment and addition to this article, is whether there are stations like these, with guaranteed long and accurate records, in countries other than the US. Yes, I know that others have shown in this thread that there are other very long records; what I am looking for is:
1) How accurate and reliable are they? This would require multiple stations, rather than just, say, one spot. The GISS record here covers 600 stations; are there any such records from other countries?
2) I would like to see it over the same 100-year time frame, apples to apples.
3) Unadjusted data, of course.
Lucy Skywalker says:
October 24, 2011 at 5:26 pm
“The longstanding records that are the subject of this post, are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.”
Excellent post. The longstanding records should be treated with respect and not bundled with the other records in knee jerk fashion. It is incumbent upon the bundlers to provide empirical evidence that the two sets of records should be treated the same. Without such evidence, one wonders whether the reason for bundling the weak records with the longstanding records is to achieve a higher average temperature. Recognizing that such evidence exists and can be addressed might require some folks to think outside the box.
peetee says:
October 24, 2011 at 3:31 pm
“uhhh… the article references to ‘abstract’ – where’s the actual published paper to be found? What journal? Uhhh… what peer-reviewed journal? Surely, surely…. this can’t be a pre-release!!!”
—
peetee, this IS the FINAL release, and the paper is peer-reviewed right here. That means you and your peers!
“But wait”, I hear you saying, “you can’t be serious! My troll posts count for peer review?” To which I reply: “Why yes, for sure! You have no idea what real academic peer review can be like.”
Glad we could clear that up.
Glenn Tamblyn says:
October 24, 2011 at 3:38 pm
“In this series I intend to look at how the temperature records are built and why they are actually quite robust. In this first post (Part 1A) I am going to discuss the basic principles of how a reasonable surface temperature record should be assembled. Then in Part 1B I will look at how the major temperature products are built. Finally in Parts 2A and 2B I will then look at a number of the claims of ‘faults’ against this to see if they hold water or are exaggerated based on misconceptions.”
We are not asking for a tour of the box. We want you to think outside the box. What empirical evidence can you offer for not treating the longstanding records differently from the other records? In an earlier post, I suggested that the longstanding records should be treated as the standard and that all other records should be treated as deficient because of siting issues and related matters. Anthony’s 30 years of data offer considerable empirical evidence for investigating the siting issues. Please try to address the empirical questions about siting.
It’s also worth puzzling over GISS vs adjusted data in Australia …
http://www.waclimate.net/bomhq-giss.html
barry,
I’ll respond to your comment (as you requested).
Fine…
Let’s pretend the instrument data are flawless across the entire history. AMO warming cycles alone are still enough to account for essentially all of the USA warming signal:
http://sbvor.blogspot.com/2011/10/amo-as-driver-of-climate-change.html
I submit that the AMO alone is also enough to fully explain the birth and death of the CAGW cult (and the global cooling cult which preceded it):
http://sbvor.blogspot.com/2010/12/how-amo-killed-cagw-cult.html
From the same post (above) Dr. Lindzen asserts that:
“The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work (Tsonis et al, 2007), suggests that this variability is enough to account for all climate change since the 19th Century.”
Theo @ur momisugly here
Included in my post are links to posts on the questions that Lucy Skywalker raised, testing for UHI in the records and in the data. This work has been done under the auspices of skeptic websites and others. I have not included the literature references, as I assumed these would be disregarded out of hand. I took pains only to reference sources that have the trust, or seem to have the trust, of the skeptical milieu.
Lucy’s riposte seems to be that there is doubt that stations have been properly screened to distinguish rural from urban. There are a number of different methods; Zeke works with three in his post on UHI in the US. At Residual Analysis, the screening methods tested are (1) GHCN classification, then population density, and then (2) ‘vegetation’ metadata, from which he sourced actually rural (no settlement nearby) stations.
Other parameters have also been tested; airport trends (example, example) are a good case, where the results are not much different from urban, rural or all sites.
Unsure about GHCN data? Then use the GSOD data set, which shows a similar profile to the other records; this has been compiled outside the official channels by citizen bloggers. Here is a comparison to GHCN data from 1950 (caveat: GSOD has poorer coverage before 1974, at least that was the case last time I read up on it a year ago).
My original post in this thread is aimed more at the people here who are impugning the methods and motives of the official compilers of temperature data and records. There is no call for that when there is so much material, from skeptics as well, corroborating the official records. Even Anthony Watts’ paper (Fall et al.) corroborates the averaged temperature record for the US.
In reply to your comment to me, Lucy’s queries are of course valid, but it is wrong to suggest that they have not been addressed.
Garrett Curley (@ga2re2t) says:
October 24, 2011 at 3:13 am
I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?
————————————-
It’s that darned consensus thingy.