Unadjusted data of long-period stations in GISS show a virtually flat century-scale trend

Hohenpeissenberg Meteorological Observatory - Image from GAWSIS

Temperature averages of continuously reporting stations from the GISS dataset

Guest post by Michael Palmer, University of Waterloo, Canada

Abstract

The GISS dataset includes more than 600 stations within the U.S. that have been in operation continuously throughout the 20th century. This brief report looks at the average temperatures reported by those stations. The unadjusted data of both rural and non-rural stations show a virtually flat trend across the century.

The Goddard Institute for Space Studies provides a surface temperature dataset that covers the entire globe but, for long stretches of time, contains mostly U.S. stations. For each station, monthly temperature averages are tabulated in both raw and adjusted versions.

One problem with the calculation of long-term averages from such data is the occurrence of discontinuities; most station records contain one or more gaps of one or more months. Such gaps could be due to anything from the clerk in charge being a quarter drunkard to instrument failure and replacement or relocation. At least in some cases, such discontinuities have given rise to “adjustments” that introduced spurious trends into the time series where none existed before.

1 Method: Calculation of yearly average temperatures

In this report, I used a very simple procedure to calculate yearly averages from raw GISS monthly averages, one that deals with gaps without making any assumptions or adjustments.

Suppose we have 4 stations, A, B, C and D. Each station covers 4 time points, t0 through t3, without gaps; denote station A's four readings A0 through A3, and likewise for the other stations.

In this case, we can obviously calculate the average temperatures as T1 = (A1 + B1 + C1 + D1)/4, and likewise for the other time points.

A more roundabout, but equivalent, scheme for the calculation of T1 would be to accumulate the mean year-to-year differences: T1 = T0 + [(A1 - A0) + (B1 - B0) + (C1 - C0) + (D1 - D0)]/4.

With a complete time series, this scheme offers no advantage over the first one. However, it can be applied quite naturally in the case of missing data points. Suppose now that the series is incomplete; say, station B's reading B1 is missing. In this case, we can estimate the average temperatures as T1 = T0 + [(A1 - A0) + (C1 - C0) + (D1 - D0)]/3; the step from T1 to T2 likewise uses only A, C and D, since neither B1 - B0 nor B2 - B1 can be formed.

The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average Δtemperature of the other stations.

One advantage that may not be immediately obvious is that this scheme also removes systematic errors due to change of instrument or instrument siting that may have occurred concomitantly with a data gap.

Suppose, for example, that data point B1 went missing because the instrument in station B broke down and was replaced, and that the calibration of the new instrument was offset by 1 degree relative to the old one. Since B2 is never compared to B0, this offset will not affect the calculation of the average temperature. Of course, spurious jumps not associated with gaps in the time series will not be eliminated.
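To make the scheme concrete, here is a minimal Python sketch of it. This is an independent toy version with made-up numbers, not the author's actual code (which he offers on request):

```python
def average_anomaly(series):
    """Accumulate the mean year-to-year difference across stations.

    series: dict mapping station name -> list of readings, with None
    marking a missing data point. Returns the anomaly relative to the
    first time point; a difference contributes only when a station
    reports at both adjacent time points.
    """
    n = len(next(iter(series.values())))
    anomaly = [0.0]
    for t in range(1, n):
        diffs = [s[t] - s[t - 1] for s in series.values()
                 if s[t] is not None and s[t - 1] is not None]
        anomaly.append(anomaly[-1] + sum(diffs) / len(diffs))
    return anomaly

# Four hypothetical stations; B1 is missing, so station B contributes
# nothing to the t0->t1 and t1->t2 steps, just as in the example above.
stations = {
    "A": [10.0, 11.0, 12.0, 11.0],
    "B": [ 8.0, None,  9.0,  8.5],
    "C": [12.0, 13.0, 14.0, 13.0],
    "D": [ 9.0, 10.0, 11.0, 10.0],
}
print(average_anomaly(stations))  # [0.0, 1.0, 2.0, 1.125]

# If B's replacement instrument reads 1 degree high after the gap, the
# result is unchanged, since B2 is never compared with B0:
stations_offset = dict(stations, B=[8.0, None, 10.0, 9.5])
print(average_anomaly(stations_offset))  # [0.0, 1.0, 2.0, 1.125]
```

The second run illustrates the point about instrument changes: shifting station B's post-gap readings by a constant offset leaves the averaged anomaly untouched.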

In all following graphs, the temperature anomaly was calculated from unadjusted GISS monthly averages according to the scheme just described. The code is written in Python and is available upon request.

2 Temperature trends for all stations in GISS

The temperature trends for rural and non-rural US stations in GISS are shown in Figure 1.

Figure 1: Temperature trends and station counts for all US stations in GISS between 1850 and 2010. The slope for the rural stations is 0.0039 deg/year, and for the other stations 0.0059 deg/year.

This figure resembles other renderings of the same raw dataset. The most notable feature in this graph is not in the temperature but in the station count. Both to the left of 1900 and to the right of 2000 there is a steep drop in the number of available stations. While this seems quite understandable before 1900, the even steeper drop after 2000 seems peculiar.

If we simply lop off these two time periods, we obtain the trends shown in Figure 2.

Figure 2: Temperature trends and station counts for all US stations in GISS between 1900 and 2000. The slope for the rural stations is 0.0034 deg/year, and for the other stations 0.0038 deg/year.

The upward slope of the average temperature is reduced; this reduction is more pronounced with non-rural stations, and the remaining difference between rural and non-rural stations is negligible.

3 Continuously reporting stations

Several long-running temperature records fail to show any substantial long-term warming signal; examples include the Central England Temperature record and the one from Hohenpeissenberg, Bavaria. It therefore seemed of interest to look for long-running US stations in the GISS dataset. Here, I selected stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.

The temperature trends of these stations are shown in Figure 3.

Figure 3: Temperature trends and station counts for all US stations in GISS reporting continuously, that is containing at least one monthly data point for each year from 1900 to 2000. The slope for the rural stations (335 total) is -0.00073 deg/year, and for the other stations (278 total) -0.00069 deg/year. The monthly data point coverage is above 90% throughout except for the very first few years.

While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.

Figure 3 also shows the average monthly data point coverage, which is above 90% for all but the first few years. The less than 10% of all raw data points that are missing are unlikely to have a major impact on the calculated temperature trend.

4 Discussion

The number of US stations in the GISS dataset is high and reasonably stable during the 20th century. In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out, to the point that the GISS dataset no longer seems to offer a valid basis for comparison of the present to the past. If we confine the calculation of average temperatures to the 20th century, there remains an upward trend of approximately 0.35 degrees.

Figure 4: Locations of US stations continuously reporting between 1900 and 2000 and contained in the GISS dataset. Rural stations in red, others in blue. This figure clearly shows that the US are large, but the world (shown in FlatEarth™ projection) is even larger.

Interestingly, this trend is virtually the same with rural and non-rural stations.

The slight upward temperature trend observed in the average temperature of all stations disappears entirely if the input data is restricted to long-running stations only, that is, those stations that have reported monthly averages for at least one month in every year from 1900 to 2000. This discrepancy remains to be explained.

While the long-running stations represent a minority of all stations, they would seem most likely to have been looked after with consistent quality. The fact that their average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.

Disclaimer

I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetic. In case I have overlooked equivalent previous work, this is due to my ignorance of the field, is not deliberate, and will be amended upon request.

October 24, 2011 2:48 am

^^^ Dang! ^^^
“wet build temperature” = “wet bulb temperature”

DirkH
October 24, 2011 2:50 am

Glenn Tamblyn says:
October 24, 2011 at 12:42 am
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser. ”
But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around, like in GISS. Is this what you consider a better approach? It also makes manipulations much simpler. The PNS approach. Drop the thermometers that don’t confess.

James Reid
October 24, 2011 2:52 am

A question to anyone who is familiar with the techniques from someone who has not had access to the published papers in this area; How is the global mean temperature “normally” calculated from daily maximum and minimum records?
Can anyone point to a primer in this area please?

kim;)
October 24, 2011 3:03 am

KPO says:
October 24, 2011 at 2:17 am
“This thing with averages of averages of averages of data points (numbers) bothers me. ”
xxxxxxxxxxxxxx
Well said!

KV
October 24, 2011 3:07 am

Re “The Great Dying of Thermometers” which occurred worldwide predominantly in 1992-93. Of course, most of them didn’t actually die – Hansen and associates simply stopped using the data for reasons nobody ever seems to have explained. The following is my own interpretation of available facts.
1988 was reportedly the hottest summer in the US for 52 years and no doubt gave James Hansen confidence for his alarmist appearance before a US Committee that year (reportedly with windows purposely left wide open and air conditioning off on one of the hottest days of the year)! He did not have the backing of NASA as his former supervisor, senior NASA atmospheric scientist Dr. John S. Theon made clear after he retired. He said Hansen had “embarrassed NASA” with his alarming climate claims and violated NASA’s official agency position on climate forecasting (i.e., ‘we did not know enough to forecast climate change or mankind’s effect on it’).
It is evident Hansen had put his reputation and credibility on the line, and when, starting with significant falls in 1989 and 1990, the next four years to 1992 showed sharp cooling in many parts of the world, a cooling which was then further exacerbated by the June 1991 eruption of Mt. Pinatubo, Hansen must have been put under extreme pressure, not only by his critics within and outside NASA, but also by other scientists supporting and pushing the AGW hypothesis.
In the absence of any official plausible explanation I feel this to be a credible background and motive for the dropping of stations and the now well-established abuse of raw data. It is worthy of note that others have found drops in station numbers resulted in a significant step rise in most graphed temperatures.

October 24, 2011 3:13 am

I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it? Maybe Anthony Watts can shed some light on this?
Also, Anthony and others were quite annoyed that the recent BEST results were publicized before peer-review, and here we have a biochemist being given free rein to post what amounts to a back-of-the-envelope calculation of average ground temps. Michael Palmer or Anthony Watts could have, at the very least, got a statistician or near equivalent to verify the averaging analysis, which I suspect is particularly well suited to giving an average trend of zero.
Cheers.

TBear (Sydney, where it is finally warming Up, after coldest October in nearly 50 yrs)
October 24, 2011 3:16 am

KPO says:
October 24, 2011 at 2:17 am
Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone. I have this sense that there are parameters missing such as humidity, cloud cover and perhaps wind speed/direction that should also be recorded and used collectively when compiling a record. My feeling is that a more complete “measure” would be accomplished, EG an average temp of 15c/70% humidity/45% cloud/20km/hr/NW – in short, a sort of micro-climate record, expanded to regional and then global if that’s possible. This thing with averages of averages of averages of data points (numbers) bothers me. Perhaps my thinking is off so please straighten me out. Thanks
_______________
So let’s say KPO is correct.
Does this sort of point indicate that we really have no safe handle on total energy in the climate system?
If that is correct, or even plausible, why are we spending trillions of dollars on AGW?
Frustrated that, if these types of points have any true validity, that the general scientific community does not get its act together and call this CAGW argument for the misguided tripe that it may well be. Is it because scientists feel too constrained by specialism? Sad, very sad, if that is the case.
Waiting for the push-back revolution to begin …

PlainJane
October 24, 2011 3:18 am

Glenn Tamblyn
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser.”
This article never claims to be working out the average temperature of America or any region in particular – it is only looking at long term trends in individual stations, so this point is not valid.
It is also petty and peevish to complain about being “snipped” here when you have not had your comment disappeared down a black hole so that no other people may know you posted something. Most readers of this blog, if interested, would be savvy enough to find your work anyway from the comment Anthony left stating where your work was. You have been invited to place your specific work here so we can read it easily in context. Could you please do that?

Steve C
October 24, 2011 3:21 am

There’s another graph I’d like to see, prompted by an oft-quoted statistic in these and other pages, and again by those RUTI vs BEST graphs from hidethedecline.eu, above. The oft-quoted statistic is that, of the land area of the earth, urban areas comprise around 1%, rural 99%. Has anyone drawn a graph in which 99% weighting was given to the best of the rural stations, and 1% to the urban?
Of course, this still ignores the two-thirds of the planet which is sea, but I’d certainly call such a graph a more accurate picture of overall land temperature than the usual heavily urban- and airport-influenced offerings. Has anyone here come across such a graph, or can someone produce one a bit more quickly than my own rusty data processing would allow?

October 24, 2011 3:31 am

Glenn Tamblyn,
Please do not refer to the Skeptical Pseudo-Science propaganda blog. John Cook has no ethics, no honesty, and his mendacious blog has been thoroughly deconstructed in articles and comments here. Do a search of the archives, and get educated.
Of course, if you suffer from cognitive dissonance like most True Believers, SPS is the echo chamber for you. Just be aware that you’re being spoon-fed alarmist propaganda at that heavily censoring blog. Some folks actually like being led by the nose, so maybe that’s A-OK with you. To each his own.

Harry Snape
October 24, 2011 3:35 am

Replacing a missing temp from one station with the average of the other stations seems like madness to me. The missing reading could be for a station in Antarctica, or the Equator, they would bear no relationship to the “average” of other stations.
Equally, infilling with the average for that time of the year from that station, or something similar is also likely to introduce an error. As someone said above Sydney has an anomalously low Oct, so infilling an average Oct temp would have overestimated the temp. But at least this method has the temp in the ball park, i.e. it will be an average Syd temp for Oct, not an average world temp for Syd (which would be seriously low).
What needs to happen when stations are missing is that the average for that location/time is used, but the error bar on the calculated temp is increased by the maximum variance at that location/time (possibly more for short data records). So the loss of any substantial portion of the record will be observable by the width of the error bars and one can deduce how confident we should be of the final number.

oMan
October 24, 2011 3:53 am

KPO: you’ve caught the flavor of my frustration with the (understandable, inevitable) enterprise of reducing the complexity of a system such as local weather or (its big sister, integrated over time and space) climate, into a single parameter called “temperature” and even that not measuring heat content but just a dry bulb thermometer. We lose so much information in this process. It would be nice if the final statistic came with a label reminding us of the magnitude of what has been left on the cutting-room floor and how subjective the cutting has been. Just use a five-star or color code. I know, I know, error bars are a good nudge in that direction; and this post by Michael Palmer contains, in Figure 1, another excellent pointer for quality, namely the number of stations. It tells the reader that the data before 1900 and after about 1990 is “thin” and “different.” I use those words to suggest the orthogonality of the information-space we are trying to explore if not capture in the temperature time series for the entire US “represented” by GISS numbers.
That dying of the thermometers is the real story for me, and Michael Palmer adds a valuable chapter to the story. Many thanks.

Stephen Wilde
October 24, 2011 3:54 am

So, how to reconcile the widespread sceptic acceptance that there has been SOME warming since the LIA with the now clear possibility that the surface temperature record via thermometers has been primarily recording UHI effects and/or suffers from unjustified ‘adjustments’ towards the warm side ?
Firstly we can simply say that the background warming since the LIA is less than that apparently recorded by the thermometers.
Although there has been some warming, the effect of UHI and inaccurate ‘adjustments’ has exaggerated it during the late 20th century and may now be hiding a slight decline.
Secondly we can see from the chart above that although there may have been little or no net change in temperature during the 20th century at the most reliable long term sites there have been changes up and down commensurate with many other observations i.e. warming followed by cooling then warming again and now possibly cooling.
What such a pattern suggests is that the Earth’s watery thermostat is highly effective but takes a while ( a few decades) to adjust to any new forcing factor.
Thus the system energy content remains much the same (including the main body of the oceans), but in the process of adjusting the energy flow through the system in order to retain that basic system energy content, the surface pressure distribution changes so as to alter the size and position of the climate zones, especially in the mid latitudes where most temperature sensors are situated.
From our perspective there seems to be a warming or cooling at the recording sites when averaged out overall, but in reality all that is being recorded is the rate of energy flow past the recording sites as the speed of energy flow through the system changes in an inevitable negative response to a forcing agent, whether it be sun, sea or GHGs. In effect the positions of the surface sensors vary in relation to the position of the climate zones, and they record that variance and NOT any change in energy content for the system as a whole.
A warmer surface temperature recording at a given site (excluding UHI and adjustments) just reflects the fact that more warm air from a more equatorward direction flows across it more often. That does not imply a warmer system if the system is expelling energy to space commensurately faster.
When a forcing agent tries to warm the system the flow of energy through the system and out to space increases so that more warm air crosses our sensors.
When a forcing agent tries to cool the system the flow of energy through the system and out to space decreases so that cooler air crosses our sensors.
Hence the disparity between satellite and surface records. The satellites are independent of the rate of energy flow across the surface and attempt to ascertain the energy content of the entire system. That energy content varies little if at all because the net effect of changes in the rate of energy flow through the system is to stabilise the total system energy content despite internal system variations or external (solar) variations seeking to disturb it.
Higher temperature readings at surface stations therefore do not necessarily reflect a higher system energy content at all, merely a local or regional surface warming as more energy flows through on its way to space.

October 24, 2011 3:56 am

Hi, Michael. One needed adjustment has to do with time-of-observation bias. There are literature references on this available via the internet. You may want to check Steve McIntyre’s Climate Audit for discussions of this.
I believe that the raw data needs this adjustment.

Richard
October 24, 2011 4:19 am

A chilling analysis of James Hansen’s machinations.

Richard
October 24, 2011 4:25 am

Garrett Curley (@ga2re2t) says:
“I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?”
Maybe Arithmetic and free discussion?
This is not a religious site perhaps?
Not a discussion of beliefs but rather on the basis of the beliefs?
Maybe Hansen introduces a consistent warming bias with his “adjustments”?

Dave Springer
October 24, 2011 4:30 am

Frank Lansner says:
October 24, 2011 at 1:05 am
Here RUTI (Rural Unadjusted Temperature Index) versus BEST global land trend:
http://hidethedecline.eu/media/ARUTI/GlobalTrends/Est1Global/fig1c.jpg
The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts) :
BEST adds 0,55 K to the warm trend 1950-78 compared to RUTI.
_________________________________________________________________
Hi Frank. I had a look at the graphs and recognized the source of the difference. That’s the infamous Time of Observation adjustment. It’s the biggest upward adjustment they make. Without it there is no significant 20th century warming trend. This is why the warming trend focus has now shifted to the period 1950-2010. You don’t hear the AGW boffins discussing dates earlier than that anymore. The author of the OP here evidently didn’t get the memo which was issued about the same time as the order to stop calling it “climate change” and begin calling it “global climate disruption”. It’s all about framing, you see. They frame the times and they frame the terms. It’s a despicable, dishonest, unscientific agenda they pursue.
Steve Goddard has a good explanation of the TOB here:
http://stevengoddard.wordpress.com/2011/08/01/time-of-observation-bias/

Time Of Observation Bias
Posted on August 1, 2011 by stevengoddard
The largest component of the USHCN upwards adjustments is called the time of observation bias. The concept itself is simple enough, but it assumes that the station observer is stupid and lazy.
Suppose that you have a min/max thermometer and you take the reading once per day at 3 pm. On day one it reads 50F for the high – which for arguments sake occurred right at 3 pm. That night a cold front comes through and drops temperatures by 40 degrees. The next afternoon, the maximum temperature is also going to be recorded as 50F – because the max marker hasn’t moved since yesterday. This is a serious and blatantly obvious problem, if the observer is too stupid to reset the min/max markers before he goes to bed. The opposite problem occurs if you take your readings early in the morning.
I had a min/max thermometer when I was six years old. It took me about three days to realize that you had to reset the markers every night before you went to bed. Otherwise half of the temperature readings are garbage.
USHCN makes it worse by claiming that people used to take the readings in the afternoon, but now take them in the morning. That is how they justify adjusting the past cooler and the present warmer.
They should use the raw data. These adjustments are largely BS.
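For readers who want to see the carryover effect in numbers, here is a toy simulation with made-up hourly temperatures. It illustrates only the mechanism described above, not the actual USHCN adjustment:

```python
# Hourly temperatures for two hypothetical days: day 1 peaks at 50 F at
# 3 pm, then a cold front drops everything by 40 degrees overnight.
day1 = [30 + 20 * max(0.0, 1 - abs(h - 15) / 9) for h in range(24)]
day2 = [t - 40 for t in day1]
hours = day1 + day2  # 48 readings; index 0 = midnight starting day 1

# The true calendar-day maxima:
true_maxima = (max(day1), max(day2))  # (50.0, 10.0)

# An observer who reads and then resets the max marker at 3 pm: the
# "day 2" value covers 3 pm day 1 through 3 pm day 2, so it still holds
# the warm late afternoon of day 1, not the cold of day 2.
observed_day2 = max(hours[16:40])

print(true_maxima, observed_day2)
```

With these numbers the 3 pm observer reports a day-2 maximum close to day 1's high of 50 F, even though day 2 never got above 10 F; reading in the early morning biases the minima the opposite way.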

October 24, 2011 4:36 am

Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone.

Indeed. The mainstream datasets are based upon a daily temperature reading.
The problem here is what does this temperature reading actually represent?
For the moment, forget all the problems associated with the accuracy of the thermometer… forget the Urban Heat Island effect and the known location issues associated with thermometers… and let's take a look at what daily data is actually being recorded.
If we used a basic data logging thermometer to record the temperature at the end of every minute during a single calendar day we would accumulate 1,440 temperature data points… the maths is simple: 60 minutes x 24 hours = 1,440.
From these 1,440 data points we could then very easily calculate an Average Daily Temperature for that day.
Unfortunately, this is not how the mainstream datasets derive their daily temperature reading.
This is what they actually do.
First:
They capture the data point with the highest temperature and call it their Daily High Temperature.
Although this is fairly reasonable it is important to remember that this is the high outlier value from the 1,440 data points for that day.
Second:
They capture the data point with the lowest temperature and call it their Daily Low Temperature.
Although this is fairly reasonable it is important to remember that this is the low outlier value from the 1,440 data points for that day.
Third:
They add the Daily High Temperature outlier value to the Daily Low Temperature outlier value and then divide this intermediate number by two. So what would a rational person call this number? By definition (i.e. the maths) it is the mid-point between the daily extreme outlier temperatures. However, climatologists by some bizarre logic call this number the Daily Average Temperature. It is only in climatology that the average of 1,440 data points is calculated by just using the two extreme outlier values for the day… no wonder climate science is regarded as a weird science.
To underline just how weird this weird science really is, let's ask ourselves:
Question: What data would a rational person use to demonstrate rising temperatures?
Rational Answer: The series of Daily High Temperatures.
Climatology Answer: The series of mid-points between the daily extreme outlier temperatures.
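A quick sketch shows how far the midpoint can sit from a true all-day average. The one-minute readings below are made up, but have the typical asymmetric shape of a daily cycle (long cool night, short afternoon peak):

```python
import math

# Hypothetical day of 1,440 one-minute readings: a 15 C baseline with a
# Gaussian afternoon bump peaking at 25 C around 3 pm (minute 900).
temps = [15 + 10 * math.exp(-((m - 900) / 180) ** 2) for m in range(1440)]

true_mean = sum(temps) / len(temps)       # average of all 1,440 points
midpoint = (max(temps) + min(temps)) / 2  # the climatological "average"

print(round(true_mean, 2), round(midpoint, 2))
```

With these numbers the midpoint comes out nearly 3 degrees above the true mean; the two agree only when the daily curve happens to be symmetric about its extremes.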

Dave Springer
October 24, 2011 4:41 am

More info on individual adjustments and how they change twencen temp record.
A picture is worth a thousand words (maybe more in this case):
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_pg.gif

Dave Springer
October 24, 2011 4:44 am

Final result of adjustments:
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
As you can see the entire twencen warming is indeed manmade. Made by the adjustments applied to the thermometer readings that is…

Rhys Jaggar
October 24, 2011 4:45 am

I think the debate is beginning to move toward the position where we can see that the result obtained in terms of temperature trends depends on the source data. SURPRISE, SURPRISE!
It may in fact be the case that about 50 independent analyses should be done to show what happens depending on what data you use. This one is just for the USA, which is a large continental land mass bounded by the world’s two largest oceans to the East and West, a warm sea to the South and a major land mass to the north, with a smaller land mass in the SW.
You might find different results if you studied Russia: a large continental land mass surrounded by ice/ocean to the north and a continental land mass to the south.
You might find different results if you studied Europe: a mid-sized continental land mass with a major ocean to the west, a small sea to the south, smaller enclosed seas to the SE and NE.
You might find different results for Australia: a mid sized land mass totally surrounded on all sides by a major ocean.
I am certain, based on the fact that the Thames simply doesn’t freeze over like it used to in the 19th century, that London must be much warmer in winter than it used to be 200 years ago. So I’d frankly be amazed if we couldn’t agree that we have had warming in the past 200 years, although the 20th century may be less clear cut.
Where the debate has been the past 20 years is a small group unilaterally determining the source data sets and not having that most important decision subjected to the most rigorous analysis by all. That can’t change quickly enough in my book.
It would also appear that debates about how you search for deviations can get you different results, particularly if the datasets have gaps in the record. It might be helpful to commit to building a century of reliable, consistent, internationally agreed datasets to ensure that working with limited data becomes less important over time. One hopes that this can include wireless-based transmission of data, particularly in rural areas with extreme cold in winter. Whether that is technically feasible is something specialist climatologists should no doubt enlighten the public about.
One is minded to suggest that the IPCC bears all the hallmarks in climatology that FIFA has shown in world football. A deeply flawed organisation, but not completely evil. Which is about as strong a signal for fundamental reform as can be given using measured language…

Bigdinny
October 24, 2011 4:48 am

I have been lurking here and several other sites for over a year now, trying to make sense of all this from both sides of the fence. In answer to this simple question, Is the earth’s temperature rising? depending upon whom you ask you get the differing responses YES!, NO!, DEPENDS! IT WAS BUT NO LONGER IS! I think I have learned a great deal. Of one thing I am now fairly certain. Nobody knows.

DocMartyn
October 24, 2011 4:48 am

Have a look at the histograms of Fig.3, BEST UHI paper. It appears that when they compare 10 year, 10-20, 10-30 and 30+ data series the distribution of temperature RATE changes from a normal to a Poisson distribution.
My guess is that you will find the same thing. Moreover, if you look at the individual rates in the same manner, you will be able to identify when the transition occurred.

Jim Butler
October 24, 2011 4:51 am

First hand example of how data sets get messed up…
This morning, as part of my first cuppa routine, I checked the weather online using Intellicast’s site. 47deg. Let my dogs out…hmmmm …seems much cooler than 47deg, checked Accu. 47deg. Then checked Wunderground, 47deg. Used Wunderground’s “station select” feature, saw that it had defaulted to APRSWXNET – Milford, Ma.. All of the surrounding stations were reporting 34-37degs, and they appeared to be amateur stations, whereas I’m guessing that APRSWXNET is an “official” station of some sort.
Whatever it is, multiple services are using it, and it’s wrong by about 12deg.
JimB

October 24, 2011 4:56 am

Richard says:
“Maybe Arithmetic and free discussion?”
I would be of the opinion that arithmetic and free discussion of that type should be reserved to forum threads. This site is considered, from what I gather, as a reference on climate skepticism, so back-of-the-envelope calculations seem out of place to me.
“This is not a religious site perhaps?”
Well, I would argue that using any Tom, Dick and Harry analysis just to place doubt on GW (and therefore AGW) is being somewhat religious. But putting that aside, not being religious/dogmatic about something does not imply that every discussion is fair-game. Being open-minded about something does not require you to let your intellectual defenses down.