# Unadjusted data of long period stations in GISS show a virtually flat century scale trend

Temperature averages of continuously reporting stations from the GISS dataset

Guest post by Michael Palmer, University of Waterloo, Canada

Abstract

The GISS dataset includes more than 600 stations within the U.S. that have been in operation continuously throughout the 20th century. This brief report looks at the average temperatures reported by those stations. The unadjusted data of both rural and non-rural stations show a virtually flat trend across the century.

The Goddard Institute for Space Studies provides a surface temperature data set that covers the entire globe but, for long periods of time, contains mostly U.S. stations. For each station, monthly temperature averages are tabulated, in both raw and adjusted versions.

One problem with the calculation of long-term averages from such data is the occurrence of discontinuities; most station records contain one or more gaps of one or more months. Such gaps could be due to anything from the clerk in charge being a quarter drunkard to instrument failure, replacement, or relocation. At least in some instances, such discontinuities have given rise to “adjustments” that introduced spurious trends into the time series where none existed before.

1 Method: Calculation of yearly average temperatures

In this report, I used a very simple procedure to calculate yearly averages from raw GISS monthly averages; it deals with gaps without making any assumptions or adjustments.

Suppose we have 4 stations, A, B, C and D. Each station covers 4 time points (0 to 3), without gaps.

In this case, we can obviously calculate the average temperatures as Tt = (At + Bt + Ct + Dt)/4 at each time point t.

A more roundabout, but equivalent scheme for the calculation of T1 would be T1 = T0 + [(A1 - A0) + (B1 - B0) + (C1 - C0) + (D1 - D0)]/4, that is, the previous average plus the mean of the four station Δtemperatures.

With a complete time series, this scheme offers no advantage over the first one. However, it can be applied quite naturally in the case of missing data points. Suppose now we have an incomplete data series, such as:

…where a dash denotes a missing data point. In this case, we can estimate the average temperatures by dropping the deltas that touch the gap; with B1 missing, for example, T1 ≈ T0 + [(A1 - A0) + (C1 - C0) + (D1 - D0)]/3, and likewise for T2.

The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.
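As a rough illustration only (the author's actual Python code is stated to be available on request; this sketch is not it, and the station values and the function name `yearly_anomalies` are invented), the delta-averaging scheme described above can be written as:

```python
def yearly_anomalies(series):
    """Average temperature anomalies from per-station series with gaps.

    series maps station name -> list of readings; None marks a missing
    data point. For each step t-1 -> t, only stations with both readings
    present contribute a delta; missing stations are simply dropped,
    which amounts to assigning them the average delta of the others.
    """
    n = len(next(iter(series.values())))
    anomalies = [0.0]  # the first time point serves as the baseline
    for t in range(1, n):
        deltas = [s[t] - s[t - 1] for s in series.values()
                  if s[t] is not None and s[t - 1] is not None]
        anomalies.append(anomalies[-1] + sum(deltas) / len(deltas))
    return anomalies

# Four stations, with B1 missing (the dash in the article's table):
stations = {
    "A": [10.0, 11.0, 10.5, 11.5],
    "B": [5.0, None, 5.5, 6.5],
    "C": [20.0, 21.0, 20.5, 21.5],
    "D": [15.0, 16.0, 15.5, 16.5],
}
print(yearly_anomalies(stations))  # [0.0, 1.0, 0.5, 1.5]
```

With B1 missing, the two deltas touching B1 are dropped, yet the remaining stations still pin down the average anomaly at every time point.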

One advantage that may not be immediately obvious is that this scheme also removes systematic errors due to change of instrument or instrument siting that may have occurred concomitantly with a data gap.

Suppose, for example, that data point B1 went missing because the instrument in station B broke down and was replaced, and that the calibration of the new instrument was offset by 1 degree relative to the old one. Since B2 is never compared to B0, this offset will not affect the calculation of the average temperature. Of course, spurious jumps not associated with gaps in the time series will not be eliminated.
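The offset-immunity argument can be checked numerically. In this hypothetical sketch (again not the author's code; data and names are invented), station B's replacement instrument reads 1 degree high from B2 onward, and the recovered anomalies are unchanged because B2 is never differenced against B0:

```python
def anomalies(series):
    # Same delta-averaging scheme: deltas that touch a gap (None) are skipped.
    n = len(next(iter(series.values())))
    out = [0.0]
    for t in range(1, n):
        d = [s[t] - s[t - 1] for s in series.values()
             if s[t] is not None and s[t - 1] is not None]
        out.append(out[-1] + sum(d) / len(d))
    return out

base = {"A": [10.0, 11.0, 10.5, 11.5],
        "B": [5.0, None, 5.5, 6.5],
        "C": [20.0, 21.0, 20.5, 21.5]}

# The replacement instrument at station B reads 1 degree high from B2 on:
shifted = {k: list(v) for k, v in base.items()}
shifted["B"] = [5.0, None, 6.5, 7.5]

print(anomalies(base) == anomalies(shifted))  # True: the offset drops out
```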

In all following graphs, the temperature anomaly was calculated from unadjusted GISS monthly averages according to the scheme just described. The code is written in Python and is available upon request.

2 Temperature trends for all stations in GISS

The temperature trends for rural and non-rural US stations in GISS are shown in Figure 1.

This figure resembles other renderings of the same raw dataset. The most notable feature in this graph is not in the temperature but in the station count. Both to the left of 1900 and to the right of 2000 there is a steep drop in the number of available stations. While this seems quite understandable before 1900, the even steeper drop after 2000 seems peculiar.

If we simply lop off these two time periods, we obtain the trends shown in Figure 2.

The upward slope of the average temperature is reduced; this reduction is more pronounced with non-rural stations, and the remaining difference between rural and non-rural stations is negligible.

3 Continuously reporting stations

There are several examples of long-running temperature records that fail to show any substantial long-term warming signal; examples are the Central England Temperature record and the one from Hohenpeissenberg, Bavaria. It therefore seemed of interest to look for long-running US stations in the GISS dataset. Here, I selected stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.
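As a sketch of the selection criterion (the data layout, station ids, and the function name `continuously_reporting` are assumptions for illustration, not the author's code):

```python
def continuously_reporting(stations, start=1900, end=2000):
    """Keep stations that report at least one monthly value in every
    year of [start, end].

    stations maps station id -> {year: list of 12 monthly values},
    with None marking a missing month.
    """
    keep = {}
    for sid, years in stations.items():
        if all(y in years and any(m is not None for m in years[y])
               for y in range(start, end + 1)):
            keep[sid] = years
    return keep

# Toy data: "X" reports one month in every year, "Y" misses 1950 entirely.
data = {
    "X": {y: [5.0] + [None] * 11 for y in range(1900, 2001)},
    "Y": {y: [5.0] * 12 for y in range(1900, 2001) if y != 1950},
}
print(sorted(continuously_reporting(data)))  # ['X']
```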

The temperature trends of these stations are shown in Figure 3.

While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.

Figure 3 also shows the average monthly data point coverage, which is above 90% for all but the first few years. The less than 10% of all raw data points that are missing are unlikely to have a major impact on the calculated temperature trend.
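The coverage statistic could be computed along these lines; the data layout and the name `monthly_coverage` are again invented for illustration:

```python
def monthly_coverage(stations):
    """Fraction of station-months actually reported, per year.

    stations maps station id -> {year: list of 12 monthly values},
    with None marking a missing month. Returns {year: coverage}.
    """
    years = sorted({y for s in stations.values() for y in s})
    coverage = {}
    for y in years:
        present = sum(sum(1 for m in s.get(y, [None] * 12) if m is not None)
                      for s in stations.values())
        coverage[y] = present / (12 * len(stations))
    return coverage

# Two stations: "A" reports all 12 months of 1900, "B" only the first 6.
data = {"A": {1900: [1.0] * 12}, "B": {1900: [1.0] * 6 + [None] * 6}}
print(monthly_coverage(data))  # {1900: 0.75}
```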

4 Discussion

The number of US stations in the GISS dataset is high and reasonably stable during the 20th century. In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out, to the point that the GISS dataset no longer seems to offer a valid basis for comparison of the present to the past. If we confine the calculation of average temperatures to the 20th century, there remains an upward trend of approximately 0.35 degrees.

Interestingly, this trend is virtually the same with rural and non-rural stations.

The slight upward temperature trend observed in the average temperature of all stations disappears entirely if the input data is restricted to long-running stations only, that is, those stations that have reported monthly averages for at least one month in every year from 1900 to 2000. This discrepancy remains to be explained.

While the long-running stations represent a minority of all stations, they would seem most likely to have been looked after with consistent quality. The fact that their average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.

Disclaimer

I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetic. In case I have overlooked equivalent previous work, this is due to my ignorance of the field, is not deliberate, and will be amended upon request.

Brian H
October 24, 2011 12:13 am

A fine extraction of genuine information from an egregiously abused data set. The retro-chilling of old records and The Great Dying of The Thermometers (actually about 1990, they went from ~6000 to ~1600) are so brazen as to defy belief.

crosspatch
October 24, 2011 12:14 am

In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out

At some point they are going to run out of tricks to use to create a warming signal.

Torgeir Hansson
October 24, 2011 12:40 am

Unadjusted data???? Are you crazy? Don’t you go and challenge Dr. Jones et al now. They run this game, sonny, and you’d better get used to it.

Glenn Tamblyn
October 24, 2011 12:42 am

Michael. Interesting post. However I have to disagree with the method you have used to handle gaps in the record. By using the average of other stations to substitute for a missing station when averaging temperatures, this assumes that the missing station is essentially at a similar temperature to the others. If its temperature is significantly different, this will introduce a bias: if it is colder, a warming bias; if it is warmer, a cooling bias.
This isn’t the method used by the mainstream temperature records. They base their calculations on comparing each reading from a station against its own long-term average over some base period. Then they take the difference between the individual reading and this long-term average to calculate the temperature anomaly for that reading on that day. This produces quite a different behaviour when looking at missing readings.
Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser.
[snip sorry SkS doesn’t treat people with any sense of fairness – for example Dr. Pielke Sr. If you want to reference any of your work, you are welcome to print it out in detail here, but until SkS changes how they treat people, sorry, not going to allow you to use it as a reference. Be as upset as you wish, but the better thing to do is work for change there - Anthony ]

Goldie
October 24, 2011 12:48 am

To the untrained eye this looks to have two warming/cooling cycles in it. The first is in the early to mid 20th century, which corresponds with the well-attested long hot summer that made the Battle of Britain possible, and the second in the late part of the 20th century, which corresponds with the well-understood warming cycle that began in approximately 1974/76, following the 1974 La Nina, with a step change in global rainfall.
Just a word of warning though: last week this website was going off the deep end at BEST for using a non-standard period for assessing climate change. Whilst this assessment is useful, it is important that the limitations of the data for fully informing the current debate are fully understood.

Ian of Fremantle
October 24, 2011 12:49 am

Have you yourself asked, or do you know of anyone who has asked, GISS why particular stations have been discontinued? In Australia there also seems to have been selective removal of some stations. Of course it would be uncharitable to suggest the removals are to tie in with the proposition of global warming, but it would be good to get an official answer. It would also do a lot of good if posts like this were publicised in the MSM.

Steeptown
October 24, 2011 12:53 am

Is there an official explanation for why, in the modern era with all the funding available, the number of stations has dropped precipitously?

tokyoboy
October 24, 2011 12:55 am

Many folks here know that for a long time, n’est-ce-pas?

October 24, 2011 1:05 am

Palmer
“At some point they are going to run out of tricks to create a warming signal”
I appreciate very much that you just put it as it is. We sceptics sometimes want to sound extremely nuanced etc. simply to be taken seriously, and thus, when someone just says the truth we all know, just like that, it’s a great relief.
Here RUTI (Rural Unadjusted Temperature Index) versus BEST global land trend:
http://hidethedecline.eu/media/ARUTI/GlobalTrends/Est1Global/fig1c.jpg
The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts):
BEST adds 0.55 K to the warming trend 1950-78 compared to RUTI.
RUTI global taken from:
http://hidethedecline.eu/pages/posts/ruti-global-land-temperatures-1880-2010-part-1-244.php
RUTI will grow stronger and stronger, and even though all beginnings are tough, I hope everyone will help collect even more original temperature data for me to make RUTI even better.
Tricks: Coastal stations.
One (important) trick from Hadcrut is to use rural coastal stations so that they do have rural data aboard.
Problem is that coastal stations worldwide have around 0.6 K more warming trend 1925-2010 than nearby non-coastal stations, see
Joanne Nova/RUTI :
http://hidethedecline.eu/pages/posts/ruti-coastal-temperature-stations-242.php
K.R. Frank

Brian H
October 24, 2011 1:05 am

I actually find this chilling. I’d counted on an underlying base trend of about 0.6 K/century to give a bit of a leg up to resist the coming downturn. Not to be, apparently!
A possible positive outcome could be that the Cooling freaks out the Alarmists, and they flip over to pushing CO2 emissions to combat it. That will do nothing for temperature, but will unclog the energy generation pipelines and be great for agriculture and silviculture. Maybe even viticulture!

TBear (Sydney, where it is finally warming Up, after coldest October in nearly 50 yrs)
October 24, 2011 1:07 am

Interesting.
But does it stack up?
Interested to know …

October 24, 2011 1:08 am

The graph I find most interesting is #2. I have seen elsewhere in many places that the global temps are just not rising (at least not significantly, if at all) in this century. How is it, then, that the GISS records for the US are rising very significantly in the 21st century? It appears to be about as much warming in the last decade as in the whole 20th century! (Allowing for some smoothing)

Tom Harley
October 24, 2011 1:09 am

Tom Harley reblogged this on pindanpost and commented: The same result for NW Australia…virtually flat for over 100 years in Broome

October 24, 2011 1:14 am

Glenn Tamblyn says:
October 24, 2011 at 12:42 am
Michael. Interesting post. However I have to disagree with the method you have used to handle gaps in the record. By using the average of other stations to substitute for a missing station when averaging temperatures, this assumes that the missing station is essentially at a similar temperature to the others. If its temperature is significantly different, this will introduce a bias: if it is colder, a warming bias; if it is warmer, a cooling bias.

I have to disagree. He is using the temperature delta (Δtemperature) to average with other deltas. That makes much more sense than what you have assumed.

The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.

That is not to say the technique is not problematic, but it should likely be much more accurate than any method I have seen described in this matter. The nearer the site, the better the correlation, I suspect, regardless of the offset in average temperature.

October 24, 2011 1:16 am

So, where we have continuous, reliable, non-manipulated data, there is no warming at all. QED
Strange that many “skeptics” seem to have reconciled themselves with the notion that “there was some global warming in the 20th century.”
A repetitive, all-pervasive lie, if not constantly resisted, in time comes to appear as containing at least some truth. I remember that in the 1980s one could hardly find a person in Russia, however opposed to the regime, who did not believe in some part of the Soviet propaganda. It had been nailed into people’s brains for 70 years, from the womb to the tomb, and only a few were stubborn enough to see through it.

son of mulder
October 24, 2011 1:39 am

Interesting. You use stations continuously operating in the 20th century. What does the graph look like if you include only stations that operated continuously through the 20th century up to the present day? That would indicate any bias in the recent removal of stations.

Glenn Tamblyn
October 24, 2011 1:40 am

Michael
I had some stuff you might have been interested in reading, but Anthony snipped the link. Interestingly, it isn’t the Mods here at WUWT snipping this; it is Anthony himself.
Look at posts at SkS during May this year or my posts, under my authors name. Unless Anthony snips this as well.
Also, Anthony, if you want to talk about how people are treated, seriously read all the exchanges between SkS and Dr P Snr. Please note the civil tone of it all.
Unless you want to snip this too.
Note that I have copied this post so we can show what you have snipped Anthony
{see these: http://wattsupwiththat.com/2011/09/25/a-modest-proposal-to-skeptical-science/ and http://wattsupwiththat.com/2011/10/11/on-skepticalscience-%e2%80%93-rewriting-history/ and explain why that sort of behavior is OK for SkS. How do you justify changing/deleting user comments months and years later? ~mod}

October 24, 2011 2:05 am

In this case, we can estimate the average temperatures as follows:

Yikes! You introduce “fiction” into fact. Well, the temperature averages are already an artificial construct, one that doesn’t actually represent the time-averaged thermal state of the system being measured. Even the time average has knobs on it for subsequent use. Didn’t Doug Keenan explain that adequately a couple of days ago?
What is sorely needed is analysis that can cope with “holes” in the data. Gaps. Analysis that doesn’t require invention of data to bridge the gaps. Which is going to be somewhat harder than for homogenised data, but at least one isn’t analysing guesses instead of raw data.
Moreover, what is needed is an understanding of the physical system. A dry-air temperature isn’t sufficient to describe the thermodynamic state: the enthalpy of the short-term climate system.

John Marshall
October 24, 2011 2:06 am

Unfortunately the adding of all temperatures and dividing by the total number does not actually produce a correct answer.
Many inputs can affect these temperatures between the times of reading which will skew the average without any knowledge that this has happened. A continuous recording, like a barograph for pressure, would be much more accurate. Whether this is done I have no idea.
See:- Does a Global Temperature Exist? Essex, McKitrick and Andresen 2006.

October 24, 2011 2:12 am

The most notable feature in this graph is not in the temperature but in the station count.

Very true, very alarming, very indicative of manipulation.
The station count, reporting years and monthly data point coverage can be used to generate a monthly GISS Credibility Index for their overall dataset… unfortunately this credibility index started to fall off a cliff in the 1970s and is currently very close to zero.
The fact that their average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.

Totally correct.
The subset of raw data with a very high GISS Credibility Index actually shows a very slight cooling trend in the US during the 20th century.

KPO
October 24, 2011 2:17 am

Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone. I have this sense that there are parameters missing such as humidity, cloud cover and perhaps wind speed/direction that should also be recorded and used collectively when compiling a record. My feeling is that a more complete “measure” would be accomplished, EG an average temp of 15c/70% humidity/45% cloud/20km/hr/NW – in short, a sort of micro-climate record, expanded to regional and then global if that’s possible. This thing with averages of averages of averages of data points (numbers) bothers me. Perhaps my thinking is off so please straighten me out. Thanks

Pete in Cumbria UK
October 24, 2011 2:33 am

What KPO says above at 2:17 +1
How is it that changing temperature (alone) is being used as a proxy for ‘changing climate’?
It’s like saying that because jeans are usually blue, all items of blue clothing are made of denim and worn around your ass. (or something like that, yanno wot i meen)

Bill
October 24, 2011 2:35 am

Have the stations gone away, or are the stations still there, but just no longer counted?

October 24, 2011 2:42 am

It has often struck me as I extend my Carbon Footprint around the globe, that a very interesting, very consistent, and very much available temperature record may well be available. It is the temperature as recorded by planes as they travel.
The temperature, height and time are all constantly recorded. I can see that all we need to add may be the humidity. I guess it may be very low at most cruising heights however.
This may not give us everything, but it would at least give us something we could investigate. The cost of gathering this data should be trivial.
I personally volunteer to help, as long as my expenses are met. Obviously it would be much better to observe from the front of the plane, so only first class tickets will be accepted 😉

October 24, 2011 2:47 am

KPO says:
October 24, 2011 at 2:17 am
Please forgive my ignorance if I am being way off here, ….

I had the same thought. The use of Google for about 20 minutes fixed the ignorance. It is something to do with “wet build temperature” or similar. Basically the relative humidity is taken into account. (I am sure others with infinitely more knowledge can explain or correct me).
If you mean that the ‘temperature’ itself is not a good reading, because what we need to measure is ‘energy’, you have my vote. This view has been expounded on this site often, but I apologise for forgetting by whom. Records of weather such as cloud would also be extremely useful; IIRC Willis has posted a few essays on the matter.

October 24, 2011 2:48 am

^^^ Dang! ^^^
“wet build temperature” = “wet bulb temperature”

DirkH
October 24, 2011 2:50 am

Glenn Tamblyn says:
October 24, 2011 at 12:42 am
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser. ”
But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around, like in GISS. Is this what you consider a better approach? It also makes manipulations much simpler. The PNS approach. Drop the thermometers that don’t confess.

October 24, 2011 2:52 am

A question to anyone who is familiar with the techniques from someone who has not had access to the published papers in this area; How is the global mean temperature “normally” calculated from daily maximum and minimum records?
Can anyone point to a primer in this area please?

kim;)
October 24, 2011 3:03 am

KPO says:
October 24, 2011 at 2:17 am
“This thing with averages of averages of averages of data points (numbers) bothers me. ”
xxxxxxxxxxxxxx
Well said!

KV
October 24, 2011 3:07 am

Re “The Great Dying of Thermometers” which occurred worldwide predominantly in 1992-93. Of course, most of them didn’t actually die – Hansen and associates simply stopped using the data for reasons nobody ever seems to have explained. The following is my own interpretation of available facts.
1988 was reportedly the hottest summer in the US for 52 years and no doubt gave James Hansen confidence for his alarmist appearance before a US Committee that year (reportedly with windows purposely left wide open and air conditioning off on one of the hottest days of the year)! He did not have the backing of NASA as his former supervisor, senior NASA atmospheric scientist Dr. John S. Theon made clear after he retired. He said Hansen had “embarrassed NASA” with his alarming climate claims and violated NASA’s official agency position on climate forecasting (i.e., ‘we did not know enough to forecast climate change or mankind’s effect on it’).
It is evident Hansen had put his reputation and credibility on the line, and when, starting with significant falls in 1989 and 1990, the four years to 1992 showed sharp cooling in many parts of the world, a cooling then further exacerbated by the June 1991 eruption of Mt. Pinatubo, Hansen must have been put under extreme pressure, not only by his critics within and outside NASA, but also by other scientists supporting and pushing the AGW hypothesis.
In the absence of any official plausible explanation I feel this to be a credible background and motive for the dropping of stations and the now well-established abuse of raw data. It is worthy of note that others have found drops in station numbers resulted in a significant step rise in most graphed temperatures.

October 24, 2011 3:13 am

I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it? Maybe Anthony Watts can shed some light on this?
Also, Anthony and others were quite annoyed that the recent BEST results were publicized before peer review, and here we have a biochemist being given free rein to post what amounts to a back-of-the-envelope calculation of average ground temps. Michael Palmer or Anthony Watts could have, at the very least, got a statistician or near equivalent to verify the averaging analysis, which I suspect is particularly well suited to giving an average trend of zero.
Cheers.

TBear (Sydney, where it is finally warming Up, after coldest October in nearly 50 yrs)
October 24, 2011 3:16 am

KPO says:
October 24, 2011 at 2:17 am
Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone. I have this sense that there are parameters missing such as humidity, cloud cover and perhaps wind speed/direction that should also be recorded and used collectively when compiling a record. My feeling is that a more complete “measure” would be accomplished, EG an average temp of 15c/70% humidity/45% cloud/20km/hr/NW – in short, a sort of micro-climate record, expanded to regional and then global if that’s possible. This thing with averages of averages of averages of data points (numbers) bothers me. Perhaps my thinking is off so please straighten me out. Thanks
_______________
So let’s say KPO is correct.
Does this sort of point indicate that we really have no safe handle on total energy in the climate system?
If that is correct, or even plausible, why are we spending trillions of dollars on AGW?
I am frustrated that, if these types of points have any validity, the general scientific community does not get its act together and call this CAGW argument the misguided tripe that it may well be. Is it because scientists feel too constrained by specialism? Sad, very sad, if that is the case.
Waiting for the push-back revolution to begin …

PlainJane
October 24, 2011 3:18 am

@ Glen Tamblyn
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser.”
This article never claims to be working out the average temperature of America or any region in particular – it is only looking at long term trends in individual stations, so this point is not valid.
It is also petty and peevish to complain about being “snipped” here when you have not had your comment disappeared down a black hole so that no other people may know you posted something. Most readers of this blog, if interested, would be savvy enough to find your work anyway from the comment Anthony left stating where your work was. You have been invited to place your specific work here so we can read it easily in context. Could you please do that?

Steve C
October 24, 2011 3:21 am

There’s another graph I’d like to see, prompted by an oft-quoted statistic in these and other pages, and again by those RUTI vs BEST graphs from hidethedecline.eu, above. The oft-quoted statistic is that, of the land area of the earth, urban areas comprise around 1%, rural 99%. Has anyone drawn a graph in which 99% weighting was given to the best of the rural stations, and 1% to the urban?
Of course, this still ignores the two-thirds of the planet which is sea, but I’d certainly call such a graph a more accurate picture of overall land temperature than the usual heavily urban- and airport-influenced offerings. Has anyone here come across such a graph, or can someone produce one a bit more quickly than my own rusty data processing would allow?

October 24, 2011 3:31 am

Glenn Tamblyn,
Please do not refer to the Skeptical Pseudo-Science propaganda blog. John Cook has no ethics, no honesty, and his mendacious blog has been thoroughly deconstructed in articles and comments here. Do a search of the archives, and get educated.
Of course, if you suffer from cognitive dissonance like most True Believers, SPS is the echo chamber for you. Just be aware that you’re being spoon-fed alarmist propaganda at that heavily censoring blog. Some folks actually like being led by the nose, so maybe that’s A-OK with you. To each his own.

Harry Snape
October 24, 2011 3:35 am

Replacing a missing temp from one station with the average of the other stations seems like madness to me. The missing reading could be for a station in Antarctica, or at the Equator; it would bear no relationship to the “average” of the other stations.
Equally, infilling with the average for that time of the year from that station, or something similar, is also likely to introduce an error. As someone said above, Sydney has an anomalously low Oct, so infilling an average Oct temp would have overestimated the temp. But at least this method has the temp in the ball park, i.e. it will be an average Syd temp for Oct, not an average world temp for Syd (which would be seriously low).
What needs to happen when stations are missing is that the average for that location/time is used, but the error bar on the calculated temp is increased by the maximum variance at that location/time (possibly more for short data records). So the loss of any substantial portion of the record will be observable in the width of the error bars, and one can deduce how confident we should be of the final number.

oMan
October 24, 2011 3:53 am

KPO: you’ve caught the flavor of my frustration with the (understandable, inevitable) enterprise of reducing the complexity of a system such as local weather or (its big sister, integrated over time and space) climate to a single parameter called “temperature”, and even that not measuring heat content but just a dry bulb thermometer. We lose so much information in this process. It would be nice if the final statistic came with a label reminding us of the magnitude of what has been left on the cutting-room floor and how subjective the cutting has been. Just use a five-star or color code. I know, I know, error bars are a good nudge in that direction; and this post by Michael Palmer contains, in Figure 1, another excellent pointer for quality, namely the number of stations. It tells the reader that the data before 1900 and after about 1990 is “thin” and “different”. I use those words to suggest the orthogonality of the information-space we are trying to explore, if not capture, in the temperature time series for the entire US “represented” by GISS numbers.
That dying of the thermometers is the real story for me, and Michael Palmer adds a valuable chapter to the story. Many thanks.

Stephen Wilde
October 24, 2011 3:54 am

So, how to reconcile the widespread sceptic acceptance that there has been SOME warming since the LIA with the now clear possibility that the surface temperature record via thermometers has been primarily recording UHI effects and/or suffers from unjustified ‘adjustments’ towards the warm side ?
Firstly we can simply say that the background warming since the LIA is less than that apparently recorded by the thermometers.
Although there has been some warming, the effect of UHI and inaccurate ‘adjustments’ has exaggerated it during the late 20th century and may now be hiding a slight decline.
Secondly we can see from the chart above that although there may have been little or no net change in temperature during the 20th century at the most reliable long term sites there have been changes up and down commensurate with many other observations i.e. warming followed by cooling then warming again and now possibly cooling.
What such a pattern suggests is that the Earth’s watery thermostat is highly effective but takes a while ( a few decades) to adjust to any new forcing factor.
Thus the system energy content remains much the same (including the main body of the oceans), but in the process of adjusting the energy flow through the system in order to retain that basic system energy content, the surface pressure distribution changes so as to alter the size and position of the climate zones, especially in the mid latitudes where most temperature sensors are situated.
From our perspective there seems to be a warming or cooling at the recording sites when averaged out overall, but in reality all that is being recorded is the rate of energy flow past the recording sites as the speed of energy flow through the system changes in an inevitable negative response to a forcing agent, whether it be sun, sea, or GHGs. In effect, the positions of the surface sensors vary in relation to the position of the climate zones, and they record that variance and NOT any change in energy content for the system as a whole.
A warmer surface temperature recording at a given site (excluding UHI and adjustments) just reflects the fact that more warm air from a more equatorward direction flows across it more often. That does not imply a warmer system if the system is expelling energy to space commensurately faster.
When a forcing agent tries to warm the system the flow of energy through the system and out to space increases so that more warm air crosses our sensors.
When a forcing agent tries to cool the system the flow of energy through the system and out to space decreases so that cooler air crosses our sensors.
Hence the disparity between satellite and surface records. The satellites are independent of the rate of energy flow across the surface and attempt to ascertain the energy content of the entire system. That energy content varies little if at all because the net effect of changes in the rate of energy flow through the system is to stabilise the total system energy content despite internal system variations or external (solar) variations seeking to disturb it.
Higher temperature readings at surface stations therefore do not necessarily reflect a higher system energy content at all, merely a local or regional surface warming as more energy flows through on its way to space.

October 24, 2011 3:56 am

Hi, Michael. One needed adjustment has to do with time-of-observation bias. There are literature references on this available via the internet. You may want to check Steve McIntyre’s Climate Audit for discussions of this.
I believe that the raw data needs this adjustment.

Richard
October 24, 2011 4:19 am

A chilling analysis of James Hansen’s machinations.

Richard
October 24, 2011 4:25 am

Garrett Curley (@ga2re2t) says:
“I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?”
Maybe Arithmetic and free discussion?
This is not a religious site perhaps?
Not a discussion of beliefs but rather on the basis of the beliefs?
Maybe Hansen introduces a consistent warming bias with his “adjustments”?

Dave Springer
October 24, 2011 4:30 am

Frank Lansner says:
October 24, 2011 at 1:05 am
Here RUTI (Rural Unadjusted Temperature Index) versus BEST global land trend:
http://hidethedecline.eu/media/ARUTI/GlobalTrends/Est1Global/fig1c.jpg
The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts) :
BEST adds 0,55 K to the warm trend 1950-78 compared to RUTI.
_________________________________________________________________
Hi Frank. I had a look at the graphs and recognized the source of the difference. That’s the infamous Time of Observation adjustment. It’s the biggest upward adjustment they make. Without it there is no significant 20th century warming trend. This is why the warming trend focus has now shifted to the period 1950-2010. You don’t hear the AGW boffins discussing dates earlier than that anymore. The author of the OP here evidently didn’t get the memo which was issued about the same time as the order to stop calling it “climate change” and begin calling it “global climate disruption”. It’s all about framing, you see. They frame the times and they frame the terms. It’s a despicable, dishonest, unscientific agenda they pursue.
Steve Goddard has a good explanation of the TOB here:
http://stevengoddard.wordpress.com/2011/08/01/time-of-observation-bias/

Time Of Observation Bias
Posted on August 1, 2011 by stevengoddard
The largest component of the USHCN upwards adjustments is called the time of observation bias. The concept itself is simple enough, but it assumes that the station observer is stupid and lazy.
Suppose that you have a min/max thermometer and you take the reading once per day at 3 pm. On day one it reads 50F for the high – which for argument’s sake occurred right at 3 pm. That night a cold front comes through and drops temperatures by 40 degrees. The next afternoon, the maximum temperature is also going to be recorded as 50F – because the max marker hasn’t moved since yesterday. This is a serious and blatantly obvious problem, if the observer is too stupid to reset the min/max markers before he goes to bed. The opposite problem occurs if you take your readings early in the morning.
I had a min/max thermometer when I was six years old. It took me about three days to realize that you had to reset the markers every night before you went to bed. Otherwise half of the temperature readings are garbage.
USHCN makes it worse by claiming that people used to take the readings in the afternoon, but now take them in the morning. That is how they justify adjusting the past cooler and the present warmer.
They should use the raw data. These adjustments are largely BS.
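The mechanism Goddard describes is easy to simulate. A minimal sketch, with an invented sinusoidal diurnal cycle and random day-to-day weather (none of this is real station data; the 3 pm and 7 am reset hours are just illustrative choices):

```python
import random
import math

random.seed(42)

def hourly_temps(days):
    """Simulate hourly temperatures: a sinusoidal daily cycle peaking
    mid-afternoon, plus a random day-to-day offset (fronts, warm spells)."""
    temps = []
    for d in range(days):
        base = random.gauss(50, 10)  # day-to-day variation, deg F
        for h in range(24):
            # diurnal cycle: minimum ~3 am, maximum ~3 pm
            temps.append(base + 15 * math.sin((h - 9) * math.pi / 12))
    return temps

def observed_max(temps, reset_hour):
    """Daily max as recorded by an observer who reads and resets the max
    marker once per day at reset_hour. The marker carries over whatever
    extreme occurred since the previous reset."""
    maxima = []
    prev = reset_hour
    for t_reset in range(reset_hour + 24, len(temps), 24):
        maxima.append(max(temps[prev:t_reset]))
        prev = t_reset
    return maxima

temps = hourly_temps(1000)
pm = observed_max(temps, 15)  # afternoon observer
am = observed_max(temps, 7)   # morning observer
bias = sum(pm) / len(pm) - sum(am) / len(am)
print(f"afternoon-minus-morning mean daily max: {bias:+.2f} F")
```

An afternoon reading window straddles two afternoon peaks, so a warm day can be counted twice in the maxima; a morning window straddles two nighttime minima and biases the minima cold instead. Shifting observation time from afternoon to morning therefore changes the recorded averages even when the weather does not, which is the effect the TOB adjustment claims to correct.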

October 24, 2011 4:36 am

Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone.

Indeed. The mainstream datasets are based upon a daily temperature reading.
The problem here is what does this temperature reading actually represent?
For the moment forget about all the problems associated with the accuracy of the thermometer… forget the Urban Heat Island effect and all the known location issues associated with thermometers… and let’s take a look at what daily data is actually being recorded.
If we used a basic data logging thermometer to record the temperature at the end of every minute during a single calendar day we would accumulate 1,440 temperature data points… the maths is simple: 60 minutes x 24 hours = 1,440.
From these 1,440 data points we could then very easily calculate an Average Daily Temperature for that day.
Unfortunately, this is not how the mainstream datasets derive their daily temperature reading.
This is what they actually do.
First:
They capture the data point with the highest temperature and call it their Daily High Temperature.
Although this is fairly reasonable it is important to remember that this is the high outlier value from the 1,440 data points for that day.
Second:
They capture the data point with the lowest temperature and call it their Daily Low Temperature.
Although this is fairly reasonable it is important to remember that this is the low outlier value from the 1,440 data points for that day.
Third:
They add the Daily High Temperature outlier value to the Daily Low Temperature outlier value and then divide this intermediate number by two. So what would a rational person call this number? By definition (i.e. the maths) it is the mid-point between the daily extreme outlier temperatures. However, climatologists by some bizarre logic call this number the Daily Average Temperature. It is only in climatology that the average of 1,440 data points is calculated by just using the two extreme outlier values for the day… no wonder climate science is regarded as a weird science.
To underline just how weird this weird science really is, let’s ask ourselves:
Question: What data would a rational person use to demonstrate rising temperatures?
Rational Answer: The series of Daily High Temperatures.
Climatology Answer: The series of mid-points between the daily extreme outlier temperatures.
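The gap between the two statistics is easy to demonstrate. A minimal sketch with an invented, asymmetric diurnal curve (fast morning warm-up, slow cool-down; real station data would differ, but the point stands: the (Tmax + Tmin)/2 midpoint only equals the true 1,440-point mean when the daily cycle is symmetric):

```python
import math

# One day's temperature at each of the 1,440 minutes: an asymmetric
# diurnal cycle, made up for illustration.
temps = [10 + 8 * math.sin(math.pi * (m / 1440.0) ** 0.7) for m in range(1440)]

true_mean = sum(temps) / len(temps)         # average of all 1,440 points
midpoint = (max(temps) + min(temps)) / 2.0  # the (Tmax + Tmin)/2 convention

print(f"true daily mean:  {true_mean:.2f} C")
print(f"max/min midpoint: {midpoint:.2f} C")
print(f"difference:       {true_mean - midpoint:+.2f} C")
```

For this curve the midpoint sits roughly a degree below the true mean; any change in the *shape* of the daily cycle shifts the midpoint even if the true mean stays put.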

Dave Springer
October 24, 2011 4:41 am

A picture is worth a thousand words (maybe more in this case):
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_pg.gif

Dave Springer
October 24, 2011 4:44 am

http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

Rhys Jaggar
October 24, 2011 4:45 am

I think the debate is beginning to move toward the position where we can see that the result obtained in terms of temperature trends depends on the source data. SURPRISE, SURPRISE!
It may in fact be the case that about 50 independent analyses should be done to show what happens depending on which data you use. This one is just for the USA, which is a large continental land mass bounded by the world’s two largest oceans to the East and West, a warm sea to the South and a major land mass to the north, with a smaller land mass in the SW.
You might find different results if you studied Russia: a large continental land mass surrounded by ice/ocean to the north and a continental land mass to the south.
You might find different results if you studied Europe: a mid-sized continental land mass with a major ocean to the west, a small sea to the south, smaller enclosed seas to the SE and NE.
You might find different results for Australia: a mid sized land mass totally surrounded on all sides by a major ocean.
I am certain, based on the fact that the Thames simply doesn’t freeze over like it used to in the 19th century, that London must be much warmer in winter than it used to be 200 years ago. So I’d frankly be amazed if we couldn’t agree that we have had warming in the past 200 years, although the 20th century may be less clear cut.
Where the debate has been the past 20 years is a small group unilaterally determining the source data sets and not having that most important decision subjected to the most rigorous analysis by all. That can’t change quickly enough in my book.
It would also appear that debates about how you search for deviations can get you different results, particularly if the datasets have gaps in the record. It might be helpful to commit to building a century of reliable, consistent, internationally agreed datasets so that this working with limited data becomes less important in time. One hopes that this can include wireless-based transmission of data, particularly in rural areas with extreme cold in winter. Whether that is technically feasible is something specialist climatologists should no doubt enlighten the public about.
One is minded to suggest that the IPCC bears all the hallmarks in climatology that FIFA does in world football: a deeply flawed organisation, but not completely evil. Which is about as strong a signal for fundamental reform as can be given using measured language…

Bigdinny
October 24, 2011 4:48 am

I have been lurking here and several other sites for over a year now, trying to make sense of all this from both sides of the fence. In answer to this simple question, Is the earth’s temperature rising? depending upon whom you ask you get the differing responses YES!, NO!, DEPENDS! IT WAS BUT NO LONGER IS! I think I have learned a great deal. Of one thing I am now fairly certain. Nobody knows.

DocMartyn
October 24, 2011 4:48 am

Have a look at the histograms of Fig.3, BEST UHI paper. It appears that when they compare 10 year, 10-20, 10-30 and 30+ data series the distribution of temperature RATE changes from a normal to a Poisson distribution.
My guess is that you will find the same thing. Moreover, if you look at the individual rates in the same manner, you will be able to identify when the transition occurred.

Jim Butler
October 24, 2011 4:51 am

First hand example of how data sets get messed up…
This morning, as part of my first cuppa routine, I checked the weather online using Intellicast’s site: 47 deg. Let my dogs out… hmmm… seems much cooler than 47 deg. Checked Accu: 47 deg. Then checked Wunderground: 47 deg. Used Wunderground’s “station select” feature and saw that it had defaulted to APRSWXNET – Milford, MA. All of the surrounding stations were reporting 34-37 deg, and they appeared to be amateur stations, whereas I’m guessing that APRSWXNET is an “official” station of some sort.
Whatever it is, multiple services are using it, and it’s wrong by about 12deg.
JimB

October 24, 2011 4:56 am

Richard says:
“Maybe Arithmetic and free discussion?”
I would be of the opinion that arithmetic and free discussion of that type should be reserved to forum threads. This site is considered, from what I gather, as a reference on climate skepticism, so back-of-the-envelope calculations seem out of place to me.
“This is not a religious site perhaps?”
Well, I would argue that using any Tom, Dick and Harry analysis just to place doubt on GW (and therefore AGW) is being somewhat religious. But putting that aside, not being religious/dogmatic about something does not imply that every discussion is fair-game. Being open-minded about something does not require you to let your intellectual defenses down.

Dave Springer
October 24, 2011 4:59 am

Garrett Curley (@ga2re2t) says:
“I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?”
It depends on what time frame you’re talking about. I’ve no doubt the average temperature of the globe has been rising (with substantial positive benefits!!!) beginning in 1970. This is confirmed by satellite data beginning in 1979. From 1940-1970 the globe was cooling. I’m old enough to recall climate scientists becoming alarmed by possible catastrophic anthropogenic global cooling in the early 1970’s.
The question is whether the last 30-40 years is unusual or not. Roald Amundsen was able to navigate the Northwest Passage around 1906, so Arctic sea ice extent today doesn’t seem out of line with where it was in the past. Retreating glaciers around the world are constantly revealing human artifacts on the newly exposed ground, giving concrete proof that the glaciers had retreated at least as far in the past. Greenland today still has a colder climate than when the Vikings were farming it and named it Greenland.
So if there appears to be some dissonance about global warming or not this is why. There surely has been some warming in recent decades but it doesn’t appear to be anything really out of the ordinary compared to other warming episodes in recorded history.

Gary Mount
October 24, 2011 5:07 am

Jim Butler says:
October 24, 2011 at 4:51 am
Whatever it is, multiple services are using it, and it’s wrong by about 12deg.

Sometime last winter I was checking the temp for Ottawa, or thereabouts and the weather app was showing a rather warm anomaly. I checked a different source and discovered that the negative sign had been left out.

October 24, 2011 5:09 am

Garrett Curley says:
“…using any Tom, Dick and Harry analysis just to place doubt on GW (and therefore AGW)…”
You are conflating the widely accepted fact of natural global warming since the LIA with the AGW conjecture.

Mergold
October 24, 2011 5:20 am

Excellent analysis. As a Republican voter for 40 years, albeit one now residing in Australia, can I just say I have never seen any evidence of warming. Ain’t no difference between a summer’s day in 1970 and a summer’s day now. I’m not sure why all this business of saying the true skeptics believe the world is warming has come up. There is maybe some regional warming somewhere, but no place I’ve been. I think this site is better when it avoids that bunkum. It’s dangerous.

October 24, 2011 5:31 am

C
You write: “Has anyone drawn a graph in which 99% weighting was given to the best of the rural stations, and 1% to the urban?”
RUTI is “Rural Unadjusted Temperature Index”, and thus its goal is exactly what you are looking for.
In several areas it is hardly possible to get real rural data, but every possibility is tried to get the best data possible.
In many areas there are no long rural stations running uninterrupted 1900-2010, but very often it was possible to look at a larger area where many mostly rural or small-urban sites combined made a VERY solid, mostly rural trend for the whole area. This is important because many, even sceptics, believe that a mostly rural temperature index is impossible just because not many long rural stations are publicly available.
Check it out:
http://hidethedecline.eu/pages/ruti.php
Thanks for comment.
K.R. Frank

Roger Knights
October 24, 2011 5:34 am

Harry Snape says:
October 24, 2011 at 3:35 am
Replacing a missing temp from one station with the average of the other stations seems like madness to me. The missing reading could be for a station in Antarctica, or the Equator, they would bear no relationship to the “average” of other stations.

1. “Abstract
The GISS dataset includes more than 600 stations within the U.S. …”
So no worries about Antarctica or the Equator.
2. It isn’t the temperature that’s replaced, but the delta (the little triangle is the delta) of the temperature; i.e., the anomaly. (If I’m reading the formula correctly.)
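As a toy sketch of that delta bookkeeping (invented station names and numbers; assuming the scheme described in the head post, where the year-to-year changes are averaged over only the stations that report in both years, then accumulated):

```python
# Toy first-difference scheme: for each pair of consecutive years,
# average the year-to-year change over only those stations that report
# in BOTH years, then accumulate the changes into a series.
stations = {
    "A": [10.0, 10.4, 10.1, 10.5],
    "B": [12.0, None, 12.3, 12.6],  # one missing year: no assumptions made
    "C": [ 8.0,  8.5,  8.2,  8.4],
}
years = 4

series = [0.0]  # cumulative mean change; year 0 is the reference
for y in range(1, years):
    deltas = [rec[y] - rec[y - 1]
              for rec in stations.values()
              if rec[y] is not None and rec[y - 1] is not None]
    series.append(series[-1] + sum(deltas) / len(deltas))

print(series)
```

Station B simply drops out of the one year-pair touching its gap; no value is invented for it, and its record rejoins the average as soon as it reports again.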

October 24, 2011 5:40 am

Springer
Frank Lansner: “The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts) :
BEST adds 0,55 K to the warm trend 1950-78 compared to RUTI.”
Dave Springer:”Hi Frank. I had a look at the graphs and recognized the source of the difference. That’s the infamous Time of Observation adjustment. It’s the biggest upward adjustment they make.”
Thanks for the comment. Yes, the time of observation… it’s amazing.
So across the world, from country to country, culture to culture, continent to continent, with thermometers NOT meant for climate purposes but just to tell people about their local temperatures, we have this synchronous TOBS.
Everywhere, the time of observation has systematically been changed in one direction, causing too-cold temperature data that “must” be corrected massively.
If this is mostly TOBS (or similar), then I find it interesting that global warming is not really measured, but is created at the desk.
K.R. Frank

October 24, 2011 5:48 am

Palmer
Thankyou again for important work!
I think you would find it interesting to see the MASSACRE done to rural stations in Turkey…
“Imagine that GHCN took all USA rural stations and cut them down to 1960-90, then took smaller cities limited to 1950-90 or 1960-2010, and then only the largest cities had long datasets, 1930-2010 or longer. Sounds impossible? Well, this is what is done for Turkey. Bon appétit.”
http://hidethedecline.eu/pages/ruti/asia/turkey.php
@Dear Anthony… I think you should consider publishing this one, the slaughter of rural Turkish data by GHCN?
– Why a slaughter of rural data if UHI is not important?
(We see similar elsewhere, if interested)
K.R. Frank

October 24, 2011 5:51 am

In answer to this simple question, Is the earth’s temperature rising? depending upon whom you ask you get the differing responses YES!, NO!, DEPENDS! IT WAS BUT NO LONGER IS! I think I have learned a great deal. Of one thing I am now fairly certain. Nobody knows.

Correct. NOBODY KNOWS.
Lots of people believe different things… and say different things… but in the end: nobody knows.
It is possible that people were beginning to get a handle on the situation back in the 1970s… at that time they said: the world is cooling… so they threw more money at climate research… unfortunately this money was spent on manipulating data and inventing the Global Warming myth… we are no further forward… and none the wiser regarding that specific question… we are actually a whole lot dumber overall regarding climate.
However, we have discovered that the concept of a Global Average Temperature is intangible (and largely irrelevant)… the temperature profile of the Earth is fractal in nature and cannot actually be measured in a meaningful way… additionally, Climate and Weather are regional in nature… for example: the occurrence of an El Niño (or La Niña) impacts different regional areas in different ways… therefore one size does not fit all geographic regions in the same way… warming can be beneficial in one place while it is detrimental in another.
Additionally, it is easy to support the following statements:
1) Global Warming is generally a good thing… more people die when it is cold… living organisms flourish when it is warmer… Additionally, convection dominates the daily energy flows when there is water in the environment. Therefore, the extent of any Global Warming is limited by convection and can only ever become a problem for arid and desert locations that remain deprived of water.
2) CO2 is vital for plants and increasing levels of CO2 result in increasing crop yields and overall promote the greening of the earth… again water dominates any greenhouse gas effect that may exist in the atmosphere… and CO2 is basically irrelevant as a greenhouse gas because it only constitutes 0.039% of the atmosphere.
3) The Global Warming myth is just scare mongering… as you say: nobody knows… so just forget about it until you are asked to pay for some crackpot Global Warming scam / scheme… in that case just say NO and walk away – nothing to see here – just another snake oil salesman.

Ivor Ward
October 24, 2011 5:54 am

I thoroughly enjoy coming to this site to read the latest on the climate conundrum. I post occasionally under two different names, Disko Troop and Ivor Ward, depending on whether I think my ex-wife is watching that day (and whose computer I am using). I have never been subjected to nasty, snarky, rude or mean responses to my potentially all-encompassing ignorance of the subjects in question. The only time that I ever witness this kind of ill will here is when an influx of what one might call “the other side” appears, somewhat akin to a plague of locusts, when they feel that their side has scored some kind of brownie point, e.g. the BEST shenanigans. I have tried to raise the occasional question in forums such as Dr Schmidt’s and Mr Cook’s and been met with torrents of abuse. I asked why the sea level rise suddenly changed with the advent of satellites, why the trees in my garden depend on rainfall and amount of sunlight to grow but theirs only depend on temperature, and why temperatures are shown as rising in the Arctic by people who guess the data and not by the Danish university that has the buoys. Such simple and honest enquiries were met by abusive replies as to my ignorance and propensity for hanging around under bridges and eating Billy Goats Gruff. As a retired mariner once responsible for the safe navigation of the largest ships in the world, you can imagine my response should one of these pseudo-academics choose to insult me face to face. However, enough said. So I would like to thank Anthony, if I may call you that, for providing this forum with its air of relative civility.
I was responsible for many maritime mobile weather stations. We reported every 6 hours: wet/dry/RH, sea temp, wind, cloud type, height and cover, wind speed, direction, sea ice, sea state, swell direction, etc., and of course our position to within one or two miles by celestial navigation. Sometimes we would be the only ship reporting in the entire Southern Ocean, and this was in the 60’s and 70’s. Sometimes the only reporting ship within a thousand miles in the North Pacific.
Had I known then how so-called climatologists would currently misuse the data we collected for weather forecasting, I would have thrown every Met instrument on board over the bloody side.

polistra
October 24, 2011 6:00 am

Excellent. I’ve never understood why anyone wasted any time on interpolating, or on urban locations, when so many continuous rural locations are available. It’s just common sense to use good sensors (if available) and totally ignore dubious sensors. In this case good sensors are available, so we should only be paying attention to them.

1DandyTroll
October 24, 2011 6:01 am

Here’s the true hippie version:
Since the pebble is an exact replica of the mountain (all serious bong users says so):
600/600 = 1
Now you have just one station to work with, much easier.
Now take that station’s data points and divide every point by itself, then add those together and divide the sum by the number of data points, et voila, you get a smoothed numero uno.
That is called the reference point. However since it is the reference point:
1 = 0
Now all your work has zero points to it.
But that never stops the communist climate hippies, that’s why they’re probably crazed.
As a true hippie you put that shittie zero into the machine-bong to smoke out a result. As everyone knows the result will most likely be zero, so you have to use the bag of tricks attachment to run the resulting zero through the random alarm generator algorithm, seeded by +1..+7 (you probably don’t want to go higher ‘an +7 because then it becomes obvious you’re crazy), and before you know it: OMG! ALARMA! We are doomed! Hand over all your money so we can save you!

October 24, 2011 6:07 am

There is one part of the AGW theory that I am not skeptical about. That’s the part that says that there is such a thing as “The Urban Heat Island Effect” (UHIE), which is due to us anthropoids getting more numerous as we reap the benefits of burning more fossil fuels.
There is no reason, in my opinion, to assume that just because air temperatures have gone up in cities and other urban areas air temperatures in the surrounding areas have gone down in some kind of response – they probably have not as there is no reason for them to do so. However there is absolutely no reason to try to “fiddle figures” in order to wish any UHIE away. That goes for both “warmists and skeptics” alike.
Therefore, if for a hundred or so years we measured air temperatures (T) at, say, 3,000 stations, all in rural surroundings, and we are happy to equate the average T of those 3,000 stations with “the average global T” (15°C), then if, say, 100 or so stations have become “urbanized”, with each of them experiencing a T rise of a couple of degrees C, then yes, AGW is happening — but only in our paperwork, and it is a kind of warming that can only be detected locally. Furthermore, the UHIE has got nothing to do with “back-radiation” from CO2.
If the trend for the last century (1900-2000), in spite of the UHIE, is flat, then that should tell us that, in spite of a couple of “warming spikes”, the world outside our windows is getting cooler.
And by the way – now that, allegedly, most skeptical scientists as well as “Real Climate scientists” are pandering to the notion that AGW is due to CO2 back radiation, I am wondering why I still cannot find any actual data that proves it. – Is the “proof” needed yet another “Consensus”?

MFKBoulder
October 24, 2011 6:11 am

Quote form the guest post:
“There are several examples of long-running temperature records that fail to show any
substantial long-term warming signal; examples are the Central England Temperature record and the one from Hohenpeissenberg, Bavaria.”
Look at the Hohenpeissenberg graph and you see the statement quoted is nothing but baloney:
http://preview.tinyurl.com/Hohenpeissenberg
….
….
Just like the “winter start” reported for St. Moritz a month ago. Still ROTFLOL.

KPO
October 24, 2011 6:37 am

oMan says:
October 24, 2011 at 3:53 am
…. “reducing the complexity of a system such as local weather or (its big sister, integrated over time and space) climate, into a single parameter called “temperature” Also
Jer0me says:
October 24, 2011 at 2:47 am
…”If you mean that the ‘temperature’ itself is not a good reading, because what we need to measure is ‘energy’, you have my vote.”
Yes, both your replies are what I’m getting at. I would however like an “expert” explanation as to why they do what they do, so I’ll do some digging – if anyone here has a quick ‘n easy – thanks.
I have this mental picture of a future 300 year old Mom and Son phone call on Mothers day “ …what’s the climate like there Johnny? “Oh an average of 21C” ????

October 24, 2011 6:39 am

Ivor Ward/Disko Troop,
That is a fine post. My thanks and kudos.
Dave Springer and Frank Lansner, kudos also. Great comments!

October 24, 2011 7:19 am

I didn’t mention it in my reply above, but after reading Doug’s comments on the statistics, I put together these notes on the detection of “global warming”. Read the whole thing and tell me what’s wrong with it.

The heat content of the climate system isn’t just in the dry air over time. One has to measure moisture content and soil-/water-surface temperature for a start. Then, for each component, calculate enthalpy over each area (specifically, the thermal mass of each component). That gives the “instantaneous” heat content for the measured region.
Do that for the whole globe. Then sum for the global total at that instant.
It’s that simple. Meticulous and rigorous, but simple.

Unlike “global temperature”, the enthalpy figure is a real thing.
Plot the real thing over time. Look at the graph. Then try to figure out what’s happening in the real world.
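That recipe can be made concrete with the standard textbook approximation for moist-air enthalpy, h ≈ cp·T + L·q. The constants are round textbook numbers and the two parcels are invented, but they show why two sites with identical thermometer readings can hold very different amounts of heat:

```python
# Approximate moist-air enthalpy per kg of dry air: h = cp*T + L*q,
# where cp is the specific heat of dry air, L the latent heat of
# vaporization of water, and q the specific humidity (kg/kg).
CP = 1005.0  # J/(kg K), dry air, approximate
L  = 2.5e6   # J/kg, latent heat of vaporization, approximate

def moist_enthalpy(temp_c, q):
    """Enthalpy in kJ per kg of dry air, relative to 0 C dry air."""
    return (CP * temp_c + L * q) / 1000.0

# Two parcels at the SAME thermometer reading carry different heat:
desert  = moist_enthalpy(30.0, 0.005)  # hot and dry
tropics = moist_enthalpy(30.0, 0.020)  # hot and humid
print(f"desert:  {desert:.1f} kJ/kg")
print(f"tropics: {tropics:.1f} kJ/kg")
```

The humid parcel carries nearly twice the enthalpy of the dry one at the same dry-bulb temperature, which is the commenter’s point: a temperature-only record cannot track the system’s heat content.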

Michael Palmer
October 24, 2011 7:24 am

Frank Lansner says:
October 24, 2011 at 1:05 am
Palmer
“At some point they are going to run out of tricks to create a warming signal”
I appreciate very much that you just put it as it is.

Those weren’t my words.

October 24, 2011 7:28 am

Logic is a scary thing!

Etaoin Shrdlu
October 24, 2011 7:29 am

Anybody investigating the cause of death for all those stations?

ferd berple
October 24, 2011 7:40 am

Alexander Feht says:
October 24, 2011 at 1:16 am
So, where we have continuous, reliable, non-manipulated data, there is no warming at all. QED
Strange that many “skeptics” seem to have reconciled themselves with the notion that “there was some global warming in the 20th century.”
When Climategate came out, I looked at the Canadian temperature records. There was no trend apparent in the long running Canadian records either.
The obvious question to be asked is why? Why is there a statistical difference between the long running stations and the complete data set? There shouldn’t be.

Chuck Nolan
October 24, 2011 7:42 am

kim;) says:
October 24, 2011 at 3:03 am
KPO says:
October 24, 2011 at 2:17 am
“This thing with averages of averages of averages of data points (numbers) bothers me. ”
xxxxxxxxxxxxxx
Well said!
—————————-
In the words of the immortal Wills, “It’s models all the way down.”

Chuck Nolan
October 24, 2011 7:43 am

That would be “Willis”

Dave Springer
October 24, 2011 8:12 am

Ivor Ward says:
October 24, 2011 at 5:54 am
“I was responsible for many maritime mobile weather stations We reported every 6 hours, wet/dry/RH. Sea temp, wind,cloud type,height and cover,”
Hi Ivor. Just curious about how cloud height is determined aboard ship. I ask because back in the 1970’s I was a military meteorological equipment repair tech. I basically had to keep all the weather-related gear at an airport working and calibrated. One of the systems under my care was an old-fashioned cloud height indicator that had some spinning floodlights on one end of a runway and a receiver on the other end. The height of the cloud was determined by the angle of the transmitting light source when the receiver detected it. All analog electronics (vacuum tubes back in those days) but quite dependable and accurate.
Anyhow, reading your comment I was wondering how this is determined aboard a rolling ship with a baseline too short and likely not equipped with an expensive, bulky piece of gear like I had. I got to thinking about how we estimated the yield of a nuclear weapon (I went through Nuclear, Biological, and Chemical Warfare school) in the field. You basically time how long it takes from the flash to when you hear the sound to get the distance to it. You then measure the height of the mushroom cloud and determine from its color whether it was a sub-surface, surface, or air burst; then you can use a simple formula to determine the kilotonnage of the weapon, and from that you can also determine how close to ground zero you can get and how long you can remain before receiving a sickening (or fatal) dose of radiation.
So anyhow, I figure so long as the clouds observed aboard ship stretched to the horizon and it was during daytime you could measure the angle between horizon and cloud and figure out cloud height that way. Is that how it was done?

KR
October 24, 2011 8:18 am

So, looking at the method used here:
– No area weighting – a single station in Montana perhaps hundreds of miles from any other has the same weighting as a pair of stations that might be less than five miles apart. This is a serious biasing of the data used.
– Averaging raw temperatures (which vary hugely over short distances) rather than anomalies (which don’t – a mountaintop and a nearby pass/beach have different raw temperatures, but see roughly the same weather patterns).
– Throwing out 90% of the temperature records, when even a quick examination shows 1/3 of stations with a negative trend, 2/3 with a positive trend, making any conclusions from 10% poorly supported. I’m not surprised by a low trend – I would be equally unsurprised by a huge trend given the poor data treatment.
This article by Michael Palmer really says nothing meaningful, due to bad data handling – it’s like a compilation of “Never Do This” methods stuck together.
Michael Palmer – For the correlation of nearby station anomalies and area weighting, I would recommend Hansen & Lebedeff 1987 (http://pubs.giss.nasa.gov/cgi-bin/abstract.cgi?id=ha00700d), which discusses this issue (and a lot more) in terms of trying to compute these trends.
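[For readers unfamiliar with the gridding KR alludes to, here is a toy illustration, not GISS’s actual code; the cell size and the cosine area weight are simplifying assumptions. Stations are averaged within latitude/longitude cells first, and each cell mean is then weighted by its approximate area:]

```python
import math

# Toy area weighting: average stations within each lat/lon cell, then
# weight each cell mean by ~cos(latitude), since cells shrink toward the poles.
def gridded_mean(stations, cell_deg=5.0):
    """stations: list of (lat, lon, temperature) tuples."""
    cells = {}
    for lat, lon, temp in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells.setdefault(key, []).append(temp)
    num = den = 0.0
    for (i, _), temps in cells.items():
        weight = math.cos(math.radians((i + 0.5) * cell_deg))  # cell-center latitude
        num += weight * sum(temps) / len(temps)
        den += weight
    return num / den
```

With this weighting, the isolated Montana station DirkH worries about counts once for its cell, while a cluster of nearby stations is first averaged down to a single cell value.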

kwik
October 24, 2011 8:20 am

Okay, this means that Peter was right, then;

Ivor Ward
October 24, 2011 8:23 am

Would anyone care to explain to me why the graph linked by Mr Palmer for Hohenpeissenberg at http://climatereason.com/LittleIceAgeThermometers/Hohenpeissenberg_Germany.html shows a virtually flat line, while the one that MFK Boulder links to at http://preview.tinyurl.com/Hohenpeissenberg shows our favourite hockey stick, preferably without using the word “baloney” in the text even if you do spell it correctly.

Bob Kutz
October 24, 2011 8:30 am

In Re; Garrett Curley (@ga2re2t) says:
October 24, 2011 at 3:13 am
As to the notion that ‘skeptics’ don’t deny that the Earth has warmed; I think it a bit much to lump all of us into one group with one set of beliefs. The Earth seems to have warmed, according to what the record tells us, but it is difficult to imagine that we know with any degree of certainty. Most of us (I believe) stop short of accusing the curators of the data of out-and-out malfeasance, but even given honest brokers, the statistics of the matter are mind-bogglingly difficult. Imagine estimating the average temperature of a sphere roughly 8000 miles in diameter with an unevenly spaced set of thermometers, where gaps and overlaps abound and 2/3 of the surface has no meaningful data. We simply do not have one giant thermometer that gives us global mean surface T. (The closest we come now is the satellite data, which may in part explain the reduced number of stations; they aren’t really all that necessary with the satellites giving us significantly more accurate and complete data.) Too bad that record only begins in 1979.
In light of the best evidence, I think the Earth has probably warmed. It’s certainly warmed from the late 1800’s. In fact it’s likely warmed continuously from the LIA to present.
But this article just takes a look at a rather humble slice of data; unadjusted data from stations with continuous reporting. It is not peer reviewed. It isn’t presented as such and isn’t yet meant for publication.
I think a lot of you believers do not have a good understanding of the nature of this debate. There may be a paper that results from this. It wouldn’t be the first time something that started out here turned into a paper, survived peer review, and got published. But we don’t ostracize those with dissenting ideas here. We talk about those ideas.
That is how science usually begins; a hypothesis is developed and a means of testing it is devised. I don’t know how the current cargo culturalists who run orthodox climatology are doing it these days. It appears that their science is based entirely on models, data adjustments, squelching dissent and gaming peer review. That is just my perception. That is my opinion and perhaps I am wrong.
Either way, if the idea in this article were to develop into a peer reviewed, published paper it would certainly give those charlatans some issues to address. Especially if the data and methodology were shared freely and without objection.

October 24, 2011 8:30 am

MFKBoulder October 24, 2011 at 6:11 am says: “Look at the Hohenpeissenerbg Graph and you see the statement quoted is nothig but belony.”
The link you provided does not suggest whether the graph uses raw or adjusted data.
Furthermore, the link mentions that, “In March 1950, the status of the Hohenpeissenberg station was upgraded to that of a meteorological observatory.”
Did it switch in 1950 from “Mannheim hours” (at 700, 1400 and 2100 hours local mean time) to hourly readings? If so, does the graph use only the data taken at “Mannheim hours” for accurate comparison?
John M Reynolds

More Soylent Green!
October 24, 2011 8:32 am

Mergold says:
October 24, 2011 at 5:20 am
Excellent analysis. As a Republican voter for 40 years, albeit one now residing in Australia, can I just say I have never seen any evidence of warming. Ain’t no difference between a summer’s day in 1970 and a summer’s day now. I’m not sure why all this business of saying the true skeptics believe the world is warming has come up. There is maybe some regional warming somewhere, but no place I’ve been. I think this site is better when it avoids that bunkum. It’s dangerous.

IN 1970, very few of us had air conditioning. If we had A/C at home, it was a window unit. (Many of my friends slept outside during the summer.) We didn’t have A/C in our cars. Most people didn’t have A/C at work.
Today, we wake up in an air conditioned home, walk 10 feet and get in our air conditioned cars. We then walk through the parking lot (god, it’s hot out!) and into an air conditioned office or other workplace. It’s no wonder it seems hot out, because we’re no longer used to the normal summer heat.

October 24, 2011 8:35 am

KR says:
October 24, 2011 at 8:18 am
“Averaging raw temperatures (which vary hugely over short distances) rather than anomalies (which don’t – a mountaintop and a nearby pass/beach have different raw temperatures, but see roughly the same weather patterns).”
In fact, my method amounts to averaging anomalies rather than raw temperatures.
“Throwing out 90% of the temperature records, when even a quick examination shows 1/3 of stations with a negative trend, 2/3 with a positive trend, making any conclusions from 10% poorly supported.”
The 10% were selected not for some trend or for location, but purely based on continuity. If you think that criterion is meaningless, fine; I just don’t agree with you.
“For the correlation of nearby station anomalies and area weighting, I would recommend Hansen & Lebedeff 1987 …”
I’m not claiming to have calculated The One True Average Temperature Trend. The only point I make is that long-running stations trend differently from those that are not, and for that I don’t need area weighting.
Thanks for playing.
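[For what it’s worth, the gap-tolerant scheme described in the head post, averaging the year-to-year differences over stations present in both years and then accumulating them, can be sketched in a few lines; the function name and data layout here are illustrative only:]

```python
# Sketch of the first-difference averaging described in the head post:
# for each consecutive pair of years, average the year-to-year change over
# only those stations that report in BOTH years, then accumulate the changes.
def mean_series_with_gaps(records):
    """records: dict station -> dict year -> temperature (years may be missing)."""
    years = sorted({y for r in records.values() for y in r})
    series = {years[0]: 0.0}  # anomaly relative to the first year
    for prev, curr in zip(years, years[1:]):
        deltas = [r[curr] - r[prev] for r in records.values()
                  if prev in r and curr in r]
        step = sum(deltas) / len(deltas) if deltas else 0.0
        series[curr] = series[prev] + step
    return series
```

Because only differences are averaged, a station dropping out for a year never injects a spurious step, which is why the result behaves like an average of anomalies.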

October 24, 2011 8:36 am

Thanks, Michael. You also show that those without certified climate degrees, but with powers of observation and the tools of sharp pencils, can reach significant scientific conclusions. With a “problem” that is global, it amazes me that the claim that you need finely tuned technical backgrounds, million-dollar computers and a statistician’s understanding of why it looks like A but is actually B is so easily accepted by the mainstream. Or was, anyway.
Your question as to the large dropoff of stations is about a situation I’ve never understood, either. You would think that a “problem” threatening the end of mankind/the biosphere would generate more, not less, field work. Yet as the problem became, in the warmist view, worse, the station count collapsed. I’m no stranger to the need to check only the right few to determine the course of the many (politicians as well as businessmen rely on such surveys), but reducing coverage at such a time seems very odd. Saving pennies when about to spend billions doesn’t seem like what would happen in a budgetary process. Again, the MSM doesn’t seem to find this peculiar.
Senator Inhofe said that AGW was the greatest scam of all; perhaps he was thinking like a man with common sense, seeing such things as the station count drop and saying the whole thing just didn’t make sense. I’d agree with that.

Dave Springer
October 24, 2011 8:39 am

Frank Lansner says:
October 24, 2011 at 5:40 am

Springer
Frank Lansner: “The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts) :
BEST adds 0,55 K to the warm trend 1950-78 compared to RUTI.”
Dave Springer:”Hi Frank. I had a look at the graphs and recognized the source of the difference. That’s the infamous Time of Observation adjustment. It’s the biggest upward adjustment they make.”
Thanks for the comment. Yes, the time of observation… it’s amazing.
So across the world, from country to country, from culture to culture, continent to continent, for thermometers NOT meant for climate purposes, just to tell people about their local temperatures, we have this synchronous TOBS.
Everywhere, the time of observation has systematically been changed in one direction, causing too-cold temperature data that “must” be corrected massively.
If this is mostly TOBS (or similar) then I find it interesting that global warming is not really measured, but is created on the desk.

As James Woods said at the end of the film “Contact”… “Yes, that is interesting, isn’t it.”
The strength of the global warming narrative is in the satellite data beginning in 1979. There is little doubt that the data is reliable, accurate to the degree necessary, and coverage is near global and around the clock. The earth’s temp was rising between 1979 and 1999 and has leveled off since then.
But that’s not a long enough period of time for a “climate trend”, which is defined as 30 years of weather. Even 30 years is questionable for being long enough, because we know for a fact that there are climate cycles that go far beyond a mere 30 years. Interglacial periods, for instance, are on a cycle of 100,000 years. The AMDO (Atlantic Multi-Decadal Oscillation) is a 60-year cycle. In fact many of us believe that the past several decades are simply the warm side of the AMDO being measured by satellites and nothing more.

KR
October 24, 2011 8:47 am

Michael Palmer
My apologies on the anomaly/averaging – re-reading your post I see I was incorrect on that.
The lack of area weighting and discarding of 90% of the data, on the other hand, are quite serious issues. As I stated in my previous post, given the limitations you have imposed on the data, I would be equally unsurprised by flat temperatures as by a temperature rise several times what is noted in any of the records. You have also used the raw data, rather than data adjusted for changes at the various stations (as in new thermometers and the like). That could change the data either up or down – but will inevitably add yet another source of error and variation, making your conclusions even less statistically supported.
Area weighting data simply allows you to use the other 90% of the available data.
“Thanks for playing” – Oh? You consider this a game?

peetee
October 24, 2011 8:48 am

Michael Palmer – Disclaimer: I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetics. In case I have overlooked equivalent previous work, this is due to my ignorance of the field
nuff said!

TheGoodLocust
October 24, 2011 8:51 am

Wouldn’t a more accurate (and more computationally intense) method be to take the temperature deltas by months or even days and then average those together to get the average delta?
To clarify, if you went by days (or even specific times of day), you’d take all the January 1st readings, and calculate your base period and delta for that specific day. You’d then do that for every day in the year, and then calculate the year from there.
This would mitigate missing records from days or times by simply ignoring them.
There may still be bias, though, since the temperatures may not be measured under certain weather conditions (i.e. very cold), but it would prevent any input of false signals.
To make it even more accurate and challenging, you could recalculate the base period every time the station data is offline for more than a week or so. Essentially, if a station moves or equipment is changed/repaired, then this may be reflected by a period of missing records. The logical thing to do would be to treat it as a completely separate station instead of comparing its data to the older data.
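[The per-day delta idea above might be sketched roughly as follows; a toy version with a made-up data layout, which simply skips missing readings as suggested:]

```python
from collections import defaultdict

# Per-calendar-day anomalies: each station is compared against its own
# base-period mean for that day of year; missing readings are just ignored.
def daily_anomaly_mean(readings, base_years):
    """readings: dict station -> dict (year, day) -> temperature."""
    anomalies = defaultdict(list)
    for station, data in readings.items():
        # per-station base-period mean for each calendar day
        base = defaultdict(list)
        for (year, day), temp in data.items():
            if year in base_years:
                base[day].append(temp)
        base_mean = {d: sum(v) / len(v) for d, v in base.items()}
        for (year, day), temp in data.items():
            if day in base_mean:
                anomalies[year].append(temp - base_mean[day])
    return {y: sum(v) / len(v) for y, v in anomalies.items()}
```

A station with no base-period data for a given day contributes nothing for that day, which is the “simply ignoring them” behavior described above.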

Ivor Ward
October 24, 2011 8:55 am

Dave Springer asks:Hi Ivor. Just curious about how cloud height is determined aboard ship.
Hi Dave,
We used the Mark 1 uncorrected Eyeball, a large chart supplied by the Met office, vast amounts of experience and a lot of guesswork. As you know, the various types of clouds have different base levels, so it was always the starting point to decide what type of cloud you were looking at. The low clouds were fairly easy to determine, but high clouds were largely a matter of looking at the type on a chart and then trying to see if it had a positive base, i.e. at the lowest end of its height range, or a blurred baseline, possibly higher. In those days we did Met as part of the Board of Trade exams. I don’t recall a course called “staring at computer screens”; we actually had to look out of the bridge windows! Vertical sextant angle would not work, unfortunately, unless God dropped a plumb line to give a reference point between horizon and cloud base. Cloud cover was estimated by quartering the sky and using percentage estimates. You can understand my chagrin at the way our guesstimated data are now refined to multiple decimal points.
I too took the nuclear defence course. We were told to take cover in the shaft tunnel by an enthusiastic Lt Commander. When we pointed out the lack of such on a 200,000 ton tanker we were told to turn our backs to the flash, close our eyes, bend over and …………you can guess the rest.

crosspatch
October 24, 2011 8:57 am

Is there an official explanation for why, in the modern era with all the funding available, the number of stations has dropped precipitously?

Well, if you have ever had a look at the code that does the “adjustments”, I would guess that it gets very complex with a lot of stations. So if you reduce the number of stations, it becomes much faster / easier to do their “adjustments”. That could explain a reason for wanting to reduce the *number* of stations, but it doesn’t explain why the *nature* of the stations selected for deletion was biased toward colder stations. Rural and high-altitude stations were chopped mercilessly, and not just in the US; the same goes for South America, too.
To address another question asked above, yes, these stations are by-and-large still reporting every day. They are available in many cases electronically over the Internet. The stations are still there, and the data are still there, GISS simply no longer uses them.
This is crazy when you consider that the three coastal stations now representing all of California in no way reflect the weather in, say, Bridgeport, California, which is at about 7,000 feet altitude and is east of the Sierra Nevada. For example, the forecast temperatures in Bridgeport for Wednesday are High 47°F, Low 14°F, whereas the forecast temperatures for San Francisco are High 70°F, Low 56°F.
It certainly changes the “average” temperature of the state and requires a much greater degree of “adjustment” and does not take into account wind direction. San Francisco can warm considerably this time of year when the wind comes from the East and we get adiabatic warming from air dropping in altitude from the Sierra Nevada (like a “Chinook” wind).

Rob Potter
October 24, 2011 9:03 am

A number of people are questioning why there is a post on this site arguing that the temps have not gone up when there is also widespread acceptance of warming during the same time period, and I think the problem is that this posting is not talking about “global” temperature, but the US.
I also got confused with the BEST figures, because they show such an increase since the 1930s when – in the US – these were as warm as the last decade, and I wondered why no-one was querying this. The fact is that – taken over the whole planet – records show an increase in average temperature since the 1800s, and although there are still arguments over how much, no-one on any side is really debating this. [That’s why Muller’s comments on the BEST analysis shooting down skeptics are a straw man argument.]
This post is talking about the US and considering the 20th century as a single chunk – partly to point out the effect of the changes in the number of stations on the rate of change. It is as much an exercise in methods of developing a long term record in the presence of missing data as it is a comment on actual temperatures, but this is an important point since we know there are problems with missing data.
It has been a very effective posting, because it has generated a lot of comments, some of which contain very interesting and useful information themselves. Excuse me for shouting, but THAT IS THE POINT OF A SCIENTIFIC BLOG. If all you want is confirmation of your existing opinion, go to a political blog site.
Thanks Michael for this analysis, thanks to Anthony for posting it (and much, much more) and thanks to the commentors who have read the post, thought about it and are providing some useful feedback and discussion.

Dave Springer
October 24, 2011 9:26 am

Glenn Tamblyn says:
October 24, 2011 at 1:40 am
Skeptical Science was caught red-handed editing post-facto an article in which a senior climate scientist was making critical comments. They not only treated a very civil senior scientist with expert credentials in the field poorly, they edited their own article afterward to make him look worse. This was proven beyond a shadow of a doubt by comparing archived versions of the article and commentary (@ archive.org). SkS was busted beyond any doubt at all.
Anthony Watts does not want links to SkS appearing here because that raises SkS google rankings appreciably and they do not deserve the added page views that come with a higher search ranking. It’s not rocket science, it’s quite understandable, and it’s Anthony’s call to make.
Besides that if whatever point you were trying to make had any merit to it you wouldn’t need to rely on a single source for a reference. If SkS is the only source you have then it’s a moot point to begin with.

Mike Smith
October 24, 2011 9:28 am

Yikes.
Dr Palmer’s paper appears to demonstrate that the settled science of warming is attributable solely to the “fudge factors” (data selection and corrections) typically applied to such work.
It does not address the question of whether or not these “fudge factors” are legitimate or justified but it surely begs for further examination of same.
I hope this work can be submitted for formal peer-reviewed publication so that the warmists are forced (shamed) into explaining why the “corrections” applied to their raw data just happen to be exactly equal to the warming trend they report. The usual hand waving is insufficient and the powerful presentation in this article makes that pretty darn obvious.
On the methodology… we all know that data selection is always dangerous. However, the particular selection used in this article, based on nothing more than the continuity of the station data, does seem perfectly justified and certainly raises some fascinating questions.
Beautiful paper!

Matt
October 24, 2011 9:30 am

The “lopping” off of the pre-1900s and post-2000s is a major influencing factor on the regressions, because the decreased period of time gives more weight to significant events occurring during the time period used for Figure 2, especially those significant events that took place in the first half of the century. Anomalies like the 1930s and 1950s droughts have a tendency to skew regressions negatively towards the end of the century because they were such major events temporally and spatially.

Werner Brozek
October 24, 2011 9:31 am

“KPO says:
October 24, 2011 at 2:17 am
I have this sense that there are parameters missing such as humidity,”
I understand where you are coming from. However, how much difference does it really make in the end?

The percent water vapor in the atmosphere can vary from close to 0 to about 4%. Let us assume that in a dry year the humidity averages 1% and in a humid year it averages 3%. The specific heat capacity of air is 1.0. Let us assume the specific heat capacity of water vapor is 2.0. So if the air has 1% water vapor, the average specific heat capacity is 1.01, and if the air has 3% water vapor, the average specific heat capacity is 1.03. I know the molar mass of water is 18 and not 29, but if we just assume they are the same, then the mass of the atmosphere with 3% water vapor is 2% larger than with 1% water vapor. (I am also generously assuming water vapor exists evenly throughout the atmosphere and does not condense out.) Then, applying mct(moist air) = mct(dry air), we find that the mc for the moist air is 4% larger than for dry air. So to balance things out, the dry air has to have a temperature change that is 4% larger than the moist air. In other words, if moist air goes up by 1.00 degrees C, the dry air, with the same energy input, would go up by 1.04 degrees C.

So I would say the difference is very small. Perhaps the error bars need to be made just a wee bit larger to account for the unknown average humidity values? Note that I am not addressing phase changes that may occur due to humidity, which is a separate topic.
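[Werner’s back-of-envelope figures can be checked directly; a quick sketch using his assumed values. The specific heats of 1.0 and 2.0 and the equal molar masses are his stated simplifications, not measured properties:]

```python
# Verify the comment's humidity arithmetic: ratio of m*c (mass times
# specific heat) for air at 3% water vapor versus 1%, using the assumed
# figures c_dry = 1.0, c_vapor = 2.0 and equal molar masses.
def heat_capacity_ratio(w_moist=0.03, w_dry=0.01):
    c = lambda w: 1.0 * (1 - w) + 2.0 * w   # mixture specific heat
    m = lambda w: 1.0 + (w - w_dry)         # mass relative to the drier case
    return (m(w_moist) * c(w_moist)) / (m(w_dry) * c(w_dry))
```

The ratio comes out to about 1.04, matching the comment’s conclusion that dry air would warm roughly 4% more than moist air for the same energy input.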

Dave Springer
October 24, 2011 9:33 am

Rob Potter says:
October 24, 2011 at 9:03 am
“I also got confused with the BEST figures, because they such an increase since the 1930s when – in the US – these were as warm as the last decade and I wondered why no-one was querying this. The fact is that – taken over the whole planet – ”
Rob, the fact is that there IS NO TEMPERATURE record for the whole planet. Period. Even today there are vast areas missing inside the arctic and antarctic regions because the satellites don’t have a view into them.
The southern hemisphere was almost a complete unknown, with virtually no instrumental temperature record until well into the 20th century. Adding insult to injury, there is almost no coverage for the entire continent of Asia until well into the 20th century and virtually none for any of the world’s oceans except in shipping lanes.
To pretend the situation is different is an outright lie. There IS NO RELIABLE GLOBAL instrumental record pre-dating the satellite era, period. End of story.

Dave in Canmore
October 24, 2011 9:39 am

Stephen Wilde says:
“From our perspective there seems to be a warming or cooling at the recording sites when averaged out overall, but in reality all that is being recorded is the rate of energy flow past the recording sites as the speed of energy flow through the system changes in an inevitable negative response to a forcing agent, whether it be sun, sea or GHGs. In effect the positions of the surface sensors vary in relation to the position of the climate zones, and they record that variance and NOT any change in energy content for the system as a whole.”
A most welcome summary of the starting point that is a temperature record. For many, a temperature record is the end of the thought process but it really is the beginning. Thanks for reminding us what these data points actually mean!

DR
October 24, 2011 9:39 am

Has USHCN-M any data worth looking at yet?

Theo Goodwin
October 24, 2011 9:40 am

Michael Palmer’s article is an important one and we need to focus on the Big Picture. The article introduces two topics, one about station quality and the other about calculation. The station quality topic is logically prior to the other topic and raises the very important question about station quality and the empirical evidence for it.
Palmer describes the stations reporting continuously since 1900 as follows:
“Here, I selected for stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.”
Graphing these stations, he concludes that:
“While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.”
From Palmer’s observations, we need to ask what we can infer about the stations. I suggest that the most important and telling inference that can be drawn is that these stations have been well managed. The stations that do not fall into the category of “well managed” can then be graded on various levels of “poor management.” The levels of poor management can be determined by searching for causes of gaps or bumps and similar matters. (Bumps occur when there is a sudden and large continuous change in temperatures reported.)
I emphasize poor management for a very important reason. The only reasonable inference that can be made about stations with numerous gaps and bumps is that the readings that come from them are flaky. Yes, flaky, as in usually inaccurate and maybe in several different ways. The inferences that Warmista want to draw at this point are that errors offset one another, that errors are one-time shifts that do not affect trends, that surrounding stations are not flaky, and so on. Obviously, none of those inferences are justifiable without the results of empirical research done on the ground. Because Warmista adamantly refuse to engage in such empirical research, they are making wholly unjustified assumptions.
For the last thirty years, Anthony Watts and others have gathered information about siting which could explain many gaps and bumps and which could be used in grading poor management. Watts’ factual information goes far beyond what has been described here.
When cornered, the Warmista response is that all of these empirical matters are unimportant because their incredibly sophisticated statistical techniques enable them to compensate for all flakiness in all weather station records. The breathtaking boldness of this claim makes it highly suspect. It raises the question whether Warmista could specify any degree of flakiness that could not be accommodated within their statistical techniques. (Please note that questions of calculation are separate from and can be in conflict with empirical knowledge of stations.)
The practical conclusion of all this is that the records of well managed weather stations should be privileged over those of poorly managed stations in calculations of average temperatures. Palmer’s claim that the well managed stations show no temperature trend at all should be the accepted baseline among climate scientists and deviations from it should require justification from empirical research about particular poorly managed stations.

Mike Smith
October 24, 2011 9:41 am

Matt says:
The “lopping” off of the pre-1900s and post-2000s is a major influencing factor on the regressions…
A point that was fully addressed in the paper.

beng
October 24, 2011 9:48 am

****
Frank Lansner says:
October 24, 2011 at 5:40 am
Thanks for the comment. Yes, the time of observation… it’s amazing.
So across the world, from country to country, from culture to culture, continent to continent, for thermometers NOT meant for climate purposes, just to tell people about their local temperatures, we have this synchronous TOBS.
Everywhere, the time of observation has systematically been changed in one direction, causing too-cold temperature data that “must” be corrected massively.

****
Technically, TOBS is a legit correction, but then how could corrections to so many stations (thousands) produce a TOBS correction so lopsided upward? One would think that such a correction applied globally over so many stations would end up nearly random — near zero. And the TOBS correction applied isn’t even done from each individual station’s data; it’s done by a TOBS “model” (algorithm).
I assume TOBS “models” are as trustworthy as climate models, until shown otherwise.

Dave Springer
October 24, 2011 9:49 am

Ivor Ward says:
October 24, 2011 at 8:55 am
Interesting. I’d have thought you could simply measure the amount of sky showing between horizon and bottom of cloud deck. The distance to the horizon at sea should be pretty constant with possibly some adjustment needed for height of the ship’s deck above the waterline which would let you see some distance further than line of sight from waterline.
In NBC school we didn’t have sextants. In order to determine the height of the mushroom cloud we used “thumb widths”, i.e. hold your arm straight out with thumb horizontal and count the number of thumb widths from the ground to the top of the mushroom cloud. IIRC, each thumb width is about 5 degrees. With a distance estimate taken from the time between flash and sound, you have the length of one side and two angles (including the 90 degree angle at the base of the mushroom cloud) of a right triangle, which is sufficient data to solve for the lengths of the other sides. Exactly the same thing -should- work at sea to measure the height of the cloud deck, although on a rolling ship it might be quite difficult counting thumb widths! Maybe beyond difficult, as I’ve never tried anything like that and have virtually zero time spent on any ships at sea. I’ve been in all kinds of planes and helicopters, all kinds of watercraft on inland waters, and all sorts of land vehicles, but only a couple of half-day ocean fishing trips for my maritime experience – enough to know I don’t get seasick in modest swells, but that’s about it.
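[The flash-to-bang triangulation Dave describes reduces to one line of trigonometry; a sketch, where 343 m/s is an assumed sea-level speed of sound:]

```python
import math

# Flash-to-bang triangulation: distance from the sound delay, then height
# of the cloud (or mushroom top) from the elevation angle of a right triangle.
def cloud_height(delay_seconds, elevation_degrees, speed_of_sound=343.0):
    distance = delay_seconds * speed_of_sound            # meters to the base point
    return distance * math.tan(math.radians(elevation_degrees))
```

With Dave’s rule of thumb of roughly 5 degrees per thumb width, the elevation angle is just 5 times the thumb count.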

October 24, 2011 9:57 am

The strength of the global warming narrative is in the satellite data beginning in 1979. There is little doubt that the data is reliable, accurate to the degree necessary, and coverage is near global and around the clock.

I admire your faith in satellite data that is not calibrated against earth-based thermometers… satellite data that cannot be independently verified / processed / checked… Although the satellite data starts in 1979, the data series has been accumulated from various satellites with differing equipment and differing failure rates… I do not share your unquestioning faith in the accuracy of the data, the reliability of the equipment, or the scope & timeliness of the coverage… let alone the subsequent processing of the raw data by the usual suspects.

October 24, 2011 10:22 am

I am not trying to downplay this work, but again,
any study that does not show the development of maxima and minima TOGETHER with the averages is pretty useless, I think.
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

Rob Potter
October 24, 2011 10:24 am

Dave,
I agree that there is no reliable global thermometer, but there are a set of global records that people use to create an artificial construct called the global temperature and – for some reason – everyone looks at it and thinks it means something.
I was simply pointing out that the supposed disconnect in this article was the comparison of the (artificial) US temp with the (artificial) world temp.
The whole notion of a global temperature (even if you use satellites) is an artificial construct. Heck, the concept of temperature itself is an artificial construct of a way to refer to energy. However, it serves a useful purpose because it is something that can be defined simply and compared over time and space.

DirkH
October 24, 2011 10:27 am

Rob Potter says:
October 24, 2011 at 9:03 am
“A number of people are questioning why there is a post on this site arguing that the temps have not gone up when there is also widespread acceptance of warming during the same time period, and I think the problem is that this posting is not talking about “global” temperature, but the US.”
One of the longest running records in Europe is Berlin; nearly no trend over 300 years:
There are, though, smaller waves, and I think it is pretty clear that we are at the moment at the top of one of these waves; so one can construct 30 year trends that show a warming. The CAGW movement uses this to shout “This time is different” – like the people who believed in ever-increasing house prices. We know how that one ended.

October 24, 2011 10:31 am

Springer
“The strength of the global warming narrative is in the satellite data beginning in 1979. ”
Yes, the great argument is that land temperature data is not too far from the satellite data, some say.
But the largest difference occurred in 1950-78, it seems.
K.R. Frank

October 24, 2011 10:32 am

DirkH says:
October 24, 2011 at 2:50 am
But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around,
You MUST use some form of area weighting. This may be difficult to do depending on whether there is location data available. But a poor man’s weighting would be to calculate the average temp for each state [if that location is available], then average the 48 states.
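That “poor man’s” state-first weighting can be sketched in a few lines. The function name is mine, and as the comment notes this treats every state equally; real area weighting would weight each state by its area:

```python
from collections import defaultdict

def poor_mans_area_weight(readings):
    """readings: list of (state, temp) pairs. Average within each state
    first, then average the state means, so states with many stations
    don't dominate the national figure."""
    by_state = defaultdict(list)
    for state, temp in readings:
        by_state[state].append(temp)
    state_means = [sum(v) / len(v) for v in by_state.values()]
    return sum(state_means) / len(state_means)

# Three stations in CA, one in NV: the plain mean is 11.25,
# but the state-weighted mean is 12.5
data = [("CA", 10.0), ("CA", 10.0), ("CA", 10.0), ("NV", 15.0)]
```

The toy data shows the point of the exercise: without the grouping step, the three Californian thermometers outvote Nevada three to one.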

October 24, 2011 10:39 am

KR says:
October 24, 2011 at 8:47 am
“The lack of area weighting and discarding of 90% of the data, on the other hand, are quite serious issues.”
You persevere in misunderstanding the intention of my post. But, if throwing away 90% of the stations is so terrible: Why does GISS do the same, then? They axed 90% of their stations themselves.
“Area weighting data simply allows you to use the other 90% of the available data.”
Thank you. I already suspected that you were clueless, but now I’m sure of it.
‘“Thanks for playing” – Oh? You consider this a game?’
Yes. For me, it is – my day job is in real science.

Brian
October 24, 2011 10:45 am

Interesting that posts like this keep popping up when Anthony has admitted the earth is warming, just not that humans are causing it. Are you going back on your pledge to “accept whatever result they produce, even if it proves my premise wrong.”?
Has Anthony ever detailed what exactly it would take for him to accept AGW? How high do temperatures have to get? Who has to do the analysis? Because after his reversal on the BEST study, it seems that any study that supports AGW must be wrong for some reason or another.

Ivor Ward
October 24, 2011 10:46 am

Dave Springer says:
October 24, 2011 at 9:49 am
The difficulty is in deciding which point of the cloud base is directly above the horizon. Due to the curvature of the Earth the cloud base continues over the horizon until eventually it appears to meet the horizon. Without the plumb line to indicate exactly where the point of measurement should be you can pick the observed height all the way down to zero. We have corrections for refraction, parallax and dip(height of eye) and vertical sextant angle can be used to determine the distance of an object of known height, or the height of an object of known distance.
(I still use rule of thumb when yachting!)

Louis
October 24, 2011 10:55 am

Has anyone estimated the margin of error associated with calculating global temperature? That could be the elephant in the room. I suspect that the margin of error is greater than the estimated warming of about 1 degree C over the past century. If a margin of error has been estimated, can someone please provide a link to it?
Local temperatures can change several degrees in less than an hour. So, unless all temperature stations around the world are synchronized to record temperature at the exact same time of day, the margin of error could be greater than 1 degree C. Just differences in Daylight Savings Time around the world could play havoc with the data.
Then you have to consider whether the number of data points is sufficient to get an accurate estimate of the world’s temperature. The BEST data claims that a third of temperatures show a decline and two-thirds show an increase. This indicates a large variability from region to region and implies that you need a great number of data points around the world to accurately measure an average temperature. But, instead, the number of data points has been drastically reduced, leaving large regions without any measurements. The large polar regions, which are supposed to be the most affected by warming, have no temperature recordings but are entirely estimated. This too increases the margin of error for any global temperature calculation. Am I the only one who suspects that the margin of error dwarfs the small increase in temperature over the past century?

October 24, 2011 10:57 am

Anthony, et al,
I would love to see you address the following in a post on your site.
An even easier way to demonstrate a long term flat USA trend is to simply insist that the date range begin and end at similar points in the AMO cycle. See the following post:
http://sbvor.blogspot.com/2011/10/amo-as-driver-of-climate-change.html

Gosport Mike
October 24, 2011 11:25 am

Just a couple of points if I may.
1. I believe average Global temperature to be immeasurable and meaningless. Local average temperature variations may have some use but, surely, it is the extremes that matter. After all, a climate which freezes at night and fries during the day would hardly be characterised by its average.
2. Apart from the effrontery of the pseudo science the only thing that really matters about AGW is the suggestion that we should be doing something about it. This has led to vast sums being wasted on Carbon trading, windmills and second rate climate studies – all of which should stop now.

Ken Harvey
October 24, 2011 11:31 am

I am one of those who is sceptical as to whether there has been any increase in average temperature over the last century or so. I am one of those who is sceptical as to whether an average can be arrived at and whether it would have anything other than a conceptual meaning if it could. I am one of those that deny that an average can be calculated using existing resources.
Having regard for the width of the error bars arising from the shortcomings in the metrology, from the hodgepodge availability and distribution of data and the inconsistency of the data sources, the current temperature number is no more than a guess, and I can see no reason to believe that it is correct to within 2 degrees.
How is it that lacking any qualification regarding climate science, I can be so adamant in what I say? It is because I am blessed (or cursed) with being numerate. If the data is dodgy one cannot manipulate it in a way that would resolve to a valid answer.

Steve C
October 24, 2011 11:34 am

Lansner – Thanks for that! – I wasn’t familiar with RUTI, so was assuming that it was *only* rural. But as a mix close to what I was asking for, it certainly looks a darn sight more convincing, as expected. I shall have to come over to hidethedecline and look around. 🙂

Sean
October 24, 2011 11:37 am

malagaview:
While not a new idea, the mixing of max and min temp can be justified on the basis of consistency with practices from when there were no data loggers. You do what you can with the information you have. However, the real ‘no no’ is collating daily readings into monthly figures and calling them monthly raw data. Months are not all the same length. Over much of the recorded period, the months did not even start within the same week in different countries. Working with months, you have infilling and dropping of days within the month, and again of months within years.
Raw means what you saw. That means the max and the min, or the reading and the time of day as written down – with the gaps where there are gaps.

October 24, 2011 11:43 am

Sean,
Quite correct. For example, the December temperature changes look like this.

Interstellar Bill
October 24, 2011 11:43 am

This discard of stations that, in effect,
are the ones reporting level or declining temperatures:
Before they calculate the ‘global’ sea-level,
they throw out all stations showing decline or no rise,
since they ‘know’
that something they call ‘global sea level’
is on the brink of catastrophically rising.
They can’t have those meaningless outliers contaminating their message.

crosspatch
October 24, 2011 11:43 am

What I find interesting is that NOAA’s National Climate Data Center shows significant cooling for the continental US since 1998 using the USHCNv2 data set. The rate of cooling for the most recent 12-month period (October to September) since 1998 is -0.77 degF/decade. That is a significant cooling trend in the US.

More Soylent Green!
October 24, 2011 11:51 am

Brian says:
October 24, 2011 at 10:45 am
Interesting that posts like this keep popping up when Anthony has admitted the earth is warming, just not that humans are causing it. Are you going back on your pledge to “accept whatever result they produce, even if it proves my premise wrong.”?
Has Anthony ever detailed what exactly it would take for him to accept AGW? How high do temperatures have to get? Who has to do the analysis? Because after his reversal on the BEST study, it seems that any study that supports AGW must be wrong for some reason or another.

No matter how high the temperatures get, it’s still not evidence of AGW! Global warming is not AGW! We could set a record high temperature everywhere on the globe from now until the sun burns out and it still wouldn’t be evidence of AGW, because global warming does not mean AGW.
Remember these two things
1) Evidence of global warming is not evidence of anthropogenic global warming.
2) Repeat #1 until you get it.

Harry Snape
October 24, 2011 12:15 pm

Roger Knights wrote:
“The GISS dataset includes more than 600 stations within the U.S. …
So no worries about Antarctica or the Equator.”
I’d expect quite a difference in continental US temps between Florida and North Dakota.
“It isn’t the temperature that’s replaced, but the delta (the little triangle is the delta) of the temperature; i.e., the anomaly. (If I’m reading the formula correctly.)”
Replacing a missing figure like the example I gave (Sydney’s 50-year low in Oct) with an average of Oct would be quite wrong. If deltas are used, and the average delta is infilled, the error bars should be extended by the maximum variance in the deltas seen historically for that date, and even larger if the number of historical records is low.

Brian
October 24, 2011 12:19 pm

@Soylent Green
But once it’s clear the earth is warming (really it already is) you need to suggest a cause. Either it’s the human emissions of gasses that are known to have warming effects, or something else. The possible list of “something else” shrinks as temperatures keep going up. Claiming that it’s a coincidence is pretty hard to swallow.

October 24, 2011 12:29 pm

More Soylent Green! says:
October 24, 2011 at 8:32 am
… It’s no wonder it seems hot out, because we’re no longer used to the normal summer heat.

I’ve thought about that myself quite often. I vaguely remember suffering from the heat back then, but it was a natural part of our life and we just made do. Much the same thing has happened with hygiene. 100 years ago or more, being somewhat dirty all the time was simply a matter of course. Now, however, we’re used to being clean virtually all the time.
Another factor is the “humidex.” While I’d never argue against the merit of having a humidex, it does tend to fool people into thinking it’s hotter now than before. I have, so very, very often, heard people saying things like, “My God, it’s 42 degrees (Celsius). It never reached those temperatures here when I was a kid!” Well, it hasn’t reached those temperatures here now, either, you moron, because that’s the freakin’ humidex!

October 24, 2011 12:35 pm

Henry and Soylent green
most of the warming is natural (witness the large increases in maxima) and a small % is caused by the increase in vegetation (that traps the extra heat), mostly in the NH
stick with the truth.
http://www.letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok

October 24, 2011 12:42 pm

Brian sez:
“But once it’s clear the earth is warming (really it already is) you need to suggest a cause.”
Warming over what time frame? The only dataset I trust is the UAH satellite data.
But, the UAH data begin around the bottom of an AMO cooling cycle and currently end around the peak of an AMO warming cycle. The next AMO cooling cycle will bottom out somewhere around 2040 to 2045. So, we’ll have to wait at least that long to even begin to have enough data to draw any sort of reasonable conclusions.
Meantime, a century scale flat USA trend is easily demonstrated by simply insisting that the date range begin and end at similar points in the AMO cycle. See the following post:
http://sbvor.blogspot.com/2011/10/amo-as-driver-of-climate-change.html
I am reasonably certain the same would hold true for Greenland and most of Europe. In 2045, once we have credible global data, we’ll begin to have some idea to what extent that holds true for the entire planet.
In the following post, I have cited several examples of peer reviewed science demonstrating the extent to which the roughly 70 year AMO cycle drives global temperature cycles:
http://sbvor.blogspot.com/2010/12/how-amo-killed-cagw-cult.html

October 24, 2011 12:50 pm

Michael Palmer says: Yes. For me, it is – my day job is in real science.

BRAVO! Give this man a cigar 🙂

October 24, 2011 12:54 pm

Brian says:
“But once it’s clear the earth is warming (really it already is) you need to suggest a cause. Either it’s the human emissions of gasses that are known to have warming effects, or something else. The possible list of “something else” shrinks as temperatures keep going up. Claiming that it’s a coincidence is pretty hard to swallow.”
• • •
Brian, Prof Richard Lindzen explains:
The notion of a static, unchanging climate is foreign to the history of the earth or any other planet with a fluid envelope. The fact that the developed world went into hysterics over changes in global mean temperature anomaly of a few tenths of a degree will astound future generations. Such hysteria simply represents the scientific illiteracy of much of the public, the susceptibility of the public to the substitution of repetition for truth, and the exploitation of these weaknesses by politicians, environmental promoters, and, after 20 years of media drum beating, many others as well. Climate is always changing. We have had ice ages and warmer periods when alligators were found in Spitzbergen. Ice ages have occurred in a hundred thousand year cycle for the last 700 thousand years, and there have been previous periods that appear to have been warmer than the present despite CO2 levels being lower than they are now. More recently, we have had the medieval warm period and the little ice age. During the latter, alpine glaciers advanced to the chagrin of overrun villages. Since the beginning of the 19th Century these glaciers have been retreating. Frankly, we don’t fully understand either the advance or the retreat… For small changes in climate associated with tenths of a degree, there is no need for any external cause. The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work suggests that this variability is enough to account for all climate change since the 19th Century. [my emphasis]
Invoking “carbon” as a cause of natural climate variability is the basis for the entire climate alarmist industry. But the 40% increase in harmless, beneficial CO2 has not made an appreciable difference; the warming over the past century and a half has been from 288K to 288.8K, a minuscule rise. And there is zero testable evidence that it was caused by the rise in CO2.

AlexW
October 24, 2011 12:55 pm

@DirkH says:
October 24, 2011 at 10:27 am
“One of the longest running records in Europe is Berlin; nearly no trend over 300 years”
Berlin Dahlem is also one of the sites which has been removed from the GISS record last year

October 24, 2011 12:56 pm

Brian, we don’t need to suggest a cause for the current warming because natural variability is the null hypothesis.
Before I could accept climate change as caused by anthropogenic CO2 emissions the following would have to apply :-
1. Temperatures would have to rise above those in the Medieval Warm Period, the Roman Warm Period, the Minoan Warm Period, most of the early Holocene, and several earlier interglacial periods. Otherwise it’s natural cycles.
2. Temperatures would have to track CO2 levels with a suitable lag. They don’t. Turning points in the temperature record ( 1910, 1940, 1970, 2000 ) would have to correspond to turning points in the CO2 record. They don’t.
3. The major assumption of the CO2 warming hypothesis, i.e. positive water vapor feedbacks, would have to be demonstrated in empirical data. My reading of the situation is that Spencer & Braswell, Lindzen & Choi, Miskolczi and others are winning this argument hands down.

crosspatch
October 24, 2011 1:06 pm

It isn’t “global warming” that anyone is skeptical of. It is whether it is caused by anything people are doing, or whether anything people are doing has a significant impact on the rate/amount of warming. The globe is pretty much always “warming” or “cooling” and is rarely stable for any great length of time. The point is that nobody has shown that any current warming is at any greater rate than has occurred naturally before the industrial age.
In fact, the post-industrial warming is simply the only major warming trend we have had since the LIA. What we have measured is the recovery from the LIA till 1933, then cooling until 1976, then another period of warming till about 1998.
What is so frustrating is the constant adjusting of the adjustments. When we have a warm year that is close to 1933, it gets adjusted upward and 1933 gets adjusted downward to make a new “hottest” year. It simply would not fly in any other field of science. There is too much subjective “adjustment” applied to the records. I would want to look at rural stations that are still rural (continuously reporting notwithstanding) and see what the raw data show.
I believe the “adjustments” are corrupt and are agenda-driven.

cwj
October 24, 2011 1:08 pm

If the objective is to determine the average trend of many stations, wouldn’t the least biased method of estimating missing data be to determine a trend for that station based on available data, and substitute the value predicted by that trend for the missing data? Then when all the data are aggregated, the trends expressed by the data for each of the stations would be weighted equally. One station would not be affecting the trend in the data from another station.

KR
October 24, 2011 1:19 pm

Michael Palmer
‘“Thanks for playing” – Oh? You consider this a game?’
Yes. For me, it is – my day job is in real science.

That’s a very telling statement. If you don’t consider studying the climate worthy of actual scientific effort, then, well, never mind. I’ll just take your post as seriously as you apparently do.

Brian
October 24, 2011 1:19 pm

Amazing that the “politicians and environmental promoters” have managed to convince 97% of climatologists and essentially every major scientific organization that AGW is real. Especially with the entire Republican party (always friends of science!) and oil and coal interests fighting tooth and nail. As a layman observer it seems clear that claiming to understand the research well enough to side with the 3% of people who know what they’re talking about is disingenuous.

DR_UK
October 24, 2011 1:22 pm

This is interesting. The use of long station series seems a very good idea.
But isn’t this a similar method of taking first differences that was discussed and criticised before? See Hu McCulloch’s 2010 post at Climate Audit http://climateaudit.org/2010/08/19/the-first-difference-method/. I can’t say I understand all the arguments in that thread, but is there a better way of dealing with missing years?

October 24, 2011 1:22 pm

beng
“I assume TOBS “models” are as trustworthy as climate models, until shown otherwise.”
TOBS is an empirically derived adjustment. You can read Karl’s paper or the subsequent verification of it.
Essentially, here is the process: for the entire United States, all of the HOURLY stations were assembled. That database is then split into two parts: one part for model development, the other part for model test.
Model development. Since you have HOURLY data you can calculate the following:
what is (Tmin+Tmax)/2 if you record at
1am, 2am, 3am, 4am, 5am, 6am, 7am, etc.
That gives you a Tave for every hour of the day, or rather the Tave that would be recorded IF the TOB was at a given hour.
From that you derive an offset. Like so.
Suppose that Tave at 5pm is 15C and Tave at 7am is 14.5C.
That gives you an adjustment of -0.5C.
This allows you to adjust ANY TOB to a common TOB. Historically, rural stations manned by individuals had a TOB of 5pm; those are “moved” forward by adjusting to the 7am time.
Given these “deltas”, an empirical model is then developed for every region of the USA. It depends upon latitude and longitude and the sun’s position (season). That empirical model is basically a regression.
The regression is then tested against the “held out” stations to see how well it predicts the actual Tave. All of the standard errors of prediction are in the Karl paper.
CA had a whole discussion of this some time ago. At first TOBS made no sense to me; then I went through some examples prepared by JerryB for John Daly’s old site.
Arguing about TOBS is a waste of time. It needs to be applied in ANY analysis that does simple averages. Otherwise you will get the wrong answer. Demonstrably wrong.
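The first half of that procedure can be sketched in code. This is a toy version, not Karl’s actual method: the function names are mine, and the observer’s “day” is simplified to a 24-hour window of hourly readings beginning at the observation hour:

```python
def tave_at_obs_hour(hourly, obs_hour):
    """Mean of (Tmin+Tmax)/2 over all complete 24-hour 'days' beginning
    at the given observation hour. `hourly` is a flat list of hourly
    temperatures; index 0 is hour 0 of the first day."""
    taves = []
    start = obs_hour
    while start + 24 <= len(hourly):
        window = hourly[start:start + 24]
        taves.append((min(window) + max(window)) / 2)
        start += 24
    return sum(taves) / len(taves)

def tobs_offset(hourly, from_hour, to_hour):
    """Delta added to a record observed at from_hour to put it on the
    same basis as a record observed at to_hour (to - from)."""
    return tave_at_obs_hour(hourly, to_hour) - tave_at_obs_hour(hourly, from_hour)
```

With Mosher’s numbers (Tave 15C for a 5pm TOB, 14.5C for a 7am TOB) this sign convention gives the -0.5C adjustment in the comment. The regression step then just fits these offsets against latitude, longitude and season.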

George Turner
October 24, 2011 1:29 pm

Brian, what about the list of possible causes of the Medieval Warm Period, the Roman, etc, or the fact that even the BEST team shows a steeper temperature rise in the early 1800’s than anything in the 20th century? Since the list of natural causes is so diminished, would you suggest pirates, jousting tournaments, and goat entrails being sacrificed to Apollo as likely forcings for those events?
Most of the apocalyptic hand waving dismissed solar effects, as the variation in visible-spectrum solar output is rather small. But now we’re finding strong links between UV output and upper atmospheric temperatures, along with a possibly huge influence on cloud coverage because of cosmic ray cloud seeding, which is modulated by the strength of the solar wind, and that strength does vary strongly with solar cycles. Then they often pretend that the AMO, PDO, AO, and other natural oscillations don’t exist, and pretend that the world has had statistically significant warming in the past 15 years, which it hasn’t.
If you follow some of the adjustments they make to the temperature record, we should worry less about the present getting warmer than the alarming rate at which the past keeps getting cooler. If present trends continue, millions of extra people in the 1940’s are going to freeze to death.

October 24, 2011 1:31 pm

Brian,
After trying to help you by providing an explanation by Prof Richard Lindzen, who knows something about the subject, you come out with the 97% nonsense. That 97% number has been thoroughly debunked. In fact, the OISM Petition has far more signatures from degreed professionals in the hard sciences than the total of all the alarmist counter-petitions. The fact is that most scientists and almost all engineers reject the catastrophic AGW conjecture.
And IANAR, but tarring the “the entire Republican party” with the same brush makes you look like a credulous dope walking around with your zipper down. Run along now back to Skeptical Pseudo-Science. You need to load up on some new talking points.

Brian
October 24, 2011 1:38 pm

Smokey,
This is the problem with arguing with [SNIP: – Policy Violation -REP] , they all have different arguments, and are willing to change them at the drop of a hat. “The earth isn’t warming!” “Ok maybe it is but it’s not humans!” “Ok it’s humans but it’s not harmful!” “Ignore the fact that I was wrong on my first two premises!”
Most scientists reject catastrophic AGW? I like that you slip “catastrophic” in there. Make up your mind: are you denying global warming, AGW, or that AGW is harmful? Changing your position from argument to argument is not acceptable. Technically you may be correct, most scientists do not believe AGW will lead to armageddon. But denying the scientific consensus on man-made climate change is absurd. http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change

October 24, 2011 1:51 pm

Michael Palmer, I cannot thank you enough.
You have given the evidence in statistically significant swathes for what I’ve been banging on about for years. Long individual station records are essential for the proper deconstruction of the data problems, fudges and manipulations. We don’t need lots of stations, and we do not need the globe sectorized into areas of homogenized temperature soup. Surprisingly few stations will do, so long as they are trustworthy and long enough.
I did quite a lot of work on this when I had more time, some of which Anthony published here (one of my Circling Yamal pages, and my Circling the Arctic page). I also did GISS temperature records in the British Isles. And my page Removing UHI Distortion shows it’s the trustworthy US records that seem to be overall flat over the last century; worldwide there seems to be a slight increase overall. Find all those pages here. And here in my “Primer” are some of the prime long records, and the horrendous record of USHCN “corrections”.
I was inspired by the choice of station records of the late John Daly: you, he and I are all, I think, amateur scientists, in the best sense of the word -love of real science.

Cherry Pick
October 24, 2011 1:55 pm

“The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.”
Why do we need to do that replacement? Isn’t it possible to calculate the global temperature trend by using just the real observations?
1. Calculate the trend of one site (station) in one day of year. For example the 25th October trend in 1900..2011.
2. Do that for every site
3. Do that for every day of the year
What is the impact of swapping the order of 2 and 3?
If the site has been moved or there is some other changes, just consider the site closed and start a new one.

phlogiston
October 24, 2011 2:02 pm

Garrett Hurley
You are picking up the wrong end of the stick. Who is it who should be expected to have coherent data on global temperatures? Is it the climate research community, with salaries and pensions funded by the taxpayer and billions of dollars worth of politically mandated funding for equipment, satellites, etc.? Or an amateur group of scientists and concerned citizens, including some climate scientists sacked for heretical views? And you ask why WUWT does not have a single party line on global temperature trends?
What has the climate science establishment done with the tremendous influx of funding over the last decade or two? One would reasonably have assumed that the first obvious thing to do would be to increase the number of weather monitoring stations worldwide, making them fully automatic, with automated and correct min and max temperature recording, intercommunication and compilation. Iron out once and for all issues of area averaging, missing regions, etc.
The scandalous and unbelievable fact is that the opposite has happened. Somehow, all the new funding for climate research has been accompanied not by an increase, but by a decrease in monitored weather stations worldwide, and a precipitous decline at that. WUWT? The climate community needs to defend itself from an inescapable accusation of massive fraud on an almost unbelievable scale.
And now a straightforward demonstration by Michael Palmer that sorting for the best quality continuous data records in the USA annihilates at a stroke any sign of warming on that continent raises further the question – WHAT THE HELL HAS THE CLIMATE RESEARCH COMMUNITY BEEN DOING??
BTW, do mobile phones have thermometers? They should have. If they do, then someone should write an app to make a citizens’ global climate monitoring network. Mobile phones at least know where they are and what planet they are on.

Mike Jowsey
October 24, 2011 2:03 pm

@ Bob Kutz :October 24, 2011 at 8:30 am
Thanks for a thoughtful and well-constructed post.

Theo Goodwin
October 24, 2011 2:15 pm

Brian says:
October 24, 2011 at 1:38 pm
Either this is your first time on this site and you are truly innocent and lost or else you are a troll.
As I have explained above, in line with Palmer’s thesis about stations, the reliable stations show no warming in the US. That is my position for the world. Everything that shows warming is not empirically testable.
Some people say there is warming but it is not manmade.
Professor Lindzen follows Arrhenius in saying that there is manmade warming but it is and will be harmless.
Others say that warming might be somewhat harmful but adaptation is superior to mitigation.
All of those positions fall on the sceptical side and none of them are clearly mistaken.
By the way, Smokey is as reliable a guide as one can find.

October 24, 2011 2:24 pm

KR says: October 24, 2011 at 1:19 pm
Michael Palmer: ‘“Thanks for playing” – Oh? You consider this a game?’ Yes. For me, it is – my day job is in real science.
That’s a very telling statement. If you don’t consider studying the climate worthy of actual scientific effort, then, well, never mind. I’ll just take your post as seriously as you apparently do.

_______________________________________________________________________
KR has got Michael’s point upside down. Thank goodness there are still real scientists like Michael who choose, in their free time, to visit an area of science that has become so corrupted that its gatekeepers are corrupt and mad, proven by the fact that they proclaim they have 97% support whilst gagging dissenters and failing to count their true number.
With mad gatekeepers, I guess entry to this domain often has to be played as a game.

Dave Springer
October 24, 2011 2:24 pm

steven “one trick pony” mosher says:
October 24, 2011 at 1:22 pm
It’s nothing short of amazing you can use the phrase “the right answer” in regard to obtaining a global average temperature when it’s derived from instruments placed in narrow band of latitudes on a single continent. Adding insult to injury the continental region in question was the most rapidly anthropogenically transformed land area of its size on the planet.
There is no right answer from the instrument record, Steverino. Perhaps you could simply admit that no amount of pencil whipping can possibly transform this regional daily temperature sample set into a global average.

October 24, 2011 2:24 pm

KR says:
October 24, 2011 at 1:19 pm
“If you don’t consider studying the climate worthy of actual scientific effort, then, well, never mind.”

I find studying the climate eminently worthy of scientific effort. What I cannot take seriously is the “reconstruction” of the “true” temperature record from a woefully incomplete and corrupted database. No amount of adjusting, correcting, weighting, averaging, extrapolating, smoothing, roughing, digesting and regurgitating will get us past the garbage in, garbage out problem.

Theo Goodwin
October 24, 2011 2:26 pm

steven mosher says:
October 24, 2011 at 1:22 pm
Wow! You actually used the word ’empirically’, though it does occur in “empirically derived adjustment.”
Given what you said, please explain one thing. How is it that you can do all your wonderful adjustments and come up with something that conflicts with the lack of a trend from the well managed stations? What is wrong with the well managed stations? Can you express this in empirical terms?
Second question. What would it take to get you to agree that our weather station measurement system is not up to the task and needs to be replaced? Is there anything that you could discover about the measurement system that would lead you to discard it? Or will you defend this system come hell or high water?

October 24, 2011 2:27 pm

@ Lucy Skywalker
Thanks very much for your comments. I have seen your posts here and also recently perused your own page from top to bottom and enjoyed it. I look forward to your further posts!
Best wishes, Michael

Theo Goodwin
October 24, 2011 2:28 pm

Brian says:
October 24, 2011 at 12:19 pm
This is a classic fallacy. You don’t have to present a replacement to show that a theory is false. If a theory is false, it is false all by itself.

Dave Springer
October 24, 2011 2:30 pm

phlogiston says:
October 24, 2011 at 2:02 pm
“BTW do mobile phones have thermometers?”
Yes they do. Too bad they have an internal heat source and are carried around much of the time in physical contact with a 98.6F warm body or inside heated/air conditioned buildings and vehicles.
You didn’t think about that question very much.
Some people say there’s no such thing as a stupid question but this one proves them wrong.

October 24, 2011 2:31 pm

Brian says: October 24, 2011 at 1:38 pm
…denying the scientific consensus on man-made climate change is absurd.
http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change

It’s shameful gatekeeper-corrupted material like this that makes me continue banging on about establishing a skeptics’ climate science wiki, especially aimed at restitution of the most corrupted truths and rehabilitation of the most wrongfully-tarred individuals.

October 24, 2011 2:31 pm

Brian says:
“Changing your position from argument to argument is not acceptable.”
I am willing to change my opinion if new facts warrant it, but my position has remained unchanged for a very long time. It is this:
A doubling of CO2 will probably result in a ≈1°C rise in temperature, ± ≈0.5°C. The additional warmth will be entirely beneficial. It will result in millions of new arable acres in places like Siberia, Mongolia and Canada. Furthermore, any current and projected rise in CO2 will be entirely beneficial to the biosphere. Agricultural production will continue to increase as a direct result of more CO2. There is no credible downside to a rise in CO2, a beneficial and harmless trace gas.
I have posted several hundred comments here that say essentially the same thing. I have been very consistent in this. If you can find a comment I made that contradicts anything in this post, please point it out.

Ulric Lyons
October 24, 2011 2:34 pm

@Ivor Ward says:
October 24, 2011 at 8:23 am
“Would anyone care to explain to me why the graph linked by Mr Palmer for Hohenpiessenberg at http://climatereason.com/LittleIceAgeThermometers/Hohenpeissenberg_Germany.html shows a virtual flat line and then one that MFK Boulder links to at http://preview.tinyurl.com/Hohenpeissenberg shows our favourite hockey stick, preferably without using the word “baloney” in the text even if you do spell it correctly.”
It is easy to see which one is bogus from the original data:
http://members.multimania.nl/ErrenWijlens/co2/t_hohenpeissenberg_200306.txt

Dave Springer
October 24, 2011 2:37 pm

Theo Goodwin says:
October 24, 2011 at 2:26 pm
That was a brutal drubbing of the one trick pony. Took him to school with that one you did. Wished I’d have written it!

Dave Springer
October 24, 2011 2:47 pm

Smokey says:
October 24, 2011 at 2:31 pm
“A doubling of CO2 will probably result in a ≈1°C rise in temperature”
If you change that to a maximum of 1C in the most arid places on the planet where evaporation and convection play little role in surface cooling then I’ll agree to it. Rather basic physics there without confounding factors. CO2 has very little effect over the oceans which are 71% of the planet’s surface because the ocean only gives up 20% of its solar heating through radiation and CO2 only slows down radiative cooling. CO2 DOES NOT slow down evaporative, convective, or conductive cooling. Over land surfaces, especially dry surfaces, the dominant mode of cooling is radiative. CO2 will have its maximum theoretical effect there and there only.

Brian
October 24, 2011 2:49 pm

@Smokey
So you basically accept the findings of climate research except for the consequences. So do you agree that 90% of the self-proclaimed “skeptics” are crazy for denying the earth is warming and that humans are causing (at least) much of it? What other scientific consensuses do you deny? Evolution? The germ theory of disease? Do you think the evidence that smoking causes lung cancer is just a political ploy?
Climate-denial-gate has robbed the “skepticism” movement of whatever credibility it had.

kwik
October 24, 2011 2:50 pm

Brian says:
October 24, 2011 at 1:38 pm
Brian, are you very young and innocent?
How else can it be explained that you think wikipedia is a source to be recited? Or are you being deceitful on purpose?
We all know about the frantic editing by the warmistas over at wikipedia.
http://notrickszone.com/2011/10/24/german-meteorologists-horror-winter-to-hit-central-europe/

October 24, 2011 2:53 pm

DR_UK says:
October 24, 2011 at 1:22 pm
“But isn’t this a similar method of taking first differences that was discussed and criticised before? See Hu McCulloch’s 2010 post at Climate Audit http://climateaudit.org/2010/08/19/the-first-difference-method/.”

Thanks for the link. The idea is indeed similar. There is one difference, however: The CA post states: “Missing observations may simply be interpolated for the purposes of computing first differences (thereby splitting the 2 or more year observed difference into 2 or more equal interpolated differences).” In contrast to this approach, I did not fill the gaps with interpolated numbers.
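For readers curious how first differences can be averaged without filling gaps, here is a minimal sketch in Python. It is my own illustration of the idea being discussed, not Palmer’s actual code; the station data and the helper name `mean_first_difference` are hypothetical.

```python
def mean_first_difference(stations, year):
    """Average of T(year) - T(year-1) over the stations that reported
    both years; gaps are simply skipped, never interpolated."""
    diffs = [s[year] - s[year - 1]
             for s in stations if year in s and year - 1 in s]
    return sum(diffs) / len(diffs) if diffs else None

# Two toy station records (year -> annual mean, degrees C); the second
# station is missing 2001, so it contributes nothing to the 2001 or
# 2002 differences.
stations = [{2000: 10.0, 2001: 11.0, 2002: 11.5},
            {2000: 5.0, 2002: 7.0}]

print(mean_first_difference(stations, 2001))  # 1.0 (first station only)
print(mean_first_difference(stations, 2002))  # 0.5 (first station only)
```

A station with a gap simply drops out of the affected year-pairs, instead of having invented values spread across the gap as in the interpolation scheme described at Climate Audit.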

phlogiston
October 24, 2011 3:04 pm

Dave Springer
Maybe the earlier suggestion of using airliner temperature records was a better one.

Ivor Ward
October 24, 2011 3:05 pm

George Turner says:
October 24, 2011 at 1:29 pm
If you follow some of the adjustments they make to the temperature record, we should worry less about the present getting warmer than the alarming rate at which the past keeps getting cooler. If present trends continue, millions of extra people in the 1940′s are going to freeze to death.
Thank you George!! This was the comment of the day for me. ( I hope my mum and dad aren’t affected because then I wouldn’t be here)

October 24, 2011 3:09 pm

Brian says:
October 24, 2011 at 1:38 pm
Smokey,
This is the problem with arguing with [SNIP: – Policy Violation -REP] , they all have different arguments, and are willing to change them at the drop of a hat. “The earth isn’t warming!” “Ok maybe it is but it’s not humans!” “Ok it’s humans but it’s not harmful!” “Ignore the fact that I was wrong on my first two premises!”
Most scientists reject catastrophic AGW? I like that you slip “catastrophic” in there.

In the last few days, Brian is only one of many who question “what do skeptics believe” and point out that “skeptics” are all over the board in their (our) beliefs.
“From the best I’ve determined, this is what most of both “skeptics” and “lukewarmers” believe:
“There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.”
Many here know where that statement originates, but the source of the phrase isn’t as important as is the general sentiment it portrays. ”
We don’t slip “catastrophic” in anywhere – the position of the “warmists/alarmists”, the position that “requires” restriction of anthropogenic CO2 emissions, is that these same CO2 emissions are driving the climate toward catastrophic events.
We (skeptics/lukewarmers) should not let a “warmist/alarmist” define what we believe.

October 24, 2011 3:14 pm

Brian says:
October 24, 2011 at 2:49 pm
“What other scientific consensus’ do you deny? Evolution? The germ theory of disease? Do you think the evidence that smoking causes lung cancer is just a political ploy?”

None of these, actually. You may also be surprised to learn that not all of us own a gun.
Anyone who likens the profundity of our understanding of global warming, and the quality of the evidence to those pertaining to any of the above theories only proves their ignorance.

Dave Springer
October 24, 2011 3:15 pm

Louis says:
October 24, 2011 at 10:55 am
“Local temperatures can change several degrees in less than an hour.”
I’ve seen 20F drop in ten minutes here in south central Texas when a cold front blows through on a warm day. It’s enough to take your breath away. Like you opened up your freezer door and stuck your head inside.
The locals here say “There ain’t nothing that stands between Austin and the Arctic circle but barbed wire fence.”

October 24, 2011 3:17 pm

Brian,
You’re beginning to sound like a lunatic. The large majority of scientific skeptics know that the planet has warmed naturally since the Little Ice Age [LIA]. You deliberately misrepresent the skeptical position by making truly absurd comments like: “…do you agree that 90% of the self-proclaimed ‘skeptics’ are crazy for denying the earth is warming…”
First, skeptics are not crazy. We simply ask for testable, real world evidence – per the scientific method – showing that human-produced CO2 causes measurable global warming. It may, but as of now there is no evidence that it does. None. The entire AGW facade is based on computer models and conjecture.
Next, you keep raising the red herring argument of “consensus”. Every last person on earth could believe that the moon is made out of green cheese. But the consensus would be wrong. Further, the canard that most scientists and engineers believe that human activity caused most of the ≈0.7°C warming over the past hundred and fifty years is provably wrong, as I’ve shown in the OISM links I posted. In fact, the alarmist crowd is in the minority, and your misguided belief system cannot change that verifiable fact.
Finally, your wacko comment about ‘climate-denial-gate’ confirms you as a member of the climate alarmist lunatic fringe. This is the internet’s “Best Science” site, not a tinfoil hat blog like tamino’s, Romm’s, Cook’s Skeptical Pseudo-Science, etc. Please take your wild-eyed conspiracy theories to them, they eat that stuff up.
• • •
Dave Springer,
The 1°C number is a ballpark figure, because we don’t know for certain what the sensitivity number is. It may even change depending on CO2 concentration and/or temperature. That’s why I give a 0.5°C error band. [Or fudge figure, if you like.☺] I personally think that when the question is definitively answered, the number will be ≤1°C.

Dave Springer
October 24, 2011 3:25 pm

phlogiston says:
October 24, 2011 at 3:04 pm
“Maybe the earlier suggestion of using airliner temperature records was a better one.”
Satellites are doing a fine job. Global coverage, 24/7, NBS traceability. Interestingly though they too were also “adjusted” around 1999 to go from showing a cooling trend to a warming trend.
It’s just a testament to the incredibly small effect (CO2) they’re trying to tease out of the data that pencil whipping within the error bars of the highest tech sensing gear can change a cooling trend to a warming trend. In reality I strongly suspect there is no detectable temperature trend from CO2 but there is indeed a trend and that trend is higher agricultural output because whatever else CO2 is it is definitely plant food and we’re fertilizing the atmosphere by burning fossil fuels.

John-X
October 24, 2011 3:27 pm

Hansen’s own GISStemp data show that it is actually COOLER than his infamous “Scenario C” forecast, which was based on NO (i.e. ZERO, NONE, NOT ANY) man-made CO2 emissions after 2000!
http://www.real-science.com/doubt-temperatures-rising-fast-hansens-emissions
If Hansen paid any attention at all to HIS OWN DATA, as well as his own predictions, pronouncements, prognostications and various other bloviations, he would have concluded years ago that CO2-based warming simply DOES NOT HAPPEN!

peetee
October 24, 2011 3:31 pm

uhhh… the article references an ‘abstract’ – where’s the actual published paper to be found? What journal? Uhhh… what peer-reviewed journal? Surely, surely…. this can’t be a pre-release!!!

Glenn Tamblyn
October 24, 2011 3:38 pm

As Requested:
Of Averages & Anomalies, Part 1A
In recent years a number of claims have been made about ‘problems’ with the surface temperature record: that it is faulty, biased, or even ‘being manipulated’. Many of the criticisms seem to revolve around misunderstandings of how the calculations are done, and thus exaggerated ideas of how vulnerable to error the analysis of the record is. In this series I intend to look at how the temperature records are built and why they are actually quite robust. In this first post (Part 1A) I am going to discuss the basic principles of how a reasonable surface temperature record should be assembled. Then in Part 1B I will look at how the major temperature products are built. Finally, in Parts 2A and 2B, I will look at a number of the claims of ‘faults’ against this to see if they hold water or are exaggerated based on misconceptions.
How NOT to calculate the Surface Temperature
So, we have records from a whole bunch of meteorological stations from all around the world. They have measurements of daily maximum and minimum temperatures for various parts of the last century and beyond. And we want to know how much the world has warmed or not.
Sounds simple enough. Each day we add up all these stations’ daily average temperatures, divide by the number of stations and, voilà, we have the average temperature for the world that day. Then do that for the next day and the next and…. Now we know the world’s average temperature, each day, for all that measurement period. Then compare the first and last days and we know how much warming has happened – how big the ‘Temperature Anomaly’ is – between the two days. We are calculating the ‘Anomaly of the Averages’. Sounds fairly simple, doesn’t it? What could go wrong?
Absolutely everything.
So what is wrong with the method I described above?
1. Not every station has data for the entire period covered by the record. Stations have come and gone over the years for all sorts of reasons. Or a station may not have a continuous record: it may not have been measured on weekends because there wasn’t the budget for someone to read the station then, or it couldn’t be reached in the dead of winter.
Imagine we have 5 measuring stations, A to E that have the following temperatures on a Friday:
A = 15, B = 10, C = 5, D = 20 & E = 25
The average of these is (15+10+5+20+25)/5 = 15
Then on Saturday, the temperature at each station is 2 °C colder because a weather system is passing over. But nobody reads station C because it is high in the mountains and there is no budget for someone to go up there at the weekend. So the average we calculate from the data we have available on Saturday is:
(13+8+18+23)/4 = 15.5.
But if station C had been read as well it would have been:
(13+8+3+18+23)/5 = 13
This is what we should be calculating! So our missing reading has distorted the result.
We can’t just average stations together! If we do, every time a station from a warmer climate drops off the record, our average drops. Every time a station from a colder climate drops off, our average rises. And the reverse for adding stations. If stations report erratically then our record bounces erratically. We can’t have a consistent temperature record if our station list fluctuates and we are just averaging them. We need another answer!
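The arithmetic above can be checked in a few lines of Python, using the same station values as the example:

```python
# Stations A-E on Friday; Saturday is uniformly 2 C colder, but
# station C (the coldest site) goes unread.
friday = {"A": 15, "B": 10, "C": 5, "D": 20, "E": 25}
saturday = {k: v - 2 for k, v in friday.items()}

def simple_average(readings):
    """Plain mean over whichever stations happened to report."""
    return sum(readings.values()) / len(readings)

reported = {k: v for k, v in saturday.items() if k != "C"}  # C unread

print(simple_average(friday))    # 15.0
print(simple_average(reported))  # 15.5 - looks warmer despite the cold front
print(simple_average(saturday))  # 13.0 - the answer we should have got
```

Dropping the one cold station makes the Saturday “average” rise by half a degree even though every station actually got colder.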
2. Our temperature measurements aren’t from locations spaced evenly around the world. Much of the world isn’t covered at all – the 70% that is oceans. And even on land our stations are not evenly spread. How many stations are there in the roughly 1000 km between Maine and Washington DC, compared to the number in the roughly 4000 km between Perth & Darwin?
We need to allow for the fact that each station may represent the temperature of very different size regions. Just doing a simple average of all of them will mean that readings from areas with a higher station density will bias the result. Again, we can’t just average stations together!
We need to use what is called an Area Weighted Average. Do something like: take each station’s value, multiply it by the area it is covering, add all these together, and then divide by the total area. Now the world isn’t colder just because the New England states are having a bad winter!
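A minimal sketch of that Area Weighted Average, with hypothetical stations and areas chosen purely for illustration:

```python
# Hypothetical stations: (temperature in C, area in km2 it represents).
stations = [(15.0, 50_000), (10.0, 200_000), (5.0, 750_000)]

def area_weighted_average(stations):
    """Multiply each temperature by its area, sum, divide by total area."""
    total_area = sum(area for _, area in stations)
    return sum(t * area for t, area in stations) / total_area

print(area_weighted_average(stations))  # 6.5
```

The unweighted mean of these three temperatures would be 10.0; weighting by area pulls the answer toward 6.5 because the cold station represents by far the largest region.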
3. And how good an indicator of its region is each station anyway? A station might be in a wind or rain shadow. It might be on a warm plain or higher in adjacent mountains, or in a deep valley that cools quicker as the Sun sets. It might get a lot more cloud cover at night or be prone to fogs that cause night-time insulation. So don’t we need a lot of stations to sample all these micro-climates to get a good reliable average? How small does each station’s ‘region’ need to be before its readings are a good indicator of that region? If we are averaging stations together we need a lot of stations!
4. Many sources of bias and error can exist in the records. Were the samples always taken at the same time of day? If Daylight Savings Time was introduced, was the sampling time adjusted for this? Were log sheets for a station (in the good old days before new-fangled electronic recording gizmos) written by someone with bad handwriting – is that a 7 or a 9? Did the measurement technology or their calibrations change? Has the station moved, or changed altitude? Are there local sources of bias around the station? And do these biases cause one-off changes or a time-varying bias?
We can’t take the reading from a station at face-value. We need to check for problems. And if we find them we need to decide whether we can correct for the problem or need to throw that reading or maybe all that station’s data away. But each reading is a precious resource – we don’t have a time-machine to go back and take another reading. We shouldn’t reject it unless there is no alternative.
So, we have a Big Problem. If we just average the temperatures of stations together, even with the Area Weighting answer to problem #2, this doesn’t solve problems #1, #3 or #4. It seems we need a very large detailed network, which has existed for all of the history of the network, with no variations in stations, measurement instruments etc, and without any measurement problems or biases.
And we just don’t have that. Our station record is what it is. We don’t have that time machine. So do we give up? No!
How do stations’ climates change?
Let’s consider a few key questions. If we look at just one location over its entire measurement history, say down on the plains, what will the numbers look like? Seasons come and go; there are colder and warmer years. But what is the longer term average for this location? What is meant by ‘long term’? The World Meteorological Organisation (WMO) defines Climate as the average of Weather over a 30 year period. So if we look at a location averaged over something like a 30 year period and compare the same location averaged over a different 30 year period, the difference between the two is how much the average temperature for that location has changed. And what we find is that they don’t change by very much at all. Short term changes may be huge but the long term average is actually pretty stable.
And if we then look at a nearby location, say up in the mountains, we see the same thing: lots of variation but a fairly stable average with only a small long term change. But their averages are very different from each other. So although a station’s average change over time is quite small, an adjacent station can have a very different average even though its change is small as well. Something like this:
Next question: if each of our two stations averages only change by a small amount, how similar are the changes in their averages? This is not an idle question. It can be investigated, and the answer is: mostly by very little. Nearby locations will tend to have similar variations in their long term averages. If the plains warm long term by 0.5°C, it is likely that the nearby mountains will warm by say 0.4–0.6°C in the long term. Not by 1.5 or -1.5°C.
It is easy to see why this would tend to be the case. Adjacent stations will tend to have the same weather systems passing over them. So their individual weather patterns will tend to change in lockstep. And thus their long term averages will tend to be in lock-step as well. Santiago in Chile is down near sea level while the Andes right at its doorstep are huge mountains. But the same weather systems pass over both. The weather that Adelaide, Australia gets today, Melbourne will tend to get tomorrow.
Station Correlation Scatter Plots (HL87)
Final question. If nearby locations have similar variations in their climate, irrespective of each station’s local climate, what do we mean by ‘nearby’? This too isn’t an idle question; it can be investigated, and the answer is many hundreds of kilometres at low latitudes, up to 1000 kilometres or more at high latitudes. In Climatology this is the concept of ‘Teleconnection’ – that the climates of different locations are correlated to each other over long distances.
Figure 3, from Hansen & Lebedeff 1987 (apologies for the poor quality, this is an older paper) plots the correlation coefficients versus separation for the annual mean temperature changes between randomly selected pairs of stations with at least 50 common years in their records. Each dot represents one station pair. They are plotted according to latitude zones: 64.2-90N, 44.4-64.2N, 23.6-44.4N, 0-23.6N, 0-23.6S, 23.6-44.4S, 44.4-64.2S.
Notice how the correlation coefficients are highest for stations closer together and less so as they stretch farther apart. These relationships are most clearly defined at mid to high northern latitudes and mid southern latitudes – the regions of the Earth with higher proportions of land to ocean.
This makes intuitive sense since surface air temperatures of the oceanic regions are influenced also by water temperatures, ocean currents etc instead of just air masses passing over them, while land temperatures don’t have this other factor. So land temperatures would be expected to have better correlation since movement of weather systems over them is a stronger factor in their local weather.
This is direct observational evidence of Teleconnection. Not just climatological theory but observation.
So what if we do the following? Rather than averaging all our stations together, instead we start out by looking at each station separately. We calculate its long term average over some suitable reference period. Then we recalculate every reading for that station as a difference from that reference period average. We are comparing every reading from that station against its own long term average. Instead of a series of temperatures for a station, we now have a series of ‘Temperature Anomalies’ for that station. And then we repeat this for each individual station, using the same reference period to produce the long term average for each separate station.
Then, and only then, do we start calculating the Area Weighted Average of these Anomalies. We are now calculating the ‘Area Average of the Anomalies’ rather than the ‘Anomaly of the Area Averages’ – now there’s a mouthful. Think about this. We are averaging the changes, not averaging the absolute temperatures.
Does this give us a better result? In our imaginary ideal world where we have lots of stations, always reporting all the time, no missing readings, etc., then these two methods will give the same result.
The difference arises when we work in an imperfect world. Here is an example (for simplicity I am only doing simple averages here rather than area weighted averages):
Let’s look at stations A to E. Let’s say their individual long term reference average temperatures are:
A = 15, B = 10, C = 5, D = 20 & E = 25
Then for one day’s data their individual readings are:
A = 15.8, B = 10.4, C = 5.7, D = 20.4 & E = 25.3
Using the simple Anomaly of Averages method from earlier we have:
(15.8+10.4+5.7+20.4+25.3)/5 – (15+10+5+20+25)/5 = 0.52
While using our Average of Anomalies method we get:
((15.8-15) + (10.4-10) + (5.7-5) + (20.4-20) + (25.3-25))/5 = 0.52
Exactly the same!
However, if we remove station C as in our earlier example, things look very different. Anomaly of Averages gives us:
(15.8+10.4+20.4+25.3)/4 – (15+10+5+20+25)/5 = 2.975 !!
While Average of Anomalies gives us:
((15.8-15) + (10.4-10) + (20.4-20) + (25.3-25))/4 = 0.475
Obviously both values don’t match what the correct value would be if station C were included, but the second method is much closer to the correct value. Bearing in mind that Teleconnection means that adjacent stations will have similar changes in anomaly anyway, this ‘Average of Anomalies’ method is much less sensitive to variations in station availability.
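The two methods can be written out directly in Python, using the reference averages and readings from the example (station C missing):

```python
reference = {"A": 15, "B": 10, "C": 5, "D": 20, "E": 25}   # long-term averages
readings  = {"A": 15.8, "B": 10.4, "D": 20.4, "E": 25.3}   # station C missing

def anomaly_of_averages(readings, reference):
    """Average today's available stations, then subtract the mean of
    the *full* network's reference averages."""
    return (sum(readings.values()) / len(readings)
            - sum(reference.values()) / len(reference))

def average_of_anomalies(readings, reference):
    """Compare each station to its own reference average first, then
    average those differences."""
    diffs = [readings[k] - reference[k] for k in readings]
    return sum(diffs) / len(diffs)

print(anomaly_of_averages(readings, reference))   # 2.975 - badly distorted
print(average_of_anomalies(readings, reference))  # 0.475 - close to 0.52
```

With the full network both functions return 0.52; it is only when station C drops out that the first method blows up while the second stays close to the correct value.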
Now let’s consider how this approach could be used when looking at station histories over long periods of time. Consider 3 stations in ‘adjacent’ locations. A has readings from 1900 to 1960. B has readings from 1930 to 2000 and C has readings from 1970 to today. A overlaps with B, B overlaps with C. But C doesn’t overlap with A. If our reference period is say 1930 – 1960, we can use the readings from A & B. But C doesn’t have any readings from our reference period. So how can we splice together A, B, & C to give a continuous record for this location?
Doesn’t this mean we can’t use C, since we can’t reference it to our 1930-1960 baseline? And if we use a more recent reference period we lose A. Do we have to ignore C’s readings entirely? Surely that means that as the years roll by and the old stations disappear, eventually we will have no continuity to our record at all? That’s not good enough.
However there is a way we can ‘splice’ them together.
A & B have a common period from 1930-1960. And B & C have a common period from 1970-2000. So if we take the average of B from 1930 to 1960 and compare it to the same average from A for the same period, we know how much their averages differ. Similarly we can compare the average of B from 1930-1960 to the average for B from 1970-2000 to see how much B has changed over the intervening period. Then we can compare B vs C over the 1970-2000 period to relate them together. Knowing these three differences, we can build a chain of relationships that links C(1970-2000) to B(1970-2000) to B(1930-1960) to A(1930-1960).
Something like this:
‘Chaining’ station histories together
If we have this sort of overlap we can ‘stitch together’ a time series stretching beyond more than one station’s data. We have the means to carry forward our data series beyond the life (and death) of any one station, as long as there is enough time overlap between them. But we can only do this if we are using our Average of Anomalies method. The Anomaly of Averages method doesn’t allow us to do this.
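A sketch of that chaining in Python. This is an illustration of the principle only, with made-up toy records, not how any of the production codes actually implement it:

```python
def overlap_offset(base, other):
    """Difference of means over the years both series share."""
    common = set(base) & set(other)
    if not common:
        raise ValueError("series do not overlap and cannot be chained")
    return (sum(base[y] for y in common) / len(common)
            - sum(other[y] for y in common) / len(common))

def chain(series_list):
    """Shift each later series so its overlap-period mean matches the
    record built so far, then merge; years already present keep the
    earlier station's value."""
    merged = dict(series_list[0])
    for s in series_list[1:]:
        off = overlap_offset(merged, s)
        for year, t in s.items():
            merged.setdefault(year, t + off)
    return merged

# Toy data: A and B overlap 1930-1950, B and C overlap 1980-1990.
a = {1930: 10.0, 1940: 10.0, 1950: 10.0}
b = {1930: 12.0, 1940: 12.0, 1950: 12.0, 1980: 12.5, 1990: 12.5}
c = {1980: 5.0, 1990: 5.0, 2000: 5.6}
record = chain([a, b, c])
```

Even though C never overlaps A, its 2000 reading ends up expressed on A’s scale (here 11.1), because the offsets are carried through B.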
So where has this got us in looking at our problems? The Average of Anomalies approach directly addresses problem #1. Area Weighted Averaging addresses problem #2. Teleconnection and comparing a station to itself help us hugely with problem #3 – if fog provides local insulation, it probably always did, so any changes are less related to the local conditions and more to underlying climate changes. Local station bias issues still need to be investigated, but if they don’t change over time, then they don’t introduce ongoing problems. For example, if a station is too close to an artificial heat source, then this biases that station’s temperature. But if this heat source has been a constant bias over the life of the station, then it cancels out when we calculate the anomaly for the station. So this method also helps us with (although doesn’t completely solve) problem #4. In contrast, using the Anomaly of Averages method, local station biases and erratic station availability will compound each other, making things worse.
So this looks like a better method.
Which is why all the surface temperature analyses use it!
The Average of Anomalies approach is used precisely because it avoids many of the problems and pitfalls.
In Part 1B I will look at how the main temperature records actually compile their trends.

Dave Springer
October 24, 2011 3:39 pm

@Smokey
“Dave Springer, The 1°C number is a ballpark figure, because we don’t know for certain what the sensitivity number is.”
Sensitivity is another word for feedbacks. 1C is what you get from a CO2 doubling in the absence of feedbacks and is a hard number that you can take to the bank in a dry atmosphere over dry land. Now you know. Of course there’s no such thing in the real world as a totally dry, cloud-free atmosphere over dry land, so this is the maximum theoretical no-feedback effect. Some arid regions may approximate it fairly well. Those regions will also approximate a black body fairly well too. Once liquid water or water vapor enters the picture all bets are off. Given that the earth is 71% covered in water, that pretty much means that 71% of the bets are off. For instance, there is very little atmospheric greenhouse effect. The earth is warmer than the moon not because of its atmosphere but because the surface is 71% covered by water that averages 12,000 feet deep. The atmosphere’s primary role is establishing a surface pressure at which water has a wide temperature range in which it can exist as a liquid. If the ocean weren’t there this planet would be as cold as the moon, which has an average temperature of -23C.

Glenn Tamblyn
October 24, 2011 3:40 pm

Of Averages & Anomalies, Part 1B
In Part 1A we looked at how a reasonable temperature record needs to be compiled. If you haven’t already read part 1A, it might be worth reading it before 1B.
There are four major surface temperature analysis products produced at present: GISTemp from the Goddard Institute for Space Studies (GISS); HadCRUT, a collaboration between the Met Office Hadley Centre and the University of East Anglia Climatic Research Unit; the US National Oceanic and Atmospheric Administration’s (NOAA) National Climatic Data Center (NCDC); and the Japanese Meteorological Agency (JMA). Another major analysis effort is currently underway, the Berkeley Earth Surface Temperature project (BEST), but as yet their results are preliminary.
GISTemp
We will look first specifically at the product from GISS, at how they do their Average of Anomalies, and their Area Weighting scheme. This product dates back to work undertaken at GISS in the early 1980s, with the principal paper describing the method being Hansen & Lebedeff 1987 (HL87).
The following diagram illustrates the Average of Anomalies method used by HL87
Reference Station method for comparing stations series
This method is called the ‘Reference Station Method’. One station in the region to be analysed is chosen as station 1, the reference station. The next stations are 2, 3, 4, etc., to ‘N’. The average for each pair of stations (T1, T2), (T1, T3), etc. is calculated over the common reference period using the data series for each station T1(t), T2(t), etc., where “t” is the time of the temperature reading. So for each station their anomaly series is the individual readings – Tn(t) – minus the average value of Tn.
“δT” is the difference between their two averages. Simply calculating the two averages is sufficient to produce two series of anomalies, but GISTemp then shifts T2(t) down by δT, combines the values of T1(t) and T2(t) to produce a modified T1(t), and generates a new average for this (the diagram doesn’t show this, but the paper does describe it). Why are they doing this? Because this is where their Area Averaging scheme is included.
When combining T1(t) and T2(t) together, after adjusting for the difference in their averages, they still can’t just add them because that wouldn’t include any Area Weighting. Instead, each reading is multiplied by an Area Weighting factor based on the location of each station; these two values are then added together and divided by the combined area weighting for the two stations. So series T1(t) is now modified to be the area weighted average of series T1(t) and T2(t). Series T1(t) now needs to be averaged again since the values will have changed. Then they are ready to start incorporating data from station 3 etc. Finally, when all the stations have been combined together, the average is subtracted from the now heavily-modified T1(t) series, giving us a single series of Temperature Anomalies for the region being analysed.
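The merge-and-shift procedure described above can be sketched in code. This is a minimal illustration, not GISTemp's actual implementation: the function and variable names are mine, station records are simplified to year-to-temperature dicts, and the area weights are supplied as plain numbers.

```python
def combine_reference(records, weights):
    """Sketch of the Reference Station Method: merge station series,
    one at a time, into a single area-weighted anomaly series.
    records: list of {year: temperature} dicts; records[0] is the reference.
    weights: one illustrative area weight per station."""
    combined = dict(records[0])          # station 1 is the reference series
    w_total = weights[0]
    for rec, w in zip(records[1:], weights[1:]):
        common = set(combined) & set(rec)
        if not common:
            continue                     # no overlap, cannot compute delta-T
        # delta-T: difference of the two means over the common period
        dT = (sum(rec[y] for y in common) / len(common)
              - sum(combined[y] for y in common) / len(common))
        for y in rec:
            shifted = rec[y] - dT        # shift the new station down by delta-T
            if y in combined:            # area-weighted merge of the two values
                combined[y] = (combined[y] * w_total + shifted * w) / (w_total + w)
            else:
                combined[y] = shifted    # new station extends the record
        w_total += w
    # finally subtract the overall average, giving a series of anomalies
    mean = sum(combined.values()) / len(combined)
    return {y: t - mean for y, t in combined.items()}
```

Note how a station that runs warmer than the reference contributes only its year-to-year *changes* after the delta-T shift, which is the whole point of the method.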
So how are the Area Weighting values calculated? And how does GISTemp then average out larger regions or the entire globe?
They divide up the Earth into 80 equal-area boxes – this means each box has sides of around 2500 km. Then they divide each box into 100 equal-area smaller sub-boxes.
GISTemp Grids
They then calculate an anomaly for each sub-box using the method above. Which stations get included in this calculation? Every station within 1200 km of the centre of the sub-box. And the weighting for each station simply diminishes in proportion to its distance from the centre of the sub-box. So a station 10 km from the centre will have a weighting of 1190/1200 = 0.99167, while a station 1190 km from the centre will have a weighting of 10/1200 = 0.00833. In this way, stations closer to the centre have a much larger influence, while those farther away have an ever smaller influence. And this method can be used even if there are no stations directly in the sub-box, inferring its result from surrounding stations.
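The linear distance taper just described is simple enough to write down directly. A minimal sketch (names are mine, not GISTemp's): weight falls from 1 at the sub-box centre to 0 at the 1200 km limit, and a sub-box anomaly is the weighted mean of the in-range stations.

```python
RADIUS_KM = 1200.0

def station_weight(dist_km):
    """Linear taper: (1200 - d) / 1200, zero beyond the radius."""
    return max(0.0, (RADIUS_KM - dist_km) / RADIUS_KM)

def subbox_anomaly(stations):
    """Weighted mean anomaly for a sub-box.
    stations: list of (distance_km, anomaly) pairs."""
    pairs = [(station_weight(d), a) for d, a in stations]
    w_sum = sum(w for w, _ in pairs)
    if w_sum == 0.0:
        return None                      # no station within 1200 km
    return sum(w * a for w, a in pairs) / w_sum
```

For example, `station_weight(10)` gives 1190/1200 ≈ 0.99167 and `station_weight(1190)` gives 10/1200 ≈ 0.00833, matching the figures above.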
In the event that stations are extremely sparse and there is only one station within 1200 km, that reading would be used for the sub-box. But as soon as you have even a handful of stations within range, their values will quickly start to balance out the result, and closer stations will tend to predominate. Then the sub-boxes are simply averaged together to produce an average for the larger box – we can do this without any further area weighting because we have already used area weighting within the sub-boxes and they are all of equal area. Then in turn the larger boxes can be averaged to produce results for latitude bands, hemispheres, or globally. Finally these results are then averaged over long time periods.
Remember our previous discussion of Teleconnection, and that long term climates are linked over significant distances. This is why this process can produce a meaningful result even when data is sparse. On the other hand, if we were trying to use this method to estimate daily temperatures in a sub-box, the results would be meaningless. The short term chaotic nature of daily weather would swamp any longer range relationships. But averaged out over longer time periods and larger areas, the noise starts to cancel out and underlying trends emerge. For this reason, the analysis used here will be inherently more accurate when looked at over larger times and distances. The monthly anomaly for one sub-box will be much less meaningful than the annual anomaly for the planet. And the 10-year average will be more meaningful again.
And why the range of 1200 km? This was determined in HL87 based on the correlation coefficients between stations shown in the earlier chart. The paper explains this choice:
“The 1200-km limit is the distance at which the average correlation coefficient of temperature variations falls to 0.5 at middle and high latitudes and 0.33 at low latitudes. Note that the global coverage defined in this way does not reach 50% until about 1900; the northern hemisphere obtains 50% coverage in about 1880 and the southern hemisphere in about 1940. Although the number of stations doubled in about 1950, this increased the area coverage by only about 10%, because the principal deficiency is ocean areas which remain uncovered even with the greater number of stations. For the same reason, the decrease in the number of stations in the early 1960s, (due to the shift from Smithsonian to Weather Bureau records), does not decrease the area coverage very much. If the 1200-km limit described above, which is somewhat arbitrary, is reduced to 800 km, the global area coverage by the stations in recent decades is reduced from about 80% to about 65%.”
Effect of station count on area coverage
It’s a trade-off between how much coverage we have of the land area and how good the correlation coefficient is. Note that the large increase in contributing station numbers in the 1950s and subsequent drop off in the mid-1970s does not have much of an impact on percentage station coverage – once you have enough stations, more doesn’t improve things much. And remember, this method only applies to calculating surface temperatures on land; ocean temperatures are calculated quite separately. When calculating the combined Land-Ocean temperature product, GISTemp uses land-based data in preference as long as there is a station within 100 km. Otherwise it uses ocean data. So in large land areas with sparse station coverage, it still calculates using the land-based method out to 1200 km. However, for an island in the middle of a large ocean, the land-based data from that island is only used out to 100 km. After that point, the ocean based data prevails. In this way data from small islands don’t influence the anomalies reported for large areas of ocean when ocean temperature data is available.
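The land-versus-ocean preference rule described above is essentially a two-threshold decision. A minimal sketch under my own naming (this is an illustration of the rule as stated, not GISTemp's code): land data is preferred when a station lies within 100 km; otherwise ocean data is used if it exists; failing that, land data applies out to its full 1200 km reach.

```python
LAND_LIMIT_KM = 100.0   # land data preferred inside this radius

def pick_source(nearest_station_km, ocean_anomaly, land_anomaly):
    """Choose the data source for one grid cell.
    Returns a (source_name, anomaly) pair; anomalies may be None
    when that source has no data for the cell."""
    if nearest_station_km <= LAND_LIMIT_KM and land_anomaly is not None:
        return ("land", land_anomaly)        # a station is close: use land
    if ocean_anomaly is not None:
        return ("ocean", ocean_anomaly)      # otherwise prefer ocean data
    # no ocean data (e.g. deep inland, or ice-covered sea):
    # fall back to the land-based estimate out to 1200 km
    return ("land", land_anomaly)
```

This is why a small island's station influences at most a 100 km circle when ocean data is available, yet a sparse inland station can still inform cells much farther away.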
One aspect of this method is the order in which stations are merged together. This is done by ordering the list of stations used in calculating a sub-box by those that have the longest history of data first, with the stations with shorter histories last. So they are merging progressively shorter data series into a longer series. In principle, the method used to select the order in which the stations are being processed could have a small effect on the result. Selecting stations closer to the centre of the sub-box first is an alternative approach. HL87 considered this and found that the two techniques produced differences that were two orders of magnitude smaller than the observed temperature trends. And their chosen method was found to produce the lowest estimate of errors. They also looked at the 1200 km weighting radius and considered alternatives. Although this produced variations in temperature trends for smaller areas, it made no noticeable difference to zonal, hemispheric, or global trends.
The Others
The other temperature products use somewhat simpler methods.
HadCRUT (or really CRU, since the Hadley Centre contribution is Sea Surface Temperature data) calculates based on grid boxes that are xº by xº, with the default being 5º by 5º. At the equator this means they are approximately 550 x 550 km, although much smaller in the polar regions. They then take a simple average of all the anomalies for every station within that grid box. This is a much simpler area averaging scheme. Because they aren’t interpolating data like GISS, the availability of stations limits how small their grid size can go; otherwise too many of their grids would have no station at all. And in grid boxes where there is no data available, they do not calculate anything. So they aren’t extrapolating / interpolating data. But equally, any large-scale temperature anomaly calculation, such as for a hemisphere or the globe, effectively assumes that any uncalculated grid boxes are all actually at the calculated average temperature. However, to build results for larger areas, they need to area-weight the averages of the grid boxes, since box size depends on latitude.
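The grid-box scheme just described can be sketched in a few lines. A common way to area-weight fixed-degree boxes is by the cosine of the box's central latitude, which is proportional to its true area; I'm assuming that convention here for illustration (names and structure are mine, not CRU's).

```python
import math

def global_mean(grid_boxes):
    """Sketch of the simpler grid-box scheme: average all station
    anomalies inside each box, then combine boxes weighted by the
    cosine of the box's central latitude (proportional to its area).
    grid_boxes: list of (centre_lat_deg, [station anomalies])."""
    num = den = 0.0
    for lat, anoms in grid_boxes:
        if not anoms:
            continue                     # empty boxes are skipped, not infilled
        w = math.cos(math.radians(lat))
        num += w * (sum(anoms) / len(anoms))
        den += w
    return num / den if den else None
```

Skipping empty boxes is exactly the behaviour noted above: an uncalculated box ends up implicitly assumed to sit at the average of the boxes that were calculated.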
NCDC and JMA also use a 5º by 5º grid size and simply average the anomalies for all stations within each grid box; area-weighted averaging is then used to combine grid boxes. All three also combine land and sea anomaly data. Unlike HadCRUT & NCDC, which use the same ocean data, the JMA maintains its own separate database of SST data.
In this article and the Part 1A, I have tried to give you an overview of how and why surface temperatures are calculated, and why calculating anomalies then averaging them is far more robust than what might seem the more intuitive method of averaging then calculating the anomaly.
In parts 2A & 2B we will look at the implications of this for the various questions and criticisms that have been raised about the temperature record.

Glenn Tamblyn
October 24, 2011 3:41 pm

Of Averages & Anomalies Part 2A
In Part 1A and Part 1B we looked at how surface temperature trends are calculated, the importance of using Temperature Anomalies as your starting point before doing any averaging and why this can make our temperature record more robust.
In this Part 2A and a later Part 2B we will look at a number of the claims made about ‘problems’ in the record and how misperceptions about how the record is calculated can lead us to think that it is more fragile than it actually is. This should also be read in conjunction with earlier posts here at SkS on the evidence here, here & here that these ‘problems’ don’t have much impact. In this post I will focus on why they don’t have much impact.
If you hear a statement such as ‘They have dropped stations from cold locations so the result now has a false warming bias’ and your first reaction is, yes, that would have that effect, then please, if you haven’t done so already, go and read Part 1A and Part 1B, then come back here and continue.
Part 2A focuses on issues of broader station location. Part 2B will focus on issues related to the immediate station locale.
Now to the issues. What are the possible problems?
Urban Heat Island Effect
This is the first possible issue we will consider. The Urban Heat Island Effect (UHI) is a real phenomenon. In built-up urban areas the concentration of heat-storing materials in buildings, roads, etc. such as concrete, bitumen, bricks and so on, and heat sources such as heaters, air-conditioners, lighting, cars, etc. all combine to produce a local ‘heat island’: a region where temperatures tend to be warmer than the surrounding rural land. This is well known and you can even see its effects just by looking at reports of daily temperatures. If we have weather stations inside such a heat island, they will record higher temperatures than they would if they were in the surrounding countryside. If we don’t make some sort of compensation for this, then this could be a real source of bias in our result, and since we never see ‘cool islands’, its bias would be towards warming.
This is why the major temperature records include some method for compensating for it, either by applying a compensating adjustment to the broad result they calculate, or by trying to identify stations that have such an issue and adjusting them. GISTemp for example seeks to identify such urban stations and then adjust them so that the urban station’s long-term trend is brought into line with adjacent rural stations. There is also the question of identifying which stations are ‘urban’. Previous methods relied on records of how stations were classified, but this can change over time as cities grow out into the country, for example. GISTemp recently started using satellite observations of lights at night to identify urban regions – more light means more urban.
What other factors might limit or exaggerate the impact a heat island might have?
Is the UHI effect at the station growing or changing over time? Has the UHI effect in a city got steadily warmer, or has the area affected by the heat island expanded while the magnitude of the effect hasn’t changed? This will depend on things like how the density of the city changes, what sort of activities happen where, etc. A station that has always been inside a city, say at an inner-city university, will only be affected if the magnitude of the heat island effect increases. If the UHI at a site stays constant, then it isn’t a bias to the trend.
On the other hand, a previously rural station that has been engulfed by an expanding city will most definitely feel some warming and will show a trend during the period of its engulfing, although again how much will depend on circumstances. And this will look like a trend. If it has been engulfed by low density suburbia and its piece of ‘country’ has been preserved as a large park around it, the impact will be lower than if a complete satellite city has sprung up around it and it is on the pavement next to a 6 lane expressway.
But remember: the existing products include a compensation to try and remove UHI; UHI only impacts our long-term temperature results if the magnitude of the effect is growing; and each station’s data still has to be added to the results for all other stations using Area Weighted Averaging. And since the vast majority of the Earth’s land area isn’t urban, UHI can only have a limited impact on the final result anyway. And the oceans aren’t affected by UHI, and they are 70% of the Earth’s surface.
Airports
One particular example sometimes cited is the number of stations located at airports, with images being painted of ‘all those hot jet exhausts’ distorting the results. Firstly we are interested in daily average temperatures not instantaneous values. So the station would need to get hit by a lot of jets.
Think about a medium-sized airport. At its busiest it might have one aircraft movement (take off or landing) per minute. Each takeoff involves less than a minute at full power while the rest of the take off and landing, 10 minutes or so of taxiing, is at relatively low power. For the rest of the one to several hours that the aircraft is on the ground, its engines are off. So for each jet at the airport, its average power output over its entire stay there is a very tiny percentage of its full power. And many airports have night-time curfews when no aircraft are flying. So how much do the jets contribute to any bias?
Consider instead that the airport is like a mini-city – buildings and lots of concrete and bitumen tarmac. But also lots of grassed land in between. So the real impact of an airport on any station located there will be more like a mini-UHI effect. But how much does an airport grow? Usually they have a fixed area of land set aside for them. The number of runways and taxiways doesn’t change much. And the area of apron around the terminal buildings doesn’t change that much over time. So the magnitude of this UHI effect is unlikely to change greatly over time unless the airport is growing rapidly.
If an airport is located in a rural area, then any changes to the climate at the airport are going to be moderated by effects from the surrounding countryside, since it is, after all, a mini-city, not a city. If an airport has always been inside an urban area, such as La Guardia in New York, then it is going to be adjusted for by the UHI compensations described above. And a rural airport that has been enveloped by its city will eventually have a UHI compensation applied. So the airports that are most likely to have a significant impact need to be and remain rural, be so big that moderating effects from the surrounding countryside don’t have much effect, and be expanding so that their bias keeps growing and thus isn’t compensated out by the analysis method. Then they need to dominate the temperature record for large areas, with few other adjacent stations. And there are no airports on the oceans. So any airport that is likely to have an impact needs to be near a large growing city to generate the large and increasing traffic volumes that would make the airport large and growing, yet in a region that is otherwise sparsely populated so there are few other stations. And most large growing cities tend to be near other such cities.
Islands
There is one special case sometimes cited in relation to GISTemp: islands. If the only station on an island in the ocean is at an airport or has ‘problems’, that island’s data will then supposedly be used for the temperature of the ocean up to 1200 km away in all directions, extending any problems over a large area. This claim misses one key point: the temperature series used to determine global warming trends is the combined Land and Ocean series. And when land data isn’t available, such as around an island, ocean data is substituted instead.
This is some data from a patch of ocean in the South Pacific (OK, it’s from around Tahiti; I’m a sucker for exotic locations). I calculated this by using GISTemp to calculate temperature anomalies for grids around the world for 1900 to 2010, using in turn land-only data, ocean-only data, and combined land & ocean data. From the three values obtained at each grid point, I then calculated the percentage contribution of each of the two sources to the combined land/ocean data. The following graph shows the percentage contribution of the land data at each grid point, and for reference below I have listed the temperature stations in the area with their Lat/Long. Obviously this isn’t coming just from land-only data, and in grids too far from land the % contribution of land data falls to zero. Each 2º by 2º grid is approximately 200 x 200 km, much less than the 1200 km averaging radius used by GISTemp.
% Land Contribution around Tahiti
There aren’t enough stations
A common criticism is that there aren’t enough temperature stations to produce a good quality temperature record. A related criticism is that the number of stations being used in the analysis has dropped off in recent decades and that this might be distorting the result. On the Internet comments such as ‘Do you know how many stations they have in California?’ – By implication not enough – are not uncommon. This seems to reflect a common misperception that you need large numbers of stations to adequately capture the temperature signal with all its local variability.
However, as I discussed in Part 1A, the combination of calculating based on Anomalies and the climatological concept of Teleconnection means that we need far fewer stations than most people realise to capture the long-term temperature signal. If this isn’t clear, perhaps re-read Part 1A.
So how few stations do we need to still get an adequate result? Nick Stokes ran an interesting analysis using just 61 stations with long reporting histories from around the world. His results, plotting his curve against CRUTEM3, although obviously much noisier than the full global record, still produced a recognisably similar temperature curve, even with just 61 stations worldwide!
Just 61 Stations!
Just 61 Stations – Smoothed!
So even a handful of stations get you quite close. What reducing station numbers does is diminish the smoothing effect that lots of stations gives. But the underlying trend remains quite robust even with far fewer stations. What is perhaps more important is if the reduction in station numbers reduces ‘station coverage’ – the percentage of the land surface with at least one station within ‘x’ kilometres of that location. But as we discussed in Part 1A, Teleconnection means that ‘x’ can be surprisingly large and still give meaningful results. And with Anomaly based calculations, the absolute temperature at the station isn’t relevant; it is the long term change in the station we are working with.
The Thermometers are Marching!
A related criticism is that the decline in the number of stations used has disproportionately removed stations from colder climates and thus introduced a false warming bias to the record. This has been labelled “The March of the Thermometers”, with the secondary ‘conspiracy theory’ type claim that this is intentional, all part of the ‘fudging’ of the data. This can seem intuitively reasonable – surely if you remove cold data from your calculations, the result will look warmer. And if that is the result then, hey, that could be deliberate.
But the apparent reasonableness of this idea rests on a mathematical misconception which we discussed in detail in Part 1A. If we average together the absolute temperatures from all the sites then most certainly removing colder stations would produce a warm bias. Which is one of the most important reasons why it isn’t done that way! Using that approach (what I called the Anomaly of Averages method) would produce a very unstable, unreliable temperature record indeed.
Instead what is done is to calculate the Anomaly for each station relative to its own history then average these anomalies (what I called the Average of Anomalies method).
Since we are interested in how much each station has changed compared to itself, removing a cold station will not cause a warming bias. Removing a cooling station would! The hottest station in the world could still be a cooling station if its long term average was dropping. 50 °C down to 49 °C is still cooling. Removing that station would add a warming bias. However, removing a station whose average has gone from -5 °C up to -4 °C would add a cooling bias since you have removed a warming station.
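The point above is easy to demonstrate numerically. A minimal sketch with invented numbers (the `trend` helper is my own simplification: the change from the first half of a record to the second):

```python
def trend(series):
    """Change from the first-half mean to the second-half mean
    of a station's record - a crude stand-in for its anomaly trend."""
    h = len(series) // 2
    return sum(series[h:]) / (len(series) - h) - sum(series[:h]) / h

hot_but_cooling = [50.0, 50.0, 49.0, 49.0]    # hot in absolute terms, cooling
cold_but_warming = [-5.0, -5.0, -4.0, -4.0]   # cold in absolute terms, warming

# Average of Anomalies: average the *changes*, not the absolute values.
both = [trend(hot_but_cooling), trend(cold_but_warming)]
avg_both = sum(both) / len(both)              # the -1 and +1 trends cancel

# Removing the cold (but warming) station leaves only the cooling one,
# so the result is biased *cool*, not warm:
avg_without_cold = trend(hot_but_cooling)
```

Averaging absolute temperatures instead would jump from 22.5 °C to 49.5 °C when the cold station was dropped, which is exactly the instability the Average of Anomalies method avoids.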
We are averaging the changes in the stations, not their absolute values. And remember that Teleconnection means that stations relatively close to each other tend to have climates that follow each other. So removing one station won’t have much effect if ‘adjacent’ stations are showing similar long term changes. So for station removals to add a definite warming bias we would need to remove stations that have or are showing less warming, remove other adjacent stations that might be doing the same, but leave any stations that are showing more warming. If this station removal was happening randomly, there is no reason to think that any effect from this would be anything other than random, not a bias.
If this were part of some ‘wicked scheme’, then the schemers would need to carefully analyse all the world’s stations, look for the patterns of warming so they could cherry-pick the stations that would have the best impact for their scheme, and then ‘arrange’ for those stations to become ‘unavailable’ from the supplier countries, while leaving the stations that support their scheme in place. And why would anyone want to remove stations in the Canadian Arctic, for example, as part of their ‘scheme’? Some of the highest rates of warming in the world are happening up there. Why remove them to make the warming look higher? Maybe someone is scheming. I’ll let you think about how likely that is.
But what if the pattern of station removals is driven by other factors – physical accessibility of the stations, operating budgets to keep them running etc.? Wouldn’t the stations more likely to be dropped be the ones in remote, difficult to reach, and thus expensive locations? Like mountains, arctic regions, poorer countries? Which are substantially where the ‘biasing’ stations are alleged to have disappeared from. If you drop ‘difficult’ stations you are very likely to remove Arctic and Mountain stations.
Could it also be that the people responsible for the ongoing temperature record realise that you don’t need that many stations for a reliable result and thus aren’t concerned about the decline in station numbers – why keep using stations that aren’t needed if they are harder to work with?
For example, here are the stats on stations used by GISTemp. The number of stations rose during the ’60s and dropped off during the ’90s, but percentage coverage of the land surface only dropped off slightly. Where coverage is concerned, it’s not quantity that counts but quality.
Coverage from GISS
Station coverage from GISTemp
GISTemp ‘extrapolates’ 1200 kilometers
One particular criticism made of the GISTemp method is that ‘they use temperatures from 1200 km away’, usually spoken with a tone of incredulity and some suggestion that this number was plucked out of thin air.
Station Correlation Scatter Plots (HL87)
As explained in Part 1A and Part 1B, the 1200 km area weighting scheme used by GISTemp is based on the known and observed phenomenon of Teleconnection: that climates are connected over surprisingly long distances. The 1200 km range used by GISTemp was determined empirically to give the best balance between correlation between stations and area of coverage.
Figure 3, from Hansen & Lebedeff 1987 (apologies for the poor quality, this is an older paper), plots the correlation coefficients versus separation for the annual mean temperature changes between randomly selected pairs of stations with at least 50 common years in their records. Each dot represents one station pair. They are plotted according to latitude zones: 64.2-90N, 44.4-64.2N, 23.6-44.4N, 0-23.6N, 0-23.6S, 23.6-44.4S, 44.4-64.2S.
When multiple stations are located within 1200 km of the centre of a grid point, the value calculated is the weighted average of their individual anomalies. With the linear weighting described earlier, a station 10 km from the centre has over 100 times the weighting of a station 1190 km from the centre. And as discussed under the section on islands previously, for small islands the ocean data predominates, not the land data.
One area of some contention is temperatures in the Arctic Ocean. Unlike the Antarctic, the Arctic does not have temperature stations out on the ice. So the nearest temperature stations are on the coasts around the Arctic Ocean, on Greenland, and on some islands. And ocean temperature data can’t be used instead, since this is not available for the Arctic Ocean.
Other temperature products don’t calculate a result for the Arctic Ocean. The result is that when compiling the Global trend, the headline figure most people are interested in, this method effectively assumes that the Arctic Ocean is warming at the same rate as the global average. Yet we know the land around the Arctic is warming faster than the global average, so it seems unreasonable to suggest that the ocean isn’t. Satellite temperature measurements up to 82.5 N support this, as does the decline of Arctic sea ice here, here & here.
So it seems reasonable that the Arctic Ocean would be warming at a rate comparable to the land. Since the GISTemp method is based on empirical data regarding teleconnection, projecting this out seems to me the better option since we know the alternative method will produce an underestimate. Many parts of the Arctic Ocean are significantly less than 1200 km from land, with the main region where this isn’t the case being between Alaska & East Siberia and the North Pole.
Certainly the implied suggestion that GISTemp’s estimates of Arctic Ocean anomalies are false isn’t justified. It may not be perfect but it is better than any of the alternatives.
In this post we have looked at some of the reasons why the temperature trend may be more robust with respect to factors affecting the broader region in which stations are located than might seem the case. The method used to calculate temperature trends does seem to provide good protection against these kinds of problems.
In Part 2B we will continue, looking at issues very local to a station and why these aren’t as serious as many might think…

Glenn Tamblyn
October 24, 2011 3:42 pm

Of Averages & Anomalies Part 2B
In Part 1A and Part 1B we looked at how surface temperature trends are calculated, the importance of using Temperature Anomalies as your starting point before doing any averaging and why this can make our temperature record more robust.
In Part 2A and in this Part 2B we will look at a number of the claims made about ‘problems’ in the record, and how misperceptions about how the record is calculated can lead us to think that it is more fragile than it actually is. This should also be read in conjunction with earlier posts here at SkS on the evidence here, here & here that these ‘problems’ don’t have much impact. In this post I will focus on why they don’t have much impact.
If you hear a statement such as ‘They have dropped stations from cold locations so the result now has a false warming bias’ and your first reaction is, yes, that would have that effect, then please, if you haven’t done so already, go and read Part 1A and Part 1B, then come back here and continue.
Part 2A focused on issues of broader station location. Part 2B focuses on issues related to the immediate station locale.
Now to the issues. What are the possible problems?
One issue that has received considerable attention is the question of the ‘quality’ of surface observation stations, particularly in the US. How well do the stations in the observation network meet quality standards with respect to location and avoidance of local biasing issues, and how much might this impact the accuracy of the temperature record?
The upshot of investigations into this is that, at least in the US, a substantial proportion of stations have poor location quality ratings. However, analysis of the impact of the site quality problems by a number of independent analysts suggests that these problems have had almost no impact on the accuracy of the long term temperature record. How could this be? Surely that is the whole point of these quality rankings – poor quality sites can give bad results. So why wouldn’t they?
The definition of the best quality sites, Category 1, is as follows:
“Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19º). Grass/low vegetation ground cover <10 centimeters high. Far from large bodies of water. No shading when the sun elevation is >3 degrees.”
Down to Category 5:
“(error ≥ 5ºC) – Temperature sensor located next to/above an artificial heating source, such as a building, roof top, parking lot, or concrete surface.”
Let’s consider a few of these factors. And remember, we are interested in factors that have an impact on long-term changes in the temperature readings at a site. If a factor results in a bias in the reading but this bias does not change over time, then it will not impact the analysis, since we are interested in changes – static biases get cancelled out in the analysis and have no long-term impact. Firstly, let’s look at the standard enclosure used for a meteorological measurement station – the Stevenson Screen:
Stevenson Screen
The screen is designed to isolate the instruments inside from outside influences, particularly radiant effects from its surrounds, and from rain. It is usually made from a material such as wood that is a fairly good insulator and isn’t going to change temperature too much because of radiant heating/cooling from its surroundings. The double-slatted design suppresses air movement from wind through the enclosure, minimising wind chill effects and restricting rain entry onto the instruments. It also means that any air rising from beneath the enclosure isn’t preferentially drawn into or out of the box. And the design of the base allows air movement from below while shielding the instruments from radiation from below.
So what are the problems that can lower the category of a temperature monitoring station below 1?
Slope > 19º
A problem may arise if the station isn’t located on sufficiently flat ground. This can produce temperature-driven air movements, possibly resulting in warmer air moving towards the station. However, unless there have been really major earthworks around the site, this factor doesn’t change over time and is unlikely to have a long-term changing impact.
Grass/low vegetation ground cover >10 centimeters high
This can impact on air movements around the station. Also, if the vegetation changes substantially – low grass to shrubs and trees – then this could change water evaporation rates around the station and alter air temperatures. Major increases in vegetation might have a cooling effect on the station due to evaporative effects, while declines in vegetation back to Category 1 standards might have a warming impact. However, unless there is a regular and progressive change in the vegetation pattern around the station, this would not produce an ongoing change of any bias. If maintenance of vegetation around the station over its lifetime has been poor or erratic, then the bias may fluctuate up and down. This would create shorter term fluctuations in the bias but this would tend to cancel out in the longer term.
Shading when the sun elevation >3 degrees
If the degree to which the station and its surrounds are shaded over the course of the day changes, this can alter local heating. Primarily this is going to impact as a result of shading causing differing heating/cooling of the ground under/around the enclosure, resulting in changes in the temperature and flow rate of rising air up through the enclosure. Unless the cause of the shading varies over long, multi-year time frames such as trees growing or buildings rising, the shading effect is not a long-term changing biasing factor. Depending on the cause of the shading, this may cause changes in the bias over the course of a day and over the seasons, but as a multi-year bias, this would remain constant.
Not far enough from large bodies of water
This too is a static bias. The body of water would have a cooling effect due to evaporation that would vary with daily weather conditions and the seasons but would not be a multi-year biasing factor.
Static artificial heating sources
These are essentially surfaces such as brick, concrete, bitumen, etc. that can act as local heat stores, greater than normal grass-covered earth would be, and can then release heat either radiantly or by heating the surrounding air. These can be vertical structures, horizontal surfaces away from the enclosure, or a horizontal surface beneath the enclosure. The enclosures are designed to minimise radiant heat penetration into the enclosure from its surrounds, so the major impact of such static heating sources is going to be from heating surrounding air, which may then pass through the enclosure. This will be worst when such a surface is very close to the enclosure, particularly beneath it, generating rising warmer air into the box.
Also an important factor will be the extent to which any such surfaces tend to form a partial ‘room’ around the enclosure, restricting horizontal air movement. Any such surface will tend to heat the air near/above it, causing that air to rise. More air is then drawn in to replace this, potentially flowing over or through the enclosure. If the distances involved and the geometry of the site result in this new air being warmer than the general surroundings, this could provide a warming bias for the site. Conversely, if this replacement air is being drawn from a location that isn’t warmer, then there may be no bias at all, possibly even a cooling effect. Ambient winds may also blow warmed air towards, or away from, the enclosure, depending on wind direction. And the effect of any such bias will vary over the course of the day and the seasons.
However, since the main source of any such bias is the amount and layout of such surfaces and sunlight, these biases won’t change over multi-year time frames unless the area of the surfaces is changing. This could be due to construction, or changes in shading of these surfaces such as by trees growing or building construction nearby. And some of these shading changes could actually reduce the bias over time, resulting in a long-term cooling trend. Also to be considered is whether the site is included within a region that is or becomes urban, in which case the UHI adjustments mentioned previously may cancel out any bias completely. And we still have to allow for area weighting of data from such a site when averaged over the Earth’s land surface. And this doesn’t affect the oceans at all.
Dynamic artificial heating sources
These are similar to the static surfaces, but they are things that actively pump heated air into the environment. Things such as Air Conditioner condensers, Exhaust fans, Heater flues, Cooling towers, Vehicle exhausts, etc. As with the static sources, a key issue here is geometry. They are generating hot air which will tend to rise unless winds blow it towards the enclosure. Does any such device actively blow warm air towards the enclosure? Or does its operation tend to draw air in from elsewhere and over the enclosure? How distant is the device and what is the geometry?
Also, how long does the device operate for: 24/7 or intermittently? A station may be next to a large car park, but unless there is continuous activity, even thousands of cars have no extra impact if they are all parked and empty. Does an Air-conditioner run 24/7 or just 9-5 weekdays? Is it a reverse cycle A/C unit also used for heating in winter or at night, in which case it will pump out colder air, which doesn’t rise? How much do these activities vary with the seasons? And ultimately, do these activities grow in magnitude over multi-year time frames? Otherwise they again contribute to short-term intra-annual biasing but not multi-year effects. And they may be cancelled out anyway by UHI compensations.
The US network certainly isn’t as good as it should be. There are certainly factors operating there that influence short-term daily and seasonal readings, and these may have important implications for use in daily Meteorological forecasting, which relies on absolute temperatures. However, for long-term multi-year Climatological uses, it is perhaps easy to overestimate the impact of these problems.
It is easy to understand how our subjective impressions, standing near a poor quality site, seeing an A/C roaring away or feeling the radiant heat from a concrete parking lot nearby, could lead us to think this is a big issue. But the combination of the screening properties of the enclosures, long-term averaging, anomaly-based averaging, and UHI compensation will certainly tend to remove many biases that do not have long-term trend changes. And area averaging over the Earth’s land surface, combined with the fact that most of the Earth is water, reduces any impact even further.
So it isn’t surprising that the long term temperature trend data doesn’t seem to be significantly affected by station quality issues. That is not to say that there may not be noticeable impacts on shorter term measures – local and seasonal trends and possibly daily temperature range (DTR) effects for example. But for the headline Global Temperature Anomaly, which is a main indicator of Climate Change, station quality issues appear to be a very minor issue, something that ‘all comes out in the wash’.
Station Homogenisation issues
Finally we come to ‘Station Homogenisation’ – the process of reviewing station data records looking for errors that are a result of how the measurement was taken, rather than what the temperature actually was.
A common misconception is that ‘the thermometer never lies’. That the raw data is the gold standard. As anyone who works in any field involving instrumentation knows, this isn’t true; there are always ‘issues’ that you have to monitor for. Any instrument, even a simple thermometer, will have its own built-in biases.
Sometimes there will be readings that are just plain whacky. And surrounding influences can have an impact. A thermometer out in the sunshine will have a different reading from one shaded by your hand for a few minutes. A caretaker who can work quickly taking the readings when the enclosure door is open will produce a different bias from one who works slowly, or reads the instruments in a different order. Bias and error is everywhere.
If readings at a station weren’t always taken at the same time of day, this can introduce biases. Changes in the instruments used can introduce a bias. Some readings can be just plain wrong. Imagine some scenarios:
The caretaker of a station may have had a ‘big night out’ and not read the thermometer very accurately. There is an error there but we probably can’t detect it.
The caretaker of a station may have a ‘big night out’ every Friday night. Now there might be a regular error in Saturday’s readings. With a pattern like this, we might be able to detect it with statistical analysis. We might be able to correct it but only if we are certain enough.
That caretaker might have had one ‘REALLY big night out’ and next morning broke the thermometer. He replaced it but did he record that fact in the station log? If he did, we know that a change of bias has been introduced between the two thermometers. Then we can compare readings from before and after and try to find the change. But only after we have years’ worth of data from both thermometers. And if he didn’t log it, then we only spot a problem if that station seems to have a strange change compared to nearby stations.
Over time the Stevenson Screen may have fallen into disrepair, resulting in a slowly changing bias as outside influences start to penetrate. Then the site is updated with a new screen. Biases removed, although the new screen may have its own small bias. If we know about the change we can try to compensate for it, eventually, once we have enough data from before and after.
The caretaker at the station in Ushuaia right at the southern tip of Argentina records data through the early 1900’s. In Spanish, with poor handwriting – really hard to tell 7’s and 9’s apart. The log sheets are sent to Buenos Aires where the data from this and many other stations are collated and typed up onto summary sheets by a clerk with an old battered typewriter. Then they are filed away; 40 years later they are extracted, faded and old, photocopied on a poor quality early copier and mailed to the US for incorporation into climate databases. Where they must be copied into the database again by hand. How many errors have crept in during that process?
So, we can’t simply take the raw data at face value. It has noise in it. We need to analyse this data looking for problems and correcting them when we are confident enough of the correction. But also being careful that we don’t introduce errors through unjustified corrections. This requires care and judgment and it is sometimes a real detective story. And often corrections cannot be made until many, many years later because you need lots of data before you can spot changes in bias.
So this process of working through the data, trying to make it more accurate is ongoing.
But what of its impact on the temperature record? Again, if the biases at a station don’t change over time, they don’t affect our analysis. Individual errors matter, but they will tend to be random, some higher, some lower, so when we average over large areas and long time periods, they tend to cancel out. Again, it is problems that cause changing biases that matter. And analysis of changes due to Homogenisation in the record indicates that there are as many cooling changes as warming ones. Such as this from Brohan et al 2006:
[Figure: distribution of homogenisation adjustments, from Brohan et al 2006]
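The cancellation argument above is easy to sketch numerically. This is a toy simulation (every number invented, not real station data): 100 stations share a weak 0.5 °C/century signal, each has its own fixed instrument bias of up to ±2 °C plus random reading errors, and the network is averaged with the anomaly method:

```python
import random

random.seed(0)

YEARS = list(range(1900, 2001))
true_trend = [0.005 * (y - 1900) for y in YEARS]   # imposed 0.5 °C/century signal

def station(bias):
    """One simulated station: signal + a constant instrument bias + random error."""
    return [t + bias + random.gauss(0, 0.5) for t in true_trend]

def anomalies(readings):
    """Subtract the station's own long-term mean -- the anomaly method."""
    base = sum(readings) / len(readings)
    return [r - base for r in readings]

# 100 stations, each with its own fixed bias of up to +/-2 degrees
anoms = [anomalies(station(random.uniform(-2, 2))) for _ in range(100)]
mean_anom = [sum(a[i] for a in anoms) / len(anoms) for i in range(len(YEARS))]

# Compare first-decade and last-decade means: the fixed biases have
# cancelled exactly, and the random errors have largely averaged out.
recovered = (sum(mean_anom[-10:]) - sum(mean_anom[:10])) / 10
expected = (sum(true_trend[-10:]) - sum(true_trend[:10])) / 10
print(round(recovered, 2), round(expected, 2))
```

The constant biases drop out of each station's anomaly algebraically, and the random errors shrink roughly as one over the square root of the number of readings averaged; only a bias that *changes* over time would survive into the recovered trend.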
Conclusion
So, Part 1A looked at how we should calculate the temperature record and why the method used is very important to the result. And that this doesn’t necessarily match our intuitive idea of how it should be done; in this our intuition is often wrong. Part 1B looked at how we DO calculate the temperature record, that is, using the method outlined in Part 1A, and at how the area weighting scheme used by one record is based on empirical evidence. In Part 2A we looked at some of the areas where the temperature record has been criticised with respect to its broader locale. And in this post we have explored issues related to the immediate surrounds of the station.
I think we have seen that there are many reasons why we tend to overestimate the effect of these problems. This conclusion is consistent with the evidence here, here & here from various analyses that show that these possible problems haven’t had any significant effect on the result.
My conclusion is that we can have a strong confidence in the results produced for the global temperature trend. Any problems will show up more in short-term patterns such as seasonal, monthly and daily trends. But the headline global numbers look pretty robust.
You will have to make up your own mind but I hope I have been able to give you some food for thought when you are thinking about this.

dcfl51
October 24, 2011 3:45 pm

Brian
1. Scientific truth is not determined by holding a vote. It is determined by collecting and evaluating evidence. The history of science is littered with examples of where the consensus was wrong.
2. Not one of the major scientific institutions you allude to has actually balloted its members on CAGW. The committees just presume to speak on behalf of their membership.
3. See the paper from which the 97% agreement figure was derived. The question on man’s influence on the climate was not well-formed. It did not restrict itself to global rather than local effects, nor was it specific to CO2 (as opposed to land use changes, etc.), and it also relied on the respondent’s interpretation of what was meant by “significant”.
4. The questionnaire was circulated to those working in “earth sciences”. So, no solar physicists? There are quite a lot of people who think that big yellow thing in the sky might have something to do with the climate. No cosmologists – what about the cosmic ray theory? No specialists in thermodynamics, many of whom question the scientific principle of the greenhouse effect? No botanists, whose stomata studies directly contradict the ice core CO2 records? etc, etc.
5. Finally, the 97% represented only 75 scientists, all of whom were described as climatologists. This probably means they worked for the institutions which produce the climate models and, …. wait for it ….. they believe in their own models !

Legatus
October 24, 2011 3:45 pm

I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it? Maybe Anthony Watts can shed some light on this?

Have you heard of the Little Ice Age? Click that link and you will see two things. The first is the obligatory “I believe in global warming like you, so don’t cut off my funding!” message (see Climategate and its emails showing what is done to those who fail to support AGW). At the very least, this shows that you cannot say that the scientist here has an anti-AGW bias. Second, there is this, yet another site (Wikipedia) that starts with the obligatory “we support AGW!” message, trying to say that the Little Ice Age (according to the IPCC) was not worldwide, and then is followed by actual scientific evidence from, respectively, Europe, North America, Central America, Africa, Antarctica, Australia, New Zealand and Pacific Islands, and South America, which shows that the LIA was, indeed, worldwide, and that the IPCC was wrong or flat out lying. And if you look here, you see a 300 year record of 5 European cities showing about 150 years of cooling, followed by 150 years of warming. We also have records of the cold people wrote about then, such as the frost fairs held on top of the ice of the frozen river Thames in England, and how George Washington dragged cannon across the frozen Hudson River. Conclusion: it was colder then. Then about 150 years ago, it started to warm up, and warmed up rapidly.
Thus, by the time we get to 1900 or so, it has warmed up from its formerly cold temperature, and since then the temperature has remained about the same. Thus we can clearly say “the world is warming”, since it is clearly warmer than the Little Ice Age, and doubt modern, unprecedented warming since 1950, since we can see that for the last 100 years or a bit more, the temperature has not changed nearly as much as it did when the climate changed from the Little Ice Age to the modern warm period.
To sum up:
There was a Little Ice Age, so called because it was cold.
It was consistently colder than now for quite some time, perhaps 300 years.
The Little Ice Age ended, and it warmed up rapidly.
For the last 100 years or more, in the USA at least, it has remained about the same, warmer than the Little Ice Age.
If global warming were true, it would not have remained consistently the same; it would have warmed rapidly after 1950, but it did not.
This is according to the most reliable temperature measurement station we have, the ones that have remained constant for 100+ years.
Or in one sentence, it has definitely warmed up since the Little Ice Age, and has remained fairly warm for over 100 years, but has not warmed much more in that 100 years from what it was 100 years ago.
You just need a little perspective, compared to 200 years ago, the world has warmed, compared to 100 years ago, the world has not warmed.

dcfl51
October 24, 2011 4:13 pm

Erratum : the question did restrict itself to global effects. Sorry.

Myrrh
October 24, 2011 4:33 pm

Legatus says:
October 24, 2011 at 3:45 pm
Or in one sentence, it has definitely warmed up since the Little Ice Age, and has remained fairly warm for over 100 years, but has not warmed much more in that 100 years from what it was 100 years ago.
You just need a little perspective, compared to 200 years ago, the world has warmed, compared to 100 years ago, the world has not warmed.

It’s the last hundred years’ temperature data they’d been playing with, as in New Zealand; the second link mentions Salinger’s connection with CRU: http://climaterealists.com/index.php?id=6151
http://www.climateconversation.wordshine.co.nz/tag/nz-temperature-records/
It’s taken around forty years to put the record straight..

October 24, 2011 4:35 pm

Brian,
Try this shoe on and tell me if it fits:
“there are… numerous well meaning individuals who have allowed propagandists to convince them that in accepting the alarmist view of anthropogenic climate change, they are displaying intelligence and virtue. For them, their psychic welfare is at stake.”
The source is M.I.T. Climatologist Dr. Richard Lindzen:
http://thegwpf.org/opinion-pros-a-cons/2229-richard-lindzen-a-case-against-precipitous-climate-action.html
Dr. Lindzen goes on to say:
“With all this at stake, one can readily suspect that there might be a sense of urgency provoked by the possibility that warming may have ceased and that the case for such warming as was seen being due in significant measure to man, disintegrating. For those committed to the more venal agendas, the need to act soon, before the public appreciates the situation, is real indeed. However, for more serious leaders, the need to courageously resist hysteria is clear. Wasting resources on symbolically fighting ever present climate change is no substitute for prudence. Nor is the assumption that the earth’s climate reached a point of perfection in the middle of the twentieth century a sign of intelligence.”
Do I hear the sound of a pseudo-scientific religious cult crumbling?
http://sbvor.blogspot.com/p/climate-change-science-overview.html

Steve Garcia
October 24, 2011 4:39 pm

The post-2000 dying of the thermometers, so soon after the Great Dying of the Thermometers in about 1990, reminds me of a short story I read back in the early 1960s, when I was a mere lad, as they say across the pond NW of France.
It was the era of Reader’s Digest’s greatest popularity, including hardbound volumes with truncated versions of novels great and not-so-great, about four or five to the volume. Someone tongue-in-cheek (and well over my young literary-virginal head) wrote about the trend to digest books more and more, and, taking it to its logical but extreme limit, told a tale of a book that had been digested all the way down to a single word. I believe that word was the name of the story, but it HAS been a long time, and I was ever so young.
One might suppose that that is what the climate establishment is aiming for – to digest all current thermometer readings to one special one that represents the entire globe.
And why not? Why should they have to go through all that tedious data assembling – instruments and proxies, tree rings, ice cores, varves, corals, and the various thermometer types? Wouldn’t there be far less disagreement and more settled science that way? Shouldn’t that wonder of our two most recent decades – science – digest down all their data collecting to one and only one reading per day? Wouldn’t all this NH/SH, El Niño-La Niña, AMO, SST, confusion be done away with, not to mention the problem of semi-drunken local temperature readers – FINALLY! – so that the experts can sit back, in the full glory of their expertship, puffing on their Meerschaums and Marlboros (and self-rolled Zig-Zags and whatever might find itself therein), blowing rings and smiling like the Cheshire cat – and fading blissfully from our sight, into the upper reaches of yon ivory towers of yore? Isn’t that what we really want our scientists to do?
If science is at its core about improving life and making life easier and simpler, well why shouldn’t the climatologists partake of that life of Riley, too? They pay taxes, too, after all. We should nod our heads in agreement at such a development – this perfect, singular global temperature data point from the one perfect temperature point on Earth – as the apex of science’s great accomplishments on behalf of homo sapiens sapiens. Rather than bemoaning the defuncting of the confusion-engendering Yamal or Polar Ural tree rings, the obfuscating UAH satellite blather, the TOB changes, the TOB differences, UHI adjustments, petulant declines, the PDO, solar irradiance, and cosmic rays, we should be having a rousing wake, celebrating the part all of them had done for us in the past, when our climate folks were getting their feet under themselves, and we should toast to the new age of Unitemp. Gone will be all the confusion and gone will be all the endless ragging on each other over what graph and what data set is BEST – and most especially the endless tug-of-war over what it all means.
Let us revel in our oneness of agreement. The one temperature cannot be confused, and isn’t that better? Unity beyond complexity. War is peace. Simple is more complete.
We can just hand over a small scrap of paper with one number on it, once a year, and so let Congress or the European Parliament get on with their job of whatever it is they do. Isn’t that so much more efficient and civilized?
There! I feel better, just for having digested it all down for your reading enjoyment…

wayne
October 24, 2011 4:41 pm

Smokey:
one thing you are is consistent (with a capital ‘C’). Others like Brian claiming otherwise are lost and wrong. This is a great science site.
I wanted to show you a couple of new things I have stumbled upon for you are one person here I know will not forget, you are very persistent too! 😉
A traipse through a search of all “water vapor”, spencer, miskolczi led me to a missed article by Dr. Curry many months ago at http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/ and the related http://judithcurry.com/2011/09/25/trends-in-tropospheric-humidity/ . And, within, it points to a paper by Kratz et al ( http://miskolczi.webs.com/p27jqsrt.pdf ) where I came upon a statement I found very interesting.
One was:
“The far-infrared, specified here to cover the spectral range with wave-numbers less than 650 cm−1, is dominated by the pure rotation band of water vapor, and has been shown to account for over 40% of the energy emitted to space by the Earth’s atmosphere-surface system for clear-sky conditions [1].”
Always suspected that but had never seen “40%” stated explicitly in a paper. There it is.
Second, you have read in some of my comments (or I think you have) how 1/6th of the 390 Wm-2 downwelling IR displayed in the Kiehl-Trenberth graphic is real, that is, when all figures on the energy budget balance. Well, I also found in a summary of Miskolczi’s papers at http://www.friendsofscience.org/assets/documents/The_Saturated_Greenhouse_Effect.htm that it was said:
“He applies the Virial Theorem to the atmosphere, which states that the kinetic energy of a system is half of the potential energy. The internal kinetic energy is taken as the upward long wave energy flux at the top of the atmosphere, and the potential energy is the upward radiation flux from the surface. This result is used to determine the fraction of the upward radiation from the surface that is transmitted directly to space (rather than absorbed by the atmosphere), which is 1/6.”
That to me is very curious: 1/6. He states the upward LW fraction transmitted directly to space is 1/6, while I have been pointing out the 1/6 downward. Four sixths is of course always horizontal in three dimensions. Never had read that explicitly stated either. When you carry this into the Virial Theorem, of what energy exactly supports the mass of our atmosphere every second of every day, it now all seems to finally make perfect sense.
Thought this a good time to pass that and think about it.

October 24, 2011 5:10 pm

One thing which I find intriguing is that global temperatures in 1959 are shown to be somewhere in the region of 287.22 K (I say “in the region of” because there are a few different figures “on the market”). The chart I chose at random started in 1959 and ended in 2004, and showed a global temperature of 287.77 K for the last of those two years.
There were a few “ups and downs” mainly behind the decimal point during those years but the 287 K bit was reproduced for every year (almost).
So, the intriguing bit for me is: if thermometers placed at a few places on the Earth’s dry surface, plus a lot of highly unreliable temperature reports from the world’s merchant and military shipping, can be accurate to within 0.55 K or better, then why the Ken-Nell do we spend not just millions but billions or more of $€£ on satellite measurements?

Glenn Tamblyn
October 24, 2011 5:22 pm

Jerome, DirkH
“I have to disagree. He is using the temperature delta (Δtemperature) to average with other deltas. That makes much more sense than what you have assumed.”
A fundamental problem I see with this method is that the deltas propagate forward. So any error that will unavoidably occur if a station is missing from sample n then gets carried forward into the calculation for samples n+1, n+2, etc. Ultimately all future deltas contain some effect from all past errors. In addition there is the problem of propagating inaccuracies in the performance of the calculation. Computers do not calculate to infinite precision, and since most of this is about calculating differences between larger values to produce much smaller differences, then continually summing these differences, the finite accuracy of each stage of arithmetic will propagate forward in the result. It would take some serious analysis to work out whether the net effect of this over time will cancel out or be cumulative.
In contrast, the method used by the mainstream analyses, of comparing each reading for each station against its own long-term average, means that the anomaly (the delta, if you like) is always calculated against a fixed reference. So any issues that might occur as a result of problems at one sample point don’t automatically propagate to future samples.
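A minimal sketch of this propagation, using two invented stations (one flat, one warming steadily) and a single missing reading:

```python
def chained(series_list):
    """Average the per-step deltas over whichever stations report at both
    ends of the step, then cumulatively sum -- the method criticised above."""
    total = [0.0]
    for i in range(1, len(series_list[0])):
        d = [s[i] - s[i - 1] for s in series_list
             if s[i] is not None and s[i - 1] is not None]
        total.append(total[-1] + sum(d) / len(d))
    return total

def anomaly_avg(series_list):
    """Fixed-reference approach: each reading relative to its own station's
    long-term mean; a gap affects only the sample where it occurs."""
    bases = [sum(v for v in s if v is not None) /
             sum(1 for v in s if v is not None) for s in series_list]
    return [sum(s[i] - b for s, b in zip(series_list, bases) if s[i] is not None) /
            sum(1 for s in series_list if s[i] is not None)
            for i in range(len(series_list[0]))]

A = [0.0] * 11                      # a flat station
B = [float(i) for i in range(11)]   # a station warming 1 degree per step
B_gap = [v if i != 5 else None for i, v in enumerate(B)]

full, gapped = chained([A, B]), chained([A, B_gap])
# One missing reading knocks two deltas out, and the shortfall is carried
# into every later value -- it never heals.
print(full[-1], gapped[-1])          # 5.0 vs 4.0

a_full, a_gap = anomaly_avg([A, B]), anomaly_avg([A, B_gap])
print(a_full[-1] - a_full[0], a_gap[-1] - a_gap[0])  # both recover the 5-degree rise
```

The chained average ends up permanently a degree low after the gap, while the anomaly average, recalculated against a fixed reference at every sample, still recovers the full rise.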
Also, Michael’s method does not use area weighting. He is either doing one calculation for the whole US or he is subdividing the country – this isn’t clear. But his method of simply dividing by the available station count means that the weighting for the stations effectively changes each time there is a missing reading. That is over and above the fact that he is not area weighting at all. If for example you are using 10 stations in Texas and 10 stations in Vermont, by his method the climate change in Vermont is given equal weight to that in Texas, even though Texas is a much larger proportion of the US. Then, if you are looking at stations over time, you can introduce a time bias towards the climate changes in a region where the number of stations has grown over time. In my example, if Texas had 3 stations in 1900 and Vermont 6 because it was more densely populated, and the counts changed to my first example by 2000, that introduces a time bias towards the Texas climate over time.
Area weighting ensures that these geographic biases don’t occur.
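The Texas/Vermont illustration can be sketched with hypothetical numbers (the areas are approximate, the per-station trends entirely invented):

```python
AREA = {"Texas": 696_241, "Vermont": 24_906}  # approx. land areas, km^2

# Invented anomaly trends (degrees/century) reported by 10 stations in each state
trends = {"Texas": [0.2] * 10, "Vermont": [1.0] * 10}

# Plain mean over stations: Vermont's 10 stations count as much as Texas's 10
flat = sum(sum(v) for v in trends.values()) / sum(len(v) for v in trends.values())

# Area-weighted mean: average within each region, then weight by its area
regional = {r: sum(v) / len(v) for r, v in trends.items()}
weighted = sum(regional[r] * AREA[r] for r in AREA) / sum(AREA.values())

print(round(flat, 3), round(weighted, 3))  # 0.6 vs roughly 0.23
```

The plain mean answers “what did the average station do”, while the area-weighted mean answers “what did the average square kilometre do”; with uneven station density the two can differ substantially.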
“But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around, like in GISS. Is this what you consider a better approach? ”
The key question here is how many stations you need to adequately characterise the CHANGE in a region’s climate. Note this is not the same as characterising that region’s climate. Read the posts I put up earlier on teleconnection and where the GISS range of 1200 km comes from – sorry I couldn’t include the graphics; WUWT would be more effective as a platform for discussion if commenters could illustrate their points.
It seems to me that a common misperception many people have is that to adequately observe any climate change we need lots of stations. The climates of 2 locations may be quite different from each other, even if they are relatively close, due to altitude differences. But locations at similar altitudes can have very similar climates over quite long distances. And when we are looking at how the climates of 2 locations CHANGE relative to each other, they are often quite well correlated over long distances, particularly over land. Thus the 1200 km figure used by GISS. This isn’t based on speculation, but on observation: looking at the correlation between large numbers of station pairs and seeing how that correlation varies with separation.
So the case of a truly isolated station that is the only one within 1200 km would be problematic. But there aren’t many situations like that. However failing to area weight in calculations means that in effect every single station is producing a bias. And these biases have a definite pattern. Regions with dense station counts will bias the result towards them.
It also makes manipulations much simpler. The PNS approach. Drop the thermometers that don’t confess.

George E. Smith;
October 24, 2011 5:25 pm

Well with all this talk about missing data, and thermometers dying (none of Mother Gaia’s thermometers die; so she always knows the Temperature) and methods of diddling the averages to substitute for the missing data.
It’s kind of a lost cause; there’s this thing called the Nyquist Sampling Theorem, and it says you don’t have anywhere near enough global stations, and never have had. You don’t need to actually reconstruct the original continuous Temperature map, but you do need the ability to reconstruct the original continuous Temperature function in order to even extract a correct average of the values, whether you recreate the values or not.
So whatever you have as a calculation for the average of the data set, whether data samples are missing or not, the zero-frequency signal, which is another name for the average value, is itself corrupted with aliasing noise; so what one does to fix it is somewhat irrelevant.
And the twice-a-day min-max Temperature readings are already in violation of the Nyquist criterion for the temporal sampling, since the daily temperature variation is not a simple sinusoid with no harmonic content; at least a second harmonic with a 12-hour period must be present, and twice-in-24-hours sampling will result in the daily average calculation also containing aliasing noise. Not to mention that any varying cloud cover during the day will totally bamboozle the min-max thermometer (but not Mother Gaia’s thermometers).
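The min-max point can be illustrated with an invented diurnal curve (amplitudes and phase chosen arbitrarily) containing a second harmonic: the midpoint of the daily extremes is then not the true daily mean.

```python
import math

MEAN, A1, A2 = 15.0, 6.0, 1.5   # degrees; A2 is an asymmetric second harmonic

def temp(hour):
    """Invented diurnal temperature curve with a 12-hour harmonic."""
    return (MEAN
            + A1 * math.sin(2 * math.pi * hour / 24 - math.pi / 2)
            + A2 * math.sin(4 * math.pi * hour / 24 + 0.8))

hours = [m / 60 for m in range(24 * 60)]   # one-minute resolution over a day
vals = [temp(h) for h in hours]

true_mean = sum(vals) / len(vals)          # integral of the harmonics is zero
minmax_mean = (min(vals) + max(vals)) / 2  # what a min-max thermometer reports
print(round(true_mean, 2), round(minmax_mean, 2))
```

With these made-up numbers (min+max)/2 sits noticeably above the true daily mean; the size and sign of the offset depend entirely on the amplitude and phase of the harmonic content, which is the aliasing being described.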

October 24, 2011 5:26 pm

Glenn Tamblyn says:
As Requested:
October 24, 2011 at 3:38 pm – Of Averages & Anomalies Part 1A
October 24, 2011 at 3:40 pm – Of Averages & Anomalies, Part 1B
October 24, 2011 at 3:41 pm – Of Averages & Anomalies Part 2A
October 24, 2011 at 3:42 pm – Of Averages & Anomalies Part 2B

About ten screens for each of these posts. And the whole shebang is a humongous diversion.
Glenn, we’re questioning your unstated assumptions: that the basic data, including the so-called rural stations, are free of UHI, or that the UHI is accurately enough known, and that the corrections applied are appropriate.
The long-standing records that are the subject of this post are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.
Moderators: should Glenn’s posts be raised to their own thread, to assemble a fuller answer from WUWT? At least then Glenn could not say on SkS that nobody here even pays attention to his “real” science. Yes, I know, we get the traffic at WUWT. But the politicians and the science academies are still playing poker with our finances, and putting on a show as if the real skeptics’ science doesn’t exist.

Jeff Alberts
October 24, 2011 5:41 pm

Looks like we have an un-closed italics tag up above.
[Fixed. ~dbs]

October 24, 2011 5:46 pm

Wayne,
Thanks for the info. Interestingly, Dr Miskolczi’s estimate of climate sensitivity is… zero.

barry
October 24, 2011 5:48 pm

Dave Springer @ here
Are you aware that many studies, including Fall et al. (Anthony Watts’ study), have corroborated the average temperature record for the US? There have also been many blog initiatives taking raw, adjusted, rural, airport, and urban data, processing them in different ways, and basically corroborating that the US record for average temperatures is robust.
Here is an excellent breakdown of US adjustment choices, focussing on UHI, over at The Blackboard, Lucia’s blog.
Last year Steve Mosher and Zeke Hausfather wrote a seminal post at WUWT discussing numerous attempts at reconstructing global temperatures.
http://wattsupwiththat.com/2010/07/13/calculating-global-temperature/
These were worked up from raw data and from adjusted data, and with various filters and processes, the general result being that the official temperature records are robust. Mosh and Zeke made an excellent case to move on from questions about the need for adjustments to more incisive enquiry about the robustness of some of them (like UHI).
(Check the link above, because it points to numerous projects from both sides of the aisle reaching pretty good agreement on basic ideas.)
So many issues have been investigated – we must not lose sight of all the work that has been done. For example, here is a link to an experiment taking 60 rural stations from around the globe with at least 90 years of uninterrupted data. Result? Good agreement with official (land-only) records.
For US rural/urban comparisons, there have been several blog attempts, which conclude that the difference is negligible.
Recall also the global temperature record from raw data at The Air Vent (just one of the skeptical sites I have cited here). Time and again both sides of the aisle have tested the data thoroughly and found the official records to be fairly robust.
Regarding the top post, there is good agreement between rural and urban temps, from independent analyses, in the literature, and even according to Michael Palmer’s work above. There is no need to discard recent data, although it would be good to learn why rural stations have dropped off lately. And remember the last time station drop-out was thought to be an issue: it turned out there was nothing nefarious about it, and it didn’t make a difference to the temperature records anyway.

October 24, 2011 6:01 pm

barry,
Three “robusts” and one “robustness”! That word always sounds faintly ridiculous to me, like those using it are trying to make their argument stronger.
Here’s the real argument, which is avoided as much as possible by the robust crowd: “Carbon” [by which they mean carbon dioxide, a gas] has been demonized as something harmful that will cause bad things to happen, like climate disruption, runaway global warming, coral bleaching, etc.
The truth is this: there is no evidence to support those conclusions. The only evidence we have is that CO2 is harmless and beneficial. Falsify that testable hypothesis, if you can.

barry
October 24, 2011 6:06 pm

Interestingly, Dr Miskolczi’s estimate of climate sensitivity is… zero.

Yes, it would appear the good doctor does not believe we’ve experienced global ice ages over the past million years.

1DandyTroll
October 24, 2011 6:08 pm

Brian says:
October 24, 2011 at 1:19 pm
“Amazing that the “politicians and environmental promoters” have managed to convince 97% of climatologists and essentially every major scientific organization that AGW is real.”
So, essentially, you mean that all those people and organizations who depend on the money coming, essentially, from “politicians and environmental promoters” cave to those “politicians and environmental promoters’” demands? In your reality, I wonder why?
Back to reality: do you happen to be able to supply proof, or do you just like to blow smoke from your bong every which way?

Theo Goodwin
October 24, 2011 6:11 pm

barry says:
October 24, 2011 at 5:48 pm
Very interesting post, barry, but it is incomplete. Lucy Skywalker sets forth what you need to address as follows:
“Glenn, we’re questioning your unstated assumptions: that the basic data, including the so-called rural, are free of UHI / or that the UHI is accurately enough known – and that the corrections applied are appropriate.
The longstanding records that are the subject of this post, are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.”
In addition, I am questioning the entire framework that Mosher and friends inherited but never questioned. As scientists, it is not enough to draw your diagrams across maps of Earth’s surface and apply your inherited statistical techniques when you have empirical evidence of huge importance staring you in the face. The longstanding records from what I call the “well managed” stations are powerful evidence that the records from the other stations are questionable. Shame on you for ignoring those longstanding stations. I take it that you cannot think outside the box enough to address this empirical matter. But it is your duty now as scientists to explain empirically the differences between the longstanding stations and the others.

Brian H
October 24, 2011 6:12 pm

Coming back to the thread after a few days, I was shocked at some of the comments directed at “Brian”, since they didn’t apply to anything I’d said. I immediately checked for other Brians, of course, and came across the d****l “Brian” was posting.
He dishonours our shared given name! Glad I appended my surname initial to my tag from the get-go.
SBVOR;
Thanks for the additional Lindzen links. He’s becoming ever more accurate and forthright in his diagnoses.

barry
October 24, 2011 6:15 pm

My dear Smokey,
I’d rather lick my way to the centre of the earth than attempt to make points through your endlessly shifting goal posts. Been there before, bubba. Bought the T-shirt, got back on the boat.
You’re welcome to respond to the content of my post rather than snarking about a word in it. It would be on-topic to boot. I may even reply with more consideration.
🙂

Legatus
October 24, 2011 6:19 pm

Of Averages & Anomalies 1. Every station may not have data for the entire period covered by the record. They have come and gone over the years for all sorts of reasons. Or a station may not have a continuous record. It may not be measured on weekends because there wasn’t the budget for someone to read the station then. Or it couldn’t be reached in the dead of winter.

Ahh, but what if you do have stations that have reliable data for the entire period? Well, then you do not need to do all this complicated stuff to make up for that, do you? And this post is about exactly such stations. And this post shows that these reliable stations’ data disagree with the data of stations that are less reliable, which use a lot of complicated math to supposedly make up for that. In the scientific method, this is called “falsification”. It shows that the method described in “Of Averages & Anomalies” has been falsified.
The scientific method:
Hypothesis: some stations have unreliable data some of the time, which introduces error. If we use the method described in “Of Averages & Anomalies”, we will be able to screen out that error.
Test of hypothesis: compare the output of this method to known stations that do have reliable data, and see if there is a significant difference. If none, the method works and can be used to eliminate error; if there is a significant difference, the method is falsified and the currently reported temperatures contain error.
Result of experiment: there is a very considerable difference between the known reliable stations and the output of the method that claims to be able to eliminate this unreliability.
Conclusion: this method does not eliminate the error; it has been falsified (assuming it was even used correctly and honestly in the first place).
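The comparison described here is straightforward to express in code. A sketch with entirely made-up numbers (a flat series standing in for the long-record stations, a warming series standing in for the method’s output; neither is real data): fit a least-squares trend to each set of yearly averages and compare the slopes.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)

# Hypothetical yearly averages (deg C), both invented for illustration:
# a flat "reliable stations" series and a warming "method output" series.
reliable = 12.0 + rng.normal(0, 0.3, years.size)
method = 12.0 + 0.007*(years - 1900) + rng.normal(0, 0.3, years.size)

def trend_per_century(series):
    """Least-squares slope of a yearly series, scaled to deg C per century."""
    return np.polyfit(years, series, 1)[0] * 100

print(f"reliable stations: {trend_per_century(reliable):+.2f} C/century")
print(f"method output:     {trend_per_century(method):+.2f} C/century")
```

If the two trends differ by more than their statistical uncertainty, the method has failed the test; if they agree, it has passed.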
Also, the idea that you can eliminate most of the stations and still achieve a reliable record of whether the temperature is rising or not is incorrect. If you give me control over which stations are included in the temperature record, and which are dropped, I can get you warming regardless of what method you use to screen out my deliberate and false warming. All I need to do is carefully select stations that show a gradual and steady warming. I would almost certainly have to eliminate most stations; using too many stations would make this too hard for me, since there would only be so many stations with this kind of record. There are, after all, only so many cities, and some stations may have been too carefully maintained and calibrated to make this possible (they may have compensated for the UHI effect by careful siting and re-siting). The stations I keep could be ones in growing urban areas; the above article shows that those are the stations kept. I would also eliminate most rural stations, except for a few where the local environment right around the station itself contributed to a slowly growing heat. In all cases, what I am looking for is a slow rise of reported temperatures due to a slowly increasing urban heat island effect. If I choose those stations where the increase is slow and gradual enough, and drop all stations where there is not such a gradual rise, no amount of fancy math will correct for my deliberately introduced warm bias.
I notice a few things here (“you” is the author of “Of Averages & Anomalies”):
*The number of stations dropped off the temperature record is huge, that is exactly what I would need to falsify the temperature record by including only those stations that show rising temperatures, and dropping all others. The number of stations that would show such a gradual rising temperature may be far smaller than the total number of stations, thus, I would need to drop most of them. Thus, this huge drop of stations must raise serious suspicion.
*I paid for these stations with my tax dollars, many still exist and are still reporting temperatures, yet are not being entered into the global temperature record, why not? I paid for them, you are wasting my tax dollars by not using them for the purpose I paid for them. Where is the money going, if it is not being used for these stations?
*You claim to have a method here which will eliminate bias and error. You drop a huge number of stations, which still exist and report. You cannot claim you dropped those stations because they report in error, since you claim to have a method to eliminate this error. So why have you dropped these stations?
*If I were to deliberately introduce a warm bias, I would wish to drop most rural stations. We now have no more rural stations reporting than we had some 150 years ago. This should make anyone suspicious, whatever the excuse made.
*If I were to deliberately introduce a warm bias, I would drop far more rural than urban stations; this is exactly what has happened. The percentage of urban stations is now a far greater share of the total than it was at any time in history, including 150 years ago.
*If I were to deliberately introduce a warm bias recently, I would expect to see “rising temperatures” right around the time I eliminate most stations. That is exactly what we do see.
*If I have 1/3 of stations reporting warming, 1/3 reporting cooling, and 1/3 reporting steady, can this method fail to show warming if I drop the cooling stations and most of the steady ones? Will it correct for that error; will it even warn you of it? What if I have only 5 or 10% of the stations giving me the slowly rising temperatures I want, and drop most of the rest? Can this method fail to report warming?
*You claim to have not carefully selected for warm bias stations, the above article shows that this is suspect, at the very least. Therefore, I would want to see proof of that, your word alone is no longer enough. The fact that “global warming” being true results in greater budgets, job security, and prestige for you has to make me very suspicious, you have a clear conflict of interest.
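The selection effect argued in these points can be demonstrated on purely synthetic data. A sketch (every station here is invented): a network whose trends cancel exactly, from which only the warming third is retained.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
n = 300

# Invented network: one third of stations warm at +0.01 C/yr, one third
# cool at -0.01 C/yr, one third are flat, so the full network has no trend.
station_trends = np.repeat([0.01, -0.01, 0.0], n // 3)
temps = (12.0 + station_trends[:, None]*(years - 1900)
         + rng.normal(0, 0.5, (n, years.size)))

def network_trend(data):
    """Trend of the network-average series, in deg C per century."""
    return np.polyfit(years, data.mean(axis=0), 1)[0] * 100

kept = temps[station_trends > 0]  # "drop" the cooling and flat stations

print(f"all stations:          {network_trend(temps):+.2f} C/century")
print(f"warming stations only: {network_trend(kept):+.2f} C/century")
```

No amount of post-hoc processing of the kept subset can recover the cancellation, because the information in the dropped stations is simply gone.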
“Could it also be that the people responsible for the ongoing temperature record realise that you don’t need that many stations for a reliable result and thus aren’t concerned about the decline in station numbers – why keep using stations that aren’t needed if they are harder to work with?”
“Could it also be” – now there is a definitive, scientific phrase, sure to fill me with unbounded confidence! If you are going to say that we don’t need these stations, I am going to need something more definitive than a “could”. I have a much more likely idea: since we know that there used to be far more temperature stations reporting than now, we know they can indeed be used, because they were. But now you say “it’s too hard!” Well then, how about we fire your lazy ass and get someone out there who will do the work! You say it’s too expensive; well, how much has your budget dropped, if at all? Unless you can show that your budget has dropped a lot, this is just an excuse. And you are asking us to expend trillions of dollars to combat “global warming”, yet you are trying to skimp a few bucks here? Before I am willing to do all the hard work and bear the expense to combat something, you had better do the far less hard work and expense to show me I need to.
Here’s an idea: before we just accept that it is OK for you to drop all these stations (many of which still exist and still report temperatures, thus showing that there is no need not to use them – it is not too hard, because someone is doing it now), how about we try adding back in the temperatures they still report, and you can then use all your fancy math to screen out the errors (I have no objection to honest error screening, after all), and then we can see if the record still shows the same. Or we could try using only the most reliable, long-term stations, and see if they concur with your method. You know, like is done above. Oh wait, it does not concur. Conclusion: all the verbiage in “Of Averages & Anomalies” shows the old saying, if you can’t dazzle them with brilliance, baffle them with BS. In fact, the pro-AGW camp can go further: you can baffle each other. Each of you need only do a little dishonesty, while telling each other how diligent you are being with the truth. So long as you compartmentalize it, say with only a few key players adding in just a little dishonesty at key steps (with lots of rationalizations for it), why, you can continue to believe that your record is honest. The above actual use of the scientific method to test that, which found it wanting, should give you pause…
The fact that a number of the chief proponents of AGW have actually been caught monkeying with the temperature trend, and “adjusting” it well after the fact (despite not having a time machine to be able to tell if they need to; an actual example is how 1938 used to be the highest recorded US temperature, yet was adjusted downward in increments until 1998 was), as well as actual criminal behavior and deliberately not using the scientific method (such as not releasing their data and code, and even threatening to destroy it rather than do so, so that their work cannot be replicated or even checked), also means that the claim that they are, indeed, not up to anything is either suspect in the extreme, or a flat-out lie. Note that it is quite possible in a large, loose organization like that to believe you are telling the truth simply because everyone else around you assures you that you are. Enough compartmentalization of little lies here and there, and you can collect them together into one huge whopper and never know it. Throw in a bit of “noble cause corruption” as a rationalization, and there you go, conscience cleared!

Legatus
October 24, 2011 6:31 pm

BTW, one thing I would surely like to see, as an amendment and addition to this article: whether there are stations like these, with guaranteed long and accurate records, in countries other than the US. Yes, I know that others have shown in this thread that there are other very long records; what I am looking for is
1) How accurate and reliable are they? This would require them to be multiple stations, rather than just, say, one spot. The GISS record here is of 600 stations; are there any such records from other countries?
2) I would like to see it over the same 100-year time frame, apples to apples.

Theo Goodwin
October 24, 2011 6:33 pm

Lucy Skywalker says:
October 24, 2011 at 5:26 pm
“The longstanding records that are the subject of this post, are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.”
Excellent post. The longstanding records should be treated with respect and not bundled with the other records in knee jerk fashion. It is incumbent upon the bundlers to provide empirical evidence that the two sets of records should be treated the same. Without such evidence, one wonders whether the reason for bundling the weak records with the longstanding records is to achieve a higher average temperature. Recognizing that such evidence exists and can be addressed might require some folks to think outside the box.

October 24, 2011 6:36 pm

peetee says:
October 24, 2011 at 3:31 pm
“uhhh… the article references to ‘abstract’ – where’s the actual published paper to be found? What journal? Uhhh… what peer-reviewed journal? Surely, surely…. this can’t be a pre-release!!!”

peetee, this IS the FINAL release, and the paper is peer-reviewed right here. That means you and your peers!
“But wait”, I hear you saying, “you can’t be serious! My troll posts count for peer review?” To which I reply: “Why yes, for sure! You have no idea what real academic peer review can be like.”
Glad we could clear that up.

Theo Goodwin
October 24, 2011 6:40 pm

Glenn Tamblyn says:
October 24, 2011 at 3:38 pm
“In this series I intend to look at how the temperature records are built and why they are actually quite robust. In this first post (Part 1A) I am going to discuss the basic principles of how a reasonable surface temperature record should be assembled. Then in Part 1B I will look at how the major temperature products are built. Finally in Parts 2A and 2B I will then look at a number of the claims of ‘faults’ against this to see if they hold water or are exaggerated based on misconceptions.”
We are not asking for a tour of the box. We want you to think outside the box. What empirical evidence can you offer for not treating the longstanding records differently from the other records? In an early post, I suggested that the longstanding records should be treated as the standard and that all other records should be treated as deficient because of siting issues and related matters. Anthony’s 30 years of data offer considerable empirical evidence for investigating the siting issues. Please try to address the empirical questions about siting.

October 24, 2011 7:00 pm

It’s also worth puzzling over GISS vs adjusted data in Australia …
http://www.waclimate.net/bomhq-giss.html