Unadjusted data of long-period stations in GISS show a virtually flat century-scale trend

Hohenpeissenberg Meteorological Observatory – Image from GAWSIS

Temperature averages of continuously reporting stations from the GISS dataset

Guest post by Michael Palmer, University of Waterloo, Canada

Abstract

The GISS dataset includes more than 600 stations within the U.S. that have been in operation continuously throughout the 20th century. This brief report looks at the average temperatures reported by those stations. The unadjusted data of both rural and non-rural stations show a virtually flat trend across the century.

The Goddard Institute for Space Studies provides a surface temperature data set that covers the entire globe, but for long periods of time contains mostly U.S. stations. For each station, monthly temperature averages are tabulated, in both raw and adjusted versions.

One problem with the calculation of long term averages from such data is the occurrence of discontinuities; most station records contain one or more gaps of one or more months. Such gaps could be due to anything from the clerk in charge being a quarter drunkard to instrument failure and replacement or relocation. At least in some examples, such discontinuities have given rise to “adjustments” that introduced spurious trends into the time series where none existed before.

1 Method: Calculation of yearly average temperatures

In this report, I used a very simple procedure to calculate yearly averages from raw GISS monthly averages that deals with gaps without making any assumptions or adjustments.

Suppose we have 4 stations, A, B, C and D. Each station covers 4 time points, without gaps (Xt denotes the reading of station X at time point t):

      t0    t1    t2    t3
A:    A0    A1    A2    A3
B:    B0    B1    B2    B3
C:    C0    C1    C2    C3
D:    D0    D1    D2    D3

In this case, we can obviously calculate the average temperatures as Tt = (At + Bt + Ct + Dt)/4 for each time point t.

A more roundabout, but equivalent scheme for the calculation of T1 would be T1 = T0 + [(A1 − A0) + (B1 − B0) + (C1 − C0) + (D1 − D0)]/4, that is, the previous average plus the mean of the stations' individual changes (Δtemperature values).

With a complete time series, this scheme offers no advantage over the first one. However, it can be applied quite naturally in the case of missing data points. Suppose now we have an incomplete data series, such as:

      t0    t1    t2    t3
A:    A0    A1    A2    A3
B:    B0    –     B2    B3
C:    C0    C1    C2    C3
D:    D0    D1    D2    D3

…where a dash denotes a missing data point. In this case, we can estimate the average temperatures as T1 = T0 + [(A1 − A0) + (C1 − C0) + (D1 − D0)]/3 and T2 = T1 + [(A2 − A1) + (C2 − C1) + (D2 − D1)]/3; the Δtemperature values that would involve the missing B1 are simply left out.

The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.

One advantage that may not be immediately obvious is that this scheme also removes systematic errors due to change of instrument or instrument siting that may have occurred concomitantly with a data gap.

Suppose, for example, that data point B1 went missing because the instrument in station B broke down and was replaced, and that the calibration of the new instrument was offset by 1 degree relative to the old one. Since B2 is never compared to B0, this offset will not affect the calculation of the average temperature. Of course, spurious jumps not associated with gaps in the time series will not be eliminated.

In all following graphs, the temperature anomaly was calculated from unadjusted GISS monthly averages according to the scheme just described. The code is written in Python and is available upon request.
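For illustration, here is a minimal Python sketch of the scheme, not the actual analysis code; the array layout (one row per station, one column per consecutive month, NaN for gaps) is a simplification:

import numpy as np

def average_anomaly(temps):
    # Chain the average month-to-month change across stations. Any delta
    # that touches a gap is NaN and is dropped; the average delta is then
    # formed from the remaining stations, as described above.
    deltas = np.diff(temps, axis=1)            # per-station changes
    mean_delta = np.nanmean(deltas, axis=0)    # ignore NaN deltas
    return np.concatenate([[0.0], np.cumsum(mean_delta)])

# Toy example: station B misses its second reading (B1 above); B2 is never
# compared to B0, so a calibration jump at B would not affect the result.
temps = np.array([[10.0, 11.0,   12.0, 11.5],    # station A
                  [ 5.0, np.nan,  6.0,  5.5],    # station B
                  [15.0, 16.5,   17.0, 16.0]])   # station C
print(average_anomaly(temps))   # anomaly relative to the first time point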

2 Temperature trends for all stations in GISS

The temperature trends for rural and non-rural US stations in GISS are shown in Figure 1.

Figure 1: Temperature trends and station counts for all US stations in GISS between 1850 and 2010. The slope for the rural stations is 0.0039 deg/year, and for the other stations 0.0059 deg/year.

This figure resembles other renderings of the same raw dataset. The most notable feature in this graph is not in the temperature but in the station count. Both to the left of 1900 and to the right of 2000 there is a steep drop in the number of available stations. While this seems quite understandable before 1900, the even steeper drop after 2000 seems peculiar.

If we simply lop off these two time periods, we obtain the trends shown in Figure 2.

Figure 2: Temperature trends and station counts for all US stations in GISS between 1900 and 2000. The slope for the rural stations is 0.0034 deg/year, and for the other stations 0.0038 deg/year.

The upward slope of the average temperature is reduced; this reduction is more pronounced with non-rural stations, and the remaining difference between rural and non-rural stations is negligible.

3 Continuously reporting stations

There are several examples of long-running temperature records that fail to show any substantial long-term warming signal; examples are the Central England Temperature record and the one from Hohenpeissenberg, Bavaria. It therefore seemed of interest to look for long-running US stations in the GISS dataset. Here, I selected stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.
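The selection criterion is simple to state in code; a minimal sketch, assuming records are kept as a mapping from station id to {year: list of monthly values} (an illustrative layout, not the actual GISS file format):

def reports_every_year(record, first=1900, last=2000):
    # Keep a station only if it has at least one monthly value in each year.
    return all(len(record.get(year, [])) > 0 for year in range(first, last + 1))

stations = {
    "A": {year: [9.5, 10.1, 11.2] for year in range(1900, 2001)},      # continuous
    "B": {year: [8.7] for year in range(1900, 2001) if year != 1950},  # one gap
}
print([sid for sid, rec in stations.items() if reports_every_year(rec)])  # ['A']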

The temperature trends of these stations are shown in Figure 3.

Figure 3: Temperature trends and station counts for all US stations in GISS reporting continuously, that is, containing at least one monthly data point for each year from 1900 to 2000. The slope for the rural stations (335 total) is -0.00073 deg/year, and for the other stations (278 total) -0.00069 deg/year. The monthly data point coverage is above 90% throughout, except for the very first few years.

While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.

Figure 3 also shows the average monthly data point coverage, which is above 90% for all but the first few years. The less than 10% of all raw data points that are missing are unlikely to have a major impact on the calculated temperature trend.

4 Discussion

The number of US stations in the GISS dataset is high and reasonably stable during the 20th century. In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out, to the point that the GISS dataset no longer seems to offer a valid basis for comparison of the present to the past. If we confine the calculation of average temperatures to the 20th century, there remains an upward trend of approximately 0.35 degrees.

Figure 4: Locations of US stations continuously reporting between 1900 and 2000 and contained in the GISS dataset. Rural stations in red, others in blue. This figure clearly shows that the US is large, but the world (shown in FlatEarth™ projection) is even larger.

Interestingly, this trend is virtually the same with rural and non-rural stations.

The slight upward temperature trend observed in the average temperature of all stations disappears entirely if the input data is restricted to long-running stations only, that is, those stations that have reported monthly averages for at least one month in every year from 1900 to 2000. This discrepancy remains to be explained.

While the long-running stations represent a minority of all stations, they would seem most likely to have been looked after with consistent quality. The fact that their average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.

Disclaimer

I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetic. In case I have overlooked equivalent previous work, this is due to my ignorance of the field, is not deliberate, and will be amended upon request.

October 24, 2011 12:13 am

A fine extraction of genuine information from an egregiously abused data set. The retro-chilling of old records and The Great Dying of The Thermometers (actually about 1990, they went from ~6000 to ~1600) are so brazen as to defy belief.

crosspatch
October 24, 2011 12:14 am

In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out

At some point they are going to run out of tricks to use to create a warming signal.

Torgeir Hansson
October 24, 2011 12:40 am

Unadjusted data???? Are you crazy? Don’t you go and challenge Dr. Jones et al now. They run this game, sonny, and you’d better get used to it.

Glenn Tamblyn
October 24, 2011 12:42 am

Michael. Interesting post. However I have to disagree with the method you have used to handle gaps in the record. By using the average of other stations to substitute for a missing station when averaging temperatures, this assumes that the missing station is essentially at a similar temperature to the others. If its temperature is significantly different, this will introduce a bias – if it is colder, a warming bias; if it is warmer, a cooling bias.
This isn't the method used by the mainstream temperature records. They base their calculations on comparing each reading from a station against its own long-term average over some base period. Then they take the difference between the individual reading and this long-term average to calculate the temperature anomaly for that reading on that day. This produces quite a different behaviour when looking at missing readings.
Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser.
[snip sorry SkS doesn't treat people with any sense of fairness – for example Dr. Pielke Sr. If you want to reference any of your work, you are welcome to print it out in detail here, but until SkS changes how they treat people, sorry, not going to allow you to use it as a reference. Be as upset as you wish, but the better thing to do is work for change there – Anthony ]
I will be interested in your comments.
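For reference, a minimal sketch of the base-period anomaly method described in the comment above; the (years x 12 months) array layout is an illustrative assumption, not the code of any particular temperature record:

import numpy as np

def base_period_anomalies(temps, base):
    # Subtract the station's own long-term mean for each calendar month,
    # computed over the base period; NaN (missing) readings stay NaN.
    normals = np.nanmean(temps[base], axis=0)   # 12 monthly normals
    return temps - normals

rng = np.random.default_rng(1)
temps = 10.0 + rng.normal(0.0, 1.0, size=(30, 12))  # 30 years of monthly data
anoms = base_period_anomalies(temps, slice(0, 20))  # first 20 years as base
print(anoms.mean().round(3))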

Goldie
October 24, 2011 12:48 am

To the untrained eye this looks to have two warming/cooling cycles in it: the first in the early to mid 20th century, which corresponds with the well-attested long hot summer that made the Battle of Britain possible, and the second in the late part of the 20th century, which corresponds with the well-understood warming cycle that began in approximately 1974/76, following the 1974 La Nina, with a step change in global rainfall.
Just a word of warning though – last week this website was going off the deep end at BEST for using a non-standard period for assessing climate change. Whilst this assessment is useful, it is important that the limitations of the data in fully informing the current debate are fully understood.

Ian of Fremantle
October 24, 2011 12:49 am

Have you yourself asked, or do you know of anyone who has asked, GISS why particular stations have been discontinued? In Australia there also seems to have been selective removal of some stations. Of course it would be uncharitable to suggest the removals are to tie in with the proposition of global warming, but it would be good to get an official answer. It would also be a lot better if posts like this were publicised in the MSM.

Steeptown
October 24, 2011 12:53 am

Is there an official explanation for why, in the modern era with all the funding available, the number of stations has dropped precipitously?

tokyoboy
October 24, 2011 12:55 am

Many folks here have known that for a long time, n'est-ce pas?

October 24, 2011 1:05 am

Palmer
“At some point they are going to run out of tricks to create a warming signal”
I appreciate very much that you just put it as it is. We sceptics sometimes want to sound extremely nuanced etc. simply to be taken seriously, and thus, when someone just says the truth we all know, just like that, it's a great relief.
Here RUTI (Rural Unadjusted Temperature Index) versus BEST global land trend:
http://hidethedecline.eu/media/ARUTI/GlobalTrends/Est1Global/fig1c.jpg
The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts):
BEST adds 0.55 K to the warm trend 1950-78 compared to RUTI.
RUTI global taken from:
http://hidethedecline.eu/pages/posts/ruti-global-land-temperatures-1880-2010-part-1-244.php
RUTI will grow stronger and stronger, and even though all beginning is tough, I hope everyone will help collect even more original temperature data for me to make RUTI even better.
Tricks: Coastal stations.
One (important) trick from Hadcrut is to use rural coastal stations so that they do have rural data aboard.
Problem is that coastal stations worldwide have around 0.6 K more heat trend 1925-2010 than nearby non-coastal stations, see
Joanne Nova/RUTI :
http://hidethedecline.eu/pages/posts/ruti-coastal-temperature-stations-242.php
K.R. Frank

October 24, 2011 1:05 am

I actually find this chilling. I’d counted on an underlying base trend of about .6K/century to give a bit of a leg up to resist the coming downturn. Not to be, apparently!
A possible positive outcome could be that the Cooling freaks out the Alarmists, and they flip over to pushing CO2 emissions to combat it. That will do nothing for temperature, but will unclog the energy generation pipelines and be great for agriculture and silviculture. Maybe even viticulture!

TBear (Sydney, where it is finally warming Up, after coldest October in nearly 50 yrs)
October 24, 2011 1:07 am

Interesting.
But does it stack up?
Interested to know …

Jer0me
October 24, 2011 1:08 am

The graph I find most interesting is #2. I have seen elsewhere in many places that the global temps are just not rising (at least not significantly, if at all) in this century. How is it that the GISS records for the US are rising very significantly in the 21st century? It appears to be about as much warming in the last decade as in the whole 20th century! (Allowing for some smoothing)

Tom Harley
October 24, 2011 1:09 am

Tom Harley reblogged this on pindanpost and commented: The same result for NW Australia…virtually flat for over 100 years in Broome

Jer0me
October 24, 2011 1:14 am

Glenn Tamblyn says:
October 24, 2011 at 12:42 am
Michael. Interesting post. However I have to disagree with the method you have used to handle gaps in the record. By using the average of other stations to substitute for a missing station when averaging temperatures, this assumes that the missing station is essentially at a similar temperature to the others. If its temperature is significantly different, this will introduce a bias – if it is colder, a warming bias; if it is warmer, a cooling bias.

I have to disagree. He is using the temperature delta (Δtemperature) to average with other deltas. That makes much more sense than what you have assumed.

The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.

That is not to say the technique is not problematic, but it should likely be much more accurate than any method I have seen described in this matter. The nearer the site, I suspect, the better the correlation, regardless of the offset in average temperature.

October 24, 2011 1:16 am

So, where we have continuous, reliable, non-manipulated data, there is no warming at all. QED
Strange that many “skeptics” seem to have reconciled themselves with the notion that “there was some global warming in the 20th century.”
A repetitive, all-pervasive lie, if not constantly resisted, will in time appear to contain at least some truth. I remember that in the 1980s one could hardly find a person in Russia, however opposed to the regime, who would not believe in some part of the Soviet propaganda. It had been nailed into people's brains for 70 years, from the womb to the tomb, and only a few were stubborn enough to see through it.

son of mulder
October 24, 2011 1:39 am

Interesting. You use stations continuously operating in the 20th century. What does the graph look like if you include only stations that operated continuously through the 20th century up to the present day? That would indicate any bias in the recent removal of stations.

Glenn Tamblyn
October 24, 2011 1:40 am

Michael
I had some stuff you might have been interested in reading, but Anthony snipped the link. Interestingly, it isn't the Mods here at WUWT snipping this, it is Anthony himself.
Look at posts at SkS during May this year, or my posts under my author's name. Unless Anthony snips this as well.
Also, Anthony, if you want to talk about how people are treated, seriously read all the exchanges between SkS and Dr P Snr. Please note the civil tone of it all.
Unless you want to snip this too.
Note that I have copied this post so we can show what you have snipped Anthony
{see these: http://wattsupwiththat.com/2011/09/25/a-modest-proposal-to-skeptical-science/ and http://wattsupwiththat.com/2011/10/11/on-skepticalscience-%e2%80%93-rewriting-history/ and explain why that sort of behavior is OK for SkS How do you justify changing/deleting user comments months and years later? ~mod}

Bernd Felsche
October 24, 2011 2:05 am

In this case, we can estimate the average temperatures as follows:

Yikes! You introduce "fiction" into fact. Well, the temperature averages are already an artificial construct. One that doesn't actually represent the time-averaged thermal state of the system being measured. Even the time-average has knobs on it for subsequent use. Didn't Doug Keenan explain that adequately a couple of days ago?
What is sorely needed is analysis that can cope with "holes" in the data. Gaps. Analysis that doesn't require invention of data to bridge the gaps. Which is going to be somewhat harder than for homogenised data, but at least one isn't analysing guesses instead of raw data.
Moreover, what is needed is an understanding of the physical system. A dry-air temperature isn't sufficient to describe the thermodynamic state; the enthalpy of the short-term climate system.

John Marshall
October 24, 2011 2:06 am

Unfortunately the adding of all temperatures and dividing by the total number does not actually produce a correct answer.
Many inputs can affect these temperatures between the times of reading which will skew the average without any knowledge that this has happened. A continuous recording, like a barograph for pressure, would be much more accurate. Whether this is done I have no idea.
See:- Does a Global Temperature Exist? Essex, McKitrick and Andresen 2006.

October 24, 2011 2:12 am

The most notable feature in this graph is not in the temperature but in the station count.

Very true, very alarming, very indicative of manipulation.
The station count, reporting years and monthly data point coverage can be used to generate a monthly GISS Credibility Index for their overall dataset… unfortunately this credibility index started to fall off a cliff in the 1970s and is currently very close to zero.
The fact that their average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.

Totally correct.
The subset of raw data with a very high GISS Credibility Index actually shows a very slight cooling trend in the US during the 20th century.

KPO
October 24, 2011 2:17 am

Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone. I have this sense that there are parameters missing such as humidity, cloud cover and perhaps wind speed/direction that should also be recorded and used collectively when compiling a record. My feeling is that a more complete “measure” would be accomplished, EG an average temp of 15c/70% humidity/45% cloud/20km/hr/NW – in short, a sort of micro-climate record, expanded to regional and then global if that’s possible. This thing with averages of averages of averages of data points (numbers) bothers me. Perhaps my thinking is off so please straighten me out. Thanks

Pete in Cumbria UK
October 24, 2011 2:33 am

What KPO says above at 2:17 +1
How is it that changing temperature (alone) is being used as a proxy for ‘changing climate’
Its like saying that because jeans are usually blue, all items of blue clothing are made of denim and worn around your ass. (or something like that, yanno wot i meen)

Bill
October 24, 2011 2:35 am

Have the stations gone away, or are the stations still there, but just no longer counted?

Jer0me
October 24, 2011 2:42 am

It has often struck me as I extend my Carbon Footprint around the globe, that a very interesting, very consistent, and very much available temperature record may well be available. It is the temperature as recorded by planes as they travel.
The temperature, height and time are all constantly recorded. I can see that all we need to add may be the humidity. I guess it may be very low at most cruising heights however.
This may not give us everything, but it would at least give us something we could investigate. The cost of gathering this data should be trivial.
I personally volunteer to help, as long as my expenses are met. Obviously it would be much better to observe from the front of the plane, so only first class tickets will be accepted 😉

Jer0me
October 24, 2011 2:47 am

KPO says:
October 24, 2011 at 2:17 am
Please forgive my ignorance if I am being way off here, ….

I had the same thought. The use of Google for about 20 minutes fixed the ignorance. It is something to do with “wet build temperature” or similar. Basically the relative humidity is taken into account. (I am sure others with infinitely more knowledge can explain or correct me).
If you mean that the 'temperature' itself is not a good reading, because what we need to measure is 'energy', you have my vote. This view has been expounded on this site often, but I apologise for forgetting by whom. Records of weather such as cloud would also be extremely useful; IIRC Willis has posted a few essays on the matter.

Jer0me
October 24, 2011 2:48 am

^^^ Dang! ^^^
“wet build temperature” = “wet bulb temperature”

DirkH
October 24, 2011 2:50 am

Glenn Tamblyn says:
October 24, 2011 at 12:42 am
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser. ”
But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around, like in GISS. Is this what you consider a better approach? It also makes manipulations much simpler. The PNS approach. Drop the thermometers that don’t confess.

James Reid
October 24, 2011 2:52 am

A question to anyone who is familiar with the techniques, from someone who has not had access to the published papers in this area: how is the global mean temperature "normally" calculated from daily maximum and minimum records?
Can anyone point to a primer in this area please?

kim;)
October 24, 2011 3:03 am

KPO says:
October 24, 2011 at 2:17 am
“This thing with averages of averages of averages of data points (numbers) bothers me. ”
xxxxxxxxxxxxxx
Well said!

KV
October 24, 2011 3:07 am

Re "The Great Dying of Thermometers", which occurred worldwide predominantly in 1992-93: of course, most of them didn't actually die – Hansen and associates simply stopped using the data, for reasons nobody ever seems to have explained. The following is my own interpretation of available facts.
1988 was reportedly the hottest summer in the US for 52 years and no doubt gave James Hansen confidence for his alarmist appearance before a US Committee that year (reportedly with windows purposely left wide open and air conditioning off on one of the hottest days of the year)! He did not have the backing of NASA, as his former supervisor, senior NASA atmospheric scientist Dr. John S. Theon, made clear after he retired. He said Hansen had "embarrassed NASA" with his alarming climate claims and violated NASA's official agency position on climate forecasting (i.e., 'we did not know enough to forecast climate change or mankind's effect on it').
It is evident Hansen had put his reputation and credibility on the line, and when, starting with significant falls in 1989 and 1990, the next four years to 1992 showed sharp cooling in many parts of the world – a cooling then further exacerbated by the June 1991 eruption of Mt. Pinatubo – Hansen must have been put under extreme pressure, not only by his critics within and outside NASA, but also by other scientists supporting and pushing the AGW hypothesis.
In the absence of any official plausible explanation, I feel this to be a credible background and motive for the dropping of stations and the now well-established abuse of raw data. It is worthy of note that others have found that drops in station numbers resulted in a significant step rise in most graphed temperatures.

October 24, 2011 3:13 am

I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it? Maybe Anthony Watts can shed some light on this?
Also, Anthony and others were quite annoyed that the recent BEST results were publicized before peer review, and here we have a biochemist being given free rein to post what amounts to a back-of-the-envelope calculation of average ground temps. Michael Palmer or Anthony Watts could have, at the very least, got a statistician or near equivalent to verify the averaging analysis, which I suspect is particularly well suited to giving an average trend of zero.
Cheers.

TBear (Sydney, where it is finally warming Up, after coldest October in nearly 50 yrs)
October 24, 2011 3:16 am

KPO says:
October 24, 2011 at 2:17 am
Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone. I have this sense that there are parameters missing such as humidity, cloud cover and perhaps wind speed/direction that should also be recorded and used collectively when compiling a record. My feeling is that a more complete “measure” would be accomplished, EG an average temp of 15c/70% humidity/45% cloud/20km/hr/NW – in short, a sort of micro-climate record, expanded to regional and then global if that’s possible. This thing with averages of averages of averages of data points (numbers) bothers me. Perhaps my thinking is off so please straighten me out. Thanks
_______________
So let’s say KPO is correct.
Does this sort of point indicate that we really have no safe handle on total energy in the climate system?
If that is correct, or even plausible, why are we spending trillions of dollars on AGW?
Frustrated, if these types of points have any true validity, that the general scientific community does not get its act together and call this CAGW argument for the misguided tripe that it may well be. Is it because scientists feel too constrained by specialism? Sad, very sad, if that is the case.
Waiting for the push-back revolution to begin …

PlainJane
October 24, 2011 3:18 am

@ Glenn Tamblyn
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser.”
This article never claims to be working out the average temperature of America or any region in particular – it is only looking at long term trends in individual stations, so this point is not valid.
It is also petty and peevish to complain about being “snipped” here when you have not had your comment disappeared down a black hole so that no other people may know you posted something. Most readers of this blog, if interested, would be savvy enough to find your work anyway from the comment Anthony left stating where your work was. You have been invited to place your specific work here so we can read it easily in context. Could you please do that?

Steve C
October 24, 2011 3:21 am

There’s another graph I’d like to see, prompted by an oft-quoted statistic in these and other pages, and again by those RUTI vs BEST graphs from hidethedecline.eu, above. The oft-quoted statistic is that, of the land area of the earth, urban areas comprise around 1%, rural 99%. Has anyone drawn a graph in which 99% weighting was given to the best of the rural stations, and 1% to the urban?
Of course, this still ignores the two-thirds of the planet which is sea, but I’d certainly call such a graph a more accurate picture of overall land temperature than the usual heavily urban- and airport-influenced offerings. Has anyone here come across such a graph, or can someone produce one a bit more quickly than my own rusty data processing would allow?

October 24, 2011 3:31 am

Glenn Tamblyn,
Please do not refer to the Skeptical Pseudo-Science propaganda blog. John Cook has no ethics, no honesty, and his mendacious blog has been thoroughly deconstructed in articles and comments here. Do a search of the archives, and get educated.
Of course, if you suffer from cognitive dissonance like most True Believers, SPS is the echo chamber for you. Just be aware that you're being spoon-fed alarmist propaganda at that heavily censoring blog. Some folks actually like being led by the nose, so maybe that's A-OK with you. To each his own.

Harry Snape
October 24, 2011 3:35 am

Replacing a missing temp from one station with the average of the other stations seems like madness to me. The missing reading could be for a station in Antarctica, or at the Equator; it would bear no relationship to the "average" of other stations.
Equally, infilling with the average for that time of the year from that station, or something similar, is also likely to introduce an error. As someone said above, Sydney has an anomalously low Oct, so infilling an average Oct temp would have overestimated the temp. But at least this method has the temp in the ball park, i.e. it will be an average Syd temp for Oct, not an average world temp for Syd (which would be seriously low).
What needs to happen when stations are missing is that the average for that location/time is used, but the error bar on the calculated temp is increased by the maximum variance at that location/time (possibly more for short data records). So the loss of any substantial portion of the record will be observable in the width of the error bars, and one can deduce how confident we should be of the final number.
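For concreteness, a small sketch of the suggestion in the comment above (toy numbers; the "maximum variance" proxy used here is simply the historical range):

import numpy as np

def infill_with_error(history_for_month):
    # Infill a missing reading with the station's climatological value for
    # that month, and widen the error bar by the historical variability.
    estimate = np.mean(history_for_month)      # climatological infill
    extra_error = np.ptp(history_for_month)    # spread of past readings
    return estimate, extra_error

octobers = np.array([14.2, 15.1, 13.8, 16.0, 14.9])   # past October means
est, err = infill_with_error(octobers)
print(f"{est:.1f} ± {err:.1f}")   # wider bars flag the infilled point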

oMan
October 24, 2011 3:53 am

KPO: you've caught the flavor of my frustration with the (understandable, inevitable) enterprise of reducing the complexity of a system such as local weather or (its big sister, integrated over time and space) climate into a single parameter called "temperature", and even that not measuring heat content but just a dry bulb thermometer. We lose so much information in this process. It would be nice if the final statistic came with a label reminding us of the magnitude of what has been left on the cutting-room floor and how subjective the cutting has been. Just use a five-star or color code. I know, I know, error bars are a good nudge in that direction; and this post by Michael Palmer contains, in Figure 1, another excellent pointer for quality, namely the number of stations. It tells the reader that the data before 1900 and after about 1990 is "thin" and "different". I use those words to suggest the orthogonality of the information-space we are trying to explore, if not capture, in the temperature time series for the entire US "represented" by GISS numbers.
That dying of the thermometers is the real story for me, and Michael Palmer adds a valuable chapter to the story. Many thanks.

October 24, 2011 3:54 am

So, how to reconcile the widespread sceptic acceptance that there has been SOME warming since the LIA with the now clear possibility that the surface temperature record via thermometers has been primarily recording UHI effects and/or suffers from unjustified 'adjustments' towards the warm side?
Firstly, we can simply say that the background warming since the LIA is less than that apparently recorded by the thermometers.
Although there has been some warming, the effect of UHI and inaccurate 'adjustments' has exaggerated it during the late 20th century and may now be hiding a slight decline.
Secondly, we can see from the chart above that although there may have been little or no net change in temperature during the 20th century at the most reliable long-term sites, there have been changes up and down commensurate with many other observations, i.e. warming followed by cooling, then warming again and now possibly cooling.
What such a pattern suggests is that the Earth's watery thermostat is highly effective but takes a while (a few decades) to adjust to any new forcing factor.
Thus the system energy content remains much the same (including the main body of the oceans), but in the process of adjusting the energy flow through the system in order to retain that basic system energy content, the surface pressure distribution changes so as to alter the size and position of the climate zones, especially in the mid latitudes where most temperature sensors are situated.
From our perspective there seems to be a warming or cooling at the recording sites when averaged out overall, but in reality all that is being recorded is the rate of energy flow past the recording sites as the speed of energy flow through the system changes in an inevitable negative response to a forcing agent, whether it be sun, sea or GHGs. In effect the positions of the surface sensors vary in relation to the position of the climate zones, and they record that variance and NOT any change in energy content for the system as a whole.
A warmer surface temperature recording at a given site (excluding UHI and adjustments) just reflects the fact that more warm air from a more equatorward direction flows across it more often. That does not imply a warmer system if the system is expelling energy to space commensurately faster.
When a forcing agent tries to warm the system, the flow of energy through the system and out to space increases, so that more warm air crosses our sensors.
When a forcing agent tries to cool the system, the flow of energy through the system and out to space decreases, so that cooler air crosses our sensors.
Hence the disparity between satellite and surface records. The satellites are independent of the rate of energy flow across the surface and attempt to ascertain the energy content of the entire system. That energy content varies little if at all, because the net effect of changes in the rate of energy flow through the system is to stabilise the total system energy content despite internal system variations or external (solar) variations seeking to disturb it.
Higher temperature readings at surface stations therefore do not necessarily reflect a higher system energy content at all, merely a local or regional surface warming as more energy flows through on its way to space.

October 24, 2011 3:56 am

Hi, Michael. One needed adjustment has to do with time-of-observation bias. There are literature references on this available via the internet. You may want to check Steve McIntyre’s Climate Audit for discussions of this.
I believe that the raw data needs this adjustment.

Richard
October 24, 2011 4:19 am

A chilling analysis of James Hansen’s machinations.

Richard
October 24, 2011 4:25 am

Garrett Curley (@ga2re2t) says:
“I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?”
Maybe Arithmetic and free discussion?
This is not a religious site perhaps?
Not a discussion of beliefs but rather on the basis of the beliefs?
Maybe Hansen introduces a consistent warming bias with his “adjustments”?

Dave Springer
October 24, 2011 4:30 am

Frank Lansner says:
October 24, 2011 at 1:05 am
Here RUTI (Rural Unadjusted Temperature Index) versus BEST global land trend:
http://hidethedecline.eu/media/ARUTI/GlobalTrends/Est1Global/fig1c.jpg
The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts):
BEST adds 0.55 K to the warm trend 1950-78 compared to RUTI.
_________________________________________________________________
Hi Frank. I had a look at the graphs and recognized the source of the difference. That’s the infamous Time of Observation adjustment. It’s the biggest upward adjustment they make. Without it there is no significant 20th century warming trend. This is why the warming trend focus has now shifted to the period 1950-2010. You don’t hear the AGW boffins discussing dates earlier than that anymore. The author of the OP here evidently didn’t get the memo which was issued about the same time as the order to stop calling it “climate change” and begin calling it “global climate disruption”. It’s all about framing, you see. They frame the times and they frame the terms. It’s a despicable, dishonest, unscientific agenda they pursue.
Steve Goddard has a good explanation of the TOB here:
http://stevengoddard.wordpress.com/2011/08/01/time-of-observation-bias/

Time Of Observation Bias
Posted on August 1, 2011 by stevengoddard
The largest component of the USHCN upwards adjustments is called the time of observation bias. The concept itself is simple enough, but it assumes that the station observer is stupid and lazy.
Suppose that you have a min/max thermometer and you take the reading once per day at 3 pm. On day one it reads 50F for the high – which for argument's sake occurred right at 3 pm. That night a cold front comes through and drops temperatures by 40 degrees. The next afternoon, the maximum temperature is also going to be recorded as 50F – because the max marker hasn't moved since yesterday. This is a serious and blatantly obvious problem if the observer is too stupid to reset the min/max markers before he goes to bed. The opposite problem occurs if you take your readings early in the morning.
I had a min/max thermometer when I was six years old. It took me about three days to realize that you had to reset the markers every night before you went to bed. Otherwise half of the temperature readings are garbage.
USHCN makes it worse by claiming that people used to take the readings in the afternoon, but now take them in the morning. That is how they justify adjusting the past cooler and the present warmer.
They should use the raw data. These adjustments are largely BS.
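A toy numeric illustration of the carryover effect described above (made-up readings, not from the quoted post):

day1 = [45.0, 50.0, 48.0]   # the 50F high occurs right at the 3 pm reading
day2 = [12.0, 10.0, 11.0]   # cold front overnight; the true high is 12F

marker = max(day1)                 # marker shows 50F at day 1's reading
recorded_day1 = marker             # 50F, correct
marker = max(marker, max(day2))    # marker was never reset overnight
recorded_day2 = marker             # still 50F, though the true high was 12F
print(recorded_day1, recorded_day2, max(day2))   # 50.0 50.0 12.0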

October 24, 2011 4:36 am

Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone.

Indeed. The mainstream datasets are based upon a daily temperature reading.
The problem here is what does this temperature reading actually represent?
For the moment forget all about the problems associated with the accuracy of the thermometer… forget the Urban Heat Island effect and all the known location issues associated with thermometers… and let's take a look at what daily data is actually being recorded.
If we used a basic data logging thermometer to record the temperature at the end of every minute during a single calendar day we would accumulate 1,440 temperature data points… the maths is simple: 60 minutes x 24 hours = 1,440.
From these 1,440 data points we could then very easily calculate an Average Daily Temperature for that day.
Unfortunately, this is not how the mainstream datasets derive their daily temperature reading.
This is what they actually do.
First:
They capture the data point with the highest temperature and call it their Daily High Temperature.
Although this is fairly reasonable it is important to remember that this is the high outlier value from the 1,440 data points for that day.
Second:
They capture the data point with the lowest temperature and call it their Daily Low Temperature.
Although this is fairly reasonable it is important to remember that this is the low outlier value from the 1,440 data points for that day.
Third:
They add the Daily High Temperature outlier value to the Daily Low Temperature outlier value and then divide this intermediate number by two. So what would a rational person call this number? By definition (i.e. the maths) it is the mid-point between the daily extreme outlier temperatures. However, climatologists by some bizarre logic call this number the Daily Average Temperature. It is only in climatology that the average of 1,440 data points is calculated by just using the two extreme outlier values for the day… no wonder climate science is regarded as a weird science.
To underline just how weird this weird science really is, let's ask ourselves:
Question: What data would a rational person use to demonstrate rising temperatures?
Rational Answer: The series of Daily High Temperatures.
Climatology Answer: The series of mid-points between the daily extreme outlier temperatures.
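To put a number on it, a quick sketch with made-up minute data, showing that the midpoint of the daily extremes need not equal the mean of all 1,440 readings:

import numpy as np

minutes = np.arange(1440)
# An asymmetric day: a cool base with a short warm spike in the afternoon.
temps = 10.0 + 8.0 * np.exp(-((minutes - 900) / 120.0) ** 2)

true_mean = temps.mean()                     # mean of all 1,440 samples
midpoint = (temps.max() + temps.min()) / 2   # the conventional "daily average"
print(round(true_mean, 2), round(midpoint, 2))   # roughly 11.2 vs 14.0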

Dave Springer
October 24, 2011 4:41 am

More info on individual adjustments and how they change the twencen temp record.
A picture is worth a thousand words (maybe more in this case):
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_pg.gif

Dave Springer
October 24, 2011 4:44 am

Final result of adjustments:
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
As you can see the entire twencen warming is indeed manmade. Made by the adjustments applied to the thermometer readings that is…

Rhys Jaggar
October 24, 2011 4:45 am

I think the debate is beginning to move toward the position where we can see that the result obtained in terms of temperature trends depends on the source data. SURPRISE, SURPRISE!
It may in fact be the case that about 50 independent analyses should be done to show what happens depending on what data you use. This one is just for the USA. Which is a large continental land mass bounded by the world's two largest oceans to the East and West, a warm sea to the South and a major land mass to the north, with a smaller land mass in the SW.
You might find different results if you studied Russia: a large continental land mass surrounded by ice/ocean to the north and a continental land mass to the south.
You might find different results if you studied Europe. A mid-sized continental land mass with a major ocean to the west, a small sea to the south, smaller enclosed seas to the SE and NE.
You might find different results for Australia: a mid-sized land mass totally surrounded on all sides by a major ocean.
I am certain, based on the fact that the Thames simply doesn't freeze over like it used to in the 19th century, that London must be much warmer in winter than it used to be 200 years ago. So I'd frankly be amazed if we couldn't agree that we have had warming in the past 200 years, although the 20th century may be less clear cut.
Where the debate has been the past 20 years is a small group unilaterally determining the source data sets and not having that most important decision subjected to the most rigorous analysis by all. That can't change quickly enough in my book.
It would also appear that debates about how you search for deviations can get you different results. Particularly if the datasets have gaps in the record. It might be helpful to commit to building a century of reliable, consistent, internationally agreed datasets to ensure that this working with limited data becomes less important in time. One hopes that this can include wireless-based transmission of data, particularly in rural areas with extreme cold in winter. Whether that is technically feasible is something specialist climatologists should no doubt enlighten the public about.
One is minded to suggest that the IPCC bears all the hallmarks in climatology that FIFA has done in world football. A deeply flawed organisation, but not completely evil. Which is about as strong a signal for fundamental reform as can be given using measured language…………

Bigdinny
October 24, 2011 4:48 am

I have been lurking here and several other sites for over a year now, trying to make sense of all this from both sides of the fence. In answer to this simple question, Is the earth’s temperature rising? depending upon whom you ask you get the differing responses YES!, NO!, DEPENDS! IT WAS BUT NO LONGER IS! I think I have learned a great deal. Of one thing I am now fairly certain. Nobody knows.

DocMartyn
October 24, 2011 4:48 am

Have a look at the histograms of Fig.3, BEST UHI paper. It appears that when they compare 10 year, 10-20, 10-30 and 30+ data series the distribution of temperature RATE changes from a normal to a Poisson distribution.
My guess is that you will find the same thing. Moreover, if you look at the individual rates in the same manner, you will be able to identify when the transition occurred.

Jim Butler
October 24, 2011 4:51 am

First-hand example of how data sets get messed up…
This morning, as part of my first cuppa routine, I checked the weather online using Intellicast's site: 47deg. Let my dogs out… hmmmm… seems much cooler than 47deg. Checked Accu: 47deg. Then checked Wunderground: 47deg. Used Wunderground's "station select" feature, saw that it had defaulted to APRSWXNET – Milford, MA. All of the surrounding stations were reporting 34-37degs, and they appeared to be amateur stations, whereas I'm guessing that APRSWXNET is an "official" station of some sort.
Whatever it is, multiple services are using it, and it’s wrong by about 12deg.
JimB

October 24, 2011 4:56 am

Richard says:
“Maybe Arithmetic and free discussion?”
I would be of the opinion that arithmetic and free discussion of that type should be reserved for forum threads. This site is considered, from what I gather, a reference on climate skepticism, so back-of-the-envelope calculations seem out of place to me.
"This is not a religious site perhaps?"
Well, I would argue that using any Tom, Dick and Harry analysis just to place doubt on GW (and therefore AGW) is being somewhat religious. But putting that aside, not being religious/dogmatic about something does not imply that every discussion is fair game. Being open-minded about something does not require you to let your intellectual defenses down.

Dave Springer
October 24, 2011 4:59 am

Garrett Curley (@ga2re2t) says:
“I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?”
It depends on what time frame you’re talking about. I’ve no doubt the average temperature of the globe has been rising (with substantial positive benefits!!!) beginning in 1970. This is confirmed by satellite data beginning in 1979. From 1940-1970 the globe was cooling. I’m old enough to recall climate scientists becoming alarmed by possible catastrophic anthropogenic global cooling in the early 1970’s.
The question is whether the last 30-40 years is unusual or not. Roald Amundsen was able to navigate the Northwest Passage around 1906, so Arctic sea ice extent today doesn't seem out of line with where it was in the past. Retreating glaciers around the world are constantly revealing human artifacts on the newly exposed ground, giving concrete proof that the glaciers have retreated even further in the past. Greenland today still has a colder climate than when the Vikings were farming it and named it Greenland.
So if there appears to be some dissonance about global warming or not this is why. There surely has been some warming in recent decades but it doesn’t appear to be anything really out of the ordinary compared to other warming episodes in recorded history.

October 24, 2011 5:07 am

Jim Butler says:
October 24, 2011 at 4:51 am
Whatever it is, multiple services are using it, and it’s wrong by about 12deg.

Sometime last winter I was checking the temp for Ottawa, or thereabouts, and the weather app was showing a rather warm anomaly. I checked a different source and discovered that the negative sign had been left out.

October 24, 2011 5:09 am

Garrett Curley says:
“…using any Tom, Dick and Harry analysis just to place doubt on GW (and therefore AGW)…”
You are conflating the widely accepted fact of natural global warming since the LIA with the AGW conjecture.

Mergold
October 24, 2011 5:20 am

Excellent analysis. As a Republican voter for 40 years, albeit one now residing in Australia, can I just say I have never seen any evidence of warming. Ain’t no difference between a summer’s day in 1970 and a summer’s day now. I’m not sure why all this business of saying the true skeptics believe the world is warming has come up. There is maybe some regional warming somewhere, but no place I’ve been. I think this site is better when it avoids that bunkum. It’s dangerous.

October 24, 2011 5:31 am

C
You write: “Has anyone drawn a graph in which 99% weighting was given to the best of the rural stations, and 1% to the urban?”
RUTI is : “Rural Unadjusted Temperature Index”, and thus the goal is exactly what you look for.
In several areas it is hardly possible to get real rural data, but all possibilities are tried to get the best data possible.
In many areas, there are no long rural stations 1900-2010 uninterrupted, but very often it was possible to look at a larger area where many mostly rural or small-urban sites combined made a VERY solid, mostly rural trend for the whole area. This is important because many, even sceptics, believe that a mostly rural temperature index is impossible just because there are not many long rural stations publicly available.
Check it out:
http://hidethedecline.eu/pages/ruti.php
Thanks for comment.
K.R. Frank

Roger Knights
October 24, 2011 5:34 am

Harry Snape says:
October 24, 2011 at 3:35 am
Replacing a missing temp from one station with the average of the other stations seems like madness to me. The missing reading could be for a station in Antarctica, or the Equator, they would bear no relationship to the “average” of other stations.

1. “Abstract
The GISS dataset includes more than 600 stations within the U.S. …”
So no worries about Antarctica or the Equator.
2. It isn’t the temperature that’s replaced, but the delta (the little triangle is the delta) of the temperature; i.e., the anomaly. (If I’m reading the formula correctly.)

October 24, 2011 5:40 am

Springer
Frank Lansner: "The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts):
BEST adds 0.55 K to the warm trend 1950-78 compared to RUTI."
Dave Springer: "Hi Frank. I had a look at the graphs and recognized the source of the difference. That's the infamous Time of Observation adjustment. It's the biggest upward adjustment they make."
Thanks for comment. Yes, the time of observation… it's amazing.
So across the world – from country to country, from culture to culture, continent to continent – thermometers NOT meant for climate purposes, just to tell people about their local temperatures, show this synchronous TOBS.
Everywhere, the time of observation has systematically been changed in one direction, causing too-cold temperature data that "must" be corrected massively.
If this is mostly TOBS (or similar), then I find it interesting that global warming is not really measured, but is created at the desk.
K.R. Frank

October 24, 2011 5:48 am

Palmer
Thank you again for important work!
I think you would find it interesting to see the MASSACRE done to rural stations in Turkey…
"Imagine that GHCN took all USA rural stations and cut them down to 1960-90. Then took smaller cities, limited to 1950-90 or 1960-2010, and then only the largest cities had long datasets, 1930-2010 or longer. Sounds impossible? Well, this is what is done for Turkey. Bon appétit."
http://hidethedecline.eu/pages/ruti/asia/turkey.php
@Dear Anthony… I think you should consider publishing this one, the slaughter of rural Turkish data by GHCN?
– Why a slaughter of rural data if UHI is not important?
(We see similar elsewhere, if interested)
K.R. Frank

October 24, 2011 5:51 am

In answer to this simple question, Is the earth’s temperature rising? depending upon whom you ask you get the differing responses YES!, NO!, DEPENDS! IT WAS BUT NO LONGER IS! I think I have learned a great deal. Of one thing I am now fairly certain. Nobody knows.

Correct. NOBODY KNOWS.
Lots of people believe different things… and say different things… but in the end: nobody knows.
It is possible that people were beginning to get a handle on the situation back in the 1970s… at that time they said: the world is cooling… so they threw more money at climate research… unfortunately this money was spent on manipulating data and inventing the Global Warming myth… we are no further forward… and none the wiser regarding that specific question… we are actually a whole lot dumber overall regarding climate.
However, we have discovered that the concept of a Global Average Temperature is intangible (and largely irrelevant)… the temperature profile of the Earth is fractal in nature and cannot actually be measured in a meaningful way… additionally, Climate and Weather are regional in nature… for example: the occurrence of an El Niño (or La Niña) impacts different regional areas in different ways… therefore one size does not fit all geographic regions in the same way… warming can be beneficial in one place while it is detrimental in another.
Additionally, it is easy to support the following statements:
1) Global Warming is generally a good thing… more people die when it is cold… living organisms flourish when it is warmer… Additionally, convection dominates the daily energy flows when there is water in the environment. Therefore, the extent of any Global Warming is limited by convection and can only ever become a problem for arid and desert locations that remain deprived of water.
2) CO2 is vital for plants and increasing levels of CO2 result in increasing crop yields and overall promote the greening of the earth… again water dominates any greenhouse gas effect that may exist in the atmosphere… and CO2 is basically irrelevant as a greenhouse gas because it only constitutes 0.039% of the atmosphere.
3) The Global Warming myth is just scare mongering… as you say: nobody knows… so just forget about it until you are asked to pay for some crackpot Global Warming scam / scheme… in that case just say NO and walk away – nothing to see here – just another snake oil salesman.

Ivor Ward
October 24, 2011 5:54 am

I thoroughly enjoy coming to this site to read the latest on the climate conundrum. I post occasionally under two different names, Disko Troop and Ivor Ward, depending on whether I think my ex-wife is watching that day (and whose computer I am using). I have never been subjected to nasty, snarky, rude or mean responses to my potentially all-encompassing ignorance of the subjects in question. The only time that I ever witness this kind of ill will here is when an influx of what one might call "the other side" appears, somewhat akin to a plague of locusts, when they feel that their side has scored some kind of brownie point, e.g. the BEST shenanigans. I have tried to raise the occasional question in forums such as Dr Schmidt's and Mr Cook's and been met with torrents of abuse. I asked why the sea level rise suddenly changed with the advent of satellites, why the trees in my garden depend on rainfall and amount of sunlight to grow but theirs only depend on temperature, why temperatures are shown as rising in the arctic by people who guess the data and not by the Danish Uni that has the buoys. Such simple and honest enquiries were met by abusive replies as to my ignorance and propensity for hanging around under bridges and eating Billy Goats Gruff. As a retired mariner once responsible for the safe navigation of the largest ships in the world, you can imagine my response should one of these pseudo academics choose to insult me face to face. However, enough said. So I would like to thank Anthony, if I may call you that, for providing this forum with its air of relative civility.
I was responsible for many maritime mobile weather stations. We reported every 6 hours: wet/dry/RH, sea temp, wind, cloud type, height and cover, wind speed, direction, sea ice, sea state, swell direction, etc., and of course our position to within one or two miles by celestial navigation. Sometimes we would be the only ship reporting in the entire Southern Ocean, and this was in the 60's and 70's. Sometimes the only reporting ship within a thousand miles in the North Pacific.
Had I known then how so-called climatologists would currently misuse the data we collected for weather forecasting, I would have thrown every Met instrument on board over the bloody side.

October 24, 2011 6:00 am

Excellent. I’ve never understood why anyone wasted any time on interpolating, or on urban locations, when so many continuous rural locations are available. It’s just common sense to use good sensors (if available) and totally ignore dubious sensors. In this case good sensors are available, so we should only be paying attention to them.

1DandyTroll
October 24, 2011 6:01 am

Here’s the true hippie version:
Since the pebble is an exact replica of the mountain (all serious bong users say so):
600/600 = 1
Now you have just one station to work with, much easier.
Now take that station’s data points and divide every point by itself, then add those together and divide the sum by the number of data points, et voila, you get a smoothed numero uno.
That is called the reference point. However since it is the reference point:
1 = 0
Now all your work has zero points to it.
But that never stops the communist climate hippies, that’s why they’re probably crazed.
As a true hippie you put that shitty zero into the machine-bong to smoke out a result. As everyone knows the result will most likely be zero, so you have to use the bag-of-tricks attachment to run the resulting zero through the random alarm generator algorithm, seeded by +1..+7 (you probably don’t want to go higher ’an +7 because then it becomes obvious you’re crazy), and before you know it: OMG! ALARMA! We are doomed! Hand over all your money so we can save you!

October 24, 2011 6:07 am

There is one part of the AGW theory that I am not skeptical about. That’s the part that says there is such a thing as the “Urban Heat Island Effect” (UHIE), which is due to us anthropoids getting more numerous as we reap the benefits of burning more fossil fuels.
There is no reason, in my opinion, to assume that just because air temperatures have gone up in cities and other urban areas, air temperatures in the surrounding areas have gone down in some kind of response – they probably have not, as there is no reason for them to do so. However, there is absolutely no reason to try to “fiddle figures” in order to wish any UHIE away. That goes for “warmists” and “skeptics” alike.
Therefore, if for a hundred or so years we measured air temperatures (T) at, say, 3000 stations, all in rural surroundings, and we are happy to equate the average T of those 3000 stations with “the average global T” (15°C), then if, say, 100 or so stations have become “urbanized”, each experiencing a T rise of a couple of degrees C, then yes, AGW is happening – but only in our paperwork, and it is a kind of warming that can only be detected locally. Furthermore, the UHIE has got nothing to do with “back-radiation” from CO2.
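For concreteness, that “paperwork warming” is easy to check in a couple of lines of Python; the station counts and the 2 °C urban rise are the hypothetical figures from the paragraph above, not measured values:

# Hypothetical figures from the comment above: 3000 rural stations,
# 100 of which urbanize and each gain 2 deg C from UHI.
n_stations = 3000
n_urbanized = 100
uhi_rise_c = 2.0

# Shift of the naive network average caused by UHI alone
spurious = n_urbanized * uhi_rise_c / n_stations
print(f"Apparent warming from UHI alone: {spurious:.3f} deg C")  # ~0.067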
If the trend for the last century (1900–2000), in spite of the UHIE, is flat – then that should tell us that, in spite of a couple of “warming spikes”, the world outside our windows is getting cooler.
And by the way – now that, allegedly, most skeptical scientists as well as “Real Climate scientists” subscribe to the notion that AGW is due to CO2 back-radiation, I am wondering why I still cannot find any actual data that proves it. Or is the “proof” just yet another “consensus”?

MFKBoulder
October 24, 2011 6:11 am

Quote from the guest post:
“There are several examples of long-running temperature records that fail to show any
substantial long-term warming signal; examples are the Central England Temperature record and the one from Hohenpeissenberg, Bavaria.”
Look at the Hohenpeissenberg graph and you see the statement quoted is nothing but belony
http://preview.tinyurl.com/Hohenpeissenberg
Just like the “start of winter” reported for St. Moritz a month ago. Still ROTFLOL.

KPO
October 24, 2011 6:37 am

oMan says:
October 24, 2011 at 3:53 am
…. “reducing the complexity of a system such as local weather or (its big sister, integrated over time and space) climate, into a single parameter called ‘temperature’”. Also:
Jer0me says:
October 24, 2011 at 2:47 am
…”If you mean that the ‘temperature’ itself is not a good reading, because what we need to measure is ‘energy’, you have my vote.”
Yes, both your replies are what I’m getting at. I would, however, like an “expert” explanation as to why they do what they do, so I’ll do some digging – if anyone here has a quick ’n’ easy answer, thanks.
I have this mental picture of a future 300-year-old Mom-and-Son phone call on Mother’s Day: “…what’s the climate like there, Johnny?” “Oh, an average of 21C.” ????

October 24, 2011 6:39 am

Ivor Ward/Disko Troop,
That is a fine post. My thanks and kudos.
Dave Springer and Frank Lansner, kudos also. Great comments!

Bernd Felsche
October 24, 2011 7:19 am

I didn’t mention it in my reply above, but after reading Doug’s comments on the statistics, I put together these notes on the detection of “global warming”. Read the whole thing and tell me what’s wrong with it.
“Pull text”:

The heat content of the climate system isn’t just in the dry air over time. One has to measure moisture content and soil- and water-surface temperatures for a start. Then, for each component, calculate the enthalpy over each area (specifically, from the thermal mass of each component). That gives the “instantaneous” heat content for the measured region.
Do that for the whole globe, then sum for the global total at that instant.
It’s that simple. Meticulous and rigorous, but simple.

Unlike a “global temperature”, the enthalpy figure is a real thing.
Plot the real thing over time. Look at the graph. Then try to figure out what’s happening in the real world.
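As a rough illustration of the bookkeeping proposed here, a minimal Python sketch follows; the regions, masses and mixing ratios are invented placeholders, and a real accounting would need measured values for every component (air, soil, water), not just the moist air treated below:

# Minimal sketch of regional heat-content bookkeeping. All numbers are
# hypothetical placeholders, not measurements.
CP_DRY_AIR = 1005.0  # J/(kg K), approximate specific heat of dry air
L_VAPOR = 2.26e6     # J/kg, approximate latent heat of vaporization

def parcel_heat(temp_k, mixing_ratio, mass_kg):
    # Sensible heat of the air plus latent heat of the vapor it carries
    return mass_kg * (CP_DRY_AIR * temp_k + mixing_ratio * L_VAPOR)

# (temperature K, vapor mixing ratio kg/kg, air mass kg) per measured region
regions = [
    (288.0, 0.010, 4.0e15),  # hypothetical warm, humid region
    (278.0, 0.002, 3.5e15),  # hypothetical cool, dry region
]

total = sum(parcel_heat(t, w, m) for t, w, m in regions)
print(f"Instantaneous heat content: {total:.3e} J")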

Michael Palmer
October 24, 2011 7:24 am

Frank Lansner says:
October 24, 2011 at 1:05 am
Palmer
“At some point they are going to run out of tricks to create a warming signal”
I appreciate very much that you just put it as it is.

Those weren’t my words.

October 24, 2011 7:28 am

Logic is a scary thing!

Etaoin Shrdlu
October 24, 2011 7:29 am

Anybody investigating the cause of death for all those stations?

ferd berple
October 24, 2011 7:40 am

Alexander Feht says:
October 24, 2011 at 1:16 am
So, where we have continuous, reliable, non-manipulated data, there is no warming at all. QED
Strange that many “skeptics” seem to have reconciled themselves with the notion that “there was some global warming in the 20th century.”
When Climategate came out, I looked at the Canadian temperature records. There was no trend apparent in the long-running Canadian records either.
The obvious question to ask is: why? Why is there a statistical difference between the long-running stations and the complete data set? There shouldn’t be.

Chuck Nolan
October 24, 2011 7:42 am

kim;) says:
October 24, 2011 at 3:03 am
KPO says:
October 24, 2011 at 2:17 am
“This thing with averages of averages of averages of data points (numbers) bothers me. ”
xxxxxxxxxxxxxx
Well said!
—————————-
In the words of the immortal Wills, “It’s models all the way down.”

Chuck Nolan
October 24, 2011 7:43 am

That would be “Willis”

Dave Springer
October 24, 2011 8:12 am

Ivor Ward says:
October 24, 2011 at 5:54 am
“I was responsible for many maritime mobile weather stations. We reported every 6 hours: wet/dry/RH, sea temp, wind, cloud type, height and cover,”
Hi Ivor. Just curious about how cloud height is determined aboard ship. I ask because back in the 1970’s I was a military meteorological equipment repair tech. I basically had to keep all the weather-related gear at an airport working and calibrated. One of the systems under my care was an old-fashioned cloud height indicator that had a spinning floodlight at one end of a runway and a receiver at the other end. The height of the cloud was determined by the angle of the transmitting light source when the receiver detected it. All analog electronics (vacuum tubes back in those days) but quite dependable and accurate.
Anyhow, reading your comment I was wondering how this is determined aboard a rolling ship with a baseline too short and likely not equipped with an expensive, bulky piece of gear like I had. I got to thinking about how we estimated the yield of a nuclear weapon in the field (I went through Nuclear, Biological, and Chemical Warfare school). You basically time how long it takes from the flash to when you hear the sound to get the distance to it. You then measure the height of the mushroom cloud and determine from its color whether it was a sub-surface, surface, or air burst; then you can use a simple formula to determine the kilotonnage of the weapon, and from that you can also determine how close to ground zero you can get and how long you can remain before receiving a sickening (or fatal) dose of radiation.
So anyhow, I figure as long as the clouds observed aboard ship stretched to the horizon and it was daytime, you could measure the angle between the horizon and the cloud and figure out cloud height that way. Is that how it was done?

KR
October 24, 2011 8:18 am

So, looking at:
– No area weighting – a single station in Montana perhaps hundreds of miles from any other has the same weighting as a pair of stations that might be less than five miles apart. This is a serious biasing of the data used.
– Averaging raw temperatures (which vary hugely over short distances) rather than anomalies (which don’t – a mountaintop and a nearby pass/beach have different raw temperatures, but see roughly the same weather patterns).
– Throwing out 90% of the temperature records, when even a quick examination shows 1/3 of stations with a negative trend, 2/3 with a positive trend, making any conclusions from 10% poorly supported. I’m not surprised by a low trend – I would be equally unsurprised by a huge trend given the poor data treatment.
This article by Michael Palmer really says nothing meaningful, due to bad data handling – it’s like a compilation of “Never Do This” methods stuck together.
Michael Palmer – For the correlation of nearby station anomalies and area weighting, I would recommend Hansen & Lebedeff 1987 (http://pubs.giss.nasa.gov/cgi-bin/abstract.cgi?id=ha00700d), which discusses this issue (and a lot more) in terms of trying to compute these trends.
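To see the anomaly point in miniature (both series below are invented): two stations with wildly different raw temperatures can carry the same weather signal, and converting each to anomalies against its own baseline is what exposes it:

# Toy illustration: raw temperatures differ hugely between sites, but
# anomalies against each site's own base period reveal the shared signal.
BASE = slice(0, 3)  # the first three values serve as the base period

def anomalies(series):
    baseline = sum(series[BASE]) / len(series[BASE])
    return [round(t - baseline, 2) for t in series]

mountaintop = [2.0, 2.2, 2.1, 2.6]    # deg C, invented
beach = [18.0, 18.2, 18.1, 18.6]      # deg C, 16 degrees warmer throughout

print(anomalies(mountaintop))  # [-0.1, 0.1, 0.0, 0.5]
print(anomalies(beach))        # identical, despite the 16-degree raw offset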

kwik
October 24, 2011 8:20 am

Okay, this means that Peter was right, then.

Ivor Ward
October 24, 2011 8:23 am

Would anyone care to explain to me why the graph linked by Mr Palmer for Hohenpeissenberg at http://climatereason.com/LittleIceAgeThermometers/Hohenpeissenberg_Germany.html shows a virtually flat line, while the one that MFKBoulder links to at http://preview.tinyurl.com/Hohenpeissenberg shows our favourite hockey stick – preferably without using the word “baloney” in the text, even if you do spell it correctly.

Bob Kutz
October 24, 2011 8:30 am

In Re; Garrett Curley (@ga2re2t) says:
October 24, 2011 at 3:13 am
As to the notion that ‘skeptics’ don’t deny that the Earth has warmed: I think it a bit much to lump all of us into one group with one set of beliefs. The Earth seems to have warmed, according to what the record tells us, but it is difficult to imagine that we know with any degree of certainty. Most of us (I believe) stop short of accusing the curators of the data of out-and-out malfeasance, but even given honest brokers, the statistics of the matter are mind-bogglingly difficult. Imagine estimating the average temperature of a sphere roughly 8000 miles in diameter with a data set of thermometers unevenly spaced, where gaps and overlaps abound and 2/3 of the surface has no meaningful data. We simply do not have one giant thermometer that gives us the global mean surface T. (The closest we come now is the satellite data, which may in part explain the reduced number of stations; they aren’t really all that necessary with the satellites giving us significantly more accurate and complete data.) Too bad that record only begins in 1979.
In light of the best evidence, I think the Earth has probably warmed. It has certainly warmed since the late 1800s. In fact, it has likely warmed continuously from the LIA to the present.
But this article just takes a look at a rather humble slice of data: unadjusted data from stations with continuous reporting. It is not peer reviewed. It isn’t presented as such and isn’t yet meant for publication.
I think a lot of you believers do not have a good understanding of the nature of this debate. There may be a paper that results from this. It wouldn’t be the first time something that started out here turned into a paper, survived peer review, and got published. But we don’t ostracize those with dissenting ideas here. We talk about those ideas.
That is how science usually begins: a hypothesis is developed and a means of testing it is devised. I don’t know how the cargo cultists who run orthodox climatology are doing it these days. It appears that their science is based entirely on models, data adjustments, squelching dissent and gaming peer review. That is just my perception. That is my opinion, and perhaps I am wrong.
Either way, if the idea in this article were to develop into a peer-reviewed, published paper, it would certainly give those charlatans some issues to address. Especially if the data and methodology were shared freely and without objection.

jmrsudbury
October 24, 2011 8:30 am

MFKBoulder (October 24, 2011 at 6:11 am) says: “Look at the Hohenpeissenberg graph and you see the statement quoted is nothing but belony.”
The link you provided does not say whether the graph uses raw or adjusted data.
Furthermore, the link mentions that, “In March 1950, the status of the Hohenpeissenberg station was upgraded to that of a meteorological observatory.”
Did it switch in 1950 from “Mannheim hours” (readings at 0700, 1400 and 2100 local mean time) to hourly readings? If so, does the graph use only the data taken at “Mannheim hours” for an accurate comparison?
John M Reynolds

More Soylent Green!
October 24, 2011 8:32 am

Mergold says:
October 24, 2011 at 5:20 am
Excellent analysis. As a Republican voter for 40 years, albeit one now residing in Australia, can I just say I have never seen any evidence of warming. Ain’t no difference between a summer’s day in 1970 and a summer’s day now. I’m not sure why all this business of saying the true skeptics believe the world is warming has come up. There is maybe some regional warming somewhere, but no place I’ve been. I think this site is better when it avoids that bunkum. It’s dangerous.

In 1970, very few of us had air conditioning. If we had A/C at home, it was a window unit. (Many of my friends slept outside during the summer.) We didn’t have A/C in our cars. Most people didn’t have A/C at work.
Today, we wake up in an air conditioned home, walk 10 feet and get in our air conditioned cars. We then walk through the parking lot (god, it’s hot out!) and into an air conditioned office or other workplace. It’s no wonder it seems hot out, because we’re no longer used to the normal summer heat.

October 24, 2011 8:35 am

KR says:
October 24, 2011 at 8:18 am
“Averaging raw temperatures (which vary hugely over short distances) rather than anomalies (which don’t – a mountaintop and a nearby pass/beach have different raw temperatures, but see roughly the same weather patterns).”
In fact, my method amounts to averaging anomalies rather than raw temperatures.
“Throwing out 90% of the temperature records, when even a quick examination shows 1/3 of stations with a negative trend, 2/3 with a positive trend, making any conclusions from 10% poorly supported.”
The 10% were selected not for some trend or for location, but purely on the basis of continuity. If you think that criterion is meaningless, fine – I just don’t agree with you.
“For the correlation of nearby station anomalies and area weighting, I would recommend Hansen & Lebedeff 1987 …”
I’m not claiming to have calculated The One True Average Temperature Trend. The only point I make is that long-running stations trend differently from those that are not, and for that I don’t need area weighting.
Thanks for playing.

Doug Proctor
October 24, 2011 8:36 am

Thanks, Michael. You also show that those without certified climate degrees, but with powers of observation and the tools of sharp pencils, can reach significant scientific conclusions. With a “problem” that is global, it amazes me that the claim that you need finely tuned technical backgrounds, million-dollar computers and a statistician’s understanding of why it looks like A but is actually B is so easily accepted by the mainstream. Or was, anyway.
Your question about the large drop-off of stations concerns a situation I’ve never understood, either. You would think that a “problem” threatening the end of mankind/the biosphere would generate more, not less, field work. Yet as the problem became, in the warmist view, worse, the station count collapsed. I’m no stranger to the need to check only the right few to determine the course of the many (politicians as well as businessmen rely on such surveys), but reducing what was going on at such a time seems very odd. Saving pennies when about to spend billions doesn’t seem like what would happen in a budgetary process. Again, the MSM doesn’t seem to find this peculiar.
Senator Inhofe said that AGW was the greatest scam of all; perhaps he was thinking like a man with common sense, seeing such things as the station count drop and saying the whole thing just didn’t make sense. I’d agree with that.

Dave Springer
October 24, 2011 8:39 am

Frank Lansner says:
October 24, 2011 at 5:40 am

Springer
Frank Lansner: “The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts) :
BEST adds 0,55 K to the warm trend 1950-78 compared to RUTI.”
Dave Springer:”Hi Frank. I had a look at the graphs and recognized the source of the difference. That’s the infamous Time of Observation adjustment. It’s the biggest upward adjustment they make.”
Thanks for the comment. Yes, the time of observation… it’s amazing.
So across the world – from country to country, culture to culture, continent to continent – thermometers NOT meant for climate purposes, just to tell people about their local temperatures, and yet we have this synchronous TOBS.
Everywhere, the time of observation has systematically been changed in one direction, producing temperature data that are too cold and “must” be corrected massively.
If this is mostly TOBS (or similar), then I find it interesting that global warming is not really measured, but is created at the desk.

As James Woods said at the end of the film “Contact”… “Yes, that is interesting, isn’t it.”
The strength of the global warming narrative is in the satellite data beginning in 1979. There is little doubt that the data is reliable, accurate to the degree necessary, and coverage is near global and around the clock. The earth’s temp was rising between 1979 and 1999 and has leveled off since then.
But that’s not a long enough period of time for a “climate trend”, which is defined as 30 years of weather. Even 30 years is questionably long enough, because we know for a fact that there are climate cycles that go far beyond a mere 30 years. Interglacial periods, for instance, are on a cycle of 100,000 years. The AMO (Atlantic Multidecadal Oscillation) is a 60-year cycle. In fact many of us believe that the past several decades are simply the warm side of the AMO being measured by satellites and nothing more.

KR
October 24, 2011 8:47 am

Michael Palmer
My apologies on the anomaly/averaging – re-reading your post I see I was incorrect on that.
The lack of area weighting and discarding of 90% of the data, on the other hand, are quite serious issues. As I stated in my previous post, given the limitations you have imposed on the data, I would be equally unsurprised by flat temperatures as by a temperature rise several times what is noted in any of the records. You have also used the raw data, rather than data adjusted for changes at the various stations (as in new thermometers and the like). That could change the data either up or down – but will inevitably add yet another source of error and variation, making your conclusions even less statistically supported.
Area weighting data simply allows you to use the other 90% of the available data.
“Thanks for playing” – Oh? You consider this a game?

peetee
October 24, 2011 8:48 am

Michael Palmer – Disclaimer: I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetics. In case I have overlooked equivalent previous work, this is due to my ignorance of the field
nuff said!

TheGoodLocust
October 24, 2011 8:51 am

Wouldn’t a more accurate (and more computationally intensive) method be to take the temperature deltas by month or even by day, and then average those together to get the average delta?
To clarify: if you went by days (or even specific times of day), you’d take all the January 1st readings and calculate your base period and delta for that specific day. You’d then do that for every day of the year, and then calculate the year from there.
This would mitigate missing records from days or times by simply ignoring them.
There may still be a bias, though, since temperatures may not be measured under certain weather conditions (i.e., very cold), but it would prevent the input of false signals.
To make it even more accurate and challenging, you could recalculate the base period every time the station data is offline for more than a week or so. Essentially, if a station moves or equipment is changed/repaired, this may be reflected by a period of missing records. The logical thing to do would be to treat it as a completely separate station instead of comparing its data to the older data.
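For what it is worth, here is a minimal Python sketch of the day-of-year delta idea described above; the data layout (a dict keyed by calendar date) and the choice of base years are assumptions made for illustration, not the method of the original post:

from collections import defaultdict

def yearly_anomalies(readings, base_years):
    # readings: dict mapping (year, month, day) -> temperature.
    # Each reading is compared against that calendar day's own base-period
    # mean; missing days are simply skipped, as suggested above.
    base = defaultdict(list)
    for (year, month, day), temp in readings.items():
        if year in base_years:
            base[(month, day)].append(temp)
    day_baseline = {k: sum(v) / len(v) for k, v in base.items()}

    per_year = defaultdict(list)
    for (year, month, day), temp in readings.items():
        if (month, day) in day_baseline:
            per_year[year].append(temp - day_baseline[(month, day)])
    return {year: sum(d) / len(d) for year, d in per_year.items()}

# Example (hypothetical data): readings = {(1900, 1, 1): -5.2, ...}
# yearly = yearly_anomalies(readings, base_years=range(1900, 1931))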

Ivor Ward
October 24, 2011 8:55 am

Dave Springer asks: “Hi Ivor. Just curious about how cloud height is determined aboard ship.”
Hi Dave,
We used the Mark 1 uncorrected eyeball, a large chart supplied by the Met Office, vast amounts of experience and a lot of guesswork. As you know, the various types of clouds have different base levels, so the starting point was always to decide what type of cloud you were looking at. The low clouds were fairly easy to determine, but for high clouds it was largely a matter of looking at the type on a chart and then trying to see whether it had a positive base, i.e. at the lowest end of its height range, or a blurred baseline, possibly higher. In those days we did Met as part of the Board of Trade exams. I don’t recall a course called “staring at computer screens”; we actually had to look out of the bridge windows! A vertical sextant angle would not work, unfortunately, unless God dropped a plumb line to give a reference point between the horizon and the cloud base. Cloud cover was estimated by quartering the sky and using percentage estimates. You can understand my chagrin at the way our guesstimated data are now refined to multiple decimal points.
I too took the nuclear defence course. We were told to take cover in the shaft tunnel by an enthusiastic Lt Commander. When we pointed out the lack of such on a 200,000-ton tanker, we were told to turn our backs to the flash, close our eyes, bend over and… you can guess the rest.

crosspatch
October 24, 2011 8:57 am

Is there an official explanation for why, in the modern era with all the funding available, the number of stations has dropped precipitously?

Well, if you have ever had a look at the code that does the “adjustments”, I would guess that it gets very complex with a lot of stations. So if you reduce the number of stations, it becomes much faster and easier to do the “adjustments”. That could explain a reason for wanting to reduce the *number* of stations, but it doesn’t explain why the *nature* of the stations selected for deletion was biased toward colder stations. Rural and high-altitude stations were chopped mercilessly, and not just in the US; the same goes for South America, too.
To address another question asked above: yes, these stations are by and large still reporting every day. They are available in many cases electronically over the Internet. The stations are still there, and the data are still there; GISS simply no longer uses them.
This is crazy when you consider that the three coastal stations now representing all of California in no way reflect the weather in, say, Bridgeport, California, which is at about 7,000 feet altitude and is east of the Sierra Nevada. For example, the forecast temperatures in Bridgeport for Wednesday are a high of 47°F and a low of 14°F, where the forecasts for San Francisco are a high of 70°F and a low of 56°F.
That certainly changes the “average” temperature of the state, requires a much greater degree of “adjustment”, and does not take into account wind direction. San Francisco can warm considerably this time of year when the wind comes from the east and we get adiabatic warming from air dropping in altitude from the Sierra Nevada (like a “Chinook” wind).

Rob Potter
October 24, 2011 9:03 am

A number of people are questioning why there is a post on this site arguing that the temps have not gone up when there is also widespread acceptance of warming during the same time period, and I think the problem is that this posting is not talking about “global” temperature, but about the US.
I also got confused by the BEST figures, because they show such an increase since the 1930s when – in the US – those years were as warm as the last decade, and I wondered why no one was querying this. The fact is that – taken over the whole planet – records show an increase in average temperature since the 1800s, and although there are still arguments over how much, no one on any side is really debating this. [That’s why Muller’s comment about the BEST analysis shooting down skeptics is a straw man argument.]
This post is talking about the US and considering the 20th century as a single chunk – partly to point out the effect of the changes in the number of stations on the rate of change. It is as much an exercise in methods of developing a long-term record in the presence of missing data as it is a comment on actual temperatures, but this is an important point, since we know there are problems with missing data.
It has been a very effective posting, because it has generated a lot of comments, some of which contain very interesting and useful information themselves. Excuse me for shouting, but THAT IS THE POINT OF A SCIENTIFIC BLOG. If all you want is confirmation of your existing opinion, go to a political blog site.
Thanks, Michael, for this analysis; thanks to Anthony for posting it (and much, much more); and thanks to the commenters who have read the post, thought about it, and are providing some useful feedback and discussion.

Dave Springer
October 24, 2011 9:26 am

Glenn Tamblyn says:
October 24, 2011 at 1:40 am
Skeptical Science was caught red-handed editing, post facto, an article in which a senior climate scientist was making critical comments. Not only did they treat a very civil senior scientist with expert credentials in the field poorly, they edited their own article afterward to make him look worse. This was proven beyond a shadow of a doubt by comparing archived versions of the article and commentary (at archive.org). SkS was busted beyond any doubt at all.
Anthony Watts does not want links to SkS appearing here because that raises SkS google rankings appreciably and they do not deserve the added page views that come with a higher search ranking. It’s not rocket science, it’s quite understandable, and it’s Anthony’s call to make.
Besides that, if whatever point you were trying to make had any merit, you wouldn’t need to rely on a single source for a reference. If SkS is the only source you have, then it’s a moot point to begin with.

Mike Smith
October 24, 2011 9:28 am

Yikes.
Dr Palmer’s paper appears to demonstrate that the settled science of warming is attributable solely to the “fudge factors” (data selection and corrections) typically applied to such work.
It does not address the question of whether or not these “fudge factors” are legitimate or justified, but it surely calls for further examination of same.
I hope this work can be submitted for formal peer-reviewed publication so that the warmists are forced (shamed) into explaining why the “corrections” applied to their raw data just happen to be exactly equal to the warming trend they report. The usual hand-waving is insufficient, and the powerful presentation in this article makes that pretty darn obvious.
On the methodology… we all know that data selection is always dangerous. However, the particular selection used in this article, based on nothing more than the continuity of the station data, does seem perfectly justified and certainly raises some fascinating questions.
Beautiful paper!

Matt
October 24, 2011 9:30 am

The “lopping off” of the pre-1900 and post-2000 data is a major influence on the regressions, because the shortened period gives more weight to significant events occurring within the time span used for Figure 2, especially those in the first half of the century. Anomalies like the 1930s and 1950s droughts tend to skew regressions negatively toward the end of the century, because they were such major events temporally and spatially.

October 24, 2011 9:31 am

“KPO says:
October 24, 2011 at 2:17 am
I have this sense that there are parameters missing such as humidity,”
I understand where you are coming from. However, how much difference does it really make in the end? The percentage of water vapor in the atmosphere can vary from close to 0 to about 4%. Let us assume that in a dry year the humidity averages 1%, and in a humid year it averages 3%. The specific heat capacity of air is 1.0; let us assume the specific heat capacity of water vapor is 2.0. So if the air has 1% water vapor, the average specific heat capacity is 1.01, and if the air has 3% water vapor, it is 1.03. I know the molar mass of water is 18, not 29, but if we just assume they are the same, then the mass of the atmosphere with 3% water vapor is 2% larger than with 1% water vapor. (I am also generously assuming water vapor exists evenly throughout the atmosphere and does not condense out.) Then, setting mcΔT(moist air) = mcΔT(dry air), we find that the mc for the moist air is about 4% larger than for the dry air. To balance things out, the dry air therefore has to have a temperature change 4% larger than the moist air: if the moist air goes up by 1.00 degrees C, the dry air, with the same energy input, would go up by 1.04 degrees C. So I would say the difference is very small. Perhaps the error bars need to be made just a wee bit larger to account for the unknown average humidity values? Note that I am not addressing phase changes that may occur due to humidity, which is a separate topic.
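That back-of-envelope arithmetic is easy to verify; here is a tiny Python check using the same generous simplifications (equal molar masses, well-mixed vapor, and the assumed heat capacities of 1.0 and 2.0):

c_air, c_vapor = 1.0, 2.0  # assumed relative specific heat capacities

def mc(vapor_frac, ref_frac=0.01):
    mass = 1.0 + (vapor_frac - ref_frac)  # 3% vapor -> ~2% more mass
    c = (1.0 - vapor_frac) * c_air + vapor_frac * c_vapor
    return mass * c

ratio = mc(0.03) / mc(0.01)  # humid year vs dry year
print(f"Moist-air mc is {100 * (ratio - 1):.1f}% larger")            # about 4%
print(f"Dry-air rise per 1.00 deg C moist-air rise: {ratio:.2f} deg C")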

Dave Springer
October 24, 2011 9:33 am

Rob Potter says:
October 24, 2011 at 9:03 am
“I also got confused by the BEST figures, because they show such an increase since the 1930s when – in the US – those years were as warm as the last decade, and I wondered why no one was querying this. The fact is that – taken over the whole planet – ”
Rob, the fact is that there IS NO TEMPERATURE record for the whole planet. Period. Even today there are vast areas missing inside the Arctic and Antarctic regions, because the satellites don’t have a view into them.
The Southern Hemisphere was almost a complete unknown, with virtually no instrumental temperature record until well into the 20th century. Adding insult to injury, there is almost no coverage for the entire continent of Asia until well into the 20th century, and virtually none for any of the world’s oceans except in shipping lanes.
To pretend the situation is different is an outright lie. There IS NO RELIABLE GLOBAL instrumental record pre-dating the satellite era, period. End of story.

Dave in Canmore
October 24, 2011 9:39 am

Stephen Wilde says:
“From our perspective there seems to be a warming or cooling at the recording sites when averaged out overall, but in reality all that is being recorded is the rate of energy flow past the recording sites as the speed of energy flow through the system changes in an inevitable negative response to a forcing agent, whether it be sun, sea or GHGs. In effect, the positions of the surface sensors vary in relation to the position of the climate zones, and they record that variance and NOT any change in energy content for the system as a whole.”
A most welcome summary of the starting point that is a temperature record. For many, a temperature record is the end of the thought process, but it really is the beginning. Thanks for reminding us what these data points actually mean!

DR
October 24, 2011 9:39 am

Has USHCN-M any data worth looking at yet?

Theo Goodwin
October 24, 2011 9:40 am

Michael Palmer’s article is an important one and we need to focus on the Big Picture. The article introduces two topics, one about station quality and the other about calculation. The station quality topic is logically prior to the other topic and raises the very important question about station quality and the empirical evidence for it.
Palmer describes the stations reporting continuously since 1900 as follows:
“Here, I selected for stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.”
Graphing these stations, he concludes that:
“While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.”
From Palmer’s observations, we need to ask what we can infer about the stations. I suggest that the most important and telling inference that can be drawn is that these stations have been well managed. The stations that do not fall into the category of “well managed” can then be graded on various levels of “poor management.” The levels of poor management can be determined by searching for causes of gaps or bumps and similar matters. (Bumps occur when there is a sudden and large continuous change in temperatures reported.)
I emphasize poor management for a very important reason. The only reasonable inference that can be made about stations with numerous gaps and bumps is that the readings that come from them are flaky. Yes, flaky, as in usually inaccurate and maybe in several different ways. The inferences that Warmista want to draw at this point are that errors offset one another, that errors are one-time shifts that do not affect trends, that surrounding stations are not flaky, and so on. Obviously, none of those inferences are justifiable without the results of empirical research done on the ground. Because Warmista adamantly refuse to engage in such empirical research, they are making wholly unjustified assumptions.
For the last thirty years, Anthony Watts and others have gathered information about siting which could explain many gaps and bumps and which could be used in grading poor management. Watts’ factual information goes far beyond what has been described here.
When cornered, the Warmista response is that all of these empirical matters are unimportant because their incredibly sophisticated statistical techniques enable them to compensate for any flakiness in the weather station records. The breathtaking boldness of this claim makes it highly suspect. It raises the question of whether the Warmista could specify any degree of flakiness that could not be accommodated within their statistical techniques. (Please note that questions of calculation are separate from, and can be in conflict with, empirical knowledge of stations.)
The practical conclusion of all this is that the records of well-managed weather stations should be privileged over those of poorly managed stations in calculations of average temperatures. Palmer’s claim that the well-managed stations show no temperature trend at all should be the accepted baseline among climate scientists, and deviations from it should require justification from empirical research on particular poorly managed stations.

Mike Smith
October 24, 2011 9:41 am

Matt says:
The “lopping” off of the pre-1900s and post-2000s is a major influencing factor on the regressions…
A point that was fully addressed in the paper.

beng
October 24, 2011 9:48 am

****
Frank Lansner says:
October 24, 2011 at 5:40 am
Thanks for the comment. Yes, the time of observation… it’s amazing.
So across the world – from country to country, culture to culture, continent to continent – thermometers NOT meant for climate purposes, just to tell people about their local temperatures, and yet we have this synchronous TOBS.
Everywhere, the time of observation has systematically been changed in one direction, producing temperature data that are too cold and “must” be corrected massively.

****
Technically, TOBS is a legitimate correction, but then how could corrections to so many stations (thousands) produce a TOBS adjustment so lopsidedly upward? One would think that such a correction, applied globally over so many stations, would end up nearly random – near zero. And the TOBS correction isn’t even derived from each individual station’s data; it’s done by a TOBS “model” (algorithm).
I assume TOBS “models” are as trustworthy as climate models, until shown otherwise.

Dave Springer
October 24, 2011 9:49 am

Ivor Ward says:
October 24, 2011 at 8:55 am
Interesting. I’d have thought you could simply measure the amount of sky showing between the horizon and the bottom of the cloud deck. The distance to the horizon at sea should be pretty constant, with possibly some adjustment needed for the height of the ship’s deck above the waterline, which would let you see some distance further than the line of sight from the waterline.
In NBC school we didn’t have sextants. In order to determine the height of the mushroom cloud we used “thumb widths”, i.e. hold your arm straight out with your thumb horizontal and count the number of thumb widths from the ground to the top of the mushroom cloud. IIRC, each thumb width is about 5 degrees. With a distance estimate taken from the time between flash and sound, you have the length of one side and two angles (including the 90-degree angle at the base of the mushroom cloud) of a right triangle, which is sufficient data to solve for the lengths of the other sides. Exactly the same thing -should- work at sea to measure the height of the cloud deck, although on a rolling ship it might be quite difficult counting thumb widths! Maybe beyond difficult, as I’ve never tried anything like that and have spent virtually zero time on ships at sea. I’ve been in all kinds of planes and helicopters, all kinds of watercraft on inland waters, and all sorts of land vehicles, but only a couple of half-day ocean fishing trips make up my maritime experience – enough to know I don’t get seasick in modest swells, but that’s about it.
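The triangle arithmetic is simple enough to sketch in a few lines of Python; the speed of sound, the 5-degrees-per-thumb rule and the sample inputs are rough field values of the kind described above, not precise constants:

import math

SPEED_OF_SOUND = 343.0  # m/s, approximate, near sea level

def cloud_height_m(flash_to_bang_s, thumb_widths, deg_per_thumb=5.0):
    distance = SPEED_OF_SOUND * flash_to_bang_s         # range to the burst
    angle = math.radians(thumb_widths * deg_per_thumb)  # elevation angle
    return distance * math.tan(angle)                   # opposite side of the triangle

# e.g. 30 s flash-to-bang (~10.3 km away) and 4 thumb widths (~20 degrees):
print(f"{cloud_height_m(30.0, 4):.0f} m")  # roughly 3700 m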

October 24, 2011 9:57 am

The strength of the global warming narrative is in the satellite data beginning in 1979. There is little doubt that the data is reliable, accurate to the degree necessary, and coverage is near global and around the clock.

I admire your faith in satellite data that is not calibrated against earth-based thermometers… satellite data that cannot be independently verified, processed or checked… although the satellite record starts in 1979, the series has been accumulated from various satellites with differing equipment and differing failure rates… I do not share your unquestioning faith in the accuracy of the data, the reliability of the equipment, or the scope and timeliness of the coverage… let alone the subsequent processing of the raw data by the usual suspects.

October 24, 2011 10:22 am

I am not trying to downplay this work, but again: any study that does not show the development of maxima and minima TOGETHER with the averages is pretty useless, I think.
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

Rob Potter
October 24, 2011 10:24 am

Dave,
I agree that there is no reliable global thermometer, but there is a set of global records that people use to create an artificial construct called the global temperature, and – for some reason – everyone looks at it and thinks it means something.
I was simply pointing out that the supposed disconnect in this article was the comparison of the (artificial) US temp with the (artificial) world temp.
The whole notion of a global temperature (even if you use satellites) is an artificial construct. Heck, the concept of temperature itself is an artificial construct, a way to refer to energy. However, it serves a useful purpose, because it is something that can be defined simply and compared over time and space.

DirkH
October 24, 2011 10:27 am

Rob Potter says:
October 24, 2011 at 9:03 am
“A number of people are questioning why there is a post on this site arguing that the temps have not gone up when there is also widespread acceptance of warming during the same time period, and I think the problem is that this posting is not talking about “global” temperature, but about the US.”
One of the longest-running records in Europe is Berlin; nearly no trend over 300 years:
http://notrickszone.com/2010/09/23/own-weather-records-contradict-germanys-weather-service-director/
There are, though, smaller waves, and I think it is pretty clear that we are at the moment at the top of one of these waves, so one can construct 30-year trends that show a warming. The CAGW movement uses this to shout “This time is different” – like the people who believed in ever-increasing house prices. We know how that one ended.

October 24, 2011 10:31 am

Springer
“The strength of the global warming narrative is in the satellite data beginning in 1979. ”
Yes, the great argument is that the land temperature data are not too far from the satellite data, some say.
But the largest difference occurred in 1950–78, it seems.
K.R. Frank

October 24, 2011 10:32 am

DirkH says:
October 24, 2011 at 2:50 am
But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around,
You MUST use some form of area weighting. This may be difficult to do, depending on whether location data are available. But a poor man’s weighting would be to calculate the average temp for each state [if that location is available], then average the 48 states.
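A toy illustration of that poor man’s weighting (the station list, states and anomaly values below are all invented):

from collections import defaultdict

def state_weighted_mean(stations):
    # stations: dict mapping station name -> (state, anomaly)
    by_state = defaultdict(list)
    for state, anomaly in stations.values():
        by_state[state].append(anomaly)
    state_means = [sum(v) / len(v) for v in by_state.values()]
    return sum(state_means) / len(state_means)

stations = {
    "MT_lonely":  ("MT", 0.30),  # one isolated station
    "CA_coast_1": ("CA", 0.80),
    "CA_coast_2": ("CA", 1.00),  # a clustered pair no longer counts twice
}
# Naive mean of all stations: (0.30 + 0.80 + 1.00) / 3 = 0.70
# State-weighted mean: (0.30 + 0.90) / 2 = 0.60
print(f"{state_weighted_mean(stations):+.2f} deg C")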

October 24, 2011 10:39 am

KR says:
October 24, 2011 at 8:47 am
“The lack of area weighting and discarding of 90% of the data, on the other hand, are quite serious issues.”
You persevere in misunderstanding the intention of my post. But if throwing away 90% of the stations is so terrible, why does GISS do the same? They axed 90% of their stations themselves.
“Area weighting data simply allows you to use the other 90% of the available data.”
Thank you. I already suspected that you were clueless, but now I’m sure of it.
‘“Thanks for playing” – Oh? You consider this a game?’
Yes. For me, it is – my day job is in real science.

Brian
October 24, 2011 10:45 am

Interesting that posts like this keep popping up when Anthony has admitted the earth is warming, just not that humans are causing it. Are you going back on your pledge to “accept whatever result they produce, even if it proves my premise wrong”?
Has Anthony ever detailed what exactly it would take for him to accept AGW? How high do temperatures have to get? Who has to do the analysis? Because after his reversal on the BEST study, it seems that any study that supports AGW must be wrong for some reason or another.

Ivor Ward
October 24, 2011 10:46 am

Dave Springer says:
October 24, 2011 at 9:49 am
The difficulty is in deciding which point of the cloud base is directly above the horizon. Due to the curvature of the Earth, the cloud base continues over the horizon until eventually it appears to meet the horizon. Without the plumb line to indicate exactly where the point of measurement should be, you can pick the observed height all the way down to zero. We have corrections for refraction, parallax and dip (height of eye), and a vertical sextant angle can be used to determine the distance of an object of known height, or the height of an object of known distance.
(I still use rule of thumb when yachting!)

Louis
October 24, 2011 10:55 am

Has anyone estimated the margin of error associated with calculating global temperature? That could be the elephant in the room. I suspect that the margin of error is greater than the estimated warming of about 1 degree C over the past century. If a margin of error has been estimated, can someone please provide a link to it?
Local temperatures can change several degrees in less than an hour. So, unless all temperature stations around the world are synchronized to record temperature at the exact same time of day, the margin of error could be greater than 1 degree C. Differences in Daylight Saving Time around the world alone could play havoc with the data.
Then you have to consider whether the number of data points is sufficient to get an accurate estimate of the world’s temperature. The BEST data claim that a third of temperatures show a decline and two-thirds show an increase. This indicates a large variability from region to region and implies that you need a great number of data points around the world to accurately measure an average temperature. But instead, the number of data points has been drastically reduced, leaving large regions without any measurements. The large polar regions, which are supposed to be the most affected by warming, have no temperature recordings but are entirely estimated. This too increases the margin of error of any global temperature calculation. Am I the only one who suspects that the margin of error dwarfs the small increase in temperature over the past century?

October 24, 2011 10:57 am

Anthony, et al,
I would love to see you address the following in a post on your site.
An even easier way to demonstrate a long-term flat USA trend is simply to insist that the date range begin and end at similar points in the AMO cycle. See the following post:
http://sbvor.blogspot.com/2011/10/amo-as-driver-of-climate-change.html

Gosport Mike
October 24, 2011 11:25 am

Just a couple of points if I may.
1. I believe average global temperature to be immeasurable and meaningless. Local average temperature variations may have some use but, surely, it is the extremes that matter. After all, a climate which freezes at night and fries during the day would hardly be characterised by its average.
2. Apart from the effrontery of the pseudo-science, the only thing that really matters about AGW is the suggestion that we should be doing something about it. This has led to vast sums being wasted on carbon trading, windmills and second-rate climate studies – all of which should stop now.

Ken Harvey
October 24, 2011 11:31 am

I am one of those who are sceptical as to whether there has been any increase in average temperature over the last century or so. I am one of those who are sceptical as to whether an average can be arrived at, and whether it would have anything other than a conceptual meaning if it could. I am one of those who deny that an average can be calculated using existing resources.
Having regard for the width of the error bars arising from the shortcomings in the metrology, from the hodgepodge availability and distribution of data, and from the inconsistency of the data sources, the current temperature number is no more than a guess, and I can see no reason to believe that it is correct to within 2 degrees.
How is it that, lacking any qualification in climate science, I can be so adamant in what I say? It is because I am blessed (or cursed) with being numerate. If the data are dodgy, one cannot manipulate them in a way that would resolve to a valid answer.

Steve C
October 24, 2011 11:34 am

Lansner – Thanks for that! I wasn’t familiar with RUTI, so I was assuming that it was *only* rural. But as a mix close to what I was asking for, it certainly looks a darn sight more convincing, as expected. I shall have to come over to hidethedecline and look around. 🙂

Sean
October 24, 2011 11:37 am

malagaview:
While not a new idea, the mixing of max and min temps can be justified on the basis of consistency with practices from before there were data loggers. You do what you can with the information you have. However, the real ‘no-no’ is collating daily readings into monthly values and calling that monthly raw data. Months are not all the same length. Over much of the recorded period, months did not even start within the same week in different countries. Working with months, you have infilling and dropping of days within the month, and again of months within years.
Raw means what you saw. That means the max and the min, or the reading and the time of day as written down – with the gaps where there are gaps.

October 24, 2011 11:43 am

Sean,
Quite correct. For example, the December temperature changes look like this.

Interstellar Bill
October 24, 2011 11:43 am

This discarding of stations that, in effect,
are the ones reporting level or declining temperatures:
Didn’t I read here at WUWT the same thing about sea-level?
Before they calculate the ‘global’ sea-level,
they throw out all stations showing decline or no rise,
since they ‘know’
that something they call ‘global sea level’
is on the brink of catastrophically rising.
They can’t have those meaningless outliers contaminating their message.

crosspatch
October 24, 2011 11:43 am

What I find interesting is that NOAA’s National Climate Data Center shows significant cooling for the continental US since 1998 using the USHCNv2 data set. The rate of cooling for the most recent 12-month period (October to September) since 1998 is -0.77 degF/decade. That is a significant cooling trend in the US.

More Soylent Green!
October 24, 2011 11:51 am

Brian says:
October 24, 2011 at 10:45 am
Interesting that posts like this keep popping up when Anthony has admitted the earth is warming, just not that humans are causing it. Are you going back on your pledge to “accept whatever result they produce, even if it proves my premise wrong”?
Has Anthony ever detailed what exactly it would take for him to accept AGW? How high do temperatures have to get? Who has to do the analysis? Because after his reversal on the BEST study, it seems that any study that supports AGW must be wrong for some reason or another.

No matter how high the temperatures get, it’s still not evidence of AGW! Global warming is not AGW! We could set record high temperatures everywhere on the globe from now until the sun burns out, and it still wouldn’t be evidence of AGW, because global warming does not mean AGW.
Remember these two things:
1) Evidence of global warming is not evidence of anthropogenic global warming.
2) Repeat #1 until you get it.

Harry Snape
October 24, 2011 12:15 pm

Roger Knights wrote:
“The GISS dataset includes more than 600 stations within the U.S. …
So no worries about Antarctica or the Equator.”
I’d expect quite a difference in continental US temps between Florida and North Dakota.
“It isn’t the temperature that’s replaced, but the delta (the little triangle is the delta) of the temperature; i.e., the anomaly. (If I’m reading the formula correctly.)”
Replacing a missing figure like the example I gave – Sydney’s 50-year low in October – with an average for October would be quite wrong. If deltas are used and the average delta is infilled, the error bars should be extended by the maximum variance seen historically in the deltas for that date, and even further if the number of historical records is low.
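A minimal sketch of that widened-uncertainty rule (purely illustrative: the deltas are invented, and the full historical range stands in for the “maximum variance” suggested above):

def infill_delta(historical_deltas):
    # Estimate a missing delta by the historical mean for that calendar
    # date, with an error bar spanning the full historical range.
    estimate = sum(historical_deltas) / len(historical_deltas)
    half_range = (max(historical_deltas) - min(historical_deltas)) / 2
    return estimate, half_range

est, err = infill_delta([-1.2, 0.4, 0.1, 2.0, -0.5])
print(f"Infilled delta = {est:+.2f} +/- {err:.2f} deg C")  # +0.16 +/- 1.60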

Brian
October 24, 2011 12:19 pm

@Soylent Green
But once it’s clear the earth is warming (really, it already is), you need to suggest a cause. Either it’s the human emissions of gases that are known to have warming effects, or it’s something else. The list of possible “something elses” shrinks as temperatures keep going up. Claiming that it’s a coincidence is pretty hard to swallow.

October 24, 2011 12:29 pm

More Soylent Green! says:
October 24, 2011 at 8:32 am
… It’s no wonder it seems hot out, because we’re no longer used to the normal summer heat.

I’ve thought about that myself quite often. I vaguely remember suffering from the heat back then, but it was a natural part of our life and we just made do. Much the same thing has happened with hygiene. 100 years ago or more, being somewhat dirty all the time was simply a matter of course. Now, however, we’re used to being clean virtually all the time.
Another factor is the “humidex”. While I’d never argue against the merit of having a humidex, it does tend to fool people into thinking it’s hotter now than before. I have, so very, very often, heard people say things like, “My God, it’s 42 degrees (Celsius). It never reached those temperatures here when I was a kid!” Well, it hasn’t reached those temperatures here now, either, you moron, because that’s the freakin’ humidex!

October 24, 2011 12:35 pm

Henry and Soylent green
Guys, please get it into your heads. Stop spreading lies.
http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/#comment-776475
most of the warming is natural (witness the large increases in maxima) and a small % is caused by the increase in vegetation (that traps the extra heat), mostly in the NH
stick with the truth.
http://www.letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok

October 24, 2011 12:42 pm

Brian sez:
“But once it’s clear the earth is warming (really it already is) you need to suggest a cause.”
Warming over what time frame? The only dataset I trust is the UAH satellite data.
But, the UAH data begin around the bottom of an AMO cooling cycle and currently end around the peak of an AMO warming cycle. The next AMO cooling cycle will bottom out somewhere around 2040 to 2045. So, we’ll have to wait at least that long to even begin to have enough data to draw any sort of reasonable conclusions.
Meantime, a century-scale flat USA trend is easily demonstrated by simply insisting that the date range begin and end at similar points in the AMO cycle. See the following post:
http://sbvor.blogspot.com/2011/10/amo-as-driver-of-climate-change.html
I am reasonably certain the same would hold true for Greenland and most of Europe. In 2045, once we have credible global data, we’ll begin to have some idea to what extent that holds true for the entire planet.
In the following post, I have cited several examples of peer reviewed science demonstrating the extent to which the roughly 70 year AMO cycle drives global temperature cycles:
http://sbvor.blogspot.com/2010/12/how-amo-killed-cagw-cult.html

October 24, 2011 12:50 pm

Michael Palmer says: Yes. For me, it is – my day job is in real science.

BRAVO! Give this man a cigar 🙂

October 24, 2011 12:54 pm

Brian says:
“But once it’s clear the earth is warming (really, it already is), you need to suggest a cause. Either it’s the human emissions of gases that are known to have warming effects, or it’s something else. The list of possible “something elses” shrinks as temperatures keep going up. Claiming that it’s a coincidence is pretty hard to swallow.”
• • •
Brian, Prof Richard Lindzen explains:
The notion of a static, unchanging climate is foreign to the history of the earth or any other planet with a fluid envelope. The fact that the developed world went into hysterics over changes in global mean temperature anomaly of a few tenths of a degree will astound future generations. Such hysteria simply represents the scientific illiteracy of much of the public, the susceptibility of the public to the substitution of repetition for truth, and the exploitation of these weaknesses by politicians, environmental promoters, and, after 20 years of media drum beating, many others as well. Climate is always changing. We have had ice ages and warmer periods when alligators were found in Spitzbergen. Ice ages have occurred in a hundred thousand year cycle for the last 700 thousand years, and there have been previous periods that appear to have been warmer than the present despite CO2 levels being lower than they are now. More recently, we have had the medieval warm period and the little ice age. During the latter, alpine glaciers advanced to the chagrin of overrun villages. Since the beginning of the 19th Century these glaciers have been retreating. Frankly, we don’t fully understand either the advance or the retreat… For small changes in climate associated with tenths of a degree, there is no need for any external cause. The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work suggests that this variability is enough to account for all climate change since the 19th Century. [my emphasis]
Invoking “carbon” as a cause of natural climate variability is the basis for the entire climate alarmist industry. But the 40% increase in harmless, beneficial CO2 has not made an appreciable difference; the warming over the past century and a half has been from 288K to 288.8K, a minuscule rise. And there is zero testable evidence that it was caused by the rise in CO2.

AlexW
October 24, 2011 12:55 pm

@DirkH says:
October 24, 2011 at 10:27 am
“One of the longest running records in Europe is Berlin; nearly no trend over 300 years”
Berlin Dahlem is also one of the sites which was removed from the GISS record last year

October 24, 2011 12:56 pm

Brian, we don’t need to suggest a cause for the current warming because natural variability is the null hypothesis.
Before I could accept that climate change is caused by anthropogenic CO2 emissions, the following would have to apply:
1. Temperatures would have to rise above those in the Medieval Warm Period, the Roman Warm Period, the Minoan Warm Period, most of the early Holocene, and several earlier interglacial periods. Otherwise it’s natural cycles.
2. Temperatures would have to track CO2 levels with a suitable lag. They don't. Turning points in the temperature record (1910, 1940, 1970, 2000) would have to correspond to turning points in the CO2 record. They don't.
3. The major assumption of the CO2 warming hypothesis, i.e. positive water vapor feedbacks, would have to be demonstrated in empirical data. My reading of the situation is that Spencer & Braswell, Lindzen & Choi, Miskolczi and others are winning this argument hands down.

crosspatch
October 24, 2011 1:06 pm

It isn't "global warming" that anyone is skeptical of. It is whether it is caused by anything people are doing, or whether anything people are doing has a significant impact on the rate/amount of warming. The globe is pretty much always "warming" or "cooling" and is rarely stable for any great length of time. The point is that nobody has shown that any current warming is at a greater rate than has occurred naturally before the industrial age.
In fact, the post-industrial warming is simply the only major warming trend we have had since the LIA. What we have measured is the recovery from the LIA till 1933, then cooling until 1976, then another period of warming till about 1998.
What is so frustrating is the constant adjusting of the adjustments. When we have a warm year that is close to 1933, it gets adjusted upward and 1933 gets adjusted downward to make a new “hottest” year. It simply would not fly in any other field of science. There is too much subjective “adjustment” applied to the records. I would want to look at rural stations that are still rural (continuously reporting notwithstanding) and see what the raw data show.
I believe the “adjustments” are corrupt and are agenda-driven.

cwj
October 24, 2011 1:08 pm

If the objective is to determine the average trend of many stations, wouldn’t the least biased method of estimating missing data be to determine a trend for that station based on available data, and substitute the value predicted by that trend for the missing data? Then when all the data are aggregated, the trends expressed by the data for each of the stations would be weighted equally. One station would not be affecting the trend in the data from another station.
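For concreteness, here is a minimal Python sketch of the substitution cwj proposes, assuming a straight-line trend and NaN for missing months; both are my illustrative choices, not part of any published method:

import numpy as np

def fill_by_station_trend(t, temps):
    # temps: one station's series (NumPy array) with NaN marking missing
    # readings; t: the matching time axis. Fit a line to the available
    # readings and substitute the fitted value into each gap, so the
    # station's own trend fills its own holes.
    ok = ~np.isnan(temps)
    slope, intercept = np.polyfit(t[ok], temps[ok], 1)
    filled = temps.copy()
    filled[~ok] = slope * t[~ok] + intercept
    return filled

Each station's gaps are then filled from its own trend, so no station's data bleeds into another's, which is the point of the suggestion.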

KR
October 24, 2011 1:19 pm

Michael Palmer
‘“Thanks for playing” – Oh? You consider this a game?’
Yes. For me, it is – my day job is in real science.

That’s a very telling statement. If you don’t consider studying the climate worthy of actual scientific effort, then, well, never mind. I’ll just take your post as seriously as you apparently do.
Adieu

Brian
October 24, 2011 1:19 pm

Amazing that the “politicians and environmental promoters” have managed to convince 97% of climatologists and essentially every major scientific organization that AGW is real. Especially with the entire Republican party (always friends of science!) and oil and coal interests fighting tooth and nail. As a layman observer it seems clear that claiming to understand the research well enough to side with the 3% of people who know what they’re talking about is disingenuous.

DR_UK
October 24, 2011 1:22 pm

This is interesting. The use of long station series seems a very good idea.
But isn’t this a similar method of taking first differences that was discussed and criticised before? See Hu McCulloch’s 2010 post at Climate Audit http://climateaudit.org/2010/08/19/the-first-difference-method/. I can’t say I understand all the arguments in that thread, but is there a better way of dealing with missing years?

October 24, 2011 1:22 pm

beng
“I assume TOBS “models” are as trustworthy as climate models, until shown otherwise.”
TOBS is an empirically derived adjustment. You can read Karl's paper or the subsequent verification of it.
Essentially, here is the process: for the entire United States, all of the HOURLY stations were assembled. That database is then split into two parts: one part for model development, the other part for model test.
Model development. Since you have HOURLY data, you can calculate the following:
what is (Tmin+Tmax)/2 if you record at
1 AM, 2 AM, 3 AM, 4 AM, 5 AM, 6 AM, 7 AM, etc.
That gives you a Tave for every hour of the day, or rather the Tave that would be recorded IF the TOB was at a given hour.
From that you derive an offset, like so:
Suppose that Tave at 5 PM is 15C and Tave at 7 AM is 14.5C.
That gives you an adjustment of -0.5C.
This allows you to adjust ANY TOB to a common TOB. Historically, rural stations manned by individuals had a TOB of 5 PM; those are "moved" forward by adjusting to the 7 AM time.
Given these "deltas", an empirical model is then developed for every region of the USA. It depends upon latitude, longitude, and the sun's position (season). That empirical model is basically a regression.
The regression is then tested against the "held out" stations to see how well it predicts the actual Tave. All of the standard errors of prediction are in the Karl paper.
CA had a whole discussion of this some time ago. At first TOBS made no sense to me; then I went through some examples prepared by JerryB for John Daly's old site.
Arguing about TOBS is a waste of time. It needs to be applied in ANY analysis that does simple averages. Otherwise you will get the wrong answer: demonstrably wrong.
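To make the per-station step concrete, here is a rough Python sketch of the derivation described above: emulate a min/max thermometer reset at each candidate hour and difference the resulting annual means. The array layout and function name are my assumptions, and the regional regression on latitude, longitude, and season is not shown:

import numpy as np

def tobs_offsets(hourly, ref_hour=7):
    # hourly: NumPy array of one station's hourly temperatures,
    # length = n_days * 24. A thermometer reset at hour h defines an
    # 'observation day' as the 24 hours ending at that reset; averaging
    # (Tmax + Tmin)/2 over all such days gives Tave(h). Each hour is
    # then expressed as an offset from the common reference hour
    # (7 AM here, per the example above).
    n_days = len(hourly) // 24
    tave = np.empty(24)
    for h in range(24):
        windows = [hourly[i * 24 + h:(i + 1) * 24 + h]
                   for i in range(n_days - 1)]
        tave[h] = np.mean([(w.max() + w.min()) / 2 for w in windows])
    return tave - tave[ref_hour]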

George Turner
October 24, 2011 1:29 pm

Brian, what about the list of possible causes of the Medieval Warm Period, the Roman, etc, or the fact that even the BEST team shows a steeper temperature rise in the early 1800’s than anything in the 20th century? Since the list of natural causes is so diminished, would you suggest pirates, jousting tournaments, and goat entrails being sacrificed to Apollo as likely forcings for those events?
Most of the apocalyptic hand waving dismissed solar effects, as the variation in the sun's visible output is rather small. But now we're finding strong links between UV output and upper atmospheric temperatures, along with a possibly huge influence on cloud coverage from cosmic ray cloud seeding, which is modulated by the strength of the solar wind, and that strength does vary strongly with solar cycles. Then they often pretend that the AMO, PDO, AO, and other natural oscillations don't exist, and pretend that the world has had statistically significant warming in the past 15 years, which it hasn't.
If you follow some of the adjustments they make to the temperature record, we should worry less about the present getting warmer than the alarming rate at which the past keeps getting cooler. If present trends continue, millions of extra people in the 1940’s are going to freeze to death.

October 24, 2011 1:31 pm

Brian,
After trying to help you by providing an explanation by Prof Richard Lindzen, who knows something about the subject, you come out with the 97% nonsense. That 97% number has been thoroughly debunked. In fact, the OISM Petition has far more signatures from degreed professionals in the hard sciences than the total of all the alarmist counter-petitions. The fact is that most scientists and almost all engineers reject the catastrophic AGW conjecture.
And IANAR, but tarring the “the entire Republican party” with the same brush makes you look like a credulous dope walking around with your zipper down. Run along now back to Skeptical Pseudo-Science. You need to load up on some new talking points.

Brian
October 24, 2011 1:38 pm

Smokey,
This is the problem with arguing with [SNIP: – Policy Violation -REP] , they all have different arguments, and are willing to change them at the drop of a hat. “The earth isn’t warming!” “Ok maybe it is but it’s not humans!” “Ok it’s humans but it’s not harmful!” “Ignore the fact that I was wrong on my first two premises!”
Most scientists reject catastrophic AGW? I like that you slip "catastrophic" in there. Make up your mind: are you denying global warming, AGW, or that AGW is harmful? Changing your position from argument to argument is not acceptable. Technically you may be correct, most scientists do not believe AGW will lead to Armageddon. But denying the scientific consensus on man-made climate change is absurd. http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change

October 24, 2011 1:51 pm

Michael Palmer, I cannot thank you enough.
You have given the evidence in statistically significant swathes for what I’ve been banging on about for years. Long individual station records are essential for the proper deconstruction of the data problems, fudges and manipulations. We don’t need lots of stations, and we do not need the globe sectorized into areas of homogenized temperature soup. Surprisingly few stations will do, so long as they are trustworthy and long enough.
I did quite a lot of work on this when I had more time, some of which Anthony published here (one of my Circling Yamal pages, and my Circling the Arctic page). I also did GISS temperature records in the British Isles. And my page Removing UHI Distortion shows it's the trustworthy US records that seem to be overall flat over the last century; worldwide there seems to be a slight increase overall. Find all those pages here. And here in my "Primer" are some of the prime long records, and the horrendous record of USHCN "corrections".
I was inspired by the choice of station records of the late John Daly: you, he and I are all, I think, amateur scientists, in the best sense of the word -love of real science.

Cherry Pick
October 24, 2011 1:55 pm

“The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.”
Why do we need to do that replacement? Isn’t it possible to calculate the global temperature trend by using just the real observations?
1. Calculate the trend of one site (station) for one day of the year, for example the 25 October trend over 1900-2011 (sketched below).
2. Do that for every site.
3. Do that for every day of the year.
What is the impact of changing the order of steps 2 and 3?
If the site has been moved, or there are other changes, just consider the site closed and start a new one.
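A bare-bones Python sketch of step 1 as I read it, assuming readings come as one value per year with NaN for missing years; the function name and layout are illustrative only:

import numpy as np

def site_day_trend(years, temps):
    # Trend (deg/year) for one site on one calendar day, e.g. every
    # 25 October reading over 1900-2011, using only real observations.
    ok = ~np.isnan(temps)
    return np.polyfit(years[ok], temps[ok], 1)[0]

Steps 2 and 3 would then just repeat this over sites and calendar days and average the slopes.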

phlogiston
October 24, 2011 2:02 pm

Garrett Hurley
You are picking up the wrong end of the stick. Who is it who should be expected to have coherent data on global temperatures? Is it the climate research community, with salaries and pensions funded by the taxpayer and billions of dollars worth of politically mandated funding for equipment, satellites etc.? Or an amateur group of scientists and concerned citizens, including some climate scientists sacked for heretical views? And you ask why WUWT does not have a single party line on global temperature trends?
What has the climate science establishment done with the tremendous influx of funding over the last decade or two? One would reasonably have assumed that the first obvious thing to do would be to increase the number of weather monitoring stations worldwide, making them fully automatic, with automated and correct min and max temperature recording, intercommunication and compilation. Iron out once and for all the issues of area averaging, missing regions, etc.
The scandalous and unbelievable fact is that the opposite has happened. Somehow, all the new funding for climate research has been accompanied not by an increase, but by a decrease in monitored weather stations worldwide, and a precipitous decline at that. WUWT? The climate community needs to defend itself from an inescapable accusation of massive fraud on an almost unbelievable scale.
And now a straightforward demonstration by Michael Palmer that sorting for the best quality continuous data records in the USA annihilates at a stroke any sign of warming on that continent raises further the question: WHAT THE HELL HAS THE CLIMATE RESEARCH COMMUNITY BEEN DOING??
BTW, do mobile phones have thermometers? They should have. If they do, then someone should write an app to make a citizens' global climate monitoring network. Mobile phones at least know where they are and what planet they are on.

Mike Jowsey
October 24, 2011 2:03 pm

@ Bob Kutz :October 24, 2011 at 8:30 am
Thanks for a thoughtful and well-constructed post.

Theo Goodwin
October 24, 2011 2:15 pm

Brian says:
October 24, 2011 at 1:38 pm
Either this is your first time on this site and you are truly innocent and lost or else you are a troll.
As I have explained above, in line with Palmer’s thesis about stations, the reliable stations show no warming in the US. That is my position for the world. Everything that shows warming is not empirically testable.
Some people say there is warming but it is not manmade.
Professor Lindzen follows Arrhenius in saying that there is manmade warming but it is and will be harmless.
Others say that warming might be somewhat harmful but adaptation is superior to mitigation.
All of those positions fall on the sceptical side and none of them are clearly mistaken.
By the way, Smokey is as reliable a guide as one can find.

October 24, 2011 2:24 pm

KR says: October 24, 2011 at 1:19 pm
Michael Palmer: ‘“Thanks for playing” – Oh? You consider this a game?’ Yes. For me, it is – my day job is in real science.
That’s a very telling statement. If you don’t consider studying the climate worthy of actual scientific effort, then, well, never mind. I’ll just take your post as seriously as you apparently do.
Adieu

_______________________________________________________________________
KR has got Michael's point upside down. Thank goodness there are still real scientists like Michael who choose, in their free time, to visit an area of science that has become so corrupted that its gatekeepers have gone mad, as proven by the fact that they proclaim 97% support whilst gagging dissenters and failing to count their true number.
With mad gatekeepers, I guess entry to this domain often has to be played as a game.

Dave Springer
October 24, 2011 2:24 pm

steven “one trick pony” mosher says:
October 24, 2011 at 1:22 pm
It's nothing short of amazing that you can use the phrase "the right answer" in regard to obtaining a global average temperature when it's derived from instruments placed in a narrow band of latitudes on a single continent. Adding insult to injury, the continental region in question was the most rapidly anthropogenically transformed land area of its size on the planet.
There is no right answer from the instrument record, Steverino. Perhaps you could simply admit that no amount of pencil whipping can possibly transform this regional daily temperature sample set into a global average.

October 24, 2011 2:24 pm

KR says:
October 24, 2011 at 1:19 pm
“If you don’t consider studying the climate worthy of actual scientific effort, then, well, never mind.”

I find studying the climate eminently worthy of scientific effort. What I cannot take seriously is the “reconstruction” of the “true” temperature record from a woefully incomplete and corrupted database. No amount of adjusting, correcting, weighting, averaging, extrapolating, smoothing, roughing, digesting and regurgitating will get us past the garbage in, garbage out problem.

Theo Goodwin
October 24, 2011 2:26 pm

steven mosher says:
October 24, 2011 at 1:22 pm
Wow! You actually used the word ’empirically’, though it does occur in “empirically derived adjustment.”
Given what you said, please explain one thing. How is it that you can do all your wonderful adjustments and come up with something that conflicts with the lack of a trend from the well managed stations? What is wrong with the well managed stations? Can you express this in empirical terms?
Second question. What would it take to get you to agree that our weather station measurement system is not up to the task and needs to be replaced? Is there anything that you could discover about the measurement system that would lead you to discard it? Or will you defend this system come hell or high water?

October 24, 2011 2:27 pm

@ Lucy Skywalker
Thanks very much for your comments. I have seen your posts here and also recently perused your own page from top to bottom and enjoyed it. I look forward to your further posts!
Best wishes, Michael

Theo Goodwin
October 24, 2011 2:28 pm

Brian says:
October 24, 2011 at 12:19 pm
This is a classic fallacy. You don’t have to present a replacement to show that a theory is false. If a theory is false, it is false all by itself.

Dave Springer
October 24, 2011 2:30 pm

phlogiston says:
October 24, 2011 at 2:02 pm
“BTW do mobile phones have thermometers?”
Yes they do. Too bad they have an internal heat source and are carried around much of the time in physical contact with a 98.6F warm body or inside heated/air conditioned buildings and vehicles.
You didn’t think about that question very much.
Some people say there’s no such thing as a stupid question but this one proves them wrong.

October 24, 2011 2:31 pm

Brian says: October 24, 2011 at 1:38 pm
…denying the scientific consensus on man-made climate change is absurd.
http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change

It’s shameful gatekeeper-corrupted material like this that makes me continue banging on about establishing a skeptics’ climate science wiki, especially aimed at restitution of the most corrupted truths and rehabilitation of the most wrongfully-tarred individuals.

October 24, 2011 2:31 pm

Brian says:
“Changing your position from argument to argument is not acceptable.”
I am willing to change my opinion if new facts warrant it, but my position has remained unchanged for a very long time. It is this:
A doubling of CO2 will probably result in a ≈1°C rise in temperature, ± ≈0.5°C. The additional warmth will be entirely beneficial. It will result in millions of new arable acres in places like Siberia, Mongolia and Canada. Furthermore, any current and projected rise in CO2 will be entirely beneficial to the biosphere. Agricultural production will continue to increase as a direct result of more CO2. There is no credible downside to a rise in CO2, a beneficial and harmless trace gas.
I have posted several hundred comments here that say essentially the same thing. I have been very consistent in this. If you can find a comment I made that contradicts anything in this post, please point it out.

October 24, 2011 2:34 pm

@Ivor Ward says:
October 24, 2011 at 8:23 am
“Would anyone care to explain to me why the graph linked by Mr Palmer for Hohenpiessenberg at http://climatereason.com/LittleIceAgeThermometers/Hohenpeissenberg_Germany.html shows a virtual flat line and then one that MFK Boulder links to at http://preview.tinyurl.com/Hohenpeissenberg shows our favourite hockey stick, preferably without using the word “baloney” in the text even if you do spell it correctly.”
It is easy to see which one is bogus from the original data:
http://members.multimania.nl/ErrenWijlens/co2/t_hohenpeissenberg_200306.txt

Dave Springer
October 24, 2011 2:37 pm

Theo Goodwin says:
October 24, 2011 at 2:26 pm
That was a brutal drubbing of the one trick pony. Took him to school with that one you did. Wished I’d have written it!

Dave Springer
October 24, 2011 2:47 pm

Smokey says:
October 24, 2011 at 2:31 pm
“A doubling of CO2 will probably result in a ≈1°C rise in temperature”
If you change that to a maximum of 1C in the most arid places on the planet, where evaporation and convection play little role in surface cooling, then I'll agree to it. Rather basic physics there, without confounding factors. CO2 has very little effect over the oceans, which are 71% of the planet's surface, because the ocean only gives up 20% of its solar heating through radiation, and CO2 only slows down radiative cooling. CO2 DOES NOT slow down evaporative, convective, or conductive cooling. Over land surfaces, especially dry surfaces, the dominant mode of cooling is radiative. CO2 will have its maximum theoretical effect there and there only.

Brian
October 24, 2011 2:49 pm

@Smokey
So you basically accept the findings of climate research except for the consequences. So do you agree that 90% of the self-proclaimed "skeptics" are crazy for denying the earth is warming and that humans are causing (at least) much of it? What other scientific consensuses do you deny? Evolution? The germ theory of disease? Do you think the evidence that smoking causes lung cancer is just a political ploy?
Climate-denial-gate has robbed the “skepticism” movement of whatever credibility it had.

kwik
October 24, 2011 2:50 pm

Brian says:
October 24, 2011 at 1:38 pm
Brian, are you very young and innocent?
How else can it be explained that you think Wikipedia is a source to be recited? Or are you being deceitful on purpose?
We all know about the frantic editing by the warmistas over at Wikipedia.
Hope you have access to plenty of Joules next winter;
http://notrickszone.com/2011/10/24/german-meteorologists-horror-winter-to-hit-central-europe/

October 24, 2011 2:53 pm

DR_UK says:
October 24, 2011 at 1:22 pm
“But isn’t this a similar method of taking first differences that was discussed and criticised before? See Hu McCulloch’s 2010 post at Climate Audit http://climateaudit.org/2010/08/19/the-first-difference-method/.”

Thanks for the link. The idea is indeed similar. There is one difference, however: The CA post states: “Missing observations may simply be interpolated for the purposes of computing first differences (thereby splitting the 2 or more year observed difference into 2 or more equal interpolated differences).” In contrast to this approach, I did not fill the gaps with interpolated numbers.
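My reading of the scheme, in a short Python sketch; this is not the author's code (which is available on request), and the array layout is assumed. The key point is that missing first differences are excluded from the cross-station mean, never interpolated:

import numpy as np

def anomaly_from_differences(temps):
    # temps: 2-D array, rows = stations, columns = months, NaN = missing.
    # A month-to-month difference exists only where both months exist;
    # the anomaly series is the cumulative sum of the mean difference
    # over whichever stations report that month. Gaps are dropped, not
    # filled (a month with no reporting station at all would stay NaN).
    diffs = temps[:, 1:] - temps[:, :-1]
    mean_diff = np.nanmean(diffs, axis=0)
    return np.concatenate([[0.0], np.cumsum(mean_diff)])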

phlogiston
October 24, 2011 3:04 pm

Dave Springer
Maybe the earlier suggestion of using airliner temperature records was a better one.

Ivor Ward
October 24, 2011 3:05 pm

George Turner says:
October 24, 2011 at 1:29 pm
If you follow some of the adjustments they make to the temperature record, we should worry less about the present getting warmer than the alarming rate at which the past keeps getting cooler. If present trends continue, millions of extra people in the 1940's are going to freeze to death.
Thank you George!! This was the comment of the day for me. ( I hope my mum and dad aren’t affected because then I wouldn’t be here)

October 24, 2011 3:09 pm

Brian says:
October 24, 2011 at 1:38 pm
Smokey,
This is the problem with arguing with [SNIP: – Policy Violation -REP] , they all have different arguments, and are willing to change them at the drop of a hat. “The earth isn’t warming!” “Ok maybe it is but it’s not humans!” “Ok it’s humans but it’s not harmful!” “Ignore the fact that I was wrong on my first two premises!”
Most scientists reject catastrophic AGW? I like that you slip "catastrophic" in there.

In the last few days, Brian is only one of many who question “what do skeptics believe” and point out that “skeptics” are all over the board in their (our) beliefs.
In another thread I posted:
“From the best I’ve determined, this is what most of both “skeptics” and “lukewarmers” believe:
“There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.”
Many here know where that statement originates, but the source of the phrase isn't as important as the general sentiment it portrays."
We don’t slip “catastrophic” in anywhere – the position of the “warmists/alarmists”, the position that “requires” restriction of anthropogenic CO2 emissions, is that these same CO2 emissions are driving the climate toward catastrophic events.
We (skeptics/lukewarmers) should not let a "warmist/alarmist" define what we believe.

October 24, 2011 3:14 pm

Brian says:
October 24, 2011 at 2:49 pm
“What other scientific consensus’ do you deny? Evolution? The germ theory of disease? Do you think the evidence that smoking causes lung cancer is just a political ploy?”

None of these, actually. You may also be surprised to learn that not all of us own a gun.
Anyone who likens the profundity of our understanding of global warming, and the quality of the evidence for it, to those pertaining to any of the above theories only proves their ignorance.

Dave Springer
October 24, 2011 3:15 pm

Louis says:
October 24, 2011 at 10:55 am
“Local temperatures can change several degrees in less than an hour.”
I've seen a 20F drop in ten minutes here in south central Texas when a cold front blows through on a warm day. It's enough to take your breath away, like you opened up your freezer door and stuck your head inside.
The locals here say “There ain’t nothing that stands between Austin and the Arctic circle but barbed wire fence.”

October 24, 2011 3:17 pm

Brian,
You’re beginning to sound like a lunatic. The large majority of scientific skeptics know that the planet has warmed naturally since the Little Ice Age [LIA]. You deliberately misrepresent the skeptical position by making truly absurd comments like: “…do you agree that 90% of the self-proclaimed ‘skeptics’ are crazy for denying the earth is warming…”
First, skeptics are not crazy. We simply ask for testable, real world evidence – per the scientific method – showing that human-produced CO2 causes measurable global warming. It may, but as of now there is no evidence that it does. None. The entire AGW facade is based on computer models and conjecture.
Next, you keep raising the red herring argument of “consensus”. Every last person on earth could believe that the moon is made out of green cheese. But the consensus would be wrong. Further, the canard that most scientists and engineers believe that human activity caused most of the ≈0.7°C warming over the past hundred and fifty years is provably wrong, as I’ve shown in the OISM links I posted. In fact, the alarmist crowd is in the minority, and your misguided belief system cannot change that verifiable fact.
Finally, your wacko comment about ‘climate-denial-gate’ confirms you as a member of the climate alarmist lunatic fringe. This is the internet’s “Best Science” site, not a tinfoil hat blog like tamino’s, Romm’s, Cook’s Skeptical Pseudo-Science, etc. Please take your wild-eyed conspiracy theories to them, they eat that stuff up.
• • •
Dave Springer,
The 1°C number is a ballpark figure, because we don’t know for certain what the sensitivity number is. It may even change depending on CO2 concentration and/or temperature. That’s why I give a 0.5°C error band. [Or fudge figure, if you like.☺] I personally think that when the question is definitively answered, the number will be ≤1°C.

Dave Springer
October 24, 2011 3:25 pm

phlogiston says:
October 24, 2011 at 3:04 pm
“Maybe the earlier suggestion of using airliner temperature records was a better one.”
Satellites are doing a fine job. Global coverage, 24/7, NBS traceability. Interestingly though they too were also “adjusted” around 1999 to go from showing a cooling trend to a warming trend.
It’s just a testament to the incredibly small effect (CO2) they’re trying to tease out of the data that pencil whipping within the error bars of the highest tech sensing gear can change a cooling trend to a warming trend. In reality I strongly suspect there is no detectable temperature trend from CO2 but there is indeed a trend and that trend is higher agricultural output because whatever else CO2 is it is definitely plant food and we’re fertilizing the atmosphere by burning fossil fuels.

John-X
October 24, 2011 3:27 pm

Hansen's own GISTemp data show that it is actually COOLER than his infamous "Scenario C" forecast, which assumed NO (i.e. ZERO, NONE, NOT ANY) man-made CO2 emissions after 2000!
http://www.real-science.com/doubt-temperatures-rising-fast-hansens-emissions
If Hansen paid any attention at all to HIS OWN DATA, as well as his own predictions, pronouncements, prognostications and various other bloviations, he would have concluded years ago that CO2-based warming simply DOES NOT HAPPEN!

peetee
October 24, 2011 3:31 pm

uhhh… the article references to ‘abstract’ – where’s the actual published paper to be found? What journal? Uhhh… what peer-reviewed journal? Surely, surely…. this can’t be a pre-release!!!

Glenn Tamblyn
October 24, 2011 3:38 pm

As Requested:
Off Averages & Anomalies Part 1A
In recent years a number of claims have been made about 'problems' with the surface temperature record: that it is faulty, biased, or even 'being manipulated'. Many of the criticisms seem to revolve around misunderstandings of how the calculations are done, and thus exaggerated ideas of how vulnerable to error the analysis of the record is. In this series I intend to look at how the temperature records are built and why they are actually quite robust. In this first post (Part 1A) I am going to discuss the basic principles of how a reasonable surface temperature record should be assembled. Then in Part 1B I will look at how the major temperature products are built. Finally, in Parts 2A and 2B I will look at a number of the claims of 'faults' against this, to see if they hold water or are exaggerated based on misconceptions.
How NOT to calculate the Surface Temperature
So, we have records from a whole bunch of meteorological stations from all around the world. They have measurements of daily maximum and minimum temperatures for various parts of the last century and beyond. And we want to know how much the world has warmed or not.
Sounds simple enough. Each day we add up all these stations' daily average temperatures, divide by the number of stations and, voilà, we have the average temperature for the world that day. Then do that for the next day and the next and…. Now we know the world's average temperature, each day, for all that measurement period. Then compare the first and last days and we know how much warming has happened – how big the 'Temperature Anomaly' is – between the two days. We are calculating the 'Anomaly of the Averages'. Sounds fairly simple, doesn't it? What could go wrong?
Absolutely everything.
So what is wrong with the method I described above?
1. Every station may not have data for the entire period covered by the record. They have come and gone over the years for all sorts of reasons. Or a station may not have a continuous record. It may not be measured on weekends because there wasn’t the budget for someone to read the station then. Or it couldn’t be reached in the dead of winter.
Imagine we have 5 measuring stations, A to E that have the following temperatures on a Friday:
A = 15, B = 10, C = 5, D = 20 & E = 25
The average of these is (15+10+5+20+25)/5 = 15
Then on Saturday, the temperature at each station is 2 °C colder because a weather system is passing over. But nobody reads station C because it is high in the mountains and there is no budget for someone to go up there at the weekend. So the average we calculate from the data we have available on Saturday is:
(13+8+18+23)/4 = 15.5.
But if station C had been read as well it would have been:
(13+8+3+18+23)/5 = 13
This is what we should be calculating! So our missing reading has distorted the result.
We can’t just average stations together! If we do, every time a station from a warmer climate drops off the record, our average drops. Every time a station from a colder climate drops off, our average rises. And the reverse for adding stations. If stations report erratically then our record bounces erratically. We can’t have a consistent temperature record if our station list fluctuates and we are just averaging them. We need another answer!
2. Our temperature measurements aren’t from locations spaced evenly around the world. Much of the world isn’t covered at all – the 70% that is oceans. And even on land our stations are not evenly spread. How many stations are there in the roughly 1000 km between Maine and Washington DC, compared to the number in the roughly 4000 km between Perth & Darwin?
We need to allow for the fact that each station may represent the temperature of very different size regions. Just doing a simple average of all of them will mean that readings from areas with a higher station density will bias the result. Again, we can’t just average stations together!
We need to use what is called an Area Weighted Average. Do something like: take each station’s value, multiply it by the area it is covering, add all these together, and then divide by the total area. Now the world isn’t colder just because the New England states are having a bad winter!
3. And how good an indicator of its region is each station anyway? A station might be in a wind or rain shadow. It might be on a warm plain or higher in adjacent mountains, or in a deep valley that cools quicker as the Sun sets. It might get a lot more cloud cover at night or be prone to fogs that cause night-time insulation. So don’t we need a lot of stations to sample all these micro-climates to get a good reliable average? How small does each station’s ‘region’ need to be before its readings are a good indicator of that region? If we are averaging stations together we need a lot of stations!
4. Many sources of bias and error can exist in the records. Were the samples always taken at the same time of day? If Daylight Savings Time was introduced, was the sampling time adjusted for this? Were log sheets for a station (in the good old days before new-fangled electronic recording gizmos) written by someone with bad handwriting – is that a 7 or a 9? Did the measurement technology or calibrations change? Has the station moved, or changed altitude? Are there local sources of bias around the station? And do these biases cause one-off changes or a time-varying bias?
We can’t take the reading from a station at face-value. We need to check for problems. And if we find them we need to decide whether we can correct for the problem or need to throw that reading or maybe all that station’s data away. But each reading is a precious resource – we don’t have a time-machine to go back and take another reading. We shouldn’t reject it unless there is no alternative.
So, we have a Big Problem. If we just average the temperatures of stations together, even with the Area Weighting answer to problem #2, this doesn’t solve problems #1, #3 or #4. It seems we need a very large detailed network, which has existed for all of the history of the network, with no variations in stations, measurement instruments etc, and without any measurement problems or biases.
And we just don’t have that. Our station record is what it is. We don’t have that time machine. So do we give up? No!
How do stations’ climates change?
Let’s consider a few key questions. If we look at just one location over its entire measurement history, say down on the plains, what will the numbers look like? Seasons come and go; there are colder and warmer years. But what is the longer term average for this location? What is meant by ‘long term’? The World Meteorological Organisation (WMO) defines Climate as the average of Weather over a 30 year period. So if we look at a location averaged over something like a 30 year period and compare the same location averaged over a different 30 year period, the difference between the two is how much the average temperature for that location has changed. And what we find is that they don’t change by very much at all. Short term changes may be huge but the long term average is actually pretty stable.
And if we then look at a nearby location, say up in the mountains, we see the same thing: lots of variation but a fairly stable average with only a small long term change. But their averages are very different from each other. So although a station’s average change over time is quite small, an adjacent station can have a very different average even though its change is small as well. Something like this:
Comparing two adjacent stations
Next question: if each of our two stations averages only change by a small amount, how similar are the changes in their averages? This is not an idle question. It can be investigated, and the answer is: mostly by very little. Nearby locations will tend to have similar variations in their long term averages. If the plains warm long term by 0.5°C, it is likely that the nearby mountains will warm by say 0.4–0.6°C in the long term. Not by 1.5 or -1.5°C.
It is easy to see why this would tend to be the case. Adjacent stations will tend to have the same weather systems passing over them. So their individual weather patterns will tend to change in lockstep. And thus their long term averages will tend to be in lock-step as well. Santiago in Chile is down near sea level while the Andes right at its doorstep are huge mountains. But the same weather systems pass over both. The weather that Adelaide, Australia gets today, Melbourne will tend to get tomorrow.
Station Correlation Scatter Plots (HL87)
Final question. If nearby locations have similar variations in their climate, irrespective of each station's local climate, what do we mean by 'nearby'? This too isn't an idle question; it can be investigated, and the answer is many 100's of kilometres at low latitudes, up to 1000 kilometres or more at high latitudes. In climatology this is the concept of 'Teleconnection': that the climates of different locations are correlated with each other over long distances.
Figure 3, from Hansen & Lebedeff 1987 (apologies for the poor quality, this is an older paper) plots the correlation coefficients versus separation for the annual mean temperature changes between randomly selected pairs of stations with at least 50 common years in their records. Each dot represents one station pair. They are plotted according to latitude zones: 64.2-90N, 44.4-64.2N, 23.6-44.4N, 0-23.6N, 0-23.6S, 23.6-44.4S, 44.4-64.2S.
Notice how the correlation coefficients are highest for stations closer together and less so as they stretch farther apart. These relationships are most clearly defined at mid to high northern latitudes and mid southern latitudes – the regions of the Earth with higher proportions of land to ocean.
This makes intuitive sense since surface air temperatures of the oceanic regions are influenced also by water temperatures, ocean currents etc instead of just air masses passing over them, while land temperatures don’t have this other factor. So land temperatures would be expected to have better correlation since movement of weather systems over them is a stronger factor in their local weather.
This is direct observational evidence of Teleconnection. Not just climatological theory but observation.
A better answer
So what if we do the following? Rather than averaging all our stations together, instead we start out by looking at each station separately. We calculate its long term average over some suitable reference period. Then we recalculate every reading for that station as a difference from that reference period average. We are comparing every reading from that station against its own long term average. Instead of a series of temperatures for a station, we now have a series of ‘Temperature Anomalies’ for that station. And then we repeat this for each individual station, using the same reference period to produce the long term average for each separate station.
Then, and only then, do we start calculating the Area Weighted Average of these Anomalies. We are now calculating the ‘Area Average of the Anomalies’ rather than the ‘Anomaly of the Area Averages’ – now there’s a mouthful. Think about this. We are averaging the changes, not averaging the absolute temperatures.
Does this give us a better result? In an imaginary ideal world where we have lots of stations, always reporting, with no missing readings, etc., these two methods will give the same result.
The difference arises when we work in an imperfect world. Here is an example (for simplicity I am only doing simple averages here rather than area weighted averages):
Let’s look at stations A to E. Let’s say their individual long term reference average temperatures are:
A = 15, B = 10, C = 5, D = 20 & E = 25
Then for one day’s data their individual readings are:
A = 15.8, B = 10.4, C = 5.7, D = 20.4 & E = 25.3
Using the simple Anomaly of Averages method from earlier we have:
(15.8+10.4+5.7+20.4+25.3)/5 – (15+10+5+20+25)/5 = 0.52
While using our Average of Anomalies method we get:
((15.8-15) + (10.4-10) + (5.7-5) + (20.4-20) + (25.3-25))/5 = 0.52
Exactly the same!
However, if we remove station C as in our earlier example, things look very different. Anomaly of Averages gives us:
(15.8+10.4+20.4+25.3)/4 – (15+10+5+20+25)/5 = 2.975 !!
While Average of Anomalies gives us:
((15.8-15) + (10.4-10) + (20.4-20) + (25.3-25))/4 = 0.475
Obviously neither value matches what the correct value would be if station C were included, but the second method is much closer to the correct value. Bearing in mind that Teleconnection means that adjacent stations will have similar changes in anomaly anyway, this 'Average of Anomalies' method is much less sensitive to variations in station availability.
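The same arithmetic in a few lines of Python, reproducing the numbers above (the station labels and boolean mask are mine):

import numpy as np

ref = np.array([15.0, 10.0, 5.0, 20.0, 25.0])        # long-term averages, A..E
day = np.array([15.8, 10.4, 5.7, 20.4, 25.3])        # one day's readings
present = np.array([True, True, False, True, True])  # station C missing

anomaly_of_averages = day[present].mean() - ref.mean()  # 2.975, badly biased
average_of_anomalies = (day - ref)[present].mean()      # 0.475, near the true 0.52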
Now let's consider how this approach could be used when looking at station histories over long periods of time. Consider 3 stations in 'adjacent' locations. A has readings from 1900 to 1960, B has readings from 1930 to 2000, and C has readings from 1970 to today. A overlaps with B, B overlaps with C. But C doesn't overlap with A. If our reference period is, say, 1930-1960, we can use the readings from A & B. But C doesn't have any readings from our reference period. So how can we splice together A, B, & C to give a continuous record for this location?
Doesn't this mean we can't use C, since we can't reference it to our 1930-1960 baseline? And if we use a more recent reference period we lose A. Do we have to ignore C's readings entirely? Surely that means that as the years roll by and the old stations disappear, eventually we will have no continuity to our record at all? That's not good enough.
However there is a way we can ‘splice’ them together.
A & B have a common period from 1930-1960. And B & C have a common period from 1970-2000. So if we take the average of B from 1930 to 1960 and compare it to the same average from A for the same period we know how much their averages differ. Similarly we can compare the average of B from 1930-1960 to the average for B from 1970-2000 to see how much B has changed over the intervening period. Then we can compare B vs C over the 1970-2000 period to relate them together. Knowing these three differences, we can build a chain of relationships that links C1970-2000 to B1970-2000 to B1930-1960 to A1930-1960
Something like this:
‘Chaining’ station histories together
If we have this sort of overlap we can ‘stitch together’ a time series stretching beyond more than one station’s data. We have the means to carry forward our data series beyond the life (and death) of any one station, as long as there is enough time overlap between them. But we can only do this if we are using our Average of Anomalies method. The Anomaly of Averages method doesn’t allow us to do this.
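A stripped-down Python sketch of that splice, under simplifying assumptions of my own (two stations, annual values, a plain mean over the overlap, no area weighting):

import numpy as np

def splice(t_a, a, t_b, b):
    # Shift series b so its mean over the years it shares with a matches
    # a's mean over those same years, then append b's later years to a.
    overlap = np.intersect1d(t_a, t_b)
    offset = a[np.isin(t_a, overlap)].mean() - b[np.isin(t_b, overlap)].mean()
    tail = t_b > t_a.max()
    return (np.concatenate([t_a, t_b[tail]]),
            np.concatenate([a, b[tail] + offset]))

Chaining A to B, and then that result to C, carries the record across the full 1900-to-today span even though A and C never overlap.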
So where has this got us in looking at our problems? The Average of Anomalies approach directly addresses problem #1. Area Weighted Averaging addresses problem #2. Teleconnection and comparing a station to itself help us hugely with problem #3 – if fog provides local insulation, it probably always has, so any changes are less related to the local conditions and more to underlying climate changes. Local station bias issues still need to be investigated, but if they don't change over time, then they don't introduce ongoing problems. For example, if a station is too close to an artificial heat source, then this biases that station's temperature. But if this heat source has been a constant bias over the life of the station, then it cancels out when we calculate the anomaly for the station. So this method also helps us with (although doesn't completely solve) problem #4. In contrast, using the Anomaly of Averages method, local station biases and erratic station availability will compound each other, making things worse.
So this looks like a better method.
Which is why all the surface temperature analyses use it!
The Average of Anomalies approach is used precisely because it avoids many of the problems and pitfalls.
In Part 1B I will look at how the main temperature records actually compile their trends.

Dave Springer
October 24, 2011 3:39 pm

@Smokey
“Dave Springer, The 1°C number is a ballpark figure, because we don’t know for certain what the sensitivity number is.”
Sensitivity is another word for feedbacks. 1C is what you get from a CO2 doubling in the absence of feedbacks, and is a hard number that you can take to the bank in a dry atmosphere over dry land. Now you know. Of course there's no such thing in the real world as a totally dry, cloud-free atmosphere over dry land, so this is the maximum theoretical no-feedback effect. Some arid regions may approximate it fairly well. Those regions will also approximate a black body fairly well. Once liquid water or water vapor enters the picture, all bets are off. Given that the earth is 71% covered in water, that pretty much means that 71% of the bets are off. For instance, there is very little atmospheric greenhouse effect. The earth is warmer than the moon not because of its atmosphere but because the surface is 71% covered by water that averages 12,000 feet deep. The atmosphere's primary role is establishing a surface pressure at which water has a wide temperature range in which it can exist as a liquid. If the ocean weren't there, this planet would be as cold as the moon, which has an average temperature of -23C.

Glenn Tamblyn
October 24, 2011 3:40 pm

Off Averages & Anomalies, Part 1B
In Part 1A we looked at how a reasonable temperature record needs to be compiled. If you haven’t already read part 1A, it might be worth reading it before 1B.
There are four major surface temperature analysis products produced at present: GISTemp from the Goddard Institute for Space Studies (GISS); HadCRUT, a collaboration between the Hadley Centre and the University of East Anglia Climatic Research Unit (CRU); the US National Oceanic and Atmospheric Administration's (NOAA) National Climatic Data Center (NCDC); and the Japanese Meteorological Agency (JMA). Another major analysis effort is currently underway, the Berkeley Earth Surface Temperature project (BEST), but as yet their results are preliminary.
GISTemp
We will look first specifically at the product from GISS: at how they do their Average of Anomalies, and their Area Weighting scheme. This product dates back to work undertaken at GISS in the early 1980s, with the principal paper describing the method being Hansen & Lebedeff 1987 (HL87).
The following diagram illustrates the Average of Anomalies method used by HL87
Reference Station Method for comparing station series
This method is called the ‘Reference Station Method’. One station in the region to be analysed is chosen as station 1, the reference station. The next stations are 2, 3, 4, etc., to ‘N’. The average for each pair of stations (T1, T2), (T1, T3), etc. is calculated over the common reference period using the data series for each station T1(t), T2(t), etc., where “t” is the time of the temperature reading. So for each station their anomaly series is the individual readings – Tn(t) – minus the average value of Tn.
“δT” is the difference between their two averages. Simply calculating the two averages is sufficient to produce two series of anomalies, but GISTemp then shifts T2(t) down by δT, combines the values of T1(t) and T2(t) to produce a modified T1(t), and generates a new average for this (the diagram doesn’t show this, but the paper does describe it). Why are they doing this? Because this is where their Area Averaging scheme is included.
When combining T1(t) and T2(t) together, after adjusting for the difference in their averages, they still can’t just add them because that wouldn’t include any Area Weighting. Instead, each reading is multiplied by an Area Weighting factor based on the location of each station; these two values are then added together and divided by the combined area weighting for the two stations. So series T1(t) is now modified to be the area weighted average of series T1(t) and T2(t). Series T1(t) now needs to be averaged again since the values will have changed. Then they are ready to start incorporating data from station 3 etc. Finally, when all the stations have been combined together, the average is subtracted from the now heavily-modified T1(t) series, giving us a single series of Temperature Anomalies for the region being analysed.
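One merge step of that procedure might look like the following Python sketch. This is a paraphrase of the description above, not GISTemp code, and the scalar per-station weights are a simplification:

import numpy as np

def merge_station(ref, w_ref, new, w_new):
    # Shift 'new' by the difference of the two series' means over their
    # common period (the δT above), then combine the two with an
    # area-weighted average where both exist; where only 'new' has data,
    # its shifted values extend the record. NaN marks missing readings.
    common = ~np.isnan(ref) & ~np.isnan(new)
    shifted = new - (new[common].mean() - ref[common].mean())
    out = ref.copy()
    out[common] = (w_ref * ref[common] + w_new * shifted[common]) / (w_ref + w_new)
    only_new = np.isnan(ref) & ~np.isnan(new)
    out[only_new] = shifted[only_new]
    return out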
So how are the Area Weighting values calculated? And how does GISTemp then average out larger regions or the entire globe?
They divide up the Earth into 80 equal area boxes – this means each box has sides of around 2500km. Then within each box they divide these up into 100 equal area smaller sub-boxes.
GISTemp Grids
They then calculate an anomaly for each sub-box using the method above. Which stations get included in this calculation? Every station within 1200 km of the centre of the sub-box. And the weighting for each station used simply diminishes in proportion to its distance from the centre of the sub-box. So a station 10km from the centre will have a weighting of 1190/1200 = 0.99167, while a station 1190 km from the centre will have a weighting of 10/1200 = 0.00833. In this way, stations closer to the centre have a much larger influence while those farther away an ever smaller influence. And this method can be used even if there are no stations directly in the sub-box, inferring its result from surrounding stations.
In the event that stations are extremely sparse and there were only 1 station within 1200 km, then that reading would be used for a sub-box. But as soon as you have even a handful of stations within range, their values will quickly start to balance out the result. And closer stations will tend to predominate. Then the sub-boxes are simply averaged together to produce an average for the larger box – we can do this without any further area averaging because we have already used area averaging within the sub-box and they are all of equal area. Then in turn the larger boxes can be averaged to produce results for latitude bands, hemispheres, or globally. Finally these results are then averaged over long time periods.
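In Python, the sub-box weighting just described reduces to a couple of lines (function and variable names are mine):

import numpy as np

def subbox_anomaly(anoms, dist_km, radius=1200.0):
    # Linear fall-off: weight 1 at the sub-box centre, 0 at the 1200 km
    # cut-off. A station 10 km out gets 1190/1200 = 0.99167; a station
    # 1190 km out gets 10/1200 = 0.00833, matching the figures above.
    w = np.clip((radius - np.asarray(dist_km)) / radius, 0.0, None)
    return np.sum(w * np.asarray(anoms)) / np.sum(w)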
Remember our previous discussion of Teleconnection, and that long term climates are linked over significant distances. This is why this process can produce a meaningful result even when data is sparse. On the other hand, if we were trying to use this method to estimate daily temperatures in a sub-box, the results would be meaningless. The short term chaotic nature of daily weather would swamp any longer range relationships. But averaged out over longer time periods and larger areas, the noise starts to cancel out and underlying trends emerge. For this reason, the analysis used here will be inherently more accurate when looked at over larger times and distances. The monthly anomaly for one sub-box will be much less meaningful than the annual anomaly for the planet. And the 10-year average will be more meaningful again.
And why the range of 1200 km? This was determined in HL87 based on the correlation coefficients between stations shown in the earlier chart. The paper explains this choice:
“The 1200-km limit is the distance at which the average correlation coefficient of temperature variations falls to 0.5 at middle and high latitudes and 0.33 at low latitudes. Note that the global coverage defined in this way does not reach 50% until about 1900; the northern hemisphere obtains 50% coverage in about 1880 and the southern hemisphere in about 1940. Although the number of stations doubled in about 1950, this increased the area coverage by only about 10%, because the principal deficiency is ocean areas which remain uncovered even with the greater number of stations. For the same reason, the decrease in the number of stations in the early 1960s, (due to the shift from Smithsonian to Weather Bureau records), does not decrease the area coverage very much. If the 1200-km limit described above, which is somewhat arbitrary, is reduced to 800 km, the global area coverage by the stations in recent decades is reduced from about 80% to about 65%.”
Effect of station count on area coverage
It’s a trade-off between how much coverage we have of the land area and how good the correlation coefficient is. Note that the large increase in contributing station numbers in the 1950s and subsequent drop off in the mid-1970s does not have much of an impact on percentage station coverage – once you have enough stations, more doesn’t improve things much. And remember, this method only applies to calculating surface temperatures on land; ocean temperatures are calculated quite separately. When calculating the combined Land-Ocean temperature product, GISTemp uses land-based data in preference as long as there is a station within 100 km. Otherwise it uses ocean data. So in large land areas with sparse station coverage, it still calculates using the land-based method out to 1200 km. However, for an island in the middle of a large ocean, the land-based data from that island is only used out to 100 km. After that point, the ocean based data prevails. In this way data from small islands don’t influence the anomalies reported for large areas of ocean when ocean temperature data is available.
One aspect of this method is the order in which stations are merged together. This is done by ordering the list of stations used in calculating a sub-box by those that have the longest history of data first, with the stations with shorter histories last. So they are merging progressively shorter data series into a longer series. In principle, the method used to select the order in which the stations are being processed could have a small effect on the result. Selecting stations closer to the centre of the sub-box first is an alternative approach. HL87 considered this and found that the two techniques produced differences that were two orders of magnitude smaller than the observed temperature trends. And their chosen method was found to produce the lowest estimate of errors. They also looked at the 1200 km weighting radius and considered alternatives. Although this produced variations in temperature trends for smaller areas, it made no noticeable difference to zonal, hemispheric, or global trends.
The Others
The other temperature products use somewhat simpler methods.
HadCRUT (or really CRU, since the Hadley Centre contribution is Sea Surface Temperature data) calculates based on grid boxes that are xº by xº, with the default value being 5º by 5º. At the equator this means they are approximately 550 x 550 km, although much smaller at the polar regions. They then take a simple average of all the anomalies for every station within that grid box. This is a much simpler area averaging scheme. Because they aren't interpolating data like GISS, they are limited by the availability of stations as to how small their grid size can go; otherwise too many of their grids may have no station at all. And in grid boxes where there is no data available, they do not calculate anything. So they aren't extrapolating / interpolating data. But equally, any large-scale temperature anomaly calculation, such as for a hemisphere or the globe, is effectively assuming that any uncalculated grid boxes are all actually at the calculated average temperature. However, to then build results for larger areas, they need to area weight the averages for the differing sizes of the grid boxes depending on the latitude they are at.
NCDC and JMA also use a 5º by 5º grid size and simply average anomalies for all stations within each grid; area-weighted averaging is then used to combine the grid boxes. All three also combine land and sea anomaly data. Unlike HadCRUT & NCDC, which use the same ocean data, JMA maintains its own separate database of SST data.
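The area-weighting step itself is simple enough to show in a few lines of Python. A grid box’s area scales with the cosine of its central latitude, so (a sketch, with invented numbers):

```python
import math

# Toy 5-degree grid boxes: (centre latitude, anomaly in deg C) - invented
boxes = [(2.5, 0.30), (47.5, 0.55), (-62.5, 0.20)]

# A box's area is proportional to cos(latitude), so an equatorial box
# counts for roughly twice as much as one at 60 degrees latitude.
weights = [math.cos(math.radians(lat)) for lat, _ in boxes]
global_anom = sum(w * anom for w, (_, anom) in zip(weights, boxes)) / sum(weights)
print(round(global_anom, 3))              # area-weighted mean anomaly
```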
In this article and in Part 1A, I have tried to give you an overview of how and why surface temperatures are calculated, and why calculating anomalies and then averaging them is far more robust than the seemingly more intuitive method of averaging and then calculating the anomaly.
In parts 2A & 2B we will look at the implications of this for the various questions and criticisms that have been raised about the temperature record.

Glenn Tamblyn
October 24, 2011 3:41 pm

Of Averages & Anomalies Part 2A
In Part 1A and Part 1B we looked at how surface temperature trends are calculated, the importance of using Temperature Anomalies as your starting point before doing any averaging and why this can make our temperature record more robust.
In this Part 2A and a later Part 2B we will look at a number of the claims made about ‘problems’ in the record and how misperceptions about how the record is calculated can lead us to think that it is more fragile than it actually is. This should also be read in conjunction with earlier posts here at SkS on the evidence here, here & here that these ‘problems’ don’t have much impact. In this post I will focus on why they don’t have much impact.
If you hear a statement such as ‘They have dropped stations from cold locations so the result now gives a false warming bias’ and your first reaction is, yes, that would have that effect, then please, if you haven’t done so already, go and read Part 1A and Part 1B, then come back here and continue.
Part 2A focuses on issues of broader station location. Part 2B will focus on issues related to the immediate station locale.
Now to the issues. What are the possible problems?
Urban Heat Island Effect
This is the first possible issue we will consider. The Urban Heat Island Effect (UHI) is a real phenomenon. In built-up urban areas the concentration of heat-storing materials in buildings, roads, etc. – concrete, bitumen, bricks and so on – and heat sources such as heaters, air-conditioners, lighting, cars, etc. all combine to produce a local ‘heat island’: a region where temperatures tend to be warmer than the surrounding rural land. This is well known, and you can even see its effects just by looking at reports of daily temperatures. If we have weather stations inside such a heat island, they will record higher temperatures than they would in the surrounding countryside. If we don’t compensate for this in some way, it could be a real source of bias in our result, and since we never see ‘cool islands’, the bias would be towards warming.
This is why the major temperature records include some method for compensating for it, either by applying a compensating adjustment to the broad result they calculate, or by trying to identify stations that have such an issue and adjusting them. GISTemp for example seeks to identify such urban stations and then adjust them so that the urban station’s long-term trend is brought into line with adjacent rural stations. There is also the question of identifying which stations are ‘urban’. Previous methods relied on records of how stations were classified, but this can change over time as cities grow out into the country, for example. GISTemp recently started using satellite observations of lights at night to identify urban regions – more light means more urban.
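As a rough illustration of that trend-matching idea, here is a toy adjustment (my own sketch, not the actual GISTemp algorithm, which as I understand it fits a two-leg broken line rather than the single slope used here):

```python
import numpy as np

def uhi_adjust(years, urban, rural):
    """Tilt an urban series so its linear trend matches the rural
    composite (toy single-slope version of the adjustment above)."""
    urban_slope = np.polyfit(years, urban, 1)[0]
    rural_slope = np.polyfit(years, rural, 1)[0]
    years = np.asarray(years, dtype=float)
    # Remove the excess trend, pivoting about the middle of the record
    return np.asarray(urban) - (urban_slope - rural_slope) * (years - years.mean())
```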
What other factors might limit or exaggerate the impact a heat island might have?
Is the UHI effect at the station growing or changing over time? Has the heat island in a city got steadily warmer, or has the area affected by it expanded while the magnitude of the effect stayed the same? This will depend on things like how the density of the city changes, what sort of activities happen where, and so on. A station that has always been inside a city, say at an inner-city university, will only be affected if the magnitude of the heat island effect increases. If the UHI at a site stays constant, then it isn’t a bias to the trend.
On the other hand, a previously rural station that has been engulfed by an expanding city will most definitely feel some warming, and will show what looks like a trend during the period of its engulfment, although how much will again depend on circumstances. If it has been engulfed by low-density suburbia and its piece of ‘country’ has been preserved as a large park around it, the impact will be lower than if a complete satellite city has sprung up around it and it now sits on the pavement next to a six-lane expressway.
But remember: the existing products include a compensation that tries to remove UHI; UHI only impacts our long-term temperature results if the magnitude of the effect is growing; and each station’s data still has to be combined with the results from all other stations using Area Weighted Averaging. Since the vast majority of the Earth’s land area isn’t urban, UHI can only have a limited impact on the final result anyway. And the oceans, 70% of the Earth’s surface, aren’t affected by UHI at all.
Airports
One particular example sometimes cited is the number of stations located at airports, with images painted of ‘all those hot jet exhausts’ distorting the results. Firstly, we are interested in daily average temperatures, not instantaneous values. So the station would need to get hit by a lot of jets.
Think about a medium-sized airport. At its busiest it might have one aircraft movement (take off or landing) per minute. Each takeoff involves less than a minute at full power while the rest of the take off and landing, 10 minutes or so of taxiing, is at relatively low power. For the rest of the one to several hours that the aircraft is on the ground, its engines are off. So for each jet at the airport, its average power output over its entire stay there is a very tiny percentage of its full power. And many airports have night-time curfews when no aircraft are flying. So how much do the jets contribute to any bias?
Consider instead that the airport is like a mini-city: buildings and lots of concrete and bitumen tarmac, but also lots of grassed land in between. So the real impact of an airport on any station located there will be more like a mini-UHI effect. But how much does an airport grow? Usually a fixed area of land is set aside for it. The number of runways and taxiways doesn’t change much, and the area of apron around the terminal buildings doesn’t change that much over time either. So the magnitude of this UHI effect is unlikely to change greatly over time unless the airport is growing rapidly.
If an airport is located in a rural area, then any changes to the climate at the airport are going to be moderated by effects from the surrounding countryside, since it is, after all, a mini-city, not a city. If an airport has always been inside an urban area, such as LaGuardia in New York, then it is going to be adjusted for by the UHI compensations described above. And a rural airport that has been enveloped by its city will eventually have a UHI compensation applied. So the airports most likely to have a significant impact need to be and remain rural, be so big that moderating effects from the surrounding countryside don’t have much effect, and be expanding so that their bias keeps growing and thus isn’t compensated out by the analysis method. Then they need to dominate the temperature record for large areas, with few other adjacent stations. And there are no airports on the oceans. So any airport likely to have an impact needs to be near a large, growing city (to generate the large and increasing traffic volumes that make the airport large and growing), yet in a region that is otherwise so sparsely populated that there are few other stations. And most large, growing cities tend to be near other such cities.
Islands
There is one special case sometimes cited in relation to GISTemp: islands. If the only station on an island in the ocean is at an airport or has ‘problems’, that island’s data will then supposedly be used for the temperature of the ocean up to 1200 km away in all directions, extending any problems over a large area. This claim misses one key point: the temperature series used to determine global warming trends is the combined Land and Ocean series. And when land data isn’t available, such as in the ocean around an island, ocean data is substituted instead.
Here is some data from a patch of ocean in the South Pacific (OK, it’s from around Tahiti; I’m a sucker for exotic locations). I calculated this by using GISTemp to compute temperature anomalies for grids around the world for 1900 to 2010, using in turn land-only data, ocean-only data, and combined land & ocean data. From the three values obtained at each grid point, I then calculated the percentage contribution of each of the two sources to the combined land/ocean result. The following graph shows the percentage contribution of the land data at each grid point, and for reference I have listed below it the temperature stations in the area with their Lat/Long. Obviously this isn’t coming just from land-only data, and in grids too far from land the % contribution of land data falls to zero. Each 2º by 2º grid is approximately 200 x 200 km, much less than the 1200 km averaging radius used by GISTemp.
% Land Contribution around Tahiti
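The percentage calculation described above can be reconstructed as follows. This is my guess at the arithmetic (solving combined = f*land + (1-f)*ocean for f); the input numbers are invented.

```python
def land_fraction(combined, land, ocean):
    """Back out the land share f from combined = f*land + (1-f)*ocean.
    A guess at the calculation described above; inputs are anomalies."""
    if land == ocean:
        return float("nan")               # the split is undefined here
    f = (combined - ocean) / (land - ocean)
    return 100 * max(0.0, min(1.0, f))    # clamp to 0-100 %

print(land_fraction(combined=0.42, land=0.60, ocean=0.40))  # -> 10.0
```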
There aren’t enough stations
A common criticism is that there aren’t enough temperature stations to produce a good-quality temperature record. A related criticism is that the number of stations being used in the analysis has dropped off in recent decades and that this might be distorting the result. On the Internet, comments such as ‘Do you know how many stations they have in California?’ – by implication, not enough – are not uncommon. This seems to reflect a common misperception that you need large numbers of stations to adequately capture the temperature signal with all its local variability.
However, as I discussed in Part 1A, the combination of calculating based on Anomalies and the climatological concept of Teleconnection means that we need far fewer stations than most people realise to capture the long-term temperature signal. If this isn’t clear, perhaps re-read Part 1A.
So how few stations do we need to still get an adequate result? Nick Stokes ran an interesting analysis using just 61 stations with long reporting histories from around the world. His results, plotted against CRUTEM3, although obviously much noisier than the full global dataset, still produced a recognisably similar temperature curve – with just 61 stations worldwide!
Just 61 Stations!
Just 61 Stations – Smoothed!
So even a handful of stations gets you quite close. What reducing station numbers does is diminish the smoothing effect that lots of stations give. But the underlying trend remains quite robust even with far fewer stations. What is perhaps more important is whether the reduction in station numbers reduces ‘station coverage’ – the percentage of the land surface with at least one station within ‘x’ kilometres. But as we discussed in Part 1A, Teleconnection means that ‘x’ can be surprisingly large and still give meaningful results. And with anomaly-based calculations, the absolute temperature at the station isn’t relevant; it is the long-term change at the station we are working with.
The Thermometers are Marching!
A related criticism is that the decline in the number of stations used has disproportionately removed stations from colder climates and thus introduced a false warming bias into the record. This has been labelled “The March of the Thermometers”, with the secondary, conspiracy-theory-type claim that this is intentional – all part of the ‘fudging’ of the data. This can seem intuitively reasonable: surely if you remove cold data from your calculations, the result will look warmer. And if that is the result then, hey, it could be deliberate.
But the apparent reasonableness of this idea rests on a mathematical misconception, which we discussed in detail in Part 1A. If we averaged together the absolute temperatures from all the sites, then most certainly removing colder stations would produce a warm bias. Which is one of the most important reasons why it isn’t done that way! That approach (what I called the Anomaly of Averages method) would produce a very unstable, unreliable temperature record indeed.
Instead what is done is to calculate the Anomaly for each station relative to its own history then average these anomalies (what I called the Average of Anomalies method).
Since we are interested in how much each station has changed compared to itself, removing a cold station will not cause a warming bias. Removing a cooling station would! The hottest station in the world could still be a cooling station if its long-term average was dropping: 50 °C down to 49 °C is still cooling, and removing that station would add a warming bias. However, removing a station whose average has gone from -5 °C up to -4 °C would add a cooling bias, since you have removed a warming station.
We are averaging the changes in the stations, not their absolute values. And remember that Teleconnection means that stations relatively close to each other tend to have climates that follow each other, so removing one station won’t have much effect if ‘adjacent’ stations are showing similar long-term changes. For station removals to add a definite warming bias, we would need to remove stations that are showing less warming, remove other adjacent stations that might be doing the same, but leave in place any stations that are showing more warming. If station removal is happening randomly, there is no reason to think its effect would be anything other than random – noise, not a bias.
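To make that arithmetic concrete, here is a toy example (all numbers invented) contrasting the two methods when the cold-but-warming station is dropped:

```python
# A hot station cooling 50 -> 49 C and a cold station warming -5 -> -4 C.
hot  = {"then": 50.0, "now": 49.0}
cold = {"then": -5.0, "now": -4.0}

# Average of Anomalies: each station is compared with its own past.
both = ((hot["now"] - hot["then"]) + (cold["now"] - cold["then"])) / 2
print(both)                     # 0.0 -- the warming and cooling cancel

drop_cold = hot["now"] - hot["then"]
print(drop_cold)                # -1.0 -- dropping the COLD station COOLS the trend

# Anomaly of Averages: average absolute temperatures first (the naive way).
naive_then = (hot["then"] + cold["then"]) / 2    # 22.5 C with both stations
naive_now = hot["now"]                           # 49.0 C once cold is dropped
print(naive_now - naive_then)   # +26.5 -- a huge spurious "warming"
```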
If this were part of some ‘wicked scheme’, then the schemers would need to carefully analyse all the world’s stations, look for the patterns of warming so they could cherry-pick the stations that would have the best impact for their scheme, and then ‘arrange’ for those stations to become ‘unavailable’ from the supplier countries, while leaving the stations that support their scheme in place. And why would anyone want to remove stations in the Canadian Arctic, for example, as part of their ‘scheme’? Some of the highest rates of warming in the world are happening up there. Why remove them to make the warming look higher? Maybe someone is scheming. I’ll let you think about how likely that is.
But what if the pattern of station removals is driven by other factors – physical accessibility of the stations, the operating budgets needed to keep them running, and so on? Wouldn’t the stations most likely to be dropped be the ones in remote, difficult-to-reach, and thus expensive locations, like mountains, arctic regions, and poorer countries? Which is substantially where the ‘biasing’ stations are alleged to have disappeared from. If you drop ‘difficult’ stations, you are very likely to remove Arctic and mountain stations.
Could it also be that the people responsible for the ongoing temperature record realise that you don’t need that many stations for a reliable result and thus aren’t concerned about the decline in station numbers – why keep using stations that aren’t needed if they are harder to work with?
For example, here are the stats on stations used by GISTemp. The number of stations rose during the ’60s and dropped off during the ’90s, but the percentage coverage of the land surface only dropped off slightly. Where coverage is concerned, it’s not quantity that counts but quality.
Station coverage from GISTemp
GISTemp ‘extrapolates’ 1200 kilometers
One particular criticism made of the GISTemp method is that ‘they use temperatures from 1200 km away’, usually spoken with a tone of incredulity and some suggestion that this number was plucked out of thin air.
Station Correlation Scatter Plots (HL87)
As explained in Part 1A and Part 1B, the 1200 km area weighting scheme used by GISTemp is based on the known and observed phenomenon of Teleconnection: that climates are connected over surprisingly long distances. The 1200 km range used by GISTemp was determined empirically to give the best balance between correlation between stations and area of coverage.
Figure 3, from Hansen & Lebedeff 1987 (apologies for the poor quality; this is an older paper), plots the correlation coefficients versus separation for the annual mean temperature changes between randomly selected pairs of stations with at least 50 common years in their records. Each dot represents one station pair. They are plotted according to latitude zones: 64.2-90N, 44.4-64.2N, 23.6-44.4N, 0-23.6N, 0-23.6S, 23.6-44.4S, 44.4-64.2S.
When multiple stations are located within 1200 km of the centre of a grid point, the value calculated is the weighted average of their individual anomalies. The weight falls off linearly with distance, from 1 at the centre to 0 at 1200 km, so a station 10 km from the centre carries roughly six times the weight of a station 1000 km out. And as discussed in the section on islands above, for small islands the ocean data predominates, not the land data.
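A sketch of that weighting (my own few lines, following the linear taper described in HL87; the station list is invented):

```python
def distance_weight(d_km, radius_km=1200.0):
    """Linear distance weight: 1 at the grid-box centre, 0 at the radius."""
    return max(0.0, 1.0 - d_km / radius_km)

stations = [(10.0, 1.2), (1000.0, 0.4)]   # (distance km, anomaly) - invented
weights = [distance_weight(d) for d, _ in stations]
cell_anom = sum(w * a for w, (_, a) in zip(weights, stations)) / sum(weights)
print(round(cell_anom, 2))                # the nearby station dominates
```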
One area of some contention is temperatures in the Arctic Ocean. Unlike the Antarctic, the Arctic does not have temperature stations out on the ice. So the nearest temperature stations are on the coasts around the Arctic Ocean, on Greenland, and on some islands. And ocean temperature data can’t be used instead, since it is not available for the Arctic Ocean.
Other temperature products don’t calculate a result for the Arctic Ocean at all. The consequence is that when compiling the global trend (the headline figure most people are interested in), that approach effectively assumes the Arctic Ocean is warming at the same rate as the global average. Yet we know the land around the Arctic is warming faster than the global average, so it seems unreasonable to suggest that the ocean isn’t. Satellite temperature measurements up to 82.5 N support this, as does the decline of Arctic sea ice here, here & here.
So it seems reasonable that the Arctic Ocean would be warming at a rate comparable to the land. Since the GISTemp method is based on empirical data regarding teleconnection, projecting this out seems to me the better option, since we know the alternative method will produce an underestimate. Many parts of the Arctic Ocean are significantly less than 1200 km from land; the main region where this isn’t the case lies between Alaska/East Siberia and the North Pole.
Certainly the implied suggestion that GISTemp’s estimates of Arctic Ocean anomalies are false isn’t justified. It may not be perfect but it is better than any of the alternatives.
In this post we have looked at some of the reasons why the temperature trend is more robust with respect to factors affecting the broader region around a station than might seem the case. The method used to calculate temperature trends does seem to provide good protection against these kinds of problems.
In Part 2B we will continue, looking at issues very local to a station and why these aren’t as serious as many might think…

Glenn Tamblyn
October 24, 2011 3:42 pm

Of Averages & Anomalies Part 2B
In Part 1A and Part 1B we looked at how surface temperature trends are calculated, the importance of using Temperature Anomalies as your starting point before doing any averaging and why this can make our temperature record more robust.
In Part 2A and in this Part 2B we will look at a number of the claims made about ‘problems’ in the record, and how misperceptions about how the record is calculated can lead us to think that it is more fragile than it actually is. This should also be read in conjunction with earlier posts here at SkS on the evidence here, here & here that these ‘problems’ don’t have much impact. In this post I will focus on why they don’t have much impact.
If you hear a statement such as ‘They have dropped stations from cold locations so the result now gives a false warming bias’ and your first reaction is, yes, that would have that effect, then please, if you haven’t done so already, go and read Part 1A and Part 1B, then come back here and continue.
Part 2A focused on issues of broader station location. Part 2B focuses on issues related to the immediate station locale.
Now to the issues. What are the possible problems?
Problems with ‘bad’ stations
One issue that has received considerable attention is the question of the ‘quality’ of surface observation stations, particularly in the US. How well do the stations in the observation network meet quality standards with respect to location and the avoidance of local biasing issues, and how much might this impact the accuracy of the temperature record?
The upshot of investigations into this is that, at least in the US, a substantial proportion of stations have poor location-quality ratings. However, analysis of the impact of these site quality problems by a number of independent analysts suggests that they have had almost no impact on the accuracy of the long-term temperature record. How could this be? Surely that is the whole point of these quality rankings – poor-quality sites can give bad results. So why wouldn’t they?
The definition of the best quality sites, Category 1 is as follows:
“Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19º). Grass/low vegetation ground cover <10 centimeters high. Sensors located at least 100 meters from artificial heating or reflecting surfaces and from large bodies of water. No shading when the sun elevation >3 degrees.”
Down to Category 5:
“(error ≥ 5ºC) – Temperature sensor located next to/above an artificial heating source, such as a building, roof top, parking lot, or concrete surface”
Let’s consider a few of these factors. And remember, we are interested in factors that have an impact on long-term changes in the temperature readings at a site. If a factor results in a bias in the reading but this bias does not change over time, then it will not impact the analysis, since we are interested in changes – static biases get cancelled out and have no long-term impact. Firstly, let’s look at the standard enclosure used for a meteorological measurement station, the Stevenson Screen:
Stevenson Screen
The screen is designed to isolate the instruments inside from outside influences, particularly radiant effects from its surrounds and rain. It is usually made from a material such as wood or similar that is a fairly good insulator and isn’t going to change temperature too much because of radiant heating/cooling from its surroundings. The double-slatted design suppresses air movement from wind through the enclosure, minimising wind chill effects and restricting rain entry onto the instruments. The double-slatted design also means that any air rising from beneath the enclosure isn’t being preferentially drawn into or out of the box. And the design of the base allows air movement from below while shielding from radiation from below.
So what are the problems which can change the category of a temperature monitoring station to lower than 1?
Slope > 19º
A problem may arise if the station isn’t located on sufficiently flat ground. This can produce temperature-driven air movements, possibly bringing warmer air towards the station. However, unless there have been really major earthworks around the site, this factor doesn’t change over time and is unlikely to have a long-term changing impact.
Grass/low vegetation ground cover >10 centimeters high
This can impact air movements around the station. Also, if the vegetation changes substantially – low grass to shrubs and trees – then this could change water evaporation rates around the station and alter air temperatures. Major increases in vegetation might have a cooling effect on the station due to evaporative effects, while declines in vegetation back to Category 1 standards might have a warming impact. However, unless there is a regular and progressive change in the vegetation pattern around the station, this would not produce an ongoing change in any bias. If maintenance of vegetation around the station over its lifetime has been poor or erratic, then the bias may fluctuate up and down. This would create shorter-term fluctuations in the bias, but these would tend to cancel out in the longer term.
Shading when the sun elevation >3 degrees
If the degree to which the station and its surrounds are shaded over the course of the day changes, this can alter local heating. Primarily this is going to impact as a result of shading causing differing heating/cooling of the ground under/around the enclosure, resulting in changes in the temperature and flow rate of rising air up through the enclosure. Unless the cause of the shading varies over long, multi-year time frames such as trees growing or buildings rising, the shading effect is not a long-term changing biasing factor. Depending on the cause of the shading, this may cause changes in the bias over the course of a day and over the seasons, but as a multi-year bias, this would remain constant.
Not far enough from large bodies of water
This too is a static bias. The body of water would have a cooling effect due to evaporation that would vary with daily weather conditions and the seasons but would not be a multi-year biasing factor.
Static artificial heating sources
Essentially, surfaces such as brick, concrete, bitumen, etc. can act as local heat stores – to a greater degree than normal grass-covered earth would – and can then release heat either radiantly or by heating the surrounding air. These can be vertical structures, horizontal surfaces away from the enclosure, or a horizontal surface beneath the enclosure. The enclosures are designed to minimise radiant heat penetration from their surrounds, so the major impact of such static heating sources is going to come from heating the surrounding air, which may then pass through the enclosure. This will be worst when such a surface is very close to the enclosure, particularly beneath it, generating warmer air rising into the box.
Also an important factor will be the extent to which any such surfaces tend to form a partial ‘room’ around the enclosure, restricting horizontal air movement. Any such surface will tend to heat the air near/above it, causing that air to rise. More air is then drawn in to replace this, potentially flowing over or through the enclosure. If the distances involved and the geometry of the site result in this new air being warmer than the general surroundings, this could provide a warming bias for the site. Conversely if this replacement air is being drawn from a location that isn’t warmer then there may be no bias at all, possibly even a cooling effect. Ambient winds may also blow warmed air towards, or away from enclosure, depending on wind direction. And the effect of any such bias will vary over the course of the day and the seasons.
However, since the main source of any such bias is the amount and layout of such surfaces and sunlight, these biases won’t change over multi-year time frames unless the area of the surfaces is changing. This could be due to construction, or to changes in the shading of these surfaces, such as by trees growing or buildings going up nearby. And some of these shading changes could actually reduce the bias over time, resulting in a long-term cooling trend. Also to be considered is whether the site is included within a region that is or becomes urban, in which case the UHI adjustments mentioned previously may cancel out any bias completely. And we still have to allow for area weighting of data from such a site when averaged over the Earth’s land surface. And this doesn’t affect the oceans at all.
Dynamic artificial heating sources
These are similar to the static surfaces, but they are things that actively pump heated air into the environment. Things such as Air Conditioner condensers, Exhaust fans, Heater flues, Cooling towers, Vehicle exhausts, etc. As with the static sources, a key issue here is geometry. They are generating hot air which will tend to rise unless winds blow it towards the enclosure. Does any such device actively blow warm air towards the enclosure? Or does its operation tend to draw air in from elsewhere and over the enclosure? How distant is the device and what is the geometry?
Also, how long does the device operate: 24/7 or intermittently? A station may be next to a large car park, but unless there is continuous activity, even thousands of cars have no extra impact if they are all parked and empty. Does an air-conditioner run 24/7 or just 9-5 on weekdays? Is it a reverse-cycle unit also used for heating in winter or at night, in which case it will pump out colder air, which doesn’t rise? How much do these activities vary with the seasons? And ultimately, do these activities grow in magnitude over multi-year time frames? Otherwise they again contribute to short-term, intra-annual biasing but not multi-year effects. And they may be cancelled out anyway by UHI compensations.
Conclusions about ‘Bad’ Stations
The US network certainly isn’t as good as it should be. There are certainly factors operating there that influence short-term daily and seasonal readings, and these may have important implications for daily meteorological forecasting, which relies on absolute temperatures. However, for long-term, multi-year climatological uses, it is easy to overestimate the impact of these problems.
It is easy to understand how our subjective impressions – standing near a poor-quality site, seeing an A/C roaring away or feeling the radiant heat from a concrete parking lot nearby – could lead us to think this is a big issue. But the combination of the screening properties of the enclosures, long-term averaging, anomaly-based averaging, and UHI compensation will tend to remove many biases that do not have long-term trend changes. And area averaging over the Earth’s land surface, combined with the fact that most of the Earth is water, reduces any impact even further.
So it isn’t surprising that the long term temperature trend data doesn’t seem to be significantly affected by station quality issues. That is not to say that there may not be noticeable impacts on shorter term measures – local and seasonal trends and possibly daily temperature range (DTR) effects for example. But for the headline Global Temperature Anomaly, which is a main indicator of Climate Change, station quality issues appear to be a very minor issue, something that ‘all comes out in the wash’.
Station Homogenisation issues
Finally we come to ‘Station Homogenisation’ – the process of reviewing station data records looking for errors that are a result of how the measurement was taken, rather than what the temperature actually was.
A common misconception is that ‘the thermometer never lies’ – that the raw data is the gold standard. As anyone who works in any field involving instrumentation knows, this isn’t true; there are always ‘issues’ you have to monitor for. Any instrument, even a simple thermometer, will have its own built-in biases.
Sometimes there will be readings that are just plain whacky. And surrounding influences can have an impact. A thermometer out in the sunshine will have a different reading from one shaded by your hand for a few minutes. A caretaker who can work quickly taking the readings when the enclosure door is open will produce a different bias from one who works slowly, or reads the instruments in a different order. Bias and error is everywhere.
If readings at a station weren’t always taken at the same time of day, this can introduce biases. Changes in the instruments used can introduce a bias. Some readings can be just plain wrong. Imagine some scenarios:
The caretaker of a station may have had a ‘big night out’ and not read the thermometer very accurately. There is an error there but we probably can’t detect it.
The caretaker of a station may have a ‘big night out’ every Friday night. Now there might be a regular error in Saturday’s readings. With a pattern like this, we might be able to detect it with statistical analysis. We might be able to correct it but only if we are certain enough.
That caretaker might have had one ‘REALLY big night out’ and next morning broke the thermometer. He replaced it, but did he record that fact in the station log? If he did, we know that a change of bias may have been introduced between the two thermometers. Then we can compare readings from before and after and try to find the change – but only after we have years’ worth of data from both thermometers. And if he didn’t log it, then we only spot a problem if that station shows a strange change compared to nearby stations.
Over time the Stevenson Screen may have fallen into disrepair, resulting in a slowly changing bias as outside influences start to penetrate. Then the site is updated with a new screen. Biases removed, although the new screen may have its own small bias. If we know about the change, we can try to compensate for it – eventually, once we have enough data from before and after.
The caretaker at the station in Ushuaia, right at the southern tip of Argentina, records data through the early 1900s. In Spanish, with poor handwriting – really hard to tell 7s and 9s apart. The log sheets are sent to Buenos Aires, where the data from this and many other stations is collated and typed up onto summary sheets by a clerk with an old, battered typewriter. Then they are filed away; 40 years later they are extracted, faded and old, photocopied on a poor-quality early copier, and mailed to the US for incorporation into climate databases, where they must be copied into the database again by hand. How many errors have crept in during that process?
So, we can’t simply take the raw data at face value. It has noise in it. We need to analyse this data looking for problems and correcting them when we are confident enough of the correction. But also being careful that we don’t introduce errors through unjustified corrections. This requires care and judgment and it is sometimes a real detective story. And often corrections cannot be made until many, many years later because you need lots of data before you can spot changes in bias.
So this process of working through the data, trying to make it more accurate is ongoing.
But what of its impact on the temperature record? Again, if the biases at a station don’t change over time, they don’t affect our analysis. Individual errors matter but they will tend to be random, some higher, some lower so when we average over large areas and long time periods, they tend to cancel out. Again, it is problems that cause changing biases that matter. And analysis of changes due to Homogenisation in the record indicate that there are as many cooling changes as warming ones. Such as this from Brohan et al 2006:
Homogenisation Distribution
Conclusion
So, Part 1A looked at how we should calculate the temperature record and why the method used is very important to the result – and showed that this doesn’t necessarily match our intuitive idea of how it should be done; in this, our intuition is often wrong. Part 1B looked at how we DO calculate the temperature record, using the method outlined in Part 1A, and showed that the area weighting scheme used by one record is based on empirical evidence. In Part 2A we looked at some of the areas where the temperature record has been criticised with respect to its broader locale. And in this post we have explored issues related to the immediate surrounds of the station.
I think we have seen that there are many reasons why we tend to overestimate the effect of these problems. This conclusion is consistent with the evidence here, here & here from various analyses that show that these possible problems haven’t had any significant effect on the result.
My conclusion is that we can have a strong confidence in the results produced for the global temperature trend. Any problems will show up more in short-term patterns such as seasonal, monthly and daily trends. But the headline global numbers look pretty robust.
You will have to make up your own mind but I hope I have been able to give you some food for thought when you are thinking about this.

October 24, 2011 3:45 pm

Brian
1. Scientific truth is not determined by holding a vote. It is determined by collecting and evaluating evidence. The history of science is littered with examples of where the consensus was wrong.
2. Not one of the major scientific institutions you allude to has actually balloted its members on CAGW. The committees just presume to speak on behalf of their membership.
3. See the paper from which the 97% agreement figure was derived. The question on man’s influence on the climate was not well formed. It did not restrict itself to global rather than local effects, nor was it specific to CO2 (as opposed to land use changes, etc.), and it also relied on the respondent’s interpretation of what was meant by “significant”.
4. The questionnaire was circulated to those working in “earth sciences”. So, no solar physicists? There are quite a lot of people who think that big yellow thing in the sky might have something to do with the climate. No cosmologists – what about the cosmic ray theory? No specialists in thermodynamics, many of whom question the scientific principle of the greenhouse effect? No botanists, whose stomata studies directly contradict the ice core CO2 records? Etc., etc.
5. Finally, the 97% represented only 75 scientists, all of whom were described as climatologists. This probably means they worked for the institutions which produce the climate models and… wait for it… they believe in their own models!

Legatus
October 24, 2011 3:45 pm

I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it? Maybe Anthony Watts can shed some light on this?

Have you heard of the Little Ice Age? Click that link and you will see two things. The first is the obligatory “I believe in global warming like you, so don’t cut off my funding!” message (see Climategate and its emails showing what is done to those who fail to support AGW). At the very least, this shows that you cannot say that the scientist here has an anti-AGW bias. Second, there is this, yet another site (Wikipedia) that starts with the obligatory “we support AGW!” message, trying to say that the Little Ice Age (according to the IPCC) was not worldwide, and then is followed by actual scientific evidence from, respectively, Europe, North America, Central America, Africa, Antarctica, Australia, New Zealand and Pacific Islands, and South America, which shows that the LIA was, indeed, worldwide, and that the IPCC was wrong or flat-out lying. And if you look here, you see a 300-year record of 5 European cities showing about 150 years of cooling, followed by 150 years of warming. We also have records of the cold people wrote about then, such as the frost fairs held on top of the ice of the frozen river Thames in England, and how George Washington dragged cannon across the frozen Hudson River. Conclusion: it was colder then. Then, about 150 years ago, it started to warm up, and warmed up rapidly.
Thus, by the time we get to 1900 or so, it has warmed up from its formerly cold temperature, and since then the temperature has remained about the same. Thus we can clearly say “the world is warming”, since it is clearly warmer than the Little Ice Age, and yet doubt modern, unprecedented warming since 1950, since we can see that for the last 100 years or a bit more, the temperature has not changed nearly as much as it did when the climate changed from the Little Ice Age to the modern warm period.
To sum up:
There was a Little Ice Age, so called because it was cold.
It was consistently colder than now for quite some time, perhaps 300 years.
The Little Ice Age ended, and it warmed up rapidly.
For the last 100 years or more, in the USA at least, it has remained about the same, warmer than the Little Ice Age.
If global warming were true, it would not have remained consistently the same; it would have warmed rapidly after 1950. It did not.
This is according to the most reliable temperature measurement stations we have, the ones that have remained constant for 100+ years.
Or in one sentence, it has definitely warmed up since the Little Ice Age, and has remained fairly warm for over 100 years, but has not warmed much more in that 100 years from what it was 100 years ago.
You just need a little perspective, compared to 200 years ago, the world has warmed, compared to 100 years ago, the world has not warmed.

October 24, 2011 4:13 pm

Erratum : the question did restrict itself to global effects. Sorry.

Myrrh
October 24, 2011 4:33 pm

Legatus says:
October 24, 2011 at 3:45 pm
Or in one sentence, it has definitely warmed up since the Little Ice Age, and has remained fairly warm for over 100 years, but has not warmed much more in that 100 years from what it was 100 years ago.
You just need a little perspective, compared to 200 years ago, the world has warmed, compared to 100 years ago, the world has not warmed.

It’s the last hundred years’ temperature data they’d been playing with, as in New Zealand; the second link mentions Salinger’s connection with CRU: http://climaterealists.com/index.php?id=6151
http://www.climateconversation.wordshine.co.nz/tag/nz-temperature-records/
It’s taken around forty years to put the record straight.

October 24, 2011 4:35 pm

Brian,
Try this shoe on and tell me if it fits:
“there are… numerous well meaning individuals who have allowed propagandists to convince them that in accepting the alarmist view of anthropogenic climate change, they are displaying intelligence and virtue. For them, their psychic welfare is at stake.”
The source is M.I.T. Climatologist Dr. Richard Lindzen:
http://thegwpf.org/opinion-pros-a-cons/2229-richard-lindzen-a-case-against-precipitous-climate-action.html
Dr. Lindzen goes on to say:
“With all this at stake, one can readily suspect that there might be a sense of urgency provoked by the possibility that warming may have ceased and that the case for such warming as was seen being due in significant measure to man, disintegrating. For those committed to the more venal agendas, the need to act soon, before the public appreciates the situation, is real indeed. However, for more serious leaders, the need to courageously resist hysteria is clear. Wasting resources on symbolically fighting ever present climate change is no substitute for prudence. Nor is the assumption that the earth’s climate reached a point of perfection in the middle of the twentieth century a sign of intelligence.”
Do I hear the sound of a pseudo-scientific religious cult crumbling?
http://sbvor.blogspot.com/p/climate-change-science-overview.html

Steve Garcia
October 24, 2011 4:39 pm

The post-2000 dying of the thermometers, so soon after the Great Dying of the Thermometers in about 1990, reminds me of a short story I read back in the early 1960s, when I was a mere lad, as they say across the pond NW of France.
It was the era of Reader’s Digest’s greatest popularity, including hardbound volumes with truncated versions of novels great and not-so-great, about four or five to the volume. Someone tongue-in-cheek (and well over my young, literary-virginal head) wrote about the trend to digest books more and more, and, taking it to its logical but extreme limit, told a tale of a book that had been digested all the way down to a single word. I believe that word was the name of the story, but it HAS been a long time, and I was ever so young.
One might suppose that that is what the climate establishment is aiming for – to digest all current thermometer readings to one special one that represents the entire globe.
And why not? Why should they have to go through all that tedious data assembling – instruments and proxies, tree rings, ice cores, varves, corals, and the various thermometer types? Wouldn’t there be far less disagreement and more settled science that way? Shouldn’t that wonder of our two most recent decades – science – digest down all their data collecting to one and only one reading per day? Wouldn’t all this NH/SH, El Niño-La Niña, AMO, SST, confusion be done away with, not to mention the problem of semi-drunken local temperature readers – FINALLY! – so that the experts can sit back, in the full glory of their expertship, puffing on their Meerschaums and Marlboros (and self-rolled Zig-Zags and whatever might find itself therein), blowing rings and smiling like the Cheshire cat – and fading blissfully from our sight, into the upper reaches of yon ivory towers of yore? Isn’t that what we really want our scientists to do?
If science is at its core about improving life and making life easier and simpler, well why shouldn’t the climatologists partake of that life of Riley, too? They pay taxes, too, after all. We should nod our heads in agreement at such a development – this perfect, singular global temperature data point from the one perfect temperature point on Earth – as the apex of science’s great accomplishments on behalf of homo sapiens sapiens. Rather than bemoaning the defuncting of the confusion-engendering Yamal or Polar Ural tree rings, the obfuscating UAH satellite blather, the TOB changes, the TOB differences, UHI adjustments, petulant declines, the PDO, solar irradiance, and cosmic rays, we should be having a rousing wake, celebrating the part all of them had done for us in the past, when our climate folks were getting their feet under themselves, and we should toast to the new age of Unitemp. Gone will be all the confusion and gone will be all the endless ragging on each other over what graph and what data set is BEST – and most especially the endless tug-of-war over what it all means.
Let us revel in our oneness of agreement. The one temperature cannot be confused, and isn’t that better? Unity beyond complexity. War is peace. Simple is more complete.
We can just hand over a small scrap of paper with one number on it, once a year, and so let Congress or the European Parliament get on with their job of whatever it is they do. Isn’t that so much more efficient and civilized?
There! I feel better, just for having digested it all down for your reading enjoyment…

wayne
October 24, 2011 4:41 pm

Smokey:
one thing you are is consistent (with a capital ‘C’). Others like Brian claiming otherwise are lost and wrong. This is a great science site.
I wanted to show you a couple of new things I have stumbled upon for you are one person here I know will not forget, you are very persistent too! 😉
A traipse through a search of all “water vapor”, spencer, miskolczi led me to an article I had missed by Dr. Curry many months ago at http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/ and the related http://judithcurry.com/2011/09/25/trends-in-tropospheric-humidity/ . And, within, it points to a paper by Kratz et al ( http://miskolczi.webs.com/p27jqsrt.pdf ) where I came upon a statement I found very interesting.
One was:
“The far-infrared, specified here to cover the spectral range with wave-numbers less than 650 cm−1, is dominated by the pure rotation band of water vapor, and has been shown to account for over 40% of the energy emitted to space by the Earth’s atmosphere-surface system for clear-sky conditions [1].”
I had always suspected that but had never explicitly read “40%” in a paper. There it is.
Second: you have read in some of my comments, or I think you have, of how 1/6th of the 390 Wm-2 downwelling IR displayed in the Kiehl-Trenberth graphic is real, that is, when all figures on the energy budget balance. Well, I also found, in a summary of Miskolczi’s papers at http://www.friendsofscience.org/assets/documents/The_Saturated_Greenhouse_Effect.htm , the following:
“He applies the Virial Theorem to the atmosphere, which states that the kinetic energy of a system is half of the potential energy. The internal kinetic energy is taken as the upward long wave energy flux at the top of the atmosphere, and the potential energy is the upward radiation flux from the surface. This result is used to determine the fraction of the upward radiation from the surface that is transmitted directly to space (rather than absorbed by the atmosphere), which is 1/6.”
That to me is very curious: 1/6. He is stating that the upward LW is 1/6, while I have been pointing out the 1/6 downward. Four sixths is of course always horizontal in three dimensions. I had never read that explicitly stated either. When you carry this into the Virial Theorem – what energy exactly supports the mass of our atmosphere every second of every day – it now all seems to finally make perfect sense.
Thought this a good time to pass that along for you to think about.

October 24, 2011 5:10 pm

One thing I find intriguing is that global temperatures in 1959 are shown to be somewhere in the region of 287.22 K (I say “in the region of” because there are a few different figures “on the market”). The chart I chose, at random, started in 1959 and ended in 2004, and showed a global temperature of 287.77 K for the last of those two years.
There were a few “ups and downs”, mainly behind the decimal point, during those years, but the 287 K bit was reproduced for (almost) every year.
So, the intriguing bit for me is: if thermometers placed at a few places on the Earth’s dry surface, plus a lot of highly unreliable temperature reports from the world’s merchant and military shipping, can be accurate to within 0.55 K or better, then why the Ken-Nell do we spend not just millions but billions or more of $€£ on satellite measurements?

Glenn Tamblyn
October 24, 2011 5:22 pm

Jerome, DirkH
“I have to disagree. He is using the temperature delta (Δtemperature) to average with other deltas. That makes much more sense than what you have assumed.”
A fundamental problem I see with this method is that the deltas propagate forward. Any error that will unavoidably occur if a station is missing from sample n then gets carried forward into the calculation for samples n+1, n+2, etc. Ultimately, all future deltas contain some effect from all past errors. In addition, there is the problem of propagating inaccuracies in the performance of the calculation. Computers do not calculate to infinite precision, and since most of this is about calculating differences between larger values to produce much smaller differences, then continually summing these differences, the finite accuracy of each stage of arithmetic will propagate forward into the result. It would take some serious analysis to work out whether the net effect of this over time will all cancel out or be cumulative.
In contrast, the method used by the mainstream analyses – comparing each reading for each station against its own long-term average – means that the anomaly (the delta, if you like) is always calculated against a fixed reference. So any issues that might occur as a result of problems at one sample point don’t automatically propagate to future samples.
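A toy demonstration of the difference (my own invented numbers, not Michael’s code): inject a one-off error of +0.5 into the chained-delta method and watch it persist, whereas with a fixed baseline it stays confined to one point.

```python
def chain(deltas):
    """Accumulate deltas into a running temperature series."""
    series, total = [], 0.0
    for d in deltas:
        total += d
        series.append(total)
    return series

deltas = [0.1, -0.2, 0.3, 0.0, 0.1]
clean = chain(deltas)
bad = chain([d + (0.5 if i == 2 else 0.0) for i, d in enumerate(deltas)])
print([round(b - c, 2) for b, c in zip(bad, clean)])
# [0.0, 0.0, 0.5, 0.5, 0.5] -- one bad delta shifts every later value

# Fixed-baseline anomalies: the same one-off error stays at one point.
temps = [10.1, 9.9, 10.3, 10.3, 10.4]
baseline = 10.0                            # fixed reference-period mean
anoms = [t - baseline for t in temps]      # an error in one reading only
                                           # perturbs that one anomaly
```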
Also, Michael’s method does not use area weighting. He is either doing one calculation for the whole US or he is subdividing the country – this isn’t clear. But his method of simply dividing by the available station count means that the effective weighting of the stations changes each time there is a missing reading. That is over and above the fact that he is not area weighting at all. If, for example, you are using 10 stations in Texas and 10 stations in Vermont, by his method the climate change in Vermont is given equal weight to that in Texas, even though Texas is a much larger proportion of the US. Then, if you are looking at stations over time, you can introduce a time bias towards the climate changes in a region where the number of stations has grown over time. In my example, if Texas had 3 stations in 1900 and Vermont 6 (because it was more densely populated), and the counts had changed to my first example by 2000, that introduces a time bias towards the Texas climate over time.
Area weighting ensures that these geographic biases don’t occur.
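The Texas/Vermont point in numbers (all invented, purely to illustrate the bias):

```python
# Texas is roughly 7x Vermont's area; suppose Texas warmed 0.2 C and
# Vermont 1.0 C, each observed by 10 stations.
regions = {"Texas":   {"area": 7.0, "anom": 0.2, "stations": 10},
           "Vermont": {"area": 1.0, "anom": 1.0, "stations": 10}}

# Per-station average: both states count equally despite their sizes.
flat = (sum(r["anom"] * r["stations"] for r in regions.values())
        / sum(r["stations"] for r in regions.values()))
print(flat)    # 0.6 -- Vermont dominates far beyond its share of land

# Area-weighted average: each state counts in proportion to its area.
areaw = (sum(r["anom"] * r["area"] for r in regions.values())
         / sum(r["area"] for r in regions.values()))
print(areaw)   # 0.3 -- closer to what the land as a whole actually did
```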
“But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around, like in GISS. Is this what you consider a better approach? ”
The key question here is how many stations you need to adequately characterise the CHANGE in a region’s climate. Note this is not the same as characterising that region’s climate. Read the posts I put up earlier on teleconnection and where the GISS range of 1200 km comes from – sorry I couldn’t include the graphics; WUWT would be more effective as a platform for discussion if commenters could illustrate their points.
It seems to me that a common misperception is that to adequately observe any climate change we need lots of stations. The climates of 2 locations may be quite different from each other, even if they are relatively close, due to altitude differences. But locations at similar altitudes can have very similar climates over quite long distances. And when we look at how the climates of 2 locations CHANGE relative to each other, they are often quite well correlated over long distances, particularly over land. Thus the 1200 km figure used by GISS. This isn’t based on speculation but on observation: looking at the correlation between large numbers of station pairs and seeing how that correlation varies with separation.
So the case of a truly isolated station that is the only one within 1200 km would be problematic. But there aren’t many situations like that. However, failing to area weight in calculations means that in effect every single station is producing a bias. And these biases have a definite pattern: regions with dense station counts will bias the result towards them.
It also makes manipulations much simpler. The PNS approach. Drop the thermometers that don’t confess.

George E. Smith;
October 24, 2011 5:25 pm

Well, with all this talk about missing data, and thermometers dying (none of Mother Gaia’s thermometers die, so she always knows the temperature), and methods of diddling the averages to substitute for the missing data…
It’s kind of a lost cause: there’s this thing called the Nyquist Sampling Theorem, and it says you don’t have anywhere near enough global stations, and never have had. You don’t need to actually reconstruct the original continuous temperature map, but you do need the ability to reconstruct the original continuous temperature function in order to extract even a correct average of the values – whether you recreate the values or not.
So whatever you calculate as the average of the data set, whether data samples are missing or not, the zero-frequency signal – which is another name for the average value – is itself corrupted with aliasing noise; so what one does to fix it is somewhat irrelevant.
And the twice-a-day min-max temperature readings are already in violation of the Nyquist criterion for temporal sampling, since the daily temperature variation is not a simple sinusoid with no harmonic content: at least a second harmonic with a 12-hour period must be present, and sampling twice in 24 hours will result in the daily average calculation also containing aliasing noise. Not to mention that any varying cloud cover during the day will totally bamboozle the min-max thermometer (but not Mother Gaia’s thermometers).
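A quick numerical check of that last point (a toy diurnal cycle of my own invention, with a second harmonic added):

```python
import math

def temp(hour):
    """Toy diurnal temperature: a 24 h sinusoid plus a 12 h harmonic."""
    return (20 + 5 * math.sin(2 * math.pi * hour / 24)
               + 2 * math.sin(4 * math.pi * hour / 24 + 1.0))

samples = [temp(h / 10) for h in range(240)]        # every 6 minutes
true_mean = sum(samples) / len(samples)             # ~20.0 C
minmax_mean = (min(samples) + max(samples)) / 2     # what min-max reports
print(round(true_mean, 2), round(minmax_mean, 2))   # the two disagree
```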

October 24, 2011 5:26 pm

Glenn Tamblyn says:
As Requested:
October 24, 2011 at 3:38 pm – Of Averages & Anomalies Part 1A
October 24, 2011 at 3:40 pm – Of Averages & Anomalies, Part 1B
October 24, 2011 at 3:41 pm – Of Averages & Anomalies Part 2A
October 24, 2011 at 3:42 pm – Of Averages & Anomalies Part 2B

About ten screens for each of these posts. And the whole shebang is a humongous diversion.
Glenn, we’re questioning your unstated assumptions: that the basic data, including the so-called rural data, are free of UHI, or that the UHI is accurately enough known, and that the corrections applied are appropriate.
The long-standing records that are the subject of this post are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.
Moderators: should Glenn’s posts be raised to their own thread, to assemble a fuller answer from WUWT? At least then Glenn could not say on SkS that nobody here even pays attention to his “real” science. Yes, I know, we get the traffic at WUWT. But the politicians and the science academies are still playing poker with our finances, and putting on shows as if the real skeptics’ science doesn’t exist.

October 24, 2011 5:41 pm

Looks like we have an un-closed italics tag up above.
[Fixed. ~dbs]

October 24, 2011 5:46 pm

Wayne,
Thanks for the info. Interestingly, Dr Miskolczi’s estimate of climate sensitivity is… zero.

barry
October 24, 2011 5:48 pm

Dave Springer @ here
And others talking about adjustments…
Are you aware that many studies, including Fall et al (Anthony Watts’ study), have corroborated the average temperature record for the US? There have also been many blog initiatives that took raw, adjusted, rural, airport, and urban data, processed them in different ways, and basically corroborated that the US record for average temperatures is robust.
Here is an excellent breakdown of US adjustment choices, focussing on UHI, over at The Blackboard, Lucia’s blog.
Last year Steve Mosher and Zeke Hausfather wrote a seminal post at WUWT discussing numerous attempts at reconstructing global temperatures.
http://wattsupwiththat.com/2010/07/13/calculating-global-temperature/
These were worked up from raw data and from adjusted data, and with various filters and processes, the general result being that the official temperature records are robust. Mosh and Zeke made an excellent case to move on from questions about the need for adjustments to more incisive enquiry about the robustness of some of them (like UHI).
(Check the link above, because it points to numerous projects from both sides of the aisle reaching pretty good agreement on basic ideas.)
So many issues have been investigated – we must not lose sight of all the work that has been done. For example, here is a link to an experiment taking 60 rural stations from around the globe with at least 90 years of uninterrupted data. Result? Good agreement with official (land-only) records.
For US rural/urban comparisons, there have been several blog attempts, which conclude that the difference is negligible.
Recall also the global temperature record from raw data at The Air Vent (just one of the skeptical sites I have cited here). Time and again both sides of the aisle have tested the data thoroughly and found the official records to be fairly robust.
Regarding the top post, there is good agreement between rural and urban temps, from independent analyses, in the literature, and even according to Michael Palmer’s work above. There is no need to discard recent data, although it would be good to learn why rural stations have dropped off lately – and remember the last time station drop-out was thought to be an issue: it turned out there was nothing nefarious, and it didn’t make a difference to the temp records anyway.

October 24, 2011 6:01 pm

barry,
Three “robusts” and one “robustness”! That word always sounds faintly ridiculous to me, like those using it are trying to make their argument stronger.
Here’s the real argument, which is avoided as much as possible by the robust crowd: “Carbon” [by which they mean carbon dioxide, a gas] has been demonized as something harmful that will cause bad things to happen, like climate disruption, runaway global warming, coral bleaching, etc.
The truth is this: there is no evidence to support those conclusions. The only evidence we have is that CO2 is harmless and beneficial. Falsify that testable hypothesis, if you can.

barry
October 24, 2011 6:06 pm

Interestingly, Dr Miskolczi’s estimate of climate sensitivity is… zero.

Yes, it would appear the good doctor does not believe we’ve experienced global ice ages over the past million years.

1DandyTroll
October 24, 2011 6:08 pm

Brian says:
October 24, 2011 at 1:19 pm
“Amazing that the “politicians and environmental promoters” have managed to convince 97% of climatologists and essentially every major scientific organization that AGW is real.”
So, essentially, you mean that all those people and organizations who depend on the money coming, essentially, from “politicians and environmental promoters” cave in to those “politicians and environmental promoters’” demands? In your reality: I wonder why?
Back to reality: do you happen to be able to supply proof, or do you just like to blow smoke from your bong every which where?

Theo Goodwin
October 24, 2011 6:11 pm

barry says:
October 24, 2011 at 5:48 pm
Very interesting post, barry, but it is incomplete. Lucy Skywalker sets forth what you need to address as follows:
“Glenn, we’re questioning your unstated assumptions: that the basic data, including the so-called rural, are free of UHI / or that the UHI is accurately enough known – and that the corrections applied are appropriate.
The longstanding records that are the subject of this post, are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.”
In addition, I am questioning the entire framework that Mosher and friends inherited but never questioned. As scientists, it is not enough to draw your diagrams across maps of Earth’s surface and apply your inherited statistical techniques when you have empirical evidence of far greater importance staring you in the face. The longstanding records from what I call the “well managed” stations are powerful evidence that the records from the other stations are questionable. Shame on you for ignoring those longstanding stations. I take it that you cannot think outside the box enough to address this empirical matter. But it is your duty now as scientists to explain empirically the differences between the longstanding stations and the others.

October 24, 2011 6:12 pm

Coming back to the thread after a few days, I was shocked at some of the comments directed at “Brian”, since they didn’t apply to anything I’d said. I immediately checked for other Brians, of course, and came across the d****l “Brian” was posting.
He dishonours our shared given name! Glad I appended my surname initial to my tag from the get-go.
SBVOR;
Thanks for the additional Lindzen links. He’s becoming ever more accurate and forthright in his diagnoses.

barry
October 24, 2011 6:15 pm

My dear Smokey,
I’d rather lick my way to the centre of the earth than attempt to make points through your endlessly shifting goal posts. Been there before, bubba. Bought the T-shirt, got back on the boat.
You’re welcome to respond to the content of my post rather than snarking about a word in it. It would be on-topic to boot. I may even reply with more consideration.
🙂

Legatus
October 24, 2011 6:19 pm

Of Averages & Anomalies 1. Every station may not have data for the entire period covered by the record. They have come and gone over the years for all sorts of reasons. Or a station may not have a continuous record. It may not be measured on weekends because there wasn’t the budget for someone to read the station then. Or it couldn’t be reached in the dead of winter.

Ahh, but what if you do have stations with reliable data for the entire period? Well, then you do not need to do all this complicated stuff to make up for that, do you? And this post is about exactly such stations. And this post shows that these reliable stations’ data disagree with the data of stations that are less reliable and that use a lot of complicated math to supposedly make up for it. In the scientific method, this is called “falsification”. It shows that the method described in “Of Averages & Anomalies” has been falsified.
The scientific method:
Hypothesis: some stations have unreliable data some of the time, which introduces error. If we use the method described in “Of Averages & Anomalies”, we will be able to screen out that error.
Test of hypothesis: compare the output of this method to known stations that do have reliable data, and see if there is a significant difference. If none, the method works and can be used to eliminate error; if there is a significant difference, the method is falsified and the currently reported temperatures contain error.
Result of experiment: there is a very considerable difference between the known reliable stations and the output of the method that claims to be able to eliminate this unreliability.
Conclusion: this method does not eliminate the error; it has been falsified (assuming it was even used correctly and honestly in the first place).
Also, the idea that you can eliminate most of the stations and still achieve a reliable record of whether the temperature is rising or not is incorrect. If you give me control over which stations are included in the temperature record, and which are dropped, I can get you warming regardless of what method you use to screen out my deliberate and false warming. All I need to do is carefully select stations that show a gradual and steady warming. I would almost certainly have to eliminate most stations; using too many stations would make this too hard for me, as there would only be so many stations with this kind of record. There are, after all, only so many cities, and some stations may have been too carefully maintained and calibrated to make this possible (they may have compensated for the UHI effect by careful siting and re-siting). The stations I keep would be stations in growing urban areas, and the above article shows that those are the stations kept. I would also eliminate most rural stations, except for a few where the local environment right around the station itself contributed to a slowly growing heat. In all cases, what I am looking for is a slow rise of reported temperatures due to a slowly increasing urban heat island effect. If I choose those stations where the increase is slow and gradual enough, and drop all stations where there is no such gradual rise, no amount of fancy math will correct for my deliberately introduced warm bias.
I notice a few things here (“you” is the author of “Of Averages & Anomalies”):
*The number of stations dropped from the temperature record is huge; that is exactly what I would need to falsify the temperature record by including only those stations that show rising temperatures and dropping all others. The number of stations showing such a gradual temperature rise may be far smaller than the total number of stations, so I would need to drop most of them. Thus, this huge drop in stations must raise serious suspicion.
*I paid for these stations with my tax dollars; many still exist and are still reporting temperatures, yet they are not being entered into the global temperature record. Why not? I paid for them; you are wasting my tax dollars by not using them for the purpose I paid for. Where is the money going, if it is not being spent on these stations?
*You claim to have a method here which will eliminate bias and error. You drop a huge number of stations, which still exist and report. You cannot claim you dropped those stations because they report in error, since you claim to have a method to eliminate this error. So why have you dropped these stations?
*If I were to deliberately introduce a warm bias, I would wish to drop most rural stations. We now have no more rural stations reporting than we had some 150 years ago. This should make anyone suspicious, whatever the excuse.
*If I were to deliberately introduce a warm bias, I would drop far more rural than urban stations; this is exactly what has happened. Urban stations now make up a far greater percentage of the total than at any time in history, including 150 years ago.
*If I were to deliberately introduce a warm bias recently, I would expect to see “rising temperatures” right around the time I eliminate most stations. That is exactly what we do see.
*If I have 1/3 of stations reporting warming, 1/3 reporting cooling, and 1/3 reporting steady temperatures, can this method fail to show warming if I drop the cooling stations and most of the steady ones? Will it correct for that error? Will it even warn you of it? What if I have only 5 or 10% of the stations giving me the slowly rising temperatures I want, and drop most of the rest: can this method fail to report warming?
*You claim not to have carefully selected for warm-bias stations; the above article shows that this is suspect, at the very least. Therefore I would want to see proof of that; your word alone is no longer enough. The fact that “global warming” being true results in greater budgets, job security, and prestige for you has to make me very suspicious; you have a clear conflict of interest.
“Could it also be that the people responsible for the ongoing temperature record realise that you don’t need that many stations for a reliable result and thus aren’t concerned about the decline in station numbers – why keep using stations that aren’t needed if they are harder to work with?”
“Could it also be” – now there is a definitive, scientific phrase, sure to fill me with unbounded confidence! If you are going to say that we don’t need these stations, I am going to need something more definitive than a “could”. I have a much more likely idea: since we know that there used to be far more temperature stations reporting than now, we know they can indeed be used, because they were. But now you say “it’s too hard!”. Well then, how about we fire your lazy ass and get someone out there who will do the work! You say it’s too expensive; well, how much has your budget dropped, if at all? Unless you can show that your budget has dropped a lot, then this is just an excuse. And you are asking us to expend trillions of dollars to combat “global warming”, and you are trying to skimp a few bucks here? Before I am willing to do all the hard work and expense to combat something, you had better do the far less hard work and expense to show me I need to.
Here’s an idea: before we just accept that it is OK for you to drop all these stations (many of which still exist and still report temperatures, showing that there is no need to leave them out – it is not too hard, because someone is doing it right now), how about we try adding back in the temperatures they still report, and you can then use all your fancy math to screen out the errors (I have no objection to honest error screening, after all), and then we can see if the record still shows the same. Or… we could try using only the most reliable, long-term stations, and see if they concur with your method. You know, like is done above. Oh wait, they do not concur. Conclusion: all the verbiage in “Of Averages & Anomalies” illustrates the old saying: if you can’t dazzle them with brilliance, baffle them with BS. In fact, the pro-AGW camp can go further: you can baffle each other. Each of you adds only a little dishonesty, while telling each other how diligent you are being with the truth. So long as you compartmentalize it, say with only a few key players adding in just a little dishonesty at key steps (with lots of rationalizations for it), why, you can continue to believe that your record is honest. The actual use of the scientific method above to test that record, and the finding that it is wanting, should give you pause…
The fact that a number of the chief proponents of AGW have actually been caught monkeying with the temperature trend and “adjusting” it well after the fact (despite not having a time machine to tell whether they needed to – an actual example is how 1938 used to be the highest recorded US temperature, yet was adjusted downward in increments until 1998 ranked higher), as well as actual criminal behavior and deliberate abandonment of the scientific method (such as not releasing their data and code, and even threatening to destroy it rather than do so, so that their work cannot be replicated or even checked), also means that the claim that they are, indeed, not up to anything is either suspect in the extreme or a flat-out lie. Note that it is quite possible in a large, loose organization like that to believe you are telling the truth simply because everyone else around you assures you that you are. Enough compartmentalization of little lies here and there, and you can collect them together into one huge whopper and never know it. Throw in a bit of “noble cause corruption” as a rationalization and there you go, conscience cleared!

Legatus
October 24, 2011 6:31 pm

BTW, one thing I would surely like to see, as an amendment and addition to this article: whether there are stations like these, with guaranteed long and accurate records, in countries other than the US. Yes, I know that others have shown in this thread that there are other very long records; what I am looking for is
1) How accurate and reliable are they? This would require multiple stations, rather than just, say, one spot. The GISS record here draws on more than 600 stations; are there any such records from other countries?
2) I would like to see them over the same 100-year time frame, apples to apples.
3) Unadjusted data, of course.

Theo Goodwin
October 24, 2011 6:33 pm

Lucy Skywalker says:
October 24, 2011 at 5:26 pm
“The longstanding records that are the subject of this post, are very strong evidence against all your assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work.”
Excellent post. The longstanding records should be treated with respect and not bundled with the other records in knee jerk fashion. It is incumbent upon the bundlers to provide empirical evidence that the two sets of records should be treated the same. Without such evidence, one wonders whether the reason for bundling the weak records with the longstanding records is to achieve a higher average temperature. Recognizing that such evidence exists and can be addressed might require some folks to think outside the box.

October 24, 2011 6:36 pm

peetee says:
October 24, 2011 at 3:31 pm
“uhhh… the article references to ‘abstract’ – where’s the actual published paper to be found? What journal? Uhhh… what peer-reviewed journal? Surely, surely…. this can’t be a pre-release!!!”

peetee, this IS the FINAL release, and the paper is peer-reviewed right here. That means you and your peers!
“But wait”, I hear you saying, “you can’t be serious! My troll posts count for peer review?” To which I reply: “Why yes, for sure! You have no idea what real academic peer review can be like.”
Glad we could clear that up.

Theo Goodwin
October 24, 2011 6:40 pm

Glenn Tamblyn says:
October 24, 2011 at 3:38 pm
“In this series I intend to look at how the temperature records are built and why they are actually quite robust. In this first post (Part 1A) I am going to discuss the basic principles of how a reasonable surface temperature record should be assembled. Then in Part 1B I will look at how the major temperature products are built. Finally, in Parts 2A and 2B, I will look at a number of the claims of ‘faults’ against this to see if they hold water or are exaggerations based on misconceptions.”
We are not asking for a tour of the box. We want you to think outside the box. What empirical evidence can you offer for not treating the longstanding records differently from the other records? In an early post, I suggested that the longstanding records should be treated as the standard and that all other records should be treated as deficient because of siting issues and related matters. Anthony’s 30 years of data offer considerable empirical evidence for investigating the siting issues. Please try to address the empirical questions about siting.

October 24, 2011 7:00 pm

It’s also worth puzzling over GISS vs adjusted data in Australia …
http://www.waclimate.net/bomhq-giss.html

October 24, 2011 7:26 pm

barry,
I’ll respond to your comment (as you requested).
Fine…
Let’s pretend the instrument data are flawless across the entire history. AMO warming cycles alone are still enough to account for essentially all of the USA warming signal:
http://sbvor.blogspot.com/2011/10/amo-as-driver-of-climate-change.html
I submit that the AMO alone is also enough to fully explain the birth and death of the CAGW cult (and the global cooling cult which preceded it):
http://sbvor.blogspot.com/2010/12/how-amo-killed-cagw-cult.html
From the same post (above) Dr. Lindzen asserts that:
“The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work (Tsonis et al, 2007), suggests that this variability is enough to account for all climate change since the 19th Century.”

barry
October 24, 2011 7:30 pm

Theo @ here
Included in my post are links to posts on the questions that Lucy Skywalker raised, testing for UHI in the records and in the data. This work has been done under the auspices of skeptic websites and others. I have not included the literature references, as I assumed these would be disregarded out of hand. I took pains to reference only sources that have the trust, or seem to have the trust, of the skeptical milieu.
Lucy’s riposte seems to be that there is doubt whether stations have been properly screened to distinguish rural from urban. There are a number of different methods; Zeke works with three in his post on UHI in the US. At Residual Analysis, the screening methods tested are (1) GHCN classification together with population density, and (2) ‘vegetation’ metadata, from which he sourced genuinely rural (no settlement nearby) stations.
Other parameters have been tested as well – airport trends are a good example (example, example), where the results are not much different from urban, rural, or all sites.
Unsure about GHCN data? Then use the GSOD data set, which shows a similar profile to the other records – this has been compiled outside the official channels by citizen bloggers. Here is a comparison to GHCN data from 1950 (caveat – GSOD has poorer coverage before 1974, at least that was the case last time I read up on it a year ago).
My original post in this thread is aimed more at the people here who are impugning the methods and motives of the official compilers of temperature data and records. There is no call for that when there is so much material – from skeptics as well – corroborating the official records. Even Anthony Watts’ paper (Fall et al) corroborates the averaged temperature record for the US.
In reply to your comment to me, Lucy’s queries are of course valid, but it is wrong to suggest that they have not been addressed.

Richard G
October 24, 2011 8:20 pm

Garrett Curley (@ga2re2t) says:
October 24, 2011 at 3:13 am
I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?
————————————-
It’s that darned consensus thingy.

October 24, 2011 9:16 pm

Garrett Curley (@ga2re2t) (October 24, 2011 at 3:13 am)
What’s your pleasure?
A world which is warming, cooling or static?
Just tell me your preference and I’ll give you a time frame to match your desires.
But, as for human contribution…
See my previous comment.

John Andrews
October 24, 2011 10:05 pm

The RUTI vs BEST graph is interesting. I was born in San Diego in 1934 and lived there until 1973. I think it was the winter of 1972 that I saw snow in our neighborhood (Clairmont) for the very first time. That experience fits with the RUTI graph.

October 24, 2011 11:28 pm

Dave Springer says:
CO2 has very little effect over the oceans which are 71% of the planet’s surface because the ocean only gives up 20% of its solar heating through radiation and CO2 only slows down radiative cooling. CO2 DOES NOT slow down evaporative, convective, or conductive cooling. Over land surfaces, especially dry surfaces, the dominant mode of cooling is radiative. CO2 will have its maximum theoretical effect there and there-only.
Henry
This theory of yours does seem to confirm my findings, which showed little or no warming over the SH, even in the Antarctic. But how did you get to this value of 20%?
As you know, when I look carefully at what happens further inland in the SH, as on the South American continent, I find cooling, which could indeed be linked to deforestation. Here in Pretoria, or in Brisbane, there was no warming, but these are countries that have large deserts with dry climates and little greening. Most of the greening is happening in the NH, and there I find the heat being trapped.
What I have discovered so far from my (silly?) carefully chosen sample of 15 weather stations is that the overall increases of maxima, means and minima were 0.036, 0.012 and 0.004 degrees C per annum respectively over the past 35 years. So the ratio is 9:3:1. Assuming that my sample is reasonably representative of earth (70/30 balanced and +/- latitude balanced), I have to conclude that it was the maximum temps (which occur during the day) that pushed up the average temps and the minima. So either the sun shone more brightly or there were fewer clouds. Or perhaps the air simply became cleaner (less dust? are there records on that?). Amazingly, taking the NH separately, I find that the increases in maxima, means and minima were at a ratio of 1:1:1. So, somehow, more of that extra heat coming to earth naturally – I don’t know if you agree with that? – is trapped in the NH. The question is why?
My point has been that if it were an increase in CO2 or GHGs doing the warming, you would expect exactly the same results for the NH and the SH, because these gases should be distributed evenly across both hemispheres. So I must conclude that it never was the increase in CO2 that is doing it. The only logical explanation I can think of is the difference in the rate at which the earth is greening. In South America we still had massive deforestation over this period, whereas Australia and Southern Africa have large deserts. Obviously the NH has most of the landmass, and here everyone seems to be planting trees and gardens. A recent investigation by Helsinki University found that, out of a sample of 70 countries, 45 were greener than previously.
Paradoxically, the increase in greenery is partly due to human intervention, partly due to more heat becoming available (increase in maxima!), and partly due to the extra CO2 that we put into the air, which appears to be acting as a fertilizer/accelerator for growth.
For my data, see:
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming
(make a copy for yourself of the tables)
I am not convinced by your theory, Dave, because if it were true, a place like Honolulu should show little or no warming. I think you are speculating: you see little or no warming in the SH and make the link that the SH is mostly covered by oceans. But the individual trends that I picked up seem to support my theory: if it greens more, it becomes warmer.
Lucy is right. Let us guard the gates, and stick to the truth.

Agile Aspect
October 25, 2011 12:18 am

malagaview says:
October 24, 2011 at 9:57 am
I admire your faith in satellite data that is not calibrated against earth based thermometers… satellite data that cannot be independently verified / processed / c
;—————————————————————————————————————-
It’s my understanding there are 3 ways to measure the temperature, namely
(1) satellites
(2) balloons
(3) thermometers on the Earth’s surface (which usually means land, primarily in the Northern Hemisphere).
The first two agree – but the land-based thermometers don’t agree with either (1) or (2).
The different land-based indices come from essentially the same set of thermometers – the difference between the indices is primarily a result of cherry-picking the thermometers and then cooking the data.
I admire your faith.

October 25, 2011 1:10 am

I think the problem arises when new, shorter series are added to the long ones – I mean as anomalies. Optimally, you need several years of overlap, comparing the new series to the old ones, to find out the difference and to establish what the real anomaly of the new series is. Let’s say you have measured temperature somewhere since 1900; you have absolute numbers for the 1961-1990 period, and you can easily express the record in the form of anomalies. But then you add another site starting in 1985 – at a 300 m different altitude and 500 km away – so which anomaly does its first year get? If you manage to make it higher from the beginning and mix it with the long record, the whole post-1980 mix will be steeper. This is exactly what we see when comparing the long-term US stations with all of them.
I would rather build a global record from several hundred long-term records than from thousands of who-knows-how-calculated anomalies for the shorter series.
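A toy calculation (all numbers invented) of the splice effect just described: one century-long record with no underlying trend, plus a short series starting in 1985 whose anomaly baseline is set 0.4 C too high. Averaging them produces warming that exists in neither station.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2011)

# long station: no underlying trend, just year-to-year weather noise
long_anom = rng.normal(0, 0.2, years.size)

# short station: identical climate, but its anomaly baseline is 0.4 C too high
short_anom = np.where(years >= 1985, long_anom + 0.4, np.nan)

# simple station average, ignoring the baseline mismatch
combined = np.nanmean(np.vstack([long_anom, short_anom]), axis=0)

trend = np.polyfit(years, combined, 1)[0] * 100
print(f"combined trend: {trend:+.2f} C/century")  # spurious warming from the splice alone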

October 25, 2011 2:17 am

barry says: October 24, 2011 at 7:30 pm
Thanks, Barry, for all that work; I checked all the refs. The graph was the telling item: “rural” warming rising faster than “urban” warming – as I expected. So I quote my earlier words:

Glenn Barry, we’re questioning your articles’ unstated assumptions: that the basic data, including the so-called rural, are free of UHI / or that the UHI is accurately enough known – and that the corrections applied are appropriate.
The longstanding records that are the subject of this post, are very strong evidence against all your articles’ assumptions. They constitute a statistically significant result that bears out all the UHI-testing surveys I collected here. Stop avoiding this issue. Be a real scientist and face these uncomfortable facts that challenge the BEST work, the Jones-Wang UHI papers, and all the consensus work including the work in all the articles you reference.

October 25, 2011 2:40 am

Bob Kutz
“I think it a bit much to lump all of us into one group with one set of beliefs.”
You’re right and I don’t mean to. It’s just that I sometimes get the feeling that those amongst you who say you believe the Earth is warming are still comforted by data showing otherwise. But maybe I’m misinterpreting.
“But we don’t ostracize those with dissenting ideas here. We talk about those ideas.”
Talking about dissenting ideas is great. But you’re not doing your movement (if I may call it that) any favors by continuously regurgitating doubt about GW. Old ideas sometimes need to be laid to rest. Geophysicists rightly pay no attention to those who bring up pre-plate-tectonics theories.
“That is how science usually begins; a hypothesis is developed and a means of testing it is devised.”
Sure. I just find that WUWT is not a great place for putting forward a hypothesis. This site is at the forefront of climate skepticism, and I’ve come here to see things from another perspective. A basic, probably highly flawed, hypothesis made by a microbiologist is a turn-off for people like me.
phlogiston
“Who is it who should be expected to have coherent data on global temperatures? Is it the climate research community, … Or an amateur group of scientists and concerned citizens, including some climate scientists sacked for heretical views?”
The climate research community is coherent with regard to their conclusion on trends in the Earth’s temperature: it’s warming. They consider their data to be coherent. You disagree, but as you said, you’re a group of amateur scientists and concerned citizens, so maybe your definition of coherent is flawed because you lack the expertise? Having said that, this site does have a big responsibility: when the quasi-totality of climate experts, along with the large majority of professional scientists, believe that AGW is real and a likely danger to humankind, then any group that wishes to disagree and slow down or stop actions being taken to mitigate AGW needs to make sure that its argument is coherent and based on fact, not amateurish hypotheses.
“And now a straightforward demonstration by Michael Palmer that sorting for the best quality continuous data records in the USA annihilates at a stroke any sign of warming on that continent.” You say “best quality” without justification. The records were “continuous”, but not necessarily best-quality. You say it “annihilates” before even waiting to see a critical analysis of his demonstration. That’s quite a leap of faith, if I may say so. Are you sure that you’re in no way biased? A non-biased statement, in my opinion, would read more like: “A simple mathematical analysis by Michael Palmer, using continuous raw data records, shows no net warming over the continental US in the past one hundred years”. Again, ask yourself whether you are biased or not.
Cheers.
formatting test
[NOTE: Site Policy requires a valid e-mail address. Further comments will not be posted without one. -REP]

October 25, 2011 3:34 am

Garrett Curley says:
“I just find that WUWT is not a great place for putting forward a hypothesis.”
Well then, let’s fix that. Here is my testable hypothesis:
The global increase in CO2 is harmless, and beneficial to the biosphere. Up to the maximum UN/IPCC’s projected levels, CO2 is a net benefit to the environment. More CO2 is better.
Falsify that hypothesis, if you can.

October 25, 2011 5:06 am

Garret Curley says: “The climate research community is coherent with regard to their conclusion on trends in the Earth’s temperature: it’s warming. They consider their data to be coherent…”
Henry@Garret
Most sceptics do not doubt that it is warming. The question remains: what is causing it? Understand that the allegation is that, due to increased greenhouse gases in the atmosphere, heat is trapped that cannot escape from earth. So if an increase in greenhouse gases were to blame for any warming, it should be the minimum temperatures (which occur during the night) that show the increase of modern warming. In that case, the observed trend should be that minimum temperatures rise faster than maxima and mean temperatures, pushing up the average temperature.
So don’t you think that any set of data displaying the increase in average temperatures is pretty useless unless it is shown TOGETHER with the development of minima and maxima?
So far, in my sample, the trend is the opposite – it is the maxima (happening during the day) that push up the average temps.
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

peetee
October 25, 2011 6:06 am

Michael Palmer says: “peetee, this IS the FINAL release, and the paper is peer-reviewed right here. That means you and your peers!
“But wait”, I hear you saying, “you can’t be serious! My troll posts count for peer review?” To which I reply: “Why yes, for sure! You have no idea what real academic peer review can be like.”
Glad we could clear that up.”
Mr. Palmer, respectfully: as vaunted as the reach of WUWT is, surely you should seek a legitimate scientific peer-review route… no? Particularly in regard to Anthony’s expressed reservations concerning the BEST (lack of) peer review, why should one give your findings any more consideration, in that they constitute nothing more than ‘blog science’? As you appear to be well published in your non-climate-related discipline, that would suggest you would equally seek legitimate peer review for this “WUWT paper”. That you consider and label it a FINAL release seems contradictory… yes?

October 25, 2011 6:13 am

[SNIP: You were told to use a legitimate e-mail. Please do so. -REP]

October 25, 2011 6:20 am

peetee says:
October 25, 2011 at 6:06 am
“surely you should seek a legitimate scientific peer-review route… no?”

No. It is not worth my time. Anyone who finds it worth their time is welcome to the data and the code.

beng
October 25, 2011 6:32 am

****
steven mosher says:
October 24, 2011 at 1:22 pm
TOBS is an empirically derived adjustment. you can read karls paper or the subsequent verification of it.
****
Yes, as I said, TOBS is legit, when done properly.
*****
Arguing about TOBS is a waste of time. It needs to be applied in ANY analysis that does simple averages. Otherwise you will get the wrong answer. demonstrably wrong.
*****
Yes – read my first comment. But the “averaging method” you mention is a waste of time. The only method that will convince me is one where the metadata are used for each individual station and the proper correction is applied to that station. Yeah, I know the amount of data (and some is certainly missing) makes this tedious and difficult, but nothing less will suffice IMO, especially for such a large correction compared to the changes we are trying to detect.

October 25, 2011 8:18 am

Henry@Garret
Garret, your comment got wiped; this does not happen a lot here.
You must give them a valid e-mail address.
Anyway, I read it on my BBerry, and I understand from it that you don’t know how the greenhouse effect works.
Quote from Wikipedia (on the interpretation of the greenhouse effect);
“The Earth’s surface and the clouds absorb visible and invisible radiation from the sun and re-emit much of the energy as infrared back to the atmosphere. Certain substances in the atmosphere, chiefly cloud droplets and water vapor, but also carbon dioxide, methane, nitrous oxide, sulfur hexafluoride, and chlorofluorocarbons, absorb this infrared, and re-radiate it in all directions including back to Earth.”
If you want a bit more understanding of the problem, you can go here:
http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
(I have tried to keep it as simple as possible, and have asked for comments from my peers here if there is anything wrong with this explanation)
Obviously, this being the case, it follows that if you say the extra warming we experience is not natural, we must assume that the flow of warmth from the sun is or was constant – in which case maxima should not be rising. If you say the warming is due to an increased greenhouse effect, it should be the minimum temperatures that are rising.
My findings thus far show exactly the opposite, as also explained here:
http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/#comment-777081
I am still hoping that Dave Springer will give me a reply on that, because I don’t know where or how he got that 20% number.

Tilo Reber
October 25, 2011 8:29 am

Great work, Michael. What you have done here answers some questions that I have always had. I’ve always wondered what rural, long-term, unadjusted records would look like. I didn’t expect to see a negative trend, but I did expect to see a trend that was less than what our major surface temperature sources were showing.
I do have a question. In figure 1, the absolute rural anomalies look to be significantly lower than the non-rural ones. In figure 2 they appear to be about the same. Is there a difference in baselining?

barry
October 25, 2011 8:39 am

Lucy @ http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/#comment-777203
There is a much simpler way of doing this. The UAH satellite record has a trend for global land of 0.18C/decade from 1979. The trend is very similar for the surface records (land only). The satellite records are not impacted by UHI. Urbanisation rates have increased over time to the present, in line with population. If UHI were significantly impacting surface temp trends, they should diverge significantly from the satellite temp trends. They do not.
UAH also have a trend for the US lower 48 of 0.2C per decade since 1979. This record is unaffected by UHI. I’ll leave it to you to plot the linear trend for US stations from 1979.
Don’t know if you noticed, but I posted a link to an analysis of 60 global rural stations with 90 year records (similar to Michael’s effort, but global in extent).
http://moyhu.blogspot.com/2010/05/just-60-stations.html
Also, the links I posted discussed their methods for determining rural/urban stations. It’s not as if they didn’t make the effort – they all outline their methods! – so I’m not sure why you quote yourself saying that they are making “unstated assumptions.” Are you sure you actually read them?

October 25, 2011 8:51 am

barry says:
“Yes, it would appear the good doctor [Miskolczi] does not believe we’ve experienced global ice ages over the past million years.”
OK, barry, quote verbatim where Dr Miskolczi states that.
barry continues:
“I’d rather lick my way to the centre of the earth than attempt to make points through your endlessly shifting goal posts.”
As wayne points out [“Smokey: one thing you are is consistent (with a capital ‘C’)”], I am entirely consistent in my comments. So start licking, barry.
And I note that the only non-response to my hypothesis that CO2 is harmless and beneficial is the sound of crickets. As the saying goes, put up or shut up.

Brian
October 25, 2011 8:53 am

@Smokey et al
Even if you reject the 97%, other polls have surveyed scientists more generally and still found that the overwhelming majority of scientists accept AGW. You’re right, science is not done by consensus, and they could be wrong, but the question is: what is a layperson justified in believing? It’s possible those (let’s say) 80% of scientists could be wrong, but it’s clearly more likely that the 20% are wrong. I would like an example in modern science of when the 20% was right and the 80% was wrong after decades of research.
A common argument seems to be that government money introduces a bias. Really? More of a bias than the fossil fuel lobbies which are overtly funding “skeptics”?
Sure, models aren’t as good as experimentation when that’s an option (which it isn’t in this case), but we run nuclear reactors using models, we built an atomic bomb that worked on the first try, we calculate plume dispersion from pollution, and we determine how proteins fold, how fluids flow, etc. All of these models seem to work for the most part; why is this different?
When I say skeptics are “crazy” I don’t mean schizo; I mean so ruled by cognitive biases that the conclusions they reach, and the certainty with which they reach them, are demonstrably illogical. As you move from GW to AGW to CAGW, the certainty of the science decreases, as does the irrationality of disbelief.
Scientists think AGW will probably be bad, partially because we’ve built society around how the climate is right now. Is it likely to be catastrophic? No. The Gore/Greenpeace “alarmists”, as you call them, are out of step with the science on this. The point is that if AGW is “probably” bad and even “possibly” terrible, it makes sense to start taking modest steps to address it, using best-estimate cost-benefit analyses. If 10 years from now we realize it’s not warming as fast as we thought, that it’s a lot harder to cut back on GHG emissions than we thought, and that mitigation is likely to be cheaper than avoidance, fine. Our policies can evolve. Scientific knowledge evolves, and our opinions and actions should be based on the best available knowledge at the time.

October 25, 2011 8:57 am

barry sez:
“UAH satellite record has a trend for global land of 0.18C/decade from 1979. The trend is very similar for the surface records (land only).”
1) Peer reviewed science begs to differ. Quoting Dr. Roger Pielke Sr.:
“Our [peer reviewed] paper… has clearly documented an estimated warm bias of about 30% in the IPCC reported surface temperature trends.”
http://sbvor.blogspot.com/2009/09/warm-bias-of-about-30-in-ipcc-reported.html
2) In the USA, simply insist that any trend analysis date range start and stop at similar points in the AMO cycle and the warming signal essentially disappears:
http://sbvor.blogspot.com/2011/10/amo-as-driver-of-climate-change.html

Tilo Reber
October 25, 2011 9:03 am

Mosher: “Arguing about TOBS is a waste of time. It needs to be applied in ANY analysis that does simple averages. Otherwise you will get the wrong answer. demonstrably wrong.”
Frankly, Mosher, I don’t think that you can produce any physical evidence that it would produce the wrong answer. Here is what I get from Karl’s paper as the physical explanation for why we need TOBS.
Karl shows statistics that many people moved their station reading time from the late afternoon to the early morning. Why does this matter. The stations, when read, contain a maximum temperature and a minimum temperature. Maximum temperatures are likely to happen some time between noon and four. So when you read the thermometer at 7 AM you are highly likely to get the high from the previous day. When taking a monthly average, then, that means that your fist day of the month actually has the data from the previous month. Every day that follows will have that one day offset for it’s high reading. Now let’s say that you are taking readings in April. That means that your first sample was actually from March. And each day will have one sample that represents an earlier day. In April, temperatures are increasing rapidly. And since all your days readings are offset by half a day in the past, it means that you will have average temperatures for April that are biased cool. So, in order to get rid of that bias, you have to apply a warming correction.
But here’s the catch. That offset continues month after month. When you get to fall, you are taking your first sample of the month from the previous month and that sample is too warm. So you get a too cool offset in the spring and a too warm offset in the fall. This means that while you have months that are biased in themselves, over the course of a year, there should be no bias. The warm biased months and the cold biased months should cancel each other out.
Bottom line is that it appears to me that there should be no TOBS over the long term. This means that your individual months will be off a little, but there should be no bias on a yearly basis.
So if anyone out there can explain to me, physically, why a long term TOBS bias should exist, please do so; because the one that Karl uses for his temperature record is huge. And it seems to me that it should be zero.
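One way to test this without arguing is a toy simulation (all amplitudes and noise levels below are invented, not real station data). It suggests the persistent bias does not come from the calendar offset, which does largely cancel over a year as described above, but from double-counting: a reset near the afternoon maximum lets one hot afternoon supply the “max” for two observational days, and a morning reset does the same for cold mornings.

import numpy as np

rng = np.random.default_rng(0)
days, H = 365, 24
hours = np.arange(days * H)

# synthetic hourly temperatures: a diurnal cycle peaking mid-afternoon,
# plus persistent day-to-day weather swings (all numbers invented)
diurnal = 5 * np.sin(2 * np.pi * ((hours % H) - 9) / H)  # peak ~3 pm, trough ~3 am
weather = np.repeat(rng.normal(0, 3, days), H)           # one anomaly per day
temp = 15 + diurnal + weather

def minmax_mean(reset_hour):
    # a min-max thermometer read and reset once a day at reset_hour:
    # each "observational day" is the 24 h window since the previous reset
    w = temp[reset_hour:reset_hour + (days - 1) * H].reshape(days - 1, H)
    return ((w.max(axis=1) + w.min(axis=1)) / 2).mean()

print(f"true mean     : {temp.mean():6.2f}")
print(f"7 am observer : {minmax_mean(7):6.2f}  (cool bias)")
print(f"5 pm observer : {minmax_mean(17):6.2f}  (warm bias)")

Switching a network from afternoon to morning observers would therefore flip a warm bias into a cool one, which is the kind of long-term artifact a TOBS adjustment is meant to remove; whether Karl’s correction has the right magnitude is a separate question.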

October 25, 2011 9:09 am

In my previous comment, the more complete (and more to the point) quote from Dr. Roger Pielke Sr. would have been:
“Our [peer reviewed] paper… has clearly documented an estimated warm bias of about 30% in the IPCC reported surface temperature trends… Moreover, despite the claim in the IPCC (2007) report, the tropospheric and surface temperature trends have NOT reconciled”

Gail Combs
October 25, 2011 9:11 am

crosspatch says:
October 24, 2011 at 12:14 am
In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out
At some point they are going to run out of tricks to use to create a warming signal.
________________________________________
The Hoaxsters do not care.
By then all of us old fogies will be dead, and the youngsters will be too badly educated to think their way out of a paper bag. More importantly, the youngsters will all be too busy working three jobs, trying to put food on the table and heat their cold-water flats, to care.

Gail Combs
October 25, 2011 9:25 am

Frank Lansner says:
October 24, 2011 at 1:05 am
I just wanted to say thanks for your post at Joanne Nova’s (and all the work) http://joannenova.com.au/2011/10/messages-from-the-global-raw-rural-data-warnings-gotchas-and-tree-ring-divergence-explained/#comment-625436
It is a real eye-opener and the type of work I would EXPECT from a real scientist.

October 25, 2011 9:50 am

Brian says:
“Even if you reject the 97%, other polls have surveyed scientists more generally and still found the overwhelming majority of scientists accept AGW.”
Brian’s 97% number is completely bogus. It was based on a questionable survey of 77 respondents to a vaguely worded poll. Contrast that truly puny number with the more than thirty thousand co-signers of the OISM Petition, all of whom reject the alarmist position. And all the alarmist counter-OISM petition signers put together don’t total nearly the number of OISM petition co-signers. The large majority of engineers and scientists reject – in writing – the false belief that CO2 is a problem.
Brian continues:
“A common argument seems to be that government money introduces a bias. Really? More of a bias than the fossil fuel lobbies which are overtly funding ‘skeptics’?”
Government money absolutely introduces a bias. And companies give much more money to watermelons than to scientific skeptics. Companies are simply buying plausible deniability and protection from eco-extortionists, the same way that Jesse Jackson extorts loot from companies so they won’t be labeled “racist.” In addition to the lopsided amounts of payola that companies give to alarmists compared to skeptics, the U.S. government has handed over $99 billion in payola to “study climate change”, almost all of it to climate alarmists and their organizations. Per a report from the Congressional Budget Office:
http://www.cbo.gov/ftpdocs/112xx/doc11224/03-26-ClimateChange.pdf
So far, Brian has been wrong about everything. That’s what happens when someone gets their “facts” from propaganda blogs like Skeptical Pseudo-Science.

Tilo Reber
October 25, 2011 9:52 am

By the way, here is Karl’s paper on TOB:
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/karl-etal1986.pdf
I can understand from this paper why there are monthly biases. But those biases should be cool if someone changes their reading time in the spring, and hot if they change it in the fall. I can see no reason for biases that should persist for longer than a year. If anyone can find a physical explanation in this paper for why biases should persist for more than a year, please explain it to me.

Gail Combs
October 25, 2011 10:04 am

Garrett Curley (@ga2re2t) says:
October 24, 2011 at 3:13 am
“I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?…..”
_______________________________________________________
It depends on the context. The temperature is cyclical, so if you pick the right time period it is warming, in another time period it is flat, and in a third it is cooling. This is why BEST looked at 1956 to the end of the century. This is why Hansen redid his graphs and Jones refused to honor Freedom of Information requests and, when cornered, “lost” the data. This is why Mann is fighting tooth and nail against FOI requests, and why skeptics greeted the Climategate e-mails with glee.
SCIENCE is all about validity, repeatability and consistency. See http://www.experiment-resources.com/definition-of-reliability.html
This is how the selection-of-context game can be played (a toy calculation follows the list below):
1970 to 1999 – slightly warming
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_current.gif
http://wattsupwiththat.files.wordpress.com/2011/01/gw-us-1999-2011-hansen.gif
21st century – pretty much flat
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_current.gif
last 6000-8000 years – cooling
http://www.biocab.org/holocene.html
last 0.03 million years – sharply warming
http://img836.imageshack.us/img836/9484/lasticeageglant.png
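To see how much the chosen window matters, here is a toy series (invented numbers: a roughly 60-year cycle plus noise, with no underlying trend at all) fitted over three different periods:

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 2011)
# cyclical "temperature" with zero long-term trend
temp = 0.2 * np.sin(2 * np.pi * (years - 1910) / 60) + rng.normal(0, 0.1, years.size)

def trend(y0, y1):
    # linear trend over the chosen window, in C per century
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], temp[m], 1)[0] * 100

for y0, y1 in [(1910, 1940), (1940, 1970), (1970, 2000)]:
    print(f"{y0}-{y1}: {trend(y0, y1):+.2f} C/century")

# the same trendless series reads as warming, cooling, or warming again,
# depending entirely on the window chosen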

October 25, 2011 10:09 am

To HenryP
I was logged in to this site for posting comments using my Twitter account, so I thought I was okay. When logged in through the Twitter authorization there is no option to add an e-mail address. They should remove the Twitter/Facebook buttons at the bottom of the “Reply” section.
I do understand how the greenhouse effect works, but you’ve still got it wrong. Nothing in the greenhouse effect prevents max temps from rising. The average temp rises, bringing with it both the min and max temps, though not necessarily in sync locally because of local climate patterns. It’s got to do with temperature equilibrium in the upper atmosphere, but we’re not going to go into all that here.

Gail Combs
October 25, 2011 10:18 am

Steve C says:
October 24, 2011 at 3:21 am
There’s another graph I’d like to see…
__________________________________________
You might want to look at Frank’s work over at Joanne Nova’s. He slices and dices the data in a much different way than we normally see.
http://joannenova.com.au/2011/10/messages-from-the-global-raw-rural-data-warnings-gotchas-and-tree-ring-divergence-explained/#comment-625436

Gail Combs
October 25, 2011 10:41 am

Rhys Jaggar says:
October 24, 2011 at 4:45 am
I think the debate is beginning to move toward the position where we can see that the result obtained in terms of temperature trends depends on the source data. SURPRISE, SURPRISE!
It may in fact be the case that about 50 independent analyses should be done to show what happens depending on what data you use. This one is just for the USA, which is a large continental land mass bounded by the world’s two largest oceans to the east and west, a warm sea to the south and a major land mass to the north, with a smaller land mass in the SW.
You might find different results if you studied Russia: a large continental land mass surrounded by ice/ocean to the north and a continental land mass to the south……
_____________________________________________
The Russians already did the study and cried FOUL!
“…the Russians confirm that UK climate scientists manipulated data to exaggerate global warming…” http://blogs.telegraph.co.uk/news/jamesdelingpole/100020126/climategate-goes-serial-now-the-russians-confirm-that-uk-climate-scientists-manipulated-data-to-exaggerate-global-warming/
CHINA (includes info going back 100 years)
Seems to echo Frank Lansner’s findings: “….annual mean temperature rises occurred in most of the stations along the South and East China Sea….”
Seasonal and regional temperature changes in China over the 50 year period 1951–2000: http://www.springerlink.com/content/r178t075r24625u6/

October 25, 2011 10:45 am

Barry to me: …Also, the links I posted discussed their methods for determining rural/urban stations. It’s not as if they didn’t make the effort – they all outline their methods! – so I’m not sure why you quote yourself saying that they are making “unstated assumptions.” Are you sure you actually read them?
Yes, I read enough to see that all of them take the stations’ categorizations as urban or rural to be trustworthy, without dealing with the rogue strong UHI effect in some “rural” stations that my UHI page makes very clear. Also, you are still avoiding the import of the longstanding stations’ statistically significant message.
All of which I’ve already said. Cut to the essentials and don’t get distracted.
It’s a pity you’ve all done so much work on inadequate premises. There are real problems that deserve our time, and we need real science again.

Gail Combs
October 25, 2011 10:56 am

Bigdinny says:
October 24, 2011 at 4:48 am
I have been lurking here and at several other sites for over a year now, trying to make sense of all this from both sides of the fence. In answer to the simple question “Is the earth’s temperature rising?” you get, depending upon whom you ask, the differing responses YES!, NO!, DEPENDS!, and IT WAS, BUT NO LONGER IS! I think I have learned a great deal. Of one thing I am now fairly certain: Nobody knows.
______________________________________
I Think he’s Got it!
You might like to read AJ Strata’s info. He looks at the error bars on the data – something that is always left out by the mainstream media and Al Gore as they flash alarming charts with temperatures in 0.001-degree increments.
http://strata-sphere.com/blog/index.php/archives/11420

Brian
October 25, 2011 11:14 am

Smokey says “The large majority of engineers and scientists reject – in writing – the false belief that CO2 is a problem.”
That’s an incredible viewpoint to hold, given how unsupported it is by the evidence. The petition says signers must have a BS, MS or PhD in science, engineering, or related disciplines. Millions of Americans fit that definition; 30,000 is hardly a majority. Only 39 of the signers are climatologists.
This is compared to multiple polls that find scientists, especially those in related fields, overwhelmingly accept the basics of ACC. Analyses have been done on the published literature, which also overwhelmingly supports ACC. On top of that, essentially every major scientific organization in related fields accepts the consensus. Again, the wiki article gives a good summary.
http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change#cite_note-AQAonAAPG-1
You’re seriously going to claim that every major scientific organization is saying there is a consensus when “the large majority” of their members disagree? Again, when has this EVER happened in modern science? The conspiracy you’re claiming exists would have to be ginormous.
P.S. I like the “wrong” clip 🙂

October 25, 2011 11:32 am

Garret says
http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/#comment-777494
Garret, what you give me is a red herring, claiming that the problem lies in some upper sphere allowing maxima to rise faster than minima, “as a result of the greenhouse effect”.
So now we are back to the beginning.
http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/#comment-777306

October 25, 2011 11:42 am

Brian says:
“30,000 is hardly a majority”
It’s a huge majority when you total up all the signatures on both sides of the debate. And since you’re presuming to know how the rank-and-file of all those organizations would vote if asked, “Do human CO2 emissions have a major or a minor effect on global temperatures,” then you’re putting yourself out as a mind reader. Because they’ve never been asked. The only honest way to ask them is with a secret ballot, to get an honest answer free of the threat of retribution. When has that ever happened? Name one organization that lets its members speak for themselves.
Prof Richard Lindzen shows how the spokespersons of those organizations have been targeted and co-opted, and he shows why the dues-paying rank-and-file members are not asked those questions.
It’s no different than if I presumed to speak for you, without allowing you input. The CAGW scam exists only because of the dishonesty and censorship of its proponents.
• • •
Lucy,
barry can’t answer you right now, he’s too busy licking his way to the center of the earth.☺

Brian
October 25, 2011 12:30 pm

Smokey says And since you’re presuming to know how the rank-and-file of all those organizations would vote if asked, “Do human CO2 emissions have a major or a minor effect on global temperatures,”
Again, numerous polls have demonstrated what their position is.
“Name one organization that lets its members speak for themselves.”
Which one doesn’t? The organizations are made up of the people in the organization. Often you are elected by members of the organization. Anti-science conspiracy theorists seem to think scientists belong to some kind of secret society. They don’t; it’s an open community. The sources you reference don’t support your argument.
Scientists don’t reach consensus through online petition. They publish a body of research which speaks for itself. Surveys of the research are performed. Scientific organizations develop positions based on this body of research. That’s how it works, and you don’t seem to have a problem with it except when the conclusion they reach conflicts with your pre-conceived notions. Again, you’re claiming a massive conspiracy. When has this ever happened?
You don’t seem to be persuaded by facts. If everyone who has a degree in something even remotely science/engineering related signed a petition, and it ended up with a million signatures, would that convince you? What would it take? Anthony stated what it would take to convince him, and then quickly recanted when it happened. How about you?

Jeff D
October 25, 2011 1:04 pm

Brian says:
October 25, 2011 at 12:30 pm
Which one doesn’t? The organizations are made up of the people in the organization.
Read the following closely and you will see just how many scientists it takes to make a consensus. The actual # of people is surprisingly low. http://www.pnas.org/content/107/27/12107.full This is the study that they hold as 97%.
As for the groups: ask them for the survey they sent to the members for their stance on CAGW. I took the time to ask two groups. Seems they didn’t poll their members but took a vote of the board for a determination. A Nobel prize winner removed himself from one of these groups for just this reason. American Scientific Association, I think. Again, there was an article posted here on WUWT following just this. If I have the name of the association wrong someone please correct me.
You are close to seeing the light. Dig just a bit deeper. The real truth will set your mind free. A simple test for your huge body of work:
Why has the temperature been falling for the last 10 years while CO2 has continued to rise?
Why have sea levels been falling for the last year while CO2 has continued to rise?
Why has the Arctic summer ice melt stabilized while CO2 has continued to rise?
Why has the Antarctic gained ice while the level of CO2 has continued to rise?
The above are all tenets of AGW predicted by the “models”. CO2 has continued to rise, and yet none of these predictions has been demonstrated.
Before stating any other group think BS in response please answer the above questions. If you can’t it might be time to review your belief system.
Trust but verify… I did.

Bob Kutz
October 25, 2011 1:57 pm

In Re; Jeff D says:
October 25, 2011 at 1:04 pm
. . . . If I have the name of the association wrong someone please correct me.
Anthony Watts, Sept. 14:
” Dr. Ivar Giaever resigned as a Fellow from the American Physical Society (APS) on September 13, 2011 in disgust over the group’s promotion of man-made global warming fears.”

barry
October 25, 2011 2:06 pm

SBVOR, I have replied to you about your AMO theory thrice at your website, not wishing to go off-topic or have the subject changed on me here, but the posts aren’t showing up after making it through the posting bit. I’ll repost here, and maybe you can drag the last one out of the spam filter?
From your article:
“If one looked for a single factor explanation for the rise and fall of the CAGW religious cult (and the Global Cooling cult which preceded it), the AMO (Atlantic Multidecadal Oscillation) would be a very good candidate.
Click here (1) and here (2) for published science supporting this view.
(1) The first reference attributes multi-decadal variability to the Pacific Decadal Oscillation, NOT the AMO. It does not in any way corroborate your assertion.
(2) The second reference links AMO to the Asian Monsoon cycle, NOT global temperatures.
Furthermore, the chart you say is ‘created’ by David Evans actually comes from Syun-Ichi Akasofu, whose paper you reference in (1), and which includes that chart. David Evans, if you follow your own links to him, actually credits Akasofu with the chart, and links to a longer paper of Akasofu’s,
http://people.iarc.uaf.edu/~sakasofu/pdf/two_natural_components_recent_climate_change.pdf
which, again, displays that chart, and attributes multi-decadal temperature variability to the Pacific Decadal Oscillation, NOT the Atlantic Multidecadal Oscillation.
You have messed up your references here. They do not support your thesis.
If you wish to take up the matter, let’s do it at your web site.

David
October 25, 2011 2:31 pm

Brian, neither peer review nor consensus is a refuge for the CAGW position. The Oregon petition was signed by scientists who took the time to look at the evidence and found it lacking. You wish to include hundreds of studies by environmental scientists who take a “what if” precautionary assumption about the climate science and feedbacks, and from there predict disasters which have not been realized. The Wegman report showed huge problems within peer review and the relatively small circle of climate scientists predicting disaster. Suppression of evidence against CAGW can be found here: http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=865DBE39-802A-23AD-4949-EE9098538277
My sources on AGW are worldwide (not outside of mainstream science) and from far more PhD scientists than are represented by the IPCC, which in the end (those who write the summaries, at least) is a political body; many of the valid criticisms leveled at religion apply equally to this UN organization. I could quote, from among the peer-reviewed literature, papers by Lindzen, Pielke, Christy, Spencer, Eschenbach, Scafetta, Myhre, Akasofu, Douglass, McIntyre and many others, all of whom have robustly challenged the dogma of a few cloistered warmists. These are not “big oil shills” as some try to claim, nor are they nutters. They are all eminent climate scientists who are showing that observations do not support the hypothesis that CO2 is significantly warming the planet, a hypothesis predicated on the false premise that historical climate remained fixed for millennia, in contradiction of overwhelming evidence that temperatures were warmer than today a thousand years ago. I could point to 100 more papers showing that the medieval warm period was real, global, and warmer than today, a mountain of evidence against the warmists’ broken hockey stick. Additionally, these scientists are unafraid to reveal their methodology and data, unlike many deacons high in the AGW hierarchy. The fact that climate alarmists reject the scientific method means that they are political advocates first, and mendacious scientists second.
BTW, the term “climate scientist” is a misnomer, as most climate scientists have degrees in another field. There is far more than the Oregon petition…
http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=2674e64f-802a-23ad-490b-bd9faf4dcdb7
http://www.appinsys.com/GlobalWarming/RS_Arctic.htm
http://friendsofscience.org/assets/documents/Veizer-Shaviv,%20Celestial%20Drive.pdf
http://www.phys.huji.ac.il/~shaviv/ClimateDebate/RahmstorfDebate.pdf
http://www.phys.huji.ac.il/~shaviv/ClimateDebate/RahmReplyReply.pdf
http://www.greenworldtrust.org.uk/Science/Social/Testimony.htm

Brian
October 25, 2011 3:41 pm

D regarding your list of ?s
The data shows that temps and sea levels are rising, and ice is melting. http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html
It’s a stochastic system; you don’t expect every single year to be warmer than the last. 1998 was unusually warm while 2000-2008 was cool (relative to predictions). I’ll admit that if 2010-2020 stays cool, scientists will have to re-evaluate the presumed rate of warming, but I doubt you’ll commit to believing in AGW if temperatures are higher in 2010-2020.
You’re really going to link me to James “the ozone layer is fine” Inhofe? Ad Hominem attacks aside, at some point a person loses credibility.
I’m still waiting for an example for when a supposed minority of scientists have risen to the top of every scientific organization and declared their minority opinion to be the consensus. OTOH I have plenty of examples for groups denying scientific consensus (evolution, smoking/cancer, ozone layer etc.)
Also still waiting for a way your position could be shown to be incorrect. For ACC proponents that’s easy. Temperatures could stop rising. As [SNIP: Policy Violation. -REP] has shown us, even “skeptical” scientists admit the earth is warming.
If you’re willing to admit that the earth is warming, and humans are causing it, we can have a more productive discussion about the likelihood that it will be bad, and what to do about it. But I doubt you’ll admit there’s any chance you’re wrong.

barry
October 25, 2011 3:48 pm

Lucy @ here

Yes, I read enough…

You read ‘enough’…? You skimmed. Ok.

…to see that all are taking the stations’ categorizations of urban and rural as trustworthy…

Rubbish. Zeke Hausfather applies 3 different methods to determine rural/urban, precisely because he knows that it is difficult to establish rural and urban stations. Joseph at Residual Analysis also uses 3 different methods. There is no assumption that their methods are perfect, otherwise they wouldn’t try to distinguish urban and rural in different ways. They are well aware of how problematic it is. Consider the opening remarks from Zeke:

“Teasing out the Urban Heat Island effect can be a fiendishly difficult task. There are enough confounding factors that it is dangerously easy to simply pick a measure that shows what you want to show (be it a negligible or huge UHI) without including the nuances necessary.”

No assumptions there.

Also you are still avoiding the import of the longstanding stations’ statistically significant message.

I’m not. I lack the skill to determine if Michael Palmer’s methodology (post-selection) is sound, and wouldn’t try to speak to that. I did notice Glenn Tamblyn’s comments (who does have the skill), which articulated strong reservations about Michael’s number crunching. Michael has not given a substantive reply to that. Is there a reason why I should unskeptically accept Michael’s results?
Nor am I avoiding the import of long-standing records, as I linked for you in the last post a global estimate based on rural stations with a 90 year unbroken record. When I read your replies addressed to me, it’s as if you’re describing someone else’s posts.

It’s a pity you’ve all done so much work on inadequate premises.

I have done none of this work, and am unable to do so. I merely read about it, try to understand it (being honest about my limitations) and share the information.

There are real problems that deserve our time, and we need real science again.

I read your page on UHI, and, apart from the McKitrick/Michaels paper, note that many of the supporting links provide much in the way of criticism, commentary and anecdotal evidence, and little in the way of ‘science’ (i.e. statistical analysis of a large number of stations), or even original work. Most of the links do not even bother to test any method for rural/urban stations – which is a criticism you have repeatedly put out here, including in reply to me. For example, you link to the ‘6th grader’ post at WUWT, which I well remember. But the choice for rural is that determined by GISS – the father-son team go to GISS and click on rural sites. By your own standard, this analysis assumes “the stations’ categorizations of urban and rural as trustworthy, without dealing with the rogue strong UHI effect in some “rural” stations.” Same goes for the John Daly site, which doesn’t really lay out any classification method, other than “few large cities”, and so could easily be a cherry-pick of stations showing cooling (and of course there will be a subset of those even in a warming world). The page on New Zealand isn’t even about UHI.
Spencer’s work is interesting, but he expects to see a spurious warming signal at airports. These locations suffer no uncertainty as to their classification, and they show a cooler trend than all stations or non-airport stations. That is a remarkable result.
It seems to me that you uncritically accept whatever comports with the notion of fatally problematic official temperature records, and reject with no real discussion anything that corroborates them.
My sources try three independent methods to establish rural/urban and the work is original. I am not saying that this means UHI is properly accounted for. The satellite record (which you ignored from my last post) also corroborates the land-surface instrumental records. Interestingly, Fall et al (Anthony Watts’ peer-reviewed paper) corroborates the US average temperature record, after weeding out stations compromised by station bias. Do you doubt that this endeavour has tried to winnow out sites affected by UHI?
I would not say with absolute conviction that the UHI issue is squared away (I am not qualified to judge) but I would note that the bulk of formal and semi-popular literature that carefully examines the issue tends to discover that the effect is minimal WRT the instrumental records.
I find your web page unconvincing, as it is a hotchpotch of links on a general theme, only a couple of which (M&M and Spencer) satisfy the strictures you have laid out here, and one of which isn’t even about UHI.
You said that you “needed to quantify” certain claims, but I could nowhere find original work of yours processing any data, just links to other work. Could you provide a link? I will do my best to follow, and tip my hat to you for rolling up your sleeves and doing your own number-crunching.
If you wouldn’t mind, I’d appreciate a response to the arguments I made re the satellite record and the import of Fall et al on average US temps. I will read M&M again, as well as the Spencer, and look for competent analysis of these, if any such exists.

Jeff D
October 25, 2011 4:00 pm

D regarding your list of ?s
The data shows that temps and sea levels are rising, and ice is melting. http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html
Ok, maybe I was wrong about hope. You keep citing the biggest group of idiots on the planet and won’t take the time to “verify” for yourself. Take a visit to NOAA’s site and look up the charts for sea level rise for the last year, then come back and admit you don’t have a clue and need our help. After that, visit the sea ice page and see that the current melt is in trend with the last 10 years as well. You still have not addressed 3 and 4.
Have a good day, hope you have learned something.

October 25, 2011 5:01 pm

barry (October 25, 2011 at 2:06 pm)
Blogger.com automatically decided your comment aimed at my post titled “How the AMO Killed the CAGW cult” belonged in the spam bin. I agree. It will stay there. Here’s why:
1) You falsely allege:
“The first reference attributes multi-decadal variability to the Pacific Decadal Oscillation, NOT the AMO. It does not in any way corroborate your assertion.”
The study in question notes a “multidecadal oscillation” in global surface temperatures without seeking to explain the cause. So, no, it does not directly attribute the oscillation to the AMO. But it also does NOT attribute this MULTIdecadal oscillation to the Pacific DECADAL oscillation. The “support” which I referred to rests in the fact that the dates and durations of the multidecadal temperature oscillations Akasofu describes line up very, VERY well with the AMO, far more so than the PDO.
A) Quoting the cited study:
“It is not the purpose of this section to discuss the multi-decadal changes in detail. The sole purpose is to explain why the warming has halted after 2000…
the multi-decadal oscillation peaked in 1940 [I contend the AMO peaked in 1933], and the temperature actually decreased from the level of the linear increase from 1940 to 1975 [I contend the AMO bottomed out in 1976] and then increased after 1975 to 2000 [I contend the AMO peaked again in 1998]. Thus, it may be speculated that the situation in 2000 is similar to that in 1940 [ditto for the AMO], so that it is predicted that the temperature change will be flat or in a slightly declining trend during the next 30 years or so”

So, although the study does not directly attribute this multidecadal oscillation in global temperatures to the AMO, the dates and the durations of the oscillations all line up very closely with the AMO. Therein rests the “support” I referred to.
B) The study does NOT attribute “multi-decadal variability to the Pacific Decadal Oscillation”. Do you really think a MULTIdecadal oscillation could be attributed to a DECADAL oscillation? To the contrary, the study says:
“It is interesting to note a striking resemblance of changes between PDO and the multi-decadal oscillation”
2) You falsely allege:
“The second reference links AMO to the Asian Monsoon cycle, NOT global temperatures.”
Quoting the cited study:
“Based on the coherency in decadal variability between the ice core data and the observed snow cover over the Tibetan Plateau during recent decades, we used three available ice core data to characterize the snow cover variability of the last 200 years. The analysis suggests that the snow cover exhibits significant decadal variability with major shifts around 1840s, 1880s, 1920s, and 1960s. Its variations are found to be closely correlated with the Atlantic Multidecadal Oscillation: Cool/warm phases coincide with large/small snow cover.”
Which part of “Cool/warm phases” does not register with you as temperature?
3) Congratulations – You got me for getting a single attribution wrong (Akasofu vs. Evans). You can pat yourself on the back while I correct that.

October 25, 2011 5:19 pm

barry (October 25, 2011 at 2:06 pm),
To address the nits you picked, I changed the wording in my post from “supporting this view” to “consistent with this view” (which, as you could have deduced from the very next sentence, was what I intended to convey in the first place).

barry
October 25, 2011 5:48 pm

SBVOR – both Akasofu’s papers which include that chart specifically discuss PDO in reference to multidecadal temperature oscillations. In neither of these papers is the AMO mentioned. In both, the PDO is discussed as being the cause of multi-decadal variability (PDO is a multidecadal oscillation – do you not know that?). Akasofu is not attributing the variability to the AMO, but to the PDO. It’s that simple. The paper does not corroborate your theory – for that you would need to reference published papers on AMO and global temps, surely.
The other paper on snow cover is limited to the Tibetan Plateau. Lest we be in any doubt that the paper is about local conditions and not global, here is the title:
Decadal variability in snow cover over the Tibetan Plateau during the last two centuries
You completely misconstrue the paper, which is surprising, seeing as the abstract you just quoted clearly identifies the region of interest. This paper is about local conditions, not global. The region covered is 0.2% of the globe. Warm and cool phases of the AMO are associated with snowfall (humidity) and the monsoonal cycle, not with temperature (at least in the abstract – couldn’t find a full version online). To connect the AMO to global temperatures with this abstract is a giant leap too far.
Please admit my post at your site where we may discuss further. My comments are on-topic there and they are not incorrect. (anyone here can check if they want – the snow cover abstract is short and straightforward)
[FROM THE MODERATOR: This dispute is cluttering up the thread. If you can’t resolve it at SBVOR’s blog, at least take it to Unthreaded. -REP]

October 25, 2011 6:03 pm

barry,
I still contend that the AMO is the most relevant ocean cycle over the last century. But I had never examined the PDO closely enough to see that, within its decadal cycles, there are periods wherein the bias is strongly towards multidecadal-scale warm phases and multidecadal-scale cool phases.
Comparing a PDO chart to an AMO chart, I now see that the two cycles do display considerable synchronicity during the last century. For me, this only serves to reaffirm what Dr. Lindzen said:
“The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work (Tsonis et al, 2007), suggests that this variability is enough to account for all climate change since the 19th Century.”
[REPLY: Apology noted. Now, please, take this dispute elsewhere. Thank you. -REP]

October 25, 2011 6:07 pm

I get the feeling that Brian is pretty new to this subject. The number of skeptical scientists is far greater than the clique of grant hogs trying to sell CAGW to an increasingly skeptical public:
http://wattsupwiththat.com/2011/09/14/nobel-laureate-resigns-from-american-physical-society-to-protest-the-organizations-stance-on-global-warming
http://wattsupwiththat.com/2010/07/25/seven-eminent-physicists-that-are-skeptical-of-agw
http://www.middlebury.net/op-ed/un-signatories.html
http://wattsupwiththat.com/2009/11/02/160-physicists-send-letter-to-senate-regarding-aps-climate-position
You can add those names to the OISM Petition, plus 650 additional names found here.
And as David shows above, there are even more scientists skeptical of CAGW. I’m not arguing that “consensus” in science has any importance. But it is constantly being brought up by the alarmist crowd, and I am showing that they’re even wrong about that.
Finally, we have sea ice records going back only to 1979. I cited a paper here recently showing that the Arctic was entirely ice free some 6,000 – 7,000 years ago, when CO2 levels were well under 300 ppmv. An ice-free Arctic is a routine, natural occurrence that has zero to do with CO2; otherwise the Antarctic would show similar action. And regarding steric sea level rise [rise due to thermal expansion], it’s going away. And explain this without sounding like you’re inventing a new version of Trenberth’s “hidden heat in the pipeline”.
The fact is that every claim of the climate alarmists evaporates under scrutiny. At least CAGW skeptics have a hypothesis that has withstood all attempts at falsification: produce empirical, testable evidence per the scientific method showing that the rise in CO2, or the [natural] 0.7°C warming has been globally harmful. The null hypothesis is that CO2 is harmless and beneficial. Falsify that, if you think you can.

October 25, 2011 8:47 pm

Glenn Tamblyn says:

“A fundamental problem I see with this method is that the deltas propagate forward. So any error that will unavoidably occur if a station is missing from sample n then gets carried forward into the calculation for samples n+1, n+2 etc.”
That was the very purpose of it. If there is a gap, we get to choose – do we trust that the record will carry on faithfully where it left off, or should we treat it as a separate series? My method does the latter. However, I will give you that it would have been useful to compare the two methods and see whether they give a major difference – I guess that at least with the long-running series they won’t, because only a few points are missing.
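For concreteness, here is a minimal sketch of the scheme as described (the toy data and names are mine, not the actual analysis code). A missing value simply drops that station’s deltas for the affected steps, so an offset introduced during a gap never enters the average:

# Minimal sketch of the delta-averaging scheme (toy data; not the original code).
# None marks a missing monthly value; any delta spanning a gap is dropped.
stations = {
    "A": [10.0, 10.2, 10.1, 10.4],
    "B": [8.0, None, 9.2, 9.3],    # gap at t=1: deltas 0->1 and 1->2 dropped
    "C": [12.0, 12.1, None, 12.5], # gap at t=2: deltas 1->2 and 2->3 dropped
}

n_steps = len(next(iter(stations.values()))) - 1
anomaly = [0.0]  # cumulative sum of station-averaged deltas
for t in range(n_steps):
    deltas = [
        s[t + 1] - s[t]
        for s in stations.values()
        if s[t] is not None and s[t + 1] is not None
    ]
    anomaly.append(anomaly[-1] + sum(deltas) / len(deltas))

print(anomaly)  # approximately [0.0, 0.15, 0.05, 0.25]

Station B’s values before and after its gap are never compared to each other, which is exactly the property discussed above.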
“In addition there is the problem of propagating inaccuracies in the performance of the calculation. Computers do not calculate to infinite precision, and since most of this is about calculating differences between larger values to produce much smaller differences, then continually summing these differences, the finite accuracy of each stage of arithmetic will propagate forward in the result. It would take some serious analysis to work out whether the net effect of this over time will all cancel out or be cumulative.”
It will be cumulative, of course, though not strictly additive. However:
- Python floats have 53 bits of precision;
- the whole data series has on the order of 10,000 stations × 100 years × 10 months per year = 10^7 delta values ≈ 2^23; therefore
- 53 - 23 = 30 bits, or about 9 decimal places of precision, are still available when adding the last delta to the accumulated sum.
So this point is moot, and it is something an aspiring data cruncher should have been able to work out himself before inserting FUD into a public post.
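To put a number on that headroom, here is a quick check one could run (my own sketch, not part of the original script): naively accumulate ten million double-precision deltas and compare the result against math.fsum, Python’s correctly rounded summation.

import math
import random

random.seed(1)
# Hypothetical deltas a few degrees in magnitude, standing in for the
# real station-to-station monthly differences.
deltas = [random.uniform(-3.0, 3.0) for _ in range(10**7)]

naive = 0.0
for d in deltas:
    naive += d  # one rounding error per addition

exact = math.fsum(deltas)  # correctly rounded reference sum
print(abs(naive - exact))  # on the order of 1e-9 or smaller

Drift at the 10^-9 degree level is many orders of magnitude below instrument precision, consistent with the bit count above.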

David
October 26, 2011 1:11 am

Brian says…” You’re really going to link me to James “the ozone layer is fine” Inhofe? Ad Hominem attacks aside, at some point a person loses credibility.”
Brian, below this you talk of “productive communication”, and yet you ignore what I presented and say this? I did not link you to a person, I linked you to a report where over 700 dissenting scientists (updating the previous 650-name report) from around the globe challenged man-made global warming claims made by the United Nations Intergovernmental Panel on Climate Change (IPCC) and former Vice President Al Gore. This 2009 255-page U.S. Senate Minority Report, updated from 2007’s groundbreaking report of over 400 scientists who voiced skepticism about the so-called global warming “consensus”, features the skeptical voices of over 700 prominent international scientists, including many current and former UN IPCC scientists, who have now turned against the UN IPCC. This updated report includes an additional 300 (and growing) scientists and climate researchers since the initial release in December 2007. The over 700 dissenting scientists are more than 13 times the number of UN scientists (52) who authored the media-hyped IPCC 2007 Summary for Policymakers. The fact that your response was an ad hominem attack against one politician demonstrates an incapacity for introspective thought.
Brian continues…
“I’m still waiting for an example for when a supposed minority of scientists have risen to the top of every scientific organization and declared their minority opinion to be the consensus. OTOH I have plenty of examples for groups denying scientific consensus (evolution, smoking/cancer, ozone layer etc.)”
Again Brian you show yourself not serious. The Oregon petition is in itself a scientific organization, as was the group which formed the Senate minority report you never read. Many of the recent changes in leadership within some societies are changes to political creatures, not scientists. Yes, the CAGW claim is unique, because it is post-normal, i.e. politics (the opportunity to tax the very air we breathe) has entered the science and destroyed it, so when those political creatures rise to the top of a former scientific organization, you get mass defections of brilliant PhD scientists, the testimony of some of whom I linked to earlier. CO2 Science is a scientific organization linking hundreds of peer-reviewed studies against the alarmist position. The grassroots rise against CAGW is unique, massive, and multi-leveled, and not comparable to the claims in the smoking/cancer arena, for instance. If you automatically discount thousands of international and US scientists, and hundreds of peer-reviewed papers, and ignore the deeply flawed hockey stick and other paleo studies, ignore the Climategate letters, ignore the numerous IPCC embarrassments to science, ignore the Wegman report, then you may have your consensus, and be welcome to it, but it says more about you than it does about the science.
Brian continues…”Also still waiting for a way your position could be shown to be incorrect. For ACC proponents that’s easy. Temperatures could stop rising. As [SNIP: Policy Violation. -REP] has shown us, even “skeptical” scientists admit the earth is warming.”
Brian, temperatures have stopped rising for well over ten years, so for how long must this continue before you question? My position is that the C in CAGW is absent: so many false claims attributing anything and everything to CAGW exist, yet those claims fail to manifest in the real world. When temperature rise exceeds the natural historic variation and limits, and sea level rise accelerates beyond those same historic limits, and storms and droughts increase in ferocity and frequency beyond historic norms, then I will relook at the C in CAGW. Until then, seeing that we do not know the climate sensitivity, that adaptation to climate change is far more logical, and that developing cheap abundant energy is the fastest way to clean the environment and reduce population, I will ignore the political advocates who wish to rule the world.
Brian continues, “If you’re willing to admit that the earth is warming, and humans are causing it, we can have a more productive discussion about the likelihood that it will be bad, and what to do about it. But I doubt you’ll admit there’s any chance you’re wrong.”
Of course there is, but for now the earth itself is laying waste to the CAGW claims, and I never denied that the increase in CO2 can increase the residence time of energy within the atmosphere.

October 26, 2011 1:35 am

GT? Work it out? What FUD would that be?
😉

October 26, 2011 10:42 am

barry says: October 25, 2011 at 3:48 pm
Thanks. Appreciated. Looks like a lot of thought went into that post of yours. I shall try to assimilate it all and reply here asap but cannot do so right away. Or if this thread fails you can email me.

October 26, 2011 1:31 pm

barry says: October 25, 2011 at 3:48 pm

Lucy “Yes, I read enough to see that all are taking the stations’ categorizations of urban and rural as trustworthy…”
Rubbish. Zeke Hausfather applies 3 different methods…

I did not express myself adequately here; I apologize for that. But, as I said on my UHI page, the problem is that even rural areas with the lowest populations can have UHI. Ilarionov showed that statistically, the most-rural areas have the greatest temperature increase – even though their actual temperatures may well be lower than nearby urban areas. This disparity in delta T has to be UHI, and it shows me why even Zeke’s work is insufficient to compensate for UHI in a way I can trust.
If you had read my page carefully you would have seen Ilarionov is a scientist whose work deserves checking, not least because it has the “statistical analysis of a large number of stations” that you wrongly claim my page lacks – and it’s easy to follow.
I lack the skill to determine if Michael Palmer’s methodology (post-selection) is sound, and wouldn’t try to speak to that. I did notice Glenn Tamblyn’s comments (who does have the skill), which articulated strong reservations about Michael’s number crunching. Michael has not given a substantive reply to that. Is there a reason why I should unskeptically accept Michael’s results?
Of course you should not uncritically accept Michael Palmer. I did not. It took me a long time to find answers to all of Skeptical Science’s “answers to skeptics”, but I did; otherwise I would not be a skeptic myself now. I’m not going to waste my time repeating my story here. Relevant here, however: I realized the importance of a few trustworthy individual long-term records over sheer numbers – in fact, over almost all the other records, and over the use of true “number crunching” that turns several stations’ results into an area soup. I have written up several web pages referring to such records, two of which were published here (“Circling the Arctic” and “Circling Yamal”). You could have guessed something like this from my very first comment here, thanking Michael Palmer.
I doubt that you lack the skill to check Michael, if you were to apply yourself. You seem bright enough. I am not going to wade into Glenn Tamblyn at all, seeing as he posted so much that did not even begin to address Michael’s work. I am also skeptical of your claims that Glenn has the skill, or that Michael has not given a substantive reply. But the above answers you enough IMO, and any more checking seems like a waste of time.
I hope this clears up the misunderstanding due to my bad use of words, and also shows that I did pay attention, as I claimed – and that you need to pay closer attention to my work. Thanks for your attention and concern, I appreciate it.

Brian
October 26, 2011 2:12 pm

@Smokey, again with the lists of scientists that reject ACC. 30,000 out of millions is a tiny fraction of scientists and engineers, most of whom have expertise in fields completely unrelated to the climate. None of this nullifies that polls indicate wide acceptance of ACC. Hundreds of scientists reject evolution as well:
http://www.discovery.org/scripts/viewDB/filesDB-download.php?command=download&id=660
that doesn’t negate polls showing ~95% support.
I want to make sure I have your position correct. You think that most scientists reject ACC, and despite this, in a massive, unprecedented conspiracy, every major scientific organization, country, etc. has been co-opted by a small number of scientists and politicians wanting to push GHG limits/taxes in a desire to, what? Protect their research funding? Control people?
Smokey, I’m not sure why you’re posting misleading temperature and sea level data when you’ve already admitted that you accept warming, and that humans are causing at least some of it, but I’ll bite. First, these are single figures, with no context, posted on “skeptic” blogs. These are probably part of a study with descriptive text, but that information isn’t given. The first image shows increasing sea levels except for the past year or two (a few years in one model). The second shows wildly oscillating atmospheric temperatures over a very short time scale. Here is similar data over a longer timeframe:
http://upload.wikimedia.org/wikipedia/commons/7/7e/Satellite_Temperatures.png
Given the rate of temperature increase compared to yearly fluctuations you wouldn’t expect an increase every year. 2000-2010 was warmer than 1990-2000 and I’ll be very surprised if 2010-2020 isn’t warmer still.
“The fact is that every claim of the climate alarmists evaporates under scrutiny.” This seems to be your only real belief. You claim that you believe in CAGW except for the “C”. This wouldn’t be such an unreasonable position if you (all) didn’t show such a propensity to attack any data showing any part of CAGW. Your conflation of alarmists with scientists is concerning as well. Scientists aren’t saying it will definitely be catastrophic. More like: it’s definitely happening, will probably be bad, and hey, maybe we should do something about it just in case while we figure things out a little better. Yet you attack any climate research that is produced.

October 26, 2011 8:10 pm

Brian says:
“…again with the lists of scientists that reject ACC. 30,000 out of millions is a tiny fraction of scientists and engineers, most of which have expertise in fields completely unrelated to the climate. None of this nullifies that polls indicate wide acceptance of ACC.”
Cognitive dissonance is generally incurable, and Brian has it bad. He compares 30 thousand scientists with millions, when the correct way to view the numbers is a comparison of those 30,000+ skeptical scientists with the number of climate alarmists who signed their various counter-petitions. Last I heard their total was less than 1,500. And that’s for all their contrary petitions.
Skeptics outnumber alarmists by about twenty to one in that legitimate comparison, and there is no reason to believe the general public is much different. Gallup routinely extrapolates from a sample of 3% or less to the entire population. And more often than not, Gallup is right. Brian is only fooling himself if he believes everyone thinks like he does.

Jeff D
October 26, 2011 8:28 pm

Smokey, don’t waste any more of your time. Wrestling with a pig in the mud is useless. Eventually you discover the pig just likes the mud.
Between the two of us he has received 15 indisputable sources of information that he has refused to verify. If he wants to keep watching the Gorathon videos, let him. From now on he isn’t worth my time trying to help or even respond to.

barry
October 26, 2011 10:49 pm

Lucy,
thanks for the thoughtful reply. I will read more on your site. The link to Ilarionov didn’t work for me, but to go from your comment:

Ilarionov showed that statistically, the most-rural areas have the greatest temperature increase – even though their actual temperatures may well be lower than nearby urban areas. This disparity in delta T has to be UHI,

That assumption may not be warranted. For example, airports, which one would assume should definitely have a UHI effect, actually trend lower than total stations, or than non-airport stations. Here, there is no murkiness regarding classification (an airport is an airport). Ilarionov considers only Russian data, so perhaps there is something going on seasonally or whatever, that somehow cools the urban record – there are instances of that occurring in cities, with station moves from built-up areas to parks. Nothing conclusive offered here, of course, but I think it would be a mistake to presume a particular cause for this disparity. More investigation is needed*.
However, on your page you appear to be presenting the opposite notion from Ilarionov – that the most-rural trends are lower than urban.
* I guess I’d need to see his documented Heartland presentation to sort this out. I’ll see if I can find it after work today (if you don’t happen to drop a better link in the interim :-).

October 27, 2011 5:43 am

barry says: October 26, 2011 at 10:49 pm
Lucy, thanks for the thoughtful reply…

And thanks for yours. My web page’s Heartland link has gone dead. Removed now. The two YouTube links given subsequently work. And bear in mind that Spencer also notes UHI effects at very low population density (same page). All this bears out the notion of Watts’ original investigations, IMHO. If I think of a weather station in the Arctic, for instance, I feel one is likely to contend with at least these changes:
Station closer to habitation (because of electrical wiring to instruments)
Runways expanded, cleared more, used more
More local buildings

Gail Combs
October 27, 2011 6:42 am

Brian says:
October 25, 2011 at 12:30 pm
Which one doesn’t? The organizations are made up of the people in the organization.
_______________________________
Jeff D says:
October 25, 2011 at 1:04 pm
…..As for the groups: ask them for the survey they sent to the members for their stance on CAGW. I took the time to ask two groups. Seems they didn’t poll their members but took a vote of the board for a determination. A Nobel prize winner removed himself from one of these groups for just this reason. American Scientific Association, I think. Again, there was an article posted here on WUWT following just this…..
______________________________________
I belonged to two scientific organizations and quit after 30+ years of membership for just that reason. I got sick and tired of the organizations taking a “Politically Correct” stance instead of sticking to science.
I suggest you read about one of the organizations I quit here:
American Chemical Society members revolting against their editor for pro AGW views
http://wattsupwiththat.com/2009/07/30/american-chemical-society-members-revolting-against-their-editor-for-pro-agw-views/

Gail Combs
October 27, 2011 8:02 am

HenryP says:
October 25, 2011 at 8:18 am
Henry@Garret
…. (on the interpretation of the greenhouse effect)
…..If you want a bit more understanding of the problem, you can go here:
http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
(I have tried to keep it as simple as possible, and have asked for comments from my peers here if there is anything wrong with this explanation)
Obviously, this being the case, it follows that, if you say that the extra warming we experience is not natural, we must assume that the flow of warmth from the sun is or was constant – in which case maxima should not be rising. If you say that the warming is due to an increased greenhouse effect, it should be minimum temperatures that are rising…..
____________________________________
If you take what you are saying to the logical conclusion, then the greenhouse effect does not cause “warming”; it causes a moderating of the extremes in the night/day temperature cycle. The real-life demonstration, of course, is seen in the deserts, where the greenhouse gas H2O is missing and the daytime/nighttime temperature swings are extreme compared to a humid day/night at the same latitude.
A practical application is the sprinklers used in Florida orchards to protect fragile fruit trees.
A good explanation: http://edis.ifas.ufl.edu/ch182
Albert Einstein said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”
But it seems a lot of money, propaganda and political power sure can keep a dead theory walking!

Gail Combs
October 27, 2011 8:30 am

Brian says:
October 25, 2011 at 8:53 am
@Smokey et al
……Scientists think AGW will probably be bad, partially because we’ve built society around how the climate is right now. Is it likely to be catastrophic? No. The Gore/Greenpeace “alarmists” as you call them are out of step with the science on this. The point is that if AGW is “probably” bad and even “possibly” terrible, it makes sense to start taking modest steps to address it, using best-estimate cost-benefit analyses. If 10 years from now we realize it’s not warming as fast as we thought, it’s a lot harder to cut back on GHG emissions than we thought and mitigation is likely to be cheaper than avoidance, fine. Our policies can evolve. Scientific knowledge evolves and our opinions and actions should be based on the best available knowledge at the time…….
______________
Fine, Brian, then start actively promoting nuclear power (thorium) as Anthony has. http://wattsupwiththat.com/2011/03/30/anti-nuclear-power-hysteria-and-it%E2%80%99s-significant-contribution-to-global-warming/
That is a reasonable solution that does not involve genocide by Eucalyptus tree:
http://wattsupwiththat.com/2011/09/25/they-had-to-burn-the-village-to-save-it-from-global-warming/
http://wattsupwiththat.com/2011/09/25/they-had-to-burn-the-village-to-save-it-from-global-warming/#comment-754959
http://wattsupwiththat.com/2011/10/13/borlaug-2-0/#comment-767559
Global Land Grab: http://www.inthesetimes.com/article/11784/global_land_grab/
World Bank Links: http://web.worldbank.org/WBSITE/EXTERNAL/NEWS/0,,contentMDK:22785667~pagePK:34370~piPK:34424~theSitePK:4607,00.html
I believe in practicing what I preach so I am staring at a nuclear plant as I type….

October 27, 2011 9:08 am

Gail Combs says:
http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
If you take what you are saying to the logical conclusion, then the greenhouse effect does not cause “warming”; it causes a moderating of the extremes in the night/day temperature cycle. The real-life demonstration, of course, is seen in the deserts, where the greenhouse gas H2O is missing and the daytime/nighttime temperature swings are extreme compared to a humid day/night at the same latitude.
Henry@Gail
Yes, I am sure you are right. I am not even 100% sure whether the net effect of more humidity in the air causes warming (by deflecting earth light) rather than cooling (by deflecting IR light from the sun), but that it causes a balancing act between extremes is for certain. In fact, I believe that is why life exists at all.
http://www.letterdash.com/HenryP/what-was-that-what-henry-said

Gail Combs
October 27, 2011 9:20 am

Brian says:
October 25, 2011 at 3:41 pm
…..If you’re willing to admit that the earth is warming, and humans are causing it, we can have a more productive discussion about the likelihood that it will be bad, and what to do about it. But I doubt you’ll admit there’s any chance you’re wrong…..
___________________________________________-
Why the heck would we admit we are “WRONG” when the science is far from settled???
900+ Peer-Reviewed Papers Supporting Skeptic Arguments Against ACC/AGW Alarm
More to the point, why would we agree to the World Bank’s plans to rape the poor and middle class???
It completely flabbergasts me that people who clamor about how we must DO SOMETHING about CAGW to save humanity completely MISS WHO is PROFITING from CAGW and the very real fact that they are condemning children under the age of 5 to horrible deaths.
This is now substantiated in fact:
The so-called Danish text, a secret draft agreement … hands effective control of climate change finance to the World Bank…
http://web.worldbank.org/WBSITE/EXTERNAL/NEWS/0,,contentMDK:22785667~pagePK:34370~piPK:34424~theSitePK:4607,00.html
http://wattsupwiththat.com/2011/09/25/they-had-to-burn-the-village-to-save-it-from-global-warming/
http://www.bread.org/hunger/global/
http://www.thirdworldtraveler.com/IMF_WB/Budhoo_50YIE.html
http://www.whirledbank.org/development/sap.html
http://www.whirledbank.org/development/debt.html
Global Land Grab: http://www.inthesetimes.com/article/11784/global_land_grab/
A quickie on the 2008 food riots that shows just how ruthless these people are in their quest for money, and also shows the long-term planning: http://wattsupwiththat.com/2011/10/13/borlaug-2-0/#comment-767575

Brian
October 27, 2011 11:38 am

[snip. Enough with the politics. ~dbs, mod.]

November 3, 2011 1:06 pm

Been having a discussion with a warmist and used this post as evidence of the lack of century-scale warming. Apparently he’s been learning a few things from our exchanges and tossed back at me an argument I have used often: out of the 613 stations used in the graph, how many of them passed the Surfacestations review for siting?