Unadjusted data of long-period stations in GISS show a virtually flat century-scale trend

Hohenpeissenberg Meteorological Observatory - Image from GAWSIS

Temperature averages of continuously reporting stations from the GISS dataset

Guest post by Michael Palmer, University of Waterloo, Canada

Abstract

The GISS dataset includes more than 600 stations within the U.S. that have been in operation continuously throughout the 20th century. This brief report looks at the average temperatures reported by those stations. The unadjusted data of both rural and non-rural stations show a virtually flat trend across the century.

The Goddard Institute for Space Studies provides a surface temperature data set that covers the entire globe, but for long periods of time contains mostly U.S. stations. For each station, monthly temperature averages are tabulated, in both raw and adjusted versions.

One problem with the calculation of long term averages from such data is the occurrence of discontinuities; most station records contain one or more gaps of one or more months. Such gaps could be due to anything from the clerk in charge being a quarter drunkard to instrument failure and replacement or relocation. At least in some examples, such discontinuities have given rise to “adjustments” that introduced spurious trends into the time series where none existed before.

1 Method: Calculation of yearly average temperatures

In this report, I used a very simple procedure to calculate yearly averages from raw GISS monthly averages that deals with gaps without making any assumptions or adjustments.

Suppose we have 4 stations, A, B, C and D. Each station covers 4 time points, without gaps:
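
Schematically, writing Xi for the reading of station X at time point i (a stand-in for the original table):

    time point:  0    1    2    3
    station A:   A0   A1   A2   A3
    station B:   B0   B1   B2   B3
    station C:   C0   C1   C2   C3
    station D:   D0   D1   D2   D3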

In this case, we can obviously calculate the average temperatures as:
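
$$T_i = \tfrac{1}{4}\,(A_i + B_i + C_i + D_i), \qquad i = 0, 1, 2, 3$$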

A more roundabout, but equivalent scheme for the calculation of T1 would be:
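
$$T_1 = T_0 + \tfrac{1}{4}\,\bigl[(A_1 - A_0) + (B_1 - B_0) + (C_1 - C_0) + (D_1 - D_0)\bigr]$$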

With a complete time series, this scheme offers no advantage over the first one. However, it can be applied quite naturally in the case of missing data points. Suppose now we have an incomplete data series, such as:
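
For instance, with the reading B1 missing:

    time point:  0    1    2    3
    station A:   A0   A1   A2   A3
    station B:   B0   -    B2   B3
    station C:   C0   C1   C2   C3
    station D:   D0   D1   D2   D3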

…where a dash denotes a missing data point. In this case, we can estimate the average temperatures as follows:
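
$$T_1 = T_0 + \tfrac{1}{3}\,\bigl[(A_1 - A_0) + (C_1 - C_0) + (D_1 - D_0)\bigr]$$
$$T_2 = T_1 + \tfrac{1}{3}\,\bigl[(A_2 - A_1) + (C_2 - C_1) + (D_2 - D_1)\bigr]$$
$$T_3 = T_2 + \tfrac{1}{4}\,\bigl[(A_3 - A_2) + (B_3 - B_2) + (C_3 - C_2) + (D_3 - D_2)\bigr]$$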

The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.

One advantage that may not be immediately obvious is that this scheme also removes systematic errors due to change of instrument or instrument siting that may have occurred concomitantly with a data gap.

Suppose, for example, that data point B1 went missing because the instrument in station B broke down and was replaced, and that the calibration of the new instrument was offset by 1 degree relative to the old one. Since B2 is never compared to B0, this offset will not affect the calculation of the average temperature. Of course, spurious jumps not associated with gaps in the time series will not be eliminated.
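
Since the author's Python code is available only upon request, here is a minimal sketch of the scheme just described (not the actual code; station names and values are hypothetical):

    def chained_average(series):
        """Average temperature series built by chaining period-to-period deltas.

        series: dict of station -> {time point: temperature}; absent keys are gaps.
        A station missing either endpoint of a delta drops out of that delta's
        average, so a calibration jump across a gap never enters the result.
        """
        points = sorted({t for s in series.values() for t in s})
        first = points[0]
        start = [s[first] for s in series.values() if first in s]
        avg = {first: sum(start) / len(start)}
        for prev, curr in zip(points, points[1:]):
            deltas = [s[curr] - s[prev] for s in series.values()
                      if prev in s and curr in s]
            # If no station reports both endpoints, carry the average forward.
            step = sum(deltas) / len(deltas) if deltas else 0.0
            avg[curr] = avg[prev] + step
        return avg

    # Toy data mirroring the example above: station B misses time point 1.
    stations = {
        "A": {0: 10.0, 1: 10.5, 2: 10.2, 3: 10.8},
        "B": {0: 8.0, 2: 8.1, 3: 8.9},
        "C": {0: 12.0, 1: 12.3, 2: 12.1, 3: 12.6},
        "D": {0: 9.0, 1: 9.4, 2: 9.3, 3: 9.7},
    }
    print(chained_average(stations))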

In all following graphs, the temperature anomaly was calculated from unadjusted GISS monthly averages according to the scheme just described. The code is written in Python and is available upon request.

2 Temperature trends for all stations in GISS

The temperature trends for rural and non-rural US stations in GISS are shown in Figure 1.

Figure 1: Temperature trends and station counts for all US stations in GISS between 1850 and 2010. The slope for the rural stations is 0.0039 deg/year, and for the other stations 0.0059 deg/year.

This figure resembles other renderings of the same raw dataset. The most notable feature in this graph is not in the temperature but in the station count. Both to the left of 1900 and to the right of 2000 there is a steep drop in the number of available stations. While this seems quite understandable before 1900, the even steeper drop after 2000 seems peculiar.

If we simply lop off these two time periods, we obtain the trends shown in Figure 2.

Figure 2: Temperature trends and station counts for all US stations in GISS between 1900 and 2000. The slope for the rural stations is 0.0034 deg/year, and for the other stations 0.0038 deg/year.

The upward slope of the average temperature is reduced; this reduction is more pronounced with non-rural stations, and the remaining difference between rural and non-rural stations is negligible.

3 Continuously reporting stations

There are several examples of long-running temperature records that fail to show any substantial long-term warming signal; examples are the Central England Temperature record and the one from Hohenpeissenberg, Bavaria. It therefore seemed of interest to look for long-running US stations in the GISS dataset. Here, I selected stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.

The temperature trends of these stations are shown in Figure 3.

Figure 3: Temperature trends and station counts for all US stations in GISS reporting continuously, that is, containing at least one monthly data point for each year from 1900 to 2000. The slope for the rural stations (335 total) is -0.00073 deg/year, and for the other stations (278 total) -0.00069 deg/year. The monthly data point coverage is above 90% throughout, except for the very first few years.

While the sequence and the amplitudes of upward and downward peaks are closely similar to those seen in Figure 2, the trends for both rural and non-rural stations are virtually zero. Therefore, the average temperature anomaly reported by long-running stations in the GISS dataset does not show any evidence of long-term warming.

Figure 3 also shows the average monthly data point coverage, which is above 90% for all but the first few years. The less than 10% of all raw data points that are missing are unlikely to have a major impact on the calculated temperature trend.

4 Discussion

The number of US stations in the GISS dataset is high and reasonably stable during the 20th century. In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out, to the point that the GISS dataset no longer seems to offer a valid basis for comparison of the present to the past. If we confine the calculation of average temperatures to the 20th century, there remains an upward trend of approximately 0.35 degrees.

Figure 4: Locations of US stations continuously reporting between 1900 and 2000 and contained in the GISS dataset. Rural stations in red, others in blue. This figure clearly shows that the US are large, but the world (shown in FlatEarth™ projection) is even larger.

Interestingly, this trend is virtually the same with rural and non-rural stations.

The slight upward temperature trend observed in the average temperature of all stations disappears entirely if the input data is restricted to long-running stations only, that is, those stations that have reported monthly averages for at least one month in every year from 1900 to 2000. This discrepancy remains to be explained.

While the long-running stations represent a minority of all stations, they would seem most likely to have been looked after with consistent quality. The fact that their average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.

Disclaimer

I am not a climate scientist and claim no expertise relevant to this subject other than basic arithmetic. In case I have overlooked equivalent previous work, this is due to my ignorance of the field, is not deliberate, and will be amended upon request.

Brian H

A fine extraction of genuine information from an egregiously abused data set. The retro-chilling of old records and The Great Dying of The Thermometers (actually about 1990, they went from ~6000 to ~1600) are so brazen as to defy belief.

crosspatch

In the 21st century, the number of stations has dropped precipitously. In particular, rural stations have almost entirely been weeded out

At some point they are going to run out of tricks to create a warming signal.

Torgeir Hansson

Unadjusted data???? Are you crazy? Don’t you go and challenge Dr. Jones et al now. They run this game, sonny, and you’d better get used to it.

Glenn Tamblyn

Michael. Interesting post. However I have to disagree with the method you have used to handle gaps in the record. By using the average of other stations to substitute for a missing station when averaging temperatures, this assumes that the missing station is essentially at a similar temperature to the others. If its temperature is significantly different, this will introduce a bias – if it is colder, a warming bias; if it is warmer, a cooling bias.
This isn’t the method used by the mainstream temperature records. They base their calculations on comparing each reading from a station against its own long-term average over some base period. Then they take the difference between the individual reading and this long-term average to calculate the temperature anomaly for that reading on that day. This produces quite a different behaviour when looking at missing readings.
Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser.
[snip sorry, SkS doesn’t treat people with any sense of fairness – for example Dr. Pielke Sr. If you want to reference any of your work, you are welcome to print it out in detail here, but until SkS changes how they treat people, sorry, not going to allow you to use it as a reference. Be as upset as you wish, but the better thing to do is work for change there – Anthony ]
I will be interested in your comments.

Goldie

To the untrained eye this looks to have two warming/cooling cycles in it. The first is in the early to mid 20th century, which corresponds with the well-attested long hot summer that made the Battle of Britain possible, and the second in the late part of the 20th century, which corresponds with the well-understood warming cycle that began in approximately 1974/76, following the 1974 La Nina with a step change in global rainfall.
Just a word of warning though – last week this website was going off the deep end at BEST for using a non-standard period for assessing climate change. Whilst this assessment is useful, it is important that the limitations of the data for fully informing the current debate are understood.

Ian of Fremantle

Have you yourself, or do you know of anyone who has, asked GISS why particular stations have been discontinued? In Australia there also seems to have been selective removal of some stations. Of course it would be uncharitable to suggest the removals are meant to tie in with the proposition of global warming, but it would be good to get an official answer. It would also be far better if posts like this were publicised in the MSM.

Steeptown

Is there an official explanation for why, in the modern era with all the funding available, the number of stations has dropped precipitously?

tokyoboy

Many folks here have known that for a long time, n’est-ce pas?

@Michael Palmer
“At some point they are going to run out of tricks to create a warming signal”
I appreciate very much that you just put it as it is. We sceptics sometimes want to sound extremely nuanced etc. simply to be taken seriously, and thus, when someone just says the truth we all know just like that, it’s a great relief.
Here RUTI (Rural Unadjusted Temperature Index) versus BEST global land trend:
http://hidethedecline.eu/media/ARUTI/GlobalTrends/Est1Global/fig1c.jpg
The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts): BEST adds 0.55 K to the warming trend 1950-78 compared to RUTI.
RUTI global taken from:
http://hidethedecline.eu/pages/posts/ruti-global-land-temperatures-1880-2010-part-1-244.php
RUTI will grow stronger and stronger, and even though all beginnings are tough, I hope everyone will help collect even more original temperature data for me to make RUTI even better.
Tricks: Coastal stations.
One (important) trick from Hadcrut is to use rural coastal stations so that they do have rural data aboard.
The problem is that coastal stations worldwide have around 0.6 K more warming trend 1925-2010 than nearby non-coastal stations; see
Joanne Nova/RUTI :
http://hidethedecline.eu/pages/posts/ruti-coastal-temperature-stations-242.php
K.R. Frank

Brian H

I actually find this chilling. I’d counted on an underlying base trend of about .6K/century to give a bit of a leg up to resist the coming downturn. Not to be, apparently!
A possible positive outcome could be that the Cooling freaks out the Alarmists, and they flip over to pushing CO2 emissions to combat it. That will do nothing for temperature, but will unclog the energy generation pipelines and be great for agriculture and silviculture. Maybe even viticulture!

TBear (Sydney, where it is finally warming Up, after coldest October in nearly 50 yrs)

Interesting.
But does it stack up?
Interested to know …

The graph I find most interesting is #2. I have seen elsewhere in many places that the global temps are just not rising (at least not significantly, if at all) in this century. How is it that the GISS records for the US are rising very significantly in the 21st century? There appears to be about as much warming in the last decade as in the whole 20th century! (Allowing for some smoothing)

Tom Harley reblogged this on pindanpost and commented: The same result for NW Australia…virtually flat for over 100 years in Broome

Glenn Tamblyn says:
October 24, 2011 at 12:42 am
Michael. Interesting post. However I have to disagree with the method you have used to handle gaps in the record. By using the average of other stations to substitute for a missing station when averaging temperatures, this assumes that the missing station is essentially at a similar temperature to the others. If its temperature is significantly different, this will introduce a bias – if it is colder, a warming bias; if it is warmer, a cooling bias.

I have to disagree. He is using the temperature delta (Δtemperature) to average with other deltas. That makes much more sense than what you have assumed.

The upshot of this is that missing monthly Δtemperature values are simply dropped and replaced by the average (Δtemperature) from the other stations.

That is not to say the technique is not problematic, but it should likely be much more accurate than any method I have seen described in this matter. The nearer the site, I suspect, the better the correlation, regardless of the offset in average temperature.

So, where we have continuous, reliable, non-manipulated data, there is no warming at all. QED
Strange that many “skeptics” seem to have reconciled themselves with the notion that “there was some global warming in the 20th century.”
A repetitive, all-pervasive lie, if not constantly resisted, will in time appear to contain at least some truth. I remember that in the 1980s one could hardly find a person in Russia, however opposed to the regime, who would not believe in some part of the Soviet propaganda. It had been nailed into people’s brains for 70 years, from the womb to the tomb, and only a few were stubborn enough to see through it.

son of mulder

Interesting. You use stations continuously operating in the 20th century. What does the graph look like if you include only stations that operated continuously through the 20th century up to the present day? That would indicate any bias in the recent removal of stations.

Glenn Tamblyn

Michael
I had some stuff you might have been interested in reading, but Anthony snipped the link. Interestingly, it isn’t the mods here at WUWT snipping this, it is Anthony himself.
Look at posts at SkS during May this year, or my posts under my author’s name. Unless Anthony snips this as well.
Also, Anthony, if you want to talk about how people are treated, seriously read all the exchanges between SkS and Dr P Snr. Please note the civil tone of it all.
Unless you want to snip this too.
Note that I have copied this post so we can show what you have snipped Anthony
{see these: http://wattsupwiththat.com/2011/09/25/a-modest-proposal-to-skeptical-science/ and http://wattsupwiththat.com/2011/10/11/on-skepticalscience-%e2%80%93-rewriting-history/ and explain why that sort of behavior is OK for SkS. How do you justify changing/deleting user comments months and years later? ~mod}

In this case, we can estimate the average temperatures as follows:

Yikes! You introduce “fiction” into fact. Well, the temperature averages are already an artificial construct, one that doesn’t actually represent the time-averaged thermal state of the system being measured. Even the time-average has knobs on it for subsequent use. Didn’t Doug Keenan explain that adequately a couple of days ago?
What is sorely needed is analysis that can cope with “holes” in the data. Gaps. Analysis that doesn’t require invention of data to bridge the gaps. Which is going to be somewhat harder than for homogenised data, but at least one isn’t analysing guesses instead of raw data.
Moreover, what is needed is an understanding of the physical system. A dry-air temperature isn’t sufficient to describe the thermodynamic state, the enthalpy, of the short-term climate system.

John Marshall

Unfortunately, adding all the temperatures and dividing by the total number does not actually produce a correct answer.
Many inputs can affect these temperatures between the times of reading, which will skew the average without any knowledge that this has happened. A continuous recording, like a barograph for pressure, would be much more accurate. Whether this is done I have no idea.
See: “Does a Global Temperature Exist?”, Essex, McKitrick and Andresen, 2006.

The most notable feature in this graph is not in the temperature but in the station count.

Very true, very alarming, very indicative of manipulation.
The station count, reporting years and monthly data point coverage can be used to generate a monthly GISS Credibility Index for their overall dataset… unfortunately this credibility index started to fall off a cliff in the 1970s and is currently very close to zero.
The fact that their average temperature trend runs lower than the overall average and shows no net warming in the 20th century should therefore not be dismissed out of hand.

Totally correct.
The subset of raw data with a very high GISS Credibility Index actually shows a very slight cooling trend in the US during the 20th century.

KPO

Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone. I have this sense that there are parameters missing, such as humidity, cloud cover and perhaps wind speed/direction, that should also be recorded and used collectively when compiling a record. My feeling is that a more complete “measure” would be accomplished, e.g. an average temp of 15C/70% humidity/45% cloud/20km/hr/NW – in short, a sort of micro-climate record, expanded to regional and then global if that’s possible. This thing with averages of averages of averages of data points (numbers) bothers me. Perhaps my thinking is off so please straighten me out. Thanks

Pete in Cumbria UK

What KPO says above at 2:17 +1
How is it that changing temperature (alone) is being used as a proxy for ‘changing climate’?
It’s like saying that because jeans are usually blue, all items of blue clothing are made of denim and worn around your ass. (or something like that, yanno wot i meen)

Bill

Have the stations gone away, or are the stations still there, but just no longer counted?

It has often struck me, as I extend my Carbon Footprint around the globe, that a very interesting and very consistent temperature record may well be available. It is the temperature as recorded by planes as they travel.
The temperature, height and time are all constantly recorded. I can see that all we need to add may be the humidity. I guess it may be very low at most cruising heights however.
This may not give us everything, but it would at least give us something we could investigate. The cost of gathering this data should be trivial.
I personally volunteer to help, as long as my expenses are met. Obviously it would be much better to observe from the front of the plane, so only first class tickets will be accepted 😉

KPO says:
October 24, 2011 at 2:17 am
Please forgive my ignorance if I am being way off here, ….

I had the same thought. The use of Google for about 20 minutes fixed the ignorance. It is something to do with “wet build temperature” or similar. Basically the relative humidity is taken into account. (I am sure others with infinitely more knowledge can explain or correct me).
If you mean that the ‘temperature’ itself is not a good reading, because what we need to measure is ‘energy’, you have my vote. This view has been expounded on this site often, but I apologise for forgetting by whom. Records of weather such as cloud would also be extremely useful; IIRC Willis has posted a few essays on the matter.

^^^ Dang! ^^^
“wet build temperature” = “wet bulb temperature”

DirkH

Glenn Tamblyn says:
October 24, 2011 at 12:42 am
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser. ”
But if you do area weighting your result will be hugely biased towards the trends of isolated thermometers with no second data source a thousand miles around, like in GISS. Is this what you consider a better approach? It also makes manipulations much simpler. The PNS approach. Drop the thermometers that don’t confess.

James Reid

A question to anyone who is familiar with the techniques from someone who has not had access to the published papers in this area; How is the global mean temperature “normally” calculated from daily maximum and minimum records?
Can anyone point to a primer in this area please?

kim;)

KPO says:
October 24, 2011 at 2:17 am
“This thing with averages of averages of averages of data points (numbers) bothers me. ”
xxxxxxxxxxxxxx
Well said!

KV

Re “The Great Dying of Thermometers” which occurred worldwide predominantly in 1992-93. Of course, most of them didn’t actually die – Hansen and associates simply stopped using the data for reasons nobody ever seems to have explained. The following is my own interpretation of available facts.
1988 was reportedly the hottest summer in the US for 52 years and no doubt gave James Hansen confidence for his alarmist appearance before a US Committee that year (reportedly with windows purposely left wide open and air conditioning off on one of the hottest days of the year)! He did not have the backing of NASA as his former supervisor, senior NASA atmospheric scientist Dr. John S. Theon made clear after he retired. He said Hansen had “embarrassed NASA” with his alarming climate claims and violated NASA’s official agency position on climate forecasting (i.e., ‘we did not know enough to forecast climate change or mankind’s effect on it’).
It is evident Hansen had put his reputation and credibility on the line, and when, starting with significant falls in 1989 and 1990, the next four years to 1992 showed sharp cooling in many parts of the world, a cooling which was then further exacerbated by the June 1991 eruption of Mt. Pinatubo, Hansen must have been put under extreme pressure, not only by his critics within and outside NASA, but also by other scientists supporting and pushing the AGW hypothesis.
In the absence of any official plausible explanation I feel this to be a credible background and motive for the dropping of stations and the now well-established abuse of raw data. It is worthy of note that others have found drops in station numbers resulted in a significant step rise in most graphed temperatures.

I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it? Maybe Anthony Watts can shed some light on this?
Also, Anthony and others were quite annoyed that the recent BEST results were publicized before peer review, and here we have a biochemist being given free rein to post what amounts to a back-of-the-envelope calculation of average ground temps. Michael Palmer or Anthony Watts could have, at the very least, got a statistician or near equivalent to verify the averaging analysis, which I suspect is particularly well suited to giving an average trend of zero.
Cheers.

TBear (Sydney, where it is finally warming Up, after coldest October in nearly 50 yrs)

KPO says:
October 24, 2011 at 2:17 am
Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone. I have this sense that there are parameters missing, such as humidity, cloud cover and perhaps wind speed/direction, that should also be recorded and used collectively when compiling a record. My feeling is that a more complete “measure” would be accomplished, e.g. an average temp of 15C/70% humidity/45% cloud/20km/hr/NW – in short, a sort of micro-climate record, expanded to regional and then global if that’s possible. This thing with averages of averages of averages of data points (numbers) bothers me. Perhaps my thinking is off so please straighten me out. Thanks
_______________
So let’s say KPO is correct.
Does this sort of point indicate that we really have no safe handle on total energy in the climate system?
If that is correct, or even plausible, why are we spending trillions of dollars on AGW?
Frustrated that, if these types of points have any true validity, the general scientific community does not get its act together and call this CAGW argument for the misguided tripe that it may well be. Is it because scientists feel too constrained by specialism? Sad, very sad, if that is the case.
Waiting for the push-back revolution to begin …

PlainJane

@ Glen Tamblyn
“Also, in your post there is no mention of how you handle area weighting of averages. Without this your results will be hugely biased towards the trends in regions where temperature stations are denser.”
This article never claims to be working out the average temperature of America or any region in particular – it is only looking at long term trends in individual stations, so this point is not valid.
It is also petty and peevish to complain about being “snipped” here when you have not had your comment disappeared down a black hole so that no other people may know you posted something. Most readers of this blog, if interested, would be savvy enough to find your work anyway from the comment Anthony left stating where your work was. You have been invited to place your specific work here so we can read it easily in context. Could you please do that?

Steve C

There’s another graph I’d like to see, prompted by an oft-quoted statistic in these and other pages, and again by those RUTI vs BEST graphs from hidethedecline.eu, above. The oft-quoted statistic is that, of the land area of the earth, urban areas comprise around 1%, rural 99%. Has anyone drawn a graph in which 99% weighting was given to the best of the rural stations, and 1% to the urban?
Of course, this still ignores the two-thirds of the planet which is sea, but I’d certainly call such a graph a more accurate picture of overall land temperature than the usual heavily urban- and airport-influenced offerings. Has anyone here come across such a graph, or can someone produce one a bit more quickly than my own rusty data processing would allow?

Glenn Tamblyn,
Please do not refer to the Skeptical Pseudo-Science propaganda blog. John Cook has no ethics, no honesty, and his mendacious blog has been thoroughly deconstructed in articles and comments here. Do a search of the archives, and get educated.
Of course, if you suffer from cognitive dissonance like most True Believers, SPS is the echo chamber for you. Just be aware that you’re being spoon-fed alarmist propaganda at that heavily censoring blog. Some folks actually like being led by the nose, so maybe that’s A-OK with you. To each his own.

Harry Snape

Replacing a missing temp from one station with the average of the other stations seems like madness to me. The missing reading could be for a station in Antarctica, or the Equator, they would bear no relationship to the “average” of other stations.
Equally, infilling with the average for that time of the year from that station, or something similar, is also likely to introduce an error. As someone said above, Sydney has an anomalously low Oct, so infilling an average Oct temp would have overestimated the temp. But at least this method has the temp in the ball park, i.e. it will be an average Syd temp for Oct, not an average world temp for Syd (which would be seriously low).
What needs to happen when stations are missing is that the average for that location/time is used, but the error bar on the calculated temp is increased by the maximum variance at that location/time (possibly more for short data records). So the loss of any substantial portion of the record will be observable in the width of the error bars, and one can deduce how confident we should be of the final number.

oMan

KPO: you’ve caught the flavor of my frustration with the (understandable, inevitable) enterprise of reducing the complexity of a system such as local weather or (its big sister, integrated over time and space) climate into a single parameter called “temperature”, and even that not measuring heat content but just a dry bulb thermometer. We lose so much information in this process. It would be nice if the final statistic came with a label reminding us of the magnitude of what has been left on the cutting-room floor and how subjective the cutting has been. Just use a five-star or color code. I know, I know, error bars are a good nudge in that direction; and this post by Michael Palmer contains, in Figure 1, another excellent pointer for quality, namely the number of stations. It tells the reader that the data before 1900 and after about 1990 is “thin” and “different”. I use those words to suggest the orthogonality of the information-space we are trying to explore, if not capture, in the temperature time series for the entire US “represented” by GISS numbers.
That dying of the thermometers is the real story for me, and Michael Palmer adds a valuable chapter to the story. Many thanks.

Stephen Wilde

So, how to reconcile the widespread sceptic acceptance that there has been SOME warming since the LIA with the now clear possibility that the surface temperature record via thermometers has been primarily recording UHI effects and/or suffers from unjustified ‘adjustments’ towards the warm side ?
Firstly we can simply say that the background warming since the LIA is less than that apparently recorded by the thermometers.
Although there has been some warming, the effect of UHI and inaccurate ‘adjustments’ has exaggerated it during the late 20th century and may now be hiding a slight decline.
Secondly we can see from the chart above that although there may have been little or no net change in temperature during the 20th century at the most reliable long term sites there have been changes up and down commensurate with many other observations i.e. warming followed by cooling then warming again and now possibly cooling.
What such a pattern suggests is that the Earth’s watery thermostat is highly effective but takes a while ( a few decades) to adjust to any new forcing factor.
Thus the system energy content remains much the same (including the main body of the oceans), but in the process of adjusting the energy flow through the system in order to retain that basic system energy content, the surface pressure distribution changes so as to alter the size and position of the climate zones, especially in the mid latitudes where most temperature sensors are situated.
From our perspective there seems to be a warming or cooling at the recording sites when averaged out overall, but in reality all that is being recorded is the rate of energy flow past the recording sites as the speed of energy flow through the system changes in an inevitable negative response to a forcing agent, whether it be sun, sea or GHGs. In effect, the positions of the surface sensors vary in relation to the position of the climate zones, and they record that variance and NOT any change in energy content for the system as a whole.
A warmer surface temperature recording at a given site (excluding UHI and adjustments) just reflects the fact that more warm air from a more equatorward direction flows across it more often. That does not imply a warmer system if the system is expelling energy to space commensurately faster.
When a forcing agent tries to warm the system the flow of energy through the system and out to space increases so that more warm air crosses our sensors.
When a forcing agent tries to cool the system the flow of energy through the system and out to space decreases so that cooler air crosses our sensors.
Hence the disparity between satellite and surface records. The satellites are independent of the rate of energy flow across the surface and attempt to ascertain the energy content of the entire system. That energy content varies little if at all because the net effect of changes in the rate of energy flow through the system is to stabilise the total system energy content despite internal system variations or external (solar) variations seeking to disturb it.
Higher temperature readings at surface stations therefore do not necessarily reflect a higher system energy content at all, merely a local or regional surface warming as more energy flows through on its way to space.

Hi, Michael. One needed adjustment has to do with time-of-observation bias. There are literature references on this available via the internet. You may want to check Steve McIntyre’s Climate Audit for discussions of this.
I believe that the raw data needs this adjustment.

Richard

A chilling analysis of James Hansen’s machinations.

Richard

Garrett Curley (@ga2re2t) says:
“I just don’t get it with this site. On some posts (e.g. a recent one by Willis Eschenbach), there’s the argument that skeptics have never doubted that the world is warming. And then this article comes along to doubt that warming. Which is it?”
Maybe Arithmetic and free discussion?
This is not a religious site perhaps?
Not a discussion of beliefs but rather on the basis of the beliefs?
Maybe Hansen introduces a consistent warming bias with his “adjustments”?

Dave Springer

Frank Lansner says:
October 24, 2011 at 1:05 am
Here RUTI (Rural Unadjusted Temperature Index) versus BEST global land trend:
http://hidethedecline.eu/media/ARUTI/GlobalTrends/Est1Global/fig1c.jpg
The ONLY difference between the 2 datasets happens in the years 1950-78 (just before satellite data starts) :
BEST adds 0,55 K to the warm trend 1950-78 compared to RUTI.
_________________________________________________________________
Hi Frank. I had a look at the graphs and recognized the source of the difference. That’s the infamous Time of Observation adjustment. It’s the biggest upward adjustment they make. Without it there is no significant 20th century warming trend. This is why the warming trend focus has now shifted to the period 1950-2010. You don’t hear the AGW boffins discussing dates earlier than that anymore. The author of the OP here evidently didn’t get the memo which was issued about the same time as the order to stop calling it “climate change” and begin calling it “global climate disruption”. It’s all about framing, you see. They frame the times and they frame the terms. It’s a despicable, dishonest, unscientific agenda they pursue.
Steve Goddard has a good explanation of the TOB here:
http://stevengoddard.wordpress.com/2011/08/01/time-of-observation-bias/

Time Of Observation Bias
Posted on August 1, 2011 by stevengoddard
The largest component of the USHCN upwards adjustments is called the time of observation bias. The concept itself is simple enough, but it assumes that the station observer is stupid and lazy.
Suppose that you have a min/max thermometer and you take the reading once per day at 3 pm. On day one it reads 50F for the high – which for argument’s sake occurred right at 3 pm. That night a cold front comes through and drops temperatures by 40 degrees. The next afternoon, the maximum temperature is also going to be recorded as 50F – because the max marker hasn’t moved since yesterday. This is a serious and blatantly obvious problem, if the observer is too stupid to reset the min/max markers before he goes to bed. The opposite problem occurs if you take your readings early in the morning.
I had a min/max thermometer when I was six years old. It took me about three days to realize that you had to reset the markers every night before you went to bed. Otherwise half of the temperature readings are garbage.
USHCN makes it worse by claiming that people used to take the readings in the afternoon, but now take them in the morning. That is how they justify adjusting the past cooler and the present warmer.
They should use the raw data. These adjustments are largely BS.

Please forgive my ignorance if I am being way off here, but I have had this niggling thought for some time that there is something missing when recording a temperature reading alone.

Indeed. The mainstream datasets are based upon a daily temperature reading.
The problem here is what does this temperature reading actually represent?
For the moment forget all about the problems associated with the accuracy of the thermometer… forget the Urban Heat Island effect and all the known location issues associated with thermometers… and let’s take a look at what daily data is actually being recorded.
If we used a basic data logging thermometer to record the temperature at the end of every minute during a single calendar day we would accumulate 1,440 temperature data points… the maths is simple: 60 minutes x 24 hours = 1,440.
From these 1,440 data points we could then very easily calculate an Average Daily Temperature for that day.
Unfortunately, this is not how the mainstream datasets derive their daily temperature reading.
This is what they actually do.
First:
They capture the data point with the highest temperature and call it their Daily High Temperature.
Although this is fairly reasonable it is important to remember that this is the high outlier value from the 1,440 data points for that day.
Second:
They capture the data point with the lowest temperature and call it their Daily Low Temperature.
Although this is fairly reasonable it is important to remember that this is the low outlier value from the 1,440 data points for that day.
Third:
They add the Daily High Temperature outlier value to the Daily Low Temperature outlier value and then divide this intermediate number by two. So what would a rational person call this number? By definition (i.e. the maths) it is the mid-point between the daily extreme outlier temperatures. However, climatologists by some bizarre logic call this number the Daily Average Temperature. It is only in climatology that the average of 1,440 data points is calculated by just using the two extreme outlier values for the day… no wonder climate science is regarded as a weird science.
To underline just how weird this weird science really is, let’s ask ourselves:
Question: What data would a rational person use to demonstrate rising temperatures?
Rational Answer: The series of Daily High Temperatures.
Climatology Answer: The series of mid-points between the daily extreme outlier temperatures.
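
A quick sketch in Python, using an entirely hypothetical diurnal profile, illustrates how far the midpoint of the two outliers can sit from the true average of all 1,440 data points:

    import math

    # One day sampled each minute (1,440 points), with a made-up, skewed
    # diurnal profile: a brief afternoon peak over a long cool night.
    temps = [10 + 8 * math.sin(math.pi * m / 1440) ** 4 for m in range(1440)]

    true_mean = sum(temps) / len(temps)        # average of all 1,440 samples
    midpoint = (max(temps) + min(temps)) / 2   # the climatological "daily average"

    print(f"true 1440-sample mean: {true_mean:.2f}")  # about 13.0 here
    print(f"(Tmax + Tmin)/2:       {midpoint:.2f}")   # exactly 14.0 here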

Dave Springer

More info on individual adjustments and how they change the twencen temp record.
A picture is worth a thousand words (maybe more in this case):
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_pg.gif

Dave Springer

Final result of adjustments:
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
As you can see the entire twencen warming is indeed manmade. Made by the adjustments applied to the thermometer readings that is…

Rhys Jaggar

I think the debate is beginning to move toward the position where we can see that the result obtained in terms of temperature trends depends on the source data. SURPRISE, SURPRISE!
It may in fact be the case that about 50 independent analyses should be done to show what happens depending on what data you use. This one is just for the USA. Which is a large continental land mass bounded by the world’s two largest oceans to the East and West, a warm sea to the South and a major land mass to the north, with a smaller land mass in the SW.
You might find different results if you studied Russia: a large continental land mass surrounded by ice/ocean to the north and a continental land mass to the south.
You might find different results if you studied Europe: a mid-sized continental land mass with a major ocean to the west, a small sea to the south, smaller enclosed seas to the SE and NE.
You might find different results for Australia: a mid sized land mass totally surrounded on all sides by a major ocean.
I am certain, based on the fact that the Thames simply doesn’t freeze over like it used to in the 19th century, that London must be much warmer in winter than it used to be 200 years ago. So I’d frankly be amazed if we couldn’t agree that we have had warming in the past 200 years, although the 20th century may be less clear cut.
Where the debate has been the past 20 years is a small group unilaterally determining the source data sets and not having that most important decision subjected to the most rigorous analysis by all. That can’t change quickly enough in my book.
It would also appear that debates about how you search for deviations can get you different results, particularly if the datasets have gaps in the record. It might be helpful to commit to building a century of reliable, consistent, internationally agreed datasets to ensure that working with limited data becomes less important in time. One hopes that this can include wireless-based transmission of data, particularly in rural areas with extreme cold in winter. Whether that is technically feasible is something specialist climatologists should no doubt enlighten the public about.
One is minded to suggest that the IPCC bears all the hallmarks in climatology that FIFA has done in world football. A deeply flawed organisation, but not completely evil. Which is about as strong a signal for fundamental reform as can be given using measured language…

Bigdinny

I have been lurking here and several other sites for over a year now, trying to make sense of all this from both sides of the fence. In answer to the simple question “Is the earth’s temperature rising?”, depending upon whom you ask, you get the differing responses YES!, NO!, DEPENDS!, IT WAS BUT NO LONGER IS! I think I have learned a great deal. Of one thing I am now fairly certain: Nobody knows.

DocMartyn

Have a look at the histograms of Fig.3, BEST UHI paper. It appears that when they compare 10 year, 10-20, 10-30 and 30+ data series the distribution of temperature RATE changes from a normal to a Poisson distribution.
My guess is that you will find the same thing. Moreover, if you look at the individual rates in the same manner, you will be able to identify when the transition occurred.

Jim Butler

First hand example of how data sets get messed up…
This morning, as part of my first cuppa routine, I checked the weather online using Intellicast’s site. 47deg. Let my dogs out…hmmmm …seems much cooler than 47deg, checked Accu. 47deg. Then checked Wunderground, 47deg. Used Wunderground’s “station select” feature, saw that it had defaulted to APRSWXNET – Milford, Ma.. All of the surrounding stations were reporting 34-37degs, and they appeared to be amateur stations, whereas I’m guessing that APRSWXNET is an “official” station of some sort.
Whatever it is, multiple services are using it, and it’s wrong by about 12deg.
JimB

Richard says:
“Maybe Arithmetic and free discussion?”
I would be of the opinion that arithmetic and free discussion of that type should be reserved for forum threads. This site is considered, from what I gather, a reference on climate skepticism, so back-of-the-envelope calculations seem out of place to me.
“This is not a religious site perhaps?”
Well, I would argue that using any Tom, Dick and Harry analysis just to place doubt on GW (and therefore AGW) is being somewhat religious. But putting that aside, not being religious/dogmatic about something does not imply that every discussion is fair game. Being open-minded about something does not require you to let your intellectual defenses down.