Spencer: developing a new satellite-based surface temperature set

New Work on the Recent Warming of Northern Hemispheric Land Areas

by Roy W. Spencer, Ph.D.

[Image: aqua_night_pacific]

INTRODUCTION

Arguably the most important data used for documenting global warming are surface station observations of temperature, with some stations providing records back 100 years or more. By far the most complete data available are for Northern Hemisphere land areas; the Southern Hemisphere is chronically short of data since it is mostly oceans.

But few stations around the world have complete records extending back more than a century, and even some remote land areas are devoid of measurements. For these and other reasons, analysis of “global” temperatures has required some creative data massaging. Some of the necessary adjustments include: switching from one station to another as old stations are phased out and new ones come online; adjusting for station moves or changes in equipment types; and adjusting for the Urban Heat Island (UHI) effect. The last problem is particularly difficult since virtually all thermometer locations have experienced an increase in manmade structures replacing natural vegetation, which inevitably introduces a spurious warming trend over time of an unknown magnitude.

There has been a lot of criticism lately of the two most publicized surface temperature datasets: those from Phil Jones (CRU) and Jim Hansen (GISS). One summary of these criticisms can be found here. These two datasets are based upon station weather data included in the Global Historical Climate Network (GHCN) database archived at NOAA’s National Climatic Data Center (NCDC), a reduced-volume and quality-controlled dataset officially blessed by your government for climate work.

One of the most disturbing changes over time in the GHCN database is a rapid decrease in the number of stations over the last 30 years or so, after a peak in station count around 1973. This is shown in the following plot, which I pilfered from this blog.

Given all of the uncertainties raised about these data, there is increasing concern that the magnitude of observed ‘global warming’ might have been overstated.

TOWARD A NEW SATELLITE-BASED SURFACE TEMPERATURE DATASET

We have started working on a new land surface temperature retrieval method based upon the Aqua satellite AMSU window channels and “dirty-window” channels. These passive microwave estimates of land surface temperature, unlike our deep-layer temperature products, will be empirically calibrated with several years of global surface thermometer data.

The satellite has the benefit of providing global coverage nearly every day. The primary disadvantages are (1) the best (Aqua) satellite data have been available only since mid-2002; and (2) the retrieval of surface temperature requires an accurate adjustment for the variable microwave emissivity of various land surfaces. Our method will be calibrated once, with no time-dependent changes, using all satellite-surface station data matchups during 2003 through 2007. Using this method, if there is any spurious drift in the surface station temperatures over time (say due to urbanization) this will not cause a drift in the satellite measurements.
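The “calibrate once, then never adjust” idea can be sketched as a simple multiple regression of station temperatures on brightness temperatures. Everything below is my illustration, not the actual retrieval: the channel variables, the linear form, and the numbers are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic satellite-station matchups (illustrative only): one "window"
# and one "dirty-window" brightness temperature per matchup, in Kelvin.
n = 500
tb_win = rng.uniform(240.0, 290.0, n)
tb_dirty = tb_win + rng.normal(0.0, 2.0, n)
t_station = 1.1 * tb_win - 0.1 * tb_dirty + 3.0 + rng.normal(0.0, 0.5, n)

# One-time least-squares calibration over the full matchup set,
# analogous to calibrating once on the 2003-2007 matchups and never updating.
X = np.column_stack([tb_win, tb_dirty, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, t_station, rcond=None)

def retrieve(tb1, tb2):
    """Apply the fixed coefficients to new brightness temperatures."""
    return coef[0] * tb1 + coef[1] * tb2 + coef[2]
```

Because the coefficients are frozen after this single fit, any later drift in the station temperatures (urbanization, station moves) cannot feed back into the satellite product, which is the point of the design.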

Despite the shortcomings, such a dataset should provide some interesting insights into the ability of the surface thermometer network to monitor global land temperature variations. (Sea surface temperature estimates are already accurately monitored with the Aqua satellite, using data from AMSR-E).

THE INTERNATIONAL SURFACE HOURLY (ISH) DATASET

Our new satellite method requires hourly temperature data from surface stations to provide +/- 15 minute time matching between the station and the satellite observations. We are using the NOAA-merged International Surface Hourly (ISH) dataset for this purpose. While these data have not had the same level of climate quality tests the GHCN dataset has undergone, they include many more stations in recent years. And since I like to work from the original data, I can do my own quality control to see how my answers differ from the analyses performed by other groups using the GHCN data.
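The ±15-minute matchup requirement amounts to a simple time-window filter. A minimal sketch (the station report times and overpass time are made up):

```python
from datetime import datetime, timedelta

def match_obs(station_times, satellite_time, window_minutes=15):
    """Indices of station observations within +/- window_minutes of the overpass."""
    w = timedelta(minutes=window_minutes)
    return [i for i, t in enumerate(station_times) if abs(t - satellite_time) <= w]

# Hypothetical hourly station reports vs. a 13:50 UTC satellite overpass:
obs = [datetime(2005, 7, 1, h) for h in range(24)]
print(match_obs(obs, datetime(2005, 7, 1, 13, 50)))  # → [14]
```

Only the 14:00 UTC report falls inside the window; an overpass at 13:30 UTC would match nothing, since both neighboring reports are 30 minutes away.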

The ISH data include globally distributed surface weather stations since 1901, and are updated and archived at NCDC in near-real time. The data are available for free to .gov and .edu domains. (NOTE: You might get an error when you click on that link if you do not have free access. For instance, I cannot access the data from home.)

The following map shows all stations included in the ISH dataset. Note that many of these are no longer operating, so the current coverage is not nearly this complete. I have color-coded the stations by elevation (click on image for full version).

[Figure: ISH station locations, 1901 through 2009, color-coded by elevation]

WARMING OF NORTHERN HEMISPHERIC LAND AREAS SINCE 1986

Since it is always good to immerse yourself into a dataset to get a feeling for its strengths and weaknesses, I decided I might as well do a Jones-style analysis of the Northern Hemisphere land area (where most of the stations are located). Jones’ version of this dataset, called “CRUTem3NH”, is available here.

I am used to analyzing large quantities of global satellite data, so writing a program to do the same with the surface station data was not that difficult. (I know it’s a little obscure and old-fashioned, but I always program in Fortran). I was particularly interested to see whether the ISH stations that have been available for the entire period of record would show a warming trend in recent years like that seen in the Jones dataset. Since the first graph (above) shows that the number of GHCN stations available has decreased rapidly in recent years, would a new analysis using the same number of stations throughout the record show the same level of warming?

The ISH database is fairly large, organized in yearly files, and I have been downloading the most recent years first. So far, I have obtained data for the last 24 years, since 1986. The distribution of all stations providing fairly complete time coverage since 1986, having observations at least 4 times per day, is shown in the following map.

[Figure: ISH stations with at least four observations per day, 1986 through 2009]

I computed daily average temperatures at each station from the observations at 00, 06, 12, and 18 UTC. For stations with at least 20 days of such averages in a month, I computed monthly averages throughout the 24-year period of record. I then computed an average annual cycle at each station separately, and finally monthly anomalies (departures from that average annual cycle).
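The anomaly step of that chain can be sketched as follows (my Python illustration with made-up numbers; the actual analysis is in Fortran):

```python
import numpy as np

def monthly_anomalies(monthly_means):
    """monthly_means: shape (n_years, 12) array of one station's monthly averages.
    Returns departures from that station's own mean annual cycle."""
    monthly_means = np.asarray(monthly_means, dtype=float)
    annual_cycle = np.nanmean(monthly_means, axis=0)  # 12 climatological months
    return monthly_means - annual_cycle

# Illustrative: two years of the same sinusoidal annual cycle,
# with the second year uniformly 0.5 C warmer.
months = np.arange(12)
cycle = 10.0 * np.sin(2.0 * np.pi * months / 12.0)
anoms = monthly_anomalies(np.vstack([cycle, cycle + 0.5]))
# anoms[0] is -0.25 everywhere and anoms[1] is +0.25: the annual cycle
# cancels out, leaving only the year-to-year difference.
```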

Similar to the Jones methodology, I then averaged all station month anomalies in 5 deg. grid squares, and then area-weighted those grids having good data over the Northern Hemisphere. I also recomputed the Jones NH anomalies for the same base period for a more apples-to-apples comparison. The results are shown in the following graph.

[Figure: ISH vs. CRUTem3NH Northern Hemisphere monthly anomalies, 1986 through 2009]
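The Jones-style gridding and area weighting described above can be sketched like this (my construction; the 5-degree boxes and cosine-latitude weights are the obvious choices, not necessarily the exact CRU implementation):

```python
import numpy as np

def grid_and_area_weight(lats, lons, anoms, cell=5.0):
    """Average station anomalies within each cell-degree box, then combine the
    occupied boxes weighted by the cosine of the box-center latitude."""
    boxes = {}
    for lat, lon, a in zip(lats, lons, anoms):
        key = (int(np.floor(lat / cell)), int(np.floor(lon / cell)))
        boxes.setdefault(key, []).append(a)
    num = den = 0.0
    for (i, _j), vals in boxes.items():
        w = np.cos(np.radians((i + 0.5) * cell))  # box area shrinks toward the pole
        num += w * np.mean(vals)
        den += w
    return num / den

# Two stations in one equatorial box (anomalies 1.0 and 3.0 -> box mean 2.0)
# and one station in a box near 60N (anomaly 0.0): the high-latitude box
# gets less weight, so the result sits well above the naive box average of 1.0.
val = grid_and_area_weight([2.0, 3.0, 60.0], [2.0, 3.0, 10.0], [1.0, 3.0, 0.0])
```

Averaging within a box first keeps a dense cluster of stations from dominating the hemispheric mean, and the cosine weight accounts for boxes covering less area at high latitude.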

I’ll have to admit I was a little astounded at the agreement between Jones’ and my analyses, especially since I chose a rather ad-hoc method of data screening that was not optimized in any way. Note that the linear temperature trends are essentially identical; the correlation between the monthly anomalies is 0.91.

One significant difference is that my temperature anomalies are, on average, magnified by a factor of 1.36 compared to Jones’s. My first suspicion is that Jones has relatively more tropical than high-latitude area in his averages, which would mute the signal; I have not had time to verify this.

Of course, an increasing urban heat island effect could still be contaminating both datasets, resulting in a spurious warming trend. Also, when I include years before 1986 in the analysis, the warming trends might start to diverge. But at face value, this plot seems to indicate that the rapid decrease in the number of stations included in the GHCN database in recent years has not caused a spurious warming trend in the Jones dataset — at least not since 1986. Also note that December 2009 was, indeed, a cool month in my analysis.

FUTURE PLANS

We are still in the early stages of development of the satellite-based land surface temperature product, which is where this post started.

Regarding my analysis of the ISH surface thermometer dataset, I expect to extend the above analysis back to 1973 at least, the year when a maximum number of stations were available. I’ll post results when I’m done.

In the spirit of openness, I hope to post some form of my derived dataset — the monthly station average temperatures, by UTC hour — so others can analyze it. The data volume will be too large to post at this website, which is hosted commercially; I will find someplace on our UAH computer system so others can access it through ftp.

While there are many ways to slice and dice the thermometer data, I do not have a lot of time to devote to this side effort. I can’t respond to all the questions and suggestions you e-mail me on this subject, but I promise I will read them.

COMMENTS
mobihci
February 20, 2010 7:46 pm

signal from noise. if you want to find the signal (agw warming, which includes station dropouts), you must first get rid of the noise (natural warming), which includes long-term cycles. it may be that removing stations does actually cause a warming, but because the noise (the warming part of the cycle) is present and stronger than the signal, you cannot see it. the only way to remove the noise is to extend the test to 2 x the period of the largest cycle, whatever that may be. at least 60 years needs to be looked at. more like 2000 years is necessary to draw any real conclusions from it, but this sort of calculation we see here from spencer seems to be the norm in this industry.

February 20, 2010 8:06 pm

From my experience of looking at weather data over the past 25 years, I would have to say that the day-to-night, high-to-low spread in temperatures has a lot to do with specific heat content due to soil moisture content, elevation, cloud cover and thickness of the atmosphere.
So I would expect the same to carry over to the greater spread from monthly highs to lows for more concentrated reporting of mid-latitude areas than equatorial ones.
The inverse shows up in the resultant dying of thermometers by movement to more coastal areas and irrigated rural climes, but in the reviews in this debate it is seen more as an increase in night-time lows than as just a decrease in total spread. Using the (Tmax+Tmin)/2 = Avg method rather than the median of 24 hourly readings might show a slightly different story.
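A toy illustration (my numbers, not station data) of how the (Tmax+Tmin)/2 convention can diverge from an average over all 24 hourly readings when the diurnal cycle is asymmetric:

```python
import numpy as np

hours = np.arange(24)
# Hypothetical asymmetric day: flat 10 C night, sinusoidal daytime bump to 18 C.
temps = 10.0 + 8.0 * np.maximum(0.0, np.sin(np.pi * (hours - 6) / 12.0))

midrange = (temps.max() + temps.min()) / 2.0  # the (Tmax+Tmin)/2 convention
hourly_mean = temps.mean()                    # average of 24 hourly readings
# midrange is 14.0, but the hourly mean is about 12.5: the two conventions
# disagree by well over a degree for the very same day.
```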

cement a friend
February 20, 2010 8:25 pm

Roy, if one uses the same database, one could expect that two sets of analyses should be similar. However, one has to ask: is the database representative of the total area being covered?
Warwick Hughes http://www.warwickhughes.com/blog/ has put up a post USA Dept of Energy Jones et al 1986 350 pages station documentation now online in pdf He notes “That is the attempt by CRU and Jones to direct investigators to the GHCN station data in lieu of Jones et al/CRU station data. The two groups conduct distinctly different processes on station data and researchers will seldom get close to understanding what Jones/CRU have done by relying on GHCN station versions. The GHCN is riddled with its own multitude of errors and is more than a subject for study in itself.”
I had a quick glance at the Southern Hemisphere document and note that none of the three stations in Tasmania are representative of that island and in Queensland 5 of the nine stations are located on the coast and also in cities of over 50,000. The four inland stations are all in developed town locations.
The Russians have expressed doubts about the selection of their stations. In China there has been a huge growth of cities and there has to be doubts about the representativeness of any of the stations as well as the record available.
Maybe you need to focus on the few stations which are truly representative of a local area and then compare satellite measurements.

cement a friend
February 20, 2010 8:42 pm

I should have added from Warwick Hughes http://www.warwickhughes.com/cru86/tr027/index.htm
The following comment
” Readers should email CDIAC and ask for copies of TR022 and TR027. The USA DoE foisted this stuff on the world.
cdiac@ornl.gov
However – for the SH I have a full station list with GHCN populations added by me. Gives you a fair idea how many city stations were used. Readers can judge for themselves the veracity of the Jones et al statement on p1216 of Jones et al 1986b, where they state that “… very few stations in our final data set come from large cities.” This glib and lulling statement is detached from the reality that 40% of their ~300 SH stations are cities with population over 50K.”

February 20, 2010 9:03 pm

Why is Australia totally missing from the 1986-2009 map? Surely we have weather stations reporting hourly.
How do I contact Dr. Roy W. Spencer? I happen to have an empty website with a large multi-megabyte allowance that I’m not going to fill any time soon. How big are those files? I got two sites instead of one by accident when I shifted my site off Yahoo GeoCities. I need a web designer to tell me how to host those FTP files, though. I’m paying for the space, so I may as well use it.

Bart
February 20, 2010 9:05 pm

Roy Spencer (13:47:57) :
“I’m not a big fan of creating data where there are none. As it is, spreading a single thermometer over a 5×5 deg grid square (90,000 sq. nautical miles at the equator) is a bit of a stretch already.”
Thanks for responding, Dr. Spencer. I agree completely. Which is why I am suggesting that a fit to a spherical harmonic expansion would likely give a more faithful result. You would specifically not be “creating data where there is none” as, it appears to me, is necessary for the “Jones methodology” because there are a lot of blank grids in space and in time in the plots you show here.
With the spherical harmonic expansion, you would be doing the equivalent of a linear, quadratic, cubic, etc. polynomial fit such as you regularly do for single-dimensional data, but it would be a 2-dimensional least squares fit specifically formulated for spherical geometry. Using a low-order model which is significantly overdetermined by the data would effectively spatially low-pass filter the data, and should produce a good function upon which to base a global estimate without all this ad hoc gridding and weighting.
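Bart’s suggestion can be sketched with just the degree-0 and degree-1 real harmonics (my construction, unnormalized; a real analysis would go to higher degree). Because every l>0 harmonic integrates to zero over the sphere, the fitted constant term is itself the global average:

```python
import numpy as np

rng = np.random.default_rng(1)

def sh_global_mean(lat_deg, lon_deg, temps):
    """Least-squares fit of a degree-0/1 real spherical-harmonic basis.
    All l>0 terms integrate to zero over the sphere, so the coefficient
    on the constant basis function is the fitted global average."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    X = np.column_stack([
        np.ones_like(lat),           # l=0: constant
        np.sin(lat),                 # l=1, m=0
        np.cos(lat) * np.cos(lon),   # l=1, m=1
        np.cos(lat) * np.sin(lon),   # l=1, m=-1
    ])
    coef, *_ = np.linalg.lstsq(X, np.asarray(temps, float), rcond=None)
    return coef[0]

# Stations sampling an exactly degree-1 field: the constant term (15.0)
# is recovered no matter how the station locations happen to cluster.
lat = rng.uniform(-60.0, 80.0, 50)
lon = rng.uniform(0.0, 360.0, 50)
t = (15.0 - 10.0 * np.sin(np.radians(lat))
     + 2.0 * np.cos(np.radians(lat)) * np.cos(np.radians(lon)))
```

With real data, the choice of truncation degree trades bias against variance, as Bart notes further down the thread.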

Bart
February 20, 2010 9:06 pm

I also understand that the question is moot with the satellite data, but it might give you a better glimpse at what to expect when you do process the satellite data.

Robert
February 20, 2010 9:34 pm

” Holger Danske (15:47:07) :
Robert, I don’t understand your reference to Archimedes. As far as I know he was only interested in buoyancy/gravity:”
I cited him as an example of seeing “around” a problem:
“The most widely known anecdote about Archimedes tells of how he invented a method for determining the volume of an object with an irregular shape. According to Vitruvius, a new crown in the shape of a laurel wreath had been made for King Hiero II, and Archimedes was asked to determine whether it was of solid gold, or whether silver had been added by a dishonest goldsmith.[13] Archimedes had to solve the problem without damaging the crown, so he could not melt it down into a regularly shaped body in order to calculate its density. While taking a bath, he noticed that the level of the water in the tub rose as he got in, and realized that this effect could be used to determine the volume of the crown. For practical purposes water is incompressible,[14] so the submerged crown would displace an amount of water equal to its own volume.”
If we can get really accurate measurement of energy in/energy out, we can effectively measure how much global warming (or cooling) is taking place, rather than trying to chase down and find all the heat in the air, in the oceans, or in the ice. That’s a complicated measurement problem (like Archimedes’ crown) and while, unlike with the crown, we will always need to measure the temperatures directly, a precise direct measurement of earth’s radiation budget can quantify the amount of warming/cooling independent of all that.
Wouldn’t that be cool?

R. Gates
February 20, 2010 10:12 pm

Don’t know how the so-called “AGW Crowd” will react to this effort, but based on satellite data, February follows closely on the heels of January as a very warm month in the troposphere. With 8 days left, without an immediate and rapid cooling, February should be the warmest on satellite record. See:
http://discover.itsc.uah.edu/amsutemps/
If February does end as the warmest on instrument record, then we’ll have 2 months of 2010 completed with record temps, and be well on the way to meeting my prediction of making 2010 as the warmest year on record, unless we have a Mt. Pinatubo type volcanic eruption. With these warm tropospheric temps and the sun waking up from a prolonged solar minimum, cosmic ray counts falling, the interplanetary AP index edging up…looks like it will be hard to keep 2010 cool…

pft
February 20, 2010 10:42 pm

“R. Gates (22:12:25) :
… but based on satellite data, February follows closely on the heels of January as a very warm month in the troposphere. With 8 days left, without an immediate and rapid cooling, February should be the warmest on satellite record. …
If February does end as the warmest on instrument record, then we’ll have 2 months of 2010 completed with record temps, and be well on the way to meeting my prediction of making 2010 as the warmest year on record.”
Wish I lived at 14,000 ft where all the warmth is (what is it, about -25 deg F there), which is probably related to El Nino and thus not reflecting climate trends. Meanwhile back on land near sea level, DC had about 4 ft of snow about the time I flew back to Taipei, and we just had the coldest lunar new year holiday in recent memory, perhaps as cold as 1986 when I first came out here, although maybe the heavy rains made it seem colder than it was.
But at least the sun is becoming more active, and that should help warm us up a bit.
Not sure where CO2 comes into play in these cooling and warming events, though. That’s the big question, isn’t it? I mean, Jones has admitted it might have been warmer in the MWP than today, and that the warming from 1979-1999 was not much different from the warming of 1910-1940 and 1850-1880, and that while not statistically significant, there has been a cooling trend since 2002.

Anticlimactic
February 20, 2010 11:17 pm

I am not sure I see the point of this : taking a subset of suspect data to produce a less accurate version of a suspect graph!
Also this sucks you in to the idea that global warming is a continuous trend and it is just a matter of by how much. If you take the graph in 8 year chunks then 1986-2003 looks like it would trend as flat at about -0.3C, 2004-2001 would show a rise from about -0.3C to 0.3C, and 2002-2009 would trend as flat at about 0.3C.
You would get a similar result by measuring a kettle from 2 minutes before to 2 minutes after it has boiled. The trend line would suggest the kettle will reach the melting point of steel in about 100 minutes!
________________________________________
The graph is like a thriller novel with the last page missing : January 2010. As this is the northern hemisphere land surfaces it is not contaminated by the hotter southern hemisphere. If January comes out as not being exceptionally cold, and not way below the area shown on the graph, it will be a pointer as to how compromised the CRU data is.

R. Gates
February 20, 2010 11:31 pm

pft,
Actually, the near-surface temps in the troposphere are warm as well this year, and it’s warm all the way up to about 46,000 ft. Some of this may be El Nino related, certainly, but since it is a global reading, that doesn’t explain it all.
Regardless of whether or not the MWP was warmer than the recent warm period (and I think the data is still not completely solid that it was, over the entire globe), the whole issue comes down to whether or not our human-created CO2, which is accumulating in greater quantities every year, can impact the climate enough to override any natural variations. The earth should be entering another glacial period about now in this overall Ice Age that we are in. Can the accumulation of human-created GHGs delay or even overcome the next advance of the glaciers? Or perhaps, and I think this is also a real possibility, we will warm things up just enough to change the ocean currents, slowing down the circulation of heat from the tropics to the poles, and perhaps bring on the advance of the glaciers even faster than they would have come on their own. This “Day After Tomorrow” scenario is just as much a threat as any runaway global warming. This scenario is what ushered in the Younger Dryas period, and sent the earth back into a thousand years of cold just as it was warming up after the last glacial period.

Anticlimactic
February 21, 2010 12:01 am

OOPS! I obviously have a blind spot with the 1990s! My entry should read :
Also this sucks you in to the idea that global warming is a continuous trend and it is just a matter of by how much. If you take the graph in 8 year chunks then 1986-1993 looks like it would trend as flat at about -0.3C, 1994-2001 would show a rise from about -0.3C to 0.3C, and 2002-2009 would trend as flat at about 0.3C.

February 21, 2010 12:45 am

Re: Bart (Feb 20 21:05)
I don’t like the spherical harmonics idea. Partly because you’d need fairly high order to capture variations on the scale of continents, which I imagine that you want to do, and this involves an assumption of high order differentiability.
But the worse problem is probably false teleconnection. In trying to accommodate a difficult fit in, say, Greenland, the fitting would produce (~Gibbs effect) ripples all over the world, not diminishing with distance from Greenland.

Bart
February 21, 2010 2:09 am

Nick Stokes (00:45:13) :
“Partly because you’d need fairly high order to capture variations on the scale of continents, which I imagine that you want to do, [I do?] and this involves an assumption of high order differentiability.”
Not any more than you need a higher order than typical linear trend (order 1 polynomial expansion) to pick up variations on the scale of years or the eleven year solar cycle or other periodic components in order to produce estimates of warming over the last century.
Remember, the object would be to get a function which would then be integrated over the surface and divided by 4*pi*R^2 to get a global average. Higher spatial frequencies would be attenuated in that integration, so each successive harmonic adds progressively less to the final result.
I would even say, you want to do a low order fit, maybe 2nd or 3rd order even, with lots of data to make it well overdetermined, as this will beat down noise in the final product. As always in filtering applications, the choice is between bias and variance in your estimator. You would have to futz around with the data to find the best compromise. Try 3rd order, lower it to 2nd and raise it to 4th and see how things change. Even plot several orders, and look for the Goldilocks point at which lower is too smooth, and higher is too variable.
“But the worse problem is probably false teleconnection. In trying to accommodate a difficult fit in, say, Greenland, the fitting would produce (~Gibbs effect) ripples all over the world, not diminishing with distance from Greenland.”
A) Gibbs effects come out for higher order fits – see above
B) doesn’t matter, the integration to compute the global average would beat the spatial variability down
Thanks for the comments.

Brent Hargreaves
February 21, 2010 2:10 am

John Hooper (13:53:49) 20 Feb : “So the world is warming just like everyone said?
Guess ClimateGate was a red herring after all.”
No, that doesn’t follow. Dr. Spencer’s work here is validating records over a tiny 24-year period. This Great Debate concerns timescales in millennia, not decades, and whether the recent rise is unprecedented (as the Hockey Stick would have it) or merely the latest in a long series of ups and downs.

PJB
February 21, 2010 2:17 am

I am not really interested in just repeating the sort of computer analysis that the temperature alarmists have already made (using data the alarmists have put out on the web) to see whether they know how to program a computer correctly. After all, what Jones “lost” was the original data, not the programs used to analyze the computer datasets.

Why not collect (and not just sort out from alarmists’ temperature datasets) the handwritten temperature data from stations that you or your colleagues have visited and thus personally know started out rural and stayed rural. As few as twenty or thirty of these scattered across North America would be of interest. Even if someone wanted to, tampering with decades of hand-written temperature log books would be an impossible task. It would definitely be interesting to see whether this data shows any significant warming trend.

Really, poking around in the datasets that the alarmists have put out onto the web is the easiest sort of check to do, and it is almost guaranteed to back up their claims. Do you think they just dump any old numbers out there and hope no one looks at them? — I don’t, not now after the skeptics have started to gain traction. Not everyone is a high-level UN bureaucrat. And if by chance they did miss something and put out data that works against their case, they’ll just “find” some mistake in it — it’s their data after all, how can you dispute the presence of a mistake if they say it’s there? — and put out “corrected data” that shows the sort of warming they want. The alarmists’ basic power is that they control the official data coming from the weather stations and the satellites, and they get to say what the “correct” data is, so any disagreements you find between their web data and their temperature trends just shows them how to fix their data so that it supports their claims.

I realize you have actually done some work to check the alarmists’ claims, which puts you way ahead of almost all of the people who comment here, but you made the easy check, the check the alarmists invite people to do by publicizing their data. By now the climate establishment is on high alert; they are no longer dozing at their government-funded desks the way they were, say, two years ago, so I think we can all assume that any temperature data they provide will support their case.

By the way, their obvious next step is to “fix” the way they process the satellite data so it too supports global warming. Satellite data is highly vulnerable to tweaks in the massive computer programs needed to go from the radiation sensor data generated on-board the spacecraft to the corresponding temperature estimates of the earth’s surface and atmosphere. This computer processing is so complicated that, up until a few years ago, the satellite groups were probably reluctant to modify the computer programs. It would be expensive and difficult to do without introducing embarrassing bugs that would interrupt the data stream — and no one was really paying attention to the climate skeptics — but now they may be worried enough to attempt the modifications. (Indeed they may already have done so — strange that suddenly the satellites are showing a record-high January temperature anomaly at such a politically convenient time for the alarmists.) If they have kept the raw satellite-sensor data from past decades they can even go back with their new modified temperature-extraction programs and “correct” past satellite temperatures. Really, as long as we’re only talking about a degree or two here and there, temperature data can be massaged any way you please as long as you own it and can claim to be making improvements.

Bart
February 21, 2010 2:45 am

Bart (02:09:03) :
Nick Stokes (00:45:13) :
Actually (claps his hand to his head), only the estimate of the 00 term impacts the average, but the estimate of the other terms impacts the estimate of the 00 term. Again, though, I do not think the order has to be very high to get a good estimate of that term.

February 21, 2010 2:51 am

Re: Bart (Feb 21 02:09),
I wrongly assumed you wanted to use the harmonics to produce a smoothed spatial representation of temperature, which is one of the uses of gridding. But if you are doing it just for the global average, then another point of gridding is to ensure that different areas of the land mass contribute more or less equally to the average, regardless of the density of stations.
It’s not clear to me how you achieve that by fitting low order harmonics. The LS fit would still be over-influenced by regions of high station density.

Bart
February 21, 2010 2:57 am

“…only the estimate of the 00 term impacts the average…”
The global average, I mean, of course. I’d test this assumption myself, but I have a day job in an entirely other realm. This is just a suggestion for whoever might like to give it a whirl. You could start with low order and work higher. At some order, the 00 term should start to level out, then break up as you go higher, and that level would give you a good estimate of the global average without all of this gridding and weighting of progressively warped differential areas, which appears to me, based on the descriptions I have read, to be essentially little more sophisticated than blunt rectangular integration.

Rhys Jaggar
February 21, 2010 3:05 am

Dr Spencer – a highly interesting and information-rich article.
Just one question from an interested lay-person: although your calibration will try to minimise ‘drift’ due to UHI etc, that is an external drift situation.
What are the methods used to ensure that ‘internal drift’ does not occur, namely that the technology in the satellites doesn’t change characteristics through time??
We saw of course that satellite ‘drift’ occurred in the arctic ice measurement situation in the past 18 months. Presumably, there is some form of internal warning system to address that??

John Hooper
February 21, 2010 3:34 am

Brent Hargreaves (02:10:17) :
John Hooper (13:53:49) 20 Feb : “So the world is warming just like everyone said?
Guess ClimateGate was a red herring after all.”
No, that doesn’t follow. Dr. Spencer’s work here is validating records over a tiny 24-year period. This Great Debate concerns timescales in millennia, not decades, and whether the recent rise is unprecedented (as the Hockey Stick would have it) or merely the latest in a long series of ups and downs.

Nice attempt to move the goal posts, but if you care to peruse this site you’ll find plenty of cynical, uninformed comments casting aspersions on the recent record. My bet is you won’t see any forthcoming apologies.

carrot eater
February 21, 2010 4:12 am

Brent Hargreaves (02:10:17) :
That may be your opinion, but I think you’ll find a great number of people around here who doubt whether any warming has taken place over that time period. They might say it’s all or mostly a result of dodgy homogenisation, or somesuch.
Bart (02:09:03) :
Just to be clear, are you suggesting this method for satellite data, or these surface data? Anyway, I agree with Nick’s reservations.

Gareth
February 21, 2010 4:16 am

Over the last 150 years the technology and coverage of surface temperature recording has come on in leaps and bounds, culminating in that giant leap into space.
To what extent is it possible to factor out these changes? How do we know that the current temperature record and the lumpy increases that appear in the ‘global’ series aren’t simply us getting more accurate and more complete coverage, in particular bringing more of the Southern Hemisphere into the records?

A C Osborn
February 21, 2010 4:23 am

I agree completely with PJB (02:17:58) : .
Every Single Station Dataset that has been tested using the raw data has shown major problems with the Official record of those Datasets.
Even whole Datasets when tested using the raw data have shown manipulation especially in the GISS data.
Many posters on here from all over the world have shown their own collections of raw, unadulterated data. It would be much better to collect them all, especially the rural sites, and plot them to see a more truthful picture than that provided by “Quality Controlled” datasets.