Spencer: developing a new satellite-based surface temperature dataset

New Work on the Recent Warming of Northern Hemispheric Land Areas

by Roy W. Spencer, Ph. D.

[Image: Aqua satellite nighttime view over the Pacific]

INTRODUCTION

Arguably the most important data used for documenting global warming are surface station observations of temperature, with some stations providing records back 100 years or more. By far the most complete data available are for Northern Hemisphere land areas; the Southern Hemisphere is chronically short of data since it is mostly oceans.

But few stations around the world have complete records extending back more than a century, and even some remote land areas are devoid of measurements. For these and other reasons, analysis of “global” temperatures has required some creative data massaging. Some of the necessary adjustments include: switching from one station to another as old stations are phased out and new ones come online; adjusting for station moves or changes in equipment types; and adjusting for the Urban Heat Island (UHI) effect. The last problem is particularly difficult since virtually all thermometer locations have experienced an increase in manmade structures replacing natural vegetation, which inevitably introduces a spurious warming trend over time of an unknown magnitude.

There has been a lot of criticism lately of the two most publicized surface temperature datasets: those from Phil Jones (CRU) and Jim Hansen (GISS). One summary of these criticisms can be found here. These two datasets are based upon station weather data included in the Global Historical Climate Network (GHCN) database archived at NOAA’s National Climatic Data Center (NCDC), a reduced-volume and quality-controlled dataset officially blessed by your government for climate work.

One of the most disturbing changes over time in the GHCN database is a rapid decrease in the number of stations over the last 30 years or so, after a peak in station number around 1973. This is shown in the following plot which I pilfered from this blog.

Given all of the uncertainties raised about these data, there is increasing concern that the magnitude of observed ‘global warming’ might have been overstated.

TOWARD A NEW SATELLITE-BASED SURFACE TEMPERATURE DATASET

We have started working on a new land surface temperature retrieval method based upon the Aqua satellite AMSU window channels and “dirty-window” channels. These passive microwave estimates of land surface temperature, unlike our deep-layer temperature products, will be empirically calibrated with several years of global surface thermometer data.

The satellite has the benefit of providing global coverage nearly every day. The primary disadvantages are (1) the best (Aqua) satellite data have been available only since mid-2002; and (2) the retrieval of surface temperature requires an accurate adjustment for the variable microwave emissivity of various land surfaces. Our method will be calibrated once, with no time-dependent changes, using all satellite-surface station data matchups during 2003 through 2007. Using this method, if there is any spurious drift in the surface station temperatures over time (say due to urbanization) this will not cause a drift in the satellite measurements.
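
A minimal sketch of this kind of fixed, one-time calibration (in Python rather than Fortran, with illustrative channel and variable names; the actual retrieval, including the emissivity adjustment, is more involved than this):

```python
# Minimal sketch (not the actual retrieval): fit a fixed, one-time linear
# calibration of AMSU window-channel brightness temperatures to matched
# station temperatures over 2003-2007, then apply the same coefficients,
# unchanged, to the full satellite record.
import numpy as np

def fit_calibration(tb_window, tb_dirty, emis_proxy, t_station):
    """Least-squares fit: T_station ~ a0 + a1*Tb_window + a2*Tb_dirty + a3*emis.
    Inputs are 1-D arrays of satellite/station matchups from 2003-2007."""
    X = np.column_stack([np.ones_like(tb_window), tb_window, tb_dirty, emis_proxy])
    coeffs, *_ = np.linalg.lstsq(X, t_station, rcond=None)
    return coeffs  # frozen after this fit; never re-tuned against later station data

def retrieve_temperature(coeffs, tb_window, tb_dirty, emis_proxy):
    """Apply the fixed calibration to any later satellite observation."""
    X = np.column_stack([np.ones_like(tb_window), tb_window, tb_dirty, emis_proxy])
    return X @ coeffs
```

Because the coefficients are frozen after the 2003-2007 fit, any later drift in the station network cannot feed back into the satellite product, which is the point of calibrating only once.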

Despite the shortcomings, such a dataset should provide some interesting insights into the ability of the surface thermometer network to monitor global land temperature variations. (Sea surface temperature estimates are already accurately monitored with the Aqua satellite, using data from AMSR-E).

THE INTERNATIONAL SURFACE HOURLY (ISH) DATASET

Our new satellite method requires hourly temperature data from surface stations to provide +/- 15 minute time matching between the station and the satellite observations. We are using the NOAA-merged International Surface Hourly (ISH) dataset for this purpose. While these data have not had the same level of climate quality tests the GHCN dataset has undergone, they include many more stations in recent years. And since I like to work from the original data, I can do my own quality control to see how my answers differ from the analyses performed by other groups using the GHCN data.
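
A minimal sketch of the +/- 15 minute matchup step (assumed variable names and time units, not the operational code):

```python
# Sketch: pair each satellite overpass with the nearest ISH report from the
# same station, keeping only pairs within +/- 15 minutes. Times are assumed
# to be Unix seconds; stn_times must be sorted and contain >= 2 reports.
import numpy as np

def match_obs(sat_times, stn_times, max_dt=15 * 60):
    idx = np.searchsorted(stn_times, sat_times)        # insertion points
    idx = np.clip(idx, 1, len(stn_times) - 1)
    left, right = stn_times[idx - 1], stn_times[idx]
    nearest = np.where(sat_times - left <= right - sat_times, idx - 1, idx)
    dt = np.abs(stn_times[nearest] - sat_times)
    keep = dt <= max_dt
    return np.nonzero(keep)[0], nearest[keep]          # (satellite, station) indices
```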

The ISH data include globally distributed surface weather stations since 1901, and are updated and archived at NCDC in near-real time. The data are available for free to .gov and .edu domains. (NOTE: You might get an error when you click on that link if you do not have free access. For instance, I cannot access the data from home.)

The following map shows all stations included in the ISH dataset. Note that many of these are no longer operating, so the current coverage is not nearly this complete. I have color-coded the stations by elevation (click on image for full version).

[Image: Map of all ISH stations, 1901 through 2009, color-coded by elevation]

WARMING OF NORTHERN HEMISPHERIC LAND AREAS SINCE 1986

Since it is always good to immerse yourself into a dataset to get a feeling for its strengths and weaknesses, I decided I might as well do a Jones-style analysis of the Northern Hemisphere land area (where most of the stations are located). Jones’ version of this dataset, called “CRUTem3NH”, is available here.

I am used to analyzing large quantities of global satellite data, so writing a program to do the same with the surface station data was not that difficult. (I know it’s a little obscure and old-fashioned, but I always program in Fortran). I was particularly interested to see whether the ISH stations that have been available for the entire period of record would show a warming trend in recent years like that seen in the Jones dataset. Since the first graph (above) shows that the number of GHCN stations available has decreased rapidly in recent years, would a new analysis using the same number of stations throughout the record show the same level of warming?

The ISH database is fairly large, organized in yearly files, and I have been downloading the most recent years first. So far, I have obtained data for the last 24 years, since 1986. The distribution of all stations providing fairly complete time coverage since 1986, having observations at least 4 times per day, is shown in the following map.

[Image: Map of ISH stations with 6-hourly observations and fairly complete coverage, 1986 through 2009]

I computed daily average temperatures at each station from the observations at 00, 06, 12, and 18 UTC. For stations with at least 20 days of such averages per month, I then computed monthly averages throughout the 24 year period of record. I then computed an average annual cycle at each station separately, and then monthly anomalies (departures from the average annual cycle).
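
A compact sketch of that averaging chain, assuming a table of 6-hourly observations (the column names, and the requirement that all four synoptic hours be present for a daily mean, are illustrative assumptions):

```python
# Sketch of the averaging chain: 6-hourly obs -> daily means -> monthly means
# (at least 20 days required) -> anomalies relative to each station's average
# annual cycle. Column names, and the all-four-hours rule for a daily mean,
# are assumptions, not the post's exact procedure.
import pandas as pd

def monthly_anomalies(df):
    """df columns: 'station', 'time' (UTC datetime64), 'temp' (deg C)."""
    df = df[df['time'].dt.hour.isin([0, 6, 12, 18])].copy()
    df['date'] = df['time'].dt.floor('D')

    # Daily mean only when all four synoptic hours are present
    daily = df.groupby(['station', 'date'])['temp'].agg(['mean', 'count'])
    daily = daily[daily['count'] == 4]['mean']

    # Monthly mean only when at least 20 daily means exist in the month
    stations = daily.index.get_level_values('station')
    periods = daily.index.get_level_values('date').to_period('M')
    monthly = daily.groupby([stations, periods]).agg(['mean', 'count'])
    monthly = monthly[monthly['count'] >= 20]['mean']

    # Anomaly = monthly mean minus the station's average for that calendar month
    months = pd.PeriodIndex(monthly.index.get_level_values(1)).month
    climo = monthly.groupby([monthly.index.get_level_values(0), months]).transform('mean')
    return monthly - climo
```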

Similar to the Jones methodology, I then averaged all station month anomalies in 5 deg. grid squares, and then area-weighted those grids having good data over the Northern Hemisphere. I also recomputed the Jones NH anomalies for the same base period for a more apples-to-apples comparison. The results are shown in the following graph.
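
And a similarly minimal sketch of the gridding and area-weighting step, assuming each 5-degree box is weighted by the cosine of its central latitude (variable names are illustrative):

```python
# Sketch: bin one month of station anomalies into 5-degree boxes, average
# within each box, then area-weight the occupied boxes by cos(latitude of
# the box center) to form the Northern Hemisphere mean.
import numpy as np

def nh_grid_average(lats, lons, anomalies, box=5.0):
    ilat = np.floor(np.asarray(lats) / box).astype(int)
    ilon = np.floor(np.asarray(lons) / box).astype(int)
    sums, counts = {}, {}
    for i, j, a in zip(ilat, ilon, anomalies):
        sums[(i, j)] = sums.get((i, j), 0.0) + a
        counts[(i, j)] = counts.get((i, j), 0) + 1

    num = den = 0.0
    for (i, j), s in sums.items():
        box_mean = s / counts[(i, j)]
        lat_center = (i + 0.5) * box                  # central latitude of the box
        w = np.cos(np.radians(lat_center))            # area weight ~ cos(latitude)
        num += w * box_mean
        den += w
    return num / den if den else float('nan')
```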

[Image: ISH vs. CRUTem3NH Northern Hemisphere monthly temperature anomalies, 1986 through 2009]

I’ll have to admit I was a little astounded at the agreement between Jones’ and my analyses, especially since I chose a rather ad-hoc method of data screening that was not optimized in any way. Note that the linear temperature trends are essentially identical; the correlation between the monthly anomalies is 0.91.

One significant difference is that my temperature anomalies are, on average, larger than Jones’ by a factor of 1.36. My first suspicion is that Jones has relatively more tropical than high-latitude area in his averages, which would mute the signal. I did not have time to verify this.

Of course, an increasing urban heat island effect could still be contaminating both datasets, resulting in a spurious warming trend. Also, when I include years before 1986 in the analysis, the warming trends might start to diverge. But at face value, this plot seems to indicate that the rapid decrease in the number of stations included in the GHCN database in recent years has not caused a spurious warming trend in the Jones dataset — at least not since 1986. Also note that December 2009 was, indeed, a cool month in my analysis.

FUTURE PLANS

We are still in the early stages of development of the satellite-based land surface temperature product, which is where this post started.

Regarding my analysis of the ISH surface thermometer dataset, I expect to extend the above analysis back to 1973 at least, the year when a maximum number of stations were available. I’ll post results when I’m done.

In the spirit of openness, I hope to post some form of my derived dataset — the monthly station average temperatures, by UTC hour — so others can analyze it. The data volume will be too large to post at this website, which is hosted commercially; I will find someplace on our UAH computer system so others can access it through ftp.

While there are many ways to slice and dice the thermometer data, I do not have a lot of time to devote to this side effort. I can’t respond to all the questions and suggestions you e-mail me on this subject, but I promise I will read them.

insurgent
February 20, 2010 12:02 pm

Always an interesting read from you Dr. Spencer. Keep up the great work.
Do you have a chart for the ISH dataset showing the number of active stations by year like the one for GHCN?
Also, is there a new home for the satellite temperature data maps like those that are at climate.uah.edu which hasn’t been updated since 2008?

ShrNfr
February 20, 2010 12:07 pm

I still wish you would somehow dredge up the NEMS and SCAMS datasets and patch those on to the front of the present ones. NEMS is a bit more problematic, but SCAMS was a nice instrument until the belt that rotated the horn jammed. That would extend the dataset back to the early 70s anyway.

A C Osborn
February 20, 2010 12:08 pm

It is interesting that 2002 has the highest anomaly of 2.0C and yet it is not recognised as the Hottest Year.
Can you plot your Satellite values on the same Chart?

dorlomin
February 20, 2010 12:18 pm

Good luck with this endeavour. Interesting idea.

NickB.
February 20, 2010 12:21 pm

Nice work Dr. Spencer – looking forward to the new satellite dataset as well. If you were Santa Claus, and I could ask you for anything… in the interest of spurious trend vs. long term trend vs. recurring trend, I’d love to see what the 30’s looked like but understand you’re busy!
I imagine the AGW crowd will call this an independent confirmation of CRU – this should be interesting to watch.
Time for popcorn!

latitude
February 20, 2010 12:31 pm

I would guess that urbanization, at least in this country, has ground to a screeching halt. And don’t we have the best thermometers?
“”But at face value, this plot seems to indicate that the rapid decrease in the number of stations included in the GHCN database in recent years has not caused a spurious warming trend in the Jones dataset — at least not since 1986.””
My bone to pick is with the temperature data prior to 1986, when urbanization was in full swing.

Robert
February 20, 2010 12:32 pm

Hey, he analyzed the data, and it showed the opposite of what he was expecting, and he went public with that. That’s great; that’s how it’s supposed to work.
On the subject of satellites, I’m really excited about SORCE (http://lasp.colorado.edu/sorce/index.htm) and CERES (http://ams.confex.com/ams/pdfpapers/148171.pdf) (new sensor launching in 2010). With those two instruments working in tandem, we should be able to get a clear direct measurement of global warming: (radiation in) – (radiation out) = net warming.
We have those data sets prior to 2008, but my understanding is that the uncertainties are too great to accurately measure the net radiation budget. Get that, and you can bypass (to an extent) the entire question of temperatures, a la Archimedes in his bath: if you are absorbing more energy than you are radiating, then that energy is somewhere in the system in the form of additional heat.
Of course we’re still going to care what is warming and in what pattern, but being able to measure the precise amount of total warming will be huge.

February 20, 2010 12:34 pm

I have compared the MSUAH and CRU records for 23.5-90/0-360 (Northern extratropics) from the KNMI database and found excellent agreement. The tropics and Southern Hemisphere were however diverging, with CRU running warmer. Unfortunately, KNMI does not allow extracting MSUAH land-only data for a given area, so I could not compare land against land. Probably the SSTs, which are not affected by UHI, are improving the overall NH ground record.
I have also compared two high quality stations with MSUAH for their 2.5×2.5° grids and found excellent agreement. They were the Armagh Observatory and the Lomnicky Peak Observatory.
Dr Spencer, which ground stations will the surface record be calibrated against?

DirkH
February 20, 2010 12:37 pm

“I know it’s a little obscure and old-fashioned, but I always program in Fortran”
😉 Very refreshing! I’m a C++ guy, too young for Fortran, but I can understand why one uses the tool one knows best. Good luck to you, Dr. Spencer!

Brian D
February 20, 2010 12:45 pm

If you’re using a similar methodology to Jones, wouldn’t you expect similar results? Is the methodology truly valid, or should a different one be used?
I notice here the step up in 1998. I saw step-ups in 1931 and 1998 in the temp graphs for the Upper Midwest using the USHCN data site from a previous post. Just like annual snow extents. The step down in the mid-80’s was mainly due to the decrease in Spring and Summer extents. It always piques my interest as to why that happens. Something changed in short order.

Doug in Seattle
February 20, 2010 12:47 pm

So it seems that Dr. Jones wasn’t exaggerating his data set – at least within the 24-year window Dr. Spencer uses. Too bad he destroyed/lost his data; he might now have had some crowing points.

DirkH
February 20, 2010 12:48 pm

“Robert (12:32:21) :
Hey, he analyzed the data, and it showed the opposite of what he was expecting, and he went public with that.”
I don’t think you interpret that correctly. Dr. Spencer is in the business of creating satellite-based temperature measurement data sets, and I guess he thinks that might be a better method to get a global dataset than the gridding-and-homogenizing approach used by, say, GISS. Nowhere does he say that he expects to find cooling or that he expects to find out “the opposite” of what Jones finds out.

steven mosher
February 20, 2010 12:55 pm

Thanks Dr. Spencer,
If you post the code other people who want to port to R or matlab or C or python or whatever can work with the language they like.
Kudos.

Bart
February 20, 2010 12:57 pm

“I then averaged all station month anomalies in 5 deg. grid squares, and then area-weighted those grids having good data over the Northern Hemisphere.”
How do you interpolate over the oceans, or in regions with smaller numbers of measuring stations? Shouldn’t you fit the data to an expansion in spherical harmonics and compute the mean from the resulting function? It seems to me this would be more rigorous, and since satellite data can be processed to give readings at different altitudes, produce a three dimensional model which could confirm or falsify heating of different layers of the atmosphere compared to GCM expectations.

DirkH
February 20, 2010 12:58 pm

“Robert (12:32:21) :
[…]
Get that, and you can bypass (to an extent) the entire question of temperatures, a la Archimedes in his bath: if you are absorbing more energy than you are radiating, then that energy is somewhere in the system in the form of additional heat.”
And Robert, you should really stop using that iPhone app because that statement doesn’t make sense on so many levels I’d like to do the opposite of eating if I cared enough.

Bart
February 20, 2010 12:58 pm

i.e., if you use the same methodology with satellite data, you could get a 3d model.

Rupert
February 20, 2010 1:04 pm

It looks like the Active Temperature Stations graph is the first hockey stick that we can trust.

February 20, 2010 1:04 pm

Spencer:
The ISH data include globally distributed surface weather stations since 1901, and are updated and archived at NCDC in near-real time. The data are available for free to .gov and .edu domains. (NOTE: You might get an error when you click on that link if you do not have free access. For instance, I cannot access the data from home.)
Tax-payers will have to pay twice for this data, it seems.

nc
February 20, 2010 1:08 pm

Now, the data being used by Dr Spencer: is that raw data or adjusted data? Seems I have read that GHCN data is adjusted.

aMINO aCIDS iN mETEORITES
February 20, 2010 1:14 pm

There’s a rapid drop in stations starting around 1988. That was also when James Hansen gave his infamous talk before the Senate. Is there a connection? Are James Hansen et al trying to make Hansen’s prediction of rise in temps come to pass? If so that would truly be ‘anthropogenic’ (i.e., made by man) warming.

(video on Hansen’s talk in 1988)

D. King
February 20, 2010 1:15 pm

“I’ll have to admit I was a little astounded at the agreement between Jones’ and my analyses, especially since I chose a rather ad-hoc method of data screening that was not optimized in any way. Note that the linear temperature trends are essentially identical; the correlation between the monthly anomalies is 0.91.”
I don’t understand the following video. It sounds like NOAA is saying the new automated weather stations are not calibrated, and therefore heat is added to adjust the data. Is this correct, and is this the data you are working with? Further, does this explain the step change apparent in your CRU Temp graph in 1998? The video has a countdown timer, so to save time, start at -6:45 and end at -5:57, though the whole video is worth watching. Sorry, you may have to watch the commercial first!
http://www.kusi.com/weather/colemanscorner/84516272.html?video=YHI&t=a

40 Shades of Green
February 20, 2010 1:23 pm

Reading off the graph, this looks like 0.8 degrees of warming over 24 years, which by my calculations makes for 0.33 degrees of warming per decade.
Am I right?
This looks to be bang in the middle of AR4 projections, does it not?
Having said that, one of the things that always annoys me about the temperature record over the last 30 years is that the first half had 3 big tropical (or nearly tropical) volcanoes and the second half had none. This depressed the temperatures in the first half, so running a trend line across the 30 years gives you “absence of volcano” driven warming.
And indeed, having said that, I take it that given the correlation with Dr Jones’ analysis, this analysis also has no statistically significant warming for 15 years. Or to put it another way: no warming since the volcanoes stopped depressing temperatures.

Robert
February 20, 2010 1:35 pm

@Dirk: “I don’t think you interpret that correctly.”
I think you’re right; I over-read his statement that he was “astonished” by what he found. Of course, he may simply have been astonished that the results agreed so well, not that they showed a more dramatic warming trend.
“And Robert, you should really stop using that iPhone app because that statement doesn’t make sense on so many levels i’d like to do the opposite of eating if i cared enough.”
Makes perfect sense. You have a problem with the concept of a radiation budget?

Bernie
February 20, 2010 1:35 pm

Dr Spencer:
Your raw data set should prove to be a very useful addition. Given that we know which 5 degree grids, or even 2.5 degree grids, have experienced the greatest and least urbanization, perhaps we can even get a better handle on the UHI effects.
Do your stations have full metadata or are they plagued with similar gaps to the other temperature series?
Many thanks for the openness and candor.

Adam from Kansas
February 20, 2010 1:37 pm

The new dataset doesn’t have the spike at the beginning of 2010 nor the big uptrend from 2008 to 2009.
It’s really interesting to say the least. This confirms the comments here saying “but we’re freezing our butts off in our area” or “summer has been rather cool in our area so far”, and all the cold stories surfacing around the globe.
The trend decreases significantly if you start from 2000 and starts going down if you work from 2007. This stuff is cutting edge and cool stuff.

NickB.
February 20, 2010 1:43 pm

Folks – let’s get one thing straight for the analysis portion of the post:
The hypothesis being tested was that, for the northern hemisphere at least, the massive GHCN station count drop-off in the last twenty years introduced or exacerbated the warming trend.
It was not to test instrumentation accuracy, UHI, or even the accuracy of CRU vs. an independent analysis on a historical timeframe.
I do appreciate that, for good measure, he used raw data as well, but given the nature of the analysis it wasn’t actually necessary. I’d also be curious to see GISS and UAH on the graph, since I have heard rumors about divergences between the three over the last few years.

February 20, 2010 1:47 pm

Bart, there’s no interpolation. I’m not a big fan of creating data where there are none. As it is, spreading a single thermometer over a 5×5 deg grid square (90,000 sq. nautical miles at the equator) is a bit of a stretch already. That’s one advantage of the satellite data…every square mile of the Earth’s surface is sampled (except near the poles).

Chris
February 20, 2010 1:50 pm

NH Land temps today are the same as 1990, or twenty years ago. I thought global warming was a runaway train.

John Hooper
February 20, 2010 1:53 pm

So the world is warming just like everyone said?
Guess ClimateGate was a red herring after all.

DirkH
February 20, 2010 2:05 pm

“40 Shades of Green (13:23:19) :
[…]
This looks to be bang in the middle of AR4 projections, does it not.”
For which scenario[s], 40 Shades of Green? You do know that they made different assumptions about the CO2 emissions, don’t you?

sky
February 20, 2010 2:06 pm

I’m not at all surprised by the close agreement between Spencer’s and Jones’ analysis of essentially similar station data for recent decades. The devil lies in the patching together of anomalies from different stations in the early decades of the past century. That’s where offsets and adjustments in the name of “homogenizing” the records introduce dubious “trends.”

Jordan
February 20, 2010 2:07 pm

Although this is a step in the right direction, my concerns about data sampling remain.
Until we have analysed the temperature field (in space and time), how do we know how to sample it to avoid aliasing?
Until we understand the dynamics of the temperature field and have determined a sampling regime which meets the requirements of Shannon’s Sampling Theorem, any such series will have a great deal of uncertainty hanging over it.
Those sharp steps in the series are a cause for concern. Do they indicate high frequency components in the signal (“frequency” meaning in time or space) and the possibility of aliasing of the underlying (analogue) temperature signal?
Remember that faithful reconstruction of a signal from discrete samples is a much more onerous task than being able to calculate statistical aggregates. But even then, if inadequate sampling results in aliased data, we cannot rely on the calculation of statistical aggregates of the samples to give an accurate measure of the statistical aggregates of the analogue signal.
Sorry if this is repetitive as I debated these points at length on an earlier thread. There is possibly little to add by debating them again here, so I propose not to do so.
I’d be interested to hear about any analysis of the dynamics of the temperature field (specifically, identification of the temporal or spatial bandwidths) which could help to address the question of whether the requirements of the Sampling Theorem have been met.
Cheers.

Dave N
February 20, 2010 2:22 pm

nc (13:08:48) :
Maybe it’s the raw adjusted data?

David S
February 20, 2010 2:23 pm

Is the data you’re using adjusted in any way before you get it?

DirkH
February 20, 2010 2:26 pm

I can’t access the noaa data ftp directory (no surprise) but couldn’t stop myself from digging around on that ftp server… some data is publicly accessible. I found all time snow record files. Compare 19 Feb 2010 to 19 Feb 2007:
ftp://ftp.ncdc.noaa.gov/pub/data/extremes/all-time/snow/20070219.txt
Number of Broken Records: 0
Number of Tied Records: 1
ftp://ftp.ncdc.noaa.gov/pub/data/extremes/all-time/snow/20100219.txt
Number of Broken Records: 229
Number of Tied Records: 117

DirkH
February 20, 2010 2:35 pm

“Robert (13:35:26) :
[…]
Makes perfect sense. You have a problem with the concept of a radiation budget?”
So you say we model/measure (like Hansen+Schmidt) a radiation imbalance to an absurd precision (or at least purport that we do), deduce that the earth is storing energy, and stop looking for where it might be? That’s nothing new, Robert; Hansen and Schmidt have done exactly that. It was nonsense back then and it’s nonsense now. Google for radiation imbalance 0.85 Watt/m^2 or something like that, but you (or your telephone) probably know that already.
If the earth purportedly stores energy (you say yourself “stored heat”) it would have to be, well, warm somewhere, wouldn’t it? You’d rather stop measuring altogether? And that sounds like an intelligent way to go about things to you? A smart thing to do, smart on an Archimedes-type smartness level? Really? You’re a joker.

NickB.
February 20, 2010 3:09 pm

John Hooper,
I must have missed the part about how all the various issues raised in those e-mails hinged on GHCN station dropout for the Northern Hemisphere in the last 24 years.
In fairness, this does seem to show that for the past 24 years the CRU results for the Northern Hemisphere pass the smell test around station dropout and adjustments.

Robert
February 20, 2010 3:23 pm

@ DirkH
Wow, Dirk, you’ve got a lot of fear there. You have no rational case, of course. Measuring the radiation budget more accurately is good for everybody who cares about the science. But given that you put “stored heat” in quotation marks, that apparently doesn’t include you.
I have to think that a good way to guess which side of a debate is full of crap would be to ask which side is afraid of better measurements.

February 20, 2010 3:26 pm

The NH extratropics are a good measure of warming/cooling, since the Nino effect is not much visible there. It looks like the NH peaked in 2006-2007. This Dec-Jan has an anomaly equal to the early ’80s 😮
I accept that a cold winter in one part is not the whole globe, but where is all that warmth from the whole hemisphere hidden? It is a travesty!
http://climexp.knmi.nl/data/icrutem3_hadsst2_0-360E_23.5-90N_n_1980:2011a.png

Holger Danske
February 20, 2010 3:47 pm

Robert, I don’t understand your reference to Archimedes. As far as I know he was only interested in buoyancy/gravity:
http://en.wikipedia.org/wiki/Eureka_(word)

February 20, 2010 3:51 pm

My understanding is that there is some quality testing with quality flags included with each ISH observation. I doubt that any of the temperatures have been altered in any way.
NOW…in retrospect, I’m surprised no one asked the following question:
If the monthly temperature anomalies are, on average, 36% larger than Jones got…why isn’t the warming trend 36% greater, too? Maybe the agreement isn’t as close as it seems at first.

Pamela Gray
February 20, 2010 4:10 pm

I would be careful about infilling empty grids. You could be crossing climate zones and thus ascribing a temperature that would be false; i.e., it may be true for the climate zone it came from, but it cannot be accurate for the neighboring climate zone it is infilling if that zone has no data.

Steve Goddard
February 20, 2010 4:16 pm

UAH showed a huge spike in January which was much smaller in GISS. Same thing in 1998. It raises questions about the accuracy of TLT measurements.

Geoff Sherrington
February 20, 2010 4:35 pm

A major complication is gridding/interpolation, as Bart (12:57:41) notes above at 20/2. For years I have seen reporting of 5 x 5 grid cells with area weighted averages. There are many different ways to arrive at composite figures from sparse observations, and the method above is only one. I’d be inclined to bring in some mining expertise if this had not been done already. One way miners approach this differently is to delete or assign low grade values to blocks with insufficient information. It costs money to process barren rock that looks like paydirt because the math was unsuited.
You note that “I’ll have to admit I was a little astounded at the agreement between Jones’ and my analyses”. I do not think that the analysis is particularly close. Most extremes are in blue, for a start, possibly meaning that there is some disconnect between measuring 1.5 m above the surface and at the higher altitude satellite region; or that CRU has more severe smoothing, or whatever.
There is reasonable agreement at this scale on the time axis, but that needs a bit more mechanism discussion. I’ve been working on the 1998 hot year and can find no explanation for it, particularly one involving GHG. So until the reasons for the annual excursions can be explained, they remain as lines and points on a graph.

Ian H
February 20, 2010 4:43 pm

Since the core of the earth is hotter than the surface (by a considerable extent) it is not enough just to form an energy budget by simply looking at the amount of radiant energy arriving at or leaving the surface. You’ve got to also make some accounting for the heat rising to the surface from the interior.
Furthermore, heat energy will not manifest directly as a temperature rise. Much will be absorbed in state transitions – ice to water and water to steam being the main two. And some will be absorbed in driving chemical reactions – for example the chlorophyll-mediated reaction converting CO_2 into sugar.
I don’t see that accounting for all this is going to be any simpler than trying to keep accurate temperature records. In particular a heat budget isn’t going to end ambiguity since a lot of assumptions will have to be made about these things.

NickB.
February 20, 2010 4:56 pm

Dr. Roy,
I had caught the comment about anomalies in your post, and I guess, as a rusty economics student who does IT for a living, I’m sure I don’t grasp the consequences of it. I was thinking, incorrectly I suspect, that it meant the amplitude of your analysis was greater but, judging from the graph, the resulting averages were consistent. Maybe not…
Any insight into what this tells us about CRU and what it doesn’t?

February 20, 2010 5:20 pm

It will be good to have a new surface temperature data set, and I congratulate Dr Spencer on the initiative.
But that GHCN plot suggests that there has been a large recent drop to about 400 stations. You see this sometimes, and it’s always the last month. It’s an elementary error – we saw it a little while ago with the Langoliers. The fact is that GHCN updates with new monthly data as it comes in, a few days at a time. If you look at the file early in the month, you’ll see only a few hundred reporting the latest month. But there has been a base of 1200+ stations reporting consistently (if sometimes a few days later) for many years now. I’ve documented this here. I don’t believe you’ll find a single month in the last fifty years that, by the end of the following month, did not have reports from at least 1100 stations.

carrot eater
February 20, 2010 6:21 pm

It wouldn’t be too hard to compute a GISS or CRU record for the same area that was used here. Both provide gridded data, after all. Apples to apples.
That’d test the idea of whether a lack of tropical coverage increased the variability in the Spencer set.
“I computed daily average temperatures at each station from the observations at 00, 06, 12, and 18 UTC.”
This really is UTC, and not local time for each station? This in itself is somewhat interesting, if correct. It basically says that you can measure the temp every 6 hours, and no matter what the phase, you’ll still get a trend that correlates really well with the trend in (Tmax+Tmin)/2.
I can’t quite decide whether I’m surprised by that, or not.

suricat
February 20, 2010 6:39 pm

Dr. Roy.
It really is nice to see your participation in the thread that arose from your post.
That way we all get feedback. What’s more, it’s ‘interactive’. 🙂
Did you ever ride a motorcycle? I did (in my youth). One thing you realise when you ride a motorcycle is that your ‘spatial awareness’ is increased. Riding through the countryside in summer you can feel the air temperatures alter as you ride through different temperature regions, and these can alter three, or even four, times for each kilometre of your travel. Autumn, winter and spring aren’t so significant because it just feels ‘cold’ here in the UK when biking, but hey, that’s just a sensual human thing.
This brings me to the point that ‘Jordan’ made (Jordan (14:07:31)). Signal resolution!
Steve_M once asked me what my preference for resolution of surface temperature stations would be and I replied ‘a 1 km grid’, to which he replied ‘we wish’! I’m sure you realise the ramifications of this, if networking nodes are altered the overall signal reception is also altered and especially when the ‘resolution’ is well outside of the ‘1/3 of the signal frequency’ required for signal definition.
I wish you luck with your project, but I’m apprehensive of its outcome.
Best regards, suricat.

carrot eater
February 20, 2010 6:55 pm

I’d been thinking about doing something like this using SYNOPs, but didn’t realise these data were all stored here. Your map shows that Africa is still sparse, but I wonder if you’ll get more if you relax the data completeness standards a bit.
Get out your automatic downloaders, gentlemen, it’s a lot of files.

mobihci
February 20, 2010 7:46 pm

Signal from noise: if you want to find the signal (AGW warming, which includes station dropouts), you must first get rid of the noise (natural warming), which includes long-term cycles. It may be that removing stations does actually cause a warming, but because the noise (the warming part of the cycle) is present and stronger than the signal, you cannot see it. The only way to remove the noise is to extend the test to twice the longest cycle’s period, whatever that may be. At least 60 years needs to be looked at; more like 2000 years is necessary to draw any real conclusions from it, but then this sort of calculation we see here from Spencer seems to be the norm in this industry.

February 20, 2010 8:06 pm

From my experience of looking at weather data over the past 25 years, I would have to say that the day to night, high to low spread in Temperatures has a lot to do with specific heat content due to soil moisture content, elevation, cloud cover and thickness of the atmosphere.
So I would expect the same to carry over to the greater spread from monthly highs to lows for more concentrated reporting of mid-latitude areas than equatorial ones.
The inverse shows up in the resultant dying of thermometers by movement to more coastal areas and irrigated rural climes, but is seen more as an increase in night-time lows, rather than just the decrease in total spread, in the reviews in this debate. Which, by using the (Tmax+Tmin)/2 = Ave method rather than the median of 24 hourly readings, might show a slightly different story.

cement a friend
February 20, 2010 8:25 pm

Roy, if one uses the same database one could expect that two sets of analyses should be similar. However, one has to ask: is the database representative of the total area being covered?
Warwick Hughes http://www.warwickhughes.com/blog/ has put up a post, “USA Dept of Energy Jones et al 1986 350 pages station documentation now online in pdf”. He notes: “That is the attempt by CRU and Jones to direct investigators to the GHCN station data in lieu of Jones et al/CRU station data. The two groups conduct distinctly different processes on station data and researchers will seldom get close to understanding what Jones/CRU have done by relying on GHCN station versions. The GHCN is riddled with its own multitude of errors and is more than a subject for study in itself.”
I had a quick glance at the Southern Hemisphere document and note that none of the three stations in Tasmania are representative of that island, and in Queensland 5 of the nine stations are located on the coast and also in cities of over 50,000. The four inland stations are all in developed town locations.
The Russians have expressed doubts about the selection of their stations. In China there has been a huge growth of cities, and there have to be doubts about the representativeness of any of the stations as well as the record available.
Maybe you need to focus on the few stations which are truly representative of a local area and then compare satellite measurements.

cement a friend
February 20, 2010 8:42 pm

I should have added from Warwick Hughes http://www.warwickhughes.com/cru86/tr027/index.htm
The following comment
” Readers should email CDIAC and ask for copies of TR022 and TR027. The USA DoE foisted this stuff on the world.
cdiac@ornl.gov
However – for the SH I have a full station list with GHCN populations added by me. Gives you a fair idea how many city stations were used. Readers can judge for themselves the veracity of the Jones et al statement on p1216 of Jones et al 1986b, where they state that “… very few stations in our final data set come from large cities.” This glib and lulling statement is detached from the reality that 40% of their ~300 SH stations are cities with population over 50K.”

February 20, 2010 9:03 pm

Why is Australia totally missing from the 1986-2009 map? Surely we have weather stations reporting hourly.
How do I contact Dr Roy W. Spencer? I happen to have an empty website with a large multi-megabyte allowance that I’m not going to fill any time soon. How big are those files? I got two sites instead of one by accident when I shifted my site off Yahoo GeoCities. I need a web designer to tell me how to host those FTP files though. I’m paying for the space, I may as well use it.

Bart
February 20, 2010 9:05 pm

Roy Spencer (13:47:57) :
“I’m not a big fan of creating data where there are none. As it is, spreading a single thermometer over a 5×5 deg grid square (90,000 sq. nautical miles at the equator) is a bit of a stretch already.”
Thanks for responding, Dr. Spencer. I agree completely. Which is why I am suggesting that a fit to a spherical harmonic expansion would likely give a more faithful result. You would specifically not be “creating data where there is none” as, it appears to me, is necessary for the “Jones methodology” because there are a lot of blank grids in space and in time in the plots you show here.
With the spherical harmonic expansion, you would be doing the equivalent of a linear, quadratic, cubic, etc. polynomial fit such as you regularly do for one-dimensional data, but it would be a two-dimensional least squares fit specifically formulated for spherical geometry. Using a low-order model which is significantly overdetermined by the data would effectively spatially low-pass filter the data, and should produce a good function upon which to base a global estimate without all this ad hoc gridding and weighting.
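
A minimal sketch of what a very low-order version of this suggested fit could look like, truncated at degree l = 1 and using ordinary least squares (the coefficient of the constant basis function is the fitted global mean, because the l = 1 terms average to zero over the sphere):

```python
# Sketch of a degree-1 real spherical-harmonic least-squares fit to one month
# of station anomalies. The constant term's coefficient is the fitted global
# mean, because the three l=1 basis functions average to zero over the sphere.
import numpy as np

def fit_low_order(lats_deg, lons_deg, anomalies):
    lat = np.radians(lats_deg)
    lon = np.radians(lons_deg)
    X = np.column_stack([
        np.ones_like(lat),             # l=0 (constant)
        np.sin(lat),                   # l=1, m=0   (~ z)
        np.cos(lat) * np.cos(lon),     # l=1, m=1   (~ x)
        np.cos(lat) * np.sin(lon),     # l=1, m=-1  (~ y)
    ])
    coeffs, *_ = np.linalg.lstsq(X, anomalies, rcond=None)
    return coeffs                      # coeffs[0] is the estimated area-mean anomaly
```

Nick Stokes’ objection below still applies: with stations clustered in a few regions, an unweighted least-squares fit is pulled toward the dense areas, so some station weighting would still be needed.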

Bart
February 20, 2010 9:06 pm

I also understand that the question is moot with the satellite data, but it might give you a better glimpse at what to expect when you do process the satellite data.

Robert
February 20, 2010 9:34 pm

” Holger Danske (15:47:07) :
Robert, I don’t understand your reference to Archimedes. As far as I know he was only interested in buoyancy/gravity:”
I cited him as an example of seeing “around” a problem:
“The most widely known anecdote about Archimedes tells of how he invented a method for determining the volume of an object with an irregular shape. According to Vitruvius, a new crown in the shape of a laurel wreath had been made for King Hiero II, and Archimedes was asked to determine whether it was of solid gold, or whether silver had been added by a dishonest goldsmith.[13] Archimedes had to solve the problem without damaging the crown, so he could not melt it down into a regularly shaped body in order to calculate its density. While taking a bath, he noticed that the level of the water in the tub rose as he got in, and realized that this effect could be used to determine the volume of the crown. For practical purposes water is incompressible,[14] so the submerged crown would displace an amount of water equal to its own volume.”
If we can get really accurate measurement of energy in/energy out, we can effectively measure how much global warming (or cooling) is taking place, rather than trying to chase down and find all the heat in the air, in the oceans, or in the ice. That’s a complicated measurement problem (like Archimedes’ crown) and while, unlike with the crown, we will always need to measure the temperatures directly, a precise direct measurement of earth’s radiation budget can quantify the amount of warming/cooling independent of all that.
Wouldn’t that be cool?

R. Gates
February 20, 2010 10:12 pm

Don’t know how the so-called “AGW Crowd” will react to this effort, but based on satellite data, February follows closely on the heels of January as a very warm month in the troposphere. With 8 days left, without an immediate and rapid cooling, February should be the warmest on satellite record. See:
http://discover.itsc.uah.edu/amsutemps/
If February does end as the warmest on instrument record, then we’ll have 2 months of 2010 completed with record temps, and be well on the way to meeting my prediction of making 2010 as the warmest year on record, unless we have a Mt. Pinatubo type volcanic eruption. With these warm tropospheric temps and the sun waking up from a prolonged solar minimum, cosmic ray counts falling, the interplanetary AP index edging up…looks like it will be hard to keep 2010 cool…

pft
February 20, 2010 10:42 pm

“R. Gates (22:12:25) :
… but based on satellite data, February follows closely on the heels of January as a very warm month in the troposphere. With 8 days left, without an immediate and rapid cooling, February should be the warmest on satellite record. …
If February does end as the warmest on instrument record, then we’ll have 2 months of 2010 completed with record temps, and be well on the way to meeting my prediction of making 2010 as the warmest year on record.”
Wish I lived at 14,000 ft where all the warmth is (what is it, about -25 deg F there), which is probably related to El Nino and thus not reflecting climate trends. Meanwhile back on land near sea level, DC had about 4 ft of snow about the time I flew back to Taipei, and we just had the coldest lunar new year holiday in recent memory, perhaps as cold as 1986 when I first came out here, although maybe the heavy rains made it seem colder than it was.
But at least the sun is becoming more active, and that should help warm us up a bit.
Not sure where CO2 comes into play in these cooling and warming events though. That’s the big question, isn’t it? I mean, Jones has admitted it might have been warmer in the MWP than today, and that the warming from 1979-1999 was not much different from the warming of 1910-1940 and 1850-1880, and that while not statistically significant, there has been a cooling trend since 2002.

Anticlimactic
February 20, 2010 11:17 pm

I am not sure I see the point of this: taking a subset of suspect data to produce a less accurate version of a suspect graph!
Also this sucks you in to the idea that global warming is a continuous trend and it is just a matter of by how much. If you take the graph in 8 year chunks then 1986-2003 looks like it would trend as flat at about -0.3C, 2004-2001 would show a rise from about -0.3C to 0.3C, and 2002-2009 would trend as flat at about 0.3C.
You would get a similar result by measuring a kettle from 2 minutes before to 2 minutes after it has boiled. The trend line would suggest the kettle will reach the melting point of steel in about 100 minutes!
________________________________________
The graph is like a thriller novel with the last page missing : January 2010. As this is the northern hemisphere land surfaces it is not contaminated by the hotter southern hemisphere. If January comes out as not being exceptionally cold, and not way below the area shown on the graph, it will be a pointer as to how compromised the CRU data is.

R. Gates
February 20, 2010 11:31 pm

pft,
Actually, the near-surface temps in the troposphere are warm as well this year, and it’s warm all the way up to about 46,000 ft. Some of this may be El Nino related, certainly, but since it is a global reading, that doesn’t explain it all.
Regardless of whether or not the MWP was warmer than the recent warm period (and I think the data is still not completely solid that it was, over the entire globe), the whole issue comes down to whether or not our human-created CO2 that is accumulating in greater quantities every year can impact the climate enough to override any natural variations. The earth should be entering another glacial period about now in this overall Ice Age that we are in. Can the accumulation of human-created GHG’s delay or even overcome the next advance of the glaciers? Or perhaps, and I think this is also a real possibility, we will warm things up just enough to change the ocean currents, slowing down the circulation of heat from the tropics to the poles, and perhaps bring on the advance of the glaciers even faster than they would have come on their own. This “Day After Tomorrow” scenario is just as much a threat as any runaway global warming. This scenario is what ushered in the Younger Dryas period, and sent the earth back into a thousand years of cold just as it was warming up after the last glacial period.

Anticlimactic
February 21, 2010 12:01 am

OOPS! I obviously have a blind spot with the 1990s! My entry should read :
Also this sucks you in to the idea that global warming is a continuous trend and it is just a matter of by how much. If you take the graph in 8 year chunks then 1986-1993 looks like it would trend as flat at about -0.3C, 1994-2001 would show a rise from about -0.3C to 0.3C, and 2002-2009 would trend as flat at about 0.3C.

February 21, 2010 12:45 am

Re: Bart (Feb 20 21:05)
I don’t like the spherical harmonics idea. Partly because you’d need fairly high order to capture variations on the scale of continents, which I imagine that you want to do, and this involves an assumption of high order differentiability.
But the worse problem is probably false teleconnection. In trying to accommodate a difficult fit in, say, Greenland, the fitting would produce (~Gibbs effect) ripples all over the world, not diminishing with distance from Greenland.

Bart
February 21, 2010 2:09 am

Nick Stokes (00:45:13) :
“Partly because you’d need fairly high order to capture variations on the scale of continents, which I imagine that you want to do, [I do?] and this involves an assumption of high order differentiability.”
Not any more than you need a higher order than typical linear trend (order 1 polynomial expansion) to pick up variations on the scale of years or the eleven year solar cycle or other periodic components in order to produce estimates of warming over the last century.
Remember, the object would be to get a function which would then be integrated over the surface and divided by 4*pi*R^2 to get a global average. Higher spatial frequencies would be attenuated in that integration, so each successive harmonic adds progressively less to the final result.
I would even say, you want to do a low order fit, maybe 2nd or 3rd order even, with lots of data to make it well overdetermined, as this will beat down noise in the final product. As always in filtering applications, the choice is between bias and variance in your estimator. You would have to futz around with the data to find the best compromise. Try 3rd order, lower it to 2nd and raise it to 4th and see how things change. Even plot several orders, and look for the Goldilocks point at which lower is too smooth, and higher is too variable.
“But the worse problem is probably false teleconnection. In trying to accommodate a difficult fit in, say. Greenland, the fitting would produce (~Gibbs effect) ripples all over the world, not diminishing with distance from Greenland.”
A) Gibbs effects come out for higher order fits – see above
B) doesn’t matter, the integration to compute the global average would beat the spatial variability down
Thanks for the comments.

Brent Hargreaves
February 21, 2010 2:10 am

John Hooper (13:53:49) 20 Feb : “So the world is warming just like everyone said?
Guess ClimateGate was a red herring after all.”
No, that doesn’t follow. Dr. Spencer’s work here is validating records over a tiny 24-year period. This Great Debate concerns timescales in millennia, not decades, and whether the recent rise is unprecedented (as the Hockey Stick would have it) or merely the latest in a long series of ups and downs.

PJB
February 21, 2010 2:17 am

I am not really interested in just repeating the sort of computer analysis that the temperature alarmists have already made (using data the alarmists have put out on the web) to see whether they know how to program a computer correctly. After all, what Jones “lost” was the original data, not the programs used to analyze the computer datasets.
Why not collect (and not just sort out from alarmists’ temperature datasets) the handwritten temperature data from stations that you or your colleagues have visited and thus personally know started out rural and stayed rural. As few as twenty or thirty of these scattered across North America would be of interest. Even if someone wanted to, tampering with decades of hand-written temperature log books would be an impossible task. It would definitely be interesting to see whether this data shows any significant warming trend.
Really, poking around in the datasets that the alarmists have put out onto the web is the easiest sort of check to do, and it is almost guaranteed to back up their claims. Do you think they just dump any old numbers out there and hope no one looks at them? — I don’t, not now after the skeptics have started to gain traction. Not everyone is a high-level UN bureaucrat. And if by chance they did miss something and put out data that works against their case, they’ll just “find” some mistake in it — it’s their data after all, how can you dispute the presence of a mistake if they say it’s there? — and put out “corrected data” that shows the sort of warming they want.
The alarmists’ basic power is that they control the official data coming from the weather stations and the satellites, and they get to say what the “correct” data is, so any disagreements you find between their web data and their temperature trends just shows them how to fix their data so that it supports their claims. I realize you have actually done some work to check the alarmists’ claims, which puts you way ahead of almost all of the people who comment here, but you made the easy check, the check the alarmists invite people to do by publicizing their data. By now the climate establishment is on high alert; they are no longer dozing at their government-funded desks the way they were, say, two years ago, so I think we can all assume that any temperature data they provide will support their case.
By the way, their obvious next step is to “fix” the way they process the satellite data so it too supports global warming. Satellite data is highly vulnerable to tweaks in the massive computer programs needed to go from the radiation sensor data generated on-board the spacecraft to the corresponding temperature estimates of the earth’s surface and atmosphere. This computer processing is so complicated that, up until a few years ago, the satellite groups were probably reluctant to modify the computer programs. It would be expensive and difficult to do without introducing embarrassing bugs that would interrupt the data stream — and no one was really paying attention to the climate skeptics — but now they may be worried enough to attempt the modifications. (Indeed they may already have done so — strange that suddenly the satellites are showing a record-high January temperature anomaly at such a politically convenient time for the alarmists.) If they have kept the raw satellite-sensor data from past decades they can even go back with their new modified temperature-extraction programs and “correct” past satellite temperatures.
Really, as long as we’re only talking about a degree or two here and there, temperature data can be massaged any way you please as long as you own it and can claim to be making improvements.

Bart
February 21, 2010 2:45 am

Bart (02:09:03) :
Nick Stokes (00:45:13) :
Actually (claps his hand to his head), only the estimate of the 00 term impacts the average, but the estimates of the other terms impact the estimate of the 00 term. Again, though, I do not think the order has to be very high to get a good estimate of that term.

February 21, 2010 2:51 am

Re: Bart (Feb 21 02:09),
I wrongly assumed you wanted to use the harmonics to produce a smoothed spatial representation of temperature, which is one of the uses of gridding. But if you are doing it just for the global average, then another point of gridding is to ensure that different areas of the land mass contribute more or less equally to the average, regardless of the density of stations.
It’s not clear to me how you achieve that by fitting low order harmonics. The LS fit would still be over-influenced by regions of high station density.

Bart
February 21, 2010 2:57 am

“…only the estimate of the 00 term impacts the average…”
The global average, I mean, of course. I’d test this assumption myself, but I have a day job in an entirely other realm. This is just a suggestion for whomever might like to give it a whirl. You could start with low order and work higher. At some order, the 00 term should start to level out, then break up as you go higher, and that level would give you a good estimate of the global average without all of this gridding and weighting of progressively warped differential areas, which appears to me, based on the descriptions I have read, to be essentially little more sophisticated than blunt rectangular integration.
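
For readers following this exchange, the orthogonality fact being invoked can be written in one line: if the anomaly field is expanded as \(T(\theta,\phi)=\sum_{l,m} a_{lm}Y_{lm}(\theta,\phi)\), then every harmonic except the constant one integrates to zero over the sphere, so

\[
\bar{T} \;=\; \frac{1}{4\pi}\oint T\,d\Omega \;=\; \frac{a_{00}}{\sqrt{4\pi}},
\]

which is why only the 00 coefficient enters the global average, even though fitting the higher-order terms changes the estimate of that coefficient.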

Rhys Jaggar
February 21, 2010 3:05 am

Dr Spencer – a highly interesting and information-rich article.
Just one question from an interested lay-person: although your calibration will try to minimise ‘drift’ due to UHI etc, that is an external drift situation.
What are the methods used to ensure that ‘internal drift’ does not occur, namely that the technology in the satellites doesn’t change characteristics through time??
We saw of course that satellite ‘drift’ occurred in the arctic ice measurement situation in the past 18 months. Presumably, there is some form of internal warning system to address that??

John Hooper
February 21, 2010 3:34 am

Brent Hargreaves (02:10:17) :
John Hooper (13:53:49) 20 Feb : “So the world is warming just like everyone said?
Guess ClimateGate was a red herring after all.”
No, that doesn’t follow. Dr. Spencer’s work here is validating records over a tiny 24-year period. This Great Debate concerns timescales in millennia, not decades, and whether the recent rise is unprecedented (as the Hockey Stick would have it) or merely the latest in a long series of ups and downs.

Nice attempt to move the goal posts, but if you care to peruse this site you’ll find plenty of cynical, uninformed comments casting aspersions on the recent record. My bet is you won’t see any forthcoming apologies.

carrot eater
February 21, 2010 4:12 am

Brent Hargreaves (02:10:17) :
That may be your opinion, but I think you’ll find a great number of people around here who doubt whether any warming has taken place over that time period. They might say it’s all or mostly a result of dodgy homogenisation, or somesuch.
Bart (02:09:03) :
Just to be clear, are you suggesting this method for satellite data, or these surface data? Anyway, I agree with Nick’s reservations.

Gareth
February 21, 2010 4:16 am

Over the last 150 years the technology and coverage of surface temperature recording has come on in leaps and bounds, culminating in that giant leap into space.
To what extent is it possible to factor out these changes? How do we know that the current temperature record and the lumpy increases that appear in the ‘global’ series aren’t simply us getting more accurate and more complete coverage, in particular bringing more of the Southern Hemisphere into the records?

A C Osborn
February 21, 2010 4:23 am

I agree completely with PJB (02:17:58) : .
Every single station dataset that has been tested against the raw data has shown major problems with the official record for that dataset.
Even whole datasets, when tested against the raw data, have shown manipulation, especially in the GISS data.
Many posters here, from all over the world, have shown their own collections of raw, unadulterated data. It would be much better to collect them all, but especially the rural sites, and plot them to see a more truthful picture than that provided by “Quality Controlled” datasets.

Smokey
February 21, 2010 5:06 am

carrot eater (04:12:06),
Is this a result of “dodgy homogenization, or somesuch”? click [these are blink gifs; they take a few seconds to load]. Same question for this GISS/USHCN series collated by Mike McMillan: click
Or are these upward “adjustments” always the result of legitimate changes to the raw temperature record – changes that somehow always produce a steeper rise than the raw data themselves show?

Smokey
February 21, 2010 5:27 am

John Hooper (03:34:37),
“So the world is warming just like everyone said?
Guess ClimateGate was a red herring after all.”
That is not the question, and skeptics know it. But since you are not an AGW skeptic, I will explain it for you:
The planet has been following a gradual warming trend since the LIA, and from the last great Ice Age before that. Multi-decadal cycles of warming and cooling ride on top of that natural long term warming trend.
The key point is that the long-term warming of the planet began well before the Industrial Revolution. Therefore, the planet’s gradual warming has been entirely natural. Despite a large increase in atmospheric CO2, the current warming is in line with that pre-existing trend. Therefore, according to Occam’s Razor, CO2 is an extraneous entity and should be eliminated from the likely causes: never increase, beyond what is necessary, the number of entities required to explain anything.
In fact, postal rate increases have a stronger correlation to temperature than CO2 does: click
And your conflating Climategate with entirely natural global warming is the real red herring argument.

DirkH
February 21, 2010 5:47 am

“PJB (02:17:58) :
[…]
Really, poking around in the datasets that the alarmists have put out onto the web is the easiest sort of check to do, and it is almost guaranteed to back up their claims.”
Not really, if you use a constant set of stations.
Here’s a guy who did a very simple analysis of raw data who comes to the conclusion that there is no discernible trend:
http://crapstats.wordpress.com/2010/01/21/global-warming-%e2%80%93-who-knows-we-all-care/

Dennis Wingo
February 21, 2010 7:23 am

Dr. Spencer
Since there is going to be a lot more money in the NASA budget for Earth Sciences there should be a way of improving your dataset in the future in the following manner.
A new land based temperature sensor, based upon the latest technology in temperature sensing (usually accurate to less than .001 degree), could be built. These sensors would be solar/battery powered in order to have them independent of any power grid. Additionally, they would have a satellite transponder that would use a satellite system like OrbComm, Iridium, or Globalstar (all low earth orbit constellations) that could provide real time temperature data on pretty much a global basis. This data could be integrated into your satellite dataset in real time via ground processing of the two datasets.
This would provide a high-quality data product that is independent of the UHI effect, and it could be sited anywhere, including on long-lived floating buoys in the Pacific and the Southern Ocean as well as on remote land masses.
We do a lot of remote sensing work via satellite on other planets, why not here?

Roy W. Spencer
February 21, 2010 7:31 am

Interesting comments all.
Steve Goddard: the January 1998 warm spike in the satellite data compared to surface data is because peak El Nino warmth is in the eastern tropical and subtropical Pacific, which is poorly sampled by thermometers. The same is true during most if not all El Nino years.
Clearly, I need to do the ISH comparison to just those grids that Jones also has data for, which will be a much more meaningful and informative comparison.

carrot eater
February 21, 2010 7:43 am

Smokey (05:06:09) :
Smokey (05:27:44) :
So first Hargreaves is unsurprised that Spencer’s totally unhomogenised data set, based on only continuous stations, matches CRU so well because we know the Earth is warming and this isn’t in doubt; it’s just natural.
Then it’s implied ( Smokey (05:06:09), DirkH (05:47:11) : )that recent warming is only observed because of homogenisation, station drop-offs, and whatever else.
Then we’re back to warming being evident and undoubted; it’s just natural. [Smokey (05:27:44) ]
Can you please clarify?
It’s going to take a month and an extra hard drive to download this whole data set.

Alexej Buergin
February 21, 2010 7:57 am

” pft (22:42:32) :
Wish I lived at 14,000 ft where all the warmth is (what is it, about -25 deg F there)”
The ICAO standard atmosphere has 5.5°F at 15’000 ft or -17.5°C at 5000 m.
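For readers who want to check figures like these, a quick sketch of the ICAO standard-atmosphere lapse rate (6.5 K per km below 11 km, from 15 °C at sea level) reproduces the numbers quoted above:

```python
# A quick check of the ICAO standard-atmosphere figures quoted above:
# below 11 km, ISA temperature falls linearly at 6.5 K per km from 15 °C at sea level.
def isa_temp_c(altitude_m: float) -> float:
    """ISA temperature in degrees C for the troposphere (0-11 km)."""
    return 15.0 - 6.5e-3 * altitude_m

for alt_m, label in [(5000.0, "5000 m"),
                     (15000 * 0.3048, "15,000 ft"),
                     (14000 * 0.3048, "14,000 ft")]:
    t_c = isa_temp_c(alt_m)
    print(f"{label}: {t_c:.1f} C = {t_c * 9 / 5 + 32:.1f} F")
```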

Jordan
February 21, 2010 9:56 am

Dr Spencer
In the hope that you will return to this thread and read this, I would suggest that the most important thing to do, in pursuit of the trend in these global aggregates, is to assess the dynamics of the global temperature field. That would let us determine what is required to meet the requirements of the Sampling Theorem.
This gets into the realms of discrete signal processing, and is not a question of statistics or statistical “error”. It is quite easy to show that the statistics of discrete data cannot be relied upon to measure the statistics of the underlying analogue signal if the requirements of the Sampling Theorem have not been met (i.e. if the discrete data are aliased due to inadequate sampling).
I’m not aware of any assessment which addresses this question with respect to the global temperature field. That’s not to say it doesn’t exist – but I would have thought that such an important step in the analysis would have been widely referred to in the literature.
As discussed in an earlier thread, Jones tried to use correlation to justify certain steps in his analysis of the 150-year temperature trend. As statistical aggregates are not valid if discrete data suffer from aliasing, his tests could not demonstrate that the data are free from the effects of aliasing. And he made no reference to the mathematical literature to support his method.
To carry out an identification of the spatial and temporal dynamics of the global temperature field is no small task. And one of the things to be established at the outset would be “which temperature field” (e.g. surface, top of troposphere, etc).
Such a study would add to the sum total of knowledge, and is a logical step.
Until this is done, these aggregate global temperature series should be viewed with the greatest of caution. Interpretation of their trends could be meaningless.
Here are two videos I posted in that earlier thread. The first shows how even a very simple signal can be completely misrepresented if the discrete data are suffering from aliasing. The second shows how it can be completely meaningless to read observations if we are unaware that our data are aliased.
Regards

http://www.youtube.com/watch?v=LVwmtwZLG88
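For anyone who prefers code to video, here is a minimal sketch of the aliasing effect Jordan describes: sampling a 1 Hz sine below the Nyquist rate produces samples that are indistinguishable from a much slower oscillation, so any statistic computed from them describes the alias, not the signal. The frequencies are illustrative only.

```python
# A minimal sketch (an illustration, not part of any temperature analysis) of
# aliasing: a 1 Hz sine sampled at 0.9 Hz (below the 2 Hz Nyquist rate) produces
# samples that lie exactly on a 0.1 Hz oscillation.
import numpy as np

f_signal = 1.0                        # Hz: the "true" analogue signal
f_sample = 0.9                        # Hz: sampling below the Nyquist rate

t_samples = np.arange(0.0, 10.0, 1.0 / f_sample)
sampled = np.sin(2 * np.pi * f_signal * t_samples)

# The alias frequency is |f_signal - f_sample| = 0.1 Hz; the sampled values are
# indistinguishable from that much slower signal.
alias = np.sin(2 * np.pi * (f_signal - f_sample) * t_samples)
print("max |sampled - alias| =", np.max(np.abs(sampled - alias)))
```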

Oldjim
February 21, 2010 12:19 pm

The graph of GHCN Active Temperature Stations looks a bit dubious.
Looking at the source http://savecapitalism.wordpress.com/2009/12/09/american-thinker-shows-the-code-meddling/ and then comparing it with http://savecapitalism.wordpress.com/2009/12/13/on-request-ghcn-measurements-per-country/ there appears to be a chunk missing.
In this thread http://wattsupwiththat.com/2010/02/12/noaa-drops-another-13-of-stations-from-ghcn-database/ there are 1113 stations

Cold Lynx
February 21, 2010 12:44 pm

I think it would be very interesting if Dr Spencer could run that analysis using all available data back to 1901 as well. Since the recent years seem to be in correlation, a check of the correlation over the entire available data period may show something else – or not.
My bet? A large divergence with Jones data before 1960.

Bart
February 21, 2010 12:55 pm

Nick Stokes (02:51:09) :
“It’s not clear to me how you achieve that by fitting low order harmonics. The LS fit would still be over-influenced by regions of high station density.”
Not if the density function has some degree of smoothness. An analogy would be where you have a complex waveform limited to a specific interval and wrapping around at the endpoints. Something like this (hope the spacing works out when I post it)
Y
|*
| * *
| * *
| * *
——————–X
| o o
| o o
| *
Your measurement samples are the “*” points. The function continues as the “o” points but you haven’t measured them – you mostly only have data points in the positive “Y” direction, i.e., in the Northern Hemisphere. You can try filling in the “o” points via ad hoc linear interpolation, but you will get the wrong answer for the bias level. However, information on the low-order orthogonal basis functions in an expansion is contained in your measured data – you just have to do a proper interpolation based on a proper set of orthogonal basis functions. If you do an appropriately normed fit to the coefficients of the lower-order basis functions and integrate that, you will markedly reduce the error. I would start with an L2 norm since it is easiest, and see how much the result is changed, then perhaps pursue an L1-normed result to get the best possible performance (see the exception to the Nyquist–Shannon sampling theorem discussed below).
This sort of thing is well established in numerical theory. If you want to integrate a function, the worst way would be to pick points, arbitrarily interpolate across the blank areas, and rectangularly integrate what you get. The best way is to strategically choose your abscissae, interpolate with an orthogonal basis, and integrate the resulting functional approximation. See, for example, the chapter on Integration of Functions, section on Gaussian Quadratures and Orthogonal Polynomials here.
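A minimal sketch of the numerical point being cited here, using a simple smooth function rather than any temperature field: for the same number of sample points, Gauss–Legendre quadrature (strategically chosen abscissae with weights derived from orthogonal polynomials) integrates far more accurately than blunt rectangular integration.

```python
# Illustration of the Gaussian-quadrature point above; not anyone's averaging code.
import math
import numpy as np

def f(x):
    return np.exp(-x ** 2)                      # a smooth "field" on [-1, 1]

exact = math.sqrt(math.pi) * math.erf(1.0)      # exact integral over [-1, 1]
n = 8

# Blunt rectangular (midpoint) integration with n equally spaced points
edges = np.linspace(-1.0, 1.0, n + 1)
mids = 0.5 * (edges[:-1] + edges[1:])
rect = np.sum(f(mids)) * (2.0 / n)

# Gauss-Legendre quadrature with the same number of points: abscissae and weights
# come from the roots of the orthogonal Legendre polynomials
nodes, weights = np.polynomial.legendre.leggauss(n)
gauss = float(np.sum(weights * f(nodes)))

print(f"midpoint-rule error:  {abs(rect - exact):.2e}")
print(f"Gauss-Legendre error: {abs(gauss - exact):.2e}")
```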
carrot eater (04:12:06) :
Bart (02:09:03) :
“Just to be clear, are you suggesting this method for satellite data, or these surface data?”
You could do it with both, but the greatest advantage would be for the sparsely sampled surface data. There likely would be little advantage at all with finely sampled satellite data.
“Anyway, I agree with Nick’s reservations.”
There can be pathological cases, but as I say, this kind of operation is well established in numerical theory, and 99 times out of 100 will give significantly enhanced results.
Jordan (09:56:20) :
“It is quite easy to show how the statistics of discrete data cannot be relied upon to measure the statistics of the underlying analogue signal if the requirements of the Sampling Theorem have not been met (i.e. if the discrete data is aliased due to inadequate sampling).”
Keep in mind, the Nyquist–Shannon sampling theorem is a sufficient condition, but not a necessary one, for reconstructing a signal from sampled data. There’s a pretty good write-up at Wikipedia.

The sampling theorem provides a sufficient condition, but not a necessary one, for perfect reconstruction. The field of compressed sensing provides a stricter sampling condition when the underlying signal is known to be sparse. Compressed sensing specifically yields a sub-Nyquist sampling criterion.

Anyway, the data are what they are. We go to war with the army we have, using every available resource to our advantage.

Bart
February 21, 2010 12:55 pm

“hope the spacing works out when I post it”
It didn’t. Hopefully, you get the gist.

February 21, 2010 2:03 pm

I’m surprised no one asked the following question:
If the monthly temperature anomalies are, on average, 36% larger than Jones got…why isn’t the warming trend 36% greater, too? Maybe the agreement isn’t as close as it seems at first.

Well, to my uncalibrated eyeballs it appears that your anomalies are not only about 1.36× larger than CRUTem3NH’s on the high side but about 1.36× larger on the low side as well, so there is minimal net effect.

Jordan
February 21, 2010 3:11 pm

Bart (12:55:06) :
“Keep in mind, the Nyquist-Shannon sampling theory is sufficient, but not necessary to reconstruct a signal from sampled data.”
I don’t think that alters the point Bart – we need knowledge of the temperature field in order to know how to sample it. Until we have surveyed and understood what it is we are seeking to faithfully reconstruct using discrete data samples, we have insufficient knowledge about how to sample it.
The point of an initial survey is to gain enough information to determine what sampling strategy is both sufficient and necessary.
Until then, who can say whether any particular trend over a defined space or time period is – or is not – distorted by aliasing.

Steve Koch
February 21, 2010 3:15 pm

An average 36% difference in the anomalies is pretty large; could this be a calibration problem?
I’ve found that plotting the difference between the two traces can be very informative and might point the way to the next step.
Anyway, great first step.
Should the thermal mass of a volume measurement be taken into account? For example, presumably a humid space contains more heat energy than an arid volume of the same temperature.
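To illustrate that point, here is a minimal sketch using the common moist-enthalpy approximation h ≈ cp·T + Lv·q; the specific-humidity values and constants below are illustrative assumptions, not measurements. Two parcels at the same temperature can carry very different amounts of heat energy.

```python
# A minimal sketch of the point above: two air parcels at the same temperature but
# different humidity carry different amounts of heat energy. Uses the common
# approximation for moist enthalpy, h ~ cp*T + Lv*q; the specific-humidity values
# are illustrative assumptions, not measurements.
CP = 1005.0      # J kg^-1 K^-1, specific heat of dry air at constant pressure
LV = 2.5e6       # J kg^-1, latent heat of vaporisation

def moist_enthalpy(temp_c: float, specific_humidity: float) -> float:
    """Approximate enthalpy of moist air, J per kg, relative to dry air at 0 C."""
    return CP * temp_c + LV * specific_humidity

arid = moist_enthalpy(30.0, 0.005)    # 30 C, 5 g/kg water vapour
humid = moist_enthalpy(30.0, 0.020)   # 30 C, 20 g/kg water vapour
print(f"arid parcel:  {arid / 1000:.1f} kJ/kg")
print(f"humid parcel: {humid / 1000:.1f} kJ/kg")
```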

keith in hastings UK
February 21, 2010 3:28 pm

re Robert (12:32:21) & later about measuring radiation in/out, can we ignore the biosphere?
I know growth (absorbs energy) is followed by decay (releases energy) but think how much was trapped to make all the coal, oil, & gas we have been using.
Obviously, better to have the measures of radiation than not, but careful interpretation needed…

NickB.
February 21, 2010 4:11 pm

Steve Koch,
I have been wondering the same thing. Isn’t temperature, by itself, just an arbitrary and incomplete way to look at it?
Someone posted on a thread here that the GCMs assumed constant humidity levels, when humidity had actually dropped somewhat (1%, I think, is what they said).
I always thought you’d need to analyze both temperature and humidity to really understand the atmospheric heat content… but then again, what do I know.
Cheers!

Steve Koch
February 21, 2010 5:28 pm

NickB:
You have to wonder about the validity of assuming a constant humidity.
I looked up “thermal mass” and found this interesting article:
http://wattsupwiththat.com/2009/05/06/the-global-warming-hypothesis-and-ocean-heat/
Have not finished it yet but it is saying, so far as I have read, that energy is more important than temperature and that the ocean heat content (OHC) is orders of magnitude more important than atmospheric heat content as a measure of global heat content.

JP
February 21, 2010 5:43 pm

I guess that using infrared measurements, as Miskolczi did, would be out of the question? I mean, if there were a spot with direct visibility to the surface, and that spot were located in a rural area according to map data, could it serve as a reference point for calibration?
How many of these automated weather stations, providing real-time or hourly measurements, are located in places where real-time information is needed, such as at airports for aviation weather?
In my country the goal is to increase the use of aviation weather stations (at the airports) to replace the older ground stations and automate the data-gathering process.

Allen63
February 21, 2010 6:11 pm

I presume there is no need to “manipulate” the temperature data when “calibrating” the satellites.
Rather, merely do a point by point (station by station) matchup of satellite “signal” (from the restricted area of each temperature station) to measured temperature — day by day — or hour by hour if the data allows. Or, is the satellite coverage “pixel size” too coarse to allow a per station comparison?
Hence, I presume you will not be using the average temperatures (as shown in this post) for calibration. Rather, you will compare the “calibrated”-satellite plot to the thermometer plot shown above.
Not really expecting an answer. Just questions I’d have about the final calibrated satellite product.
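A minimal sketch of the kind of point-by-point matchup Allen63 describes: pair each station reading with the satellite brightness temperature for the pixel containing it, then fit a single fixed calibration over the whole matchup period. The arrays and the simple linear form below are illustrative assumptions, not Dr. Spencer’s actual retrieval.

```python
# Illustrative matchup calibration, not the actual AMSU land-surface retrieval.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder matchup pairs: satellite window-channel brightness temperatures (K)
# and the corresponding station 2 m temperatures (K) for the same pixel and hour.
tb_satellite = rng.uniform(250.0, 310.0, 1000)
t_station = 0.95 * tb_satellite + 14.0 + rng.normal(0.0, 1.5, 1000)

# One-time linear calibration fitted over the whole matchup period (no
# time-dependent adjustments, so later station drift cannot feed back into it).
slope, intercept = np.polyfit(tb_satellite, t_station, 1)
print(f"T_surface ~ {slope:.3f} * Tb + {intercept:.2f} K")

# Applying the fixed calibration to a new satellite observation:
print(f"Tb = 285 K -> estimated surface temperature {slope * 285 + intercept:.1f} K")
```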

NickB.
February 21, 2010 7:40 pm

Steve,
Thanks for the link – great (complex) reading! As much time as everyone spends on the surface temps, the divergence between projected and observed heat content really does raise some significant questions

Pooh
February 22, 2010 9:07 am

Re: Allen63 (Feb 21 18:11),
Allen63 – Here is your answer about calibration for UAH AMSU:
Spencer, Roy W. “How the UAH Global Temperatures Are Produced.” Global Warming (drroyspencer.com), January 6, 2010.
http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/
“Microwave temperature sounders like AMSU measure the very low levels of thermal microwave radiation emitted by molecular oxygen in the 50 to 60 GHz oxygen absorption complex. This is somewhat analogous to infrared temperature sounders (for instance, the Atmospheric InfraRed Sounder, AIRS, also on Aqua) which measure thermal emission by carbon dioxide in the atmosphere.”
“Once every Earth scan, the radiometer antenna looks at a “warm calibration target” inside the instrument whose temperature is continuously monitored with several platinum resistance thermometers (PRTs). ”
“A second calibration point is needed, at the cold end of the temperature scale. For that, the radiometer antenna is pointed at the cosmic background, which is assumed to radiate at 2.7 Kelvin degrees.”
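The quoted passage describes a two-point calibration: the warm target (monitored by the PRTs) and cold space at 2.7 K give two known temperatures, and scene counts are converted to brightness temperature by interpolating between them. A minimal sketch with made-up count values follows; real AMSU processing also applies a nonlinearity correction not shown here.

```python
# A minimal sketch of the two-point calibration described above; count values and
# the warm-target temperature are made-up placeholders, not AMSU telemetry.
T_COLD = 2.7            # K, cosmic microwave background

def calibrate(counts_scene: float, counts_cold: float, counts_warm: float,
              t_warm: float) -> float:
    """Linear two-point calibration: radiometer counts -> brightness temperature (K)."""
    gain = (t_warm - T_COLD) / (counts_warm - counts_cold)
    return T_COLD + gain * (counts_scene - counts_cold)

# Example: warm load at 290.15 K (from the PRTs), placeholder count values
print(calibrate(counts_scene=18000.0, counts_cold=3000.0,
                counts_warm=21000.0, t_warm=290.15))
```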

Cold Lynx
February 22, 2010 9:50 am

An if the CO2 levels are changing?
http://www.agu.org/pubs/crossref/1976/GL003i002p00077.shtml
“The results also show that at mid to high northern (winter) latitudes the O2 concentration is about a factor of two higher than at low southern (summer) latitudes, thus revealing an apparent reversal of the winter to summer increase in O2 deduced from optical and incoherent scatter measurements during higher levels of solar activity.”
Hmmm

Cold Lynx
February 22, 2010 10:19 am

Should be:
“And if the O2 levels are changing?”

Robert
February 22, 2010 11:46 am

“re Robert (12:32:21) & later about measuring radiation in/out, can we ignore the biosphere?
I know growth (absorbs energy) is followed by decay (releases energy) but think how much was trapped to make all the coal, oil, & gas we have been using.
Obviously, better to have the measures of radiation than not, but careful interpretation needed…”
You can turn energy from the biosphere into heat by burning stuff, but the amount of heating is extremely small relative to the sun. This is also true of geothermal activity. So if you measure the energy entering and leaving the atmosphere, you have an account of 99.9% of any energy imbalance (http://en.wikipedia.org/wiki/Radiation_budget).
We would still care about where the heat was in the biosphere: how much in the oceans, in the air, expended in melting ice, etc. But if we could track the radiation budget accurately through direct measurements, it’d be a great advance in our knowledge.

Phil.
February 23, 2010 5:51 am

Cold Lynx (09:50:09) :
And if the O2 levels are changing?

Since the measurements aren’t being made in the thermosphere that isn’t a problem.

Cold Lynx
February 23, 2010 9:54 am

If the optical depth of O2 is changed, will that have an impact on Beer–Lambert law calculations, especially in the troposphere?
So please tell me where I can find how the annual (or decadal?) change in O2 levels is used in the calculations. I can’t find it.
Because if the global O2 level is decreasing, that will lower the effective optical height and show up as a heating of the atmosphere.
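For reference, the Beer–Lambert relation the question turns on is just an exponential in optical depth, so a change in the absorber amount shifts how deep into the atmosphere the instrument effectively sees. A minimal sketch with illustrative numbers (a hypothetical 1% change in the O2 column):

```python
# Beer-Lambert illustration only; the optical-depth values are not AMSU numbers.
import math

def transmittance(optical_depth: float) -> float:
    """Beer-Lambert law: fraction of radiation transmitted through the path."""
    return math.exp(-optical_depth)

tau_reference = 1.00
for scale in (0.99, 1.00, 1.01):          # e.g. a hypothetical +/-1% change in O2
    tau = tau_reference * scale
    print(f"tau = {tau:.2f} -> transmittance {transmittance(tau):.4f}")
```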

February 25, 2010 6:07 pm

This looks really exciting. FORTRAN was my first language and I have always respected its ability to be clear and precise. Having said that, I wouldn’t go back to it! I look forward to the new dataset. I also think the active GHCN stations graphic may be inaccurate for recent years; however, it probably accurately reflects the dataset on which it is based.