How the UAH Global Temperatures Are Produced

by Roy Spencer, PhD.

I am still receiving questions about the method by which the satellite microwave measurements are calibrated to get atmospheric temperatures. The confusion seems to have arisen because Christopher Monckton has claimed that our satellite data must be tied to the surface thermometer data, and after Climategate (as we all know) those traditional measurements have become suspect. So, time for a little tutorial.

NASA’S AQUA SATELLITE

The UAH global temperatures currently being produced come from the Advanced Microwave Sounding Unit (AMSU) flying on NASA’s Aqua satellite. AMSU is located on the bottom of the spacecraft (seen below); the AMSR-E instrument that I serve as the U.S. Science Team Leader for is the one on top of the satellite with the big dish.

[Image: the Aqua satellite at night over the Pacific]

Aqua has been operational since mid-2002, and is in a sun-synchronous orbit that crosses the equator at about 1:30 am and pm local solar time. The following image illustrates how AMSU, a cross-track scanner, continuously paints out an image below the spacecraft (actually, this image comes from the MODIS visible and infrared imager on Aqua, but the scanning geometry is basically the same):

[Image: Aqua MODIS scan swaths]

HOW MICROWAVE RADIOMETERS WORK

Microwave temperature sounders like AMSU measure the very low levels of thermal microwave radiation emitted by molecular oxygen in the 50 to 60 GHz oxygen absorption complex. This is somewhat analogous to infrared temperature sounders (for instance, the Atmospheric InfraRed Sounder, AIRS, also on Aqua) which measure thermal emission by carbon dioxide in the atmosphere.

As the instrument scans across the subtrack of the satellite, the radiometer’s antenna views thirty separate ‘footprints’, nominally 50 km in diameter, each over a 50-millisecond ‘integration time’. At these microwave frequencies, the intensity of thermally-emitted radiation measured by the instrument is directly proportional to the temperature of the oxygen molecules. The instrument actually measures a voltage, which is digitized by the radiometer and recorded as a certain number of digital counts. It is those digital counts which are recorded on board the spacecraft and then downlinked to satellite tracking stations in the Arctic.

HOW THE DATA ARE CALIBRATED TO TEMPERATURES

Now for the important part: how are these digitized voltages calibrated in terms of temperature?

Once every Earth scan, the radiometer antenna looks at a “warm calibration target” inside the instrument whose temperature is continuously monitored with several platinum resistance thermometers (PRTs). PRTs work somewhat like a thermistor, but are more accurate and more stable. Each PRT has its own calibration curve based upon laboratory tests.
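For concreteness, a PRT calibration curve is typically a low-order polynomial relating measured resistance to temperature. The sketch below uses the standard Callendar-Van Dusen form with generic industry Pt100 coefficients; these stand in for the laboratory-fitted curves of the actual flight PRTs, which are not given here.

```python
import numpy as np

# Generic IEC 60751 Callendar-Van Dusen coefficients for a Pt100
# sensor (R0 is the resistance at 0 deg C).  A flight PRT would use
# its own laboratory-fitted values.
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def prt_temperature_c(resistance_ohm):
    """Invert R(T) = R0*(1 + A*T + B*T**2), valid for T >= 0 deg C."""
    c = 1.0 - resistance_ohm / R0
    # Quadratic formula; the '+' root is the physical one here.
    return (-A + np.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

print(prt_temperature_c(103.9))  # about 10 deg C
```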

The temperature of the warm calibration target is allowed to float with the rest of the instrument, and it typically changes by several degrees during a single orbit, as the satellite travels in and out of sunlight. While this warm calibration point provides a radiometer digitized voltage measurement and the temperature that goes along with it, how do we use that information to determine what temperature corresponds to the radiometer measurements when looking at the Earth?

A second calibration point is needed, at the cold end of the temperature scale. For that, the radiometer antenna is pointed at the cosmic background, which is assumed to radiate at 2.7 K. These two calibration points are then used to interpolate to the Earth-viewing measurements, which provides the calibrated “brightness temperatures”. This is illustrated in the following graph:

[Image: radiometer calibration graph]

The response of the AMSU is slightly non-linear, so the calibration curve in the above graph actually has slight curvature to it. Back when all we had were Microwave Sounding Units (MSU), we had to assume the instruments were linear due to a lack of sufficient pre-launch test data to determine their nonlinearity. Because of various radiometer-related and antenna-related factors, the absolute accuracy of the calibrated Earth-viewing temperatures is probably not much better than 1 deg. C. While this sounds like it would be unusable for climate monitoring, the important thing is that the instruments be very stable over time; an absolute accuracy error of this size is irrelevant for climate monitoring, as long as sufficient data are available from successive satellites so that the newer satellites can be calibrated to the older satellites’ measurements.
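In code form, the two-point calibration amounts to a linear interpolation in count space plus a small nonlinearity term. The sketch below is illustrative only; the counts, the warm-target temperature, and the size of the quadratic term are invented, and the operational processing is done in FORTRAN with instrument-specific coefficients.

```python
def brightness_temperature(counts_earth, counts_cold, counts_warm,
                           t_warm, t_cold=2.7, nonlin=0.0):
    """Two-point radiometer calibration of raw counts to temperature."""
    # Fractional position of the Earth view between the two
    # calibration points, in count space.
    x = (counts_earth - counts_cold) / (counts_warm - counts_cold)
    # Linear interpolation between cold space and the warm target.
    t_linear = t_cold + x * (t_warm - t_cold)
    # Quadratic correction for instrument nonlinearity; it vanishes at
    # both calibration points, and nonlin=0.0 reproduces the purely
    # linear assumption that had to be made in the MSU era.
    return t_linear + nonlin * x * (1.0 - x) * (t_warm - t_cold)

# Example with invented counts and a warm target at 285 K:
print(brightness_temperature(23000, 4000, 26000, t_warm=285.0))  # ~246.5 K
```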

WHAT LAYERS OF THE ATMOSPHERE ARE MEASURED?

For AMSU channel 5, which we use for tropospheric temperature monitoring, that brightness temperature is very close to the vertically-averaged temperature through a fairly deep layer of the atmosphere. The vertical profiles of each channel’s relative sensitivity to temperature (‘weighting functions’) are shown in the following plot:

[Image: AMSU weighting functions]

These weighting functions are for the nadir (straight-down) views of the instrument; the layers sensed all shift higher in altitude as the instrument scans farther away from nadir. AMSU channel 5 is used for our middle-tropospheric (MT) temperature estimate; we use a weighted difference between the various view angles of channel 5 to probe lower in the atmosphere, which yields a fairly sharp weighting function for our lower-tropospheric (LT) temperature estimate. We use AMSU channel 9 for monitoring of lower-stratospheric (LS) temperatures.
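The view-angle differencing idea can be illustrated with a toy scan line. The weights below are invented for illustration and are not the operational UAH coefficients; the point is only that a positive-weight/negative-weight combination across view angles, summing to one, pushes the effective weighting function lower in the atmosphere.

```python
import numpy as np

# Toy channel-5 brightness temperatures (K) for the 30 footprints of
# one scan line.  Off-nadir views sense higher, colder layers, so the
# scan line shows a simple limb-darkening shape.
view = np.arange(30)
tb_ch5 = 250.0 - 0.02 * (view - 14.5) ** 2

# Invented weights: positive on near-nadir footprints, negative on
# off-nadir ones, summing to 1 so the mean level is preserved.
w = np.zeros(30)
w[11:19] = 0.35                    # 8 near-nadir views
w[[5, 6, 7, 22, 23, 24]] = -0.30   # 6 off-nadir views
assert abs(w.sum() - 1.0) < 1e-9

t_lt = np.dot(w, tb_ch5)
print(f"toy LT estimate: {t_lt:.2f} K")  # warmer than the nadir view, as expected
```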

For those channels whose weighting functions intersect the surface, a portion of the total measured microwave thermal emission signal comes from the surface. AMSU channels 1, 2, and 15 are considered “window” channels because the atmosphere is essentially clear, so virtually all of the measured microwave radiation comes from the surface. While this sounds like a good way to measure surface temperature, it turns out that the microwave ‘emissivity’ of the surface (its ability to emit microwave energy) is so variable that it is difficult to accurately measure surface temperatures using such measurements. The variable-emissivity problem is smallest for well-vegetated surfaces, and largest for snow-covered surfaces. While the microwave emissivity of the ocean surface around 50 GHz is more stable, it just happens to have a temperature dependence which almost exactly cancels out any sensitivity to surface temperature.

POST-PROCESSING OF DATA AT UAH

The millions of calibrated brightness temperature measurements are averaged in space and time, for instance into monthly averages in 2.5-degree latitude bands; I do this with FORTRAN programs I have written. I then pass the averages to John Christy, who inter-calibrates the different satellites’ AMSUs during periods when two or more satellites are operating (which is always the case).
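A minimal sketch of that space-time averaging step, assuming a flat list of calibrated brightness temperatures tagged with latitude and month (the real FORTRAN processing also handles longitude, view angle, and quality screening):

```python
import numpy as np

def monthly_zonal_means(lat, tb, month, band_width=2.5):
    """Average brightness temperatures into latitude bands by month.

    lat:   footprint latitudes in degrees (-90..90)
    tb:    calibrated brightness temperatures (K)
    month: integer month index for each observation
    """
    bands = np.arange(-90.0, 90.0, band_width)   # 72 band edges
    out = {}
    for m in np.unique(month):
        sel = month == m
        idx = np.digitize(lat[sel], bands) - 1   # band index per footprint
        sums = np.bincount(idx, weights=tb[sel], minlength=len(bands))
        counts = np.bincount(idx, minlength=len(bands))
        with np.errstate(invalid="ignore"):
            out[m] = sums / counts               # NaN where a band is empty
    return bands, out

# Example with synthetic observations:
rng = np.random.default_rng(0)
lat = rng.uniform(-90, 90, 10000)
tb = 250 + 30 * np.cos(np.radians(lat)) + rng.normal(0, 1, lat.size)
month = rng.integers(1, 13, lat.size)
bands, means = monthly_zonal_means(lat, tb, month)
```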

The biggest problem we have had in creating a data record with long-term stability is orbit decay of the satellites carrying the MSU and AMSU instruments. Before the Aqua satellite was launched in 2002, all other satellites carrying MSUs or AMSUs had orbits which decayed over time. The decay results from the fact that there is a small amount of atmospheric drag on the satellites, so they very slowly fall in altitude over time. This leads to three problems for obtaining a stable long-term record of temperature.

(1) Orbit Altitude Effect on LT. The first is a spurious cooling signal in our lower-tropospheric (LT) temperature product, which depends upon differencing measurements at different view angles. As the satellite falls, the angle at which the instrument views the surface changes slightly. The correction for this is fairly straightforward, and is applied both to our dataset and to the similar datasets produced by Frank Wentz and Carl Mears at Remote Sensing Systems (RSS). This adjustment is not needed for the Aqua satellite, since it carries extra fuel which is used to maintain the orbit.
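The geometry behind this effect is simple to sketch. For a cross-track scanner on a spherical Earth, the law of sines gives sin(incidence) = (R + h)/R × sin(scan angle), so a lower orbit means a slightly smaller Earth-incidence angle at the same scan position. The altitudes below are round illustrative numbers, not the actual orbit history of any satellite:

```python
import numpy as np

R_EARTH = 6371.0  # km, mean Earth radius

def earth_incidence_angle(scan_angle_deg, altitude_km):
    """Earth-incidence angle of a cross-track view (spherical Earth)."""
    s = np.radians(scan_angle_deg)
    return np.degrees(np.arcsin((R_EARTH + altitude_km) / R_EARTH * np.sin(s)))

# The same outer scan position views the surface at a slightly
# different angle after a few kilometers of orbit decay:
print(earth_incidence_angle(48.0, 833.0))  # illustrative nominal altitude
print(earth_incidence_angle(48.0, 818.0))  # after ~15 km of decay
```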

(2) Diurnal Drift Effect. The second problem caused by orbit decay is that the nominal local observation time begins to drift. As a result, the measurements can increasingly be from a warmer or cooler time of day after a few years on-orbit. Luckily, this almost always happened when another satellite operating at the same time had a relatively stable observation time, allowing us to quantify the effect. Nevertheless, the correction isn’t perfect, and so leads to some uncertainty. [Instead of this empirical correction we make to the UAH products, RSS uses the day-night cycle of temperatures created by a climate model to do the adjustment for time-of-day.] This adjustment is not necessary for the Aqua AMSU.
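A toy demonstration of how a drifting observation time aliases the diurnal cycle into a spurious trend, and of the correction idea. Everything here is invented for illustration: the real UAH adjustment estimates the diurnal-cycle term empirically from a co-orbiting satellite with a stable crossing time, rather than from an assumed analytic cycle.

```python
import numpy as np

def diurnal(hour):
    """Toy diurnal cycle: +/- 1 K around the daily mean, afternoon peak."""
    return np.sin(2 * np.pi * (hour - 9.0) / 24.0)

years = np.arange(0, 10, 1 / 12)     # ten years of monthly samples
true_trend = 0.013                   # assumed real trend, K/yr

# Drifting satellite: local crossing time slips from 13:30 toward 16:00.
hour_drift = 13.5 + 0.25 * years
obs = true_trend * years + diurnal(hour_drift)
print(np.polyfit(years, obs, 1)[0])  # biased trend, != 0.013

# Correction: remove the diurnal-cycle difference between the drifted
# and the nominal observation time; the true trend is then recovered.
obs_corr = obs - (diurnal(hour_drift) - diurnal(13.5))
print(np.polyfit(years, obs_corr, 1)[0])  # ~0.013 K/yr
```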

(3) Instrument Body Temperature Effect. As the satellite orbit decays, the solar illumination of the spacecraft changes, which then can alter the physical temperature of the instrument itself. For some unknown reason, it turns out that most of the microwave radiometers’ calibrated Earth-viewing temperatures are slightly influenced by the temperature of the instrument itself…which should not be the case. One possibility is that the exact microwave frequency band which the instrument observes at changes slightly as the instrument warms or cools, which then leads to weighting functions that move up and down in the atmosphere with instrument temperature. Since tropospheric temperature falls off by about 7 deg. C for every 1 km in altitude, it is important for the ‘local oscillators’ governing the frequency band sensed to be very stable, so that the altitude of the layer sensed does not change over time. This effect is, once again, empirically removed based upon comparisons to another satellite whose instrument shows little or no instrument temperature effect. The biggest concern is long-term changes in instrument temperature, not the changes within an orbit. Since the Aqua satellite does not drift, the solar illumination does not change, and so there is no long-term change in the instrument’s temperature to correct for.
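That empirical removal can be sketched as a regression: difference the target satellite against a reference satellite showing little instrument-temperature effect, regress the difference on the target's instrument temperature, and subtract the fitted component. The contamination coefficient and all the data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(120)                          # ten years, monthly

# Brightness temperature actually seen by both satellites.
tb_true = 250 + 2 * np.sin(2 * np.pi * months / 12)

# Target instrument slowly warms as its orbit decays; its calibrated
# temperatures are contaminated in proportion (alpha is invented).
alpha = 0.04
t_instr = 285 + 3 * np.sin(2 * np.pi * months / 12) + 0.03 * months
tb_target = tb_true + alpha * (t_instr - t_instr.mean()) \
            + rng.normal(0, 0.02, 120)
tb_ref = tb_true + rng.normal(0, 0.02, 120)      # no body-temp effect

# Regress the inter-satellite difference on instrument temperature...
slope, intercept = np.polyfit(t_instr, tb_target - tb_ref, 1)
print(f"estimated contamination: {slope:.3f} K/K")  # ~0.04

# ...and remove the fitted component from the target record.
tb_corrected = tb_target - (slope * t_instr + intercept)
```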

One can imagine all kinds of lesser issues that might affect the long-term stability of the satellite record. For instance, since there have been ten successive satellites, most of which had to be calibrated to the one before it with some non-zero error, there is the possibility of a small ‘random walk’ component to the 30+ year data record. Fortunately, John Christy has spent a lot of time comparing our datasets to radiosonde (weather balloon) datasets, and finds very good long-term agreement.

January 12, 2010 9:07 pm

Thank you for this tutorial, Dr. Spencer.
If I may ask, are the raw satellite data and programs used to process it available to the public?

Vincent Guerrini
January 12, 2010 9:21 pm

Drs. Spencer and Christy: thank you for this lucid explanation. Can we assume that surface data could have a +1 or -1 C error (or is it +0.5 and -0.5 C), but that this does not really matter, since the trend and radiosonde data are very reliable? I.e., as long as the satellite does not fail in the next 50-100 years we would be able to discern a significant trend. So in essence the current 10 year graphs are quite accurate. The posting by Magicjava above does have a point. We cannot ask the team to provide if RSS, AMSU doesn’t either….

E.M.Smith
Editor
January 12, 2010 9:36 pm

If ocean water is problematic, and you are measuring O2, but with some ground reflections, how do you remove the effects of confounders such as a partial view of ground, ocean, vegetation, and even things like radar stations and other ground sources of microwaves?
Also, given that the atmosphere often has variable water in it, how do you prevent variations in ice, snow, rain, hail, aurorae, humidity, etc. from disturbing the O2 signature?
Finally, given the recent variation in atmosphere thickness with solar output changes, how does that change in density per unit of altitude impact your temperature measuring?
And a comment: Given your description it looks to me like you have a measure of predominantly tropospheric air temperatures, not surface temperatures, so to compare surface series to satellite series we need to know that relationship.
Thanks, nice article!

stumpy
January 12, 2010 9:44 pm

I assume there is some sensibility check carried out to ensure the result is in the right area or compares reasonably with surface measurements. I assume radiosonde data is used for this comparison. What I think people might be interested to know is:
1. What do you compare the data with to check results, i.e. a sensibility check against radiosonde data?
2. Are any further adjustments made due to any comparison with other data, i.e. to produce a better comparison?
I would assume no further calibration is carried out, but that further investigations to resolve discrepancies are carried out if any significant difference is observed. But this area might be worth clarifying, as it seems some people believe at this stage there is some adjustment to “match” the data to surface temperature records.

Chad
January 12, 2010 9:50 pm

AMSU channel 5 is used for our middle-tropospheric (MT) temperature estimate; we use a weighted difference between the various view angles of channel 5 to probe lower in the atmosphere, which yields a fairly sharp weighting function for our lower-tropospheric (LT) temperature estimate.

A weighting function is used? I thought that the UAH product used a full dynamic radiative transfer model to get the LT temperature.

Tenuc
January 12, 2010 9:52 pm

Thanks for an excellent overview of how the AMSU operates and some of the known issues.
Does variation to type, altitude and density of cloud have any effect on the measurements? I’d also be very interested to know the temporal resolution of the system and, in view of the assumptions and empirical corrections made, the accuracy of the published results?

Layne Blanchard
January 12, 2010 10:04 pm

Dr. Spencer,
Is the UAH record we often see stitched from one instrument/satellite to another in or around 1997-1999?

RACookPE1978
Editor
January 12, 2010 10:04 pm

But then, weren’t the final analyzed values (net microwave numbers), or what becomes the “temperature” itself, “calibrated” back against GISS “corrected” values of surface temperatures?
If so, then isn’t the satellite baseline point “corrected” (er, corrupted) as well by being set against those manipulated average earth temperatures?

Alan S. Blue
January 12, 2010 10:28 pm

Wouldn’t it still be rather useful to attempt a calibration with somewhere on the ground though? This would be an entirely different type of calibration, both accuracy and precision would suffer, naturally. You wouldn’t be measuring the ground temperature, but inferring it from nearby temperatures and established calibration coefficients.
Because it would be mighty handy to know the value of the surface stations. As nice as the satellites are for measuring current effects, the crucial part is deciphering the instrumental enigma that happens to reach 150 years back into the past.
The adjustments that have been applied are of the same order as the measured trends. With a minimal number of randomly distributed Stevenson Screens, some of the questions about how well any given gridcell is represented by the available instruments and the available adjustments could at least be attempted.

Mann O Mann
January 12, 2010 10:30 pm

With that explanation it is obvious that none of the data from these satellites can be trusted. After all, at no point are they calibrated to tree rings.
/sarcasm

Chad
January 12, 2010 10:55 pm

RACookPE1978,
John Christy has said that (here),

“No other data are used in the construction. That is why we can do comparison studies without any interdependence.”

Konrad
January 12, 2010 11:07 pm

Dr. Spencer,
Firstly, thank you for taking the time to give us this tutorial. I find it encouraging that radiosonde balloons are used for cross checking rather than surface stations. However there has been one question that I have always wondered about regarding remote temperature sensing. This relates to the thickness of the atmosphere. During the current prolonged solar minimum it has been widely reported that the atmosphere has contracted by around 100 km. Is it necessary to adjust data from the satellites regarding “altitude of layer sensed” to account for the recent cooling and contraction of the mesosphere and thermosphere?

Ray Boorman
January 12, 2010 11:10 pm

Thanks for this explanation Dr Spencer. A pity I didn’t know this when I read the myth propagated by Lord Monckton recently, as it would have stopped me repeating his mistake.

Rocket Man
January 12, 2010 11:12 pm

There are a lot of steps and a lot of equipment used to determine the temperatures measured, and I would be interested to know the overall error of the entire process, including the PRTs, the electronics used to measure the PRTs’ output, the variation in the cosmic background temps, the MSU itself, the removal of the portion of the signal that comes from the surface, etc., etc.
I would be surprised if this metric is not already known, as it is a standard practice in Aerospace to determine things like that, but I am surprised that the information was not included in the article.

January 12, 2010 11:19 pm

I have compared records of rural meteorological stations with UAH anomalies for a given 2.5×2.5° grid and found excellent agreement. I tried it for the Armagh Observatory and the Lomnicky Peak Observatory.

Indiana Bones
January 12, 2010 11:22 pm

I too am interested to know how you reject ambient radiation in the 50-60GHz range. The surface is increasingly radiative due to man-made devices directly at frequency or from harmonics thereto.

barry moore
January 12, 2010 11:32 pm

If only the IPCC were as honest and open. As an instrument and controls engineer of more years than I care to admit to, I am only too well aware that absolute accuracy is of minor importance compared to repeatability and the ability to track the changes which are then published as the anomalies. Therefore it is the trends of the anomalies which are significant, and to obsess about absolute accuracy is futile.
Truly an excellent explanation however I fear the honesty will be cherry picked and distorted by those who are so skilled in this art.

peat
January 12, 2010 11:35 pm

I am wondering how the satellite instrument channels are able to focus on different layers of the atmosphere. Why don’t the emission signatures from an entire column of atmosphere from the ground up enter the instrument and become mixed up?

Jordan
January 12, 2010 11:44 pm

Thanks.
It would be good to have an account of how the globe is sampled and how the data series/maps comply with the conditions of the sampling theorem. This is an unconditional requirement to allow a representative reconstruction of the measured system at any time scale.
Without this, there is no reason to believe that we have any more than a bunch of aliased nonsense. Sorry to be so sceptical – but this applies just as much to other series.

David Alan
January 13, 2010 12:17 am

All weather records, tied or broken, for all dates available (starting on 1/1/2009), for ALL States
(All Records): 113236
(H) High Temperature: 13678
(HM) Highest Minimum Temperature: 16098
(L) Low Temperature: 10883
(LM) Lowest Maximum Temperature: 20151
(R) Rain/Precipitation: 43066
(S) Snow: 9360
Warm (H + HM) Records: 26.3%
Cold (L + LM) Records: 27.4%
Precip. (R + S) Records: 46.3%
I pulled this from:
http://www.extremeweatherrecords.com/Records/default.aspx
Looking back over the years, I don’t think cold records have exceeded high records for some time. Does anyone have any data to determine the last time extreme cold weather records outpaced highs?

tallbloke
January 13, 2010 12:26 am

“Fortunately, John Christy has spent a lot of time comparing our datasets to radiosonde (weather balloon) datasets, and finds very good long-term agreement.”
Firstly, many thanks to Dr Roy for taking the time to write this clear and concise piece accessible to lay-people.
Secondly, please could he comment on whether John Christy’s comparison of the data to radiosonde balloons finding good long-term agreement goes some way to validating the work of Ferenc Miskolczi? One of the criticisms of Miskolczi’s findings was that it relies on analysis of radiosonde data which the R.C. Team claimed were unreliable.

Invariant
January 13, 2010 12:28 am

Thanks to Dr. Spencer and Dr. Christy – a rock solid piece of work!

pft
January 13, 2010 1:11 am

An absolute temperature accuracy of 1 deg C is not very good. I will accept that if the relative accuracy is good then this is useful data. However, when anomalies are calculated, is this based on anomalies from the satellite data, or is ground data used for years prior to the satellite data, so that the anomaly from the ground-based data must be calculated?
Also, given that 50% or more of the surface is covered by cloud, how is this corrected for, or is only data for surfaces not covered by cloud used?
The 2 data points for the calibration curve are not very good. For one, they measure the temperature of a completely different medium than that on earth. Besides, having a calibration range of 2.7 to 290 K when you are really measuring 220-320 K is not ideal. Of course, there may be no way around it. I would think measuring 2 points on the earth that have stable temperatures could be used, say above equatorial waters and the Antarctic ice surface, where you have ground-based temperatures explicitly for the use of satellite calibrations, and not controlled by the GISS or CRU crowd. This is not calibration in the true sense of the word, but neither method is perfect in this regard, and perhaps both need to be used.
That said, is it really true the satellite data is showing surface temperatures warmer than last year, and why are the temperatures shown negative? What are we calling near-surface temperatures anyway; for it to be negative, does this mean above the clouds?
How often are algorithms, if any, adjusted? We have seen how surface data has been corrupted, so my concern really is what measures are in place to ensure the same does not happen for satellite data.

KeithGuy
January 13, 2010 1:19 am

Thank you for that clear explanation Dr Spencer and Dr Christy.
Now would someone please explain in simple terms how GISS global temperature data is contrived (Whoops! I mean calculated)?

Rhys Jaggar
January 13, 2010 1:35 am

A question from a non-specialist reader:
Is the radiation profile of oxygen affected by any compositional changes in the atmosphere, be that soot, ozone etc etc?
Or did you choose this mode of measurement precisely because it WAS so unaffected by minor changes to the atmosphere??

Ryan Stephenson
January 13, 2010 2:31 am

“This is somewhat analogous to infrared temperature sounders (for instance, the Atmospheric InfraRed Sounder, AIRS, also on Aqua) which measure thermal emission by carbon dioxide in the atmosphere.”
Well those CO2 measurements sound very accurate as an indicator of climate change (not) given that we know there is more CO2 in the atmosphere!
Thus we can dismiss surface stations due to UHI, and we can dismiss most of the satellite data since it is measuring CO2 and the amount of CO2 is growing all the time. We have one set of satellite data from the last 10 years that “might” be reliable because it measures oxygen microwave radiation from 450 miles up with some vague idea of whether it might be looking at the ocean or looking at the top of Mount Everest!
Ever get the feeling that NASA and ESA are bending over backwards to justify their existence? Wouldn’t it have been a better idea to have spent that money on a few well-placed surface monitoring sites? After all, if the Antarctic ice melts it will be at ground level! If ever there was an example of how AGW theory is being used to justify outrageous research expenditure on the wrong things, this was it.

Ryan Stephenson
January 13, 2010 2:42 am

According to Wikipedia, the AMSU-A referred to by Dr Spencer has a maximum measurement sensitivity of 0.25 Celsius. Not great if you are trying to measure decadal climate trends of about 0.1 Celsius. And remember this is only the instrument sensitivity – not an indicator of the instrument accuracy.
Dr Spencer seems to have overlooked that fact. More propaganda dressed up as science.
Anyway, my son’s got a bit of a fever today. I’ll go and check how bad it is by using a microwave receiver 75km away.

guidoLaMoto
January 13, 2010 2:55 am

If observational error is +/- 1 deg, then calculated anomalies of < 1 deg are meaningless.

Richard Saumarez
January 13, 2010 3:12 am

Being a complete non-climatologist, could some explain to me several things?
Why does the mean temperature rise during NH summer? Is this related to a higher land mass in the NH and more sea in the SH? Being naive, I would expect the mean global temperature to remain constant.
Why does the upper atmosphere cool when the lower atmosphere warms?
I’m not being critical, I just want to know.

Daniel H
January 13, 2010 3:34 am

LOL@Mann O Mann’s comment
Thank you Dr. Spencer for the excellent writeup on NASA’s Aqua satellite. I’d be interested in learning more about AIRS and why it is so difficult to detect the CO2 signature from gas columns when clouds are involved. Also, why does the ESA’s Envisat seem to be limited to detecting CO2 over land while the Aqua AIRS instrument does not suffer from this limitation? Thanks again.

Jared
January 13, 2010 3:36 am

Just remembered a screen cap I made about 6 months ago.
I made a screen cap of NOAA’s July 2009 prediction for this winter and beyond.
http://i292.photobucket.com/albums/mm3/arketebel/NOAAPredictionJulyof2009.jpg
[sarcasm]
No doubt these guys know what it will be like in 2050 or 2100. Look how dead on they were 6 months into the future.
[/sarcasm]

John Simons
January 13, 2010 3:39 am

Speaking of satellite data
Wow! check out the present anomalies
http://discover.itsc.uah.edu/amsutemps/amsutemps.html
all the low altitude satellite temperature channels are going ballistic and now in record territory, looks like Jan is shaping up to be another seasonal temperature record at least.
Bob Tisdale’s Dec update shows that temperatures have moved into the same band of record temperatures not seen since the 1998 super El Niño… bets for a new all-time global SST record in Jan, anyone?

supercritical
January 13, 2010 4:07 am

Dr. Spencer, thanks for being so frank and open about the workings of the project. I am sure that it is appreciated by all readers here.
As I understand it, the project is measuring the microwave radiation energy at certain frequencies which free O2 molecules emit, and the energy levels correlate with the temperature of the atmosphere.
It seems that the measurements will be directly affected by the absolute number of O2 molecules in the measurement column. So, the readings will vary not only with changes in atmospheric density, but also the relative fraction of O2.
It follows that the raw measurement series could reveal changes in atmospheric density, and hence variations in the total amount of atmosphere per-se ( or at least the O2 proportion)
And as ‘climate’ is really integrated weather (i.e. atmospheric density and temperature change) over a long time, perhaps climatic changes could more usefully be tracked by looking at how other attributes of the atmosphere are varying, rather than just relying on ‘average temperature’.
Perhaps the raw data could find a second use, by being reanalysed to produce a set of proxy sea-level pressure series. This data could then be compared with the existing surface records, and may provide yet another means of understanding.

Sean Inglis
January 13, 2010 4:29 am

Another thank you to add to the chorus. This will take more than one reading, and is fascinating in its own right.

Kendra
January 13, 2010 4:58 am

Off-Topic but Please Help:
I know someone who is in a position of influence – ie will be teaching a new class on the politicization of climate science.
Unfortunately, this person buys the schtick (climategate overblown, evidence stands, etc.)
The person recommended (others as well, but this was specifically mentioned) that I read The Long Thaw by David Archer. Not only is it difficult for me to spend money on what I’m “biased” to see as propaganda, but I need information as soon as possible. The reviews at Amazon were not informative altho there was a negative one, nothing substantive.
If I can do one thing about this, it’s to at least try to affect someone with influence who then keeps the “machine” haha (re MIT debates, etc.) rolling. So this is an important challenge for me, altho the person is brushing me off with the “agree to disagree” meme.
I never even heard of David Archer since I started researching over a year ago.
I probably seem a bit histrionic but the sooner I can stop this, the better – I mean, try to stop this.
This is not the kind of thing for Climate Audit so won’t ask there, and am assuming that Jeff Id’s readers also read here.
So, 1. Does anyone have anything to say about this book and/or David Archer’s arguments. 2. Where else could I ask this?
Thank you for any assistance whatsoever.

Ignots
January 13, 2010 5:00 am

Do we know how PRTs are influenced by prolonged cosmic radiation? Handbooks of PRT usage recommend periodic testing of PRTs against spoiling by external influences. How will that be performed in orbit? Do we know how prolonged cosmic radiation influences radiometer antenna elements? Is it possible to test these influences in the laboratory? If cosmic radiation is influencing the PRTs and radiometric systems, then it will show up only as a continuous warming tendency in the history of measurements. Revealing are your words that ‘the newer satellites can be calibrated to the older satellites’ measurements.’ If the PRTs experience a continuous shift in quality due to cosmic radiation, then such “calibration” only deepens the error: in the output we have a measurement not of “climate warming”, but of degradation of the measurement system due to cosmic radiation. And it is obvious, because the shift of “warming” has a very stable tendency, which doesn’t fit data obtained by other measures.

1DandyTroll
January 13, 2010 5:05 am

Actual make-sense information, instead of nonsensical, from NASA.
But how does the last part, about the use of radiosonde balloons as reference, stand up to the fact that the weather balloons have been criticized for their inaccuracy, pretty much due to ’em being heated by the sun? Of course all that criticism was from some warmist, at NASA, who instead thought temperature measurements from a wind-speed proxy was a more stellar idea. If my memory serves, that was in the doc about what quality control, on the data, is being done by NASA.
And didn’t CRU (or was it RC? same same I guess) also criticize the radiosonde balloons for their inaccuracy?
But essentially you guys just measure the escaping, to space, radiation, and not the “green house effect”, back to earth, radiation? Don’t mind the last part it’s probably really silly, measuring the back to earth radiation, duh. Better to postulate the basic and then use devilishly clever interpolation techniques to extrapolate a measurement and go aa-ha-ha.

Butch
January 13, 2010 5:19 am

While I appreciate the fine tutorial, which for me may raise more questions than it gives answers, I never took Christopher Monckton’s statement to apply to UAH. In fact, in his paper “Climategate: Caught Green-Handed” this statement appears on page 20.
“In future, therefore, the SPPI monthly surface-temperature graphs will exclude the two terrestrial-temperature datasets altogether and will rely solely upon the RSS and UAH satellite datasets.”
I can see that confusion could arise but I took no indictment of UAH from his video presentation.

ShrNfr
January 13, 2010 5:53 am

@ peat (23:35:58) :
I am wondering how the satellite instrument channels are able to focus on different layers of the atmosphere. Why don’t the emission signatures from an entire column of atmosphere from the ground up enter the instrument and become mixed up?
——-
The O2 absorption complex is a series of lines out to about 70 GHz. Depending on where you look, you see more of one altitude than you do of others. But yes, you see them all to some extent. The “inversion” problem has been around for these instruments and the CO2 sounders forever. There are a bunch of approaches you can take. None of them are perfect, but they give pretty good results. They rely on the statistics of the atmosphere, etc. You can use relaxation methods, Kalman-Bucy filters, and all sorts of other stuff to tease the temperature out at various levels. Of course, the “ground truth” is a radiosonde, and they have their own errors. It can be quite a mess. I did my PhD at MIT in one method of how to get the data out. One of the problems is that some stuff yields a “smooth” field, but that is not what the weather forecasters want for their prediction models.

Martin Brumby
January 13, 2010 5:58 am

Very interesting.
Some concern here about an accuracy of plus / minus 1ºC.
But this set up takes a huge number of readings.
So is it better or worse than a few thousand individually badly-sited surface instruments at extremely non-random locations, with loads of human error built in to when the reading was taken, whether the reader remembered to bring his glasses, whether he had a heavy session the night before, and all the rest of it?
And that’s before the merry UEA and GISS teams start to tinker.
OK. No doubt many surface stations, at least in the US, now have more sophisticated instruments. Properly calibrated? Results from the Surfacestations project don’t exactly inspire my confidence that this is actually the case. And for the older instruments, how many readings are more accurate than plus/minus 1ºC, anyway?
So, whilst I’m not convinced that differences in ambient temperature of 1ºC really matter much in the real world (let alone “trends” of 0.6ºC per century!), I’d put my money on Spencer & Christy rather than Hansen & Jones any day.
Sorry to personalise things but that’s the way I see it.

NickB.
January 13, 2010 6:39 am

Great write-up – thank you!
I especially liked the last paragraph, it’s good to see that even the most solid methods are still cross-checked against other reliable measurements. Well done!

Chris
January 13, 2010 6:40 am

Seems like many readers don’t know the difference between relative and absolute readings. Sometimes, knowing the actual value of something (say temp) is not important. What’s important is the change from x temp, like x + 0.001 C. Since satellite data is presented as anomalies, the relative difference is the proper way to go. Accuracy of the absolute number does not matter. Similarly, if you bought $1000 of stock in company x, it really doesn’t matter what the absolute value of the stock price is; what is important is the change between the time you bought it and the time you sold it.

phlogiston
January 13, 2010 7:05 am

Would it be possible to site imaging (microwave, IR etc) equipment on the moon, to measure earth’s atmospheric temperatures? Its orbit presumably decays more slowly than that of a satellite.

Phil.
January 13, 2010 7:09 am

pft (01:11:31) :
I would think measuring 2 points on the earth that have stable temperatures may be used, say above equatorial waters and Antarctic ice surface where you have grounded based temperatures explicity for the use of satellite calibrations, and not controlled by the GISS or CRU crowd. This is not calibration in the true sense of the word, but neither method is perfect in this regard, and perhaps both need to be used.

Due to the orbit not being perfectly polar, the coverage of the Antarctic in the low-altitude channels is not good; also, the interference from the surface is worst when that surface is cold ice and the altitude is ~3000 m. RSS doesn’t report TLT data beyond 70ºS for those reasons.

Alan S. Blue
January 13, 2010 7:33 am

Sometimes, knowing the actual value of something (say temp) is not important. What’s important is the change from x temp, like x+ 0.001 C. Since satellite data is presented as anomalies, then relative difference is the proper way to go.
Perfectly true Chris. But if you’re doing studies of the ocean, it is useful to know if the anomalies – these fractions of a degree differences you’re studying – are occurring focused around -10C or 110C. That’s obviously an outrageous example, but it highlights that the absolute measurements can indeed be quite relevant.
It should similarly be relevant here, although much more subtle and complex to quantify. The spectral properties of compounds are obviously temperature dependent – but they’re also generally slightly non-linear in response.

Mike Ramsey
January 13, 2010 7:35 am

Dr. Spencer,
Perhaps this is OT, but what can you tell us about efforts, either by you, other Aqua team scientists, or perhaps others looking at, say, GPS, to accurately measure specific humidity at altitudes above 850 hPa?
I am thinking about Garth’s 2009 paper, “Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data”.
Thank you,
Mike Ramsey

January 13, 2010 7:52 am

Thanks Dr Spencer for your tutorial… Very good.
And thanks WUWT for pointing it out and putting it up for us.

Ralph
January 13, 2010 8:14 am

Sorry, I still don’t get this.
You calibrate the satellite via a warm and cold calibration object, and delete every variable you know of to get a reliable temperature reading.
You then fly the bird in space and discover that when you look at a particular target you get an adjusted equivalent temperature of 15°C, while actual Earthbound sensors read 17°C.
What are you going to do? Ignore the error? Or adjust the instrument?
Surely, at some point in time, the satellite has to be calibrated against known temperatures on the Earth.
.

JonesII
January 13, 2010 9:28 am

Were these satellites calibrated by “consensus”?☺

Ryan Stephenson
January 13, 2010 9:54 am

“Sometimes, knowing the actual value of something (say temp) is not important. What’s important is the change from x temp, like x+ 0.001 C. Since satellite data is presented as anomalies, then relative difference is the proper way to go.”
Except that this instrument, by NASA’s own admission, cannot read to x + 0.001 C. Its measurement sensitivity is only 0.25 Celsius. It’s like a digital thermometer that can only read in increments of 0.25 Celsius. So any “trend” measured over the last ten years of operation can be discarded as merely measurement noise.
This is before you get into whether the temperature is accurate to better than 0.25 Celsius for any given reading. Difficult, since the measurement is being made for a given “altitude” over a 45 km radius – how they define “altitude” is not clear (do they move up and down to account for changes in land above sea level?).
I would say the instrument is about as useful for measuring climate change as the thermometer on my dad’s garden wall. Still, I bet claiming it was a crucial tool for measuring climate change made the whole project easier to fund.

SJB
January 13, 2010 9:59 am

Forgive me if this is slightly OT, and/or a stupid question (I am not a physicist) or if it has been asked before (I haven’t read all of the comments above) but to what extent could the increased DIRECT heating of the atmosphere over the 30+ year period of satellite measurements, due to the increasing human population, account for the slight increase in temperature of the troposphere measured by the satellite(s)? By ‘direct’ heating I mean the heat of (the increasing number of) buildings escaping into the atmosphere, plus all other human activities which generate heat (all of which eventually must leak out) – not to mention our increased biomass (we are warm-blooded after all). Thus, heating of the atmosphere directly, without the involvement of the greenhouse effect.
I wouldn’t have thought it would be too difficult for someone sufficiently knowledgeable (not me) to calculate the quantity of heat released by human activity in 1978 (using available figures for e.g. the amount of fossil fuel burnt + size of biomass i.e. population), do the same for 2010, then calculate the potential effect of the additional released heat on the amount of radiation which might be detected by the satellite?
I imagine this HAS probably already been done and the answer is that the theoretical effect is negligible, but I just thought I’d ask.

SJB
January 13, 2010 10:04 am

Moderator – if possible could you change the name for my previous comment to lower case ‘sjb’. I just realised I think there might already be an ‘SJB’ contributing to WUWT (not me, that was the first time I have commented).
[Reply: Only you can change your WordPress name. You can’t change your username, but you can add nicknames. The one you designate will appear as your screen name. ~dbs]

R. Craigen
January 13, 2010 10:47 am

As a mathematician I appreciate the distinction you have made between a potential error in the absolute temperature derived from these measurements and the stability of that error. As we teach our first-year calculus students, the first derivative of a constant is zero. If an instrument has an internal systematic (but constant) error it may read “wrong”, but its first derivative — i.e. trend information — may remain completely accurate. The constant cancels out in any differencing operation.
This is an important point, and brings to mind a recent debate that demonstrated a standard alarmist tactic very well. When it is pointed out that surface temperatures have remained constant or fallen slightly over the last ten years, this is often met with the oblique “[That’s not true!] Several of the warmest years in recorded history have occurred during that same 10 years!”
In the debate I recently watched, the exact phrase I used, “That’s not true!”, actually occurred, which brought me some amusement. For the “rejoinder” does not contradict the original assertion at all. It is quite possible for the temperature to be falling and at the same time at or around its highest overall value. Indeed, what else can it do after attaining a local maximum? And this is precisely the case for the instrument-based temperature data these folks generally invoke as gospel. Whether or not one accepts the CRU-and-Mann-generated constructions, to present these two statements as somehow contradictory is simply ludicrous — it is the most common error made by students who flunk Calculus I by failing to understand the distinction between a function f and its derivative f’.
Insofar as this point touches on “debating points”, it should be pointed out that the first derivative of temperature is far more important than the temperature itself when considering what climate trends are afoot. The second derivative is also very important. Where is the inflection point? Is the curvature up or down? Alarmists make some ludicrous claims about the “increasing rate” of temperature change, CO2 increase, or polar ice disappearance. Even when these values themselves are seen to be increasing (f’>0) it is almost always the case, as I have observed, that they display a negative second derivative (f”<0), which merely underscores either the ignorance of the speaker or that they are willfully playing to the ignorance of their audience.

JonesII
January 13, 2010 11:02 am

What is temperature after all? …as faked as the word of a politician; no one believes in it. I would rather prefer amperage.☺

George E. Smith
January 13, 2010 12:16 pm

Well Dr Roy, you have added some more detail to the previous presentation you posted here. Too bad that some folks can’t tell the difference between accuracy, and repeatability.
I’m pretty dumb when it comes to molecular spectroscopy in the microwave region (50-60 GHz). Does your sensor simply grab that whole frequency range in something like a thermopile, or can you actually frequency-select some much narrower specific O2 frequencies? That would have a bearing on the sensitivity to RFI noise that some have raised as an issue. The two-point calibration doesn’t raise any hackles with me, though it seems to have bothered some.
I find myself chuckling at the comments vis-a-vis the 1 deg C accuracy. I assume that these people must have absolute faith in the absolute accuracy of the thermistors or thermocouples that are in these Stevenson Screen “owl boxes”. And fancy that every single one of those thermometers has exactly the same accurate calibration down to millidegrees, I am sure. You must be cheating to only need one thermometer to look at the whole globe.
Personally, I have close to zero confidence in any thermistor or thermocouple thermometer. The thought of connecting a temperature sensor to external circuitry through heat conducting wires does not excite me.
One thing I didn’t quite get was whether ALL of your measurements are made looking straight down, or whether you do oblique readings. You mentioned the shift due to atmospheric path in oblique readings, but I didn’t get whether you do it anyway, and correct for path obliquity.
I’m somewhat curious as to why your 2.7 K big-bang echo measurements are not offset by actual starlight. Do you spectrum-filter to get rid of the energy spectra of real stars?

Joe Born
January 13, 2010 12:58 pm

Butch (05:19:47) :
What you’re reading is not the original version of Monckton’s piece. It did indeed originally say that the satellites were calibrated from the earth measurements, but he promptly revised it to what you’re reading after he was alerted to the error, so the incorrect version was on his site for only a very short time.

David Segesta
January 13, 2010 1:01 pm

Dr. Spencer thank you for the explanation.

David Segesta
January 13, 2010 1:12 pm

I just thought of one question. Radiant energy leaving the earth consists of reflected sunlight and emitted infra-red. But from Dr. Spencer’s explanation it now seems like some energy also leaves through emitted microwaves. Is that amount significant enough to affect the energy balance?

Dr A Burns
January 13, 2010 2:22 pm

I assume the integration avoids “double/multiple counting” over the poles ?
Are polar measurements used to check repeatability ?

Chad
January 13, 2010 2:31 pm

I’ve dealt with this issue of false precision here as has Lucia. It doesn’t make much of a difference.

Richard Saumarez
January 13, 2010 2:37 pm

@R Craigen
You are absolutely right, but the problem lies with differentiating a noisy signal. In the frequency domain, differentiation is a filter with an amplitude proportional to jω, so noise becomes problematic. The second derivative is even worse.

phlogiston
January 13, 2010 3:52 pm

John Simons (03:39:29) :
“Speaking of satellite data
Wow! check out the present anomalies
http://discover.itsc.uah.edu/amsutemps/amsutemps.html
It would be a remarkable discovery indeed to find that during an ice age, global temperatures actually increase! However we would have no reason to doubt it, especially if published in the leading climate journals. Climate science by climate scientists!

pft
January 13, 2010 5:05 pm

“SJB (09:59:35) :
Forgive me if this is slightly OT, and/or a stupid question (I am not a physicist) or if it has been asked before (I haven’t read all of the comments above) but to what extent could the increased DIRECT heating of the atmosphere over the 30+ year period of satellite measurements, due to the increasing human population, account for the slight increase in temperature of the troposphere measured by the satellite(s)? …..
I wouldn’t have thought it would be too difficult for someone sufficiently knowledgeable (not me) to calculate the quantity of heat released by human activity in 1978 (using available figures for e.g. the amount of fossil fuel burnt + size of biomass i.e. population), do the same for 2010, then calculate the potential effect of the additional released heat on the amount of radiation which might be detected by the satellite?”
Hoyt (2006) indicated the US generated heat at about 0.34 W/m2, about 1 W/m2 in urban areas and much higher in cities. For this reason you would believe satellites are much better than surface temperature stations, since at least half or more of the stations are in urban areas. Of course, land accounts for only 29% of the globe’s surface area.
He estimated that since 1900 a population increase from 1 billion to 6 billion could account for 0.5 deg C, which is close to the observed warming. Of course, it’s complicated by land usage changes, aerosols, a more active sun, and of course more CO2, etc.
That’s from David Rapp’s book, Assessing Climate Change.

Brian Dodge
January 13, 2010 7:32 pm

Ignots (05:00:41) :
“- Did we know how PRT is influenced by prolonged cosmic radiation?”
http://www.onlineconversion.com/forum/forum_1059218210.htm
100 rem = 1 Gray
http://www.fas.org/spp/military/docops/usaf/2020/app-f.htm (on satellite radiation exposure)
“using 5.0 gm/cm2 of aluminum shielding, the REM for one year continuous exposure would be reduced to about 550.”
So, 5.5 Gy per year in a Bud box; less exposure with more shielding.
According to http://www.lakeshore.com/pdf_files/Appendices/LSTC_appendixB_l.pdf their PtRTDs typically shift about -20 millikelvin with a gamma dose of 29 Grays plus 2.5e12 neutrons/cm2, so the drift probably would be less than 5 millikelvin per year. There’s a NASA engineer sitting in a cubicle somewhere who knows what the shielding actually is (prolly not a Bud Box).
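Checking that arithmetic, using only the figures quoted above (550 rem/yr behind 5 g/cm2 of aluminum, roughly 1 Gy per 100 rem for gamma, and the Lake Shore drift figure):

```python
dose_gy_per_year = 550 / 100       # 550 rem/yr, ~100 rem per gray
shift_mk_per_gy = -20 / 29         # -20 mK per 29 Gy gamma dose
print(dose_gy_per_year * shift_mk_per_gy)  # ~ -3.8 mK/yr of PRT drift
```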

Brian Dodge
January 13, 2010 8:29 pm

R. Craigen (10:47:57) :
“When it is pointed out that surface temperatures have remained constant or fallen slightly over the last ten years this is often met with the oblique ‘[That’s not true!]’”
But the claim isn’t that “the surface temperature record [from HadCRUT/GISS or even RSS or UAH] has remained constant or fallen slightly”, nor do the people making the claims address whether the changes are statistically significant. What is claimed is that we see “global cooling” since 1998 or 2002, usually ending at the 2007 minimum – Google shows “Results 1 – 10 of about 1,610 from wattsupwiththat.com for ‘global cooling’” – ignoring Arctic sea ice, glacier, and Greenland ice loss, which indeed “… underscores either the ignorance of the speaker or that they are willfully playing to the ignorance of their audience.” Speaking of which, do you not realize that the SNR of the derivative of a noisy signal will be worse, or were you hoping that people like Richard Saumarez (14:37:58) aren’t paying attention?

E.M.Smith
Editor
January 13, 2010 9:57 pm

KeithGuy (01:19:16) : Now would someone please explain in simple terms how GISS global temperature data is contrived (Whoops! I mean calculated)?
Try this:
http://chiefio.wordpress.com/2009/11/09/gistemp-a-human-view/
It has clickable links to greater detail if you want to get more deeply into any one part, but has a “normal folks” feel to the text.

E.M.Smith
Editor
January 13, 2010 10:50 pm

From here:
http://www.wsdmag.com/Articles/ArticleID/18449/18449.html
we have
One final note about the propagation. As it turns out, 60 GHz is at that part of the spectrum where the absorption of the signal by water molecules is at a peak. That’s probably why they made 60 GHz the unlicensed band. The frequencies directly above and below are more useful. What that means is that when it rains or snows signal amplitude will be severely decreased even blocked. That attenuation is in the 10 dB/km to 15 dB/km or about 1.5 dB/100 meters. Luckily, most applications will probably be of the indoor variety, so we won’t have to worry about that.
Two really good arguments for high water-vapor attenuation are interference mitigation and security. Signals won’t travel far beyond their intended targets with this technology so interference to others will be minimal and the chance for illegal reception significantly less.

OK, it’s a bit vague on how far below 60 GHz is less attenuated… but I still come away with an uneasy feeling about this.
We have folks building gear using these frequencies for radars and “stuff” and we have water absorption of at least some of it.
Sure seems to me like we could easily have a mistaken “warming” signal via some combination of “more radars on more things” and/or “more GHz leakage from computers everywhere”. It also looks like reduced water in the air (say from, oh, I don’t know, frozen air in a cooling trend…) could end up letting more signal reach a satellite.
I’m as willing as the next guy to accept that some “magic sauce” is applied that prevents this (say O2 is at 55 GHz and water absorbs from 58 to 60, so you put a notch filter in and look at 55 GHz); but I’d be more comfortable with a word from the folks who make this bird work saying “Yeah, we handled that”… “ground source and water absorption are covered”.
from the same source, it looks like it is going to get worse:
Each week I hear more about products and plans for the unlicensed 60 GHz spectrum. Yes, that is 60 gigahertz, or 60 billion cycles per second. Very high frequency indeed. What can you do with frequencies that high? Well, with all the progress in 60 GHz semiconductors, we are about to find out. A 60 GHz wireless product is probably in the near future.
… This was from March, 2008 …

As for data rate you need to look at available bandwidth. With 7-GHz bandwidth available at 60 GHz (57 to 64 GHz in the U.S.), you can really get some great data rates (4 to 5 Gb/s to be conservative), and that is using the simpler BPSK and QPSK modulation methods.

And some of it is in the top end of 50 GHz…
Panasonic has an HDTV gizmo at 60 GHz:
http://www.wirelesshd.org/ as have others
http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel3%2F4660%2F13055%2F00596529.pdf%3Farnumber%3D596529&authDecision=-203
I also found a bunch of paywalled references to a 50 GHz rain gauge and similar 50 GHz radar-type gear.
So, OK, which is it: Water is “an issue” and attenuates. Or Water is not an issue, but ground source is going to propagate and be an issue…
Don’t know what it is, but something here has my “What?” sensor tingling…

cba
January 14, 2010 6:16 am


Kendra (04:58:06) :
“Off-Topic but Please Help: I know someone who is in a position of influence – ie will be teaching a new class on the politicization of climate science. […] So, 1. Does anyone have anything to say about this book and/or David Archer’s arguments. 2. Where else could I ask this?”

Kendra,
I’m not familiar with Archer’s book. He has put a MODTRAN Calculator on his website that allows one to calculate the radiative power transfer into and out of the atmosphere and offers a few parameters to manipulate. As far as I know, it’s an honest one to the limitations that exist. In and of itself, it doesn’t prove anything one way or the other.
One book that might be of interest is Svensmark’s and Calder’s The Chilling Stars.
You need to understand that there is GW, global warming; AGW, anthropogenic (man-made) global warming; and CAGW, catastrophic anthropogenic global warming. You also need to understand that it is a cult-style religion being exploited for political reasons in order to impose tyrannical rule that would never be accepted otherwise. As such, there will be difficulty using logic to combat emotion. Somewhere deep in the mix, there is the science or some cartoon rendition of science.
GW is not an issue. Climate does change, and if we didn’t cause it, there’s probably no prospect we could actually manipulate it. AGW has the appearance that we caused it so we must fix it. What the hype is about is the CAGW, or catastrophic change we supposedly caused. It has to be catastrophic to justify the expense and suffering. Let’s face it: if we actually are responsible for the increase in CO2 in the air and it does lead to less cold weather and more plant growth with no down side, we might be responsible for saving life on planet Earth by this additional injection of CO2. To reduce this CO2 could actually be a very bad idea. This is the antithesis of the mentality found in the cult, which begins with the notion that man is evil and is destroying the planet, including those cute cuddly polar bears and baby seals.
What it means for the argument is that there are many pieces that must be addressed. One that this web site has concentrated on is determining the validity of the instrument record being used. This particular thread is concerned with a particular satellite record and how it is done. Another is Climate Audit, which concerns itself with the validity of the statistical analysis being used, quite often with proxies like tree rings. There are others that are more general or without specialties, and there are others that concentrate on only certain aspects.
In reality, this huge case for CAGW is far from proven. There are severe flaws on every front.

DeWitt Payne
January 14, 2010 7:55 am

Anyone interested in more detail about remote sensing of atmospheric temperature as well as the physics of radiative energy transfer would not go wrong by obtaining a copy of A First Course in Atmospheric Radiation by Grant W. Petty.
Dr. Spencer’s post is informative about some of the details of calculating temperature from microwave emission, but there’s a lot he left out. Then again, he’s not writing a textbook chapter here.

sjb
January 14, 2010 11:13 am

@pft (17:05:03)
Thanks, very interesting. I would add Rapp’s book to my birthday list, but I’ve just looked it up and seen how much it costs!

Malaga View
January 14, 2010 11:32 am

So let me understand some of the basics regarding the Advanced Microwave Sounding Unit (AMSU) flying on NASA’s Aqua satellite:
1) The Aqua satellite takes 99 minutes to orbit the earth.
See: http://aqua.nasa.gov/about/instruments.php
Therefore it makes 14.55 orbits in 24 hours
24 hrs x 60 mins = 1,440 minutes
1,440 / 99 = 14.545454
i.e. Aqua flies over the equator in daylight 14 times in 24 hours.
2) The earth circumference at the equator is 40,075 km
See: http://en.wikipedia.org/wiki/Earth
Therefore, the AMSU should scan 2,862.5 km of the equator on each orbit.
40,075 km / 14 = 2,862.5 km
3) This WUWT article says that as the instrument scans across the subtrack of the satellite, the radiometer’s antenna views thirty separate ‘footprints’, nominally 50 km in diameter.

Therefore, the AMSU only scans 1,500 km of the equator on each orbit.
30 footprints x 50 km = 1,500 km
Using simple maths, this means the AMSU only scans 52.4% of the equator.
1,500 / 2,862.5 = 0.524
Note:
The AMSU has a Swath of 1650 km according to http://aqua.nasa.gov/about/instrument_amsu.php
This might increase the coverage to 57.64%
Therefore, I would like to know how we get UAH global temperatures by sampling only 52.4% or 57.64% of the globe….
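The same sums as a quick Python sketch, using only the figures quoted above (illustrative arithmetic only, nothing from UAH’s actual processing):

# Back-of-envelope coverage check for AMSU on Aqua, using the figures
# quoted above from the NASA pages and this post.
ORBIT_PERIOD_MIN = 99.0    # Aqua orbital period, minutes
EQUATOR_KM = 40075.0       # Earth's equatorial circumference, km
FOOTPRINTS = 30            # AMSU footprints per scan line
FOOTPRINT_KM = 50.0        # nominal footprint diameter, km
SWATH_KM = 1650.0          # AMSU swath width per NASA, km

orbits_per_day = 24 * 60 / ORBIT_PERIOD_MIN     # about 14.55
daytime_passes = int(orbits_per_day)            # 14 daylight equator crossings
spacing_km = EQUATOR_KM / daytime_passes        # about 2,862.5 km between passes

print(FOOTPRINTS * FOOTPRINT_KM / spacing_km)   # 0.524 -> 52.4% coverage
print(SWATH_KM / spacing_km)                    # 0.576 -> 57.6% coverage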

Butch
January 14, 2010 2:32 pm

Joe Born (12:58:22) :
Thank you for the response. It was my recollection that I had in fact seen the statement both ways. I believe now that the linkage to earth temperature reference was part of a video production I watched, probably produced prior to the correction.
It didn’t sync with my understanding of UAH’s methods, which prompted the search that yielded the paper, which had obviously been corrected by that time. Timelines can be so tricky. Thanks again!

Brian Dodge
January 14, 2010 6:44 pm

Malaga View (11:32:16) :
“i.e. Aqua flies over the equator in daylight 14 times in 24 hours.”
http://aqua.nasa.gov/news/nasa_release.php?id=1
“Aqua crosses the equator 28-30 times a day, doing so at 1:30 p.m. as it heads north and at 1:30 a.m. as it heads south. ”
And the atmospheric microwave emissions don’t turn off at night.
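In fact, if the diurnal cycle at a spot were a pure 24-hour sine wave, the 1:30 am and 1:30 pm passes would sit exactly half a cycle apart, and averaging them would cancel the cycle and recover the daily mean. A toy Python sketch with made-up numbers (not real AMSU data):

import math

# Toy diurnal cycle: a daily-mean temperature plus a pure 24-hour
# sine wave (illustrative values only).
def temp_at_hour(hour, mean=288.0, amplitude=5.0, peak_hour=15.0):
    return mean + amplitude * math.sin(2 * math.pi * (hour - peak_hour) / 24.0)

day_pass = temp_at_hour(13.5)    # 1:30 pm ascending pass
night_pass = temp_at_hour(1.5)   # 1:30 am descending pass

# Half a cycle apart, the sine terms cancel: prints the 288.0 K mean.
print((day_pass + night_pass) / 2.0)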

Malaga View
January 15, 2010 1:59 am

Brian Dodge (18:44:37) :
Aqua crosses the equator 28-30 times a day, doing so at 1:30 p.m. as it heads north and at 1:30 a.m. as it heads south.

This means 14-15 crossings in daylight and 14-15 crossings on the dark side…
And the atmospheric microwave emissions don’t turn off at night.
So the AMSU could take 14-15 day time scans and 14-15 night time scans…
But generating UAH global temperatures from these 28-30 scans would really be mixing sunlight apples with moonlight pears…

Kendra
January 15, 2010 6:22 am

cba,
Thanks so much for your kind answer – and a very informative overview as well. I just received Svensmark’s book and am very much looking forward to it – my husband has it right now.
Someone I know, to whom I’ve sent links including Climate Audit, WUWT and others, is about to teach a class on (ironically) the politicization of science, where one of the required books is Archer’s. We’d had discussions during a visit last summer and, actually, the follow-up e-mail debate was supposed to be about whether or not there was a consensus (I know, bogus argument). Rather than compiling lists (I see a new one’s out, comparing warmists and skeptics in terms of numbers but also in which fields), I thought that sending skeptical points of substance from those I knew of was the right way to go. No response was ever given to these over more than 6 months, although she had said she would send links representing her side of the argument.
In return, I finally got a dismissive “we’ll have to agree to disagree” with a recommendation that I read The Long Thaw (rather elitist: I have to buy a book while she got links from me for free). I also have a list of the other few required and recommended readings to be used in the seminar – just what you’d expect. I had mentioned following Climategate, too, and the comment included the usual “overblown” and “science not affected.”
This is someone I’ve known for over 30 years but rarely see: a political scientist with a leftist persuasion I had always been aware of, though we rarely discussed politics until last time. I was hoping to open her mind just ever so slightly, but I doubt she gave the links even a cursory glance.
Not only did I feel anger on a personal level (uppity Kendra oversteps bounds), but I also saw the reality of what was going to occur – a number of minds subjected to the same old propaganda – and I felt an impulse to try to stop it; being able to make a couple of arguments about this Archer guy seemed like the first step.
I did keep on searching and found an introductory piece on pbs.org (also showing funding by ExxonMobil, go figure) where I identified 2 statements that I know are at minimum controversial – how long CO2 remains in the atmosphere and the amount of CO2 from volcanoes – so that’s a start.
Naturally, I would make the point that I do not wish to buy an expensive book that already presumes as proven what is not – moreover, even without those 2 points, the book was published in 2008 and cannot possibly deal with the latest information (CO2’s lumpiness, haha, for example).
I’ve come to the realization that the whole endeavor will be tilting at windmills, that this group of kids will be 15 or 20 more brainwashed ones set loose in the world, and I feel quite demoralized. However, I will still slowly work on a reply, so am grateful for any assistance in dealing with this +”*ç% Archer. I also discovered he’s associated with our old friend realclimate!!!
I only wanted to post this plea in the latest WUWT so more people would see it, knowing it would be OT in any case. Haste makes waste, so I accidentally clicked on this thread instead of the newest!!!!
Thank you very much again!

January 19, 2010 5:05 pm

John Simons: You wrote, “Bob Tisdale’s Dec update shows that temperatures have moved into the same band of record temperatures not seen since the 1998 super El-Nino… bets for a new all time global SST record in Jan anyone?”
Sorry for not finding this earlier, but without a link to my site, I missed your comment.
Based on the OI.v2 SST anomaly data that I post every month, I’ll guess that this month is about the same as December 2009 or slightly less, but I don’t make predictions.

February 4, 2010 3:00 am

Instruments that measure atmospheric temperature must be subject to constant calibration. Temperature calibrators are the best instruments for making those measurements more reliable and accurate.
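For the two-point (cold space / warm target) scheme described in the post, the simplest linear version looks like this. A minimal Python sketch: the counts and the warm-target temperature are made-up illustrative values, and it ignores any nonlinearity corrections:

# Two-point calibration: linearly interpolate a scene's digital counts
# between the cold-space reference (~2.7 K) and the warm on-board
# target monitored by the PRTs. Illustrative values only.
def counts_to_brightness_temp(c_scene, c_cold, c_warm, t_cold=2.7, t_warm=285.0):
    slope = (t_warm - t_cold) / (c_warm - c_cold)   # kelvins per count
    return t_cold + (c_scene - c_cold) * slope

# Example: a scene reading two-thirds of the way between the references.
print(counts_to_brightness_temp(c_scene=14000, c_cold=2000, c_warm=20000))  # ~190.9 K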

AusieDan
March 3, 2010 7:07 pm

Thank you, Dr Spencer, for explaining all that.
However, a number of comments made good points or raised interesting questions, and the whole thing would be more valuable if someone authoritative could answer and summarise them in reply.
I have three questions about all this:
(1) How are the varying atmospheric pressure and water content handled?
(2) How is variable cloud cover handled? Surely on some occasions heat never reaches the earth, but is reflected back out into space by clouds. In other cases, heat radiated up from the earth’s surface is reflected back down again by clouds and never reaches the satellite.
(3) How can the satellite now be showing that the temperature is reaching unprecedented new heights, when, over and over again, true rural thermometers show no warming for as long as the last 100 years? Something smells and needs checking before it turns rancid.