I am still receiving questions about the method by which the satellite microwave measurements are calibrated to get atmospheric temperatures. The confusion seems to have arisen because Christopher Monckton has claimed that our satellite data must be tied to the surface thermometer data, and after Climategate (as we all know) those traditional measurements have become suspect. So, time for a little tutorial.
NASA’S AQUA SATELLITE
The UAH global temperatures currently being produced come from the Advanced Microwave Sounding Unit (AMSU) flying on NASA’s Aqua satellite. AMSU is located on the bottom of the spacecraft (seen below); the AMSR-E instrument that I serve as the U.S. Science Team Leader for is the one on top of the satellite with the big dish.
Aqua has been operational since mid-2002, and is in a sun-synchronous orbit that crosses the equator at about 1:30 am and pm local solar time. The following image illustrates how AMSU, a cross-track scanner, continuously paints out an image below the spacecraft (actually, this image comes from the MODIS visible and infrared imager on Aqua, but the scanning geometry is basically the same):
HOW MICROWAVE RADIOMETERS WORK
Microwave temperature sounders like AMSU measure the very low levels of thermal microwave radiation emitted by molecular oxygen in the 50 to 60 GHz oxygen absorption complex. This is somewhat analogous to infrared temperature sounders (for instance, the Atmospheric InfraRed Sounder, AIRS, also on Aqua) which measure thermal emission by carbon dioxide in the atmosphere.
As the instrument scans across the subtrack of the satellite, the radiometer's antenna views thirty separate 'footprints', nominally 50 km in diameter, each over a 50-millisecond 'integration time'. At these microwave frequencies, the intensity of thermally-emitted radiation measured by the instrument is directly proportional to the temperature of the oxygen molecules. The instrument actually measures a voltage, which is digitized by the radiometer and recorded as a certain number of digital counts. It is those digital counts which are recorded on board the spacecraft and then downlinked to satellite tracking stations in the Arctic.
HOW THE DATA ARE CALIBRATED TO TEMPERATURES
Now for the important part: How are these instrument digitized voltages calibrated in terms of temperature?
Once every Earth scan, the radiometer antenna looks at a “warm calibration target” inside the instrument whose temperature is continuously monitored with several platinum resistance thermometers (PRTs). PRTs work somewhat like a thermistor, but are more accurate and more stable. Each PRT has its own calibration curve based upon laboratory tests.
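To make the PRT step concrete, here is a minimal sketch of converting a resistance reading to a temperature using the Callendar-Van Dusen equation with the standard IEC 60751 coefficients for a 100-ohm platinum element. This is only an illustration: the actual flight PRTs each use their own laboratory-derived calibration curve, as noted above.

```python
import math

# Hypothetical sketch of a PRT resistance-to-temperature conversion,
# using the Callendar-Van Dusen equation with the standard IEC 60751
# coefficients for a 100-ohm platinum element (valid for T >= 0 deg C).
# A flight PRT would use per-sensor lab-derived coefficients instead.

def prt_temperature(resistance_ohms, r0=100.0,
                    a=3.9083e-3, b=-5.775e-7):
    """Invert R(T) = R0 * (1 + A*T + B*T^2) for T in deg C (T >= 0)."""
    # Solve B*T^2 + A*T + (1 - R/R0) = 0, taking the physical root.
    c = 1.0 - resistance_ohms / r0
    return (-a + math.sqrt(a * a - 4.0 * b * c)) / (2.0 * b)

print(prt_temperature(100.0))   # 0 deg C at the nominal ice-point resistance
print(prt_temperature(103.9))   # roughly 10 deg C
```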
The temperature of the warm calibration target is allowed to float with the rest of the instrument, and it typically changes by several degrees during a single orbit, as the satellite travels in and out of sunlight. While this warm calibration point provides a radiometer digitized voltage measurement and the temperature that goes along with it, how do we use that information to determine what temperature corresponds to the radiometer measurements when looking at the Earth?
A second calibration point is needed, at the cold end of the temperature scale. For that, the radiometer antenna is pointed at the cosmic background, which is assumed to radiate at 2.7 Kelvin. These two calibration points are then used to interpolate to the Earth-viewing measurements, which then provides the calibrated "brightness temperatures". This is illustrated in the following graph:
The response of the AMSU is slightly non-linear, so the calibration curve in the above graph actually has slight curvature to it. Back when all we had were Microwave Sounding Units (MSU), we had to assume the instruments were linear due to a lack of sufficient pre-launch test data to determine their nonlinearity. Because of various radiometer-related and antenna-related factors, the absolute accuracy of the calibrated Earth-viewing temperatures is probably not much better than 1 deg. C. While this sounds like it would be unusable for climate monitoring, the important thing is that the instruments be very stable over time; an absolute accuracy error of this size is irrelevant for climate monitoring, as long as sufficient data are available from successive satellites so that the newer satellites can be calibrated to the older satellites' measurements.
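The two-point calibration described above can be sketched in a few lines. This is a toy illustration, not the actual AMSU processing code: the counts, warm-load temperature, and nonlinearity coefficient are all invented, and the quadratic term simply stands in for the instrument's slight nonlinearity.

```python
# Toy two-point radiometer calibration (NOT the actual AMSU code):
# counts from the warm target (at its PRT-measured temperature) and from
# cold space (2.7 K) define a line, and Earth-view counts are
# interpolated along it. A small quadratic term, zero at both
# calibration points, stands in for the nonlinearity. Numbers invented.

def calibrate(earth_counts, cold_counts, warm_counts,
              t_warm, t_cold=2.7, nonlin=0.0):
    """Convert Earth-view digital counts to a brightness temperature (K)."""
    # Fractional position between the cold and warm calibration points
    x = (earth_counts - cold_counts) / (warm_counts - cold_counts)
    t_linear = t_cold + x * (t_warm - t_cold)
    # Nonlinearity correction: largest midway between the two points
    return t_linear + nonlin * x * (1.0 - x)

# Earth scene halfway between the two calibration counts, warm load at 290 K:
tb = calibrate(earth_counts=15000, cold_counts=10000, warm_counts=20000,
               t_warm=290.0, nonlin=0.5)
print(round(tb, 3))  # 146.475
```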
WHAT LAYERS OF THE ATMOSPHERE ARE MEASURED?
For AMSU channel 5 that we use for tropospheric temperature monitoring, that brightness temperature is very close to the vertically-averaged temperature through a fairly deep layer of the atmosphere. The vertical profiles of each channel’s relative sensitivity to temperature (’weighting functions’) are shown in the following plot:
These weighting functions are for the nadir (straight-down) views of the instrument, and all increase in altitude as the instrument scans farther away from nadir. AMSU channel 5 is used for our middle tropospheric temperature (MT) estimate; we use a weighted difference between the various view angles of channel 5 to probe lower in the atmosphere, which yields a fairly sharp weighting function that forms the basis of our lower-tropospheric (LT) temperature estimate. We use AMSU channel 9 for monitoring of lower stratospheric (LS) temperatures.
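The multi-angle weighted difference can be sketched as follows. The weights and brightness temperatures here are invented purely to show the idea and are not the actual UAH LT coefficients: because off-nadir views peak higher in the atmosphere, subtracting them from the near-nadir views emphasizes the lower troposphere.

```python
import numpy as np

# Conceptual sketch of the multi-angle weighted difference. These
# weights and brightness temperatures are invented; they are NOT the
# actual UAH LT coefficients. Off-nadir views of channel 5 peak higher
# in the atmosphere, so subtracting them from near-nadir views sharpens
# the sensitivity toward the lower troposphere.

# Hypothetical channel-5 brightness temperatures (K), nadir to far off-nadir:
tb = np.array([252.0, 251.5, 250.6, 249.2])

# Hypothetical weights: positive near nadir, negative off-nadir, and
# summing to 1 so the combination is still a temperature.
w = np.array([2.0, 1.0, -1.0, -1.0])
assert w.sum() == 1.0

t_lt = float(np.dot(w, tb))
print(round(t_lt, 1))  # 255.7
```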
For those channels whose weighting functions intersect the surface, a portion of the total measured microwave thermal emission signal comes from the surface. AMSU channels 1, 2, and 15 are considered "window" channels because the atmosphere is essentially clear, so virtually all of the measured microwave radiation comes from the surface. While this sounds like a good way to measure surface temperature, it turns out that the microwave 'emissivity' of the surface (its ability to emit microwave energy) is so variable that it is difficult to accurately measure surface temperatures using such measurements. The variable emissivity problem is smallest for well-vegetated surfaces, and largest for snow-covered surfaces. While the microwave emissivity of the ocean surface around 50 GHz is more stable, it just happens to have a temperature dependence which almost exactly cancels out any sensitivity to surface temperature.
POST-PROCESSING OF DATA AT UAH
The millions of calibrated brightness temperature measurements are averaged in space and time, for instance monthly averages in 2.5-degree latitude bands. I have written FORTRAN programs to do this. I then pass the averages to John Christy, who inter-calibrates the different satellites' AMSUs during periods when two or more satellites are operating (which is always the case).
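In outline, the binning step might look like this. The real code is FORTRAN; this Python sketch with made-up values just illustrates averaging calibrated brightness temperatures into 2.5-degree latitude bands.

```python
import numpy as np

# Sketch of the space-time averaging step: binning calibrated brightness
# temperatures into 2.5-degree latitude bands. The real processing code
# is FORTRAN; the values below are made up for illustration.

def latitude_band_means(lats_deg, tb_kelvin, band_width=2.5):
    """Average measurements into latitude bands from -90 to +90 degrees."""
    edges = np.arange(-90.0, 90.0 + band_width, band_width)
    n_bands = len(edges) - 1
    idx = np.clip(np.digitize(lats_deg, edges) - 1, 0, n_bands - 1)
    sums = np.zeros(n_bands)
    counts = np.zeros(n_bands)
    for i, t in zip(idx, tb_kelvin):
        sums[i] += t
        counts[i] += 1
    # Bands with no observations come back as NaN
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

lats = np.array([0.3, 1.1, 2.0, 47.6])
tbs = np.array([254.0, 253.0, 255.0, 248.0])
means = latitude_band_means(lats, tbs)
print(means[36])  # the 0-2.5 degree band: mean of 254, 253, 255 = 254.0
```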
The biggest problem we have had in creating a data record with long-term stability is orbit decay of the satellites carrying the MSU and AMSU instruments. Before the Aqua satellite was launched in 2002, all other satellites carrying MSUs or AMSUs had orbits which decayed over time. The decay results from the fact that there is a small amount of atmospheric drag on the satellites, so they very slowly fall in altitude over time. This leads to 3 problems for obtaining a stable long-term record of temperature.
(1) Orbit Altitude Effect on LT The first is a spurious cooling signal in our lower tropospheric (LT) temperature product, which depends upon differencing measurements at different view angles. As the satellite falls, the angle at which the instrument views the surface changes slightly. The correction for this is fairly straightforward, and is applied to both our dataset and to the similar datasets produced by Frank Wentz and Carl Mears at Remote Sensing Systems (RSS). This adjustment is not needed for the Aqua satellite since it carries extra fuel which is used to maintain the orbit.
(2) Diurnal Drift Effect The second problem caused by orbit decay is that the nominal local observation time begins to drift. As a result, the measurements can increasingly be from a warmer or cooler time of day after a few years on-orbit. Luckily, this almost always happened when another satellite operating at the same time had a relatively stable observation time, allowing us to quantify the effect. Nevertheless, the correction isn’t perfect, and so leads to some uncertainty. [Instead of this empirical correction we make to the UAH products, RSS uses the day-night cycle of temperatures created by a climate model to do the adjustment for time-of-day.] This adjustment is not necessary for the Aqua AMSU.
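A toy version of a time-of-day adjustment helps show what is being corrected. This is not the UAH algorithm (which is empirical, derived from co-orbiting satellites): it assumes a purely sinusoidal diurnal cycle with an invented 1 deg C amplitude peaking at 15:00 local time.

```python
import math

# Toy illustration of a diurnal-drift adjustment (NOT the UAH
# algorithm): model the day-night temperature cycle as a cosine with an
# invented 1 deg C amplitude peaking at 15:00 local time, and remove the
# expected difference between the drifted observation time and the
# nominal equator-crossing time.

def diurnal_cycle(local_hour, amplitude=1.0, peak_hour=15.0):
    """Departure (deg C) from the daily mean at a given local hour."""
    return amplitude * math.cos(2.0 * math.pi * (local_hour - peak_hour) / 24.0)

def adjust_to_nominal(t_measured, obs_hour, nominal_hour):
    """Remove the expected diurnal difference caused by orbit drift."""
    return t_measured - (diurnal_cycle(obs_hour) - diurnal_cycle(nominal_hour))

# A 25.0 deg C reading taken at 17:30 after years of drift, adjusted
# back to the nominal 13:30 crossing time; 17:30 is a cooler part of the
# modeled cycle, so the adjustment nudges the value upward.
t_adj = adjust_to_nominal(25.0, obs_hour=17.5, nominal_hour=13.5)
print(round(t_adj, 2))  # 25.13
```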
(3) Instrument Body Temperature Effect. As the satellite orbit decays, the solar illumination of the spacecraft changes, which can then alter the physical temperature of the instrument itself. For some unknown reason, it turns out that most of the microwave radiometers' calibrated Earth-viewing temperatures are slightly influenced by the temperature of the instrument itself…which should not be the case. One possibility is that the exact microwave frequency band which the instrument observes changes slightly as the instrument warms or cools, which then leads to weighting functions that move up and down in the atmosphere with instrument temperature. Since tropospheric temperature falls off by about 7 deg. C for every 1 km in altitude, it is important for the 'local oscillators' governing the frequency band sensed to be very stable, so that the altitude of the layer sensed does not change over time. This effect is, once again, empirically removed based upon comparisons to another satellite whose instrument shows little or no instrument temperature effect. The biggest concern is long-term changes in instrument temperature, not the changes within an orbit. Since the Aqua satellite does not drift, the solar illumination does not change, and so there is no long-term change in the instrument's temperature to correct for.
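The lapse-rate arithmetic above can be made concrete with a quick back-of-envelope calculation; the 30 m shift used here is purely illustrative.

```python
# Back-of-envelope sketch of why local-oscillator stability matters,
# using the ~7 deg C per km tropospheric lapse rate quoted in the text.
# If frequency drift shifts the channel's weighting function in
# altitude, the apparent temperature change is roughly the lapse rate
# times the shift. The 30 m figure below is purely illustrative.

LAPSE_RATE_C_PER_KM = 7.0

def spurious_signal(weighting_fn_shift_km):
    """Apparent temperature change (deg C) for an upward shift (km)."""
    return -LAPSE_RATE_C_PER_KM * weighting_fn_shift_km

# An upward drift of only 30 m would mimic a cooling of about 0.2 deg C,
# comparable in size to a decade's worth of climate trend:
print(round(spurious_signal(0.030), 2))  # -0.21
```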
One can imagine all kinds of lesser issues that might affect the long-term stability of the satellite record. For instance, since there have been ten successive satellites, most of which had to be calibrated to the one before it with some non-zero error, there is the possibility of a small ‘random walk’ component to the 30+ year data record. Fortunately, John Christy has spent a lot of time comparing our datasets to radiosonde (weather balloon) datasets, and finds very good long-term agreement.
“This is somewhat analogous to infrared temperature sounders (for instance, the Atmospheric InfraRed Sounder, AIRS, also on Aqua) which measure thermal emission by carbon dioxide in the atmosphere.”
Well those CO2 measurements sound very accurate as an indicator of climate change (not) given that we know there is more CO2 in the atmosphere!
Thus we can dismiss surface stations due to UHI, we can dismiss most of the satellite data since it is measuring CO2 and the amount of CO2 is growing all the time. We have one set of satellite data from the last 10 years that "might" be reliable because it measures oxygen microwave radiation from 450 miles up with some vague idea of whether it might be looking at the ocean or looking at the top of Mount Everest!
Ever get the feeling that NASA and ESA are bending over backwards to justify their existence? Wouldn't it have been a better idea to have spent that money on a few well-placed surface monitoring sites? After all, if the Antarctic ice melts it will be at ground level! If ever there was an example of how AGW theory is being used to justify outrageous research expenditure on the wrong things, this was it.
According to Wikipedia the AMSU-A referred to by Dr Spencer has a maximum measurement sensitivity of 0.25 Celsius. Not great if you are trying to measure decadal climate trends of about 0.1 Celsius. And remember this is only the instrument sensitivity – not an indicator of the instrument accuracy.
Dr Spencer seems to have overlooked that fact. More propaganda dressed up as science.
Anyway, my son’s got a bit of a fever today. I’ll go and check how bad it is by using a microwave receiver 75km away.
If observational error is +/- 1 deg, then calculated anomalies of < 1 deg are meaningless.
Being a complete non-climatologist, could some explain to me several things?
Why does the mean temperature rise during NH summer? Is this related to a higher land mass in the NH and more sea in the SH? Being naive, I would expect the mean global temperature to remain constant.
Why does the upper atmosphere cool when the lower atmosphere warms?
I'm not being critical, I just want to know.
LOL@Mann O Mann’s comment
Thank you Dr. Spencer for the excellent writeup on NASA's Aqua satellite. I'd be interested in learning more about AIRS and why it is so difficult to detect the CO2 signature from gas columns when clouds are involved. Also, why does ESA's Envisat seem to be limited to detecting CO2 over land while the Aqua AIRS instrument does not suffer from this limitation? Thanks again.
Just remembered a screen cap I made about 6 months ago.
I made a screen cap of NOAA’s July 2009 prediction for this winter and beyond.
http://i292.photobucket.com/albums/mm3/arketebel/NOAAPredictionJulyof2009.jpg
[sarcasm]
No doubt these guys know what it will be like in 2050 or 2100. Look how dead on they were 6 months into the future.
[/sarcasm]
Speaking of satellite data
Wow! check out the present anomalies
http://discover.itsc.uah.edu/amsutemps/amsutemps.html
All the low-altitude satellite temperature channels are going ballistic and are now in record territory; looks like Jan is shaping up to be another seasonal temperature record at least.
Bob Tisdale's Dec update shows that temperatures have moved into the same band of record temperatures not seen since the 1998 super El Niño… any bets for a new all-time global SST record in Jan?
Dr. Spencer, thanks for being so frank and open about the workings of the project. I am sure that it is appreciated by all readers here.
As I understand it, the project is measuring the microwave radiation energy at certain frequencies which free O2 molecules emit, and the energy levels correlate with temperature rises of the atmosphere.
It seems that the measurements will be directly affected by the absolute number of O2 molecules in the measurement column. So, the readings will vary not only with changes in atmospheric density, but also the relative fraction of O2.
It follows that the raw measurement series could reveal changes in atmospheric density, and hence variations in the total amount of atmosphere per-se ( or at least the O2 proportion)
And as 'climate' is really integrated weather (i.e. atmospheric density and temperature change) over a long time, perhaps climatic changes could more usefully be tracked by looking at how other attributes of the atmosphere are varying, rather than just relying on 'average temperature'.
Perhaps the raw data could find a second use, by being reanalysed to produce a set of proxy sea-level pressure series. This data could then be compared with the existing surface records, and may provide yet another means of understanding.
Another thank you to add to the chorus. This will take more than one reading, and is fascinating in its own right.
Off-Topic but Please Help:
I know someone who is in a position of influence – ie will be teaching a new class on the politicization of climate science.
Unfortunately, this person buys the schtick (climategate overblown, evidence stands, etc.)
The person recommended (others as well, but this was specifically mentioned) that I read The Long Thaw by David Archer. Not only is it difficult for me to spend money on what I’m “biased” to see as propaganda, but I need information as soon as possible. The reviews at Amazon were not informative altho there was a negative one, nothing substantive.
If I can do one thing about this, it’s to at least try to affect someone with influence who then keeps the “machine” haha (re MIT debates, etc.) rolling. So this is an important challenge for me, altho the person is brushing me off with the “agree to disagree” meme.
I never even heard of David Archer since I started researching over a year ago.
I probably seem a bit histrionic but the sooner I can stop this, the better – I mean, try to stop this.
This is not the kind of thing for Climate Audit so won’t ask there, and am assuming that Jeff Id’s readers also read here.
So, 1. Does anyone have anything to say about this book and/or David Archer’s arguments. 2. Where else could I ask this?
Thank you for any assistance whatsoever.
- Do we know how PRTs are influenced by prolonged cosmic radiation? Handbooks on PRT usage recommend periodic testing of PRTs against degradation by external influences. How will that be performed in orbit? Do we know how prolonged cosmic radiation influences the radiometer antenna elements? Is it possible to test these influences in a laboratory? If cosmic radiation is influencing the PRTs and radiometric systems, then the measurement history will show only a continuous warming tendency. Revealing are your words that 'the newer satellites can be calibrated to the older satellites' measurements.' If the PRTs are experiencing a continuous shift in quality due to cosmic radiation, then such "calibration" only deepens the error: the output is a measurement not of "climate warming" but of the degradation of the measurement system due to cosmic radiation. And it is obvious, because the "warming" shift has a very stable tendency which doesn't fit data obtained by other measures.
Actual information that makes sense from NASA, instead of nonsense.
But how does the last part, about using radiosonde balloons as a reference, square with the fact that the weather balloons have been criticized for their inaccuracy, mostly due to them being heated by the sun? Of course, all that criticism came from some warmist at NASA who instead thought temperature measurements from a wind-speed proxy were a more stellar idea. If my memory serves, that was in the doc about what quality control on the data is done by NASA.
And didn’t CRU (or was it RC? same same I guess) also criticize the radiosonde balloons for their inaccuracy?
But essentially you guys just measure the escaping-to-space radiation, and not the "greenhouse effect" back-to-earth radiation? Don't mind the last part, it's probably really silly, measuring the back-to-earth radiation, duh. Better to postulate the basics and then use devilishly clever interpolation techniques to extrapolate a measurement and go aa-ha-ha.
While I appreciate the fine tutorial, which for me raises more questions than it gives answers, I never took Christopher Monckton's statement to apply to UAH. In fact, in his paper "Climategate: Caught Green-Handed" this statement appears on page 20.
“In future, therefore, the SPPI monthly surface-temperature graphs will exclude the two terrestrial-temperature datasets altogether and will rely solely upon the RSS and UAH satellite datasets.”
I can see that confusion could arise but I took no indictment of UAH from his video presentation.
peat (23:35:58) :
I am wondering how the satellite instrument channels are able to focus on different layers of the atmosphere. Why don’t the emission signatures from an entire column of atmosphere from the ground up enter the instrument and become mixed up?
——-
The O2 absorption complex is a series of lines out to about 70 GHz. Depending on where you look, you see more of one altitude than you do of others. But yes, you see them all to some extent. The "inversion" problem has been around for these instruments and the CO2 sounders forever. There are a bunch of approaches you can take. None of them are perfect, but they give pretty good results. They rely on the statistics of the atmosphere, etc. You can use relaxation methods, Kalman-Bucy filters, and all sorts of other stuff to tease the temperature out at various levels. Of course, the "ground truth" is a radiosonde, and they have their own errors. It can be quite a mess. I did my PhD at MIT in one method of how to get the data out. One of the problems is that some stuff yields a "smooth" field, but that is not what the weather forecasters want for their prediction models.
Very interesting.
Some concern here about an accuracy of plus/minus 1°C.
But this set up takes a huge number of readings.
So is it better or worse than a few thousand individually badly sited surface instruments at extremely non – random locations and with loads of human error built in to when the reading was taken, whether the reader remembered to bring his glasses, whether he had a heavy session the night before and all the rest of it?
And that’s before the merry UEA and GISS teams start to tinker.
OK. No doubt many surface stations at least in the US now have more sophisticated instruments. Properly calibrated? Results from the Surfacestations project don't exactly inspire my confidence that this is actually the case. And for the older instruments, how many readings are more accurate than plus/minus 1°C, anyway?
So, whilst I'm not convinced that differences in ambient temperature of 1°C really matter much in the real world (let alone "trends" of 0.6°C per century!), I'd put my money on Spencer & Christy rather than Hansen & Jones any day.
Sorry to personalise things but that’s the way I see it.
Great write-up – thank you!
I especially liked the last paragraph, it’s good to see that even the most solid methods are still cross-checked against other reliable measurements. Well done!
Seems like many readers don't know the difference between relative and absolute readings. Sometimes, knowing the actual value of something (say temp) is not important; what's important is the change from x temp, like x + 0.001 C. Since satellite data are presented as anomalies, the relative difference is the proper way to go. Accuracy of the absolute number does not matter. Similarly, if you bought $1000 of stock in company x, it really doesn't matter what the absolute value of the stock price is; what matters is the change between the time you bought it and the time you sold it.
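A quick numeric illustration of this point, with invented numbers: a constant absolute-calibration offset cancels exactly when each series is expressed as anomalies about its own mean.

```python
import numpy as np

# Quick numeric check, with invented numbers: a constant
# absolute-calibration offset drops out entirely when data are expressed
# as anomalies from their own mean.

true_temps = np.array([288.1, 288.3, 288.0, 288.6])
biased = true_temps + 1.0  # instrument reads 1 K too warm everywhere

anom_true = true_temps - true_temps.mean()
anom_biased = biased - biased.mean()

print(np.allclose(anom_true, anom_biased))  # True: the offset cancels
```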
Would it be possible to site imaging (microwave, IR etc) equipment on the moon, to measure earth’s atmospheric temperatures? Its orbit presumably decays more slowly than that of a satellite.
pft (01:11:31) :
I would think measuring 2 points on the earth that have stable temperatures could be used, say above equatorial waters and the Antarctic ice surface, where you have ground-based temperatures explicitly for the use of satellite calibrations, and not controlled by the GISS or CRU crowd. This is not calibration in the true sense of the word, but neither method is perfect in this regard, and perhaps both need to be used.
Due to the orbit not being perfectly polar, the coverage of the Antarctic in the low-altitude channels is not good; also, the interference from the surface is worst when that surface is cold ice and the altitude is ~3000 m. RSS doesn't report TLT data beyond 70°S for those reasons.
Sometimes, knowing the actual value of something (say temp) is not important. What’s important is the change from x temp, like x+ 0.001 C. Since satellite data is presented as anomalies, then relative difference is the proper way to go.
Perfectly true Chris. But if you’re doing studies of the ocean, it is useful to know if the anomalies – these fractions of a degree differences you’re studying – are occurring focused around -10C or 110C. That’s obviously an outrageous example, but it highlights that the absolute measurements can indeed be quite relevant.
It should similarly be relevant here, although much more subtle and complex to quantify. The spectral properties of compounds are obviously temperature dependent – but they’re also generally slightly non-linear in response.
Dr. Spencer,
Perhaps this is OT, but what can you tell us about efforts, either by you, other Aqua team scientists, or perhaps others looking at, say, GPS, to accurately measure specific humidity at altitudes above 850 hPa?
I am thinking about Garth’s 2009 paper, “Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data”.
Thank you,
Mike Ramsey
Thanks Dr Spencer for your tutorial… Very good.
And thanks WUWT for pointing it out and putting it up for us.
Sorry, I still don’t get this.
You calibrate the satellite via a warm and cold calibration object, and delete every variable you know of to get a reliable temperature reading.
You then fly the bird in space and discover that when you look at a particular target you get an adjusted equivalent temperature of 15°C, while actual Earthbound sensors read 17°C.
What are you going to do? Ignore the error? Or adjust the instrument?
Surely, at some point in time, the satellite has to be calibrated against known temperatures on the Earth.
.
Were these satellites calibrated by "consensus"?☺
“Sometimes, knowing the actual value of something (say temp) is not important. What’s important is the change from x temp, like x+ 0.001 C. Since satellite data is presented as anomalies, then relative difference is the proper way to go.”
Except that this instrument, by NASA's own admission, cannot read to x + 0.001 C. Its measurement sensitivity is only 0.25 Celsius. It's like a digital thermometer that can only read in increments of 0.25 Celsius. So any "trend" measured over the last ten years of operation can be discarded as merely measurement noise.
This is before you get into whether the temperature is accurate to better than 0.25 Celsius for any given reading. Difficult, since the measurement is being made for a given "altitude" over a 45 km radius – how they define "altitude" is not clear (do they move up and down to account for changes in land above sea level?)
I would say the instrument is about as useful for measuring climate change as the thermometer on my dad’s garden wall. Still, I bet claiming it was a crucial tool for measuring climate change made the whole project easier to fund.