How the UAH Global Temperatures Are Produced

by Roy Spencer, Ph.D.

I am still receiving questions about the method by which the satellite microwave measurements are calibrated to get atmospheric temperatures. The confusion seems to have arisen because Christopher Monckton has claimed that our satellite data must be tied to the surface thermometer data, and after Climategate (as we all know) those traditional measurements have become suspect. So, time for a little tutorial.

NASA’S AQUA SATELLITE

The UAH global temperatures currently being produced come from the Advanced Microwave Sounding Unit (AMSU) flying on NASA’s Aqua satellite. AMSU is located on the bottom of the spacecraft (seen below); the AMSR-E instrument that I serve as the U.S. Science Team Leader for is the one on top of the satellite with the big dish.

[Image: the Aqua satellite over the Pacific at night]

Aqua has been operational since mid-2002, and is in a sun-synchronous orbit that crosses the equator at about 1:30 am and pm local solar time. The following image illustrates how AMSU, a cross-track scanner, continuously paints out an image below the spacecraft (actually, this image comes from the MODIS visible and infrared imager on Aqua, but the scanning geometry is basically the same):

[Image: Aqua MODIS scan swaths]

HOW MICROWAVE RADIOMETERS WORK

Microwave temperature sounders like AMSU measure the very low levels of thermal microwave radiation emitted by molecular oxygen in the 50 to 60 GHz oxygen absorption complex. This is somewhat analogous to infrared temperature sounders (for instance, the Atmospheric InfraRed Sounder, AIRS, also on Aqua) which measure thermal emission by carbon dioxide in the atmosphere.

As the instrument scans across the subtrack of the satellite, the radiometer’s antenna views thirty separate ‘footprints’, nominally 50 km in diameter, each over a 50-millisecond ‘integration time’. At these microwave frequencies, the intensity of thermally emitted radiation measured by the instrument is directly proportional to the temperature of the oxygen molecules. The instrument actually measures a voltage, which is digitized by the radiometer and recorded as a certain number of digital counts. It is those digital counts which are recorded on board the spacecraft and then downlinked to satellite tracking stations in the Arctic.

HOW THE DATA ARE CALIBRATED TO TEMPERATURES

Now for the important part: How are these instrument digitized voltages calibrated in terms of temperature?

Once every Earth scan, the radiometer antenna looks at a “warm calibration target” inside the instrument whose temperature is continuously monitored with several platinum resistance thermometers (PRTs). PRTs work somewhat like a thermistor, but are more accurate and more stable. Each PRT has its own calibration curve based upon laboratory tests.

The temperature of the warm calibration target is allowed to float with the rest of the instrument, and it typically changes by several degrees during a single orbit as the satellite travels in and out of sunlight. While this warm calibration point provides a radiometer digitized-voltage measurement and the temperature that goes along with it, how do we use that information to determine what temperature corresponds to the radiometer measurements when looking at the Earth?

A second calibration point is needed, at the cold end of the temperature scale. For that, the radiometer antenna is pointed at the cosmic background, which is assumed to radiate at 2.7 Kelvin. These two calibration points are then used to interpolate the Earth-viewing measurements to temperature, which provides the calibrated “brightness temperatures”. This is illustrated in the following graph:

[Figure: radiometer calibration graph]
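
The interpolation between the two calibration points can be sketched in a few lines. All of the counts and the warm-target temperature below are invented numbers for illustration, not actual AMSU values:

```python
# Sketch of the two-point radiometer calibration described above: the
# cold-space and warm-target views anchor a line that maps Earth-view
# counts to brightness temperature.

def calibrate_linear(c_earth, c_cold, c_warm, t_cold, t_warm):
    """Linearly interpolate Earth-view counts to brightness temperature (K)."""
    gain = (t_warm - t_cold) / (c_warm - c_cold)   # K per count
    return t_cold + gain * (c_earth - c_cold)

# Hypothetical numbers: cold space at 2.73 K, warm target read by the PRTs.
t_cold, t_warm = 2.73, 285.0
c_cold, c_warm = 1500.0, 14200.0

tb = calibrate_linear(8000.0, c_cold, c_warm, t_cold, t_warm)
print(round(tb, 2))  # 147.2
```

In practice every scan line gets its own gain, since the warm-target temperature floats with the instrument.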

The response of the AMSU is slightly non-linear, so the calibration curve in the above graph actually has slight curvature to it. Back when all we had were Microwave Sounding Units (MSU), we had to assume the instruments were linear due to a lack of sufficient pre-launch test data to determine their nonlinearity. Because of various radiometer-related and antenna-related factors, the absolute accuracy of the calibrated Earth-viewing temperatures is probably not much better than 1 deg. C. While this sounds like it would be unusable for climate monitoring, the important thing is that the instruments be very stable over time; an absolute accuracy error of this size is irrelevant for climate monitoring, as long as sufficient data are available from successive satellites so that the newer satellites can be calibrated to the older satellites’ measurements.
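
One common way to handle a slightly nonlinear radiometer is to add a quadratic term that vanishes at both calibration points, so the two-point anchors are preserved exactly. The nonlinearity parameter below is invented for illustration; the real value would come from pre-launch testing:

```python
def calibrate_nonlinear(c_earth, c_cold, c_warm, t_cold, t_warm, u=0.1):
    """Two-point linear calibration plus a small quadratic correction.

    u is a nonlinearity parameter (invented here; determined from
    pre-launch thermal-vacuum tests for a real instrument). The quadratic
    term is zero at both calibration points, as it must be.
    """
    x = (c_earth - c_cold) / (c_warm - c_cold)     # 0 at cold view, 1 at warm
    t_lin = t_cold + x * (t_warm - t_cold)
    return t_lin + u * (t_warm - t_cold) * x * (1.0 - x)

# Demo with the same hypothetical numbers as the linear sketch:
print(round(calibrate_nonlinear(8000.0, 1500.0, 14200.0, 2.73, 285.0), 2))
```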

WHAT LAYERS OF THE ATMOSPHERE ARE MEASURED?

For AMSU channel 5, which we use for tropospheric temperature monitoring, the brightness temperature is very close to the vertically averaged temperature through a fairly deep layer of the atmosphere. The vertical profiles of each channel’s relative sensitivity to temperature (‘weighting functions’) are shown in the following plot:

[Figure: AMSU weighting functions]

These weighting functions are for the nadir (straight-down) views of the instrument, and all shift upward in altitude as the instrument scans farther away from nadir. AMSU channel 5 is used for our middle-tropospheric temperature (MT) estimate; we use a weighted difference between the various view angles of channel 5 to probe lower in the atmosphere, which yields a fairly sharp weighting function that forms the basis of our lower-tropospheric (LT) temperature estimate. We use AMSU channel 9 for monitoring of lower-stratospheric (LS) temperatures.
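
The idea that a channel's brightness temperature is roughly the profile temperature averaged with its weighting function can be illustrated with a toy calculation. Both the temperature profile and the weighting function below are schematic, not real AMSU curves:

```python
import numpy as np

# Toy vertical grid and a simple lapse-rate temperature profile:
# 6.5 K/km up to an 11 km tropopause, isothermal above (schematic).
z = np.linspace(0.0, 20.0, 201)                  # altitude (km), even spacing
t_profile = 288.0 - 6.5 * np.minimum(z, 11.0)    # temperature (K)

# A schematic mid-tropospheric weighting function peaking a few km up.
w = z * np.exp(-z / 4.0)
w = w / w.sum()                                  # normalize weights to sum to 1

# Brightness temperature ~ weighting-function average of the profile.
tb = float(np.sum(w * t_profile))
print(round(tb, 1))
```

The result sits well below the surface temperature, reflecting the fact that the channel senses a deep layer of the troposphere rather than the ground.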

For those channels whose weighting functions intersect the surface, a portion of the total measured microwave thermal emission signal comes from the surface. AMSU channels 1, 2, and 15 are considered “window” channels because the atmosphere is essentially clear, so virtually all of the measured microwave radiation comes from the surface. While this sounds like a good way to measure surface temperature, it turns out that the microwave ‘emissivity’ of the surface (its ability to emit microwave energy) is so variable that it is difficult to accurately measure surface temperatures using such measurements. The variable emissivity problem is smallest for well-vegetated surfaces, and largest for snow-covered surfaces. While the microwave emissivity of the ocean surface around 50 GHz is more stable, it just happens to have a temperature dependence which almost exactly cancels out any sensitivity to surface temperature.

POST-PROCESSING OF DATA AT UAH

The millions of calibrated brightness temperature measurements are averaged in space and time, for instance into monthly averages in 2.5-degree latitude bands. I do this with FORTRAN programs I have written. I then pass the averages to John Christy, who inter-calibrates the different satellites’ AMSUs during periods when two or more satellites are operating (which is always the case).
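
The space-and-time averaging step can be sketched roughly as below, with synthetic footprints; the real processing of course also handles longitude, view angle, and quality control:

```python
import numpy as np

# Bin synthetic footprint brightness temperatures into 2.5-degree
# latitude bands, as in a monthly zonal-mean product (illustration only).

rng = np.random.default_rng(0)
lats = rng.uniform(-90.0, 90.0, 10000)          # footprint latitudes (deg)
tbs = 250.0 + 30.0 * np.cos(np.radians(lats))   # synthetic temperatures (K)

edges = np.arange(-90.0, 90.1, 2.5)             # 73 edges -> 72 bands
idx = np.digitize(lats, edges) - 1              # band index for each footprint

band_mean = np.array([tbs[idx == i].mean() for i in range(len(edges) - 1)])
print(band_mean.shape)                          # one mean per 2.5-degree band
```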

The biggest problem we have had in creating a data record with long-term stability is the orbital decay of the satellites carrying the MSU and AMSU instruments. Before the Aqua satellite was launched in 2002, all other satellites carrying MSUs or AMSUs had orbits which decayed over time. The decay results from the fact that there is a small amount of atmospheric drag on the satellites, so they very slowly fall in altitude over time. This leads to three problems for obtaining a stable long-term record of temperature.

(1) Orbit Altitude Effect on LT. The first is a spurious cooling signal in our lower-tropospheric (LT) temperature product, which depends upon differencing measurements at different view angles. As the satellite falls, the angle at which the instrument views the surface changes slightly. The correction for this is fairly straightforward, and is applied both to our dataset and to the similar datasets produced by Frank Wentz and Carl Mears at Remote Sensing Systems (RSS). This adjustment is not needed for the Aqua satellite, since it carries extra fuel which is used to maintain the orbit.

(2) Diurnal Drift Effect. The second problem caused by orbit decay is that the nominal local observation time begins to drift. As a result, the measurements can increasingly be from a warmer or cooler time of day after a few years on-orbit. Luckily, this almost always happened when another satellite operating at the same time had a relatively stable observation time, allowing us to quantify the effect. Nevertheless, the correction isn’t perfect, and so leads to some uncertainty. [Instead of this empirical correction we make to the UAH products, RSS uses the day-night cycle of temperatures created by a climate model to do the adjustment for time-of-day.] This adjustment is not necessary for the Aqua AMSU.
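
Why drifting observation time matters can be shown with a toy diurnal cycle. The 3 K amplitude, the 3 pm peak, and the drift to 4 pm are all invented numbers; in the real UAH processing the diurnal cycle is estimated empirically from a co-flying, non-drifting satellite:

```python
import math

# If the observation time drifts from 1:30 pm toward later afternoon, a
# fixed daily temperature cycle alone produces a spurious "trend".

def diurnal(hour, amplitude=3.0, peak_hour=15.0):
    """Idealized diurnal temperature departure (K) from the daily mean."""
    return amplitude * math.cos(2.0 * math.pi * (hour - peak_hour) / 24.0)

obs_hour_year0 = 13.5        # nominal 1:30 pm equator crossing
obs_hour_year5 = 16.0        # hypothetical drifted crossing time

spurious = diurnal(obs_hour_year5) - diurnal(obs_hour_year0)
print(round(spurious, 3))    # a spurious warm offset, in K

# Knowing the cycle, the correction is simply to subtract this offset
# from the drifted satellite's measurements.
```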

(3) Instrument Body Temperature Effect. As the satellite orbit decays, the solar illumination of the spacecraft changes, which then can alter the physical temperature of the instrument itself. For some unknown reason, it turns out that most of the microwave radiometers’ calibrated Earth-viewing temperatures are slightly influenced by the temperature of the instrument itself…which should not be the case. One possibility is that the exact microwave frequency band which the instrument observes at changes slightly as the instrument warms or cools, which then leads to weighting functions that move up and down in the atmosphere with instrument temperature. Since tropospheric temperature falls off by about 7 deg. C for every 1 km in altitude, it is important for the ‘local oscillators’ governing the frequency band sensed to be very stable, so that the altitude of the layer sensed does not change over time. This effect is, once again, empirically removed based upon comparisons to another satellite whose instrument shows little or no instrument temperature effect. The biggest concern is the long-term changes in instrument temperature, not the changes within an orbit. Since the Aqua satellite does not drift, the solar illumination does not change and so there is no long-term change in the instrument’s temperature to correct for.
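
The lapse-rate arithmetic behind that local-oscillator concern is simple. The 100 m shift of the sensed layer used here is an invented example, not an AMSU specification:

```python
# If the effective emission layer shifts in altitude, the ~7 K/km
# tropospheric lapse rate turns that shift directly into a spurious
# temperature change.

lapse_rate = 7.0          # K per km, from the article
altitude_shift_km = 0.1   # hypothetical 100 m drift of the sensed layer

spurious_change = lapse_rate * altitude_shift_km
print(round(spurious_change, 2))  # 0.7 K
```

A 0.7 K artifact would swamp decadal trends of a few tenths of a degree, which is why oscillator stability (or an empirical correction against a stable satellite) matters so much.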

One can imagine all kinds of lesser issues that might affect the long-term stability of the satellite record. For instance, since there have been ten successive satellites, most of which had to be calibrated to the one before it with some non-zero error, there is the possibility of a small ‘random walk’ component to the 30+ year data record. Fortunately, John Christy has spent a lot of time comparing our datasets to radiosonde (weather balloon) datasets, and finds very good long-term agreement.

Thank you for this tutorial, Dr. Spencer.
If I may ask, are the raw satellite data and programs used to process it available to the public?

Vincent Guerrini

Dr’s Spencer and Christy. Thank you for this lucid explanation. Can we assume that surface data could be +1 or -1C error (or is it +0.5 and -0.5C), but that does not really matter since the trend and radiosonde data is very reliable. ie: As long as satellite does not fail in the next 50-100 years we would be able to discern a significant trend. So in essence the current 10 year graphs are quite accurate. The posting by Magicjava above does have a point. We cannot ask the team to provide if RSS, AMSU doesn’t either….

If ocean water is problematic, and you are measuring O2, but with some ground reflections, how do you remove the effects of confounders such as a partial view of ground, ocean, vegetation, and even things like radar stations and other ground sources of microwaves?
Also, given that the atmosphere often has variable water in it, how do you prevent variations in ice, snow, rain, hail, aurorae, humidity, etc. from disturbing the O2 signature?
Finally, given the recent variation in atmosphere thickness with solar output changes, how does that change in density per unit of altitude impact your temperature measuring?
And a comment: Given your description it looks to me like you have a measure of predominantly tropospheric air temperatures, not surface temperatures, so to compare surface series to satellite series we need to know that relationship.
Thanks, nice article!

stumpy

I assume there is some sensibility check carried out to ensure the result is in the right area or compares reasonably with surface measurements. I assume Radiosonde data is used for this comparison. What I think people might be interested to know is:
1. what do you compare the data with to check results i.e a sensibility check against radiosonde data.
2. Are any further adjustments made due to any comparison with other data i.e. to produce a better comparison
I would assume no further calibration is carried out, but further investigations to resolve discrepancies are carried out if any significant difference is observed. But this area might be worth clarifying as it seems some people believe at this stage there is some adjustment to “match” the data to surface temperature records.

Chad

AMSU channel 5 is used for our middle tropospheric temperature (MT) estimate; we use a weighted difference between the various view angles of channel 5 to probe lower in the atmosphere, which a fairly sharp weighting function which is for our lower-tropospheric (LT) temperature estimate.

A weighting function is used? I thought that the UAH product used a full dynamic radiative transfer model to get the LT temperature.

Tenuc

Thanks for an excellent overview of how the AMSU operates and some of the known issues.
Does variation to type, altitude and density of cloud have any effect on the measurements? I’d also be very interested to know the temporal resolution of the system and, in view of the assumptions and empirical corrections made, the accuracy of the published results?

Layne Blanchard

Dr. Spencer,
Is the UAH record we often see stitched from one instrument/satellite to another in or around 1997-1999?

But then, weren’t the final analyzed values (net microwave numbers), or what becomes the “temperature” itself, “calibrated” back against GISS “corrected” values of surface temperatures?
If so, then isn’t the satellite baseline point “corrected” (er, corrupted) as well by being set against those manipulated average earth temperatures?

Alan S. Blue

Wouldn’t it still be rather useful to attempt a calibration with somewhere on the ground though? This would be an entirely different type of calibration, both accuracy and precision would suffer, naturally. You wouldn’t be measuring the ground temperature, but inferring it from nearby temperatures and established calibration coefficients.
Because it would be mighty handy to know the value of the surface stations. As nice as the satellites are for measuring current effects, the crucial part is deciphering the instrumental enigma that happens to reach 150 years back into the past.
The adjustments that have been applied are of the same order as the measured trends. With a minimal number of randomly distributed Stevenson Screens, some of the questions about how well any given gridcell is represented by the available instruments and the available adjustments could at least be attempted.

Mann O Mann

With that explanation it is obvious that none of the data from these satellites can be trusted. After all, at no point are they calibrated to tree rings.
/sarcasm

Chad

RACookPE1978,
John Christy has said that (here),

“No other data are used in the construction. That is why we can do comparison studies without any interdependence.”

Konrad

Dr. Spencer,
Firstly thank you for taking the time to give us this tutorial. I find it encouraging that radiosonde balloons are used for cross checking rather than surface stations. However there has been one question that I have always wondered about regarding remote temperature sensing. This relates to the thickness of the atmosphere. During the current prolonged solar minimum it has been widely reported that the atmosphere has contracted by around 100 Km. Is it necessary to adjust data from the satellites regarding “altitude of layer sensed” to account for the recent cooling and contraction of the mesosphere and thermosphere?

Ray Boorman

Thanks for this explanation Dr Spencer. A pity I didn’t know this when I read the myth propagated by Lord Monckton recently, as it would have stopped me repeating his mistake.

Rocket Man

There are a lot of steps and a lot of equipment used to determine the temperatures measured, and I would be interested to know the overall error of the entire process, including the PRT’s, the electronics used to measure the PRT’s output, the variation in the cosmic background temps, the MSU itself, the removal of the portion of the signal that comes from the surface, etc., etc.
I would be surprised if this metric is not already known, as it is a standard practice in Aerospace to determine things like that, but I am surprised that the information was not included in the article.

I have compared records of rural meteorological stations with UAH anomalies for a given 2.5×2.5° grid and found excellent agreement. I tried it for Armagh Observatory and Lomnicky peak Observatory.

Indiana Bones

I too am interested to know how you reject ambient radiation in the 50-60GHz range. The surface is increasingly radiative due to man-made devices directly at frequency or from harmonics thereto.

barry moore

If only the IPCC were as honest and open. As an instrument and controls engineer of more years than I care to admit to I am only too well aware that absolute accuracy is of minor importance compared to repeatability and the ability to track the changes which are then published as the anomalies. Therefore it is the trends of the anomalies which are significant and to obsess about absolute accuracy is futile.
Truly an excellent explanation however I fear the honesty will be cherry picked and distorted by those who are so skilled in this art.

peat

I am wondering how the satellite instrument channels are able to focus on different layers of the atmosphere. Why don’t the emission signatures from an entire column of atmosphere from the ground up enter the instrument and become mixed up?

Jordan

thanks
it would be good to have an account of how the globe is sampled and how the data series/maps comply with the conditions of the sampling theorem. this is an unconditional requirement to allow a representative reconstruction of the measured system at any time scale.
without this, there is no reason to believe that we have any more than a bunch of aliased nonsense. sorry to be so sceptical – but applies just as much to other series,

David Alan

All weather records, tied or broken, for all dates available (starting on 1/1/2009), for ALL States
(All Records): 113236
(H) High Temperature: 13678
(HM) Highest Minimum Temperature: 16098
(L) Low Temperature: 10883
(LM) Lowest Maximum Temperature: 20151
(R) Rain/Precipitation: 43066
(S) Snow: 9360
Warm (H + HM) Records: 26.3%
Cold (L + LM) Records: 27.4%
Precip. (R + S) Records: 46.3%
I pulled this from:
http://www.extremeweatherrecords.com/Records/default.aspx
Looking back over the years, I don’t think cold records have exceeded high records for some time. Does anyone have any data to determine the last time extreme cold weather records outpaced highs.

“Fortunately, John Christy has spent a lot of time comparing our datasets to radiosonde (weather balloon) datasets, and finds very good long-term agreement.”
Firstly, many thanks to Dr Roy for taking the time to write this clear and concise piece accessible to lay-people.
Secondly, please could he comment on whether John Christy’s comparison of the data to radiosonde balloons finding good long term agreement goes some way to validating the work of Ferenc Miscolczi? One of the criticisms of Miscolczi’s findings was that it relies on analysis of radiosonde data which the R.C. Team claimed were unreliable.

Invariant

Thanks to Dr. Spencer and Dr. Christy – a rock solid piece of work!

pft

An absolute temperature accuracy of 1 deg C is not very good. I will accept that if the relative accuracy is good then this is useful data. However, when anomalies are calculated, is this based on anomalies from satellite data, or is ground data used for years prior to the satellite data, and thus the anomaly from the ground based data must be calculated.
Also, given 50% or more of the surface is covered by cloud, how is this corrected for or is only data for surfaces not covered by cloud used.
The 2 data points for the calibration curve are not very good. For one they measure the temperature of a completely different medium than that on earth. Besides, having a calibration range of 2.7 to 290 deg K when you are measuring really 220-320 K is not ideal. Of course, there may be no way around it. I would think measuring 2 points on the earth that have stable temperatures may be used, say above equatorial waters and Antarctic ice surface where you have ground-based temperatures explicitly for the use of satellite calibrations, and not controlled by the GISS or CRU crowd. This is not calibration in the true sense of the word, but neither method is perfect in this regard, and perhaps both need to be used.
That said, is it really true the satellite data is showing surface temperatures warmer than last year, and why are the temperatures shown negative. What are we calling near surface temperatures anyways, for it to be negative this means above the clouds?.
How often are algorithms, if any, adjusted. We have seen how surface data has been corrupted, so my concern really is what measures are in place to ensure the same does not happen for satellite data.

KeithGuy

Thank you for that clear explanation Dr Spencer and Dr Christy.
Now would someone please explain in simple terms how GISS global temperature data is contrived (Whoops! I mean calculated)?

Rhys Jaggar

A question from a non-specialist reader:
Is the radiation profile of oxygen affected by any compositional changes in the atmosphere, be that soot, ozone etc etc?
Or did you choose this mode of measurement precisely because it WAS so unaffected by minor changes to the atmosphere??

Ryan Stephenson

“This is somewhat analogous to infrared temperature sounders (for instance, the Atmospheric InfraRed Sounder, AIRS, also on Aqua) which measure thermal emission by carbon dioxide in the atmosphere.”
Well those CO2 measurements sound very accurate as an indicator of climate change (not) given that we know there is more CO2 in the atmosphere!
Thus we can dismiss surface stations due to UHI, we can dismiss most of the satellite data since it is measuring CO2 and the amount of CO2 is growing all the time. We have one set of satellite data from the last 10 years that “might” be reliable because it measures oxygen microwave radiation from 450miles up with some vague idea of whether it might be looking at the ocean or looking at the top of mount Everest!
Ever get the feeling that NASA and ESA are bending over backwards to justify their existence? Wouldn’t it have been a better idea to have spent that money on a few well-placed surface monitoring sites? After all, if the Antarctic ice melts it will be at ground level! If ever there was an example of how AGW theory is being used to justify outrageous research expenditure on the wrong things, this was it.

Ryan Stephenson

According to Wikipedia the AMSU-A referred to by Dr Spencer has a maximum measurement sensitivity of 0.25Celsius. Not great if you are trying to measure decadal climate trends of about 0.1Celsius. And remember this is only the instrument sensitivity – not an indicator of the instrument accuracy.
Dr Spencer seems to have overlooked that fact. More propaganda dressed up as science.
Anyway, my son’s got a bit of a fever today. I’ll go and check how bad it is by using a microwave receiver 75km away.

guidoLaMoto

If observational error is +/- 1 deg, then calculated anomalies of < 1 deg are meaningless.

Richard Saumarez

Being a complete non-climatologist, could some explain to me several things?
Why does the mean temperature rise during NH summer? Is this related to a higher land mass in the NH and more sea in the SH? Being naive, I would expect the mean global temperature to remain constant.
Why does the upper atmosphere cool when the lower atmosphere warms?
Im not being critical, I just want to know.

Daniel H

LOL@Mann O Mann’s comment
Thank you Dr. Spencer for the excellent writeup on NASA’s Aqua satellite. I’d be interested in learning more about AIRS and why it is so difficult to detect the CO2 signature from gas columns when clouds are involved. Also, why does ESA’s Envisat seem to be limited to detecting CO2 over land while Aqua’s AIRS does not suffer from this limitation? Thanks again.

Jared

Just remembered a screen cap I made about 6 months ago.
I made a screen cap of NOAA’s July 2009 prediction for this winter and beyond.
http://i292.photobucket.com/albums/mm3/arketebel/NOAAPredictionJulyof2009.jpg
[sarcasm]
No doubt these guys know what it will be like in 2050 or 2100. Look how dead on they were 6 months into the future.
[/sarcasm]

John Simons

Speaking of satellite data
Wow! check out the present anomalies
http://discover.itsc.uah.edu/amsutemps/amsutemps.html
all the low altitude satellite temperature channels are going ballistic and now in record territory, looks like Jan is shaping up to be another seasonal temperature record at least.
Bob Tisdale’s Dec updated shows that temperatures have moved into the same band of record temperatures not seen since the 1998 super El-Nino… bets for a new all time global SST record in Jan anyone?

supercritical

Dr. Spencer, thanks for being so frank and open about the workings of the project. I am sure that it is appreciated by all readers here.
As I understand it, the project is measuring the microwave radiation energy at certain frequencies which free O2 molecules emit, and the energy-levels correlate with temperature rises of the atmosphere.
It seems that the measurements will be directly affected by the absolute number of O2 molecules in the measurement column. So, the readings will vary not only with changes in atmospheric density, but also the relative fraction of O2.
It follows that the raw measurement series could reveal changes in atmospheric density, and hence variations in the total amount of atmosphere per-se ( or at least the O2 proportion)
And as ‘climate’ is really integrated weather ( i.e atmospheric density and temperature change ) over a long time, perhaps climatic changes could more usefully be tracked by looking at how other attributes of the atmosphere are varying, rather than just relying on ‘average temperature’.
Perhaps the raw data could find a second use, by being reanalysed to produce a set of proxy sea-level pressure series. This data could then be compared with the existing surface records, and may provide yet another means of understanding.

Sean Inglis

Another thank you to add to the chorus. This will take more than one reading, and is fascinating in its own right.

Kendra

Off-Topic but Please Help:
I know someone who is in a position of influence – ie will be teaching a new class on the politicization of climate science.
Unfortunately, this person buys the schtick (climategate overblown, evidence stands, etc.)
The person recommended (others as well, but this was specifically mentioned) that I read The Long Thaw by David Archer. Not only is it difficult for me to spend money on what I’m “biased” to see as propaganda, but I need information as soon as possible. The reviews at Amazon were not informative altho there was a negative one, nothing substantive.
If I can do one thing about this, it’s to at least try to affect someone with influence who then keeps the “machine” haha (re MIT debates, etc.) rolling. So this is an important challenge for me, altho the person is brushing me off with the “agree to disagree” meme.
I never even heard of David Archer since I started researching over a year ago.
I probably seem a bit histrionic but the sooner I can stop this, the better – I mean, try to stop this.
This is not the kind of thing for Climate Audit so won’t ask there, and am assuming that Jeff Id’s readers also read here.
So, 1. Does anyone have anything to say about this book and/or David Archer’s arguments. 2. Where else could I ask this?
Thank you for any assistance whatsoever.

Ignots

– Do we know how PRTs are influenced by prolonged cosmic radiation? Handbooks of PRT usage recommend periodical testing of PRTs against spoiling by external influences. How will this be performed in orbit? Do we know how prolonged cosmic radiation influences radiometer antenna elements? Is it possible to test these influences in a laboratory? If cosmic radiation is influencing PRTs and radiometric systems, then it will show only a continuous warming tendency in the history of measurements. Revealing are your words that ‘the newer satellites can be calibrated to the older satellites’ measurements.’ If PRTs experience a continuous shift in quality due to cosmic radiation, then such “calibration” only deepens the error: in output we have a measurement not of “climate warming”, but of degradation of the measurement system due to cosmic radiation. And it is obvious, because the “warming” shift has a very stable tendency, which didn’t fit data obtained by other measures.

1DandyTroll

Actual make sense information, instead of nonsensical, from NASA.
But how does the last part, about the use of radiosonde balloons as reference, stand to the fact that the weathery balloons being criticized for their inaccuracy, pretty much due to ’em being heated by the sun. Of course all that critic was from some warmist, at NASA, who instead thought temperature measurements from wind speed proxy was a more stellar idea. If my memory serves that was in the doc about what quality control, on the data, is being done by NASA.
And didn’t CRU (or was it RC? same same I guess) also criticize the radiosonde balloons for their inaccuracy?
But essentially you guys just measure the escaping, to space, radiation, and not the “green house effect”, back to earth, radiation? Don’t mind the last part it’s probably really silly, measuring the back to earth radiation, duh. Better to postulate the basic and then use devilishly clever interpolation techniques to extrapolate a measurement and go aa-ha-ha.

Butch

While I appreciate the fine tutorial, which for me may raise more questions than it gives answers, I never took Christopher Monckton’s statement to apply to UAH. In fact, in his paper “Climategate: Caught Green-Handed” this statement appears on page 20.
“In future, therefore, the SPPI monthly surface-temperature graphs will exclude the two terrestrial-temperature datasets altogether and will rely solely upon the RSS and UAH satellite datasets.”
I can see that confusion could arise but I took no indictment of UAH from his video presentation.

ShrNfr

@ peat (23:35:58) :
I am wondering how the satellite instrument channels are able to focus on different layers of the atmosphere. Why don’t the emission signatures from an entire column of atmosphere from the ground up enter the instrument and become mixed up?
——-
The O2 absorption complex is a series of lines out to about 70 GHz. Depending on where you look, you see more of one altitude than you do of others. But yes, you see them all to some extent. The “inversion” problem has been around for these instruments and the CO2 sounders forever. There are a bunch of approaches you can take. None of them are perfect, but they give pretty good results. They rely on the statistics of the atmosphere, etc. You can use relaxation methods, Kalman-Bucy filters, and all sorts of other stuff to tease the temperature out at various levels. Of course, the “ground truth” is a radiosonde, and they have their own errors. It can be quite a mess. I did my PhD at MIT in one method of how to get the data out. One of the problems is that some stuff yields a “smooth” field, but that is not what the weather forecasters want for their prediction models.

Martin Brumby

Very interesting.
Some concern here about an accuracy of plus/minus 1 °C.
But this set up takes a huge number of readings.
So is it better or worse than a few thousand individually badly sited surface instruments at extremely non-random locations, with loads of human error built in: when the reading was taken, whether the reader remembered to bring his glasses, whether he had a heavy session the night before, and all the rest of it?
And that’s before the merry UEA and GISS teams start to tinker.
OK. No doubt many surface stations, at least in the US, now have more sophisticated instruments. Properly calibrated? Results from the Surfacestations project don’t exactly inspire my confidence that this is actually the case. And for the older instruments, how many readings are more accurate than plus/minus 1 °C, anyway?
So, whilst I’m not convinced that differences in ambient temperature of 1 °C really matter much in the real world (let alone “trends” of 0.6 °C per century!), I’d put my money on Spencer & Christy rather than Hansen & Jones any day.
Sorry to personalise things but that’s the way I see it.

NickB.

Great write-up – thank you!
I especially liked the last paragraph; it’s good to see that even the most solid methods are still cross-checked against other reliable measurements. Well done!

Chris

Seems like many readers don’t know the difference between relative and absolute readings. Sometimes, knowing the actual value of something (say, temperature) is not important; what matters is the change from x, like x + 0.001 °C. Since satellite data are presented as anomalies, the relative difference is the proper way to go. Accuracy of the absolute number does not matter. Similarly, if you bought $1000 of stock in company X, it really doesn’t matter what the absolute stock price is; what matters is the change between the time you bought it and the time you sold it.
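Chris’s point can be sketched in a few lines of Python (all numbers invented): an anomaly is a departure from a baseline mean, so an absolute calibration offset common to every reading cancels out of the anomalies entirely.

```python
# Made-up absolute readings (°C) for the same calendar month over several years.
readings = [14.8, 15.1, 14.9, 15.4, 15.2]

# Baseline: the mean over a reference period (here, the first three years).
baseline = sum(readings[:3]) / 3

# Anomalies: departures from the baseline. Adding any constant offset to every
# reading (i.e., an absolute calibration error) leaves the anomalies unchanged.
anomalies = [round(r - baseline, 2) for r in readings]
print(anomalies)
```

Shift every reading by, say, +2 °C and the baseline shifts with it, so the anomalies come out identical — which is why relative precision matters more than absolute accuracy for anomaly products.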

phlogiston

Would it be possible to site imaging (microwave, IR, etc.) equipment on the moon, to measure Earth’s atmospheric temperatures? Its orbit presumably decays more slowly than that of a satellite.

pft (01:11:31) :
I would think measuring 2 points on the earth that have stable temperatures could be used, say above equatorial waters and the Antarctic ice surface, where you have ground-based temperatures explicitly for the use of satellite calibration, and not controlled by the GISS or CRU crowd. This is not calibration in the true sense of the word, but neither method is perfect in this regard, and perhaps both need to be used.

Due to the orbit not being perfectly polar, coverage of the Antarctic in the low-altitude channels is not good; also, interference from the surface is worst when that surface is cold ice and the altitude is ~3000 m. RSS doesn’t report TLT data poleward of 70°S for those reasons.

Alan S. Blue

“Sometimes, knowing the actual value of something (say temp) is not important. What’s important is the change from x temp, like x + 0.001 C. Since satellite data is presented as anomalies, then relative difference is the proper way to go.”
Perfectly true, Chris. But if you’re doing studies of the ocean, it is useful to know whether the anomalies — these fractions-of-a-degree differences you’re studying — are occurring around -10 °C or 110 °C. That’s obviously an outrageous example, but it highlights that the absolute measurements can indeed be quite relevant.
It should similarly be relevant here, although much more subtle and complex to quantify. The spectral properties of compounds are obviously temperature dependent, but they are also generally slightly non-linear in their response.

Mike Ramsey

Dr. Spencer,
Perhaps this is OT, but what can you tell us about efforts, either by you, other Aqua team scientists, or perhaps others looking at, say, GPS, to accurately measure specific humidity at altitudes above 850 hPa?
I am thinking about Garth’s 2009 paper, “Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data”.
Thank you,
Mike Ramsey

Thanks Dr Spencer for your tutorial… Very good.
And thanks WUWT for pointing it out and putting it up for us.

Sorry, I still don’t get this.
You calibrate the satellite via a warm and a cold calibration object, and eliminate every variable you know of, to get a reliable temperature reading.
You then fly the bird in space and discover that when you look at a particular target you get an adjusted equivalent temperature of 15 °C, while actual Earthbound sensors read 17 °C.
What are you going to do? Ignore the error? Or adjust the instrument?
Surely, at some point in time, the satellite has to be calibrated against known temperatures on the Earth.

JonesII

Were these satellites calibrated by “consensus”?☺

Ryan Stephenson

“Sometimes, knowing the actual value of something (say temp) is not important. What’s important is the change from x temp, like x+ 0.001 C. Since satellite data is presented as anomalies, then relative difference is the proper way to go.”
Except that this instrument, by NASA’s own admission, cannot read to x + 0.001 °C. Its measurement sensitivity is only 0.25 °C; it’s like a digital thermometer that can only read in increments of 0.25 °C. So any “trend” measured over the last ten years of operation can be discarded as merely measurement noise.
This is before you get into whether the temperature is accurate to better than 0.25 °C for any given reading. Difficult, since the measurement is being made for a given “altitude” over a 45 km radius, and how they define “altitude” is not clear (do they move up and down to account for changes in land elevation above sea level?).
I would say the instrument is about as useful for measuring climate change as the thermometer on my dad’s garden wall. Still, I bet claiming it was a crucial tool for measuring climate change made the whole project easier to fund.
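The digital-thermometer analogy above can be sketched as a toy quantizer (the 0.25 °C step size is taken from the comment; purely illustrative, not a model of how AMSU actually digitizes its signal):

```python
# Sketch of the "digital thermometer" analogy: readings snapped to 0.25 °C steps.
STEP = 0.25

def quantize(temp_c: float) -> float:
    """Round a temperature to the nearest 0.25 °C increment."""
    return round(temp_c / STEP) * STEP

# A small real change can vanish or be exaggerated by the step size:
print(quantize(15.11))  # snaps down to 15.0
print(quantize(15.13))  # a 0.02 °C change snaps up to 15.25
```

Whether such quantization noise actually swamps a trend depends on how many readings are averaged together, which is the crux of the disagreement in this thread.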