Guest Post by Bob Tisdale
It’s been 25 years since Spencer and Christy of the University of Alabama in Huntsville published their 1990 paper Precise Monitoring of Global Temperature Trends from Satellites. The abstract reads (my boldface):
Passive microwave radiometry from satellites provides more precise atmospheric temperature information than that obtained from the relatively sparse distribution of thermometers over the earth’s surface. Accurate global atmospheric temperature estimates are needed for detection of possible greenhouse warming, evaluation of computer models of climate change, and for understanding important factors in the climate system. Analysis of the first 10 years (1979 to 1988) of satellite measurements of lower atmospheric temperature changes reveals a monthly precision of 0.01°C, large temperature variability on time scales from weeks to several years, but no obvious trend for the 10-year period. The warmest years, in descending order, were 1987, 1988, 1983, and 1980. The years 1984, 1985, and 1986 were the coolest.
The finding of “…no obvious trend for the 10-year period” probably didn’t go over too well.
Also see Roy Spencer’s post 25th Anniversary of Global Satellite Temperature Monitoring. There, Roy introduces an interview by Paul Gattis, published at AL.com, titled 7 questions with John Christy and Roy Spencer: Climate change skeptics for 25 years. The interview begins:
The silver anniversary of Roy Spencer’s career-defining moment arrived with no expectation in March. He didn’t realize it until someone mentioned it to him.
For John Christy, he had no idea that a discovery announced in 1990 would not only still resonate 25 years later but would be at the center of a raging debate.
The date was March 29, 1990. That was the day – though unbeknownst to either Christy or Spencer – they publicly became climate change skeptics.
The scientists at the University of Alabama in Huntsville are known throughout the environmental community as being skeptical that climate change (or global warming) will have a catastrophic effect on the earth. The crux of the matter is that their research, using satellite data to measure temperatures in the atmosphere, disagrees with climate models that they say overstate the earth’s warming.
The rest of the interview is here. It’s worth a read. The comments on the thread there contain the usual exchanges between skeptics and the CO2 obsessed.
AND FOR THOSE WONDERING
The current UAH lower troposphere temperature data do show rising temperatures since 1979. See Figure 1, which is from the February 2015 Global Surface (Land+Ocean) and Lower Troposphere Temperature Anomaly & Model-Data Difference Update.
Figure 1
But, with all of the updates and corrections to the UAH TLT data since 1990, do they still show little warming during the period of 1979 to 1988 as noted in the 1990 paper? The answer is yes. As shown in Figure 2, during that period, the UAH TLT data have a very low trend compared to the surface temperature datasets.
Figure 2
Thanks to John and Roy. I’m looking forward to the corrections in the next version.
Anyone can obtain the raw satellite data from NCDC and try their hand at a better satellite product. BUT….it’s many gigabytes, and you have to know what you are doing regarding radiometer calibration and to make adjustments for known demonstrable effects (calibration differences between successive satellites, orbit decay, diurnal drift, etc.). Arguably, only 2 groups with extensive experience with satellite data and instrumentation (UAH and RSS) have been successful, and get substantially the same results, despite some differences in methodology. It’s very easy to get bad results if you don’t know what you are doing.
“It’s very easy to get bad results if you don’t know what you are doing.”
I agree 97 +/- 3%. It’s also not that hard even if you DO KNOW.
Paradoxically, the personal outcome appears somewhat independent of whether you get it right. A poor application of Principal Component Analysis can leave you not recognizing the forest because of that darn Yamal tree, but you may become a star.
Happy Easter to you, John & Roy!
The big problem with Tisdale’s graphs is that he uses only 10 years of data, not nearly enough to get a decent trend. Here are the best fit slopes for Jan. 1979 through Dec. 2014, with 95% confidence intervals estimated as twice the standard error, based on scatter. Data sets can be found at Climate Explorer http://climexp.knmi.nl/selectfield_obs2.cgi?id=someone@somewhere
First the surface T data sets, slopes in K/decade:
BEST_1: 0.164 ± 0.013
BEST_2: 0.148 ± 0.012
HadCRUT: 0.157 ± 0.012
GISS: 0.157 ± 0.012
NOAA: 0.148 ± 0.011
These are all consistent with a surface trend of 0.155 K/decade.
Satellite data:
UAH: 0.139 ± 0.016
RSS: 0.122 ± 0.016
Somewhat lower than the surface T trends, but they are not really measuring the same thing.
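For anyone who wants to reproduce these numbers, here is a minimal sketch of the method described above (ordinary least squares, with the ~95% interval taken as twice the standard error of the slope). This is my own illustration in Python/numpy, not the exact script used, and the synthetic series stands in for a real anomaly file from Climate Explorer:

import numpy as np

def trend_per_decade(anoms):
    # anoms: monthly anomaly series in degrees C
    t = np.arange(len(anoms)) / 120.0          # time in decades
    A = np.vstack([t, np.ones_like(t)]).T
    slope, intercept = np.linalg.lstsq(A, anoms, rcond=None)[0]
    resid = anoms - A @ np.array([slope, intercept])
    # standard error of the slope, based on residual scatter
    se = np.sqrt(resid @ resid / (len(anoms) - 2) / np.sum((t - t.mean())**2))
    return slope, 2 * se                        # K/decade and ~95% CI

# demo with synthetic data shaped like the UAH series (0.14 K/decade + noise)
rng = np.random.default_rng(0)
fake = 0.14 * np.arange(432) / 120.0 + rng.normal(0, 0.2, 432)
print(trend_per_decade(fake))

Note that taking twice the standard error from scatter alone, as above, ignores autocorrelation in the residuals, so the real uncertainties are somewhat wider.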
Recently I was thinking about one thing: warmists claim that the Earth is doomed by burning fossil fuels and releasing CO2. So just count: the weight of the Earth’s atmosphere is 5.15 x 10^18 kg. Reserves of oil are 2.3 x 10^14 kg, reserves of coal are 9.48 x 10^14 kg, and reserves of natural gas are 1.3 x 10^14 kg. That makes a total of 13.08 x 10^14 kg of C equivalent. Carbon is about 27% of the weight of carbon dioxide, which makes the weight of CO2 for all fossil reserves 4.84 x 10^15 kg. This is a fraction of 0.00094, or 0.094%, or roughly 940 ppm. The current content of 400 ppm plus reserves of 940 ppm gives 1340 ppm total. This is the final amount of CO2 available to the air from fossil reserves. We can live with 400 ppm and we can definitely live with 1340 ppm. This level will of course never happen, as man will never release all the carbon reserves, and plants are permanently storing this carbon back into the ground.
So there is a fixed point in the future with a maximum CO2 content, a maximum Earth temperature, and a corresponding climate. We simply cannot cross this point.
Simply because of this, the whole theory of a runaway greenhouse effect induced by Man is wrong.
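The arithmetic above is easy to check; here is a short Python sketch reproducing it (my illustration, using the commenter’s reserve figures; note the result is a fraction by mass, whereas atmospheric CO2 is usually quoted in ppm by volume, which would give a smaller number):

atmosphere_kg = 5.15e18
carbon_kg = 2.3e14 + 9.48e14 + 1.3e14    # oil + coal + gas reserves, kg of C
co2_kg = carbon_kg / 0.27                # carbon is ~27% of CO2 by mass
ppm_by_mass = co2_kg / atmosphere_kg * 1e6
print(carbon_kg, co2_kg, round(ppm_by_mass))   # ~1.308e15, ~4.84e15, ~941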
The observation is that 1/2 of what we are supposedly putting into the atmosphere immediately disappears, so at best we might get to 870 ppm if we really put our hearts into making the world a better place!
Peter: There is considerable uncertainty about what the real reserves actually are, but here is a paper that looks at things out to 2300 if we burn the estimated fossil fuel reserves, and they get a number around 1400 ppm, like you: http://caos.iisc.ernet.in/faculty/gbala/pdf_files/Bala_etal_JCLIM05.pdf They also find it would lead to a very, very different climate.
Yes, they found after tons of calculation that 1400 ppm of CO2 will cause an increase in radiative forcing of 10.5 W/m2, and thus an increase in temperature of 8 K. This is quite logical: if the overall radiative forcing is around 260 W/m2 and the average Earth temperature is 15°C (288 K), an increase of 10.5 W/m2 would mean a temperature increase of around 11 K. This more or less corresponds with the 8 K found in your document, and they state that their temperature is still rising, so let’s assume 11 K. But there is a fundamental problem with radiative forcing: 70% of the Earth is covered with clouds and is excluded from radiative forcing, as light must reach the surface to be transformed into longwave radiation. Otherwise it is radiated back to space (cloud albedo is close to 1). So, again by simple math, only 30% of the Earth’s area is absorbing incoming radiation (not 100%, of course), and the overall effect will be only 30% of 11 K. That is 3.3 K.
I’m fine with that increase.
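For clarity, here is the proportional scaling the comment above is using, as a tiny Python sketch (my reading of the commenter’s arithmetic, not standard climate-sensitivity math; a Stefan-Boltzmann treatment, where temperature scales as the fourth root of forcing, would give roughly a quarter of this):

T_surface = 288.0    # K, about 15 C
F_current = 260.0    # W/m^2 reaching the surface, value used in the comment
dF = 10.5            # W/m^2 forcing at 1400 ppm, from the cited paper
dT_linear = T_surface * dF / F_current        # linear scaling, as above
print(dT_linear, 0.30 * dT_linear)            # ~11.6 K, and ~3.5 K for 30%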
Yes, there are big uncertainties about the real oil reserves, and there are other things, like CH4 released from permafrost or peat deposits, which could eventually be used. But this is compensated by other uncertainties like CO2 fertilization, increasing biomass, and the increasing effectiveness of plants at processing CO2.
In the end these will cancel each other out, and my numbers will not be far from the truth…
Peter: The whole albedo of the Earth is only ~30% and part of that is provided by ice and such, so I don’t see how you get 70%. It is more like 20-25%.
Furthermore, I don’t understand why you think that there being clouds somehow allows you to exclude this fraction of radiative forcing. It is not relevant in the slightest. The forcing is the total additional radiative power imbalance at the top of the atmosphere divided by the surface area of the earth.
Besides, the uncertainty of the TOA measurements is 4 or 5 times larger than the forcing from CO2, so there is no telling what the actual balance is.
Peter: “70% of Earth is covered with clouds and is excluded from radiative forcing, as light must reach surface for transforming to longwave radiation. Otherwise it is radiated back to space (cloud albedo is close to 1)”. So it is pitch black on a cloudy day? Funny how I never noticed that.
I think we are speaking about the same thing: 70% of the Earth’s area is clouds with albedo close to 1, and the remaining 30% is mostly water with albedo close to 0, giving a total albedo around 0.3; you say more like 0.25.
About radiative forcing: the area of the Earth is 4πr², but it receives radiation over a circular cross-section of πr², so the ratio between the insolated disc and the total area is exactly 4. The power of the Sun’s radiation is 1300 W/m2; divided by 4, that is 325 W/m2 per square meter of Earth. 260 W/m2 of that can reach the surface; the rest is reflected to space or absorbed in the atmosphere.
So my calculation of radiative forcing is correct, because I arrive at a well-known number: 260 W/m2.
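That spherical-averaging step is easy to verify; here is a short Python sketch (my illustration, with the same round numbers as the comment):

S = 1300.0                           # W/m^2 at top of atmosphere, value used above
average_insolation = S / 4           # disc area pi*r^2 vs sphere area 4*pi*r^2
absorbed = average_insolation * 0.8  # ~260 W/m^2 if ~20% is lost, as stated
print(average_insolation, absorbed)  # 325.0, 260.0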
And this number represents the average energy falling on the Earth’s surface through a CLEAR sky. But when a cloud is present, the top of the cloud has an albedo close to 1, which means that practically all light energy is reflected back to space: almost all of the sun’s 1300 W/m2 is reflected without being changed to heat, assuming there is little atmosphere remaining above the clouds.
Without sun radiation being changed to heat (the longwave radiation CO2 could trap and reradiate), there is practically no greenhouse-gas effect above the clouds.
Clouds work like insulation, keeping infrared heat below them and reflecting sun radiation from above, preventing it from being changed to infrared heat.
This is why we can count only 30% of the total radiative forcing: the rest is reflected back to space without heating the Earth.
Likewise we can count only 30% of the GHG effect, because the rest is nullified by the cloud effect, which reflects IR radiation back to the surface below and back to space above. The IR wavelength window is closed there.
Mike, try to find on the internet how much power there is in diffuse light when it is overcast. It is somewhere around 50 W/m2, which is much lower than the 1000 W/m2 in direct sun. So only around 5% of the sun’s radiation is available under cloud, which moves the whole picture only a little bit. The eye, by the way, is a very sensitive organ: daylight intensity is 120,000 lux vs. a full moon’s 0.25 lux, 480,000 times less, and you are still able to see.
Joel, the sheer degrees of freedom due to known and unknown variables, over-parameterized data, and infill errors make any proposed climate scenario laughable, and they speak of very poor acumen in the vagaries and limitations of statistics. Modelers who attempt to read the tarot cards, tea leaves, and palm lines of Mother Earth should just admit their snake-oil research, put up a tent, and don a fortuneteller’s turban.
Quoting joeldshore, April 2, 2015 at 5:48 pm:
“Peter: The whole albedo of the Earth is only ~30% and part of that is provided by ice and such, so I don’t see how you get 70%. It is more like 20-25%…”
I think Peter’s 70% cloud cover is a bit high. I believe I have seen a NASA/NOAA figure of 60-65%, which surprised me as being that high.
And if Peter is using “albedo” as a synonym for “reflectance”, then it is nowhere near one for clouds; only a few percent in fact (at visible wavelengths).
Water droplets strongly transmit and refract visible light, converting it from a nearly collimated (0.5 deg divergence) beam into a strongly focused broad beam of the order of a radian (in a single refraction). This rapidly becomes a totally isotropic flux distribution in just a few sequential refractions, so no more than 50% could be returned to space.
It’s a bit more complicated with ice crystals instead of droplets, but Peter is way off with his 100% cloud albedo (or reflectance).
And I also think total ice albedo is way overblown too. Fresh snow decays in a few hours in terms of reflectance (which is also scattering) because of surface melting, and that results in TIR trapping, so snow/ice reflectance of solar visible wavelengths very quickly becomes similar to grass. Some of the near-IR part of the solar spectrum gets strongly absorbed in both clouds and snow/ice.
g
“…no obvious trend for the 10-year period”, probably didn’t go over too well.
Indeed it didn’t, mainly because the results were wrong! Subsequent papers by about three groups (including Mears et al., which resulted in RSS) showed different results and led to errors being identified and corrected by S&C, including a significant contribution from the stratosphere (which ultimately led to the development of the composite product TLT) and orbital decay (a 0.10 correction). The diurnal correction error led to a 40% correction in the trend (0.035).
Phil, as Bob Tisdale pointed out in the original post, even the latest versions of the RSS and UAH datasets have trends for 1979-1988 which are not statistically different from zero. We both correct for all known errors in the data, and the RSS dataset is more cited than ours because it shows the longest period of no warming.
And you conveniently left out the MSU instrument body temperature effect, which mostly offset the orbital decay effect on LT. Maybe because you didn’t know about it, since we didn’t make a huge deal out of it like the alarmists did with orbit decay, which has been central to every alarmist’s website talking points on the subject.
Roy,
You seem to imply that the corrections did not alter the trends that much. I don’t think that is correct. When I looked at it several years back (using whatever version of the UAH was available at that time), I found the following:
* Your 1998 paper said that, prior to the update in the analysis that was presented in that paper, the trend for January 1979-April 1997 was -0.076 C/decade.
* I found that the trend in the “current version” (at the time I did the analysis, early 2009) for that same period was +0.029 C / decade.
* The trend in the “current version” (at the time I did the analysis, early 2009) for the entire period of data through Dec. 2008 was +0.127 C / decade.
So, to summarize: The best estimate of the trend had changed by +0.203 C/decade. Of that, 0.105 C/decade was due to changes in the analysis and 0.099 C/decade was due to having a longer data set. Or, in other words, the very substantial change in the trend was basically due half to changes in the analysis and half to having a longer data set.
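The decomposition is simple arithmetic; a quick check using the numbers quoted above (Python, my own bookkeeping):

trend_1998_paper = -0.076   # C/decade, Jan 1979 - Apr 1997, old analysis
trend_2009_same  = +0.029   # same period, early-2009 version
trend_2009_full  = +0.127   # Jan 1979 - Dec 2008, early-2009 version
from_analysis  = trend_2009_same - trend_1998_paper   # +0.105
from_more_data = trend_2009_full - trend_2009_same    # +0.098 (0.099 as quoted)
print(from_analysis, from_more_data, from_analysis + from_more_data)  # ~ +0.203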
I am not sure how much this may have changed if you repeated this analysis that I did 6 years ago, but my impression is that any updates that you have made since then have had only a small effect on the trends.
By the way, this is the paper I am referring to: http://www.homogenisation.org/files/private/WG1/Bibliography/Applications/Applications%20(C-E)/CHRISTY_etal_1998.pdf
The relevant sentence is in the conclusions: “The combination of these changes causes the 18+ year trend of T_2LT to be warmer by +0.03 C/decade (-0.076 to -0.046 C/decade for January 1979–April 1997).”
Roy, I was pointing out that the initial analysis had errors in it which were eventually corrected by you as a result of critiques from other researchers, e.g. Wentz & Schabel, who reported that correcting for orbital decay changed the trend to +0.07.
Of course if you reduce the period enough then the trend will not be significant. I appreciate you don’t like publicizing your errors, but in the interests of transparency that should be done; you were very reluctant to admit the possibility that your original results could be wrong. Perhaps you should have a readily available page which itemizes all the corrections and changes made in your analysis?
For about ten years you misapplied a correction by subtracting it instead of adding it (pointed out by Wentz). As a result you made the claim: “The net global effect of these revisions (version D) is small, having little impact on the year-to-year anomalies.”, which was incorrect.
Later you said:
“An artifact of the diurnal correction applied to LT has been discovered by Carl Mears and Frank Wentz (Remote Sensing Systems) … The new global trend from Dec 1978 to July 2005 is +0.123 C/decade, or +0.035 C/decade warmer than v5.1”.
Hats off and a standing ovation for Spencer and Christy!
Dennis Wingo (upthread at 8:51 am) provided the perfect honorific:
He stuck to his science, which is the highest compliment that you can make to a scientist.
Spencer and Christy,
To honor you two most effectively and most enjoyably then you two need to be in a pub with me (and many others who comment here) !
John
Regression analysis of RSS from Jan 1979 to Dec 1997 does not show any warming trend. I got -0.002 C / decade.
Of course we all know about the 18-year pause from 1997 to 2014. It appears the warming trend in 1979-2014 is due to the 1998 super El Nino.
Yes. No surprise from regression analysis, with and without including El Nino.
So if you extend the trend line out for 10 decades (100 yrs), it results in between 0.5 C (UAH warming rate) and 0.78 C (RSS warming rate). What’s the big fuss all about?
Isn’t that what the warmists are essentially doing – extending their trend lines out 100 years?
What the modelers do is extend the trend lines from the different models, not from actual temperature (surface or satellite) measurements as shown in the graph you posted above. The models are failing.
Here are the numbers.
1. We have reliable data for atmospheric concentrations of CO2 (CCO2) from Mauna Loa starting in 1959. There is an almost linear increase in yearly CO2 concentrations until now. By “almost” I simply mean the increase was a little bit slower from 1959 to about 1972, but not by much. Graphs of such data have been widely presented and a yearly increase in CO2 is not disputed.
2. Take the surface temperature anomalies (T) of any data set from 1959, for example HadCRUT4, and plot the data against CCO2. Basically you will see a correlation between the increase in T and the increase in CCO2 only between about 1977 and 2000 (give or take), and this is statistically significant. Between 1959 and 1977 there is an actual slight decrease in T while CCO2 increased (I don’t know if this trend is statistically significant). Then between 2000 (give or take a few years, and the El Nino problem) and now, CCO2 continued to increase but T no longer did.
Again, a number of graphs have been presented showing that there is no longer a correlation between CCO2 and T.
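Checking this sub-period correlation is straightforward; here is a Python sketch (my own illustration, with made-up series shaped like the description; real HadCRUT4 and Mauna Loa files would be substituted in practice):

import numpy as np

def corr_over(years, t_anom, cco2, lo, hi):
    # Pearson correlation of T vs CO2 restricted to the years [lo, hi]
    m = (years >= lo) & (years <= hi)
    return np.corrcoef(t_anom[m], cco2[m])[0, 1]

years = np.arange(1959, 2015)
cco2 = 315 + 1.5 * (years - 1959)               # steadily rising CO2
rng = np.random.default_rng(0)
t_anom = np.where(years < 1977, 0.0,
         np.where(years <= 2000, 0.015 * (years - 1977), 0.35))
t_anom = t_anom + rng.normal(0, 0.05, len(years))
print(corr_over(years, t_anom, cco2, 1977, 2000))   # strong correlation
print(corr_over(years, t_anom, cco2, 2001, 2014))   # correlation breaks down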
So, the claim that CO2 is responsible for temperature increase and we must stop using fossil fuels is now in question.
So the modelers are now at an impasse. They must start adjusting the models to fit the actual data.
They are in even more difficulty if, instead of using surface temperature data, they use satellite measurements, as shown by the graph you posted above.
No…It gives a warming 1.2 C to 1.4 C (for the lower troposphere)….Not sure how you got your numbers. I get mine by multiplying the trend in C/decade by 10.
And, no, predictions of warming are not based simply on linear extrapolation of the trend for 100 years.
Yes, I just multiplied by 10 to get 0.5 to 0.78; it’s as simple as that. What did you multiply by to get 1.2-1.4 C???
Sorry, but anyone who extrapolates linearly outside the range of data in a chaotic, cyclic system is a mathematical ignoramus.
I used the trend for UAH and RSS for the entire satellite record that is shown in Figure 1 of this post, i.e., 0.139 and 0.122 C per decade, respectively.
joeldshore commented
Curious, why would you take a 30-40 year period that clearly has a step function as its main “feature”, linearize it, extrapolate it out for 100 years, and then imply it means anything?
micro6500 April 3, 2015 at 8:23 am
Why don’t you ask J. Philip Peterson, it was he who’s making the extrapolation?
He did, fair enough. But Joel did counter with the same.
micro6500 April 3, 2015 at 10:08 am
He did, fair enough. But Joel did counter with the same.
No, Joel queried the values used and also stated that “predictions of warming are not based simply on linear extrapolation of the trend for 100 years.”
micro6500: It doesn’t “clearly have a step function” in it. One can produce artificial data with a perfectly linear trend + random noise and find such supposed step functions.
The reason we look for a linear trend line is that the data isn’t good enough to support fits with more parameters, so the best one can do is pull out a linear trend.
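Joel’s point about spurious steps is easy to demonstrate; here is a Python sketch (my own illustration with hypothetical parameters: a perfectly linear trend plus noise, to which a best-fit “step” is fitted anyway):

import numpy as np

rng = np.random.default_rng(1)
n = 432                                  # 36 years of monthly data
t = np.arange(n) / 120.0                 # time in decades
y = 0.14 * t + rng.normal(0, 0.2, n)     # pure linear trend + random noise

def best_step(y):
    # scan breakpoints; pick the one minimizing within-segment variance
    n = len(y)
    k = min(range(24, n - 24),
            key=lambda i: np.var(y[:i]) * i + np.var(y[i:]) * (n - i))
    return k, y[k:].mean() - y[:k].mean()

k, jump = best_step(y)
print(f"apparent step of {jump:.2f} C at month {k}")   # yet no step exists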
joeldshore commented
I think we actually do have enough surface station data (daily min/max temp) from, say, 1950 onward to tell a lot more about what is going on than we get by comparing a global temperature average.
You can look at the daily rate of change, the annual average of change, yesterday’s rising temps vs. last night’s falling temps. If you calculate a slope of each year’s daily rate of change, and then plot that for each year since 1950, it shows a curve (that peaked about the same time temps peaked), much cooler than a straight line.
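Here is one reading of that calculation as a Python/pandas sketch (my interpretation only; tmax is a hypothetical daily max-temperature series indexed by date, e.g. loaded from a station file):

import numpy as np
import pandas as pd

def yearly_rate_of_change_slopes(tmax: pd.Series) -> pd.Series:
    # day-to-day temperature change, then a fitted slope within each year
    diffs = tmax.diff().dropna()
    def slope(x):
        return np.polyfit(np.arange(len(x)), x.values, 1)[0]
    return diffs.groupby(diffs.index.year).apply(slope)

# plotting yearly_rate_of_change_slopes(tmax) against year would then show
# whether those yearly slopes trace a curve rather than a straight line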
I wish our two satellite stars had pointed out that the much-ballyhooed “97%” number came from an entirely bogus “study” that has been roundly debunked and discredited.
/Mr Lynn
I was just saying that if you extend the temperature trend lines out 100 years you get 0.5 to 0.78 degrees C. I didn’t say that this is correct or that it is my prediction. I don’t like being called an “ignoramus”, or a denier for that matter… It could be +3 degrees C or -3 degrees C, or maybe more each way; we don’t know. And the so-called climate scientists really don’t know…
[snip wildly off topic, irrelevant, and highly likely just another sock puppet of Doug Cotton posting from NSW -Anthony]
I suspect that the WUWT Cotton Filter may have sprouted a leak.
Thanks Joel, on it.
Mods, the post by Questioner is by the internet pest D O U G C. He has been infesting Roy’s blog with hundreds of posts with his usual nonsense and has stalked Roy here.
“A question for the skeptics here. Is there anything, any information, that could cause you to change your mind?”
But I DID change my mind.
I was not born skeptic !
What would have made me KEEP my mind:
– accelerated rising seas
– accelerated rising temps
– undisputed temp measures
– hotspot
– water vapor feedback and steady relative humidity
– arctic and antarctic meltdown
– disappearing snow at my latitude
– tropical diseases at my latitude
– undisputed hockey stick graph (with the CO2-T time lag in the right direction!)
…
Failing to see any such evidence (except, I would say, the Arctic), I DID change already!
What’s your point?
“Passive microwave radiometry from satellites provides more precise atmospheric temperature information than that obtained from the relatively sparse distribution of thermometers over the earth’s surface”.
I know this claim by Dr Spencer is now dated (1990). But is there any scientific literature to back this claim up? Dr Carl Spears of RSS says the opposite in his blog, he trusts the surface measurements more than the free air measurements.
Dr Carl MEARS – sorry.
Well there likely is a good deal of scientific literature to support Dr. Spencer’s assertion.
Central to that literature would be the general theory of sampled-data systems, which governs all modern high-capacity digital communications, such as the links that carry this internet traffic.
The Nyquist sampling theorem that is central to sampled-data theory would say that any earthbound global temperature measurement system is simply garbage, as the spatial sampling regimen is orders of magnitude short of what it needs to be. And the temporal sampling, based on a daily min/max record taken at quite uncontrolled time epochs, is also deficient.
What satellite systems allow Christy and Spencer, as well as RSS, to do that earth-based systems do not is at least sample (by scanning) a near-complete global surface, so that one can at least claim that their readings are valid data.
They may be subject to instrumentation errors (what experimental systems are not?), and possibly even calibration errors (what is your better-calibrated system; a thermistor?). But in the absence of a correct sampling regimen, the data is garbage anyway.
So I’ll bet on the satellites.
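The temporal-sampling point is easy to see in miniature; here is a toy Python sketch (my own illustration with made-up numbers) of how sampling a fixed diurnal cycle at a slowly drifting observation time manufactures a spurious trend, which is essentially the diurnal-drift problem the satellite teams must correct for:

import numpy as np

def diurnal_temp(hour):
    # toy diurnal cycle: 15 C mean, 5 C amplitude, peak in mid-afternoon
    return 15 + 5 * np.sin(2 * np.pi * (hour - 9) / 24)

days = np.arange(365)
obs_hour = 14.0 + days * 0.05           # nominal 2 pm reading drifting ~3 min/day
readings = diurnal_temp(obs_hour % 24)
print(readings[:30].mean(), readings[-30:].mean())   # ~19.9 vs ~13: fake cooling

The underlying climate in this toy never changes; only the sampling time does.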
Any references to published scientific literature then?
According to his Wikipedia entry, Spencer advocates intelligent design as a reasonable alternative to evolution. Sorry, but this is someone who does not understand science, and it makes it difficult to take any of his findings seriously. His findings may be absolutely correct, but it damages their credibility.