Why Does NASA GISS Oppose Satellites?

A Modest Proposal For A Better Data Set

Reposted from Warren Meyer’s website: Climate Skeptic.

One of the ironies of climate science is that perhaps the most prominent opponent of satellite measurement of global temperature is James Hansen, head of … wait for it … the Goddard Institute for Space Studies at NASA!  As odd as it may seem, while we have updated our technology for measuring atmospheric components like CO2, and have switched from surface measurement to satellites to monitor sea ice, Hansen and his crew at the space agency are fighting a rearguard action to defend surface temperature measurement against the intrusion of space technology.

For those new to the topic, the ability to measure global temperatures by satellite has only existed since about 1979, and is admittedly still being refined and made more accurate.  However, it has a number of substantial advantages over surface temperature measurement:

  • It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
  • It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
  • It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.

Anthony Watts has done a fabulous job of documenting the issues with the surface temperature measurement network in the US, which one must remember is the best in the world.  Here is an example of the problems in the network.  Another problem, one Mr. Hansen and his crew are particularly guilty of, is making a number of poorly documented adjustments to historical temperature data in the laboratory, adjustments that have the effect of increasing apparent warming.  These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.

What really got me thinking about this topic was this post by John Goetz the other day taking us step by step through the GISS methodology for “adjusting” historical temperature records.  (By the way, this third-party verification of Mr. Hansen’s methodology is only possible because pressure from folks like Steve McIntyre forced NASA to finally release their methodology for others to critique.)

There is no good way to excerpt the post, except to say that when it’s done, one is left with a strong sense that the net result is not really meaningful in any way.  Sure, each step in the process might have some sort of logic behind it, but the end result is such a mess that it’s impossible to believe the resulting data have any relevance to any physical reality.  I argued the same thing here with this Tucson example.

Satellites do have disadvantages, though I think these are minor compared to their advantages.  (Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.  This is also consistent with the fact that Mr. Hansen’s historical adjustments tend to be opposite what most would intuit, adding to rather than offsetting urban biases.)  Satellite disadvantages include:

  • They take readings of individual locations fewer times in a day than a surface temperature station might, but since most surface temperature records only use two temperatures a day (the high and low, which are averaged), this is mitigated somewhat.
  • They are less robust — a single failure in a satellite can prevent measuring the entire globe, where a single point failure in the surface temperature network is nearly meaningless.
  • We have less history in using these records, so there may be problems we don’t know about yet.
  • We only have history back to 1979, so it’s not useful for very long-term trend analysis.

This last point I want to address.  As I mentioned above, almost every climate variable we measure has a technological discontinuity in it.  Even temperature measurement has one between thermometers and more modern electronic sensors.  As an example, below is a NOAA chart on CO2 that shows such a data source splice:

[Figure: NOAA chart of atmospheric carbon dioxide showing the splice between data sources]

I have zero influence in the climate field, but I would nevertheless propose that we begin to make the same kind of data source splice with temperature.  It is as pointless to continue relying on surface temperature measurements as our primary metric of global warming as it would be to rely on ship observations for sea ice extent.

Here is the data set I have begun to use (Download crut3_uah_splice.xls).  It is a splice of the Hadley CRUT3 historical database with the UAH satellite database of historical temperature anomalies.  Because the two use different base periods to zero out their anomalies, I had to reset the UAH anomaly to match CRUT3.  I used the first 60 months of UAH data and set the UAH average anomaly for this period equal to the CRUT3 average for the same period.  This added exactly 0.1C to each UAH anomaly.  The result is shown below (click for larger view).

[Figure: Spliced CRUT3/UAH temperature anomaly series]

Below is the detail of the 60-month period where the two data sets were normalized and the splice occurs.  The normalization turned out to be a simple addition of 0.1C to the entire UAH anomaly data set.  By visual inspection, the splice looks pretty good.

[Figure: Detail of the 60-month normalization period where the splice occurs]

One always needs to be careful when splicing two data sets together.  In fact, in the climate field I have warned of the problem of finding an inflection point in the data right at a data source splice.  But in this case, I think the splice is clean and reasonable, and consistent in philosophy with, say, the splice in historical CO2 data sources.
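For readers who prefer code to a spreadsheet, here is a minimal sketch of the normalization and splice described above, written in Python. The file names, column name, and CSV layout are assumptions made purely for illustration; the actual work was done in the linked crut3_uah_splice.xls.

```python
# Sketch of the CRUT3/UAH splice described in the post (file and column names assumed).
# crut3: monthly anomalies relative to the Hadley base period.
# uah:   monthly anomalies relative to the UAH 1979-1998 base period.
import pandas as pd

crut3 = pd.read_csv("crut3_monthly.csv", parse_dates=["date"], index_col="date")["anomaly"]
uah = pd.read_csv("uah_monthly.csv", parse_dates=["date"], index_col="date")["anomaly"]

# Match the two series over the first 60 months of satellite data, as described above.
overlap = uah.index[:60]
offset = crut3.loc[overlap].mean() - uah.loc[overlap].mean()   # the post reports +0.1C
uah_adjusted = uah + offset

# Splice: CRUT3 before the satellite era, offset UAH from 1979 onward.
splice_start = uah.index[0]
combined = pd.concat([crut3[crut3.index < splice_start], uah_adjusted])
print(f"Offset applied to UAH: {offset:+.2f} C")
```

On the data described in the post the offset works out to a flat +0.1C, which is why the splice reduces to a simple addition across the whole UAH series.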

79 Comments
MarkW
July 20, 2008 6:20 pm

While JohnV did limit his analysis to stations receiving a class 1 or 2 rating, he made no attempt to control for UHI influences.

July 20, 2008 6:35 pm

counters said:

“Remember, just as proponents need temperature records to prime their models and test out the intricacies of their theory, skeptics need accurate temperature records as well, because there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety.”

See what he’s doing there? counters is turning the Scientific Method on its head by assuming that the AGW hypothesis must be true until/unless “the basic physics behind AGW” are falsified. But “basic physics” has long since withstood the peer review/falsification process. It is AGW/climate disaster which is the unproven hypothetical. In other words, hypothesizing AGW disaster is the same as asking any speculative “What if…?” question.
There is no proof of AGW leading to planetary catastrophe [and make no mistake, the stated hypothesis is that AGW will lead to runaway global warming/climate disaster. If AGW were only a hypothesis of a very small fraction of a degree change, which is probably the case, then AGW would only be a small and unimportant footnote in an obtuse technical journal].
In fact, it is AGW/planetary disaster that has been put forth as a new hypothesis. Therefore, those hypothesizing anthropogenic global warming leading to a planetary climate catastrophe have the burden of proof. Skeptical scientists have no duty whatever to falsify the status quo: natural climate change. The current climate is well within historical norms, and screaming “But what if…!” proves nothing.
These word games indicate desperation. After all is said and done, the climate is cooling, not warming. And the real-world record proves that CO2 has no measurable effect.

Mike C
July 20, 2008 7:03 pm

MarkW (18:20:51) :
“While JohnV did limit his analysis to stations receiving a class 1 or 2 rating, he made no attempt to control for UHI influences.”
Not correct, Mark; that was the first thing Kristen corrected him on. He demonstrated some embarrassment about it as well.

John McLondon
July 20, 2008 7:26 pm

On his web page http://www.uah.edu/News/climatebackground.php Christy made the following comment, and I quote:
“”In areas where you have high resolution, well- maintained scientific collection of temperature data, the satellites and the surface data show a high degree of agreement,” said Christy. “Over North America, Europe, Russia, China and Australia, the agreement is basically one-to-one.””
When Christy himself commented that the satellite and surface station data show a high degree of agreement, it seems difficult to make an effective claim that surface stations have major problems.
It also says:
“”Global” surface thermometer networks show a warming trend of approximately 1.7 degrees Celsius per century — about 3° Fahrenheit.
The satellite data show a warming trend of 1.4 C or about 2.52° F per century.” and this difference is explained: “A recent analysis of the surface and satellite datasets hints that the apparent disagreement might have as much to do with coverage as with differing trends at different altitudes.”

Evan Jones
Editor
July 20, 2008 7:28 pm

Ooooh.
So IceAnomaly is our old friend, Lee.
Well, well, well. That correlates.
On the point discussed, would it not be possible for UAH or RSS to interpolate polar data from a full swath of surrounding data more accurately than GISS does from its spotty surface coverage?
And aren’t the polar areas not covered by satellite around 1% of the surface or something?

paminator
July 20, 2008 7:31 pm

Paul Linsay- Well said.
Here is why I trust the MSU satellite data over any other temperature dataset:
-Satellite data provides better global coverage than any other method in use.
-Satellite data has the most transparent error analysis and self-correction procedures in place of any temperature dataset. UAH and RSS personnel actually cooperate to find and correct errors.
-The satellite historical record does not get re-adjusted every month a new data point is added.
-Metadata for satellite measurement equipment is available. Metadata for surface temperature and sea temperature datasets is an appalling mess.
-Satellite-mounted microwave radiosondes are highly reliable, accurate instruments, as compared with canvas buckets, rotted out Stevenson screens and badly sited MMTS units.
I think it would be very educational if someone could track down the daily (or better yet, hourly) temperature readings from a surface station site over the last ten years, to get some perspective on how small a trend in temperature change is trying to be coaxed out of the huge temperature variations that occur.
Of course, if you really don’t like satellite or surface station data, you could try inferring temperature from wind gradients measured using radiosondes. Or try inferring temperature from recent tree ring patterns. Or like Hansen, inferring arctic surface temperature *measurements* using GCM output! IIRC these approaches claim similar error bars to those for the MSU dataset. Isn’t climate science wonderful?

Evan Jones
Editor
July 20, 2008 7:43 pm

“Perhaps the reason that the continental USA temperature record shows minimal to no global warming trend is because they are the ones that can’t measure temperature properly.”
That would seem unlikely. The great majority of the biases are to the warm side and occurred over time as well-sited stations were overtaken by urban, suburban, and exurban creep, thus exaggerating not only the temperatures, but the trends.
And a huge number of CRN4 violations have occurred from the 1980s to date, as better-sited Stevenson screens were replaced by MMTS units located right next to buildings on account of cable issues.

steven mosher
July 20, 2008 7:47 pm

The proper way to do the splice is to normalize the HadCRU data to the UAH anomaly period (average HadCRU over 1979 to 1998 and subtract that average from the whole series).
But then I am not at all sure one can splice these two records, as they measure different things.
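In code, the re-baselining mosher describes would look roughly like this minimal sketch (file and column names are illustrative assumptions, not anything mosher specified):

```python
# Sketch of the alternative normalization: put HadCRU on the UAH 1979-1998 base period
# instead of shifting the UAH series. File and column names are hypothetical.
import pandas as pd

hadcru = pd.read_csv("crut3_monthly.csv", parse_dates=["date"], index_col="date")["anomaly"]
base_mean = hadcru.loc["1979-01":"1998-12"].mean()   # mean over the UAH base period
hadcru_rebased = hadcru - base_mean                   # subtract it from the entire series
```

Either way the shift is a single constant; the two approaches differ only in which series gets moved and which base period the combined anomalies end up referenced to.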

Mike C
July 20, 2008 8:21 pm

John McLondon says
“When Christy himself commented that the satellite and surface station data show a high degree of agreement, it seems difficult to make an effective claim that surface stations have major problems.”
John,
The 0.3 degree Celsius difference between the satellite and surface record is pretty big considering global warming this century is about 0.7 degrees C (HadCRUT). Now take a look at Hansen’s or NCDC data; they are warmer than HadCRUT. The difference is even bigger. I’d say the surface stations are warmer for a reason. Barbecue ribs, anyone?

Mike Bryant
July 20, 2008 8:40 pm

I propose a new greeting for family and friends,
“Have you noticed that global warming stopped?”, or
“Have you noticed that the weather is getting cooler?”
Maybe someone can come up with a catch phrase that will catch on like:
“Have a nice day” 🙂

John McLondon
July 20, 2008 9:15 pm

Mike C.,
Yes, absolutely. But I pointed that out to bring up the explanation they gave: “A recent analysis of the surface and satellite datasets hints that the apparent disagreement might have as much to do with coverage as with differing trends at different altitudes.”
If differing trends at different altitudes are the cause of such a difference, then it is difficult to pin that on the surface stations. The coverage may be a different story; I do not know enough about that to comment on whether it would bias toward higher or lower temperatures.
My main comment was on Christy’s comment that satellite and surface station measurements (at least for the U.S., China, Europe, etc) of temperature are very close.

John McLondon
July 20, 2008 9:18 pm

In any case we are looking at trends and anomalies, so absolute numbers may not be that important.
REPLY: Oh sure they are; remember that an “absolute number” from a weather station or weather station network is used every time we get:
1) A newspaper article saying “New record high in Podunk, USA today sure sign of global warming”
2) A press release from NOAA, or GISS, or HadCRUT that says “Xth warmest year on record”. That’s done from a combination of absolute numbers.
3) A TV station does a story on the “heat wave” and cites temperatures all around the city, without caring where those temperatures were measured (rooftop, parking lot, downtown, bank sign, etc.), telling the public only the numbers, not the accuracy. They are only interested in the absolute highest numbers when this happens, and I speak from experience. See this article from TV meteorologist Brian Sussman on that issue.
Absolute numbers are a big deal to the public and the press, don’t let yourself believe otherwise. -Anthony

Evan Jones
Editor
July 20, 2008 9:19 pm

Mike C: 3°F, only 1.7°C. But that’s around a third or so of the smoothed increase since 1979, so it’s quite significant.

Evan Jones
Editor
July 20, 2008 9:20 pm

Divide that by 10! 0.3. 0.17!

IceAnomaly
July 20, 2008 9:22 pm

MikeC,
Those differences are because the satellites, HadCRUT, and GISS use different baseline reference periods when they compute anomalies. They are on different scales.

Evan Jones
Editor
July 20, 2008 9:22 pm

That difference is c. .05 per decade, or half a degree per century. Not small potatoes.

Richard
July 20, 2008 9:22 pm

“But then I am not at all sure one can splice these two records as they measure different things”
That didn’t stop Mann et al. from coming up with the “hockey stick”!

Editor
July 20, 2008 9:44 pm

Philip_B (17:26:31) :
“Bob B, no proof is required. It is elementary statistics. As sample size increases, the increase in precision declines.
The difference in precision between 10 and 100 sites is significant, the difference between 100 and 1,000 sites isn’t significant.”
Okay, by this logic I can take 100 elevations around the United States and draw a precise topographic map. Clearly this logic is bogus.
1) I think you are referring to things like polling voters with a truly random sampling algorithm. It takes surprisingly few samples (like 100 or so) to come up with an accurate result. For something where the result has more data points, e.g. so that models can handle convection or that your map includes mountain ranges, then you will need many more samples.
2) Precision is merely the “repeatability” of a measurement. Accuracy refers to how close to the “truth” a measurement is. (An accurate measurement implies precision.) Precision is nice, accuracy is better.
I was going to apologize for not providing links to support my comments, but I’ll just recycle your “no proof is required” assertion.
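For what it’s worth, the “elementary statistics” being argued over here is the standard-error rule for an idealized independent random sample, where the uncertainty of the mean shrinks as one over the square root of the sample size. A quick illustration (the 2C spread of individual readings is an assumption chosen only for concreteness):

```python
# Standard error of a mean for an idealized independent random sample.
# The 1/sqrt(n) scaling is the whole point; it says nothing about spatial coverage,
# correlation between nearby stations, or siting bias.
import math

sigma = 2.0  # assumed standard deviation of individual readings, degrees C
for n in (10, 100, 1000):
    print(f"n = {n:4d}   standard error = {sigma / math.sqrt(n):.3f} C")
# Each factor of 10 in station count buys the same ~3.2x reduction, but the absolute
# improvement from 100 to 1,000 stations is much smaller than from 10 to 100.
```

Whether that idealization applies to an unevenly sited, spatially correlated station network is exactly what this exchange is about.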

Mike C
July 20, 2008 10:06 pm

John McLondon,
Please allow me to patiently address your repeating of the “maybes” “possiblys” “could bes” and “might bes”.
Okay, here I am looking at a temperature station next to a barbecue. My own two eyes are telling me there is a barbecue there. There are no maybes, possiblys, could bes or might bes about it. It is definitely there. Let’s compare this to a press story from AGU where a scientist is speculating. Okay, hmmmmm, where should the evidence take me at this time? Oh, yes, maybe my eyes are being bought off by ExxonMobil BWAAAAAAHAHAHAHA ::::Koff:::: pardon me. Anyways, if about half of the temperature increase is due to human error (and I have to side with the balloon and satellite data because I doubt there was a kegger going on in flight control where a frozen alcoholic beverage was spilled on the sensor)… and the PDO is ready to shift, that means the human signal riding on the natural climate signal cannot be more than 0.15 degrees per century, assuming that the other ocean circulations which are in or coming out of warm phases are not adding natural warming to the climate system at this time.
IceAnomaly … or Lee… or whatever your name is, I’ve already run the numbers myself with the corrected baselines; they are between 0.2 and 0.4 with GISS being the warmest, several different smoothing methods and etc. No doubt about it, the balloons and satellites are close together, with the surface temps being the warmer.
And by the way, both of you are invited to the barbecue at the temperature station when this is all over, you can drink it off.

Manfred
July 20, 2008 11:08 pm

I found this study interesting; it shows a significant correlation between ground-measured temperature increases and socio-economic factors like population growth, GNP, average income, etc.
http://www.uoguelph.ca/~rmckitri/research/jgr07/M&M.JGRDec07.pdf
The conclusion is that human effects on temperatures are not adequately corrected for; the authors conclude that the measured-and-adjusted global temperature increase over land in 1980-2002 was too high by a factor of approximately 2.
-> So the temperature “adjustments” in developing countries could be a much more serious problem than elsewhere.
With the rapid third-world development in the last couple of years, this may have worsened significantly after 2002.

mondo
July 20, 2008 11:44 pm

Re GISS adjustments, I would be interested to know the distribution of adjustments, i.e., whether they are positive or negative. My understanding (as a layman, I acknowledge) of adjustment processes is that, if fairly done, they tend to cancel each other out, and the resultant curve isn’t all that much different from the starting curve, but the confidence levels are improved.
A statistician that I was talking to last night suggested that if nearly all of the “adjustments” are in one direction, then that is a signal that the adjuster is introducing bias. I wonder if anybody has been able to analyse the GISS adjustments in this way?
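A simple version of the check mondo’s statistician describes is a sign test: if the adjustment process is unbiased, positive and negative adjustments should be roughly equally likely. A minimal sketch, assuming one already has per-station net adjustments in a file (the file name, column name, and use of scipy are all assumptions for illustration):

```python
# Sign test on per-station adjustments. The file and column names are hypothetical;
# nothing here is an actual GISS or NOAA data product.
import pandas as pd
from scipy.stats import binomtest

adjustments = pd.read_csv("station_adjustments.csv")["adjustment_c"]
positive = int((adjustments > 0).sum())
nonzero = int((adjustments != 0).sum())
result = binomtest(positive, nonzero, p=0.5)  # null hypothesis: + and - equally likely
print(f"{positive} of {nonzero} nonzero adjustments are positive (p = {result.pvalue:.3g})")
```

A lopsided count with a tiny p-value would not prove the adjustments are wrong, only that they are systematically one-sided, which is the signal the statistician was describing.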

fred
July 21, 2008 12:10 am

“there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety.”

This is a commonly made and totally invalid argument. AGW is not a matter of physics. It is worth explaining as people so often misunderstand it.
What is physics is that in a theoretical atmosphere consisting of gases in the proportions of those on earth, a doubling of CO2 with no other changes would lead to about a 1 degree C rise in atmospheric temperature, because of the increased absorption of heat by the CO2. It is also physics that a rise in temperature of a theoretical atmosphere with no other changes will lead to an increase in water vapour. And that increase in water vapor with no other changes will also lead to absorption of more heat and a rise in temperature.
Nevertheless, AGW is not ‘just physics’, and here is why. It’s the difference between the laws which govern the operation of an engine, and the design of a vehicle. How the climate system reacts to the increase in atmospheric temperature caused by the increased CO2 is not just physics, in the same way that how the car reacts to increased fuel flow is not just physics. The one does not allow us to predict the other. It could be that the increase in speed is a function of increased fuel flow. Or it could be, with no violation of the laws of physics, that factors such as wind resistance, rolling resistance, governors etc either limit or eliminate any speed increase.
What is ‘just physics’ is that the energy content of the fuel going to the engine, and thus the power output of the engine, has increased. But it does not follow from that, as a matter of physics, that speed will increase.
Similarly, it is correct to say that an atmosphere with more CO2 must increase in heat uptake. However, whether this leads to much or any warming depends on what happens to the system as a whole in response. It could be that increases in water vapor amplify it. Or it could be that convection, cloud and rain eliminate it. It could be that over time the average water vapor content rises or falls, without there being any violation of the various laws of physics governing the behavior of gases. It would not even, as far as I know, be a violation of the laws of physics for an increase in CO2 to lead to cooling. It might require an unlikely combination of circumstances, and I don’t believe it to be the case, but I don’t think there is anything contrary to the laws of physics about it.
If on warmist blogs you question the connection between a rise in CO2 and a rise in global temperatures, you will frequently be told to google various of these laws of gases. They are not exactly irrelevant, one should know them, but they are not the crux of the matter. The crux of the matter is not the laws governing gases, but how, given them, the climate actually works, and there is nothing that says it has to work in such a way that a small amount of warming amplifies the amount of water vapor over a long enough period that the predominant feedbacks are positive.
Whether it does work like that or not is a matter of how the system works in detail, how it is constituted. It’s not ‘just physics’, in exactly the same way that the shape of the car body, and thus how much wind resistance it has, is not ‘just physics’ either. Of course, the resistance of a given shape is just physics. But what the shape happens to be is not.
Still less is it, as some bloggers repeat endlessly, a matter of 200 year old physics.
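As a rough check on the “about a 1 degree C rise” figure fred cites, one can combine the commonly quoted simplified CO2 forcing fit (Myhre et al., 1998) with an approximate no-feedback Planck response. Both numbers are standard rounded conventions, not anything taken from this post:

```python
# Back-of-envelope no-feedback warming from a doubling of CO2, using the simplified
# Myhre et al. (1998) forcing expression and an approximate Planck response.
import math

forcing = 5.35 * math.log(2)          # ~3.7 W/m^2 of forcing for doubled CO2
planck_response = 3.2                 # W/m^2 per K of warming, feedbacks excluded
delta_t = forcing / planck_response   # ~1.2 K, in line with "about 1 degree C"
print(f"Forcing: {forcing:.2f} W/m^2, no-feedback warming: {delta_t:.1f} K")
```

Everything beyond that number, as fred argues, is a question of how the rest of the system responds.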

Evan Jones
Editor
July 21, 2008 12:49 am

“Re GISS adjustments, I would be interested to know the distribution of adjustments, i.e., whether they are positive or negative.”
Well I can answer half of your question but not the other half.
For its “raw” data, GISS uses NOAA adjusted data. For the USHCN-2, the adjustments were a whopping +0.42C. I looked at the USHCN-1 version that was +0.3C and found that (in Gore’s own words), “Everything that’s supposed to be UP is DOWN and everything that’s supposed to be DOWN is UP.”
According to NOAA, all these site violations the Rev has been documenting made the temperatures DROP. So they had to be adjusted upwards.
To add insult to injury, the UHI adjustment was -0.1 FAHRENHEIT!
I don’t know what GISS did to those outrageous NOAA numbers. Unlike NOAA USHCN-1, so far as I can tell, they are much too smart to publish a bottom line. (USHCN-2 wised up and stopped publishing the amount and direction of their adjustments. They learned a bitter lesson when their USHCN-1 adjustment graphs became one of the most quoted graphs by skeptics. And USHCN-2 is almost half again worse! But they didn’t publish that; it had to be derived by map function diddling. I didn’t run that, a poster on this blog ran the numbers.)

Steve Keohane
July 21, 2008 4:55 am

Philip_B: “no proof is required. It is elementary statistics. As sample size increases, the increase in precision declines.”
While a broadly true statement, one needs an initial sample size such that it can capture the deviation within the population being measured. A population with a sigma of .001 needs a smaller sample than one with a sigma of 9.

SunSword
July 21, 2008 5:57 am

When the amount and direction of adjustments to ground-based stations are not published, the information by definition cannot be subject to peer review, since the “peers” (i.e., actual scientists who study the climate) cannot review what is withheld. This is a fundamental violation of the checks and balances of the scientific method, and in fact is not science at all but merely politics.