
A Modest Proposal For A Better Data Set
Reposted from Warren Meyer's website, Climate Skeptic.
One of the ironies of climate science is that perhaps the most prominent opponent of satellite measurement of global temperature is James Hansen, head of … wait for it … the Goddard Institute for Space Studies at NASA! As odd as it may seem, while we have updated our technology for measuring atmospheric components like CO2, and have switched from surface measurement to satellites to monitor sea ice, Hansen and his crew at the space agency are fighting a rearguard action to defend surface temperature measurement against the intrusion of space technology.
For those new to the topic, the ability to measure global temperatures by satellite has only existed since about 1979, and is admittedly still being refined and made more accurate. However, it has a number of substantial advantages over surface temperature measurement:
- It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
- It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
- It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.
Anthony Watts has done a fabulous job of documenting the issues with the surface temperature measurement network in the US, which one must remember is the best in the world. Here is an example of the problems in the network. Another problem that Mr. Hansen and his crew are particularly guilty of is making a number of poorly documented adjustments in the laboratory to historical temperature data, adjustments that have the result of increasing apparent warming. These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.
What really got me thinking about this topic was this post by John Goetz the other day taking us step by step through the GISS methodology for “adjusting” historical temperature records (By the way, this third-party verification of Mr. Hansen’s methodology is only possible because pressure from folks like Steve McIntyre forced NASA to finally release their methodology for others to critique).
There is no good way to excerpt the post, except to say that when it's done, one is left with a strong sense that the net result is not really meaningful in any way. Sure, each step in the process might have some sort of logic behind it, but the end result is such a mess that it's impossible to believe the resulting data have any relevance to any physical reality. I argued the same thing here with this Tucson example.
Satellites do have disadvantages, though I think these are minor compared to their advantages (Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal. This is also consistent with the fact that Mr. Hansen’s historical adjustments tend to be opposite what most would intuit, adding to rather than offsetting urban biases). Satellite disadvantages include:
- They take readings of individual locations fewer times in a day than a surface temperature station might, but since most surface temperature records only use two temperatures a day (the high and low, which are averaged), this is mitigated somewhat.
- They are less robust — a single failure in a satellite can prevent measuring the entire globe, where a single point failure in the surface temperature network is nearly meaningless.
- We have less history in using these records, so there may be problems we don’t know about yet
- We only have history back to 1979, so it's not useful for very long-term trend analysis.
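The point above about surface records using only two readings a day can be illustrated with a quick sketch. The hourly temperature curve below is entirely made up (a cool night with a sharp afternoon peak), not real station data:

```python
import math

# Hypothetical asymmetric daily temperature curve (degrees C): cool night,
# sharp afternoon peak. Made-up numbers, not real station data.
hourly = [10 + 8 * math.exp(-((h - 15) ** 2) / 18.0) for h in range(24)]

true_mean = sum(hourly) / len(hourly)          # mean of all 24 readings
minmax_mean = (min(hourly) + max(hourly)) / 2  # classic (Tmin + Tmax) / 2

# The two-point average overweights the short afternoon spike.
print(round(true_mean, 2), round(minmax_mean, 2))
```

For this shape of day, the (Tmin + Tmax)/2 convention runs well above the all-hours mean; a symmetric day would show no gap, which is why the bias depends on local conditions.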
This last point I want to address. As I mentioned above, almost every climate variable we measure has a technological discontinuity in it. Even temperature measurement has one between thermometers and more modern electronic sensors. As an example, below is a NOAA chart on CO2 that shows such a data source splice:
I have zero influence in the climate field, but I would nevertheless propose that we begin to make the same data source splice with temperature. It is as pointless to continue relying on surface temperature measurements as our primary metric of global warming as it is to rely on ship observations for sea ice extent.
Here is the data set I have begun to use (Download crut3_uah_splice.xls ). It is a splice of the Hadley CRUT3 historic data base with the UAH satellite data base for historic temperature anomalies. Because the two use different base periods to zero out their anomalies, I had to reset the UAH anomaly to match CRUT3. I used the first 60 months of UAH data and set the UAH average anomaly for this period equal to the CRUT3 average for the same period. This added exactly 0.1C to each UAH anomaly. The result is shown below (click for larger view).
Below is the detail of the 60-month period where the two data sets were normalized and the splice occurs. The normalization turned out to be a simple addition of 0.1C to the entire UAH anomaly data set. By visual inspection, the splice looks pretty good.
One always needs to be careful when splicing two data sets together. In fact, in the climate field I have warned of the problem of finding an inflection point in the data right at a data source splice. But in this case, I think the splice is clean and reasonable, and consistent in philosophy with, say, the splice in historic CO2 data sources.
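The normalization described above can be sketched in a few lines. The anomaly lists here are made-up stand-ins; in the real exercise they would be the CRUT3 and UAH monthly columns from the linked spreadsheet, with a 60-month overlap window:

```python
# Hypothetical monthly anomaly lists (degrees C); in the real exercise these
# would be the Hadley CRUT3 and UAH columns from the linked spreadsheet.
crut3 = [0.05, 0.10, 0.02, 0.08, 0.12, 0.06]
uah = [-0.06, -0.01, -0.08, -0.02, 0.03, -0.03]

n = len(uah)  # overlap window; the post uses the first 60 months

# Offset that makes the UAH mean over the window equal the CRUT3 mean.
offset = sum(crut3[:n]) / n - sum(uah[:n]) / n

# Rebased UAH series, ready to splice onto the end of CRUT3.
uah_rebased = [round(a + offset, 4) for a in uah]
print(round(offset, 4))
```

With these illustrative numbers the offset happens to come out to 0.1 °C, the same constant shift the post reports for its data; the method is just equalizing means over the overlap period.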



Steve McIntyre has, on various occasions, looked into the adjustments and the way that not only are current numbers altered in accordance with various semi-documented codes, but some of the historical data is altered as well, rather like modifying your date of birth at each birthday. One post is at
http://www.climateaudit.org/?p=3201#more-3201
Also in another post
http://www.climateaudit.org/index.php?paged=3
he points out that
“Hansen also likes to zero things to the present (resulting in constant re-writing of history). It appears that the adjustment is zeroed on the last year of the M0 segment, by subtracting the last adjustment value in the range.”
So SunSword is perfectly correct, “This is a fundamental violation of the checks and balances of the scientific method, and in fact is not science at all but merely politics.”
“no proof is required. It is elementary statistics. As sample size increases, the increase in precision declines.”
Most people (when they think of it) think of statistics in terms of sampling – assessing like or similar populations. Unfortunately, because of the underlying varied terrain and fluid dynamics of the earth’s atmosphere, many more samples are required. One need only look at the daily temperature map of the US (no matter how badly presented!) to see the variability. Station temperature assessment doesn’t really fall into sampling theory so much as the geostatistics arena (which is why Hansen uses other stations to adjust). Typically, though, these sorts of adjustments (and there’s some Bayesian statistics that should be thrown in, although I’m not sure it’s formalized) also should produce estimated errors. Based on what I’ve read, Hansen doesn’t like to use reproducible approaches, and tends to modify his approach as he goes along… FWIW
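The "uses other stations to adjust" idea can be sketched with a toy calculation. To be clear, this is not the GISS algorithm; it is just an inverse-distance-weighted average over invented neighboring stations, with the kind of crude error estimate the comment says such adjustments ought to report:

```python
import math

# Toy neighbor-based estimate for one station's anomaly (degrees C).
# This is NOT the GISS algorithm, just an inverse-distance-weighted
# average over made-up neighboring stations, with a crude error estimate.
neighbors = [  # (distance_km, anomaly_C), all values hypothetical
    (120.0, 0.42),
    (250.0, 0.35),
    (400.0, 0.55),
]

weights = [1.0 / d for d, _ in neighbors]
total = sum(weights)
estimate = sum(w * a for w, (_, a) in zip(weights, neighbors)) / total

# Weighted spread as a rough uncertainty; real geostatistics (kriging)
# would instead derive this from a fitted variogram.
var = sum(w * (a - estimate) ** 2 for w, (_, a) in zip(weights, neighbors)) / total
print(round(estimate, 3), round(math.sqrt(var), 3))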
Anybody come across zapperz at Physics and Physicists? Just had a discussion (about the APS) with a guy who claims to be a scientist. He refused to talk about anything except my “apparent” attempt to accuse scientists of bowing to “grant pressure”.
I mentioned about 20 times that CO2 hasn’t been proven to drive temperature, but he still kept coming back to my “accusations”, totally ignoring the actual science. Typical alarmist speak.
I think Lindsay answered the question very succinctly. Iceman, can you answer this? Re: satellites don’t measure temperature?
Bryant: Of course people will start noticing, that’s why us skeptics aren’t really worried. It just ain’t getting hot. I haven’t seen any change in precipitation or temperatures in Australia for the past 20 years! You can always look at the BOM graphs LOL
not only are current numbers altered in accordance with various semi-documented codes, but some of the historical data is altered as well, rather like modifying your date of birth at each birthday.
So THAT’s what that line in the FILENET code means!
IF X>=40 THEN X=39
Vincent – satellites measure infrared radiation, which normally corresponds to temperature. But if you take a reading of a mirror or glass, then you often get the temperature reading of the reflected object, not the glass. Most infrared cameras need to be adjusted for the relative “blackbody-ness” (emissivity) of the object they are measuring.
“I mentioned about 20 times that CO2 hasn’t been proven to drive temperature, but he still kept coming back to my ‘accusations’, totally ignoring the actual science. Typical alarmist speak.”
Re the low quality of debate, I once contributed to a discussion and received the reply “you live in Texas and work for Haliburton”.
Well, silly me; I thought I lived in the UK, and worked for myself. And I have no idea who or what Haliburton is. Sounds like a type of fish.
“This is a commonly made and totally invalid argument. AGW is not a matter of physics.”
I think the climate system is entirely a matter of physics, but the problem is that the models do not represent, and probably cannot be designed to represent, all of the physical processes, because the relationships and feedbacks between the physical processes are not fully understood. But all of the processes involved are physical processes conforming to the laws of physics, and can be described mathematically.
Using the same car analogy, we could accurately model the car and forecast precisely what speed the car will attain from a given fuel flow, but we have to know the composition of the fuel and air mixture, the efficiency of the engine and drivetrain, and the aerodynamics of the car. If we only know with certainty the fuel composition and the efficiency of the engine, and have limited information about the rest of the drivetrain and the aerodynamic properties of the car, then we cannot accurately forecast the precise speed.
So regarding the climate system and GCMs, the basic physics is known (2xCO2 = 1–1.2 °C of warming direct from CO2 forcing), but the additional feedbacks have not been precisely determined. If they could be determined, then they could be represented in a numerical model based entirely on physics, and that model could accurately forecast the climate, given the correct inputs. IMHO, such a model is well beyond our ability in the foreseeable future, and probably for all time, and still wouldn’t be able to forecast future climate, because the forcing inputs for the future can never be known.
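The 1–1.2 °C direct figure quoted above can be reproduced from the standard simplified forcing approximation ΔF ≈ 5.35 ln(C/C0) W/m², combined with a no-feedback sensitivity of roughly 0.27–0.31 K per W/m². Both numbers are textbook values, not from the comment itself:

```python
import math

# Standard simplified CO2 forcing approximation (textbook values,
# not from the comment): delta-F ~ 5.35 * ln(C / C0) W/m^2.
def co2_forcing(c_ppm, c0_ppm=280.0):
    """Forcing in W/m^2 relative to a pre-industrial baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

f_2x = co2_forcing(560.0)  # doubling: 5.35 * ln(2), about 3.7 W/m^2

# No-feedback (Planck) sensitivity, roughly 0.27-0.31 K per W/m^2.
for lam in (0.27, 0.31):
    print(round(lam * f_2x, 2))  # spans roughly 1.0 to 1.15 C
```

The logarithm is also why each successive doubling of CO2 adds the same forcing; the disputed part of the science is the feedback multiplier applied on top of this, exactly as the comment says.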
Evan Jones (10:01:52) :
So THAT’s what that line in the FILENET code means!
IF X>=40 THEN X=39
Nice one, Evan
“See what he’s doing there? counters is turning the Scientific Method on its head by assuming that the AGW hypothesis must be true until/unless “the basic physics behind AGW” are falsified.”
Read Arrhenius’ 1896 paper; he made this very argument. The “science is settled”, and yet it proves inadequate grounds for assent at every turn.
Now it is a purely political battle, and we’ve every reason to suspect diplomacy will not be successful. Let’s set a time limit.
“This is a commonly made and totally invalid argument. AGW is not a matter of physics.”
This is absolutely true. It’s a matter of metaphysics, computer entrail reading, data voodoo, and binary tea leaves.
I think Hafemeister’s tutorial was quite interesting as a starting point on AGW theory.
The critical and disputed part of his tutorial appears to be contained in a single sentence, which he presented without a reference:
“One can attribute 21 °C of that warming to the IR trapping of water vapor, 7 °C to CO2 and 5 °C to other gases.”
I think this is a spectacular contribution for such a rare trace gas.
“Unfortunately, because of the underlying varied terrain and fluid dynamics of the earth’s atmosphere, many more samples are required”
Were we looking for a regional or local effect, then I would agree, but we are looking for a global signal, and such a signal must (in a statistical sense) be present in the average of fewer than 100 sites. Assuming, of course, there is no systematic bias. And if there is a systematic bias (and it’s highly likely there are several), more sites don’t solve the problem unless you know the source of the biases. And if you do, you should be eliminating sites with known biases. Adjustments just produce another source of error.
Put simply, if you cannot find a clear global warming signal in 100 (random) locations, then you are unlikely to find it in a sample of 1000s. And if you do, it is proof the effect is small.
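The ~100-sites claim can be checked with a quick Monte Carlo. All the numbers here are made up for illustration: a true global change of +0.45 °C observed through 1 °C of independent per-site noise:

```python
import random

random.seed(42)

# Hypothetical setup: a true global change of +0.45 C over the period,
# buried under 1.0 C of independent site-level noise (made-up numbers).
TRUE_CHANGE, NOISE_SD = 0.45, 1.0

def network_estimate(n_sites):
    """Average the observed change across n unbiased sites."""
    obs = [TRUE_CHANGE + random.gauss(0, NOISE_SD) for _ in range(n_sites)]
    return sum(obs) / n_sites

# Standard error shrinks as 1/sqrt(n): about 0.1 C at 100 sites, so a
# 0.45 C signal is already detectable; no amount of extra averaging,
# however, can remove a systematic bias shared by the sites.
for n in (10, 100, 1000):
    print(n, round(network_estimate(n), 2))
```

This is the comment's point in miniature: with unbiased sites, ~100 already pins down a signal of this size, and going to thousands only helps against random noise, never against shared bias.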
I really wish people would not refer to CO2 as a trace gas and claim that its (low) concentration therefore cannot have an effect.
Chlorofluorocarbons are believed to have a greenhouse warming effect equal to one fifth of the CO2 effect, even though they are measured in parts per trillion. That’s one million times less than CO2.
Forgot the link.
http://www.earthscape.org/r1/ari02/gcc3.html
Fred, sorry but you have reminded me of an old joke.
A mathematician, statistician and physicist are at a horse race and a punter, talking over the beer, asks them whether they know which horse will win.
Well, the statistician talked of form and going and handicap and concluded that it would, possibly, be this horse, but he couldn’t be certain, the weather may change.
The mathematician talked of probabilities and the punters who usually bet on a sure thing; therefore he would back the favourite … but it is a horse race, after all, so who knows?
The physicist boldly stated: I can tell you precisely who will win, assuming a spherical horse.
Evan Jones (10:01:52) :
“not only are current numbers altered in accordance with various semi-documented codes, but some of the historical data is altered as well, rather like modifying your date of birth at each birthday.
So THAT’s what that line in the FILENET code means!
IF X>=40 THEN X=39”
Almost before my time. RIP Jack Benny, 1894-1974, dead at 39.
Great short YouTube video:
NewsWatch 2008: UC Davis atmospheric scientist Richard Snyder reports from the UC Davis weather station that the Sacramento Valley’s weather is changing, but it may not be experiencing climate change…
Mike C,
>> “And by the way, both of you are invited to the barbecue at the temperature station when this is all over, you can drink it off.”
That sounds like a great plan. Just tell us where, and we will be there!!
>>”Please allow me to patiently address your repeating of the “maybes” “possiblys” “could bes” and “might bes”.”
Those maybes are not mine; I am just quoting John Christy, who is a leading expert in satellite temperature measurements, and by no means an AGW supporter (I myself use maybes a lot, since I believe that in life we do not know much for sure, beyond some probabilities. It is possible for a gas mixture to spontaneously separate itself into its components, but it is not that probable. I also believe in miracles, occasional perturbations of the known natural law – in my work I see a few of them occasionally). But just to make it clear, when I read Christy’s statements, this is what I gather (one more time): (1) he implied that the U.S., China, etc. have high-resolution, well-maintained scientific collections of temperature data, (2) there is a one-to-one correspondence between satellite and ground measurements in those regions, (3) part of the disagreement comes from different trends at different altitudes, and surface stations are not responsible for that, (4) the other part is from the difference in coverage, (5) the greatest disagreement is in the tropics, where there are fewer weather stations (including Central Africa and South America), and (6) I have not seen any comments from him attributing the disagreement to the quality of the surface stations or to the correction methods.
Evan and I went through this a week ago and we agreed to disagree. When I look at the figure Anthony posted in the Forum, I do not see any difference between the various curves. Evan disagreed.
http://wattsupwiththat.files.wordpress.com/2008/03/giss-had-uah-rss_global_anomaly_refto_1979-1990_v2.png
Also, I am well aware of Anthony’s opinion (as a leading expert in this area) on surface stations. So it seems that, for a compatible conclusion, we have to assume that Hansen’s correction algorithm somehow works. If Hansen made his black-box algorithm public, it would remove a lot of doubts; it might even help improve the algorithm.
Anthony >> “2) A press release from NOAA, or GISS, or HadCRUT that says “Xth warmest year on record”. That’s done from a combination of absolute numbers….
Absolute numbers are a big deal to the public and the press, don’t let yourself believe otherwise. –Anthony”
OK, I agree, if you are going to put it that way, although I wish it were not the case. It seems to me (although I have not looked into it carefully) that when using a finite number of measurements (whether surface stations or satellites) to reduce a continuous temperature surface on Earth to an average number, the precision (or confidence level) of the result cannot be high enough to claim that one year is a tiny bit warmer than another, unless the difference between those two years is appropriately large. But we do make statements as you said, so I have to agree with you.
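The "tiny bit warmer" worry can be made concrete with two invented annual anomalies and a plausible measurement uncertainty; the values here are illustrative, not actual HadCRUT or GISS numbers:

```python
import math

# Hypothetical example: two annual global anomalies (degrees C) with a
# ~0.05 C one-sigma measurement uncertainty of the kind often quoted
# for such series. Illustrative numbers, not actual HadCRUT/GISS values.
year_a, year_b = 0.54, 0.51
sigma = 0.05

diff = year_a - year_b
sigma_diff = math.sqrt(2) * sigma  # independent errors add in quadrature

# A difference well inside the combined uncertainty cannot support a
# "warmest year on record" ranking.
print(round(diff, 3), round(sigma_diff, 3), diff > 2 * sigma_diff)
```

Here the 0.03 °C gap is far smaller than the ~0.07 °C combined uncertainty, so ranking the two years would be statistically meaningless, which is exactly the reservation voiced above.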
Mike Bryant, thank you for providing a perfect example of how the media distorts everything.
The scientist states clearly that weather events are not climate change. But during his comments, the media shows pictures of flooding and disaster.
That is also known as propaganda, isn’t it?
“Satellites do not measure temperature as such. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data to calculate temperature trends have obtained a range of values.” – Wikipedia
Sounds accurate.
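The radiance-to-temperature inversion the Wikipedia passage describes can be illustrated for a single channel by inverting the Planck function. This is an idealized single-wavelength example; real MSU/AMSU retrievals combine several microwave channels with atmospheric weighting functions:

```python
import math

# Physical constants (SI)
H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light, m/s
K = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(T, wavelength):
    """Spectral radiance B(T) at one wavelength (W m^-2 sr^-1 m^-1)."""
    a = 2.0 * H * C**2 / wavelength**5
    return a / (math.exp(H * C / (wavelength * K * T)) - 1.0)

def brightness_temperature(L, wavelength):
    """Invert Planck's law: the temperature a blackbody would need
    to emit the measured radiance L at this wavelength."""
    a = 2.0 * H * C**2 / wavelength**5
    return H * C / (wavelength * K * math.log(a / L + 1.0))

wl = 11e-6                      # 11 micron thermal-IR window channel
L = planck_radiance(255.0, wl)  # radiance from a 255 K scene
print(round(brightness_temperature(L, wl), 1))  # recovers 255.0
```

The round trip is exact here because the scene is a perfect blackbody; the "range of values" Wikipedia mentions comes from everything this sketch leaves out, such as emissivity, atmospheric absorption, and how multiple channels are merged.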
I haven’t studied the science of the “splice”, nor would I understand the technical adjustments which took care of the loose ends, but wasn’t it satellite data (in ’78 or ’79) which first introduced “evidence” of global warming?
Bill Marsh says,
“Of course the best metric for measuring ‘global warming’ is not surface temp, it is ocean heat content.”
And as we embrace the next series of technological “advancements”, won’t we be putting our faith blindly in the next generation of adjusters – to compensate for the problems with suckerfish, for example?
http://www.examiner.com/a-1484359~Little_yellow_submarine_studies_ocean.html
After reading about the issues with buckets and engine inlets at CA, I wondered if this might not be an elegant solution. But if the data takes an adjuster with his own private algorithm to digest it for the public, this approach, too, is doomed to controversy.
“Satellites do not measure temperature as such. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. ”
Mauna Loa CO2 is measured via irradiances and not directly by chemical analysis. While the objection may be material, it is ad hoc and obtuse.
Keep in mind the ground stations keep a lot of people employed and require a significant budget. This gives the manager/boss importance, and by doing away with the budget and resources, the manager will not be able to justify their own salary.