Why Does NASA GISS Oppose Satellites?

A Modest Proposal For A Better Data Set

Reposted from Warren Meyer's website, Climate Skeptic.

One of the ironies of climate science is that perhaps the most prominent opponent of satellite measurement of global temperature is James Hansen, head of … wait for it … the Goddard Institute for Space Studies at NASA!  As odd as it may seem, while we have updated our technology for measuring atmospheric components like CO2, and have switched from surface measurement to satellites to monitor sea ice, Hansen and his crew at the space agency are fighting a rearguard action to defend surface temperature measurement against the intrusion of space technology.

For those new to the topic, the ability to measure global temperatures by satellite has only existed since about 1979, and is admittedly still being refined and made more accurate.  However, it has a number of substantial advantages over surface temperature measurement:

  • It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
  • It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
  • It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.

Anthony Watts has done a fabulous job of documenting the issues with the surface temperature measurement network in the US, which one must remember is the best in the world.  Here is an example of the problems in the network.  Another problem, of which Mr. Hansen and his crew are particularly guilty, is making a number of poorly documented adjustments to historical temperature data in the laboratory that have the effect of increasing apparent warming.  These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.

What really got me thinking about this topic was this post by John Goetz the other day taking us step by step through the GISS methodology for “adjusting” historical temperature records.  (By the way, this third-party verification of Mr. Hansen’s methodology is only possible because pressure from folks like Steve McIntyre forced NASA to finally release its methodology for others to critique.)

There is no good way to excerpt the post, except to say that when it’s done, one is left with a strong sense that the net result is not really meaningful in any way.  Sure, each step in the process might have some sort of logic behind it, but the end result is such a mess that it’s impossible to believe the resulting data have any relevance to any physical reality.  I argued the same thing here with this Tucson example.

Satellites do have disadvantages, though I think these are minor compared to their advantages  (Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.  This is also consistent with the fact that Mr. Hansen’s historical adjustments tend to be opposite what most would intuit, adding to rather than offsetting urban biases).  Satellite disadvantages include:

  • They take readings of individual locations fewer times in a day than a surface temperature station might, but since most surface temperature records use only two temperatures a day (the high and the low, which are averaged), this is mitigated somewhat (see the sketch after this list).
  • They are less robust — a single failure in a satellite can prevent measuring the entire globe, whereas a single point failure in the surface temperature network is nearly meaningless.
  • We have less history in using these records, so there may be problems we don’t know about yet.
  • We only have history back to 1979, so it’s not useful for very long-term trend analysis.
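On the two-readings-a-day point: the conventional daily mean used in most surface records is just (Tmax + Tmin) / 2, which can differ from a true 24-hour average when the diurnal cycle is asymmetric. A small illustration with synthetic hourly data (the numbers are invented, not from any station):

    # Synthetic hourly profile, purely for illustration: temperature climbs
    # slowly through the morning and falls off after an afternoon peak.
    hourly = ([10.0] * 6
              + [10.0 + 2.0 * i for i in range(8)]
              + [24.0 - 1.5 * i for i in range(10)])

    tmax, tmin = max(hourly), min(hourly)
    print((tmax + tmin) / 2)          # 17.0 -- the conventional min/max mean
    print(sum(hourly) / len(hourly))  # ~15.35 -- the true 24-hour mean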

It is this last point that I want to address.  As I mentioned above, almost every climate variable we measure has a technological discontinuity in it.  Even temperature measurement has one between thermometers and more modern electronic sensors.  As an example, below is a NOAA chart on CO2 that shows such a data source splice:

[Figure: NOAA atmospheric carbon dioxide record, showing the splice of data sources]

I have zero influence in the climate field, but I would nevertheless propose that we begin to make the same data source splice with temperature.  It is as pointless to continue to rely on surface temperature measurements as our primary metric of global warming as it is to rely on ship observations for sea ice extent.

Here is the data set I have begun to use (Download crut3_uah_splice.xls).  It is a splice of the Hadley CRUT3 historical database with the UAH satellite database for historical temperature anomalies.  Because the two use different base periods to zero out their anomalies, I had to reset the UAH anomaly to match CRUT3.  I used the first 60 months of UAH data and set the UAH average anomaly for this period equal to the CRUT3 average for the same period.  This added exactly 0.1C to each UAH anomaly.  The result is shown below (click for larger view).

[Figure: Spliced CRUT3–UAH temperature anomaly series]

Below is the detail of the 60-month period where the two data sets were normalized and the splice occurs.  The normalization turned out to be a simple addition of 0.1C to the entire UAH anomaly data set.  By visual inspection, the splice looks pretty good.

[Figure: Detail of the 60-month normalization period where the splice occurs]
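For anyone who wants to reproduce the splice without the spreadsheet, here is a minimal sketch in Python with pandas. The file and column names are placeholders of my own invention, not the layout of the actual crut3_uah_splice.xls:

    import pandas as pd

    # Placeholder CSV exports of the two monthly anomaly series
    crut3 = pd.read_csv("crut3_monthly.csv", index_col="date", parse_dates=True)
    uah = pd.read_csv("uah_monthly.csv", index_col="date", parse_dates=True)

    # Offset UAH so its mean over the first 60 months of overlap
    # matches the CRUT3 mean for the same period (~ +0.1C per the post)
    overlap = uah.index[:60]
    offset = (crut3.loc[overlap, "anomaly"].mean()
              - uah.loc[overlap, "anomaly"].mean())

    # CRUT3 before the satellite era, offset UAH from 1979 onward
    spliced = pd.concat([crut3.loc[crut3.index < uah.index[0], "anomaly"],
                         uah["anomaly"] + offset])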

One always needs to be careful when splicing two data sets together.  In fact, in the climate field I have warned of the problem of finding an inflection point in the data right at a data source splice.  But in this case, I think the splice is clean and reasonable, and consistent in philosophy with, say, the splice in historic CO2 data sources.

79 Comments
July 20, 2008 1:17 pm

Smith and Reynolds in their ERSST.v3 blend satellite data with buoy data, especially in the Southern Ocean. For LST, at minimum, those big old gaps in Canada, South America, Africa, etc., could be filled in with satellite data. That way we could eliminate much of the guesswork.
Regards

Evan Jones
Editor
July 20, 2008 1:45 pm

I wouldn’t be surprised if they started mysteriously disappearing.
Nice satellite you got here. It would be a pity if something “happened” to it. Orbits decay, y’know . . .

July 20, 2008 2:12 pm

hmmmm … I would say because it’s less easy to fudge and fake satellite data than it is the ground stations.

counters
July 20, 2008 2:41 pm

The article’s citation of flaws with satellites fails to mention a very important one:
Satellites don’t measure temperature.
Read it one more time. Are we on the same page? Good.
Yes, satellites are excellent in terms of the resolution of data we can collect with them. Sure, they might be expensive, but in my opinion the benefits far outweigh the costs (as the satellites can no doubt serve multiple purposes). But let’s be very clear: we’re not sticking satellites up there with big thermometers. Satellite temperature data is just as convoluted and riddled with precision and accuracy problems as tree-ring derived temperatures.
Without going into too much detail, because it does get quite complicated, we can infer temperature by looking at microwave radiation from atmospheric oxygen. However, time and time again, the conversions we make have been shown to be inaccurate, which is why the satellite record is always being adjusted. Granted, we do accept the temperature record derived from satellites, but it should be taken with a grain of salt and the intrinsic error in the data must be considered.
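As a rough illustration of that inference step (a minimal sketch with made-up numbers, not the actual MSU retrieval): in the microwave regime the simplest link between a measured radiance and a temperature is the Rayleigh–Jeans brightness temperature; real retrievals stack weighting functions, inter-satellite merges and drift corrections on top of this.

    # T_b = I * c^2 / (2 * k_B * nu^2), valid in the microwave regime
    C = 2.998e8      # speed of light, m/s
    K_B = 1.381e-23  # Boltzmann constant, J/K

    def brightness_temperature(radiance, freq_hz):
        # radiance in W m^-2 Hz^-1 sr^-1 -> brightness temperature in kelvin
        return radiance * C**2 / (2.0 * K_B * freq_hz**2)

    # 53.74 GHz is an O2-band sounding frequency (MSU channel 2);
    # the radiance value is invented to give a plausible result
    print(brightness_temperature(2.2e-16, 53.74e9))  # ~248 K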
The correct meme that skeptics and proponents alike should be proposing is that the work be put in to fix all of our available resources. Not only should we continue revising the GISS stations to determine and counteract the biases in some of them, but we should continue researching into MSU technology to help refine satellite measurements.
Whatever your opinion of AGW, it is wrong to dismiss the available data as corrupt or biased. Remember, just as proponents need temperature records to prime their models and test out the intricacies of their theory, skeptics need accurate temperature records as well, because there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety. A skeptic’s ace-up-the-sleeve is an accurate temperature record which does not demonstrate significant warming in the post-Industrial era, not allegations of bias or corruption.

July 20, 2008 2:49 pm

The money quote:

Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.

DAV
July 20, 2008 2:52 pm

Evan Jones (13:45:13) :”Nice satellite you got here. It would be a pity if something “happened” to it. Orbits decay, y’know . . .”
Tut, tut, Evan.
‘Course if NASA actually operated the spacecraft collecting all of that temperature data they’d be in a better position to affect things.
http://www.nesdis.noaa.gov/About/onepagers/pdf/NSOFinfo.pdf

tarpon (14:12:56) : “I would say because it’s less easy to fudge and fake satellite data than it is the ground stations.”
Ground data is mostly a few temperature readings a day. The satellite data are complex and require intense conversion. This requires fairly large, complicated computer programs and a lot of people are involved (including the military). Much easier to fake a moon landing 🙂 I haven’t been down to Suitland in a long time but I doubt things have gotten easier. OTOH, the military DOES operate its own weather spacecraft.

Duane Johnson
July 20, 2008 3:21 pm

Counters says:
Satellites don’t measure temperature.
But keep in mind that even a thermometer doesn’t really measure temperature. A mercury thermometer measures the length of a column of mercury. Other devices rely on changes of some other material, a change in electrical resistance, etc. To say that a satellite doesn’t measure temperature doesn’t really say anything about its accuracy in inferring temperature of a particular atmospheric location. Elimination of microsite problems may well make satellite instruments the more accurate tool.

Leon Brozyna
July 20, 2008 3:26 pm

I enjoyed this piece when I read it on Climate Skeptic. It is quite even-handed in its presentation of the strengths and weaknesses of satellite data. I consider the biggest plus in the use of satellites to be the way it overcomes land-based biases. As an example, here’s a rather stunning piece I expect we’ll see here shortly: an article in Icecap about the stations in Maine, which they got from:
http://www.m4gw.com:2005/m4gw/2008/07/makin_up_climate_data_from_jun.html#more
Now I understand how gaps in records or wholesale creation of a record happens, such as the early years of the Buffalo Bill dam:
http://wattsupwiththat.wordpress.com/2008/07/15/how-not-to-measure-temperature-part-67/
Now I understand the business with filnet and making up data (to fill in the blanks). I trust that filnet has been and continues to be tested against known stations to ensure its reliability, and that such peer-reviewed studies have been shared with the meteorological community…..

Mike C
July 20, 2008 3:29 pm

Counters,
I’ll consider the surface temperature record to be corrupt and biased as long as my lying eyes tell me there are barbecues, air conditioner exhausts, trash incinerators, asphalt roofs, chimneys, pavement, metal automobiles, etc…etc…etc in the immediate vicinity of those surface stations. Yeah Bobbo, I’m gonna believe all those climate scientists who say it doesn’t matter because glaciers have been melting since the little ice age. And while you are dishing out your passionate advice to bloggers, please be fair and, for instance, tell the whole truth; satellites get adjusted due to orbital drift, not because some alien is at the satellite with a cooler of beer cooling down the satellite record. Orbital drift is the only significant issue of any kind with the satellite record and it is well studied, understood and accounted for.

IceAnomaly
July 20, 2008 3:34 pm

* It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
-But satellites are subject to calibration errors as the satellite orbit altitudes drift over time, and as the sensor sensitivity drifts over time.
* It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
-but is subject to large discontinuities as one satellite is replaced by another over time. There are several such discontinuities in the satellite record already.
* It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.
-the satellites have no high-latitude coverage. They miss the poles entirely.
“Mr. Hansen and his crew are particularly guilty of is making a number of adjustments in the laboratory to historical temperature data that are poorly documented”
They are documented in a series of publications outlining the rationale for the adjustments, and the way the adjustments are made. They are also perfectly documented in the computer code. There is no ‘guilt’ here; there is an attempt to get the best possible results from a flawed historical data set that cannot be recreated – unless you have a time machine somewhere that no one else knows about.
“These adjustments, that imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.”
“intuition” is not a scientific argument. JohnV’s early analysis here at WattsUp showed that the class 1 and 2 high-quality stations, when analyzed alone, gave historical results in very close agreement with the adjusted GISS results. Not to mention that the four main data sets, GISS, HadCrut, UAH and RSS, all show very similar results when compared over the entire time of the satellite measurements. I note that this article conspicuously does not attempt to quantify that claimed error. Just what IS the difference in slope of those 4 temperature products over the entire period of overlap? What is the error in each slope calculation? Are they statistically distinguishable? Claiming a significant difference without showing it to us tells us nothing.
“Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal. ”
Most skeptics, then, are comfortable with accusing Hansen of fraud and of assuming a major conspiracy in play. Thanks for making that clear. There are very good reasons for paying attention to the GISS product. It goes back a century and a half. It attempts to include the arctic, via (well documented) interpolation. It corrects urban trends by removing those trends from the analysis entirely – substituting an interpolation of the trends of surrounding non-urban areas. It is consistent with qualitative supporting evidence, such as degree-day calculations and killing frost date records. It is consistent with the other three major temp products.
And as JohnV showed, the adjusted results are very, very close to those of the class 1,2 stations alone.
Attributing a fraudulent motive to Dr. Hansen, especially in the absence of any evidence to that effect other than biased speculation, is simply vile – Anthony, I don’t understand why you continually allow that kind of character assassination to flood your site.
REPLY: Lee, let me clarify for you. Hansen knows the adjustments are flawed. It has been shown time and again that stations are being adjusted and the reasoning and logic (using Hansen’s own published methods) does not hold up. The adjustment based on satellite night lights at one point in time, taken in the year 1995, is being used to adjust an entire century’s worth of data. Of course that method is simply ludicrous, and we’ve shown that rural stations like Cedarville are getting adjusted for UHI when they should not be. Hansen knows this, and knowing that your methodology is in error, yet leaving it intact, is not science, but agenda or pride, or something else.
Further, GISS is interpolating data into the arctic, where no historical data is measured by stations. Interpolating data and presenting it with measured data, is in my view, scientific fraud, plain and simple.
Imagine interpolating data for a medical study and adding it to real data for a final paper to show some drug works as advertised. If such a thing were revealed, heads would roll, careers would be destroyed, companies would go bankrupt as stocks tumble. Yet Hansen, when doing just that, to show the surface temps align with his models, gets a free pass because “he’s saving the planet”. Amazingly, we have misguided anonymous cowards like yourself arguing for us to “go easy” on him.
I believe that Hansen should be called on the carpet for these flawed adjustments and interpolation of arctic data mixing it with real data.
How many false names do you plan to create Lee? – Anthony

crosspatch
July 20, 2008 3:37 pm

I don’t have a problem with adjusting temperatures for changes in surrounding conditions, station moves, etc. The problem I am having with Dr. Hansen’s adjustments is that nobody can duplicate them. While some stations can be duplicated, others can’t. That means the process by which these adjustments are made is still not understood. Dr. Hansen should make it a top priority to ensure that the adjustment process is transparent and the procedure is clear and well documented. He doesn’t own the data or the process or the results; they belong to the US taxpayer.
His data are diverging more from the several other data sets with each passing month and it is my opinion that we are entitled to know why this might be. It does not add any trust to his numbers when the methods used to create these adjustments take months of begging to obtain and when they are finally produced, they do not consistently match the output that he produces.

K
July 20, 2008 3:39 pm

I suspect a lot of NASA people simply know they have a nice, low-effort, job and intend that it not change until retirement.
The talented people that plan and create any technology don’t normally stay around very long to operate the devices. Agencies and bureaus do that. The seasons pass, and the big objective becomes securing the next budget and checking off the boxes. Over time the reviews and oversight become routine; they are boring.
Trust is everything. “He knows what he is doing; I am too busy to check everything” is a much more pleasant thought than “Great, it will take me a week to figure this out.”
Whatever goes on with the NASA GISS team may not involve any conspiracy or manipulation.

Bob B
July 20, 2008 3:40 pm

Counters, the surface data is corrupted and biased–get over it. It is pure crap and should not be relied upon at all.

Bill Marsh
July 20, 2008 3:54 pm

counters,
Good point. I understand that satellites don’t measure temperature directly, but I think we have the issues pretty well worked out. The advantage of the satellites is the breadth of coverage and the fact that they do not depend on humans.
The problem with the ground stations is that of coverage (two-thirds have shut down since the 1990s, especially outside the US). Right now there are areas in which the ‘adjustments’ are doing the equivalent of estimating the temperature in Atlanta based on thermometer readings in New York. I am VERY skeptical of the efficacy of those adjustments. Add to that the revelations that temps have been ‘faked’ or made up, either because of some misguided attempt at ‘aiding global warming efforts’ or through laziness at some sites, and the almost bizarre adjustments that end up changing past measurements (mostly down in the distant past, up in the more recent past).
If we are to commit resources, the best effort would be to provide self-reading/reporting stations in quality locations that we can trust.
Of course the best metric for measuring ‘global warming’ is not surface temp, it is ocean heat content.

Philip_B
July 20, 2008 3:57 pm

Gavin Schmidt, at Real Climate, says all we need to measure global temperatures is 60 good sites. I tend to round this up to 100 good sites with good geographic distribution, but the principle is correct.
I think there must be 60 or 100 good sites in the world, and the fact that no one has compiled a global index based on the best sites we can find, what I call pristine sites, makes me suspect they don’t show significant warming. And that is the conclusion of my informal survey.
Using only pristine sites, sites remote from any local or regional influences without moves, would have the added advantage that instrument issues would be much easier to diagnose and therefore fix.

Joel Black
July 20, 2008 3:59 pm

Mr. IceA.,
Maybe the folks at RealClimate can teach Anthony how to delete posts that cause discomfort. Or he could just spew vitriol at them like that little dog at Open Mind.

IceAnomaly
July 20, 2008 4:02 pm

Philip_B, JohnV did essentially just that for the US, right here on WattsUp. He did a gridded temp anomaly trend analysis for the US based on only the Class 1,2 stations identified by Watts’ survey. He found that the results of his analysis of only the best of the stations, matched very closely to the gridded adjusted output by GISS of all the stations.
REPLY: Unfortunately, in his rush to disprove the value of the project, he ended up with only 17 stations that rated CRN1 or CRN2 and very poor geographic distribution. I have not done this type of analysis yet because the survey is not complete yet. If I had done an analysis and published it, I’d be vilified for “rushing to judgment” or “using an incomplete data set with poor representativeness”.
Yet somehow, JohnV gets a free pass for the rush job he did, and his results cited as “fact”, as you have done, probably because his results are what folks like you want to see.
When we have the majority of stations done, I’ll do a proper analysis of the data. Until then any analysis is simply premature. – Anthony

Bob B
July 20, 2008 4:05 pm

“Gavin Schmidt, at Real Climate, says all we need to measure global temperatures is 60 good sites. I tend to round this up to 100 good sites with good geographic distribution, but the principle is correct.”
Phillip, where is the statistical proof for that? The big discrepancy in the March 2008 data between satellite and surface stations is due in large part to missing surface site locations.

Mike C
July 20, 2008 5:10 pm

Using the JohnV defense is a joke. John V only had 17 stations, several with missing data, especially for the last few years, and only one station represented the entire southeast. You also needed to read his own warnings about how his program was untested and not peer reviewed. I sat back and watched his work as he corrected mistake after mistake, most of which were corrected by then-15-year-old Kristen Byrnes. He was using data that he claimed was raw, in UHI areas, missing months, pre-QC, and included the MMTS adjustment which we now know was a major error. His program did not consider elevation, climate zone or any of a number of other parameters used in temperature. He was using older USHCN v1 data and his temperatures showed significant differences from GISS, especially as they went back in time. It’s the kind of thing you would expect from someone passionate about his beliefs (he resented criticism of James Hansen) and with no experience whatsoever with climate monitoring issues.

Philip_B
July 20, 2008 5:26 pm

Bob B, no proof is required. It is elementary statistics. As sample size increases, the marginal gain in precision declines.
The difference in precision between 10 and 100 sites is significant; the difference between 100 and 1,000 sites isn’t.
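A back-of-envelope check of that claim, assuming independent sites with a common standard deviation, so the standard error of the mean falls as 1/sqrt(n) (the sigma here is arbitrary):

    import math

    sigma = 1.0  # assumed site-to-site spread, degrees C
    for n in (10, 100, 1000):
        print(n, round(sigma / math.sqrt(n), 3))
    # 10 -> 0.316, 100 -> 0.1, 1000 -> 0.032: each tenfold increase in
    # sites buys a smaller absolute improvement than the one before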
Otherwise, your second statement may well be true, but it is not directly due to the relatively small amount of data that is missing. I’d guess it’s due to data missing from particularly good or bad sites.
And at root that’s the problem with GISS. If they were removing sites because they were known bad sites, then I would be in favour, although with the proviso that selection or deselection of sites be open, transparent and auditable. At the moment it isn’t, and I’m reasonably sure that sites where the trend doesn’t conform to the ‘known’ GW trend are biased against (unconsciously or otherwise) and tend to be either eliminated or adjusted.
And,
skeptics, then, are comfortable with accusing Hansen of fraud and of assuming a major conspiracy in play.
Experimenter bias is widespread and has been documented in many scientific studies, which is why double blind studies are the norm in areas where scientists have to make decisions/assesments, and replication is a cornerstone of science. There is no need to introduce fraud or a conspiracy as an explanation.
Otherwise, I agree with Anthony, and if fraud has occurred, it is in Hansen’s (deliberate) failure to release his data and methods for analysis and replication.

July 20, 2008 5:36 pm

Anthony, your reply to the comment by IceA at 16:02:02 is priceless. The insight that you, Steve Mc, and a few others give to the behind the scenes antics of AGW disciples is invaluable.

Michael Hauber
July 20, 2008 5:57 pm

John V’s analysis (which was updated to give a better agreement with the GISS trend by Steve Mc) may be only 17 stations. But are there any other analyses based on a larger number of stations that show a problem with the GISS trend?
So maybe when all the stations are updated and an analysis is done we will find there is a real problem with the data. But until such an analysis is done, any claims that the climate record is rubbish seem premature.
And how do you know the USA has the best climate stations in the world? Australia has its climate network documented. Some of these have concrete footpaths, or fences, or watered lawns within a few metres, but none are enclosed in a sea of concrete or on a rooftop amongst air con units, as I’ve seen in photos on this site.
Perhaps the reason that the continental USA temperature record shows minimal to no global warming trend is because they are the ones that can’t measure temperature properly.

Bob B
July 20, 2008 5:59 pm

Philip_B – look at the surface data and the satellite data on the link attached. Canada and South Africa appear cold in the March 2008 satellite data. The surface data is missing data and relies on the hot data in Asia. NOAA and GISS reported a MUCH warmer March 2008 than UAH and RSS – the data here shows why.
http://www.theregister.co.uk/2008/06/05/goddard_nasa_thermometer/print.html

Dishman
July 20, 2008 6:04 pm

IceAnomaly wrote:
They are documented in a series of publications outlining the rationale for the adjustments, and the way the adjustments are made. They are also perfectly documented in the computer code.
The code does not match the publications.
As best I can tell (based on my FOIA request), GISTEMP is not subject to any kind of Software Assurance process, so there is no basis for asserting that the code matches the documentation. Any claims to the contrary are at best a guess.

Paul Linsay
July 20, 2008 6:20 pm

counters at (14:41:10) : “Satellites don’t measure temperature.”
Funny thing about that. They use the same technology and methods used to measure the 2.7 K background radiation left over from the Big Bang. In fact, the 2.7 K background is used for calibration. [ Read section 4 about calibration here http://daac.gsfc.nasa.gov/AIRS/documentation/amsu_instrument_guide.shtml ] It’s remarkable how using microwaves to measure the temperature of the Big Bang is worth a couple of Nobel Prizes in Physics (the real kind) but only opprobrium when used to measure the Earth’s temperature.

MarkW
July 20, 2008 6:20 pm

While JohnV did limit his analysis to stations receiving a class 1 or 2 rating, he made no attempt to control for UHI influences.

July 20, 2008 6:35 pm

counters said:

“Remember, just as proponents need temperature records to prime their models and test out the intricacies of their theory, skeptics need accurate temperature records as well, because there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety.”

See what he’s doing there? counters is turning the Scientific Method on its head by assuming that the AGW hypothesis must be true until/unless “the basic physics behind AGW” are falsified. But “basic physics” has long since withstood the peer review/falsification process. It is AGW/climate disaster which is the unproven hypothetical. In other words, hypothesizing AGW disaster is the same as asking any speculative “What if…?” question.
There is no proof of AGW leading to planetary catastrophe [and make no mistake, the stated hypothesis is that AGW will lead to runaway global warming/climate disaster. If AGW were only a hypothesis of a very small fraction of a degree change, which is probably the case, then AGW would only be a small and unimportant footnote in an obtuse technical journal].
In fact, it is AGW/planetary disaster that has been put forth as a new hypothesis. Therefore, those hypothesizing anthropogenic global warming leading to a planetary climate catastrophe have the burden of proof. Skeptical scientists have no duty whatever to falsify the status quo: natural climate change. The current climate is well within historical norms, and screaming “But what if…!” proves nothing.
These word games indicate desperation. After all is said and done, the climate is cooling, not warming. And the real-world record proves that CO2 has no measurable effect.

Mike C
July 20, 2008 7:03 pm

MarkW (18:20:51) :
“While JohnV did limit his analysis to stations receiving a class 1 or 2 rating, he made no attempt to control for UHI influences.”
Not correct, Mark; that was the first thing Kristen corrected him on. He demonstrated some embarrassment about it as well.

John McLondon
July 20, 2008 7:26 pm

On his web page http://www.uah.edu/News/climatebackground.php Christy made the following comment, I quote:
“”In areas where you have high resolution, well-maintained scientific collection of temperature data, the satellites and the surface data show a high degree of agreement,” said Christy. “Over North America, Europe, Russia, China and Australia, the agreement is basically one-to-one.””
When Christy himself comments that the satellite and surface station data show a high degree of agreement, it seems difficult to make an effective claim that surface stations have major problems.
It also says:
“”Global” surface thermometer networks show a warming trend of approximately 1.7 degrees Celsius per century — about 3° Fahrenheit.
The satellite data show a warming trend of 1.4 C or about 2.52° F per century.” and this difference is explained: “A recent analysis of the surface and satellite datasets hints that the apparent disagreement might have as much to do with coverage as with differing trends at different altitudes.”

Evan Jones
Editor
July 20, 2008 7:28 pm

Ooooh.
So IceAnomaly is our old friend, Lee.
Well, well, well. That correlates.
On the point discussed, would it not be possible for UAH or RSS to interpolate polar data from a full swath of surrounding data more accurately than GISS does from its spotty surface coverage?
And aren’t the polar areas not covered by satellite around 1% of the surface or something?

paminator
July 20, 2008 7:31 pm

Paul Linsay- Well said.
Here is why I trust the MSU satellite data over any other temperature dataset:
-Satellite data provides better global coverage than any other method in use.
-Satellite data has the most transparent error analysis and self-correction procedures in place of any temperature dataset. UAH and RSS personnel actually cooperate to find and correct errors.
-The satellite historical record does not get re-adjusted every month a new data point is added.
-Metadata for satellite measurement equipment is available. Metadata for surface temperature and sea temperature datasets is an appalling mess.
-Satellite-mounted microwave sounding units are highly reliable, accurate instruments, as compared with canvas buckets, rotted out Stevenson screens and badly sited MMTS units.
I think it would be very educational if someone could track down the daily (or better yet, hourly) temperature readings from a surface station site over the last ten years, to get some perspective on how small a trend in temperature change is trying to be coaxed out of the huge temperature variations that occur.
Of course, if you really don’t like satellite or surface station data, you could try inferring temperature from wind gradients measured using radiosondes. Or try inferring temperature from recent tree ring patterns. Or like Hansen, inferring arctic surface temperature *measurements* using GCM output! IIRC these approaches claim similar error bars to those for the MSU dataset. Isn’t climate science wonderful?

Evan Jones
Editor
July 20, 2008 7:43 pm

Perhaps the reason that the continental USA temperature record shows minimal to no global warming trend is because they are the ones that can’t measure temperature properly.
That would seem unlikely. The great majority of the biases are to the warm side and occurred over time as well sited stations were overtaken by urban, suburban, and exurban creep, thus exaggerating not only the temperatures, but the trends.
And a huge number of CRN4 violations occurred from the 1980s to date, when better sited Stevenson screens were replaced by MMTS units located right next to buildings on account of cable issues.

steven mosher
July 20, 2008 7:47 pm

The proper way to do the splice is to normalize the HadCRUT data to the UAH anomaly period (average HadCRUT from 1979 to 1998 and subtract that from itself).
But then I am not at all sure one can splice these two records, as they measure different things.
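For concreteness, a sketch of the re-baselining mosher describes, run on a placeholder series since the real HadCRUT3 file layout isn't shown in this thread:

    import numpy as np
    import pandas as pd

    # Stand-in data in place of the real monthly HadCRUT3 anomalies
    idx = pd.date_range("1850-01-01", "2008-06-01", freq="MS")
    hadcrut = pd.Series(np.random.randn(len(idx)) * 0.2, index=idx)

    def rebase(series, start, end):
        # express anomalies relative to the mean over [start, end]
        return series - series.loc[start:end].mean()

    # Zero HadCRUT on the UAH 1979-1998 base period instead of shifting UAH
    hadcrut_on_uah_base = rebase(hadcrut, "1979-01-01", "1998-12-01")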

Mike C
July 20, 2008 8:21 pm

John McLondon says
“When Chrsity himself commented that the satellite and surface station data shows a high degree of agreement, it seems difficult to make an effective claim that surface stations have major problems?”
John,
The 0.3 degree Celsius difference between the satellite and surface records is pretty big considering global warming this century is about 0.7 degrees C (HadCrut). Now take a look at Hansen’s or NCDC data; they are warmer than HadCrut. The difference is even bigger. I’d say the surface stations are warmer for a reason. Barbecue ribs, anyone?

Mike Bryant
July 20, 2008 8:40 pm

I propose a new greeting for family and friends,
“Have you noticed that global warming stopped?”, or
“Have you noticed that the weather is getting cooler?”
Maybe someone can come up with a catch phrase that will catch on like:
“Have a nice day” 🙂

John McLondon
July 20, 2008 9:15 pm

Mike C.,
Yes, absolutely. But I pointed that out to bring up the explanation they gave: “A recent analysis of the surface and satellite datasets hints that the apparent disagreement might have as much to do with coverage as with differing trends at different altitudes.”
If different trends at different altitudes are the problem, then it is difficult to blame the difference on surface stations. The coverage may be a different story; I do not know enough about that to comment on whether it would bias toward higher or lower temperature.
My main comment was on Christy’s comment that satellite and surface station measurements (at least for the U.S., China, Europe, etc) of temperature are very close.

John McLondon
July 20, 2008 9:18 pm

In any case we are looking at trends and anomalies, so absolute numbers may not be that important.
REPLY: Oh sure they are; remember that an “absolute number” from a weather station or weather station network is used every time we get:
1) A newspaper article saying “New record high in Podunk, USA today sure sign of global warming”
2) A press release from NOAA, or GISS, or HadCRUT that says “Xth warmest year on record”. That’s done from a combination of absolute numbers
3) A TV station does a story on the “heat wave” and cites temperatures all around the city, but without caring about where those temperatures were measured (rooftop, parking lot, downtown, bank sign, etc.), telling the public only the numbers, not the accuracy. They are only interested in the absolute highest numbers when this happens, and I speak from experience. See this article from TV meteorologist Brian Sussman on that issue.
Absolute numbers are a big deal to the public and the press, don’t let yourself believe otherwise. -Anthony

Evan Jones
Editor
July 20, 2008 9:19 pm

Mike C: 3°F, only 1.7°C. But that’s around a third or so of the smoothed increase since 1979, so it’s quite significant.

Evan Jones
Editor
July 20, 2008 9:20 pm

Divide that by 10! 0.3. 0.17!

IceAnomaly
July 20, 2008 9:22 pm

MikeC,
Those differences are because the satellites, HadCRUT, and GISS use different baseline reference periods when they compute anomalies. They are on different scales.

Evan Jones
Editor
July 20, 2008 9:22 pm

That difference is c. 0.05 per decade, or half a degree per century. Not small potatoes.

Richard
July 20, 2008 9:22 pm

“But then I am not at all sure one can splice these two records as they measure different things”
That didn’t stop Mann et al. from coming up with the “hockey stick”!

Editor
July 20, 2008 9:44 pm

Philip_B (17:26:31) :
“Bob B, no proof is required. It is elementary statistics. As sample size increases, the marginal gain in precision declines.
The difference in precision between 10 and 100 sites is significant; the difference between 100 and 1,000 sites isn’t.”
Okay, by this logic I can take 100 elevations around the United States and draw a precise topographic map. Clearly this logic is bogus.
1) I think you are referring to things like polling voters with a truly random sampling algorithm. It takes surprisingly few samples (like 100 or so) to come up with an accurate result. For something where the result has more data points, e.g. so that models can handle convection or that your map includes mountain ranges, then you will need many more samples.
2) Precision is merely the “repeatability” of a measurement. Accuracy refers to how close to the “truth” a measurement is. (An accurate measurement implies precision.) Precision is nice, accuracy is better.
I was going to apologize for not providing links to support my comments, but I’ll just recycle your “no proof is required” assertion.

Mike C
July 20, 2008 10:06 pm

John McLondon,
Please allow me to patiently address your repeating of the “maybes,” “possiblys,” “could bes” and “might bes.”
Okay, here I am looking at a temperature station next to a barbecue. My own two eyes are telling me there is a barbecue there. There are no maybes, possiblys, could bes or might bes about it. It is definitely there. Let’s compare this to a press story from AGU where a scientist is speculating. Okay, hmmmmm, where should the evidence take me at this time? Oh, yes, maybe my eyes are being bought off by ExxonMobil BWAAAAAAHAHAHAHA ::::Koff:::: pardon me. Anyway, if about half of the temperature increase is due to human error (and I have to side with the balloon and satellite data, because I doubt there was a kegger going on in flight control where a frozen alcoholic beverage was spilled on the sensor), and the PDO is ready to shift, that means the human signal riding on the natural climate signal cannot be more than 0.15 degrees per century, assuming that the other ocean circulations which are in or coming out of warm phases are not adding natural warming to the climate system at this time.
IceAnomaly… or Lee… or whatever your name is, I’ve already run the numbers myself with the corrected baselines; they are between 0.2 and 0.4, with GISS being the warmest, under several different smoothing methods and so on. No doubt about it, the balloons and satellites are close together, with the surface temps being the warmer.
And by the way, both of you are invited to the barbecue at the temperature station when this is all over, you can drink it off.

Manfred
July 20, 2008 11:08 pm

I found this study interesting; it shows a significant correlation between ground-measured temperature increases and socioeconomic factors like population growth, GDP, average income, etc.
http://www.uoguelph.ca/~rmckitri/research/jgr07/M&M.JGRDec07.pdf
The conclusion is that human effects on temperatures are not adequately corrected for, and the authors conclude that the 1980-2002 global temperature increase over land, as measured and adjusted, was too high by a factor of approximately 2.
-> so the temperature “adjustments” in developing countries could be a much more serious problem than elsewhere.
With the rapid third-world development of the last couple of years, this may have worsened significantly after 2002.

mondo
July 20, 2008 11:44 pm

Re GISS adjustments, I would be interested to know the distribution of adjustments, i.e., whether they are positive or negative. My understanding (as a layman, I acknowledge) of adjustment processes is that, if fairly done, they tend to cancel each other out, and the resultant curve isn’t all that much different from the starting curve, but the confidence levels are improved.
A statistician I was talking to last night suggested that if nearly all of the “adjustments” are in one direction, then that is a signal that the adjuster is introducing bias. I wonder if anybody has been able to analyse the GISS adjustments in this way?
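One simple way to run that analysis is a sign test: under unbiased adjustment, positive and negative adjustments should be roughly equally likely. A sketch with invented counts (not actual GISS figures):

    from scipy.stats import binomtest

    n_positive, n_total = 85, 100  # hypothetical tally of adjustment signs
    result = binomtest(n_positive, n_total, p=0.5)
    print(result.pvalue)  # a tiny p-value means the signs are lopsided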

fred
July 21, 2008 12:10 am

there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety.

This is a commonly made and totally invalid argument. AGW is not a matter of physics. It is worth explaining as people so often misunderstand it.
What is physics is that in a theoretical atmosphere consisting of gases in the proportions of those on earth, a doubling of CO2 with no other changes would lead to about a 1 degree C rise in atmospheric temperature, because of the increased absorption of heat by the CO2. It is also physics that a rise in temperature of a theoretical atmosphere with no other changes will lead to an increase in water vapour. And that increase in water vapor with no other changes will also lead to absorption of more heat and a rise in temperature.
Nevertheless, AGW is not ‘just physics’, and here is why. It’s the difference between the laws which govern the operation of an engine and the design of a vehicle. How the climate system reacts to the increase in atmospheric temperature caused by the increased CO2 is not just physics, in the same way that how the car reacts to increased fuel flow is not just physics. The one does not allow us to predict the other. It could be that the increase in speed is a function of increased fuel flow. Or it could be, with no violation of the laws of physics, that factors such as wind resistance, rolling resistance, governors, etc. either limit or eliminate any speed increase.
What is ‘just physics’ is that the energy content of the fuel going to the engine, and thus the power output of the engine, has increased. But it does not follow from that, as a matter of physics, that speed will increase.
Similarly, it is correct to say that an atmosphere with more CO2 must increase in heat uptake. However, whether this leads to much or any warming depends on what happens to the system as a whole in response. It could be that increases in water vapor amplify it. Or it could be that convection, cloud and rain eliminate it. It could be that over time the average water vapor content rises or falls, without there being any violation of the various laws of physics governing the behavior of gases. It would not even, as far as I know, be a violation of the laws of physics for an increase in CO2 to lead to cooling. It might require an unlikely combination of circumstances, and I don’t believe it to be the case, but I don’t think there is anything contrary to the laws of physics about it.
If on warmist blogs you question the connection between a rise in CO2 and a rise in global temperatures, you will frequently be told to google various of these laws of gases. They are not exactly irrelevant, one should know them, but they are not the crux of the matter. The crux of the matter is not the laws governing gases, but how, given them, the climate actually works, and there is nothing that says it has to work in such a way that a small amount of warming amplifies the amount of water vapor over a long enough period that the predominant feedbacks are positive.
Whether it does work like that or not is a matter of how the system works in detail, how it is constituted. It’s not ‘just physics’, in exactly the same way that the shape of the car body, and thus how much wind resistance it has, is not ‘just physics’ either. Of course, the resistance of a given shape is just physics. But what the shape happens to be is not.
Still less is it, as some bloggers repeat endlessly, a matter of 200 year old physics.
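For reference, the roughly 1 degree C per doubling that fred cites can be reproduced from textbook values: the simplified forcing expression dF = 5.35 * ln(C/C0) W/m^2, combined with the no-feedback response dT = dF / (4 * sigma * T^3) at the ~255 K effective emission temperature. A quick check:

    import math

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EFF = 255.0    # effective emission temperature, K

    dF = 5.35 * math.log(2.0)             # ~3.7 W/m^2 for doubled CO2
    lam = 1.0 / (4.0 * SIGMA * T_EFF**3)  # ~0.27 K per W/m^2, no feedbacks
    print(dF * lam)                       # ~1.0 K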

Evan Jones
Editor
July 21, 2008 12:49 am

Re GISS adjustments, I would be interested to know the distribution of adjustments, ie, whether they are positive or negative.
Well I can answer half of your question but not the other half.
For its “raw” data, GISS uses NOAA adjusted data. For the USHCN-2, the adjustments were a whopping +0.42C. I looked at the USHCN-1 version that was +0.3C and found that (in Gore’s own words), “Everything that’s supposed to be UP is DOWN and everything that’s supposed to be DOWN is UP.”
According to NOAA, all these site violations the Rev has been documenting made the temperatures DROP. So they had to be adjusted upwards.
To add insult to injury, the UHI adjustment was -0.1 FAHRENHEIT!
I don’t know what GISS did to those outrageous NOAA numbers. Unlike NOAA with USHCN-1, so far as I can tell, they are much too smart to publish a bottom line. (USHCN-2 wised up and stopped publishing the amount and direction of their adjustments. They learned a bitter lesson when their USHCN-1 adjustment graphs became one of the most quoted graphs by skeptics. And USHCN-2 is almost half again worse! But they didn’t publish that; it had to be derived by map function diddling. I didn’t run that, a poster on this blog ran the numbers.)

Steve Keohane
July 21, 2008 4:55 am

Philip_B: “no proof is required. It is elementary statistics. As sample size increases, the marginal gain in precision declines.”
While that is a broadly true statement, one needs an initial sample size large enough to capture the deviation within the population being measured. A population with a sigma of 0.001 needs a smaller sample than one with a sigma of 9.

SunSword
July 21, 2008 5:57 am

When the amount and direction of adjustments to ground based stations is not published, the information by definition cannot be subject to peer review, since the “peers” (actual scientists who study the climate) cannot review what is withheld. This is a fundamental violation of the checks and balances of the scientific method, and in fact is not science at all but merely politics.

Tony Edwards
July 21, 2008 7:11 am

Steve McIntyre has, on various occasions, looked into the adjustments and the way that not only are current numbers altered in accordance with various semi-documented codes, but some of the historical data are altered as well, rather like modifying your date of birth at each birthday. One post is at
http://www.climateaudit.org/?p=3201#more-3201
Also in another post
http://www.climateaudit.org/index.php?paged=3
he points out that
“Hansen also likes to zero things to the present (resulting in constant re-writing of history). It appears that the adjustment is zeroed on the last year of the M0 segment, by subtracting the last adjustment value in the range.”
So SunSword is perfectly correct, “This is a fundamental violation of the checks and balances of the scientific method, and in fact is not science at all but merely politics.”

Brendan
July 21, 2008 8:31 am

“no proof is required. It is elementary statistics. As sample size increases, the marginal gain in precision declines.”
Most people (when they think of it) think of statistics in terms of sampling – assessing like or similar populations. Unfortunately, because of the underlying varied terrain and fluid dynamics of the earth’s atmosphere, many more samples are required. One need only look at the daily temperature map of the US (no matter how badly presented!) to see the variability. Station temperature assessment doesn’t really fall into sampling theory so much as the geostatistics arena (which is why Hansen uses other stations to adjust). Typically, though, these sorts of adjustments (and there are some Bayesian statistics that should be thrown in, although I’m not sure it’s formalized) also should produce estimated errors. Based on what I’ve read, Hansen doesn’t like to use reproducible approaches, and tends to modify his approach as he goes along… FWIW

The engineer
July 21, 2008 9:27 am

Anybody come across ZapperZ at Physics and Physicists? Just had a discussion (about the APS) with a guy who claims to be a scientist. He refused to talk about anything except my “apparent” attempt to accuse scientists of bowing to “grant pressure”.
I mentioned about 20 times that CO2 hasn’t been proven to drive temperature, but he still kept coming back to my “accusations”, totally ignoring the actual science. Typical alarmist speak.

vincent
July 21, 2008 9:50 am

I think Linsay answered the question very succinctly. Iceman, can you answer this, re satellites don’t measure temperature?

vincent
July 21, 2008 9:54 am

Bryant: Of course people will start noticing, that’s why us skeptics aren’t really worried. It just ain’t getting hot. I haven’t seen any change in precipitation or temperatures in Australia for the past 20 years! You can always look at the BOM graphs LOL

Evan Jones
Editor
July 21, 2008 10:01 am

not only are current numbers altered in accordance with various semi-documented codes, but some of the historical data is altered as well, rather like modifying your date of birth at each birthday.
So THAT’s what that line in the FILENET code means!
IF X>=40 THEN X=39

The engineer
July 21, 2008 10:04 am

Vincent – satellites measure infrared radiation, which normally corresponds to temperature. But if you take a reading of a mirror or glass, then you often get the temperature reading of the reflected object, not the glass. Most infrared cameras need to be adjusted for the relative “blackbody-ness” (emissivity) of the object they are measuring.

Barbara
July 21, 2008 10:05 am

“I mentioned about 20 times that CO2 hasn’t been proven to drive temperature, but he still kept coming back to my “accusations”, totally ignoring the actual science. Typical alarmist speak.”
Re the low quality of debate, I once contributed to a discussion and received the reply “you live in Texas and work for Haliburton”.
Well, silly me; I thought I lived in the UK, and worked for myself. And I have no idea who or what Haliburton is. Sounds like a type of fish.

silencedogood
July 21, 2008 10:12 am

“This is a commonly made and totally invalid argument. AGW is not a matter of physics.”
I think the climate system is entirely a matter of physics, but the problem is the models do not, and probably can not be designed to represent all of the physical processes, because the relationships and feedbacks between the physical processes are not fully understood. But, all of the processes involved are physical processes conforming to the laws of physics, and can be described mathematically.
Using the same car analogy, we could accurately model the car and forecast precisely what speed the car will attain from a given fuel flow, but we have to know the composition of the fuel and air mixture, the efficiency of the engine and drivetrain, and the aerodynamics of the car. If we only know with certainty the fuel composition and the efficiency of the engine, and have limited information about the rest of the drivetrain and the aerodynamic properties of the car, then we cannot accurately forecast the precise speed.
So regarding the climate system and GCMs, the basic physics is known (2XCO2 = 1C-1.2C warming direct from CO2 forcing), but the additional feedbacks have not been precisely determined. If they could be determined, then they could be represented in a numerical model based entirely on physics, and that model could accurately forecast the climate, given the correct inputs. IMHO, such a model is well beyond our ability in the foreseeable future, and probably for all time, and still wouldn’t be able to forecast future climate because the forcing inputs for the future can never be known.

Tony Edwards
July 21, 2008 11:47 am

Evan Jones (10:01:52) :
So THAT’s what that line in the FILENET code means!
IF X>=40 THEN X=39
Nice one, Evan

Gary Gulrud
July 21, 2008 12:33 pm

“See what he’s doing there? counters is turning the Scientific Method on its head by assuming that the AGW hypothesis must be true until/unless “the basic physics behind AGW” are falsified.”
Read Arrhenius’ 1896 paper; he made this very argument. The “science is settled” claim has been inadequate for assent at every turn.
Now it is a purely political battle, and we’ve every reason to suspect diplomacy will not be successful. Let’s set a time limit.

Jeff Alberts
July 21, 2008 1:43 pm

“This is a commonly made and totally invalid argument. AGW is not a matter of physics.”
This is absolutely true. It’s a matter of metaphysics, computer entrail reading, data voodoo, and binary tea leaves.

Manfred
July 21, 2008 3:43 pm

I think Hafemeister’s tutorial is quite interesting as a starting point on AGW theory.
The critical and disputed part of his tutorial appears to be contained in a single sentence, which he presented without referencing:
“One can attribute 21 °C of that warming to the IR trapping of water vapor, 7 °C to CO2 and 5 °C to other gases.”
I think this is a spectacular contribution for such a rare trace gas.

Philip_B
July 21, 2008 5:07 pm

“Unfortunately, because of the underlying varied terrain and fluid dynamics of the earth’s atmosphere, many more samples are required.”
Were we looking for a regional or local effect, then I would agree, but we are looking for a global signal, and such a signal must (in a statistical sense) be present in the average of less than 100 sites. Assuming of course there is no systematic bias. And if there is a systematic bias (and it’s highly likely there are several), more sites doesn’t solve the problem, unless you know the source of the biases. And if you do you should be eliminating sites with known biases. Adjustments just produce another source of error.
Put simply, if you cannot find a clear global warming signal in a 100 (random) locations, then you are unlikely to find it in a sample of 1000s. And if you do, it is proof the effect is small.

Philip_B
July 21, 2008 5:29 pm

I really wish people would not argue that because CO2 is a trace gas, its (low) concentration cannot have an effect.
Chlorofluorocarbons are believed to have a greenhouse warming effect equal to one fifth of the CO2 effect, even though they are measured in parts per trillion. That’s a million times less than CO2.

Robert Wood
July 21, 2008 6:05 pm

Fred, sorry but you have reminded me of an old joke.
A mathematician, a statistician and a physicist are at a horse race, and a punter, talking over a beer, asks them whether they know which horse will win.
Well, the statistician talked of form and going and handicap and concluded that it would, possibly, be this horse, but he couldn’t be certain; the weather may change.
The mathematician talked of probabilities and the punters who usually bet on a sure thing, and therefore he would back the favourite… but it is a horserace, after all, so who knows?
The physicist boldly stated: I can tell you precisely who will win, assuming a spherical horse.

Editor
July 21, 2008 6:43 pm

Evan Jones (10:01:52) :
“not only are current numbers altered in accordance with various semi-documented codes, but some of the historical data is altered as well, rather like modifying your date of birth at each birthday.
So THAT’s what that line in the FILENET code means!
IF X>=40 THEN X=39”
Almost before my time. RIP Jack Benny, 1894-1974, dead at 39.

Mike Bryant
July 21, 2008 6:57 pm

Great, short youtube video:
NewsWatch 2008: UC Davis atmospheric scientist Richard Snyder reports from the UC Davis weather station that the Sacramento Valley’s weather is changing, but it may not be experiencing climate change…

John McLondon
July 21, 2008 8:40 pm

Mike C,
>> “And by the way, both of you are invited to the barbecue at the temperature station when this is all over, you can drink it off.”
That sounds like a great plan. Just tell us where, and we will be there!!
>>”Please allow me to patiently address your repeating of the “maybes” “possiblys” “could bes” and “might bes”.”
Those maybes are not mine; I am just quoting John Christy, who is a leading expert in satellite temperature measurements and by no means an AGW supporter. (I myself use maybes a lot, since I believe that in life we do not know much for sure, beyond some probabilities. It is possible for a gas mixture to spontaneously separate itself into its components, but it is not that probable. I also believe in miracles, occasional perturbations of the known natural law – in my work I see a few of them occasionally.) But just to make it clear, when I read Christy’s statements, this is what I gather (one more time): (1) he implied that the U.S., China, etc. have high-resolution, well-maintained scientific collection of temperature data; (2) there is a one-to-one correspondence between satellite and ground measurements in those regions; (3) part of the disagreement comes from different trends at different altitudes, and surface stations are not responsible for that; (4) the other part is from the difference in coverage; (5) the greatest disagreement is in the tropics, where there are fewer weather stations (including Central Africa and South America); (6) I have not seen any comments from him attributing the disagreement to the quality of the surface stations or to the correction methods.
Evan and I went through this a week ago and we agreed to disagree. When I look at the figure Anthony posted in the Forum, I do not see any difference between the various curves. Evan disagreed.
http://wattsupwiththat.files.wordpress.com/2008/03/giss-had-uah-rss_global_anomaly_refto_1979-1990_v2.png
Also, I am well aware of Anthony’s opinion (as a leading expert in this area) on surface stations. So it seems that for a compatible conclusion we have to assume that Hansen’s correction algorithm somehow works. If Hansen made his black-box algorithm public, it would remove a lot of doubt; it might even help improve the algorithm.
Anthony >> “2) A press release from NOAA, or GISS, or HadCRUT that says “Xth warmest year on record”. That’s done from a combination of absolute numbers….
Absolute numbers are a big deal to the public and the press, don’t let yourself believe otherwise. –Anthony”
OK, I agree, if you are going to put it that way, although I wish it were not the case. It seems to me (although I have not looked into it carefully) that when using a finite number of measurements (whether surface stations or satellites) to reduce a continuous temperature surface on Earth to an average number, the precision (or confidence level) of the result cannot be high enough to claim that one year is a tiny bit warmer than another, unless the difference between those two years is appropriately large. But we do make statements as you said, so I have to agree with you.

July 21, 2008 8:47 pm

Mike Bryant, thank you for providing a perfect example of how the media distorts everything.
The scientist states clearly that weather events are not climate change. But during his comments, the media shows pictures of flooding and disaster.
That is also known as propaganda, isn’t it?

July 21, 2008 9:54 pm

“Satellites do not measure temperature as such. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data to calculate temperature trends have obtained a range of values.” – Wikipedia
Sounds accurate.
I haven’t studied the science of the “splice”, nor would I understand the technical adjustments which took care of the loose ends, but wasn’t it satellite data (in ’78 or ’79) which first introduced “evidence” of global warming?

July 21, 2008 10:50 pm

Bill Marsh says,
“Of course the best metric for measuring ‘global warming’ is not surface temp, it is ocean heat content.”
And as we embrace the next series of technological “advancements”, won’t we be putting our faith blindly in the next generation of adjusters – to compensate for the problems with suckerfish, for example?
http://www.examiner.com/a-1484359~Little_yellow_submarine_studies_ocean.html
After reading about the issues with buckets and engine inlets at CA, I wondered if this might not be an elegant solution. But if the data takes an adjuster with his own private algorithm to digest it for the public, this approach, too, is doomed to controversy.

Gary Gulrud
July 22, 2008 11:55 am

“Satellites do not measure temperature as such. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. ”
Mauna Loa CO2 is measured via irradiances and not directly by chemical analysis. While the objection may be material, it is ad hoc and obtuse.

Colin
July 22, 2008 1:58 pm

Keep in mind the ground stations keep a lot of people employed and require a significant budget. This gives the manager/boss importance; by doing away with the budget and resources, the manager would not be able to justify their own salary.

Gary Gulrud
July 25, 2008 11:07 am

“So, it seems that for a compatible conclusion we have to assume that Hansen’s correction algorithm somehow works.”
I think we have some de-programming to do over at CA. I wonder what the definition of ‘works’ is from which to commence.

Gary Gulrud
July 25, 2008 11:15 am

“Chlorofluorocarbons are believed to have a greenhouse warming effect equal to one fifth of the CO2 effect, even though they are measured in parts per trillion.”
You’ve traded one sore point for another! It would be better to ban all GHG discussion until empirical measurements were gathered. Calculated transfer functions are at the bottom of this sad story.

John McLondon
July 25, 2008 6:35 pm

GG. “I think we have some de-programming to do over at CA.”
That is for sure!!!! 🙂 I will certainly do that if I start doubting AGW!
“I wonder what the definition of ‘works’ is from which to commense.”
As you know, the corrections are doing what they are supposed to do – filtering out certain biases. Otherwise I do not know how we can explain the fact that all four curves are almost identical.

Fred
July 29, 2008 6:24 am

More anecdotal news wrt sea ice:
More ice than expected in parts of the Arctic
http://www.barentsobserver.com/?cat=16149&id=4498513