Why Does NASA GISS Oppose Satellites?

A Modest Proposal For A Better Data Set

Reposted from Warren Meyer's website, Climate Skeptic.

One of the ironies of climate science is that perhaps the most prominent opponent of satellite measurement of global temperature is James Hansen, head of … wait for it … the Goddard Institute for Space Studies at NASA!  As odd as it may seem, while we have updated our technology for measuring atmospheric components like CO2, and have switched from surface measurement to satellites to monitor sea ice, Hansen and his crew at the space agency are fighting a rearguard action to defend surface temperature measurement against the intrusion of space technology.

For those new to the topic, the ability to measure global temperatures by satellite has only existed since about 1979, and is admittedly still being refined and made more accurate.  However, it has a number of substantial advantages over surface temperature measurement:

  • It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
  • It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
  • It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.

Anthony Watts has done a fabulous job of documenting the issues with the surface temperature measurement network in the US, which one must remember is the best in the world. Here is an example of the problems in the network. Another problem, one that Mr. Hansen and his crew are particularly guilty of, is making a number of adjustments in the laboratory to historical temperature data that are poorly documented and that have the result of increasing apparent warming. These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.

What really got me thinking about this topic was this post by John Goetz the other day taking us step by step through the GISS methodology for “adjusting” historical temperature records. (By the way, this third-party verification of Mr. Hansen’s methodology is only possible because pressure from folks like Steve McIntyre forced NASA to finally release their methodology for others to critique.)

There is no good way to excerpt the post, except to say that when it’s done, one is left with a strong sense that the net result is not really meaningful in any way.  Sure, each step in the process might have some sort of logic behind it, but the end result is such a mess that it’s impossible to believe the resulting data have any relevance to any physical reality.  I argued the same thing here with this Tucson example.

Satellites do have disadvantages, though I think these are minor compared to their advantages  (Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.  This is also consistent with the fact that Mr. Hansen’s historical adjustments tend to be opposite what most would intuit, adding to rather than offsetting urban biases).  Satellite disadvantages include:

  • They take readings of individual locations fewer times in a day than a surface temperature station might, but since most surface temperature records only use two temperatures a day (the high and low, which are averaged), this is mitigated somewhat (see the sketch after this list).
  • They are less robust — a single failure in a satellite can prevent measuring the entire globe, where a single point failure in the surface temperature network is nearly meaningless.
  • We have less history in using these records, so there may be problems we don’t know about yet.
  • We only have history back to 1979, so it’s not useful for very long-term trend analysis.
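On the first point, here is a quick Python sketch of what the (high + low) / 2 convention does: it approximates, but does not equal, a true daily mean. The hourly readings are invented for illustration:

# Toy comparison of the min/max daily mean used by most surface records
# against a true 24-hour average. The hourly readings are invented.
hourly = [7, 6, 6, 5, 5, 6, 8, 11, 14, 17, 19, 21,
          22, 23, 23, 22, 20, 18, 16, 14, 12, 10, 9, 8]

minmax_mean = (max(hourly) + min(hourly)) / 2   # the two-reading convention
true_mean = sum(hourly) / len(hourly)           # full 24-hour average

print(f"min/max mean = {minmax_mean:.2f} C")    # 14.00 C
print(f"24-hour mean = {true_mean:.2f} C")      # 13.42 C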

This last point I want to address.  As I mentioned above, almost every climate variable we measure has a technological discontinuity in it.  Even temperature measurement has one between thermometers and more modern electronic sensors.  As an example, below is a NOAA chart on CO2 that shows such a data source splice:

[Figure: NOAA atmospheric carbon dioxide record, showing the splice between data sources]

I have zero influence in the climate field, but I would nevertheless propose that we begin to make the same kind of data source splice with temperature.  It is as pointless to continue relying on surface temperature measurements as our primary metric of global warming as it is to rely on ship observations for sea ice extent.

Here is the data set I have begun to use (Download crut3_uah_splice.xls).  It is a splice of the Hadley CRUT3 historic database with the UAH satellite database for historic temperature anomalies.  Because the two use different base periods to zero out their anomalies, I had to reset the UAH anomaly to match CRUT3.  I used the first 60 months of UAH data and set the UAH average anomaly for this period equal to the CRUT3 average for the same period.  This added exactly 0.1C to each UAH anomaly.  The result is shown below (click for larger view).

[Figure: Spliced CRUT3/UAH global temperature anomaly series]

Below is the detail of the 60-month period where the two data sets were normalized and the splice occurs.  The normalization turned out to be a simple addition of 0.1C to the entire UAH anomaly data set.  By visual inspection, the splice looks pretty good.

[Figure: Detail of the 60-month normalization period where the splice occurs]
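For anyone who wants to reproduce this, here is a minimal Python sketch of the rebaselining and splice as described above. The list-of-(yyyymm, anomaly) layout is illustrative only, not the actual format of the spreadsheet:

# Minimal sketch of the CRUT3/UAH splice described above.
# Series are lists of (yyyymm, anomaly); this layout is illustrative.

def rebaseline_and_splice(crut3, uah, overlap_months=60):
    """Shift the UAH anomalies so their mean over the first
    `overlap_months` matches the CRUT3 mean over the same months,
    then splice: CRUT3 before the satellite era, UAH after."""
    uah_dates = {d for d, _ in uah[:overlap_months]}
    crut3_overlap = [a for d, a in crut3 if d in uah_dates]
    offset = (sum(crut3_overlap) / len(crut3_overlap)
              - sum(a for _, a in uah[:overlap_months]) / overlap_months)
    uah_adjusted = [(d, a + offset) for d, a in uah]  # +0.1C in the post
    first_uah = uah_adjusted[0][0]
    return [(d, a) for d, a in crut3 if d < first_uah] + uah_adjusted

# Toy example: the constant offset (here 0.1C) falls out directly.
crut3 = [(197811, 0.05), (197901, 0.10), (197902, 0.20)]
uah = [(197901, 0.00), (197902, 0.10)]
print(rebaseline_and_splice(crut3, uah, overlap_months=2))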

One always needs to be careful when splicing two data sets together.  In fact, in the climate field I have warned of the problem of finding an inflection point in the data right at a data source splice.  But in this case, I think the splice is clean and reasonable, and consistent in philosophy with, say, the splice in historic CO2 data sources.


Smith and Reynolds, in their ERSST.v3, blend satellite data with buoy data, especially in the Southern Ocean. For LST, at minimum, those big old gaps in Canada, South America, Africa, etc., could be filled in with satellite data. That way we could eliminate much of the guesswork.
Regards

evanjones

I wouldn’t be surprised if they started mysteriously disappearing.
Nice satellite you got here. It would be a pity if something “happened” to it. Orbits decay, y’know . . .

hmmmm … I would say because it’s less easy to fudge and fake satellite data than ground station data.

counters

The article’s citation of flaws with satellites fails to mention a very important one:
Satellites don’t measure temperature.
Read it one more time. Are we on the same page? Good.
Yes, satellites offer the best resolution and coverage of any data we can collect. Sure, they might be expensive, but in my opinion the benefits far outweigh the costs (as the satellites can no doubt serve multiple purposes). But let’s be very clear: we’re not sticking satellites up there with big thermometers. Satellite temperature data is just as convoluted and riddled with precision and accuracy problems as tree-ring-derived temperatures.
Without going into too much detail, because it does get quite complicated: we can infer temperature by looking at microwave radiation from atmospheric oxygen. However, time and time again the conversions we make have been shown to be inaccurate, which is why the satellite record is always being adjusted. Granted, we do accept the temperature record derived from satellites, but it should be taken with a grain of salt and the intrinsic error in the data must be considered.
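To make the point concrete, here is a toy Python sketch of what a brightness temperature is: a weighted vertical average of the temperature profile, rather than a point reading. Both the profile and the weighting function below are invented for illustration; real MSU weighting functions come out of radiative transfer calculations:

import math

# Toy illustration: an MSU-style brightness temperature is a weighted
# average over the temperature profile, not a thermometer reading.
# Profile and weighting function are invented for illustration only.

def brightness_temperature(profile, weights):
    """Weighted vertical mean of a temperature profile (kelvin)."""
    return sum(t * w for t, w in zip(profile, weights)) / sum(weights)

# Hypothetical profile (K) at 1 km steps, simple 6.5 K/km lapse rate
profile = [288.0 - 6.5 * z for z in range(11)]
# Hypothetical weighting function peaking in the mid-troposphere
weights = [math.exp(-((z - 4.0) ** 2) / 8.0) for z in range(11)]

print(round(brightness_temperature(profile, weights), 1))  # ~261.6 K, not 288 K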
The correct meme that skeptics and proponents alike should be proposing is that the work be put in to improve all of our available resources. Not only should we continue revising the GISS stations to determine and counteract the biases in some of them, but we should continue research into MSU technology to help refine satellite measurements.
Whatever your opinion of AGW, it is wrong to dismiss the available data as corrupt or biased. Remember, just as proponents need temperature records to prime their models and test out the intricacies of their theory, skeptics need accurate temperature records as well, because there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety. A skeptic’s ace-up-the-sleeve is an accurate temperature record which does not demonstrate significant warming in the post-Industrial era, not allegations of bias or corruption.

The money quote:

Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.

DAV

Evan Jones (13:45:13) :”Nice satellite you got here. It would be a pity if something “happened” to it. Orbits decay, y’know . . .”
Tut, tut, Evan.
‘Course if NASA actually operated the spacecraft collecting all of that temperature data they’d be in a better position to affect things.
http://www.nesdis.noaa.gov/About/onepagers/pdf/NSOFinfo.pdf

tarpon (14:12:56) : “I would say because it’s less easy to fudge and fake satellite data than it is the ground stations.”
Ground data is mostly a few temperature readings a day. The satellite data are complex and require intense conversion. This requires fairly large, complicated computer programs and a lot of people are involved (including the military). Much easier to fake a moon landing 🙂 I haven’t been down to Suitland in a long time but I doubt things have gotten easier. OTOH, the military DOES operate its own weather spacecraft.

Duane Johnson

Counters says:
Satellites don’t measure temperature.
But keep in mind that even a thermometer doesn’t really measure temperature. A mercury thermometer measures the length of a column of mercury. Other devices rely on changes of some other material, a change in electrical resistance, etc. To say that a satellite doesn’t measure temperature doesn’t really say anything about its accuracy in inferring temperature of a particular atmospheric location. Elimination of microsite problems may well make satellite instruments the more accurate tool.

Leon Brozyna

I enjoyed this piece when I read it on Climate Skeptic. It is quite even in its presentation of the strengths and weaknesses of satellite data. I consider that the biggest plus in the use of satellites is the way it overcomes land based biases. As an example, here’s a rather stunning piece I expect we’ll see here shortly about an article in Icecap about the stations in Maine, which they got from:
http://www.m4gw.com:2005/m4gw/2008/07/makin_up_climate_data_from_jun.html#more
Now I understand how gaps in records or wholesale creation of a record happens, such as the early years of the Buffalo Bill dam:
http://wattsupwiththat.wordpress.com/2008/07/15/how-not-to-measure-temperature-part-67/
Now I understand the business with filnet and making up data (to fill in the blanks). I trust that filnet has been and continues to be tested against known stations to ensure its reliability, and that such peer-reviewed studies have been shared with the meteorological community…..

Mike C

Counters,
I’ll consider the surface temperature record to be corrupt and biased as long as my lying eyes tell me there are barbecues, air conditioner exhausts, trash incinerators, asphalt roofs, chimneys, pavement, metal automobiles, etc…etc…etc in the immediate vicinity of those surface stations. Yeah Bobbo, I’m gonna believe all those climate scientists who say it doesn’t matter because glaciers have been melting since the little ice age. And while you are dishing out your passionate advice to bloggers, please be fair and, for instance, tell the whole truth: satellites get adjusted due to orbital drift, not because some alien is at the satellite with a cooler of beer cooling down the satellite record. Orbital drift is the only significant issue of any kind with the satellite record and it is well studied, understood and accounted for.

IceAnomaly

* It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
-But satellites are subject to calibration errors as the satellite orbit altitudes drift over time, and as the sensor sensitivity drifts over time.
* It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
-but is subject to large discontinuities as one satellite is replaced by another over time. There are several such discontinuities in the satellite record already.
* It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.
-the satellites have no high-latitude coverage. They miss the poles entirely.
“Another problem, one that Mr. Hansen and his crew are particularly guilty of, is making a number of adjustments in the laboratory to historical temperature data that are poorly documented”
They are documented in a series of publications outlining the rationale for the adjustments and the way the adjustments are made. They are also perfectly documented in the computer code. There is no ‘guilt’ here; there is an attempt to get the best possible results from a flawed historical data set that cannot be recreated – unless you have a time machine somewhere that no one else knows about.
“These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.”
“Intuition” is not a scientific argument. JohnV’s early analysis here at WattsUp showed that the class 1 and 2 high-quality stations, when analyzed alone, gave historical results in very close agreement with the adjusted GISS results. Not to mention that the four main data sets, GISS, HadCRUT, UAH and RSS, all show very similar results when compared over the entire time of the satellite measurements. I note that this article conspicuously does not attempt to quantify that claimed error. Just what IS the difference in slope of those four temperature products over the entire period of overlap? What is the error in each slope calculation? Are they statistically distinguishable? Claiming a significant difference without showing it to us tells us nothing.
“Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal. ”
Most skeptics, then, are comfortable with accusing Hansen of fraud and of assuming a major conspiracy is in play. Thanks for making that clear. There are very good reasons for paying attention to the GISS product. It goes back a century and a half. It attempts to include the Arctic, via (well-documented) interpolation. It corrects urban trends by removing those trends from the analysis entirely, substituting an interpolation of the trends of surrounding non-urban areas. It is consistent with qualitative supporting evidence, such as degree-day calculations and killing-frost date records. It is consistent with the other three major temperature products.
And as JohnV showed, the adjusted results are very, very close to those of the class 1,2 stations alone.
Attributing a fraudulent motive to Dr. Hansen, especially in the absence of any evidence to that effect other than biased speculation, is simply vile – Anthony, I don’t understand why continually you allow that kind of character assassination to flood your site.
REPLY: Lee, let me clarify for you. Hansen knows the adjustments are flawed. It has been shown time and again that stations are being adjusted where the reasoning and logic (using Hansen’s own published methods) do not hold up. The adjustment based on satellite night lights at one point in time, taken in the year 1995, is being used to adjust an entire century’s worth of data. Of course that method is simply ludicrous, and we’ve shown that rural stations like Cedarville are getting adjusted for UHI when they should not be. Hansen knows this, and knowing that your methodology is in error, yet leaving it intact, is not science, but agenda, or pride, or something else.
Further, GISS is interpolating data into the Arctic, where no historical data is measured by stations. Interpolating data and presenting it with measured data is, in my view, scientific fraud, plain and simple.
Imagine interpolating data for a medical study and adding it to real data for a final paper to show some drug works as advertised. If such a thing were revealed, heads would roll, careers would be destroyed, and companies would go bankrupt as stocks tumbled. Yet Hansen, when doing just that to show the surface temps align with his models, gets a free pass because “he’s saving the planet”. Amazingly, we have misguided anonymous cowards like yourself arguing for us to “go easy” on him.
I believe that Hansen should be called on the carpet for these flawed adjustments and for interpolating arctic data and mixing it with real data.
How many false names do you plan to create Lee? – Anthony

crosspatch

I don’t have a problem with adjusting temperatures for changes in surrounding conditions, station moves, etc. The problem I have with Dr. Hansen’s adjustments is that nobody can duplicate them. While some stations’ adjustments can be duplicated, others can’t. That means the process by which these adjustments are done is still not understood. Dr. Hansen should make it a top priority to ensure that the adjustment process is transparent and the procedure is clear and well documented. He doesn’t own the data or the process or the results; they belong to the US taxpayer.
His data are diverging more from the several other data sets with each passing month, and it is my opinion that we are entitled to know why. It does not add any trust to his numbers when the methods used to create these adjustments take months of begging to obtain and, when they are finally produced, do not consistently match the output that he produces.

K

I suspect a lot of NASA people simply know they have a nice, low-effort job and intend that it not change until retirement.
The talented people who plan and create any technology don’t normally stay around very long to operate the devices. Agencies and bureaus do that. The seasons pass, the big objective becomes securing the next budget and checking off the boxes. Over time the reviews and oversight become routine; they are boring.
Trust is everything. “He knows what he is doing, I am too busy to check everything” is a much more pleasant thought than “great, it will take me a week to figure this out”.
Whatever goes on with the NASA GISS team may not involve any conspiracy or manipulation.

Bob B

Counters, the surface data is corrupted and biased–get over it. It is pure crap and should not be relied upon at all.

Bill Marsh

counters,
Good point. I understand that satellites don’t measure temperature directly, but I think we have the issues pretty well worked out. The advantage of the satellites is the breadth of coverage and the fact that they do not depend on humans.
The problem with the ground stations is coverage (2/3 have shut down since the 1990s, especially outside the US). Right now there are areas in which the ‘adjustments’ are doing the equivalent of estimating the temperature in Atlanta based on thermometer readings in New York. I am VERY skeptical of the efficacy of those adjustments. Add to that the exposure that temps have been ‘faked’ or made up, either because of some misguided attempt at ‘aiding global warming efforts’ or through laziness at some sites, and the almost bizarre adjustments to the surface temps that end up revising past measurements up or down (mostly down in the past, up in the more recent past).
If we are to commit resources, the best effort would be providing self-reading/reporting stations in quality locations that we can trust.
Of course the best metric for measuring ‘global warming’ is not surface temp, it is ocean heat content.

Philip_B

Gavin Schmidt, at RealClimate, says all we need to measure global temperatures is 60 good sites. I tend to round this up to 100 good sites with good geographic distribution, but the principle is correct.
I think there must be 60 or 100 good sites in the world, and the fact that no one has compiled a global index based on the best sites we can find, what I call pristine sites, makes me suspect they don’t show significant warming. And that is the conclusion of my informal survey.
Using only pristine sites, sites remote from any local or regional influences and without moves, would have the added advantage that instrument issues would be much easier to diagnose and therefore fix.

Joel Black

Mr. IceA.,
Maybe the folks at RealClimate can teach Anthony how to delete posts that cause discomfort. Or he could just spew vitriol at them like that little dog at Open Mind.

IceAnomaly

Philip_B, JohnV did essentially just that for the US, right here on WattsUp. He did a gridded temperature anomaly trend analysis for the US based only on the Class 1 and 2 stations identified by Watts’ survey. He found that the results of his analysis of only the best stations matched the gridded, adjusted GISS output of all the stations very closely.
REPLY: Unfortunately, in his rush to disprove the value of the project, he ended up with only 17 stations that rated CRN1 or CRN2, and very poor geographic distribution. I have not done this type of analysis yet because the survey is not complete. If I had done an analysis and published it, I’d be vilified for “rushing to judgment” or “using an incomplete data set with poor representativeness”.
Yet somehow JohnV gets a free pass for the rush job he did, and his results are cited as “fact”, as you have done, probably because his results are what folks like you want to see.
When we have the majority of stations done, I’ll do a proper analysis of the data. Until then any analysis is simply premature. – Anthony

Bob B

“Gavin Schmidt, at RealClimate, says all we need to measure global temperatures is 60 good sites. I tend to round this up to 100 good sites with good geographic distribution, but the principle is correct.”
Philip, where is the statistical proof for that? The big discrepancy in March 2008 data between satellite and surface stations is due in large part to missing surface site locations.

Mike C

Using the JohnV defense is a joke. JohnV only had 17 stations, several with missing data, especially for the last few years, and only one station represented the entire southeast. You also needed to read his own warnings about how his program was untested and not peer reviewed. I sat back and watched his work as he corrected mistake after mistake, most of which were pointed out by then-15-year-old Kristen Byrnes. He was using data that he claimed was raw, in UHI areas, missing months, pre-QC, and it included the MMTS adjustment, which we now know was a major error. His program did not consider elevation, climate zone or any of a number of other parameters used in temperature analysis. He was using older USHCN v1 data and his temperatures showed significant differences from GISS, especially going back in time. It’s the kind of thing you would expect from someone passionate about his beliefs (he resented criticism of James Hansen) and with no experience whatsoever with climate monitoring issues.

Philip_B

Bob B, no proof is required. It is elementary statistics: as sample size increases, the incremental gain in precision declines.
The difference in precision between 10 and 100 sites is significant; the difference between 100 and 1,000 sites isn’t.
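A quick numerical sketch of that scaling, under the (generous) assumption that sites are independent with a common standard deviation: the standard error of the mean falls as one over the square root of the sample size, so going from 100 to 1,000 sites buys far less than going from 10 to 100:

import math

# Standard error of a mean from n independent sites with common sigma.
# Real stations are spatially correlated, so this is a best case.
sigma = 0.5  # hypothetical per-site standard deviation, degrees C

for n in (10, 100, 1000):
    print(f"n={n:5d}  standard error = {sigma / math.sqrt(n):.3f} C")

# n=   10  standard error = 0.158 C
# n=  100  standard error = 0.050 C
# n= 1000  standard error = 0.016 C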
Otherwise, your second statement may well be true, but it is not directly due to the relatively small amount of data that is missing. I’d guess it’s due to data missing from particularly good or bad sites.
And at root that’s the problem with GISS. If they were removing sites because they were known bad sites, then I would be in favour, with the proviso that selection or deselection of sites be open, transparent and auditable. At the moment it isn’t, and I’m reasonably sure that sites where the trend doesn’t conform to the ‘known’ GW trend are biased against (unconsciously or otherwise) and tend to be either eliminated or adjusted.
And,
“skeptics, then, are comfortable with accusing Hansen of fraud and of assuming a major conspiracy is in play.”
Experimenter bias is widespread and has been documented in many scientific studies, which is why double-blind studies are the norm in areas where scientists have to make decisions/assessments, and replication is a cornerstone of science. There is no need to introduce fraud or a conspiracy as an explanation.
Otherwise, I agree with Anthony: if fraud has occurred, it is in Hansen’s (deliberate) failure to release his data and methods for analysis and replication.

Anthony, your reply to the comment by IceA at 16:02:02 is priceless. The insight that you, Steve Mc, and a few others give into the behind-the-scenes antics of AGW disciples is invaluable.

Michael Hauber

John V’s analysis (which Steve Mc updated to give better agreement with the GISS trend) may cover only 17 stations. But are there any other analyses, based on larger numbers of stations, that show a problem with the GISS trend?
So maybe when all the stations are surveyed and an analysis is done we will find there is a real problem with the data. But until such an analysis is done, any claims that the climate record is rubbish seem premature.
And how do you know the USA has the best climate stations in the world? Australia has its climate network documented. Some of its stations have concrete footpaths, fences or watered lawns within a few metres, but none are enclosed in a sea of concrete or on a rooftop among air con units, as I’ve seen in photos on this site.
Perhaps the reason that the continental USA temperature record shows minimal to no global warming trend is because they are the ones that can’t measure temperature properly.

Bob B

Philip_B, look at the surface data and the satellite data at the link attached. Canada and South Africa appear cold in March 2008 in the satellite data. The surface data is missing data and relies on the hot data in Asia. NOAA and GISS reported a MUCH warmer March 2008 than UAH and RSS; the data here shows why.
http://www.theregister.co.uk/2008/06/05/goddard_nasa_thermometer/print.html

Dishman

IceAnomaly wrote:
“They are documented in a series of publications outlining the rationale for the adjustments, and the way the adjustments are made. They are also perfectly documented in the computer code.”
The code does not match the publications.
As best I can tell (based on my FOIA request), GISTEMP is not subject to any kind of Software Assurance process, so there is no basis for asserting that the code matches the documentation. Any claims to the contrary are at best a guess.

Paul Linsay

counters at (14:41:10) : “Satellites don’t measure temperature.”
Funny thing about that. They use the same technology and methods used to measure the 2.7 K background radiation left over from the Big Bang. In fact, the 2.7 K background is used for calibration. [ Read section 4 about calibration here http://daac.gsfc.nasa.gov/AIRS/documentation/amsu_instrument_guide.shtml ] It’s remarkable how using microwaves to measure the temperature of the Big Bang is worth a couple of Nobel Prizes in Physics (the real kind) but only opprobrium when used to measure the Earth’s temperature.

MarkW

While JohnV did limit his analysis to stations receiving a class 1 or 2 rating, he made no attempt to control for UHI influences.

counters said:

“Remember, just as proponents need temperature records to prime their models and test out the intricacies of their theory, skeptics need accurate temperature records as well, because there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety.”

See what he’s doing there? counters is turning the Scientific Method on its head by assuming that the AGW hypothesis must be true until/unless “the basic physics behind AGW” are falsified. But “basic physics” has long since withstood the peer review/falsification process. It is AGW/climate disaster which is the unproven hypothetical. In other words, hypothesizing AGW disaster is the same as asking any speculative “What if…?” question.
There is no proof of AGW leading to planetary catastrophe [and make no mistake, the stated hypothesis is that AGW will lead to runaway global warming/climate disaster. If AGW were only a hypothesis of a very small fraction of a degree change, which is probably the case, then AGW would only be a small and unimportant footnote in an obscure technical journal].
In fact, it is AGW/planetary disaster that has been put forth as a new hypothesis. Therefore, those hypothesizing anthropogenic global warming leading to a planetary climate catastrophe have the burden of proof. Skeptical scientists have no duty whatever to falsify the status quo: natural climate change. The current climate is well within historical norms, and screaming “But what if…!” proves nothing.
These word games indicate desperation. After all is said and done, the climate is cooling, not warming. And the real-world record proves that CO2 has no measurable effect.

Mike C

MarkW (18:20:51) :
“While JohnV did limit his analysis to stations receiving a class 1 or 2 rating, he made no attempt to control for UHI influences.”
Not correct, Mark; that was the first thing Kristen corrected him on. He demonstrated some embarrassment about it as well.

John McLondon

On his web page http://www.uah.edu/News/climatebackground.php Christy made the following comment, I quote:
“”In areas where you have high resolution, well-maintained scientific collection of temperature data, the satellites and the surface data show a high degree of agreement,” said Christy. “Over North America, Europe, Russia, China and Australia, the agreement is basically one-to-one.””
When Christy himself comments that the satellite and surface station data show a high degree of agreement, it seems difficult to make an effective claim that surface stations have major problems.
It also says:
“”Global” surface thermometer networks show a warming trend of approximately 1.7 degrees Celsius per century — about 3° Fahrenheit.
The satellite data show a warming trend of 1.4 C or about 2.52° F per century.” and this difference is explained: “A recent analysis of the surface and satellite datasets hints that the apparent disagreement might have as much to do with coverage as with differing trends at different altitudes.”

evanjones

Ooooh.
So IceAnomaly is our old friend, Lee.
Well, well, well. That correlates.
On the point discussed, would it not be possible for UAH or RSS to interpolate polar data from a full swath of surrounding data more accurately than GISS does from its spotty surface coverage?
And aren’t the polar areas not covered by satellite around 1% of the surface or something?

paminator

Paul Linsay- Well said.
Here is why I trust the MSU satellite data over any other temperature dataset:
-Satellite data provides better global coverage than any other method in use.
-Satellite data has the most transparent error analysis and self-correction procedures in place of any temperature dataset. UAH and RSS personnel actually cooperate to find and correct errors.
-The satellite historical record does not get re-adjusted every month a new data point is added.
-Metadata for satellite measurement equipment is available. Metadata for surface temperature and sea temperature datasets is an appalling mess.
-Satellite-mounted microwave radiometers are highly reliable, accurate instruments, as compared with canvas buckets, rotted-out Stevenson screens and badly sited MMTS units.
I think it would be very educational if someone could track down the daily (or better yet, hourly) temperature readings from a surface station site over the last ten years, to get some perspective on how small a trend is being coaxed out of the huge temperature variations that occur.
Of course, if you really don’t like satellite or surface station data, you could try inferring temperature from wind gradients measured using radiosondes. Or try inferring temperature from recent tree ring patterns. Or like Hansen, inferring arctic surface temperature *measurements* using GCM output! IIRC these approaches claim similar error bars to those for the MSU dataset. Isn’t climate science wonderful?

evanjones

“Perhaps the reason that the continental USA temperature record shows minimal to no global warming trend is because they are the ones that can’t measure temperature properly.”
That would seem unlikely. The great majority of the biases are to the warm side and occurred over time as well-sited stations were overtaken by urban, suburban, and exurban creep, thus exaggerating not only the temperatures but the trends.
And a huge number of CRN4 violations occurred from the 1980s to date, when better-sited Stevenson screens were replaced by MMTS units located right next to buildings on account of cable issues.

steven mosher

The proper way to do the splice is to normalize the HadCRU data to the UAH anomaly base period (average HadCRU from 1979 to 1998 and subtract that from itself).
But then I am not at all sure one can splice these two records, as they measure different things.
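A minimal Python sketch of the normalization mosher describes, assuming a simple (year, month, anomaly) layout that is illustrative only:

# Re-express a series on the UAH 1979-1998 base period by subtracting
# its own mean over those years. Data layout is illustrative only.

def rebaseline(series, base_start=1979, base_end=1998):
    """series: list of (year, month, anomaly) tuples; returns the same
    series recentred on the base_start..base_end mean."""
    base = [a for y, m, a in series if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return [(y, m, a - offset) for y, m, a in series]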

Mike C

John McLondon says
“When Christy himself comments that the satellite and surface station data show a high degree of agreement, it seems difficult to make an effective claim that surface stations have major problems.”
John,
The 0.3 degree Celsius difference between the satellite and surface records is pretty big considering global warming this century is about 0.7 degrees C (HadCRUT). Now take a look at Hansen’s or NCDC data; they are warmer than HadCRUT. The difference is even bigger. I’d say the surface stations are warmer for a reason. Barbecue ribs, anyone?

Mike Bryant

I propose a new greeting for family and friends:
“Have you noticed that global warming stopped?”, or
“Have you noticed that the weather is getting cooler?”
Maybe someone can come up with a catch phrase that will catch on like:
“Have a nice day” 🙂

John McLondon

Mike C.,
Yes, absolutely. But I pointed it out to bring up the explanation they gave: “A recent analysis of the surface and satellite datasets hints that the apparent disagreement might have as much to do with coverage as with differing trends at different altitudes.”
If differing trends at different altitudes are the problem, then it is difficult to blame the surface stations for the difference. The coverage may be a different story; I do not know enough about that to comment on whether it will bias toward higher or lower temperature.
My main comment was on Christy’s remark that satellite and surface station measurements of temperature (at least for the U.S., China, Europe, etc.) are very close.

John McLondon

In any case we are looking at trends and anomalies, so absolute numbers may not be that important.
REPLY: Oh sure they are. Remember that an “absolute number” from a weather station or weather station network is used every time we get:
1) A newspaper article saying “New record high in Podunk, USA today sure sign of global warming”
2) A press release from NOAA, or GISS, or HadCRUT that says “Xth warmest year on record”. That’s done from a combination of absolute numbers.
3) A TV station does a story on the “heat wave” and cites temperatures all around the city, but without caring about where those temperatures were measured (rooftop, parking lot, downtown, bank sign, etc.), telling the public only the numbers, not the accuracy. They are only interested in the absolute highest numbers when this happens, and I speak from experience. See this article from TV meteorologist Brian Sussman on that issue.
Absolute numbers are a big deal to the public and the press, don’t let yourself believe otherwise. -Anthony

evanjones

Mike C: 3°F is only 1.7°C. But that’s around a third or so of the smoothed increase since 1979, so it’s quite significant.

evanjones

Divide those by 10 for the per-decade rates: 0.3°F, 0.17°C!

IceAnomaly

MikeC,
Those differences arise because the satellites, HadCRUT, and GISS use different baseline reference periods when they compute anomalies. They are on different scales.

evanjones

That difference is c. 0.05 per decade, or half a degree per century. Not small potatoes.

Richard

“But then I am not at all sure one can splice these two records as they measure different things”
That didn’t stop Mann et al. from coming up with the “hockey stick”!

Philip_B (17:26:31) :
“Bob B, no proof is required. It is elementary statistics: as sample size increases, the incremental gain in precision declines. The difference in precision between 10 and 100 sites is significant; the difference between 100 and 1,000 sites isn’t.”
Okay, by this logic I can take 100 elevations around the United States and draw a precise topographic map. Clearly this logic is bogus.
1) I think you are referring to things like polling voters with a truly random sampling algorithm. It takes surprisingly few samples (100 or so) to come up with an accurate result. For something where the result has more data points, e.g. so that models can handle convection or so that your map includes mountain ranges, you will need many more samples.
2) Precision is merely the “repeatability” of a measurement. Accuracy refers to how close to the “truth” a measurement is. (An accurate measurement implies precision.) Precision is nice, accuracy is better.
I was going to apologize for not providing links to support my comments, but I’ll just recycle your “no proof is required” assertion.

Mike C

John McLondon,
Please allow me to patiently address your repetition of the “maybes”, “possiblys”, “could bes” and “might bes”.
Okay, here I am looking at a temperature station next to a barbecue. My own two eyes are telling me there is a barbecue there. There are no maybes, possiblys, could bes or might bes about it. It is definitely there. Let’s compare this to a press story from AGU where a scientist is speculating. Okay, hmmmmm, where should the evidence take me at this time? Oh, yes, maybe my eyes are being bought off by ExxonMobil BWAAAAAAHAHAHAHA ::::Koff:::: pardon me. Anyways, if about half of the temperature increase is due to human error (and I have to side with the balloon and satellite data, because I doubt there was a kegger going on in flight control where a frozen alcoholic beverage was spilled on the sensor)… and the PDO is ready to shift, that means the human signal riding on the natural climate signal cannot be more than 0.15 degrees per century, assuming that the other ocean circulations which are in or coming out of warm phases are not adding natural warming to the climate system at this time.
IceAnomaly … or Lee… or whatever your name is, I’ve already run the numbers myself with the corrected baselines; they are between 0.2 and 0.4, with GISS being the warmest, using several different smoothing methods, etc. No doubt about it, the balloons and satellites are close together, with the surface temps being the warmer.
And by the way, both of you are invited to the barbecue at the temperature station when this is all over, you can drink it off.

Manfred

I found this study interesting, showing a significant correlation between ground-measured temperature increases and socio-economic factors like population growth, GNP, average income, etc.
http://www.uoguelph.ca/~rmckitri/research/jgr07/M&M.JGRDec07.pdf
The conclusion is that human effects on temperatures are not corrected adequately, and the authors conclude that the global temperature increase over land in 1980-2002 was “measured + adjusted” too high by a factor of approximately 2.
-> So the temperature “adjustments” in developing countries could be a much more serious problem than elsewhere.
With the rapid third-world development of the last couple of years, this may have worsened significantly after 2002.

mondo

Re GISS adjustments, I would be interested to know the distribution of adjustments, i.e., whether they are positive or negative. My understanding (as a layman, I acknowledge) of adjustment processes is that, if fairly done, they tend to cancel each other out, and the resulting curve isn’t all that much different from the starting curve, but the confidence levels are improved.
A statistician I was talking to last night suggested that if nearly all of the “adjustments” are in one direction, that is a signal that the adjuster is introducing bias. I wonder if anybody has been able to analyse the GISS adjustments this way?
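The statistician’s rule of thumb is easy to formalize as a sign test: if adjustments were unbiased, positive and negative adjustments should be roughly equally likely. A Python sketch with made-up counts (no one has published the real distribution, which is the point of the question):

from math import comb

# Two-sided binomial sign test: under the null hypothesis that an
# adjustment is equally likely to be positive or negative, how
# surprising is the observed count? The counts below are made up.
def sign_test_p(n_positive, n_total):
    k = max(n_positive, n_total - n_positive)
    tail = sum(comb(n_total, i) for i in range(k, n_total + 1)) / 2 ** n_total
    return min(1.0, 2 * tail)

print(sign_test_p(70, 100))  # 70 of 100 positive: p ~ 0.0001, a strong bias signal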

fred

“there has yet to be a major revolution in the basic physics behind AGW which translates into a falsification of the theory in its entirety.”

This is a commonly made and totally invalid argument. AGW is not simply a matter of physics. It is worth explaining, as people so often misunderstand it.
What is physics is that in a theoretical atmosphere consisting of gases in the proportions of those on earth, a doubling of CO2 with no other changes would lead to about a 1 degree C rise in atmospheric temperature, because of the increased absorption of heat by the CO2. It is also physics that a rise in temperature of a theoretical atmosphere with no other changes will lead to an increase in water vapour. And that increase in water vapor with no other changes will also lead to absorption of more heat and a rise in temperature.
Nevertheless, AGW is not ‘just physics’, and here is why. It’s the difference between the laws which govern the operation of an engine and the design of a vehicle. How the climate system reacts to the increase in atmospheric temperature caused by the increased CO2 is not just physics, in the same way that how the car reacts to increased fuel flow is not just physics. The one does not allow us to predict the other. It could be that the increase in speed is a function of increased fuel flow. Or it could be, with no violation of the laws of physics, that factors such as wind resistance, rolling resistance, governors etc either limit or eliminate any speed increase.
What is ‘just physics’ is that the energy content of the fuel going to the engine, and thus the power output of the engine, has increased. But it does not follow from that as a matter of physics speed will increase.
Similarly, it is correct to say that an atmosphere with more CO2 must increase in heat uptake. However, whether this leads to much or any warming depends on what happens to the system as a whole in response. It could be that increases in water vapor amplify it. Or it could be that convection, cloud and rain eliminate it. It could be that over time the average water vapor content rises or falls, without there being any violation of the various laws of physics governing the behavior of gases. It would not even, as far as I know, be a violation of the laws of physics for an increase in CO2 to lead to cooling. It might require an unlikely combination of circumstances, and I don’t believe it to be the case, but I don’t think there is anything contrary to the laws of physics about it.
If on warmist blogs you question the connection between a rise in CO2 and a rise in global temperatures, you will frequently be told to google various of these laws of gases. They are not exactly irrelevant, one should know them, but they are not the crux of the matter. The crux of the matter is not the laws governing gases, but how, given them, the climate actually works, and there is nothing that says it has to work in such a way that a small amount of warming amplifies the amount of water vapor over a long enough period that the predominant feedbacks are positive.
Whether it does work like that or not is a matter of how the system works in detail, how it is constituted. It’s not ‘just physics’, in exactly the same way that the shape of the car body, and thus how much wind resistance it has, is not ‘just physics’ either. Of course, the resistance of a given shape is just physics. But what the shape happens to be is not.
Still less is it, as some bloggers repeat endlessly, a matter of 200 year old physics.
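For concreteness, the “about a 1 degree C rise” fred mentions falls out of two commonly quoted approximations: the simplified CO2 forcing expression and the no-feedback (Planck) response. A sketch of the arithmetic; both coefficients are rounded rules of thumb, not exact physics:

import math

# Commonly quoted approximations:
#   forcing for a CO2 change:   dF = 5.35 * ln(C / C0)   [W/m^2]
#   no-feedback response:       dT ~ 0.3 K per W/m^2 of forcing
dF = 5.35 * math.log(2.0)  # doubling of CO2: ~3.7 W/m^2
dT = 0.3 * dF              # ~1.1 K, the "about 1 degree" figure
print(f"dF = {dF:.2f} W/m^2, dT = {dT:.2f} K")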

evanjones

Re GISS adjustments, I would be interested to know the distribution of adjustments, ie, whether they are positive or negative.
Well I can answer half of your question but not the other half.
For its “raw” data, GISS uses NOAA-adjusted data. For USHCN-2, the adjustments were a whopping +0.42C. I looked at the USHCN-1 version, where they were +0.3C, and found that (in Gore’s own words), “Everything that’s supposed to be UP is DOWN and everything that’s supposed to be DOWN is UP.”
According to NOAA, all these site violations the Rev has been documenting made the temperatures DROP. So they had to be adjusted upwards.
To add insult to injury, the UHI adjustment was -0.1 FAHRENHEIT!
I don’t know what GISS did to those outrageous NOAA numbers. Unlike NOAA USHCN-1, so far as I can tell, they are much too smart to publish a bottom line. (USHCN-2 wised up and stopped publishing the amount and direction of their adjustments. They learned a bitter lesson when their USHCN-1 adjustment graphs became one of the most quoted graphs by skeptics. And the USHCN-2 adjustment is almost half again bigger! But they didn’t publish that; it had to be derived by map function diddling. I didn’t run that; a poster on this blog ran the numbers.)

Steve Keohane

Philip_B: “no proof is required. It is elementary statistics: as sample size increases, the incremental gain in precision declines.”
While a broadly true statement, one needs an initial sample size large enough to capture the deviation within the population being measured. A population with a sigma of 0.001 needs a smaller sample than one with a sigma of 9.

SunSword

When the amount and direction of adjustments to ground-based stations are not published, the information by definition cannot be subject to peer review (since the “peers”, i.e., actual scientists who study the climate, cannot review what is withheld). This is a fundamental violation of the checks and balances of the scientific method, and in fact is not science at all but merely politics.