By Blake Snow – FOXNews.com
Image: NASA / Goddard Institute for Space Studies – Maps from NASA’s GISS reveal temperatures where no data exist, thanks to mathematical extrapolation of data.
NASA was able to put a man on the moon, but the space agency can’t tell you what the temperature was when it did. By its own admission, NASA’s temperature records are in even worse shape than the besmirched Climate-gate data.
E-mail messages obtained by a Freedom of Information Act request reveal that NASA concluded that its own climate findings were inferior to those maintained by both the University of East Anglia’s Climatic Research Unit (CRU) — the scandalized source of the leaked Climate-gate e-mails — and the National Oceanic and Atmospheric Administration’s National Climatic Data Center.
The e-mails from 2007 reveal that when a USA Today reporter asked if NASA’s data “was more accurate” than other climate-change data sets, NASA’s Dr. Reto A. Ruedy replied with an unequivocal no. He said “the National Climatic Data Center’s procedure of only using the best stations is more accurate,” admitting that some of his own procedures led to less accurate readings.
“My recommendation to you is to continue using NCDC’s data for the U.S. means and [East Anglia] data for the global means,” Ruedy told the reporter.
“NASA’s temperature data is worse than the Climate-gate temperature data. According to NASA,” wrote Christopher Horner, a senior fellow at the Competitive Enterprise Institute who uncovered the e-mails. Horner is skeptical of NCDC’s data as well, stating plainly: “Three out of the four temperature data sets stink.”
…
Global warming critics call this a crucial blow to advocates’ arguments that minor flaws in the “Climate-gate” data are unimportant, since all the major data sets arrive at the same conclusion — that the Earth is getting warmer. But there’s a good reason for that, the skeptics say: They all use the same data.
…
Neither NASA nor NOAA responded to requests for comment. But Dr. Jeff Masters, director of meteorology at Weather Underground, still believes the validity of data from NASA, NOAA and East Anglia would be in jeopardy only if the comparative analysis didn’t match. “I see no reason to question the integrity of the raw data,” he says. “Since the three organizations are all using mostly the same raw data, collected by the official weather agency of each individual country, the only issue here is whether the corrections done to the raw data were done correctly by CRU.”
Corrections are needed, Masters says, “since there are only a few thousand surface temperature recording sites with records going back 100+ years.” As such, climate agencies estimate temperatures in various ways for areas where there aren’t any thermometers, to account for the overall incomplete global picture.
“It would be nice if we had more global stations to enable the groups to do independent estimates using completely different raw data, but we don’t have that luxury,” Masters adds. “All three groups came up with very similar global temperature trends using mostly the same raw data but independent corrections. This should give us confidence that the three groups are probably doing reasonable corrections, given that the three final data sets match pretty well.”
But NASA is somewhat less confident, having quietly decided to tweak its corrections to the climate data earlier this month.
In an updated analysis of the surface temperature data released on March 19, NASA adjusted the raw temperature station data to account for inaccurate readings caused by heat-absorbing paved surfaces and buildings in a slightly different way. NASA determines which stations are urban with nighttime satellite photos, looking for stations near light sources as seen from space.
Of course, this doesn’t solve problems with NASA’s data, as the newest paper admits: “Much higher resolution would be needed to check for local problems with the placement of thermometers relative to possible building obstructions,” a problem repeatedly underscored by meteorologist Anthony Watts on his SurfaceStations.org Web site. Last month, Watts told FoxNews.com that “90 percent of them don’t meet [the government’s] old, simple rule called the ‘100-foot rule’ for keeping thermometers 100 feet or more from biasing influence. Ninety percent of them failed that, and we’ve got documentation.”
Read the entire story at FoxNews.com
From the link in the article:
Subject: Re: USA temperatures – question from USA TODAY
From: “James Hansen” .
Date: Wed, 29 Aug 2007 16:12:20 -0400
To: “Rice, Doyle”
Well, I guess that I would say it a bit differently.
Our method of analysis has features that are different than the analyses of the other groups. In some cases the differences have a substantial impact.
For example, we extrapolate station measurements as much as 1200 km. This allows us to include results for the full Arctic. In 2005 this turned out to be important, as the Arctic had a large positive temperature anomaly. We thus found 2005 to be the warmest year in the record, while the British did
not and initially NOAA also did not. Independent satellite IR measurements showed that our extrapolations of anomalies into the Arctic were conservative. I am very confident that our result was the correct one in that instance.
That doesn’t sound to me like GISS is admitting its data is worse than HadCRUT data. Just different.
Until the full Earth Observing System is up and running and giving data from every part of the planet, I see no problem with interpolating known, nearby data in the Arctic. This is better than Hadley’s “we don’t measure it, so let’s just put in the average temperature anomaly for the entire planet” approach, which is also the approach of the UAH and RSS satellite data analyses.
Should they spend another $1 billion so they don’t have to do this kind of guesstimate? Absolutely.
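For illustration, the 1200 km extrapolation Hansen describes can be sketched as a simple distance-weighted average. GISS’s published method weights each station’s anomaly linearly down to zero at 1200 km; everything else below (the station distances and anomaly values) is hypothetical, and the toy omits most details of the real analysis:

```python
# Toy sketch of GISS-style anomaly extrapolation: each station's anomaly gets
# a weight falling linearly from 1 at the grid point to 0 at 1200 km.
# Station distances and anomalies are hypothetical.

RADIUS_KM = 1200.0

def extrapolate_anomaly(stations):
    """stations: list of (distance_km, anomaly_C) pairs for one grid cell."""
    num = den = 0.0
    for dist, anom in stations:
        weight = max(0.0, 1.0 - dist / RADIUS_KM)  # zero weight beyond 1200 km
        num += weight * anom
        den += weight
    return num / den if den > 0 else None  # None: no station within range

# An Arctic cell with no thermometer, estimated from three distant stations:
cell = extrapolate_anomaly([(300, 2.1), (800, 1.5), (1100, 1.8)])
print(round(cell, 3))  # 1.907 -- dominated by the nearest station
```

Note how the nearest station dominates: a cell with only far-flung neighbors gets a heavily smoothed estimate, which is the trade-off the e-mail is defending.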
FOX News – We Report, You Decide.
OK, i’ve decided.
Tom W:
“But apparently you haven’t heard of Mears and Wentz who in 2005 found an algebraic error in an earlier analysis which led to a substantial increase in tropospheric warming. They also released their own results indicating a trend of 0.19°C per decade, higher than the surface rate of 0.17°C.”
Yes, I know about that one too. It was the UAH record they found an error in, and though the correction was significant, after the correction, the warming in the troposphere was still less than the warming at the surface. That’s why I didn’t mention it. But hey, if you want to hang your hat on this Mears “correction” to UAH, and admit that the troposphere is warming more slowly than the surface, be my guest.
As for “their own results”, that’s the one I mentioned (except I believe it was 0.20 for RSS trop, and 0.16 for surface). And I explained their circular reasoning in that paper. Since you obviously missed that, I’ll explain it again. They “used 5 years of hourly output from a climate model” (a direct quote from Mears et al, 2005) to “adjust” the RSS tropospheric temperature data. They didn’t mention which climate model they used (which is apparently standard practice for climatologists, so no one can duplicate their work), but ALL of the climate models assume that CO2 is the driving force behind climate change. So using this model is equivalent to ASSUMING that global warming is man-made. Then, based on this ASSUMPTION, they estimated a new value for the increase in tropospheric temperatures and concluded that it was consistent with anthropogenic global warming (i.e., about 1.2 times as large as surface warming). What a surprise! They started by ASSUMING that global warming is man-made, and then CONCLUDED that global warming is man-made. That, Tom, is circular reasoning.
Look. There’s any number of “adjustments” that “scientists” can do to “show” that something is there when it isn’t (just look at the multiple instances in the CRU code released by the Climate-Gate hacker/whistle-blower). I don’t trust “adjustments” and never have. But of all the many adjustments done to the RSS and UAH satellite data, precious few of them get the tropospheric temperatures up anywhere near where AGW theory says they SHOULD be in comparison to surface temperature. And those that do are shady at best.
Regards,
Trevor
This kind of illustrates why Obama doesn’t like Fox News.
NickB. (09:26:51) :
“Don’t forget M&M 2007 where they state that a significant % of the warming in the surface temp is due to corruption of the grid averages due to extrapolation of UHI into areas that are not developed.”
I haven’t mentioned it yet, Nick, because I’m having too much fun rubbing the alarmists’ noses in their own surface temperature record, but I think you’re exactly right on this. If the alarmists would just admit that the surface temperature record is positively biased to show about 40% more warming than actually occurred, they wouldn’t have a problem with the tropospheric temperatures vis-a-vis surface temps and AGW theory. If actual temperatures had a warming trend of, say, 0.12 degrees C per decade (rather than the 0.17 that the record shows), then the 0.144 degrees C per decade (average of RSS and UAH) trend in the troposphere would be right on the money with respect to what AGW theory says about tropospheric temperatures in relation to surface temperatures. But if they admit that the surface is only warming 0.12 degrees C per decade, not only is that not very scary, it’s not even unusual when compared to temperature changes in the past.
Regards,
Trevor
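Trevor’s trend arithmetic in the comment above can be checked directly. Every figure below is the one quoted in his comment, taken at face value rather than independently verified:

```python
# Figures as quoted in the comment above (not independently verified).
surface_record  = 0.17   # deg C/decade, surface temperature record
surface_claimed = 0.12   # deg C/decade, his posited unbiased surface trend
amplification   = 1.2    # troposphere/surface ratio he attributes to AGW theory
satellite_avg   = 0.144  # deg C/decade, stated average of RSS and UAH trends

predicted_troposphere = amplification * surface_claimed
print(round(predicted_troposphere, 3))  # 0.144 -- matches the satellite average

# And the "about 40% more warming" claim:
excess = (surface_record - surface_claimed) / surface_claimed
print(round(excess * 100, 1))  # 41.7
```

So 1.2 × 0.12 = 0.144 exactly, and 0.17 is about 41.7% above 0.12, which is where the “about 40%” figure comes from; the arithmetic is internally consistent, whatever one makes of the premises.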
“”” Bill Marsh (04:57:00) :
Ron Broberg (21:34:11) :
Global warming critics call this a crucial blow to advocates’ arguments that minor flaws in the “Climate-gate” data are unimportant, since all the major data sets arrive at the same conclusion — that the Earth is getting warmer. But there’s a good reason for that, the skeptics say: They all use the same data.
I guess the skeptics haven’t heard of the lower tropospheric satellite data.
—————
No, I'm sure they haven't since the lower tropospheric satellite data is developed by a leading skeptic. Of course lower tropospheric satellite data has only been available since 1978 and tells us nothing about temperatures earlier than that or the causes of any temperature changes in the period 1978- present. There is also a 'divergence' between that satellite data and the GISS/CRU et al adjusted data. Dr Hansen doesn't use the satellite data in the GISS temp calcs because, if he did, he'd be guilty of splicing different measurement sets together. The same thing that was done when the thermometer records were 'spliced' onto the dendro records without explicitly saying so (the 'trick' in the emails – he was 'hiding the decline' in the dendro records to solve the 'divergence' problem, not an actual decline in temperature). Of course the 'trick' was actually allowing them to avoid dealing with the problem that the divergence called into question the entire past dendro calculations, not just the 1980 – present record. """
So you mentioned Dr James Hansen; would he be the "leading skeptic" you cited as being responsible for the "lower troposphere" satellite data?
When you write peer-reviewed journal papers, do you give a bibliography citing "leading skeptics" or "leading protagonists", or some other euphemism for whom exactly you are citing?
As to "lower troposphere" data from satellites: so far as I am aware, one of the two "leading report groups" of satellite data (UAH) says that their observations relate to about 14,000 ft altitude, which would likely make those measurements somewhat unrelated to what happens down here on the earth's surface. Most of the Stevenson-screen and suchlike data sources that I believe GISSTemp relies on are supposed to be at around 60 inches or so above the hard ground.
That would seem to be quite unrelated to whatever is going on at 14,000 feet above MSL, with no apparent connection to the actual surface altitude.
And since so far as I know, important environments such as arctic and antarctic sea ice don't seem to appear at 14,000 ft altitude, it would seem that the important climate events that affect human populations; and by inference other flora and fauna; don't really care what the 14,000 ft temperature (anomalies) are.
Let’s hear it for the Arctic ice cap.
In a year of a severe El Nino (average global temperatures up by around 0.7 degrees C) and near-record winter temperatures (according to NASA) in northern Canada and Greenland, the ice cap appears set to reach its greatest extent for this time of year in seven years.
The problem could be that tomorrow is April Fools’ Day, so the AGW brigade could refuse to believe it, arguing that it is no more than a nasty sceptic trick designed to make them look stupid.
I think you are making too much of the fact that they used a climate model, since it was only used to construct a model of the diurnal temperature variations, to be used to eliminate aliasing… the misinterpretation of daily temperature variations along the satellite track as a long-term trend. Diurnal variations are a ZEROTH order temperature change associated with daily changes of solar insolation (night and day). They are HUGE compared with changes associated with global warming and should be relatively insensitive to precise CO_2 levels. By contrast, average changes due to global warming are a SECOND ORDER effect due to CHANGES in CO_2 levels.
“but ALL of the climate models assume that CO2 is the driving force behind climate change.”
This is false. In fact the models use (an admittedly crude version of) the laws of physics to CALCULATE the effect of CO_2. Over the years the methods have been refined and the results appear to converge – thus giving a level of confidence in the result.
One more thing on the Lu et al (2004) paper.
Look at the third page (page 57 of the journal). Look closely at Figure 2. Note how closely Lu’s “adjusted” Lower Troposphere (LT) temperature (T850-300) follows the unadjusted LT temp (T2). A few lines below that, Lu, inexplicably, says “It is evident that the T850-300 trend is more positive than the T2 trend.” What?! They’re right on top of each other, for Pete’s sake! Furthermore, it is clear that any trend (even Lu’s inflated trend) is utterly overwhelmed by year-to-year fluctuations. If you take out 1998 (an El Nino year), there wouldn’t be any significant trend at all.
Now look at Figure 3. Earlier in the paper, Lu admits that only 15% of the T2 signal comes from the stratosphere. Yet, somehow, removing the stratospheric influence NEARLY DOUBLES the global LT temp trend. How in the world does this make sense?
Finally, here’s the kicker, the thing that exposes this whole paper for the sham that it is. On the same page as Figures 2 and 3, in the next two paragraphs after his inexplicable statement regarding the T850-300 trend, Lu makes the following two statements:
1. “The trend difference between T850-300 and T2 for the tropics is smaller (~0.05K per decade) because there the tropopause is higher and the stratospheric cooling is smaller, so the stratospheric influence is smaller.”
2. “GCM studies have predicted a global ratio [of tropospheric to surface temperatures] of ~1.2 (ref. 8) and a tropical ratio of ~1.54.”
Now, why would the GCM studies mentioned in statement 2 predict the tropical ratio would be higher than the global ratio? Because the predictions ALREADY TAKE INTO ACCOUNT what Lu mentioned in statement 1, that the tropopause is higher and the stratospheric cooling is smaller in the tropics than elsewhere on the planet. So the predictions already factor in “stratospheric cooling”. It is therefore ENTIRELY INAPPROPRIATE to adjust the LT temps for stratospheric cooling when comparing them to the predictions of the models. Stratospheric cooling is ALREADY INCLUDED in those predictions (which is consistent with even my minimal estimate of the competence of climatologists).
It’s like I predicted that my net pay (after deductions) is going to be $1000, then when I get my paycheck, I look at the stub and find net pay ($1400), and subtract the deductions ($400), arriving at $1000, exactly what I predicted. But it is wrong to subtract deductions from net pay, because they have ALREADY been subtracted from gross pay to get net pay. My actual net pay is $1400, which my prediction missed, badly.
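The analogy reduces to two lines of arithmetic (dollar figures are the ones in the analogy itself):

```python
# Subtracting deductions from NET pay double-counts them: net pay already has
# the deductions removed from gross pay.
net_pay    = 1400   # what the pay stub actually shows
deductions = 400

double_counted = net_pay - deductions
print(double_counted)  # 1000 -- matches the "prediction", but only by
                       # subtracting the deductions a second time
```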
There is no need to adjust LT temps for stratospheric cooling when comparing to predictions of LT temp, because stratospheric cooling is already included in the predictions. It MIGHT be reasonable to adjust for “diurnal drift” (I haven’t closely examined this yet, but it seems a little fishy to me). But other than Mears’s circular-reasoning paper, no such adjustment based on diurnal drift comes close to putting LT temps where AGW theory predicts they should be in relation to surface temps.
Regards,
Trevor
DirkH (06:17:39) : If i made any mistake in my reasoning let me know, but to me this looks like GISTEMP has no more relevance or credibility left (maybe except for making alarmist headlines like “Hottest Decade ever”). Am i right in this conclusion?
Yup. You are absolutely right. (From someone who has GIStemp running in his living room and has read all the code. It’s a bad joke, poorly written, producing fantasy output.)
“Tom W (10:53:59) :
[…]
Of course one uses caution…which is why one examines things like stationarity, the autocorrelation function, etc, of atmospheric quantities to determine if it is applicable. Indications are that it is.
There are gazillions of words devoted to the subject
[points to Statistical analysis in climate research By Hans von Storch, Francis W. Zwiers]”
Looks like i gotta read that book to see what you mean. Ok. One remark until i’m done with it: I wonder whether von Storch measured the ergodicity in real climate or in one of his models…
Tom W (10:53:59) :
“There are gazillions of words devoted to the subject
http://books.google.ca/books?id=5QgAfL1N6koC&pg=PA251&lpg=PA251&dq=climate+ergodicity+autocorrelation&source=bl&ots=-r7lB72NoZ&sig=0_daC7tHO4sK0N5C9GicoJ–73Y&hl=en&ei=souzS_-lO4L_8AbvqdyGBA&sa=X&oi=book_result&ct=result&resnum=10&ved=0CEAQ6AEwCQ#v=onepage&q=&f=false
”
Ok… von Storch and Zwiers talk about ergodicity on page 203, explaining the concept and culminating in the sentence “However, ergodicity is not generally a problem in climate research”. They don’t say where they got that from but i guess it’s common lore in climate science circles.
Thanks for pointing me to the book anyway.
See, he predicted the red spots depicting heat at the Arctic circle.
This prediction also covers the record floods in New England.
The web is flooded with pictures of a severe deluge hitting parts of New England. The pictures are no good because most were taken by amateurs and not Doctoral degreed climate scientists. I hate flooding but I am not impressed. 4 inches of rain in 24 hours is not a big deal. 13 inches in 24 hours is.
NASA doesn’t have a clue on the climate data. They are even worse than GISS, who are pretty appalling.
Never mind we are supposed to commit trillions of dollars and modify our way of life based on this data.
Tom W (12:18:20)
Since you seem to understand the models, could you please explain where in those models it accounts for walking on concrete in the summer being hotter than walking on grass? If the grass is at the low side of meadow albedo at 10%, the evaporative effect – per Trenberth – is on average equivalent to 39% albedo, and we’re talking new concrete at an albedo of 50%… wouldn’t you expect grass and concrete to be the same temperature in the sun?
Also, where in the models have they accounted for the 61,000 square miles of pavement we have put down in the US?
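NickB’s energy-balance comparison above can be written out as arithmetic. The figures are the ones quoted in his comment, and treating Trenberth’s evaporative flux as an “equivalent albedo” is his simplification, not a standard formulation:

```python
# Figures as quoted in NickB's comment (his "equivalent albedo" framing).
grass_albedo    = 0.10  # low end of meadow albedo
evap_equivalent = 0.39  # evaporative cooling expressed as an equivalent albedo
concrete_albedo = 0.50  # new concrete

# Fraction of incoming solar energy left over to heat each surface:
heating_grass    = 1.0 - (grass_albedo + evap_equivalent)
heating_concrete = 1.0 - concrete_albedo
print(round(heating_grass, 2), round(heating_concrete, 2))  # 0.51 0.5
```

Under this framing the two surfaces absorb nearly the same usable energy, which is the point of his rhetorical question; the real difference, as Tom W notes later, is where that energy goes (evaporation versus sensible heat).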
I just did an analysis of GHCN v2.mean temperatures using Jeff Id’s script for R. See Thermal Hammer on Jeff’s Website.
I just did it for all the sites in GHCN’s v2.temperature.inv for British Columbia, Canada.
It appears from what I did, that the temperature data for BC is very poorly kept up to date. Most records begin around 1950 and end at 1990. There are only a few sites (of about 140) that have records to 2010. The seasonal slopes and anomaly calculated by Jeff’s script vary widely. The +/- range can be huge.
Maybe BC is an outlier, but I doubt it. It seems possible to me that the emphasis on the 1950 to 1990 range, if carried through over the world, would make the data pretty dodgy. The only US area I analysed (WMO 72235, versions 1 through 12) had far more up-to-date data.
The program is easy to run, especially if you use R Commander, and the script is at Jeff’s website. It would be interesting to see whether other people find similarly out-of-date data for their areas of the world. Maybe we could get a worldwide analysis of the GHCN data!! (another SurfaceStations-type analysis)
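The coverage check described above can be sketched in a few lines. The station year-spans here are hypothetical; a real run would parse them from GHCN’s v2.mean and station inventory files rather than hard-coding them:

```python
def coverage_summary(spans, recent_year=2005):
    """spans: list of (first_year, last_year) per station.
    Returns (total, stations still reporting at recent_year, fraction)."""
    total = len(spans)
    current = sum(1 for _first, last in spans if last >= recent_year)
    return total, current, current / total

# Hypothetical BC-like inventory: most records stop around 1990.
spans = [(1950, 1990)] * 10 + [(1950, 2010)] * 2
total, current, frac = coverage_summary(spans)
print(total, current, round(frac, 2))  # 12 2 0.17
```

Running something like this per region would quantify the “only a few sites reach 2010” impression instead of eyeballing it.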
Richard (15:30:14) :
NASA doesn’t have a clue on the climate data. They are even worse than GISS, who are pretty appalling.
You might be interested to learn that GISS is a NASA organization:
http://www.giss.nasa.gov/
Never mind we are supposed to commit trillions of dollars and modify our way of life based on this data.
Really? We are “supposed” to? Was there some law passed this weekend?
Which trillions of the annual $70.24 trillion world economy is supposed to be committed? And what do you mean by “modify” my way of life – drive a hybrid? Use compact fluorescent light bulbs? Pay $9/month more for electricity from natural gas and wind rather than coal?
Oh, the horror…
“Anu (17:40:04) :
[…]
Which trillions of the annual $70.24 trillion world economy is supposed to be committed? And what do you mean by “modify” my way of life – drive a hybrid? Use compact fluorescent light bulbs? Pay $9/month more for electricity from natural gas and wind rather than coal?”
That’s how it starts, and from there the cost of electricity rises by 6 to 10% a year; that’s our experience here in Germany – as more and more wind turbines and PV panels get erected, each one subsidized, the cost rises.
At the same time, the CO2 emissions don’t fall because of the rising need for a spinning reserve.
While you are right in that it does not mean the immediate end of civilization it has to be pointed out that it’s a useless and rising cost – useless for skeptics but just as useless for environmentalists because the emissions don’t fall.
Everybody pays more money and some people profit from it. That’s the entire effect. That’s where it ends. Money changes its owner.
“DirkH (17:55:47) :
[…]
Everybody pays more money and some people profit from it. That’s the entire effect. That’s where it ends. Money changes its owner.”
And if Bono were smart he’d put his money into a wind turbine maker instead of Palm.
“Since you seem to understand the models, could you please explain where in those models it accounts for walking on concrete in the summer being hotter than walking on grass?”
While man-made concrete structures do influence local temperatures over a limited vertical distance, they are entirely negligible vis a vis the global temperature because they represent an infinitesimal fraction of the Earth’s surface.
The temperature over various surface types is the subject of Boundary Layer Meteorology – the study of the interaction between the atmosphere and the earth’s surface. There are books written on this subject. To properly explain the temperature difference over grass and concrete would require a mini course which is more work than I am willing to do. Briefly put however, evaporation and water vapour transfer are much more significant above a grassy surface and can greatly affect the energy balance and resulting air temperature.
For a more detailed, related discussion, see page 2 of Chapter 10:
http://www.atmos.washington.edu/2004Q2/547/www/lect10.pdf
Tom W (19:17:27)
I think surface area should be expected to play a (big?) role too, but mostly speaking I think we’re on the same page here with possibly one exception…
What about the underside of the pavement? Pavement will tend toward a higher equilibrium surface temperature in the sunlight, but it is also thermally coupled with the ground underneath it (one big giant friggin heat sink).
So, majority speaking, the difference is that one surface emits back heat and humidity, while the other emits back heat only and changes the equilibrium temperature of the soil underneath it.
Now how can you tell the difference between that and CO2-based AGW, observationally speaking? Measurements would show a build up in net energy in the system for both, and higher temperatures for both… right?
Now in the United States we have paved the equivalent of Wisconsin (61,000 square miles) and that’s not even counting structures (which would probably bring it to around 100,000 square miles). Are you REALLY saying that a parking lot the size of Wisconsin is infinitesimal?
Also, Tom… I asked about the models. Where in the models have they accounted for any of this?
Jeeze, and people call us “deniers”
/rollz eyez
“Also, where in the models have they accounted for the 61,000 square miles of pavement we have put down in the US?”
That’s about 2% of the area of the US. Let’s be generous and assume that 1% of the entire land surface of the Earth is paved (this is probably much too large); since 30% of the Earth’s surface is land, it follows that less than 0.3% of the Earth’s surface is paved (probably much less). In other words, the effect of pavement is negligible.
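Tom W’s fractions can be spelled out. The ~3.8 million square mile US total area is my round figure; the 61,000 sq mi paved estimate and the 1% global assumption are from the comments above:

```python
# Fraction of the US that is paved, using the figures from the thread.
paved_us_sq_mi = 61_000
us_area_sq_mi  = 3_800_000     # rough total US area (my round figure)
print(round(paved_us_sq_mi / us_area_sq_mi * 100, 1))  # 1.6 -- i.e. "about 2%"

# His generous global bound: 1% of land paved, land = 30% of Earth's surface.
land_fraction = 0.30
paved_share   = 0.01
print(round(paved_share * land_fraction * 100, 2))     # 0.3 -- of the whole surface
```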
DirkH (17:55:47) :
That’s how it starts, and from there the cost of electricity rises by 6 to 10% a year; that’s our experience here in Germany – as more and more wind turbines and PV panels get erected, each one subsidized, the cost rises.
——-
Perhaps a country at the same latitude as Labrador, Canada shouldn’t be subsidizing PV power. But I bet you take advantage of wind power at least as well as the Danish. Also, the idea is that as wind turbines get bigger and better engineered, and economies of scale from manufacturing tens of thousands of them kick in, subsidies should come down.
Aren’t the Germans still good engineers and manufacturers? Or did that die out last century…
“Also, Tom… I asked about the models. Where in the models have they accounted for any of this?”
They don’t. As I have pointed out in two previous posts the effect is negligible (a fraction of 1%).
“Jeeze, and people call us “deniers””
For good reason.