Does Hansen's Error "Matter"? – guest post by Steve McIntyre

There’s been quite a bit of publicity in the right-wing blogosphere about Hansen’s Y2K error and the resulting change in the U.S. leaderboard (by which 1934 is the new warmest U.S. year). In contrast, realclimate has dismissed it as a triviality, and the climate blogosphere is doing its best to ignore the matter entirely.

My own view has been that the matter is certainly not the triviality that Gavin Schmidt would have you believe, but neither is it any magic bullet. I think that the point is significant for reasons that have mostly eluded commentators on both sides.

Station Data

First, let’s start with the impact of Hansen’s error on individual station histories (my examination of this matter arose from individual station histories, not from the global record). GISS provides an excellent and popular online service for plotting temperature histories of individual stations. Many such histories have been posted up in connection with the ongoing examination of surface station quality at surfacestations.org. Here’s an example of this type of graphic:

Figure 1. Plot of Detroit Lakes MN using GISS software

But it’s presumably not just Anthony Watts and his readers who have used these GISS station plots; presumably scientists and other members of the public have used this GISS information as well. The Hansen error is far from trivial at the level of individual stations. Grand Canyon was one of the stations previously discussed in connection with the Tucson urban heat island. In this case, the Hansen error was about 0.5 deg C. Some discrepancies are 1 deg C or higher.

Figure 2. Grand Canyon Adjustments

Not all station errors lead to positive steps. There is a bimodal distribution of errors, reported earlier at CA here, with many stations having negative steps. There is a positive skew, so that the net impact of the step error is about 0.15 deg C according to Hansen. However, as you can see from the distribution, the impact on the majority of stations is substantially larger than 0.15 deg C. For users of information regarding individual stations, the changes may be highly relevant.
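The arithmetic behind this point can be sketched with simulated numbers (the distribution parameters below are made up for illustration; they are not the actual GISS station errors):

```python
import numpy as np

# Illustration only: a positively skewed bimodal mix of positive and
# negative station step errors can average out to roughly +0.15 deg C
# even though the typical individual station error is much larger.
rng = np.random.default_rng(0)
pos = rng.normal(0.75, 0.3, 550)   # stations with positive steps (deg C)
neg = rng.normal(-0.60, 0.3, 450)  # stations with negative steps (deg C)
steps = np.concatenate([pos, neg])

mean_impact = steps.mean()               # network-average effect (~0.15)
typical_size = np.median(np.abs(steps))  # typical per-station error

print(round(mean_impact, 2), round(typical_size, 2))
```

The network mean thus understates what a user of any single station record would see.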

GISS recognized that the error had a significant impact on individual stations and took rapid steps to revise their station data (and indeed the form of their revision seems far from ideal, indicating the haste with which it was done). However, GISS failed to provide any explicit notice or warning on their station data webpage that the data had been changed, or any explicit notice to users who had downloaded data or graphs in the past that there had been significant changes to many U.S. series. This obligation existed regardless of any impact on world totals.

Figure 3. Distribution of Step Errors

GISS has emphasized recently that the U.S. constitutes only 2% of global land surface, arguing that the impact of the error is negligible on the global average. While this may be so for users of the GISS global average, U.S. HCN stations constitute about 50% of active stations (those with values in 2004 or later) in the GISS network, as shown below. The sharp downward step in station counts after March 2006 in the right panel marks the last month in which USHCN data is presently included in the GISS system. The Hansen error affects all the USHCN stations and, to the extent that users of the GISS system are interested in individual stations, the number of affected stations is far from insignificant, regardless of the impact on global averages.

Figure 4. Number of Time Series in GISS Network. This includes all versions in the GISS network and exaggerates the population in the 1980s, as several different (and usually similar) versions of the same data are often included.

U.S. Temperature History

The Hansen error also has a significant impact on the GISS estimate of U.S. temperature history, with estimates for 2000 and later being lowered by about 0.15 deg C (2006 by 0.10 deg C). Again GISS moved quickly to revise their online information, changing their US temperature data on Aug 7, 2007. Even though Gavin Schmidt of GISS and realclimate said that changes of 0.1 deg C in individual years were “significant”, GISS did not explicitly announce these changes or alert readers that a “significant” change had occurred for values from 2000-2006. Obviously they would have been entitled to observe that the changes in the U.S. record did not have a material impact on the world record, but it would have been appropriate for them to have provided explicit notice of the changes to the U.S. record, given that the changes resulted from an error.

The changes in the U.S. history were not brought to the attention of readers by GISS itself, but in this post at climateaudit. As a result of the GISS revisions, there was a change in the “leader board”: 1934 emerged as the warmest U.S. year, and more of the top ten warm years now come from the 1930s than from the past 10 years. This has been widely discussed in the right-wing blogosphere and has been acknowledged at realclimate as follows:

The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).

There were however some very minor re-arrangements in the various rankings (see data). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these differences are statistically significant.

In my opinion, it would have been more appropriate for Gavin Schmidt of GISS (who was copied on the GISS correspondence to me) to ensure that a statement like this appeared in the caption to the U.S. temperature history on the GISS webpage, rather than after the fact at realclimate.

Obviously much of the blogosphere’s delight in the leader board changes is a reaction to many fevered press releases and news stories about year x being the “warmest year”. For example, on Jan 7, 2007, NOAA announced that:

The 2006 average annual temperature for the contiguous U.S. was the warmest on record.

This press release was widely covered, as you can determine by googling “warmest year 2006 united states”. Now NOAA and NASA are different organizations, and it was NOAA, not NASA, that issued the above press release, but members of the public can surely be forgiven for not making fine distinctions between different alphabet soups. I think that NASA might reasonably have foreseen that the change in rankings would catch the interest of the public and, had they made a proper report on their webpage, they might have forestalled much subsequent criticism.

In addition, while Schmidt describes the changes atop the leader board as “very minor re-arrangements”, many followers of the climate debate are aware of intense battles over 0.1 or 0.2 degrees (consider the satellite battles). Readers might perform a little thought experiment: suppose that Spencer and Christy had published a temperature history in which they claimed that 1934 was the warmest U.S. year on record, and then it turned out that they had made a computer programming error opposite to the one that Hansen made; that Wentz and Mears discovered there was an error of 0.15 deg C in the Spencer and Christy results; and that, after fixing this error, it turned out that 2006 was the warmest year on record. Would realclimate simply describe this as a “very minor re-arrangement”?

So while the Hansen error did not have a material impact on world temperatures, it did have a very substantial impact on U.S. station data and a “significant” impact on the U.S. average. Both of these surely “matter”, and both deserved formal notice from Hansen and GISS.

Can GISS Adjustments “Fix” Bad Data?

Now my original interest in GISS adjustments did not arise abstractly, but in the context of surface station quality. Climatological stations are supposed to meet a variety of quality standards, including the relatively undemanding requirement of being 100 feet (30 meters) from paved surfaces. Anthony Watts and the volunteers of surfacestations.org have documented one defective site after another, including a weather station in a parking lot at the University of Arizona, where MBH coauthor Malcolm Hughes is employed, shown below.

Figure 5. Tucson University of Arizona Weather Station

These revelations resulted in a variety of aggressive counter-attacks in the climate blogosphere, many of which argued that, while these individual sites may be contaminated, the “expert” software at GISS and NOAA could fix these problems. For example, here:

they [NOAA and/or GISS] can “fix” the problem with math and adjustments to the temperature record.

or here:

This assumes that contaminating influences can’t be and aren’t being removed analytically. I haven’t seen anyone saying such influences shouldn’t be removed from the analysis. However I do see professionals saying “we’ve done it”

“Fixing” bad data with software is by no means an easy thing to do (witness Mann’s unreported modification of principal components methodology on tree ring networks). The GISS adjustment schemes (despite protestations from Schmidt that they are “clearly outlined”) are not at all easy to replicate from the existing opaque descriptions. For example, there is nothing in the methodological description that hints at the change in data provenance before and after 2000 that caused the Hansen error. Because many sites are simultaneously affected by climate change, a general urban heat island effect and local microsite changes, adjustment for heat island effects and local microsite changes raises some complicated statistical questions that are nowhere discussed in the underlying references (Hansen et al 1999, 2001). In particular, the adjustment methods are not techniques that can be looked up in the statistical literature, where their properties and biases might be discerned. They are rather ad hoc and local techniques that may or may not be equal to the task of “fixing” the bad data.
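The mechanics of a splice error of this kind can be sketched schematically (the splice logic and the 0.5 deg C offset below are hypothetical illustrations, not the actual GISS code or data):

```python
import numpy as np

# Schematic sketch: two versions of the same station record differ by a
# constant offset (e.g. different adjustment baselines). Splicing one
# version before a cutover year onto the other version after it creates
# a fictitious step at the splice point that exists in neither series.
years = np.arange(1990, 2007)
true_anom = np.zeros_like(years, dtype=float)  # flat "true" anomalies

offset = 0.5                    # deg C offset between the two versions
version_a = true_anom           # e.g. a time-of-observation-adjusted series
version_b = true_anom + offset  # same series on a different baseline

spliced = np.where(years < 2000, version_a, version_b)

# The spliced record jumps by 0.5 deg C at 2000.
jump = spliced[years == 2000][0] - spliced[years == 1999][0]
print(jump)  # 0.5
```

Neither underlying version contains a step; the artifact is created entirely by the unaligned splice.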

Making readers run the gauntlet of trying to guess the precise data sets and precise methodologies obviously makes it very difficult to achieve any assessment of the statistical properties. In order to test the GISS adjustments, I requested that GISS provide me with details on their adjustment code. They refused. Nevertheless, there are enough different versions of U.S. station data (USHCN raw, USHCN time-of-observation adjusted, USHCN adjusted, GHCN raw, GHCN adjusted) that one can compare GISS raw and GISS adjusted data to other versions to get some idea of what they did.

In the course of reviewing quality problems at various surface sites, among other things, I compared these different versions of station data, including a comparison of the Tucson weather station shown above to the Grand Canyon weather station, which is presumably less affected by urban problems. This comparison demonstrated a very odd pattern, discussed here. The adjustments reduced the trend at the problematic Tucson site, but they also altered the Grand Canyon data, so that, instead of the 1930s being warmer than the present as in the raw data, the 2000s became warmer than the 1930s, with a sharp increase in the 2000s.

Figure 6. Comparison of Tucson and Grand Canyon Versions

Now some portion of the post-2000 jump in adjusted Grand Canyon values shown here is due to Hansen’s Y2K error, but that accounts for only a 0.5 deg C jump after 2000 and does not explain why Grand Canyon values should have been adjusted so much. In this case, the adjustments are primarily at the USHCN stage. The USHCN station history adjustments appear particularly troublesome to me, not just here but at other sites (e.g. Orland CA). They end up making material changes to sites identified as “good” sites, and my impression is that the USHCN adjustment procedures may be adjusting some of the very “best” sites (in terms of appearance and reported history) to better fit histories from sites that are clearly non-compliant with WMO standards (e.g. Marysville, Tucson). There are some real and interesting statistical issues with the USHCN station history adjustment procedure, and it is ridiculous that the source code for these adjustments (and the subsequent GISS adjustments – see bottom panel) is not available.

Closing the circle: my original interest in GISS adjustment procedures was not an abstract interest, but a specific interest in whether GISS adjustment procedures were equal to the challenge of “fixing” bad data. If one views the above assessment as a type of limited software audit (limited by lack of access to source code and operating manuals), one can say firmly that the GISS software not only failed to pick up and correct fictitious steps of up to 1 deg C, but that GISS actually introduced this error in the course of their programming.

According to any reasonable audit standards, one would conclude that the GISS software had failed this particular test. While GISS can (and has) patched the particular error that I reported to them, their patching hardly proves the merit of the GISS (and USHCN) adjustment procedures. These need to be carefully examined. This was a crying need prior to the identification of the Hansen error and would have been a crying need even without it.

One practical effect of the error is that it surely becomes much harder for GISS to continue obstructing detailed examination of their source code and methodologies after the embarrassment of this particular incident. GISS itself has no policy against placing source code online and, indeed, a huge amount of code for their climate model is online. So it’s hard to understand their present stubbornness.

The U.S. and the Rest of the World

Schmidt observed that the U.S. accounts for only 2% of the world’s land surface and that the correction of this error in the U.S. has “minimal impact on the world data”, which he illustrated by comparing the U.S. index to the global index. I’ve re-plotted this from original data on a common scale. Even without the recent changes, the U.S. history contrasts with the global history: the U.S. history has a rather minimal trend, if any, since the 1930s, while the ROW has a very pronounced trend since the 1930s.

Re-plotted from GISS Fig A and GFig D data.
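The contrast can be quantified with ordinary least-squares trends; the series below are synthetic stand-ins (not the GISS Fig A / Fig D data) just to show the computation:

```python
import numpy as np

# Synthetic anomalies: a U.S.-like series with essentially no trend since
# the 1930s and a ROW-like series with a pronounced trend (values are
# illustrative, not real data).
rng = np.random.default_rng(1)
years = np.arange(1935, 2005)
us = 0.0005 * (years - 1935) + rng.normal(0, 0.1, years.size)
row = 0.0100 * (years - 1935) + rng.normal(0, 0.1, years.size)

# Least-squares slope, converted from deg C/year to deg C/decade.
us_trend = np.polyfit(years, us, 1)[0] * 10
row_trend = np.polyfit(years, row, 1)[0] * 10

print(round(us_trend, 3), round(row_trend, 3))
```

Plotting both on a common scale, as above, is what makes this divergence in trends visible.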

These differences are attributed to “regional” differences, and it is quite possible that this is a complete explanation. However, the conclusion is complicated by a number of important methodological differences between the U.S. and the ROW. In the U.S., despite the criticisms being rendered at surfacestations.org, there are many rural stations that have been in existence over a relatively long period of time; while one may cavil at how NOAA and/or GISS have carried out adjustments, they have collected metadata for many stations and made a concerted effort to adjust for such metadata. On the other hand, many of the stations in China, Indonesia, Brazil and elsewhere are in urban areas (such as Shanghai or Beijing). In some of the major indexes (CRU, NOAA), there appears to be no attempt whatever to adjust for urbanization. GISS does report an effort to adjust for urbanization in some cases, but their ability to do so depends on the existence of nearby rural stations, which are not always available. Thus, there is a real concern that the need for urban adjustment is most severe in the very areas where adjustments are either not made or not accurately made.

In its consideration of possible urbanization and/or microsite effects, IPCC has taken the position that urban effects are negligible, relying on a very few studies (Jones et al 1990, Peterson et al 2003, Parker 2005, 2006), each of which has been discussed at length at this site. In my opinion, none of these studies can be relied on for concluding that urbanization impacts have been avoided in the ROW sites contributing to the overall history.

One more story to conclude. Non-compliant surface stations were reported in the formal academic literature by Pielke and Davey (2005), who described a number of non-compliant sites in eastern Colorado. In NOAA’s official response to this criticism, Vose et al (2005) said in effect:

it doesn’t matter. It’s only eastern Colorado. You haven’t proved that there are problems anywhere else in the United States.

In most businesses, the identification of glaring problems, even in a restricted region like eastern Colorado, would prompt an immediate evaluation to determine whether similar problems existed elsewhere. However, that does not appear to have taken place, and matters rested until Anthony Watts and the volunteers at surfacestations.org launched a concerted effort to evaluate stations in other parts of the country, determining that the problems were not only just as bad as in eastern Colorado, but in some cases much worse.

Now, in response to problems with both station quality and adjustment software, Schmidt and Hansen say in effect, as NOAA did before them:

it doesn’t matter. It’s only the United States. You haven’t proved that there are problems anywhere else in the world.

Douglas Hoyt
August 12, 2007 6:10 am

Orland is a pristine rural site that has remained unchanged for many years. The observations show no warming. It is shocking that the adjusted Orland data shows a strong warming trend. It tells me the adjustments are wrong and the adjusted data cannot be trusted.

Evan Jones(@evanjones)
August 12, 2007 7:26 am

BingO! St. Mac has got it.
Not to put too fine a point on it:
The USHCN is adjusting the reading from GOOD sites based on the data from BAD sites!
And I bet the UHI offset has been miscalculated (minimized) because of those same bad high readings.
All I can say is keep up the fine work. Saints are hard to come by in science these days!

David Walton
August 12, 2007 8:10 am

Sorry to hear Climate Audit is offline again (just checked, yep, not there). I hope things will soon smooth out for them.
Thanks for this blog entry; it makes quite a bit clearer the situation that US climatologists inside and outside the government agencies find themselves in, and what some of the significant core issues are.

August 12, 2007 8:34 am

OK Steve, I must say I’m impressed by the thought you put into this look at the recent hubbub over climate data. (BTW, I don’t know just what happened, but all sides should be totally candid about something as important as this.)
Here’s the main forest point, I think: CO2 absorbs infrared, so more in the atmosphere is a driver for higher temperatures, agreed? It’s sort of like pouring vinegar into a lake: increased acidity unless something else compensates. So, we can expect rising temperatures as the most likely outcome (and a harmful outcome doesn’t have to be certain anyway to motivate protective defenses!) REM also that rising temperatures will inhibit some counter forces, like reflective ice etc. So, what can rational people agree on about this?

August 12, 2007 11:00 am

This is fascinating. How on Earth does one “adjust” bad data? Coming from a mathematics/engineering background – the very premise of “correcting bad data” sends chills down my spine!
Instead, one should have to take the measurements over again. We all know that this can not be done. Therefore, the experiment must be scrapped. Throw away the entire set of data, you ask? Not really. Just ignore the data and place an asterisk next to it with the footnote “flawed for xxxx reason” and be honest about it.
Then, stop speaking of the Earth warming. Especially since the data I see is a speck of dust in the ocean compared to the entire life of the planet. The data, itself, is insignificant compared to the entire 4.5 billion years of this planet’s existence.
Maybe someone should notify Al Gore that, just like WMD’s, the intelligence on global warming was ALSO flawed and, well, global warming just doesn’t exist.
The planet is a self-regulating system. Rinse, Repeat…

August 12, 2007 11:56 am

One point – Schmidt said that the U.S. only represents 2% of the land surface. Actually, the U.S. represents just 2% of the entire world surface, but about 6% of the land surface. Since the best temperature records come from land surface stations, the U.S. records cannot be considered an “insignificant” portion of the record.

August 12, 2007 12:23 pm

I wonder if James hansen would have taken any measures to correct the data set, if it wasn’t Steve McIntyre himself, but rather someone else who raised the question about the dataset.

August 12, 2007 1:31 pm

RE Neil B
1) Only if there is nothing else to absorb it. Forgot about all that water vapor and the big rock under the atmosphere, didn’t we?
2) More like throwing fertilizer in a lake. Got all those nasty plants sucking it up.
3) So are you wearing your tinfoil hat to protect you from RFI? Can’t be too careful, you know.
Rational people depend on reasonable risk analysis based on critically reviewed science with sound methodology.

Steven mosher
August 12, 2007 2:07 pm

Good to see you back on again.
Three things:
1. NASA have an Open Source option. I am looking through all the policies and think there may be some leverage with the CIO and the Chief Engineer, who are charged with implementing the policies.
2. Hansen 2001. Are you aware that Hansen eliminated portions of Northern California records (1880-1929), namely Orleans, Lake Spaulding, Willows, Electra, and Crater Lake Ore., because of cooling anomalies?
3. I recall some dendro studies that used US-specific data. Might be interesting to see what the impact on Parker was.

August 12, 2007 2:36 pm

Re : Neil B
“… CO2 absorbs infrared, so more in the atmosphere is a driver for higher temperatures, agreed? …”
Thank you for only saying it is a driver, rather than that more CO2 automatically equals more temperature.
(Which it doesn’t of course – sooner or later, you reach a saturation point where all outbound infrared is being absorbed, and adding further CO2 has no effect.)
Anyway, the important question is how significant a driver CO2 is. Is it dominant over all the other drivers? Or relatively minor? Or totally insignificant?
“… It’s sort of like pouring vinegar into a lake: increased acidity unless something else compensates. …”
If you pour a bottle of vinegar in a lake, it will certainly make the lake more acid … by some tiny amount. Fifty feet away, you will not be able to measure the effect.
“… So, we can expect rising temperatures as the most likely outcome …”
No. That assumes that this one factor – anthropogenic CO2 – is the dominant factor driving temperatures. We know that there have been huge natural variations in climate over the entire history of the planet, with far greater temperature rises/falls than we have seen in the 20th century. We should expect that these natural variations will continue.
“… (and a harmful outcome doesn’t have to be certain anyway to motivate protective defenses!) …”
This is a soft statement of the Precautionary Principle: if we can imagine a threat that cannot be proved to be non-existent, then we should take measures to protect against it.
Unfortunately, it only works if you assume that the cost of protection is zero, which it never is. In the particular case of global warming, the reason we use hydrocarbons so much is that they are the cheapest method currently available to store energy. In the absence of a better means of energy storage, anything that reduces our use of hydrocarbons is going to increase our cost of living. A lot.
Reductio ad absurdum of the Precautionary Principle – I’m worried about being invaded by martians with death rays. Can you prove it won’t happen? No? Then we should protect ourselves against the threat by massive investment in the space program – which is something we should be doing anyway…
“… REM also that rising temperatures will inhibit some counter forces, like reflective ice etc. …”
This is called a positive feedback – the more it heats up, the more solar energy gets absorbed, so the more it heats up, and so on. Possible negative feedbacks include clouds, for example – the more it heats up, the more humid the atmosphere, the more clouds form, the more solar energy gets reflected … and the lower the temperature.
If you only have positive feedbacks, then sooner or later, the climate system (or any system) runs away with itself to one or the other extreme.
But this has not happened in the history of the planet. Therefore, we know that the climate system is controlled by negative feedbacks, not positive feedbacks. So if the system is pushed one way, then reasonably soon, resistance to the push increases, and the system stops going in that direction.
“… So, what can rational people agree on about this? …”
That science is far too important to be a plaything for political activists and anyone with a big PR budget.
That anyone who refuses to reveal his data and workings is not a scientist, and his work product is not science.

August 12, 2007 2:44 pm

What?! Steven you mean Hansen cherry picked the dataset as well as “adjusting” it? How did you find this out? Could he have done this country wide?

August 12, 2007 3:06 pm

Sure there are other things in the atmosphere, but more CO2 enhances the warming effect. And plants do absorb lots of it, but no one disputes that average CO2 has gone up over the past decades (from a long-term background of about 285 ppm to about 370 ppm), and check out this discussion of how we know why.
Plants aren’t absorbing enough of the excess.
Your last point is valid – I don’t know just how close we are to that, but the theoretical basis for the likely outcome is there.

August 12, 2007 3:16 pm

Fred: Yes we have to wonder just how much effect will be had, worth debating. Negative feedbacks will eventually pull temperatures back, but I think the problem is: a big swing is bad enough, and small consolation to have it come down later after washed in coastlines, etc.
As for the precautionary principle: I don’t mean that even the tiniest or outlandish threat deserves a massive response. We do have to take cost-threat comparison into account, but here we have something with good theoretical basis for some effect. Much of what we could do to lower CO2 would be good resource conservation and political independence anyway. The rest is “game” as they say.
The science is certainly too important to be a plaything for any side about this issue. Can you provide objective link etc. to good info on the “concealment” problem?

Steven Mosher
August 12, 2007 3:44 pm

BarryW, are you talkin to ME!
Hansen was very straightforward about his elimination of certain data from Northern California stations. He wrote about it in Hansen 2001. The impact on the US record was minor, 0.01 C.
BUT the logic was precious.
Let me explain. Both Hansen and Peterson believe that the URBAN/RURAL distinction does not matter.
More specifically, they find no difference between temps at RURAL sites and URBAN sites.
Note: the factor that matters most to them is population density, which is measured by the proxy of nightlights: a satellite picture of the world at night. Big lights = lots of people.
1. Nightlights picks out Rural/Urban.
2. Temperatures show no difference.
3. BUT there is massive literature and experiments on heat increases in cities.
4. Conclusion: the weather stations in URBAN settings must be in COOL PARKS.
Put another way:
A. UHI is real.
B. We see no difference between Rural and Urban temps.
C. Urban sites MUST be in cool parks.
Now Hansen’s problem is that he found 5 California sites that had early century cooling. So HE concluded the sites had to be messed up somehow, or that there was some flaw in nightlights.
Read Hansen 2001. Pretty funny.

Evan Jones(@evanjones)
August 12, 2007 4:09 pm

“That anyone who refuses to reveal his data and workings is not a scientist, and his work product is not science.”
Say it again.
Say it a whole lot.
Shout it from the rooftops.
Convince others to say it.

August 12, 2007 5:19 pm

Weren’t Mann and others dismissing the medieval warm period because it wasn’t global? If the last decade is unexceptional compared to the 1930s in North America, I wonder if they’ll be as quick to conclude global warming isn’t global?

August 12, 2007 7:04 pm

Re: Mike…
Good point considering the more stable temperature of the southern hemisphere. That is, no warming trend in the last 28 years from satellite measurements.
I am however willing to bet one of my legs that they won’t conclude that global warming isn’t global.

August 12, 2007 7:04 pm

Neil B.,
I worry a LOT about washed in coastlines.
Latest reports are that the oceans are rising 1.5 mm a year. That is 6″ in a century.
If your head is only 6″ above water and you keep it in that position for a century you will be drowned by global warming.
Run for your life.

August 12, 2007 7:36 pm

Being drowned isn’t the point. The economic stress from the rise costs lots even with a little bit of increase, since so much is built close to the water in concert with storms etc. A given event will cost millions more per each few mm of sea level, etc. Actually the best point is, again: Much [maybe most] of what we could do to lower CO2 would be good resource conservation and political independence anyway.

August 13, 2007 12:34 am

Steve McIntyre
Did you try asking Hansen’s boss at NASA? The word I hear is he is non too fond of the whole direction of the AGW steamroller, has been catching flack for it too.
It wouldn’t surprise me to find him waiting with open arms for an excuse to pin Hansen’s ears back. You could be that excuse.

Bob Meyer
August 13, 2007 2:02 am

TCO said:
“It is COMMON to have flaws in experiments and not have the ability to repeat runs. That’s cost benefit. And one can still get valuable inferences, do useful analysis. So neither an aghast reaction at flaws NOR a cover up and hide attitude are justified.”
That’s complete crap. To correct data you need to know the exact cause of the error, the exact value of the error and you have to demonstrate that the cause existed at the time of measurement. In addition, you have to demonstrate that in a comparable case the cause produced the precise effect you claim it did. In this case that is impossible.
When your data is bad and the influences are either unknown or impossible to calculate, then you have nothing. Got that? Nothing.
I was faced with proving that a $3,000,000 guidance system was perfectly functional despite some “flawed” test data. I stood in front of an Air Force reviewer for an entire day explaining why it is valid to use a diode forward drop as a proxy for a temperature measurement. This proved that the tests were performed at the correct temperature despite the environmental sensor indicating otherwise.
I had to show that the environmental sensor had failed, how it failed, what it read when it failed and what the real value was. I further had to show that the diode proxy indicated the correct temperature both when the environmental sensor worked and when it didn’t.
I then had to reproduce the effect with a prototype circuit to show how the difference in time delays of the sensor and the diode affected the readings.
That is what’s involved in correcting “bad” data (and I had a proxy measurement to gauge the magnitude of the error).
My experience was minor compared to a friend of mine who had to do a similar analysis for a pacemaker implanted in an eighty year old man. This poor old guy almost certainly would not survive an operation to replace it.
When an engineer says that correcting bad data sends chills down his spine he is almost certainly speaking from experience.
TCO, when you have walked a mile in the moccasins of an engineer whose errors can result in death or destruction then you can appreciate why we quake at the thought of correcting bad data. Until then, stick to the science and lay off the personal insults.

Stan Pa
August 13, 2007 3:33 am

In reference to the Lee posting.
The corrections were made because of serious errors found in the data.
Will you please provide the information that shows that the data from the rest of the world that you are relying on is not compromised to the same extent?

Ian McLeod
August 13, 2007 8:40 am

Steve’s work discussed in a full editorial in today’s National Post.
Lorne Gunter appears to have the facts correct: four of the hottest years in the US over the past century were in the 1930s, and NASA’s GISS quietly corrected the error discovered by Steve McIntyre. Gunter describes James Hansen as the “godfather of global-warming alarmism” and McIntyre’s work as “the bane of many warmers’ religious-like belief in climate catastrophe”.

August 13, 2007 9:29 am

Excellent and informative posts. Can somebody help me with this?
Here’s something that bothers me. Water vapor is responsible for 95% of the greenhouse effect. Active volcanoes represent an additional 2%. Since the atmosphere is made up of 78% nitrogen and 21% oxygen, that leaves, depending on the source, 0.038% to 0.045% for CO2 as an atmospheric gas. It seems to me that we are making a great many assumptions about a gas that represents less than 1/2% of the atmosphere and attributing the potential end of the world to it. My sources are available at http://globalwarminghysteri
I’m not a scientist, I am a scuba diving instructor and know the figures above to be accurate, but in lay terms I wonder if somebody can explain to me why the main GHG, water vapor, is left out of the discussion? Is it because we can’t tax it? We are being hosed big time by those that are putting this nonsense before us. Remember: a lie repeated often enough is often accepted as the truth. Global warming is a theory based on flawed mathematical models, and can’t stand up to the rigor of the scientific method.
But what would happen if we had evidence of glaciers melting and massive flooding that occurred 10,000 years ago, long before man burned fossil fuels to any significant degree? Such evidence would certainly be considered evidence that global warming is a natural phenomenon, as opposed to man-made.
Well – this evidence actually exists and was reported in a Yahoo News article titled “Stone Age Settlement Found Under English Channel.”
Thanks, Jim

August 13, 2007 9:30 am

>> but no one disputes that average CO2 has gone up over the past decades (from long-term background of about 285 ppm to about 370
Actually, I do. The measurements at Mauna Loa are only a measurement of the CO2 at that particular spot. Even so, this dataset has never been audited by anyone. Your reference to 285 stems from very bad science. The actual data shows variable CO2 levels. For example, the average CO2 level is probably about 235. In 1940, it was 420. Empirical measurements show that CO2 level goes up and down with temperature, just like Henry’s law predicts.
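The Henry’s-law reasoning invoked here can be sketched numerically. This is an illustrative sketch only: the constants below are representative textbook values for CO2 dissolving in water (not figures from this thread), and the function is a standard van ’t Hoff temperature correction, not anyone’s actual analysis.

```python
import math

# Representative (assumed) values for CO2 in water:
# Henry's solubility ~0.034 mol/(L*atm) at 298.15 K, van 't Hoff slope ~2400 K.
KH_298 = 0.034
VANT_HOFF = 2400.0

def henry_solubility(temp_k):
    """Approximate Henry's-law solubility of CO2 at temp_k, in mol/(L*atm)."""
    return KH_298 * math.exp(VANT_HOFF * (1.0 / temp_k - 1.0 / 298.15))

cold = henry_solubility(278.15)   # 5 C, e.g. cold surface water
warm = henry_solubility(298.15)   # 25 C, e.g. warm surface water
print(round(cold / warm, 2))      # colder water holds noticeably more CO2
```

Under these assumed constants, a 20 °C cooling raises solubility by roughly 80%, which is the qualitative direction the comment relies on: warming water releases CO2, cooling water absorbs it.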

August 13, 2007 10:02 am

Neil B,
I think your assessment of say $10 million per mm sea level rise (provided the rise is below 1 ft) is about right.
150 mm rise = $1.5 billion spread over the whole world over 100 years. That is not serious money. Even 10X that is not serious money.
A rise of under 1ft is going to get lost in the tidal effects.
If people want to live where the land area is variable it is a personal choice. If they are unable to deal with even a 1 ft rise in sea level perhaps they need to reconsider their place of habitation.

August 13, 2007 11:00 am

Well, I was thinking $ “per event” which is a bit different (and admittedly ambiguous), but I wouldn’t know the specifics anyway. (Who would?) The government surely shouldn’t subsidize and encourage risky building in any case.
BTW, it is odd that Drudge hasn’t (AFAICT) put up a link about this temperature discrepancy. Isn’t he normally sympathetic to GW skepticism?

Karl Rove
August 13, 2007 11:27 am

and they always accused me of having a climate control machine.
Great work Steve, I am working to make sure you have an invitation to explain all this on FOX

August 13, 2007 11:41 am

This unearthing of yet another NASA debacle has obviously touched a nerve or two. The attacks disgust me, but I am hardly surprised by them. Whenever I drive by Ames I just sigh, about how good things used to be, and how far they have fallen. We used to take the ideas of the world’s best rocket man and put them into bold practice; now we putter around in LEO in an oversized lifting body with freakin’ ICBMs strapped to it, along with a Hindenburg’s worth of H. The so-called “climate scientists” at the big N chant Gaia spells and curse the GOP. What a travesty. Break it up and start over from scratch. Shut her down ….

Robert r. Prudhomme
August 13, 2007 12:14 pm

Remember, Jim Hansen received a $250K grant from the Heinz Foundation while supporting Gore’s movie and all of its errors. He also supported John Kerry in 2004. He, of course, is going to cherry-pick to support his political views.

Evan Jones(@evanjones)
August 13, 2007 12:49 pm

“It seems to me that we are making a great deal of assumptions about a gas that represents less than 1/2% of the atmosphere and attributing the potential end of the world to it.”
I’ve wondered about that, too. A 50% increase sounds pretty radical–unless it’s 50% of a thirtieth of a percent.
The “twice nothing is still nothing” argument may well apply here.
OTOH, that sword cuts both ways:
Thinking back to the c. 20 ppm of gunk that was in NYC air back in 1970, one may recall that even a minuscule percentage can have a large, practical effect.

Dave Dardinger
August 13, 2007 1:48 pm

I turned on the radio while going to the store this morning and it was on the Dennis Prager program. He was discussing what the worry was about in the late ’90s about Y2K, so I listened, and sure enough he mentioned Steve McIntyre and Anthony Watts by name and what the results of fixing Hansen’s error were.
It seemed to be a fair report as he stated that there wasn’t much effect on temp measures for the whole world and that he needed to check things out before he’d vouch for the results. But he also admitted that he trusts AGW skeptics more than he does AGW activists.

August 13, 2007 2:25 pm

>> the average CO2 level is probably about 235
I meant to say 335. That’s what the data shows, when you don’t cherry-pick data to fit a preconceived notion.
>> if somebody can explain to me why the main GHG water vapor is left out of the discussion? Is it because we can’t tax it?
Basically, that’s it. You are absolutely correct that water vapor dwarfs CO2 in GHG importance. And man does have a similar effect on the water cycle as he does on the carbon cycle. However, there are two problems with this approach from the AGW point of view. 1) Everyone is familiar with the water cycle, and would be far less gullible than with CO2. 2) AGW is really about restricting human activity, and energy usage is right next to oxygen usage in importance. Energy usage can’t help but produce CO2, so it was chosen as the culprit. The prospect of a campaign to limit the boiling of water, draining pools, etc. just isn’t as compelling.

August 13, 2007 3:19 pm

You may have gotten the 420 ppm in 1940 from readings at Point Barrow, but the most relevant document (link “Hock et al. (1947-1949) 400 ppm Point Barrow”, found at Link) says that almost all measurements then, around 1950, were about 0.03%. That seems to be typical. It varies some from place to place and time to time, but the yearly world average has climbed about as standard graphs show, unless you can show a convincing alternative to orthodoxy.
As for supposedly who is doing what for what reasons, remember that Svante Arrhenius wrote about the likelihood of CO2 induced global warming back in 1896, long before Al Gore was born. He even thought it would be a good thing, so I doubt his reasoning was intended to deceive.

Daniel Rose
August 13, 2007 4:03 pm

Neil B.,
Your link to a discussion of CO2 increases in the atmosphere was broken as it contained the closing parenthesis of your text:
So, in case anyone tried and failed to follow it, then gave up, here it is as it should be:

August 13, 2007 4:21 pm

(Deep Ecology)But if you restrict irrigation of arid zones, especially in places like the Western US, you get a two-fer. You reduce the carbon footprint and lower the amount of water vapor “artificially” being sent into the atmosphere. Force people to live sustainably in humid zones, stop enabling more than a thin contingent in the arid zones. While we’re at it, let’s ban cooling towers (the smaller type used for HVAC). Hey, and to boot, doing these things are guaranteed to lower population!(/Deep Ecology)

Richard Sharpe
August 13, 2007 5:21 pm

As for supposedly who is doing what for what reasons, remember that Svante Arrhenius wrote about the likelihood of CO2 induced global warming back in 1896, long before Al Gore was born. He even thought it would be a good thing, so I doubt his reasoning was intended to deceive.
He was right that it would be a good thing.
CO2 levels have been much higher in the deep past, and temperatures have been much higher. Indeed, biodiversity was greater during those times.

August 13, 2007 5:28 pm

I learned one thing by reading all this: I know nothing about climate. And I’m starting to wonder if anyone really does.

August 13, 2007 6:20 pm

Let me see if I understand this: A discredited anti-GW hack Canadian mine promoter fronting for ExxonMobil quietly objects to NASA’s temperature record, and — without providing any explanation — his buddies at GISS politely thank him and immediately rewrite the entire climate history of the United States??!!
This story gets more troubling with each new chapter.

August 13, 2007 7:06 pm

BTW I don’t know why so many are so hard on Hansen; after all, here Link he says thanks to McIntyre for bringing up the error, and I don’t see evidence of anything deliberate. Hansen is said to have written the following around 2000, but I don’t have the link, only a quote in TNR:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis. … In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1ºC. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1ºC.
IOW, he knew ’34 was warm, and too close to the late ’90s to call. In any case, looking at the corrected graph, it looks like a general upward trend, with ups and downs superimposed (like a typical stock market graph).

John Howard
August 13, 2007 7:23 pm

Stir estimates together and make some numbers crunch and soon you’ve got a science and more than just a hunch.
From reading all of this, it’s now quite clear to me, that if you add up guesses, you get a certainty.

David Naugler
August 13, 2007 7:44 pm

This is all very commendable, but John Daly
went through all this ten years ago and was ignored. His death is regrettable.

August 13, 2007 7:53 pm

Mike asked “Weren’t Mann and others dismissing the Medieval Warm period because it wasn’t global?”
My search skills being non-existent I can only go from a flawed memory, but – yes. At first, they said they dismissed it because it was a UK anomaly. When Greenland was brought up, they said it was a “North Atlantic” anomaly owing entirely to the Gulf Stream. Then others started submitting written accounts from China, Japan, Korea, etc., and proxy data from South America, Africa, and Antarctica…

August 13, 2007 10:03 pm

Neil B.,
Why so hard on Hansen?
Data and methods.
Science is open. Hansen keeps secrets.
dearieme posted Aug 9, 2007 12:15:51 PM:
“Government scientists ..refuse to publicly release their temperature adjustment algorithms or software”: the default interpretation of that is that they are crooks.

August 13, 2007 10:26 pm

Neil: Yes, Hansen deserves some credit. Many other people in similar circumstances (e.g. Michael Mann) refuse to admit that they even have a problem.
But the reason we don’t give Hansen a free ride is, I think, that this should never have happened in the first place. He needs to open up his methods fully so that they can be audited. Until then, how will we know how many other such errors lurk? There are a lot of questionable adjustments that I’m sure Mr. McIntyre and Watts would be happy to investigate and test for validity, if only they had access to the code/methods so that they could actually do so properly.
It benefits all of us for this data and the adjustments to be accurate. Fixing a mistake is one tiny step in getting there. But I think many of us will never have full confidence in the records until they are independently tested and verified.

Julian Flood
August 14, 2007 12:52 am

Ref; the precautionary principle.
There are costs if you choose the absolutely safest option. Ask the orangutans — they’re losing habitat to palm oil plantations. The oil is destined for bio-fuel.

August 14, 2007 12:55 am

Regional differences are significant across the globe and in the U.S. Southern Africa shows cooling, and southern Australia shows temperatures approaching the 1930s. In the U.S., the “dust bowl” area of the 1930s is still below the temperatures of that era. I have written several regional summaries, available at . Also, the Tucson station mentioned in this thread is used in an urban/rural comparison in my regional summary on the southwest U.S.

August 14, 2007 2:16 am

In 1940, it was 420. Empirical measurements show that CO2 level goes up and down with temperature, just like Henry’s law predicts.
You must realise that with this claimed huge temperature sensitivity you’d have a runaway greenhouse effect, which is even more reason for concern.
And if the CO2 is coming out of the oceans, where is all the man-made CO2 going?
“Into the oceans”, hmm, something’s wrong then with your bookkeeping.

August 14, 2007 6:35 am

>> You may have gotten the 420 ppm in 1940 from readings at point Barrow … were about 0.03% (previous comment corrected)
Yes, but the readings from Point Barrow are real, ranging from .03 to .05! Given that cold oceans are a deep sink, CO2 levels are higher at the equator and lower at the poles; therefore, an average reading of 400 ppm over a two-year period is quite significant. Even the .03 reading contradicts AGW dogma about low pre-industrial CO2 levels. The 3-year study in Luxembourg shows that CO2 levels are quite variable, and during that period, did not increase. The actual ice core data (non-cherry-picked) supports the contention of wide-ranging CO2 levels.
Of course, as your link shows, it was not only Point Barrow, Duerst measured 400 ppm in 1936-1939, Kreutz measured over 420 in 1939-41. Bazett measured 400 ppm in Philadelphia in 1941, Misra measured over 400 in India in 1941-1943, Lockhart measured over 600 ppm in Antarctica.
The central foundation of AGW is that man is able to dramatically affect the global CO2 level, i.e. that it was low before, and that man has greatly increased it. This idea is falsified by two facts: 1) Many plant species could not survive the alleged pre-industrial level, yet they did, and 2) the actual CO2 measurements referenced above. These two facts show that this critical AGW prerequisite is false.
>> It varies some place to place and time to time, but the year-world average has climbed about as standard graphs show
The Mauna Loa measurements only show the CO2 levels at Mauna Loa, a location on top of a volcano (a known CO2 source), next to an active volcano, downwind from equatorial waters known to be outgassing CO2. There is no scientific basis for claiming that Mauna Loa represents the worldwide average. Does the temperature there also represent the global average? Did you know that at Mauna Loa, they don’t record measurements unless the wind is blowing in from the sea? Did you know that scientists who have worked there have reported that a large percentage of data points are discarded and not included in the average? We should arrange for Steve Mc to be flown to Mauna Loa and audit them until they are blue in the face. It would probably only take a few days for them to turn red in the face.
>> unless you can show convincing alternative to orthodoxy.
The data contradicts the AGW idea, at every level. Interesting that you use the religious terminology “orthodoxy”. Freudian Slip?
>> Arrhenius wrote about the likelihood of CO2 induced global warming back in 1896
Yes, but the arguments of Arrhenius were falsified by his contemporaries.
>> so I doubt his reasoning was intended to deceive.
It’s quite amazingly bad logic to claim that since a scientist in 1896 said X in good faith, therefore, anyone who says X now is also acting in good faith.
By not publishing all the data, they make the comment by Bacastow that the Mauna Loa measurements were “edited” seem quite plausible. Pales & Keeling said that large portions of the raw data were rejected, leaving just a small fraction to be subjected to averaging techniques. The Scripps program to monitor CO2 in the atmosphere was conceived and initiated by Dr. Roger Revelle (Revelle evasion factor). Pales & Keeling say “Revelle foresaw the geochemical implications of the rise in atmospheric CO2 resulting from fossil fuel combustion, and he sought means to ensure that this ‘large scale geophysical experiment‘ .. was documented”. Pales & Keeling continue, “he inspired us to keep in sight the objectives which he had originally persuaded us to accept.”
Does this sound like true, unbiased research? All they were doing was measuring CO2. Why would they need inspiration to keep the objectives in sight? What were the objectives? Why the need for persuasion? What’s so hard about measuring CO2 and reporting all the data?

August 14, 2007 7:55 am

Gunnar, can you cite any sources about the Mauna Loa measurements ?
(I’ve always thought measuring CO2 next to a bloody great volcano was a bit strange.)

August 14, 2007 8:24 am

Very interesting, but I thought your position on CO2 changes was a minority one (I know that doesn’t mean wrong) even among GW skeptics, and that most such skeptics just thought it wouldn’t make that much difference, etc. How widely shared are your views? And even if something varies from time to time and place to place, the world yearly average and its changes would be important, and it looked like most measurements were around 0.03% for a long time (tx for catching typo earlier.) Has anyone good data (non-Mauna-Loa-restricted) to derive that, where could I see a graph, or would you say the variability is just too much to make an averaged graph worthwhile? I would like to find out more about those plants that needed more CO2, etc.
Finally, re Arrhenius: people like that are often referenced to show that if someone in their position believed something, it (the idea itself) gets more credibility, since at least they wouldn’t have bad motives – not to compare to other persons. Yes, his early ideas had to be adjusted, but not overturned as such. In this case there unfortunately are grounds for having “bad motives” on all sides, with variable effects depending on personal susceptibility.

Anthony Watts
August 14, 2007 8:30 am

OK the discussion is going on to CO2 but we are talking about the near surface temperature record in the post, let’s stay on topic please.
CO2 is for another thread.

August 14, 2007 10:23 am

>> let’s stay on topic please.
Well, (trying to think of a way that I can refocus the point into what you’re interested in, which is auditing surface climate stations)….
Why not demand that CO2 measuring equipment be included in the standard climate monitoring stations (not the weather stations)?
>> CO2 is for another thread
Fred & Neil, I’ll have to continue this on my own blog (coming soon) at
Steve M is focused on auditing statistics, and Anthony W is focused on auditing stations. Both are very worthwhile efforts, but my focus is on science and logic.

August 14, 2007 11:29 am

Here is what is said (at Real Climate) to be a link to fairly recent code, software, etc., to various extents, some apparently needing special access and some not:

Jeff C.
August 14, 2007 11:31 am

Steve’s post has got a lot of people talking about a subject that has long been debated in the comments at Climate Audit. Everyone can understand why a site overtaken by gradual urbanization requires a downward correction over time. But why do good, pristine rural sites get an upward correction? As the post mentions, the intent seems to be to make all the sites show similar trends, not to correct biased data. Homogenization only works if the error is random. Attempting to homogenize a bias simply spreads the error around.
Now that has surveyed nearly 25% of the stations, it makes sense to identify a dozen or so “good” sites and use the various data sets to drill down into what corrections have been applied. These good sites would meet the following criteria:
1) few if any moves (from NOAA meta data)
2) no urbanization within 20(?) miles (from aerial photos)
3) no microsite issues (from database)
4) use of standardized equipment (from NOAA meta data)
Using the data available from GISS and NCDC, the data series could be parsed to determine which corrections are applied and when. Relatively non-controversial (and well-documented) corrections such as TOBS and conversion to MMTS could be isolated and removed to highlight remaining corrections of a more mysterious nature. Since the selected sites are supposed to be “good” and not requiring additional corrections (based on the selection criteria listed above), GISS should be pressed for a justification for the adjustments.
Steve has already done this to some extent with Orland, CA and Grand Canyon, AZ, and it has raised a number of interesting questions. This could be turned into a very persuasive paper by increasing the sample size, fully documenting that each site is “good”, and clearly isolating the adjustments. With enough exposure, GISS may feel compelled to respond and explain the justification for adjustments to sites that do not appear to require them. (Then again, they may not.)
I’ve started to put something together like this. If anyone is working on something similar or has any ideas, please comment.
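The point that homogenizing a bias spreads it around rather than removing it can be shown with a toy simulation. This is a hypothetical sketch, not GISS’s actual algorithm: the station counts, the +0.5 °C bias, and the naive “nudge each station toward the regional mean” rule are all assumptions for illustration.

```python
import statistics

# Ten hypothetical stations sharing one true climate signal: half are
# clean, half carry a constant +0.5 C warm bias (e.g. a microsite issue).
true_temp = [10.0 + 0.01 * yr for yr in range(50)]
bias = 0.5
stations = [list(true_temp) for _ in range(5)] + \
           [[t + bias for t in true_temp] for _ in range(5)]

# Naive "homogenization": pull each station halfway toward the network mean.
regional = [statistics.mean(vals) for vals in zip(*stations)]
adjusted = [[(s + r) / 2 for s, r in zip(st, regional)] for st in stations]

# A clean station now inherits part of the biased stations' error.
good_error = statistics.mean(adjusted[0]) - statistics.mean(true_temp)
print(round(good_error, 3))
```

Because the bias is systematic rather than random, averaging cannot cancel it; the adjustment merely transfers a quarter of the warm bias onto every previously clean station.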

August 14, 2007 11:38 am

Now that U.S. temperatures have been “adjusted”, and since we are now told this is insignificant because after all, ‘it’s GLOBAL warming, stupid’, a few questions arise in my mind:
1) Who compiles the data for global temperatures?
2) What countries are the data from?
3) What standards apply to assure quality of these measurements?
4) Are the data as or more reliable than U.S.?
5) Is there a list of those sites?
Lastly, will the issues of UHI ever be addressed for both U.S. and ‘global’ temperature data?
If this were a laboratory experiment and I presented the methodology by which the data is obtained in the world of climate science, my employment status would not be in question; I’d have been fired many years ago. Ah, government jobs do have their advantages I suppose.

Judy Cross
August 14, 2007 11:43 am
180 Years of Atmospheric CO2 Gas Analysis by Chemical Methods
I’m surprised that you all are not familiar with the above, which presents 90,000 chemical analyses of atmospheric CO2 from 53 locations over 150 years.
It clearly shows CO2 levels have hit 400 ppm and over at least three times before now, during the 19th and 20th centuries.

Paul Penrose
August 14, 2007 12:20 pm

Neil B.,
We’ve seen that link before, and it’s a red herring. That is NOT a link to code related to the GISS surface temperature products, but is their Global Climate model.

Jim O'Toole
August 14, 2007 12:39 pm

Off topic, but can you recommend any particular books about science and engineering management from the 1950s?

August 14, 2007 2:41 pm

In reference to homogenization, from what I’ve read (at least in the v2 documentation) they are looking at surrounding sites:
“It was assumed that the reference series accurately reflected the climate of the region so that any significant departures from climatology could be directly associated with discontinuities in the station data.”
This is from the documents listed here:
It sounds to me as if a possible error could be that when a rural site is surrounded by sites contaminated by UHIE, it might be subject to “correction”.
Also, the splicing that is done for the global sites makes our critiques of the US data look like nitpicking. Foreign sites are at least an order of magnitude more suspect.

Jeff C.
August 14, 2007 3:31 pm

“It sounds to me as if a possible error could be that when a rural site is surrounded by sites contaminated by UHIE, it might be subject to “correction”.”
That’s a good part of it. Another concern is that a station with microsite issues could also be used to homogenize good sites (regardless of being rural or urban), since GISS doesn’t seem to make any effort to weed out stations with microsite issues. As virtually all of the microsite issues seem to cause a positive bias, the error is simply split up among many stations (including good ones), not removed.

Douglas Hoyt
August 14, 2007 5:52 pm

“It sounds to me as if a possible error could be that when a rural site is surrounded by sites contaminated by UHIE, it might be subject to “correction”.”
Bingo. The GISS analysis assumes all stations are of equal quality when photos clearly show that is not true. This assumption just spreads the UHIE around to all stations, and then they claim it isn’t there.
Another problem is that corrections for station relocations can depend on the order in which comparisons to nearby stations are made. Kind of like vector addition instead of scalar addition. If an error is made in one adjustment, that error will propagate through the entire network of comparison stations.
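The propagation point can be sketched with a toy chain of pairwise adjustments. This is an illustrative assumption, not the actual GISS or USHCN procedure: four hypothetical stations share an identical true climate, each is aligned to its already-adjusted neighbor in sequence, and one station receives a single erroneous +0.3 °C adjustment.

```python
# Four hypothetical stations with identical true readings; one erroneous
# relocation adjustment of +0.3 C is applied to station 1.
readings = [15.0, 15.0, 15.0, 15.0]
offsets = [0.0, 0.3, 0.0, 0.0]

# Sequential neighbor-based adjustment: each station is shifted to match
# its already-adjusted neighbor, then its own offset is applied.
adjusted = [readings[0] + offsets[0]]
for i in range(1, len(readings)):
    adjusted.append(adjusted[i - 1] + (readings[i] - readings[i - 1]) + offsets[i])

print(adjusted)  # the single +0.3 error carries through to every downstream station
```

Because each station is referenced to the previous adjusted one rather than to an independent baseline, the single error contaminates every station after it in the chain, and a different processing order would distribute the error differently.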

Mike Nee
August 14, 2007 6:19 pm

Anthony, my position, which I’ve said before, is that if you measure the air 5 feet above a surface, you are not getting anything but how air mixes with the thermal properties of the surface 5 feet above that surface. What that means overall, or how representative of the area it is, those are other issues. If you don’t combine that air temperature with the material under it, as well as the humidity, wind, and amount of sun at the location, I don’t think just air temp is very helpful. As to how much CO2 there is, who cares. 🙂
Or as Bob says, if you don’t know what’s wrong, how can you correct for it? That’s why the station surveys, including photographs for reference, are something that needs to be done.
The sad thing is that it needs to be done. 🙁
TCO, glad to see you agree that the specific information to replicate the adjustments just as they did them should be provided. I mean, I assume that, when you said “To show the new and original data” it’s what you meant.
Fred, I think you’re missing a few things in your comment to Neil B.
The IPCC doesn’t just focus on CO2; they include land use and aerosols. But strangely, that mainly doesn’t get picked up and reported. That’s what bothers some people, because the non-CO2 part of the equation is basically being ignored at the policy and press level.
I’d say we’re sure CO2 does, simply by absorbing IR, add “something” “in some way” to the “temperature”. In what specific ways and how much is another matter! The physics involved shows it does absorb/transfer/create heat (however you want to quibble about which specifically and how exactly). Then we have the sun, bingo. But it could be the other way around, hotter equals less absorbing of CO2 out of the air. Whatever. The disagreements are all about the specifics. That’s why everyone argues about it from different directions and in different ways.
It’s like talking about Mars or Venus or Jupiter or Saturn or Mercury versus Earth; don’t mix up the fact that if it wasn’t CO2 + heat it would be something else with proof of anything, or believe they can be compared. Or look at it another way: if the heat can’t “get out” nothing needs to absorb it anyway. But hey, everything with heat has an atmosphere, eh?
I think your Martians With Death Rays analogy is a bit excessive tho. It depends on how the argument is presented, not the argument itself. Rather than endlessly discussing the “unknown but tends to seem” role of CO2 and its extent, it’s something that should basically be ignored. No matter if AGW is real or not, there’s no reason not to develop renewable energy sources like solar (getting cheaper all the time btw) and letting technology and world wealth increase “fix” whatever problem there may or may not be. The details, or even the if; unimportant. A clean environment, reducing the impact of having to buy fuel from others on a nation’s economic system and national security basis and so on — That has nothing to do with AGW, really. Has the solar panel industry been working to reduce AGW as their goal? AGW is a non-issue if they haven’t been! “Fixing” AGW would be just a side effect. As in, not doing it “in case AGW is real and bad” but doing it for other reasons that stand on their own regardless. It makes the point moot, because it’s not the point! 🙂
All that said, the materials to replicate the adjustments should be given regardless. Nothing wrong with a little auditing from time to time. But at the end of the day, it probably doesn’t really matter….
“That anyone who refuses to reveal his data and workings is not a scientist, and his work product is not science.”
Neil B, it’s not been shown that the seas are doing anything particularly unusual. In fact, what’s probably more unusual is that they’re as stable as they are! The satellite rise of the surface temperatures is pretty wimpy overall. But YMMV depending on location and weather patterns at any given time. The hurricane predictions have been way off (far lower) recently also.
Bob’s got it all right there, and well said!!!

To correct data you need to know the exact cause of the error, the exact value of the error and you have to demonstrate that the cause existed at the time of measurement. In addition, you have to demonstrate that in a comparable case the cause produced the precise effect you claim it did. In this case that is impossible. When your data is bad and the influences are either unknown or impossible to calculate, then you have nothing.

Jim, water is responsible for 95% of the “greenhouse effect” because of the amount of it and the way the carbon cycle works. (Water includes vapor, ice in or on clouds/glaciers/ground, and seas) The reason it’s not discussed as a GHG are a couple. One is that we basically have no control over it, it doesn’t stay in any one place as vapor very long, and it almost never absorbs IR. Another is that because of this (and other reasons) the press and others are not aware it’s the primary GHG. They are sloppy (at best) in their explanation of it most of the time.
The correct explanation is that water vapor is the primary GHG and CO2 is the primary forcing GHG, although not the strongest one. I’d say, like water vapor, it’s not really the behavior, it’s the amount of it. This gets left out of the discussion.
Gunnar, you are correct, and as you know I agree with you. Although I would phrase it another way of course; “estimated by sampling” On your other comment, I doubt that AGW is about restricting human activity. It’s just people that are worried and are trying to get people to understand how bad it all is, because they believe they know the truth and that others don’t. This to me is why so many refer to it as a religion, but to me it’s not. It’s a belief system, and I believe it’s a political one. I’m pretty sure mostly it’s done with good intentions. That many don’t understand some people just don’t agree with them and/or share their world view is the only thing that’s ever bothered me. I’m sure some are intending to restrict or control or have ulterior motives, but suffice it to say I’m not going to run around attributing motives to people I don’t know. Even if that person is acting like a jerk or an elitist or what have you. I think many times it’s just a case of a different worldview. Oh well.
Rick, nah, probably nobody does know much. I think it’s mostly a guessing game, which is why I never get too upset about any of it.
M. Simon, it surprises me that people don’t understand that when you’re not transparent, it doesn’t matter why you aren’t. It makes people wonder why you’re not, so all they can do is guess at possibilities, some of which are not as nice as others.
Nicholas, that is correct. But when somebody wants to check your work, and so many people just attribute random unknown motives to it (“You just want to invalidate the surface network!” or some such), it makes it a social thing more so than anything else.

August 14, 2007 7:10 pm

To those who mentioned microsite problems in response to my last post: I agree they’re as big a problem as the classic UHIE. They’re assuming the algorithm takes care of unidentified changes.
The ordering of how they do the adjustments is a big unknown, as is the rest of the actual code/algorithms. What the papers say doesn’t mean a thing, because I can say I’m doing X when the code actually does Y. The splicing of USHCN data to GHCN is a prime example.
I wonder if there are any statistics on how many “fixes” are done?
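For readers unfamiliar with why splicing is so error-prone: if two overlapping records of the same station differ by a fixed offset and are joined without matching them over the overlap, the combined series inherits a spurious step at the splice year. This was the essence of the Y2K error. The sketch below is illustrative only; the station names, years, and values are made up, not the actual USHCN/GHCN data.

```python
# Illustrative only: how splicing two records without matching their
# overlap introduces a spurious step (all values are invented).

def splice_naive(series_a, series_b, splice_year):
    """Use series_a before splice_year and series_b from splice_year on,
    with no offset matching -- the flawed approach."""
    return {y: (series_a[y] if y < splice_year else series_b[y])
            for y in sorted(set(series_a) | set(series_b))}

def splice_matched(series_a, series_b, splice_year):
    """Same splice, but first shift series_b so its mean matches
    series_a over the years the two records share."""
    overlap = set(series_a) & set(series_b)
    offset = sum(series_a[y] - series_b[y] for y in overlap) / len(overlap)
    shifted = {y: t + offset for y, t in series_b.items()}
    return splice_naive(series_a, shifted, splice_year)

# Two records of one hypothetical station; record B runs 0.5 deg C
# cooler than record A over their 1997-1999 overlap.
a = {1997: 10.0, 1998: 10.1, 1999: 10.0}
b = {1997: 9.5, 1998: 9.6, 1999: 9.5, 2000: 9.6, 2001: 9.5}

naive = splice_naive(a, b, 2000)
fixed = splice_matched(a, b, 2000)
# naive inherits a spurious downward step at 2000; fixed does not.
```

The point of the sketch is only that the choice of splicing procedure, which the papers may not document, is exactly the kind of detail that cannot be audited without the actual code.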

Evan Jones(@evanjones)
August 14, 2007 10:09 pm

“But why do good, pristine rural sites get an upwards correction?”
And why is the overall adjustment up instead of down?
And why is the post 1980 data adjusted up even more than the pre-1980 data when all sense and reason would seem to cry out for the very opposite?
Ah, the questions we ask. We want to know why.
Why as reason? Why as motive? Why as a way of life? The Big Why?
Mike: Back at you.
After a period of shock, I am attempting to regain perspective. But SHEESH!
How could my fellow liberals side with non-disclosure, I ask? (The “Big How” is the other question, I guess.)

August 15, 2007 4:12 am

Rural sites getting upwards corrections? Look at the surrounding sites to see if they’re dragging the rural site up.
Why? Because that’s what their expectations are, based on their preconceptions, so they adjust the data accordingly. Lysenkoism in Russia for example.
Why no disclosure? Bad people would only use the information to discredit the TRUTH. (talk about Creationism, this is Ecologism run rampant)

August 15, 2007 5:54 am

>> I’m not going to run around attributing motives to people I don’t know.
What are the possible “good faith” motives for manipulating data to support a certain political position?
>> I doubt that AGW is about restricting human activity
Do you have any support for your position? What solutions have been proposed by AGWers that don’t involve restricting human activity and freedom?
About a month ago, I joked that soon, they will come out against sporting events. To my chagrin, they did just that about a week ago. This is no joke Mike, this is not about reducing our dependence on foreign oil for national security reasons, and you know it. They would not accept massive drilling for our own oil as “solving” anything. You are deluding yourself if you think solar or wind can ever compete with burning hydrocarbons.

Mike Nee
August 15, 2007 1:08 pm

Solar panels are starting to get worthwhile (I attribute this to laptop computers and their screens getting bigger and cheaper). Will they compete? Some day, I think. And if somebody has enough money, they can largely get off the grid right now with solar and wind. Maybe in 200 years everyone will have a hydrogen fusion reactor. But don’t get me wrong, I don’t think we’ll run out of oil or remove it from the equation any time soon. Plus you need it to make plastic and roads and solar panels and…
And as I said, some folks do indeed have very specific motives. They are the ones at the forefront. I’m mainly speaking about the average person, not the ones that do seem to have the agenda you accuse them of. So I guess I’m agreeing with you. 🙂

Josh Kinsey
August 16, 2007 5:24 am

Interesting, so we have no accurate data about global temperature change? I’m sure opponents of Global Warming will try to use this as fuel to deny that glaciers are disappearing, that the ice caps are disintegrating, and that ‘sea level’ is rising. And I’m sure supporters of Global Warming will duck and juke and claim that absence of accurate evidence doesn’t constitute evidence of absence. And I’m also sure that both sides will continue to concentrate more on assigning blame than on seeking solutions.
Any ‘reasonable person’ can look around and agree that if Polar Bears are starving because the ice is gone, the Earth must be warming up. Does it matter why? Does it matter what’s to blame? Or does it just matter that we start adjusting and adapting to our new environment?

August 16, 2007 9:49 am

>> Polar Bears are starving because the ice is gone, the Earth must be warming up
Ahh, that’s your logic? A change in the weather pattern? So, should we conclude that since the South pole is cooling, earth is cooling?
And btw, polar bears don’t eat ice. The reports are that they are doing fine, but why should we care?
>> Does it matter why?
Yes, if we falsely concluded that drinking tea is causing the sun to explode, then we might all stop drinking tea for no good reason. Yea, the reason matters.

August 16, 2007 10:05 am

It is nice to see that good ol’ Wikipedia Bill (william connolley) is actively engaged in damage control to limit the spread of this ‘heresy’ into his little fiefdom of WikiClimate.

Alan Wasner
August 16, 2007 1:59 pm

Hi, having gathered data, reviewed data gathering, and written publications for the US GOV for the last 26 years, I can state with great confidence that errors in data are very common. In some cases, although not as a general rule, I saw data gatherers just make up the data because they were too lazy to do their job. Many times I saw numerous data entry errors; this is extremely common. Error is also introduced in the publication process itself. I can cite many, many concrete examples of this and point to errors in recent official US GOV publications.

There are very poor quality-control standards on data gathering throughout the US GOV, although to be fair it varies greatly from agency to agency and office to office. I also saw many private labs gathering data, and in many cases they were doing a poorer job than we were! It’s shocking, to say the least.

People who make important, life-changing decisions based on any data without checking the source and validity of that data are making a mistake. My rule is: if you find ONE MISTAKE you must review ALL THE DATA, because if there is one error, there will be more. No one likes to hear that, as it means a lot of work. But why is there always time to do it over, yet never enough time to do it right the first time?

Leon Palmer
August 17, 2007 3:38 pm

Looking at the GISS graphs, has anyone else noticed that the dip in temperature between the 1940s and 1980s corresponds with the depths of the cold war?
The world resumes warming only after the collapse of the soviet union!

August 18, 2007 12:34 am

The eruption of Krakatoa in 1883 had a profound global cooling effect. I have yet to see this being taken into account anywhere or mentioned in any analysis of global warming statistics.

Evan Jones(@evanjones)
August 21, 2007 7:58 am

Nor World War II (followed by 30 years of cooling).
No one even mentions WWII, not even around here (except me, repeatedly).

Jan C J Jones
August 26, 2007 1:10 pm

Please pass these questions along to Mr. McIntyre. I’ve tried contacting several global warming “experts” who appear uninterested in answering my questions: 1) How many gallons of free-flowing water are estimated to be required in the earth’s evaporation-condensation cycle for it to function in a stable equilibrium?
2) How many gallons of water are estimated to have been removed from natural “free flow” by humans to be contained in sealed reservoirs (e.g., city water tanks, etc), and contained & sealed bottled/canned water products?
3) Is it possible that man has removed (by sealed storage) so much free-flow water from the evaporation-condensation cycle that the earth’s overall temperature would be affected, resulting in an increase in temperature that would cause natural stored water sources (icebergs, etc.) to melt, thereby providing a way for the earth to replenish/replace the missing free-flow water volume needed to reach equilibrium of the evaporation-condensation cycle?

June 29, 2008 12:22 pm

[…] So either the bump is naturally occurring, or we have another data set splicing error like the GISS Y2K debacle from last […]
