Does Hansen’s Error “Matter”?
There’s been quite a bit of publicity about Hansen’s Y2K error and the
change in the U.S. leaderboard (by which 1934 is the new warmest U.S. year)
in the right-wing blogosphere. In contrast,
realclimate has dismissed it as a triviality and the climate blogosphere is
doing its best to ignore the matter entirely.
My own view has been that the
matter is certainly not the triviality that Gavin Schmidt would have you
believe, but neither is it any magic bullet. I think that the point is
significant for reasons that have mostly eluded commentators on both sides.
Station Data
First, let’s start with the impact of Hansen’s error on individual station
histories (my examination of this matter arose from individual station
histories, not from the global record). GISS
provides an excellent and popular tool
for plotting temperature histories of individual stations. Many such
histories have been posted up in connection with the ongoing examination of
surface station quality at surfacestations.org. Here’s an example of this
type of graphic:

Figure 1. Plot of Detroit Lakes MN using GISS software
But it’s presumably not just Anthony Watts and surfacestations.org
readers who have used these GISS station plots; scientists and
other members of the public have presumably used this information as well. The Hansen
error is far from trivial at the level of individual stations. Grand Canyon
was one of the stations previously discussed at climateaudit.org in
connection with Tucson urban heat island. In this case, the Hansen error was
about 0.5 deg C. Some discrepancies are 1 deg C or higher.

Figure 2. Grand Canyon Adjustments
Not all station errors lead to positive steps. There is a bimodal
distribution of errors reported earlier at
CA here, with many
stations having negative steps. There is a positive skew so that the impact
of the step error is about 0.15 deg C according to Hansen. However, as you
can see from the distribution, the impact on the majority of stations is
substantially higher than 0.15 deg. For users of information regarding
individual stations, the changes may be highly relevant.
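As a back-of-envelope illustration (with made-up numbers, not the actual GISS distribution), a skewed bimodal set of per-station step errors can average out to roughly 0.15 deg C even though nearly every individual station is off by more than that:

```python
# Hypothetical per-station step errors (deg C) -- illustrative only,
# not the actual GISS/USHCN values.
step_errors = [0.8, 0.6, 0.5, 0.5, 0.4, 0.3, -0.3, -0.4, -0.4, -0.5]
mean_error = sum(step_errors) / len(step_errors)
# Stations whose individual step exceeds the network-average impact:
large = [e for e in step_errors if abs(e) > 0.15]
print(f"mean step error: {mean_error:+.2f} deg C")   # network-average impact
print(f"{len(large)} of {len(step_errors)} stations exceed 0.15 deg C")
```

The point of the sketch is that a small network-average impact says little about what a user of any single station series experiences.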
GISS recognized that the error had a significant impact on individual
stations and took rapid steps to revise their station data (and indeed the
form of their revision seems far from ideal, indicating the haste of their
revision). However, GISS failed to provide any explicit notice or warning on their
station data webpage that the data had been changed, or an explicit notice
to users who had downloaded data or graphs in the past that there had been
significant changes to many U.S. series. The obligation to give such notice existed regardless
of any impact on world totals.

Figure 3. Distribution of Step Errors
GISS has emphasized recently that the U.S. constitutes only 2% of global
land surface, arguing that the impact of the error is negligible on the
global average. While this may be so for users of the GISS global average,
U.S. HCN stations constitute about 50% of active (with values in 2004 or
later) stations in the GISS network (as shown below). The sharp downward
step in station counts after March 2006 in the right panel shows the last
month in which USHCN data is presently included in the GISS system. The
Hansen error affects all the USHCN stations and, to the extent that users of
the GISS system are interested in individual stations, the number of
affected stations is far from insignificant, regardless of the impact on
global averages.

Figure 4. Number of Time Series in GISS Network. This includes all versions
in the GISS network and exaggerates the population in the 1980s as several
different (and usually similar) versions of the same data are often
included.
U.S. Temperature History
The Hansen error also has a significant impact on the GISS estimate of U.S.
temperature history with estimates for 2000 and later being lowered by about
0.15 deg C (2006 by 0.10 deg C). Again GISS moved quickly to revise their
online information, changing their
data on Aug 7, 2007. Even though Gavin Schmidt of GISS and realclimate
said that changes of 0.1 deg C in individual years were “significant”,
GISS did not explicitly announce these changes or alert readers that a
“significant” change had occurred for values from 2000-2006. Obviously they
would have been entitled to observe that the changes in the U.S. record did
not have a material impact on the world record, but it would have been
appropriate for them to have provided explicit notice of the changes to the
U.S. record given that the changes resulted from an error.
The changes in the U.S. history were not brought to the attention of
readers by GISS itself, but in
this post at climateaudit. As a result of the GISS revisions, there was
a change in the “leader board” and 1934 emerged as the warmest U.S. year and
more warm years were in the top ten from the 1930s than from the past 10
years. This has been widely discussed in the right-wing blogosphere and has
been acknowledged at
realclimate as follows:
The net effect of the change was to reduce mean US anomalies by
about 0.15 ºC for the years 2000-2006. There were some very minor knock
on effects in earlier years due to the GISTEMP adjustments for rural vs.
urban trends. In the global or hemispheric mean, the differences were
imperceptible (since the US is only a small fraction of the global
area).
There were however some very minor re-arrangements in the various
rankings (see data). Specifically, where 1998 (1.24 ºC anomaly compared
to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top
US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these
differences are statistically significant.
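The leaderboard flip follows directly from the anomaly values quoted above (1951-1980 base period); a quick sketch:

```python
# U.S. annual anomalies (deg C vs 1951-1980) for the two contested years,
# taken from the realclimate passage quoted above.
before_fix = {1934: 1.23, 1998: 1.24}   # pre-correction values
after_fix  = {1934: 1.25, 1998: 1.23}   # post-correction values
warmest_before = max(before_fix, key=before_fix.get)
warmest_after  = max(after_fix, key=after_fix.get)
print(f"warmest U.S. year before correction: {warmest_before}")  # 1998
print(f"warmest U.S. year after correction:  {warmest_after}")   # 1934
```

Either way, the margin is 0.01-0.02 deg C, well inside the measurement uncertainty, which is rather the point on both sides of the argument.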
In my opinion, it would have been more appropriate for Gavin Schmidt of
GISS (who was copied on the GISS correspondence to me) to ensure that a
statement like this was on the caption to the U.S. temperature history on
the GISS webpage, rather than after the fact at realclimate.
Obviously much of the blogosphere delight in the leader board changes is
a reaction to many fevered press releases and news stories about year x
being the “warmest year”. For example, on Jan 7, 2007, NOAA announced:
The 2006 average annual temperature for the contiguous U.S. was
the warmest on record.
This press release was widely covered as you can determine by googling
“warmest year 2006 united states”. Now NOAA and NASA are different
organizations and NOAA, not NASA, made the above press release, but members
of the public can surely be forgiven for not making fine distinctions
between different alphabet soups. I think that NASA might reasonably have
foreseen that the change in rankings would catch the interest of the public
and, had they made a proper report on their webpage, they might have
forestalled much subsequent criticism.
In addition, while Schmidt describes the changes atop the leader board as
“very minor re-arrangements”, many followers of the climate debate are aware
of intense battles over 0.1 or 0.2 degree (consider the satellite battles.)
Readers might perform a little thought experiment: suppose that Spencer and
Christy had published a temperature history in which they claimed that 1934
was the warmest U.S. year on record; that it then turned out they had made
a computer programming error opposite to the one that Hansen made; that
Wentz and Mears discovered there was an error of 0.15 deg C in the Spencer
and Christy results; and that, after fixing this error, it turned out that 2006
was the warmest year on record. Would realclimate simply describe this as a
“very minor re-arrangement”?
So while the Hansen error did not have a material impact on world
temperatures, it did have a very substantial impact on U.S. station data and
a “significant” impact on the U.S. average. Both of these surely “matter”
and both deserved formal notice from Hansen and GISS.
Can GISS Adjustments “Fix” Bad Data?
Now my original interest in GISS adjustments did not arise abstractly,
but in the context of surface station quality. Climatological stations are
supposed to meet a variety of quality standards, including the relatively
undemanding requirement of being 100 feet (30 meters) from paved surfaces.
Anthony Watts and volunteers of surfacestations.org have documented one
defective site after another, including a weather station in a parking lot
at the University of Arizona where MBH coauthor Malcolm Hughes is employed,
shown below.

Figure 5. Tucson University of Arizona Weather Station
These revelations resulted in a variety of aggressive counter-attacks in
the climate blogosphere, many of which argued that, while these individual
sites may be contaminated, the “expert” software at GISS and NOAA could fix
these problems, as, for example
here.
they [NOAA and/or GISS] can “fix” the problem with math and
adjustments to the temperature record.
or here:
This assumes that contaminating influences can’t be and aren’t
being removed analytically.. I haven’t seen anyone saying such
influences shouldn’t be removed from the analysis. However I do see
professionals saying “we’ve done it”
“Fixing” bad data with software is by no means an easy thing to do (as
witness Mann’s unreported modification of principal components methodology
on tree ring networks.) The GISS adjustment schemes (despite protestations
from Schmidt that they are “clearly outlined”) are not at all easy to
replicate using the existing opaque descriptions. For example, there is
nothing in the methodological description that hints at the change in data
provenance before and after 2000 that caused the Hansen error. Because many
sites are affected by climate change, a general urban heat island effect and
local microsite changes, adjustment for heat island effects and local
microsite changes raises some complicated statistical questions that are
nowhere discussed in the underlying references (Hansen et al 1999, 2001). In
particular, the adjustment methods are not techniques that can be looked up
in statistical literature, where their properties and biases might be
discerned. They are rather ad hoc and local techniques that may or may not
be equal to the task of “fixing” the bad data.
Making readers run the gauntlet of trying to guess the precise data sets
and precise methodologies obviously makes it very difficult to achieve any
assessment of the statistical properties. In order to test the GISS
adjustments, I requested that GISS provide me with details on their
adjustment code. They refused. Nevertheless, there are enough different
versions of U.S. station data (USHCN raw, USHCN time-of-observation
adjusted, USHCN adjusted, GHCN raw, GHCN adjusted) that one can compare GISS
raw and GISS adjusted data to other versions to get some idea of what they
did.
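For instance, a Y2K-type splice error can be located by differencing two versions of the same station series: a constant offset that appears only from one date onward shows up as a step in the difference series. A minimal sketch with invented values (the real comparison uses the USHCN/GHCN/GISS versions listed above):

```python
# Two hypothetical versions of one station's annual means (deg C).
# version_b contains a +0.5 deg C splice error starting in 2000.
years     = list(range(1996, 2004))
version_a = [11.2, 11.4, 11.9, 11.3, 11.5, 11.6, 11.8, 11.4]
version_b = [11.2, 11.4, 11.9, 11.3, 12.0, 12.1, 12.3, 11.9]
diff = [round(b - a, 2) for a, b in zip(version_a, version_b)]
# The first year with a non-zero difference marks the splice.
step_year = next(y for y, d in zip(years, diff) if abs(d) > 0.01)
print(dict(zip(years, diff)))
print("step begins in", step_year)   # 2000
```

The same differencing applied across all the available versions is what lets one reverse-engineer where in the processing chain a given adjustment was introduced.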
In the course of reviewing quality problems at various surface sites,
among other things, I compared these different versions of station data,
including a comparison of the Tucson weather station shown above to the
Grand Canyon weather station, which is presumably less affected by urban
problems. This comparison demonstrated a very odd pattern discussed
here. The trend in the problematic Tucson site
was reduced in the course of the adjustments, but the
Grand Canyon data was also adjusted, so that, instead of the 1930s being
warmer than the present as in the raw data, the 2000s were warmer than the
1930s, with a sharp increase in the 2000s.


Figure 6. Comparison of Tucson and Grand Canyon Versions
Now some portion of the post-2000 jump in adjusted Grand Canyon values
shown here is due to Hansen’s Y2K error, but that accounts for only a 0.5 deg
C jump after 2000 and does not explain why Grand Canyon values should have
been adjusted so much. In this case, the adjustments are primarily at the
USHCN stage. The USHCN station history adjustments appear particularly
troublesome to me, not just here but at other sites (e.g. Orland CA). They
end up making material changes to sites identified as “good” sites and my
impression is that the USHCN adjustment procedures may be adjusting some of
the very “best” sites (in terms of appearance and reported history) to
better fit histories from sites that are clearly non-compliant with WMO
standards (e.g. Marysville, Tucson). There are some real and interesting
statistical issues with the USHCN station history adjustment procedure and
it is ridiculous that the source code for these adjustments (and the
subsequent GISS adjustments – see bottom panel) is not available.
Closing the circle: my original interest in GISS adjustment procedures
was not an abstract interest, but a specific interest in whether GISS
adjustment procedures were equal to the challenge of “fixing” bad data. If
one views the above assessment as a type of limited software audit (limited
by lack of access to source code and operating manuals), one can say firmly
that the GISS software not only failed to pick up and correct fictitious
steps of up to 1 deg C, but actually introduced these errors in the
course of their programming.
According to any reasonable audit standards, one would conclude that the
GISS software had failed this particular test. While GISS can (and has)
patched the particular error that I reported to them, their patching hardly
proves the merit of the GISS (and USHCN) adjustment procedures. These need
to be carefully examined. This was a crying need prior to the identification
of the Hansen error and would have been a crying need even without the
Hansen error.
One practical effect of the error is that it surely becomes much harder
for GISS to continue obstructing detailed examination of their source
code and methodologies after the embarrassment of this particular incident.
GISS itself has no policy against placing source code online and, indeed, a
huge amount of code for their climate model is online. So it’s hard to
understand their present stubbornness.
The U.S. and the Rest of the World
Schmidt observed that the U.S. accounts for only 2% of the world’s land
surface and that the correction of this error in the U.S. has “minimal
impact on the world data”, which he illustrated by comparing the U.S. index
to the global index. I’ve re-plotted this from original data on a common
scale. Even without the recent changes, the U.S. history contrasts with the
global history: the U.S. history has a rather minimal trend if any since the
1930s, while the ROW has a very pronounced trend since the 1930s.


Re-plotted from GISS Fig A and Fig D data.
These differences are attributed to “regional” differences and it is
quite possible that this is a complete explanation. However, this conclusion
is complicated by a number of important methodological differences between
the U.S. and the ROW. In the U.S., despite the criticisms being rendered at
surfacestations.org, there are many rural stations that have been in
existence over a relatively long period of time; while one may cavil at how
NOAA and/or GISS have carried out adjustments, they have collected metadata
for many stations and made a concerted effort to adjust for such metadata.
On the other hand, many of the stations in China, Indonesia, Brazil and
elsewhere are in urban areas (such as Shanghai or Beijing). In some of the
major indexes (CRU, NOAA), there appears to be no attempt whatever to adjust
for urbanization. GISS does report an effort to adjust for urbanization in
some cases, but their ability to do so depends on the existence of nearby
rural stations, which are not always available. Thus, there is a real
concern that the need for urban adjustment is most severe in the very areas
where adjustments are either not made or not accurately made.
In its consideration of possible urbanization and/or microsite effects,
IPCC has taken the position that urban effects are negligible, relying on a
very few studies (Jones et al 1990, Peterson et al 2003, Parker 2005, 2006),
each of which has been discussed at length at this site. In my opinion, none
of these studies can be relied on for concluding that urbanization impacts
have been avoided in the ROW sites contributing to the overall history.
One more story to conclude. Non-compliant surface stations were reported
in the formal academic literature by Pielke and Davey (2005) who described a
number of non-compliant sites in eastern Colorado. In NOAA’s official
response to this criticism, Vose et al (2005) said in effect –
it doesn’t matter. It’s only eastern Colorado. You
haven’t proved that there are problems anywhere else in the United
States.
In most businesses, the identification of glaring problems, even in a
restricted region like eastern Colorado, would prompt an immediate
evaluation to determine whether similar problems existed elsewhere. However, that
does not appear to have taken place and matters rested until Anthony Watts
and the volunteers at surfacestations.org launched a concerted effort to
evaluate stations in other parts of the country and determined that the
problems were not only just as bad as in eastern Colorado, but in some cases
were much worse.
Now in response to problems with both station quality and adjustment
software, Schmidt and Hansen say in effect, as NOAA did before them –
it doesn’t matter. It’s only the United States.
You haven’t proved that there are problems anywhere else in the world.
Orland is a pristine rural site that has remained unchanged for many years. The observations show no warming. It is shocking that the adjusted Orland data shows a strong warming trend. It tells me the adjustments are wrong and the adjusted data cannot be trusted.
BingO! St. Mac has got it.
Not to put too fine a point on it:
The USHCN is adjusting the reading from GOOD sites based on the data from BAD sites!
And I bet the UHI offset has been miscalculated (minimized) because of those same bad high readings.
All I can say is keep up the fine work. Saints are hard to come by in science these days!
Sorry to hear Climate Audit is offline again (just checked, yep, not there). I hope things will soon smooth out for them.
Thanks for this blog entry, it makes quite a bit more clear the situation that US climatologists inside and outside the government agencies find themselves in and what some of the significant core issues are.
OK Steve, I must say I’m impressed by the thought you put into this look at the recent hubbub over climate data. (BTW, I don’t know just what happened, but all sides should be totally candid about something as important as this.)
Here’s the main forest point, I think: CO2 absorbs infrared, so more in the atmosphere is a driver for higher temperatures, agreed? It’s sort of like pouring vinegar into a lake: increased acidity unless something else compensates. So, we can expect rising temperatures as the most likely outcome (and a harmful outcome doesn’t have to be certain anyway to motivate protective defenses!) REM also that rising temperatures will inhibit some counter forces, like reflective ice etc. So, what can rational people agree on about this?
This is fascinating. How on Earth does one “adjust” bad data? Coming from a mathematics/engineering background – the very premise of “correcting bad data” sends chills down my spine!
Instead, one should have to take the measurements over again. We all know that this can not be done. Therefore, the experiment must be scrapped. Throw away the entire set of data, you ask? Not really. Just ignore the data and place an asterisk next to it with the footnote “flawed for xxxx reason” and be honest about it.
Then, stop speaking of the Earth warming. Especially since the data I see is a spec of dust in the ocean compared to the entire life of the planet. The data, itself, is insignificant compared to the entire 4.5 billion years of this planet’s existence.
Maybe someone should notify Al Gore that, just like WMD’s, the intelligence on global warming was ALSO flawed and, well, global warming just doesn’t exist.
The planet is a self-regulating system. Rinse, Repeat…
One point – Schmidt said that the U.S. only represents 2% of the land surface. Actually, the U.S. represents just 2% of the entire world surface, but about 6% of the land surface. Since the best temperature records are done through land surface stations, the U.S. records cannot be considered an “insignificant” portion of the record.
I wonder if James Hansen would have taken any measures to correct the data set if it had been someone other than Steve McIntyre who raised the question about the dataset.
RE Neil B
1) Only if there is nothing else to absorb it. Forgot about all that water vapor and the big rock under the atmosphere, didn’t we?
2) More like throwing fertilizer in a lake. Got all those nasty plants sucking it up.
3) So are you wearing your tinfoil hat to protect you from RFI? Can’t be too careful, you know.
Rational people depend on reasonable risk analysis based on critically reviewed science with sound methodology.
SteveMc.
Good to see you back online again.
Three things:
1. NASA have an OpenSource option. I am looking through all the policies and think there may be some leverage with the CIO and the Chief Engineer, who are charged with implementing the policies.
2. Hansen 2001. Are you aware that Hansen eliminated portions of Northern California records (1880-1929), namely Orleans, Lake Spaulding, Willows, Electra, and Crater Lake, Ore., because of cooling anomalies?
3. I recall some dendro studies that used US-specific data. Might be interesting to see what the impact on Parker was.
Re : Neil B
“… CO2 absorbs infrared, so more in the atmosphere is a driver for higher temperatures, agreed? …”
Thank you for only saying it is a driver, rather than that more CO2 automatically equals more temperature.
(Which it doesn’t of course – sooner or later, you reach a saturation point where all outbound infrared is being absorbed, and adding further CO2 has no effect.)
Anyway, the important question is how significant a driver CO2 is. Is it dominant over all the other drivers? Or relatively minor? Or totally insignificant?
“… It’s sort of like pouring vinegar into a lake: increased acidity unless something else compensates. …”
If you pour a bottle of vinegar in a lake, it will certainly make the lake more acid … by some tiny amount. Fifty feet away, you will not be able to measure the effect.
“… So, we can expect rising temperatures as the most likely outcome …”
No. That assumes that this one factor – anthropogenic CO2 – is the dominant factor driving temperatures. We know that there have been huge natural variations in climate over the entire history of the planet, with far greater temperature rises/falls than we have seen in the 20th century. We should expect that these natural variations will continue.
“… (and a harmful outcome doesn’t have to be certain anyway to motivate protective defenses!) …”
This is a soft statement of the Precautionary Principle: if we can imagine a threat that cannot be proved to be non-existent, then we should take measures to protect against it.
Unfortunately, it only works if you assume that the cost of protection is zero, which it never is. In the particular case of global warming, the reason we use hydrocarbons so much is that they are the cheapest method currently available to store energy. In the absence of a better means of energy storage, anything that reduces our use of hydrocarbons is going to increase our cost of living. A lot.
Reductio ad absurdum of the Precautionary Principle – I’m worried about being invaded by martians with death rays. Can you prove it won’t happen? No? Then we should protect ourselves against the threat by massive investment in the space program – which is something we should be doing anyway…
“… REM also that rising temperatures will inhibit some counter forces, like reflective ice etc. …”
This is called a positive feedback – the more it heats up, the more solar energy gets absorbed, so the more it heats up, and so on. Possible negative feedbacks include clouds, for example – the more it heats up, the more humid the atmosphere, the more clouds form, the more solar energy gets reflected … and the lower the temperature.
If you only have positive feedbacks, then sooner or later, the climate system (or any system) runs away with itself to one or the other extreme.
But this has not happened in the history of the planet. Therefore, we know that the climate system is controlled by negative feedbacks, not positive feedbacks. So if the system is pushed one way, then reasonably soon, resistance to the push increases, and the system stops going in that direction.
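The stability argument can be made concrete with a toy iteration (made-up numbers, a sketch of the feedback algebra only, not a climate model): with a net feedback factor f, the response to a fixed forcing converges to forcing / (1 - f) when |f| < 1 and runs away when f >= 1.

```python
# Toy feedback loop: dT_{n+1} = forcing + f * dT_n.
# Converges to forcing / (1 - f) when |f| < 1; diverges when f >= 1.
def equilibrium_response(forcing, f, steps=200):
    dT = 0.0
    for _ in range(steps):
        dT = forcing + f * dT
    return dT

net_negative = equilibrium_response(1.0, -0.5)  # damped response -> 1/1.5
net_positive = equilibrium_response(1.0, 0.5)   # amplified but stable -> 2.0
print(round(net_negative, 3), round(net_positive, 3))
```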
“… So, what can rational people agree on about this? …”
That science is far too important to be a plaything for political activists and anyone with a big PR budget.
That anyone who refuses to reveal his data and workings is not a scientist, and his work product is not science.
What?! Steven, you mean Hansen cherry-picked the dataset as well as “adjusting” it? How did you find this out? Could he have done this countrywide?
davidcobb:
Sure there are other things in the atmosphere, but more CO2 enhances the warming effect. And plants do absorb lots of it, but no one disputes that average CO2 has gone up over the past decades (from long-term background of about 285 ppm to about 370, and check out this discussion of how we know why: http://www.radix.net/~bobg/faqs/scq.CO2rise.html)
Plants aren’t absorbing enough of the excess.
Your last point is valid – I don’t know just how close we are to that, but the theoretical basis for the likely outcome is there.
Fred: Yes we have to wonder just how much effect will be had, worth debating. Negative feedbacks will eventually pull temperatures back, but I think the problem is: a big swing is bad enough, and small consolation to have it come down later after washed in coastlines, etc.
As for the precautionary principle: I don’t mean that even the tiniest or outlandish threat deserves a massive response. We do have to take cost-threat comparison into account, but here we have something with good theoretical basis for some effect. Much of what we could do to lower CO2 would be good resource conservation and political independence anyway. The rest is “game” as they say.
The science is certainly too important to be a plaything for any side about this issue. Can you provide objective link etc. to good info on the “concealment” problem?
BarryW, are you talkin to ME!
Hansen was very straightforward about his elimination of certain data from Northern California stations. He wrote about it in Hansen 2001. The impact on the US record was minor, 0.01 C.
BUT the logic was precious.
Let me explain. Both Hansen and Peterson believe that the URBAN/RURAL distinction does not matter.
More specifically. They find no difference between temps at RURAL sites and URBAN sites.
Note: the factor that matters most to them is population density, which is measured by the proxy of nightlights: a satellite picture of the world at night. Big lights = lots of people.
So,
1. Nightlights picks out Rural/Urban
2. Temperatures show no difference.
3. BUT there is massive literature and experiments on heat increases in cities.
4. Conclusion: the weather stations in URBAN settings must be in COOL PARKS.
Put another way.
A: UHI is real.
B: We see no difference between Rural and Urban temps.
C: Urban sites MUST be in cool parks.
Now Hansen’s problem is that he found 5 California sites that had early century cooling. So HE concluded the sites had to be messed up somehow, or that there was some flaw in nightlights.
Read Hansen 2001. Pretty funny.
“That anyone who refuses to reveal his data and workings is not a scientist, and his work product is not science.”
Say it again.
Say it a whole lot.
Shout it from the rooftops.
Convince others to say it.
Weren’t Mann and others dismissing the medieval warm period because it wasn’t global? If the last decade is unexceptional compared to the 1930s in North America, I wonder if they’ll be as quick to conclude global warming isn’t global?
Re: Mike…
Good point considering the more stable temperature of the southern hemisphere. That is, no warming trend in the last 28 years from satellite measurements.
I am however willing to bet one of my legs that they won’t conclude that global warming isn’t global.
Neil B.,
I worry a LOT about washed in coastlines.
Latest reports are that the oceans are rising 1.5 mm a year. That is 6″ in a century.
If your head is only 6″ above water and you keep it in that position for a century you will be drowned by global warming.
Run for your life.
Being drowned isn’t the point. The economic stress from the rise costs lots even with a little bit of increase, since so much is built close to the water in concert with storms etc. A given event will cost millions more per each few mm of sea level, etc. Actually the best point is, again: Much [maybe most] of what we could do to lower CO2 would be good resource conservation and political independence anyway.
Steve McIntyre
Did you try asking Hansen’s boss at NASA? The word I hear is he is none too fond of the whole direction of the AGW steamroller, and has been catching flak for it too.
It wouldn’t surprise me to find him waiting with open arms for an excuse to pin Hansen’s ears back. You could be that excuse.
TCO said:
“It is COMMON to have flaws in experiments and not have the ability to repeat runs. That’s cost benefit. And one can still get valuable inferences, do useful analysis. So neither an aghast reaction at flaws NOR a cover up and hide attitude are justified.”
That’s complete crap. To correct data you need to know the exact cause of the error, the exact value of the error and you have to demonstrate that the cause existed at the time of measurement. In addition, you have to demonstrate that in a comparable case the cause produced the precise effect you claim it did. In this case that is impossible.
When your data is bad and the influences are either unknown or impossible to calculate, then you have nothing. Got that? Nothing.
I was faced with proving that a $3,000,000 guidance system was perfectly functional despite some “flawed” test data. I stood in front of an Air Force reviewer for an entire day explaining why it is valid to use a diode forward drop as a proxy for a temperature measurement. This proved that the tests were performed at the correct temperature despite the environmental sensor indicating otherwise.
I had to show that the environmental sensor had failed, how it failed, what it read when it failed and what the real value was. I further had to show that the diode proxy indicated the correct temperature both when the environmental sensor worked and when it didn’t.
I then had to reproduce the effect with a prototype circuit to show how the difference in time delays of the sensor and the diode affected the readings.
That is what’s involved in correcting “bad” data (and I had a proxy measurement to gauge the magnitude of the error).
My experience was minor compared to a friend of mine who had to do a similar analysis for a pacemaker implanted in an eighty year old man. This poor old guy almost certainly would not survive an operation to replace it.
When an engineer says that correcting bad data sends chills down his spine he is almost certainly speaking from experience.
TCO, when you have walked a mile in the moccasins of an engineer whose errors can result in death or destruction then you can appreciate why we quake at the thought of correcting bad data. Until then, stick to the science and lay off the personal insults.
In reference to the Lee posting.
The connections were made because of serious errors found in the data.
Will you please provide the information that shows that the data from the rest of the world that you are relying on is not compromised to the same extent.
Steve’s work discussed in a full editorial in today’s National Post.
http://www.canada.com/nationalpost/columnists/story.html?id=61b0590f-c5e6-4772-8cd1-2fefe0905363
Lorne Gunter appears to have the facts correct: four of the hottest years in the US over the past century were in the 1930s, and NASA’s GISS quietly corrected the error discovered by Steve McIntyre. Gunter describes James Hansen as the “godfather of global-warming alarmism” and McIntyre’s work as “the bane of many warmers’ religious-like belief in climate catastrophe”.
Ian
Excellent and informative posts. Can somebody help me with this?
Here’s something that bothers me. Water vapor is responsible for 95% of the greenhouse effect. Active volcanoes represent an additional 2% of CO2 emissions. Since the atmosphere is made up of 78% nitrogen and 21% oxygen, that leaves, depending on the source, 0.038 to 0.045% for CO2 as an atmospheric gas. It seems to me that we are making a great deal of assumptions about a gas that represents less than 1/2% of the atmosphere and attributing the potential end of the world to it. My sources are available at http://globalwarminghysteria.blogspot.com
I’m not a scientist, I am a scuba diving instructor and know the figures above to be accurate, but in lay terms I wonder if somebody can explain to me why the main GHG, water vapor, is left out of the discussion? Is it because we can’t tax it? We are being hosed big time by those that are putting this nonsense before us. Remember, a lie repeated often enough is often accepted as the truth. Global warming is theory based on flawed mathematical models, and can’t stand up to the rigor of the scientific method.
But what would happen if we had evidence of glaciers melting and massive flooding that occurred 10,000 years ago – long before man burned fossil fuels to any significant degree? Such evidence would certainly be considered evidence that global warming is a natural phenomenon – as opposed to man-made.
Well – this evidence actually exists and was reported in a Yahoo News article (via LiveScience.com) titled “Stone Age Settlement Found Under English Channel.” http://news.yahoo.com/s/livescience/20070810/sc_livescience/stoneagesettlementfoundunderenglishchannel;_ylt=AsF.5ZIOoCSv09YpSdlI21Ws0NUE #
Thanks, Jim
>> but no one disputes that average CO2 has gone up over the past decades (from long-term background of about 285 ppm to about 370
Actually, I do. The measurements at Mauna Loa are only a measurement of the CO2 at that particular spot. Even so, this dataset has never been audited by anyone. Your reference to 285 stems from very bad science. The actual data shows variable CO2 levels. For example, the average CO2 level is probably about 235. In 1940, it was 420. Empirical measurements show that the CO2 level goes up and down with temperature, just like Henry’s law predicts.