Does Hansen’s Error “Matter”?
There’s been quite a bit of publicity about Hansen’s Y2K error and the
change in the U.S. leaderboard (by which 1934 is the new warmest U.S. year)
in the right-wing blogosphere. In contrast,
realclimate has dismissed it as a triviality and the climate blogosphere is
doing its best to ignore the matter entirely.
My own view has been that
the matter is certainly not the triviality that Gavin Schmidt would have you
believe, but neither is it any magic bullet. I think that the point is
significant for reasons that have mostly eluded commentators on both sides.
Station Data
First, let’s start with the impact of Hansen’s error on individual station
histories (my examination of this matter arose from individual station
histories, not from the global record). GISS
provides an excellent and popular tool
for plotting temperature histories of individual stations. Many such
histories have been posted up in connection with the ongoing examination of
surface station quality at surfacestations.org. Here’s an example of this
type of graphic:

Figure 1. Plot of Detroit Lakes MN using GISS software
But it’s presumably not just Anthony Watts and surfacestations.org
readers who have used these GISS station plots; scientists and other
members of the public have likely relied on this information as well. The Hansen
error is far from trivial at the level of individual stations. Grand Canyon
was one of the stations previously discussed at climateaudit.org in
connection with Tucson urban heat island. In this case, the Hansen error was
about 0.5 deg C. Some discrepancies are 1 deg C or higher.

Figure 2. Grand Canyon Adjustments
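The size of a step of this kind can be estimated directly from a station series by comparing means on either side of the splice year. The sketch below uses invented anomaly values (not actual Grand Canyon data), constructed only to mimic a jump of roughly 0.5 deg C at 2000:

```python
# Estimate a step at a known splice year by differencing the means
# on either side of the splice. All values are invented for illustration.
series = {
    1996: 0.1, 1997: 0.2, 1998: 0.3, 1999: 0.1,   # pre-splice anomalies
    2000: 0.7, 2001: 0.6, 2002: 0.8, 2003: 0.7,   # post-splice anomalies
}
splice = 2000

before = [v for yr, v in series.items() if yr < splice]
after = [v for yr, v in series.items() if yr >= splice]
step = sum(after) / len(after) - sum(before) / len(before)

print(f"estimated step at {splice}: {step:+.2f} deg C")
```

A longer window on each side of the splice would reduce the influence of year-to-year noise on the estimate.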
Not all station errors lead to positive steps. There is a bimodal
distribution of errors reported earlier at
CA here, with many
stations having negative steps. There is a positive skew so that the impact
of the step error is about 0.15 deg C according to Hansen. However, as you
can see from the distribution, the impact on the majority of stations is
substantially higher than 0.15 deg. For users of information regarding
individual stations, the changes may be highly relevant.
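To see how a bimodal, positively skewed distribution of steps can produce a modest mean impact while most individual stations are affected by considerably more, consider this sketch with synthetic step errors (the cluster locations and counts are invented, chosen only to give a net effect near Hansen’s 0.15 deg C figure):

```python
import random

random.seed(0)  # reproducible synthetic draw

# Synthetic bimodal step errors: a larger cluster of positive steps
# near +0.4 deg C and a smaller cluster of negative steps near -0.3.
steps = ([random.gauss(0.4, 0.1) for _ in range(600)]
         + [random.gauss(-0.3, 0.1) for _ in range(400)])

mean_step = sum(steps) / len(steps)                        # net impact on the average
mean_magnitude = sum(abs(s) for s in steps) / len(steps)   # typical per-station impact

print(f"mean step:   {mean_step:+.2f} deg C")
print(f"mean |step|: {mean_magnitude:.2f} deg C")
```

The net figure is small because positive and negative steps partially cancel; the typical per-station magnitude is roughly three times larger.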
GISS recognized that the error had a significant impact on individual
stations and took rapid steps to revise their station data (and indeed the
form of their revision seems far from ideal, indicating the haste of their
revision.) GISS failed to provide any explicit notice or warning on their
station data webpage that the data had been changed, or an explicit notice
to users who had downloaded data or graphs in the past that there had been
significant changes to many U.S. series. This obligation existed regardless
of any impact on world totals.

Figure 3. Distribution of Step Errors
GISS has emphasized recently that the U.S. constitutes only 2% of global
land surface, arguing that the impact of the error is negligible on the
global average. While this may be so for users of the GISS global average,
U.S. HCN stations constitute about 50% of active (with values in 2004 or
later) stations in the GISS network (as shown below). The sharp downward
step in station counts after March 2006 in the right panel shows the last
month in which USHCN data is presently included in the GISS system. The
Hansen error affects all the USHCN stations and, to the extent that users of
the GISS system are interested in individual stations, the number of
affected stations is far from insignificant, regardless of the impact on
global averages.

Figure 4. Number of Time Series in GISS Network. This includes all versions
in the GISS network and exaggerates the population in the 1980s as several
different (and usually similar) versions of the same data are often
included.
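The “active” criterion used above (a value in 2004 or later) is straightforward to apply to a station inventory. A toy sketch, with invented station names and last-report years:

```python
# Toy station inventory: name -> year of last reported value.
# Names and years are invented for illustration only.
inventory = {
    "Station A (USHCN)": 2006,
    "Station B (USHCN)": 2006,
    "Station C (USHCN)": 2005,
    "Station D (GHCN)":  2006,
    "Station E (GHCN)":  2003,  # inactive under this definition
}

# A station counts as "active" if it reports in 2004 or later.
active = {name for name, last in inventory.items() if last >= 2004}
ushcn_share = sum("USHCN" in name for name in active) / len(active)

print(f"{len(active)} active stations; USHCN share: {ushcn_share:.0%}")
```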
U.S. Temperature History
The Hansen error also has a significant impact on the GISS estimate of U.S.
temperature history with estimates for 2000 and later being lowered by about
0.15 deg C (2006 by 0.10 deg C). Again GISS moved quickly to revise their
online information, changing their
data on Aug 7, 2007. Even though Gavin Schmidt of GISS and realclimate
said that changes of 0.1 deg C in individual years were “significant”,
GISS did not explicitly announce these changes or alert readers that a
“significant” change had occurred for values from 2000-2006. Obviously they
would have been entitled to observe that the changes in the U.S. record did
not have a material impact on the world record, but it would have been
appropriate for them to have provided explicit notice of the changes to the
U.S. record given that the changes resulted from an error.
The changes in the U.S. history were not brought to the attention of
readers by GISS itself, but in
this post at climateaudit. As a result of the GISS revisions, there was
a change in the “leader board” and 1934 emerged as the warmest U.S. year and
more warm years were in the top ten from the 1930s than from the past 10
years. This has been widely discussed in the right-wing blogosphere and has
been acknowledged at
realclimate as follows:
The net effect of the change was to reduce mean US anomalies by
about 0.15 ºC for the years 2000-2006. There were some very minor knock
on effects in earlier years due to the GISTEMP adjustments for rural vs.
urban trends. In the global or hemispheric mean, the differences were
imperceptible (since the US is only a small fraction of the global
area).
There were however some very minor re-arrangements in the various
rankings (see data). Specifically, where 1998 (1.24 ºC anomaly compared
to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top
US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these
differences are statistically significant.
In my opinion, it would have been more appropriate for Gavin Schmidt of
GISS (who was copied on the GISS correspondence to me) to ensure that a
statement like this was on the caption to the U.S. temperature history on
the GISS webpage, rather than after the fact at realclimate.
Obviously much of the blogosphere delight in the leader board changes is
a reaction to many fevered press releases and news stories about year x
being the “warmest year”. For example, on Jan 7, 2007, NOAA announced:
The 2006 average annual temperature for the contiguous U.S. was
the warmest on record.
This press release was widely covered as you can determine by googling
“warmest year 2006 united states”. Now NOAA and NASA are different
organizations and NOAA, not NASA, made the above press release, but members
of the public can surely be forgiven for not making fine distinctions
between different alphabet soups. I think that NASA might reasonably have
foreseen that the change in rankings would catch the interest of the public
and, had they made a proper report on their webpage, they might have
forestalled much subsequent criticism.
In addition, while Schmidt describes the changes atop the leader board as
“very minor re-arrangements”, many followers of the climate debate are aware
of intense battles over 0.1 or 0.2 degrees (consider the satellite battles).
Readers might perform a little thought experiment: suppose that Spencer and
Christy had published a temperature history in which they claimed that 1934
was the warmest U.S. year on record; that they then turned out to have made
a computer programming error opposite to the one that Hansen made; that
Wentz and Mears discovered an error of 0.15 deg C in the Spencer and
Christy results; and that, after fixing this error, it turned out that 2006
was the warmest year on record. Would realclimate simply describe this as a
“very minor re-arrangement”?
So while the Hansen error did not have a material impact on world
temperatures, it did have a very substantial impact on U.S. station data and
a “significant” impact on the U.S. average. Both of these surely “matter”
and both deserved formal notice from Hansen and GISS.
Can GISS Adjustments “Fix” Bad Data?
Now my original interest in GISS adjustments did not arise abstractly,
but in the context of surface station quality. Climatological stations are
supposed to meet a variety of quality standards, including the relatively
undemanding requirement of being 100 feet (30 meters) from paved surfaces.
Anthony Watts and volunteers of surfacestations.org have documented one
defective site after another, including a weather station in a parking lot
at the University of Arizona where MBH coauthor Malcolm Hughes is employed,
shown below.

Figure 5. Tucson University of Arizona Weather Station
These revelations resulted in a variety of aggressive counter-attacks in
the climate blogosphere, many of which argued that, while these individual
sites may be contaminated, the “expert” software at GISS and NOAA could fix
these problems, as, for example
here:
they [NOAA and/or GISS] can “fix” the problem with math and
adjustments to the temperature record.
or here:
This assumes that contaminating influences can’t be and aren’t
being removed analytically.. I haven’t seen anyone saying such
influences shouldn’t be removed from the analysis. However I do see
professionals saying “we’ve done it”
“Fixing” bad data with software is by no means an easy thing to do (as
witness Mann’s unreported modification of principal components methodology
on tree ring networks.) The GISS adjustment schemes (despite protestations
from Schmidt that they are “clearly outlined”) are not at all easy to
replicate using the existing opaque descriptions. For example, there is
nothing in the methodological description that hints at the change in data
provenance before and after 2000 that caused the Hansen error. Because many
sites are affected by climate change, a general urban heat island effect and
local microsite changes, adjustment for heat island effects and local
microsite changes raises some complicated statistical questions, that are
nowhere discussed in the underlying references (Hansen et al 1999, 2001). In
particular, the adjustment methods are not techniques that can be looked up
in statistical literature, where their properties and biases might be
discerned. They are rather ad hoc and local techniques that may or may not
be equal to the task of “fixing” the bad data.
Making readers run the gauntlet of trying to guess the precise data sets
and precise methodologies obviously makes it very difficult to achieve any
assessment of the statistical properties. In order to test the GISS
adjustments, I requested that GISS provide me with details on their
adjustment code. They refused. Nevertheless, there are enough different
versions of U.S. station data (USHCN raw, USHCN time-of-observation
adjusted, USHCN adjusted, GHCN raw, GHCN adjusted) that one can compare GISS
raw and GISS adjusted data to other versions to get some idea of what they
did.
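In the absence of the adjustment code, differencing two versions of the same station record at least reveals the adjustment applied in each year. A minimal sketch with invented annual means, mimicking a step-type adjustment appearing in 2000:

```python
# Two hypothetical versions of one station's annual means (deg C).
# All numbers are invented for illustration.
raw      = {1998: 14.2, 1999: 14.0, 2000: 14.5, 2001: 14.4, 2002: 14.6}
adjusted = {1998: 14.2, 1999: 14.0, 2000: 15.0, 2001: 14.9, 2002: 15.1}

# The year-by-year difference is the implied adjustment.
deltas = {yr: round(adjusted[yr] - raw[yr], 2) for yr in raw}
print(deltas)  # a constant offset beginning in 2000 implies a step adjustment
```

The same differencing can be repeated across each pair of versions (USHCN raw vs. TOBS-adjusted, GHCN raw vs. adjusted, and so on) to isolate which processing stage introduced a given change.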
In the course of reviewing quality problems at various surface sites,
among other things, I compared these different versions of station data,
including a comparison of the Tucson weather station shown above to the
Grand Canyon weather station, which is presumably less affected by urban
problems. This comparison demonstrated a very odd pattern discussed
here. The adjustments show that the trend in the problematic Tucson site
was reduced in the course of the adjustments, but they also showed that the
Grand Canyon data was also adjusted, so that, instead of the 1930s being
warmer than the present as in the raw data, the 2000s were warmer than the
1930s, with a sharp increase in the 2000s.


Figure 6. Comparison of Tucson and Grand Canyon Versions
Now some portion of the post-2000 jump in adjusted Grand Canyon values
shown here is due to Hansen’s Y2K error, but it only accounts for a 0.5 deg
C jump after 2000 and does not explain why Grand Canyon values should have
been adjusted so much. In this case, the adjustments are primarily at the
USHCN stage. The USHCN station history adjustments appear particularly
troublesome to me, not just here but at other sites (e.g. Orland CA). They
end up making material changes to sites identified as “good” sites and my
impression is that the USHCN adjustment procedures may be adjusting some of
the very “best” sites (in terms of appearance and reported history) to
better fit histories from sites that are clearly non-compliant with WMO
standards (e.g. Marysville, Tucson). There are some real and interesting
statistical issues with the USHCN station history adjustment procedure and
it is ridiculous that the source code for these adjustments (and the
subsequent GISS adjustments – see bottom panel) is not available.
Closing the circle: my original interest in GISS adjustment procedures
was not an abstract interest, but a specific interest in whether GISS
adjustment procedures were equal to the challenge of “fixing” bad data. If
one views the above assessment as a type of limited software audit (limited
by lack of access to source code and operating manuals), one can say firmly
that the GISS software had not only failed to pick up and correct fictitious
steps of up to 1 deg C, but that GISS actually introduced this error in the
course of their programming.
According to any reasonable audit standards, one would conclude that the
GISS software had failed this particular test. While GISS can patch (and has
patched) the particular error that I reported to them, their patching hardly
proves the merit of the GISS (and USHCN) adjustment procedures. These need
to be carefully examined. This was a crying need prior to the identification
of the Hansen error and would have been a crying need even without the
Hansen error.
One practical effect of the error is that it surely becomes much harder
for GISS to continue the obstruction of detailed examination of their source
code and methodologies after the embarrassment of this particular incident.
GISS itself has no policy against placing source code online and, indeed, a
huge amount of code for their climate model is online. So it’s hard to
understand their present stubbornness.
The U.S. and the Rest of the World
Schmidt observed that the U.S. accounts for only 2% of the world’s land
surface and that the correction of this error in the U.S. has “minimal
impact on the world data”, which he illustrated by comparing the U.S. index
to the global index. I’ve re-plotted this from original data on a common
scale. Even without the recent changes, the U.S. history contrasts with the
global history: the U.S. history has a rather minimal trend, if any, since the
1930s, while the ROW has a very pronounced trend since the 1930s.


Re-plotted from GISS Fig A and GFig D data.
These differences are attributed to “regional” differences and it is
quite possible that this is a complete explanation. However, this conclusion
is complicated by a number of important methodological differences between
the U.S. and the ROW. In the U.S., despite the criticisms being rendered at
surfacestations.org, there are many rural stations that have been in
existence over a relatively long period of time; while one may cavil at how
NOAA and/or GISS have carried out adjustments, they have collected metadata
for many stations and made a concerted effort to adjust for such metadata.
On the other hand, many of the stations in China, Indonesia, Brazil and
elsewhere are in urban areas (such as Shanghai or Beijing). In some of the
major indexes (CRU, NOAA), there appears to be no attempt whatever to adjust
for urbanization. GISS does report an effort to adjust for urbanization in
some cases, but their ability to do so depends on the existence of nearby
rural stations, which are not always available. Thus, there is a real
concern that the need for urban adjustment is most severe in the very areas
where adjustments are either not made or not accurately made.
In its consideration of possible urbanization and/or microsite effects,
IPCC has taken the position that urban effects are negligible, relying on a
very few studies (Jones et al 1990, Peterson et al 2003, Parker 2005, 2006),
each of which has been discussed at length at this site. In my opinion, none
of these studies can be relied on for concluding that urbanization impacts
have been avoided in the ROW sites contributing to the overall history.
One more story to conclude. Non-compliant surface stations were reported
in the formal academic literature by Pielke and Davey (2005) who described a
number of non-compliant sites in eastern Colorado. In NOAA’s official
response to this criticism, Vose et al (2005) said in effect –
it doesn’t matter. It’s only eastern Colorado. You
haven’t proved that there are problems anywhere else in the United
States.
In most businesses, the identification of glaring problems, even in a
restricted region like eastern Colorado, would prompt an immediate
evaluation to ensure that similar problems did not exist elsewhere. However, that
does not appear to have taken place and matters rested until Anthony Watts
and the volunteers at surfacestations.org launched a concerted effort to
evaluate stations in other parts of the country and determined that the
problems were not only just as bad as in eastern Colorado, but in some cases
much worse.
Now in response to problems with both station quality and adjustment
software, Schmidt and Hansen say in effect, as NOAA did before them –
it doesn’t matter. It’s only the United States.
You haven’t proved that there are problems anywhere else in the world.
Neil B,
I think your assessment of say $10 million per mm sea level rise (provided the rise is below 1 ft) is about right.
150 mm rise = $1.5 billion spread over the whole world over 100 years. That is not serious money. Even 10X that is not serious money.
A rise of under 1ft is going to get lost in the tidal effects.
If people want to live where the land area is variable it is a personal choice. If they are unable to deal with even a 1 ft rise in sea level perhaps they need to reconsider their place of habitation.
Well, I was thinking $ “per event” which is a bit different (and admittedly ambiguous), but I wouldn’t know the specifics anyway. (Who would?) The government surely shouldn’t subsidize and encourage risky building in any case.
BTW, it is odd that Drudge hasn’t (AFAICT) put up a link about this temperature discrepancy. Isn’t he normally sympathetic to GW skepticism?
LOL
and they always accused me of having a climate control machine.
Great work Steve, I am working to make sure you have an invitation to explain all this on FOX
This unearthing of yet another NASA debacle has obviously touched a nerve or two. The attacks disgust me, but I am hardly surprised by them. Whenever I drive by Ames I just sigh, about how good things used to be, and how far they have fallen. We used to take the ideas of the world’s best rocket man and put them into bold practice, now we put around in LEO in an oversized lifting body with freakin’ ICBMs strapped to it along with a Hindenburg’s worth of H. The so called “climate scientists” at the big N chant Gaia spells and curse the GOP. What a travesty. Break it up and start over from scratch. Shut her down ….
Remember Jim Hansen received a $250k grant from the Heinz foundation while supporting Gore’s movie and all of its errors. He also supported John Kerry in 2004. He, of course, is going to cherry-pick to support his political views.
“It seems to me that we are making a great deal of assumptions about a gas that represents less than 1/2% of the atmosphere and attributing the potential end of the world to it.”
I’ve wondered about that, too. A 50% increase sounds pretty radical–unless it’s 50% of a thirtieth of a percent.
The “twice nothing is still nothing” argument may well apply here.
OTOH, that sword cuts both ways:
Thinking back to the c. 20 ppm of gunk that was in NYC air back in 1970, and one may recall that even a miniscule percentage can have a large, practical effect.
I turned on the radio while going to the store this morning and it was on the Dennis Prager program. He was discussing what the worry was about in the late ’90s about Y2K, so I listened and sure enough he mentioned Steve McIntyre and Anthony Watts by name and what the results of fixing Hansen’s error were.
It seemed to be a fair report as he stated that there wasn’t much effect on temp measures for the whole world and that he needed to check things out before he’d vouch for the results. But he also admitted that he trusts AGW skeptics more than he does AGW activists.
>> the average C02 level is probably about 235
I meant to say 335. That’s what the data shows, when you don’t cherry pick data to fit a preconceived notion.
>> if somebody can explain to me why the main GHG water vapor is left out of the discussion? Is it because we can’t tax it?
Basically, that’s it. You are absolutely correct that water vapor dwarfs C02 in GHG importance. And man does have a similar effect on the water cycle, as he does on the Carbon cycle. However, there are two problems with this approach from AGW point of view. 1) Everyone is familiar with the water cycle, and would be far less gullible than with C02. 2) AGW is really about restricting human activity, and energy usage is right next to oxygen usage in importance. Energy usage can’t help but produce C02, so it was chosen as the culprit. The prospect of a campaign to limit the boiling of water, draining pools, etc just isn’t as compelling.
Gunnar:
You may have gotten the 420 ppm in 1940 from readings at Point Barrow, but the most relevant document (link “Hock et al. (1947-1949) 400 ppm Point Barrow”, found at Link) says that almost all measurements then, around 1950, were about 0.3%. That seems to be typical. It varies some from place to place and time to time, but the year-world average has climbed about as standard graphs show, unless you can show a convincing alternative to orthodoxy.
As for supposedly who is doing what for what reasons, remember that Svante Arrhenius wrote about the likelihood of CO2 induced global warming back in 1896, long before Al Gore was born. He even thought it would be a good thing, so I doubt his reasoning was intended to deceive.
Neil B.,
Your link to a discussion of CO2 increases in the atmosphere was broken as it contained the closing parenthesis of your text:
http://www.radix.net/~bobg/faqs/scq.CO2rise.html)
So, in case anyone tried and failed to follow it, then gave up, here it is as it should be: http://www.radix.net/~bobg/faqs/scq.CO2rise.html
(Deep Ecology)But if you restrict irrigation of arid zones, especially in places like the Western US, you get a two-fer. You reduce the carbon footprint and lower the amount of water vapor “artificially” being sent into the atmosphere. Force people to live sustainably in humid zones, stop enabling more than a thin contingent in the arid zones. While we’re at it, let’s ban cooling towers (the smaller type used for HVAC). Hey, and to boot, doing these things are guaranteed to lower population!(/Deep Ecology)
As for supposedly who is doing what for what reasons, remember that Svante Arrhenius wrote about the likelihood of CO2 induced global warming back in 1896, long before Al Gore was born. He even thought it would be a good thing, so I doubt his reasoning was intended to deceive.
He was right that it would be a good thing.
CO2 levels have been much higher in the deep past, and temperatures have been much higher. Indeed, biodiversity was greater during those times.
I learned one thing by reading all this. I know nothing about climate. And I’m starting to wonder if anyone knows it really.
Let me see if I understand this: A discredited anti-GW hack Canadian mine promoter fronting for ExxonMobil quietly objects to NASA’s temperature record, and — without providing any explanation — his buddies at GISS politely thank him and immediately rewrite the entire climate history of the United States??!!
\end{humor}
This story gets more troubling with each new chapter.
BTW I don’t know why so many are so hard on Hansen, after all here Link he says thanks to McIntyre for bringing up the error, and I don’t see evidence of anything deliberate. Hansen is said to have written the following around 2000, but I don’t have the link, only a quote in TNR:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis. … In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1ºC. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1ºC.
IOW, he knew ’34 was warm, and too close to the late 90s to call. In any case, looking at the corrected graph, it looks like a general upward trend, with ups and downs superimposed (like a typical stock market graph).
Stir estimates together and make some numbers crunch and soon you’ve got a science and more than just a hunch.
From reading all of this, it’s now quite clear to me, that if you add up guesses, you get a certainty.
This is all very commendable, but John Daly
http://www.john-daly.com/
went through all this ten years ago and was ignored. His death is regrettable.
Mike asked “Weren’t Mann and others dismissing the Medieval Warm period because it wasn’t global?”
My search skills being non-existent I can only go from a flawed memory, but – yes. At first, they said they dismissed it because it was a UK anomaly. When Greenland was brought up, they said it was a “North Atlantic” anomaly owing entirely to the Gulf Stream. Then others started submitting written accounts from China, Japan, Korea etc. and proxy data from South America, Africa, and Antarctica…
Neil B.,
Why so hard on Hansen?
Data and methods.
Science is open. Hansen keeps secrets.
dearieme posted Aug 9, 2007 12:15:51 PM:
“Government scientists ..refuse to publicly release their temperature adjustment algorithms or software”: the default interpretation of that is that they are crooks.
http://powerandcontrol.blogspot.com/2007/08/default-interpretation.html
Neil: Yes, Hansen deserves some credit. Many other people in similar circumstances (e.g. Michael Mann) refuse to admit that they even have a problem.
But the reason we don’t give Hansen a free ride is, I think, that this should never have happened in the first place. He needs to open up his methods fully so that they can be audited. Until then, how will we know how many other such errors lurk? There are a lot of questionable adjustments that I’m sure Mr. McIntyre and Watts would be happy to investigate and test for validity, if only they had access to the code/methods so that they could actually do so properly.
It benefits all of us for this data and the adjustments to be accurate. Fixing a mistake is one tiny step in getting there. But I think many of us will never have full confidence in the records until they are independently tested and verified.
Ref; the precautionary principle.
There are costs if you choose the absolutely safest option. Ask the orang-utans — they’re losing habitat to palm oil plantations. The oil is destined for bio-fuel.
JF
Regional differences are significant across the globe and in the U.S. Southern Africa shows cooling; Southern Australia shows temperatures approaching the 1930’s. In the U.S., the “dust bowl” area is still below its temperatures of the 1930’s. I have written several regional summaries available at http://www.appinsys.com/GlobalWarming. Also the Tucson station mentioned in this thread is used in an urban/rural comparison in my regional summary on the southwest U.S.
In 1940, it was 420. Empirical measurements show that C02 level goes up and down with temperature, just like Henry’s law predicts.
You must realise that with this claimed huge temperature sensitivity you’d have a runaway greenhouse effect, which is even more reason for concern.
And if the CO2 is coming out of the oceans, where is all the man made CO2 going?
“Into the oceans”, hmm something’s wrong then with your bookkeeping.
>> You may have gotten the 420 ppm in 1940 from readings at point Barrow … were about 0.03% (previous comment corrected)
Yes, but the readings from Point Barrow are real, ranging from .03 to .05! Given that cold oceans are a deep sink, C02 levels are higher at the equator and lower at the poles, therefore, an average reading of 400 ppm over a two year period is quite significant. Even the .03 reading contradicts AGW dogma about low pre-industrial C02 levels. The 3 year study in Luxembourg shows that C02 levels are quite variable, and during that period, did not increase. The actual ice core data (non cherry picked) supports the contention of wide ranging C02 levels.
Of course, as your link shows, it was not only Point Barrow, Duerst measured 400 ppm in 1936-1939, Kreutz measured over 420 in 1939-41. Bazett measured 400 ppm in Philadelphia in 1941, Misra measured over 400 in India in 1941-1943, Lockhart measured over 600 ppm in Antarctica.
The central foundation to AGW is that man is able to dramatically affect the global C02 level, ie that it was low before, and that man has greatly increased it. This idea is falsified by two facts: 1) Many plant species could not survive the alleged pre-industrial level, yet they did, and 2) the actual C02 measurements referenced above. These two facts show that this critical AGW prerequisite is false.
>> It varies some place to place and time to time, but the year-world average has climbed about as standard graphs show
The Mauna Loa measurements only show the C02 levels at Mauna Loa, a location on top of a volcano (known C02 source), next to an active volcano, downwind from equatorial waters known to be outgassing C02. There is no scientific basis for claiming that Mauna Loa represents the worldwide average. Does the temperature there also represent the global average? Did you know that at Mauna Loa, they don’t record measurements unless the wind is blowing in from the sea? Did you know that scientists who have worked there have reported that a large percentage of data points are discarded and not included in the average? We should arrange for Steve Mc to be flown to Mauna Loa and audit them until they are blue in the face. It would probably only take a few days for them to turn red in the face.
>> unless you can show convincing alternative to orthodoxy.
The data contradicts the AGW idea, at every level. Interesting that you use the religious terminology “orthodoxy”. Freudian Slip?
>> Arrhenius wrote about the likelihood of CO2 induced global warming back in 1896
Yes, but the arguments of Arrhenius were falsified by his contemporaries.
>> so I doubt his reasoning was intended to deceive.
It’s quite amazingly bad logic to claim that since a scientist in 1896 said X in good faith, therefore, anyone who says X now is also acting in good faith.
By not publishing all the data, they make the comment by Bacastow that the Mauna Loa measurements were “edited” seem quite plausible. Pales & Keeling said that large portions of the raw data were rejected, leaving just a small fraction to be subjected to averaging techniques. The Scripps program to monitor CO2 in the atmosphere was conceived and initiated by Dr. Roger Revelle (Revelle evasion factor). Pales & Keeling say “Revelle foresaw the geochemical implications of the rise in atmospheric CO2 resulting from fossil fuel combustion, and he sought means to ensure that this ‘large scale geophysical experiment’ .. was documented”. Pales & Keeling continue “he inspired us to keep in sight the objectives which he had originally persuaded us to accept.”
Does this sound like true, unbiased research? All they were doing was measuring C02. Why would they need inspiration to keep the objectives in sight? What were the objectives? Why the need for persuasion? What’s so hard about measuring C02 and reporting all the data?
Gunnar, can you cite any sources about the Mauna Loa measurements ?
(I’ve always thought measuring CO2 next to a bloody great volcano was a bit strange.)