Antarctica warming? An evolution of viewpoint


Above: Mt Erebus, Antarctica

picture by Sean Brocklesby

A press release today from the University of Washington claims that Antarctica is warming, and has been for the last 50 years:

“The study found that warming in West Antarctica exceeded one-tenth of a degree Celsius per decade for the last 50 years and more than offset the cooling in East Antarctica.”

“The researchers devised a statistical technique that uses data from satellites and from Antarctic weather stations to make a new estimate of temperature trends.”

“People were calculating with their heads instead of actually doing the math,” Steig said. “What we did is interpolate carefully instead of just using the back of an envelope. While other interpolations had been done previously, no one had really taken advantage of the satellite data, which provide crucial information about spatial patterns of temperature change.”

Satellites calculate the surface temperature by measuring the intensity of infrared light radiated by the snowpack, and they have the advantage of covering the entire continent. However, they have only been in operation for 25 years. On the other hand, a number of Antarctic weather stations have been in place since 1957, the International Geophysical Year, but virtually all of them are within a short distance of the coast and so provide no direct information about conditions in the continent’s interior.

The scientists found temperature measurements from weather stations corresponded closely with satellite data for overlapping time periods. That allowed them to use the satellite data as a guide to deduce temperatures in areas of the continent without weather stations.
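The calibrate-then-infer idea can be sketched very loosely in code. Note that Steig et al. actually used the far more involved RegEM/principal-components machinery; the least-squares fit and all the numbers below are invented for illustration only:

```python
import numpy as np

# Toy calibrate-then-infer sketch (invented numbers). Fit station
# temperatures against satellite readings over the overlap period,
# then use the fit to "deduce" a temperature where only the satellite
# record exists.
satellite = np.linspace(-30.0, -20.0, 25)   # 25 years of satellite temps (C)
station = 0.9 * satellite + 1.5             # stations track the satellites

# Least-squares calibration over the overlap period
slope, intercept = np.polyfit(satellite, station, 1)

# "Deduce" a station-style temperature from a satellite-only reading
deduced = slope * (-25.0) + intercept       # about -21.0 C
```

Whether this kind of inference is justified depends entirely on how well the overlap-period relationship holds outside the overlap, which is exactly what the rest of this post questions.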

Co-authors of the paper are David Schneider of the National Center for Atmospheric Research in Boulder, Colo., a former student of Steig’s; Scott Rutherford of Roger Williams University in Bristol, R.I.; Michael Mann of Pennsylvania State University; Josefino Comiso of NASA’s Goddard Space Flight Center in Greenbelt, Md.; and Drew Shindell of NASA’s Goddard Institute for Space Studies in New York City. The work was supported by grants from the National Science Foundation.

Anytime Michael Mann gets involved in a paper and something is "deduced," I become wary of the methodology. Why? Mann can't even correct simple faults like latitude-longitude errors in data used in his previous papers.

But that’s not the focus of the moment. In that press release they cite NASA satellite imagery. Let’s take a look at how the imagery has changed in 5 years.

NASA’s viewpoint – 2004


NASA’s viewpoint – 2007 (added 1/22)

NASA’s viewpoint – 2009


Earth’s viewpoint – map of Antarctic volcanoes


From the UW paper again:

“West Antarctica is a very different place than East Antarctica, and there is a physical barrier, the Transantarctic Mountains, that separates the two,” said Steig, lead author of a paper documenting the warming published in the Jan. 22 edition of Nature.

But no, it just couldn’t possibly have anything at all to do with the fact that the entire western side of the Antarctic continent and peninsula is dotted with volcanoes. Recent discovery of new volcanic activity isn’t mentioned in the paper at all.

In January 2008, members of the British Antarctic Survey reported the first evidence of a volcanic eruption from beneath Antarctica’s ice sheet.

The volcano on the West Antarctic Ice Sheet began erupting some 2,000 years ago and remains active to this day. Using airborne ice-sounding radar, scientists discovered a layer of ash produced by a ’subglacial’ volcano. It extends across an area larger than Wales. The volcano is located beneath the West Antarctic ice sheet in the Hudson Mountains at latitude 74.6°South, longitude 97°West.


UPDATE 1/22

In response to questions and challenges in comments, I’ve added imagery above and want to explain further why I find this paper problematic.

The paper’s lead author (Steig) himself mentions the subglacial heat source in a response to “tallbloke” in comments. My issue is that they didn’t even consider or investigate the possibility. Science is about testing and, where possible, excluding every candidate explanation that challenges your hypothesis, and given the geographic correlation between their output map and the volcano map, it seems a reasonable theory to investigate. They didn’t.

But let’s put the volcanoes aside for a moment. Let’s look at the data error band. The UAH trend for Antarctica since 1978 is -0.77 degrees/century.

In a 2007 press release on Antarctica, NASA describes its measurement uncertainty as 2-3 degrees Celsius, which would make Steig’s deduced warming of 0.25 degrees Celsius over 25 years statistically meaningless.

“Instead, the team checked the satellite records against ground-based weather station data to inter-calibrate them and make the 26-year satellite record. The scientists estimate the level of uncertainty in the measurements is between 2-3 degrees Celsius.”

That is from this 2007 NASA press release, third paragraph.

http://earthobservatory.nasa.gov/IOTD/view.php?id=8239

Also in that PR, NASA shows yet another satellite derived depiction which differs from the ones above. I’ve added it.

Claiming a 0.25-degree change over 25 years (one-tenth of a degree Celsius per decade, per Steig) against a previously established measurement uncertainty of 2-3 degrees means that the “deduced” value Steig obtained is far smaller than the error band NASA cited in 2007, which would render it statistically meaningless.
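The back-of-the-envelope comparison being made here is easy to write down, using only the numbers quoted above. Note that it deliberately treats the 2-3 degree calibration uncertainty as if it were the uncertainty on the trend, which is the post’s argument rather than a formal error analysis:

```python
# Numbers from the quotes above: a 0.1 C/decade trend over 25 years vs.
# a stated measurement uncertainty of 2-3 C.
trend_per_decade = 0.1                        # C/decade, per Steig
years = 25
signal = trend_per_decade * years / 10.0      # total change: 0.25 C

uncertainty_low, uncertainty_high = 2.0, 3.0  # C, per the 2007 NASA release

# Naive test: does the deduced change exceed even the low end of the band?
significant = signal > uncertainty_low
```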

In an AP story, Kevin Trenberth has the quote of the day:

http://news.yahoo.com/s/ap/20090121/ap_on_sc/sci_antarctica

“This looks like a pretty good analysis, but I have to say I remain somewhat skeptical,” Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research, said in an e-mail. “It is hard to make data where none exist.”

419 Comments
E.M.Smith
Editor
January 25, 2009 1:04 pm

Roger Sowell (15:21:53) :
The same principle allows us to state, with a straight face, that a population of people has 2.3 children per woman, on average. I shall explain.

This is a different issue. Since no person has a fractional child, you cannot have an induced measurement error (1.4 children rounded down, 1.5 rounded up) causing a full-digit error in your data.
Temperature measured over years can have the same result. Example: At time zero, temperature measured in whole degrees is 14 degrees C. At time zero-plus-ten years, temperature is measured at 15 degrees, again in whole degrees. What is the average temperature rise per year? One degree divided by ten years, equals 0.1 degrees/year. No precision errors, no accuracy errors.
And what if the actual temp was 14.4445 in both readings, but your precision of whole degrees recorded one as 14 and the other as 15? That’s the whole point.
If you measure in whole degrees, you can only know in whole degrees, so all you can say is that we had no provable rise. (Error band is 14 +/- 1 vs 15 +/- 1; the true change can be anywhere from 16 - 13 = 3 degrees to 14 - 15 = -1 degrees; your ‘trend’ can be anything from +3 to -1 per decade and you don’t know what it really is.)
I’m beginning to understand why Mr. McGuire beat this into our heads so strongly…
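The whole-degree example above is easy to check numerically (same numbers as in the comment):

```python
# True temperature identical at both readings, but recorded in whole
# degrees: once read down to 14, once read up to 15, ten years apart.
true_t0, true_t1 = 14.4445, 14.4445
recorded_t0, recorded_t1 = 14, 15

true_trend = (true_t1 - true_t0) / 10.0              # 0.0 C/year
apparent_trend = (recorded_t1 - recorded_t0) / 10.0  # 0.1 C/year from nothing
```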

foinavon
January 25, 2009 1:10 pm

Smokey (10:19:15) :

foinavon says: “McIntyre hasn’t… addressed in the scientific literature the increasing number of studies that largely support the original analysis of Mann et al.”
That is changing history. Steve McIntyre forced the UN/IPCC to withdraw its use of Mann’s bogus hockey stick.

No that’s not true. You seem to be caught up in the faux-“politics” of this issue and unaware of the science/facts. And it’s not possible to “change history” Smokey, although some may try to reinterpret it.
The Mann et al 1998/9 paleotemperature data is shown in the current IPCC reports. It can be found, for example, in the report of Working Group 1 - “Physical Science Basis” - on page 467 (Figure 6.10), published in 2007/8. Obviously, since there are now quite a number of paleoproxy analyses of millennial-scale temperature, these are all shown there:
http://www.ipcc.ch/ipccreports/ar4-wg1.htm (see Chapter 6, page 467, Figure 6.10)

If foinavon really wanted the climate peer review process to function honestly rather than angling for ever more grant money, he would spend his time writing letters to the various journals demanding change. Instead, he spends countless hours here, attempting to change minds.

Peer review is generally pretty good I would say, although it can be rather annoying. How nice it would be if one could just put together any old stuff and get it published, like people can do on blogs! Of course peer review is only part of the process of science, scientific knowledge dissemination and advance. Once a paper is published it is open to assessment by others in the field, and one can really only determine the importance/value of a paper in relation to this. A good paper passes the test of time!
The grammatical construction of your sentence makes it difficult to determine who you think is “angling for ever more grant money”; is it me? Otherwise I don’t see how a “process” can do any “angling”! Climate science is funded (in the US) at about the same level as nanotechnology, according to the NSF funding reports, I believe. Is that a reasonable funding level, would you say? And do you think nanotechnologists are also engaging in whatever dubious processes you are insinuating in order to “angle for more grants”?

Mike Bryant
January 25, 2009 1:24 pm

Joel,
I don’t think you crossed any line. It just seems to me that a less emotional approach would be more in keeping with your scientific credentials.
Thanks for the response,
Mike

E.M.Smith
Editor
January 25, 2009 1:33 pm

In thinking about this:
If you measure in whole degrees, you can only know in whole degrees, so all you can say is that we had no provable rise. (Error band is 14 -/+ 1 vs 15 -/+ 1,
Given your example, measuring in whole degrees would be a +/- 0.5 error band for one-degree resolution, so the text ought to say: 14 +/- 0.5 vs 15 +/- 0.5, with a range of 15.5 - 13.5 = 2 degrees to 14.5 - 14.5 = 0 degrees; your ‘trend’ can be anything from +2 to 0 per decade and you don’t know what it really is.
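The corrected +/- 0.5 quantization band works out the same way in a few lines:

```python
# Whole-degree readings carry a +/- 0.5 C quantization band each.
t0_lo, t0_hi = 14 - 0.5, 14 + 0.5
t1_lo, t1_hi = 15 - 0.5, 15 + 0.5

max_change = t1_hi - t0_lo   # 15.5 - 13.5 = 2.0 C over the decade
min_change = t1_lo - t0_hi   # 14.5 - 14.5 = 0.0 C over the decade
```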

Mike Bryant
January 25, 2009 2:04 pm

The more I think about the word “contrarian” the more I like it. Even though the word contrary carries a somewhat negative connotation, I don’t believe the word contrarian does. Besides it fits in so nicely in this phrase: Catastrophic Climate Change Contrarian. Now that is something that I would NOT mind being called. Perhaps I should send a note of thanks to Gavin?
Your Catastrophic Climate Change Contrarian,
Mike Bryant
PS Has anyone here noticed that the earth is getting cooler?

E.M.Smith
Editor
January 25, 2009 2:09 pm

evanjones (20:31:17) :
The problem is that GISS never starts with raw data. They use NOAA-adjusted data and “unadjust it” (how, we do not know), then they readjust it.

As I’m presently shoving my brains through the sieve that is the GISS code, I can actually tell you what it does… and it doesn’t do this.
NOAA provides several datasets for your choosing. The one that GISS chooses is the one without UHI in it from NOAA, so they do not need to ‘unadjust’ it. They choose the dataset that comes unadjusted… They then do many strange and wondrous things to it that I’m still sorting out, so I won’t comment on them here, yet.
To get the ‘semi-cooked’ data, you can ftp it from NOAA. In your browser bar, put:
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn
This will open the ftp directory. If you want the already computed monthly means dataset used by GISS, fetch hcn_doe_mean_data.Z and if you want the UHI adjusted version, fetch urban_mean_fahr.Z
There are similar files for the min and max data both UHI adjusted and not.
From gistemp.txt at the top of the GISS source package:

Sources
——-
GHCN = Global Historical Climate Network (NOAA)
USHCN = US Historical Climate Network (NOAA)
SCAR = Scientific Committee on Antarctic Research
Basic data set: GHCN – ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2
v2.mean.Z (data file)
v2.temperature.inv.Z (station information file)
For US: USHCN – ftp://ftp.ncdc.noaa.gov/pub/data/ushcn
hcn_doe_mean_data.Z
station_inventory
For Antarctica: SCAR – http://www.antarctica.ac.uk/met/READER/surface/stationpt.html
http://www.antarctica.ac.uk/met/READER/temperature.html
http://www.antarctica.ac.uk/met/READER/aws/awspt.html
For Hohenpeissenberg – http://members.lycos.nl/ErrenWijlens/co2/t_hohenpeissenberg_200306.txt
complete record for this rural station
(thanks to Hans Erren who reported it to GISS on July 16, 2003)
USHCN stations are part of GHCN; but the data are adjusted for various recording and protocol errors and discontinuities; this set is particularly relevant if studies of US temperatures are made, whereas the corrections have little impact on the GLOBAL temperature trend, the US covering less than 2% of the globe.
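For what it’s worth, a minimal parser for one line of a v2.mean-style file can look like the sketch below. The layout assumed here (12-character station ID, 4-digit year, twelve 5-character monthly values in tenths of a degree C, with -9999 meaning missing) is my reading of the GHCN v2 format; check the README in the ftp directory before trusting it:

```python
# Parse one fixed-width line of a GHCN v2.mean-style file.
# Assumed layout: 12-char station ID, 4-digit year, twelve 5-char
# monthly values in tenths of C, -9999 = missing.
def parse_v2_mean(line):
    station = line[:12]
    year = int(line[12:16])
    raw = [int(line[16 + 5 * i : 21 + 5 * i]) for i in range(12)]
    months = [v / 10.0 if v != -9999 else None for v in raw]
    return station, year, months

# Build a synthetic line: eleven months at -12.1 C, December missing.
sample = "425000000001" + "1957" + "".join(
    "%5d" % v for v in [-121] * 11 + [-9999]
)
station, year, months = parse_v2_mean(sample)
```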

E.M.Smith
Editor
January 25, 2009 2:28 pm

Joel Shore (07:15:05) :
Your claim here seems to be that Mann hid the fact that his results depended strongly on a certain proxy (bristlecone pines from the Western U.S.). And, yet, here is what Mann et al. said […]
In using the sparser dataset available over the entire millennium (Table 1), only a relatively small number of indicators are available in regions (e.g., western North America) where the primary pattern of hemispheric mean temperature variation has significant amplitude (see Fig. 2 in MBH98), and where regional variations appear to be closely tied to global-scale temperature variations in model-based experiments [Bradley, 1996].

What strikes me from your entire posting is this: you have a proxy of trees in the western hills, connected to cold via yet more models. As someone who has lived through many western droughts, I can inform your understanding a bit better than some simulation can.
What happens out here in the west is that we get a drought, as we have had for the last few years. Trees respond to less water the same way they respond to more cold: smaller rings.
Tree rings in most of the west, and California in particular, will show as much about cold as they do about drought. Unless you have a past rainfall record to go with those rings, you have a confounded data set.

E.M.Smith
Editor
January 25, 2009 2:41 pm

E.M.Smith (13:04:55) :
Roger Sowell (15:21:53) :
The same principle allows us to state, with a straight face, that a population of people has 2.3 children per woman, on average. I shall explain.
This is a different issue. Since no person has a fractional child, you can not be having an induced measurement error ( 1.4 children rounded down, 1.5 rounded up) causing a full digit error in your data.

Another way to see this is that having 1 child is in fact having 1.000000000 child… the decimal precision is infinite for an atomic event.

January 25, 2009 3:30 pm

If a part of Antarctica is warming, is the claim that AGW is the cause? If so, how could AGW cause this: click
The graphic appears to indicate a mountain range. I’m not a geologist; could the cold/colder boundary be caused by the intersection of two tectonic plates?

John M
January 25, 2009 4:03 pm

foinavon (13:10:59) :

Peer review is generally pretty good I would say, although it can be rather annoying. How nice it would be if one could just put together any old stuff and get it published, like people can do on blogs!

Can you give us your critical evaluation of this little item that peer review missed?
http://www.climateaudit.org/?p=3967

Allan M R MacRae
January 25, 2009 5:32 pm

Re Allan M R MacRae (10:58:43) :
and
davidc (00:16:17) :
Hi David,
For more on this subject, see
Increasing Atmospheric CO2: Manmade…or Natural?
January 21st, 2009 by Roy W. Spencer
http://www.drroyspencer.com/2009/01/increasing-atmospheric-co2-manmade%e2%80%a6or-natural/
Many scientists who believe that the theory of catastrophic humanmade global warming is invalid still do believe that humankind is driving increased atmospheric CO2 through combustion of fossil fuels.
I used to accept without question the role of fossil fuels in driving increased atmospheric CO2 – now I am leaning towards being an agnostic on this very interesting scientific question.
The really important question is whether the world is undergoing catastrophic global warming or NOT.
It is apparent to me that there has been no significant warming for many years, and sharp cooling since January 2007.
The shift in the PDO from warm to cool mode suggests we can expect, on average, 20-30 years of global cooling (with upward and downward natural variation).
In summary, I think the alleged catastrophic humanmade global warming crisis does not exist in reality.
Regards, Allan

January 25, 2009 6:25 pm

E.M.Smith (13:04:55) :
re: measuring to 4 digits precision.
A real example from my engineering days may illustrate. In a chemical plant, we had a gaseous stream of by-product hydrogen that was saturated with water vapor. The economic thing to do was pipe the hydrogen to a boiler and supplement the fuel burned. There were the usual safeguards on the hydrogen line before it entered the boiler, including a flame arrestor. The flame arrestor had a metal grid inside that allowed hydrogen to pass but would stop a flame from propagating.
The flame arrestor grid eventually plugged up from deposits of salt that could only be from the hydrogen and water vapor. We tried measuring the hydrogen gas for presence of salt, and could not detect any. Our measurements showed zero, down to the most sensitive instruments we could use. We then did a flame ionization test, and sure enough, there was a characteristic yellow color that indicated salt was present.
So, we could not even measure the amount of contaminant in order to engineer a solution to solve the problem. My task was to engineer the solution. (Thanks again, boss! )
I can write about the solution we devised because it is not proprietary nor is it a trade secret. We installed a filming amine injection system that could be regulated via a pulse-timer on a small positive displacement pump. The salt stayed in the amine solution and the problem went away. The cost of the amine solution and the injection system was far less than the cost to clean the flame arrestor and the increased cost of fuel while the hydrogen was vented to atmosphere during the cleaning.
I think this illustrates what happens if we were to round to zero our measurements of salt concentration as 0.000 mg/l, because that is what the instrument displayed ( I am fabricating these numbers to make the point). In reality, the salt concentration was somewhere below the detectable limit, perhaps at 0.000001 mg/l.
I do understand the point you made above, and perhaps there are different techniques to apply for different problems. Temperatures at atmospheric conditions, unlike concentrations of contaminants in a gas, always have a measurable value. It appears the problem is one of consistently rounding or misreading the instrument.
There also are similar problems with the ga-zillions of data points taken every day in a chemical plant or refinery, compared to the atmospheric temperature measurements.
It might be illuminating for the atmospheric scientists to talk to the process chemical engineers about that.
The refineries and chemical complexes have very large data sets, taking process data from thousands of instruments at different locations and times, some on a short interval (i.e. every few seconds) and some on longer intervals (perhaps three times per day), and laboratory data including dozens of analyses on hundreds of samples.
The continuous process industries regularly face the same issues of instrument miscalibration, missing data when instruments are off-line for repair, and differences in values reported as to precision. Yet the engineers routinely manage to massage the data appropriately, correct for bad readings, interpolate or reconcile for missing data, obtain averages that make sense, and produce trend charts and graphs that are useful to decision-makers in running a safe, profitable, and sometimes optimized plant. They have been doing this for several decades now.
Just a thought.
Roger E. Sowell
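The kind of routine reconciliation Sowell describes (flag gross outliers, interpolate over gaps) can be sketched in a few lines. The MAD-based threshold and the sample readings below are illustrative only, not anything from a real plant:

```python
import numpy as np

def reconcile(series, max_mads=10.0):
    """Flag readings far outside the local spread (median absolute
    deviation test), then linearly interpolate over flagged and
    missing points."""
    x = np.asarray(series, dtype=float)
    good = ~np.isnan(x)
    med = np.median(x[good])
    mad = np.median(np.abs(x[good] - med))
    # Missing points get an infinite deviation so they are never kept.
    dev = np.where(good, np.abs(x - med), np.inf)
    if mad > 0:
        good = dev <= max_mads * mad
    idx = np.arange(len(x))
    return np.interp(idx, idx[good], x[good])

readings = [10.1, 10.2, np.nan, 10.4, 99.9, 10.6, 10.7]  # gap + stuck sensor
cleaned = reconcile(readings)
```

Here the missing reading is filled from its neighbors and the 99.9 spike is replaced by an interpolated value; real plant historians do considerably more, but the shape of the task is the same.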

January 25, 2009 7:44 pm

Antarctic ice increasing AND decreasing: click

Allan M R MacRae
January 25, 2009 8:03 pm

This just in from meteorologist Joe d’Aleo – more evidence of why much of the trend analysis done with Surface Temperature (ST) data is highly questionable, due to the warming bias in ST measurement and data handling.
Favorite paragraph:
“The difference between the NOAA NCDC USHCN version 2 and GISS shows that NOAA’s new algorithm fails to correct for urbanization warming. In fact the NCDC changes have introduced a warming of 0.75F in the 75 years since 1930. Man made warming indeed but the men are in Asheville, NC.”
Regards, Allan
United States and Global Data Integrity Issues
By Joseph D’Aleo, CCM, AMS Fellow
Jan.27, 2009
http://scienceandpublicpolicy.org/images/stories/papers/originals/DAleo-DC_Brief.pdf
Abstract
Issues with the United States and especially the global data bases make them inappropriate to use for trend analysis and thus any important policy decisions based on climate change. These issues include inadequate adjustments for urban data, bad instrument siting, use of instruments with proven biases that are not adjusted for, major global station dropout, an increase in missing monthly data and questionable adjustment practices.
********************

Ron de Haan
January 26, 2009 4:01 am

Harold Pierce Jr (12:27:46) :
“ATTN: Everbody!”
Harold Pierce Jr.
Would you please be so kind and add the web links you refer to?

foinavon
January 26, 2009 4:47 am

John M (16:03:47) :

foinavon (13:10:59) :
Peer review is generally pretty good I would say, although it can be rather annoying. How nice it would be if one could just put together any old stuff and get it published, like people can do on blogs!
Can you give us your critical evaluation of this little item that peer review missed?
http://www.climateaudit.org/?p=3967

John, It seems you’ve been taken in by a curious reluctance of a blogger to give the whole story.
It took me about 5 minutes to download Mann et al’s paper [Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia Proc. Natl. Acad. Sci. 105, 13252-13257] , to browse the Supplementary Information posted on the website of Proc. Natl. Acad. Sci. and to discover the following:
1. Mann et al. have a large section in their Supplementary info (right near the front on page 2) entitled:
“Potential data quality problems”
In this section they describe that, of the 1,200 or so proxy series used in their analyses, 7 are potentially problematic due to data quality issues. The authors state that these records include the 4 Tiljander lake varve series (that McIntyre is making a fuss about) and three other series (Mono Lake; Isdale fluorescence data; McCulloch Ba/Ca data).
Since these data are potentially problematic an entire reanalysis of the data was performed leaving these data sets out.
These reconstructions are shown in Figure S8 on page 14 of the Supplementary info, as a comparison with the reconstruction using the full data set. Leaving out the Tiljander (and Mono, Isdale, McCulloch) data sets makes a trivial difference to the long-term CPS Northern Hemisphere land reconstruction and a small difference to the EIV Northern Hemisphere land-plus-ocean reconstruction.
I haven’t read the paper fully so won’t comment further for now. However this addresses your question about peer-review. Mann et al very clearly highlight those data sets they consider to have potential problems and perform analyses leaving these out. When they do so the interpretations of the paper don’t require any material change. Therefore a review should consider that particular matter adequately dealt with.
The more interesting question is why McIntyre doesn’t indicate this glaringly obvious point. Perhaps he does somewhere else on his blog, but certainly not on the web page you linked to.
That’s pretty much why I’m skeptical of blogs and prefer scientists and the scientific literature when it comes to assessing science.
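The leave-out sensitivity test described here is simple to illustrate in miniature. The “reconstruction” below is just a mean across made-up proxy series (nothing like Mann et al.’s actual CPS/EIV methods), but it shows the logic of rerunning an analysis with flagged series removed and measuring the impact:

```python
import statistics

# Three invented proxy series, one flagged as potentially problematic.
proxies = {
    "series_a": [0.1, 0.2, 0.3],
    "series_b": [0.0, 0.1, 0.2],
    "flagged":  [0.9, 1.0, 1.1],
}

def reconstruct(series_names):
    # "Reconstruction" here is just the per-step mean across series.
    cols = zip(*(proxies[n] for n in series_names))
    return [statistics.fmean(c) for c in cols]

full = reconstruct(proxies)                                   # all series
reduced = reconstruct([n for n in proxies if n != "flagged"]) # leave one out
impact = max(abs(f - r) for f, r in zip(full, reduced))
```

In this toy case the flagged series moves the result noticeably; the claim at issue is that in Mann et al.’s Figure S8 the analogous impact was trivial.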

Editor
January 26, 2009 5:35 am

Joel Shore (07:06:06) :

Roger Knights says:
Or medieval and Roman artifacts coming to light in the wake of retreating alpine glaciers?
You mean like this paper http://www3.interscience.wiley.com/journal/114125034/abstract?CRETRY=1&SRETRY=0 regarding a pass in the Swiss Alps whose abstract reads:
{skipped}
And the surprising aspect of this paper is that it is from the general region that has strong evidence of a significant MWP.

Cool. You present one source; let me comment on it being cherry-picked from Europe.
I spent a weekend collecting links to various recent exposures of stuff from 5000-7000 years ago, see http://wermenh.com/climate/6000.html . The sources include Western Canada and Peru. While these events long predate the MWP, they hint at previous warm periods. I don’t know how coincident they are as a group, but it’s intriguing that there’s so much available for that period and not others.

John M
January 26, 2009 5:48 am

foinavon (04:47:03) :
OK. Since you prefer to not go over to CA, I’ll post your response over there myself.
I take this to mean you think it’s OK to invert data in a peer-reviewed publication as long as it “doesn’t matter”.
Are you willing to give McKitrick the same consideration for making a degree/radian error that “doesn’t matter”?
Hmmm, come to think of it, that was all a big hoo-hah in the “warming” blogosphere wasn’t it? He published a correction, but apparently the blog accounts seem to carry more weight with some than did his published correction.
Odd.

John M
January 26, 2009 6:34 am

Another link regarding Tiljander from CA. It actually predates the “which end is up” item.
http://www.climateaudit.org/?p=3951

foinavon
January 26, 2009 7:24 am

John M (05:48:54)
Oh well… You asked a question about peer review in the context of the Mann et al 2008 PNAS study. I answered it. The reviewers quite reasonably observed that Mann et al highlighted specific data set problems (including the Tiljander data sets) very clearly in their paper, made a full paleoproxy reanalysis with the problematic data removed, and presented it. Whether the problematic data is included or not doesn’t materially change the conclusions of the paper, and the issue is discussed anyway in the main paper and Supplement.
That’s all pretty straightforward.
But now you’re going off in all sorts of odd directions… I don’t know what the sign inversion relates to. Maybe there was a good reason for it; maybe it was a mistake. I expect that if it was a mistake Mann et al will issue a correction, as is normal in these circumstances. However, since they already highlighted the problematic nature of the Tiljander data sets in their paper, and presented an analysis with the problematic data left out, it’s obviously not a particularly serious issue.
I don’t know what you are referring to with “McKitrick” and “degree/radian error”. If we’re talking about Mann et al 2008 and their analysis/use of Tiljander’s data, what’s the relevance of some other paper (I presume) by some other bloke (or gal)?

Joel Shore
January 26, 2009 9:50 am

John M says:

Are you willing to give McKitrick the same consideration for making a degree/radian error that “doesn’t matter”?

Well, “doesn’t matter” is in the eye-of-the-beholder. In particular, in the original paper, the trend in their data set went 0.27 -> 0.11 -> 0.06 C / decade when they corrected for “economic effects” and then further corrected for “social effect”. According to their own correction, once they fix the degrees / radian error, this became 0.27 -> 0.18 -> 0.13 C / decade, so I would call that a significant difference. (See Sect. 6 of their correction here: http://www.uoguelph.ca/~rmckitri/research/Erratum_McKitrick.pdf)
So yes, they still obtain their basic result that some significant part of the temperature trend is explained by non-climatic influences (although, from my understanding which admittedly is rather vague on this particular paper, there are other criticisms of the paper which could potentially impact this conclusion). However, the strength of the purported effect changes a fair bit.
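The degrees-versus-radians slip behind the erratum is a classic one. In miniature (this is not McKitrick’s actual regression, just the class of error): trig functions in most languages expect radians, so feeding one a latitude in degrees silently produces a wrong spatial weight.

```python
import math

latitude_deg = 60.0

# math.cos expects radians; passing degrees is the bug.
wrong_weight = math.cos(latitude_deg)                # treats 60 as radians
right_weight = math.cos(math.radians(latitude_deg))  # cos(60 deg) = 0.5
```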

John M
January 26, 2009 12:56 pm

Joel Shore,
Thanks for the clear analysis. Perhaps you can tutor foinavon on the issue. Is your analysis in the “peer reviewed literature” or only on blogs? Foinavon would seem to think it matters.
Foinavon,
Just so we’re all on the same page, and I apologize in advance for not arguing this point the way you would like me to argue it.
We agree:
1) Mann acknowledges potential problems with the Tiljander series.
2) Mann shows in the supporting info that “it doesn’t matter”.
Do you agree with the following?
3) Mann inverted the Tiljander proxy.
4) It was the inverted Tiljander proxy that was then used to show “it doesn’t matter”.
A question I have:
5) Did Mann then go ahead and use the inverted Tiljander proxy in the paper even though he acknowledged problems with the proxy, using the logic that because he showed in the suppl. info. that “it doesn’t matter” it would be OK? (Not a trick question; I honestly don’t know the answer.)

Joel Shore
January 26, 2009 3:09 pm

Joel Shore,
Thanks for the clear analysis. Perhaps you can tutor foinavon on the issue. Is your analysis in the “peer reviewed literature” or only on blogs? Foinavon would seem to think it matters.

What I gave you was simply results from McKitrick and Michaels’s erratum, which indeed did appear in the journal Climate Research.
And, I agree with foinavon on this point. It is not that everything not in the peer-reviewed literature is wrong or bad or that everything in the peer-reviewed literature is correct or even free from errors that are in fact pretty obvious (e.g., I gave a few examples above like the paper by Douglass et al. and Essex and McKitrick that had some things that were certainly easily catchable by a decent referee). However, peer review does act as a filter that tends to weed out a lot of the junk and polemics…and, hence, gives a much larger signal-to-noise ratio. Outside of the peer reviewed literature in a field like climate science, the signal-to-noise ratio can become very small indeed.

John M
January 26, 2009 3:30 pm

Joel Shore (15:09:04) :
Thanks for the comment. I certainly agree with your qualitative assessment, just as I agree qualitatively that the Earth has warmed and CO2 has contributed. The disagreement is with the quantitation.
While it’s true that the signal-to-noise is generally higher in peer-reviewed literature, you and foinavon seem to be of the opinion that it’s near zero in certain blogs. I am of the opinion that facts can stand on their own, no matter where they appear. They may be likely to more frequently appear in one place than another, but I don’t view the issue in the same “binary” (if you’ll pardon the expression) fashion that you and foinavon seem to view it.

Allan M R MacRae
January 26, 2009 7:22 pm

ON CHERRY PICKING…
Since I have recently been accused of “Cherry Picking”, I thought I’d explore this subject:
What if you conducted a study of temperature as indicated by tree rings, and it indicated that global temperatures got colder in the last decades of the 20th Century?
The resulting plot (with respect to time) was a hockey stick shaped curve, with the blade pointing downward. Not good!
For an example, see the second set of plots at
http://www.climateaudit.org/?p=899
What to do? Delete the tree ring data for those decades and graft on a set of Surface Temperature data that shows warming.
Now the hockey stick blade points sharply upward to the right. Perfect!
THAT seems to me to be a really good example of cherry-picking.
Steve McIntyre writes about this “Divergence Problem” in his discussion of the proceedings of the NAS hearings in March 2006.
See http://www.climateaudit.org/?p=570
For eight years the scary hockey stick survived intact, while this very significant problem was NOT publicly discussed.
And we are supposed to spend tens of trillions of dollars based on THIS?