Steig et al – falsified

Smearing around data or paint - the results are similar

Jeff Id of The Air Vent emailed me today inviting me to repost Ryan O’s latest work on statistical evaluation of the Steig et al “Antarctica is warming” paper (Nature, Jan 22, 2009). I thought long and hard about the title, especially after reviewing Ryan O’s previous work posted on WUWT, where the paper’s “robustness” was dealt a serious blow. After reading this latest statistical analysis, I think it is fair to conclude that the paper’s premise has been falsified.

Ryan O, in his conclusion, is a bit more gracious:

I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.

Not only that, Ryan O did a more complete job of the reconstruction than Steig et al did; he mentions this in comments at The Air Vent:

Steig only used 42 stations to perform his reconstruction. I used 98, since I included AWS stations.

The AWS stations have their problems, such as periods of warmer readings due to being buried in snow, but even when using this data, Ryan O’s analysis still comes out with less warming than the original Steig et al paper.

Antarctica as a whole is not warming; the Antarctic Peninsula is, and it is significantly removed climatically from the main continent.


It is my view that all Steig and Michael Mann have done with their application of RegEM to the station data is to smear the temperature around, much as an artist would smear red and white paint on a palette to get a new color, “pink,” and then paint the entire continent with it.

It is a lot like the “spin art” you see at the county fair. For example, look (at left) at the different tiles of colored temperature results for Antarctica you can get using Steig’s and Mann’s methodology. The only thing that changes is the starting parameters; the data remains the same, while the RegEM program smears it around based on those starting parameters. In the Steig et al case, the number of PCs and the regpar setting were both chosen by the authors to be 3. Choosing different numbers yields an entirely different result.
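The parameter sensitivity described above is easy to demonstrate on made-up data. The sketch below (in Python, though the reconstruction work discussed here was done in R) rebuilds a toy station matrix while keeping different numbers of principal components. Everything in it — the data, the helper name, the PC counts — is illustrative only, not taken from the actual reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 600 monthly anomalies at 50 "stations" sharing a weak trend
t = np.arange(600)
signal = 0.001 * t
data = signal[:, None] + rng.normal(0.0, 1.0, (600, 50))

def truncated_reconstruction(X, k):
    """Reconstruct X keeping only the first k principal components."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k] + mean

# The reconstructed "continent-wide" trend depends on how many PCs are kept
for k in (3, 9, 13):
    recon = truncated_reconstruction(data, k)
    trend = np.polyfit(t, recon.mean(axis=1), 1)[0]
    print(k, trend)
```

Keeping all components reproduces the input exactly; truncating to a handful of PCs forces every station onto a few shared spatial patterns, which is the “smearing” at issue.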

So the premise of the Steig et al paper boils down to an arbitrary choice of values that “looked good”.

I hope that Ryan O will write a rebuttal letter to Nature, and/or publish a paper. It is the only way the Team will back down on this. – Anthony

UPDATE: To further clarify, Ryan O writes in comments:

“Overall, Antarctica has warmed from 1957-2006. There is no debating that point. (However, other than the Peninsula, the warming is not statistically significant. )

The important difference is the location of the warming and the magnitude of the warming. Steig’s paper has the warming concentrated on the Ross Ice Shelf – which would lead you to entirely different conclusions than having a minimum on the ice shelf. As far as magnitude goes, the warming for the continent is half of what was reported by Steig (0.12 vs. 0.06 Deg C/Decade).

Additionally, Steig shows whole-continent warming from 1967-2006; this analysis shows that most of the continent has cooled from 1967-2006. Given that the 1940’s were significantly warmer in the Antarctic than 1957 (the 1957-1960 period was unusually cold in the Antarctic), focusing on 1957 can give a somewhat slanted picture of the temperature trends in the continent.”

Ryan O adds later: “I should have said that all reconstructions yield a positive trend, though in most cases the trend for the continent is not statistically significant.”


Verification of the Improved High PC Reconstruction

Posted by Jeff Id on May 28, 2009

There is always something going on around here.

Up until now, all the work done on the Antarctic reconstruction has been done without statistical verification. We believed the new reconstructions were better based on correlation-versus-distance plots, visual comparison to station trends, and, of course, their closer approximation of simple area-weighted reconstructions using surface station data.

The authors of Steig et al. have not been queried by myself or anyone else I’m aware of regarding the quality of the higher-PC reconstructions, and the team has largely ignored what has been going on over at the Air Vent. This post, however, demonstrates strongly improved verification statistics, which should send chills down their collective spines.

Ryan was generous in giving credit to others with his wording, but he put together this amazing piece of work himself, using bits of code and knowledge gained from the numerous other posts by himself and others on the subject. He’s done a top-notch job again, through a Herculean effort in code and debugging.

If you didn’t read Ryan’s other post which led to this work the link is:

Antarctic Coup de Grace

——————————————————————————–

Fig. 1: 1957-2006 trends; our reconstruction (left); Steig reconstruction (right)

HOW DO WE CHOOSE?

In order to choose which version of Antarctica is more likely to represent the real 50-year history, we need to calculate statistics with which to compare the reconstructions. For this post, we will examine r, r^2, R^2, RE, and CE for various conditions, including an analysis of the accuracy of the RegEM imputation. While Steig’s paper did provide verification statistics against the satellite data, the only verification statistics that related to ground data were provided by the restricted 15-predictor reconstruction, where the withheld ground stations were the verification target. We will perform a more comprehensive analysis of performance with respect to both RegEM and the ground data. Additionally, we will compare how our reconstruction performs against Steig’s reconstruction using the same methods used by Steig in his paper, along with a few more comprehensive tests.

To calculate what I would consider a healthy battery of verification statistics, we need to perform several reconstructions. The reason for this is to evaluate how well the method reproduces known data. Unless we know how well we can reproduce things we know, we cannot determine how likely the method is to estimate things we do not know. This requires that we perform a set of reconstructions by withholding certain information. The reconstructions we will perform are:

1. A 13-PC reconstruction using all manned and AWS stations, with ocean stations and Adelaide excluded. This is the main reconstruction.

2. An early calibration reconstruction using AVHRR data from 1982-1994.5. This will allow us to assess how well the method reproduces the withheld AVHRR data.

3. A late calibration reconstruction using AVHRR data from 1994.5-2006. Coupled with the early calibration, this provides comprehensive coverage of the entire satellite period.

4. A 13-PC reconstruction with the AWS stations withheld. The purpose of this reconstruction is to use the AWS stations as a verification target (i.e., see how well the reconstruction estimates the AWS data, and then compare the estimation against the real AWS data).

5. The same set of four reconstructions as above, but using 21 PCs in order to assess the stability of the reconstruction to included PCs.

6. A 3-PC reconstruction using Steig’s station complement to demonstrate replication of his process.

7. A 3-PC reconstruction using the 13-PC reconstruction model frame as input to demonstrate the inability of Steig’s process to properly resolve the geographical locations of the trends and trend magnitudes.

Using the above set of reconstructions, we will then calculate the following sets of verification statistics:

1. Performance vs. the AVHRR data (early and late calibration reconstructions)

2. Performance vs. the AVHRR data (full reconstruction model frame)

3. Comparison of the spliced and model reconstruction vs. the actual ground station data.

4. Comparison of the restricted (AWS data withheld) reconstruction vs. the actual AWS data.

5. Comparison of the RegEM imputation model frame for the ground stations vs. the actual ground station data.

The provided script performs all of the required reconstructions and makes all of the required verification calculations. I will not present them all here (because there are a lot of them). I will present the ones that I feel are the most telling and important. In fact, I have not yet plotted all the different results myself. So for those of you with R, there are plenty of things to plot.

Without further ado, let’s take a look at a few of those things.

Fig. 2: Split reconstruction verification for Steig reconstruction

You may remember the figure above; it represents the split reconstruction verification statistics for Steig’s reconstruction. Note the significant regions of negative CE values (which indicate that a simple average of observed temperatures explains more variance than the reconstruction). Of particular note, the region where Steig reports the highest trend – West Antarctica and the Ross Ice Shelf – shows the worst performance.
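For readers unfamiliar with the two skill scores: RE and CE share a numerator and differ only in the benchmark mean they compare against. A minimal sketch with the standard definitions (the function and variable names are mine, not taken from the reconstruction script):

```python
import numpy as np

def re_ce(obs_cal, obs_ver, pred_ver):
    """Reduction of error (RE) and coefficient of efficiency (CE).

    RE benchmarks the verification-period predictions against the
    calibration-period mean; CE benchmarks them against the
    verification-period mean, which is the harder target.
    """
    sse = np.sum((obs_ver - pred_ver) ** 2)
    re = 1.0 - sse / np.sum((obs_ver - obs_cal.mean()) ** 2)
    ce = 1.0 - sse / np.sum((obs_ver - obs_ver.mean()) ** 2)
    return re, ce
```

A prediction no better than the verification-period mean scores CE = 0, so a negative CE means the reconstruction is beaten by a constant.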

Let’s compare to our reconstruction:

Fig. 3: Split reconstruction verification for 13-PC reconstruction

There still are a few areas of negative RE (too small to see in this panel) and some areas of negative CE. However, unlike the Steig reconstruction, ours performs well in most of West Antarctica, the Peninsula, and the Ross Ice Shelf. All values are significantly higher than the Steig reconstruction, and we show much smaller regions with negative values.

As an aside, the r^2 plots are not corrected by the Monte Carlo analysis yet. However, as shown in the previous post concerning Steig’s verification statistics, the maximum r^2 values using AR(8) noise were only 0.019, which produces an indistinguishable change from Fig. 3.
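For context, a Monte Carlo benchmark of this kind asks how large an r^2 independent autocorrelated noise achieves against the target purely by chance. A hedged sketch of the idea (the AR(8) coefficients below are invented for illustration; in the actual analysis they would be fitted to the data):

```python
import numpy as np

rng = np.random.default_rng(42)

def ar_noise(coefs, n, burn=200):
    """Generate one realization of an AR(p) process with the given coefficients."""
    p = len(coefs)
    x = np.zeros(n + burn)
    for i in range(p, n + burn):
        x[i] = np.dot(coefs, x[i - p:i][::-1]) + rng.normal()
    return x[burn:]

# Distribution of r^2 between a fixed target series and independent AR(8) noise
coefs = np.array([0.3, 0.1, 0.05, 0.05, 0.02, 0.02, 0.01, 0.01])
target = ar_noise(coefs, 300)
r2 = [np.corrcoef(target, ar_noise(coefs, 300))[0, 1] ** 2 for _ in range(500)]
print(np.percentile(r2, 97.5))  # chance-level r^2 threshold
```

Any observed r^2 below that percentile is indistinguishable from noise, which is the sense in which a 0.019 correction is invisible in Fig. 3.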

Now that we know that our method provides a more faithful reproduction of the satellite data, it is time to see how faithfully our method reproduces the ground data. A simple way to compare ours against Steig’s is to look at scatterplots of reconstructed anomalies vs. ground station anomalies:

Fig. 4: 13-PC scatterplot (left); Steig reconstruction (right)

The 13-PC reconstruction shows significantly improved performance in predicting ground temperatures as compared to the Steig reconstruction. This improved performance is also reflected in plots of correlation coefficient:

Fig. 5: Correlation coefficient by geographical location

As noted earlier, the performance in the Peninsula, West Antarctica, and the Ross Ice Shelf is noticeably better for our reconstruction. Examining the plots this way provides a good indication of the geographical performance of the two reconstructions. Another way to look at this – one that allows a bit more precision – is to plot the results as bar plots, sorted by location:

Fig. 6: Correlation coefficients for the 13-PC reconstruction

Fig. 7: Correlation coefficients for the Steig reconstruction

The difference is quite striking.

While a good performance with respect to correlation is nice, this alone does not mean we have a “good” reconstruction. One common problem is over-fitting during the calibration period (where the calibration period is defined as the periods over which actual data is present). This leads to fantastic verification statistics during calibration, but results in poor performance outside of that period.

This is the purpose of the restricted reconstruction, where we withhold all AWS data. We then compare the reconstruction values against the actual AWS data. If our method resulted in overfitting (or is simply a poor method), our verification performance will be correspondingly poor.

Since Steig did not use AWS stations for performing his TIR reconstruction, this allows us to do an apples-to-apples comparison between the two methods. We can use the AWS stations as a verification target for both reconstructions. We can then compare which reconstruction results in better performance from the standpoint of being able to predict the actual AWS data. This is nice because it prevents us from later being accused of holding the reconstructions to different standards.

Note that since all of the AWS data was withheld, RE is undefined. RE uses the calibration period mean, and there is no calibration period for the AWS stations because we did the reconstruction without including any AWS data. We could run a split test like we did with the satellite data, but that would require additional calculations and is an easier test to pass regardless. Besides, the reason we have to run a split test with the satellite data is that we cannot withhold all of the satellite data and still be able to do the reconstruction. With the AWS stations, however, we are not subject to the same restriction.

Fig. 8: Correlation coefficient, verification period, AWS stations withheld

With that, I think we can safely put to bed the possibility that our calibration performance was due to overfitting. The verification performance is quite good, with the exception of one station in West Antarctica (Siple). Some of you may be curious about Siple, so I decided to plot both the original data and the reconstructed data. The problem with Siple is clearly the short record length and strange temperature swings (in excess of 10 degrees), which may indicate problems with the measurements:

Fig. 9: Siple station data

While we should still be curious about Siple, we also would not be unjustified in considering it an outlier given the performance of our reconstruction at the remainder of the station locations.

Leaving Siple for the moment, let’s take a look at how Steig’s reconstruction performs.

Fig. 10: Correlation coefficient, verification period, AWS stations withheld, Steig reconstruction

Not too bad – but not as good as ours. Curiously, Siple does not look like an outlier in Steig’s reconstruction. In its place, however, seems to be the entire Peninsula. Overall, the correlation coefficients for the Steig reconstruction are poorer than ours. This allows us to conclude that our reconstruction more accurately calculated the temperature in the locations where we withheld real data.

Along with correlation coefficient, the other statistic we need to look at is CE. Of the three statistics used by Steig – r, RE, and CE – CE is the most difficult statistic to pass. This is another reason why we are not concerned about lack of RE in this case: RE is an easier test to pass.

Fig. 11: CE, verification period, AWS stations withheld

Fig. 12: CE, verification period, AWS stations withheld, Steig reconstruction

The difference in performance between the two reconstructions is more apparent in the CE statistic. Steig’s reconstruction demonstrates negligible skill in the Peninsula, while our skill in the Peninsula is much higher. With the exception of Siple, our West Antarctic stations perform comparably. For the rest of the continent, our CE statistics are significantly higher than Steig’s – and we have no negative CE values.

So in a test of which method best reproduces withheld ground station data, our reconstruction shows significantly more skill than Steig’s.

The final set of statistics we will look at is the performance of RegEM. This is important because it will show us how faithful RegEM was to the original data. Steig did not perform any verification similar to this because PTTLS does not return the model frame. Unlike PTTLS, however, our version of RegEM (IPCA) does return the model frame. Since the model frame is accessible, it is incumbent upon us to look at it.

Note: In order to have a comparison, we will run a Steig-type reconstruction using RegEM IPCA.

There are two key statistics for this: r and R^2. R^2 is called “average explained variance”; it is similar to RE and CE, except that the comparison data come from the calibration period rather than the verification period. In the case of RegEM, all of the original data is technically “calibration period”, which is why we do not calculate RE and CE. Those are verification-period statistics.
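The practical difference between the two is worth a quick illustration: r is blind to bias and amplitude errors, while explained variance penalizes them. A toy example on synthetic data (the function name and numbers are mine, for illustration only):

```python
import numpy as np

def corr_and_explained_variance(obs, pred):
    """r measures co-variation only; explained variance also penalizes
    bias and amplitude errors in the predicted values."""
    r = np.corrcoef(obs, pred)[0, 1]
    explained = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return r, explained

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, 200)
biased = obs + 2.0  # perfectly correlated with obs, but offset by 2 degrees
r, ev = corr_and_explained_variance(obs, biased)
# r is ~1, yet the explained variance is strongly negative
```

This is why a model frame can show respectable correlations while still failing the explained-variance test.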

Let’s look at how RegEM IPCA performed for our reconstruction vs. Steig’s.

Fig. 13: Correlation coefficient between RegEM model frame and actual ground data

As you can see, RegEM performed quite faithfully with respect to the original data. This is a double-edged sword; if RegEM performs too faithfully, you end up with overfitting problems. However, we already checked for overfitting using our restricted reconstruction (with the AWS stations as the verification target).

While we had used regpar settings of 9 (main reconstruction) and 6 (restricted reconstruction), Steig only used a regpar setting of 3. This leads us to question whether that setting was sufficient for RegEM to be able to faithfully represent the original data. The only way to tell is to look, and the next frame shows us that Steig’s performance was significantly less than ours.

Fig. 14: Correlation coefficient between RegEM model frame and actual ground data, Steig reconstruction

The performance using a regpar setting of 3 is noticeably worse, especially in East Antarctica. This would indicate that a setting of 3 does not provide enough degrees of freedom for the imputation to accurately represent the existing data. And if the imputation cannot accurately represent the existing data, then its representation of missing data is correspondingly suspect.

Another point I would like to note is the heavy weighting of Peninsula and open-ocean stations. Steig’s reconstruction relied on a total of 5 stations in West Antarctica, 4 of which are located on the eastern and southern edges of the continent at the Ross Ice Shelf. The resolution of West Antarctic trends based on the ground stations alone is rather poor.

Now that we’ve looked at correlation coefficients, let’s look at a more stringent statistic: average explained variance, or R^2.

Fig. 15: R^2 between RegEM model frame and actual ground data

Using a regpar setting of 9 also provides good R^2 statistics. The Peninsula is still a bit wanting. I checked the R^2 for the 21-PC reconstruction and the numbers were nearly identical. Without increasing the regpar setting and running the risk of overfitting, this seems to be about the limit of the imputation accuracy.

Fig. 16: R^2 between RegEM model frame and actual ground data, Steig reconstruction

Steig’s reconstruction, on the other hand, shows some fairly low values for R^2. The Peninsula is an odd mix of high and low values, West Antarctica and Ross are middling, while East Antarctica is poor overall. This fits with the qualitative observation that the Steig method seemed to spread the Peninsula warming all over the continent, including into East Antarctica – which by most other accounts is cooling slightly, not warming.

CONCLUSION

With the exception of the RegEM verification, all of the verification statistics listed above were performed exactly (split reconstruction) or analogously (restricted 15 predictor reconstruction) by Steig in the Nature paper. In all cases, our reconstruction shows significantly more skill than the Steig reconstruction. So if these are the metrics by which we are to judge this type of reconstruction, ours is objectively superior.

As before, I would qualify this by saying that not all of the errors and uncertainties have been quantified yet, so I’m not comfortable putting a ton of stock into any of these reconstructions. However, I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.

NOTE ON THE SCRIPT

If you want to duplicate all of the figures above, I would recommend letting the entire script run. Be patient; it takes about 20 minutes. While this may seem long, remember that it is performing 11 different reconstructions and calculating a metric butt-ton of verification statistics.

There is a plotting section at the end that has examples of all of the above plots (to make it easier for you to understand how the custom plotting functions work) and it also contains indices and explanations for the reconstructions, variables, and statistics. As always, though, if you have any questions or find a feature that doesn’t work, let me know and I’ll do my best to help.

Lastly, once you get comfortable with the script, you can probably avoid running all the reconstructions. They take up a lot of memory, and if you let all of them run, you’ll have enough room for maybe 2 or 3 more before R refuses to comply. So if you want to play around with the different RegEM variants, numbers of included PCs, and regpar settings, I would recommend getting comfortable with the script and then loading up just the functions. That will give you plenty of memory for 15 or so reconstructions.

As a bonus, I included the reconstruction that takes the output of our reconstruction, uses it for input to the Steig method, and spits out this result:

Fig. 17: Steig reconstruction using the 13-PC reconstruction as input.

The name for the list containing all the information and trends is “r.3.test”.

—————————————————————-

Code is here Recon.R

225 Comments
a jones
May 31, 2009 1:27 pm

LS
Quite so.
Well at the moment I don’t understand it either.
But we have a start.
I have downloaded the paper on the Friday Effect and skimmed it.
In the next few days I will go over it carefully and if anything occurs to me I will advise.
Kindest Regards

May 31, 2009 1:43 pm

a jones (11:46:46):
No I meant that the group velocity of light is necessarily always less than the phase velocity.

Which is the same as to say the velocity of photons (wave group) is less than the speed of light (wave phase). This could account for age of photons, i.e that photons get older and weaker through time or when they vanish at any boundary. Am I misinterpreting your assertion?
Slipsticks pushed us to reason thoroughly on the problems… Computers don’t. 🙂

May 31, 2009 1:45 pm

a jones (13:27:54) :
I have downloaded the paper on the Friday Effect and skimmed it.
The effects are observable, especially if one makes Chree analyses to beat down the noise, but the paper has more to do with the calculation of the distance. I use the JPL Horizons ephemeris and they [LASP] use the VSOP coefficients. At the level of precision sought, it shouldn’t make any difference which one is used. But it does. BTW, as part of my discussion with them they detected a couple of minor errors [relativistic corrections as the photons climb out of the Sun’s gravity well] which they have fixed [they went to a new version of the processing software and reprocessed all the data]. This subject is complicated and most people [including me!] don’t really want to know all the details [except when something doesn’t look right 🙂 ].

May 31, 2009 1:52 pm

Leif… Isn’t the radial speed of Earth constant?

May 31, 2009 2:28 pm

Nasif Nahle (13:52:23) :
Leif… Isn’t the radial speed of Earth constant?
Heaven’s no. If it were we would either be swallowed up in the Sun already or out in the cold empty universe with no sun to warm us.
For once, the speed changes sign twice a year. From Jan, to July the velocity is away from the Sun, then it turns around and from July to next Jan. it is towards the Sun.

a jones
May 31, 2009 3:54 pm

LS
Don’t believe in giving the lad a simple job do you? a relativistic correction for photons climbing out of the gravity well eh?
What fun: but obviously I am going to have to do rather more than I anticipated so we will probably have to bat this to and fro several times as I get a grip on the problem that makes you so uneasy in your mind.
As aforesaid I will start with above paper, do some digging, and then come back.
NN
Of course you can regard light as a stream of photons that is the beauty of the duality of nature. And its great strength is that you can attack the problem from two directions: so if the right one don’t get you the left one will. With a bit of luck.
You are quite right that we have not yet learned to use computing power to its best advantage, though I expect we will. So much data, so much processing power, so many solutions which can be tweaked at the touch of a button. We lose sight of the woods, in the US I think they say forest, for the trees.
And all too often come to believe in the computer output rather than the real observed world. AGW is a classic example of this: but there are many others.
Likewise the mathematical methods we borrow from our friends can as easily conceal answers as provide them. Vector mathematics may be beautifully simple but because it depends on the arrow of time it also hides from us alternative solutions which can be found in the classical solution of the Maxwell equations.
Is this important? I don’t know but I have spent a lifetime in my spare time struggling with it without much success. Because you see the arrow of time is the embodiment of the second law of thermodynamics yet quantum mechanics knows nothing of it: unless of course the reason a particle can suddenly appear and disappear is that it can only exist if it has time to exist in and possibly thats it’s lifetime depends upon its mass as well.
Certainly we now know that this the case with black holes, they do obey the second law and therefore have a lifetime. Note this is essentially a classical solution that depends on the fact that the universe is itself a naked singularity so that no other naked singularity can exist within it.
The concept of difficulties with the arrow of time which I have puzzled over all my life, except when doing physics and engineering to put food on the table. are not popular. Instead we have dark matter, strings, which i think can be undone in a minute, branes etc. not to mention numerous dimensions.
Me I think that the answer lies in the nature of time itself and that whilst the Einstein/ Lorenz view is perfectly correct it is a special case of the more general case. But I must admit if there is a more general case I have not discovered it yet. But as with the case of black holes above I hope we are making progress.
Kindest Regards

May 31, 2009 4:08 pm

Leif Svalgaard (14:28:22) :
For once, the speed changes sign twice a year. From Jan, to July the velocity is away from the Sun, then it turns around and from July to next Jan. it is towards the Sun.
Yes, but my question was in the sense of the velocity of scape isn’t the same than the velocity towards the Sun?

May 31, 2009 5:56 pm

Nasif Nahle (16:08:45) :
Yes, but my question was in the sense of the velocity of scape isn’t the same than the velocity towards the Sun?
I don’t understand your question.

smallz79
June 1, 2009 7:31 am

This is what I am claiming about those warmists:
Appeal to Belief is a fallacy that has this general pattern:
Most people believe that a claim, X, is true.
Therefore X is true.
This line of “reasoning” is fallacious because the fact that many people believe a claim does not, in general, serve as evidence that the claim is true.
There are, however, some cases when the fact that many people accept a claim as true is an indication that it is true. For example, while you are visiting Maine, you are told by several people that they believe that people older than 16 need to buy a fishing license in order to fish. Barring reasons to doubt these people, their statements give you reason to believe that anyone over 16 will need to buy a fishing license.
There are also cases in which what people believe actually determines the truth of a claim. For example, the truth of claims about manners and proper behavior might simply depend on what people believe to be good manners and proper behavior. Another example is the case of community standards, which are often taken to be the standards that most people accept. In some cases, what violates certain community standards is taken to be obscene. In such cases, for the claim “x is obscene” to be true is for most people in that community to believe that x is obscene. In such cases it is still prudent to question the justification of the individual beliefs.

Chuck L
June 1, 2009 8:31 am
neill
June 1, 2009 8:44 am

there is a rebuttal post of some sort now over at RC.

smallz79
June 1, 2009 9:40 am

Look at this pretty interesting. Any one investigated this yet?
http://tech-know.eu/uploads/SUN_heats_EARTH.pdf

smallz79
June 1, 2009 9:42 am

Interesting idea that the Earth heats the atmosphere, not the atmosphere heating the Earth.

neill
June 1, 2009 11:07 am

….and the current titleholder Steig bolts from his corner, launching a massive shot to the body of the challenger Ryan O. Yet the effort caused Steig to drop his left slightly, and the challenger responds with a lightning right cross flush on Steig’s jaw, staggering the champ…..

neill
June 1, 2009 2:15 pm

The crowd is on its feet, the roar is deafening.
Suddenly, Steig backs away, begins to untie his boxing gloves, then pulls them off. The gloves drop to the floor of the ring. Just as suddenly, you can hear a pin drop in the cavernous arena.
Steig says, “I’m not at all interested in debating you — I’ve got much better things to do. Let me be very clear, though, that I’m by no means claiming our results are the last word, and can’t be improved upon. If you have something coherent and useful to say, say it in a peer reviewed paper. If your results improve upon ours, great, that will be a useful contribution.”
A cascade of boos descends upon the ring, along with some rotten fruit, which Steig ducks to avoid. He slips through the ropes at ringside, hops down and trots quickly out of the arena.
His gloves remain in the ring, a target for the crowd’s growing fury.

DJA
June 2, 2009 2:14 am

neill (14:15:30)
“His gloves remain in the ring, a target for the crowd’s growing fury.”
Not any more, comment is closed at RC, even the gloves are gone.

barry
June 2, 2009 11:07 pm

Steig and Ryan O (and Jeff) had a brief, polite exchange with substance. That beats hooting from the stands and throwing fruit. Next step for Ryan is publication. I hope he submits his paper.

June 3, 2009 1:19 am

Peer reviewed papers aren’t worth much if the peers are all in on it as well.

barry
June 3, 2009 7:05 am

How true.

June 3, 2009 12:23 pm

Hopefully I’m not too off topic here as I hope people will see this one.
The latest BBC story on the Antarctic is at: (Sorry – I posted this URL on a recent WUWT article but was snipped (OT). In the interim my internet has cratered and I can’t get back on the BBC news site so you’ll have to look for yourselves. It was a ludicrous article and I’m sure that others will post it.)
In a nutshell, they say that the Antarctic has been covered in ice for 14 million years, but unless we cut CO2 emissions the sea level will rise and that:
“in around 1,000 years they [sea levels] will approach the same levels that existed “before there was persistent ice sheet in Antarctica”.
In addition the BBC states that:
“The worrying thing is that we seem to be going back to carbon dioxide concentrations consistent with there being a lot less ice around.”

neill
June 3, 2009 11:24 pm

barry (23:07:56) :
Who gives a hoot if the “peers” finally allow publication of a paper, when the political buzzer will have long since sounded at that point.
Steig appears to be running out the clock, along with the “team” at RC. Time is the key factor now as regards the policies of AGW being implemented.
Check out Jeff Id’s repeated, ignored pleas to RC for the Steig et al code/data on the ‘Politeness Part Deux’ comment thread at the Air Vent:
‘Jeff Id said
June 2, 2009 at 2:22 pm
Here’s a copy of my initial correspondence with RC.
I wonder if you know when the data and code for this will be released. If it has, where can I find it?
It doesn’t matter to me if the antarctic is warming or not, but I would like to know the details of this study. I’ve read the paper and SI and it isn’t exactly chock full of detail.
(Cut from moderation. I tried again.)
If you wouldn’t mind encouraging your colleagues to publish the data and code used, the review process may gain you considerable support.
I for one wouldn’t be surprised to find the Antarctic was warming, but I need to see the calculations used in order to trust the result. If it looks reasonable, there’s nothing wrong with that. That’s exactly what my blog will say.
(Cut again.
Undeterred I tried again.)
gavin,
After having so many reasonable comments cut I need to add something.
You may find working with me instead of actively suppressing my questions to be less troublesome, my blog is more popular every day.
All I really want to do is understand, Mann08 deserved every criticism I leveled at it (and more), you couldn’t force me to put my name on it. It’s rather unfortunate that it was the first climate paper from which I looked at the data, I understand now that despite the high profile of Mann, most papers are better quality but how am I supposed to react to a high profile climate paper like that?
This is a different paper and a different problem. As I have attempted to say, it has every potential for being accurate. Let it out in the light and let’s see.
I realize this will also be cut, but consider my words; I do honor them.
Eventually part of a comment was let through in edited form on RC – requesting code and DATA.
[Response: What is there about the sentence, “The code, all of it, exactly as we used it, is right here,” that you don’t understand? Or are you asking for a step-by-step guide to Matlab? If so, you’re certainly welcome to enroll in one of my classes at the University of Washington.–eric] ‘
They don’t WANT their papers replicated because they know there’s 10 minutes to go in the fourth quarter (sorry, another tortured sports metaphor), they have the ball and all they need to do is run out the clock. They have no interest in the arcane concept of objective, replicated science at this point.
Steig shut down the thread after 42 comments. The previous fisherman etc etc thread ran for over 1000 comments, still open.
As long as there’s no sunshine on their claims, they can afford to be polite and dabble in “substance” while demanding review by “peers” — at the same time taking potshots on the web.
RC has been billowing smoke for how long and still they claim there’s no fire?
IMHO, RCs duplicity is well worthy of being the target of rotten fruit — and much worse.

June 4, 2009 4:08 am

Here’s that BBC article I was referring to. It’s titled “Origin of Antarctic ice revealed”.
http://news.bbc.co.uk/1/hi/sci/tech/8079767.stm

Hank Hancock
June 4, 2009 11:46 am

I attempted to post a polite and on-topic comment praising Ryan O’s work. It never made it past the moderators. In fact, despite being careful to be polite and germane in all cases, I’ve never had a post on any topic make it past the moderators at RC. It seems the only posts that will pass their moderators are pro-AGW posts, ad hominem attacks on skeptics, or any post that gives the moderators an opportunity to portray a skeptic commenter as a fool. I have yet to see RC allow an informed debate on any foundational premise of AGW.
RC trumpets that there is no longer any debate on AGW. To be more accurate, there is no tolerance by the moderators for dissenting views, and therefore the appearance of no debate. It begs the question: why is it so necessary to protect the AGW hypothesis from reasonable inquiry? For intellectual honesty, I give RC an “F”.

Wolfgang
June 18, 2009 5:02 am

“Overall, Antarctica has warmed from 1957-2006. There is no debating that point. (However, other than the Peninsula, the warming is not statistically significant. ) ”
That is the problem with climatology. If something is not statistically significant, then there is no sense in discussing it. If the errors of the temperature measurement are bigger than the “warming”, then there is no reason to speak of a “warming” at all. That’s not science, that’s ideology.
Regarding climate change: as a chemist, I wonder about all the little bits of information I read about climate phenomena that are poorly understood. Two days ago there was a news story about glaciers in South America that, contrary to the expectations of all the climate catastrophe proponents, are not melting but growing instead. I can read this type of article every two weeks: either something about the chemical composition of aerosols, poorly understood and “misbehaving” in conventional climate models, or about this and that, always “unexpected” and poorly understood by the climate change community.
In my opinion, this plethora of tiny bits is the best evidence that you guys do not understand your business at all! Don’t you think some caution would be indicated before you send out your research results in the style of Roman Catholic dogma?

Adrian Ashfield
August 8, 2009 9:51 am

Anthony,
I wanted to send you a private copy of my email to Nature that linked this topic, but I can’t see a way of doing so.
Should you be interested, please email me your address.
