RC's duplicity prods Jeff Id out of retirement

Jeff asked me to carry this, as his blog has been off the radar since he announced his retirement. It took an RC moment to change that. – Anthony

Wrong is Wrong – A reply to the Real Noise at Real Climate

Posted by Jeff Id on February 7, 2011

I knew this was coming, and based on Ryan's personality, I knew it would be a bit of a rocket. Actually, that knowledge helped me not blog on the RC post. Steig hasn't figured out several issues with our work, or the fact that his result near Byrd was essentially random. The post below is long and fun but, more importantly, accurate, and it contains what should be a nice little surprise for readers. I'm still ticked that even my small critique wasn't allowed at RC. The scientists of RC are unqualified to judge what I write, the censorship is nonsense, and they should read my comments as carefully as anything from their peers. What happens when you realize that you understand better than the alleged professionals? You get bored. Maybe you even quit blogging!

Anyway, Ryan O'Donnell is a little tired of it as well and has just added another 14 pages to the unbelievable 88-page review. Read on and you'll see what I mean. – Jeff

Steve McIntyre has a duplicate of this at his site.

————–

ERIC STEIG'S CRITIQUE – By Ryan O'Donnell

Some of you may have noticed that Eric Steig has a new post on our paper at RealClimate.  In the past when I have wished to challenge Eric on something, I generally have responded at RealClimate.  In this case, a more detailed response is required, and a simple post at RC would be insufficient.  Based on the content, it would not have made it past moderation anyway. [Jeff’s bold]

Lest the following be entirely one-sided, I should note that most of my experiences with Eric in the past have been positive.  He was professional and helpful when I was asking questions about how exactly his reconstruction was performed and how his verification statistics were obtained.  My communication with him following acceptance of our paper was likewise friendly.  While some of the public comments he has made about our paper have fallen far short of being glowing recommendations, Eric has every right to argue his point of view and I do not begrudge his doing so.  I should also note that over the past week I was contacted by an editor from National Geographic, who mentioned in passing that he was referred to me by Eric.  This was quite gracious of Eric, and I honestly appreciated the gesture.

However, once Eric puts on his RealClimate hat, his demeanor is something else entirely.  Again, he has every right to blog about why he feels our paper is something other than how we have characterized it (just as we have every right to disagree).  However, what he does not have the right to do is to defend his point of view by intentionally and grossly misrepresenting facts.

In other words, in his latest post, Eric crossed the line.

Let us examine how (with the best, of course, saved for last).

***

The first salient point is that Eric still doesn't get it. The whole purpose of our paper was to demonstrate that if you properly use the data that S09 used, then the answer changes in a significant fashion. This is different from claiming that a method whereby satellite data and ground station data are used together in RegEM yields a more accurate representation of the [unknown] truth than other methods. We have not (and will not) make such a claim. The only claim we make is that, given the data and regression method used by S09, the answer is different when the method is properly employed. Period.

Unfortunately, Eric does not seem to understand. He wishes to continue comparing our results to other methods and data sets (such as NCEP, ERA-40, Monaghan's kriging method, and boreholes). We did not use those data sets or methods, and we make no comment on whether analyses conducted using them are more likely to give better results. Yet Eric insists on using such comparisons to cast doubt on our methodological criticisms of the S09 method.

While such comparisons are, indeed, important for determining what might be the true temperature history of Antarctica, they have absolutely nothing to do with the methodological criticisms – which were the whole point of our paper. Zero. Zilch. Nada. Note how Eric has refrained from talking about those criticisms directly. I can only assume that this is because he has little to say, as the criticisms are spot-on.

Instead, what Eric would prefer to do is look at other products and say, "See! Our West Antarctic trends at Byrd Station are closer than O'Donnell's! We were right!" While the S09 results at Byrd Station may prove to be more accurate as better and better analyses are performed, if so, it was sheer luck (as I will demonstrate, once again). The S09 analysis has neither the geographic resolution nor the proper calibration method necessary to independently demonstrate this accuracy.

I could write a chapter in the Farmer’s Almanac explaining how the global temperature will drop by 0.5 degrees by 2020 and base my analysis on the alignment of the planets and the decline in popularity of the name “Al”.  If the global temperature drops by 0.5 degrees by 2020, does that validate my method – and, by extension, invalidate the criticisms against my method?  Eric, apparently, would like to think so.

The question of whether the proper use of the AVHRR and station data sets yields an accurate representation of the temperature history of Antarctica is an entirely separate topic. To be sure, it is an important one, and it is a legitimate course of scientific inquiry for Eric to argue that our West Antarctic results are incorrect based on independent analyses. What is entirely, wholly, and completely not legitimate is to use those same arguments to defend the method of his paper, as the former makes no statement on the latter.

If he wishes to argue that our results are incorrect, that’s fine.  But if he wishes to defend his method, then it is time for him to begin advancing mathematically correct arguments why his method was better (or why our criticisms were not accurate).  Otherwise, it is time for Eric to stop playing the carnival prognosticator’s game of using the end result to imply that an inappropriate use of information was somehow “the right way”.

***

The second salient point relates to the evidence Eric presents that our reconstruction is less accurate.  When it comes to differences between the reconstruction and ground data, Eric focuses primarily on Byrd station.  While his discussion seems reasonable at first glance, it is quite misleading.  Let us examine Eric’s comments on Byrd in detail.

Eric first presents a plot where he displays a trendline of 0.38 +/- 0.2 Deg C / decade (raw data) for 1957 – 2006. He claims that this is the ground data (in annual anomalies) for Byrd station. While there is not much untrue about this statement, there is certainly a lie of omission. To see this lie, we need only look at the raw data from Byrd over this period:

Pay close attention to the post-2000 timeframe.  Notice how the winter months are absent?  Now what do we suppose might happen if we fit a trend line to this data?  One might go so far as to say that the conclusion is foregone.

So . . . would Eric Steig really do this?  Of course not.  A professional scientist would know better.

Trend check on the above plot:  0.38 Deg C / decade.

Hm.

By the way, the trend uncertainty when the trend is calculated this way is +/- 0.32, and, if one corrects for the serial correlation in the residuals, it jumps to +/- 0.86.  But since neither of those tell the right story, I suppose the best option is to simply copy over the +/- 0.2 from the Monaghan reconstruction trend.
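
For readers who want to check these numbers themselves, the correction takes only a few lines in base R. A minimal sketch, assuming a monthly anomaly series named byrd (the object name is a stand-in, not our archived code):

    # 'byrd' is a hypothetical monthly anomaly ts (1957 - 2006).
    # Decadal trend with the standard lag-1 (Quenouille) correction
    # for serial correlation in the residuals:
    t    <- as.numeric(time(byrd))
    fit  <- lm(byrd ~ t)
    r1   <- acf(residuals(fit), plot = FALSE)$acf[2]   # lag-1 autocorrelation
    n    <- length(residuals(fit))
    neff <- n * (1 - r1) / (1 + r1)                    # effective sample size
    se   <- summary(fit)$coefficients[2, 2] * sqrt(n / neff)
    coef(fit)[2] * 10                                  # trend, Deg C / decade
    qt(0.975, df = neff - 2) * se * 10                 # +/- 95% half-width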

When calculated properly, the 50-year Byrd trend is 0.25 +/- 0.15 Deg C / decade (corrected for serial correlation). This is still considerably higher than the trend at the Byrd location in our reconstruction, and is very close to the trend in the S09 reconstruction. However, we have yet to address the fact that the pre-1980 data comes from an entirely different sensor than the post-1980 data.

Eric notes this fact, and says:

Note that caution is in order in simply splicing these together, because sensor calibration issues could mean that the 1°C difference is an overestimate (or an underestimate).

He then proceeds to splice them together anyway. His justification is that there is about a 1 Deg C difference in the raw temperatures, and that there is independent evidence from a talk given at the AGU conference about a borehole measurement from the West Antarctic Ice Sheet Divide. This would be quite interesting, except that the first half of the statement is completely untrue.

If you look at the raw temperatures (which was how he computed his trend, so one might assume that he would compare raw temperatures here as well), the manned Byrd station shows a mean of -27.249 Deg C.  The AWS station shows a mean of -27.149 . . . or a 0.1 Deg C difference.  Perhaps he missed a decimal point.

However, since computing the trend using the raw data is unacceptable due to an uneven distribution of months for which data is present, computing the difference in temperature using the raw data is likewise unacceptable. The difference in temperature should instead be computed using anomalies, to remove the annual cycle. If the calculation is done this (the proper) way, the manned Byrd station shows an average anomaly of -0.28 Deg C and the AWS station shows an average of 0.27 Deg C. This yields a difference of 0.55 Deg C . . . which is still not 1 Deg C.

Maybe he’s rounding up?

Seriously, Eric . . . are we playing horseshoes?
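
(For anyone playing along at home, computing the anomalies is a two-liner in R. A sketch, assuming a hypothetical monthly ts named x; this is the standard climatology subtraction, not our archived code:)

    # Remove the mean annual cycle from a monthly series 'x' (frequency = 12).
    clim <- tapply(x, cycle(x), mean, na.rm = TRUE)  # 12 monthly climatological means
    anom <- x - clim[cycle(x)]                       # subtract the matching month's mean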

Of course, of this 0.55 Deg C difference, fully one-third is due to a single year (1980), which occurred 30 years ago:

Without 1980 – which occurs at the very beginning of the AWS record – the difference between the manned station anomalies and the AWS anomalies is 0.37 Deg C.

Furthermore, even if there were a 1 degree difference in the manned station and AWS values, this still doesn’t tell the story Eric wants it to.

The original Byrd station was located at 119 deg 24' 14" W. The AWS station is located at 119 deg 32' W. Seems like almost the same spot, right? The difference is only about 2.5 km. This is the same distance as that between McMurdo (elev. 32 m) and Scott Base (elev. 20 m). So if one can willy-nilly splice the two Byrd stations together, one would expect that the same could be done for McMurdo and Scott Base. So let's look at the mean temperatures and trends for those two stations (both of which have nearly complete records).

McMurdo:  -16.899 Deg. C (mean)

Scott Base:  -19.850 Deg. C (mean)

That’s a 3 degree difference for stations at a similar elevation and a linear separation of 2.5 km . . . just like Byrd manned and Byrd AWS.

So what would the trend be if we spliced the first half of Scott Base with the second half of McMurdo?

1.05 +/- 0.19 Deg C / decade.

OMG . . . it is SO much worse than we thought!

Microclimate matters.  Sensor differences matter.  The fact that AWS stations are likely to show a warming bias compared to manned stations (as the distance between the sensor and the snow surface tends to decrease over time, and Antarctica shows a strong temperature gradient between the nominal 3m sensor height and the snow surface) matters.  All of these are ignored by Eric, and he should know better.
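
(The splice test itself is easy to reproduce. A sketch, with scott and mcmurdo as hypothetical monthly ts objects on a common 1957 - 2006 axis, split at 1980 to mirror the Byrd manned/AWS transition:)

    # First half of Scott Base, second half of McMurdo, spliced end to end.
    splice <- ts(c(window(scott,   end   = c(1980, 12)),
                   window(mcmurdo, start = c(1981, 1))),
                 start = c(1957, 1), frequency = 12)
    tt <- as.numeric(time(splice))
    coef(lm(splice ~ tt))[2] * 10   # spliced trend, Deg C / decade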

Eric goes on to claim that this meant we somehow used less of the available station data than he did:

On top of that, O’Donnell et al. do not appear to have used all of the information available from the weather stations. Byrd is actually composed of two different records, the occupied Byrd Station, which stops in 1980, and the Byrd AWS station which has episodically recorded temperatures at Byrd since then. O’Donnell et al. treat these as two independent data sets, and because their calculations (like ours) remove the mean of each record, O’Donnell et al. have removed information that might be rather important: namely, that the average temperatures in the AWS record (post 1980) are warmer — by about 1 Deg C — than the pre-1980 manned weather station record.

In reality, the situation is quite the opposite of what Eric implies. We have no a priori knowledge of how the two Byrd stations should be combined. We used the relationships between the two Byrd stations and the remainder of the Antarctic stations (with Scott Base and McMurdo – which show strong trends of 0.21 and 0.25 Deg C / decade – dominating the regression coefficients) to determine how far the two stations should be offset. By simply combining the two stations without considering how they relate to any other stations, it was Eric who threw this information away.

That being said, combining the two station records without regard to how they relate to other stations does change our results. If Eric could somehow justify doing so, our West Antarctic trend would increase from 0.10 Deg C / decade to 0.16 Deg C / decade, and the area of statistically significant trends would grow to cover the WAIS Divide, yielding statistically significant warming over 56% (instead of 33%) of West Antarctica. However, to do so, Eric must justify why it is okay to allow RegEM to determine offsets for every other infilled point in Antarctica except Byrd, and he must furthermore propose and justify a specific value for the offset. If RegEM cannot properly combine the Byrd stations by infilling missing values, then what confidence can we have that it can properly infill anything else? And if we have no confidence in RegEM's ability to infill, then the entire S09 reconstruction – and, by extension, ours – is nothing more than a mathematical artifact.

However, to guard against this possibility (unlike S09), we used an alternative method to determine offsets as a check against RegEM (credit Jeff Id for this idea and the implementation).  Rather than doing any infilling, we determined how far to offset non-overlapping stations by comparing mean temperatures between stations that were physically close, and using these relationships to provide the offsets.  This method yielded patterns of temperature change that were nearly identical to the RegEM-infilled reconstructions, with a resulting West Antarctic trend of 0.12 Deg C / decade.
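
The idea is simple enough to sketch in a few lines of R (object names are hypothetical; the actual implementation is in our archived code). Because the two Byrd records never overlap, a nearby station that spans both periods serves as the bridge:

    # All series aligned on the same monthly axis, NA where data are absent.
    bridge_offset <- function(rec1, rec2, neighbor) {
      d1 <- mean(rec1 - neighbor, na.rm = TRUE)  # rec1 relative to the neighbor
      d2 <- mean(rec2 - neighbor, na.rm = TRUE)  # rec2 relative to the neighbor
      d2 - d1                                    # implied offset between the records
    }
    bridge_offset(byrd_manned, byrd_aws, mcmurdo)  # hypothetical usage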

Lastly, Eric implies that his use of Byrd as a single station somehow makes his method more accurate.  This is hardly true.  Whether you use Byrd as a single station or two separate stations, the S09 answer changes by a mere 0.01 Deg C / decade in West Antarctica and 0.005 Deg C / decade at the Byrd location.  The characteristic of being entirely impervious to changes in the “most critical” weather station data is a rather odd result for a method that is supposed to better utilize the station data.

Interestingly, if you pre-combine the Byrd data like Eric does and perform the reconstruction exactly like S09, the resulting infilled ground station trend at Byrd is 0.13 Deg C / decade (fairly close to our gridded result). The S09 gridded result, however, is 0.25 Deg C / decade – or almost double the ground station trend from their own RegEM infilling, and closer to our gridded result than to Eric's.

(Weird.  Didn’t Eric say their reconstruction better captures the ground station data from Byrd?  Hm.)

Stranger yet, if you add a 0.1 Deg C / decade trend to the Peninsula stations, the S09 West Antarctic trend increases from 0.20 to 0.25 – with most of the increase occurring 2,500 km away from the Peninsula on the Ross Ice Shelf – the East Antarctic trend increases from 0.10 to 0.13 . . . but the Peninsula trend only increases from 0.13 to 0.15.  So changes in trends at the Peninsula stations result in bigger changes in West and East Antarctica than in the Peninsula!  Nor is this an artifact of retaining the satellite data (sorry, Eric, but I’m going to nip that potential arm-flailing argument in the bud).  Using the modeled PCs instead of the raw PCs in the satellite era, the West trend goes from 0.16 to 0.22, East goes from 0.08 to 0.12, and the Peninsula only goes from 0.11 to 0.14.

(Weird.  Didn’t Eric say their reconstruction better captures the ground station data from Byrd?  Hm.)

And (nope, still not done with this game) EVEN STRANGER YET, if you add a whopping 0.5 Deg C / decade trend to Byrd (or five times what we added to the Peninsula), the S09 West Antarctic trend changes by . . . well . . . a mere 0.02 Deg C / decade, and the gridded trend at Byrd station rises massively from 0.25 to . . . well . . . 0.29.  The Byrd station trend used to produce this result, however, clocks in at a rather respectable 0.75 Deg C / decade.  Again, this is not an artifact of retaining the satellite data.  If you use the modeled PCs, the West trend increases by 0.03 and the gridded trend at Byrd increases to only 0.26.

(Weird.  Didn’t Eric say their reconstruction better captures the ground station data from Byrd?  Hm.)

Now, what happens to our reconstruction if you add a 0.1 Deg C / decade trend to the Peninsula stations?  Our East Antarctic trends go from 0.02 to . . . 0.02.  Our West Antarctic trends go from 0.10 to 0.16, with all of the increase in Ellsworth Land (adjacent to the Peninsula).  And the Peninsula trend goes from 0.35 to 0.45 . . . or the same 0.1 Deg C / decade we added to the Peninsula stations.

And what happens if you add a 0.5 Deg C / decade trend to Byrd?  Why, the West Antarctic trend increases 260% from 0.10 to 0.26 Deg C / decade, and the gridded trend at Byrd Station increases to 0.59 Deg C / decade . . . with the East Antarctic trends increasing by a mere 0.01  and the Peninsula trends increasing by 0.02.

(Weird.  Didn’t Eric say their reconstruction better captures the ground station data from Byrd?  Hm.)
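
(All of these perturbation tests follow the same recipe, and anyone can rerun them against our archived code. A bare sketch of the idea, with stand-in object names:)

    # Add a synthetic linear trend to selected station columns, rerun the
    # reconstruction, and compare regional trends against the unperturbed run.
    add_trend <- function(x, deg_per_decade) {
      x + deg_per_decade * (as.numeric(time(x)) - start(x)[1]) / 10
    }
    perturbed <- stations                     # monthly mts of all ground stations
    perturbed[, pen_cols] <- add_trend(stations[, pen_cols], 0.1)
    recon_fn(perturbed)                       # stand-in for the reconstruction routine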

You see, Eric, the nice thing about getting the method right is that if the data changes, or more data becomes available (like, say, a better way to offset Byrd station than using the relationships to other stations), then the answer will change in response. So if someone uses our method with better data, they will get a better answer. If someone uses your method with better data, well, they will get the same answer . . . or they will get garbage. This is why I find this comment by you to be particularly ironic:

At some point, yes. It’s not very inspiring work, since the answer doesn’t change, but i suppose it has to get done. I had hoped O’Donnell et al. would simply get it right, and we’d be done with the ‘debate’, but unfortunately not.—eric

Eric's claim that his reconstruction better captures the information from the station data is wholly and demonstrably false. With about 30 minutes of effort he could have proven this to himself . . . not only for his reconstruction, but also for ours. On that note, I found this comment by Eric to be particularly infuriating:

If you can get their code to work properly, let me know. It’s not exactly user friendly, as it is all in one file, and it takes some work to separate the modules.

Here’s how you do it, Eric:

  1. Go to CRAN (http://cran.r-project.org/) and download the latest version of R.
  2. Download our code here:  http://www.climateaudit.info/data/odonnell
  3. Open up R.
  4. Open up our code in Notepad.
  5. Put your cursor at the very top of our code.
  6. Go all the way to the end of the code and SHIFT-CLICK.
  7. Press CTRL-C.
  8. Go to R.
  9. Press CTRL-V.
  10. Wait about 17 minutes for the reconstructions to compute.
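
(Or, if pasting a whole file into the console feels clumsy, one line does it. The filename here is the one a commenter below reports downloading; adjust the path to wherever you saved the file:)

    setwd("C:/path/to/download")          # folder containing the downloaded script
    source("RO10Code.txt", echo = TRUE)   # runs the full reconstruction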

Easy-peasy.  You didn’t even try.

I am sick of arm-waving arguments, unsubstantiated claims, and uncalled-for snark.  Did you think I wouldn’t check?  I would have thought you would have learned quite the opposite from your experience reviewing our paper.

Oops.

Did I let something slip?

***

I mentioned at the beginning that I was planning to save the best for last.

I have known that Eric was, indeed, Reviewer A since early December. I knew this because I asked him. When I asked, I promised that I would keep the information in confidence, as I was merely curious whether the guess I had originally posted on tAV had been correct.

Throughout all of the questioning on Climate Audit, tAV, and Andy Revkin’s blog, I kept my mouth shut.  When Dr. Thomas Crowley became interested in this, I kept my mouth shut.  When Eric asked for a copy of our paper (which, of course, he already had) I kept my mouth shut.  I had every intention of keeping my promise . . . and were it not for Eric’s latest post on RC, I would have continued to keep my mouth shut.

However, when someone makes a suggestion during review that we take and then later attempts to use that very same suggestion to disparage our paper, my obligation to keep my mouth shut ends.

(Note to Eric:  unsubstantiated arm-waving may frustrate me, but duplicity is intolerable.)

Part of Eric's post is spent on the choice to use individual ridge regression (iRidge) instead of truncated total least squares (TTLS) for our main results. He makes the following comment:

Second, in their main reconstruction, O’Donnell et al. choose to use a routine from Tapio Schneider’s ‘RegEM’ code known as ‘iridge’ (individual ridge regression). This implementation of RegEM has the advantage of having a built-in cross validation function, which is supposed to provide a datapoint-by-datapoint optimization of the truncation parameters used in the least-squares calibrations. Yet at least two independent groups who have tested the performance of RegEM with iridge have found that it is prone to the underestimation of trends, given sparse and noisy data (e.g. Mann et al, 2007a, Mann et al., 2007b, Smerdon and Kaplan, 2007) and this is precisely why more recent work has favored the use of TTLS, rather than iridge, as the regularization method in RegEM in such situations. It is not surprising that O’Donnell et al (2010), by using iridge, do indeed appear to have dramatically underestimated long-term trends—the Byrd comparison leaves no other possible conclusion.

The first – and by far the biggest – problem that I have with this is that our original submission relied on TTLS. Eric questioned the choice of the truncation parameter, and we presented the work Nic and Jeff had done (using ridge regression, direct RLS with no infilling, and the nearest-station reconstructions), all of which gave nearly identical results.

What was Eric’s recommendation during review?

My recommendation is that the editor insist that results showing the ‘mostly likely’ West Antarctic trends be shown in place of Figure 3.  [these were the ridge regression results] While the written text does acknowledge that the rate of warming in West Antarctica is probably greater than shown, it is the figures that provide the main visual ‘take home message’ that most readers will come away with. I am not suggesting here that kgnd = 5 will necessarily provide the best estimate, as I had thought was implied in the earlier version of the text. Perhaps, as the authors suggest, kgnd should not be used at all, but the results from the ‘iridge’ infilling should be used instead. . . . I recognize that these results are relatively new – since they evidently result from suggestions made in my previous review [uh, no, not really, bud . . . we’d done those months previously . . . but thanks for the vanity check] – but this is not a compelling reason to leave this ‘future work’.

(emphasis and bracketed comment added by me)

And after we replaced the TTLS versions with the iRidge versions (which were virtually identical to the TTLS ones), what was Eric’s response?

The use of the ‘iridge’ procedure makes sense to me, and I suspect it really does give the best results. But O’Donnell et al. do not address the issue with this procedure raised by Mann et al., 2008, which Steig et al. cite as being the reason for using ttls in the regem algorithm. The reason given in Mann et al., is not computational efficiency — as O’Donnell et al state — but rather a bias that results when extrapolating (‘reconstruction’) rather than infilling is done. Mann et al. are very clear that better results are obtained when the data set is first reduced by taking the first M eigenvalues. O’Donnell et al. simply ignore this earlier work. At least a couple of sentences justifying that would seem appropriate.  (emphasis added by me)

So Eric recommends that we replace our TTLS results with the ridge regression ones (which required a major rewrite of both the paper and the SI) and then agrees with us that the iRidge results are likely to be better . . . and promptly attempts to turn his own recommendation against us.

There are not enough vulgar words in the English language to properly articulate my disgust at his blatant dishonesty and duplicity.

The second infuriating aspect of this comment is that he again tries to misrepresent the Mann article to support his claim, when he already knew otherwise. How do I know he knew otherwise? Because I told him so in the review response:

We have two topics to discuss here.  First, reducing the data set (in this case, the AVHRR data) to the first M eigenvalues is irrelevant insofar as the choice of infilling algorithm is concerned.  One could just as easily infill the missing portion of the selected PCs using ridge regression as TTLS, though some modifications would need to be made to extract modeled estimates for ridge.  Since S09 did not use modeled estimates anyway, this is certainly not a distinguishing characteristic.

The proper reference for this is Mann et al. (2007), not (2008).  This may seem trivial, but it is important to note that the procedure in the 2008 paper specifically mentions that dimensionality reduction was not performed for the predictors, and states that dimensionality reduction was performed in past studies to guard against collinearity, not – as the reviewer states – out of any claim of improved performance in the absence of collinear predictors.  Of the two algorithms – TTLS and ridge – only ridge regression incorporates an automatic check to ensure against collinearity of predictors.  TTLS relies on the operator to select an appropriate truncation parameter.  Therefore, this would suggest a reason to prefer ridge over TTLS, not the other way around, contrary to the implications of both the reviewer and Mann et al. (2008).

The second topic concerns the bias.  The bias issue (which is also mentioned in the Mann et al. 2007 JGR paper, not the 2008 PNAS paper) is attributed to a personal communication from Dr. Lee (2006) and is not elaborated beyond mentioning that it relates to the standardization method of Mann et al. (2005).  Smerdon and Kaplan (2007) showed that the standardization bias between Rutherford et al. (2005) and Mann et al. (2005) results from sensitivity due to use of precalibration data during standardization.  This is only a concern for pseudoproxy studies or test data studies, as precalibration data is not available in practice (and is certainly unavailable with respect to our reconstruction and S09).

In practice, the standardization sensitivity cannot be a reason for choosing ridge over TTLS unless one has access to the very data one is trying to reconstruct.  This is a separate issue from whether TTLS is more accurate than ridge, which is what the reviewer seems to be implying by the term “bias” – perhaps meaning that the ridge estimator is not a variance-unbiased estimator.  While true, the TTLS estimator is not variance-unbiased either, so this interpretation does not provide a reason for selecting TTLS over ridge.  It should be clear that Mann et al. (2007) was referring to the standardization bias – which, as we have pointed out, depends on precalibration data being available, and is not an indicator of which method is more accurate.

More to [what we believe to be] the reviewer's point, though Mann et al. (2005) did show in the Supporting Information that TTLS demonstrated improved performance compared to ridge, this was by example only, and cannot therefore be considered a general result. By contrast, Christiansen et al. (2009) demonstrated worse performance for TTLS in pseudoproxy studies when stochasticity is considered – confirming that the Mann et al. (2005) result is unlikely to be a general one. Indeed, our own study shows ridge to outperform TTLS (and to significantly outperform the S09 implementation of TTLS), providing additional confirmation that any general claim of increased TTLS accuracy over ridge is rather suspect.

We therefore chose to mention the only consideration that actually applies in this case, which is computational efficiency.  While the other considerations mentioned in Mann et al. (2007) are certainly interesting, discussing them is extratopical and would require much more space than a single article would allow – certainly more than a few sentences.
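
(For readers who want to see the contrast between the two regularization schemes in miniature: ridge shrinks continuously, with a penalty h that can be chosen automatically by cross-validation, while TTLS truncates the SVD at an integer k that the operator must pick. A toy sketch using the textbook formulas, not the RegEM implementation:)

    # Textbook forms only; not the RegEM code.
    ridge_beta <- function(X, y, h) {
      solve(crossprod(X) + h^2 * diag(ncol(X)), crossprod(X, y))
    }
    ttls_beta <- function(X, y, k) {
      s   <- svd(cbind(X, y))                     # SVD of the augmented matrix [X y]
      V12 <- s$v[seq_len(ncol(X)), -seq_len(k), drop = FALSE]
      V22 <- s$v[ncol(X) + 1,      -seq_len(k), drop = FALSE]
      -V12 %*% t(V22) %*% solve(V22 %*% t(V22))   # standard TTLS solution
    }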

Note some curious changes between Eric's review comment and his RC post. In his review comment, he refers to Mann 2008. I correct him and let him know that the proper reference is Mann 2007. He also makes no mention of Smerdon's paper. I do. I also took the time to explain, in excruciating detail, that the "bias" referred to in both papers is standardization bias, not variance bias in the predicted values.

So what does Eric do? Why, he changes the references to the ones I provided (notably excluding the Christiansen paper) and proceeds to misrepresent them in exactly the same fashion as he tried during the review process! Wow. What honesty.

And by the way, in case anyone (including Eric) is wondering if I am the one who is misrepresenting Jason Smerdon’s paper, fear not.  Nic and I contacted him by email to ensure our description was accurate.

But the B.S. piles even deeper.  Eric implies that the reason the Byrd trends are lower is due to variance loss associated with iRidge.  He apparently did not bother to check that his reconstruction shows a 16.5% variance loss (on average) in the pre-satellite era when compared to ours. The reason for choosing the pre-satellite era is that the satellite era in S09 is entirely AVHRR data, and is thus not dependent on the regression method.  We also pointed this out during the review . . . specifically with respect to the Byrd station data. Variance loss due to regularization bias has absolutely NOTHING to do with the lower West Antarctic trends in our reconstruction . . . and Eric knows it.

This knowledge, of course, does not seem to stop him from implying the opposite.
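
(Checking the variance claim takes a few lines as well. A hypothetical sketch, with both reconstructions as gridded monthly mts objects on the same grid:)

    # Average fractional variance loss of S09 relative to ours, pre-satellite era.
    pre_s09  <- window(recon_s09,  end = c(1981, 12))   # stand-in object names
    pre_ours <- window(recon_ours, end = c(1981, 12))
    mean(1 - apply(pre_s09, 2, sd) / apply(pre_ours, 2, sd))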

Then Eric moves on to the TTLS reconstructions from the SI, grabs the kgnd = 6 reconstruction, and says, "See? Overfitting!" without, of course, providing any evidence that this is the case. He goes on to surmise that the reason for the overfitting is that our cross-validation procedure selected the improper number of modes to retain – yet again without providing any evidence that this is the case (other than that it better matches his reconstruction).

So if Eric is right, then using kgnd = 6 should better capture the Byrd trends than kgnd = 7, right?  Let’s see if that happens, shall we?

If we perform the same test as before (combining the two Byrd stations and adding a 0.5 Deg C / decade trend), we get:

Infilled trend (kgnd = 6):  0.45 Deg C / decade

Infilled trend (kgnd = 7):  0.52 Deg C / decade

Weird.  It looks as if the kgnd = 7 option better captures the Byrd trend . . . didn’t Eric say the opposite?  Hm.

These translate into reconstruction trends at the Byrd location of 0.42 and 0.45 Deg C / decade, respectively (you can try other, more reasonable trends if you want . . . it doesn’t matter).  I also note that the TTLS reconstructions do a poorer job of capturing the Byrd ground station trend than the iRidge reconstructions, which is the opposite behavior suggested by Eric (and this was noted during the review process as well).

Perhaps Eric meant that we overfit the “data rich” area of the Peninsula?  Fear not, dear Reader, we also have a test for that!  Let’s add our 0.1 Deg C / decade trend to the Peninsula stations, shall we, and see what results:

Recon trend increase (kgnd = 6):  Peninsula +0.08, West +0.02, East +0.02

Recon trend increase (kgnd = 7):  Peninsula +0.11, West +0.03, East +0.01

Weird.  It looks as if the kgnd = 7 option better captures the Peninsula trend with a similar effect on the East and West trends . . . didn't Eric say the opposite?  Hm.

By the way, Eric also fails to note that the kgnd = 5 and 6 Peninsula trends, when compared to the corresponding station trends, are outside the 95% CIs for the stations.  I guess that’s okay, though, since the only station that really matters in all of Antarctica is Byrd (even though his own reconstruction is entirely immune to changes at Byrd).

As far as the other misrepresentations go in his post, I’m done with the games.  These were all brought up by Eric during the review.  Rather than go into detail here, I have simply made all of the versions of our paper, the reviews, and the responses available at http://www.climateaudit.info/data/odonnell.

***

My final comment is that this is not the first time.

At the end of his post, Eric suggests that the interested Reader see his post “On Overfitting”.  I suggest the interested Reader do exactly that.  In fact, I suggest the interested Reader spend a good deal of time on the “On Overfitting” post to fully absorb what Eric was saying about PC retention.  Following this, I suggest that the interested Reader examine my posts in that thread.

Once this is completed, the interested Reader may find Review A rather . . . well . . . interesting when the Reader comes to the part where Eric talks about PC retention.

Fool me once, shame on you.  But twice isn’t going to happen, bud.

142 Comments
stupidboy
February 7, 2011 2:36 pm

Some 30 years ago, a friend of mine graduated in Marine Biology but, because there was no money in it, he became a banker. Now, of course, there is ample funding for marine biologists, as long as they promote AGW. Scientific research has become corrupted by the theory of AGW to the point where research is being treated with cynicism by lay people, and scientists who claim record-breaking cold is caused by AGW are being laughed at. Fortunately, scientists are not yet held in the same contempt as bankers, but the greed of the peddlers of AGW porn is making them blind to the perfidious path they tread. Historians of science will be savage when they come to write about the disingenuous conspiracies surrounding AGW and its proponents, especially if they're having to write by candlelight in the freezing cold because the wind isn't blowing.

geo
February 7, 2011 2:36 pm

An excellent example of why journals are leery about publishing outsiders whose reputation and livelihood do not depend on "good relations in the community". They won't take being knifed in the back in such a manner with a thin smile and a mental note to return the favor likewise some day down the road.
Instead they go and cause a public nuisance that embarrasses everyone. . .
At least I suspect that is how it will be seen by "the Club".

ZT
February 7, 2011 2:36 pm

Statistically, about 1 in 100 lawyers are thrown out of their profession during their careers, about 1 in 100 doctors, etc. Fortunately, no climatologist has ever been disciplined.
Q. Why is it that climatologists are so much better than the other professional classes?
A. All climatologists have been exonerated by 3 independent inquiries (at a minimum).

onion
February 7, 2011 2:43 pm

This brings graphically to life the intention to abuse the peer review process that The Team outlined in the Climategate emails.

GeorgeGr
February 7, 2011 2:45 pm

Snipped from the newest post made at realclimate.org:

It seems Eric Steig lacks the guts to show up. He obviously prefers to hide out in his echo chamber over at realclimate.org, where he can snip or moderate away any counter-arguments, thus creating his own alternative reality… Sad.

GeorgeGr
February 7, 2011 2:48 pm

The html tags did not work. Here is the post from Realclimate that I was trying to cite:
RE: Vernon at 42 says:
““Why are responses from one of the co-authors not being posted? That seems counter productive.”
Two of the authors RyanO and JeffID are posting at ClimateAudit.org and Wattsupwiththat.com
Hey Eric – why don’t you go and debate this at ClimateAudit.org? At least then we will all know the comments are not being censored.[edit for insulting remarks].
[Response:Know what? I have a day job. And those guys know perfectly well I do not read those sites without a good reason to, and telling me I have ‘explaining to do’ doesn’t rise to that level. If they have scientific points to make, they should make them here.–eric]
Comment by ThinkingScientist — 7 Feb 2011 5:24 PM”

ge0050
February 7, 2011 2:59 pm

After Climategate none of this surprises me. I've experienced RC censorship as described above. Snide comments from Gavin and buddies when they don't agree with you, and any further postings you try to make showing evidence for your position are blocked. They are like little children, hiding behind their mommies' skirts, darting out to throw rocks when the other kids on the block walk by. What it shows is RealScience = JunkScience. Like countries that use "democratic" in their name, if they need to use "Real" in their name, you can be sure they are anything but.
I've never used R before. Following the above instructions I installed it on Win 7 x64, cut and pasted "RO10Code.txt" into R, and after churning away for a while up popped a screen with 3 graphs, showing Deg C, Variance, and # Eff Parameters. It's still churning away, so maybe more will happen?
> main.gnd = emFn(Yo, maxiter = 500, tol = 0.0005, type = “iridge”, plt = c(T, 1), DoFL = 12)
10: Relative: 1.78244; RMS: 0.054; Stagnation: 0.03029.
20: Relative: 1.93936; RMS: 0.01585; Stagnation: 0.00817.
30: Relative: 1.97744; RMS: 0.00709; Stagnation: 0.00359.
40: Relative: 1.99406; RMS: 0.00434; Stagnation: 0.00218.
50: Relative: 2.00321; RMS: 0.00296; Stagnation: 0.00148.
60: Relative: 2.00888; RMS: 0.00219; Stagnation: 0.00109.
70: Relative: 2.01275; RMS: 0.00171; Stagnation: 0.00085.
80: Relative: 2.01564; RMS: 0.00142; Stagnation: 7e-04.
90: Relative: 2.0179; RMS: 0.00129; Stagnation: 0.00064.
100: Relative: 2.02002; RMS: 0.00297; Stagnation: 0.00147.

Al Cooper
February 7, 2011 3:07 pm

Thank you Jeff.

February 7, 2011 3:11 pm

ge0050, from what I remember, I think it will converge after 141 iterations. 😉

Kate7
February 7, 2011 3:14 pm

http://www.youtube.com/user/TheAtheistAntidote#p/search/0/WftNsz1Nhz8
This expresses my thoughts on the boys at RC

February 7, 2011 3:15 pm

RE GeorgeGR
The "[edit for insulting remarks]" can be seen in full cross-posted at Bishop Hill, where I cross-post whenever posting at RC. It's as telling to see what is left out as it is to read the RC replies. For convenience, the edited "insulting remarks" I made were:
"And judging by the comments of RyanO at CA, which we all know you will have read by now, you have some serious explaining to do. Try doing that without RC moderation to hide behind."

pax
February 7, 2011 3:17 pm

ge0050, I found your comment extremely amusing. Good one. Now if anyone can get their granny to do it, that would be hilarious.

ge0050
February 7, 2011 3:19 pm

It converged and is now infilling AVHRR PCs 1, 2 … 124, 125, and the graphs are cycling as this happens. Sort of like watching a Chinese movie: things are happening on the screen, but I haven't got a clue what they mean. Oh, I've now moved on to "RLS using 126 satellite spatial eigenvectors." Don't tell me the ending; I don't want to spoil it.

ge0050
February 7, 2011 3:23 pm

Looks like I got an error near the end:
Error in getStats(grid.orig[301:600, ], makeAnom(all.avhrr), idx[301:600, :
binary operation on non-conformable arrays
>
here is more detail:
> ### Calculate stats for just 1982 – 2006
>
> eigen.stats.57.82 = getStats(grid.orig[301:600, ], eigen.grid[301:600, ], idx[301:600, ])[[1]]
> rls.stats.57.82 = getStats(grid.orig[301:600, ], rls.grid[301:600, ], idx[301:600, ])[[1]]
> S09.stats.57.82 = getStats(grid.orig[301:600, ], S09.grid[301:600, ], idx[301:600, ])[[1]]
> avhrr.stats = getStats(grid.orig[301:600, ], makeAnom(all.avhrr), idx[301:600, ])[[1]]
Error in getStats(grid.orig[301:600, ], makeAnom(all.avhrr), idx[301:600, :
binary operation on non-conformable arrays
>
>
> ###==============================================
> ### EIGEN AND RLS SATELLITE STATS FOR COMPARISON
> ###==============================================
>
> ### Compare EIGEN and RLS vs. satellite
>
> eigen.sat.stats = getStats(makeAnom(all.avhrr), window(recon.eigen[[1]], start = 1982))
> rls.sat.stats = getStats(makeAnom(all.avhrr), window(recon.rls[[1]], start = 1982))
>
> ### Compare S09 unspliced solution vs. satellite
>
> S09.sat.stats = getStats(makeAnom(all.avhrr), window(recon.S09[[1]], start = 1982))
>
>
> ################################################################################
> ###################### END VERIFICATION ######################
> ################################################################################
>
>
>

TomRude
February 7, 2011 3:29 pm

[Response:Know what? I have a day job. And those guys know perfectly well I do not read those sites without a good reason to, and telling me I have ‘explaining to do’ doesn’t rise to that level. If they have scientific points to make, they should make them here.–eric]
+++
LOL, that's in keeping with the character exposed… digging deeper!

Hank Hancock
February 7, 2011 3:36 pm

Ryan, I appreciate your respect for how the Journal handled the situation, and I am pleased that they brought in an additional reviewer, as they should have. It is to their credit that they did so. However, bringing Steig onboard as a reviewer is difficult for me to reconcile in the context of my personal experience with clinical peer review.
Putting that concern aside, the behavior of Steig recommending a change in methodology during review under the anonymity of a blind review process, then publicly criticizing the change after publication (and now being exposed), throws more than just his own reputation under the bus. I would hope that the Journal will make inquiry (as any other journal would), as it appears he has abused his position as a reviewer for self-interest.

AJ Abrams
February 7, 2011 3:46 pm

Ryan – As I said over on CA, I rarely post, but I'm making an exception today. I wanted to personally thank you for all your clear, concise work and for your efforts in getting your work published after facing what you did. You have a respect from me that I cannot easily put into words. Thank you so much for your work and today's clearing of the air.

February 7, 2011 3:50 pm

ge0050, depending on the amount of memory you have, you may have run out of memory during the recon.eigen computation. I've got a quad-core with 4 GB of RAM, and it ran just fine. It crashed at a similar spot on my laptop (1 GB, single processor), though.
🙁
Sorry . . . the operations are quite memory intensive.

February 7, 2011 3:52 pm

At http://noconsensus.wordpress.com/2010/12/01/doing-it-ourselves/ JeffID posted the following:
“The review process unfortunately took longer than expected, primarily due to one reviewer in particular. The total number of pages dedicated by just that reviewer alone and our subsequent responses – was 88 single-spaced pages, or more than 10 times the length of the paper.”
Was that Reviewer A, aka Steig?

February 7, 2011 3:58 pm

ge0050 says:
February 7, 2011 at 3:23 pm
Looks like I got an error near the end:
========================================================
Was the R download made for use by Win 7 64-bit?

February 7, 2011 4:01 pm

ThinkingScientist: Yes.

Slabadang
February 7, 2011 4:22 pm

Well????
Doesn't it feel comfortable and safe that these are the guys the world's leaders have as consultants and advisors?
"The world's leading climate scientists" . . . makes you wonder what the bad ones are like. And I really wonder how our governments are so skillful in picking out just the rotten eggs from the scientific basket.
Is there a secret code within science that all the no-gooders get the political assignments and the good ones have market value?
Jeff, you really exposed the RC gang for what they are, and I thank you for that. No wonder RC's members are hiding in their bunker, and I can't feel sorry for them as the CO2 level increases with every hatch they close.

February 7, 2011 4:23 pm

ge0050
At the console, before you run, type Sys.info()
http://stat.ethz.ch/R-manual/R-patched/library/base/html/Sys.info.html
Also sessionInfo()
http://stat.ethz.ch/R-manual/R-patched/library/utils/html/sessionInfo.html
Then run the program.
If it fails, type traceback()
C&P all of the previous and send to Ryan.
That should give enough detail for Ryan to track down the issue.

John Day
February 7, 2011 4:24 pm

Sexton
> Was the R download made for use by Win 7 64-bit?
It ran OK on my 32-bit Win/XP R installation (with 3 GB RAM).

Frank K.
February 7, 2011 4:30 pm

Comment: [Response:Know what? I have a day job. And those guys know perfectly well I do not read those sites without a good reason to, and telling me I have ‘explaining to do’ doesn’t rise to that level. If they have scientific points to make, they should make them here.–eric]
Translation: [Whaaaaaaaaaaaah. Mommeeeeeeee!! Ryan and Jeff are being MEAN to me. Make them stop! Whaaaaaaaaaaah!]