
Gavin Schmidt, Michael Mann, and Scott Rutherford have written a comment letter to the Annals of Applied Statistics responding to the Hockey Stick-busting McShane and Wyner paper covered on WUWT in August.
It’s quite something. Here’s the M&W graph:

It only took reading the first paragraph of the Team paper for me to get cheesed off. Emphasis mine:
McShane and Wyner (2010) (henceforth “MW”) analyze a dataset of “proxy” climate records previously used by Mann et al (2008) (henceforth “M08”) to attempt to assess their utility in reconstructing past temperatures. MW introduce new methods in their analysis, which is welcome. However, the absence of both proper data quality control and appropriate “pseudoproxy” tests to assess the performance of their methods invalidate their main conclusions.
Why am I cheesed off?
The sheer arrogance of claiming improper “data quality control” when Mann himself has issues with his own papers, such as incorrect lat/lon values of proxy samples, upside-down Tiljander sediment proxies, and truncated/switched data, is mind boggling. It’s doubly mind boggling when these errors are well known to thousands of people and Mann has done nothing to correct them, yet he speaks of data quality control issues in rebuttal. And Schmidt defends these sorts of things on RC. It’s as if the Team never read the McShane and Wyner paper, because MW clearly said this about data issues:
We are not interested at this stage in engaging the issues of data quality. To wit, henceforth and for the remainder of the paper, we work entirely with the data from Mann et al. (2008)
So MW used Mann’s own data, made it clear that their paper was about methodology with data, and not the data itself, and now the Team is complaining about data quality control?
The Team egos involved must be so large that the highway department has to put out orange road cones ahead of these guys when they travel. And they wonder why people make cartoons about them:

They go on to whine about the MWP being “inflated”.
MW’s inclusion of the additional poor quality proxies has a material affect on the reconstructions, inflating the level of peak apparent Medieval warmth, particularly in their featured “OLS PC10”
Well, here’s the thing, gents: you don’t KNOW what the temperature was during the MWP. There are no absolute measurements of it, only reconstructions from proxies, and the Team’s opinion on what the temperature may have been is based on assumptions, not actual measurement. You can’t set yourself up as an authority on whether it was inflated or not without knowing what the temperature actually was. They also rail about “poor quality proxies” (their own) used in MW. Like these?
Half the Hockey Stick graphs depend on bristlecone pine temperature proxies, whose worthlessness has already been exposed. They were kept because the other HS graphs, which depend on Briffa’s Yamal larch treering series, could not be disproved. We now find that Briffa calibrated centuries of temperature records on the strength of 12 trees and one rogue outlier in particular. Such a small sample is scandalous; the non-release of this information for 9 years is scandalous; the use of this undisclosed data as crucial evidence for several more official HS graphs is scandalous. And not properly comparing treering evidence with local thermometers is the mother of all scandals.
Read the entire Team response here, comments welcome.
http://pubs.giss.nasa.gov/docs/notyet/inpress_Schmidt_etal_2.pdf
Backup location in case it falls down a rabbit hole: inpress_Schmidt_etal_2
For balance, the McShane and Wyner paper is available here: http://wattsupwiththat.files.wordpress.com/2010/08/mcshane-and-wyner-2010.pdf
======================
h/t to poptech
======================
UPDATE: Here are some other views:
Jeff Id, The Air Vent:
http://noconsensus.wordpress.com/2010/09/23/ostriches/
Luboš Motl, The Reference Frame:
http://motls.blogspot.com/2010/09/schmidt-mann-rutherford-just-clueless.html


Stefan,
That’s exactly what I wanted to say. In warmist circles, the mere fact of publishing a reply equates to “destroying”, “disproving”, “rebutting”. The content does not matter.
DennisA says:
November 29, 2009 at 4:56 pm
http://www.eastangliaemails.com/emails.php?eid=423&filename=1092167224.txt
Michael E. Mann wrote:
Dear Phil and Gabi,
I’ve attached a cleaned-up and commented version of the matlab code that I wrote for doing the Mann and Jones (2003) composites. I did this knowing that Phil and I are likely to have to respond to more crap criticisms from the idiots in the near future, so best to clean up the code and provide to some of my close colleagues in case they want to test it, etc. Please feel free to use this code for your own internal purposes, but don’t pass it along where it may get into the hands of the wrong people.
Surface Temperature Reconstructions using Terrestrial Borehole Data, Journal of Geophysical Research, 108 (D7), 4203, doi: 10.1029/2002JD002532, 2003.
Well, they are right on one thing: the data quality is lousy. Their own data was used, and we all know that has lousy quality. I am surprised they are admitting that their own work shows… “the absence of both proper data quality control and appropriate ‘pseudoproxy’ tests”…
If this is turning into a debate about which datasets were excluded from Mann’s hockey stick, and he is reluctant to share them, isn’t it then possible to make a computer program that searches through all combinations of datasets to find the ones he must have used, and so work out what data he used by reverse-engineering?
I know there are thousands of datasets, but the way I understand it, there are not that many that were excluded.
Please tell me if I don’t understand this correctly, since I’m not an academic or a scientist.
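A minimal sketch of the brute-force idea floated in the comment above, purely for illustration: the candidate series, the “target” reconstruction, and the secret subset here are all made up, and the point the code makes is that exhaustive subset search is only feasible when the pool of possibly excluded series is small.

```python
import numpy as np
from itertools import combinations

# Hypothetical illustration only: invent some candidate series, build a
# "target" reconstruction from a secret subset of them, then search every
# subset for the one whose mean best matches that target.
rng = np.random.default_rng(1)
n_years, n_candidates = 120, 12
candidates = rng.normal(size=(n_candidates, n_years))
secret_subset = (0, 3, 7)                              # the series "actually used"
target = candidates[list(secret_subset)].mean(axis=0)

def mismatch(subset):
    """Sum of squared differences between a subset's mean and the target."""
    return np.sum((candidates[list(subset)].mean(axis=0) - target) ** 2)

all_subsets = (c for r in range(1, n_candidates + 1)
               for c in combinations(range(n_candidates), r))
print("best-matching subset:", min(all_subsets, key=mismatch))   # recovers (0, 3, 7)
# With thousands of series the 2**n possible subsets make brute force hopeless,
# which is why the idea only helps if the excluded set is known to be small.
```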
Martin Tingley, a statistician who actually deals with paleo reconstruction problems at the NCAR IMAGE program, has a rather devastating response to MW at his website:
http://www.people.fas.harvard.edu/~tingley/Blakeley_Discussion_Tingley_Submitted.pdf
He makes many of the same methodological points made by Schmidt et al., but goes on to investigate the performance of the Lasso on AR(1) random pseudoproxies. The results are not pretty. Figure 2 in particular is rather devastating to the MW case.
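For readers unfamiliar with the term, a “pseudoproxy” test builds synthetic proxies from a known, simulated temperature history and then checks whether a given method can recover that history outside the calibration window. Here is a minimal Python sketch of the idea (the series lengths, noise levels, and AR(1) coefficients are invented; this is not Tingley’s actual experiment):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Toy pseudoproxy experiment: build a known "true" temperature series, derive
# noisy pseudoproxies from it by adding AR(1) noise, calibrate a Lasso on the
# recent part, and check how well it recovers the truth over the held-out past.
rng = np.random.default_rng(42)
n_years, n_proxies, phi, snr = 1000, 60, 0.4, 0.5

def ar1(n, phi, rng):
    """AR(1) noise: x[t] = phi * x[t-1] + white noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

truth = ar1(n_years, 0.7, rng)                 # simulated "true" temperature
proxies = np.column_stack(
    [snr * truth + ar1(n_years, phi, rng) for _ in range(n_proxies)]
)

calib = slice(n_years - 150, n_years)          # recent "instrumental" period
recon = slice(0, n_years - 150)                # earlier period to reconstruct

model = LassoCV(cv=5).fit(proxies[calib], truth[calib])
print("calibration R^2   :", round(model.score(proxies[calib], truth[calib]), 3))
print("reconstruction R^2:", round(model.score(proxies[recon], truth[recon]), 3))
```

The interesting number is the drop from calibration skill to reconstruction skill; as I read it, Tingley’s point is that the Lasso variants in MW fare badly on exactly this kind of test.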
Looks like the Climate Cabal’s back in action trying to salvage their careers and prevent the sudden shutoff of Climate Ca$h to their respective institutions. Remember, folks, it’s all about the Climate Ca$h and partying (at taxpayer expense) in Bali and Cancun…
Re: “MW’s inclusion of the additional poor quality proxies has a material affect on the reconstructions…”
If poor quality proxies affect the reconstructions, then the result will be a material effect, unless we are imputing a role of emotional arousal in the erections — sorry, reconstructions.
More significantly, modeling of ring growth must take into account many factors including, as E.M. Smith has noted above (for the American Pacific Northwest), salmon returns and bear shit.
Among those other factors, rising atmospheric carbon dioxide concentration, supposedly the cause of rising temperatures, has been shown in many studies to stimulate tree growth. It is therefore remarkable that bristlecone tree ring widths, which Mann et al. used as temperature proxies, fail to reflect the rise in the instrumental temperature record since 1980, a period in which atmospheric carbon dioxide concentration rose more rapidly than at any time since the start of the industrial revolution.
But perhaps rising temperature and carbon dioxide concentration have antagonistic effects on tree ring widths. So it would appear from a 2005 study of the impact of Europe’s hottest summer on record (2003) on tree growth. High temperatures were associated with low precipitation, and tree growth was severely inhibited:
http://treephys.oxfordjournals.org/content/25/6/641.full.pdf
So even if one ignores the vagaries of the Pacific salmon or the toilet behavior of Ursus americanus, is there any serious reason to believe that tree ring widths are of any value as a long-term proxy for temperature?
I wonder if the Team have the wit to re-arrange these words to form a well known phrase:
Own hoisted their on petard.
Pure, unmitigated gall !
If I’m understanding The Team’s defenders in this thread so far, they’re saying The Team isn’t invalidating its own data, but that MW used data The Team never used. Well, where did MW get the idea it was using The Team’s data, and how did they get faulty data to use?
It’s sort of like Michael Mann saying he has a number in mind, asking you to pick it, and whatever number you choose he says, “Nope; that’s not it.”
This is science?
“Crispin in Waterloo,
If the method can’t predict reliably temperatures within a known data set, how can it accurately predict temperatures outside the set?”
I too am mystified as to how many responses have nothing to do with the methodology at all.
Ron Cram says:
September 23, 2010 at 5:35 am
“glacierman, SMR is not trashing Mann’s data. They are saying MW wrongly included trees which Mann had excluded. I don’t know if Mann is guilty of cherry picking and MW were calling him on it, or if MW made an error. But let’s be clear, SMR is saying the data used in MW is not the same data.”
All this massaging of data is worrisome. I’ve been a scientist for a lot of years, and what I’ve learned is that solid theories stand up to a lot of stuff. Good data, noisy data, blind tests, etc. all drive you ever closer to proving the theory. When I’ve had to tweak, massage, hope, guess, and include or exclude data based on ad hoc reasoning, the theory fell apart in the end. All this wrangling over the proxy data smells to me like no trend could ever be proved. Every time you look, there is some reason why or why not the thing is turning out the way you hoped. Real evidence doesn’t work this way.
It appears that MW just downloaded all the data that was used as input and didn’t apply any of the data filtering techniques. IOW, they didn’t read and understand the original paper and the SI.
REPLY: Oh please John, now you are just speculating BS. – Anthony
UpNorthOutWest says:
September 23, 2010 at 10:53 am
Well, where did MW get the idea it was using The Team’s data, and how did they get faulty data to use?
It’s sort of like Michael Mann saying he has a number in mind, asking you to pick it, and whatever number you choose he says, “Nope; that’s not it.”
This is science?”
…
This is the question I guess. Is the mistake at M&W’s end, or at Mann’s end for not properly laying out methods and labeling data?
The answer will come soon enough.
Rattus Norvegicus says:
September 23, 2010 at 11:40 am
“It appears that MW just downloaded all the data that was used as input and didn’t apply any of the data filtering techniques.”
Data filtering techniques? Would they be the ones that show hockey sticks, then?
@ur momisugly September 23, 2010 at 5:41 am
Owen says:
“This fuss will be moot in 20 years when average global temps have continued on their upward trajectory, rising clearly above the MWP.”
Will we be comparing Mann’s temps for the MWP that he “ameliorated” in order to build the hockey stick? If so, then by jove, I think you are right! In fact, you are right, right now!
Like the techniques described in the comment by SMR. You did read it, didn’t you?
As the Landscheidt Grand Solar Minimum unfolds and the climate gets colder and colder, grinding away toward the 2030 secular bottom, the Hockey Stick will become irrelevant and a distant memory.
The only question remaining is: will the natural cold from planetary mechanics overtake the fictional warming fast enough to deflect the best intentions of politicians to fix the non-problem of AGW by creating another fractional reserve banking scam (Cap and Tax) like the Federal Reserve System?
As a genuine scientist and engineer, I would never want to sound like I am making an ad hominem attack on Mann and his buddies, but there comes a point when one surely realises that credibility is completely lost…
For a real man (or woman), there are various options, including:
1) resign in a blaze of publicity (the equivalent of taking one’s ball home) and storm off in a mega-sulk
2) keep quiet, sneak off, rework your data, and hopefully blame someone else for your mistake when/if it’s ‘republished’
3) stand up, proud but humble – ADMIT mistakes and errors, and accept that one is capable of learning and indeed has learnt from them. A real man/scientist would graciously THANK those who pointed out his errors, for that is the nature of the beast.
I know which I would consider the most honourable, and I am sure that in the distant future, when history recalls these events, many so-called experts and ‘climate scientists’ will be remembered for the fact that ‘they were not real men or indeed real scientists’…….
just my opinion…..
E.M.Smith, Anthony: September 23, 2010 at 7:01 am
Thanks for the reply, fellows. I really did think I had slipped by, never having to get into the intricacies of the tree ring industry and statistics, but I just had to get a jab in above. Anthony, I’ll check out that ref, for I’m afraid the zombies have risen and now I must learn.
EM, I like the one on “bear poo”, that’s a classic I’ll remember forever, but sadly it’s very real. You’re right, we have no foggy idea what was happening as those trees grew, not really. I now picture a bear 1500 years ago squatting by tree TAD061 for a big one, and 1500 years later Mann sitting in his office saying to Phil on the phone… “Phil, in this tree we show some real warmth!”. Now tell me all things are not interconnected!
You’re a programmer, have you scanned all of that R code? I read through most of it, but being a proper programmer, it seems to be missing about six detailed comments PER LINE for me to understand what they are actually doing on the statistics side. When I see a line raising anomalies to the fourth power I immediately think, “heavily weighting the outliers”. I don’t know if that ends up being the effect, but it will take someone who knows R much better than I do to decipher it.
Have you ever looked into the different “modes” of the averages that statistics is built on? I think that is the proper term. What I mean is that if you want an average deviation, there are many you can choose from. Given avg = ((1/n) * SUM_i(|V_i − V_mean|^x))^(1/x), you have a spectrum of “averages”. If x = 1 you have the ordinary mean absolute deviation. If x = 2 you have the sum-of-squares version, which weights outlier values. If x = 0.5 you have a sum of square roots, which de-weights the outliers. If x = 4 you give a huge weight to the outliers, basically ignoring any values near the mean, that is, ignoring normal values. The value of x then becomes the base on which deeper statistical functions such as “standard” deviation, regressions, etc. are built. Which deviation is “standard”? (I know, I know, from history in math.)
It is things like this that make all of this statistical analysis a bit suspicious to me. Which “mode” is correct? Squared, or mode two, seems perfect for most applications, especially in industrial analysis: you want to detect outliers or “bad” things occurring in a process. But do you always have to use squared? Do you add weight for a freak warm week, or do you de-weight such rare events? Who is it that decides? (And I really don’t know if R can handle this type of overlying parameter, even though I have written a script-like language close to R in its functionality, with self-recursive matrices, vectors, lists with complex numbers, etc., really a graphical calculator with all scientific units embedded.) So I know the code at that level, but not on the proper statistics level.
You can even run the entire analysis five times using x = 0.5, 0.75, 1, 1.5, and 2, and then plot all the outputs under these “modes”, which tells you a whole lot more than one specific one-dimensional view. Statisticians might say this is meaningless, but to me it’s core. You look at very noisy data with your eyes and say to yourself it is level. In modes greater than one you get an increasing slope; in modes less than one you get a slight downward slope. Most of the data visually sloped downward, but the outliers, when weighted, made the overall regression slope up!
Anything on that?
Well, guess I’ll have to delve into all of this tree ring area after all.
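To make the “modes” idea in the comment above concrete, here is a minimal Python sketch (the function name and the test series are invented for illustration): the same deviation statistic is computed with a range of exponents, and a single spike in otherwise level noise dominates it more and more as the exponent grows.

```python
import numpy as np

def power_deviation(values, x):
    """Generalized deviation: ((1/n) * sum(|v - mean|**x)) ** (1/x).

    x = 1  -> mean absolute deviation
    x = 2  -> root-mean-square deviation (the familiar "standard" choice)
    x > 2  -> outliers dominate the statistic
    x < 1  -> outliers are down-weighted
    """
    v = np.asarray(values, dtype=float)
    dev = np.abs(v - v.mean())
    return np.mean(dev ** x) ** (1.0 / x)

# Level, noisy series with one large spike: the statistic swells as x grows.
rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, size=200)
series[100] += 8.0

for x in (0.5, 0.75, 1.0, 1.5, 2.0, 4.0):
    print(f"x = {x:<4} deviation = {power_deviation(series, x):.3f}")
```

Whether weighting outliers up (x > 2) or down (x < 1) is appropriate depends entirely on the question being asked, which is exactly the commenter’s point.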
Crispin in Waterloo @ur momisugly 8:20.
Extremely well put and argued. That is exactly what it is about.
I have the MW response. Just one sentence:
“The claim that improper ‘data quality control’ was used is bizarre.”
SteveM must be loving this. Cheers!
Have not read the other comments yet, but this article had me howling in disbelief. Galling hypocrites! I wish someone could jump in a Tardis and show The Team last year what they have written this year. Their disbelief would not be greater than mine, except that, in my case, what else does one expect from this crew? They have zero integrity!
I haven’t followed the whole HS issue very closely, so I have this question:
Even now, does nobody except the hockey team know exactly which tree ring dataset was used in the Mann (08) paper?
anna v says: “I will take the risk of sounding elitist…”
Anyone who has ever read your physics-based comments would never classify you as elitist.
Which brings me to an off-topic discussion. The following post at RealClimate on the effects of Downward Longwave Radiation on the oceans always struck me as odd, but I can’t put my finger on why:
http://www.realclimate.org/index.php/archives/2006/09/why-greenhouse-gases-heat-the-ocean/
Comments?