R expert replicates McShane and Wyner hockey stick analysis

Unlike what we’ve seen previously in climate science, where it takes years of complaints, demands, taunts, FOIs and other assorted embarrassments to finally pry code and data out of scientists, McShane and Wyner made all their data and code available up front. That allowed the R coder who blogs as “mind of a Markov chain” to replicate portions of the M&W work.

They write:

There are a bunch of “hockey sticks” that calculate past global temperatures through the use of proxies when instrumental data is absent.

There is a new one out there by McShane and Wyner (2010) that’s creating quite a stir in the blogosphere (here, here, here, here). The main takeaway being that the uncertainty is too great for the proxies to be any good.

Here’s an output from the replication:

Replicated McShane and Wyner Figure 18

They continue:

I think this is pretty much identical to Fig. 18b in the paper. One thing to notice is that I smoothed over the estimates and the 95% intervals. M&W2010 weirdly draws over all instances of the simulations (notice the wide grey intervals). Since all the realizations of the model are overlapping each other, the graph effectively shows the maximum and minimum of the 95% interval. Where my interval will probably be interpreted as about +/- 0.4 on each side, M&W2010’s paper could be interpreted as +/- 0.5 degrees. This is a large difference especially for a paper that espouses large uncertainties in the proxy data. OTOH, Mann et al. (2008) I think produces mean uncertainty bounds which might be confusing depending on one’s perspective.

Apparently there is another paper (w/ code and data) coming out soon that uses Bayesian hierarchical models that I might recreate as well (Bo Li, Douglas W. Nychka and Caspar M. Ammann. 2010. The Value of Multi-proxy Reconstruction of Past Climate). More fun to chew on.
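
To see what the smoothing point in the quote above amounts to in code, here is a minimal R sketch using made-up posterior draws (not the replication’s actual code, data or output). Summarising the simulated paths by smoothed pointwise 2.5% and 97.5% quantiles gives a tighter band than overplotting every realization, which shades out toward the envelope of all the individual intervals:

set.seed(42)
# toy posterior: 1,000 simulated temperature paths over 1850-2000 (assumed shape)
years <- 1850:2000
paths <- replicate(1000,
  0.005 * (years - 1850) + arima.sim(list(ar = 0.7), n = length(years), sd = 0.15))

# pointwise summary at each year: mean and 2.5%/97.5% quantiles
post.mean <- rowMeans(paths)
lo <- apply(paths, 1, quantile, probs = 0.025)
hi <- apply(paths, 1, quantile, probs = 0.975)

# smooth the summaries rather than drawing every path
sm <- function(y) predict(loess(y ~ years, span = 0.2))
plot(years, sm(post.mean), type = "l", ylim = range(paths),
     xlab = "Year", ylab = "Temperature anomaly (toy units)")
lines(years, sm(lo), lty = 2)
lines(years, sm(hi), lty = 2)

# overplotting many realizations instead shades out toward the min/max envelope,
# which reads as a wider band than the pointwise 95% interval
matlines(years, paths[, 1:100], col = rgb(0, 0, 0, 0.03), lty = 1)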

Read the entire essay here

h/t to WUWT reader Uppyn

PJB
August 23, 2010 5:53 am

Uncertainty issues…..
This involves the use of “seems likely”, “is highly possible” and the litany of other hedges and suppositions that the IPCC et al use to find “significance” in the random noise of their peer-reviewed pap.
At least if the CAGW crowd complain about these statistical methods of analysis, they will be going up against their own models as well as their own results.
About time.

Slabadang
August 23, 2010 6:10 am

How’s Briffa these days?

Phil.
August 23, 2010 6:18 am

The author has added an update which you may want to add as well:
(Update: the way I calculate the projection is flawed, hopefully I’ll have a post on this later).

Orkneygal
August 23, 2010 6:19 am

PJB-
“Uncertainty” has a very specific meaning in Bayesian-based methods.
Basically, it is a statement about the range and shape of the a priori distribution.
The drivel about “significance” used by the IPCC has nothing to do with Bayesian, Frequentist or Classical statistical methods.
Remember that the authors of the paper are proper statisticians who appear to use statistical terms consistently and correctly, unlike the IPCC in AR4.
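
A toy numerical illustration of that distributional point (an assumed conjugate normal example in R, not anything from M&W2010): the uncertainty starts as the prior and is updated into a posterior as data arrive, and a 95% interval is simply a summary of that distribution.

set.seed(1)
# conjugate normal-normal toy: unknown mean, known observation sd (all values assumed)
prior.mean <- 0
prior.sd   <- 1                             # the a priori distribution
sigma <- 0.5                                # known observation sd
y <- rnorm(20, mean = 0.3, sd = sigma)      # simulated data

# standard conjugate update for the mean
post.prec <- 1 / prior.sd^2 + length(y) / sigma^2
post.mean <- (prior.mean / prior.sd^2 + sum(y) / sigma^2) / post.prec
post.sd   <- sqrt(1 / post.prec)

# 95% intervals: prior versus posterior
qnorm(c(0.025, 0.975), prior.mean, prior.sd)
qnorm(c(0.025, 0.975), post.mean, post.sd)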

PJB
August 23, 2010 6:51 am

Orkneygal
Indeed it is. That is the greater problem. The IPCC mix terms and direct their rhetoric at policy-makers who have little or no real scientific background that would enable them to differentiate between statistical significance and hysterical histrionics.
Forcing the IPCC to provide concrete and specific numbers and effects would be a start to eliminating the innuendo and influence. Either way, we must constantly bash away at their methods, such as they are.

Skeptical Statistician
August 23, 2010 7:43 am

Slabadang says:
How’s Briffa these days?
Now THAT’S comedy. We don’t know where Keith is, and it’s a travesty that we don’t.

PhilJourdan
August 23, 2010 7:44 am

Very Interesting. And refreshing – to see the way Science is supposed to work.

Editor
August 23, 2010 8:10 am

Slabadang says:
August 23, 2010 at 6:10 am
> How’s Briffa these days?
Still sick with a kidney issue AFAIK. I don’t recall seeing anything from him this year.

NK
August 23, 2010 8:48 am

This really is refreshing; a published article with code/data included. Other statisticians attempt to replicate in order to validate/invalidate the article. Wow, real science. The result: Mann’s Hockey Stick has now been invalidated by two statisticians as an unreliable model, because of corrupt/incomplete data and unreliable methods. What a surprise.

John Whitman
August 23, 2010 9:20 am

I think the below link is the forthcoming paper mentioned by “mind of a Markov chain” in the last paragraph of his essay. It is “Bo Li, Douglas W. Nychka and Caspar M. Ammann. 2010. The Value of Multi-proxy Reconstruction of Past Climate”
http://www.image.ucar.edu/~nychka/manuscripts/JASALiPaleo.pdf
John
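
For anyone wondering what a Bayesian hierarchical reconstruction looks like in code, below is a minimal rjags sketch of the general idea: a latent temperature process observed noisily by a proxy over the whole period and by instruments over only part of it. This is a toy formulation with simulated data and assumed priors, not the Li, Nychka and Ammann model or the code from the linked post.

library(rjags)
set.seed(123)

n <- 100
temp  <- as.numeric(arima.sim(list(ar = 0.8), n, sd = 0.1))  # toy "true" temperature
proxy <- 1.5 * temp + rnorm(n, 0, 0.3)                       # proxy observed with noise
inst  <- temp + rnorm(n, 0, 0.05)                            # instrumental record
inst[1:60] <- NA                                             # no instruments early on; JAGS imputes them

model_string <- "
model {
  for (t in 1:n) {
    proxy[t] ~ dnorm(beta0 + beta1 * theta[t], tau.p)  # proxy level
    inst[t]  ~ dnorm(theta[t], tau.i)                  # instrumental level
  }
  theta[1] ~ dnorm(0, 1)                               # latent temperature process, AR(1)
  for (t in 2:n) {
    theta[t] ~ dnorm(phi * theta[t-1], tau.t)
  }
  phi   ~ dunif(-1, 1)
  beta0 ~ dnorm(0, 0.01)
  beta1 ~ dnorm(0, 0.01)
  tau.p ~ dgamma(0.1, 0.1)
  tau.i ~ dgamma(0.1, 0.1)
  tau.t ~ dgamma(0.1, 0.1)
}"

jm <- jags.model(textConnection(model_string),
                 data = list(proxy = proxy, inst = inst, n = n),
                 n.chains = 2)
update(jm, 2000)                                             # burn-in
post <- coda.samples(jm, variable.names = "theta", n.iter = 5000)
summary(post)$quantiles[1:5, ]                               # posterior for the early, proxy-only years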

August 23, 2010 9:31 am

IF these proxies were suitable substitutes for thermometers, the divergence since 1982 would be an indictment of the “adjustments” used in the production of the HadCRU temperatures. I guess the trees were a suitable distance from airports, parking lots, etc 😉

Ben
August 23, 2010 9:55 am

Phil. says:
August 23, 2010 at 6:18 am
The author has added an update which you may want to add as well:
(Update: the way I calculate the projection is flawed, hopefully I’ll have a post on this later).

Very good point. Now if only climate scientists would do the same with their faults, we would all have good science in climate studies.

k winterkorn
August 23, 2010 10:20 am

A full appreciation of uncertainty is at the heart of the scientific method.
Robert Heinlein, in “Stranger in a Strange Land”, introduced a great term, “grok”, meaning incorporation of a concept into the mind as fully as one incorporates well digested food into the body, and more.
To grok “uncertainty” in the scientific sense, one must see the role of uncertainty from the elemental level of complexity, as in the fundamental role of uncertainty in quantum mechanics, to uncertainty’s role at the highest levels of complexity of which we are aware, namely human behavior. (Consider whether behavior science could ever predict exactly a human’s behavior—–one cannot know what the physical state of the brain is, without altering the brain, therefore, as with an electron, one cannot know both where the brain is and where a brain is going, to speak loosely: a Psychologic Uncertainty Principle, if you will.)
In “Climate Science”, to use a term that gives the field more credibility than it has recently earned, uncertainty reigns supreme. Weather lies at a high level of complexity, an interaction of many variables, each with a fair degree of uncertainty. The Earth’s oceans and atmosphere, plus the strange changings of solar input, orbital mechanics, and cosmic radiation are a system that more closely approximates a brain’s complexity than an electron bound to a proton. Uncertainties piled upon uncertainties.
A chaotic system, indeed.
Practical mathematics famously stumbles before the “three body problem”, accurate prediction of the motion of three gravitationally interactive bodies. How much more humble “climate science” should be with its ten-body level of complexity and extremely fuzzy measurements of each of its “bodies” (cf. all the threads on surface temps, sea ice extent vs volume, ice cores, tree rings, UHI and airport effects, etc.)
What I grok about weather is that it is talked about much and hard to predict. What I grok about climate is that it is weather integrated over time, chaotic and uncertain. Statements by anyone re weather and climate that are confident and certain all need good statistical review.

RayG
August 23, 2010 11:24 am

@ PJB says:
August 23, 2010 at 6:51 am
Nice alliterations with “statistical significance” and “hysterical histrionics.”

RayG
August 23, 2010 12:04 pm

I just submitted the following post at Andy Revkin’s NYTimes Dot Earth blog. I will be interested in seeing if it makes it through moderation.
“Mr. Revkin, while this is off topic for this particular thread, you don’t seem to have a current one running where this would be ‘on topic,’ so I am asking for your indulgence in posting it here. Tks, RayG
I am sure that, as you and many of your readers are aware, peer review is only a step along the road to establishing whether or not a particular scientific hypothesis is valid. It is not a gold standard that establishes beyond question the validity of a particular hypothesis. The key step in the process, which comes later, is replication. This is the crux of the argument for making the data, computer code, models, methodology and so forth available. This is the way in which the scientific method works.
I trust that you have read some of the reports in the blogosphere surrounding McShane and Wyner’s paper in the Annals of Applied Statistics titled “A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?” which is highlighted at Climate Audit. The following quote from the abstract succinctly summarizes their findings, “In this paper, we assess the reliability of such reconstructions and their statistical significance against various null models. We find that the proxies do not predict temperature significantly better than random series generated independently of temperature.”
I realize that I brought this paper to the attention of DotEarth readers several days ago. My reason for highlighting this paper is that another professional statistician has replicated McShane and Wyner 2010. See the post “Global Temperature Proxy Reconstructions ~ Bayesian extrapolation of warming w/ rjags” which may be found at:
probabilitynotes.wordpress.com/2010/08/22/global-temperature-proxy-reconstructions-bayesian-extrapolation-of-warming-w-rjags/
The obvious conclusion that can be drawn from the replication of McShane and Wyner 2010 is that Mann et al. 1998, Mann et al. 1999 and Mann et al. 2008 have been falsified. Given the role that these papers have been given in the climate change debates, it is past time to take a deep breath and institute a truly independent review of the current state of climate science, starting with the quality and reliability of the source data and going forward. This must include the use of verification, testing and documentation standards, as well as archiving, for all computer models, based on modern standards comparable with those required by the FDA in their review of applications for approval of new pharmaceuticals. Until this is done, schemes such as cap and trade, the EPA designation of CO2 as an environmental hazard, etc., must be placed on hold.”
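
To make the quoted “random series” comparison concrete, here is a heavily simplified R sketch of the idea, using simulated data rather than McShane and Wyner’s actual Lasso-and-holdout procedure: fit on a calibration window, then check whether a real proxy predicts held-out temperatures better than series generated independently of temperature.

set.seed(1)
n <- 150                                   # toy annual series (assumed length)
temp  <- cumsum(rnorm(n, 0, 0.1))          # pretend instrumental temperature
proxy <- temp + rnorm(n, 0, 0.4)           # pretend proxy, genuinely correlated with temp

calib <- 1:100                             # calibration window
hold  <- 101:n                             # holdout window

holdout.rmse <- function(x) {
  fit  <- lm(temp[calib] ~ x[calib])
  pred <- cbind(1, x[hold]) %*% coef(fit)
  sqrt(mean((temp[hold] - pred)^2))
}

rmse.proxy <- holdout.rmse(proxy)

# null models: random AR(1) series generated independently of temperature
rmse.null <- replicate(1000, holdout.rmse(as.numeric(arima.sim(list(ar = 0.9), n))))

# the proxy "beats" the nulls only if its holdout error is unusually small
mean(rmse.null <= rmse.proxy)              # rough empirical p-value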

August 23, 2010 12:32 pm

Thanks for the mention. I’ve fixed the projection.

captainfish
August 23, 2010 12:59 pm

Forgive a layman in global climate research, but as a biologist, these statements don’t make much sense to me:

What does that mean? Why is the difference between +/- 0.4 and +/- 0.5 such a large difference? Seems to me the difference is only +/- 0.1. How is that a large difference? Especially in light of the fact that the M&W paper highlights that there are already large differences as proof that the proxies do not work?
Thanks for any replies.

captainfish
August 23, 2010 1:08 pm

urgh, that post didn’t work out well. The quote I was asking about was this one:
“Where my interval will probably be interpreted as about +/- 0.4 on each side, M&W2010’s paper could be interpreted as +/- 0.5 degrees. This is a large difference especially for a paper that espouses large uncertainties in the proxy data.”

Jeff Alberts
August 23, 2010 6:28 pm

“McShane and Wyne”
Man, you’re really having trouble with Prof Wyner’s name. I’ve seen Wyler here a couple times, and now Wyne… 😉

Pascvaks
August 24, 2010 6:54 am

From the graph, it sure looks like things “aren’t” as hot as we’ve been told. Hummm…

John
August 25, 2010 3:24 am

Any comments on what Richard L Smith has to say here: http://magazine.amstat.org/blog/2010/07/01/congbriefingclim710/
or here: http://deepclimate.org/2010/08/19/mcshane-and-wyner-2010/
I would rather see these things debated openly in an unbiased manner than have the ‘pros’ on one website and the ‘antis’ on another.