New paper makes a hockey sticky wicket of Mann et al 98/99/08

NOTE: This has been running two weeks at the top of WUWT, discussion has slowed, so I’m placing it back in the regular queue.  – Anthony

UPDATES:

Statistician William Briggs weighs in here

Eduardo Zorita weighs in here

Anonymous blogger “Deep Climate” weighs in with what he/she calls a “deeply flawed study” here

After a week of being “preoccupied” Real Climate finally breaks radio silence here. It appears to be a prelude to a dismissal with a “wave of the hand”

Supplementary Info now available: All data and code used in this paper are available at the Annals of Applied Statistics supplementary materials website:

http://www.imstat.org/aoas/supplements/default.htm

=========================================

Sticky Wicket – phrase, meaning: “A difficult situation”.

Oh, my. There is a new and important study on temperature proxy reconstructions (McShane and Wyner 2010), accepted by the Annals of Applied Statistics and slated for publication in the next issue. According to Steve McIntyre, this is one of the “top statistical journals”. The paper is a direct and serious rebuttal to Mann’s proxy reconstructions. It looks watertight because, instead of attacking the proxy data quality issues, the authors assumed the proxy data were accurate for their purpose, then built a Bayesian backcast method. Then, using that proxy data, they demonstrate it fails to reproduce the sharp 20th century uptick.

Now, there’s a new look to the familiar “hockey stick”.

Before:

Multiproxy reconstruction of Northern Hemisphere surface temperature variations over the past millennium (blue), along with 50-year average (black), a measure of the statistical uncertainty associated with the reconstruction (gray), and instrumental surface temperature data for the last 150 years (red), based on the work by Mann et al. (1999). This figure has sometimes been referred to as the hockey stick. Source: IPCC (2001).

After:

FIG 16. Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD and backcasts 998-1849 AD. The cyan region indicates uncertainty due to t, the green region indicates uncertainty due to β, and the gray region indicates total uncertainty.

Not only are the results stunning, but the paper is highly readable, written in a sensible style that most laymen can absorb, even if they don’t understand some of the finer points of Bayesian methods, loess filters, or principal components. Better still, this paper is a confirmation of McIntyre and McKitrick’s work, with a strong nod to Wegman. I highly recommend reading it and distributing this story widely.

Here’s the submitted paper:

A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?

(PDF, 2.5 MB. Backup download available here: McShane and Wyner 2010 )

It states in its abstract:

We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.
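The abstract’s central claim — that random series generated independently of temperature “predict” it about as well as the proxies do — is easy to sketch. Everything below is synthetic stand-in data (not the authors’ data), and plain least squares stands in for the paper’s Lasso; the point is only to show why persistent noise makes a “sophisticated null model”:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a 149-year instrumental period (1850-1998):
# a gentle trend plus slow wiggles plus measurement noise.
n = 149
t = np.arange(n)
temperature = 0.004 * t + 0.3 * np.sin(t / 8.0) + 0.1 * rng.standard_normal(n)

def ar1(n, phi, rng):
    """Generate an AR(1) series: x[i] = phi * x[i-1] + white noise."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

# "Sophisticated null" predictors: highly autocorrelated noise generated
# with no knowledge of the temperature series at all.
fake_proxies = np.column_stack([ar1(n, 0.9, rng) for _ in range(10)])

# Ordinary least squares in place of the paper's Lasso (a simplification
# made here to keep the sketch dependency-free).
X = np.column_stack([np.ones(n), fake_proxies])
beta, *_ = np.linalg.lstsq(X, temperature, rcond=None)
fitted = X @ beta
rss = np.sum((temperature - fitted) ** 2)
tss = np.sum((temperature - temperature.mean()) ** 2)
r2 = 1 - rss / tss
print(f"In-sample R^2 of pure-noise 'proxies': {r2:.2f}")
```

Because both sides of the regression are strongly autocorrelated, the fit is typically well above zero even though the “proxies” contain no temperature information whatsoever — which is exactly the bar a real proxy network has to clear.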

Here are some excerpts from the paper (emphasis in paragraphs mine):

This one shows that M&M hit the mark, because it is independent validation:

In other words, our model performs better when using highly autocorrelated noise rather than proxies to “predict” temperature. The real proxies are less predictive than our “fake” data. While the Lasso generated reconstructions using the proxies are highly statistically significant compared to simple null models, they do not achieve statistical significance against sophisticated null models.

We are not the first to observe this effect. It was shown, in McIntyre and McKitrick (2005a,c), that random sequences with complex local dependence structures can predict temperatures. Their approach has been roundly dismissed in the climate science literature:

To generate “random” noise series, MM05c apply the full autoregressive structure of the real world proxy series. In this way, they in fact train their stochastic engine with significant (if not dominant) low frequency climate signal rather than purely non-climatic noise and its persistence. [Emphasis in original]

Ammann and Wahl (2007)
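The disputed MM05 procedure — estimating the autocorrelation of a real series and using it to drive a “stochastic engine” that spits out random surrogates — can be sketched in a few lines. The series here is hypothetical, and a single AR(1) coefficient stands in for MM05’s full autoregressive structure:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical persistent "proxy" series (not real data): a scaled
# random walk, which carries strong low-frequency structure.
proxy = np.cumsum(rng.standard_normal(500)) * 0.1

# Estimate the lag-1 autocorrelation of the observed series...
x = proxy - proxy.mean()
phi = np.dot(x[:-1], x[1:]) / np.dot(x, x)

# ...and use it to generate "random" surrogates. This is the step Ammann
# and Wahl object to: if the original series carries low-frequency climate
# signal, the surrogates inherit its persistence.
surrogate = np.zeros(len(proxy))
for i in range(1, len(proxy)):
    surrogate[i] = phi * surrogate[i - 1] + rng.standard_normal()

y = surrogate - surrogate.mean()
phi_surr = np.dot(y[:-1], y[1:]) / np.dot(y, y)
print(f"lag-1 autocorrelation: original {phi:.2f}, surrogate {phi_surr:.2f}")
```

The surrogate is pure noise by construction, yet it reproduces the persistence of the original — which is precisely why such surrogates make a demanding null model, and why the two camps disagree about whether that persistence is “signal” or “noise”.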

On the power of the proxy data to actually detect climate change:

This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12.

Footnote 12:

On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.

FIG 15. In-sample Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD.

We plot the in-sample portion of this backcast (1850-1998 AD) in Figure 15. Not surprisingly, the model tracks CRU reasonably well because it is in-sample. However, despite the fact that the backcast is both in-sample and initialized with the high true temperatures from 1999 AD and 2000 AD, it still cannot capture either the high level of or the sharp run-up in temperatures of the 1990s. It is substantially biased low. That the model cannot capture run-up even in-sample does not portend well for its ability to capture similar levels and run-ups if they exist out-of-sample.

Conclusion.

Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.

On the one hand, we conclude unequivocally that the evidence for a “long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run up in temperatures recorded in the 1990s, even in-sample.

As can be seen in Figure 15, our estimate of the run up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.

Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated.

The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.

Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11,12, and 13), produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).

Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate. Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.
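The authors’ point (iv) — strong autocorrelation shrinking the effective sample size — can be made concrete with the standard AR(1) rule of thumb n_eff ≈ n(1 − ρ)/(1 + ρ). This is a textbook illustration, not the paper’s own calculation:

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_sample_size(x):
    """Approximate n_eff = n * (1 - rho) / (1 + rho) for an AR(1)-like
    series, where rho is the lag-1 autocorrelation. A common rule of
    thumb for the number of effectively independent observations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    rho = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return len(x) * (1 - rho) / (1 + rho)

# White noise: every one of the 149 "years" is effectively independent.
white = rng.standard_normal(149)

# A strongly persistent series of the same length, loosely analogous to
# a 149-year instrumental record with heavy autocorrelation.
persistent = np.zeros(149)
for i in range(1, 149):
    persistent[i] = 0.9 * persistent[i - 1] + rng.standard_normal()

n_eff_white = effective_sample_size(white)
n_eff_persistent = effective_sample_size(persistent)
print(f"white noise n_eff ~ {n_eff_white:.0f}")
print(f"persistent series n_eff ~ {n_eff_persistent:.0f}")
```

With ρ around 0.9, a 149-observation calibration period collapses to only a handful of effectively independent data points, which is the heart of the “too small for accurate reconstruction” worry.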

===============================================================

Commenters on WUWT report that Tamino and Romm are deleting comments that even mention this paper on their blogs. Their refusal to even acknowledge it tells you it has squarely hit the target, and the fat lady has sung – loudly.

(h/t to WUWT reader “thechuckr”)

Scott B
August 17, 2010 12:51 pm

BillD says:
August 17, 2010 at 12:23 pm
Are you just speaking in general terms, or are there specific issues you think could have been addressed/avoided with this consultation?

August 17, 2010 12:52 pm

Henry Dave
http://wattsupwiththat.com/2008/06/21/a-window-on-water-vapor-and-planetary-temperature-part-2/
Has a nice chart of different major absorption bands for GHGs.
If you would look carefully at the incoming and outgoing radiation, you would have noticed that only a small corner of earth’s radiation is cut off by the CO2 at 14-15 due to the water vapor overlap. CO2 has reasonably strong absorption at around 2 um (3 peaks), hence they can measure it coming back from the moon. It does make a dent here in the sun’s radiation. A better solar graph will show this more clearly.
CO2 also absorbs strongly between 4 and 5, where both earth and sun radiate. But which of the two radiates stronger here?
I would say, even looking at these stupid graphs, that it is pretty much evens with the cooling and warming.
But I wanted results in W/(0.03%-0.06% CO2/m3/m2/24hours cooling and warming, please ….
You have not proven to me that CO2 is a greenhouse gas….sorry

Ferd
August 17, 2010 12:57 pm

Didn’t Mann recently say his hockey schtick should not have gotten as much play in the IPCC?

Invariant
August 17, 2010 12:57 pm

1. The Oxburgh Report stated “We cannot help remarking that it is very surprising that research in an area that depends so heavily on statistical methods has not been carried out in close collaboration with professional statisticians.”
2. Rasmus Benestad stated “I think neither McKitric or McIntyre (or Michaels) is very strong in statistics – but it does not prevent them from being very loud. Unfortunately, I have the impression that too few scientists have a very good grasp of statistics.” Translation from http://www.forskning.no/artikler/2009/november/235924.
3. Judith Curry stated: “Not sure if you have caught the emerging hoopla about a new hockey stick paper, by leading statisticians [McShane and Wyner], to be published (in press) by a leading statistics journal.” http://www.collide-a-scape.com/2010/08/04/gavins-perspective/#comment-14404
Now I wonder what the response by The Team will be? I think that they risk insulting the larger professional academic statistical community if they continue as before.

R Connelly
August 17, 2010 12:58 pm

An interesting paper. Several inquiries (Wegman, Oxburgh) have noted that climate science should utilize more statisticians in their work. Hopefully, they will embrace the LASSO and not just circle the wagons.
It’s been a tough summer for the Team, what with Smerdon et al, McShane and Wyner 2010, and McKitrick, McIntyre and Herman (2010) being published.

Jeremy
August 17, 2010 1:02 pm

BillD says:
I occasionally consult with statisticians about the analysis of my scientific data and the collaboration can be quite helpful. However, based on my experience, statisticians analyzing scientific data without the help of a scientist would be more likely to make mistakes than scientists who analyze their own data without the assistance of a professional statistician. … Even though the M & W paper was submitted to a statistics journal, the editors would have been well advised to include some scientific input.
That’s all well and good. In fact if all this paper does is call out Mann & the team to fully scientifically justify their use of this data for temperature reconstruction to shame the statisticians, that in itself is a huge victory. To my knowledge they have never provided a single paper fully investigating the usefulness of tree rings as proxies for temperature (correct me here if I’m wrong, but I’m quite sure that any argument made in the affirmative has been decidedly weak and not vetted out by the scientific process). It is one of the very first questions ever raised, and if that’s what it takes to shut up the statisticians, fantastic, we should all welcome that kind of scientific debate on reality.

wobble
August 17, 2010 1:06 pm

Using our model, we calculate that there is a 36% posterior probability that 1998 was the warmest year over the past thousand.

Since we don’t have a model for these premises, we cannot say explicitly how low the probabilities should drop; but a reasonable guess is by at least a quarter to a half. That’s based on my subjective assessment of the likelihood that (1) the model is perfect and (2) the data are measured without significant error, and (3) the relationship is stationary.
So the 36% is under ideal conditions, but the authors’ reasonable guess is dropped to 18% – 27% or to 0% – 11% depending on what you think they mean by dropping the probabilities by at least a quarter to a half.
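Both readings of “dropping the probabilities by at least a quarter to a half” work out as stated above; the 36% is the paper’s figure, while the two interpretations are the commenter’s, not the authors’:

```python
# The paper's posterior probability that 1998 was the warmest year
# of the past thousand.
p = 0.36

# Reading 1: reduce the probability BY a quarter-to-half OF ITSELF.
mult = (round(p * (1 - 0.50) * 100), round(p * (1 - 0.25) * 100))

# Reading 2: subtract 25 to 50 percentage points outright, floored at zero.
sub = (round(max(p - 0.50, 0) * 100), round((p - 0.25) * 100))

print(f"fractional reduction: {mult[0]}% - {mult[1]}%")      # 18% - 27%
print(f"percentage-point reduction: {sub[0]}% - {sub[1]}%")  # 0% - 11%
```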

Duke C.
August 17, 2010 1:11 pm

Blake McShane has posted a clarification wrt M&W2010 at his website:
(Aug 16, 2010 at 8:23pm EDT)– Note on “A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?” by Blakeley B. McShane and Abraham J. Wyner:
“The paper has been accepted at the Annals of Applied Statistics and a draft version is posted on the journal’s website in the forthcoming section. The posted draft was submitted for referee and editor comments and is not yet in “final” form. Likewise, some have obtained the code and data which was intended for the referees and editors as part of the review process. This code and data is not yet in final form nor is the documentation complete. The final draft of the paper and the code and data bank will be posted at the journal’s website come publication.”
http://www.blakemcshane.com/
Posted on tips & notes by mistake 🙂

August 17, 2010 1:18 pm

They provide confirmation of what many of us have known. They limited their study to temperature proxies. The ice core CO2 proxy is much less reliable. The legs of the CAGW myth are being broken. The “true believers” should be preparing for the fall instead of shouting that their platform is sturdy and everyone should join them.

DCA, engineer
August 17, 2010 1:24 pm

Has anyone seen this comment on Deltoid and would like to address it?
“The funny thing is that this paper actually replicates Mann et al. 2008 without even noticing it…
To partake in this dirty little secret, see their Figure 14 on page 30: the blue curve is wiggle-identical and practically a photocopy of Mann’s corresponding EIV NH land curve. As it should be. The higher (green) curve they canonize and which is shown above is the result of an error: they calibrate their proxies against hemispherical mean temperature, which is a poor measure of forced variability. The instrumental PC1 which the blue curve is based on, is a much better measure; its EOF contains the polar amplification effect. What it means is that high-latitude proxies, in order to be made representative for global temperatures, should be downweighted. The green curve fails to do this. Thus, high latitudes are overrepresented in this reconstruction, which is why the “shaft” is at such an angle, due to the Earth axis’s changing tilt effect on the latitudinal temperature dependence described in Kaufman et al. 2009.
The authors have no way of detecting such an error as their RMSE goodness-of-fit seems to be also based around the hemispherical average…”

stephen richards
August 17, 2010 1:33 pm

Jaye Bass says:
August 17, 2010 at 10:54 am
Jaye I did read it all but quite clearly misinterpreted the english. I guess it was the experimenter and realising which one you were writing about. The first or the second.
I apologise

Gaylon
August 17, 2010 1:40 pm

BillD says:
August 17, 2010 at 12:23 pm
BillD,
The fact that the ‘Team’ had/has not consulted with statisticians turns out to be the crux of the problem. This way they could do whatever they wanted, and as it turns out, that is exactly what they did: whatever they wanted. Please recall the numerous and hard-fought FOI requests and denials (see Climategate). Why?
The M&M articles, the NAS report and the Wegman Report should long ago have put an end to this dung-pile and the $$billions that have been wasted on it…your tax dollars, my tax dollars. The government’s decision to accept the results of MBH based on the North Report but denounce the method was purely a political move. How could any person do that with a straight face?
The M&W paper deconstructs the MBH et al products on a statistical basis, rendering any conclusions based on their reconstructions null and void; our tax dollars wasted.
The only possible input the “scientists” could possibly have had would be along the lines of, “Huh? Why’d you do that? Really?…so we were wrong all along? Huh, that’s weird. Really?…random noise is a BETTER predictor? Huh, that’s weird.” I am certain you will not be hearing that from the CAGW crowd.
From what I’ve read of their résumés and comments on this blog and others, it is apparent that these guys are “heavy-hitters”, or perhaps more aptly put: above reproach as statistical scientists.
The fact that they are not “climate scientists” bolsters their conclusions by the simple facts that, 1) They did not have preconceived parameters that dictated the outcome and, 2) They did not care about the quality control of the data-sets, they used the original data as-is. How damning is that?
Their goal was to determine IF a signal could be detected statistically from the proxies and IF that signal had the statistical integrity to make meaningful predictions. Turns out it didn’t/doesn’t. End of story…kaput.
It’s been said many times before and in many different ways; I will repeat it for effect:
This destroys the very foundation that the CAGW ‘house-of-cards’ is built upon, IMO.

Evan Jones
Editor
August 17, 2010 1:52 pm

“The funny thing is that this paper actually replicates Mann et al. 2008 without even noticing it…
Oh, okay, in that case the Deltoid will be perfectly happy when the MW curve is substituted for Mann’s. I propose they do so immediately.

Richard S Courtney
August 17, 2010 1:57 pm

DCA, engineer:
At August 17, 2010 at 1:24 pm you ask:
“Has anyone seen this comment on Deltoid and would like to address it?”
I would never have visited that website, but if your quotation of the comment is correct then the comment is mistaken in that it completely fails to understand the M&W analysis.
Of course, Figure 16 of M&W has some similarity to the MBH ‘hockey stick’: the M&W graph was derived from the same proxies as the ‘hockey stick’. And, of course, by adjusting the weightings applied to individual proxies either analysis could be made to have a similar shape to the other.
But so what?
The paper by M&W shows that the errors of the graphs are such that anything (including MWP and the LIA) may exist within those errors, so NEITHER GRAPH TELLS ANYTHING OF USE.
Richard

Green Sand
August 17, 2010 2:07 pm

DCA, engineer says:
August 17, 2010 at 1:24 pm
Has anyone seen this comment on Deltoid and would like to address it?
“The funny thing is that this paper actually replicates Mann et al. 2008 without even noticing it…

Josh has http://www.bishop-hill.net/blog/2010/8/17/josh-31.html

August 17, 2010 2:08 pm

From Deltoid:
“The funny thing is that this paper actually replicates Mann et al. 2008 without even noticing it…
OK then. Lets plug in the data from Mann 2008 and see what we get there.

Philemon
August 17, 2010 2:09 pm

BillD says:
August 17, 2010 at 12:23 pm
“…I have not tried working with statisticians, such as M & W, who work primarily (I assume) with economic and business data, rather than scientific data….”
Instead of making assumptions, you could look at their CV’s.
Blakeley B. McShane
https://www.gsb.stanford.edu/facseminars/events/marketing/documents/cv_McShane.pdf
Ph.D. in Statistics, University of Pennsylvania ,The Wharton School, 2010;
Thesis: Integrating Machine Learning Methods with Hidden Markov Models: A New Approach to Categorical Time Series Analysis with Application to Sleep Data
Abraham J. Wyner
http://statistics.wharton.upenn.edu/documents/cv/Resume-4-5-10.pdf
PhD in Statistics, Stanford University, 1993;
Research Areas: Probabilistic modeling; information theory; entropy; data compression; estimation

Ben Wolf
August 17, 2010 2:21 pm

Jimbo,
The study you link to from the NAS repeatedly states it is confined to the Icelandic shelf. It is not a global temperature reconstruction and tells us nothing about the “Roman Warm Period”.

Stephen Brown
August 17, 2010 2:21 pm

Josh, the well-known cartoonist, has joined the fray.
Hilarious!
http://www.cartoonsbyjosh.com/

Dave Wendt
August 17, 2010 2:23 pm

wobble says:
August 17, 2010 at 1:06 pm
You appear to be responding to a comment of mine above, and I should clarify. I posted a comment response from W.M. Briggs made on his site. In the original, the first paragraph was italicized and is the only part that is a quote from the paper. Italicization was lost in the copy-and-paste, and everything after the first paragraph is Mr. Briggs’ analysis. Your concluding paragraph is a reasonable summary of that analysis, but we shouldn’t assume the authors share Mr. Briggs’ view, although I suspect they probably wouldn’t argue too stridently against it.

August 17, 2010 2:26 pm

Stephen Brown says:
August 17, 2010 at 11:56 am
It has happened just as predicted numerous times above.
“While WattsUpWithThat thinks this paper is so important that he has been running a post on it at the top of his blog for days, he conveniently omits this rather remarkable statement from the authors:
Using our model, we calculate that there is a 36% posterior probability that 1998 was the warmest year over the past thousand. If we consider rolling decades, 1997-2006 is the warmest on record; our model gives an 80% chance that it was the warmest in the past thousand years.
Doh!”
Uhmm, because also from the paper,
“This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature.”
and next paragraph,
“As mentioned earlier, scientists have collected a large body of evidence which suggests that there was a Medieval Warm Period (MWP) at least in portions of the Northern Hemisphere. The MWP is believed to have occurred from c. 800-1300 AD (it was followed by the Little Ice Age). It is widely hoped that multi-proxy models have the power to detect (i) how warm the Medieval Warm Period was, (ii) how sharply temperatures increased during it, and (iii) to compare these two features to the past decade’s high temperatures and sharp run-up. Since our model cannot detect the recent temperature change, detection of dramatic changes hundreds of years ago seems out of the question.”
and from page 41 in the conclusions section,
“On the one hand, we conclude unequivocally that the evidence for a “long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data.”
later from the conclusions, “Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.”
and finally, “Climate scientists have greatly underestimated the uncertainty of proxy-based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11, 12, and 13), produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).”

RobertStephan, I think you missed the point of the statement. Their statement is qualified that they are using the proxy data available. Later, they go on to say the proxy data doesn’t amount to much. Try again.
My response is a copy-and-paste (hence the strike-through) from another response to one of the many making the same illogical argument. Is there some place giving you guys directions to take the authors’ statements out of context? Do you guys really think that’s going to work? I should have started that early in the discussion to see how many of you would make this exact same illogical argument.

Stephen Brown
August 17, 2010 2:28 pm

” Stephen Brown says:
August 17, 2010 at 11:56 am
It has happened just as predicted numerous times above.”
The point that I was trying to make was that Romm picked on just the paragraph that many said that he would, without his considering another word from the rest of the paper.
And so it came to pass!
Might I typify Romm, in the finest English vernacular, as a “plonker”?

August 17, 2010 2:57 pm

Stephen Brown says:
August 17, 2010 at 2:28 pm
lol, OIC, my bad, I had missed your point.

Stephen Brown
August 17, 2010 3:00 pm

My post, made at 2257 on 17/08/2010 at Climateprogress. Will it pass moderation? I think not.
” John Mason says:
August 17, 2010 at 4:37 am
Copy of my post over at Tamino’s blog:
I did try posting on WUWT last night, not something I do often, suggesting that people waited until the paper was published and other specialists in the relevant field had formally responded before arriving at a considered opinion – as per standard academic procedure.
It didn’t make me that popular, although some of the membership did seem to broadly agree.”
So, WUWT not only permits but welcomes dissenting opinions!
Mr. Romm, please afford me the same courtesy.
The McShane and Wyner 2010 paper renders the well-known ‘hockey-stick’ graph beyond the moribund state. The graph, the fallacious data and statistics which were its progenitors have been shown to be incorrect and all conclusions drawn therefrom are equally wrong.
Read the paper carefully. The light at the end of the tunnel is the breaking dawn of scientific and statistical truth.
Sincerely,
Stephen Brown.

DCA, engineer
August 17, 2010 3:06 pm

James Sexton,
I appreciate your comments above. Since you have read the paper and appear to have a good understanding, would it be possible for you to oblige us all with a comment addressing an issue brought up by Eli on this blog and one on deltoid by a poster named Martin Vermeer.
“The funny thing is that this paper actually replicates Mann et al. 2008 without even noticing it…
To partake in this dirty little secret, see their Figure 14 on page 30: the blue curve is wiggle-identical and practically a photocopy of Mann’s corresponding EIV NH land curve. As it should be. The higher (green) curve they canonize and which is shown above is the result of an error: they calibrate their proxies against hemispherical mean temperature, which is a poor measure of forced variability. The instrumental PC1 which the blue curve is based on, is a much better measure; its EOF contains the polar amplification effect. What it means is that high-latitude proxies, in order to be made representative for global temperatures, should be downweighted. The green curve fails to do this. Thus, high latitudes are overrepresented in this reconstruction, which is why the “shaft” is at such an angle, due to the Earth axis’s changing tilt effect on the latitudinal temperature dependence described in Kaufman et al. 2009.”
And also on Eli Rabbets’ blog.
http://rabett.blogspot.com/2010/08/flat-new-puzzler.html
