New paper makes a hockey sticky wicket of Mann et al 98/99/08

NOTE: This has been running two weeks at the top of WUWT, discussion has slowed, so I’m placing it back in the regular queue.  – Anthony

UPDATES:

Statistician William Briggs weighs in here

Eduardo Zorita weighs in here

Anonymous blogger “Deep Climate” weighs in with what he/she calls a “deeply flawed study” here

After a week of being “preoccupied” Real Climate finally breaks radio silence here. It appears to be a prelude to a dismissal with a “wave of the hand”

Supplementary Info now available: All data and code used in this paper are available at the Annals of Applied Statistics supplementary materials website:

http://www.imstat.org/aoas/supplements/default.htm

=========================================

Sticky Wicket – phrase, meaning: “A difficult situation”.

Oh, my. There is a new and important study on temperature proxy reconstructions (McShane and Wyner 2010), submitted to the Annals of Applied Statistics and slated for publication in the next issue. According to Steve McIntyre, this is one of the “top statistical journals”. The paper is a direct and serious rebuttal to the proxy reconstructions of Mann. It seems watertight on the surface because, instead of attacking the proxy data quality issues, the authors assumed the proxy data were accurate for their purpose, then created a Bayesian backcast method. Then, using the proxy data, they demonstrate that it fails to reproduce the sharp 20th century uptick.
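For readers wondering what a Bayesian “backcast” involves, here is a minimal, hypothetical sketch of the general idea: calibrate a regression of temperature on a proxy over the instrumental period, then push pre-instrumental proxy values through posterior draws of the regression coefficients to obtain a backcast with uncertainty bands. This is illustrative only — the synthetic data, the single proxy, and the known noise variance are all simplifying assumptions of mine, not the paper’s actual model, which handles many proxies and a full Bayesian treatment of all parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy calibration period: temperature (y) and one proxy (x), 149 "years"
n_cal = 149
x_cal = rng.normal(size=n_cal)
y_cal = 0.5 * x_cal + 0.3 * rng.normal(size=n_cal)

# Conjugate Bayesian linear regression with a vague normal prior on beta;
# the noise variance is assumed known here purely for simplicity
sigma2 = 0.09
X = np.column_stack([np.ones(n_cal), x_cal])
prior_prec = 1e-6 * np.eye(2)
post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma2)
post_mean = post_cov @ (X.T @ y_cal / sigma2)

# Backcast: draw betas from the posterior and push a stand-in
# pre-instrumental proxy record (50 "years") through each draw
x_old = rng.normal(size=50)
X_old = np.column_stack([np.ones(50), x_old])
draws = rng.multivariate_normal(post_mean, post_cov, size=1000)
backcasts = draws @ X_old.T + np.sqrt(sigma2) * rng.normal(size=(1000, 50))

# Pointwise 95% band: the spread combines coefficient and noise uncertainty
lo, hi = np.percentile(backcasts, [2.5, 97.5], axis=0)
print("mean 95% interval width:", float(np.mean(hi - lo)))
```

The gray "total uncertainty" region in the paper's Figure 16 is the analogue of the band between `lo` and `hi` here: any curve drawn inside it is consistent with the model.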

Now, there’s a new look to the familiar “hockey stick”.

Before:

Multiproxy reconstruction of Northern Hemisphere surface temperature variations over the past millennium (blue), along with 50-year average (black), a measure of the statistical uncertainty associated with the reconstruction (gray), and instrumental surface temperature data for the last 150 years (red), based on the work by Mann et al. (1999). This figure has sometimes been referred to as the hockey stick. Source: IPCC (2001).

After:

FIG 16. Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD and backcasts 998-1849 AD. The cyan region indicates uncertainty due to t, the green region indicates uncertainty due to β, and the gray region indicates total uncertainty.

Not only are the results stunning, but the paper is highly readable, written in a sensible style that most laymen can absorb, even if they don’t understand some of the finer points of Bayesian methods, loess filters, or principal components. Not only that, the paper is a confirmation of McIntyre and McKitrick’s work, with a strong nod to Wegman. I highly recommend reading it and distributing this story widely.

Here’s the submitted paper:

A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?

(PDF, 2.5 MB. Backup download available here: McShane and Wyner 2010 )

It states in its abstract:

We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.

Here are some excerpts from the paper (emphasis in paragraphs mine):

This one shows that M&M hit the mark, because it is independent validation:

In other words, our model performs better when using highly autocorrelated noise rather than proxies to “predict” temperature. The real proxies are less predictive than our “fake” data. While the Lasso generated reconstructions using the proxies are highly statistically significant compared to simple null models, they do not achieve statistical significance against sophisticated null models.

We are not the first to observe this effect. It was shown, in McIntyre and McKitrick (2005a,c), that random sequences with complex local dependence structures can predict temperatures. Their approach has been roundly dismissed in the climate science literature:

To generate ”random” noise series, MM05c apply the full autoregressive structure of the real world proxy series. In this way, they in fact train their stochastic engine with significant (if not dominant) low frequency climate signal rather than purely non-climatic noise and its persistence. [Emphasis in original]

Ammann and Wahl (2007)
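The “sophisticated null model” point is easy to demonstrate in miniature. The sketch below is my own toy example, not the authors’ code — they used the Lasso, whereas plain least squares is substituted here to keep the example dependency-free. It regresses a strongly autocorrelated target on pseudo-proxies that are pure AR(1) noise generated independently of it, and reports the in-sample R². Because both sides of the regression are persistent, the fit typically comes out far better than the regressors’ independence from the target would suggest.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, rho, rng):
    """Generate an AR(1) series: x_t = rho * x_{t-1} + e_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

n = 150                       # roughly the length of the instrumental record
temp = ar1(n, 0.9, rng)       # stand-in "temperature": strongly autocorrelated

# Ten pseudo-proxies: pure AR(1) noise, generated independently of temp
fake = np.column_stack([ar1(n, 0.9, rng) for _ in range(10)])

# Ordinary least squares fit of "temperature" on the noise-only proxies
X = np.column_stack([np.ones(n), fake])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
fit = X @ beta

# In-sample R^2: with persistent series this is typically inflated well
# above the ~k/(n-1) ≈ 0.07 one would expect from white-noise regressors
r2 = 1 - np.sum((temp - fit) ** 2) / np.sum((temp - temp.mean()) ** 2)
print(f"in-sample R^2 of noise-only 'reconstruction': {r2:.2f}")
```

That spurious skill is exactly why the paper insists the proxies must beat autocorrelated null series, not just white noise, before a reconstruction can be trusted.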

On the power of the proxy data to actually detect climate change:

This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12.

Footnote 12:

On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.
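The “contiguous holdout blocks” the authors refer to can be mimicked in a few lines. The helper below is an illustrative stand-in, not the paper’s actual validation code: it holds out each contiguous block of years in turn, fits ordinary least squares on the remaining years, and records the holdout RMSE per block. A model with real skill should score comparably on the end blocks (which contain the run-up) and on the middle blocks.

```python
import numpy as np

def contiguous_block_scores(X, y, block_len):
    """Hold out each contiguous block in turn, fit OLS on the rest,
    and return the holdout RMSE for every block."""
    n = len(y)
    rmses = []
    for start in range(0, n - block_len + 1, block_len):
        test = np.arange(start, start + block_len)
        train = np.setdiff1d(np.arange(n), test)
        A = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(test)), X[test]]) @ beta
        rmses.append(float(np.sqrt(np.mean((pred - y[test]) ** 2))))
    return rmses

# Toy demo: 149 "years" (1850-1998) and 5 noisy proxies tracking a trend
rng = np.random.default_rng(1)
years = np.arange(1850, 1999)
y = 0.005 * (years - 1850) + 0.1 * rng.normal(size=len(years))
X = y[:, None] * 0.8 + 0.3 * rng.normal(size=(len(years), 5))

scores = contiguous_block_scores(X, y, block_len=30)
print([round(s, 3) for s in scores])
```

The paper's complaint is precisely that, on the real proxies, the block containing the 1990s run-up scores much worse than this kind of symmetry would require.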

FIG 15. In-sample Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD.

We plot the in-sample portion of this backcast (1850-1998 AD) in Figure 15. Not surprisingly, the model tracks CRU reasonably well because it is in-sample. However, despite the fact that the backcast is both in-sample and initialized with the high true temperatures from 1999 AD and 2000 AD, it still cannot capture either the high level of or the sharp run-up in temperatures of the 1990s. It is substantially biased low. That the model cannot capture the run-up even in-sample does not portend well for its ability to capture similar levels and run-ups if they exist out-of-sample.

Conclusion.

Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.

On the one hand, we conclude unequivocally that the evidence for a ”long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run up in temperatures recorded in the 1990s, even in-sample.

As can be seen in Figure 15, our estimate of the run up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.

Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated.

The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.
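The effective-sample-size concern can be made concrete with the standard AR(1) approximation n_eff = n(1−ρ)/(1+ρ) — a textbook formula, not anything taken from the paper itself. With roughly 149 annual observations (1850–1998) and a lag-one autocorrelation of, say, 0.8 (an assumed value for illustration), the number of effectively independent observations collapses:

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-one autocorrelation of a series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def effective_sample_size(n, rho):
    """Standard AR(1) approximation: n_eff = n * (1 - rho) / (1 + rho)."""
    return n * (1 - rho) / (1 + rho)

# 149 annual observations with strong year-to-year persistence
print(effective_sample_size(149, 0.8))  # → about 16.6
```

With fewer than twenty effectively independent data points, fitting dozens of proxy coefficients is exactly the "more predictors than observations" trap that item (ii) above describes.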

Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11, 12, and 13) produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).

Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades, let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate. Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.

===============================================================

Commenters on WUWT report that Tamino and Romm are deleting comments that even mention this paper on their blogs’ comment forums. Their refusal to even acknowledge it tells you it has squarely hit the target, and the fat lady has sung – loudly.

(h/t to WUWT reader “thechuckr”)

jobnls
August 19, 2010 1:21 am

Re: Barry, sounds like a perfect study on your part =)
Too bad policy is not being driven by your friends and relatives. The point here is that the hockey stick has been a huge factor in the overselling of propaganda to key persons in activist groups, government and international bodies. I do not think that anyone can deny this or the importance of unprecedented warming.
To go back to the paper, I am still surprised at how many people look at the backcast graph in the wrong way. This has been said numerous times, but please note that the important part is not the authors’ red line but the gray area. The red line is just a guess; the actual temperature (if you believe Mann’s proxies are accurate in the first place) could just as well be any line of any variation drawn within the gray area.

Richard
August 19, 2010 1:23 am

James Sexton says:
August 18, 2010 at 9:34 pm:
James, you finish this piece off so well; I wanted to find the words but you have said them for me. Australia is on the brink of an election (21st August) and this issue is up there with the best of them: immigration, economy, energy, etc. People are arguing over who will do the best job of tackling an issue that needs no tackling. We do not need wind farms and a tax on everything. The press are just as much to blame for their one-eyed approach. Below is the bit that says it all.
Barry, I’d be happy to discuss CAGW with you or anyone else on an hypothetical level. But that isn’t where the world is at in this discussion. The world is acting upon impressions. Laws are being passed. Treaties are being signed. Lives and livelihoods are being destroyed. Why? Because the perception of the graph is deemed scientific proof.
And no one but a skeptic stands up to say, “wait”, there may be a misconception about reality here! All the while people like Mann, Hansen and Jones, ad nauseum remain silent while people like you say, “He didn’t explicitly say that!”

Richard S Courtney
August 19, 2010 1:53 am

barry:
I am now convinced that your arguments are an attempt to deflect this thread from a comment that I made and onto a tangent. So, I shall address the pertinent parts of your post at August 18, 2010 at 7:00 pm and touch on the tangential issue after that.
The part of your paper that is directly pertinent to this thread says:
“There is an excellent paper that examines many of the thorny issues with millennial reconstructions. Therein a good number of studies, including MBH are considered and combined. As part of the testing, the controversial US tree proxies are discarded, the Yamal series, Mann’s PC analysis eschewed, and other methods and proxies are either reworked or left out, or retained if they are considered robust.
http://www.geos.ed.ac.uk/homes/ghegerl/cp-2006-0049.pdf
If all of Mann’s studies were discarded, the story generally told by other, multi-proxy, millennial reconstructions matches that which appears in the IPCC.
That is, the MWP seems to have been as warm as the mean of the 20th century, but probably not as warm as the last few decades of the 20th century.
With that in mind, I cannot fathom why there is such triumphalism surrounding criticism of Mann’s 12 year-old papers.”
THAT IS NOT TRUE.
The paper you cite is NOT “an excellent paper that examines many of the thorny issues with millennial reconstructions”. It is – as its title says – an intercomparison of some proxy studies. Which ones? Well that paper says;
“This paper reviews reconstructions of past temperature, on the global, hemispheric, or near hemispheric scale, by Jones et al. (1998) [JBB1998], Mann et al. (1998) [MBH1998], Mann et al. (1999) [MBH1999], Huang et al. (2000) [HPS2000], Crowley and Lowery (2000) [CL2000], Briffa et al. (2001) [BOS2001], Esper et al. (2002) [ECS2002], Mann and Jones (2003) [MJ2003], Moberg et al. (2005) [MSH2005], Oerlemans (2005) [OER2005], and Hegerl et al. (2007a) [HCA2007].”
So, that paper is an intercomparison of the studies which used the flawed statistical analysis method adopted by MBH. The paper of McS&W proves that analysis method provides worthless results so it disproves the indications of ALL those studies.
Therefore, it is disingenuous for you to assert that ;
“If all of Mann’s studies were discarded, the story generally told by other, multi-proxy, millennial reconstructions matches that which appears in the IPCC.”
The paper of McS&W proves their “story” is a fairy tale.
So, you are plain wrong when, on the basis of those discredited studies, you assert;
“the MWP seems to have been as warm as the mean of the 20th century, but probably not as warm as the last few decades of the 20th century.”
The individual studies of individual proxies (e.g. stalactites) from around the world show a general picture of the world being warmer than now in the MWP. And those studies do not suffer from the same statistical defect as the MBH and similar studies which you proclaim.
Indeed, the studies from single locations cannot suffer from the assumption of ‘teleconnections’ that is used by MBH and similar studies because they do not use any such assumption. And the paper by McS&W proves that assumption adds great inherent error to the results of the MBH analysis method.
And you say;
“With that in mind, I cannot fathom why there is such triumphalism surrounding criticism of Mann’s 12 year-old papers.”
This seems to be a meme that has been generated since the McS&W paper was submitted for publication. Even Michael Mann has been saying that he thinks his ‘hockey stick’ should not have been used as a “poster child” of AGW.
This seems to be a policy of tactical retreat.
But there is no ”triumphalism” over the retreat. There is only proclamation and despair.
The ‘hockey stick’ is a zombie. It has been repeatedly destroyed by M&M, Wegman, North, and etc. but it keeps on having a life of its own by being promulgated by AGW supporters as having some validity. Indeed, you try to continue it when you assert:
“If all of Mann’s studies were discarded, the story generally told by other, multi-proxy, millennial reconstructions matches that which appears in the IPCC.”
But,
ALL THOSE RECONSTRUCTIONS HAVE BEEN SHOWN TO BE WORTHLESS BY THE McS&W PAPER.
The hope is that the McS&W paper is the ‘silver bullet’ that will finally get rid of the zombie. And that is why it is being proclaimed (n.b. not triumphed).
You make an Orwellian assertion that;
“In fact, in the rest of the world, very few people read the IPCC TAR Summary for Policy Makers, very few people would have known what the graph was had it been given to them unlabeled, and the only reason it has emerged into the worldwide public sphere is because of the repeated attempts to portray it as the central pillar of the AGW argument.”
Do you really think that anybody here has so imperfect a memory that they do not know the ‘hockey stick’ was the “proof” of AGW which was presented in the SPM of the IPCC Third Assessment Report (TAR)?
And you follow that with ad hom and falsehood when you conclude saying;
“That the argument is basically political is apparent in how critics generally do not discuss the findings of MBH 99 from the text of that paper, or even know what they are. It’s all about the graph and perceptions arising from it. As it is for you. Which is why you mistakenly repeat that MBH did away with the MWP and LIA. You would do well to read the paper.”
Your insinuation that I have not read MBH98 and MBH99 is unfounded and silly. But your claim that the “argument is basically political” is correct, because that is why the IPCC proclaimed it in their Summary for Policymakers. However, your assertion about my opposition to that piece of pseudo-science is an unfounded insult. My opposition to it is summarised in a peer-review comment I provided on the IPCC Fourth Assessment Report (AR4), which said:
Page 1-13 Chapter 1 Section 1.5.2 Line 36
For accuracy and completeness, after “(IPCC, 2001a)” it is very, very important to add:
“However, since the TAR several studies have provided doubt to that work of Mann et al.. Many studies provide data that conflict with the findings of that work of Mann et al. (e.g. Beltrami et al) (ref. Beltrami et al “Long-term tracking of climate change by underground temperatures”, Geophysical Research Letters v.12 (2005) ). In 2005 McIntyre and McKitrick published two papers that together provide a complete refutation of that work of Mann et al. (ref. McIntyre S & McKitrick R, Energy & Environment, v 16, no.1 (2005)) (2005), Geophysical Research Letters Vol. 32, No. 3, (2005)). But, perhaps the most important of their studies of that work of Mann et al. was their publication in 2003 (ref. McIntyre S & McKitrick R, Energy & Environment, v 24, pp 751-771 (2003)) that showed it is not possible to replicate the work of Mann et al. There are several reasons for the inability to replicate this work of Mann et al.; not least that Mann refuses to reveal his source codes. The inability to replicate this work of Mann et al. means it has no scientific worth: i.e. this work of Mann et al. is anecdote of similar kind to a report of a ghost sighting. Hence, the IPCC now apologises for including it in the TAR. The IPCC will now disregard this work of Mann et al. and recommends that all others should also disregard it until it can be – and has been – independently replicated.”
But my recommendation was ignored and the AR4 only made a bland comment that natural variability has been found to be greater than indicated in the TAR.
THAT IS POLITICAL.
I conclude by addressing the tangential issue. Your post at August 18, 2010 at 7:00 pm asserts that you have not tried to dispute the existence of the MWP. In that case, I fail to understand the point of your post at August 18, 2010 at 6:56 am that claims some individual proxy studies show warming and cooling that are out of phase with the bulk of studies which show the MWP.
Richard

duckster
August 19, 2010 2:12 am

Deep Climate has just posted its response to the paper:
McShane and Wyner 2010

Richard S Courtney
August 19, 2010 2:38 am

barry:
Your comment at August 18, 2010 at 11:22 pm is addressed to me but quotes and answers a point from James Sexton (at August 18, 2010 at 7:00 pm), not me.
However, the point made by James is valid. And your family do not make laws; politicians do that.
The ‘hockey stick’ was presented as proof of AGW to politicians in the Summary for Policymakers of the IPCC TAR.
Richard

Gaylon
August 19, 2010 4:08 am

This from ‘Deep Climate’ courtesy of the ‘duckster,
Big surprise too! Not.
Lots of talk about the RE value and RMSE (hope I got that right) but nary a word about the R2 statistic/validation. Lots of history too (?). They refer to Wegman’s congressional testimony as lacking peer-review? I’m tired, I probably got that wrong, anyway here’s the ending (if it was a movie I wouldn’t post this, but you already knew what was coming…didn’t ya ;>).
“So there you have it. McShane and Wyner’s background exposition of the scientific history of the “hockey stick” relies excessively on “grey” literature and is replete with errors, some of which appear to have been introduced through a misreading of secondary sources, without direct consultation of the cited sources. And the authors’ claims concerning the performance of “null” proxies are clearly contradicted by findings in two key studies cited at length, Mann et al 2008 and Ammann and Wahl 2007. These contradictions are not even mentioned, let alone explained, by the authors.
In short, this is a deeply flawed study and if it were to be published as anything resembling the draft I have examined, that would certainly raise troubling questions about the peer review process at the Annals of Applied Science.”
I had to laugh when I read this last sentence…I’m going to bed.

Veronica
August 19, 2010 4:35 am

Mike Roddy
The fish stocks haz delpeted bcoz we haz eated em. om nom nom.

RockyRoad
August 19, 2010 4:36 am

I’ve read the Deep Climate response and all of the comments above, and to me the bottom line is this: Mann’s reconstruction is indefensible. If it is true that Mann hasn’t revealed his source code, it is high time for him to do so. If he refuses, then his interpretation is simply a figment of his personal imagination and is meaningless.
I found the Deep Climate response to be a lot of deflection from this key point. They seem to be highly concerned about a lot of other areas–as if those will somehow compensate for Mann’s lack of transparency and professionalism, but they do not.
I’ve said it several times in the past but it bears repeating: If a “scientist” refuses to show either his data or his methodology, then he is not a scientist at all. He is a charlatan.

Mike Ozanne
August 19, 2010 4:50 am

“hunter says:
August 17, 2010 at 3:31 pm
One question from a serious person I have seen is why did MW not use a Fourier transformation in their analysis?
Any insights on this would be greatly appreciated.
I guess the followup question would be did Mann use an FT? If so, why? If not, why not?”
Fourier Transforms…. Damn, that was a long time ago… Fourier analysis as we used it in Physics would be used to find out what mixture of frequencies makes up your resulting signal. As far as your second question goes, isn’t looking for underlying cyclic patterns the exact antithesis of what being a CAGWanker is all about……:-)

Lady in Red
August 19, 2010 5:09 am

I was just daydreaming about how lucky Michael Mann is that he “studies” climate science and isn’t selling mining shares. If he were, he’d probably be cooling his heels alongside Bernie Madoff.
But, because he does what he does in the name of something greater than himself (although I don’t believe that for a moment), we’re stuck with him….
…hawking the snake oil from town to town, refusing to give up, despite the fact he knows – and he knows we know and he doesn’t care – until he’s stopped, run out of town on a rail, tarred and feathered. Until then, it will continue to be the same old story.
He will never change, apologize, admit a mistake.
Simply and sadly: he needs to be stopped. …Lady in Red
PS: I just skimmed Deep Climate’s review of McShane and Wyner. I wish I could assess it. I cannot. I find DC much more credible, albeit petty about details to the exclusion of the larger message, than Romm, Gavin or Tamino or Eli. I would find it helpful to read a reaction to his analysis. I, along with other questioning folk I’m sure, have been banned from posting on his site. He don’t like questions! I wonder who he is.

August 19, 2010 5:25 am

barry says:
August 18, 2010 at 11:22 pm
Thanks, printed and now will test. BTW, isn’t that a mash-up of both proxy and temp?

August 19, 2010 6:14 am

Barry,
I did the same experiment. 2 out of 10 thought it a temp graph. The fact they thought it was a temp graph is because that is the way it is presented to the public. I don’t think I can add much to Richard S Courtney’s comments or the one I made that you perhaps thought was Richard’s. Again, I believe you’re missing the point.

RR Kampen
August 19, 2010 6:18 am

Scott Walter says:
August 18, 2010 at 4:38 pm
It’s the role of scientists to prove that CO2 played a role not to disprove it didn’t.

Nonsense. Proving something in an empirical science is fiction. Which is why we scientists always come with utterances like ‘very likely’ (check AR4). You are suggesting that Relativity Theory has no value because it cannot be proven in the sense you seem to want proof. This means that your screen is not functioning and you cannot read my post.
To be clear, if you accept that it is necessary to disprove the point, then you shouldn’t stop there. You’ll need to disprove the data was corrupt, temperature variations were due to natural weather variations and the Greek gods weren’t simply having a bad day.
You tell me to disprove that the Greek gods had a bad day. Very well. First prove to me the Greek gods exist. Then prove to me the Greek gods can actually have good days and bad ones. Then maybe I can oblige.
Over to you to prove CO2 played a role.
CO2 very likely played a role, it being a greenhouse gas whose concentration is rising very strongly. There is no other apparent mechanism that does the two things necessary to deny AGW: 1) ensure that CO2 has NO effect and 2) itself DOES create the effect. It is the burden of AGW-skeptics to prove their point; meantime AGW-theory is simply the best theory explaining the recent quick warming.
It’s back to drawing board old man.
Okay: http://weerwoord.be/includes/forum_read.php?id=1171935&tid=1171935

Gail Combs
August 19, 2010 6:54 am

Dave Springer says:
August 18, 2010 at 7:19 am
@Gail Combs August 18, 2010 at 5:24 am
The US ranks in or near the top 20 in scientific literacy….
____________________________________________________
That is not what the studies show:
“For 10 years, William Schmidt, a statistics professor at Michigan State University, has looked at how U.S. students stack up against students in other countries in math and science. “In fourth-grade, we start out pretty well, near the top of the distribution among countries; by eighth-grade, we’re around average, and by 12th-grade, we’re at the bottom of the heap, outperforming only two countries, Cyprus and South Africa.”
http://www.enterstageright.com/archive/articles/0804/0804textbooks.htm
… Surveys of corporations consistently find that businesses are focused outside the U.S. to recruit necessary talent. In a 2002 survey, 16 global corporations complained that American schools did not produce students with global skills. United States companies agreed. The survey found that 30 percent of large U.S. companies “believed they had failed to exploit fully their international business opportunities due to insufficient personnel with international skills.” One respondent to the survey even noted, “If I wanted to recruit people who are both technically skilled and culturally aware, I wouldn’t even waste time looking for them on U.S. college campuses.”
…the U.S. ranks 21st out of 29 Organization for Economic Cooperation and Development (OECD) countries in mathematics scores, with nearly one-quarter of students unable to solve the easiest level of questions….In 2000, 28 percent of all freshmen entering a degree-granting institution required remedial coursework
http://www.edreform.com/_upload/CER_JunkFoodDiet.pdf
The kids that are home-schooled or go to private schools like Phillips Academy do quite well. Also, many of those graduating from US universities are foreign students. However, our government education is terrible.

cohenite
August 19, 2010 6:59 am

The substantial [sic] complaint against this paper is stated thus:
DCA, engineer says:
August 17, 2010 at 1:24 pm
Has anyone seen this comment on Deltoid and would like to address it?
“The funny thing is that this paper actually replicates Mann et al. 2008 without even noticing it…
To partake in this dirty little secret, see their Figure 14 on page 30: the blue curve is wiggle-identical and practically a photocopy of Mann’s corresponding EIV NH land curve. As it should be. The higher (green) curve they canonize and which is shown above is the result of an error: they calibrate their proxies against hemispherical mean temperature, which is a poor measure of forced variability. The instrumental PC1 which the blue curve is based on, is a much better measure; its EOF contains the polar amplification effect. What it means is that high-latitude proxies, in order to be made representative for global temperatures, should be downweighted. The green curve fails to do this. Thus, high latitudes are overrepresented in this reconstruction, which is why the “shaft” is at such an angle, due to the Earth axis’s changing tilt effect on the latitudinal temperature dependence described in Kaufman et al. 2009.
The authors have no way of detecting such an error as their RMSE goodness-of-fit seems to be also based around the hemispherical average…”
Eli sums this up succinctly:
Eli Rabett says:
August 17, 2010 at 5:36 am
Well, yeah, the science team always looks at things, and finds answers. It looks like the basic error on this one is that by calibrating against the hemispheric average, rather than smaller grid cells, they lose information and kill the signal-to-noise. Averaging out the local signal means that noise looks better than signal and, in their words, noise provides a better fit than the proxies. There are, however, some other useful ideas in the paper.
Firstly, this is ironic, since Mann in both his hockey stick papers calibrated regional proxies with other regional proxies separated by time and space; this ‘method’ was also demonstrated in his Antarctic warming paper co-authored with Steig. The proxies Mann calibrates against are “less than or equal to” the base proxy but can be anywhere or anytime; they can also be used more than once and are averaged against the criteria and each other. The proxies are then calibrated with the instrumental data, which has been infilled back to 1850; any proxy which does not calibrate with the instrumental data is discarded. The final proxy/instrument hybrid is averaged to give hemispheric averages, even though favourably calibrated proxies may not have come from that hemisphere.
So, is this a better ‘method’ of extracting a meaningful temperature history than what M&W do, which, as Eli says, is to calibrate each proxy against hemispheric temps? Arguably it is not, because it reflects the old dispute about whether a GMST, or world average temperature, is a proper reflection of regional climate effects. [see: http://pielkeclimatesci.files.wordpress.com/2009/10/r-321.pdf ] The Mann method smooths the regional effect and ultimately submerges it in the hemispheric averages. The M&W method preserves regional integrity and correctly uses that to generate a meaningful picture of hemispheric temperature and GMST.
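The signal-to-noise point being argued here (calibrating proxies against a hemispheric average versus local temperatures) can be illustrated with a toy simulation. The data below are entirely made up and this is not any paper's actual method: regions are modeled as independent white noise, and a single proxy records one region's temperature plus measurement error.

```python
# Toy illustration (invented data): a proxy tracks its LOCAL temperature;
# correlating it against a hemispheric MEAN mixes in variability from
# unrelated regions, so the apparent signal-to-noise ratio drops.
import random

random.seed(0)
n_years, n_regions = 150, 20

# Independent regional temperature histories (white noise for illustration).
regions = [[random.gauss(0, 1) for _ in range(n_years)]
           for _ in range(n_regions)]
hemi_mean = [sum(r[t] for r in regions) / n_regions for t in range(n_years)]

# One proxy that records region 0's temperature plus measurement noise.
proxy = [regions[0][t] + random.gauss(0, 0.5) for t in range(n_years)]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

r_local = corr(proxy, regions[0])  # calibrate against local temperature
r_hemi = corr(proxy, hemi_mean)    # calibrate against hemispheric mean

print(f"correlation with local temperature:  {r_local:.2f}")
print(f"correlation with hemispheric mean:   {r_hemi:.2f}")
```

Against its own region the proxy calibrates well; against the hemispheric mean, most of the apparent signal has been averaged away. That gap is the crux of the disagreement over the choice of calibration target.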

Stephan
August 19, 2010 7:04 am

1. He lied and/or cheated
2. He doesn’t know what he is/was doing
Cuccinelli can now proceed. The lawyers now have solid proof.

August 19, 2010 7:34 am

I read the critique at Deep Climate; this is the response I posted there.
“So you spend half of this post refuting their interpretation of historical events not related to the actual study?
You stated, “But not so fast. The proper comparison is really with the very first and very last blocks – the ones actually used in climate studies. And those two blocks tell a very different story.” Isn’t that what they said? In as simple terms as I can put it: this is a time/temp series. The fact that two separate time blocks performed well and all the others didn’t doesn’t validate anything. In fact it calls into question the entire proxy series. This is much akin to looking at a clock and noting that it is correct twice a day yet incorrect the rest of the day, then concluding that because it is correct twice a day the clock is correct. It doesn’t work that way.
Frankly, after reading your response to M&W, I agree there are questions that probably need to be answered before stating the paper is valid. But you show little proof behind your conclusion statement, probably because you spent half your time disputing irrelevant sequences of events. The history of the hockey stick isn’t relevant to the study itself, but rather to its impetus. In the end, who really cares why they chose to write a paper on the reliability of proxy data? <—— Remember that? That was the purpose of the study, not the historical sequence of a graph debate. Which, btw, I'll have to check, but I don't believe you have that proper either.
In another part of your conclusion, you state, “So there you have it. McShane and Wyner’s background exposition of the scientific history of the “hockey stick” relies excessively on “grey” literature…”
Reading the paper, “We are not the first to observe this effect. It was shown, in McIntyre and McKitrick (2005a,c),” and “This approach is similar to that of McIntyre and McKitrick (2005a,c) who use the full empirical autocorrelation function to generate trend-less pseudo-proxies.” Those are the only two references to McIntyre and McKitrick in the actual study (unlike you, I don’t include the introduction as part of the study), and it is quite obvious they didn’t use them in the study but only cite observations that are similar. Wegman is cited once in the conclusions to verify their statement, “While the literature is large, there has been very little collaboration with university level, professional statisticians.” I suppose you could disagree with that statement; perhaps you and I can get a grant and study whether that statement is true or false, then submit our conclusion to a climate journal and get peer-reviewed, so we could know if in fact climatologists consult with statisticians or not. (Because apparently, only selected climate journals are arbiters of truth.) That being said, I’ve gotta ask: what grey literature are M&W relying upon? In your critique, you state it is a flawed paper, but mostly you cited statements in their introduction that are not relevant to the study itself. Further, you state they rely on grey literature. Where, and in what manner, do they rely upon grey literature? You state the proxies are valid because they perform well in the parts you deem important. Sir, your critique is flawed. Try again. Thanks.”

latitude
August 19, 2010 7:42 am

RR Kampen says:
August 19, 2010 at 6:18 am
There are no apparent other mechanisms
====================================
Of course there are, RR.
The most obvious is that the climate changes.
To prove AGW you have to believe that temp proxies are correct, it’s unprecedented, computer games are correct, etc.
To disprove AGW all you have to do is go with the most obvious. It’s been warmer in the past, CO2 has been higher, extremely high levels of CO2 did not insulate the planet or prevent an ice age, the oceans hold and release CO2 according to temps, and on and on………
The whole AGW theory could not be any shakier or weaker if you look at it.

Richard S Courtney
August 19, 2010 7:46 am

RRKampen:
At August 19, 2010 at 6:18 am you assert:
“CO2 very likely played a role, it being a greenhouse gas whose concentration is rising very strongly. There are no apparent other mechanisms that do the two things necessary to deny AGW: 1) ensure that CO2 has NO effect and 2) the mechanism DOES create the effect. It is the burden of AGW-skeptics to prove their point; meantime AGW-theory is simply the best theory explaining recent quick warming.”
Sorry, but No! The AGW hypothesis is denied by observations.
The absence of the tropospheric ‘hot spot’ is direct evidence that the positive feedbacks required for CAGW are NOT happening.
In fact, nothing the AGW hypothesis predicts has been observed and the opposite of some of its predictions is observed.
1.
The anthropogenic emissions and global temperature do not correlate.
2.
Change to atmospheric carbon dioxide concentration follows change to global temperature at all time scales.
3.
Recent rise in global temperature has not been induced by rise in atmospheric carbon dioxide concentrations.
Global temperature fell from ~1940 to ~1970, rose to 1998, and has fallen since. That’s 40 years of cooling and 28 years of warming. Global temperature is now similar to that of 1990. But atmospheric carbon dioxide concentration has increased at a near constant rate and by more than 30% since 1940. It has increased by 8% since 1990.
4.
Rise in global temperature has not been induced by anthropogenic emissions of carbon dioxide.
Over 80% of the emissions have been since 1940 and the emissions have been increasing at a compound rate. But since 1940 there have been 40 years of cooling with only 28 years of warming. There’s been no significant warming since 1995, and global temperature has fallen since the high it had 10 years ago.
5.
The pattern of atmospheric warming predicted by the AGW hypothesis is absent.
The hypothesis predicts most warming of the air relative to the surface at altitude in the tropics. Measurements from weather balloons and from satellites both show slight cooling relative to the surface at altitude in the tropics.
Simply, the AGW hypothesis is denied by observations. It is not a “theory”: it is junk.
Richard

Slabadang
August 19, 2010 7:57 am

Well Mr Romm and company….
Really puts “deniers” in relevant context and gives it a meaning!!

barry
August 19, 2010 8:13 am

Richard,

that paper is an intercomparison of the studies which used the flawed statistical analysis method adopted by MBH.

It doesn’t seem so. Juckes 2007 discusses the different methodologies and weighs the pros and cons.

Abstract. There has been considerable recent interest in paleoclimate reconstructions of the temperature history of the last millennium. A wide variety of techniques have been used. The interrelation among the techniques is sometimes unclear, as different studies often use distinct data sources as well as distinct methodologies. Here recent work is reviewed and some new calculations performed with an aim to clarifying the consequences of the different approaches used….
One factor which complicates the evaluation of the various reconstructions is that different authors have varied both method and data collections….
In addition, the above works also use a range of techniques. The subsections below cover different scientific themes…

Different methods are then described.

Jones et al. (1998)… composites are scaled by variance matching (Appendix A)…
MBH1998,1999 also differ from Jones et al. (1998) in using spatial patterns of temperature variability rather than a hemispheric mean temperature time series….Different modes of atmospheric variability are evaluated through an Empirical Orthogonal Function [EOF] analysis of the time period 1902 to 1980, expressing the global field as a sum of spatial patterns (the EOFs) multiplied by Principal Components…
Osborn and Briffa (2006) perform a more rigorous and quantitative analysis along the lines of Soon and Baliunas (2003), using a method that by-passes the problem of proxy calibration against instrumental temperatures….
Moberg et al. (2005)… discard the low frequency components of the tree-ring data and replace them with information from proxies with lower temporal resolution. A wavelet analysis is used to filter different temporal scales…. This composite wavelet transform is inverted to create a dimensionless temperature reconstruction, which is calibrated against the instrumental record of Northern Hemisphere mean temperatures, AD 1856–1979, using a variance matching method….
As mentioned earlier, MBH1998, 1999 also used inverse regression, but the method used here differs from that of MBH1998, 1999 in using Northern Hemisphere temperature to calibrate against, having a longer calibration period, and reconstructing only a single variable instead of multiple PCs….
there is also a difference because MBH1999 used inverse regression against temperature principal components rather than Northern Hemisphere mean temperature as here…

M&M’s (2005) chief criticisms of MBH 1999 are two: one concerns the “impact of the standardisation of the proxy records prior to principal component calculation” and the other the “validity of specific proxy data (bristlecone pines) as indicators of past temperature variations.” That method is not common to the other papers, and that proxy data does not appear in all of them. Indeed, the ‘Union’ composite that the authors produce is tested without the Mannian PC analysis and without the bristlecone pines.
But perhaps you are thinking of an alternative, contentious methodology common to the papers. If so, name it, or describe it, and I will attempt to cross-reference (although, I’d be appreciative if you would help me out by substantiating with references).
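For readers unfamiliar with the “variance matching” calibration mentioned above for Jones et al. (1998) and Moberg et al. (2005), here is a minimal sketch. The numbers are invented for illustration and this is not the papers' actual code; the idea is simply to rescale a dimensionless proxy composite so its mean and variance over the calibration interval match the instrumental record.

```python
# Hedged sketch of variance-matching calibration on made-up numbers:
# shift and scale the proxy composite so that, over the calibration
# period, it has the same mean and variance as the instrumental series.
import statistics

def variance_match(proxy, instrumental):
    """Rescale `proxy` to the mean and variance of `instrumental`
    (both series cover the same calibration years)."""
    mp, sp = statistics.mean(proxy), statistics.stdev(proxy)
    mi, si = statistics.mean(instrumental), statistics.stdev(instrumental)
    return [(x - mp) * (si / sp) + mi for x in proxy]

proxy_cal = [0.1, 0.4, 0.2, 0.5, 0.3]       # dimensionless proxy composite
inst_cal = [13.9, 14.5, 14.1, 14.7, 14.3]   # degrees C, same years

calibrated = variance_match(proxy_cal, inst_cal)
print(calibrated)  # now carries the instrumental mean and variance
```

The same scale factor and offset would then be applied to the full pre-instrumental proxy series to express it in temperature units.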

DCA engineer
August 19, 2010 8:19 am

cohenite:
Thanks for your reply.
I noticed you were making comments on Deltoid, which is where the issue was first raised; however, I didn’t see you make the same reply there. Was your reply blocked?

August 19, 2010 8:23 am

So a non-appearance in places is an indication that I’m singing there? My costumers are relieved at how little work they have to do for this performance.

August 19, 2010 8:51 am

James Sexton says:
August 17, 2010 at 7:56 pm
latitude says:
August 17, 2010 at 5:17 pm
“One question before I stick both feet in my mouth…
Didn’t Mann substitute the tree ring data with real temperature data for the last 30 years?”
Sort of, his trick. He merged the data on the graph to make it look seamless, when in fact, the proxy data, using his methodologies diverged significantly from the temp record. Apparently, the reason why this was successful is because most alarmists respond to pictures more than they do numbers or words. (See this entire thread for evidence of my prior sentence.)

Actually he didn’t do that; he superimposed the temperature record in a contrasting color and clearly indicated in the legend that he had done so.

Richard S Courtney
August 19, 2010 9:03 am

barry:
At August 19, 2010 at 8:13 am you assert that the paper you cited (i.e.
http://www.geos.ed.ac.uk/homes/ghegerl/cp-2006-0049.pdf )
is not an intercomparison of the studies which used the flawed statistical analysis method adopted by MBH.
Firstly, the title of the paper is
“Millennial temperature reconstruction intercomparison and evaluation”
so I think we can agree that it is an intercomparison.
But you list various differences between the adopted reconstructions and cite the paper as saying:
“One factor which complicates the evaluation of the various reconstructions is that different authors have varied both method and data collections….
In addition, the above works also use a range of techniques. The subsections below cover different scientific themes…”.
However, the McS&W paper considered the statistical methodology adopted by MBH and not the “scientific themes”. Importantly, the McS&W paper did not consider the data and/or its sources: indeed, it accepts and uses the MBH data for the purpose of its analysis. So, variations in the “data collections” are not pertinent.
Most of the differences you cite are differences in data selection and, therefore, they are not relevant to consideration of their validation (or invalidation) by use of the MBH statistical method.
Similarly, different calibration choices used in the reconstructions are not relevant.
And the weightings that the papers apply to various proxies are not relevant, either. Indeed, those weightings merely reflect the prejudices of the people who chose the different weightings.
The other differences are trivial to this discussion in that all the compared time series are obtained by use of the basic MBH methodology.
Furthermore, in some cases the slight differences from the MBH method enhance the problem: e.g. as you say, the paper you cite says;
“Moberg et al. (2005)… discard the low frequency components of the tree-ring data and replace them with information from proxies with lower temporal resolution.”
Deliberately reducing the temporal resolution inhibits the ability to observe magnitudes and rates of change because they are ‘smeared’ over a longer time.
And, switching the discussion to the M&M criticisms – as your message attempts to do – does not help. The M&M criticisms have often been attacked and have withstood all attacks. But we are here discussing the McS&W criticism.
Richard
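Richard’s “smearing” point (that reducing temporal resolution hides the magnitude and rate of short excursions) can be demonstrated with a trivial moving-average example on invented numbers:

```python
# Illustrative only: a sharp one-step temperature spike largely
# disappears once the series is smoothed to a lower temporal
# resolution (here, a simple 5-point moving average).
def moving_average(series, window):
    """Centered moving average; the window is truncated at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

series = [0.0] * 20
series[10] = 1.0  # a sharp excursion of amplitude 1.0

smoothed = moving_average(series, 5)
print(max(series), max(smoothed))  # the peak shrinks from 1.0 to 0.2
```

The spike is still there in the smoothed series, but at one fifth of its true amplitude, and spread across five steps instead of one.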
