NOTE: This has been running two weeks at the top of WUWT, discussion has slowed, so I’m placing it back in the regular queue. – Anthony
UPDATES:
Statistician William Briggs weighs in here
Eduardo Zorita weighs in here
Anonymous blogger “Deep Climate” weighs in with what he/she calls a “deeply flawed study” here
After a week of being “preoccupied” Real Climate finally breaks radio silence here. It appears to be a prelude to a dismissal with a “wave of the hand”
Supplementary Info now available: All data and code used in this paper are available at the Annals of Applied Statistics supplementary materials website:
http://www.imstat.org/aoas/supplements/default.htm
=========================================
Sticky Wicket – phrase, meaning: “A difficult situation”.
Oh, my. There is a new and important study on temperature proxy reconstructions (McShane and Wyner 2010), submitted to the Annals of Applied Statistics and slated for publication in the next issue. According to Steve McIntyre, this is one of the “top statistical journals”. This paper is a direct and serious rebuttal to the proxy reconstructions of Mann. It seems watertight on the surface because, instead of attacking the proxy data quality issues, the authors assumed the proxy data were accurate for their purpose, then created a Bayesian backcast method. Then, using the proxy data, they demonstrate that it fails to reproduce the sharp 20th century uptick.
Now, there’s a new look to the familiar “hockey stick”.
Before: [figure not shown]

After: [figure not shown]
Not only are the results stunning, but the paper is highly readable, written in a sensible style that most laymen can absorb, even if they don’t understand some of the finer points of Bayesian methods, loess filters, or principal components. Not only that, this paper is a confirmation of McIntyre and McKitrick’s work, with a strong nod to Wegman. I highly recommend reading this paper and distributing this story widely.
Here’s the submitted paper:
(PDF, 2.5 MB. Backup download available here: McShane and Wyner 2010)
It states in its abstract:
We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.
Here are some excerpts from the paper (emphasis in paragraphs mine):
This one shows that M&M hit the mark, because it is independent validation:
In other words, our model performs better when using highly autocorrelated noise rather than proxies to “predict” temperature. The real proxies are less predictive than our “fake” data. While the Lasso generated reconstructions using the proxies are highly statistically significant compared to simple null models, they do not achieve statistical significance against sophisticated null models.
We are not the first to observe this effect. It was shown, in McIntyre and McKitrick (2005a,c), that random sequences with complex local dependence structures can predict temperatures. Their approach has been roundly dismissed in the climate science literature:
To generate ”random” noise series, MM05c apply the full autoregressive structure of the real world proxy series. In this way, they in fact train their stochastic engine with significant (if not dominant) low frequency climate signal rather than purely non-climatic noise and its persistence. [Emphasis in original]
Ammann and Wahl (2007)
…
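The “fake data beats real proxies” point is easy to demonstrate in miniature. Below is a toy numpy sketch (not the paper’s actual Lasso procedure; the series lengths, AR coefficients, and “temperature” series are all invented for illustration) showing that predictors made of pure AR(1) noise can still produce a respectable in-sample fit to an autocorrelated target:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_fake = 150, 30  # toy "instrumental record" length and number of noise proxies

def ar1(n, rho, rng):
    """Generate an AR(1) series x_t = rho * x_{t-1} + e_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

# A toy "temperature" series: slow trend plus autocorrelated noise
temp = 0.005 * np.arange(n_years) + 0.3 * ar1(n_years, 0.6, rng)

# "Fake proxies": AR(1) noise generated independently of temperature
fakes = np.column_stack([ar1(n_years, 0.9, rng) for _ in range(n_fake)])

# Ordinary least squares regression of temperature on the noise proxies
X = np.column_stack([np.ones(n_years), fakes])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
fit = X @ beta
r2 = 1 - np.sum((temp - fit) ** 2) / np.sum((temp - temp.mean()) ** 2)
print(f"in-sample R^2 using pure noise predictors: {r2:.2f}")
```

The predictors contain no temperature information whatsoever, yet the in-sample R² comes out well above zero, which is why M&W test the proxies against “sophisticated null models” of exactly this kind rather than against white noise.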
On the power of the proxy data to actually detect climate change:
This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12
Footnote 12:
On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.
…

We plot the in-sample portion of this backcast (1850-1998 AD) in Figure 15. Not surprisingly, the model tracks CRU reasonably well because it is in-sample. However, despite the fact that the backcast is both in-sample and initialized with the high true temperatures from 1999 AD and 2000 AD, it still cannot capture either the high level of or the sharp run-up in temperatures of the 1990s. It is substantially biased low. That the model cannot capture the run-up even in-sample does not portend well for its ability to capture similar levels and run-ups if they exist out-of-sample.
…
Conclusion.
Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.
On the one hand, we conclude unequivocally that the evidence for a ”long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run up in temperatures recorded in the 1990s, even in-sample.
As can be seen in Figure 15, our estimate of the run-up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.
Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated.
The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.
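The effective-sample-size point can be made concrete with the standard AR(1) approximation n_eff ≈ n(1 − ρ)/(1 + ρ) (a textbook rule of thumb, not a formula taken from the paper; the series below is simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 1000, 0.9  # sample size and (true) lag-1 autocorrelation

# Simulate an AR(1) series with strong persistence
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()

# Estimate the lag-1 autocorrelation from the data
xc = x - x.mean()
rho_hat = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)

# Effective number of independent observations under the AR(1) approximation
n_eff = n * (1 - rho_hat) / (1 + rho_hat)
print(f"n = {n}, estimated rho = {rho_hat:.2f}, effective n ~ {n_eff:.0f}")
```

With ρ near 0.9, a thousand observations carry roughly the information of a few dozen independent ones, which is the sense in which the effective sample size “may be just too small for accurate reconstruction”.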
Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11,12, and 13), produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).
Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate. Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.
===============================================================
Commenters on WUWT report that Tamino and Romm are deleting comments that even mention this paper in their blog comment forums. Their refusal to even acknowledge it tells you it has squarely hit the target, and the fat lady has sung – loudly.
(h/t to WUWT reader “thechuckr”)

I started reading the comments here and time and again the comment was “RTFD”.
So I did. I read the preamble, I read the tone of the document and I read the “executive summary” style of presentation at the beginning of the document.
What I took from it was
– The data is not good although there is a lot of it
– We do statistics better than the climate scientists
– The historical data cannot predict the future from the historical trend using statistics
Then I got to Figure 8.
There is simply no point in reading the document any further. The rest is an exercise in statisticians telling us they don’t understand the science and can’t get the data to make sense no matter what they do.
EXACTLY
What they do NOT say (and I don’t expect anyone who thinks this analysis is wonderful to agree with me) is that the scientists do not use statistical models to predict future changes in the climate; they use physics models!
The statisticians say that in the last 1,000 years the data is patchy and causes huge error bars. However they say that the reconstructions track extremely well with 120 out of the 150 years of the instrument record. Then they go on to say and SHOW that the reconstructions can’t predict the warming in the last 30 years.
YET the physics models have predicted the warming in the last 30 years. In fact the physics models have slightly underestimated the warming in the last 30 years with the recorded warming at the upper end of the scale.
What is more, the physics models work on the basis that CO2 and other gasses warm the planet and that another set of gasses and contaminants in the atmosphere cool it.
I’m not surprised the scientists are up in arms about this document. In plain English all it says is that statistical analysis of historical trends in weather and climate cannot predict, or account for, the current warming trend today.
EXACTLY THE POINT
Once the statisticians throw their arms up in the air and say they can’t use the data to predict what is going to happen next, it is down to the scientists (who do know what they are doing), to use their knowledge of climate interactions with known gas emissions to predict the future warming trend.
I expect this article to be rebutted for adding nothing new to the debate except for the facts that
1. Statisticians know little or nothing about climate science
2. Whilst climate scientists can use statistics quite well to do their job, they CAN see the woods for the trees in terms of statistics and climate forcing, whilst the statisticians can’t.
As an outsider looking in on this field of “expertise” with a great deal of concern, it sounds like there was no professional academic except a retiree with a pair that has challenged this Mann in 20 years? Who has failed science here? I am embarrassed to have thought that science was an ethical and moral discipline that was to be respected and an institution that could be trusted. Help restore the status of your academic professions. What of Beck’s work? Has CO2 really risen or not? Is petroleum of abiotic origin? The big picture of this “global warming”/”climate change”/”CO2 is bad” boondoggle has to include answers to all of these. Nobody believes what the weatherman says for the next 24 hours. Quit pissing back and forth here and write some papers like Anthony did on surface temperatures so that inquisitive amateurs like me don’t have to do the work of professionals – you know, when you want the job done right, do it yourself, like how you feel when a “professional” tradesman does work for you… and grandly screws it up. Thank God for people like McIntyre.
NeilT:
But surely Mann (1998) was not a physics model. It was a proxy study based on multiple statistical series. It seems to me as if your criticisms may be fundamentally misplaced.
And quite regardless of whether a statistician is not a climatologist, a non-statistician climatologist conducting a statistical study without outside consultation has gone toad-sticking without a light.
Furthermore, any scientist in this very poorly understood field who thinks he “knows what he’s doing” doesn’t know what he’s doing. Q.E.D.
How could it be misplaced? Mann did not try to forecast future climate change with this reconstruction.
When you strip away all the equations, verbiage and everything else and analyse what they are saying, it is not in line with what they are showing.
In fact what they are saying is that you follow the rules rigidly and if, when you are looking for a number between 1 and 1000, the answer comes back as yellow, you stop there and say it’s all rubbish.
Well, that might be OK for a statistician, but for someone who actually has to use this data as one of many data points in their calculations, that’s not good enough. The person who has to make sense of this must go back and find out why 1-1000 = yellow, correct the misconceptions and move on.
This article is nothing more than academic nitpicking in which they have used assumptions (in their language), which are not valid to prove their point.
What more do I need to know? There is no point in doing a degree-class study in statistics to analyse the paper; I already know enough English to understand that the premise behind the paper is not fit for purpose in the theatre of science to which it is being applied.
Which might be why the scientists are getting so riled about it.
It’s arguing apples with potatoes. Not even the same food class.
Russell Seitz-
I’m a university student and live in Wellington, New Zealand. My father is from the Orkney Islands.
A big lawsuit has been filed here against NIWA over their mucking with raw temperature data and failure by them to explain why they have been mucking with it.
Don’t you think it is sad that taxpayer funded scientists have to be sued to make them tell the truth?
barry: August 17, 2010 at 8:59 pm
Richard S Courtney says:
August 17, 2010 at 11:43 am
“However, there is an enormous amount of information from history and from archaeology (in addition to proxy studies) that indicates the existence of the ~900 year global temperature cycle.”
Anecdotes and local accounts. This is even less useful than attempts at composite reconstructions.
Archaeological findings are not anecdotal accounts, they are physical evidence, and the sum total of the proxies — including oxygen isotope variations drawn from stalagmites — in both Northern and Southern Hemispheres are indicators that the MWP was a global event.
@NeilT: I don’t think you read it quite right. From your tone, I also got the feeling that you browsed the document until you found what you felt was an excuse to stop reading. I think it’s a good idea to instead read it more carefully and assume, for the sake of argument, that there is some substance in the paper.
MW10 stated that their model cannot capture the run-up in recent years. If you look carefully at the MBH papers, they suffer from the same problem, although it is not really pointed out as a problem (unless I missed it). MBH98 truncated some of the data, since it didn’t agree with the recent instrumental record (the much-debated “trick”), implicitly following previous papers that did so absent any empirical explanation for the divergence. In MBH08, the reconstructions do not have nearly the same slope as the instrumental record, but you have to really zoom in to see it, since the temperature graphs are overlaid on top.
What MW10 are saying is that their model validates as well as any other for the period where there is an instrumental record, but they place significance on the “in-sample” divergence, whereas some of the other papers seem to suggest that we don’t need a good fit for recent years, since we have the CRU temps. Even in MBH08, you can draw a horizontal line back from the most recent reconstructed temps and be within the error bars pretty much all the way from 1400 and back. The grafted temps indicate that it is much warmer now than in the past, but the reconstructed temps really don’t. MW10 clearly disagrees with this approach.
The way I understand some of the comments at Deltoid (and I’m by no means a climate scientist), they seem to say that “ah, but these statisticians don’t understand all the stepwise procedures we use to increase the confidence in our reconstructions”.
My problem with that argument is that those stepwise procedures seem to have a fairly weak grounding in actual underlying physical processes. And besides, all the issues with the underlying data itself (bristlecones expressly sampled because they were believed to have been affected by CO2 contamination, upside-down proxies, use of proxies known to be contaminated, proxies located in the wrong gridcell, mixing different types of proxies, presumably assuming that they show the same linear response to temperature) seem to suggest to me some kind of belief in divination rather than methodical application of adjustments based on known physical relationships.
Two papers from M&M (2003/2005), the Wegman report, and this latest paper directly challenge the MBH studies.
Meanwhile, other groups of scientists have collected data and drawn up their own millennial reconstructions. There are at least 12 other studies (probably more by now) using various proxy data, some overlapping with MBH 98/99, and some independent. These people are doing the hard yards while others make use of their research.
Not too bad for a relatively new field with few participants and with proxy data difficult to get from nature and assess. Let there be more. And let climate scientists and statisticians collaborate more.
Here is an incomplete list of papers on the MWP.
NeilT, try not to use red herrings (or red noise). What Mann & Co have been doing is not physics, but statistical manipulation using tree rings (and a few other proxies) to “get rid of the Medieval Warm Period”. What McIntyre and McKitrick showed in 2003 was that Mann’s proxies were not valid temperature proxies, and his statistical methods were wrong. That was confirmed by the Wegman Panel and the North/NAS panel.
One of the two legs of CAGW is that the 20th Century exhibited “unprecedented” warming. But the Hockey Stick is more than wrong, since Mann and “The Team” have deliberately continued to use bad data (Graybill, Yamal, Tiljander, etc) and bad statistical methods to create yet more phony Hockey Sticks to eliminate the MWP and LIA. That’s not science, it’s Lysenkoism.
M&W clearly show a MWP and LIA using Mann’s own bogus data.
Where is there valid scientific evidence that the MWP and LIA did NOT exist, as the CAGW crowd contend? Where is their valid scientific evidence that 20th Century warming was “unprecedented”?
There are no “scientists getting riled” about M&W (or M&M for that matter), since a scientist is someone who follows the Scientific Method. Someone who allows independent replication of their results. Someone whose data and methods are transparent. It’s long been demonstrated that “climate scientists” aren’t in fact real scientists.
Anthony, would you consider asking McShane and Wyner to do a guest post here, perhaps touching on some of the issues raised/points of contention?
duckster: August 17, 2010 at 9:26 pm
In addition to Barry’s good questions, I’d also be interested to know what kind of mechanism you would see as driving such 900 year cycles – and what evidence you have that this mechanism is still operational.
Natural variation, which is the null hypothesis. And it’s your task to provide evidence that it *isn’t* still operational.
Didn’t notice the time-frame before.
Mann’s contentious studies were published 13 and 12 years ago, and didn’t hit the public until 2001, in the IPCC TAR. So it took 2 years for the first challenge to be published, and another 2 years for a second challenge and a governmental enquiry. The challenges occurred before the next IPCC report, and are discussed in that report (IPCC AR4, 2007).
Oh, and a lot of people in this debate seem to have the impression that error bars are just there for decoration purposes. They’re not. Careful treatment of margins of error, and understanding of how the uncertainties add up, or even multiply, during processing of the data, are absolutely key to good analysis.
Strictly speaking, any value within the error bars could be the “correct” one.
MW10 seems to be mainly about assessing the uncertainties of the reconstructions. This is where they claim that “Climate scientists have greatly underestimated the uncertainty”.
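For readers unfamiliar with how uncertainties “add up”: for independent error sources they combine in quadrature, not by simple addition. A minimal sketch, with both sigma values invented purely for illustration:

```python
import math

# Two hypothetical, independent 1-sigma uncertainties on a reconstructed temperature:
sigma_measure = 0.10  # proxy measurement scatter, in deg C (illustrative number)
sigma_calib = 0.20    # calibration against the instrumental record (illustrative number)

# Independent errors add in quadrature:
sigma_total = math.sqrt(sigma_measure**2 + sigma_calib**2)
print(f"combined 1-sigma uncertainty: {sigma_total:.3f} deg C")
```

Note the combined value (~0.224) is dominated by the larger term and is smaller than the naive sum 0.30; correlated errors, by contrast, can add up linearly or worse, which is one reason careful uncertainty accounting matters so much in these reconstructions.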
Actually, Neil, apples and potatoes are in the same food class: vegetable/fruit, i.e. starch.
Orkneygal – taxpayer funded = taking the King’s shilling……I suspect they massage/produce only what they’re told to.
Truth’s got nothing to do with it, but I’m VERY happy to hear the lawyers are after the NIWA. 🙂
Best,
OL
p.s. brass monkeys up here……summer’s over.
Henry@DaveSpringer
In case you missed it –
http://wattsupwiththat.com/2010/08/17/breaking-new-paper-makes-a-hockey-sticky-wicket-of-mann-et-al-99/#comment-459755
Henry @ Dave Springer
You have not proven that CO2 is a greenhouse gas, i.e. that its cooling properties are smaller than its warming properties.
Unless there is something wrong with the definition of a GHG. I think they also call ozone a GHG. But if you look carefully at the incoming radiation, ozone on its own cuts away almost 15-20% of the sun’s radiation. What ozone cuts away from earth’s radiation is small by comparison with that 15-20%. So I am sure ozone is cooling more than it is warming. But they still call it a GHG?
The process that should follow, once the MW10 paper is finally in print, is clearly outlined in “How to Publish a Scientific Comment in 1 2 3 Easy Steps”.
A great summary of the Scientific Process at its best! 🙂
Russell Seitz says:
August 17, 2010 at 5:13 pm
REPLY: Amongst the facts omitted when this paragraph is cited as above is that 1998 was a super El Nino anomaly event, not a super global warming event. Weather, not climate. Wind pattern driven, not CO2 driven. -Anthony
Are you trying to say that in the year 1998 CO2 played no role for just that year? All right, I will not believe that. Then the question is whether the 30% extra CO2 compared to 1900 (and before) wouldn’t have helped 1998 become as warm as it was. It seems you are denying this. But then I’ll have to believe my first remark after all.
In fact what they are saying is that you follow the rules rigidly and if, when you are looking for a number between 1 and 1000, the answer comes back as yellow, you stop there and say it’s all rubbish.
I don’t understand. A trend is a trend. It fits or it doesn’t. Multiple trends are properly normalized or they are not. It doesn’t matter whether the series is bananas or degrees C.
If Mann had been working with proper statisticians and the answer came back yellow, they might have worked it out. Or not. But Mann was too busy working out his own private PCA. (There is also the problem of weighting.) That is why a statistician was indispensable.
Even if one is doing very basic stuff (like what I play with), a proper statistical review is necessary. At the very least it should be part of the peer-review process. It really ought to be part of internal review. To leave it to independent review is jaw-dropping.
And to refuse to release data and method until threatened by congressional subpoena is mind boggling. How could anyone even pay heed to such a piece of work? Far less prominently use it as a basis for staggeringly expensive and intrusive policy?
I have read the paper as best I can, as well as all comments (so far) here and at CA. Being somewhat mathematically challenged, I want to be sure that I have taken home the key messages of the paper, and so below, I outline what appear to me to be the essential points. If I have those right, it would be very helpful to know; and if wrong, to know how I might need to amend my understanding. TIA for any responses.
1. The paper seeks to assess the reliability of multiple proxies in reconstructions of surface temperatures over the past 1000 years as evidenced primarily in papers by Mann et al.
2. The authors accept the data used by Mann et al. as it stands. Questioning its veracity is beyond the scope of the paper. It is not addressing scientific issues, but is solely an investigation of the data by professional statisticians, who have often not been explicitly involved in past statistical analyses. This point has also been made, for example, in the conclusions of the Oxburgh “Climategate” enquiry.
3. The key conclusion is that the proxy data are not good enough to give a reliable reconstruction for the pre-instrumental period, and even the reconstruction for the instrumental period is suspect.
4. The authors have developed a model for analysing the data and producing new reconstructions. They say, amongst other things:
“We calculate that there is a 36% posterior probability that 1998 was the warmest year over the past thousand. If we consider rolling decades, 1997-2006 is the warmest on record; our model gives an 80% chance that it was the warmest in the past thousand years.”
However, because of 1-3. above, this cannot be taken as supporting previous results, e.g. as in the so-called “hockey stick” graph. The error bars for the data are extremely wide (more so than in prior reconstructions by Mann et al.), so that no definitive conclusions can be reached from it about the presence (or absence) of the MWP or LIA, for example.
5. The authors are not claiming that their model should be used in preference to the methods used by Mann. et al, or that they provide support for the latter. The overall inference is that there is so much uncertainty that there can be no support in reconstructions so far produced for the hypothesis of unprecedented effects of anthropogenic, CO2-induced global warming. Putative firmer evidence for such could only come from other kinds of study not addressed or commented on in this paper.
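For anyone wondering where a figure like the “36% posterior probability” comes from: it is simply the fraction of posterior draws in which the year in question is the maximum of the reconstructed path. A toy sketch of that computation, with three made-up five-“year” draws standing in for the model’s actual posterior sample:

```python
import numpy as np

# Each row is one invented posterior draw of a temperature path;
# the last column stands in for the year being tested (e.g. 1998).
draws = np.array([
    [0.1, 0.3, 0.2, 0.4, 0.9],  # final year is the maximum
    [0.8, 0.2, 0.1, 0.3, 0.5],  # an earlier year is warmer
    [0.0, 0.1, 0.2, 0.3, 0.6],  # final year is the maximum
])

# Posterior probability that the final year is the warmest:
p_warmest = np.mean(draws.argmax(axis=1) == draws.shape[1] - 1)
print(f"P(final year is warmest) = {p_warmest:.2f}")  # 2 of 3 draws
```

The wide error bars in MW10 are exactly what keep this probability well below certainty even when the final year looks warmest in the point estimate.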
After Mr Mann has failed so badly in his work but stuck to the political line, I await his appointment to the House of Lords, or as an MP even; either that or he’ll be left out to dry, which could be interesting, as he’ll either take people down with him or mentally implode.
This is popcorn time at the blogs famous for deleting posts.
Barry and Duckster:
Barry, your tendentious post at August 17, 2010 at 8:59 pm is repetitive to some degree so I am only copying its statements that I interpret to be your substantive points, and I am answering those. If I have omitted anything significant then that is inadvertent so please get back to me.
In response to my having written:
“However, there is an enormous amount of information from history and from archaeology (in addition to proxy studies) that indicates the existence of the ~900 year global temperature cycle.”
You respond with:
“Anecdotes and local accounts. This is even less useful than attempts at composite reconstructions.”
You are entitled to your opinion, but historians and archaeologists would not agree with it. Importantly, your quotation of me separates from this an important statement that goes to the crux of the discussion on this thread.
My actual statement, in context, said:
“However, there is an enormous amount of information from history and from archaeology (in addition to proxy studies) that indicates the existence of the ~900 year global temperature cycle. The importance that was placed on the MBH ‘hockey stick’ was that it seemed to deny all that evidence.
But the MBH ‘hockey stick’ is now in the refuse bin so all that evidence is again seen to be valid.”
Then, you say:
“There is a famous MWP chart produced by a skeptic attempting to show that the MWP was global.
http://pages.science-skeptical.de/MWP/MedievalWarmPeriod.html
However, if you examine the temperature profiles for each of the proxies on the map, you discover that the warm peaks early in the millennium can be as far apart as 500 years. The irony of this chart is that it actually shows what the literature (including MBH) says – that medieval warming was not spatially and temporally coherent.”
But that depends on what you mean by “spatially and temporally coherent”. For example, there are places on Earth that show warming trends and others that show cooling trends over the twentieth century. Are you saying that, therefore, there was no global warming over the twentieth century?
The fact is that all those isolated places around the globe show periods of warming and cooling and they each show a peak of temperature (i.e. the MWP) in the period from 750 to 1250 AD. Together they indicate a peak near 1000 AD. This peak is the MWP.
After that you ask me:
“Also, you indicate that you think there are reliable proxy records (on which the 900-year cycle is based). Could you reference these, so that we can see how they stack up regarding the MWP/current temps?”
And, of course, the answer is YES there are hundreds and they are linked from the URL which you cite; viz.
http://pages.science-skeptical.de/MWP/MedievalWarmPeriod.html
You follow that by quoting my statement (that I put back into context above) which says;
“The importance that was placed on the MBH ‘hockey stick’ was that it seemed to deny all that evidence.”
And then assert:
“At no time did any of the MBH reconstructions discount or even consider a 900-year warming cycle. This notion appears to have been superimposed on the studies by you. MBH do distinguish a MWP and LIA. If you focus on the graph and exclude the content of the paper, this may be how you think otherwise. But I presume we are talking here about the scientific findings of MBH rather than graphical representations.”
This conflates a misrepresentation of what I wrote with an Orwellian distortion of history. In context (see above) my statement was about the MWP and the LIA, not the ~900 year cycle. Then one only has to see
(a) the graphical representation of the MWP and the LIA presented in the first two IPCC Reports
and
(b) their displacement by the ‘hockey stick’ in the Third IPCC Report
to recognise that what I said is true.
And please note that Michael Mann was a Lead Author for the Chapter of the IPCC Report that replaced the previous IPCC paleoclimate history with his ‘hockey stick’.
Then you write:
“May I again bring to your attention the contention you raised re the alleged 60-year ‘cycle’ in the instrumental record. You have told us that the current cycle (from 2000) is supposed to be one of cooling. Yet current trends indicate warming.
Trends from 2000 to present:
RSS = 0.11C/dec
UAH = 0.15C/dec”
Please read what I wrote at August 17, 2010 at 2:52 am because you do claim to be discussing it. I wrote there:
“The global temperature seems to vary in cycles that are overlaid on each other.
[snip]
There is an apparent ~900 year oscillation.
[snip]
And there is an apparent ~60 year oscillation”
And I said the ~900 year oscillation is in a warming phase and has been since the depths of the LIA, while the ~60 year oscillation has been in a cooling phase since ~2000. If you cannot understand why this provided rapid warming from ~1970 to ~2000 but little (or no) warming since ~2000, then I doubt that I am capable of explaining it to you.
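The overlaid-cycles picture is at least arithmetically coherent, which can be shown with made-up numbers (this sketch illustrates the arithmetic of the claim only; it is no evidence that such cycles exist): add a slow warming ramp to a ~60-year sinusoid peaking near 2000, and the fitted 1970-2000 trend comes out much steeper than the 2000-2010 trend.

```python
import numpy as np

years = np.arange(1900, 2011)

# Invented components: a slow secular warming ramp (standing in for the rising
# phase of a ~900-year cycle) plus a ~60-year oscillation peaking around 2000
slow = 0.005 * (years - 1900)                           # deg C per year ramp
osc = 0.10 * np.sin(2 * np.pi * (years - 1985) / 60.0)  # peaks at year 2000
temp = slow + osc

def trend(y0, y1):
    """Least-squares trend (deg C / year) of `temp` over the window [y0, y1]."""
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], temp[m], 1)[0]

t_rise = trend(1970, 2000)   # oscillation rising: trends reinforce
t_flat = trend(2000, 2010)   # oscillation falling: trends nearly cancel
print(f"1970-2000 trend: {t_rise:.4f} C/yr; 2000-2010 trend: {t_flat:.4f} C/yr")
```

Whether the real instrumental record is actually the sum of such cycles is the contested question; the sketch only shows that rapid warming followed by a flat decade is what superposed cycles of these (assumed) amplitudes and phases would produce.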
duckster:
At August 17, 2010 at 9:26 pm you ask me:
“In addition to Barry’s good questions, I’d also be interested to know what kind of mechanism you would see as driving such 900 year cycles – and what evidence you have that this mechanism is still operational.”
I do not know the mechanism that creates the ~900 year cycle. I merely observe that it has existed in recent millennia. Similarly, nobody knows the mechanism that creates gravity, but we observe that it keeps the planets in their orbits.
Science is about determining how and why the universe behaves as it is observed to behave. It is not about proving the universe has not stopped behaving as it is observed to be behaving.
Richard
I have cherry picked my two favorite comments on this paper from over a thousand on various sites,
From CA…
GrantB Posted Aug 15, 2010 at 5:51 AM
Blakeley McShane is from the Kellogg School of Management and is obviously funded by big corn.
And from CP…
lerogue | August 15, 2010 at 11:16 am
One is disappoointed to see that some well known denialists, McShane and Wyner, have managed to scrape a paper through the peer review process which is critical of Michael Mann’s work. A great pity.
Having used “Climate Science” statistical methods to analyse my cherry picked sample I find that my conclusion that climate sceptics have a better sense of humour is robust 🙂
@Bill Tuttle
Natural variation, which is the null hypothesis.
This is baffling. How can natural variation be responsible for a 900 year cycle? And how would it be any different from noise – in which case it wouldn’t be a cycle. If there is a cycle, then there must be something driving it – something that would be measurable and predictable.