New paper makes a hockey sticky wicket of Mann et al 98/99/08

NOTE: This has been running for two weeks at the top of WUWT, discussion has slowed, so I’m placing it back in the regular queue.  – Anthony

UPDATES:

Statistician William Briggs weighs in here

Eduardo Zorita weighs in here

Anonymous blogger “Deep Climate” weighs in with what he/she calls a “deeply flawed study” here

After a week of being “preoccupied”, Real Climate finally breaks radio silence here. It appears to be a prelude to a dismissal with a “wave of the hand”.

Supplementary Info now available: All data and code used in this paper are available at the Annals of Applied Statistics supplementary materials website:

http://www.imstat.org/aoas/supplements/default.htm

=========================================

Sticky Wicket – phrase, meaning: “A difficult situation”.

Oh, my. There is a new and important study on temperature proxy reconstructions (McShane and Wyner 2010), submitted to the Annals of Applied Statistics and slated for publication in the next issue. According to Steve McIntyre, this is one of the “top statistical journals”. This paper is a direct and serious rebuttal to the proxy reconstructions of Mann. It seems watertight on the surface because, instead of attacking the proxy data quality issues, the authors assumed the proxy data were accurate for their purpose, then created a Bayesian backcast method. Using the proxy data, they demonstrate that it fails to reproduce the sharp 20th-century uptick.
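To give a feel for the approach, here is a minimal sketch in Python using made-up stand-in data. The authors’ actual model is a more elaborate Bayesian regression (their real code and data are on the AOAS supplementary site linked above); the BayesianRidge stand-in, the synthetic proxies, and the date ranges below are illustrative assumptions, not the paper’s.

import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 93 proxy series spanning 998-1998 AD, with
# instrumental temperature available only for 1850-1998.
years = np.arange(998, 1999)
proxies = rng.standard_normal((years.size, 93))
instrumental = years >= 1850
temperature = rng.standard_normal(int(instrumental.sum()))

# Fit on the instrumental period, then backcast the pre-1850 years
# from their proxy values alone.
model = BayesianRidge().fit(proxies[instrumental], temperature)
backcast, backcast_std = model.predict(proxies[~instrumental], return_std=True)

# backcast_std is a per-year predictive uncertainty; the paper's
# Figure 16 separates its uncertainty into noise, coefficient, and
# total components.
print(backcast.shape, float(backcast_std.mean()))

The key test is then whether such a model, fit on part of the instrumental record, can reproduce the held-out 20th-century run-up; the paper reports that it cannot.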

Now, there’s a new look to the familiar “hockey stick”.

Before:

Multiproxy reconstruction of Northern Hemisphere surface temperature variations over the past millennium (blue), along with 50-year average (black), a measure of the statistical uncertainty associated with the reconstruction (gray), and instrumental surface temperature data for the last 150 years (red), based on the work by Mann et al. (1999). This figure has sometimes been referred to as the hockey stick. Source: IPCC (2001).

After:

FIG 16. Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD and backcasts 998-1849 AD. The cyan region indicates uncertainty due to εt, the green region indicates uncertainty due to β, and the gray region indicates total uncertainty.

Not only are the results stunning, but the paper is highly readable, written in a sensible style that most laymen can absorb, even if they don’t understand some of the finer points of Bayesian methods, loess filters, or principal components. Not only that, this paper is a confirmation of McIntyre and McKitrick’s work, with a strong nod to Wegman. I highly recommend reading it and distributing this story widely.

Here’s the submitted paper:

A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?

(PDF, 2.5 MB. Backup download available here: McShane and Wyner 2010)

It states in its abstract:

We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.

Here are some excerpts from the paper (emphasis in paragraphs mine):

This one shows that M&M hit the mark, because it is independent validation:

In other words, our model performs better when using highly autocorrelated noise rather than proxies to “predict” temperature. The real proxies are less predictive than our “fake” data. While the Lasso generated reconstructions using the proxies are highly statistically significant compared to simple null models, they do not achieve statistical significance against sophisticated null models.
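The test behind that passage is easy to sketch. Below is a toy Python illustration with every input assumed (synthetic target, placeholder proxies, illustrative AR coefficients, an arbitrary holdout split — none of it from the paper): build “fake” proxies as strongly autocorrelated AR(1) noise, fit the same Lasso to the real and fake predictors, and compare skill on a held-out block.

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)

def ar1_series(n, rho, rng):
    # AR(1): x[t] = rho * x[t-1] + e[t], with white-noise innovations.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

n_years, n_proxies = 149, 93
temperature = ar1_series(n_years, 0.6, rng)  # stand-in target series
real_proxies = rng.standard_normal((n_years, n_proxies))  # placeholder "real" data
fake_proxies = np.column_stack(
    [ar1_series(n_years, 0.9, rng) for _ in range(n_proxies)]
)

split = n_years - 30  # hold out the final contiguous block

def holdout_rmse(X):
    model = LassoCV(cv=5).fit(X[:split], temperature[:split])
    pred = model.predict(X[split:])
    return float(np.sqrt(np.mean((pred - temperature[split:]) ** 2)))

# The paper's finding, loosely: the autocorrelated "fake" series
# "predict" held-out temperature about as well as the real proxies.
print("real proxies, holdout RMSE:", holdout_rmse(real_proxies))
print("AR(1) noise,  holdout RMSE:", holdout_rmse(fake_proxies))

If random-but-persistent series do as well as the proxies under the same procedure, the proxies’ apparent skill cannot be distinguished from an artifact of autocorrelation.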

We are not the first to observe this effect. It was shown, in McIntyre and McKitrick (2005a,c), that random sequences with complex local dependence structures can predict temperatures. Their approach has been roundly dismissed in the climate science literature:

To generate “random” noise series, MM05c apply the full autoregressive structure of the real world proxy series. In this way, they in fact train their stochastic engine with significant (if not dominant) low frequency climate signal rather than purely non-climatic noise and its persistence. [Emphasis in original]

Ammann and Wahl (2007)

On the power of the proxy data to actually detect climate change:

This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12

Footnote 12:

On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.

FIG 15. In-sample Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD.

We plot the in-sample portion of this backcast (1850-1998 AD) in Figure 15. Not surprisingly, the model tracks CRU reasonably well because it is in-sample. However, despite the fact that the backcast is both in-sample and initialized with the high true temperatures from 1999 AD and 2000 AD, it still cannot capture either the high level of or the sharp run-up in temperatures of the 1990s. It is substantially biased low. That the model cannot capture the run-up even in-sample does not portend well for its ability to capture similar levels and run-ups if they exist out-of-sample.

Conclusion.

Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.

On the one hand, we conclude unequivocally that the evidence for a “long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run-up in temperatures recorded in the 1990s, even in-sample.

As can be seen in Figure 15, our estimate of the run-up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.

Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated.

The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.
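To make the effective-sample-size point concrete: for an AR(1) process with lag-1 autocorrelation rho, a standard approximation is n_eff ≈ n(1 − rho)/(1 + rho). The snippet below is a back-of-the-envelope sketch; the rho values are hypothetical, not estimates from the paper’s data.

def effective_sample_size(n: int, rho: float) -> float:
    # Approximate effective sample size for an AR(1) process with
    # lag-1 autocorrelation rho (valid for -1 < rho < 1).
    return n * (1 - rho) / (1 + rho)

# 149 annual observations (1850-1998):
print(effective_sample_size(149, 0.0))  # 149.0 -- independent data
print(effective_sample_size(149, 0.8))  # ~16.6 -- strongly autocorrelated

Strong persistence can shrink 149 annual observations to the informational equivalent of a couple of dozen independent ones, which is the sense in which the training data may be too small for accurate reconstruction.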

Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11, 12, and 13) produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).

Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate. Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.

===============================================================

Commenters on WUWT report that Tamino and Romm are deleting comments that even mention this paper on their blog forums. Their refusal to even acknowledge it tells you it has squarely hit the target, and the fat lady has sung – loudly.

(h/t to WUWT reader “thechuckr”)

Eric Dailey
August 15, 2010 4:28 am

I love when the Fat Lady sings.

Mikael Pihlström
August 15, 2010 4:31 am

Philemon says:
August 15, 2010 at 2:44 am
Mikael Pihlström says:
August 15, 2010 at 2:22 am
“…why not use fig. 17, which brings it all together: the warming of the last decades is bigger than any backcast, M&W 2010 included.”
Look at the uncertainty bands.
“In fact, our uncertainty bands are so wide that they envelop all of the other backcasts in the literature. Given their ample width, it is difficult to say that recent warming is an extraordinary event compared to the last 1,000 years. For example, according to our uncertainty bands, it is possible that it was as warm in the year 1200 AD as it is today.” (McShane and Wyner, AOAS 2010, p. 37)
—————
You are right, but uncertainty works both ways: It could have been as
warm in 1200 AD, or considerably cooler.

Stu
August 15, 2010 4:57 am

Mike Roddy wrote,
“Similarly, climate scientists are getting bored with arguments from untrained individuals that the “trace gas” CO2 does not play the major role in the recent and rapid temperature increases. This role was proven in a laboratory in the 19th century by Arrhenius, and has not been seriously disputed since.”
I can see climate scientists getting bored with history pretty soon. You already have Gavin Schmidt claiming that the warmth of the MWP is an ‘uninteresting’ question scientifically. And Tamino seems to be uninterested in anything prior to the year 1975…
http://tamino.wordpress.com/2010/08/13/changes/
So… these guys may be bored with history (I don’t blame them, I thought history was quite dull), but that kind of begs the question of why all the fuss in the first place?
Mainstream science. Spending your money on figuring out the answers to questions that bore scientists.
Terrific! 😉

Editor
August 15, 2010 4:57 am

Jimbo says:
August 15, 2010 at 2:42 am

Suggestion: Will you consider creating a “Hockey Stick” page under your Categories pull down menu on the right side of the page?

There’s an entry for paleoclimatology, see
http://home.comcast.net/~ewerme/wuwt/categories.html
http://home.comcast.net/~ewerme/wuwt/cat_paleoclimatology.html

Robert of Ottawa
August 15, 2010 4:59 am

Does this paper kill paleoclimatology? No. There are many more direct proxy methods of estimating past temperatures.
Does this paper kill dendroclimatology? Very possibly.

orkneygal
August 15, 2010 5:02 am

Stanislav Lem-
Precisely.
Since they are all bad, there must be one that is “least worst”. That hardly means it is skillful.

TerryS
August 15, 2010 5:02 am

Re: eudoxus
“McShane and Wyner, 2010, figure 16, illustrates an absolutely (in terms of the sign of slope) unprecedented (over the last millennium) rate of increase in global temperature”
From the actual paper:

On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.

In other words, since the reconstruction cannot pick up sharp upticks in temperature, you cannot call it unprecedented.
You say: “It displays no evidence of a medieval warm period during the range 1000-1200 CE, but seems, rather, to predict a dip in temperature during that range of years.”
The error bars in the graph are so big that they encompass everything from the MWP being colder than the LIA to the LIA being warmer than today, and pretty much everything in between. From the paper itself:

In fact, our uncertainty bands are so wide that they envelop all of the other backcasts in the literature. Given their ample width, it is difficult to say that recent warming is an extraordinary event compared to the last 1,000 years. For example, according to our uncertainty bands, it is possible that it was as warm in the year 1200 AD as it is today.

You say: “Figure 16 illustrates the remarkable feature that, at the onset of the industrial revolution, the increase in the Earth’s temp was so great it created a reversal in its slope. Fascinating.”
The thick red line in Figure 16 that you seem to think proves AGW also has 1000 AD being warmer than today. Fascinating.

Mikael Pihlström
August 15, 2010 5:05 am

James Sexton says:
August 15, 2010 at 3:53 am
—–
I don’t really disagree with what you say here. I should perhaps have read the article more thoroughly. But actually the results are rather to my liking, if they stand. It seems wise to reduce the confidence given to proxy studies, at least for the time being.

richard verney
August 15, 2010 5:05 am

Their article is interesting since it is based (for the purpose of argument) upon an “acceptance” of the veracity of the data. Of course, if that data is wrong, then their methodology would suggest an even less dramatic rise in recent temperatures and higher backcast temperatures.
Of course, the key issue is the quality of the proxy data and the risk of drawing too many inferences from scant quantities of data. GIGO.
The proxy data can be no more than a very rough and ready guide, and for the purposes of serious prediction it should be thrown out. This follows from the known fact that the tree proxy data from 1960 onward does not match the instrument record. This fact alone means one of three things. Namely, either:
a) The proxy data is wrong – thereby confirming the unreliability of all pre-1960 tree proxy data, such that it would be unsafe to assume that global temperatures pre-1850 are as ascertained from tree proxy data; or
b) The instrument record post-1960 is wrong – more specifically, the ‘corrected’ ‘adjusted’ instrument record is wrong, and if the ‘corrections’ ‘adjustments’ were done properly the modern instrument record would be consistent with the lower temperatures suggested by the post-1960 tree proxy record; or
c) Both the proxy record and the modern instrument record (by which I refer to the ‘corrected’ ‘adjusted’ data) are wrong and unreliable, such that we have no quality data upon which conclusions about past temperatures (i.e., those pre-1960 back to, say, 1000) or modern temperatures (i.e., those post-1960) can safely be drawn.
My own take on the situation is that set out in c) above, and that means that we need to go back to the drawing board. It may be that there is simply insufficient reliable data (both proxy and instrument) covering the southern hemisphere. If that is the case, we should simply ignore the southern hemisphere altogether and just look at the northern hemisphere data. We need a re-evaluation of the proxy data for the northern hemisphere (including taking into account written and archaeological records). We also need to carefully review the instrument record for the northern hemisphere and just look at data from sites where we can be reasonably certain that no adjustments/corrections are necessary. These sites will inevitably be rural sites and ones which have the longest uninterrupted temperature record. It may be that these sites will be few and far between, but if global warming is a global issue (and I am of the view that it is probably not a ‘global’ phenomenon and certainly the consequences are local rather than global) for the purposes of considering probable effect one can assume that any noted trend would similarly occur over the entire land area of the northern hemisphere (although I do hate making assumptions).
Mind you given that the land temperature record is corrupted by changes in land use and UHI and given that 4/5ths of the globe is water and given the sheer volume of the seas (which act as huge storage reservoirs) one wonders why it is worthwhile looking at land temperatures if one is investigating global warming. It is the seas that are the key driver of climate and it will be only through a proper understanding of sea temperatures, currents and cloud formation that we will gain insight into what extent there is global warming and what effect this will have.
I say go back to square one and do nothing until we have a better understanding of climate drivers and can put together a data set of temperatures in which we can be confident. Would it not be silly to spend $trillions on curbing CO2 only to find out that there is no problem with rising temperatures, or no problem with CO2 (i.e., CO2 is not responsible for the rising temperature)? The latter is particularly stupid since we may face a scenario whereby there are rising temperatures (due to natural variation or some manmade villain other than CO2) and these rising temperatures cause serious problems such that we then need to spend $trillions on dealing with the effect. We would then have spent two sets of $trillions: one wasted on dealing with an assumed cause which was not in fact the cause and therefore did not remedy the situation, and the other dealing with the effect.
I also consider that we need to re-evaluate whether rising temperatures would in fact be the disaster that so many people predict. Given that bio-diversity favours warm conditions and given that civilisations and mankind flourished in warm conditions (it is no accident that none of the old civilisations flourished in high latitudes – and to the extent that the Viking civilisation flourished this was during a warm spell in the northern hemisphere), it is probable that a rise in temperature would overall be a good and beneficial thing.
Of course, this does not mean that we should not strive to find viable alternative energy sources (for some solar is an option and others tidal – although wind seems too unreliable to have any future – but in the main this will have to be nuclear preferably fusion) and to lessen our dependence upon fossil fuels (oil reserves would be much better utilised for plastics and the like rather than ‘wasted’ in providing energy) not only because of the environmental effects of the latter but also because of the political uncertainties of supply.

Dave Springer
August 15, 2010 5:09 am

It’s good news if the earth is warming up a bit regardless of why.
Green plants and animals = good.
Rocks and ice = bad.
Any questions?

TerryS
August 15, 2010 5:29 am

Re: joshua corning says:

The real fun will be watching the next IPCC panel doing back flips to keep this out of their next report.

It will be easy for them. They will simply arrange for one of their pet journals to publish a paper, refuting this one, just before the cutoff date for IPCC submissions. The paper won’t have to be accurate or have sound statistics, it simply has to be published too late for any responses to it to make it into the next IPCC report.

Ken Harvey
August 15, 2010 5:44 am

“While the literature is large, there has been very little collaboration with university-level, professional statisticians”
Why would they want to spend some of their grant money on professional numbers men? Apart from the unnecessary expense, people whose only specialty is numbers could not be expected to understand the special needs of climatology.
Note well that your drugs and food additives are approved with even greater prudent savings on unnecessary expenditure on professional statisticians.

stephen richards
August 15, 2010 5:46 am

singularian says:
August 15, 2010 at 1:03 am
It’s late Saturday night, this post has been here 4 1/2 hours, and there are 93 comments. Busy night for Anthony and the Moderators.
Sunday evening here – want to know what the weather’s like tomorrow?
Best one yet looooool

Skeptical Statistician
August 15, 2010 5:49 am

“The authors of the 20- odd studies that confirmed Mann’s data are not really interested in what professional statisticians and mathematicians are saying about it.”
Yeah, that’s been the problem all along.
How long before Gavin et al say that this study is “fatally flawed”, “gravely flawed”, “seriously flawed”, etc.? That is the usual rhetorical trick.

Joe Horner
August 15, 2010 5:56 am

Mikael Pihlström says:
August 15, 2010 at 1:49 am
If proxies have no predictive value, why do the authors persist in doing their own reconstruction? If paleo reconstructions are universally dead (I am OK with that) they are dead for everyone. You have to forget your MWP argument too.

Not at all, Mikael. If proxies have no predictive value then all it means is that they cannot be used to invalidate a MWP that has been long-inferred from other evidence. As for why people “persist in doing their own reconstructions”, surely doing that (and getting vastly differing results from the same data) is the logical way to investigate the reliability of those proxies? Which is a valid scientific endeavour.
Mikael Pihlström says:
August 15, 2010 at 2:22 am
BTW Smokey, if you want to link a figure from the article, why not use fig. 17, which brings it all together: the warming of the last decades is bigger than any backcast, M&W 2010 included.

So, your call: Either:
(a) the proxies are unreliable predictors because they fail to track the current temp rise. In which case they are also worthless for back-casting. In which case there is absolutely no evidence to claim current warming is “unprecedented”, or,
(b) the proxies are reasonable predictors. In which case they may be OK to support a claim of unprecedented warming. But in that case, the instrumental record is showing warming that isn’t really there, because the (reliable) proxies would show it if it was. In which case, the instrumental record is (as has been widely discussed) contaminated beyond usefulness.
Your call, (a), (b) or both of the above?

Michael Jankowski
August 15, 2010 6:07 am

[Response: The M&W paper will likely take some time to look through (especially since it isn’t fully published and the SI does not seem to be available yet), but I’m sure people will indeed be looking. I note that one of their conclusions “If we consider rolling decades, 1997-2006 is the warmest on record; our model gives an 80% chance that it was the warmest in the past thousand years” is completely in line with the analogous IPCC AR4 statement. But this isn’t the thread for this, so let’s leave discussion for when there is a fuller appreciation for what’s been done. – gavin]
I knew the RC folks would latch on to that paragraph.
That “conclusion” (which isn’t in the conclusions section, BTW) is based purely on the acceptance of the data, the relationship of proxies to temperature over the calibration period, and the applicability of their model. They immediately start rolling through a number of caveats and criticisms which essentially say the sensitivity just isn’t there to draw substantial conclusions outside of the temperature record.
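For what that number means mechanically, here is a hypothetical Python sketch (stand-in posterior draws, not the paper’s model or data): a claim like “an 80% chance that 1997-2006 was the warmest decade” is just the fraction of posterior temperature paths in which that final rolling decade average exceeds every other one.

import numpy as np

rng = np.random.default_rng(2)
n_draws, n_years = 1000, 1009  # stand-in posterior over 998-2006 AD
paths = rng.standard_normal((n_draws, n_years))

window = 10
kernel = np.ones(window) / window
# Rolling decade means for every posterior draw.
decades = np.apply_along_axis(
    lambda p: np.convolve(p, kernel, mode="valid"), 1, paths
)

last = decades[:, -1]  # the final (1997-2006) decade in each draw
frac = float(np.mean(last >= decades.max(axis=1)))
print(f"posterior probability the last decade is warmest: {frac:.2f}")

So the figure is conditional on the model and data being right, which is exactly the caveat the authors themselves roll through.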

August 15, 2010 6:09 am

eudoxus says at 12:54 am:
The graph “…displays no evidence of a medieval warm period during the range 1000-1200 CE, but seems, rather, to predict a dip in temperature during that range of years.”
eudoxus, don’t you understand? The McShane & Wyner paper did not use the mountains of data confirming the high MWP temperatures.
They did a straight statistical study using only the carefully selected proxies used by Mann et al. And they still came out with high MWP temperatures — which Mann had claimed were completely non-existent in his hokey stick. Compare Mann’s chart in the article with the chart constructed with the correct statistical methodology.
Try to understand: this paper debunks Mann’s faked conclusions by using his own cherry-picked data. It does not purport to be a representation of the long-established MWP.

latitude
August 15, 2010 6:14 am

I don’t understand the discussion of the MWP. Mann said his data did not show one, and they used Mann’s data. The fact that their reconstruction raises the year-1000 temperatures, when by Mann’s account it should not at all, is all you need to know.

Brad
August 15, 2010 6:27 am

So what happens when you use the real data? I guess the whole thing was made up?
Amazing…simply amazing lie told by “men of science.” How do we get these guys to tell the truth? What is wrong with the current funding/publication mechanisms that allowed this lie of AGW to be foisted on the world?

Latimer Alder
August 15, 2010 6:32 am

Roddy
‘The authors of the 20- odd studies that confirmed Mann’s data are not really interested in what professional statisticians and mathematicians are saying about it’
What an astonishing remark!
For a community that is ever quick to criticise others for lacking the ‘right’ qualifications in climatology (whatever those may be), to be indifferent to professional statisticians’ verdict on their work is quite amazing.
As far as I can tell, having established the basic data to be used, there is no climatological knowledge required to manipulate the numbers and produce the graphs that Mann and his chums have relied on for over a decade. The knowledge and skills required are purely statistical.
And here we have two professional statisticians demonstrating that this part of the work has not been done to a professional standard and that many of the supposed conclusions cannot be derived from the data. And that the basic premise – that tree ring data can somehow tell us about past temperatures – is unsound.
Wow! No doubt there will be a considerable brouhaha once the paper is properly published…now that it is in the public domain it cannot be suppressed anyway…but it is difficult to imagine what robust defence the Team can come up with.
That the authors are unqualified in their field…nope..better qualified than Mann et al
That the authors have cherry picked the data…nope…they used the same data as Mann
That the authors are funded by Big Oil..even if true, unlikely to be taken seriously as an argument apart from by True Believers
That its all terribly unfair and the poor polar bears are going to fry just about ten minutes before they would have drowned……about the best that they can do.
It’s been a great summer so far. The total debunking of CRU already, and now this earth-shattering paper.

August 15, 2010 6:40 am

@TerryS
“They will simply arrange for one of their pet journals to publish a paper, refuting this one, just before the cutoff date for IPCC submissions. The paper won’t have to be accurate or have sound statistics, it simply has to be published too late for any responses to it to make it into the next IPCC report.”
Worse, if there is a repeat of the kind of behind-the-scenes shenanigans identified by Christopher Booker, Richard North et al, then we can expect “useful” research to find its way in even after the cutoff date.

Richard M
August 15, 2010 6:52 am

We’ve already seen attempts at ad homs, strawmen, out of context claims and just plain denial from the AGW supporters. It’s almost laughable. What we haven’t seen is any attempt at scientific understanding of what this paper represents. Very telling.
Add this statistical paper to the problems presented by unit roots, and the statistics used in climate science is now pretty much trashed.
On another angle, as Lew Skannen indicated, the models are now in question. I’d go further and say they are completely trashed as well. They need to figure out how to predict natural warming events like the MWP as well as CO2 warming events if the AGW hypothesis is to be supported. Looks like they have a lot of work to do.

August 15, 2010 6:57 am

wwf says:
August 15, 2010 at 2:54 am
I still see an unprecedented warming and at an unprecedented rate over the last century and a half.

Then be so kind as to explain why this warming appears to have taken place in three distinct periods since 1850, of which only the last warming period (1970-2000) has been attributed to that trace gas called CO2.
To everyone else: figure 15, do I spot the trick there? Nice to see that this report would also need a neat trick to show an unprecedented warming if they had to. But they don’t have to 🙂

August 15, 2010 6:59 am

joshua corning says:
August 15, 2010 at 1:53 am
The real fun will be watching the next IPCC panel doing back flips to keep this out of their next report.

joshua,
Good comment – yes, we should start thinking down the road a bit.
It’s not just fun to watch the IPCC regarding papers like this; we must follow it closely. We must be the auditing body that follows the preparation of the next IPCC report, to ensure any irregularities in the IPCC process are exposed promptly. Vigilance – watch the IPCC closely from now on.
John

Ben
August 15, 2010 7:08 am

The entire MWP argument is just a distraction from the facts of the paper. It may or may not have existed; this thread should not be about the MWP at all, it should be about this research paper and the implications it entails. Sure, it will have some effect on the MWP debate, but that is beside the point.
