New paper makes a hockey sticky wicket of Mann et al 98/99/08

NOTE: This has been running two weeks at the top of WUWT, discussion has slowed, so I’m placing it back in the regular queue. – Anthony

UPDATES:

Statistician William Briggs weighs in here

Eduardo Zorita weighs in here

Anonymous blogger “Deep Climate” weighs in with what he/she calls a “deeply flawed study” here

After a week of being “preoccupied” Real Climate finally breaks radio silence here. It appears to be a prelude to a dismissal with a “wave of the hand”

Supplementary Info now available: All data and code used in this paper are available at the Annals of Applied Statistics supplementary materials website:

http://www.imstat.org/aoas/supplements/default.htm

=========================================

Sticky Wicket – phrase, meaning: “A difficult situation”.

Oh, my. There is a new and important study on temperature proxy reconstructions (McShane and Wyner 2010), submitted to the Annals of Applied Statistics and listed for publication in the next issue. According to Steve McIntyre, this is one of the “top statistical journals”. This paper is a direct and serious rebuttal to the proxy reconstructions of Mann. It seems watertight on the surface because, instead of trying to attack the proxy data quality issues, they assumed the proxy data were accurate for their purpose, then created a Bayesian backcast method. Then, using the proxy data, they demonstrate that it fails to reproduce the sharp 20th century uptick.

Now, there’s a new look to the familiar “hockey stick”.

Before:

Multiproxy reconstruction of Northern Hemisphere surface temperature variations over the past millennium (blue), along with 50-year average (black), a measure of the statistical uncertainty associated with the reconstruction (gray), and instrumental surface temperature data for the last 150 years (red), based on the work by Mann et al. (1999). This figure has sometimes been referred to as the hockey stick. Source: IPCC (2001).

After:

FIG 16. Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD and backcasts 998-1849 AD. The cyan region indicates uncertainty due to ε, the green region indicates uncertainty due to β, and the gray region indicates total uncertainty.

Not only are the results stunning, but the paper is highly readable, written in a sensible style that most laymen can absorb, even if they don’t understand some of the finer points of Bayesian methods, loess filters, or principal components. Not only that, this paper is a confirmation of McIntyre and McKitrick’s work, with a strong nod to Wegman. I highly recommend reading this and distributing this story widely.

Here’s the submitted paper:

A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?

(PDF, 2.5 MB. Backup download available here: McShane and Wyner 2010 )

It states in its abstract:

We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.
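To make the abstract’s null-model comparison concrete, here is a minimal sketch, not the authors’ code (their actual data and code are at the AoAS supplementary site linked above). It fits a Lasso to invented stand-ins for the proxies and, separately, to AR(1) noise generated with no knowledge of temperature, then compares skill on a contiguous holdout block at the end of the record; every number in it is a toy placeholder.

```python
# Toy sketch of the null-model comparison described in the abstract.
# Not the authors' code or data; the "temperature" and "proxies" are invented.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_years, n_proxies = 149, 93   # 1850-1998 calibration period; 93 proxies reach back to 998 AD

# Invented stand-ins: a slowly wandering "temperature" and weakly related "proxies"
temp = np.cumsum(rng.normal(0, 0.1, n_years))
proxies = 0.1 * temp[:, None] + rng.normal(0, 1.0, (n_years, n_proxies))

def ar1(n, phi, rng):
    """AR(1) noise generated independently of temperature."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

null_predictors = np.column_stack([ar1(n_years, 0.4, rng) for _ in range(n_proxies)])

# Contiguous holdout block: train on the early years, validate on the final 30
train, test = slice(0, n_years - 30), slice(n_years - 30, n_years)

def holdout_rmse(X):
    model = LassoCV(cv=5).fit(X[train], temp[train])
    return np.sqrt(np.mean((model.predict(X[test]) - temp[test]) ** 2))

print("proxy RMSE on holdout block:", holdout_rmse(proxies))
print("noise RMSE on holdout block:", holdout_rmse(null_predictors))
```

If the real proxies carried a usable temperature signal, the first number should come out clearly smaller than the second; the abstract’s point is that, with the actual data, it does not.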

Here are some excerpts from the paper (emphasis in paragraphs mine):

This one shows that M&M hit the mark, because it is independent validation:

In other words, our model performs better when using highly autocorrelated noise rather than proxies to “predict” temperature. The real proxies are less predictive than our “fake” data. While the Lasso generated reconstructions using the proxies are highly statistically significant compared to simple null models, they do not achieve statistical significance against sophisticated null models.

We are not the first to observe this effect. It was shown, in McIntyre and McKitrick (2005a,c), that random sequences with complex local dependence structures can predict temperatures. Their approach has been roundly dismissed in the climate science literature:

To generate “random” noise series, MM05c apply the full autoregressive structure of the real world proxy series. In this way, they in fact train their stochastic engine with significant (if not dominant) low frequency climate signal rather than purely non-climatic noise and its persistence. [Emphasis in original]

Ammann and Wahl (2007)
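For readers wondering how “random” series with proxy-like persistence are built in the first place, here is a rough sketch. MM05c used the full autocorrelation structure of each real proxy series; the toy below simplifies that to a single lag-one autocorrelation and simulates AR(1) noise from it, and the “proxy” it is fed is itself a made-up placeholder.

```python
# Simplified sketch of generating pseudoproxies that mimic a proxy's persistence.
# MM05c used the full autocorrelation structure; this uses only lag-1 (AR(1)).
import numpy as np

def pseudoproxies_like(series, n_sims, rng):
    """Simulate AR(1) noise whose persistence and variance roughly mimic `series`."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]                 # lag-one autocorrelation
    sigma = x.std() * np.sqrt(max(1.0 - phi ** 2, 1e-6))   # innovation standard deviation
    out = np.zeros((n_sims, len(x)))
    for t in range(1, len(x)):
        out[:, t] = phi * out[:, t - 1] + rng.normal(0.0, sigma, n_sims)
    return out

rng = np.random.default_rng(1)
real_proxy = np.cumsum(rng.normal(0, 0.2, 149))   # placeholder for one proxy record
fakes = pseudoproxies_like(real_proxy, n_sims=1000, rng=rng)
```

The Ammann and Wahl objection quoted above is that noise built this way inherits low-frequency behaviour from the proxy it imitates; the excerpt further up makes the counterpoint that, in the authors’ tests, even sophisticated null series like these match or beat the real proxies at predicting temperature.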

On the power of the proxy data to actually detect climate change:

This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12.

Footnote 12:

On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.

FIG 15. In-sample Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD.

We plot the in-sample portion of this backcast (1850-1998 AD) in Figure 15. Not surprisingly, the model tracks CRU reasonably well because it is in-sample. However, despite the fact that the backcast is both in-sample and initialized with the high true temperatures from 1999 AD and 2000 AD, it still cannot capture either the high level of or the sharp run-up in temperatures of the 1990s. It is substantially biased low. That the model cannot capture run-up even in-sample does not portend well for its ability to capture similar levels and run-ups if they exist out-of-sample.

Conclusion.

Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.

On the one hand, we conclude unequivocally that the evidence for a “long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run up in temperatures recorded in the 1990s, even in-sample.

As can be seen in Figure 15, our estimate of the run up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.

Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated.

The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.
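[For a sense of scale, here is a standard AR(1) rule of thumb rather than anything taken from the paper: when estimating a mean from n observations of a series with lag-one autocorrelation ρ, the effective sample size is roughly n_eff ≈ n(1 - ρ)/(1 + ρ). So 149 annual values with ρ = 0.8 carry about as much information as 149 × 0.2/1.8 ≈ 17 independent observations, which is the sense in which strong autocorrelation shrinks the effective sample size.]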

Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11, 12, and 13) produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).

Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate. Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.

===============================================================

Commenters on WUWT report that Tamino and Romm are deleting comments even mentioning this paper on their blog comment forum. Their refusal to even acknowledge it tells you it has squarely hit the target, and the fat lady has sung – loudly.

(h/t to WUWT reader “thechuckr”)

Pull My Finger
August 26, 2010 10:26 am

One item I have not seen discussed is the frequency of calibration and accuracy of the various temperature recording devices used over the last 30 to 150 years at the various stations that most of the AGW argument is based on, and what the methodology of recording and reporting data has been.
I would imagine temperature devices early in this data collection were lucky to be accurate to a degree, given they were likely your old run-of-the-mill mercury thermometers with probably very shaky quality control in production.
Another issue is, in the days before digital readouts, which were not all that long ago, what was the standard method of reporting? Round up? Round down? Was it consistent year over year, or did the procedure change a number of times? We’ve seen the temperature rise a fraction of a degree since 1970 or so; a simple change in rounding procedure could explain the entire issue.
Anyone have any insights?

latitude
August 26, 2010 11:05 am

Tim Folkerts says:
August 25, 2010 at 7:58 pm
“Interestingly, you are willing to believe that statisticians are experts in statistics and hence should be trusted. But you AREN’T willing to believe that climate scientists are experts in climate science and hence should be trusted.”
=========================================
Tim, All statisticians are trained in statistics.
No climate scientists are trained in computer programming or statistics.
But climate science has morphed into programming computer games and statistics.

tonyb
Editor
August 26, 2010 12:03 pm

Pull my finger
I wrote on that very subject in my article here:
Article: History and reliability of global temperature records. Author: Tony Brown
This article (part 1 of a series of three) examines the period around 1850/80 when global temperature records commence, and looks at the long history of reliable observations and records prior to the development of instrumental readings.
http://wattsupwiththat.com/2009/11/14/little-ice-age-thermometers-%e2%80%93-history-and-reliability/
tonyb

August 26, 2010 12:23 pm

RR Kampen: August 26, 2010 at 1:30 am
Don’t get put off by low concentrations. A very small amount of cyanide kills.
Cyanide is a poison — it’s been empirically proven that it *will* kill in lethal doses.
There is no empirical evidence that increasing carbon dioxide concentrations in free convection has any effect on the temperature — if anything, the evidence shows that carbon dioxide *lags* temperature by about 800 years.

Pull My Finger
August 26, 2010 12:32 pm

Thanks Tony B, interesting stuff.

Richard S Courtney
August 26, 2010 1:32 pm

RR Kampen:
Re your post at August 26, 2010 at 7:09 am ,
I sincerely apologise for mistyping your name and for having done it repeatedly.
I have no excuse but, in mitigation of my error, I point out that in each case I copied and pasted the time stamp, so those who checked my quotations of you would have seen that they were quotations of your words.
But my error was an error nonetheless. So, I offer no excuse and provide an abject apology for it.
Also, in the same post where you point out the error (for which I have here apologised) you ask me:
“Also a question: have you at one time misread ‘latitude’ for ‘altitude’? It would explain our misunderstanding.”
No! I stated “at altitude in the tropics” and you later talked about “latitude”. If there was any misreading then it was not by me.
The ‘hot spot’ is missing and this indicates that the postulated feedbacks required to convert any AGW into a discernible effect do not exist. Live with it, and rejoice at it because it is good news.
Richard

Wiglaf
August 26, 2010 1:40 pm

Is it true that cyanide only kills if it comes into contact with an acid (stomach acid for instance)?
Also, apple seeds have cyanide in them, but if you eat a couple of apples, seeds and all, you won’t die. However, if you eat a cupful of apple seeds, it will likely be fatal. In that case, the dose makes the poison. Of course, if you have cyanide gas, one whiff can send you into a coma. I’m sure that’s a far more concentrated and lethal administration than apple seeds.

bemused
August 26, 2010 2:04 pm

Latitude said:
“No climate scientists are trained in computer programming or statistics.”
What an extraordinary claim.

Bryan
August 26, 2010 3:16 pm

Bill Tuttle
Not only that, but I heard Lindzen in an amicable conversation with a consensus Professor, and they both agreed that on a forest walk you will encounter a CO2 level at 4 times the present average value with no problems for health.

Alan McIntire
August 26, 2010 8:50 pm

“bemused says:
August 26, 2010 at 2:04 pm
Latitude said:
“No climate scientists are trained in computer programming or statistics.”
What an extraordinary claim.”
Here’s a link to Mann’s 1998 “Nature” paper.
http://www.meteor.iastate.edu/classes/ge415/papers/Mann_et_al_Nature1998.pdf
Note the equation in the Methods section, on the page numbered 785.
It refers to the RE statistic; there’s a slight misprint, and it should read
b = 1 – ((Yref – Ypredicted)^2 / (Yref)^2)
The printer left out the division sign.
The last paragraph on the page reads:
“b is a quite rigorous measure of the similarity between two variables,
measuring their correspondence not only in terms of the relative departures
from mean values (as does the correlation coefficient r) but also in terms of the
means and absolute variance of the two series. For comparison, correlation (r)
and squared-correlation (r2) statistics are also determined. The expectation
value for two random series is b = -1. Negative values of b may in fact be
statistically significant for sufficient temporal degrees of freedom. Nonetheless,
the threshold b = 0 defines the simple ‘climatological’ model in which a series
is assigned its long-term mean…”
That’s a real faux pas.
Here’s a reference to RE2 from a statistical perspective rather than a climate perspective.
http://www.graphpad.com/curvefit/goodness_of_fit.htm
“Tip: Don’t make the mistake of using R2 as your main criterion for whether a fit is reasonable. A high R2 tells you that the curve came very close to the points. That doesn’t mean the fit is “good” in other ways. The best-fit values of the parameters may have values that make no sense (for example, negative rate constants) or the confidence intervals may be very wide….”
“Note that R2 is not really the square of anything. If SSreg is larger than SStot, R2 will be negative. While it is surprising to see something called “squared” have a negative value, it is not impossible (since R2 is not actually the square of R). R2 will be negative when the best-fit curve fits the data worse than a horizontal line at the mean Y value. This could happen if you pick an inappropriate model, or fix a parameter to an inappropriate constant value (for example, if you fix the Hill slope of a dose-response curve to 1.0 when the curve goes downhill). ”
Mann was confusing RE2 with r^2. r^2, the correlation, CAN be significant either positive or negative. RE2 is normally positive. If you just guess the AVERAGE for y, you’ll get an RE2 of zero. If you get a NEGATIVE value, it means your prediction is worse than just taking the average for all observed y. An example of that happening is where the actual graph is linear, and my model predicts exponential growth in the near future. The reviewers didn’t catch the fact that negative RE2 is significant only in the sense that your model is demonstrably crap. Given that nobody commented on this obvious faux pas until Steven McIntyre pointed it out, the statement that climatologists who read Mann’s NATURE article didn’t know statistics was correct.
Deep Climate made the same mistake as Mann in confusing RE2 with r^2:
http://deepclimate.org/2010/08/19/mcshane-and-wyner-2010/
“At almost 0.9, the RE score is well above the 95% significance level, which is only 0.4 for the “null” proxies. Recall the definition of RE (courtesy of the NRC): RE = 1 – MSE(model) / MSE(mean), where MSE(mean) is the mean squared error of using the sample average temperature over the calibration period (a constant) to predict temperatures during the period of interest”
Note that you can get a high RE2 by overfitting.
DEEPCLIMATE linked to this paper without going on to read page 95
http://www.nap.edu/openbook.php?record_id=11676&page=93
On page 95:
“Reconstructions that have poor validation statistics (i.e., low CE) will have correspondingly wide uncertainty bounds, and so can be seen to be unreliable in an objective way. Moreover, a CE statistic close to zero or negative suggests that the reconstruction is no better than the mean, and so its skill for time averages shorter than the validation period will be low. Some recent results reported in Table 1S of Wahl and Ammann (in press) indicate that their reconstruction, which uses the same procedure and full set of proxies used by Mann et al. (1999), gives CE values ranging from 0.103 to –0.215, depending on how far back in time the reconstruction is carried. ”
A high RE2 and a low correlation, as with Ammann and Wahl, indicates overfitting your model. For more on that, google “RE2 overfitting”.
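To make the RE versus r^2 distinction in this comment concrete, here is a toy sketch with invented numbers (not taken from Mann’s paper or the NRC report): a prediction can be perfectly correlated with the observations, giving r^2 = 1, and still have a strongly negative RE because it sits far from them, i.e. it does worse than simply guessing the calibration-period mean.

```python
# Toy illustration of RE (reduction of error) versus r^2; all numbers invented.
import numpy as np

def reduction_of_error(y_obs, y_pred, calibration_mean):
    """RE = 1 - SSE(prediction) / SSE(always guessing the calibration-period mean)."""
    sse_model = np.sum((y_obs - y_pred) ** 2)
    sse_mean = np.sum((y_obs - calibration_mean) ** 2)
    return 1.0 - sse_model / sse_mean

y_obs = np.array([0.1, 0.2, 0.4, 0.5])       # made-up verification-period "temperatures"
cal_mean = 0.0                               # made-up calibration-period mean

good = np.array([0.12, 0.18, 0.35, 0.55])    # tracks the observations closely
bad = np.array([-0.9, -0.8, -0.6, -0.5])     # perfectly correlated, badly offset

print(reduction_of_error(y_obs, good, cal_mean))   # ~0.99: beats the mean easily
print(reduction_of_error(y_obs, bad, cal_mean))    # ~-7.7: worse than the mean
print(np.corrcoef(y_obs, bad)[0, 1] ** 2)          # 1.0: yet r^2 is perfect
```

This is why a negative RE (or CE) is read as the reconstruction being no better than the climatological mean, regardless of how its correlation statistics look.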

August 26, 2010 11:51 pm

Djozar: Now I’m totally confused about ozone and the ozone hole
No need to be confused !
On its own, ozone cuts away about 15-20% of the sun’s radiation. A few decades ago CFCs were linked to the destruction of ozone, thereby increasing the ozone hole, allowing more UV radiation in….
This could be one of the (real) causes for modern warming. Last I looked, I saw ozone is increasing again. I have listed this as one of the reasons to expect (global) cooling.
http://letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok

August 27, 2010 12:07 am

Henry@Bryan, Bill etc.
The safe working level of CO2 in factories, greenhouses, etc. is 9000 mg/m3.
That is 0.75%. However, it won’t kill until 20 or 30%, and then only because of a lack of oxygen. CO2 in the air has increased from 0.03 to 0.04% during the last 50 years. So it is still safe here for a long time to come!!!!

RR Kampen
August 27, 2010 12:22 am

Henry Pool says:
August 26, 2010 at 7:13 am
RR kampen
It seems nobody here including yourself has yet been able to prove to me that CO2 is a greenhouse gas.

I hope the problem is not proving it to you. Anyway, you can do John Tyndall’s experiments (around 1860) in your own yard.
Here is another experiment: http://www.youtube.com/watch?v=SeYfl45X1wo .
Beware. I could not prove to most people that the ratio of circumference and diameter of a circle is not a rational number. They don’t know the math.

August 27, 2010 1:42 am

Henry pool, Djozar
Sure, ozone is another Greenhouse Gas. But its level fluctuates and the fluctuations seem to correlate to solar fluctuations enough to allow the possibility that the fluctuations could all be natural. Ozone’s absorption of unwanted UV is what heats the stratosphere.
Anthony
This terrific post has run a long time now, is there a reason you still want it “guarding the gates”, seeing that comment strength has diminished?
I look forward to more of your Surface Stations material.

Tim Spence
August 27, 2010 2:30 am

It was a tricky dicky, cherry picky, cocky crocky hockey sticky ……..
(anyone care to finish)

August 27, 2010 3:26 am

Henry@Lucy (abt the ozone)
I have not seen that correlation and doubt its existence. I think ozone’s depletion in the past and the consequent warming of the planet (due to the lack of ozone’s cooling) is the one thing that could be attributed to man.
If the net effect of a substance in the air is cooling (by re-radiating sunshine) rather than warming (by trapping earth-shine), would you still call it a greenhouse gas?
I think I would call ozone an anti-greenhouse gas. (look carefully at the incoming and outgoing radiation graphs)

RR Kampen
August 27, 2010 4:28 am

Bill Tuttle says:
August 26, 2010 at 12:23 pm
RR Kampen: August 26, 2010 at 1:30 am
Cyanide is a poison — it’s been empirically proven that it *will* kill in lethal doses.
There is no empirical evidence that increasing carbon dioxide concentrations in free convection has any effect on the temperature — if anything, the evidence shows that carbon dioxide *lags* temperature by about 800 years.

My point is that very low concentrations can have big effects. For greenhouse gases and ozone (a poison almost as potent as HCN!) this is evident.
The empirical evidence can be witnessed today.
…the evidence shows that carbon dioxide *lags* temperature by about 800 years.
You are referring to the process of end ice age, start interglacial. Couple of remarks:
– This process happens temperaturewise and CO2-wise much slower than the climate- and greenhouse changes we are witnessing today.
– There are more causes for climate change even if CO2 is today the most important one.
– Climate change by e.g. Milankovitch cycles gives rise to changes in land and sea vegetation. This and warming of the sea releases CO2 after the warming. 800 years is the mixing time of the oceans.
Of course, this CO2 adds to further warming immediately. From about one third to one half of the warming trajectory to interglacial, this CO2 becomes the main driver of further warming. In the well-known graphs comparing historical temp and CO2 you will see temp and CO2 go up together at this stage, no lag of 800 years anymore. In fact temp goes up a couple of years after the CO2 in this case – which is too small a time difference to distinguish.
Richard S Courtney, I am overwhelmed by your apology 🙂 Accepted, of course.

Djozar
August 27, 2010 5:05 am

Thanks Lucy & Henry

August 27, 2010 5:15 am

Henry@RR kampen August 27 4:28 AM
You have to be kidding me?
CO2 is a weak greenhouse gas, if it is one. You have not proven that CO2 is a greenhouse gas.
http://wattsupwiththat.com/2010/08/17/breaking-new-paper-makes-a-hockey-sticky-wicket-of-mann-et-al-99/#comment-467122

Stu
August 27, 2010 5:36 am

“It was a tricky dicky, cherry picky, cocky crocky hockey sticky ……..
(anyone care to finish)”
‘…that Mann swore showed the worst climb today’

Francisco
August 27, 2010 6:36 am

@Henry Pool
http://letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok
“To start off with, I found Svante Arrhenius’ formula completely wrong and since then I could not find any correctly conducted experiments (tests & measurements) that would somehow prove to me that the warming properties of CO2 (by trapping earth’s radiation between the wavelengths 14-15 um) are greater than its cooling properties (by deflecting sunlight at various wavelengths between 0 – 5 um).”
———-
I have always wondered why no experiments of this sort seem to have been conducted to measure at least the basic effect of different CO2 concentrations in air. Some say it is impossible because you cannot reproduce the entire atmosphere in lab conditions, and so on. But is this really needed? What insurmountable difficulties can there be in using, for example, open-top columns of air in which CO2 concentrations can be altered at will, over which you apply full-spectrum light from the top, with a bottom that absorbs heat and re-emits IR radiation, or something roughly along those lines, to determine the thermal effects near the surface of different concentrations of this gas? If these kinds of experiments exist, they are certainly not widely mentioned. Sometimes you hear that the basic effect is very well understood, easy to calculate, and easily observable in laboratory experiments, but specific references to any such experiments seem conspicuous by their absence. As far as I can tell, the effect is always calculated theoretically, often with much disagreement among experts, and some insisting it doesn’t even exist. The apparently complete lack of empirical corroboration on this crucial issue is a source of great puzzlement to me. Particle physicists have been able to devise extremely ingenious experiments to test and corroborate their theories. But somehow, measuring (however crudely) the magnitude of CO2’s greenhouse effect is beyond the capabilities of science. Has anyone found any concrete references to such experiments?

August 27, 2010 6:40 am

Henry at RR Kampen
The video proves warming, but you have to prove that CO2 warms more than it cools. Better read my previous posts (to which I have made reference).
I do recommend my video:

I laughed. And laughed.

August 27, 2010 6:43 am

RR Kampen: August 27, 2010 at 4:28 am
My point is that very low concentrations can have big effects. For greenhouse gases and ozone (a poison almost as potent as HCN!) this is evident.
My point — which you ignored — is that no one has ever proven that *any* increase in CO2 in free convection has an effect on the temperature.
The empirical evidence can be witnessed today.
Correlation is not causality. Kindly cite the empirical evidence — not the conjecture — that *proves* an increase in free atmospheric CO2 causes an increase in temperature.
You are referring to the process of end ice age, start interglacial.
Actually, I was referring to the beginning of several ice ages, when CO2 continued to increase as the temperatures continued to fall.
Couple of remarks:
– This process happens temperaturewise and CO2-wise much slower than the climate- and greenhouse changes we are witnessing today.

Not necessarily. According to the folks who analyze ice cores, temperature changed *rapidly* while CO2 lagged: “A more detailed ice core analysis shows an occasional abrupt change of climate during the last interglacial (the Eemian, at 120 kBP), changing by as much as 10K during only 10-30 years.”
http://www-das.uwyo.edu/~geerts/cwx/notes/chap01/icecore.html
– There are more causes for climate change even if CO2 is today the most important one.
If CO2 wasn’t the most important one in the past, why is it suddenly the most important one today?
– Climate change by e.g. Milankovitch cycles gives rise to changes in land and sea vegetation. This and warming of the sea releases CO2 after the warming. 800 years is the mixing time of the oceans.
That’s my point: that CO2 increases follow a rising temperature, and do not cause it. If rising CO2 levels *caused* an increase in temperatures, you would not see temperatures falling as CO2 levels continued to rise for an additional 800 years.
Of course, this CO2 adds to further warming immediately. From about one third to one half of the warming trajectory to interglacial, this CO2 becomes the main driver of further warming.
Again, that is conjecture — it hasn’t been *proven* — and if CO2 were the main driver, there would have been no temperature *decreases* during periods of increasing CO2.
In the well-known graphs comparing historical temp and CO2 you will see temp and CO2 go up together at this stage, no lag of 800 years anymore. In fact temp goes up a couple years after the CO2 in this case – which is to small a time difference to distinguish.
In the well-known USHCN V2 temperature/CO2 comparison graph from 1895 to the present, I see a cooling trend from 1933 until 1979, while CO2 rose from 308ppm to 337ppm. It is followed by a rise in temperature until 1999, then a slight cooling trend, all the while CO2 is continuing to rise. If CO2 were the main driver, the temperatures would have shown *no* cooling trends during that time.

August 27, 2010 6:48 am

Stu: August 27, 2010 at 5:36 am
*groan*
Now I have a Brian Hyland earworm…

Bryan
August 27, 2010 7:26 am

It’s a wonder that this news is not making more of an impact.
From the New Scientist
…”IT IS time to start asking the hard questions. Countless people in flood-stricken Pakistan have lost families and livelihoods. Who can they hold responsible and turn to for reparations?
Less than a decade ago, these questions would have been dismissed outright. “Many scientists at the time said that you can never blame an individual weather event on climate change,” says Myles Allen of the University of Oxford. But a small meeting of scientists in Colorado last week – organised by the US National Oceanic and Atmospheric Administration, the UK Foreign and Commonwealth Office and the UK Met Office’s Hadley Centre, among others – suggests the tide is turning.
The aim of the Attribution of Climate-Related Events workshop was to discuss what information is needed to determine the extent to which human-induced climate change can be blamed for extreme weather events – possibly even straight after they have happened.”………..
It seems to me that what they are proposing is that from now on, if there is a tragedy like a tsunami or Pakistan-type flooding, the DEFAULT POSITION will be that it can all be blamed on climate change.
Sceptics will have to prove that it wasn’t!
The burden of proof has now shifted from proving a hypothesis to proving the hypothesis wrong (Terry Oldberg please note).
And what is the Government body, the UK Foreign and Commonwealth Office, doing at what purports to be a meeting of scientists?
Very sinister!