NOTE: This has been running two weeks at the top of WUWT; discussion has slowed, so I’m placing it back in the regular queue. – Anthony
UPDATES:
Statistician William Briggs weighs in here
Eduardo Zorita weighs in here
Anonymous blogger “Deep Climate” weighs in with what he/she calls a “deeply flawed study” here
After a week of being “preoccupied” Real Climate finally breaks radio silence here. It appears to be a prelude to a dismissal with a “wave of the hand”
Supplementary Info now available: All data and code used in this paper are available at the Annals of Applied Statistics supplementary materials website:
http://www.imstat.org/aoas/supplements/default.htm
=========================================
Sticky Wicket – phrase, meaning: “A difficult situation”.
Oh, my. There is a new and important study on temperature proxy reconstructions (McShane and Wyner 2010), submitted to the Annals of Applied Statistics and slated for publication in the next issue. According to Steve McIntyre, this is one of the “top statistical journals”. This paper is a direct and serious rebuttal to the proxy reconstructions of Mann. It seems watertight on the surface because, instead of attacking the proxy data quality issues, the authors assumed the proxy data were accurate for their purpose, then created a Bayesian backcast method. Then, using the proxy data, they demonstrate it fails to reproduce the sharp 20th century uptick.
Now, there’s a new look to the familiar “hockey stick”.
Before:

After:

Not only are the results stunning, but the paper is highly readable, written in a sensible style that most laymen can absorb, even if they don’t understand some of the finer points of Bayesian methods, loess filters, or principal components. Beyond that, this paper is a confirmation of McIntyre and McKitrick’s work, with a strong nod to Wegman. I highly recommend reading this and distributing this story widely.
Here’s the submitted paper:
(PDF, 2.5 MB. Backup download available here: McShane and Wyner 2010 )
It states in its abstract:
We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.
Here are some excerpts from the paper (emphasis in paragraphs mine):
This one shows that M&M hit the mark, because it is independent validation:
In other words, our model performs better when using highly autocorrelated
noise rather than proxies to ”predict” temperature. The real proxies are less predictive than our ”fake” data. While the Lasso generated reconstructions using the proxies are highly statistically significant compared to simple null models, they do not achieve statistical significance against sophisticated null models.
We are not the first to observe this effect. It was shown, in McIntyre
and McKitrick (2005a,c), that random sequences with complex local dependence
structures can predict temperatures. Their approach has been
roundly dismissed in the climate science literature:
To generate ”random” noise series, MM05c apply the full autoregressive structure of the real world proxy series. In this way, they in fact train their stochastic engine with significant (if not dominant) low frequency climate signal rather than purely non-climatic noise and its persistence. [Emphasis in original]
Ammann and Wahl (2007)
…
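The point in the excerpts above, that strongly autocorrelated noise can appear “predictive” of an autocorrelated temperature series, is easy to demonstrate with a toy simulation. This is a minimal sketch, not the paper’s code: the AR(1) model, the persistence parameter 0.95, and the series lengths are illustrative choices of mine.

```python
import math
import random

random.seed(42)

def ar1(n, phi):
    """Generate an AR(1) series: x[t] = phi * x[t-1] + Gaussian noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

def corr(a, b):
    """Sample correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def mean_abs_corr(phi, trials=200, n=150):
    """Average |correlation| between pairs of INDEPENDENT series."""
    return sum(abs(corr(ar1(n, phi), ar1(n, phi))) for _ in range(trials)) / trials

persistent = mean_abs_corr(phi=0.95)  # strongly autocorrelated "pseudo-proxies"
white = mean_abs_corr(phi=0.0)        # ordinary white noise

# Independent but persistent series routinely show much larger
# spurious correlations than white noise does.
print(round(persistent, 3), round(white, 3))
```

The point is that high correlation with a persistent series is a weak null standard, which is why the paper insists on “sophisticated null models” rather than simple ones.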
On the power of the proxy data to actually detect climate change:
This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12
Footnote 12:
On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.
…

We plot the in-sample portion of this backcast (1850-1998 AD) in Figure 15. Not surprisingly, the model tracks CRU reasonably well because it is in-sample. However, despite the fact that the backcast is both in-sample and initialized with the high true temperatures from 1999 AD and 2000 AD, it still cannot capture either the high level of or the sharp run-up in temperatures of the 1990s. It is substantially biased low. That the model cannot capture the run-up even in-sample does not portend well for its ability to capture similar levels and run-ups if they exist out-of-sample.
…
Conclusion.
Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.
On the one hand, we conclude unequivocally that the evidence for a ”long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run up in temperatures recorded in the 1990s, even in-sample.
As can be seen in Figure 15, our estimate of the run up in temperature in the 1990s has
a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.
Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated.
The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.
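The effective-sample-size point can be quantified with the standard AR(1) rule of thumb, n_eff = n(1 − ρ)/(1 + ρ). This is my illustration, not the paper’s calculation; n = 149 is the 1850-1998 instrumental span mentioned above, and the ρ values are arbitrary examples:

```python
def effective_sample_size(n, rho):
    """Approximate effective sample size for an AR(1) process
    with lag-1 autocorrelation rho (a standard rule of thumb)."""
    return n * (1 - rho) / (1 + rho)

# 149 annual observations (1850-1998) at various persistence levels
for rho in (0.0, 0.5, 0.9):
    print(rho, round(effective_sample_size(149, rho), 1))
```

With ρ near 0.9, roughly 149 annual values carry the information of fewer than ten independent observations, which is the sense in which the effective sample size “may be just too small”.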
Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11,12, and 13), produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).
Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate. Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.
===============================================================
Commenters on WUWT report that Tamino and Romm are deleting comments that even mention this paper from their blog comment forums. Their refusal to even acknowledge it tells you it has squarely hit the target, and the fat lady has sung – loudly.
(h/t to WUWT reader “thechuckr”)

Henry@DCA engineer
*(There is an inverse relationship).
Obviously, that is quite apart from the fact that the burning of fossil fuels by man also adds CO2 to the atmosphere.
Henry,
Does this in any way relate to what Miskolczi has said?
RE: Henry Pool: (August 25, 2010 at 8:03 am) “Even without having any real test results (on that balance sheet), I would estimate that the net result of the increase of Co2 and ozone (when taken together) of the past 10 years is cooling rather than warming.”
According to the MODTRAN online infra-red atmospheric radiation calculator tool, the net effect of the CO2 increase from 371.51 to 390.09 ppm over the last ten years is to increase the raw temperature required to maintain a thermal outflow of 292.993 W/sqr mtr, (a standard reference that I use) from 301.01 to 301.16 deg K, a difference so small (0.15 deg C or 0.27 deg F) as to be less than the random noise to be expected in any single practical measurement. I use the default MODTRAN tool settings in clear tropical air, thus, I believe, no feedback or compensation effects should apply.
Sorry, don’t know M.
RR Kampen says:
August 18, 2010 at 1:29 am
“… question is whether the 30% extra CO2 compared to 1900 (and before) wouldn’t have helped 1998 to become so warm as it was. …”
One of the profound problems with using percentages is that a percentage can often impress as being larger than it is. Since the change actually compose about 9.5*10^-6 ppm of the atmosphere, rather than tossing scary sounding percentages, the question might be better cast as “… whether a change of 0.0000095 ppm compared to 1900 (and before) wouldn’t have helped 1998 to become as warm as it was? …” Do you really think so?
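A side note on units: a change quoted in ppm is already a fraction of the atmosphere (1 ppm = 10^-6), so a figure like “9.5*10^-6 ppm” mixes the two scales, a point another commenter below also reads as a typo. A sketch of the conversions, using the CO2 figures quoted elsewhere in this thread (371.51 → 390.09 ppm over the decade):

```python
# CO2 concentrations (ppm) quoted elsewhere in this thread
c_then, c_now = 371.51, 390.09

delta_ppm = c_now - c_then             # change in parts per million
delta_fraction = delta_ppm * 1e-6      # the same change as a fraction of the atmosphere
pct_increase = 100 * delta_ppm / c_then  # the same change as a percentage

print(round(delta_ppm, 2), delta_fraction, round(pct_increase, 1))
```

The same physical change reads as 18.58 ppm, about 1.9 × 10^-5 of the atmosphere, or about a 5% increase, which is exactly why the choice of units shapes the impression.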
RE: Spector:
(August 25, 2010 at 12:04 pm) “…from 301.01 to 301.16 deg K, a difference so small (0.15 deg C or 0.27 deg F)”
Correction: The above should read “from 301.10 to 301.16 deg K, a difference so small (0.06 deg C or 0.108 deg F)”
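A trivial sketch of the corrected arithmetic (a kelvin interval equals a Celsius interval, and Fahrenheit intervals are 9/5 as large):

```python
# Corrected MODTRAN figures from the comment above (deg K)
t_before, t_after = 301.10, 301.16

delta_k = t_after - t_before   # kelvin difference = Celsius difference
delta_f = delta_k * 9 / 5      # convert the interval to Fahrenheit degrees

print(round(delta_k, 3), round(delta_f, 3))
```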
Vince Whirlwind (August 23, 2010 at 9:36 PM):
Thank you for giving me the opportunity to clarify my post (August 22,2010 at 12:23 PM).
Regarding Arrhenius and the burden of proof, Arrhenius says this in his 1906 paper “Die vermutliche Ursache der Klimaschwankungen”: “The opinion that a decrease of carbonic acid in the air can explain ice-age temperatures is not proved wrong until it is shown that the total disappearance of carbonic acid from the atmosphere would not be sufficient to cause a lowering of temperatures about four to five degrees.” “Carbonic acid” is Arrhenius’s term for CO2.
The geologist who blogs at (http://hothouse.geologist-1011.net/) offers the following analysis of the above referenced quote from Arrhenius: “It is unscientific to say that an idea is true until such time as it is proven wrong and to suggest an unmeetable condition for falsification of a scientific idea is extreme sophistry. The onus of proof rests firmly upon the proposer of a hypothesis, not with its refutation. Every scientist understands this, and arguing as Arrhenius did in the above quotation, without the repeated confirmation of the idea (as opposed to its underlying principles), constitutes not just fallacy or sophistry, but scientific dishonesty. Moreover, the conditions Arrhenius has set for the refutation of his idea are scientifically impossible to achieve, unless by some dark art it is possible to remove all atmospheric carbon dioxide from the planet. Arrhenius, by imposing an impossible condition for falsification, has tacitly admitted that his “Greenhouse Effect” is not falsifiable. We may take this statement of Arrhenius (1906a) as a clear indication that the “Greenhouse Effect” is simply not science. In fact, this is the first clear evidence that the “Greenhouse Effect” is a hoax.”
Regarding your argument that “…by the logic of your second paragraph, if I were to say ‘Sunrise tomorrow is at 0617’ and tomorrow happens to be a cloudy day, then sunrise has not occurred,” your argument is of the form of a strawman argument that achieves its end of falsifying the opposing argument by interpreting words or phrases of the latter argument in unusual ways. As words and phrases of my paragraph 2 usually are interpreted, a “cause” is a state of nature such as “cloudy.” An “effect” is a state of nature such as “rain in the next 24 hours.” If the event of “cloudy” is not followed by the event of “rain in the next 24 hours” then the cause and effect relation “given cloudy, rain in the next 24 hours” is falsified by the empirical evidence. This argument is perfectly logical.
Regarding your statement that “You seem to be struggling with some very basic intellectual skills” this statement is an example of an ad hominem argument. As such, it is logically fallacious.
@ur momisugly stephen richards
“These are STATISTICIANS, they did statistics. Get It!?”
Interestingly, you are willing to believe that statisticians are experts in statistics and hence should be trusted. But you AREN’T willing to believe that climate scientists are experts in climate science and hence should be trusted.
This is a general problem with BOTH sides. We are (subconsciously or openly) biased toward the experts whose results agree with our expectations. The debate here should go a long way toward helping us reach a better understanding, where ideas are vetted and discussed to find the strongest points on both sides of the argument. The challenge is to allow ourselves to listen when the “other side” makes a good point.
@tim folkerts
‘Interestingly, you are willing to believe that statisticians are experts in statistics and hence should be trusted. But you AREN’T willing to believe that climate scientists are experts in climate science and hence should be trusted.’
This argument would have more credence with me if you could show that there is indeed some special ‘science’ unique to climatology, and that can only be practiced by those who have trained in that particular field.
If there is such a science, different from the traditional subjects of statistics, maths, chemistry, physics etc..and where the standard techniques used in those disciplines do not apply, then please let me know.
Because as far as I can tell (I have a Chemistry degree), ‘climate science’ just takes a bit of some of those and mixes them all together … and then somehow comes up with what they claim to be ‘climate unique’ answers.
An easy way to show me that I am wrong would be to publicise a degree-level (undergraduate or masters) syllabus for a ‘climate science’ degree, and to highlight for me the bits that are different.
In the case under discussion here, the authors have explicitly used the data collected and ‘adjusted’ by climate scientists, and then used standard statistics to draw their conclusions. If it is your case that there are some special ‘climate-related’ ways of doing statistics that invalidate their conclusions, then I suggest that you need to provide some greater intellectual basis for this than from a self-taught, self-certified climate scientist such as Mann simply declaring it to be so.
Because, frankly, I do not believe this assertion. Any more than I would believe it if Mann declared that there was a special ‘climate chemistry’, only accessible to ‘climate chemists’ and different from that practiced by thousands of conventional chemists daily.
We used to have a word for that sort of belief…it was called Alchemy.
This is off-topic a bit from the original blog post (which I don’t see as a “smoking gun” but as an interesting step toward more and better research and discussions).
Anyway — comments from Richard S Courtney August 20, 2010 at 9:48 am made me dig a little on my own. He specifically mentioned the HadCRUT3 data set, so I downloaded it ….
Here are some claims, some counter-claims, and a little analysis of my own …
“Reality: the past decade was warmest; decade 1990′s second warmest, decade 1980′s third warmest and the difference between the nineties and tens is bigger than between the eighties and nineties. In other words: warming is accelerating.
Also, we are almost on par with 1998, not 1990 – without a superNiño and during deep solar minimum: http://www.weerwoord.be/uploads/14820101561.png . Some ice is melting, too.”
Sorry, but even Phil Jones agrees that there has been no statistically significant warming for the last 15 years (i.e. since 1995). And the period from 1940 to 1970 showed similar decline.
Both are correct to some extent, but Richard seems farther off.
The decadal average temperature anomalies (for the decade ending in each listed year) were:

1859: -0.37
1869: -0.36
1879: -0.27
1889: -0.30
1899: -0.39
1909: -0.44
1919: -0.43
1929: -0.30
1939: -0.13
1949: -0.07
1959: -0.16
1969: -0.11
1979: -0.09
1989: 0.08
1999: 0.24
2009: 0.41
Clearly the top 3 decades were 2000-2009, 1990-1999 and 1980-1989 respectively.
As to the “no statistically significant warming for the last 15 years”, that does seem to be true as well! For the years 1995-2009, the regression equation is C14 = 0.307 + 0.0109 YEAR (where “YEAR” is the year since 1995).
The p-value for the slope is 0.088. The generally accepted value for “statistically significant” is 0.05 or smaller, i.e., the odds of getting such an extreme result “by accident” are 1 in 20 (or less). So it does fail that test. On the other hand, this p-value suggests that the odds of such an extreme slope “by accident” are only about 1 in 12. Not definitive, but certainly suggestive.
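The kind of significance test described here can be sketched from first principles. Since the annual 1995-2009 values are not reproduced in this thread, the example below instead fits a trend to the sixteen decadal anomalies listed above; it is meant only to show how a slope, its standard error, and the 5% two-sided t-threshold are computed, not to reproduce the quoted p-value:

```python
import math

# Decadal HadCRUT3 anomalies listed above (decades ending 1859 ... 2009)
anoms = [-0.37, -0.36, -0.27, -0.30, -0.39, -0.44, -0.43, -0.30,
         -0.13, -0.07, -0.16, -0.11, -0.09, 0.08, 0.24, 0.41]
x = list(range(len(anoms)))          # decade index 0..15
n = len(anoms)

mx, my = sum(x) / n, sum(anoms) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, anoms))
slope = sxy / sxx                    # degrees per decade
intercept = my - slope * mx

resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, anoms)]
s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
se_slope = math.sqrt(s2 / sxx)
t_stat = slope / se_slope

t_crit = 2.145   # two-sided 5% critical value, t-distribution, 14 d.o.f.
print(round(slope, 4), round(t_stat, 2), t_stat > t_crit)
```

Note this test treats each decadal mean as an independent observation; as the paper excerpted above argues, strong autocorrelation shrinks the effective sample size and makes such p-values optimistic.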
Furthermore, if there has been no warming since 1995, within the error estimates then I am correct to say that “Global temperature is now similar to that of 1990″.
This one could go either way, depending on how you want to define “now” and “similar”.
For 1995-2009 (the period that had been under discussion), the global average temperature anomaly from the data set you quoted was 0.3835, with a standard deviation of 0.1066. 1990 was 1.2 standard deviations below the mean, so it is noticeably cooler, but not statistically significantly cooler (which is typically taken as 2 standard deviations from the mean). On the other hand, 1990 IS significantly cooler than “now” if “now” means the period 2007-2009. Or 2006-2009. Or 2005-2009. Or any period back to 1998-2009.
So if “now” means the years since 1997 (or earlier) to now, then 1990 is NOT “significantly” cooler. But if “now” means the years since 1998 (or later), then 1990 IS significantly cooler. (And 1990 was a relatively warm year. 1989 or 1991 was even MORE significantly cooler.)
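The “standard deviations” arithmetic above can be made concrete using only the numbers quoted (mean 0.3835, standard deviation 0.1066):

```python
mean, sd = 0.3835, 0.1066   # 1995-2009 anomaly statistics quoted above

# Anomaly implied by "1.2 standard deviations below the mean" for 1990
t_1990_implied = mean - 1.2 * sd
# Threshold for "statistically significantly cooler" (2 sd below the mean)
threshold = mean - 2 * sd

print(round(t_1990_implied, 4), round(threshold, 4))
```

So on these figures, 1990 would need an anomaly below roughly 0.17 to count as “significantly” cooler than the 1995-2009 mean under the 2-sigma convention.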
And your assertion that “warming is accelerating” is plain fantasy. Indeed, warming stopped from its rapid rate from ~1970 to ~2000 some 10 years ago. A rational discussion would be as to if and when similar rapid warming will resume.
This one I give to Richard. I took “rate of global warming” to be the slope of the global temperatures for the previous decade. The rate was unusually high from 1998 to 2005, but for the last several years the rate of warming is almost zero. So while these years are still exceptionally warm, there has been little ADDITIONAL warming for the past 5 years.
In fact, perhaps the most unprecedented warming occurred from around 1915 to 1945. (Although that period was also followed by the biggest spell of global cooling in the record. It will be quite interesting to see in a decade if we again get a big cooling spell, or if we continue to have warming (or at least steady) temperatures.)
You conclude by asserting:
“AGW cannot be debunked because it is the reality.”
A truthful statement is that
AGW has been debunked by reality.
Based just on what I say here, I’d have to say the jury is still out on AGW. I think both sides need to refine their arguments. 🙂
Tim
P.S. I wish there was a “preview” feature so I could check the html tags.
Tim Folkerts says:
August 25, 2010 at 7:58 pm
“Interestingly, you are willing to believe that statisticians are experts in statistics and hence should be trusted. But you AREN’T willing to believe that climate scientists are experts in climate science and hence should be trusted.”
=================================
Tim, you have to be joking !
Did you read the emails released from the UEA/CRU ?
There are plenty of reasons out there NOT to trust climate scientists.
Until the statisticians stuff up the way the climate scientists did, I would trust the statisticians well before I would trust the climate scientists, especially the ones pushing AGW.
Duster says:
August 25, 2010 at 12:13 pm
One of the profound problems with using percentages is that a percentage can often impress as being larger than it is. Since the change actually compose about 9.5*10^-6 ppm of the atmosphere, rather than tossing scary sounding percentages, the question might be better cast as “… whether a change of 0.0000095 ppm compared to 1900 (and before) wouldn’t have helped 1998 to become as warm as it was? …” Do you really think so?
Having a mathematics background, I am insensitive to ‘impression’. Whether you take absolute numbers or percentages, the concentration of an important greenhouse gas has risen considerably (much more than your “0.00000095 ppm”, which I take to be a sort of typo). This MUST have consequences; which is under discussion. I really think it causes global warming, yes.
Don’t get put off by low concentrations. A very small amount of cyanide kills. Concentrations of ozone in the ozone layer are at most 1/400th of that of CO2 in the lower troposphere, and because stratospheric air is so much thinner, the absolute quantity of ozone is really negligible. Except: without it, you and I would not live.
Unfortunately this is not just about the data. Scientists, palaeontologists, archaeologists and other people from every walk of ‘scientific’ life will, having spent a good number of years working on a particular subject, believe that what they have produced is fact, irrespective of whether there are inconsistencies or questionable ‘shortcuts’ with their work.
They will also do everything to ensure that anyone questioning their work or suggesting new ideas is shouted down or made to look like an idiot as their reputation (and ego) is at stake.
These days, funding is of major importance and as climate change (AGW) is the new religion worth billions, scientists will sell their souls for a slice of the action.
Remember, deniable deceit is just as important as the data.
It has been almost a week since the Google News count for “global warming” fell below 7000. I’ve not checked whether this could be due to less online news generally, but given the mediocre news stories on this subject, it seems pretty obvious that the fire has gone out of this story. All we are left with is a few smoking embers from die-hard global-warming-believing journalists who’ll keep on printing that the world is warming even if we were all freezing in an ice age.
It also indicates that the academics no longer view “global warming” as a nice tag on which to hang their latest research finding – so there isn’t the academic push fanning the fire.
So, with yet another clear rebuttal of Mann, I can’t realistically see this whole thing flaring up again. OK, there’ll be a Google News peak when they manufacture the “hottest year ever” by sending a lot of colder sensors off for calibration and removing their data from the data set, but seriously, who will really care if it is 1/10 of a degree warmer than last year?
Tim Folkerts:
Thank you for your comparison of my and KK Hempen’s comments based on consideration of HadCRUT3 data that you provide at August 25, 2010 at 9:30 pm.
It is very helpful and clearly not one-sided so others can evaluate it for themselves (but I think most will not agree with your assertion that “Richard seems farther off” because you do not state any clear error in my statements).
Importantly, you conclude by saying:
“Based just on what I say here, I’d have to say the jury is still out on AGW. I think both sides need to refine their arguments.”
Indeed, based only on what you say then the “jury” would “still be out”. But your conclusion is based solely on the HadCRUT3 data and ignores the missing hot spot. Please note that this ‘hot spot’ evidence is the most important which is why I stated it twice in my post at August 19, 2010 at 7:46 am. KK Hampen recognised this so tried – and failed – to pretend it does not exist.
In my post at August 19, 2010 at 7:46 am I wrote:
“The absence of the tropospheric ‘hot spot’ is direct evidence that the positive feedbacks required for CAGW are NOT happening.”
and
“5.
The pattern of atmospheric warming predicted by the AGW hypothesis is absent.
The hypothesis predicts most warming of the air relative to the surface at altitude in the tropics. Measurements from weather balloons and from satellites both show slight cooling relative to the surface at altitude in the tropics.”
In response to KK Hampen disputing those statements, I wrote to him at August 20, 2010 at 9:48 am saying:
“I wrote concerning the AGW hypothesis:
“The hypothesis predicts most warming of the air relative to the surface at altitude in the tropics.”
But you reply:
“In reality the hypothesis predicts most warming at high latitudes, particularly northern high latitudes. The hypothesis does so now and it did so when I studied climatology in an era the IPCC still had to be set up. The observations are conform hypothesis.”
Sorry, but the IPCC AR4 agrees with me and not you.
The matter is explained in Chapter 9 of the AR4 and you can read it at
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf
Figure 9.1 (on page 675) summarises the expected responses to various forcings from 1880 to 1999. Figure 9.1(c) shows the expectation from GHG increase and Figure 9.1(f) the sum of all forcings.
Figure 9.1(c) is the only diagram of the set of individual forcings that provides the pattern of warming I described.
And both Figures 9.1 (c) and (f) display the pattern I described because the AGW prediction is that the effect of the increased GHGs is to overwhelm the effects of the other forcings.
That warming at altitude in the tropics has not happened according to radiosonde (i.e. balloon) measurements taken over the last 50 years and has not happened according to MSU (i.e. satellite) measurements taken since 1979.
Indeed, the data indicates slight cooling at altitude in the tropics (i.e. the opposite of the expected effect of GHGs).”
In science it is not permissible to ignore the primary evidence then declare:
“The jury is still out”.
Richard
This “decade” thing is very silly. We have given sentient qualities to the Earth’s oceanic, topographic, and atmospheric entities? Does it think, “New decade, so I must get warmer (or colder – take your pick).” No. So if you don’t mind, let’s stop with the decade thing. It is meaningless and is simply a way to impress your opinion (on either side of the fence) onto less intellectually agile minds.
During La Nina months, global temperatures demonstrated [insert whatever the trend is]. During El Nino months, global temperatures demonstrated [insert whatever the trend is]. During ENSO neutral months, global temperatures demonstrated [insert whatever the trend is]. Do the same for AMO, PDO, etc. That is a more meaningful discussion of trends than arbitrary decades, years, or months.
A couple of days ago, I tried for the third time to post the following comment to the Realclimate discussion about the Hockey stick. “The reports of McIntyre, Wegman, and McShane all conclude that Mann’s method produces the hockey stick from random data. Therefore the hockey stick has no statistical significance.” Three times the comment has been blocked. If you go through their discussions looking for the word “random” you find no mention of the effect of red-noise (random data) upon Mann’s method. Maybe they just have a thing against me personally. Perhaps someone else could try it and see if their effort is blocked.
“This MUST have consequences; which is under discussion. I really think it causes global warming, yes. ”
This is contrary to your own quote: “Having a mathematics background, I am insensitive to ‘impression’.” Can you elaborate?
“A very small amount of cyanide kills.”
This kind of sound bite makes the AGW theory untrustworthy and even silly.
Richard, can you point me to the posts by KK Hempen and KK Hampen? The moderator may have snipped them?
Also a question: have you at one time misread ‘latitude’ for ‘altitude’? It would explain our misunderstanding.
Henry@RR kampen
It seems nobody here including yourself has yet been able to prove to me that CO2 is a greenhouse gas.
http://wattsupwiththat.com/2010/08/17/breaking-new-paper-makes-a-hockey-sticky-wicket-of-mann-et-al-99/#comment-466077
http://letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok
I do agree that O3 could have been destroyed by CFC’s in the past leading to global warming. We fixed that problem, did we not?
A preview of what to expect from the IPCC’s AR5!
The lead author of chapter 1, Deliang Chang, made the following statement. Mr Chang is a professor in meteorological physics at the University of Gothenburg.
“Tittar man på de senaste 10, 30, 50 årens meteorologiska data så inte bara tror vi utan VET att extremhändelserna blivit vanligare och ökat i intensitet och att det har med klimatförändringar att göra.”
Translation: “If you look at the last 10, 30, 50 years of meteorological data, we not only believe but KNOW that extreme events have become more frequent and increased in intensity, and that this is related to climate change.”
Well, there are many things that make me proud of Sweden… this is not one of them!
Source http://sverigesradio.se/sida/artikel.aspx?programid=3345&artikel=3946821
No, the ozone hole did disappear, but it grew back again. It is quite evidently a natural phenomenon. As usual, scientists pop their heads into the middle of a cycle and do straight-line extrapolations.
@Tim Folkerts
“Interestingly, you are willing to believe that statisticians are experts in statistics and hence should be trusted. But you AREN’T willing to believe that climate scientists are experts in climate science and hence should be trusted.”
It has been a recurring problem over many years, particularly in the social sciences I must say, but also found in biological or physical sciences, that some domain researchers failed to either get a good intellectual grounding in statistics, or consult a good statistician, and consequently produce flawed or meaningless results in their papers. When I worked in biomedical research, the guys with medical or biology PhDs would always make sure and consult with epidemiologists and statisticians to make sure they weren’t making errors of that sort.
It is precisely this scenario that is half of the question about Mann’s results. So it is very appropriate for statisticians to examine the statistical techniques used. (The other questions about Mann involve mismanaged or poorly chosen data.)
Climate scientists in recent years seem to be uncommonly prone to both bad data and bad statistics.
Now I’m totally confused about ozone and the ozone hole. In the past year, I’ve read articles that cite:
– closing the ozone hole could contribute to global warming.
– no real data exists for ozone hole before 1970.
– the chemical reaction originally used to link CFCs to ozone depletion was erroneous.
– the size of the ozone hole has been relatively static despite the ban on CFCs and the reduction in HCFCs.
Anyone care to enlighten me?