New paper makes a hockey sticky wicket of Mann et al 98/99/08

NOTE: This has been running two weeks at the top of WUWT, discussion has slowed, so I’m placing it back in the regular queue.  – Anthony

UPDATES:

Statistician William Briggs weighs in here

Eduardo Zorita weighs in here

Anonymous blogger “Deep Climate” weighs in with what he/she calls a “deeply flawed study” here

After a week of being “preoccupied” Real Climate finally breaks radio silence here. It appears to be a prelude to a dismissal with a “wave of the hand”

Supplementary Info now available: All data and code used in this paper are available at the Annals of Applied Statistics supplementary materials website:

http://www.imstat.org/aoas/supplements/default.htm

=========================================

Sticky Wicket – phrase, meaning: “A difficult situation”.

Oh, my. There is a new and important study on temperature proxy reconstructions (McShane and Wyner 2010), submitted to the Annals of Applied Statistics and listed for publication in the next issue. According to Steve McIntyre, this is one of the “top statistical journals”. The paper is a direct and serious rebuttal to the proxy reconstructions of Mann. It seems watertight on the surface because, instead of attacking the proxy data quality issues, the authors assumed the proxy data were accurate for their purpose, then created a Bayesian backcast method. Using the proxy data, they then demonstrate that it fails to reproduce the sharp 20th century uptick.

Now, there’s a new look to the familiar “hockey stick”.

Before:

Multiproxy reconstruction of Northern Hemisphere surface temperature variations over the past millennium (blue), along with 50-year average (black), a measure of the statistical uncertainty associated with the reconstruction (gray), and instrumental surface temperature data for the last 150 years (red), based on the work by Mann et al. (1999). This figure has sometimes been referred to as the hockey stick. Source: IPCC (2001).

After:

FIG 16. Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD and backcasts 998-1849 AD. The cyan region indicates uncertainty due to t, the green region indicates uncertainty due to β, and the gray region indicates total uncertainty.

Not only are the results stunning, but the paper is highly readable, written in a sensible style that most laymen can absorb, even if they don’t understand some of the finer points of Bayesian methods, loess filters, or principal components. Moreover, this paper is a confirmation of McIntyre and McKitrick’s work, with a strong nod to Wegman. I highly recommend reading this and distributing the story widely.

Here’s the submitted paper:

A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?

(PDF, 2.5 MB. Backup download available here: McShane and Wyner 2010 )

It states in its abstract:

We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.

Here are some excerpts from the paper (emphasis in paragraphs mine):

This one shows that M&M hit the mark, because it is independent validation:

In other words, our model performs better when using highly autocorrelated noise rather than proxies to “predict” temperature. The real proxies are less predictive than our “fake” data. While the Lasso generated reconstructions using the proxies are highly statistically significant compared to simple null models, they do not achieve statistical significance against sophisticated null models.

We are not the first to observe this effect. It was shown, in McIntyre and McKitrick (2005a,c), that random sequences with complex local dependence structures can predict temperatures. Their approach has been roundly dismissed in the climate science literature:

To generate ”random” noise series, MM05c apply the full autoregressive structure of the real world proxy series. In this way, they in fact train their stochastic engine with significant (if not dominant) low frequency climate signal rather than purely non-climatic noise and its persistence. [Emphasis in original]

Ammann and Wahl (2007)
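The effect M&M and McShane–Wyner describe – highly autocorrelated noise “predicting” a trending series in-sample – is easy to demonstrate. Below is a minimal toy sketch using ordinary least squares on synthetic data; it is not the paper’s actual Lasso procedure, and the numbers are made up rather than real proxy or CRU series:

```python
# Toy illustration (not McShane & Wyner's Lasso method): AR(1) series
# generated with no knowledge of "temperature" can still fit a trending
# record surprisingly well in-sample.
import random

random.seed(1)

n = 149  # length of a 1850-1998-style calibration window
# Synthetic "temperature": a modest linear trend plus measurement noise.
temp = [0.005 * t + random.gauss(0, 0.1) for t in range(n)]

def ar1(length, phi=0.95):
    """Highly autocorrelated noise, independent of temperature."""
    x, out = 0.0, []
    for _ in range(length):
        x = phi * x + random.gauss(0, 0.1)
        out.append(x)
    return out

def r_squared(y, x):
    """In-sample R^2 of a simple least-squares fit of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Best in-sample fit among a handful of independent AR(1) "pseudo-proxies":
best = max(r_squared(temp, ar1(n)) for _ in range(20))
print(round(best, 2))  # prints the best R^2 achieved by pure noise
```

With persistent noise (phi near 1), the best of even a small batch of random series typically “explains” a nontrivial fraction of the variance – which is why the paper insists on sophisticated null models rather than white-noise nulls.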

On the power of the proxy data to actually detect climate change:

This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000-year-old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12

Footnote 12:

On the other hand, perhaps our model is unable to detect the high level of and sharp run-up in recent temperatures because anthropogenic factors have, for example, caused a regime change in the relation between temperatures and proxies. While this is certainly a consistent line of reasoning, it is also fraught with peril for, once one admits the possibility of regime changes in the instrumental period, it raises the question of whether such changes exist elsewhere over the past 1,000 years. Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise.

FIG 15. In-sample Backcast from Bayesian Model of Section 5. CRU Northern Hemisphere annual mean land temperature is given by the thin black line and a smoothed version is given by the thick black line. The forecast is given by the thin red line and a smoothed version is given by the thick red line. The model is fit on 1850-1998 AD.

We plot the in-sample portion of this backcast (1850-1998 AD) in Figure 15. Not surprisingly, the model tracks CRU reasonably well because it is in-sample. However, despite the fact that the backcast is both in-sample and initialized with the high true temperatures from 1999 AD and 2000 AD, it still cannot capture either the high level of or the sharp run-up in temperatures of the 1990s. It is substantially biased low. That the model cannot capture the run-up even in-sample does not portend well for its ability to capture similar levels and run-ups if they exist out-of-sample.

Conclusion.

Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.

On the one hand, we conclude unequivocally that the evidence for a “long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run-up in temperatures recorded in the 1990s, even in-sample.

As can be seen in Figure 15, our estimate of the run-up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.

Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated.

The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.

Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy based models with approximately the same amount of reconstructive skill (Figures 11,12, and 13), produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).

Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate. Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.

===============================================================

Commenters on WUWT report that Tamino and Romm are deleting comments that even mention this paper from their blog comment threads. Their refusal even to acknowledge it tells you it has squarely hit the target, and the fat lady has sung – loudly.

(h/t to WUWT reader “thechuckr”)

JB7088
August 20, 2010 5:36 pm

“..consider the irony of the critics embracing a paper that contains the line “our model gives a 80% chance that [the last decade] was the warmest in the past thousand years”….”-Real Climate

August 20, 2010 6:25 pm

Carry your “after” graph out to 2010, and you STILL have a hockey Stick.

cohenite
August 20, 2010 6:32 pm

barry says:
“1. barry says:
August 20, 2010 at 8:12 am
cohenite,
I am referring to the 20th century temperature rise, not 30-year blocks within it (which have some similarities – but then we need to talk about attribution – not here, though)
1. barry says:
August 20, 2010 at 8:30 am
Lab tests performed thousands of times in high schools and universities definitively show that increasing CO2 in a volume of atmosphere will result in more infrared absorption leading to the heating of the volume.
It would be nice to have a nearby planet composed much like ours is and dump enormous quantities of CO2 into its atmosphere to test the results.”
All of this is in one way or another wrong. The first part is wrong for this reason: since 1850, when by general consensus the LIA finished and there was a slight increase in TSI, there have been 3 warm periods correlating with PDO phase changes:
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1976/to:1998/trend/plot/hadcrut3vgl/from:1910/to:1940/trend/plot/hadcrut3vgl/from:1850/to:1880/trend/plot/hadcrut3vgl/from:1998/trend
The first warm period, or “blocks” as barry calls them, was ~ from 1850-1880 and the rate of increase in temp then was 0.00525384 per year. In the 2nd warm period from ~1910-1940, the rate of increase was 0.0152788 per year, nearly 3 times the preceding warm period; during 1910-1940 the increase in CO2 was slight; the 3rd warm period was from ~1976-1998 and the rate of increase was 0.0146429 per year, less than the preceding warm period; from 1998 onwards the rate of increase has been 0.00230116 per year, much less than from 1976-1998 and that increase is entirely due to the El Nino conditions at the beginning of 2010.
The inescapable conclusion is that temp movement during the 20thC, indeed since 1850, have been closely correlated with TSI:
http://www.rocketscientistsjournal.com/2010/03/sgw.html
This is entirely to the point of the hockey stick and the current critique of it by M&W; there is no exceptional rate of temp increase during the alleged period of maximum AGW.
As for the 2nd comment by Barry about increases in CO2 inevitably leading to warming; this is contradicted by the lack of increase in optical depth over the last 60 years; this is the [in]famous Tau of Miskolczi which even Roy Spencer concedes. The simple fact of CO2 is that its greenhouse properties are defined by Beer-Lambert, decreasing, asymptotic, effect and dwarfed by water as Ramanathan shows:
http://scienceofdoom.files.wordpress.com/2010/02/ramanathan-coakley-1978-role-of-co2.png
The respective greenhouse properties of H2O and CO2 show that H2O has 2.5 times the greenhouse effect of CO2; on this basis, ignoring the TSI effect, any temp movement over the 20thC, about 0.7C, should have attribution between H2O and CO2 done on the basis of a 5:2 ratio.
Finally, your wish to have a planet nearby with large amounts of CO2 present; we have 2; Venus and Mars, which have 96% and 95% atmospheric concentrations of CO2; the difference in atmospheric temps between those 2 planets is due to atmospheric pressure not the radiative properties of the concentrations of CO2.
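For readers unfamiliar with the woodfortrees trends cohenite cites above: they are plain ordinary-least-squares slopes fitted over the chosen window. A minimal sketch of that calculation, using made-up anomaly values rather than actual HadCRUT3 data:

```python
# How a woodfortrees-style linear trend is computed: an ordinary
# least-squares slope over the chosen window. The anomaly values below
# are synthetic, for illustration only -- NOT HadCRUT3 data.
def ols_slope(years, anomalies):
    """Least-squares slope in degrees C per year."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anomalies) / n
    num = sum((y - my) * (a - ma) for y, a in zip(years, anomalies))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1910, 1941))
anoms = [-0.4 + 0.015 * (y - 1910) for y in years]  # synthetic 0.015 C/yr ramp
print(round(ols_slope(years, anoms), 3))  # -> 0.015
```

The WFT site reports exactly this kind of slope for each `trend` segment, which is why the quoted figures carry so many decimal places: the arithmetic produces them, whether or not the data justify them.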

Shub Niggurath
August 20, 2010 7:34 pm

Russel “we deride”,
For someone who spent a good amount of time indulging in rancorous google-o-matic sputterings about Lord Monckton in the service of AGW, your imagination does run wild. About the science and stuff.

barry
August 20, 2010 7:39 pm

Vince,

Sceptics accept that the gas absorbs ir

If you’ve been reading the comments here, you’ll see that there is quite a lot of resistance to the notion (see my reply to George below, for example). However, I agree that prominent skeptics in the literature agree with this empirical fact (Lindzen, Pielke Sr, Spencer, Christy etc). Roy Spencer attempted to set the story straight at his blog, but many skeptical lay people didn’t buy it.

but the real question is what is the aggregate effect in the real world climate with all the hydrological and convection cycles.

Aye, that is the next step.
It’s not the right thread to discuss that, but it wouldn’t matter if a thousand studies on the greenhouse effect were cited here. Skeptics will simply announce they are all ‘wrong’, or not ‘definitive’, having read not a one.
George,

I’d be very surprised if Barry; whoever Barry is can cite just ONE specific instance in which that experiment was performed.

I provided a youtube link to such an experiment.
Here’s a paper by John Tyndall that documents probably the first ever test done in a lab on this in the mid-1800s. And if you click HERE, you can see the diagram of equipment for his experiments.
This is a typical high school document outlining a testing method. This is a primer document, also typical of schools. Here is another.
Here is a simple experiment description and a photo of the set up. Here is another…..
It’s really not controversial.
We’re straying further away from the topic here as new people come in and dismiss basic science. On to a new thread. Thanks for the conversations.

TomRude
August 20, 2010 7:39 pm

Zorita in his latest paper with Frank doesn’t even have MMs in his references…

Stephan
August 20, 2010 7:48 pm

This is even bigger than this story
“Prof. Mann was quoted in the British media as saying he believed that his little graph had gained undue attention.” Is he caving in? LOL
Read more: http://dailycaller.com/2010/07/25/michael-hockey-stick-mann-hides-atop-the-climate-change-ivory-tower/#ixzz0xClA6OGN

Patrick Davis
August 20, 2010 8:59 pm

“Chris in OZ says:
August 20, 2010 at 3:54 pm
I’ve had enough talk. It is now 9AM here on the East coast of Australia, and I am off to vote in our elections. Hopefully, this evening, we will have a new government and I remember Tony Abbot saying, “Climate change is crap”.”
Lets hope so aye? Unfortunately, watching the ABC 24hr News channel here in Australia and the election coverage, there has been roughly 40 minutes in the last hour devoted to “climate change”, CPRS and ETS systems etc etc, with lots of economists commenting however. Mr Abbott won’t be the next PM of Australia as Australians believe that reducing Australia’s ~1%-2% contribution to the total global CO2 emissions volume by 20% is going to, somehow, save the planet (which is only in danger from its own star).

Latimer Alder
August 20, 2010 10:31 pm


‘The first warm period, or “blocks” as barry calls them, was ~ from 1850-1880 and the rate of increase in temp then was 0.00525384 per year. In the 2nd warm period from ~1910-1940, the rate of increase was 0.0152788 per year, nearly 3 times the preceding warm period; during 1910-1940 the increase in CO2 was slight; the 3rd warm period was from ~1976-1998 and the rate of increase was 0.0146429 per year, less than the preceding warm period; from 1998 onwards the rate of increase has been 0.00230116 per year, much less than from 1976-1998 and that increase is entirely due to the El Nino conditions at the beginning of 2010.’
I wonder how you can quote the rates to six significant figures. You must be very very very clever…and have totally misunderstood what you are doing. Or you haven’t studied the ‘how to use a calculator’ bit of basic sums (‘math’ in the US).
Since the article under discussion is all about the magnitude of uncertainties in statistical processes, I didn’t bother to read the rest of your contribution.

duckster
August 20, 2010 11:27 pm

@Latimer Alder says:
Duckster: ‘Since when has it been science to accept a paper as scientifically valid without its findings being independently repeated?’
Latimer Alder: I was under the impression that this is the norm for climatological work. Since data and methods are rarely published in enough detail, independent repeats are, by definition, impossible.

I have to admit to the use of the wrong term above. I should have said reproducibility, not repeatability. Scientific reproducibility refers to other scientists replicating experiments done by one scientist, while repeatability refers to the same scientist repeating their own results (possibly under different circumstances).
So I had a quick check through Google scholar to look at studies on reproducibility in climate science – not just counting the number of hits, but looking at what the studies briefly tried to do – and lo! Who’d a thought it. There are tons of studies which work on reproducibility of climate science.
richards
Duckster. Foot, Mouth swallowed comes to mind.
Thanks for this intelligent addition to the discussion. I am completely floored by this. Obviously.

August 20, 2010 11:30 pm

Henry@barry
Surely you must realize that we’ve been all through that? Svante Arrhenius’s formula was wrong. If it had been right, earth would have been a lot warmer by now. I even told you what he (and everyone else after him) did wrong. He forgot about the cooling caused by GHG’s. Looking carefully at the incoming and outgoing radiation graphs, I estimate that it could be pretty much evens between the warming and cooling of CO2.
But based on your science, what is the correct formula? That is my challenge to you and everyone who believes that CO2 is bad. The IPCC’s forcings are based on weighting, i.e. calculations based on observed global warming and observed increases in GHG’s. Not actual testing.
I hope you watched the you tube video about the man wanting to prove global warming for $36. It came after your video. It is 4 minutes. I laughed. And laughed.

August 20, 2010 11:56 pm

Arno Arrak says:
August 20, 2010 at 4:56 pm
“And my personal feeling is that putting an uncertainty estimate on the graph as they all do is simply an annoyance and does not contribute any real information.”
Arno, I think the whole point of the paper was to demonstrate uncertainty. True, they could have simply stated the levels of uncertainty, but many that have come here to debate the points of the paper only look at the pictures and they’d have never understood the levels of uncertainty.

cohenite
August 21, 2010 12:05 am

Latimer Alder says:
August 20, 2010 at 10:31 pm
“I wonder how you can quote the rates to six significant figures. You must be very very very clever…and have totally misunderstood what you are doing. Or you haven;t studied the ‘how to use a calculator’ bit of basic sums (‘math’ in the US).”
Are you nuts; the graphs are from WFT which includes the raw data and, in the case of OLS, the numerical slopes; look at the site and patronise someone else, or at least think before shooting yourself in the foot.
” I didn’t bother to read the rest of your contribution”
I’m not surprised since you didn’t have a clue about what you did read.

Gunter
August 21, 2010 12:25 am

“After a week of being “preoccupied” Real Climate finally breaks radio silence here:”
I would much rather wait a little and see a carefully considered analysis of the paper (from all sides of the debate), than to have it lauded and cheered immediately by people who have neither read nor understood it, but do so because its conclusions seem to agree with their prejudices.

August 21, 2010 12:32 am

Henry Pool says:
August 20, 2010 at 11:30 pm
“I hope you watched the you tube video about the man wanting to prove global warming for $36. It came after your video. It is 4 minutes. I laughed. And laughed.”
I didn’t know whether to laugh or cry. I need to get a picture of Mr. Seitz to make sure that wasn’t him out there doing his science.
Barry, I’m not sure your YouTube video was much better. That silly little test doesn’t show anything but demonstrate a well-known property of CO2. It almost inspires me to video an apple falling to the ground to prove gravity.
You guys have a pleasant…….well, it’s 2:30 A.M. here.

Mike Edwards
August 21, 2010 12:46 am

barry says:
August 20, 2010 at 8:30 am

Firstly, there is no such thing as a definitive paper. Were Einstein’s relativity papers ‘definitive?’…

Er, well, Einstein’s relativity papers actually are definitive, as are others such as Newton’s Principia. Those two examples alone falsify your statement.
I can agree, however, that not all advances in science are marked by “definitive papers”. On the other hand, it is a reasonable question to ask “what are the 10 most significant papers which establish the AGW theory?” (if you think 10 ain’t the right number, can you suggest an alternative which is better?)

Richard S Courtney
August 21, 2010 12:51 am

Anthony:
In your excellent reply to Russell Seitz that you append to his post at August 20, 2010 at 4:25 pm you suggest to him:
“ In the meantime you might consider doing some science to contribute to this thread …”
sarc on/
Your suggestion is clearly mistaken.
Every academic involved in using public money to study AGW knows that
(a) the world outside of academia is real
so
(b) is not of interest,
but
(c) virtual worlds can be constructed in computer models,
and
(d) AGW can be observed, measured and ‘projected’ by an appropriately devised computer model,
so
(e) appropriate computer models are useful tools to generate additional research funds,
and
(f) the only function of the real world is to provide the research funds.
Therefore,
(g) any comments from the real world are irrelevant noise to be ignored or silenced
because
(h) academics have wives and families they need to house and feed
and
(i) income from the research funds fulfils this need.
Sarc off/
Anyway, observations support the above sarcastic argument more than observations support the AGW hypothesis.
Richard

Latimer Alder
August 21, 2010 12:52 am

@duckster
‘I have to admit to the use of the wrong term above. I should have said reproducibility, not repeatability. Scientific reproducibility refers to other scientists replicating experiments done by one scientist, while repeatability refers to the same scientist repeating their own results (possibly under different circumstances)’
Thanks for the correction. I am glad that we agree that the standards of disclosure within climatology have been so low that independent repeatability of past work is in fact impossible.
We are forced to rely on self-certification that the authors have done their work correctly, since they have not published their methods or data in any detail. And we also know that the ‘peer-review’ process has never been robust in attempting to demonstrate repeatability. P Jones remarked ‘they never asked’ when questioned about how many times a peer-reviewer had asked to see his data and methods in a 30-year career. We must just take the correctness of the papers on this criterion as an act of faith, rather than by any independent verification.
So repeatability is effectively off the menu in this field.
The debate moves to ‘reproducibility’.
Since there are no actual experiments in climatology, the only thing left to attempt to reproduce is the statistical manipulation of previously collected data. And this is what the M&W paper attempts to do for previous work.
Using exactly the same data (as far as can be ascertained) as Mann used, they arrive at very different conclusions from him. They attempt to reproduce his work and fail to do so.
Their interpretation is that his claims to have found a robust temperature signal among the noisy data are not supported if the statistical manipulation is done using industry-standard techniques, rather than by novel and Mann-unique methods.
And they lay out their methods and data in plain sight for all to see and criticise.
So, like you I believe strongly in both repeatability and reproducibility as being essential to scientific progress.
We have seen that repeatability is only possible if the ‘scientists’ publish their data and methods. Now we have a paper looking at the reproducibility of non-repeatable work. And showing that the results are not reproducible either.
What scientific validity remains for any work when both your tests are spectacularly failed?

Latimer Alder
August 21, 2010 1:29 am


‘I would much rather wait a little and see a carefully considered analysis of the paper (from all sides of the debate), than to have it lauded and cheered immediately by people who have neither read nor understood it, but do so because its conclusions seem to agree with their prejudices.’
So you’ll be ignoring any remarks from RC then? A website explicitly set up by a communications corporation (Environmental Media Services) to defend AGW and the Hockey Stick. And EMS was set up as an activist organisation by multi-millionaire David Fenton (http://en.wikipedia.org/wiki/David_Fenton) to promote his political views.
Real Climate is noted for its ruthless policy of not accepting any comments that do not adhere to their approved predictions of imminent climate change catastrophe.
So I am sure you will not be looking there for a ‘carefully considered analysis of the paper (from both sides)’.

Latimer Alder
August 21, 2010 1:47 am


Thank you for your reply. I did indeed understand exactly what I was reading…. a piece from somebody who has no experience of processing experimental data or basic statistics.
And who attempts to show how clever the work is by using a level of apparent certainty way in advance of their ability to understand these simple concepts.
By making such a basic error, [snip…play nice ~ ctm].

Latimer Alder
August 21, 2010 2:29 am

Hi ctm
Thanks for your snip. I posted and regretted that last bit immediately. Cheers

Ben
August 21, 2010 2:41 am

Is that an MIT professor repeating an experiment that has been repeated “1000’s” of times?
What’s next out of the illusory halls of MIT? Watching an apple drop and yelling “gravity!”
Just as a note: There is a reason most of us do not take the science seriously….It is because of stunts like that…If you were taking the time to seriously test theories posted here instead of saying “you are wrong, we are right”, you might be taken more seriously…You perform those stunts without reading what is actually being said.
From a “I don’t care, its 4:30am and I was out with friends not drinking while they were”…I can say this: I think the points about not testing possible feedbacks of CO2 whether they are positive or negative from a physics point of view is why most people are not taking you seriously on this thread. The standard response of GCM’s prove this is utter rubbish because those are using statistics and data modeling/mining to come to a conclusion when all along we have questioned these models and for years have fought for the actual code to reproduce this ourselves.
If the code is correct, there is no reason to not post every single piece of the puzzle. Make it so a non-expert can reproduce your code. Is that too difficult? If it is, then explain to me why. Until that is the case, I trust professional statisticians over you any day of the week.

Kilted Mushroom
August 21, 2010 3:02 am

Zorita and others here fail to understand that M&W are not doing science in their paper. No more than a proofreader is writing the book; they are proofing the stats.

Latimer Alder
August 21, 2010 3:05 am


I’m sorry that I suggested that you couldn’t use a pocket calculator and interpret the results correctly. Having studied the WFT website, I now see that you didn’t need the calculator, because the site does all the sums for you. It’s so much easier that way – it saves having any need to understand the data and therefore what you are actually working with. But it doesn’t aid your insight.
But you have still reported the results to a mathematical precision far greater than the underlying data will permit. And have failed to even consider uncertainties.
To use a simple example, the mathematical answer to (1/2)*(1/2)*(1/2)*(1/2) is 0.0625. This does not mean that the answer to a real world problem of ‘about a half of about a half of about a half of about a half’ is exactly 0.0625, and not 0.060 or 0.065. Or even 0.1 or 0.02 – depending on how close the ‘abouts’ were above. We express this by using uncertainties, e.g. 0.06 +/- 0.01. This tells us how good an estimate of the real world we think our sums have been.
Climatology takes temperature data that is pretty coarse: +/- 0.1 degrees at best. The ‘about’ is quite large. If I report 11.1 C, I actually mean 11.1 +/- 0.05 C.
Using data of this precision it is ridiculous to express trends as ‘0.00525384 per year’ as you have. 0.005 +/- 0.001 might be a sound estimate from the underlying data. What you have described is not. The trailing digits 25384 are mathematical artefacts and add nothing….
If the website doesn’t help you to understand this then
a. it b…y well should,
b. they probably don’t understand it themselves (no big surprise there – they are climatologists and there is no such thing as uncertainty) and
c. you need to use a new website…..or a calculator.
c. is best..it will help you to understand the data.
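Latimer’s point can be made concrete: an OLS trend comes with a standard error, and the standard error tells you how many digits of the slope are worth reporting. A rough sketch on synthetic data with a simplified +/- 0.1 C noise model (not HadCRUT, and not how WFT itself reports trends):

```python
# Report a trend only to the precision its uncertainty supports.
# Synthetic annual data with thermometer-like +/- 0.1 C scatter;
# the slope's standard error dictates how many digits are meaningful.
import math
import random

random.seed(0)
years = list(range(1850, 1881))
obs = [0.005 * (y - 1850) + random.gauss(0, 0.1) for y in years]

n = len(years)
my = sum(years) / n
ma = sum(obs) / n
den = sum((y - my) ** 2 for y in years)
slope = sum((y - my) * (a - ma) for y, a in zip(years, obs)) / den

# Residuals about the fitted line, and the slope's standard error.
resid = [a - (ma + slope * (y - my)) for y, a in zip(years, obs)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / den)

# Round the slope to the first significant digit of its standard error.
digits = -int(math.floor(math.log10(se)))
print(f"{round(slope, digits)} +/- {round(se, digits)} C/yr")
```

Run on a 31-year window of +/- 0.1 C data, the standard error lands around the third decimal place, so a slope quoted to six significant figures carries several digits of pure noise – which is exactly the complaint.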
