Reader Crosspatch writes in comments:
4241.txt is where Rob Wilson apparently believes he recreates what McIntyre is talking about: the hockey stick showing up no matter what data you feed into it. Wilson creates randomly generated time series, feeds them into Mann’s maths and bingo … out pops a hockey stick.
I first generated 1000 random time-series in Excel – I did not try and approximate the persistence structure in tree-ring data. The autocorrelation therefore of the time-series was close to zero, although it did vary between each time-series.
Playing around therefore with the AR persistent structure of these time-series would make a difference. However, as these series are generally random white noise processes, I thought this would be a conservative test of any potential bias.
…
The reconstructions clearly show a ‘hockey-stick’ trend. I guess this is precisely the phenomenon that Macintyre has been going on about.
Here’s the full email:
cc: edwardcook <REDACTED>, <REDACTED>, “David Frank” <REDACTED>, “Jan Esper” <REDACTED>, “Tim Osborn” <REDACTED>, <REDACTED>, “Brian Luckman” <REDACTED>, “Andrea Wilson” <REDACTED>, “rosanne” <REDACTED>, “Watson,Emma [Ontario]” <REDACTED>, “Gordon Jacoby” <REDACTED>, “Brohan, Philip” <REDACTED>
date: Tue, 7 Mar 2006 17:44:46 +0700
from: edwardcook <REDACTED>
subject: Re: Emailing: Rob’s Hockey Sticks
to: “Rob Wilson” <REDACTED>
Hi Rob,
You are a masochist. Maybe Tom Melvin has it right: “Controversy about which bull caused mess not relevent. The possibility that the results in all cases were heap of dung has been missed by commentators.”
Cheers,
Ed
On Mar 7, 2006, at 5:20 PM, Rob Wilson wrote:
Greetings All,
I thought you might be interested in these results.
The wonderful thing about being paid properly (i.e. not by the hour) is that I have time to play.
The whole Macintyre issue got me thinking about over-fitting and the potential bias of screening against the target climate parameter.
Therefore, I thought I’d play around with some randomly generated time-series and see if I could ‘reconstruct’ northern hemisphere temperatures.
I first generated 1000 random time-series in Excel – I did not try and approximate the persistence structure in tree-ring data. The autocorrelation therefore of the time-series was close to zero, although it did vary between each time-series. Playing around therefore with the AR persistent structure of these time-series would make a difference. However, as these series are generally random white noise processes, I thought this would be a conservative test of any potential bias.
I then screened the time-series against NH mean annual temperatures and retained those series that correlated at the 90% C.L.
48 series passed this screening process.
Using three different methods, I developed a NH temperature reconstruction from these data:
1. simple mean of all 48 series after they had been normalised to their common period
2. Stepwise multiple regression
3. Principle component regression using a stepwise selection process.
The results are attached.
Interestingly, the averaging method produced the best results, although for each method there is a linear trend in the model residuals – perhaps an end-effect problem of over-fitting.
The reconstructions clearly show a ‘hockey-stick’ trend. I guess this is precisely the phenomenon that Macintyre has been going on about.
It is certainly worrying, but I do not think that it is a problem so long as one screens against LOCAL temperature data and not large scale temperature where trend dominates the correlation.
I guess this over-fitting issue will be relevant to studies that rely more on trend coherence rather than inter-annual coherence. It would be interesting to do a similar analysis against the NAO or PDO indices. However, I should work on other things.
Thought you’d might find it interesting though.
comments welcome
Rob
Dr. Rob Wilson
Research Fellow
School of GeoSciences,
Grant Institute,
Edinburgh University,
West Mains Road,
Edinburgh EH9 3JW,
Scotland, U.K.
Tel:REDACTED
Publication List: http://freespace.virgin.net/rob.dendro/Publications.html
“…..I have wondered about trees.
They are sensitive to light, to moisture, to wind, to pressure.
Sensitivity implies sensation. Might a man feel into the soul of a tree
for these sensations? If a tree were capable of awareness, this faculty
might prove useful. ”
“The Miracle Workers” by Jack Vance
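For readers who want to see the effect for themselves, here is a rough Python sketch of the experiment described in the email. It is not Wilson’s code: the “NH temperature” target below is a made-up warming trend plus noise (the real series is not attached), “correlated at the 90% C.L.” is read here as a positive correlation significant at the 10% level over the calibration window, and only his method 1 (normalise the survivors and average them) is shown.

```python
# A sketch only: synthetic target, white-noise "proxies", screen-and-average as in the email.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_series, n_years, n_cal = 1000, 1000, 150        # proxies, total "years", calibration years

# Hypothetical instrumental target: a slow warming trend plus year-to-year noise.
target = 0.01 * np.arange(n_cal) + rng.normal(0, 0.2, n_cal)

# 1000 random white-noise series; no attempt at tree-ring persistence, as in the email.
proxies = rng.normal(size=(n_series, n_years))

# Screening: keep series whose calibration-period correlation with the target
# is positive and significant at the 90% confidence level.
kept = []
for series in proxies:
    r, p = stats.pearsonr(series[-n_cal:], target)
    if r > 0 and p < 0.10:
        kept.append(series)
kept = np.array(kept)
print(len(kept), "of", n_series, "series passed screening")   # roughly 50 by chance alone

# "Reconstruction" method 1 from the email: normalise each survivor over the
# calibration period, then take the simple mean.
m = kept[:, -n_cal:].mean(axis=1, keepdims=True)
s = kept[:, -n_cal:].std(axis=1, keepdims=True)
recon = ((kept - m) / s).mean(axis=0)

years = np.arange(n_years)
print("slope before calibration window:", round(np.polyfit(years[:-n_cal], recon[:-n_cal], 1)[0], 5))
print("slope inside calibration window:", round(np.polyfit(years[-n_cal:], recon[-n_cal:], 1)[0], 5))
```

With these assumptions roughly 5% of the white-noise series survive the screen, in the same ballpark as the 48 of 1000 reported in the email, and their average trends upward only inside the calibration window: a hockey stick built from random numbers, which rise they were selected for.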
It is not Briffa but Rob Wilson who wrote this. Why the confusion?
You attribute this to Dr Briffa, but I cannot see his name above. Is that a typo?
REPLY: It is, and fixed, thanks. Crosspatch got the name wrong; it’s Rob Wilson, writing to several people at CRU. – Anthony
The email is from Rob Wilson?
I got the name wrong, sorry about that.
So let’s see if I’ve got this right: CRU knew this back in 2006, and the best advice Ed Cook could give Rob Wilson was that it didn’t matter and no one was noticing anyway?
I had two different emails up in two different windows. One was a Briffa mail, the other was this one and as I got to posting that in the comments, I clicked on the wrong window to get the attribution.
“It is certainly worrying, but I do not think that it is a problem so long as one screens against LOCAL temperature data…”
Great work – he has successfully and smoothly “debunked” any idea that McIntyre’s work challenges the Mann hockey stick.
“Thought you’d might find it interesting though. comments welcome. Rob”
Oh, yeah, Rob. We sure did. Thanks.
“I then screened the time-series against NH mean annual temperatures and retained those series that correlated at the 90% C.L.”
By that, does he mean that he compared the “tree ring” time series against the expected answer and only kept those that matched? If so, is this the normal way they go about these things? “Oh this tree’s record has *clearly* been affected by something other than temperature so we’ll ignore it… and this one too…. and another…. and yet another…. oh but this one’s ok so we’ll use that… [some time later] It’s ok we’ve got 12 that tell us what we need”.
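The screening step in the email works exactly that way: series were kept only if they correlated with the instrumental target. A toy check of what such a screen does to pure noise (made-up numbers, nobody’s actual data):

```python
# Illustrative only: white-noise "proxies" pass a correlation screen at roughly the
# nominal rate whatever the target looks like, so the survivors are chance matches.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
noise = rng.normal(size=(1000, 150))                            # 1000 white-noise "proxies"
trending  = np.linspace(0, 1, 150) + rng.normal(0, 0.2, 150)    # target with a trend
trendless = rng.normal(size=150)                                # target without one

for name, target in (("trending target", trending), ("trendless target", trendless)):
    passed = 0
    for series in noise:
        r, p = stats.pearsonr(series, target)
        passed += (p < 0.10)
    print(f"{name}: {passed} of 1000 pass at the 90% level")    # ~100 either way
```

Roughly the nominal 10% pass whether or not the target has a trend, so the retained series have not “recorded” anything; they are simply the ones that happened to match over the screening window.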
World temps will go up sooner or later, as we well KNOW.
And due to our immense expertise, we KNOW it will be veerrryyy BAD.
Thus we mustn’t be distracted by actual measurements
during the current ‘warming pause’.
Emissions must be restricted NOW,
so we can’t wait until temperatures cooperate with our theory.
Just let us get on with our ‘private’ world-saving data ‘adjustment’.
Hey, if we’re wrong it doesn’t matter
since green energy is so good for us anyway.
What is a “target” climate?
Bill,
For a moment I thought you were serious. You sounded exactly like the typical alarmist.
The whole USP of Mann was that he took lots of disparate proxies and regressed them against a global trend. Not a trace of local screening if I remember correctly.
@Peter Ward – even if there is no screening, the process of weighting different series based on their correlation (read up on the hockey stick method, Montford’s book is probably a reasonable start) does exactly this. The process will take a series, assign it a signed weighting, and add it to the mix in order to produce the best estimate of the calibration series using a collection of so-called proxies. Mann’s belief in this process is so strong that he seemed to imagine that even the sign of a proxy series did not matter because the statistical procedure would sort it out.
The reason for this being such a mess in the first place is that the temperature signal is weak, and contaminated by other effects (like rainfall) – it is a non-trivial process to perform the estimate of temperature for the past 2000 years. Not to say that we shouldn’t try.
In case you missed it, adding random non-proxies tends to reduce the variation going back in time (a bit of an over-simplification, but Jeff Id’s work supports this, I think).
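A rough sketch of that weighting argument (a toy version, not Mann’s actual algorithm): give each random series a signed weight equal to its calibration-period correlation with the target and sum them. The composite tracks the calibration-period target but collapses toward a flat, low-amplitude line before it, which is the variance loss referred to above.

```python
# Toy correlation-weighting of random series against a hypothetical calibration target.
import numpy as np

rng = np.random.default_rng(2)
n_series, n_years, n_cal = 200, 1000, 150
proxies = rng.normal(size=(n_series, n_years))                   # random "proxies"
target = np.linspace(0, 1, n_cal) + rng.normal(0, 0.2, n_cal)    # made-up calibration target

# Signed weights: each series is weighted (and, if anti-correlated, flipped)
# by its correlation with the target over the calibration window.
cal = proxies[:, -n_cal:]
weights = np.array([np.corrcoef(row, target)[0, 1] for row in cal])
recon = weights @ proxies / np.abs(weights).sum()

print("typical single-series std      :", round(proxies.std(axis=1).mean(), 2))
print("composite std, pre-calibration :", round(recon[:-n_cal].std(), 3))
print("composite vs target correlation:", round(np.corrcoef(recon[-n_cal:], target)[0, 1], 2))
```

With 200 series the composite’s pre-calibration amplitude comes out around a tenth of any single input series, while its calibration-period correlation with the target is around 0.7–0.8, even though every input is pure noise.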
“but I do not think that it is a problem so long as one screens against LOCAL temperature data and not large scale temperature where trend dominates the correlation”
That sounds like speculation. Was it ever checked out to see whether it is true?
Did Mann screen against local, regional, global or at all ?
Brings to mind, from many decades ago when I was rather naïve in mathematics, that if you ever get a time-period differential on the right-hand side of a relationship, possibly even in a recursive function, you can create a hockey stick in a flash. In fact, just as McIntyre and Wilson both showed, it will always give you back a hockey stick no matter what. The difference of any arithmetic series injects this behavior.
If someone has the code Mann was using (I am assuming Steve has it), someone should scan the algorithms looking for any reference to time variables, particularly differences of them, on the right-hand side. Not saying that is it, but that is where I would start looking for such a defect.
oakwood says:
You really don’t understand how the hockey stick handle is generated do you? Neither does Rob Wilson for that matter.
It must be nice to be so trusting that somebody only has to state they think something is true and you believe it. Rob states “but I do not think that it is a problem”. Rob does not perform any experiments to verify this statement. He does not provide any reasoning as to why it would not be a problem. Since he does not reveal any understanding of why the random data produces hockey sticks, he could not know whether using local temperatures would produce them as well.
Your reaction and unquestioning acceptance of Rob Wilson’s “thought” is what I would expect from a brainwashed follower of a cult.
http://www.telegraph.co.uk/finance/8909585/Green-energy-could-trigger-catastrophic-blackouts.html
This should be of some interest to both sides of the current argument:
http://www.vukcevic.talktalk.net/CET-NVa.htm
Natural variability could be the next field of conflict, as the AGW side scrambles for explanations for the likely temperature downturn.
I think the choice quote is probably this reported by Edward Cook in the original email:
Maybe Tom Melvin has it right: “Controversy about which bull caused mess not relevent (sic). The possibility that the results in all cases were heap of dung has been missed by commentators.”
Three years later, Tim Osborn had this to say about Tom Melvin (email 0333):
“I think Tom Melvin here is the only person who could shed light on the McIntyre criticisms of Yamal. But he can be a rather loose cannon and shouldn’t be directly contacted about this … “
So he found ‘hockey sticks’ after processing all the random series that matched the trend of the instrumental record…. hardly surprising.
What may be more surprising is why Wegman et al FAILED to replicate the M&M findings, relying instead on the original flawed results from M&M, which were cherry-picked for ‘hockey sticks’ from many random series that showed no end-effect trend.
Hockey sticks, however, that were at least an order of magnitude smaller than the proxy and instrumental results!
I have not kept up with the names of all the scientists at CRU, but something strikes me as odd about the To: list in the email – I don’t recognise any of the names.
Maybe that’s just because I’m a casual observer.
But to someone who has kept better tabs on things: is this possibly a sub-group of scientists who held some concerns about the practices of the ‘big names’?
Actually, I have seen Edward Cook’s name with regard to other concerns about Mr Mann.
Is this a sub-group of reasonable climate scientists at CRU?
Adding to:
Imagine if a drug company did that as a method of proving how successful a drug was at treating a particular ailment?
Our new drug is 100% successful (at treating the 1% of people who are treated successfully)
Of course, the actual Mann technique is worse
– here is some data that shows our new drug actually makes some patients worse!
– but since we know our drug works really well, we’ll just invert the data, and say it works on these patients too!
Peter Ward,
See Jeff Id’s recent post:
http://noconsensus.wordpress.com/2011/11/26/456-5/
especially the section on Cherry Picking towards the end. The process of Hockeystick manufacture goes basically like this:
1. Get a collection of proxies or, if that’s too hard, just use random numbers representing, say, the last 1,000 years
2. Select “good” proxies by calibrating against the last 100 years of instrumental temperature data and upweighting them by the algorithm of your choice. The calibration can be to either local temperatures (where the proxy is located) or global by the mysterious process of teleconnections – whichever works better.
3. Avoid issues with the divergence problem by truncating tree ring proxies which dip after 1960 or so, and optionally splice in the measured temperature from 1960 to replace the truncated data
4. Combine the lot by anything from simple average to principal components to CPS – brownie points for novel stats procedures with unknown properties.
Voila! Hockeystick!
The reason is that proxies are inherently noisy, dating for many of them is uncertain, and global temps do not move in lock-step, so what you end up with from the weightings procedure is a convergence upwards over the last 100 years due to the current warm period, and a cancelling out of all the noise for the prior 900 years. The cancelling out gives you the handle (suppression of low-frequency variance), and the weightings, by concentrating on the proxies which follow 20th-century warming (or the first 60 years thereof), give you the blade.
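The “cancelling out gives you the handle” half of that argument is just the 1/sqrt(n) shrinkage of averaged independent noise, which a few lines will confirm; the blade half (survivors agreeing over the calibration window because they were chosen or weighted for it) is what the sketches further up illustrate.

```python
# Toy numerical check of the handle: averaging n independent noise series over the
# 900 pre-calibration "years" shrinks their amplitude roughly as 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(3)
for n in (10, 50, 200):
    handle = rng.normal(size=(n, 900)).mean(axis=0)   # n noise series, averaged year by year
    print(f"n = {n:3d}: handle std = {handle.std():.3f}   (1/sqrt(n) = {1/np.sqrt(n):.3f})")
```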