How to trick yourself into unintentional cherry picking to make hockey sticks

This post over at Lucia’s Blackboard caught my interest because it shows you don’t need to operate heavy equipment (like Al Gore’s famous elevator scene) to get yourself a big hockey stick out of your data. More evidence that Mann and Briffa’s selection criteria for trees lead to a spurious result.

[Image: cherry pickers]

It doesn’t matter whether you are using treemometers, thermometers, stock prices, or wheat futures: if your method isn’t carefully designed to prevent an artificial and unintentional selection of data that correlates with your theory, the whole study can turn out looking like a hockey stick. Anyone can do this; no special science skills are needed.

Lucia writes:

So, even though the method seems reasonable, and the person doing it doesn’t intend to cherry pick, if they don’t do some very sophisticated things, rejecting trees that don’t correlate with the recent record biases an analysis. It encourages spurious results, and in the context of the whole “hockey stick” controversy, effectively imposes hockey sticks on the results.

And she backs it up with a simple experiment anybody can do with Microsoft Excel.

Method of creating hockey stick reconstructions out of nothing

To create “hockey stick” reconstructions out of nothing, I’m going to do this:

  1. Generate roughly 148 years’ worth of monthly "tree-ring" data using rand() in EXCEL. This corresponds to 1850-1998. I will impose autocorrelation with r=0.995. I’ll repeat this 154 times. (This number is chosen arbitrarily.) On the one hand, we know these functions don’t correlate with Hadley because they are synthetically generated. However, we are going to pretend we believe "some" are sensitive to temperature and see what sort of reconstruction we get.
  2. To screen out the series that prove themselves insensitive to temperature, I’ll compute the correlation, R, between Hadley monthly temperature data and the tree-ring data for each of the 154 series. To show the problem with this method, I will compute the correlation only over the years from 1960-1998. Then, I’ll keep all series whose correlations have absolute values greater than 1.2 times the standard deviation of the 154 correlations R. I’ll assume the other randomly generated monthly series are "not sensitive" to temperature and ignore them. (Note: The series with negative values of R are the equivalent of "upside down" proxies.)
  3. I’ll create a reconstruction by simply averaging over the "proxies" that passed the test just described. I’ll rebaseline so the average temperature and trends for the proxy and Hadley match between 1960-1998.
  4. I’ll plot the average from the proxies and compare it to Hadley.
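
For readers who want to try this outside of Excel, here is a minimal Python sketch of the same screening procedure. The AR(1) red-noise generator, the 1960-1998 calibration window, and the 1.2-times-standard-deviation cutoff follow Lucia’s description above; the stand-in "Hadley" series, the array sizes, the sign-flipping of negative-R series, and the simple mean-and-spread rebaselining are illustrative assumptions of mine, not her actual spreadsheet.

```python
import numpy as np

rng = np.random.default_rng(0)

n_months = 149 * 12   # roughly 1850-1998, monthly values
n_series = 154        # number of synthetic "tree-ring" series
r = 0.995             # imposed lag-1 autocorrelation


def red_noise(n, r, rng):
    """AR(1) red noise with lag-1 autocorrelation r and unit variance."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = r * x[t - 1] + np.sqrt(1 - r ** 2) * rng.normal()
    return x


# 1. Generate the synthetic "tree-ring" series.
proxies = np.array([red_noise(n_months, r, rng) for _ in range(n_series)])

# Stand-in for the Hadley monthly record (swap in the real series if you
# have it); a gentle trend plus noise, just so the script runs end to end.
months = np.arange(n_months)
hadley = 0.0005 * (months - n_months) + 0.2 * rng.normal(size=n_months)

# 2. Screening: correlate each series with "Hadley" over 1960-1998 only and
#    keep those whose |R| exceeds 1.2 times the std of all 154 R values.
cal = slice((1960 - 1850) * 12, n_months)
R = np.array([np.corrcoef(p[cal], hadley[cal])[0, 1] for p in proxies])
keep = np.abs(R) > 1.2 * R.std()

# Flip the "upside down" proxies (negative R) so they add constructively
# (an assumption; Lucia notes these series but doesn't say how she treated them).
selected = proxies[keep] * np.sign(R[keep])[:, None]

# 3. Average the survivors and rebaseline. (Lucia matches mean and trend over
#    1960-1998; matching mean and spread is close enough for this sketch.)
recon = selected.mean(axis=0)
recon = ((recon - recon[cal].mean()) / recon[cal].std() * hadley[cal].std()
         + hadley[cal].mean())

# 4. Compare: a good match after 1960 (by construction), flat noise before.
pre = slice(0, (1960 - 1850) * 12)
print(f"{keep.sum()} of {n_series} series passed the screen")
print("corr 1960-1998:", round(np.corrcoef(recon[cal], hadley[cal])[0, 1], 2))
print("corr 1850-1959:", round(np.corrcoef(recon[pre], hadley[pre])[0, 1], 2))
```

Each refresh of the random seed gives a different "reconstruction", but the pattern Lucia describes repeats: a close match inside the calibration window and a flat, noise-cancelling handle before it.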

The comparison from one (typical) case is shown below. The blue curve is the "proxy reconstruction"; the yellow is the Hadley data (all data are 12-month smoothed).

Figure 1: "Typical" hockey stick generated by screening synthetic red noise.

Notice that after 1960, the blue curve based on the average of "noise" that passed the test mimics the yellow observations. It looks good because I screened out all the noise that was "not sensitive to temperature". (In reality, none is sensitive to temperature. I just picked the series that didn’t happen to fail.)

Because the “proxies” really are not sensitive to temperature, you will notice there is no correspondence between the blue “proxy reconstruction” and the yellow Hadley data prior to 1960. I could do this exercise a bajillion times and I’ll always get the same result. After 1960, there are always some “proxies” that by random chance correlate well with Hadley. If I throw away the other “proxies” and average over the “sensitive” ones, the series looks like Hadley after 1960. But before 1960? No dice.

Also notice that when I do this, the "blue proxy reconstruction" prior to 1960 is quite smooth. In fact, because the proxies are not sensitive, the past history prior to the "calibration" period looks unchanging. If the current period has an uptick, applying this method to red noise will make the current uptick look "unprecedented". (The same would happen if the current period had a downturn, except we’d have unprecedented cooling.)

The red curve

Are you wondering what the red curve is? Well, after screening once, I screened again. This time, I looked at all the proxies making up the "blue" curve, and checked whether they correlated with Hadley during the period from 1900-1960. If they did not, I threw them away. Then I averaged to get the red line. (I did not rebaseline again.)

The purpose of the second step is to “confirm” the temperature dependence.

Having done this, I get a curve that sort of looks like Hadley from 1900-1960. That is: the wiggles sort of match. The "red proxy reconstruction" looks very much like Hadley after 1960: both the "wiggles" and the "absolute values" match. It’s also "noisier" than the blue curve – that’s because it contains fewer "proxies".

But notice that prior to 1900, the wiggles in the red proxy and the yellow Hadley data don’t match. (Also, the red proxy wants to "revert to the mean.")
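
Continuing the sketch above, the second screening that produces the red curve is just one more correlation test, applied only to the series already in the blue reconstruction and over the earlier 1900-1960 window. Lucia does not state the cutoff she used for this confirmation step, so the threshold below is an assumption; the variable names carry over from the previous snippet.

```python
# Second screen (the "red curve"): of the series already in the blue
# reconstruction, keep only those that also correlate with "Hadley" over
# 1900-1960, then average again (no rebaselining this time).
ver = slice((1900 - 1850) * 12, (1960 - 1850) * 12)
R2 = np.array([np.corrcoef(p[ver], hadley[ver])[0, 1] for p in selected])
keep2 = R2 > 1.2 * R2.std()   # assumed cutoff; Lucia doesn't specify hers
red = selected[keep2].mean(axis=0)
print(f"{keep2.sum()} of {keep.sum()} blue-curve series survive the second screen")
```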

Can I do this again? Sure. Here are the two plots created on the next two “refreshes” of the EXCEL spreadsheet:

[Figure: Hockey2]

[Figure: Hockey3]

I can keep doing this over and over. Some “reconstructions” look better; some look worse. But these don’t look too shabby when you consider that none of the “proxies” are sensitive to temperature at all. This is what you get if you screen red noise.

Naturally, if you use real proxies that contain some signal, you should do better than this. But knowing you can get this close with nothing but noise should make you suspect that screening out proxies based on a known temperature record can bias your answers to:

  1. Make a “proxy reconstruction” based on nothing but noise match the thermometer record and
  2. Make the historical temperature variations look flat and unvarying.

So, Hu is correct. If you screen out “bad” proxies based on a match to the current temperature record, you bias your answer. Given the appearance of the thermometer record during the 20th century, you bias toward hockey sticks!

Does this mean it’s impossible to make a reliable reconstruction? No. It means you need to think very carefully about how you select your proxies. Just screening to match the current record is not an appropriate method.

Update

I modified the script to show the number of proxies in the "blue" and "red" reconstructions. Here is one case; the second will be uploaded in a sec.

[Figure: Hockey4]

[Figure: Hockey]

Steve McIntyre writes in comments:

Steve McIntyre (Comment#21669)

October 15th, 2009 at 4:24 pm

Lucia, in addition to Jeff Id, this phenomenon has now been more or less independently reported by Lubos, David Stockwell and myself. David published an article on the phenomenon in AIG News, online at http://landshape.org/enm/wp-co…..6%2014.pdf . We cited this paper in our PNAS comment (as one of our 5 citations.) I don’t have a link for Lubos on it, but he wrote about it.

I mentioned this phenomenon in a post prior to the start of Climate Audit that was carried forward from my old website from Dec 2004, http://www.climateaudit.org/?p=9, where I remarked on this phenomenon in connection with Jacoby and D’Arrigo picking the 10 most "temperature-sensitive" sites out of the 35 that they sampled, as follows:

If you look at the original 1989 paper, you will see that Jacoby “cherry-picked” the 10 “most temperature-sensitive” sites from 36 studied. I’ve done simulations to emulate cherry-picking from persistent red noise and consistently get hockey stick shaped series, with the Jacoby northern treeline reconstruction being indistinguishable from simulated hockey sticks. The other 26 sites have not been archived. I’ve written to Climatic Change to get them to intervene in getting the data. Jacoby has refused to provide the data. He says that his research is “mission-oriented” and, as an ex-marine, he is only interested in a “few good” series.

===

Read the whole post at Lucia’s blog here

I encourage readers to try these experiments in hockey stick construction themselves. – Anthony

tallbloke
October 17, 2009 12:55 pm

Steve-M: Jacoby has refused to provide the data. He says that his research is “mission-oriented” and, as an ex-marine, he is only interested in a “few good” series
Lol, of all the lame excuses for poor scientific method I’ve ever heard from the warmista… Very VERY revealing.
Jacoby has earned a couple of lines in my next satirical song.

Doug in Seattle
October 17, 2009 12:57 pm

Yes, once again we see the power of the patented Mann-O-matic proxy fitting AlGore-ithm.
It’s amazing how the religious among us avert their gazes from this obvious torture of numbers. Kinda like the Spanish Inquisition.

Evan Jones
Editor
October 17, 2009 12:59 pm

Sort of like Michelangelo’s elephant. Just cut away all the parts that don’t look like an elephant.

dhmo
October 17, 2009 1:07 pm

Look, I am not a scientist, but this whole thing smacks to me of snake oil. To believe that you can take trees and get temperatures precise enough to splice with modern records and give measurements to the tenth of a degree is delusional. Perhaps we should just chuck out our thermometers and use trees instead. Could a double-blind test be done? If not, it is not worth anything! I very much appreciate the efforts to debunk it, but now we need to get this bunkum before the general public; that is the real problem. I have asked on blogs when it was that the climate did not change, and what an ideal global average temperature would be. I get abuse from the warmists, no answers. To show the lie is simple; to put it to Joe Public is not, as he is beset by superstition.

David Ball
October 17, 2009 1:12 pm

"Only interested in a few good series" and "mission oriented" speak volumes. The refusal to provide data for replication is inexcusable. When are the governing bodies going to admonish these people? After science is dead? Methinks the defibrillator is needed now.

October 17, 2009 1:24 pm

So, even though the method seems reasonable, and the person doing it doesn’t intend to cherry pick, if they don’t do some very sophisticated things, rejecting trees that don’t correlate with the recent record biases an analysis. It encourages spurious results, and in the context of the whole “hockey stick” controversy, effectively imposes hockey sticks on the results.

Very understandable. The problem, however, is that when these issues are pointed out to the perpetrators, they are summarily ignored. Their papers should also be ignored. End of story.

crosspatch
October 17, 2009 1:26 pm

And they want to take our money to mitigate data cherry picking?

Clayton Hollowell
October 17, 2009 1:27 pm

Once again, the folly of the statistically unwashed attempting to do statistics is demonstrated. This is what I was trying to argue with Tom P (or whatever) a while ago, but they just don’t get it.
Our universities churn out "scientists" on an assembly line (especially biologists) who remain ignorant of math, and these are the results we end up with.

Louis Hissink
October 17, 2009 1:41 pm

I think there is a data problem in the sense that a yearly temperature metric is essentially a yearly statistic derived from a monthly statistic, which is derived from a daily statistic, which is computed from raw temperature measurements over the day. To compute the ubiquitous temperature anomaly, yet another derived statistic, the 30-year average, is then subtracted from the statistics from which it was computed.
So in a nutshell we are looking at the variation of a statistic over time compared to equally nebulous metrics derived from tree rings. That different conclusions could be reached from this data depending on the choice of time frame (a sort of quasi-cherry-picking) tells me that these derived data are simply random.
Lucia’s work seems to bear this out as well, and rather than carp on the methodologies used to analyse the data, it’s in the original data collection stage that the problems start.
This happens when the raw data (temperature) are intensive variables. Intensive variables do not represent physical quantities, and while you can subject them to statistical analysis because they are expressed as numbers, it is physically meaningless, no better than doing a stats analysis of the phone numbers for Los Angeles.
I personally don’t think the Team are doing this purposefully – they just don’t understand the physical science behind their maths.

michel
October 17, 2009 1:47 pm

Yes, this was an excellent posting by Lucia. Very clear. The thing people with less stats background find difficult to see is: when you do the selection of proxies to find which ones are good temperature readers, you are not ‘picking the good temperature readers’. You are just picking the ones that correlate well for the period you have temps for.
As Lucia says in the comments, there is nothing wrong with picking some trees rather than others. It is just that if you are using them as a proxy for temps, you cannot have temps as your selection criterion. This sounds totally unintuitive. The lay person’s natural impulse is to say, that is crazy, of course you should use temps, that is what you are interested in.
Well no, because to make the selection when using temps, in effect you assume what is required to be proven, that the same trees that correlate with temps in the period you use for selection, also did in the period you are trying to use them for. And the fact that they did in the period you are using for selection tells you nothing about whether they will correlate in the period you are trying to use them to measure.
Now, if you picked them on some other basis, like being long lived, well formed, whatever, and then came on a temp reconstruction that resulted, that might be legitimate, provided you could show some theoretical reason why that might make them better thermometers. Then the correlation of such an independently selected series with a given set of temp measurements would have some validation value.
The easy way to see it is to ask yourself, why does this correlation of temp with tree rings for this sample for the years 18xx to 19xx make me think that there will be the same correlation for the years 12xx to 14xx. As soon as you ask that, you see that you cannot get to any greater certainty about the second hypothesis from selecting on the basis of the first correlation.

Sam the Skeptic
October 17, 2009 2:02 pm

“… they just don’t understand the physical science behind their maths.”
Or perhaps, if John Reid is to be believed …
http://www.quadrant.org.au/magazine/issue/2009/10/climate-modelling-nonsense
they don’t understand the maths behind the physical science 🙂

Clayton Hollowell
October 17, 2009 2:29 pm

Louis Hissink wrote: “I personally don’t think the Team are doing this purposefully – they just don’t understand the physical science behind their maths.”
I’ve said it before (in the post directly above yours, at the latest), and now I’ll say it again, directly.
It’s not that these people don’t understand the physical science behind their math (they are highly educated, and probably very smart, professional scientists), it’s that they don’t understand the MATH behind their science.

Tom P
October 17, 2009 2:31 pm

Lucia:
“Does this mean it’s impossible to make a reliable reconstruction? No. It means you need to think very carefully about how you select your proxies. Just screening to match the current record is not an appropriate method.”
This issue has indeed been thought about very carefully. For example, Mann and coworkers in their supplement to their 2008 paper on proxies:
http://www.pnas.org/content/suppl/2008/09/02/0805721105.DCSupplemental/0805721105SI.pdf
“Although 484 (~40%) pass the temperature screening process over the full (1850 –1995) calibration interval, one would expect that no more than ~150 (~13%) of the proxy series would pass the screening procedure described above by chance alone. This observation indicates that selection bias, although potentially problematic when employing screened predictors… does not appear a significant problem in our case.”

Espen
October 17, 2009 2:34 pm

I’ve just started to read all the articles on this, especially Steve McIntyre’s work, and am still just flabbergasted. It seems to me that the hockey players are breaking the most fundamental principles of statistics (wrt. randomness of samples and using separate sample sets for model building and the actual tests), but I need to read a little more on PCA on time series to get a grasp of this.

Neo
October 17, 2009 2:56 pm

I once worked with a professional statistician. One day I asked him, "To what job, and with whom, do statisticians aspire?"
His reply was short and quick: The Tobacco Institute.
This "climate studies" field seems to be teeming with Tobacco Institute refugees now that the shine is off its previous beauty.

October 17, 2009 2:56 pm

It’s not just hockey sticks.
When I attended Caltech, in our freshman chemistry class we were treated to a lecture that is eventually given to everyone who graduates from Caltech, on the famous Millikan Oil Drop Experiment. The lecture was originally derived from Richard Feynman’s commencement lecture given in 1974, on cargo cult science, and the lecture goes something like this:
Robert Millikan (who the Millikan library at Caltech is named after) originally devised an experiment using drops of oil suspended in an electric field in order to measure the charge of an electron: if you know the field used to suspend the oil drop and you know its mass (by allowing it to fall in the absence of a field), you can calculate the force due to charges on oil drops, and by examining the different values of those charges you can extrapolate the electric charge of an electron.
Now, to quote Dr. Feynman:
“We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
“Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of – this history – because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong – and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.”
With all due respect to the late Dr. Feynman, it’s clear we still have that kind of a disease.
As an aside, one of the things that saddens me about most scientific educations is that most scientists never hear the story of the Millikan Oil Drop Experiment, or see the graph, shown in my chemistry class, giving the “accepted” value of the electron charge plotted over time, which shows this beautiful sinusoidal curve, with the largest uptick right around 1953.

michel
October 17, 2009 3:08 pm

Tom P
And how, exactly, are you going to show how many should correlate on a purely chance basis? Think about it!

October 17, 2009 3:09 pm

The reason this method is not so obviously flawed to the eye seems to be that there is a springiness to the median line that prevents it from very suddenly jerking down below the median when the thermometer data begins, in this case in 1960. Can the randomness be tweaked for which time period it varies in to alter this transition? Were a much larger number of random graphs used, two things would stand out clearly that would give a casual viewer pause: (1) there would be a PERFECT match to the thermometer data right down to what is clearly noise while there would be no visible noise in the pre-thermometer data at all, and (2) the dip right before temperature data appears would either be an abrupt 90 degree one or there would be none at all, meaning a completely and perfectly impossible horizontal "temperature" line followed by a perfect match of noisy thermometer data. So even using this method as is, if many hundreds of "good temperature signal" trees were used instead of less than a dozen each time, the graph itself would lack the noise that is required to appear data-like instead of obviously not at all data-like.
My curiosity is really that of why there is any springiness to the baseline at all. It is that very dip that makes a hockey stick look like real data and thus it is that dip (along with some noise) that fools the eye into not automatically doubting whether what you are looking at is too artificial looking. Imagine a yellow graph with pre-1960 data replaced by a ruler-drawn horizontal line.
The solution would be to go outside and get 10X-100X as much data, then accept their method and make a new hockey stick that thus becomes too perfect a match to thermometer data NOISE to be taken seriously!

Michael
October 17, 2009 3:09 pm

I remember that picture of cranes and cherry pickers. It was from an auction of unused and repoed construction equipment because of the real estate bust.

Patrik
October 17, 2009 3:16 pm

Wow! Great explanation!
I hadn’t even begun to grasp the real meaning of the critique of the "excluding bad matches" method before…
But I believe I see it clearly now. I think… 😮
Does this mean that if one was to move the 1960-1990 Hadley temp data to the horizontal centre of the graph, then both ends on the sides of it would "straighten out"?
I.e. if I select series that match the 1960-1990 Hadley data but do it as if they represent, say, 1930-1960 – I would then get a rather straight line with a bump in the middle instead of a hockey stick?
Because the “unmatched” parts of the series are more “truly random” than the matched data and will therefore always be represented by a somewhat straight line?
And; the more series one adds the smoother the line “outside” the matched area will be!?
This means that the more proxy studies they add – the more straight the blade of the hockey stick will become! 😀

Joel Shore
October 17, 2009 3:17 pm

Neo says:

This "climate studies" field seems to be teeming with Tobacco Institute refugees now that the shine is off its previous beauty.

Actually, the Tobacco Institute refugees seem to be mainly on the “skeptic” side of the climate change issue. These include Steven Milloy of JunkScience.com ( http://www.sourcewatch.org/index.php?title=Steven_Milloy ), the late Frederick Seitz ( http://www.sourcewatch.org/index.php?title=Frederick_Seitz ), and S. Fred Singer ( http://www.sourcewatch.org/index.php?title=Fred_Singer ).
As for this current post, the important point is made by Tom P: The potential for this sort of bias has been understood and that is why it is controlled for or various verification methods are used. Are those methods sufficient? I haven’t investigated well enough to know. However, that should be the question rather than just illustrating this fact that has been understood and then not looking into how it has been dealt with.

Patrik
October 17, 2009 3:19 pm

Tom P>> “would expect that no more than ~150 (~13%) of the proxy series would pass the screening procedure”
Interesting – how would they know this?

Michael
October 17, 2009 3:22 pm

[snip – moved to the correct thread where we cover this, thanks for the video tip – Anthony]

stevemcintyre
October 17, 2009 3:26 pm

Tom P (14:31:51) : I suggest that you read the contemporary CA posts on Mann et al 2008 (See the Categories in the left frame.) To say that “Mann and co-workers” dealt with this issue “very carefully” is laughable.

October 17, 2009 3:36 pm


Michael (15:09:30) :
I remember that picture of cranes and cherry pickers. It was from an auction of unused and repoed construction equipment because of the real estate bust.

I see loading forks on a number of those Gradalls and the JLG showing in the picture (meaning: they were used in warehouse operations).
Along I-55 in mid Illinois one passes an equipment yard where that kind of equipment can be seen year after year … I have a picture somewhere on this PC of that yard from about 2003 …

Patrik
October 17, 2009 3:41 pm

Tom P>> Would you agree with this statement:
IF the data is random and the method described by Lucia is used, THEN the more series you add, the straighter and smoother the "unmatched" parts of the series will be. At the same time, the more series you add, the more the "uptick part"/blade of the stick will resemble in detail the data you’re matching against.
?

Tony Hansen
October 17, 2009 3:45 pm

re Tom P ‘… The issue has indeed been thought about very carefully.’
Thinking about something in no way guarantees understanding.
Perhaps one might think on that.

stevemcintyre
October 17, 2009 4:02 pm

CA posts on this topic include http://www.climateaudit.org/?p=4908 4216 3821 3838 3858.
While the matter may have been thought about “very carefully”, unfortunately with Team articles, that often means that you also have to examine it very carefully.
If something is a temperature "proxy", then one would expect it to have significant correlation in both "early" and "late" periods as defined in M08. Only 342 of the ~400 do (and one of these is disqualified in my opinion because the "significant" correlation has a different sign in each period).
Further breaking down the 342 of 1209: 71 of 71 Luterbacher proxies pass – these use instrumental data and thus are irrelevant. Of the 927 code 9000 dendro proxies (the majority), only 143 (15.4%) pass the first-cut test. There are issues pertaining to AR1 autocorrelation that would reduce this. Also Mann picks the better of two correlations, goosing the number up a bit on the observed without adjusting the benchmark. I didn’t test how to do a proper benchmark – at this stage, one merely knows that Mann’s benchmark is fudged. However, 93 of 105 MXD proxies pass (these are the Rutherford RegEMed version of Briffa’s MXD proxies). These suffer from the divergence problem, and therefore Mann 2008 deleted the post-1960 portion of these proxies and substituted infilled data. I don’t recall offhand whether we figured out which version the correlations were calculated on. The Briffa MXD series are relatively short – none go back before AD1400, so they do not pertain to the MWP-modern comparison that is the main game.
Tom P, as usual, speaks with undue certainty on a matter where he has not familiarized himself with the relevant analysis.

October 17, 2009 4:11 pm

David Ball (13:12:03) : ” ‘Only interested in a few good series’ and ‘mission oriented’ [speak] volumes. The refusal to provide data for replication is inexcusable. When are the governing bodies going to admonish these people? After science is dead? Methinks the defibrillator is needed now.”
“She’s dead, Jim.”

AnonyMoose
October 17, 2009 4:25 pm

Instead of throwing out data, should the calibration instead adjust factors which are applied to all of the data? If a linear or geometric relationship doesn’t exist, that’s just too bad.

AnonyMoose
October 17, 2009 4:31 pm

While the matter may have been thought about “very carefully”,…

The Nobel Prize in Chemistry will go to someone who very carefully thought about how to change lead into gold. Someone who tried very hard and very sincerely. That the results are not gold is a detail, some of them do look like yellow metal.

Gordon Ford
October 17, 2009 5:10 pm

Excellent explanation Lucia.
This tale reminds me of the days long ago when I was evaluating gold properties for a mining company. One class of property was the gold-quartz vein with pockets of free (visible) gold. This type of property was typically owned by a junior mining company with a name like Free Gold Inc. or GMITS Inc. The owners, typically a prospector, a promoter and a couple of dentists, could proudly point to sections of the vein and truthfully say "John Doe (a reputable geologist whom I knew) took a sample from here that ran 10 ounces to the ton and over there he got 5 ounces to the ton."
On examining and sampling the exposed portions of the vein I would note the small pockets containing visible gold (a sampling QA/QC and statistical nightmare) and long sections of barren-looking quartz vein (a mining/grade control nightmare).
On receipt of the assay results from my sampling (typically one or two interesting values and a lot of trace or nils) I would write them a nice letter expressing our regrets that their property was not the type of gold property we were looking for and, as a courtesy, enclose a copy of the assay results of my sampling.
The owners were still convinced that they had a high-grade gold vein and had more assay reports to prove it. In reality they had an essentially barren quartz vein with small pockets containing erratically distributed flakes of gold.
I guess that it will always be that some people (possibly most people) will fixate on data that conforms to their beliefs and reject or ignore that which doesn’t.
My apologies to dentists, doctors and lawyers, but they kept cropping up time after time.

October 17, 2009 5:12 pm

Espen (14:34:30) : “I’ve just started to read all the articles on this, especially Steve McIntyre’s work, and am still just flabbergasted. It seems to me that they hockey players are breaking the most fundamental principles of statistics (wrt. randomness of samples and using separate sample sets for model building and the actual tests), but I need to read a little more on PCA on time series to get the grasp of this.”
Special rules seem to apply to climatology in general and to dendroclimatology in particular. The relevance of tree cores is extremely low, given that they are highly ambiguous per se and are being used to attempt to measure localized atmospheric temperatures. Any dendro climatic signals are easily obscured by weather signals. The oceans are ~1200 times as great a heatsink as the atmosphere. How significant can this type of study be? It’s astrology.

Michael
October 17, 2009 5:15 pm

"_Jim (15:36:58) :
I see loading forks on a number of those Gradalls and the JLG showing in the picture (meaning: they were used in warehouse operations)."
Jim,
Now I know what the AGW’ers feel like when they get debunked.
I do have a picture very similar though of construction equipment up at auction.

Pamela Gray
October 17, 2009 6:07 pm

There are cases in research when this is an acceptable practice (i.e., gathering data and then using only the subjects sensitive to the treatment). I did this in my research. It was necessary as we did not put our subjects to sleep (which would have caused nearly all subjects to be sensitive to the treatment). So instead we had to find subjects whose brains were quiet enough when awake to allow their auditory brainstem synaptic response to the signals to rise above the noisy synaptic brain when listening to high frequency tone pips. I even had my brain tested and it turned out to be WAAAYYYYY too noisy for any use in the study. A few years later I had an EEG done (I was put to sleep). It confirmed that even when asleep, my brain just keeps talking (go figure). I was able to find 6 subjects who had quiet brains. As a result we were able to demonstrate that the auditory pathway was sensitive to narrow frequency from the get-go (at the first synaptic junction as well as the later ones in the brainstem).
So I can understand why a researcher would think that looking through the trees to find the sensitive ones is okay to do. However the difference is this: In my case we were removing higher noisy responses that were not related to frequency response in the brainstem (the noise comes from higher synaptic junctions and could be the result of just “thinking”). So we needed to remove noisy brains or put all our subjects to sleep.
With trees, the rings demonstrate the tree’s response to the environment, which is the treatment under consideration. So all rings should be used within a stand that is subject to the same environment (meaning you should not mix trees in a meadow with trees growing next to a river bank). If the intent was to remove trees that were growing next to a stream from trees that were in the meadow, it would make sense to remove those rings from the data pool. Is it possible this is what was done?

glen martin
October 17, 2009 6:25 pm

“stevemcintyre (16:02:45) :
Of the 927 code 9000 dendro proxies (the majority), only 143 (15.4%) pass the first-cut test.”
Roughly what they claim would be expected by chance alone (~13%)

David Ball
October 17, 2009 6:44 pm

Joel Shore, speaking of things that have been understood for some time, the tree ring proxy has been known for at least three decades to have very little accuracy when dealing with temperature. As a child, I remember dinner table conversations between my father and his colleagues over this very issue. It was known to be problematic 30 years ago. Climate science had moved way beyond this stuff, and yet here we are arguing its validity today. The use of tree ring proxies is a joke, yet the people using them are still trying to show they are valid. To discuss the interpretation of this proxy data is a waste of time. Examine your basic assumptions.

climatebeagle
October 17, 2009 6:45 pm

Very nice article by Lucia and very clear & valid comment by michel (13:47:43)

p.g.sharrow "PG"
October 17, 2009 10:28 pm

I remember when research on tree rings was an important field as a proxy for climate favorability to local farm or food production in areas without written records. Sometimes this was used as a precipitation proxy and at other times as a temperature proxy by lazy theorists to prove their point.
Tree rings are a good proxy for plant growth conditions and little else. With nearly 60 years of growing trees and other plants, I can attest to that fact.

michel
October 17, 2009 11:49 pm

AnonyMoose (16:25:46) :
Instead of throwing out data, should the calibration instead adjust factors which are applied to all of the data? If a linear or geometric relationship doesn’t exist, that’s just too bad.
Yes, exactly. That’s the procedure that Pamela Gray describes. Call this Method A. You have a huge number of trees. You may have some reason for thinking some trees, and not others, are thermometers. They may, for instance, be of a particular species, particularly large, old, undamaged, regular in shape, irregular in shape, in a particular region. Whatever.
For some reason connected to an account of their biology you think you’ve found the thing that makes these particular ones accurate registers of temperature. So you pick them. Notice that so far you have not compared any of them to any temperature record.
THEN you plot the temperatures they indicate.
Now you check your results by comparing the temps they seem to indicate with a known real thermometer record you trust for part of the period you are interested in. They show a reasonably decent correlation for this period. Now you have some reason to find it plausible that this sample are thermometers, at least for some period. So you have some reason to think that they will also be thermometers in other periods.
The method Lucia is exposing, Method B, is this: you take the same initial large sample and the same temperature record, you compare your sample against the temperature record, and you throw out all the ones that do not correlate. This is not legitimate.
Again, you can see why, if you consider how likely the two methods are to lead to the same sub-sample. Only under one condition: that a temperature match in your sample period is only found in trees with some biological reason for being good thermometers throughout their lives. But that is the question at issue, so you’ve assumed what you are trying to prove. Method A provides some kind of test of whether your sample is a sample of thermometers, precisely because you did the picking before doing any temperature checks.
It is possible that Method B could work, and yield a sample of thermometers. How would you know? Well, you’d have to do Method A, find the key biological variables, then see if Method B also selected the trees with them. Bit of a waste of time then, doing Method B. In short, you have no way of avoiding doing Method A. Method B adds nothing.
You can also see the problem if you consider how certain you will be in the first case of correlating to temperature in any given period. Not at all a priori. It really will depend on whether your theory about the biology of the trees is correct, and that is one of the things you’ll be finding out in Method A, when you compare your runs against known temps.
Therefore, in Method A, correlating against temperatures really may give you some new information. In Method B, as the red noise example shows, it does not. It just picks the samples that show what you want them to show in the period you’ve picked for correlation.
Take this a step further. Someone criticizes your procedure in Method B by arguing that you just picked trees that correlate in your temperature measured time, and that this shows nothing about whether this sample correlated earlier or later. Aha, you say. I have calculated the results of picking from my tree sample randomly, and correlating the resulting sample against temperature in my period. If I do this, I find that very few of my trees correlate with temps. This shows I am not picking at random. I am picking ones that really do correlate.
Which of course, we knew all along was being done, and the problem was not that at all. It was that the procedure gave us no reason to think this sample would show the same correlations outside the measured temp period that it did within it.
There is no way around this one. You cannot do sampling like this. This is like the absorption spectrum of CO2, or gravity, this really is settled science. There is no point denying it. Its just that because its statistics, it can be a bit hard to get your head around. But it really is settled.
It does seem very odd that Climate Scientists should refuse to accept settled science when it comes to statistics, or should misrepresent what the settled science actually is – they have form on this, have a look at Tamino’s series on PCA – but accuse others of denialism when they point out that the science on this matter is not what they are saying it is. Doing and defending statistics in this way really is denialism.

Greg Cavanagh
October 18, 2009 12:47 am

Tom P (14:31:51) :
"Although 484 (~40%) pass the temperature screening process over the full (1850–1995) calibration interval, one would expect that no more than ~150 (~13%) of the proxy series would pass the screening procedure described above by chance alone. This observation indicates that selection bias, although potentially problematic when employing screened predictors… does not appear a significant problem in our case."
I’ve got an issue with this.
Of the 100% of data captured they know 13% is biased. They then discard 60% of the data keeping the remaining 40%, which still contains the 13% biased data. In effect they now have 32.5% biased data.

October 18, 2009 12:56 am

A Mathematica notebook, and its PDF preview, showing how you get a hockey stick out of red noise if you prefer the series that show warming at the end:
http://cid-9cd81cfa06ff7718.skydrive.live.com/self.aspx/.Public/mann-hockey-fun.pdf
http://cid-9cd81cfa06ff7718.skydrive.live.com/self.aspx/.Public/mann-hockey-fun.nb
http://motls.blogspot.com/2009/09/beaten-with-hockey-sticks-yamal-tree.html
The problem of the MBH98, MBH99, and similar papers was (and is!) that the algorithm preferred proxies – or trees (or their equivalents) – that showed a warming trend in the 20th century, assuming that this condition guaranteed that the trees were sensitive to temperature.
But even if such a 20th century trend occurred by chance for a certain tree (and a fraction of the trees inevitably satisfies this condition), the corresponding tree would influence Mann’s final graphs a lot. Effectively, the algorithm picked a lot of trees that didn’t show any correlation with the temperature but were instead composed of random data (red noise) before 1900 and an increasing trend in 1900-2000.
You can’t be surprised that the average of such trees looked like a hockey stick even if the temperature didn’t. The noise before 1900 averages to a constant temperature or something close to it while the 20th century warming survives.
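
A quick numerical check of this averaging argument (my own illustration, not taken from the Mathematica notebook linked above): outside the selection window the kept series are still mutually independent noise, so their average shrinks toward a constant roughly as 1/sqrt(k), which is exactly the flat "handle".

```python
import numpy as np

rng = np.random.default_rng(1)


def red_noise(n, r, rng):
    """AR(1) red noise with lag-1 autocorrelation r and unit variance."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = r * x[t - 1] + np.sqrt(1 - r ** 2) * rng.normal()
    return x


# Average k independent red-noise series (no screening at all) and watch the
# spread of the average shrink roughly as 1/sqrt(k). That shrinking spread is
# the flat pre-calibration "handle" of the screened reconstructions.
n, r = 1200, 0.9
for k in (1, 10, 100):
    avg = np.mean([red_noise(n, r, rng) for _ in range(k)], axis=0)
    print(f"{k:3d} series averaged: std of the average = {avg.std():.3f}")
```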

Tom P
October 18, 2009 2:01 am

stevemcintyre (16:02:45) :
“I didn’t test how to do a proper benchmark – at this stage, one merely knows that Mann’s benchmark is fudged.”
That’s what I call a statistical analysis! You promised back in January, in the original posting you referenced, to look further into "questionable Mannian benchmarks". Are you going to do this?

O. Weinzierl
October 18, 2009 2:28 am

Why not extend the timeline of the random data to 2100? You can get future climate data this way much more cheaply than by modelling, and with at least the same quality, probably even better.

Jack Simmons
October 18, 2009 2:31 am

dhmo (13:07:32) :

Look, I am not a scientist, but this whole thing smacks to me of snake oil. To believe that you can take trees and get temperatures precise enough to splice with modern records and give measurements to the tenth of a degree is delusional. Perhaps we should just chuck out our thermometers and use trees instead.

If it’s all the same to you, I would prefer to keep the rectal thermometer I have.

Patrik
October 18, 2009 2:38 am

Tom P>> Please enlighten us:
Please tell us how Mann et al know that one should expect ~13% conformity with measurements from a purely random source?
I’m not sure that I will understand the answer, but at least you owe the more mathematically gifted people here that.
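
One standard way to put a number like that on "pass by chance" is a Monte Carlo null test: generate many synthetic noise series with persistence similar to the proxies, screen them against the instrumental record exactly as the real proxies were screened, and count how often they pass. The Python sketch below shows the general idea only; the target series, the correlation cutoff, and the AR(1) noise model are my illustrative assumptions, not the actual benchmarks described in the Mann et al. supplement.

```python
import numpy as np

rng = np.random.default_rng(2)


def ar1(n, rho, rng):
    """Red noise with lag-1 autocorrelation rho and unit variance."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    return x


def chance_pass_rate(target, rho, r_crit, trials, rng):
    """Fraction of independent AR(1) series whose |correlation| with `target`
    exceeds `r_crit` purely by chance."""
    hits = sum(
        abs(np.corrcoef(ar1(len(target), rho, rng), target)[0, 1]) > r_crit
        for _ in range(trials)
    )
    return hits / trials


# Hypothetical target: 146 annual values standing in for an 1850-1995
# instrumental record; swap in the real series to test against it.
target = 0.05 * np.cumsum(rng.normal(size=146))

# r_crit ~ 0.16 is roughly the two-sided p = 0.05 threshold for 146
# *independent* annual values. The pass rate rises sharply with the assumed
# persistence of the noise, which is exactly what this argument turns on.
for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: pass rate = {chance_pass_rate(target, rho, 0.16, 2000, rng):.3f}")
```

Whether ~13% is the right null rate therefore depends entirely on how realistic the assumed noise model and screening rule are, which is the point in dispute here.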

Bulldust
October 18, 2009 2:38 am

Sooner or later someone is going to come up with something witty regarding Pres Washington and cherry trees… I can feel it in my waters. Somehow I sense a missed opportunity that he did not graph the rings.
Anywho… as for data & statistics, look no further than Aesop (and that IS going back a ways):
“We can easily represent things as we wish them to be.”
Tru dat.

Patrik
October 18, 2009 2:54 am

Greg Cavanagh (00:47:35)>>
I’m not sure that is the context of the Mann quote. I reacted the same way as you first, but:
I believe they are saying that if the conforming sources amounted to only ~13%, then those series would probably be of a random nature.
This actually doesn’t mean that ~13% is always random noise.
I’m still wondering how on earth they would know that 13% is the magic number.
Perhaps there is some statistical algorithm that can resolve this likelihood?
I’m waiting for Tom P to give us this algorithm.
On the other hand, I really don’t see why such an algorithm would be of any significance, since:
A) The sources are organic and may well have developed differently during different time spans.
B) Point A) means that any source, or none, could be valid BEFORE ~1850, totally unrelated to any mathematical formulae.
I believe that A and B are supported by people who know much about trees and vegetation in general.
And if A and B are true, then this is also true:
C) The probability of pre-1850 data being noise, as opposed to temp data, must be resolved also.
If C) is resolved and it is found that the probability of these data being temp is sufficiently high, then one should proceed to:
D) Gather measured temp data from exactly those areas where the samples were collected (which would probably only leave satellite temp data from ~1980 to compare with) and then:
E) Start over again from A), and only do matching against the actually measured data (satellite) from the exact areas where one has gathered the proxy data.
I believe the above A-E process is impossible, since there will probably be a huge gap between available proxies and temp data from ~1980, but even then this process would only reveal how T might have fluctuated in these actual areas of proxy collection.
I believe there is a looong way to wander before any significance can be given to these pre-measurement data used in reconstructions.

UK Sceptic
October 18, 2009 4:47 am

I have always thought it ironic that Gore used a cherry picker (what we in the UK call an elevator machine) to demonstrate the Hockey Stick graph. Now, with Mr McIntyre lifting the lid on the diddled dendro revelations, the irony is all the more delicious.

Stefan
October 18, 2009 5:30 am

I like Lucia’s simple article, very informative, and fun too! I like that it demonstrates things in a way a layperson can understand.
One of the things that has often troubled me about Climate Science, is the attitude that, the science is so complex that only experts are qualified to judge. And yet, sooner or later, one has to summarise, and the summary needs to make sense. I’m sure one needs to be expert to grasp the details, but one need only have a brain to grasp the logic.
As for the suggestion that the biases have already been corrected or accounted for, well, what can I say. I’ve been taught a little something by various experts on how people deceive themselves, how I deceive myself, and I know that it takes a lot of introspection and testing to actually get out of that hole. The automatic response is to create a smoke screen, and I emphasise, it is automatic – the person under the illusion doesn’t know it. Just watch a friend say he’s planning on starting a diet on Jan 1st, as a New Year’s resolution, and then watch in January and February all the reasons the friend gives for why it isn’t the right time to start yet. All the reasons are reasonable and make sense. We all know this. No expertise required.
So when scientists are criticised, their supporters can claim, “oh, we already knew that! we’ve already corrected!” The more truthful can say, “oh, well, um… I’m sure they probably corrected for that already!”
The cool thing about being a layperson or not having a vested interest in the topic, is that there is no pressure to be right about it. I mean, this is what lay AGW supporters keep saying, they keep claiming that various scientists are paid by oil industry–in effect affirming that scientists can be easily biased, and yet any scientist in favour of AGW is somehow so immune to bias that they can self-correct for any and all bias–and we can even assume that they have done this, to the point that we should be the ones to jump through hoops to disprove this, rather than them jumping through hoops to show that they have.
I’m sure my friend is very right, that he is already perfectly well aware that he didn’t start the diet, and as he says that’s for some very good reasons–plus he’s already planning a new diet that’ll be twice as effective–and he doesn’t need to say what they are, for what business is it of mine anyway.

Mike Kelley
October 18, 2009 8:20 am

As the brilliant card magician Ricky Jay said: “Every profession is a conspiracy against the laity”.

J.K.
October 18, 2009 8:56 am

Al Gore didn’t use an elevator. It’s called a scissor lift. 😉

Pamela Gray
October 18, 2009 9:52 am

Method A is exactly what was done. We used a broad frequency click to measure subjects first. If their brainstem response was easy to read, we put them in the next group. If the response was too noisy (that higher brain problem), we rejected the subject for inclusion. We then selected the 6 easiest to read click responders from the second group and performed the rest of the procedures. Why didn’t we just put them all to sleep? The study was expensive and we had a very limited budget. Putting them to sleep would have enlarged the costs way beyond our budget. And the literature was already filled with comparisons between the response ability of the auditory pathway in sleeping brains and quiet brains, demonstrating very little difference. So our method was valid.

Carl
October 18, 2009 10:43 am

Those of us in the financial markets (traders, analysts) are well aware of such biases, as are people in data modeling in general, myself included. At a *minimum*, you must fit data on one set and validate on another. In our analysis, as we search for models, we fit one set, validate on a second to evaluate model performance, then validate on a third set to select models, and finally hold back a FOURTH set to make sure the entire modeling process wasn’t blind luck. Good performance means your "fit" matches on ALL data sets EQUALLY.
In matters of major importance, such as taxation of masses and suppression of standards of living, such modeling should stand to the highest rigors of validation.
Thanks,
Carl
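
A bare-bones illustration of the multi-window discipline Carl describes, assuming only that the record arrives as one long time-ordered array; the split into four equal windows and the names are arbitrary choices for the sketch, not Carl’s actual workflow:

```python
import numpy as np

data = np.arange(2000)   # stand-in for any time-ordered record

# Partition once, up front; later windows must never influence earlier choices.
fit, validate, select, holdout = np.array_split(data, 4)

# fit      -> estimate model parameters
# validate -> compare candidate models out of sample
# select   -> choose among the surviving candidates
# holdout  -> touched exactly once, at the very end, to confirm the whole
#             process wasn't blind luck
print(len(fit), len(validate), len(select), len(holdout))
```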

David Ball
October 18, 2009 12:02 pm

TomP, I do not believe you are in a position to make any recommendations on how Steve Mcintyre should make use of his time. In the past, I have found my life to be on the wrong tack. Course corrections can be difficult and humbling, but in the end, were very rewarding. We can help you through this.

hotrod
October 18, 2009 12:14 pm

Suppose you had 50 years of good temperature data from modern thermometers, and you first selected your trees using only the last 25 years of data, then you checked them by seeing how they compare to the first 25 years of data? Would that sort of test reliably tell you that the process of selection was broken?
Just curious how you would go about “proving” to a true believer that the process is broken using their own data.
Larry

Carl
October 18, 2009 1:08 pm

hotrod,
"Suppose you had 50 years of good temperature data from modern thermometers, and you first selected your trees using only the last 25 years of data, then you checked them by seeing how they compare to the first 25 years of data? Would that sort of test reliably tell you that the process of selection was broken?"
That would be only the most primitive first step and probably not sufficient. The field of model building and validation is substantial and covers more than can be posted here as a comment, but includes things such as model parsimony (degrees of freedom), bootstrapping within and across multiple windows (data sets) to prove model reliability and fit for purpose, extensive analysis of errors, the performance on data after the model building and validation process is complete (true out-of-sample) and much more.
Data snooping, overt and subtle, can cause models to look valid after doing all the above on historical data. It’s easy for your presumptions and biases to leak into the process (the subject of the article), but ongoing performance after you say “there, I’m done” is the final true test. If the model’s performance, which can be measured in many ways, degrades going forward, you find out you weren’t as successful as you thought, which is often the case, I’m afraid. We live in a non-stationary, highly interconnected universe and to claim that you have a model that works in all cases for all time would be foolish. Even Newton was “wrong”. The best you can expect is a workable (useful) approximation.
Thanks,
Carl

michel
October 18, 2009 1:58 pm

"Suppose you had 50 years of good temperature data from modern thermometers, and you first selected your trees using only the last 25 years of data, then you checked them by seeing how they compare to the first 25 years of data? Would that sort of test reliably tell you that the process of selection was broken?"
This is what they should have done to confirm their selection method. Easy really. They picked the trees that were good thermometers in the modern period. Now the question was, were they also good thermometers in the distant past. All they had to do is find out what the temps were in the distant past, then compare what their selected trees said they were. They must have done that, mustn’t they? That’s how they would confirm their selection method really works.
But wait a minute. I just thought of a problem with that. The reason we were using these trees is because they didn’t got no thermometers back in that distant past. So how do we go about validating that the modern selected trees, just because they match the modern record, also matched the distant past record?
Round about this point you realize the only thing to do is accuse your critics of being a close relative of Dick Cheney, or maybe having worked for Exxon, or been associated with the tobacco lobby, or perhaps being a creationist. Because the statistics, well, on that basis, you are sunk.

Joel Shore
October 18, 2009 3:26 pm

michel:

Round about this point you realize the only thing to do is accuse your critics of being a close relative of Dick Cheney, or maybe having worked for Exxon, or been associated with the tobacco lobby, or perhaps being a creationist.

Actually, both sides tend to accuse the other side of associations of this type. The difference is that one side suggests that the Heartland Institute might not be the most unbiased source of information while the other side claims there is some kind of mass conspiracy or collusion / distortion brought about by funding that has made the IPCC, the National Academy of Sciences and the analogous bodies in all the other G8+5 countries, AAAS, and the councils of most of the major scientific societies (APS, AMS, AGU, …) untrustworthy. Do you see the difference?
Also, since you mentioned Exxon, it is worth noting that Exxon is now light-years ahead of the “skeptic” community here in publicly accepting the science of AGW: http://www.exxonmobil.com/Corporate/energy_climate_views.aspx

bill
October 18, 2009 3:38 pm

But surely the random sequences added together are just that, random. Because they are random there will be random sequences that conform to any curve required, but outside the conformance the sequence will fall back to random = average zero.
Surely what is being proposed is that tree growth is controlled by many factors: no randomness, just noise and a combination of factors.
Trees will not grow at -40C; trees will not grow at +100C.
Trees do grow well at a temp in between (all else being satisfactory).
Choosing trees that grow in tune with the temperature means that if they extend beyond the temp record then there is a greater possibility that they will continue to grow in tune with the temp. If they grow to a different tune then they are invalid responders.
A long time ago I posted a sequence of pictures showing what can be obtained by adding and averaging a sequence of aligned photos – the only visible data was the church and sky glow. I added 128 of these images together and obtained this photo:
http://img514.imageshack.us/img514/1989/128imagesaddednootheradub9.jpg
Note that it also shows the imperfections in the digital sensor (the window frame effect)
Image shack did have a single image with the gamma turned up to reveal the only visible image (Church+sky) but they’ve lost it!
The picture was taken in near dark conditions.
A flash photo of the same:
http://img514.imageshack.us/img514/6475/singleflashulpn3.jpg
By removing all invalid data (pictures of the wife, the kids, flowers etc) that do not have the church and sky, a reasonable picture of the back garden appears from the noise.
Of course I may have included a few dark pictures with 2 streetlights in those locations, but with enough of the correct image these will have a lessening effect.

Tom P
October 18, 2009 3:41 pm

wattsupwiththat (12:08:12) :
“If you are so concerned, why not do the work yourself and publish it with your name on it?”
It’s Steve McIntyre who is making the accusation against Mann with his description of “fudged” benchmarks. The burden is on Steve to publish and prove Mann wrong.
In any event I very much doubt I could get a publication if all I did was reproduce Mann’s paper.

October 18, 2009 4:03 pm

Joel Shore (15:26:16) :
[…]
Also, since you mentioned Exxon, it is worth noting that Exxon is now light-years ahead of the “skeptic” community here in publicly accepting the science of AGW:

How, exactly, is ExxonMobil supposed to act when the US Senate is threatening them with a tobacco-style inquisition if they don’t start toeing the line…

October 27, 2006
Mr. Rex W. Tillerson
Chairman and Chief Executive Officer
ExxonMobil Corporation
5959 Las Colinas Boulevard
Irving, TX 75039
Dear Mr. Tillerson:
Allow us to take this opportunity to congratulate you on your first year as Chairman and Chief Executive Officer of the ExxonMobil Corporation. You will become the public face of an undisputed leader in the world energy industry, and a company that plays a vital role in our national economy. As that public face, you will have the ability and responsibility to lead ExxonMobil toward its rightful place as a good corporate and global citizen.
We are writing to appeal to your sense of stewardship of that corporate citizenship as U.S. Senators concerned about the credibility of the United States in the international community, and as Americans concerned that one of our most prestigious corporations has done much in the past to adversely affect that credibility. We are convinced that ExxonMobil’s longstanding support of a small cadre of global climate change skeptics, and those skeptics’ access to and influence on government policymakers, have made it increasingly difficult for the United States to demonstrate the moral clarity it needs across all facets of its diplomacy.
Obviously, other factors complicate our foreign policy. However, we are persuaded that the climate change denial strategy carried out by and for ExxonMobil has helped foster the perception that the United States is insensitive to a matter of great urgency for all of mankind, and has thus damaged the stature of our nation internationally. It is our hope that under your leadership, ExxonMobil would end its dangerous support of the “deniers.” Likewise, we look to you to guide ExxonMobil to capitalize on its significant resources and prominent industry position to assist this country in taking its appropriate leadership role in promoting the technological innovation necessary to address climate change and in fashioning a truly global solution to what is undeniably a global problem.
While ExxonMobil’s activity in this area is well-documented, we are somewhat encouraged by developments that have come to light during your brief tenure. We fervently hope that reports that ExxonMobil intends to end its funding of the climate change denial campaign of the Competitive Enterprise Institute (CEI) are true. Similarly, we have seen press reports that your British subsidiary has told the Royal Society, Great Britain’s foremost scientific academy, that ExxonMobil will stop funding other organizations with similar purposes. However, a casual review of available literature, as performed by personnel for the Royal Society reveals that ExxonMobil is or has been the primary funding source for the “skepticism” of not only CEI, but for dozens of other overlapping and interlocking front groups sharing the same obfuscation agenda. For this reason, we share the goal of the Royal Society that ExxonMobil “come clean” about its past denial activities, and that the corporation take positive steps by a date certain toward a new and more responsible corporate citizenship.
ExxonMobil is not alone in jeopardizing the credibility and stature of the United States. Large corporations in related industries have joined ExxonMobil to provide significant and consistent financial support of this pseudo-scientific, non-peer reviewed echo chamber. The goal has not been to prevail in the scientific debate, but to obscure it. This climate change denial confederacy has exerted an influence out of all proportion to its size or relative scientific credibility. Through relentless pressure on the media to present the issue “objectively,” and by challenging the consensus on climate change science by misstating both the nature of what “consensus” means and what this particular consensus is, ExxonMobil and its allies have confused the public and given cover to a few senior elected and appointed government officials whose positions and opinions enable them to damage U.S. credibility abroad.
Climate change denial has been so effective because the “denial community” has mischaracterized the necessarily guarded language of serious scientific dialogue as vagueness and uncertainty. Mainstream media outlets, attacked for being biased, help lend credence to skeptics’ views, regardless of their scientific integrity, by giving them relatively equal standing with legitimate scientists. ExxonMobil is responsible for much of this bogus scientific “debate” and the demand for what the deniers cynically refer to as “sound science.”
A study to be released in November by an American scientific group will expose ExxonMobil as the primary funder of no fewer than 29 climate change denial front groups in 2004 alone. Besides a shared goal, these groups often featured common staffs and board members. The study will estimate that ExxonMobil has spent more than $19 million since the late 1990s on a strategy of “information laundering,” or enabling a small number of professional skeptics working through scientific-sounding organizations to funnel their viewpoints through non-peer-reviewed websites such as Tech Central Station. The Internet has provided ExxonMobil the means to wreak its havoc on U.S. credibility, while avoiding the rigors of refereed journals. While deniers can easily post something calling into question the scientific consensus on climate change, not a single refereed article in more than a decade has sought to refute it.
Indeed, while the group of outliers funded by ExxonMobil has had some success in the court of public opinion, it has failed miserably in confusing, much less convincing, the legitimate scientific community. Rather, what has emerged and continues to withstand the carefully crafted denial strategy is an insurmountable scientific consensus on both the problem and causation of climate change. Instead of the narrow and inward-looking universe of the deniers, the legitimate scientific community has developed its views on climate change through rigorous peer-reviewed research and writing across all climate-related disciplines and in virtually every country on the globe.
Where most scientists’ dispassionate review of the facts has moved past acknowledgement to mitigation strategies, ExxonMobil’s contribution to the overall politicization of science has merely bolstered the views of U.S. government officials satisfied to do nothing. Rather than investing in the development of technologies that might see us through this crisis–and which may rival the computer as a wellspring of near-term economic growth around the world–ExxonMobil and its partners in denial have manufactured controversy, sown doubt, and impeded progress with strategies all-too reminiscent of those used by the tobacco industry for so many years. The net result of this unfortunate campaign has been a diminution of this nation’s ability to act internationally, and not only in environmental matters.
In light of the adverse impacts still resulting from your corporation’s activities, we must request that ExxonMobil end any further financial assistance or other support to groups or individuals whose public advocacy has contributed to the small, but unfortunately effective, climate change denial myth. Further, we believe ExxonMobil should take additional steps to improve the public debate, and consequently the reputation of the United States. We would recommend that ExxonMobil publicly acknowledge both the reality of climate change and the role of humans in causing or exacerbating it. Second, ExxonMobil should repudiate its climate change denial campaign and make public its funding history. Finally, we believe that there would be a benefit to the United States if one of the world’s largest carbon emitters headquartered here devoted at least some of the money it has invested in climate change denial pseudo-science to global remediation efforts. We believe this would be especially important in the developing world, where the disastrous effects of global climate change are likely to have their most immediate and calamitous impacts.
Each of us is committed to seeing the United States officially reengage and demonstrate leadership on the issue of global climate change. We are ready to work with you and any other past corporate sponsor of the denial campaign on proactive strategies to promote energy efficiency, to expand the use of clean, alternative, and renewable fuels, to accelerate innovation to responsibly extend the useful life of our fossil fuel reserves, and to foster greater understanding of the necessity of action on a truly global scale before it is too late.
Sincerely,
John D. Rockefeller IV
Olympia Snowe

Very McCarthy-esque… “Are you now, or have you ever been, an AGW denier?”
One would have to assume that the world’s largest organization of sedimentary geologists, the AAPG, is one of the “groups… whose public advocacy has contributed to the small, but unfortunately effective, climate change denial myth.”
The biggest lie in all of this is the phrase “climate change denial.” None of the serious AGW skeptics has ever denied “climate change.” Our main point is that the climate is always changing.

stevemcintyre
October 18, 2009 5:23 pm

Tom P, oh puh-leeze. Mann developed an unheard-of pick-two procedure. The benchmarks are going to be higher than for the pick-one procedure Mann used. How much higher? I didn’t try to simulate it at the time, but really that should have been Mann’s job rather than not allowing for the procedure at all. My point here was that it was going to be a higher benchmark than the one reported by Mann, and that the dendro proxies (other than the RegEMed MXD series) were already more or less at a random yield.

David Ball
October 18, 2009 6:33 pm

Joel Shore said “light years ahead of this blog”. Light years in the wrong direction. By the way, Joel, condescending means “to talk down to”. Just so you know, …. 8^]

Alan S. Blue
October 18, 2009 7:12 pm

It is worth noting Jeff Id’s demonstration of using Mann’s method on a large grouping of random data.
After you get through finding spurious sine waves and step functions, he demonstrates how even the slightest imperfection in “the true temperature” can drive the entire calculation off into the weeds. Quickly.
It would be interesting, IMNSHO, to see the discrepancies between the normal Mann method and the Mann method applied strictly to the satellite era, with the local grid-cell temperatures as determined by satellites.

Joel Shore
October 18, 2009 7:29 pm

Dave Middleton says:

Very McCarthy-esque… “Are you now, or have you ever been, an AGW denier?”

It doesn’t sound that McCarthy-esque to me. They are just calling Exxon-Mobil out on some deceptive practices. As they note, it is not as if Exxon was funding peer-reviewed science that was attempting to prevail in the scientific community. Rather they were funding public obfuscation of peer-reviewed science.
If you want to talk about something McCarthy-esque, you could talk about Barton’s inquisition of Mann and his co-authors, which was so bad that even fellow Republican Sherwood Boehlert, Chair of the House Science Committee, found it extremely objectionable ( http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/000497letter_from_boehlert.html ):

I am writing to express my strenuous objections to what I see as the misguided and illegitimate investigation you have launched concerning Dr. Michael Mann, his co-authors and sponsors.

My primary concern about your investigation is that its purpose seems to be to intimidate scientists rather than to learn from them, and to substitute Congressional political review for scientific peer review. This would be pernicious.
It is certainly appropriate for Congress to try to understand scientific disputes that impinge on public policy. There are many ways for us to do that, including hearings with a balanced set of witnesses, briefings with scientists, and requests for reviews by the National Academy of Sciences or other experts.
But you have taken a decidedly different approach – one that breaks with precedent and raises the specter of politicians opening investigations against any scientist who reaches a conclusion that makes the political elite uncomfortable.

The precedent your investigation sets is truly chilling. Are scientists now supposed to look over their shoulders to determine if their conclusions might prompt a Congressional inquiry no matter how legitimate their work? If Congress wants public policy to be informed by scientific research, then it has to allow that research to operate outside the political realm. Your inquiry seeks to erase that line between science and politics.

Of course, various scientific organizations like AAAS also weighed in on the heavy-handed tactics of Congressman Barton. But I didn’t hear a lot of concern about McCarthy-esque tactics coming from the “skeptic” community on that one.

Carrick
October 18, 2009 8:39 pm

Joel Shore, I find it a bit odd that you don’t mention the pro-global-warming funding from ExxonMobil.
If you sum the pro- and anti-global-warming studies funded by ExxonMobil, care to guess which is larger?
The problem for people like you isn’t that they exclusively fund climate skeptics (which they don’t fund at all now, btw), it’s that they ever funded them at all.

michel
October 19, 2009 12:33 am

The thing that is destroying the AGW movement from within is that it cannot admit any error, ever, past or present, by any member of the Nomenklatura.
And so we find RC still defending MBH98, and Tamino asserting that the PCA methods used in it are legitimate and approved of by Ian Jolliffe, and now we find a chorus of people asserting that Briffa’s selection procedures are entirely normal and legitimate, despite the fact that this stuff is what stats students are taught not to do in the first course. We find the surface station record defended in its entirety, down to the last station.
The funny thing about it, really, is that it was the AGW movement which began by calling dissidents ‘denialists’, but it is now way outscoring its worst betes noires on denialism. What is coming out of the AGW movement on this question of the Hockey Stick, and the Mann and Briffa studies, is now total denialism of settled science. We know, everyone knows, that you just cannot do sampling and statistics like that.
If they could just admit that, get rid of the studies, drop the Hockey Stick, and move on, they would have a chance. There are a lot more serious indicators that warming is an issue. But they will not or cannot. And so the thing is snowballing all the time, and with every desperate attempt to cover it up, it gets worse and they lose more credibility.
As always, it’s not the mistake that sinks you, it’s the cover-up.

Dave Middleton
October 19, 2009 3:39 am

Joel Shore (19:29:22) :
Dave Middleton says:
Very McCarthy-esque… “Are you now, or have you ever been, an AGW denier?”

It doesn’t sound that McCarthy-esque to me. They are just calling Exxon-Mobil out on some deceptive practices. As they note, it is not as if Exxon was funding peer-reviewed science that was attempting to prevail in the scientific community. Rather they were funding public obfuscation of peer-reviewed science.
If you want to talk about something McCarthy-esque, you could talk about Barton’s inquisition of Mann and his co-authors, which was so bad that even fellow Republican Sherwood Boehlert, Chair of the House Science Committee, found it extremely objectionable…
[…]

Barton wasn’t threatening anyone or any corporation, nor was he demanding that anyone or any corporation spend their own money in an Al Gore-approved manner.
Barton asked Wegman to check Mann’s work because the peers who supposedly reviewed MBH98/99 failed to do so.
This is not a Republican v Democrat thing. A lot of Republicans have fallen for this AGW scam. Enough Republicans have fallen for it, that we will almost certainly have some sort of massive carbon tax imposed on our economy. So, when oil, natural gas and electricity prices go through the roof while our gov’t is artificially restricting supplies over the next few decades, the AGW crowd better hope that the cooling over the next ~25 years is more like 1942 to 1976 than it will be like the Dalton or Maunder Minima… Because our response will be, “Y’All can freeze in the dark for all we care.”

October 19, 2009 3:41 am

I think the Spam filter grabbed my last post… I’m not sure why.

Tim Clark
October 19, 2009 9:30 am

Joel Shore (19:29:22) :
If you want to talk about something McCarthy-esque, you could talk about Barton’s inquisition of Mann and his co-authors, which was so bad that even fellow Republican Sherwood Boehlert, Chair of the House Science Committee, found it extremely objectionable.

I do not take the scientific credentials of this man seriously. In fact, I find it extremely objectionable even referencing him on a science blog. In addition, as you are quick to point out, he has not been peer-reviewed in a scientific journal.
Boehlert was born in Utica, New York, to Elizabeth Monica Champoux and Sherwood Boehlert, and graduated with a B.A. from Utica College, Utica, N.Y., in 1961. He served two years in the United States Army (1956–1958) and then worked as a manager of public relations for Wyandotte Chemical Company. After leaving Wyandotte, Boehlert served as Chief of Staff for two upstate Congressmen, Alexander Pirnie and Donald J. Mitchell; following this, he was elected county executive of Oneida County, New York, serving from 1979 to 1983. After his four-year term as county executive, he ran successfully for Congress.
Since 2007, Boehlert has remained active promoting environmental and scientific causes. He serves currently on the Board of the bipartisan Alliance for Climate Protection chaired by former Vice President Al Gore.

Joel Shore
October 19, 2009 10:54 am

Dave Middleton says:

Barton wasn’t threatening anyone or any corporation, nor was he demanding that anyone or any corporation spend their own money in an Al Gore-approved manner.

Well, Congressman Boehlert found Barton’s approach to be quite intimidating to scientists, as did scientific organizations like AAAS ( http://www.aaas.org/news/releases/2005/0714letter.pdf ) and Ralph Cicerone, the head of the National Academy of Sciences ( http://www.realclimate.org/Cicerone_to_Barton.pdf )
And, for that matter, I don’t see Sen. Rockefeller and Snowe’s letter as being threatening or demanding. They were just making their views known and requesting that Exxon act as a better corporate citizen.

This is not a Republican v Democrat thing. A lot of Republicans have fallen for this AGW scam. Enough Republicans have fallen for it, that we will almost certainly have some sort of massive carbon tax imposed on our economy.

What I would say is that there are a fair number of Republicans who are not completely ideologically-blinded and are thus actually listening to the scientific community on this issue.

the AGW crowd better hope that the cooling over the next ~25 years is more like 1942 to 1976 than it will be like the Dalton or Maunder Minima… Because our response will be, “Y’All can freeze in the dark for all we care.”

And, what happens if the warming continues at about the average rate it has between the mid-1970s and now?

Dave Middleton
October 19, 2009 11:41 am

Joel Shore (10:54:31) :
[…]
And, what happens if the warming continues at about the average rate it has between the mid-1970s and now?

To quote Bill Cosby, “How long can you tread water?”
It would be the end of the PDO. The Earth did warm from somewhere around 1976 to somewhere around 2003. In many of the surface stations, particularly along the Pacific coast of North America, the Climate Shift of 1976 (PDO shift) is very obvious. Somewhere between 2003 and 2007, the PDO shifted back to negative.
Where do you start calculating your linear temperature trend?
What would have happened if the Earth had continued to cool at the rate it had from 1942-1976? We’d be back in Little Ice Age conditions. What would have happened if the Earth had not cooled from 1942-1976? The rate of warming from 1908-1942 was almost exactly the same as it was from 1977-2005. Without that ~30-year cooling period, the Earth might be getting close to as warm as it was in the Sangamon interglacial.
Back to your question…”And what happens if the warming continues at about the average rate it has between the mid-1970s and now?”

A) If I’m wrong and we just maintain business as usual, we will have the economic resources to deal with slightly elevated sea levels, longer growing seasons and occasionally more powerful hurricanes. We will also have better access to the mineral resources (90 billion barrels of oil, 1,669 trillion cubic feet of natural gas, and 44 billion barrels of natural gas liquids) north of the Arctic Circle.
B) If I’m wrong and we bankrupt ourselves pursuing an 80% reduction in carbon emissions by 2050 (80 X 50)… Our efforts may or may not reverse or even alter AGW. Even if our efforts eliminate AGW, the natural climate cycles might just be indistinguishable from the AGW-cycles.
C) If I’m right and we bankrupt ourselves pursuing an 80% reduction in carbon emissions by 2050 (80 X 50)… We’ll get an up close and personal reprise of the Dark Ages.
D) If I’m right and we just maintain business as usual, we will have the economic resources to deal with the natural cycles of climate change that will always be with us.

It will be really easy to tell if I’m wrong… The satellite temperature data will quickly revert to the pre-2003 trend within the next few years.

Joel Shore
October 19, 2009 12:46 pm

Dave Middleton: Your A) – D) scenario seems to rely on assuming that even if you are wrong, the effects of climate change are less severe than most projections and that the economic effects of mitigating climate change are much, much larger than most projections.

It will be really easy to tell if I’m wrong… The satellite temperature data will quickly revert to the pre-2003 trend within the next few years.

Indeed it will. Here is a plot where I added in a few more linear fits over similar time periods to see how well similar extrapolations might have done in the past: http://www.woodfortrees.org/plot/uah/plot/uah/from:2003/trend/plot/uah/to:2003/trend/plot/uah/from:1979/to:1986/trend/plot/uah/from:1988/to:1995/trend

Dave Middleton
October 19, 2009 2:51 pm

Joel Shore (12:46:57) :
Dave Middleton: Your A) – D) scenario seems to rely on assuming that even if you are wrong, the effects of climate change are less severe than most projections and that the economic effects of mitigating climate change are much, much larger than most projections.
It will be really easy to tell if I’m wrong… The satellite temperature data will quickly revert to the pre-2003 trend within the next few years.
Indeed it will. Here is a plot where I added in a few more linear fits over similar time periods to see how well similar extrapolations might have done in the past…

You can actually “fit” all of the warming trend in the UAH series into one 63-month period from January 1995 to March 2000. The trend was flat before January 1995 and after March 2000.
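A minimal sketch of how such sub-period trends can be checked is below. It assumes a hypothetical local CSV of monthly anomalies; the file name, column names, and layout are placeholders, not the actual UAH data format.

```python
import numpy as np
import pandas as pd

# Hypothetical file of monthly lower-troposphere anomalies with columns
# "date" (YYYY-MM) and "anom" (deg C); the real UAH file differs in layout.
df = pd.read_csv("uah_monthly.csv", parse_dates=["date"]).set_index("date")

def trend_per_decade(series):
    """OLS slope of a monthly series, expressed in deg C per decade."""
    t = np.arange(len(series)) / 120.0   # time in decades
    slope, _ = np.polyfit(t, series.values, 1)
    return slope

for label, chunk in [
    ("before Jan 1995", df.loc[:"1994-12", "anom"]),
    ("Jan 1995 - Mar 2000", df.loc["1995-01":"2000-03", "anom"]),
    ("after Mar 2000", df.loc["2000-04":, "anom"]),
]:
    print(f"{label:22s} {trend_per_decade(chunk):+.3f} C/decade")
```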

October 20, 2009 1:26 am

michel (00:33:18) : The thing that is destroying the AGW movement from within is that it cannot admit any error, ever, past or present, by any member of the Nomenklatura…
Well put; the whole post deserves QOTW.

MrAce
October 20, 2009 12:09 pm

@Patrik
“Please tell us how Mann et al knows that one should expect ~13% conformity with measurements from a purely random source?”
Generate 100 groups of 100 random series and count for each group the series that match. Average this over the 100 groups. This number turned out to be 13%. So if your group of 100 series has 40 matches, it is probably not random.
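A minimal sketch of MrAce’s recipe follows. The series length, the red-noise stand-in for the temperature record, and the |r| > 0.3 “match” threshold are illustrative assumptions, not the screening rule of any published reconstruction, so the printed fraction will not necessarily be 13%.

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo: how many of 100 random series "match" a target purely by chance?
n_months, n_series, n_groups, r_cut = 468, 100, 100, 0.3
target = np.cumsum(rng.normal(size=n_months))   # red-noise "temperature" stand-in

match_fractions = []
for _ in range(n_groups):
    # one group of 100 random-walk "proxies"
    series = np.cumsum(rng.normal(size=(n_series, n_months)), axis=1)
    r = np.array([np.corrcoef(s, target)[0, 1] for s in series])
    match_fractions.append(np.mean(np.abs(r) > r_cut))

print(f"average chance 'matches' per group: {np.mean(match_fractions):.1%}")
```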

Patrick
November 23, 2009 8:41 pm

@MrAce, P
MrAce said, “Generate 100 groups of 100 random series and count for each group the series that match. Average this over the 100 groups. This number turned out to be 13%.”
What would be the result of this type of analysis if there is actually no correlation between temperature and tree growth, but there is a strong correlation between tree growth in one period and the next? (If month x was a good growth month for a tree, isn’t month x+1 more likely to be a good growth month?) If this were true, wouldn’t the bar be set drastically too low by this random-generation screening method?
To test for selection bias, it seems to me you’d at least have to figure out how to measure/model this inertia or self-correlation or whatever you’d like to call it. This seems hard, but perhaps doable. I haven’t read about anything like this being accounted for, but if it was done, I’d be very interested in the methodology. It sounds like a very interesting problem.
I might look at the dendrology data used for this study and try to figure out a good way to estimate the self-correlation in growth and the effect on the number of tree series that would match if there were no true correlation between temperature and tree growth. Any suggestions for how to do this would be appreciated. I’m already thinking that I might want to look at both pre-industrial and post-industrial periods to see if the self-correlation is the same. I could probably find this on my own, but does anyone have a link to the dendrology data in a nice format (.csv or Excel would be ideal for me, but I’m flexible)?
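A minimal sketch of the kind of test Patrick describes, assuming simple AR(1) persistence as the model of “self-correlation”; the persistence values, series length, and match threshold are illustrative assumptions, not fitted to any dendro dataset. It shows how the fraction of random series passing a correlation screen grows as the persistence increases.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_series, r_cut = 468, 1000, 0.3

def ar1(phi, size):
    """Generate an AR(1) series: x[t] = phi * x[t-1] + noise."""
    x = np.zeros(size)
    for t in range(1, size):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

target = ar1(0.95, n)   # stand-in "temperature" record, itself persistent

# Compare white noise (phi = 0) with increasingly persistent "proxies".
for phi in (0.0, 0.5, 0.95):
    hits = 0
    for _ in range(n_series):
        r = np.corrcoef(ar1(phi, n), target)[0, 1]
        hits += abs(r) > r_cut
    print(f"phi = {phi:4.2f}: {hits / n_series:.1%} of random series pass |r| > {r_cut}")
```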

November 28, 2009 7:29 am

CO2 is plant food.