Duke Neukom's Secret Sauce

Guest Post by Willis Eschenbach

In my last post, I talked about the “secret sauce”, as I described it, in the Neukom et al. study “Inter-hemispheric temperature variability over the past millennium”. By the “secret sauce” I mean the method which is able to turn the raw proxy data, which has no particular shape or trend, into a consensus-approved hockeystick … but in truth, I fear I can’t reveal all of the secret sauce, because as is far too common in climate science, they have not released their computer code. However, they did provide some clues, along with pretty pictures.

Figure 1. The overview graphic from Neukom2014. Click to embiggen.

So what did they do, and how did they do it? Well, don your masks, respirators, coveralls, and hip boots, because folks, we’re about to go wading in some murky waters …

From my last post, Figure 2 shows the mean of the proxies used by Neukom, and the final result of cooking those proxies with their secret sauce:

Figure 2. Raw proxy data average and final result from the Neukom2014 study. Note the hockeystick shape of the result.

Let me start with an overview of the whole process of proxy reconstruction, as practiced by far too many paleoclimatologists. It is fatally flawed, in my opinion, by their proxy selection methods.

What they do first is to find a whole bunch of proxies. Proxies are things like tree ring widths, or the thickness of layers of sediment, or the amounts of the isotope oxygen-18 in ice cores—in short, a proxy might be anything and everything which might possibly be related to temperature. The Neukom proxies, for example, include things like rainfall and streamflow … not sure how those might be related to temperature in any given location, but never mind. It’s all grist for the proxy mill.

Then comes the malfeasance. They compare the recent century or so of all of the proxies to some temperature measurement located near the proxy, like say the temperature of their gridcell in the GISS temperature dataset. If there is no significant correlation between the proxy and the gridcell temperature where the proxy is located, the record is discarded as not being a temperature proxy. However, if there is a statistically significant correlation between the proxy and the gridcell temperature, then the proxy is judged to be a valid temperature proxy, and is used in the analysis.

Do you see the huge problem with this procedure?

The practitioners of this arcane art don’t see the problem. They say this procedure is totally justified. How else, they argue, will we be able to tell if something actually IS a proxy for the temperature or not? Here is Esper on the subject:

However as we mentioned earlier on the subject of biological growth populations, this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.

“An advantage unique to dendroclimatology”? Why hasn’t this brilliant insight been more widely adopted?

To show why this procedure is totally illegitimate, all we have to do is to replace the word “proxies” in a couple of the paragraphs above with the words “random data”, and repeat the statements. Here we go:

They compare the recent century or so of all of the random data proxies to some temperature measurement located near the random data proxy, like say the temperature of their gridcell in the GISS temperature dataset. If there is no significant correlation between the random data proxy and the gridcell temperature, the random data proxy is discarded. However, if there is a statistically significant correlation between the random data proxy and the gridcell temperature, then the random data proxy is judged to be a valid temperature proxy, and is used in the analysis.

Now you see the first part of the problem. The selection procedure will give its blessing to random data just as readily as to a real temperature proxy. That’s the reason why this practice is “unique to dendroclimatology”, no one else is daft enough to use it … and sadly, this illegitimate procedure has become the go-to standard of the industry in proxy paleoclimate studies from the original Hockeystick all the way to Neukom2014.
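
Don't take my word for it. Here is a minimal R sketch (my own toy illustration with made-up data, not their code, and using just one target series rather than their 1,000 km gridcell search) of what correlation screening does to pure red noise:

```r
# A toy demonstration of how correlation screening blesses random data:
# generate red-noise pseudo-proxies, keep only those that happen to
# correlate with a toy "instrumental" series over the screening window,
# and average the survivors.
set.seed(123)
years  <- 1000:1990
recent <- years >= 1880                               # screening window
temp   <- cumsum(rnorm(sum(recent), 0.01, 0.1))       # toy instrumental series with a trend

proxies  <- replicate(500, cumsum(rnorm(length(years), 0, 0.1)))  # red-noise pseudo-proxies
screened <- apply(proxies, 2, function(p) cor.test(p[recent], temp)$p.value < 0.05)
sum(screened)   # far more than 5% of the pure-noise proxies "qualify"

# scale each survivor over the screening window and flip its sign to match
# the instrumental series (absolute-correlation screening ignores sign anyway)
aligned <- sapply(which(screened), function(j) {
  p <- proxies[, j]
  sign(cor(p[recent], temp)) * (p - mean(p[recent])) / sd(p[recent])
})
recon <- rowMeans(aligned)
# plot(years, recon, type = "l")
```

The expected result is the familiar picture: a noisy but flattish "shaft" of cancelled-out noise before 1880, and a temperature-shaped "blade" inside the screening window. That's the screening procedure working on nothing at all.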

The name for this logical error is “post-hoc proxy selection”. This means that you have selected your proxies, not based on some inherent physical or chemical properties that tie them to temperature, but on how well they match the data you are trying to predict …

The use of post-hoc proxy selection in Neukom2014 is enough in itself to totally disqualify the study … but wait, it gets worse. I guess that comparing a proxy with the temperature record of the actual gridcell where it is physically located was too hard a test, and as a result they couldn’t find enough proxies (er, random data) that would pass it. So here is the test that they ended up using, from their Supplementary Information:

We consider the “local” correlation of each record as the highest absolute correlation of a proxy with all grid cells within a radius of 1000 km and for all the three lags (0, 1 or -1 years). A proxy record is included in the predictor set if this local correlation is significant (p<0.05).

“Local” means within a thousand kilometers? Dear heavens, how many problems and misconceptions can they pack into a single statement? Like I said, hip boots are necessary for this kind of work.

First question, of course, is “how many gridcells are within 1,000 kilometres of a given proxy”? And this reveals a truly bizarre problem with their procedure. They are using GISS data on a regular 2° x 2° grid. At the Equator, there are anywhere from 68 to 78 of those gridcells whose centers are within 1,000 km of a given point, depending on the point’s location within the gridcell … so they are comparing their proxy to ABOUT 70 GRIDCELL VALUES!!! Talk about a data dredge, that about takes the cake … but not quite, because they’ve outdone themselves.

The situation on the Equator doesn’t take the cake once we consider say a proxy which is an ice core from the South Pole … because there are no less than 900 2° x 2° gridcells within 1000 kilometres of the South Pole. I’ve heard of tilting the playing field in your favor, but that’s nonsense.
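
Anyone who wants to check that arithmetic can count gridcell centres directly. Here's a minimal R sketch; it assumes cell centres at odd latitudes and longitudes, which may not match the GISS layout exactly, and it counts centres rather than cells that merely overlap the circle, so the exact totals will differ a bit from the figures above:

```r
# Count 2 x 2 degree grid-cell centres within a given radius of a point.
haversine_km <- function(lat1, lon1, lat2, lon2, R = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 + cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * R * asin(pmin(1, sqrt(a)))
}

grid <- expand.grid(lat = seq(-89, 89, 2), lon = seq(-179, 179, 2))

cells_within <- function(lat0, lon0, radius = 1000)
  sum(haversine_km(lat0, lon0, grid$lat, grid$lon) <= radius)

cells_within(0, 0)      # near the Equator: on the order of 60-80 cells
cells_within(-90, 0)    # at the South Pole: several hundred cells
```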

I note that they may be a bit uneasy about this procedure themselves. I say this because they dodge the worst of the bullet on other grounds, saying:

The predictors for the reconstructions are selected based on their local correlations with the target grid. We use the domain covering 55°S-10°N and all longitudes for the proxy screening. High latitude regions of the grid are excluded from the correlation analysis because south of 55°S, the instrumental data are not reliable at the grid-point level over large parts of the 20th century due to very sparse data coverage (Hansen et al., 2010). We include the regions between 0°N and 10°N because the equatorial regions have a strong influence on SH temperature variability.

Sketchy … and of course that doesn’t solve the problem:

Proxies from Antarctica, which are outside the domain used for proxy screening, are included, if they correlate significantly with at least 10% of the grid-area used for screening (latitude weighted).

It’s not at all clear what that means. How do you check correlation with 10% of a huge area? Which 10%? I don’t even know how you’d exhaustively search that area. I mean, do you divide the area into ten squares? Does the 10% have to be rectangular? And why 10%?

In any case, the underlying issue of checking different proxies against different numbers of gridcells is not solved by their kludge. At 50°S, there are no less than one hundred gridcells within the search radius. This has the odd effect that the nearer to the poles that a proxy is located, the greater the odds that it will be crowned with the title of temperature proxy … truly strange.

And it gets stranger. In the GISS temperature data, each gridcell’s temperature is some kind of average of the temperature stations in that gridcell. But what if there is no temperature station in that gridcell? Well, they assign it a temperature as a weighted average of the other local gridcells. And how big is “local” for GISS? Well … 1,200 kilometres.

This means that when the proxy is compared to all the local gridcells, in many cases a large number of the gridcell “temperatures” will be nothing but slightly differing averages of what’s going on within 1,200 kilometres.

Not strange enough for you? Bizarrely, they then go on to say (emphasis mine):

An alternative reconstruction using the full un-screened proxy network yields very similar results (Supplementary Figure 20, see section 3.2.2), demonstrating that the screening procedure has only a limited effect on the reconstruction outcome.

Say what? On any sane planet, the fact that such a huge change in the procedure has “only a limited effect” on your results should lead a scientist to re-examine very carefully whatever they are doing. To me, the meaning of this phrase is “our procedures are so successful at hockeystick mining that they can get the same results using random data” … how is that not a huge concern?

Returning to the question of the number of gridcells, here’s the problem with looking through that many gridcells to find the highest correlation. The math is simple: the more times or places you look for something, the more likely you are to find an unusual but purely random result.

For example, if you flip a coin five times, the odds of all five flips coming up heads are 1/2 * 1/2 * 1/2 * 1/2 * 1/2. This is 1/32, or about 0.03, which is below the 0.05 significance threshold usually used in climate science.

So if that happened the first time you flipped a coin five times, five heads in a row, you’d be justified in saying that the coin might be weighted.

But suppose you repeated the whole process a dozen times, with each sample consisting of flipping the same coin five times. If we come up with five heads at some point in that process, should we still think the coin might be loaded?

Well … no. Because in a dozen sets of five flips, the odds of five heads coming up somewhere in there are about 30% … so if it happens, it’s not unusual.

So in that context, consider the value of testing either random data or a proxy against a hundred gridcell temperatures, not forgetting to check three times per gridcell to include the lags, and then accepting the proxy if any one of those correlations is significant at p < 0.05 … egads. This procedure is guaranteed to drive the number of false positives through the roof.
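
The arithmetic is easy to check. The gridcell line below assumes the tests are independent, which they are not (neighbouring gridcells are correlated), so the true figure is somewhat lower … but it hardly matters:

```r
(1/2)^5              # five heads in a row: 1/32, about 0.03
1 - (31/32)^12       # five heads somewhere in a dozen tries: about 0.32

# roughly 70 gridcells times 3 lags, each tested at p < 0.05:
1 - 0.95^(70 * 3)    # essentially 1 under independence - virtually every
                     # random series will "pass" somewhere in the search
```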

Next, they say:

Both the proxy and instrumental data are linearly detrended over the 1911-1990 overlap period prior to the correlation analyses.

While this sounds reasonable, they haven’t thought it all the way through. Unfortunately, this procedure leads to a subtle error. Let me illustrate it using the GISS data for the southern hemisphere, since this is the mean of the various gridcells they are using to screen their data:

Figure 3. GISS land-ocean temperature index (LOTI) for the southern hemisphere.

Now, they are detrending it for a good reason, which is to keep the long-term trend from influencing the analysis. If you don’t do that, you end up doing what is also known as “mining for hockeysticks”, because the trend of the recent data will dominate the selection process. So they are trying to solve a real problem, but look what happens when we do linear detrending:

Figure 4. Linearly detrended GISS land-ocean temperature index (LOTI) for the southern hemisphere.

All that this does is change the shape of the long-term trend. It does not remove the trend; the detrended data still rises steadily after 1910. So they are still mining for hockeysticks.

The proper way to do this detrending is to use some kind of smoothing filter to remove the slow swings in the data. Here’s a loess smooth; you could use other filters, as the particular choice is not critical for these purposes:

Figure 5. Loess smooth of GISS land-ocean temperature index (LOTI) for the southern hemisphere.

And once we subtract that loess smooth (gold line) from the GISS LOTI data, here’s what we get:

Figure 6. GISS land-ocean temperature index (LOTI) for the southern hemisphere, after detrending using a loess smooth.

As you can see, that would put all of the proxies and data on a level playing field. Bear in mind, however, that improving the details of the actual method of post-hoc proxy selection is just putting lipstick on a pig … it’s still post-hoc proxy selection.
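
For those who want to try this at home, here's a minimal R sketch of the difference. Since I'm not reproducing their code or data, it uses a synthetic stand-in series with a roughly GISS-SH-like shape, flat until about 1910 and rising thereafter; swap in the real LOTI series if you have it to hand.

```r
# Synthetic stand-in with a GISS-SH-like shape: roughly flat to 1910,
# then a steady rise, plus noise.
set.seed(1)
years   <- 1880:2013
sh_temp <- ifelse(years < 1910, -0.3, -0.3 + 0.008 * (years - 1910)) +
           rnorm(length(years), sd = 0.08)

lin_resid <- resid(lm(sh_temp ~ years))            # linear detrend
lo_fit    <- loess(sh_temp ~ years, span = 0.5)
lo_resid  <- sh_temp - predict(lo_fit)             # loess detrend

post <- years >= 1910
round(c(linear = unname(coef(lm(lin_resid[post] ~ years[post]))[2]),
        loess  = unname(coef(lm(lo_resid[post]  ~ years[post]))[2])), 4)
# the linearly detrended series still drifts upward after 1910;
# the loess residuals show little or no trend
```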

And since they haven’t done that, they are definitely mining for hockeysticks. No wonder that their proxy selection process is so meaningless.

From there, the process is generally pretty standard. They “calibrate” each proxy using a linear model to find the best fit between the proxy and the temperature data from whichever of the roughly 70 gridcells the proxy correlated best with. Then they use another portion of the data (1880-1910) to “validate” the calibration parameters, that is to say, they check how well their formula replicates that early portion of the record.
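
In outline, that calibrate-and-validate step looks something like the sketch below, a generic version of the split just described (1911-1990 for calibration, 1880-1910 for verification); it is not their actual code.

```r
# Generic calibrate/validate split: fit a linear model of gridcell
# temperature on the proxy over 1911-1990, then check how well it
# predicts the held-back 1880-1910 data. Inputs are assumed to be
# annual series covering 1880-1990.
calibrate_and_verify <- function(years, proxy, gridcell_temp) {
  cal <- years >= 1911 & years <= 1990
  ver <- years >= 1880 & years <= 1910

  fit  <- lm(gridcell_temp[cal] ~ proxy[cal])
  pred <- coef(fit)[1] + coef(fit)[2] * proxy[ver]

  list(slope             = unname(coef(fit)[2]),
       verification_r    = cor(pred, gridcell_temp[ver]),
       verification_rmse = sqrt(mean((pred - gridcell_temp[ver])^2)))
}
```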

However, in Neukom2014 they introduced an interesting wrinkle. In their words:

For most of these choices, objective “best” solutions are largely missing in literature. The main limitation is that the real-world performance of different approaches and parameters can only be verified over the instrumental period, which is short and contains a strong trend, complicating quality assessments. We assess the influence of these methodological choices by varying methodological parameters in the ensemble and quantifying their effect on the reconstruction results. Obviously, the range within which these parameters are varied in the ensemble is also subjective, but we argue that the ranges chosen herein are within reasonable thresholds, based our own experience and the literature. Given the limited possibilities to identify the “best” ensemble members, we treat all reconstruction results equally and consider the ensemble mean our best estimate.

OK, fair enough. I kind of like this idea, but you’d have to be very careful with it. It’s like a “Monte Carlo” analysis. For each step in their analysis, they generate a variety of results by varying the parameters up and down. That explores the parameter space of the model to a greater extent. In theory this might be a useful procedure … but the devil is in the details, and there are a couple of them that are not pretty. One difficulty involves the uncertainty estimates for the “ensemble mean”, the average of the whole group of results that they’ve gotten by varying the parameters of the analysis.

Now, the standard formula for the error in calculating a mean has been known for a long time: the standard error of the mean is the standard deviation of the results divided by the square root of the number of data points.

However, they don’t use that formula. Instead, they say that the error is the quadratic sum (the square root of the sum of the squares) of the standard deviation of the data and the “residual standard deviation”. I can’t make heads or tails out of this procedure. Why doesn’t the number of data points enter into the calculation of the standard error? Is this some formula I’m unaware of?

And what is this “residual standard deviation”? It’s not explained, but I think it is the standard deviation of the residuals in the calibration model for each proxy. This is a measure of how well or how poorly the individual proxy matched up with the actual temperature it was calibrated against.

So they are saying that the overall error can be calculated as the quadratic sum of the year-by-year average of the residual errors of all proxies contributing to that year and the standard deviation of the 3,000 results for that year … gotta confess, I’m not feeling it. I don’t understand even in theory how you’d calculate the expected error from this procedure, but I’m pretty sure that’s not it. In any case, I’d love to see the theoretical derivation of that result.
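
Just to put rough numbers on the difference, here's a toy comparison based on my reading of their description; the values are made up for illustration:

```r
# 3000 ensemble results for one reconstruction year (made-up numbers),
# plus an assumed average residual standard deviation from calibration
ens      <- rnorm(3000, mean = 0.1, sd = 0.3)
resid_sd <- 0.25

sd(ens) / sqrt(length(ens))     # textbook standard error of the mean: ~0.005
sqrt(sd(ens)^2 + resid_sd^2)    # the quadratic sum described above: ~0.4
# only the first shrinks as the ensemble grows; the number of ensemble
# members never enters the second at all
```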

I mentioned that the devil is in the details. The second kinda troublesome detail about their Monte Carlo method is that at the end of the day, their method does almost nothing.

Here’s why. Let me take one of the “methodological parameters” that they are actually varying, viz:

Sampling the weight that each proxy gets in the PC analysis by increasing its variance by a factor of 0.67-1.5 (after scaling all proxies to mean zero and unit standard deviation over their common period).

OK, in the standard analysis the variance is not adjusted at all, which is the equivalent of a variance factor of 1. Now, they are varying it above and below 1, from 2/3 to 3/2, in order to explore the possible outcomes. This gives a whole range of possible results; they collected 3,000 of them.

The problem is that at the end of the day, they average out all of the results to get their final answer … and of course, that ends them back where they started. They have varied the parameter up and down from the actual value used, but the average of all of that is just the actual value …

Unless, of course, they vary the parameter more in one direction than the other. This, of course, has the effect of simply increasing or decreasing the parameter. Because at the end of the day, in a linear model if you vary a parameter and average the results, all you end up with is what you’d get if you had simply used the average of the random parameters chosen.
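
Here's a toy version of that point. It assumes, purely for illustration, that the reconstruction responds linearly to the scale factor and that the factor is drawn uniformly from the 0.67-1.5 range; I don't know how they actually sampled it.

```r
# If the result is (roughly) linear in a scale factor, averaging runs
# over a varied factor just reproduces the run at the mean factor.
set.seed(7)
proxy   <- rnorm(100)
factors <- runif(3000, 2/3, 3/2)                  # assumed sampling of the 0.67-1.5 range
runs    <- sapply(factors, function(f) f * proxy) # toy "reconstruction", linear in the factor

max(abs(rowMeans(runs) - mean(factors) * proxy))  # ~0: ensemble mean = mean-factor run
mean(factors)   # ~1.08, not 1: the 2/3-3/2 range is not centred on 1,
                # so the net effect is a slight fixed up-weighting
```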

Dang details, always messing up a good story …

Anyhow, that’s at least some of the oddities and the problems with what they’ve done. Other than that it is just more of the usual paleoclimate handwaving, addition and distraction. Here’s one of my favorite lines:

To determine the extent to which reconstructed temperature patterns are independently identified by climate models, we investigate inter-hemispheric temperature coherence from a 24-member multi-model ensemble

Yes siree, that’s the first thing I’d reach for in their situation, a 24-model climate circus, that’s the ticket …

If nothing else, this study could serve as the poster child for the need to provide computer code. Without it, despite their detailed description, we don’t know what was actually done … and given the fact that bugs infest computer code, they may not even have done what they think they’ve done.

Conclusions? My main conclusion is that almost the entire string of paleoclimate reconstructions, from the Hockeystick up to this one, is fatally flawed through the use of post-hoc proxy selection. This is exacerbated by the bizarre means of selection. In addition, their error estimates seem doubtful. They are saying that they know the average temperature of the southern hemisphere in the year 1000 to within a 95% confidence interval of plus or minus a quarter of a degree C?? Really? … c’mon, guys. Surely you can’t expect us to believe that …

Anyhow, that’s their secret sauce … post-hoc proxy selection.

My best wishes to all,

w.

CODA: With post-hoc proxy selection, you are choosing your explanatory variables on the basis of how well they match up with what you are trying to predict. This is generally called “data snooping”, and in real sciences it is regarded as a huge no-no. I don’t know how it got so widespread in climate science, but here we are … so given that post-hoc selection is clearly the wrong way to go, what would be the proper way to do a proxy temperature reconstruction?

First, you have to establish the size and nature of the link between the proxy and the temperature. For example, suppose your experiments show that the magnesium/calcium ratio in a particular kind of seashell varies up and down with temperature. What you do then is you get every freaking record of that kind of seashell that you can lay your hands on, from as many drill cores in as many parts of the ocean as you can find.

And then? Well, first you have to look at each and every one of them, and decide what the rules of the game are going to be. Are you going to use the proxies that are heteroskedastic (change in variance with time)? Are you going to use the proxies with missing data, and if so, how much missing data is acceptable? Are you going to restrict them to some minimum length? Are you only allowing proxies from a given geographical area? You need to specify exactly which proxies qualify and which don’t.

Then once you’ve made your proxy selection rules, you have to find each and every proxy that qualifies under those rules. Then you have to USE THEM ALL and see what the result looks like.

You can’t start by comparing the seashell records to the temperature that they are supposed to predict and then throw out the proxies that don’t match the temperature. That’s a joke; it’s extreme data snooping. Instead, you have to make the rules in advance as to what kind of proxies you’re going to use, and then use every proxy that fits those rules. That’s the proper way to go about it.
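
In code terms, the selection step would look something like the sketch below. The field names and thresholds are placeholders to show the shape of the thing, not recommended values.

```r
# A priori, rule-based proxy selection: the rules are fixed before any
# comparison with temperature, and every record that passes is used.
qualifies <- function(p,
                      min_length  = 500,       # minimum record length in years (placeholder)
                      max_missing = 0.05,      # maximum fraction of missing values (placeholder)
                      lat_range   = c(-90, 0)) # Southern Hemisphere only (placeholder)
{
  length(p$years) >= min_length &&
    mean(is.na(p$values)) <= max_missing &&
    p$lat >= lat_range[1] && p$lat <= lat_range[2]
}

# keep EVERY record that passes, with no peeking at temperature, then
# reconstruct from all of them (assuming records on a common year grid):
# kept  <- Filter(qualifies, all_proxies)
# recon <- rowMeans(sapply(kept, function(p) as.numeric(scale(p$values))), na.rm = TRUE)
```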

PS–The Usual Request. If you disagree, quote what you disagree with. Otherwise, no one really knows what the heck you’re talking about.

climatereason

Willis
You forgot to mention that sometimes the random data is inverted or truncated, but the answer will still be robust as the more random the data is, the more robust it becomes. (apparently)
tonyb

Hint to paleos…if your linearly detrended data show clear linear trends still, you have done something very wrong.

Willis Eschenbach

tonyb says:
April 4, 2014 at 12:23 am

Willis
You forgot to mention that sometimes the random data is inverted or truncated, but the answer will still be robust as the more random the data is, the more robust it becomes. (apparently)
tonyb

Quite right, tony … I just ran out of steam. As I said, I identified “at least some of the oddities and the problems” with the study, but by no means all of them.
w.

Mike Bromley the Kurd

“this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal.”
What???????? this does not mean that one could not improve a silk purse by reducing the number of Sow’s ears used if the purpose of removing sow’s ears is to enhance the desired silk purse.
This is astounding beyond all belief. They come right out and SAY that they fake it. Holy cow.

Willis
Perhaps we can have some promotional lapel badges?
“Duke Neukom’s secret sauce-now with added robustness.”
nice piece by the way.
tonyb

I’m reminded of the origins of http://en.wikipedia.org/wiki/Duke_Nukem way back when. BTW the wiki does not go into the origins. – a dispute with another Neukom.

dudleyhorscroft

My Physics Master used to call this the use of Cook’s Constant and Fudge’s Formula.
Fudge’s formula – Divide the result you want by the result you got. This gives you Cook’s Constant. Then use Fudge’s formula – multiply the result you got by Cook’s Constant. This gives you the result you want. QED. Success!
Used by University examiners to catch poor candidates – eg, giving data for an experiment to calculate the water equivalent of a copper calorimeter. The given data, if correctly worked, gives a value of, say, 0.01. Students knowing that the correct value is 0.1, manage to slip in a little error along the way, and turn in the result 0.1 Zero marks for that question, and look very, very, carefully at the working in all the other questions. Students a bit more honest turn in the correct answer of 0.01 and get full marks. Really bright students, knowing the answer should be 0.1, turn in the answer of 0.01, and add a rider – “I believe that the result of other experiments usually gives a value of 0.1. I would therefore question very carefully the data as recorded in this experiment, and/or the way it was carried out.”

thingadonta

Am I right in concluding that the average of random data and hockeysticks is still hockeysticks?
That is, the random data in the past pre ~20th century averages, cancels or smooths out (depending on the statistical method used) to produce a smooth shaft, whilst it combines with the more recent self- selected temperature upticks to give a hockeystick shape overall, because these have been preferentially weighted by the selection method to begin with?
I think, hesitantly, I’m not all that far off, and I am not trained in statistics. But why aren’t these papers reviewed by professional statisticians before they pass peer review?

Stephen Richards

Mike Bromley the Kurd says:
April 4, 2014 at 12:38 am
“this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal.”
This new super duper statistical method was used in one of the more famous pieces of fraud that SteveMc dissected. I just can’t remember which one.

Willis Eschenbach

thingadonta says:
April 4, 2014 at 1:11 am

Am I right in concluding that the average of random data and hockeysticks is still hockeysticks?
That is, the random data in the past pre ~20th century averages, cancels or smooths out (depending on the statistical method used) to produce a smooth shaft, whilst it combines with the more recent self- selected temperature upticks to give a hockeystick shape overall, because these have been preferentially weighted by the selection method to begin with?

Obviously, my good man, you are a natural-born hockeystick miner … everything averages out except the recent years that they have calibrated against.
w.

thingadonta says: “That is, the random data in the past pre ~20th century averages, cancels or smooths out (depending on the statistical method used) to produce a smooth shaft, whilst it combines with the more recent self- selected temperature upticks to give a hockeystick shape overall’
JK — That is also my conclusion.
You will also get the average of noise AFTER the calibration period. Perhaps this explains the need to “hide the decline”
Thanks
JK

If readers are interested, I have posted an active viewer for the Neukom proxy data here.

Espen

In textbook statistics, the way to use methods like principal component analysis or stepwise multiple regression is as an exploratory tool to find suitable mathematical models for what you’re studying. When you’ve decided on your model, you should discard all data used during your model building and collect a completely new set of data that you can now test your model on. Then, and only then, can you say anything about the statistical significance of your results.
At least this was the way I learned it at university and the first time I looked at this paleo stuff I thought it was so bizarre and wrong that I was almost certain that I didn’t understand what they were doing, they couldn’t really be messing up things that thoroughly, could they? It seems they could. The whole field of climate science is tainted by the awful statistical wrongdoings of the tree wringers, and I can’t really trust anything of it until the serious researchers speak up against this.

Kon Dealer

Why can’t the people who reviewed this paper- who are supposed to be “experts” see the logical fallacies of the methodology?
Is it because they are incompetent, stupid, or mates of the authors (pal review)- or all 3?
I’ll leave you to ponder…

JDN

Willis:
That was great! I’ve said it before, you should do a course on stats, like on Youtube or Khan Academy, showing off your mad R skills and some of this real world data analysis on a systematic basis.
I’m sure you’ll exceed “every freaking record” (heh) they have for stats course attendance. Or just EFR for short.

Greg

OMG , this is Mann’s hockeystick all over again.
Following tonyb’s point, how many Tiljanders are there in the retained “proxies”?
The best part is what the data shows in figure 2. The ‘mean screened proxies’ clearly shows c. 950 AD a good 0.5K warmer than today and a steady increase from about 1550. They manage to annihilate both those features in their final result.
The other thing they manage to remove is an apparent repetitive bump in the data. Since the LIA there have been three bumps and we are currently at the top of a fourth one.
If that grey line is the result of their suspect screening, they must have some extra chilli sauce to get to the red line as the final result.
In the caption of figure 2 you label grey as “Raw proxy data average” yet in the legend you call it “mean screened proxies”. Could you clarify what the grey line is?
Is this data available?
thanks.

Oldseadog

Never mind the hip boots, you need chest waders for this one.

Doubting Rich

Leave them alone. There is a great history of post-hoc proxy selection in climate science. It has been serving climate scientists lucratively … I mean well for years. It is so common it should now count as standard technique in climate science.
So who was it that said that the best scientists go into physics and chemistry, not climate science?

Greg

SI figs 6 and 7 show their uncertainties go through the roof at the end of the data, after generally declining in more recent times.
I guess that not many proxies run right up to the present, so the most recent results are getting unreliable.

Is “Duke Neukom” a video game reference or just a happy coincidence?

Thanks, Willis. A great analysis and wonderful read.
Regards

NikFromNYC

“Then comes the malfeasance. They compare the recent century or so of all of the proxies to some temperature measurement located near the proxy, like say the temperature of their gridcell in the GISS temperature dataset. If there is no significant correlation between the proxy and the gridcell temperature where the proxy is located, the record is discarded as not being a temperature proxy. However, if there is a statistically significant correlation between the proxy and the gridcell temperature, then the proxy is judged to be a valid temperature proxy, and is used in the analysis.”
Early on in my delving into climate “science,” one of Michael Mann’s followup hockey stick used this same type of procedure and my jaw just dropped and I soon began calling it “algorithmic cherry picking” and finally “Al-Gore-ithmic cherry picking.” I was roundly ridiculed by whole armies of online Gorebots, around 2008. My background was benchtop chemistry and nanofabrication with some genetics lab work too, and in all of that the main lesson driven home was to be oh so careful that you are not fooling yourself, that you really have what you think you do damn it. I couldn’t believe this sort of very simple cheating was allowed in another field of science. To this day such hockey sticks remain unretracted in the literature. It was very difficult and in the end impossible to use this as a useful debate point, online, since AGW enthusiasts simply had no idea how science really worked, how intense the discipline in it was to get things right, at least in the hard physical sciences. So I switched to dirt simple data plots that showed no AGW signal in various old thermometer and tide gauge records. Then there was no hand waving of it away, data on the ground, that is. The Marcott 2013 hockey stick that Willis plotted input data of that shows no blade in any of it, however, was the first hockey stick that anybody at all could competently debunk just by looking at the data. For myself though, I have to chuckle at the meaningless idea that you can average or somehow black box combine various proxy series that vary between each other in upwards, downwards, or kinked trends and expect the result to have any physical meaning at all. It’s the sheer audacity of the bad math involved that makes so few people so far really understand that true cheating is at work, not just over-enthusiasm and precautionary principle panic.

The same secret sauce as one finds in Mikey Mann’s ‘Nature Trick’ fraud, namely a deliberate use of statistical fraud based on malicious code malfeasance. I have IT headcount who will for no charge, audit this or any other climate-baloney application from top to bottom. My guess is that the main elements of the coding based fraud will be uncovered within 3 days or less. I offered said resources to the little-Mann, but so far, no interest. In the name of ‘science’ I would assume he would be delighted to prove that his ‘system’ is ‘sound’.

bernie1815

Nick: Do you see things differently from Willis?

Chris Wright

NikFromNYC says:
April 4, 2014 at 4:07 am
“Then comes the malfeasance…….”
Very nicely put. The sad, sad thing is that these frauds are still winning awards.
On page 153 of Montford’s ‘The Hockey Stick Illusion’ are twelve perfect hockey stick graphs created with Mann’s method. Problem is, eleven were created with random red noise. It’s blindingly obvious that Mann’s method was mining for hockey sticks. It looks like Neukom has been using essentially the same deeply flawed method.
So, here’s my question: does Neukom2014 also create perfect hockey sticks from red noise? Is it possible to replicate Neukom2014 and, if not, why not? Of course, if it can’t be replicated by other researchers then it’s not science, it’s an opinion piece. It sounds as if the proxy data is available, but what about the methodology and software used? Is this publicly available? I assume not.
It seems to me that the best way to prove fraud is to prove, using the various author’s data and methods, that these methods reliably create hockey sticks from random data.
Chris

bernie1815 says:April 4, 2014 at 4:22 am
“Nick: Do you see things differently from Willis?”

I’ve concentrated so far on visualizing the proxy data, rather than the analysis. I’ll note one thing though. Neukom et al. looked at the effect of their screening. In Fig 20, they show the results of recons with:
1. Their screening with the 1000 km test.
2. A screening with a 500 km test.
3. No screening at all.
4. A simple average.
No screening and the 1000 km test gave very similar results. The 500 km test screened out more proxies, reducing 111 to 85, and made a bit more difference. A simple average was quite a lot different, but that is not surprising.

In the bad old days of medical research publishing, oncology studies would commit a similar post-hoc fallacy. Doctors would try a new cancer treatment on a group of patients. At some point the size of the tumors would be compared to the pre-treatment size, and patients would be divided into “responders” if the tumors shrunk, and “non-responders” if the tumors did not shrink. Treatment would be stopped in the non-responders, but continued in the responders until it stopped working for them too. So far, so good. Then the drug company representatives would write a paper on how much the new treatment improved life expectancy in responders when compared with a control group, and how the new treatment should become standard. The graphs were impressive.
By limiting their analysis to “responders” they selected only the patients with cancers that are susceptible to the drug. What about the non-responders? The morbidity of the treatment, combined with the morbidity of progressive disease, shortened their lives.
This created a situation where the 20% of patients who responded lived an extra 6 months, on average, while the 80% of non-responders survived an average of 2 fewer months. The group overall had 0.4 fewer months of survival, and yet the drug was being proposed as a new treatment because of the great job it did for the responders. The non-responders get thrown under the bus because post-hoc selection has crept into the study.
There may still be some hope that the drug can help patients, but not until some test can identify the responders up-front, before the non-responders get exposed to the treatment. Just as Willis describes an up-front selection of proxies, followed by an analysis of *all* the data, an up-front selection of patients, followed by an analysis of *all* the data is part of how we protect ourselves from statistical fallacies and wishful thinking.
I know how it feels to work in the heady times of a new field, where low-hanging fruit appears to be everywhere, and very junior people can make discoveries and be experts, and also what it is like to work in a mature field, where the grave markers of half-cocked theories dot the landscape. I think this is why scientists in more mature disciplines that have developed a culture of rigor and self-criticism, because they have been burned in the past, are more likely to be climate skeptics.

The Ghost Of Big Jim Cooley

Excellent, Willis. But I still want Kon Dealer’s question answered:
“Why can’t the people who reviewed this paper- who are supposed to be “experts” see the logical fallacies of the methodology?”

Aussiebear

I think this may get Modded. Why does http://www.populartechnology.net/ hate you?
What you write seems, on the face of it, reasonable.

rgbatduke

Now you see the first part of the problem. The selection procedure will give its blessing to random data just as readily as to a real temperature proxy. That’s the reason why this practice is “unique to dendroclimatology”, no one else is daft enough to use it … and sadly, this illegitimate procedure has become the go-to standard of the industry in proxy paleoclimate studies from the original Hockeystick all the way to Neukom2014.
What is really amazing is that statistics is so arcane and difficult, and climatology people so ill-trained in the art, that it is chock full of people that are precisely that daft. If you want to make cherry pie you have to pick cherries. If you want to sell catastrophic global warming you have to take the simple mean of the means of the 36 models in CMIP5 independent of their model independence, how many perturbed parameter ensemble runs contribute, or how well each model does in comparison with the data and then you have to pretend that the envelope of the results has some sort of statistical meaning as a measure of statistical variance in order to be able to make various claims “with confidence”.
Note that this is the exact opposite of what they are doing in proxy estimation. They are refusing to only give weight to models that are at least arguably working across the thermometric data to predict the future. This is one of the places where selectivity could easily be justified, as the models are not at all random samples and do not produce “noise” — comparison with the data is merely identifying probable occult bias, errors in computational methodology, or errors in the physics (all of which exist, I’m pretty sure, in profusion, in most of the models).
They acknowledge in AR5 that this procedure is flawed and means that when they use it they can no longer assess the predictivity of the result or any sort of measure of confidence (in a single line in the entire document that no policy maker will ever read) and then do it anyway.
Then we could go on to kriging, Cowtan and Way, or Trenberth’s paper on millidegree oceanic warming, and how to fill in a sparse grid and make statistically impossible claims for precision at the same time!
I don’t know if these guys understand it, but the predictions and claims of AR-N (for any value of N) are going to be subjected to the brutal effects of time and empirical verification no matter what they do. The data on global climate (thanks largely to enormous investment in technology) is getting to be vast enough, and based on enough unfutzable hardware, and dense enough (although it is a LONG way from adequate there) that it is getting to be very difficult to “readjust” existing temperatures still warmer to prolong the illusion of ongoing warming. RSS alone is putting a serious lid on surface temperatures, for example.
What, exactly, do they plan to do if temperatures actually start to fall as we move into the long slow decline associated with the current (already weak) solar cycle? Or if they merely remain flat? What will they do if arctic sea ice actively regresses to the mean while antarctic ice remains strong? What will they do if the current possible ENSO fizzles like the last two or (their worst nightmare) turns into a strong La Nina and chills the entire Northern Hemisphere? Or just turns out to be weak and have little effect on temperature?
In the case of tree rings, even trees that were selected not infrequently failed to be predictors when compared to known temperatures outside of the selection interval (e.g. the infamous bristlecone pine). That’s the problem with multivariate dependency in a proxy — it might well reflect temperature for fifty years and then turn around and reflect rainfall, or an ecological change, or depletion of nutrients, or predator prey cycles, or volcanic activity, or the flicked switch change associated with a flood.
rgb

Alan Robertson

Espen says:
April 4, 2014 at 3:01 am
“In textbook statistics, the way to use methods like principal component analysis or stepwise multiple regression is as an exploratory tool to find suitable mathematical models for what you’re studying. When you’ve decided on your model, you should discard all data used during your model building and collect a completely new set of data that you can now test your model on. Then, and only then, can you say anything about the statistical significance of your results.
At least this was the way I learned it at university and the first time I looked at this paleo stuff I thought it was so bizarre and wrong that I was almost certain that I didn’t understand what they were doing, they couldn’t really be messing up things that thoroughly, could they? It seems they could. The whole field of climate science is tainted by the awful statistical wrongdoings of the tree wringers, and I can’t really trust anything of it until the serious researchers speak up against this.”
_____________________________________
We know that Willis is a fun- loving guy, but he’s just shown us again that he’s serious about unmasking the endemic statistical malpractices of “climate science”, which at this point in time, look more like blatant and deliberate fraud.

Bill Illis

Why would anyone do this?
Anyone who is able to obtain a PhD and an academic position at any well-known university is going to know this is wrong mathematically and wrong ethically.
It’s depressing that this is occurring and even more depressing that it is encouraged.
It is a symptom of something that has gone really, really wrong.

hunter

The AGW hypesters wave the scary pictures around and pretend they represent evidence.

Rob

Yes, proxy data reconstructions are perhaps the best example of “non science”.

ferdberple

Calibration is known statistically as “selection on the dependent variable”. It is forbidden mathematically because it leads to spurious (false) correlations.
However, in Climate Science, where you are trying to prove something that is not true, spurious correlations are a positive boon.

JustAnotherPoster

it won’t be long before rgb is termed a “denier”.

Oscar Bajner

I have been trying (as a scientific layman) to understand the (basics of the) science of climate, and the essence of the controversies as interpreted by skeptics, agnostics and cynics alike for probably 10 years now. I have followed several sagas in as much depth as I could stand (with particular reference to climate audit issues of reconstructions), and I have reached the following firm conclusion:
Two men were walking the plains of the Serengeti when they came upon a pride of lions in the open. The men froze, until several of the lions roused themselves and began to plod purposefully towards them. One of the men sat down, ripped off his heavy boots and produced a pair of Nike (TM) running shoes from his kitbag.
“What the hell are you doing?”, asked the other man, “you’ll never manage to outrun those lions!”
“I know” replied the man, furiously tying up his shoelaces, “I just have to outrun you!”
BTW: It is incomprehensible to me that scientific studies that utilize computers and software are not required to publish their source code; as Willis points out, from a bug catching point of view alone, it is necessary.
All models are wrong, but some models are useful.
All software has bugs, but some bugs have been found.

dudleyhorscroft

rgbatduke asks (4 April, 05:05):
“What, exactly, do they plan to do if temperatures actually start to fall as we move into the long slow decline associated with the current (already weak) solar cycle? Or if they merely remain flat? What will they do if arctic sea ice actively regresses to the mean while antarctic ice remains strong?”
Simple. They will say that these prove that Climate Change exists and therefore we have to do something really quickly to stop it, because it will be disastrous, and in the mean time send them some more money.

Lance Wallace

jgbatduke’s illustrious predecessor J.B.Rhine (at Duke) used the same technique to derive gold from dross in his studies of ESP, telekinesis, etc. By running a large number of Duke students through his card-guessing games, there would be a few high scores. (“Responders” as mentioned by UnfrozenCavemanMD above in connection with drug effectiveness tests). Further tests on the responders might pick out someone doing well on both sets of tests, a super-responder. QED, ESP exists. The late great Martin Gardner dealt with this in his book Fads and Fallacies in the Name of Science.

Lance Wallace

Whoops, rgbatduke. (Sorry, rgb).

Professor Brown,
The Universities in this country seem to produce an awful lot more Mann’s and Neukom’s than people such as yourself. Could you discuss how you got where you are, and more particularly, how you have managed to stay there? We hear truth from you, every time, and deliberate lies from the “Climate Scientists” who are allowed to use the imprimatur of, say, Princeton, or Stanford, or U of NSW. The problem is not Mann, it is the University Presidents who permit his ilk to flourish.
When I was at Michigan I took an Econ course, and discovered that the professor was an active Communist preaching the drivel from the Club of Rome. I did not last long in that class!
Yes, something in our society has gone very very wrong…

http://www.nyu.edu/classes/nbeck/q2/geddes.pdf
“Most graduate students learn in the statistics courses forced upon them that selection on the dependent variable is forbidden, but few remember why, or what the implications of violating this taboo are for their own work.”
At the heart of statistics is the notion of the “random sample”. That you have chosen the data randomly. On this basis you can make statistical conclusions.
“Calibration” changes the data from a “random sample” to a “calibrated sample”. This sample is no longer random, thus your statistical conclusions are no longer valid.
Your statistics may well tell you there is a high correlation, but because your sample is no longer randomly selected, this is a spurious (false) correlation. Thus your conclusions are false, or at best unproven.
In Medicine this has been a hard learned lesson. Many of the treatment disasters of the past have resulted from this statistical mistake. Statistics requires that your sample be randomly selected. As soon as you seek to “qualify” the sample you cannot use statistics to test the results.
Unfortunately Climate Science is one of those soft sciences, where the results trump methodology. If the method gives the expected (desired) answer, the method is assumed to be correct. Snake oil salesmen use the same approach.

Lance Wallace says:
April 4, 2014 at 6:58 am
“Responders” as mentioned by UnfrozenCavemanMD
=============
“Responders” violate the statistical requirement of the random sample. This leads to false statistical conclusions.
The problem is that our common sense tells us that we should be able to “improve” the sample by selecting only “responders”. While forgetting that statistics forbids this.

Evan Jones

An advantage unique to dendroclimatology
Sounds like a lot of Gergis to me.
Like I said, hip boots are necessary for this kind of work.
It’s all too hip for me.

Craig Loehle

The way science is supposed to work is that things known to compromise your results must be avoided at all cost. That is why randomized double-blind trials were instituted in medicine–if patients knew they were taking the medicine they reported getting better, and the doctors thought they were better. If something violates conservation of energy, you check your equipment and calculations. And always, always you must beware of random effects. You make sure your sample size is adequate. You watch out for spurious correlation. You keep samples for verification. Post-hoc proxy selection has been rigorously shown (and has been known in econometrics for decades) to be a risky procedure able to mine for spurious relationships easily. This problem has been ably demonstrated in stock market forecasting for example. When something has been clearly demonstrated to be likely to mislead, you guard against it, period. You don’t keep doing it over and over because you like the answer. And the reviewers are guilty of this also.

izen

@-“To show why this procedure is totally illegitimate, all we have to do is to replace the word “proxies” in a couple of the paragraphs above with the words “random data”, and repeat the statements.”
This is totally illegitimate reasoning because there is never any expectation or possibility that the correlation with random data is anything but coincidence, with a probability that can be calculated.
The correlation between proxies and recent temperature data IS legitimate because there are well established physical and biological processes that result in temperature changes altering the proxy measured as with dO18 isotope analysis.
A lack of correlation in such cases indicates that factors other than temperature are distorting the data so that it should be discarded.
@-“The name for this logical error is “post-hoc proxy selection”. This means that you have selected your proxies, not based on some inherent physical or chemical properties that tie them to temperature, but on how well they match the data you are trying to predict …”
You have this entirely reversed, I am not sure what the name for that logical error is but the proxies are chosen BECAUSE of their known potential for revealing past temperatures based on some inherent physical or chemical properties that tie them to temperature. The correlation with recent temperatures just confirms and quantifies that tie.

Greg

“No screening and the 1000 km test gave very similar results. The 500 km test screened out more proxies, reducing 111 to 85, and made a bit more difference. A simple average was quite a lot different, but that is not surprising.”
Is the simple average of the whole lot statistically less valid than cherry pie?
From Willis’ figure 2 the grey line looks like it may be more credible than the processed result.
Steve Mc says the data is now archived at NOAA, anyone have a link?

Rud Istvan

Taking a few steps back to survey the landscape, Willis’ excellent posts on Neukom illustrate two larger issues.
First, in the whole proxytology (pun intended) field the tradition of a fundamentally flawed procedure is so ingrained it automatically passes peer review. It is like phrenologists practicing phrenology in the 1800s. The most effective macro-response is not proxy by proxy, post-hoc by post-hoc paper whack-a-mole, but rather discrediting proxytology as legitimate science in the first place.
Second, why the need to repeatedly try to establish a hockey stick? I think the main need is the shaft, not the blade. We have thermometer records for the last century showing small scale natural variability beyond reasonable dispute even according to the IPCC. Disappear the MWP and the LIA and you remove large scale natural variability from the AGW equation in order to make future catastrophe claims. Small scale natural variability (the pause) is the biggest thing going against the AGW model projections. Only if large scale variability does not exist is any claim of unprecedented, impending doom… even possible. So that is really the only ‘service to the cause’ proxytologists can provide. Now, there are ways to get at large scale natural variability both qualitatively and quantitatively, for example ClimateReason’s use of written historical records, that do not involve proxytology. Its practitioners must be feeling very threatened. They have an illegitimate science (treemometers) using illegitimate methods (post hoc selection) that can easily be rebutted by more legitimate means.
Existentially threatened.

Crappy cult science + crappy MSM reporting = dire predictions of doom. Without a warming trend, “fear” is all they have left in their arsenal. I’d venture 97% of Alarmists are “parrots,” just repeating what the MSM, and/or McKibben et al, tell them. This is a battle for minds, and (as the MSNBC poll showed the other day) they are losing. You don’t need a computer model to predict CAGW voices becoming shriller and claims more outrageous as the scientists who are crying wolf hurriedly pressure politicians to enshrine their failed theories into government policy, to support their ongoing bogus research. That’s why the demands of censorship of “deniers/skeptics” are increasing…silence the disbelievers and implement the manifesto….and ignore the record cold temperatures outside…this is going to be fun to watch this slow motion train wreck…the irony is Pachauri, a railroad engineer, is the locomotive’s driver….http://m.youtube.com/watch?v=6VIECzlFVUM. “Drivin’ that train, high on CO2 and methane, Mister Pa-chauri you’d better watch your speed….”

DocMartyn

They have used a proxy representing the Galapagos Islands; nice and red it is too. The ‘Chiefio’ looked at the temperature series of the Galapagos Islands and noticed its non-warming;
http://chiefio.files.wordpress.com/2011/02/galapagos-islands-temp-w-a.gif
http://www.wolframalpha.com/input/?i=Galapagos+Islands+temperature
Now, how did they calibrate their proxy?