Guest Post by Willis Eschenbach
In my last post, I talked about the “secret sauce”, as I described it, in the Neukom et al. study “Inter-hemispheric temperature variability over the past millennium”. By the “secret sauce” I mean the method which is able to turn the raw proxy data, which has no particular shape or trend, into a consensus-approved hockeystick … but in truth, I fear I can’t reveal all of the secret sauce, because as is far too common in climate science, they have not released their computer code. However, they did provide some clues, along with pretty pictures.
So what did they do, and how did they do it? Well, don your masks, respirators, coveralls, and hip boots, because folks, we’re about to go wading in some murky waters …
From my last post, Figure 2 shows the mean of the proxies used by Neukom, and the final result of cooking those proxies with their secret sauce:
Let me start with an overview of the whole process of proxy reconstruction, as practiced by far too many paleoclimatologists. It is fatally flawed, in my opinion, by their proxy selection methods.
What they do first is to find a whole bunch of proxies. Proxies are things like tree ring widths, or the thickness of layers of sediment, or the amounts of the isotope oxygen-18 in ice cores—in short, a proxy might be anything and everything which might possibly be related to temperature. The Neukom proxies, for example, include things like rainfall and streamflow … not sure how those might be related to temperature in any given location, but never mind. It’s all grist for the proxy mill.
Then comes the malfeasance. They compare the recent century or so of all of the proxies to some temperature measurement located near the proxy, like say the temperature of their gridcell in the GISS temperature dataset. If there is no significant correlation between the proxy and the gridcell temperature where the proxy is located, the record is discarded as not being a temperature proxy. However, if there is a statistically significant correlation between the proxy and the gridcell temperature, then the proxy is judged to be a valid temperature proxy, and is used in the analysis.
Do you see the huge problem with this procedure?
The practitioners of this arcane art don’t see the problem. They say this procedure is totally justified. How else, they argue, will we be able to tell if something actually IS a proxy for the temperature or not? Here is Esper on the subject:
However as we mentioned earlier on the subject of biological growth populations, this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.
“An advantage unique to dendroclimatology”? Why hasn’t this brilliant insight been more widely adopted?
To show why this procedure is totally illegitimate, all we have to do is to replace the word “proxies” in a couple of the paragraphs above with the words “random data”, and repeat the statements. Here we go:
They compare the recent century or so of all of the random data to some temperature measurement located near the random data, like say the temperature of their gridcell in the GISS temperature dataset. If there is no significant correlation between the random data and the gridcell temperature, the random data is discarded. However, if there is a statistically significant correlation between the random data and the gridcell temperature, then the random data is judged to be a valid temperature proxy, and is used in the analysis.
Now you see the first part of the problem. The selection procedure will give its blessing to random data just as readily as to a real temperature proxy. That’s the reason why this practice is “unique to dendroclimatology”, no one else is daft enough to use it … and sadly, this illegitimate procedure has become the go-to standard of the industry in proxy paleoclimate studies from the original Hockeystick all the way to Neukom2014.
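If you want to see that for yourself, here's a quick sketch in Python. The setup is entirely made up by me (a hundred years of autocorrelated "red noise" standing in for proxies, screened against an equally made-up temperature series), so it has nothing to do with Neukom's actual data, but it shows how readily pure noise gets "validated" by a p < 0.05 correlation screen:

```python
# Toy demonstration: how often does pure "red noise" pass a p < 0.05 correlation screen?
# Every number here is invented for illustration; this is not Neukom's data or code.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_years, n_fake_proxies = 100, 1000

def red_noise(n, phi=0.7):
    """AR(1) series -- real proxies and temperatures are autocorrelated, not white noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

temperature = red_noise(n_years) + 0.01 * np.arange(n_years)   # a mild warming trend

passed = 0
for _ in range(n_fake_proxies):
    fake_proxy = red_noise(n_years)              # pure noise, no temperature signal at all
    r, p = pearsonr(fake_proxy, temperature)
    if p < 0.05:                                 # the screening test
        passed += 1

print(f"{passed} of {n_fake_proxies} random series were 'validated' as temperature proxies")
# With autocorrelated series, the pass rate runs well above the nominal 5%.
```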
The name for this logical error is “post-hoc proxy selection”. This means that you have selected your proxies, not based on some inherent physical or chemical properties that tie them to temperature, but on how well they match the data you are trying to predict …
The use of post-hoc proxy selection in Neukom2014 is enough in itself to totally disqualify the study … but wait, it gets worse. I guess that comparing a proxy with the temperature record of the actual gridcell where it is physically located was too hard a test, and as a result they couldn't find enough proxies (read: random data) that would pass it. So here is the test that they ended up using, from their Supplementary Information:
We consider the “local” correlation of each record as the highest absolute correlation of a proxy with all grid cells within a radius of 1000 km and for all the three lags (0, 1 or -1 years). A proxy record is included in the predictor set if this local correlation is significant (p<0.05).
“Local” means within a thousand kilometers? Dear heavens, how many problems and misconceptions can they pack into a single statement? Like I said, hip boots are necessary for this kind of work.
First question, of course, is "how many gridcells are within 1,000 kilometres of a given proxy?" And this reveals a truly bizarre problem with their procedure. They are using GISS data on a regular 2° x 2° grid. At the Equator, anywhere from 68 to 78 of those gridcells have centers within 1,000 km of a given point, depending on exactly where the point sits within its gridcell … so they are comparing their proxy to ABOUT 70 GRIDCELL VALUES!!! Talk about a data dredge, that about takes the cake … but not quite, because they've outdone themselves.
The situation on the Equator doesn't take the cake once we consider, say, a proxy which is an ice core from the South Pole … because there are no less than 900 2° x 2° gridcells within 1,000 kilometres of the South Pole. I've heard of tilting the playing field in your favor, but that's nonsense.
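If you'd like to check the gridcell arithmetic yourself, here's a back-of-the-envelope calculation in Python. It's my own geometry, done on a generic 2° x 2° grid with centers at odd degrees rather than on their exact grid, and the precise count depends on just where the proxy sits, but the pattern is hard to miss:

```python
# Back-of-the-envelope check: how many 2 x 2 degree gridcell centers lie within
# 1,000 km of a proxy at various latitudes? My own geometry, on a generic grid --
# not Neukom's code, and not necessarily GISS's exact grid layout.
import numpy as np

R_EARTH = 6371.0  # km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

# Centers of a regular 2 x 2 degree grid
grid_lat, grid_lon = np.meshgrid(np.arange(-89.0, 90.0, 2.0),
                                 np.arange(-179.0, 180.0, 2.0))

# A proxy near the Equator, one near 50S, one near 75S, and one next to the South Pole
for proxy_lat, proxy_lon in [(1.0, 1.0), (-49.0, 1.0), (-75.0, 1.0), (-89.0, 1.0)]:
    d = great_circle_km(proxy_lat, proxy_lon, grid_lat, grid_lon)
    print(f"proxy at {proxy_lat:6.1f} deg: {int(np.sum(d <= 1000.0))} gridcell centers within 1,000 km")
```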
I note that they may be a bit uneasy about this procedure themselves. I say this because they dodge the worst of the bullet on other grounds, saying:
The predictors for the reconstructions are selected based on their local correlations with the target grid. We use the domain covering 55°S-10°N and all longitudes for the proxy screening. High latitude regions of the grid are excluded from the correlation analysis because south of 55°S, the instrumental data are not reliable at the grid-point level over large parts of the 20th century due to very sparse data coverage (Hansen et al., 2010). We include the regions between 0°N and 10°N because the equatorial regions have a strong influence on SH temperature variability.
Sketchy … and of course that doesn’t solve the problem:
Proxies from Antarctica, which are outside the domain used for proxy screening, are included, if they correlate significantly with at least 10% of the grid-area used for screening (latitude weighted).
It’s not at all clear what that means. How do you check correlation with 10% of a huge area? Which 10%? I don’t even know how you’d exhaustively search that area. I mean, do you divide the area into ten squares? Does the 10% have to be rectangular? And why 10%?
In any case, the underlying issue of checking different proxies against different numbers of gridcells is not solved by their kludge. At 50°S, there are no less than one hundred gridcells within the search radius. This has the odd effect that the nearer to the poles that a proxy is located, the greater the odds that it will be crowned with the title of temperature proxy … truly strange.
And it gets stranger. In the GISS temperature data, each gridcell’s temperature is some kind of average of the temperature stations in that gridcell. But what if there is no temperature station in that gridcell? Well, they assign it a temperature as a weighted average of the other local gridcells. And how big is “local” for GISS? Well … 1,200 kilometres.
This means that when the proxy is compared to all the local gridcells, in many cases a large number of the gridcell “temperatures” will be nothing but slightly differing averages of what’s going on within 1,200 kilometres.
Not strange enough for you? Bizarrely, they then go on to say (emphasis mine):
An alternative reconstruction using the full un-screened proxy network yields very similar results (Supplementary Figure 20, see section 3.2.2), demonstrating that the screening procedure has only a limited effect on the reconstruction outcome.
Say what? On any sane planet, the fact that such a huge change in the procedure has “only a limited effect” on your results should lead a scientist to re-examine very carefully whatever they are doing. To me, the meaning of this phrase is “our procedures are so successful at hockeystick mining that they can get the same results using random data” … how is that not a huge concern?
Returning to the question of the number of gridcells, here's the problem with looking through that many gridcells to find the highest correlation. The math is simple: the more times or places you look for something, the more likely you are to find an unusual but purely random result.
For example, if you flip a coin five times, the odds of all five flips coming up heads are 1/2 * 1/2 * 1/2 * 1/2 * 1/2. This is 1/32, or about 0.03, which is below the 0.05 significance threshold usually used in climate science.
So if that happened the first time you flipped a coin five times, five heads in a row, you’d be justified in saying that the coin might be weighted.
But suppose you repeated the whole process a dozen times, with each sample consisting of flipping the same coin five times. If we come up with five heads at some point in that process, should we still think the coin might be loaded?
Well … no. Because in a dozen sets of five flips, the odds of five heads coming up somewhere in there are about 30% … so if it happens, it’s not unusual.
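For anyone who wants to check my arithmetic, here it is in a couple of lines of Python:

```python
# The coin-flip arithmetic from the paragraphs above
p_five_heads = 0.5 ** 5                         # one run of five flips: 1/32, about 0.03
p_somewhere = 1 - (1 - p_five_heads) ** 12      # five heads at least once in twelve runs
print(round(p_five_heads, 3), round(p_somewhere, 3))   # 0.031 and 0.317
```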
So in that context, consider the value of testing either random data or a proxy against a hundred gridcell temperatures, not forgetting to check three times per gridcell to include the lags, and then accepting the proxy if any one of those correlations is significant at p < 0.05 … egads. This procedure is guaranteed to drive the number of false positives through the roof.
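And just to put rough numbers on it, here's a toy simulation in Python of a "best correlation over ~100 gridcells at any of three lags" screen applied to pure noise. I've treated the gridcells as independent, which is not realistic (neighboring cells share most of their signal), and all the sizes are invented, but it shows which way this procedure pushes things:

```python
# Toy simulation of taking the best correlation over ~100 cells and 3 lags.
# Gridcells are treated as independent noise here, purely for illustration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_years, n_cells, n_fake_proxies = 80, 100, 100

passed = 0
for _ in range(n_fake_proxies):
    fake_proxy = rng.normal(0, 1, n_years)       # pure noise, no signal
    best_p = 1.0
    for _ in range(n_cells):
        cell = rng.normal(0, 1, n_years)
        for lag in (-1, 0, 1):                   # the three lags they allow
            if lag == 0:
                a, b = fake_proxy, cell
            elif lag == 1:
                a, b = fake_proxy[:-1], cell[1:]
            else:
                a, b = fake_proxy[1:], cell[:-1]
            best_p = min(best_p, pearsonr(a, b)[1])
    if best_p < 0.05:                            # keep the "proxy" if ANY test passes
        passed += 1

print(f"{passed} of {n_fake_proxies} pure-noise 'proxies' survive the screen")
# With ~300 chances per proxy, nearly all of them do.
```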
Next, they say:
Both the proxy and instrumental data are linearly detrended over the 1911-1990 overlap period prior to the correlation analyses.
While this sounds reasonable, they haven't thought it all the way through. Unfortunately, this procedure leads to a subtle error. Let me illustrate it using the GISS data for the southern hemisphere, since this is the mean of the various gridcells they are using to screen their data:
Now, they are detrending it for a good reason, which is to keep the long-term trend from influencing the analysis. If you don’t do that, you end up doing what is also known as “mining for hockeysticks”, because the trend of the recent data will dominate the selection process. So they are trying to solve a real problem, but look what happens when we do linear detrending:
All that this does is change the shape of the long-term trend. It does not remove the trend: the detrended data still rises steadily after 1910. So they are still mining for hockeysticks.
The proper way to do this detrending is to use some kind of smoothing filter on the data to remove the slow swings. Here's a loess smooth; you can use other filters, as the particular choice is not critical for these purposes:
And once we subtract that loess smooth (gold line) from the GISS LOTI data, here’s what we get:
As you can see, that would put all of the proxies and data on a level playing field. Bear in mind, however, that improving the details of the actual method of post-hoc proxy selection is just putting lipstick on a pig … it’s still post-hoc proxy selection.
And since they haven’t done that, they are definitely mining for hockeysticks. No wonder that their proxy selection process is so meaningless.
From there, the process is generally pretty standard. They "calibrate" each proxy using a linear model to determine the best fit of the proxy to the temperature data from whichever of the ~70 gridcells the proxy correlated best with. Then they use another portion of the data (1880-1910) to "validate" the calibration parameters, that is to say, they check how well their formula replicates the early portion of the instrumental data.
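For those unfamiliar with the mechanics, here's a bare-bones sketch in Python of what a calibrate/validate split looks like for a single proxy. The data are invented and this is a generic textbook version of the procedure, not their code:

```python
# Bare-bones calibrate/validate split for a single proxy (made-up data, generic method)
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 1991)
gridcell_temp = rng.normal(0, 0.3, years.size).cumsum() * 0.05     # stand-in temperature
proxy = 2.0 * gridcell_temp + rng.normal(0, 0.5, years.size)       # stand-in proxy

calib = (years >= 1911) & (years <= 1990)      # calibration period
valid = (years >= 1880) & (years <= 1910)      # withheld validation period

# "Calibrate": regress temperature on the proxy over 1911-1990
b, a = np.polyfit(proxy[calib], gridcell_temp[calib], 1)
reconstruction = a + b * proxy

# "Validate": how well does the fitted formula reproduce the withheld early temperatures?
rmse = np.sqrt(np.mean((reconstruction[valid] - gridcell_temp[valid]) ** 2))
print(f"verification RMSE over 1880-1910: {rmse:.3f}")
```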
However, in Neukom2014 they introduced an interesting wrinkle. In their words:
For most of these choices, objective “best” solutions are largely missing in literature. The main limitation is that the real-world performance of different approaches and parameters can only be verified over the instrumental period, which is short and contains a strong trend, complicating quality assessments. We assess the influence of these methodological choices by varying methodological parameters in the ensemble and quantifying their effect on the reconstruction results. Obviously, the range within which these parameters are varied in the ensemble is also subjective, but we argue that the ranges chosen herein are within reasonable thresholds, based our own experience and the literature. Given the limited possibilities to identify the “best” ensemble members, we treat all reconstruction results equally and consider the ensemble mean our best estimate.
OK, fair enough. I kind of like this idea, but you’d have to be very careful with it. It’s like a “Monte Carlo” analysis. For each step in their analysis, they generate a variety of results by varying the parameters up and down. That explores the parameter space of the model to a greater extent. In theory this might be a useful procedure … but the devil is in the details, and there are a couple of them that are not pretty. One difficulty involves the uncertainty estimates for the “ensemble mean”, the average of the whole group of results that they’ve gotten by varying the parameters of the analysis.
Now, the standard formula for the error in calculating the mean has been known for a long time: the error of the mean is the standard deviation of the results divided by the square root of the number of data points.
However, they don’t use that formula. Instead, they say that the error is the quadratic sum (the square root of the sum of the squares) of the standard deviation of the data and the “residual standard deviation”. I can’t make heads or tails out of this procedure. Why doesn’t the number of data points enter into the calculation of the standard error? Is this some formula I’m unaware of?
And what is the "residual standard deviation"? It's not explained, but I think it is the standard deviation of the residuals in the calibration model for each proxy. This is a measure of how well or how poorly the individual proxy matched up with the actual temperature it was calibrated against.
So they are saying that the overall error can be calculated as the quadratic sum of the year-by-year average of the residual errors of all proxies contributing to that year and the standard deviation of the 3,000 results for that year … gotta confess, I’m not feeling it. I don’t understand even in theory how you’d calculate the expected error from this procedure, but I’m pretty sure that’s not it. In any case, I’d love to see the theoretical derivation of that result.
I mentioned that the devil is in the details. The second kinda troublesome detail about their Monte Carlo method is that at the end of the day, their method does almost nothing.
Here’s why. Let me take one of the “methodological parameters” that they are actually varying, viz:
Sampling the weight that each proxy gets in the PC analysis by increasing its variance by a factor of 0.67-1.5 (after scaling all proxies to mean zero and unit standard deviation over their common period).
OK, in the standard analysis the variance is not adjusted at all, which is the equivalent of a variance factor of 1. Now, they are varying it above and below 1, from 2/3 to 3/2, in order to explore the possible outcomes. This gives a whole range of possible outcomes; they collected 3,000 of them.
The problem is that at the end of the day, they average out all of the results to get their final answer … and of course, that lands them right back where they started. They have varied the parameter up and down from the actual value used, but the average of all of that is just the actual value …
Unless, of course, they vary the parameter more in one direction than the other. This, of course, has the effect of simply increasing or decreasing the parameter. Because at the end of the day, in a linear model if you vary a parameter and average the results, all you end up with is what you’d get if you had simply used the average of the random parameters chosen.
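Here's a tiny Python demonstration of the point, using a deliberately simplified stand-in for their ensemble: a single reconstruction step that is linear in the proxy, with the scale factor drawn uniformly from the 2/3-to-3/2 range. Whether they actually sample it uniformly I don't know; that's my assumption for the illustration:

```python
# Simplified stand-in for the ensemble: vary a scale factor, run a linear step, average.
# Everything here is invented for illustration; it is not their actual method.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(100)
proxy = 0.01 * years + rng.normal(0, 0.2, years.size)    # a proxy with a built-in trend

def reconstruct(series):
    """Stand-in for a linear reconstruction step: here, just the fitted trend."""
    return np.polyfit(years, series, 1)[0]

baseline = reconstruct(proxy)                     # the unadjusted result (factor = 1)

factors = rng.uniform(2 / 3, 3 / 2, 3000)         # the 0.67-1.5 range from the paper
ensemble = np.array([reconstruct(f * proxy) for f in factors])

print(f"baseline trend:      {baseline:.4f}")
print(f"ensemble-mean trend: {ensemble.mean():.4f}")
print(f"mean factor drawn:   {factors.mean():.3f}")
# Because every step is linear, the ensemble mean is just (mean factor) x (baseline),
# and a uniform draw from 2/3 to 3/2 averages about 1.08, not 1.
```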
Dang details, always messing up a good story …
Anyhow, that’s at least some of the oddities and the problems with what they’ve done. Other than that it is just more of the usual paleoclimate handwaving, addition and distraction. Here’s one of my favorite lines:
To determine the extent to which reconstructed temperature patterns are independently identified by climate models, we investigate inter-hemispheric temperature coherence from a 24-member multi-model ensemble
Yes siree, that’s the first thing I’d reach for in their situation, a 24-model climate circus, that’s the ticket …
If nothing else, this study could serve as the poster child for the need to provide computer code. Without it, despite their detailed description, we don’t know what was actually done … and given the fact that bugs infest computer code, they may not even have done what they think they’ve done.
Conclusions? My main conclusion is that almost all of the paleoclimate reconstructions, from the Hockeystick up to this one, are fatally flawed through their use of post-hoc proxy selection. This is exacerbated by the bizarre means of selection. In addition, their error estimates seem doubtful. They are saying that they know the average temperature of the southern hemisphere in the year 1000 to within a 95% confidence interval of plus or minus a quarter of a degree C?? Really? … c'mon, guys. Surely you can't expect us to believe that …
Anyhow, that’s their secret sauce … post-hoc proxy selection.
My best wishes to all,
CODA: With post-hoc proxy selection, you are choosing your explanatory variables on the basis of how well they match up with what you are trying to predict. This is generally called “data snooping”, and in real sciences it is regarded as a huge no-no. I don’t know how it got so widespread in climate science, but here we are … so given that post-hoc selection is clearly the wrong way to go, what would be the proper way to do a proxy temperature reconstruction?
First, you have to establish the size and nature of the link between the proxy and the temperature. For example, suppose your experiments show that the magnesium/calcium ratio in a particular kind of seashell varies up and down with temperature. What you do then is you get every freaking record of that kind of seashell that you can lay your hands on, from as many drill cores in as many parts of the ocean as you can find.
And then? Well, first you have to look at each and every one of them, and decide what the rules of the game are going to be. Are you going to use the proxies that are heteroskedastic (change in variance with time)? Are you going to use the proxies with missing data, and if so, how much missing data is acceptable? Are you going to restrict them to some minimum length? Are you only allowing proxies from a given geographical area? You need to specify exactly which proxies qualify and which don’t.
Then once you’ve made your proxy selection rules, you have to find each and every proxy that qualifies under those rules. Then you have to USE THEM ALL and see what the result looks like.
You can’t start by comparing the seashell records to the temperature that they are supposed to predict and throw out the proxies that don’t match the temperature, that’s a joke, it’s extreme data snooping. Instead, you have to make the rules in advance as to what kind of proxies you’re going to use, and then use every proxy that fits those rules. That’s the proper way to go about it.
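Just to make that concrete, here's a minimal Python sketch of what "rules first, then use everything that qualifies" might look like. The field names and thresholds are my own inventions, chosen only for illustration; the point is that the temperature record never appears anywhere in the selection:

```python
# Minimal sketch of "rules first, then use everything that qualifies"
# Field names and thresholds are invented for illustration only.
from dataclasses import dataclass
import numpy as np

@dataclass
class ProxyRecord:
    name: str
    lat: float
    lon: float
    values: np.ndarray          # one value per year, NaN where missing

def qualifies(p: ProxyRecord, min_years: int = 500, max_missing_frac: float = 0.10,
              lat_range: tuple = (-90.0, 0.0)) -> bool:
    """Selection rules fixed in advance -- note that temperature never appears here."""
    long_enough = p.values.size >= min_years
    complete_enough = np.isnan(p.values).mean() <= max_missing_frac
    in_region = lat_range[0] <= p.lat <= lat_range[1]
    return long_enough and complete_enough and in_region

# Then use EVERY record that passes the rules, whatever its correlation with temperature:
# selected = [p for p in all_proxy_records if qualifies(p)]
```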
PS–The Usual Request. If you disagree, quote what you disagree with. Otherwise, no one really knows what the heck you’re talking about.