Guest Post by Willis Eschenbach
In my last post, I talked about the “secret sauce”, as I described it, in the Neukom et al. study “Inter-hemispheric temperature variability over the past millennium”. By the “secret sauce” I mean the method which is able to turn the raw proxy data, which has no particular shape or trend, into a consensus-approved hockeystick … but in truth, I fear I can’t reveal all of the secret sauce, because as is far too common in climate science, they have not released their computer code. However, they did provide some clues, along with pretty pictures.
Figure 1. The overview graphic from Neukom2014. Click to embiggen.
So what did they do, and how did they do it? Well, don your masks, respirators, coveralls, and hip boots, because folks, we’re about to go wading in some murky waters …
From my last post, Figure 2 shows the mean of the proxies used by Neukom, and the final result of cooking those proxies with their secret sauce:
Figure 2. Raw proxy data average and final result from the Neukom2014 study. Note the hockeystick shape of the result.
Let me start with an overview of the whole process of proxy reconstruction, as practiced by far too many paleoclimatologists. It is fatally flawed, in my opinion, by their proxy selection methods.
What they do first is to find a whole bunch of proxies. Proxies are things like tree ring widths, or the thickness of layers of sediment, or the amounts of the isotope oxygen-18 in ice cores—in short, a proxy might be anything and everything which might possibly be related to temperature. The Neukom proxies, for example, include things like rainfall and streamflow … not sure how those might be related to temperature in any given location, but never mind. It’s all grist for the proxy mill.
Then comes the malfeasance. They compare the recent century or so of all of the proxies to some temperature measurement located near the proxy, like say the temperature of their gridcell in the GISS temperature dataset. If there is no significant correlation between the proxy and the gridcell temperature where the proxy is located, the record is discarded as not being a temperature proxy. However, if there is a statistically significant correlation between the proxy and the gridcell temperature, then the proxy is judged to be a valid temperature proxy, and is used in the analysis.
Do you see the huge problem with this procedure?
The practitioners of this arcane art don’t see the problem. They say this procedure is totally justified. How else, they argue, will we be able to tell if something actually IS a proxy for the temperature or not? Here is Esper on the subject:
However as we mentioned earlier on the subject of biological growth populations, this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.
“An advantage unique to dendroclimatology”? Why hasn’t this brilliant insight been more widely adopted?
To show why this procedure is totally illegitimate, all we have to do is to replace the word “proxies” in a couple of the paragraphs above with the words “random data”, and repeat the statements. Here we go:
They compare the recent century or so of all of the random data to some temperature measurement located near the random data, like say the temperature of their gridcell in the GISS temperature dataset. If there is no significant correlation between the random data and the gridcell temperature, the random data is discarded. However, if there is a statistically significant correlation between the random data and the gridcell temperature, then the random data is judged to be a valid temperature proxy, and is used in the analysis.
Now you see the first part of the problem. The selection procedure will give its blessing to random data just as readily as to a real temperature proxy. That’s the reason why this practice is “unique to dendroclimatology”, no one else is daft enough to use it … and sadly, this illegitimate procedure has become the go-to standard of the industry in proxy paleoclimate studies from the original Hockeystick all the way to Neukom2014.
The name for this logical error is “post-hoc proxy selection”. This means that you have selected your proxies, not based on some inherent physical or chemical properties that tie them to temperature, but on how well they match the data you are trying to predict …
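To see how efficient post-hoc selection is at manufacturing hockeysticks out of nothing, here's a quick toy demonstration in Python. It is my own back-of-the-envelope construction, not their code, and every number in it is made up: we generate pure random walks, screen them against a rising "instrumental" series in the manner described above, and average the survivors.

```python
# Toy sketch (not Neukom's code): screen pure random walks against a
# rising "instrumental" series and average the survivors.  The screened
# mean acquires a hockeystick-like uptick that pure noise should not have.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_cal = 1000, 100            # proxy length, calibration window
n_proxies = 1000

# Fake "instrumental" temperature for the last 100 years: trend plus noise
instr = 0.01 * np.arange(n_cal) + rng.normal(0, 0.2, n_cal)

# Fake "proxies": pure random walks, no temperature signal whatsoever
proxies = rng.normal(0, 1, (n_proxies, n_years)).cumsum(axis=1)

# Post-hoc screening: keep a proxy only if its last 100 years correlate
# "significantly" with the instrumental record (|r| above the nominal
# p<0.05 cutoff for n=100, roughly 0.197)
r = np.array([np.corrcoef(p[-n_cal:], instr)[0, 1] for p in proxies])
keep = np.abs(r) > 0.197
# flip negatively correlated series, as calibration effectively does
oriented = np.where(r[:, None] < 0, -proxies, proxies)

screened_mean = oriented[keep].mean(axis=0)
all_mean = proxies.mean(axis=0)
print(f"proxies passing screening: {keep.sum()} of {n_proxies}")
print("rise of screened mean over calibration period:",
      round(screened_mean[-1] - screened_mean[-n_cal], 2))
print("rise of unscreened mean over calibration period:",
      round(all_mean[-1] - all_mean[-n_cal], 2))
```

The unscreened average of the random walks just wanders around zero, while the screened-and-oriented average dutifully marches upward over the calibration period. No temperature signal required.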
The use of post hoc proxy selection in Neukom2014 is enough in itself to totally disqualify the study … but wait, it gets worse. I guess that comparing a proxy with the temperature record of the actual gridcell where it is physically located was too hard a test, and as a result they couldn’t find enough proxies (read: random data) that would pass that test. So here is the test that they ended up using, from their Supplementary Information:
We consider the “local” correlation of each record as the highest absolute correlation of a proxy with all grid cells within a radius of 1000 km and for all the three lags (0, 1 or -1 years). A proxy record is included in the predictor set if this local correlation is significant (p<0.05).
“Local” means within a thousand kilometers? Dear heavens, how many problems and misconceptions can they pack into a single statement? Like I said, hip boots are necessary for this kind of work.
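Just to make the quoted rule concrete, here's a minimal Python sketch of what a "take the highest absolute correlation over every gridcell within 1,000 km and three lags" screen looks like. This is my own reading of their description, run on invented toy data; it is not their actual program.

```python
# Sketch of the screening rule as quoted above (not their actual code):
# take the highest absolute correlation of a proxy with every gridcell
# whose center lies within 1000 km, at lags -1, 0 and +1 years, and keep
# the proxy if that best correlation clears a nominal p < 0.05.
import numpy as np
from scipy import stats

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    cos_c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return EARTH_RADIUS_KM * np.arccos(np.clip(cos_c, -1, 1))

def best_local_correlation(proxy, grid_temps, grid_lats, grid_lons,
                           proxy_lat, proxy_lon, radius_km=1000.0):
    """Highest |r| (and its p-value) over nearby gridcells and lags -1, 0, +1."""
    best_r, best_p = 0.0, 1.0
    for temps, glat, glon in zip(grid_temps, grid_lats, grid_lons):
        if great_circle_km(proxy_lat, proxy_lon, glat, glon) > radius_km:
            continue
        for lag in (-1, 0, 1):
            if lag == 0:
                x, y = proxy, temps
            elif lag > 0:
                x, y = proxy[:-lag], temps[lag:]
            else:
                x, y = proxy[-lag:], temps[:lag]
            r, p = stats.pearsonr(x, y)
            if abs(r) > abs(best_r):
                best_r, best_p = r, p
    return best_r, best_p

# Toy usage: one random "proxy" against 70 random gridcell series
rng = np.random.default_rng(0)
cells = [rng.normal(size=80) for _ in range(70)]
lats = rng.uniform(-8, 8, 70); lons = rng.uniform(-8, 8, 70)
r, p = best_local_correlation(rng.normal(size=80), cells, lats, lons, 0.0, 0.0)
print(f"best |r| = {abs(r):.2f}, nominal p = {p:.3f}, 'passes' = {p < 0.05}")
```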
First question, of course, is “how many gridcells are within 1,000 kilometres of a given proxy”? And this reveals a truly bizarre problem with their procedure. They are using GISS data on a regular 2° x 2° grid. At the Equator, anywhere from 68 to 78 of those gridcells lie within 1,000 km of a given point, depending on the point’s location within the gridcell … so they are comparing their proxy to ABOUT 70 GRIDCELL VALUES!!! Talk about a data dredge, that about takes the cake … but not quite, because they’ve outdone themselves.
The situation on the Equator doesn’t take the cake once we consider, say, a proxy which is an ice core from the South Pole … because there are no fewer than 900 of those 2° x 2° gridcells within 1,000 kilometres of the South Pole. I’ve heard of tilting the playing field in your favor, but that’s nonsense.
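If you want to check the gridcell counts for yourself, here's a crude Python sketch that simply counts how many 2° x 2° cell centers fall within 1,000 km of a point at various latitudes. The exact numbers depend on where the point sits within its cell and on whether you count cell centers or every cell that merely overlaps the circle (the latter convention gives the larger figures quoted above), but the blow-up toward the pole shows up no matter how you count.

```python
# Rough check of how many 2-degree gridcell CENTERS fall within 1000 km of
# a point, at the equator, at 50S, and at the South Pole.  A back-of-the-
# envelope sketch, not anything from the paper; counting cells that merely
# overlap the 1000 km circle gives somewhat larger numbers.
import numpy as np

R = 6371.0  # km

def centers_within_1000km(lat0, lon0, radius_km=1000.0):
    lats = np.arange(-89.0, 90.0, 2.0)          # 2x2 grid, cell centers
    lons = np.arange(-179.0, 180.0, 2.0)
    glat, glon = np.meshgrid(np.radians(lats), np.radians(lons))
    p0, l0 = np.radians(lat0), np.radians(lon0)
    cos_c = (np.sin(p0) * np.sin(glat)
             + np.cos(p0) * np.cos(glat) * np.cos(glon - l0))
    dist = R * np.arccos(np.clip(cos_c, -1, 1))
    return int((dist <= radius_km).sum())

for lat in (0.0, -50.0, -90.0):
    print(f"lat {lat:6.1f}: {centers_within_1000km(lat, 1.0)} cell centers within 1000 km")
```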
I note that they may be a bit uneasy about this procedure themselves. I say this because they dodge the worst of the bullet on other grounds, saying:
The predictors for the reconstructions are selected based on their local correlations with the target grid. We use the domain covering 55°S-10°N and all longitudes for the proxy screening. High latitude regions of the grid are excluded from the correlation analysis because south of 55°S, the instrumental data are not reliable at the grid-point level over large parts of the 20th century due to very sparse data coverage (Hansen et al., 2010). We include the regions between 0°N and 10°N because the equatorial regions have a strong influence on SH temperature variability.
Sketchy … and of course that doesn’t solve the problem:
Proxies from Antarctica, which are outside the domain used for proxy screening, are included, if they correlate significantly with at least 10% of the grid-area used for screening (latitude weighted).
It’s not at all clear what that means. How do you check correlation with 10% of a huge area? Which 10%? I don’t even know how you’d exhaustively search that area. I mean, do you divide the area into ten squares? Does the 10% have to be rectangular? And why 10%?
In any case, the underlying issue of checking different proxies against different numbers of gridcells is not solved by their kludge. At 50°S, there are no less than one hundred gridcells within the search radius. This has the odd effect that the nearer to the poles that a proxy is located, the greater the odds that it will be crowned with the title of temperature proxy … truly strange.
And it gets stranger. In the GISS temperature data, each gridcell’s temperature is some kind of average of the temperature stations in that gridcell. But what if there is no temperature station in that gridcell? Well, they assign it a temperature as a weighted average of the other local gridcells. And how big is “local” for GISS? Well … 1,200 kilometres.
This means that when the proxy is compared to all the local gridcells, in many cases a large number of the gridcell “temperatures” will be nothing but slightly differing averages of what’s going on within 1,200 kilometres.
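For those who haven't looked at how that style of infilling works, here's an illustrative sketch of distance-weighted interpolation in the spirit of Hansen & Lebedeff (1987), where a record's weight drops linearly to zero at 1,200 km. The station layout and anomaly values below are invented; the point is simply that neighbouring "empty" gridcells end up as slightly different blends of the very same records, and so are highly correlated with one another by construction.

```python
# Illustrative sketch of 1200 km distance-weighted infilling (in the style
# of Hansen & Lebedeff 1987): an empty gridcell gets a weighted average of
# nearby anomalies, with weight falling linearly to zero at 1200 km.
# Not GISS's code; the layout and anomalies below are invented.
def linear_weight(dist_km, cutoff_km=1200.0):
    return max(0.0, 1.0 - dist_km / cutoff_km)

# Two records with made-up anomalies, and two "empty" gridcells at
# different distances from each of them.
records = [0.8, 0.2]                                          # degrees C
cells = {"cell A": [300.0, 900.0], "cell B": [500.0, 700.0]}  # km to each record

for name, dists in cells.items():
    w = [linear_weight(d) for d in dists]
    infilled = sum(wi * a for wi, a in zip(w, records)) / sum(w)
    print(f"{name}: infilled anomaly = {infilled:.2f} C")
# Both cells are just slightly different blends of the same two records,
# so their "temperatures" are strongly correlated by construction.
```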
Not strange enough for you? Bizarrely, they then go on to say (emphasis mine):
An alternative reconstruction using the full un-screened proxy network yields very similar results (Supplementary Figure 20, see section 3.2.2), demonstrating that the screening procedure has only a limited effect on the reconstruction outcome.
Say what? On any sane planet, the fact that such a huge change in the procedure has “only a limited effect” on your results should lead a scientist to re-examine very carefully whatever they are doing. To me, the meaning of this phrase is “our procedures are so successful at hockeystick mining that they can get the same results using random data” … how is that not a huge concern?
Returning to the question of the number of gridcells, here’s the problem with looking through that many gridcells to find the highest correlation. The math is simple—the more times or places you look for something, the more likely you are to find an unusual but purely random result.
For example, if you flip a coin five times, the odds of all five flips coming up heads are 1/2 * 1/2 * 1/2 * 1/2 * 1/2. This is 1/32, or about 0.03, which is below the 0.05 significance threshold usually used in climate science.
So if that happened the first time you flipped a coin five times, five heads in a row, you’d be justified in saying that the coin might be weighted.
But suppose you repeated the whole process a dozen times, with each sample consisting of flipping the same coin five times. If we come up with five heads at some point in that process, should we still think the coin might be loaded?
Well … no. Because in a dozen sets of five flips, the odds of five heads coming up somewhere in there are about 30% … so if it happens, it’s not unusual.
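You can check that arithmetic in a couple of lines:

```python
# Quick check of the coin-flip arithmetic above.
p_five_heads = 0.5 ** 5                          # one run of five flips
p_at_least_once = 1 - (1 - p_five_heads) ** 12   # somewhere in a dozen runs
print(f"P(5 heads in one run)        = {p_five_heads:.3f}")     # ~0.031
print(f"P(5 heads in >=1 of 12 runs) = {p_at_least_once:.3f}")  # ~0.32
```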
So in that context, consider the value of testing either random data or a proxy against a hundred gridcell temperatures, not forgetting to check three times per gridcell to include the lags, and then accepting the proxy if any one of those correlations is nominally significant at p < 0.05 … egads. This procedure is guaranteed to drive the number of false positives through the roof.
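Here's a small Monte Carlo sketch of that effect, using my own invented setup rather than their data: a pure-noise "proxy" is screened against a hundred mutually correlated gridcell series at three lags, keeping the best nominal p-value.

```python
# Hedged simulation (my construction, not theirs): how often does a pure
# noise "proxy" pass a screen that takes the best correlation over ~100
# mutually correlated gridcell series and 3 lags, at nominal p < 0.05?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_years, n_cells, n_trials = 80, 100, 300

false_pos = 0
for _ in range(n_trials):
    # Gridcells = shared regional signal + local noise, so they are
    # strongly correlated with one another (as real neighbours are)
    regional = rng.normal(size=n_years)
    cells = regional + 0.5 * rng.normal(size=(n_cells, n_years))
    proxy = rng.normal(size=n_years)            # pure noise, no signal

    best_p = 1.0
    for cell in cells:
        for lag in (-1, 0, 1):
            if lag == 0:
                x, y = proxy, cell
            elif lag > 0:
                x, y = proxy[:-lag], cell[lag:]
            else:
                x, y = proxy[-lag:], cell[:lag]
            best_p = min(best_p, stats.pearsonr(x, y)[1])
    false_pos += best_p < 0.05

print(f"noise 'proxies' passing the screen: {false_pos / n_trials:.0%}")
# A single test would pass about 5% of the time; the best-of-many screen
# passes far more often, even though the gridcells are far from independent.
```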
Next, they say:
Both the proxy and instrumental data are linearly detrended over the 1911-1990 overlap period prior to the correlation analyses.
While this sounds reasonable, they haven’t thought it all the way through. Unfortunately, this procedure leads to a subtle error. Let me illustrate it using the GISS data for the southern hemisphere, since this is the mean of the various gridcells they are using to screen their data:
Figure 3. GISS land-ocean temperature index (LOTI) for the southern hemisphere.
Now, they are detrending it for a good reason, which is to keep the long-term trend from influencing the analysis. If you don’t do that, you end up doing what is also known as “mining for hockeysticks”, because the trend of the recent data will dominate the selection process. So they are trying to solve a real problem, but look what happens when we do linear detrending:
Figure 4. Linearly detrended GISS land-ocean temperature index (LOTI) for the southern hemisphere.
All that this does is change the shape of the long-term trend. It does not remove the trend; the detrended data still rises steadily after 1910. So they are still mining for hockeysticks.
The proper way to do this detrending is to use some kind of smoothing filter on the data to remove the slow swings in the data. Here’s a loess smooth; you could use other filters, as the particular choice is not critical for these purposes:
Figure 5. Loess smooth of GISS land-ocean temperature index (LOTI) for the southern hemisphere.
And once we subtract that loess smooth (gold line) from the GISS LOTI data, here’s what we get:
Figure 6. GISS land-ocean temperature index (LOTI) for the southern hemisphere, after detrending using a loess smooth.
As you can see, that would put all of the proxies and data on a level playing field. Bear in mind, however, that improving the details of the actual method of post-hoc proxy selection is just putting lipstick on a pig … it’s still post-hoc proxy selection.
And since they haven’t done that, they are definitely mining for hockeysticks. No wonder that their proxy selection process is so meaningless.
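To make the detrending issue concrete, here's a small Python sketch on a synthetic series that, like the southern hemisphere record, is roughly flat early on and warms mostly after mid-century. I'm not downloading the GISS data here, so treat it purely as an illustration of the two choices.

```python
# Sketch of the two detrending choices on a made-up series that is roughly
# flat early and warms after mid-century (a synthetic stand-in; not GISS data).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
years = np.arange(1880, 2011)
signal = np.where(years < 1950, 0.0, 0.015 * (years - 1950))  # flat, then ramp
temp = signal + rng.normal(0, 0.08, years.size)

# (a) linear detrending over the full period
slope, intercept = np.polyfit(years, temp, 1)
linear_resid = temp - (slope * years + intercept)

# (b) removing a loess smooth instead
smooth = lowess(temp, years, frac=0.4, return_sorted=False)
loess_resid = temp - smooth

def late_trend(resid):
    """OLS trend of the residuals over the final 30 years, degC per decade."""
    return 10 * np.polyfit(years[-30:], resid[-30:], 1)[0]

print(f"residual trend after linear detrending: {late_trend(linear_resid):+.3f} C/decade")
print(f"residual trend after loess detrending:  {late_trend(loess_resid):+.3f} C/decade")
# The linearly detrended residuals still trend upward late in the record,
# so screening on them still rewards hockeystick-shaped proxies.
```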
From there, the process is generally pretty standard. They “calibrate” each proxy using a linear model to determine the best fit of the proxy to the temperature data from whichever of the ~70 gridcells the proxy correlated best with. Then they use some other portion of the data (1880-1910) to “validate” the calibration parameters, that is to say, they check how well their formula works to replicate the early portion of the data.
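For readers who haven't seen it done, here's a bare-bones sketch of that generic calibrate-and-validate step on made-up data, using a simple "reduction of error" check over the held-back early years. The numbers are invented and the details vary from study to study; this is not the paper's code.

```python
# Minimal sketch of the generic calibrate/validate step described above:
# fit a linear model over the calibration window, then score it on the
# held-back early period.  Synthetic data; not the paper's code.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1880, 1991)
temp = 0.005 * (years - 1880) + rng.normal(0, 0.1, years.size)   # fake gridcell T
proxy = 2.0 * temp + rng.normal(0, 0.3, years.size)              # fake proxy

cal = (years >= 1911) & (years <= 1990)      # calibration window
val = (years >= 1880) & (years <= 1910)      # validation window

# Calibrate: regress temperature on the proxy over 1911-1990
b, a = np.polyfit(proxy[cal], temp[cal], 1)
predicted = a + b * proxy[val]

# Validate: reduction of error (RE) skill score over 1880-1910,
# relative to just predicting the calibration-period mean
ref = temp[cal].mean()
re = 1 - np.sum((temp[val] - predicted) ** 2) / np.sum((temp[val] - ref) ** 2)
print(f"calibration slope = {b:.2f}, validation RE = {re:.2f}")
```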
However, in Neukom2014 they introduced an interesting wrinkle. In their words:
For most of these choices, objective “best” solutions are largely missing in literature. The main limitation is that the real-world performance of different approaches and parameters can only be verified over the instrumental period, which is short and contains a strong trend, complicating quality assessments. We assess the influence of these methodological choices by varying methodological parameters in the ensemble and quantifying their effect on the reconstruction results. Obviously, the range within which these parameters are varied in the ensemble is also subjective, but we argue that the ranges chosen herein are within reasonable thresholds, based our own experience and the literature. Given the limited possibilities to identify the “best” ensemble members, we treat all reconstruction results equally and consider the ensemble mean our best estimate.
OK, fair enough. I kind of like this idea, but you’d have to be very careful with it. It’s like a “Monte Carlo” analysis. For each step in their analysis, they generate a variety of results by varying the parameters up and down. That explores the parameter space of the model to a greater extent. In theory this might be a useful procedure … but the devil is in the details, and there are a couple of them that are not pretty. One difficulty involves the uncertainty estimates for the “ensemble mean”, the average of the whole group of results that they’ve gotten by varying the parameters of the analysis.
Now, the standard formula for the error in calculating the mean has been known for a long time: the standard error of the mean is the standard deviation of the results divided by the square root of the number of data points.
However, they don’t use that formula. Instead, they say that the error is the quadratic sum (the square root of the sum of the squares) of the standard deviation of the data and the “residual standard deviation”. I can’t make heads or tails out of this procedure. Why doesn’t the number of data points enter into the calculation of the standard error? Is this some formula I’m unaware of?
And what is the “residual standard error”? It’s not explained, but I think the “residual standard error” is the standard deviation of the residuals in the model for each proxy. This is a measure of how well or how poorly the individual proxy matched up with the actual temperature it was calibrated against.
So they are saying that the overall error can be calculated as the quadratic sum of the year-by-year average of the residual errors of all proxies contributing to that year and the standard deviation of the 3,000 results for that year … gotta confess, I’m not feeling it. I don’t understand even in theory how you’d calculate the expected error from this procedure, but I’m pretty sure that’s not it. In any case, I’d love to see the theoretical derivation of that result.
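To show how different the two formulas are, here's a quick numerical comparison on made-up numbers. My reading of "residual standard deviation" is a guess, as noted above, so this is only about the shape of the calculation, not their actual values.

```python
# The textbook standard error of the mean versus a quadratic-sum
# combination like the one described above, on made-up numbers.  The
# "residual SD" value here is an assumption, purely for illustration.
import numpy as np

rng = np.random.default_rng(11)
ensemble = rng.normal(0.0, 0.25, 3000)    # 3,000 ensemble results for one year
resid_sd = 0.20                           # assumed mean residual SD of the proxies

sd = ensemble.std(ddof=1)
sem = sd / np.sqrt(ensemble.size)                 # classic standard error of the mean
quad = np.sqrt(sd ** 2 + resid_sd ** 2)           # quadratic-sum combination

print(f"ensemble SD                 = {sd:.3f}")
print(f"standard error of the mean  = {sem:.4f}")
print(f"quadratic-sum 'uncertainty' = {quad:.3f}")
```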
I mentioned that the devil is in the details. The second kinda troublesome detail about their Monte Carlo method is that at the end of the day, their method does almost nothing.
Here’s why. Let me take one of the “methodological parameters” that they are actually varying, viz:
Sampling the weight that each proxy gets in the PC analysis by increasing its variance by a factor of 0.67-1.5 (after scaling all proxies to mean zero and unit standard deviation over their common period).
OK, in the standard analysis, the variance is not adjusted at all. This is the equivalent of a variance factor of 1. Now, they are varying it above and below 1, from 2/3 to 3/2, in order to explore the possible outcomes. This gives a whole range of possible outcomes; they collected 3,000 of them.
The problem is that at the end of the day, they average out all of the results to get their final answer … and of course, that lands them right back where they started. They have varied the parameter up and down from the actual value used, but the average of all of that is just the actual value …
Unless, of course, they vary the parameter more in one direction than the other. This, of course, has the effect of simply increasing or decreasing the parameter. Because at the end of the day, in a linear model if you vary a parameter and average the results, all you end up with is what you’d get if you had simply used the average of the random parameters chosen.
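Here's a toy example of that point. The "reconstruction" below is a deliberately simple linear combination, not their PCA machinery, but it shows the behaviour: average an ensemble of runs with the scaling factor drawn from 0.67–1.5, and you get the same answer as a single run at the mean factor, which for that asymmetric range is about 1.08 rather than 1.

```python
# Toy illustration: if the reconstruction step is linear in the varied
# parameter, averaging an ensemble of results is the same as running it
# once with the average parameter value.  My own linear toy, not the
# paper's pipeline.
import numpy as np

rng = np.random.default_rng(5)
proxy_a = rng.normal(size=200)
proxy_b = rng.normal(size=200)

def toy_recon(variance_factor):
    """A deliberately simple 'reconstruction': a weighted sum of two proxies."""
    return variance_factor * proxy_a + proxy_b

# Ensemble: vary the factor over 0.67-1.5 as in the quoted parameter range
factors = rng.uniform(0.67, 1.5, 3000)
ensemble_mean = np.mean([toy_recon(f) for f in factors], axis=0)

single_run = toy_recon(factors.mean())      # one run at the average factor
print("max difference between ensemble mean and single run:",
      float(np.max(np.abs(ensemble_mean - single_run))))
print("average factor actually used:", round(factors.mean(), 3))
```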
Dang details, always messing up a good story …
Anyhow, that’s at least some of the oddities and the problems with what they’ve done. Other than that it is just more of the usual paleoclimate handwaving, addition and distraction. Here’s one of my favorite lines:
To determine the extent to which reconstructed temperature patterns are independently identified by climate models, we investigate inter-hemispheric temperature coherence from a 24-member multi-model ensemble
Yes siree, that’s the first thing I’d reach for in their situation, a 24-model climate circus, that’s the ticket …
If nothing else, this study could serve as the poster child for the need to provide computer code. Without it, despite their detailed description, we don’t know what was actually done … and given the fact that bugs infest computer code, they may not even have done what they think they’ve done.
Conclusions? My main conclusion is that almost the entire string of paleoclimate reconstructions, from the Hockeystick up to this one, is fatally flawed through the use of post-hoc proxy selection. This is exacerbated by the bizarre means of selection. In addition, their error estimates seem doubtful. They are saying that they know the average temperature of the southern hemisphere in the year 1000 to within a 95% confidence interval of plus or minus a quarter of a degree C?? Really? … c’mon, guys. Surely you can’t expect us to believe that …
Anyhow, that’s their secret sauce … post-hoc proxy selection.
My best wishes to all,
w.
CODA: With post-hoc proxy selection, you are choosing your explanatory variables on the basis of how well they match up with what you are trying to predict. This is generally called “data snooping”, and in real sciences it is regarded as a huge no-no. I don’t know how it got so widespread in climate science, but here we are … so given that post-hoc selection is clearly the wrong way to go, what would be the proper way to do a proxy temperature reconstruction?
First, you have to establish the size and nature of the link between the proxy and the temperature. For example, suppose your experiments show that the magnesium/calcium ratio in a particular kind of seashell varies up and down with temperature. What you do then is you get every freaking record of that kind of seashell that you can lay your hands on, from as many drill cores in as many parts of the ocean as you can find.
And then? Well, first you have to look at each and every one of them, and decide what the rules of the game are going to be. Are you going to use the proxies that are heteroskedastic (change in variance with time)? Are you going to use the proxies with missing data, and if so, how much missing data is acceptable? Are you going to restrict them to some minimum length? Are you only allowing proxies from a given geographical area? You need to specify exactly which proxies qualify and which don’t.
Then once you’ve made your proxy selection rules, you have to find each and every proxy that qualifies under those rules. Then you have to USE THEM ALL and see what the result looks like.
You can’t start by comparing the seashell records to the temperature that they are supposed to predict and throw out the proxies that don’t match the temperature, that’s a joke, it’s extreme data snooping. Instead, you have to make the rules in advance as to what kind of proxies you’re going to use, and then use every proxy that fits those rules. That’s the proper way to go about it.
PS–The Usual Request. If you disagree, quote what you disagree with. Otherwise, no one really knows what the heck you’re talking about.
Stupid question:
Before any method is applied to real data should not one have to demonstrate that the method works correctly when fed random data?
What I am imagining is that you generate say a million sets of random data, apply your method and see if the results show a trend. If so then you know that your method has a bias and you go back to the drawing board.
Excellent perspective piece, thanks Willis!
Nick Stokes says:
April 4, 2014 at 3:01 am
Thanks, Nick.
w.
Thank you, Willis.
Steamboat Jack (Jon Jewett’s evil twin)
Izen,
How does rainfall correlate to temperature? How do tree ring widths or “latewood density” correlate to temperature? How does stream flow correlate to temperature? How do sediment layer thicknesses correlate to temperature?
The simple answer: Not in any way easy to quantify! Such “data” would have been thrown out of my 10th-grade biology class, and just because it comes from Stanford or wherever does not make it correlate.
Come back with something rational, we would all like to hear it…
The sad thing is that with a straight face, these guys claim to be doing science.
“The Neukom proxies, for example, include things like rainfall and streamflow … not sure how those might be related to temperature in any given location, but never mind.” You got it right in one and should quit right there.
I’ve never seen a single proxy where it’s been demonstrated that there is a physical connection between the proxy and temperature. For these studies to be valid, there have to be independent experiments demonstrating the connection, along with calibration curves, between temperature and [your proxy here]. (I once asked my arborist if tree rings were thermometers. He just laughed. Nope, they measure precipitation. Worse yet, if the north side of the tree got more water than the south side, the rings would be wider on the north side. There’s even a climategate email by a biologist stating this. ) Quite frankly, I think even the d18O measurements are suspect as temperature proxies. http://scienceofdoom.com/2014/02/24/ghosts-of-climates-past-seventeen-proxies-under-water-i/
An honest presentation of the data would also include error bars derived from the calibration curves. Take a look at the data from the recent BICEP2 experiment that measured the Cosmic Microwave Background. Every point has its one sigma error bar, the standard in physics. Ever see error bars on the data points in a proxy time series? Neither have I.
The entire paleo-proxy effort fails at the level of basic science. All the statistical manipulations in the world can’t change that.
Professor Brown: as always, thank you for your clear analysis. You ask some questions I’ve been asking (as have many others):
First they will pretend that the science is and always will be settled. This has already begun. Then, I suppose they will do what ideologues always do. They will become reactionary. They will dig in and defend the widespread changes in policies, regulations, laws, technology and attitudes that they have inspired and for which they lobbied. They will continue to proselytize and indoctrinate children in the old ways. There was always a goal for all this and it was legislative, social and cultural. Revolution by other means. And they have succeeded to a degree. Imposing new technology, passing laws, and decreeing regulations is difficult, but abandoning and/or repealing them when they become omnipresent, burdensome, archaic and even destructive is even more so.
Which is not to say that all that change has been negative. But there is much that the new generation of progressives will have to do to clean up the mess that was created by a combination of genuine environmental concern and the ability to raise enormous amounts of funding through the device of climate alarmism.
Thanks, Willis. A superb article.
“you have to make the rules in advance as to what kind of proxies you’re going to use, and then use every proxy that fits those rules”; This is the golden rule.
@-Michael Moon
“How does rainfall correlate to temperature? How do tree ring widths or “latewood density” correlate to temperature? ”
That they do is a fundamental part of the economic exploitation of timber.
The productivity, expected output of a forest assessed for logging is calculated from the temperature and rainfall records.
http://www.fsl.orst.edu/~waring/Publications/pdf/87%20-%20Copy.pdf
Don’t know about anyone else, but I’m looking forward to working “post hoc ergo proxy hoc” into conversation.
A “double-blind” approach to proxy selection would be to mix the actual proxy series with an equivalent number of random data series, either drawn from phenomena unrelated to climate (such as sports statistics) or constructed from a random number generator to resemble natural time series data.
That way, neither the proxy selectors nor the proxies themselves know which are real or fake.
If the selection process can differentiate real proxies from the random data, and the historical average of the proxies is materially different from that of the random data, then I might believe that the proxies reconstructions had some value.
“Greg says:
April 4, 2014 at 3:23 am
OMG, this is Mann’s hockeystick all over again.”
See, this paleo stuff REALLY IS reproducible! 🙂
Can ANYONE explain to me why ANY OF THIS has ANY MEANING?
Temperature + Humidity (plus of course, a minor contribution of Atm pressure at the time of a reading) = ENTHALPY. (Or yields it.)
That means the ENERGY in a VOLUME of Air.
Averaging TEMPERATURES is MEANINGLESS. Even if you say, “Well, we are looking at the “changes”, not the averages of the temperatures…” ALL THE MORE B.S. (Barbara Streisand)
BECAUSE the “necessary and sufficient condition” for that to have MEANING would be a “non-moving data set.” I.e., a “baseline” which said that for season/location … over some period of time you could say … the temperature (of the observed air mass) will be at this VALUE. Again, completely impossible to do!
Sorry, while I retreat to my BOMB shelter, to let ALL this “hogwash” go by me!
Stephen Richards says:
April 4, 2014 at 1:24 am
This new super duper statistical method was used in one of the more famous pieces of fraud that SteveMc dissected. I just can’t remember which one.
———————————–
Wasn’t it Yamal, especially Yamal06?
izen, I suspect you mean well, but you are mistaken in exactly the way that is being discussed. I know the appeal to “common sense” feels strong, but in this case it is wrong. Every proxy is a mixture of signal and noise, even the ones with fairly straightforward temperature relationships like oxygen isotopes; most of the proxies used here, as you can see from the graph, are mostly noise to begin with. The post-hoc selection of proxies allows noise that by chance correlates to recent temperature to be magically (fraudulently) converted into signal. Even people with higher education in science make this mistake because it is so seductive. So, don’t feel bad. Statistical rigor can be cruel, but we are better off for it.
The weird thing is that there are a number of paleotemperature proxies that actually work, and where the physical relationship between the proxy and the temperature is understood (or has at least been thoroughly tested and verified):
D18O (from arctic icecaps)
TEX86
Alkenones
Foraminifera
Pollen analysis
Treeline changes
Faunal compositions
These all have limitations and problems, and none of them is very exact (the uncertainty is at least plus or minus a degree or two), but the strangest thing of all is that (with the possible exception of D18O) they are almost never used by “climate scientists”, who prefer to use proxies like tree-rings, streamflow and lake deposits whose relation to temperatures is, to put things mildly, very indirect.
I wonder how many folks used this kind of statistics to earn their PHD in the first place?
It’s fine to use correlation with T to select proxy types but not to cherry pick individual proxy series within each proxy type. So I will translate izen’s statement thus:
“The correlation between PROXY TYPES and recent temperature data IS legitimate because there are well established physical and biological processes that result in temperature changes altering the proxy measured as with dO18 isotope analysis. A lack of correlation in such PROXY TYPES indicates that factors other than temperature are distorting the data so that THOSE PROXY TYPES should be discarded.”
…for otherwise you are just data mining for recent correlation with thermometers while less recent noise just averages out to a horizontal hockey stick blade.
It strikes me that what proxy constructors have done is the equivalent of removing outliers in an experiment because, well, they are outliers and we don’t like them!!
Izen says:
“The productivity, expected output of a forest assessed for logging is calculated from the temperature and rainfall records.”
Exactly, temperature and rainfall. And in most parts of the World rainfall is the most important factor. You might find a reasonably “pure” temperature-dominated treering record close to the arctic treeline in areas where there is never any moisture stress (not many such places in the World), and provided you take samples from a very large number of trees to even out local effects. And you will still get plenty of noise and spurious signals from e.g. exceptionally late spring or early autumn frosts, major insect infestations, forest fires and large storms that fell trees over a large area.
“there are well established physical and biological processes that result in temperature changes altering the proxy measured as with dO18 isotope analysis.”
Sometimes yes, sometimes no:
“The general peril in isotopic paleoclimate proxies is that the data may reflect a change in the source vapor due to a minor circulation change, rather than a widespread change in a major climate variable such as temperature or runoff” (R. T. Pierrehumbert) (my emphasis)
Relying just on the account given here, it would seem that a big problem with the proxy selection lies in the evident nonstationarity of (typical) 20th century temperature time series. Whatever the exact nature of these time series may be – integrated, trend stationary or other – they are clearly ripe for spurious correlation with any other time series with an ‘upward drift’, or even just high autocorrelation, over the same time period. Random walks for instance, especially if drift is added.
By way of explanation for those invoking physical arguments to connect proxy and temperature, ‘spurious’ in this sense does not imply there definitely is not a connection, just that the correlations obtained (and their p-values) have no statistical worth.
Thank you Willis.
Circular reasoning can be difficult to detect, particularly when the premise and conclusion are widely separated by a series of convoluted steps.
Izen, I happen to own several hundred acres of managed forest. Managed for maximum wildlife and maximum mixed deciduous hardwood yield (I won’t allow any cutting of the few residual northern white pines). Game includes whitetail, turkey, squirrel, coyote, ruffed grouse, raccoon, rabbit, the occasional black bear, plus lots of hawks, lesser rodents, and other wonderful forest edge creatures—even visiting bald eagles up from the Uplands LWRV, a national scenic waterway.
The relationship between tree growth and environmental factors is much more complex than the website you note says. As a single example, every ‘wolf tree’ of any species should be cut at any selective logging. All the trees around it undergo a growth spurt for perhaps 15-20 years just because they have access to more sunlight, everything else being equal—which it never is. Google for an explanation. On my land, old hollow 200-year-old burr oaks, prairie savannah remnants left over after we snuffed out fire in southwest Wisconsin, are but one example. We only spare the wild honey bee trees. There are exactly three. Darned big old box elders are another that always get cut.
There is no way trees make proxy thermometers. Neither in Wisconsin nor in Yamal. People should stop trying, and learn forestry instead.
And learning basic statistical principles would not hurt the proxytologists either, as Willis continues to point out.
Robert Scribbler has been talking about a coming Kelvin Wave for a while now.
Izen,
Sure, rainfall correlates to expected output, and temperature correlates to expected output. This does not remotely imply that rainfall correlates to temperature, not under any system of logic I have ever encountered. The hottest places on Earth are also the driest! You have given yourself yet another black eye…