A question about proxies and calibration with the adjusted temperature record

WUWT reader Tom O’Hara writes in with a question that seemed worthwhile to discuss. Paleo specialists can weigh in on this. It seems to me that he has a point, but like him, I don’t know all the nuances of calibrating a proxy. (Graphic at right by Willis Eschenbach, from another discussion.)

O’Hara writes:

[This] is a puzzle to me.

Everything we know about past climate is based on “proxies.”  As I understand the concept, science looks at “stuff” and finds something that tends to mirror the changes in temperature, or whatever, and uses that as a means to determine what the likely temperature would have been at an earlier time.  This is, I am sure, an oversimplified explanation.

So what we have, in essence, is a 150 year or so record of temperature readings to use to determine our proxy’s hoped-for accuracy.

Now my question would be, if we are continuously adjusting the “readings” of that record, how does that affect the usefulness of the proxy information?

If I have correlated my proxy to a moving target, doesn’t that affect the likelihood that the proxy will yield useful information?

It would seem to me that this constant massaging of the database used to define and tune my proxy, would, in the end, destroy the utility of my proxy to deliver useful information.  Or have I got it all wrong?

A few primers for discussion:

1. Detecting instabilities in tree-ring proxy calibration – Visser et al.

Abstract. Evidence has been found for reduced sensitivity of tree growth to temperature in a number of forests at high northern latitudes and alpine locations. Furthermore, at some of these sites, emergent subpopulations of trees show negative growth trends with rising temperature. These findings are typically referred to as the “Divergence Problem” (DP). Given the high relevance of paleoclimatic reconstructions for policy-related studies, it is important for dendrochronologists to address this issue of potential model uncertainties associated with the DP. Here we address this issue by proposing a calibration technique, termed “stochastic response function” (SRF), which allows the presence or absence of any instabilities in growth response of trees (or any other climate proxy) to their calibration target to be visualized and detected. Since this framework estimates confidence limits and subsequently provides statistical significance tests, the approach is also very well suited for proxy screening prior to the generation of a climate-reconstruction network.

Two examples of tree growth/climate relationships are provided, one from the North American Arctic treeline and the other from the upper treeline in the European Alps. Instabilities were found to be present where stabilities were reported in the literature, and vice versa, stabilities were found where instabilities were reported. We advise to apply SRFs in future proxy-screening schemes, next to the use of correlations and RE/CE statistics. It will improve the strength of reconstruction hindcasts.

Citation: Visser, H., Büntgen, U., D’Arrigo, R., and Petersen, A. C.: Detecting instabilities in tree-ring proxy calibration, Clim. Past, 6, 367-377, doi:10.5194/cp-6-367-2010, 2010.

2.

From WUWT: A new paper now in open review in the journal Climate of the Past suggests that “modern sample bias” has “seriously compromised” tree-ring temperature reconstructions, producing an “artificial positive signal [e.g. 'hockey stick'] in the final chronology.”

Basically, older trees grow slower, and that mimics the temperature signal paleo researchers like Mann look for. Unless you correct for this issue, you end up with a false temperature signal, like a hockey stick in modern times. Separating a valid temperature signal from the natural growth pattern of the tree becomes a larger challenge with this correction.  More here

 

3.

Calibration trials using very long instrumental and proxy data

Esper et al. 2008

Introduction
The European Alps are one of the few places that allow comparisons of natural climate proxies, such as tree-rings, with instrumental and documentary data over multiple centuries. Evidence from local and regional tree-ring analyses in the Alps clearly showed that tree-ring width (TRW) data from high elevation, near treeline environments contain substantial temperature signals (e.g., Büntgen et al. 2005, 2006, Carrer et al. 2007, Frank and Esper 2005a, 2005b, Frank et al. 2005). This sensitivity can be evaluated over longer timescales by comparison with instrumental temperature data recorded in higher elevation (>1,500 m asl) environments back to the early 19th century, and, due to the spatially homogenous temperature field, back to the mid 18th century using observational data from stations surrounding the Alps (Auer et al. 2007, Böhm et al. 2001, Casty et al. 2005, Frank et al. 2007a, Luterbacher et al. 2004). Further, the combination of such instrumental data with even older documentary evidence (Pfister 1999, Brázdil et al. 2005) allows an assessment of temporal coherence changes between tree-rings and combined instrumental and documentary data back to AD 1660. Such analyses are outlined here using TRW data from a set of Pinus cembra L. sampling sites from the Swiss Engadin, and calibrating these data against a gridded surface air temperature reconstruction integrating long-term instrumental and multi-proxy data (Luterbacher et al. 2004).

paper here: Esper_et_al_TraceVol_6 (PDF)

This entry was posted in Proxies.

102 Responses to A question about proxies and calibration with the adjusted temperature record

  1. nutso fasst says:

    How does CO2 concentration affect growth rates?

  2. Coach Springer says:

    “Lay person” reaction: Proxy is projection of the present onto the past. Change the present, change the projection. The issue serves to remind me that these are only projections rather than assuming a false accuracy just because different projections might produce minute variances. (A whole set of forecasts could be off by a mile but only vary from one another minimally. Sound familiar with regard to projections into the future?)

  3. Maybe the reason for the divergence problem is also bad weather station placement, especially in the Arctic. If you “train” your proxies using “bad” stations, where the situation has changed, you may see artificial warming that is not visible in the tree rings, simply because there was no real warming.

  4. David McKeever says:

    Statistics can look at the same data from a different angle (so to speak) and find a stronger correlation with some subset of the data (PCA is just one technique). Once you have a data set you aren’t frozen into one analysis. That also opens the door to abusing these same techniques (see Steve McIntyre on the hockey stick). Abusing the methods to find a predetermined pattern doesn’t nullify all the methods (used appropriately).

  5. Joseph Murphy says:

    A general rule of mine, you can not do hard science on anything with a specific date attached to it. Experimentation requires that time be irrelevant. (You can do an experiment that shows x causes y. But, if you know that y occurred at sometime in the past, you can not do an experiment to show that x was the cause of that y.) This post seems to be pondering some of the extra assumptions required when specific ‘times’ are incorporated into science.

  6. Jim G says:

    Don’t trees, like most living things, adapt over time to their environment? Plus the variables are many, ie CO2, moisture, temperature, sunlight, humidity, etc.

  7. BioBob says:

    - It is important to realize that field temperature readings are themselves proxies of particle velocity or kinetic energy.

    – In addition, the unit scales (Celsius, Kelvin, etc.) employed are also proxies for reality.

    – calibration is also a proxy for ‘accuracy’, since precision and the limits of observation make the resulting readings a ‘fuzzy’ probability cloud rather than a single value.

  8. James says:

    Related to proxies, what is the resolution of the various proxies? I always hear (mostly from skepticalscience.com) about how proxies show that we’ve never seen as rapid a temperature rise as we have in the last century anywhere in the historical record. My impression, though, is that there’s not enough resolution in the proxies at the sub-centennial scale. Is this true? Can someone help shed some light on this for me? Thank you!

  9. rgbatduke says:

    All you are really pointing out is that tree rings in particular make lousy proxies, because tree growth rates are highly multivariate and because any process you use to include or exclude specific trees on the basis of IMAGINED confounding processes are open opportunities for undetectable confirmation bias to creep into your assessment. You can only reject trees if you think you know the answer they are supposed to be providing, outside of the usual statistical process of rejecting extreme outliers. But one of the problems with Bayesian reasoning in this context is that one man’s Bayesian prior can all too easily become another man’s confirmation bias that prejudices a particular answer. One has to have a systematic way of reassessing the posterior probabilities based on data.

    But data is what this approach can never obtain. We cannot ever know the temperatures in the remote, pre-thermometric past. Hell, we can barely assess them now, with thermometers! One could do a multi-proxy analysis, using things like O18 levels that might be a completely independent proxy with independent confounding errors to improve Bayesian confidence levels, but I’ve always thought “dendroclimatology” is largely nonsense because of my work on random number generator testers (dieharder).

    Here’s an interesting question. Once upon a time, before computer generation of pseudorandom numbers became cheap and reliable in situ, books like the CRC handbook or Abramowitz and Stegun (tables) often included pages of “tested” random numbers for people to use in Monte Carlo computations done basically by hand. Even into the 90’s, one of the premier experts on random number generators and testing (George Marsaglia) distributed tables of a few million “certified” random numbers — sets that passed his diehard battery of random number generator tests — along with the tests themselves on a CD you could buy. What is wrong with this picture?

    Random number generators are tested on the basis of a pure (null) hypothesis test. One assumes that the generator is a perfect generator (and that the test is a perfect test!), uses it to generate some systematically improvable/predictable statistic that can be computed precisely some other way, and then compute the probability of getting the answer you got from using the RNG if it were a perfect RNG. In case this is obscure, consider testing a coin, presumed to be 50-50 heads and tails. If we flip the coin 100 times and record the number of heads (say) we know that the distribution of outcomes should be the well-known binomial distribution. We know exactly how (un)likely it is to get (say) 75 heads and 25 tails — it’s a number that really, really wants to be zero. If we have a coin that produces 75 heads and 25 tails, we compute this probability — known as the p-value of the test — and if it is very, very low, we conclude that it is very, very unlikely that a fair coin would produce this outcome, and hence it is very, very unlikely that the coin is, indeed, the unbiased coin we assumed that it was. We falsify the null hypothesis by the data.
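
    The coin-flip arithmetic above can be checked exactly. A minimal Python sketch (the 75-heads figure is the one from the comment; `binom_tail` is just an illustrative helper name):

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability that a fair coin shows 75 or more heads in 100 flips.
p_value = binom_tail(100, 75)
print(p_value)  # on the order of 1e-7 -- "really, really wants to be zero"
```

    At roughly 3 in 10 million, a fair coin essentially never produces that outcome, which is why the null hypothesis gets rejected.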

    Traditionally, one sets the rejection threshold to p = 0.05. There isn’t the slightest good reason for this — all this means is that a perfect coin will be falsely rejected on average 1 time out of 20 trials of many samples, no matter how many samples there are in each trial. A similar problem with accepting a result if it reaches at least a p-value of 0.05 “significance” plagues medical science as it enables data dredging, see:

    http://xkcd.com/882/

    which is an entire education on this in a single series of cartoon panels.

    However, there is a serious problem with distributing only sets of random numbers that have passed a test at the 0.05 level. Suppose you release 200 such sets — all of them pass the test at the 0.05 level, so they are “certified good” random numbers, right? Yet if you feed all 200 sets into a good random number generator tester, it will without question reject the series! What’s up with that?

    It’s simple. The set is now too “random”! You’ve applied an accept/reject criterion to the sets with some basically arbitrary threshold. That means that all the sets of 100 coin flips that have just enough heads or tails to reach a p-value of 0.04 have been removed. But in 200 sets, 8 of them should have had p-values this low or lower if the coin was a perfectly random coin! You now have too few outliers. The exact same thing can be understood if one imagines testing not total numbers of heads but the probability of (say) 8 heads in a row. Suppose heads are 1’s and tails are 0’s. 8 heads in a row is something we’d consider (correctly) pretty unlikely — 1 in 256. Again, we “expect” most coin flips to have roughly equal numbers of heads and tails, so combinations with 4 0’s and 4 1’s are going to be a lot more likely than combinations with 8 0’s or 8 1’s.

    We are then tempted to reject all of the sets of flips that contain 6, 7, or 8 1’s or 0’s as being “not random enough” and reject them from a table of “random coin flips”. But this too is a capital mistake. The probability of getting the sequence 11111111 is indeed 1/256. But so is the probability of getting 10101010, or 11001010, or 01100101! In fact, the probability of getting any particular bit pattern is 1/256. A perfect generator should produce all such bit patterns with equal probability. Omitting any of them on the basis of accept/reject at some threshold results in an output data set that is perfectly biased and that will fail any elementary test for randomness except the one used to establish the threshold. This is one reason that humans make lousy random number generators. If you are asked to put down a random series of 1’s and 0’s on the page, or play rock-paper-scissors with random selection, you simply cannot do it. We aren’t wired right. We will always produce series that lack sufficient outliers because 11111111 doesn’t look random, where 10011010 does.
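
    The “too random” effect described above is easy to demonstrate by simulation. A sketch (the cutoffs of roughly ±2 standard deviations approximate the p = 0.05 window and are illustrative only):

```python
import random
import statistics

random.seed(42)

def heads_count(n=100):
    """Number of heads in n fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(n))

# "Certify" a set only if its head count is unexceptional: for n=100,
# the mean is 50 and the sd is 5, so [40, 60] is roughly the p = 0.05 window.
def passes(h):
    return 40 <= h <= 60

all_sets = [heads_count() for _ in range(5000)]
kept = [h for h in all_sets if passes(h)]

# A fair coin's head count has variance n*p*(1-p) = 25.  The censored
# collection has had its outliers stripped, so its variance is too low --
# exactly the "too few outliers" failure described above.
var_all = statistics.pvariance(all_sets)
var_kept = statistics.pvariance(kept)
print(var_all, var_kept)
```

    The uncensored collection shows a variance near the theoretical 25; the “certified” collection comes in noticeably lower, and a variance test would flag it.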

    With that (hopefully) clear, the relevance to data selection in any sort of statistical analysis should be clear. This is an area where angels should rightly fear to tread! Even the process of rejecting data outliers is predicated on a Bayesian assumption that they are more likely to be produced by e.g. spurious errors in our measuring process or apparatus than to be “real”. However, that assumption is not always correct! When Rutherford (well, really Geiger and Marsden) started bombarding thin metal foil with alpha particles, most passed through as expected. However, some appeared to bounce back at “impossible” angles. These were data outliers that contradicted all prior expectations, and it would have been all too easy to reject them as fluctuations in the apparatus or other method errors — in which case a Nobel prize for discovering the nucleus would have been lost.

    In the case of tree ring analysis, it is precisely this sort of accept/reject data selection on the basis of an arbitrary criterion that led Mann to make the infamous Hockey Stick Error in his homebrew PCA code — a bias for random noise to be turned into hockey sticks. Even participants in this sort of work acknowledge that it is as much guesswork and bias as it ever is science — usually off the record. In the Climategate letters, one such researcher laments that the trees in his own back yard don’t reflect the perfectly well known temperature series for that back yard, causing his own son to get a null result in a science fair contest (IIRC, it is some time since I read them:-). Then there is the infamous remark ON the record about the need to pick cherries to make cherry pie — to the US Congress!

    I spent 15 years plus doing Monte Carlo computations, which rely even now on very, very reliable sources of random numbers. Applying heuristic selection criteria to any data series with a large, unknown, unknowable set of “random” confounding influences in the remote past to “improve” the result, compared to just sampling the entire range of data and hoping that the signal eventually exceeds the noise by dint of sheer statistics, is trying to squeeze proverbial statistical blood from a very, very hard no-free-lunch stone. Chances are excellent that your criterion will simply bias your answer in a way you can never detect and that will actually make your answers systematically worse as you gather more data relative to the unknown true answer.

    rgb

  10. Pamela Gray says:

    Proxies based on solar metrics may also find themselves calibrated against a published temperature paper that has morphed from a gold standard into something questionable and possibly unreliable. But this should be seen as part of the scientific process and should not reflect poorly on the authors of such papers. There have been many examples in the past where the understanding of the time was accepted, only for that understanding to nearly stand on its head decades (or even a few years) later; yet those papers were not pulled and can still be read today. Which is the way it should be. The fact that tree-ring and other proxies are now being questioned, and temperature observations adjusted up or down, is the norm in the history of scientific advances and paradigm shifts rather than the exception, and the record of how these things happened should remain in the journals instead of being removed.

    Which reminds me of a very important step in defensible research. Do your literature review very thoroughly. That vetting process should not be quickly dispatched lest you find yourself basing your entire work on out of date information or somewhat paradoxically, current science fads that will eventually go down the same path.

  11. Robert of Texas says:

    There is a related issue I would like to hear someone address:

    Proxies such as Tree Ring measurements have multiple confounding factors: Temperature, Water Availability, Sunlight, CO2 Availability, Nutrient Availability (other than CO2), other stress factors (pests, disease, early winter). There may be others.

    So not only does the baseline move, but how do you assign the growth of a ring to all of these factors (and probably more I didn’t think of)? Each of these factors may change year to year or decade to decade. I just do not understand how you untangle them without introducing bias.

  12. Bob Kutz says:

    No, I think the part where dendro-chronology falls off the rails has nothing to do with revising data. The fact that our proxy goes completely the wrong way for about the last 50 years (aka “hide the decline” in the original context of ‘Mike’s Nature trick’) means that this particular proxy should be completely disregarded until such time as the difference can be reconciled.

    THAT, at least, shouldn’t be hard for anybody to understand.

    Too bad the media completely glossed over Muller’s real comments on that point in favor of his ‘vindication’ of the historical temp. data.

  13. Stark Dickflüssig says:

    BioBob says: June 13, 2014 at 8:07 am

    - It is important to realize that field temperature readings are themselves proxies of particle velocity or kinetic energy.

    – In addition, the unit scales (Celsius, Kelvin, etc.) employed are also proxies for reality.

    – calibration is also a proxy for ‘accuracy’, since precision and the limits of observation make the resulting readings a ‘fuzzy’ probability cloud rather than a single value.

    The first two are true, but they have the advantage of being replicable (ie, I can build a thermometer in my kitchen, calibrate it to the freezing & boiling points of water, & I will be very close to everyone else’s thermometers), unlike the paleo-proxies, which have to simply be trusted.

    The third point is a great big “So what? That’s life, get used to it.”

  14. pouncer says:

    RGB, Plus One, as usual.

  15. Ron C. says:

    In Soviet Russia, they used to say: “The future is known, it is the past that keeps changing.”

  16. joe says:

    From the article above: “Furthermore, at some of these sites, emergent subpopulations of trees show negative growth trends with rising temperature. These findings are typically referred to as the “Divergence Problem” (DP). ”

    This is not a divergence problem – this is basic biology. All plants have optimum growing ranges – a bell curve: too cold and plants grow slowly; as it warms, plants grow faster until reaching optimum growth; as it gets even warmer, growth slows down. An important question for the dendro experts is how you can tell the difference, along with all the other factors – light, nutrition, rainfall, etc.

    Virtually all plant species have geographical ranges. There is a reason plants growing in northern latitudes don’t grow well in southern latitudes, and vice versa for plant species growing in southern latitudes.

    Is the Yamal/Ural divergence problem due to getting too warm? I don’t know.
    Are the proxies that are not picking up the MWP due to it getting too warm and therefore having slower growth?

  17. lemiere jacques says:

    using a proxy means you are making an assumption.
    being able to assess a proxy means you didn’t need it.

    well, if you have several independent proxies you can begin to work more seriously.

  18. Stanleysteamer says:

    I have not seen a discussion about sampling technique and sampling bias. I teach a basic statistics course and have some insight into these problems when associated with any study. Personally, I do not think that taking a few trees in the Northern Hemisphere constitutes a valid sampling technique. In addition, a researcher must be very careful to extrapolate conclusions beyond the region in which the samples were taken. From my perspective, the only valid data we have for the entire planet is the satellite data and we just don’t have enough of it to be drawing firm conclusions about anything. Maybe an expert can weigh in on sampling.

  19. Gunga Din says:

    My conclusion? There’s more than one bug in Mann’s tree rings.

  20. JeffC says:

    at a minimum every time the data is “adjusted” any modeling done using said data becomes invalidated and must be rerun … if the modeler used hindcasting to tune his model he would have to retune the model with the new historic data and rerun the model for future forecasts …

  21. BioBob says:

    Stark Dickflüssig says: June 13, 2014 at 8:34 am The third point is a great big “So what? That’s life, get used to it.”
    ——————————-

    So what? Every AGW graph, every temperature reading I have ever seen ignores point 3. Liquid-in-glass thermometers typically have a plus-or-minus 0.5 degree F limit of observability reported by the manufacturer, and yet weather stations that employ such devices report temperatures supposedly precise to 0.01 or 0.001 degrees, rather than to the nearest degree F that the instrument can actually discern.

    That’s what Stark. Read em and weep for “life as we do NOT know it”. The central limit theorem (which likely does not apply in any case) concerns variance, not instrument limitations.

  22. vukcevic says:

    By comparing two very respectable proxy-based science reconstructions and one observational-data calculation, good agreement is attained (HERE).
    A provisional conclusion could be that the accord is not coincidental, or at least unlikely to be.

    a – …annual band counting on three radiometrically dated stalagmites from NW Scotland, provides a record of growth rate variations for the last 300 years. Over the period of instrumental meteorological records we have a good historical calibration with local climate (mean annual temperature/mean annual precipitation), regional climate (North Atlantic Oscillation) and sea surface temperature (SST; strongest at 65-70°N, 15-20°W)….- Baker, A., Proctor, C., – NOAA/NGDC Paleoclimatology Program, Boulder CO, USA.
    b – ….. observational results indicate that Summer NAO variations are partly related to the Atlantic Multidecadal Oscillation.Reconstruction of NAO variations back to 1706 is based on tree-ring records from specimens collected in Norway and United Kingdom …. – C. Folland, UK Met Office Hadley Centre.
    c – …. Solar magnetic cycles (SIDC-SSN based) & geomagnetic variability (A. Jackson, J. Bloxham data) interaction. Calculations – vukcevic

  23. BioBob says:

    Stanleysteamer says: June 13, 2014 at 9:33 am From my perspective, the only valid data we have for the entire planet is the satellite data
    ———————-

    Even sat data has its problems or we would not have things like this:
    http://wattsupwiththat.com/2010/10/04/an-over-the-top-view-of-satellite-sensor-failure/

    example from the post:
    “The U.S. physicist agrees there may now be thousands of temperatures in the range of 415-604 degrees Fahrenheit automatically fed into computer climate models and contaminating climate models with a substantial warming bias. This may have gone on for a far longer period than the five years originally identified.”

  24. BioBob says:

    Stark Dickflüssig says: June 13, 2014 at 8:34 am I can build a thermometer in my kitchen, calibrate it to the freezing & boiling points of water, & I will be very close to everyone else’s thermometers
    —————–
    LOL – when pigs fly.

    Build your thermometer:
    1) demonstrate how it’s response is linear between -200 c to +100 c or whatever range you use,
    2) determine the limits of the changes it can reliable discern (limits of observation)
    3) let me know how you define “very close” when identical machine produced devices placed in identical seeming Stevenson screens at identical heights, etc vary significantly with age, etc etc.

    In short bullcrap !!

  25. Peter Miller says:

    Ron C, I hope you know that saying is the First Rule of Mann – The future is known, it is the past which keeps changing.

    The gatekeepers of the earth’s ground temperature records obviously believe the same.

  26. Mi Cro says:

    The fork in tree proxies for me is that I planted 10 or 12 trees of the same age/size 14 years ago in my yard. Most of them are about the same size, but I have the largest (+10/20%) right next to the smallest (-10/20%). I know why they grew differently (water), but once the trees were turned into lumber (IIRC, where the oldest proxies came from), there’s no way you’d know.

  27. Kev-in-Uk says:

    In my humble opinion, ALL proxies must be viewed with extreme caution. For a multitude of reasons, but mostly because
    a) it is very unusual for a proxy measurement to be in effect a remote measurement of a single parameter itself determined by a single factor/parameter (I can’t think of one offhand). Hence, all proxies are based on assumptions of other factors (often many, as in tree ring width!).
    b) the accuracy of the ‘proxy’ measurement itself – i.e. the physical analysis and measurement of the proxy – e.g. isotope analysis – for such micro detections – accuracy (and therefore the deduced proxy effect) is paramount.
    c) the physical issue of the proxy itself – e.g. take ice cores, where we have the additional assumption that the trapped air is indeed ‘trapped’ and has not been altered since trapping or cross contaminated with adjacent ‘bubbles’, etc. Is there an error in there? How would we know?
    d) the cross comparison or calibration (if you like) with modern measurements. How do we know the modern measurement and calibration is realistic? In truth, we simply cannot ‘know’ for such long term things as climate assumptions and unless humankind lives on for another few millennia, it is unlikely we will ever be able to check our calibrations!

    When you add all these potential errors together, it is clear that they could combine very badly and give extremely misleading results, or at least a ‘value’ with very large error bars! The primary scientific assumption – with which I do not agree – is that any errors will even out over the dataset. For example, I consider it a bit wild to assume that the modern calibration for isotope analysis of ice cores from since the last ice age should be assumed/applied for much older times, e.g. before the last ice age!

  28. Phil. says:

    “In the case of tree ring analysis, it is precisely this sort of accept/reject data selection on the basis of an arbitrary criterion that led Mann to make the infamous Hockey Stick Error in his homebrew PCA code — a bias for random noise to be turned into hockey sticks.”

    No, that only happened if you use Monte-Carlo data with an extremely high autocorrelation parameter. The NRC had to use AR1(.9) to get that result.
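
    For readers unfamiliar with the notation, AR1(.9) means lag-one autocorrelated (“red”) noise with persistence parameter 0.9 — each value is mostly a carryover of the previous one. A minimal sketch of generating and checking such a series:

```python
import random

random.seed(0)

def ar1(n, phi):
    """AR(1) 'red noise': x[t] = phi * x[t-1] + white noise."""
    x = [random.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0.0, 1.0))
    return x

def lag1_corr(x):
    """Sample lag-one autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

series = ar1(2000, 0.9)
print(round(lag1_corr(series), 2))  # close to 0.9
```

    Such strongly persistent noise wanders in long excursions, which is why it is far more prone than white noise to producing spurious trends under a selective screening procedure.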

  29. Steve McIntyre says:

    As someone that’s spent a lot of time on proxy data, I don’t regard adjustments to temperature data as an important issue in proxy reconstructions. Or even a minor issue.

  30. phi says:

    joe,

    “Is the yamal ural divergence problem due to getting too warm?”

    No. This is a problem with climatologists and their strange data processing. The divergence of dendros is a myth.

    http://imageshack.us/a/img21/1076/polar2.png

  31. basicstats says:

    Proxy construction is another area where climate research hopes that combining/averaging a number of mediocre estimators might improve the accuracy of the resulting estimate. Of course when the proxies are highly correlated, reduction in the variance/dispersion of the average or other linear combination will be small. Laws of large numbers (LLNs) require something close to independence (or else low correlation) between the things being averaged. A point rgbatduke often makes about averaging GCM simulations.
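
    The point about correlated proxies can be made concrete with the standard formula for the variance of a mean of n equally correlated, equal-variance estimators (a sketch; `var_of_mean` is an illustrative helper name):

```python
# Variance of the mean of n estimators, each with variance sigma2 and
# pairwise correlation rho:  Var(mean) = sigma2 * (1/n + (1 - 1/n) * rho).
# With rho > 0 the variance never drops below rho * sigma2, no matter
# how many proxies are averaged together.
def var_of_mean(sigma2, n, rho):
    return sigma2 * (1.0 / n + (1.0 - 1.0 / n) * rho)

print(var_of_mean(1.0, 100, 0.0))  # independent: averaging helps a lot
print(var_of_mean(1.0, 100, 0.7))  # highly correlated: it barely helps
```

    With zero correlation, averaging 100 series cuts the variance by a factor of 100; with a correlation of 0.7, the variance floors at about 0.7 of a single series, however many are added.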

  32. Bruce says:

    My understanding follows, taking tree rings as an example:
    If we know what the temperatures were over a period of time and, further, if we know the amount the tree rings expanded over that time, we can ascribe (calibrate) a certain amount of tree expansion to a change in temperature.
    So it is assumed, ceteris paribus, that we may infer the ambient temperature from tree rings at a time when no temperature record exists.
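
    Bruce’s description amounts to a simple linear calibration followed by inversion. A sketch with made-up numbers (purely illustrative, not real data):

```python
# Hypothetical numbers, purely for illustration: ring widths (mm) and
# instrumental temperatures (deg C) observed in the same years.
widths = [1.1, 1.3, 1.2, 1.5, 1.6, 1.4, 1.8, 1.7]
temps = [10.2, 10.8, 10.5, 11.4, 11.9, 11.0, 12.5, 12.1]

n = len(widths)
mw = sum(widths) / n
mt = sum(temps) / n

# Ordinary least squares over the calibration period: temperature
# modelled as a + b * width.
b = sum((w - mw) * (t - mt) for w, t in zip(widths, temps)) / sum(
    (w - mw) ** 2 for w in widths
)
a = mt - b * mw

# "Reconstruct" the temperature for a pre-instrumental ring of 0.9 mm,
# assuming (ceteris paribus) the same relationship held back then.
t_est = a + b * 0.9
print(round(t_est, 2))
```

    Everything hangs on that ceteris paribus clause: the regression recovers a temperature only if width responded to nothing but temperature, which is exactly what the confounding-factor comments above dispute.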

  33. Steven Mosher says:

    “Is the yamal ural divergence problem due to getting too warm? I dont know.
    Are the proxies that are not picking up the MWP due to it getting too warm and therefore having slower growth?”

    The divergence has been variously explained.
    I recall one study that put some of the issue down to temperature adjustments
    (need to verify)
    More recently
    http://www.bishop-hill.net/blog/2014/5/8/divergence-problem-solved-allegedly.html

  34. DeNihilist says:

    Jez, I wish Jim Bouldin would join this discussion. He has become an outlier in the paleo world hisself.

  35. Stark Dickflüssig says:

    BioBob says: June 13, 2014 at 10:40 am

    1) demonstrate how it’s response is linear between -200 c to +100 c or whatever range you use,

    Demonstrate the difference between a possessive & a contraction, you illiterate goober.

  36. tgasloli says:

    Botanists and paleobotanists have spent the past 30 years telling the “climate scientists” that you can’t use tree rings as a proxy for temperature. The annual growth-ring size depends on too many factors for it to serve as a temperature proxy. I am surprised to see this website further the scientific nonsense of tree rings as temperature proxies. You should know better.

    You also really didn’t answer the question of how you can calibrate any proxy with only 150 years of poor quality temperature data. The simple fact is, they really aren’t calibrated; they are used to define temperatures that are outside the range available in the 150 years of poor quality data. Most of the proxies are really just bench tests (unverified by real world data), extrapolations beyond the range, and “educated guess work”.

    In short, there is very little actual science in “climate science.”

  37. Willis Eschenbach says:

    Well, now, that’s odd. I took a look at the Esper-Frank 2008 study linked to above. They compared tree rings and observational data. They say the stations they used were:

    Stations include Bernina Pass (Ber), Bever (Bev), Buffalora (Buf ), Samedan (Sam), Sils Maria (Sil), Station Maria (Stm).

    I think that what they call “Station Maria” is a misreading of the Santa Maria station, which is abbreviated “Sta. Maria”. And I’ve located Samedan and Buffalora in the Berkeley Earth dataset.

    However, none of those three have more than a few decades of data, and I can’t locate the other ones in either the GISS or the Berkeley Earth dataset (Switzerland station map).

    Anyone have any guesses about why this might be? The authors show data from ~1960 for their stations. I can’t find it.

    w.

  38. Follow the Money says:

    I will say this discussion is steps above IPCC science. The trees are limited here to “alpine” and northern treeline specimens, whose rings, even pre-Kyoto, were thought to be maybe partially related to summer temperature or summer season length. Remember, partially, not even mainly. However, IPCC science, perhaps I should specify as Australian Climate Science, uses studies (mostly Australian) that almost any old tree in Australia and NZ is a tree-mometer. What is it about Australia? These also show up in AR5 S. Hemisphere multi-proxy thingeroos.

  39. Willis Eschenbach says:

    Steve McIntyre says:
    June 13, 2014 at 12:25 pm

    As someone that’s spent a lot of time on proxy data, I don’t regard adjustments to temperature data as an important issue in proxy reconstructions. Or even a minor issue.

    Thanks for that, Steve. Any ideas on my question immediately above?

    Also, it seems to me that whether adjustments are important in proxy reconstructions depends on how they’ve calibrated their reconstruction. Esper and Frank appear to have used the Luterbacher temperature reconstruction to calibrate their results. As a result, it seems that how that reconstruction was created, including adjustments to the stations, could have an effect on their results. The problem from my perspective is that the overall trend in the proxy reconstruction is generally some linear function of the overall trend in the temperature data used for the reconstruction.

    w.

  40. I learned more than enough about proxies and statistics from “The Hockey Stick Illusion.” I consider proxies not far removed from tea leaves and Ouija boards. How can any reasonable person take the results seriously? And to “torture and molest” a line out of a huge cloud of proxy data and then plot that “result” with real instrument data from MLO, that’s not just sloppy data presentation; IMHO it’s plain, ordinary fraud!

  41. vukcevic says:

    RGB: tree rings in particular make lousy proxies, because tree growth rates are highly multivariate

    C. Folland from the UK’s Met Office used tree-ring records collected from specimens in the UK and Norway, suggesting the data should be a good proxy for the summer NAO (atmospheric pressure – related to both temperature and precipitation during the growing season). As it happens, it also appears to be an excellent proxy for solar/geomagnetic activity for the 1700-1850 period (see the second graph in my comment above, June 13, 2014 at 10:20 am). Alas, nothing is forever; over the following 150 years the correlation is only sporadic (non-stationary). As it happens, the Scottish stalagmites’ growth has a longer tolerable range. Just another natural coincidence, you might say.

  42. phi says:

    Willis Eschenbach,
    Sils Maria, homogenized data from 1864 :
    http://www.meteosuisse.admin.ch/files/kd/homogreihen/homog_mo_SIA.txt
    Apparently, Esper uses the raw series.

  43. Michael Moon says:

    That’s it, Global Dimming!

    “So the trees aren’t acting as thermometers over a significant fraction of the instrumental era.
    Ah yes it could be “increased CO2″, “global dimming”, “atmospheric nitrate deposition”.
    Virtually anything in fact.”

    Mosher, you are a hoot…

  44. Gary Pearse says:

    Using apriori reasoning:

    Wouldn’t an ancient forest have richer nutrients, particularly minerals, than the same forest a thousand years later? I know that virgin prairie soil grew crops like crazy for a few generations, and then nutrients had to be added annually.

    How is it possible to separate the effect of moisture limitations from the putative temperature signal? Wouldn’t a drought look like cold temperatures in the tree growth? Wouldn’t following years of adequate moisture look like warmer temperatures?

    Wouldn’t too much water reduce a tree’s growth?

    Would it not be better to look at several species at a time? If you had a forest of pine and spruce, but with some boggy areas that might support ash, willow or other water-loving trees, and you had a drought, wouldn’t you tend to find the ash affected most, the pine the most resistant, and the spruce in between? Wouldn’t the spruce tend to encroach on the ash bog as it dried?

    Do proxilitizers consider these types of questions?

  45. Steve McIntyre says:

    RE Mosher comment above – I tried to look at the data underpinning the argument supposedly linking divergence to global dimming. Unfortunately the global dimming data was password protected and I have thus far been unsuccessful in getting access to it. If I can’t get data, it’s hard to analyse the supposed linkage.

    Willis, I don’t know what question you’re talking about. Is it about station data? If it is, I don’t know.

  46. Pat Frank says:

    Steve Mc, as temperature data are used to calibrate proxy data, any magnitude uncertainty in the temperature record is immediately transferred to the proxy reconstruction. Accuracy is a major issue, quite independent of whether the proxy is actually a proxy, or whether the proxy record exhibits proper statistics.
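Pat Frank's transfer argument can be demonstrated directly: a systematic error shared across the calibration target shifts the regression intercept one-for-one, so it passes straight into every reconstructed value. A minimal Monte Carlo sketch (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

true_temps = rng.normal(10, 0.5, 100)               # calibration-period temperatures
proxy = 2.0 * true_temps + rng.normal(0, 0.3, 100)  # proxy responding linearly + noise

sigma_T = 0.2   # assumed systematic uncertainty in the temperature record
recons = []
for _ in range(2000):
    # One shared offset per trial: a biased adjustment, not random scatter.
    obs = true_temps + rng.normal(0, sigma_T)
    a, b = np.polyfit(proxy, obs, 1)                # calibrate against the biased record
    recons.append(a * proxy[0] + b)                 # reconstruct one value

# The spread of the reconstructions inherits sigma_T almost exactly.
print(np.std(recons))
```

Note the contrast with random per-year scatter, which averages down across the calibration window; a shared bias does not, which is why accuracy (not just precision) of the target record matters.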

  47. Kurt says:

    Maybe there are different issues here. My understanding (which may be wrong) is that the proxy data is initially calibrated or trained against a first subset of the instrumental record and then tested against a second, different subset as a verification step. This always smacked of something susceptible to cherry-picking to me, as I don’t recall reading anything about accepted standards for demarcating the boundary between the training set and the testing set. Someone could simply adjust that boundary (or the relative sizes of the sets) until the proxy trends matched the interval of the instrumental record set aside for verification. In other words, you just choose what percentage of the instrumental record is used to calibrate the reconstruction so that the reconstruction matches the remainder.

    Anyway, what happens if you have a proxy reconstruction that showed a good match against the portion of the instrumental record set aside as testing data when the reconstruction was made, but ten years later the instrumental record is “adjusted” in light of newly discovered biases or supposedly better statistical adjustment techniques, and the reconstruction no longer matches the instrumental record?

    Similarly, if the portions of the instrumental record against which the reconstruction was trained are later revised, does the proxy reconstruction have to be adjusted? And do you get to start all over again and select a new interval as the training data and a new interval as the testing data, to try to keep the reconstruction from changing too much?
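Kurt's worry, that the calibration/verification boundary is a free parameter, can be made concrete. The sketch below (entirely synthetic data, illustrative only) calibrates on the years before a chosen split and reports the verification correlation after it; moving the boundary moves the statistic:

```python
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(1850, 2000)
temps = 0.005 * (years - 1850) + rng.normal(0, 0.3, years.size)
proxy = 0.5 * temps + rng.normal(0, 0.2, years.size)   # an imperfect proxy

def verify(split_year):
    """Calibrate on years before split_year, report verification r after it."""
    cal = years < split_year
    a, b = np.polyfit(proxy[cal], temps[cal], 1)
    recon = a * proxy[~cal] + b
    return np.corrcoef(recon, temps[~cal])[0, 1]

# The verification statistic moves with the (freely chosen) boundary:
print({s: round(verify(s), 2) for s in (1900, 1930, 1960)})
```

Absent a pre-registered rule for placing the split, nothing stops an analyst from scanning split years until the verification score looks respectable, which is exactly the selection effect Kurt describes.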

  48. Latitude says:

    As someone who’s spent the greater part of my adult life trying to divine this crap…..all proxies are crap!
    Too many assumptions have to be made, too much data has to be ignored… and all of it is built on previous proxies that made the same assumptions and picked data the same way.

  49. Mike Jonas says:

    rgb – I find your explanation of random numbers and their pitfalls intriguing. Logically, it is impossible to examine a set of numbers and determine whether they are random. [Explanation: Start with a sequence of binary numbers, length one. You can't tell. Length two - no matter what the combination 00, 01, 10, 11 - you can't tell. Etc, ad infinitum]. So you can test a generator for randomness, but you can’t do that just by testing its output!

    On proxies – IMHO if you have to torture the data then what you end up with is of little or no value. I can’t see how random numbers can be relevant to proxies if you aren’t torturing – either the proxies have a clear theoretical basis and give a clear picture or they don’t. Re temperature, trees don’t have a clear theoretical basis (too many competing factors) so they are of little or no value as proxies. Period.

  50. Catherine Ronconi says:

    Steven Mosher says:
    June 13, 2014 at 1:30 pm

    Has anyone here read the awful offal on Yamal currently stinking up RationalWiki? It bears the smelly imprint of Connolley, to wit:

    “The Yamal controversy was an explosion of drama in the global warming blog wars that spilled over into the pages of mainstream newspapers. In the wake of Climategate, deniers latched onto a set of tree-ring data called the Yamal series that had been the topic of some of the leaked e-mails (after they were done squawking about “nature tricks” and “hiding the decline,” of course). The Yamal series refers to the tree-ring data taken from the Yamal Peninsula in Siberia by a team of Russian researchers, Hantemirov and Shiyatov, in the late ’90s. Hantemirov and Shivatov released more of their data in 2009 and Steve McIntyre jumped all over it, snarking:
    “”I’m assuming that CA readers are aware that, once the Yamal series got on the street in 2000, it got used like crack cocaine by paleoclimatologists, and of its critical role in many spaghetti graph reconstructions, including, most recently, a critical role in the Kaufman reconstruction.[1]

    “Keith Briffa, a climatoligist at the Climatic Research Unit (CRU) in East Anglia, had based a number of temperature reconstructions on a subset of the Yamal data. He claimed he had used a different methodology than Hantemirov and Shivatov because the original methodology didn’t preserve long-term climate change.[2] McIntyre accused Briffa of cherry-picking. Of course, it would be perfectly legitimate to criticize Briffa’s reconstruction and perform a new reconstruction on one’s own. However, McIntyre just downloaded some other unrelated Yamal dataset from the internet and chucked it into the original set.[3] Deniers, obviously, failed to care about this and the “Yamal is a lie!” claim shot through the deniosphere, with Anthony Watts picking up the story next.[4] It then found its way into the right-wing rags, with James Delingpole and others declaring that the “hockey stick” graph had been soundly “debunked.”[5][6]

    “However, Briffa’s Yamal reconstructions were only included in four of the twelve hockey stick reconstructions and even McIntyre criticized other deniers for blowing his “critique” of Briffa out of proportion and walked back his accusations of cherry-picking. Sure enough, both Briffa and a member of the original Russian team released full reconstructions using the previously unreleased data and the hockey stick shape returned, confirming Briffa’s original assertions.[7][8]

    “However, the incident was still missing something: That classic McIntyre hypocrisy. McIntyre had been whining for quite some time that Briffa had been blowing him off (gee, wonder why?). However, Briffa, even though he had a good excuse, hadn’t been stonewalling McIntyre — the complete dataset was under the control of the Russian team that had collected it. After Briffa notified him of this, McIntyre then flippantly replied he had had the data all along!
    “”In response to your point that I wasn’t “diligent enough” in pursuing the matter with the Russians, in fact, I already had a version of the data from the Russians, one that I’d had since 2004.[9]”

    Correct me if wrong, but doesn’t “Yamal” come down to a single tree, which somehow missed out on the now touted “global dimming” magically affecting all its neighboring trees?

  51. Philip Mulholland says:

    Well there is always this paper to study:-
    Libby, L. M. & L. J. Pandolfi. (1974) Temperature Dependence of Isotope Ratios in Tree Rings Proc. Nat. Acad. Sci. USA Vol. 71, No. 6, pp. 2482-2486.

  52. Bill Illis says:

    Do you core a tree from the left side, the right side, or the north side?

    Look at Michael Mann holding the tree cross-section above. It completely depends on the location you drill the core from. It probably changes foot by foot in height, and angle by angle around the whole tree, from top to bottom.

    Secondly, consider the bristlecone pines, which Mann relied on for his hockey stick. You cannot get a reliable core from these trees if your life depended on it. Only 10% of the tree is alive at any one time, and any core of the tree just gives a nonsensical result.

    Then one gets into age, precipitation versus temperature, nutrients, nearby trees blocking or not blocking the Sun, etc. Tree rings do not depend on temperature.

    I have no rationale for why the tree-ring-width methodology has been used at all.

    I can be convinced that C14 isotope dating data (to give an age of the ring) or O18 isotope temperature calibration (providing some measure of the temperature when the ring was formed) could provide some useful information. But ring-width and ring density is just faking up numbers.

    This is a field of science that would be described as moribund.

  53. richard verney says:

    I hate proxy ‘evidence’. At best a proxy can be used as a rough and ready indicator, but no proxy should be compared to another, let alone an attempt made to cut and splice.

    As regards the tree rings, I cannot recall the precise facts but seem to recall that Mann tuned the tree ring data to the thermometer temperature record between the early 1900s and about 1960, or the late 1800s and about 1960. One thing I do recall is that he found a divergence between tree rings and post 1960s temperatures. The tree rings suggesting that temperature was falling.

    If one looks at this another way, the post-1960s thermometer record suggested that temps were rising faster than one would expect from tree rings. Perhaps that is not surprising if the thermometer record became polluted by UHI (and/or possible station drop-outs and/or other inappropriate homogenisation adjustments).

    Perhaps the trees are telling us something of importance, namely that the temperature record post 1960 is suspect.

    One thing is clear: Mann was well aware from the divergence problem that one could not cut and splice the two proxy chains (tree rings and thermometers) together. He knew from this that either the tree proxy was wrong, or the thermometer record was wrong, or both were erroneous and unreliable.

    Mann had found something of interest (subject to the other issue regarding sampling integrity); it is a pity that he did not more honestly share his findings with the world. An honest scientist would have pointed out that one conclusion from his study was that perhaps the thermometer record post the 1960s is suspect and perhaps shows too much warming (an inference supported by the satellite temperature record).

  54. Michael Moon says:

    Mosher,

    If that did not sting, then you do not actually read the comment here directed to you, which would explain a lot!

    Decepticon…

  55. Willis Eschenbach says:

    Michael Moon says:
    June 13, 2014 at 10:33 pm

    Mosher,

    If that did not sting, then you do not actually read the comment here directed to you, which would explain a lot!

    Decepticon…

    Michael, I have absolutely no idea what you are talking about. You have not identified whatever you are calling “that”, as in “that did not sting”. You have not identified whatever it was that Mosher said that has your knickers in such a twist.

    As a result, your comment is both totally incoherent, and nothing but mudslinging.

    If you object to something Mosher said, QUOTE IT so we all can see what you are on about.

    w.

  56. Greg Goodman says:

    Esper et al. 2008

    Introduction
    The European Alps are one of the few places that allow comparisons of natural climate proxies, such as tree-rings, with instrumental and documentary data over multiple centuries.


    References
    Auer, I., and 31 Co-authors (2007): HISTALP – Historical instrumental climatological surface time
    series of the Greater Alpine Region. International Journal of Climatology 27: 17-46.
    Böhm, R., Auer, I., Brunetti, M., Maugeri, M., Nanni, T., Schöner, W. (2001): Regional temperature
    variability in the European Alps: 1760-1998 from homogenized instrumental time series.
    International Journal of Climatology 21: 1779-1801.

    ====

    “one of the few places…” While this appears to be the case, the temperature data they are using is the HISTALP series. The centennial variability in these data is _almost totally_ the result of “corrections”, specifically the “homogenisation” efforts of Böhm et al.

    Before “correction” they show very little increase over 250 years.

    Worse, both the Austrian and Swiss met services are very possessive of their “unhomogenised” data and make only the manipulated temperature records publicly available; the raw data requires a considerable payment (contrary to WMO rules) and a signed non-disclosure agreement.

    This means that Bohm et al’s adjustments are neither verifiable nor falsifiable. The non-disclosure agreement would also prevent any work contradicting or modifying their adjustments from being published in a verifiable way.

    This underlines the point O’Hara is making. The opacity of those jealously controlling the original data renders studies based on them, like Esper et al. 2008, equally unverifiable and sadly worthless scientifically.

  57. Gunga Din says:

    Willis Eschenbach says:
    June 13, 2014 at 2:17 pm

    Well, now, that’s odd. I took a look at the Esper-Frank 2008 study linked to above. They compared tree rings and observational data. They say the stations they used were:

    Stations include Bernina Pass (Ber), Bever (Bev), Buffalora (Buf ), Samedan (Sam), Sils Maria (Sil), Station Maria (Stm).

    I think that what they call “Station Maria” is a misreading of the Santa Maria station, which is abbreviated “Sta. Maria”. And I’ve located Samedan and Buffalora in the Berkeley Earth dataset.

    However, none of those three have more than a few decades of data, and I can’t locate the other ones in either the GISS or the Berkeley Earth dataset (Switzerland station map).

    Anyone have any guesses about why this might be? The authors show data from ~1960 for their stations. I can’t find it.

    ====================================================================
    You might try here.

    http://web.archive.org/web/20051214033741/http://data.giss.nasa.gov/gistemp/station_data/

  58. vukcevic says:

    Further up the thread, phi posted a link for the Swiss data.
    The data’s spectral response should cheer up Svensmark’s devotees.
    http://www.vukcevic.talktalk.net/SwissData.htm

  59. Kurt says:
    June 13, 2014 at 5:20 pm

    My understanding (which may be wrong) is that the proxy data is initially calibrated or trained against a first subset of the instrumental record and then tested against a second, different subset of the instrumental record as a verification step.
    ——————-

    That is a misunderstanding on your part, …. except maybe for proxy data pertaining to the past 50 years. Most all proxy data pre-dates all the thermometer based instrument records.

    Anyway, it is of my learned opinion that:

    Proxy records, thermometer based instrument records and sports statistic records are like three (3) peas in a pod ….. except that the sports statistic records are the only ones that are highly accurate and believable. But, the only real value for all three (3) record types are for “reference” data of past events …… and “heated” discussions of what “cudda” been, “shudda” been, “woudda” been or ”mighta” been.

    Sports statistics that are less than ten (10) months old are useful in “making a bet” on “who” will likely be the “winner” of the next event, ….. but none of the three (3) aforementioned “records” are worth a damn for making accurate predictions of future events.

    If anyone thinks otherwise, ….. then “bet-your-farm” today, ….. on which NFL team will win the next Super Bowl game (Jan 2015).

    Don’t be making bets on repeated “emergent phenomenon” occurring, …. the “deck” is stacked against you winning.

  60. Michael Moon says:

    Willis, well, clearly I was referring to Mosher’s claim that Global Dimming explains the Divergence Problem. This is ludicrous, absurd, sense-free: Global Dimming from aerosols could not explain rising temps and thinning tree-rings. Try to keep up.

  61. Kurt says:

    Samuel C Cogar says:
    “That is a misunderstanding on your part, …. except maybe for proxy data pertaining to the past 50 years. Most all proxy data pre-dates all the thermometer based instrument records.”

    Tree-ring and other proxy data may begin long before the instrumental record begins, but the data ends at the date you take the sample, which means there should always be a period of overlap with which to calibrate the proxy data to the instrumental record. A core from a 400-year-old tree, for example, will always have rings corresponding to the instrumental record and some rings going back before that.

  62. Greg Goodman says:

    “Anyone have any guesses about why this might be? The authors show data from ~1960 for their stations. I can’t find it.”

    Sta Maria is in the Swiss part of HISTALP.

    They are even more secretive with their temperature records than with the identities of their numbered bank accounts.

    You probably need a personal recommendation from Dr. “poor Phil” Jones to get the data, and a signed letter from your mum saying you won’t let anyone else see it.

  63. Willis Eschenbach says:

    Michael Moon says:
    June 14, 2014 at 1:15 pm

    Willis, well, clearly I was referring to Mosher’s claim that Global Dimming explains the Divergence Problem.

    No, Michael, without a quote or a citation that was not clear in the slightest.

    This is ludicrous, absurd, sense-free: Global Dimming from aerosols could not explain rising temps and thinning tree-rings.

    Given our current state of understanding of the climate, and particularly the poor understanding of the indirect effects of aerosols, I’d be cautious about claims that you can tell us what aerosols can and can’t do …

    Try to keep up.

    Michael, when you don’t cite, quote, or in any way indicate which of the hundreds of comments you are talking about, the subject may be perfectly clear to you. Your assumption that it is equally clear out here is a joke. It’s not all about you. We don’t follow your comments with bated breath, and many of us may not have even seen the comment you are referring to.

    As a result, your nasty parting shot is unpleasant, unnecessary, and untrue. The person who isn’t keeping up their end is you, my friend. If you don’t clearly identify what you’re talking about, that’s your fault, not mine.

    w.

  64. Willis Eschenbach says:

    phi says:
    June 13, 2014 at 3:04 pm

    Willis Eschenbach,
    Sils Maria, homogenized data from 1864 :
    http://www.meteosuisse.admin.ch/files/kd/homogreihen/homog_mo_SIA.txt
    Apparently, Esper uses the raw series.

    Thanks, phi. Unfortunately, the Swiss only make data from 14 stations available for free.

    Also, I looked at HISTALP as someone suggested … it covers everywhere but Switzerland.

    Gotta love a nation of bankers …

    Net result? Since Esper et al. haven’t archived their data and the Swiss want money for their data, it’s just advertisement and not science.

    w.

  65. Tom O’Hara’s question is a reasonable one, but I would partially agree with Steve McIntyre’s June 13, 2014 at 12:25 pm comment:

    As someone that’s spent a lot of time on proxy data, I don’t regard adjustments to temperature data as an important issue in proxy reconstructions. Or even a minor issue.

    I partially agree for several reasons:
    1. The net effect of most of the current adjustments (e.g., NOAA NCDC, NASA GISS) to regional/global trends is quite modest (~0.1°C/century).

    2. The “calibration procedure” used for most of the global temperature proxy reconstructions is quite crude. For example, with the “Composite-plus-scale” (CPS) approach, proxy series are simply re-scaled to have the same mean (“average”) value and variance (“range”) as the instrumental record over the calibration period. The instrumental adjustments used by NOAA NCDC, NASA GISS etc. don’t majorly change these two particular values.

    So, a proxy series calibrated to unadjusted data will be fairly similar to one calibrated to data “homogenized” using the NCDC/GISS adjustments.

    At any rate, the differences between an “unadjusted data” calibration versus an “adjusted data” calibration would often be smaller than the differences between different proxy series!

    3. Rescaling a proxy series in this way doesn’t alter the relative “warmth”/”coolness” of different periods, i.e., the Medieval Warm Period/Little Ice Age/Current Warm Period (MWP/LIA/CWP) ratios will be the same regardless of what mean/variance you rescale to.

    4. As it is, the proxy reconstructions aren’t particularly great at reproducing the instrumental record. For instance, many reconstructions are plagued by the so-called “divergence problem” and also the “convergence problem”, as we discuss in Section 2 of our “Global temperatures of the last millennium” paper, which we have submitted for open peer review to our OPRJ website: http://oprj.net/articles/climate-science/16
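Point 3 above follows because the CPS rescaling described in point 2 is an order-preserving affine map: whatever mean and variance you match (adjusted or unadjusted), the ranking of warm and cool years is untouched. A toy sketch (invented numbers, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
proxy = rng.normal(0.0, 1.0, 300)        # a proxy series in its own units

def cps_rescale(series, target_mean, target_std):
    """Composite-plus-scale: force the series to the target's mean and variance."""
    return (series - series.mean()) / series.std() * target_std + target_mean

unadjusted = cps_rescale(proxy, 14.0, 0.25)   # calibrated to "raw" target stats
adjusted = cps_rescale(proxy, 14.1, 0.30)     # calibrated to "adjusted" target stats

# Affine and order-preserving: warmest/coolest years rank identically either way.
print((np.argsort(unadjusted) == np.argsort(adjusted)).all())
```

So an instrumental adjustment can shift the level and amplitude of a CPS reconstruction, but it cannot reorder the MWP/LIA/CWP relative to one another, which is the sense in which the adjustments are a second-order issue here.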

    By the way, Steve, have you had a chance to read our paper yet? We reference your work quite extensively there, so would particularly welcome your feedback (whether positive or negative).

    —-

    Having said that, I do not think that simply “rescaling” a proxy series and treating it as a perfect “thermometer” is a good idea! While many paleoclimatologists are careful to stress that their proxies are not to be treated as exact “thermometer records”, this is effectively what happens when people start treating global temperature proxy reconstructions, such as the “hockey stick graphs” as reliable replacements for instrumental data.

    Also, as I have written elsewhere, I do not think the current NCDC/GISS adjustments are adequate.

    In particular, we have found that:
    1.) The NCDC homogenization algorithm (i.e., Menne & Williams, 2009) is very poor at removing urbanization bias or the siting biases that Anthony and the Surfacestations crew have identified.
    2.) There are serious flaws in the GISS urbanization bias adjustment algorithm, and the adjustments are often inadequate, inappropriate or just plain wrong.

  66. Kurt says:
    June 14, 2014 at 3:10 pm

    A tree ring for a 400 year old tree for example will always have tree rings corresponding to the instrumental (temperature) record, and some rings going back before that.
    —————-

    That is true, Kurt, …… but only iffen your “instrument” is a tape-measure or ruler. And Kurt,
    figuring that you will also disagree and/or discredit any further commentary of mine, ….. I therefore offer the following excerpted commentary for you to ponder the scientific value of … and in which I have denoted important segments via “bold-face” type.

    And ps, Kurt, as you are pondering the following commentary please keep-in-mind the FACT that ….. tree rings are only capable of POTENTIALLY corresponding to the instrumental (temperature) record for four (4) months out of each year (April thru July). Thus, guesstimated surface temperatures for four (4) months out of twelve (12) [1/3 year] is absolutely inadequate even for ESTIMATING …… Yearly Average Surface Temperatures. (I’ll explain my accusation if you wish.)
    —————–

    TREES AND THEIR TYPICAL AGES AND GROWTH RATES
    By: Thomas O. Perry, professor of tree physiology, School of Forest Resources, North Carolina State University, Raleigh, North Carolina

    ”How long do trees typically live and how rapidly do they grow? The following is a summary of data that may answer these questions.

    DATA FROM FORESTS

    Foresters tabulate the numbers of trees, their ages, and sizes for typical stands of many species. A review of these yield table data shows that Darwin’s laws of geometric ratio of the increase and survival of the fittest hold. Competition in the young forest stand is intense with tens of thousands of seedlings per acre struggling to survive and dominate the main canopy. A typical hardwood forest will contain 25,000 or more stems per acre at year 1; 10,000 stems per acre at year 5; 1,500 stems per acre at year 20 and fewer than 200 stems per acre at year 100. Thus, less than one tree per hundred will live a hundred years.

    Visits to virgin forests like those in the Olympic National Park in Washington reveal a similar pattern of mortality. Only the rarest trees in the park may be 1900 years old. The typical maximum age of trees in this virgin forest is between 200 and 600 years old and these trees are confined to narrow bands along the streams. Most of the dominant trees of this forest are less than 250 years old and are, as with typical forests, the product of a self-thinning process that eliminated the vast majority of trees before they were 20 years old

    The situation is no different in the Joyce Kilmer Memorial Forest in the Southern Appalachians. The big trees there are confined to a few sheltered coves that occupy fewer than 100 of the 3,000 acres in the forest. New trees die there every year and there is more dead wood on the ground than in the main canopy. The highest number of growth rings I have ever counted among the many trees that have fallen across the trails is 320. The size and age of the trees decreases rapidly as one leaves the moist streams and sheltered coves and goes upslope to where fires and wind play an active role in addition to the normal processes of competition. The oldest trees on the ridge tops are much twisted and penetrated by rots that developed after historic fires. The average maximum age of trees on the ridge tops is between 100 and 220 years. Again the typical tree in this virgin forest dies before it has lived 20 years.

    Rates of growth are highly variable in a crowded forest, and size bears little relationship to age. Trees growing without competition commonly attain diameters of 30 inches (76.2 cm.) or more by age 50 years while the same trees would attain diameters of only 3 inches (7.62 cm.) when growing in a crowded forest. Rates of growth vary radically with the depth of the soil and availability of moisture and oxygen for a given site. The size a tree achieves at a given age is made even more unpredictable when these environmental variables are combined with the variable effects of crowding. Knowledge of the tremendous variation involved makes foresters fall into embarrassed silence when challenged with the question “how old is that tree?”

    Competition, fires, wind, insects, rot, and other agents, but particularly competition of other trees combine to make life harsh and short for most trees of the forest.”

    The above excerpted from: http://www.ces.ncsu.edu/fletcher/programs/nursery/metria/metria01/m11.pdf

  67. Steve McIntyre says:

    Ronan, I quickly read your able and interesting article on proxy reconstructions. It’s gratifying that you’ve so clearly understood so many issues, particularly when so much commentary (on both sides of the aisle) has misunderstood the critique. I don’t know whether I have time or energy to comment on details.

    I urge readers interested in the topic to look at Ronan’s article and will try to cover it at some point at CA as well.

  68. Ian W says:

    rgbatduke says:
    June 13, 2014 at 8:25 am

    ……. Then there is the infamous remark ON the record about the need to pick cherries to make cherry pie — to the US Congress!……

    ======
    It is appropriate in this case that in Cockney Rhyming Slang – “Cherry Pie” is the slang for “Lie”.

  69. Steve Garcia says:

    In the given Abstract, as I read the context, the use of the term “dendrochronologists” should instead be “dendroclimatologists.” Dendrochronologists study AGE, not “sensitivity of tree growth to temperature”, and they do it by counting the tree rings.

    If I am correct, it gives me doubts that people who make such a careless error are doing good and solid work. I tend to AGREE with their premise, but I wonder how many more slip-ups there are in the work.

  70. richardscourtney says:

    Ian W:

    At June 15, 2014 at 11:00 am you assert

    It is appropriate in this case that in Cockney Rhyming Slang – “Cherry Pie” is the slang for “Lie”

    Sorry, but No.
    In Cockney Rhyming Slang – “Pork Pie” is the slang for “Lie”.
    Hence, in England the phrase ‘telling porkies’ means telling lies; it is an abbreviation of “telling porky pies”.

    Richard

  71. Kurt says:

    Samuel C Cogar says:
    June 15, 2014 at 10:16 am

    “That is true, Kurt, …… but only iffen your “instrument” is a tape-measure or ruler. And Kurt,
    figuring that you will also disagree and/or discredit any further commentary of mine, ….. I therefore offer the following excerpted commentary for you to ponder the scientific value of … and in which I have denoted important segments via “bold-face” type.”

    It looks like you have completely misunderstood my original post, and nothing you have presented has any relevance to the actual procedures used to associate width of tree rings to temperature. My information comes from “The Hockey Stick Illusion” which describes that procedure as one where the portion of the tree rings corresponding to the instrumental temperature record (meaning thermometers) is divided into a calibration period and a verification period. The calibration period tree rings are used to determine a mathematical relationship between ring width and average temperature. The verification period rings are used to test that mathematically derived association. Assuming that the verification step is satisfied, then the mathematical relationship is used to reconstruct temperatures prior to the instrumental record.

    The two points of my original post were (1) to express skepticism that this procedure accurately associates tree ring width to temperatures given that the boundary between the calibration period and the verification period seems to be an arbitrary one, so you can simply find the boundary that gives you the best match to the verification period and use that; and (2) to indicate that, assuming that this procedure were being used, and the instrumental record was subject to later change, then the accuracy of the whole reconstruction is questionable.
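    The calibration/verification procedure described above can be sketched in a few lines of Python. This is a toy illustration with made-up numbers – the split point, the linear fit, and the data are all assumptions for the sake of the example, not any study’s actual method:

```python
import numpy as np

# Made-up example data: 100 years where tree-ring widths overlap the
# instrumental record (all numbers here are illustrative, not real).
rng = np.random.default_rng(0)
temps = 0.01 * np.arange(100) + rng.normal(0, 0.2, 100)   # degC anomalies
widths = 1.0 + 0.5 * temps + rng.normal(0, 0.1, 100)      # ring widths, mm

# Split the overlap into a calibration period and a verification period.
cal, ver = slice(0, 67), slice(67, 100)

# Calibration: fit a linear width -> temperature relationship.
slope, intercept = np.polyfit(widths[cal], temps[cal], 1)

# Verification: test the fitted relationship against the held-out years.
predicted = slope * widths[ver] + intercept
r = np.corrcoef(predicted, temps[ver])[0, 1]
rmse = np.sqrt(np.mean((predicted - temps[ver]) ** 2))
print(f"verification r = {r:.2f}, RMSE = {rmse:.2f} degC")
```

    Shifting the `cal`/`ver` boundary and re-running shows how sensitive the verification statistics can be to that choice – which is the crux of point (1) above.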

    Your first response to my post alleged that I was “misinformed” and substantiated that allegation with the cursory assertion that nearly all proxy data represented time periods prior to the instrumental record. If you meant that only a small portion of the time series for any given proxy record overlapped with the instrumental record, your observation would be irrelevant as my post dealt with the procedures that used the period of overlap which did exist. If, on the other hand, you meant that most proxy data began and ended before the instrumental record, that would simply be wrong. In either circumstance, I don’t see how you had shown me to be “misinformed.”

    Your second response cites an article that is again irrelevant as it simply calls into question the assumption that tree ring width can be accurately correlated to temperature – an argument that in no way conflicts with what I have been saying. I happen to agree with the points in that article, and more generally in the idea that proxy data is unreliable or at least unverifiable. Maybe before you get snarky with someone, you should first take the time to understand what they are saying.

  72. Willis Eschenbach says:

    Ronan Connolly says:
    June 15, 2014 at 5:45 am

    Tom O’Hara’s question is a reasonable one, but I would partially agree with Steve McIntyre’s June 13, 2014 at 12:25 pm comment:

    As someone that’s spent a lot of time on proxy data, I don’t regard adjustments to temperature data as an important issue in proxy reconstructions. Or even a minor issue.

    I partially agree for several reasons:
    1. The net effect of most of the current adjustments (e.g., NOAA NCDC, NASA GISS) to regional/global trends is quite modest (~0.1°C/century).

    Thanks for that, Ronan. The US has one of the better temperature datasets … here are the adjustments to that record …

    The trend due solely to adjustments is about 0.5°C [struck through] 0.3°C per century …

    Also, while the regional/global adjustments may be smaller, a common procedure is to compare the proxy to the nearest station or stations … and their amount and even sign of adjustment could be anywhere on the map.

    2. The “calibration procedure” used for most of the global temperature proxy reconstructions is quite crude. For example, with the “Composite-plus-scale” (CPS) approach, proxy series are simply re-scaled to have the same mean (“average”) value and variance (“range”) as the instrumental record over the calibration period. The instrumental adjustments used by NOAA NCDC, NASA GISS etc. don’t majorly change these two particular values.

    Actually, what the CPS procedure does on average is to adjust the trend of the proxy during the calibration period to agree with the trend of the instrumental record over that period. As a result, a change in the instrumental record trend from some adjustment will be matched by an equal change in the proxy trend.

    However, there is a problem with this, which is the length of the reconstruction. The procedure you describe “pins” the recent end of the proxy reconstruction to the mean of the instrumental record, with the overall trend of the proxy adjusted so that the proxy trend during the calibration period matches the trend of the instrumental data.

    Now, let’s say we have a thousand-year proxy. Consider an adjustment in the instrumental trend of let’s say 0.2°C/century, a mid-range kind of number. It’s pinned at the near end and we’re adjusting the overall trend of the proxy. By the time we’ve gone back 10 centuries, the adjustment of 0.2°C/century in the instrumental data is reflected as a full 2°C of change in the temperature a millennium ago.
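    That back-of-the-envelope point can be checked directly. The flat proxy and the `pin_and_tilt` helper below are illustrative assumptions, not any published reconstruction's code – just the arithmetic of pinning the modern end and tilting the whole series:

```python
import numpy as np

years = np.arange(1000, 2001)          # a 1000-year reconstruction
proxy = np.zeros(years.size)           # a flat proxy, for clarity

def pin_and_tilt(proxy, years, inst_trend_per_century):
    """Hold the series fixed at its final year and impose the given
    trend (degC/century), as in the 'pinning' described above."""
    return proxy + inst_trend_per_century * (years - years[-1]) / 100.0

before = pin_and_tilt(proxy, years, 0.5)   # instrumental trend as published
after = pin_and_tilt(proxy, years, 0.7)    # trend raised by 0.2 degC/century

# The 0.2 degC/century adjustment appears as a full 2 degC shift at the
# far (year-1000) end of the reconstruction.
print(abs(after[0] - before[0]))
```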

    This is one of the reasons that I just laugh when I see the claimed uncertainty ranges for such reconstructions …

    Generally, I agree with Steve McIntyre that the adjustments aren’t much of an issue in proxy reconstructions. However, the only way to really find out would be to run the analysis both ways and measure the difference … I’ll leave that for someone else; in general, proxy reconstructions are a dry hole for me.

    Best regards,

    w.

  73. Nick Stokes says:

    Willis Eschenbach says: June 16, 2014 at 2:02 am
    “The trend due solely to adjustments is 0.5°C per century”

    Yet another misreading of the y-axis on that plot.

  74. Willis Eschenbach says:

    Nick Stokes says:
    June 16, 2014 at 2:53 am

    Willis Eschenbach says: June 16, 2014 at 2:02 am

    “The trend due solely to adjustments is 0.5°C per century”

    Yet another misreading of the y-axis on that plot.

    First time I’ve misread it, so your claim of “another” isn’t clear … in any case, thanks, fixed.

    w.

  75. Ronan Connolly says:

    Steve,
    Thank you for your kind words & encouraging reply! :)

    Willis,
    Thanks for your reply!

    The US has one of the better temperature datasets … here are the adjustments to that record

    I agree that the net USHCN adjustments are quite substantial. Greater than +0.3°C/century in fact! Specifically, Time of Observation (TOB) adjustments ≈ +0.19°C/century; Step-change adjustments ≈ +0.16°C/century [FILNET "infilling" apparently also introduces a slight warming trend]. However, for the non-USHCN component of the GHCN, the net adjustments are much less.

    Speaking of which, have you had the chance to read our “Urbanization bias III” paper yet? It’s another paper which we have submitted for open peer review at our OPRJ website: http://oprj.net/articles/climate-science/34. In Section 4, we assess the various adjustments applied to both the USHCN and the GHCN. In Figures 19 & 21, we provide a breakdown of the net gridded mean adjustments and in Figure 21, we split the figure you show into the TOB/step-change components.

    Actually, what the CPS procedure does on average is to adjust the trend of the proxy during the calibration period to agree with the trend of the instrumental record over that period. As a result, a change in the instrumental record trend from some adjustment will be matched by an equal change in the proxy trend.

    The description of the actual reconstruction method is unhelpfully opaque/terse in many publications, so you may well be right that some studies use the trend as their fitting parameter. But, as far as I know, the usual CPS method involves simply scaling the mean and variance of the proxy to match that of the instrumental record for the calibration period.

    Having said that, if you increase/decrease the variance and/or flip the sign of a proxy, this would usually alter the slope, i.e., the trend. So, what you’re saying about the trend of the instrumental record influencing the trend of the “calibrated” proxy is, in principle, correct.
    However:

    1. This does not alter the relative MWP:LIA:CWP ratios of a proxy (for instance).

    Admittedly, if you combine a short, high-variance proxy series with a long, low-variance series, this can alter the ratios – as Steve has illustrated for the Jones et al., 1998 NH estimate: http://climateaudit.org/2006/06/04/making-hockey-sticks-the-jones-way/. But, if all series have a similar variance, then the ratios should remain the same.

    2. On average, the current adjustments mainly alter year-to-year values, and don’t majorly alter the variance of the record. CPS simply rescales the proxies to match the overall mean & variance over the entire calibration period, so it’s mostly the average variance over the calibration period that counts…
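    For what it’s worth, the mean-and-variance re-scaling described above amounts to only a couple of lines. A minimal sketch, with made-up series and a hypothetical `cps_rescale` helper (not any specific study’s code):

```python
import numpy as np

def cps_rescale(proxy, instrumental):
    """Re-scale a proxy over the calibration period to have the same
    mean and variance (standard deviation) as the instrumental record."""
    return ((proxy - proxy.mean()) / proxy.std()
            * instrumental.std() + instrumental.mean())

# Illustrative calibration-period series (values are made up):
rng = np.random.default_rng(1)
inst = 0.005 * np.arange(150) + rng.normal(0.0, 0.15, 150)   # degC
proxy = rng.normal(0.0, 1.0, 150)                            # unitless

scaled = cps_rescale(proxy, inst)
print(scaled.mean() - inst.mean(), scaled.std() - inst.std())  # both ~0
```

    By construction the re-scaled proxy reproduces the instrumental mean and variance – nothing else about the instrumental record enters the “calibration”.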

    For the record, I think that the CPS approach is an inadequate “calibration” method for the proxies used in these reconstructions, and I think that the NCDC/GISS/Berkeley/etc. adjustments are inadequate (as we discuss in our 4 papers on the instrumental thermometer datasets). However, I agree with Steve that, as it stands, the current reconstructions shouldn’t be majorly influenced by the differences between the current adjusted instrumental records and the unadjusted instrumental records – the “calibration process” is too crude & the current adjustments are too small on average.

    By the way, thanks for your positive comment on our paper at OPRJ! I agree with your suggestions, and we had alluded to some of them, e.g., we provide a brief (albeit indirect) reference to Ababneh’s findings (p15, Ref B57). But, we will try and incorporate a more detailed discussion of your suggestions into version 0.2 of our paper.

    Your “cluster analysis” could be a useful technique for future reconstructions, and I’ll try to mention it. Perhaps in Section 3.4 where we discuss the lack of consistency between proxies (as cluster analysis should help identify similarities/differences between proxies)?

  76. Ronan Connolly says:

    Kurt,

    Several of the reconstructions split up the instrumental record into a “calibration period” and a “verification period”, as you say – in particular the Mann et al. studies. However, as we mention in Section 2.1 of our paper (link in my first comment above), many of the reconstructions don’t bother with a “verification period”.

    You are correct to point out that changing the “calibration period” can substantially alter the reconstruction. For instance, if you look at Fig S10 in the Supplementary Information on Mann’s website for Mann et al., 2008 (http://www.meteo.psu.edu/holocene/public_html/supplements/MultiproxyMeans07/SuppInfo.pdf), there are recognisable differences between the 1850-1949-calibrated vs 1896-1995-calibrated reconstructions.

    Having said that, Mann et al. used roughly 2/3 of their instrumental record for their calibration period, and made sure it was over a continuous period. So, there’s not an awful lot of room for “cherry-picking” there.

    Have you seen McShane & Wyner, 2011, though? They did an interesting analysis where they studied the effects of systematically varying the “verification”/”calibration” periods for the Mann et al., 2008 dataset. Their paper is paywalled, but if you don’t have access, there’s a pre-print available from arXiv here: http://arxiv.org/abs/1104.4002

  77. Samuel C Cogar says:

    Kurt says:
    June 15, 2014 at 7:34 pm

    Maybe before you get snarky with someone, you should first take the time to understand what they are saying.
    ———

    Kurt, this is the statement you made and the one I responded to, to wit:
    ================

    Kurt says: June 13, 2014 at 5:20 pm “My understanding (which may be wrong) is that the proxy data is initially calibrated or trained against a first subset of the instrumental record ………..
    ——————-

    Kurt, please cite me one (1) example of specific proxy data that WAS ever “initially calibrated against the instrument (thermometer) record”. Just one (1).

    Here is a graph of “Holocene Temperature Variations” which shows 12K years of plotted data from eight (8) different proxies, …… with an inset graph showing the most recent 2K years of plotted data from four (4) more different proxies. The “data sources” for all twelve (12) separate proxies are specified at the bottom of the article. To wit: http://commons.wikimedia.org/wiki/File:Holocene_Temperature_Variations.png

    Kurt, the proxy data …. is the proxy data.

    And the “instrument records” are just more of the same, ….. more per se proxy data.

    Why are you so damn sure that you are now “right” …. when you initially stated you “may be wrong”?

  78. phi says:

    Ronan Connolly,

    “However, for the non-USHCN component of the GHCN, the net adjustments are much less.”

    This raises the problem of the definition of adjustments. For example, BEST doesn’t adjust in this sense. This does not prevent BEST from adjusting massively at the aggregation stage.

    Quantification of adjustments depends greatly on the average length of the series. With short (GHCN) or cut (BEST) series, the very real adjustments at the aggregation stage are not recorded.

    We can give another definition of adjustments: the average shift applied relative to absolute temperatures. This definition obviously has a weakness related to a possible bias in altitudes. However, in cases using preferentially long series, adjustments lead to an artificial warming of about 0.5°C per century (see e.g. Begert et al., Fig. 4.29, p. 87, and Böhm et al., 2001).

  79. Samuel C Cogar says:

    On the subject of “the instrument (thermometer) record” …. I would like to see a plotted graph that depicts the number of “points” or places on the earth’s surface where thermometers were being used to record “daily surface temperatures” …. for each year … for the years from say 1650 to present.

    In other words, a graph that depicts the “# of” per se, active Temperature Stations …. per each year …. for the past 360 years.

    Such a graph would be excellent reference material for any one that is curious of the “data sources” that are used for calculating Global Average Surface Temperatures for each of the years inclusive of the aforementioned “time period”.

    And also would be excellent reference material for discussions on “adjustments to the instrument (thermometer) record”.

    Cheers

  80. Ronan Connolly says:

    Phi,

    Quantification of adjustments depends greatly on the average length of the series. With short (GHCN) or cut (BEST) series, the very real adjustments at the aggregation stage are not recorded.

    That’s actually one of the main reasons why the net adjustments for the GHCN are so small relative to the USHCN. The USHCN records are relatively long: ~93 yrs compared to ~44 yrs for the GHCN.

    As a result, the overlap between target stations and their neighbours is much less for the GHCN than the USHCN – see Table 5 (p28) of our Urbanization bias III paper.
    Also, the average distances between target stations and their neighbours is much greater for the GHCN.

    For these reasons, the Menne & Williams, 2009 algorithm is less effective at identifying potential breakpoints in the GHCN, and so the net adjustments are much less.

    Also NCDC never collected station history files/time-of-observation changes for the non-USHCN component, so they don’t bother with TOB adjustments.

    As for the Berkeley Earth (BEST) adjustments, we found in Section 4.8 of our Urbanization bias I paper that their “Scalpel” technique seems to introduce a warming trend of about +0.43°C/century relative to using the “Common Anomaly Method” (see Figure 30, p41). We suspect this is a consequence of them shortening the records too much. If so, it may be that their adjusted records are problematic for studying long-term trends.

    Samuel,

    I would like to see a plotted graph that depicts the number of “points” or places on the earth’s surface where thermometers were being used to record “daily surface temperatures” …. for each year … for the years from say 1650 to present.

    In other words, a graph that depicts the “# of” per se, active Temperature Stations …. per each year …. for the past 360 years.

    Have a look at Section 2 (p2-5) of our “Urbanization bias III” paper (link above) for the GHCN dataset. This is the main one used by NOAA, NASA and the Japan Meteorological Agency and is fairly similar to the CRU’s dataset.

    For the Berkeley Earth dataset (BEST), have a look at Figure 27 of our “Urbanization bias I” paper (link also above) on p40.

    Is that of any help?

  81. Kurt says:

    Samuel C Cogar says:
    June 16, 2014 at 8:27 am

    “Kurt, please cite me one (1) example of specific proxy data that WAS ever initially calibrated against the instrument (thermometer) record.”

    Every tree ring series would have to be calibrated against the thermometer record in order to associate any given width of a tree ring with some average temperature for the season. You have to calibrate width against something, after all, to figure out how to get temperature from a ring width. You don’t have to take my word for it – many of the posts in this thread confirm that proxy data is calibrated against the thermometer record. Willis Eschenbach says on June 16, 2014 at 2:02 am, for example, that in the “‘calibration procedure’ used for most of the global temperature proxy reconstructions . . . proxy series are simply re-scaled to have the same mean (‘average’) value and variance (‘range’) as the instrumental record over the calibration period.” There is a calibration period where proxy data overlaps the instrumental record, and the instrumental record is used to determine a relationship between tree ring width and temperatures. I’m not sure about the exact procedure – I’d guess that the ring widths are mapped to some kind of temperature–width curve to account for declining growth in tree rings as the tree ages, with the instrumental record for the region the tree was in used to figure out the best coefficients and/or exponents to minimize a mean square error or some other criterion, like the mean that Willis Eschenbach mentions above. But one thing you certainly have to do is use the thermometer record in some way to figure out how to correlate a tree ring width to temperature.

    “Why are you so damn sure that you are now “right” …. when you initially stated you “may be wrong”?”

    You ellipsed out the relevant portion of the quote from my original post. What I wasn’t sure about was whether the period during which a proxy series overlapped the instrumental record was divided into a calibration period and a verification period, and more particularly whether the person doing the reconstruction gets to just pick where to set the boundary between the calibration period and the verification period. Apparently I was only partially right, as Ronan Connolly just noted (thanks for that – I’ll be sure to read the article you linked when I have the chance): this procedure is only sometimes used, and when, for example, Mann et al. used it, they selected a dividing line that left a full 1/3 as the verification period with 2/3 as the calibration period – which means that there is significant data to both calibrate and verify the reconstruction.

  82. Willis Eschenbach says:

    Ronan Connolly says:
    June 16, 2014 at 2:33 pm

    As for the Berkeley Earth (BEST) adjustments, we found in Section 4.8 of our Urbanization bias I paper that their “Scalpel” technique seems to introduce a warming trend of about +0.43°C/century relative to using the “Common Anomaly Method” (see Figure 30, p41). We suspect this is a consequence of them shortening the records too much. If so, it may be that their adjusted records are problematic for studying long-term trends.

    Ronan, on another thread this week we discussed the fact that if you take a number of trendless sawtooth waves of different frequencies, and you subject them to the “scalpel” method, you’ll end up with a trend, despite the fact that you started with trendless data.
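    That sawtooth result is easy to reproduce. The following is a deliberate caricature of the scalpel idea (cut at every abrupt drop, keep only the within-segment trends), not the Berkeley Earth algorithm itself:

```python
import numpy as np

# A trendless sawtooth: gradual creep upward over each 20-step cycle
# (e.g. growing shelter around a station), then an abrupt drop back.
cycle = np.linspace(0.0, 1.0, 20, endpoint=False)
series = np.tile(cycle, 10)            # 200 steps, no net trend

x = np.arange(series.size)
full_trend = np.polyfit(x, series, 1)[0]

# "Scalpel"-style caricature: cut the record at every abrupt drop and
# treat each segment as a separate record, keeping its internal trend.
cuts = np.where(np.diff(series) < -0.5)[0] + 1
segment_trends = [np.polyfit(np.arange(seg.size), seg, 1)[0]
                  for seg in np.split(series, cuts)]
mean_segment_trend = float(np.mean(segment_trends))

# The full series is essentially trendless, but every segment "warms".
print(full_trend, mean_segment_trend)
```

    The full record fits to an essentially zero slope, while every segment fits to the ramp slope – the cutting discards exactly the cooling steps that balanced the books.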

    I’ve been a long-time supporter of the “scalpel” method, so this was a great surprise to me. I asked Mosher and Zeke for a comment, but they evaporated …

    w.

  83. Samuel C Cogar says:

    Ronan Connolly says:
    June 16, 2014 at 2:33 pm

    Samuel,

    Have a look at Section 2 (p2-5) of our “Urbanization bias III” paper (link above) for the GHCN dataset. This is the main one used by NOAA, NASA and the Japan Meteorological Agency and is fairly similar to the CRU’s dataset. Is that of any help?
    —————-

    Ronan,

    Thanks, but “No”, I have no interest in the above cited “data sets”. My request was for a “plotted graph” of the original “source locations” from which the thermometer records were obtained, …. not the massaged and manipulated “data sets” that were created from the recorded and reported thermometer “readings”.

    It is foolish for anyone to calculate the “average” of twenty-four (24) surface temperatures that were recorded in 1870 ….. and then compare it to the calculated “average” of 2,000± surface temperatures that were recorded in 2012 ….. and then claim the latter is “the highest ever recorded Yearly Average Surface Temperature for the entire US of A”.

    Especially given the fact that the “accuracy” of the recorded data being used to create your above cited “data sets” is highly questionable to say the least. And a “reading” of the history of the NWS should enlighten you on that fact.

    Ronan, here are a few excerpts from one of said histories, to wit:

    Evolution to the Signal Service Years (1600-1891)
    Source link: http://www.nws.noaa.gov/pa/history/signal.php

    By the end of 1849, 150 volunteers throughout the United States were reporting weather observations to the Smithsonian regularly. By 1860, 500 of Henry’s stations were furnishing daily telegraphic weather reports to the Washington Evening Star, and as Henry’s network of volunteer observers grew, other existing systems were gradually absorbed, including several state weather services.

    At 7:35 a.m. on November 1, 1870, the first systematized and synchronous meteorological reports were taken by observer-sergeants at 24 stations in the new agency. These observations, which were transmitted by telegraph to the central office in Washington, D.C., commenced the beginning of the new division of the Signal Service.

    Since the original congressional resolution covered only the Gulf and Atlantic coasts, and the Great Lakes, the early forecasts were made only for these areas. On June 10, 1872, an act of Congress extended the service throughout the United States, “for the benefit of commerce and agriculture.” However, a sample forecast of 1872 reflects the lack of data west of the Mississippi River.

    The Signal Service’s field stations grew in number from 24 in 1870 to 284 in 1878. Three times a day (usually 7:35 a.m., 4:35 p.m., and 11:35 p.m.), each station telegraphed an observation to Washington, D.C. These observations consisted of:

    1. Barometric pressure and its change since the last report.
    2. Temperature and its 24-hour change.
    3. Relative humidity.
    4. Wind velocity.
    5. Pressure of the wind in pounds per square foot.
    6. Amount of clouds.
    7. State of the weather.

    …………on October 1, 1890, an act transferring the weather service to the Department of Agriculture was signed into law by President Benjamin Harrison.
    =======================

    Now Ronan, what is the source “location” data for the recorded temperatures that are used to calculate the Yearly Average Global Surface Temperatures for the years of 1650 thru to 2014? What percentage of the earth’s surface was actually being monitored during each of those years? Curious minds would like to know.

  84. Samuel C Cogar says:

    Kurt says:
    June 16, 2014 at 3:04 pm

    You have to calibrate width against something, after all, to figure out how to get temperature from a ring width.
    —————–

    Kurt, I know a little about tree rings, …. and xylem and phloem and cambium layers and apical meristems, etc., etc. I am learned in biology, including botany. And in my learned opinion one has to do a lot of “hard squeezing” to extract an average “yearly” near surface air temperature out of tree ring growth.

    Little things can make a big difference in tree ring growth. Like, was the tree “shaded” by another tree? Did the tree receive mostly “morning” Sunshine or mostly “afternoon” Sunshine? Kurt, do you realize that shorter young saplings “leaf out” earlier than the taller older trees? Sure nuff, they do. They gotta get their new Spring growth in before the canopy of the older trees “blocks” all their chances of receiving any direct Sunlight.

    Kurt says:

    You don’t have to take my word for it – many of the posts in this thread confirm that proxy data is calibrated against the thermometer record. Willis Eschenbach says on June 16, 2014 at 2:02 am for example that in the “‘calibration procedure’ used for most of the global temperature proxy reconstructions . . . proxy series are simply re-scaled to have the same mean (‘average’) value and variance (‘range’) as the instrumental record over the calibration period.”
    —————

    Kurt, what Willis E said ….. and what you think he said ….. are two (2) different things.

    Stipulating a ’calibration procedure’ …. and actually calibrating something to a known standard are not the same thing.

    Me thinks that Willis was simply implying the need for an “adjustment procedure” for aligning two (2) different “scales” to the same “baseline”.

    Just like one has to “adjust” Centigrade temperatures if they are to be “aligned” (re-scaled) to a Fahrenheit “baseline”. Or to “adjust” atmospheric CO2 quantities from … “CO2 ppm increases per decade” …. to …. “CO2 ppm increases per year”.

    What purpose would it serve to “re-calibrate” the proxy data so that it matched the instrument data? That is, other than to prove or justify one’s “junk science” claims? AKA: Michael Mann’s “hide the decline”.

  85. Ronan Connolly says:

    Willis Eschenbach says:
    June 16, 2014 at 10:47 pm

    Ronan, on another thread this week we discussed the fact that if you take a number of trendless sawtooth waves of different frequencies, and you subject them to the “scalpel” method, you’ll end up with a trend, despite the fact that you started with trendless data.

    Are you referring to this guest essay by Bob Dedekind? http://wattsupwiththat.com/2014/06/10/why-automatic-temperature-adjustments-dont-work/

    If so, I agree with a lot of what he says, and we made similar arguments in Section 4.3 of our Urbanization bias III paper – in particular, see the discussion of the schematics in Figures 28 & 29 (p29-30).

    However, I disagree with his claim that:

    Now, not every station is going to have sheltering problems, but there will be enough of them to introduce a certain amount of warming. The important point is that there is no countering mechanism – there is no process that will produce slow cooling, followed by sudden warming. Therefore the adjustments will always be only one way – towards more warming.

    He correctly points out that the growth of trees around a weather station can introduce a gradual “warming” trend (by blocking cooling winds). But, under some conditions, tree growth can also lead to gradual cooling. For instance, trees can also block sunlight and make the area “cooler”, which is why people sometimes lie “in the shade” of a tree on a hot summer’s day!

    Have you read Runnalls & Oke, 2006 (open access)? In their conclusions section, they point out that microclimate biases could easily introduce a non-climatic trend if the majority of the biases are of the same sign. But, they are careful to point out that these trends could potentially be of either sign. I’d agree with the conclusions of Runnalls & Oke, 2006.

    I do agree with Dedekind that step-change adjustments are not enough when there are also non-climatic trend biases in the data. This was one of our primary conclusions in our Urbanization bias III paper. This applies to both the Menne & Williams algorithm used by NCDC and the “Scalpel” method used by the Berkeley group.

    And, if there is a tendency for non-climatic trend biases to be “warming biases”, then what Dedekind said is valid. An automated step-change breakpoint homogenization which only removes step biases and leaves the trend biases will lead to a net warming bias.

    When we realise that about half of the stations are currently urbanized (depending on the metric you use), and wouldn’t have been as urbanized 100 years ago, we can see that this IS a serious problem.

    Berkeley Earth attempt to minimise the trend bias problem by reducing the weighting of stations with trends different from their neighbours. But, this just leads to a slightly different version of the “urban blending” problem which we discuss on p30-35. That is, in urbanized areas, the few rural neighbours unaffected by urbanization bias will tend to be in the minority. Therefore, their trends will be the “anomalous” trends that are de-weighted, and the trends of the (already more numerous!) urbanization biased stations will dominate regional trends even more.

    For the Menne & Williams algorithm, things are more insidious because:
    a) trend biases are adjusted as if they were “step biases” – leading to the problem illustrated in our Fig. 29
    b) step change adjustments are based on the trends of the 40 neighbouring stations. In urban areas, this will lead to urban blending, i.e., “homogenization” will introduce a warming bias into the rural neighbours.

    If the majority of stations are affected by a similar non-climatic bias, then “homogenization” will spread this bias uniformly amongst all the neighbouring stations, yielding a “homogeneous” blended bias.
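    A caricature of this “blended bias” effect can be sketched in a few lines of Python. The station trends below are hypothetical illustrative numbers, and the median-of-neighbours adjustment is a deliberate simplification of any real homogenization algorithm:

```python
# Sketch: if most neighbours share a non-climatic warming trend,
# nudging each station toward its neighbours "homogenizes" the bias
# instead of removing it. (Illustrative only - real algorithms such
# as Menne & Williams' pairwise method are far more sophisticated.)

def homogenize(trends):
    """Replace each station's trend with the median of its neighbours."""
    adjusted = []
    for i in range(len(trends)):
        neighbours = sorted(trends[:i] + trends[i + 1:])
        adjusted.append(neighbours[len(neighbours) // 2])
    return adjusted

# Hypothetical region: true climatic trend 0.0 C/decade, but 7 of 10
# stations carry a +0.2 C/decade urbanization bias.
trends = [0.2] * 7 + [0.0] * 3
print(homogenize(trends))  # all 10 stations now show +0.2, rural ones included
```

    After the adjustment, the three unbiased rural stations have inherited the majority’s warming bias – the regional average is now “homogeneous”, but homogeneously wrong.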

    Anthony’s Surfacestations project has shown that the majority of USHCN stations are affected by siting biases. Menne et al. imagine their homogenization algorithm somehow “removes” all of these biases. To me, this is kind of like mixing strawberries and bananas in a blender, and expecting your “homogenous”, blended smoothie to consist of pure strawberries…

  86. Samuel,

    Ha! Have a proper look at the figures I suggested:

    Section 2 (p2-5) of this paper: http://oprj.net/oprj-archive/climate-science/34/oprj-article-climate-science-34.pdf

    Figure 15 (p30) and figure 27 (p40) of this paper: http://oprj.net/oprj-archive/climate-science/28/oprj-article-climate-science-28.pdf

    Also, have a read of Section 2.4 (p7-8) of this paper: http://oprj.net/oprj-archive/climate-science/19/oprj-article-climate-science-19.pdf

    I suspect we’re not in as much disagreement as you think we are! ;-)

  87. Ronan Connolly says:
    June 17, 2014 at 6:39 am

    He correctly points out that the growth of trees around a weather station can introduce a gradual “warming” trend (by blocking cooling winds). But, under some conditions, tree growth can also lead to gradual cooling.
    ——————-

    Ronan, when I read your above statement about “trees n’ warming” ….. it reminded me of a “puzzling” question that has been bugging me since February 08, 2012.

    Now I’m pretty good at figuring out “oddities” that one often sees in nature …. but here is one that I observed that I have a couple “maybes” that might explain it, but not a factual scientific reason that I am positive of. Therefore I would appreciate your expert opinion on the matter.

    Following are “links” to three (3) pictures I took around 9:30 AM on the morning after a brief snowfall the night before …. and are pictures of what I have labeled as being “tree circles” or “snow circles”. One can see six (6) different “circles” in said 3 pictures.

    The pictures are self-explanatory ….. but my tired ole brain is not. (a case of CRS, maybe.)

    Ronan, my question for you is, ….. why no snow underneath those “naked” trees?

    Or, …. why did the snowfall underneath those “naked” trees melt so much quicker than the other snowfall?

    See pictures @:
    http://i1019.photobucket.com/albums/af315/SamC_40/1treecircle.jpg
    http://i1019.photobucket.com/albums/af315/SamC_40/2treecircles.jpg
    http://i1019.photobucket.com/albums/af315/SamC_40/3treecircles.jpg

  88. Samuel C Cogar says:
    June 18, 2014 at 3:35 am

    Ronan, my question for you is, ….. why no snow underneath those “naked” trees?

    Or, …. why did the snowfall underneath those “naked” trees melt so much quicker than the other snowfall?

    Samuel, that is an excellent example of how trees can alter the local microclimate, and your photos illustrate the phenomenon perfectly! Thanks!

    There are several factors that could be involved. I think the standard explanation is as follows:

    Snow is “white” because it reflects ~90% of visible (and also UV) light (in climate science, we say snow has a high “albedo”). This is why there is often a “glare” from the snow, and also why you can sometimes get sunburnt from the reflected light of snow. However, snow (like all forms of water) easily absorbs infrared light.
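    The albedo difference can be put into rough numbers. The fluxes and albedo values below are my own illustrative assumptions, not measurements from Samuel’s site:

```python
# Absorbed shortwave flux = (1 - albedo) * incoming flux.
# All numbers here are rough, illustrative assumptions.
incoming = 300.0     # W/m^2, a plausible winter daytime value
albedo_snow = 0.9    # fresh snow reflects ~90% of visible light
albedo_bark = 0.2    # dark tree bark reflects far less

absorbed_snow = (1 - albedo_snow) * incoming
absorbed_bark = (1 - albedo_bark) * incoming
print(round(absorbed_snow), round(absorbed_bark))  # 30 240
```

    So, per unit area, the dark trunk absorbs roughly eight times more of the incoming sunlight than the surrounding snow – plenty of margin for re-radiating some of that energy as infrared towards the nearby snow.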

    On the other hand, tree trunks (which are “dark”) absorb quite a lot of the sunlight. They also emit infrared light, and the warmer they get, the more infrared light they emit.

    So, in the morning, most of the sunlight falling on the open ground is reflected, while most of the sunlight falling on the tree trunk is absorbed, heating the trees up. The trees emit infrared light to the surroundings, and some of this is absorbed by the snow nearest the tree, causing the snow to melt quicker.

    There are other possible explanations, however. For instance, at night the ground cools by emitting infrared light to the sky (particularly on a clear, cloudless night). Even though the branches were “leafless”, they would still “block” some of the escaping infrared light, and thus the ground nearer the trees wouldn’t cool as much as the open ground. This would keep the ground a bit warmer, and cause the snow to melt quicker.

    [By the way, this is thought to be one of the factors in the so-called "urban heat island" phenomenon. Tall buildings block some of the outgoing infrared light, and thereby reduce the rate of "infrared cooling" at night. This means that those urban areas cool less at night than rural open spaces, leading to higher night-time minimum temperatures ("Tmin").]

    Another possibility is that if the root structure is near the surface, it could be respiring enough to slightly heat the surrounding soil – thus melting the snow slightly quicker. The root structure would be densest closer to the trees (and also the tree trunk would be warmer, as mentioned above).

    Do you know: were your “tree circles” there before sunrise, or did they only form once the sun started shining?

    If they only started forming once the sun had risen, then the first explanation seems likely. However, if the “tree circles” were there at night, then one of the other explanations would seem more likely.

  89. richardscourtney says:

    Ronan Connolly:

    Thank you for your superb post at June 18, 2014 at 12:54 pm.

    It is clear, interesting and informative. I enjoyed it, and I think it to be the best post of this day on WUWT.

    Richard

  90. Ronan Connolly says:
    June 18, 2014 at 12:54 pm

    Samuel, that is an excellent example of how trees can alter the local microclimate, and your photos illustrate the phenomenon perfectly! Thanks!
    ———————

    Ronan, my thoughts EXACTLY, …. but I didn’t know EXACTLY how or why, ….. and thus the reason I took those pictures because there was no way in ell I could have adequately described said “event” in verbal only terms.

    And Ronan, thank you for your response, ….. but, …… and I don’t know how to tell you this other than to say, ….. it was of absolutely no value to me in resolving my quandary. Thus I will ignore all of it except for your question of …. “Do you know were your “tree circles” there before sunrise, or did only they form once the sun started shining?”

    Ronan, I do not know if said “tree circles” were there before sunrise (daylight). But in my opinion, it is highly likely that they were.

    Anyway, I would like to reiterate the fact(s) that those pictures were taken during the “wintertime” on February 08, 2012, around 9 to 9:30 AM …. under heavily overcast skies.

    And being “wintertime 2-8-12”, …. those trees are dormant …. and I could not see that clearly before “daylight” …. which does not occur here until like 7:45 to 8:00 AM EST. Thus, no Sunshine, no IR radiation from the soil, no IR from the root structure, no IR from the tree trunks or limbs, …. but potentially a wee bit of IR radiation from the “overcast” cloud layer.

    Now the roadway, sidewalks and rooftops are “snow free” because of the “heat island” effect …. but their IR emission could not be, would not be, “directed” to a circular area underneath the “circumference” of the out-stretched tree limbs.

    And the trees are of three (3) different species. One (1) huge “pignut” Hickory tree, 1 large and 2 smaller Sugar Maple (Acer saccharum) and the other 2 are a “flowering” tree that I do not know the name of because they are not “native” to this locale.

    Ronan, just so you know …. “where I am coming from”, …. I am Degreed (AB) in both the Physical and Biological Sciences (which includes Botany) …. and the only “reason” I could think up that might be responsible for creating those “tree circles” is, to wit:

    It was a “wet” snowfall, with no wind or breeze blowing ….. and thus the “snowflakes” stuck to the tree limbs as well as covering the entire surface (grass, sidewalks, roadway and roof tops). But then, after the snowfall ceased …. but before daylight, ….. a “warm” air mass moved in underneath the over-cast skies …. and that “warm” air “conducted” some of its energy to the snowfall that was “stuck” to all the limbs of those trees. And that absorbed energy caused the snow to “melt” …. and the melt-water “dripped” down to the surface thus exacerbating the surface “melting”.

    Anyway, I wanted another (author original) learned “opinion” or explanation before I started “touting” and/or “claiming” my above “solution” is a factuality that is based in/on logical reasoning, intelligent deductions and factual science.

    Cheers, Sam C

  91. Samuel C Cogar says:
    June 19, 2014 at 6:28 am

    Ronan, I do not know if said “tree circles” were there before sunrise (daylight). But in my opinion, it is highly likely that they were.

    Well, what are you basing your opinion on? Did you make any observations before sunrise, or is it just that this better suits your explanation? Not having been there, or knowing anything about your examples other than what you have told me, both possibilities seem equally plausible.

    If you did observe the circles before sunrise, we could then rule out any explanations which required sunlight for the circle formation. But, if you didn’t make any pre-sunrise observations, then we have to consider both possibilities. It’s not a matter of “opinion” – you either made the observations or you didn’t. If you did, then that’s great and we can use the information. But, if you didn’t, it’s not a big deal – we just have to consider both possibilities.

    “If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts he shall end in certainties.” – Sir Francis Bacon

    Infrared cooling of the ground is certainly greater on clear, cloudless nights (e.g., ground frost is more likely after a “starry” night), but it still occurs on cloudy nights – all solid materials with a temperature above absolute zero (0 K, i.e., about -273°C) emit infrared light.
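    The point that infrared emission never “switches off” follows from the Stefan-Boltzmann law. A quick sketch – the 0.95 emissivity is a typical assumed value for snow or soil, not a site measurement:

```python
# Stefan-Boltzmann law: emitted flux = emissivity * sigma * T^4.
# Even a surface at freezing point radiates a substantial flux.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(temp_c, emissivity=0.95):  # emissivity ~0.95 is an assumption
    t_k = temp_c + 273.15  # convert Celsius to Kelvin
    return emissivity * SIGMA * t_k ** 4

print(round(emitted_flux(0.0)))    # ~300 W/m^2 at 0 C
print(round(emitted_flux(-10.0)))  # smaller, but nowhere near zero
```

    How much of that emitted flux is returned from above depends on cloud cover, which is why clear nights cool the ground fastest.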

    I agree with you that the trees would have been dormant. However, “dormant” does not mean all metabolic activity completely stops. It just slows down dramatically. As a general rule of thumb for most organisms (plants/animals/bacteria/etc), biological activity roughly halves with a 7°C drop (provided the organism remains alive that is!).
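    That “halves with a 7°C drop” rule of thumb is easy to express as a formula (it is equivalent to a Q10 of roughly 2.7). This is a generic rule of thumb, not a measurement on the trees in question:

```python
# Rule of thumb: metabolic rate roughly halves per 7 C of cooling,
# i.e. relative rate = 2 ** (delta_T / 7). Illustrative only.
def relative_rate(delta_t_c):
    """Rate relative to baseline after a temperature change of delta_t_c."""
    return 2 ** (delta_t_c / 7.0)

print(relative_rate(-7))   # 0.5  -> half rate after a 7 C drop
print(relative_rate(-14))  # 0.25 -> quarter rate after a 14 C drop
```

    In other words, dormancy slows metabolic activity dramatically without ever stopping it entirely.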

    In trees that have a winter dormancy period, the root structure still continues to grow during winter (absorbing water & nutrients and respiring) – just at a much slower rate. To be honest, I don’t think soil heating from root activity would be a major factor in your case, but it is a possible factor which we need to consider. One way to check would be to compare soil temperatures near the roots and away from the roots.

    Your theory might well explain the circles, but so far I don’t think we’ve ruled out the other explanations. So I would be reluctant to choose one of the 4 proposed explanations (or some other explanation) over the others, until we had more information. Indeed, it may be that several factors are involved.

    Did you actually see the snowfall “stuck” on the limbs before daylight? If you did, this would be an important bit of information, as the branches are bare in the photographs. So, if we knew that there had been snow “stuck” on the limbs during the night, but it was gone by 9am, this would definitely be relevant. But, if it’s just a guess, then we still don’t know.

    If melt-water drips from the branches were responsible for the circles, I would expect the melting to be mostly confined to underneath the branches. It’s hard to tell from the photos – did the branches completely cover the “tree circle” regions? The circles seem to be more uniform in nature and cover a larger area, but that could just be the angles the photos are taken from.

    Of course, if the branches did exactly cover the circles, that still wouldn’t rule out the other explanations, but it would at least provide us more information…

  92. Ronan Connolly says:
    June 21, 2014 at 3:43 am

    If you did observe the circles before sunrise, we could then rule out any explanations which required sunlight for the circle formation. But, if you didn’t make any pre-sunrise observations, then we have to consider both possibilities.
    ————–

    I have to assume that you are intentionally ignoring the IR absorption, emission and/or reflectivity properties of atmospheric H2O vapor in/of the specific form of “heavy cloud cover”.

    And secondly, even if there were no heavy cloud cover …. the trees and surface area in question are not subject to any direct Sunlight within the stipulated “time frame” ….. and will not be subjected to said until like two (2) +- hours after “1st light” each morning because of the surrounding hills which “blocks” said direct solar energy.

    And thirdly, even if the branches and trunks of the trees in question were subjected to direct Sunlight at the specified hour(s) of the morning in question, …. then >50% of their surface area would be subject to any solar “warming”…. and that surface area would be at 90 degrees perpendicular to the “rising” Sun, ….. thus there is no way in hell the “warmed” or “heated” surface area of said trees could possibly radiate their IR energy around to the “back” or “dark” side of said trees, ….. nor is it possible for said IR energy to have been radiated directly toward the surface area surrounding each of said trees.

    Therefore, way less than 50% of the surface area of each tree would have been “warmed” …… and way less than 5% of said “warmth” (IR energy) would have been radiated directly toward the surface underneath the branch “canopy” of each of said trees.

    In trees that have a winter dormancy period, the root structure still continues to grow during winter (absorbing water & nutrients and respiring) – just at a much slower rate. To be honest, I don’t think soil heating from root activity would be a major factor in your case, but it is a possible factor which we need to consider.

    Ronan, …. “Give it up”, …. you don’t have a clue what you are talking about and your above obfuscations are little more than a silly CYA attempt to “bedazzle” me into thinking you are learned in/on the subject matter.

    Your ability to accurately measure the “warming” effect that root activity contributes to the “warming” of the soil is no better than your ability to accurately measure the “warming” effect that atmospheric CO2 molecules contributes to the “warming” of the near surface atmosphere via direct absorption of radiated IR energy. Which, by the way, said “ability” is zero, nada, zilch, none …. whatsoever.

    Ronan, here is a url “link” which I thinks originates from your own “backyard”, ….. so best you read the 1st paragraph therein …… and pay close attention to what it is telling you.
    http://www.forestry.gov.uk/pdf/GrowingGreenActivityPack-Roots.pdf/$FILE/GrowingGreenActivityPack-Roots.pdf

    Did you actually see the snowfall “stuck” on the limbs before daylight?

    Ronan, I don’t have to actually see the Rabbit that left its “tracks” in the snow …. to know damn well that a Rabbit had been there prior to my observing of said “tracks”. And please don’t infer that it might have been a Weasel, Raccoon, deer, Chipmunk or dog that left said “tracks” in the snow.

    If it had been a “dry” snowfall that had not “stuck” to the limbs, ….. then the area around the trunk of the trees would have had the same amount of “snow cover” as the rest of the surface area.

    Your reliance on the possible interference by the Flying Spaghetti Monster does little to bolster or confirm your expertise or credibility on such matter of science.

  93. Samuel,

    In your June 18, 2014 at 3:35 am comment, you asked me for my opinion on a “puzzling” question that has been bugging me since February 08, 2012.

    I gave you several plausible explanations, including one of the “standard explanations” for “tree circles”, e.g., see here, or here.

    Apparently, there are many different reasons why “tree circles” will form, depending on the circumstances, e.g., some flowering trees can produce heat to help speed up melting. This obviously doesn’t apply to your case as the trees hadn’t started to bud. However, the point is there doesn’t seem to be a single, universal answer to the phenomenon. Yet, it seems because I didn’t include the one you wanted to hear (i.e., your favoured explanation), that you didn’t like my reply.

    I have to admit, I’m now the puzzled one! If you were already convinced you knew “the answer”, why did you ask me in the first place?

    To be honest, you haven’t convinced me that your explanation is “the answer”. It’s one possible explanation, but there are several reasons why “tree circles” form, and I don’t think you’ve satisfactorily shown that the others are “wrong”, and that yours is “right”. But, if you’ve already convinced yourself you have “the answer”, why do you need my approval to start ‘ “touting” and/or “claiming” [your] above “solution” is a factuality’?

    At any rate, I do think your tree circle photos are a great example of how trees can significantly alter the local microclimate (regardless of what specific factors are involved in your particular case). So, I do appreciate you posting them and, before you started insulting me, I was finding the discussion interesting. So, thanks for sharing the photos.

  94. Max™ says:

    Yeah, I found that exchange a little puzzling myself, “hey, got a question which I’d like to hear your thoughts on” turning into “ha ha, you’re wrong because I already had an answer I liked and you didn’t give it to me, take that!” is a bit out there.

  95. Ronan Connolly says:
    June 21, 2014 at 11:23 am

    and, before you started insulting me, I was finding the discussion interesting. So, thanks for sharing the photos.
    ———-

    Ronan, ……. (and also for Max™’s reading pleasure)

    In all your previous postings/commentary to this article you inferred that you were quite knowledgeable and/or learned on the subjects of climate, weather, temperatures, thermal energy transfers, etc. … and thus the reason I specifically ask for your expertise and/or expert opinion on my “tree circle” pictures.

    But what I got in reply from you, via your June 21, 2014 at 3:43 am posting, … was little more than patronizing crapolla with the implication that you were responding to a question posited by a dim-witted High School student.

    And that highly irritated me …. and confirmed the fact that you surely thought likewise about all my other posted commentary. And now you are “crying-the-pity-party-blues” about me insulting you. GIMME A BREAK

    And after I jumped your arse about the contents of your “3:43 am” post you did some quickie “searching” and posted a bunch of paraphrased and mimicked commentary in your above “11:23 am” posting, thus doing little more than “adding insult to your injury”.

    Try again, Ronan, there is nothing in your above “11:23 am” posting that explains the “cause” of my pictured “tree rings” formation given the known environmental criteria that existed at the time of their formation. I specifically stated what said “known environmental criteria” were, … yet you come back with your 2nd post with more BS about flowering plants & trees, spring growth of bulbs & tubers and bright Sunshine heating up different parts of the environment. The next time, “write down” said controlling criteria ….. and eliminate everything that doesn’t conform to said ….. before you start with your mimicry of “what ifs” and “maybes”.

    I’m too damn old to worry about someone getting “hurt feelings”, p-faced and/or pouty just because I point out their ignorance in/of/on a subject matter being discussed. One never learns a damn thing ….. if every one is nice to them and never contradicts anything they say or do. If one has been miseducated in the Sciences ……. then they have to be consciously “willing” to correct their problem …… but they have to be told what their “problem” is before there is any chance or hope of them ever correcting it. It is their conscious “choice” to make. No one can make it for them.

  96. Thanks for the support, Max™

  97. Hi Samuel,
    I’ve re-read my June 21, 2014 at 3:43 am comment, and I’m not sure which bits you felt were patronising. But, I’m sorry if you perceived them as such. I try to be as clear, specific and straightforward as I can when I’m commenting, and to document my reasoning as well as I can. Perhaps this approach can sometimes come across as “patronising”, but if so, it’s unintended.

    I did ask you to back up your opinions with the experimental data/observations that you used for reaching your conclusions. But, this is the approach I would take when peer reviewing a paper. If I felt you were just “a dim-witted High School student”, I probably wouldn’t have bothered.

    I maintain science should be based on experimental data, not just opinions. Indeed, this is one of my main criticisms of much of modern “climate science”. Many climate scientists (who should know better) seem more interested in silly “scientific consensus” claims, or making sure their conclusions give the “right” answer than in critically checking the data for themselves!

    I had assumed you shared this data-based approach to science & were genuinely interested in reaching a scientifically rigorous & robust explanation. Was I wrong?

    P.S. As far as I’m concerned you still haven’t answered my June 21, 2014 at 3:43 am comment. If you genuinely want to continue the discussion, I don’t mind (although I’m not sure what the other commenters on this thread think??).

    But, if you just want to claim people who are unimpressed by your opinion-based theories are “ignoran[t] in/of/on a subject matter being discussed” & just saying “crapolla” and/or “BS”, then go ahead…

  98. Ronan Connolly says:
    June 23, 2014 at 3:05 pm

    I did ask you to back up your opinions with the experimental data/observations that you used for reaching your conclusions.

    I maintain science should be based on experimental data, not just opinions.
    —————

    Ronan,

    That means that you discredit 80+% of all science, ….. is that correct?

    All opinions voiced by Stephen Hawking.
    All opinions voiced by Einstein.
    90+% of all opinions voiced by Astronomers.
    And 95+% of all opinions voiced by psychiatrists, psychologists and neural research scientists.

    RIGHT? All discredited by you.

    And last but not least, you discredited Arthur C. Clarke for his silly opinion concerning a geostationary satellite, …… right?.

    Ronan, why in hell are you asking for my “experimental data” when it should have been plainly obvious to you that none was needed or required? I dun told yu, …. the pictures “speak for themselves”.

    Ronan, the “funny” thing about an “original thinker” ….. is the fact that it usually “takes one to know one”, …… and those that are “not one” are highly prone to DISCREDIT those that “are one”.

    Ronan, me thinks you have been “brainwashed”, badly nurtured and/or miseducated with a “religious belief” that Degree status and/or employment title/position automatically and selectively ……. “trumps” common sense, logical reasoning, intelligent deductions and/or “original thinking”.

    Science does not bow to any Monarch or Titled gentry.

    Fer shame, fer shame, … because those who ascribe to the aforesaid will never be as smart as, … or smarter than, ….. their mentors are.

    Ronan, here is a “link” to some “original thinking” commentary which is solely the result of “my learned opinion” ….. and which was not based in/on any data resulting from my doing of any “experimental research” of the subject matter ….. and which maybe you would care to offer your … discrediting critique of said, to wit:

    http://snvcogar.newsvine.com/_news/2013/04/24/17899899-biology-of-a-cell-genetic-memory-verses-environmental-memory

  99. Hi Samuel,
    I suspect we might be talking at cross purposes (or perhaps shouting at cross purposes!)

    I never said opinions were irrelevant for science! I said,

    I maintain science should be based on experimental data, not just opinions.

    (highlighted for emphasis)

    There are apparently many different approaches to “doing science”. However, my approach to science is a strongly empirical one – I have always favoured the “data is king” approach, similar to what Feynman suggested in his classic 1964 lecture.

    I also agree with a lot of what Karl Popper said in his 1963 “falsification” essay.

    But, other scientists take other approaches. For instance, a number of philosophers (Kuhn, Feyerabend, etc.) have strongly criticised Popper’s “falsification” approach, and said it’s not how science is usually done, e.g., here.

    So, when I said ”I maintain science should be…”, I wasn’t saying “science is…” – different scientists take different approaches to “doing science”. I was explaining that I personally favour the “data is king” approach.

    It sounds like you prefer the “theory is king” approach – is that right? If so, you’re not alone. A lot of scientists seem to prefer the “theory is king” approach… and these days, even the “computer model is king” approach (particularly popular in climate science, it seems!)

    Ronan, me thinks you have been “brainwashed”, badly nurtured and/or miseducated with a “religious belief” that Degree status and/or employment title/position automatically and selectively ……. “trumps” common sense, logical reasoning, intelligent deductions and/or “original thinking”.
    Science does not bow to any Monarch or Titled gentry.

    I’m not sure where you got this impression. But, nothing could be further from the truth. As I mentioned above, I favour the “data is king” approach to science.

    “Nature” doesn’t care what we happen to think, what degrees we got, who our parents were, etc. That’s actually why I’m more interested in what the data says, than in people’s opinions of what the data should say.

    I’m definitely interested in people’s opinions… but when it comes to drawing conclusions, I always try to go back to the data, and figure out what “Nature” is saying…

    On your article on how the brain stores memory, it’s a good article. I think your concept of distinguishing between “genetic memory” and “environmental memory” is useful.

    Have you heard of the studies of London cab drivers by Eleanor Maguire (and others)? There’s a short summary here.

    Apparently, the hippocampus part of the brain is much larger than average in London cab drivers because they have to develop a very strong spatial memory of the city – although perhaps it’s not as important now that cab drivers have GPS machines. It seems that the size of the hippocampus increases the longer the cab drivers have been driving, and that before training their hippocampus is average in size. So, this indicates that we can & do alter our physical brain structure depending on our mental activity. Does this fit into your “environmental memory” concept?

  100. Ronan Connolly says:
    June 24, 2014 at 11:37 am

    I have always favoured the “data is king” approach,
    ——————

    That is fine, and the way it should be. BUT, … only if and when data is available …. AND …. then only if said data has been proven to be quantitatively true and factual …… and directly related to the subject in question.
    =============

    I said, … I maintain science should be based on experimental data, not just opinions.
    ————–

    That is correct, BUT, ….. in your responses to me …… you intentionally prerequisite’ed the word “data” with the word “experimental”, which was foolish on your part, … and irritated me, …. because it matters not a twit what the source of said data is or was as long as it conforms to the above noted criteria.
    ===========

    It sounds like you prefer the “theory is king” approach – is that right?
    ————–
    NO, it was not right. I prefer and abide by ….. “And, spite of pride, in erring reason’s spite, ….. One truth is clear, whatever is, is right.” ….. [Alexander Pope]
    ===============

    So, this indicates that we can & do alter our physical brain structure depending on our mental activity. Does this fit into your “environmental memory” concept?
    ——————–

    Ronan, this conversation between you and I has altered your physical brain structure ….. by creating “new” synaptic links between brain neurons that were not previously linked together prior to our conversation.

    Ronan, to learn more about “Why you are what you are”, …… read this commentary, to wit:
    http://snvcogar.newsvine.com/_news/2010/11/04/5408053-a-view-of-how-the-human-mind-works

  101. Ok, I think we are actually mostly in agreement then!

    By the way, good luck in your neuroscience commentaries – it’s a fascinating subject, and the two articles you gave me seem interesting!
