Resolution and Hockey Sticks, Part 1

Guest sciency schist by David Middleton

Resolution vs. Detection

We geoscientists in the oil & gas industry spend much of our time integrating data sets of vastly different resolutions.  Geological data, primarily well logs, are very high resolution (6 inches to 2 feet vertical resolution).  Geophysical data, primarily reflection seismic surveys, are of much lower and highly variable resolution, depending on the seismic velocities of the rocks and the frequency content of the seismic data.  The rule of thumb is that a stratigraphic unit must be at least as thick as one-quarter of the seismic wavelength (λ/4) to be resolved.

Figure 1a. Seismic wavelength vs velocity for 10, 25, 50 and 100 Hz dominant frequencies. (SEG Wiki)
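The λ/4 rule of thumb is easy to compute directly. A minimal sketch (the velocity and frequency values below are illustrative, not from any particular survey):

```python
def tuning_thickness_m(velocity_mps, dominant_freq_hz):
    """Approximate vertical resolution (tuning thickness) from the
    lambda/4 rule of thumb, where wavelength = velocity / frequency."""
    wavelength_m = velocity_mps / dominant_freq_hz
    return wavelength_m / 4.0

# e.g. a 3,000 m/s interval imaged at a 50 Hz dominant frequency:
# wavelength = 60 m, so beds thinner than ~15 m cannot be resolved
print(tuning_thickness_m(3000.0, 50.0))
```

Note how quickly resolution degrades with depth: velocities increase and dominant frequency falls, so λ/4 grows on both counts.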

Thinner beds can be detected, but not resolved.  The same principle applies to normal faults.  If the throw (vertical offset) is at least λ/4, the fault can be resolved and the throw can be accurately estimated from the seismic data. If the throw is less than λ/4, the best I can determine is that a fault is probably present.

ch11_fig1-2
Figure 1b. Seismic expression of a normal fault at λ/16, λ/8, λ/4, λ/2 and λ. (SEG Wiki)

When we integrate geological and geophysical data, we don’t just splice the well log onto a seismic profile.   We convert the well logs into a synthetic seismogram.  This is most effectively accomplished using sonic and density logs.

800px-Synthetic-seismograms_fig1
Figure 2. Synthetic Seismogram (AAPG Wiki)

The sonic and density logs are used to calculate acoustic impedance and a reflection coefficient series (RC).  The RC is convolved with a seismic wavelet, often extracted from the seismic data near the well.  The synthetic seismic traces (3 panels on the right) can then be directly compared to the seismic profile.  The resolution difference is quite large.  The trough-to-trough interval on the seismic pulse below is about 150 m.
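The workflow in Figure 2 can be sketched in a few lines of Python. The layer velocities, densities, Ricker wavelet frequency and reflector positions below are all invented for illustration; a real synthetic also requires a depth-to-time conversion that is omitted here.

```python
import numpy as np

def reflection_coefficients(velocity, density):
    """RC at each interface from acoustic impedance Z = velocity * density."""
    z = velocity * density
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker_wavelet(freq_hz, dt=0.002, duration=0.128):
    """Zero-phase Ricker wavelet, a common stand-in for an extracted wavelet."""
    t = np.arange(-duration / 2, duration / 2, dt)
    a = (np.pi * freq_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Hypothetical four-layer model (m/s, g/cc)
velocity = np.array([2500.0, 3200.0, 2800.0, 4000.0])
density = np.array([2.20, 2.40, 2.30, 2.60])

# Place the three interface RCs in a 0.5 s, 2 ms-sampled time series,
# then convolve with the wavelet to get the synthetic trace
rc_series = np.zeros(250)
rc_series[[60, 120, 180]] = reflection_coefficients(velocity, density)
synthetic = np.convolve(rc_series, ricker_wavelet(25.0), mode="same")
```

The sparse RC spikes become band-limited wiggles after convolution, which is exactly why closely spaced interfaces blur together below tuning thickness.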

19seisres
Figure 3. Schematic comparison of a well log and a seismic pulse with a wavelength of 150 m. (University of Maryland)

How Does This Relate to Climate “Science”?

Signal theory and signal processing principles apply to all signals, not just seismic data. A signal is a time-variant sequence of numbers. Almost all temperature, carbon dioxide and sea level reconstructions employ many of the same data processing methods as seismic data processing. Deconvolution is particularly essential to ice core carbon dioxide chronologies. Sometimes the signal processing methods are properly employed. Van Hoof et al., 2005 demonstrated that the ice core CO2 data represent a low-frequency, century to multi-century moving average of past atmospheric CO2 levels. They essentially generated the equivalent of a synthetic seismogram from the stomata chronology and tied it to the ice core.

Figure 4. Panel A is stomatal frequency curve. Panel B is the D47 Antarctic ice core. The dashed line on Panel B is the “synthetic” ice core generated from the stomatal frequency curve. (Van Hoof et al., 2005)
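The van Hoof approach can be mimicked with a toy forward model: smooth a hypothetical high-resolution atmospheric CO2 history with a gas-age distribution (crudely approximated here as a boxcar average) to produce a “synthetic ice core.” All the numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical annual atmospheric CO2 with a 100-yr, +/-5 ppm oscillation
years = np.arange(1000, 1500)
atmosphere = 280.0 + 5.0 * np.sin(2.0 * np.pi * (years - 1000) / 100.0)

# Gas-age distribution crudely approximated as a 200-yr boxcar
width = 200
kernel = np.ones(width) / width
synthetic_core = np.convolve(atmosphere, kernel, mode="valid")

# Century-scale variability survives in the atmosphere but is
# almost entirely averaged out of the "ice core"
print(np.ptp(atmosphere), np.ptp(synthetic_core))
```

The 10 ppm peak-to-peak atmospheric signal collapses to essentially zero in the smoothed series, which is the sense in which ice core CO2 is a century-scale moving average of the atmosphere.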

From my geological perspective, most climate “hockey sticks” are the result of the improper integration of high resolution instrumental data (akin to well logs) and lower resolution proxy data (akin to reflection seismic data).  Many of these hockey sticks appear to have been the result of a careless, if not reckless, disregard of basic signal processing principles.

Temperature Reconstruction Hockey Sticks

Figure 5. “Mike’s Nature Trick” Older is toward the left.

One of the most egregious violations of signal processing principles is the generation of climate reconstruction “hockey sticks” through variations of “Mike’s Nature Trick.”

In the aftermath of the “Climategate” scandal, Penn State conducted an “investigation” of Mike’s Nature Trick.   The Penn State whitewash was ludicrous…

After careful consideration of all the evidence and relevant materials, the inquiry committee finding is that there exists no credible evidence that Dr. Mann had or has ever engaged in, or participated in, directly or indirectly, any actions with an intent to suppress or to falsify data.

RA-10 Inquiry Report

It can’t be proven that he intended to suppress or falsify inconvenient data. It’s entirely possible that he accidentally devised a method to suppress or falsify inconvenient data.

This bit here was laughable…

In fact to the contrary, in instances that have been focused upon by some as indicating falsification of data, for example in the use of a “trick” to manipulate the data, this is explained as a discussion among Dr. Jones and others including Dr. Mann about how best to put together a graph for a World Meteorological Organization (WMO) report. They were not falsifying data; they were trying to construct an understandable graph for those who were not experts in the field. The so-called “trick” was nothing more than a statistical method used to bring two or more different kinds of data sets together in a legitimate fashion by a technique that has been reviewed by a broad array of peers in the field.

RA-10 Inquiry Report

The most benign possible interpretation of the “trick” is that they edited part of Keith Briffa’s reconstruction because the tree ring chronology showed that the 1930s to early 1940s were warmer than the late 1990s. So, they just substituted the instrumental record for the tree ring chronology.  In the private sector, this sort of behavior isn’t benign… It’s grounds for immediate termination or worse.

I suppose that there is no evidence that they did this with intent to deceive.  However, the fact that they called it “Mike’s nature trick” sure makes it seem like this sort of thing was standard operating procedure.

Taking a set of data that shows that the 1930s were warmer than the 1990s and using another data set to reverse that relationship is not bringing “two or more different kinds of data sets together in a legitimate fashion.” It’s a total bastardization of the data.

To see an example of “Mike’s Nature Trick,” go here… Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia 

Click this… EIV Temperature Reconstructions

Open up any of the **cru_eiv_composite.csv or **had_eiv_composite.csv files. All of them splice the high frequency instrumental data into the low frequency proxy data. To Mann’s credit,  he at least documents this one enough to sort it out.

This statement from their PNAS paper is totally unsupported by proxy reconstructions… “Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used. If tree-ring data are used, the conclusion can be extended to at least the past 1,700 years.”

The anomalous nature of the “recent warmth” is entirely dependent on the “tricky” use of the instrumental data. He didn’t use any proxy data post-1850. The eivrecondescription file states “Note: values from 1850-2006AD are instrumental data”.

This image from Mann’s 2008 paper implies that all of the reconstructions are in general agreement regarding the claim that the “recent warmth appears anomalous for at least the past 1,300 years”…

Figure 6. “Spaghetti” graph from Mann et al., 2008. Older is toward the left.

By cluttering up the image with many reconstructions and plastering the instrumental record onto the end of the graph, it’s impossible to see any details.

Here are Mann (Cru_EIV), Moberg 2005, Christiansen & Ljungqvist 2012 (un-smoothed), Ljungqvist 2010 and Esper 2002 (low frequency)…

Figure 7. Clutter can work both ways. Older is toward the left.

Zoomed in on post-1800 with HadCRUT4 NH added:

Figure 8. Older is toward the left.

The Modern Warming only appears anomalous because of the higher resolution of the instrumental record and its position at the tail-end of the time series.

Ljungqvist (2010) clearly explained the problem with directly comparing instrumental data to proxy reconstructions.

The amplitude of the reconstructed temperature variability on centennial time-scales exceeds 0.6°C. This reconstruction is the first to show a distinct Roman Warm Period c. AD 1-300, reaching up to the 1961-1990 mean temperature level, followed by the Dark Age Cold Period c. AD 300-800. The Medieval Warm Period is seen c. AD 800–1300 and the Little Ice Age is clearly visible c. AD 1300-1900, followed by a rapid temperature increase in the twentieth century. The highest average temperatures in the reconstruction are encountered in the mid to late tenth century and the lowest in the late seventeenth century. Decadal mean temperatures seem to have reached or exceeded the 1961-1990 mean temperature level during substantial parts of the Roman Warm Period and the Medieval Warm Period. The temperature of the last two decades, however, is possibly higher than during any previous time in the past two millennia, although this is only seen in the instrumental temperature data and not in the multi-proxy reconstruction itself.

[…]

The proxy reconstruction itself does not show such an unprecedented warming but we must consider that only a few records used in the reconstruction extend into the 1990s. Nevertheless, a very cautious interpretation of the level of warmth since AD 1990 compared to that of the peak warming during the Roman Warm Period and the Medieval Warm Period is strongly suggested.

[…]

The amplitude of the temperature variability on multi-decadal to centennial time-scales reconstructed here should presumably be considered to be the minimum of the true variability on those time-scales.

Ljungqvist, 2010
Figure 9. Ljungqvist demonstrated that the modern warming had not unambiguously exceeded the range of natural variability. The bold black dashed line is the instrumental record. I added the red lines to reflect the margin of error of the proxy data relative to the instrumental data. Older is toward the left.

Direct comparisons of the modern instrumental record to the older proxy reconstructions are not robust because the proxy data are of much lower resolution. The proxy data indicate the “minimum of the true variability on those time-scales.” The instrumental data are depicting something closer to actual variability.

The proxy data lack the high frequency component of the signal.  When the high frequency component of a signal is filtered out, the amplitude is attenuated.

Figure 10. Sine wave with a 10-pt moving average applied. Note the reduction in amplitude due to filtering and smoothing. (Wood for Trees) Older is toward the left.
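The attenuation shown in Figure 10 is easy to reproduce. The sine period below is arbitrary; the measured amplitude loss matches the moving average's theoretical gain (the Dirichlet kernel):

```python
import numpy as np

period = 30.0                 # samples per cycle (arbitrary choice)
x = np.sin(2.0 * np.pi * np.arange(1000) / period)

w = 10                        # 10-point moving average, as in Figure 10
smoothed = np.convolve(x, np.ones(w) / w, mode="valid")

measured_amplitude = smoothed.max()
# Theoretical gain of a w-point moving average at this frequency
theoretical = abs(np.sin(np.pi * w / period) /
                  (w * np.sin(np.pi / period)))
print(measured_amplitude, theoretical)
```

Both come out near 0.83, i.e. roughly 17% of the amplitude is lost even though the underlying signal is unchanged; shorter periods lose far more.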

The direct comparison of instrumental data to proxy data becomes even more problematic when the record length is extended beyond 2,000 years.

Figure 11. Holocene Climate Reconstruction, Andy May WUWT. Older is toward the right.

The supposedly “four warmest years on record” have occurred about 300 years after the coldest century of the past 100 centuries.  This could only be described as a “climate crisis” or “climate emergency” by someone who was totally ignorant of basic scientific principles, particularly Quaternary geology and signal processing.  It’s actually a helluva lot better than just about any other possible evolution of Earth’s “climate.”

The longer the record length of the reconstruction, the more important the consistency of the temporal resolution becomes.

“Consistency of the temporal resolution” means that the resolution of the older proxies is consistent with that of the more recent proxies. Temporal resolution is a function of the sampling interval…

We believe the greater source of error in these reconstructions is in the proxy selection. As documented in this series, some of the original 73 proxies are affected by resolution issues that hide significant climatic events and some are affected by local conditions that have no regional or global significance. Others cover short time spans that do not cover the two most important climatic features of the Holocene, the Little Ice Age and the Holocene Climatic Optimum.

[…]

We also avoided proxies with long sample intervals (greater than 130 years) because they tend to reduce the resolution of the reconstruction and they dampen (“average out”) important details. The smallest climate cycle is roughly 61 to 64 years, the so-called “stadium wave,” and we want to try and get close to seeing its influence. In this simple reconstruction, we have tried to address these issues.

Andy May WUWT.
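The sampling constraint described above is just the Nyquist criterion: a proxy sampled every Δt years cannot resolve any cycle shorter than 2Δt. The intervals below are the ones mentioned in the text (the 75-yr average resolution, the 130-yr cutoff, the ~62-yr stadium wave):

```python
def shortest_resolvable_period_yr(sample_interval_yr):
    """Nyquist criterion: at least two samples per cycle are needed."""
    return 2.0 * sample_interval_yr

# A 75-yr sampling interval cannot resolve the ~62-yr "stadium wave"
print(shortest_resolvable_period_yr(75))
# Proxies sampled every 130+ years lose everything shorter than ~260 yr
print(shortest_resolvable_period_yr(130))
```

In practice the limit is even coarser than Nyquist suggests, because age-model uncertainty and natural smoothing in the archive blur the signal further.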

This is a table of all of the “used” proxies. They have fairly consistent temporal resolution. They have long record lengths and most, if not all, cover the Holocene Climatic Optimum and Little Ice Age, the warmest and coldest climatic phases of the Holocene. It’s about as close to “apples and apples” as you can get with a >10,000-yr global temperature reconstruction. Andy’s proxies have an average resolution of 75 yrs and an average record length of 11,697 yrs with low standard deviations (by proxy series standards). There is no significant trend of degrading resolution with time, as occurs in most proxy reconstructions.

Andy May Resolution
Figure 12. Temporal resolution (left axis) and record length (right axis). Older is toward the right.

Andy’s reconstruction demonstrates that the nadir of the Little Ice Age was the coldest climatic period of the Holocene. This is a feature of every non-hockey stick reconstruction and even most hockey stick reconstructions, including the serially flawed Marcott et al., 2013.  It also demonstrates that the modern warming is inconspicuous relative to the Holocene’s pervasive millennial-scale climate signal (Bohling & Davis, 2001).

If you open the Reconstruction References spreadsheet and go to the far right column (Comments), Andy notes whether the proxy was “used” or explains why it was rejected. The three most common reasons for rejecting proxy series were:

  1. Coarse resolution (denoted as “resolution too big”)
  2. Not old enough
  3. Not young enough

Andy could have spliced the instrumental record onto the end of this and made a hockey stick… But that would be fraudulent anywhere outside of academic and government “science”. It’s akin to splicing a well log into a seismic line and calling it an anomaly.

Regarding Marcott, the authors even state that their Holocene reconstruction can’t be directly compared to instrumental data due to resolution differences… Yet they do so anyway.

Q: What do paleotemperature reconstructions show about the temperature of the last 100 years?

A: Our global paleotemperature reconstruction includes a so-called “uptick” in temperatures during the 20th-century. However, in the paper we make the point that this particular feature is of shorter duration than the inherent smoothing in our statistical averaging procedure, and that it is based on only a few available paleo-reconstructions of the type we used. Thus, the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions. Our primary conclusions are based on a comparison of the longer term paleotemperature changes from our reconstruction with the well-documented temperature changes that have occurred over the last century, as documented by the instrumental record. Although not part of our study, high-resolution paleoclimate data from the past ~130 years have been compiled from various geological archives, and confirm the general features of warming trend over this time interval (Anderson, D.M. et al., 2013, Geophysical Research Letters, v. 40, p. 189-193; http://www.agu.org/journals/pip/gl/2012GL054271-pip.pdf).


Q: Is the rate of global temperature rise over the last 100 years faster than at any time during the past 11,300 years?

A: Our study did not directly address this question because the paleotemperature records used in our study have a temporal resolution of ~120 years on average, which precludes us from examining variations in rates of change occurring within a century. Other factors also contribute to smoothing the proxy temperature signals contained in many of the records we used, such as organisms burrowing through deep-sea mud, and chronological uncertainties in the proxy records that tend to smooth the signals when compositing them into a globally averaged reconstruction. We showed that no temperature variability is preserved in our reconstruction at cycles shorter than 300 years, 50% is preserved at 1000-year time scales, and nearly all is preserved at 2000-year periods and longer. Our Monte-Carlo analysis accounts for these sources of uncertainty to yield a robust (albeit smoothed) global record. Any small “upticks” or “downticks” in temperature that last less than several hundred years in our compilation of paleoclimate data are probably not robust, as stated in the paper.

Real Climate

If the “the 20th century portion of our paleotemperature stack is not statistically robust”… Why was it included in the publication? The modern instrumental record would be a single data point at the resolution of Marcott’s reconstruction.
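Marcott’s own numbers, no variability preserved below 300-yr periods, about 50% at 1,000 years, nearly all at 2,000 years and longer, describe the frequency response of a smoothing filter. A simple boxcar moving average is only a stand-in for their actual Monte Carlo procedure, but a hypothetical ~600-yr effective window happens to reproduce those three numbers fairly closely:

```python
import math

def boxcar_gain(window_yr, period_yr):
    """Amplitude gain of a moving average of length window_yr applied to a
    sinusoid of period period_yr (continuous-time sinc approximation)."""
    x = window_yr / period_yr
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

# Hypothetical ~600-yr effective smoothing window
for period in (300, 1000, 2000):
    print(period, round(boxcar_gain(600, period), 2))
```

At a 300-yr period the gain is zero, at 1,000 years it is ~0.5, and at 2,000 years ~0.86, so a century-scale instrumental “uptick” simply cannot be compared against such a reconstruction on equal terms.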

Why does this matter?

So, what would it mean, if the reconstructions indicate a larger (Esper et al., 2002; Pollack and Smerdon, 2004; Moberg et al., 2005) or smaller (Jones et al., 1998; Mann et al., 1999) temperature amplitude? We suggest that the former situation, i.e. enhanced variability during pre-industrial times, would result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios. If that turns out to be the case, agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought.

Esper et al., 2005

It matters because the only way to directly compare the instrumental data to the pre-industrial proxy data is to filter the instrumental data down to the resolution of the proxy data.  This leads to climate reconstructions with “enhanced variability during pre-industrial times” and “result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios.”
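Filtering the instrumental record “down” is straightforward: block-average the annual series into proxy-width bins. The series below is a synthetic stand-in, not real HadCRUT data:

```python
import numpy as np

def to_proxy_resolution(annual_values, sample_interval_yr):
    """Block-average an annual series into bins of sample_interval_yr years,
    mimicking what a coarse proxy would record."""
    annual_values = np.asarray(annual_values)
    n = len(annual_values) // sample_interval_yr * sample_interval_yr
    return annual_values[:n].reshape(-1, sample_interval_yr).mean(axis=1)

# 100 flat years followed by a 50-yr, 1 degree warming ramp
annual = np.concatenate([np.zeros(100), np.linspace(0.0, 1.0, 50)])
coarse = to_proxy_resolution(annual, 75)
print(coarse)
```

At 75-yr resolution the sharp modern ramp collapses into a single, modest bin, which is how it would appear if it sat anywhere in the middle of a proxy reconstruction instead of at the end.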

It matters because the advocates of the Anthropocene as a geologic epoch are relying on the Marcott hockey stick.

Figure 13. Run Away! The Anthropocene Has Arrived!!! Older is toward the left.

It matters because hockey sticks are being used to justify policy changes and carbon taxes, and to destroy individual liberty and prosperity.

Most of the asserted evidence that recent climate changes deviate from the norms of the Holocene is equally consistent with being the result of differences in the resolution of paleo-climate data and instrumental records.

Part deux will address carbon dioxide and sea level hockey sticks.

References

Anklin, M., J. Schwander, B. Stauffer, J. Tschumi, A. Fuchs, J. M. Barnola, and D. Raynaud (1997), “CO2 record between 40 and 8 kyr B.P. from the Greenland Ice Core Project ice core,” J. Geophys. Res., 102(C12), 26539–26545, doi: 10.1029/97JC00182.

Christiansen, B. and F.C. Ljungqvist. 2012. “The extra-tropical Northern Hemisphere temperature in the last two millennia: reconstructions of low-frequency variability”. Climate of the Past, Vol. 8, pp. 765-786. www.clim-past.net/8/765/2012/ doi:10.5194/cp-8-765-2012

Davis, J. C., and G. C. Bohling. 2001. “The search for patterns in ice-core temperature curves”, in L. C. Gerhard, W. E. Harrison, and B. M. Hanson, eds., Geological perspectives of global climate change, p. 213–229.

Esper, J., E.R. Cook, and F.H. Schweingruber. 2002.  “Low-Frequency Signals in Long Tree-Ring Chronologies for  Reconstructing Past Temperature Variability”.  Science, Volume 295, Number 5563, 22 March 2002.

Esper, J., R.J.S. Wilson,  D.C. Frank, A. Moberg, H. Wanner, & J. Luterbacher.  2005.  “Climate: past ranges and future changes”.  Quaternary Science Reviews 24: 2164-2166.

Etheridge, D.M., L.P. Steele, R.L. Langenfelds, R.J. Francey, J.-M. Barnola and V.I. Morgan. 1998. “Historical CO2 records from the Law Dome DE08, DE08-2, and DSS ice cores”. In Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.

Finsinger, W. and F. Wagner-Cremer. “Stomatal-based inference models for reconstruction of atmospheric CO2 concentration: a method assessment using a calibration and validation approach”. The Holocene 19,5 (2009) pp. 757–764

Kouwenberg, LLR. 2004. “Application of conifer needles in the reconstruction of Holocene CO2 levels”. PhD Thesis. Laboratory of Palaeobotany and Palynology, University of Utrecht.

Ljungqvist, F.C. 2009. N. Hemisphere Extra-Tropics 2,000yr Decadal Temperature Reconstruction. IGBP PAGES/World Data Center for Paleoclimatology Data Contribution Series # 2010-089. NOAA/NCDC Paleoclimatology Program, Boulder CO, USA.

Ljungqvist, F.C. 2010. “A new reconstruction of temperature variability in the extra-tropical Northern Hemisphere during the last two millennia”. Geografiska Annaler: Physical Geography, Vol. 92 A(3), pp. 339-351, September 2010. DOI: 10.1111/j.1468-0459.2010.00399.x

Mann, Michael,  Zhihua Zhang, Malcolm K Hughes, Raymond Bradley, Sonya K Miller, Scott Rutherford, & Fenbiao Ni. (2008). “Proxy-based Reconstructions of Hemispheric and Global Surface Temperature Variations over the Past Two Millennia”. Proceedings of the National Academy of Sciences of the United States of America. 105. 13252-7. 10.1073/pnas.0805721105.

McElwain et al., 2002. “Stomatal evidence for a decline in atmospheric CO2 concentration during the Younger Dryas stadial: a comparison with Antarctic ice core records”. J. Quaternary Sci., Vol. 17 pp. 21–29. ISSN 0267-8179

Moberg, A., D.M. Sonechkin, K. Holmgren, N.M. Datsenko & W. Karlén. 2005. “Highly variable Northern Hemisphere temperatures reconstructed from low-and high-resolution proxy data”. Nature, Vol. 433, No. 7026, pp. 613-617, 10 February 2005.

Van Hoof, Thomas, Karsten A. Kaspers, Friederike Wagner-Cremer, R.S.W. van de Wal, Wolfram Kürschner & Henk Visscher. (2005). “Atmospheric CO2 during the 13th century AD: reconciliation of data from ice core measurements and stomatal frequency analysis”. Tellus B. 57. 10.3402/tellusb.v57i4.16555.

For additional references, see these Watts Up With That posts:

Addenda

Comparison of CL2012 and HadCRUT4 50-yr averages.
Advertisements

145 thoughts on “Resolution and Hockey Sticks, Part 1”

  1. The question still remains whether Michael Mann deliberately used a deceptive method, or is just a victim of his own noble cause corruption and did not doubt a result that fit his purpose.

    • I tend to think it’s the latter. In my discussions with the authors of the Anthropocene paper, I got the distinct impression that they didn’t understand the problem.

      I think much of this can be attributed to Kuhn’s “different worlds” phenomenon…

      “Practicing in different worlds, the two groups of scientists see different things when they look from the same point in the same direction. Again, that is not to say that they can see anything they please. Both are looking at the world, and what they look at has not changed. But in some areas they see different things, and they see them in different relations one to the other. That is why a law that cannot even be demonstrated to one group of scientists may occasionally seem intuitively obvious to another.”

      • Great article!

        Many of these hockey sticks appear to have been the result of a careless, if not reckless, disregard of basic signal processing principles.

        It’s highly likely the perpetrators have no clue about basic signal processing principles. It’s a PhD version of Dunning-Kruger, wherein folks assume that the possession of said PhD means they know everything.

        The problem is exacerbated by the ease with which one can throw statistical tools at a dataset until one of them produces an interesting result. It’s called P-hacking.

        • Wasn’t there one of the climate-gate emails where one of the climate crew reported doing something to obtain a false hockey stick and reporting that he finally ‘understood what this McIntyre person’ meant?

          That seemed as close as you can get to a confession that they didn’t really understand the issues, and were very reluctant to even consider them, much less publicly admit error.

      • I tend to think it’s the latter.

        When one considers part two of the story, it is more difficult to come to that conclusion. In the climategate emails, Jones says he used “Mike’s Nature Trick” to “hide the decline”. What decline?

        Jones was referring to the decline in tree ring temperature reconstructions in the most recent few decades while the instrumental record was rising, commonly called the “divergence problem”. To allow the public to easily see that tree ring data moved in the opposite direction from instrumental data for nearly 1/3 of the instrumental data set would have discredited tree rings as a proxy in a single, obvious, graph. Jones used Mike’s trick to hide that decline.

        I don’t think you can discuss Mike’s trick in complete isolation from the manner in which Jones used it. Once you have both pieces of the story, coming to the conclusion that it was “benign” is pretty tough, for me at least.

          • Quite a few things were done wrong… Mike’s Nature Trick might have actually improved MBH98… 😎

          • OK, let’s start with the first part. What was actually done in MBH98 (Nature)?

            Nice try. The whole article was about the first part. Why so anxious to not discuss the second part Nick?

          • “The whole article was about the first part.”
            No. The first part was what Mann actually did in MBH98 (Nature). I’ve said a bit, but no-one else has actually said what it was.

          • David,
            A lot of the article is about “Mike’s Nature trick”. That is, whatever it was that he did in Nature. MBH98 is the only HS article he has written in Nature. So whatever he actually did is highly relevant.

            From: Phil Jones
            To: ray bradley ,mann@xxxxx.xxx, mhughes@xxxx.xxx
            Subject: Diagram for WMO Statement
            Date: Tue, 16 Nov 1999 13:31:15 +0000
            Cc: k.briffa@xxx.xx.xx,t.osborn@xxxx.xxx

            Dear Ray, Mike and Malcolm,

            Once Tim’s got a diagram here we’ll send that either later today or first thing tomorrow. I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd from 1961 for Keith’s to hide the decline. Mike’s series got the annual land and marine values while the other two got April-Sept for NH land N of 20N. The latter two are real for 1999, while the estimate for 1999 for NH combined is +0.44C wrt 61-90. The Global estimate for 1999 with data through Oct is +0.35C cf. 0.57 for 1998.

            Thanks for the comments, Ray.

            Cheers
            Phil

            Prof. Phil Jones
            Climatic Research Unit    Telephone +44 (0) xxxxx
            School of Environmental Sciences    Fax +44 (0) xxxx
            University of East Anglia
            Norwich    Email p.jones@xxxx.xxx
            NR4 7TJ
            UK

            It doesn’t appear that Mike used his Nature Trick, as described in Phil Jones’ email, in MBH98…

            I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd from 1961 for Keith’s to hide the decline.

            The main problem with MBH98 was a fatally flawed principal components analysis. That and over-reliance on tree ring chronologies of Bristlecone Pine trees, which reflected CO2 fertilization, not warming.

            This was Mann’s response on Real Climate regarding Mike’s Nature Trick…

            No researchers in this field have ever, to our knowledge, “grafted the thermometer record onto” any reconstruction. It is somewhat disappointing to find this specious claim (which we usually find originating from industry-funded climate disinformation websites) appearing in this forum.

            https://wattsupwiththat.com/2009/11/20/mikes-nature-trick/

            You are correct that the original Mike’s Nature Trick wasn’t simply grafting the instrumental data onto the reconstructions.

            When smoothing these time series, the Team had a problem: actual reconstructions “diverge” from the instrumental series in the last part of 20th century. For instance, in the original hockey stick (ending 1980) the last 30-40 years of data points slightly downwards. In order to smooth those time series one needs to “pad” the series beyond the end time, and no matter what method one uses, this leads to a smoothed graph pointing downwards in the end whereas the smoothed instrumental series is pointing upwards — a divergence. So Mann’s solution was to use the instrumental record for padding, which changes the smoothed series to point upwards as clearly seen in UC’s figure (violet original, green without “Mike’s Nature trick”).

            However, that’s exactly what he did in Mann et al., 2008. And that hockey stick is still one of the “go to” reconstructions for policy makers and scientists unfamiliar with its flaw.

        • I think that an important point to be made here is that if the email exchange had said, “I used Mike’s Nature Trick to better illustrate what was happening,” we could reasonably assume a benign intent. However, the phrase “to hide the decline,” casts it in an entirely different light!

          • Christiansen & Ljungqvist, 2012 (gray curve) looks very much like the pre-Mike’s Nature Trick reconstructions…

            41 of their 91 proxies were tree ring chronologies, mostly ring density and width.

            Tree rings are useful high resolution proxies; but they do diverge from the instrumental data because they aren’t “tree-mometers”. This is why you can’t just substitute the instrumental data for the proxies. They aren’t interchangeable. Unless there is a direct mathematical translation, like d18O, you can’t even directly compare them.

      • “To hide the decline”. Those were the words used. Using “hide” about data in science is simply not something that should be done. And they knew the proxy data had to be hidden.

    • People may want to give Mann the benefit of the doubt but he has a history of skillfully wording statements that make lies by omission and obfuscation – even in congressional hearings. He does it too well for it to be accidental. This is his little intellectual kingdom and he polices it like a tyrant.

      • And has he ever come out and admitted what he did was wrong? No, he continues to defend it.

    • Another possibility is that their knowledge of basic statistics is so lacking that they simply aren’t able to comprehend why what they did was invalid.

      • He had the proxy data showing a decline in temperature, which shows there is a serious problem with the proxies if you trust the instrumental record. So he omitted those proxies and didn’t mention the problem.

        I’m struggling to see anything other than a deliberate attempt to not portray the full picture.

        • The only real defense is that they needed a “quick fix” for what they viewed as a minor problem.

    • It is reasonable to give them the benefit of the doubt and to assume that they thought they had gotten the resolutions right. What is impossible then to accept – even for the scientifically illiterate – is that their proxy record was a “proxy” for temperature at all, because it was turning down right at the point they spliced in the upward marching instrumental record! ;-(

      • I genuinely don’t think they understand the problem because they keep reverting to the same argument: “But, the instrumental data demonstrate warming!”

        They really do seem clueless.

  2. I think a lot of the almost absurd statistical ineptness of climate scientists is due to the existence of easy-to-use “statistical packages”. In the past, if you wanted to use statistics you had to make the calculations by hand, and that meant you had to learn and understand the methods you used. No longer. You just input your data, tick a number of boxes, and out come the pretty graphics, but you have not the slightest idea whether the algorithms used are really appropriate or even applicable.

    I’ve seen any number of elementary blunders, like assuming a normal distribution for data that obviously isn’t normal, ignoring autocorrelation, ignoring regression dilution, using absurd Bayesian priors, and so on ad nauseam.

    Usually a quick scan of the latest unprecedentedly-much-worse-than-we-thought paper will find some such elementary blunder that should of course have been caught by the peer review, but wasn’t.
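    The autocorrelation blunder mentioned above is easy to demonstrate. This is a minimal sketch on invented AR(1) data (not any real temperature series): strongly autocorrelated noise with no trend at all, where the naive OLS standard error of a fitted slope badly understates the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate strongly autocorrelated noise (AR(1), phi = 0.9) with NO real trend.
n, phi = 200, 0.9
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + e[i]

t = np.arange(n, dtype=float)
slope, intercept = np.polyfit(t, x, 1)
resid = x - (slope * t + intercept)

# Naive OLS standard error of the slope assumes independent residuals.
se_naive = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))

# Lag-1 correction: autocorrelated residuals shrink the effective sample
# size, so the honest standard error is larger than the naive one.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt(n / n_eff)

print(se_adj / se_naive)  # considerably larger than 1
```

    With lag-1 residual autocorrelation near 0.9, the honest standard error is several times the naive one, so “significant” trends appear in pure noise.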

    • I think a lot of the almost absurd statistical ineptness of climate scientists is due to the existence of easy-to-use “statistical packages”.

      Too right.

      It is interesting that when John Ioannidis criticized the statistical methods often used in medical research https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124 there was general agreement that his criticism had some validity and that researchers could do better. When people make similar criticisms of climate science, they get called names.

      I think there is a similar issue with data fits. Typically, a variety of data fits are just a mouse click or a couple of keystrokes away. I don’t think folks get much guidance about which to use when, or about the possible pitfalls of a poor choice.

    • Peer review isn’t about whether the science is right. Just if it complies with the correct narrative.

  3. As a fellow oil and coal geologist, I thank you, David, for presenting this. For the past twenty years I have been screaming (mainly in my head) about the incredibly poor use of data in “climate science”. For what the data are being used for, they are really barely fit for purpose. Your comparison of the downhole geophysical log with seismic is a perfect analogy. I firmly believe geologists (I refuse to use the term “geoscientist”) are well placed to understand disparate data; we have had to learn to utilise anything we can get.
    I enjoy your geological perspectives.

    • +1 You did a masterful job of the log to seismic analogy (and never once mentioned Msr. Fourier!)

    • I use geoscientist because I have lost track of what my job is.

      I have a BS in Earth Science.
      From 1981-2007, my job title included the word geophysicist.
      From 2007-2013, it was VP of Exploration.
      Since 2013, it’s been Sr. Geologist.
      I’m an active member of the SEG (1981) and an associate member of the AAPG (2004).

      I figure I’m now either a geoscientist or a dot-connecting PowerPointer… 😎

      • I love geology, it’s probably one of the most unreliable ways to make a living but it just gets under your skin.

      • Don’t forget you’re an accomplished “Excel-er” too.

        (I have forgotten so much of my Excel stuff from 9 years ago in trying to do my Solar Flare/AR triggering project paper … sad how quickly one loses it. And my MatLab … even worse.)

      • You are a scientist, Dr. Middleton. Our jobs are to take data, analyze it honestly and to the best of our ability, and understand and vet our tools. Then we move to PowerPoint to explain what we have learned to those who pay us for our work. That is usually the impossible dream.

  4. “So, they just substituted the instrumental record for the tree ring chronology. In the private sector, this sort of behavior isn’t benign… It’s grounds for immediate termination or worse.”
    As Steve McIntyre unsuccessfully tried to explain for many years, that isn’t “Mike’s Nature trick”. MBH98 did no such splicing. What SM objected to was that in smoothing the proxy curve to present, they used the instrumental values as padding. When you smooth any curve to the endpoint, you have to incorporate some idea of what lies beyond, even if it is only an extrapolation. One can well ask whether the curve should be smoothed to the end. But if it is done, one should certainly use the best information available. Instrumental data can’t be ignored.

    • The splicing “trick” and the “hide the decline” deception, done without letting the reader know what was done, are clear evidence they knew they were being deceptive.
      All those perps (authors) should have been lifetime-banned from further NSF/government grants.

      Because in any other field that would have been the outcome. Climate had become politicized. The need for the hockey stick meant it couldn’t be allowed to be falsified/retracted. The perps dug their heels in knowing they had top-cover.

    • Bullshit, Nick. The “Team” deliberately set out to deceive their audience:

      1) They arbitrarily cut off the proxy data when it began to deviate from the narrative;

      2) They jiggered with the data for the proxy endpoint and the instrument beginning point to make them visually mesh in the graph; and

      3) They used the exact same color for the proxy and instrument lines on the graph to hide the fact that they were different quantities.

      Those facts, in and of themselves, are enough to prove multi-Trillion dollar fraud.

      • Agreed Dave.

        They had a clear “intent to deceive” = fraud.
        In retrospect, from 1998 through everything since, there is no other interpretation when everything is put on the table and examined.

      • “Those facts, in and of themselves, are enough to prove multi-Trillion dollar fraud.”
        I doubt that Jones putting a figure on the cover of the WMO 2000 report constitutes a “multi-Trillion dollar fraud”.

    • Bullshit again, Nick. The data to the endpoint was there; they just didn’t like it and fraudulently replaced it with data they liked. Nobody who is honest “pads” data with other, unrelated data.

    • they just substituted the instrumental record for the tree ring chronology

      MBH98 did no such splicing

      in smoothing the proxy curve to present, they used the instrumental values as padding

      Nick – it sounds as if you’re splitting hairs, for no more reason than needing to paint David as having made a factual misstatement. It doesn’t matter much to me whether MBH98 substituted instrumental for proxy data, or blended them in some arcane way (is “padding” a recognised statistical procedure? – I don’t know enough statistics to judge).

      If you’re going to use different kinds of data to show a trend over time, you should plot them together, and try whatever statistical tools you think are appropriate, to assess how well they correlate with each other. If they correlate well over a chosen test period (or better, 2 or 3 test periods), then you can start blending them and using one to extrapolate the other beyond its own range.

      But they didn’t correlate well, did they? The instrumental data showed a distinct 20th century warming, and the tree ring proxies didn’t; in fact they showed a bit of a 20th century decline. That’s why “we need to hide the decline”, and “Mike’s Nature trick” was a way of doing just that (but never actually saying so).

      And then this (have you been drinking, Nick?):

      When you smooth any curve to the endpoint, you have to incorporate some idea of what lies beyond

      No, no and no. You can smooth the data to the end; there’s nothing at all wrong with that, but you should plot the smoothed numbers at the mid-point of the smoothing interval. You can choose to plot the smoothed data at the end of the smoothing interval, but that’s really just a trick to make it look more current than it is. The data tell you nothing about what lies beyond the end, unless they show an unequivocal pattern, cyclic or monotonic or whatever, over their whole range that you can extrapolate. That wasn’t the case with the tree rings, and generally climate data don’t show well enough defined trends to extrapolate them beyond their terms.

      Using tree rings to estimate palaeo-temps would be OK if there was a lot of hard data to show how well it works. If there is such hard data, I haven’t seen any of it yet.

      In my opinion, David has been much too kind in focussing on differences in temporal resolution between instrumental and tree ring temperatures. All good, but IMHO, more important is the dubious value of tree rings as a temperature proxy.
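      The overlap test suggested above is straightforward to sketch. With hypothetical series (numbers invented purely for illustration) in which the proxy tracks the instrumental record early on and then diverges, the early-overlap correlation is strong while the late-overlap correlation collapses:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)

# Hypothetical series, invented for illustration: the "instrumental" record
# warms through the century; the "proxy" tracks it until 1960, then declines.
instr = 0.008 * (years - 1900) + 0.05 * rng.normal(size=years.size)
proxy = np.where(years < 1960,
                 0.008 * (years - 1900),
                 0.48 - 0.01 * (years - 1960)) + 0.05 * rng.normal(size=years.size)

early = years < 1960
r_early = np.corrcoef(instr[early], proxy[early])[0, 1]   # strong agreement
r_late = np.corrcoef(instr[~early], proxy[~early])[0, 1]  # divergence

print(r_early > 0.7, r_late < 0)
```

      A blend justified by the early window alone hides exactly the behavior the late window reveals.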

      • “It doesn’t matter much to me whether MBH98 substituted instrumental for proxy data”

        It matters to know what we are talking about. Otherwise there is no point in talking. What is supposed to be wrong with MBH98 plotting?

        “You can smooth the data to the end; there’s nothing at all wrong with that, but you should plot the smoothed numbers at the mid-point of the smoothing interval.”
        If you do that, the smoothing interval has to contract to zero, and it isn’t smooth any more. And a plot with varying smoothing interval throughout can be misleading in other ways. I think there is a legitimate argument about whether smoothing to the end should be done at all. But if you do, then yes, you have to use some kind of padding. Otherwise the smoothed points do not represent the points in time stated, but are centered further back.

        Smoothing to the end necessarily involves some idea about the time beyond, but it is not attempting to forecast. It is just giving the best possible estimate of the end value.
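        What the disagreement turns on can be shown with a toy series (hypothetical numbers, chosen only to illustrate the mechanics): the smoothed endpoint of a declining proxy depends directly on what you pad with.

```python
import numpy as np

# Hypothetical proxy anomalies, invented for illustration, declining at the end:
proxy = np.array([0.10, 0.20, 0.15, 0.05, -0.05, -0.15])
instr = np.array([0.25, 0.35])   # warmer "instrumental" values beyond the proxy

w = 5
# Option A: pad the end of the proxy with its own last value.
end_a = np.convolve(np.r_[proxy, proxy[-1], proxy[-1]],
                    np.ones(w) / w, mode="valid")[-1]
# Option B: pad with the instrumental values instead.
end_b = np.convolve(np.r_[proxy, instr],
                    np.ones(w) / w, mode="valid")[-1]

print(end_a, end_b)  # padding choice flips the sign of the endpoint here
```

        Padding with the last proxy value keeps the smoothed endpoint declining; padding with warmer instrumental values pulls it up.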

        • Give up Nick. You are trying to defend a fraud. Your “Smoothing to the end necessarily involves some idea about the time beyond, but it is not attempting to forecast.” indicates you are abandoning logic in the attempt.

  5. David,
    “The proxy data lack the high frequency component of the signal. When the high frequency component of a signal is filtered out, it attenuates the amplitude.”
    I think you need to be more specific about what signal and what proxies you are referring to. Instrumental would usually be annual averages, with possibly more smoothing. Tree-ring data has annual resolution.

    • The raw instrumental data are sampled once or twice a day. HadCRUT4 and other hemispheric and global instrumental reconstructions have at least monthly resolution. Tree ring chronologies, probably the highest frequency proxy, are, at best, annual averages and more reflective of growing season than annual averages.

      If you record data at a much higher sample rate, it still has more frequency content and a greater dynamic amplitude range than data recorded at a lower sampling rate, even if you present them at a common resolution.
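      A minimal sketch of the sampling-rate point, using an invented daily “temperature” series (the numbers are illustrative, not real data):

```python
import numpy as np

days = np.arange(365 * 10)
# Invented daily "temperature": a small decadal trend plus a large seasonal cycle.
daily = 0.0001 * days + 10.0 * np.sin(2 * np.pi * days / 365.25)

# The same data viewed at annual resolution, like a tree-ring chronology:
annual = daily.reshape(10, 365).mean(axis=1)

daily_range = daily.max() - daily.min()     # close to 20, the full seasonal swing
annual_range = annual.max() - annual.min()  # well under 1, mostly just the trend

print(daily_range, annual_range)
```

      Both views contain the same decadal trend, but only the daily record retains the seasonal dynamic range; annual averaging removes the high-frequency content entirely.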

      • David,
        “Tree ring chronologies, probably the highest frequency proxy, are, at best, annual averages”

        Trees get sunshine every day. The growth over a season represents the integral as performed by the tree.

        Instrumental averages for comparison are usually calculated over a growing season. But in terms of warming, it usually doesn’t matter what annual assessment you use.

        I can’t see what bad effect you expect from the high frequency components of instrumental, after damping through annual averaging, and then probably 5 or 10 year smoothing. It’s contrary to your seismo analogy, since the effect you are looking for, decadal warming, is easily resolved on both measuring scales.

        • Nick,

          The seismic analogy is an analogy… not the exact same thing. I’ll try to explain this more clearly in a future comment.

        • Trees get sunshine every day. The growth over a season represents the integral as performed by the tree.

          Yeah sure. Let’s just pretend that rainfall, last frost in spring, first frost in fall, pests and diseases (for starters) are all exactly the same every year and the only factor that varies is sunshine.

        • That’s an idiotic statement in the context of this discussion! If you had a high resolution thermometer and a low resolution one, you might have a point. But you are comparing a low resolution PROXY for temperature with a high resolution MEASUREMENT of temperature. Temperature is one factor in tree growth. Cold and sunny vs same temp and cloudy. Rainfall. I’m sure there are more.

          The same problems exist in ice cores and yet some ‘scientists’ have the nerve to say ‘we can only explain the modern warming with a greenhouse effect enhanced by CO2’ and ‘we have not had CO2 levels this high for 10s of thousands of years’.

          That constitutes fraud and you seem to be condoning it.

          • Heh. Back on Climate Audit, he had “Texas Sharpshooter” applied to him, if I am remembering correctly. Yes, Nick’s been going at this for roughly a decade. He had a lawyer one applied, too; that escapes me at this moment. Oh, yeah, Racehorse, as in “Racehorse” Haynes, from Texas.

  6. As soon as I read the title I knew it was another great post by David Middleton…Keep it coming…lol

  7. “Part deux will address carbon dioxide and sea level hockey sticks.”

    David….serious question

    Fossil fuel use has increased exponentially…
    …CO2 emissions have increased exponentially

    How is it that measured atmospheric CO2 continues to increase linearly?

    What I see is… they’re just cherry-picking dates on sea level… La Niña to El Niño

    • Assume a growing cow with a spherical stomach. Its appetite grows as the cube of its stomach’s radius.

  8. David Middleton
    “It matters because the only way to directly compare the instrumental data to the pre-industrial proxy data is the (to)? filter the instrumental data down to the resolution of the proxy data.”

    Sorry if it takes a while… My comments take about 5 hours to post, IF they get posted at all…lol. (7:34)

    [Your posts, and others, get caught in the WordPress Blacklist, and need to be manually retrieved from Trash. The good news is that the Blacklist appears to be apolitical. Mod.]

  9. Watch or read Mike Mann’s testimony today where he brags that he is the author of the “iconic” hockey stick. Later he claims that his evaluation of sediments shows that hurricanes were less strong in the past. I suspect that much of this analysis would discredit that evaluation too.

    • Roger Caiazza
      Yes, both temporal and spatial resolution decrease the farther back one goes in time. That is, resolution varies inversely with the age of the sediments. Any claims made about modern events being “unprecedented” without taking the resolution limitations into account reflect ignorance, willful misinterpretation, or both.

      • Yep… I didn’t even address horizontal resolution in this post… And it’s a big smoothing factor in multi-proxy reconstructions.

    • I happened to turn on the TV when Mann was asked a question about the fires, confidently answering that half were due to human-caused climate change and half to other problems. He struck me as a poorly educated salesman, like the used-car salesmen of old; they are smarter nowadays.

      I hope the Middleton School of Complete Earth and Ocean Geosciences exists long enough to put a treatise together for the coming millennia. Don’t forget the history of seismology, including doodlebugs and their sine waves.

  10. From the article: “The most benign possible interpretation of the “trick” is that they edited part of Keith Briffa’s reconstruction because the tree ring chronology showed that the 1930s to early 1940′s were warmer than the late 1990′s.”

    So, again, there is evidence that temperatures in the 1930’s, caused by Mother Nature, were as warm as the temperatures today. Which means there is no CAGW going on today, it’s just Mother Nature doing Her thing.

    I think we should concentrate more on the 1930’s as a way to debunk the claims of unprecedented warming today caused by CO2. The IPCC says the warming in the 1930’s was mostly driven by Mother Nature, and since the current warming today is no warmer than the 1930’s, it could very well be Mother Nature driving the current warming. A good scientist would assume it was Mother Nature’s doing until there is evidence to the contrary. There is no evidence to the contrary regarding CO2.

    We should point to the past warm periods like the Roman warm period and the Medieval warm period, but it seems to me that the 1930’s are a similar high point and we have so much data showing the warmth of the 1930’s: unmodified temperature records and newspaper accounts of the severe weather happening all over the globe during that period.

    The warmth of the 1930’s puts the lie to the CAGW fraud. That’s why the Climategate conspirators conspired to make the 1930’s warmth disappear from the official temperature records. But we still have the records and what they looked like before the conspirators got their hands on them and those unmodified charts tell a completely different story than what the bogus, bastardized Hockey Stick charts tell. The story is the Hockey Stick and its depiction of a CAGW “hotter and hotter” world is a Big Lie perpetrated by dishonest people.

    We aren’t any warmer now than in the 1930’s. In fact, we are about 0.5C cooler than 1998 and 2016, the two closest hottest years, and we are about 1.0C cooler than it was in 1934.

    • The global temperature graphs suffer for want of ocean temperature data prior to the satellite era. We have some shipping-lane information and the occasional research ship. As Phil Jones said, “we made it up”. Today we probably run the models backwards, which means they produce past cooling as a mirror reflection of future warming. No wonder the graph ends up following the CO2 concentration… voila! CO2 drives it!

  11. “It matters because hockey sticks are being used to justify policy changes, carbon taxes and destroy individual liberty and prosperity.”
    The policy issue is that, for over a hundred years, scientists have been saying that burning a whole lot of carbon will lead to warming. We are burning a whole lot of carbon. Is that safe?

    A natural query is, well, we’ve burnt a first instalment. Did that lead to warming? And it did get warmer, at about the rate expected.

    A further query is then, has similar warming been common in the past? Even if it had, the fact that this instance of warming matches the predicted effect of carbon burning would be significant. But the answer from paleo is, no, similar sudden warming events are not common – in fact possibly unprecedented in the Holocene.

    So then comes the resolution argument. Could there have been sudden warmings that we couldn’t resolve? This is really a long stretch from the main policy issues. But again, at least in the last millennium or so, the answer is no – there is enough in the tree-ring record to counter that. And even the lower resolution proxies aren’t that low. A fifty-year warming, on the 1/4 wavelength rule of thumb, should be apparent in a proxy with an effective wavelength of two centuries.
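    For reference, the λ/4 rule of thumb from the head post, translated from depth to time (a rough heuristic for proxy records, not a formal result):

```python
# Head-post rule of thumb: a feature is resolvable if it spans at least a
# quarter of the wavelength. Applied here to time rather than depth.
def min_resolvable_duration(effective_wavelength_years):
    """Shortest event duration resolvable by a proxy, per the lambda/4 heuristic."""
    return effective_wavelength_years / 4.0

# A proxy with a 200-year effective wavelength can resolve events of 50+ years:
print(min_resolvable_duration(200.0))  # 50.0
```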

    • Nick,

      You’ve asked several great, thoughtful questions. I’ll put together coherent responses tomorrow, when I get back to the office.

      • David,

        Glad I read your comment and look forward to your follow-up. I was about ready to put together a rather incoherent response. 🙂

    • Scientists say many things and some turn out (more or less) wrong. What counts is evidence. Even ignoring evidence, the “carbon burning leads to unsafe warming” claim looks like very bad science. It looks like BS propaganda and cheating.
      Here is a scientist saying that “burning a whole lot of carbon” will lead to (slight) cooling (ignoring other factors).
      https://www.researchgate.net/publication/239441369_Modeling_of_the_Earth's_Planetary_Heat_Balance_with_Electrical_Circuit_Analogy

      So yes, it is safe and good. Natural climate change trumps any hypothetical anthropogenic factors. Wasting money, resources and time on the carbon hysteria is very unsafe and bad. And it does nothing to reduce the CO2 emissions significantly, if at all. And it makes the rich richer and the poor poorer. It will do a great deal of damage to science and liberalism when the truth outs.

      The warming in the first half of the 20th century (natural) was about the same rate as the AGW in the second half. Various other evidence shows that it is not unprecedented in the Holocene, not even close.

        • Well, at least he knows how to solve a heat transfer problem. He is an astrophysicist, by the way, but it shouldn’t matter. Your response shows it all. Crackpot? Looks like projection to me.

          • “Well, at least he knows how to solve a heat transfer problem.”
            Well, he knows how to solve this problem:
            “These simplifications are brought to the idealized model of the system which contains the isothermal spherical core inside the isothermal spherical cover. The heat sources caused by the incoming solar radiation acts on the cover surface and in the cover (in W/m2). These heat sources are uniformly distributed. “
            But he doesn’t even get to a conclusion. He says
            “It is found that trends of the climate change caused by the increasing of the carbon dioxide emission depends on the whole set of parameters realized actually nowadays. There is the great interest to determine the values of the parameters as reliably and quickly as possible.”
            Great.

    • We are burning a whole lot of carbon. Is that safe?

      Is the alternative, NOT burning a whole lot of fossil fuel safe? A resounding no! That would sentence billions to starvation, misery and death. You cannot look at these issues in isolation.

      A natural query is, well, we’ve burnt a first instalment. Did that lead to warming? And it did get warmer, at about the rate expected.

      It got warmer at about the same rate as it has since the LIA. That is ALSO expected. So how much is which? You don’t know, so no conclusions can be drawn from your observation. However NOT burning fossil fuels still sentences billions…

      A further query is then, has similar warming been common in the past? Even if it had, the fact that this instance of warming matches the predicted effect of carbon burning would be significant.

      What predictions are those? The ones from the climate models? The ones that the IPCC set aside for running too hot? Or the academic papers which now range anywhere from 1.5 °C per doubling to 3+? ANY temperature change commensurate with the recovery from the LIA can be found in the models/papers. This isn’t even the Texas sharpshooter fallacy. It’s filling the side of the barn with bullet holes, painting a target around a few of them, and saying LOOK! Bullseye! IGNORE THOSE OTHER HOLES.

      But the answer from paleo is, no, similar sudden warming events are not common – in fact possibly unprecedented in the Holocene.

      So then comes the resolution argument. Could there have been sudden warmings that we couldn’t resolve? This is really a long stretch from the main policy issues. But again, at least in the last millennium or so, the answer is no – there is enough in the tree-ring record to counter that. And even the lower resolution proxies aren’t that low. A fifty year warming, on the 1/4 wavelength rule of thumb, should be apparent in a proxy covering two centuries wavelength.

      Yes Nick, we’ll gloss over the divergence problem. Over the course of the last 60 years or so, tree ring data have declined while instrumental data have climbed. That’s more than a third of the instrumental record! In other words, it doesn’t MATTER what the resolution is when we KNOW that the tree rings did NOT capture the CURRENT temperature rise, so there is NO REASON to assume that they captured similar temperature rises at any OTHER time in history.

      Which gets us back to we really don’t know very much about how much warming is man made, let alone if it is dangerous. What we do know is that literally billions of lives depend on the use of fossil fuels, and ceasing to use them would be a horrific disaster.

      • “It got warmer at about the same rate as it has since the LIA. That is ALSO expected. “
        A common fallacy. LIA is just a descriptive term applied to a cold period. We use the term because we know it got warmer. Just calling it the LIA does not give a reason why it should get warmer.

        But no, there is no period in the recent record where the temperature rose at 1.8 °/century for 44 years, total about 0.8°C. And more important, there is no sign that that current uptrend is going to end.

        “What predictions are those?”
        They go back to Arrhenius, who predicted 4°C/doubling. Estimates of sensitivity have since varied, but they are all positive. Warming was predicted, warming happened.

        “Over the course of the last 60 years or so, tree ring data have declined while instrumental data have climbed.”
        Actually, not true. The decline that Briffa wrote about was in Fenno-Scandia and N Russia. As I noted elsewhere, Mann 2008 explicitly set out in a series of plots how various proxy data behaved since 1850, compared with instrumental.

        “ceasing to use them would be a horrific disaster”
        We’ll run out some day, if we keep burning. Figuring out how to manage without will come. We can try to do it sooner.

        • Nick, your “But no, there is no period in the recent record where the temperature rose at 1.8 °/century for 44 years, total about 0.8°C.” is a blatant exercise in misdirection. We had a similar warming rate for 30+ years in the early 20th Century. Additionally, without the recent Super El Nino there would be no 44 years. [0.8C in the last 44 years blows right by the alarmists’ 1.5C existential limit.]

          Your “And [sic.] more important, there is no sign that that current uptrend is going to end.” is a screamer. Do you rent out your crystal ball? Also, your “current uptrend” includes the Super El Nino, and it has already been in a downtrend.

          • “We had a similar warming rate for 30+ years in the early 20th Century”
            My numbers are Hadcrut. 1910-45 is 35 years, warming at 1.39C/century, total rise 0.54°C. And still with rising GHGs, though at a smaller rate. And there were Ninos then too.
            As for crystal ball, I said there was no indication. We know the earlier rise was followed by a pause (probably aerosols). This one has not ended.

        • A common fallacy. LIA is just a descriptive term applied to a cold period. We use the term because we know it got warmer.

          Uhm… that was my point.

          Just calling it the LIA does not give a reason why it should get warmer.

          Which was also my point…

          And more important, there is no sign that that current uptrend is going to end.

          Nor any sign that it’s not. Unless you take into account physics which REQUIRES that it stop at some point… or are you just going to ignore Stefan-Boltzmann now?

          They go back to Arrhenius, who predicted 4°C/doubling. Estimates of sensitivity have since varied

          Uhm… that was my point. Again.

          Actually, not true. The decline that Briffa wrote about

          THEN WHY BOTHER TO HIDE IT?!?

          We’ll run out some day, if we keep burning. Figuring out how to manage without will come. We can try to do it sooner.

          We’re in the middle of the Sahara desert, Nick. We’ve got a semi-trailer full of water bottles. I note that if we don’t drink the water, we’ll die. You say no, no, we shouldn’t drink the water because too much water will kill you (true), we’ll eventually run out (also true), so we should start figuring out how to survive without water.

          Well, you can sentence yourself to certain death, Nick, but leave the rest of us out of it. We’re going to drink the water we have while we walk out of the desert looking for new sources of water. I haven’t seen a thread where you made such a total mess of it in a very long time. Perhaps ever. There was a time when I considered you an informative though frustrating pain in the butt. Now you’re just silly.

          • You apparently did not spend much time over at Climate Audit when Steve was actively auditing. Over there Nick was given the nickname “Racehorse Nick”, and it seemed it was well deserved. Go over there and dig around and you will see Nick’s true colors. He defends “the cause” until death, it seems.
            Over the years he defended Mann unequivocally, and as I recall he defended Mann’s ’98 paper as a good first attempt at temperature reconstruction.

    • Bullshit again, Nick. The recent minor warming does not match your “… predicted effect of carbon burning …” Warming has been running 2 to 3 times less than that predicted by the UN IPCC climate models.

      • Not true. But in any case, the last decade or so where we have IPCC predictions is a small part of the total. We have over 40 years of warming at about the expected rate.

        • Nick Stokes

          We have over 40 years of warming at about the expected rate.

          And before that we had 30 years of cooling.
          And before that we had 30 years of warming at the same rate.
          And before that we had 30 years of cooling at the same rate.

          We have been recovering from the Little Ice Age since 1650.
          Ain’t reached the top they had in the Medieval Warming Period. Yet.
          Ain’t reached the top they had in the Roman Warming Period. Yet.
          Ain’t reached the top they had in the Minoan Warming Period. Yet.

          • Mike’s trees aren’t part of this…

            Not true. But in any case, the last decade or so where we have IPCC predictions is a small part of the total. We have over 40 years of warming at about the expected rate.

          • But the mid-troposphere is where GHGs are supposed to act. Theory, and the UN IPCC climate models, tell us that warming trends in the troposphere are greater than surface warming trends.

            Actual data tell us that is not true. Who you gonna believe, the UN or your lyin eyes?

          • In the Warmunist world, CO2 has to be measured at elevations above ground effects, or where no ground effects are present… But temperatures measured above the elevation of ground effects are invalid… These must be measured at “ground effects central” (airports) in order to be accurate… Do I need a /Sarc tag?

          • David, your graph of the comparisons from John Christy’s 2015 Senate testimony has several flaws. First, if the moving averages are applied as trailing averages rather than centered averages, those curves are shifted by 2.5 years relative to the dates of the measurements. Christy admitted that the last few data points were filled in to give the appearance of complete data through 2015. Also, using a 5-year moving average folds the sample period into the filtered data, an effect which your other wavelet filters minimize.

            Then too, the UAH graph doesn’t plot actual model output, but the results of a conversion process to simulate the UAH satellite product, which isn’t specified on the graph. A similar conversion must be applied to the balloon data as well. It might have been their “MT” product, which is contaminated by an influence from the stratosphere, which has a well-documented cooling trend. That’s the reason for the conversion, a process which does not take into account the fact that there are differences between a fixed set of mid-latitude weights and the proper weightings required for both seasonal and latitude variations. Of course, your addition of surface data completely misses the fact that the UAH result is “mid-troposphere” and isn’t representative of surface data.

          • I think my HadCRUT4 and UAH plots were centered 13-month averages.

            The greenhouse effect mostly occurs in the mid-troposphere.

            Let’s just take John Christy out of the equation…

            Fig. 1. Global (70S to 80N) Mean TLT Anomaly plotted as a function of time. The black line is the time series for the RSS V4.0 MSU/AMSU atmospheric temperature dataset. The yellow band is the 5% to 95% range of output from CMIP-5 climate simulations. The mean value of each time series average from 1979-1984 is set to zero so the changes over time can be more easily seen. Note that after 1998, the observations are likely to be in the lower part of the model distribution, indicating that there is a small discrepancy between the model predictions and the satellite observations. (All time series have been smoothed to remove variability on time scales shorter than 6 months.)

            http://www.remss.com/research/climate/

            Tracking below 95% of the model runs isn’t warming “at about the rate expected”. The only time the observations approach the model-mean is during strong El Niño events.

            Even if you constrain the models to RCP4.5 (a strong mitigation scenario), it still takes a strong El Niño to touch the mean.

            RCP4.5 at least mostly stays above the 5% band.

          • The HadCRUT4 and UAH plots are centered 5-yr running averages, not trailing moving averages.

          • The word “about” covers a whole lot of sins.

            I love how Nick is always trying to drag in unrelated items every time he falls behind in an argument.

          • David, The UAH series covers only data collected thru Nov 2015. Applying a 5 year centered MA would produce a series with an end point at Nov 2013. There are two extra end points for both the satellite series and the balloon series, which Christy may have padded with 4 year and 3 year averages. Also, the balloon series is shown beginning in 1979, yet balloon data collection began before 1979, thus that series should begin in the same year as the model results, which appear to begin with 1977, or perhaps both could start earlier. Christy and Spencer don’t indicate whether they calculated their trends using the full series or the series after applying the 5 year moving average, which would be influenced by aliasing the 5 year averaging period into the resulting series.

            I looked at the RSS site using your link and did not find a description of their method used (if any) to create those model bounds. If they didn’t model the TLT weighting, I would not expect agreement. Furthermore, their choice of a base period of 1979-1984 is rather short and includes the influence of the El Chichón eruption in 1982. A longer base period may have been better, such as 1980 thru 1991.

          • The 5-yr centered averages are on the HadCRUT4 and UAH that I plotted on a copy of the Christy graph. The overlaid curves are 5-yr centered averages. I put it together in November 2018. The input data ran from 1979 to whatever was available last November. Both were adjusted to set 1979 as the zero-crossing. UAH runs a little higher than Christy’s version, but still near the bottom of all of the model runs.

            Here are the model runs from Ed Hawkins’ Climate Lab Book with HadCRUT4 plotted on them…

            HadCRUT4 tracks in the RCP2.6 and RCP4.5 cluster near the bottom of the ensemble, barely above the bottom of the 5-95% band.

            It’s currently at the bottom of the upper reference period 5-95% band.

            It’s near the bottom of the AR4 CMIP3 ensemble…

            The trend line since 1950 scrapes the bottom. The trend line since the 1976 Pacific Climate Shift is closer to the bottom than the mean. Both trend lines track the bottom 20% or so of FAR, TAR and AR4.

            Remote Sensing Systems explains exactly what they did…

            To illustrate this last problem, we show several plots below. Each of these plots has a time series of TLT temperature anomalies using a reference period of 1979-2008. In each plot, the thick black line shows the results from the most recent version of the RSS satellite dataset. The yellow band shows the 5% to 95% envelope for the results of 33 CMIP-5 model simulations (19 different models, many with multiple realizations) that are intended to simulate Earth’s climate over the 20th Century. For the time period before 2005, the models were forced with historical values of greenhouse gases, volcanic aerosols, and solar output. After 2005, estimated projections of these forcings were used. If the models, as a whole, were doing an acceptable job of simulating the past, then the observations would mostly lie within the yellow band.

            http://www.remss.com/research/climate/

    • “It matters because hockey sticks are being used to justify policy changes, carbon taxes and destroy individual liberty and prosperity.”
      The policy issue is that, for over a hundred years, scientists have been saying that burning a whole lot of carbon will lead to warming. We are burning a whole lot of carbon. Is that safe?

      It’s a helluva lot safer than not “burning a whole lot of carbon.”

      However, the purpose of climate reconstructions isn’t to determine the safety of fossil fuels. The purpose is to establish pre-industrial climatic conditions.

      A natural query is, well, we’ve burnt a first instalment. Did that lead to warming? And it did get warmer, at about the rate expected.

      At a rate much lower than expected.

      A further query is then, has similar warming been common in the past? Even if it had, the fact that this instance of warming matches the predicted effect of carbon burning would be significant. But the answer from paleo is, no, similar sudden warming events are not common – in fact possibly unprecedented in the Holocene.

      Nonsense. The proxy data lack the resolution to draw any such conclusions.

      So then comes the resolution argument. Could there have been sudden warmings that we couldn’t resolve? This is really a long stretch from the main policy issues. But again, at least in the last millennium or so, the answer is no – there is enough in the tree-ring record to counter that. And even the lower resolution proxies aren’t that low. A fifty-year warming, on the 1/4 wavelength rule of thumb, should be apparent in a proxy with a resolution of two centuries.

      1/4 wavelength is a rule of thumb for seismic vertical resolution. Tree ring chronologies are useful high resolution proxies, but don’t mesh well with 20th century temperatures. While they have seasonal or annual resolution, they aren’t precise, unlike oxygen isotope data.

      About half of CL2012’s proxies were tree ring chronologies.
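      A rough numerical sketch of the attenuation at issue here (my own illustration; only the 50-year event and 200-year proxy figures come from the exchange above, the rest is arbitrary):

```python
import numpy as np

# A 50-year, 1.0-degree warming event in a 2,000-year annual series,
# seen through a crude "proxy" that averages 200 years at a time.
years = np.arange(2000)
temps = np.zeros(2000)
temps[1000:1050] = 1.0                       # the 50-yr event

proxy = temps.reshape(-1, 200).mean(axis=1)  # ten 200-yr block averages

# The event survives as a single 0.25-degree blip: attenuated by 4x.
print(proxy.max())  # 0.25
```

      Whether a 0.25-degree blip in a single sample stands out depends entirely on the noise level of the proxy.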

    • “We are burning a whole lot of warming. Is that safe?”

      There isn’t a shred of evidence that it isn’t safe, and much evidence that it is.

      The two biggest are that:
      1) The earth has been much warmer in the near and distant past, and nothing bad happened. In fact times of warmth have always been good times for life in general.
      2) CO2 levels have been much greater in the past, and nothing bad happened. In fact CO2 is good for plants, which in turn is good for things that eat plants, and for the things that eat things that eat plants.

      • A wag once pronounced “Life is a schist sandwich. The only thing that makes it taste any better is more bread!” The ‘bread’ can be interpreted as a double entendre, which in American slang also means money and/or wealth.

      • Only two things are infinite, the universe and human stupidity, and I’m not sure about the former. – Albert Einstein

        The climate hustlers use Einstein’s Infinity to great advantage.

      • Especially in Calipornia, where the streets are paved in golden “schist.”….. : )

    • Tom
      If there is a lot of horst schist around, it is inevitable that one will see the word “schist” frequently.

  12. Another geologist (and mining/metallurgical eng.) that enjoys all your fine educational and entertaining articles. This one demonstrates that other disciplines are perfectly qualified to criticize climate science without being a climate scientist. Indeed, I’m convinced that other disciplines that use statistical methods may be more qualified than “climate scientists” of the consensus that have done such a slipshod job of it. Steve McIntyre said it best:

    “In my opinion, most climate scientists on the Team would have been high school teachers in an earlier generation – if they were lucky. Many/most of them have degrees from minor universities. It’s much easier to picture people like Briffa or Jones as high school teachers than as Oxford dons of a generation ago. Or as minor officials in a municipal government.

    Allusions to famous past amateurs over-inflates the rather small accomplishments of present critics, including myself. A better perspective is the complete mediocrity of the Team makes their work vulnerable to examination by the merely competent.”

    – Steve McIntyre, Climate Audit Aug 1, 

  13. Speaking of Hockey Sticks, how about those St. Louis Blues?!! Hoist that Stanley Cup!
    “My Cup runneth over….”

  14. Interesting and technically compelling post, but I disagree with some of your opinions. And anyway, I thought you said that if hockey stick graphs were “properly annotated” you were happy with them.

    Do you believe that if some researcher from the year 3000 was poring through the data they wouldn’t see any hockey-stick shaped graphs? Like we do today, they’d see the HS graph of human population:
    https://qph.fs.quoracdn.net/main-qimg-a1a8e802d002c2bc0887463cf9964f17.webp
    They’d see the HS graph of CO2 rise: http://www-das.uwyo.edu/~geerts/cwx/notes/chap01/Image18.gif
    They’d see the HS graph of soil carbon loss: https://www.pnas.org/content/pnas/114/36/9575/F2.large.jpg
    The Arctic seaice flash melt: https://tamino.files.wordpress.com/2014/02/kinnard1450.jpg?w=500&h=375
    Plenty more examples of human-caused hockey sticks where those came from – all of them painting a picture of abrupt change.

    “It matters because the advocates of the Anthropocene as a geologic epoch are relying on the Marcott hockey stick.”

    No they aren’t. “They” are citing dozens of different metrics: the deposit of isotopes and a layer of plastic and sundry other synthetic products since the 1950s and abrupt changes in sedimentation rates to name a few.

    “It matters because hockey sticks are being used to… destroy individual liberty and prosperity.”

    And there’s the rub, it threatens your cosseted world view and conflicts with your political opinions.

    • Loydo,

      Hockey sticks are, by definition, not properly annotated. Posting the instrumental data alongside the proxy data in a properly annotated manner is legitimate. Splicing instrumental data onto the proxy-based reconstruction and calling it an anomaly is illegitimate.

      • “Posting the instrumental data alongside the proxy data in a properly annotated manner is legitimate.”
        Here is the main hockey stick figure from MBH98 (the Nature paper). It is done just as you prescribe.

        • My “beef” is with how it was displayed in the IPCC report and how they modified Briffa’s reconstruction. The genesis of the MBH98 hockey stick is far more complicated than Mike’s Nature Trick.

          The specific example I cited in the post was Mann et al., 2008. While the use of instrumental data was documented, it wasn’t posted alongside the proxy data. It was spliced onto the end of the proxy data. This Hockey Stick is probably the most frequently cited reconstruction in policy-oriented literature.

          • David,
            “The specific example I cited in the post was Mann et al., 2008. While the use of instrumental was documented, it wasn’t posted alongside the proxy data. “

            Instrumental data are shown in separate colors and marked. In fact, in that paper they gave a panel of 4 plots (Fig 2) in close-up showing the comparison, very explicitly marked, of the composite with various proxies in the period since 1850, plotted against instrumental. The purpose was to show that proxies did reflect the recent warming (exhibiting the incline). In the SI, they gave similar plots with a hemisphere breakdown. Here is the first panel of Fig 2.

            ps I see I messed up the link to the MBH98 plot 5B – it is here.

          • Nick,

            I stated in the post that Mann et al., 2008 documented their misuse of the instrumental data.

          • “100% fraudulent” referred to splicing high resolution carbon dioxide instrumental data onto low resolution ice cores. Try to comprehend what you read, then quote me in context, rather than lying about what I wrote.

          • David,
            “documented their misuse of the instrumental data”
            But what is the misuse? You say that the cru_eiv_composite.csv-type files show instrumental data being folded in, but I can’t see where that is happening. The later data in those files do not look at all like HADCRUT or CRUTEM. If the CPS and EIV files are just proxy, as it seems to me, then the plot correctly shows instrumental and proxy separately, and Fig 2 shows the contrast in much greater detail.

          • David, stick with “beef”.

            A panel of scientists convened by the National Research Council reported in 2006, supporting Mann’s findings with some qualifications, including agreeing that there were some statistical failings, but that these had little effect on the result.

            The results have been repeated and verified ad nauseam, but 20 years later you’re still doubt-mongering about it? smfh.

          • Nice try… But the post isn’t about MBH98.

            But… Since I love to torch straw man fallacies.

            “Citing the work of Dr. Mann and others, the U.N. concluded there was a 60% to 90% chance that temperatures in the 1990s had been the warmest since 1000, and that 1998 was the warmest single year.

            Panel chairman Gerald R. North, a climatologist at Texas A&M University, said his committee’s findings couldn’t support that claim. Dr. North said the limited data available on ancient climate means that scientists can say with high confidence only that the “last few decades” of the 20th century were the warmest period in the past 400 years, and with “less confidence” that they were the warmest in the past 900 years.”

            –Wall Street Journal, June 23, 2006

            The National Academies press release on the North Report…

            Date: June 22, 2006
            ‘High Confidence’ That Planet Is Warmest in 400 Years;
            Less Confidence in Temperature Reconstructions Prior to 1600

            WASHINGTON — There is sufficient evidence from tree rings, boreholes, retreating glaciers, and other “proxies” of past surface temperatures to say with a high level of confidence that the last few decades of the 20th century were warmer than any comparable period in the last 400 years, according to a new report from the National Research Council. Less confidence can be placed in proxy-based reconstructions of surface temperatures for A.D. 900 to 1600, said the committee that wrote the report, although the available proxy evidence does indicate that many locations were warmer during the past 25 years than during any other 25-year period since 900. Very little confidence can be placed in statements about average global surface temperatures prior to A.D. 900 because the proxy data for that time frame are sparse, the committee added.

            http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=11676

            It clearly is warmer now than it was 400 years ago. I don’t think anyone doubts that it’s currently warmer than the coldest climate since the end of the Pleistocene.

  15. The modern instrumental record has to replace proxies because the proxies drop out at irregular intervals over time and affect the mean. Some later tree ring records had also likely been affected by factors such as acid rain.
    Andy May’s reconstruction looks quite different to those in the scientific literature. Can you reference a refereed published paper for this work?

    • Simon, if everything sceptics say is totally wrong and everything consensus science says is totally right, do you understand that this is cheerleading from the sidelines? Do you understand that you can be intellectually replaced with an eight-word billboard sign?

      You have a degree in the humanities, but this shouldn’t stop an intelligent interested party from any field making insightful comments. Do you want to know the magic of having your thoughts valued? Try agreeing occasionally with your antagonist where this seems justified. Try saying, “Oh, I never looked at it this way before” when you see something that makes sense on the other side. Or, for your ‘own’ side of the issue, “Yeah, this is a silly study, an own goal. It only serves to give sceptics something to ridicule us about.”

      We all have been given a very good tutorial here on the statistics of signal and noise by an experienced and very successful petroleum geologist who employs such methods to extend and locate new oil and gas deposits, not by 2050, not by 2100, but where to spud a hole to test this tomorrow! It’s not funded by government, Tom Steyer, the Clinton Foundation or the like. It’s funded by the for-profit free enterprise industry, and he better be right a lot more than he is wrong if he wants to keep his job. Even you Simon should value some of the things he says about doing statistics correctly. Don’t be a statistic yourself. Not on the world’s most highly acclaimed science site.

      • Gary, you didn’t need to respond to Simon because he didn’t start his reply with “Simon says”…… 😉

  16. Of all the problems tree rings have for indicating temperature, time resolution is not one of them. Tree rings show down to the year, sometimes to the season, how favorable things were for the tree to grow or to have certain characteristics of its wood. The problem is determining what to attribute variations of tree ring parameters to.

    There must be some places (maybe not many) where at least one kind of tree has rings that did (or usually did) a good job of indicating local temperature with smoothing of a year or only a few months. One issue is that local temperature in some of these places may not correlate well with global temperature. Another is that the ability of tree rings to indicate variations of temperature or other factors that affect tree growth could change with new modern factors such as pollution, invasive species, increase of population of animals eating or otherwise stressing trees due to hunting of their enemies, precipitation pattern shifts from large urban heat islands or other similarly major land use change, or soil moisture change from a nearby major dam, nearby farming or a bunch of wells.

    • To minimize cultural effects, you should select your samples prudently. They normally choose trees at the extremes of their range, high on a mountain or near the treeline, etc., where temperature is likely to be a significant factor in growth.

      Old stumps and deadfall in boreal regions that today don’t support such trees, or trees as large, are excellent examples for detecting former periods when it was much warmer than today. Somehow, alarmist scientists don’t favor these types of studies. Glacial meltbacks that expose the broken stubs of a formerly forested area of a thousand years ago also get no interest from consensus science or the funding collaborators, nor do they favor Greenlandic agricultural history much, nor the presently ice-locked, driftwood-strewn, well developed beaches on Greenland’s northern coast dated at 5000 ybp.

      A few dozen self-evident studies would put paid to the ‘warmest period in 800,000 years’ nonsense.

    • Tree rings only tell us about the growing season. They tell us nothing about what the rest of the year was like.

      All other things being equal, warmer temperatures during the growing season would result in more growth. (Unless the temperature got too warm, in which case it would lead to less growth.)
      However, if the rest of the year was cooler than average, the year as a whole would be cooler than average, yet the trees would still show more growth.

  17. David,
    You always write good articles. However, I think that this is one of your better ones.

  18. Excellent post David! This should be required reading in undergrad geoscience and climate science classes.

  19. You can judge the character of Mann and his work, not by how his opponents view him but by how his colleagues do. And that is a pretty poor view.
    Bullying, arrogant and willing to ‘make the data tell the right story’ no matter what.

  20. Thanks David, great post. I am a geophysicist of 35 years standing and all this is part of my day job.

    The issue of resolution is one part of the key. In geostatistics this issue is called the support of the measurement and I have been involved with a lot of research on this topic and published a bit. I have also been heavily involved in seismic inversion and stochastic seismic inversion, so this is very much in my area of expertise (there is another point about low frequency trends in seismic inversion that is very relevant to why climate models behave like they do – that’s a separate topic for another day). It is simply not valid to draw conclusions by comparing 20th century high resolution data to low resolution data over millennia or longer.

    I would note some other points too. Marcott is a particularly egregious example of the problem of comparing high and low resolution. Indeed, the BBC still carries a web page with the Marcott reconstruction image with a misleading conclusion, despite my complaining about it. However, it should be noted that the results Marcott presented for his PhD thesis do not include the splicing on of modern day temperature data. The splice was only added for journal publication. Why? Steve McIntyre has quite a lot to say about it.

    Regarding tree rings and temperature reconstructions, there is another fundamental problem with the methodology here as well. Firstly, the idea that tree rings are simply thermometers is absurd. Ask any forestry expert and they will point out many other limiting factors that influence tree growth. Furthermore, the idea that tree rings (and tree growth) respond linearly to temperature is also absurd. Most living things have an optimal environmental range and any extremes, both higher and lower, will impair growth. For this reason, growth response curves are likely to be an inverted-U shape and therefore multivalued. Reconstructing temperature from such a response curve is essentially impossible, regardless of compounding factors.

    My final point and ultimate criticism of the hockey-stick reconstructions is based on a simple numerical experiment. If you generate a series of random, trendless but auto-correlated long-term time series, compare them by correlation in their latter parts to a training period (temperature, say, in the 20th Century), accept only the best correlations (what Steve M. calls post-hoc screening) and then sum the best results, what you find is that you get a long, downward trending low frequency handle and then an uptick into the modern period, caused by the presence of the auto-correlation and the post-hoc selection process. Of course, because the input data were random auto-correlated time series, their sum should tend to a long term trendless series with a mean of zero (assuming that was the chosen mean). Instead, performing this test in a spreadsheet with random auto-correlated time series produces an output with exactly the same features as the tree-ring hockey-stick reconstructions: long, low frequency, slightly down-trending handle; strong uptick into the training period of temperature increase. The hockey-stick is an artefact of the methodology, nothing more.
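    The screening experiment described above is easy to reproduce outside a spreadsheet. This sketch uses my own arbitrary parameters (AR(1) with phi = 0.9, a rising 100-year ramp as the training target, screening at r > 0.5):

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, phi = 1000, 1000, 0.9

# Trendless, auto-correlated AR(1) pseudo-proxies: x_t = phi*x_{t-1} + e_t
e = rng.standard_normal((n_series, n_years))
proxies = np.zeros_like(e)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + e[:, t]

# Post-hoc screening: keep only series that correlate with a rising
# "training period" temperature over the final 100 years.
training = np.linspace(0.0, 1.0, 100)
r = np.array([np.corrcoef(p[-100:], training)[0, 1] for p in proxies])
stack = proxies[r > 0.5].mean(axis=0)

# Despite every input being trendless by construction, the screened
# stack shows a flat handle and a sharp uptick in the training period.
print(stack[-50:].mean() - stack[:900].mean())  # clearly positive
```

    The uptick comes entirely from the selection step: screening picks out the series whose random auto-correlated wander happens to rise in the training window.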

    • I didn’t really get into the smoothing effect of proxies that aren’t like isotope thermometers. d18O is effectively a temperature measurement. You can mathematically translate it to temperature, like you can mathematically translate sonic & density logs to synthetic seismic traces. Tree rings, pollen, plant stomata etc., are all effective paleo-climate proxies; but the math is far more uncertain. Although they tend to be high resolution proxies, they are also very noisy and have to be smoothed to be useful.

      I also didn’t get into horizontal resolution. The horizontal resolution of multi-proxy reconstructions is degraded by the dating uncertainty and determining how the proxies should be stacked together. My recollection is that there were serious issues with how Marcott adjusted the ages of their proxies.

      • David, You are right to avoid mixing low resolution proxies with ones with higher resolution. Ocean sediment cores are another example of this problem, with difficulties of dating confounding the results, though varved sediments in fresh water environments may exhibit higher resolutions. I haven’t noticed mention of the work of Loehle yet, who produced a fatally flawed paper published in E&E in 2007, which resulted in a quick “correction” which was not much better, as I pointed out in a Letter to the Editor published in September 2008. That 2008 paper by Loehle is still a favorite among the denialist camp, which I find to be yet another example of disinformation from that side of the so-called “debate”.

  21. What I have been pondering is, what effect do proxies have on the temperature signal? For a simplified example, let’s consider a temperature signal with annual sampling frequency and a proxy with 50 years resolution.

    As a first approximation, we can think of the proxy as a decimator, taking only one point of the signal every 50 years. But even so, where do we start: from the first point in the series going forward? Or otherwise?

    Still, proxy-as-decimator is probably not correct. More likely proxies act as some sort of low-pass filter, alas with unknown transfer function.
    We can at first assume, maybe, that proxies work as a simple moving average (because physical systems cannot see into the future, it must not be a centered average), in this case with n = 50.

    Then I’m not sure how to proceed.
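    One way to proceed is to simulate both candidate operators and compare their output. This is only a sketch with made-up numbers (a 300-year cycle plus annual noise, n = 50):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(10_000)
annual = np.sin(2 * np.pi * years / 300) + 0.5 * rng.standard_normal(years.size)

n = 50  # proxy "resolution" in years

# (a) Proxy as pure decimator: one raw annual value every 50 years.
#     The year-to-year noise aliases straight into the record.
decimated = annual[n - 1 :: n]

# (b) Proxy as trailing boxcar (no look-ahead), then one sample
#     per 50 years.
trailing = np.convolve(annual, np.ones(n) / n, mode="valid")
averaged = trailing[::n]

# The boxcar suppresses the annual noise by ~1/sqrt(50) while the
# 300-yr cycle mostly survives, so its output is much less noisy.
print(decimated.std(), averaged.std())
```

    Neither operator is the true transfer function of a real proxy, but the comparison shows how much the reconstructed variability depends on which one you assume.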

  22. Donald L. Klipstein

    Of all the problems tree rings have for indicating temperature, time resolution is not one of them. Tree rings show down to the year, sometimes to the season, how favorable things were for the tree to grow or to have certain characteristics of its wood. The problem is determining what to attribute variations of tree ring parameters to.

    There must be some places (maybe not many) where at least one kind of tree has rings that did (or usually did) a good job of indicating local temperature with smoothing of a year or only a few months. One issue is that local temperature in some of these places may not correlate well with global temperature. Another is that ability of tree rings to indicate variations of temperature or other factors that affect tree growth could change with new modern factors such as

    pollution,

    invasive species, increase of population of animals eating or otherwise stressing trees due to hunting of their enemies,
    _________________________________________________________

    “tree growth could change with new modern factors such as pollution”

    Pollution is in no way a “modern factor”:

    https://www.google.com/search?client=ms-android-huawei&ei=XFIYXYe4MovRrgTEmICwDQ&q=Tons+Of+Cosmic+Dust+Fall+To+Earth+Every+Day&oq=Tons+Of+Cosmic+Dust+Fall+To+Earth+Every+Day&gs_l=mobile-gws-wiz-serp.

    _________________________________________________________

    Touch any boathouse at sea level or touch a stone on the Himalayas: your hands will turn black from old-fashioned pollution. Who is qualified to distinguish between old-fashioned star dust and “anthropogenic modern pollution”?

  23. it matters because the only way to directly compare the instrumental data to the pre-industrumental proxy data is the filter the instrumental data down to the resolution of the proxy data. –>

    it matters because the only way to directly compare the instrumental data to the pre-industrial proxy data is to filter the instrumental data down to the resolution of the proxy data.
