Proxy Science and Proxy Pseudo-Science

Guest post by Pat Frank

It’s become very clear that most published proxy thermometry since 1998 [1] is not science at all, most thoroughly because Steve McIntyre and Ross McKitrick revealed its foundation in ad hoc statistical numerology. A while back, Michael Tobis and I had a conversation here at WUWT about the non-science of proxy paleothermometry, starting with Michael’s comment here and my reply here. Michael quickly appealed to his home authorities at Planet3.org. We all had a lovely conversation that ended with moderator-cum-debater Arthur Smith indulging a false claim of insult to impose censorship (insulting comment in full here, for the strong of stomach).

But in any case, two local experts in proxy thermometry came to Michael’s aid: Kaustubh Thimuralai, a grad student in proxy climatology at U. Texas, Austin and Kevin Anchukaitis, a dendroclimatologist at Columbia University. Kaustubh also posted his defense at his own blog here.

Their defenses shared this peculiarity: an exclusive appeal to stable isotope temperature proxies — not word one in defense of tree-ring thermometry, which provides the vast bulk of paleotemperature reconstructions.

The non-science of published paleothermometry was proved by their non-defense of its tree-ring center: an indictment by discretionary silence.

Nor was there one word in defense of the substitution of statistics for physics, a near universal in paleo-thermo.

But their appeal to stable isotope proxythermometry provided an opportunity for examination. So, that’s what I’m offering here: an analysis of stable isotope proxy temperature reconstruction followed by a short tour of dendrothermometry.

Part I. Proxy Science: Stable Isotope Thermometry

The focus is on oxygen-18 (O-18), because that’s the heavy-atom proxy overwhelmingly used to reconstruct past temperatures. NASA has a nice overview here. The average global stable isotopic abundances of oxygen are: O-16 = 99.757%, O-17 = 0.038%, O-18 = 0.205%. If there were no thermal effects (and no kinetic isotope effects), the oxygen isotopes would be distributed in minerals at exactly their natural ratios. But local thermal effects cause the ratios to depart from the average, and this is the basis for stable isotope thermometry.

Let’s be clear about two things immediately: first, the basic physics and chemistry of thermal isotope fractionation is thorough and fully legitimate. [2-4]

Second, the mass spectrometry (MS) used to determine O-18 is very precise and accurate. In 1950, MS of O-18 already had a reproducibility of 5 parts in 100,000, [3] and presently is 1 part in 100,000. [5] These tiny values are represented as “%o,” where 1 %o = 0.1% = 0.001. So dO-18 MS detection has improved by a factor of 5 since 1950, from (+/-)0.05%o to (+/-)0.01%o.
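To anchor the notation: dO-18 values are deltas — the sample O-18/O-16 ratio relative to a standard ratio, expressed in %o. Here is a minimal sketch of the conversion; the ratios below are hypothetical round numbers built from the average abundances quoted above, not measured values:

```python
def delta_o18(r_sample, r_standard):
    """Per-mil (%o) delta notation: (R_sample/R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical ratios near the global average O-18/O-16 abundance
r_standard = 0.205 / 99.757          # assumed standard O-18/O-16 ratio
r_sample = r_standard * 1.001        # a sample enriched by 1 part in 1000
print(delta_o18(r_sample, r_standard))   # ~1.0 %o
```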

The O-18/O-16 ratio in sea water has a first-order dependence on the evaporation/condensation cycle of water. H2O-18 has a higher boiling point than H2O-16, and so it evaporates less readily and condenses more readily. Here’s a matter-of-fact Wiki presentation. The partition of O-18 and O-16 due to evaporation/condensation means that the O-18 fraction in surface waters rises and falls with temperature.

There’s no dispute that O-18 mixes into CO2 to produce heavy carbon dioxide – mostly isotopically mixed as C(O-16)(O-18).

Dissolved CO2 is in equilibrium with carbonic acid. Here’s a run-down on the aqueous chemistry of CO2 and calcium carbonate.

Dissolved light-isotope CO2 [as C(O-16)(O-16)] becomes heavy CO2 by exchanging an oxygen with heavy water, like this:

C(O-16)(O-16) + H2O-18 => C(O-16)(O-18) + H2O-16

This heavy CO2 finds its way into the carbonate shells of mollusks, and the skeletons of foraminifera and corals in proportion to its ratio in the local waters (except when biology intervenes. See below).

This process is why the field of stable isotope proxy thermometry has focused primarily on O-18 CO2: it is incorporated into the carbonate of mollusk shells, corals, and foraminifera and provides a record of temperatures experienced by the organism.

Even better, fossil mollusk shells, fossil corals, and foraminiferal sediments in sea floor cores promise physically real reconstructions of O-18 paleotemperatures.

Before it can be measured, O-18 CO2 must be liberated from the carbonate matrix of mollusks, corals, or foraminifera. Liberation of CO2 typically involves treating solid CaCO3 with phosphoric acid.

3 CaCO3 + 2 H3PO4 => 3 CO2 + Ca3(PO4)2 + 3 H2O

CO2 is liberated from biological calcium carbonate and piped into a mass spectrometer. Laboratory methods are never perfect. They incur losses and inefficiencies that can affect the precision and accuracy of results. Anyone who’s done wet analytical work knows about these hazards and has struggled with them. The practical reliability of dO-18 proxy temperatures depends on the integrity of the laboratory methods to prepare and measure the intrinsic O-18.

The paleothermometric approach is to first determine a standard relationship between water temperature and the ratio of O-18/O-16 in precipitated calcium carbonate. One can measure how the O-18 in the water fractionates itself into solid carbonate over a range of typical SSTs, such as 10 C through 40 C. A plot of carbonate O-18 v. temperature is prepared.

Once this standard plot is in hand, the temperature is regressed against the carbonate dO-18. The result is a least-squares fitted equation that tells you the empirical relationship of T:dO-18 over that temperature range.

This empirical equation can then be used to reconstruct the water temperature whenever carbonate O-18 is known. That’s the principle.
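For illustration only, here is a minimal numerical sketch of that principle. The calibration numbers are invented (they are not from any of the papers discussed below); the point is the workflow: fit a standard line, then invert it:

```python
import numpy as np

# Invented calibration data: water temperature (C) vs. carbonate dO-18 (%o)
T_cal = np.array([10., 15., 20., 25., 30., 35., 40.])
d18O_cal = -0.22 * T_cal + 4.0 \
           + np.array([0.05, -0.10, 0.08, -0.03, 0.02, -0.06, 0.04])  # scatter

# Regress T against dO-18 to get the empirical T:dO-18 equation
slope, intercept = np.polyfit(d18O_cal, T_cal, 1)

def proxy_temperature(d18o):
    """Invert the fitted calibration: estimate water T from a measured dO-18."""
    return slope * d18o + intercept

print(round(proxy_temperature(0.0), 1))   # ~18 C for these invented numbers
```

The scatter added to the invented calibration points is exactly the quantity the error analysis that follows is concerned with: it propagates into every temperature later read off the fitted line.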

The question I’m interested in is whether the complete physico-chemical method yields accurate temperatures. Those who’ve read my paper pdf on neglected systematic error in the surface air temperature record will recognize the ‘why’ of focusing on measurement error. It’s the first and minimum error entering any empirically determined magnitude. That makes it the first and basic question about error limits in O-18 carbonate proxy temperatures.

So, how does the method work in practice?

Let’s start with the classic: J. M. McCrea (1950) “On the Isotopic Chemistry of Carbonates and a Paleotemperature Scale“[3], which is part of McCrea’s Ph.D. work.

McCrea’s work is presented in some detail to show the approach I took to evaluate error. After that, I promise more brevity. Nothing below is meant as, or should be taken as, criticism of McCrea’s absolutely excellent work, or of any of the other O-18 authors and papers to follow.

McCrea did truly heroic and pioneering experimental work establishing the O-18 proxy temperature method. Here’s his hand-drawn picture of the custom glass apparatus used to produce CO2 from carbonate. I’ve annotated it to identify some bits:

Figure 1: J. McCrea’s CO2 preparative glass manifold for O-18 analysis.

I’ve worked with similar glass gas/vacuum systems with lapped-in ground-glass joints, and the opportunity for leak, crack, or crash-tastrophe is ever-present.

McCrea developed the method by precipitating dO-18 carbonate at different temperatures from marine waters obtained off East Orleans, MA, on the Atlantic side of Cape Cod, and off Palm Beach, Florida. The O-18 carbonate was then chemically decomposed to release the O-18 CO2, which was analyzed in a double-focusing mass spectrometer, apparently custom-built in-house.

The blue and red lines in the Figure below show his results (Table X and Figure 5 in his paper). The %o O-18 is the divergence of his experimental samples from his standard water.

Figure 2: McCrea, 1950, original caption (color-modified): “Variation of isotopic composition of CaCO3(s) with reciprocal of deposition temperature from H2O (Cape Cod series (red); Florida water series (blue)).” The vertical lines interpolate temperatures at %o O-18 = 0.0. Bottom: color-coded experimental point scatter around a zero line (dashed purple).

The lines are linear least-squares (LSQ) fits, and they reproduce McCrea’s almost exactly (T is in Kelvin):

Florida: McCrea: d18O=1.57 x (10^4/T)-54.2;

LSQ: d18O=1.57 x (10^4/T)-53.9; r^2=0.994.

Cape Cod: McCrea: d18O=1.64 x (10^4/T)-57.6;

LSQ: d18O=1.64 x (10^4/T)-57.4; r^2=0.995.

About his results, McCrea wrote this: “The respective salinities of 36.7 and 32.2%o make it not surprising that there is a difference in the oxygen composition of the calcium carbonate obtained from the two waters at the same temperature.” (bold added)

The boiling temperature of water increases with the amount of dissolved salt, which in turn affects the relative rates at which H2O-16 and H2O-18 evaporate away. Marine salinity can also change from the influx of fresh water (precipitation, rivers, or direct runoff), from upwelling, from wave-mixing, and from currents. The O-16/O-18 ratio of fresh water, of upwelling water, or of distant water transported by currents may differ from the local marine ratio. The result is that marine waters of the same temperature can have different O-18 fractions. Disentangling the effects of temperature and salinity in a marine O-16/O-18 ratio can be difficult to impossible in paleo-reconstructions.

The horizontal green line at %o O-18 = zero intersects the Florida and Cape Cod lines at different temperatures, represented by the vertical drops to the abscissa. These show that the same dO-18 produces a difference of 4 C, depending on which equation one chooses, with the apparent T covarying with a salinity change of 4.5%o.

That means if one generates a paleotemperature by applying a specific dO18:T equation to paleocarbonates, and one does not know the paleosalinity, the derived paleotemperature can be uncertain by as much as (+/-)2 C due to a hidden systematic covariance (salinity).

But I’m interested in experimental error. From those plots one can estimate the point scatter in the physico-chemical method itself as the variation around the fitted LSQ lines. The point scatter is plotted along the purple zero line at the bottom of Figure 2. Converted to temperature, the scatter is (+/-)1 C for the Florida data and (+/-)1.5 C for the Cape Cod data.

All the data were determined by McCrea in the same lab, using the same equipment and the same protocol. Therefore, it’s legitimate to combine the two sets of errors in Figure 2 to determine their average, and the resulting average uncertainty in any derived temperature. The standard deviation of the combined errors is (+/-)0.25%o O-18, which translates into an average temperature uncertainty of (+/-)1.3 C. This emerged under ideal laboratory conditions, where the water temperature was known from direct measurement and the marine O-18 fraction was independently measured.
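The %o-to-C conversion goes through the slope of the fitted lines: for d18O = a x (10^4/T) + b, the sensitivity is |d(d18O)/dT| = a x 10^4/T^2. A sketch of the arithmetic, with an assumed representative SST of ~20 C and the two-line average sensitivity:

```python
# Convert %o point scatter into C using the slope of McCrea's fitted lines
a_florida, a_cape_cod = 1.57, 1.64
T = 293.15                                   # assumed working SST of ~20 C, Kelvin

def sensitivity(a, T):
    """Per-mil change in d18O per Kelvin for a line d18O = a*(1e4/T) + b."""
    return a * 1.0e4 / T**2

avg_sens = 0.5 * (sensitivity(a_florida, T) + sensitivity(a_cape_cod, T))
sigma_T = 0.25 / avg_sens                    # (+/-)0.25 %o combined scatter
print(round(sigma_T, 2))                     # ~1.34 C
```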

Next, it’s necessary to know whether the errors are systematic or random. Random errors diminish as 1/sqrtN, where N is the number of repetitions of the analysis. If the errors are random, one can hope for a very precise temperature measurement just by repeating the dO-18 determination enough times. For example, in McCrea’s work, 25 repeats would reduce the average error in any single temperature to 1.3/5 = (+/-)0.26 C.

To bridge the random/systematic divide, I binned the point scatter over (+/-)3 standard deviations, giving a 99.7% certainty of including the full range of error. There were no outliers, meaning all the scatter fell within the 99.7% bound. There are only 15 points, which is not a good statistical sample, but we work with what we’ve got. Figure 3 shows the histogram plot of the binned point scatter, and a Gaussian fit. It’s a little cluttered, but bear with me.

Figure 3: McCrea, 1950 data: blue points, binned point scatter from Figure 2; red line, two-Gaussian fit to the binned points; dashed green lines, the two fitted Gaussians. Thin purple line and points: separately binned Cape Cod point scatter; thin blue line and points: separately binned Florida point scatter.

The first thing to notice is that the binned points are distinctly not normally distributed. This immediately suggests the measurement error is systematic, and not random. The two-Gaussian fit is pretty good, but should not be taken as more than a numerical convenience. An independent set of measurement scatter points from a different set of experiments may well require a different set of Gaussians.

The two Gaussians imply at least two modes of experimental error operating simultaneously. The two thin single-experiment lines spread across the full scatter width. This demonstrates that the point scatter in each data set participates in both error modes simultaneously. But notice that the two data sets do not participate equivalently. This non-equivalence again indicates a systematic measurement error that does not repeat consistently.

The uncertainty from systematic measurement error does not diminish as 1/sqrtN. The error is not a constant offset and does not subtract away in a difference between data sets. It propagates into a final value as (+/-)sqrt[(sum of the squared errors)/(N-1)].
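The random/systematic contrast in code, with sigma and N taken from the McCrea discussion above (the uniform error list is a hypothetical worst case, for illustration):

```python
import math

sigma, N = 1.3, 25     # single-measurement uncertainty (C) and repeat count

# Random error: the mean of N repeats improves by 1/sqrt(N)
random_error_of_mean = sigma / math.sqrt(N)
print(round(random_error_of_mean, 2))     # 0.26 C

# Systematic error: propagates as sqrt(sum(e_i**2)/(N-1))
# and does not shrink with repetition
errors = [sigma] * N                      # hypothetical: every repeat off by sigma
systematic = math.sqrt(sum(e**2 for e in errors) / (N - 1))
print(round(systematic, 2))               # ~1.33 C -- no improvement
```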

The error in any new proxy temperature derived from those methods will probably fall somewhere in the Figure 3 envelope, but the experimenter will not know where. That means the only way to honestly present a result is to report the average systematic error, and that would be T(+/-)1.3 C.

This estimate is conservative, as McCrea noted that, “The average deviation of an individual result from the relation is 0.38%o.”, which is equivalent to an average error of (+/-)2 C (McCrea’s figure; I calculate 1.95 C). McCrea wrote later, “The average deviation of an individual experimental result from this relation is 2°C in the series of slow precipitations just described.”

The slow precipitation experiments were the tests with Cape Cod and Florida water, shown in Figure 2, and McCrea mentioned their paleothermal significance at the end of his paper: “The isotopic composition of calcium carbonate slowly formed from aqueous solution has been noted to be usually the same as that produced by organisms at the same temperature.”

Anyone using McCrea’s standard equations to reconstruct a dO-18 paleotemperature must include the experimental uncertainty hidden inside them. However, it is invariably neglected. I’ll give an example below.

Another methodological classic is Sang-Tae Kim et al. (2007) “Oxygen isotope fractionation between synthetic aragonite and water: Influence of temperature and Mg2+ concentration“.[6]

Kim, et al., measured the relationship between temperature and dO-18 incorporation in aragonite, a form of calcium carbonate found in mollusk shells and corals (the other typical form is calcite). They calibrated the T:dO-18 relationship at five temperatures, 0, 5, 10, 25, and 40 C, which covers the entire range of SSTs. Figure 4a shows their data.

Figure 4: a. Blue points: Aragonite T:dO-18 calibration experimental points from Kim, et al., 2007; purple line: LSQ fit. Below: green points, the unfit residual representing experimental point-scatter, 1-sigma = (+/-)0.21. b. 3-sigma histogram of the experimental unfit residual (points) and the 3-Gaussian fit (purple line). The thin colored lines plus points are separate histograms of the four data sub-sets making up the total.

The alpha in “ln-alpha” is the O-18 “fractionation factor,” which is a ratio of O-18 ratios. That sounds complicated, but it’s just (the ratio of O-18 in carbonate divided by the ratio of O-18 in water): {[(O-18)c/(O-16)c] / [(O-18)w/(O-16)w]}, where “c” = carbonate, and “w” = water.

The LSQ fitted line in Figure 4a is 1000 x ln-alpha = 17.80 x (1000/T)-30.84; R^2 = 0.99, which almost exactly reproduces the published line, 1000 x ln-alpha = 17.88 x (1000/T)-31.14.

The green points along the bottom of Figure 4a are the unfit residual, representing the experimental point scatter. They have a 1-sigma standard deviation = (+/-)0.21, which translates into an experimental uncertainty of (+/-)1 C.
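As before, the %o-to-C conversion goes through the slope of the calibration line. A sketch of the arithmetic, evaluated at an assumed mid-range temperature of ~25 C:

```python
# Kim et al. (2007) line: 1000*ln(alpha) = 17.88*(1000/T) - 31.14, T in Kelvin.
# Sensitivity: |d(1000*ln(alpha))/dT| = 17.88*1000/T**2, in %o per Kelvin.
T = 298.15                         # assumed working temperature, ~25 C
sens = 17.88 * 1000.0 / T**2

sigma_residual = 0.21              # 1-sigma of the full experimental residual
sigma_analytic = 0.13              # mass-spectrometric precision alone

print(round(sigma_residual / sens, 2))   # ~1.04 C: the (+/-)1 C scatter
print(round(sigma_analytic / sens, 2))   # ~0.65 C: the (+/-)0.6 C precision
```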

Figure 4b is a histogram of the unfit residual point scatter in part a, binned across (+/-)3-sigma. The purple line is a three-Gaussian fit to the histogram, but with the point at (-0.58, 3) left out because it destabilized the fit. In any case, the experimental data appear to be contaminated with at least three modes of divergence, again implying a systematic error.

Individual data sub-sets are shown as the thin colored lines in Figure 4b. They all spread across at least two of the three experimental divergence modes, but not equivalently. Once again, that means every data set is uniquely contaminated with systematic measurement error.

Kim, et al., reported a smaller analytical error, (+/-)0.13, equivalent to an uncertainty in T of (+/-)0.6 C. But their (+/-)0.13 is the analytical precision of the mass-spectrometric determination of the O-18 fractions. It’s not the total experimental scatter. Residual point scatter is a better uncertainty metric, because the Kim, et al., equation represents a fit to the full experimental data, not just to the O-18 fractions found by the mass spectrometer.

Any researcher using the Kim, et al., 2007 dO-18:T equation to reconstruct a paleotemperature must propagate at least (+/-)0.6 C uncertainty into their result, and better (+/-)1 C.

I’ve done similar analyses of the experimental point-scatter in several studies used to calibrate the T:O-18 temperature scale. Here’s a summary of the results:

Study______________(+/-)1-sigma______n_____syst err?____Ref.

McCrea________________1.3 C_________15______Y________[3]

O’Neil_________________29 C_________11______?________[7]

Epstein_______________0.76 C________25______?________[8]

Bemis_________________1.7 C_________14______Y________[9]

Kim___________________1.0 C_________70______Y________[6]

Li____________________2.2 C__________5______________[10]

Friedman______________1.1 C__________6______________[11]

(O’Neil’s was a 0-500 C experiment.)

All the Summary uncertainties represent only measurement point scatter, which often behaved as systematic error. The O’Neil 1969 point scatter was indeterminate, and the Epstein question mark is discussed below.

Epstein, et al., (1953), chose to fit their T:dO-18 calibration data with a second-order polynomial rather than with a least squares straight line. Figure 5 shows their data with the polynomial fit, and for comparison a LSQ straight line fit.

Figure 5: Epstein, 1953 data fit with a second-order polynomial (R^2 = 0.996; sigma residual = (+/-)0.76 C) and with a least-squares line (R^2 = 0.992; sigma residual = (+/-)0.80 C). Insets: histograms of the point scatter plus Gaussian fits; upper right, polynomial; lower left, linear.

The scatter around the polynomial was pretty Gaussian, but left a >3-sigma outlier at 2.7 C. The LSQ fit did almost as well, and put the polynomial 3-sigma outlier within the 3-sigma confidence limit. The histogram of the linear fit scatter required two Gaussians, and left an unfit point at 2.5-sigma (-2 C).

Epstein had no good statistical reason to choose the polynomial fit over the linear fit, and didn’t mention his rationale. The polynomial fit came closer to the high-temperature end-point at 30 C, but the linear fit came closer to the low-T end-point at 7 C, and was just as good through the internal data points. So, the higher-order fit may have been an attempt to save the point at 30 C.

Before presenting an application of these lessons, I’d like to show a review paper, which compares all the different dO-18:T calibration equations in current use: B. E. Bemis, H. J. Spero, J. Bijma, and D. W. Lea, “Reevaluation of the oxygen isotopic composition of planktonic foraminifera: Experimental results and revised paleotemperature equations.” [9]

This paper is particularly valuable because it reviews the earlier equations used to model the T:dO18 relationship.

Figure 6 below reproduces an annotated Figure 2 from Bemis, et al. It compares several T:dO-18 calibration equations from a variety of laboratories. They have similar slopes but are offset. The result is that a given dO-18 predicts a different temperature, depending on which calibration equation one chooses. The Figure is annotated with a couple of very revealing drop lines.

Figure 6: Original caption: “Comparison of temperature predictions using new O. universa and G. bulloides temperature:dO-18 relationships and published paleotemperature equations. Several published equations are identified for reference. Equations presented in this study predict lower temperatures than most other equations. Temperatures were calculated using the VSMOW to VPDB corrections listed in Table 1 for dO-18w values.”

The green drop lines show that a single temperature associates with dO-18 values ranging across 0.4 %o. That’s about 10-40x larger than the precision of a mass spectrometer dO-18 measurement. Alternatively, the horizontal red extensions show that a single dO-18 measurement predicts temperatures across a ~1.8 C range, representing an uncertainty of (+/-)0.9 C in choice of standards.

The 1.8 C excludes the three lines, labeled 11-Ch, 12-Ch, and 13-Ch. These refer to G. bulloides with 11-, 12-, and 13-chambered shells. Including them, the spread of temperatures at a single dO-18 is ~3.7 C (dashed red line).

In G. bulloides, the number of shell chambers increases with age. Specific gravity increases with the number of chambers, causing G. bulloides to sink into deeper waters. Later chambers sample different waters than the earlier ones, and incorporate the ratio of O-18 at depth. The three different lines show that the vertical change in dO-18 is significant, and they imply a false spread in T of about 0.5 C.

Here’s what Bemis, et al., say about it (p. 150): “Although most of these temperature:d18O relationships appear to be similar, temperature reconstructions can differ by as much as 2 C when ambient temperature varies from 15 to 25 C.”

That “2 C” reveals a higher level of systematic error that appears as variations among the different temperature reconstruction equations. This error should be included as part of the reported uncertainty whenever any one of these standard lines is used to determine a paleotemperature.

Some of the variations in standard lines are also due to confounding factors such as salinity and the activity of photosynthetic foraminiferal symbionts.

Bemis, et al., discuss this problem on page 152: “Non-equilibrium d18O values in planktonic foraminifera have never been adequately explained. Recently, laboratory experiments with live foraminifera have demonstrated that the photosynthetic activity of algal symbionts and the carbonate ion concentration ([CO32-]) of seawater also affect shell d18O values. In these cases an increase in symbiont photosynthetic activity or [CO32-] results in a decrease in shell d18O values. Given the inconsistent SST reconstructions obtained using existing paleotemperature equations and the recently identified parameters controlling shell d18O values, there is a clear need to reexamine the temperature:d18O relationships for planktonic foraminifera.”

Bemis, et al., are thoughtful and modest in this way throughout their paper. They present a candid review of the literature. They discuss the strengths and pitfalls in the field, and describe where more work needs to be done. In other words, they are doing honest science. The contrast could not be more stark between their approach and the pastiche of million-dollar claims and statistical maneuvering that swamps AGW-driven paleothermometry.

When the inter-methodological ~(+/-)0.9 C spread of standard T:dO-18 equations is combined in quadrature (rms) with the (+/-)1.34 C average measurement error from the Summary Table, the combined 1-sigma uncertainty in a dO-18 temperature = (+/-)sqrt(1.34^2+0.9^2) = (+/-)1.6 C. That doesn’t include any further invisible environmental effects that might confound a paleo-O-18 ratio, such as shifts in monsoon, in salinity, or in upwelling.
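The quadrature (root-sum-square) combination used here is simple to state in code:

```python
import math

def combine_in_quadrature(*sigmas):
    """Root-sum-square combination of independent 1-sigma uncertainties."""
    return math.sqrt(sum(s * s for s in sigmas))

measurement_scatter = 1.34   # average measurement error, Summary Table (C)
method_spread = 0.9          # spread among standard T:dO-18 equations (C)
print(round(combine_in_quadrature(measurement_scatter, method_spread), 2))  # 1.61
```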

A (+/-)1.6 C uncertainty is already twice the commonly accepted 0.8 C of 20th-century warming. T:dO-18 proxies are entirely unable to determine whether recent climate change is in any way historically or paleontologically unusual.

Now let’s look at Keigwin’s justly famous Sargasso Sea dO-18 proxy temperature reconstruction: (1996) “The Little Ice Age and Medieval Warm Period in the Sargasso Sea.” [12] The reconstructed Sargasso Sea paleotemperature rests on G. ruber calcite. G. ruber has photosynthetic symbionts, which induce the T:dO-18 artifacts mentioned by Bemis, et al. Keigwin is a good scientist and attempted to account for this by applying an average G. ruber correction. But removal of an average bias is effective only when the error envelope is random around a constant offset. Subtracting the average bias of a systematic error does not reduce the uncertainty width, and may even increase the total error if the systematic bias in a given data set differs from the average bias. Keigwin also assumed an average salinity of 36.5%o throughout, which may or may not be valid.

More to the point, no error bars appear on the reconstruction. Keigwin reported changes in paleotemperature of 1 C or 1.5 C, implying a temperature resolution with smaller errors than these values.

Keigwin used the T:dO-18 equation published by Shackleton in 1974 [13] to turn his Sargasso G. ruber dO-18 measurements into paleotemperatures. Unfortunately, Shackleton published his equation in the International Colloquium Journal of the French C.N.R.S., and neither I nor my French contact (thank you, Elodie) has been able to get that paper. Without it, one can’t directly evaluate the measurement point scatter.

However, in 1965 Shackleton published a paper demonstrating his methodology for obtaining high-precision dO-18 measurements. [14] Shackleton’s high-precision scatter should be the minimum scatter in his 1974 T:dO-18 equation.

Shackleton, 1965 made five replicate measurements of the dO-18 in five separate samples of a single piece of Italian marble (marble is calcium carbonate). Here’s his Table of results:

Reaction No.____1_____2_____3_____4_____5_____Mean______Std dev.

dO-18 value____4.1___4.45__4.35__4.2___4.2___4.26%o____0.12%o

Shackleton mistakenly reported the root-mean-square of the point scatter instead of the standard deviation. No big deal: the true 1-sigma = (+/-)0.14%o, not very different.
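The rms-versus-standard-deviation slip is easy to check from the five replicates in the table:

```python
import math

d18o = [4.1, 4.45, 4.35, 4.2, 4.2]      # Shackleton's five marble replicates (%o)
n = len(d18o)
mean = sum(d18o) / n
sq_devs = [(x - mean) ** 2 for x in d18o]

rms = math.sqrt(sum(sq_devs) / n)            # what Shackleton reported: 0.12 %o
std_dev = math.sqrt(sum(sq_devs) / (n - 1))  # the proper 1-sigma: 0.14 %o

print(round(mean, 2), round(rms, 2), round(std_dev, 2))   # 4.26 0.12 0.14
```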

In Shackleton’s 1965 words, “The major reason for discrepancy between successive measurements lies in the difficulty of preparing and handling the gas.” That is, the measurement scatter is due to the inevitable systematic laboratory error we’ve already seen above.

Shackleton’s 1974 standard T:dO-18 equation appears in Barrera, et al., [15] and it’s T = 16.9 – 4.38(dO-18) + 0.10(dO-18)^2. Plugging Shackleton’s high-precision 1-sigma=0.14%o into his equation yields an estimated minimum uncertainty of (+/-)0.61 C in any dO-18 temperature calculated using the Shackleton T:dO-18 equation.

At the ftp site where Keigwin’s data are located, one reads “Data precision: ~1% for carbonate; ~0.1 permil for d18-O.” So, Keigwin’s independent dO-18 measurements were good to about (+/-)0.1%o.

The uncertainty in temperature represented by Keigwin’s (+/-)0.1%o spread in measured dO-18 equates to (+/-)0.44 C in Shackleton’s equation.

The total measurement uncertainty in Keigwin’s dO-18 proxy temperature is the quadratic sum of the uncertainty in Shackleton’s equation plus the uncertainty in Keigwin’s own dO-18 measurements. That’s (+/-)sqrt[(0.61)^2+(0.44)^2]=(+/-)0.75 C. This represents measurement error, and is the 1-sigma minimum of error.
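Those numbers can be verified by propagating the %o uncertainties through Shackleton’s equation; its sensitivity is |dT/d(dO-18)| = |-4.38 + 0.20 x dO-18|, about 4.38 C per %o near dO-18 = 0 (the evaluation point is an assumption for this sketch):

```python
import math

def sensitivity(d18o):
    """|dT/d(dO-18)| for Shackleton's T = 16.9 - 4.38*d + 0.10*d**2."""
    return abs(-4.38 + 0.20 * d18o)

s = sensitivity(0.0)            # ~4.38 C per %o near dO-18 = 0

sigma_equation = s * 0.14       # Shackleton's high-precision scatter -> ~0.61 C
sigma_keigwin = s * 0.10        # Keigwin's reported dO-18 precision  -> ~0.44 C
total = math.sqrt(sigma_equation**2 + sigma_keigwin**2)
print(round(total, 2))          # ~0.75 C, the 1-sigma minimum of error
```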

And so now we get to see something possibly never before seen anywhere: a proxy paleotemperature series with true, physically real, 95% confidence level 2-sigma systematic error bars. Here it is:

Figure 7: Keigwin’s Sargasso Sea dO-18 proxy paleotemperature series, [12] showing 2-sigma systematic measurement error bars. The blue rectangle is the 95% confidence interval centered on the mean temperature of 23.0 C.

Let’s be clear on what Keigwin accomplished. He reconstructed 3175 years of nominal Sargasso Sea dO-18 SSTs with a precision of (+/-)1.5 C at the 95% confidence level. That’s an uncertainty of 6.5% about the mean, and is a darn good result. I’ve worked hard in the lab to get spectroscopic titrations to that level of accuracy. Hats off to Keigwin.

But it’s clear that changes in SSTs on the order of 1-1.5 C can’t be resolved in those data. The most that can be said is that it’s possible Sargasso Sea SSTs were higher 3000 years ago.

If we factor in the uncertainty due to the (+/-)0.9 C variation among all the various T:dO-18 standard equations (Figure 6), then the Sargasso Sea 95% confidence interval expands to (+/-)2.75 C.

This (+/-)2.75 C combines the uncertainty in the experimenter’s dO-18 measurements, the uncertainty in any given standard T:dO-18 equation, and the methodological uncertainty across all T:dO-18 equations.

So, (+/-)2.75 C is probably a good estimate of the methodological 95% confidence interval in any determination of a dO-18 paleotemperature. The confounding artifacts of paleo-variations in salinity, photosynthesis, upwelling, and meteoric water will bring further errors into any dO-18 reconstruction of paleotemperatures: errors that are invisible, but perhaps of analogous magnitude.

In the end, it’s true that the T:dO-18 relationship is soundly based in physics. However, it is not true that the relationship has produced a reliably high-resolution proxy for paleotemperatures.

Part II: Pseudo-Science: Statistical Thermometry

Now on to the typical published proxy paleotemperature reconstructions. I’ve gone through a representative set of eight high-status studies, looking for evidence of science. The test of science is whether any of them makes use of physical theory.

Executive summary: none of them are physically valid. Not one of them yields a temperature.

Before proceeding, a necessary word about correlation and causation. Here’s what Michael Tobis wrote about that: “If two signals are correlated, then each signal contains information about the other. Claiming otherwise is just silly.”

There’s a lot of that going around in proxythermometry, and clarification is a must. John Aldrich has a fine paper [16] describing the battle between Karl Pearson and G. Udny Yule over correlation indicating causation. Pearson believed it, Yule did not.

On page 373, Aldrich makes a very relevant distinction: “Statistical inference deals with inference from sample to population while scientific inference deals with the interpretation of the population in terms of a theoretical structure.”

That is, statistics is about the relations among numbers. Science is about deductions from a falsifiable theory.

We’ll see that the proxy studies below improperly mix these categories. They convert true statistics into false science.

To spice up the point, here are some fine examples of spurious correlations, and here are the winners of the 1998 Purdue University spurious correlations contest, including correlations between ice cream sales and death-by-drowning, and between ministers’ salaries and the price of vodka. Pace Michael Tobis, each of those correlated “signals” so obviously contains information about the other, and I hope that irony lays the issue to rest.

Diaz and Osuna [17] point out that distinguishing “between alchemy and science … is (1) the specification of rigorously tested models, which (2) adequately describe the available data, (3) encompass previous findings, and (4) are derived from well-based theories. (my numbers, my bold)”

The causal significance of any correlation is revealed only within the deductive context of a falsifiable theory that predicts the correlation. Statistics (inductive inference) never, ever, of itself reveals causation.

AGW paleo proxythermometry will be shown to be missing Diaz and Osuna’s elements 1, 3, and 4 of science. That makes it alchemy, otherwise known as pseudoscience.

That said, here we go: AGW proxythermometry:

1. Thomas J. Crowley and Thomas S. Lowery (2000) “How Warm Was the Medieval Warm Period?” [18]

They used fifteen series: three dO-18 (Keigwin’s Sargasso Sea proxy, GISP 2, and the Dunde Ice cap series), eight tree-ring series, the Central England temperature (CET) record, an Iceland temperature (IT) series, and two plant-growth proxies (China phenology and Michigan pollen).

All fifteen series were scaled to vary between 0 and 1, and then averaged. There was complete and utter neglect of the physical meaning of the five physically valid series (3 x dO18, IT, and CET). All of them were scaled to the same physically meaningless unitary bound.

Think about what this means: Crowley and Lowery took five physically meaningful series, and discarded the physics. That made the series fit for use in AGW-related proxythermometry.

There is no physical theory that converts tree ring metrics into temperatures. That theory does not exist, and any exact relationship remains entirely obscure.

So then how did Crowley and Lowery convert their unitized proxy average into temperature? Well, “The two composites were scaled to agree with the Jones et al. instrumental record for the Northern Hemisphere…,” and that settles the matter.

In short, the fifteen series were numerically adjusted to a common scale, averaged, and scaled up to the measurement record. Then C&L reported their temperatures to a resolution of (+/-)0.05 C. Measurement uncertainty in the physically real series was ignored in their final composite. That’s how you do science, AGW proxythermometry style.
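The unitize-average-rescale recipe described above can be sketched in a few lines. This is my reading of the procedure applied to synthetic stand-in series, not Crowley and Lowery’s code or data. Note that the first step throws away every physical unit, and the last step paints instrumental units back on by a purely linear map.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 400

# Synthetic stand-ins for series with utterly different physical units,
# e.g. a dO-18 record, a tree ring index, a temperature record
series = [rng.normal(loc=m, scale=s, size=n_years)
          for m, s in [(-18.0, 1.5), (1.0, 0.2), (9.0, 0.6)]]

def unitize(x):
    """Rescale to span [0, 1]; whatever units x had are gone."""
    return (x - x.min()) / (x.max() - x.min())

composite = np.mean([unitize(x) for x in series], axis=0)

# "Scale to agree with the instrumental record": force the composite's
# mean and variance over an overlap window to match measured temperatures.
instrumental = rng.normal(loc=14.0, scale=0.3, size=100)  # hypothetical deg C
overlap = composite[-100:]
reconstruction = ((composite - overlap.mean()) / overlap.std()
                  * instrumental.std() + instrumental.mean())
```

The final array now reads in degrees C, but no physical theory connected any input series to temperature; the units are an artifact of the closing linear rescale.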

Any physical theory employed?: No

Strictly statistical inference?: Yes

Physical content: none.

Physical validity: none.

Temperature meaning of the final composite: none.

2. Timothy J. Osborn and Keith R. Briffa (2006) “The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years.” [19]

Fourteen proxies — eleven of them tree rings, one dO-18 ice core (W. Greenland) — were divided by their respective standard deviation to produce a common unit magnitude, and then scaled into the measurement record. The ice core dO-18 had its physical meaning removed and its experimental uncertainty ignored.

Interestingly, between 1975 and 2000 the composite proxy declined away from the instrumental record. Osborn and Briffa didn’t hide the decline, to their everlasting credit, but instead wrote that this disconfirmation is due to “the expected consequences of noise in the proxy records.”

I estimated the “noise” by comparing its offset with respect to the temperature record, and it’s worth about 0.5 C. It didn’t appear as an uncertainty on their plot. In fact, they artificially matched the 1856-1995 means of the proxy series and the surface air temperature record, making the proxy look like temperature. The 0.5 C “noise” divergence got suppressed and looks much smaller than it really is. Actual 0.5 C “noise” error bars scaled onto the temperature record of their final Figure 3 would have made the whole enterprise theatrically useless, no matter that it is bereft of science in any case.
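The mean-matching trick is easy to illustrate with invented numbers. This is a toy model, not Osborn and Briffa’s data: a proxy that tracks a record and then diverges late in the period, once its full-period (1856-1995) mean is forced to match the record’s, overlays the record almost everywhere, with the divergence split between a small uniform early offset and a late drop.

```python
import numpy as np

years = np.arange(1856, 1996)

# Hypothetical instrumental record: a gentle linear warming
instrumental = 0.005 * (years - 1856)

# Hypothetical proxy: tracks the record, then diverges downward after 1975
proxy = instrumental.copy()
late = years >= 1975
proxy[late] -= 0.025 * (years[late] - 1974)

# Match the 1856-1995 means, as described
proxy_matched = proxy - proxy.mean() + instrumental.mean()

# The endpoint divergence survives at nearly full size (~0.5 C here),
# but mean-matching smears part of it into a small offset before 1975.
endpoint_gap = instrumental[-1] - proxy_matched[-1]
early_offset = (proxy_matched - instrumental)[~late].mean()
```

On a plot of `proxy_matched` over `instrumental`, the pre-1975 agreement looks excellent and the late gap looks like ordinary scatter, which is exactly the visual effect at issue.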

Any physical theory employed?: No

Strictly statistical inference?: Yes

Physical uncertainty in T: none.

Physical validity: none.

Temperature meaning of the composite: none.

3. Michael E. Mann, Zhihua Zhang, Malcolm K. Hughes, Raymond S. Bradley, Sonya K. Miller, Scott Rutherford, and Fenbiao Ni (2008) “Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia.” [20]

A large number of proxies of multiple lengths and provenances were used. They included some ice core, speleothem, and coral dO-18, but the data are vastly dominated by tree ring series. Mann & co. statistically correlated the series with local temperature during a “calibration period,” adjusted them to equal standard deviation, scaled them into the instrumental record, and published the composite at a resolution of 0.1 C (Figure 3). Their method again removed and discarded the physical meaning of the dO-18 proxies.

Any physical theory employed?: No

Strictly statistical inference?: Yes

Physical uncertainty in T: none.

Physical validity: none.

Temperature meaning of the composite: none.

4. Rosanne D’Arrigo, Rob Wilson, Gordon Jacoby (2006) “On the long-term context for late twentieth century warming.” [21]

Tree ring series from 66 sites, variance adjusted, scaled into the instrumental record and published with a resolution of 0.2 C (Figure 5 C).

Any physical theory employed?: No

Strictly statistical inference?: Yes

Physically valid temperature uncertainties: no

Physical meaning of the 0.2 C divisions: none.

Physical meaning of tree-ring temperatures: none available.

Temperature meaning of the composite: none.

5. Anders Moberg, Dmitry M. Sonechkin, Karin Holmgren, Nina M. Datsenko and Wibjörn Karlén (2005) “Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data.” [22]

Eighteen proxies: two dO-18 SSTs (Sargasso and Caribbean Sea foraminiferal dO-18), one stalagmite dO-18 (Soylegrotta, Norway), and seven tree ring series, plus other composites.

The proxies were processed using an excitingly novel wavelet transform method (it must be better), combined, variance adjusted, intensity scaled to the instrumental record over the calibration period, and published with a resolution of 0.2 C (Figure 2 D). Following standard practice, the authors extracted the physical meaning of the dO-18 proxies and then discarded it.

Any physical theory employed?: No

Strictly statistical inference?: Yes

Physical uncertainties propagated from the dO18 proxies into the final composite? No.

Physical meaning of the 0.2 C divisions: none.

Temperature meaning of the composite: none.

6. B.H. Luckman, K.R. Briffa, P.D. Jones and F.H. Schweingruber (1997) “Tree-ring based reconstruction of summer temperatures at the Columbia Icefield, Alberta, Canada, AD 1073-1983.” [23]

Sixty-three regional tree ring series, plus 38 fossil-wood series; used the standard statistical (not physical) calibration-verification function to convert tree rings to temperature, overlaid the composite and the instrumental record at their 1961-1990 mean, and published the result at 0.5 C resolution (Figure 8). But in the text they reported anomalies to (+/-)0.01 C resolution (e.g., Tables 3 & 4), and the mean anomalies to (+/-)0.001 C. That last is 10x greater claimed accuracy than the typical rating of a two-point calibrated platinum resistance thermometer within a modern aspirated shield under controlled laboratory conditions.

Any physical theory employed?: No

Strictly statistical inference?: Yes

Physical meaning of the proxies: none.

Temperature meaning of the composite: none.

7. Michael E. Mann, Scott Rutherford, Eugene Wahl, and Caspar Ammann (2005) “Testing the Fidelity of Methods Used in Proxy-Based Reconstructions of Past Climate.” [24]

This study is, in part, a methodological review, by the premier practitioners in the field, of the recommended ways to produce a proxy paleotemperature:

Method 1, the composite-plus-scale (CPS) method: “a dozen proxy series, each of which is assumed to represent a linear combination of local temperature variations and an additive “noise” component, are composited (typically at decadal resolution;…) and scaled against an instrumental hemispheric mean temperature series during an overlapping “calibration” interval to form a hemispheric reconstruction. (my bold)”

Method 2, Climate field reconstruction (CFR): “Our implementation of the CFR approach makes use of the regularized expectation maximization (RegEM) method of Schneider (2001), which has been applied to CFR in several recent studies. The method is similar to principal component analysis (PCA)-based approaches but employs an iterative estimate of data covariances to make more complete use of the available information . As in Rutherford et al. (2005), we tested (i) straight application of RegEM, (ii) a “hybrid frequency-domain calibration” approach that employs separate calibrations of high (shorter than 20-yr period) and low frequency (longer than 20-yr period) components of the annual mean data that are subsequently composited to form a single reconstruction, and (iii) a “stepwise” version of RegEM in which the reconstruction itself is increasingly used in calibrating successively older segments. (my bold)”

Restating the obvious: CPS: Assumed representative of temperature; statistical scaling into the instrumental record; methodological correlation = causation. Physical validity: none. Scientific content: none.

CFR: Principal component analysis (PCA): a numerical method devoid of intrinsic physical meaning. Principal components are numerically, not physically, orthogonal. Numerical PCs are typically composites of multiple decomposed (i.e., partial) physical signals of unknown magnitude. They have no particular physical meaning. Quantitative physical meaning cannot be assigned to PCs by reference to subjective judgments of ‘temperature dependence.’
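A small sketch makes the point about PCs concrete. Synthetic signals and invented mixing weights, not any real proxy network: run PCA on “proxies” that each respond to two different physical drivers, and the leading component comes back numerically orthogonal to the others but is a blend of both drivers, identical to neither.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
temperature = np.sin(np.linspace(0.0, 20.0, n))  # invented driver 1
moisture = np.cos(np.linspace(0.0, 7.0, n))      # invented driver 2

# Four synthetic "proxies", each a different mix of BOTH drivers plus noise
weights = np.array([[1.0, 0.8], [0.6, 1.0], [0.9, 0.3], [0.2, 1.1]])
proxies = (weights @ np.vstack([temperature, moisture])
           + 0.05 * rng.normal(size=(4, n)))

# PCA via SVD of the row-centered data matrix
X = proxies - proxies.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]  # leading principal component time series

# PCs are orthogonal by construction -- a numerical fact, not a physical one
orthogonality = abs(np.dot(Vt[0], Vt[1]))

# PC1 correlates partially with BOTH drivers; it is neither of them
r_temp = np.corrcoef(pc1, temperature)[0, 1]
r_moist = np.corrcoef(pc1, moisture)[0, 1]
```

PC1 is simply the direction of maximum variance; nothing in the algebra makes it “temperature,” and relabeling it as such is a judgment imposed from outside the method.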

Scaling the PCs into the temperature record? Correlation = causation.

‘Correlation = causation’ is possibly the most naive error possible in science. Mann et al. unashamedly reveal it as undergirding the entire field of tree ring proxy thermometry.

Scientific content of the Mann-Rutherford-Wahl-Ammann proxy method: zero.

Finally, an honorable mention:

8. Rob Wilson, Alexander Tudhope, Philip Brohan, Keith Briffa, Timothy Osborn, and Simon Tett (2006), “Two-hundred-fifty years of reconstructed and modeled tropical temperatures.” [25]

Wilson et al. reconstructed 250 years of SSTs using only coral records, including dO-18, strontium/calcium, uranium/calcium, and barium/calcium ratios. I’ve not assessed the latter three in any detail, but inspection of their point scatter is enough to imply that none of them will yield more accurate temperatures than dO-18.

However, all the Wilson et al. temperature proxies had real physical meaning. What a great opportunity to challenge the method: to discuss the impacts of salinity and biological disequilibrium, how to account for them, and all the other central elements of stable isotope marine temperatures.

So what did they do? Starting with about 60 proxy series, they threw out all those that didn’t correlate with local gridded temperatures. That left 16 proxies, 15 of which were dO-18. Why didn’t the other proxies correlate with temperature? Rob Wilson & co. were silent on the matter. After tossing two more proxies to avoid the problem of filtering away high frequencies, they ended up with 14 coral SST proxies.

After that, they employed standard statistical processing: divide by the standard deviation, average the proxies together (they used the “nesting procedure,” which adjusts for individual proxy length), and scale up to the instrumental record.
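Screening-by-correlation, the first step above and a standard move across the field, deserves its own demonstration. The sketch below uses pure synthetic white noise and an invented threshold: keep only the random series that happen to correlate with a calibration-period trend, average them, and the composite “finds” the trend in the calibration window while staying flat everywhere else.

```python
import numpy as np

rng = np.random.default_rng(3)
n_series, n_years, n_cal = 1000, 300, 100

# Pure white noise stands in for the candidate "proxies"
noise = rng.normal(size=(n_series, n_years))

# A rising instrumental trend over the calibration window
target = np.linspace(0.0, 1.0, n_cal)

# Screening: keep only series correlating with the target (r > 0.1)
r = np.array([np.corrcoef(s[-n_cal:], target)[0, 1] for s in noise])
kept = noise[r > 0.1]

composite = kept.mean(axis=0)

# Trend inside vs. outside the calibration window
slope_cal = np.polyfit(np.arange(n_cal), composite[-n_cal:], 1)[0]
slope_pre = np.polyfit(np.arange(n_years - n_cal), composite[:-n_cal], 1)[0]
```

The calibration-window rise is manufactured entirely by selection; none of the input series contains any signal at all.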

The honorable mention for these folks derives from the fact that they used only physically real proxies, and then discarded the physical meaning of all of them.

That puts them ahead of the other seven exemplars, who included proxies that had no known physical meaning at all.

Nevertheless,

Any physical theory employed?: No

Strictly statistical inference?: Yes

Any physically valid methodology? No.

Physical meaning of the proxies: present and accounted for, and then discarded.

Temperature meaning of the composite: none.

Summary Statement: AGW-related paleo proxythermometry as ubiquitously practiced consists of composites that rely entirely on statistical inference and numerical scaling. Not only do the composites have no scientific content; the methodology actively discards it.

Statistical methods: 100%.

Physical methods: nearly zero (stable isotopes excepted, but their physical meaning is invariably discarded in composite paleoproxies).

Temperature meaning of the numerically scaled composites: zero.

The seven studies are typical, and representative of the entire field of AGW-related proxy thermometry. As commonly practiced, it is a scientific charade. It’s pseudo-science through-and-through.

Stable isotope studies are real science, however. That field is cooking along and the scientists involved are properly paying attention to detail. I hereby fully except them from my general condemnation of the field of AGW proxythermometry.

With this study, I’ve now examined the reliability of all three legs of AGW science: Climate models (GCMs) here (calculations here), the surface air temperature record here (pdf downloads, all), and now proxy paleotemperature reconstructions.

Every one of them thoroughly neglects systematic error. The neglected systematic error shows that none of the methods – not one of them — is able to resolve or address the surface temperature change of the last 150 years.

Nevertheless, the pandemic pervasiveness of this neglect is the central mechanism by which AGW alarmism survives. This has been going on for at least 15 years; for GCMs, 24 years. Granting integrity, one can only conclude that the scientists, their reviewers, and their editors are uniformly incompetent.

Summary conclusion: When it comes to claims about unprecedented this-or-that in recent global surface temperatures, no one knows what they’re talking about.

I’m sure there are people who will dispute that conclusion. They are very welcome to come here and make their case.

References:

1. Mann, M.E., R.S. Bradley, and M.K. Hughes, Global-scale temperature patterns and climate forcing over the past six centuries. Nature, 1998. 392: p. 779-787.

2. Dansgaard, W., Stable isotopes in precipitation. Tellus, 1964. 16(4): p. 436-468.

3. McCrea, J.M., On the Isotopic Chemistry of Carbonates and a Paleotemperature Scale. J. Chem. Phys., 1950. 18(6): p. 849-857.

4. Urey, H.C., The thermodynamic properties of isotopic substances. J. Chem. Soc., 1947: p. 562-581.

5. Brand, W.A., High precision Isotope Ratio Monitoring Techniques in Mass Spectrometry. J. Mass. Spectrosc., 1996. 31(3): p. 225-235.

6. Kim, S.-T., et al., Oxygen isotope fractionation between synthetic aragonite and water: Influence of temperature and Mg2+ concentration. Geochimica et Cosmochimica Acta, 2007. 71(19): p. 4704-4715.

7. O’Neil, J.R., R.N. Clayton, and T.K. Mayeda, Oxygen Isotope Fractionation in Divalent Metal Carbonates. J. Chem. Phys., 1969. 51(12): p. 5547-5558.

8. Epstein, S., et al., Revised Carbonate-Water Isotopic Temperature Scale. Geol. Soc. Amer. Bull., 1953. 64(11): p. 1315-1326.

9. Bemis, B.E., et al., Reevaluation of the oxygen isotopic composition of planktonic foraminifera: Experimental results and revised paleotemperature equations. Paleoceanography, 1998. 13(2): p. 150-160.

10. Li, X. and W. Liu, Oxygen isotope fractionation in the ostracod Eucypris mareotica: results from a culture experiment and implications for paleoclimate reconstruction. Journal of Paleolimnology, 2010. 43(1): p. 111-120.

11. Friedman, G.M., Temperature and salinity effects on 18O fractionation for rapidly precipitated carbonates: Laboratory experiments with alkaline lake water - Perspective. Episodes, 1998. 21: p. 97-98.

12. Keigwin, L.D., The Little Ice Age and Medieval Warm Period in the Sargasso Sea. Science, 1996. 274(5292): p. 1503-1508; data site: ftp://ftp.ncdc.noaa.gov/pub/data/paleo/paleocean/by_contributor/keigwin1996/.

13. Shackleton, N.J., Attainment of isotopic equilibrium between ocean water and the benthonic foraminifera genus Uvigerina: Isotopic changes in the ocean during the last glacial. Colloq. Int. C.N.R.S., 1974. 219: p. 203-209.

14. Shackleton, N.J., The high-precision isotopic analysis of oxygen and carbon in carbon dioxide. J. Sci. Instrum., 1965. 42(9): p. 689-692.

15. Barrera, E., M.J.S. Tevesz, and J.G. Carter, Variations in Oxygen and Carbon Isotopic Compositions and Microstructure of the Shell of Adamussium colbecki (Bivalvia). PALAIOS, 1990. 5(2): p. 149-159.

16. Aldrich, J., Correlations Genuine and Spurious in Pearson and Yule. Statistical Science, 1995. 10(4): p. 364-376.

17. Díaz, E. and R. Osuna, Understanding spurious correlation: a rejoinder to Kliman. Journal of Post Keynesian Economics, 2008. 31(2): p. 357-362.

18. Crowley, T.J. and T.S. Lowery, How Warm Was the Medieval Warm Period? AMBIO, 2000. 29(1): p. 51-54.

19. Osborn, T.J. and K.R. Briffa, The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years. Science, 2006. 311(5762): p. 841-844.

20. Mann, M.E., et al., Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia. Proc. Natl. Acad. Sci., 2008. 105(36): p. 13252-13257.

21. D’Arrigo, R., R. Wilson, and G. Jacoby, On the long-term context for late twentieth century warming. J. Geophys. Res., 2006. 111(D3): p. D03103.

22. Moberg, A., et al., Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data. Nature, 2005. 433(7026): p. 613-617.

23. Luckman, B.H., et al., Tree-ring based reconstruction of summer temperatures at the Columbia Icefield, Alberta, Canada, AD 1073-1983. The Holocene, 1997. 7(4): p. 375-389.

24. Mann, M.E., et al., Testing the Fidelity of Methods Used in Proxy-Based Reconstructions of Past Climate. J. Climate, 2005. 18(20): p. 4097-4107.

25. Wilson, R., et al., Two-hundred-fifty years of reconstructed and modeled tropical temperatures. J. Geophys. Res., 2006. 111(C10): p. C10007.

This entry was posted in Paleoclimatology. Bookmark the permalink.

183 Responses to Proxy Science and Proxy Pseudo-Science

  1. kim2ooo says:

    Watching this thread with interest.

  2. Jessie says:

    Sweet, thanks Pat Frank.
    Besides this great learning curve
    and

    Spencer shows compelling evidence of UHI in CRUTem3 data

    http://wattsupwiththat.com/2012/03/30/spencer-shows-compelling-evidence-of-uhi-in-crutem3-data/

    I have organised my weekend around some serious study and social events. And enlightenment.

    Thank you for penning, and the hard work undertaken in providing such a comprehensive piece.

  3. Mark Smith says:

Deriving a physical theory by rigorous method would be hideously difficult for any of the non-radiological processes. Take tree rings: the number of factors that go into tree growth is enormous, and many factors, like local rainfall, are simply unknown at the start, before you even derive how they would affect the trees.

  4. Robin says:

    A tour de force! What Journal would publish this, I wonder? Pat Frank has really gone to town, and one can only hope that the authors of the studies that he has examined will have the courage to respond to his challenge.

    As someone who has been involved in industrial applications of the statistics of precision estimation I can only contribute my congratulations.

    Robin

  5. johnfpittman says:

    Did you send M Tobis amd A Smith a copy? And an invitation to post? I would like to read the response.

    You do need to change your rankings though. ANY that use methods as outlined here:

    7. Michael E. Mann, Scott Rutherford, Eugene Wahl, and Caspar Ammann (2005) “Testing the Fidelity of Methods Used in Proxy-Based Reconstructions of Past Climate.” [24]

    get -1 from biophysics. The meta data from the Climategate 1 emails indicated that as early as 1998 physical evidence existed that refuted the basic methodological assumption outlined in the referenced study.

I believe that is almost all of them that use tree rings.

  6. Ric Werme says:

There are several URL link problems in this post. Typical is “Ross McKitrick revealed its”, which goes to http://wattsupwiththat.com/2012/04/03/proxy-science-and-proxy-pseudo-science/%E2%80%9D (that’s a quote mark on the end).

    I’d volunteer to clean them up, but I have neither access nor time today.

  7. 01wmarsh says:

    Interesting post, although I will admit that a good deal of it was over my head. :)

I’m still confused by the link to the ‘insulting comment’ provided. Traveling to the link destination yields a post on this blog called “‘Climate flicker’ at the end of the last glacial period” by Mr Watts. Was that the intended destination and, if it was, I’m having a hard time identifying what part of the post and/or comments were deemed insulting.

  8. Keith Battye says:

    Dear Pat Frank thank you so very much for that cogent and eminently readable article.

    It does leave one wondering what on Earth these people think they are doing and more importantly why.

    I will be spreading this message far and wide. Once again thanks.

    (BTW many of the links you provide just go 404 )

  9. Bill Marsh says:

    Interesting post, although I will admit most was over my head.

I am confused about the link you provided to the ‘insulting comment’. Traveling the link yields an Anthony Watts post, “‘Climate flicker’ at the end of the last glacial period”. Was that the intended destination? If so, I am having trouble identifying exactly what was insulting about the post. Could you be more specific?

  10. Peter Miller says:

    Wow!

Absolutely, definitely not something which will find its way into the next IPCC Report.

    I always enjoy some genuine factual analysis which does the Team and their Cause a well-deserved piece of no good.

  11. jim says:

    Keith: “…what on Earth these people think they are doing and more importantly why. ”
    JK: Collecting research grants, award money, fame and some business income.

    Thanks
    JK

  12. Dan Johnston says:

    Well done. Your analysis shows just how easy it is to make good science go bad.

  13. kadaka (KD Knoebel) says:

    Arghh! Pull the post, pull it now! More than half the links are bad from an added double-quote at the end, and three links in the first two paragraphs are trying to reference this post.

    Please fix!

  14. David A says:

Thank you Pat Frank. I read the entire article, which is certainly above a layman’s pay grade. However, the basic understanding of how systematic error is not recognised, physical mechanisms are not accounted for, and physical uncertainties are not properly accounted for, comes through.

It started a little slow in that the link in this sentence appears to have nothing to do with M&M’s work, but is instead an article on rapid climate flux 12,000 years ago: “It’s become very clear that most published proxy thermometry since 1998 [1] is not at all science, and most thoroughly so because Steve McIntyre and Ross McKitrick revealed its foundation in ad hoc statistical numerology.”

    Did M&M point out an entirely different line of criticism to the same papers in their evaluations?
    Do all the reconstructions, including those showing a strong MWP linked at CO2 science, have similar limitations? (I understand if you have not looked at all of them, ;-) It is also true that the numerous solar reconstructions, which statisticaly have a greater correlation to assumed T then CO2, also lack a definitive physical mechanism which can be tested? Finally, if the MWP was as warm or warmer then present, then, assuming the “teams” flat lined CO2 levels, are the climate science models senstivity to CO2 likely very wrong? …and is this why they work so hard to minimize the MWP? (sorry about asking five questions)

  15. Dr Burns says:

    Bad links:
    >>I’ve now examined the reliability of all three legs of AGW science: Climate models (GCMs) here (calculations here), the surface air temperature record here (pdf downloads, all),

  16. Bloke down the pub says:

    Pity the links to the correlation isn’t causation were broken, they could have been useful.
    Question, in science is correlation without causation better or not than causation without correlation?

  17. j ferguson says:

    Thank you, Pat Frank,
    this is a very compelling review.

    btw, these links don’t work:
    “With this study, I’ve now examined the reliability of all three legs of AGW science: Climate models (GCMs) here (calculations here), the surface air temperature record here (pdf downloads, all), and now proxy paleotemperature reconstructions.”

    I look forward to their repair and reading the pdfs.

    I’ve had a lot of personal difficulty reconciling my suspicion that the tree-ring paelothermometry was nonsense with the possibility that it actually is nonsense. You’ve helped immensely.

    Thanks again.

  18. Lance says:

    The emperor has no clothes.

    Of course he has been strutting around naked for years now.

    Hopefully Mr Frank’s work will help to increase the laughter in the crowd until the old fellow retreats in embarrassment.

  19. tallbloke says:

A tour de force. Clearly written and presented, and a damning indictment of the big money paleo hockey team. The salinity issue is particularly interesting wrt millennial scale series. The TOC is around a thousand years, and is possibly temperature related. The salinity variable may be either amplifying or diminishing the temperature signal in the results, but which is it?

    Minor note.
    There are spurious quote marks on the ends of the spurious correlations links which are breaking them.

  20. ColinW says:

    [some of the web links in the article have a spurious quotation mark " at the end of the link, breaking them]

  21. David A says:

    Oh, one more comment Mr Frank. Do you suppose this is somewhat why one of the team members (Edward Cook) suggested they all (the team) get together, do their best combined work, after which they would know “snip” all about less then 100 year reconstructions, and “snip all” about greater then 100 year reconstructions as far as the TRUTH (his caps, not mine) relating to T?

    Yes Sir, I think in the climategate e-mails you have a “Willis like elevator speech” to summarize your article, which is perhaps why, when being secretly honest, they thought they should then publish, retire, and not leave a forwarding address. Link to entire quote here. http://www.google.com/url?sa=t&rct=j&q=&esrc=s&frm=1&source=web&cd=1&ved=0CCYQFjAA&url=http%3A%2F%2Fjunkscience.com%2F2011%2F11%2F26%2Fclimategate-2-0-we-know-f-all%2F&ei=Ael6T7meE6muiQLrx-GpCg&usg=AFQjCNFvGcCz8RH_ZMAR7fHomqJ6T7435g&sig2=1FHlOXR2iVKjLfEZlxaXAQ

  22. Kaboom says:

    The three PDF download links near the end are all broken due to an extra ” at the end of the URL. Please fix and delete this comment :)

  23. James Ard says:

    I’m not a scientist, but using trees as a temperature proxy seems crazy. Both temperature and co2 levels affect the growth rate of trees, among other factors, how do you attribute what growth to what?

  24. Craig Goodrich says:

    Thanks for an illuminating analysis. One point, though: all of your links end with a double-quote, which confuses Internet Explorer and have to be removed manually to retrieve the page.

  25. Bill Illis says:

    The δ18O isotopes vary considerably depending on temperature, latitude, altitude, proximity to the ocean, rate of precipitation, seasonally and local climate and geologic effects. On long time scales over millions of years, the amount of δ18O declines in sedimentary rock due to diagenesis.

    So there are many formulae on how to convert δ18O into a local temperature and a global temperature. Climate science has often misused the different formulae to obtain whatever number they want to.

    The best explanation I have found of how to use it properly is by Jan Veizer (although there are several textbooks on this). This is only for those who might end up working with the data.

    http://www.science.uottawa.ca/eih/ch3/ch3.htm

Basically, you have to know how the δ18O varies with temperature (and other potential impactors such as seasonal changes) for the particular location and source you are using. If you are on top of a 3 km high glacier, you need a different formula than if you have taken deep sea rock cores at the equator.

    Other than that, however, the δ18O isotopes are the best temperature proxy we have.

  26. David A says:

    James Ard says:
    April 3, 2012 at 5:23 am
    I’m not a scientist, but using trees as a temperature proxy seems crazy. Both temperature and co2 levels affect the growth rate of trees, among other factors, how do you attribute what growth to what?
    ——————————————————————-
    Well James, you can’t, unless you are a climate scientist, and part of the team, then you can feel the soul of the tree.

    “…..I have wondered about trees.

    They are sensitive to light, to moisture, to wind, to pressure.
    Sensitivity implies sensation. Might a man feel into the soul of a tree
    for these sensations? If a tree were capable of awareness, this faculty
    might prove useful. ”

    “The Miracle Workers” by Jack Vance

  27. Dr Burns says:

    Pat,
    An interesting paper. I’d be keen to hear your comments about the claimed errors in CRU’s measured ‘global temperature’ rather than just proxies.

    CRU states of global average temperatures:
    “Annual values are approximately accurate to +/- 0.05°C (two standard errors) for the period since 1951.” Global temperatures are presented to 0.001 degrees back to 1850.

    http://www.cru.uea.ac.uk/cru/data/temperature/

    CRU errors seem to be calculated on the assumption that ‘global temperature’ is a physical quantity that has random errors. More measurements are claimed to reduce these errors, as they would for random errors.

    Temperatures around 1951 were recorded to +/- 0.5 degrees
    http://www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf page 11

    The error for more than 90% of individual weather stations is greater than 1.0 degree

    http://www.surfacestations.org/

    The accuracy of measurements seems little better than a finger held aloft. It seems to me that if I used my forefinger as a temperature sensor, and took a sufficient number of readings, I too could claim an accuracy of +/- 0.05°C for my forefinger.

    I’d appreciate your comments as to what the true accuracy of CRU temperatures might be.

  28. I guess it’s time to ponder some more about the water isotope paleothermometer. I’ll prepare something for that.

    Hint: Non calor sed umor

  29. David A says:

    tallbloke says:
    April 3, 2012 at 5:11 am
    —————————————————

Tallbloke, I also was curious whether the proxy signal, relating to T, was biased up or down with more salinity.

  30. proskeptic says:

    Apologies for hijacking this thread for a moment but you guys have seen this interview with James Lovelock from 2010 right? It’s probably old news – but wtf? And the Comments section! A bunch of supposed liberals with nothing to say about their hero advocating a suspension of democracy!

    http://www.guardian.co.uk/environment/blog/2010/mar/29/james-lovelock

    “We need a more authoritative world. We’ve become a sort of cheeky, egalitarian world where everyone can have their say. It’s all very well, but there are certain circumstances – a war is a typical example – where you can’t do that. You’ve got to have a few people with authority who you trust who are running it. And they should be very accountable too, of course.

    But it can’t happen in a modern democracy. This is one of the problems. What’s the alternative to democracy? There isn’t one. But even the best democracies agree that when a major war approaches, democracy must be put on hold for the time being. I have a feeling that climate change may be an issue as severe as a war. It may be necessary to put democracy on hold for a while.”

    [snip . . OT . . kbmod]

  31. Mickey Reno says:

    Thanks for your thorough and elucidating look at the state of the art of the modern ‘science’ of proxy-paleo-thermometry. Your honorable mention example #8 actually had me laughing.

I’m left wondering exactly when it became acceptable in any science to ignore the axiom that correlations don’t imply causation? And why are scientific journals allowing papers that make such presumptions to be published? This is one of the most stunning parts of climate science as practiced by “Real” climate scientists.

  32. Bill Wood says:

    Correlation does not equal causation. Causation normally follows the arrow of time. Two simultaneous events may indicate that they have a common cause. It is up to physical science to develop the theoretical basis for a causative relationship.

    One of the constant relationships is hemlines to stock market performance. This makes Chairman Bernanke the ultimate arbiter of fashion. I am looking forward to this summer if he continues to flood the markets with liquidity.

  33. richardscourtney says:

    Several commentators miss the main point in the above article; i.e.

    Correlation implies nothing about causation
    but
    absence of correlation disproves direct causation.

    So, anything which assumes causation exists because correlation exists is pseudoscience. And almost all proxy studies in climatology make that assumption.

    Simply, I am dumbfounded by the quotation in the article which reports Michael Tobis having written;
    “If two signals are correlated, then each signal contains information about the other. Claiming otherwise is just silly.”
    An undergraduate would obtain a fail mark for writing that in an assignment.

    It is important to note that the assumption of correlations resulting from unknown causal links is common to almost all proxy studies used in ‘climate science’.

    As an addendum, correlation may suggest the possibility of a causal link (because absence of correlation disproves direct causation) but that possibility needs to be investigated before any assumption that a causal link exists. When (n.b. only when) the mechanism of a causal link between two parameters has been demonstrated to exist then the correlation may be assumed to be an indicator of one parameter by the other.

    Richard
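Richard’s point is easy to demonstrate numerically. In this sketch, all numbers are invented: two series that merely share a time trend (hemlines and stock prices, say) correlate almost perfectly, yet once the common trend is removed the correlation vanishes, so the raw correlation carried no causal information at all.

```python
import numpy as np

# Two invented series that merely share a time trend -- neither causes the other.
t = np.arange(100, dtype=float)
rng = np.random.default_rng(42)
hemline = 10.0 + 0.5 * t + rng.normal(0.0, 1.0, t.size)
stocks = 1000.0 + 20.0 * t + rng.normal(0.0, 40.0, t.size)

# The raw series correlate almost perfectly...
r = np.corrcoef(hemline, stocks)[0, 1]

# ...but after removing the shared trend from each, nothing is left:
res_h = hemline - np.polyval(np.polyfit(t, hemline, 1), t)
res_s = stocks - np.polyval(np.polyfit(t, stocks, 1), t)
r_res = np.corrcoef(res_h, res_s)[0, 1]
```

The high value of `r` here reflects the shared trend only, which is exactly why a proxy/temperature correlation, by itself, demonstrates nothing physical.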

  34. Steve Richards says:

    Stunning!

    Someone who understands science, statistics and is prepared to put his head above the parapet.

  35. trbixler says:

    Impressive review of so much pseudo science that has received so much fanfare and money. Thank You for the effort and Thank You to Anthony for the place to post this information.

  36. Orson Olson says:

    Long and detailed; thoughtful and meaty. Thank you Pat Frank. Many of us have wanted such a tutorial for a great many years. Now we’ve got one. Thanks again!

  37. elmer says:

    The problem with ice cores is they aren’t annual rings or layers, they just indicate cold/warm temperatures. In Minnesota one storm can produce several different layers of snow types. Tree rings are also problematic, although they are annual rings, temperature is just one variable in a trees annual growth. Water, soil and EVEN CO2 all have an impact on the tree’s annual growth. If CO2 increases in a certain area the trees will grow faster regardless of temperature.

  38. Dixon says:

    This is brilliant (if rather more than a primer :)

    Presumably the paleotemp from a shell is some kind of time and growth-weighted average of the temperature while the animal was alive? If so, and given how much the temperature in the water column can vary in coastal environments I can see potential pitfalls in using this sort of information as a temperature proxy (as well as depth/animal size). For instance off Western Australia last March we had an extreme heatwave which literally wiped out huge swathes of Haliotis Roeii (abalone). The mortality was caused by a few days of especially hot still weather which allowed water temperature to climb above those tolerated by the abalone. These shells will now be over-represented in any deposit, and the growth periods within them not reflective of the heat event at all.

    How would seasonal variation in water temps get captured in the isotope ratios?

    Are we sure that a shell can sit in a CO2-rich liquid medium without any exchange processes occurring between the shell matrix and the liquid? Presumably the assumption in this type of paleothermo approach is that there are none (much as ice cores are allegedly sealed and stable over the period the core is accumulating, and while it is removed and stored). It would be interesting to know if this has been considered.

    BTW I loved your distinctions between statistical and scientific inferences, that should be explicitly stated more often!
    Thank you.
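Dixon’s growth-weighting question can be made concrete with a toy model. Everything below is invented for illustration (the temperature cycle, the growth curve); the point is only that a temperature-dependent growth rate biases the shell’s recorded average away from the true annual mean.

```python
import numpy as np

# Hypothetical annual SST cycle (degC), sampled daily -- invented numbers.
days = np.arange(365)
temp = 20.0 + 6.0 * np.sin(2.0 * np.pi * days / 365.0)

# Suppose (purely illustrative) the organism calcifies fastest near 22 C
# and stops entirely when the water is colder than about 17 C.
growth = np.clip(1.0 - ((temp - 22.0) / 5.0) ** 2, 0.0, None)

true_mean = temp.mean()                            # ~20 C annual mean
recorded_mean = np.average(temp, weights=growth)   # what the shell "sees"
```

Because the cold season contributes little or no shell material, `recorded_mean` comes out warmer than `true_mean`; a mortality event like the one Dixon describes would skew the deposit in yet another way.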

  39. elmer says:

    It’s a self-fulfilling prophecy:

    Increased CO2 levels make tree rings bigger.
    Bigger tree rings are being used to show an increase in temperature.
    Therefore increased CO2 levels cause temperatures to increase.

  40. Max Hugoson says:

    I’d be very careful about upsetting the vendor’s tables in the Temple, around Easter time.

    You know what happens to prophets in Jerusalem. The ‘religious authorities’ are looking for someone to betray you right now. Don’t pray in any gardens and be glad the Ides of March are behind us!

  41. theduke says:

    I have used the phrase, “Correlation is not causation” frequently in posts about climate and I now have a much better understanding of what I meant!

  42. wayne Job says:

    Mr Frank, you have indeed been frank, and if your analysis is correct you have laid down a gauntlet. The gloves are off; expect a bare-knuckle fight. Pulling the rug out from under these leading lights of global warming will most likely elicit a severe response. Thank you; it is time they got a bloody nose.

  43. Joe Born says:

    I’d like to come back and read this again when the links work.

  44. Brian D says:

    Was it just me, or did I just see the scientific version of Hulk Hogan body-slamming Andre the Giant?

  45. gnomish says:

    thank you for a most cogent article. it was a pleasure to read. it is a pleasure to know there is someone out there remaining who is capable of writing informatively, implying that he credits the reader with intelligence.
    contains no hawafeena!

  46. Gary Pearse says:

    This thorough and very readable work, which includes detail on how the oxygen isotopes respond to temperature and are incorporated into calcium carbonate (all so that the reader can properly follow the criticism of paleoproxies), is masterful and deserves very wide distribution. I think you are too generous in your “Granting integrity, one can only conclude that the scientists, their reviewers, and their editors are uniformly incompetent.” Incompetence without doubt, but the level of integrity has been fully revealed in Climategate and other places as being disgustingly absent. The AGW industry has sprung from ideology and a greed fed by an unprecedented deluge of cash, for the destruction of civilization. Honest science could only get in the way of this movement.

    Sadly, there is a slim chance of publishing this excellent paper in the A-list journals. Roy Spencer’s recent bemoaning of the unlikelihood of his paper being published (on UHI accounting for much of the uptrend in local, and probably global, temperature) suggests that a new generation of scientific journals has become necessary for the health of science. Please don’t give Nature, Scientific American, the Proceedings of the Royal Society, etc., a pardon after this nightmare has ended.

  47. wmconnolley says:

    So many words, and so little point. Meanwhile, the stream of science flows on around and over you, and doesn’t even notice you exist.

    The obvious thing to look at is the comparison of borehole thermometry to D-O18 in Greenland.

  48. wayne says:

    It’s great to see some good and proper science in climatology every now and then. Well written and constructed Pat, and I have to agree with your conclusion. I have little to add, you seem to have covered it all.

  49. woodNfish says:

    “Granting integrity, one can only conclude that the scientists, their reviewers, and their editors are uniformly incompetent.”

    Sorry Dr. Frank, but these people have no integrity. They are outright frauds and they know it and they are doing it anyway.

  50. Stephen Richards says:

    Thank you for penning, and the hard work undertaken in providing such a comprehensive piece.

    I echo this and all the above, Pat. A nice piece of cogent argument which goes to the heart of how science is done and written.

    Many thanks. If only more people from the establishment would do the same.

  51. ckb says:

    “So many words, and so little point. Meanwhile, the stream of science flows on around and over you, and doesn’t even notice you exist.

    The obvious thing to look at is the comparison of borehole thermometry to D-O18 in Greenland.”

    Oh look, a little comment from a small man. Content free, per usual. Do you disagree with the analysis? Do tell…

  52. Zeke says:

    “So then how did Crowley and Lowery convert their unitized proxy average into temperature? Well, “The two composites were scaled to agree with the Jones et al. instrumental record for the Northern Hemisphere…,” and that settles the matter.

    In short, the fifteen series were numerically adjusted to a common scale, averaged, and scaled up to the measurement record. Then C&L reported their temperatures to a resolution of (+/-)0.05 C. Measurement uncertainty in the physically real series was ignored in their final composite.”

    This is a memorable lesson in forming a larger system from smaller systems. This is where the malpractice will lie. Then it gets fed into a computer model, which is basically a scientific diagram which you don’t get to look at.
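For readers unfamiliar with what “scaled to agree with the instrumental record” typically amounts to, here is a minimal sketch of one common form of such scaling (mean-and-variance matching). Crowley and Lowery’s exact procedure may differ, and the data below are invented.

```python
import numpy as np

def scale_to_target(proxy, target):
    """Rescale a unitless proxy composite so its mean and standard deviation
    match a target (instrumental) series over the calibration overlap.
    Purely statistical variance matching -- no physical calibration involved."""
    return (proxy - proxy.mean()) / proxy.std() * target.std() + target.mean()

# Invented toy data standing in for a proxy composite and an instrumental overlap:
rng = np.random.default_rng(1)
proxy = rng.normal(size=50)
instr = 14.0 + 0.3 * rng.normal(size=50)

scaled = scale_to_target(proxy, instr)
```

Note that the rescaled series matches the target’s mean and spread by construction, regardless of whether the proxy has anything to do with temperature; the step imports units of degrees C without importing any physics.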

  53. James Ard says:

    Years ago I would never have believed Elmer’s self fulfilling prophecy comment could be anything but a joke. But my experience watching these frauds leads me to believe that’s exactly why trees were selected as a proxy.

  54. Excellent review and explanation. One must now ask: One, how do these authors and the reviewers of their work justify publishing results without error bars and statements of systematic error? Two, how is it that unscientific foolishness such as paleo temperatures from tree rings even gets into the “scientific literature”?

  55. Gary says:

    It also should be mentioned that Nick Shackleton was a careful and prolific scientist who pioneered mass-spectrometry and performed thousands of O-18 analyses. A major contribution he made was refining the dating of sediment cores which is fundamental to their use as proxies.

  56. Jan says:

    I think these are the correct links with respect to this:

    Michael quickly appealed to his home authorities at, Planet3.org (http://planet3.org/2012/02/21/singers-proxy-argument-refuted/) . We all had a lovely conversation that ended with moderator-cum-debater Arthur Smith indulging a false claim of insult to impose censorship (insulting comment in full here (http://planet3.org/moderated-comments/#comment-4647) for the strong of stomach).

    You need to scroll down to the bottom of the page, past the flowering crab to see the discussion:

    http://planet3.org/2012/02/21/singers-proxy-argument-refuted/

    The link was moderated and put into the ‘bore hole’ and shown in full here:

    http://planet3.org/moderated-comments/#comment-4647

  57. LamontT says:

    Very nice. Yes things CAN be used as a proxy for temperature BUT it requires knowing a lot about what was going on with the proxy. You can’t simply say that something is a proxy and then blindly go and apply it across the board. This is one of the points where climate science broke down.

    They declared as a flat out rule that tree rings are always a proxy for temperature. And then use any and all tree rings that way in any and all regions. Then when they can actually compare their proxy to actual recorded temperatures in various places it is found not to work that well. It is clear that they didn’t stop and wonder about the mechanics of what is going on and investigate it. Instead they hand waved and went on.

    The reality is that anything used as a proxy works only some of the time, when the conditions are right. And you need to know those conditions before you can safely use it as a temperature proxy. There are other things that can cause the same effect that is assigned to temperature, which means that temperature isn’t always what creates the measured data. And that means that you can’t simply use something like tree rings or other proxies blindly as temperature proxies. You must know a lot more about each measured point, and what is going on as a secondary effect, before you can reliably use any set as a proxy. And this is a massive amount of work that isn’t being done.

    Essentially, once something has been declared a proxy it gets blindly used from then on, and that can’t work. Not reliably. Sad, really. I suspect that the entire proxy business will be thrown out because of the way it has been defended, rather than people sitting down and doing the massive amount of work needed to separate valid from invalid proxies. And if you can’t determine the secondary factors for a given time period, you can’t reliably use the proxy for that period. A big pain, but this is the reality of science: it isn’t neat and orderly, but a lot of work if you want to do it right.

  58. James Sexton says:

    wmconnolley says:
    April 3, 2012 at 8:42 am

    So many words, and so little point. Meanwhile, the stream of science flows on around and over you, and doesn’t even notice you exist.

    The obvious thing to look at is the comparison of borehole thermometry to D-O18 in Greenland.
    ===========================================================
    Thanks Billy! Your thoughtful insights are a welcomed contribution to the discussion!!

    Did you just do a hand wave? Don’t look here….. look there!! I’m a bit disappointed. After such a lengthy essay, the response is “go look at Greenland.” :-|

    In the meantime, science hasn’t continued. Like a dam built with mired and muddied thought, unwarranted assumptions, and untestable hypotheses, braced with ideological pursuits, the flow of science, particularly in the climate arena, has essentially stood still for going on 2-3 decades now, with only trickles released from time to time.

    But, thanks to people like Pat Frank, the dam will burst soon enough.

  59. Don Keiller says:

    Michael Tobis was correct when he said “If two signals are correlated, then each signal contains information about the other. Claiming otherwise is just silly.”
    Any climate reconstruction whose authors include “Jones” or “Mann” will be devoid of real science and vice versa.

  60. Pat Frank says:

    Sorry about the busted links, everyone. They all worked when I pre-tested them. But I’m no longer in charge. :-) This evening, I’ll check them all and for the broken ones, post the proper links down here in the reply-zone.

    Also, thanks for the commentary so far.

  61. richardscourtney says:

    wmconnolley:

    You use few words and they have no point.

    Perhaps you would be willing to share why you bothered to make such a pointless post which only serves to show that you did not read the article?

    Richard

  62. Richard M says:

    wmconnolley says:
    April 3, 2012 at 8:42 am

    So many words, and so little point.

    So few words and absolutely no point. Pretty much says it all.

  63. David L. Hagen says:

    Pat Frank
    Re: “Every one of them thoroughly neglects systematic error.”
    Thanks for your excellent effort in exposing systemic uncertainties and unphysical methods.

    For the formal uncertainty methodology that is absent in almost all climate temperature modeling, I refer readers to NIST’s Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, Barry N. Taylor and Chris E. Kuyatt, NIST Technical Note 1297, 1994 Edition (Supersedes 1993 Edition)

    Re:

    A (+/-)1.6 C uncertainty is already 2x larger than the commonly accepted 0.8 C of 20th century warming. . . .
    So, (+/-)2.75 C is probably a good estimate of the methodological 95% confidence interval in any determination of a dO-18 paleotemperature.

    My pragmatic rule of thumb:
    If systematic error is not mentioned, double the uncertainty reported.

    Takeaway:

    it’s clear that changes in SSTs on the order of 1-1.5 C can’t be resolved in those data. The most that can be said is that it’s possible Sargasso Sea SSTs were higher 3000 years ago.
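A minimal sketch of the NIST TN 1297 root-sum-of-squares rule David refers to, with the thread’s numbers used purely for illustration:

```python
import math

def combined_uncertainty(u_random, u_systematic):
    """Combine independent random and systematic standard uncertainties in
    quadrature (root-sum-of-squares), as in NIST TN 1297."""
    return math.sqrt(u_random ** 2 + u_systematic ** 2)

# Illustrative numbers only: a 0.2 C measurement repeatability is swamped
# by a 1.6 C methodological (systematic) uncertainty.
u_c = combined_uncertainty(0.2, 1.6)

# Expanded uncertainty with coverage factor k = 2 (~95 % confidence):
U_95 = 2.0 * u_c
```

The calculation makes the rule of thumb concrete: once a systematic term of this size enters, the reported random precision barely matters, and omitting the systematic term understates the total by roughly a factor of two.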

  64. Kev-in-Uk says:

    To Pat Frank – An excellent factual review of the scientific content – and one which I feel (sadly) will be over the heads of many. (that’s no reflection on others, just that those of us who learnt science and methodology years ago – were taught how to work through the error assessments, something which current climate science doesn’t seem to want to be bothered to do!)

    I have to say well done for taking the time to work through the methods, etc – but equally, I am fairly sure the reason you did so in the first place is that you ‘knew’, probably intuitively, but primarily as a scientist, that the accuracy and inherent errors did not justify the claims arising? Certainly, when I read these kinds of palaeo reconstructions, that is exactly what I first think of – i.e. what are the measurement and methodological errors, and what effect do they have on the results? (On a previous thread I rhetorically asked Steven Mosher how many papers explain the error bars properly these days!)

    Many comments in respect of measurements and stated accuracy have been posted on WUWT over the years, and I find it semi-amusing that very few people actually appreciate what is involved in error assessment. I hope this will open a few eyes/minds but unless it makes the authors of some of these papers actually SHOW their workings and incorporate the errors as FULL disclosure into their published works, I fear it will be in vain.

    The demonstration of the error margins and confidence level of the Sargasso Sea reconstruction is exactly the kind of thing that needs to be made public and easily understood by the layman.

    As an extreme example – basically your graph of the reconstruction with the blue 95% confidence box overlay – could perhaps be better described for Joe Public as ‘This is the box within which the plotted points could, technically, be anywhere due to the errors in the analysis and method!’
    And as you perfectly correctly state, the only REAL valid scientific conclusion from that reconstruction is that it was probably warmer some 3000 years ago, and possibly about 1000 years ago too! Other than that, the fundamental conclusion is that Keigwin’s excellent work tells us very little if we properly consider the errors!

    I’d pay good money to watch/listen to Mann et al explain some of their methods and error bars from start to finish! Man, (excuse the pun) that would be funny! LOL

  65. kim says:

    Bah, sorcerer’s apprentices.
    ========

  66. Steve from Rockwood says:

    David L. Hagen says:
    April 3, 2012 at 9:45 am

    Pat Frank
    Re: “Every one of them thoroughly neglects systematic error. ”
    Thanks for your excellent effort in exposing systemic uncertainties and unphysical methods.

    [...]

    Re:
    A (+/-)1.6 C uncertainty is already 2x larger than the commonly accepted 0.8 C of 20th century warming. . . .
    So, (+/-)2.75 C is probably a good estimate of the methodological 95% confidence interval in any determination of a dO-18 paleotemperature.
    My pragmatic rule of thumb:
    If systematic error is not mentioned, double the uncertainty reported.

    I would just add that as the number of samples rises, systematic error becomes even more of a problem relative to random error. This is why claims of very low error margins in studies with a high density of sample points should be suspect if they do not confront possible systematic errors. Unfortunately this covers almost every aspect of climate science, except for perhaps a few trees in Yamal.
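A toy illustration of this point, with invented bias and noise values: averaging more samples shrinks the random error roughly as 1/sqrt(n), but leaves a shared systematic bias completely untouched.

```python
import numpy as np

def mean_error(n, bias=0.5, sigma=1.0, seed=7):
    """Error of the mean of n readings of a 20.0 C truth, where every reading
    carries a shared systematic bias plus independent random noise (toy model)."""
    rng = np.random.default_rng(seed)
    readings = 20.0 + bias + rng.normal(0.0, sigma, n)
    return readings.mean() - 20.0

# Random noise averages down as ~sigma/sqrt(n); the 0.5 C bias never does:
errors = {n: mean_error(n) for n in (10, 1_000, 100_000)}
```

With n = 100,000 the random part of the error is of order 0.003 C, so the remaining ~0.5 C error is almost pure bias; quoting sigma/sqrt(n) as the uncertainty would understate it by two orders of magnitude.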

  67. Allen says:

    The use of statistical methods in science is poorly taught, so I appreciate this post. The peer reviewers in climate science journals should collectively get a failing grade for not recognizing spurious associations and pointing them out to authors.

  68. Hannibal Barca says:

    While not a “climate scientist” I am a geologist who, as an undergrad, worked in a stable isotopes laboratory extracting oxygen from whole rock and mineral separates. While the laboratory techniques are quite valid and reproducible, a problem often arises from the sample from which the O18 and O16 were obtained. We often used this to determine if the igneous or metamorphic rock were pristine or altered by hydrothermal meteoric water (useful in finding mineral deposits). Often times, the alteration was readily apparent (by the variations in mineral content/degradation of minerals caused by the alteration) other times it was not.
    Delta O18 from foram and other shells could be even more problematic. Just as trees may react to local and environmental changes in growth, shell-forming organisms also do so. This introduces a second layer of uncertainty, if you will. Some of the pitfalls associated with using these as a temperature proxy include (but are not limited to):
    1) Aragonite, the calcium carbonate polymorph which forms the shells of most sea creatures, is unstable and will alter to calcite at standard temperature and pressure.
    2) The tests of some shell-forming sea creatures may naturally contain both polymorphs (calcite and aragonite), depending on the physiology of the animal and the ocean it inhabits. One could surmise that, just as tree growth is a function of precipitation, nutrients, and temperature among other things, shell growth might be altered similarly. That may include the animal creating different crystal forms of calcium carbonate having different isotopic oxygen compositions. Thus, for the same reasons Craig Loehle might say “Treemometers don’t necessarily make good thermometers”, I would say “shellmometers may also not make good thermometers” (sorry Craig – don’t mean to put words in your mouth). Just as alteration/variation in mineralogy in the non-sedimentary rocks may not be apparent, the same could be said of carbonate tests, and the difference could be even more subtle, i.e., different crystal forms of the same mineral.
    3) Alteration of the shell may occur during deposition and lithification of the sediment in which they are laid down. Some of this alteration may be highly localized due to local water/sediment chemistry. For example, if some nearby organic material is present in the sediment causing a very localized change in pH or chemistry.

  69. Steven Mosher says:

    “(on a previous thread I rhetorically asked Steven Mosher how many papers explain the error bars properly these days!)”

    1. I’ve been complaining about this since 2007. do some reading.
    2. get off your butt and count for yourself. I aint your data monkey.
    3. do more reading and less commenting.

    Further Pat continues to make the same mistakes and I wouldn’t waste my time on him.
    Lucia, Jeff Id, Roman M, and others with a decidedly skeptical bent try to talk sense to him but
    he refuses to engage the argument. Not worth the time.

  70. Robert R. Prudhomme says:

    BOBP
    I have always questioned the use of statistics by Mann and others to refute the abundant historical evidence of the Medieval Warm Period (Idso) and the Little Ice Age. It is similar to proving Julius Caesar never existed by dubious statistical methods. Numerous newspaper articles through the last century have refuted various alarmist assertions from the AGW proponents.

  71. Tenuk says:

    Spot on, Pat Frank.

    It’s not just proxy reconstructions that have problems with lack of ‘honest’ error bars, but the problem seems to be endemic across the whole spectrum of climatology. Even when errors are discussed/calculated they still often fudge things by adding the errors, rather than multiplying, to arrive at the total – another fudge.

    No wonder we have no trust left in the IPCC cabal of cargo-cult climate scientists!

  72. Craig Bannister reports:

    I asked Jim Barlow, director of science and research communications, University of Oregon when and why the sentence was changed. Here’s his response:

    “I intended the original first sentence of the news release to function as a play-on-words on our researcher’s message about recognizing and addressing cultural inertia. Unfortunately, the word “treated” became the focus of the story, leading to inaccurate portrayals. In an effort to shift the focus back to the actual topic of the conference presentation, I chose at midday Monday to remove the word from the version of the news release that appears on our website.”

  73. Apologies: – added to wrong post – please delete previous.

  74. Frank says:

    Pat, you write:

    “CO2 is liberated from biological calcium carbonate and piped into a mass spectrometer. Laboratory methods are never perfect. They incur losses and inefficiencies that can affect the precision and accuracy of results. Anyone who’s done wet analytical work knows about these hazards and has struggled with them. The practical reliability of dO-18 proxy temperatures depends on the integrity of the laboratory methods to prepare and measure the intrinsic O-18.”

    The absolute size of peaks in a mass spectrum is irrelevant; only ratios are reported. Any “losses and inefficiencies” won’t cause a change in the ratio unless they provide a path for separating some CO2 with O18 from ordinary CO2. Even IF such a path existed, that wouldn’t necessarily cause a problem WHEN we interpret CHANGES in these ratios, not the absolute value of these ratios. If the amount of O18 in a series of samples were consistently 1% too high, the data might still be correctly interpreted.

    The appropriate issue is: How reproducible are these measurements over the full period of the study? If every fifth or tenth sample of shell analyzed were a control from modern shells and or plentiful older material, how tight is the isotope ratio data? The ability of the instrument to detect one part in 100,000 is much less relevant than the ability to get the same answer to one part in 100,000 or 10,000 from repeated runs with the same sample in the presence of varying amounts of typical non-calcium carbonate impurities. Limited information of this type can be found on some of the above graphs. (In some fields of analytical chemistry, co-workers are asked to provide control samples whose composition is kept secret (blinded) from the analyzing chemist until he has completed his analysis of all of the samples.)
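Frank’s point about ratios can be sketched with invented numbers: a preparation loss that removes both isotopes in the same proportion cancels out of the O18/O16 ratio entirely, and only a fractionating loss shifts the reported delta.

```python
def delta_o18(r_sample, r_standard):
    """delta-O18 in per mil: 1000 * (R_sample/R_standard - 1), with R = O18/O16."""
    return 1000.0 * (r_sample / r_standard - 1.0)

# Hypothetical raw ion counts for one sample:
o18, o16 = 205.0, 99757.0
r = o18 / o16

# A 20 % non-fractionating loss during preparation scales both isotopes
# equally, so the ratio -- and hence the delta -- is unchanged:
r_lossy = (o18 * 0.80) / (o16 * 0.80)

# Only a fractionating loss (removing O18-bearing CO2 preferentially)
# shifts the ratio:
r_frac = (o18 * 0.79) / (o16 * 0.80)
```

This is why the reproducibility of the ratio across repeated runs and blinded control samples, not the absolute peak sizes, is the quantity that matters.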

  75. Volker Doormann says:

    Summary conclusion: When it comes to claims about unprecedented this-or-that in recent global surface temperatures, no one knows what they’re talking about.
    I’m sure there are people who will dispute that conclusion. They are very welcome to come here and make their case.

    Well, there is a relation of planetary functions of high accuracy in time. These functions can be taken to calibrate 14C and or 18O data in time, because it is well known that decay times on Earth are changed with solar activity, depending on these planetary functions. Additionally measured global temperatures can be aligned by fitting the strength of the planetary functions.

    V.

  76. dave38 says:

    “The O-18/O-16 ratio in sea water has a first-order dependence on the evaporation/condensation cycle of water. H2O-18 has a higher boiling point than H2O-16, and so evaporates and condenses at a higher temperature.”

    I can accept that, but I wonder what effect the presence of deuterium (heavy hydrogen) has on the temperature and the evaporation/condensation, and whether it can make much difference.
    I am not a physicist or chemist, so I have no idea whether this effect could be significant.

  77. kim2ooo says:

    wmconnolley says:
    April 3, 2012 at 8:42 am

    So many words, and so little point. Meanwhile, the stream of science flows on around and over you, and doesn’t even notice you exist.

    The obvious thing to look at is the comparison of borehole thermometry to D-O18 in Greenland.

    xxxxxxxxxxxxxxxxxxxxxxxxxxxx

    Ohhh my my Mr Connolley :)
    You might be under the impression that you’ve contributed to Normal Science?
    Was it your contribution of wikipedia editing?

    I hope not, as that was pure poly-science shenanigans.
    And that Sir…is what you will be, forever, remembered for. :)

  78. Kev-in-UK says:

    Steven Mosher says:
    April 3, 2012 at 10:44 am

    ”1. I’ve been complaining about this since 2007. do some reading.”
    I wouldn’t actually believe that statement based on your consistent defence of the use of anomaly analysis – implying you believe that trend/anomaly analysis negates errors?
    Nobody disputes that it is reasonable to use trends and anomalies for some purposes, but only when one is comparing apples to apples – so, for example, any global temp anomaly gathered from early temp data will potentially be less accurate than that measured today (ignoring all the station siting issues, etc). Splicing the data together is misleading, but this is never explained.
    The other 2 points are not really worth a response – suffice it to say that reading is not the most important criterion, especially if what one is reading is incorrect or simply AGW propaganda drivel! For my money, the most important thing is to UNDERSTAND, which, given your apparent reluctance to explain yourself, suggests it is you who doesn’t understand?

    ”Further Pat continues to make the same mistakes and I wouldn’t waste my time on him.”
    Wow – just Wow! – I’d guess Pat has spent some considerable time reading and preparing his analysis! At the very least, if you have some criticism, his EFFORTS would warrant some response if you feel there is significant criticism to be made or indeed, a valid point to be made. That my friend, is the scientific way!

  79. kim2ooo says:

    Steven Mosher says:
    April 3, 2012 at 10:44 am

    ”Further Pat continues to make the same mistakes and I wouldn’t waste my time on him.”

    xxxxxxxxxxxxxxxxxxxxxxxxx

    Processed cheeseses!
    Sooooo you won’t debate?

    The more I read from people such as you and Mr. Connolley, with your follow the pea – hand-waving… snides, the more I wonder just what about Normal Science Protocol has been lost?

  80. kim2ooo says:

    richardscourtney says:
    April 3, 2012 at 6:15 am

    Several commentators miss the main point in the above article; i.e.

    Correlation implies nothing about causation but absence of correlation disproves direct causation.

    So, anything which assumes causation exists because correlation exists is pseudoscience. And almost all proxy studies in climatology make that assumption.

    Simply, I am dumbfounded by the quotation in the article which reports Michael Tobis having written;
    “If two signals are correlated, then each signal contains information about the other. Claiming otherwise is just silly.”

    An undergraduate would obtain a fail mark for writing that in an assignment.

    It is important to note that the assumption of correlations resulting from unknown causal links is common to almost all proxy studies used in ‘climate science’.

    As an addendum, correlation may suggest the possibility of a causal link (because absence of correlation disproves direct causation) but that possibility needs to be investigated before any assumption that a causal link exists. When (n.b. only when) the mechanism of a causal link between two parameters has been demonstrated to exist then the correlation may be assumed to be an indicator of one parameter by the other.
    xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    IMO Well Said!

  81. Frank says:

    Pat: The error bars you added to Keigwin’s Sargasso Sea reconstruction (Figure 7) are potentially correct and at the same time GROSSLY misleading. Stable isotopes are much more accurate at describing temperature change than absolute temperature. Most publications plot stable isotope ratios, rather than derived temperatures, on the vertical axis for precisely this reason.

    In Figure 6, you show many different lines relating stable isotope ratio (d) to temperature (t), but they are all fairly linear. In simple terms,

    t = m*d + b

    The slope, m, is fairly similar in all of these lines, but the y-intercept, b, is not: on the isotope delta scale, -1 ‰ is always about +5 degC. If we consider two isotope measurements, d1 and d2, and calculate the temperature difference they represent:

    t2 – t1 = m * (d2 – d1)

    Now, to a first approximation, only the uncertainty in the slope and the uncertainty in the difference in isotope measurements contribute to the uncertainty in temperature DIFFERENCE. (This approach is valid only when one doesn’t expect b to change. In Figure 2, you show that b is different for Cape Cod and Florida, so d2 can’t come from Florida and d1 from Massachusetts. However, d1 and d2 usually come from the same site at different times. How b varies with time at a given site is an intractable unknown; we can’t go back in time. Many publications address this problem by analyzing multiple proxies for consistent trends.)
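    To make the propagation concrete, here is a quick numerical sketch in Python. Every number in it is an assumption of mine, chosen only for illustration; none comes from Keigwin or any other publication.

```python
# Illustrative propagation through t2 - t1 = m * (d2 - d1); all numbers
# below are assumptions for the sketch, not values from any publication.
m, sigma_m = -4.3, 0.3        # calibration slope (degC per permil) and its 1-sigma
d1, d2 = -0.10, -0.45         # two dO-18 measurements from the same site (permil)
sigma_d = 0.08                # 1-sigma of a single isotope measurement (permil)

dd = d2 - d1                  # isotope difference; b cancels out of the difference
sigma_dd = (2.0 * sigma_d**2) ** 0.5   # uncertainty of the difference

dt = m * dd                   # reconstructed temperature change
# relative errors add in quadrature for a product:
sigma_dt = abs(dt) * ((sigma_m / m) ** 2 + (sigma_dd / dd) ** 2) ** 0.5

print(f"dT = {dt:.2f} +/- {sigma_dt:.2f} degC")
```

    With these assumed values, the slope and isotope-measurement terms alone already give roughly +/-0.5 degC on a ~1.5 degC difference, before any question about b arises.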

    Looking at Figure 7, it may be possible to conclude that the temperature in the Sargasso Sea rose about 1-2 degC between 1500 years ago and 1000 years ago, even though the 95% confidence interval around each temperature is +/-1.5 degC. Furthermore, the period around 1500 years ago contains four measurements of relatively cold temperature. If we believe that there was a “cold period” around that time, we have four separate readings (n=4) that formally reduce the uncertainty for that period by a factor of two. Random data could certainly produce four low readings in a row, but we are probably looking at data with a high degree of autocorrelation.

    Furthermore, if we know today’s temperature and isotope ratio (as well as m) we can calculate b for the site and completely eliminate its uncertainty. Unfortunately, it is difficult to learn much from the loose material at the top of a sediment core.

    Perhaps a Monte Carlo approach would help. If you know the uncertainty in m and b for a given formula, you could translate one set of mock temperature data into one thousand sets of mock isotope data using one thousand randomly chosen sets of m and b with appropriate mean and standard deviation. Then you could add one thousand random estimates of the uncertainty in the isotope measurements to get one thousand possible sets of mock isotope data. Translating all of these to temperature using a fixed formula would give a reasonable estimate of the uncertainty in reconstructed temperature. If your mock temperature data consisted of a sine curve with a period of 1000 years and an amplitude of +/- 0.5 or 1.0 degC, would you still be able to identify this trend in the reconstruction? How far off is your estimate of the amplitude?
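    Here is a minimal sketch of that Monte Carlo, with every calibration number assumed for illustration only: a 1000-yr sine of +/-1 degC, and guesses for m, b and their uncertainties.

```python
# A minimal Monte Carlo sketch of the test proposed above. All the
# calibration numbers are assumptions chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(0, 2000, 20)                       # mock core, one sample per 20 yr
t_true = 18.0 + 1.0 * np.sin(2 * np.pi * years / 1000.0)  # 1000-yr sine, +/-1 degC

m0, sigma_m = -4.3, 0.3     # assumed slope (degC per permil) and its 1-sigma
b0, sigma_b = 16.0, 0.7     # assumed intercept and its spread between calibration lines
sigma_d = 0.08              # assumed isotope measurement noise (permil)

n_trials = 1000
recon = np.empty((n_trials, years.size))
for i in range(n_trials):
    m = rng.normal(m0, sigma_m)                  # one randomly drawn calibration line
    b = rng.normal(b0, sigma_b)
    d = (t_true - b) / m                         # forward: temperatures -> isotopes
    d = d + rng.normal(0.0, sigma_d, d.shape)    # add measurement noise
    recon[i] = m0 * d + b0                       # invert with ONE fixed formula

spread = recon.std(axis=0)  # per-sample 1-sigma envelope of the reconstruction
print(f"median reconstruction spread: {np.median(spread):.2f} degC")
```

    With these assumptions, the per-sample spread of the reconstruction comes out comparable to the +/-1 degC amplitude of the mock signal, so the sine survives only marginally.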

  82. Tom Ragsdale says:

    Obviously Mr. Frank is funded by “Big Oil” which of course means he is lying.
    Excellent job!

  83. Evil Denier says:

    An absolute tour de force. To switch to another language: Bravo!
    You gather that I approve. Mightily. But: please fix the links! (Anthony, mods: I know you’re overworked, but please can you help?)

  84. Another blinder of a post Pat, thanks.

    Interesting to contrast Frank’s comment with Mosher’s and Connolley’s.

    Frank wrote:

    Furthermore, if we know today’s temperature and isotope ratio (as well as m) we can calculate b for the site and completely eliminate its uncertainty.

    Uncertainty, like death and taxes, will always be with us, Frank. What we can do is reduce the bounds of our uncertainty.

  85. Hank McCard says:

    Steve Mosher,

    “Further Pat continues to make the same mistakes and I wouldn’t waste my time on him.
    Lucia, Jeff Id, Roman M, and others with a decidely skeptical bent try to talk sense to him but
    he refuses to engage the argument. Not worth the time.”
    ———-
    In the past, I found your comments to be thoughtful and informative. In this instance, you sound more like Wm. Connolley. I don’t care if you like Pat Frank or not. If it isn’t worth your time to respond to him and comment on his mistakes, fine; IMO, you should let others decide for themselves.

  86. mondo says:

    Steve Mosher. We can hear your frustration with Pat, but can I point out that making a comment of the nature you have on this thread doesn’t help much. It is likely that the WUWT audience is larger and different from the audiences at Lucia’s and The Air Vent. This is an opportunity to succinctly present the arguments that contradict Pat’s conclusions. I for one would very much appreciate that, and so I am sure would a lot of other readers here.

  87. Brian H says:

    elmer says:
    April 3, 2012 at 7:30 am

    It’s a self fullfilling prophecy

    Increased CO2 levels make tree rings bigger.
    Bigger tree rings are being used to show an increase in temperature.
    Therefore increased CO2 levels cause temperatures to increase.

    No, it’s a circular argument, into which an assumption about temperature causing growth has been inserted. Also known as “begging the question”.

    A “self-fulfilling prophecy” is a prophecy whose existence itself causes the event to occur. A common example is a projection of a share price fall by an influential analyst, which then stimulates a wave of selling. (During which said analyst may or may not be buying up the stock at cheap prices.)

  88. Brian H says:

    Kev-in-Uk says:
    April 3, 2012 at 9:51 am

    doers this have on

    This is the box within which the plotted points could, technically, be anywhere due to the errors in the analysis and method!’

    Is that how you pronounce “does” there “in-UK”? Fascinating!! ;)

    The “could be anywhere” should perhaps be graphically shown by presenting the points as huge oval blobs spanning that error box’s full height. That would make for a dramatic plot! And even a truthful one — a line of indistinguishable blobs. :)

  89. Brian H says:

    wmc;
    something is rolling and engulfing, etc., but it isn’t science. Your job as a kind of Maxwell’s Demon, excluding valid research from Wiki and only admitting the other kind, has long since given you anosmia.

  90. Slabadang says:

    Well!?

    PNAS stands for Pseudo Narrative Amateurish Stupidity? Can’t someone please put the PNAS out of its misery? Every honest scientist has already left the corrupted, government kiss-ass, politicized national “science” organisations, or is in silent mourning over the loss of integrity and trustworthiness. The systematic breach of the most basic scientific principles in climate science is like a tombstone raised over free science, killed by politicians and NGOs. This audit of paleo reconstructions is devastating. “The IPCC has recruited the world’s top climate scientists”: it makes me wonder what the bad ones would be like.

  91. boston12gs says:

    wmconnolley says:
    April 3, 2012 at 8:42 am
    So many words, and so little point. Meanwhile, the stream of science flows on around and over you, and doesn’t even notice you exist.

    You’re still here, wmconnolley? Bravo!

  92. markx says:

    I sometimes wonder if people like Connolley realize that in the future they are very likely to be thought of as important historical figures.

    But not for the reasons they’d like to think they might be.

  93. jorgekafkazar says:

    Very nice, Pat. Many thanks.

  94. Gail Combs says:

    WOW, Thanks for all the hard work. I never could understand how they were getting any real numbers on the right side of the decimal place.

  95. LazyTeenager says:

    Well that was an interesting article. But this statement: “but their physical meaning is invariably discarded in composite ”

    I don’t know what is meant by that. It feels kind of hand waving.

    My ignorance of principal component analysis knows no bounds, but it sort of looks like adding up a bunch of timeseries data to get an average. If the relationship of proxy to temperature is just linear, then fiddling with scale factors boils down to figuring the relative weights. If this is all calibrated to temperature in the end, it does not matter much.

    So the crux is whether all of the proxy methods correlate with temperature under all relevant conditions. If they do then a proxy measurement can be used to infer a temperature irrespective of its physical basis.

    E.g., epicycles have no physical basis (unlike gravity), but they are effective for predicting and describing planetary orbits.

  96. mtobis says:

    The correlation between two series is a measure of the extent to which information about the one series can be obtained from measurements of the other.

  97. Pat Frank says:

    Here are the promised working links, and I really apologize that things got so screwed up; my fault for composing links in Word:

    McIntyre and McKitrick pdf.

    Michael Tobis’ comment: here.

    My reply, here.

    Planet3.org proxy discussion, here.

    My horrid and disgusting and deleted insulting comment posted in full at WUWT, here.

    Kevin Anchukaitis’ professional site here.

    The link to Kaustubh Thimuralai’s defense was good, but here it is again.

    NASA’s O-18 page is here.

    The mass spectrometry page is here.

    Wiki’s O-18 page is here.

    The Foraminifera page is here.

    The corals page is here.

    My paper on uncertainty in the surface air temperature record is here.

    The second mass spectrometry link is the same as the first.

    All the links to articles seem to be OK, except maybe the one to Keigwin’s Sargasso Sea paper, which is here.

    Michael Tobis on signal correlation is here.

    The spurious correlations are here.

    The Purdue spurious correlation contest entries are here.

    The first seven AGW-paleothermometry links were good, but not number eight to Rob Wilson, et al., Here’s that one.

    The strontium/calcium proxy T paper is here.

    The uranium/calcium proxy T paper is here.

    The barium/calcium proxy T paper is here.

    My skeptic article on the reliability of GCMs is here and the calculations backing it up are here (pdf downloads).

    And again the critical look at measurement error in the surface air temperature record is here.

    That about does it. Again, true regrets for all the trouble and frustration I caused with those bad links. I’ve checked all these links, and they all worked for me. I have a Stanford proxy-server that gives me access to the journals, but am hoping the abstract pages will be open to everyone.

  98. markx says:

    LazyTeenager says: April 3, 2012 at 8:30 pm

    Well that was an interesting article. But this statement: “but their physical meaning is invariably discarded in composite ” I don’t know what is meant by that.

    As much as I usually disagree with LT, in this instance I too would be interested in an elaboration of the quoted statement.

  99. Pat Frank says:

    Thanks, everyone, for your encouraging comments. It’s really appreciated. All the working links have been posted here, and again my apologies to all for the total mess-up.

    Mark, you’re right that deriving a tree-ring/Temp theory would be hugely difficult. But postulating that one exists, and then doing the greenhouse experiments to try and derive some semi-empirical relationships should be possible. Even that might take decades to work out, but it would be a worthwhile endeavor.

    John Pittman, one may guess that they all know already. I agree with your -1 grade, though the numbering wasn’t rankings. They were like the T-shirt drawer. Whichever one’s on top gets worn that day.

    David A, the M&M link has been posted above. Regarding your questions:

    1. M&M’s criticisms were methodologically comprehensive. I don’t believe they discussed whether dendro T-reconstructions qualified as science, or not. And they didn’t discuss measurement error.

    2. Any method that relies on statistical inference absent a physical theory is not science. Any dendro proxy-T study that scales tree rings into the recent surface temperature record is not science no matter what it shows over the Medieval times (or any other time).

    3. I don’t have any insight into solar-climate.

    4. Climate sensitivity varies over a factor of about 2 among climate models. Modelers apparently adjust aerosol forcing to get the GCM temperature outputs to match the centennial temperature record.

    5. The Medieval Warm Period is an embarrassment because the precedent it sets for non-Anthropogenic warming can be applied to the 20th century. Hence the interest in making it a regional European phenomenon.

    Bloke, one gets both. Chaos gives causation without correlation, and the fixed links provide examples of the opposite.

  100. Pat Frank says:

    Thanks, tallbloke, that’s high praise, coming from you. Your point about salinity is dead on IMO. It could be going either way, and no one really knows. I recall some folks looking at alkaline earth ratios in carbonates to find a salinity proxy, but don’t recall anything that seemed promising.

    David A, guessing here but I’d suppose Ed Cook’s comment was meant to apply to the entire dendro proxy-T effort.

    Bill Illis, your description is the impression I got when reading the lit. One almost must have all the information already in hand to get dO-18 as a reliable temperature proxy. That is, T:dO-18 must be the only unknown variables.

    Dr. Burns, I’ve expressed a very considered view of measurement uncertainty in the surface air temperature record here, and a slightly tangential extension here (both pdf downloads). In short, it’s as you suspect. Everyone is totally neglecting systematic error. They’re assuming implicitly (i.e., without stating it plainly) and without evidence that the Central Limit Theorem applies to the error of the entire temperature record. It’s all random and just averages away, and they claim (+/-)0.05 C accuracy for the later 20th century record.

    No one knows the true accuracy of the record, but my estimated lower limit of accuracy is 1-sigma=(+/-)0.46 C, very close to your 1950’s estimate of (+/-)0.5 C.

    Mickey Reno, thanks, and I don’t know.

    richardscourtney, thoughtful as usual. :-) But recall that chaos can be very non-linear, meaning you can have a cause without producing a correlated result. I.e., small causes can produce outsize effects. Chris Essex had a paper about this in JGR-Atmospheres, in the context of the predictability of climate. The abstract page is here.

    Dixon, people do worry about such things. Shells don’t carry short event information. One is lucky to get annual resolution. The scientists know about variations in O18 with shell depth and actually grind off different shell layers one at a time to get higher resolution information.

    I found the O-18 folks, and indeed the stable isotope community in general (to the small extent I’ve surveyed it), to be serious about their work; little or no grand claims.

  101. Len says:

    Great article. Ignoring the most important sources of uncertainty does not make sense unless one is unaware of them (time to go back to school) or is presenting false information to fit previously accepted conclusions (time to go to jail if you took public funds, time to just go if it was using your own money).
    Incidentally, I remember reviewing a paper some 30 years ago which showed that, throughout much of the SW USA, precipitation dominated tree ring thickness.
    Again, great article. Thank you very much.

  102. Pat Frank says:

    And so, wmconnolley, how large are the systematic error bars around a borehole temperature? Scientifically stream that by us, will you? Thanks.

    Dennis Nikols, thanks and for a good while I’ve been scratching my head over those very questions, too. Until seeing it happen right before my eyes, I’d never have thought it possible. People — scientists — seem to have so amazingly easily bought right in to the politics and then adjusted their view of science to suit. And then got righteous and censorious about it. Honestly, I don’t understand it.

    Gary, I have nothing but respect for Shackleton. He was a pioneer and from what I could see did excellent work.

    Thanks for the NIST links, David, they’re a great resource.

    Kev-in-UK, really, I went in to the analysis wondering what I’d find, after Kaustubh T and Kevin A defended proxy-T strictly in terms of stable isotopes. I knew about the method, but the details were new to me. On the other hand, no one seemed to discuss measurement error, and so that seemed like a good place to look. And the rest is this post.

    Your comment, that, “This is the box within which the plotted points could, technically, be anywhere due to the errors in the analysis and method!” is very astute.

    Some people don’t understand the difference between an accuracy bound (like systematic error) and a precision bound (random error). In a precision bound the mean line (running through the center of the error bars) really represents the most probable values.

    But in an accuracy bound, the true line could be anywhere within the limits. The mean line through the center has no special significance at all. One really doesn’t know where the ‘best’ values are, between the accuracy error bars.
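    The distinction is easy to demonstrate numerically. Here is a toy example, with every number invented for the purpose: averaging many readings shrinks the random scatter as 1/sqrt(n), but leaves a systematic offset completely untouched.

```python
# Toy demo (made-up numbers): random error averages down as 1/sqrt(n);
# a systematic offset does not average down at all.
import random

random.seed(1)

true_value = 20.0
bias = 0.4                  # unknown systematic offset (an accuracy problem)
n = 10_000
readings = [true_value + bias + random.gauss(0.0, 0.5) for _ in range(n)]

mean = sum(readings) / n
sem = 0.5 / n ** 0.5        # standard error of the mean: the random part only

print(f"mean = {mean:.3f} +/- {sem:.4f} (random part)")
# The mean is very precise, yet it still sits about 0.4 degC from the
# true value: the averaging never touched the systematic bias.
```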

    Hannibal Barca, thanks for your knowledgeable and very interesting comments. There’s nothing like a pro in the field to really illuminate the complexities. Any field always turns out to be harder than we expected, when we were naive. And more interesting to that exact degree. So, did you work in the lab before, or after, you fought the Romans? :-)

    Given your choice of name, by the way, you might be interested that I worked on wood from the Acqualadrone rostrum — a military ram dating from the first Punic War. A manuscript is in review now. One of your relatives may have seen it new.

  103. Volker Doormann says:

    Guest post by Pat Frank

    “Summary conclusion: When it comes to claims about unprecedented this-or-that in recent global surface temperatures, no one knows what they’re talking about.
    I’m sure there are people who will dispute that conclusion. They are very welcome to come here and make their case.”

    Wasted time.

    “Thanks, everyone, for your encouraging comments. … I don’t have any insight into solar-climate.”

    I have.

    EOD.

    V.

  104. Pat Frank says:

    Steve Mosher, your comments in reverse order: Anyone can take a look at my discussion with Jeff Id, Lucia, and Roman here part 1, and here part 2, and see for themselves whether I, “[refused] to engage the argument.” The content seems to refute you. Granting you integrity, we’ll assume that you never read those threads before making that accusation.

    Roman M actually disagreed with Jeff and Lucia, and concisely restated exactly my case.

    Finally, you say I continue “to make the same mistakes.” Jeff and Lucia claimed my mistake was to interpret weather noise as though it were an error. They were not correct. Presumably, though, that’s my “same mistake” is it? Where does a weather noise mistake appear in amongst the dO-18 analysis?

    Or is it a different mistake that I’m continuing to make? What is it, exactly? You neglected to specify it.

    I’m not interested in re-igniting the debate about my air temperature paper, by the way. Those two threads have sufficient content to allow anyone to make up their mind after some resolute reading. My take home lesson from that experience was that no matter how hard one tries to write an essay clearly, someone is going to find a way to misinterpret it.

  105. Pat Frank says:

    It’s late now and work arrives in the morning. I promise to continue responses tomorrow (today, actually) evening.

  106. William M. Connolley says:

    > And so, wmconnolley, how large are the systematic error bars around a borehole temperature? Scientifically stream that by us, will you?

    Interesting to see how you react to new leads, new ideas – you ignore them. And this is important stuff in the published literature, directly addressing the point you claim to care about – how well does D-O-18 match temperature?

    > richardscourtney says: You use few words and they have no point. Perhaps you would be willing to share why you bothered to make such a pointless post which only serves to show that you did not read the article?

    You must have missed the point about borehole thermometry. I usually try to use as few words as possible so that things are hard to miss. As for the rest: this stuff here, and so much at WUWT, is just lost; wandering around in the darkness. People (well, at least some people here) clearly have an interest in science, and maybe a desire to join in – science is fun and exciting, after all. But with no idea of what science actually is, that isn’t going to work.

    > I sometimes wonder if people like Connolley realize that in the future they are very likely to be thought of as important historical figures

    Very unlikely, I think. All this stuff – all these arguments, all this fire and fury – that seems so exciting now, will just crumble away into trivia seen from a distance.

  107. richardscourtney says:

    LazyTeenager:

    A point in your post at April 3, 2012 at 8:30 pm indicates that post also contains a typing error. The point is your assertion that;

    “So the crux is whether all of the proxy methods correlate with temperature under all relevant conditions. If they do then a proxy measurement can be used to infer a temperature irrespective of its physical basis.”

    NO! IT CANNOT!
    Firstly, it is not possible to know what “all relevant conditions” are.
    Secondly, the correlation over the available calibration range may be coincidental.
    etc.

    Please read my post at April 3, 2012 at 6:15 am for a basic explanation of how correlation can be used by scientists.

    Your apparent typing error is indicated by your daft assertion, which says;
    “My ignorance of principal component analysis knows no bounds”

    Surely you intended to write;
    “My ignorance knows no bounds”.

    Richard

  108. richardscourtney says:

    Pat Frank:

    You say to me;

    “ But recall that chaos can be very non-linear, meaning you can have a cause without producing a correlated result. I.e., small causes can produce outsize effects. Chris Essex had a paper about this in JGR-Atmospheres, in the context of the predictability of climate. The abstract page is here.”

    True, but that is another addendum to my point. And there is much more that could be said concerning inappropriate use of correlation. I repeat, in the context of your article the important issue is;

    Correlation implies nothing about causation
    but
    absence of correlation disproves direct causation.

    Please note my use of the word “direct” in the latter clause.

    Richard

  109. LazyTeenager says:

    kim2ooo says
    Simply, I am dumbfounded by the quotation in the article which reports Michael Tobis having written;
    “If two signals are correlated, then each signal contains information about the other. Claiming otherwise is just silly.”

    An undergraduate would obtain a fail mark for writing that in an assignment.
    ———–
    I suspect you are confusing 2 separate questions:
    1. Does correlation allow cause and effect to be inferred? Answer no.

    2. Does correlation allow the value of one variable to be inferred from the correlated variable? Answer yes.

    Since we are talking proxy measurements here, and correlation, question 1 is kinda irrelevant.

  110. richardscourtney says:

    William M. Connolley:

    Your post at April 4, 2012 at 12:33 am quotes me having asked you;
    “You use few words and they have no point. Perhaps you would be willing to share why you bothered to make such a pointless post which only serves to show that you did not read the article?”

    And it replies;
    “You must have missed the point about borehole thermometry.”

    Thank you for having taken the trouble to write something in response to my question, but it is not an answer to my question.

    You made no “point” about “borehole thermometry”. You mentioned it as an arm-waving exercise seemingly because you had found nothing to comment upon in the above article.

    Indeed, as Pat Frank said to you in his post at April 3, 2012 at 11:20 pm;
    “And so, wmconnolley, how large are the systematic error bars around a borehole temperature? Scientifically stream that by us, will you? Thanks.”

    And your response to that (in the same post that provided an answer to me) is;

    “Interesting to see how you react to new leads, new ideas – you ignore them. And this is important stuff in the published literature, directly addressing the point you claim to care about – how well does D-O-18 match temperature?”

    In other words,
    (a) You had no point, but mentioned “borehole thermometry”,
    (b) You have evaded Frank’s reasonable request for explanation of what you wrote,
    And
    (c) You attempt to obscure your evasion by answering Frank’s reasonable question by asking him a question.

    I know it is difficult for you when you cannot censor opposing views, but I think that when you post on WUWT you need to remember the age-old advice that says

    It is better to be thought a fool than to say something which proves you are a fool.

    Richard

  111. LazyTeenager says:

    James Ard on April 3, 2012 at 5:23 am said:
    I’m not a scientist, but using trees as a temperature proxy seems crazy. Both temperature and co2 levels affect the growth rate of trees, among other factors, how do you attribute what growth to what?
    ———-
    I believe the trees are selected from an environment that makes temperature the limiting factor on tree growth.

    They are not just any old trees. They are likely not tropical trees, for example.

  112. barn E. rubble says:

    Readers may find some interesting reading here:

    “Techniques used
    Paleoclimatology studies require assessment of temperature and seasonality changes in the past. To this end, the SIL utilises O, C, H and N isotope systems, together with C/N ratios on a variety of materials which record time-related changes.”

    The Saskatchewan Isotope Laboratory, University of Saskatchewan:

    http://sil.usask.ca/palaeoclimatology.htm

  113. Dixon says:

    Thanks for the reply Pat.

  114. Kev-in-Uk says:

    As an ex-oilfield geologist, I am intrigued by the mention of ‘borehole thermometry’ by Mr Connelly. Does he mean the palaeo temps derived from analysis of the mineral content of stratigraphic layers encountered within boreholes, or the actual borehole temperatures?
    If it’s the former, surely that’s essentially the same as all palaeoclimate analysis derived from rocks/minerals, with the same inherent errors. If it’s the latter, it is probably irrelevant to a palaeoclimate discussion (although there is a valid point to be made about post-depositional diagenetic changes of minerals and any subsequent analysis thereon!)

    (as a slight aside, for those who may want to know, basically once a mineral has been deposited, in really simple geological terms, it could well be squashed and heated, overpressured by several miles of other rock lying on top, smashed together with other rocks, washed through with superheated groundwater, vulcanism, etc, etc – resulting in changes and alterations to the initial constituent minerals (the general term metamorphism applies) – thus any later analysis of such minerals therefore needs to understand what diagenetic changes have taken place before any meaningful conclusions can be drawn. Hence, past climates from ‘buried’ rocks becomes increasingly difficult to assess – it’s not much easier for shallow rocks either, as normal surface weathering can also cause significant mineral changes – anyways, I hope non geologist types can get the picture….)

  115. Kev-in-Uk says:

    Ah! I’ve just twigged – Mr Connelly is probably referring to borehole thermometry of the ice streams/sheets and the analysis of O-18 etc. within layers! Apologies for my earlier comment.

  116. NW says:

    Pat Frank,

    This was a very nice post, from which I learned a great deal. However, I also found this comment by Frank interesting and of potential importance:

    http://wattsupwiththat.com/2012/04/03/proxy-science-and-proxy-pseudo-science/#comment-943912

    I hope you will respond to it, as it makes a potentially important point about the distinction between levels and changes. For some purposes, knowing levels might be critical; for others, knowing changes might be sufficient.

    Thanks,
    NW

  117. wmconnolley says:

    > borehole thermometry of the ice streams/sheets and the analysis of o18 etc within layers

    You might find Jouzel et al. interesting (http://courses.washington.edu/proxies/JouzelJGR1997.pdf). Since it’s one of the foundation papers for ice-core interpretation of d-O-18, it’s odd to find this “comprehensive” review missed it.

    > Mr Connelly

    Dr (I mention that every now and again because there is a minority here that is interested in politeness; you might well be one of them). And the spelling, of course.

  118. Aaron says:

    Another thing to think about in terms of induced error is the phosphoric acid itself. Since H3PO4 has oxygen in it, the ratio of O-18 to O-16 in the reagent acid used for the analysis may also introduce a systematic error. According to WIKI:

    “This very pure phosphoric acid is obtained by burning elemental phosphorus to produce phosphorus pentoxide and dissolving the product in dilute phosphoric acid. This produces a very pure phosphoric acid, since most impurities present in the rock have been removed when extracting phosphorus from the rock in a furnace.”

    The point is I wonder if the possible error introduced by the H3PO4 oxygen isotope ratio has also been taken into consideration?

  119. Bill Illis says:

    Borehole thermometry is a joke.

    The Greenland borehole temperature reconstructions have set back the science by a decade now, and it is only recently that the Greenland ice core scientists have started to re-write the record to what it should have been.

  120. Bill Illis says:

    By the way, what does the borehole temperature calibration from Jouzel 1997 say Greenland was in the Eemian Interglacial?

    +10.0C.

    Now that we can go back to 123,000 years ago with the Greenland ice cores, it is recognized that the borehole calibration is not correct since there would have been little glacier left at +10.0C. The standard dO18 calibration method shows +4.0C, similar to what Antarctica had and more consistent with the sea level estimates and the other proxy information for the period.

  121. “because there is a minority here that is interested in politeness..” wmconnolley

    Lose any arguments recently?

  122. Smokey says:

    The Poems of Our Climate,

    It is extremely impolite to arbitrarily censor the views of others simply because the censor doesn’t agree with them. Willy has insinuated himself into a position where he can be the censor. That makes his fake politeness a cover for his very impolite censorship. This directly applies to willy connolley.

    Sorry willy, but you are as impolite as anyone can be. Got that, willy?

  123. Kev-in-UK says:

    @Dr Connolley – with a double ‘n’, double ‘l’ and ‘ey’ – I’ll try to remember in future, but of course, why should I know you’re a PhD? (likewise, how would you know my quals? an M.Sc. as it happens)
    Anyways, in my considerable 30 yrs of experience, most professionals only use their titles in professional documentation and rarely prefer to be called by them in general conversation. (I’ll except those in academia, i.e. lecturers and the like, as some of them sometimes need to feel some ‘worth’, as opposed to professionals in the non-academic world, who are more directly appreciated – just my opinion! I particularly recall several Professors who seemingly ‘liked’ the sound of their titles; perhaps it massaged their egos? But I personally never minded calling them ‘Prof’, and have been lucky to have had contact with mostly really good professors!)
    I possibly read Jouzel several years ago, I truly cannot recall – but a brief perusal would suggest that the errors are still indeed present as in Pat Frank’s outline.

    The important point in all of this ‘error assessment’ is that the errors are recognised AND reported, something that I generally do not see being given a high priority in the climate science meme. Of course, one then has to wonder why? Good science should always report the errors and limitations as a priority. I am sure that if the general public were filling their cars with fuel measured to a similar level of accuracy/reliability as some AGW works, they would be up in arms! It follows that basing primary policy decisions on potentially significantly inaccurate information should equally have them ‘up in arms’. And if we then start discussing real monetary costs and economic impacts, well…

  124. Keith Sketchley says:

    Wow. It is beyond my knowledge to judge the validity of Pat Frank’s thesis, without years of study, but it seems thorough and is well presented. Thank you.

  125. Keith Sketchley says:

    Stephen Mosher: Your reaction to “Kev-in-Uk” may be a misunderstanding of word use. Encarta 2005 defines “rhetorical question” as [question requiring no answer: a question asked for effect that neither expects nor requires an answer], whereas the word “rhetorical” by itself has several meanings different from “rhetorical question”.

    Also, Stephen Mosher, please clarify the meaning of your remark “Further Pat continues to make the same mistakes and I wouldn’t waste my time on him.” What mistakes? (For example, possible meanings include that “Pat” keeps asking you something or keeps making the same scientific mistake. In the latter case you need to at least briefly identify the mistake. And use full name, it appears you are speaking of Pat Frank but Pat is a common name and most posters use pseudonyms hiding a real name you are more likely than I to know.)

  126. Keith Sketchley says:

    The speculation by “Kev-in-Uk” about Pat Frank intuitively knowing something before his thorough review of the subject may be confusing. The word “intuitive” is vague and often mis-used, a floating abstraction in many cases.

    People can have discomfort, or inklings, from subconscious processing of information, but whatever pops up must be validated. (The cause might be an error of their own somewhere – misunderstanding some information for example, or a true contradiction in information.) People can suspect something, but they have to investigate and validate.

    People can be concerned about a claim, as I am about alarmists’ anti-human conclusions, but the claim must be examined – doing that objectively is a challenge.

  127. Dr. Deanster says:

    wmconnelley says ….
    > borehole thermometry of the ice streams/sheets and the analysis of o18 etc within layers

    You might find Jouzel et al. interesting (http://courses.washington.edu/proxies/JouzelJGR1997.pdf). Since its one of the foundation papers for ice-core interpretation of d-O-18, its odd to find this “comprehensive” review missed it.

    > Mr Connelly

    Dr (I mention that every now and again because there is a minority here that is interested in politeness; you might well be one of them). And the spelling, of course.

    Well Dr. Connelley … I read your paper, and it deals with a completely different subject than that illustrated in the OP. The OP was concerned with the experimental error regarding the isotope readings liberated from Ca deposits, error intrinsic to the extraction method and error intrinsic to the formation process.

    In contrast, the paper you linked is concerned with directly measuring D and O18 in water samples from precipitation. As noted in figure 8, the further one goes back in time, the more scatter we see, to the point that the paper itself proves the issue of the OP .. that being that isotopes in paleothermometers are not as accurate as one would think. The only point on the graph where the scatter is tight is at ZERO years before present.

    One of the limitations mentioned in the text is again consistent with the OP. Conditions at the time of formation of precipitation have a significant influence on the D or O18 found in the precipitate. We find this same difficulty with the Ca deposits. Further, spatial differences are also a source of error, as illustrated by the fact that the curves change over space, and isotopes in precipitation are practically useless in the tropics and equatorial areas.

    There are several other issues that you seem to ignore as well. The OP is talking about estimating temperature to within 1C. I don’t see the isotopes in this study making a measurement that fine across time, not in the graphs, nor in the text. Even the “calculated” vs “observed” graphs show spatial differences as large as 2C and greater … again consistent with the claim in the OP of the 2.75C error that is resident in isotope studies of Ca.

    A final issue is that the OP really doesn’t criticize the isotope guys, and in fact makes the case that theirs is very good science. However, the “tree-ring circus” lacks sound scientific theory to back up its claims.

  128. Pat Frank says:

    Tenuk, agreed. I’ve found the same lack of physical error bars looking at GCM outputs and at the surface temperature record. The neglect is endemic in AGW-relevant climatology.

    Frank, you wrote, “The absolute size of peaks in a mass spectrum is irrelevant; only ratios are reported.”

    Mass spectrometers record M/z peaks, which means mass divided by charge. In heavy isotope mass spec, that’s the mass of the parent ion. The absolute intensities of all peaks are necessarily measured. The absolute peak intensities are needed to calculate the ratio of heavy isotope parent to light isotope parent. Your first statement, therefore, is wrong.

    Here, for your edification, is a mass spectrum from K.I. Öberg, et al., “Photodesorption of ices I: CO, N2, and CO2.” Among other things, they reported this heavy isotope mass spectrum:
    http://i41.tinypic.com/2ivbrsg.gif
    Partial Figure Legend: “Mass spectra acquired during irradiation of a 6.2 ML thick 13C18O2 ice at 20 and 60 K … there are some background CO (m/z=12, 16 and 28), CO2 (m/z=44) and possibly some background H2O as well (m/z=18).”

    Note the peaks: absolute intensities, not ratios. Also relevant to our interests here, note the 18O2 at M/z = 36, and the 13C18O2 at M/z=49.

    You wrote, “Any ‘losses and inefficiencies’ won’t cause a change in the ratio unless they provide a path for separating some CO2 with O18 from ordinary CO2.”

    Your confidence is directly refuted by Shackleton’s scattered results, noted above. Using the identical sample, his measured dO-18 varied by (+/-)0.14%o. The scatter present in everyone else’s data likewise provides direct evidence of variably discrepant isotope measurements.

    Clearly, there are uncontrolled variables affecting measured isotope ratios.

    This comment, “Even IF such a path existed, that wouldn’t necessarily cause a problem WHEN we interpret CHANGES in these ratios, not the absolute value of these ratios.” assumes a constant systematic error that can be subtracted away. There’s no reason to suppose constant systematic error.

    In fact, systematic error is rarely constant. It’s typically due to uncontrolled variables — not uncontrolled constants. And not necessarily uncontrolled variables of the same magnitude or influence. The relative impacts of uncontrolled variables can change with operator, with method, with instrument, and so forth — including between laboratories — and no one can predict the outcome. Look at the scatter in Figures 2-5. None of the systematic error is constant. I’ve yet to run across a case where it’s constant.

    You wrote, “The appropriate issue is: How reproducible are these measurements over the full period of the study?” and the answer is in Figures 2-5, the table of Shackleton’s results, and the “~0.1 permil” error reported in Keigwin’s mass spec data. All of them indicate variable scatter.

    Volker Doormann, 18-O is a stable isotope. It doesn’t have a decay time.

  129. Pat Frank says:

    Well, too bad. The image of the mass spectrum didn’t come through.

  130. Pat Frank says:

    But if you click on the link, you’ll see the mass spectrum at tinypic.com.

  131. Kev-in-Uk says:

    Keith Sketchley says:
    April 4, 2012 at 4:23 pm

    Actually, I was meaning in the context of say a car mechanic, who, on hearing someone describe a symptom, will, from experience – likely ‘know’ roughly what the problem is. So, for myself, when reading some of the reports I have to review – if I read something and think ‘that doesn’t sound right’ I’ll re-read and double check. Within the context of Pat Franks review, this means/meant going back through to the roots (including previously quoted or referenced papers within other papers) to work it through and demonstrate the problem(s) to others. There are many papers technically ‘reliant’ on previous works, often as a result of the general acceptance that a published peer reviewed paper is ‘correct’. Clearly, this is not necessarily a good aspect of the science process when it happens.

  132. Pat Frank says:

    dave38, deuterium in water does produce a heavy water — DHO instead of H2O. DHO does have a higher boiling point than H2O. But deuterium and O-18 are so rare that the amount of DHO-18 in the world is vanishingly small.

    Frank, you wrote, “Pat: The error bars you added to Keigwin’s Sargasso Sea reconstruction (Figure 7) are potentially correct and at the same time GROSSLY misleading. Stable isotopes are much more accurate describing temperature change, rather than absolute temperature. Most publications plot stable isotope ratios, rather than derived temperatures, on the vertical axis for precisely this reason.

    Well, at least you think the error bars correct. :-) But you’re wrong about the grossly misleading part, as well as about the rest of that paragraph. Stable isotopes are incorporated according to local temperature, not according to the change in local temperature. Let’s be clear. All else being constant, if local temperature changes, the O-18 ratio changes. It changes from the old O-18 ratio reflecting the prior temperature to the new O-18 ratio reflecting the new temperature. If you want a difference between two O-18 temperatures, you can take the difference between two O-18 ratios. The O-18 isotope ratios are a direct proxy for temperature, not for temperature differences.

    An O-18 ratio is taken between the O-18 in water v. the O-18 in carbonate at some temperature and at some given time. Or in some O-18-containing measurement standard at a series of set temperatures. A single ratio does not reflect different temperatures at different times.

    I suspect you’re confusing a ratio with a difference (an anomaly).

    But O-18 ratios transmit temperature, not temperature differences. They are not more accurate than shown in Figures 2-7.

    Furthermore, temperature differences (anomalies) do not decrease systematic error unless the error is of constant magnitude in both temperatures. That is rarely the case, and virtually never the case in real-world field-measurements.
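    [The distinction between constant and variable systematic error can be illustrated with a short Python sketch; all temperatures and per-sample biases below are invented for illustration only, not taken from any proxy dataset.]

```python
# Toy illustration: differencing two measurements cancels a CONSTANT bias
# exactly, but a bias that VARIES between measurements does not cancel.
# All numbers are invented for illustration.

true_temps = [20.0, 21.5]          # hypothetical true temperatures, degC

# Case 1: constant systematic error
const_bias = 0.75
measured = [t + const_bias for t in true_temps]
diff_const = measured[1] - measured[0]          # bias subtracts out
print(diff_const)                               # 1.5, the true difference

# Case 2: systematic error that varies between measurements
biases = [0.75, -0.40]                          # hypothetical per-sample biases
measured = [t + b for t, b in zip(true_temps, biases)]
diff_var = measured[1] - measured[0]
error_in_diff = diff_var - (true_temps[1] - true_temps[0])
print(round(error_in_diff, 2))                  # -1.15, larger than either bias
```

    The anomaly only removes the bias in Case 1; in Case 2 the difference inherits error from both terms.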

    In short, the error bars in Keigwin’s Sargasso Sea reconstruction are good estimates of his systematic measurement error. I’ll demonstrate that in discussing your next mistake.

    You wrote, “In simple terms,
    t = m*d + b

    “The slope, m, is fairly similar in all of these lines, but the y-intercept, b, is not: On the isotope delta scale -1 %o is always about +5 degC. If we consider two isotope measurements, d1 and d2 and calculate the temperature difference they represent:
    t2 – t1 = m * (d2 – d1)…

    Let’s take your d2-d1 and apply that to Keigwin’s data. Each T has a systematic error of 1-sigma=(+/-)0.75 C. Here is a nice page on error propagation.

    Scroll down until you find this: “Addition and Subtraction: The square of the uncertainty in the sum or difference of two numbers is the sum of the squares of individual absolute errors.

    Got that? The sum of the squares of the individual errors. So now let’s look at your d2-d1 difference. The error in your (d2-d1)=e(2-1) = sqrt[(e1)^2+(e2)^2].

    For any two of Keigwin’s temperatures, the error in your temperature difference is e(2-1)=sqrt(0.75^2+0.75^2) = (+/-)1.06 C. That makes the 95% confidence limit of e(2-1) = (+/-)2.12 C in your difference temperature, up from (+/-)1.5 C. Your method has increased the uncertainty in the result by about 41% (a factor of sqrt(2)).

    The reason the error propagates like that is because the error is systematic, not random. But don’t feel too badly. AGW-related climate scientists make the same mistake all the time.
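    [The propagation step above can be checked in a few lines; this is a sketch of the root-sum-of-squares rule the comment cites, using its (+/-)0.75 C figure.]

```python
import math

def diff_uncertainty(e1, e2):
    """1-sigma uncertainty of (x2 - x1) for independent uncertainties e1, e2,
    combined as the root of the sum of the squares."""
    return math.sqrt(e1 ** 2 + e2 ** 2)

sigma_T = 0.75                       # per-temperature 1-sigma error, degC
e_diff = diff_uncertainty(sigma_T, sigma_T)
print(round(e_diff, 2))              # 1.06 degC at 1-sigma
print(round(2 * e_diff, 2))          # 2.12 degC at the 95% (2-sigma) limit
```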

    By this time, you should know that, “Looking at Figure 7, it may be possible to conclude that the temperature in the Sargasso Sea rose about 1-2 degC between 1500 years ago and 1000 years ago” is wrong.

    And this comment, “Furthermore, if we know today’s temperature and isotope ratio (as well as m) we can calculate b for the site and completely eliminate its uncertainty.” merely shows that you’ve failed to grasp that the errors I’ve calculated have nothing to do with knowledge of “m” or “b,” which after all are only fitted constants.

    The errors I calculated are methodological systematic errors made during the course of laboratory measurement. They are from the empirical point scatter around the line defined by your “m” and “b,” and are independent of “m” and “b.”

    You wrote, “Perhaps a Monte Carlo approach would help.” Monte Carlo assumes random distributions. My post has to do with systematic error. A Monte Carlo approach could not be more irrelevant.

  133. Pat Frank says:

    Tom Ragsdale, if you know that for sure, where the heck is my money? :-)

    PG, thanks and good point. Frank mounted a real effort.

    Mondo, I gave links to the discussion that’s got Steve Mosher so steamed. Do take a look and decide for yourself whether he’s got a case.

    LazyTeenager, how’s this: ‘stable isotopes excepted [from a diagnosis of zero physical methods], but [the] physical meaning [of stable isotope proxies] is invariably discarded in composite paleoproxies.’

    You wrote, “If [proxy methods correlate with temperature under all relevant conditions] then a proxy measurement can be used to infer a temperature irrespective of its physical basis.

    You’re equating induction to deduction, LT. Doing so is entirely wrong. Proxies have no physical meaning outside a physical theory. It doesn’t matter how well they correlate with temperature.

    At best, a strong correlation may imply an underlying causal process somewhere. However, that process may be independently driving both the “proxy” and the temperature. If that’s true, the “proxy” is not a proxy. If the cause changes or turns off, the correlation disappears (as late 20th century tree rings have done).

    Without a physical theory, there is no way to know anything about why the proxy is behaving as it does. And total ignorance makes it a “proxy.” As such, its use would be no more than an irony of science. It’s not a true proxy.

    PCA isn’t adding up time series. PC’s have no physical meaning. Without an organizing physical theory, they’re no more than just a series of numbers.

    Epicycles were an empirical model with parameters adjusted by observations. PCA is none of that.

    Michael Tobis, so, then, what information do we get about ministers salaries from recordings of the price of vodka?

    markx, are we OK now? :-)

    Len, thanks.

    William M. Connolley, I asked you a fair question in reply. You ignored that. What’s the point of referencing boreholes if they have a large uncertainty bound?

    LazyTeenager, the answer to your #2 is, ‘no, not out of bound.’ Two correlated series can only reproduce one another’s values within the length of the correlation. They have no predictive power — a correlation doesn’t predict past the regressed length of the correlated series. And that means they have zero explanatory power.

    NW, hope the response worked for you.

    Aaron, the early work looked at the effects of different acids. McCrea made such a study, for example. Phosphoric acid gave by far the best dO-18 results in all the test experiments.

  134. kim2ooo says:

    wmconnolley says:
    April 4, 2012 at 7:21 am

    Will Mr Connolley be addressing me as Mistress Kim2ooo? :)

  135. kim2ooo says:

    Lazy Teenager

    I think you have your logic reasoning backwards….

    http://www.socialresearchmethods.net/kb/dedind.php

  136. Roy says:

    Excellent post. Once you get past the not inconsiderable difficulties of measurement, interpretation of the results adds a whole new layer of uncertainty. Leaving aside salinity, a further correction for the extent of glaciation is required, as precipitation which forms the polar icecaps is deficient in O18 by about 60 parts per thousand compared to ocean water. This can mean that ‘every isotopic curve has to be re-read taking cold to mean extensive continental glaciation and warm to mean glaciers reduced to their present level’. This is not new science; it’s a quote from Shackleton, N.J. (1967) “Oxygen Isotope Analyses and Pleistocene Temperatures Reassessed,” Nature 215, pp. 15-17, available without paywall from http://www.mendeley.com/research/oxygen-isotope-analyses-pleistocene-temperatures-reassessed/#

  137. William M. Connolley says:

    > why should I know you’re a PhD?

    I’m not.

    > the climate science meme

    Meme? What are you talking about?

    > general public were filling their cars with fuel measured

    Rather a poor example. Fuel is measured, but error bars are not given. Obviously. Have another go?

    > Will Mr Connolley be addressing me as Mistress Kim2ooo? :)

    Since you say nothing at all worth responding to I rather doubt I’ll be addressing you at all. Oh, wait…

    > It is beyond my knowledge to judge the validity of Pat Frank’s thesis, without years of study, but it seems thorough and is well presented.

    Errm, can no-one see how crass this, and similar comments are? If you can’t see beyond the surface, you have no business to be praising it. Unless the surface gloss is all you’re interested in, of course.

    > the paper you linked is concerned with directly measuring D and O18 in water samples from precipitation

    Well spotted. You’ll immediately see the relevance, I’m sure.

    > I’ve found the same lack of physical error bars looking at GCM outputs

    GCMs don’t have error bars in the usual sense, because the output is exact, of course. But GCMs have interannual variability, and you’ll find that reported. If you actually read the papers.

    > (a) You had no point, but mentioned “borehole thermometry”

    If you can’t see the relevance of the borehole thermometry, no amount of further explanation from me will help you.

    > William M. Connolley, I asked you a fair question in reply.

    No. You asked a question that amounted to “I don’t want to read the paper you referenced, so I’m going to ask temporising questions that appear to excuse my not bothering to read it”. If you were actually interested in the information, that wouldn’t be your response.

  138. Frank says:

    Pat: Thanks for taking the time to reply. I will, however, decline your kind invitation to join the ranks of pro-AGW climate scientists who don’t (or do) understand error propagation, and will instead attempt to improve the quality of the science being offered to WUWT readers. (I see from Steve Mosher’s comment that you have tussled with picky skeptics before. I added a comment at the end of the Air Vent post he linked. :) I would accept an invitation to join their company, but I haven’t earned that privilege.)

    We are far more interested in climate change than in the exact mean annual temperature at any particular location. Therefore, the uncertainty in temperature change (t2 – t1) in any analysis is far more important than the uncertainty in temperature (t2 or t1), which is what you misleadingly included in the absurd error bars in Figure 7. There are at least two ways to estimate the uncertainty in temperature change (t2 – t1). The dumb way is by the formula in your reply for the uncertainty of a sum or difference.

    The key point of my post was that the equation below provides a method for calculating a low uncertainty for temperature differences (t2 – t1) from the low uncertainty in the mass spec data (d2 – d1), when the y-intercept b is constant. Here is an improved explanation:

    t2 – t1 = m * (d2 – d1)

    According to your post, modern mass spectrometers can measure d2 and d1 with an accuracy of 1 part in 100,000. From your post, 1%o is 1 part in 1,000. The mass spec data therefore allow us to distinguish between d = -1.00 %o and d = -1.02 %o on the x-axis of Figure 6. When one translates uncertainty on the x-axis (%o) to the y-axis (temperature) via a slope of approximately -5 degC per 1%o, we should be able to trust temperature DIFFERENCES of roughly 0.1 degC – even though you believe the uncertainty in INDIVIDUAL temperatures is greater than 1 degC! (I didn’t personally check your error bars, but your discussion seemed sensible.) Since the uncertainty in d2 – d1 is so small, it doesn’t make any practical difference whether the slope of the lines in Figure 6 is -4, -5 or -6 degC per 1%o.
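    [The delta-to-temperature arithmetic in the paragraph above amounts to the following; the slope and delta values are the illustrative numbers from the comment, not measured data.]

```python
m = -5.0                 # assumed slope, degC per 1 permil
d1, d2 = -1.00, -1.02    # illustrative isotope deltas, permil

dT = m * (d2 - d1)       # temperature difference implied by the delta difference
print(round(dT, 2))      # 0.1 degC
```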

    As I said in my original comment, we MUST be analyzing data belonging on a single line of Figure 6; the y-intercept, b, must be constant. Different values of b and m arise from different kinetic isotope effects during the processes that incorporate O18 into the proxy material. We can’t use the difference between d2 from Cape Cod oysters and d1 from Florida coral or – as you absurdly noted – d2 from water and d1 from calcium carbonate. HOWEVER, the data in your Figure 7 is presumably all from a sediment core from the Sargasso Sea, and probably from CaCO3 in that core. If you want to assert that b varies for this core, you need to explain why incorporation of O18 into the shells of the organisms that presumably deposited this CaCO3 has changed over the past several thousand years. Keigwin’s paper may explain how they ensured that their proxy material was as homogeneous as possible (a single organism?). Maybe these researchers were sloppy and there is no reason to assume b and m are constant, but you need to convince readers why different lines in Figure 6 are appropriate for the data points in Figure 7.

    If my uncertainty analysis (+/- 0.1 degC) is correct, do you still think your error bars properly reflect our understanding of temperature CHANGE in the Sargasso Sea? Does anyone care if the temperature there was 22 or 25 degC 1000 years ago?

    Some publications leave isotope data in %o form (accurate to 0.01%o), rather than confront the complications of converting to temperature that you have exaggerated in this post. If Keigwin translated his isotope data into temperature data, he may have had good reasons for believing he knew a reliable method for translating isotope data into temperature data at his site and may have discussed his method in his paper.

  139. Dr. Deanster says:

    Dr. Deanster > the paper you linked is concerned with directly measuring D and O18 in water samples from precipitation

    wmconnelley says > Well spotted. You’ll immediately see the relevance, I’m sure.

    Dr. Deanster says … I immediately see how shallow this response is, and how it ignored the rest of the post regarding observations of your linked study. At least you could respond to the meat of the post, as opposed to a sound bite … that says nothing.

  140. Pat Frank says:

    William M. Connolley wrote, “GCMs don’t have error bars in the usual sense, because the output is exact, of course. But GCMs have interannual variability, and you’ll find that reported. If you actually read the papers.

    I’ve read the papers. The reason GCMs don’t have error bars — true physical error bars — is because no one knows what they are. No one has propagated the error through the physical theory in a GCM. No one has propagated the physical error per time step into a projection.

    The GCM interannual variability, without physical error bars, represents little more than the internal variability of the model. It has numerical meaning only. Reporting those alone is a charade.

    I’ve calculated the average cloudiness error made by GCMs (here, here); it equates to (+/-)100% of all the Anthro-GHG forcing, per time step. And that propagates into rapidly increasing error with each time step. GCMs tell us nothing about future climate.

    You wrote, first quoting me, “William M. Connolley, I asked you a fair question in reply.”
    No. You asked a question that amounted to “I don’t want to read the paper you referenced,…

    Except that you had referenced no paper. Here’s your entire original comment to me: “The obvious thing to look at is the comparison of borehole thermometry to D-O18 in Greenland.
    I don’t see a reference there, do you? You made a lazy off-hand riposte, worth exactly nothing. My question about error was entirely appropriate. Your next response was another imposture, “Interesting to see how you react to new leads, new ideas…,” showing a lack of awareness that you had offered no new leads, no new anything.
    And now you wrote, “No. You asked a question that amounted to “I don’t want to read the paper you referenced, so I’m going to ask temporising questions that appear to excuse my not bothering to read it”. If you were actually interested in the information, that wouldn’t be your response.” which is a crock because you had never referenced a paper.
    So stop with the pious posturing, William. Your own words expose you.
    Finally, in your next post, you actually did link to a paper — Jouzel’s 1997 ice core paper — as your evidence of the high-level reliability that will refute my critique.
    I have that paper and am guessing you never read it, because immediately in Figure 1 and in Figure 2, the scatter in the T:dO-18 points is so large that the error they represent will dwarf the errors described in my head post analysis.
    The Greenland dO-18 data in Jouzel Figure 1 exhibit by far the least amount of scatter. Those data are referenced to S. J. Johnsen and J. W. C. White (1989) “The origin of Arctic precipitation under present and glacial conditions” Tellus 418, 452-468, where they appear in Figure 3.
    So, here’s what I did for you, William. I digitized Johnsen’s Greenland dO-18 data, and regressed it against T, just as he did.
    Johnsen’s equation: dO-18%o = [0.67(+/-)0.02]*T - [13.7(+/-)0.5]
    My regression: dO-18%o = [0.71(+/-)0.02]*T - [13.01(+/-)0.5]; r^2 = 0.99; not a bad reproduction.
    Standard deviation of Johnsen’s point scatter about that line: (+/-)dO-18 = 0.359%o => (+/-)0.5 C
    That (+/-)0.5 C is the lower limit of error in Jouzel’s (Johnsen’s) dO-18:T relationship — your exemplar of AGW-recouping wonderfulness, remember.
    That makes the 95% confidence limit (+/-)1.0 C.
    And that once again rules out any possible conclusion about historical, millennial, or geological unprecedentedness of 20th-century temperature drawn from dO-18 measurements.
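    [The fit-and-scatter procedure described above can be sketched in Python. The (T, dO-18) pairs below are invented placeholders lying roughly along such a line; they are not the digitized Johnsen data, which are not reproduced here.]

```python
# Invented (T degC, dO-18 permil) pairs, for illustration only.
pairs = [(-45, -43.5), (-40, -41.8), (-35, -37.2), (-30, -34.0),
         (-25, -31.3), (-20, -27.0), (-15, -24.4)]

n = len(pairs)
sx = sum(t for t, d in pairs)
sy = sum(d for t, d in pairs)
sxx = sum(t * t for t, d in pairs)
sxy = sum(t * d for t, d in pairs)

# Ordinary least-squares line: dO-18 = slope*T + intercept
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Standard deviation of the point scatter about the fitted line (permil),
# with n-2 degrees of freedom for the two fitted parameters
residuals = [d - (slope * t + intercept) for t, d in pairs]
sigma_d = (sum(r * r for r in residuals) / (n - 2)) ** 0.5

# Translate the dO-18 scatter into a temperature uncertainty via the slope
sigma_T = sigma_d / abs(slope)
print(round(slope, 2), round(intercept, 1), round(sigma_T, 2))
```

    The last line is the step that converts %o scatter about the calibration line into degrees C of systematic uncertainty.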

    Now what, William M. Connolley?

  141. Pat Frank says:

    Frank, Steve Mosher’s comment was that I “[refuse] to engage the argument.” He was untruthful. He also wrote that I “continue to make the same mistakes,” which is conveniently non-specific. And Steve didn’t link the tAV post. I did.

    Your up-dated explanation is just as mistaken as your original. The error I discuss is not mass spec error. It’s laboratory error. It’s sample prep error. It’s handling error. It’s systematic methodological error that shows up as point scatter in the results. That point scatter has nothing to do with the slopes of the lines.

    You wrote, “The mass spec data therefore allow us to distinguish between d = -1.00 %o and d = -1.02 %o on the x-axis of Figure 6.

    No, they don’t because the lines on Figure 6 don’t represent data. They are just the lines fitted to T:dO-18 data. Look again at Figure 6. One of the arrows points to a line and says, “Epstein, 1953.” Now look at Figure 5. That shows the data of Epstein, 1953. Look at the point scatter in the dO-18 measurements. The temperature standard deviation of the scatter itself is (+/-)0.76 C.

    The reverse polynomial regression yields 0.16%o as the experimental scatter in dO-18. That is again methodological error and it is 16 times larger than the mass spec precision, on which you have mistakenly focused. From Epstein’s equation, that (+/-)0.16%o scatter in dO-18 is equivalent to a systematic temperature uncertainty of (+/-)0.7 C.

    Your point about mass spec accuracy is irrelevant.

    Difference temperatures or difference dO-18s each encounters the equivalent systematic error width. It doesn’t matter which one you decide to emphasize. The error remains present and equivalent. Systematic errors combine in differences as the sum of their squares, as I already pointed out to you. You called error propagation “dumb,” and are advised to re-think that.

    You’re also disputing direct evidence of uncertainty in result that the scientists involved in the dO-18 method development have themselves openly acknowledged. How smart is that?

    You wrote, “We can’t use the difference between d2 from Cape Cod oysters and d1 from Florida coral or – as you absurdly noted – d2 from water and d1 from calcium carbonate.

    You entirely misunderstood the point of Florida v. Cape Cod. The difference in T per dO-18 is due to the difference in salinity. The point to be raised in that context is that no one knows what the paleosalinity was when fossil shells were formed. That ignorance provides a potential for an error in derived T of up to about 2 C.

    And your comment about “absurdly” using carbonate v. water O-18 merely shows that you don’t at all understand the method you’re arguing about.

    You wrote, “… you need to convince readers why different lines in Figure 6 are appropriate for the data points in Figure 7.

    Frank, I begin to think you didn’t even read my post before arguing about its content. I didn’t apply different lines from Figure 6 to the data of Figure 7 (Keigwin’s data).

    The error bars in Figure 7 come from the (+/-)0.14%o scatter in Shackleton’s method — Keigwin used Shackleton’s equation — and Keigwin’s own reported (+/-)0.1%o mass spec limit of precision. Those error bars reflect the empirical uncertainty in Keigwin’s own experimental method.

    You wrote, “If my uncertainty analysis (+/- 0.1 degC) is correct…“. It isn’t. Nothing you’ve written has been correct, including calling standard error propagation, “dumb.”

  142. William M. Connolley says:

    > The reason GCMs don’t have error bars — true physical error bars — is because no one knows what they are.

    So, you haven’t read the papers. Like I say, all this stuff is lost and confused. Read up on the basics before trying to walk on your own.

    > I’ve calculated the average cloudiness error made by GCMs

    Why did you do that? There are plenty of papers out there that do it properly. You should read them, instead.

    > I digitized Johnsen’s Greenland dO-18 data, and regressed it against T, just as he did.

    Well, that was pointless then. Try reading the paper instead.

    There’s a theme here.

  143. Smokey says:

    Willy Conn says in reply to Dr Frank:

    > I’ve calculated the average cloudiness error made by GCMs

    Why did you do that? says Conn-man. There are plenty of papers out there that do it properly. You should read them, instead.

    “Papers” are not a substitute for calculations, chump.

    And:

    > I digitized Johnsen’s Greenland dO-18 data, and regressed it against T, just as he did.

    Well, says the Wiki-Connster, that was pointless then. Try reading the paper instead. There’s a theme here.

Yes, and the ‘theme’ is the Conn-man’s anti-science narrative that claims a “paper” trumps data. It does not, particularly in climate science. Willy Con should stick to the only thing he is competent at: censoring scientific views different from his own. The readers of the internet’s Best Science site (http://2012.bloggi.es/#science) are not Conned by this jamoke. His censorship only works at Wikipedia. Not here. Tough noogies, conman.

  144. Gail Combs says:

    When William M. Connolley comes on to WUWT to jump down on a concept with both feet, it is a good indication that someone is posting important information.

    I am not nearly as knowledgeable as Pat Frank is in statistics (a major failing of our university science programs IMHO) but as a chemist I certainly see his point about the error inherent in any and all test methods and how that error is compounded when you are talking multiple lab techs, different laboratories and samples from different locations with who knows what confounding factors built in.

    REPLY: I wonder who pays Connolley to spend so much time here? Obviously he’s on a mission – Anthony

  145. Agile Aspect says:

    Excellent article!

    The link to your paper on global temperatures is still broken (although I was able to edit the URL in my browser and get the download to work.)

    This might explain why moser, frank, mtobis and the wiki molester are having trouble grokking standard experimental error analysis taught in undergraduate physical science courses.

It would be nice if someone could follow up this post with a post on the 83 year shift in the CO2 data from Shipley by Keeling.

  146. Agile Aspect says:

dave38 says:
April 3, 2012 at 11:40 am

    “The O-18/O-16 ratio in sea water has a first-order dependence on the evaporation/condensation cycle of water. H2O-18 has a higher boiling point than H2O-16, and so evaporates and condenses at a higher temperature.”

I can accept that, but I wonder what effect the presence of deuterium has on the temperature and the evaporation/condensation, and can it make much difference?

    ;————————–

    The isotopic composition of rain and snow can vary by 4% at mid latitudes and up to 40% at the poles as a result of evaporation and condensation.

    The isotopic composition of deep offshore ocean water is remarkably uniform.

  147. Frank says:

Pat: Let’s see if we can agree on anything.

    1) When interpreting Figure 7, I believe we want to know which large temperature changes are significant and which smaller changes may be due to experimental variability. The accuracy of the absolute temperature at any one time and place usually isn’t important. Do you agree?

    2) Error propagation is important, but analyzing data in a manner which unnecessarily inflates the error is dumb. Statistics and signal processing are all about extracting reliable information from noisy data. My earlier comments show a better method for calculating the uncertainty in temperature CHANGE than the brute force method you applied to the absolute temperatures. However, my method only works when all of the data points belong on the same O18/temperature calibration line. Do you agree?

    3) When a dO18/temperature calibration graph is constructed (with temperature on the y-axis as traditionally shown, even though temperature is really the independent variable during calibration), the isotope data should be presented with a horizontal error bar to reflect standard error of the mean for the isotope ratio obtained from shells grown at a single temperature. (See Figure 1a in Bemis) In theory, the width of those error bars can be reduced by more reproducible experimental technique and analyzing more samples. Do you agree?

4) You were right, it was ridiculous for me to suggest that the width of the horizontal error bars was determined by the ability of a mass spec to resolve 1.00 and 1.02%o. I should have read your post more carefully.

    5) With some caveats discussed below, the standard error of the isotope data used to construct a calibration curve can be converted into the standard error of the reconstructed temperatures by multiplying by the absolute value of the slope of the calibration curve (4.8 degC per %o). The variability of this slope isn’t large enough to influence this conversion. Do you agree?

    6a) Table 1 of a Lea review article summarizing various isotope temperature proxies claims the standard error of O18 temperature reconstructions is 0.5 degC (when O18 in seawater is known), suggesting that standard error in calibrating isotope data typically is 0.1%o. (Lea http://www.geol.ucsb.edu/faculty/lea/pdfs/Lea%20TOG%20proof.pdf) One paper I looked at reported a figure of 0.08%o in their methods section. (http://epic.awi.de/24738/1/Wit2010e.pdf) In the absence of further information, we should trust the significance of temperature changes greater than 1 degC between any two points. Changes of 0.5-1.0 degC between periods with multiple readings (the MWP vs the LIA) could be significant.

    6b) Main caveat: Foraminifera deposited at a real site will be less homogeneous than the foraminifera used to construct a calibration. Temperature changes during the year, so the spread in the O18 data will reflect the temperature range during the year and the mean O18 will reflect the mean temperature. Laboratory studies show that salinity, light and especially seawater O18 (which changes with rainfall, evaporation and ice ages) can influence O18 incorporation. The calcium carbonate in real samples will have a mean and spread of O18 values that reflect the annual variability in all of these factors. As long as the mean of these influences doesn’t change appreciably with time, changes in the O18 record will reflect changes in the LOCAL mean annual temperature. (O18 is a dubious proxy for analyzing changes due to ice ages because O18 in sea water changed with the size of the ice caps.)

    6c) I agree with you that reconstructed absolute temperatures from different sites need error bars reflecting all of the possible lines in Figure 6.

    7) If you believe that changes in mean annual salinity (or other factor) might significantly increase the uncertainty of O18 temperature reconstructions, calculate how large a salinity change is required to produce a bias of 0.25 degC and make the case that salinity could have changed this much in the Sargasso Sea over the last 3000 years. If you can’t make such a case, tell your readers.

8) The reliability of Keigwin’s reconstruction depends on the standard error of the O18 data HIS LAB obtained from control foraminifera raised under well-defined conditions. Lea’s standard error for O18 reconstructions (0.5 degC) is only valid if Keigwin’s standard error for control O18 samples was 0.1%o.

9) Bemis varied temperature in the lab by almost 10 degC, so he could easily study the relationship between O18 and temperature (without tight O18 data). The standard error of his O18 data (see his Figure 1) was much greater than 0.1%o; his methodology certainly wouldn’t have resulted in a reconstruction with a standard error of 0.5 degC. However, you have no business attaching error bars from Bemis to results from a study by Keigwin. You need Keigwin’s quality control data to attach error bars to Keigwin’s study.

    Adding insult to injury in your Figure 6, you show an isotope uncertainty of 0.4%o from Bemis intersecting DIFFERENT lines before being translated horizontally into temperature uncertainty. Until you demonstrate that the factors that produced the different lines in Figure 6 changed over time in the Sargasso Sea, you should have intersected only one of these lines.

10) With many graphs, it is traditional to put error bars that extend one standard deviation of the mean above and below the point. When the top of the error bar for a low result overlaps the bottom of the error bar for a high result, the difference between the low and high results is usually not significant. Your 95% confidence intervals probably confused many readers.

11) The most important thing I learned from this debate was that the standard error (according to Lea) is far lower for O18 than for the other isotope reconstruction methods: Mg/Ca, +/-1 degC; alkenone index, +/-1.5 degC. It is probably easier to find scandalous misuse of these proxies than of O18.
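Frank's point (5) above — scaling an isotopic standard error into a temperature standard error by the magnitude of the calibration slope — is simple enough to sketch. The numbers are those quoted in (5) and (6a), and the sketch assumes a strictly linear calibration:

```python
# Point (5) as arithmetic: with a linear calibration T = a + b * dO18,
# a standard error in the isotope ratio maps to a temperature standard
# error of |b| * SE(dO18). Numbers are those quoted in the comment above.
slope_magnitude = 4.8     # degC per per-mil, |slope| of the calibration
se_isotope = 0.1          # per-mil standard error in the calibration data

se_temperature = slope_magnitude * se_isotope
print(se_temperature)     # close to Lea's quoted 0.5 degC
```

Note the caveat raised later in the thread: this one-line conversion only holds for a linear standard; for a curved calibration the local slope, and hence the conversion factor, changes with temperature.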

  148. Pat Frank says:

    William M. Connolley, it’s very clear that you are unable or unwilling to mount a constructive argument in your own defense.

    Smokey, please, Pat, not Dr. Frank. :-)

Gail, I didn’t know you were a fellow experimental chemist. It’s good to be in the company of someone who understands what a struggle it is to get low-error experimental data. Geoff Sherrington is a chemist as well, an analytical chemist in fact, and like you has a total respect for experimental error and its implications about reliability.

  149. Pat Frank says:

    Frank, in reply and using your numbering:

    1) You show no understanding of the meaning of systematic error.

    2) The error is in the data itself. Into what units the data are converted is irrelevant. Taking differences does not necessarily remove error. Your method doesn’t work at all. You really need to learn about systematic error, and propagation of error.

    3) No. Systematic error can either shrink or grow with repeated sampling.

    4) I didn’t use the word “ridiculous.” I used ‘mistaken.’

    5) OK for linear standards, not OK for exponential standards.

    6a) I’ve looked at Lea’s 1997 article. Table 1 gives dO-18 standard error of (+/-)0.5 C, referencing Kim and O’Neil, 1997 under 6.14.3.3. I’ve now evaluated the CaCO3 calibration data in Figure 2 and Table 1 of Kim and O’Neil, 1997. The measurement uncertainty in their 5 mM T:dO-18 calibration is (+/-)2.2 C. In their 25 mM T:dO-18 calibration it’s (+/-)4.0 C.

    The Kim & O’Neil, 1997 25 mM data set is large enough (24 points) to evaluate the error envelope. There are at least two methodological error modes operating simultaneously in their experiment. The error therefore behaves as systematic, not random.

    So, the error quoted by Lea is 4x to 8x smaller than just the measurement error actually in the data of the paper he cited.

    Therefore, changes of 0.5-1.0 C are well below the level of resolution. Your 6a) is now moot.

    6b) The unknowns you admit concerning paleo-salinity and foraminiferal depth, not to mention photosynthetic disequilibrium, destroy your conclusions about any reliability in “LOCAL” paleo-temperature. The uncertainty in paleo-salinity alone is worth at least (+/-)2 C.

    6c) Thank-you.

    7) The case in Figure 7 is about measurement error, not paleo-salinity. I have been explicit about that all along, including in the head-post article itself. I.e., I wrote, “The total measurement uncertainty in Keigwin’s dO-18 proxy temperature…” There’s nothing about paleo-salinity and no need to amend anything.

8) I pointed out in the head-post article that, “At the ftp site where Keigwin’s data are located, one reads “Data precision: ~1% for carbonate; ~0.1 permil for d18-O.””

    Lea’s standard error is uncritical and in any case is a general estimate not applicable to any specific case, such as Keigwin’s, where a true uncertainty can be calculated (which I did).

    9) Figure 7: how many times need I repeat that the error bars on Keigwin’s data are from Keigwin’s own experiment? Those error bars have nothing to do with Bemis’s work.

    Figure 7 caused no injury, nor does Figure 6 make an insult. You are merely continuing to make a false projection of Figure 6 onto Figure 7.

    Figure 6 is stand-alone, Frank. It’s meant to show that different standard lines give different temperatures for the same dO-18%o (or vice-versa). This disparity among lines is due to uncontrolled variables. Those uncontrolled variables introduce uncertainty into any determination of T from dO-18.

    Repeating: Figure 6 has nothing to do with Figure 7. The error bars in Figure 7 have nothing to do with Figure 6.

    Are we clear on that now?

    10) 95% confidence limits are the standard way of showing minimal surety in result. Most readers here at WUWT are very sophisticated in such matters and were certainly not confused. New readers will have to get used to it.

11) I haven’t looked at the other proxies in any detail, but would not be at all surprised if the measurement errors in Mg/Ca and alkenone proxies are 2x-4x larger than Lea’s entries.
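The procedure Pat describes in (6a) — reading a measurement uncertainty directly out of the point scatter in a published calibration — can be sketched generically. The data below are synthetic stand-ins, not Kim & O'Neil's numbers:

```python
import numpy as np

# Generic sketch: fit the T:dO-18 calibration line, take the standard
# deviation of the per-mil residuals, and divide by |slope| to express
# the point scatter as a temperature uncertainty. Synthetic data only.
rng = np.random.default_rng(1)
T = np.linspace(10.0, 40.0, 24)                       # 24 calibration points
d18 = 4.0 - 0.21 * T + rng.normal(0.0, 0.25, T.size)  # per-mil, with scatter

slope, intercept = np.polyfit(T, d18, 1)
residual_sd = np.std(d18 - (slope * T + intercept), ddof=2)  # per-mil
T_uncertainty = residual_sd / abs(slope)                     # degC
print(f"scatter {residual_sd:.2f} per mil -> +/-{T_uncertainty:.1f} degC")
```

Whether scatter extracted this way should then be treated as random (shrinking with averaging) or as systematic (not shrinking) is precisely what is contested in this thread.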

  150. Brian H says:

    Pat;
    #10;
    this “95%” disease has to be stomped. It is a squishy, slack standard, used in squishy social science because they can’t get any better. Tell me, what do chemists consider a minimum confidence level? How many sigma?

  151. Monty says:

William Connolley cites a poster here who said: “It is beyond my knowledge to judge the validity of Pat Frank’s thesis, without years of study, but it seems thorough and is well presented”.

This gets precisely to the center of the problem with 90% of skeptics. Even though most of them know very little about climate science (and probably not much about science in general), anything that supports their preconceived position is immediately praised, and anything that goes against it is immediately damned. Thus the praise for Pat Frank’s post. The only way to judge its validity is to see if he can get it published in a reputable journal. What’s the betting that it never gets beyond WUWT?

  152. Pat Frank says:

    Brian, speaking only for myself, I report 1-sigma. But among physical scientists, 1-sigma is enough. Everyone knows what it means.

Monty, you wrote, “The only way to judge its validity is to see if he can get it published in a reputable journal.”

    Not correct. The only way to judge its validity is on the internal merits. The same is true of any scientific argument.

  153. Monty says:

    OK Pat. Where are you going to submit this? GRL? Climate of the Past? Honestly, I encourage you to. If it passes peer review and has an impact then you will have made a contribution. A blog post just doesn’t cut it I’m afraid.

  154. Pat Frank says:

    Monty, your original point was about “validity.” Having lost that point, you’ve now shifted your ground to “impact.”

    Having an “impact” depends on whether a result is noticed. Publication doesn’t guarantee that. Whether the result is correct or not is the baseline issue. I stand by my results, and they’re all right here to be judged. So far those in opposition — Connolley, Tobis, Thirumalai — have been objectively ineffective.

Kaustubh Thirumalai, who has his own blog, has wasted space in a long essay in reply that is nothing more than a personal attack. One would think that, as a graduate student in the very field, he’d have made a quantitative rebuttal. Guess not.

    I’d suggest a blog post *does* cut it, in that the argument can be completely valid. What sort of impact it has depends on the reaction and position of those who notice it.

  155. Monty says:

    This is just a long-winded attempt to justify not sending this to peer review where it can be judged by experts in the field, rather than praised by sycophants who don’t understand a word of it. The only conclusion that can be drawn is that you know it wouldn’t get in to a reputable journal. I can only imagine the fuss that the skeptics would make if climate science papers were similarly treated.

  156. Pat Frank says:

    You don’t understand a word of it either, Monty, and yet you criticise anyway. That makes you their opposite — a bombast also with a worthless opinion.

    I’ve already published three peer-reviewed critical articles on the neglect of error in AGW-related climate science, and expect to have the surface air temperature record by the short hairs in my next paper. Maybe after that, I’ll write up a more complete analysis of the effect of measurement error on the reliability of temperature proxies. But that would concern physically real proxies. The temperature-proxy-by-statistical-scaling field is hopeless pseudo-science. A criticism of that would have to be published in a philosophical journal.

  157. Monty says:

    You said; “You don’t understand a word of it either, Monty, and yet you criticise anyway”.

As a matter of fact, I have a PhD in climate science and have published around 70 papers in the peer-reviewed science literature on various aspects of climate change. These include lots of papers in the leading journals in my field.

    Now, I’m not an expert in the use of oxygen isotope ratios but that’s not the point. The point is that such experts do exist and it is very telling that you dare not expose your ‘research’ to their scrutiny.

    Makes me think your ‘research’ isn’t up to much.

  158. Frank says:

    Pat:

    1) A paired t-test is an experimental design that doesn’t demand that the uncertainty in the experimental and control groups be added in quadrature. Likewise, with foraminifera, uncertainty can be reduced by analyzing only samples that are expected to fall on one calibration line, instead of the full range of possibilities shown in Figure 6.

    2) In the real world, the uncertainty introduced using any calibration curve or standard curve is always established by running an adequate number of positive controls. No matter what you think the uncertainty should be from error propagation, the scientists running these experiments know what the uncertainty IS with positive controls. When they say the temperatures reconstructed for positive controls are typically good to 0.5 degC, there is no point in arguing. If you don’t have access to positive controls, the uncertainty can be determined from the confidence intervals for the derived slope and intercept (linear fit) or coefficients a, b and c (quadratic fit) determined during the least-squares fit. See the sixth paragraph on calibration curves in Wikipedia.

    4) You advocate adding in quadrature the isotope uncertainty during calibration to the isotope uncertainty during analysis; a procedure that appears flawed: For simplicity, consider a linear calibration. When calibration is done with a precision of 0.1%o at two temperatures, 21 and 24 degC (roughly the range of the Sargasso Sea), the slope and intercept will be determined within certain confidence intervals. If the calibration is performed with samples every 2 degC between 16 and 28 degC, the slope and intercept will have much tighter confidence intervals (particularly the slope), despite being constructed from equally precise isotopic data. The reliability of a standard curve depends on more than just precision of the raw data used to construct it. The uncertainty in the isotope data enters the error propagation analysis ONCE and is combined with the uncertainty in the parameters derived when fitting the standard curve. Since we can get better answers from positive controls, this type of error propagation analysis is rarely needed.

5) To some extent, you MAY be trying to add SYSTEMATIC error that might be introduced by changes in salinity, light, and other factors to the experimental uncertainty (random error). This can’t be done: experimental uncertainty can be quantified by statistical analysis of observations; systematic error cannot. When systematic bias can be accurately quantified, we remove it. If salinity or some other factor has changed enough over time to affect the O18-temperature relationship, then Keigwin’s reconstruction will have a systematic error, not a larger experimental uncertainty. All scientists know that reasonably tight experimental results (temp SE <= 0.5 degC, or p < 0.01) can be invalidated by systematic error. Experimental variability is covered statistically and quantitatively; possible sources of systematic error are discussed qualitatively in the paper.

The amount of O18 in seawater increased during the ice ages as O18-depleted ice accumulated on land. Someone has demonstrated that this systematic error is large enough to invalidate temperature reconstructions from foraminifera extending into ice ages, so O18 in foraminifera is not used for these periods. For the same reason, a systematic error due to changing salinity or seawater O18 COULD invalidate Keigwin's reconstruction. Knowing that one can produce a change in the isotope/temperature calibration by making a large change in salinity in the laboratory doesn't demonstrate a systematic error in Keigwin's reconstruction. First, you need to estimate how much salinity (like O18 during the ice ages) might have changed with time in the Sargasso Sea, and then consult laboratory experiments with salinity to see how big a systematic error that might produce.

6) Several papers I glanced at show inadequate accuracy in reconstructions of SST at different locations, demonstrating the existence of systematic error when this methodology is used at different sites. Bemis is trying to identify the cause of these systematic errors and develop methods for correcting them. Using these proxies to determine absolute temperature at different locations is a perilous operation, but assessing temperature CHANGE at one site seems reasonable.

"The first principle is that you must not fool yourself – and you are the easiest person to fool." Richard Feynman, in Cargo Cult Science.
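Frank's point (4) above — that the spread of calibration temperatures, not just the per-point precision, sets the confidence interval on the fitted slope — can be illustrated with synthetic data. The 'true' coefficients below are assumptions for the sketch, not values from Bemis or Keigwin:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_slope_se(temps, n_rep=6, sigma=0.1, slope=-0.21, intercept=4.0):
    """Standard error of the fitted calibration slope when dO-18 is
    measured n_rep times at each temperature with per-mil noise sigma.
    The true slope/intercept are illustrative assumptions."""
    T = np.repeat(np.asarray(temps, dtype=float), n_rep)
    d18 = intercept + slope * T + rng.normal(0.0, sigma, T.size)
    _, cov = np.polyfit(T, d18, 1, cov=True)
    return float(np.sqrt(cov[0, 0]))

narrow = fitted_slope_se([21.0, 24.0])              # 3 degC calibration range
wide = fitted_slope_se(np.arange(16.0, 29.0, 2.0))  # 12 degC calibration range
print(f"slope SE, narrow range: {narrow:.4f}; wide range: {wide:.4f}")
```

With equally precise isotope data, the wider calibration range pins the slope far more tightly, which is Frank's point; it says nothing by itself about whether systematic error is also present.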

  159. Pat Frank says:

Monty, if you’re as expert as you say, you’d have no problem evaluating and criticising my analysis yourself. You’d also know that the merit of any scientific case depends on content, and in no way on peer-review.

    And yet, you’ve posted here four times without one word of objective criticism, and without evidencing any understanding of the criteria of validity of a scientific argument.

My analysis is here in public, on a prominent website read by millions and available for criticism by any professional, including you. It could not be more exposed than it would be in any open-access journal’s peer review.

    Any scientist can come here with the opportunity to knock it down in full view of a sophisticated reading public. If you have some valid criticism, go ahead and make it. Otherwise, you’re just making noise.

  160. Monty says:

    I can only imagine the screaming and shouting from the skeptics if climate scientists posted their research on ‘pro-AGW’ blogs and refused to submit to peer-review. It’s clear that the only reason that you refuse to submit to a leading science journal is that you know your ‘research’ is flawed and that it wouldn’t pass peer review.

    There are open-access journals such as Climate of the Past Discussions where everyone can see the refereeing process as it happens (I am often asked to referee papers there myself). You could submit there….but I bet you don’t.

    And not a single criticism from the sycophants who populate this blog! Amazing!

  161. kim2ooo says:

Monty needs to debate the work presented here, not the venue chosen.

An attempt at pea shuffling seems to be all he can offer, much like his posts at Deltoid.

He consistently says things such as, “I think who always trumpets his PhD You are right to be suspicious..” and “I’m always suspicious of those who make a song and dance about their qualifications.”
Yet he follows up, as here, with “As an aside; I have a PhD and I’m a climate scientist.”

    Posted by: monty | June 24, 2011 8:57 AM http://scienceblogs.com/deltoid/2011/06/the_conversation_on_climate_ch_2.php

    ” I have a PhD in a relevant subject but don’t feel the need to advertise it with my blog name.”

    Posted by: monty | March 21, 2011 5:57 AM http://scienceblogs.com/deltoid/2011/03/ian_enting_on_climate_science.php

  162. Monty says:

    Sorry Kim, I don’t quite understand. The only reason I mentioned I was an academic working in climate science is because Pat wrote “You don’t understand a word of it either, Monty, and yet you criticise anyway”. Had he not said this I wouldn’t have felt the need to advertise my qualifications.

    My basic point still stands. For all his bluster, Pat Frank will not allow expert scientists to review his work. If there are any lurkers here, you can draw your own conclusions.

  163. kim2ooo says:

    Monty says:
    April 11, 2012 at 7:13 am

    Sorry Kim, I don’t quite understand. The only reason I mentioned I was an academic working in climate science is because Pat wrote “You don’t understand a word of it either, Monty, and yet you criticise anyway”. Had he not said this I wouldn’t have felt the need to advertise my qualifications.

    xxxxxxxxxxxxxxxxxxxxx

    C’mon you advertise your qualifications on many threads. [ See links in above post ].

    xxxxxxxxxxxxxxxxxxxxxx
    Monty says:
    April 11, 2012 at 7:13 am

    “My basic point still stands. For all his bluster, Pat Frank will not allow expert scientists to review his work. If there are any lurkers here, you can draw your own conclusions.”

    xxxxxxxxxxxxxxxxxxxxxxxxxx

For a PhD, don’t you have to learn about “logical fallacies”? Your logical fallacies here are your continued insistence on an “Appeal to Authority”, compounded by the “Red Herring Fallacy”. [ Actually, you also throw in a "Straw Man" Fallacy ].

    [ Learn about them here ] http://www.fallacyfiles.org/redherrf.html

    1: Expert Scientists have embraced numerous faulty papers [ Appeal to Authority ]
    2: Diverting from the debated paper / conclusions made by Mr Pat Frank [ Red Herring ]
    3: Experts CAN review here – Mr Pat Frank hasn’t refused access [ Straw man ].

    Unlike echo-chambers and edit-mills…. WUWT will teach you debate skills.

  164. Monty says:

Actually, yes, I would appeal to authority here. I’m guessing that the world’s leading experts in isotope chemistry and paleoclimatology probably do know more than Pat Frank about isotope chemistry and paleoclimatology. His refusal to submit his ‘research’ so that it can be judged by such experts is damning. I wonder what the odds are of Pat Frank ever making a significant contribution to this field? Zero?

kim2ooo, you must be in possession of some pretty impressive intellectual blinkers if you can’t see that.

    Bye bye.

  165. kim2ooo says:

    Monty says:
    April 11, 2012 at 9:45 am

    Actually, yes I would appeal to authority here.

    xxxxxxxxxxxxxx

    Not yours!
    That is what you are demanding. You want Mr Pat Frank to surrender to YOUR authority and submit to YOUR desires.

    My intellectual “blinkers” saw through your logic fallacies.

    You’re dismissed :) bye bye

  166. Pat Frank says:

    Frank, using your numbering:

    1) A paired t-test only reveals correlation. It says nothing about accuracy (or precision).

In Figure 6 (Bemis Figure 2), the shifts in the standard lines are due to uncontrolled variables. That means unknown influences that materially impact the result. In turn, that means no one knows why the various standard lines are different. For any given new data set, no one knows which of those lines is relevant, or whether any of them is relevant.

    2) You wrote, “When they say … there is no point in arguing.” You’re making an argument from authority. Invalid to the max.

McCrea reported the same uncertainty as I derived. I quoted him above: “The average deviation of an individual experimental result from this relation is 2°C in the series of slow precipitations just described.” The point scatter in the other calibrations led to uncertainties of similar magnitude, and the error envelopes behaved as systematic error. You have no case.

Taking your error in slope and intercept: for any line you’d have this — y = M[(+/-)m]*x + B[(+/-)b], where M is the mean slope, m is the uncertainty in the slope, B is the mean intercept and b is the uncertainty in the intercept.

    With two slopes and two intercepts about the mean, you have five lines that describe any one relation between x and y (or T and dO-18): the mean line and the four lines that bound your uncertainty.

    Given a T:dO-18 data set, every dO-18 will have five temperatures associated with it; the mean temperature and four uncertainty-bound temperatures about that mean, i.e., Tmean and t1, t2, t3, t4, where the t’s are the four temperatures that define your uncertainty bound. Those four uncertainty temperatures combine to give you a standard deviation around your Tmean, calculated as sqrt[sum of (t-T)^2/3].

    You can’t escape it, Frank. And note that the uncertainty is systematic not random.

3) You had no point three.

    4) I’m using standard error propagation, Frank. I’m not “advocating” it. The method is a basic tool in science and has been in common use for more than 100 years.

    You wrote, “When calibration is done with a precision of 0.1%o…” You still haven’t realized that the point scatter comes from the entire methodology. Mass spec is only part of that.

The rest of your point is just hand-waving, will-this and would-that. It contributes nothing without having done the experiment.

In my recent post, I showed that the uncertainties in Kim & O’Neil’s “positive controls” are (+/-)2 C to (+/-)4 C. Your claim about “better answers from positive controls” has already been refuted by direct demonstration.

Regarding the Sargasso Sea analysis, Keigwin didn’t report any experimental error, and none of his dO-18 points include error bars. But he reported that salinity statistically accounted for 30% of his isotopic signal in his calibration against recent SSTs.

    As we do not know how salinity changed at that location across the 3000 years of Keigwin’s analysis (he assumed constant salinity — an assumption refuted by the recent variations in salinity), then a properly conservative and critical view of uncertainty would put 30% error bars on his dO-18 reconstructed SSTs. That would be (+/-)7 C.

    So, if anything, the (+/-)0.75 C uncertainty I calculated just from Keigwin’s point scatter is extremely generous.

    5) The experimental error behaves as systematic, not random. Figures 2-5 and Table 1 report the errors in calibration experiments. They are not subject to unknowns of salinity, light, or temperature.

The uncertainties I calculated for Keigwin come from his method, not from unknown marine variables. I’ve explained this to you now at least twice, e.g., here, and in 7) and 8) here, and you still repeat the mistake. Your entire argument about Keigwin is based in a persistent misconception.

6) Assessing temperature differences does not remove any uncertainty due to systematic error. Systematic error is removed by differencing only if it is constant and of known magnitude. Neither of those conditions is true in dO-18 proxy temperatures, most especially in those representing paleo-temperatures.

    Feynman’s quote is a double-pointed spear, Frank. It points at you as much as me. Speaking for myself, I’ve had no trouble objectively defending my analysis.
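The five-line bookkeeping Pat lays out in (2) can be put into a short sketch. The slope, intercept, and uncertainty values below are illustrative assumptions, not numbers from any calibration in the thread:

```python
import math

def five_line_temperature(x, M, m, B, b):
    """Pat's construction above: the mean line T = M*x + B plus the four
    lines bounding the slope/intercept uncertainties give five temperatures
    for one dO-18 value x; the spread of the four bound temperatures about
    the mean is summarized as sqrt(sum((t - T)^2) / 3)."""
    T_mean = M * x + B
    bound_T = [(M + sm * m) * x + (B + sb * b)
               for sm in (1, -1) for sb in (1, -1)]
    spread = math.sqrt(sum((t - T_mean) ** 2 for t in bound_T) / 3)
    return T_mean, spread

# Illustrative numbers only:
T, u = five_line_temperature(x=1.0, M=-4.8, m=0.2, B=16.6, b=0.3)
print(f"T = {T:.2f} degC, uncertainty = +/-{u:.2f} degC")
```

As Pat argues, a spread derived this way tracks the calibration's parameter uncertainty, so it behaves as a systematic bound rather than a random error that shrinks with averaging.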

  167. Pat Frank says:

    Monty, you’re obviously hostile to my analysis. It’s clear that if you were able to make a criticism, you’d have done so. However, your posts remain insubstantial and your competence remains undemonstrated.

    In the absence of competence, your argument about peer-review is no more than an argument from authority.

    My field is x-ray absorption spectroscopy applied to elements of biological interest, especially transition metals. Were there a blog critical of the method, it would be no problem to evaluate the criticisms and dispute them, or agree with them, either in some detail.

    This is typical of any scientist encountering criticism in his/her field. However, your posts are empty of any trace of scientific familiarity. You haven’t even shown a familiarity with common error analysis.

    You’ve given us no reason to think you know what you’re talking about.

    As to my analysis, it’s only a couple of weeks old. Who knows, I may decide to write it up formally and submit it somewhere. But I’m in the middle of a more extended air temperature reliability study, and that’s first priority for committed time. The proxy business was a side-light consequent to my conversation with Michael Tobis.

  168. Pat Frank says:

    Monty wrote, “I mentioned I was an academic working in climate science … because Pat wrote ‘You don’t understand a word of it either, Monty, and yet you criticize anyway’. Had he not said this I wouldn’t have felt the need to advertise my qualifications.
    “My basic point still stands. For all his bluster, Pat Frank will not allow expert scientists to review his work.”

    But you claim to be an “expert scientist,” Dr. academic-working-in-climate-science-Monty — complete with “qualifications.” And yet you’ve been unable to produce a single substantive sentence.

    So, what are we to conclude?

    From the evidence we have two choices: either you’re not a climate scientist, or one can be a climate scientist without displaying any competence.

    Any climate scientist can come here and lay on the criticism. I’ve been here consistently, taking up all challenges. Unlike you, who have sniped without end. And then you accuse me of bluster. What a laugh. I can’t allow or disallow anything here. I’ve no control over posting.

    You’re a climate scientist of high standing — we all know that because you’ve said so. So, how about you round up a few of your climate scientist buddies and come back with something relevant to say. I’ll be here, and you’ll all be allowed to post whatever you like within the bounds of Anthony’s posted blog ethics.

    Put up or shut up, Monty.

  169. Pat Frank says:

    Monty wrote, “I’m guessing that the world’s leading experts in isotope chemistry and paleoclimatology probably do know more than Pat Frank about isotope chemistry and paleoclimatology.”

    Except that my post is about error analysis. Irrelevant again, Monty.

  170. Gail Combs says:

    Pat Frank says:
    April 10, 2012 at 10:01 am

    You don’t understand a word of it either, Monty, and yet you criticise anyway. That makes you their opposite — a bombast also with a worthless opinion….
    _______________________________________
    What makes Monty’s fact-free criticism here on WUWT so interesting is that Monty has a PhD in Physics. (I followed the link in his name when he first came onto WUWT.) However, Monty does not use his physics degree in his rebuttals; instead, he seems to be using Alinsky’s Rules for Radicals.

    I have been following WUWT for years. Most people here seem to have a high level of science background, as some of the livelier discussions have shown. The newest crop of trolls seems to be following the USDA’s handbook suggesting staff address farmers at the sixth-grade level. I find that extremely insulting. At work I routinely sat in on critiques of new products where the chemistry and engineering were discussed, and on many occasions I was able to spot problems based on logic rather than on intimate knowledge of the process.

    A scientific paper SHOULD be written so others can follow it. Unfortunately, when writing for peer-reviewed journals, bafflegab pays. Dr. Scott Armstrong even wrote a paper on it.

  171. Frank says:

    Pat: I’m painfully aware that the Feynman quote points both ways. I immediately and clearly acknowledged at one point my gross error in accepting the resolution of the mass spec as the uncertainty in isotope measurements. I didn’t fully understand where all of the numbers came from in your calculated error bars for Keigwin, and I should have acknowledged that your replies did clear that up. (You introduced uncertainty from Shackleton, not Bemis.) However, you haven’t acknowledged that ANY of the points I have made might have any value. Of course, they could all be wrong, so I did ask if I was fooling myself (as Feynman recommends).

    1) When you objected to my initial estimate of an uncertainty of 0.5 degC, I did some research that led me to Lea’s published paper with a value of 0.5 degC. Lea could be wrong, but someone who understands the field does agree with me.

    2) When you continued to insist on your method of adding Shackleton’s calibration error to Keigwin’s experimental error in quadrature, I dug up a Wikipedia reference that confirmed my initial thought that you should have used the uncertainty in the least-squares coefficients obtained by Shackleton. Wikipedia isn’t a great source, but it reduced the chance I might be fooling myself.

    3) My inability to find a better authority than Wikipedia on the error introduced by standard curves eventually reminded me that such error is invariably established experimentally by running control samples – not by error propagation. (Do you have any experience with standard curves? The analytical chemists and other scientists I have worked with always include positive control samples in every experiment.) If I had designed Keigwin’s experimental work, analysis of the “unknown” samples from the sediment core would have been randomly interspersed with multiple control samples grown at several different temperatures 0.5 degC apart and covering the full range of expected temperature. There would be no doubt about whether the experimental technique used with THESE samples reliably resolved temperature CHANGES of 0.5 or 1.0 degC. (Those with experience in this field would have access to control samples from earlier projects.) When I said that Keigwin should know his uncertainty from EXPERIMENTAL control samples, you dismissed this as an “appeal to authority”.

    4) You have intermixed discussion of random and systematic error. You haven’t acknowledged that moving from one calibration line to another is a systematic error, but that no matter which calibration line you are on, a CHANGE of 0.2‰ in isotope ratio always translates to about a 1 degC temperature CHANGE. (The uncertainty inherent in a CHANGE in isotope ratio is determined by adding the standard errors of the individual isotope measurements in quadrature. With equal variances and sample sizes, the uncertainty increases by a factor of 1.4.) Although you have shown that systematic errors are easy to find BETWEEN sites, you have refused to discuss in quantitative terms how much various factors (like salinity, sea water isotope ratio, and light) would need to change OVER TIME to introduce a significant error in Keigwin’s reconstruction.
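
    The parenthetical rule Frank states, that the uncertainty of a CHANGE grows by a factor of sqrt(2) ~ 1.4 when two equally uncertain measurements are differenced, is ordinary quadrature addition, and can be sketched (illustrative values only):

```python
import math

def diff_uncertainty(sigma_a, sigma_b):
    """1-sigma uncertainty of a difference (a - b) between two
    independent measurements: individual sigmas added in quadrature."""
    return math.sqrt(sigma_a ** 2 + sigma_b ** 2)

# With equal per-measurement uncertainties, the uncertainty of the
# difference is sqrt(2) times larger (illustrative sigma, in per mil):
sigma = 0.1
print(diff_uncertainty(sigma, sigma) / sigma)  # -> 1.414...
```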

    The internet seems to be full of climate change skeptics who need to be reminded of Feynman’s saying “the easiest person to fool is yourself”. Perhaps you will recognize that your replies have forced me to test what I previously believed to be correct and that some of my criticisms have survived my review. Whether you are capable of applying Feynman’s quote to your own work remains in doubt as long as you seem to be “stonewalling” constructive criticism.

  172. Pat Frank says:

    Gail, I don’t understand Monty’s approach either. As a physicist he should have objected quantitatively. But then Arthur Smith at Planet3org responded similarly, and he’s a Ph.D. physicist, too.

    Your experience in meetings matches my own. A professional with experience and training can spot logical errors in out-of-specialty process. In fact, often those without any specialist training can spot logical flaws in process, though not errors in the science or the data.

    You’re right about bafflegab, too. Steve McIntyre showed beyond any doubt that Michael Mann engaged in exactly that when writing MBH98/99. His obscurantism in phrasing and methodology were deliberately calculated to impress without informing.

    It happens elsewhere, too. I can recall a graduate student complaining that her advisor told her she’d made their paper “too pedagogical.” By that, he meant she’d been too clear in explaining what they’d done and how they’d done it.

    People, even many scientists, are afraid to ask for clarifications for fear of revealing ignorance (or stupidity). The ambitious, the egotistical and, unfortunately, the dishonest exploit that fear to their benefit.

  173. Pat Frank says:

    Frank, first off, the Feynman quote came at the end of your long set of criticisms. It carried the implicit meaning of being directed at me.
    Second, I’m sorry to observe that you haven’t made any substantive points.

    Frank, you strike me as a good guy. You’ve been unfailingly polite and have come across as sincere and honest, and doing your best. I’ve appreciated that.

    But try as you might, it was quickly clear that you started your criticism without knowing anything about mass spectrometry, without knowing anything about how the dO-18 proxy works, without knowing anything about measurement error or the difference between accuracy and precision, and without knowing anything about how to propagate error. But that didn’t stop you from sailing in.
    On the other hand, you have tried to provide substance while knowing nothing, while Monty has provided nothing while claiming everything. So, you get an “A” for effort, in any case, even if not for content.

    Following your numbering:

    1) Lea doesn’t agree with you. You agree with Lea, which is an entirely different matter.

    Look at Lea, Table 1. The estimated 0.5 C SE is relevant “when d18O-sw is known,” that is, when the dO-18 content of sea water is known. However, sea water paleo-dO-18 is not known. The 0.5 C SE is irrelevant for dO-18 paleo-temperature reconstructions.

    Second, none of the Table 1 estimated SE’s are referenced to a study. We don’t know where they came from, or how those estimates were derived. They are apparently Lea’s own ball-park estimates.

    In discussing proxy calibrations and their error, Lea cites McCrae, 1950; Epstein, 1953; Shackleton, 1974; Kim & O’Neil, 1997; Bemis, 1998; and Zhou & Zheng, 2003.

    I’ve already discussed the measurement error in McCrae in Figures 2 & 3, in Epstein, 1953 in Figure 5, and in Kim & O’Neil, 1997 in Figure 4. Those results and others, including Shackleton and Bemis, 1998, are summarized in Table 4. Where they could be examined, they all exhibit basic systematic measurement error ranging from 0.6 – 2.2 C; i.e., uniformly more than allowed by Lea.
    But to respond even more fully to your concerns, I’ve now looked at Kim & O’Neil, 1997 here: http://i41.tinypic.com/r8x99y.jpg and Zhou & Zheng, 2003 here: http://i43.tinypic.com/ionhg7.jpg

    Both data sets show unmistakeable evidence of systematic measurement error. The Kim & O’Neil, 1997 1-sigma is (+/-)4 C, and Zhou & Zheng, 2003 an incredible (+/-)180 C.

    These analyses now cover all of Lea’s principal citations. None of the calibrations are good to (+/-)0.5 C, and many of them are far poorer.

    2) It’s as though, once again, you didn’t read the essay. I used Shackleton’s 1969 precision mass spec experiment to calculate a lower limit of measurement error in his method. That lower limit stands.

    Shackleton published in a French journal that I can’t access. We, including you, don’t even know whether he published the LSQ uncertainties. So you could never have seen them to suggest using them.

    The LSQ uncertainties will produce an experimental uncertainty similar to the standard deviation of the experimental points. You achieve nothing by that route.

    Keigwin’s experiment is independent of Shackleton’s. Propagating Shackleton’s experimental error into Keigwin’s uncertainty is no more than standard propagation of experimental error. You have no point here.

    3) All of the experiments I evaluated above, apart from Keigwin’s, are calibrations of the method, what you call “positive controls.” How is it you still don’t know that?

    I do analytical work regularly.

    You wrote, “When I said that Keigwin should know his uncertainty from EXPERIMENTAL control samples, you dismissed this as an ‘appeal to authority’.” None of my posts here, to you or to anyone else, contain the phrase “appeal to authority.” kim2000 used that phrase in a reply to Monty.

    Looking through your posts, you mentioned Keigwin’s experiment in 8), here. Item 8) in my reply was on point and didn’t mention anything about appeals to authority in any form.

    Here, I wrote that, “Your entire argument about Keigwin is based in a persistent misconception,” which is true.
    Under “2)” in that post, I wrote that you were making an argument from authority when you wrote, “When they say the temperatures reconstructed for positive controls are typically good to 0.5 degC, there is no point in arguing.” That had to do with Lea, not Keigwin, and was indeed an argument from authority.

    It appears you’re conflating one exchange with another. Combinations of disparate conversations are mistakes, not history.
    Science (except lately in the UK) is “nullius in verba,” remember? The point of arguing is made when an assertion is demonstrated not true. The analyses presented here falsify the assertion of an average (+/-)0.5 C dO-18 proxy uncertainty.
    4) You wrote, “You have intermixed discussion of random and systematic error.” No, my discussion has invariably been about systematic error. The non-Gaussian histograms of the experimental residuals justify a conclusion of systematic measurement error. That has been the case throughout.

    You wrote, “no matter which calibration line you are on, a CHANGE of 0.2‰ in isotope ratio always translates to about a 1 degC temperature CHANGE.”

    Frank, experimenters do not use those calibration lines to obtain temperature changes. They do not calculate a temperature from one part of the line, a second temperature from another part of the line, and then subtract them.
    The proxy experiment works like this: an experimenter calculates the dO-18 in, say, a foraminiferal sediment. S/He assumes the standard sea water salinity and dO-18 for that locale has persisted over the intervening time, unless another proxy is used to make a correction.

    The dO-18 ratio in the calcareous fossil is ratioed to the standard sea water dO-18 assumed to mirror the sea water of that past time. A calibration curve is chosen — Bemis, 1998, Epstein, 1953, or whichever one is deemed appropriate. A sea surface temperature (SST) is calculated using the standard curve. If it’s a tropical locale, the paleo SST will be somewhere around 30(+/-)1 C, with that (+/-)1 C due to systematic measurement error only. Any errors that covary with salinity are not included, because standard salinity was assumed.

    To get a temperature change since that time, that paleo 30 C must be subtracted from the recent SST in that locale. Suppose the local modern SST is obtained from floating buoys, and is 30.5 C. The uncertainty in buoy temperatures has been estimated, for example, by W. J. Emery, et al., (2001) “Accuracy of in situ sea surface temperatures used to calibrate infrared satellite measurements,” JGR 106(C2), 2387-2405. That error is about (+/-)0.5 C.

    The difference SST is 30.5-30 = 0.5 C. The uncertainty in that 0.5 C difference is the uncertainties in the proxy paleo-SST and the modern SST in quadrature, and 1-sigma = sqrt(1^2+0.5^2)=1.1 C. So, you’d have to report your temperature difference as 0.5(+/-)1.1 C.
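
    The arithmetic of that worked example can be checked with a short sketch. This is only an illustration of the standard quadrature rule for combining independent uncertainties, using the numbers given above:

```python
import math

def quadrature(*sigmas):
    # Combine independent 1-sigma uncertainties in quadrature.
    return math.sqrt(sum(s ** 2 for s in sigmas))

# Numbers from the worked example above:
paleo_sst, sigma_paleo = 30.0, 1.0    # proxy paleo-SST, systematic measurement error
modern_sst, sigma_modern = 30.5, 0.5  # buoy SST and its estimated uncertainty

delta_t = modern_sst - paleo_sst
sigma_delta = quadrature(sigma_paleo, sigma_modern)
print(f"{delta_t:.1f} +/- {sigma_delta:.1f} C")  # -> 0.5 +/- 1.1 C
```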

    How significant is the difference temperature?

    If two paleo-temperatures are subtracted to get a difference, say Keigwin’s data at -1ky and -2ky, the propagation of uncertainty following a subtraction applies. In Keigwin’s case, that would be 1-sigma = sqrt(2×0.75^2)=1.1 C.

    The (+/-)0.75 C uncertainty in each temperature does not subtract away because the error is systematic. That 0.75 C is only an average, and the magnitude of the systematic error in each of the two temperatures is not known.

    Look again at the replicate measurements in the two jpegs I linked. Notice the replicate points are vertically displaced, even though they represent measurements of the same quantity. Each point has its own magnitude of systematic error. Subtracting two individual points can actually decrease or increase the error in the difference, and we can never know which of those two results occurred in any given case because we don’t know the true answer. The only way out is to be conservative about uncertainty and propagate the average error and report that.

    The same logic applies to a difference of averages.

    You wrote, “Although you have shown that systematic errors are easy to find BETWEEN sites…”. Nothing I’ve written here has ever dealt with systematic error between sites. Everything I’ve done here has concerned methodological uncertainty due to systematic measurement error within dO-18 calibration experiments.

    After all this time, all this conversation, and after my repeated explanations, I don’t understand how such a fundamental misunderstanding is possible.

    You wrote, “you have refused to discuss in quantitative terms how much various factors (like salinity, sea water isotope ratio, and light) would need to change OVER TIME to introduce a significant error in Keigwin’s reconstruction.”

    First, in item 5) here, I pointed out to you that, “The uncertainties I calculated for Keigwin come from his method, not from unknown marine variables,” and provided you with two links to prior posts where I had made the same point. You’ve now made that same error four times running.

    I haven’t refused to do anything except be distracted. Apart from discussing what I have in fact done, I have tried to repair your continual misperceptions about what I’ve done.

    Second, in paragraph 5 under item 4) in my immediately prior response, I did in fact estimate the effect of salinity, using Keigwin’s own description of the statistical covariance of salinity with dO-18 in recent Sargasso Sea waters. That covariance was ~30%, and introduced a 7 C uncertainty in his paleotemperature reconstruction.

    Finally, you wrote, “…some of my criticisms have survived my review. Whether you are capable of applying Feynman’s quote to your own work remains in doubt as long as you seem to be ‘stonewalling’ constructive criticism.”

    Your criticisms have been neither valid nor constructive. They’ve either been wrong outright or based in misreadings or factual misunderstandings. It’s not “stonewalling” to point that out, or to demonstrate your apparently inevitable errors.

    You have pushed on with your criticisms no matter that you don’t know mass spectrometry, that you don’t understand the dO-18 proxy method, that you have no understanding of measurement error or its significance or how to propagate error or how such error impacts the significance of a result. Was that wise?

    But you’ve tried hard, for which I again give you credit.

  174. Pat Frank says:

    Shoot — I just noticed that I mistakenly included the Zhou & Zheng, 2003 error histogram in the Kim & O’Neil, 1997 analysis plot. Here’s the corrected Kim & O’Neil, 1997 analysis Figure: http://i42.tinypic.com/x3thlc.jpg

    The fit to the histogram of point scatter shows at least two error modes in the data.

  175. Frank says:

    Pat: Thanks for your kind reply. I do appreciate the ad hominem remarks; they make it more rewarding to finally expose the facts. The science underlying the correlation between O18 and temperature is relatively simple: in Figure 4, the linear plot of the natural logarithm of the isotope ratio vs. 1/T comes from applying the Arrhenius equation to kinetic isotope effects, and the slope is the difference in activation energy for the isotopes divided by R. The biological process would probably also show a linear relationship if plotted this way. Analyzing the uncertainty associated with the calibration curves used by this method is more involved. The approach in your post exaggerates uncertainty compared to what follows below.

    I previously suggested that the uncertainty arising from standard curves is normally calculated from the uncertainty in the parameters of the least-squares fit. A presentation showing how this is done for a linear fit can be seen at: http://ull.chemistry.uakron.edu/chemometrics/07_Calibration.pdf

    The key section is located on slide 20: “Sensitivity: Smallest CHANGE in amount we can see with a known level of confidence.” We want to know the sensitivity of O18:T dating, which is shown graphically in Slide 21. Lea’s figure of 0.5 degC may be the sensitivity derived from Shackleton’s calibration curve.

    The Shackleton calibration curve can be used to convert isotope data into temperature data for multiple specimens obtained from a layer of a sediment core that covers some period of time, presumably many decades. The resulting temperature data will have a mean and a sample standard deviation that reflect: a) annual (and possibly decadal) variation in temperature at the site; b) annual (and possibly decadal) variation in salinity, seawater O18, light, and other factors that might perturb the relationship between temperature and O18; and c) experimental variability when O18 is measured. The uncertainty contributed by all of these factors to the standard error of the mean temperature diminishes with the square root of the number of samples analyzed, but the overall uncertainty for the method can never drop below the sensitivity of the calibration. Therefore:

    a) If site variability was relatively high and/or Keigwin analyzed a relatively small number of samples from a layer, the standard error of his mean temperature for that period might be greater than the sensitivity of the calibration method. The observed SE should be reported as the uncertainty. Note that this ALREADY includes the experimental variability associated with measuring O18.

    b) If site variability was lower and/or Keigwin analyzed a larger number of samples, the standard error of his mean temperature might be less than the sensitivity of the calibration method. Under these circumstances, the sensitivity of the calibration, not the standard error of the mean, should be reported as the uncertainty. (If Keigwin analyzed 100 control samples, the standard error of the mean control isotope ratio would be very low, but no mean isotope ratio can be converted into temperature more accurately than the regression boundaries of the calibration curve permit, i.e., than the sensitivity of the method.)

    c) Shackleton’s and Keigwin’s uncertainties are never added in quadrature. The sensitivity calculated from Shackleton’s calibration merely provides a lower limit to uncertainty that Keigwin can’t overcome by analyzing more samples.
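
    Frank's reporting rule in points a) through c) can be sketched in a few lines. The numbers below are made up for illustration and are not drawn from Keigwin's data:

```python
import math

def reported_uncertainty(sample_sd, n_samples, calib_sensitivity):
    """Sketch of the rule proposed above: the standard error of the
    mean shrinks as 1/sqrt(N), but the reported uncertainty is floored
    at the sensitivity of the calibration curve."""
    sem = sample_sd / math.sqrt(n_samples)
    return max(sem, calib_sensitivity)

# Case a): high site scatter, few samples -> the SEM dominates.
print(reported_uncertainty(2.0, 4, 0.5))    # -> 1.0
# Case b): many samples -> the calibration sensitivity is the floor.
print(reported_uncertainty(2.0, 100, 0.5))  # -> 0.5
```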

  176. tomwys says:

    Any chance of resurrecting the links embedded HERE:

    “We’ll see that the proxy studies below improperly mix these categories. They convert true statistics into false science.

    To spice up the point, HERE are some fine examples of spurious correlations, and HERE are the winners of the 1998 Purdue University spurious correlations contest, including correlations between ice cream…”

  177. Pat Frank says:

    Tomwys, sorry for the trouble. I posted a set of fixed links in the comment section here

    Frank, I have never once directed any ad hominem against you in our conversation. Pointing out that you began your criticisms not knowing anything about mass spec or how the dO-18 proxy works, etc., were not ad hominem statements. They assessed your expertise, not you personally, and they were factual.

    It’ll take a while to answer your points; other duties call. But I’ll get to them.

  178. Frank says:

    Pat: After 10 days of [stunned?] silence regarding the presentation showing how to do uncertainty analysis when using a standard curve, it seems obvious that this methodology was unknown to you. Your ignorance of my scientific expertise is even worse: I used a mass spec several times a week, if not several times a day, for decades and am aware of why one should (or shouldn’t) get a linear plot for the logarithm of isotope ratio vs 1/T (Figure 4a).

    I certainly didn’t understand all of the practical aspects of O18 dating when we began this conversation, but the green and brown lines in Figure 6 immediately suggested that you were confusing random and systematic errors when translating uncertainty in isotope ratio into uncertainty in temperature. Furthermore, I was taught to show error bars of one SEM so that your audience’s eye would immediately be drawn to data with differences that were likely to be statistically significant. (See Figures 5 and 6 at http://jcb.rupress.org/content/177/1/7.full) With the 95% CIs you showed, error bars overlap until p = 0.01, an absurd requirement in a scientific discipline that absurdly considers p<0.33 "likely" and p<0.05 "virtually certain". If the editor of the Journal of Cell Biology thinks his readers need an article on interpreting error bars, I suspect the readers of WUWT may also.

  179. Pat Frank says:

    Frank, in your paragraph 3, you drew attention to the wrong slide. The relevant comparisons are those following slide 30, the assessments of residuals.

    These later slides describe random and non-random residuals. This distinction, not sensitivity, is relevant to the post analysis.

    Post Figures 3-5 show the fit residuals are non-random. That implies systematic error. Systematic error is also present in the data of Kim & O’Neil and Zhou & Zheng, as later posted here and here.

    Regarding your suggestion about calculating total uncertainty using the uncertainty in fitted parameters, I’ve already dealt with that in item 2, here. You gain nothing by it. Fitted e.s.d.s are, in any case, one step removed from a more straightforward error analysis using the data scatter itself.

    Lea doesn’t say from where he gets his estimates. Your speculation sheds no light.

    The Shackleton calibration curve can be used to calculate temperatures for any dO-18 ratios that fall within its data bounds. “Decades” has nothing to do with it.

    I truly regret having to observe this, Frank, but every time you sally forth into some area you demonstrate a lack of understanding.

    You wrote, concerning all those systematic effects, “The uncertainty contributed by all of these factors to the standard error of the mean temperature diminishes with the square root of the number of samples analyzed.” No, it does not.

    Only random error diminishes as 1/sqrtN, Frank. Systematic error does not. Here’s a reasonable Wiki discussion. When systematic error varies with time, locale, and/or experimenter, its magnitude is unknown and unknowable. The only way to get a measure of it is to run a known standard through the method, measure the systematic error, and report that error as the minimum uncertainty in any result. And even that assumes the systematic effects of time and location are nil.
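
    The distinction drawn here, that averaging beats down random error but not a fixed systematic offset, is easy to demonstrate by simulation. All values below are arbitrary and purely illustrative:

```python
import random

random.seed(0)

def mean_abs_error(n, sigma_random, systematic_offset, trials=2000):
    """Average absolute error of an n-sample mean when each measurement
    carries Gaussian random error plus a fixed systematic offset."""
    total = 0.0
    for _ in range(trials):
        m = sum(random.gauss(0.0, sigma_random) + systematic_offset
                for _ in range(n)) / n
        total += abs(m)
    return total / trials

# Random error alone shrinks roughly as 1/sqrt(N)...
print(mean_abs_error(1, 1.0, 0.0))    # roughly 0.8
print(mean_abs_error(100, 1.0, 0.0))  # roughly 0.08
# ...but a 0.5-unit systematic offset does not average away:
print(mean_abs_error(100, 1.0, 0.5))  # roughly 0.5
```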

    Your a-c: My uncertainty estimate for Keigwin’s chart has nothing to do with “site variability.” It has strictly to do with measurement error. You’ve now expressed that same mistaken view five times running; the first four were already pointed out here.

    You wrote, “If Keigwin analyzed 100 control samples, the standard error of the mean control isotope ratio would be very low…” That would be true only if Keigwin’s measurement errors were randomly distributed. Do you know that they are so distributed?

    All the measurements I’ve investigated show evidence of systematic error. Shackleton mentioned them as plaguing even his high-precision results. You’ll have to demonstrate that Keigwin’s measurements include only random error before being justified in deploying the statistics of random error.

    Shackleton’s and Keigwin’s respective measurement errors are independent. They each contribute independently to the total uncertainty in Keigwin’s final result. Such errors should always be added in quadrature.

  180. Pat Frank says:

    Frank, I’m distracted by work and will be more so as May progresses. These responses take time, and after today I probably won’t have that time until at least mid-June.

    What seems obvious to you is worthless without demonstration. So far, you have demonstrated a repeated tendency to be incorrect.

    You wrote, “I used a mass spec several times a week, if not several times a day, for decades…”

    Right. That’s how you knew that, “The absolute size of peaks in a mass spectrum is irrelevant; only ratios are reported.”

    That claim was demonstrated as incorrect here, showing that the output of a mass spectrometer is absolute peak height; but the tinypic has been removed, unfortunately. A more permanent demonstration picture is here.

    And your decades-long knowledge of mass spectrometry led you to write that, “Stable isotopes are much more accurate describing temperature change, rather than absolute temperature. Most publications plot stable isotope ratios, rather than derived temperatures, on the vertical axis for precisely this reason,” which is also wrong.

    Whereas in fact, stable isotope ratios for climatology are used to reconstruct temperature, not temperature differences. The temperature of the past is basically calculated as T_past = [(O-18_ratio)_past]/[(O-18_ratio)_present] times T_present plus a constant, where the ratio is vs. standard sea water. Frank, you aren’t fooling anyone.
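
    The schematic form just given can be written out directly. This is only the schematic from the comment above, with purely illustrative numbers; published calibrations (e.g., Epstein, 1953; Bemis, 1998) use fitted polynomial coefficients rather than a bare slope and constant:

```python
def t_past(ratio_past, ratio_present, t_present, constant):
    """Schematic dO-18 paleotemperature:
    T_past = (past ratio / present ratio) * T_present + constant.
    The result is an absolute temperature, not a temperature difference."""
    return (ratio_past / ratio_present) * t_present + constant

# Purely illustrative values, not from any published calibration:
print(round(t_past(1.002, 1.000, 28.0, 1.0), 3))  # -> 29.056
```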

    95% error bars are equivalent to p<0.05, Frank. A statistical p<0.01 corresponds to about 2.6 SDs, or 99% confidence (of including the correct value). You’re once again mistaken.

  181. Frank says:

    Your comments concerning mass spec in your April 29 reply are wrong:

    1) The vertical scale on a mass spectrum originally was a measure of the ion current carried from the source to the collector by ions of analyte. Since different molecules ionize with very different efficiencies, ion current (or the output from newer detectors) is normally not reported quantitatively. For that reason, the vertical axis of a mass spectrum is traditionally labeled “relative abundance” or “arbitrary units”. The biggest signal is called the base peak and assigned a size of 100%; all other peaks are listed as percentages of that base peak, i.e., as RATIOS. Different isotopes of the same molecule do ionize with the same efficiency; their difference is in the nuclei, not the electron orbitals involved in ionization. However, when too much sample is introduced in an attempt to strengthen the signal of minor isotopes, isotope ratios can be distorted.

    2) The permanent link to the output of a mass spec you provided does NOT show a real mass spectrum. The output on this page characterizes a beam of oxygen ions probably intended to ionize samples during secondary ion mass spectrometry. Based on your faulty information, it’s beginning to look as if I might know more about mass spec than you do. It takes Mannian guts to question the knowledge of someone who claims to have routinely used mass spec for decades when you make silly mistakes like these. Stick with unsubstantiated insults; they are safer.

    As for error bars, you didn’t bother to read the reference I provided (or weren’t capable of understanding it). Error bars display descriptive statistics, but the paper was concerned with drawing statistical inferences about the DIFFERENCE between two or more data points from the appearance of their error bars. In Figures 5 and 6, the authors have illustrated how the error bars will appear (overlapping or not) when a t test shows the significance of the DIFFERENCE in means is borderline (p = 0.05). One can look at the error bars for a reconstruction and visually estimate whether or not any difference in temperature (for example, between the MWP and the LIA) is likely to be statistically significant. It is easy to spot insignificant differences when the error bar displays one SEM; the error bars touch or overlap. Error bars showing a 95% CI can overlap and still represent meaningful differences in means. Of course, the significance of any difference should be confirmed by a t test, but there is no easy way to display the results for all possible differences.

    Unfortunately, you don’t seem to realize that science is mostly about CHANGE and DIFFERENCE: Is the recent rise in temperature bigger than natural variation? Is the treatment group different from the control? Is the difference between theory and observation significant enough to invalidate the theory? No one gives a #$*!%* whether the mean annual temperature in the Sargasso Sea was 21.2+/-0.7 or 22.6+/-1.6 degC during the LIA; we are interested in knowing the magnitude of natural variation and how much of the reconstructed variation might be attributable to random experimental error. You are capable of addressing issues more complicated than the mean temperature and its confidence interval, aren’t you?

    (Figure 3 in your post suggests the answer to this question is yes. However, you should have performed a test to reject the null hypothesis that the data are consistent with a single Gaussian distribution before making claims of systematic error.)
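    [Editor’s note: one common choice for such a test is the Jarque-Bera statistic, which rejects the single-Gaussian null when sample skewness and kurtosis stray too far from Gaussian values. The sketch below uses illustrative synthetic data, not the data behind Figure 3.]

```python
import math
import random

def jarque_bera(xs):
    """Jarque-Bera statistic for the null that the data come from one Gaussian.
    Values above ~5.99 reject normality at roughly the 5% level (chi-squared, 2 df)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

random.seed(0)
gaussian = [random.gauss(0, 1) for _ in range(2000)]
bimodal = [random.gauss(-3, 1) for _ in range(1000)] + [random.gauss(3, 1) for _ in range(1000)]
print(jarque_bera(gaussian) < 5.99)  # a single-Gaussian sample: usually not rejected
print(jarque_bera(bimodal) > 5.99)   # a two-Gaussian mixture: rejected
```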

    You wrote: “Whereas in fact, stable isotope ratios for climatology are used to reconstruct temperature, not temperature differences. The temperature of the past is basically calculated as T_past = [(O-18_ratio)_past]/[(O-18_ratio)_present] times T_present plus a constant, where the ratio is vs standard sea water. Frank, you aren’t fooling anyone.”

    One can define a T_past_1 and a T_past_2 and calculate their difference:

    [(O-18_ratio)_past_2 - (O-18_ratio)_past_1]/[(O-18_ratio)_present] times T_present

    AND eliminate the uncertainty inherent in that inconvenient constant. Figure 6 vividly demonstrates the wide range of values this constant can have in different situations. I’ve made this point before and refuse to accept the unnecessary inflation in uncertainty associated with calculating temperature differences from absolute temperatures. If I add 1.0 mg of sample to a 15.4531 g vial on an analytical balance, the uncertainty in that 1.0 mg doesn’t depend on the uncertainty in the actual weight of the vial. It depends on the sensitivity of the balance to an additional 1 mg when loaded with 15 g.
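    [Editor’s note: the algebra behind this cancellation can be checked with a short sketch. The numbers are illustrative, with a generic linear calibration T = a + b*d standing in for the proxy equation; the point is only that the constant a never enters the difference.]

```python
# T = a + b*d18O; for a difference, the intercept a cancels:
#   T2 - T1 = b*(d2 - d1)
# so uncertainty in the constant a does not enter the difference.
a, b = 16.9, -4.38     # illustrative linear calibration (Shackleton-like)
d1, d2 = 0.50, -0.25   # two hypothetical d18O values (permil)

T1 = a + b * d1
T2 = a + b * d2
diff_direct = T2 - T1       # computed via absolute temperatures
diff_no_a = b * (d2 - d1)   # intercept never used
print(round(diff_direct, 4) == round(diff_no_a, 4))  # True
```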

    If you search google images for “core” and “O18”, you will find hundreds of graphs with delta O18 on the vertical axis instead of a temperature. When uncertainty in local seawater O18 or factors affecting the y-intercept of the calibration curve make it unreasonable to report absolute temperature reconstructions, the changes in O18 provide an estimate of changes in temperature (roughly 4.8 degC per 1 %o for O18 in calcium carbonate).

    Finally, please acknowledge that I provided a reference showing that at least one analytical chemistry professor teaches students to calculate the uncertainty in results obtained from a standard curve using the uncertainty in the parameters from the least-squares fit to the calibration data. This procedure uses regression bands calculated for the desired degree of confidence to determine “sensitivity”: the minimum CHANGE that can be reliably detected. When you have time to study this unfamiliar method, you can explain why it is or is not appropriate for O18 dating.
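    [Editor’s note: one way to implement the idea in that reference (computing assay uncertainty from the uncertainties in the fitted parameters) can be sketched in stdlib Python. The calibration numbers below are hypothetical, and the final propagation step ignores the slope/intercept covariance for simplicity, so this is a sketch of the approach, not the professor’s exact procedure.]

```python
import math

def linfit_with_errors(xs, ys):
    """Ordinary least squares for y = a + b*x, returning the parameters and
    their standard errors estimated from the scatter of the residuals."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * sxx - sx * sx
    b = (n * sxy - sx * sy) / d
    a = (sy - b * sx) / n
    # residual variance, with n - 2 degrees of freedom
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    se_b = math.sqrt(n * s2 / d)    # standard error of the slope
    se_a = math.sqrt(s2 * sxx / d)  # standard error of the intercept
    return a, b, se_a, se_b

# Hypothetical calibration: temperature (degC) vs dO-18 (permil), with scatter
temps = [5.0, 10.0, 15.0, 20.0, 25.0]
d18o = [2.75, 1.58, 0.48, -0.72, -1.80]
a, b, se_a, se_b = linfit_with_errors(d18o, temps)

# Temperature of an unknown, with a first-order uncertainty combining the
# fit-parameter errors and the unknown's own measurement error (covariance ignored)
d_new, sig_d = 0.50, 0.14
T_new = a + b * d_new
T_sigma = math.sqrt(se_a ** 2 + (d_new * se_b) ** 2 + (b * sig_d) ** 2)
```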

    I don’t know how Shackleton, Keigwin or Lea actually assess uncertainty, but this method doesn’t require Shackleton’s calibration uncertainty to be added in quadrature to Keigwin’s. Nor does it require the resulting uncertainty in temperature to be added in quadrature again when considering temperature change/variability. Unless you can demonstrate why sensitivity is an inappropriate measure of uncertainty, Lea’s figure of +/-0.5 degC is more sensible than yours.

    I should have listened when Steve Mosher warned another commenter above not to waste time pointing out possible problems with your post. So, I won’t waste more time responding to further comments about my alleged ignorance or putative mistakes. However, I am seriously interested in the proper treatment of uncertainty in this situation and whether a sensitivity derived from the calibration curve is the best answer.

  182. Pat Frank says:

    Frank, where to start? Maybe with your “2),” because it will lead to your “1).”

    The legend to the mass spectrum here says, “Mass spectrum of the Hyperion ion source operating with oxygen.” I.e., it’s a real mass spectrum of the oxygen ions produced by an oxygen ion plasma source used, for example, to micromill surfaces. Anyone with even the most basic understanding of mass spectrometry would have recognized that.

    Notice that the ordinate is “counts.” That’s detector counts, representing absolute intensity. Detector counts, typically registered as ion current, are what all mass spectrometers measure. That does not change if someone later rescales the spectrum, setting the most intense peak to 1.00 relative height.

    Later processing to produce peak ratios does not mean that mass spectra themselves consist of peak ratios. You’re completely mixed up about what spectrometers detect (ion current) and how people process spectra afterwards (what is “reported”).

    The text of your “1)” has the stilted language of a formal presentation, suggestive that it was largely copied from elsewhere.

    You may, “[claim] to have routinely used mass spec for decades” but your extemporaneous comments here provide no evidence that you understand anything important about mass spectrometry. As I’m responding to your statements here, and not your purported experience, there’s no difficulty in sustaining the point.

    You wrote, “Stick with unsubstantiated insults; they are safer.” Let’s see you quote any post of mine in which I have written an insult. I claim you won’t be able to do it, and therefore that your statement itself is an unsubstantiated canard.

    Your whole paragraph starting with, “As for error bars,…” merely shows that you still don’t understand the difference between systematic and random errors. Further, for the umpteenth time, the errors I derived have nothing to do with differences between values.

    When error is systematic, the mean value is not the most probable value. The mean is merely one among all the possible values between the systematic uncertainty limits. The unknown true value may not lie near the mean value. Statistical t-tests no longer make physically relevant comparisons between sets of means.

    You wrote, “Unfortunately, you don’t seem to realize that science is mostly about CHANGE and DIFFERENCE.” Science is about replicable fact in a context of falsifiable theory. Energy flux defines a gradient, which one supposes may be what you mean by change and difference.

    You may not care whether systematic error is (+/-)0.7 C or (+/-)1.6 C, but any scientist would care about that difference. My interest was to explore the accuracy of proxy temperature methods. Both Kaustubh Thirumalai and Kevin Anchukaitis focused on dO-18 as their sole defense of proxy climatology, even though the field relies largely on tree rings.

    As the dO-18 proxy is truly based in physics (in contrast to tree-ring thermometry) and is acknowledged as the most well-developed and most accurate of all physically valid proxy methods, I decided to examine the errors in that method.

    The measurement errors represent the lower limit of accuracy in the dO-18 method itself. In turn, since dO-18 proxies provide the most accurate proxy temperature reconstructions, their lower limit of accuracy sets the lower limit of resolution in the entire field of proxy thermometry.

    In light of the envelope of data points, your parenthetical comment about Figure 3 is ludicrous.

    You wrote, “One can define a T_past_1 and a T_past_2 and calculate their difference: [(O-18_ratio)_past_2 - (O-18_ratio)_past_1]/[(O-18_ratio)_present] times T_present AND eliminate the uncertainty inherent in that inconvenient constant.”

    First, the error in the constant (the intercept) is determined solely by the error in the slope. Second, the first order error in the slope is due to measurement error (my concern here). Third, taking a difference does not eliminate systematic error because one does not know where the true value lies in the distribution of error and every single measured value has its own unique bias due to its own unique level of systematic error. Hence the variety of point scatter.

    Look at Figures 2, 4, & 5: the residual scatter is not constant. The error is not constant. Error is not a constant offset in each point. Subtracting two data points does not eliminate the error inherent in them. The difference may even have a larger error than the points themselves if the systematic errors have the same sign.

    Regarding your paragraph starting, “Finally, please acknowledge…,” as already noted, I dealt with that issue in item 2, paragraphs 3-5 here. Calculating an overall uncertainty using regression uncertainties gets you nothing; regression uncertainties are one step removed from the more fundamental uncertainty calculated using the measurement errors in the data set itself.

    You wrote, “I don’t know how Shakelton, Keigwin or Lea actually assess uncertainty…,” but you nevertheless do know that your unknown method doesn’t require adding errors in quadrature and that your inference is “more sensible” than my calculation. And you meant that to be convincing.

    Here’s Shackleton’s equation: T = 16.9-4.38*(dO-18)+0.1*(dO-18)^2. We know from the Table that the minimal dO-18 error of Shackleton’s precision analytical method is (+/-)0.14%o.

    Put that error into the equation, and one calculates that the minimal uncertainty of any calculated temperature is (+/-)0.61 C, when applying a proxy dO-18 measurement to Shackleton’s line.
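    [Editor’s note: that propagation can be reproduced numerically. A short sketch, using first-order error propagation through Shackleton’s published equation and evaluating near dO-18 = 0, where the linear term dominates:]

```python
# Shackleton's calibration: T = 16.9 - 4.38*d + 0.1*d^2, d = dO-18 (permil)
# First-order error propagation: sigma_T ~ |dT/dd| * sigma_d.
def sigma_T(d, sigma_d):
    slope = -4.38 + 0.2 * d  # derivative of the calibration equation at d
    return abs(slope) * sigma_d

print(round(sigma_T(0.0, 0.14), 2))  # 0.61 (Shackleton's +/-0.14 permil error)
print(round(sigma_T(0.0, 0.10), 2))  # 0.44 (a +/-0.10 permil measurement error)
```

    The same (+/-)0.44 C figure for a (+/-)0.1%o measurement error appears further down in the text.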

    That (+/-)0.61 C is the inherent uncertainty residing in Shackleton’s equation itself. We know from Shackleton’s comments that the error is from uncontrolled systematic causes.

    Let’s see if I can explain this in words. If you, as an expert in mass spectrometry, managed to obtain a dO-18 measurement with zero experimental error, so that you knew the true exact dO-18 value, i.e., measurement uncertainty = (+/-)zero, then plugging that perfect dO-18 value into Shackleton’s equation would yield a proxy temperature with an uncertainty of T(+/-)0.61 C; i.e., the pure uncertainty in Shackleton’s equation.

    But suppose you report that your dO-18 measurement error is (+/-)0.1%o (i.e., Keigwin’s average error). That means the dO-18 value you plug into Shackleton’s equation has its own independent error. Your error is in addition to Shackleton’s error, and it’s of the same magnitude, and it’s also systematic.

    Your (+/-)0.1%o dO-18 measurement error produces an equivalent (+/-)0.44 C uncertainty in temperature when run through Shackleton’s equation.
    But again, Shackleton’s equation has its own separate and independent error of (+/-)0.61 C. Any temperature calculated using Shackleton’s equation has a high uncertainty of +0.61 C and a low uncertainty of -0.61 C, from use of the equation alone.

    The (+/-)0.44 C of your own measurement error is independent of Shackleton’s and additive. The +0.44 C portion of your uncertainty sits on top of (adds to) the +0.61 C portion of Shackleton’s uncertainty. Likewise, the -0.44 C of your measurement error adds to the -0.61 C of Shackleton’s uncertainty.

    More analytically, Shackleton’s equation can be represented with the dO-18 measurement uncertainties left visible: T(+/-)sigma = 16.9-4.38*((+/-)0.14%o)+0.1*((+/-)0.14%o)^2.

    Now we add in your measurements, which I’ll label as capital-DO-18. T(+/-)sigma = 16.9-4.38*[((+/-)0.14%o)+(DO-18(+/-)0.1%o)]+0.1*[((+/-)0.14%o)+(DO-18(+/-)0.1%o)]^2.

    Rearranging: T(+/-)sigma = 16.9-4.38*[DO-18(+/-)0.14%o(+/-)0.1%o]+0.1*[DO-18(+/-)0.14%o(+/-)0.1%o]^2. The errors now combine.

    And so, T(+/-)sigma = 16.9-4.38*[DO-18(+/-)(total%o error)]+0.1*[DO-18(+/-)(total%o error)]^2.

    Here we see explicitly that the dO-18 measurement uncertainty inherent within Shackleton’s equation adds directly to the error in your particular DO-18 measurements and, specifically, combines with your measurement error.

    Errors in added quantities combine in quadrature. See “Addition and Subtraction” here.

    Applying that statistical rule, “total %o error” = sqrt[(Shackleton error)^2+(Keigwin error)^2]. That’s QED, so let’s finish with this.
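    [Editor’s note: the quadrature combination of the two temperature-equivalent uncertainties discussed above works out numerically as follows.]

```python
import math

# Independent, additive errors combine in quadrature:
shackleton_err = 0.61   # degC, uncertainty inherent in the calibration equation
measurement_err = 0.44  # degC, from the new dO-18 measurement's own error
total = math.sqrt(shackleton_err ** 2 + measurement_err ** 2)
print(round(total, 2))  # 0.75
```

    So the combined uncertainty on a single proxy temperature is about (+/-)0.75 C under these two error terms.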

    You wrote, “I am seriously interested in the proper treatment of uncertainty in this situation and whether a sensitivity derived from the calibration curve is the best answer.”

    Page 4 in your calibration presentation shows “sensitivity” to be the same as measurement accuracy when the uncertainty limits represent systematic error.
    In that light, both my post and everything I’ve written here are about accuracy in the dO-18 calibration tests. That makes “sensitivity” the center of the discussion. And you have unfailingly argued against it. One is led to wonder about the seriousness of your interest.

    Finally, you wrote, “I should have listened when Steve Mosher warned another commenter above not to waste time pointing out possible problems with your post.”

    Steve Mosher claimed that I ‘refuse to engage the argument.’ He had visited the pages at tAV (here and here) and must have known his charge was untrue when he wrote it.

    You also visited at least one of those pages and must have noted my extensive engagement of the argument there.

    You’ve also experienced my extensive engagement with your argument here — no matter that you’ve disagreed with me (though to no end).
    So you knew that Steve Mosher’s criticism was untrue on its face, and you know that it’s untrue here as well. But you’ve repeated it and then applied it. Why aren’t you guilty of a double Mosher?

  183. Frank says:

    Pat:

    Page 4 of the calibration presentation does NOT show sensitivity to be the same as measurement accuracy. Sensitivity is calculated from the separation between the regression bands calculated from the least-squares fit to the calibration data. I anticipate that the sensitivity will improve if the temperature range of the calibration is wider and if more samples are used in calibration, even though the uncertainty in each sample remains unchanged. The sensitivity near the ends of the calibration is poorer than in the middle. This is NOT simply measurement accuracy. A least-squares fit allows one to go beyond the uncertainty of single measurements at one temperature (0.14%o for Shackleton, according to you), particularly when calculating the slope. It is the uncertainty in the slope (dT/dO18) that makes the uncertainty in temperature differences bigger than the uncertainty in individual temperature measurements. When the uncertainty in slope is small, your ability to reliably detect temperature differences should improve.
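    [Editor’s note: the claim that regression bands are narrowest mid-calibration and flare toward the ends can be illustrated with a small sketch. The calibration points below are hypothetical, not Shackleton’s data, and t = 2.0 is used as a rough 95% multiplier.]

```python
import math

def band_halfwidth(xs, ys, x0, t=2.0):
    """Half-width of the ~95% confidence band for the fitted line at x0.
    Grows with (x0 - xbar)^2, so the band is narrowest mid-calibration."""
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    # ordinary least-squares fit
    b = sum((x - xbar) * y for x, y in zip(xs, ys)) / sxx
    a = sum(ys) / n - b * xbar
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    return t * math.sqrt(s2 * (1.0 / n + (x0 - xbar) ** 2 / sxx))

# Hypothetical calibration points with a little scatter about a straight line
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [16.9, 12.6, 8.2, 3.8, -0.6, -4.9, -9.4]
mid = band_halfwidth(xs, ys, 3.0)  # center of the calibration range
end = band_halfwidth(xs, ys, 0.0)  # end of the calibration range
print(end > mid)  # True: the bands flare toward the ends
```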

    Nowhere in the presentation are two uncertainties added in quadrature to produce an overall uncertainty and then added again when one is interested in the difference between two results. You are not doing what this professor teaches.

    Read about the units on the y-axis of a mass spectrum here: http://en.wikipedia.org/wiki/Mass_spectrum
    Except in the case of isotopes, variable ionization prevents development of a useful relationship between the number of analyte molecules entering the mass spec and the number of ions detected at a particular m/z. Sometimes the signal from minor, easily ionized impurities overwhelms the signal from the major component.

    Read about the use of a BEAM of oxygen ions to ionize SAMPLES during secondary ion mass spectroscopy (SIMS) here:

    http://en.wikipedia.org/wiki/Secondary_ion_mass_spectroscopy

    The m/z ratio of the beam is displayed, not a mass spectrum of a typical sample.

    No one runs a mass spectrum under conditions that break roughly half of the oxygen molecules into the oxygen ions which you can see at m/z = 16. If you did that with CO2 samples, you’d get a massive primary isotope effect enriching the O18 signal because the C-O16 bond is weaker than the C-O18 bond. (Sometimes chemists do want to fragment molecular ions to identify subunits, but they certainly wouldn’t use conditions that blow apart a simple oxygen molecule.) I’m certainly being picky about the oxygen beam, but you did say I knew nothing about mass spec.

    You ignored the whole subject of using error bars to draw inferences about the significance of differences. You continue to mix discussion of random/experimental error – which is disclosed via error bars – with systematic error – which can’t easily be quantified. The possibility of systematic error is high. If you want to destroy O18 proxies, reliable estimates of systematic error could do it.

    No, I didn’t plagiarize my comment. I revised it in an attempt to avoid misunderstanding.

    I understood Mosher to be warning me that no matter what I said, what ideas I provided (concerning the obvious importance of uncertainty in temperature CHANGE), what alternatives I might present (a method for using uncertainty from least-squares calibrations), what alternative ways of looking at an issue I might propose (inferences from error bars), or what references I might present (Lea’s 0.5 degC), everything would be rejected as being unambiguously wrong, right down to the originality of my words and my experience with mass spec.

    You could have quickly checked any online source about the use of relative abundance, the base peak, and a beam of oxygen ions in SIMS. You could have acknowledged that 1 sigma error bars can be used to draw inferences about climate change in a manner that you didn’t recognize when you decided to use 2 sigma error bars. 2 sigma wasn’t WRONG, after all. ENGAGEMENT requires an open mind (I don’t mean Tamino’s), not just error-ridden replies indiscriminately saying WRONG, WRONG, WRONG before you’ve really considered anything. From my perspective, your attitude is no different from the Hockey Team responding to McIntyre.

    An open mind would say: “Frank, could you provide a reference showing how analytical chemists use the uncertainty in the least-squares fit to assess uncertainty in assays using a standard curve?” Or: “Frank, I was focused on the uncertainty in absolute temperature, but the uncertainty in temperature change is also important and does become ridiculously large by adding in quadrature. Your method does produce temperature differences with less uncertainty, but I’m skeptical that one can eliminate the uncertainty inherent in the y-intercept by eliminating that term algebraically.” Which brings me back to Feynman and the easiest person to fool.
