CO2 Sample Spacing in Ice Cores

Guest Post by Renee Hannon

Introduction

This post examines sample spacing for CO2 measurements in Antarctic ice cores during the past 800,000 years to better understand whether the gaps between samples are too large to capture centennial fluctuations. The IPCC states:

“Although ice core records present low-pass filtered time series due to gas diffusion and gradual bubble close-off in the snow layer over the ice sheet (Fourteau et al., 2020), the rate of increase since 1850 CE (about 125 ppm increase over about 170 years) is far greater than implied for any 170-year period by ice core records that cover the last 800 ka (very high confidence).”

AR6 Climate Change 2021, Chapter 2 IPCC 2.2.3.2.1.

Figure 1a shows the IPCC’s figure of CO2 fluctuations over the past 800,000 years. This appears as a continuous and evenly sampled line, but it is not. Ice cores are sampled for CO2 at discrete locations along the core. Figure 1b plots the individual CO2 data points showing obvious gaps of up to hundreds of years between samples.

Figure 1. a) IPCC’s figure 2.4a showing CO2 ice core measurements during the past 800,000 years from Chapter 2 in AR6 Climate Change 2021. Data from Bereiter, 2015. b) CO2 data points that comprise the CO2 thick line shown in the IPCC’s figure 2.4a. Marine Isotope Stage (MIS) shown for key interglacial periods.

CO2 Sample Gaps Exceed Instrumental Record

Samples for CO2 measurements are taken along ice cores at distances ranging from 50 cm to over 10 meters apart, as shown in Figure 2. There does not seem to be a standard or routine distance for sample selection, and sampling frequency is probably determined more by study funding and/or research interests. Sample spacing on cores over the Holocene MIS 1 is good and ranges from 20 cm to 3 meters.

There are two cores long enough to cover older sections including MIS 5 and beyond: Vostok and Dome C. Vostok, shown in light orange, is a widely used public CO2 dataset but has the largest spacing between samples, ranging from 2 meters to greater than 18 meters. Dome C sample spacing over the older section from MIS 9 through MIS 19 is very good, at less than 1 meter apart.

Figure 2. CO2 sample spacing in depth (meters) collected from Antarctic ice cores over the past 800,000 years BP. Data from Bereiter, 2015 and additional higher resolution data from a recent study by Nehrbass-Ahles, 2020.

Let's examine samples converted from depth to time, which is critical to identifying centennial CO2 fluctuations and trends. Figure 3 shows sample spacing in years over the past 800,000 years. The 200-year interval is highlighted because it is slightly longer than the modern CO2 rise over the last 170-year period reported by the IPCC. There are only a few periods when ice core CO2 sample spacing is less than 200 years: consistently from 0 to 60,000 years BP, sporadically (12 samples) between 125,000 and 140,000 years BP, and sporadically between 330,000 and 400,000 years BP. Again, note that Vostok CO2 records have the worst temporal sample resolution.

Figure 3. CO2 sample spacing in Antarctic ice cores converted to years from Bereiter, 2015 and Nehrbass-Ahles, 2020. Note y-scale is inverted which corresponds to higher temporal resolution over interglacial (higher) than glacial (lower) periods despite similar sample depths.
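
For readers who want to check these spacing statistics themselves, the short Python sketch below shows the calculation; the file and column names are placeholders, not the actual layout of the Bereiter, 2015 composite.

```python
import numpy as np
import pandas as pd

# Placeholder file/column names; substitute the actual Bereiter (2015) composite.
ages = pd.read_csv("bereiter2015_co2_composite.csv")["age_yr_bp"].to_numpy()
ages = np.sort(ages)

gaps = np.diff(ages)  # years between consecutive CO2 samples
print(f"median spacing: {np.median(gaps):.0f} yr")
print(f"fraction of gaps shorter than 200 yr: {np.mean(gaps < 200):.1%}")
print(f"fraction of gaps longer than 400 yr: {np.mean(gaps > 400):.1%}")

# Share of the full record (by elapsed time) that falls inside gaps longer than 400 yr
span = ages[-1] - ages[0]
print(f"share of record inside >400 yr gaps: {gaps[gaps > 400].sum() / span:.1%}")
```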

Dome C samples in the Monnin study over MIS 1 average 72-year sample increments for CO2, which is reasonably good. Joos, 2008, confirms that sample spacing for CO2 is 100 years or less during the Holocene MIS 1 and around 200 years during the last deglaciation transition. He also states that CO2 sampling intervals are even shorter, 30 to 60 years, for Law Dome and firn records over the past 2,000 years AD.

The most recent study, by Nehrbass-Ahles, averages 300-year sample spacing, or temporal resolution, over MIS 9 to MIS 11. Sample spacing decreases to 100-year increments during MIS 9. The MIS 17 through MIS 19 timeframe has a mean sample spacing of 570 years, and between MIS 11 and MIS 17 it is 731 years (Luthi, 2008; Siegenthaler, 2005). These large gaps in time between samples provide little chance of observing centennial CO2 fluctuations in datasets covering MIS 5 and older.

Nehrbass-Ahles concludes that centennial jumps are a pervasive feature of the natural carbon cycle that go undetected in CO2 ice core records of insufficient temporal resolution. They state that sub-millennial-scale CO2 variability is only available for about the past 60,000 years BP.

Sample Gaps Compound CO2 Firn Smoothing in Ice Cores

Ice core data resolution is suppressed both by temporal sampling and by firn gas diffusion. Sampling resolution is discussed in detail above, but firn gas smoothing is worth discussing as well.

Many authors have documented gas smoothing in the firn layer due to vertical gas diffusion and gradual bubble close-off during the transition from firn to ice (Trudinger, 2002; Joos and Spahni, 2008; Ahn, 2012; Fourteau, 2020; IPCC, 2021). To compensate for cores from sites with varying snow and ice accumulation, a gas age distribution width, or smoothing, is modeled. For example, the high accumulation Law Dome and WAIS cores have average gas age distribution widths of 10-15 years and 30 years, respectively. Low accumulation sites such as Dome C and Vostok contain gas that is averaged or smoothed over hundreds of years. According to Monnin, 2001, Dome C is smoothed by about 200 years in the Holocene, and smoothing increases to 550 years during the Last Glacial Maximum (LGM).
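
To make the effect concrete, here is a toy Python sketch, not any published firn model, that treats the gas age distribution as a simple boxcar average and shows how a hypothetical 100-year, 50 ppm excursion would be attenuated at some of the smoothing widths quoted above.

```python
import numpy as np

def firn_smooth(co2, width_yr):
    """Boxcar stand-in for the gas-age distribution (a simplifying assumption)."""
    w = int(width_yr)
    kernel = np.ones(w) / w
    padded = np.pad(co2, w, mode="edge")
    return np.convolve(padded, kernel, mode="same")[w:-w]

t = np.arange(3000)                       # 3,000 years at annual resolution
atm = np.full(t.shape, 280.0)
atm[1500:1600] += 50.0                    # hypothetical 100-yr, 50 ppm excursion

for width in (30, 200, 550):              # WAIS-like, Dome C Holocene, Dome C LGM
    rec = firn_smooth(atm, width)
    print(f"smoothing width {width:>3} yr: recorded peak anomaly {rec.max() - 280:.1f} ppm")
```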

Fourteau has this to say:

“For carbon dioxide, firn smoothing appears to significantly diminish the recorded rates of change in abrupt CO2 increases, compared to their atmospheric values. The estimations of CO2 rates of change are further altered by the process of discrete measurement, and measured values can be 3 times lower than the actual atmospheric rate of change.”

Fourteau, 2020

Figure 4 summarizes key ice cores that are color coded by firn gas smoothing as well as average CO2 sample spacing over the past 800,000 years.

Figure 4. a) Key ice cores showing cored interval length. Colors indicate firn diffusion CO2 smoothing, where red is 200 to 600 years, orange is 60 to 150 years, and green is 10 to 50 years. Averages assume 10% of the gas-ice age delta. b) 7-point average sample spacing for the CO2 composite, Bereiter, 2015 and Nehrbass-Ahles, 2020. Note 200 years is highlighted. c) Actual CO2 sample data points plotted over time.

Figure 4a shows multiple high-resolution cores in the Holocene MIS 1 and the preceding glacial period. The CO2 diffusion rate in the firn is low and sample spacing is good, 100 years or less. Joos, 2008, states that the sample frequency in the ice core is generally high enough to capture century-scale variations over the past 22,000 years. He also states the 20th century increase in CO2 is more than an order of magnitude faster than any sustained change during the past 22,000 years. And Nehrbass-Ahles, 2020, agrees that sub-millennial-scale CO2 variability is only available for about the past 60,000 years. However, sixty thousand years is less than 10% of the 800,000-year ice core record.

In contrast, there are a limited number of ice cores that cover 100,000 to 800,000 years ago, namely Dome C and Vostok. CO2 is smoothed and averaged over hundreds of years due to firn diffusion at these low accumulation sites. In the best dataset, over MIS 11 to MIS 9, CO2 is smoothed over 200 years and then sampled at a 300-year increment. Both the firn smoothing and the sample spacing are greater than the modern 170-year period reported by the IPCC.

More typically, in older thinning ice, the sample spacing in years is even larger. MIS 17 to MIS 19 sample spacing averages 570 years (Luthi, 2008), and between MIS 11 and MIS 17 sample spacing averages 730 years (Siegenthaler, 2005). Approximately 75% of the 800,000-year ice core record is sampled at intervals greater than 400 years, as shown in Figure 4b. That equates to two samples or less over a 1,000-year ice core interval, barely enough to establish a millennial trend.
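
As a toy illustration of how the smoothing and the coarse sampling compound (my own sketch, not a reconstruction of any particular core), the snippet below pushes a modern-style rise of 125 ppm over 170 years through a 200-year boxcar smoother and then samples the result every 300 years, the situation described above for the best of the older intervals.

```python
import numpy as np

t = np.arange(4000)
atm = np.full(t.shape, 280.0)
ramp = (t >= 2000) & (t < 2170)
atm[ramp] += 125.0 * (t[ramp] - 2000) / 170.0     # 125 ppm rise over 170 years
atm[t >= 2170] += 125.0

kernel = np.ones(200) / 200                        # 200-yr boxcar stand-in for firn smoothing
smoothed = np.convolve(np.pad(atm, 200, mode="edge"), kernel, mode="same")[200:-200]
sampled = smoothed[::300]                          # one sample every 300 years

def max_170yr_rise(series):
    """Largest increase over any 170-year window of an annual series."""
    return np.max(series[170:] - series[:-170])

print(f"atmosphere:             max 170-yr rise = {max_170yr_rise(atm):.0f} ppm")
print(f"after 200-yr smoothing: max 170-yr rise = {max_170yr_rise(smoothed):.0f} ppm")
print(f"after 300-yr sampling:  largest jump between adjacent samples = {np.max(np.diff(sampled)):.0f} ppm")
```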

Sampling Methods May Eliminate the Centennial Signal if Interpreted as Noise

A final note on sampling methods. Typically, four to six samples for CO2 are taken within a 60 to 100 mm core length for repeatability studies between different laboratories and over time (Monnin, 2001). Data points may be rejected due to obvious contamination or fractures. A data point may also be rejected as noise because it has a higher standard deviation than what researchers deem appropriate (Ahn, 2012). The final recorded data point is usually not an actual single measurement of CO2; it represents a mean CO2 value derived from the closely spaced samples, with a sigma of the mean typically less than 1.5 ppm. Thus, individual CO2 measurements can be deemed 'outliers', considered noise, and not used.
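
The replicate averaging and rejection step can be pictured with a short sketch; the replicate values and the 3-sigma threshold are made up for illustration and do not represent any laboratory's actual procedure.

```python
import numpy as np

# Hypothetical replicate CO2 measurements (ppm) from one 60-100 mm core interval.
replicates = np.array([277.8, 278.4, 277.5, 278.1, 295.0, 278.0])

def screen(values, nsigma=3.0):
    """Drop any replicate more than nsigma standard deviations from the mean of the others."""
    kept = []
    for i, v in enumerate(values):
        others = np.delete(values, i)
        if abs(v - others.mean()) <= nsigma * others.std(ddof=1):
            kept.append(v)
    return np.array(kept)

kept = screen(replicates)
rejected = sorted(set(replicates.tolist()) - set(kept.tolist()))
sigma_mean = kept.std(ddof=1) / np.sqrt(len(kept))
print("rejected replicates:", rejected)
print(f"reported data point: {kept.mean():.1f} ppm (sigma of the mean {sigma_mean:.2f} ppm)")
```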

Scientists may also apply additional scrutiny to the final data points, such as the statistical method described by Eggleston:

“If the point lay outside of the sum of twice the standard deviation of a Monte Carlo spline and twice the standard deviation of the point itself, it was identified as a statistical outlier.”

Eggleston, 2016

An obvious question is whether scientists are removing CO2 centennial-scale excursions by rejecting samples as statistical outliers compared to the neighboring samples.
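
For readers who want to see what such a screen looks like in code, below is a rough sketch in the spirit of the Eggleston test quoted above; the bootstrap details, thresholds, and toy data are assumptions, not the published method.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

def mc_spline_outliers(age, co2, co2_sigma, n_boot=200):
    """Flag points lying outside 2*sd(spline) + 2*sd(point), leaving each point out in turn."""
    flagged = []
    for i in range(len(age)):
        others = np.delete(np.arange(len(age)), i)
        preds = []
        for _ in range(n_boot):                                    # Monte Carlo resampling
            idx = np.unique(rng.choice(others, size=len(others)))  # strictly increasing ages
            if len(idx) < 4:
                continue
            preds.append(CubicSpline(age[idx], co2[idx])(age[i]))
        preds = np.asarray(preds)
        if abs(co2[i] - preds.mean()) > 2 * preds.std() + 2 * co2_sigma[i]:
            flagged.append(i)
    return flagged

# Toy usage: a smooth series with one artificial 25 ppm spike planted at index 10.
age = np.arange(0.0, 4000.0, 200.0)
co2 = 280.0 + 5.0 * np.sin(age / 800.0)
co2[10] += 25.0
print("flagged indices:", mc_spline_outliers(age, co2, np.full_like(co2, 1.5)))
```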

Observations

The IPCC statement that:

“the rate of increase since 1850 CE (about 125 ppm increase over about 170 years) is far greater than implied for any 170-year period by ice core records that cover the last 800 ka (very high confidence)”

AR6 Climate Change 2021, Chapter 2 IPCC 2.2.3.2.1, pages 2-17, 2-18.

appears misleading and inconsistent with the limitations of Antarctic ice core records. Based on sample spacing alone, only 10% of the ice core record is sampled at intervals of less than 200 years, and about 75% is sampled at intervals greater than 400 years. There is also the additional component of CO2 smoothing by 200 to 600 years due to firn gas diffusion in ice core records older than 100,000 years.

CO2 records from ice cores are imperfect data. They are muted due to firn diffusion and poor sample frequency. These data should be used only with a “Fit for Purpose” approach. Basically, Antarctic ice cores are useful for evaluating millennial CO2 trends. Their poor resolution, due to firn smoothing, depth-of-burial smoothing, and often sparse sample spacing, will never resolve centennial CO2 fluctuations with certainty, despite the IPCC's claim that it can be done with “very high confidence.”

Paleoclimate global CO2 fluctuations should be evaluated with multiple data sets, not just Antarctic ice cores. For centennial CO2 fluctuations, datasets that capture higher variability need to be incorporated, such as higher resolution Greenland ice cores as well as plant stomata data. To be clear, all datasets with CO2 proxy measurements have their problems and limitations. In higher resolution data, centennial CO2 fluctuations similar in magnitude and rate of increase to the modern rise do exist and have been identified, as described in my previous post here.

Acknowledgements: Special thanks to Donald Ince and Andy May for reviewing and editing this article.

Download the bibliography here.

248 Comments
Clyde Spencer
June 23, 2022 2:35 pm

Fig. 1a shows a disingenuous approach to displaying data. The solid line implies a sampling interval that is too small to be discerned at the scale of the graph. In reality, they are interpolating without explicitly stating that to be the case. Yet another deceptive practice by ‘scientists’ who should know better. I’ll give them the benefit of the doubt and suggest that they were more concerned about the appearance or aesthetics of the graph than they were about conveying the pertinent information. However, that is also a sign of an incompetent who is pretending to be a scientist.

Rud Istvan
June 23, 2022 2:35 pm

Thanks for this post. You cover something concerning natural variability about which I had not previously thought. Generalizing, the more available ‘CO2 control knob’ data is scrutinized, the dodgier it gets.

CoRev
Reply to  Rud Istvan
June 23, 2022 3:20 pm

Dodgier and dodgier.

whiten
Reply to  CoRev
June 24, 2022 8:00 am

The main problem with ice core data is with what that data represents.
The only thing of value in the data and its processing is the high fidelity of the long-term fluctuation signal of atmospheric CO2 in relation to atmospheric thermal variation, and its correlation to long-term temperature variation.

But the actual size of atmospheric CO2 variation is of very, very low fidelity, with no consideration at all given to capturing or representing the actual rate of change beyond and above the heavily smoothed long-term variability.

The main dodgy thing happens to be the claim, or the implication, of high fidelity in the data, for which the data and its processing cannot offer any at all.

Not to mention the long-term temperature data processing, reconstruction, and adjustments, which have very significant biases…
which also further impact (negatively) the CO2 ice core data processing… in terms of the size of atmospheric CO2 variation-fluctuation.

cheers

Reply to  Rud Istvan
June 24, 2022 9:18 pm

There is no evidence that natural CO2 was the “control knob” in the ice core years. Do climate alarmists still claim that now? I thought that was an old Al Gore invention.

It was my impression that climate alarmists rarely called man-made CO2 a “control knob” until about 1975, after which there was a strong positive correlation between CO2 levels and the global average temperature.

I thought the alarmists focused on man-made CO2 only, and only during the decades when the global average temperature was rising (after 1975).

J N
June 23, 2022 2:58 pm

Thank you for this post Renee Hannon. I’ve been saying similar things for a long time. Agree with all!!

Reply to  J N
June 23, 2022 4:02 pm

I've also noticed this, and what's most important to me relative to variability is that samples spaced further apart are sampling longer term averages, and longer term averages always change slower and with less p-p variability than shorter term averages.

J N
Reply to  co2isnotevil
June 23, 2022 5:41 pm

Exactly!!

Joe Crawford
Reply to  co2isnotevil
June 24, 2022 7:39 am

Nyquist strikes again :<)

June 23, 2022 3:33 pm

“Figure 1b plots the individual CO2 data points showing obvious gaps of up to hundreds of years between samples.”

It does. But it shows something else – the curve is fairly smooth. The points on it are close to what you would have interpolated if that data had been missing. That gives confidence about interpolating within intervals.

There is a reason for that. Global CO2 ppm fluctuations involve huge mass movements, and so huge fluxes, if they happen over short intervals. But the past CO2 movements were in response to temperature changes, which caused the gas to diffuse from deep water. There is a long time scale associated with that.

The recent change is caused by us digging up and burning carbon. We know the mass flux associated with that, and it isn’t limited by a diffusive process. 

Rud Istvan
Reply to  Nick Stokes
June 23, 2022 3:47 pm

NS, I both agree and disagree, based on the changing dC13/dC12 ratios data anyone can look up.

Since dC12 is lighter, it is preferentially photosynthesized. Then preferentially sequestered as fossil fuels, enhancing residual dC13 in the atmosphere. So as atmospheric dC13 proportion declines, that is an incontrovertible indicator of dC12 fossil fuel consumption contributing to Keeling Curve CO2 rise. Put otherwise, Murray Salby was just wrong.

BUT delta rates matter. And the very slow change rate compared to fossil fuel consumption estimates says there is a lot else also going on in the carbon dioxide cycle that must be mostly natural. Of course, the uncertainty error bars are large given the vastness of the oceans and their known CO2 absorption /sequestration capabilities.

Clyde Spencer
Reply to  Rud Istvan
June 23, 2022 8:34 pm

So as atmospheric dC13 proportion declines, that is an incontrovertible indicator of dC12 fossil fuel consumption contributing to Keeling Curve CO2 rise.

I think that argument should be held in abeyance until such time as someone does a rigorous analysis of the isotopic fractionation that takes place with every phase change of water, and passage of CO2 between sources and sinks.

R_G
Reply to  Nick Stokes
June 23, 2022 4:07 pm

Assumption that smooth data (precision) represents truth is one of the most common sources of confirmation bias in scientific publications. What about systematic errors? They don't change the appearance of the data but they certainly affect their accuracy.

Reply to  R_G
June 23, 2022 4:19 pm

“Assumption that smooth data (precision) represents truth”
No, it signifies that you can interpolate. IOW, the sampling is adequate. More samples would just smoothly fill in the gaps. Yes, there could be systematic errors, but finer sampling won't help there.

Editor
Reply to  Nick Stokes
June 23, 2022 4:44 pm

“More samples would just smoothly fill in the gaps.”

An unwarranted assumption.

J N
Reply to  Andy May
June 23, 2022 5:54 pm

No Andy, it is indeed a very bad assumption, because smoothing sampled data is done precisely to eliminate the stochastic noise of having many samples, unless you have something that has perfectly curve-fitting behavior.

Renee
Reply to  Andy May
June 23, 2022 6:28 pm

More samples would just smoothly fill in the gaps

Interestingly, Tschumi, 2000, did just that on an Antarctic ice core in his effort to discredit Greenland ice cores. His figure 6 shows the data for 16 samples over 1 meter of the Byrd Antarctic ice core. He states “we selected an ice core from a depth range with surprisingly large variations.” The CO2 concentrations show variations of 40 ppm.

Reply to  Renee
June 23, 2022 7:19 pm

Well, that is a while ago. He claims some Greenland ice cores were showing too much CO2, so correcting that would not produce new peaks. But he also says:

“The atmospheric CO2 concentration has increased from about 280 ppmv at the beginning of industrialization to about 365 ppmv at present, and it increased during the transition from the last glacial epoch to the Holocene from about 200 ppmv to the pre-industrial concentration of 280 ppmv. These principal results of CO2 analyses on polar ice cores are not in doubt. They have been found in ice cores from various drilling sites with different accumulation rates and temperatures, and in ice with very different impurity concentrations.”

b.nice
Reply to  Nick Stokes
June 23, 2022 8:21 pm

“The atmospheric CO2 concentration has increased from about 280”

This of course is absolutely GREAT NEWS for all creatures that rely on CO2 based food.

ie EVERY creature on the planet.

Renee
Reply to  Nick Stokes
June 23, 2022 8:50 pm

Well, that is a while ago

Older data can be valuable. And it was in 2000, so not that long ago and is still quoted by many scientists. Tschumi’s dataset is one of the few CO2 datasets that shows CO2 data on closely spaced Antarctic ice core samples.

Call me a skeptic
Reply to  Nick Stokes
June 24, 2022 1:13 pm

You could also say that CO2 concentration increased from 280 ppm at the end of the Little Ice Age to 400 ppm in 170 years. See what I did there NS? Correlation doesn't necessarily mean causation. Climate Fraudsters have bet the house that correlation (rise in CO2) equals causation (temperature increase of 1 degree C), when simply the watts per meter squared output from the sun increased after the Little Ice Age ended.

Reply to  Call me a skeptic
June 24, 2022 9:06 pm

“equals causation (temperature increase of 1 degree C)”
No, this discussion is just about the measurement of CO2 variation, not about whether it causes temperature change.

Reply to  Andy May
June 24, 2022 2:56 am

A very unwarranted assumption because the individual samples are already smoothed due to the snow-firn-ice transition period, which also exceeds the record length of instrumental data in almost all Antarctic ice cores. The only clear exception is the Law Dome DE08 core, but it only has a record length of ~2,000 yr.

Ice cores are extremely useful paleoclimatology tools. However, splicing the instrumental record, or even the DE08 core, onto lower frequency cores is nothing less than fraudulent.

AGW is Not Science
Reply to  David Middleton
June 24, 2022 6:05 am

Bingo! Apples to hockey pucks, once again.

J N
Reply to  Nick Stokes
June 23, 2022 5:50 pm

No, more samples would just make the data noisier!!!! What can tell you whether in those gaps there is any point with CO2 levels similar to the present ones, or even bigger?

Mark BLR
Reply to  Nick Stokes
June 24, 2022 3:48 am

More samples would just smoothly fill in the gaps.

The issue is that if you sample at a lower frequency, e.g. by increasing the spacing along an ice core between samples, then you will lose the fine detail.

Do that “too much” — the exact “threshold” will depend on the dataset being used — and you will suddenly transition from an “interesting, chaotic, line” to a “boring, smooth, line”.

Starting with a “boring, smooth, line” you (plural) cannot simply “assume” a linear interpolation between the sample points you have.

– – – – – – – – – –

PS : I usually need to “visualise” a scientific issue before I really “get” it.

The graph below may (?) help fellow posters with a similar “limitation” to see the point I am attempting to make here.

Going from the top (blue) line to the bottom (red) one, either directly or via the middle (orange) line, is one thing.

“Assuming” the amplitude / range of a “higher” line starting from either of the “lower” ones is something else entirely !

[Attached image: Sampling-frequency_1.png]
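
A few lines of Python reproduce the kind of artificial example described here; the sampling phases are chosen to match Mark BLR's description of the figure and are otherwise assumptions.

```python
import numpy as np

# A 2-unit sine sampled at 2, 4 and 16 samples per cycle, with the phases chosen so
# that 2/cycle hits the zero crossings and 4/cycle lands between peak and crossing.
for per_cycle, phase in [(2, 0.0), (4, np.pi / 4), (16, 0.0)]:
    ts = np.arange(phase, 4 * np.pi, 2 * np.pi / per_cycle)
    ys = 2.0 * np.sin(ts)
    print(f"{per_cycle:>2} samples/cycle: apparent amplitude {np.abs(ys).max():.2f} (true 2.00)")
```
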
Reply to  Mark BLR
June 24, 2022 1:14 pm

So do you think the blue dots give you the right picture?

This is really just a version of Nyquist. Sampling loses you the information above the Nyquist frequency, which in your case is the orange plot. But if you doubled the sampling frequency from that (blue seems to be trebled) you’d get a good representation of the sinusoid.

So the question is, what is the Nyquist frequency here? Well, a good guide is when you are in the region of that double or treble (blue). Finer sampling ceases to make a difference. That is why I did the odd/even test. If halving the frequency makes little difference, you are probably sampling above the Nyquist frequency.

Mark BLR
Reply to  Nick Stokes
June 25, 2022 5:25 am

So do you think the blue dots give you the right picture?

NB : For illustrative purposes only I provided a completely artificial example of a pure sine wave. This will tend to show “aliasing” when sampled at exactly the Nyquist frequency.

For this specific example, yes, I think that the blue dots do indeed give me (/ us) “the right picture”.

So the question is, what is the Nyquist frequency here?

By definition, “the Nyquist frequency” = “half of the sampling rate”, i.e. “2 samples per cycle”.

In my completely artificial example this actually corresponds to the red line !

I deliberately chose the sampling points to be at “N x pi” intervals, when sin(X) always “just happens” to be passing through zero …

But if you doubled the sampling frequency from that (blue seems to be trebled) you’d get a good representation of the sinusoid.

Despite my increasing the font size of the “Legend” in my graph(s) you missed that I actually quadrupled between the orange and blue lines (4 samples per cycle, or twice the Nyquist frequency, to 16 samples per cycle [ 8 x Nf ]).

The orange line shows a definite “periodic / sine wave” signal, but the sampling at precisely “(N x (pi / 2)) + (pi / 4)” intervals reduces the amplitude of the sampled values from 2 to (just under) 1.5 … again, in my completely artificial example.

The orange line (at 2 x Nf …) is indeed “a good representation of the sinusoid”, but it is still inaccurate in some of the finer details.

Reply to  Mark BLR
June 25, 2022 7:23 am

‘By definition, “the Nyquist frequency” = “half of the sampling rate’
No, that is a tautological definition. The Nyquist frequency is the minimum frequency of sampling that can allow reconstruction of the signal. So you need to nominate a frequency above which the power spectrum has sufficiently attenuated, and sample at twice that rate. 

In the ice core case the attenuation is because of the diffusive mechanism which limits frequency response. That is why I think it is sufficient to show that you can reconstruct the odd points from the even points. That says that you have reached such a level of attenuation, and there won’t be further power at higher frequencies.

Mark BLR
Reply to  Nick Stokes
June 25, 2022 10:12 am

No, that is a tautological definition …

I have (a small amount of) practical experience in the domain of “data processing”.

In that domain, the definition of “the Nyquist frequency” is “half of the sampling rate”.

Reply to  Mark BLR
June 25, 2022 1:37 pm

The point of Nyquist theory is to calculate the minimum sampling rate needed to reconstruct the signal, which is a property of the data. Just saying that it is half of whatever sampling rate you have is unhelpful.

Mark BLR
Reply to  Nick Stokes
June 24, 2022 4:57 am

No, it signifies that you can interpolate. IOW, the sampling is adequate. More samples would just smoothly fill in the gaps.

Alternative viewpoint, focussing on the “assumptions” made in the above citation regarding the amplitudes of various “signals”.

My idealised illustration (see my post just “above” this one …) has three “sub-sampled” versions of a pure sine wave with an amplitude of two units.

What are the amplitudes of the red, orange and blue “signals” (copied below with horizontal reference lines to simplify the task) ?

Given either the red or orange “dataset”, how could you possibly “know” with such certainty the amplitude of the original (blue line / pure sine wave) signal ?

Given a portion of a dataset [ any real-world “climate” dataset ! ] similar to the red (flat line) “signal”, what is the amplitude of the high-frequency “natural variability” for the specific parameter being plotted on timescales one tenth (or less) of the sampling period used ?

More samples would just smoothly fill in the gaps.

They might … sometimes, but out here in “The Real World” (TM) 999 times in 1000 when such bald assertions are verified you will find that they actually don’t, e.g. the Tschumi (2000) paper referenced above by another poster.

[Attached image: Sampling-frequency_2.png]
Reply to  Mark BLR
June 24, 2022 10:35 pm

Mark,
In the spirit of your plots, I have shown the actual data with successive subsetting, or if you look in reverse, doubling the sampling frequency with each step. Of the original samples, I take 1 in 2, then 1 in 4 and so on to 1 in 128. You can see how, with increasing sampling, the results converge.
[embedded image]

It’s a bit hard to see detail of convergence so I took a mid-glaciation 20Kyr subset

[embedded image]

There is still some noise going from 2 (even/odd) to 1, and that would probably be present in finer sampling. But note the y axis. Total range is now 30 ppm, and the noise is at the 1 ppm level. It isn’t hiding any major warmings.

You can reason like this. We have a thousand or so intervals on the red (even/odd) curve. Could there be big warmings within those intervals? We know the data there, and the answer is none are seen. If there really are hidden warmings in the red, wouldn’t it be amazing if they always managed to stay one side or other of those extra black points?

Mark BLR
Reply to  Nick Stokes
June 25, 2022 5:39 am

In the spirit of your plots

I am copying my “Vostok vs. EPICA (Dome C), last 22,000 years only” plot (singular !) below for reference to address some specific points raised in this post and your OP.

… I have shown the actual data with successive subsetting

It is one thing to claim if I were to “sub-sample” the EPICA dataset (dark blue stars) I would end up with something approximating the Vostok dataset (teal stars).

The points on it are close to what you would have interpolated if that data had been missing. That gives confidence about interpolating within intervals.

You are “confident” that if you start with the Vostok dataset you could “interpolate within [the] intervals” and end up with the EPICA dataset.

I have less “confidence” than you do in your (or anybody else’s !) ability to perform that particular “trick” …

[Attached image: EPICA-Vostok-CO2_22-0kya_2.png]
whiten
Reply to  Nick Stokes
June 24, 2022 8:39 am

When considering more sampling, increased sampling to the point of a resolution change, a new adjustment of the processing method for the given data is also required; otherwise precision remains the same as before, or gets even worse.

Changing the size of the nails considerably requires a considerable adjustment of the hammer size.

Having a contiguous atmospheric CO2 data trend constructed from, at the very least, two significantly different low resolutions, for the last 800K years, or even the last 400K years, leaves not much room, or any room at all, for any considerable or significant precision to capture any significant CO2 variation-fluctuation in high resolution during that time period.

cheers

Jim Gorman
Reply to  Nick Stokes
June 23, 2022 4:26 pm

You know nothing about interpreting physical data, do you? Those gaps mean you have no idea what the CO2 concentration did during the gap. It could have gone up or down multiple times during the longest gaps.

You are interpreting the graph with a personal bias that tells you there is a “smooth” curve that indicates values didn't change much during the intervening period. That is not a scientific conclusion at all; it is using a crystal ball. If you don't know what happened, you should state that and let it go.

Guessing and using weasel words like maybe, could, and might have become de rigueur in climate science. My professors would have failed me for using those words. Science needs to relearn the words in the phrase, “I don't know”!

Editor
Reply to  Jim Gorman
June 23, 2022 4:45 pm

Well said!

Reply to  Jim Gorman
June 23, 2022 6:52 pm

“That is not a scientific conclusion at all”
It is. As I showed in the graph below, if you leave out every second point, you get an almost identical graph. Or to put it the other way, if you had only those fewer points and you interpolated, you would get the same result as the real data gives you.

Making deductions from the true data is even safer, since the missing intervals are smaller.

[embedded image]

Renee
Reply to  Nick Stokes
June 23, 2022 7:28 pm

Nick,
You are missing the point of the article. Ice cores predominantly capture millennial trends, mostly due to CO2 firn diffusion and secondly, compounded by poor sample resolution in time due to ice thinning. The last 400,000 years was sampled at 1 meter or less along the core, which is a tight sampling distance. However, when converted to time, due to ice thinning and compaction, the sample intervals are much larger, 300 to 700 years apart.

Reply to  Renee
June 23, 2022 9:20 pm

“You are missing the point of the article.”
No, I am applying the standard test for sampling adequacy. Would less frequent sampling change the picture? If not, then it is unlikely that more frequent sampling would do so. The reason is that if points are adequately interpolated over twice the current sampling interval, then interpolation would work even better with finer division.

Here is another plot where I have linearly interpolated between every second point, shown with a red curve. Then I show the data I didn’t use in black. Mostly the black points lie on the red. A few times they don’t, and that is where a peak was missed. But the peakiness was evident in the surrounding data, and the discrepancy is less than 20 ppm. The present rise came abruptly out of a calm period, and the rise is about 140 ppm.

[embedded image]
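
A minimal sketch of the odd/even hold-out test described in this sub-thread (not Stokes' actual script; the file and column names are placeholders):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("bereiter2015_co2_composite.csv").sort_values("age_yr_bp")
age, co2 = df["age_yr_bp"].to_numpy(), df["co2_ppm"].to_numpy()

even_age, even_co2 = age[::2], co2[::2]   # points kept for interpolation
odd_age, odd_co2 = age[1::2], co2[1::2]   # held-out points

predicted = np.interp(odd_age, even_age, even_co2)  # linear interpolation at held-out ages
residual = odd_co2 - predicted
print(f"median |residual|:  {np.median(np.abs(residual)):.1f} ppm")
print(f"largest |residual|: {np.abs(residual).max():.1f} ppm")
```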

Renee
Reply to  Nick Stokes
June 24, 2022 12:40 am

Nick,
Can you do this plot for me? Take the last 60,000 years. Smooth the data over 200 years. Then take one point from the smoothed data every 500 years, starting at 1950 AD. Tack that onto the rest of the CO2 all-point plot.

Reply to  Renee
June 24, 2022 2:55 am

Renee,
I’ll have to think a bit about how to do that. But I can easily do the last 60000 years with every even point in red and line interpolation, and then showing the odd points in black:

[embedded image]

Editor
Reply to  Nick Stokes
June 24, 2022 4:29 am

Doesn't mean anything in this context; the Antarctic resolution is much worse than every other point.

Carlo, Monte
Reply to  Nick Stokes
June 24, 2022 4:53 am

Nice hockey stick.

Renee
Reply to  Nick Stokes
June 27, 2022 7:00 pm

Nick,
Sorry this is late, just ran across it. Here I took Cape Grim and high resolution Law Dome. I ran a 200 year loess on the data. I then sampled the data at 100 year intervals. It’s plotted with Dome C for comparison. The first data point starts at 1940 AD. The next discrete data point is 1840 AD and so on. Therefore the modern warming is represented by one data point at 1940.

[Attached image: 2979BA85-561D-42A0-94CD-596CDE7A1B77.jpeg]
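
A sketch of that smoothing and resampling workflow (a reconstruction from the description, not Renee's actual code; the input file, column names, and loess settings are assumptions):

```python
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

df = pd.read_csv("lawdome_capegrim_co2.csv").sort_values("year_ad")
year, co2 = df["year_ad"].to_numpy(), df["co2_ppm"].to_numpy()

# Loess with a window of roughly 200 years, expressed as a fraction of the record length.
frac = min(200.0 / (year.max() - year.min()), 1.0)
smooth = lowess(co2, year, frac=frac, return_sorted=False)

# One discrete point every 100 years, working backwards from 1940 AD.
pick = np.arange(1940, year.min(), -100)
for y, c in zip(pick, np.interp(pick, year, smooth)):
    print(f"{int(y)} AD: {c:.1f} ppm")
```
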
Reply to  Renee
June 27, 2022 8:49 pm

Thanks, Renee,
I was still working out how to do that 🙁
But I can’t see that it helps much. It just says that however you work it, the last two millennia were within 280±4, and then shot up. Law Dome merges pretty well with Mauna Loa data, so we have the rise period well tracked.

So certainly the modern is a big contrast to those two millennia for which the data is very well resolved. It’s enough to say that we’re doing something that is having a big effect. It isn’t really important to establish that nothing similar has happened in the last 800,000 yrs. But we’ve had a good look without seeing anything.

I looked at Law Dome and Mauna Loa over that period here.
I was trying to match the rise of CO2 with the gross amounts that we emitted. It matched pretty well with a fixed airborne fraction.

Reply to  Nick Stokes
June 24, 2022 5:22 am

Nick, you’re totally missing the point. The composite ice core hockey stick is composed of a series of ice cores of widely differing sampling intervals and resolutions.

[embedded image]

This is a composite of the following ice cores:

-51-1800 yr BP: Law Dome (Rubino et al., 2013)
1.8-2 kyr BP: Law Dome (MacFarling Meure et al., 2006)
2-11 kyr BP: Dome C (Monnin et al., 2001 + 2004)
11-22 kyr BP: WAIS (Marcott et al., 2014)
22-40 kyr BP: Siple Dome (Ahn et al., 2014)
40-60 kyr BP: TALDICE (Bereiter et al., 2012)
60-115 kyr BP: EDML (Bereiter et al., 2012)
105-155 kyr BP: Dome C Sublimation (Schneider et al., 2013)
155-393 kyr BP: Vostok (Petit et al., 1999)
393-611 kyr BP: Dome C (Siegenthaler et al., 2005)
612-800 kyr BP: Dome C (Bereiter et al., 2014)

These ice cores are of vastly different resolutions. Petit et al., 1999 indicate that the CO2 resolution for Vostok is 1,500 years. Lüthi et al., 2008 suggest a CO2 resolution of about 500 years for Dome C. It appears that the high resolution Law Dome DE08 core was just spliced on to the lower frequency older ice cores.

If I apply a smoothing filter to the DE08 ice core in order to match the resolution of the lower resolution ice cores, I get a considerably different picture. If I use a 500-yr smoothing filter, the Hockey Stick loses its blade completely:

[embedded image]

Every method of estimating past CO2 levels, apart from low resolution Antarctic ice cores, indicates frequent century-scale CO2 shifts, with concentrations occasionally rising to 350 ppm in the Early Holocene and up to 400 ppm in the Eemian. The Antarctic ice core discrepancy can be entirely explained by resolution differences.

While the DE08 ice core doesn’t resolve any prior CO2 spikes over the past 2,000 years, it does resolve a possible drop in CO2, the cause of which is unknown.

The stabilization of atmospheric CO2 concentration during the 1940s and 1950s is a notable feature in the ice core record. The new high density measurements confirm this result and show that CO2 concentrations stabilized at 310–312 ppm from ~1940–1955. The CH4 and N2O growth rates also decreased during this period, although the N2O variation is comparable to the measurement uncertainty. Smoothing due to enclosure of air in the ice (about 10 years at DE08) removes high frequency variations from the record, so the true atmospheric variation may have been larger than represented in the ice core air record. Even a decrease in the atmospheric CO2 concentration during the mid-1940s is consistent with the Law Dome record and the air enclosure smoothing, suggesting a large additional sink of ~3.0 PgC yr-1 [Trudinger et al., 2002a]. The d13CO2 record during this time suggests that this additional sink was mostly oceanic and not caused by lower fossil emissions or the terrestrial biosphere [Etheridge et al., 1996; Trudinger et al., 2002a]. The processes that could cause this response are still unknown.

[11] The CO2 stabilization occurred during a shift from persistent El Niño to La Niña conditions [Allan and D’Arrigo, 1999]. This coincided with a warm-cool phase change of the Pacific Decadal Oscillation [Mantua et al., 1997], cooling temperatures [Moberg et al., 2005] and progressively weakening North Atlantic thermohaline circulation [Latif et al., 2004]. The combined effect of these factors on the trace gas budgets is not presently well understood. They may be significant for the atmospheric CO2 concentration if fluxes in areas of carbon uptake, such as the North Pacific Ocean, are enhanced, or if efflux from the tropics is suppressed.

MacFarling-Meure, C., D. Etheridge, C. Trudinger, P. Steele, R. Langenfelds, T. van Ommen, A. Smith, and J. Elkins (2006). “Law Dome CO2, CH4 and N2O ice core records extended to 2000 years BP“. Geophys. Res. Lett., 33, L14810, doi:10.1029/2006GL026152.

From about 1940 through 1955, approximately 24 billion tons of carbon went straight from the exhaust pipes into the oceans and/or biosphere.

[embedded image]

Reply to  David Middleton
June 24, 2022 1:58 pm

“Nick, you’re totally missing the point. The composite ice core hockey stick is composed of a series of ice cores of widely differing sampling intervals and resolutions.”
Well, that is a different point. It’s true that compositing brings compatibility problems. But that is different to the problem of whether the sampling is inadequate. And more sampling won’t solve those problems.

That is one reason why I showed the odd/even test. The adequacy of the sampling varies over time, but the odd/even is a local test, and shows where it is adequate and where not. And clearly, near past CO2 peaks it isn’t perfect. But that isn’t where we were prior to the post-industrial take-off.

The real question all this is trying to answer is, can we get a guide from the past as to where the current warming might take us. And the answer is that we haven’t seen anything like it before. Saying that, well, there just might be something that we can’t see, really doesn’t help. Even if it were plausible (it isn’t).

Editor
Reply to  Nick Stokes
June 24, 2022 4:27 am

Nick, Modern CO2 records are daily. 19th Century records are approximately decadal plus or minus. We are not talking about every other point; we are talking about gaps of hundreds of points in many cases. Your graphs don’t mean anything in this context.

Reply to  Andy May
June 24, 2022 5:26 am

On top of that… Each “point” from most Antarctic ice cores is an average of hundreds of years.

Jim Gorman
Reply to  Nick Stokes
June 24, 2022 11:33 am

You love to interpolate, yet that is not scientific. The time spans are so large that interpolation is unwarranted. I have seen you and others complain about “noisy” data even when looking at actual measurements. You have no scientific basis for saying that the intervening spans of time between points are not “noisy” but instead are “smooth”.

You are displaying a personal bias using a rationalization with no supporting DATA whatsoever. The data you need just does not exist.

What is so hard about saying “I just don’t know what really happened in the intervening centuries between data points and will refrain from making a conjecture”.

Reply to  Nick Stokes
June 25, 2022 7:52 am

Showing that the data set does not change when every second point is removed tells one nothing about the effect new information — new data points — may have.

Asserting data removal is equivalent to data addition asserts knowledge that is not in hand.

Nevertheless, the argument from the diffusive model of CO₂ is fair, because it rests on a physical theory that is good so far as it goes.

But diffusive theory doesn’t take into account possible episodic events, or the possible workings of other causal physics of which we are presently ignorant.

We don’t know what we don’t know. Assuming diffusive theory is the only operating mechanism is reasonable, but asserting it as complete brings the argument into areas where ignorance reigns.

We do know that if there was some physically real but presently invisible CO₂ excursion between presently in-hand points, some physical mechanism would have had to bring the CO₂ down (or up) again.

AGW is Not Science
Reply to  Jim Gorman
June 24, 2022 6:09 am

Exactly. Ignorance exceeds knowledge with your garden variety “climate scientist.” They continually mistake their assumptions for ‘facts’ or ‘data.’

ATheoK
Reply to  AGW is Not Science
June 28, 2022 9:58 am

Snicker!

NS is a weed?

Reply to  Jim Gorman
June 24, 2022 9:39 pm

I went to school for a BS degree a long time ago, but we were taught there are only raw data. Any adjustment, smoothing, or curve fitting converts data into an opinion of what the data would have been if measured properly in the first place.

These climate reconstructions are very rough estimates of local climates. They tell us the temperature and CO2 levels vary quite a bit from 100% natural climate changes. They don't tell us anything about man made climate change.

Three miracles:

A scientist saying “I don't know”

A scientist saying the data quality / quantity are inadequate to support the conclusion I was seeking, and

A scientist saying the future climate could get better!

ATheoK
Reply to  Richard Greene
June 28, 2022 9:57 am

Amen!

Editor
Reply to  Nick Stokes
June 23, 2022 4:42 pm

Nick,

“But it shows something else – the curve is fairly smooth.”

The point is, is it smooth? Or smoothed? And how would you know the difference?

From Renee’s article:

Mitchell, 2013:

“If the point lay outside of the sum of twice the standard deviation of a Monte Carlo spline and twice the standard deviation of the point itself, it was identified as a statistical outlier.”

An obvious question is whether scientists are removing CO2 centennial-scale excursions by rejecting samples as statistical outliers compared to the neighboring samples.

I would add, “Would the CO2 increase since 1850 be removed by Mitchell’s Monte Carlo filter?”

In other words, to make the curve look like what they wanted or imagined, did they manufacture the smooth curve? And, if not, how would you know?

Reply to  Andy May
June 23, 2022 5:33 pm

I actually can’t find that statement by Mitchell. Google just returns this WUWT page. The Mitchell article in the biblio is about methane, not CO2, and doesn’t have that statement. But whatever, it doesn’t relate to the data of Bereiter plotted here. There is no indication that that has been smoothed, and clearly it shows the familiar jaggedness of the glaciations.

Renee
Reply to  Nick Stokes
June 23, 2022 6:12 pm

Nick,
The statement by Mitchell is in the supplemental materials and it was for methane. Ahn, 2012, used the approach for CO2 of rejecting 2 out of 10 data points, simply because those were higher than the average of the others by more than 3 st. dev.

Clyde Spencer
Reply to  Renee
June 23, 2022 7:58 pm

I think that it is a reasonable conclusion that if the data have sparse sampling, one will miss the run up to a local high (or low) and falsely conclude that the sample point is an outlier. By removing that ‘outlier,’ the time-series is smoothed.

Reply to  Renee
June 23, 2022 10:02 pm

“Ahn, 2012, used the approach for CO2 of simply rejecting 2 out of 10 data points”
That is way off. Ahn, in doing an intercomparison between OSU and CMAR, rejected just two data points in the whole combined sets. They were OSU’s results for just one site (Law Dome) for one very narrow depth range (98.565–98.950m). They think something went wrong there. It doesn’t affect their own results.

Editor
Reply to  Nick Stokes
June 23, 2022 6:14 pm

Hi Nick,
Hopefully Renee will pop in and answer this. I did not find it in Mitchell’s paper either. I did find it in a paper by Eggleston, et al. (2016). Here is the quote:

“The second type of outlier was statistical in nature. To identify such outliers, a Monte Carlo bootstrapping method was employed. For each point, a Monte Carlo cubic spline Average (MCA) was constructed using only all other points in the data set; this method is described in more detail by Schmitt et al. [2012]. If the point lay outside of the sum of twice the standard deviation of the spline and twice the standard deviation of the point itself, it was identified as a statistical outlier.”

Evolution of the stable carbon isotope composition of atmospheric CO2 over the last glacial cycle – Eggleston – 2016 – Paleoceanography – Wiley Online Library

Reply to  Andy May
June 23, 2022 7:36 pm

Andy,
This has even less to do with Bereiter’s data; it isn’t even about ice cores. But yes, he identifies outliers, but doesn’t omit them from Fig 1. He marks them with a different symbol.

Renee
Reply to  Nick Stokes
June 23, 2022 8:20 pm

he identifies outliers, but doesn’t omit them from Fig 1. He marks them with a different symbol

Really Nick,
I can't find where Bereiter identifies outliers and marks them with a different symbol. But I can see that for the record of Marcott et al. [2014] he applies a constant correction of -4 ppm, as this record shows consistently higher values relative to the EDC record of the same period.

Reply to  Nick Stokes
June 23, 2022 8:37 pm

No, I am talking about Eggleston, who Andy quoted as identifying outliers. That is what E did with them. There is no evidence that Bereiter identified outliers at all.

Editor
Reply to  Nick Stokes
June 24, 2022 4:38 am

Hi Nick, He does indeed remove them, from Eggleston’s paper:

Two types of outliers have been identified and removed from the data set presented here

He does present a plot with them included as open circles, but he removes them from the dataset he is introducing with the article.

J N
Reply to  Nick Stokes
June 23, 2022 5:49 pm

Nick, imagine that in the future there was a gap of data between 1800 and, let's say, 2100. If, by some yet unknown process, CO2 in 2100 reached 310 ppm again, what point would you choose for the present time, considering your smoothing…

Izaak Walton
Reply to  J N
June 23, 2022 8:32 pm

JN,
by “yet unknown process” you basically mean magic. And if you want to include magical effects as real then not surprisingly all bets are off.

The point is that any natural process capable of changing CO2 levels by an amount similar to what has happened thanks to humans burning fossil fuels would be sufficiently disruptive to have left multiple traces in the geological record. CO2 cannot just change by large amounts for no reason.

Bob boder
Reply to  Izaak Walton
June 24, 2022 4:05 am

Funny how we are told that massive amounts of CO2 from volcanic activity in the geologic past cause massive climate change. Now that Isaac has debunked that I guess I can sleep better.

Reply to  Izaak Walton
June 24, 2022 5:34 am

They did leave “multiple traces in the geological record”…

In contrast to conventional ice core estimates of 270 to 280 parts per million by volume (ppmv), the stomatal frequency signal suggests that early Holocene carbon dioxide concentrations were well above 300 ppmv.

[…]

Most of the Holocene ice core records from Antarctica do not have adequate temporal resolution.

[…]

Our results falsify the concept of relatively stabilized Holocene CO2 concentrations of 270 to 280 ppmv until the industrial revolution. SI-based CO2 reconstructions may even suggest that, during the early Holocene, atmospheric CO2 concentrations that were 300 ppmv could have been the rule rather than the exception.

Wagner F, et al., 1999. Century-scale shifts in Early Holocene CO2 concentration. Science 284:1971–1973.

The majority of the stomatal frequency-based estimates of CO2 for the Holocene do not support the widely accepted concept of comparably stable CO2 concentrations throughout the past 11,500 years. To address the critique that these stomatal frequency variations result from local environmental change or methodological insufficiencies, multiple stomatal frequency records were compared for three climatic key periods during the Holocene, namely the Preboreal oscillation, the 8.2 kyr cooling event and the Little Ice Age. The highly comparable fluctuations in the paleo-atmospheric CO2 records, which were obtained from different continents and plant species (deciduous angiosperms as well as conifers) using varying calibration approaches, provide strong evidence for the integrity of leaf-based CO2 quantification.

Wagner F, Kouwenberg LLR, van Hoof TB, Visscher H, 2004. Reproducibility of Holocene atmospheric CO2 records based on stomatal frequency. Quat Sci Rev 23:1947–1954. LINK

They left traces in just about every CO2 proxy record apart from low resolution Antarctic ice cores.

ATheoK
Reply to  Izaak Walton
June 28, 2022 10:02 am

“humans burning fossil fuels would be sufficiently disruptive to have left multiple traces in the geological record.”

Specious falsehood!

Reply to  Nick Stokes
June 23, 2022 6:18 pm

Here is Fig 1b (my version, time backwards) showing first all the points, and then just every second point.
[embedded image]

As you see, leaving out every second point makes no difference to the appearance of the plot. That is the normal test of whether you have sufficient sampling. If halving the sampling makes no difference, doubling will also not do so. If it is safe to interpolate over the longer length, it is safe to interpolate over the shorter.

(My first uploaded graph had the first plot identical to the second; fixed.)

Clyde Spencer
Reply to  Nick Stokes
June 23, 2022 8:41 pm

… leaving out every second point makes no difference to the appearance of the plot.

That should be the subjective appearance. What happens to the descriptive statistics such as mean, standard error of the mean, range, and standard deviation?

Reply to  Clyde Spencer
June 23, 2022 9:03 pm

Very little. But are they the numbers you want? Do you have estimates of what they are?

Joe Crawford
Reply to  Nick Stokes
June 24, 2022 8:30 am

Nick, Nyquist’s theorem states that a periodic signal must be sampled at more than twice the highest frequency component of the signal. In other words, you have absolutely no idea of any variations in the data that occur at a frequency equal to or greater than one half the sample rate. This is shown by the red line in Mark BLR’s graph on June 24th at 3:48 am. You can never accurately assume the value of the signal between the samples.

Reply to  Joe Crawford
June 24, 2022 9:00 pm

“You can never accurately assume the value of the signal between the samples.”
That would mean that sampling can never work, no matter how frequent. But it can.
The signal isn’t periodic, nor is the sampling. But the spectrum will attenuate, because of the diffusive mechanism that lies behind natural variation in CO2. So if the sampling frequency is well into the attenuation, all is well. There isn’t any information lost, and interpolation will work.

That was the point of my odd/even division. It shows that little information is lost if you use half the samples. That means you are well into this HF attenuation.
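
One way to look for that attenuation directly is a Lomb-Scargle periodogram, which handles the uneven sample spacing; this is a sketch, not Stokes' analysis, and the file and column names are placeholders.

```python
import numpy as np
import pandas as pd
from scipy.signal import lombscargle

df = pd.read_csv("bereiter2015_co2_composite.csv").sort_values("age_yr_bp")
age, co2 = df["age_yr_bp"].to_numpy(), df["co2_ppm"].to_numpy()

periods = np.logspace(np.log10(500.0), np.log10(100_000.0), 60)  # years per cycle
omega = 2.0 * np.pi / periods                                    # angular frequencies
power = lombscargle(age, co2 - co2.mean(), omega)

# Print a coarse view of how power falls off toward short (sub-millennial) periods.
for p, pw in zip(periods[::10], power[::10]):
    print(f"period ~{p:8,.0f} yr: power {pw:.3g}")
```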

Jim Gorman
Reply to  Nick Stokes
June 25, 2022 5:38 am

Sorry to tell you this, but you just tried to falsify Nyquist. If your sampling rate is not high enough, you can NOT assume that you have a representative sample of what occurred between samples.

This is basically the problem with ice core time resolution. There is a reason sampling must occur at a rate of twice the highest frequency. If CO2 concentrations can happen in short periods of time, then your samples must be at twice this rate. It really doesn’t matter if the changes are truly periodic or not. If an occurrence can happen between samples, then you can miss that occurrence entirely. IOW, your sampling rate is not fast enough.

Reply to  Jim Gorman
June 25, 2022 7:03 am

“If your sampling rate is not high enough, you can NOT assume that you have a representative sample of what occurred between samples.”

The converse is that if the sampling rate is high enough, you CAN assume…
The question is, what rate is high enough? If you have the situation I demonstrated, where just looking at the even samples allows you to predict the odd samples, that is good evidence that you have reached it.

Jim Gorman
Reply to  Nick Stokes
June 25, 2022 4:33 pm

“ If your sampling rate is not high enough, you can NOT assume that you have a representative sample of what occurred between samples.”

I should point out that in this assertion:

p = “your sampling rate is not high enough”, and,
q = “you can NOT assume that you have a representative sample of what occurred between samples”

The converse is [If q, then p]

This means the converse would say, “If you can NOT assume that you have a representative sample of what occurred between samples, then your sampling rate is not high enough.”

Your statement of “If the sampling rate is high enough, then you CAN assume that you have a representative sample of what occurred between samples” is actually the inverse of the original assertion, not the converse.

Your assertion that you can “predict” interim data values between data points is nothing but playing with numbers and arriving at a guess and by golly. That is not science. It is indicative of your training as a mathematician and unfamiliarity with physical science. Have you ever taken a physical science lab class where the professor let you “predict” data that you did not actually measure, regardless of what math you used to make the prediction?

Let me give you an example. You are building a bridge to carry the weight of current traffic. You measure the weight of every other vehicle and “predict” that the weight of vehicles in between will be within the range of the weights you have measured, therefore designing the bridge for these weights will be safe. Let me know of a bridge designed this way!

Joe Crawford
Reply to  Nick Stokes
June 26, 2022 11:07 am

Nick,
Granted, you can calculate a value between sample points if the data sampled is the output of an ‘ideal low pass filter’, i.e., one that has a gain of 1 for all frequencies below some known cut-off frequency, and a gain of 0 for all higher frequencies. You are probably assuming that the “diffusive mechanism that lies behind natural variation in CO2” is acting as an ideal low pass filter. However, nowhere have I seen it stated or even proposed that the time for the enclosure process of air in firn is constant, which would be required for it to act as an ideal low pass filter.

Last edited 9 days ago by Joe Crawford
Mark BLR
Reply to  Nick Stokes
June 24, 2022 9:58 am

Here is Fig 1b (my version, time backwards) showing first all the points, and then just every second point.

Barnola et al (2003) data from :
https://cdiac.ess-dive.lbl.gov/trends/co2/vostok.html

EPICA data from the NCEI “Paleo Data Search” website: https://www.ncei.noaa.gov/access/paleo-search/?dataTypeId=7

Enter “epica” (+ “Enter” !) into the “GENERAL SEARCH” box near the top of the page, then scroll through the results and click on the “EPICA Dome C – 800KYr CO2 Data (Luthi D.)” option.

At the end of the “edc-co2-2008.txt” file is a section starting :

3. Composite CO2 record (0-800 kyr BP)

0-22   kyr Dome C (Monnin et al. 2001) measured at University of Bern

22-393 kyr Vostok (Petit et al. 1999; Pepin et al. 2001; Raynaud et al. 2005) measured at LGGE in Grenoble

393-416 kyr Dome C (Siegenthaler et al. 2005) measured at LGGE in Grenoble

416-664 kyr Dome C (Siegenthaler et al. 2005) measured at University of Bern

664-800 kyr Dome C (Luethi et al. (sub)) measured at University of Bern

Timescale EDC3_gas_a

Age(yrBP)   CO2(ppmv)

137          280.4

268          274.9

NB: For the original Petit et al (1999) data enter “vostok” in the NCEI’s search box, click on the “Vostok – Isotope and Gas Data and Temperature Reconstruction (Petit, J.-R.)” option, then download the “co2nat.txt” file.
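
A minimal sketch, under the assumption that the data lines in the downloaded file are whitespace-separated Age(yrBP)/CO2(ppmv) pairs as in the excerpt above, of how the composite might be read once saved locally; in practice the parsing may need restricting to the “3. Composite CO2 record” section.

```python
import numpy as np

# Hypothetical local copy of the file described above. Only lines that parse as
# exactly two numbers (Age, CO2) are kept; header and description lines are
# skipped automatically.
rows = []
with open("edc-co2-2008.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) != 2:
            continue
        try:
            rows.append((float(parts[0]), float(parts[1])))
        except ValueError:
            continue  # a text line such as a column header

age, co2 = np.array(rows).T                 # Age (yr BP), CO2 (ppmv)
print("Samples:", age.size)
print("Median sample spacing: %.0f yr" % np.median(np.diff(age)))
```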

EPICA-Vostok-CO2_22-0kya_2.png
Carlo, Monte
Reply to  Nick Stokes
June 24, 2022 10:29 am

What are the uncertainty intervals for each of these data points?

Reply to  Carlo, Monte
June 24, 2022 12:42 pm

Why don’t you ask the originator of the plot?
As I said, I am replicating Fig 1b of this article.

Carlo, Monte
Reply to  Nick Stokes
June 24, 2022 5:50 pm

You should care because the total uncertainty puts a hard limit on what you know.

Renee
Reply to  Carlo, Monte
June 24, 2022 1:59 pm

What are the uncertainty intervals for each of these data points?

Typically, four to six samples for CO2 are taken within a 60 to 100 mm core length for repeatability studies between different laboratories and over time. Data points may be rejected due to obvious contamination or fractures. A data point may also be rejected as noise because it has a higher standard deviation. The final data point represents a mean CO2 value derived from the closely spaced samples, with a sigma of the mean of less than about 1.5 to 2 ppm.
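
A minimal sketch, with made-up replicate values, of the kind of reduction being described: average several replicate measurements from one depth interval, drop an obviously noisy one, and report the mean together with its sigma. The rejection rule here is illustrative only; the actual laboratory criteria vary between studies.

```python
import numpy as np

# Illustrative replicate CO2 measurements (ppm) from one ~60-100 mm core interval.
replicates = np.array([276.8, 277.4, 276.1, 284.9, 277.0])   # made-up values

# Simple illustrative rejection: discard points more than 2 sigma from the median.
deviation = np.abs(replicates - np.median(replicates))
kept = replicates[deviation < 2 * replicates.std(ddof=1)]

mean_co2 = kept.mean()
sigma_mean = kept.std(ddof=1) / np.sqrt(kept.size)    # standard error of the mean

print("Reported point: %.1f ppm, sigma of the mean %.2f ppm" % (mean_co2, sigma_mean))
```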

Carlo, Monte
Reply to  Renee
June 24, 2022 2:16 pm

The standard deviation of this mean is not the complete measurement uncertainty.

Renee
Reply to  Carlo, Monte
June 24, 2022 3:12 pm

Monte,
This is what the supplemental to Bereiter’s article states “Data uncertainties are given as the individual analytical error based on replicate measurements, or as average uncertainty of the corresponding record/system.”

Jim Gorman
Reply to  Renee
June 24, 2022 3:47 pm

This is basically word salad. “individual analytical error based on replicate measurements”. Replicate measurements of the same thing with the same device should end up with zero error if the errors are random and Gaussian. That should provide what is known as a “true value”.

“Average uncertainty of the corresponding record/system” is basically meaningless. I assume they mean the uncertainty of the different measuring devices as determined by calibration procedures on those devices.

Nothing you have copied explains the uncertainty propagation calculations amongst the various samples, nor what the Type A and Type B uncertainties were calculated to be, nor the combined uncertainty.

Renee
Reply to  Jim Gorman
June 24, 2022 5:13 pm

Jim,
Agree. The authors of the CO2 data have not adequately addressed uncertainty. I would suspect the data would have higher uncertainty further back in time.

Carlo, Monte
Reply to  Jim Gorman
June 24, 2022 5:44 pm

Absolutely. The combined uncertainty should include influences such as the CO2 diffusion versus time graph someone else posted here.

Javier
Reply to  Nick Stokes
June 23, 2022 9:36 pm

I agree with Nick Stokes on this. CO2 data from ice cores is low-pass filtered with large smoothing that does not detect centennial excursions, but that doesn’t change anything.

We know yearly excursions in CO2 are large because the growing season in the NH pulls a lot of CO2 from the atmosphere, but that CO2 must be returned afterwards every year. To sustain a large centennial excursion that CO2 must come from somewhere. It won’t come from the biosphere or the ocean unless there is a large global temperature change. And by large I mean very large, because the Medieval Warm Period and the LIA produced small changes to the CO2 record. And we know very large global temperature changes have not taken place, from many other proxies including deuterium in ice cores.

The largest CO2 changes in the Pleistocene are due to glacial terminations and glacial inceptions, which involve very large temperature changes. At glacial terminations about half of the CO2 is released by enhanced volcanic activity and the other half by the ocean; the expansion of the biosphere counteracts part of the increase. At glacial inceptions the contraction of the biosphere releases CO2 while the cooling of the ocean absorbs it, explaining the delay of CO2 with respect to temperature.

If we accept that only volcanic activity and temperature changes can cause a significant CO2 excursion in the Pleistocene, and we know to a certain point the magnitude of the CO2 changes that have taken place at the last glacial termination on a millennial smoothed scale, then we must accept that the increase in CO2 from the release of fossil carbon stores cannot have a precedent in the Pleistocene.

The position that natural changes in CO2 are caused by temperature changes is consistent only with the position that no large changes in CO2, like the one experienced now, might have taken place during the Pleistocene. Even if the data could not detect them, they cannot have taken place unless aliens came with a lot of CO2 from Venus.

Bob boder
Reply to  Javier
June 24, 2022 4:08 am

If we accept only large temperature change and volcanic activity? I don’t.

Smart Rock
Reply to  Javier
June 24, 2022 8:26 am

At glacial terminations about half of the CO2 is released by enhanced volcanic activity and the other half by the ocean

This is an interesting statement. I wonder if there is evidence to support it, or is it (very uncharacteristic for our highly esteemed Javier) a mere assumption?

The hypothesis that post-glacial unloading and consequent isostatic adjustment leads to enhanced volcanic activity is convincing. However, seeing that isostatic adjustment following the LGM is still ongoing, it seems likely that the increase in volcanism would be spread over the entire Holocene and into the future. Not a short-term spike. The upper mantle is very viscous, and moves at a “glacial” pace.

Javier
Reply to  Smart Rock
June 24, 2022 10:25 am

Huybers, P. and Langmuir, C., 2009. Feedback between deglaciation, volcanism, and atmospheric CO2. Earth and Planetary Science Letters, 286 (3-4), pp. 479-491.

A global reconstruction indicates that volcanism increased two to six times above background levels during the last deglaciation, … Factor of two to six increases in the rate of volcanic emissions, persisting for thousands of years, are estimated to increase atmospheric CO2 concentrations by 20-80 ppm, with the majority of the emissions occurring during the latter half of the deglaciation.

ATheoK
Reply to  Javier
June 28, 2022 10:12 am

That sounds like a backed out calculation using un-named assumptions.
Apparently from their knowledge that the glaciers melted and the Earth warmed, twice since the last glacial maximum.

Otherwise, they’d have to drill many many volcanic flows to measure lava outflow along with coring every volcanic ash field to determine volcanic ash volume.

A method that still ignores the CO₂ gassing from the entire ring of fire.

Editor
Reply to  Javier
June 24, 2022 10:19 am

Javier,
It is possible to believe that fossil fuel CO2 is unprecedented, and the Antarctic CO2 record does not show actual large, short-term excursions at the same time.

It is possible to believe that natural increases are only due to warming events and volcanic eruptions (maybe that is true), but that does not mean that CO2 has not been as high as today over the past 12,000 years at some point in time. A lot can happen in 12,000 years. The only point here is that a short-term warming event or series of volcanic eruptions, or both, might have happened and is not seen in the Antarctic Ice core record. We should not get ahead of our data or make unwarranted assumptions.

Javier
Reply to  Andy May
June 24, 2022 3:57 pm

Andy,

It is possible to believe that fossil fuel CO2 is unprecedented, and the Antarctic CO2 record does not show actual large, short-term excursions at the same time.

I believe both things to be true at the same time.

that does not mean that CO2 has not been as high as today over the past 12,000 years at some point in time.

There is no evidence for that, and there is the problem that we don’t know of any source for such high CO2 levels. So lack of evidence plus the need for an unknown source makes me very skeptical. While absence of evidence is not evidence of absence, it is even more difficult to defend things for which there is no evidence, at least in science.

The Holocene is not the warmest interglacial, but belongs to the warm group. CO2 levels have not been very high during the Holocene, and the increase since 1950 is just phenomenal. You won’t convince me that something similar could have happened without a credible CO2 source, even though I recognize that we would not be able to detect it based on ice cores alone.

Editor
Reply to  Javier
June 25, 2022 4:20 am

Fair enough and reasonable. The Antarctic Ice cores cannot preclude a high CO2 excursion, but there is no evidence that CO2 could get that high.

This caught my eye:

“While absence of evidence is not evidence of absence, it is even more difficult to defend things for which there is no evidence, at least in science.”

We need to remind our climate alarmist brethren of this truth.

Renee
Reply to  Javier
June 24, 2022 2:36 pm

Hello Javier,

Thanks for commenting. I agree that ice core CO2 measurements are low-pass filtered due to large smoothing and won’t likely detect centennial excursions. I also agree with your comments about CO2 in recent times going back to the MWP and LIA. Joos, 2008, states the 20th century increase in CO2 is more than an order of magnitude faster than any sustained change during the past 22,000 years.

However, during the Pleistocene glacial terminations and inceptions, I can’t yet agree anyone reliably knows the magnitude of CO2 changes that took place during these rapid changes. The ice cores are simply the wrong tool and splicing the modern instrumental record onto them is misleading.

Joos shows the attenuation of atmospheric greenhouse gas variations (black) during the enclosure process of air into firn and ice using a firn diffusion model. The atmospheric spike is portrayed as a broad 30-40 ppm excursion in the ice record. Looking back through the ice record there are several 20-30 ppm peaks during the punctuated cooling associated with MIS 5, 7, and 9 glacial inceptions.

As for aliens bringing CO2 from Venus, the U.S. Defense Dept. has somewhat legitimized UFOs by forming the Airborne Object Identification and Management Synchronization Group as the successor to the Navy’s Unidentified Aerial Phenomena Task Force. Perhaps an enterprising climate scientist will seek DoD funding to research potential CO2 increases from UFOs in general and Venus aliens specifically (sarc), but we still won’t see that in ice cores.

Javier
Reply to  Renee
June 24, 2022 4:16 pm

Renée, you write interesting articles that are always a pleasure to read.

However, during the Pleistocene glacial terminations and inceptions, I can’t yet agree anyone reliably knows the magnitude of CO2 changes that took place during these rapid changes. The ice cores are simply the wrong tool and splicing the modern instrumental record onto them is misleading.

Agreed except in calling them rapid. They are rapid in geological terms, but glacial terminations are 5,000 year processes, and glacial inceptions 15,000. The modern CO2 increase is a 70 year affair. Given the size of the increase I have absolutely no doubt that the rate of change is unprecedented in at least 100,000 years and probably millions of years.

I don’t like to think that anything is possible just based on the argumentum ad ignorantiam. I prefer to use Occam’s razor. To have a similar carbon excursion we need a similar size carbon store and a mechanism of release that can work at similar rates (in a few decades). That’s two things we don’t have prior to “our experiment.” Dansgaard-Oeschger events have huge warming rates, although mostly regional in their effect. They produced warmings of 6-9ºC in the North Atlantic region in just a few decades. Their effect on CO2: zero (we detect their effect on methane).

The fact that the MWP and the LIA had very little effect on the CO2 record makes me very suspicious of great unknown carbon excursions in the past. I don’t believe in CO2 magical properties, and that includes its ability to produce great excursions out of nothing.

Clyde Spencer
Reply to  Javier
June 24, 2022 7:34 pm

Even if the data could not detect them, they cannot have taken place unless …

CO2 can have intermediate sequestration during glacial episodes, and be released during interglacials, as is happening in the Arctic currently.

Carlo, Monte
Reply to  Nick Stokes
June 24, 2022 4:48 am

So, Stokes is also the world’s leading expert on ice core metrology.

Why am I not surprised.

Reply to  Carlo, Monte
June 24, 2022 2:08 pm

Yep. I pronounce and the IPCC and all those scientists just fall into line!

Only of course, they actually said it first. I just listen and learn where I can.

Carlo, Monte
Reply to  Nick Stokes
June 24, 2022 5:46 pm

From their circular reasonings?

R_G
June 23, 2022 4:00 pm

Excellent and very informative post. It confirmed my suspicion about limited accuracy of CO2 reconstruction from ice cores. Thank you.

Peta of Newark
June 23, 2022 4:09 pm

Two anecdotes (about gas diffusion) from this corner and a quite serious question.

The question being, are they really measuring what they think they are measuring?

First story:
About 6 years ago while in the throes of selling my farm and moving house I came upon a ‘bit of a fascination’

What it was was a six-pack of carbonated spring water. The water was in the ‘classic’ clear plastic PET soda-pop bottles, in brand-new unopened condition, and had been simply forgotten about in the back of a kitchen cabinet.

At first sight (touch) each & every bottle gave the impression that everything was OK with the water, especially that the bottles were still ‘drum-tight’ as if still under pressure from the carbonated water inside them.

Wrong wrong wrong.
The water inside the circa 3-year old bottles was completely flat. Perfectly no fizz, no sparkle no nuffink. The CO2 had all gone.
Crystal clear and clean but no fizz so where did the CO2 go, how did it get out?

Second story:
Again concerning plastic but from the farming side of things.
Every year I made, as Winter Fodder for my cows and their babies, about 1200 tonnes of grass silage.
‘Simply’ = cut and chopped grass (typically Italian and perennial ryegrasses) assembled into a huge pile (a clamp) and, once constructed, every effort made to exclude air/Oxygen from getting into the pile.
Thus the grass effectively pickles itself – the sugars within it become the organic acids with whatever Oxygen was in there and then the process stops through lack of Oxygen.

The Major Component was a large heavy black plastic (poly-ethylene) sheet constructed to seal the walls, ends and top of The Clamp.
OK
But very recently I came upon stuff called HOB plastic sheet, intended for silage makers all around the globe
HOB being = High Oxygen Barrier.

The claims for this stuff gobsmacked me – not especially for this new plastic sheet but what they said about the old original black plastic sheeting.

The justification for this new HOB sheet was that the old original was porous to Oxygen.
Bad enough, ‘plastic’ is not gas-proof? Who would have thunk?
But especially that the old black plastic was porous to Oxygen, in the silage making situation, at the rate of 1 kilogram of Oxygen per square metre per day

Holy cow. Just what. Why did anyone bother with the stuff?

But for those of us Au Fait with making and using Grass Silage, it certainly explained a lot.

My wonderation is that if plastic sheeting and plastic bottles can be so porous to ‘atmospheric gases‘, where does that leave slabs of snow and ice?
Especially when the things of interest actually dissolve in their water/ice container.
Does Oxygen and/or CO2 ‘dissolve’ into polyethylene, and especially at the rate claimed?

So the claim that ice can be such a reliable and permanent ‘store’ for those things simply amazes me now.
What am I missing here?

Bob Ernest
Reply to  Peta of Newark
June 23, 2022 4:26 pm

Very interesting observations and questions. Now I know why my tonic and club soda don’t last 😂🤦🏻‍♂️

Scissor
Reply to  Peta of Newark
June 23, 2022 5:29 pm

Old Fick’s law comes into play. Permeability and solubility go hand in hand. Some polymers have better solubility for CO2 than others. Thickness of the barrier and porosity also come into play.

Ice in Antarctica is of course thick and cold, so diffusion is slower. Concentration gradients are smaller and that’s where Fick’s law applies.
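
A minimal sketch of Fick’s first law applied to a thin barrier such as a bottle wall; the permeability, thickness, pressure and area figures below are order-of-magnitude placeholders chosen only to show the form of the calculation, not measured properties of PET.

```python
# Fick's first law for steady-state diffusion through a thin barrier:
#   flux ~ P * delta_p / thickness
# where P is a permeability (solubility times diffusivity). All numbers are
# placeholders, not measured values for any real bottle or sheet.

P = 1.0e-17          # permeability, mol / (m * s * Pa)    -- placeholder
thickness = 3.0e-4   # wall thickness, m (~0.3 mm)          -- placeholder
delta_p = 3.0e5      # CO2 partial-pressure difference, Pa  -- placeholder
area = 0.05          # total wall area, m^2                 -- placeholder

flux = P * delta_p / thickness                           # mol / (m^2 * s)
grams_per_year = flux * area * 3600 * 24 * 365 * 44.0    # 44 g/mol for CO2
print("Illustrative CO2 loss: %.2f g per year" % grams_per_year)
```

With placeholder values of this order, a couple of grams of CO2 could escape over a few years, which is the sense of the flat-bottle anecdote above; the real figure depends entirely on the actual permeability and geometry.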

Clyde Spencer
Reply to  Scissor
June 23, 2022 8:10 pm

… and cold, so diffusion is slower.

Three years compared to 300,000 years is trivial.

AGW is Not Science
Reply to  Scissor
June 24, 2022 7:35 am

There is meltwater brine present in glacial ice at temperatures down to 70 below zero. Now imagine how quickly water that cold will hoover up CO2 from any “air bubbles” it might come into contact with.

Air bubbles in glacial ice are NOT a closed system, which is the ASSUMPTION they use to insist they provide valid “measurements” of the absolute atmospheric levels.

Bob boder
Reply to  Peta of Newark
June 24, 2022 4:12 am

Absolutely, food and beverage companies have long used metalized barriers in plastic films to eliminate O2 penetration.

Stevek
June 23, 2022 4:35 pm

I wish they would just show the rejected points as well. Color them a different color if necessary. Keep all the data.

Renee
Reply to  Stevek
June 23, 2022 5:45 pm

Stevek,
Totally agree. I have only run across one author that showed all the data in his published spreadsheet and that was Mitchell, 2013. It’s a rare occurrence, just like finding centennial CO2 fluctuations in ice cores.

Reply to  Renee
June 23, 2022 7:06 pm

“I have only run across one author that showed all the data in his published spreadsheet and that was Mitchell, 2013. “

But Andy May quoted your article as saying that Mitchell eliminated points beyond 2sd as outliers. I must say that I can’t find that in Mitchell’s paper, which was about methane. But it has nothing to do with Bereiter’s data in Fig 1b.

Drake
Reply to  Stevek
June 23, 2022 5:57 pm

Mostly what Climate Audit was about: leaving out data, so papers’ results were based on selection bias, and of course those “scientists” wouldn’t give up their data because McIntyre “would only use it against us”.

AGW is Not Science
Reply to  Drake
June 24, 2022 7:37 am

Yup – when you discard all the data you don’t like, you’re no “scientist.”

Mike McMillan
June 23, 2022 4:42 pm

Large changes in CO2, like our current spike, are increasingly likely to be smoothed out over time/depth in the ice core records. Unlike oxygen and nitrogen, CO2 can diffuse through ice. The rate is slow, tens of thousands of years, but that’s what you have in Antarctic cores.

Diffusion rates can’t be measured experimentally, but diffused spikes can be identified as bubble-free layers, caused by surface melting and sinking of water into the firn, filling the gaps that later would otherwise become bubbles. Surface water absorbs and concentrates CO2 to higher levels than atmospheric, which can be measured in the middle of the layer, then diffused concentrations are measured outside the layer. This gives an indication of diffusion rates, as the core depth of the sample tells how long the diffusion has been going on.

[Chart image attached.]

Chart should be turned on its side, as the Time axis corresponds to vertical distance in the core sampled. Citation is marked on the chart.

J N
Reply to  Mike McMillan
June 23, 2022 5:45 pm

Mike, great graph. I’ve been searching for this. Do you have the direct link to the original source (J. Ahn et al.)?
Cheers

Roy Martin
Reply to  J N
June 23, 2022 7:45 pm

The article, “CO2 diffusion in polar ice: observations from naturally formed CO2 spikes in the Siple Dome (Antarctica) ice core” by Jinho Ahn et al., is here:

https://www.cambridge.org/core/journals/journal-of-glaciology/article/co2-diffusion-in-polar-ice-observations-from-naturally-formed-co2-spikes-in-the-siple-dome-antarctica-ice-core/8C8638D9EC90AEA53B90B3DE70E594C0

Chart is in black and white; I don’t know where the color version is from.

J N
Reply to  Roy Martin
June 23, 2022 8:07 pm

Thank you Roy.
Cheers

Mike McMillan
Reply to  Roy Martin
June 23, 2022 8:14 pm

Original of the chart – [image attached]

I think the point is that the farther we go back in time, the less likely we are to see CO2 spikes, whether they are there or not. In 100,000 years, when the apes drill down to see what the air was like in the Holocene, this current spike may not show up.

Reply to  Mike McMillan
June 24, 2022 5:51 am

The resolution is based on the snow accumulation rate. In areas with high accumulation rates, the firn densification process is faster and there is less diffusion and therefore higher resolution. The highest resolution Antarctic ice core is Law Dome DE08.


DE08 has at least a 30-yr resolution, possibly even ~10-yr.

We’ll never have high resolution ice cores that go back more than a few thousand years due to compaction at depth. Vostok and Dome C are from areas with very low accumulation rates. They have usable record lengths of 400,000 to 800,000 yr, but they also have very low resolutions, >500 yr.

June 23, 2022 5:04 pm

An unstated problem is the reduction of precision due to autocorrelation. The Vostok CO₂ time series is 0.95 lag-1 autocorrelated.

The statistical accuracy of the data is then reduced because the effective size of the data set is diminished.

The usual expression to account for this is to multiply the resolution by the factor f = sqrt[(1+r)/(1-r)], if resolution is taken as the lower limit of uncertainty. For Vostok, r = 0.95 and f = 6.2.

For a Vostok ice-core point pseudo-resolution of 70 years, the resolution in the data set due to autocorrelation is 6.2×70 years = 437 years.
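
A minimal sketch of that adjustment: estimate the lag-1 autocorrelation r of a series and inflate the nominal point spacing by f = sqrt((1+r)/(1-r)). A synthetic AR(1) series stands in here for the Vostok CO₂ record.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation coefficient of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Synthetic, strongly autocorrelated stand-in for an ice-core CO2 series.
rng = np.random.default_rng(0)
x = np.empty(2000)
x[0] = 0.0
for i in range(1, x.size):
    x[i] = 0.95 * x[i - 1] + rng.normal()     # AR(1) process with r ~ 0.95

r = lag1_autocorr(x)
f = np.sqrt((1 + r) / (1 - r))                # resolution inflation factor
print("lag-1 r = %.2f, factor f = %.1f" % (r, f))
print("70-yr nominal spacing -> effective resolution ~ %.0f yr" % (f * 70))
```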

Wider-separated points will expand the lower limit of resolution.

So, I don’t see how any ice-core data set is able to resolve a 200 year rise in atmospheric CO₂.

Also, if point annual resolution is considered to be the time constant of the data set, then the usual criterion is that full resolution of a signal requires 10 time-steps. Single-point excursions do not constitute a signal.

For a 70-year average point separation, full resolution of a signal requires 10 points or 700 years. If we convolve that with the statistical resolution, we get 825 years.

Once again, no ice-core data set is able to resolve a 200 year rise in atmospheric CO₂.

A comparison of the modern trend in atmospheric CO₂ with the present record of the past is physically meaningless.

Full resolution of a 200 year trend would require an ice-core point about every 30 years. However, minimum firn closure is about 70 years. So, it may be analytically impossible to ever resolve a 200 year trend.

Editor
Reply to  Pat Frank
June 23, 2022 5:22 pm

“I don’t see how any ice-core data set is able to resolve a 200-year rise in atmospheric CO₂.”

I don’t either.

Reply to  Andy May
June 24, 2022 5:54 am

The one Antarctic exception is Law Dome DE08. Some of the Greenland ice cores probably also have high resolutions; however they’ve never been properly analyzed… At least not in published material.

Renee
Reply to  Pat Frank
June 23, 2022 5:39 pm

Pat,
Thanks for your comment and informative addition. Atmospheric CO2 measurements cannot and should not be compared to CO2 data from ice cores. Different medium, different measurement techniques and different resolutions.

Although the IPCC states this can be done with ‘very high confidence.’

Drake
Reply to  Renee
June 23, 2022 5:59 pm

Yep, Mickey Mann’s grafting temperature record to tree rings, perfectly scientific he says, and backed up by the IPCC.

AGW is Not Science
Reply to  Renee
June 24, 2022 7:45 am

Of course, since it appears to support their propaganda.

Reply to  Pat Frank
June 23, 2022 7:02 pm

“The statistical accuracy of the data is then reduced  because the effective size of the data set is  diminished.”
This is just nonsense. The data is the data. It isn’t less accurate because it is autocorrelated. The autocorrelation is just in the data itself.

Suppose you track a falling object with fine resolution in time and good spatial accuracy. You’ll find the data is highly autocorrelated. That doesn’t mean you are getting anything wrong. It’s just the way it is.

Clyde Spencer
Reply to  Nick Stokes
June 23, 2022 8:26 pm

Suppose you track a falling object with fine resolution in time and good spatial accuracy.

Suppose you are tracking a hail stone that is not uniformly accelerating, but spends a lot of time near its terminal velocity, experiences a lot of buffeting from turbulent updrafts, and may even reverse direction numerous times. It will have much reduced autocorrelation. If the time sampling is sufficiently coarse, much of the irregular behavior will be missed, and one might make erroneous assumptions about the history of the hail stone. The key here is that your “fine resolution in time” is essential to understanding what is happening. What we don’t have with the ice core data is “fine resolution in time.” There may well be things going on with seasonal melting and sublimation that are completely missed.

Reply to  Clyde Spencer
June 23, 2022 9:01 pm

“experiences a lot of buffeting from turbulent updrafts, and may even reverse direction numerous times. It will have much reduced autocorrelation.”

Yes, it will. But Pat’s contention is that autocorrelation diminishes accuracy, and requires finer sampling. On that logic, your reduced autocorrelation increases accuracy. But of course it doesn’t.

Reply to  Nick Stokes
June 23, 2022 10:22 pm

In autocorrelated data the number of statistically independent points, n_eff, is smaller than the number of data points. That means the information content is reduced and so is the resolution.

The more autocorrelation the smaller is n_eff and the lower is the resolution.

Reply to  Pat Frank
June 23, 2022 11:58 pm

This is completely misconceived. It would be appropriate if you were trying to model the CO2 time series as a stationary random process. But no-one is trying to do that, and no parameters of such a random process are being identified. Instead it is just measurement of CO2 as a function of time, with features of glaciation etc which we think we can explain. No-one thinks they are just random events.

The measurement of the numbers that make up that time progression are in no way affected by any notion of autocorrelation.

bigoilbob
Reply to  Nick Stokes
June 24, 2022 7:54 am

The measurement of the numbers that make up that time progression are in no way affected by any notion of autocorrelation.”

Dr. Frank, read this out loud to yourself 10 times. Then tell me what is wrong with it.

Carlo, Monte
Reply to  bigoilbob
June 24, 2022 10:40 am

Blob hath spake.

Reply to  bigoilbob
June 24, 2022 5:21 pm

Autocorrelation means the informational content of each point is diminished. The number of statistically independent points is diminished.

Resolution is necessarily diminished by autocorrelation. This is not controversial.

A relevant treatment is D. Parker (1984) “The statistical effects of incomplete sampling of coherent data series” J. Clim 4(4) 445-449; doi: 10.1002/joc.3370040409
The first paragraph: “A well-known feature of many meteorological and other geophysical data sets is that the values are correlated in time and space. As a result, there is a reduction in the information content of the data set, and it is necessary when applying statistical tests and computing statistical quantities to allow for the redundancy of information by reducing the number of degrees of freedom used in the test. In particular, estimates of the standard error of a mean of n values taken from a population of N values (N >> n) will be affected. For uncorrelated data this standard error σ_np will be σ/√n if σ is the standard deviation of individual values, but this expression must be increased to σ/√n’ for coherent data, where n’ (<n) is the equivalent number of independent terms represented by the n actual data.”

Lower informational content = lower resolution.

Nick, you have consistently shown a lack of understanding concerning measurement, data reliability, and resolution limits. You’ve dropped the ball again here, too.

As to you, bob, you’ve never been correct. And your streak is unbroken.

Jim Gorman
Reply to  Nick Stokes
June 24, 2022 12:02 pm

You are trying to extract a signal from very sparse data. That requires sufficient data points. Dr. Frank is trying to educate you and includes a reference. The least you could do is respond in kind instead of making up a strawman that doesn’t even address the issue.

Reply to  Nick Stokes
June 27, 2022 6:19 pm

Suppose one takes a positional measurement of your falling object every millisecond. And suppose the measurements are 0.95 lag-1 autocorrelated (like the Vostok record).

Then any positional jink of lesser duration than 6.2 milliseconds would be unresolved. Single-point excursions are still not known to be credible.

AGW is Not Science
Reply to  Pat Frank
June 24, 2022 7:44 am

As someone aptly put it, if the current CO2 instrument record was treated in a manner equivalent to the ice core data, it would constitute at most 1-3 data points.

All of which would be discarded as “outliers.”

Jim Gorman
Reply to  Pat Frank
June 24, 2022 11:59 am

Thank you for the cogent reply. A simple analysis for a lay person is that a point every 30 yrs will provide about 7 data points in 200 years. This isn’t unreasonable for detecting a signal over 200 years. At 70 year data points, at least 500 years minimum time for resolving a signal is required.

Carlo, Monte
Reply to  Jim Gorman
June 24, 2022 2:18 pm

But climate scientists only care about “trends”.

Reply to  Jim Gorman
June 24, 2022 6:04 pm

Thanks, Jim. Exactly right. 7 independent data points across 200 years are adequate to see a signal of that width.

Peter Wells
June 23, 2022 5:37 pm

It is a well-known fact that the solubility of CO2 in water varies with the temperature. Your drink, bubbly with CO2, is kept cold so it will stay bubbly longer. As the earth cools during an ice age, the oceans can absorb more CO2. But, of course, as water evaporates and falls out as snow and converts to glacial ice on land, there is less water in the oceans to hold CO2. Good luck in doing all the calculations as to atmospheric effect.
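
A minimal sketch of the temperature dependence being invoked, using the van 't Hoff form of Henry's law with rough literature values for CO2 (about 0.034 mol/(L·atm) at 25 °C and a temperature coefficient of roughly 2400 K); treat the constants as approximate, illustrative figures.

```python
import numpy as np

def henry_kh_co2(T_celsius):
    """Approximate Henry's law solubility of CO2 in water, mol/(L*atm).
    Uses k_H(298 K) ~ 0.034 and a van 't Hoff coefficient of ~2400 K,
    both rough literature values quoted here for illustration only."""
    T = T_celsius + 273.15
    return 0.034 * np.exp(2400.0 * (1.0 / T - 1.0 / 298.15))

for t in (2, 10, 25):
    print("T = %2d C: k_H ~ %.3f mol/(L*atm)" % (t, henry_kh_co2(t)))
# Colder water dissolves noticeably more CO2 at the same partial pressure,
# which is the sense of the cold-ocean argument above.
```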

About 65 million years ago a large asteroid hit, wiped out the dinosaurs, ignited forests worldwide, and they were buried as coal. A relatively recent study with the title “Age of Coal” shows that many coal deposits date to this event. Think of how much plant food it took to support dinosaurs worldwide. Consider the book “The Population Bomb” which, in the 1960’s, predicted worldwide starvation due to our apparent inability to raise enough food back before we significantly enhanced our atmosphere, adding more CO2 by burning coal.

We are now in the process of entering the next big ice age as the result of Milankovitch cycle changes to earth’s orbit. Think of what that will do to our ability to raise food.

Frank from NoVA
Reply to  Peter Wells
June 23, 2022 9:05 pm

‘About 65 million years ago a large asteroid hit, wiped out the dinosaurs, ignited forests worldwide, and they were buried as coal. A relatively recent study with the title “Age of Coal” shows that many coal deposits date to this event.’

Don’t most coal deposits date back to the Carboniferous?

Reply to  Peter Wells
June 24, 2022 3:37 am

What? This is nonsense at a level of several Stokeses. You can’t just type what you think without any knowledge; it really wastes people’s time. Just Google it, don’t type without thinking. Save the planet from pointless characters and brain clicks.

“Coal should be around 360 and 286 million years old because most of the coal in the world was formed in the Carboniferous Period, which occurred 360 and 280 million years ago.” The clue is in the Carboniferous period’s name. Duh!

If the forest burned, the carbon would enter the atmosphere as CO2, not the ground to become coal. You didn’t know, so you made it up. The most basic research would tell you coal seams were deposited in the Carboniferous and when that was.

The CO2 released by fires will be recycled into the new plants and animals, some that emerged from the oceans, others that survived the impact events. Why birds, though? You would have thought flying things would not do well in fireballs. Follow the rule: don’t think, check.

Certainly the glacial phase will soon be upon us, but in geological time, not while this debate is still going on. We have 1,000 years to the next warm maximum, probably a bit less than 1 deg colder than now, as the record shows happens. Really glacial conditions are thousands of years away, and with 130 m of sea level down, most of Northern Europe and Canada will need to migrate somewhere; interesting if we still have nation states and borders. Can’t grow much on ice sheets or tundra. But a successful civilisation will have hydroponics at scale by then, so who knows? Moving seems easier, if mass relocation doesn’t create “the final battle for Earth”. Shades of Napoleon’s retreat from Moscow?

Earth itself won’t care of course; it’s just another ice age cycle to the planet. A planetary instant, 1 part in 200 million of its life. Species come and go all the time on Earth, etc.

I expect Hollywood will then be making a disaster movie about glaciers engulfing surprised cities at a gulp. And no doubt the moviegoers of the time will believe them. Even evolution can’t fix stupid. I recall they had something like a white blob glacier thingy on Venus when Dan Dare went there… not real? What do you mean?

“Run for your lives! The killer ice is coming!”.

POINT: Periodicity matters/Timing is everything.

AGW is Not Science
Reply to  Brian R Catt
June 24, 2022 11:15 am

When the temperature trend starts going down again (and they can no longer hide it with “adjustments”), you can bet your last dollar that they’ll come up with some pseudoscience “explanation” of how THAT is our fault too. And of course, it will still be blamed on our fossil fuel use.

You know, just like last time in the 70s…

Bill Rocks
Reply to  Peter Wells
June 24, 2022 2:41 pm

“many coal deposits date to this event.” “About 65 million years ago …”
How many is many and what is the actual metric?

I believe your source is incorrect. What coal beds and/or what coal mines? Where are the huge coal deposits of terminal Cretaceous age? This is a sincere question.

In addition, if the terminal Cretaceous forests were ignited worldwide, which is a reasonable assumption, the forests would have been incinerated and the remnants would be dominated by charcoal, which is very stable over geologic time and is found in sedimentary rocks hundreds of millions of years in age. I am not aware of any terminal Cretaceous-age coals that are significant deposits nor that are dominated by charcoal.

In fact, the coals with significant amounts of charcoal and degraded wood evidence in them are the vast coal deposits of the ancient southern continent of Gondwana, now found in South Africa, Zimbabwe, Australia, India, Botswana and probably Antarctica. These coals are of Permian age and many were deposited in periglacial environments.

Steve Richards
June 24, 2022 12:59 am

Nick Stokes tries to explain using the wrong method.

He shows that removing alternate data points leaves the curve looking the same. Obviously, what he said is correct. However, that is not the point here.
We are concerned with unknown data. Data that has not been sampled. With natural data, CO2, it may rise or fall at each measurement point. Without that data, in fine resolution, we can never fully understand what the shape of the data was.
Yes Nick, removing samples from known data leaves you with a holey version of the former. That’s not what people are discussing.

Reply to  Steve Richards
June 24, 2022 3:18 am

“With natural data, co2, it may rise or fall at each measurement point. “

The implication of what you are saying is that CO2 could go anywhere, so we just don’t know. But there are two powerful counters:
1. It doesn’t. That is the point of the odd/even test. There are about 900 of the odd points and they mostly sit on the interpolation values of the evens. That would be a huge coincidence if they could have gone anywhere.
2. Physics. We know that the CO2 has to come from the ocean, and from considerable depth. For that to happen, first the air temperature has to rise, then heat has to diffuse down to depth, and then CO2 has to diffuse to the surface. That smooths everything out in time. We don’t know exactly the timescale, but this meshes with point 1. If it is observed to smooth out on the odd/even scale, then that is very likely the diffusion timescale.

Reply to  Nick Stokes
June 24, 2022 5:26 pm

first the air temperature has to rise, then heat has to diffuse down to depth, and then CO2 has to diffuse to the surface.

Explaining in one sentence the lag of CO₂ versus air temperature across every single ice age. No CO₂ driver.

Nick, could you please do us all a favor and explain your insight to Gavin Schmidt?

Reply to  Pat Frank
June 24, 2022 6:51 pm

It is perfectly orthodox scientific understanding.

Reply to  Pat Frank
June 24, 2022 7:02 pm

Here is Real Climate way back in 2007 saying all that:

Second, the idea that there might be a lag of CO2 concentrations behind temperature change (during glacial-interglacial climate changes) is hardly new to the climate science community. Indeed, Claude Lorius, Jim Hansen and others essentially predicted this finding fully 17 years ago, in a landmark paper that addressed the cause of temperature change observed in Antarctic ice core records, well before the data showed that CO2 might lag temperature.

Reply to  Nick Stokes
June 25, 2022 11:15 am

Lorius, et al., used a climate model for their calculation, which merely expressed the built-in assumption that CO₂ forcing drives air temperature. There’s no reason to think that result is physically complete or that the CO₂ assumption is physically correct.

Nowhere in the paper do the authors predict that glacial-interglacial CO₂ lags air temperature. Their case includes that there are other drivers as well.

Lorius & Co., even demur from such a prediction: “Using the data on the direct radiative forcing associated with changes in the concentration of greenhouse gases, we derive information about the role of fast feedback processes. This does not require a solution of the ‘chicken and egg’ problem, that is, we do not have to address fully the question of causes of the glacial-interglacial cycles and of the sequence of possible forcing factors. For example, whether the temperature changes lead or lag the changes in CO₂ or CH₄ concentrations is not relevant for the study of fast feedbacks. (my bold)”

The bolded sentence directly contradicts Gavin’s claim of a prior prediction of a CO₂ lag. Gavin just imposed an after-the-fact and tendentious claim to defuse an embarrassing discovery.

Reply to  Pat Frank
June 25, 2022 12:20 pm

I’ve extracted the Vostok and GCM temperatures of Figure 3 from Lorius, et al., and show the double-y plot here.

GHGs lead temperature at every major inflection, contradicting Gavin’s claim.

1990 CO2 & Ice Cores GCM CO2 & Vostok.png
Reply to  Pat Frank
June 25, 2022 1:27 pm

So it goes here:
“T leads GHGs at glacial changes. Gavin is wrong”
“No, here is Gavin saying just that”
“OK, here is a case between changes where GHGs lead T. Gavin is wrong”.

Reply to  Nick Stokes
June 25, 2022 5:06 pm

The blue is GCM T output, Nick. The red is Vostok T.

Lorius published the Vostok series and the modeled GHG contribution to temperature vertically offset in their Figure 3, making it difficult to see that their model has GHG preceding temperatures.

You’re not paying attention.

Reply to  Pat Frank
June 25, 2022 1:29 pm

Lorius, et al., used a climate model for their calculation, which merely expressed the built-in assumption that CO₂ forcing drives air temperature”
Climate models have no such built-in assumption.

Reply to  Nick Stokes
June 25, 2022 4:44 pm

Imposed ceteris paribus and constant relative humidity are that assumption.

Reply to  Pat Frank
June 25, 2022 5:35 pm

They do not assume constant relative humidity either. You really know nothing of models.

Reply to  Nick Stokes
June 25, 2022 7:00 pm

Po-Chedley, et al (2019): “The basic response of the atmosphere to GHG concentration changes is to warm and moisten at nearly constant relative humidity.”
O’Gorman and Schneider (2009): “This [atmospheric water vapor content] is termed Clausius–Clapeyron scaling, because the mean relative humidity remains roughly constant.

Soden & Held (2006): “Water vapor is found to provide the largest positive feedback in all models and its strength is consistent with that expected from constant relative humidity changes in the water vapor mixing ratio.

More than three links and the spam filter rides in.
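
A minimal sketch of the Clausius-Clapeyron scaling the quoted papers refer to, using the common Bolton (1980) approximation for saturation vapour pressure: holding relative humidity fixed while temperature rises by 1 K implies roughly 6-7% more water vapour.

```python
import numpy as np

def e_sat(T_celsius):
    """Saturation vapour pressure over water (hPa), Bolton (1980) approximation."""
    return 6.112 * np.exp(17.67 * T_celsius / (T_celsius + 243.5))

T0, dT, RH = 15.0, 1.0, 0.7          # illustrative temperature, warming and RH
e0 = RH * e_sat(T0)                  # vapour pressure at constant relative humidity
e1 = RH * e_sat(T0 + dT)
print("Vapour increase per K at fixed RH: %.1f%%" % (100 * (e1 / e0 - 1)))
```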

Reply to  Pat Frank
June 25, 2022 7:09 pm

Here’s another.

Willett & Sherwood (2010): 5. Exceedance for a given rise in region mean temperature assuming constant relative humidity

“…To predict the likely resulting shift in Wvars, a region-specific shift is calculated based on the climatological RH [Relative Humidity] and Wvars for that region, for each change in T, based on the constant-RH assumption. This assumption is upheld for the most part in both GCM and observed studies over large scales…”

Reply to  Pat Frank
June 25, 2022 8:37 pm

This assumption is upheld for the most part in both GCM…”
That is the key. A GCM can’t uphold a result if it assumed it in the first place.
There are good reasons why RH should remain relatively constant, and GCM’s do reproduce that to a substantial extent. But it is a result. They don’t assume it. They couldn’t anyway. Their guiding principle is conservation of mass, and such an assumption would not be consistent with it without further assumption.

GCMs do local calculations, cell to cell. “Constant RH” means constant global average RH. You couldn’t enforce such a condition locally. Same with a global proposition like CO2 drives T. It isn’t done and it can’t be done.

Jim Gorman
Reply to  Nick Stokes
June 26, 2022 5:23 am

This assumption is upheld for the most part in both GCM…”

“That is the key. A GCM can’t uphold a result if it assumed it in the first place.”

Your statement is illogical. You seem to be assuming the statement means it is a constant from the beginning and throughout. It doesn’t say that. You also contradict what the reference asserts without any evidence at all.

“Their guiding principle is conservation of mass,”

This statement is missing sufficient detail. H2O has a constant mass whether it is frozen, liquid, or vapor. Consequently, a simple statement of conservation of mass is meaningless. If you mean a constant mass of water vapor, then you need to rethink original assertion.

GCMs do local calculations, cell to cell. “Constant RH” means constant global average RH. You couldn’t enforce such a condition locally.”

Again, you provide no evidence to refute a mainly constant RH. Doing local cell calculations is not proof of a mostly constant overall RH in GCM’s.

Reply to  Nick Stokes
June 26, 2022 2:10 pm

A GCM can’t uphold a result if it assumed it in the first place.

Of course it can. The modelers assume it. They embed the Clausius-Clapeyron equation in their model. The GCM conforms to the CC equation within it, and the assumption is upheld.

Reply to  Pat Frank
June 26, 2022 2:41 pm

Constant relative humidity was deployed in the foundational Manabe & Wetherald 1967, is accepted in Cess 1990, and is found throughout the review by Held & Soden 2000.

Held and Soden mention a behavior of models that is very revealing (page 454): “Differences in equilibrium sensitivity among different models appear to be due primarily to differences in cloud prediction schemes and, to some extent, the treatment of sea ice, and only in a minor way to differing predictions of water vapor distribution. This point was made very clearly by the intercomparison study of Cess et al (40), in which a variety of atmospheric models in an idealized setting were subjected to a uniform increase in surface temperature. The changes in net radiation at the top of the atmosphere in the clear sky were generally consistent across the different models, and consistent with fixed relative humidity radiative computations. The total-sky (clear plus cloudy) fluxes were much less consistent across models.” (my bold)

So all the models use the Clausius-Clapeyron (CC) equation to calculate relative humidity (RH). RH generally changes linearly with air temperature (Manabe & Wetherald 1967), though with small regional modifications due to advection (migration of air parcels).

But the models have different schemes for simulating various climate subsystems while they are constrained to maintain TOA energy flux balance. So each model with fixed CC RH, in some idiosyncratic fashion, modifies the various subsystems during simulation because they are constrained to maintain TOA balance.

So, they all have different climate sensitivities, their simulated climate subsystems (including cloud fraction and distribution) do not compare and, nevertheless, they all manage to get the same clear sky TOA balance and, halleluiah!, the historical air temperature record.

Let’s see: that’s called offsetting errors, isn’t it. See Kiehl, 2007. I’d put in the link, but the three already present are max links.

Reply to  Pat Frank
June 27, 2022 2:26 am

The CC equation just tells the vapor pressure immediately adjacent to a water surface (where, of course, RH=1). It says nothing about the average RH in a whole atmosphere, most of which is far from contact with liquid water.

Reply to  Nick Stokes
June 27, 2022 10:30 am

What is the composition of clouds?

Reply to  Pat Frank
June 27, 2022 3:00 pm

Most of the atmosphere is not clouds.

Reply to  Nick Stokes
June 27, 2022 3:39 pm

But liquid water is widely dispersed throughout.

Jim Gorman
Reply to  Pat Frank
June 28, 2022 12:04 pm

According to Nick, the water vapor is magically transported to altitude without appearing in the ensuing space. Shades of Star Trek.

Jim Gorman
Reply to  Pat Frank
June 27, 2022 3:32 pm

Cotton balls?

Clyde Spencer
Reply to  Nick Stokes
June 24, 2022 8:05 pm

I seriously question whether heat simply diffuses downwards (Across thermoclines!) and CO2 diffuses upwards. It is, instead, upwelling of deep water as surface water is evaporated or blown by winds, and coastal upwelling along western coasts. Movement of bodies of water is much more important than slow diffusion.

Reply to  Clyde Spencer
June 24, 2022 8:51 pm

The point is, it is a diffusive process and can’t happen quickly. That smooths out any high frequency events which means at some timescale, interpolation works well. The task is to find that time scale.

Reply to  Steve Richards
June 25, 2022 12:29 pm

Steve, at each ice core depth, several samples are taken and may go to different labs. In general the difference in the data is less than 1.2 ppmv (1 sigma) for all samples, with a systematic difference of a few tenths of a ppmv between different labs.
That means that for the same depth (average gas age) the gas bubbles have the same composition and no huge differences are found.

Prjindigo
June 24, 2022 1:08 am

Well now, the first problem is that unless the ice has sustained a temperature below -109°F some of the CO2 has gone missing from where it was supposed to be and there’s no way at all of telling how much.

Just to be certain let me tell you how much has gone missing: every bit of it that was compressed out when the snow became ice and some that was lost when the ice became water and nobody bothered to measure the specific acidity.

95% of the data is spurious and circumstantial and 100% of that was ignored because these were college graduate level students who do the same quality of work as the ones who write reports based on 7 rats when they had 10 and 3 died from poor nutrition.

Reply to  Prjindigo
June 25, 2022 12:22 pm

Sorry, but some researchers really do real scientific work…

If you think that some water is left in ice at -40 °C, then think again and have at least some knowledge of what you are talking about.

And how do you think that ice contains only 180-200 ppmv CO2 while the atmosphere around that ice is at over 350 ppmv? Does CO2 migrate from low levels to high levels?

Prjindigo
Reply to  Ferdinand Engelbeen
June 26, 2022 12:10 am

The edge of the ice core is melted when taking it. You failed to parse my statements correctly through the assumption that I was wrong. There is a thermal effect well within CO2’s range that occurs when the ice cores are brought up and they out-gas from pressure.

I would like to know how you think there is any ice at -40°C on Earth.

DiggerUK
June 24, 2022 1:50 am

Apart from the statistical arguments, much in the article is above my work experience and pay scale.

It is a long time since I realised the analysis of ice core samples was not done year by year. But it seems that the 70 year sample size has not been reduced since I found that out,… is it possible to decrease the sample sizes? From a statistical view, after using overlapping and averaging periods, would it be necessary to do so anyway?

Another Road to Damascus article for me. Thank you…

Reply to  DiggerUK
June 25, 2022 12:17 pm

Sample size did reduce over the years, from melting a large sample to a 10×10 cm cube for the grating technique to about 30 g ice in the latest sublimation technique with cryogenic separation of all constituents in the gas bubbles and mass spectroscopy for isotopes and quantities.
How many years that represents depends highly on the layer thickness, which depends on the local snow accumulation rate and the depth of the layer under the pressure of everything above it… Even so, the ice core is put in “relaxation” mode for at least a year after drilling, which recovers at least partly its original volume (thus layer thickness).

The sublimation technique was used for the 800,000 years Dome C ice core:
https://tel.archives-ouvertes.fr/tel-00370658/fr/

Here the start of the new technique:
https://www.researchgate.net/publication/253089968_A_sublimation_technique_for_high-precision_measurements_of_d13CO2_and_mixing_ratios_of_CO2_and_N2O_from_air_trapped_in_ice_cores

chrism
June 24, 2022 2:04 am

this data analysis issue reminds me of the historical recording of once-daily temperature using mercury thermometers vs an instant-response, continuously recorded thermocouple (where high spikes are recorded that the mercury missed or smoothed) – the recent ice sampling shows a spike reflecting an instantaneous peak, whereas the firn-diffusion historical record is disparate and possibly more smoothed –

Prjindigo
Reply to  chrism
June 26, 2022 12:11 am

or simply repeated from the day before because nobody bothered to sling-down the mercury thermometer and the thermometers wore out over lifetime

Geoff Sherrington
June 24, 2022 2:47 am

Any discussion of adjusted or missing data benefits from access to every item of data, including that which was rejected, and hopefully showing how terms like standard deviation were calculated.
To save multiple people searching, can anyone here link to drill holes with complete raw data sets?
There has long been discussion about whether all data have been made public in cases like Law Dome. Steve McIntyre wrote on this at CA, but I do not know if it is now all in the public domain.
Geoff S

Renee
Reply to  Geoff Sherrington
June 24, 2022 10:05 am

Geoff,
It would certainly be nice to have access to all the data. I’d be really interested in the 4-6 closely spaced samples taken for repeatability studies between different laboratories and over time. The one study by Tschumi, 2000, using closely spaced samples on the Antarctic Byrd core showed CO2 fluctuations up to 40 ppm. I have not been able to find the complete raw datasets for any of the ice cores.

Geoff Sherrington
Reply to  Renee
June 25, 2022 1:29 am

Renee,
It is scandalous when data paid for by the public is censored or not released by authors who, by their actions, are creating doubt about their findings.
Renee, can you suggest any actions that would help the release of more complete data? What can WUWT readers do?
You are deeper into this data shortage than most of us are. How would you feel about making a list of one, preferably many, drill holes with specific description to allow nonspecialists to seek the missing data once properly described. I have in mind approaches to university senior staff, members of parliament and so on.
Thank you for your essay above. I cannot add to it, my apologies, but this is a topic that requires all data including rejected outliers before sense can be made. Geoff S

Renee
Reply to  Geoff Sherrington
June 25, 2022 4:33 pm

Geoff,
The first place to start is to contact Bereiter who appears to be the gatekeeper of the CO2 composite curve. I can make a first pass and see if he responds. Sometimes they do, most times I hear nothing. Worth a try.

While we’re on this subject, the other issue that jerks my chain is paywalled publications that receive federal grant funding. If taxpayer dollars are involved, I believe the publications should be public. You would not believe how many paywalled articles receive funding from the National Science Foundation (NSF).

June 24, 2022 3:02 am

While resolution is GTH, the general trends are still clear enough. We know the rates rise faster due to humans. The question is whether CO2, itself a function of ocean temperature in response to temperature change under Henry’s Law, has any net effect on the climate, after the dominant cloud control mechanism response occurs in response to whatever small GHE perturbation CO2 can create in the Lapse rate, itself determined by the Barometric equation, not GHE.

Q. The core issue within this particular data set is not the density of data in the time series; it is whether CO2 still lags temperature when the revised data is plotted against temperature. This paper does not appear to confirm or deny that. CO2 still rising after a better-defined peak temperature should confirm the lag, or not if not, etc. Does it?

Q. Is the new/revised data set available somewhere to plot against the corresponding temperature proxy data?

Or did I speed read too fast, and this most important topic/question is answered by the authors or the reviewers? It seemed to me that the objective is to create doubt about the data and hence the natural lagging nature of CO2 under the laws of physics, the natural response to warming oceans under Henry’s Law, while not saying so directly, because that is not proven by this “analysis”, as I quickly read it, so they can’t say it out loud.

What do the authors of the original data say? It seems they have not been asked to comment. Odd. What was the actual purpose of this paper? The response to the paper as presented is “So what?”. Did they do it to publish something, or because there was a grant available? No other obvious purpose. To me.

Is there an agenda hidden in this paper; is it science or climate politics, again? Am I missing something? More questions than answers.

The revised data set for CO2 and corresponding temperature would be interesting to plot and see for myself.

Renee
Reply to  Brian R Catt
June 24, 2022 10:47 am

Brian,
Is the new/revised data set available somewhere to plot against the corresponding temperature proxy data?

Bereiter’s CO2 dataset has been around since 2015 and has not been revised. It has frequently been plotted against temperature and shown to lag temperature at glacial terminations and inceptions by 200-800 years.

“Is there an agenda hidden in this paper, is it science or climate politics, again?”

The post simply shows ice core CO2 sample spacing and its compounding effect on top of CO2 diffusion in the firn. The authors of the data recognize this distinction: “For carbon dioxide, firn smoothing appears to significantly diminish the recorded rates of change in abrupt CO2 increases, compared to their atmospheric values. The estimations of CO2 rates of change are further altered by the process of discrete measurement, and measured values can be 3 times lower than the actual atmospheric rate of change.” (Fourteau, 2020).

ThinkingScientist
June 24, 2022 5:03 am

Having read through this and the comments briefly I would point out that on this occasion, like Javier, I generally agree with what NS says.

For a stationary time series the mean and variance will not change if the sample rate of the function is changed.

This is not the same as smoothing the time series with a moving average filter. Smoothing with a moving average will leave the global mean the same but will reduce the variance.
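
A quick numerical sketch of that distinction (an illustration added here with arbitrary toy numbers, not part of the original comment): subsampling a stationary series leaves its mean and variance essentially unchanged, while a moving average keeps the mean but shrinks the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stationary toy series: white noise around a constant mean of 250 ppm
x = 250 + 10 * rng.standard_normal(20000)

# Coarser sampling: keep every 20th value
sub = x[::20]

# Smoothing: 20-point moving average (a stand-in for firn-style low-pass filtering)
kernel = np.ones(20) / 20
smooth = np.convolve(x, kernel, mode="valid")

for name, s in [("original", x), ("subsampled", sub), ("smoothed", smooth)]:
    print(f"{name:11s} mean = {s.mean():6.2f}   variance = {s.var():7.2f}")
# Subsampling keeps both statistics close to the original;
# the moving average keeps the mean but cuts the variance roughly 20-fold.
```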

Sample rate and resolution (bandwidth of the underlying signal) are not the same thing. An undersampled signal will be aliased. Oversampling a signal does not reveal any more information than the inherent resolution of the underlying signal allows. In the ice core case we need to know the scale of the underlying vertical averaging due to natural processes such as mixing.

What is actually being talked about here is the inherent scale at which the observations are made, ie the volume sampled. Note that for ice cores there are at least two smoothing processes – one is the size of the sample cut from the core, and the other is the moving-average effect of all the natural processes acting on the gases to mix and smooth. The effective sample size is referred to in geostatistics as the support of the measurement. Providing the sample rate along the core is sufficient, increasing or decreasing the sample rate will not change the basic statistics observed. Of particular interest would be the variance, ie identifying past extremes. You can change the scale of support up (larger) by moving average, which results in a loss of information, but you cannot go the other way, as you cannot gain information you never observed. Seismic data is a good example of a very large moving average due to its band-limited nature. If seismic is recorded with a signal bandwidth of say 10-60 Hz, this is well within the Nyquist range of a 4 ms sample rate, which can capture everything from 0 to 125 Hz. Resampling the seismic signal to 1 ms still leaves a signal of 10-60 Hz – there is no gain of information.
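
The aliasing point can be made concrete with a minimal sketch (illustrative only; the 60 Hz and 4 ms figures come from the seismic example above, while the 80 Hz undersampling case is a hypothetical added for contrast).

```python
import numpy as np

f_true = 60.0                                   # Hz, inside the 10-60 Hz band above

# Adequate sampling: 4 ms (250 Hz) -> Nyquist 125 Hz, comfortably above 60 Hz
t_ok = np.arange(0, 1, 0.004)
ok = np.cos(2 * np.pi * f_true * t_ok)          # faithfully captured

# Hypothetical undersampling: 80 Hz -> Nyquist 40 Hz, below 60 Hz
t_bad = np.arange(0, 1, 1 / 80)
bad = np.cos(2 * np.pi * f_true * t_bad)
alias = np.cos(2 * np.pi * (80 - 60) * t_bad)   # the 20 Hz alias

print(ok.size, "samples at 4 ms capture the 60 Hz component without aliasing")
print(np.allclose(bad, alias))                  # True: at 80 Hz, 60 Hz masquerades as 20 Hz
```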

If the underlying ice core data is temporally smooth due to the inherent mixing processes, changing the sample rate of the core measurements won’t increase the detail if the sample rate is already sufficient. If the sample rate is not sufficient, then the data we observe would appear aliased.

An example of change of scale of measurement (support) is in reservoir modelling in oil & gas. Core data has a very small scale of support, well logs slightly larger (the volume of rock sampled is slightly larger), and seismic amplitudes much larger again. Finally we have the cells of the reservoir model. The scale changes (by volume) involved are up to 5 orders of magnitude and the variance will change between all of them. There are further issues because the geological properties have strong vertical spatial anisotropy too, which must be considered in the mix.

Last edited 12 days ago by ThinkingScientist
Renee
Reply to  ThinkingScientist
June 24, 2022 1:46 pm

ThinkingScientist,
Your analogies are good ones if one is sampling the same reservoir rock.

There are several issues being discussed: smoothing of CO2 due to diffusion in ice, ice core sample density in both depth and time, and then calibration to modern atmospheric data. At least within the same geologic basin or reservoir, measurements at the different scales are typically made and calibrated over the same medium and intervals. Also, your analogies have their own pitfalls, such as converting seismic data from time to geologic depth.

In the Paleoclimate realm, the attempt is to compare ice core CO2 measurements from different time intervals, with varying sample rates, to a continuous high resolution instrumental atmospheric record of CO2. Ice core data, at least those older than about 60,000 BP, are not fit for this purpose. Similar to using a well log from the Permian Basin to depth convert seismic data in the Gulf of Mexico.

ThinkingScientist
Reply to  Renee
June 25, 2022 12:40 am

Renee,

If the underlying ice core gas data is inherently smooth over depth, due to the low-pass nature of how gas diffuses and mixes, then any spikes of CO2 that occurred could not be detected, because they are averaged out by the natural processes of ice deposition and preservation.

Changing the sample resolution will not change that. Javier explained it clearly up thread and I agree with him.

Pat Frank
Reply to  ThinkingScientist
June 24, 2022 5:35 pm

The ice core record is not a stationary time series.

Clyde Spencer
Reply to  Pat Frank
June 24, 2022 8:24 pm

You beat me to it. There may be intervals where a segment of the time series has no trend. However, the example in Fig. 1 shows periods of non-stationary data with different trends and different intervals for the +/- trends. Depending on the start and stop times, one will get different means and SDs. If the sample spacing is not uniform across all time intervals, then a bias will be introduced, giving more weight to the intervals with more samples.

Measuring the diameter of a ball bearing over an interval of time is (assuming constant temperature) a stationary time series. However, most time series have changes over time that change the mean and SD.
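
A toy example of that weighting bias (the numbers are invented purely for illustration): when one interval is sampled far more densely than another, the simple mean of the samples is pulled toward the densely sampled interval.

```python
import numpy as np

# Toy record over 1,000 "years": 200 ppm in the first half, 280 ppm in the second
t = np.arange(1000)
co2 = np.where(t < 500, 200.0, 280.0)
true_mean = co2.mean()                        # 240 ppm with equal time weighting

# Non-uniform sampling: 50 samples in the first half, only 5 in the second
dense = np.linspace(0, 499, 50).astype(int)
sparse = np.linspace(500, 999, 5).astype(int)
samples = co2[np.concatenate([dense, sparse])]

print(true_mean, samples.mean())              # 240.0 vs ~207.3 -> biased low
```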

Renee
Reply to  Pat Frank
June 24, 2022 10:17 pm

The depth record is stationary but the conversion to time is non-stationary (compare figures 2 and 3 in my post).

ThinkingScientist
Reply to  Pat Frank
June 25, 2022 12:36 am

To a first order approximation it is. It can still have periodic behaviour and be first-order stationary. Geostatistical realisations with a variogram will exhibit periodic-like behaviour due to spatial correlation and are first-order stationary, ie the increments of the process are stationary. They are a common basis for modelling geological behaviour.

A seismic trace has wiggles up and down and is still regarded as a stationary time series ie mean and variance are not changing systematically with time.

ThinkingScientist
Reply to  Pat Frank
June 25, 2022 1:35 am

Is a long time series sine wave stationary?

The basic test is whether the mean is increasing or decreasing over time. Providing the observation window is of sufficient length then yes a sine wave can be considered as stationary.

This principle applies in geostatistics for variogram based work.

And I would argue that because the long ice core data have repeated cycles they can be regarded as stationary in the same way.

There is one other point about the article I do agree with: you cannot compare rates of change for modern measurements with rates of change from effectively low-pass filtered ice records of CO2. To make the comparison the two datasets must be on the same support. Because ice records are inherently low-pass filtered, that cannot be done and it is not meaningful to do so. Ditto splicing modern temperature records on the end of paleo-reconstructions… not valid.

At the resolution of ice core data the modern record would plot as no more than a single point (in fact it’s probably less than that!). As Javier points out though, absence of evidence is not evidence of absence, and vice versa. Ice cores cannot be used as evidence to claim there are no paleo rates of CO2 increase comparable to the recent modern rise, because even if they occurred, the low-pass filtering caused by the natural processes of preservation would prevent us from seeing them in the record.
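
A rough illustration of why the modern record collapses at ice-core support (a sketch only: the CO2 ramp below is stylized rather than measured data, and the Gaussian smoothing width of roughly two centuries is an assumed stand-in for firn smoothing, which varies by site):

```python
import numpy as np

# Stylized annual CO2 for 1850-2020: a ~125 ppm rise on a 285 ppm baseline.
# This is an illustrative ramp, NOT measured Law Dome / Mauna Loa data.
years = np.arange(1850, 2021)
co2 = 285 + 125 * ((years - 1850) / 170.0) ** 2

# Crude stand-in for firn smoothing: Gaussian kernel with an assumed
# sigma of ~85 years. Real gas-age distributions differ by site.
pad = np.concatenate([np.full(400, 285.0), co2, np.full(400, co2[-1])])
k_t = np.arange(-300, 301)
kernel = np.exp(-0.5 * (k_t / 85.0) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(pad, kernel, mode="same")[400:400 + len(co2)]

print("raw rise over 170 yr:      ", round(co2[-1] - co2[0], 1))       # ~125 ppm
print("smoothed rise over 170 yr: ", round(smoothed[-1] - smoothed[0], 1))
# The smoothed curve spreads the rise over centuries; sampled every few
# hundred years it would register as one modest step, not a sharp spike.
```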

To understand the impact of support on the variance of measurements I would suggest reading the relevant chapter in Isaaks and Srivastava (1989), An Introduction to Applied Geostatistics. It’s out of print but there are plenty of used copies about.

Last edited 11 days ago by ThinkingScientist
ThinkingScientist
Reply to  ThinkingScientist
June 25, 2022 1:53 am

As an addendum I would also point out that in geostatistical random function theory stationarity is not a property of the data, it is an assumption about the underlying stochastic model. If the underlying stochastic random function is stationary but has spatial correlation then a realisation of the process can appear to have areas/clusters of different mean values but the process is still stationary. Depends then on the viewing window and the variogram range

Nick Stokes
Reply to  ThinkingScientist
June 25, 2022 6:51 am

“As an addendum I would also point out that in geostatistical random function theory stationarity is not a property of the data, it is an assumption about the underlying stochastic model.”

That shows why it makes no sense here to talk of stationarity, or of autocorrelation. These are properties of stochastic models. But no-one is thinking of the time history of CO2 ppm as a stochastic model. Can anyone suggest what such a model might be?

If you have fitted a curve to some physical data, you might represent the residuals by a stochastic model. But it is only in special circumstances that you would use a stochastic model for the data itself.

ThinkingScientist
Reply to  Nick Stokes
June 25, 2022 10:02 am

Your statement shows you don’t know the first thing about geostatistics or random function models, nor understand any of the basic principles. In geostatistics the working assumption is that the observations are viewed as a single realisation of a spatially or temporally correlated stochastic process. The optimal interpolator is then kriging and, of course, we can explore the uncertainty by generating realisations through conditional simulation.

It is a framework specifically developed for and successfully applied to dealing with geoscience data. Like ice core data.

Nick Stokes
Reply to  ThinkingScientist
June 25, 2022 1:19 pm

OK wise one. So what is the stochastic model underlying the CO2 ppm data?

What stationary stochastic process explains the deglaciations?

Renee
Reply to  Nick Stokes
June 25, 2022 10:17 pm

The current CO2 composite is too deterministic. It only uses one kind of dataset, Antarctic ice cores, and it doesn’t incorporate all the studies. It is cherry-picked to use the most conservative data, Dome C and Vostok.

The CO2 composite needs to be more stochastic. It should include all the data in Bereiter’s spreadsheet. At the Holocene interglacial inception shoulder event there is a 25 ppm spread between Antarctic datasets: Dome C, Siple, Byrd, WAIS, Taylor Dome and more. It should include not only the dry extraction methods but also the wet extraction. Dry is the conservative end member and wet is the upper; the CO2 composite should be presented as a range with its mean.

Why are we ignoring data?

Nick Stokes
Reply to  Renee
June 25, 2022 10:53 pm

Renee,
“The current CO2 composite is too deterministic. 

The CO2 composite needs to be more stochastic.”

There are some very basic misunderstandings in the talk of stationarity, autocorrelation and stochastic processes in this discussion. What we have is a set of measurements of CO2 ppm down an ice core. The distance can be identified with time. Those are not the product of a stochastic process. They are measured numbers, varying with time, and it makes no sense to treat them as products of a global stochastic process.

What you just said relates to uncertainty. The true values could be treated as the sum of the observation plus a small stochastic number, which I could call the residual. OK, that residual is indeed stochastic, and you can talk of it being stationary, correlated or whatever. It probably has mean zero, but is in fact unlikely to be stationary; the uncertainty would be much greater at the deglaciation peak, for example.

In fact Bereiter gives the sigmas for those uncertainties in column 3 of his data. They certainly aren’t stationary. Maybe you think they should be bigger, but it is silly to say that without first looking at how he calculated them.

Renee
Reply to  Nick Stokes
June 25, 2022 11:57 pm

His uncertainties in column 3 represent the sigma of a mean CO2 value derived from 4-6 closely spaced samples, typically less than 1.5-2 ppm. The range of uncertainty around the CO2 composite is too narrow.

Renee
Reply to  Renee
June 26, 2022 12:13 am

Here’s a plot of all of Bereiter’s data over the Holocene. Look at all the variability in the Antarctic datasets, up to 20 ppm. Yet, the CO2 composite represents the minimum CO2 values.

[Attached image]
Last edited 10 days ago by Renee
Renee
Reply to  Nick Stokes
June 26, 2022 12:03 am

Nick,
I interpret stochastic as being more random, with a wider distribution including multiple datasets, like the temperature reconstructions. Why are we so certain about CO2?

Last edited 10 days ago by Renee
Reply to  Renee
June 26, 2022 2:03 am

I think the conventional wording is that it has a larger stochastic component. But if you think his sigmas are too small, you’d need to give an alternative calculation. CO2 is an actually measured gas ppm, not a proxy.

ThinkingScientist
Reply to  Renee
June 27, 2022 12:46 am

Renee,

An understanding of geostatistics and its underlying assumptions would be very useful in your studies, I think. Geostatistics is particularly helpful in considering changes of scale of measurements and the associated change in variance of observations. It is also firmly grounded in the concept of spatial correlation (ie autocorrelation).

A number of comments from Nick Stokes and Pat Frank concerning stationarity, autocorrelation and stochastic models show they have not studied the perspective geostatistics brings to this area via the works of people like Matheron and Journel (as well as Isaaks and Srivastava, which I referenced earlier in the thread). In particular, the concept of the underlying random function model and its statistical inference are important, and many people really don’t get it.

However, I would suggest the importance of the change of scale of measurement (support), both induced by natural processes (effectively averaging or low pass filtering in the case of ice cores) and by the measurement process itself are very relevant to your work.

Nick Stokes
Reply to  ThinkingScientist
June 27, 2022 2:19 am

“the concept of the underlying random function model”
But you can’t tell us what that underlying random function model is. In particular, what it is modelling.

ThinkingScientist
Reply to  Nick Stokes
June 27, 2022 4:25 am

Nick,

I am not going to give a treatise on geostatistics and random function theory in a response on this blog. Many people are familiar with the concept of kriging and perhaps understand its “best linear unbiased estimator” status, based on the assumption of an underlying variogram model to describe spatial dependency and least-squares solving as a 2D or 3D estimator. However, people generally seem unfamiliar with the conditional simulation technique that goes hand in hand with kriging, or with the underlying concept of a random function model that provides the theoretical foundation for both. If you are interested in this area I would suggest you start with Matheron’s theory of regionalised variables and its applications (1971):

MATHERON_Ouvrage_00167.pdf (ensmp.fr)

And then maybe: MATHERON G (1973). The intrinsic random functions and their applications. 

MATHERON_Publication_00180.pdf (ensmp.fr)

Journel has of course published widely on a lot of this stuff, along with many, many others. There is some excellent work relating amplitude and phase spectrum in 3D FFT’s to conditional simulation, an area I have used in my own work.

For the purposes of understanding support Matheron 1971 does cover this but Isaaks and Srivastava 1989 has an excellent and readable undergraduate demonstration of the principles, including the very useful concept that in a change of support (block) scheme the total variance = variance within blocks + variance between blocks.

The question of the support of a measurement is fundamental to understanding why it is not appropriate to splice together datasets at different support, eg paleo vs modern temps, and why the comparison is misleading. The question of observed variance and rates of change of ice core vs modern CO2 measurements is fundamentally a question of the support of the measurement and is relevant to the (excellent) work that Renee has posted here.
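
For readers who want to see the within-plus-between variance identity mentioned above in action, here is a minimal numerical check (toy values, not taken from Isaaks and Srivastava):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(230, 15, size=1200)        # toy point-support values
m = 24                                    # block size (points per block)
blocks = x.reshape(-1, m)

total_var = x.var()                       # point-support (population) variance
within = blocks.var(axis=1).mean()        # mean variance inside blocks
between = blocks.mean(axis=1).var()       # variance of the block means

print(round(total_var, 4), round(within + between, 4))   # equal to rounding
# Averaging up to block support keeps only the "between" part,
# so the variance seen at coarser support is always smaller.
```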

Last edited 9 days ago by ThinkingScientist
Nick Stokes
Reply to  ThinkingScientist
June 27, 2022 9:48 am

TS,
“I am not going to give a treatise”
I am not asking for a treatise. I am just asking what this supposed underlying stochastic model is. Or at the very least, what it is actually modelling.

And I am very unimpressed that you can’t supply that simple information.

ThinkingScientist
Reply to  Nick Stokes
June 27, 2022 9:56 am

I gave you the original references which are freely downloadable, go figure it yourself. You are a bright guy.

You can lead a horse to water…..

Reply to  ThinkingScientist
June 27, 2022 2:57 pm

Matheron can’t tell me what you want to model here as a stationary random process. Is it the CO2 ppm itself? With glacial transitions and all? What kind of model would you use for that?

Renee
Reply to  ThinkingScientist
June 27, 2022 12:12 pm

ThinkingScientist,

Thank you for the valuable information and insight on geostatistics. I am still in the CO2 data-gathering stage, so to speak. It is apparent that only selected CO2 datasets from Antarctic cores are used in the CO2 composite, not all of the data. This reduces the variance and uncertainty and biases the data toward the low end.

For example, CO2 data using different extraction techniques with higher uncertainties such as the wet measurements are simply discarded. These data could easily be used to establish an upper limit or used in a stochastic manner.

The splicing of high resolution CO2 instrumental data to low resolution ice cores is so misleading. This one practice could be eliminated by presenting or splicing on an attenuated modern CO2 signal similar to the low res core data. Joos, 2008, presented one scenario shown in my response to Javier.

My forward plan is to compile and present all the data. I’m retired and don’t have access to statistical tools. But would welcome working with an expert in statistics.

ThinkingScientist
Reply to  Renee
June 28, 2022 6:33 am

Hi Renee, I am not a statistics expert but I do know my way around geostatistics reasonably well. The key area for ice core data, I think, is the issue of change of variance due to change of support. This would include the effective change of support induced by the physics of preserving gas in the core. As I understand it this is effectively a low pass filter (or moving average). The support effect in geostatistics is very well understood and the calculations are relatively simple to make. Trying to understand the support of the measurements and then compensating high resolution modern measurements so they can be properly compared to lower resolution ice core measurements would be very worthwhile I think. It would be a small part of the excellent work you have embarked on.

Pat Frank
Reply to  ThinkingScientist
June 25, 2022 3:30 pm

…but the process is still stationary.”

Shouldn’t that be, ‘but the model is still stationary’?

ThinkingScientist
Reply to  Pat Frank
June 25, 2022 3:54 pm

Yes, correct.

Pat Frank
Reply to  ThinkingScientist
June 25, 2022 3:08 pm

And I would argue that because the long ice core data have repeated cycles they can be regarded as stationary in the same way.”

I wouldn’t.

Dealing with the data itself, as opposed to the data in the context of a model, one can measure autocorrelation to determine the number of statistically independent points in the time series. That determines resolution.

Here’s the Vostok ice core CO₂ record in 10,000 year bins. Does it look stochastic and stationary?

[Attached image: 1987 Vostok Ice Cores CO2 Barnola.png]
Pat Frank
Reply to  Pat Frank
June 25, 2022 3:25 pm

Here is a histogram of a sinusoid of 8640 points with a period of 24 points, in bins of 240 points (years), 10× the period. The sinusoid models an annually resolved 8640-year periodic time series with a mean of zero and constant variance.

It doesn’t look anything like the Vostok series.

The lag-1 autocorrelation is 0.966.

The sinusoid is fully determined by the Nyquist rule. Each period is defined by 2 points per 24, leaving 720 unique points.

Statistically independent points are 720*(1-0.966)/(1+0.966) ≈ 12.5. This is the limit of resolution of the sinusoid.

A signal with period 12.5 years or less would be unresolved in a perfectly periodic time series with a 24 year period, which seems entirely reasonable.
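
The calculation described above can be reproduced in a few lines (a sketch of the same arithmetic, using the 8640-point, period-24 sinusoid as stated):

```python
import numpy as np

n, period = 8640, 24
x = np.sin(2 * np.pi * np.arange(n) / period)   # annually resolved sinusoid

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]           # lag-1 autocorrelation ~ cos(2*pi/24) ~ 0.966
nyquist_pts = 2 * n / period                    # 2 points per cycle -> 720
n_eff = nyquist_pts * (1 - r1) / (1 + r1)       # ~12.5 statistically independent points

print(round(r1, 3), int(nyquist_pts), round(n_eff, 1))
```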

[Attached image: 1987 Vostok Ice Sinusiod Stationary test.png]
Last edited 10 days ago by Pat Frank
Charles Higley
June 24, 2022 8:19 am

Jaworowski, the world’s authority on ice cores, has estimated that 30–50% of CO2 is lost from cores through micro-fracturing during extraction from the high pressures of their original depth, a huge depressurization. Using 40% as the average CO2 loss and back-calculating from the 300 ppm CO2 measured for 330,000 years ago (300 / (1 − 0.40) ≈ 500), the CO2 back then was around 500 ppm, which is 25% HIGHER THAN WE ARE NOW.

The false claim that CO2 is higher now than anytime in millions of years is pure propaganda and not based on any real science or facts. In fact it was much higher than now back in the 1940s, but you will never hear about that.

To assume that ice core CO2 data is anything near empirical data is simply bad science, dishonest science, and convenient for the propagandists.

Bill Rocks
Reply to  Charles Higley
June 24, 2022 2:51 pm

Fascinating. I think you just threw a monkey wrench or a spanner into the gear box.

Reply to  Charles Higley
June 24, 2022 5:33 pm

Jaworowski was professionally defamed for his critically honest commentary.

Ferdinand Engelbeen
Reply to  Charles Higley
June 25, 2022 11:47 am

Charles,

Let the late Dr. Jaworowski rest in peace, together with his wrong ideas about ice cores.
His main objections from 1992 were already refuted in… 1996 by the data from three Law Dome ice cores by Etheridge et al., but Jaworowski repeated his already-refuted claims in 2007.

He was a specialist in radioactive fallout in ice cores and never performed any work on CO2 in ice. Metal ions can migrate through the ice matrix; CO2 can’t.

Further, how can one measure 200 ppmv CO2 in ice through cracks in the ice, if the surrounding air contains over 350 ppmv (and probably more in the closed room where the measurements were made)? Would CO2 not go from high levels to lower levels?

More about Jaworowski:
http://www.ferdinand-engelbeen.be/klimaat/jaworowski.html

Reply to  Ferdinand Engelbeen
June 26, 2022 3:03 pm

Would CO2 not go from high levels to lower levels?

It would depend upon exposure and diffusion rate.

How are cores transported from ice-field to lab? Are they sheathed in a transport and storage tube? Is the residual air in the tube replaced with nitrogen?

Air-tight sheathing with a replacement N₂ atmosphere in the tube would be a good idea. Once in the lab, one could sample the nitrogen for any CO₂ that might have diffused from the ice core. Following analysis of a core section, one could correct the derived concentration by the CO₂ found in the nitrogen blanket.

Of course, that approach assumes uniform outgassing across the entire ice core, which may not be correct. But it would allow an average uncertainty spread around the values.

When a section is sawed off for analysis, how long is it exposed to air?

Once the core section is in the chamber for fracturing (or melting) and analysis, we can surmise that exposure to ambient CO₂ is absent.

Bill Rocks
June 24, 2022 2:44 pm

Excellent post and conversation. Valuable content.

Gordon A. Dressler
June 24, 2022 8:53 pm

Despite the described variations in ice core sampling rates, Figure 1 appears to indicate that even the slowest sampling rates (i.e., the greatest distances between CO2 sample locations in the cores) are sufficient to resolve the major cycling noted between marine isotope stages MIS 1 back to MIS 19.

Whether using the Figure 1a curve or the Figure 1b curve, one can “eyeball” a predominant cycle period of ~100,000 years.

This period is consistent with Earth’s average of about 100,000 years between glacial periods as seen over the last million years or so (ref: https://en.wikipedia.org/wiki/100,000-year_problem ).

Left undiscussed in the above article is the cause of this quasi-periodicity. To the best of my knowledge, this is still a largely unresolved issue.

As noted in the cited Wiki reference, some have speculated that it relates to a Milankovitch cycle (e.g., the cycle of Earth’s orbital-plane inclination relative to the solar system’s invariable plane) or a resonance of different Milankovitch cycles, but the fundamental problem is that these have rather precisely defined periods that do not admit the +/- temporal variability seen in the measured cycle periods.

Is there any recently developed scientific, ahem, consensus regarding the “100,000 year problem”?
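
One simple way to check the eyeballed ~100,000-year period is a periodogram of the composite (a sketch only: the filename is a hypothetical placeholder, the 1,000-year interpolation grid is an arbitrary choice, and a Lomb-Scargle analysis would handle the uneven spacing more rigorously):

```python
import numpy as np

# Hypothetical two-column text file (age in years BP, CO2 in ppm) exported
# from the Bereiter (2015) composite; the filename is only a placeholder.
age, co2 = np.loadtxt("bereiter2015_co2.txt", unpack=True)
order = np.argsort(age)
age, co2 = age[order], co2[order]

# Interpolate the unevenly spaced record onto an even 1,000-year grid,
# remove the mean, and take a simple periodogram.
grid = np.arange(age.min(), age.max(), 1000.0)
even = np.interp(grid, age, co2)
even = even - even.mean()

power = np.abs(np.fft.rfft(even)) ** 2
freq = np.fft.rfftfreq(even.size, d=1000.0)      # cycles per year

peak = freq[1:][np.argmax(power[1:])]            # ignore the zero-frequency term
print("dominant period ~", round(1.0 / peak), "years")   # expect roughly 100,000
```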

June 25, 2022 7:32 am

I think that Renee did put too much weight on the sampling period…

There is ample reason to believe that there were no fast, high natural emissions of CO2 in the past 800,000 years.
The “fast” increase of CO2 when the earth is warming from a glacial to an interglacial period is about 100 ppmv in 5,000 years or 0.02 ppmv/year.

Even if there were extreme volcanic episodes in that period of, let’s say, 100 ppmv in 150 years (comparable to the current increase), the removal of that extra CO2 above the ocean-atmosphere equilibrium per Henry’s law would take some 150 years as well (half-life around 35 years) before the extra CO2 “peak” became undetectable (the resolution of CO2 measurements in ice cores is 1.2 ppmv, 1 sigma).
That means that even with a sampling period of 400 years, there would be a detectable small peak standing out from the “normal” changes of around 8 ppmv/K for Antarctic temperatures, or 16 ppmv/K for global temperatures (with a lot of lag)…
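
The decay arithmetic behind that argument can be checked directly (a sketch under the stated assumptions: a 100 ppmv pulse, a 35-year half-life treated as a single exponential with no carbon-cycle detail, and the quoted 1.2 ppmv one-sigma detection level):

```python
import numpy as np

pulse = 100.0        # ppmv excess at the end of the hypothetical episode
half_life = 35.0     # years, as assumed in the comment above
detect = 1.2         # ppmv, the quoted 1-sigma measurement resolution

# Remaining excess at 50-year steps after the episode ends
for t in np.arange(0, 401, 50, dtype=float):
    excess = pulse * 0.5 ** (t / half_life)
    print(f"{t:4.0f} yr later: {excess:7.3f} ppmv excess")

# Time for the excess to fall below the detection level under this
# single-exponential simplification (no carbon-cycle detail):
t_detect = half_life * np.log2(pulse / detect)
print(f"excess drops below {detect} ppmv after about {t_detect:.0f} years")
```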