What can we learn from the Mauna Loa CO2 curve?

Guest post by Lance Wallace

The carbon dioxide data from Mauna Loa is widely recognized to be extremely regular and possibly exponential in nature. If it is exponential, we can learn about when it may have started “taking off” from a constant pre-Industrial Revolution background, and can also predict its future behavior. There may also be information in the residuals—are there any cyclic or other variations that can be related to known climatic oscillations like El Niños?

I am sure others have fitted a model to it, but I thought I would do my own fit. Using the latest NOAA monthly seasonally adjusted CO2 dataset running from March 1958 to May 2012 (646 months) I tried fitting a quadratic and an exponential to the data. The quadratic fit gave a slightly better average error (0.46 ppm compared to 0.57 ppm). On the other hand, the exponential fit gave parameters that have more understandable interpretations. Figures 1 and 2 show the quadratic and exponential fits.
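The article used Excel Solver for the nonlinear fits. As a rough sketch of the same three-parameter exponential fit, here is a pure-Python version that scans the e-folding time and solves the two remaining (linear) parameters by least squares. The data below are a synthetic stand-in shaped like the fitted curve, not the actual NOAA file, so the recovered parameters simply echo the assumed ones.

```python
# Sketch of a three-parameter exponential fit C(t) = c0 + a*exp(t/tau).
# For a fixed tau the model is linear in (c0, a), so we scan tau and solve
# the 2x2 normal equations at each step, keeping the best residual.
import math

def fit_linear_pair(t, y, tau):
    """For fixed tau, fit C = c0 + a*x with x = exp(t/tau) by least squares."""
    x = [math.exp(ti / tau) for ti in t]
    n = len(t)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    det = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / det
    c0 = (sy - a * sx) / n
    resid = sum((c0 + a * xi - yi) ** 2 for xi, yi in zip(x, y))
    return c0, a, resid

# Synthetic stand-in for the 646 monthly values: 260 + 55*exp(t/59),
# with t in years since March 1958 (the 55 ppm amplitude is illustrative).
t = [i / 12.0 for i in range(646)]
y = [260 + 55 * math.exp(ti / 59.0) for ti in t]

best = min(((fit_linear_pair(t, y, tau), tau) for tau in range(30, 100)),
           key=lambda r: r[0][2])
(c0, a, _), tau = best
print(c0, a, tau)   # recovers ~260, ~55, 59 for this synthetic series
```

On the real monthly data the residual would of course not vanish at the best tau; comparing the minimized residual against the quadratic fit's residual is what the 0.46 vs 0.57 ppm comparison above amounts to.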

image

Figure 1. Quadratic fit to Mauna Loa monthly observations.

image

Figure 2. Exponential fit

 

From the exponential fit, we see that the “start year” for the exponential was 1958 − 235 = 1723, and that in and before that year the predicted CO2 level was 260 ppm. These values are not far off the estimated level of 280 ppm that held up until the Industrial Revolution. It might be noted that Newcomen invented his steam engine in 1712, although the start of the Industrial Revolution is generally considered to be later in the century. The e-folding time (for the incremental CO2 above 260 ppm) is 59 years, corresponding to a doubling time of 59 ln 2 ≈ 41 years.

The model predicts CO2 levels in future years as in Figure 3. The doubling from 260 to 520 ppm occurs in the year 2050.

image

Figure 3. Model predictions from 1722 to 2050.
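The quoted milestones follow directly from the fitted parameters. Assuming the model takes the form C(year) = 260 + exp((year − 1723)/59) — baseline 260 ppm, start year 1723, e-folding time 59 years — a quick check:

```python
# Checking the quoted milestones of the exponential model
# C(year) = 260 + exp((year - 1723)/59).
import math

# The increment above baseline doubles every tau*ln(2) years.
doubling_time = 59 * math.log(2)

# C reaches 2*260 = 520 ppm when the increment equals 260 ppm,
# i.e. exp((year - 1723)/59) = 260.
year_520 = 1723 + 59 * math.log(260)

print(round(doubling_time), round(year_520))
```

With these rounded parameters the doubling of total CO2 lands at about 2051, within a year of the 2050 shown in Figure 3; the small difference reflects rounding of the fitted values.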

The departures from the model are interesting in themselves. The residuals from both the quadratic and exponential fits are shown in Figure 4.

image

Figure 4. Residuals from the quadratic and exponential fits.

Both fits show similar cyclic behavior, with the CO2 levels higher than predicted from about 1958-62 and also 1978-92. More rapid oscillations with smaller amplitudes occur after 2002. There are sharp peaks in 1973 and 1998 (the latter coinciding with the super El Niño.) Whether the oil crisis of 1973 has anything to do with this I can’t say. For persons who know more than I about decadal oscillations these results may be of interest.

The data were taken from the NOAA site at ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt

The nonlinear fits were done using Excel Solver and placing no restrictions on the 3 parameters in each model.

Gail Combs
June 8, 2012 1:30 pm

Ferdinand Engelbeen says: June 6, 2012 at 8:55 am
And again, if you find that the AIRS data show that the CO2 is not well mixed, then you have a different definition of well mixed than I have. Well mixed doesn’t mean that any huge injection or removal of CO2 in/out the atmosphere is instantly distributed all over the earth. It only says that such exchanges will be distributed all over the earth in a reasonable period of time.
______________________________________
Ah yes, the let's-change-the-argument defense.
The whole idea behind the “well mixed” argument was so the CAGW team could take measurements and SHOW that CO2 is increasing. If CO2 is NOT well mixed, i.e. UNIFORMLY DISTRIBUTED throughout the atmosphere, and there is no such thing as a “Baseline CO2”, then you end up with huge error bars on all your measurements and you cannot show that CO2 is increasing, which was the whole goal of the entire exercise.
Think of the very nice smooth curve showing increasing CO2 from MANKIND where Ice Core data is grafted to Callender’s cherry-picked data that is then grafted to the Mauna Loa data. Lucy shows the graph here and the fudge used to make it fit here.
Or of Willis’s Carpet Diagram
A study of WHEAT shows just how fast the CO2 content in air can change.

The CO2 concentration at 2 m above the crop was found to be fairly constant during the daylight hours on single days or from day-to-day throughout the growing season ranging from about 310 to 320 p.p.m. Nocturnal values were more variable and were between 10 and 200 p.p.m. higher than the daytime values. http://www.sciencedirect.com/science/article/pii/0002157173900034

Then there are the Cameroon killer lakes that release CO2 and can kill people and animals as far away as 25 km (15 miles) link As Myrrh keeps reminding us, CO2 is heavier than air and it takes energy (wind) to lift it into the atmosphere. So how come we are not seeing major spikes in CO2 down wind from coal plants or cities?
As Allan MacRae says June 4, 2012 at 7:38 pm

The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature. If your premise was correct, you would see CO2 peaks at breakfast and supper times and the proximate (in time) morning and evening rush hours, when power demand and urban driving are at their maxima. This human signature is absent in the SLC data, and yet the natural signature is clearly apparent and predominant…
Similarly, in the AIRS animation I posted earlier, there is NO human signature and the power of nature is clearly evident. Here it is again. http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

So the CO2 is not really well mixed, NOT EVEN IN THE MID TROPOSPHERE, where it is shown to vary by as much as six percent even when taking samples from an area as big as 90 km × 90 km. What it does show is that CO2 levels are subject to the dynamics of the natural carbon/water cycle.
On the plant stomata: they are good for low values of CO2 and underestimate values above 325 ppm. Therefore they are a decent check on the low values found by the ice core analysis.

….Plant Stomata react more accurately to CO2 concentration, as has been determined in experiments. (More CO2 means fewer stomata, as plants exchange CO2 more efficiently.) Historical collections of leaves can be used to determine past CO2 levels. In most cases, researchers are bound by the modern paradigm, and get confused by the low stomata counts of the past. Stomata cannot measure very high CO2, but only indicate high CO2. Higher CO2 levels over 325 ppm are underestimated. When reading stomata research, you need to filter out the ruling paradigm when the problematical ice-core data is used to calibrate the stomata, when it should be the reverse.
Rapid atmospheric changes are well known from past reconstructions:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC129389/pdf/pq1902012011.pdf

BTW, my definition of “well mixed” comes from doing analysis of batches of drugs regulated by the FDA. A wishy-washy definition like you have would have landed my rear in JAIL!

FerdiEgb
June 8, 2012 3:24 pm

Gail Combs says:
June 8, 2012 at 1:30 pm
If there is no such thing as a “Baseline CO2″ then you end up with huge error bars on all your measurements and you can not show that CO2 is increasing, the whole goal of the entire exercise.
The “error bars” as seen by AIRS are a few ppmv up and down all over the earth, while the increase is some 70 ppmv since 1950. AIRS shows the same levels and the same increase in CO2 as at Mauna Loa in the same area, and hardly any differences over the rest of the earth. Thus what is your problem? Only that you don’t like the data?
Ice Core data is grafted to Callender’s cherry-picked data that is then grafted to the Mauna Loa data
If you could for once set your biases aside and read some literature, you would know that, whatever Callender’s criteria were to pick the best data (and several were right on the mark), the ice core data simply confirmed his “best guesses”.
The ice core data are in no way grafted onto the Mauna Loa data. That is what the late Jaworowski said, but that is completely bogus and, in my opinion, completely disqualified him as an ice core specialist. The “arbitrary” shift of 83 years is because Jaworowski used the column of the age of the ice layers in Neftel’s table of the Siple Dome ice core, while CO2 is measured in the gas phase, which is much younger than the ice at the same depth. See:
http://www.ferdinand-engelbeen.be/klimaat/jaworowski.html#The_arbitrary_shift_of_airice_data
Anyone with the slightest idea of how long it takes the bubbles in an ice core to close, during which exchanges with the atmosphere are still possible, would know that.
And since 1996 we have the work of Etheridge on three Law Dome ice cores, where he measured CO2 in firn, from the top down to closing depth, and in ice at the same depth. There was a real overlap of some 20 years (1960–1980) between the ice core CO2 and the measurements at the South Pole:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_sp_co2.jpg
Similarly, in the AIRS animation I posted earlier, there is NO human signature
There is hardly any measurable human signature in the momentary CO2 data, simply because the signal is too small. But look at a few years of data and it becomes clear.
As for stomata data: they have far more problems than ice core data and come nowhere near their accuracy. Stomata are a local proxy where local changes in CO2 level caused by land use changes can give a huge change in offset, even if you calibrate them to ice cores in the past century. And how do you want to calibrate ice cores with a measurement error of +/- 1.2 ppmv to stomata data with an accuracy of +/- 10 ppmv?
And why do you worry about a drug within a +/- 2% tolerance in the active ingredient, if the pharmaceutical firm slowly increased it with 30% over the years?

June 8, 2012 4:35 pm

FerdiEgb.
It is far more likely that neither ice cores nor stomata give an accurate representation of the natural scale of atmospheric CO2 variations when the concentration is as low as it is today compared to far higher levels in the geological past.
I hope we can agree that the largest part of the natural cycle would be responses to sea surface temperatures.
Well, if the background level gets as low as it was (around 280 ppm), practically at danger level for life on Earth, then the percentage swings from ocean cycling are bound to be far larger than if the background level were as high as it often was in the distant past.
For all we know the natural swings could well be a doubling from LIA to date and a halving from MWP to LIA purely from changes in the ocean/ atmosphere exchange.
Neither the ice core nor the stomata proxies are necessarily sufficiently well delineated on the multicentennial timescale to refute such a possibility.
We are trying to rely on proxies that are far too coarse for the purpose to which they are being put.

June 8, 2012 4:40 pm

“So how come we are not seeing major spikes in CO2 down wind from coal plants or cities?”
Good question. One can be even more specific than that.
The densest areas of CO2 are downwind of ocean tracks, especially where the flow hits a continent and slows down so that the CO2 can accumulate, with no significant excess at all downwind of populated areas.

June 8, 2012 5:05 pm

Whoops.
Above, I meant to say:
For all we know the natural swings could be an increase of 30% or so from LIA to date and 30 – 50% from MWP to LIA.
And as regards evidence about the CO2 distribution see here:
http://climaterealists.com/index.php?id=9508
“Evidence that Oceans not Man control CO2 emissions”

Myrrh
June 8, 2012 5:42 pm

Ferdinand, will reply to your post later this weekend.

Gail Combs
June 8, 2012 6:18 pm

FerdiEgb says:
June 8, 2012 at 3:24 pm
Gail Combs says:
June 8, 2012 at 1:30 pm
If there is no such thing as a “Baseline CO2″ then you end up with huge error bars on all your measurements and you can not show that CO2 is increasing, the whole goal of the entire exercise.
>>>>>>>>>>>>>>>>>>>>>
The “error bars” as seen by AIRS are a few ppmv up and down all over the earth….
>>>>>>>>>>>>>>>>>>>>>
Good Grief, you are doing it again, completely missing the point.
I am not talking about the error bars of the method. I have spent my career running gas chromatographs and infrared spectrophotometers. I am well aware they are accurate and precise if used properly and calibrated.
What I am talking about is the error in determining the so-called background CO2 of the ENTIRE ATMOSPHERE. If I have a well-mixed batch and I take ten samples from various points and test for component “A”, I will get a bell curve with a small standard deviation. This tells me the batch is “Well Mixed” and uniform. If the test results for each point are different, I get a large standard deviation and my ability to estimate the “true value” of component “A” has much larger error bars.
Therefore you have two standard deviations, one for the test method and one for the sampling plan. As Stephen Wilde pointed out, if the standard deviation for the test method is large, you can lose the data in the noise from the test method. If the thing you are measuring is not homogeneous, then the standard deviation for the SAMPLING PLAN is large, and your estimate of the “true value” has large error bars despite the precision and accuracy of your test method.
Also, if I take a sample of air, place it in a flask, and then test it, I am testing a point source. Both the Ice Core measurements and the AIRS measurements (90 km × 90 km) are not point source measurements; they are COMPOSITE SAMPLES. They are the equivalent of taking several flasks of air, mixing them, and doing one analysis. Since we both agree the chemical analysis is not a great source of error, this is the equivalent of taking the average of 10 or 100 point source samples using a flask. Yet even with this intrinsic averaging we see a decent amount of variation. About 6% in the mid troposphere from AIRS, for gosh sakes, where you cannot blame the variation on sinks or sources close by!
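The two-standard-deviation point above can be put in numbers: the method and sampling contributions combine in quadrature, so if the thing being sampled is not homogeneous, the sampling term swamps even a very precise instrument. All figures below are invented for illustration.

```python
# Illustration: total uncertainty = sqrt(method_sd^2 + sampling_sd^2).
# A precise instrument cannot rescue an inhomogeneous sampling plan.
import math
import statistics

method_sd = 0.2   # ppm: hypothetical instrument repeatability
samples = [312, 318, 335, 309, 341, 322, 315, 350, 311, 327]  # ppm at ten points
sampling_sd = statistics.stdev(samples)                       # spread across points
total_sd = math.sqrt(method_sd ** 2 + sampling_sd ** 2)
print(round(sampling_sd, 1), round(total_sd, 1))  # sampling variability dominates
```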

Allan MacRae
June 8, 2012 10:50 pm

Time Capsule:
Below is an exchange from January 2008, soon after I had written my paper on this subject (dCO2/dt varies ~contemporaneously with temperature, and CO2 lags temperature by ~9 months), and Roy Spencer had written his two papers on a similar topic. Nice to see how this radically different viewpoint has apparently become a bit less heretical since then – and I no longer feel quite so nervous around bonfires.
Even nicer to see how Ernst Beck is no longer being treated with disrespect and even derision. I never met Ernst but had the privilege of exchanging about 60 emails with him in 2008 alone. I recall Ernst as a remarkably decent and intelligent soul, who suffered because he dared to speak out against the global warming juggernaut. Ernst left us too soon in 2010, and I hope he can look down upon us now and at last feel some measure of vindication.
_________________
http://wattsupwiththat.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/
Eric Adler (17:12:30) :
“Your analysis leaves out an important factor. It is known to all, including the scientists who wrote the IPCC report, that the change in CO2 concentration in the atmosphere is driven by 2 things:
1) An accelerating upward trend in CO2 due to human caused emissions.
2) The variation in the oceans’ ability to absorb the CO2, which decreases with increasing sea surface temperature.”
Your comment may or may not be correct – over the next decades, we may see the truth emerge from the data.
However, your tone with me and especially with Roy is aggressive and ill-advised.
Re: “It is known to all…”:
Really, such hogwash. I am reminded of that IPCC highlight, Mann’s hockey stick, that eliminated the Medieval Warm Period and the Little Ice Age; also of the Divergence Problem. Mann and the IPCC were clearly wrong – the only remaining question here is not one of error, it is one of fraud.
I am also reminded of the greatly exaggerated climate sensitivity used by the IPCC to produce their scary scenarios, and the ridiculous climate models that continue to predict catastrophic warming, even though Global Warming ceased a decade ago.
I remind you that ice core data shows a ~600 year lag of CO2 after temperature at that time scale. I have provided evidence at shorter time scales. Ernst Beck has provided significant evidence at intermediate time scales, and has suffered scorn from the likes of you.
I also remind you of the “missing sink”, whereby only half of humanmade CO2 reports to the atmosphere. The rest, presumably, is hidden away by evil climate skeptics (or do you prefer the term “climate deniers”).
Still, there may be a significant humanmade CO2 component, which cannot be ruled out at this time.
So even if the final conclusion in my paper turns out to be wrong, it will still be much closer to the truth than any of the IPCC’s scary conclusions, which are clearly false, alarmist, self-serving and extremely expensive for humanity.
There has been no Global Warming for a decade, and evidence is mounting that Earth will enter a 20-30 year cooling period as the PDO has shifted to cool mode.
I await the IPCC’s smooth transition from Catastrophic Humanmade Global Warming to Catastrophic Humanmade Global Cooling, and your spirited defense thereof. Watch out for whiplash when you change directions.

June 9, 2012 1:42 am

Gail Combs says:
June 8, 2012 at 6:18 pm
I am not talking about the error bars of the method.
Neither did I. It is about the variability in the CO2 data, as the measurements are quite reliable (on fixed stations +/- 0.2 ppmv, AIRS at +/- 5 ppmv).
Look at the scale for the AIRS presentations:
http://photojournal.jpl.nasa.gov/jpeg/PIA12339.jpg
The scale is 382–390 ppmv, for a monthly average in mid-summer. That is a variability of about +/- 1% of the full scale. In other months, when the largest seasonal changes are at work, it is +/- 2% of full scale all over the world, for changes of + or – 20% in the CO2 fluxes between atmosphere and oceans/biosphere at ground level. I call that well mixed on such a time scale.
Then have a look at the trends over 7 years:
http://svs.gsfc.nasa.gov/vis/a000000/a003600/a003685/AIRSC02_MLOComposite.mp4
An increase, both in the Mauna Loa data and in the AIRS data, of about 14 ppmv in 7 years time, or near 2% of full scale in less than 10 years. Thus the average CO2 level increased by more than the global variation. No matter the cause, we may be confident that there is an increase.
What is the impact of the global variation on the greenhouse effect? I don’t want to reopen any discussion of the real impact of the radiation effect, but based on real absorption figures (Modtran), one can assume some 0.9°C for a CO2 doubling, without any positive or negative feedback. Thus a change of 2% of full scale has a (logarithmic) impact of ~0.03°C on the surface, after ~30 years of adjustment of the ocean’s temperature. Simply not measurable.
Thus the (mainly seasonal) variability you see in the AIRS (and MLO and Barrow, and…) data has negligible impact on the greenhouse effect. Even if you take into account that the levels near ground over land can be much higher, that has negligible impact on the greenhouse effect (even with 1000 ppmv in the first 1000 meters).
The 30% increase since the industrial revolution started may have had an impact of about 0.4°C over the past 150 years, that is all. Not including the self-regulating effect of the earth’s water thermostat…
Thus whatever you think about the real variability of the CO2 levels, the impact (if any) is from the total increase over the full time span of reliable measurements.
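The two figures quoted above follow from the logarithmic relation. Taking the assumed no-feedback sensitivity of ~0.9 °C per doubling, the temperature response to a concentration ratio C/C0 is 0.9 × log2(C/C0):

```python
# Back-of-envelope check of the numbers above, assuming ~0.9 degC per CO2
# doubling with no feedbacks (the Modtran-based figure quoted in the comment).
import math

def warming(c, c0, per_doubling=0.9):
    """No-feedback temperature response to a CO2 change from c0 to c."""
    return per_doubling * math.log2(c / c0)

print(round(warming(1.02, 1.00), 2))   # a 2%-of-scale variation -> ~0.03 degC
print(round(warming(1.30, 1.00), 2))   # a 30% rise since pre-industrial -> ~0.34 degC
```

The concentrations enter only as a ratio, which is why the arguments can be given as 1.02 and 1.30 rather than absolute ppmv values.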

FerdiEgb
June 9, 2012 2:09 am

Stephen Wilde says:
June 8, 2012 at 4:35 pm
I hope we can agree that the largest part of the natural cycle would be responses to sea surface temperatures.
Yes, but don’t underestimate the impact of vegetation: in spring, when the mid-NH starts growing new leaves, the CO2 levels sink rapidly to minimum levels. The d13C measurements then show that vegetation growth is the largest cause of the decrease (and the opposite in fall). Thus while oceans have the largest total impact (continuous between equator and poles, seasonal for the mid-latitudes), land vegetation has the largest impact on the seasonal variation. This is the reason there is less seasonal variation in the SH.
For all we know the natural swings could well be a doubling from LIA to date and a halving from MWP to LIA purely from changes in the ocean/ atmosphere exchange.
The ice core data are CO2 levels averaged over 8–600 years, depending on the accumulation rate, with about 8 ppmv/°C over ice ages / interglacials. The averaging does smooth out larger variations, but that doesn’t change the average. Thus if there was more variation, the 180 ppmv minimum measured in the Vostok and other ice cores could have been even lower, which I don’t expect.
Fortunately for land plants, soil bacteria give some CO2 back to the atmosphere, which makes CO2 over land on average higher than background, at least during a few morning hours in sunlight:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_mlo_monthly.jpg
Thus if even a change over a glacial/interglacial has no more impact than 8 ppmv/°C, there is no reason to assume that the MWP-LIA difference or the LIA-current difference would have a much larger impact.

barry
June 9, 2012 5:59 am

We have measurements from many different locations all over the world corroborating a steady rise in CO2, and these reflect not only each other over the long term, but also the annual variability over different latitudes. We have plenty of evidence that our inventory of atmospheric CO2 is reasonably accurate.
Human industry outputs about twice the amount of CO2 that is added to the atmosphere every year (on average).
The change in isotopic ratios for CO2 in the atmosphere is exactly in line with what is expected from burning fossil fuels. There is no natural source of CO2 that would give us the isotopic ratio changes we see (unless a vast store of fossil fuels has been burning naturally for more than a century – anything’s possible).
Atmospheric content of oxygen is decreasing in proportion to the amount of fossil fuel being burned.
The oceans are currently a net sink for CO2 and have been accumulating CO2 for as long as we’ve measured this directly.
So, human industry outputs more than 100% of the extra CO2 added to the atmosphere (per annum, averaged over a few years). A theory that posits the CO2 rise over the last couple hundred years coming from nature has to overcome a few basic problems.
First, it has to explain why anthro CO2 doesn’t add to the atmosphere – indeed it must explain how anthro CO2 gets sequestered in favour of a natural source. How does the biosphere know to do this?
It has to explain the change in isotopes.
It has to identify – with actual data – a physical mechanism that is responsible for the accumulating CO2.
We have data that explains the CO2 rise from anthropogenic sources. We have no actual data that identifies an increasing natural source, or decreasing natural sink, to explain the rise.
Occam’s razor works well here!
(Easy-going video on CO2 isotopic ratio http://www.youtube.com/watch?v=UXgDrr6qiUk)
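barry's mass-balance argument reduces to simple arithmetic. The figures below are illustrative round numbers (assumed here, not taken from the comment): roughly 9 GtC/yr emitted, a ~2 ppm/yr observed atmospheric rise, and ~2.13 GtC per ppm of atmospheric CO2.

```python
# The mass-balance argument in round (assumed) numbers: if the atmosphere
# retains less carbon than human industry emits, nature as a whole must be
# removing the difference, i.e. acting as a net sink.
emissions = 9.0               # GtC/yr from fossil fuels (illustrative)
atm_increase = 2.0 * 2.13     # GtC/yr retained: ~2 ppm/yr * ~2.13 GtC per ppm
net_natural = atm_increase - emissions
print(round(net_natural, 1))  # negative: natural sinks exceed natural sources
```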

Reply to  barry
June 9, 2012 1:37 pm

Barry,
Click on my name, read my blog, and then decide what the 13CO2 index is telling us about what is natural and what is anthropogenic.

Gail Combs
June 9, 2012 8:09 am

Myrrh, I just reread your comment at June 6, 2012 at 5:06 am and it got me to thinking. Why did Keeling pick Hawaii and Mauna Loa? That is aside from wanting to live in a tropical paradise instead of sitting on an ice field, and therefore having his pick of eager scientists and grad students to work for him.
First let's deal with the basic assumption we make that scientists are honest and the data are not manipulated. This is the basis for the belief in the Mauna Loa data. However, we have seen ample evidence here on WUWT, and in science in general, that this is a very bad assumption, especially when dealing with scientists pursuing “A Cause”.
So what data do we have about Keeling's agenda?

http://www.co2web.info/ESEF3VO2.pdf
…At the Mauna Loa Observatory the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques. Critique has also been directed to the analytical methodology and sampling error problems (Jaworowski et al., 1992 a; and Segalstad, 1996, for further references), and the fact that the results of the measurements were “edited” (Bacastow et al., 1985); large portions of raw data were rejected, leaving just a small fraction of the raw data subjected to averaging techniques (Pales & Keeling, 1965).
The acknowledgement in the paper by Pales & Keeling (1965) describes how the Mauna Loa CO2 monitoring program started: “The Scripps program to monitor CO2 in the atmosphere and oceans was conceived and initiated by Dr. Roger Revelle who was director of the Scripps Institution of Oceanography while the present work was in progress. Revelle foresaw the geochemical implications of the rise in atmospheric CO2 resulting from fossil fuel combustion, and he sought means to ensure that this ‘large scale geophysical experiment’, as he termed it, would be adequately documented as it occurred. During all stages of the present work Revelle was mentor, consultant, antagonist. He shared with us his broad knowledge of earth science and appreciation for the oceans and atmosphere as they really exist, and he inspired us to keep in sight the objectives which he had originally persuaded us to accept.”

The first clue in this snippet is “the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques.” Now, I am not the math whiz that Willis and others are, but if there is one thing I am familiar with, it is doing quantitative analysis on “a new infra-red (IR) absorbing instrument”.
Keeling's “success” using the IR for quantitative work made it into the literature (1965), and my bosses, two PhD chemists who owned the company I worked for, were eager to try out the new method. If we could make it work, it would cut the analysis time by a factor of four. We were using a Gas Chromatograph. Very accurate, slow, and a royal pain. (In 1973 we did not have computers, so measurements and calculations were all done by hand.)
The typical method is to spike the sample with a known amount of a substance that has a peak where the test sample has no peak. For the initial calibration of the instrument, artificial samples are made and run with, for example, 10 ppm, 50 ppm, 100 ppm, 150 ppm, 200 ppm, 250 ppm, 300 ppm, 350 ppm, 400 ppm, 500 ppm, and maybe 600 ppm of the molecule of interest. All are spiked with the same amount of the calibration substance. Several runs would be made of these calibration materials and a curve plotted. A high and a low calibration sample would be run with each set of test samples, and new calibration materials would be made up daily.
This works like a charm for Gas Chromatographs. The repeated runs of each calibration material are very tight and a nice smooth curve is produced. With our brand new (1973) state-of-the-art IR we could not get a calibration curve worth beans! The data points were all over the place. No matter what we tried, we could not get a tight standard deviation. (The “we” was two PhD chemists, one MS chemist, two BS chemists, a PhD chemical engineer, and assorted techs.) I talked to another chemist (head of the Borg-Warner labs) during that time period, and he had the same problem trying to get good quantitative analysis results from the IR. Therefore I am not surprised there was no validation of the test method against the traditional analytical techniques. (Analytical test equipment has come a long way since then, especially with the addition of computers.)
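The calibration procedure described above amounts to fitting a straight line through the standards and inverting it for unknowns. A minimal sketch, with invented detector responses (the real workflow used an internal-standard ratio, omitted here for brevity):

```python
# Sketch of a calibration curve: standards of known concentration, a linear
# least-squares fit, then unknowns read off the inverted line.
conc = [10, 50, 100, 150, 200, 250, 300, 350, 400, 500, 600]   # ppm standards
resp = [0.021, 0.101, 0.199, 0.302, 0.398, 0.503, 0.601,
        0.699, 0.802, 1.001, 1.199]                             # invented responses

# Ordinary least squares: slope and intercept of response vs concentration.
n = len(conc)
sx, sy = sum(conc), sum(resp)
sxx = sum(x * x for x in conc)
sxy = sum(x * y for x, y in zip(conc, resp))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def to_ppm(response):
    """Invert the calibration line to convert an instrument response to ppm."""
    return (response - intercept) / slope

print(round(to_ppm(0.640)))   # ~320 ppm for this made-up response
```

Gail's complaint is precisely that on the 1973-era IR the `resp` values scattered so badly that no usable line of this kind could be drawn.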
The second clue, of course, is Revelle:
“…inspired us to keep in sight the objectives which he had originally persuaded us to accept.” Why, in the name of the thousand little gods, would Keeling have to be “persuaded” to keep in sight Revelle's “objectives” and “document” them if he was an honest scientist? The whole statement is all about a preconceived conclusion about atmospheric CO2 response to fossil fuel combustion.
Now back to why Keeling picked Hawaii and Mauna Loa.
Keeling had the option of picking the Arctic, Alaska or the Antarctic.
Here is the data he would have seen from Barrow Alaska before he made his choice.

Date        CO2 (ppm)   Latitude   Longitude   Author        Location
1947.7500   407.9       71.00      -156.80     Scholander    Barrow
1947.8334   420.6       71.00      -156.80     Scholander    Barrow
1947.9166   412.1       71.00      -156.80     Scholander    Barrow
1948.0000   385.7       71.00      -156.80     Scholander    Barrow
1948.0834   424.4       71.00      -156.80     Scholander    Barrow
1948.1666   452.3       71.00      -156.80     Scholander    Barrow
1948.2500   448.3       71.00      -156.80     Scholander    Barrow
1948.3334   429.3       71.00      -156.80     Scholander    Barrow
1948.4166   394.3       71.00      -156.80     Scholander    Barrow
1948.5000   386.7       71.00      -156.80     Scholander    Barrow
1948.5834   398.3       71.00      -156.80     Scholander    Barrow
1948.6667   414.5       71.00      -156.80     Scholander    Barrow
1948.9166   500.0       71.00      -156.80     Scholander    Barrow
These data must not be used for commercial purposes or gain in any way, you should observe the conventions of academic citation in a version of the following form: [Ernst-Georg Beck, real history of CO2 gas analysis, http://www.biomind.de/realCO2/data.htm ]

These data are much too high for Dr. Revelle's purpose, and there is not the option of cherry-picking that there is at Mauna Loa.
So what about Mauna Loa?
Topo Maps: http://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Hawaii_Island_topographic_map-fr.svg/728px-Hawaii_Island_topographic_map-fr.svg.png
1951 South: http://www.lib.utexas.edu/maps/topo/250k/txu-pclmaps-topo-us-hawaii_south-1951.jpg
1951 North: http://www.lib.utexas.edu/maps/topo/250k/txu-pclmaps-topo-us-hawaii_north-1951.jpg
Note the Kohala Mtn Forest Reserve and several water courses, as well as the Upolu Point Airport, north of Mauna Loa. Toward the northwest is the lava flow of 1859 and two areas marked “Settlement”. South and west of that is the Honuaula Forest Reserve, complete with a sheep station, followed by the Kahaluu Forest Reserve and the Puu Lehua Ranch. To the northeast is the lava flow of 1881 and the Mauna Loa Game and Forest Reserve. There are several dated lava flows marked. The lava seems to be the mottled brown area. Woods/brushwood is designated by a clean white area, and water courses are in blue (the legend is at the bottom).
A satellite image of the Hawaii island chain: http://geology.com/satellite/hawaii-satellite-image.shtml
So what does that tell us about Mauna Loa?
1. It sits near the top of an active volcano.

The Mauna Loa (Hawaii) observatory has been regarded an ideal site for global CO2 monitoring. However, it is located near the top of an active volcano, which has, on average, one eruption every three and a half years. There are permanent CO2 emissions from a rift zone situated only 4 km from the observatory, and the largest active volcanic crater in the world is only 27 km from the observatory. These special site characteristics have made “editing” of the results an established procedure, which may introduce a subjective bias in the estimates of the “true” values. A similar procedure is used at other CO2 -observatories. There are also problems connected to the instrumental methods for measurements of atmospheric CO2 ….
…The concentration of CO2 in the gases emitted from the Mauna Loa and Kilauea volcanoes of Hawaii reaches about 47%. This is more than 50 times higher than in volcanic gases emitted in many other volcanic regions of the world. The reason for this is the alkaline nature of this volcanism, strongly associated with mantle CO2 degassing. The Kilauea volcano alone is releasing about 1 MT CO2 per year, plus 60–130 kT SO2 per year (Harris and Anderson, 1983) http://www.co2web.info/np-m-119.pdf

2. The lava fields are surrounded by tropical vegetation and the wind blows up the slopes of the volcano during the day.
The wheat study shows vegetation can lower the CO2 two meters above the canopy to a constant 310 ppm during the day. The Harvard Forest study and the Rannells Prairie (KS) study also show there is a variation between 320 ppm and 400 ppm. http://harvardforest.fas.harvard.edu/publications/pdfs/Dang_J_Geophys_Res_2011.pdf
Also many people think the Lava flows, many from the 1800’s, are sterile. This is untrue.

Building Soil
Begin with bare rock-the Hawaiian Islands, for instance. The first organisms to colonize land newly created by lava flows must be able to provide their own nutrients by means of light or chemical energy. Cyanobacteria (blue-green algae), the first colonizers, are able to photosynthesize; some are able to “fix” atmospheric nitrogen, making it available to plants. Lichens (an alliance between fungi and algae) are also early colonizers, providing their own nutrients; they also produce unusual acids that help break down rock. Eventually, as a thin layer of soil develops on the lava, higher plants begin to move in; many of the first have a nitrogen-fixing capability…. http://www.pacifichorticulture.org/garden-allies/71/4/

There was also this WUWT post Earth follows the warming: soils add 100 million tons of CO2 per year
3. The Ring of Fire – MAP

The true extent to which the ocean bed is dotted with volcanoes has been revealed by researchers who have counted 201,055 underwater cones. This is over 10 times more than have been found before. The team estimates that in total there could be about 3 million submarine volcanoes, 39,000 of which rise more than 1000 metres over the sea bed. http://www.newscientist.com/article/dn12218

Volcanic Outgassing of CO2

The primary source of carbon/CO2 is outgassing from the Earth’s interior at midocean ridges, hotspot volcanoes, and subduction-related volcanic arcs. http://www.columbia.edu/~vjd1/carbon.htm

4. Ocean ~ This is where things get really interesting.
a. Oceans affect CO2 uptake by four methods. Increase in humidity => rain => absorbing CO2 out of the air to form carbonic acid (as you already noted) http://ion.chem.usu.edu/~sbialkow/Classes/3650/Carbonate/Carbonic%20Acid.html
b. phytoplankton remove CO2 “…from the surface ocean when the dying cells sink to depth makes way for the uptake of more CO2. In a way, the tiny organisms act as a biological conveyer belt for the transport of carbon dioxide out of the surface and into the deep ocean…” Various species of phytoplankton form the crucial diet for many marine organisms…
c. ..there is also an important geochemical balance. CO2 in the atmosphere is in equilibrium with carbonic acid dissolved in the ocean, which in turn is close to CaCO3 saturation and in equilibrium with carbonate shells of organisms and lime (calcium carbonate; limestone) in the ocean…
d. The fourth effect of the ocean on CO2 is dissolving the gas or outgassing it, depending on temperature. Temperature will also affect phytoplankton blooms.
Here is Bob Tisdale’s South Pacific Sea Surface Temperature graph (1990-2012) and North Pacific Sea Surface Temperature graph, also the 1900 to 2009 raw Pacific Decadal Oscillation and the smoothed PDO.
From 1950ish to 1985ish we see an increase in SST of close to 2 °C, but then the temperature drops by a degree or more from 1985ish until now.
So let’s recap.
This “pristine site” is influenced by CO2 discharges not only from Mauna Loa but from the rest of the “Ring of Fire.” There is a massive CO2 vegetation sink in the land and ocean surrounding the observatory, not to mention the organisms on the lava itself. You have China cranking up an industrial revolution since she joined the WTO in 2001. On top of all of that, you have a Pacific Ocean SST that warmed two degrees C and then cooled over one degree C. And with ALL of that going on, you want me to believe in that nice smooth curve that Willis shows at the top of this page??? NO WAY!
Heck Mauna Loa Observatory even TELLS us they cherry pick the data!

“At Mauna Loa we use the following data selection criteria:
3. There is often a diurnal wind flow pattern on Mauna Loa ….. The upslope air may have CO2 that has been lowered by plants removing CO2 through photosynthesis at lower elevations on the island,…. Hours that are likely affected by local photosynthesis are indicated by a “U” flag in the hourly data file, and by the blue color in Figure 2. The selection to minimize this potential non-background bias takes place as part of step 4. At night the flow is often downslope, bringing background air. However, that air is sometimes contaminated by CO2 emissions from the crater of Mauna Loa. As the air meanders down the slope that situation is characterized by high variability of the CO2 mole fraction….. http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html

And so they do their curve fitting.

4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur…..”
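The iterated two-standard-deviation rejection quoted in step 4 is easy to sketch. Here is a rough illustration in Python on synthetic hourly data; the function name, polynomial degree, and the numbers are my own assumptions for illustration, not NOAA’s actual code:

```python
import numpy as np

def reject_outliers(hours, values, degree=2, nsigma=2.0):
    """Fit a curve to the hourly means, drop points more than nsigma
    standard deviations from the fit, and iterate until no more
    rejections occur (the procedure quoted in step 4)."""
    keep = np.ones(len(values), dtype=bool)
    while True:
        coeffs = np.polyfit(hours[keep], values[keep], degree)
        resid = values - np.polyval(coeffs, hours)
        sigma = resid[keep].std()
        new_keep = keep & (np.abs(resid) <= nsigma * sigma)
        if new_keep.sum() == keep.sum():  # converged: nothing more rejected
            return new_keep
        keep = new_keep

# Synthetic day: near-constant background plus one contaminated hour
rng = np.random.default_rng(0)
hours = np.arange(24, dtype=float)
co2 = 395.0 + 0.01 * hours + rng.normal(0.0, 0.1, 24)
co2[3] += 8.0   # e.g. downslope air contaminated by the crater
mask = reject_outliers(hours, co2)
```

The point of the sketch is that the large spike is flagged while the smooth background survives; whether such rejection biases the mean is exactly the question debated in this thread.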

If I was to pick a “pristine site” on land for CO2 measurements, it would be a tower sitting in the Gobi Desert. The Gobi Desert is a plateau around 3,000 to 5,000 feet above sea level. It is one of the largest deserts in the world, 1,300,000 sq km in area. Much of the Gobi is not sandy but is covered with bare rock. Precipitation averages less than 100 mm per year, while some areas only get rain once every two or three years. It is in the rain shadow of the Himalayas.
The other option of course is the Antarctic, another desert.

June 9, 2012 2:33 pm

Gail Combs says:
June 9, 2012 at 8:09 am
At the Mauna Loa Observatory the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques.
Gail, please before you quote such opinion, have some thinking of yourself and please read the reasons why Keeling used a different method than the old chemical ones. See his autobiography:
http://scrippsco2.ucsd.edu/publications/keeling_autobiography.pdf
Keeling looked at a different method simply because the existing methods were not accurate enough (average +/- 10 ppmv) and/or very time consuming. He started by fabricating an extremely accurate device, based on a manometric method, with an accuracy of 1:40,000, to measure CO2 in the atmosphere and later to calibrate the NIR devices and calibration mixtures. That device was still in use until a few years ago at the Scripps Institute. Meanwhile it is NOAA which is responsible for the worldwide calibrations, but Scripps still uses its own calibration gases and techniques.
Because you are knowledgeable on analyses: how can you validate a new NIR technique, accurate to about 0.1 ppmv with an old technique, accurate to about 10 ppmv?
With our brand new (1973) state of the art IR we could not get a calibration curve worth beans! The data points were all over the place.
Did you remove (or compensate for) water vapour? Keeling did by freezing most water vapour out over a cold trap (-70°C) before measuring CO2. Without that step, you can find any value as water and CO2 overlap in several bands. Alternatively, nowadays handheld CO2 meters measure water vapour in a different band and then compensate for the water vapour in the CO2 band.
“…inspired us to keep in sight the objectives which he had originally persuaded us to accept.”
The only objective of Keeling (and Revelle) was to have accurate CO2 measurements. With the old chemical methods, not even the seasonal variation was clear in the large variability of measurements of that time. Revelle was of the opinion that more CO2 and warming was beneficial for humanity… Think about the period in which this history played out: the late fifties, more than a decade before the “global cooling” scare and 30 years before the “global warming” scare…
Keeling had the option of picking the Arctic, Alaska or the Antarctic.
If you had made some effort to read the history, you would have known that the first continuous measurements by Keeling’s new instrument were at the South Pole, starting one year before Mauna Loa… But because there is a gap of a few years in the continuous data, Mauna Loa is mostly referred to as the station with the longest continuous history.
And then Barrow. One of the current baseline stations, a near-ideal place to measure CO2 (with seaside wind). Except that the micro-Scholander method used in 1947-1948 had an accuracy of +/- 150 ppmv! The instrument was used to measure CO2 in exhaled air (at about 20,000 ppmv). The calibration procedure was by sampling outside air. If the result was within 200-500 ppmv, the apparatus was deemed ready for its purpose… The calibration figures are what you showed, completely worthless for having any idea of the real CO2 levels at Barrow. Despite that, they were used by the late Ernst Beck to calculate his “global average”…
Modern data at Barrow since 1971 show a larger seasonal swing than Mauna Loa, but exactly the same trend, leading MLO by an average of 6 months.
Then your litany about what can go wrong at Mauna Loa and the cherry picking there. They publish all the raw hourly averages + stdv from Mauna Loa, Barrow, Samoa and the South Pole. Including outliers. Have a look at them and calculate yourself if the “cherry picking” by not using the clearly locally contaminated outliers does influence the yearly average or the trend with more than 0.1 ppmv:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
If I was to pick a “pristine site” on land for CO2 measurements it would be a tower sitting in the Gobi Desert
We fulfil your wishes immediately:
Have a look at the CO2 data from Ulaan Uul (at 914 m height), Mongolia, or Plateau Assy, Kazakhstan, or Mt. Waliguan, PRC, and compare the data and trends with those of Mauna Loa:
http://www.esrl.noaa.gov/gmd/ccgg/iadv/

June 9, 2012 3:06 pm

Gail Combs 6/9 – 8:09 a.m. :
Thank you for some very interesting information and links! I took particular note of the comparative results of the IR absorption vs. gas chromatograph calibrations. Surely in the name of all that is gaseous these problems with the IR instruments have been worked out long before now, yes?

Allan MacRae
June 9, 2012 8:11 pm

Barry – do you have any data references for ocean CO2 concentrations versus time?
How about good data references to all your other allegations?
In answer to your question (essentially) about the mass balance argument, here is a hypothesis:
From the limited urban CO2 data I have seen, it is apparent that, at least in spring and summer, humanmade CO2 is sequestered very near to its terrestrial (usually urban) source, and the human signature is absent from the daily as well as the seasonal CO2 data. The rest of the natural system, the sparsely populated rural areas, the vast boreal forests of Canada and Russia, the vast tropical forests of the Amazon and SE Asia, and the great plains and deserts all just soldier on, without so much as a tip of the hat to what happens in the cities – you could think of it as “what happens in Vegas stays in Vegas”.
Nature just does not give a damn about humanity and its CO2 emissions – the effects do not extend far outside the urban areas – indeed they are often invisible even within the cities themselves.
Being totally ignored by Mother Nature is painful to contemplate, especially when we humans always thought we were the centre of the universe, but good old Mom just does not even know we exist, nor does she care.
Sorry Buck-o,
Try to be strong.

Gail Combs
June 9, 2012 8:57 pm

Ferdinand Engelbeen says:
June 9, 2012 at 1:42 am
Gail Combs says:
June 8, 2012 at 6:18 pm
I am not talking about the error bars of the method.
Neither did I. It is about the variability in the CO2 data, as the measurements are quite reliable (on fixed stations +/- 0.2 ppmv, AIRS at +/- 5 ppmv).
Look at the scale for the AIRS presentations:
http://photojournal.jpl.nasa.gov/jpeg/PIA12339.jpg
The scale is 382-390 ppmv. For a monthly average, mid-summer. That is a variability of average +/- 1% of the full scale. In other months where the largest seasonal changes are at work, that is +/- 2% of full scale all over the world. For a change of + or – 20% in CO2 fluxes between atmosphere and oceans/biosphere at ground level. I call that well mixed on such a time scale…..
_______________________________________
GOOD GRIEF! Do you not understand what an AVERAGE DOES? I can take five lots of 10 midgets and 10 basketball players, measure them, and say the height for these lots of people is 5 foot 11.3 inches +/- 2 inches. Because I used AVERAGES and not individuals, I completely lost the variability!
Both the AIRS data and the Mauna Loa data are AVERAGES. Heck, you can see what using averages does in this graph of SST vs CO2. The smooth, artificial-looking curve starting about 1960 is the Mauna Loa data.
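Gail’s point about averages is easy to make concrete with a toy calculation (hypothetical heights, Python):

```python
import numpy as np

# Two very different groups, per the example above (hypothetical heights, inches)
short_group = np.full(10, 48.0)   # ten people at 4'0"
tall_group = np.full(10, 78.0)    # ten people at 6'6"
lot = np.concatenate([short_group, tall_group])

mean_height = lot.mean()  # 63.0 inches: a height nobody in the lot actually has
spread = lot.std()        # 15.0 inches: the variability the bare average hides
```

Reporting only the mean makes two wildly different populations look like one homogeneous group; only the standard deviation (which here is far larger than +/- 2 inches) reveals the mixture.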

Gail Combs
June 9, 2012 9:29 pm

Leigh B. Kelley says:
June 9, 2012 at 3:06 pm
Gail Combs 6/9 – 8:09 a.m. :
Thank you for some very interesting information and links! I took particular note of the comparative results of the IR absorption vs. gas chromatograph calibrations. Surely in the name of all that is gaseous these problems with the IR instruments have been worked out long before now, yes?
_________________________________
First remember that nice smooth curve starts in 1958. You can compare the Mauna Loa data to the wet chemistry methods in this graph (Mauna Loa starts in 1958 on the graph)
Keeling got a really nice smooth curve for the last 50 years didn’t he?
Now here is some information about the test method.
I should first mention that before computers did the integration of the area under the curve used to determine the amount of a component, it was done by hand. The method was to take a sharp pencil and a ruler, carefully figure out where the curve started and stopped (the baseline), and then draw the baseline. Next, measure the height of the peak from the drawn baseline and also measure the width at half height. (Figuring out where the baseline was could often be a royal pain.) Use the formula for a triangle to calculate the area. The other method was to use special paper, cut out the peak, and weigh it. Obviously, having a computer do the integration was a great boost to accuracy.
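As a rough sketch of how much the pencil-and-ruler triangulation gives up against computer integration, here is a comparison on a synthetic Gaussian peak (all numbers invented for illustration):

```python
import numpy as np

# A synthetic chromatogram peak: Gaussian, height 100, sigma 0.5
t = np.linspace(0.0, 10.0, 2001)
signal = 100.0 * np.exp(-0.5 * ((t - 5.0) / 0.5) ** 2)

# Pencil-and-ruler method: peak height times width at half height
above_half = t[signal >= 50.0]
width_half = above_half[-1] - above_half[0]
area_manual = 100.0 * width_half

# "Computer" integration: simple Riemann sum over the (zero) baseline
area_numeric = signal.sum() * (t[1] - t[0])

error_pct = 100.0 * (area_manual - area_numeric) / area_numeric  # a few percent low
```

For an ideal Gaussian the height-times-half-width estimate runs about 6% low even with a perfectly drawn baseline, which gives a feel for why hand integration limited the accuracy of the era’s analyses.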
History of the GC
The katharometer, also known as the thermal conductivity detector or the hot wire detector, was developed for gas chromatographs in 1954. It was a relatively low-sensitivity detector. It was followed by the flame thermocouple detector (similar sensitivity) in 1956. Perkin-Elmer produced their first commercial model in May of 1955. The ubiquitous flame ionization detector (FID), described by McWilliams [5] in 1958, was to become the workhorse of all GC analyses, having an extremely high sensitivity and a linear dynamic range exceeding five orders of magnitude. Finally, the exciting family of argon ionization detectors was described by Lovelock [6] in 1960. Correctly designed and operated, the argon detectors could provide sensitivities at least one order of magnitude greater than the FID, and the electron capture detector nearly two orders of magnitude greater than the FID. P/E introduced the FID with capillary columns in 1958 and it was produced until the late 1960s. In 1962 a model with baseline compensation was introduced. It wasn’t until 1977 that microprocessor-controlled GCs were introduced, and in 1980 full data-handling capabilities were added (the computer to integrate those curves).
http://www.perkinelmer.com/CMSResources/Images/44-74443BRO_GasChromaEvolution.pdf
IR Spectrophotometers
In 1944 P/E introduced its first (single beam) IR. The double beam was introduced in 1957. A computer controlled IR was introduced in 1976 and data manipulation in 1979. A major increase in reliability was gained in 1984 with the rotating mirror pair design. More innovation was seen from the 1990s on. http://www.perkinelmer.com/CMSResources/Images/44-74388BRO_60YearsInfraredSpectroscopy.pdf
So to put it bluntly, the error in Keeling’s data was probably around +/- 10 ppm or more when he started and +/- 5 ppm or more (SWAG) in the 60s and 70s. That is assuming it was in the same order of magnitude as the GCs of the same vintage. Another point to remember is that universities and other noncommercial labs do not get state-of-the-art equipment just because a newer model with more bells and whistles came out. Generally the universities got the equipment the corporations donated, so it is at least a generation or two older than that in the commercial labs, and the equipment in the commercial labs is normally not exactly young either. You would not believe some of the dinosaurs I used; there was one piece of equipment built in the 1800s still in use in one of the factories I worked in!
Here is a modern study comparing GC to IR for analyzing gases.

Can modern infrared analyzers replace gas chromatography to measure anesthetic vapor concentrations?
…Repeated injections of a given sample from a tank or flask containing known volumetric standards into a GC give values with a standard deviation of less than 2–3%…Because GC is a calibrated reference standard, it can be considered to be an accurate measure of the concentrations within the known limits of accuracy. IR is therefore compared directly to the GC measurements,…
…The deviation from GC calculated as (100* [IR-GC]/GC) for the medium and high concentrations ranged from -9 to 6% for isoflurane, from -11 to 5% for sevoflurane, and from -9 to 11% for desflurane. Deviation was more pronounced with the lower concentrations: from -18 to 2% for isoflurane, from -20 to 0% for sevoflurane, and from -8 to 21% for desflurane….
…..Because our study demonstrates difference in performance between individual units, our study suggests that GC remains the method of choice to measure absolute concentrations…..
Conclusion
In summary, the use of different IR absorption bands by the M-CAiOV compact multi-gas analyzer (General Electric) has allowed automated agent detection and may technically have facilitated the compensation for cross-sensitivity between anesthetic vapors and other gases, but has not improved accuracy of vapor analysis beyond that of older IR analyzers. IR and GC cannot be used interchangeably, because the deviations between GC and IR mount up to ± 20%, and because individual analyzers differ unpredictably in their performance.

Looks like I am not the only one who prefers the GC for accuracy over the IR!

barry
June 9, 2012 9:53 pm

Allan,
try the following site for CO2 data: http://ds.data.jma.go.jp/gmd/wdcgg/
There are nearly 200 locations, many of which have long enough records to do trend comparisons.

From the limited urban CO2 data I have seen, it is apparent that, at least in spring and summer, humanmade CO2 is sequestered very near to its terrestrial (usually urban) source

As you say, nature doesn’t care about anthro CO2. The sinks near an urban site will not distinguish between anthro and natural. Neither will the sinks in rural sites. The only thing that matters is that we’re putting out twice the amount that’s being added to the air. Basic arithmetic is hard to refute, and one needs to make all sorts of weird contortions to deny it.
Do you have any data sets showing natural sources that have increased over recent history, or natural sinks that have decreased? So far these ideas seem little more than speculation.

Gail Combs
June 9, 2012 9:56 pm

Oh, Leigh, I should also stick in the link to this discussion on the old data. Tony, if I remember correctly, has access to a really great library and can get stuff not available on the internet.
Historic variations in CO2 measurements

….The apparent considerable natural variation in CO2 (see figure 3), due to ocean-to-air exchange (amongst other factors), puts the apparently irrational variable figures from the 19th Century onwards into context, yet IPCC AR4 suggests a remarkably constant 285 ppm at this time, despite the expected outgassing and inflow caused by variability in ocean temperatures. The IPCC icon is Mauna Loa, so it is instructive to go to the oracle to see what that says about variability…
so the 335 ppm to 368 ppm again puts the observed variability in the historic samples in much better context. The overall effect of taking CO2 measurements at Mauna Loa, situated on top of an active volcano at over 3000 m altitude and surrounded by a constantly outgassing warm ocean, shall be left to others to debate, but the averaging disguises the considerable daily variability….
…This analysis seems at variance with the information now available, and that Keeling later came to believe in the accuracy of the old measurements he had previously rejected as being too high is demonstrated in his own autobiography. Ironically, Callendar in the last years of his life also doubted his own AGW hypothesis. Similarly, whilst Arrhenius’ first paper on the likely effect of doubling CO2 – with temperature rises up to 5 °C – is often quoted, his second paper ten years later – when he basically admitted he had got his initial calculations wrong – is rarely heard….

So Tony also understands all that averaging of the data “disguises the considerable daily variability”

June 10, 2012 1:52 am

Gail Combs says:
June 9, 2012 at 8:57 pm
GOOD GRIEF! Do you not understand what an AVERAGE DOES! I can take five lots of 10 midgets and 10 basketball players measure them and say the height for these lots of people is 5 foot 11.3 inches +/- 2 inches. Because I used AVERAGES and not individuals I completely lost the variability!
Please Gail! You don’t like to accept that the daily, or even monthly, or even yearly variations do not have the slightest measurable impact on the greenhouse effect. We are not talking about drugs with a very pronounced dose-effect relationship within a few hours, but about CO2, where you need at least a 10% change in level over at least 10 years to have a measurable effect (if any).
And have a look at the hourly averaged raw data from Mauna Loa and the South Pole for 2008, compared to the “cleaned” daily and monthly averaged:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
It doesn’t make any difference for the radiation effect if the CO2 should momentarily change from 200 to 800 ppmv and back in some location, as that has an influence which is simply unmeasurable. Only the long-term average is of interest.
BTW, your +/- 2 inches would be in error, the stdv is much larger and gives you a pretty good idea of the variability in the combined lot…
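Ferdinand’s claim that momentary swings are radiatively irrelevant can be illustrated with the commonly cited simplified forcing expression ΔF = 5.35·ln(C/C0) W/m² (Myhre et al., 1998); the concentrations below are just example numbers:

```python
import math

def co2_forcing(c_new, c_ref, alpha=5.35):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return alpha * math.log(c_new / c_ref)

# A sustained 10% rise in the well-mixed background: clearly measurable
sustained = co2_forcing(390.0 * 1.10, 390.0)  # about 0.51 W/m^2

# A brief local excursion that barely moves the global monthly mean
local = co2_forcing(390.01, 390.0)            # about 0.00014 W/m^2
```

Because the forcing depends logarithmically on the long-term mean concentration, a short-lived local spike that washes out of the global average contributes essentially nothing.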

FerdiEgb
June 10, 2012 2:27 am

Gail Combs says:
June 9, 2012 at 9:29 pm
So to put it bluntly the error in Keelings data was probably around +/- 10ppm or more when he started and +/- 5ppm or more (SWAG) in the 60s and 70s.
The error in the first decades was +/- 1 ppmv (all hand calculations from long analog rolls) and in recent decades +/- 0.2 ppmv. As said before, Keeling froze out the main problem, water vapour. Further, the procedure includes an hourly calibration with 2 (nowadays 3) calibration gases. Thus whatever the instrument’s individual properties, or whatever the drift of the instrument over time, that is nearly fully taken into account.
Besides that, at several points on earth (including Mauna Loa), flask samples are taken as reference, which are independently checked by different (even competing: Scripps against NOAA) labs. Some again use NIR, some use GC for the analyses, and Scripps still uses its manometric method to test its own flask samples from Mauna Loa. The different lab results and continuous analyses are within 0.2 ppmv, see:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html#replication
A few stations even use an automated GC for continuous CO2 analyses. All show the same “smooth” (if averaged over a year) trends over time…

Allan MacRae
June 10, 2012 2:44 am

Thanks for the link Barry.
My problem is that of these many measurement sites, it appears that few if any are URBAN.
http://ds.data.jma.go.jp/gmd/wdcgg/cgi-bin/wdcgg/catalogue.cgi
It appears that everyone wants to measure atmospheric CO2 at sites that are as pristine (non-urban) as possible. This is understandable.
If you know of any urban sites with continuous 24-hour CO2 readings, I would be pleased to see them.
Barry, you say at June 9, 2012 at 9:53 pm
“As you say, nature doesn’t care about anthro CO2. The sinks near an urban site will not distinguish between anthro and natural. Neither will the sinks in rural sites. The only thing that matters is that we’re putting out twice the amount that’s being added to the air. Basic arithmetic is hard to refute, and one needs to make all sorts of weird contortions to deny it.”
Barry, my suggestion (or weird contortion of basic arithmetic, as you call it) could be this:
IF (as it appears from the limited urban atmospheric CO2 data I’ve seen) these humanmade CO2 emissions are sequestered locally, close to the urban source, they do NOT form part of a large, global quasi-equilibrium of CO2 between vegetation, soil and water. The CO2 is just gone – sequestered locally, by whatever means this happens (probably some form of biological and soil sequestration).
The notion that the CO2 elsewhere in the world has to compensate for this localized near-urban sequestration according to some large mass balance equation is the false assumption. The natural CO2 flux in the rest of the world just carries on as if this urban/near-urban CO2 emission and sequestration took place on another planet – the global CO2 system is not significantly affected by the localized urban phenomena.

Allan MacRae
June 10, 2012 2:58 am

Gail, the scale in this AIRS animation goes from 364 to 386 ppm CO2. That seems reasonable to me.
The greatest contrasts (differences) appear in April-May of each year, and by July these have declined.
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

Gail Combs
June 10, 2012 3:20 am

barry says:
June 9, 2012 at 9:53 pm
…………….Do you have any data sets showing natural sources that have increased over recent history, or natural sinks that have decreased? So far these ideas seem little more than speculation…..
____________________________________________________
You might want to take a peek at the Kaplan Graph
CO2 background from 1826 to 2008 shows a very good correlation (r = 0.719 using data since 1870) to global SST (Kaplan, KNMI), with a CO2 lag of 1 year behind SST from cross-correlation (maximum correlation: 0.7204). Kuo et al. 1990 derived a 5-month time lag from MLO data alone compared to air temperature.
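The lagged cross-correlation used to derive that 1-year lag can be sketched as follows; the series here are synthetic stand-ins, not the actual Kaplan SST or MLO data:

```python
import numpy as np

def best_lag(x, y, max_lag=24):
    """Lag (in samples) at which y correlates best with x;
    positive means y lags behind x."""
    def corr_at(lag):
        if lag > 0:
            return np.corrcoef(x[:-lag], y[lag:])[0, 1]
        if lag < 0:
            return np.corrcoef(x[-lag:], y[:lag])[0, 1]
        return np.corrcoef(x, y)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

# Synthetic monthly series: "CO2" follows "SST" with a 12-month delay
rng = np.random.default_rng(1)
sst = np.cumsum(rng.normal(0.0, 0.1, 600))           # red-noise temperature
co2 = np.roll(sst, 12) + rng.normal(0.0, 0.05, 600)  # delayed copy plus noise
lag_months = best_lag(sst, co2)                       # close to 12
```

Scanning correlations across candidate lags and taking the maximum is the standard way such "CO2 lags SST by N months" figures are produced, regardless of which side of this debate one is on.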

FerdiEgb
June 10, 2012 4:53 am

Gail Combs says:
June 10, 2012 at 3:20 am
You might want to take a peek at the Kaplan Graph
CO2 background from 1826 to 2008 shows a very good correlation (r = 0.719 using data since 1870) to global SST (Kaplan, KNMI), with a CO2 lag of 1 year behind SST from cross-correlation

Except that the 1826-1960 period is based on Beck’s compilation, where the 1942 “peak” is not based on any background data, but only on highly fluctuating locally contaminated data (Giessen: 68 ppmv, 1 sigma) and methods with low to extreme low accuracy.
Further have a look at the sea surface temperature and CO2 in ice cores until 1960 (and an overlap with SPO 1960-1980) and CO2 levels for the period after 1960, where better data are available:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg
Overall, there are several distinct periods: 1910-1945 and 1975-2000 with warming, 1945-1975 with cooling, and 2000-present, which is flat. Despite that, in all periods CO2 goes monotonically up in near-exact ratio with human CO2 emissions, while the correlation with temperature in the period 1945-1975 is even negative.