What can we learn from the Mauna Loa CO2 curve?

Guest post by Lance Wallace

The carbon dioxide data from Mauna Loa is widely recognized to be extremely regular and possibly exponential in nature. If it is exponential, we can learn about when it may have started “taking off” from a constant pre-Industrial Revolution background, and can also predict its future behavior. There may also be information in the residuals—are there any cyclic or other variations that can be related to known climatic oscillations like El Niños?

I am sure others have fitted a model to it, but I thought I would do my own fit. Using the latest NOAA monthly seasonally adjusted CO2 dataset running from March 1958 to May 2012 (646 months) I tried fitting a quadratic and an exponential to the data. The quadratic fit gave a slightly better average error (0.46 ppm compared to 0.57 ppm). On the other hand, the exponential fit gave parameters that have more understandable interpretations. Figures 1 and 2 show the quadratic and exponential fits.


Figure 1. Quadratic fit to Mauna Loa monthly observations.


Figure 2. Exponential fit

 

From the exponential fit, we see that the “start year” for the exponential was 1958 − 235 = 1723, and that in and before that year the predicted CO2 level was 260 ppm. These values are not far off the estimated level of 280 ppm that held until the Industrial Revolution. It might be noted that Newcomen invented his steam engine in 1712, although the start of the Industrial Revolution is generally considered to be later in the century. The e-folding time (for the incremental CO2 levels > 260 ppm) is 59 years, corresponding to a doubling time of 59 ln 2 ≈ 41 years.

The model predicts CO2 levels in future years as in Figure 3. The doubling from 260 to 520 ppm occurs in the year 2050.


Figure 3. Model predictions from 1722 to 2050.
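The doubling year quoted above follows directly from the fitted parameters. A minimal check (note: the ~55 ppm increment in 1958 assumes a ~315 ppm level that year, which is not stated in the post):

```python
import math

# Exponential model: CO2(t) = B + I0 * exp((t - t0) / tau)
B = 260.0       # baseline (ppm), from the fit quoted in the post
tau = 59.0      # e-folding time (years), from the fit
t0 = 1958.0     # reference year
I0 = 315.0 - B  # increment above baseline in 1958 (~315 ppm is an assumed 1958 level)

# Year at which the increment equals the baseline, i.e. CO2 = 2 * 260 = 520 ppm
doubling_year = t0 + tau * math.log(B / I0)
print(round(doubling_year))  # 2050, matching the post

# Doubling time of the increment itself
print(round(tau * math.log(2)))  # 41 years
```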

The departures from the model are interesting in themselves. The residuals from both the quadratic and exponential fits are shown in Figure 4.


Figure 4. Residuals from the quadratic and exponential fits.

Both fits show similar cyclic behavior, with the CO2 levels higher than predicted from about 1958-62 and also 1978-92. More rapid oscillations with smaller amplitudes occur after 2002. There are sharp peaks in 1973 and 1998 (the latter coinciding with the super El Niño). Whether the oil crisis of 1973 has anything to do with this I can’t say. For persons who know more than I do about decadal oscillations, these results may be of interest.

The data were taken from the NOAA site at ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt

The nonlinear fits were done using Excel Solver and placing no restrictions on the 3 parameters in each model.
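A rough equivalent of the Excel Solver fit can be sketched with scipy’s `curve_fit`. Here synthetic data built from the parameters quoted above stand in for the NOAA monthly file, so this only demonstrates the fitting machinery, not the actual result:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, baseline, inc0, tau):
    # Three-parameter exponential: CO2(t) = baseline + inc0 * exp((t - 1958) / tau)
    return baseline + inc0 * np.exp((t - 1958.0) / tau)

# Synthetic "observations" from the quoted parameters (260 ppm baseline,
# 59-year e-folding time, ~55 ppm increment in 1958) plus noise; with the
# real NOAA file these would be the measured monthly values.
t = np.arange(1958, 2012, 1 / 12)
co2 = model(t, 260.0, 55.0, 59.0) + np.random.default_rng(0).normal(0, 0.5, t.size)

# No restrictions on the three parameters, as in the Solver fit
popt, _ = curve_fit(model, t, co2, p0=(280.0, 40.0, 50.0))
print(popt)  # recovers roughly (260, 55, 59)
```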

June 2, 2012 3:12 pm

Interesting. Need to check the results of other observatories (their records are shorter), to see whether the deviations are global or local.

noaaprogrammer
June 2, 2012 3:22 pm

The late 1980s show a large departure. It would be interesting to plot these residuals against the rates of increase/decrease in global temperature.

June 2, 2012 3:25 pm

The fluctuations appear to fit global temperature fluctuations. This intuitively would make sense because warming sea surface temperatures would cause release of CO2 to the atmosphere, and conversely, as sea surface cools, would increase CO2 absorption into the oceans from the atmosphere.

wsbriggs
June 2, 2012 3:29 pm

Because I’m an all-round bad guy, I’ll suggest that you could also have fit the data with a scaling distribution, you know, an asymmetrical one with a long tail.
Interesting that it appears the ramp starts kind of when the LIA started petering out. But that couldn’t have anything to do with it…

mondo
June 2, 2012 3:42 pm

Shouldn’t we be looking at this sort of data on a logarithmic rather than arithmetic Y-scale?

Just some guy
June 2, 2012 3:43 pm

Hmmm. It can’t follow the same formula forever, of course. If it did, it appears we’ll hit one million parts per million by the year 2540.

Kasuha
June 2, 2012 3:45 pm

I’d suggest fitting a sine curve to it, too.
And another thing which might be interesting to look at is annual cycle. That would require unadjusted data, though. Does annual cycle follow the concentration curve (i.e. does its amplitude grow with concentration) or does it stay constant?

tango
June 2, 2012 3:46 pm

Mauna Loa had better stop spewing out that CO2 crap, otherwise it will be slapped with a carbon tax. Australian PM Gillard will find a way

Joachim Seifert
June 2, 2012 3:58 pm

All this means that CO2-doubling will be completed by 2050 along with
the climate forcing of 3.7 W/m2…… thus earlier than 2100 as given by
AGW…
Which GMT would result in 2050?

Ian George
June 2, 2012 4:03 pm

The last volcanic eruption of Mauna Loa was in 1984. Prior to that it had erupted quite regularly with 39 eruptions since 1832. Would this be a factor?

Rosco
June 2, 2012 4:18 pm

Doesn’t the constant emission of volcanic gases from the national park mean that perhaps this was not the best choice of monitoring sites for a gas that is known to be emitted from volcanoes ?
I’m sure they adjust the data to compensate.

Latitude
June 2, 2012 4:23 pm

If this trend continues……
who would have thought that man’s measly 5%

Billy Liar
June 2, 2012 4:23 pm

Joachim Seifert says:
June 2, 2012 at 3:58 pm
Which GMT would result in 2050?
Your guess is as good as anyone’s …

June 2, 2012 4:26 pm

My shot at explaining the residuals is that the dips reflect recessions in the West — the 70s and early 90s. So why isn’t there a dip for 2008+? Ans: China never had a recession.

mobihci
June 2, 2012 4:35 pm

I would say the response to human emissions would be linear, due to the time frame being considered falling mainly within an upward slope of temperature rise.
If the natural component due to increasing ocean heat content is considered, then the ‘man made’ portion will be a fraction of the increase. To just draw a line through a complex system smacks of climate science at work.
What happens when the temps fall is the question, not what curve it fits.

David L. Hagen
June 2, 2012 4:58 pm

Thanks Lance. Keep exploring.
For some detailed CO2 data and analysis with latitude and time see:
Fred H. Haynie, Future of Global Climate Change http://www.kidswincom.net/climate.pdf
And CO2 & OLR
David Stockwell in his Solar Accumulation theory predicts a Pi/2 (90 degree) phase lag for ocean temperature vs solar forcing for the Schwabe solar cycle, e.g. 2.75 years for the 11-year solar cycle forcing. See Key evidence for the accumulative model of high solar influence on global temperature
From that I predicted a similar 3 month (12/4) lag between annual solar forcing and ocean temperature, and thus the CO2 signal. Similarly, the Arctic lag should be displaced 6 months from the Antarctic lag in the annual cycle. This appears to be supported by Haynie’s slides 16, 10, 11 and 18/59:

Adding the 9 day or 0.3 mo difference between the winter solstice (Dec 22) and January 1st nominally gives 0.23 year (3 mo) and 0.63 year (7.6 mo) delays from the insolation minimums to the CO2 minimums. The CO2 curves also look like a ~0.4 year or 5 month difference between them. That is close to the 0.5 year (6 mo) difference expected from the Northern versus Southern hemisphere annual solar cycle. These phase lags suggest the CO2 pulses are driven by the temperature changes, which lag the insolation by 90 degrees (2.75 years for the solar cycle, or 3 months for the annual cycle), supporting Haynie’s evidence. See Haynie’s slide 18/59

(Note the Antarctic lag is reduced from the solar forcing because of less polar latitude). See also the CO2 & Temperature discussion under On Torturing the Data: http://judithcurry.com/2011/09/15/on-torturing-data/
The CO2 lags correspond with the ocean temperature lags predicted from the annual solar forcing, and the Arctic lag differs from the Antarctic lag by ~6 months; together these provide strong evidence for solar-driven temperature, with the annual CO2 variations responding to the temperature.
For economic activity vs CO2 see:
Detection of Global Economic Fluctuations in the Atmospheric CO2 Record

Werner Brozek
June 2, 2012 5:08 pm

Just some guy says:
June 2, 2012 at 3:43 pm
Hmmm. It can’t follow the same formula forever, of course. If it did, it appears we’ll hit one million parts per million by the year 2540.

As well, the oxygen would get too low for life to exist. See
http://www.disclose.tv/forum/atmospheric-oxygen-levels-fall-as-carbon-dioxide-rises-t29534.html
“…we are losing nearly three O2 molecules for each CO2 molecule that accumulates in the air.”
“if the oxygen level in such an environment falls below 19.5% it is oxygen deficient, putting occupants of the confined space at risk of losing consciousness and death.”

Harold Pierce Jr
June 2, 2012 5:21 pm

EVERYBODY PAY ATTENTION!!! THE FOLLOWING IS SUPER IMPORTANT!!!
The concentration of CO2 in the atmosphere as determined at Mauna Loa is valid only for highly-purified, bone-dry air, which is comprised only of nitrogen, oxygen, the inert gases, and carbon dioxide and which does not occur anywhere in the atmosphere. In real air there is always water vapor, and the concentrations of the gases are lowered in proportion to the volume fraction of water vapor.
The use of the concentration of CO2 based upon the data from Mauna Loa is an absolute fatal flaw for all climate model calculations. For fluid dynamic calculations, mass per unit volume should be used.
At STP (273 K and 1 atmosphere pressure), one cubic meter of dry air presently has about 390 ml of CO2 (390 ppmv), or 17.4 mmol. If this dry air is heated to ca. 333 K (60 deg C), which is slightly higher than the maximum temperature ever recorded in the desert in Pakistan, the concentration of CO2 is still 390 ppmv but the amount is 14.3 mmol. If the dry air is cooled to 183 K (-90 deg C, the lowest temperature ever recorded in Antarctica), the concentration of CO2 is still 390 ppmv but the amount is 26 mmol.
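The mole arithmetic above can be verified with the ideal gas law, n = PV/RT; a minimal sketch using standard constants:

```python
# Amount of CO2 in the CO2 fraction of 1 m^3 of air at 390 ppmv,
# evaluated at the three temperatures quoted above.
R = 8.314       # gas constant, J/(mol K)
P = 101325.0    # pressure, Pa (1 atm)
V_co2 = 390e-6  # m^3 of CO2 per m^3 of air (390 ppmv)

def co2_mmol(T):
    # n = P * V / (R * T), converted to millimoles
    return 1000 * P * V_co2 / (R * T)

for T in (273.0, 333.0, 183.0):
    print(f"{T:.0f} K: {co2_mmol(T):.1f} mmol")  # 17.4, 14.3, 26.0 mmol
```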
The mass of atmospheric gases in any unit volume of the atmosphere depends upon temperature, pressure and absolute humidity. Weather maps show there is no uniform distribution of temperature, pressure and relative humidity in space and time. Thus there is no uniform distribution of the greenhouse gases in real air.
The reason the climate scientists say the greenhouse gases are well-mixed is due to the methods of atmospheric gas analysis. In general a sample of local air is filtered to remove particles, dried to remove water, and scrubbed to remove volatile organic compounds, oxides of sulfur and nitrogen, and CFC’s. This procedure produces the highly-purified bone-dry air mentioned above.
The composition of local air from remote locations is fairly uniform throughout the world except for minor variations in the concentration of CO2. However, in locales where there is a lot of human activity the concentration of CO2 can vary greatly in space and time, such as during rush hour in major cities or in winter in temperate zones.
All the guys who do atmo. gas analyses know what I have stated above is the absolute truth. But they keep their mouths shut so the climate scientists can make the claim that the greenhouse gases are well-mixed, and to avoid vilification. We all know what happened to Ernst Beck after he published his review of atmospheric CO2 gas analyses.
For really good data on the composition and properties of the atmosphere go to:
http://www.uigi.com/air.html
Ya know, these climate science guys really don’t know what they are doing.

June 2, 2012 5:24 pm

I plotted Mauna Loa with tropics temps over at woodfortrees and there is no correlation.

NZ Willy
June 2, 2012 5:28 pm

I presume the curve matches the world population curve without much of a residual. Still, the CO2 sinks into the oceans eventually, don’t know how fast that happens.

fp
June 2, 2012 5:30 pm

So strange that the curve follows such a simple mathematical pattern and doesn’t seem correlated to economic activity, world oil consumption, etc. The residual doesn’t seem correlated either. See http://www.indexmundi.com/energy.aspx Oil consumption was dropping in the early 80’s and here the residual is increasing.
The fluctuations in the residual do seem to track the fluctuations in global temperatures though. Makes me wonder if outgassing from the oceans is what’s driving CO2, not fossil fuel use.

June 2, 2012 5:38 pm

I recall reading that the guys on Mauna Loa throw out the “outliers” when they take CO2 readings. The reasons for throwing out the high CO2 readings involved possible burps of the volcano, and mobs of exhaling tourists in belching busses. However the reasons for throwing out the low readings of CO2 always puzzled me (and made me a little nervous, because if you throw out enough low readings you can get the reading that “fits” a preconceived theory).
My understanding was that the low readings were due to “upslope winds.” Can anyone explain what happens downslope that uses up the CO2? Lush foliage?

AnonyMoose
June 2, 2012 5:52 pm

There may also be information in the residuals—are there any cyclic or other variations that can be related to known climatic oscillations like El Niños?


http://www.woodfortrees.org/plot/esrl-co2/isolate:60/mean:12/scale:0.2/plot/hadcrut3vgl/isolate:60/mean:12/from:1958
(from http://forum.wetteronline.de/showthread.php?p=181540 )
It looks like the temperature changes are similar to El Nino/La Nina, with CO2 lagging temperature.

Dr Burns
June 2, 2012 5:55 pm

From 1948, there seems to have been a steady downward trend in global atmospheric RH. Could this affect the Mauna Loa CO2 measurement?
http://members.shaw.ca/sch25/FOS/GlobalRelativeHumidity300_700mb.jpg

just some guy
June 2, 2012 5:57 pm

Harold Pierce Jr says:
…For really good data on the composition and properties of the atmosphere go to: http://www.uigi.com/air.html
Harold, don’t take this the wrong way, but that link leads to some sort of classified ads website.

Robert of Texas
June 2, 2012 6:02 pm

What I fail to understand is how climate scientists attribute all the increasing CO2 to man… If
– man only contributes (less than) 5% of the total new atmospheric CO2 annually
– the amount of man-made CO2 is increasing faster than most models assumed
– CO2 is the primary cause of warming
– since 1992 (or there-abouts) most global warming is caused by humans
then shouldn’t we see a deviation of some sort from a nice curve? (starting around 1992)
If on the other hand the ocean’s are degassing we would not see such a man-made deviation – it would be too small to measure. Hmm, and that’s what we see…

Bill Illis
June 2, 2012 6:07 pm

I think if you take more recent data, you will find the growth starting to approach a more linear rate. The latest numbers are accelerating at only 0.002 ppm/yr/yr, which is very, very slightly exponential.

Pamela Gray
June 2, 2012 6:08 pm

Anything as regular as this data says one of two things.
1. A manmade CO2 pump sitting next to the sensor that never shuts off and is exquisitely tuned to a rhythmic increasing beat.
2. An artifact of the “fudge” factor part of the CO2 calculation.
Of these two scenarios, I think #2 has the greater chance of being the culprit. It is exceedingly rare for anything on Earth to be that regular (even if caused by human pollution) unless someone fine tuned it to be that regular. It’s like finding a perfectly square rock in the mountains and finding out nature made it. Ain’t gonna happen. Chances are something that regular is wholly artifact. That a person can build a simple model to express the regularity of the signal is revealing, to say the least. Does someone have the complete maths sequence for the CO2 calculation?

Bill Yarber
June 2, 2012 6:14 pm

The Earth has been warming since the LIA, a 500 year stretch of lower temperatures on land and sea surfaces. We have warmed about 1C since 1860 and over 2C since the coldest portion of the LIA – right around 1730. Can anyone calculate how much net CO2 the world’s oceans would outgas with 2C warming of sea surface temperatures? My guess is that amount will account for 90+% of the additional CO2 in our atmosphere. Would make a great research paper.
I’m also willing to guess that, if Earth’s temps remain relatively constant for another decade, the rate of increase in CO2 concentration will slow back to 0.7 ppm/year.
Bill

June 2, 2012 6:24 pm

Mauna Loa may not be active right now, but the whole of Big Island is volcanic and various parts erupt at different times. Kilauea is on the windward side and the station is on the lee side of Mauna Loa; I actually saw the station from the saddle road last year. Kilauea has been erupting non-stop for about 30 years now, but volcanoes do not have to be erupting to release CO2. I still naively feel that cyclical changes in the CO2 output of this place must be taken into account, but they don’t seem to be.

Doug Badgero
June 2, 2012 6:45 pm

“Interesting that it appears the ramp starts kind of when the LIA started petering out. But that couldn’t have anything to do with it…”
+1
What we know:
There is about 50 times as much CO2 in the oceans as there is in the atmosphere.
Warming water will release CO2 from solution.
The seasonal and short term changes in CO2 correlate well with temperature and lousy with anthro emissions.
Considering only the amount of anthro emissions, the CO2 levels in the atmosphere should be rising faster.
Yet, most scientists don’t seem to even consider the possibility that the rising CO2 levels are due primarily to the long term warming of the earth and oceans since the LIA.

June 2, 2012 6:45 pm

To NZ Willy,
The Arctic Ocean is the big sink that is covered with ice much of the year. When the ice is at a minimum, the net sink rate is around 50 ppm/year over the cold open water.

jorgekafkazar
June 2, 2012 7:16 pm

Thanks, Lance, this was fun. But there’s not a whole lot of science, here. Using a curve fit for a 56 year period, then extrapolating backwards and forwards is very shaky. There’s no reason to assume that the multiple mechanisms resulting in atmospheric CO2 concentrations are identical at the start and end of the period; thus the exponential, though convenient, is not necessarily valid as a predictor or analytical tool of any sort.
If you have the time, though, you might want to push this a stage further. At a minimum, I’d like to see error bars fore and aft, as well as a correlation coefficient adjusted for autocorrelation. I think you’ll find your 1723 date should be ±200 years. Worse, trying to tie the date of the knuckle of the exponential to any invention is completely unjustified. Natural CO2 variation may likely overwhelm any man made emissions, distorting the actual curve beyond recognition.
NZ Willy says: “I presume the curve matches the world population curve without much of a residual…”
Bad assumption.

Harold Pierce Jr
June 2, 2012 7:48 pm

just some guy says on June 2, 2012 at 5:57 pm:
Harold Pierce Jr says:
…For really good data on the composition and properties of the atmosphere go to: http://www.uigi.com/air.html”
Harold, don’t take this the wrong way, but that link leads to some sort of classified ads website.
You are right. There is real monkey business going on. I called the company and left a message. I will call on Monday to find out what is happening. Maybe they don’t know.
I googled: Universal Industrial Gases, Inc. The company name with their main web url comes up and links to all their other sites. If you click on any of these, you are taken to the ad site.
UIGI is a really big company. I can’t imagine their webmaster hasn’t found out about the “hijacking” of their url.
However, most of their industrial customers probably don’t use the links, and use email to contact the company directly.

June 2, 2012 7:59 pm

I note that NOAA are trumpeting a record-breaking 400ppm CO2 in the Arctic http://researchmatters.noaa.gov/news/Pages/arcticCO2.aspx

Brian H
June 2, 2012 8:32 pm

Harold;
That domain leads nowhere now; uigi.com is a blank screen, all sub-pages included.

George E. Smith;
June 2, 2012 8:40 pm

“””””…..I am sure others have fitted a model to it, but I thought I would do my own fit. Using the latest NOAA monthly seasonally adjusted CO2 dataset running from March 1958 to May 2012 (646 months) I tried fitting a quadratic and an exponential to the data. The quadratic fit gave a slightly better average error (0.46 ppm compared to 0.57 ppm). On the other hand, the exponential fit gave parameters that have more understandable interpretations. Figures 1 and 2 show the quadratic and exponential fits……”””””
So I didn’t see any conclusion as to whether the Mauna Loa CO2 data best fits an exponential curve or whether a power series curve is a better fit for the 1722 to 2050 data.
Just suppose we could (well, I’m not at all saying this is possible, but just for now assume it might be) somehow best fit an exponential curve itself to a power series; some generic form; for example,
like exp(x) = ax/1! + bx^2/2! + cx^3/3! + dx^4/4! + ….. you get the idea, where a, b, c, d, etc are arbitrary parameters to be statistically determined from a sequence of values for exp(x), over some useful range of that function.
Now if you used that power series instead of the exponential function, do you think you could get an even better fit to the Mauna Loa CO2 data?
It might be interesting to try.
Now what would happen if you used the actual real measured data; like what actually was found at the top of Mauna Loa, instead of some seasonally adjusted substitute ? Was Briffa’s Yamal Christmas tree seasonally adjusted ?
Why did you choose to start your extrapolated prediction; excuse me, that’s projection, from the year 1722? Aren’t you concerned about being accused of cherry picking, by selecting that year; rather than say 1769, the year that Captain Cook (re)discovered New Zealand?

DocMartyn
June 2, 2012 9:16 pm

CO2 is a biotic gas, and so changes in the biota affect CO2 and vice versa. If you want a gas that is a marker of changes in the temperature of ocean vs. atmosphere, then Argon works nicely. Keeling has also looked at the changes in the Ar/N2 ratio at Mauna Loa. I don’t know where the archive for this data is.

Steve Keohane
June 2, 2012 10:20 pm

Dr Burns says: June 2, 2012 at 5:55 pm
Dr. Burns, I have a couple copies of that chart, but do not know the source, nor what data was used to compile the graph. Do you? Any information would be appreciated. Thanks
Here is a more recent version: http://i48.tinypic.com/2qlfnzn.jpg

michael hart
June 2, 2012 10:36 pm

Eureqa Formulize, a free program from Cornell Creative Machines Lab is a tool which allows researchers to sift through huge numbers of different curve fitting algorithms. That is, it not only does the curve-fit and regression analysis for one specified formula, it searches for other formulae according to the user’s choices of mathematical form. Well worth a look.
Of course, it should also carry a scientific ‘health warning’ cautioning of the perils of letting yourself be seduced into unwarranted conclusions by the ease of squiggle-matching with a computer.
[Idiots with computers can cause a lot of grief, but most readers of this blog probably don’t need that warning.]

June 2, 2012 11:25 pm

Harold Pierce Jr says:
For fluid dynamic calculations, mass per unit volume should be used.
And on what do you base this rather unusual claim?
I am afraid that the substance of your claim is no more solid than your link.

June 2, 2012 11:29 pm

The projected co2 chart is scary only ‘cuz it assumes unlimited supply of fossil fuels (as in IPCC scenarios). It is rather more reasonable (423-ppm 2029) when adjusted to reflect peak oil, peak gas & peak coal: http://trendlines.ca/free/climatechange/index.htm#fossil

Laws of Nature
June 3, 2012 12:21 am

Dear Anthony et al.
Just yesterday I was looking if there would be any news on the discussion of the Essenhigh paper
I found this:
http://www.skepticalscience.com/news.php?n=1259
And would like to direct the interested reader to comments #20 and #26 there.
These two comments kind of summarize in my opinion about the state of the discussion:
Essenhigh came up with a model which basically removes anthropogenic CO2 from the equation due to its low residence time; the published answer by Cawley uses a very similar model to “reinstate” the anthropogenic CO2, but fails to disprove Essenhigh’s assumptions and conclusions.
It seems when asked to comment there Essenhigh even said something like “you have your model, I have mine”.
Now here is my question: Is there anywhere a full discussion of the current state of the science? Or perhaps would you be willing to ask Essenhigh or Segalstad to comment on the Cawley paper here at WUWT?
Personally I think this is one of the most important questions, if Essenhigh’s estimates have some merit! As shown by him (and also mentioned in comment #20 there), the fact that we produce more anthropogenic CO2 than remains in the atmosphere every year is not proof that this is the cause of anything (due to the low residence times for CO2), especially since we are still near the end of the Little Ice Age and are seeing centuries of high solar activity.
REPLY: I’ll look into this – Anthony

Shyguy
June 3, 2012 12:26 am

Looks to me like the CO2 records got corrupted just like everything else the IPCC gets its hands on.
Dr. Tim Ball explaining:
http://drtimball.com/2012/pre-industrial-and-current-co2-levels-deliberately-corrupted/

MikeG
June 3, 2012 12:48 am

Sorry, your curve fitting is quite meaningless and has no predictive properties whatever. The data would make an equally convincing fit to a sine curve, and many other functions.

Bart
June 3, 2012 1:21 am

This question has been solved. The derivative of CO2 tracks the variation in sea surface temperature remarkably well. Temperature drives CO2. Human inputs are rapidly sequestered and have no significant observable impact.
A simple analogous (not precise in every detail, but able to provide guidance as to physically possible and plausible behavior) system model is as follows:
dC/dt = (Co – C)/tau1 + k1*H
dCo/dt = -Co/tau2 + k2*(T-To)
C = atmospheric CO2 content
H = human input
Co = nominal set point of CO2 in the atmosphere dictated by temperature
tau1 = fast time constant
tau2 = slow time constant
k1, k2 = coupling constants
For tau2 long, Co becomes approximately equal to the integral of the temperature anomaly with respect to a particular baseline – in effect, you can say that approximately over a relatively short timeline
dCo/dt := k2*(T-To)
With tau1 short, the H input is attenuated to insignificance and C tracks Co tightly. The requirement that C track Co tightly means that tau1 must be short and must attenuate H to a level of insignificance.
Starting here, I show a series of simulations demonstrating this. Use the “Next” button to forward through the plots, the last one at viewing 6 of 29.
The prevailing paradigm simply does not make sense from a stochastic systems point of view – it is essentially self-refuting. A very low bandwidth system, such as it demands, would not be able to have maintained CO2 levels in a tight band during the pre-industrial era and then suddenly started accumulating our inputs. It would have been driven by random events into a random walk with dispersion increasing as the square root of time. I have been aware of this disconnect for some time. When I found the glaringly evident temperature to CO2 derivative relationship, I knew I had found proof. It just does not make any sense otherwise. Temperature drives atmospheric CO2, and human inputs are negligible. Case closed.
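The two-equation system above can be integrated with a simple Euler step to see the claimed tracking behavior; the parameter values here are illustrative assumptions only, not fitted values:

```python
# Euler integration of the analogous system:
#   dC/dt  = (Co - C)/tau1 + k1*H
#   dCo/dt = -Co/tau2 + k2*(T - To)
# With tau1 short and tau2 long, C should track Co closely and a
# constant human input H should leave only a small offset (~tau1*k1*H).
tau1, tau2 = 2.0, 1000.0  # fast and slow time constants (years), assumed
k1, k2 = 1.0, 2.0         # coupling constants, assumed
H, dT = 1.0, 0.5          # constant human input and temperature anomaly, assumed

C, Co = 280.0, 280.0      # assumed common starting level (ppmv)
dt = 0.01
for _ in range(int(100 / dt)):  # 100 simulated years
    dC = (Co - C) / tau1 + k1 * H
    dCo = -Co / tau2 + k2 * dT
    C += dC * dt
    Co += dCo * dt

# Co has risen substantially, driven by the temperature term, while the
# gap C - Co stays small: the H input is attenuated, as claimed.
print(round(Co, 1), round(C - Co, 2))
```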

Jerker Andersson
June 3, 2012 1:22 am

The reason for the large variations in the error is that the model is just a mathematical curve fit and not based on what really controls the variation of the CO2 increase. I don’t think you can read much into the error for that model, except that it does not handle anthropogenic and natural variations.

Kelvin Vaughan
June 3, 2012 1:39 am

Werner Brozek says:
June 2, 2012 at 5:08 pm
“if the oxygen level in such an environment falls below 19.5% it is oxygen deficient, putting
occupants of the confined space at risk of losing consciousness and death.”
Here is an interesting article on CO2 levels that can be tolerated.
http://www.nap.edu/openbook.php?record_id=12529&page=112

Dave Walker
June 3, 2012 1:46 am

Caleb- Downslope from Mauna Loa are miles of recent lava flows and then miles of rainforest. The Observatory is 11,000 feet high and enjoys pretty steady wind. They throw out readings tainted with vog or low altitude pollutants. Not a perfect observatory but pretty good.

edim
June 3, 2012 1:51 am

The ML period (1959 – present) is enough to see some interesting correlations.
http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf
The annual growth in atmospheric CO2 correlates very well with the SST ‘anomaly’ and other global temperature indices. The correlation would be even better with the corresponding latitude band SST anomaly. Annual cycle is caused by the annual temperature cycle.
http://2.bp.blogspot.com/-AoUzuwoFQyA/T29AMKmFP7I/AAAAAAAABB8/O58gpDrQ-r4/s1600/co2_sst.gif
Interestingly, according to the dCO2 vs T correlation, it doesn’t take warming to raise the atmospheric CO2 – a sufficiently high constant temperature will cause rising CO2 (dCO2 = const). At some sufficiently low temperature CO2 will stop rising (dCO2 = 0). I once compared HADCRUT3 and ML dCO2 and got:
dCO2 = 2*Ta + 1.2 (R2 ~ 0.6).
Ta = 0.4, dCO2 = 2 ppm/year (like in this decade)
Ta = 0.0, dCO2 = 1.2 ppm/year (like in the ~70s)
If the correlation holds and global temperatures decline, dCO2 will decline too.
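The regression above can be evaluated directly; the coefficients are the ones quoted in the comment, and the zero-growth anomaly follows from them:

```python
# dCO2/dt (ppm/yr) = 2*Ta + 1.2, per the HADCRUT3 vs Mauna Loa fit quoted above
def dco2(ta):
    return 2.0 * ta + 1.2

print(dco2(0.4))   # 2.0 ppm/yr, as in this decade
print(dco2(0.0))   # 1.2 ppm/yr, as in the ~70s
print(-1.2 / 2.0)  # anomaly at which dCO2 = 0: Ta = -0.6
```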

June 3, 2012 1:52 am

I agree with Tim Ball here, that Jaworowski is crucial, and has been brutally trashed by CAGW rednecks for his temerity in challenging the corruption of the science. I personally tend to leave Beck aside: though I regard his evidence as very important, it involves too many distracting issues. I did a whole page on the CO2 issue way back in 2009 and it is still as relevant as ever.
People simply forget Henry’s Law, the titanic outgassing ability of the oceans in the tropics, and the ability of plants to suck in any spare CO2 – as the recent greening of the Sahel shows. These factors are what I believe the good Ferdinand Engelbeen fails to appreciate. And many others. The above “fit” is indeed seductive. But push the boundaries and the fit breaks down.
Now think. CO2 lags temperature by 800 years, according to Caillon et al. What happened 800 years ago?? Anyone?? And what cycle takes 800 years to happen?? Anyone??

June 3, 2012 2:10 am

Again the same discussions com up every few months…
To begin with: The Mauno Loa and other stations CO2 data are as solid as one can have on this earth. Indeed some of the raw data are discarded (still available, but not used for daily to yearly aveages), because they are influenced by downwind conditions from the nearby volvanic vents or by afternoon upwind conditions, which brings up slightly depleted by vegetation CO2 levels from the valey. Both give raw values which are +/- 4 ppmv around the “background” seasonally variable levels. Including or excluding these outliers for averaging doesn’t influence the yearly average or trends for more than 0.1 ppmv. The Mauna Loa “extremes” don’t exist at the South Pole, where the first continuous measurements ever were made, but they have more mechanical problems in the harsh conditions there. For an impression of the raw data vs. the “cleaned” averages at Mauna Loa and the South Pole see:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
take into consideration the blown up scale of CO2!
The raw data of 4 stations can be found at:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
The rules for exclusion of data and the calibration procedure at Mauna Loa can be found at:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
Pieter Tans of NOAA has been very cooperative in the past and sent a few days of raw instrument voltage data on simple request, so I could check the calculation procedure which is used to get the raw CO2 data. No problems found.
So far so good for that part.
Then the cause of the increase.
– The oceans are not the cause:
Purely on solubility parameters, an increase of 1°C causes an increase of 16 microatm of CO2 in the ocean surface waters, and thus, back in dynamic equilibrium, an increase of about 16 ppmv in the atmosphere. It doesn't matter how much CO2 is in the oceans; only the extra pressure matters. Take a 0.5, 1, or 1.5 litre bottle of coke and shake it: the pressure in the headspace will end up the same, regardless of the triple amount of liquid in the largest bottle or of how much is lost from the liquid into the gas space. The amounts are hardly important; the pressure (difference) is.
But an increase in temperature also increases the uptake by vegetation. The net result over very long periods is that an increase of 1°C in ocean temperature gives some 8 ppmv increase in CO2. Thus the ~1°C warming since the LIA gives at maximum an 8 ppmv increase of CO2. But we see an increase of over 100 ppmv since the start of the industrial revolution…
Moreover, an extra amount of CO2 from the oceans would increase the 13C/12C ratio of CO2 in the atmosphere, but we see a continuous decrease.
– vegetation is not the cause:
In principle, if there were more vegetation decay than growth, that would increase the total amount of CO2 and give the same 13C/12C decline as observed. But one can deduce the biological balance from the oxygen balance: somewhat less oxygen is used than calculated from fossil fuel use. Thus vegetation is a net oxygen producer, and therefore a net CO2 absorber.
Oh, by the way, a simple formula to calculate the CO2 level at any moment in the future (or past):
CO2(new) = CO2(old) + 0.55 × CO2(emiss) + 4 × dT
(all in ppmv)
That explains the trend and the temperature dependent variability…
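FerdiEgb's one-line update rule can be sketched as code. The emission and temperature series below are hypothetical placeholders chosen only to show the mechanics, not real data:

```python
# Sketch of the update rule above:
#   CO2(new) = CO2(old) + 0.55 * CO2(emiss) + 4 * dT   (all in ppmv)
# Emissions and temperature changes below are made-up illustrative values.

def co2_next(co2_old, emiss_ppmv, dT):
    """One-year step: 55% of emissions stay airborne, plus 4 ppmv per deg C of warming."""
    return co2_old + 0.55 * emiss_ppmv + 4.0 * dT

co2 = 315.0                      # hypothetical starting level, ppmv
emissions = [1.0, 1.1, 1.2]      # hypothetical yearly emissions, ppmv
temp_change = [0.0, 0.1, -0.05]  # hypothetical yearly dT, deg C

for e, dT in zip(emissions, temp_change):
    co2 = co2_next(co2, e, dT)

print(round(co2, 3))  # -> 317.015
```

Run forward, the emissions term supplies the steady trend and the dT term supplies the year-by-year wiggles, which is exactly the decomposition the comment claims.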

Kelvin Vaughan
June 3, 2012 2:13 am

The human body consists of 70% O2.
0.7 × 7,000,000,000 × 170 lb ≈ 833,000,000,000 lb of O2 bound up in the human population of the planet. As the population grows, we bind up more of the O2 in the atmosphere.
170lb is just my guess for the average weight of us humans.
That’s just humans. All biomass contains O2.
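The arithmetic can be checked in one line (the 170 lb average weight and the 70% oxygen fraction are the commenter's own figures):

```python
# Back-of-envelope check of the oxygen-in-humans estimate above.
# Both input figures are the commenter's guesses, not measured values.
population = 7_000_000_000
avg_weight_lb = 170
oxygen_fraction = 0.7

oxygen_lb = oxygen_fraction * population * avg_weight_lb
print(f"{oxygen_lb:.3e}")  # -> 8.330e+11
```

That comes to about 8.3 × 10^11 lb, i.e. roughly 833 billion lb.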

Goldie
June 3, 2012 2:16 am

Unfortunately, as a number of commenters have pointed out, the predictive part of the curve assumes unlimited and ever-increasing emissions.
Now, if humans are the cause, then there will ultimately be a point when the rate of increase in fuel consumption slows. Personally I doubt humans are the cause, because the curve is far too even to be of anthropogenic origin. Consider this, for example: when this curve started, most of Europe used open coal fires for heating; through a series of clean air legislation this changed, until ultimately most people were using gas. Equally, 1958 was the year the Ford Edsel came out; since then, cars worldwide have become increasingly fuel efficient. I would think these changes would have had some impact if humanity were the cause, but apparently not.
If the cause is a natural cycle then it will end, when it ends.

FerdiEgb
June 3, 2012 2:43 am

Laws of Nature says:
June 3, 2012 at 12:21 am
Just yesterday I was looking if there would be any news on the discussion of the Essenhigh paper
I found this:
http://www.skepticalscience.com/news.php?n=1259

The Essenhigh paper was fully responded to in another article in Energy & Fuels by Dr. Gavin Cawley:
“On the Atmospheric Residence Time of Anthropogenically Sourced Carbon Dioxide”
See: http://pubs.acs.org/doi/abs/10.1021/ef200914u
The essence of the error by Essenhigh (and many others) is that he uses the residence time of human CO2 in the atmosphere, which is short, but that doesn't tell us anything about how long it takes for an injection of an extra amount of CO2 (whatever its source) to bring the whole cycle back into dynamic equilibrium (the “adjustment” time)… It is like the difference between the throughput of goods, and thus capital, in a factory (which can be huge) and the financial gain of the same factory, which can be positive, zero, or negative, whatever the throughput is.
For some reason, it seems to be very difficult to see the difference between the residence time and the adjustment time, even for very knowledgeable people…

John Marshall
June 3, 2012 3:11 am

I don’t see how you can get an exponential growth from a cyclic process using a limited product.

FerdiEgb
June 3, 2012 3:16 am

Harold Pierce Jr says:
June 2, 2012 at 5:21 pm
What you write is true, but completely unimportant for the distribution of CO2 and other gases. The only reason the air is made bone dry for CO2 analyses is that water vapour interferes with the CO2 measurement. As water vapour is extremely variable in every direction, counting CO2 in dry air makes a comparison of CO2 levels at different altitudes and latitudes far easier.
The only place where the difference between dry and wet air matters is the CO2 exchange between the oceans (and plant alveoles) and the atmosphere. In that case the pCO2 in the atmosphere is used, thus taking water vapour into account. The difference between ppmv and pCO2 assumes full water-vapour saturation, but even then it is quite small.

Keitho
Editor
June 3, 2012 3:16 am

Bart really has this thing figured out.

FerdiEgb
June 3, 2012 3:31 am

Pamela Gray says:
June 2, 2012 at 6:08 pm
Anything as regular as this data says one of two things.
1. A manmade CO2 pump sitting next to the sensor that never shuts off and is exquisitely tuned to a rhythmic, increasing beat.
2. Artifact of the “fudge” factor part of the CO2 calculation.

Neither. In fact, the regularity indicates that no natural process is responsible for the increase. The regularity is caused by the regularity of the human emissions:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg
Even in times of economic crisis, the emissions hardly change and in general show a steady increase over time, which results in a near-constantly increasing amount remaining in the atmosphere. The year-by-year variability in the atmospheric increase rate is caused by temperature swings, but that largely cancels out over time.

FerdiEgb
June 3, 2012 3:49 am

Keith Battye says:
June 3, 2012 at 3:16 am
Bart really has this thing figured out.
Bart is a very good theoretician, but he makes an essential error: he only looks at the inputs. That makes the human input only a fraction of the total input, so it simply disappears in the cycle. An error many here seem to make. But as the mass balance proves, the natural sinks are larger than the natural sources, at least over the past 50+ years. Thus it doesn't make any difference how large the natural input is, or what its trend or variance is; the only thing that counts is the difference between natural inputs and outputs, which is negative over all those years, and the human input is the only source of the increase in the atmosphere. That doesn't need a fast response from the natural sinks: the observed sink rate is about 40 years half-life time…
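The mass-balance bookkeeping FerdiEgb appeals to can be illustrated with round, purely hypothetical numbers:

```python
# Mass-balance bookkeeping: if the observed atmospheric increase is smaller
# than human emissions, nature as a whole must be a net sink, regardless of
# how large the individual natural fluxes are.
# The figures below are illustrative round numbers, not measured data.

human_emissions = 8.0    # GtC/yr, hypothetical
observed_increase = 4.0  # GtC/yr, hypothetical

# observed_increase = human_emissions + (natural_sources - natural_sinks)
natural_net = observed_increase - human_emissions
print(natural_net)  # -> -4.0: natural sinks exceed natural sources by 4 GtC/yr
```

The point of the argument is that this subtraction needs no knowledge of the gross natural fluxes at all; only their net difference enters.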

Reply to  FerdiEgb
June 3, 2012 5:38 am

Ferdinand,
I would greatly appreciate it if you would do a peer review of http://www.retiredresearcher.wordpress.com and let me know where I may have made errors in my analysis. I agree with you that the CO2 data is our best source for following climate change. However, it is a lagging indicator. The effects of anthropogenic emissions on background levels, as reported in monthly averages, show up about 10 years after they were originally put into the air. These effects are added to ever-changing natural cycles that are indicated by the ice core and other proxy data. You can comment here or on my blog as you wish. Also, you can click on my name and then click on the URL there. I welcome others to do the same.

FerdiEgb
June 3, 2012 4:05 am

edim says:
June 3, 2012 at 1:51 am
Interestingly, according to the dCO2 vs T correlation, it doesn't take warming to raise the atmospheric CO2 – a sufficiently high constant temperature will cause rising CO2 (dCO2 = const).
There is a fundamental problem here: your formula assumes that the CO2 level continues to rise at a constant elevated temperature, but that can't be true. The main source in the past was the oceans. These emit CO2 with rising temperature until a new equilibrium is reached, that is, when the pCO2 of the ocean surface and the pCO2 of the atmosphere are on average equal (thus as much CO2 is absorbed as is released).
So there is an increase if the temperature increases, but limited at maximum to the new equilibrium, which in the past was about 8 ppmv/°C (MWP–LIA, glacials–interglacials); on short terms that gives about 4 ppmv/°C of variability around the trend.

edim
June 3, 2012 4:17 am

Ferdinand, no one is claiming that the natural environment is a net source of CO2; it's obvious that the annual growth is smaller than the anthropogenic input, so the environment has been a net sink. The claim is that the growth in atmospheric CO2 is determined by global temperature. What do you think would have happened with CO2 if temperatures hadn't increased since the ~1960s? According to the correlation, the growth would have averaged, let's say, ~0.9 ppm/year instead of 1.45 ppm/year, and the total accumulation until now would be ~45 ppm instead of ~80 ppm. So we would be at ~360 ppm instead of 395 ppm.
Total accumulation depends on the average temperature during the period of accumulation. That's the observed behaviour.
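edim's two scenarios are easy to check. Assuming a ~52-year period (1960–2012, an assumption; edim gives no exact span), the stated rates land close to the round figures quoted:

```python
# Rough check of the scenario arithmetic above. The period length (~52 years,
# 1960-2012) is an assumption; the two growth rates are as stated in the comment.
years = 2012 - 1960
with_warming = 1.45 * years  # ppm accumulated at the observed average rate
no_warming = 0.9 * years     # ppm accumulated at the hypothetical constant-T rate
print(round(with_warming), round(no_warming))  # -> 75 47
```

That gives roughly 75 ppm vs 47 ppm, in the neighbourhood of the ~80 and ~45 ppm round figures in the comment.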

Dr Burns
June 3, 2012 4:37 am

>>Steve Keohane says:
>>June 2, 2012 at 10:20 pm
>> … but do not know the source …
http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries1.pl
Steve McIntyre and Roy Spencer have papers discussing the effects and how the trends differ from IPCC model assumptions.

Laws of Nature
June 3, 2012 4:58 am

>> Ferdinand Engelbeen says:
“Again the same discussions com up every few months…”
Well, this tends to happen if someone decides, basically mid-sentence, to remove himself from the discussion! The topic doesn't go away by itself.
>> FerdiEgb says:
“The essence of the error by Essenhigh (and many others) is that he uses the residence time of human CO2 in the atmosphere,”
These numbers are known well enough, so Essenhigh's and Bart's choices are reasonable and not in “error”. What is missing from your side is any proof that they did anything wrong!
In what way do Essenhigh's or Bart's choices contradict, not your belief, but a value measured in nature?
“Bart is a very good theoretician, but he makes an essential error: he only looks at the inputs. That makes that the human input is only a fraction of the total input and simply disappears in the cycle. And error many here seems to make. But as is proven in the mass balance: the natural sinks are larger than the natural sources, at least over the past 50+ years.”
Wow, a full circle again.. perhaps time to read again, what I wrote yesterday:
Laws of Nature says:
“Essenhigh came up with a model which basically removes anthropogenic CO2 from the equation due to its low residence time; the published answer by Cawley uses a very similar model to “reinstate” the anthropogenic CO2, but fails to disprove Essenhigh's assumptions and conclusions.”
The very reason Essenhigh published his paper was to show that there is indeed another possible solution, taking realistic parameters.
Just your saying it's wrong doesn't prove it wrong; I sense a lack of arguments!
Ferdinand Engelbeen says:
“But one can deduce the biological balance from the oxygen balance:”
Here you seem to ignore soil bacteria and the whole NOx cycle.
Ferdinand Engelbeen says:
“The oceans are not the cause:
“Pure on solubility parameters, an increase of 1°C causes an increase of 16 microatm of CO2 in the oceans surface waters, thus back into dynamic equilibrium causes an increase of about 16 ppmv in the atmosphere.”
Over at http://www.seafriends.org.nz/issues/global/acid2.htm#why_problem we can read the following statement:
“The oceans contain far more CO2 than air: 38,000Gt versus 700 Gt (about 50 times). A slight warming of the ocean expels CO2 while becoming more acidic, about 1000-1500Gt per degree C (see graph in part 1 => http://www.seafriends.org.nz/issues/global/global16.gif ) .”
Would you agree to that?
Bart says:
“dC/dt = (Co – C)/tau1 + k1*H
dCo/dt = -Co/tau2 + k2*(T-To)”
It would be interesting if you could change the constants to support either Essenhigh's or Cawley's conclusions, and then compare them.
All the best regards,
LoN
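Bart's two-state model quoted above can be integrated with a simple forward-Euler sketch. All the constants and forcings below are arbitrary placeholders, since no numeric values are given anywhere in the thread:

```python
# Forward-Euler sketch of the two-state model Bart quotes:
#   dC/dt  = (Co - C)/tau1 + k1*H
#   dCo/dt = -Co/tau2 + k2*(T - To)
# Every constant and forcing here is a made-up placeholder chosen only to make
# the integration run; none of these values come from the discussion.

def integrate(steps, dt=0.1, tau1=3.0, tau2=100.0, k1=0.5, k2=2.0,
              H=1.0, T=1.0, To=0.0, C0=0.0, Co0=0.0):
    C, Co = C0, Co0
    for _ in range(steps):
        dC = (Co - C) / tau1 + k1 * H     # fast regulator state
        dCo = -Co / tau2 + k2 * (T - To)  # slow equilibrium state
        C += dC * dt
        Co += dCo * dt
    return C, Co

C, Co = integrate(1000)  # 100 time units
print(round(C, 2), round(Co, 2))
```

With a short tau1 and long tau2, C tracks the slowly drifting equilibrium Co closely, which is the "wide-bandwidth regulator" behaviour Bart argues for later in the thread.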

FerdiEgb
June 3, 2012 5:43 am

edim says:
June 3, 2012 at 4:17 am
The claim is that the growth in atmospheric CO2 is determined by global temperature. What do you think would have happened with CO2 if temperatures hadn't increased since the ~1960s? According to the correlation, the growth would have averaged, let's say, ~0.9 ppm/year instead of 1.45 ppm/year, and the total accumulation until now would be ~45 ppm instead of ~80 ppm. So we would be at ~360 ppm instead of 395 ppm.
You still assume that the growth rate remains the same at a constant (increased or decreased) temperature.
As the seawater temperature changes up and down year by year, it is hard to see whether the CO2 growth rate is related to temperature change or to absolute temperature; there are no multi-year periods with a near-constant temperature…
If you start at a constant 1 ppmv/yr at, e.g., 15°C global average sea surface temperature, and the next year there is a 0.1°C increase, that would raise the growth rate to ~1.4 ppmv for that second year. In the third year there is no further seawater temperature increase, so the CO2 increase rate falls back to ~1 ppmv/year. The influence of sea surface temperature is short-lived (~1.5 years) in bringing the ocean surface layer and the atmosphere into equilibrium for CO2.
The increase in (ocean) temperature since the LIA is at maximum 1°C, good for at most 8 ppmv extra over that full period; that's all.

FerdiEgb
June 3, 2012 5:54 am

Bart says:
June 3, 2012 at 1:21 am
I know, we have been there many times… but…
When I found the glaringly evident temperature to CO2 derivative relationship, I knew I had found proof.
Except that any relationship in the derivatives has zero predictive power for any relationship in the original variables…

Steve Keohane
June 3, 2012 5:55 am

Dr Burns says: June 3, 2012 at 4:37 am
Thank you sir! I was taking flak from joelshore on another thread for posting that without provenance. I see implications regarding enthalpy.

Bill Illis
June 3, 2012 6:12 am

Just noting that the AIRS satellite has a number of videos for mid-tropospheric CO2 concentrations covering 6 or 7 years now.
Just search “Airs CO2” on Google video.
(they are mainly Youtube products so putting up a direct link would imbed the video in the thread and I don’t think that is necessary).
You will see there is considerable variability and it is entirely possible that someone might measure 500 ppm in Europe or some locality every few days. The numbers have to be averaged out over long periods and there will always be outliers. The Arctic has a strong outburst in the winter months as one of the videos is a polar view.
Also search for “Airs Methane”. With all the Arctic methane bubbles stories going around, it is helpful to see that the actual global data shows Methane comes from everywhere and cycles around the planet extremely fast with the prevailing winds.

June 3, 2012 6:38 am

How does the CO2 data fit a bell curve with the current rise the left side of the bell? Once oil production goes into terminal decline, CO2 increases should slow, then level off, then drop.

June 3, 2012 7:41 am

Wow, it seems that I have missed the Salby discussions of April 19 on (again) the same topic… I had a nice trip in Western Australia over the past 5 weeks (Perth to Darwin); I should have contacted Dr. Salby, but I suppose he was somewhere on the other side of that large continent (I drove over 7000 km…).

steve fitzpatrick
June 3, 2012 7:43 am

Continued (near-)exponential growth is a physical impossibility, since fossil fuel resources are finite. The rate of growth in use will fall as price drives conservation and substitution. If one believes the “peak petroleum” predictions, liquid-fuels growth will turn negative within a decade or so. Coal and natural gas are more complicated, but these too must eventually stop increasing. From a practical standpoint (say a 25–30 year horizon), it makes little difference what the historical record says; atmospheric CO2 will continue to increase at between about 2.5 and 3 ppm per year.

kwik
June 3, 2012 7:59 am

jrwakefield says:
June 3, 2012 at 6:38 am
Nah, I think all that is just religious crap. Mankind is to blame, repent, and all that.
Have a look at what Prof. Salby has to say about it;

Allan MacRae
June 3, 2012 8:19 am

Bart says: June 3, 2012 at 1:21 am
This question has been solved. The derivative of CO2 tracks the variation in sea surface temperature remarkably well. Temperature drives CO2. Human inputs are rapidly sequestered and have no significant observable impact.
________
Probably true.
I discovered this dCO2/dt relationship with temperature in late Dec2007 and published in Jan2008. And the CO2 signal lags temperature by ~9 months. See
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/
Murry Salby came to the same conclusion in his 2011 video at

The warmists arm-wave that this relationship and the resulting lag are a “feedback effect”.
But there is also a CO2-after-temperature lag of ~800 years in the ice core record on a longer time cycle, probably related to deep ocean phenomena.
And there are probably other intermediate cycles as well.
And of course there is the seasonal CO2 “sawtooth”, apparently dominated by the larger Northern Hemisphere land mass.
My hypothesis is that these natural cycles all contribute to the resulting CO2 curve, which is more likely to be a ~sine curve than the subject ~exponential curve.
Natural CO2 flux is much greater than the humanmade component. Furthermore, I have sought and found no human component in the CO2 signal. The increase in atmospheric CO2 appears to be entirely, or almost entirely natural.
P.S.
Apologies for the sentence fragments, … and starting three sentences with “and”. But there are worse sins, like squandering a trillion dollars on global warming nonsense.
I also predicted global cooling in an article written in 2002. If this cooling is severe enough to affect the grain harvest, we will look back on current days with great fondness. Hope I’m wrong, but just in case: Bundle up.

FerdiEgb
June 3, 2012 8:39 am

Laws of Nature says:
June 3, 2012 at 4:58 am
As I said, it seems very difficult to see the difference between the residence time of human CO2 in the atmosphere and what happens to any injection of extra CO2 into the same atmosphere…
1. The residence time;
Every year, some 20% of all CO2 in the atmosphere is exchanged with CO2 from/to other reservoirs. Thus, if the human input ceased, the remaining human CO2 molecules would be reduced by about 20% per year (as is seen in the 14C record). That gives a residence time of slightly over 5 years. But be aware: this exchange doesn't change the total amount of CO2 in the atmosphere, as long as there is balance between inputs and outputs.
2. The adjustment time:
If we may assume that temperature (as seen in the past) is the main driver of CO2 levels, then at a certain temperature there is a given “normal” CO2 level. Any extra amount of CO2 (whatever its source) above that level will be removed until the equilibrium is reached again. Currently we have a CO2 level 100 ppmv above the temperature-dictated setpoint. That is the driving force which moves extra CO2 into the oceans and plants. This causes an imbalance between the natural inputs and outputs: some 4 GtC (2 ppmv) more CO2 is absorbed by nature than is released. If we stopped emitting any CO2, the difference would still be 100 ppmv at the start, then 98 ppmv the next year, 96.2 ppmv, etc., until back at the temperature equilibrium. The adjustment e-folding time therefore is 210/4, or ~53 years; quite a difference from the residence time.
The problem now is that many use the residence time to show that the human contribution is negligible, but that only shows how fast the human CO2 molecules are replaced by natural molecules, which is completely irrelevant to what happens to the total amount in the atmosphere; that total is still (near) completely caused by the human addition.
To illustrate that, here is a graph of what would have happened if humans had added all emissions to date at once, 160 years ago:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/fract_level_pulse.jpg
All “human” CO2 is gone in about 60 years, while CO2 levels remain higher even after 160 years. The extra level above the old equilibrium still is caused by the initial human injection, even if no single human CO2 molecule is left…
Ferdinand Engelbeen says:
“But one can deduce the biological balance from the oxygen balance:”
Here you seem to ignore soil bacteria and the whole NOx-circle.

Soil bacteria are included in the oxygen balance (they use O2 to produce CO2). NOx is not, but how much does that change the oxygen balance?
A slight warming of the ocean expels CO2 while becoming more acidic, about 1000-1500Gt per degree C
I fear that the NZ coalition is completely wrong here: expelling CO2 from the oceans would make the oceans more alkaline, which is not observed, and the DIC (total dissolved inorganic carbon) content of the ocean surface is increasing, thus the oceans are absorbing more CO2… Moreover, as said previously, it doesn't make any difference how much CO2 is in the oceans; only the CO2 pressure difference between the atmosphere and the oceans matters. If the oceans heat up 1°C, that gives at maximum 16 ppmv extra in the atmosphere, no matter how much resides in the oceans.
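FerdiEgb's pulse thought-experiment separates two clocks, and a few lines of code make the distinction concrete. The rates are the round numbers from his comment, used purely illustratively:

```python
# Sketch of the one-time-injection thought experiment: two clocks run at once.
#  - the *labeled* (injected) molecules are swapped out by the ~20%/yr gross
#    exchange with other reservoirs (residence time ~5 yr),
#  - the *excess amount* above equilibrium decays with the net sink rate,
#    here taken as ~2%/yr of the excess (adjustment e-fold time ~50 yr).
# Both rates are the comment's round figures, not fitted values.

labeled = 100.0  # ppmv of injected, "human-origin" CO2
excess = 100.0   # ppmv above the temperature-dictated equilibrium

for year in range(60):
    labeled *= (1 - 0.20)  # gross exchange replaces labeled molecules
    excess *= (1 - 0.02)   # net sinks remove the excess concentration

print(round(labeled, 6), round(excess, 1))
```

After ~60 years essentially none of the injected molecules remain, yet roughly 30% of the excess concentration persists, which is exactly the point of the linked pulse graph.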

Allan MacRae
June 3, 2012 8:44 am

Hi Ferdinand,
I hope you are well.
For the record, I have no problem with CO2 measurement accuracy. The CO2 measurements at Barrow, Mauna Loa, the South Pole and many other sites correlate well and make sense.
Also, we don’t need to discuss yet again your “mass balance” argument – I will leave that to you and Richard C. I agree with Richard – the system just does not work the way you say it does.
Humanmade CO2 emissions are very small compared with the natural CO2 flux, which is huge not only on a seasonal basis but also on a daily basis.
I would really like you to be correct, because if you were, humanity would survive better than under my most probable scenario, which is moderate or severe global cooling.
The only impact I can see of humanmade CO2 emissions is that we are making little flowers happy.
To counter the wild claims of the global warming alarmists, I leave all of you with this note from George Carlin, and wish you a very pleasant Sunday. 🙂
Regards, Allan
Warning: Language.
George Carlin – The Planet is Fine !

Bart
June 3, 2012 9:32 am

edim says:
June 3, 2012 at 1:51 am
“…a sufficiently high constant temperature will cause rising CO2…”
I think it is likely that what we are seeing is that the end of the LIA forced a new equilibrium level for temperature which has not yet settled out, and for which humankind bears no responsibility. When it settles, or reverses, we will see CO2 levels follow. Judging by the numbers, I expect that will be a long time yet.
FerdiEgb says:
June 3, 2012 at 3:49 am
“But as is proven in the mass balance…”
My system analogy above produces the same mass balance. I have explained to you many times why your mass balance argument is faulty.
Laws of Nature says:
June 3, 2012 at 4:58 am
“It would be interesting if you could change the constants to either support Essenhigh’s or Cawley’s conclussions”
The important one which cannot be changed is tau1. That sets the bandwidth of the system. A wide-bandwidth system maintains its equilibrium point tightly, with low sensitivity to disturbances. That is what we see in the data: the (quasi-)equilibrium point, which is proportional to the integral of the temperature anomaly, is tightly tracked. That demands a wide-bandwidth system, which in turn demands low sensitivity to human inputs. Whatever the actual form of the system, however it may deviate from my simple analogy, the dynamics must add up to the same thing: the equilibrium dynamics are very low bandwidth, so that the CO2 level is effectively the integral of the temperature anomaly in the “near” term (relative to tau2) when there has been a change of state; and the regulator dynamics are very high bandwidth, so that the CO2 tightly tracks that integral, and human inputs are quickly sequestered and have little effect on the overall level.
FerdiEgb says:
June 3, 2012 at 5:43 am
It’s all right here. The CO2 level is effectively the double integral of temperature anomaly with respect to the proper baseline. These plots were made using GISS LOTI, before Werner Brozek and others pointed out that there is a better match, which is theoretically reasonable, to SST.
FerdiEgb says:
June 3, 2012 at 5:54 am
“Except that any relationship in the derivatives has zero predictive power for any releationship in the original variables…”
We do not need to “predict” it since it is in the past, and can be inferred. I’ve had intense arguments with others who insist this is a weak point in my argument, but there really isn’t any alternative when the bandwidth of the system is observably so wide. It is the excellent tracking of the temperature anomaly in the CO2 derivative which demands that the bandwidth must be wide (short sequestration time). And, wide bandwidth constrains the effect of human emissions to be small.
Allan MacRae says:
June 3, 2012 at 8:19 am
Sorry, Allan, I should be giving you credit. I only do this as a hobby, and I forget the names of the people I have interacted with more than a few weeks ago.

Bart
June 3, 2012 9:39 am

From above: “The CO2 level is effectively the double integral of temperature anomaly with respect to the proper baseline.” My brain skipped a groove; it's just the single integral.

Kevin Kilty
June 3, 2012 9:42 am

The exponential model presented here has no coefficient multiplying the exponential, and therefore implies that this coefficient is 1.0 ppm, a constant in the model not determined by the data at all. This seems extremely unlikely. Is this a typo?
The observed data are all contained within one e-folding time, far along in the time series, so the coefficients are correlated with one another and not well determined.
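Kilty's observation about the fixed coefficient can be made precise: in A·exp((t − t0)/tau), the amplitude A is algebraically redundant with the start year t0, so pinning A to 1 ppm just relabels t0 rather than adding a physical constraint. A quick check, using the head post's fitted parameters (t0 = 1723, tau = 59 yr, C0 = 260 ppm) purely for illustration:

```python
import math

# In the model CO2(t) = C0 + A*exp((t - t0)/tau), the amplitude A is not an
# independent parameter, because
#   A * exp((t - t0)/tau) == exp((t - (t0 + tau*ln A))/tau).
# So fixing A = 1 ppm just re-labels the "start year" t0.

tau, t0, C0 = 59.0, 1723.0, 260.0  # head post's fitted values, for illustration

def model(t, A, t0):
    return C0 + A * math.exp((t - t0) / tau)

# Doubling A while shifting t0 by tau*ln(2) gives the identical curve:
t = 1990.0
v1 = model(t, 1.0, t0)
v2 = model(t, 2.0, t0 + tau * math.log(2.0))
print(abs(v1 - v2) < 1e-6)  # -> True
```

This is one concrete way the fit parameters are "correlated to one another and not well determined": an entire family of (A, t0) pairs describes the same curve.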

June 3, 2012 9:47 am

BIll Illis:

You will see there is considerable variability and it is entirely possible that someone might measure 500 ppm in Europe or some locality every few days

Yep, part of the state of the art in Keeling's measurements was picking sites where the diurnal variation in CO2 was minimized. Anyone really interested should go back to the decade or so of measurements that Keeling took before he established the Mauna Loa observatory.
On another note, one should look at the ratio of C14 to C12 in the CO2 in the atmosphere. If it’s coming from fossil fuel burning, the ratio should increase. If it’s inorganic CO2, it should stay constant. EM Smith noted that the increase in CO2 corresponded to the end of the LIA. Well, since CO2 is thought to act as a GHG, does that suggest an idea for how they might be linked?
This is one of my favorite graphics. It shows that the Northern hemisphere (where most of the trees are) shows the largest annual variation in CO2, and the south pole (where there are very few trees ;-)), shows almost no variation.
If this is all faked, they thought of everything. Maybe people need to consider their sources a bit more carefully, and if somebody consistently is making outlandish claims, consider writing them off as a nut.
Just saying.

June 3, 2012 9:57 am

Steve Keohane says:
June 2, 2012 at 10:20 pm

Dr. Burns, I have a couple copies of that chart, but do not know the source, nor what data was used to compile the graph. Do you? Any information would be appreciated. Thanks
Here is a more recent version: http://i48.tinypic.com/2qlfnzn.jpg

I created the graph. It is in the “Water Vapour Feedback” section of my “Climate Change Science essay at
http://www.friendsofscience.org/assets/documents/FOS%20Essay/Climate_Change_Science.html#Water_vapour
A link to the NOAA data source is given just above the graph. To recreate the graph use Variable “Relative Humidity”, select “Seasonal average”, First month of season “Jan”, second month “Dec”. Select “Area weight grids”. You can select “analysis level” from 1000 mb to 300 mb.

June 3, 2012 9:58 am

A proper regression model produces residual plots with a random pattern and mean zero. Given the obvious pattern here, this residual plot indicates that some key variable(s) were omitted, or that the modeling challenges of a nonlinear fit were not overcome. Some basic transformations and an attempt at a linear model seem in order.
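The point about patterned residuals is easy to demonstrate on synthetic data: fit a straight line to an exponential series and the residuals sweep from positive to negative and back, instead of scattering randomly. A minimal sketch (the series below is synthetic, standing in for the CO2 data):

```python
import math

# Fit a straight line to data that is actually exponential; the residuals
# then show a systematic sign pattern (+, -, +) rather than random noise,
# which is the classic signature of a misspecified model.
# The series is synthetic, constructed only for the demonstration.

ts = list(range(50))
ys = [260 + math.exp(t / 15) for t in ts]  # synthetic exponential "CO2" series

# Ordinary least-squares fit of y = a + b*t
n = len(ts)
tbar = sum(ts) / n
ybar = sum(ys) / n
b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) \
    / sum((t - tbar) ** 2 for t in ts)
a = ybar - b * tbar

residuals = [y - (a + b * t) for t, y in zip(ts, ys)]
signs = [r > 0 for r in residuals]
runs = 1 + sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))
print(runs)  # -> 3: positive at both ends, negative in the middle
```

Only three sign runs across fifty residuals is wildly non-random, which is the same diagnosis the comment draws from Figure 4.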

June 3, 2012 10:03 am

Sorry this is a misstatement on my part:

On another note, one should look at the ratio of C14 to C12 in the CO2 in the atmosphere. If it’s coming from fossil fuel burning, the ratio should increase

The ratio should decrease because fossil fuels have very low ratios of C14 to C12. If the CO2 is coming from the ocean the ratio of C14/C12 should stay approximately constant.
Here’s a more complete explanation lifted from another source:

• There has been a decline in the 14C/12C ratio in CO2 that parallels the increase in CO2. In 1950 a scientist named Suess discovered that fossils do not contain 14C because they are much older than 10 half lives of 14C.
• There has been a parallel decline in 13C/12C ratio of atmospheric CO2. This has been linked to the fact that fossil fuels, forests and soil carbon come from photosynthetic carbon which is low in 13C. If the increased CO2 was due to warming of the oceans, there should not be a reduction in the ratios of C-13 and C-14 to C-12.
There are other clues that suggest the source of increased CO2 is not related to the warming of the ocean and subsequent release of CO2 from the ocean.
• There has been a decline in the oxygen concentration of the atmosphere. If ocean warming was responsible for the CO2 increase, we should also observe an increase in atmospheric O2, because O2 is also released as the water is warmed.
• The ocean is a sink for atmospheric carbon, and the carbon content of the oceans has increased by 118±19 PgC in the last 200 years. If the atmospheric CO2 was the result of oceans releasing CO2 to the atmosphere, the CO2 in the ocean should not be rising as a result of ocean warming.

I think it's also important to emphasize that on decadal scales there is substantial variation in ocean temperatures, and certainly you can expect CO2 to come in and out of solution as natural fluctuations in ocean temperature cause CO2 release into, and absorption from, the atmosphere. Thus one can get changes in C13/C12 that don't follow the basic expected long-term patterns, because other physical processes play a more dominant role on those time scales.

Allan MacRae
June 3, 2012 10:05 am

No worries Bart,
I am a hobbyist as well, and NOT “trying to get on full time” in the climate science business – it is a fascinating subject, but the quality of debate is too often degraded.
The problem with too many of the climate “full-timers” is that they are drinking each-others’ bath water, and the result is the current lamentable state of climate science and its dogmatic and repressive intellectual environment.
The irony is that BOTH sides of the rancorous “mainstream climate debate”, which is basically an argument about H2O feedbacks and “sensitivity” to CO2, probably have “the cart before the horse” – in fact, the primary cause is increasing temperature over past centuries, and increasing atmospheric CO2 is the result.
This would all be quite funny, except that a trillion dollars has been squandered on CAGW nonsense.
Intelligent use of these scarce global resources could easily have saved as many people as died in Hitler’s WW2, or Stalin’s purges, or Mao’s Great Leap Backward.
On my bleaker days, I must conclude that we are governed by scoundrels and imbeciles.

Stephen Wilde
June 3, 2012 10:10 am

“The increase in (ocean) temperature since the LIA is at maximum 1°C, good for maximum 8 ppmv extra over that full period, that’s all.”
Maybe so for a static unit of water.
But how about constant upwelling of fresh water from depth being exposed to solar shortwave, which has increased beyond the increase in raw TSI at the top of the atmosphere as a result of decreasing global cloudiness during the warming spell?
Just as observed during the late 20th century.
And then suppose, too, that the upwelling water is not as cold as it was due to warmth injected into the depths during the MWP and only now returning to the surface.
Both those factors would reduce ocean surface absorption of atmospheric CO2 allowing it to build up in the air.
The observed large proportionate change in CO2 ppm is not reflected well in ice cores, but then we don’t really know whether the ice core record is sensitive enough to record multicentennial variations in atmospheric CO2 in full detail.
Plant stomata are more sensitive and do indeed record greater variability in CO2 than ice cores, but even they may be too coarse for a full representation of actual multicentennial CO2 variability.
The isotope argument used to be invoked to justify ignoring such matters but new data is weakening that and I don’t hear it much these days. There are plenty of natural sources of CO2 showing the same isotope ratios as human emissions.

Stephen Wilde
June 3, 2012 10:16 am

“This is one of my favorite graphics. It shows that the Northern hemisphere (where most of the trees are) shows the largest annual variation in CO2, and the south pole (where there are very few trees ;-)), shows almost no variation.”
More likely that would be caused by greater seasonal sea surface temperature changes in the northern hemisphere, due to the larger landmasses obstructing energy diffusion around the globe.
The unobstructed circumpolar current would minimise variations in the most southern oceans.

Bart
June 3, 2012 10:23 am

Carrick says:
June 3, 2012 at 10:03 am
“There has been a decline in the 14C/12C ratio in CO2 that parallels the increase in CO2.”
14C cannot be used for anything with confidence since the birth of the Atomic Era.
“If the increased CO2 was due to warming of the oceans, there should not be a reduction in the ratios of C-13 and C-14 to C-12.”
The quirks of the diffusion process do not justify such linear logic.
Thus one can get changes of C13/C12 that don’t follow the basic long-term patterns expected, because other physical processes play a more dominant role on those time scales.
Airy assertion, with no proof. Every physical process tends to become more significant over longer time scales, as a result of the inherent low pass characteristic of stable systems.
What you cannot get around is this relationship. Temperature variation accounts for the entire shooting match. Any role of human emissions in overall atmospheric concentration is small (I estimate 4-6%). See my preceding posts for background.

Bart
June 3, 2012 10:25 am

Allan MacRae says:
June 3, 2012 at 10:05 am
Those are my good days!

Carrick
June 3, 2012 10:32 am

Stephen Wilde:

More likely that would be caused by greater seasonal sea surface temperature changes in the northern hemisphere due to the larger landmasses obstructing energy diffusion around the globe.

That’s another possibility. Do you have a reference? I haven’t looked at the latitudinal variation in ocean temperatures, so that’s something that would be a test of your hypothesis; what is clear is that the annual CO2 variation increases with latitude.
Either mechanism would predict a peak in atmospheric CO2 in summer months, so you can’t use that to eliminate either hypothesis. I’ll have a look at HADSST3 when I get a chance and see if I can produce a plot.
One thing that works against your hypothesis is that the ocean accounts for only about 20% of the Earth’s surface at high northern latitudes and nearly 100% at high southern latitudes (see this). That would suggest you’d need a huge south-to-north variation in temperature to account for what amounts to a factor of 8 difference in annual variation.

Carrick
June 3, 2012 10:34 am

Bart, I’m not certain it’s considered scientifically rigorous to hand-wave away every piece of evidence that doesn’t fit your theory. 😉

Bart
June 3, 2012 10:49 am

Carrick says:
June 3, 2012 at 10:34 am
You offer speculation and call it “evidence”. I’m giving you hard data with a direct bearing on the problem. Physician, heal thyself.

Stephen Wilde
June 3, 2012 10:53 am

“you’d need a huge variation in temperature south-to-north to account for what amounts to a factor of 8 difference in annual variation”.
Would I ?
We are considering a tiny proportionate annual variability in CO2 release / absorption rates not absolute quantities.
The SH could produce a vast amount with virtually no seasonal variation whilst the NH could produce a much smaller amount with a significant seasonal variation.
Anyway, the essence of my point is that you can’t just pin it on trees which is what you tried to do.

Eli Rabett
June 3, 2012 11:05 am

Forgive an innocent little bunny for asking, but how does the derivative of the smoothed CO2 concentration with time explain the roughly 90 ppm increase in CO2 concentrations over the last century? You differentiated that out.
Now the derivative may have some relationship to the variation in global temperature through NPP or outgassing of the oceans but that is a different matter.
It is much the same as the fit to a quadratic or an exponential. That has no explicative power. If, on the other hand, you fit it to emissions you might have something interesting. Of course to know what will happen in the future, you would have to know what future emissions will be.
REPLY: you aren’t a bunny, and you aren’t innocent. When are you going to give up this bizarre persona/charade? -A

Doug Proctor
June 3, 2012 11:17 am

How does any quadratic or exponential curve fit with historical A-CO2 generation?
Does the putative cumulative A-CO2 generation show a similar quad/expon relationship?
Would this not mean that the presence of CO2 is a facilitator of more CO2? Would this not mean the residence time of CO2 was >1000 years?

June 3, 2012 11:21 am

Regression is the most erroneously applied statistical tool. It doesn’t take much knowledge or skill to use (which leads to poor application), and there are too many who need specific results and are thus biased (which leads to “cooked” results). Without knowing the process by which the model was built and the accompanying diagnostics (various statistics and residual plots), any regression model should be treated with skepticism. Academia has been corrupted by performance demands that are impossible. Research supported by proper statistics takes much longer, and the desired results are much harder to achieve. Due to poor training and apathy, it is quite likely most of today’s PhDs would not meet the statistical bar of a generation ago. If the results are welcome, how the statistics were obtained is likely to be given only a glance. If the results are distasteful, the post hoc cycle then begins. But that is off the main topic.
And then there is the data factor. Data are everything. Anyone who does not provide full access to their data is hiding something. It is juvenile to think any work can be taken seriously if the history of its input cannot be independently verified so the results can be duplicated. To minimize proper review, academia hides behind processes that would inspire Rube Goldberg. Those pushing the CO2 game take the cake. In the climate game, they claim global catastrophe if their advice is not adhered to, yet they are unable or refuse to do what is necessary to validate their work. Original data lost? Researchers protect their data better than their children. If they can’t maintain a database, how can their analyses be trusted? They don’t know what the input is.
The real point here is that, given the cost of doing nothing and the cost of the “fix” that the climate “scientists” claim and demand in no uncertain terms, an environment of total transparency is required. To do otherwise is too self-serving. Are they willing to let the world cook while some arcane academic principle is maintained? I think not. It must be that their work is garbage.
It doesn’t take an expert to know whether a speaker, presenting himself as an expert in the same area, is a fraud or not. A well-educated man can see through whatever flimsy pretext is presented. Not being able to produce data is a bad joke anyone with a modicum of sense can get.

Bart
June 3, 2012 11:23 am

Eli Rabett says:
June 3, 2012 at 11:05 am
“You differentiated that out.”
As I explained, the excellent tracking between the temperature and the CO2 derivative demands a high bandwidth system. And, that high bandwidth necessarily means human inputs are attenuated to a level of insignificance.
This is a sophisticated argument which requires an advanced level of understanding of how feedback systems work.
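Bart’s “bandwidth” claim can be sketched numerically with a toy first-order system, dC/dt = u(t) − C/tau. All constants below are invented for illustration; this is a sketch of the attenuation argument only, not a carbon-cycle model:

```python
import numpy as np

# Toy first-order system dC/dt = u(t) - C/tau, integrated by forward Euler.
# A short time constant tau ("high bandwidth") strongly attenuates the
# accumulated response to a slowly ramping input.
dt = 1 / 12.0                      # monthly steps, in years
t = np.arange(0, 50, dt)
u = 0.1 * t                        # slowly ramping input, arbitrary units/yr

final = {}
for tau in (2.0, 50.0):            # high-bandwidth vs low-bandwidth case
    C = np.zeros_like(t)
    for i in range(1, len(t)):
        C[i] = C[i - 1] + dt * (u[i - 1] - C[i - 1] / tau)
    final[tau] = C[-1]

raw = u.sum() * dt                 # straight accumulation, no sequestration
print(final, raw)
```

With tau = 2 the ramp’s accumulated effect ends up at less than a tenth of the straight integral, while with tau = 50 most of it survives; that trade-off is exactly what is being argued over in this thread.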

Bart
June 3, 2012 11:26 am

Doug Proctor says:
June 3, 2012 at 11:17 am
“Would this not mean the residence time of CO2 was >1000 years?”
Would this not mean that CO2 regulation in the planet’s atmosphere would be extremely weak, and there is no way to maintain any kind of narrow equilibrium under those conditions given random contributions to the CO2 level?
Why yes, yes it would.

Bart
June 3, 2012 11:32 am

Doug Proctor says:
June 3, 2012 at 11:17 am
And, please note, scale and bias similarity between emissions and CO2 measurements means very little. It is necessary, but not sufficient for the one to be driving the other. A similar quadratic/exponential curve emerges when integrating the temperature anomaly.

edim
June 3, 2012 11:32 am

Ferdinand wrote:
“There is a fundamental problem here: your formula assumes that the CO2 level continues to rise with a constant elevated temperature, but that can’t be true. The main source in the past was the oceans. These emit CO2 with rising temperatures until a new equilibrium is reached, that is, when the pCO2 of the ocean surface and the pCO2 in the atmosphere are on average equal (thus as much CO2 is absorbed as is released).
So there is an increase if the temperature increases, but limited at maximum to the new equilibrium, which in the past was about 8 ppmv/°C (MWP-LIA, glacials-interglacials); on short terms that gives about 4 ppmv/°C of variability around the trend.”
I think we have to forget about the past for a moment – no splicing and no mixing of methods, please. No need: we have more than 50 years of continuous measurements at ML and other sites as well. Annual growths, and therefore the long-term accumulation (the sum of the annual growths), seem to be dependent on global temperature indices. The conclusion is that the average temperature during the period of accumulation drives the total accumulation. It has since 1959. This still doesn’t mean that the warmth is the sole cause – it could be that the climate system is doing something in response to the human emissions.
You say that constant temperatures causing the rise cannot be true, well you don’t know that. It could very well be that cycles in SST, even without any long term trend, are driving the rise. Degassing CO2 from the oceans is faster (when warming in the cycle, due to volumetric source) than the uptake by oceans (when cooling, due to surface sink), so after every cycle CO2 ends up somewhat higher, depending on temperatures. A reciprocating-type CO2 pump, sort of.

edim
June 3, 2012 11:38 am

Eli, nothing is differentiated out. Any accumulation (smallest being the annual one) is dependent on the average temperature during the period of accumulation. That’s the correlation. The annual growths only increased because temperature increased since 1959. At the 60s level the annual growth was ~0.9 ppm. If temperatures declined in 70s/80s/90s, the annual growth would have been lower and maybe even negative, at sufficiently low temperatures.

June 3, 2012 11:45 am

As Just some guy says: June 2, 2012 at 3:43 pm

It cant follow the same formula forever, of course. If it did, it appears we’ll [get to] one million parts per million by the year 2540.

However, it is useful to compare this CO2 concentration extrapolation to the IPCC CO2 concentration projections.
The IPCC uses a large number of emission scenarios, and calculates a forecast of CO2 concentrations using the Bern carbon-cycle model. The data is here:
http://www.ipcc.ch/ipccreports/tar/wg1/531.htm
There are six main story-lines: A1B, A1T, A1FI, A2, B1 and B2.
The Bern CC model gives a “reference”, a “low” and a “high” case for each of the six scenarios, so 18 cases in all.
The “low” case assumes a “fast ocean” (ocean uptake of 2.54 PgC/yr for the 1980s) and no increase of animal respiration. A “high” case assumes a “slow ocean” (ocean uptake of 1.46 PgC/yr for the 1980s) and capping of CO2 fertilisation.
Here is a graph of the CO2 extrapolation using the parameters given in the lead post, with some selected IPCC scenarios. Note the huge range of CO2 projections to the year 2100: 1248 ppm in the A1FI high case, 486 ppm in the B1 low case.
http://www.friendsofscience.org/assets/documents/CO2_IPCC_ScenVsActual.jpg
The A2 Reference case is very close to the CO2 model extrapolation.

Bart
June 3, 2012 11:50 am

edim says:
June 3, 2012 at 11:38 am
“Eli, nothing is differentiated out.”
He has a semi-valid point, one which Richard C. and Robert B. beat me over the head with on another thread. There is still an arbitrary constant which could be added and in which there is room for a (significantly reduced) anthropogenic effect.
However, the argument fails because getting a significant anthropogenic effect still demands a low bandwidth, but the data before us says the bandwidth must be high, or we will not get the excellent tracking between CO2 derivative and temperature anomaly. I illustrated this in the simulations proffered earlier (hit Next to peruse the plots up to viewing 6 of 29).
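The trade-off Bart concedes here, between the anomaly offset and a linear anthropogenic term, can be checked directly: shifting the baseline by some delta while adding a compensating linear term leaves the modeled CO2 curve unchanged. A minimal sketch with a synthetic temperature series (all constants invented):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1 / 12.0
t = np.arange(0, 54, dt)
T = 0.01 * t + 0.1 * rng.standard_normal(len(t))  # synthetic anomaly series

integral = lambda x: np.cumsum(x) * dt            # crude running integral
k, T0, delta = 2.0, -0.2, 0.1                     # made-up model constants

# Integral-of-anomaly model, and the same model with the baseline shifted
# by delta plus a compensating linear term k*delta*t.
co2_a = 315 + k * integral(T - T0)
co2_b = 315 + k * integral(T - (T0 + delta)) + k * delta * integral(np.ones_like(t))

print(np.allclose(co2_a, co2_b))   # True: the two parameterizations coincide
```

This is the degeneracy in question: the data alone cannot distinguish a slightly different baseline from a small linear (e.g. anthropogenic) contribution.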

Carrick
June 3, 2012 12:15 pm

Bart:

You offer speculation and call it “evidence”. I’m giving you hard data with a direct bearing on the problem. Physician, heal thyself.

What I am offering is a line of converging evidence–data not speculation. The only suggestion I had was why the data behave in a certain manner. But the data is the evidence I would refer to, not my speculative ideas for why it behaves that way. Any theory has to explain all of the data, not just your pet data.
You have a single line of evidence showing a correlation between temperature and CO2. Does that, by itself, establish a cause-and-effect relationship? If you look at the long-term correlations, do you want to predict which sign you’ll find for the correlation, and which hypothesis that supports?

edim
June 3, 2012 12:16 pm

Bart, I can’t see the arbitrary constant, if you could elaborate (some tried at Climate etc). I still may be wrong, but it’s not only variation around some average growth that correlates with the temperatures. The whole of the annual growth does and at some other temperatures the average growth would be different as well, and therefore the long term accumulation. The correlation holds at other partial growths too. The conclusion is that the average growth over some period (annual, biannual, half-decadal, decadal) is determined by the average temperature during the period.
deltaCO2 = c*Tav, Tav is average anomaly
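edim’s rule deltaCO2 = c*Tav amounts to a two-line accumulation. Everything below (the anomaly series and the coupling constant c) is invented purely to illustrate the arithmetic, with c chosen so the 1959 growth matches the ~0.9 ppm 1960s figure he mentions above:

```python
import numpy as np

years = np.arange(1959, 2013)
Tav = 0.005 * (years - 1959) + 0.18   # invented annual-mean anomaly, deg C
c = 5.0                               # invented coupling, ppm per deg C per yr

annual_growth = c * Tav                 # deltaCO2 = c * Tav, each year
co2 = 315.0 + np.cumsum(annual_growth)  # long-term accumulation since 1959
print(annual_growth[0], co2[-1])        # growth starts near 0.9 ppm/yr
```

Note the feature Ferdinand objects to elsewhere in the thread: under this rule a constant positive anomaly produces an indefinitely rising CO2 level.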

Tony Nordberg
June 3, 2012 12:22 pm

Personally I find the historical results from Mauna Loa a bit too exquisite, definite, and singular to be entirely believable, and I guess many of your readers feel the same.
So I propose that it would be worth a blog post from someone about the results from other CO2 measuring stations/devices around the world that are independent of the Scripps-based methods and locations such as Mauna Loa.
Also, a blog post taking a really close look at the Mauna Loa measuring instrument itself, especially around the analog-digital conversion arrangements, to identify the inherent non-linearities and asymmetries. Even quite small values would be a possible source of upward bias (or drift) in the totalisation, if there is differentiation and subsequent integration in the data processing.

FerdiEgb
June 3, 2012 12:23 pm

Bart says:
June 3, 2012 at 9:32 am
It’s all right here. The CO2 level is effectively the integral of temperature anomaly with respect to the proper baseline. These plots were made using GISS LOTI, before Werner Brozek and others pointed out that there is a better match, which is theoretically reasonable, to SST.
In essence, there is the same problem Edim ran into here before: at a constant temperature above a baseline, the formula applied gives a constant rate of CO2 increase. Thus with an average ~0.3°C increase in temperature, one can have a 70 ppmv increase over a period of 50 years.
The first problem is that both the oceans and the biosphere are proven sinks for CO2, thus can’t have delivered that amount of CO2.
The second problem is that there is no physical mechanism which can do that. The oceans give at maximum 16 ppmv/°C (dynamic equilibrium between ocean surface and atmosphere, according to Henry’s Law). The biosphere only sinks more CO2 by growing harder individually and in increasing total area, and the rest of the earth is too slow in reaction.
Then think about the glacial/interglacial events: the increase in temperature from the depth of a glacial to the height of an interglacial is ~10°C, it takes about 5,000 years, and CO2 increases ~80 ppmv over that trajectory. With a temperature (anomaly) dependent rate for CO2, however small, that simply is impossible.
The rate of change of CO2 is influenced by the change in temperature, not the absolute temperature (or anomaly), only the equilibrium CO2 levels are absolute temperature dependent.
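Ferdinand’s counter-model, relaxation toward a temperature-dependent equilibrium rather than indefinite integration of the anomaly, can be sketched the same way. The 16 ppmv/°C sensitivity and the 1–2 year approach time follow his comments; the step-warming scenario and the 280 ppm starting level are invented for illustration:

```python
import numpy as np

dt = 1 / 12.0
t = np.arange(0, 50, dt)
T = np.where(t < 5, 0.0, 1.0)        # invented 1 degC step warming at year 5
eq = 280.0 + 16.0 * T                # Henry's-law-style equilibrium CO2 level
tau = 1.5                            # years to approach the new equilibrium

co2 = np.zeros_like(t)
co2[0] = 280.0
for i in range(1, len(t)):
    # Relax toward the current equilibrium instead of integrating the anomaly
    co2[i] = co2[i - 1] + dt * (eq[i - 1] - co2[i - 1]) / tau
print(co2[-1])
```

At a constant elevated temperature this sketch plateaus about 16 ppmv higher instead of rising without limit, which is exactly the distinction Ferdinand is drawing against the integral-of-anomaly models.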

June 3, 2012 12:24 pm

Bart, I also think one needs to at least look at the correlation you pointed out in the context of a simple 0-d radiative model (e.g., Lucia’s 2-box model). And then look at it in terms of a stoichiometric model of ocean/atmosphere and ask yourself what are the time constants associated with CO2 release from the ocean in response to heating.
I think Josh/Eli’s observation deserves a bit more than a rhetorical wave-off, something you are quite prone to doing, as I’ve observed in previous interactions with you. Also, if you can’t keep this at an adult level you can have the floor.

edim
June 3, 2012 12:51 pm

The only thing that’s differentiated out is the start condition (atmospheric CO2 at t0), but it’s a known value.

FerdiEgb
June 3, 2012 12:56 pm

edim says:
June 3, 2012 at 11:32 am
You say that constant temperatures causing the rise cannot be true, well you don’t know that. It could very well be that cycles in SST, even without any long term trend, are driving the rise. Degassing CO2 from the oceans is faster (when warming in the cycle, due to volumetric source) than the uptake by oceans (when cooling, due to surface sink), so after every cycle CO2 ends up somewhat higher, depending on temperatures. A reciprocating-type CO2 pump, sort of.
Sorry, but degassing/uptake by the ocean surface is already fast; a new CO2/temperature equilibrium is reached in 1-2 years. Deep ocean exchange is a different matter, but that doesn’t go faster at higher temperatures (it seems more the contrary). It is the absolute temperature of the ocean surface which governs how much CO2 is absorbed or released (Henry’s Law), not influenced by any cycle. The latter can influence the speed at which that happens, but that is not the problem here. And what CO2 increase rate do you think a glacial-interglacial transition would show?
But let’s look at it the other way around:
If you agree that the CO2 levels in the past (and today) are temperature dependent, shouldn’t the dCO2 rates be dT dependent?

Allan MacRae
June 3, 2012 12:56 pm

Carrick says: June 3, 2012 at 9:47 am
“This is one of my favorite graphics. It shows that the Northern hemisphere (where most of the trees are) shows the largest annual variation in CO2, and the south pole (where there are very few trees ;-)), shows almost no variation.”
http://scrippsco2.ucsd.edu/images/graphics_gallery/original/co2_sta_records.pdf
A beautiful graphic, thank you Carrick – I plotted it from raw data years ago. It clearly shows the huge magnitude of seasonal (natural) atmospheric CO2 flux compared with humanmade CO2 emissions.
The seasonal CO2 “sawtooth” varies by almost 20 ppm at Barrow, Alaska, and about 2 ppm at the South Pole. I think this primarily reflects the larger land mass in the Northern Hemisphere. In comparison, global atmospheric CO2 concentration is increasing at a rate of about 2 ppm/year.
Further, the daily flux in CO2 is also huge.
Here is recently observed Rose Park data at Salt Lake City:
http://co2.utah.edu/index.php?site=2&id=0&img=30
Please examine the Daily CO2 and Weekly CO2 tabs for all measurement stations.
Peak CO2 readings (typically ~500ppm) occur during the night, from midnight to ~8am, and drop to ~400 ppm during the day.
1. In contrast, human energy consumption (and manmade CO2 emissions) occur mainly during the day, and peak around breakfast and supper times.
2. I suggest that the above atmospheric CO2 readings, taken in semi-arid Salt Lake City with a regional population of about 1 million, are predominantly natural in origin.
IF points 1 and 2 are true, then urban CO2 generation by humankind is insignificant compared to natural daily CO2 flux, in the same way that (I previously stated) annual humanmade CO2 emissions are insignificant compared to seasonal CO2 flux.
IF these results are typical of most urban environments (many of which have much larger populations, but also have much greater area, precipitation and plant growth), then the hypothesis that human combustion of fossil fuels is the primary driver of increased atmospheric CO2 seems untenable.
Here is one of my favorite graphics. I can see no impact of man in this impressive display of nature’s power.
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4
Humanmade CO2 emissions are lost in the noise of the much larger natural system, and most humanmade CO2 emissions are probably locally sequestered.
Finally, I have no confidence in the C14/13/12 ratio argument. I think others have demolished it and I need not do so again.

edim
June 3, 2012 1:13 pm

“Sorry, but degassing/uptake by the ocean surface is already fast; a new CO2/temperature equilibrium is reached in 1-2 years. Deep ocean exchange is a different matter, but that doesn’t go faster at higher temperatures (it seems more the contrary). It is the absolute temperature of the ocean surface which governs how much CO2 is absorbed or released (Henry’s Law), not influenced by any cycle. The latter can influence the speed at which that happens, but that is not the problem here. And what CO2 increase rate do you think a glacial-interglacial transition would show?”
Absolute temperatures of the ocean surface oscillate in annual cycles. Land surface temperatures too, with different latitudes in different phases and with different amplitudes. In the oceans we have currents, upwelling, downwelling, etc., which must influence CO2 fluxes at the ocean/atmosphere interface.
“But let’s look at it the other way around:
If you agree that the CO2 levels in the past (and today) are temperature dependent, shouldn’t the dCO2 rates be dT dependent?”
I can’t say what they should be or not. The observed behaviour is that dCO2 is T dependent. Like I said, I cannot accept ice core records, splicing of different methods, etc. There’s no need.

FerdiEgb
June 3, 2012 1:30 pm

Edim and Bart,
Have a look at what Pieter Tans of NOAA says about CO2 growth rate and temperature anomaly + precipitation:
from page 14 on:
http://esrl.noaa.gov/gmd/co2conference/pdfs/tans.pdf
He shows everything with the trends removed and the resp. response functions of CO2 vs. temperature and precipitation.

SasjaL
June 3, 2012 2:11 pm

Regarding measurements in close proximity to volcanoes:
The amount of carbon dioxide (and other volcanic gases) is an indicator of when an eruption is about to occur. The amount increases exponentially before …

Ian George Says:
June 2, 2012 at 4:03 pm

You are pointing at something important …

Werner Brozek says:
June 2, 2012 at 5:08 pm
… As well, the oxygen would get too low for life to exist. …

The “green stuff” takes care of that (as you know) … Not to forget: because there are a number of different oxidation processes in nature, including those in our bodies, we need an abundance of carbon dioxide so that the “green stuff” can produce the oxygen we need. (More carbon dioxide contributes to more “green stuff”, which contributes to more oxygen.)

templedelamour says:
June 2, 2012 at 6:24 pm

Yes, even a “dead” volcano emits gases, including carbon dioxide.

richardscourtney
June 3, 2012 2:31 pm

Friends:
The important point is that the dynamics of the seasonal variation in atmospheric CO2 concentration indicate that the natural sequestration processes can easily sequester ALL the CO2 emissions (n.b. both natural and anthropogenic), but they don’t: about 3% of the emissions are not sequestered. Nobody knows why not all the emissions are sequestered. And at the existing state of knowledge of the carbon cycle, nobody can know why all the emissions are not sequestered. But that is the issue which needs to be resolved.
Importantly, it is certain that accumulation of the anthropogenic emission is NOT the cause of the rise in CO2 indicated by the Mauna Loa data.
The curve fitting exercise of the above article is pointless. If a curve is fitted then the equation of the curve provides a description of the shape of the curve but no information is gained by such an exercise. And it cannot assist in explaining why all the emissions are not sequestered.
In the above discussion, Bart claims his model of the carbon cycle is ‘right’ so all other models should be ignored. However, there are several models of the carbon cycle, each of which assumes a different mechanism dominates the carbon cycle, and they each fit the Mauna Loa data. We published 6 such models, 3 of them assuming an anthropogenic cause and the other 3 assuming a natural cause of the rise in CO2 indicated by the Mauna Loa data: they all fit the Mauna Loa data.
These issues were ‘done to death’ in the thread at
http://wattsupwiththat.com/2012/05/24/bob-carters-essay-in-fp-policymakers-have-quietly-given-up-trying-to-cut-%C2%ADcarbon-dioxide-emissions/
The discussion in that thread is worth a read by those interested in the ‘carbon cycle debate’.
Richard

FerdiEgb
June 3, 2012 3:18 pm

edim says:
June 3, 2012 at 1:13 pm
I can’t say what they should or not. The observed behaviour is that dCO2 is T dependent. Like I said, I cannot accept ice core records, different methods splicing etc. There’s no need.
Regardless of what you think about ice cores, there are plenty of other proxies which show a lot of temperature change over glacials and interglacials.
E.g. the previous interglacial was 2°C warmer than today (up to 10°C in Alaska, Siberia, …) for about 3,000 years, followed by a period of 7,000 years at 1°C warmer than today. With a constant CO2 rate over these periods, where should the CO2 levels be at the end?
The opposite is one of the main problems with that theory: glacials were 100,000 years long and far below the current temperatures, thus certainly with a huge negative CO2 rate according to your assumptions. Thus ending at zero CO2 already after a few hundred years…

Bart
June 3, 2012 3:20 pm

edim says:
June 3, 2012 at 12:16 pm
The point those other guys made is that anthropogenic inputs are effectively linear in rate over the time span. With medium level bandwidth, that becomes a linear output in CO2. It still says the output would be drastically reduced from a straight accumulation, though.
A linear output is also the result of integrating the anomaly offset in the temperature. Ergo, they claimed, I could trade off the one for the other.
But, I cannot, because it would lessen the tracking efficiency to the point where the CO2 derivative would not keep pace with the temperature variation.

Bart
June 3, 2012 3:24 pm

I assert that these are the facts, folks:
1) CO2 is very nearly proportional to the integral of temperature anomaly from a particular baseline since 1958, when good measurements became available.
2) Because of this proportionality, the CO2 level necessarily lags the temperature input, therefore in dominant terms, the latter is an input driving the former.
3) The temperature relationship accounts for all the fine detail in the CO2 record, and it accounts for the curvature in the measured level.
4) This leaves only the possibility of a linear contribution from anthropogenic inputs into the overall level, which can be traded with the only tunable parameter, the selected anomaly offset.
5) Anthropogenic inputs are linear in rate. Therefore, to get a linear result in overall level from them, there has to be rapid sequestration. (Else, you would be doing a straight integration, and the curvature, which is already accounted for by the temperature relationship, would be too much.)
6) With rapid sequestration, anthropogenic inputs cannot contribute a significant amount to the overall level.
Now, you may quibble about this or that, and assert some other relationship holds here or there, but your theories must conform with the reality expressed by these six points, because this is data, and data trumps theory.

Lance Wallace
June 3, 2012 3:26 pm

mondo says:
June 2, 2012 at 3:42 pm
Shouldn’t we be looking at this sort of data on a logarithmic rather than arithmetic Y-scale?
Indeed, if we subtract the constant term of 260 ppm, we get a straight line on semilog paper:
ln (CO2 − 260) = 0.0169 (t − 1958) + 3.979, with an R² of 99.86%
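Lance’s semilog claim can be verified in a few lines: generate CO2 from the quoted parameters (260 ppm background, slope 0.0169/yr, intercept 3.979), add a little noise, and recover the slope by ordinary least squares on the log of the increment. A sketch using synthetic data only:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1958, 2013) + 0.5
# Synthetic CO2 series built from the quoted fit, plus small noise
co2 = 260.0 + np.exp(0.0169 * (t - 1958.0) + 3.979) + rng.normal(0, 0.3, len(t))

# Log-linear least squares on the increment above the 260 ppm background
slope, intercept = np.polyfit(t - 1958.0, np.log(co2 - 260.0), 1)
print(slope, 1.0 / slope)   # slope near 0.0169, e-folding time near 59 years
```

The reciprocal of the recovered slope is the e-folding time, matching the ~59 years quoted in the lead post.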

Bart
June 3, 2012 4:04 pm

richardscourtney says:
June 3, 2012 at 2:31 pm
“We published 6 such models with 3 of them assuming an anthropogenic cause and the other 3 assuming a natural cause of the rise in CO2 indicated by the Mauna Loa data: they all fit the Mauna Loa data.”
I’m sure neither of us wants to revisit the rancor of our earlier exchange, but I must insist that the phrase “they all fit” be qualified. I think I am being fair in characterizing your definition of “fit” as “within the instantaneous uncertainty level of the MLO data.” I have insisted, repeatedly I might add, that it is quite possible to dig below that level for long-term underlying processes by filtering, and that is where you will find whether they agree or not with the fine detail mandated by the temperature relationship. For the life of me, I do not know why you refuse to do that particular analysis.
Unfortunately, as you have informed us, your paper is not generally available, being behind a paywall, and we cannot check your assertion for ourselves. Nor do I have access to the precise data you used to verify your models. So, until presented with evidence otherwise, I am not going to believe that all of the “fits” are equally good. In fact, I very much expect that the fits which use less anthro and more natural will be better.

Lance Wallace
June 3, 2012 4:21 pm

jorgekafkazar says:
June 2, 2012 at 7:16 pm
Thanks, Lance, this was fun. But there’s not a whole lot of science, here. Using a curve fit for a 56 year period, then extrapolating backwards and forwards is very shaky. There’s no reason to assume that the multiple mechanisms resulting in atmospheric CO2 concentrations are identical at the start and end of the period; thus the exponential, though convenient, is not necessarily valid as a predictor or analytical tool of any sort.
If you have the time, though, you might want to push this a stage further. At a minimum, I’d like to see error bars fore and aft, as well as a correlation coefficient adjusted for autocorrelation. I think you’ll find your 1723 date should be ±200 years. Worse, trying to tie the date of the knuckle of the exponential to any invention is completely unjustified. Natural CO2 variation may likely overwhelm any man made emissions, distorting the actual curve beyond recognition.
Thanks Jorge, indeed I did it just for fun and make no scientific claims. The fit was so good I assumed the error would be small, but since you ask I redid the nonlinear fit in Statistica (I had used Excel before). There were some small differences in the point estimates: Background CO2 was 257 ppm (SE 1.05 ppm) compared to 260; the start year was 1711 instead of 1723 (darn! missed Newcomen’s invention by a year!) (SE 3.94 years, well short of your suggestion of about 100); and tau was 61.3 (SE 0.7) years, compared to 59. So indeed the errors are amazingly small, especially considering the last 20 years of intense focus on CO2 with apparently little effect.
I wasn’t actually too serious about the Newcomen invention, just trying to track the beginning of the curve to the beginning of the Industrial Revolution since that is so often fingered as the culprit. Note that the curve takes 40 years(!) to increase from 261.4 ppm to 262.4 ppm.

Lance Wallace
June 3, 2012 4:27 pm

Pamela Gray says:
June 2, 2012 at 6:08 pm
Anything as regular as this data says one of two things.
1. Manmade CO2 pump sitting next to the sensor that never shuts off and is exquisitely tuned to a rhythmic increasing beat.
2. Artifact of the “fudge” factor part of the CO2 calculation.
I too am gobsmacked by the regularity of the curve. If it is mostly due to increasing anthropogenic emissions, one would think, as others have pointed out, that there would be more serious impacts of economic lulls and booms. If mostly natural, on the other hand, why the constant acceleration rather than linear growth or possibly a deceleration following the LIA? But it is rather fun to see the complete lack of any visible effect of all the Kyotos, Balis, Copenhagens, and Rios.

Lance Wallace
June 3, 2012 4:39 pm

George E. Smith; says:
June 2, 2012 at 8:40 pm
1) So I didn’t see any conclusion as to whether the Mauna Loa CO2 data best fits an exponential curve or whether a power series curve is a better fit for the 1722 to 2050 data.
2) Why did you choose to start your extrapolated prediction; excuse me, that’s projection, from the year 1722. Aren’t you concerned about being accused of cherry-picking, by selecting that year; rather than say 1769; the year that Captain Cook, (re)discovered New Zealand?
Well, as to 1) above, I did mention that the quadratic gave a better fit over the 50-odd years than the exponential. But the exponential fit has much more attractive interpretations of the three parameters. For example, I did not “choose to start” at 1722–that was a free-floating parameter that was “chosen” by the Excel nonlinear fitting function. The other parameters (background of 260 ppm, doubling period of 41 years for the incremental CO2 above background) were also completely a function of the fit. The background value, for example, is fairly close to the 280 ppm that is presently regarded (I presume due to multiple lines of observational evidence–at least I hope so) as the level before the Industrial Revolution. These parameters did not have to come out so nicely, and in that case there would be no particular reason to look further at the exponential fit.

Lance Wallace
June 3, 2012 4:45 pm

MikeG says:
June 3, 2012 at 12:48 am
Sorry, your curve fitting is quite meaningless and has no predictive properties whatever. The data would make an equally convincing fit to a sine curve, and many other functions.
Well, the exponential fit did have some nice properties, like predicting a background level and an initial starting point both of which have observations in fairly close agreement. Also, based on the 50-year period with residuals seldom exceeding 1 ppm, I would be willing to bet that the curve will be no more than 1 ppm in error 5 years from now.

Lance Wallace
June 3, 2012 4:48 pm

Bart says:
June 3, 2012 at 1:21 am
A simple analogous (not precise in every detail, but able to provide guidance as to physically possible and plausible behavior) system model is as follows:
dC/dt = (Co – C)/tau1 + k1*H
dCo/dt = -Co/tau2 + k2*(T-To)
Well, you have 5 adjustable parameters here and you know what von Neumann said about that.
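Bart's two-state system can be stepped forward numerically; a minimal Euler sketch, in which every parameter value and forcing input is an illustrative assumption of mine, not anything Bart fitted:

```python
# Euler integration of the two-state model quoted above:
#   dC/dt  = (Co - C)/tau1 + k1*H
#   dCo/dt = -Co/tau2 + k2*(T - To)
# Every number below is an illustrative assumption, not a fit.
tau1, tau2 = 2.0, 100.0      # hypothetical time constants, years
k1, k2, To = 0.5, 1.0, 0.0   # hypothetical gains and baseline
dt, years = 0.1, 50.0
C, Co = 0.0, 0.0             # incremental CO2 above background

for i in range(int(years / dt)):
    t = i * dt
    H = 1.0 + 0.02 * t       # hypothetical anthropogenic input, linear growth
    T = 0.3                  # hypothetical constant temperature anomaly
    dC = (Co - C) / tau1 + k1 * H
    dCo = -Co / tau2 + k2 * (T - To)
    C += dC * dt
    Co += dCo * dt

print(round(C, 1), round(Co, 1))
```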

Lance Wallace
June 3, 2012 4:51 pm

Freddy Hutter, TrendLines Research says:
June 2, 2012 at 11:29 pm
The projected co2 chart is scary only ‘cuz it assumes unlimited supply of fossil fuels (as in IPCC scenarios). It is rather more reasonable (423-ppm 2029) when adjusted to reflect peak oil, peak gas & peak coal
I actually assumed nothing, just let the observations do what they would. The idea that peak fossil fuels drive the curve is certainly possible, but is also an assumption.

Lance Wallace
June 3, 2012 5:01 pm

Ferdinand Engelbeen says:
June 3, 2012 at 2:10 am
Oh by the way, a simple formula to calculate the CO2 levels at any moment in the future (or past);
CO2(new) = CO2(old) + 0.55xCO2(emiss) + 4xdT
Thank you Dr. Engelbeen for the useful references. Your proposed formula seems to suggest that at times of decreasing or plateauing temperatures, the CO2 emissions would need to increase at just the right speed to offset the reduced effect of temperature and maintain the exponential increase.
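Lance's point is easy to see by stepping Engelbeen's rule forward with the temperature term held at zero, so that only the emissions term carries the trend; the starting level and emissions figures below are illustrative round numbers, not data:

```python
# Stepping the rule CO2(new) = CO2(old) + 0.55*emiss + 4*dT
# with dT = 0, so only the emissions term acts. Starting level and
# emissions are hypothetical round numbers in ppmv equivalents.
co2 = 390.0              # assumed starting level, ppmv
emiss, dT = 4.0, 0.0     # hypothetical annual emissions and temp change
for year in range(10):
    co2 = co2 + 0.55 * emiss + 4.0 * dT
print(round(co2, 1))  # -> 412.0 (390 + 10 * 0.55 * 4)
```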

Myrrh
June 3, 2012 5:13 pm

Dave Walker says:
June 3, 2012 at 1:46 am
Caleb- Downslope from Mauna Loa are miles of recent lava flows and then miles of rainforest. The Observatory is 11,000 feet high and enjoys pretty steady wind. They throw out readings tainted with vog or low altitude pollutants. Not a perfect observatory but pretty good.
==============
They decide what is or isn’t this claimed ‘well-mixed’ background – without ever proving there is such a thing as this, or showing even if there was how they can tell the difference. They merely decide what the figure will be, that isn’t science. They’re not measuring anything.
Keeling was anti-coal and went to a laboratory on the world’s largest active volcano, surrounded by constant volcanic activity in venting and thousands of earthquakes every year above and below sea level, over a huge hot spot creating volcanic islands in a warm sea, all constantly releasing carbon dioxide, and he claimed with less than two years data that he had definitely established there was a trend, established that global levels from man-made carbon dioxide were rising. WUWT?? Doesn’t ring any warning bells? The man had an agenda, all his decisions can be seen to be agenda driven, and, he and his son had control over the stations for years and now all within the coordinated ‘consensus’ to prove AGW – there is nothing to suggest the Keeling curve is anything but make believe.
What happens here? The disjuncts are just ignored. Why hasn’t AIRS produced the top and bottom of troposphere data? Too much proof from their conclusions of mid troposphere that carbon dioxide was lumpy and not well-mixed? And they’d have to go away and learn something about wind systems…
Here, a real picture of carbon dioxide levels worth a thousand words:
http://www.biomind.de/realCO2/literature/evidence-var-corrRSCb.pdf
Evidence of variability of atmospheric CO2 concentration during the 20th century
Dipl. Biol. Ernst-Georg Beck, Postfach 1409, D-79202 Breisach, Germany
Discussion paper May 2008
From page 9:
CO2 in Troposphere/Stratosphere 1894 -1973
“Figure 4 Tropospheric and stratospheric measurements of CO2 from literature 1894-1973 (see Table 2) graphed from 66 samples, calculated as 18 yearly averages.
Despite the low data density, the CO2 contour in troposphere and stratosphere confirms the direct measurements near the ground that suggest a CO2 maximum between 1930 and 1940.
The CO2 peak around 1942 is also confirmed by several verified data series since 1920 sampled at ideal locations and analysed with calibrated high precision gas analysers used by Nobel awardists (A. Krogh since 1919) showing an accuracy down to ±0.33% in 1935. Figure 5 shows the 5 years average out of 41 datasets (see Table 1). For comparison the reconstructed CO2 from ice records according to Nefttel et al. is included.”
The Nefttel et al comparison is on the next page, 10, but don’t let that distract you from the picture on page 9…
Then see picture on page 12:
“Considering Figure 8 we can see that Callendar selected only the lowest sample values and omitted several data sets. His averages are mostly lower than the correct values. His so-called “fuel line” is therefore about 10 ppm higher than he calculated. Furthermore he ignored thousands of correctly measured data on the sea, continent and in the troposphere for reasons we can only speculate.”
And if objective, that speculation leads to the inevitable conclusion that this was agenda driven and the Keeling Curve should be flagged as such in science teaching.

Lance Wallace
June 3, 2012 5:22 pm

Kevin Kilty says:
June 3, 2012 at 9:42 am
The exponential model presented here has no coefficient that multiplies the exponential, and therefore, implies that this coefficient is 1.0ppm, and is a constant in the model–not determined by data at all. This seems extremely unlikely. Is this a typo?
The observed data are all contained within one e-folding time far along in the time series, so the coefficients are correlated with one another and not well determined.
Actually, I struggled with that for a time and don’t really know who won. There IS a coefficient of sorts–write the exponential as a product and the coefficient is exp(-t0/tau). If we add another coefficient, leaving the rest of the equation alone, the new coefficient just interacts with this coefficient and makes everything ambiguous. If on the other hand we replace the exp(-t0/tau) with just the new coefficient, we lose the knowledge of when the exponential rise began and also the time constant, although it is probably the case that the fit would be as good.
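Lance's answer can be made concrete: the two parameterizations are algebraically identical, so adding a multiplier A on top of the (t0, tau) form would be redundant. A quick numerical check, using the head post's fitted values:

```python
import math

# Writing the fit as C(t) = b + exp((t - t0)/tau) is the same as
# C(t) = b + A*exp(t/tau) with A = exp(-t0/tau): A and t0 are two
# names for one degree of freedom, not two free parameters.
tau, t0, b = 59.0, 1723.0, 260.0   # fitted values from the head post
A = math.exp(-t0 / tau)
for t in (1958.0, 2000.0, 2050.0):
    assert math.isclose(b + math.exp((t - t0) / tau),
                        b + A * math.exp(t / tau))
print("identical")
```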

Lance Wallace
June 3, 2012 5:49 pm

Ken Gregory says:
June 3, 2012 at 11:45 am
Here is a graph of the CO2 extrapolation using the parameters given in the lead post, with some selected IPCC scenarios. Note the huge range of CO2 projections to the year 2100; 1248 ppm in the A1F1 high case, 486 ppm in the B1 low case.
http://www.friendsofscience.org/assets/documents/CO2_IPCC_ScenVsActual.jpg
The A2 Reference case is very close to the CO2 model extrapolation.
Many thanks Ken for applying the model above and comparing it to the IPCC models. Quite delightful that a simple exponential model tracks so closely to a highly sophisticated multiple-parameter model employing the full panoply of hard-won climate science findings.

Bart
June 3, 2012 7:27 pm

Lance Wallace says:
June 3, 2012 at 4:48 pm
“Well, you have 5 adjustable parameters here and you know what von Neumann said about that.”
It isn’t a “fit”, so the criticism is inapposite. It is the simplest model possible to elucidate the behavior dictated by the data, and you know what Einstein said about that.

P. Solar
June 3, 2012 9:01 pm

Wallace, I think the key finding here is in your residuals. They show that there is a temperature dependency, but it’s very small: ±1.5 ppm max over the period of the data.
That in itself is very useful, because I have seen a number of people suggesting that most of the CO2 rise was due to temperature rise (like happens during and after deglaciation).
The other thing to notice is that the quadratic is a better fit. The exponential starts too big and ends too small. That is a bad sign in view of the extrapolations you do at both ends.
As your plot shows the two can be grossly similar over that short a section and many observers have loosely categorised the change as “exponential” because it is growing ever faster.
There was some paper (reported on WUWT) that had found “super-exponential” growth in CO2. This attracted much derision here from the uneducated howlers who thought it was alarmist propaganda. In fact, as I pointed out at the time, in mathematics super-exponential just means a function that grows faster than an exponential. The paper’s finding was that a quadratic was a better fit, and it noted that a quadratic is super-exponential. Your plot basically confirms this.
The other problem with extrapolating in either direction is that there is very little of the curved section to fit to, so very small errors could lead to large changes in the coefficients of the fit. It also implies the assumption that whatever caused / is causing the rise is unchanging on the century scale, e.g. that economic growth has been and always will be a fixed percentage per year (a fixed percentage growth gives an exponential).
I did the exponential plotting thing two or three years ago. I found it best to do three different exp. fits: pre-1900; 1900-1965 and 1965 onwards. I was using economic data for fossil fuel extraction and scaling the result to Mauna Loa record.
I’ll try to dig out my results.

June 3, 2012 10:17 pm

The curve fitting exercise of the above article is pointless. If a curve is fitted then the equation of the curve provides a description of the shape of the curve but no information is gained by such an exercise. And it cannot assist in explaining why all the emissions are not sequestered.
I have to agree — indeed, the very impossibility of distinguishing a quadratic from an exponential (or, as pointed out, from a harmonic function or many other possibilities) is indicative that the baseline is too small to prove much of anything at all.
I also agree with your argument on non-uniqueness — Indeed, one thing that IS apparent in this curve is that the variation ABOUT exponential, or quadratic, or sinusoidal, is almost completely irrelevant compared to the dominant behavior. It can be modeled quite accurately by:
C(t) = C_0(t) + V(t)
where C_0(t) is either of the empirical fits above (or any other fit to the smooth curve) and V(t) describes the “anomaly” — the noise on the curve, and is visibly over two orders of magnitude smaller. LOTS of things could determine the shape of C_0(t). Lots of combinations of things, at that. The problem continues to be the absurdly short baseline. Monotone increasing is not only boring, it is impervious to analysis — any monotonic driver can be correlated with it and will “work”. A completely separate cause — which might or might not be a significant component of the cause of C_0(t) — could be responsible for the small secular variations in V(t).
The same argument works both for and against the CAGW CO_2 is the devil argument. Over the last 110+ years, there is at least some correlation between the solar cycle and global temperatures. There is also (presumably) monotone/exponential or whatever increasing CO_2. It has been argued that CO_2 is responsible for the increasing temperature, with solar cycle at best a minor modulator around the overall average increase. However, one can also fit the data with the solar cycle being a primary driver and CO_2 an all but irrelevant modulator. This sort of thing is often possible when fitting nonlinear curves — there is an almost irresistible temptation to commit the sin of seeking confirmation from the success of a fit to some set of functions that tells a good story (that is, the story you want to believe) and blind yourself to the fact that there might be a dozen equally or even more successful fits (and a few less successful ones) that tell a very different story, and that (nature being nature) one of the less successful fit/stories could end up being the true one, given the (usually unstated or unknown) errors in the measurements and methodology that comprise the fit data.
There is a really, really lovely paper by Koutsoyiannis that illustrates the general problem quite beautifully. In fact, on its first page the figure alone says it all, and shows why Anthony’s curve fitting is meaningless — it is halfway between window A and window B, where the data could be quadratic, exponential, sinusoidal, or just spectral noise on a really long-term deterministic behavior that hasn’t even begun to be resolved. You can grab a preprint of his paper here:
http://itia.ntua.gr/en/docinfo/673/
Koutsoyiannis is actually a Really Bright Guy, and the essence of his paper is that the entire statistical basis of climate science is tragically flawed. He goes on and shows that the entire concept of causality in climate science is badly broken; that it should really be described not by ordinary classical statistics but rather by Hurst-Kolmogorov statistics, which is basically a peculiar structure of stochastic state transitions modulated by non-stochastic noise. I find his argument rather compelling. You can see one of the talks he has presented on HK statistics in climate science here:
http://www.cwi.colostate.edu/nonstationarityworkshop/SpeakerNote/Wednesday Morning/Koustayannis.pdf
Sadly, I think his math is way beyond the level of understanding of most climate scientists. I keep hoping Bob Tisdale sees one of my posts of his work, though, as his models precisely reproduce the stochastic jump followed by trendless noise Bob observes in SST data. Hurst-Kolmogorov process — stationary noise plus scaling behavior.
I also just finished Taleb’s The Black Swan and found it quite revelatory. It, too, warns against the abuse of statistical methods designed for Mediocristan — that part of the world where Gaussian statistics and concepts like standard deviation have meaning and life is rather predictable and boring (like mainline classical physics) — by applying them in Extremistan, places where nonlinearity and scaling behavior render any application of traditional statistical methods completely invalid and capable of making dangerous and expensive mistakes. He also warns — repeatedly — against the danger of confirmation bias, the ability of humans to go looking for data that will confirm their pet theory and find it, at the minor cost of ignoring all the other data that confounds it.
Places like climate science. The very first question that should have been asked in climate science before starting a sky-is-falling-and-we-must-tell-the-king scandal is: is there anything truly unusual about the climate today? The answer is very clearly no. A resounding no. A statistically certain, overwhelming no. Whether one looks at 25 million years of proxy-derived climate data, 5 million years of proxy-derived climate data, 1 million years of proxy-derived climate data, 100,000 years of proxy-derived climate data, or 10,000 years of proxy-derived climate data, there is absolutely nothing remarkable about the present. It isn’t the warmest, the most rapidly warming, the coolest, the rainiest, or the -est of anything. It is boring.
The entire CAGW fiasco is a clear example of how humans implicitly believe the largest thing they’ve ever seen to be the largest thing in existence.
rgb

P. Solar
June 3, 2012 10:42 pm

OK, found my exp fits. The data I used has this header, should be enough to find its source.
#*** Global CO2 Emissions from Fossil-Fuel Burning,
#*** Cement Manufacture, and Gas Flaring: 1751-2007
#***
#*** June 8, 2010
#***
#*** Source: Tom Boden
#*** Gregg Marland
#*** Tom Boden
#*** Carbon Dioxide Information Analysis Center
#*** Oak Ridge National Laboratory
#*** Oak Ridge, Tennessee 37831-6335
The extraction data was integrated by adding each year’s output to get total emissions and then plotted on a log scale. This shows three quite clear stages in development and underlines the fact that fitting just one curve of any sort is too simplistic.
As is generally known, industrial output only really took off after about 1960. The log plot shows there were three stages of fairly constant annual percentage growth. The post-1960 period corresponds to the Mauna Loa record and allows scaling the industrial extraction data to the residual airborne CO2 concentration. This scaling produces a preindustrial CO2 level of around 295 ppm, suggesting the favorite figures of 260-270 are rather too low.
The later segment was fitted from 1965 onwards, and the actual data rises slightly quicker than the fitted line near the end, so the 2050 projection of 462 ppm may be somewhat low *if* the current rate of growth continues.
http://imagebin.org/index.php?mode=image&id=215058
http://imagebin.org/index.php?mode=image&id=215060
[mods, these images are only good to 15days, please copy them if possible.]
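The mechanics of P. Solar's procedure, cumulate the annual emissions and then regress the CO2 record on the cumulative total to recover an implied preindustrial level, can be sketched as follows; the short emissions and CO2 series here are fabricated stand-ins, not the CDIAC or Mauna Loa data:

```python
import numpy as np

# Cumulate annual emissions, then regress CO2 on the cumulative total;
# the intercept is the implied preindustrial level. These short series
# are fabricated stand-ins for the CDIAC file and the Mauna Loa record.
emissions = np.array([4.0, 4.2, 4.4, 4.6, 4.9])       # hypothetical GtC/yr
cumulative = np.cumsum(emissions)                      # total emitted so far
co2 = np.array([316.0, 317.0, 318.1, 319.2, 320.5])   # hypothetical ppm

airborne_scale, preindustrial = np.polyfit(cumulative, co2, 1)
print(round(preindustrial, 1))  # implied preindustrial level, ppm
```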

Carrick
June 3, 2012 10:47 pm

I looked at the correlation between d(S_CO2)/dt and HADSST3(?), and the results were pretty much what I was expecting (ironically based in part on work that Bart had done previously).
The peak correlation corresponds to approximately a 2 month lag between change in CO2 and temperature, with temperature lagging CO2. It’s actually easy to see which lags which with a zoom in of Bart’s chart.
Here’s my figure.
Cause and effect suggests that CO2 is driving temperature change, and what we’re looking at here is predominately the “fast response” of climate to changes in CO2.
I’m guessing that Bart never tested the lag when he made this claim:

Because of this proportionality, the CO2 level necessarily lags the temperature input, therefore in dominant terms, the latter is an input driving the former.

(That’s par for the course from my experience.)
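The lag-at-peak-correlation test described above can be sketched on synthetic series with a known 2-sample lag built in; the real inputs would be the Mauna Loa derivative and an SST index such as HADSST3:

```python
import numpy as np

# Scan lags for the peak cross-correlation. "Temperature" here is a
# shifted copy of d(CO2)/dt, so the known answer is a 2-sample lag.
rng = np.random.default_rng(0)
dco2 = rng.standard_normal(500)
temp = np.roll(dco2, 2)          # temperature lags dCO2 by 2 samples
temp[:2] = 0.0                   # discard the wrap-around values

lags = range(-10, 11)
corrs = [np.corrcoef(dco2[10:490], temp[10 + k:490 + k])[0, 1]
         for k in lags]
best = list(lags)[int(np.argmax(corrs))]
print(best)  # -> 2 (peak correlation where temperature trails by 2)
```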

June 3, 2012 11:03 pm

Allan:

Humanmade CO2 emissions are lost in the noise of the much larger natural system, and most humanmade CO2 emissions are probably locally sequestered.

Depends on whether you are looking at surface measurements, or “full atmospheric column” as to whether this is true or not.
The Phoenix dome regularly achieves surface CO2 concentration levels over 600 ppm, but that has a negligible effect on Phoenix temperatures because it’s all concentrated near the surface. The whole point of measuring on Mauna Loa is related to looking at the “well-mixed” portion of the atmosphere, well above regional scale changes (and being situated above a tropical jungle there are substantial diurnal variations in CO2 driven by natural variability–again well studied by Keeling.)
People who are going to criticize the work of others should really take the time needed to understand the rationale of the original experimental setup. Otherwise you’re not being skeptical; you’re being overly credulous toward your own preconceived notions. Keeling spent roughly 10 years researching the optimal way of collecting these data, and before him, for reasons you’ve managed to elucidate, it was generally accepted that it was not achievable. Part of the credit he has received, and it was well-deserved, was for working out how to sidestep many of the issues you and others have raised (and these are patty-cake level issues you’re raising compared to the critics of the day he had to face).

Finally, I have no confidence in the C14/13/12 ratio argument. I think others have demolished it and I need not do so again.

This is just another one of those wave-offs you guys like to do for facts you can’t readily counter. “Demolished” just means somebody wrote some nice-sounding words that don’t stand up to the scrutiny of the skeptical mind.

Bart
June 3, 2012 11:06 pm

Carrick says:
June 3, 2012 at 10:47 pm
Honestly, Carrick, I would have expected better than such a series of elementary errors from you. But, then again, that’s par for the course from my experience.
We’re talking about the relationship between the CO2 level and the temperature. You are looking at the derivative of the former versus the latter. With a 90 degree phase lead from the derivative, what the hell else would you expect?
As for zooming in on my chart, the CO2 data is processed with a 2 year non-causal filter, being the centered average. The WoodForTrees site automatically centers the average to make up for the phase delay. So, it’s hardly surprising that the data points anticipate the future when half of each one is made up of information from the future.
Massive fail, dude.

P. Solar
June 3, 2012 11:33 pm

OK, let’s try again for those plots. Imagebin seems to be appropriately named: if you don’t want anyone to see your image, image-bin it!
http://image.bayimg.com/iaongaadp.jpg
http://image.bayimg.com/jaonfaadp.jpg

Editor
June 3, 2012 11:41 pm

I hate to say it, but this analysis is meaningless. You can’t just fit a curve to something and extend it, that’s the kind of thing that the AGW alarmists do.
Instead, you need to look at the actual rates at which CO2 is emitted, and the rate at which it is sequestered. This is an exponential rate of sequestration of some kind, in which the amount sequestered increases as the atmospheric concentration increases.
One of the characteristics of this type of exponential decay is that if the emissions are constant, you end up with a “sigmoid function”, where eventually the amount sequestered will increase to match the amount emitted and the rise in atmospheric levels will stop.
Now, the author shows us this image, claiming doubling by 2050 …

However, there is no reason to prefer his curve over this sigmoidal curve, which matches the data just as well as his does …

In reality, nothing in nature continues to grow exponentially.
w.
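Willis's saturation argument can be sketched numerically: with constant emissions and a sink proportional to the excess above background, the level flattens out at background + emissions × time constant instead of growing without bound. All values below are illustrative assumptions:

```python
# First-order sink plus constant emissions:
#   dC/dt = E - (C - C0)/tau
# The level saturates at C0 + E*tau rather than rising forever.
C0, tau, E = 280.0, 50.0, 2.0   # hypothetical background ppm, sink
                                # time constant (yr), emissions (ppm/yr)
dt, years = 0.25, 400.0
C = C0
for _ in range(int(years / dt)):
    dC = E - (C - C0) / tau      # emissions minus sequestration
    C += dC * dt
print(round(C))  # -> 380, i.e. C0 + E*tau
```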

P. Solar
June 3, 2012 11:50 pm

Carrick, could you explain why you expected the rate of change of CO2 to ’cause’ a temperature rise?
It is well known that higher water temp will produce outgassing. On the other hand, if you are suggesting CO2 conc is producing a “forcing” that is producing a temperature change, you should be looking at CO2 conc vs dT/dt.
Perhaps you could explain your thinking. I seem to have missed the causal relationship you are suggesting.

June 4, 2012 12:05 am

Lance Wallace says:
June 3, 2012 at 5:01 pm
Thank you Dr. Engelbeen for the useful references. Your proposed formula seems to suggest that at times of decreasing or plateauing temperatures, the CO2 emissions would need to increase at just the right speed to offset the reduced effect of temperature and maintain the eponential increase.

The human emissions are near linearly increasing over time; that is the basis for the trend. The 0.55 is pure coincidence, as that depends on the reaction speed of the process, but it has been remarkably constant over the past 100 years or so (the “airborne” fraction of the emissions – in mass – remains constant). As the influence of temperature on the short term (seasonal: 5 ppmv/°C, interannual ~4-5 ppmv/°C around the trend) is rather small, its influence on the trend, if averaged over 2-3 years, is near negligible.
BTW, no need to plot the accumulated CO2 trend on log paper; even on a linear scale there is a remarkable correlation with the emissions:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
BTW2: No Dr. here; I have a B.Sc. in chemistry, but changed to an M.Sc. job in process automation some 25 years ago, now already 8 years retired…

P. Solar
June 4, 2012 12:24 am

Willis : “In reality, nothing in nature continues to grow exponentially.”
Your point about many curves matching such a short segment is fair, but the whole AGW debate is all about a change that is NOT natural: economic growth (and hence fossil fuel usage) has been growing exponentially since the 60’s (about 2% per year).
So I don’t see why you suggest a sigmoid, which would correspond to constant emissions. No sign of that happening in the near future. 🙁

FerdiEgb
June 4, 2012 1:06 am

Bart says:
June 3, 2012 at 3:24 pm
I assert that these are the facts, folks:
1) CO2 is very nearly proportional to the integral of temperature anomaly from a particular baseline since 1958, when good measurements became available.

That is nice, but much too short to give a straight answer. If you extend that back to before the MLO time, the temperature was below your baseline for the full period 1900-1935, average -0.2°C. That is good for a CO2 drop of 84 ppmv, if you integrate over that period. Whatever the quality of the historical measurements, other proxies (like stomata) or ice cores, none of them shows such a drop. To the contrary, all show some increase. Extending that further to the depth of the LIA (some 0.4°C below baseline during several centuries), about all CO2 would be used up.
That simply shows that the correlation between temperature and rate of change of CO2 is mainly on the variability, but on the trend itself that is a spurious correlation.
Second, there is no known physical mechanism which can deliver 70 ppmv in 50 years time into the atmosphere for an increase of only 0.2°C in average. The equilibrium CO2 level between ocean surface and atmosphere for a 0.2°C increase is +3 ppmv. Vegetation acts in opposite ways and both are proven net sinks for CO2. All other sources are too slow or too small.
Further, there are lots of other observations which need to be fitted, whatever the source of the increase may be, temperature driven or not:
– the decline in d13C/12C ratio in the atmosphere and oceans
– the pre-bomb decline in d14C/12C ratio
– the increase of biomass (oxygen balance)
– the increase of DIC (total inorganic carbon) in the oceans
– the overall mass balance
2) Because of this proportionality, the CO2 level necessarily lags the temperature input, therefore in dominant terms, the latter is an input driving the former.
Agreed, except that this is true for the influence of temperature variations on the variations in increase rate, not on the increase rate itself.
3) The temperature relationship accounts for all the fine detail in the CO2 record, and it accounts for the curvature in the measured level.
The first is true, the second is for 96% spurious (the remaining 4% is the effect of the increase in overall temperature over that period)
4) This leaves only the possibility of a linear contribution from anthropogenic inputs into the overall level, which can be traded with the only tunable parameter, the selected anomaly offset.
That is where we differ in opinion: You can have the same fit of both curves if the human emissions are responsible for most of the trend and temperature variability is responsible for most of the variability in the derivative of the trend.
5) Anthropogenic inputs are linear in rate. Therefore, to get a linear result in overall level from them, there has to be rapid sequestration. (Else, you would be doing a straight integration, and the curvature, which is already accounted for by the temperature relationship, would be too much.)
As the temperature-trend relationship is largely spurious, there is no need for rapid sequestration (as can be seen in the observed adjustment time).
6) With rapid sequestration, anthropogenic inputs cannot contribute a significant amount to the overall level.
Again, the observed sequestration is not rapid, it is in the order of 53 years.
Now, you may quibble about this or that, and assert some other relationship holds here or there, but your theories must conform with the reality expressed by these six points, because this is data, and data trumps theory.

As your theory already trumps the data in the first point, your theory is falsified…

P. Solar
June 4, 2012 1:44 am

Duh, looks like links only work from my IP where I uploaded them. The day WP has a page preview we’ll all do a lot better. Let’s try again…

Lance Wallace
June 4, 2012 2:38 am

Bart says:
June 3, 2012 at 7:27 pm
Lance Wallace says:
June 3, 2012 at 4:48 pm
“Well, you have 5 adjustable parameters here, and you know what von Neumann said about that.”
It isn’t a “fit”, so the criticism is inapposite. It is the simplest model possible to elucidate the behavior dictated by the data, and you know what Einstein said about that.
Excellent riposte, Sir, and I fully deserved it for being flippant. Models with some sort of underlying physics are much to be preferred over my simple curve-fitting exercise. I checked your link and could see that a small tau1 was better than a large one, but didn’t see what values you obtain for tau2 or the other parameters and wondered whether you cared to present the values here and perhaps comment on their interpretation.

Lance Wallace
June 4, 2012 3:20 am

Willis Eschenbach says:
June 3, 2012 at 11:41 pm
“I hate to say it, but this analysis is meaningless. You can’t just fit a curve to something and extend it, that’s the kind of thing that the AGW alarmists do.”
Ouch! You really know how to hurt a guy, Willis. I’m hardly defending what I did lightheartedly for an hour or two a couple of days ago; it was just that the fit resulted in a rather good estimate of both the rough time of the beginning of the rise (some 200 years ago) and the rough background CO2 level (about 260 ppm). Then it turned out, as RGBrown noticed above, that the residuals were on the order of 1 ppm, two orders of magnitude below the CO2 levels. Finally, the residuals may be conveying some information to us, which several people above have tried their hand at interpreting. For example, the single almost pure spike occurred in 1998, contemporaneous with the super El Nino and the temperature spike. Without the “meaningless” model, one would not see the departures from the model, which could provide clues to the underlying physics. For example, there appear to be annual-to-decadal cycles visible in the residuals.
That said, I have no argument at all with those who have correctly noted that other approaches can provide equally good fits. Presumably Bart’s coupled differential equation model, Willis’s sigmoid, Brown’s model with a main monotonic curve plus a small noise term, Koutsoyiannis’ Hurst-Kolmogorov statistics arising from stochastic step functions followed by trendless noise, and P. Solar’s three-exponential model can all fit the data equally well, and I would expect them all to show the exact same behavior of the residuals that was shown by both the quadratic and exponential models I used. If so, then there is something that the common “noise” afflicting all these models is trying to tell us about the underlying physical reality.

Reply to  Lance Wallace
June 4, 2012 6:18 am

Lance,
I have been statistically curve-fitting most of the available climate data for several years. Most recently I have worked with the Scripps column 10 CO2 and 13CO2 monthly averages from the South Pole to Alert, Canada. I included global anthropogenic emissions in these regressions. This analysis indicates less than a 10% anthropogenic contribution added onto a rising segment of a 200-year natural cycle. Take a look at http://www.retiredresearcher.wordpress.com. So far I have had no peer reviews of this work, and only one comment. With all the smart people that visit this site, I expected more.

Myrrh
June 4, 2012 4:03 am

Just learned about Julian Flood’s theory, the Kriegesmarine Effect, with a temperature spike at the same time as the CO2 spike in the upper troposphere and stratosphere in the Beck paper I posted.
http://wattsupwiththat.com/2012/06/03/shocker-the-hansengiss-team-paper-that-says-we-argue-that-rapid-warming-in-recent-decades-has-been-driven-mainly-by-non-co2-greenhouse-gases/#comment-1000669

Allan MacRae
June 4, 2012 4:05 am

Carrick says: June 3, 2012 at 11:03 pm
Sorry Carrick,
First, you totally miss the point of the urban CO2 readings – it’s about Ferdinand’s mass balance argument, which fails not only on a seasonal basis but even on a daily basis, imo.
Your conclusions are technically wrong because you have not taken the time to understand the issues. You are missing one or more steps in the process of Read, Think, Write.
Your comment on the 2 month lag is just plain wrong on several counts. Look carefully at my original graphs in my 2008 paper. All the original data is in Excel sheets there.
The C14/13/12 issue has been done to death here at WUWT and elsewhere (by Roy Spencer and many others) – Google it.
Finally, your condescending tone demeans you. Forgive me for responding in kind.

P. Solar
June 4, 2012 5:03 am

Yet another attempt at getting a frigging image into this blog.
http://imagebin.org/index.php?mode=image&id=215058
http://imagebin.org/index.php?mode=image&id=215060

June 4, 2012 5:05 am

Lance Wallace:
At June 4, 2012 at 3:20 am you say:

Presumably Bart’s coupled differential equation model, Willis’s sigmoid, Brown’s model with a main monotonic curve plus a small noise term, Koutsoyiannis’ Hurst-Kolmogorov statistics arising from stochastic step functions followed by trendless noise, and P. Solar’s three-exponential model can all fit the data equally well, and I would expect them all to show the exact same behavior of the residuals that was shown by both the quadratic and exponential models I used. If so, then there is something that the common “noise” afflicting all these models is trying to tell us about the underlying physical reality.

I notice that you do not mention my post at June 3, 2012 at 2:31 pm which says

We published 6 such models with 3 of them assuming an anthropogenic cause and the other 3 assuming a natural cause of the rise in CO2 indicated by the Mauna Loa data: they all fit the Mauna Loa data.
These issues were ‘done to death’ in the thread
http://wattsupwiththat.com/2012/05/24/bob-carters-essay-in-fp-policymakers-have-quietly-given-up-trying-to-cut-%C2%ADcarbon-dioxide-emissions/

Each of our models matched each annual value of atmospheric CO2 concentration indicated by the Mauna Loa data to within the measurement accuracy of the Mauna Loa data. So, none of them have any “residuals” or “noise”. This was explained in the link.
But our three basic models each assumed a different mechanism dominates the behaviour of the carbon cycle. And they can each be used to emulate a natural or an anthropogenic cause of the rise in the Mauna Loa data.
Hence, our findings falsify your suggestion that “there is something that the common “noise” afflicting all these models is trying to tell us about the underlying physical reality”.
Richard
PS Our pertinent paper is
Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005)

FerdiEgb
June 4, 2012 6:15 am

Myrrh says:
June 3, 2012 at 5:13 pm
Despite the low data density, the CO2 contour in troposphere and stratosphere confirms the direct measurements near the ground that suggest a CO2 maximum between 1930 and 1940.
Myrrh, I have had a lot of discussions with the late Ernst Beck about the validity of his data. The tropospheric data don’t confirm the direct measurements on the ground, simply because they were sometimes hundreds of ppmv higher than near ground, which shows that the data are completely useless. Unfortunately so. That is also the case for most data which show the 1942 “peak”, mostly taken at places with a huge diurnal variation and extreme variability. That alone already shows that the data are highly contaminated by local sources.
The data at Mauna Loa are sometimes contaminated by local sources too, but not more than +/- 4 ppmv, compared to e.g. Giessen where the longest 1939-1941 series was taken with a variability of 68 ppmv (1 sigma!). How can one deduce a “global” signal from such a series?

FerdiEgb
June 4, 2012 6:26 am

Allan MacRae says:
June 4, 2012 at 4:05 am
First, you totally miss the point of the urban CO2 readings – it’s about Ferdinand’s mass balance argument, which fails not only on a seasonal basis but even on a daily basis, imo.
The mass balance must always be obeyed, no matter what happens where. But it is only calculable on a yearly basis, as we only have yearly inventories of the emissions. Urban readings are in any case irrelevant for the mass balance, as are all readings in the lowest few hundred meters above land: that represents only 5% of the air mass, where CO2 is not well mixed due to a lot of local sources and sinks. In the rest of the global air mass, the yearly averaged measurements are all within 2 ppmv for the same hemisphere and 5 ppmv between the hemispheres, where the SH lags the NH but the trends are exactly the same:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends_1995_2004.jpg
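The bookkeeping behind this mass-balance argument can be sketched in a few lines. This is a minimal illustration only; the figures below are invented placeholders, not actual yearly inventory or Mauna Loa values.

```python
# Yearly mass balance: observed CO2 rise = emissions + net natural flux.
# Rearranged, the net natural flux is whatever closes the budget.
def net_natural_flux(co2_rise_ppmv, emissions_ppmv):
    """Net natural source (+) or net sink (-) implied by the yearly budget."""
    return co2_rise_ppmv - emissions_ppmv

# An illustrative year where CO2 rose 2.0 ppmv while emissions were ~4.0 ppmv
# implies nature was a net sink of ~2.0 ppmv that year.
sink = net_natural_flux(2.0, 4.0)  # -2.0
```

The sign convention is the whole argument: as long as the observed rise is smaller than the emissions, the budget forces the natural world to be a net sink for that year, whatever the individual fluxes are.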

FerdiEgb
June 4, 2012 6:38 am

Allan MacRae says:
June 4, 2012 at 5:08 am
Carrick – Here are just a few C13/C12 articles I found in 2 minutes of searching – there are many more.

Allan, I am afraid that Dr. Spencer was quite wrong with his article and I have commented there extensively and on his blog, by mail, which he published. It is clearly not his field. The main problem for the origin of the d13C decline could be the release by biomass degradation, but the oxygen balance shows that total biomass is growing…
Other main sources of low 13C are either too small (or also mainly of human origin, like CH4), or unknown, but there is no reason to assume that these started to emit increasingly together with the human emissions. As for underwater volcanoes: that CO2 is captured by the deep oceans, which are near zero per mil d13C.

P. Solar
June 4, 2012 8:49 am

Jeeeuz! Does anyone have any idea what is going on with these friggin pastebin services today? Are they rigged to ban their use from WUWT or what?!
The last link I posted just returns an empty page, not even your basic empty html tags. Just sweet F.A.
The same link works fine locally, and if I open the link in a new tab (still blank at first) and then refresh, I finally get to see my image.
Now that looks to me like they must be checking HTTP_REFERER, and if it is this site they refuse to serve the image.
Someone care to check that?
http://imagebin.org/index.php?mode=image&id=215058
http://imagebin.org/index.php?mode=image&id=215060
If I’m going quietly mad it would be handy to have some confirmation too; then I can go and seek treatment 😉

Bart
June 4, 2012 9:25 am

FerdiEgb says:
June 4, 2012 at 1:06 am
“…the temperature was below your baseline for the full period 1900-1935, average -0.2°C.”
The impetus to CO2 is approximately proportional to dT + 0.5, where dT is the temperature anomaly relative to whatever the baseline is in the data. At no time in the modern era or even a little before would that quantity have been negative or zero. In fact, it suggests that CO2 will keep rising for some time to come, until long term limiting factors kick in.
The rest of your post is an appeal to magick.
P. Solar says:
June 4, 2012 at 8:49 am
You’re not mad. The images are not coming through.

Bart
June 4, 2012 9:39 am

Lance Wallace says:
June 4, 2012 at 2:38 am
“…but didn’t see what values you obtain for tau2 or the other parameters and wondered whether you cared to present the values here and perhaps comment on their interpretation.”
tau2 is only required to be large relative to the record length, so that the “equilibrium” level of CO2 (the level to which the current temperature is driving it) will be approximately the integral of the temperature anomaly over the relevant time interval. I need not have put in a term involving it at all to match the data, but there must be some ultimate limit to the equilibrium value, and a time constant is one way of enforcing one. For this exercise, I simply set the feedback gain to zero (tau2 = infinity).
The value of k2 is 0.2 and the value of To is 0.5. These were chosen to be consistent with the data.
The value of k1 was chosen to be 0.5. This is consistent with the IPCC insistence that roughly half of the emitted CO2 is almost immediately dissolved in the oceans. As you can see, I tried a variety of values for tau1, with the most realistic ones being on the order of 3 years or less to be consistent with the data. With such values of tau1, the contribution from H becomes negligible, so the value of k1, which must be less than or equal to one in any case, is fairly moot.
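For concreteness, the model as described here (CO2 relaxing toward a temperature-driven "equilibrium" level with time constant tau1, tau2 set to infinity so the equilibrium simply integrates the temperature term, and a fraction k1 of emissions H entering directly) can be sketched roughly as follows. The parameter values are the ones quoted in the comment; the initial levels, units, and input series are invented placeholders, so this is a structural sketch, not Bart's actual code.

```python
def simulate(temps, emissions, C0=315.0, Co0=315.0,
             tau1=3.0, k1=0.5, k2=0.2, T0=0.5, dt=1.0 / 12):
    """Euler integration of dC/dt = (Co - C)/tau1 + k1*H, where the
    "equilibrium" Co integrates k2*(T + T0) (tau2 -> infinity: no decay)."""
    C, Co = C0, Co0
    out = []
    for T, H in zip(temps, emissions):
        Co += k2 * (T + T0) * dt              # equilibrium driven by temperature
        C += ((Co - C) / tau1 + k1 * H) * dt  # relaxation plus direct emissions
        out.append(C)
    return out

# With temperature pinned at -T0 and no emissions, nothing moves:
steady = simulate([-0.5] * 5, [0.0] * 5)  # stays at 315.0
```

The structural point the sketch makes is the one argued in the thread: with small tau1, the k1*H term washes out quickly, so the long-term level is dominated by the integrated temperature term.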

Tony Mach
June 4, 2012 9:45 am

In the last figure you can see the decline of industrial production after 1990 in the former Eastern Bloc.

June 4, 2012 9:48 am

Ferdinand, thanks for taking the time to respond to Allan’s comments. That was helpful for me at least. (I don’t claim to be an expert, nor do I choose to argue as if I were, on this topic.)
Allan, I suspect you know full well my comments weren’t intended in a condescending manner (though your original one and your responses clearly were). I suggest that you read here. I’d suggest there’s quite a bit left for you to learn before you can argue as an expert on the matter of atmospheric measurements in general, and CO2 isotopic measurements in particular.

June 4, 2012 10:42 am

Bart:

Honestly, Carrick, I would have expected better than such a series of elementary errors from you. But, then again, that’s par for the course from my experience.

From your experience, I assume you are referring to the thread on McIntyre’s blog, as that and Nick’s blog, where you repeated the same errors, are our only real brushes.
I’ll let others judge who came out on top on that discussion. 😉 You are the one who claimed you can’t have negative delays in an impulse response function and confused physical causality with signal causality, not me.
Since your experience of me making “elementary errors” is so great, I’m sure you can point to one or two of them from those threads (and maybe including your admission after approximately 50 ad hominems from you, I didn’t count them, but I should have, that you were wrong).

We’re talking about the relationship between the CO2 level and the temperature. You are looking at the derivative of the former versus the latter. With a 90 degree phase lead from the derivative, what the hell else would you expect?

If you take the derivative of a function, say I(tau), and I(tau) = 0 for tau < 0, then I'(tau) = 0 for tau < 0. Taking derivatives in general does not induce a group delay; phase shifts are different from temporal shifts.

As for zooming in on my chart, the CO2 data is processed with a 2 year non-causal filter, being the centered average [and it goes on]

This is a bit of a red herring. You were plotting the derivative of CO2 against temperature; I showed computationally that there was a delay and confirmed it with a visual plot.
But in any case, a centered average can splatter some high-frequency noise into negative time bins, but it doesn’t shift the low-frequency components, and you can still deduce the delay using that. (Or directly numerically, as I also did.) The fact that the two agree shows your argument doesn’t matter.
If you want to make an argument using another quantity such as S_CO2, you should make it based on that quantity, not using a quantity that doesn’t show what you claim to be arguing.
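The claim that a centered average doesn’t shift low-frequency components is easy to check numerically. A minimal sketch with a synthetic slow cycle (not the actual CO2 or temperature series): smooth it with a centered window, then find the lag that maximizes correlation with the original.

```python
import numpy as np

# A centered (non-causal) moving average attenuates a slow cycle
# but should not delay it.
t = np.arange(600)                        # "months", illustrative
x = np.sin(2 * np.pi * t / 120)           # slow 10-year cycle
kernel = np.ones(25) / 25                 # ~2-year centered window
y = np.convolve(x, kernel, mode="same")   # centered average of x

# Correlate the original with shifted copies of the smoothed series;
# trim the edges so wrap-around and padding effects are excluded.
lags = np.arange(-36, 37)
cc = [np.corrcoef(x[36:-36], np.roll(y, -k)[36:-36])[0, 1] for k in lags]
best_lag = int(lags[np.argmax(cc)])       # 0: the centered filter adds no delay
```

A causal (trailing) average of the same length would instead peak at a nonzero lag, which is the distinction being argued over here.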

June 4, 2012 10:59 am

P. Solar:

Carrick, could you explain why you expected the rate of change of CO2 to ’cause’ a temperature rise?
It is well known that higher water temp will produce outgassing, on the other hand if you are suggesting CO2 conc is producing a “forcing” that is producing a temperature change you should be looking at CO2 conc vs dT/dt

It’s the fact that CO2 changes that causes the forcing. I wouldn’t have plotted it the way Bart did, but that’s how he did it, and that’s why I addressed the delay the way I did.
I expected the delay for the “fast” response to be about two months, because I had previously computed it for S_CO2 versus global mean temperature. I realized from that analysis that you needed better resolution than one month to pull out the actual delay (because it’s between one and two months).
There is a real physical effect as you correctly pointed out relating to response of the oceans to atmospheric temperature in which as the temperature rises, CO2 comes out of solution in the ocean. However, there is a rather large delay associated with that, and it’s not impulsive, because there is a latency that depends on depth in the response of mean ocean temperature to forcings from atmospheric temperature. It’s on the order of years.
I knew from previous discussions that any effect where you see near simultaneous response of atmospheric temperature and atmospheric concentration of CO2 can’t be explained by CO2 dissolution from the ocean. So what Bart was showing wasn’t based on a physical understanding of the processes and his conclusions were unphysical.
(He’s not really big on admitting mistakes so don’t expect a “recall” of this theory of his anytime soon, in fact, even when he makes a mistake, it sounds like “We were both right…” even when he was completely wrong.)

SteveSadlov
June 4, 2012 11:07 am

You may have heard of 350.org.
Well here I am with 500.org.
Anything to try and soften the inevitable coming macro decline. But even 500 will be a mere band aid against massive forces.

Bart
June 4, 2012 11:18 am

Carrick says:
June 4, 2012 at 10:42 am
“but it doesn’t shift the low-frequency components”
Switch to a twelve month average, and see how your argument holds up.
“You are the one who claimed you can’t have negative delays in an impulse response function and confused physical causality with signal causality, not me.”
Obviously, you cannot have real negative delays, just apparent ones indicated by the particular analysis tool. And, I showed your argument was inapplicable to the case at hand. As was Nick’s criticism. I was right about the transfer function, and remain so to this day.
Yes, I jumped to a conclusion about a negligible matter (because you were being such a —-) but quickly came clean about it and put it aside. You should take a lesson from that experience – it does you no good to argue an untenable position into the ground. The best thing to do is come clean about it and move on to more substantive issues. There is a quote attributed to the great British economist John Maynard Keynes, who was known to change his positions, sometimes in mid-argument. When challenged on this by a critic, he fixed him with an unwavering stare and replied: “When I find that I am wrong, I change my mind. What do you do?”
Here is some advice: when you find yourself holding the absurd position that temperature responds to the rate of change of CO2… stop digging. Your arm-waving here merits no further response.

Bart
June 4, 2012 11:26 am

Carrick says:
June 4, 2012 at 10:59 am
“I knew from previous discussions that any effect where you see near simultaneous response of atmospheric temperature and atmospheric concentration of CO2 can’t be explained by CO2 dissolution from the ocean.”
You aren’t looking at “atmospheric concentration of CO2”. You are looking at its derivative. Do you know the differential (pun intended)?
You are wrong. Ridiculously, uproariously, hilariously so. Admit it, and move on, and people will think better of you.

FerdiEgb
June 4, 2012 12:40 pm

Bart says:
June 4, 2012 at 9:25 am
FerdiEgb says:
June 4, 2012 at 1:06 am
“…the temperature was below your baseline for the full period 1900-1935, average -0.2°C.”
The impetus to CO2 is approximately proportional to dT + 0.5, where dT is the temperature anomaly relative to whatever the baseline is in the data. At no time in the modern era or even a little before would that quantity have been negative or zero. In fact, it suggests that CO2 will keep rising for some time to come, until long term limiting factors kick in.

OK, that is the “fudge factor” to match the increase rate and its variability. No problem with that. But still, if the period 1900-1960 was positive, I am quite interested in how much CO2 that injected into the atmosphere (or how little there was at the beginning of the 20th century). And further back to the LIA, which was, depending on the reconstruction, 0.3-1.0°C cooler than today. Still no problem for CO2 levels? Even further back: near 100,000 years of glacials…
The rest of your post is an appeal to magick.
I am sure that you are a very good theoretician, but sometimes one needs to bring that kind of people back to the ground on their two feet. What you have worked out is theoretically magnificent, but there are some practical problems:
There is no natural process that I know of or ever heard of or ever read of that can deliver 70 ppmv (and according to your formula far beyond that in the future) in only 50 years, only based on a sustained increase of a few tenths of a °C.
If you think that is possible, please give an indication what process that might be with references.

Myrrh
June 4, 2012 12:41 pm

FerdiEgb says:
June 4, 2012 at 6:15 am
Myrrh says:
June 3, 2012 at 5:13 pm
Despite the low data density, the CO2 contour in troposphere and stratosphere confirms the direct measurements near the ground that suggest a CO2 maximum between 1930 and 1940.
Myrrh, I have had a lot of discussions with the late Ernst Beck about the validity of his data. The tropospheric data don’t confirm the direct measurements on the ground, simply because these were sometimes hundreds of ppmv higher than near ground. Shows that the data are completely useless. Unfortunately so. That is also the case for most data which show the 1942 “peak”, mostly taken at places with a huge diurnal variation and extreme variation. That alone already shows that the data are highly contaminated by local sources.
Ferdinand we’ve been through this argument before – your premise begins with belief in “well-mixed global” so everything you see as out of the ordinary is “local contamination” – but, again, until you can show how Keeling arrived at his “well-mixed” claim then all that exists in reality is local.
AIRS data found that: the pictures they showed downplayed what they actually said in their conclusion – that to their astonishment carbon dioxide was not at all well mixed, but lumpy, and so couldn’t be playing any major role in ‘global warming’, and that they needed to go and understand wind systems to get a grasp of what was going on.
There is no, none, zilch, nada, eff all, way that Keeling could establish such a thing as a “well-mixed” background level from where he was measuring. It is simply not physically possible to tell it apart, even if such a creature as a “well-mixed global” level existed. He was measuring local, and they are still measuring local, arbitrarily deciding what local they will include and what not, to present this mythical “well-mixed global”.
I have shown you the man had an agenda, his only interest was to show a rise in man-made CO2 levels – so his curve. You may well be shocked by the enormity of what it takes to link all those stations into his and Callendar’s avowed agenda, but as we’ve had reams and reams of proof, this is done regularly and with coordinated exactness in manipulating world temperature records.
I’m sorry Ferdinand, you may well trust this, but nothing I’ve learned about it shows Keeling and Callendar as anything but cherry pickers who came up with the unproven idea that there is such a thing as “well-mixed background”. AIRS did not find it. AIRS will not release top of troposphere or bottom of troposphere, why not?
Because they can fudge the mid troposphere. Regardless, they came out with the HONEST conclusion that “it was not at all well-mixed, but lumpy” and was “insignificant in global warming”.
It’s lumpy, because, it’s all local.
All you’re doing is what Callendar did, taking out everything that doesn’t fit your unproven premise.
And you believe it because they kept repeating that it exists. What that means for all the hard and dedicated work you’ve built on it, I can only imagine, but first prove that “well-mixed background” exists, because Callendar showed no such thing in his cherry-picking:
“Considering Figure 8 we can see that Callendar selected only the lowest sample values and omitted several data sets.”
The data at Mauna Loa are sometimes contaminated by local sources too, but not more than +/- 4 ppmv, compared to e.g. Giessen where the longest 1939-1941 series was taken with a variability of 68 ppmv (1 sigma!). How can one deduce a “global” signal from such a series?
As above, their adjustments are arbitrary, there is no physics that can separate a supposed ‘global’ signal from local production. There is no global signal, it’s all local. Global is lumpy.
There’s a huge amount of data of this lumpy CO2, which, is fully part of the Water Cycle, and which, is one and a half times heavier than air so will always sink displacing Air without work being done – either way, however high it gets it will come down to Earth where plants exist waiting for it…
And, because it is heavier than Air, it will not readily rise into the atmosphere. It takes wind, or heat as gases expand, or it joins with water vapour rising into the cold heights as carbonic acid, where it condenses into rain, releasing its heat in the cold heights; or it can be expelled directly into the heights by volcanic force, or planes. And all that within the wind systems, which do not cross the equator but stick to their own hemispheres. Winds are volumes of Air on the move because of differences in temperature and pressure – hot air rises and cold sinks – this is convection, exactly what happens in a classroom when a bottle of scent is opened… There is no “spontaneous diffusion of molecules into empty space” – Air is not empty space. That’s why we have sound: because the molecules don’t “spontaneously diffuse as per the ideal gas law” but vibrate where they are, making their neighbouring volumes of gas vibrate, passing on the sound, and then stop vibrating.
All this to show: there is no physics which makes a “well-mixed global background”. That premise has to be rejected, or proved empirically and by real physics.

Reply to  Myrrh
June 4, 2012 1:32 pm

Myrrh,
If you look at the raw event flask data, you will find many spikes in the CO2 data that are flagged and not included in the monthly averages. Most of these spikes are not errors, because there is usually a corresponding spike in the 13CO2 data. The recorded monthly averages represent background levels that vary with latitude but not longitude. I think that cold water in clouds is absorbing the CO2 and transporting it to the upper atmosphere and the poles. This process moderates the measured concentration near the surface and gives the appearance of “well mixed”.
Also, it can explain the higher concentrations in the upper atmosphere in the mid latitudes. The equator is the source and the cold waters near the poles are the sinks.

Wilson Flood
June 4, 2012 1:15 pm

Looks a lot like the growth curve for human population. Try fitting one on top of the other. Good fit eh? Not surprising. We all make our little contribution.

June 4, 2012 1:25 pm

Bart:

You are wrong. Ridiculously, uproariously, hilariously so. Admit it, and move on, and people will think better of you.

You’re the one who was originally looking at dS_CO2/dt versus HADSST and now I’m wrong.
LMAO. Sure.

FerdiEgb
June 4, 2012 1:34 pm

Well, I have worked out the simplest formula of all to mimic the CO2 increase in the atmosphere. It is just a start, based on yearly averages for temperature and CO2 levels, so it may need fine-tuning to monthly values, and the coefficients need fine-tuning too.
Here is the simple formula:
CO2(new) = CO2(old) + 0.55*emissions + 4*dT
That holds for any change in temperature, any period or any amount of emissions (the latter pure coincidence for the past 110 years…). Even ice ages and interglacials, but for longer periods the factor 4 for dT increases to a factor 8 (you know, that 800 year lag via the deep oceans…).
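As a sketch, the update rule above is a one-line recursion. The coefficients are the ones stated in the comment; the starting level is the ~290 ppmv pre-industrial value mentioned further down, and the yearly emission (in ppmv equivalents) and temperature-change values are made up purely for illustration.

```python
def co2_step(co2_old, emissions_ppmv, dT, k_emis=0.55, k_temp=4.0):
    """One yearly update: CO2(new) = CO2(old) + 0.55*emissions + 4*dT."""
    return co2_old + k_emis * emissions_ppmv + k_temp * dT

co2 = 290.0  # ~pre-industrial baseline from the discussion, ppmv
for emis, dT in [(1.0, 0.01), (1.2, -0.02), (1.5, 0.03)]:  # invented years
    co2 = co2_step(co2, emis, dT)
```

Note the structural contrast with Bart's model: here temperature enters only through its year-to-year change dT, so a sustained warm anomaly adds CO2 once rather than continuously.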
Here the plots:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_der_temp.jpg
and
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_temp_atm.jpg
Discussion: The first term is essentially the integral of Bart’s first formula:
dC/dt = (Co – C)/tau1 + k1*H
where k1 = 1 and H = emissions, C is the current CO2 level, Co the pre-industrial level at ~290 ppmv, the latter influenced by temperature, at a rate of about 8 ppmv/°C.
As the current difference is over 100 ppmv, a small shift in Co has little effect on Co – C, and that change can be neglected for the current period. tau1 is large, so that only about half of H is removed, despite the large Co – C difference.
Co in my opinion is simply directly correlated with absolute temperature and is not anomaly-dependent. That is one difference from what Bart made. The other is that, except for the shift in Co, the influence of temperature on CO2 levels is constrained to temperature differences, with a finite amount and time duration. That means that for fast changes (1-3 years; ocean surface, vegetation), the influence is about 4 ppmv/°C temperature difference, while for longer time spans that increases to 8 ppmv/°C.
The essential difference from Bart’s solution is that, except for the “equilibrium” setpoint, the influences of the emissions and of temperature on CO2 levels are completely independent of each other: the temperature influence is mainly visible in the variability of the increase rate around the trend, while the influence of the emissions is mainly visible in the average height of the increase rate and in the trend itself.

Bart
June 4, 2012 1:45 pm

FerdiEgb says:
June 4, 2012 at 1:34 pm
“The essential difference with Bart’s solution is that…” Ferdinand’s solution does not match the derivative of CO2 to the temperature anomaly, as is clearly indicated by the data. Hence, Ferdinand’s model fails to reflect the real world.

FerdiEgb
June 4, 2012 2:02 pm

Myrrh says:
June 4, 2012 at 12:41 pm
AIRS data found that; the pictures they showed downplayed what they actually said in their conclusion – that to their astonishment carbon dioxide was not at all well mixed, but lumpy, and so couldn’t be playing any major role in ‘global warming’
The CO2 data from AIRS looks lumpy, because their scale is only +/- 4 ppmv. If you see a variability of 2% of full scale, while about 20% of all CO2 goes in and out the atmosphere over the seasons, then I call that well-mixed. How much difference do you think that it makes for global warming (as far as there is) if you have 396 ppmv or 404 ppmv? It is the 100+ ppmv increase which may make the difference…
About Keeling: I have the highest respect for him. He was only interested in better CO2 measurements and devoted all his life to that one item, including the invention of new measurement methods of unprecedented accuracy allowing continuous measurements. Read his autobiography for what obstructions he overcame to continue the measurements at Mauna Loa and other stations against the administrations:
http://scrippsco2.ucsd.edu/publications/keeling_autobiography.pdf
I have not the slightest interest in conspiracy theories that the CO2 data are manipulated in any way. I have checked them from raw voltage data to what is openly archived. There is no manipulation. Or how can you convince the hundreds of people involved, from some 70 baseline stations (+400 others over land), from different countries and different institutions, to collectively and continuously lie about the data? Even the pensioners, that they still keep quiet about such a huge scientific scandal?
Please, it is not because you don’t like the data that they must be proven false at all cost; you only disprove yourself as a valid opponent on other items where the other side is not on such firm ground…

FerdiEgb
June 4, 2012 2:30 pm

Bart says:
June 4, 2012 at 1:45 pm
Ferdinand’s solution does not match the derivative of CO2 to the temperature anomaly, as is clearly indicated by the data. Hence, Ferdinand’s model fails to reflect the real world.

Peanuts. If I were to use the monthly values like you did, that would show a better match, but I can return the favor: your temperature influence doesn’t match the CO2 trend as well as the emissions do:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_co2_1900_2004.jpg
where a temperature change of half the scale gives a 5 ppmv change in CO2 levels, yet the whole scale is supposed to give an 80 ppmv increase?
And:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
a near perfect match…
Thus we may agree that the temperature variation is a real-world perfect match for the variability in the increase rate, while the emissions are a real-world perfect match for the trend itself. But I think it is of more interest to know the cause of the trend than the cause of the variability of the increase rate…
BTW, I am still interested in your back-calculation to 1900 and over the LIA. For the latter you may use any reconstruction, except Mann’s HS, for obvious reasons…
And has a process already been found which can deliver 70 ppmv of CO2 from a continuously elevated temperature of a few tenths of a degree C?

June 4, 2012 2:49 pm

Bart and Ferdinand,
I submit my statistical model is a better fit than both and better satisfies mass balance. It includes both anthropogenic and natural sources. http://www.retiredresearcher.wordpress.com.

June 4, 2012 3:30 pm

To Ferdinand Engelbeen /
Since there have been a number of recent posts/threads on CO2 and the Carbon cycle here at WUWT, I have been wondering when you might show up!
Re your comment @ 6/3 – 2:10 a.m.: You state in part, “The net result over very long periods is that an increase of 1 deg. C in ocean temperature gives some 8 ppmv increase in CO2. Thus the ~ 1 deg. C warming since the LIA gives at maximum 8 ppmv increase of CO2. But we see an increase of over 100 ppmv since the start of the industrial revolution…” I have always been impressed by this argument (which you often make), which turns on the observed (or better, reconstructed) glacial-interglacial temperature vs. CO2 relation. But I am somewhat puzzled by the 1 deg. C = 8 ppmv CO2 premise, for in that case either (1) the resulting shifts in temperature increase and CO2 increase are significantly smaller than those reconstructed from ice cores, or (2) granted that the glacial-interglacial temp. increases at the poles were greater than the estimated globally averaged increases [5-6 deg. C], then during the G-IG transitions CO2 was not at all “well mixed.” There may be something packed into the “ocean temperature” qualification you make, but even at or near the poles, I doubt that ocean temperatures increased nearly as much as ice cap temperatures. They were significantly warmer at glacial maxima to begin with. So, same problem. Does my concern make any sense?

Bart
June 4, 2012 3:54 pm

FerdiEgb says:
June 4, 2012 at 2:30 pm
“…your temperature influence doesn’t match the CO2 trend as good as the emissions:”
Sure it does. You forgot to integrate, since it is the CO2 rate of change which is proportional to temperature.
fhhaynie says:
June 4, 2012 at 2:49 pm
I will have to look it over more carefully when I have a moment to spare. However, I do not think I agree with this plot. As I believe CO2 is essentially controlled by temperature, I do not see “controls” making much difference.

Bart
June 4, 2012 3:56 pm

…which is proportional to temperature anomaly.

Lance Wallace
June 4, 2012 4:15 pm

I expect this post is well past its natural lifetime. One last comment. Integrating the exponential provides an estimate of the total amount of CO2 contributed to the globe since the rise began (around 1720 as predicted by the fit). By 1958, the total increase was 37,000 ppm-months. Conveniently enough, the super-El Nino of 1998 was almost exactly one half-life further on, and the total was 74,000. Now the total is 94,000. That is, since the temperature flattened out 14 years ago, the atmospheric levels have increased by the addition of 21% of all the CO2 added to it in the last 200 years. Yet the temperature is not cooperating. Where is the temp-CO2 correlation now?

Lance Wallace
June 4, 2012 4:19 pm

Well, actually the increase was 27% of the total amount added by 1998, or 21% of the total to date.
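Lance’s running totals can be reproduced, approximately, from the exponential fit described in the head post (260 ppm baseline, 59-year e-folding time, 1723 start year). The scale factor below is an assumption chosen so the curve passes through ~315 ppm in 1958; the resulting numbers are illustrative, not the actual fit:

```python
import math

# Parameters of the exponential fit from the head post
BASELINE = 260.0  # ppm, pre-industrial background
TAU = 59.0        # years, e-folding time of the excess CO2
T0 = 1723.0       # "start year" of the exponential

# Assumed scale factor: curve passes through ~315 ppm in 1958
B = (315.0 - BASELINE) / math.exp((1958.0 - T0) / TAU)

def cumulative_ppm_months(year):
    """Integral of the excess CO2 (above 260 ppm) from 1723 to `year`,
    in ppm-months: 12 * B * TAU * (exp((year - T0)/TAU) - 1)."""
    ppm_years = B * TAU * (math.exp((year - T0) / TAU) - 1.0)
    return 12.0 * ppm_years

for y in (1958, 1998, 2012):
    print(y, round(cumulative_ppm_months(y)))
```

This gives roughly 38,000 / 76,000 / 97,000 ppm-months, within a few percent of the 37,000 / 74,000 / 94,000 figures quoted above, and reproduces the doubling of the total between 1958 and the 1998 El Niño (one 41-year half-life later).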

June 4, 2012 5:32 pm

Bart,
The ice core data indicate that global temperatures fluctuate in several cycles of varying length. That 200-year cycle observed in the CO2 data could well be controlled by a 200-year temperature cycle with an unknown lag time. Some of the shorter, less significant cycles are very likely temperature controlled. A cycle of around 20 years is possibly related to ENSO.

Editor
June 4, 2012 6:05 pm

P. Solar says:
June 4, 2012 at 12:24 am (emphasis mine)

Willis :

“In reality, nothing in nature continues to grow exponentially.”

Your point about many curves matching such a short segment is fair, but the whole AGW debate is all about a change that is NOT natural: economic growth (and hence fossil fuel usage) has been growing exponentially since the 60′s (about 2% per year).
So I don’t see why you suggest a sigmoid, which would correspond to constant emissions. No sign of that happening in the near future. 🙁

Not even close, P. Solar, not even close.

Source
Note that per capita emissions have been dropping faster than total emissions, and that the population growth has been slowing down, and that population is due to stabilize by around 2050 …
Please, people, do your homework. It saves you from a host of miseries, not the least of which is people publicly correcting your errors.
w.

Reply to  Willis Eschenbach
June 5, 2012 6:34 am

Willis,
Your input/output model should produce the best fit. Basically, it is what Ferdinand uses. The problem is that it does not separate natural from anthropogenic emissions. Ferdinand assumes that natural net input/output is in “dynamic equilibrium”, thus the net accumulation is all anthropogenic. Bart says that the natural changes in the input/output are so much greater than anthropogenic emissions that they make little difference. My analysis indicates that, at present anthropogenic emission rates, they are statistically significant, but account for less than 10% of the accumulation. http://www.retiredresearcher.wordpress.com.

gnomish
June 4, 2012 6:57 pm

http://www.nist.gov/data/PDFfiles/jpcrd427.pdf
Solubility of CO2 in water.
Something I find quite odd: they rely on models a whole lot.
Why would they do that when actual experiment is so easy and gives actual data?

Editor
June 4, 2012 7:05 pm

Lance Wallace says:
June 4, 2012 at 3:20 am

Willis Eschenbach says:
June 3, 2012 at 11:41 pm

“I hate to say it, but this analysis is meaningless. You can’t just fit a curve to something and extend it, that’s the kind of thing that the AGW alarmists do.”

Ouch! You really know how to hurt a guy, Willis. I’m hardly defending what I did in a lighthearted way for an hour or two a couple of days ago; it was just that the fit resulted in a rather good estimate of both the rough time of the beginning of the rise (some 200 years ago) and the rough level of the background CO2 level (about 260 ppm).

My apologies, Lance, I was commenting on the analysis. The problem is that bad science done by skeptics diminishes the reputation of the blog as well as the reputation of all skeptics. In addition, I get busted all the time for not commenting on skeptical papers, since most of my analyses are of AGW supporting papers. So I try to come down equally hard on both sides.
The problem is that the rise in CO2 is just a bit off of linear, just slightly curved. As a result, you can fit a whole host of curves to it. If you do the statistics, you’ll see that the difference in how good the fit is tends to be very small. As a result, we have absolutely no information that would allow us to pick one curve over the other.
If you want to fit a curve, my suggestion is that you fit a curve with some physical meaning to it. The amount of CO2 remaining in the air can be very closely modeled by a sink which sequesters a few percent of the atmospheric excess amount each year.
For example, you can use the time-step equation
A(t) = E(t) + 0.968 * A(t-1)
where A(t) is the amount of the emissions remaining in the atmosphere in year t, E(t) is emissions in year t, and A(t-1) is the amount remaining in the atmosphere in year t-1.
To convert from ppmv to gigatonnes of carbon (GtC), multiply by 2.1838. This assumes a pre-industrial CO2 level in 1850. I’ve put an Excel spreadsheet up here to show how it can be done.
Using that, you can get a pretty good read on what will happen under various future scenarios, you put in the emissions, it will tell you the airborne CO2 concentration. I’ve included an example of freezing emissions at the 2005 level, you can put in what you want.
All the best,
w.
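Willis’s time-step model is easy to reproduce in a few lines. This sketch is not his spreadsheet; the constant-emissions scenario and the 9 GtC/yr figure are made-up placeholders for illustration:

```python
# Willis's sink model: A(t) = E(t) + 0.968 * A(t-1), i.e. nature
# sequesters 3.2% of the atmospheric excess each year.
PPMV_PER_GTC = 1.0 / 2.1838  # ppmv of excess per GtC emitted (his factor)

def airborne_excess(emissions_gtc, retain=0.968):
    """Atmospheric excess above pre-industrial, in ppmv, year by year."""
    excess, prev = [], 0.0
    for e in emissions_gtc:
        prev = e * PPMV_PER_GTC + retain * prev
        excess.append(prev)
    return excess

# Hypothetical scenario: emissions frozen at 9 GtC/yr for 300 years.
trajectory = airborne_excess([9.0] * 300)
# Constant emissions give a plateau, not endless growth: the excess
# approaches E * (1 ppmv / 2.1838 GtC) / (1 - 0.968), about 129 ppmv.
print(round(trajectory[-1], 1))
```

The plateau is the quantitative version of Willis’s sigmoid remark: with frozen emissions, this model’s CO2 levels off rather than growing without bound.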

Allan MacRae
June 4, 2012 7:38 pm

Allan MacRae says: June 4, 2012 at 4:05 am
First, you totally miss the point of the urban CO2 readings – it’s about Ferdinand’s mass balance argument, which fails not only on a seasonal basis but even on a daily basis, imo.
FerdiEgb says: June 4, 2012 at 6:26 am
The mass balance must always be obeyed, no matter what happens where. But it is only calculable on a yearly basis, as we only have yearly inventories of the emissions. Urban readings anyway are irrelevant for the mass balance, as are all readings in the lowest few hundred meters above land. Those represent only 5% of the air mass, where the CO2 is not well mixed due to a lot of local sources and sinks. In the rest of the global air mass, the yearly averaged measurements are all within 2 ppmv for the same hemisphere and 5 ppmv between the hemispheres, where the SH lags the NH but the trends are exactly the same:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends_1995_2004.jpg
_____________
You are missing the point, Ferdinand. The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature. If your premise were correct, you would see CO2 peaks at breakfast and supper times and the proximate (in time) morning and evening rush hours, when power demand and urban driving are at their maxima. This human signature is absent in the SLC data, and yet the natural signature is clearly apparent and predominant.
Similarly, in the AIRS animation I posted earlier, there is NO human signature and the power of nature is clearly evident. Here it is again.
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4
These are huge natural DYNAMIC systems that are apparently NOT impacted by the relatively small human contribution. Sadly, Nature apparently just ignores our humanmade CO2 emissions, as irritating as that must be for you.
I know you have made up your mind on this point Ferdinand, and nothing will shake your belief. Try watching the George Carlin video again – George gets it. 🙂

Allan MacRae
June 4, 2012 8:00 pm

Here are a few more references on C13/12:
There are many more such references out there, Ferdinand – but no doubt you think Murry Salby is totally out of his depth too, just like Roy Spencer.
http://wattsupwiththat.com/2012/04/19/what-you-mean-we-arent-controlling-the-climate/
http://wattsupwiththat.com/2011/08/05/the-emily-litella-moment-for-climate-science-and-co2/
Here is your dilemma Ferdinand:
CO2 lags temperature at all measured time scales, and yet you insist that CO2 drives temperature.
You may be right and I may be wrong, but please explain to me again how the future can cause the past.

Brian H
June 5, 2012 1:50 am

P. Solar;
Both those images work fine for me. But you misspelled “inferred” on the title line(s).
>:)

June 5, 2012 1:51 am

Bart says:
June 4, 2012 at 3:54 pm
Sure it does. You forgot to integrate, since it is the CO2 rate of change which is proportional to temperature.

And that is the problem:
The rate of change indeed is proportional to the temperature (change). But temperature nearly completely explains the variation in the rate of change, not the whole rate of change.
Where it goes wrong is that, by scaling and offsetting the temperature, you attribute the total of the rate of change to temperature, including the part that is introduced by the scaling and offset. But there is nothing that allows you to attribute the bulk of the rate of change to temperature, as even if you detrend the whole bunch (and the integral is essentially zero), the correlation between temperature (change) and the variation in the rate of change remains the same.
See:
http://esrl.noaa.gov/gmd/co2conference/pdfs/tans.pdf
page 14 and following,
Thus in my informed opinion, the bulk of the rate of change is caused by human emissions (at about twice the rate of change) while the variability of the rate of change is caused by temperature changes, (near) completely independent of each other.
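Ferdinand’s detrending argument can be illustrated with purely synthetic series; nothing below is real CO2 or temperature data, and the trend and wiggle amplitudes are arbitrary assumptions. The point is that the wiggle-matching correlation survives detrending almost unchanged, so it carries no information about the cause of the trend itself:

```python
import math, random

random.seed(0)
n = 646  # months, roughly the length of the Mauna Loa record

# Synthetic temperature anomaly: pure wiggles, no trend (an assumption).
temp = [math.sin(2 * math.pi * i / 45) + 0.3 * random.gauss(0, 1)
        for i in range(n)]

# Synthetic CO2 rate of change: a slow "emissions" trend plus
# temperature-driven wiggles.
rate = [0.005 * i + 0.8 * t for i, t in zip(range(n), temp)]

def detrend(x):
    """Remove the least-squares straight line from x."""
    m = len(x)
    tbar = (m - 1) / 2
    xbar = sum(x) / m
    slope = (sum((i - tbar) * (v - xbar) for i, v in enumerate(x))
             / sum((i - tbar) ** 2 for i in range(m)))
    return [v - xbar - slope * (i - tbar) for i, v in enumerate(x)]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# The wiggle correlation is essentially the same with or without the
# trend, even though the trend here comes entirely from "emissions".
print(round(corr(rate, temp), 3), round(corr(detrend(rate), temp), 3))
```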

FerdiEgb
June 5, 2012 3:23 am

Leigh B. Kelley says:
June 4, 2012 at 3:30 pm
The 8 ppmv/°C is based on the Vostok ice core, recently confirmed by the 800 kyr Dome C ice core:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/Vostok_trends.gif
The bulk of the variability around the trend is caused by the variable lag of CO2 vs. temperature: about 800 years during a glacial-interglacial transition, but several thousands of years during the opposite transition (therefore more deviation at the upper side than at the lower side). I didn’t compensate for the lags, which should have given an even better fit.
CO2 is already well mixed within a few years, thus as Vostok is a mixture of (about 600) years, that is no problem. The temperature is a proxy: either hydrogen (dD) or oxygen (d18O) isotopes are used. The origin of the isotope changes is mainly in the sea water surface temperature of where the water vapour of the clouds/snow/ice of the core originated and partly the temperature at the condensation place and the freezing pace. For more coastal ice cores like Law Dome, the bulk of the vapour originates from the nearby Southern Ocean, while for the high altitude, inland ice cores like Vostok and Dome C, most originates from a wide area all over the SH. Thus in general, the temperature proxy of Vostok reflects the whole SH oceans…
There may be some problems if the NH showed a different behaviour, but besides some shifts in the start and other episodes of the glacial/interglacial events, the NH behaves quite similarly to the SH.
Another confirmation of the around 8 ppmv/°C is in Law Dome:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_1000yr.jpg
There is an approx. 6 ppmv drop in CO2 around 1600, at the coldest part of the LIA. Law Dome cannot be used as a global temperature proxy (it is regional, but see the current discussion at ClimateAudit), so we need to compare that to one of the “spagetthy” reconstructions of global temperature over the past 1,000 years. I compared it to those with the highest MWP-LIA difference (Mann’s HS has the lowest difference), like Esper, Moberg,… which show a change of ~8°C over the time span of interest, which brings us again to around 8 ppmv/°C.
There are several other proxy ranges for the CO2/temperature ratio; the full range is, if I remember well, some 4-20 ppmv/°C. But it seems to me that the ice cores in this case give the best, at least hemispheric, answer.
A constraint on the upper bound is the change introduced by ocean warming: any warming of the ocean surface (including the upper 700 m, the “mixed layer”) gives, according to Henry’s Law, an increase of about 16 microatm in the pCO2 at the surface per °C. Thus an increase of ~16 ppmv in the atmosphere is sufficient to compensate for that. But as on the other side vegetation works harder at higher temperatures (and increased precipitation), the average increase would be lower when everything is again in dynamic equilibrium…
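Ferdinand’s ~16 microatm figure can be cross-checked against the commonly cited ~4.23 %/°C temperature sensitivity of seawater pCO2 (an empirical value from the oceanographic literature, introduced here as an assumption rather than taken from the comment):

```python
import math

# Empirical relative sensitivity of seawater pCO2 to temperature:
# d(ln pCO2)/dT ~ 0.0423 per deg C (assumed literature value).
SENS = 0.0423

def pco2_after_warming(pco2_ref, dt):
    """Surface-water pCO2 (microatm) after warming by dt deg C."""
    return pco2_ref * math.exp(SENS * dt)

# At ~380 microatm, 1 deg C of warming raises surface pCO2 by roughly
# 16 microatm, matching the Henry's-Law constraint quoted above.
print(round(pco2_after_warming(380.0, 1.0) - 380.0, 1))
```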

Allan MacRae
June 5, 2012 5:15 am

Ferdinand Engelbeen says: June 5, 2012 at 1:51 am
“Thus in my informed opinion, the bulk of the rate of change is caused by human emissions (at about twice the rate of change) while the variability of the rate of change is caused by temperature changes, (near) completely independent of each other.”
____________
Comment to Bart:
It is possible that Ferdinand is correct and I am wrong (but I really doubt that).
Ferdinand is correct in that the short-cycle derivative dCO2/dt apparently does not significantly impact the trend of the overall CO2 versus temperature relationship – it just explains the “wiggles” in that trend.
The question is what primarily causes what – does atmospheric CO2 drive temperature or does temperature drive CO2? Do current humanmade CO2 emissions significantly increase atmospheric CO2, or are they “lost in the noise” of the much larger dynamic natural system?
My contention is, adapting Ferdinand’s wording:
“The bulk of the rate of change is NOT caused by human emissions, BUT IS A RESULT OF ONE OR MORE LONGER-TIME NATURAL TEMPERATURE CHANGE CYCLES, CONSISTENT WITH the SHORT-TIME-CYCLE variability of the rate of change THAT IS ALSO caused by temperature changes.”
I prefer my hypo because
1. My hypo is more consistent with Occam’s Razor – whereas Ferdinand’s hypo requires opposing trend directions at different time scales in the system, mine does not, such that all trends are consistently in the same direction (temperature drives CO2) at all time scales.
2. My hypo is consistent with the fact that CO2 lags temperature at all measured time scales, from an ~800 year lag on the longer time cycle as evidenced in ice cores, to a ~9 month lag on the shorter time cycle as evidenced by satellite data.
3. I have yet to see evidence of a major human signature in actual CO2 measurements, from the aforementioned AIRS animations to urban CO2 readings (although I expect there are local data that I have not seen that do show urban CO2 impacts, particularly in winter and locally in industrialized China).
The impacts on humanity of these two opposing hypotheses are significant:
If Ferdinand’s hypo is correct, we will likely see a little more global warming – not the catastrophic warming of the IPCC scenarios (driven by ridiculously high “climate sensitivity” and positive feedback assumptions), but a modest warming that will actually be (net) beneficial to humanity and the environment, imo.
If I am correct in my overall assessment (and not just this hypo), we are likely to see some global cooling, which may be moderate or severe. Historically, humanity has done very poorly during periods of severe global cooling. For example, many millions starved in Northern countries circa 1700, during the depths of the Little Ice Age and the Maunder Minimum.
The implication of the current obsession with global warming mania is that, ironically, society will be unprepared should a period of global cooling occur.
During the last period of global cooling, tens of thousands of innocent people were burned as witches, in many cases because they were accused of causing the cold weather that devastated crops and resulted in widespread starvation.
If there is another period of severe global cooling, I would not like to be one of the many climate scientists who has profited from stoking the fires of global warming hysteria.

Joachim Seifert
Reply to  Allan MacRae
June 5, 2012 10:37 am

Allan, you are right, 100%….. My new paper on 4 long-term global
warming/cooling mechanisms will show it over a 10,000-year
time frame…. It is all just a matter of a few more months… JS

richard verney
June 5, 2012 5:54 am

Shyguy says:
June 3, 2012 at 12:26 am
Looks to me like the CO2 records got corrupted just like everything else the IPCC gets its hands on.
Dr. Tim Ball explaining:
http://drtimball.com/2012/pre-industrial-and-current-co2-levels-deliberately-corrupted/
///////////////////////////////////////////////////////////
I am sceptical of the reasons given for ignoring this old experimental data and what it tells us about 19th and early 20th century CO2 levels.
It would be interesting to repeat those old experiments using the same location, same time of year, same equipment and same methodology etc and see what results are achieved today.

Brian H
June 5, 2012 6:02 am

FerdiEgb says:
June 5, 2012 at 3:23 am
“spagetthy”

Not even close. Spaghetti.

FerdiEgb
June 5, 2012 6:57 am

Leigh B. Kelley says:
June 4, 2012 at 3:30 pm
Sorry, a mistake on the MWP-LIA difference: it was 0.8°C in several reconstructions, not 8°C – lucky for our ancestors (and us)…

FerdiEgb
June 5, 2012 7:53 am

Allan MacRae says:
June 4, 2012 at 7:38 pm
You are missing the point, Ferdinand. The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature.
It depends where and when you measure… Mauna Loa had problems with local traffic CO2 when that increased over the years, until they banned all traffic there. And have a look at the data from Diekirch (Luxembourg), in a shielded valley with forests + urban areas + small factories:
http://meteo.lcd.lu/papers/co2_patterns/co2_patterns.html
See especially Fig. 12 for the differences between Sunday and weekday peaks during rush hour…
Of course the human signal is small (about 3%) compared to the diurnal and seasonal fluxes. But that is only important if the natural fluxes are out of balance and add or subtract some net amount of CO2. Well, it is proven that nature as a whole subtracts CO2 from the atmosphere: each year about half of what humans emit. Thus momentary measurements near huge sources and sinks don’t tell you what happens in the total atmosphere, but a lot of stations, airplane and ship surveys, and nowadays AIRS in the “well mixed” atmosphere do.
To make a comparison:
You have a fountain where the water is pumped out of a basin and recirculates back into the basin. The fountain has a computerised valve system which regulates the height of the fountain from 60% to 100% on a regular basis. The maximum flow over the fountain is 1000 liters per minute. Now someone opens a small supply into the main water flow at 10 liters per minute; he goes away on another job and forgets that he was adding water.
The extra supply is only 1% of the maximum flux. That is practically unmeasurable in the huge 40% change of the main water flow. But will we have an overflow of the basin sooner or later, or not?
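The fountain analogy can be put in a toy simulation. The basin size is an assumed number; the essential point is that the recirculating flow, however variable, returns to the basin and cancels out, so only the small one-way supply changes the inventory:

```python
def basin_level(minutes, supply_lpm=10.0, start_liters=10000.0):
    """Liters in the basin over time. The 600-1000 L/min fountain
    flow leaves and re-enters the basin each minute, so its net
    contribution is zero; only the forgotten supply accumulates."""
    level, history = start_liters, []
    for _ in range(minutes):
        level += supply_lpm
        history.append(level)
    return history

# After one day the basin has gained 14,400 liters no matter how
# wildly the recirculating flow varied; overflow is only a matter
# of time.
print(basin_level(24 * 60)[-1] - 10000.0)  # prints 14400.0
```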
Further, I missed the end-of-April discussion, as I was travelling through Western Australia, but I discussed intensively in the Salby thread last year; read my comments there again…
http://wattsupwiththat.com/2011/08/05/the-emily-litella-moment-for-climate-science-and-co2/
Here is your dilemma Ferdinand:
CO2 lags temperature at all measured time scales, and yet you insist that CO2 drives temperature.

Well, it was true until 1850 that temperature dictated the CO2 levels with different lag times, but since 1850 the CO2 levels have been increasing far beyond what the temperature shows. At the current temperature, CO2 levels should be 290-300 ppmv, but the counts are ticking up and we have nearly reached 400 ppmv already. Thus at this moment CO2 is leading by 100+ ppmv… Still, temperature swings cause (a few months) lagged swings in the CO2 rate of change, but those are swings around the trend, not the trend itself.
Thus at this moment the CO2 levels lead the temperature to a far extent. Whether that will have a huge impact on temperature is an entirely different question. My opinion is that the impact will be small (around 1°C), hardly a problem and mainly beneficial. But my fear is that opposing every single bit of what climate research shows, even when based on solid evidence, works counterproductively for where the skeptics are right.

Bart
June 5, 2012 9:07 am

Allan MacRae says:
June 4, 2012 at 7:38 pm
No use, Allan. I have demonstrated in excruciating mathematical detail that Ferdinand’s “mass balance” argument is completely bogus. It made no dent in his armor.
Ferdinand Engelbeen says:
June 5, 2012 at 1:51 am
“…the correlation between temperature (change) and rate of change variation remains the same.”
You cannot arbitrarily detrend the data. The slope of the temperature is what produces the curvature in the accumulated CO2, and it matches exactly. That leaves no room for a significant human influence.
Allan MacRae says:
June 5, 2012 at 5:15 am
“It is possible that Ferdinand is correct and I am wrong (but I really doubt that).”
I have addressed this issue at several points in this thread, e.g., here, and here, and here.
fhhaynie says:
June 5, 2012 at 6:34 am
“My analysis indicates that, at present anthropogenic emission rates, they are statistically significant, but account for less than 10% of the accumulation.”
My estimate is between 4% and 6%.

Bart
June 5, 2012 9:13 am

“I have addressed this issue at several points in this thread…”
First link should have been here.

Bart
June 5, 2012 9:15 am

Gah! Here.

June 5, 2012 9:17 am

Ferdinand @ 6/5 – 3:23 a.m.
Thank you for your reply. After rereading my post of yesterday, I was appalled. It seems that after years of relative isolation in rural Montana, my writing style has devolved into an amalgam of Kant (in the Critique of Pure Reason) and Henry James (in Wings of the Dove), with the piercing clarity of Hegel thrown into the mix!
Some of my puzzlement remains. Using the first graph you linked to, we have a range of ~187 ppmv CO2 to ~292 ppmv CO2 and ~ -9.5 deg. C to +3 deg. C for temp. These give a total dCO2 of 105 ppmv and a total dT of 12.5 deg. C. Using your hypothesized 8 ppmv CO2 per 1 deg. C, we have 105 ppmv / 8 ppmv = 13.125 deg. C dT, reasonably close to the ~12.5 deg. C I eyeballed from your graph. Here is my problem. Whether the temperature change is referred to the “nearby Southern Ocean” (presumably at very high latitude) or to the entire SH (mostly to the oceans), and whether it is referred to the change in SST or to the entire 0-700 m mixing layer, this sort of temp. change for/in the ocean seems confoundingly large (at least to me). I note that you qualify the processes contributing to the ice core(s) dT by adding where the condensation takes place and what the freezing pace is, but still… I got similarly high dT’s using the 8 ppmv per 1 deg. C formula for other G-IG transition dCO2 increases ranging from 180-300 ppmv to 200-260 ppmv for dCO2. It seems that there has to be a lot to this “place of condensation” and “freezing pace” business. Please throw me a line!

FerdiEgb
June 5, 2012 9:30 am

richard verney says:
June 5, 2012 at 5:54 am
I am sceptical of the reasons justifying the ignoring of this old experimental data and what it tells us of 19th and early 20th century CO2 levels.
There were two problems with the old data. First, the methods: some were really bad (accuracy +/- 150 ppmv, but intended for measuring CO2 in exhaled air, still OK with such accuracy); the better ones were +/- 10 ppmv, good enough for measuring averages, though even the seasonal swings are hard to detect at that accuracy.
The main problem is where the measurements were taken: some series were from town centres, some within forests, some at the coasts, some on seagoing ships. It may be clear that taking one or a few samples per day at a place where the diurnal variation may be a few hundred ppmv is not really representative of the CO2 levels of that time in the bulk of the atmosphere…
The measurements that were made on ships and at the coast, with wind from the sea, all lie around the ice core values.
The modern method, invented by C.D. Keeling and used since 1958 at the South Pole and Mauna Loa, has an accuracy of +/- 0.2 ppmv and is fully continuous. Calibration happens each hour to maintain that accuracy.
It happens that we have some interesting data from a modern station near Giessen, Germany, near where the longest historical series in the period 1939-1941 was taken. The old data show a variability of 68 ppmv (1 sigma). Have a look at the diurnal variation from the modern station near Giessen, compared to baseline stations (Mauna Loa, Barrow and the South Pole), all raw data:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
The historical sampling in Giessen was 3 times a day, of which the morning and evening samplings were on the flanks of the highest changes…
Thus, unfortunately, most of the historical measurements can’t be used to know the real background CO2 levels of that time.

Bart
June 5, 2012 10:43 am

This keyboard is cursed. The first link where I explained why the derivative information is dispositive was here.

June 5, 2012 10:45 am

Friends:
This thread has become an ‘angels on a pin’ discussion with various participants each asserting that their model of carbon cycle behaviour is right (so everybody else is wrong).
I remind that I began my post on this thread at June 3, 2012 at 2:31 pm by saying

The important point is that the dynamics of the seasonal variation in atmospheric CO2 concentration indicate that the natural sequestration processes can easily sequester ALL the CO2 emission (n.b. both natural and anthropogenic), but they don’t: about 3% of the emissions are not sequestered. Nobody knows why not all the emissions are sequestered. And at the existing state of knowledge of the carbon cycle, nobody can know why all the emissions are not sequestered. But that is the issue which needs to be resolved.

I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?
Nobody has addressed that question although Allan MacRae (at June 4, 2012 at 7:38 pm) touches on it when he writes

The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature. …
Similarly, in the AIRS animation I posted earlier, there is NO human signature and the power of nature is clearly evident. …
These are huge natural DYNAMIC systems that are apparently NOT impacted by the relatively small human contribution. …

Exactly so.
But that takes us back to my question.
It rephrases my question as being,
Why does the system behave as it does when it could easily sequester all emissions and when it does sequester the local anthropogenic emissions?
At June 5, 2012 at 5:15 am, Allan MacRae asserts that the answer is that the observed increase in atmospheric CO2 concentration is a delayed response to global temperature rise over the past century.
I admit that I think he is right, but I point out that the change could be due in part or in whole to the anthropogenic emission. I explain this as follows.
The carbon system may be adjusting to a new equilibrium in response to a change such as the temperature rise, the anthropogenic emission, a combination of those two effects, and/or something else.
The rate constants of some processes of the carbon system are very slow so they take years or decades to adjust. Hence, any change causes the system to adjust towards a new equilibrium which it never reaches because the system again changes before the new equilibrium is attained.

As I said in my post at June 3, 2012 at 2:31 pm and repeated in my post at June 4, 2012 at 5:05 am, using that assumption

there are several models of the carbon cycle which each assumes a different mechanism dominates the carbon cycle and they each fit the Mauna Loa data. We published 6 such models with 3 of them assuming an anthropogenic cause and the other 3 assuming a natural cause of the rise in CO2 indicated by the Mauna Loa data: they all fit the Mauna Loa data.

ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005)
The real issue is that the Mauna Loa data is little different from a straight line relationship with time. Hence, almost any model with two or more variables can be tuned to match the Mauna Loa data to within the measurement error (n.b. to a perfect fit to each datum).
Hence, arguments which amount to “My model works so it must be right” get nowhere: a wide variety of models ‘work’.
I again repeat, the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?
Richard

June 5, 2012 10:56 am

Cause and effect suggests that CO2 is driving temperature change, and what we’re looking at here is predominately the “fast response” of climate to changes in CO2.
I’m guessing that Bart never tested the lag when he made this claim:

I already pointed this out, in detail, on another thread where many things were discussed including the ability of his own model to fit variable contributions of CO_2 to the final growth in concentration without losing the correlation between temperature fluctuations and the derivative of CO_2 concentration (which I demonstrated numerically and posted the code for). Richard Courtney also commented that he has successfully fit multiple models to the CO_2 data within the error bars on the data, making it difficult to use pure “agreement with the data” to resolve differences between models.
However, Bart had an answer for the “lead/lag” problem in that discussion. I haven’t (I admit) gone all the way back to the raw data to investigate the claim, but he alleges that the order inconsistency is due to the fact that his curve uses 24 month running averages, so (one presumes) certain fluctuations can cause dCO_2/dt to run up before T. However, one would have to look at the raw data to see if in fact this is what is happening, and I have not done so. It could equally well be the case that the raw dCO_2/dt data is leading the T data; statistically, this is the most likely possibility simply because of the very sharpness of the correspondence, the very thing he argues for to make his case. The fluctuations match shape at a derivative granularity of the minimum time step size, or very nearly so, making it unlikely (in my own opinion) that this is an artifact of smoothing. But I am not sure, and unless/until you recompute the running averages on different timescales you will not be sure either. I’m too busy to do this, but WoodForTrees does make it pretty easy to do, so play through.
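The recompute-the-averages test suggested here can be sketched as follows. The data are synthetic (a noisy oscillation with dCO_2/dt constructed to lag T by 9 months), not the WoodForTrees series; the point is only that a centered running mean at 12 or 24 months should not reverse a genuine lead/lag.

```python
# Sketch: does smoothing flip a lead/lag? Synthetic T and dCO2/dt series,
# with the CO2 derivative built to lag temperature by 9 months.
import numpy as np

rng = np.random.default_rng(0)
n = 600  # months of synthetic data
T = np.sin(2 * np.pi * np.arange(n) / 44.0) + 0.1 * rng.standard_normal(n)
dco2 = 0.8 * np.roll(T, 9) + 0.1 * rng.standard_normal(n)  # lags T by 9 months

def smooth(x, w):
    """Centered running mean of width w (w=1 leaves the series untouched)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def best_lag(a, b, max_lag=24):
    """Lag (months) maximizing correlation; positive means b lags a."""
    def corr(k):
        if k >= 0:
            return np.corrcoef(a[: n - k], b[k:])[0, 1]
        return np.corrcoef(a[-k:], b[: n + k])[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr)

# The recovered lag stays positive (T leads) at every smoothing level,
# because a centered symmetric window shifts both series identically.
results = {w: best_lag(smooth(T, w), smooth(dco2, w)) for w in (1, 12, 24)}
```

A one-sided (trailing) average, by contrast, delays both series equally and so also preserves the relative lag; spurious leads would have to come from asymmetric treatment of the two series or from edge effects.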
Second, because sometimes dCO_2/dt leads T, and sometimes T leads dCO_2/dt (or they are closely synchronized) it is also important to bear in mind that inferring causality from correlation is weak in both directions. A better explanation is that both of them are being driven by a third variable, with a differential and somewhat random lag. For example, global atmospheric temperatures and fluctuations in CO_2 concentration could both be driven by global SSTs, or even local SSTs in only e.g. the south central pacific. Or by variations in cloud based albedo. Or by seasonal business and heating energy consumption in east asia. Or by space aliens, who are heating the Earth with an invisible heat ray that uses dark energy beamed from a secret base on the Moon to cause global warming with the intent of making us extinct in a couple of hundred years (no hurry, their colonization party is en route and the trip will take them a half century or more).
But exotic speculations aside, I think the fairest response so far is Richard’s. There are many models, with very different presumed underlying causal mechanisms, that CAN fit the CO_2 data. Some of those models make the bulk contribution come from anthropogenic sources — and work well enough to describe the data within error bars. Some models (like Bart’s) make the bulk contribution come from temperature-dependent shifts in e.g. chemical equilibrium in global sources and sinks that regulate the base atmospheric CO_2 concentration almost completely independent of what humans contribute.
In both cases there is some weak evidence confounding the simple explanations, but we simply lack objective, model-assumption-free measurements from real data of the presumed processes that would permit us to conclusively favor any model over all the others (and in a complex system like the planet, we may never obtain the needed information, because the true model may NOT be simple, or simplifiable: it may involve solving a doubly coupled nonlinear Navier-Stokes problem on a global scale with intimate coupling of source and sink chemistry to local non-Markovian state, so that all of the fluctuation and variation we see is the “accidental” correspondence of a complex nonlinear system with one of the many variables that drive it, which a momentary shift to a different Poincare cycle will then confound). We might just lack the long-time-scale data (with a sampling density and precision sufficient to be useful) to resolve the question, and might not HAVE it for a century or more of “modern” precision measurements at a still finer granularity than the network of SST buoys or weather stations permits at present.
Given this, it is entirely plausible that Bart is correct. It is entirely (but somewhat less) plausible that he is incorrect. It is silly to argue that he must be right or must be wrong, because this is yet another variant of the eternal “correlation is causality” argument and is known to be scientifically and logically invalid, easily confounded by both accidental correspondence (which happens) and by additional variables with completely distinct causality that control both correlated variables. Smoking does not cause teen pregnancy, in spite of the fact that one can show a positive correlation between teens that smoke and teens that participate in activity that does cause teen pregnancy. We can never resolve the argument with better or worse matches in the correlation, though — we have to appeal to a considerable amount of completely independently derived science and measurement and raw sociology to understand what really (most probably) causes the observed correlation — teen hormones that encourage risk taking and social rebellion and uninformed participation in experimental sexual behavior simultaneously.
rgb

June 5, 2012 11:10 am

The real issue is that the Mauna Loa data is little different from a straight line relationship with time. Hence, almost any model with two or more variables can be tuned to match the Mauna Loa data to within the measurement error (n.b. to a perfect fit to each datum).
Damn skippy, Richard. Although I would have just asserted monotonic nonlinear relationship with time, so a near-infinity of nonlinear models with 1-3 parameters can fit it to within annualized fluctuations, and it is then VERY trivial to superpose any of those primary models with a secondary model that explains only the fluctuations including correspondence in the derivative of CO_2 and temperature!
Given such a short and monotonic baseline behavior, we understand almost nothing on the basis of mere numerical correspondence. Nor are any of the physical arguments or models particularly convincing — they depend way too much on the prior beliefs and biases of the arguer, with little to no way to falsify any of them using the data alone. What would help would be some sign of significant non-monotonic variation at the current monotonic scale (that is, not teensy annualized noisy fluctuations) that can only be predicted or hindcast with a subset of those models, but so far, that isn’t visible in the Mauna Loa data, especially not after they throw part of it away on the basis of (biased) arguments to produce a “cooked” product. The raw data, including the data that they throw away because of the direction the wind blows etc, might tell a different story because accepting or rejecting any part of the data according to external considerations forces the conclusion away from anything that omitted data might confound.
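The point made here and by Richard, that structurally different few-parameter models fit a smooth monotonic series about equally well, can be illustrated with synthetic data. The coefficients and the 260 ppm baseline below are illustrative stand-ins, not fitted to the actual Mauna Loa record.

```python
# Sketch: a quadratic and an exponential-above-baseline model both fit the
# same smooth, monotonic, CO2-like series to within the "measurement" noise.
import numpy as np

t = np.arange(0, 54, 1 / 12)  # years since the start of the record, monthly
rng = np.random.default_rng(1)
co2 = 315 + 0.8 * t + 0.012 * t**2 + 0.2 * rng.standard_normal(t.size)  # ppm

# Model 1: quadratic in time (3 parameters)
quad = np.polyval(np.polyfit(t, co2, 2), t)

# Model 2: exponential above an assumed pre-industrial baseline (3 parameters),
# fitted as a straight line in log(co2 - baseline)
baseline = 260.0
slope, intercept = np.polyfit(t, np.log(co2 - baseline), 1)
expo = baseline + np.exp(intercept + slope * t)

rms_quad = float(np.sqrt(np.mean((co2 - quad) ** 2)))
rms_expo = float(np.sqrt(np.mean((co2 - expo) ** 2)))
# Both residuals are fractions of a ppm: the data alone cannot discriminate.
```

Over such a short, gently curving baseline the log of the excess is nearly linear in time, so the exponential is practically indistinguishable from the quadratic, which is exactly why "my model fits" carries so little weight here.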
It isn’t worth going through the sins against the logic of statistical analysis routinely committed in climate science, but they are manifold and mortal.
rgb

June 5, 2012 11:10 am

Ferdinand:
re. your post at June 5, 2012 at 9:30 am
The ‘background’ CO2 level is meaningless. An IR photon interacting with a CO2 molecule does not ‘know’ if the molecule is ‘background’ or not. So, the total number of CO2 molecules in the atmosphere is all that matters, especially when CO2 is “well mixed” in the air.
At issue is to determine how the total amount of CO2 in the atmosphere has varied with time. Local measurements can do that for their localities. We use Mauna Loa as a proxy for the global total but, in principle, anywhere would do.
Richard

June 5, 2012 11:38 am

Robert Brown:
Please forgive my ignorance of your local idiom. What does “Damn skippy” mean, please?
And, while I am asking, I take this opportunity to say I agree everything else you said in that post (I did understand that).
Thanking you in anticipation.
Richard

Bart
June 5, 2012 11:52 am

Robert Brown says:
June 5, 2012 at 10:56 am
“However, one would have to look at the raw data to see if in fact this is what is happening, and I have not done so.
Do so. Here, I take the smoothing level down to 12 months (you have to average out the yearly variation). You’ve still got a 6 month advance because of the WoodForTrees centering of the average, but your spurious leads have vanished.
“Second, because sometimes dCO_2/dt leads T, and sometimes T leads dCO_2/dt (or they are closely synchronized) it is also important to bear in mind that inferring causality from correlation is weak in both directions.”
No, it isn’t. The variables of interest are the temperature and the total CO2, not the derivative. A change in temperature shows up in overall CO2 concentration at a later time. The temperature leads.
richardscourtney says:
June 5, 2012 at 10:45 am
“Hence, almost any model with two or more variables can be tuned to match the Mauna Loa data to within the measurement error (n.b. to a perfect fit to each datum).”
No, not perfect. The word perfect has a very specific meaning. Only within the arbitrary bounds you have set as a threshold.
“Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can? “
They very nearly do. If the system were in equilibrium, they would.

June 5, 2012 11:56 am

Please forgive my ignorance of your local idiom. What does “Damn skippy” mean, please?
It means “yes, I most emphatically agree”. I have no idea where it comes from, but of course Google has some ideas and at least one definition.
http://en.wiktionary.org/wiki/damn_skippy
So you could have replied to me with “Damn skippy to your damn skippy!” yourself instead of the more sedate “I agree (with) everything else you said in that post”:-).
rgb

Gail Combs
June 5, 2012 12:18 pm

richardscourtney says: @ June 5, 2012 at 10:45 am
…I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?….
_______________________________________
SWAG.
#1. CO2 is not evenly distributed throughout the atmosphere.
#2. CO2 has not remained more or less constant for eons but fluctuates a great deal more than the Warmists want us to know.
#3 There is competition between C3 and C4 plants. C3 plants do not take CO2 down below around 300 ppm based on the open field wheat study. If there is abundant CO2 the C3 plants have an advantage over the C4 plants and crowd them out. (SWAG)
#4 WIND – Trade winds and the jet streams. We have seen a change in the trades (El Nino, La Nina and a change in the location of the jets)
#5 The temperature has stopped rising and may be falling (see Beck’s comment about the 1941 blip)
In 2000, the mean annual air-sea flux for CO2: http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/image/annfluxgmm2u2windmap.jpg
Models again but they mention that the air-sea gas transfer rate is a function of WIND SPEED.

…The net air-sea CO2 flux is estimated using the sea-air pCO2 difference and the air-sea gas transfer rate that is parameterized as a function of (wind speed)2 with a scaling factor of 0.26. This is estimated by inverting the bomb Carbon-14 data using Ocean General Circulation models and the 1979-2005 NCEP-DOE AMIP-II Reanalysis (R-2) wind speed data. The equatorial Pacific (14°N-14°S) is the major source for atmospheric CO2, emitting about +0.48 Pg-C / yr, and the temperate oceans between 14° and 50° in the both hemispheres are the major sink zones with an uptake flux of -0.70 Pg-C/yr for the northern and –1.05 Pg-C/yr for the southern zone. The high latitude North Atlantic, including the Nordic Seas and portion of the Arctic Sea, is the most intense CO2 sink area on the basis of per unit area, with a mean of –2.5 tons-C / month / km^2 (1 Ton = 10^6 grams). This is due to the combination of the low pCO2 in seawater and high gas exchange rates. In the ice-free zone of the Southern Ocean (50°S-62°S), the mean annual flux is small (-0.06 Pg-C/yr) because of a cancellation of the summer uptake CO2 flux with the winter release of CO2 caused by deepwater upwelling. The annual mean for the contemporary net CO2 uptake flux over the global oceans is estimated to be -1.4 ± 0.7 Pg-C/yr. Taking the pre-industrial steady state ocean source of 0.4 ± 0.2 Pg-C/yr into account, the total ocean uptake flux including the anthropogenic CO2 is estimated to be –2.0 ± 0.7 Pg-C/yr in 2000….. http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/pages/air_sea_flux_2000.html
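The parameterization quoted above, a gas transfer rate proportional to (wind speed)^2 times the sea-air pCO2 difference, can be written out as a one-liner. The units here are purely illustrative; only the 0.26 scaling factor is taken from the quoted description.

```python
# Sketch of the quoted air-sea flux parameterization: flux scales with the
# square of wind speed times the sea-air pCO2 difference. Illustrative units.
def co2_flux(u_wind, dpco2, scale=0.26):
    """Net sea-to-air CO2 flux; positive = outgassing, negative = uptake."""
    return scale * u_wind**2 * dpco2

# Doubling the wind speed quadruples the transfer at a fixed pCO2 difference:
ratio = co2_flux(10.0, 1.0) / co2_flux(5.0, 1.0)  # 4.0
```

The quadratic wind dependence is why Gail's point #4 matters: a shift in trade winds changes the air-sea exchange rate even with sea-surface pCO2 unchanged.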

From the Air Vent, a comment by Ernest Beck:

Ernst Beck said @ March 8, 2010 at 4:35 pm
#38 Hans Erren
Hans Erren is arguing like the AGW alarmists: ad hominem.
I have documented the whole set of CO2 measurements since 1800. Thanks to Louis Hissink for giving me the chance to publish it first. This was in 2006. But meanwhile I am several steps ahead. My website http://www.realCO2.de presents the status quo. I have investigated >90 000 single values at 901 stations sampled by >80 scientists using well known methods (see my website) under controlled conditions throughout the world. The new data set contains real background measurements e.g. 1893 and 1935 in the upper atmosphere. These represent the local CO2 levels at those stations. Using modern vertical CO2 profile characteristics I was able to establish new methods to calculate the annual means of the background levels from near-ground measurements within an error range of about 1-3 %.
Applying these methods to the historical series, we obtain a historical curve of annual MBL (marine boundary layer) levels since 1826 valid for the whole world.
My new paper, which will be published in 2010, will present this research. Now we can compare the historical data with the modern data (e.g. Mauna Loa) and it shows a large peak around 1942. The time lag after SST is 1 year, showing very high correlation, as Schneider et al. had found in West Antarctica, with a temperature peak of about 8 °C in 1941 (El Nino).
reference see here: http://www.pnas.org/content/105/34/12154.abstract.
It’s not as Erren says: I do not have to declare anything. The data are published. They have to contradict me, not in a blog but in a paper.

various references:
Carbon starvation in glacial trees recovered from the La Brea tar pits, southern California
http://www.co2science.org/subject/b/summaries/biodivc3vsc4.php
plant response to CO2: http://i32.tinypic.com/nwix4x.png
CO2: stomata http://www.geocraft.com/WVFossils/stomata.html
CO2 Aquittal: http://www.scribd.com/doc/31652921/CO2-Acquittal-by-Jeffrey-A-Glassman-PhD
ON WHY CO2 IS KNOWN NOT TO HAVE ACCUMULATED IN THE ATMOSPHERE &
WHAT IS HAPPENING WITH CO2 IN THE MODERN ERA: http://www.rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html#more
Satellite Data:
http://www.jaxa.jp/press/2009/10/20091030_ibuki_e.html
http://chiefio.wordpress.com/2011/10/31/japanese-satellites-say-3rd-world-owes-co2-reparations-to-the-west/
CO2 Flux Estimated from Air-Sea Difference in CO2 Partial Pressure: http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/pages/air_sea_flux_2000.html

June 5, 2012 12:21 pm

Robert Brown and Bart:
Robert, thank you. As you say, I should have Googled it. Sorry.
Bart, I strongly object to your saying

No, not perfect. The word perfect has a very specific meaning. Only within the arbitrary bounds you have set as a threshold.

The fit was perfect in that each datum from each model matches each corresponding datum in the Mauna Loa data set to within the measurement error. That IS a perfect fit. And the measurement error is not “arbitrary bounds” I made. I told you its value (Ferdinand has said the same above) and I provided you with a link to the Mauna Loa Lab.’s own explanation of how they derive it.
I do NOT make arbitrary choices (but I sometimes make mistaken ones). In this case I used the only valid “threshold”.
Richard

Bart
June 5, 2012 12:27 pm

richardscourtney says:
June 5, 2012 at 10:45 am
“Only within the arbitrary bounds you have set as a threshold.”
The measurement error is a wideband process. The “signal” you are looking for is at low frequency. As a result, you can apply a low pass filter to remove high frequency noise, revealing the low frequency signal hiding in it. This is especially the case for numerically differentiated data, as the differentiation process amplifies noise at high frequency.
In general, there is a particularly strong yearly signal you want to remove. A twelve month average will do this, though it does not have particularly good passband characteristics (gain falls off from unity at dc fairly rapidly). A better filter can be designed using standard packages, but that requires specialized knowledge.
A succession of 12 month averages really clobbers noise in a derivative, as you can see in this plot.
Now, in that plot, you will see that I divided the temperature series up into segments, because there appears to be a step change around 1990. In the analogous model I suggested above, this could result from a step change in the variable To, the equilibrium temperature. This suggests that those arguing above on this thread for deep ocean upwelling as the source of the temperature differential needed to drive the levels of CO2 to their current levels may be on the right track. Around 1990 or so, there might have been a sudden shift in the thermodynamic state of the upwelling water.
On the other hand, the Hadley SST here may not be very precise, or have been subject to various “adjustments” which are not readily available to us.
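The passband behavior Bart describes has a closed form: an N-point moving average has gain |sin(pi f N) / (N sin(pi f))| at frequency f (here in cycles per month). A sketch of the three cases he mentions:

```python
# Frequency response of an N-point moving average: unity at DC, an exact null
# at the seasonal cycle (and its harmonics), and a slow roll-off in between.
import math

def ma_gain(f, n=12):
    """Gain of an n-point moving average at frequency f (cycles per month)."""
    if f == 0:
        return 1.0
    return abs(math.sin(math.pi * f * n) / (n * math.sin(math.pi * f)))

g_dc = ma_gain(0.0)          # 1.0: the long-term trend passes through
g_year = ma_gain(1 / 12)     # essentially zero: the seasonal cycle is nulled
g_decade = ma_gain(1 / 120)  # a 10-year cycle is already slightly attenuated
```

This makes Bart's two claims concrete: the 12-month average removes the yearly signal exactly, but its gain falls away from unity well inside the band of interest, which is why a purpose-designed low-pass filter would do better.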

June 5, 2012 12:34 pm

Bart:
Your response to my post is a travesty. It quotes your words as being mine when those were the very words I had explained were a lie. It then waffles on with complete misunderstanding of what measurement error indicates, when I have previously explained this basic science to you in a previous thread.
I see no point in answering you further whatever you post unless it is to refute another misquotation of me.
Richard

June 5, 2012 12:46 pm

Richard, “Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can? “
The annual cycle in the Arctic is a clue. The cold water and biological activity could be considered a great sink, limited only by how fast the CO2 is transferred to the water surface. During much of the year most of the surface is covered with ice. As the ice closes the sink drain in the fall, the CO2 concentration rises as CO2 continues to be delivered from the south. The concentration reaches a maximum around the middle of February, when most of the ocean is covered with ice. It reaches a minimum when the area of exposed cold water is at a maximum. Also, the partial pressure difference between air and water is a factor (the transfer rate is a function of concentration). Another factor to consider is the length of time any extra absorbed CO2 takes to get back to the equator, where it is re-emitted into the atmosphere. Think about the life and death cycle of phytoplankton.

FerdiEgb
June 5, 2012 2:31 pm

Bart says:
June 5, 2012 at 9:07 am
You cannot arbitrarily detrend the data. The slope of the temperature is what produces the curvature in the accumulated CO2, and it matches exactly. That leaves no room for a significant human influence.
It is not only about the slope of the temperature, it is mainly about the offset. That is what “matches” the accumulated CO2, but both are simply chosen to match the CO2 data when integrated. That is curve fitting, which in this case is easely fitted, because the underlying trend in the data is quite linear and the temperature variability matches the variability in
CO2 increase rate, because that is a clear cause-effect relationship.
Where it goes wrong is that the slope and the offset are fitted as well (or even better) by the emissions, which are twice as high as the observed rate of change. Thus your slope and offset are completely arbitrary and can be replaced by 0-55% of the emissions; the upper bound leaves only a little room for the influence of temperature.
Indeed we have had many discussions on this topic, but besides the mass balance argument, the main problem with your theory is that you made the human emissions and the temperature too interdependent, while the influence of both on CO2 levels is (near) completely independent of each other. Therefore there is no need for a rapid sequestration of human (or any other) CO2, which anyway is not what is observed.
But again, there is a simple test of whether your formula works: use the same coefficient and offset for the past periods (1900-1960 and LIA-1900).
And still I am waiting for any knowledge of a physical process that delivers 70 ppmv CO2 over 50 years only from a continuous small elevated CO2 level…

June 5, 2012 2:41 pm

fhhaynie:
Thank you for your post at June 5, 2012 at 12:46 pm.
Yes, but none of that is quantified. In fact almost nothing in the carbon cycle is.
I again post the processes which we considered most important.
Short-term processes
1. Consumption of CO2 by photosynthesis that takes place in green plants on land. CO2 from the air and water from the soil are coupled to form carbohydrates. Oxygen is liberated. This process takes place mostly in spring and summer. A rough distinction can be made:
1a. The formation of leaves that are short lived (less than a year).
1b. The formation of tree branches and trunks, that are long lived (decades).
2. Production of CO2 by the metabolism of animals, and by the decomposition of vegetable matter by micro-organisms including those in the intestines of animals, whereby oxygen is consumed and water and CO2 (and some carbon monoxide and methane that will eventually be oxidised to CO2) are liberated. Again distinctions can be made:
2a. The decomposition of leaves, that takes place in autumn and continues well into the next winter, spring and summer.
2b. The decomposition of branches, trunks, etc. that typically has a delay of some decades after their formation.
2c. The metabolism of animals that goes on throughout the year.
3. Consumption of CO2 by absorption in cold ocean waters. Part of this is consumed by marine vegetation through photosynthesis.
4. Production of CO2 by desorption from warm ocean waters. Part of this may be the result of decomposition of organic debris.
5. Circulation of ocean waters from warm to cold zones, and vice versa, thus promoting processes 3 and 4.
Longer-term process
6. Formation of peat from dead leaves and branches (eventually leading to lignite and coal).
7. Erosion of silicate rocks, whereby carbonates are formed and silica is liberated.
8. Precipitation of calcium carbonate in the ocean, that sinks to the bottom, together with formation of corals and shells.
Natural processes that add CO2 to the system:
9. Production of CO2 from volcanoes (by eruption and gas leakage).
10. Natural forest fires, coal seam fires and peat fires.
Anthropogenic processes that add CO2 to the system:
11. Production of CO2 by burning of vegetation (“biomass”).
12. Production of CO2 by burning of fossil fuels (and by lime kilns).
Several of these processes are rate dependent and several of them interact.
At higher air temperatures, the rates of processes 1, 2, 4 and 5 will increase and the rate of process 3 will decrease. Process 1 is strongly dependent on temperature, so its rate will vary strongly (maybe by a factor of 10) throughout the changing seasons.
The rates of processes 1, 3 and 4 are dependent on the CO2 concentration in the atmosphere. The rates of processes 1 and 3 will increase with higher CO2 concentration, but the rate of process 4 will decrease.
The rate of process 1 has a complicated dependence on the atmospheric CO2 concentration. At higher concentrations at first there will be an increase that will probably be less than linear (with an “order” <1). But after some time, when more vegetation (more biomass) has been formed, the capacity for photosynthesis will have increased, resulting in a progressive increase of the consumption rate.
Processes 1 to 5 are obviously coupled by mass balances. Our paper assessed the steady-state situation to be an oversimplification because there are two factors that will never be “steady”:
I. The removal of CO2 from the system, or its addition to the system.
II. External factors that are not constant and may influence the process rates, such as varying solar activity.
Modeling this system is difficult because so little is known concerning the rate equations. However, some things can be stated from the empirical data.
At present the yearly increase of the anthropogenic emissions is approximately 0.1 GtC/year. The natural fluctuation of the excess consumption (i.e. consumption processes 1 and 3 minus production processes 2 and 4) is at least 6 ppmv (which corresponds to 12 GtC) in 4 months. This is more than 100 times the yearly increase of human production, which strongly suggests that the dynamics of the natural processes here listed 1-5 can cope easily with the human production of CO2. A serious disruption of the system may be expected when the rate of increase of the anthropogenic emissions becomes larger than the natural variations of CO2. But the above data indicates this is not possible.
The accumulation rate of CO2 in the atmosphere (~1.5 ppmv/year which corresponds to ~3 GtC/year) is equal to almost half the human emission (~6.5 GtC/year). However, this does not mean that half the human emission accumulates in the atmosphere, as is often stated. There are several other and much larger CO2 flows in and out of the atmosphere. The total CO2 flow into the atmosphere is at least 156.5 GtC/year with 150 GtC/year of this being from natural origin and 6.5 GtC/year from human origin. So, on the average, ~3/156.5 = ~2% of all emissions accumulate.
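The flux arithmetic in the paragraph above, spelled out with the figures quoted there:

```python
# The accumulation arithmetic from the text, using the quoted figures (GtC/year).
human = 6.5          # anthropogenic emission
natural = 150.0      # natural emission
accumulation = 3.0   # ~1.5 ppmv/year retained in the atmosphere

total_in = natural + human             # 156.5 GtC/year entering the atmosphere
frac_of_all = accumulation / total_in  # ~0.019: ~2% of ALL emissions accumulate
frac_of_human = accumulation / human   # ~0.46: the misleading "almost half" figure
```

The contrast between the last two numbers is the whole point: comparing the accumulation only to the human emission makes it look like half is retained, while against the total flux it is about 2%.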
The above qualitative considerations suggest the carbon cycle cannot be very sensitive to relatively small disturbances such as the present anthropogenic emissions of CO2. However, the system could be quite sensitive to temperature. So, our paper considered how the carbon cycle would be disturbed if – for some reason – the temperature of the atmosphere were to rise, as it almost certainly did between 1880 and 1940 (there was an estimated average rise of ~0.5 °C in average surface temperature).
Clearly, much more needs to be known if we are to answer my question.
And, as you point out, ocean circulation is but one of several variables additional to those we considered.
Richard

Allan MacRae
June 5, 2012 2:46 pm

Allan MacRae says: June 5, 2012 at 5:15 am ADDENDUM – ADDED POINT 4. BELOW.
The question is what primarily causes what – does atmospheric CO2 drive temperature or does temperature drive CO2? Do current human-made CO2 emissions significantly increase atmospheric CO2, or are they “lost in the noise” of the much larger dynamic natural system?
My contention is, adapting Ferdinand’s wording:
“ the bulk of the rate of change is NOT caused by human emissions, BUT IS A RESULT OF ONE OR MORE LONGER-TIME NATURAL TEMPERATURE CHANGE CYCLES, CONSISTENT WITH the SHORT-TIME-CYCLE variability of the rate of change THAT IS ALSO caused by temperature changes.”
I prefer my hypo because
1. My hypo is more consistent with Occam’s Razor: whereas Ferdinand’s hypo requires opposing trend directions at different time scales in the system, mine does not; all trends are consistently in the same direction (temperature drives CO2) at all time scales.
2. My hypo is consistent with the fact that CO2 lags temperature at all measured time scales, from an ~800 year lag on the longer time cycle as evidenced in ice cores, to a ~9 month lag on the shorter time cycle as evidenced by satellite data.
3. I have yet to see evidence of a major human signature in actual CO2 measurements, from the aforementioned AIRS animations to urban CO2 readings ( although I expect there are local data that I have not seen that do show urban CO2 impacts, particularly in winter and locally in industrialized China.)
[new point 4]
4. My hypo is more consistent with the Uniformitarian Principle.
Richard, I awoke very early this morning and have had little sleep – accordingly, I may tackle your excellent question in more detail later.
My high-risk, sleep deprived response is that Jan Veizer probably has it mostly right in his landmark 2005 GSA Today paper.
In my own words, the CO2 cycle “piggybacks” on the water cycle, and is a huge, DYNAMIC, DISPERSED (global in area) and HETEROGENEOUS system that is condemned to chase equilibrium in time and space into eternity.
Obviously, I need some sleep.
Best personal regards, Allan

FerdiEgb
June 5, 2012 2:50 pm

Robert Brown says:
June 5, 2012 at 11:10 am
The raw data, including the data that they throw away because of the direction the wind blows etc, might tell a different story because accepting or rejecting any part of the data according to external considerations forces the conclusion away from anything that omitted data might confound.
Look for yourself if throwing out some of the data at MLO or other stations has any effect on the trend and/or slope of the “cleaned” data. For four baseline stations, the raw (hourly averaged) data are available at:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
I have plotted the raw data and the “cleaned” averages from MLO and SPO here:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
But mind the CO2 scale!

FerdiEgb
June 5, 2012 3:04 pm

richardscourtney says:
June 5, 2012 at 11:10 am
The ‘background’ CO2 level is meaningless. An IR photon interacting with a CO2 molecule does not ‘know’ if the molecule is ‘background’ or not. So, the total number of CO2 molecules in the atmosphere is all that matters, especially when CO2 is “well mixed” in the air.
The influence of CO2 on radiation effects is over the whole air column. Over the oceans, the CO2 levels are rather uniform up to 20 km height. Over land you can have enormous differences in the first few hundred meters, due to huge local sources and sinks. But if you look at the radiation, even if you have 1,000 ppmv in the first 1,000 meters, its effect would be minimal.
Thus measurements in the first few hundred meters over land shouldn’t be used, neither for possible local effects (which are negligible) nor for trends, although on average the trends may resemble what happens in the bulk of the atmosphere.

June 5, 2012 3:58 pm

Ferdinand:
Please read the final paragraph in my post you answered. You seem to have missed the point of my post in your answer.
Richard

Bart
June 5, 2012 4:17 pm

richardscourtney says:
June 5, 2012 at 12:34 pm
I quoted words so you would know where in the conversation I was picking up. Anyone following the thread knows who said what.
And, you are just plain wrong about the measurements. I have tried to explain basic filtering theory to you, but you just plug your ears and shout “nah, nah, nah!”
Your sheltered arrogance is astounding. You are denying an entire field of research from such giants as Fisher and Kalman, and its ubiquitous application. How do I get through to you to do even a modicum of research, or just try what I have explained you need to do?
FerdiEgb says:
June 5, 2012 at 2:31 pm
“It is not only about the slope of the temperature, it is mainly about the offset.”
No, it is not. I have explained this at length. I am tired of explaining.

Bart
June 5, 2012 4:38 pm

richardscourtney says:
June 5, 2012 at 12:21 pm
“The fit was perfect in that each datum from each model matches each corresponding datum in the Mauna Loa data set to within the measurement error. That IS a perfect fit.”
No, it is not perfect. Perfect means PERFECT.
The measurement error is distributed in frequency, much of it in the high frequency region in which we are not interested. That error can be filtered out.
Here is a very simple example. Suppose I tell you a signal is known to be a constant plus uncorrelated zero mean noise with standard deviation of S. By your logic, I can never determine the constant value to less than +/-S.
But, if I take N samples and average them, my estimate will have an uncertainty of +/- S/sqrt(N). As the number of points N goes to infinity, I asymptotically approach perfect (really perfect, not just pretend perfect) knowledge of the constant.
A constant is very low frequency, uncorrelated noise is evenly distributed across all frequencies, and an average is a low pass filter. It’s the same general deal.
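The S/sqrt(N) claim above is easy to check numerically. A toy sketch (the constant and noise level are invented values, and this is a plain average, not any particular filter):

```python
import random
import statistics

rng = random.Random(42)
S = 1.0          # per-sample noise standard deviation
TRUE_C = 390.0   # the unknown constant (a hypothetical CO2-like level)

def averaged_estimate(n):
    """Average n noisy samples of the constant."""
    return statistics.fmean(TRUE_C + rng.gauss(0.0, S) for _ in range(n))

# Empirical spread of the averaged estimate over many trials:
# it shrinks roughly as S/sqrt(n).
spreads = {}
for n in (10, 100, 1000):
    errors = [averaged_estimate(n) - TRUE_C for _ in range(400)]
    spreads[n] = statistics.stdev(errors)
    print(n, round(spreads[n], 3))
```

The printed spreads come out near 1/sqrt(10), 1/sqrt(100), and 1/sqrt(1000), i.e. the estimate tightens well below the per-sample error bar, which is the whole point of the argument.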
richardscourtney says:
June 5, 2012 at 12:34 pm
“I see no point in answering you further whatever you post unless it is to refute another misquotation of me.”
In that case, richardscourtney said above, and I quote, “I enjoy molesting puppies.”
Now, you have to respond 😉

Bart
June 5, 2012 4:55 pm

FerdiEgb says:
June 5, 2012 at 2:31 pm
“But again, there is a simple proof that your formula does or doesn’t work if you use the same coefficient and offset for the past periods (1900-1960 and LIA-1900).”
A) we do not have reliable measurements from those times
B) different operating conditions mean different parameters. This is typical of linearized approximations of nonlinear systems – they only hold in a local neighborhood of the time in which the linearization is performed. Right now, in the modern era, these parameters hold, and they explain the last several decades of atmospheric CO2 concentration and rule out significant human contribution to it.
“And still I am waiting for any knowledge of a physical process that delivers 70 ppmv CO2 over 50 years only from a continuous small elevated [temperature] level…”
Deep ocean upwelling, as I and others have commented.

Gail Combs
June 5, 2012 4:58 pm

Shyguy says:
June 3, 2012 at 12:26 am
Looks to me like the CO2 records got corrupted just like everything else the IPCC gets its hands on….
___________________________
All I had to do was read what Mauna Loa says about the “selection process” at Mauna Loa Observatory:

4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur.
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html

’Nough said.
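For concreteness, the quoted “outlier rejection” step can be sketched like this (a simplified stand-in: the real NOAA procedure fits a curve to each day’s preliminary means, while here the “curve” is just a flat mean, and the hourly values are invented):

```python
import statistics

def reject_outliers(hourly, n_sigma=2.0):
    """Iteratively drop points more than n_sigma standard deviations from
    the mean of the surviving points, until no more rejections occur."""
    kept = list(hourly)
    while True:
        mu = statistics.fmean(kept)
        sd = statistics.pstdev(kept)
        survivors = [x for x in kept if abs(x - mu) <= n_sigma * sd]
        if len(survivors) == len(kept):
            return kept
        kept = survivors

# One day of invented hourly averages (ppm): a steady background plus
# two contaminated hours (e.g. upslope wind).
day = [389.1, 389.2, 389.0, 389.2, 389.1, 395.8, 389.2, 389.0, 401.4, 389.1]
clean = reject_outliers(day)
```

Note how the iteration peels off the spikes one pass at a time: removing the largest outlier shrinks the standard deviation, which then exposes the smaller spike as an outlier on the next pass.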

Gail Combs
June 5, 2012 5:17 pm

Lucy Skywalker says: @ June 3, 2012 at 1:52 am
….Now think. CO2 lags temperature by 800 years, according to Caillon et al. What happened 800 years ago?? Anyone?? And what cycle takes 800 years to happen?? Anyone??
___________________________________
Here is the Graph guys ( “Correction to: A 2000-Year Global Temperature Reconstruction Based on Non-Tree Ring Proxies,” ~ Loehle and McCulloch (2008))
http://www.econ.ohio-state.edu/jhm/AGW/Loehle/

June 5, 2012 6:16 pm

I have plotted the raw data and the “cleaned” averages from MLO and SPO here:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
But mind the CO2 scale!

Now that’s very interesting. They both look reasonable (raw to cooked), but there is impressive variability at MLO compared to SPO (not to mention significantly higher baseline values). In fact, in the SPO data you can’t tell the difference between raw and cooked, whereas at MLO the cooked data looks like it (if anything) underestimates the true variance, especially on the low side. But this makes the correlation between even smoothed CO_2 and global temperature more suspicious. MLO is clearly not particularly representative of the entire globe (given two samples with completely different fluctuation properties). As one might expect, it suggests that we would be far better served by an entire globe-spanning set of CO_2 concentration monitoring stations than by “just one”, sitting on an active volcano that is used as if it were representative of the entire atmosphere at 4200 meters above sea level at all temperatures and latitudes.
Just a thought.
rgb

Reply to  Robert Brown
June 6, 2012 10:15 am

Robert,
I have been statistically analyzing global CO2 and 13CO2 data (monthly averages, raw flask, and continuous instrument) for years. You should expect significant differences between Mauna Loa and the South Pole. The Arctic ocean is a big sink with a seasonally adjusted drain valve, while the circumpolar current sink that travels around an Antarctic elevated land mass (that is neither source nor sink) is never closed. I think this difference results in the amplitude of the seasonal cycle increasing with latitude (but not longitude) in the NH and not nearly so in the SH. The Scripps column 10 monthly averages not only factor out these seasonal differences but also exclude observed spikes in the raw flask data. I have just completed a global, “background”, statistical model for CO2 and the 13CO2 index. I think this needs to be refined and published. I haven’t written anything for peer review publication since I retired from EPA 21 years ago. I live in Cary. If you could find a smart graduate student with good math and programming skills who is interested, I would gladly share what I have done and work with them (no charge). You can judge my analytical approach at http://www.retiredresearcher.wordpress.com.
PS. Of all those that comment here, I rank you as being nearest to “scientifically correct”.

Allan MacRae
June 5, 2012 6:17 pm

Repeating my above statement, to correct those who repeatedly insist on misrepresenting my position, either through illiteracy or malice:
Allan MacRae says: June 3, 2012 at 8:44 am
“For the record, I have no problem with CO2 measurement accuracy. The CO2 measurements at Barrow, Mauna Loa, the South Pole and many other sites correlate well and make sense.”
This observation does NOT require that CO2 is always well-mixed. It is clear that much time and effort is devoted to taking many CO2 measurements and rejecting many “outliers”.
The AIRS animation proves the point. Repeating it, yet again:
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

June 5, 2012 6:27 pm

’Nough said.
Indeed. You really do have to wonder if anybody in the world understands statistics and why rejecting outliers in an unknown distribution is insane. But then, locating “the” CO_2 observatory for the world on an active volcano is insane. Having just one (or just five, or just ten) for the world is insane. From a statistical point of view.
It is not impossible that 100% of the Mauna Loa increasing CO_2 “signal” is due to a steady, occult increase in CO_2 outgassing from volcanic processes within Mauna Loa itself and the surrounding islands. I don’t suggest that this is the most likely/plausible explanation, only that the only way one could check is with an observatory on top of Mount Everest, another on Kilimanjaro, and ten thousand (or a hundred thousand) more moored on weather balloons at 20,000 feet in some sort of regular grid covering the planetary surface. Or by performing some very complex and dubious geophysical research (since even if you excluded ML itself, there would be outgassing from volcanism on the surrounding Pacific floor to consider, and still more confounding factors). Expecting MLO to generalize to “the Earth” is a bit egregious.
rgb

Editor
June 5, 2012 6:40 pm

richardscourtney says:
June 5, 2012 at 10:45 am

… I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?

Thanks, Richard. It seems to me that the sequestration processes, following Le Chatelier’s Principle, will push the system back towards equilibrium. How hard they push, however, is a function of how far the atmospheric levels are from equilibrium. As a result, I would not expect them to sequester all the emissions, and I am puzzled why you think that they would, could, or should sequester it all …
What am I missing?
w.

Editor
June 5, 2012 6:49 pm

Robert Brown says:
June 5, 2012 at 6:16 pm

As one might expect, it suggests that we would be far better served by an entire globe-spanning set of CO_2 concentration monitoring stations than by “just one”, sitting on an active volcano that is used as if it were representative of the entire atmosphere at 4200 meters above sea level at all temperatures and latitudes.

Actually, we do have a bunch of stations, one in Samoa, one in Barrow, Alaska, and the like. They lead to things like this:

My best to you as always,
w.

Editor
June 5, 2012 6:55 pm

Robert Brown says:
June 5, 2012 at 6:27 pm

… But then, locating “the” CO_2 observatory for the world on an active volcano is insane. Having just one (or just five, or just ten) for the world is insane. From a statistical point of view.
It is not impossible that 100% of the Mauna Loa increasing CO_2 “signal” is due to a steady, occult increase in CO_2 outgassing from volcanic processes within Mauna Loa itself and the surrounding islands. I don’t suggest that this is the most likely/plausible explanation, only that the only way one could check is with an observatory on top of Mount Everest, another on Kilimanjaro, and ten thousand (or a hundred thousand) more moored on weather balloons at 20,000 feet in some sort of regular grid covering the planetary surface. Or by performing some very complex and dubious geophysical research (since even if you excluded ML itself, there would be outgassing from volcanism on the surrounding Pacific floor to consider, and still more confounding factors). Expecting MLO to generalize to “the Earth” is a bit egregious.
rgb

Actually, it turns out that MLO is quite a good place for a CO2 measuring station … see my post “Under the Volcano, Over the Volcano” for a discussion of the issues. I also discussed the Beck data there, and Dr. Beck posted a response, I was stoked. His response starts by saying:

Dear Willis,
I agree, the near ground data listed in my first paper do not reflect background data.

Read his whole comment here.
w.

Allan MacRae
June 5, 2012 9:42 pm

richardscourtney says: @ June 5, 2012 at 10:45 am
“ …I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?”
Re-reading your question, Richard, I’m not sure I understand it fully…
As I stated previously, the system will continue to chase equilibrium in time and space, and fortunately for life on this planet, that dynamic equilibrium at this time results in sufficient atmospheric CO2 to maintain photosynthesis.
The CO2 sequestered in thick beds of limestones, dolomites, coal, lignite, peat and petroleum all over the planet was once, I presume, part of Earth’s atmosphere.
I also assume that over time, continued sequestration of atmospheric CO2 in these sediments will ultimately lead to atmospheric CO2 concentrations that are too low to sustain photosynthesis.
Barring an earlier natural catastrophe, will this mechanism lead to the end of life on Earth as we know it, as photosynthesis shuts down and the food chain fails?
This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.
– T.S. Eliot, “The Hollow Men”
[ Not to worry – an asteroid strike will probably get us long before then. 🙂 ]

June 6, 2012 12:15 am

Bart says:
June 5, 2012 at 4:55 pm
A) we do not have reliable measurements from those times
We do have reasonably good temperature measurements for the period from 1850 on. That would give an impression of the alleged increase in CO2, according to the 1950-2010 fit. The fitting period is already one third of the whole period. Thus the verification period would show whether the fit gives a reasonable answer for the whole period, even if we only have sparse data for CO2.
Right now, in the modern era, these parameters hold, and they explain the last several decades of atmospheric CO2 concentration and rule out significant human contribution to it.
Again, by tightly connecting the fate of human emissions to temperature, you have ruled out the influence of the emissions. But as these are largely independent variables, one can fit the same decades with a factor of the emissions, without any arbitrary offset, leaving 0% to 100% influence of temperature on the trend, but still 100% influence of temperature on the variability of the rate of change.
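The attribution ambiguity described here can be illustrated with synthetic series (every number below is invented; this demonstrates the collinearity argument, not a fit to real data):

```python
import math

# When temperature and emissions both trend upward over the fit period,
# either one (scaled, plus an offset) matches the CO2 rate of change
# about equally well.
years = list(range(60))
temp = [0.01 * t + 0.05 * math.sin(0.5 * t) for t in years]   # trend + wiggles
emissions = [1.0 + 0.03 * t for t in years]                   # smooth growth
# 'Observed' rate of change of CO2 -- built from temperature here, but
# that choice is arbitrary for the purpose of the demonstration.
dco2 = [1.5 * T + 0.8 for T in temp]

def fit_r2(x, y):
    """Ordinary least squares y = a*x + b; returns R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

r2_temp = fit_r2(temp, dco2)       # essentially perfect, by construction
r2_emis = fit_r2(emissions, dco2)  # nearly as good: the shared trend dominates
```

Only the short-term wiggles separate the two fits; the trend itself cannot discriminate between the candidate drivers, which is exactly the point at issue between Bart and Ferdinand.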
“And still I am waiting for any knowledge of a physical process that delivers 70 ppmv CO2 over 50 years only from a continuous small elevated [temperature] level…”
Deep ocean upwelling, as I and others have commented.

There is no indication that the deep oceans have had a measurable change in temperature (which is impossible in such a short period); only the upper 700 meters have, and that can cause at maximum a 16 ppmv increase in the atmosphere since the LIA, thus a few ppmv in the period of interest, according to Henry’s Law. Further, an increase in turnover speed from/to the deep oceans only changes the throughput, not the amounts in the atmosphere, to the extent that there was any increase in turnover at all. It is alleged that increased temperatures would reduce the turnover (which isn’t proven, but the opposite is very unlikely). And the 70 ppmv increase of CO2 since 1960 would only push more CO2 into the cold sinks and reduce the release of CO2 from the Pacific warm pool, according to Le Châtelier’s Principle… Thus the deep oceans are a net sink for CO2, not a source.
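The Henry’s Law scale estimate can be checked roughly, assuming the commonly cited ~4%/°C temperature sensitivity of seawater pCO2 (that coefficient and the 290 ppmv baseline are assumptions for the sketch, not figures given in the comment):

```python
# Rough scale check: how much would atmospheric CO2 rise at equilibrium
# if the ocean surface warms by dT degrees C?
PCO2_BASELINE = 290.0   # ppmv, assumed pre-warming equilibrium
SENSITIVITY = 0.042     # assumed fractional pCO2 change per degree C

def equilibrium_rise_ppmv(dT):
    """Equilibrium shift of atmospheric CO2 for a surface warming of dT."""
    return PCO2_BASELINE * ((1.0 + SENSITIVITY) ** dT - 1.0)

rise_since_lia = equilibrium_rise_ppmv(1.0)   # ~1 C of warming since the LIA
print(round(rise_since_lia, 1))
```

The result lands around a dozen ppmv, the same order as the “at maximum a 16 ppmv increase” figure above, and far short of the 70 ppmv under discussion.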

FerdiEgb
June 6, 2012 12:38 am

Robert Brown says:
June 5, 2012 at 6:16 pm
Expecting MLO to generalize to “the Earth” is a bit egregious.
MLO is not even used to calculate the “global” CO2 dataset; only sea-level stations are used for that purpose. But it is used as a reference, because it has the longest continuous record. Measurements at the South Pole started one year earlier, but have a gap of a few years in the continuous record, though that gap can be filled by 14-day flask samples taken in the period.
But for the trend, it doesn’t matter what you take as reference, as all trends are near equal; there is only a lag of the SH after the NH and a lag with altitude, which indicates that the main source of the increase is near ground in the NH:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
You can compare the data yourself from lots of stations here:
http://www.esrl.noaa.gov/gmd/ccgg/iadv/
Further, the “cleaning” procedure at MLO and other stations doesn’t change the average or the trend by more than 0.1 ppmv, no matter if you include or exclude the outliers. The outliers at MLO are clearly those measurements that contain local contamination (downslope wind from the volcano, upslope wind from the valley); these are rightfully discarded, but that doesn’t affect the trend.

FerdiEgb
June 6, 2012 12:45 am

Brian H says:
June 5, 2012 at 6:02 am
FerdiEgb says:
June 5, 2012 at 3:23 am
“spagetthy”
Not even close. Spaghetti.

I prefer tagliatelle, much easier to eat and spell…

FerdiEgb
June 6, 2012 1:50 am

richardscourtney says:
June 5, 2012 at 10:45 am
… I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?

I think that the problem is in the different causes of the increase/decrease.
There is a fast response to temperature, as can be seen in the seasonal swings. A temperature increase/decrease gives a near-immediate response from the ocean surface and an opposite one from vegetation. On global average, that combined response gives some 5 ppmv/°C. The temperature swing over the seasons is about 1°C globally, mainly due to the higher temperature response in the NH. Most of the CO2 response is from the NH mid-latitude vegetation, as the d13C record shows.
This seasonal temperature swing forces an enormous flux into and out of the atmosphere. But that is a rather fixed flux, where the variability is mainly due to year-by-year temperature changes. The overall change, both over the seasons (at ~5 ppmv/°C) and around the trend (at ~4 ppmv/°C), seems rather small compared to the huge fluxes involved, but that may be a result of the countercurrent action of the two main flows involved.
Thus CO2 changes from temperature changes have a fast component over the seasons and over interannual periods. There is also a very slow component which gives the changes over multidecades to multimillennia as seen over the MWP-LIA change that gives changes of around 8 ppmv/°C. In that case, deep ocean exchanges and land/ice area changes are involved with much longer response times.
The current discussion with Bart now is about the response to temperature for interannual to decadal changes. According to him (and others), that may be hundreds of ppmv/°C. One never knows, but it would be very remarkable if the response were small for high-frequency changes and for very low-frequency changes, but extremely high in between.
Now what happens if some source (volcanoes, humans) inject some extra CO2 into the atmosphere?
The natural processes will respond to that, as the above, temperature driven, dynamic equilibrium is disturbed. Some of it goes very fast in the upper ocean layer (response time 1-2 years), but that is maximum 10% of the disturbance, due to ocean chemistry (the Revelle factor).
Some is going into the deep oceans and vegetation, but that is a much slower process. Vegetation grows faster with more CO2 in ideal circumstances, but in the real world the circumstances are not always ideal (water, nutrients, sunlight,…) and even in the best circumstances the extra growth averages 50% for 100% more CO2. The deep oceans also take in some of the extra CO2, depending on the increase in CO2 level (the pCO2) compared to the ocean surface pCO2 at the sink places. But that process is much slower than the absorption or release of CO2 by the surface layer, due to a limited exchange flux between the atmosphere and the deep oceans. The response time of the combined deep ocean/vegetation system to an extra CO2 injection is in the order of 50 years.
There still are many other processes which can respond to such disturbances, but these are all much slower.
Thus in summary: while the response to temperature is in the first instance very fast, the response to a disturbance of the overall equilibrium is much slower, because it is mainly governed by slower processes than those responsible for the response to temperature.
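The fast-versus-slow response described above can be caricatured as a sum of two exponentials (the ~10% fast fraction, the 1-2 year surface response, and the ~50 year deep-ocean/vegetation response time are taken from the description; the two-exponential form itself is a simplification, not a carbon-cycle model):

```python
import math

FAST_FRACTION, TAU_FAST = 0.10, 1.5    # surface layer: Revelle-limited, years
SLOW_FRACTION, TAU_SLOW = 0.90, 50.0   # deep ocean + vegetation, years

def remaining(t_years):
    """Fraction of a one-off injected CO2 pulse still airborne after t years."""
    return (FAST_FRACTION * math.exp(-t_years / TAU_FAST)
            + SLOW_FRACTION * math.exp(-t_years / TAU_SLOW))

for t in (1, 10, 50):
    print(t, round(remaining(t), 2))
```

The point of the shape: a small slice of a disturbance disappears within a year or two, but the bulk decays on the multi-decade timescale, which is why a fast response to temperature and a slow response to an injected disturbance can coexist.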

June 6, 2012 2:17 am

Willis Eschenbach:
Thank you for your interest in this subject. At June 5, 2012 at 6:40 pm you ask me

It seems to me that the sequestration processes, following Le Chatelier’s Principle, will push the system back towards equilibrium. How hard they push, however, is a function of how far the atmospheric levels are from equilibrium. As a result, I would not expect them to sequester all the emissions, and I am puzzled why you think that they would, could, or should sequester it all …
What am I missing?

I answer:
You are “missing” any consideration of the behaviour of the carbon cycle system as exhibited by both the seasonal and the diurnal variations in atmospheric CO2 concentration.
The annual rise in atmospheric CO2 concentration is the residual of the seasonal variation of each year.
I remind you that above, at June 5, 2012 at 2:41 pm, I wrote

At present the yearly increase of the anthropogenic emissions is approximately 0.1 GtC/year. The natural fluctuation of the excess consumption (i.e. consumption processes 1 and 3 minus production processes 2 and 4) is at least 6 ppmv (which corresponds to 12 GtC) in 4 months. This is more than 100 times the yearly increase of human production, which strongly suggests that the dynamics of the natural processes here listed 1-5 can cope easily with the human production of CO2.

In other words, it is hard to understand why the sequestration processes do not sequester all the annual emission.
This failure to sequester all of the annual emission is clearly not because the system is near saturation, because the rates of sequestration do not indicate that. Unfortunately, I lack your ability to post figures here, but if you look at Figure 2 in the item I emailed to you a few weeks ago then you will see it is of
“Rise and fall of carbon dioxide concentration in the atmosphere at four sites, Mauna Loa Hawaii, Estevan Canada, Alert Canada, Shetland Islands. Here three years are selected from the long term graph 1991- 2000, C.D. Keeling and T.P. Whorf. “On line trends”, cdiac.ornl”
(Incidentally, my above quotation of my words in this post are also copied from the same document which contains the illustration).
In each case in that Figure, the atmospheric CO2 plummets in the Spring then makes an abrupt reversal. The sequestration does not gradually reduce before the reversal, as it would if the sequestration processes were nearing saturation.
This behaviour (i.e. rate of sequestration and rate of reversal) indicates that the sequestration processes are not near saturation, that they can easily sequester all the anthropogenic emission, and that
(a) the sequestration processes abruptly cease absorbing at the end of the Spring
or
(b) the emission processes are much faster than the sequestration processes and they abruptly start emitting at the end of the Spring
or
(c) a combination of (a) and (b).
Furthermore, as Allan MacRae says at June 4, 2012 at 7:38 pm

The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature. If your premise was correct, you would see CO2 peaks at breakfast and supper times and the proximate (in time) morning and evening rush hours, when power demand and urban driving are at their maxima. This human signature is absent in the SLC data, and yet the natural signature is clearly apparent and predominant.

Simply, at a local level the diurnal variation of atmospheric CO2 is observed to be independent of known pulses of CO2 into the atmosphere. This, too, is evidence that the variations in atmospheric CO2 concentration are determined by some balance of the natural emission and sequestration processes, and it suggests that the balance is established at each geographic position. (This geographical conclusion is supported by the differences between the curves in Figure 2, which I mentioned earlier in this post.)
Clearly, something is adjusting the ‘set point’ of atmospheric CO2 concentration hour-by-hour, day-by-day, month-by-month, and year-by-year so it appears to act independently of known inputs of CO2 into the air.
I hope this answers your question
Richard

June 6, 2012 2:32 am

Allan MacRae:
Thank you for your clarification at June 5, 2012 at 9:42 pm. It says:

richardscourtney says: @ June 5, 2012 at 10:45 am
“ …I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?”

Re-reading your question, Richard, I’m not sure I understand it fully…
As I stated previously, the system will continue to chase equilibrium in time and space, and fortunately for life on this planet, that dynamic equilibrium at this time results in sufficient atmospheric CO2 to maintain photosynthesis. …

I am sorry that I was not adequately clear.
I agree that “the system will continue to chase equilibrium in time and space”. Indeed, I said as much in my post at June 5, 2012 at 10:45 am where I wrote

The carbon system may be adjusting to a new equilibrium in response to a change such as the temperature rise, the anthropogenic emission, a combination of those two effects, and/or something else.
The rate constants of some processes of the carbon system are very slow so they take years or decades to adjust. Hence, any change causes the system to adjust towards a new equilibrium which it never reaches because the system again changes before the new equilibrium is attained.

But, again, that merely rewords my question. It changes my question to become
“Is the carbon cycle adjusting to a new equilibrium by not sequestering all CO2 emissions and, if so, why?”
Richard

June 6, 2012 2:40 am

Gail Combs:
Thank you for your posts. At June 5, 2012 at 2:41 pm, I tried to get people to recognise that the complexity of the carbon cycle makes any simplistic analysis of its changes problematic. I think your posts make the same point, but more clearly.
Richard

June 6, 2012 3:24 am

Robert Brown:
Your post at June 5, 2012 at 6:27 pm begins by saying

’Nough said.

Indeed. You really do have to wonder if anybody in the world understands statistics and why rejecting outliers in an unknown distribution is insane. But then, locating “the” CO_2 observatory for the world on an active volcano is insane. Having just one (or just five, or just ten) for the world is insane. From a statistical point of view. …

I write to provide a sincere and friendly warning.
For more than a decade, I have made the same (and other) points concerning the MLO data as you are making. But I have found it is fruitless making such points because the only responses are refusals to consider the facts and – if the points are pressed – personal abuse.
It is interesting to ponder why there is such refusal to consider the obvious limitations of the MLO data.
In my opinion the refusal is because the MLO data is the longest continuous record of atmospheric CO2 concentration and very little other quantification exists for any part of the carbon cycle. Therefore, almost all existing consideration and modelling of the carbon cycle is built on use of the MLO data. So, any doubt of the accuracy, precision and reliability of the MLO data is perceived as being an attack on the work of each person who has made any consideration and/or model of the carbon cycle.
About a decade ago I tried to investigate the entire business of the MLO data. My investigation included studying the published accounts of the experimental procedures used to obtain the data, visiting the site of the MLO lab., observing the surroundings of the MLO lab. from a helicopter, and attempting (without success) to get an interview with Keeling Jnr. I concluded that the MLO data is very far from being trustworthy.
I then attempted to draw attention to my conclusions in closed email groups of interested persons. I suspect that Ferdinand and Willis can remember the acrimonious responses I obtained.
So, I have concluded that consideration of the accuracy, precision and reliability of the MLO data is pointless: nobody wants to know, and the only thing achieved is ‘bruises’.
Richard

Myrrh
June 6, 2012 5:06 am

fhhaynie says:
June 4, 2012 at 1:32 pm
Myrrh,
If you look at the raw event flask data, you will find many spikes in the CO2 data that are flagged and not included in the monthly averages. Most of these spikes are not errors, because there is usually a corresponding spike in the 13CO2 data. The recorded monthly averages represent background levels that vary with latitude but not longitude. I think that cold water in clouds is absorbing the CO2 and transporting it to the upper atmosphere and the poles. This process is moderating the measured concentration near the surface and gives the appearance of “well mixed”.
Also, it can explain the higher concentrations in the upper atmosphere in the mid latitudes. The equator is the source and the cold waters near the poles are the sinks.
=======
[I’ve just done the two sections below and thought I’d better come back up to do some sort of introduction to them…, except I still don’t know where I’m going with this; the more descriptions of Hawaii and Mauna Loa I read, the more it niggles me, for several reasons, and I’d like to sort it out.]
It was watching a programme on the Water Cycle in Hawaii given by a geologist from the university (I don’t recall his name; it was before I took an interest in Mauna Loa) where I learned that without the Water Cycle temperatures would be 67°C, not the 15°C we have: because warm moist air rises, it takes heat away from the surface of the Earth, and as the water vapour condenses back into liquid at colder heights it gives up this heat, which continues to flow from hotter to colder, up and away. Hawaii is a good study for the Water Cycle: http://www.hawaiihistory.org/index.cfm?fuseaction=ig.page&PageID=365
“Water Cycle
The water cycle on an island follows the same principles and properties of water anywhere, but ocean island geology and geography create unique details in how the cycle plays out.
The island water cycle begins with ocean evaporation. Moist air cools as it rises and as the humidity level of this air increases to 100%, water vapor condenses to form clouds. Most rainfall in Hawai`i results from orographic lifting, the lifting of clouds as they’re pushed up against the islands’ central mountains by northeast tradewinds. As prevailing winds push the moisture-laden air to approximately 2,000 meters, the air reaches its saturation point where cloud vapor condenses to water and rain results.
The areas with the greatest rainfall are also those areas with the most persistent uplifting, that is, areas on the windward sides of the islands. Large valleys form under the most intense rainfall. Rain shapes small gullies in areas of less intense rain. Leeward plains can remain essentially uneroded; rainfall comes there only when the heaviest storms push clouds over the mountains.”
===============
My italics. Northeast tradewinds: http://wings.avkids.com/Book/Atmosphere/instructor/wind-01.html
“General Wind Patterns
As mentioned, local wind patterns are the result of pressure differences in the immediate area: land, sea, mountain, etc. But there are global patterns that we can observe as well. Let’s start by following movement in the northern hemisphere. Hot air rises from the equator, creates a low pressure area, and flows towards the north pole. The upper wind flow is deflected to the right by the Coriolis effect, which causes it to pile up and move from west to east. The piled up air cools, creating a high pressure area, and sinks; and as it accumulates on the surface it flows towards both the equator and north pole. The air moving toward the equator is influenced by the Coriolis effect and moves from the northeast, and because of its direction is called the northeast trade winds. (Wind is classified according to the direction from which it is blowing.) The poleward moving air also moves to the right and is called the prevailing westerlies. The third wind belt develops as cold polar air sinks and moves south, is deflected to the right, and is therefore called the polar easterlies. The same air pattern occurs in the latitudes of the southern hemisphere, except that the deflection of the wind is to the left rather than right. (In the southern hemisphere the trades are called the southeast trade winds.)
Roughly speaking, trade winds occupy the area between 0 (the equator) and 30 degrees latitude; prevailing westerlies the area between 30 and 60 degrees; and polar easterlies the region between 60 and 90 degrees (the pole).”
======================
So, there are two things in play here: the local conditions and the Northeast tradewinds. The case made for Mauna Loa being a "pristine" background site is two-fold: the diurnal winds that move upslope in the heat of the day flow back downslope as they cool, and the Northeast tradewinds are claimed to bring in well-mixed background air untouched by local sources. Here's a map of the Hawaiian islands: http://www.worldatlas.com/webimage/countrys/namerica/usstates/hi.htm
How they can claim that anything in the downslope winds is free of local production is strange in itself, when at the same time they claim it is local air that has been thoroughly mixing for two or three years above them; it's not going to be the same volume of air coming down as going up. Hawaii, according to those who fly around its skies (the greatly increased traffic day and night is ignored), is well known for its mixing winds, so the Northeastern trades must be going into a kind of trap here, held captive in the mix. This is a very warm ocean in an area of great volcanic activity – yet that volcanic activity is never mentioned except as their chosen cut-off point: they claim to be taking away whatever huge amounts are around from the venting, eruptions and so on, arbitrarily deciding when that no longer affects their samples. They decide, in other words, what the 'background' is, and then take away what doesn't fit. Exactly as Callendar did, ignoring all the spikes and outliers that don't fit in with their agenda of proving "man-made CO2 is rising globally every year" – and as I said earlier, Keeling claimed to have proved this with less than two years of data. Some scientist. Why would you trust the figures of a scientist who claimed this?
This Mauna Loa station is the poster child for the claim that their stations are all in "pristine" positions, uncontaminated by local CO2 production. How can they claim that for Hawaii when they have to admit to continually juggling with such a great local outpouring of it, which they then subtract – and is what's left anywhere claimed to be man-made? Good grief, that 'scientists' defend the method is beyond rationality. We expect those with a bent for things scientific to see flaws in the premise, but warmists can't, any more than they can see any of the fudges in the method.
Not all convection taking warm CO2-laden air up around them is going to turn into rain against their mountain, or the other mountains on the islands. And then there is their odd idea that depleted levels of carbon dioxide coming upslope are always from plant uptake – they ignore all other reasons for it, as they do for its higher levels. Plants, when not actively photosynthesising, breathe out carbon dioxide just as we do, and some 90% of the oxygen in our atmosphere is produced by photosynthesis in the oceans, so how can that not be a factor in measuring on islands?
And looking at that line of islands, the Northeast tradewinds, which they claim are bringing in air "pristine, untouched by local CO2", come over all the other islands before getting to Hawaii (thousands of earthquakes a year, much of it underwater, etc. etc. – all the other carbon dioxide production).
So this great emphasis they put on "pristine Northeast tradewinds travelling across thousands of miles of ocean uncontaminated by local sources" is nonsense just for that: the winds pass through the whole island chain before getting to Hawaii. And yet they measure (a different crowd, I imagine) stuff coming in from China!
All that has to be ‘local’. Local is whatever readings you get from all the factors in play locally.
Their mythical “well mixed background” is an arbitrarily chosen cut off point excising whatever doesn’t fit, lying that it’s a pristine site by ignoring or fudging all other factors – which they’ve achieved from the get go by starting with a ridiculously low cherry picked Callendar/Keeling beginning number. And, on top of that there’s no way of showing any difference between volcanic and ‘man-made’, it’s an illusion which keeps being fudged in presentation by pro AGWs and in Mauna Loa’s descriptions they’re not differentiating. There is no way Keeling could claim that his arbitrarily chosen cut off point in Hawaii was showing a trend of man-made CO2 of his mythical global well-mixed background.
What really pisses me off is that I'm not a scientist, and yet I can spot all the tweaks and fudges and the "say it enough times and people will believe it" propaganda – the claim that a site up a mountain which is itself the world's largest active volcano, in an area of constant volcanic activity in a warm sea, is pristine – and I have to argue with those who claim for themselves the ability of "scientific thinking" while defending this crap by regurgitating the meme tweaks.
But I'm really fed up with the well-mixed meme, the endless crap fisics which ignores CO2's relative weight etc., even when there are glaring disjuncts, as in the AGW Hawaii explanations that they have to wait until carbon dioxide from the volcanic activity has settled because it's heavier than air. Why do we need to teach climate 'scientists' this? How can anyone thinking themselves a climate scientist not know the real properties of carbon dioxide? It's not an effin ideal gas in a lab; the atmosphere isn't empty space. How can these people not understand that the volume of atmosphere around us is a heavy, voluminous fluid gas, or we wouldn't hear sound? But no, they firmly state that molecules of carbon dioxide spontaneously diffuse and speed through the empty air at great speeds, bouncing off other such weightless, volumeless molecules and so thoroughly mixing. What is it with them? Can't they see the disjunct between a fluid gas atmosphere, in which we have sound, and empty space? And then there's Brownian motion, also given as a reason carbon dioxide is thoroughly mixed – WUWT??
Without real-world physics of properties and processes, any made-up explanation will do – empty-space ideal-gas diffusion, or a fluid volume of gas in Brownian motion, which works on nano-distances, as if carbon dioxide were something apart from the medium. They have no convection in their models because they have no atmosphere, and then they give Brownian motion as a reason carbon dioxide 'diffuses as does scent from a bottle opened in a classroom'. They need to reassess their capacity for 'scientific' thinking if they still regurgitate these nonsense memes when this is pointed out to them – incapable even of understanding gravity, because it doesn't exist in their empty-space ideal-gas world. At the very least they should investigate this for themselves; instead they get on the general warmist support bandwagon, knocking real-world physics.
Anyway, anyone claiming the station at Hawaii is producing real figures showing an increase of man-made global carbon dioxide production, while ignoring not just one example of the contortions they go through to claim it's a pristine site and their method sound, but a whole list of reasons hiding the fact that even this claimed well-mixed background is not proven to exist, is not an effin scientist. Full stop.
Some disjuncts:
http://en.wikipedia.org/wiki/Global_Ocean_Data_Analysis_Project
“Additionally, analysis has attempted to separate natural from anthropogenic DIC, to produce fields of pre-industrial (18th century) DIC and “present day” anthropogenic CO2. This separation allows estimation of the magnitude of the ocean sink for anthropogenic CO2, and is important for studies of phenomena such as ocean acidification.[3][4] However, as anthropogenic DIC is chemically and physically identical to natural DIC, this separation is difficult.”
Oh, but they’re improving the method of finding the difference… So didn’t Keeling find it? What have we been fed all these years?
No wonder Keeling was delighted to get to work at Hawaii, the Antarctic just wasn’t producing enough carbon dioxide for his desired manipulations..
So between these two fudge memes – that carbon dioxide is a well-mixed gas, and that Mauna Loa is the ideal place to measure this supposed global background (a meme Keeling himself started) – sits all the junk science created to support it, including showing the occasional picture from AIRS which is totally at odds with its stated official conclusion, to their great surprise, that CO2 is lumpy – and which anyway withholds upper and lower troposphere data. Of course they were surprised, because the fictional fisics of AGW has created a different world with different properties and processes. It admits, for example, that water vapour is not well-mixed but lumpy, yet says that all the other gases are – and this is simply, astonishingly, accepted without blinking.
So water vapour somehow can lump in an empty atmosphere where the other gases are in ideal empty space bouncing off each other – without attraction – so their carbon dioxide can't join with water vapour, and their clouds appear by magic. But AIRS found that carbon dioxide is lumpy, really lumpy, and they were surprised because they'd been brought up on the fictional fisics regurgitated by warmists; if they hadn't been, they would know what real scientists still know:
http://www.skepticalscience.com/news.php?n=433#29253
“In real air there is no uniform distribution of the masses of the constituents including water vapor and clouds in the atmosphere in space and time as is shown by daily weather maps of the various regions of the earth. High pressure cells have more mass of the gases than do low pressure cells, and thus there is no uniform distribution of CO2 in the atmosphere. Air containing water vapor is less dense than dry air and has less mass of the fixed gases and of CO2, both of which will vary with humidity. Mountains are a prominent geological feature of the continents and the density of the air in them is less than at sea level and diminishes rapidly with elevation.”
The Mauna Loa data must be junk science because they’re measuring something that doesn’t exist, their silly manipulations with measuring notwithstanding. And until those defending it deal with all the disjuncts their analysis of Mauna Loa and the rest will never make sense because their base premise is flawed.
The claim: http://cdiac.ornl.gov/trends/co2/sio-mlo.html
Pristine Northeast tradewinds, Mauna Loa measuring dust storms from China:
http://co2now.org/Know-CO2/CO2-Monitoring/mauna-loa-atmospheric-science-and-wonder.html
More examples of the pristine Northeast tradewinds, and the sometimes-there, sometimes-not fudge: http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
“Nearby emission or removal of CO2 typically produces sharp fluctuations, in space and time, in mole fraction. These fluctuations get smoothed out with time and distance through turbulent mixing and wind shear. A distinguishing characteristic of background air is that CO2 changes only very gradually because the air has been mixed for days, without any significant additions or removals of CO2.”
So which is it – pristine Northeast tradewind air or local mixing?
Like ideal gas spontaneous diffusion in empty space and Brownian motion, whatever tweaking meme necessary to obfuscate, they’re not describing the real world. And even this is believed when they actually describe their own methods of measuring and show they’ve made it up:
http://wattsupwiththat.com/2011/10/02/global-warming-potentials/#comment-757687
http://wattsupwiththat.com/2011/10/02/global-warming-potentials/#comment-757726
But it's always the same old impossible-to-separate-out man-made global CO2 creeping up year by year, so they can ignore the world-wide spikes of Pinatubo as well as their own continuous production of it, while producing pictures out of time and mislabeled: http://wattsupwiththat.com/2008/07/29/co2-well-mixed-or-mixed-signals/
and
And Ferdinand, releasing a few pictures years after the relevant data was known, showing a time of as little variation as they could, while not showing any of the pictures completely at odds with this – the data from which came their own astonished conclusion that carbon dioxide is lumpy and not well-mixed – is no basis from which to do your own calculations, is it?
Also, the inescapable conclusion: http://wattsupwiththat.com/2011/10/02/global-warming-potentials/#comment-757955
“The pdf is 65 pages long and rips apart the whole CO2 data base including Ice Cores and Mauna Loa. There are a lot of references to back up the analysis too. So yes the data has been “adjusted” like Hansen’s temperatures. No wonder the Mauna Loa data shows a nice linear increase in CO2. Without the CO2 is “well mixed” assumption the whole house of cards collapses.”
So, Gail’s conclusion there is what we really have to deal with, not which curve arguments.
Where's the proof of the Mauna Loa premise that there is such a thing as a well-mixed background? Nothing they show in their methods of measuring, and their idiotic fudging of everything around them, is any kind of proof that such a critter exists to fit to a curve.

June 6, 2012 5:08 am

Further, the “cleaning” procedure at MLO and other stations doesn’t change the average or the trend by more than 0.1 ppmv, no matter if you include or exclude the outliers. The outliers at MLO are clearly those measurements that contain local contamination (downslope wind from the volcano, upslope wind from the valley); these are rightfully discarded, but that doesn’t affect the trend.
I know it is a bit of an anomaly here, but I stand corrected, by both you and Willis. Not anomalous that I stand corrected (I know perfectly well that my knowledge is limited and often mistaken) but there are days on WUWT that I think nobody can ever be corrected…;-)
If I understand you correctly, the correlation between NH leading SH and so on makes anthropogenic sources (in the far more industrialized NH) more likely, although still far from proven given that NH temperatures also lead (or are generally higher than) SH temperatures and Bart’s temperature-CO_2 connection is not inconsistent with that. So we are right back where we started, with Richard’s model conclusions (that we don’t yet know what is responsible for CO_2 rise) still true, although some models may be less likely than others on the grounds of sheer physical chemistry. Is that a fair assessment?
rgb

Gail Combs
June 6, 2012 6:03 am

Laws of Nature says:
June 3, 2012 at 4:58 am
>> Ferdinand Engelbeen says:
“Again the same discussions com up every few months…”….
____________________________________________________
Law of Nature, F.E. is not going to budge from his position no matter what facts, peer-reviewed studies or information are thrown at him. I went round and round on one of the IPCC's most basic premises, that CO2 is uniformly distributed, and got nothing but double speak.
I figure FE is one of the WUWT trojan horses, guaranteed to show up and defend the IPCC's position on CO2. We have several of them who have a specific patch of IPCC "Science" they are dedicated to defending. Quite fascinating to watch and keeps the discussions lively.

Allan MacRae
June 6, 2012 6:27 am

These CO2 graphics are interesting (h/t to Willis):
Carbon tracker animation
http://www.esrl.noaa.gov/gmd/ccgg/carbontracker/
CO2 Weather Map
The big CO2 blob on 1Jan2010 near Newfoundland is very interesting – it is undoubtedly caused by all those Newfies home on pogey for the winter recreation season – driving their Skidoos, burning wood in their stoves and smoking all that dope.
http://www.esrl.noaa.gov/gmd/ccgg/carbontracker/co2weather.php?region=nam&date=2010-01-01
On a global scale, the powerful impact of Spring is evident in this 1May2010 image – and apparently the Newfies have moved on, to party with their good friends in highly industrialized, overpopulated Sakhalin Island.
http://www.esrl.noaa.gov/gmd/ccgg/carbontracker/co2weather.php?region=glb&date=2010-05-01
sarc off/
As I hypothesized earlier, there is the possibility of urban impacts on CO2 over Western Europe and parts of China in this winter image, also 1Jan2010.
http://www.esrl.noaa.gov/gmd/ccgg/carbontracker/co2weather.php?region=glb&date=2010-01-01
In the big global picture, I think these images further support my premise that human emissions of CO2 are overwhelmed by natural seasonal CO2 flux. Since the system is highly dynamic, not static, the "mass balance" argument does not hold – the system just accommodates the human emissions and makes natural adjustments as it seeks its own equilibrium.
I don’t think human fossil fuel combustion is contributing significantly to the observed atmospheric CO2 increase – I think it is all, or almost all natural.

Gail Combs
June 6, 2012 7:44 am

Bill Illis says:
June 3, 2012 at 6:12 am
Just noting that the AIRS satellite has a number of videos for mid-tropospheric CO2 concentrations covering 6 or 7 years now.
Just search "Airs CO2" on Google…. You will see there is considerable variability and it is entirely possible that someone might measure 500 ppm in Europe or some locality every few days…..
___________________________________________
What is also interesting is what NASA says of AIRS,

…As the spacecraft moves along, this mirror sweeps the ground creating a scan ‘swath’ that extends roughly 800 km on either side of the ground track…. AIRS looks toward the ground through a cross-track rotary scan mirror which provides +/- 49.5 degrees (from nadir) ground coverage along with views to cold space and to on-board spectral and radiometric calibration sources every scan cycle. The scan cycle repeats every 8/3 seconds. Ninety ground footprints are observed each scan. One spectrum with all 2378 spectral samples is obtained for each footprint. A ground footprint every 22.4 ms. The AIRS IR spatial resolution is 13.5 km at nadir from the 705.3 km orbit.

Nadir is looking straight down, so this is the minimum column of air scanned. I am assuming that the "footprint" represents a column of air 13.5 km thick by 800 km / 90 footprints, or 8.9 km wide.
Another page (the one I was looking for) states AIRS reports the daytime and nighttime global distribution of carbon dioxide in the mid-troposphere at a nadir resolution of 90 km x 90 km. So it looks like the “footprints” above are combined to give a reading.
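The footprint arithmetic above can be checked in a few lines. Note that the quoted NASA description says the swath extends roughly 800 km on either side of the ground track; if that means a ~1600 km full swath, the average per-footprint width comes out near 18 km rather than 8.9 km (which assumes 800 km total). A back-of-envelope sketch, taking the quoted numbers at face value:

```python
# Back-of-envelope check of the AIRS scan geometry quoted above.
# The numbers come from the quoted description; reading "800 km on
# either side" as a ~1600 km full swath is an interpretation, not a spec.

half_swath_km = 800.0      # "roughly 800 km on either side of the ground track"
footprints_per_scan = 90   # "Ninety ground footprints are observed each scan"
nadir_res_km = 13.5        # "spatial resolution is 13.5 km at nadir"

full_swath_km = 2 * half_swath_km
width_per_footprint = full_swath_km / footprints_per_scan
print(f"full swath: {full_swath_km:.0f} km")
print(f"average footprint width: {width_per_footprint:.1f} km")
# ~17.8 km average versus 13.5 km at nadir: footprints widen away from
# nadir, so the two figures are broadly consistent under this reading.
```

Either way, the order of magnitude (10–20 km per footprint) is the same, which is what matters for the 90 km x 90 km gridded product mentioned next.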
Another page on AIRS says:

…Global monthly maps of CO2 have been generated and identify global transport patterns in the mid-troposphere. These results will aid climate modelers in parameterization of mid-tropospheric transport processes of CO2 and other gases. AIRS CO2 provides a mid-tropospheric measurement…

And AIRS is STILL finding variations from foot print to foot print despite all the combining and averaging being done.
http://airs.jpl.nasa.gov/data/about_airs_co2_data/about_airs_co2_data_files/index.jpg of “The monthly average of carbon dioxide in the middle troposphere made with AIRS data retrieved during July 2003”
A real kicker: http://joannenova.com.au/2011/11/co2-emitted-by-the-poor-nations-and-absorbed-by-the-rich-oh-the-irony-and-this-truth-must-not-be-spoken/
We have ample evidence that the temperature data is adjusted, homogenized, sliced and diced, and yet when it comes to CO2 data the same people who view temperature data with a critical eye believe there is no collusion – despite Callendar, who greatly influenced Charles Keeling, starting off the measurement of CO2 by cherry-picking low readings from the historic data! For the history see: http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/
So somehow we are to believe data from ice cores and data from Mauna Loa represent the CO2 for the entire earth. This is despite the clear cut evidence from greenhouses that the atmosphere could never be below 200 ppm without wiping out C3 plants at a minimum.

..leaving the air around them CO2 deficient, so air circulation is important. As CO2 is a critical component of growth, plants in environments with inadequate CO2 levels of below 200 ppm will generally cease to grow or produce… http://www.thehydroponicsshop.com.au/article_info.php?articles_id=27

….Below 200 PPM, plants do not have enough CO2 to carry on the photosynthesis process and essentially stop growing. Because 300 PPM is the atmospheric CO2 content, this amount is chosen as the 100% growth point. You can see from the chart that increased CO2 can double or more the growth rate on most normal plants. Above 2,000 PPM, CO2 starts to become toxic to plants and above 4,000 PPM it becomes toxic to people….. http://www.hydrofarm.com/articles/co2_enrichment.php

…Plant photosynthetic activity can reduce the CO2 within the plant canopy to between 200 and 250 ppm… I observed a 50 ppm drop in within a tomato plant canopy just a few minutes after direct sunlight at dawn entered a green house (Harper et al 1979) … photosynthesis can be halted when CO2 concentration aproaches 200 ppm… (Morgan 2003) Carbon dioxide is heavier than air and does not easily mix into the greenhouse atmosphere by diffusion… Source

June 6, 2012 8:55 am

Gail Combs says:
June 6, 2012 at 6:03 am
Law of Nature, F.E. is not going to budge from his position no matter what facts, peer-reviewed studies or information is thrown at him. I went round and round on one of the IPCCs most basic premises, that CO2 is uniformly distributed, and got nothing but double speak.
Well, Gail, if you have really good arguments, I will be the first to say that you are right and I am wrong. The difference between us is that I am as skeptic towards something that is said by fellow skeptics as towards what is said by the “warmist” side.
But you seem to accept any explanation from anyone, no matter how wrong and impossible, as long as it seems to undermine every single bit that the other side (IPCC) says. Take e.g. your quote of the late Ernst Beck about the 1942 "peak" in historical measurements; a little further on you quote the stomata data as "proof" that there was far more variability in the CO2 data. But if you look at the stomata data, they show no peak at all in 1942 (neither do ice cores or coralline sponges or any other carbon-related proxy I know of). Thus two of your quotes are completely contradictory…
And again, if you find that the AIRS data show that CO2 is not well mixed, then you have a different definition of well mixed than I have. Well mixed doesn't mean that any huge injection or removal of CO2 into or out of the atmosphere is instantly distributed all over the earth. It only says that such exchanges will be distributed all over the earth in a reasonable period of time. In the case of CO2, that is days to weeks for the same latitude and altitude band, weeks to months for different latitudes or altitudes, and months to 1-2 years between the hemispheres.
Or what do you expect that happens if you exchange 20% of all CO2 in and out of the atmosphere in a few months over the seasons? If AIRS only shows a 2% (+/- 4 ppmv) change of full scale over the same months, then a lot of the change is already mixed out.
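Ferdinand's mixing-timescale argument can be illustrated with a toy two-box model: a large seasonal source in one hemisphere, exchange between hemispheres on a ~1-year time constant, and the resulting NH-SH concentration difference staying a small fraction of the gross forcing. All numbers below are illustrative assumptions, not measured fluxes:

```python
# Toy two-box (NH/SH) sketch of interhemispheric mixing. The exchange
# time and source amplitude are invented round numbers chosen only to
# show the smoothing effect, not to reproduce real fluxes.

import math

tau_exchange = 1.0        # interhemispheric exchange time, years (assumed)
dt = 1.0 / 120.0          # time step, years (~3 days)
amp = 3.0                 # NH seasonal source amplitude, ppmv/yr (assumed)

nh = sh = 390.0           # start both hemispheres at the same level, ppmv
max_gap = 0.0
t = 0.0
while t < 10.0:
    seasonal = amp * math.sin(2 * math.pi * t)   # NH seasonal source/sink
    flux = (nh - sh) / tau_exchange              # mixing flux, NH -> SH
    nh += (seasonal - flux) * dt
    sh += flux * dt
    t += dt
    if t > 5.0:                                  # skip the initial transient
        max_gap = max(max_gap, abs(nh - sh))

print(f"max NH-SH difference: {max_gap:.2f} ppmv")
```

Even with the forcing confined to one box, the standing NH-SH gap is well under 1 ppmv here, which is the sense in which a seasonally violent exchange can still look "well mixed" on an annual map.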
I figure FE is one of the WUWT trojan horses, guaranteed to show up and defend the IPCCs
position on CO2. We have several of them who have a specific patch of IPPC “Science” they are dedicated to defending. Quite fascinating to watch and keeps the discussions lively.

Just received my paycheck from Fenton Communications for this month…

Bart
June 6, 2012 9:30 am

Ferdinand Engelbeen says:
June 6, 2012 at 12:15 am
“Thus the verification period would show if the fit gives a reasonable answer for the whole period, even if we only have sparse data for CO2.”
I haven't really looked at it, but it is worse than just sparse, and I doubt it is very useful at all. There really is no need. The best modern data we have show that the temperature/CO2 derivative relationship holds for the last 54 years. It undoubtedly held before then, so any contradiction from any other record would have to be decided in favor of the modern record.
“But as these are largely independent variables…”
They are not. The CO2 generated is subject to the same sequestration processes.
“…one can fit the same decades with a factor of the emissions, without any arbitrary offset, leaving 0% to 100% influence of temperature on the trend, but still 100% influence of temperature on the variability of the rate of change.”
Only if you fantasize that the climate acts as it pleases, without regard for physical laws. You cannot accept the variation of the temperature as influencing CO2, and dismiss its trend from doing so – that would be an unphysical high-pass response. And, that trend accounts for the entire curvature in the CO2 measurement record, leaving no avenue for a significant human influence.
“There is no indication that the deep oceans have had a measurable change in temperature…”
As the cold deep ocean water surfaces, it warms to come into equilibrium with its surroundings. That is what releases the CO2. The set point for CO2, “To” in my analogous system, changes as a result.

Agile Aspect
June 6, 2012 9:33 am

Joachim Seifert says:
June 2, 2012 at 3:58 pm
All this means that CO2-doubling will be completed by 2050 along with
the climate forcing of 3.7 W/m2…… thus earlier than 2100 as given by
AGW…
Which GMT would result in 2050?
;————————————
It's physically impossible for CO2 to trap heat on the minor sideband at 15 microns (the lifetime of the excited state after the absorption is too short.)
At the peak frequency of the outbound radiation of the Earth, namely 10 microns, the forcing has the wrong sign.
In short, the climate forcing parameter of 3.7 W/m^2 will have no impact on GMT regardless of the year or the sign.

Bart
June 6, 2012 9:37 am

“It undoubtedly held before then”
At least, in the near past (some number of decades, at least) from the start of the record in 1958. Most likely, it holds all the time, with either slow variation or some sudden jumps, due to a change in the equilibrium conditions, as dictated by the upwelling of the deep oceans.
This is an actual physically realizable hypothesis which is consistent with all the data. Your handwaving and arbitrary apportionment of the flows – these are not physically viable propositions. Until you can come up with a causal, stable set of differential equations which can describe behavior such as you posit, you are just imagining the system as you would like it to be, with no physical anchor into the real world.

June 6, 2012 9:50 am

Ferdinand:
I am replying to your post at June 6, 2012 at 1:50 am because it is so rare for you and me to agree that it is a pleasure when we do. You conclude your post saying to me:

Thus in summary, while the response to temperature in the first instance is very fast, the response to a disturbance of the overall equilibrium is much slower, because that is mainly governed by slower processes than those responsible for the response to temperature.

That is my opinion, too. Indeed, I said it above at June 5, 2012 at 10:45 am where I wrote

The rate constants of some processes of the carbon system are very slow so they take years or decades to adjust. Hence, any change causes the system to adjust towards a new equilibrium which it never reaches because the system again changes before the new equilibrium is attained.

The problem is that we lack sufficient data to know if that is true or not. Indeed, if you accept that opinion (n.b. OPINION and not fact) then – as my same post explained – the existing data allows the system to be modelled in a variety of ways whether one assumes the anthropogenic emission is affecting the equilibrium or not.
Richard

Bart
June 6, 2012 9:53 am

FerdiEgb says:
June 6, 2012 at 1:50 am
“The current discussion with Bart now is about the response to temperature for interannual to decadal changes. According to him (and others), that may be hundreds of ppmv/°C.”
There are two different temperatures coming into my analogous model, the sea surface temperature “T”, and the equilibrium temperature “To”. “To” varies on a long term basis, and can be large, due perhaps to the differential heat content of the deep oceans upwelling to the surface. Since 1958, “To” has been significantly offset from “T”, and that has driven the linear trend portion of the CO2 rise, which is the greater part.
Additionally, around 1990, there appears to have been (though the data and the manner in which they have been processed are uncertain) a step change in “To”. Or, perhaps there is a continuous change in “To” which, unfortunately, is very difficult to plot using the WoodForTrees tool.

Bart
June 6, 2012 11:14 am

richardscourtney says:
June 6, 2012 at 9:50 am
“The problem is that we lack sufficient data to know if that is true or not.”
What we lack is people open-minded and skilled enough to use the information we have.
Did you read my post here? This is such a simple example that surely you will clap your hand to your head and realize Bart was right, and we can then work constructively to resolve the issue.
And, you can also take the opportunity to assuage all of the aggrieved puppy lovers out there you have angered with your quoted comments.

FerdiEgb
June 6, 2012 2:40 pm

Bart says:
June 6, 2012 at 9:30 am
I haven’t really looked at it, but it is worse than just sparse, and I doubt very useful at all. There really is no need. The best modern data we have shows that the temperature/CO2 derivative relationship holds for the last 54 years. It undoubtedly held before then, so any contradiction from any other record would have to be decided in favor of the modern record.
There is probably no contradiction in the temperature change / CO2 derivative variability. But as said before, the problems are in the slope and offset. But see further.
“But as these are largely independent variables…”
They are not. The CO2 generated is subject to the same sequestration processes.

No, they are completely different:
The reaction of the biosphere (land plants, algae, soil bacteria) on average is more uptake of CO2 at higher temperatures and more precipitation. That is a fast process, with a reaction time of 1-2 years. The reaction of land plants to increased CO2 levels is a matter of decades. Extra growth from CO2 and from temperature or precipitation are independent of each other, though the uptake is constrained to a certain temperature range and sufficient precipitation. The influence of the seasonal temperature change is about 90 GtC in and out; the influence of the current increase in CO2 of ~100 ppmv (70 ppmv in the period of interest) is ~1.5 GtC/year more uptake. Thus there is a huge difference in reaction type and uptake speed.
The reaction of the upper ocean layer to increased temperatures is an overall release of CO2, in accordance with Henry's Law: about 16 ppmv/°C. The reaction to more CO2 in the atmosphere is more absorption of CO2 into the surface layer, if that exceeds the equilibrium according to Henry's Law.
Thus in both the biosphere and the upper oceans, the reaction processes involved for temperature changes and for extra CO2 are largely independent of each other, even opposite for the oceans.
Without any extra CO2 injection, an increase in temperature would lead to an increase in CO2 release from the ocean’s surface layer and an increase of CO2 uptake by the biosphere, until a new dynamic equilibrium between all processes is reached and there it stops. That is also seen in the ice core records for any period in time before 1850 as a change of ~8 ppmv/°C.
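The two sensitivities Ferdinand quotes (about 16 ppmv/°C from Henry's Law for the surface ocean alone, and ~8 ppmv/°C net equilibrium change seen in ice cores) can be put side by side for a small warming. Treating both as linear over small temperature changes is a simplifying assumption:

```python
# Side-by-side comparison of the two temperature sensitivities quoted in
# the comment above. Both coefficients are taken from the text; linearity
# over small dT is an assumption for illustration.

henry_ppmv_per_C = 16.0     # upper-ocean release per Henry's Law (quoted)
icecore_ppmv_per_C = 8.0    # long-term net equilibrium from ice cores (quoted)

for dT in (0.5, 1.0):
    ocean_only = henry_ppmv_per_C * dT
    net_equilibrium = icecore_ppmv_per_C * dT
    print(f"dT = {dT:.1f} C: ocean-only ~{ocean_only:.0f} ppmv, "
          f"net equilibrium ~{net_equilibrium:.0f} ppmv")
# For the ~1 C range discussed, both estimates stay in the single digits
# to low tens of ppmv, far short of the ~100 ppmv modern rise - which is
# the core of Ferdinand's argument against a purely thermal cause.
```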
But the deep oceans can be involved, see further…
Only if you fantasize that the climate acts as it pleases, without regard for physical laws. You cannot accept the variation of the temperature as influencing CO2, and dismiss its trend from doing so – that would be an unphysical high-pass response. And, that trend accounts for the entire curvature in the CO2 measurement record, leaving no avenue for a significant human influence.
Again, the variation in temperature matches the variability in increase rate, even if you completely detrend the increase rate. But there is no physical or statistical law that says that the slope and offset must be attributed to the temperature influence. And if the removal of the human emissions is slow enough, as it seems to be, then most of the slope and the bulk of the increase rate can be attributed to the human emissions.
As the cold deep ocean water surfaces, it warms to come into equilibrium with its surroundings. That is what releases the CO2. The set point for CO2, “To” in my analogous system, changes as a result.
Only marginally so: a maximum of 16 ppmv for 1°C. That is all you get if the deep ocean upwelling remains the same. If the upwelling increases, that must be compensated by more downwelling elsewhere anyway, which means more throughput but no change in atmospheric CO2.
The only difference could be if the deep ocean upwelling increased substantially in CO2 content. But that is a remote possibility and has nothing to do with temperature.
—————————————
From your second message:
There are two different temperatures coming into my analogous model, the sea surface temperature “T”, and the equilibrium temperature “To”. “To” varies on a long term basis, and can be large, due perhaps to the differential heat content of the deep oceans upwelling to the surface. Since 1958, “To” has been significantly offset from “T”, and that has driven the linear trend portion of the CO2 rise, which is the greater part.
Your formulae are here:
dC/dt = (Co – C)/tau1 + k1*H
dCo/dt = -Co/tau2 + k2*(T-To)
No problem with the first line. But there are problems with the second one, where you establish the influence of temperature on Co and thus, indirectly, the removal speed of the human CO2.
Of course there is a dependence of Co on temperature, but that is a fixed one, not an eternally gliding one. That is what the physics of air-ocean and air-biosphere exchanges says.
The second line needs a stop function when the new equilibrium setpoint would be reached:
dCo/dt = -(Co-Coo)/tau2 + k2*(T-To)
Where Coo is the original equilibrium CO2 level at To and Co is the new equilibrium CO2 level at T. Further, k2 is ~4 ppmv/°C, increasing to ~8 ppmv/°C for very long periods. The 4 ppmv/°C is the current ratio of temperature changes to increase-rate changes, and the 8 ppmv/°C is what is observed in the past via ice cores.
That implies that tau2 is very short (as observed) and tau1 is relatively long (as observed)…
If you make Co a gliding one, then there is no stop in either direction if T passes To, which for every time frame beyond the current one leads to huge deviations from reality.
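The difference between the two forms of the second equation can be seen in a minimal numerical sketch (a simple Euler integration; every parameter value here is an illustrative assumption, and the product k2*tau2, 4 ppmv/°C with these numbers, plays the role of the equilibrium sensitivity):

```python
# Euler integration of the two variants of the setpoint equation.
# All numbers are illustrative assumptions, not fitted values.
def integrate_Co(stop_function, T_minus_To=1.0, tau2=2.0, k2=2.0,
                 Coo=10.0, dt=0.01, years=50.0):
    Co = Coo                       # start at the original equilibrium
    for _ in range(int(round(years / dt))):
        if stop_function:          # form with the -(Co - Coo) pull-back term
            dCo = -(Co - Coo) / tau2 + k2 * T_minus_To
        else:                      # original gliding form
            dCo = -Co / tau2 + k2 * T_minus_To
        Co += dCo * dt
    return Co

# For a constant 1 degC offset, the stop-function form settles at
# Coo + k2*tau2 = 14 ppmv and the gliding form at k2*tau2 = 4 ppmv:
# identical dynamics, shifted by the constant Coo.
```

With a constant temperature offset both variants converge to a finite level; where they really part ways is in how that level is interpreted when T keeps drifting, which is the point argued above.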

Myrrh
June 6, 2012 4:10 pm

richardscourtney says: @ June 5, 2012 at 10:45 am
…I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?….
=============
Because, as I’ve touched on in my last post, the carbon dioxide molecule is different in AGWScienceFiction fisics, and it’s this critter that represents anthropogenic. It’s a supermolecule which wears its knickers on the outside. Not only does it drive hurricanes and move jet streams, it is an ideal gas in an ideal-gas empty-space atmosphere of like ideal gas molecules, oxygen and nitrogen (but not water vapour), so, as per the ideal gas description, these all zip around at enormous speeds through their empty-space AGWSF world’s atmosphere, bouncing off each other and so thoroughly mixing in elastic collisions; and, as per ideal gas, it has no weight, so it can accumulate for hundreds, even thousands, of years, bouncing off and zipping around, unattracted to any other molecules around it. Phew.
Not that you’ll find any method for sequestration into sinks for natural – they just happen, as do clouds – because in AGWSF fisics carbon dioxide has no way of getting back to Earth except by bouncing into it. Rocks can’t be sinks, because ideal gases bounce without attraction, so they don’t have rain, which is carbonic acid, which does the weathering. And of course their clouds appear by magic, because their gases aren’t buoyant in air, and anyway water vapour is being bounced around by all the ideal gas molecules, so how could it form? And so on, and so on. So don’t be thinking that, because they say water vapour isn’t an ideal gas but lumpy, they have any way of producing it in their empty-space atmosphere – so enthralling to the likes of Spencer, who is so clearly out of his depth when some engineer or other applied scientist comes along and says you can’t use vacuum and radiation because that ignores convection, because he’s ignoring that there is a something between Earth and the vacuum of space. They don’t know what convection is, because they don’t have real molecules but ideal ones, so they freak out if you mention gravity…
They really have no concept of the atmosphere above our heads and all around us at all, they don’t understand volume.
Anyway, that’s why anthropogenic doesn’t get swallowed up in the same sinks as natural, they avoid explaining how it does for natural because they don’t know as it can’t be explained by their strange fisics, they merely parrot real world physics general descriptions like “rain”, which can’t form in their fisics. The problem is that scientists who do understand real molecules and real atmosphere don’t know they’re talking to people who have a completely different fisics, who are describing a totally different world, so there’s a lot of talking past each other because warmers don’t understand they don’t know real physics.
But still insist they do. Heck, it’s taught at uni level; the PhD in physics who taught me about this blew my mind, I couldn’t believe at first that he was serious. He said he’d fail me in the exams he set (in Scotland), so I questioned him to make sure I was really hearing that carbon dioxide spontaneously diffuses into the atmosphere as per ideal gas, to bounce off other ideal gas molecules and so thoroughly mix. At first he refused to accept that carbon dioxide would separate out at all, until I showed him real-world examples: volcanoes venting dangers, breweries, and so on. Then he removed his post (moderator status) in which he said CO2 could never separate out, and came back with this really weird idea that some large amount of CO2 together would bring down the ‘package’ of air with it, so not really separating out. You couldn’t make it up… except someone did, to create this AGW Science Fiction world. That’s when I said, OK, so we agree that carbon dioxide can pool on the ground. So: there’s a room where carbon dioxide has pooled on the ground, and nothing changes from the original conditions which allowed it to pool, no work done, no windows opened, no fan put on. I say it will stay pooled on the ground, because it is one and a half times heavier than air. He said it wouldn’t; it would spontaneously diffuse into the atmosphere of the room as per ideal gas, and through collisions at vast speeds of ideal gas in empty space would very quickly become thoroughly mixed, and couldn’t become unmixed without a great deal of work being done. And I bet every warmist believing, or taught, that this is reality is going, yeah yeah, that’s how it is…
So, they don’t have any fisics to get their MightyMolecule into carbon sinks and can’t explain how natural does it.
And, that’s why they don’t understand your question.
================
Willis Eschenbach says:
June 5, 2012 at 6:49 pm
Robert Brown says:
June 5, 2012 at 6:16 pm
As one might expect, it suggests that we would be far better served by an entire globe spanning set of CO_2 concentration monitoring stations than by “just one”, sitting on an active volcano that is used as if it is representative of the entire atmosphere at 4200 meters above sea level at all temperatures and latitudes.
Actually, we do have a bunch of stations, one in Samoa, one in Barrow, Alaska, and the like. They lead to things like this: [image]
On the other hand we have:
“1.2 The Location of CO2 Monitoring Station in regions enriched by volcanic CO2
Volcanic CO2 emission raises some serious doubts concerning the anthropogenic origins of the rising atmospheric CO2 trend. In fact, the location of key CO2 measuring stations (Keeling et al., 2005; Monroe, 2007) in the vicinity of volcanoes and other CO2 sources may well result in the measurement of magmatic CO2 rather than a representative sample of the Troposphere. For example, Cape Kumukahi is located in a volcanically active province in Eastern Hawaii, while Mauna Loa Observatory is on Mauna Loa, an active volcano – both observatories within 50km of the highly active Kilauea and its permanent 3.2 MtCO2pa plume. Samoa is within 50 km of the active volcanoes Savai’i and/or Upolo, while Kermandec Island observatory is located within 10 km of the active Raoul Island volcano. etc.” (continued at http://carbon-budget.geologist-1011.net/ )
Willis Eschenbach says:
June 5, 2012 at 6:55 pm
Robert Brown says:
June 5, 2012 at 6:27 pm
… But then, locating “the” CO_2 observatory for the world on an active volcano is insane. Having just one (or just five, or just ten) for the world is insane. From a statistical point of view.
It is not impossible that 100% of the Mauna Loa increasing CO_2 “signal” is due to a steady, occult, increase in CO_2 outgassing due to volcanic processes within Mauna Loa itself and surrounding islands. I don’t suggest that this is the most likely/plausible explanation, only that the only way one could check is with an observatory on top of Mount Everest, another on Kilimanjaro, ten thousand (or a hundred thousand) more moored on weather balloons at 20,000 feet in some sort of regular grid covering the planetary surface. Or performing some very complex and dubious geophysical research (since even if you excluded ML itself, there would be outgassing from vulcanism on the surrounding Pacific floor to consider, and still more confounding factors). Expecting MLO to generalize to “the Earth” is a bit egregious.
rgb
Actually, it turns out that MLO is quite a good place for a CO2 measuring station … see my post “Under the Volcano, Over the Volcano” for a discussion of the issues. I also discussed the Beck data there, and Dr. Beck posted a response; I was stoked. His response starts by saying:
Dear Willis,
I agree, the near ground data listed in my first paper do not reflect background data.
Read his whole comment here. http://wattsupwiththat.com/2010/06/04/under-the-volcano-over-the-volcano/#comment-403530

========
A couple of things, firstly thanks for the link because his link http://www.biokurs.de/treibhaus/CO2_versus_windspeed-review-1-FM.pdf gave me the same moment of enjoyment I had the first time I saw his comparison Fig 1 – his, “Note the different scales of the 2 plots!”
Good man himself.
Secondly – which is why I decided to reply to your posts here – Mauna Loa is sold as (because claimed by Keeling) the definitive record of the global well-mixed background, which is always this unproven, explained-by-strange-fisics “global well-mixed background”. Beck is talking about local well-mixed, which is on a par with one of the explanations of the Mauna Loa method – local mixing by winds etc. – from which range of data a global figure could be worked out. The paper he linked to in your discussion was carbon dioxide versus wind speed. In AIRS they didn’t even understand winds; they said they’d have to go and do some work to understand them, because carbon dioxide was lumpy and not well-mixed.
If anyone could work out what global was, it would be BECK, our great loss; it certainly ain’t anyone juggling the numbers at Mauna Loa with contradicting methods, arbitrarily deciding levels, claiming that it’s the pristine global background they’re reading.
http://wattsupwiththat.com/2010/09/23/obituary-ernst-george-beck/
“Due to his immense specialized knowledge and his methodical severity Ernst very promptly noticed numerous inconsistencies in the statements of the Intergovernmental Panel on Climate Change IPCC. He considered the warming of the earth’s atmosphere as a result of a rise of the carbon dioxide content of the air of approximately 0.03 to 0.04 percent as impossible.”
Impossible. Those brought up with the tweaked-into-comic-parody fisics of the corrupted science education of AGW cannot understand Beck. I really don’t know what it would take for them to understand that they don’t understand, because the tweaks are subtle and cover a huge range of science fields.
We could make a start… Showing carbon dioxide is heavier than air would only take stuff we have in the kitchen: vinegar and bicarb of soda, and a lit candle on which to pour the invisible stream of fluid gas from above it, to put it out. As shown on QI a short while back, when Stephen Fry did this on air (BBC prime time).
Huge task to deconstruct all their experiments and teaching of fisics.

Myrrh
June 6, 2012 4:57 pm

Aggh, sorry, missed close italics after: “Dear Willis,
I agree, the near ground data listed in my first paper do not reflect background data.
Read his whole comment here. http://wattsupwiththat.com/2010/06/04/under-the-volcano-over-the-volcano/#comment-403530
Please fix mods if possible, thanks.
Have just taken a closer look at Beck’s old website – looks like not being maintained – does that mean the work he had on it is unavailable? http://www.biomind.de/realCO2/

Bart
June 6, 2012 5:14 pm

FerdiEgb says:
June 6, 2012 at 2:40 pm
“Thus both in the biosphere as in the upper oceans the reaction processes involved for temperature changes and extra CO2 are largely independent of each other, even opposite for the oceans.”
The sequestration processes are still the same. And, that is what matters.
“But there is no physical or statistical law that says that the slope and offset must be attributed to the temperature influence.”
There is for the slope. And, it follows that there necessarily is for the offset to make it all match.
“Only marginally so: maximum 16 ppmv for 1°C…”
…of temperature differential. You do not know the temperature differential with the deep oceans, and how that changes as it upwells to the surface.
“Of course there is a dependence of Co on temperature, but that is a fixed one, not an eternal gliding one.”
Not when you have a long term variation in the thermal energy of the upwelling deep oceans. That is a very bold statement to make without any evidence at all. If there’s one thing we know about… well, everything, it is that it always changes.
“The second line needs a stop function when the new equilibrium setpoint would be reached:”
You’re just rearranging the deck chairs. Define Too = To – Coo/(k2*tau2). Then your dCo/dt = -(Co-Coo)/tau2 + k2*(T-To) becomes dCo/dt = -Co/tau2 + k2*(T-Too), the same form as mine. And, you reach the same conclusions.
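A quick numerical spot-check of that substitution, with made-up values for every symbol:

```python
# Check that with Too = To - Coo/(k2*tau2), the right-hand side
# -(Co-Coo)/tau2 + k2*(T-To) equals -Co/tau2 + k2*(T-Too).
Co, Coo, T, To, k2, tau2 = 3.7, 1.2, 15.0, 14.0, 4.0, 2.0  # arbitrary values
Too = To - Coo / (k2 * tau2)
rhs_with_stop = -(Co - Coo) / tau2 + k2 * (T - To)
rhs_original = -Co / tau2 + k2 * (T - Too)
# The two right-hand sides agree to floating-point precision.
```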
You know I give little credence to the ice cores.
“That implies that tau2 is very short (as observed) and tau1 is relative long (as observed)…”
Not possible. Not observed. If that were so, you would not track the fine detail of the temperature change as here. That’s the whole problem.

Allan MacRae
June 6, 2012 11:11 pm

richardscourtney says: June 6, 2012 at 2:32 am
I agree that “the system will continue to chase equilibrium in time and space”. Indeed, I said as much in my post at June 5, 2012 at 10:45 am where I wrote
The carbon system may be adjusting to a new equilibrium in response to a change such as the temperature rise, the anthropogenic emission, a combination of those two effects, and/or something else.
The rate constants of some processes of the carbon system are very slow so they take years or decades to adjust. Hence, any change causes the system to adjust towards a new equilibrium which it never reaches because the system again changes before the new equilibrium is attained.
But, again, that merely rewords my question. It changes my question to become
“Is the carbon cycle adjusting to a new equilibrium by not sequestering all CO2 emissions and, if so, why?”
Richard
_________________
Hello Richard,
Again, it is late and I really don’t know if I’m answering your question or just rambling…
Jan Veizer’s 2005 paper (I’ve just emailed you and Willis a copy) may help – see this section on page 22:
COUPLING OF THE WATER AND CARBON CYCLES
The atmosphere today contains ~ 730 PgC (1 PgC = 1015 g of carbon) as CO2 (Fig. 19). Gross primary productivity (GPP) on land, and the complementary respiration flux of opposite sign, each account annually for ~ 120 Pg. The air/sea exchange flux, in part biologically mediated, accounts for an additional ~90 Pg per year. Biological processes are therefore clearly the most important controls of atmospheric CO2 levels, with an equivalent of the entire atmospheric CO2 budget absorbed and released by the biosphere every few years. The terrestrial biosphere thus appears to have been the dominant interactive reservoir, at least on the annual to decadal time scales, with oceans likely taking over on centennial to millennial time scales.
Interannual variations in atmospheric CO2 levels mimic the Net Primary Productivity (NPP) trends of land plants, and the simulated NPP, in turn, correlates with the amount of precipitation (Nemani et al., 2002, 2003; Huxman et al., 2004) (Fig. 16). The question therefore arises: is the terrestrial water cycle and NPP driven by atmospheric CO2 (CO2 fertilization) or is it the other way around? As a first observation, note that the “troughs” in precipitation and NPP coincide with the minima in sunspot activity (Fig. 16). As already pointed out, if a causative relationship exists, it can only be from the sun to the earth.
During photosynthesis, a plant has to exhale (transpire) almost one thousand molecules of water for every single molecule of CO2 that it absorbs. This so-called “Water Use Efficiency” (WUE) is somewhat variable, depending on the photosynthetic pathway employed by the plant and on the temporal interval under consideration, but in any case, it is in the hundreds-to-one range (Taiz and Ziegler, 1991; Telmer and Veizer, 2000). The relationship between WUE and NPP deserves a more detailed consideration. In plant photosynthesis, water loss and CO2 uptake are coupled processes (Nobel, 1999), as both occur through the same passages (stomata). The WUE is determined by a complicated operation that maximizes CO2 uptake while minimizing water loss. Consequently, the regulating factor for WUE, and the productivity of plants, could be either the atmospheric CO2 concentration or water availability.
From a global perspective, the amount of photosynthetically available soil water, relative to the amount of atmospheric CO2, is about 250:1, much less than the WUE demand of the dominant plants, suggesting that the terrestrial ecosystem is in a state of water deficiency (Lee and Veizer, 2003). The importance of the water supply for plant productivity is clearly evident from the NPP database that is a collection of worldwide multi-biome productivities, mostly established by biological methods (Fig. 20). The principal driving force of photosynthesis is unquestionably the energy provided by the sun, with the global terrestrial system reaching light saturation at about an NPP of 1150 ± 100 g carbon per year (Fig. 20). If the sun is the driver, what might be the limiting variable? Except locally, CO2 cannot be this limiting factor because its concentration is globally almost uniform, while NPP varies by orders of magnitude. Temperature, because of its quasi-anticorrelation with the NPP (Fig. 16), is not a viable alternative either.
… continued

June 7, 2012 2:41 am

Myrrh says:
June 6, 2012 at 4:57 pm
Have just taken a closer look at Beck’s old website – looks like not being maintained – does that mean the work he had on it is unavailable? http://www.biomind.de/realCO2/

The late Ernst Beck passed away a few years ago, a pity, as I had years of nice discussions with him. Just before his death he published a new work, together with Francis Massen (meteorological station Diekirch, Luxemburg; see his work at http://meteo.lcd.lu/papers/co2_patterns/co2_patterns.html ), about a method to calculate the “background” CO2 level in a particular place if figures for wind speed are known. At sufficient wind speed, one sees an asymptotic approach to the “background” value. Unfortunately, the main series of interest, responsible for the 1942 “peak”, has few datapoints at high wind speed and still a wide range.
See: http://meteo.lcd.lu/papers/co2_background_klima2009.pdf
That was his last work…

Myrrh
June 7, 2012 4:47 am

Thank you Ferdinand.
I wonder where all his data files are. Perhaps his daughter has them. I’ve read people saying that he made them freely available whenever requested, so perhaps between all those the collection could be brought together.

FerdiEgb
June 7, 2012 5:17 am

Bart says:
June 6, 2012 at 5:14 pm
The sequestration processes are still the same. And, that is what matters.
Bart, the sequestration processes are essentially the same, but with largely different coefficients. The reaction of trees to temperature, e.g., is an upside-down U-curve in growth (thus in CO2 uptake), which implies a temperature optimum, while the reaction to CO2 changes is quite linear within the constraints of the other variables. The reaction of soil bacteria in CO2 release to a temperature change is almost immediate, no matter the CO2 level. The global reaction to temperature in the first instance is very rapid, while the reaction to increased CO2 levels is quite slow – just the opposite of what your theory says…
There is for the slope. And, it follows that there necessarily is for the offset to make it all match.
That is curve fitting, not based on any real process. Indeed, if the temperature changes, then the slope of that change will have an influence on the CO2 rate of change. But not necessarily the whole slope in the rate of change: as the emissions show a similar slope, you can’t know which of the two has the highest influence. And that doesn’t imply any influence of temperature on the average height of the rate of change.
…of temperature differential. You do not know the temperature differential with the deep oceans, and how that changes as it upwells to the surface.
As the deep oceans have an enormous mass with little variation in temperature (around 5°C), only the surface temperature matters for extra CO2 releases, at 16 ppmv/°C…
At 5°C, the deep ocean waters are undersaturated in CO2, as happens at the sink places near the poles. The upwelling at the Pacific warm pool increases its temperature to oversaturation, thus pushing more CO2 out of the waters. But that is limited to the absolute temperature at the surface, not influenced by any temperature difference with the deep oceans. It may be influenced by changes in concentration of the deep ocean waters, but that is an entirely different matter.
Further measurements averaged over the oceans show (with some caveats) that the average pCO2 difference between the whole sea surface and the atmosphere is 7 microatm lower for the sea surface. Thus the oceans (including the deep ocean circulation) are a sink for CO2, not a source. See:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml
and following pages.
Not when you have a long term variation in the thermal energy of the upwelling deep oceans. That is a very bold statement to make without any evidence at all. If there’s one thing we know about… well, everything, it is that it always changes.
Of course, temperature changes always, but for an average temperature change, there is an average CO2 change. That follows from Henry’s Law for the oceans, countered by the reaction of the biosphere.
The thermal energy from the upwelling is peanuts compared to the thermal energy from the sun, which is what dictates the temperature of the ocean’s surface. But again, if there were more or less upwelling from the cold deep oceans, that might influence the surface temperature at the place of upwelling, but that only gives a change of 16 ppmv/°C, no matter the flux (which is compensated at the sink place).
You’re just rearranging the deck chairs. Define Too = To – Coo/(k2*tau2). Now, you have dCo/dt = -Co/tau2 + k2*(T-Too). And, you reach the same conclusions.
You know I give little credence to the ice cores.

Come on, you again define To – Coo/(k2*tau2) as an indefinite influence of a temperature difference on CO2 levels, which has not the slightest bearing on any known natural process – certainly not the oceans nor vegetation. Of course that gives the same conclusions, but based on a wrong premise. The real world says that there is a definite change in CO2 for a definite change in temperature…
Oh, there are a lot of people out here who don’t like the ice cores, because they don’t like the data…
Not possible. Not observed. Then, you will not track the fine detail of the temperature change as here. That’s the whole problem.
Have a look at:
http://www.woodfortrees.org/plot/esrl-co2/derivative/mean:24/plot/gistemp/from:1959/scale:0.2/offset:-0.05
Then add 0.55% of the emissions rate of change (not possible in WFT)…
Indeed the temperature trend seems to track the trend in the rate of CO2 change, but if the reaction to temperature is fast, in the 1-2 year range, even that may be spurious and caused by the emissions.
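For anyone wanting to redo that comparison offline, a rough sketch of what the linked WoodForTrees “derivative / mean:24” pipeline does (the data arrays below are placeholders; the real inputs are the ESRL CO2 and GISTEMP monthly series):

```python
# Sketch of the WoodForTrees processing: a discrete derivative of the
# monthly CO2 series, then a 24-month running mean.
def derivative(series, months_per_year=12):
    # month-to-month difference, scaled to ppmv per year
    return [(series[i] - series[i - 1]) * months_per_year
            for i in range(1, len(series))]

def running_mean(series, window=24):
    # trailing mean over `window` samples
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]
```

Applying `derivative` to the CO2 record and overlaying a scaled temperature anomaly series reproduces the comparison in the link; the 0.55 scaling of the emissions rate mentioned above would be a further overlay that WFT itself cannot draw.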

FerdiEgb
June 7, 2012 8:32 am

Even better, look here:
http://www.woodfortrees.org/plot/esrl-co2/derivative/mean:24/plot/gistemp/from:1959/scale:0.2/detrend:0.1
And then add 0.55 times the rate of change of the emissions…

Bart
June 7, 2012 8:51 am

FerdiEgb says:
June 7, 2012 at 8:32 am
You cannot just arbitrarily remove pieces of the temperature trend you do not like.
We are at an impasse. Again. You are dead set on making reality what you wish it to be, but it is not physically realizable. In time, you will learn that I am right. We will take it up again at another time.

FerdiEgb
June 7, 2012 10:28 am

Bart says:
June 7, 2012 at 8:51 am
You cannot just arbitrarily remove pieces of the temperature trend you do not like

You cannot attribute an arbitrary slope and offset of the temperature trend fully to the temperature influence, if another variable can explain these even better.
Moreover, the influence of a change in temperature lasts only a few years, as Pieter Tans showed. Your solution is physically impossible in periods other than the current one, while mine covers all periods in the past and future.
But see you next time…

June 7, 2012 11:38 am

A standard geometric equation
y = a * x^(bx)
fits the data just as well with only two parameters, but that might not mean very much.
a = 3.1543228750649087E+02
b = 1.0073802042369474E-03
Minimum Error: -4.254266E+00
Maximum Error: 5.169439E+00
Std. Error of Mean: 8.579375E-02
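For readers who want to try this fit, a small sketch. The comment does not state what the independent variable x is; purely as an assumption, I take x to be years elapsed since the start of the record (March 1958), which gives plausible CO2 values at both ends of the series:

```python
# James Phillips' two-parameter geometric fit, y = a * x**(b*x).
# ASSUMPTION: x is years elapsed since March 1958 (not stated in the
# original comment); the parameters are copied from it verbatim.
a = 3.1543228750649087e+02
b = 1.0073802042369474e-03

def co2_fit(years_since_1958):
    x = years_since_1958
    return a * x ** (b * x)
```

Under that assumption, the fit starts near 315 ppm in 1958 and reaches the low 390s ppm by 2012, roughly matching the Mauna Loa record; if x was meant differently, the parameters would need rescaling.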
James Phillips

Bart
June 7, 2012 11:44 am

FerdiEgb says:
June 7, 2012 at 10:28 am
“You cannot attribute an arbitrary slope…”
The slope IS NOT arbitrary. It is in the temperature data. And, the data trumps your wishful thinking.

Allan MacRae
June 7, 2012 11:48 am

Hi Bart,
One caution:
In my opinion, we don’t really KNOW what is happening in the climate system. We only have our hypotheses.
Ferdinand believes that human combustion of fossil fuels is the primary driver of increasing atmospheric CO2. He has his reasons, and has done a tremendous amount of work on this subject.
I believe the cause of increasing atmospheric CO2 is primarily natural. I have summarized my rationale above, at June 5, 2012 at 2:46 pm
I prefer my hypo because
1. My hypo is more consistent with Occam’s Razor – whereas Ferdinand’s hypo requires opposing trend directions at different time scales in the system, mine does not: all trends are consistently in the same direction (temperature drives CO2) at all time scales.
2. My hypo is consistent with the fact that CO2 lags temperature at all measured time scales, from an ~800 year lag on the longer time cycle as evidenced in ice cores, to a ~9 month lag on the shorter time cycle as evidenced by satellite data.
3. I have yet to see evidence of a major human signature in actual CO2 measurements, from the aforementioned AIRS animations to urban CO2 readings (although I expect there are local data that I have not seen that do show urban CO2 impacts, particularly in winter and locally in industrialized China).
4. My hypo is more consistent with the Uniformitarian Principle.
However, my conclusions are based primarily on the balance of probabilities – my hypo is simpler, eliminates (or minimizes) apparent contradictions in the conventional model, and is faithful to the data and the most time-honored principles of scientific inquiry. But is still a probability, not a certainty.
Attempts to disprove my hypo seem to rely primarily on religious, rather than scientific, bases – for example, the ~9 month lag of CO2 after temperature is waved away as a “feedback effect”. The “logic” here is that they “KNOW” CO2 drives temperature, and therefore the observed phenomenon MUST BE a feedback. Right-o!
This “feedback” contention also requires that the same physical (climate) system is operating in OPPOSITE directions within the same time frame – my old boss used to caution people who took this approach, telling them they were “sucking and blowing at the same time”.
This word “feedback” seems to be a favorite in climate science – you may also recall that in the “mainstream climate debate”, large positive feedbacks need to be fabricated in order to demonstrate that the observed increase in atmospheric CO2 will cause catastrophic global warming. The complete absence of ANY evidence that such large positive feedbacks even exist does not seem to trouble the global warming alarmists – these large positive feedbacks are another tenet of the Church of Global Warming. No doubt the only thing that has prevented them from burning “deniers” at the stake has been the emission of even more deadly greenhouse gases. 🙂
Best regards to you and Ferdinand,
Allan

FerdiEgb
June 7, 2012 1:18 pm

Bart says
June 7, 2012 at 10:28 am :
The slope IS NOT arbitrary. It is in the temperature data. And, the data trumps your wishful thinking.

OK, the slope is in the temperature data. That makes you think that the increase of CO2 in the atmosphere is completely caused by the increase in absolute temperature compared to a baseline, and thus that the human emissions play no role. But if the influence of temperature is limited to the change in temperature and limited in time (1-2 years), then the slope doesn’t say anything about how temperature influences CO2 levels over the longer term, and then the human emissions play the most important role.
It is there that we differ in opinion.
But there will be a simple proof of who is right in this case: if the current temperature standstill holds for a few years, or we even have a cooling, the rate of change will stay where it is or decrease if you are right. If I am right, the rate of change will continue to increase together with the emissions, at least in the foreseeable future. Only if the temperature increases does it remain unresolved.
The 1990 change in offset gives already an indication…

Bart
June 7, 2012 6:36 pm

FerdiEgb says:
June 7, 2012 at 1:18 pm
“But if the influence of temperature is limited to the change in temperature and limited in time (1-2 years)…”
Not physically possible.
“But there will be a simple proof of who is right in this case.”
Not really. We both agree CO2 will continue rising for the foreseeable future. Sea surface temperatures would need to decline by over 0.5 degrees for CO2 to start declining. We might see a visible decrease in slope, however, with the moderate cooling expected in the next 20-30 years.

Bart
June 7, 2012 6:40 pm

Allan MacRae says:
June 7, 2012 at 11:48 am
IMO, the probability for human dominated atmospheric CO2 concentration is vanishingly small.

June 8, 2012 12:48 am

Allan MacRae says:
June 7, 2012 at 11:48 am
I have yet to see evidence of a major human signature in actual CO2 measurements
I did give you the reference to the data of Diekirch, Luxembourg, where there is a clear difference in diurnal CO2 peaks between Sundays and weekdays. But even if the human contribution is small and is readily mixed into the bulk of the natural changes, that doesn’t exclude it being the main cause of the increase.
Think about sea level gauges: the change in sea level is completely dwarfed by the tides, and is not directly measurable at all. But one can still calculate the sea level change after some 25 years of data…
For the human influence, 3 years of averaging is enough to filter out the fast temperature-change influence.
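Ferdinand’s sea-level analogy can be illustrated numerically. This is a hypothetical sketch with invented numbers (a 3 mm/yr trend buried under metre-scale tides), not real gauge data: an ordinary least-squares fit still recovers the small trend from 25 years of noisy hourly readings.

```python
import numpy as np

# Invented illustration: a small linear trend (3 mm/yr) buried in a
# semidiurnal tide swinging ~2 m peak to peak, plus weather noise.
rng = np.random.default_rng(0)
t_hours = np.arange(0, 25 * 365.25 * 24.0)           # 25 years, hourly
t_years = t_hours / (365.25 * 24.0)

trend_mm_per_yr = 3.0
tide = 1000.0 * np.sin(2 * np.pi * t_hours / 12.42)  # tide amplitude in mm
noise = 50.0 * rng.standard_normal(t_hours.size)     # waves, weather, ...
level = trend_mm_per_yr * t_years + tide + noise

# Ordinary least squares recovers the trend despite the huge tidal signal.
slope, intercept = np.polyfit(t_years, level, 1)
print(f"fitted trend: {slope:.2f} mm/yr")            # should be close to 3.0
```

The tidal signal is uncorrelated with time over the full record, so it averages out of the fit; the same logic is why a small secular CO2 rise can be extracted despite large diurnal or seasonal swings.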
This “feedback” contention also requires that the same physical (climate) system is operating in OPPOSITE directions within the same time frame
That the main effect is temperature driving CO2 levels doesn’t exclude the opposite effect of CO2 on temperature. As long as the effect is small (an overall coefficient of less than 1), there is no runaway effect. See the difference between a theoretical change in CO2 with and without feedback on temperature, even with a lag of CO2:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/feedback.jpg
What you get is that both temperature and CO2 levels end up somewhat higher than without feedback for a finite change in the initial temperature driver.

FerdiEgb
June 8, 2012 12:59 am

Bart says:
June 7, 2012 at 6:36 pm
But if the influence of temperature is limited to the change in temperature and limited in time (1-2 years)…”
Not physically possible.

Never heard of Henry’s Law? Any finite temperature change of the ocean’s surface introduces a finite change in the pCO2 of the seawater, and thus needs a finite change in the pCO2 of the atmosphere to bring everything back into dynamic equilibrium. The time constant to reach the new equilibrium is 1-2 years…
That doesn’t mean that a new equilibrium is reached at any given time, as the temperature changes continuously, but there is no unlimited change in CO2 for a constant offset in temperature.
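The Henry’s-law argument above amounts to a first-order relaxation toward a finite new equilibrium. A minimal sketch, with an assumed sensitivity of 8 ppmv/°C and an assumed 1.5-year time constant (both illustrative, not measured values):

```python
import math

# First-order response to a temperature step: the equilibrium CO2 offset
# is finite (ALPHA * dT), approached with e-folding time TAU. Both
# constants here are assumptions for illustration only.
ALPHA = 8.0     # ppmv of atmospheric CO2 per degree C (assumed)
TAU = 1.5       # e-folding time in years (assumed)

def co2_offset(dT, t):
    """CO2 offset (ppmv) t years after a temperature step of dT degrees C."""
    return ALPHA * dT * (1.0 - math.exp(-t / TAU))

# A +0.5 C step settles toward a finite +4 ppmv, not unbounded growth:
for t in (1, 2, 5, 20):
    print(f"t = {t:2d} yr: {co2_offset(0.5, t):.2f} ppmv")
```

The point of the sketch is the shape, not the numbers: a constant temperature offset yields a bounded CO2 offset, which is why a persistent slope in CO2 cannot come from a mere step in temperature under this model.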
We might see a visible decrease in slope, however, with the moderate cooling expected in the next 20-30 years.
Indeed, I was talking about the rate of change, not the absolute levels. The change in slope should already be observable in the next 2-3 years…

Ed
June 8, 2012 3:22 am

Pamela Gray says:
Anything as regular as this data says one of two things.
1. A manmade CO2 pump sitting next to the sensor that never shuts off and is exquisitely tuned to a rhythmic increasing beat.
2. Artifact of the “fudge” factor part of the CO2 calculation.
Of these two scenarios, I think #2 has the greater chance of being the culprit. It is exceedingly rare for anything on Earth to be that regular (even if caused by human pollution) unless someone fine-tuned it to be that regular. It’s like finding a perfectly square rock in the mountains and finding out nature made it. Ain’t gonna happen. Chances are something that regular is wholly artifact. That a person can build a simple model to express the regularity of the signal is revealing, to say the least. Someone have the complete maths sequence for the CO2 calculation?

I’m starting to get suspicious of the Sun too. How come it rises and sets to a fixed timetable each day, year after year? And how come those warmist weather forecasters at the Met Office are supposed to predict the tides, which conveniently all take place to a fixed timetable? And what about the seasons? They’re a little too regular for my liking.
I tell you the CAGW hoax may extend much further than we previously thought.

Myrrh
June 8, 2012 4:38 am

Ferdinand, as I’ve answered Richard’s question here the way I have, I’d like to just tidy up by answering a previous post of yours in another discussion on Mauna Loa where you reply to the same question from Joy:
http://wattsupwiththat.com/2008/08/04/one-day-later-mauna-loa-co2-graph-changes-data-doesnt/#comment-30463
“The mass balance is the ultimate proof of the contribution of humans to the increase of CO2 in the atmosphere. As long as more is emitted than there is increase, there is no net addition from nature. That not all emissions are absorbed by the oceans/vegetation is a matter of process equilibrium: one need a driving force to push CO2 from the atmosphere into the oceans(/vegetation), which is only possible if the pressure of CO2 in the atmosphere (pCO2atm) is higher than the average pCO2 of the oceans. Thus levels in the atmosphere must go up first. The higher the pressure difference, the more CO2 is absorbed.”
Your reasoning comes from the fisics of AGW as I’ve given in my reply to Richard, claiming that the gases in the atmosphere are ideal and not real (ideal and real being technical terms: ideal is an imaginary construct, real is the real world). As I’ve explained, in the AGW Science Fiction world you’re arguing from, the ideal gases have no properties, no volume, no weight, no attraction, and are not subject to gravity. Your carbon dioxide doesn’t exist.
In the real world carbon dioxide is a real gas with volume and weight and attraction, in a real-gas atmosphere consisting mainly of the real gas molecules nitrogen and oxygen. What this means is that above us we have a huge, heavy, voluminous ocean of fluid gas, not empty space; that’s why we have sound*.
In the real world atmosphere lighter gases rise through air (Air, the fluid gas ocean above and around us) and heavier-than-Air gases sink. Methane, which is lighter than air, will always rise. Carbon dioxide, being one and a half times heavier than air, will always spontaneously sink, displacing the lighter air around it, unless work is being done on it by wind, heat, and so on; being heavier than air, it will not spontaneously rise into the atmosphere.
Carbon dioxide, being a real gas and not the property-less ideal gas of the AGW Science Fiction world, does have attraction: it and water vapour have an irresistible attraction for each other. That’s why all rain is carbonic acid; all pure clean rain is pure clean carbonic acid. Carbonic acid is spontaneously formed wherever water and carbon dioxide are together in the atmosphere, which is why your iron garden furniture rusts outside: all humidity in the air, all fog, dew, and so on is carbonic acid. Carbon dioxide is therefore fully part of the Water Cycle; it is constantly being rained back to Earth whenever it is not sinking, being heavier than air, in between being moved around by wind.
That is the Carbon Life Cycle, as fully part of the Water Cycle and Carbon Dioxide in its own right heavier than Air, sinking back to Earth from fires, volcanic eruption etc.
There is no way that ‘anthropogenic’ Carbon Dioxide is separate in process or properties from any naturally produced or naturally going into sinks; there is no ‘accumulation of anthropogenic carbon dioxide in the atmosphere’.
Impossible.

Myrrh
June 8, 2012 4:54 am

p.s. *
To understand that the atmosphere around us is not ’empty space full of ideal gas molecules zipping around at great speed bouncing off each other and so thoroughly diffusing’, but a heavy, weighing a ton on your shoulders, voluminous fluid ocean of real gas molecules, subject to gravity and going nowhere fast, you have to leave your fictional world behind and get back to real world physics:
Sound: http://www.mediacollege.com/audio/01/sound-waves.html
“Note that air molecules do not actually travel from the loudspeaker to the ear (that would be wind). Each individual molecule only moves a small distance as it vibrates, but it causes the adjacent molecules to vibrate in a rippling effect all the way to the ear.”
Have you stepped back through the looking glass?

Gail Combs
June 8, 2012 10:30 am

Allan MacRae says: @ June 5, 2012 at 9:42 pm
…..The CO2 sequestered in thick beds of limestones, dolomites, coal, lignite, peat and petroleum all over the planet was once, I presume, part of Earth’s atmosphere.
I also assume that over time, continued sequestration of atmospheric CO2 in these sediments will ultimately lead to atmospheric CO2 concentrations that are too low to sustain photosynthesis.
Barring an earlier natural catastrophe, will this mechanism lead to the end of life on Earth as we know it, as photosynthesis shuts down and the food chain fails?
____________________________
That is the real point that needs to be gotten across to everyone. That is the real catastrophe involving CO2 not global warming.
I figure humans mining coal and oil is nature’s way of releasing CO2 back to the atmosphere and preserving all “the endangered species” threatened by CO2 starvation: “the endangered species” being much of the plant and higher animal life forms. Just the increase of 150 to 200 ppm of CO2 assumed by the CAGW crowd has had a major effect on increasing the food supply for plants and therefore animals.

Gail Combs
June 8, 2012 11:25 am

richardscourtney says: @ June 6, 2012 at 3:24 am
…….
Thanks for telling of your attempts at validating the Mauna Loa data. I cannot say I am surprised at what you relate. The treatment of Dr. Zbigniew Jaworowski shows that it is not, and has never been, about “Science”.
Another point that is often overlooked is the major change in vegetation in the USA and Europe over time. Because of the demand for firewood for heating, much of the USA, Europe and elsewhere was clear-cut during the Little Ice Age. You can walk through the woods in New England and see the old stone walls from the time when much of New England was farm, not forest. The introduction of coal for heating meant forests were no longer cut for firewood. Wood burning was the predominant global energy source until about 1880, when the use of coal was necessitated by wood depletion engendered by rising population pressures coupled with an increased demand for high-energy-density sources for nascent manufacturing enterprises. The time period prior to this was known as the Little Ice Age (1300-1850), roughly 500 years of cooling. I wonder what the sea surface temperatures were?
Another point people do not take into account is the effect on US agriculture of the “New Deal” policies restricting the amount of land that could be planted and the Soil Conservation Act passed in response to the Dust Bowl of the 1930s.

During World War I about one million acres of grassland in western Nebraska, better suited to grazing than to crops, was plowed under and planted. In the 1920s farmers were so desperate to increase income that they over plowed, over planted, and over grazed the land on the Great Plains…. http://www.livinghistoryfarm.org/farminginthe30s/crops_09.html

This makes an interesting backdrop to Beck’s graph of historic CO2 measurements from 1826 to 1960, from which Callendar cherry-picked the lowest results to represent the “background” CO2. (close-up) The last two graphs are thanks to Lucy Skywalker.
(Some think the blip in the 1940s in the Beck graph could be due to the oil slicks from WWII.)

June 8, 2012 12:19 pm

Myrrh says:
June 8, 2012 at 4:38 am
Carbon dioxide, being one and a half times heavier than air, will always spontaneously sink, displacing the lighter air around it, unless work is being done on it by wind, heat, and so on; being heavier than air, it will not spontaneously rise into the atmosphere.
It will. I don’t know where you live, but here in Europe we frequently see Sahara sand settled on our cars (the same for the West Coast of the US with sand from the Chinese/Mongolian deserts) if the wind is from that direction. That sand travels thousands of kilometers, even though it is hundreds of times heavier than air or CO2. The difference in density between CO2 and air is only a factor of 1.5, not hundreds, so once mixed in, CO2 may be transported over hundreds of times longer distances than sand, thus simply all over the world.
Further, CO2 is measured near the ground (where it may be pure CO2 or pure air or anything in between), but from about 500 m above land and over the oceans, there is nearly as much CO2 near the surface as at 4,000 m or up to 20 km altitude (measured by satellites, balloons and airplanes)…
According to what you believe, the measurements at Mauna Loa or the South Pole should show far lower CO2 values than near the ground, which is not the case at all.
The only cases where CO2 stays (temporarily) near the ground are where a huge upwelling occurs at once, so that wind or simple convection has no time to mix the CO2 in; or in caves, where CO2 can build up if more is produced than is removed; or in ice cores, where the CO2 in the stagnant part of the air/ice column increases by about 1% near the bottom over a period of 40 years, called “gravitational fractionation” (which is compensated for in the CO2 measurements of the ice bubbles).
See further:
Brownian motion at
http://en.wikipedia.org/wiki/Brownian_motion
and about CO2 measurements at height:
http://www.mendeley.com/research/co2-columnaveraged-volume-mixing-ratio-derived-tsukuba-measurements-commercial-airlines-17/
and here a lot of them:
http://www.esrl.noaa.gov/gmd/ccgg/iadv/
check the airplane data there, e.g. from Rarotonga (500-6,500 m), and look at the (lack of) differences…

Myrrh
June 8, 2012 12:35 pm

Gail Combs says:
June 8, 2012 at 11:25 am
(Some think the blip in 1940′s in the Beck graph could be due to the oilslicks from WWII. )
=====
Julian Flood’s theory re temperatures, which I first learned about in his post (first link below). I’d just been looking at Beck’s work on CO2 spikes, which showed the same spike, so I looked for more info on the Kriegsmarine theory and found Julian’s first post about this in 2010. I posted to let him know of the Beck data (second link) and also put into that post what I take to be Julian’s first post on the subject on WUWT.
Just realised I was so delighted to find that link with Beck’s work that I didn’t look for his original post about this on Judith Curry’s blog. But that will now have to wait until after dinner.
http://wattsupwiththat.com/2012/06/03/shocker-the-hansengiss-team-paper-that-says-we-argue-that-rapid-warming-in-recent-decades-has-been-driven-mainly-by-non-co2-greenhouse-gases/#comment-1000669
http://wattsupwiththat.com/2012/06/03/shocker-the-hansengiss-team-paper-that-says-we-argue-that-rapid-warming-in-recent-decades-has-been-driven-mainly-by-non-co2-greenhouse-gases/#comment-1000861

FerdiEgb
June 8, 2012 12:54 pm

Pamela Gray says:
Someone have the complete maths sequence for the CO2 calculation?
I have received two days of raw voltage data from Pieter Tans on simple request. Using the calculations as outlined in the guidelines at:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html#instrument
one can implement that in Excel.
After the calibration of the instrument each hour, the previous (2 × 20 minutes of 10-second snapshot) voltage values can be translated into CO2 levels. These are averaged to obtain the average and standard deviation over the past hour. The averaged hourly data (+ stdv) are available online for four baseline stations at:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
My comparison of the raw voltage data with what was stored is here for one hour of data:
http://www.ferdinand-engelbeen.be/klimaat/mlo_raw_v_2006_11_17_00h.xls
I calculated that for the full two days and simply found that the hourly average data reflected the 10-second sampling. Thus if you think that the data are manipulated, I don’t know where that should have happened.
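The reduction Ferdinand describes (calibrate each hour, convert the 10-second voltage snapshots, then average) can be sketched roughly as follows. This is not NOAA’s actual code; the reference-gas values and analyzer voltages are invented for illustration only.

```python
import numpy as np

# Sketch of the data reduction: fit a calibration curve through three
# reference-gas readings, convert the 10-second sample voltages to ppm,
# then report the hourly mean and standard deviation. All numbers are
# invented; only the procedure mirrors the description above.
cal_ppm = np.array([330.0, 380.0, 430.0])      # reference gas CO2 (assumed)
cal_volts = np.array([1.10, 1.32, 1.53])       # analyzer response (invented)

# Quadratic calibration: ppm as a function of voltage.
coeffs = np.polyfit(cal_volts, cal_ppm, 2)

# 2 x 20 minutes of 10-second snapshots = 240 voltage samples.
sample_volts = 1.33 + 0.002 * np.random.default_rng(1).standard_normal(240)
sample_ppm = np.polyval(coeffs, sample_volts)

print(f"hourly mean:  {sample_ppm.mean():.2f} ppm")
print(f"hourly stdev: {sample_ppm.std(ddof=1):.2f} ppm")
```

Implementing exactly this in a spreadsheet, as Ferdinand says he did, lets anyone check that the published hourly averages really do reflect the underlying 10-second samples.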

FerdiEgb
June 8, 2012 1:17 pm

Myrrh says:
June 8, 2012 at 12:35 pm
Besides the problems by using unreliable data from heavily contaminated places, Beck’s high CO2 values around 1942 are not confirmed by any other CO2 (or d13C) proxy I know of. Including stomata data, the other posterchild to haunt the ice core data…
But the main problem is the speed with which it should have happened. While a few thousand volcanoes all spewing lots of extra CO2 could remotely possibly give an 80 ppmv increase in only 7 years, there is no sink in this world which can absorb 80 ppmv of CO2 within 7 years, if Beck’s CO2 peak were true. If you know of such a mechanism, I am very interested…

Gail Combs
June 8, 2012 1:30 pm

Ferdinand Engelbeen says: @ June 6, 2012 at 8:55 am
And again, if you find that the AIRS data show that CO2 is not well mixed, then you have a different definition of well mixed than I have. Well mixed doesn’t mean that any huge injection or removal of CO2 into/out of the atmosphere is instantly distributed all over the earth. It only says that such exchanges will be distributed all over the earth in a reasonable period of time.
______________________________________
Ah yes, the “let’s change the argument” defense.
The whole idea behind the “well mixed” argument was so the CAGW team could take measurements and SHOW that CO2 is increasing. If CO2 is NOT well mixed, i.e. UNIFORMLY DISTRIBUTED throughout the atmosphere, and there is no such thing as a “Baseline CO2”, then you end up with huge error bars on all your measurements and you cannot show that CO2 is increasing, the whole goal of the entire exercise.
Think of the very nice smooth curve showing increasing CO2 from MANKIND, where ice core data is grafted to Callendar’s cherry-picked data, which is then grafted to the Mauna Loa data. Lucy shows the graph here and the fudge used to make it fit here.
Or of Willis’s Carpet Diagram
A study of WHEAT shows just how fast the CO2 content in air can change.

The CO2 concentration at 2 m above the crop was found to be fairly constant during the daylight hours on single days or from day-to-day throughout the growing season ranging from about 310 to 320 p.p.m. Nocturnal values were more variable and were between 10 and 200 p.p.m. higher than the daytime values. http://www.sciencedirect.com/science/article/pii/0002157173900034

Then there are the Cameroon killer lakes that release CO2 and can kill people and animals as far away as 25 km (15 miles) (link). As Myrrh keeps reminding us, CO2 is heavier than air and it takes energy (wind) to lift it into the atmosphere. So how come we are not seeing major spikes in CO2 downwind from coal plants or cities?
As Allan MacRae says @ June 4, 2012 at 7:38 pm

The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature. If your premise were correct, you would see CO2 peaks at breakfast and supper times and the proximate (in time) morning and evening rush hours, when power demand and urban driving are at their maxima. This human signature is absent in the SLC data, and yet the natural signature is clearly apparent and predominant…
Similarly, in the AIRS animation I posted earlier, there is NO human signature and the power of nature is clearly evident. Here it is again. http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

So the CO2 is not really well mixed, NOT EVEN IN THE MID TROPOSPHERE, where it is shown to vary by as much as six percent even when taking samples from an area as big as 90 km × 90 km. What it does show is that CO2 levels are subject to the dynamics of the natural carbon/water cycle.
On the plant stomata: they are good for low values of CO2 and underestimate values above 325 ppm. Therefore they are a decent check on the low values found by the ice core analysis.

….Plant stomata react more accurately to CO2 concentration, as has been determined in experiments. (More CO2 means fewer stomata, as plants exchange CO2 more efficiently.) Historical collections of leaves can be used to determine past CO2 levels. In most cases, researchers are bound by the modern paradigm and get confused by the low stomata counts of the past. Stomata cannot measure very high CO2, but only indicate high CO2; higher CO2 levels over 325 ppm are underestimated. When reading stomata research, you need to filter out the ruling paradigm when the problematical ice-core data is used to calibrate the stomata, when it should be the reverse.
Rapid atmospheric changes are well known from past reconstructions:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC129389/pdf/pq1902012011.pdf

BTW, my definition of “well mixed” comes from doing analysis of batches of drugs regulated by the FDA. A wishy-washy definition like you have would have landed my rear in JAIL!

FerdiEgb
June 8, 2012 3:24 pm

Gail Combs says:
June 8, 2012 at 1:30 pm
If there is no such thing as a “Baseline CO2″ then you end up with huge error bars on all your measurements and you can not show that CO2 is increasing, the whole goal of the entire exercise.
The “error bars” as seen by AIRS are a few ppmv up and down all over the earth, while the increase is some 70 ppmv since 1950. AIRS shows the same levels and the same increase in CO2 as Mauna Loa in the same area, and hardly any differences over the rest of the earth. Thus what is your problem? Only that you don’t like the data?
Ice Core data is grafted to Callender’s cherry-picked data that is then grafted to the Mauna Loa data
If you could for once set your biases aside and read some literature, you would know that, whatever Callendar’s criteria were for picking the best data (and several were right on the mark), the ice core data simply confirmed his “best guesses”.
The ice core data are in no way grafted onto the Mauna Loa data. That is what the late Jaworowski said, but it is completely bogus and, in my opinion, completely disqualified him as an ice core specialist. The “arbitrary” shift of 83 years is because Jaworowski used the column for the age of the ice layers in Neftel’s table of the Siple Dome ice core, while CO2 is measured in the gas phase, which is much younger than the ice at the same depth. See:
http://www.ferdinand-engelbeen.be/klimaat/jaworowski.html#The_arbitrary_shift_of_airice_data
Anyone with the slightest awareness that it takes years for the bubbles in an ice core to close, during which exchange with the atmosphere is still possible, would know that.
And since 1996 we have the work of Etheridge on three Law Dome ice cores, where he measured CO2 in firn, from the top down to closing depth, and in ice at the same depth. There was a real overlap of some 20 years (1960-1980) between the ice core CO2 and the measurements at the South Pole:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_sp_co2.jpg
Similarly, in the AIRS animation I posted earlier, there is NO human signature
There is hardly any measurable human signature in the momentary CO2 data, simply because the signal is too small. But look at a few years of data and it becomes clear.
About stomata data: they have far more problems than ice core data, and are not even close to their accuracy. Stomata are a local proxy, where local changes in CO2 level caused by land use changes can give a huge change in offset, even if you calibrate them to ice cores over the past century. And how do you want to calibrate ice cores with a measurement error of +/- 1.2 ppmv to stomata data with an accuracy of +/- 10 ppmv?
And why would you worry about a drug within a +/- 2% tolerance in the active ingredient, if the pharmaceutical firm slowly increased it by 30% over the years?

Stephen Wilde
June 8, 2012 4:35 pm

FerdiEgb.
It is far more likely that neither ice cores nor stomata give an accurate representation of the natural scale of atmospheric CO2 variations when the concentration is as low as it is today compared to far higher levels in the geological past.
I hope we can agree that the largest part of the natural cycle would be responses to sea surface temperatures.
Well, if the background level gets as low as it was (around 280 ppm), practically at the danger level for life on Earth, then the percentage swings from ocean cycling are bound to be far larger than if the background level were as high as it often was in the distant past.
For all we know the natural swings could well be a doubling from LIA to date and a halving from MWP to LIA purely from changes in the ocean/ atmosphere exchange.
Neither the ice core nor the stomata proxies are necessarily sufficiently well delineated on the multicentennial timescale to refute such a possibility.
We are trying to rely on proxies that are far too coarse for the purpose to which they are being put.

Stephen Wilde
June 8, 2012 4:40 pm

“So how come we are not seeing major spikes in CO2 down wind from coal plants or cities?”
Good question. One can be even more specific than that.
The densest areas of CO2 are downwind of ocean tracks, especially where the flow hits a continent and slows down so that the CO2 can accumulate, with no significant excess at all downwind of populated areas.

Stephen Wilde
June 8, 2012 5:05 pm

Whoops.
Above, I meant to say:
For all we know the natural swings could be an increase of 30% or so from LIA to date and 30 – 50% from MWP to LIA.
And as regards evidence about the CO2 distribution see here:
http://climaterealists.com/index.php?id=9508
“Evidence that Oceans not Man control CO2 emissions”

Myrrh
June 8, 2012 5:42 pm

Ferdinand, will reply to your post later this weekend.

Gail Combs
June 8, 2012 6:18 pm

FerdiEgb says:
June 8, 2012 at 3:24 pm
Gail Combs says:
June 8, 2012 at 1:30 pm
If there is no such thing as a “Baseline CO2″ then you end up with huge error bars on all your measurements and you can not show that CO2 is increasing, the whole goal of the entire exercise.
>>>>>>>>>>>>>>>>>>>>>
The “error bars” as seen by AIRS are a few ppmv up and down all over the earth….
>>>>>>>>>>>>>>>>>>>>>
Good Grief, you are doing it again, completely missing the point.
I am not talking about the error bars of the method. I have spent my career running gas chromatographs and infrared spectrophotometers. I am well aware they are accurate and precise if used properly and calibrated.
What I am talking about is the error in determining the so called background CO2 of the ENTIRE ATMOSPHERE. If I have a well-mixed batch and I take ten samples from various points and test for component “A” I will get a bell curve with a small standard deviation. This tells me the batch is “Well Mixed” and uniform. If the test results for each point are different I get a large standard deviation and my ability to estimate the “true value” of component “A” has much larger error bars.
Therefore you have two standard deviations: one for the test method and one for the sampling plan. As Stephen Wilde pointed out, if the standard deviation for the test method is large, you can lose the data in the noise from the test method. If the thing you are measuring is not homogeneous, then the standard deviation for the SAMPLING PLAN is large and your estimate of the “true value” has large error bars despite the precision and accuracy of your test method.
Also, if I take a sample of air, place it in a flask, and then test it, I am testing a point source. Both the ice core measurements and the AIRS measurements (90 km × 90 km) are not point source measurements; they are COMPOSITE SAMPLES. They are the equivalent of taking several flasks of air, mixing them and doing one analysis. Since we both agree the chemical analysis is not a great source of error, this is the equivalent of taking the average of 10 or 100 or 1,000 point source samples using a flask. Yet even with this intrinsic averaging we see a decent amount of variation: about 6% in the mid troposphere from AIRS, for gosh sakes, where you cannot blame the variation on sinks or sources close by!
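Gail’s two-standard-deviations point can be illustrated with invented numbers: the spread among individual point samples reflects sampling (spatial) variance plus method variance, while a composite sample averages away the spatial spread before analysis, hiding it without removing it.

```python
import numpy as np

# Illustrative sketch (all values invented): total measurement variance =
# method variance + sampling variance. Point samples expose the spatial
# spread; a composite collapses it into a single reading.
rng = np.random.default_rng(2)

true_field = 390.0 + 8.0 * rng.standard_normal(1000)  # spatially varying ppm
method_noise = 0.2                                    # analyzer stdev, ppm

# Ten individual point samples: their spread is dominated by sampling.
points = rng.choice(true_field, 10) + method_noise * rng.standard_normal(10)

# One composite of 100 point samples, mixed, then a single analysis:
composite = rng.choice(true_field, 100).mean() + method_noise * rng.standard_normal()

print(f"point-sample stdev: {points.std(ddof=1):.1f} ppm")  # reveals spatial spread
print(f"composite reading:  {composite:.1f} ppm")           # spread hidden in one number
```

This is the sense in which a 90 km × 90 km retrieval is a composite: its single value cannot tell you how variable the field inside the footprint was.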

Allan MacRae
June 8, 2012 10:50 pm

Time Capsule:
Below is an exchange from January 2008, soon after I had written my paper on this subject (dCO2/dt varies ~contemporaneously with temperature, and CO2 lags temperature by ~9 months), and Roy Spencer had written his two papers on a similar topic. Nice to see how this radically different viewpoint has apparently become a bit less heretical since then – and I no longer feel quite so nervous around bonfires.
Even nicer to see how Ernst Beck is no longer being treated with disrespect and even derision. I never met Ernst but had the privilege of exchanging about 60 emails with him in 2008 alone. I recall Ernst as a remarkably decent and intelligent soul, who suffered because he dared to speak out against the global warming juggernaut. Ernst left us too soon in 2010, and I hope he can look down upon us now and at last feel some measure of vindication.
_________________
http://wattsupwiththat.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/
Eric Adler (17:12:30) :
“Your analysis leaves out an important factor. It is known to all, including the scientists who wrote the IPCC report, that the change in CO2 concentration in the atmosphere is driven by 2 things:
1) An accelerating upward trend in CO2 due to human caused emissions.
2) The variation in the oceans’ ability to absorb the CO2, which decreases with increasing sea surface temperature.”
Your comment may or may not be correct – over the next decades, we may see the truth emerge from the data.
However, your tone with me and especially with Roy is aggressive and ill-advised.
Re: “It is known to all…”:
Really, such hogwash. I am reminded of that IPCC highlight, Mann’s hockey stick, that eliminated the Medieval Warm Period and the Little Ice Age; also of the Divergence Problem. Mann and the IPCC were clearly wrong – the only remaining question here is not one of error, it is one of fraud.
I am also reminded of the greatly exaggerated climate sensitivity used by the IPCC to produce their scary scenarios, and the ridiculous climate models that continue to predict catastrophic warming, even though Global Warming ceased a decade ago.
I remind you that ice core data shows a ~600 year lag of CO2 after temperature at that time scale. I have provided evidence at shorter time scales. Ernst Beck has provided significant evidence at intermediate time scales, and has suffered scorn from the likes of you.
I also remind you of the “missing sink”, whereby only half of humanmade CO2 reports to the atmosphere. The rest, presumably, is hidden away by evil climate skeptics (or do you prefer the term “climate deniers”?).
Still, there may be a significant humanmade CO2 component, which cannot be ruled out at this time.
So even if the final conclusion in my paper turns out to be wrong, it will still be much closer to the truth than any of the IPCC’s scary conclusions, which are clearly false, alarmist, self-serving and extremely expensive for humanity.
There has been no Global Warming for a decade, and evidence is mounting that Earth will enter a 20-30 year cooling period as the PDO has shifted to cool mode.
I await the IPCC’s smooth transition from Catastrophic Humanmade Global Warming to Catastrophic Humanmade Global Cooling, and your spirited defense thereof. Watch out for whiplash when you change directions.

June 9, 2012 1:42 am

Gail Combs says:
June 8, 2012 at 6:18 pm
I am not talking about the error bars of the method.
Neither did I. It is about the variability in the CO2 data, as the measurements are quite reliable (on fixed stations +/- 0.2 ppmv, AIRS at +/- 5 ppmv).
Look at the scale for the AIRS presentations:
http://photojournal.jpl.nasa.gov/jpeg/PIA12339.jpg
The scale is 382-390 ppmv. For a monthly average, mid-summer. That is a variability of average +/- 1% of the full scale. In other months where the largest seasonal changes are at work, that is +/- 2% of full scale all over the world. For a change of + or – 20% in CO2 fluxes between atmosphere and oceans/biosphere at ground level. I call that well mixed on such a time scale.
Then have a look at the trends over 7 years:
http://svs.gsfc.nasa.gov/vis/a000000/a003600/a003685/AIRSC02_MLOComposite.mp4
An increase, both in the Mauna Loa data and in the AIRS data, of about 14 ppmv in 7 years’ time, or nearly 2% of full scale in less than 10 years. Thus the average CO2 level increased by more than the global variation. No matter the cause, we may be confident that there is an increase.
What is the impact of the global variation on the greenhouse effect? I don’t want to enter into any discussion of the real impact of the radiation effect, but based on real absorption figures (Modtran), one can assume some 0.9°C for a CO2 doubling, without any positive or negative feedback. Thus a change of 2% of full scale has a (logarithmic) impact of ~0.03°C on the surface, after ~30 years of adjustment of the ocean’s temperature. Simply not measurable.
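For those who want to check the arithmetic, here is a minimal sketch, taking the 0.9°C-per-doubling figure above as given and assuming a purely logarithmic response (all numbers illustrative):

```python
import math

def delta_t(c_new, c_old, sensitivity_per_doubling=0.9):
    """Surface temperature response (degrees C) for a CO2 change,
    assuming a purely logarithmic effect with the quoted
    no-feedback sensitivity per doubling."""
    return sensitivity_per_doubling * math.log2(c_new / c_old)

# A 2% rise, e.g. 390 ppmv -> ~398 ppmv:
print(round(delta_t(390 * 1.02, 390), 3))  # 0.026
```

Which confirms that a 2% change in CO2 works out to roughly 0.03°C under those assumptions.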
Thus the (mainly seasonal) variability you see in the AIRS (and MLO and Barrow, and…) data has negligible impact on the greenhouse effect. Even if you take into account that the levels near ground over land can be much higher, that has negligible impact on the greenhouse effect (even with 1000 ppmv in the first 1000 meters).
The 30% increase since the industrial revolution started may have had an impact of about 0.4°C over the past 150 years, that is all. Not including the self-regulating effect of the earth’s water thermostat…
Thus whatever you think about the real variability of the CO2 levels, the impact (if any) is from the total increase over the full time span of reliable measurements.

FerdiEgb
June 9, 2012 2:09 am

Stephen Wilde says:
June 8, 2012 at 4:35 pm
I hope we can agree that the largest part of the natural cycle would be responses to sea surface temperatures.
Yes, but don’t underestimate the impact of vegetation: in spring, when the mid-NH starts growing new leaves, the CO2 levels sink rapidly to minimum levels. The d13C measurements then show that vegetation growth is the largest cause of the decrease (and the opposite in fall). Thus while the oceans have the largest total impact (continuous between equator and poles, seasonal for the mid-latitudes), land vegetation has the largest impact on the seasonal variation. That is the reason there is less seasonal variation in the SH.
For all we know the natural swings could well be a doubling from LIA to date and a halving from MWP to LIA purely from changes in the ocean/ atmosphere exchange.
The ice core data are CO2 levels averaged over 8-600 years, depending on the accumulation rate. With about 8 ppmv/°C over ice ages / interglacials. The averaging does smooth out larger variations, but that doesn’t change the average. Thus if there was more variation, the 180 ppmv minimum measured in the Vostok and other ice cores could have been even lower, which I don’t expect.
Fortunately for land plants, soil bacteria give some CO2 back to the atmosphere, which means that CO2 over land is on average higher than background, at least during a few morning hours in sunlight:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_mlo_monthly.jpg
Thus if even a change over a glacial/interglacial has no more impact than 8 ppmv/°C, there is no reason to assume that the MWP-LIA difference or the LIA-current difference would have a much larger impact.

barry
June 9, 2012 5:59 am

We have measurements from many different locations all over the world corroborating a steady rise in CO2, and these reflect not only each other over the long term, but also the annual variability over different latitudes. We have plenty of evidence that our inventory of atmospheric CO2 is reasonably accurate.
Human industry outputs about twice the amount of CO2 that is added to the atmosphere every year (on average).
The change in isotopic ratios for CO2 in the atmosphere is exactly in line with what is expected from burning fossil fuels. There is no natural source of CO2 that would give us the isotopic ratio changes we see (unless a vast store of fossil fuels has been burning naturally for more than a century – anything’s possible).
Atmospheric content of oxygen is decreasing in proportion to the amount of fossil fuel being burned.
The oceans are currently a net sink for CO2 and have been accumulating CO2 for as long as we’ve measured this directly.
So, human industry outputs more than 100% of the extra CO2 added to the atmosphere (per annum, averaged over a few years). A theory that posits that the CO2 rise over the last couple hundred years comes from nature has to overcome a few basic problems.
First, it has to explain why anthro CO2 doesn’t add to the atmosphere – indeed it must explain how anthro CO2 gets sequestered in favour of a natural source. How does the biosphere know to do this?
It has to explain the change in isotopes.
It has to identify – with actual data – a physical mechanism that is responsible for the accumulating CO2.
We have data that explains the CO2 rise from anthropogenic sources. We have no actual data that identifies an increasing natural source, or a decreasing natural sink, to explain the rise.
Occam’s razor works well here!
(Easy-going video on CO2 isotopic ratio http://www.youtube.com/watch?v=UXgDrr6qiUk)
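The mass-balance argument above reduces to one line of arithmetic. A sketch with round numbers, assumed purely for illustration (not actual inventory figures):

```python
# Illustrative mass balance, per the "twice the amount" point above.
# Both figures are round numbers assumed for the sketch.
emissions = 8.0      # GtC/yr emitted by human industry (assumed)
atm_increase = 4.0   # GtC/yr observed rise in the atmosphere (assumed)

# Net natural contribution = what's in the air minus what we put there.
natural_net = atm_increase - emissions
print(natural_net)  # -4.0 -> nature as a whole is a net *sink*
```

If the observed rise is only half of the emissions, nature must be removing the difference, regardless of where each individual molecule goes.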

Reply to  barry
June 9, 2012 1:37 pm

Barry,
Click on my name, read my blog, and then decide what the 13CO2 index is telling us about what is natural and what is anthropogenic.

Gail Combs
June 9, 2012 8:09 am

Myrrh, I just reread your comment at June 6, 2012 at 5:06 am and it got me to thinking. Why did Keeling pick Hawaii and Mauna Loa? That is aside from wanting to live in a tropical paradise instead of sitting on an ice field and therefore having his pick of eager scientists and grad students to work for him.
First let’s deal with the basic assumption we make that scientists are honest and the data are not manipulated. This is the basis for the belief in the Mauna Loa data. However, we have seen ample evidence here on WUWT and in science in general that this is a very bad assumption, especially when dealing with scientists pursuing “A Cause”.
So what data do we have about Keeling’s agenda?

http://www.co2web.info/ESEF3VO2.pdf
…At the Mauna Loa Observatory the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques. Critique has also been directed to the analytical methodology and sampling error problems (Jaworowski et al., 1992 a; and Segalstad, 1996, for further references), and the fact that the results of the measurements were “edited” (Bacastow et al., 1985); large portions of raw data were rejected, leaving just a small fraction of the raw data subjected to averaging techniques (Pales & Keeling, 1965).
The acknowledgement in the paper by Pales & Keeling (1965) describes how the Mauna Loa CO2 monitoring program started: “The Scripps program to monitor CO2 in the atmosphere and oceans was conceived and initiated by Dr. Roger Revelle who was director of the Scripps Institution of Oceanography while the present work was in progress. Revelle foresaw the geochemical implications of the rise in atmospheric CO2 resulting from fossil fuel combustion, and he sought means to ensure that this ‘large scale geophysical experiment’, as he termed it, would be adequately documented as it occurred. During all stages of the present work Revelle was mentor, consultant, antagonist. He shared with us his broad knowledge of earth science and appreciation for the oceans and atmosphere as they really exist, and he inspired us to keep in sight the objectives which he had originally persuaded us to accept.”

The first clue in this snippet is “the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques.” Now I am not the math whiz that Willis and others are, but if there is one thing I am familiar with, it is doing quantitative analysis on “a new infra-red (IR) absorbing instrument.”
Keeling’s “success” using the IR for quantitative work made it into the literature (1965), and my bosses, the two PhD chemists who owned the company I worked for, were eager to try out the new method. If we could make it work it would cut the analysis time by a factor of four. We were using a Gas Chromatograph. Very accurate, slow, and a royal pain. (In 1973 we did not have computers, so measurements and calculations were all by hand.)
The typical method is to spike the sample with a known amount of a substance that has a peak where the test sample has no peak. For the initial calibration of the instrument, artificial samples are made and run with, for example, 10ppm, 50ppm, 100ppm, 150ppm, 200ppm, 250ppm, 300ppm, 350ppm, 400ppm, 500ppm, and maybe 600ppm of the molecule of interest. All are spiked with the same amount of calibration substance. Several runs would be made of these calibration materials and a curve plotted. A high and a low calibration sample would be run with each set of test samples, and new calibration materials would be made up daily.
This works like a charm for Gas Chromatographs. The repeated runs of each calibration material are very tight and a nice smooth curve is produced. With our brand new (1973) state-of-the-art IR we could not get a calibration curve worth beans! The data points were all over the place. No matter what we tried we could not get a tight standard deviation. (The “we” was two PhD chemists, one MS chemist, two BS chemists, a PhD chemical engineer and assorted techs.) I talked to another chemist (head of the Borg-Warner labs) during that time period and he had the same problem with trying to get good quantitative analysis results from the IR. Therefore I am not surprised there was no validation of the test method against the traditional analytical techniques. (Analytical test equipment has come a long way since then, esp. with the addition of computers.)
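For what it’s worth, the internal-standard calibration described above amounts to something like this, with all peak ratios and concentrations invented for illustration:

```python
# Sketch of an internal-standard calibration: responses are the ratio
# of analyte peak area to the spike's peak area. All numbers invented.
standards_ppm = [10, 50, 100, 200, 300, 400]
response = [0.05, 0.25, 0.50, 1.00, 1.50, 2.00]  # analyte/spike ratio

# Least-squares line through the calibration points (y = m*x + b).
n = len(standards_ppm)
sx, sy = sum(standards_ppm), sum(response)
sxx = sum(x * x for x in standards_ppm)
sxy = sum(x * y for x, y in zip(standards_ppm, response))
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - m * sx) / n

# Read an unknown off the curve: observed ratio -> concentration.
unknown_ratio = 0.80
conc = (unknown_ratio - b) / m
print(round(conc))  # 160 (ppm, with these invented numbers)
```

When the repeated runs are tight, the points sit on the fitted line; when they are scattered (as with our IR), the fitted line is meaningless no matter how carefully you compute it.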
The second clue of course is Revelle
“…inspired us to keep in sight the objectives which he had originally persuaded us to accept.” Why, in the name of the thousand little gods, would Keeling have to be “persuaded” to keep in sight Revelle’s “objectives” and “document” them if he was an honest scientist? The whole statement is all about a preconceived conclusion about atmospheric CO2’s response to fossil fuel combustion.
Now back to why Keeling picked Hawaii and Mauna Loa.
Keeling had the option of picking the Arctic, Alaska or the Antarctic.
Here is the data he would have seen from Barrow Alaska before he made his choice.

Date        CO2 ppm   Latitude   Longitude   Author        Location
1947.7500   407.9     71.00      -156.80     Scholander    Barrow
1947.8334   420.6     71.00      -156.80     Scholander    Barrow
1947.9166   412.1     71.00      -156.80     Scholander    Barrow
1948.0000   385.7     71.00      -156.80     Scholander    Barrow
1948.0834   424.4     71.00      -156.80     Scholander    Barrow
1948.1666   452.3     71.00      -156.80     Scholander    Barrow
1948.2500   448.3     71.00      -156.80     Scholander    Barrow
1948.3334   429.3     71.00      -156.80     Scholander    Barrow
1948.4166   394.3     71.00      -156.80     Scholander    Barrow
1948.5000   386.7     71.00      -156.80     Scholander    Barrow
1948.5834   398.3     71.00      -156.80     Scholander    Barrow
1948.6667   414.5     71.00      -156.80     Scholander    Barrow
1948.9166   500.0     71.00      -156.80     Scholander    Barrow
These data must not be used for commercial purposes or gain in any way, you should observe the conventions of academic citation in a version of the following form: [Ernst-Georg Beck, real history of CO2 gas analysis, http://www.biomind.de/realCO2/data.htm ]

The data is much too high for Dr. Revelle’s purpose and there is not the option of cherry-picking that there is at Mauna Loa.
So what about Mauna Loa?
Topo Maps: http://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Hawaii_Island_topographic_map-fr.svg/728px-Hawaii_Island_topographic_map-fr.svg.png
1951 South: http://www.lib.utexas.edu/maps/topo/250k/txu-pclmaps-topo-us-hawaii_south-1951.jpg
1951 North: http://www.lib.utexas.edu/maps/topo/250k/txu-pclmaps-topo-us-hawaii_north-1951.jpg
Note that the Kohala Mtn Forest Reserve and several water courses, as well as the Upolu Point Airport, are north of Mauna Loa. Toward the northwest is the lava flow of 1859 and two areas marked “Settlement”. South and west of that is the Honuaula Forest Reserve, complete with a sheep station, followed by the Kahaluu Forest Reserve and the Puu Lehua Ranch. To the northeast is the lava flow of 1881 and the Mauna Loa Game and Forest Reserve. There are several dated lava flows marked. The lava seems to be the mottled brown area. Woods-brushwood is designated by a clean white area and water courses are in blue (legend is at the bottom).
A satellite image of the Hawaii island chain: http://geology.com/satellite/hawaii-satellite-image.shtml
So what does that tell us about Mauna Loa?
1. It sits near the top of an active volcano.

The Mauna Loa (Hawaii) observatory has been regarded an ideal site for global CO2 monitoring. However, it is located near the top of an active volcano, which has, on average, one eruption every three and a half years. There are permanent CO2 emissions from a rift zone situated only 4 km from the observatory, and the largest active volcanic crater in the world is only 27 km from the observatory. These special site characteristics have made “editing” of the results an established procedure, which may introduce a subjective bias in the estimates of the “true” values. A similar procedure is used at other CO2 -observatories. There are also problems connected to the instrumental methods for measurements of atmospheric CO2 ….
…The concentration of CO2 in the gases emitted from the Mauna Loa and Kilauea volcanoes of Hawaii reaches about 47%. This is more than 50 times higher than in volcanic gases emitted in many other volcanic regions of the world. The reason for this is the alkaline nature of this volcanism, strongly associated with mantle CO2 degassing. The Kilauea volcano alone is releasing about 1 MT CO2 per year, plus 60 – 130 kT SO2 per year (Harris and Anderson, 1983) http://www.co2web.info/np-m-119.pdf

2. The lava fields are surrounded by tropical vegetation and the wind blows up the slopes of the volcano during the day.
The wheat study shows vegetation can lower the CO2 2 meters above the canopy to a constant 310ppm during the day. The Harvard Forest Study and the Rannells Praire KS study also show there is a variation between 320 ppm and 400 ppm. http://harvardforest.fas.harvard.edu/publications/pdfs/Dang_J_Geophys_Res_2011.pdf
Also many people think the Lava flows, many from the 1800’s, are sterile. This is untrue.

Building Soil
Begin with bare rock – the Hawaiian Islands, for instance. The first organisms to colonize land newly created by lava flows must be able to provide their own nutrients by means of light or chemical energy. Cyanobacteria (blue-green algae), the first colonizers, are able to photosynthesize; some are able to “fix” atmospheric nitrogen, making it available to plants. Lichens (an alliance between fungi and algae) are also early colonizers, providing their own nutrients; they also produce unusual acids that help break down rock. Eventually, as a thin layer of soil develops on the lava, higher plants begin to move in; many of the first have a nitrogen-fixing capability…. http://www.pacifichorticulture.org/garden-allies/71/4/

There was also this WUWT post Earth follows the warming: soils add 100 million tons of CO2 per year
3. The Ring of Fire – MAP

The true extent to which the ocean bed is dotted with volcanoes has been revealed by researchers who have counted 201,055 underwater cones. This is over 10 times more than have been found before. The team estimates that in total there could be about 3 million submarine volcanoes, 39,000 of which rise more than 1000 metres over the sea bed. http://www.newscientist.com/article/dn12218

Volcano Outgasing of CO2.

The primary source of carbon/CO2 is outgassing from the Earth’s interior at midocean ridges, hotspot volcanoes, and subduction-related volcanic arcs. http://www.columbia.edu/~vjd1/carbon.htm

4. Ocean ~ This is where things get really interesting.
a. Oceans effect CO2 uptake by three methods. Increase in humidity => rain => absorbing CO2 out of the air to form Carbonic Acid (as you already noted) http://ion.chem.usu.edu/~sbialkow/Classes/3650/Carbonate/Carbonic%20Acid.html
b. phytoplankton remove CO2 “…from the surface ocean when the dying cells sink to depth makes way for the uptake of more CO2. In a way, the tiny organisms act as a biological conveyer belt for the transport of carbon dioxide out of the surface and into the deep ocean…” Various species of phytoplankton form the crucial diet for many marine organisms…
c. ..there is also an important geochemical balance. CO2 in the atmosphere is in equilibrium with carbonic acid dissolved in the ocean, which in turn is close to CaCO3 saturation and in equilibrium with carbonate shells of organisms and lime (calcium carbonate; limestone) in the ocean…
The fourth effect of the ocean on CO2 is dissolving the gas or out-gassing it, depending on temperature. Temperature will also affect phytoplankton blooms.
Here is Bob Tisdale’s South Pacific Sea Surface Temperature graph (1990-2012) and north Pacific Sea Surface Temperature graph, also the 1900 to 2009 raw Pacific Decadal Oscillation and the smoothed PDO.
From 1950ish to 1985ish we see an increase in SST of close to 2C but then the temperature drops by a degree or more from 1985ish til now.
So let’s recap.
This “pristine site” is influenced by CO2 discharges not only from Mauna Loa but from the rest of the “ring of fire”. There is a massive CO2 vegetation sink in the land and ocean surrounding the observatory, not to mention the organisms on the lava itself. You have China cranking up an industrial revolution since she signed the WTO in September of 2001. On top of all of that you have a Pacific ocean SST that warmed two degrees C and then cooled over one degree C. And with ALL of that going on you want me to believe in that nice smooth curve that Willis shows at the top of this page??? NO WAY!
Heck Mauna Loa Observatory even TELLS us they cherry pick the data!

“At Mauna Loa we use the following data selection criteria:
3. There is often a diurnal wind flow pattern on Mauna Loa ….. The upslope air may have CO2 that has been lowered by plants removing CO2 through photosynthesis at lower elevations on the island,…. Hours that are likely affected by local photosynthesis are indicated by a “U” flag in the hourly data file, and by the blue color in Figure 2. The selection to minimize this potential non-background bias takes place as part of step 4. At night the flow is often downslope, bringing background air. However, that air is sometimes contaminated by CO2 emissions from the crater of Mauna Loa. As the air meanders down the slope that situation is characterized by high variability of the CO2 mole fraction….. http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html

And so they do their curve fitting.

4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur…..”
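The iterated two-standard-deviation rejection quoted above can be sketched as a toy version (using a simple mean in place of NOAA’s fitted curve, with made-up hourly values):

```python
# Toy version of the iterated 2-sigma outlier rejection described in the
# quoted NOAA procedure: compute a fit (here just the mean; NOAA fits a
# smooth curve), drop points more than 2 standard deviations away,
# repeat until nothing more is dropped.
def reject_outliers(values, n_sigma=2.0):
    vals = list(values)
    while True:
        mean = sum(vals) / len(vals)
        sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        kept = [v for v in vals if abs(v - mean) <= n_sigma * sd]
        if len(kept) == len(vals):
            return kept
        vals = kept

# One volcanic spike among otherwise steady hourly averages (invented):
hourly = [392.1, 392.3, 392.2, 392.4, 397.8, 392.2, 392.3]
print(reject_outliers(hourly))  # the 397.8 spike is dropped
```

Whether one calls this “cherry picking” or quality control, the effect on the long-term trend can be checked directly against the published raw hourly data.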

If I were to pick a “pristine site” on land for CO2 measurements, it would be a tower sitting in the Gobi Desert. The Gobi Desert is a plateau around 3,000 to 5,000 feet above sea level. It is one of the largest deserts in the world, 1,300,000 sq km in area. Much of the Gobi is not sandy but is covered with bare rock. Precipitation averages less than 100 mm per year, while some areas only get rain once every two or three years. It is in the rain shadow of the Himalayas.
The other option of course is the Antarctic, another desert.

June 9, 2012 2:33 pm

Gail Combs says:
June 9, 2012 at 8:09 am
At the Mauna Loa Observatory the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques.
Gail, please before you quote such opinion, have some thinking of yourself and please read the reasons why Keeling used a different method than the old chemical ones. See his autobiography:
http://scrippsco2.ucsd.edu/publications/keeling_autobiography.pdf
Keeling looked at a different method, simply because the existing methods were not accurate enough (average +/- 10 ppmv) and/or very time consuming. He started by fabricating an extremely accurate device, based on a manometric method, with an accuracy of 1:40,000, to measure CO2 in the atmosphere and later to calibrate the NIR devices and calibration mixtures. That device was still in use until a few years ago at the Scripps Institution. Meanwhile it is NOAA that is responsible for the worldwide calibrations, but Scripps still uses its own calibration gases and techniques.
Because you are knowledgeable on analyses: how can you validate a new NIR technique, accurate to about 0.1 ppmv with an old technique, accurate to about 10 ppmv?
With our brand new (1973) state of the art IR we could not get a calibration curve worth beans! The data points were all over the place.
Did you remove (or compensate for) water vapour? Keeling did by freezing most water vapour out over a cold trap (-70°C) before measuring CO2. Without that step, you can find any value as water and CO2 overlap in several bands. Alternatively, nowadays handheld CO2 meters measure water vapour in a different band and then compensate for the water vapour in the CO2 band.
“…inspired us to keep in sight the objectives which he had originally persuaded us to accept.”
The only objective of Keeling (and Revelle) was to have accurate CO2 measurements. With the old chemical methods, not even the seasonal variation was clear in the large variability of measurements of that time. Revelle was of the opinion that more CO2 and warming was beneficial for humanity… Think about the period in which this history was playing out: the late fifties, more than a decade before the “global cooling” scare and 30 years before the “global warming” scare…
Keeling had the option of picking the Arctic, Alaska or the Antarctic.
If you had made some effort to read the history, you would have known that the first continuous measurements by Keeling’s new instrument were at the South Pole, starting one year before Mauna Loa… But because there is a gap of a few years in the continuous data, Mauna Loa is mostly referred to as the station with the longest continuous history.
And then Barrow. One of the current baseline stations, a near-ideal place to measure CO2 (with seaside wind). Except that the micro-Scholander method used in 1947-1948 had an accuracy of +/- 150 ppmv! The instrument was used to measure CO2 in exhaled air (at about 20,000 ppmv). The calibration procedure was by sampling outside air. If the result was within 200-500 ppmv, the apparatus was deemed ready for its purpose… The calibration figures are what you showed, completely worthless for any idea of the real CO2 levels at Barrow. Despite that, they were used by the late Ernst Beck to calculate his “global average”…
Modern data at Barrow since 1971 show a larger seasonal swing than Mauna Loa, but exactly the same trend, only leading MLO by an average of 6 months.
Then your litany about what can go wrong at Mauna Loa and the cherry picking there. They publish all the raw hourly averages + stdev from Mauna Loa, Barrow, Samoa and the South Pole, including outliers. Have a look at them and calculate for yourself whether the “cherry picking” by not using the clearly locally contaminated outliers influences the yearly average or the trend by more than 0.1 ppmv:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
If I was to pick a “pristine site” on land for CO2 measurements it would be a tower sitting in the Gobi Desert
We fulfill your wishes immediately:
Have a look at the CO2 data from Ulaan Uul (at 914 m height), Mongolia, or Plateau Assy, Kazakhstan, or Mt Waliguan, PRC, and compare the data and trends with those of Mauna Loa:
http://www.esrl.noaa.gov/gmd/ccgg/iadv/

June 9, 2012 3:06 pm

Gail Combs @ 6/9 – 8:09 a.m. :
Thank you for some very interesting information and links! I took particular note of the comparative results of the IR Absorption vs. Gas Chromatograph calibrations. Surely in the name of all that is gaseous these problems with the IR instruments have been worked out long before now, yes?

Allan MacRae
June 9, 2012 8:11 pm

Barry – do you have any data references for ocean CO2 concentrations versus time?
How about good data references to all your other allegations?
In answer to your question (essentially) about the mass balance argument, here is a hypothesis:
From the limited urban CO2 data I have seen, it is apparent that, at least in spring and summer, humanmade CO2 is sequestered very near to its terrestrial (usually urban) source, and the human signature is absent from the daily as well as the seasonal CO2 data. The rest of the natural system, the sparsely-populated rural areas, the vast boreal forests of Canada and Russia, the vast tropical forests of the Amazon and SE Asia, and the great plains and deserts all just soldier on, without so much as a tip of the hat to what happens in the cities – you could think of it as “what happens in Vegas stays in Vegas”.
Nature just does not give a damn about humanity and its CO2 emissions – the effects do not extend far outside the urban areas – indeed they are often invisible even within the cities themselves.
Being totally ignored by Mother Nature is painful to contemplate, especially when we humans always thought we were the centre of the universe, but good old Mom just does not even know we exist, nor does she care.
Sorry Buck-o,
Try to be strong.

Gail Combs
June 9, 2012 8:57 pm

Ferdinand Engelbeen says:
June 9, 2012 at 1:42 am
Gail Combs says:
June 8, 2012 at 6:18 pm
I am not talking about the error bars of the method.
Neither did I. It is about the variability in the CO2 data, as the measurements are quite reliable (on fixed stations +/- 0.2 ppmv, AIRS at +/- 5 ppmv).
Look at the scale for the AIRS presentations:
http://photojournal.jpl.nasa.gov/jpeg/PIA12339.jpg
The scale is 382-390 ppmv. For a monthly average, mid-summer. That is a variability of average +/- 1% of the full scale. In other months where the largest seasonal changes are at work, that is +/- 2% of full scale all over the world. For a change of + or – 20% in CO2 fluxes between atmosphere and oceans/biosphere at ground level. I call that well mixed on such a time scale…..
_______________________________________
GOOD GRIEF! Do you not understand what an AVERAGE DOES? I can take five lots of 10 midgets and 10 basketball players, measure them, and say the height for these lots of people is 5 foot 11.3 inches +/- 2 inches. Because I used AVERAGES and not individuals, I completely lost the variability!
Both the AIRS data and the Mauna Loa data are AVERAGES. Heck, you can see what using averages does in this graph of SST vs CO2. The smooth artificial-looking curve starting about 1960 is the Mauna Loa data.
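To make the point concrete with toy numbers (heights invented for the sketch):

```python
# Averaging hides variability: two very different groups, one bland mean.
short = [4.5] * 10   # heights in feet (invented)
tall = [6.5] * 10
lot = short + tall

mean = sum(lot) / len(lot)
print(mean)  # 5.5 -> a "height" that describes nobody in either group
```

The mean is perfectly well defined, but it tells you nothing about the spread that went into it.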

Gail Combs
June 9, 2012 9:29 pm

Leigh B. Kelley says:
June 9, 2012 at 3:06 pm
Gail Combs @ 6/9 – 8:09 a.m. :
Thank you for some very interesting information and links! I took particular note of the comparative results of the IR Absorption vs. Gas Chromatograph calibrations. Surely in the name of all that is gaseous these problems with the IR instruments have been worked out long before now, yes?
_________________________________
First remember that nice smooth curve starts in 1958. You can compare the Mauna Loa data to the wet chemistry methods in this graph (Mauna Loa starts in 1958 on the graph)
Keeling got a really nice smooth curve for the last 50 years, didn’t he?
Now here is some information about the test method.
I should first mention that before computers did the integration, the area under the curve used to determine the amount of a component was worked out by hand. The method was to take a sharp pencil and a ruler, carefully figure out where the curve started and stopped (the baseline), and then draw the baseline. Next, measure the height of the peak from the drawn baseline and also measure the width at half height. (Figuring out where the baseline was could be a royal pain.) Use the formula for a triangle to calculate the area. The other method was to use special paper, cut out the peak, and weigh it. Obviously, having a computer do the integration was a great boost to accuracy.
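In other words, the hand rule amounted to something like this: treat the peak as a triangle whose base is twice the width at half height, so the area collapses to height times width-at-half-height (units arbitrary, numbers invented):

```python
# Hand-integration rule from the description above: approximate a
# chromatogram peak as a triangle. With base = 2 * width-at-half-height,
# the triangle formula (1/2 * base * height) reduces to:
def peak_area(height, width_at_half_height):
    return height * width_at_half_height

print(peak_area(12.0, 0.5))  # 6.0 (arbitrary chart units)
```

The accuracy of the whole analysis then hinged on how well you eyeballed that baseline with a pencil and ruler.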
History of the GC
The katharometer, also known as the thermal conductivity detector or the hot wire detector, was developed for Gas Chromatographs in 1954. It was a relatively low-sensitivity detector. It was followed by the flame thermocouple detector (similar sensitivity) in 1956. Perkin-Elmer produced their first commercial model in May of 1955. The ubiquitous flame ionization detector (FID) described by McWilliams [5] in 1958… was to become the workhorse of all GC analyses, having an extremely high sensitivity and a linear dynamic range exceeding five orders of magnitude. Finally, the exciting family of argon ionization detectors was described by Lovelock [6] in 1960. Correctly designed and operated, the argon detectors could provide sensitivities at least one order of magnitude greater than the FID, and the electron capture detector nearly two orders of magnitude greater than the FID. P/E introduced the FID with capillary columns in 1958 and it was produced until the late 1960s. In 1962 a model with baseline compensation was introduced. It wasn’t until 1977 that microprocessor-controlled GCs were introduced, and in 1980 full data-handling capabilities were added (the computer to integrate those curves).
http://www.perkinelmer.com/CMSResources/Images/44-74443BRO_GasChromaEvolution.pdf
IR Spectrophotometers
In 1944 P/E introduced its first (single beam) IR. The double beam was introduced in 1957. A computer controlled IR was introduced in 1976 and data manipulation in 1979. A major increase in reliability was gained in 1984 with the rotating mirror pair design. More innovation was seen from the 1990s on. http://www.perkinelmer.com/CMSResources/Images/44-74388BRO_60YearsInfraredSpectroscopy.pdf
So to put it bluntly, the error in Keeling’s data was probably around +/- 10ppm or more when he started and +/- 5ppm or more (SWAG) in the 60s and 70s. That is assuming it was in the same order of magnitude as the GCs of the same vintage. Another point to remember is that universities and other noncommercial labs do not get state-of-the-art equipment just because a newer model with more bells and whistles came out. Generally the universities got the equipment the corporations donated, so it is at least a generation or two older than that in the commercial labs, and the equipment in the commercial labs is normally not exactly young either. You would not believe some of the dinosaurs I used; there was one piece of equipment built in the 1800s still in use in one of the factories I worked in!
Here is a modern study comparing GC to IR for analyzing gases.

Can modern infrared analyzers replace gas chromatography to measure anesthetic vapor concentrations?
…Repeated injections of a given sample from a tank or flask containing known volumetric standards into a GC give values with a standard deviation of less than 2–3%…Because GC is a calibrated reference standard, it can be considered to be an accurate measure of the concentrations within the known limits of accuracy. IR is therefore compared directly to the GC measurements,…
…The deviation from GC calculated as (100* [IR-GC]/GC) for the medium and high concentrations ranged from -9 to 6% for isoflurane, from -11 to 5% for sevoflurane, and from -9 to 11% for desflurane. Deviation was more pronounced with the lower concentrations: from -18 to 2% for isoflurane, from -20 to 0% for sevoflurane, and from -8 to 21% for desflurane….
…..Because our study demonstrates difference in performance between individual units, our study suggests that GC remains the method of choice to measure absolute concentrations…..
Conclusion
In summary, the use of different IR absorption bands by the M-CAiOV compact multi-gas analyzer (General Electric) has allowed automated agent detection and may technically have facilitated the compensation for cross-sensitivity between anesthetic vapors and other gases, but has not improved accuracy of vapor analysis beyond that of older IR analyzers. IR and GC cannot be used interchangeably, because the deviations between GC and IR mount up to ± 20%, and because individual analyzers differ unpredictably in their performance.

Looks like I am not the only one who prefers the GC for accuracy over the IR!
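The deviation statistic quoted from that study, 100*(IR-GC)/GC, is easy to reproduce in a few lines. The sample numbers below are invented purely for illustration:

```python
# Percent deviation of an IR reading from the GC reference, as defined in the
# anesthetic-vapor study quoted above: 100 * (IR - GC) / GC.
# The example values are made up for illustration only.

def percent_deviation(ir_reading, gc_reference):
    """Deviation of an IR measurement from the GC reference, in percent."""
    return 100.0 * (ir_reading - gc_reference) / gc_reference

# e.g. an IR analyzer reporting 0.95 vol% when the GC reference says 1.00 vol%
print(percent_deviation(0.95, 1.00))  # roughly -5 percent
```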

barry
June 9, 2012 9:53 pm

Allan,
try the following site for CO2 data: http://ds.data.jma.go.jp/gmd/wdcgg/
There are nearly 200 locations, many of which have long enough records to do trend comparisons.

From the limited urban CO2 data I have seen, it is apparent that, at least in spring and summer, humanmade CO2 is sequestered very near to its terrestrial (usually urban) source

As you say, nature doesn’t care about anthro CO2. The sinks near an urban site will not distinguish between anthro and natural. Neither will the sinks in rural sites. The only thing that matters is that we’re putting out twice the amount that’s being added to the air. Basic arithmetic is hard to refute, and one needs to make all sorts of weird contortions to deny it.
Do you have any data sets showing natural sources that have increased over recent history, or natural sinks that have decreased? So far these ideas seem little more than speculation.

Gail Combs
June 9, 2012 9:56 pm

Oh, Leigh, I should also stick in the link to this discussion on the old data. Tony, if I remember correctly, has access to a really great library and can get stuff not available on the internet.
Historic variations in CO2 measurements

….The apparent considerable natural variation in CO2-see figure 3-due to ocean to air exchange (amongst other factors) puts the apparently irrational variable figures from the 19th Century onwards into context, yet IPCC AR4 suggests a remarkably constant 285ppm at this time, despite the expected outgasing and inflow caused by variability in ocean temperatures. The IPCC icon is Mauna Loa so it is instructive to go to the oracle so see what that says about variability…
so the 335ppm to 368ppm again puts the observed variability in the historic samples in much better context. The overall effect of taking CO2 measurements at Mauna Loa situated on top of an active volcano at over 3000 m altitude and surrounded by a constantly outgasing warm ocean shall be left to others to debate, but the averaging disguises the considerable daily variability….
…This analysis seems at variance with the information now available, and that Keeling later came to believe in the accuracy of the old measurements he had previously rejected as being too high is demonstrated in his own autobiography. Ironically Callendar in the last years of his life also doubted his own AGW hypothesis. Similarly, whilst Arrhenius’ first paper on the likely effect of doubling CO2- with temperature rises up to 5C- is often quoted, his second paper ten years later- when he basically admitted he had got his initial calculations wrong- is rarely heard…

So Tony also understands all that averaging of the data “disguises the considerable daily variability”

June 10, 2012 1:52 am

Gail Combs says:
June 9, 2012 at 8:57 pm
GOOD GRIEF! Do you not understand what an AVERAGE DOES! I can take five lots of 10 midgets and 10 basketball players, measure them, and say the height for these lots of people is 5 foot 11.3 inches +/- 2 inches. Because I used AVERAGES and not individuals, I completely lost the variability!
Please Gail! You don’t like to accept that the daily, or even monthly or yearly, variations have not the slightest measurable impact on the greenhouse effect. We are not talking about drugs, with a very pronounced dose-effect relationship within a few hours, but about CO2, where you need at least a 10% change in level over at least 10 years to have a measurable effect (if any).
And have a look at the hourly averaged raw data from Mauna Loa and the South Pole for 2008, compared to the “cleaned” daily and monthly averaged:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
It doesn’t make any difference for the radiation effect if the CO2 momentarily changed from 200 to 800 ppmv and back in some location, as that has an influence which is simply unmeasurable. Only the long-term average is of interest.
BTW, your +/- 2 inches would be in error, the stdv is much larger and gives you a pretty good idea of the variability in the combined lot…
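Ferdinand’s point here, that the standard deviation of the pooled lot does retain the variability an average alone hides, can be checked with a quick sketch (heights invented for illustration, echoing Gail’s two-group example):

```python
import statistics

# Two hypothetical groups of heights (inches): a short group and a tall group.
short_group = [48, 50, 49, 51, 47]   # around 4 ft
tall_group  = [78, 80, 79, 81, 77]   # around 6.5 ft
combined = short_group + tall_group

mean = statistics.mean(combined)     # the average alone hides the two groups
stdev = statistics.pstdev(combined)  # ...but the spread of ~15 in. reveals them

print(mean, stdev)
```

Reporting only “64 +/- 2 inches” would indeed be wrong; the combined standard deviation is about 15 inches, which flags that the lot is nothing like a uniform population (though it does not by itself show the distribution is two-humped).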

FerdiEgb
June 10, 2012 2:27 am

Gail Combs says:
June 9, 2012 at 9:29 pm
So to put it bluntly the error in Keelings data was probably around +/- 10ppm or more when he started and +/- 5ppm or more (SWAG) in the 60s and 70s.
The error in the first decades was +/- 1 ppmv (all hand calculations from long analog rolls) and in recent decades +/- 0.2 ppmv. As said before, Keeling froze out the main problem, water vapour. Further, the procedure includes an hourly calibration with 2 (nowadays 3) calibration gases. Thus whatever the instrument’s individual properties or whatever the drift of the instrument over time, that is near fully taken into account.
Besides that, at several points on earth (including Mauna Loa), flask samples are taken as reference, which are independently checked by different (even competing: Scripps against NOAA) labs. Some again use NIR, some use GC for the analyses, and Scripps still uses its manometric method to test their own flask samples from Mauna Loa. The different lab results and continuous analyses are within 0.2 ppmv, see:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html#replication
A few stations even use an automated GC for continuous CO2 analyses. All show the same “smooth” (if averaged over a year) trends over time…
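The hourly calibration scheme Ferdinand describes amounts to a two-point linear correction: the analyzer’s raw response to two reference gases of known concentration fixes a gain and offset, so instrument drift largely cancels. A minimal sketch (the raw readings below are invented):

```python
# Two-point calibration sketch: known reference gases pin down a linear
# correction for a drifting analyzer. All numbers are illustrative only.

def calibrate(raw_low, raw_high, known_low, known_high):
    """Return a function mapping raw analyzer output to concentration (ppmv)."""
    gain = (known_high - known_low) / (raw_high - raw_low)
    return lambda raw: known_low + gain * (raw - raw_low)

# Suppose a drifted analyzer reads 310.0 and 410.0 on 300 and 400 ppmv gases:
to_ppmv = calibrate(310.0, 410.0, 300.0, 400.0)
print(to_ppmv(365.0))  # a sample reading of 365.0 corrects to 355.0 ppmv
```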

Allan MacRae
June 10, 2012 2:44 am

Thanks for the link Barry.
My problem is that of these many measurement sites, it appears that few if any are URBAN.
http://ds.data.jma.go.jp/gmd/wdcgg/cgi-bin/wdcgg/catalogue.cgi
It appears that everyone wants to measure atmospheric CO2 at sites that are as pristine (non-urban) as possible. This is understandable.
If you know of any urban sites with continuous 24-hour CO2 readings, I would be pleased to see them.
Barry, you say at June 9, 2012 at 9:53 pm
“As you say, nature doesn’t care about anthro CO2. The sinks near an urban site will not distinguish between anthro and natural. Neither will the sinks in rural sites. The only thing that matters is that we’re putting out twice the amount that’s being added to the air. Basic arithmetic is hard to refute, and one needs to make all sorts of weird contortions to deny it.”
Barry, my suggestion (or weird contortion of basic arithmetic, as you call it) could be this:
IF (as it appears from the limited urban atmospheric CO2 data I’ve seen) these human-made CO2 emissions are sequestered locally, close to the urban source, they do NOT form part of a large, global quasi-equilibrium of CO2 between vegetation, soil and water. The CO2 is just gone – sequestered locally, by whatever means this happens (probably some form of biological and soil sequestration).
The notion that the CO2 elsewhere in the world has to compensate for this localized near-urban sequestration according to some large mass balance equation is the false assumption. The natural CO2 flux in the rest of the world just carries on as if this urban/near-urban CO2 emission and sequestration took place on another planet – the global CO2 system is not significantly affected by the localized urban phenomena.

Allan MacRae
June 10, 2012 2:58 am

Gail, the scale in this AIRS animation goes from 364 to 386 ppm CO2. That seems reasonable to me.
The greatest contrasts (differences) appear in April-May of each year, and by July these have declined.
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

Gail Combs
June 10, 2012 3:20 am

barry says:
June 9, 2012 at 9:53 pm
…………….Do you have any data sets showing natural sources that have increased over recent history, or natural sinks that have decreased? So far these ideas seem little more than speculation…..
____________________________________________________
You might want to take a peek at the Kaplan Graph
CO2 background from 1826 to 2008 shows a very good correlation ( r= 0,719 using data since 1870) to global SST (Kaplan, KNMI), with a CO2 lag of 1 year behind SST from cross correlation (maximum correlation: 0,7204). Kuo et al. 1990 derived 5-month time lag from MLO data alone compared to air temperature.
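The lag analysis quoted here (CO2 trailing SST, with the lag found by cross-correlation) can be sketched in a few lines. The series below are synthetic, purely to show the method, not the Kaplan or MLO data:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(sst, co2, max_lag=5):
    """Lag (in samples) at which co2 best correlates with earlier sst."""
    scores = {lag: pearson(sst[:len(sst) - lag], co2[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get), scores

# Synthetic annual series: co2 is sst shifted by exactly one year.
sst = [math.sin(0.7 * t) for t in range(40)]
co2 = [0.0] + sst[:-1]
lag, scores = best_lag(sst, co2)
print(lag)  # the method recovers the built-in 1-year lag
```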

FerdiEgb
June 10, 2012 4:53 am

Gail Combs says:
June 10, 2012 at 3:20 am
You might want to take a peek at the Kaplan Graph
CO2 background from 1826 to 2008 shows a very good correlation ( r= 0,719 using data since 1870) to global SST (Kaplan, KNMI), with a CO2 lag of 1 year behind SST from cross correlation

Except that the 1826-1960 period is based on Beck’s compilation, where the 1942 “peak” is not based on any background data, but only on highly fluctuating, locally contaminated data (Giessen: 68 ppmv, 1 sigma) and methods with low to extremely low accuracy.
Further have a look at the sea surface temperature and CO2 in ice cores until 1960 (and an overlap with SPO 1960-1980) and CO2 levels for the period after 1960, where better data are available:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg
Overall, there are several distinct periods: 1910-1945 and 1975-2000 with warming and 1945-1975 with cooling and 2000-current which is flat. Despite that, in all periods CO2 goes monotonically up in near exact ratio with human CO2 emissions, while the correlation with temperature in the period 1945-1975 is even negative.

FerdiEgb
June 10, 2012 5:40 am

Allan MacRae says:
June 10, 2012 at 2:44 am
If you know of any urban sites with continuous 24-hour CO2 readings, I would be pleased to see them.

Here is a semi-urban site near Giessen (Linden), where the historical measurements at 3x per day in the period 1939-1941 are one of the main series in Beck’s compilation that caused the 1942 “peak”:
http://www.hlug.de/?id=7122&view=messwerte&detail=graph&station=1005
Where you can download the data.
The historical site was at the Eastern edge of the town of Giessen, largely downwind of the town for the most prevailing wind directions. The new station is at the edge of a small village, more SW of the town.
IF (as it appears from the limited urban atmospheric CO2 data I’ve seen) these humanmade CO2 emission are sequestered locally close to the urban source, they do NOT form part of a large, global quasi-equilibrium of CO2 between vegetation, soil and water.
Of course they do! Every plant has a limited uptake of CO2, depending on sunlight, water, nutrients and CO2 level. Every molecule of human CO2 captured is one less natural molecule captured. Thus while the human CO2 may be captured within a minute of release by the next nearby tree, the capturing doesn’t influence the increase in total CO2 in the atmosphere, which goes up with the total amount released by humans. The same applies for CO2 capture by the oceans.
But on the other side, the total increase in CO2 also increases the uptake by oceans and vegetation, no matter which of the two sources remains in the atmosphere. The net result is that, on average, half of the emitted quantity remains in the atmosphere, regardless of whether the original human CO2 molecules are all still there or have already been replaced by natural CO2.
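The mass-balance arithmetic behind this exchange is simple enough to write down explicitly. The flux numbers below are rough illustrative values (GtC/yr), not a citation:

```python
# Mass balance: if emissions each year are E and the observed atmospheric rise
# is dC, then nature as a whole must be a net sink of E - dC, whatever the
# gross back-and-forth exchanges are. Values are illustrative only.

emissions_gtc = 9.0          # human CO2 emissions, illustrative
observed_rise_gtc = 4.5      # observed atmospheric increase, illustrative

net_natural_flux = observed_rise_gtc - emissions_gtc   # negative => net sink
airborne_fraction = observed_rise_gtc / emissions_gtc

print(net_natural_flux)    # nature absorbs the other half (-4.5 GtC/yr here)
print(airborne_fraction)   # about half of emissions remain airborne (0.5)
```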

Allan MacRae
June 10, 2012 8:23 am

Thank you Ferdinand,
Linden has CO2 measurements. You say it is semi-urban. In June 2012 the CO2 rises to a maximum at night and declines during the day – there is no urban signature in the CO2 data.
http://www.hlug.de/?id=7122&view=messwerte&detail=graph&station=1005
Satellite Photo of the Linden Station
http://maps.google.ca/maps?hl=en&pq=gersfeld+rhon+germany&cp=11&gs_id=e&xhr=t&q=wasserkuppe&bav=on.2,or.r_gc.r_pw.r_qf.,cf.osb&biw=1600&bih=754&wrapid=tljp133933845412600&um=1&ie=UTF-8&sa=N&tab=wl
So has Wasserkuppe, which is at 931m elevation in a wilderness area.
http://www.hlug.de/?id=7122&station=801
Satellite Photo of the Wasserkuppe Station
http://maps.google.ca/maps?hl=en&pq=gersfeld+rhon+germany&cp=11&gs_id=e&xhr=t&q=wasserkuppe&bav=on.2,or.r_gc.r_pw.r_qf.,cf.osb&biw=1600&bih=754&wrapid=tljp133933845412600&um=1&ie=UTF-8&sa=N&tab=wl
The other ~30 active atmospheric monitoring stations in Germany do NOT measure CO2. Several are in Frankfurt. Pity.
http://www.hlug.de/?id=7122&station=801

FerdiEgb
June 10, 2012 10:07 am

Allan MacRae says:
June 10, 2012 at 8:23 am
The other ~30 active atmospheric monitoring stations in Germany do NOT measure CO2. Several are in Frankfurt. Pity.
The list is from Hessen, only one of the states in Germany.
I knew that Schauinsland in the Black Forest (SW Germany, in another state: Baden-Württemberg) had long-time CO2 measurements, where only 10% were deemed background, due to the high contamination from valley air. While searching for the data, I saw an interesting paper:
http://archiv.ub.uni-heidelberg.de/volltextserver/volltexte/2006/6727/pdf/LevinGRL2003.pdf
They use 14C/12C to determine how much CO2 comes from fossil fuel emissions…

Myrrh
June 10, 2012 1:53 pm

Ferdinand Engelbeen says:
June 8, 2012 at 12:19 pm
Myrrh says:
June 8, 2012 at 4:38 am
Carbon dioxide being one and a half times heavier than air will always spontaneously sink, displacing the lighter air around it, unless work is being done such as wind, heat, and, being heavier than air will not spontaneously rise into the atmosphere.
It will. I don’t know where you live, but here in Europe we frequently see Sahara sand settled on our cars (the same for the West Coast of the US for sand from the Chinese/Mongolian deserts) if the wind is from that direction. That travels thousands of kilometers, even if it is hundreds of times heavier than air or CO2. The difference in specific mass between CO2 and air is only 1.5 times, not hundreds; thus once mixed in, it may be transported over hundreds of times longer distances than sand, thus simply all over the world.
“It will” what? Spontaneously sink or not spontaneously rise in air?
I’ll take it that you’re objecting to “will not spontaneously rise into the atmosphere”, and so you don’t understand what I’ve said, because you give an example of wind carrying it, which is not spontaneous, but work.
Which is why their conclusion in the AIRS data included the note to selves that they needed to go and study wind systems, because it’s not always windy and they found CO2 to be lumpy and not at all well-mixed. Spontaneously, because it is heavier than air, carbon dioxide will always sink, displacing air.
Besides “ideal gas spontaneous diffusion in empty space” and “Brownian motion”, AGWScience Fiction also gives the meme “thoroughly mixed by winds in the turbulent atmosphere” – when these good people educated to fictional fisics memes went off to study wind systems, they would have found immediately that a) it’s not always windy and b) most winds are local and c) local winds and others vary in height and especially, d) that the major wind systems do not cross hemispheres.
Hopefully they also discovered that in their fisics there is no wind – because their atmosphere is made up of ideal gases in empty space, and ideal gases don’t have volume, and wind is volumes of real gas on the move.
Looking out my window now across the garden to the fields to the distant hills, it’s not windy. There’s a slight intermittent gentle breeze around 10-20′, but the tops of the trees higher than this aren’t moving.
Further, CO2 is measured near ground (where it may be pure CO2 or pure air or anything in between), but from about 500 m above land and over the oceans, there is near as much CO2 near the surface as at 4,000 m high as up to 20 km height (measured by satellite, balloons, airplanes)…
Well, that’s far too specific without any detail – show and tell.
According to what you believe, the measurements at Mauna Loa or the South Pole should show far less CO2 values than near ground, which is not the case at all.
Nope, I’ve never said that – I don’t use that expression when discussing any of this ..; “belief” is a word in constant use by AGWSF promoters, not me, because they’re not interested in the facts.
The only cases where CO2 stays (temporarily) near ground are if huge upwelling occurs at once, so that the wind or simple convection has not the time to mix that CO2 in; or in caves, where CO2 can build up if more is produced than is removed; or in ice cores, where the CO2 in the stagnant part of the air/ice column increases by about 1% near the bottom over a period of 40 years, called “gravitational fractionation” (for which the CO2 measurements of the ice bubbles are compensated).
Gravity because carbon dioxide is a real gas not ideal, it has weight, because of gravity.
Well there are lots of ongoing local measurements in forests and such which normally have several flasks at different heights and note changes in winds and such and of course changes due to photosynthesis and so on, most plants do this AM and they’ll be breathing out CO2 the rest of the time. Generally the pattern will be less CO2 at the greater heights except windy conditions, which will disperse it lower down.
[Winds, convection currents, will appear locally from temperature differences in local air as the land heats up and the hot volume of air rising becoming less dense, as will in any carbon dioxide in it, and so adjacent cooler volumes being heavier will sink and flow under this – this flowing volume of air is called wind. Inshore/Offshore descriptions are good for getting the picture because the differences in heat capacities of water and land show this phenomenon clearly.]
And don’t forget rain, any rainstorm will take all the CO2 around and bring it to the ground. All of it.
As I said, Keeling never proved there was such a critter as “well-mixed background” – the natural properties of carbon dioxide and the processes in the natural world make this idea nonsense. It’s all local.
And, as I note in a reply from you to Gail, changing tack and talking averages distracting from this is on par with AGWSF giving contradictory reasons for the claimed existence of this elusive “well-mixed background”. There is no internal consistency in AGWSF fisics, just mixing up real world physics – all to the end of “proving” that “well-mixed background ” exists, you cannot separate out the AGWSF meme with its fake fisics explanations from ‘averages’; there is no real world physics that gives “well-mixed background”, as they discovered in AIRS.
Local measurements will always give local conditions and from these averages could be established, that has to include all the varied factors in play particular to each locality which will tell its own story. Keeling didn’t want this, he had an agenda and picked the lowest value he could to create his fraudulent science, and came up specifically with this idea of “well-mixed background”, which he famously said could be measured as well from Mauna Loa as from anywhere else. Which is why we have the steadily rising faux carbon dioxide levels from Keeling et al unrelated to anything happening with temperatures during that time – see Fig 5 on http://www.globalweathercycles.com/GWGCNCF/chapter5.htm
From http://icecap.us/images/uploads/08_Beck-2.pdf
“2. THE CO2 BACKGROUND HYPOTHESIS
Charles Keeling soon noticed a rise of atmospheric carbon dioxide concentration.
From his measurements in the 50s and 60s on the Pacific coast in USA, on Mauna Loa
and the Antarctica, he concluded that he had measured a constant worldwide
background concentration of CO2. The CO2 background should be the CO2
concentration around the world, free from local sources which had increased from 315
ppm in 1958 to 380 ppm in 2008 mainly by burning of fossil fuels.
“According to the IPCC this is the main cause of global climate change [10]
“Keeling also was the first to introduce carbon isotopes (13C) to the investigation of the
carbon cycle and the origin of the carbon source [17]. This lead to the assumption that
CO2 from fossil sources enrich in the atmosphere while natural photosynthesis and animal
respiration compensate each other and can therefore be ignored. Consequently man-made
CO2 was firmly recognized as the polluter of the atmosphere, i.e. the worldwide rise of
the CO2 concentration. In fact CO2 from burning fossil fuels and phytoplankton from the
oceans—which cover the globe to 71%—have about the same 13 C value.”
“1955: The invention of the background level out of 50 samples”
CHARLES D. KEELING “explanation: in the afternoon best mixing if air without plant + soil influence”
BECK: missing [in blue]: check of soil degassing, check of weather (humidity, precipitation..)!!
“Figure 2: C. Keeling’s first attempts to measure the CO2 background concentration
(Keeling 1958) using the lowest CO2 concentration in the afternoon. On the top the
head of the paper by C. Keeling 1958; below the graph of the diurnal variation of
CO2 at the Olympic Forest in the state of Washington. In red additional information
on the measurement conditions and the type of calculation of a daily average C.
Keeling had used. In blue missing measurements to get the right conclusion.

Figure 2 shows the birth of the idea of a CO2 background concentration 1955.
At that time C. Keeling had measured CO2 in summer, using a home-made
manometer. He needed about 90 minutes to obtain each value [17]. He conceded in
1993 that he had not read any technical literature [16], so he probably did not know
that by using the existing high precision gas analysers designed by Haldane,
Petterson, Schuftan or Kauko, he could have obtained readings within minutes,
getting a far more accurate value down to 0.33%. This means an accuracy of about
+/-1 ppm using 309 ppm in 1955.
Considering the measured diurnal CO2 variation in the forest we notice a much
higher CO2 content in the air at night than during the day, caused by the absorption of
some of the carbon dioxide by the photosynthesis. The correct average would have
been 365.3 ppm, which is typical for such a location in summer. In fact Keeling used
only the lowest measured values made in the afternoon, on the grounds that there
is a compensation of soil respiration by soil organisms and photosynthesis. Soil respiration produces about the same amount of CO2 as the respiration of animals on the ground. But plants do respire too, especially at night. Keeling did not measure the soil respiration, and the possibility of geological soil degassing by rock weathering remains.
“This procedure of ignoring natural CO2 sources that may make a large contribution to the atmospheric concentration is maintained until today.”
So, we have the well known history of Callendar’s cherry picking and an example here of Keeling’s, one of many, besides showing he was a complete ignoramus in the subject of measuring CO2.
Here’s background to the beginning with Revelle by By John Coleman: http://www.kickthemallout.com/article.php/Video-Revelle_Admits_CO2_Theory_Wrong
Here’s background to the history of CO2 measurements: Historic variations in CO2 measurements
Tony Brown
http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/
– real scientists who strove to understand the subject and built on each others hard work from honesty in reporting and continual refinement of method. Don’t you bloody dare claim their work or their understanding inferior to Keeling. Keeling may well have pretended later that he knew the Haldane..
See further:
Brownian motion at
http://en.wikipedia.org/wiki/Brownian_motion

Further proof you don’t know what I’m talking about as you’ve confused wind with Brownian motion, not only confusing convection with Brownian motion, but having no sense of scale of these two different processes.
see http://en.wikipedia.org/wiki/Diffusion
“Separation diffusion from convection in gases
While Brownian motion of multi-molecular mesoscopic particles (like pollen grains studied by Brown) is observable under an optical microscope, molecular diffusion can only be probed in carefully controlled experimental conditions. Since Graham experiments, it is well known that avoiding of convection is necessary and this may be a non-trivial task.
“Under normal conditions, molecular diffusion dominates only on length scales between nanometer and millimeter. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection, and to study diffusion on the larger scale, special efforts are needed.
“Therefore, some often cited examples of diffusion are wrong: If cologne is sprayed in one place, it will soon be smelled in the entire room, but a simple calculation shows that this can’t be due to diffusion. Convective motion persists in the room because the temperature inhomogeneity. If ink is dropped in water, one usually observes an inhomogeneous evolution of the spatial distribution, which clearly indicates convection (caused, in particular, by this dropping).”
More garbled processes to create a fictional fisics to promote AGW.
And, that you haven’t even bothered to read it yourself… So, obviously you can’t follow my explanations of how AGWSF mixes up real world physics by cherry picking bits from unrelated processes as well as changing properties..
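The diffusion-vs-convection point in the Wikipedia passage quoted above can be put in rough numbers with the standard 1-D estimate t ~ L²/(2D). Using a textbook order of magnitude for the diffusion coefficient of CO2 in air (~1.6e-5 m²/s; an assumed round value, not a precise measurement):

```python
# Back-of-envelope: how long pure molecular diffusion would take to carry CO2
# across a room, via the 1-D random-walk estimate t ~ L^2 / (2*D).

D = 1.6e-5                 # diffusion coefficient of CO2 in air, m^2/s (approx.)
L = 5.0                    # room length, m

t_seconds = L**2 / (2 * D)
print(t_seconds / 86400)   # on the order of 9 days
```

So on room scales and larger, transport is convection (bulk air motion), exactly as the quoted passage says; diffusion alone is far too slow.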
From your wiki page: “Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of …, where ..is the mass of the particle, .. is the acceleration due to gravity, and .. is the particle’s mobility in the fluid. ….. In a state of dynamic equilibrium, the particles are distributed according to the barometric distribution.
..
“Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater is the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick’s law, [see picture here: 220px-Brownian_motion_gamboge.jpg] “The equilibrium distribution for particles of gamboge shows the tendency for granules to move to regions of lower concentration when affected by gravity.”
So where’s the “well-mixed background” from Brownian Motion?
According to what you believe, the measurements at Mauna Loa or the South Pole should show far less CO2 values than near ground, which is not the case at all.
So Brownian Motion irrelevant?
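The gravity-versus-mixing question running through this exchange can at least be quantified with the barometric scale height H = RT/(Mg), the altitude over which a gas’s partial pressure would fall by a factor e in perfectly still, diffusion-only air. A quick sketch (standard constants, illustrative temperature):

```python
# Barometric scale height H = R*T/(M*g) for CO2 versus mean air.
# Constants: gas constant R, gravity g, and an illustrative temperature.

R, g, T = 8.314, 9.81, 288.0   # J/(mol K), m/s^2, K

def scale_height_m(molar_mass_kg):
    """Scale height in meters for a gas of the given molar mass (kg/mol)."""
    return R * T / (molar_mass_kg * g)

print(scale_height_m(0.044))   # CO2 (44 g/mol): roughly 5.5 km
print(scale_height_m(0.029))   # mean air (29 g/mol): roughly 8.4 km
```

So in a perfectly still atmosphere CO2 would indeed thin out with height faster than air; the observation that measured CO2 mixing ratios are similar at the surface and at altitude reflects turbulent mixing overwhelming this gravitational settling in the lower atmosphere.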

Myrrh
June 10, 2012 2:04 pm

FerdiEgb says:
June 8, 2012 at 1:17 pm
Myrrh says:
June 8, 2012 at 12:35 pm
Besides the problems by using unreliable data from heavily contaminated places, Beck’s high CO2 values around 1942 are not confirmed by any other CO2 (or d13C) proxy I know of. Including stomata data, the other posterchild to haunt the ice core data…
Gosh Ferdi, you’re quite a wag, “data unreliable from heavily contaminated places” immediately associated with Beck in the next sentence, when it’s Mauna Loa the poster child of heavily contaminated places – was that deliberate?
Let’s take a good look first at what is being measured, Beck’s hundreds of thousands real data records of well measured and well understood in local conditions against: “Greenhouse Gas Observatories Downwind from Erupting Volcanoes”
http://www.americanthinker.com/2009/12/greenhouse_gas_observatories_d.html
From which:
“In case readers don’t get the point, the NOAA also explains (emphasis in original):
“GLOBALVIEW-CO2 is derived from measurements but contains no actual data. To facilitate use with carbon cycle modeling studies, the measurements have been processed (smoothed, interpolated, and extrapolated) resulting in extended records that are evenly incremented in time.”
“Processed, smoothed, interpolated, and extrapolated? Data extension? Data integration? No actual data? Making atmospheric measurements that will facilitate a predetermined conclusion?”
I would stress “records that are evenly incremented in time” – because that’s exactly what the agenda pre-determined it to be – hence the silly Keeling Curve.
And Willis –
“The Observatory at Point Barrow, Alaska is about 170 miles downwind from the Prudhoe Bay headquarters of the North Slope oil industry. It is therefore subject to a localized increase in man-made air pollution, including CO2 emissions. Coincidentally, of course, the Barrow Observatory was established in 1973 — just before construction began on the Trans-Alaska Pipeline. Barrow is also annually subject to several months of “Arctic haze,” which University of Alaska Geophysicist Ned Rozell indicates is from ex-Soviet and new Chinese “iron, nickel and copper smelters and inefficient coal-burning plants.”
Do please read the rest of that article by Andrew Walden, editor of Hawaii Free Press
As for stomata data, please fetch for the last century, I can’t find anything specific to that period. But I have found that stomata data and South Pole Flask measurements are much higher than from ice core records – http://www.geocraft.com/WVFossils/stomata.html
See Fig.5 “Stomata CO2 record is in general agreement with Air Flask CO2 measurements”.
South Pole Air Flask (1957-2006 AD)
Atmospheric CO2 concentrations (ppmv) derived from flask samples collected at South Pole, Antarctica
L.P. Steele, P.B. Krummel, R.L. Langenfelds
Atmospheric, Research, Commonwealth, Scientific, and Industrial Research Organization, Australia
August 2007
(“A precipitous drop in CO2 during the “Younger Dryas” was captured nicely by the stomata reord, but missed by the CO2 record in ice cores.”)
“But the main problem is the speed with which it should have happened. While a few thousands volcanoes all spewing lots of extra CO2 are remotely possible to give a 80 ppmv increase in only 7 years, there is no sink on this world which can absorb 80 ppmv CO2 within 7 years, if Beck’s CO2 peak was true. If you know of such mechanism, I am very interested…”
Plants absorb more CO2 when there is more CO2 to absorb. They’d have been woofing it as fast as their little stomata could grab it – have you been missing the references throughout the discussions here on WUWT on the optimum levels for plant photosynthesis and better harvests we’re getting and greening deserts? Life gobbled it up.

Myrrh
June 10, 2012 4:01 pm

Gail Combs says:
June 9, 2012 at 8:09 am
The first clue in this snippet is “the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques. “ Now I am not the math wiz that Willis and others are but if there is one thing I am familiar with it is doing quantitative analysis on “a new infra-red (IR) absorbing instrument”
Well, what you needed was Keeling on your team… 🙂
Re your collection on Mauna Loa – I’ve been sitting here thinking how far I’ve come since I first took an interest and was pointed in the direction of Willis’s under and over the volcano, his link above. He’s such a good writer that I got the picture immediately, that he was simply regurgitating what he had been told, because no question arose from him about the huge amounts of carbon dioxide produced by the world’s premier volcanic hot spot. Nothing to detract from the “pristine” meme, that downwelling contains nought but the most pure unadulterated thoroughly mixed so it can’t be unmixed without work being done background CO2 levels which arrive naked and unadorned by local production from miles up in the Northeast tradewinds – if he’s not on their payroll he should be.
And reading his description of how they measured, I thought, WUWT? They’re arbitrarily choosing it each time. I checked their own descriptions of this, yes, as you point out, they don’t even hide this! But what they do do is keep calling it “pristine” which I can understand a non-scientist might not notice, but when a non-scientist with an interest like myself can spot the bull piled on bull practically immediately, how come so many of real scientists on the world’s premier science blog, can’t?
The more I investigated the more blown away I was by the sheer audacity of the con, but what still amazes me is that even after pointing out lots and lots of piles of bull on bull associated with this Keeling myth of “well-mixed background”, some just refuse to give up the notion that Callendar/Keeling’s low number was deliberately cherry-picked to represent it. All that’s happening with the agenda-driven Mauna Loa/Scripps/NOAA data is that they have been gradually, by incremental steps, bringing it to real world figures which haven’t changed from the time when Keeling got in on the act – the 400 ppm was standard industry average then…
Which is what got me interested in exploring all this in the first place, the tweaks to sell a fictional fisics to promote AGW.
We’ve now had a generation educated in it. Just as with the PhD in physics who told me carbon dioxide was an ideal gas that would spontaneously diffuse into the atmosphere (which was empty space) and couldn’t become unmixed without a great deal of work being done, there is now a whole generation of scientists in all fields who think this fictional well-mixed background, which Keeling created, actually exists.
So we have the shock horror of the AIRS team, who couldn’t understand why it wasn’t well-mixed; lumpy freaked them out because they knew nothing about real gases, they only knew the memes.
As here: NASA Releases New CO2 Data, Refutes Conventional Wisdom
“For carbon dioxide, AIRS measures and tracks its concentration and movement as it moves across the globe. Observation data is critical for scientists to validate their models or adjust them to better predict the impact of greenhouse gas emissions on the weather and climate.
The data have already refuted a long-held belief that carbon dioxide is evenly distributed and does so fairly quickly in the atmosphere once it rises from the ground, said Moustafa Chahine, the science team leader of the AIRS project at the Jet Propulsion Laboratory, at the annual meeting of the American Geophysical Union (AGU) in San Francisco Tuesday.
“Contrary to the prevailing wisdom, carbon dioxide is not well mixed in the mid-troposphere,” Chahine said. “You can see the jet stream splitting the carbon dioxide clump.”
“AIRS data shows instead that carbon dioxide, which has seen its rate of increase accelerating from 1 part per million in 1955 to 2 parts per million today, would require about two to three years before it blends in, he said. The atmosphere currently has about 400 parts per million.”
http://www.greentechmedia.com/articles/read/nasa-releases-new-co2-data-refutes-conventional-wisdom/
Aw shucks, must be millions more parts per million by now with all that accumulating going on, but they’re still using the 19th century figure…
So where’s the “spontaneous diffusion of carbon dioxide as ideal gas which gives the same proportion globally practically immediately like scent wafting from a bottle opened in the classroom, or ink poured into a glass of water, which can’t separate out without a great deal of work being done”?
What? It stays lumpy even when split by the jet stream?
“Conventional wisdom”, “prevailing wisdom” and “a long-held belief” – created by AGWScience Fiction’s meme creating department.
How can anyone still argue the increase from the low figure created by them? We could really do with a collation under the headings of the different memes, as you’ve pulled together “pristine”.
And still, they won’t release the top of the troposphere or bottom of the troposphere…

Myrrh
June 10, 2012 5:46 pm

Re volcanic activity – There’s more on the geologist website from Timothy Casey:
“2.0 Calculated Estimates: Glorified Guesswork
The estimation of worldwide volcanic CO2 emission is undermined by a severe shortage of data. To make matters worse, the reported output of any individual volcano is itself an estimate based on limited rather than complete measurement. One may reasonably assume that in each case, such estimates are based on a representative and statistically significant quantity of empirical measurements. Then we read statements, such as this one courtesy of the USGS (2010):

“Scientists have calculated that volcanoes emit between about 130-230 million tonnes (145-255 million tons) of CO2 into the atmosphere every year (Gerlach, 1991). This estimate includes both subaerial and submarine volcanoes, about in equal amounts.”

In point of fact, the total worldwide estimate of roughly 55 MtCpa is by one researcher, rather than “scientists” in general. More importantly, this estimate by Gerlach (1991) is based on emission measurements taken from only seven subaerial volcanoes and three hydrothermal vent sites. Yet the USGS glibly claims that Gerlach’s estimate includes both subaerial and submarine volcanoes in roughly equal amounts. Given the more than 3 million volcanoes worldwide indicated by the work of Hillier & Watts (2007), one might be prone to wonder about the statistical significance of Gerlach’s seven subaerial volcanoes and three hydrothermal vent sites. If the statement of the USGS concerning volcanic CO2 is any indication of the reliability of expert consensus, it would seem that verifiable facts are eminently more trustworthy than professional opinion.
“This is not an isolated case. Kerrick ..” continued on http://carbon-budget.geologist-1011.net/
My bold italics.
And there’s this from http://chiefio.wordpress.com/2011/12/10/liquid-co2-on-the-ocean-bottom/
White Smokers & The Lake Of CO2 On The Ocean Bottom
“It is looking to me like we really don’t know a darned thing about the CO2 cycle in the deep oceans. How much, where, what happens, where it comes from, where it goes to, where it makes puddles and lakes.”
..
“The Obvious
To me it’s a pretty obvious question to ask that if we KNOW the volcanic cycle is highly variable (at least on land) and we KNOW that CO2 comes from these volcanic related vents on the ocean floor, and we KNOW that the quantity of CO2 cycled by nature is vastly more than the amount we produce: Why in the heck is anyone not of the opinion that CO2 is largely prone to fluctuations from natural volcanic cycles? How could anyone ever justify asserting the human component matters, even a tiny bit, as it’s going to be well inside the natural variation of these volcanic sources. (There are vastly more volcanoes under the sea than on land. We’re talking powers of ten more…)
When we can see globs of CO2 on the ocean floor with shrimp swimming through the CO2 “smoke” and we can find lakes of the stuff on the ocean floor, how in heck can anyone say a few PPM of gas in the air will ever matter to the ocean? Or the life in it?
When we can find puddles of stuff on the ocean floor, and have literal geysers of the stuff too, how can we ever think that those quantities are not important to how much leaves the ocean and goes into the air?
It looks to me like the ocean is quite comfortable dealing with levels of CO2 flux that make our burning of fossil fuels look like a lit match in a forest fire. As much as it might hurt some folks ego, it looks to me like we just don’t matter.”

Gail Combs
June 10, 2012 6:12 pm

Allan MacRae says:
June 10, 2012 at 2:58 am
Gail, the scale in this AIRS animation goes from 364 to 386 ppm CO2. That seems reasonable to me.
______________________________________
It is not the scale I am talking about; it is the fact they do averages. That 364 to 386 ppm is not the high and the low readings in the troposphere but the AVERAGE for a large hunk of air over time.
NASA says of AIRS,

…As the spacecraft moves along, this mirror sweeps the ground creating a scan ‘swath’ that extends roughly 800 km on either side of the ground track…. AIRS looks toward the ground through a cross-track rotary scan mirror which provides +/- 49.5 degrees (from nadir) ground coverage along with views to cold space and to on-board spectral and radiometric calibration sources every scan cycle. The scan cycle repeats every 8/3 seconds. Ninety ground footprints are observed each scan. One spectrum with all 2378 spectral samples is obtained for each footprint. A ground footprint every 22.4 ms. The AIRS IR spatial resolution is 13.5 km at nadir from the 705.3 km orbit.

Nadir is looking straight down, so this is the minimum column of air scanned. I am assuming that the “footprint” represents a column of air 13.5 km thick by [800 km / 90 footprints] or 8.9 km wide.
Another page states AIRS reports the daytime and nighttime global distribution of carbon dioxide in the mid-troposphere at a nadir resolution of 90 km x 90 km. So it looks like the “footprints” above are combined to give ONE reading, and that reading is the AVERAGE of a heck of a lot of air over a 12 hour period!
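[A quick back-of-envelope check of the footprint arithmetic above; this is a sketch, not official AIRS geometry. Note the NASA excerpt quotes ~800 km of swath on *each side* of the ground track, so the assumed divisor is ambiguous:]

```python
# Footprint-width arithmetic as read from the NASA excerpt above.
swath_per_side_km = 800.0   # swath extends ~800 km on either side (quoted)
footprints_per_scan = 90    # ninety ground footprints per scan (quoted)

# Gail's reading: 800 km divided among the 90 footprints
width_one_side = swath_per_side_km / footprints_per_scan
print(f"{width_one_side:.1f} km")   # ~8.9 km, the figure in the comment

# If the 90 footprints instead span the FULL swath (both sides, ~1600 km),
# each footprint would average about twice that:
width_full_swath = 2 * swath_per_side_km / footprints_per_scan
print(f"{width_full_swath:.1f} km")  # ~17.8 km
```

Either way, the later 90 km x 90 km mid-troposphere product quoted below must combine many such footprints.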
Another page on AIRS says:

…Global monthly maps of CO2 have been generated and identify global transport patterns in the mid-troposphere. These results will aid climate modelers in parameterization of mid-tropospheric transport processes of CO2 and other gases. AIRS CO2 provides a mid-tropospheric measurement…

And AIRS is STILL finding variations from footprint to footprint despite all the combining and averaging being done. SEE
http://airs.jpl.nasa.gov/data/about_airs_co2_data/about_airs_co2_data_files/index.jpg of “The monthly average of carbon dioxide in the middle troposphere made with AIRS data retrieved during July 2003”
You can hide a lot of information that you do not want people to know by using averages without giving the standard deviation. If the atmosphere is not well mixed then Callender and Keeling had no basis for tossing all the “outliers” from Mauna Loa and the historical data that Beck dug out.

.. Keeling later came to believe in the accuracy of the old measurements he had previously rejected as being too high, as demonstrated in his own autobiography. Ironically, Callendar in the last years of his life also doubted his own AGW hypothesis. Similarly, whilst Arrhenius’ first paper on the likely effect of doubling CO2 – with temperature rises up to 5C – is often quoted, his second paper ten years later, when he basically admitted he had got his initial calculations wrong, is rarely heard. In this latter paper he estimated a figure of 0.7C for doubling… http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/

So once you get rid of the theory of “well mixed” and “background CO2” you can not toss out Scholander’s Barrow data ranging from 386 ppm to 500 ppm in 1947 and the whole edifice of CAGW collapses because a reading of 400 ppm of CO2 is nothing to write home about, especially given Kaplan’s Graph of SST vs CO2.
The use of averages is just another way to lie with statistics and that is why I keep pounding on the issue. As Tony Brown said, “Averaging disguises the ranges.”

barry
June 10, 2012 6:35 pm

If CO2 was not well-mixed but generally sank to the ground, we’d have trouble breathing at sea level. If molecular weight mattered, the gases in the open atmosphere would be seen in different layers, like a cake. But even the gentlest, unfelt breeze mixes them, like dust motes dancing in a ray of light, even in a sealed room. There is some clumping, small and large-scale, and particularly near source, and all the gases thin out with height, but there is no stratification of gases in the atmosphere. You can pour out CO2 and it will pool on the ground for a little while, but before long it will be mixed into the rest of the atmosphere. Wind and molecular energy strongly overwhelm gravity for gases in the open atmosphere.
(Various forces work to ‘stratify’ ozone, but it is a heavy gas and should not be hanging around the stratosphere if gravity had much to do with it!)

Myrrh
June 10, 2012 6:59 pm

Gail – I vaguely recall reading this after the AIRS team pronounced CO2 lumpy and insignificant in the scheme of things compared to water vapour, although I don’t know if it was produced before or after. It does read like damage limitation with spin, spin, spin – no mention of lumpy or of Chahine’s shock to “conventional wisdom”, they’ve got Mauna Loa as an extinct volcano, and it makes a virtue of the scant 18 months of data gathering before Keeling’s mad pronouncement that he’d found a trend.
http://airs.jpl.nasa.gov/story_archive/Measuring_CO2_from_Space/History_CO2_Measurements/
How many have got the time to deconstruct every statement in it?
How can NASA put this out?

Gail Combs
June 10, 2012 7:23 pm

In thinking about it, I have a SWAG as to what was going on in the following quote, thanks to the information from Tony Brown and Perkin-Elmer.

http://www.co2web.info/ESEF3VO2.pdf
…At the Mauna Loa Observatory the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques. Critique has also been directed to the analytical methodology and sampling error problems (Jaworowski et al., 1992 a; and Segalstad, 1996, for further references), and the fact that the results of the measurements were “edited” (Bacastow et al., 1985); large portions of raw data were rejected, leaving just a small fraction of the raw data subjected to averaging techniques (Pales & Keeling, 1965).
The acknowledgement in the paper by Pales & Keeling (1965) describes how the Mauna Loa CO2 monitoring program started: “The Scripps program to monitor CO2 in the atmosphere and oceans was conceived and initiated by Dr. Roger Revelle who was director of the Scripps Institution of Oceanography while the present work was in progress. Revelle foresaw the geochemical implications of the rise in atmospheric CO2 resulting from fossil fuel combustion, and he sought means to ensure that this ‘large scale geophysical experiment’, as he termed it, would be adequately documented as it occurred. During all stages of the present work Revelle was mentor, consultant, antagonist. He shared with us his broad knowledge of earth science and appreciation for the oceans and atmosphere as they really exist, and he inspired us to keep in sight the objectives which he had originally persuaded us to accept.”

The Mauna Loa Observatory (MLO) was dedicated on 28 June 1956. Perkin-Elmer introduced the first (single beam) IR in 1954. In 1957 they introduced the “First affordable infrared instrument: Model 137 Infracord, low cost, double beam, optical null”. P-E talks of introducing their new Gas Chromatograph, the Model 154-C, at the 1959 Pittsburgh Conference, and it had a glowing write-up in Analytical Chemistry. I have no doubt that the new IR instrument was also introduced at a conference and had a big write-up in the literature. I am sure Keeling heard of the instrument and, given the kind of sales pitch I got from P-E a decade later (a glowing pep talk about how this machine could “leap tall buildings…”), he took the bait and bought the instrument.
So here is Keeling, saddled with a real dog of an instrument, on an active volcano with winds blowing from and to the sea. His analytical results must have been all over the map! He can not get the darn thing to give good calibration results against the wet chemistry methods; the site, far from being “pristine” as he thought, could not be worse; and he has to make it all good because of the tons of money already poured into the project. So you have Callendar, Keeling and Revelle all communicating with each other. Callendar has already set the precedent of cherry-picking “low background CO2 measurements”, Revelle is giving pep talks about how important the research is, and so they end up running with cherry-picked values and averaged data because that is all they had given the situation. After all, not only are their project, their reputations and grant money at stake, but they KNOW they are right and that the research is VERY IMPORTANT.

From Callendar’s biography:

“In 1944 climatologist Gordon Manley noted Callendar’s valuable contributions to the study of climatic change. A decade later, Gilbert Plass and Charles Keeling consulted with Callendar as they began their research programs. Just before the beginning of the International Geophysical Year in 1957, Hans Suess and Roger Revelle referred to the “Callendar effect” — defined as climatic change brought about by anthropogenic increases in the concentration of atmospheric carbon dioxide, primarily through the processes of combustion.”

http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/

So, as Tony Brown said, “The overall effect of taking CO2 measurements at Mauna Loa situated on top of an active volcano at over 3000 m altitude and surrounded by a constantly outgassing warm ocean shall be left to others to debate, but the averaging disguises the considerable daily variability.” Of course tossing out the “outliers” that do not “fit the curve” takes care of the rest of the problem.
As I noted in another comment somewhere on WUWT, Many Scientists Fabricate and Falsify Research and once they start they generally have to continue.
For example, Diederik Stapel, a prominent Dutch social psychologist, routinely falsified data and made up entire experiments. He falsified at least 30 papers. The University of Connecticut found 145 instances over seven years in which Dr. Dipak Das, a heart researcher, fabricated, falsified and manipulated data.
Once started it is very difficult to extricate yourself from the lies without a career ending blow-up.

Gail Combs
June 10, 2012 8:06 pm

barry says:
June 10, 2012 at 6:35 pm
If CO2 was not well-mixed but generally sank to the ground, we’d have trouble breathing at sea level….
_______________________________
Barry, as I mentioned above, Lake Nyos (sitting on a volcanic vent) belched CO2 and killed all the animals and 1700 people within a 15-mile (25 km) radius of the lake. So yes, CO2 can sink to the ground and “flow”. Humans can tolerate several thousand ppm; the threshold in mine-safety regulations is 5000 ppm of carbon dioxide.
I worked for years in chemical batch-mix rooms and continuous process, which is why I laugh when anyone tries to tell me CO2 is “well mixed”. Spending several years sampling batches to find out how long it takes a mechanically mixed batch to become “well mixed” gives a real-life appreciation of how HARD it is to get the &^%$%@* stuff to mix and how long it actually takes (at least an hour).
Think about CO2:
1. You have Volcanoes spewing CO2 through eruptions and vents
2. You have humans producing CO2
3. You have water dissolving or out gassing CO2 as the surface temperature changes.
4. You have plants gobbling CO2 during the day and respiring it at night.
5. You have termites and swamps producing more CO2 than all humans combined.
And all you have to mix it is Brownian motion, updrafts and winds as the sources and sinks continue to change the local amount of CO2.
As the wheat experiment showed you get a local reading of 300 ppm during the day and as much as a 500 ppm reading at night.
Ever watch a plane’s contrail or the steam from a nuclear cooling tower? The visible moisture can stay for HOURS depending on the weather, and that “clump” of air containing the water vapor stays pretty much clumped. Ever watch clouds… So why in hades do you think water vapor will stay clumped but CO2, just because we can not see it, will not?

Gail Combs
June 10, 2012 8:13 pm

Myrrh says:
June 10, 2012 at 4:01 pm
Myrrh, I am glad someone understands what I am trying to say about the “well mixed” fallacy. Without it the entire house of cards falls, so I can not see why everyone is defending Keeling and Mauna Loa, especially when they come right out and SAY they are cherry-picking (and averaging).

Allan MacRae
June 10, 2012 9:17 pm

Myrrh says: June 10, 2012 at 1:53 pm
“In fact CO2 from burning fossil fuels and phytoplankton from the oceans—which cover the globe to 71%—have about the same 13 C value”
Interesting, thanks – do you have a source?

Reply to  Allan MacRae
June 11, 2012 5:49 am

Allan,
Google “Metabolic Fractionation of C13 & C12 in Plants” and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC406107/ tops the list.

barry
June 10, 2012 9:29 pm

Gail,

Think about CO2… all you have to mix it is Brownian motion, updrafts and winds as the sources and sinks continue to change the local amount of CO2.

Yes, local variability occurs dependent on various factors, just as clouds form dependent on temperature and pressure etc. But these are incidental to the average (or ‘background’) content. The amount of water vapour is different everywhere, but the atmospheric total is in equilibrium until something disturbs that equilibrium. Same goes for CO2. We have corroborating measurements from all over the world, not just from ML.
Here are the data for a non-urban site in Tasmania (Cape Grim), a location chosen for its isolation, considered one of the best locations to get a ‘pure’ atmospheric composition that reflects the global average.
Here are samples from Ascension Island in the Northern Hemisphere. (Period overlaps with Cape Grim)
Easter Island
Estevan Point, Canada
(I’m picking places that look like they’re non-urban and have some overlap with each other… that’s my sole criterion here, and I’m not omitting anything I find for any other reason)
Kotelny Island
HAKUHO MARU (Japanese research vessel)
Assy Plateau Kazakhstan
You will see the same thing as I do if you go through the records. There is a rise in CO2 background levels worldwide at the same concentration everywhere. There are also daily and seasonal fluctuations, but the background levels are consistent across a range of monitoring sites, undertaken by many different groups. One would have to invoke the most far-reaching conspiracy theory to deny such corroboration of the accumulation of atmospheric CO2. You can ignore Keeling and ML – it doesn’t change the facts of world-wide inventories of atmospheric CO2. This is ‘settled’ science. There are far more worthy candidates for skepticism.

barry
June 10, 2012 9:42 pm

“In fact CO2 from burning fossil fuels and phytoplankton from the oceans—which cover the globe to 71%—have about the same 13 C value”

But phytoplankton gives off 14C. Fossil fuels do not. Therefore, if phytoplankton were a net contributor to global levels, we would see a corresponding rise in 14C. We do not.
It is the ratio of 12C, 13C and 14C that establishes the anthro fingerprint of rising CO2. 14C decays away after ~50,000 years. The lack of increase in 14C tells us ancient carbon is responsible for the rise in atmospheric CO2 in the modern era.
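[The “decays after 50,000 years” figure can be checked against the standard radioactive-decay formula; the 5730-year half-life below is the textbook value for 14C, not a number from this thread:]

```python
# Fraction of 14C remaining after t years, given its half-life.
HALF_LIFE_C14 = 5730.0  # years (textbook value, assumed here)

def c14_fraction_remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_C14)

# After ~50,000 years (the figure quoted above) essentially none is left:
print(c14_fraction_remaining(50_000))  # ~0.0024, i.e. about 0.2% remains
```

Fossil carbon is millions of years old, so its 14C content is zero for all practical purposes, while carbon from living phytoplankton matches the contemporary atmosphere; that is why the two sources are distinguishable even with identical 13C values.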

Brian H
June 10, 2012 11:46 pm

barry says:
June 10, 2012 at 6:35 pm
(Various forces work to ‘stratify’ ozone, but it is a heavy gas and should not be hanging around the stratosphere if gravity had much to do with it!)

Indeed. O3 and CO2 differ by only about 9% in density; they should be well-mixed with each other, wherever they are!
😉

Allan MacRae
June 11, 2012 2:17 am

barry says: June 10, 2012 at 9:29 pm
Thanks Barry. First, I agree with your comments to Gail regarding CO2 measurements, with some limitations.
I have worked with much of the data you cited, and accept that atmospheric CO2 concentration is rising and is pretty much as reported at these sites, NOTWITHSTANDING that the data is intensively cherry-picked to remove outliers, etc. The correlation from site to site and the declining seasonal sawtooth (oscillation) from Barrow Alaska to the South Pole would be extremely difficult to falsify, even if the intent existed to do so (which I doubt). Also, the fact that dCO2/dt varies ~contemporaneously with temperature and CO2 lags temperature by ~9 months would also be difficult to fake. I have verified this relationship with both satellite and surface temperatures, even though there appears to be a significant warming bias in the surface temperatures. I have also verified the relationship using global CO2 and only Mauna Loa CO2 measurements.
I have less faith in the pre-1958 CO2 measurements from ice cores etc. – I think they are directionally correct but may show some absolute drift from actuals.
Nor do I have much faith in the alleged “pre-industrial” level of ~280 ppm CO2 in the atmosphere. This could be completely false, especially since CO2 lags temperature at all measured time scales and past temperatures have certainly been both warmer (MWP) and colder (LIA) than the present.
I just have not done enough work to confirm or deny these numbers, and there is so much BS in climate science (e.g. the Mann hokey stick, Climategate, etc.) that I like to verify everything myself.
I keep finding more and more evidence that indicates that human-made CO2 emissions are simply not the primary cause of the increase in atmospheric CO2. In those few sites I have seen where CO2 is measured in the urban environment, the natural CO2 signature prevails and the human signature is typically absent.
___________
barry says:
June 10, 2012 at 9:42 pm
“In fact CO2 from burning fossil fuels and phytoplankton from the oceans—which cover the globe to 71%—have about the same 13 C value”
But phytoplankton gives off 14 C. Fossil fuels do not. Therefore, if phytoplankton were a net contributor to global levels, we would see a corresponding rise in C14. We do not.
It is the ratio of 12, 13 and 14 C that establishes the anthro fingerprint of rising CO2. 14 C decays after 50, 000 years. It’s lack of increase in 14 C that tells us ancient carbon is responsible for the rise in atmospheric CO2 in the modern era.
Thanks Barry. Again, I have not checked this C14/13/12 relationship carefully myself. Those who have keep poking holes in it – the latest being Murry Salby. Do you have any comments on the work of these parties who have slagged the C14/13/12 argument? Are they all wrong? If so where?

June 11, 2012 2:56 am

Myrrh says:
June 10, 2012 at 2:04 pm
I’ll make it as short as possible…
“It will” what? Spontaneously sink or not spontaneously rise in air?
It will spontaneously mix in air, partly due to diffusion (the practical result of the Brownian motion) and it will be mixed by wind, turbulence and convection to any height and latitude.
It takes 40 years in stagnant air to show a 1% increase in CO2 levels at the bottom of an air column in firn. By far not enough to give any settling out in free air.
See also the CO2 levels taken by airplanes over Colorado:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/inversion_co2.jpg
Besides huge changes in the first 500 meter over land, where huge sources and sinks are at work, the levels hardly change with height.
http://www.globalweathercycles.com/GWGCNCF/chapter5.htm
This reference says that there is no correlation between CO2 and temperature. Gail Combs did give a reference that temperature caused the CO2 rise. Some discrepancy here…
http://www.kickthemallout.com/article.php/Video-Revelle_Admits_CO2_Theory_Wrong
The film is about the link between CO2 and global warming. That is a complete different topic. Where does Revelle say that the CO2 measurements are/were wrong?
http://icecap.us/images/uploads/08_Beck-2.pdf
Again, read what Keeling himself said about the “background” CO2. Not what others say what he did or said or meant or allege what his motives were…
What the late Beck did is the equivalent of measuring temperature on a hot asphalt roof and averaging it over the day, then adding in the averages of a lot of other such places.
Nobody would suggest that that is the best way to establish the average temperature of the earth… But you and others insist that it should be done. Not very consistent…
Further proof you don’t know what I’m talking about as you’ve confused wind with Brownian motion, not only confusing convection with Brownian motion, but having no sense of scale of these two different processes.
Please Myrrh, wind, turbulence and convection do bring CO2 to any height in the atmosphere, and Brownian motion / diffusion keeps it mixed there. As proven, it takes 40 years without any convection to increase the CO2 levels by 1% at the bottom of a 70 metre stagnant air column. That means that, far away from huge sources and sinks, the CO2 levels are within tight bounds all over the earth.

barry
June 11, 2012 5:07 am

Allan,

Again, I have not checked this C14/13/12 relationship carefully myself. Those who have keep poking holes in it – the latest being Murry Salby. Do you have any comments on the work of these parties who have slagged the C14/13/12 argument? Are they all wrong? If so where?

I’m not up on those arguments, though did read some of Murry Salby a while back. There is a body of work and a great many atmospheric chemists behind the mainstream view. Are they all wrong?
(I’m nowhere near expert enough to answer your questions properly – Ferdinand is the guy for that)
Not too long ago it was popular to say the warming of surface temperature over the last century or so was not settled science, and many observers, including one or two scientists involved in climate studies, claimed that much or all of the warming was either bad data or urban heat island effect. After BEST and the dedicated work of some skeptics (notably at the Air Vent), regulars here began to adopt the position that of course no one doubts that global temperature has risen, just that… (insert alternative topic). But that fashion changed again – not from any new information (indeed all work on the subject since, by skeptics and others, corroborates it), but because that’s what people want to believe, or at least, that’s the conversation they’d prefer to see happening.
There are some published scientists that completely deny that there is any greenhouse effect from CO2 (Gerlich and Tscheuschner 2009). They have their supporters, too.
Here is a physicist who has successfully refuted Einstein’s special Theory of Relativity. Here is a page with many links to the debunking of Einstein’s theories (an excellent site). You can find an army of real scientists and dedicated amateurs demolishing once and for all the great hoax of Einsteinism.
My point is, every topic has its critics. That doesn’t necessarily mean that the topic is actually controversial. For obvious reasons, AGW attracts the largest army of detractors. The size of that population is indicative only of the political interest, not the quality of the criticism/s.
I don’t think Salby poked holes in the CO2 theory. IIRC, the implication of his thesis is that a global temperature change of less than 1C may be responsible for an increase of 100ppm CO2, which is preposterous. Global temps changed by ~5C during the last few glacial transitions, which would suggest CO2 swings of 500ppm, effectively giving the atmosphere a negative balance of CO2 during glacial maximum.
Allan, I think that looking at the details of this topic is interesting for various reasons, but it seems to me that some people are a little bit too passionate about finding holes. A dispassionate appraisal doesn’t leave much wiggle room, but a person with a strong agenda will always be able to find some uncertainty to chew over in science. The rise in CO2 is due to us. Some small part of the rise may be from deforestation (max 25%, but that’s still ‘us’), and there is variability from short-term temperature changes (ENSO), local circumstances, and diurnal and seasonal influences, but we emit about 30 billion tonnes (30 Gt) of CO2 every year (on average), and half that is added to the atmosphere every year (on average). It’s simple arithmetic. The isotopic argument is actually irrelevant, but it corroborates anyway.
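[The “simple arithmetic” can be sketched with round numbers. The ~7.81 Gt CO2 per ppm conversion factor is a standard approximation assumed here, not a figure from this thread:]

```python
# Mass-balance sketch: emissions -> expected atmospheric rise in ppm.
GT_CO2_PER_PPM = 7.81       # approx. Gt of CO2 per 1 ppm of atmospheric CO2
emissions_gt = 30.0         # Gt CO2 emitted per year (figure quoted above)
airborne_fraction = 0.5     # roughly half stays in the air (quoted above)

rise_ppm_per_year = emissions_gt / GT_CO2_PER_PPM * airborne_fraction
print(f"{rise_ppm_per_year:.1f} ppm/yr")  # ~1.9 ppm/yr
```

That ~1.9 ppm/yr is consistent with the ~2 ppm/yr modern rate quoted earlier in the thread from the AIRS article.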

FerdiEgb
June 11, 2012 5:33 am

Myrrh says:
June 10, 2012 at 2:04 pm
Gosh Ferdi, you’re quite a wag, “data unreliable from heavily contaminated places” immediately associated with Beck in the next sentence, when it’s Mauna Loa the poster child of heavily contaminated places – was that deliberate?
For your interest, the influence of local contamination is easily seen in a huge variability of the measurements within an hour and/or within a day. The standard deviation is the main measure of that variability. The historical measurements at Giessen, one of the cornerstones of Beck’s 1942 “peak”, have a standard deviation of 68 ppmv. The average stdev of the hourly averaged raw data at Mauna Loa is 0.2 ppmv (2004). The maximum stdev for any hour in that year is 3.65 ppmv, and the stdev over the full year, thus including the CO2 changes over the seasons and the trend, downwind volcanic emissions and upwind air from the valley depleted by vegetation, is 2.16 ppmv.
You may observe that the 1942 “peak” is within 2 sigma of the measurements at Giessen, thus insignificant, and that the trend at Mauna Loa surpasses the 2 sigma range after 2-3 years.
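That significance argument in numbers (the stdev figures are as quoted above; the Mauna Loa trend value is my assumption for illustration):

```python
# Is the alleged 1942 excursion distinguishable from the Giessen noise?
giessen_sigma = 68.0   # ppmv, stdev of the historical Giessen series
peak_excess = 80.0     # ppmv, approximate size of the alleged 1942 "peak"
within_2_sigma = peak_excess < 2 * giessen_sigma
print(f"Giessen peak within 2 sigma of the noise: {within_2_sigma}")

# How quickly does the Mauna Loa trend climb out of its own 2-sigma band?
mlo_sigma = 2.16       # ppmv, full-year stdev at Mauna Loa (2004)
trend = 1.8            # ppmv/yr, assumed rate of rise (not from the comment)
years_to_exceed = 2 * mlo_sigma / trend
print(f"Mauna Loa trend exceeds 2 sigma after ~{years_to_exceed:.1f} years")
```

So an 80 ppmv excursion drowns in Giessen’s 68 ppmv noise, while the far quieter Mauna Loa record resolves its trend within a few years.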
Now you want to convince me that the measurements at Giessen are acceptable, but these at Mauna Loa aren’t…
http://www.americanthinker.com/2009/12/greenhouse_gas_observatories_d.html
Please do some thinking for yourself… again. What the American “thinker” says is pure nonsense. Mount Erebus at 800 miles from the South Pole that may influence the results? Come on, Myrrh. And all these sources (volcanoes, swamps, vegetation, the South Pole base power plant,…) simply follow the global human emissions at a remarkably constant ratio of over 50% over the past 160 years? Of course human emissions simply disappear into space and someone with a big turnwheel regulates the natural emissions in ratio with the human emissions…
The Observatory at Point Barrow, Alaska is about 170 miles downwind from the Prudhoe Bay headquarters of the North Slope oil industry
Again pure nonsense, wind at Point Barrow is mainly from the Arctic Ocean. If the wind is occasionally from land side, it is not used for averages or trends.
As for stomata data, please fetch some for the last century, I can’t find anything specific to that period. But I have found that stomata data and South Pole flask measurements are much higher than those from ice core records
Here the overlap between ice core data and direct atmospheric measurements:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_sp_co2.jpg
Here the calibration curve for stomata data vs. ice cores, firn and direct measurements over the past century:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/stomata.jpg
If there was a real peak of 80 ppmv around 1942, as the late Ernst Beck alleged, then the stomata reading at 310 ppmv would be off scale. But nothing to see there.
Stomata data in the past may give a better overview of the variability, but are unreliable for past absolute levels, as the local/regional offset may change over the centuries.
Plants absorb more CO2 when there is more CO2 to absorb. They’d have been woofing it as fast as their little stomata could grab it
The current increase of some 100 ppmv CO2 causes a total sink rate of about 2 ppmv/year, of which 1/3rd by vegetation and 2/3rd by the (deep) oceans. Plants don’t double their uptake with a CO2 doubling. On average there is a 50% increase in uptake for a 100% increase in CO2 under ideal circumstances, but nature is seldom ideal…
Thus a 80 ppmv peak in 1942 would require some 120 years of increased uptake to return to near “normal” levels…
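The decay argument above can be sketched by treating the excess CO2 as relaxing at a rate proportional to the excess, with the e-folding time implied by the quoted figures (~2 ppmv/yr sink for a ~100 ppmv excess). The “near normal” threshold below is my assumption:

```python
import math

# Implied relaxation time for excess CO2, from the sink rate in the comment.
tau = 100.0 / 2.0    # e-folding time in years: 100 ppmv excess / 2 ppmv/yr

peak = 80.0          # hypothetical 1942 excess above background, ppmv
threshold = 5.0      # "near normal": excess below this level, ppmv (assumed)

# Exponential decay: time for the excess to fall from `peak` to `threshold`.
years = tau * math.log(peak / threshold)
print(f"e-folding time ~{tau:.0f} yr; decay to <{threshold} ppmv takes ~{years:.0f} yr")
```

With these numbers the pulse takes well over a century to fade, in line with the “some 120 years” estimate above (the exact figure depends on where one draws the “near normal” line).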

Gail Combs
June 11, 2012 6:13 am

barry says:
June 10, 2012 at 9:29 pm
Gail,
Think about CO2… all you have to mix it is Brownian motion, updrafts and winds as the sources and sinks continue to change the local amount of CO2.
Yes, local variability occurs dependent on various factors, just as clouds form dependent on temperature and pressure etc. But these are incidental to the average (or ‘background’) content….
____________________________________
It is pretty obvious that what I am trying to get at is going over most people’s heads, so here is the explanation of the statistics:
What I am calling AVERAGES = means.
Statistics for individual samples:

mean
The most common expression for the mean of a statistical distribution with a discrete random variable is the mathematical average of all the terms….
range
The range of a distribution with a discrete random variable is the difference between the maximum value and the minimum value. For a distribution with a continuous random variable, the range is the difference between the two extreme points on the distribution curve…
https://controls.engin.umich.edu/wiki/index.php/Basic_statistics:_mean,_median,_average,_standard_deviation,_z-scores,_and_p-value#The_Sampling_Distribution_and_Standard_Deviation_of_the_Mean

Standard Deviation
The standard deviation gives an idea of how close the entire set of data is to the average value. Data sets with a small standard deviation have tightly grouped, precise data. Data sets with large standard deviations have data spread out over a wide range of values. The formula for standard deviation is given below as equation (3)
(3) s = √( Σ(xᵢ − x̄)² / (N − 1) )
The Sampling Distribution and Standard Deviation of the Mean
Population parameters follow all types of distributions; some are normal, others are skewed like the F-distribution, and some don’t even have defined moments (mean, variance, etc.), like the Cauchy distribution. However, many statistical methodologies, like a z-test (discussed later in this article), are based on the normal distribution. How does this work? Most sample data are not normally distributed.
This highlights a common misunderstanding of those new to statistical inference. The distribution of the population parameter of interest and the sampling distribution are not the same. Sampling distribution?!? What is that?
Imagine an engineer estimating the mean weight of widgets produced in a large batch. The engineer measures the weight of N widgets and calculates the mean. So far, one sample has been taken. The engineer then takes another sample, and another, and another, continuing until a very large number of samples, and thus a large number of sample mean weights, have been gathered (assume the batch of widgets being sampled from is near infinite for simplicity). The engineer has generated a sample distribution.
As the name suggests, a sample distribution is simply a distribution of a particular statistic (calculated for a sample with a set size) for a particular population. In this example, the statistic is mean widget weight and the sample size is N. If the engineer were to plot a histogram of the mean widget weights, he/she would see a bell-shaped distribution. This is because the Central Limit Theorem guarantees that as the sample size approaches infinity, the sampling distributions of statistics calculated from said samples approach the normal distribution.
Conveniently, there is a relationship between the sample standard deviation (σ) and the standard deviation of the sampling distribution (σx̄, also known as the standard deviation of the mean, or standard error). This relationship is shown in equation (5) below:
(5) σx̄ = σ / √N
An important feature of the standard deviation of the mean is the √N factor in the denominator. As sample size increases, the standard deviation of the mean decreases, while the standard deviation σ does not change appreciably.
https://controls.engin.umich.edu/wiki/index.php/Basic_statistics:_mean,_median,_average,_standard_deviation,_z-scores,_and_p-value#The_Sampling_Distribution_and_Standard_Deviation_of_the_Mean

Do not forget that rain droplets will absorb CO2 right out of the air, forming carbonic acid. Here are some studies on water vapor. You are not going to find such studies on CO2 because of the “Well Mixed” sacred cow. I am not saying the range (σ) in CO2 is huge, I am just saying that the range (σ) is much wider than reported, because the σ reported is the σ for the means and not the individuals.
Also the mixing in the atmosphere is not all it is touted to be.
Here is a paper on the mixing of water vapour in the atmosphere. Unfortunately it is pay walled:
[Image: Fig. 3. Global distribution of CRISTA 2 water vapour mixing ratios (ppmv) at 100 hPa altitude for 10–15 August 1997.]

Abstract
Water vapour measurements during the second mission of the CRyogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA) instrument are presented in the altitude regime 8–20 km. Mixing ratios are shown on isentropic surfaces (300–500 K) as global zonal means and as averages in 60° longitude sectors. Transports are indicated to occur preferentially on isentropic surfaces in the northern hemisphere, but not in the tropics and in the south….
http://www.sciencedirect.com/science/article/pii/S0273117708004158

NASA video: http://www.nasa.gov/mov/291251main_L3_H2O_Final_576.mov
NASA still: http://www.nasa.gov/images/content/291249main_vapor_still_226.jpg

The distribution of atmospheric water vapor, a significant greenhouse gas, varies across the globe. During the summer and fall of 2005, this visualization shows that most vapor collects at tropical latitudes, particularly over south Asia, where monsoon thunderstorms swept the gas some 2 miles above the land.
http://www.nasa.gov/topics/earth/features/vapor_warming.html

Here is AIRS on precipitable H2O content of the atmosphere: [Image: “Mean clear air water vapor distribution, 500 millibar to top-of-atmosphere.”] Again they are looking at the AVERAGE for a column of air.
From “Recent Climatology, Variability, and Trends in Global Surface Humidity”: Surface RH has relatively small spatial and interannual variations, with a mean value of 75%–80% over most oceans in all seasons and 70%–80% over most land areas except for deserts and high terrain, where RH is 30%–60%…. Large RH increases (0.5%–2.0% decade−1) occurred over the central and eastern United States, India, and western China…
Means of course are the result of AVERAGING.

Abstract
Water vapor plays the key role in the global hydrologic cycle and climate change. However, the distribution and variability of water vapor in the troposphere is not understood well in the globe, particularly the high-resolution variation. In this paper, 13-year 2-h precipitable water vapors (PWV) are derived from globally distributed 155 Global Positioning System sites observations and global three-hourly surface weather data and six-hourly National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis products, which are the first used to investigate multiscale water-vapor variability on a global scale. It has been found that the distinct seasonal cycles are in summer with a maximum water vapor and in winter with a minimum water vapor. The higher amplitudes of annual PWV variations are located in midlatitudes with about 10-20 ± 0.5 mm, and the lower amplitudes are found in high latitudes and equatorial areas with about 5 ± 0.5 mm. The larger differences of mean PWV between summer and winter are located in midlatitudes with about 10-30 mm, particularly in the Northern Hemisphere. The semiannual variation amplitudes are relatively weaker with about 0.5 ± 0.2 mm. In addition, significant diurnal variations of PWV are found over most International Global Navigation Satellite Systems Service stations. The diurnal (24 h) cycle has an amplitude of 0.2-1.2 ± 0.1 mm, and the peak time is from the noon to midnight. The semidiurnal (12 h) cycle is weaker, with an amplitude of less than 0.3 mm.
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4812036&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F36%2F5075848%2F04812036.pdf%3Farnumber%3D4812036

There is absolutely no reason to believe that CO2 mixing in the atmosphere is any better than that of the mixing of water vapor. You can see the east west bands in both gases.

FerdiEgb
June 11, 2012 6:14 am

Gail Combs says:
June 10, 2012 at 6:12 pm
So once you get rid of the theory of “well mixed” and “background CO2″ you can not toss out Scholander’s Barrow data ranging from 386 ppm to 500 ppm in 1947
As said before, the accuracy of the method used at Barrow in 1946-1947 was +/- 150 ppmv. The variability is from the method itself and thus doesn’t give us any clue what the real variability in the outside air was at that time.
The modern station at Barrow has a measurement error of +/- 0.2 ppmv and shows a few spikes of 10 ppmv, with a stdev of a few ppmv in the hourly averages, besides a seasonal variation of +/- 6 ppmv. That includes all local, regional and global natural and human sources and sinks:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends_brw_2007_2008.jpg
That is for two years, but all hourly averaged data for the period 1973-2011 show the same low variability, besides the seasonal influence:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/brw/
Thus in my opinion, the historical data from Barrow are only good for vertical classification.

Gail Combs
June 11, 2012 6:22 am

ARGGGGrrrr
The symbol for sigma shows in the reply box but not in the comment. (grumble)

FerdiEgb
June 11, 2012 7:35 am

Gail Combs says:
June 11, 2012 at 6:13 am
For comparison:
All modern CO2 stations take 10-second voltage samples during 40 minutes + 20 minutes of calibration with 3 calibration gases. That is transformed into an hourly average CO2 level + standard deviation for the past hour from the 240 snapshots. Every 25th hour, a fourth out-of-range calibration gas is used to check the performance of the other calibration gases during one hour. Thus on a yearly base, the real number of samples is near 1.5 million, of which the hourly averages + stdev are known. Except for mechanical (and instrumental) failures of course.
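The hourly reduction described above can be sketched as follows. This is not NOAA’s actual processing code, and the background level and noise figure are assumptions for illustration (the noise is set near the 0.2 ppmv hourly stdev cited earlier in the thread):

```python
import random
import statistics

# Collapse 240 ten-second snapshots into one hourly mean plus a standard
# deviation; a large stdev flags local contamination for that hour.
random.seed(1)
true_co2 = 392.0       # hypothetical clean background level, ppmv (assumed)
noise_sigma = 0.2      # instrument/air noise, ppmv (assumed)

snapshots = [random.gauss(true_co2, noise_sigma) for _ in range(240)]
hourly_mean = statistics.fmean(snapshots)
hourly_stdev = statistics.stdev(snapshots)
print(f"hourly mean {hourly_mean:.2f} ppmv, stdev {hourly_stdev:.2f} ppmv")
# A contaminated hour (volcanic plume, valley air) would show a much larger
# stdev and could be excluded from averages and trends on that basis.
```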
There are 10 baseline stations which all work in the same way, besides many others which also work in places with little local contamination, with similar procedures. That presents many millions of data per year from a lot of places, including their variability within an hour.
Compare that to the historical data:
Many were one-shots at places with huge diurnal changes. The longest ever series was at Giessen, Germany, with 3 samples a day, of which one (according to modern measurements) is at extra low CO2 levels due to afternoon CO2 absorption by vegetation and 2 are on the flanks of the largest diurnal change in CO2 level. That already gives a bias of 40 ppmv on a summer day. Moreover, a slight change in sampling time influences the bias quite a lot.
The overall stdev of the fewer than 2000 samples over 1.5 years is 68 ppmv. Nobody knows the real accuracy of the method used (no calibration or interlaboratory tests known), the checking of the reagents, the skill of the people, etc. All we know is the huge variability of the historical (and modern) measurements at Giessen. Which in my opinion is enough to exclude such data from any “global” assessment.
But Giessen is the main base of the 1942 “peak” from the late Ernst Beck, together with another long series from Poonah, India. The latter is even worse: the data gathered were intended to see what the CO2 levels did during vegetation growth (rice, soy,…). The samples were taken from under, in between and over the leaves of the growing plants, from barren ground to harvest. And that should give an idea of “global” CO2 levels of that time?

FerdiEgb
June 11, 2012 7:52 am

Gail Combs says:
June 11, 2012 at 6:13 am
There is absolutely no reason to believe that CO2 mixing in the atmosphere is any better than that of the mixing of water vapor. You can see the east west bands in both gases.
Gail, never heard of the maximum humidity of air with temperature? There is reason to assume that water vapour is NOT well mixed, because there is a maximum limit and it drops or freezes out of the air. But there is no reason at all that CO2 drops out or freezes out at any rate and any mixture (except at -80°C and 1 bar, thus for 100% CO2).
BTW, the CO2 in water is true, but irrelevant, as the amounts are low and what is absorbed at the condensation places is released at the evaporation places, hardly influencing the overall CO2 levels.

Myrrh
June 12, 2012 5:32 am

Hi Gail, my post to you hasn’t appeared yet.

Myrrh
June 12, 2012 6:43 am

Gail – I’d posted it here, http://wattsupwiththat.com/test-2/#comment-1007003 just before posting in this discussion. Scuse typos and such and a few missing edits, but you’ll get the picture.

June 13, 2012 12:38 am

[SNIP: Eli, you are waving a red flag at a bull. This will only divert the discussion away from the topic at hand. Please don’t do that. -REP]

Allan MacRae
June 13, 2012 3:16 am

fhhaynie says: June 11, 2012 at 5:49 am
Allan,
Google “Metabolic Fractionation of C13 & C12 in Plants” and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC406107/ tops the list.
Thank you Mr. Haynie,
SUMMARY
C13/C12 ratio analyses of chemical fractions from several plant phyla show that in all cases the lipid fraction is enriched in C12 compared to the whole plant. The C13/C12 ratio of the plant lipids corresponds roughly to the C13/C12 ratio of petroleums. The C12 enrichment in petroleums as compared to present day plants can be explained if selective preservation of plant lipids occurred during the sedimentation process. The degree of C12 enrichment in the plant lipid fraction is inversely related to the amount of lipid in the plant. The C12 enrichment which occurs in plant lipids may be balanced by the C13 enrichment which occurs in respired CO2. Isotope selection at the level of acetate or pyruvate is a possible mechanism for explaining our results.
I note that Figure 1 shows the C13/C12 ratios for petroleum, coal and land plants are all about the same.
So please help me out again: where is the human signature in the oxidation in these three materials when they all have similar C13/C12 ratios?

FerdiEgb
June 13, 2012 6:16 am

Allan MacRae says:
June 13, 2012 at 3:16 am
So please help me out again: where is the human signature in the oxidation in these three materials when they all have similar C13/C12 ratios?
There is no direct way to make a differentiation between CO2 released from burning fossil fuels or burning organics (either by humans or bacteria or forest fires), based on the 13C/12C isotope ratios. But there are two independent indirect ways:
– fossil fuels contain no 14C, but all recent organics do. That showed up in the pre-bomb-tests atmosphere as early as ~1870 and required a correction of the carbon dating.
– the oxygen use. Fossil fuel burning needs oxygen. The amounts of oxygen used can be calculated from the type of fuel and burning efficiencies. The measurements were very difficult at the necessary accuracy (less than a ppmv on 20,000 ppmv oxygen…), but since about 1990, that accuracy has been obtained. These show that the oxygen use has a small deficit compared to what was calculated. That means that the biosphere as a whole (plants + animals + bacteria) is a net producer of oxygen, thus a net user of CO2, and thus preferentially of 12CO2, leaving more 13CO2 in the atmosphere.
Thus vegetation is a net sink for CO2 and not the cause of the declining levels of 13CO2.
See:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
For the period 1900-2000 in graph form:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/bolingraph.gif

FerdiEgb
June 13, 2012 6:25 am

Corrections:
Thus vegetation is a net sink for CO2 and not the cause of the declining levels of the 13C/12C ratio in the atmosphere.
And of course, Bolin’s graph is for the 1990-2000 period, but one can calculate it further back in time, based on ice core CO2 and d13C measurements, although O2/N2 measurements in the ice cores are not reliable enough.

FerdiEgb
June 13, 2012 8:25 am

correction 2: there is over 20% oxygen in the atmosphere, thus over 200,000 ppmv… To see a change of a few ppmv, the accuracy of the method must be better than 1:200,000. Not a simple task…