The Bern Model Puzzle

Guest Post by Willis Eschenbach

Although it sounds like the title of an adventure movie like the “Bourne Identity”, the Bern Model is actually a model of the sequestration (removal from the atmosphere) of carbon by natural processes. It purports to describe how fast CO2 is removed from the atmosphere. The Bern Model is used by the IPCC in their “scenarios” of future CO2 levels. I got to thinking about the Bern Model again after the recent publication of a paper called “Carbon sequestration in wetland dominated coastal systems — a global sink of rapidly diminishing magnitude” (paywalled here).

Figure 1. Tidal wetlands. Image Source

In the paper they claim that a) wetlands are a large and significant sink for carbon, and b) they are “rapidly diminishing”.

So what does the Bern model say about that?

Y’know, it’s hard to figure out what the Bern model says about anything. This is because, as far as I can see, the Bern model proposes an impossibility. It says that the CO2 in the air is somehow partitioned, and that the different partitions are sequestered at different rates. The details of the model are given here.

For example, in the IPCC Second Assessment Report (SAR), the atmospheric CO2 was divided into six partitions, containing respectively 14%, 13%, 19%, 25%, 21%, and 8% of the atmospheric CO2.

Each of these partitions is said to decay at a different rate, given by a characteristic time constant “tau” in years. (See Appendix for definitions). The first partition is said to be sequestered immediately. For the SAR, the “tau” time constant values for the five other partitions were taken to be 371.6 years, 55.7 years, 17.01 years, 4.16 years, and 1.33 years respectively.
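To make those numbers concrete, here is a minimal Python sketch (my own illustration, built only from the SAR figures quoted above, not from any actual model code) of the airborne fraction of a unit pulse of CO2 under this scheme:

import numpy as np

# SAR-era Bern approximation, as quoted above. The 14% partition is treated
# as sequestered immediately; the other five decay with their own "tau".
fractions = np.array([0.13, 0.19, 0.25, 0.21, 0.08])
taus      = np.array([371.6, 55.7, 17.01, 4.16, 1.33])   # years

def airborne_fraction(t):
    """Fraction of an emitted pulse said to remain in the air after t years."""
    return np.sum(fractions * np.exp(-t / taus))

for t in (0, 10, 50, 100, 500):
    print(f"t = {t:3d} yr: airborne fraction = {airborne_fraction(t):.3f}")

At t = 0 this starts at 0.86, the 14% “immediate” partition having already been removed; even at t = 500 about 3% of the pulse is still hanging around in the 371.6-year partition.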

Now let me stop here to discuss, not the numbers, but the underlying concept. The part of the Bern model that I’ve never understood is, what is the physical mechanism that is partitioning the CO2 so that some of it is sequestered quickly, and some is sequestered slowly?

I don’t get how that is supposed to work. The reference given above says:

CO2 concentration approximation

The CO2 concentration is approximated by a sum of exponentially decaying functions, one for each fraction of the additional concentrations, which should reflect the time scales of different sinks.
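Reconstructing the form being described from that page (the coefficients differ from report to report, so take this as the general shape rather than an exact parameter set), the concentration is a convolution of the emission history E(t) with a kernel made of a constant plus a sum of decaying exponentials:

\rho(t) = \rho(t_0) + \int_{t_0}^{t} E(t') \left[ a_0 + \sum_i a_i \, e^{-(t - t')/\tau_i} \right] dt'

where the a_i are the partition fractions and the \tau_i are the “tau” time constants.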

So theoretically, the different time constants (ranging from 371.6 years down to 1.33 years) are supposed to represent the different sinks. Here’s a graphic showing those sinks, along with approximate amounts stored in each one and the fluxes in and out:

Figure 2. Carbon cycle.

Now, I understand that some of those sinks will operate quite quickly, and some will operate much more slowly.

But the Bern model reminds me of the old joke about the thermos bottle (Dewar flask), that poses this question:

The thermos bottle keeps cold things cold, and hot things hot … but how does it know the difference?

So my question is, how do the sinks know the difference? Why don’t the fast-acting sinks just soak up the excess CO2, leaving nothing for the long-term, slow-acting sinks? I mean, if some 13% of the CO2 excess is supposed to hang around in the atmosphere for 371.6 years … how do the fast-acting sinks know not to just absorb it before the slow sinks get to it?

Anyhow, that’s my problem with the Bern model—I can’t figure out how it is supposed to work physically.

Finally, note that there is no experimental evidence that will allow us to distinguish between plain old exponential decay (which is what I would expect) and the complexities of the Bern model. We simply don’t have enough years of accurate data to distinguish between the two.
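Here’s a toy demonstration of why, in Python (my own sketch; the 2%-per-year emissions growth and everything else are illustrative numbers only). Because emissions have grown roughly exponentially, any decay kernel convolved with them yields a concentration curve of nearly the same shape, so different kernels differ mainly by a scale factor that any fit will happily absorb:

import numpy as np

years = np.arange(150.0)
E = np.exp(0.02 * years)                       # emissions growing ~2%/year

# kernel (a): plain old exponential decay, tau = 35 years (my estimate, below)
kern_single = np.exp(-years / 35.0)

# kernel (b): the SAR-style sum of exponentials quoted above
a   = np.array([0.13, 0.19, 0.25, 0.21, 0.08])
tau = np.array([371.6, 55.7, 17.01, 4.16, 1.33])
kern_bern = (a[None, :] * np.exp(-years[:, None] / tau[None, :])).sum(axis=1)

def response(kernel):
    c = np.convolve(E, kernel)[:len(years)]    # concentration anomaly
    return c / c[-1]                           # normalize: compare shapes only

diff = np.abs(response(kern_single) - response(kern_bern)).max()
print(f"max shape difference over 150 years: {diff:.3f}")   # about 0.01

A one-percent-of-scale difference is far smaller than the noise and uncertainty in the real-world record, which is the point: with steadily growing emissions, the data can’t tell the kernels apart.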

Nor do we have any kind of evidence to distinguish between the various sets of parameters used in the Bern Model. As I mentioned above, in the IPCC SAR they used five time constants ranging from 1.33 years to 371.6 years (gotta love the precision, down to tenths of a year).

But in the IPCC Third Assessment Report (TAR), they used only three constants, and those ranged from 2.57 years to 171 years.

However, there is nothing that I know of that allows us to establish any of those numbers. Once again, it seems to me that the authors are just picking parameters.

So … does anyone understand how 13% of the atmospheric CO2 is supposed to hang around for 371.6 years without being sequestered by the faster sinks?

All ideas welcome, I have no answers at all for this one. In a future post I’ll return to the observational evidence on whether the global CO2 sinks are “rapidly diminishing”, and to how I calculate the e-folding time of CO2.

Best to all,

w.

APPENDIX: Many people confuse two ideas, the residence time of CO2, and the “e-folding time” of a pulse of CO2 emitted to the atmosphere.

The residence time is how long a typical CO2 molecule stays in the atmosphere. We can get an approximate answer from Figure 2. If the atmosphere contains 750 gigatonnes of carbon (GtC), and about 220 GtC are added each year (and removed each year), then the average residence time of a molecule of carbon is something on the order of four years. Of course those numbers are only approximations, but that’s the order of magnitude.

The “e-folding time” of a pulse, on the other hand, which they call “tau” or the time constant, is how long it would take for the atmospheric CO2 levels to drop to 1/e (37%) of the atmospheric CO2 level after the addition of a pulse of CO2. It’s like the “half-life”, the time it takes for something radioactive to decay to half its original value. The e-folding time is what the Bern Model is supposed to calculate. The IPCC, using the Bern Model, says that the e-folding time ranges from 50 to 200 years.
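For conversion between the two: a half-life and an e-folding time differ only by a factor of \ln 2,

t_{1/2} = \tau \ln 2 \approx 0.693 \, \tau

so the IPCC’s 50 to 200 year e-folding range corresponds to half-lives of roughly 35 to 140 years.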

On the other hand, assuming normal exponential decay, I calculate the e-folding time to be about 35 years or so based on the evolution of the atmospheric concentration given the known rates of emission of CO2. Again, this is perforce an approximation because few of the numbers involved in the calculation are known to high accuracy. However, my calculations are generally confirmed by those of Mark Jacobson as published here in the Journal of Geophysical Research.

251 Comments
bacullen
May 6, 2012 4:34 pm

The first thing I noticed is the “tau” times calculated to 3 (three!!!) significant figures. A good sign that those involved have NO clue what they are doing. Now let me finish reading…..

DocMartyn
May 6, 2012 4:34 pm

“rgbatduke says:
Now, as to your actual assertion that the rate that CO2 molecules are “snatched from the air” is proportional to the concentration of molecules in the air — absolutely. However, for each mode of removal it is proportional to the total concentration in the air, not the “first fraction, second fraction, third fraction”. The CO2 molecules don’t come with labels, so that some of them hang out anomalously long because they are “tree removal” molecules instead of “ocean removal” molecules. ”
This is indeed true, but it presents a problem to the modelers. They know that any saturable process does not have first-order kinetics as it approaches saturation; but if they allow all the first-order processes to ‘see’ the whole atmospheric [CO2], then they end up with a rate constant that is the sum of all the rates. They have to artificially add in saturation limits, supported by lots of arm waving, to make their box models spit out the result they want: a saturable sink.
Take a look at the marine biota. Total mass 3GtC, annual fixation of carbon, 50GtC. A good fraction of the 50GtC is converted into ‘poop’ and falls to the bottom. If there is oxygen present, some is converted to CO2; a lot is encased in mud. The figure of 150 GtC in the sediments is bollocks; that is only the carbon in the surface of the sediment. There is 20,000,000 GtC of kerogen at the bottom of the oceans; this has been removed from the Wiki figures over the past year. The kerogen is the true sink of the carbon cycle, and it can only have come from the biosphere.
The ultimate test of the Bern box model is to measure the relative ratios of 14C in the ocean depths. After nuclear testing, a large pulse of 14C was generated, and it disappeared from the atmosphere with a t1/2 of about a decade. According to the Bern model, the vast majority of this 14C should be in the upper surface of the ocean, and lower amounts in the ‘saturable’ sinks.
Look at figure 4
http://www.geo.cornell.edu/geology/classes/eas3030/303_temp/Ocean_14C&_acidification_ppt.pdf
14C is higher at depths less than 2000m than at 2000m; this means the flux of particulate 14C to the bottom is high, then the organic material is partly gasified, CO2/CH4, and rises.
The 14C numbers from the H-bomb tests are not well modeled by the Bern box models, but they get around it by having the equilibration time between surface water and air differ depending on whether the isotope is 12C or 14C, arguing that the surface-water-to-air ratio of 12CO2 was at equilibrium while that of 14CO2 was not.

rgbatduke
May 6, 2012 4:36 pm

“… right or wrong in that, but for what it is worth the link to the paper is on the line below:
http://www.john-daly.com/ahlbeck/ahlbeck.htm”

Good paper. Agree or disagree, he is very clear about what he models and the assumptions in it and how he sets his parameters.
“You are kidding right? Of course there is no actual partition it is a model so you can think through how carbon moves in and out of the atmosphere. You do get that, right? You do understand that to change sinks effects you change what is in each bucket (partition) to model how quickly that sink removes it from the atmosphere, right?
Please tell me you are not this rigid in your thought process – where is your degree from?”

I’m not rigid in my thought process at all. I am looking at the equation Willis linked! Are you? Is there something in that equation that makes you think that it could possibly be correct? The point I’ve been making is that even if you remove the exponentially decaying parts from the kernel entirely, you are left with an integral of 0.15* E(t) from -\infty to the present. I can do this integral in my head for any non-compact function E(t) — it is infinite. Ignoring the -\infty and integrating from “a long time ago but not infinity” in such a way that you get the right baseline behavior is obviously wrong in so many ways, if that is what they do.
In any event, this integral basically says that 15% of what is added every year is never going to be removed, and in fact is still around from every belch or fart of CO_2 gas since the planet began. The decay kernel then strictly increases this cumulative concentration; it does not decrease it, so that even if we all vanished from the planet tomorrow \rho_{CO_2} would remain constant for eternity.
This is clearly absurd, as I’ve tried to say so many times now.
We could then go on and address the rest of the kernel, the part that actually might make physical sense, depending on how it is derived. But it is then difficult, actually, to have it make a LOT of sense, because the result would almost certainly have a completely incorrect form if E(t) were suddenly set to zero. I could be convinced otherwise, but it would certainly take some effort, because those “buckets” are basically fairly arbitrary terms in an approximation to a very odd decay function, one that describes a very highly nonexponential process, not a sum of mixed differential processes.
In physics mixed exponential processes are far from unknown. For example, if one activates silver with slow neutrons, two radioactive isotopes are produced with different half-lives. If you try to determine the half-lives from the raw count rate, you find that one of the two isotopes decays much more quickly than the other, so that after a suitable time the observed rate is almost all the slow process. One can fit that, then subtract the back-projected result and fit the faster time constant. Or nowadays, of course, you could use a nonlinear least squares routine to fit the two at the same time and maybe even be able to get the result from a much shorter observation time if you have enough signal.
But note well, two different isotopes. I’m having a very hard time visualizing how, if CO_2 sources all turned off tomorrow, 32% of it would have decayed to 1/e of its initial value within 2.57 years via one channel, but 28% of it will have only gone down by a factor of e^{-2.57/18}, while 25% of the rest will have diminished by a factor of e^{-2.57/171} and 15% of it will not have changed at all. It might even be correct, but what does this mean? All of the CO_2 molecules in the atmosphere are identical. What one is really describing is some sort of saturation (as you might have noted) of some process that can never take up more than 32% of the atmospheric CO_2, no matter how long you wait, with absolutely no sources at all.
At that point I have to say that I become very dubious indeed. First of all, this implies a complete lack of coupling across the “buckets”, which is itself impossible. By the time the fast process has removed 32% of the atmospheric CO_2 (call it ten or fifteen years, depending on how many powers of 1/e you want to call zero), the concentration exposed to the intermediate process has had its baseline dropped by a third or more. This, in turn, destroys the assumptions made in writing out sums of exponentials in the first place, and so its time constant is now meaningless, because the CO_2, unlike the silver atoms, has no label! It is quite possible that whatever process was involved in the 18-year decay channel has switched sign and become a CO_2 source; after all, the reason given for not just summing the exponential decay rates of the independent processes is that they are not independent.
Finally, one then has to question the uniqueness of the decomposition of the decay kernel. Why three terms (plus the impossible fourth term)? How “linearized” were the assumptions that went into constructing it, and how far does \rho_{CO_2} have to change before the assumptions break down? This is a pretty complex model — wouldn’t simpler models work just as well, or even better? Why write the solution as an integral equation at all instead of as a set of coupled ODEs?
The latter is the big question. If E(t) were constant or slowly varying, or there were some kernel of meaning to be extracted from converting the ODEs into an integral equation, there might be some point. But when one looks at the leading constant term, presumably added because without it the model is just wrong, it leads to instantly incorrect asymptotic behavior. Surely that is a signal that the rest of the terms cannot be trusted! The evidence is straightforward — there are times in the past when CO_2 concentration has been much higher. Obviously the monotonic term is fudged relative to the real historical record, or CO_2 now would not be lower. But nothing in this equation predicts the asymptotic equilibrium CO_2 concentration if E(t) is zero. In fact, it creates a completely artificial baseline CO_2 that the decay kernel parts will regress to, one that varies with time to be ever higher now, in spite of the fact that one simply didn’t do the integral over all past times and in fact imposed an arbitrary cut-off or something so that it didn’t diverge.
Am I somehow mistaken in this analysis? Is there some way that the baseline CO_2 concentration produced by this model is not strictly increasing from an absolutely arbitrary amount that is whatever value you choose to assign the integral before you really start to do it, say 1710 years in the past (ten of the slowest decay times)?
I’ve done my share of fitting nonlinear multiple exponentials, and you can get all kinds of interesting things if you have three of them and a constant to play with, but there is no good reason to think that the resulting fit is meaningful or extensible.
rgb
P.S. My degree in physics is from Duke. And I’ve published papers on Langevin models in quantum electrodynamics, and spent a decade doing Monte Carlo and finite size scaling analysis that involved fitting exponentially divergent quantities (and made my share of mistakes, and could easily be mistaken here — this is the first few hours I have looked at the equation, after all). But still, wrong/completely nonphysical asymptotic form is not a good sign when looking at a model, as I point out just as emphatically when it is CAGW doubters (like Nikolov and Zeller who propose a model for explaining atmospheric heating that contains utterly nonphysical dimensioned parameters) that come up with it.
And yeah, it disturbs me a lot to talk about “buckets” in a three-term exponential decomposition of an integral equation kernel supposed to describe a system of great underlying complexity with many feedback channels and mechanisms. It’s too many, or too few. Too few to be a good approximation to a Laplace transform of the actual integral kernel. Too many to be physically meaningful in a simple linearized model. If you want to write G(t - t') = \int a(\kappa) e^{-\kappa (t - t')} d\kappa \approx \sum_i a_i e^{- \kappa_i (t - t')} I’m all for it, but be aware that the a_i you end up with from an empirical fit are, well, shall we say open to debate in any discussion of physical relevance or meaning, especially one where the mechanisms they supposedly represent can themselves have nontrivial functional dependences.
And in the end, if you have a believable model, why not just integrate the coupled ODEs? That’s what I’d do, every time. If nothing else it can reveal places where your linearization hypotheses are terrible, as you add or tweak detail and the model predictions diverge.
Do you disagree?
[Formatting fixed … I think … -w.]
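[A quick illustration of the fit-degeneracy point above, as a minimal Python sketch with entirely synthetic numbers: data generated from a single 35-year exponential plus noise is fitted almost perfectly by the constant-plus-three-exponentials Bern form, and the recovered time constants depend on the starting guess rather than on any physics.]

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 200)
y = np.exp(-t / 35.0) + rng.normal(0, 0.005, t.size)   # one true tau: 35 yr

def bern_form(t, a0, a1, a2, a3, t1, t2, t3):
    # constant plus three exponentials, the Bern-approximation shape
    return a0 + a1*np.exp(-t/t1) + a2*np.exp(-t/t2) + a3*np.exp(-t/t3)

p0 = [0.15, 0.3, 0.3, 0.25, 300.0, 20.0, 3.0]          # arbitrary first guess
popt, _ = curve_fit(bern_form, t, y, p0=p0, bounds=(0, np.inf), maxfev=20000)
resid = y - bern_form(t, *popt)
print("fitted taus:", popt[4:])     # degenerate: they depend on the initial guess
print("rms residual:", resid.std())  # ~ the noise level, i.e. an excellent fit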

May 6, 2012 4:36 pm

IMO … until someone can produce evidence that some atmospheric CO2 changes into SUPER CO2 that is time-resistant to sequestration …
CO2 sinks are blind to the object [CO2].
The only evidence we have is the variability of the sinks – some do it faster.

jorgekafkazar
May 6, 2012 4:41 pm

rgbatduke: I often read the thread backwards (for various reasons), and I’ve learned to distinguish your comments well before I scroll all the way up to your name. They really stand out. Thanks for participating.

Rob Z.
May 6, 2012 4:43 pm

This model doesn’t seem to jibe with the idea that the atmosphere is well mixed. It would seem to me that the model would be better characterized by using diffusion models similar to those used for electrochemical systems in solution.

Rosco
May 6, 2012 5:03 pm

Freeman Dyson proposed “growing” topsoil (biomass) as a means of fighting climate change – which he is sceptical of – as this would be more cost effective than reducing emissions and have a positive benefit for agriculture whilst drawing down excess CO2.
Why haven’t the greens supported this innovative idea?
I think it clearly shows the “climate change” debate is about political power and not much else!

MJB
May 6, 2012 5:09 pm

It reads like a weighted average that has not yet been averaged – and perhaps shouldn’t be. If so, then the 1.33-year sink never does fill up, and indeed does keep sequestering; however, the size of the “pipe” is only 8%, so it cannot do the whole job in 1.33 years (it would take about 17 years). Meanwhile, while the 1.33 is busy running, so are the slower sinks. So the combined rate would be something less than 17. To go back to the tank of water example, it is like having a single tank with lots of pipes to drain it, let’s say 100. 8 of those pipes are of a size that would empty the tank in 1.33 years if there were 100 of that size. The other 92 pipes on our tank are sized to correspond to the sequestration (drainage) rate of the other partitions. To try a different analogy, it’s like having 100 people drinking from a pitcher of beer the size of a swimming pool. Some are using garden hoses, some are using straws, and others are using fibre optics. The pool eventually empties, everyone gets some beer, just some get a lot more than others.
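[The combined-rate arithmetic in the single-tank-many-pipes picture is easy to check: if every “pipe” drains the same well-mixed tank, the first-order rates simply add. A quick Python sketch using the SAR fractions and time constants from the post:]

import numpy as np

# Parallel first-order sinks on one well-mixed reservoir: rates add,
# giving a single exponential with one blended time constant.
a   = np.array([0.13, 0.19, 0.25, 0.21, 0.08])    # partition fractions
tau = np.array([371.6, 55.7, 17.01, 4.16, 1.33])  # years
k_total = (a / tau).sum()
print("combined e-folding time:", 1.0 / k_total, "years")   # ~7.7 years

Which is the nub of the dispute in this thread: unlabeled CO2 draining through all the pipes at once gives one exponential with a blended time constant, not a separate exponential per partition.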

KR
May 6, 2012 5:15 pm

Willis Eschenbach
You asked: “what is the physical mechanism that is partitioning the CO2 so that some of it is sequestered quickly, and some is sequestered slowly?”
Perhaps the writeup you linked to was just not clear enough, or you interpreted the approximations presented as the model itself – if so, my sincere apologies. It seems quite clear to me what the page (http://unfccc.int/resource/brazil/carbon.html) presented: approximations of the results of the Bern model (http://tellusb.net/index.php/tellusb/article/viewFile/15441/17291), i.e., how much and how fast CO2 ends up in different partitions under that model of the carbon cycle, expressed as those percentages. Not parameters, not the model itself, but an approximation of the model results. Results presented so that other researchers could use that approximation in their own work, with the stated caveat that “Parties are free to use a more elaborate carbon cycle model if they choose.”
I’m therefore finding it quite difficult to see how you arrived at the interpretation you posed when writing the original post – that there is somehow an initial “partitioning”. That’s neither a correct description of the Bern model results nor of the UN page you linked to…

Dan Kurt
May 6, 2012 5:23 pm

@Latitude says:May 6, 2012 at 3:26 pm
“Gosh Dan, you just explained how denitrification is possible without carbon………….”
So you are a bean farmer!
Dan Kurt

Nullius in Verba
May 6, 2012 5:39 pm

jimboW,
Thanks. It’s appreciated.
rgbatduke,
“However, for each mode of removal it is proportional to the total concentration in the air, not the “first fraction, second fraction, third fraction”. The CO_2 molecules don’t come with labels,”
As I mentioned above, the fractions are not a separation of the atmosphere into labelled portions, but a consequence of the relative sizes of the reservoirs. If water flows from tank A to tank B until levels equalise, and the tanks have equal surface area, they converge on the midpoint and half the water added to tank A stays there. If you add another bucketload to A, they equalise again and half the new bucket goes to B. It’s not some magic form of CO2 that hangs around for longer, it’s just the effect of the level increasing in the destination reservoir.
“In any event, this integral basically says that 15% of what is added every year is never going to be removed, and in fact is still around from every belch or fart of CO_2 gas since the planet began. The decay kernel then strictly increases this cumulative concentration, it does not decrease it, so that even if we all vanished from the planet tomorrow \rho_{CO_2} would remain constant for eternity.”
Ah, right. I’ve figured out what you’re talking about, now.
Yes, that is what the equations says, because it doesn’t include the very long term geological sequestration processes that take place on the order of thousands of years. Those components would have no significant effect – they’d look effectively constant – over the time intervals they ran the simulations for. The first fraction represents all those time constants too big to measure.
CO2 is conserved. If you chuck it into a system with no exits, it will stay there forever.
The integral is a convolution of the emission history with an impulse response function. If you get a pulse of emissions and then nothing, the emission falls further and further behind, t-t’ gets larger and larger, and the impact of the emissions is weighted by an ever larger negative exponent, shrinking it. It does decay.
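[The two-tank picture is easy to simulate; a minimal Python sketch, arbitrary units and rate: two equal well-mixed reservoirs exchange in proportion to their difference in level, and a pulse added to tank A decays only until half of it remains there, with no labelling of molecules required.]

# Two equal tanks exchanging at a rate proportional to their difference.
k, dt, steps = 0.2, 0.1, 1000
A, B = 1.0, 0.0              # unit pulse in tank A, tank B at baseline
for _ in range(steps):
    flux = k * (A - B)       # net flow from A to B
    A -= flux * dt
    B += flux * dt
print(A, B)                  # both -> 0.5: half the pulse "stays" in A

The remaining half would then only be drawn down by the slower processes that the equation’s first fraction stands in for.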

Nicholas
May 6, 2012 5:42 pm

Hello Willis,
I don’t know anything about this model but I can tell you that there are different physical models which would have similar behaviour.
Consider a thermal model where you have multiple heatsinks with different thermal resistances and heat storage capacity connected to a source of heat via interfaces with different thermal resistances.
A small heatsink connected to your heat source via a low thermal resistance will absorb heat quickly but its temperature will quickly reach equilibrium with the source so it will stop absorbing heat after a short period. At the same time, a large heatsink connected to the heat source via a high thermal resistance will only absorb a small amount of heat but it will take a lot longer to reach equilibrium so it will continue to do so for a long time.
You could come up with a similar electronic model where you have multiple capacitors with different capacities and leakages connected to a node via different value resistors.
I can see how nature could exhibit similar characteristics.
Having said that, their model seems like a clumsy approximation. I don’t know why they don’t use free electronics modeling tools like SPICE which are perfect for examining the response of systems with multiple time constants to perturbations. SPICE has been around for a while and is pretty much perfect for this sort of task. You just have to convert your model into capacitors, resistors and inductors which isn’t very hard. It’s done all the time with thermal modeling.
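[Along the lines Nicholas describes, here is a minimal coupled-ODE sketch in Python, with made-up reservoir sizes and exchange rates: a three-box linear system whose atmospheric pulse response is automatically a sum of exponentials, with time constants set by the eigenvalues of the coupling matrix rather than by any labelling of the CO2.]

import numpy as np
from scipy.integrate import solve_ivp

k_af, k_as = 0.5, 0.02     # exchange rates, 1/yr (illustrative)
s_f, s_s   = 1.0, 10.0     # fast box same size as the atmosphere, slow box 10x

def rhs(t, y):
    atm, fast, slow = y
    f1 = k_af * (atm - fast / s_f)    # net flux: atmosphere -> fast box
    f2 = k_as * (atm - slow / s_s)    # net flux: atmosphere -> slow box
    return [-f1 - f2, f1, f2]

sol = solve_ivp(rhs, (0, 300), [1.0, 0.0, 0.0])   # unit pulse in the atmosphere

M = np.array([[-(k_af + k_as), k_af / s_f, k_as / s_s],
              [k_af, -k_af / s_f, 0.0],
              [k_as, 0.0, -k_as / s_s]])
ev = np.linalg.eigvals(M).real
print("time constants (yr):", sorted(-1.0 / e for e in ev if e < -1e-12))
print("airborne fraction after 300 yr:", sol.y[0, -1])  # approaching 1/12, the equilibrium share

The multi-exponential behaviour falls out of the finite reservoir sizes, exactly as in the heatsink analogy; nothing has to “know” anything.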

MB
May 6, 2012 5:56 pm

All modelers are looking towards the next funding round, and sensationalism wins the day every time. “Nature”, despite its unwarranted kudos, is not a scientific journal; it is a magazine.

thingadonta
May 6, 2012 6:02 pm

I don’t know too much about the carbon cycle in the diagram, but I’m very suspicious about the figures concerning the sediments and the sea. As usual, they have forgotten about volcanoes.
The oceans contain mid-ocean ridges and other undersea volcanoes that exchange vast amounts of CO2 and other elements and minerals with seawater; none of this is represented in the diagram. The mid-ocean ridges themselves stretch for tens of thousands of kilometres. I know from personal experience that sediments adjacent to underwater volcanoes are enriched in carbonate, as I have drilled through thousands of metres of them. This carbonate exists in a complex arrangement with the heat and CO2 sourced from the volcanoes, as well as the carbonate in seawater, and I also suspect these undersea volcanoes buffer the acidity of the oceans as a whole: if the acidity of the ocean goes up, more volcanic carbonate is deposited in the sediments; if the ocean acidity goes down, more carbonate is dissolved.
Volcanism has never really been very popular amongst the greens, because they aren’t very ‘green’ to begin with.

JFD
May 6, 2012 6:03 pm

Let’s back away from the problem just a bit and look at some data on carbon. Carbon in the world is located in the following locations/situations:
99.9% is in the sedimentary rocks in the form of limestone and dolomite
0.002% is in fossil fuels in the form of crude oil, natural gas, lignite and coal
0.06% is in water bodies, primarily the oceans, in the form of CaCO3 and HCO3
0.001% is in the atmosphere in the form of CH4, CO2, CO, VOCs and halogens
0.005% is in the mineral soil in the form of humus, forest litter, and the bottom of mires and bogs
0.001% is in living organisms, mainly vegetation
The question is then how does carbon dioxide convert into (get into) the various forms of carbon storage or sinks? The times for each sink are obviously highly variable, with the water bodies being the shortest, vegetation being second and the sedimentary rocks probably being the longest. With data perhaps one could develop relative time intervals. To me, using different times is acceptable for a model, I just don’t like the times and percentages used by the authors. They have made it too simple and precise a problem.
One has to be careful of pinch points when dealing with something that is only 0.1% of the whole. In human time frames only water bodies and vegetation are of probable interest. Winds and currents are of the most interest, with clearing and replanting of forests and jungles being of important interest as well.
With the CO2 in/out ratio being so constant, I am suspicious of oxidation of limestones and dolomites being undercounted in any material balance calculations. Ninety-nine point nine percent doesn’t have to change very much to sway the other times considerably.

stevefitzpatrick
May 6, 2012 6:05 pm

The CO2 uptake can be fitted to give a reasonable match in a number of different ways. My guess is that the Bern model is very wrong because it ignores a dominant process: thermohaline circulation, which leads to absorption of lots of CO2 at high latitudes in cold ocean regions with deep convection. Some of the sinks in the Bern model are real, but almost certainly the model is not an accurate predictor of future CO2 absorption; it suggests far too short a time to “saturate” the system with CO2. Consider a simpler fit to the data: http://wattsupwiththat.com/2009/05/22/a-look-at-human-co2-emissions-vs-ocean-absorption/ Just as good a fit, and more physically reasonable.
The future absorption of CO2 with rising CO2 in the atmosphere will be much higher than the Bern model suggests, and for a very long time (at least several hundred years).

BernieH
May 6, 2012 6:27 pm

To electrical engineers, the impulse response model (the equation) in the link is quite unremarkable – just an ordinary linear system. On our benches, it would look like a bunch of R-C low-pass circuits in parallel (six of them, I guess). The partitions (gain constants) and time constants are parameters of the model, and could be derived from rho(t) by deconvolution, if we knew E(t). This assumes the system really is linear (the one on the bench is), and is exactly the sum of six real poles. The theory is straightforward, and manipulations such as “Prony’s method” and “Kautz function analysis” are long-established (and quite beautiful).
That noted, attempting to apply this mathematical procedure to a true CO2 concentration curve is, of course, utter nonsense. Likely the CO2 situation is not even linear in the first place, and the measured curves are subject to large systematic and random errors. For a circuit on the bench, we could at least cheat and peek at some of the component values. But for the atmosphere, there are no actual partitions, or separable processes with well-defined characteristic times. There are NO discrete components – let alone ones we could identify and measure! It’s just a silly over-beefy model.
For CO2, it is doubtful there would be any usable physical reality to even a single-pole model. It’s very, very far from being a circuit on a bench.

JFD
May 6, 2012 6:30 pm

Willis, you are one of the great ones. I very much appreciate your keen mind and the quickness and width of your knowledge and interests. I read your treatises first, always.
I just see that it doesn’t take much exposure of near-surface carbonates to the air, arising from landslides, floods, hurricanes, earthquakes, you name it, to introduce enough additional CO2 to the atmosphere to offset the removal by the other sinks. Thus, in human time, there will always be CO2 in the atmosphere no matter what the faster processes do in removing CO2.
I have 99.9% versus .1% in my favor, grin.
JFD

thingadonta
May 6, 2012 6:47 pm

JFD says:
99.9% is in the sedimentary rocks in the form of limestone and dolomite
.002% is the fossils in the form of crude oil, natural gas, lignite and coal
.06% is in water bodies, primarily the oceans, in the form of CaCO3 and HCO3
.001% is in the atmosphere in the form of CH4, CO2, CO, VOCs and halogens
.005% is in the mineral soil in the form of humus, forest litter, bottom of mires and bogs
.001% is in living organisms, mainly vegetation.
No carbon in volcanoes, mid-ocean ridge systems? Ever heard of carbonatite volcanoes?

Chuck Nolan
May 6, 2012 7:21 pm

Bill Illis says:
May 6, 2012 at 1:16 pm
“It will take about 150 years to draw down CO2 to the equilibrium of 275 ppm if we stop adding to the atmosphere each year. Alternatively, we can stabilize the level just by cutting our emissions by 50%”
——————————————–
Why would we want to do that?

JFD
May 6, 2012 7:29 pm

Sure, I’ve heard of carbonatite volcanoes. They have a high percentage of limestone and dolomite (calcium/magnesium carbonates) in them. They are in the 99.9% of carbon listed first in my post.
