Guest Post by Willis Eschenbach
Although it sounds like the title of an adventure movie like the “Bourne Identity”, the Bern Model is actually a model of the sequestration (removal from the atmosphere) of carbon by natural processes. It allegedly describes how fast CO2 is removed from the atmosphere. The Bern Model is used by the IPCC in their “scenarios” of future CO2 levels. I got to thinking about the Bern Model again after the recent publication of a paper called “Carbon sequestration in wetland dominated coastal systems — a global sink of rapidly diminishing magnitude” (paywalled here).
Figure 1. Tidal wetlands. Image Source
In the paper they claim that a) wetlands are a large and significant sink for carbon, and b) they are “rapidly diminishing”.
So what does the Bern model say about that?
Y’know, it’s hard to figure out what the Bern model says about anything. This is because, as far as I can see, the Bern model proposes an impossibility. It says that the CO2 in the air is somehow partitioned, and that the different partitions are sequestered at different rates. The details of the model are given here.
For example, in the IPCC Second Assessment Report (SAR), the atmospheric CO2 was divided into six partitions, containing respectively 14%, 13%, 19%, 25%, 21%, and 8% of the atmospheric CO2.
Each of these partitions is said to decay at different rates given by a characteristic time constant “tau” in years. (See Appendix for definitions). The first partition is said to be sequestered immediately. For the SAR, the “tau” time constant values for the five other partitions were taken to be 371.6 years, 55.7 years, 17.01 years, 4.16 years, and 1.33 years respectively.
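Written out, the SAR fit is just a weighted sum of decaying exponentials. Here is a minimal sketch in Python (assuming, per the description above, that the 14% partition is removed immediately and the five remaining fractions decay with the stated time constants) of the fraction of an emitted pulse still airborne after t years:

```python
import numpy as np

# IPCC SAR Bern-fit parameters as quoted above. The 14% partition is
# treated as sequestered immediately, so only the five decaying
# partitions appear here.
fractions = np.array([0.13, 0.19, 0.25, 0.21, 0.08])
taus = np.array([371.6, 55.7, 17.01, 4.16, 1.33])  # time constants, years

def pulse_remaining(t):
    """Fraction of an emitted CO2 pulse still airborne t years later."""
    return float(np.sum(fractions * np.exp(-t / taus)))

for t in (0, 10, 50, 100):
    print(t, round(pulse_remaining(t), 3))
```

Note that at t = 0 only 86% of the pulse remains (the 14% partition is already gone), and the 13% partition with tau = 371.6 years decays so slowly that it dominates the long tail.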
Now let me stop here to discuss, not the numbers, but the underlying concept. The part of the Bern model that I’ve never understood is, what is the physical mechanism that is partitioning the CO2 so that some of it is sequestered quickly, and some is sequestered slowly?
I don’t get how that is supposed to work. The reference given above says:
CO2 concentration approximation
The CO2 concentration is approximated by a sum of exponentially decaying functions, one for each fraction of the additional concentrations, which should reflect the time scales of different sinks.
So theoretically, the different time constants (ranging from 371.6 years down to 1.33 years) are supposed to represent the different sinks. Here’s a graphic showing those sinks, along with approximations of the storage in each of the sinks as well as the fluxes in and out of the sinks:
Now, I understand that some of those sinks will operate quite quickly, and some will operate much more slowly.
But the Bern model reminds me of the old joke about the thermos bottle (Dewar flask), that poses this question:
The thermos bottle keeps cold things cold, and hot things hot … but how does it know the difference?
So my question is, how do the sinks know the difference? Why don’t the fast-acting sinks just soak up the excess CO2, leaving nothing for the long-term, slow-acting sinks? I mean, if some 13% of the CO2 excess is supposed to hang around in the atmosphere for 371.6 years … how do the fast-acting sinks know not to just absorb it before the slow sinks get to it?
Anyhow, that’s my problem with the Bern model—I can’t figure out how it is supposed to work physically.
Finally, note that there is no experimental evidence that will allow us to distinguish between plain old exponential decay (which is what I would expect) and the complexities of the Bern model. We simply don’t have enough years of accurate data to distinguish between the two.
Nor do we have any kind of evidence to distinguish between the various sets of parameters used in the Bern Model. As I mentioned above, in the IPCC SAR they used five time constants ranging from 1.33 years to 371.6 years (gotta love the accuracy, to six-tenths of a year).
But in the IPCC Third Assessment Report (TAR), they used only three constants, and those ranged from 2.57 years to 171 years.
However, there is nothing that I know of that allows us to establish any of those numbers. Once again, it seems to me that the authors are just picking parameters.
So … does anyone understand how 13% of the atmospheric CO2 is supposed to hang around for 371.6 years without being sequestered by the faster sinks?
All ideas welcome, I have no answers at all for this one. I’ll return to the observational evidence regarding the question of whether the global CO2 sinks are “rapidly diminishing”, and how I calculate the e-folding time of CO2 in a future post.
Best to all,
w.
APPENDIX: Many people confuse two ideas, the residence time of CO2, and the “e-folding time” of a pulse of CO2 emitted to the atmosphere.
The residence time is how long a typical CO2 molecule stays in the atmosphere. We can get an approximate answer from Figure 2. If the atmosphere contains 750 gigatonnes of carbon (GtC), and about 220 GtC are added each year (and removed each year), then the average residence time of a molecule of carbon is something on the order of four years. Of course those numbers are only approximations, but that’s the order of magnitude.
The “e-folding time” of a pulse, on the other hand, which they call “tau” or the time constant, is how long it takes for the excess CO2 added by a pulse to decay to 1/e (about 37%) of its initial value. It’s analogous to the “half-life”, the time it takes for something radioactive to decay to half its original value (half-life = tau × ln 2, or about 0.69 tau). The e-folding time is what the Bern Model is supposed to calculate. The IPCC, using the Bern Model, says that the e-folding time ranges from 50 to 200 years.
On the other hand, assuming normal exponential decay, I calculate the e-folding time to be about 35 years, based on the evolution of the atmospheric concentration given the known rates of emission of CO2. Again, this is perforce an approximation, because few of the numbers involved in the calculation are known to high accuracy. However, my calculations are generally confirmed by those of Mark Jacobson as published here in the Journal of Geophysical Research.
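The arithmetic behind these two appendix numbers can be sketched in a few lines (Python, using the approximate figures above and the standard relation half-life = tau × ln 2):

```python
import math

# Residence time: mean time a given CO2 molecule stays in the atmosphere.
atmosphere_gtc = 750.0   # carbon in the atmosphere, GtC (approximate)
gross_flux_gtc = 220.0   # gross annual exchange with land/ocean, GtC/yr
residence_time = atmosphere_gtc / gross_flux_gtc   # ~3.4 years

# e-folding time: how fast an *excess* decays, set by net (not gross)
# uptake. Using the post's estimate of tau = 35 years:
tau = 35.0
half_life = tau * math.log(2)   # ~24 years for half a pulse to be gone

print(round(residence_time, 1), "yr residence,", round(half_life, 1), "yr half-life")
```

The point of the contrast: the short residence time reflects the huge gross exchange, while the much longer e-folding time reflects the smaller net uptake of an excess.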

I could be mistaken, but it seems to me that this Bern Model has appropriated a perfectly good concept of “impulse response” and other ideas of Laplace transform theory encountered by electrical engineers in “Linear Systems Theory.” The theory is of course correct in its EE version. The climate application is highly flawed: first in its inappropriate misapplication, and then in its poor implementation (at least a failure to do it with orthogonal basis functions).
So we are left with comments here pointing out the failings in the climate application. Quite so. But this does not reflect back and invalidate the theory as used in EE. Perhaps you do need to sketch the corresponding circuit (it involves R-C low-pass sections with a common input E setting the time constants, buffered, weighted, and summed; this is the partitioning lacking in the atmosphere). The equation in the link is correct. The convolution integral does not blow up. Think about electrons on discrete capacitors, not CO2 in the one atmosphere.
Given enough parameters (recall von Neumann’s delicious joke about an elephant modeled with 5 parameters), most mathematical constructions can be made to work locally. A polynomial can model a sinewave locally – but soon runs rapidly to infinity! Wrong choice. Only a fool would try to bake a cake in a refrigerator. But after that failure, should we decide it was not suitable for cooling lemonade?
It does no good to attempt to find fault with established linear systems theory. What seems to be wrong is the inappropriate application attempt, or at least considering it anything more than a local model (no physical meaning).
a2videodude says:
“The bottom line is that simultaneously deducing the distribution of amounts AND half-lives from decay data (either radioactive decay or CO2 concentration decay) is incredibly difficult and the uncertainties are enormous, because the functions you are using to model the decay (a series of exponentials) are far, far from being orthogonal. Any negative exponential can, to excellent accuracy, be approximated by a sum of other exponentials with different decay rates. You can either deduce the decay rate if you know you have a single (or at least very simple but known) combination of reservoirs, or you can deduce the amounts in different reservoirs if you know their decay rates independently. You just can’t do both things simultaneously to any useful degree.”
Absolutely correct. In a chemical reaction the measured first order rate constant IS the sum of all the first order rate constants, which are individual collisions of molecules of differing energy and colliding on different vectors.
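a2videodude’s point is easy to demonstrate numerically. In this sketch (Python, with arbitrary hypothetical time constants chosen purely for illustration), a single 20-year exponential is fitted by a least-squares combination of three exponentials with quite different decay rates; the fit error comes out at the few-percent level or below, far smaller than the noise in real-world CO2 data, which is why the decomposition cannot be recovered from observations:

```python
import numpy as np

t = np.linspace(0.0, 100.0, 500)        # years
target = np.exp(-t / 20.0)              # a single decay, tau = 20 yr

# Basis: exponentials with deliberately different (hypothetical) taus.
basis_taus = [5.0, 15.0, 45.0]
A = np.column_stack([np.exp(-t / tau) for tau in basis_taus])

# Ordinary linear least squares for the mixing coefficients.
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
residual = target - A @ coef

print("coefficients:", np.round(coef, 3))
print("max fit error:", float(np.max(np.abs(residual))))
```

In other words, very different reservoir decompositions produce curves that are observationally indistinguishable.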
I normally cringe from citing Wikipedia, but there is an interesting plot of carbon-14 concentration since 1945 at en.wikipedia.org/wiki/Carbon-14 that shows an exponential decay (removal of C-14 excess over background levels) consistent with an e-folding time on the order of a decade. The reason for the excess? Atmospheric nuclear testing. A never-to-be-repeated experiment.
Willis:
Thank you for your comment to me that says:
“Richard, your paper is paywalled, so I fear I won’t be able to comment. That may be the reason your claims have gotten little traction, because you are referring to an unavailable citation.”
OK, I understand that, and I am not arguing that it get “traction”. In this thread I have been pointing out what the paper says so those points can be considered in the context of arguments about the Bern Model.
Also, I have a personal difficulty in that the paper was published in E&E and I am now on the Editorial Board of E&E so I cannot give the paper away. That said, I presented a version of it at the first Heartland Climate Conference and that version is almost completely a ‘cut and paste’ from the paper so I could send you a copy of that if you are interested to read it.
Regards
Richard
Willis Eschenbach
I spent some time re-reading the thread, and you are completely correct. It was entirely inappropriate for me to ascribe ill intent, and I would like to sincerely apologize for doing so in what should be a discussion of the science. Mea culpa, I was wrong. Please – call me on such things if I cross that line again.
—
With respect to the science: I thought it was quite clear that the exponentials and constants in the link you provided (http://unfccc.int/resource/brazil/carbon.html) are not the Bern model itself, which is described in Siegenthaler and Joos 1992 (http://tellusb.net/index.php/tellusb/article/viewFile/15441/17291). That is a multi-box model involving eddies, surface uptake, and the physics of mid-term CO2 absorption, with transport parameters calibrated against carbon-14 distribution measurements – complexities not in that page of exponentials.
Rather, they are approximate exponentials and fractions fitted to the results of running the Bern model, providing other investigators with some tools to estimate mid-term CO2 effects. Along with the caveat that “Parties are free to use a more elaborate carbon cycle model if they choose”. For example, the Bern model does not include CaCO3 chemistry or silicate weathering, long term carbon sinks.
So, as a starting point of discussion on the science:
– Is it clear that those exponentials are not the Bern model?
I am just amazed that nobody has commented on the relationship I have pointed out numerous times, which clearly shows that temperature has been driving CO2 concentration. It is obvious here that there simply is no need to consider human emissions to any significant level. This very simple observation kicks the very foundation out from under the Climate Change imbroglio.
KR says:
May 7, 2012 at 4:47 pm
KR, my thanks to you. It is the mark of an honest man and a gentleman to acknowledge when he has gone over the line. You have my sincere acknowledgement and appreciation.
I thought I had been clear above when I said:
My point is that whether you are using the Bern Model itself, or the simpler model described in the paper that I linked, it needs to be physically plausible and lead to physically plausible results. The problem is that the simple model that I linked to, which emulates the Bern Model, does neither.
My best to you, and again my thanks for your honesty,
w.
Willis Eschenbach – “…as far as I can see, the Bern model proposes an impossibility. It says that the CO2 in the air is somehow partitioned, and that the different partitions are sequestered at different rates”
I really don’t understand this statement. There are multiple parallel and serial processes occurring in the fairly simple Bern model, and the approximations (constants for weighting and time factor) are just approximations of the various and multiple (parallel and serial) inter-compartment transfer rates. The percentages allow considering different CO2 pulses, and the time factors describe how those inter-compartment movements play out.
There certainly is no initial partitioning of a CO2 input in the Bern model. The complexities of the model, its behavior, are just curve-fitted. Much in the way nuclear reactor fuel decay can be fit with a decaying exponential, regardless of the internal physics – a behavioral description. The exponentials are purely descriptive, behavioral analogs to the Bern model, which was (if I interpret that initial page correctly) provided as one potential resource. The issues raised with compartmentalization are really irrelevant.
I (IMO) don’t believe it’s appropriate to criticize the Bern (or any other model) from that standpoint – two steps back, arguing about the curve-fit to the model behavior. Rather, if you wish to truly critique a model, you need to show where the model itself breaks down. And since there is as far as I can see no discussion here of the assumptions, parameters (fit to among other things the carbon-14 data), and compartmentalization of that model, what is being discussed really isn’t the model at all.
That is not to say that the Bern model is a thing of perfection. It’s 20 years old, does not include long term sequestration such as CaCO3 formation or rock weathering, has a simplistic biosphere compartment, and (as Joos et al note in their paper) has some latitude dependent inaccuracies in replicating C-14 measurements. But to quote George Box, “Essentially, all models are wrong, but some are useful”. A critique of this model needs to show how the model fails to meet observations – something that simply hasn’t been done on this thread.
There has been no demonstration that the model itself isn’t useful – that requires an evaluation against the data. If it fits the data, it’s useful. If it doesn’t, it’s not. I have seen no discussion of the model behavior against observations.
KR says:
May 7, 2012 at 7:53 pm
“A critique of this model needs to show how the model fails to meet observations – something that simply hasn’t been done on this thread.”
Ahem..
Bart
The Bern model is a carbon cycle model, not a temperature model, and the observations used to calibrate the Bern model are carbon-14 distributions in the oceans, also checked against (IIRC) CFC-11 distributions. In regards to the mid-term carbon cycle, the model being discussed is reasonably accurate – it matches those observations.
It would be an error to assume that CO2 is the only forcing WRT temperature, however – methane, CFC’s, aerosols, solar, and the ENSO variations are also in play. All of those affect climate forcings (and hence temperatures) as well. And all of those need to be (and are, in the literature) considered when looking at forcings and climate responses – issues beyond the realm of the carbon-cycle model discussed in this thread.
I am going to repeat what I see as a couple key points, and then add one new thought that may help in pulling them together:
1) As mentioned, the 4 exponential equation is a fit to a more complex model. The fit is statistical, not physical. The underlying model, however, is physical, with diffusive oceans and ecosystems. The first web page I found when googling Bern carbon cycle model has a bit of a description: http://www.climate.unibe.ch/~joos/model_description/model_description.html. Note that even the Bern model is simple compared to the models used by carbon cycle researchers.
2) Nullius’ description of 3 compartments is a decent one for getting the key concept, which is that you have 2 compartments which reach equilibrium with a pulse of emissions (or added water) on one time scale, and a 3rd which reaches equilibrium with the first two on a longer time scale, and some percentage of the added water remains in the original compartment forever.
3) My key additional point, then, is that the Bern cycle approximation is meant to apply to one specific scenario, which is a pulse of carbon emissions in a system which starts at equilibrium. This is why it doesn’t match intuition applied to phenomena like constant airborne fractions, and is only a rough guide to the effect of a stream of emissions over a number of years. Though, as a side note, a constant airborne fraction is a number that depends on the rate of emissions increase, and therefore isn’t a reliable source of information about sink saturation: if next year, human emissions were to drop by a factor of 10, based on my understanding I would predict a reduction in CO2 concentrations (because emissions would be smaller than the sink), so airborne fraction would become negative too. Or if emissions grew by a factor of 10, the airborne fraction would probably grow pretty large, because the sink would not grow nearly as fast. My intuition on sinks is based on the assumption that while there is probably pretty fast equilibration between the very surface ocean and some parts of the ecosystem, the year-to-year changes in sink size will be interactions with the slower moving parts of the system which are driven by the difference between the concentration in the fast-mixing layer and the medium mixing layer.
4) While there is a “permanent” part of the Bern cycle approximation, it isn’t really permanent – carbonate formation does eventually take carbon out of the cycle and back into deep ocean and thence to sedimentary rock (on a greater than ten thousand year timescale), where it will eventually be subsumed, and in millions of years may eventually end up being outgassed by a volcano.
-MMM
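MMM’s side note in point 3, that the airborne fraction tracks the emissions path rather than sink saturation, can be illustrated with a toy one-box model. This is only a sketch: it assumes a single 35-year e-folding sink and made-up emission numbers, nothing from the Bern model itself:

```python
import numpy as np

TAU = 35.0   # assumed e-folding time of an atmospheric excess, years

def airborne_fractions(emissions):
    """One-box sketch: dC/dt = E(t) - C/TAU, stepped yearly.

    C is the excess above equilibrium; returns the yearly
    airborne fraction dC / E."""
    c, fractions = 0.0, []
    for e in emissions:
        dc = e - c / TAU
        c += dc
        fractions.append(dc / e)
    return fractions

growing = list(6.0 * 1.02 ** np.arange(50))     # 2%/yr growth, 50 years
af_grow = airborne_fractions(growing)

# Cut emissions tenfold: the sink now exceeds the source, concentration
# falls, and the "airborne fraction" goes negative.
af_cut = airborne_fractions(growing + [growing[-1] / 10.0] * 10)

print("AF while growing:", round(af_grow[-1], 2))
print("AF after the cut:", round(af_cut[-1], 2))
```

With steadily growing emissions the airborne fraction settles near a constant; after the cut it flips sign, exactly as MMM predicts, with no change in the sink at all.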
KR says:
May 7, 2012 at 8:26 pm
“It would be an error to assume that CO2 is the only forcing WRT temperature…”
Indeed it would. CO2 is not forcing temperature. Temperature is forcing CO2. The fact that the derivative of CO2 is highly correlated with temperature anomaly establishes it. As I related above, the forcing cannot be the other way around without producing absurd consequences.
“In regards to the mid-term carbon cycle, the model being discussed is reasonably accurate – it matches those observations. “
A subjective exercise in curve fitting which cannot gainsay the above.
MMM says:
May 7, 2012 at 8:30 pm
“Note that even the Bern model is simple compared to the models used by carbon cycle researchers.”
I came upon this remarkable relationship between the derivative of CO2 concentration and temperature by accident just last week. It is so blindingly clear that temperature is driving the CO2 concentration that it took me aback. How could this relationship have been missed when researchers have been looking at the problem for decades, and have what are undoubtedly elaborate models into which much time of very smart people has been invested?
The answer: they did not have the observations – the strong correlation has only recently become evident. CO2 has only been reliably sampled since 1958, and a real kink in the rate of change has only come about with the last decade’s lull in temperature rise.
I can only surmise that others who have taken the time to pay attention to what I have put forward here are similarly taken aback, and do not yet know how to respond. Can the solution to the riddle actually be so easy?
Yes, it can.
For those who have not been following along, the way in which CO2 concentration can be insensitive to human inputs while its derivative is effectively proportional to delta-temperature is explained here.
Bart,
Atmospheric concentrations of CO2 are only slightly sensitive to anthropogenic emissions, which at present make up less than 10%. http://www.retiredresearcher.wordpress.com.
Willis — A multiple exponential equation is typically the solution to a system of linear differential equations.
To take a simple example, reduce Fig. 2 to three reservoirs — Atmosphere A, Surface Ocean S, and Deep Ocean D. Assume their equilibrium capacities are proportional to the given values, 750, 1020, and 38,100 GtC. If this is an equilibrium, the flows in and out must be equal, so let’s take the averages and say that S to A and back is 91 GtC/yr and S to D and back is 96 GtC/yr, that these are the instantaneous rates of flow, and that if any reservoir were to change, its outflow(s) would change proportionately.
Then
d/dt A = -(91/750)A + (91/1020)S,
d/dt S = +(91/750)A – (91/1020)S – (96/1020)S + (96/38100)D ,
d/dt D = +(96/1020)S – (96/38100)D.
Setting x equal to the column vector (A, S, D)’, this has the form
d/dt x = B x, where
B = (-91/750 91/1020 0;
91/750 -187/1020 96/38100;
0 96/1020 -96/38100)
The general solution of this system has the form
x = Sum{ c_j exp(d_j t) v_j},
where the column vectors v_j are the right eigenvectors of B and d_j the corresponding eigenvalues.
For this B, the eigenvalues are -.2615, -.0457, 0. Since I assumed for simplicity that all the C is in one of the 3 reservoirs, the changes sum to 0, B is singular, and there is one zero eigenvalue. (If I had included a permanent sink like sediments or biomass, all eigenvalues would be negative and the system would be stationary, but the math works either way.)
The corresponding matrix V of eigenvectors is approximately V =
(-.51 -.44 .02;
.81 -.36 .03;
-.29 .82 1.00)
The column vector c of weights is determined by the initial condition x_0 = V c, so that
c = V^-1 x_0. For simplicity we may take all variables as deviations from the initial equilibrium. If x_0 = (1, 0, 0)’, so that we are adding 1 GtC to the initial value of A, c will be
c = (-.69, -1.42, .96)’. Over time, A will then be
A = c_1 v_1,1 exp(d_1 t) + c_2 v_1,2 exp(d_2 t) + c_3 v_1,3 exp(d_3 t)
= .35 exp(-.26 t) + .63 exp(-.05 t) + .02.
The three e-fold times are 3.8, 22, and inf years. The initial unit injection is “partitioned” into three portions of size .35, .63, .02 with different decay rates, but there is in fact no difference between the gas in the three portions. The sum simply evolves according to this equation.
The same approach can be used to solve more complicated systems, stationary or nonstationary. Does this help?
[Formatting fixed. -w.]
Thanks, Willis!
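Hu McCulloch’s three-reservoir example above is easy to reproduce numerically. A sketch in Python/numpy (note that numpy may return the eigenvalues in a different order than listed above):

```python
import numpy as np

# Hu McCulloch's three reservoirs: Atmosphere, Surface ocean, Deep ocean.
B = np.array([
    [-91/750,   91/1020,       0.0],
    [ 91/750, -187/1020,  96/38100],
    [    0.0,   96/1020, -96/38100],
])

eigvals, V = np.linalg.eig(B)        # d_j and right eigenvectors v_j

# Add 1 GtC to the atmosphere: x0 = (1, 0, 0)', weights c = V^-1 x0.
x0 = np.array([1.0, 0.0, 0.0])
c = np.linalg.solve(V, x0)

def x(t):
    """x(t) = Sum_j { c_j exp(d_j t) v_j }."""
    return (V * np.exp(eigvals * t)) @ c

print("eigenvalues:", np.sort(eigvals))       # ~ -0.2615, -0.0457, 0
print("A, S, D after 50 yr:", np.round(x(50), 3))
```

Total carbon is conserved (the components of x(t) always sum to 1), and the atmospheric component follows the .35/.63/.02 decomposition derived above even though, as Hu notes, the gas itself is never physically partitioned.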
Bart says: May 7, 2012 at 9:00 pm
“I came upon this remarkable relationship between the derivative of CO2 concentration and temperature by accident just last week.”
Hi Bart – I discovered this dCO2/dt vs T relationship in late December 2007, emailed it to a few friends including Roy Spencer, and published in Jan 2008 at
http://icecap.us/images/uploads/CO2vsTMacRae.pdf
Please see my post above at May 7, 2012 at 3:51 am
____________________________
Nullius in Verba says: May 7, 2012 at 4:50 am
“Allan McRae, Nice analysis! Temperature variations cause a lagged CO2 response because of the solubility pump’s dependence on temperature. But CO2 change is contributed to by many sources and sinks, and just because one component is caused by temperature doesn’t mean all the others are.”
Nullius, I don’t think I’ve ever said it’s just about solubility – it’s clearly not. There is a solubility component, and also a huge biological component, and others…. I did say the following in the above 2008 paper:
“Veizer (2005) describes an alternative mechanism (see Figure 1 from Ferguson and Veizer, 2007, included herein). Veizer states that Earth’s climate is primarily caused by natural forces. The Sun (with cosmic rays – ref. Svensmark et al) primarily drives Earth’s water cycle, climate, biosphere and atmospheric CO2.”
See Murry Salby’s more recent work where (I recall) he included both “temperature” AND “soil moisture” as drivers of CO2 and got a somewhat better correlation coefficient. I have not reviewed his work in any detail.
I further think the science is substantially more complicated, with several temperature cycle lengths, each with its associated CO2 lag time.
So – is the current increase in atmospheric CO2 largely natural or manmade?
Please see this 15fps AIRS data animation of global CO2 at
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4
It is difficult to see the impact of humanity in this impressive display of nature’s power.
All I can see is the bountiful impact of Spring, dominated by the Northern Hemisphere with its larger land mass, and some possible ocean sources and sinks.
I’m pretty sure all the data is there to figure this out, and I suspect some already have – perhaps Jan Veizer and colleagues.
Allan MacRae says:
May 7, 2012 at 10:31 pm
Well, Allan, count me an enthusiastic supporter of your position. When you view things the right way, the relationship just comes screaming out at you. Kudos for your writeup.
I knew CO2 and temperatures exhibited seasonal fluctuations which I assumed were correlated in some way, but I never realized there was such a pronounced long term correlation with the derivative and the temperatures. The alleged driving influence of human emissions can now be summed up in the famous words of Laplace: I have no need of that hypothesis.
Bart:
At May 7, 2012 at 9:00 pm you say and ask:
“I came upon this remarkable relationship between the derivative of CO2 concentration and temperature by accident just last week. It is so blindingly clear that temperature is driving the CO2 concentration that it took me aback. How could this relationship have been missed when researchers have been looking at the problem for decades, and have what are undoubtedly elaborate models into which much time of very smart people has been invested?”
The relationship is a demonstration of Nigel Calder’s “CO2 Thermometer” which he first proposed in the 1990s. He describes it with honest appraisal of its limitations at
http://calderup.wordpress.com/2010/06/10/co2-thermometer/
And never forget the power of confirmation bias powered by research funding.
In 2005 I gave the final presentation on the first day of a conference in Stockholm. It explained how atmospheric CO2 concentration could be modelled in a variety of ways that were each superior to the Bern Model, and each gave a different development of future atmospheric CO2 concentration for the same input of CO2 to the air.
I then explained what I have repeatedly stated in many places including on WUWT; i.e.
The evidence suggests that the cause of the recent rise in atmospheric CO2 is most probably natural, but it is possible that the cause may have been the anthropogenic emission. Importantly, the data shows the rise is not accumulation of the anthropogenic emission in the air (as is assumed by e.g. the Bern Model).
A representative of KNMI gave the first presentation of the following morning. He made no reference to my presentation and he said KNMI intended to incorporate the Bern Model into their climate model projections.
So, I conclude that what is knowable is less important than what is useful for climate model development.
Richard
PS Apologies if this is a repost
Thank you Bart for your kind words,
While the dCO2/dt vs Temperature relationship is new information, I suspect that the lag of CO2 after temperature at different time scales (~800 year lag in ice core data, ~9 months in the modern instrument data record) has long been known, and only recently “swept under the rug” by global warming mania. Here are two papers from 1990 and 1995 on the multi-month CO2-after-temperature delay, first brought to my attention as I recall by Richard S Courtney:
Keeling et al (1995)
http://www.nature.com/nature/journal/v375/n6533/abs/375666a0.html
Nature 375, 666 – 670 (22 June 1995); doi:10.1038/375666a0
Interannual extremes in the rate of rise of atmospheric carbon dioxide since 1980
C. D. Keeling*, T. P. Whorf*, M. Wahlen* & J. van der Plicht†
*Scripps Institution of Oceanography, La Jolla, California 92093-0220, USA
†Center for Isotopic Research, University of Groningen, 9747 AG Groningen, The Netherlands
________
OBSERVATIONS of atmospheric CO2 concentrations at Mauna Loa, Hawaii, and at the South Pole over the past four decades show an approximate proportionality between the rising atmospheric concentrations and industrial CO2 emissions. This proportionality, which is most apparent during the first 20 years of the records, was disturbed in the 1980s by a disproportionately high rate of rise of atmospheric CO2, followed after 1988 by a pronounced slowing down of the growth rate. To probe the causes of these changes, we examine here the changes expected from the variations in the rates of industrial CO2 emissions over this time, and also from influences of climate such as El Niño events. We use the 13C/12C ratio of atmospheric CO2 to distinguish the effects of interannual variations in biospheric and oceanic sources and sinks of carbon. We propose that the recent disproportionate rise and fall in CO2 growth rate were caused mainly by interannual variations in global air temperature (which altered both the terrestrial biospheric and the oceanic carbon sinks), and possibly also by precipitation. We suggest that the anomalous climate-induced rise in CO2 was partially masked by a slowing down in the growth rate of fossil-fuel combustion, and that the latter then exaggerated the subsequent climate-induced fall.
________
Kuo et al (1990)
http://www.nature.com/nature/journal/v343/n6260/abs/343709a0.html
Nature 343, 709 – 714 (22 February 1990); doi:10.1038/343709a0
Coherence established between atmospheric carbon dioxide and global temperature
Cynthia Kuo, Craig Lindberg & David J. Thomson
Mathematical Sciences Research Center, AT&T Bell Labs, Murray Hill, New Jersey 07974, USA
THE hypothesis that the increase in atmospheric carbon dioxide is related to observable changes in the climate is tested using modern methods of time-series analysis. The results confirm that average global temperature is increasing, and that temperature and atmospheric carbon dioxide are significantly correlated over the past thirty years. Changes in carbon dioxide content lag those in temperature by five months.
________
As you can see, Keeling believed that humankind was also causing an increase in atmospheric CO2. I’m not convinced, since human emissions of CO2 are still small compared with the natural seasonal flux. I think human CO2 emissions are lost in the noise and are not a significant driver. More likely, the current increase in CO2 is primarily natural. I’ve heard ~all the counter-arguments by now, including the C13/C12 one, and don’t think they hold up.
It is possible that the current increase in atmospheric CO2 is primarily driven by the Medieval Warm Period, ~800 years ago. The “numerical counter-arguments” rely upon the absolute accuracy of the CO2 data from ice cores. While I think the trends in the ice core data are generally correct, the values of the CO2 concentrations are quite possibly not absolutely accurate, and then the “numerical counter-arguments” fall apart.
Regards, Allan
Hu McCulloch, that would be a reasonable description of the system, including the magic words ‘Assume their equilibrium’. However, the system is not at equilibrium; indeed, it is far from equilibrium. There are two zones of high biotic density: the first few meters at the top and the first few centimeters at the bottom. CO2 is a biotic gas and is denuded from the surface layer as photosynthetic organisms devour it, generating oxygen. CO2 flux from the atmosphere and the lower depths to this area is high. Particulate organic matter rains down from the surface, enriched with 14C. Some is intercepted and converted to CO2/CH4, but a reasonable amount reaches the bottom. Look at the numbers once again, and slice the ocean into a layer cake of 1 m thick layers. The bottom layer has a huge amount of carbon, and also has a higher 14C/12C ratio than the bottom 3 kilometers of water. There is a very rapid transport, on the order of a year, of organic matter directly to the bottom of the oceans.
If one wishes to defend the Bern CO2 model, do this experiment: a priori, calculate the equilibrium concentration of molecular oxygen with ocean depth. This should be trivial, as ~21% atmospheric oxygen gives about 250 micromolar aqueous O2 at the surface. If the O2 concentration does not follow the physical model of oxygen partition with respect to temperature/pressure, then one must ask why CO2 should.
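For what it’s worth, the ~250 micromolar surface O2 figure is about what Henry’s law predicts. A quick sketch, assuming a Henry’s-law solubility constant for O2 of roughly 1.3e-3 mol/(L·atm) at 25 °C (an assumed textbook value; colder water holds more):

```python
# Henry's law check of the surface dissolved-oxygen estimate.
k_h = 1.3e-3      # assumed O2 solubility, mol/(L*atm), ~25 C
p_o2 = 0.209      # O2 partial pressure at 1 atm total (20.9% of air)

c_o2_uM = k_h * p_o2 * 1e6   # convert mol/L to micromolar
print(round(c_o2_uM), "uM dissolved O2 at the surface")
```

That lands in the same ballpark as the figure quoted above, so the surface number at least is uncontroversial; the question raised here is about the depth profile.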
fhhaynie says:
Thank you for the link.
I would like to see that cross posted to WUWT BTW.
In the article it says
If the atmosphere is “accumulating the lighter CO2 faster” and “the lighter is more from organic origin” would this not indicate the increase in CO2 is more organic in origin and not from burning fossil fuels (inorganic)? (I haven’t had my morning tea yet and may be a bit blurry mentally)
On the other hand I consider coal complete with fossil ferns as “organic”
Gail,
Fossil fuels are of organic origin and have δ13C values between about -23 and -30 per mil.
“It says that the CO2 in the air is somehow partitioned, and that the different partitions are sequestered at different rates. ”
My understanding is that they simulated their “box model” to get its impulse response. They then fitted three or four exponentials, plus a constant, to the resulting impulse response.
As I said I am a bit blurry still. Dr Spencer addressed the “natural” vs “man-made” argument about the C12 – C13 ratio here:
Atmospheric CO2 Increases: Could the Ocean, Rather Than Mankind, Be the Reason?
Spencer Part2: More CO2 Peculiarities – The C13/C12 Isotope Ratio
The fact that these carbon isotope ratios are taken at Mauna Loa – the site of an active volcano that, between eruptions, emits variable amounts of carbon dioxide on one hand, and a CO2-“active” ocean affected by ENSO on the other – does not give me much confidence in the C13/C12 carbon isotope ratio as the purported signature of anthropogenic CO2.
That is a really small change in signal they are talking about especially given the mythical nature of CO2 as a gas well mixed in the atmosphere.
Further to Bart:
I am still pondering my conclusions in my 2008 paper – as some critics have noted, there are two drivers of CO2, the humanmade component and the natural component, and both can be having a significant effect. Critics suggest the humanmade component is dominant; I suggest the natural component is dominant.
Following my email to him, Roy Spencer also wrote on this subject at
http://wattsupwiththat.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/
One more reference on this subject is by climate statistician William Briggs, at
http://wmbriggs.com/blog/2008/04/21/co2-and-temperature-which-predicts-which/
Prior work, which I became aware of after writing my 2008 paper, includes:
Pieter Tans (Dec 2007)
http://esrl.noaa.gov/gmd/co2conference/agenda.html
Tans noted the [dCO2/dt : Temperature] relationship but did not comment on the ~9 month lag of CO2.