The bombtest curve and its implications for atmospheric carbon dioxide residence time

Aerial view of atomic bomb test on Bikini Atoll, 1946; showing “mushroom” beginning. Part of Operation Crossroads; alternate angle of Baker explosion (Photo credit: Wikipedia)

Studies of the carbon-14 emitted into the atmosphere by nuclear tests indicate that the Bern model used by the IPCC is inconsistent with virtually all reported experimental results.

Guest essay by Gösta Pettersson

The Keeling curve establishes that the atmospheric carbon dioxide level has shown a steady long-term increase since 1958. Proponents of the anthropogenic global warming (AGW) hypothesis have attributed the increasing carbon dioxide level to human activities such as combustion of fossil fuels and land-use changes. Opponents of the AGW hypothesis have argued that this would require the turnover time for atmospheric carbon dioxide to be about 100 years, which is inconsistent with a multitude of experimental studies indicating that the turnover time is of the order of 10 years.

Since its constitution in 1988, the United Nations’ Intergovernmental Panel on Climate Change (IPCC) has disregarded the empirically determined turnover times, claiming that they have no bearing on the rate at which anthropogenic carbon dioxide emissions are removed from the atmosphere. Instead, the fourth IPCC assessment report argues that the removal of carbon dioxide emissions is adequately described by the ‘Bern model’, a carbon cycle model designed by prominent climatologists at the University of Bern. The Bern model is based on the presumption that the increasing levels of atmospheric carbon dioxide derive exclusively from anthropogenic emissions. Tuned to fit the Keeling curve, the model prescribes that the relaxation of an emission pulse of carbon dioxide is multiphasic, with slow components reflecting slow transfer of carbon dioxide from the oceanic surface to the deep-sea regions. The problem is that empirical observations tell us an entirely different story.

The nuclear weapon tests of the early 1960s initiated a scientifically ideal tracer experiment describing the kinetics of removal of an excess of airborne carbon dioxide. By the time atmospheric bomb tests ceased in 1963, they had raised the air level of C14-carbon dioxide to almost twice its original background value. The relaxation of this pulse of excess C14-carbon dioxide has now been monitored for fifty years. Representative results, providing direct experimental records of more than 95% of the relaxation process, are shown in Fig. 1.

 


Figure 1. Relaxation of the excess of airborne C14-carbon dioxide produced by atmospheric tests of nuclear weapons before the tests ceased in 1963

The IPCC has disregarded the bombtest data in Fig. 1 (which refer to the C14/C12 ratio), arguing that “an atmospheric perturbation in the isotopic ratio disappears much faster than the perturbation in the number of C14 atoms”. That argument is difficult to follow and is certainly incorrect. Fig. 2 shows the data in Fig. 1 after rescaling and correction for the minor dilution effects caused by the increased atmospheric concentration of C12-carbon dioxide during the examined period.


Figure 2. The bombtest curve. Experimentally observed relaxation of C14-carbon dioxide (black) compared with model descriptions of the process.

The resulting series of experimental points (black data in Fig. 2) describes the disappearance of “the perturbation in the number of C14 atoms”. It is almost indistinguishable from the data in Fig. 1 and will be referred to as the ‘bombtest curve’.

To draw attention to the bombtest curve and its important implications, I have made public a trilogy of strict reaction kinetic analyses addressing the controversial views expressed on the interpretation of the Keeling curve by proponents and opponents of the AGW hypothesis.

(Note: links to all three papers are below also)

Paper 1 in the trilogy clarifies that

a. The bombtest curve provides an empirical record of more than 95% of the relaxation of airborne C14-carbon dioxide. Since kinetic carbon isotope effects are small, the bombtest curve can be taken to be representative of the relaxation of emission pulses of carbon dioxide in general.

b. The relaxation process conforms to a monoexponential relationship (red curve in Fig. 2) and hence can be described in terms of a single relaxation time (turnover time). There is no kinetically valid reason to disregard reported experimental estimates (5–14 years) of this relaxation time.

c. The exponential character of the relaxation implies that the rate of removal of C14 has been proportional to the amount of C14. This means that the observed 95% of the relaxation process has been governed by the atmospheric concentration of C14-carbon dioxide according to the law of mass action, without any detectable contribution from slow oceanic events.

d. The Bern model prescriptions (blue curve in Fig. 2) are inconsistent with the observations that have been made, and gravely underestimate both the rate and the extent of removal of anthropogenic carbon dioxide emissions. On the basis of the Bern model predictions, the IPCC states that it takes a few hundred years before the first 80% of anthropogenic carbon dioxide emissions are removed from the air. The bombtest curve shows that it takes less than 25 years.
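The contrast in point (d) can be checked with a one-line calculation: for a monoexponential relaxation the remaining fraction after time t is exp(−t/τ), so removing 80% takes τ·ln 5. A minimal sketch, assuming the roughly 14-year relaxation time attributed here to the bombtest curve:

```python
import math

def time_to_remove(fraction_removed: float, tau_years: float) -> float:
    """Time for a monoexponential relaxation N(t) = N0 * exp(-t / tau)
    to remove the given fraction of the initial excess."""
    return -tau_years * math.log(1.0 - fraction_removed)

tau = 14.0  # relaxation (turnover) time in years, as fitted to the bombtest curve
t80 = time_to_remove(0.80, tau)
print(f"80% removed after {t80:.1f} years")  # ≈ 22.5 years, i.e. under 25
```

With any empirical estimate in the quoted 5–14 year range, the 80% removal time stays well below the "few hundred years" figure.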

Paper 2 in the trilogy uses the kinetic relationships derived from the bombtest curve to calculate how much the atmospheric carbon dioxide level has been affected by emissions of anthropogenic carbon dioxide since 1850. The results show that only half of the Keeling curve’s long-term trend towards increased carbon dioxide levels originates from anthropogenic emissions.

The Bern model and other carbon cycle models tuned to fit the Keeling curve are routinely used by climate modellers to obtain input estimates of future carbon dioxide levels for postulated emissions scenarios. Paper 2 shows that estimates thus obtained exaggerate man-made contributions to future carbon dioxide levels (and consequent global temperatures) by factors of 3–14 for representative emission scenarios and time periods extending to year 2100 or longer. For empirically supported parameter values, the climate model projections actually provide evidence that global warming due to emissions of fossil carbon dioxide will remain within acceptable limits.

Paper 3 in the trilogy draws attention to the fact that warm water holds less dissolved carbon dioxide than cold water. This means that global warming during the 20th century has by necessity led to a thermal out-gassing of carbon dioxide from the hydrosphere. Using a kinetic air-ocean model, the strength of this thermal effect can be estimated by analysis of the temperature dependence of the multiannual fluctuations of the Keeling curve and described in terms of the activation energy for the out-gassing process.
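Describing out-gassing by an activation energy amounts to an Arrhenius-type temperature dependence of the rate constant. The sketch below is illustrative only; the activation energy value is a placeholder, not a figure taken from Paper 3:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def outgassing_ratio(ea_j_per_mol: float, t1_kelvin: float, t2_kelvin: float) -> float:
    """Ratio k(T2)/k(T1) for an Arrhenius-type rate constant
    k(T) = A * exp(-Ea / (R * T)); the prefactor A cancels."""
    return math.exp(-ea_j_per_mol / R * (1.0 / t2_kelvin - 1.0 / t1_kelvin))

# Illustrative only: a 0.7 K warming of a 288 K sea surface with Ea = 80 kJ/mol
print(outgassing_ratio(80e3, 288.0, 288.7))  # ≈ 1.08, i.e. ~8% faster out-gassing
```

The point of the sketch is qualitative: even modest warming shifts the out-gassing rate measurably, which is what lets the activation energy be fitted from the Keeling curve's multiannual fluctuations.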

For the empirically estimated parameter values obtained according to Paper 1 and Paper 3, the model shows that thermal out-gassing and anthropogenic emissions have provided approximately equal contributions to the increasing carbon dioxide levels over the examined period 1850–2010. During the last two decades, contributions from thermal out-gassing have been almost 40% larger than those from anthropogenic emissions. This is illustrated by the model data in Fig. 3, which also indicate that the Keeling curve can be quantitatively accounted for in terms of the combined effects of thermal out-gassing and anthropogenic emissions.


Figure 3. Variation of the atmospheric carbon dioxide level, as indicated by empirical data (green) and by the model described in Paper 3 (red). Blue and black curves show the contributions provided by thermal out-gassing and emissions, respectively.

The results in Fig. 3 call for a drastic revision of the carbon cycle budget presented by the IPCC. In particular, the extensively discussed ‘missing sink’ (called ‘residual terrestrial sink’ in the fourth IPCC report) can be identified as the hydrosphere; the amount of emissions taken up by the oceans has been gravely underestimated by the IPCC due to neglect of thermal out-gassing. Furthermore, the strength of the thermal out-gassing effect places climate modellers in the delicate situation that they have to know what the future temperatures will be before they can predict them by consideration of the greenhouse effect caused by future carbon dioxide levels.

By supporting the Bern model and similar carbon cycle models, the IPCC and climate modellers have taken the stand that the Keeling curve can be presumed to reflect only anthropogenic carbon dioxide emissions. The results in Papers 1–3 show that this presumption is inconsistent with virtually all reported experimental results that have a direct bearing on the relaxation kinetics of atmospheric carbon dioxide. As long as climate modellers continue to disregard the available empirical information on thermal out-gassing and on the relaxation kinetics of airborne carbon dioxide, their model predictions will remain too biased to provide any inferences of significant scientific or political interest.

References:

Climate Change 2007: IPCC Working Group I: The Physical Science Basis section 10.4 – Changes Associated with Biogeochemical Feedbacks and Ocean Acidification

http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-4.html

Climate Change 2007: IPCC Working Group I:  The Physical Science Basis section 2.10.2 Direct Global Warming Potentials

http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-10-2.html

Joos et al., “Global warming feedbacks on terrestrial carbon uptake under the Intergovernmental Panel on Climate Change (IPCC) emission scenarios”, Global Biogeochemical Cycles, Vol. 15, No. 4, pages 891–907, December 2001

ftp://ftp.elet.polimi.it/users/Giorgio.Guariso/papers/joos01gbc[1]-1.pdf

Click below for a free download of the three papers referenced in the essay as PDF files.

Paper 1 Relaxation kinetics of atmospheric carbon dioxide

Paper 2 Anthropogenic contributions to the atmospheric content of carbon dioxide during the industrial era

Paper 3 Temperature effects on the atmospheric carbon dioxide level

================================================================

Gösta Pettersson is a retired professor of biochemistry at the University of Lund (Sweden) and a former editor of the European Journal of Biochemistry, serving as an expert on reaction kinetics and mathematical modelling. His scientific research has focused on the fixation of carbon dioxide by plants, which has made him familiar with the carbon cycle research carried out by climatologists and others.

July 7, 2013 5:22 am

Greg says:
July 6, 2013 at 10:25 pm
The rate of change (annual change in the case of Keeling) does show clear variation that provides a means to separate the different causes.
No it doesn’t. The short term variability of the derivative says next to nothing about the cause of the longer term variability. Even if you detrend the derivative, or shift the baseline to the mean, you still have the same variability…
You seem to have (tacitly) accepted that your earlier logic for two different time constants was in fact taken account of by Pettersson’s eqn 3 as I pointed out.
That is a misunderstanding: in my opinion there still are two separate, near independent time constants at work, one for isotope decay and one for excess CO2 decay. The first is clearly one-way, as the bomb tests spike shows, the other far from proven, as we are currently still around 50% residual fraction.
As these are independent time constants, the end of one doesn’t prove anything about the other.
The out-gassing process can be treated separately as long as the system is assumed to be linear but the relaxation time constant has to be the same.
The relaxation time for 14C is temperature dependent, hardly pressure related (+/- 5% in flux for 1 K). The relaxation time for extra CO2 is pressure related (+/- 1.5% in flux for 100 ppmv), hardly temperature related.
Your erroneous calculation of 800/150 = 5.33 in fact applies to six months, not a year.
Sorry, the fluxes are 90 GtC in/out the oceans, 60 GtC in/out the biosphere. It doesn’t make any difference if you use the inputs alone or the outputs alone to do the calculation, nor that the fluxes are continuous (some 40 GtC of the oceans) or seasonal or even within one month…
What plays a role in the empirical findings is that, like in the case of 14C, part of the output returns to the input, thus effectively halving the outflux of 14C in itself, therefore doubling the decay rate.
From http://climategrog.wordpress.com/?attachment_id=233 :
It is clear that this relationship matches a large proportion of the variation across the full record. The residual “constant” of each quantity is found by taking the mean of the full record. This gives residual 0.7 K/century warming of SST and an acceleration of atmospheric CO2 of 2.8 ppm/year/century.
The relationship is for the variation, it is entirely spurious for the residual “constant” and not based on any physical process. The initial increase in temperature causes an increase in pCO2 and hence CO2 fluxes which is surpassed by human emissions within a few years. The latter is the real cause of the increase in the atmosphere. Not temperature…
In fact, this is exactly what Pettersson does, and you should (re-)read the discussion in paper 2, which shows how such numbers clearly lead to the conclusion of a long-term temperature sensitivity of the order of 100 ppmv/K.
Throughout paper 2, Pettersson assumes that the 14C bomb test decay is the right one that governs the excess CO2 decay rate. As the excess CO2 decay rate is independent of the 14C decay rate, the whole paper 2 doesn’t make sense. Take e.g. following sentence:
The anthropogenic contributions to the atmospheric content of carbon dioxide on the average have corresponded to about 20% of the total amount of anthropogenic carbon dioxide emitted (19% during the last two decades of the examined time period).

Thus 80% of human emissions were captured by other reservoirs. But as the increase in the atmosphere, according to Pettersson, was about 50/50 natural/human, the natural fluxes/turnover should have increased in ratio with the human emissions over the same time frame (the sinks don’t differentiate between natural and human CO2…). That means for the period 1960-2010 more than a doubling, thus leading to a halving of the residence time. But we see an increase in residence time over time, certainly not a halving…
can you comment on whether there is a credible quantitative difference that needs to be accounted for?
As argued before: near-independent processes govern the two decay rates…

Gene Selkov
July 7, 2013 5:30 am

On second thought, even though life can exist “in” ice, however we define “in”, that would be sea ice or similarly contaminated ice. If we’re talking about inner Greenland or Law Dome, that ice is as near sterile as it ever gets. No nutrients, no antifreeze.

Greg
July 7, 2013 5:36 am

Ferdi, I’ve said I think there may be some grounds to question the generality of the C14 curve, because of your dilution argument. Are you still convinced that it is relevant?
If so, could you give what you regard as being the figures for the pre-WWII “stable” atm ratio and the ocean/air deficit at that time?

July 7, 2013 6:10 am

Greg says:
July 7, 2013 at 4:03 am
Indeed my math is completely rusty (from 45-50 years ago; I have had a completely different, mostly non-math life…), but something seems to be wrong with your calculations of the residence time.
By definition (in paper 1):
The turnover time (β) is normally defined as the amount of compound being present in the reservoir divided by the flux rate at which the compound is removed
That gives:
β = Amount/Flux
Where Flux may be influx or outflux, as both are near equal.
The flux rate for total CO2 of all inputs and outputs combined is 150 GtC/year input and 154 GtC/year output (rough estimates, but let us take them for granted). In my opinion, and I haven’t seen any other meaning of residence time, some ~20% of all CO2 in the atmosphere is removed from the atmosphere and placed into other reservoirs and replaced by CO2 from other reservoirs. That gives a residence time for any CO2 molecule in the atmosphere of 800 GtC / 150 GtC/year = ~5.33 years.
If you do that by integration of all inputs and outputs over a year, as these are largely countercurrent, the overall integral is near zero (4 GtC) and the half-year integral is not more than 10 GtC in both directions.
Except if Pettersson uses a different definition and literally means what is removed as the difference in inflow and outflow.
If that is the definition of residence time, then we still have the 12 years residence time for 14C and the 51.5 years residence time for excess CO2.
The former is what is seen in the 14C decay curve; the latter is seen in the observed decay rate of an excess CO2 amount above equilibrium…
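The two numbers in this comment follow from two different definitions, which a short script makes explicit (round figures as quoted in the thread; the "equilibrium" baseline is itself an assumption under dispute here):

```python
airborne = 800.0    # GtC, total CO2 in the atmosphere (round figure from the thread)
gross_flux = 150.0  # GtC/yr, combined gross inputs or outputs
excess = 210.0      # GtC above the assumed pre-industrial equilibrium (~100 ppmv)
net_sink = 4.0      # GtC/yr, outputs minus inputs

turnover_time = airborne / gross_flux  # residence time of an individual CO2 molecule
adjustment_time = excess / net_sink    # decay time of the excess mass

print(f"turnover time   ≈ {turnover_time:.2f} years")  # ≈ 5.33
print(f"adjustment time ≈ {adjustment_time:.1f} years")  # 210/4 = 52.5 (quoted as 51.5 or 52.5 in the thread)
```

Which of the two deserves the name "residence time" is exactly what the rest of the thread argues about.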

July 7, 2013 6:44 am

gostapettersson says:
July 7, 2013 at 4:04 am
Thanks for the reaction…
It deals with the bombtest curve, which shows that the relaxation of the excess 14CO2 created by the bomb tests conforms to a monoexponential decay function and hence can be characterized by a single RELAXATION TIME estimated at 14 years.
No problems with that at all.
That information (describing the ‘impulse response function’ for CO2) is all one requires to estimate how much emissions contribute to increasing the atmospheric CO2 level.
That is where it goes wrong. There is a huge difference in the removal rate of some extra CO2 above equilibrium and the removal rate of one isotope compared to other isotopes in the main circulation.
The throughput of CO2 through the atmosphere is about 150 GtC/year, exchanging about 20% of all CO2 in the atmosphere with CO2 of other reservoirs. The deep oceans are the main sink for the 14C bomb spike, as only the old 14C levels return. That gives decay rates of 5-8 years in general and 14 years specifically for the bomb spike.
The reduction of a CO2 mass spike back to equilibrium is of a different order: the throughput of 150 GtC plays no role, only the difference in throughput between inputs and outputs plays a role. If there is no difference, you can wait until eternity, and any 14C spike is already reduced to zero, still the CO2 mass spike resides in the atmosphere.
The current difference is ~4 GtC/year for an above equilibrium amount of 210 GtC. That gives a decay rate of ~51.5 years.
Two different processes, with two different decay rates with hardly any connection between the two…
Thus any conclusion on CO2 excess decay based on the 14C spike decay has no bearing in reality.
Which doesn’t vindicate the Bern model. The Bern model also has a lot of problems, including a fixed term, which is not applicable for the emissions up to now…

Greg
July 7, 2013 6:45 am

Pettersson: “…normally defined as the amount of compound being present in the reservoir divided by the flux rate at which the compound is removed:
β = Amount/Flux”
I think the equation is unclear, but look at the text. It’s flux rate.
This is what I’ve been trying to get across to you. You should not be integrating across the whole year or taking peak to peak for the year. It is the RATE OF CHANGE that defines all these relaxation processes. It is a first order differential equation.
You are saying there is 150 GtC in and 150 out each year, so 150 GtC/year. This is wrong. If we are to simplify it that crudely, it is 150 in 6 months, then again 150 in 6 months in the other direction. That is the _flux rate_.
However, this changes your figures in the other direction to that which I indicated above. This leaves a very short time constant for the annual variation which seems correct intuitively. This may mean that a single slab model is insufficient to model this but I think this requires more thought before jumping to any conclusions. Pettersson has a full career of this kind of chemistry so I imagine it’s second nature to him.

Greg
July 7, 2013 7:07 am

gostapettersson , thanks for dropping in.
I’ve asked Anthony to forward some information but knowing his email load he may not notice.
I’m sure you would like to keep a low profile to avoid a deluge of abuse about your papers from those who have been scared by the scaremongers. That probably explains why I could not find a recent contact for you.
In case you missed it above, I think this graph gives a broad corroboration of your El Nino sensitivity.
http://climategrog.wordpress.com/?attachment_id=233
Also since convolving Keeling with a 14y decay gives a fairly straight increase it may be interesting to compare this to the average difference in air/ocean pCO2 of 7 microatm.
That should tell us something from the interdecadal average pressure difference against 2ppmv/annum.
Thanks for the papers, certainly food for thought. I’ve been working on trying to determine the residual/thermal ratio for the last week or so; it’s good that this came up now.

Greg
July 7, 2013 7:17 am

@mods, what was wrong with that last post that it got held back? Just so I can avoid the tripwire next time 😉

ZP
July 7, 2013 7:32 am

Greg,
I tend to be in agreement with you. In any real kinetic analysis, one solves the differential equation (or system of differential equations) either analytically (ideally) or numerically (more commonly). Simply using arithmetic with values that happen to have the correct units is a naive approach that is doomed to abject failure.
The fitting constants are properly referred to as rate constants or mass transfer coefficients. These constants are independent of the species concentration! The rate constants are functions of temperature, ionic strength (salt content), dielectric constant, etc, however.
Once one knows the rate constants, one can readily calculate the half-life (or time for the system to relax to a specified level) for any process. You cannot do these calculations by guesstimating annual fluxes.

July 7, 2013 8:49 am

Greg says:
July 7, 2013 at 4:33 am
Tau = excess / (outflows – inflows) = 210 / 4 = 52.5 years
Once again you throw out numbers without explaining your undeclared assumptions. I’m guessing that “4” is half of 8 GtC recent annual human emissions. Thus your undeclared assumption is that 100% of the rise is due to emissions, which implies a 50% residual.

I thought you would understand by now what that means…
excess = amount in the atmosphere above equilibrium in GtC = 100 ppmv = 210 GtC.
outflows – inflows = 4 GtC, which is calculated from human emissions minus what is measured in the atmosphere as residual increase. No need to know the inputs or outputs.
Thus your undeclared assumption is that 100% of the rise is due to emissions, which implies a 50% residual.
Not at all: the 4 GtC sink rate may be the result of human emissions or – completely theoretical (as Bart alleges) – of a huge increase in circulation from 150 GtC/year to 1500 GtC/yr, increasing the sinks to 1504 GtC/yr and dwarfing the human emissions to negligible. All we know is that all outputs together are 4 GtC larger than all inputs (human + natural) together.
only explanation possible. As soon as anyone starts out like that, there’s a fair chance they are wrong. Especially with a system as complicated and poorly understood as climate.
As there is something compulsory called the mass balance… as said in the reply to Pettersson, if there is no difference between CO2 inputs and outputs, you may wait until infinity and no gram of CO2 will net leave the atmosphere, despite the high throughput. Even if the throughput of CO2 in the atmosphere doubled or tripled. Even if the 14C decay rate halved or is reduced to 1/3rd thanks to the increase in throughput…

July 7, 2013 9:02 am

Greg says:
July 7, 2013 at 6:45 am
Pettersson: “…normally defined as the amount of compound being present in the reservoir divided by the flux rate at which the compound is removed:
β = Amount/Flux”
I think the equation is unclear, but look at the text. It’s flux rate.

I think the word “rate” in this case is simply meant as amount removed… That is the interpretation I have always seen in the past. But let Gösta Pettersson decide…
I agree, quite a mess… To make it even more painful, see the definition of residence time (I know… Wiki):
http://en.wikipedia.org/wiki/Residence_time :
The generic variable form of this equation is as follows:
Tau = V/q
where Tau is used as the variable for residence time, V is the capacity of the system, and q is the flow for the system.

July 7, 2013 9:33 am

Greg says:
July 7, 2013 at 6:45 am
Some extra thoughts:
You are saying there is 150 GtC in and 150 out each year, so 150 GtC/year. This is wrong. If we are to simplify it that crudely, it is 150 in 6 months, then again 150 in 6 months in the other direction. That is the _flux rate_.
It doesn’t make any difference for the 14C decay (nor for the excess CO2 decay) whether the 150 GtC (4 GtC difference) occurred in one year, half a year or one month. All that counts is the amount that passes through the atmosphere. That is what reduces the amount of 14C (thanks to the lower 14C input) in the atmosphere, not the rate at which that happens within a year. Thus the flux within a year is important, not the flux “rate” within a year.
The flux over several years may change, if that is the case, then it needs to be taken into account.

Greg
July 7, 2013 9:45 am

Tau = excess / (outflows – inflows) = 210 / 4 = 52.5 years
So you are assuming that the ‘equilibrium’ has not moved during the last 150 years of absorbing emissions. Maybe that is based on some other undeclared assumption. It is always useful to list what assumptions one is making. This makes it clear to anyone else what you are talking about, and sometimes having to spell it out points out a logical error anyway.
so this is 100 ppmv / 2 ppmv/a = 50 years , assuming equilibrium has not moved (doubtful)
On the other end you have 800 / (150/0.5) = 2.67 years.
Unless I’m missing something , the only way you can reconcile the two is by at least a two slab model. Now again I’m having to guess what you’re thinking and may be wrong. Perhaps explaining your ideas would help.

Greg
July 7, 2013 9:47 am

I’ve asked three times now what you consider the C14 stable value and ocean deficit values to be. You have refrained from replying three times. Does that mean you have now realised that it would be too small to be significant and you have abandoned the “dilution” argument?

Greg
July 7, 2013 9:49 am

Are you now in agreement that the crude linear approximation to ‘flux rate’ should be 150/0.5 rather than 150 ?

Greg
July 7, 2013 9:54 am

Ferdi says: “if the 150 GtC (4 GtC difference) occurred in one year, half a year or one month. All that counts is the amount that passes the atmosphere.”
Ok you still don’t get it.
Which bit of flux rate is proportional to amount don’t you understand?
Of course it matters if it’s a month or a year; that defines the rate.

July 7, 2013 10:08 am

Greg says:
July 7, 2013 at 5:36 am
Ferdi, I’ve said I think there may be some grounds to question the generality of the C14 curve, because of your dilution argument. Are you still convinced that is relevant.
If so, could you give what you regard a being the figures for the pre-WWII “stable” atm ratio and the ocean/air deficit at that time?

Some interesting info at:
http://www.earthscienceindia.info/pdfupload/download.php?file=tech_pdf-17.pdf

July 7, 2013 10:24 am

Greg says:
July 7, 2013 at 9:54 am
Which bit of flux rate is proportional to amount don’t you understand?
The bit where the “rate” within a year, in the case of 14C thinning, has no influence whatsoever. Only the flux within a year is important…

Greg
July 7, 2013 10:32 am

” in the case of 14C ”
but 800/150 was nothing to do with C14, you’re avoiding the issue.

July 7, 2013 10:38 am

Ferdinand Engelbeen says:
July 7, 2013 at 10:24 am
Greg says:
July 7, 2013 at 9:54 am
Which bit of flux rate is proportional to amount don’t you understand?
Maybe some misunderstandings are at work… while English is not my native language, in general I can understand it quite well, but sometimes it may give way to misunderstandings.
flux for me is amount/time unit
flux rate is change in flux over time.
In the case of 14C decay, neither the flux rates nor the fluxes (I was wrong in the previous message) involved are important; only the total amount which passes through the atmosphere is important, normally expressed over a year. But it doesn’t matter for the exchanges of 14C if that happens over a full year, half a year or one month…
Thus what is the real meaning of flux vs. flux rate vs. amount/yr according to you, and what influence do the definitions have on the 14C decay?

Greg
July 7, 2013 10:46 am

http://www.earthscienceindia.info/pdfupload/download.php?file=tech_pdf-17.pdf
Thanks, so this has little to do with preferential absorption and everything to do with deep-water upwelling. Surface deficit about -50 permil.
So with 90/800 = 11.25% of annual turnover in and out of oceans and a 5% deficit, that sounds like 0.56% annual dilution. Now if my maths is correct that corresponds to a half life of 125 years.
Do you think we can safely discount that as having a disruptive effect on the C14 curve fitting exercise?
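The dilution arithmetic above can be reproduced directly (figures as quoted: 90 GtC/yr of ocean exchange against 800 GtC airborne, and an assumed 5% surface-ocean 14C deficit):

```python
import math

ocean_exchange = 90.0 / 800.0  # fraction of atmospheric CO2 exchanged with oceans per year
deficit = 0.05                 # assumed surface-ocean 14C deficit (-50 permil)

dilution_rate = ocean_exchange * deficit  # per-year dilution of the atmospheric 14C ratio
half_life = math.log(2) / dilution_rate   # half-life of an exponential at that rate

print(f"annual dilution ≈ {dilution_rate:.4%}")  # ≈ 0.5625% per year
print(f"half-life ≈ {half_life:.0f} years")      # ≈ 123 years (Greg's rough figure of 125)
```

The exponential half-life comes out slightly under Greg's back-of-envelope 125 years, but the conclusion is the same: the dilution is slow compared with the ~14-year bombtest decay.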

Greg
July 7, 2013 11:04 am

the base equation for all this is
x = -k·dx/dt
As long as we’re agreed that it is the rate of change that’s fine.
You wished to invoke the 150 GtC figure. Now that is not an annual rate of change; the annual figure is about 4 GtC/a.
If you want to use 150 GtC, that is the seasonal change over half a year. So if we draw a straight line through the rate of change from min to max (which likely is not a nice even half a year; it’s probably asymmetrical and you should take the steepest side) you need to divide by 0.5 years. You should also take the fastest part of the cycle, which best shows the limiting exponential rather than the forcing easing off. So this rough method will underestimate somewhat.
est. rate of change > 300 GtC/annum
Like I’ve repeatedly underlined, this is a crude approximation but gives a value of 300 GtC/annum for that process.
That’s a swing of almost 20% and corresponds to a time const of 2.67 years.
This is shallow surface waters , well mixed by wind. I doubt this even included the full depth of the “mixed layer”.
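Greg's crude linearization, written out as a two-line check (all figures are the rough ones he quotes; treating the half-year seasonal swing as an instantaneous rate is his stated simplification):

```python
airborne = 800.0        # GtC of CO2 in the atmosphere (round figure)
seasonal_swing = 150.0  # GtC exchanged one way in roughly half a year

flux_rate = seasonal_swing / 0.5  # the half-year swing expressed as a rate: 300 GtC/yr
tau = airborne / flux_rate        # crude single-slab time constant

print(f"flux rate ≈ {flux_rate:.0f} GtC/yr, tau ≈ {tau:.2f} years")  # ≈ 2.67 years
```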

Greg
July 7, 2013 11:12 am

Since this method under-estimates, and these global carbon cycle figures are gross approximations themselves, I’d be inclined to identify this with the 1.17 year time constant that Lance Wallace provided and the 1.18 of the Bern model.

July 7, 2013 11:48 am

Greg says:
July 7, 2013 at 10:46 am
So with 90/800 = 11.25% of annual turnover in and out of oceans and a 5% deficit, that sounds like 0.56% annual dilution. Now if my maths is correct that corresponds to a half life of 125 years.
The 5% deficit is against the “normal” level of 14C in the atmosphere, which in general was more or less compensated for by new production in the atmosphere, about everything being in equilibrium pre-bomb.
The deficit against the bomb test spike initially is 55%, decaying over time.
The 90 GtC is for the total oceans, of which about 50 GtC goes in and out the mixed layer with a maximum of 10% change in the ocean for every isotope, including 14CO2.
The ~40 GtC exchange with the deep oceans is what gives the dilution of the bomb spike.
That should give the 14 years decay rate…

Greg Goodman
July 7, 2013 11:50 am

I said: “So with 90/800 = 11.25% of annual turnover in and out of oceans and a 5% deficit, that sounds like 0.56% annual dilution.”
This may be worth adding in as a further correction along with the C13 dilution correction. 0.56% over 30 years is 16.8% so not exactly negligible.
This would raise the end of the series, leading to a longer principal time constant and a bit of residual, implying a small degree of reversibility.
That would then produce some separation of the two time constants :
τ = β/(1+Keq) and β
It would not be too surprising if that then produced 14 and 18.6, thus reconciling the C14 curve with the first two terms of the Bern model.
Numbers and my reasoning need checking by someone competent in kinetics but it seems that this comes close to reconciling Gosta Pettersson’s C14 curve and the Bern model.
Personally I was not a fan of the Bern model but you have to go with the data.
If this curtails the more extreme long time constant and residual of Bern, at the same time as providing an independent cross-check, that will be a valuable result.
