
Studies of carbon-14 emitted into the atmosphere by nuclear tests indicate that the Bern model used by the IPCC is inconsistent with virtually all reported experimental results.
Guest essay by Gösta Pettersson
The Keeling curve establishes that the atmospheric carbon dioxide level has shown a steady long-term increase since 1958. Proponents of the anthropogenic global warming (AGW) hypothesis have attributed the increasing carbon dioxide level to human activities such as combustion of fossil fuels and land-use changes. Opponents of the AGW hypothesis have argued that this would require that the turnover time for atmospheric carbon dioxide is about 100 years, which is inconsistent with a multitude of experimental studies indicating that the turnover time is of the order of 10 years.
Since its constitution in 1988, the United Nations’ Intergovernmental Panel on Climate Change (IPCC) has disregarded the empirically determined turnover times, claiming that they have no bearing on the rate at which anthropogenic carbon dioxide emissions are removed from the atmosphere. Instead, the fourth IPCC assessment report argues that the removal of carbon dioxide emissions is adequately described by the ‘Bern model’, a carbon cycle model designed by prominent climatologists at the University of Bern. The Bern model is based on the presumption that the increasing levels of atmospheric carbon dioxide derive exclusively from anthropogenic emissions. Tuned to fit the Keeling curve, the model prescribes that the relaxation of an emission pulse of carbon dioxide is multiphasic, with slow components reflecting slow transfer of carbon dioxide from the oceanic surface to the deep-sea regions. The problem is that empirical observations tell us an entirely different story.
The nuclear weapon tests in the early 1960s initiated a scientifically ideal tracer experiment describing the kinetics of removal of an excess of airborne carbon dioxide. When the atmospheric bomb tests ceased in 1963, they had raised the air level of C14-carbon dioxide to almost twice its original background value. The relaxation of this pulse of excess C14-carbon dioxide has now been monitored for fifty years. Representative results providing direct experimental records of more than 95% of the relaxation process are shown in Fig. 1.
Figure 1. Relaxation of the excess of airborne C14-carbon dioxide produced by atmospheric tests of nuclear weapons before the tests ceased in 1963
The IPCC has disregarded the bombtest data in Fig. 1 (which refer to the C14/C12 ratio), arguing that “an atmospheric perturbation in the isotopic ratio disappears much faster than the perturbation in the number of C14 atoms”. That argument is difficult to follow and is certainly incorrect. Fig. 2 shows the data in Fig. 1 after rescaling and correction for the minor dilution effects caused by the increased atmospheric concentration of C12-carbon dioxide during the examined period of time.
Figure 2. The bombtest curve. Experimentally observed relaxation of C14-carbon dioxide (black) compared with model descriptions of the process.
The resulting series of experimental points (black data in Fig. 2) describes the disappearance of “the perturbation in the number of C14 atoms”, is almost indistinguishable from the data in Fig. 1, and will be referred to as the ‘bombtest curve’.
To draw attention to the bombtest curve and its important implications, I have made public a trilogy of strict reaction kinetic analyses addressing the controversial views expressed on the interpretation of the Keeling curve by proponents and opponents of the AGW hypothesis.
(Note: links to all three papers are below also)
Paper 1 in the trilogy clarifies that
a. The bombtest curve provides an empirical record of more than 95% of the relaxation of airborne C14-carbon dioxide. Since kinetic carbon isotope effects are small, the bombtest curve can be taken to be representative for the relaxation of emission pulses of carbon dioxide in general.
b. The relaxation process conforms to a monoexponential relationship (red curve in Fig. 2) and hence can be described in terms of a single relaxation time (turnover time). There is no kinetically valid reason to disregard reported experimental estimates (5–14 years) of this relaxation time.
c. The exponential character of the relaxation implies that the rate of removal of C14 has been proportional to the amount of C14. This means that the observed 95% of the relaxation process has been governed by the atmospheric concentration of C14-carbon dioxide according to the law of mass action, without any detectable contributions from slow oceanic events.
d. The Bern model prescriptions (blue curve in Fig. 2) are inconsistent with the observations that have been made, and gravely underestimate both the rate and the extent of removal of anthropogenic carbon dioxide emissions. On the basis of the Bern model predictions, the IPCC states that it takes a few hundred years before the first 80% of anthropogenic carbon dioxide emissions are removed from the air. The bombtest curve shows that it takes less than 25 years.
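The contrast in point (d) can be sketched numerically. The snippet below compares a monoexponential relaxation with a 14-year turnover time against a multi-term Bern-type pulse-response function; the Bern coefficients used here are the values commonly quoted for AR4 and should be treated as illustrative rather than as the exact tuned model.

```python
import math

def single_exp(t, tau=14.0):
    """Fraction of a CO2 pulse remaining after t years, monoexponential."""
    return math.exp(-t / tau)

def bern_ar4(t):
    """Fraction remaining under a Bern-type pulse-response function.
    Coefficients are the commonly quoted AR4 values (illustrative)."""
    terms = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]
    return sum(a if tau is None else a * math.exp(-t / tau) for a, tau in terms)

# Time for the monoexponential model to remove 80% of a pulse:
t80 = 14.0 * math.log(5)          # ~22.5 years, consistent with "< 25 years"
print(round(t80, 1))

# Fraction still airborne after 25 years under each model:
print(round(single_exp(25), 3))   # ~0.168
print(round(bern_ar4(25), 3))     # roughly half the pulse still airborne
```

Note that the constant 0.217 term means the Bern function never removes the last ~22% of a pulse at all, which is why the two descriptions diverge so sharply at long times.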
Paper 2 in the trilogy uses the kinetic relationships derived from the bombtest curve to calculate how much the atmospheric carbon dioxide level has been affected by emissions of anthropogenic carbon dioxide since 1850. The results show that only half of the Keeling curve’s long-term trend towards increased carbon dioxide levels originates from anthropogenic emissions.
The Bern model and other carbon cycle models tuned to fit the Keeling curve are routinely used by climate modellers to obtain input estimates of future carbon dioxide levels for postulated emissions scenarios. Paper 2 shows that estimates thus obtained exaggerate man-made contributions to future carbon dioxide levels (and consequent global temperatures) by factors of 3–14 for representative emission scenarios and time periods extending to year 2100 or longer. For empirically supported parameter values, the climate model projections actually provide evidence that global warming due to emissions of fossil carbon dioxide will remain within acceptable limits.
Paper 3 in the trilogy draws attention to the fact that hot water holds less dissolved carbon dioxide than cold water. This means that global warming during the 20th century by necessity has led to a thermal out-gassing of carbon dioxide from the hydrosphere. Using a kinetic air-ocean model, the strength of this thermal effect can be estimated by analysis of the temperature dependence of the multiannual fluctuations of the Keeling curve and be described in terms of the activation energy for the out-gassing process.
For the empirically estimated parameter values obtained according to Paper 1 and Paper 3, the model shows that thermal out-gassing and anthropogenic emissions have provided approximately equal contributions to the increasing carbon dioxide levels over the examined period 1850–2010. During the last two decades, contributions from thermal out-gassing have been almost 40% larger than those from anthropogenic emissions. This is illustrated by the model data in Fig. 3, which also indicate that the Keeling curve can be quantitatively accounted for in terms of the combined effects of thermal out-gassing and anthropogenic emissions.
Figure 3. Variation of the atmospheric carbon dioxide level, as indicated by empirical data (green) and by the model described in Paper 3 (red). Blue and black curves show the contributions provided by thermal out-gassing and emissions, respectively.
The results in Fig. 3 call for a drastic revision of the carbon cycle budget presented by the IPCC. In particular, the extensively discussed ‘missing sink’ (called ‘residual terrestrial sink’ in the fourth IPCC report) can be identified as the hydrosphere; the amount of emissions taken up by the oceans has been gravely underestimated by the IPCC due to neglect of thermal out-gassing. Furthermore, the strength of the thermal out-gassing effect places climate modellers in the delicate situation that they have to know what the future temperatures will be before they can predict them by consideration of the greenhouse effect caused by future carbon dioxide levels.
By supporting the Bern model and similar carbon cycle models, the IPCC and climate modellers have taken the stand that the Keeling curve can be presumed to reflect only anthropogenic carbon dioxide emissions. The results in Paper 1–3 show that this presumption is inconsistent with virtually all reported experimental results that have a direct bearing on the relaxation kinetics of atmospheric carbon dioxide. As long as climate modellers continue to disregard the available empirical information on thermal out-gassing and on the relaxation kinetics of airborne carbon dioxide, their model predictions will remain too biased to provide any inferences of significant scientific or political interest.
References:
Climate Change 2007: IPCC Working Group I: The Physical Science Basis section 10.4 – Changes Associated with Biogeochemical Feedbacks and Ocean Acidification
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-4.html
Climate Change 2007: IPCC Working Group I: The Physical Science Basis section 2.10.2 Direct Global Warming Potentials
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-10-2.html
Joos et al., “Global warming feedbacks on terrestrial carbon uptake under the Intergovernmental Panel on Climate Change (IPCC) emission scenarios”, Global Biogeochemical Cycles, Vol. 15, No. 4, pages 891–907, December 2001
ftp://ftp.elet.polimi.it/users/Giorgio.Guariso/papers/joos01gbc[1]-1.pdf
Click below for a free download of the three papers referenced in the essay as PDF files.
Paper 1 Relaxation kinetics of atmospheric carbon dioxide
Paper 2 Anthropogenic contributions to the atmospheric content of carbon dioxide during the industrial era
Paper 3 Temperature effects on the atmospheric carbon dioxide level
================================================================
Gösta Pettersson is a retired professor of biochemistry at the University of Lund (Sweden) and a previous editor of the European Journal of Biochemistry as an expert on reaction kinetics and mathematical modelling. His scientific research has focused on the fixation of carbon dioxide by plants, which has made him familiar with the carbon cycle research carried out by climatologists and others.
The air/ocean isotope differences are measured in per mil. Page 7 of the first paper deals with the question and says most isotope-dependent reactions show a ratio of less than 1.02 (2%).
Most of this discussion is about whether the relevant time constants are 3.5, 14 or 179 years. We’re arguing about whether we’ve got the right order of magnitude, so let’s stop wasting time arguing over the 8 per mil isotope ratios.
My comment above:
http://wattsupwiththat.com/2013/07/01/the-bombtest-curve-and-its-implications-for-atmospheric-carbon-dioxide-residency-time/#comment-1354912
suggests a way to infer the proportion of residual emissions to out-gassing from the time constant and the persistent average imbalance between the partial pressures, measured as about 7 µatm.
Now, since we have a data-based estimate of tau and the 7 µatm figure, we have an estimate of that proportion. I have not seen this approach used so far.
Perhaps we should look at that and stop arguing about irrelevant minutiae.
Here is an image which helps understand why looking at the relationship between T and CO2 during the last deglaciation tells us nothing except what the relationship isn’t.
http://climategrog.wordpress.com/wp-admin/post.php?post=412&action=edit&message=1
Greg,
I submit that the 7 uatm disparity may not represent a limitation of ocean uptake which should be nearly instantaneous, but rather biological activity at the interface. This disparity may indeed be a good measure of the biological activity.
gymnosperm says:
July 4, 2013 at 1:52 pm
All you are doing is taking an arbitrary group of molecules and defining it as “mass” in relation to another arbitrary concept of “doubling”. Your “mass” contains 14CO2. Are you arguing that it falls out faster because it is heavier? For some other reason? If not, it should represent the rest of the “mass”.
If you want to test the residence time of CO2, the 14C bomb test may be a good tracer (but it isn’t perfect for several reasons). If you want to test the effect of an increase of total CO2 in the atmosphere, the 14C bomb test is not a good tracer, as it doesn’t add measurable mass to the atmosphere.
The difference is in the definitions of residence time and excess decay time. Turnover and gain/loss.
Mats says:
July 4, 2013 at 3:09 pm
“Stomatal proxy record of CO2 concentrations from the last termination suggests an important role for CO2 at climate change transitions” http://www.sciencedirect.com/science/article/pii/S0277379113000553
Margret Steinthorsdottir
Simple reaction: be prudent with stomata data, they only reflect CO2 changes over land where the leaves did grow, with all the problems that gives.
Besides quite rough indications of CO2 data (+/- 10 ppmv for the same CO2 level in the atmosphere), stomata index (SI) data are counted on leaves, which by definition grow on land. Thus let’s look how the CO2 levels evolve over a few sunny days over land, compared to what on the same days the “baseline” CO2 measurements at Barrow, Mauna Loa and the South Pole measure (all raw data, including outliers):
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
Giessen is a semi-rural surrounding, mid-west Germany, where a long series of historical CO2 measurements were done in the 1940’s. It has a modern CO2 monitoring station nowadays, measuring 1/2 hour samples with GC.
According to the stomata specialists, the stomata index of this year’s leaves is based on the average CO2 level over the previous growing season. But over land, that already has a positive bias, as can be seen in the monthly averages of Giessen:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_mlo_monthly.jpg
Stomata data over the past century are calibrated against ice cores, firn and direct measurements. That removes the bias over the past century. But there is not the slightest guarantee that the bias didn’t change over the centuries. One of the main places used for SI data composition is in the SE Netherlands. The landscape in the main wind direction changed tremendously over the centuries, including heavy industrialisation over the recent past. Even the main wind direction may have changed over certain periods like the LIA vs. the MWP.
In the case of the Swedish data, they “calibrated” the stomata data by assuming that the CO2 levels over the Holocene (as recorded in ice cores) were constrained within 280-300 ppmv. Again, that is no guarantee that the bias didn’t change over previous centuries, as happened with regional vegetation over the Younger Dryas or the 8.2 kyr event…
Thus, while stomata data have a far better resolution than the ice cores, the absolute values of CO2 need to be taken with a grain of salt…
Ferdi: “If you want to test the effect of an increase of total CO2 in the atmosphere, the 14C bomb test is not a good tracer, as it doesn’t add measurable mass to the atmosphere.”
If it added significant mass to the atmosphere it would not be called a “tracer”, would it? Are you saying that no study working on tracer elements is valid because they don’t add measurable mass?
This is a non argument.
The additional mass is already there when the tracer is added. As tracer it then allows a study of how the gas present at that time declines. That is what tracer studies do.
Pettersson accounts for further dilution of C14 by continued emissions and thus his study reflects the reduction of the CO2 excess present at the start of the data in 1963 by absorption.
Consider a system in equilibrium that takes a pulse of 1 Gtn of extra CO2 with the C14 tracer in it. You seem to be suggesting that the C14-laden molecules will disappear four times faster for some reason.
You are making a false distinction. What the C14 data shows is the same as what the C12 laden gas does. It is not a measure of the residence time of the average CO2 molecule to its first absorption. How could it be? That would only be the case if there was zero re-emission of C14 molecules. ie the isotope separation was total at the first interaction.
I don’t think anyone is suggesting that, so you are making a false distinction. Pettersson’s analysis measures what he says it does.
Greg Goodman says:
July 4, 2013 at 6:58 pm
Lance, w.r.t. the double exp. model: introducing a second, shorter term would presumably slightly lengthen the 14 y time constant. That really would not be so far from the first two Bern model values given above.
33.8% will have a lifetime of 18.51 years
18.6% will have a lifetime of 1.186 years
How do the amplitudes come out if you do a double exp model?
Somewhat against my better judgement, I have fit the bomb data to a double exponential. As you suggest, the results provide a very short residence time of 1.17 years (amazingly close to the 1.186 years you mention above), but only extend the longer residence time from 14.4 to 15.3 years. The amplitudes are 84.4% for the long lifetime and 15.6% for the short one. The root mean square error (RMSE) falls from 3.2% to 2.7%. The fit is much better at the beginning, arguably better at the end, and rather indeterminate in the middle.
The reason I am nervous about fitting two exponentials to these rather noisy data is that doubling the number of adjustable parameters from 2 to 4 will always produce a better fit, and is getting us close to the John von Neumann quip about making the elephant wiggle his trunk. There is probably something like the Akaike information criterion (AIC) that could be applied here to show whether adding the second exponential has really improved the situation, but I don’t know how to apply it for this case. Probably I could add another exponential and achieve an even better fit, perhaps with a longer residence time, a la the Bern model, but I am pretty certain that this would be a ridiculous example of overfitting.
Full Excel file with all calculations here. See the last graphic (called, informatively, Chart 1) for the comparison of the two models.
https://dl.dropboxusercontent.com/u/75831381/C-14%20decay%20from%20bomb%20testing%20double%20exponential.xlsx
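The AIC question raised above is straightforward to apply for least-squares fits: AIC = n·ln(RSS/n) + 2k, where k counts the fitted parameters, and the lower value wins. A sketch on synthetic data (a stand-in for the real bombtest series, which is in the spreadsheet linked above):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the bombtest curve: a tau = 14 yr monoexponential
# plus noise. This only illustrates the model-selection method.
rng = np.random.default_rng(0)
t = np.arange(0, 50, 1.0)
y = np.exp(-t / 14.0) + rng.normal(0, 0.02, t.size)

def one_exp(t, a, tau):
    return a * np.exp(-t / tau)

def two_exp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def aic(y, yhat, k):
    """Akaike information criterion for least-squares fits:
    AIC = n*ln(RSS/n) + 2k; the extra 2k penalises extra parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

p1, _ = curve_fit(one_exp, t, y, p0=[1.0, 10.0])
p2, _ = curve_fit(two_exp, t, y, p0=[0.8, 15.0, 0.2, 1.5], maxfev=10000)

aic1 = aic(y, one_exp(t, *p1), 2)
aic2 = aic(y, two_exp(t, *p2), 4)
print(aic1, aic2)  # on purely monoexponential data the extra terms rarely pay off
```

Run on the actual bombtest data, the same comparison would show directly whether the RMSE drop from 3.2% to 2.7% justifies the two extra parameters.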
Greg Goodman says:
July 4, 2013 at 5:17 pm
residence time = mass / inflows = mass / outflows = 800/150 = 5.33 years.
Am I supposed to recognise your 800 and 150 ? That makes no sense as it stands.
Sorry, I forgot that not everybody has read all my arguments over the many (5-6?) years that we have had this (often repeated, sometimes fierce) discussion…
The definition of residence time is how fast the mass of a reservoir is replaced by input from (or output to) other reservoirs. In the case of the atmosphere, the current total carbon mass of CO2 is 800 GtC. Carbon is used as the unit because it is easier to follow, even when it becomes carbonates in the sea or rocks, or sugar and cellulose in plants. The exchange rate of CO2 with other reservoirs is about 150 GtC per year, mainly seasonal. Thus every year about 20% of all CO2 in the atmosphere is exchanged with CO2 from/to other reservoirs, and by definition the residence time of CO2 in the atmosphere is 5.33 years.
If what goes in (inflows) equals what goes out (outflows), nothing happens with the total CO2 content in the atmosphere. Despite that, the 14C content of the bomb spike in the atmosphere will decrease over time, simply because it is exchanged with low-14C from the oceans surface, from vegetation decay of years ago and from the deep oceans. That part of the 14C spike decay is purely exchange rate (residence time) related and doesn’t have any connection with the fate of some extra CO2 (as mass) injection (whatever the source). Therefore, the 14C bomb spike is a bad indicator for the latter.
There are further constraints on 14C: part of the 14C absorption returns in the next year(s) from vegetation (leaves) decay and from the ocean surface. That lengthens the 14C decay beyond the 5.33 years residence time. Only the deep-ocean exchange rate is mainly one-way, as the return may be over 800-1200 years. Further, as humans emit carbon which is essentially 14C free (much too old…), that thins the 14C level, so the real 14C decay is in fact much longer… Thus take your pick.
The decay rate of such an excess will be given by:
Tau = excess / (outflows – inflows) = 210 / 4 = 52.5 years
If you inject an amount of CO2 (whatever the source: oceans, forest fires, volcanoes, humans,…) into the atmosphere, the total mass of CO2 increases. Some sinks and sources (like volcanoes or vegetation decay by bacteria) don’t react to such an increase. Others do react: the oceans reduce their releases and increase the sinks, and so does the vegetation uptake. That means that the original outflows = inflows is now in disequilibrium. That can be quite exactly calculated, as human emissions are reasonably known (from taxes on fossil fuels and average burning efficiency) and the increase in the atmosphere is accurately monitored to 0.1 ppmv/year.
The difference between the pre-industrial era (where there seems to be an equilibrium between CO2 and temperature with some lag) and the current CO2 level nowadays is 100 ppmv. 1 ppmv of CO2 increase in the current atmosphere equals about 2.1 GtC. Thus the 100 ppmv above equilibrium is about 210 GtC above equilibrium. That is what gives the extra pressure that brings the inputs and outputs into disequilibrium. The extra pressure results in an extra uptake (outflows – inflows) of about 2 ppmv (4 GtC) per year, as that is the difference between human emissions and the increase in the atmosphere. Supposing that the average reaction of all natural sinks and sources together is linear, that gives an e-fold decay rate of 210/4 = 52.5 years, or a half-life of ~40 years. That is the e-fold time representing current reality and replaces all Bern model terms, as long as it lasts…
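The two time scales in this comment can be checked with a few lines of arithmetic; this is just a restatement of the round numbers quoted above, not new data:

```python
import math

# Ferdinand's round figures from the comment above:
atmosphere = 800.0    # total carbon in the atmosphere, GtC
exchange   = 150.0    # gross annual exchange with other reservoirs, GtC/yr
excess     = 210.0    # carbon above the assumed pre-industrial equilibrium, GtC
net_uptake = 4.0      # net removal of the excess, GtC/yr

residence_time = atmosphere / exchange   # how fast individual molecules swap
adjust_time    = excess / net_uptake     # how fast the excess mass decays

print(round(residence_time, 2))  # 5.33 years
print(round(adjust_time, 1))     # 52.5 years

# Half-life of the excess, if the decay is exponential with this e-fold time:
half_life = adjust_time * math.log(2)
print(round(half_life, 1))       # ~36.4 years (the "~40 years" quoted above)
```

The whole dispute in this thread is essentially over which of these two numbers (or Pettersson’s 14 years, which lies between them) governs the fate of an emission pulse.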
Greg Goodman says:
July 4, 2013 at 8:21 pm
The 7 microatm is for the sea surface only and is quite fast (~ 5 years decay rate, 2 years to get an equilibrium, as the ocean side increases with about the same rate), but… only can absorb (or release) not more than 10% of the change in the atmosphere, due to chemical equilibria in the oceans. Thus 90% of any change in CO2 remains in the atmosphere and is removed by other, slower reactions, like the far more restricted exchange rate with the deep oceans.
Greg Goodman says:
July 4, 2013 at 9:04 pm
Let’s play that back in slow motion. Temp changes the partial pressure of CO2 in sea water. The rate of out-gassing depends on the partial pressure difference.
So … rate of out-gassing ie d/dt(CO2), is proportional to temperature, which is exactly what I said in the first place.
That is right for the first year, but you are wrong by lengthening the proportionality indefinitely. You don’t take into account what happens in the atmosphere over the following years.
The extra outgassing from a temperature increase -> partial pressure increase in the oceans -> more CO2 release (and less absorption) -> increase in the atmosphere.
After 1 year:
increase in the atmosphere -> partial pressure change in the atmosphere -> less CO2 release (and more absorption) -> increase in the atmosphere is reduced.
Until a new equilibrium is found. That is at ~16 ppmv extra in the atmosphere for an increase of 1 K of seawater temperature…
Thus what lacks is a decay function for the proportionality over time…
As an example, see the decaying response functions for temperature and precipitation on the CO2 rate of change by Pieter Tans from NOAA, from his speech at the 50th-anniversary celebration of Mauna Loa:
http://esrl.noaa.gov/gmd/co2conference/pdfs/tans.pdf
Starting at sheet 11.
“Somewhat against my better judgement, I have fit the bomb data to a double exponential. As you suggest, the results provide a very short residence time of 1.17 years (amazingly close to the 1.186 years you mention above), but only extend the longer residence time from 14.4 to 15.3 years. The amplitudes are 84.4% for the long lifetime and 15.6% for the short one. The root mean square error (RMSE) falls from 3.2% to 2.7%. The fit is much better at the beginning, arguably better at the end, and rather indeterminate in the middle.”
Lance, I agree this is probably close to overfitting on the basis of the C14 data, since the Bern model was apparently matched to several trace studies it may have stronger justifications. I don’t know.
It is interesting that the two approaches show this similarity. This number is also almost exactly the period of the Chandler nutation of 433 days. I don’t want to get into speculating what that may mean (if anything) or how it could be possible, but it’s a curious coincidence.
I think the end of the data from the C.E. sites is probably reaching the floor where new creation of C14 from cosmic influences may need checking for possibly perturbing the initial decay pattern.
Interesting results, Thanks for looking into it.
Greg says:
July 5, 2013 at 11:53 am
Consider a system in equilibrium that takes a pulse of 1 Gtn of extra CO2 with the C14 tracer in it. You seem to be suggesting that the C14-laden molecules will disappear four times faster for some reason.
Yes, that is exactly what happens. The reason is the dilution by the exchanges from the deep oceans: they bring 13C and 14C depleted waters back from the deep, while 13C and 14C rich carbon disappears in the deep. That gives a more rapid depletion of 13C and 14C, compared to the removal of any extra 12C and thus a false impression of faster decay.
Here follows a comparison between the decrease of 13C in the atmosphere, due to fossil burning without dilution of the deep oceans (but all with the same net sink rate of the carbon mix) compared to different deep ocean exchanges:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/deep_ocean_air_zero.jpg
The discrepancies in the earlier years may be from vegetation, which is not accounted for.
If 14C follows the same dilution (not completely, as still some 14C returns from the deep), then the 14 yrs decay rate from the bomb spike would be ~45 years, quite a bit towards the ~52.5 years I calculated before…
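Ferdinand’s dilution argument comes down to two different time scales acting on the same spike: gross exchange removes the tracer quickly, while the CO2 mass excess only decays through the much smaller net uptake. A minimal sketch using the round numbers from the thread (treating the tracer as if none of what leaves ever returns, which is a deliberate lower bound):

```python
import math

M, F = 800.0, 150.0       # atmospheric carbon (GtC), gross exchange (GtC/yr)
EXCESS, NET = 210.0, 4.0  # excess mass (GtC), net uptake (GtC/yr)

def tracer_left(t):
    """Tracer fraction left if none of what leaves ever returns (lower bound),
    i.e. pure exchange with e-fold time M/F = 5.33 yr."""
    return math.exp(-F / M * t)

def excess_left(t):
    """Excess-mass fraction left, with e-fold time EXCESS/NET = 52.5 yr."""
    return math.exp(-NET / EXCESS * t)

t = 14.0
print(round(tracer_left(t), 3))  # tracer almost gone after 14 years
print(round(excess_left(t), 3))  # most of the mass excess still airborne
```

The observed bombtest e-fold time of ~14 years falls between these two limits, which is consistent with both readings: some of the departing 14C does return (lengthening the pure-exchange 5.33 years), and on Ferdinand’s view the remaining gap to 52.5 years is the dilution effect he describes.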
Ferdinand,
After all this we finally get to why you believe 14CO2 is not a good proxy. Interesting, but what happens in the abyss such that biologically rejected heavy isotopes go in and heavy-isotope-depleted water comes out? Carbonate rain? Are the heavy isotopes preferentially mineralized under the extreme pressure?
Ferdi, thanks for explaining where the numbers come from. That makes it a lot clearer.
No. 800Gtn/150Gtn is a dimensionless ratio. Why do you call this a “time”? It is a ratio.
The second figure is a pk-to-pk oscillation, not a rate of change. It does not imply that in 5.3 years’ time all of the CO2 will have changed over. Neither in that form does it tell us even the average probabilistic time any one molecule will remain in the air.
Also something does not add up here. If almost 20% of the total CO2 in the atmosphere gets absorbed each year how is it that MLO only shows a 5ppmv peak to peak variation in 400ppmv?
So … rate of out-gassing ie d/dt(CO2), is proportional to temperature, which is exactly what I said in the first place.
http://climategrog.wordpress.com/?attachment_id=402
Indeed, assuming we are talking about variations in shallow water. Upwelling, deep water will be relatively unaffected , the part. pressure being related to 22C+dSST not just dSST. N. Atlantic sinks will be somewhat the opposite, SST being well below the global average with which the well mixed atmosphere is reacting.
Ah, so we finally get to the bottom of where all this is coming from. Another undeclared assumption. This sounds like you are invoking Revelle’s buffer hypothesis. Before Revelle’s work, climate was thought to work much like the simple model Pettersson is using, which meant CO2 remained much less time in the atmosphere.
Revelle’s was a very detailed and thorough attempt at modelling air-ocean interaction. However it was done before the MLO record and had to be based on theory without being empirically verified.
You now state as a matter of fact that 90% of a change remains in the atmosphere.
Have you read paper 3 provided by Pettersson here? He does not bind himself with a 90% assumption, and derives a combined model in which out-gassing and residual emissions contribute in a much more equal ratio, in a way that provides a very good fit to the MLO record.
This is one of the first things Pettersson deals with in paper 1; it is included in the 14 y estimation. Have you read _any_ of what he presented here?
This is a valid point in principle, but what is the deficit in the total CO2 content of the oceanic reservoir that is interacting on an inter-annual time-scale?
Pettersson’s model may need an extra decay term to account for this. If part of the drop is due to dilution in the oceanic reservoir, the true decay due to permanent absorption would be longer. This may open up some uncertainty in the ratio he derived.
To look at this we need the effective oceanic reservoir of CO2 that is involved on the inter-annual time scale and an explanation of 800/150 producing only 5ppm peak to peak change in 400.
Greg says:
July 5, 2013 at 4:31 pm
Ferdi, thanks for explaining where the numbers come from. That makes it a lot clearer.
“Thus by definition, the residence time of CO2 in the atmosphere is 5.33 years.”
No. 800Gtn/150Gtn is a dimensionless ratio. Why do you call this a “time”? It is a ratio.
No it’s not, it’s 150 GTn/year; it’s the rate of annual exchange between the reservoirs and the atmosphere. So ~152 GTn/year influx and ~147 GTn/year efflux leaves an annual increase of about 5 GTn, as observed.
Also something does not add up here. If almost 20% of the total CO2 in the atmosphere gets absorbed each year how is it that MLO only shows a 5ppmv peak to peak variation in 400ppmv?
Because as shown above a similar amount returns.
we need an explanation of 800/150 producing only 5ppm peak to peak change in 400.
Done.
To look at this we need the effective oceanic reservoir of CO2 that is involved on the inter-annual time scale
As far as the C14 is concerned it’s the thermohaline circulation: the downwelling cold water sinks, flows deep and upwells of order 1000 years later, and during that time the C14 decays.
To see the age of surface water anywhere in the oceans look here:
http://radiocarbon.pa.qub.ac.uk/marine/
Other sources of C14 depleted water are fossil fuel sourced CO2 and dissolved Calcium carbonate from rivers.
Phil,
The half life of 14C is 5700 years. A very rough number for the thermohaline circulation is a millennium. Some folks are still stuck on the concept of “Meridional Overturning Circulation” where most of the down welling is in the North Atlantic. By the time North Atlantic deep water makes it around the Antarctic Vortex and into the North Pacific it is closer to 1600 years old, but the reality is that a helluva lot of deep water is formed at the edge of the ice around Antarctica. This deep water enjoys a shorter transit. Pick a number. A millennium has a nice ring.
Anyway, at say a .1 decay in a millennium, radioactive decay will not explain the transition from heavy isotopes in to light isotopes out of the thermohaline circulation.
Ferdinand Engelbeen says:
July 5, 2013 at 1:58 pm
Some addition:
Take a pulse injection of some 100 GtC fossil CO2 in the atmosphere 160 years ago:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/fract_level_pulse.jpg
The “fraction” FA represents the fraction of “human” CO2 still residing in the atmosphere and equals the 13C rate of change as “tracer”.
FL is the same in the upper oceans (not important here), tCA = total carbon in the atmosphere and nCA = natural carbon in the atmosphere.
After some 60 years, nearly all of the human carbon has disappeared, but still 30% of the pulse remains in the atmosphere.
Something similar happens with the 14C bomb spike in the atmosphere, but that depends on the specific rates of what returns of 14C from the deep oceans.
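The difference between the tracer (the “human” molecules) and the total excess can be sketched with two first-order decays; the 50-year adjustment time below is an assumed, illustrative value, and any return of labeled carbon from the deep oceans is neglected:

```python
import math

M_ATM = 800.0    # GtC in the atmosphere (round number used in this thread)
F_GROSS = 150.0  # GtC/yr gross exchange with other reservoirs
TAU_ADJ = 50.0   # assumed e-folding time of the total excess (illustrative)

tau_res = M_ATM / F_GROSS  # ~5.3 yr: how fast individual molecules are swapped

years = 60.0
label_left = math.exp(-years / tau_res)   # fraction of original molecules left
excess_left = math.exp(-years / TAU_ADJ)  # fraction of the excess mass left
print(f"after {years:.0f} years: labeled molecules {label_left:.2%}, "
      f"excess CO2 {excess_left:.0%}")
```

With these assumed numbers the labeled molecules are essentially gone after 60 years, while roughly 30% of the mass excess still remains, matching the behaviour described above.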
Greg says:
July 5, 2013 at 4:31 pm
It does not imply that in 5.3 years time all of the CO2 will have changed over.
No, but it tells us that the average decay time of a “tracer” is 5.3 years, if it doesn’t come back from the other reservoirs…
If almost 20% of the total CO2 in the atmosphere gets absorbed each year how is it that MLO only shows a 5ppmv peak to peak variation in 400ppmv?
Ah, that is a nice feature of nature… The two main fluxes, oceans and vegetation, change in opposite ways with temperature: besides the equatorial upwelling and polar downwelling, the mid-latitude oceans are a source of CO2 in summer and a sink in winter. That is probably over half of the back-and-forth exchange. Mid- and high-latitude vegetation goes the opposite way: huge uptake in spring and summer, huge release of CO2 in fall (from fallen leaves), less in winter and somewhat more again in summer. The net result of all these seasonal flux changes is near 1 ppmv in the SH and up to 16 ppmv near ground in the NH. Thus vegetation wins in the NH, simply because the ratio of land-vegetation area to ocean area is much larger in the NH than in the SH:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/month_2002_2004_4s.jpg
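The near-cancellation can be illustrated with two opposing seasonal sinusoids; the amplitudes below are made-up, illustrative numbers, not measured fluxes:

```python
import math

# Two large opposing seasonal fluxes (amplitudes are assumed, illustrative)
OCEAN_AMP = 30.0     # GtC/yr: mid-latitude oceans outgas in summer, absorb in winter
VEG_AMP = 60.0       # GtC/yr: NH vegetation absorbs in summer, releases in fall
GTC_PER_PPMV = 2.12  # ~2.12 GtC of atmospheric carbon per ppmv of CO2

# The two sinusoids oppose each other, so only their difference drives the
# atmospheric seasonal cycle.
net_amp = VEG_AMP - OCEAN_AMP
cum, trajectory = 0.0, []
for month in range(12):
    cum += net_amp * math.sin(2 * math.pi * month / 12) / 12.0  # GtC this month
    trajectory.append(cum)

swing_ppmv = (max(trajectory) - min(trajectory)) / GTC_PER_PPMV
print(f"peak-to-peak seasonal swing: {swing_ppmv:.1f} ppmv")
```

Each flux individually moves tens of GtC per year, yet the residual atmospheric swing is only a few ppmv, of the order of the MLO seasonal cycle.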
Indeed, assuming we are talking about variations in shallow water. Upwelling deep water will be relatively unaffected, the partial pressure being related to 22 C + dSST, not just dSST. N. Atlantic sinks will be somewhat the opposite, SST being well below the global average with which the well-mixed atmosphere is reacting.
No, you need a flux decay rate based on pressure difference, not temperature. The partial pressure of the continuous upwelling is affected by temperature and remains the same for a sustained change in temperature. That is true. But as the partial pressure of the atmosphere changes, the initial flux increase (about 5% for a 1 K increase) is countered back toward the flux that existed before the temperature increase. That needs to be taken into account.
Revelle’s was a very detailed and thorough attempt at modelling air-ocean interaction. However, it was done before the MLO record and had to be based on theory without being empirically verified.
It was based on theoretical considerations of the equilibrium reactions in seawater, which could be simply confirmed by laboratory tests, even at that time. Revelle himself didn’t (openly) believe it, as it went against the “consensus” of that time (history repeats itself…), and didn’t take it into account for his earliest calculations, but changed his mind just prior to his death. But see:
http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
You now state as a matter of fact that 90% change remains in the atmosphere.
It is practically confirmed by the continuous series in a few places where ocean samples were taken over several decades (Bermuda, Hawaii,…). The increase in DIC (total inorganic carbon) in seawater follows the increase in the atmosphere with 10% of the change. Unfortunately the nice graphs showing a lot of data disappeared from the net and I haven’t found a good alternative yet.
The 90% of course doesn’t remain in the atmosphere, but it responds to a slower decay rate than the first 10%. That is important for the differences in fast and slower reactions to temperature, but less important for establishing one decay rate for the combination of the two fastest terms.
I had read the Pettersson writings, but it seems that I had missed that they factored in the 14C dilution by fossil fuel use. Thus the main problem with the Pettersson 14C bomb spike is the dilution of 14C by the deep oceans and other reservoirs.
If part of the drop is due to dilution in the oceanic reservoir, the true decay time due to permanent absorption would be longer.
As the 13C “spike” from human emissions shows a factor 3.2 thinning from the deep oceans, the 14C spike drop probably also needs a huge correction…
gymnosperm says:
July 5, 2013 at 4:20 pm
After all this we finally get to why you believe 14CO2 is not a good proxy. Interesting, but what happens in the abyss such that biologically rejected heavy isotopes go in and heavy-isotope-depleted water comes out? Carbonate rain? Are the heavy isotopes preferentially mineralized under the extreme pressure?
These days my reasonings are a lot slower than in the past (getting older…), so it takes more time to get to the point which is important…
In the case of 13C-depleted emissions, it is rather simple: what goes into the deep oceans over the past 160 years still needs a lot of time to get back into the atmosphere. The pre-industrial equilibrium of isotopes was 0 to 1 per mil in the deep oceans (still the same today), 1 to 5 per mil in the ocean surface (thanks to biolife and the drop-out of organics into the deep) and around -6.4 in the atmosphere. Exchanges between ocean surface and atmosphere induce a drop of about -10 per mil in one direction and a drop of about -2 per mil on the way back, an average of -8 per mil for the bulk back-and-forth fluxes, which is what gives the equilibrium values in atmosphere and oceans.
Vegetation is a player that is difficult to pin down: a net vegetation uptake increases d13C in the atmosphere, a net decay decreases it. Comparing other variables (d13C and O2 changes) seems to show that vegetation was a slight emitter pre-1990 and a slight, but increasing, absorber post-1990.
Similar changes may occur in 14C, as what comes out of the oceans is pre-bomb-spike and what goes in is post-bomb-spike; initially that was a factor of ~2. There is some further depletion due to age, but that is not very important (a 1,000-year transit is small compared with the ~60,000 years at which 14C falls below detection).
Again, vegetation is a player that is difficult to constrain, and the biological discrimination is larger than for 13C.
Then we have the redistribution of 14C over the different fast-reacting reservoirs. Both the ocean surface and (land) vegetation are about the same size as the atmosphere. Due to ocean chemistry, the ocean surface reacts with only a 10% change, but the distribution of 14C over land vegetation is not so constrained and may be quite fast for some parts (leaf growth and decay), slower for others (trunks, roots), regardless of what is stored as more permanent carbon (peat, brown coal,…).
Not an easy calculation.
“Ah, that is a nice feature of nature…” Of course, I should have thought of that. I could see something was wrong.
14 x 3.2 = 44.8, which is nearer to your 100 ppmv/(2 ppmv/a) figure, but that itself is too low because 2 ppmv/a is only the most recent rate of change. 14 x 3.2 isn’t the right way to do it, but the corrected value will move in that direction, though by less.
Where can this value of 3.2 be derived? Of course:
http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
Thanks for the link; that’s quite a concise explanation of the Revelle factor. The last paragraph:
So it seems most of the arguments you have put forward so far equating microatm to ppmv are invalid.
Greg says:
July 6, 2013 at 1:42 am
Taking into account the ocean chemistry, a greater proportion of the added carbon remains in the form of dissolved CO2 than for the pre-existing mixed layer carbon. Thus the percentage increase in (pCO2 )ml is greater than the percentage increase in DIC.
You need to take into account the different percentages of the different carbon forms. DIC is the sum of free CO2/H2CO3 + bicarbonate + carbonate ions.
Most is bicarbonate, 10% is carbonate and free CO2 + H2CO3 is less than 1%.
Thus a 30% increase of CO2 in the atmosphere gives a 30% increase of free CO2 in the ocean surface at equilibrium (1% -> 1.3%), which obeys Henry’s Law, but only a 3% increase in total carbon (DIC), which nevertheless represents the bulk of the added carbon in the ocean surface.
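That arithmetic can be written out explicitly; the partitioning and the buffer (Revelle) factor of 10 below are approximate textbook values, assumed here for illustration:

```python
# Approximate DIC partitioning in surface seawater (assumed textbook values)
fractions = {"bicarbonate": 0.89, "carbonate": 0.10, "free CO2 + H2CO3": 0.01}
assert abs(sum(fractions.values()) - 1.0) < 1e-9

atm_rise = 0.30  # 30% rise in atmospheric pCO2
revelle = 10.0   # assumed buffer (Revelle) factor

free_co2_rise = atm_rise       # Henry's law: free CO2 tracks atmospheric pCO2
dic_rise = atm_rise / revelle  # buffered response of total dissolved carbon
print(f"free CO2: +{free_co2_rise:.0%}, total DIC: +{dic_rise:.0%}")
```

So free CO2 rises in proportion (the 1% -> 1.3% step above), while total DIC rises by only about a tenth of the relative atmospheric change.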
Greg says:
July 6, 2013 at 1:42 am
So it seems most of the arguments you have put forward so far equating microatm to ppmv are invalid.
Seems that I have misinterpreted what you meant…
pCO2 of the oceans is a matter of several factors, the two main ones being temperature and total dissolved inorganic carbon. Other factors also play a role: pH (but that largely depends on DIC, if no external factors are involved), salt content,…
If DIC and temperature are known, one can calculate the pCO2, all other variables being more or less constant. The nice thing of seawater is that knowing 2 or 3 variables is enough to calculate all the other variables, including pH.
Nevertheless, most pCO2 data are based on on-site measurements. In the past these came mostly from dedicated equipment (ships, buoys, stations); nowadays more and more come from commercial seagoing ships equipped with fully automatic detection systems for a lot of variables, including pCO2.
What is measured in seawater as pCO2 is in general in disequilibrium with the pCO2 of the atmosphere. If there is no pressure difference, there is no net flux (more accurately: influx and outflux of CO2 are equal). The larger the difference, the larger the flux in the high-to-low direction. That is directly proportional, the other main variable, wind speed, being constant.
Thus no matter how large or small the fluxes are, no matter how fast the saturation of the ocean surface is, as long as there is a pCO2 difference (where for the atmosphere pCO2 near equals ppmv), the flux remains proportional to the difference.
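A minimal sketch of that proportionality, with an assumed transfer coefficient and illustrative pCO2 values (not measured data):

```python
# Net air-sea flux proportional to the pCO2 difference (illustrative numbers)
k = 0.05       # assumed transfer coefficient, arbitrary flux units per microatm
p_atm = 400.0  # atmospheric pCO2 in microatm (near equal to ppmv)
p_sea = 380.0  # ocean-surface pCO2 in microatm

for year in range(5):
    flux = k * (p_atm - p_sea)  # net flux from air to sea
    print(f"year {year}: dpCO2 = {p_atm - p_sea:5.1f} microatm, flux = {flux:.2f}")
    p_sea += 0.8 * flux         # assumed: part of the uptake raises ocean pCO2
```

The net flux shrinks in step with the difference; when the difference reaches zero, the gross exchanges continue but the net flux vanishes.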
Ferdi: residence time = mass / inflows = mass / outflows = 800/150 = 5.33 years.
No, that is a misinterpretation. “Residence time” is an inappropriate and misleading name for the time constant of an exponential decay resulting from a linear feedback model.
The denominator in this relationship is the flux, i.e. the time derivative, not the semi-annual peak-to-trough value treated as a linear increase. The flux will have units of GtC/year, but that does not mean you can just substitute any value you can find that has the same units.
This is an oscillatory function, not a straight line. You need to solve the differential equation to derive the time constant; the value you have derived comes from a fundamental misconception of what the flux term represents.
Again, Pettersson explains this and how it relates to his derivation in paper 1. It really appears that you have not read his papers at all.
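The distinction argued over in this exchange can be put in two lines; the net-uptake figure below is an assumed, illustrative value, not a measurement:

```python
M_ATM = 800.0    # GtC in the atmosphere (the thread's round number)
F_GROSS = 150.0  # GtC/yr gross in-and-out exchange flux
F_NET = 5.0      # GtC/yr net uptake per 100 GtC of excess (assumed)

tau_residence = M_ATM / F_GROSS  # turnover of individual molecules, ~5.3 yr
tau_adjust = 100.0 / F_NET       # e-folding time of an excess: dE/dt = -E/tau
print(f"residence time ~{tau_residence:.1f} yr, adjustment time ~{tau_adjust:.0f} yr")
```

Both quantities have units of years, but only the second comes from solving the decay equation for an excess; substituting the gross flux into mass/flux gives the turnover time, not the pulse decay time, which is the point of the objection above.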