Guest Post by Willis Eschenbach
On another thread here at WUWT we were discussing the Bern carbon dioxide model used by the IPCC. The Bern Model calculates how fast a pulse of emitted CO2 decays back towards the pre-pulse state. See below for Bern model details. We were comparing the Bern model with a simple single-time-constant exponential model. Someone linked to a graphic from the IPCC AR5 report, Working Group 1, Chapter 6:
ORIGINAL CAPTION: Figure 6.1 | Simplified schematic of the global carbon cycle. Numbers represent reservoir mass, also called ‘carbon stocks’ in PgC (1 PgC = 10^15 gC) and annual carbon exchange fluxes (in PgC yr–1). Black numbers and arrows indicate reservoir mass and exchange fluxes estimated for the time prior to the Industrial Era, about 1750 (see Section 6.1.1.1 for references). Fossil fuel reserves are from GEA (2006) and are consistent with numbers used by IPCC WGIII for future scenarios. The sediment storage is a sum of 150 PgC of the organic carbon in the mixed layer (Emerson and Hedges, 1988) and 1600 PgC of the deep-sea CaCO3 sediments available to neutralize fossil fuel CO2 (Archer et al., 1998).
Red arrows and numbers indicate annual ‘anthropogenic’ fluxes averaged over the 2000–2009 time period. These fluxes are a perturbation of the carbon cycle during Industrial Era post 1750. These fluxes (red arrows) are: Fossil fuel and cement emissions of CO2 (Section 6.3.1), Net land use change (Section 6.3.2), and the Average atmospheric increase of CO2 in the atmosphere, also called ‘CO2 growth rate’ (Section 6.3). The uptake of anthropogenic CO2 by the ocean and by terrestrial ecosystems, often called ‘carbon sinks’ are the red arrows part of Net land flux and Net ocean flux. Red numbers in the reservoirs denote cumulative changes of anthropogenic carbon over the Industrial Period 1750–2011 (column 2 in Table 6.1). By convention, a positive cumulative change means that a reservoir has gained carbon since 1750. …
Now, there are many things of interest in this graphic, but what particularly interested me in this were their estimates of total fossil fuel reserves. Including gas, oil and coal, they estimate a total fossil fuel reserve of about 640 to 1580 gigatonnes of carbon (GtC). I decided to apply those numbers to both the Bern Model and the simple exponential decay model.
Now, the Bern model and the simple exponential model are both exponential decay models. The difference is that the simple exponential decay model uses a single half-life for the decay of the CO2 emissions. The Bern model, on the other hand, applies three different half-lives to three different fractions of the emitted CO2, and a further 15% of the emitted CO2 is said to decay only over thousands of years.
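The difference between the two models can be sketched in a few lines of code. The Bern coefficients below (including the ~15% quasi-permanent fraction a0) are one commonly quoted parameter set for the Bern pulse-response function, not necessarily the exact values used in AR5; the 33-year time constant is the one used for the single-exponential curves later in this post.

```python
import math

# Fraction of a 1-unit CO2 pulse remaining airborne after t years.
# Bern-style parameters: (fraction, time constant in years); tau=None
# marks the ~15% quasi-permanent fraction. These coefficients are one
# commonly quoted set, not necessarily the exact AR5 values.
BERN = [(0.152, None), (0.253, 171.0), (0.279, 18.0), (0.316, 2.57)]

def bern_remaining(t):
    total = 0.0
    for frac, tau in BERN:
        total += frac if tau is None else frac * math.exp(-t / tau)
    return total

def single_exp_remaining(t, tau=33.0):
    # Single-time-constant model, tau = 33 years as in this post.
    return math.exp(-t / tau)

for t in (0, 20, 50, 100):
    print(t, round(bern_remaining(t), 3), round(single_exp_remaining(t), 3))
```

The key behavioral difference: the single exponential decays toward zero, while the Bern response flattens out above its permanent fraction, so the Bern model always retains at least ~15% of any pulse.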
My interest was in finding out what would happen, according to the two CO2 models, if we burned all of the fossil fuels by 2100. For the smaller case, burning 640 GtC by the year 2100 implies a burn rate below current emissions, that is to say about 7.5 GtC per year for the next eighty-five years.
For the larger case, burning 1,580 gigatonnes of carbon by the end of this century implies a burn rate that increases by about 1.1% every year from now until 2100.
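A quick back-of-envelope check of those two burn scenarios, using an 85-year window and the geometric-series formula for cumulative emissions (the exact starting year and emission baseline are rough assumptions):

```python
# Sanity-check of the two burn scenarios in the post (85 years to 2100).

YEARS = 85

# Low case: 640 GtC burned at a constant rate.
low_rate = 640 / YEARS
print(f"constant rate for 640 GtC: {low_rate:.1f} GtC/yr")

# High case: 1,580 GtC burned with emissions growing 1.1% per year.
# Cumulative total of a geometric series is E0 * ((1+g)**n - 1) / g,
# so the implied starting rate E0 is:
g = 0.011
e0 = 1580 * g / ((1 + g) ** YEARS - 1)
print(f"implied starting rate for 1,580 GtC: {e0:.1f} GtC/yr")
```

The low case comes out at ~7.5 GtC/yr, matching the post; the high case implies a starting rate around 11 GtC/yr, in the neighborhood of current total (fossil plus land use) emissions.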
So, given the assumptions of the two models, how would this play out in terms of the atmospheric concentration of CO2? Figure 2 shows those results:
Figure 2. CO2 projections using the Bern Model (red and blue) and a single exponential decay model (purple and light green). The single exponential decay model uses a time constant tau of 33 years. Note that this graph has been replaced; the original graph showed incorrect values.
Now, there are several things of interest here. First, you can see that unfortunately, we still don’t have enough information to distinguish whether the Bern Model or the single exponential decay model is more accurate.
Next, the two upper values seem unlikely, in that they assume a continuing exponential growth over eighty-five years. This kind of long-term exponential growth is rare in real life.
Finally, here’s the reason I wrote this post. This year, the atmospheric CO2 level is right around four hundred ppmv. So to double, it would have to go to eight hundred ppmv … and even assuming we could maintain exponential growth for the next eight decades and we burned every drop of the two thousand gigatonne high-end estimate of the fossil reserves, CO2 levels would still not be double those of today.
And in fact, even a fifty percent increase in CO2 levels by 2100 seems unlikely. That would be six hundred ppmv … possible, but doubtful given the graph above.
Short version? According to the IPCC, there are not enough fossil fuel reserves (oil, gas, and coal) on the planet to double the atmospheric CO2 concentration from its current value.
Best regards to all,
w.
My Usual Request: Misperceptions are the bane of the intarwebs. If you disagree with me or anyone, please quote the exact words you disagree with. I can defend my own words. I cannot defend someone else’s interpretation of some unidentified words of mine.
My Other Request: If you believe that e.g. I’m using the wrong method on the wrong dataset, please educate me and others by demonstrating the proper use of the right method on the right dataset. Simply claiming I’m wrong doesn’t advance the discussion.
Models: The Bern Model is described here and the calculation method used in the model is detailed here.
Willis, I know from previous comments made that you prefer to blog rather than submit to peer reviewed journals. However, this seems like a dynamite story that may merit doing so. If the story is in the mainstream scientific literature, it is a bit harder to ignore.
Regarding the issue of whether a doubling of CO2 and an increase of 2°C is measured above pre-industrial or present levels, what matters is the final temperature reached. Using temperature estimates based on isotopes from ice cores, the central estimate for the Eemian, about 125,000 years ago, is that temperatures were about 3°C higher than the recent average. As the planet survived the Eemian, it seems there is plenty of room for a 2°C increase, whether we calculate from now or from pre-industrial levels.
Keith, this story is in the scientific literature. It’s in the IPCC report as well. Willis Eschenbach is simply saying things people familiar with this topic have known for ages.
Nice post, Willis! So even if one buys the validity of the IPCC’s Bern model, the climate armageddon is not happening.
The true insanity of all of these calamitous scenarios is the ridiculously stupid assumption that the world will still be burning fossil fuels anywhere near the current level 40 years from now. Anyone who is familiar with power technologies realizes that Gen4 nuclear reactors (such as Transatomic Power’s new version of the molten salt reactor) will replace all other forms of power production, excepting (perhaps) peak load power plants. This doesn’t even require any fear of carbon emissions, since the new technology is not only cleaner and safer than any existing technology, but also cheaper, totally reliable, and largely free of nuclear waste issues. There are no obstacles to the commercialization of this new technology, it is highly proliferation resistant, and no one can cause a meltdown at such a plant.
The main obstacle is the braindead greenies, who oppose nuclear power regardless. But they can easily be neutralized by facts, when presented properly. Even today we have informational videos produced by Transatomic Power, presented by the female half of the company’s ownership team.
arthur4563,
Some link to that video available?
James Hansen claims we’ll reach 1,400 ppm due to fossil fuel burning by about 2130, and this will lead to 20°C of warming and an uninhabitable planet.
http://rsta.royalsocietypublishing.org/content/371/2001/20120294.short
If we assume that fossil fuel emissions increase by 3% per year, typical of the past decade and of the entire period since 1950, cumulative fossil fuel emissions will reach 10 000 Gt C in 118 years. Are there sufficient fossil fuel reserves to yield 5000–10 000 Gt C? Recent updates of potential reserves, including unconventional fossil fuels (such as tar sands, tar shale and hydrofracking-derived shale gas) in addition to conventional oil, gas and coal, suggest that 5×CO2 (1400 ppm) is indeed feasible. Our calculated global warming in this case [1400 ppm] is 16°C, with warming at the poles approximately 30°C. Calculated warming over land areas averages approximately 20°C. Such temperatures would eliminate grain production in almost all agricultural regions in the world. Increased stratospheric water vapour would diminish the stratospheric ozone layer. More ominously, global warming of that magnitude would make most of the planet uninhabitable by humans.
The Club of Rome raising its ugly head.
KennethRichards, thank you for the link. If Hansen et al are as far off as I think they are, it should not take long to find out. Same if they are correct and I am wrong.
Another Hansen commentary on fossil fuel emissions. He’s puzzled that the airborne fraction hasn’t been correlating with the emission rate for several decades…
http://ej.iop.org/images/1748-9326/8/1/011006/erl459410f3_online.jpg
“However, it is the dependence of the airborne fraction on fossil fuel emission rate that makes the post-2000 downturn of the airborne fraction particularly striking. The change of emission rate in 2000 from 1.5% yr-1 to 3.1% yr-1 (figure 1), other things being equal, would [should] have caused a sharp increase of the airborne fraction” —- Hansen et al, 2013
[Note: I posted this comment yesterday at earlier post on CO2, but missed the conversation, so please forgive re-posting here.]
There is another carbon-14 observation that enlightens this debate. Each year, cosmic rays create roughly 8 kg of carbon-14 in the upper atmosphere, and have done so for millions of years. One in eight thousand carbon-14 atoms decays into nitrogen every year. For equilibrium, there must be 64,000 kg of carbon-14 on Earth (so that it decays at the same rate it is being created). But there is only 800 kg of carbon-14 in the atmosphere (I’m rounding to one significant figure). Where is the rest of it? And how does a net transfer of 8 kg of carbon-14 into this reservoir take place each year? You will find more details at the posts starting with the one below.
http://homeclimateanalysis.blogspot.com/2015/09/carbon-14-origins-and-reservoir.html
So far as I can tell, the remaining 63,200 kg must be in the deep ocean, where the concentration of carbon-14 is 80% of that in the atmosphere. We have something like 40 Pg (petagrams) of carbon moving into the deep ocean each year, and 40 Pg coming back, so a net flow of 8 kg takes place into the deep ocean. We can write down analytical equations for the resulting two-reservoir system, and solve them directly or numerically.
http://homeclimateanalysis.blogspot.com/2015/10/carbon-14-analytic-solution-to.html
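Kevan’s two-reservoir system can be sketched numerically. The production rate, decay constant, atmospheric carbon mass, and exchange flux below follow the figures in his comment; the deep-ocean carbon inventory D is an assumed value, chosen to be consistent with his 63,200 kg ocean store and ~80% concentration ratio.

```python
# Numerical sketch of the two-reservoir carbon-14 system described above.
P = 8.0             # C-14 production by cosmic rays, kg/yr
LAMBDA = 1.0/8000   # radioactive decay constant, 1/yr
ATM_C = 850.0       # atmospheric carbon, PgC (rough)
D = 85_000.0        # deep-ocean carbon, PgC (assumed, implied by his numbers)
FLUX = 40.0         # carbon exchanged each way with the deep ocean, PgC/yr

def run(years, dt=1.0):
    a, o = 0.0, 0.0             # C-14 mass in atmosphere / deep ocean, kg
    for _ in range(int(years/dt)):
        down = FLUX * a/ATM_C   # C-14 carried down with the sinking carbon
        up = FLUX * o/D         # C-14 carried back up with the returning carbon
        a += dt * (P - LAMBDA*a - down + up)
        o += dt * (down - up - LAMBDA*o)
    return a, o

a, o = run(100_000)  # integrate long enough to reach equilibrium
print(f"atmosphere ~{a:,.0f} kg, deep ocean ~{o:,.0f} kg")
print(f"ocean/atmosphere concentration ratio ~{(o/D)/(a/ATM_C):.2f}")
```

With these inputs the system settles near 800 kg in the atmosphere and ~63,000 kg in the deep ocean, with an ocean concentration about 80% of the atmospheric one, reproducing the numbers in the comment.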
We also note that absorption and emission by the ocean are governed by Henry’s Law: absorption by the oceans increases in proportion to the concentration in the atmosphere. According to this model, the residence time of CO2 in the atmosphere is around 17 years; according to the bomb test data, it’s about 15 years. If we consider how long it will take humans to double the atmospheric concentration of CO2, the answer is roughly 6,000 years, because we have to double the concentration in the oceans too.
http://homeclimateanalysis.blogspot.com/2015/12/carbon-cycle-with-ten-petagrams-per-year.html
Or so it seems to me, anyways.
“Each year, cosmic rays create roughly 8 kg of carbon-14 in the upper atmosphere, and has done so for millions of years.” A very crude approximation. C14 concentration has historically varied, read about corrections needed for radiocarbon dating.
Kevan Hashemi,
The 14C bomb test decay suffers from the time lag (~1,000 years) between the sinks into the deep ocean near the poles and the return near the equator: what was going into the deep in 1960 was at the height of the tests, while what returned was from ~1,000 years ago, at about 44% of the bomb spike. See my reaction there.
That means the decay rate of the 14C bomb spike was several times faster than it would be for a 12CO2 spike…
With an observed net sink rate of 2.15 ppmv/year for 110 ppmv excess pressure in the atmosphere, the e-fold decay time is slightly over 50 years, about 3 times slower than the removal of 14CO2 out of the atmosphere…
Over the past 55 years, that slightly-over-50-years decay rate has been practically constant, which points to an absorption rate roughly linear in the extra pressure in the atmosphere above the oceans’ steady state, per Henry’s law…
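Ferdinand’s e-fold estimate is just the excess pressure divided by the observed net sink rate; here it is as a one-line check using his numbers:

```python
# Ferdinand's e-fold estimate: excess CO2 partial pressure above the
# assumed ocean steady state, divided by the observed net sink rate.
excess = 110.0    # ppmv above steady state
net_sink = 2.15   # ppmv/yr observed net uptake
tau = excess / net_sink
print(f"e-fold decay time ~{tau:.0f} years")
```

This gives ~51 years, the “slightly over 50 years” quoted above.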
George, the creation rate is constant to about ±25%.
Ferdinand, I did admire your diagram in the comments to the previous CO2 post, and it looks good to me, and I hear what you are saying about carbon-14 and carbon-12, although I have not studied carbon-12. My point is: you don’t need anything more than the following four numbers to figure out the carbon cycle of the Earth: the carbon-14 production rate by cosmic rays (to ±25%), the mass of carbon in the atmosphere (to ±25%), the decay rate of carbon-14, and the concentration of carbon-14 in the deep ocean. These are sufficient to fix the carbon cycle’s behavior, as expressed in the classic paper by Arnold et al., “The Distribution of Carbon-14 in Nature”. The bomb test data is an independent confirmation of the model. The model shows how it will take 6,000 years to double atmospheric CO2 concentration at 10 Pg/yr.
Kevan,
You have a problem with your 14CO2 cycle: it is not enough to know the total mass of 14C in the oceans; you need to take into account the long delay between what goes into the deep oceans and what returns.
The total amount of 14CO2 is negligible compared to the amount of 12/13CO2 circulating between the deep oceans and the atmosphere. As a result, a doubling of 14CO2 in the atmosphere (as happened with the bomb tests) doesn’t have any influence on the total CO2 going in and out: what goes in and out as 12/13CO2 remains the same (in ratio to the extra CO2 in the atmosphere), but what returns as 14CO2 is only half the bomb spike (minus the radioactive decay over 1,000 years), even if there is zero difference in the 12/13CO2 in/output.
The change is in the difference between the 14CO2 concentrations at the input and the output, while for a 12CO2 spike there is hardly a change in concentration, only in the mass which returns. The latter has a much slower decay rate than the 14CO2 spike, whose decay is the product of the total returning (12/13CO2) mass and the 14CO2 concentration…
:>
I normally don’t like to post just to applaud, but. Very nice Willis. It’s stuff like this that never occurs to me in the first place. Not enough fossil fuel reserves to double from current levels huh.
Thanks.
Excellent demonstration of the dimensions of the problem. An engineer’s approach. Climate science tends to keep these sorts of evaluations mainly in the dark so they can hyperbolize the fears without measuring anything. In a discussion about world population a few years ago (I’ve mentioned it a few times since in comments) I noted that the world population could all fit into Lake Superior with 15 m² each to tread water in. Yeah, I know we take up a lot of space, but I just wanted to see how much physical space we take up first.
Regarding the sinks end of the formulae, the recent greening of the planet seems to have taken everyone by surprise, even (possibly) Ferdinand Engelbeen. I presented a simple thought experiment a few days ago on another CO2 thread suggesting that the biological sinks are exponential: a fringe of green in the Sahel would make the soil in the strip a bit moister, a new ‘fringe’ would seed into the arid area, and so on, with the original and successive fringes increasing their masses going forward. This would trim the higher estimates of future atmospheric CO2 content until an equilibrium was reached between expanding emissions and sinks. I think we are going to see the effects in the very near future, with the slope of CO2 growth beginning to flatten. Any net cooling would also slow the growth further; the lag would hide it for a while, but over 85 years it should show up.
Thanks very much for this. I can see now why there is so much hand waving and throwing up very slow sink rates and long residence times. They shocked themselves with the math first and then cooked up ways to shore up the CO2 fears. I imagine the Bern model had many iterations to make it as scary as decently possible. When I know there is an agenda behind a program, I start by cutting their projections at least in half because I know they have stuffed every supportive parameter to the limits of what people can buy into.
PS The same thing is going on in the oceans of course with coccolithophores, etc.
To a much lesser extent: CO2 is hardly a limiting factor in the ocean surface; iron and other nutrients are the main limiting factors…
So they say, yet coccolithophores are blooming like mad.
==========
As the bloom depletes nutrients, convection replenishes. There are huge stores deep.
These mechanisms are understood? Yeah, someday.
======================
Gary Pearse,
Why should I be surprised?
It has been known for some time that the biosphere as a whole is a small but growing sink for CO2, at least since 1990, when the accuracy of the oxygen measurements became good enough to measure the small surplus of oxygen produced by the biosphere…
Still, it is only the third decay speed (~170 years e-fold decay, if I remember correctly) in the Bern model, and for the 110 ppmv extra pressure in the atmosphere, the extra uptake is still limited to ~1 GtC/year (0.5 ppmv/year) of the ~9 GtC/year human emissions. The fastest sink is the ocean surface (but limited to 10% of the change in the atmosphere, or ~0.5 GtC/year). The second is the deep oceans.
Ferdinand, I did put ‘possibly’ in deference to your widely accepted expertise on the subject. But I still believe the significant greening over a relatively short time, especially fringing arid areas which were expected to become even more arid, was a surprise to most. Bravo to you for not being surprised, but I wish you had said something a long time ago about it. The extra uptake of 1 GtC/year sounds a little ‘static’ to me, and that is the impression one gets from discussions. My point is that an exponential growth governs this sink (and the ones in the oceans). Is this not a new idea coming out of the greening?
Dissolved iron in the ocean is “low”, but the general abundance of iron in basaltic volcanic-floored ocean basins, the issuing of iron from weathered rocks on land by rivers (where iron averages 5% of the total composition), meteoric dust, etc., means that when iron is taken up by biota, there is an abundance of sources to replenish it. Similarly, calcium carbonate has low solubility, but from the same sources it is abundant and available to replenish the ocean’s soluble burden continuously. How else does one account for the coccolithophores making up the Cliffs of Dover, etc., and the abundant shellfish of the oceans? Shellfish can even take it out of fresh water in granitic rock basins.
I can see most of us have been deceived on these issues. With a cap on atmospheric CO2 at ~550-650 ppm if just left alone, all this worry about the need for iron fertilization and quickly shutting down the fossil fuel business turns out to be a mask for the fact that we are already near the atmosphere’s cap for the effects of today’s emissions. Knowing that wasn’t going to push the new world order agenda as far forward as it has. Henry’s law is all very well for the world in an Erlenmeyer flask, but is much wanting in the dynamic situation of the ocean’s and atmosphere’s complexities.
Gary,
I did mention the increase of uptake by the biosphere of ~1 GtC/year many times before, including the two links I have about that budget:
http://science.sciencemag.org/content/287/5462/2467
and
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
I don’t have a recent update which shows the further evolution of the biological sink, but what is clear is that it is heavily influenced by El Niño, where all bio-life suddenly turns into a net source, followed by a net sink when temperatures drop again…
But, but, but……
Willis, are you really saying that the IPCC is not simply predicting Peak Oil well before the end of the century, but Peak Fossil Fuel too?? Arghhh, wash your mouth out with soap and water, otherwise Richard S Courtney will have a hissy fit. 😉
R
Willis,
Wonderful post. It would be better if you did not include the incorrect single exponential model.
You wrote: “we still don’t have enough information to distinguish whether the Bern Model or the single exponential decay model is more accurate.”
That is not true. I don’t know if the Bern model is right or wrong, but the single exponential decay model is certainly wrong. The short reason is that C-14 decay has a different time constant, so there must be at least two exponentials involved.
If you look at Ari Halperin’s paper (http://defyccc.com/se2016/), he starts out with a detailed description of a rather complex model. He then makes a number of simplifications that alter the physical meaning of the model. He ends up with his equation (16), which I reproduce here with somewhat different notation:
d(C-Ce)/dt = E – lambda*(C-Ce)
where C is concentration of CO2 in the atmosphere, Ce is equilibrium concentration, E is emission rate, and lambda is a first order rate constant. C and E are functions of time, Ce and lambda are constant properties of the system. Halperin’s equation looks different since he replaces (C-Ce) with the excess concentration, for which he uses the symbol C and breaks up E into several terms, but it is mathematically identical.
Physically, the above equation represents a linear two-box model in which one box is the atmosphere and the other is an infinite reservoir of CO2. I say infinite because Halperin assumes that no matter how much CO2 is added to the reservoir, the concentration, Ce, in equilibrium with the reservoir does not change.
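Mike M.’s rendering of Halperin’s equation can be integrated directly. The values of Ce, tau, and E below are illustrative assumptions, not Halperin’s fitted numbers; with constant emissions E, the excess concentration relaxes toward E/lambda = E*tau.

```python
import math

# Integrating the one-box model d(C - Ce)/dt = E - lambda*(C - Ce)
# with illustrative values (Ce, TAU, E are assumptions, not Halperin's
# fitted numbers).

CE = 280.0   # equilibrium concentration, ppmv (assumed)
TAU = 33.0   # 1/lambda, years (assumed)
E = 2.0      # constant emissions, expressed in ppmv/yr (assumed)

def concentration(t, c0=400.0):
    # Exact solution for constant E: the excess relaxes toward E*TAU.
    x0 = c0 - CE
    return CE + E*TAU + (x0 - E*TAU) * math.exp(-t/TAU)

for t in (0, 50, 200):
    print(t, round(concentration(t), 1))
```

With these numbers the concentration settles at Ce + E*tau = 346 ppmv, illustrating the model’s single-time-constant behavior: one starting state, one decay rate, one asymptote.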
Now one can define the lifetime of CO2 at least four different ways:
(1) The average residence time of individual CO2 molecules in the atmosphere as indicated, for example, by the lifetime of bomb test C-14.
(2) The apparent residence time as indicated by comparing emission history to concentration history. That is what Halperin calculates.
(3) The pulse decay time, that is, the decay time observed following the emission of a large pulse of CO2 into an initially equilibrium atmosphere.
(4) The decay time following a sudden stop in emissions.
In a complex system, there is no reason that these four lifetimes have to be the same. But in Halperin’s model, all four are identical. We know for a fact that (1) and (2) are different, so Halperin’s model is wrong and cannot be used to make extrapolations into the future.
Agreed, the Halperin model does not even make a particularly good match to the last 60 years of data (despite his claiming it was an “excellent” fit). A single exponential is totally unsuitable for extrapolation.
The infinite sink is a problem. It implies that, given enough time, all of the CO2 would end up in the oceans. Silly.
The 15% which remains in the Bern parameters is supposed to reflect the proportion that would remain in the atm. when the sinks reach their new equilibrium. I don’t know whether that is an accurate guess.
Greg,
You wrote: “The infinite sink is a problem. It implies that given enough time of CO2 would end up in the oceans. Silly.”
To be fair to Halperin, he effectively assumes an infinite reservoir, not an infinite sink. For the reservoir, the concentration will eventually come to a certain fixed value, independent of how much CO2 was emitted and absorbed. For a sink, the fixed value would be zero, which would be silly indeed.
Mike M. (period),
Except for (1), which is a completely different item (the long lag between the sinks and the return of 14C makes it quite different), (2) to (4) should be equal for a linear system. As far as I know, Halperin didn’t use (1) at all.
Besides the questionable partitioning in quantities in the Bern model, it doesn’t make much difference if you use one decay rate or a mix of several, as long as there is no limit on the maximum uptake. Except for the ocean surface, that is not the case. Even if you look at the emissions up to today, that gives no more than 3 ppmv extra in the atmosphere when the steady state of the deep oceans and the atmosphere is reached again. Thus little residual increase, not even with 900 or 2,000 GtC of emissions.
The general approach of multi-decay model is:
1/τ = 1/τ(1) + 1/τ(2) + 1/τ(3) +…
As long as the decay rates don’t change over time (the first being rapidly saturated, thus also giving a fixed decay), it doesn’t matter if you use the total decay rate or the sum of the individual ones. The overall decay rate is slightly faster than the fastest decay rate, except for the first, because of its limit in quantity.
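The parallel-sink formula above can be checked numerically: for first-order sinks acting at once, the rates add, so decaying with the combined time constant is identical to multiplying the individual survival fractions. The time constants here are illustrative, not the actual Bern coefficients.

```python
import math

# Parallel first-order sinks: 1/tau = 1/tau1 + 1/tau2 + ...
# Time constants below are illustrative, not the Bern coefficients.
taus = [171.0, 55.0, 18.0]
tau_combined = 1.0 / sum(1.0/t for t in taus)

t = 30.0
# Decaying through all sinks at once...
direct = math.exp(-t / tau_combined)
# ...equals the product of the individual survival fractions.
product = math.prod(math.exp(-t/tau) for tau in taus)

print(round(tau_combined, 1), round(direct, 4), round(product, 4))
```

Note that the combined time constant is shorter than the fastest individual one, consistent with the observation that the overall decay is faster than any single sink acting alone.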
Ferdinand,
You wrote: “Except for (1) which is a complete different item (the long lag between sinks and return of 14C makes it quite different), (2) to (4) should be equal for a linear system.”
Lifetimes (3) and (4) should be the same in a linear system, but they need not be the same in a non-linear system, which is what we have in reality. Lifetime (2) should be different from the others even in a linear system provided there is not just one process involved.
“As far as I know, Halperin didn’t use equation (1) at all.”
I don’t understand what you mean. That he ignored lifetime (1)? Why is that relevant. His model is the equation that I gave.
“The general approach of multi-decay model is:
1/τ = 1/τ(1) + 1/τ(2) + 1/τ(3) +…
As long as the decay rates don’t change over time (the first being rapidly saturated, thus also giving a fixed decay), it doesn’t matter if you use the total decay rate or the sum of the individual one’s.”
That is simply not true. You can use that formula to combine the effects of multiple sinks when you have a steady state or quasi-steady state; that is why the Halperin model can give a decent fit. It will also give the initial rate of decay. But if you combine in that way, you will end up with a huge error when you extrapolate.
Mike M. (period),
Some confusion here…
I was responding to your points (1) … (4).
Your point (1) is about the residence time of an individual CO2 molecule, which is not relevant for the decay rates of any excess CO2 mass in the atmosphere and thus not used by Halperin.
Your points (2), (3) and (4) should give the same decay rates for a bunch of linear processes, no matter if you use the individual decay rates or one overall decay rate.
You can use that formula to combine the effects of multiple sinks when you have a steady state or quasi-steady state
Not at all, it is true for any combination of linear decay processes, no matter how far from steady state. Here for a double decay process:
https://en.wikipedia.org/wiki/Exponential_decay#Decay_by_two_or_more_processes
If we may forget the first decay rate for a moment, which is quite limited in uptake, the dominant decay is the second one in the deep oceans, but the third one in vegetation also helps as the overall decay is slightly faster than the second one alone.
The main difference between the single decay and the Bern model is not in the multiple decay rate, it is the partitioning in separate compartments each with its own maximum sink limit which makes the extrapolation of the Bern model more questionable than the single decay model… See the nice fit of the past CO2 increase in the second graph in my response to Whiten here, with a single decay rate…
Ferdinand,
“Not at all, it is true for any combination of linear decay processes, no matter how far from steady state.”
I was indeed careless in what I said. You are correct *if* there is no saturation of the sinks. I had accepted the conventional wisdom on saturation, and implicitly assumed that in my earlier response, but you have given me reason to doubt that. If you are correct and there is no saturation, then Halperin’s model may give a reasonable extrapolation.
My default position is that when multiple groups of capable people put years into studying something, it is unlikely that they have made some dumb error. Science can not otherwise proceed. But when evidence of an error is presented, that must also be considered. So it looks like I have some reading and thinking to do.
Mike M. (period),
The Bern model was discussed already in 2001 between Peter Dietze (who used a single decay model too) and Fortunate Joos and others about the Bern model:
http://www.john-daly.com/dietze/cmodcalc.htm
Since that time, the 55-year e-fold decay (with a slightly different formula) has remained about the same, even become a little faster, which points to (currently) no limit on the CO2 uptake at the ocean sink places.
I think the main problem in the Bern model is that they calculated it from a gigantic 5,000 GtC pulse, which indeed gives a huge residual even in the deep oceans, but they applied it to even the smallest pulse in the present.
From the above discussion, regardless of the mutual misunderstandings, it seems that the Bern model makers applied the Revelle factor to the whole ocean surface, including the sink places, which is highly questionable. Feely’s compilation of pCO2 measurements all over the oceans was published in the same year as the above discussion, and thus may not have been known at that time…
I don’t think a lot of researchers are busy modeling the CO2 cycle; most may just be trying to figure out the present cycle and don’t care (much) about future scenarios…
Extrapolating Willis’s graphs into the next century would show a significant fall in CO2 (700 down to less than 500 ppmv by 2135 using the higher numbers, 500/400 down to less than 400/340 using the lower) and consequent cooling, sea level going down, etc.
Building on this, if all oil, gas and coal were burnt this century, presumably the difference between TCR and ECS would disappear. In AR5, TCR is in the range 1.0 to 2.5°C and ECS 1.5 to 4.5°C. If CO2 were to get into the 700+ range by the end of the century and then fall dramatically the next, the top end of the temperature rise would fall dramatically.
Willis:
It is clear that a three-exponential model will be more accurate: it has more parameters. The global carbon cycle obviously cannot be accurately represented by a single exponential. It may be “good enough” over the 60 years or so of data that one may suggest a more parsimonious description is preferable. It will not be more accurate.
However, what is parsimonious for fitting a limited period of known data is NOT going to work as an argument for what is best for wild extrapolation outside the range of the data.
Furthermore, if the single exponential is derived by fitting to the extended historical data, it will not even be optimally fitted to the last 60 years of good data.
Historical emission data show three different, roughly exponential rates of growth. Probably only the last is meaningful for ( business as usual ) extrapolation.
Since the Bern model seems to be “validated” by comparison to other models, I’m not particularly convinced by their derived coefficients, but the idea of three time constants for three main reservoirs seems sensible.
However, the thrust of the article is interesting: we cannot keep on doubling atmospheric CO2. One doubling from present levels is about the outside limit, and we’re not going to get beyond about 2.5x pre-industrial.
Good article.
So, yeah, fine, even using their model, the max rise in atmospheric CO2 is insignificant. But, the model is bollocks. Atmospheric CO2 is governed by temperatures, and humans have very little impact on it.
http://i1136.photobucket.com/albums/n488/Bartemis/temp-CO2-long.jpg_zpsszsfkb5h.png
In the near future, La Nina is going to send temperatures crashing down, and there will be a decade or two of declining or stable global temperature thereafter. We will see the rate of change of CO2 decline with it, even as human inputs continue increasing. Hopefully, that divergence will finally end the vainglorious notion that humans are in control of the planet.
Bart,
Where were you so late?
The increase is ~90% caused by human emissions, ~10% by temperature, as all observations point to a human cause and none to temperature as the sole cause. Temperature variability causes most of the year-to-year variability, which is not more than +/- 1.5 ppmv for extremes like El Niño and Pinatubo, around the 80 ppmv CO2 increase. See the real cause of the increase:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/had_co2_emiss_nat_deriv.jpg
Bartemis:
I just noticed something on the WoodForTrees temperature (green) vs. net atmospheric CO2 emission (red) correlation.
Do you see that around 1990 the green (temperature) is above the red (net emission)? Was that a “strong” El Niño year? Was it different than the others?
I suspect we are near the peak now. We will likely never see 450 PPM, let alone 500 PPM.
The amount of CO2 in the atmosphere could easily double over the next hundred years!
Everyone assumes the increase in atmospheric CO2 is because of anthropogenic emissions. That doesn’t have to be completely true.
The amount of CO2 in the intermediate and deep oceans dwarfs that everywhere else. 1.5% of the oceans’ CO2 would double the amount of CO2 in the atmosphere. Referring to the CO2 solubility graph on this page we find that a temperature rise of 0.6 deg. would do it. (Yes, I do realize the amount of heat it would take to raise the temperature that much.)
Anthropogenic emissions aren’t raising the atmospheric CO2, it’s Trenberth’s heat hiding in the deep oceans that’s causing all the extra CO2. 🙂
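For what it’s worth, the “1.5%” figure in the comment above is roughly consistent with the AR5 Figure 6.1 stocks quoted in the post (~589 PgC in the pre-industrial atmosphere versus ~37,100 PgC in the intermediate and deep ocean). A quick back-of-envelope check:

```python
# Back-of-envelope check of the "1.5% of the oceans' CO2 would double
# the atmosphere" claim, using AR5 Figure 6.1 reservoir stocks (PgC).
atmosphere = 589.0     # pre-industrial atmospheric carbon, PgC
deep_ocean = 37100.0   # intermediate + deep ocean carbon, PgC

ratio = 100 * atmosphere / deep_ocean
print(round(ratio, 1))  # ~1.6% of deep-ocean carbon equals the whole atmosphere
```

So releasing about 1.6% of the deep-ocean inventory would indeed match the entire pre-industrial atmospheric stock, whatever one thinks of the mechanism proposed above.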
commieBob,
A rise of 0.6°C of the ocean surface (or the whole oceans, doesn’t matter) will increase the CO2 level in the atmosphere by ~10 ppmv and then it stops, no matter whether there is 100 or 10,000 times more CO2 in the deep oceans than in the atmosphere. The solubility of CO2 in seawater is a matter of pressure and ratio, not of quantities, as long as sufficient CO2 is available.
Take bottles of 0.5, 1.0 and 1.5 liter Coke from the same batch and shake all three. You will measure the same pressure under the cap at the same temperature, despite the three times higher quantity of CO2 in the largest bottle (allowing for a small difference due to the relatively larger loss out of the liquid in the smaller bottle)…
Currently the partial pressure of CO2 in the atmosphere is higher than that of the oceans: the net CO2 flux is from the atmosphere into the oceans, not the reverse…
… and if you change the temperature the pressure will change, which is my point.
A close reading of my post should reveal that I was being somewhat Rabelaisian.
On the other hand, you give an increase of about 10 ppmv. As a gas approaches saturation, Henry’s law ceases to apply. How do you justify your figure?
Does the volume of water matter? Yes. If I boil a beaker, the atmospheric concentration of CO2 won’t be measurably affected (even if the beaker contained 100% dry ice). You have to have enough CO2 to make a difference, which you acknowledge with: “as long as sufficiently available”.
Here’s a good pdf.
The atmosphere controls the oceans’ gas contents for all gases except radon, CO2 and H2O.
In other words, the atmosphere does not control the oceans’ gas content for CO2.
Ferdinand, I think there are other possible factors that could be contributing, such as biological activity, which is affected by temperature. For example, Baker et al 2013 estimates an increase in atmospheric CO2 of 100 ppmv from photosynthetic activity in certain ocean regions during different times of the year, while pointing out that simple temperature-dependent solubility calculations cannot explain the fluctuations in atmospheric CO2 in those ocean regions. Of course no-one has any comprehensive global data on biological activity in the oceans over the last 100 years, and so for all we know a significant portion of the increase in CO2 could be biologically-driven, which in turn could be temperature-driven also.
commieBob,
The link to the .pdf doesn’t work, but a few remarks:
– For a small change in temperature (like we have seen in the past few hundred years), there is a quasi-linear change in the pCO2 of the ocean surface waters of about 16 ppmv/°C. That is all. That includes Henry’s law for the solubility of CO2 as a gas in the ocean waters, which covers only 1% of all carbon species (90% is bicarbonate and 9% carbonate). That also includes all equilibrium reactions between free CO2, bicarbonate and carbonate / hydrogen ions following the increase in temperature. See:
http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/text/LMG06_8_data_report.doc where the formula used to compensate for temperature at measurement time vs. the in situ temperature is:
(pCO2)sw at Tin-situ = (pCO2)sw at Teq × EXP[0.0423 × (Tin-situ − Teq)]
– There is no limit to the CO2 that the atmosphere can receive; there is a limit in the ocean surface, at about 10% of the change in the atmosphere. That is the Revelle factor. For a reverse change in the ocean surface, the amounts in the surface are too small (~1000 GtC) to give the full 10x change in the atmosphere (~800 GtC), and the new equilibrium is reached before a 10x amplification of the change in the atmosphere is reached.
– The amounts of CO2/derivatives in the deep oceans play little role on short time, as the exchanges with the deep oceans are limited.
– As long as the pCO2, the partial pressure of CO2, in the atmosphere is higher than in the ocean surface, the net CO2 flux is from atmosphere into the oceans, not reverse. No matter the quantities involved.
The area weighted average pCO2 in the atmosphere is 7 μatm (~ppmv) higher than in the ocean surface. See:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml and following pages and the graphs at:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtm and next page
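The temperature-correction formula quoted above can be sketched in a few lines. The 380 μatm baseline and the temperatures below are illustrative values of my own choosing, not from the linked data report; the point is that the 0.0423/°C factor implies roughly the 16 μatm/°C figure Ferdinand cites.

```python
import math

# Takahashi-style temperature correction: pCO2 of seawater at the
# in-situ temperature, from pCO2 measured at the equilibration
# temperature. Input values below are illustrative assumptions.
def pco2_at_temp(pco2_eq, t_eq, t_insitu):
    return pco2_eq * math.exp(0.0423 * (t_insitu - t_eq))

# The exponential factor linearizes to ~0.0423 * pCO2 per degC,
# i.e. ~16 uatm/degC near a 380 uatm surface-water pCO2:
print(round(0.0423 * 380.0, 1))                  # ~16.1 uatm per degC
print(round(pco2_at_temp(380.0, 20.0, 21.0), 1)) # 1 degC warming from 380 uatm
```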
Richard,
Indeed the biological factor is quite variable and heavily influenced by temperature.
Fortunately that can be monitored as a global change, via the oxygen and δ13C balances. If there is a physical change in CO2 caused by an ocean temperature change, δ13C goes slightly up and CO2 goes slightly up, or reverse with temperature. If the change is caused by bio-life, CO2 goes down and δ13C goes firmly up, or reverse with temperature.
Thus if δ13C and O2 changes parallel each other, then degassing / absorbing oceans are dominant. If δ13C and O2 changes are opposite, then bio-life is dominant (both for land and ocean plants).
The past 25 years of O2/δ13C monitoring show that bio-life (land + sea) is a small but growing sink for CO2 with higher temperatures and increased CO2 levels in the atmosphere. Thus bio-life is not the cause of the CO2 increase in the atmosphere, nor of the firm δ13C decline since ~1850, which parallels human emissions.
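The sign logic in the fingerprint argument above can be reduced to a toy rule (my own simplification, sign conventions assumed): when O2 and δ13C changes run parallel, ocean degassing/uptake dominates; when they run opposite, bio-life dominates.

```python
# Toy, sign-only encoding of the O2 / d13C attribution rule described
# above. Inputs are signed changes (any units); only the signs matter.
def dominant_process(d_o2, d_13c):
    if d_o2 * d_13c > 0:          # changes parallel each other
        return "ocean exchange"
    return "bio-life"             # changes oppose each other

print(dominant_process(+1.0, +1.0))  # ocean exchange
print(dominant_process(-1.0, +1.0))  # bio-life
```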
“If there is a physical change in CO2 caused by an ocean temperature change, δ13C goes slightly up and CO2 goes slightly up or reverse with temperature. If the change is caused by bio-life, CO2 goes down and δ13C goes firmly up, or reverse with temperature”
Sorry Ferdinand, but I cannot make sense of what you are trying to communicate to me here. It all seems rather muddled. Currently it is understood that δ13C is decreasing in the atmosphere and this is squarely blamed on human emissions. However a decrease in δ13C is a logical consequence of increased biological activity, such as photosynthetic activity, as mentioned above. According to Williams et al 2005, based on paleo-climate data: “Delta 13C values were high until 17.79 ka after which there was an abrupt decrease to 17.19 ka followed by a steady decline to a minimum at 10.97 ka. Then followed a general increase, suggesting a drying trend, to 3.23 ka followed by a further general decline. The abrupt decrease in δ-values after 17.79 ka probably corresponds to an increase in atmospheric CO2 concentration, biological activity and wetness at the end of the Last Glaciation”. Hence the current decrease in δ13C could be, in part, due to changes in biological activity.
handbook,
One needs to take into account both the magnitude and direction of the changes in question.
If the oceans are warming, CO2 is released at about 16 ppmv/°C and at the same time the δ13C level (a measure of the 13C/12C ratio in CO2) slightly increases in the atmosphere, because the δ13C of the ocean surface is higher than that of the atmosphere, even including the δ13C shift at the ocean-air boundary.
At the same time, higher temperatures give more plant growth (less land ice and longer growth seasons). More plant growth means more CO2 uptake (and O2 release), preferentially of 12CO2, which increases the ratio of the residual 13CO2 in the atmosphere. Thus the δ13C level increases with more plant growth, while CO2 levels decrease.
Over the past 800,000 years, the oceans were dominant for CO2 levels, as can be seen in the parallel CO2 and temperature changes, where CO2 levels follow temperature levels with some lag. As the effect of plant growth on δ13C levels is much larger than that of the oceans, the growing vegetation gives a slight increase with temperature of a few tenths of a per mil δ13C from the depth of a glacial period to an interglacial, which we are in now.
During the whole current interglacial, the Holocene, there was some variability of δ13C of not more than +/- 0.2 per mil, mainly as a result of the effect of temperature on vegetation and oceans (MWP-LIA and back).
Since ~1850, humans have emitted lots of CO2 from fossil fuels, with very low δ13C (around -24 per mil), while vegetation was slowly growing, thus taking more 12CO2 out of the air and thus not the cause of the firm δ13C drop in the atmosphere. Neither are the oceans, as these would increase the δ13C with more CO2 release.
The resulting drop of over 1.4 per mil δ13C is unprecedented over the past 800,000 years in ice cores, coralline sponges or any other δ13C/CO2 proxy:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/sponges.jpg
The oceans are not a bottle of Coke. They are vast and flowing, and your conceptualization is facile.
Bart,
As repeatedly shown to you, it doesn’t matter if you take a closed sample of seawater and wait for it to get in equilibrium with the atmosphere above it or look at the enormous amounts of CO2 flowing in and out between atmosphere and ocean surface at steady state: For the same area weighted ocean surface temperature, the same CO2 level in the atmosphere will be measured in the single sample as over the global oceans.
There is no way that you can have a different result, as that simply will change the input and output fluxes to reach the steady state again…
Nonsense. You mean, as repeatedly claimed by you, with no empirical evidence nor, indeed, any scientific rigor at all.
It is absurd. Of course the thermohaline circulation is temperature dependent. It’s built right into the name.
If conditions remain the same, a steady state would eventually be reached, but only on a time scale commensurate with overturning of hundreds of years. In the meantime, the evolution of associated processes will tend to have integral relationships. And, that is what the empirical evidence shows – atmospheric CO2 evolves as the integral of temperature anomaly. There is no doubt about it.
Bart,
You use any (im)possible scapegoat to defend your theory, no matter how ridiculous it is, no matter that you have zero evidence for what you say and ignore all evidence of the opposite.
Take an increased temperature of the ocean surface: if the increase is 1°C over all the ocean surface, everywhere, including upwelling and sink places, that will increase the local pCO2 of the oceans everywhere by ~16 μatm.
At the upwelling sites, that gives a (~5%) increase in CO2 emissions as the influx is in direct ratio to the pCO2 difference between ocean surface and atmosphere.
At the sink sites, that gives a (~5%) decrease in CO2 uptake, as the outflux is in direct ratio to the pCO2 difference between atmosphere and ocean surface.
Both give an increase of CO2 in the atmosphere.
The increasing pCO2 in the atmosphere decreases the CO2 emissions at the upwelling sites and increases the uptake at the sink sites, because the pCO2 differences change in the opposite direction of the temperature increase.
At ~16 ppmv extra in the atmosphere, the original in-out pCO2 differences and thus fluxes are restored to what they were before the temperature increase, no matter if that was in steady state or not.
That means that 1°C warming all over the oceans has exactly the same effect on the CO2 levels in the atmosphere above it as 1°C warming of a sample of seawater in a bottle.
That is a matter of the most simple process dynamics, maybe the problem is that it is too simple for you…
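The process-dynamics argument in this exchange can be sketched as a toy two-box simulation. The rate constants and starting pCO2 values below are my own illustrative assumptions, not Ferdinand's numbers; the only input taken from the thread is the ~16 μatm/°C warming response. Under those assumptions, a uniform 1°C warming shifts the atmospheric steady state up by exactly the same ~16 ppmv, regardless of the sizes of the fluxes.

```python
# Toy sketch: uniform ocean warming raises local ocean pCO2 everywhere
# by ~16 uatm/degC; atmospheric CO2 then rises until the original
# in/out pCO2 differences (and hence fluxes) are restored.
def simulate(years=200, dT=1.0):
    atm = 400.0                 # atmospheric pCO2, ppmv (assumed)
    up, down = 410.0, 390.0     # upwelling / sink-site ocean pCO2 (assumed)
    k_up = k_down = 0.05        # ppmv per year per uatm difference (assumed)
    up += 16.0 * dT             # warming response of ocean pCO2
    down += 16.0 * dT
    for _ in range(years):
        flux_in = k_up * (up - atm)       # outgassing at upwelling sites
        flux_out = k_down * (atm - down)  # uptake at sink sites
        atm += flux_in - flux_out
    return atm

print(round(simulate(), 1))  # converges to ~416.0 ppmv, i.e. +16 ppmv
```

Changing `k_up` and `k_down` changes only how fast the new steady state is reached, not where it sits, which is the point being argued: the quantities and flux magnitudes drop out, leaving only the pressure/temperature relationship.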
commieBob, Trenberth’s ocean warming raises the question: if the oceans are warming faster than ever (even without a rise in sea surface temps), then shouldn’t the oceans be outgassing faster than ever?
The only problem I see with the model is that it doesn’t include any newly discovered reserves along the way. We would have to estimate the rate of growth of reserves, and then the economic activity that would burn those reserves. That might push the curve up to the 800 ppm using the Bern model.
So the effective sameness of temperatures at pressures, accounting for solar distances, only further proves that gas species is irrelevant, as per the gas laws. Radiative forcing exerts no effect. Zero. Nil. Nada.
Of all those working on this, my money remains on Salby. There is a real body of work.
Brett,
Don’t put too much money on Dr. Salby: he made several severe errors in his speeches, which make him not the best source of what happens with CO2 in the atmosphere…
Or, you’ve made several errors. Given the ones I’ve seen, such as falling for the ridiculous pseudo-mass balance argument and imagining the oceans as a great big bottle of Coke, I know where I’d invest my wager.
Bart,
Did you already find your oceanic source of piling up CO2, which propagates back from sink to source?
Or have you found any proof that the natural carbon cycle increased fourfold, to dwarf the fourfold increase of human emissions and the resulting increase in the atmosphere and thus in net sink rate?
For some people here, the Coke bottle is a good example that quantities are less important than pressure and temperature, no matter whether that is static as in the example or dynamic all over the oceans…
Willis, did you consider how much the fuel reserves themselves increase? Take a look at historical estimates. What if in 2050 the announced available fuel reserves are higher than they are now, despite our having used oil and coal for 34 more years?
Fossil fuel reserves are actually very much a moving target, because what counts as “recoverable” depends on the cost to recover a given deposit and the price that the resource will bring. As fossil prices rise, the amount of recoverable reserves goes up, and as prices decline, reserves go down (as deposits formerly recoverable are priced out of the market).
Further, the amount of exploration is also economically limited. If reserves are low, companies spend more on exploration. But it simply makes no sense to spend huge amounts on exploration when the current reserve level is adequate several decades into the future. Like all things, there comes a point at which it makes no economic sense.
Thus the amount of reserves is more of an economic issue than a geological one. As current reserves are depleted, exploration will ramp up and reserves will increase. If you look at historical reserve levels for oil and gas, we see this clearly: the world had 30 years of oil reserves in 1980, and 30 years later, in 2010, we reached 50 years of reserves. That number will no doubt decline this year because of the oil price drop, but we’re still in no danger of running out any time soon.
The case for coal is even less limited, because coal reserves are already adequate for centuries rather than decades at current use rates. Thus nobody explores for coal any more. But if we really did get anywhere near burning our existing 900 Gt of coal, you can bet exploration would begin and reserves would rise.
Great post and comments. Thank you and good night.
I think the Bern model is correct, but needs to be modified to decay faster, since some of its slowdown comes from climate sensitivity to CO2 being overestimated.
Dear Willis E.,
I disagree with the usage of a time constant tau of 33 years, as mentioned around Figure 2.
In wattsupwiththat.com/2015/04/19/the-secret-of-half-life, you showed a determination of the time constant tau as 59 years (IIRC, if I got this right), which multiplied by 0.693 (the natural log of 2) means a half-life of 41 years.
Lately, Ari Halperin has posted on WUWT arguing in favor of single exponential decay as opposed to Bern, along with a shorter half-life (which I consider pushy-short) of 30-35 years. Divide that by 0.693 and the time constant tau is 43-50.5 years.
Also, I noticed a graph in a recent post by Ari Halperin showing CO2 fitting closely with what Ari Halperin models, but with a slight difference in favor of an accelerating characteristic of CO2 growth.
I think that with consideration of this, and ingenuity in finding and extracting fossil fuels, we are in for about 700 PPMV CO2 (not quite a doubling from slightly over 400 PPMV, but about 80% of a doubling on a log scale, and about 1.3-1.32 doublings on a log scale from the 280-285 PPMV CO2 we would have if not for human impact on the CO2 level).
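The conversions in this comment are simple enough to verify directly. Taking ~405 ppmv as "slightly over 400" and 282.5 ppmv as the midpoint of the 280-285 range (both assumptions for illustration):

```python
import math

# Half-life <-> time-constant conversion for a single exponential decay,
# plus the log-scale doubling arithmetic from the comment above.
def half_life_from_tau(tau):
    return tau * math.log(2)       # t_half = tau * ln 2 ~= tau * 0.693

def tau_from_half_life(t_half):
    return t_half / math.log(2)

print(round(half_life_from_tau(59), 1))  # tau = 59 y -> ~40.9 y half-life
print(round(tau_from_half_life(30), 1))  # 30 y half-life -> ~43.3 y tau
print(round(tau_from_half_life(35), 1))  # 35 y half-life -> ~50.5 y tau

print(round(math.log2(700 / 405.0), 2))    # ~0.79 of a doubling from ~405 ppmv
print(round(math.log2(700 / 282.5), 2))    # ~1.31 doublings from pre-industrial
```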
Of note are the large amounts of CO2 in the subsurface around mid-ocean ridges and other underwater volcanoes, which seem largely missing from Figure 6.1. The 1750 PgC quoted in ocean subsurface sediments likely under-estimates this by a large margin, although how much of this varies with climate change or other factors may not be much.
Something has always bothered me about CO2. Back in the 1970s I was taught to expect that most CO2 sequestration occurred in the oceans by plankton, and that it was rate-limited only by the amount of CO2 available. Plants on land are respirators, reversing photosynthesis at night; in the big scheme of things they can’t account for much. Then two things got into my head: one was the two 1950s-era papers proposing that dissolution in the ocean is rate-limited, and the other that the ocean is nutrient-limited. So I let it go, there being no other apparent explanation for the build-up of CO2 but Man’s output exceeding the rate limit. But there is always the question of the relative amount of natural variation vs. man-made contribution.

My question: is there a good period of record on a global upwelling index? It seems to me that upwelling should vary with ocean current and atmospheric circulation oscillations, and that a significant increase in upwelling would probably result in big releases of CO2 to the atmosphere as the water warms. Do we assume, like everything else the AGW crowd does, that global upwelling and ocean-atmosphere CO2 exchange is a constant? Or do we assume the Earth is a dynamic system that is out of equilibrium as we continue to move away from the Pleistocene and the climate warms? Is CO2 sequestered in the ocean from a colder time only now finding its way to the atmosphere due to poor mixing? Beats the heck out of me, and I think I had an excellent education in the earth sciences in the 1970s.