Double The Atmospheric CO2? Fuggeddaboutit!

Guest Post by Willis Eschenbach

On another thread here at WUWT we were discussing the Bern carbon dioxide model used by the IPCC. The Bern Model calculates how fast a pulse of emitted CO2 decays back towards the pre-pulse state.  See below for Bern model details. We were comparing the Bern model with a simple single-time-constant exponential model. Someone linked to a graphic from the IPCC AR5 report, Working Group 1, Chapter 6:


[Image: carbon cycle ipcc ar5 fig 6.1] ORIGINAL CAPTION (click image to enlarge): Figure 6.1 | Simplified schematic of the global carbon cycle. Numbers represent reservoir mass, also called ‘carbon stocks’, in PgC (1 PgC = 10^15 gC) and annual carbon exchange fluxes (in PgC yr–1). Black numbers and arrows indicate reservoir mass and exchange fluxes estimated for the time prior to the Industrial Era, about 1750 (see Section for references). Fossil fuel reserves are from GEA (2006) and are consistent with numbers used by IPCC WGIII for future scenarios. The sediment storage is a sum of 150 PgC of the organic carbon in the mixed layer (Emerson and Hedges, 1988) and 1600 PgC of the deep-sea CaCO3 sediments available to neutralize fossil fuel CO2 (Archer et al., 1998).

Red arrows and numbers indicate annual ‘anthropogenic’ fluxes averaged over the 2000–2009 time period. These fluxes are a perturbation of the carbon cycle during Industrial Era post 1750. These fluxes (red arrows) are: Fossil fuel and cement emissions of CO2 (Section 6.3.1), Net land use change (Section 6.3.2), and the Average atmospheric increase of CO2 in the atmosphere, also called ‘CO2 growth rate’ (Section 6.3). The uptake of anthropogenic CO2 by the ocean and by terrestrial ecosystems, often called ‘carbon sinks’ are the red arrows part of Net land flux and Net ocean flux. Red numbers in the reservoirs denote cumulative changes of anthropogenic carbon over the Industrial Period 1750–2011 (column 2 in Table 6.1). By convention, a positive cumulative change means that a reservoir has gained carbon since 1750. …

Now, there are many things of interest in this graphic, but what particularly interested me in this were their estimates of total fossil fuel reserves. Including gas, oil and coal, they estimate a total fossil fuel reserve of about 640 to 1580 gigatonnes of carbon (GtC). I decided to apply those numbers to both the Bern Model and the simple exponential decay model.

Now, the Bern model and the simple exponential model are both exponential decay models. The difference is that the simple model uses a single half-life for the emitted CO2. The Bern model, on the other hand, applies three different half-lives to three different fractions of the emitted CO2; in addition, 15% of the emitted CO2 is said to decay only over thousands of years.
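For readers who want to experiment, here is a minimal sketch of the two decay curves. The coefficients are one published parameterisation of the Bern impulse-response function (the version tabulated in IPCC AR4 WG1, Table 2.14); they differ slightly from the 15% long-lived fraction quoted above, so treat them as illustrative rather than definitive.

```python
import math

# One published parameterisation of the Bern impulse-response function
# (IPCC AR4 WG1, Table 2.14 footnote); a tau of None marks the fraction
# that effectively never decays on these timescales.
BERN_TERMS = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def bern_fraction_remaining(t):
    """Fraction of a CO2 pulse still airborne after t years (Bern model)."""
    total = 0.0
    for a, tau in BERN_TERMS:
        total += a if tau is None else a * math.exp(-t / tau)
    return total

def single_exp_fraction_remaining(t, tau=33.0):
    """Same quantity for a single-time-constant decay (tau = 33 yr, as in Fig. 2)."""
    return math.exp(-t / tau)

for t in (0, 50, 100, 500):
    print(t, round(bern_fraction_remaining(t), 3),
          round(single_exp_fraction_remaining(t), 3))
```

The key difference shows up at long horizons: with these coefficients the Bern curve never drops below about 22% of the pulse, while the single exponential decays toward zero.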

My interest was in finding out what would happen, according to the two CO2 models, if we burned all of the fossil fuels by 2100. For the smaller case, burning 640 GtC by the year 2100 implies a burn rate below current emissions, that is to say about 7.5 GtC per year for the next eighty-five years.

For the larger case, 1,580 GtC implies a burn rate that increases every year by 1.1%. If that happens, then by the end of this century we’d have burned 1,580 gigatonnes of carbon.
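The burn-rate arithmetic can be checked in a few lines. The starting rate of 10 GtC/yr for the growing case is my assumption (roughly current fossil emissions); the exact cumulative total is sensitive to that assumed starting rate, and with 10 GtC/yr it lands somewhat below, but in the same ballpark as, the 1,580 GtC figure.

```python
YEARS = 85  # 2015 to 2100

# Constant-rate case: about 7.5 GtC/yr, as in the post
flat_total = 7.5 * YEARS
print(round(flat_total))        # ~640 GtC

# Growing case: start near current emissions (assumed 10 GtC/yr), +1.1 %/yr
rate, growing_total = 10.0, 0.0
for _ in range(YEARS):
    growing_total += rate
    rate *= 1.011
print(round(growing_total))     # ~1,400 GtC with these assumed inputs
```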

So, given the assumptions of the two models, how would this play out in terms of the atmospheric concentration of CO2? Figure 2 shows those results:

[Graph: If We Burn It All By 2100] Figure 2. CO2 projections using the Bern Model (red and blue) and a single exponential decay model (purple and light green). The single exponential decay model uses a time constant tau of 33 years. Note that this graph has been replaced; the original graph showed incorrect values.

Now, there are several things of interest here. First, you can see that unfortunately, we still don’t have enough information to distinguish whether the Bern Model or the single exponential decay model is more accurate.

Next, the two upper values seem unlikely, in that they assume a continuing exponential growth over eighty-five years. This kind of long-term exponential growth is rare in real life.

Finally, here’s the reason I wrote this post. This year, the atmospheric CO2 level is right around four hundred ppmv. So to double, it would have to go to eight hundred ppmv … and even assuming we could maintain exponential growth for the next eight decades and we burned every drop of the 1,580-gigatonne high-end estimate of the fossil reserves, CO2 levels would still not be double those of today.

And in fact, even a fifty percent increase in CO2 levels by 2100 seems unlikely. That would be six hundred ppmv … possible, but doubtful given the graph above.

Short version? According to the IPCC, there are not enough fossil fuel reserves (oil, gas, and coal) on the planet to double the atmospheric CO2 concentration from its current value.

Best regards to all,


My Usual Request: Misperceptions are the bane of the intarwebs. If you disagree with me or anyone, please quote the exact words you disagree with. I can defend my own words. I cannot defend someone else’s interpretation of some unidentified words of mine.

My Other Request: If you believe that e.g. I’m using the wrong method on the wrong dataset, please educate me and others by demonstrating the proper use of the right method on the right dataset. Simply claiming I’m wrong doesn’t advance the discussion.

Models: The Bern Model is described here and the calculation method used in the model is detailed here.

201 thoughts on “Double The Atmospheric CO2? Fuggeddaboutit!”

    • Yep well done Willis. Been making the same point for years. But our alarmist chums are too wedded to the alarm in their cause to listen to any voice of reason.

    • Well what caught my eye right away was the very first top left (how I read books) right underneath the atmospheric lid.

      So the NET ocean flux, is not actually net at all, it is 2.3+/-0.7 gozinta, and also 0.7 +/-0.0 gozouta at the same time. So what part of N-E-T is it that they are having difficulty understanding ??

      By my simple minded calculations, I get a … net … of 1.6 (+/- 0.7 ?) gozinta.

If I have a gozouta process with a time constant of 1.0…… second, and a competing gozinta process with a time constant of 1.000000000001 seconds, and both have the same initial rate, that does NOT give me a residence time of circa 1E+12 seconds.

      The residence time constant is still 1.0…….. second.

      Taking one CO2 molecule out with your right hand, and putting a different one in with your left hand, is not an implementation of “The Bern Model”

      “””””…… The Bern Model calculates how fast a pulse of emitted CO2 decays back towards the pre-pulse state. …..”””””

      “””””….. a pulse of emitted CO2 …..”””””

      Where the heck in that description of the Bern model does it say: “We are going to feed you a pulse of CO2 (Impulse) for you to get rid of, but behind your back we are going to continue to feed CO2 in to you to stop you from getting rid of any CO2.”

      That is “The Sorcerer’s Apprentice Model” of CO2 (in this case) elimination.

      Based on the postulations in this “Bern” model (shouldn’t that be Burn model), the residence time for CO2 in the atmosphere is infinite or as big as the age of the earth, because to the best of our knowledge, there has never been a time, even for one second, where the atmosphere did not contain any CO2 molecules at all. So CO2 is a permanent resident of the earth’s atmosphere, as is H2O.

      But this is all an aside from Willis’s argument here.

      Sorry for the interruption Willis.


      • Well I see that black is bad (red flag) CO2 and red is good (green) CO2.

        Wait, red CO2 can’t be good, because that makes the ocean more acid, which etches away the coral.

        I guess NO CO2 is really good at all.


        But it still is no fair to give a puff of CO2 and then turn on a continuous supply and challenge me to remove the puff.

    • Two questions, (1) arithmetic: Willis says “…. they estimate a total fossil fuel reserve of nine hundred to two thousand gigatonnes of carbon…” If you add the fossil carbon reserve numbers in the IPCC figure, you get 1002-1940 GtC (=PgC). Where does Willis get his 900 GtC? (2) IPCC figure legend says black numbers are reserves in 1750 & red numbers are extractions from 1750-2011. If you subtract the extractions from the 1750 reserves, you get current reserves of 637-1575 GtC. Aren’t these the numbers to use as starting points for model calculations?
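The commenter’s subtraction can be spelled out in a few lines. The 365 GtC of cumulative 1750–2011 extraction is implied by the quoted differences (1002 − 637 and 1940 − 1575); the individual reservoir figures come from the IPCC AR5 Figure 6.1 graphic.

```python
# Low and high totals of the 1750 fossil carbon reserves from the figure
reserves_1750 = (1002, 1940)   # GtC
extracted = 365                # GtC, cumulative fossil extraction 1750-2011
current_reserves = tuple(r - extracted for r in reserves_1750)
print(current_reserves)        # (637, 1575)
```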


      • Dang … good catches both. I’ll revise the head post. Actually, makes my argument stronger.


  1. Nice… Even if we burn all the hydrocarbons on the planet there is no catastrophic emergency of any sort…

    1. Why then is this a hot button political issue?

    2. Why are we worried about it? We obviously can’t make enough CO2 to create a runaway atmospheric temperature problem.

    This then is just a ruse for political gain… Where have I heard this before? Ottmar Edenhofer’s statement now comes to mind…

  2. “Channeling standard alarmist”:

    1. But we’re already halfway there! Current levels are 50% above preindustrial.

    2. As the Ocean warms it will vomit up more CO2 than we are able to emit!

    3. As the Arctic warms and the permafrost collapses, the methane explosion will dwarf the effect from CO2 only.

    Can’t think of any more, but there are probably some.

    • Standard alarmists rarely remember that depleting resources command rising prices, which make them economically unviable for purposes that produce less added value.

      Willis did not include methane clathrates as a source of fossil energy. Should he have?

      • Higher prices drive further exploration. With prices high enough, just about any source becomes viable. And simultaneously any alternative source that depends on its poor EROI being heavily subsidised becomes even further out of reach.
        If anyone is serious about alternative energy, he must make it cheap.

      • Higher prices seen in the period 2006-2013 weren’t sufficient to induce enough exploration to allow us to replace what we produce via new discoveries. The system simply doesn’t work as cornucopians envision it. They miss the fact that as prices go up we do increase activity, which in turn drives up internal costs, and this reduces marginal project viability.

        There are also timing issues: apart from the light tight oil in the USA, we have a rate limit (the resources may be there but they take time to come on line). We don’t put fossil fuels on the table as if we were McDonald’s. Projects take time to move forward. Some are just too complex, for example Kashagan.

        Since we can’t ramp up production as fast as “required”, we get these price spikes which spur alternatives and efficiency. I’m afraid the next price spike will hit in a couple of years, and this time we may not react nearly enough; this in turn will drive prices to the $120 per barrel level, which means bad news for third world countries.

        I tried doing my own estimate a couple of years ago, and concluded the peak would be about 630 ppm. Given the number of poorly constrained variables, I have never brought myself to refine the model. But many of you should remember I’ve been making a lot of noise about the IPCC RCP8.5, and that it wasn’t feasible to emit as much as it assumed.

        This issue is important because 95 % of the adverse impact papers and propaganda issued by the panic party use temperatures derived from RCP8.5.

        Nowadays they are getting ready to deliver the CMIP6 climate model inputs, and I’m afraid they’ll continue to jam way too much CO2 and methane into the system, which will allow them to continue feeding their climate panic propaganda with very adverse impacts.

    • Long time lurker here.
      I was hanging out here when you were known as CTM, and most people did not know what CTM stood for.
      I never commented back in those days. Good to see you are still around.

    • Sorry Chasmod, but long before your methane explosion raises the temperature 1200 km away from the clathrate burp (where Hansen will be waiting to notice the warming), me and my fellow entrepreneurs will be selling the heck out of that natural gas burp, and extracting lots of clean energy from it.

      We solve the methane problem by burning the stuff.


      PS: but good to hear from you Chasmod.

  3. 4. Don’t forget the Methane Hydrate at the bottom of the oceans. As the oceans warm up past 70C, the methane hydrate will start dissociating, the methane will be released, the ‘hydrate’ will become water, and as it takes less space as ‘hydrate’ than as water, the volume of the oceans will rapidly expand and create at least 2 mm of sea level rise.

    5. In order to counteract the 2 mm sea level rise, all states will have to protect their borders and coasts by installing 5 m high concrete sea walls, with massive locks to allow ships to get in and out of ports, together with windmill-driven pumps to lift river water over the sea walls so the land is not flooded. The concrete needed will be taken from cement produced by heating limestone or chalk until the Calcium Carbonate has been reduced to Calcium Oxide, and the Carbon Dioxide has been released to atmosphere. We have easily enough CO2 locked away in the limestone and chalk beds in Northern (for limestone) and Southern (for chalk) England to get a 97% increase in CO2 levels from the current 400 ppm level.

    • Dudley,

      I understand that if Portland Cement could be made using a nuclear energy source of heat it would be ‘carbon neutral’ – the carbon dioxide given off during manufacture is re-absorbed from the atmosphere as the concrete cures.

      • No, that is not how portland cement sets. CO2 is driven off when the cement is produced, but it is a reaction with water (mixed into the concrete) which causes the cement to harden. Cured concrete does very slowly react with atmospheric CO2, but the process is extremely slow; hundreds of years to millions of years, depending on thickness. Production of portland cement would emit less CO2 if the heat came from nuclear reactors, but it would always emit CO2.

    • Dudley, the maximum SST in open ocean is 31C because of evap and convective cooling. No danger of 70C. The only place we get these temperatures are in the equatorial band.

      • Perhaps I should have added “/sarc/” with ref to the 70C. That would indeed be ocean warming!

    • Some of this is just apocalyptic nonsense. Sea level rise has been slowing for five years and the rate has been on a 10,000-year downward trend, from a centimeter or two a year to about 1.7 mm averaged over the last 30 years. As for the methane bubbling up and killing us all: first, methane has a very short half-life in the atmosphere; second, there are bacteria that will gobble it all up in the ocean before it even comes to the surface. Even if the oceans got to 70 degrees, it isn’t as if nothing happens at 69.5 degrees and at 70 it all bubbles up at once; it would be a gradual progression, slow enough for bacterial consumption and atmospheric conversion to CO2.

    • Dudley:
      A 2 mm sea level rise requires 5 m sea walls? BullPoop!

      A 2mm rise is indistinguishable from tidal, current or wave movements.

      What are you expecting to warm up the oceans above 40°C?

      That small rise in temperature represents an incredible amount of heat. You couldn’t reach that temperature for thousands of years, even without little ice ages impeding global warmth.

      And just how much of England’s chalk and limestone formations have been turned into lime for England’s current state of urban sprawl? I suppose England has plans for all of the lime necessary to reach a 97% CO2 increase?

      Armchair arguments without checking reality.

      • Nope! Just using some of the ridiculous figures that the CAGW alarmists like to use. We are currently getting a sea level rise of around 2 mm per year, and the alarmists are suggesting that sea level rise could be 7 or 10 m. So according to them a 5 m sea wall would be not quite enough to “preserve civilization as we know it”.

        It would be nice for my swimming pool to warm up to 30C to make it comfortably swimmable. I prefer swimming in the Suez Canal or in Trincomalee Harbour where the water is decently warm (both of which I have done in the dim, distant past). Haven’t swum in the North Sea or the English Channel since I was a small boy, and would not dream of it now. Brrrrr!

  4. Thanks for this. A topic that has come up a few times is that the IPCC’s top-end scenario, 8.5 W/m2 by the end of the century, is utterly unrealistic. Put simply, there isn’t enough carbon to get us there (though I have to say that in other places I’ve seen estimates of up to 5,000 GtC… would love to hear from an expert).

    So far in the XXI century emissions/forcing are more or less tracking the 6 W/m2 scenario, which is very similar to the ‘business as usual’ SRES A1B projection. However, this has happened during an unprecedented boom in coal use, as well as a slowdown in the CO2 efficiency of GDP (put otherwise, CO2 intensity of GDP was falling *faster* before 2000 than in 2000–2015). Already, in 2014–2015 we’ve seen how emissions have stalled and the decline in CO2 intensity of GDP has accelerated – not because of climate policies, but simply because after a period of break-neck growth you’re going to have a return to the mean. So even the 6 W/m2 scenario is probably too high.

    Guess what: for the 6 W/m2 world, the IPCC expects 47 cm of sea level rise by the end of the century – not from now, but from a 1986–2005 baseline. Horrors.

    In your top-end scenario, 2,000 GtC + Bern model, concentrations reach just over 700 ppm. This is equivalent to a doubling (280 to 560 ppm) plus about 1/3 of a doubling (560 to just north of 700 ppm). In other words this is equivalent to 4.8 W/m2. Now, there’s also about 1.5 W/m2 of forcing from non-CO2 GHGs, and presumably this will increase as well, but you can see how we’re unlikely to match the 6 W/m2 scenario this century.
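The doubling arithmetic in the comment above can be checked with the standard simplified CO2 forcing expression, F = 5.35·ln(C/C0) W/m2, which gives about 3.7 W/m2 per doubling. It lands at roughly 4.9 W/m2 for 700 ppm, consistent with the quoted 4.8 W/m2 within rounding.

```python
import math

# Number of CO2 doublings from a 280 ppm preindustrial baseline to 700 ppm
doublings = math.log(700 / 280, 2)

# Simplified CO2 forcing expression: F = 5.35 * ln(C/C0) W/m2
forcing = 5.35 * math.log(700 / 280)

print(round(doublings, 2), round(forcing, 1))   # ~1.32 doublings, ~4.9 W/m2
```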

  5. “For the larger case, two thousand GtC implies a burn rate that increases every year by 1.4%. If that happens, then by the end of this century we’d have burned two thousand gigatonnes of carbon.”

    “According to the IPCC, there is not enough fossil carbon on the planet to double the atmospheric CO2 concentration from its current value.”

    The IPCC is citing GEA estimates of reserves. An estimate of reserves is not an upper limit of the total fossil carbon on the planet.

    To put your arithmetic and assumptions the other way around, on the high estimate, increasing 1.4% would leave a possibly manageable GHG problem in 2100. But we would also be totally out of fossil fuel. Do you expect that to happen?

    • Thanks, Nick. I take no position on the likelihood of their numbers being correct. I’m saying that their estimate (according to the graphic shown in Figure 1) is that there are a total of 900-2000 gigatonnes of fossil fuel carbon in the earth. Note that this is reported in the same manner as their estimates of the various reservoir masses, as being their best estimate of the total amount of carbon in that form and location.

      All I did was calculate how high the atmospheric CO2 would go if we burned all of their best estimate of the total fossil carbon. I didn’t say that their assessment was accurate, which is why I headed my conclusion “According to the IPCC” …



    • Further to Nick’s point: by “reserve” is it meant the amount of stuff we think is there and can reach with current technology, or stuff that might be there but that we can’t necessarily reach with current technology?

      • Kevin Lohse, a “reserve” is not everything that exists. Reserves are what is technologically and economically recoverable. That means the level of reserves can increase simply because the price of oil (or coal, or whatever) goes up.

        You might find a comment I wrote below helpful. Long story short, for most of its scenarios, the IPCC doesn’t project a doubling of CO2 from our current levels. For the one scenario in which the IPCC does, it is assumed we will recover far more than our current stated reserves, meaning they openly acknowledge the scenario isn’t based on the numbers cited in this post.

      • The definition of reserve is variable. SEC filings require that a resource be considered economically viable using current technology and economic environment to be classified as PROVED reserves.

        The reserves shown by BP in their annual yearbook should not be considered proved reserves (I’m very familiar with individual volumes included in the total, and they definitely don’t pass the hurdle).

        For the purposes of global climate modeling it’s necessary to assume a resource volume, a fraction of which can be considered proved reserves. But we can also include future additions attributed to new discoveries, improved technology and higher prices. The new discoveries are now bounded by reality, we simply aren’t finding that much anymore. The technology and price impact are modeled, quite variable, and can yield very different results. The price impact is also a function of alternative energy resources taking up the slack. These in turn are driven by regulations, technology and price.

        Superimposed on all of this are assumptions about the world economy, population, and energy efficiency. The eventual result is a wildly changing, nearly chaotic model which can yield very different outcomes. Under most “reasonable” scenarios I have seen, the peak CO2 equivalent never reaches ~670 ppm. I’ve seen a very reasonable model result which yielded 530 ppm CO2; mine yielded 630 ppm CO2.

      • Brandon,
        And viable reserves can be reduced by legislation that restricts where and how resources can be extracted. Therefore, both economics and politics impact the estimate of reserves.

      • It is my understanding that “reserves” are known knowns. Reserves are deposits whose locations we know, which we have drilled and tested, which we know how to extract and process using current technology, and which we know we can profitably produce.

    • The IPCC has their heads up their hind ends when it comes to fossil fuel resources. I’ve read the description of the integrated assessment model prepared for RCP 8.5, and it’s evident they never gave much thought to the resource issue. We should also remember they were ORDERED to arrive at an 8.5 watts per m2 forcing, which they achieved by jacking CO2 and methane into the system as needed. When I read their description of what they did, I have to conclude that the total resource amount and the economics associated with such hyper-inflated coal burn rates never entered their minds.

    • And the upper limit of the total fossil carbon on the planet is not a realistic assumption of the actual rates of extraction and consumption, whether by burning for energy or by conversion to plastics or other petrochemical products.


  6. I would be interested to see what the plant life on this planet would do with so much plant food.
    Would there be any deserts left ?
    Or would they be covered in grassland/forest?

  7. And if the temperature response to rising CO2 levels is logarithmic, this means we (the world) are spending circa $1.0 billion per day in combatting supposed climate change to achieve almost nothing.

  8. Here we go, Thermageddon front and center.

    “Great Barrier Reef coral bleaching at 95 per cent in northern section, aerial survey reveal

    By Peter McCutcheon Updated March 28, 2016 18:34:34

    An aerial survey of the northern Great Barrier Reef has shown that 95 per cent of the reefs are now severely bleached — far worse than previously thought.

    Key points:
    95 per cent of the Great Barrier Reef’s northern reefs rated as severely bleached
    Only 4 out of 520 reefs surveyed were found to be unaffected by bleaching
    Third global coral bleaching event since 1998

    Professor Terry Hughes, a coral reef expert based at James Cook University in Townsville who led the survey team, said the situation is now critical. “This will change for Great Barrier Reef forever,” Professor Hughes told 7.30. “We’re seeing huge levels of bleaching in the northern thousand-kilometre stretch of the Great Barrier Reef.” Of the 520 reefs he surveyed, only four showed no evidence of bleaching. From Cairns to the Torres Strait, the once colourful ribbons of reef are a ghostly white. “It’s too early to tell precisely how many of the bleached coral will die, but judging from the extreme level even the most robust corals are snow white, I’d expect to see about half of those corals die in the coming month or so,” Professor Hughes said.”


    More at link.

    • A forecast calamity (“I expect half of those corals to die”) with a time horizon (“coming months or so”) that is well within a period most readers will live through. How quaint. Shall we start marking our calendars?

    • Corals have been on the Earth for hundreds of millions of years, through all kinds of CO2 levels, all kinds of surface temperatures and all kinds of extinction events. So I am quite sure that a rise from 300 to 400 ppm is not going to wipe them out.

      • Yup, not to mention that 12k years ago none of the current GBR even existed, just some coastal hinterland limestone outcrops amid swamps, dunes and estuaries. Somehow it’s the end of the world if some of it snuffs it, even though it will be replaced in a couple of years and be present in even greater profusion and diversity. Plus they want to pretend it takes 10 to 12 years to recover. BS! In cold water, maybe, but in warm water it is much faster, 4 to 5 years tops. At the northern end of the reef, it’ll be looking pretty darned good in 3 years.

        Note also they say they’ve got coral cores going back 400 years? Yeah, well, I happen to know they go back longer, but they want to reference from the onset of the little ice age period, to present, so they can pretend prior bleaching did not occur and must be AGW in origin.

        Also notice that they want to pretend that bleaching events are new since 1998.

        Absolute bunk! They’re simply lying, and they know they are!

  9. As a layman I have a question, which may be uninformed. I often read that CO2 levels millions of years ago sometimes reached a level of 4,000 ppm. If this is true, what was the source of that much carbon and where is that carbon now?

    • I have heard it has been as high as 7000ppm but that is a very relevant question which I haven’t even seen discussed anywhere.

    • ” .. where is that carbon now?”
      A good read on the matter is Patrick Moore’s lecture ‘should we celebrate carbon dioxide’ on the Global Warming Policy Foundation site.

    • When the earth was formed, most of the atmosphere was methane and CO2, the latter mainly from volcanoes. No oxygen. Plant life did change that by transforming a lot of CO2 into organics and releasing O2, which in turn oxidized methane to CO2. Animals came on the scene which used the organics and O2 to return it as CO2…

      So in general, most CO2 in the past was (deep) magma / volcanic, in a changing equilibrium between the temperature of the oceans and the atmosphere and emerging life, with plant uptake both in the oceans (shells and organics) and over land (peat – brown coal – coal), and later animal releases.

      Most of the ancient CO2 now is in inorganic carbonate layers, laid down by coccoliths, small plankton species which form carbonate shells. That is visible as thick layers of white rock in South England and Normandy in France and many other places on earth. Another large part is in organics: coal, oil, gas,…

      • Re: “Most of the ancient CO2 now is in inorganic carbonate layers”; and don’t forget all the coral reefs (past and present) laid down by the humble coral polyp!

      • Fernando Leanme,

        You are right, I had left that out (and water vapor in the still-hot atmosphere) as those were the “inert” ingredients. Methane was less than I expected (around 100 ppmv, about a hundred times higher than today), but may have had an important role in building the first life forms…

      • Small point, much of the fossilized carbon gets subducted under continents as the oceans contract and expand. When subducted, the CO2 will come out again from volcanic activity. Without plate tectonics, the world would quickly run out of CO2 and life as we know it would disappear. It is amazing all the subtle little things about our planet that make life possible. Even if Venus were at Earth’s distance from the sun, so it could be potentially tolerable to life, it wouldn’t support much life because Venus doesn’t have active plate tectonics to constantly recycle the used carbon.

    • Go back a few billion years, and it was probably 95% the same as Venus and Mars; the question is not the source, it is where did it go?
      It went into making mountains of limestone and chalk.

    • The history of CO2 in the Phanerozoic Eon looks like this:

      In the Paleozoic Era (541 to 252 million years ago) concentration fell from over 7000 ppm in the Cambrian Period to perhaps under 300 in the Carboniferous (named after the coal beds formed then by plants sucking CO2 out of the air, then rotting in the absence of advanced fungi). The Ordovician-Silurian Ice Age occurred under CO2 of at least 4000 ppm, and solar irradiance just four percent less than now. The Carboniferous-Permian Ice Age began under CO2 much higher than now, too, but it lasted long enough, as noted, for levels to drop below today’s, although probably not to Pleistocene Glaciation concentration, i.e. below 200 ppm. CO2 recovered in the Late Permian. The poorly understood “Great Dying” mass extinction event was probably not solely due to CO2 emitted by the Siberian Traps flood basalt eruptions and associated coal bed burning, but the gas might have contributed to it.

      CO2 was much higher than now, 1000 to 3000 ppm, clear through the Mesozoic Era (252 to 66 Ma), but not high enough by itself to account for the exceptional warmth of the Cretaceous Period, named for its chalk deposits, as at Dover. The luxuriant plant life of this era allowed the growth of giant sauropods in the Jurassic and Cretaceous. A lot of CO2 was released during the breakup of Pangaea, which started at the Triassic-Jurassic boundary. This volcanic activity also raised sea levels and heated the oceans.

      CO2 remained elevated early in our present Cenozoic Era. During the Paleocene and Eocene, the first two epochs of the Paleogene Period, levels were as high as in the Cretaceous, or higher at the Paleocene-Eocene Thermal Maximum, but started dropping during the Eocene, perhaps aided by the Azolla Event, a water plant bloom in the Arctic Ocean. Earth became cooler and drier in the Oligocene Epoch, thanks to the formation of the Southern Ocean and the isolation of Antarctica, leading to ice sheet build-up. CO2 dropped under 1000 ppm. C4 plants evolved to cope with this downturn.

      The Miocene, first epoch of the Neogene Period, got progressively cooler and drier, although by fits and starts. Antarctic ice fluctuated. CO2 dropped further. Grasslands replaced forests in many places. This continued in the Pliocene, affecting the evolution of our African ape ancestors, as did the Rift Valley, formed by the ongoing splitting of continental plates.

      With the closure of the seaway between North and South America around three million years ago, ocean circulation changes led to the Pleistocene glaciations in the Northern Hemisphere, making the Cenozoic ice age global. CO2 crashed dangerously close to starvation levels for C3 plants during glacial advances, at 150 ppm.

      So 400 ppm is a welcome relief for most plants, and a return to the good old days of 800 to 1300 ppm, as in real greenhouses, would be cause for celebration. But that probably isn’t in the cards. Even burning all fossil fuels over the next century or two likely won’t get our air to more than 600 ppm.

  10. Willis,

    Nice calculations!

    As far as I remember, the Bern model was originally made for 3000 and 5000 GtC, thus burning all available oil and gas in the first case, regardless of price and accessibility, while the 5000 GtC includes a lot of coal.

    The big problem with the Bern model is the insistence on limits in sink capacity, which is true for the ocean surface (the first fast decay rate), questionable for the deep oceans (the second decay rate) and non-existent for the biosphere (the third one).

    These limits were based on the 3000 and 5000 GtC scenarios, but they simply applied them to every scenario, which is only applicable for the ocean surface, not for the deep oceans and certainly not for the biosphere.

    The ocean surface is in close contact with the atmosphere and exchanges CO2 at a very high rate. Due to ocean chemistry, a change of 30% CO2 in the atmosphere gives a change of 30% of free CO2 in the ocean surface per Henry’s law, but as free CO2 is only 1% in seawater, that gives only a ~3% change in total carbon species (DIC, dissolved inorganic carbon) in the ocean surface at (dynamic) equilibrium. That is expressed in the Revelle factor. There the Bern model is right.

    The same reasoning is applied for the deep oceans, but that doesn’t add up: the cold ocean surface at the sink places is largely undersaturated in CO2 (~250 μatm vs. ~400 μatm/ppmv in the atmosphere), so that a lot of CO2 (~40 GtC/year) disappears into the deep, though largely compensated by a near-equal upwelling near the equator. Once the CO2 is transported into the deep oceans, there is no contact with the atmosphere and the Revelle factor doesn’t count.

    The main point is that there is no sign of saturation of the deep oceans in the past 55 years, nor in the foreseeable future. The result is that human emissions slowly mix into the enormous deep-ocean carbon content: all human emissions until now (~400 GtC) are good for a ~1% increase of the deep ocean/atmosphere content, thus ~3 ppmv in the atmosphere once back to steady state…

    The exercise with 900 GtC thus is good for an additional 2.5%, or 7 ppmv extra in the atmosphere, and with 2000 GtC that gives 15 ppmv when returned to steady state. In all cases the return to steady state follows the single decay rate, or the second one of the Bern model, of ~50 years (a half-life of 30–35 years), as the first and third decay rates of the Bern model only add a little to the decay speed.

    The residual 3, 7 or 15 ppmv is what takes a lot of time to decay into the biosphere (which has no real limit, though the Bern model imposes one) and into carbonate sinks in the oceans (according to the Bern model), but that residual is directly proportional to the total CO2 released, not a fixed percentage of the release as the Bern model says.

    The latter is the most questionable assumption in the Bern model: they first split the increase in the atmosphere into fixed compartments of % release and then apply the different decay rates to each compartment itself, without any interaction between the compartments for a common increase/decrease in the atmosphere… See the comments of TerryS here.
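    Ferdinand's ~3, ~7 and ~15 ppmv residuals can be reproduced with a short sketch. The reservoir sizes below are assumptions read off the AR5 figure at the head of the post, and the ~2.12 GtC-per-ppmv conversion is the commonly used factor, not a number from this comment:

```python
# Back-of-envelope version of the dilution argument above (assumed figures:
# reservoir sizes read off AR5 Fig. 6.1, plus the usual 2.12 GtC/ppmv factor).
DEEP_OCEAN_GTC = 37100.0   # deep-ocean carbon stock, PgC
ATMOS_GTC = 589.0          # pre-industrial atmospheric carbon, PgC
GTC_PER_PPMV = 2.12        # ~2.12 GtC of atmospheric carbon per ppmv CO2

def residual_ppmv(emitted_gtc):
    """Extra ppmv left in the air once a pulse is fully mixed through the
    combined atmosphere + deep-ocean pool (no Revelle limit in the deep)."""
    dilution = emitted_gtc / (DEEP_OCEAN_GTC + ATMOS_GTC)
    # the atmosphere retains its proportional share of the mixed-in carbon
    return dilution * ATMOS_GTC / GTC_PER_PPMV

for pulse in (400, 900, 2000):
    print(pulse, "GtC ->", round(residual_ppmv(pulse), 1), "ppmv")
```

    With these assumed numbers the three pulses land at roughly 3, 7 and 15 ppmv, matching the figures quoted in the comment.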

    • But the deep-ocean surface is never really at Henry’s law equilibrium, not even a dynamic equilibrium (steady state).

      Because of the temperature lapse rate with depth (for a while) and the increased CO2 solubility at lower temperature, plus the phenomenon of concentration-driven diffusion, the oceans are constantly pumping CO2 away from the surface into deeper water.

      So you never get a static Henry’s law equilibrium, and because the concentration-driven pumping depletes the near-surface CO2 concentration BELOW the equilibrium concentration, a small transient temperature increase never brings the available concentration up to the Henry’s law equilibrium value, so outgassing due to small temperature fluctuations does not occur; in other words, no dynamic steady-state condition either.

      The near-surface temperature gradient (down to the thermocline) is a continuous CO2 pump, away from the surface. In addition, as you get away from the surface, the diffusion of CO2 becomes three-dimensional rather than one-dimensional. That slows down general diffusion rates (a random-walk situation), and of course biological consumption of CO2 continues to deplete deeper ocean waters of CO2.

      Skeletons of dead organisms continue to rain down on the decks of the Titanic.


      • george e. smith,

        The ocean surface layer (the “mixed layer”) is quite isolated from the deep oceans, for temperature and nutrients as well as for CO2. The main exchanges are at the sink and upwelling zones, which each cover about 5% of the ocean surface. That is a water flux driven mainly by wind and by temperature: the latter mostly at the edge of the sea ice, where salts are expelled from the freezing waters.

        The amounts of CO2 sinking into the deep and upwelling again are quite constant and were roughly in equilibrium (~40 GtC/year), but are influenced by the increase in the atmosphere: currently ~3 GtC/year more sink than source, which is the second sink rate in the Bern model. Temperature has a small, temporary influence of ~16 ppmv/°C, that is all.

      • Dang, I thought we were really doing some long-term good by freeing all that imprisoned carbon.

      • Yeah . . I was thinking just the other day that the biosphere is gonna bloom like crazy, and there’s gonna be this huge natural CO2 demand, and as we eventually slow our CO2 production . . Well, catastrophic global cooling seems inevitable . . and ocean alkalinization, beachfront property values plummeting . . polar bears wandering hundreds of miles to find a hole in the ice . . ..

    • Ferdinand,

      You wrote: “Once the CO2 is transported into the deep oceans, there is no contact with the atmosphere and the Revelle factor doesn’t count.”

      I never thought of that, but now that you point it out, it is obvious. I would greatly appreciate it if you could point me to where I can find a detailed discussion, either primary literature or someplace with references to primary literature.

      • Mike M. (period),

        Own reasoning… It was obvious to me when I saw the huge difference in pCO2 between the ocean surface and the atmosphere at the main sink places. See Feely et al. at: and following chapters.
        The charts with the pCO2 differences and net fluxes are at: and next page.

        The result is that at the main (THC) sink place, the NE Atlantic, the difference in pCO2, and thus the net CO2 sink flux, is hardly influenced by the Revelle factor, even in the far future.

        Once in the deep there is no exchange with the atmosphere, and there is even an increase in CO2 and its derivatives due to the raining out of organic debris and inorganics (carbonate shells from coccoliths) from the surface, which are broken down by bacteria (the organics) or simply dissolve (the inorganics) below the “carbonate compensation depth”.
        All of that means the upwelling waters are largely oversaturated when they reach the surface again and warm up near the equator, emitting a lot of CO2 which then returns to the sink places, giving a rather continuous cycle of ~40 GtC/year (~3 GtC/year more sink than source)…

      • Ferdinand,

        Thanks, that will get me started. I knew that P_CO2 in the deep ocean is very high due to the “biological pump” but I never thought about what that means for the eventual fate of anthropogenic CO2.

    • Ferdinand,
      You said, “Once the CO2 is transported into the deep oceans, there is no contact with the atmosphere…” That is not quite true. The organisms living near the surface take in CO2 and incorporate it into their bodies. When they die, they drift downward and become oxidized in the deeps. The CO2 from decomposition is readily absorbed by the cold water under great pressure. Thus, the downwelled deep ocean waters are indirectly in contact with the atmosphere through a mechanism that is different from simple absorption across a water/air interface. Nevertheless, since the atmospheric CO2 influences planktonic growth, it also influences the CO2 in the deep waters.

      • Clyde,

        Of course there is the biological pump working which enriches the deep ocean waters, but that was not the point of discussion.
        According to the Bern model, the deep-ocean sink for CO2 is affected by the Revelle factor, like the ocean surface is. For the sink places, that is clearly not the case, as these are much lower in pCO2 than the atmosphere and thus still absorb lots of CO2 without much effect of changing ocean chemistry at those places. Once these waters are in the depths, there is no direct contact with the atmosphere and the Revelle factor doesn’t play any role anymore. Not even if the C content doubled on its way to the upwelling zones…

  11. I feel like this post is missing a lot of information. One of the most glaring omissions is that the IPCC doesn’t project CO2 levels doubling in any scenario but its most extreme, RCP 8.5. For all of its other projections, even CO2-equivalent levels (where the IPCC accounts for the effect of other greenhouse gases) stay below 800. That this post’s results are perfectly in line with three of the four IPCC scenarios seems something that merits mention.

    And as for RCP 8.5? The people who made it discuss why it reaches levels above the apparently possible ones. Researchers discuss how the CO2 levels projected in RCP 8.5 would require significant amounts of unconventional hydrocarbon resources beyond our current stated reserves. That is, the people making the RCP 8.5 say the levels they project would require developing new methodologies to allow us to go beyond the reserves listed in the image this post relies upon.

    This means the IPCC says we will not double atmospheric CO2 levels from our current levels in three of its four scenarios, and the people who made the one scenario where it could happen say the only way we would reach such levels is if we developed new technology that allowed us to extract far more resources (primarily coal) than are currently included in our calculated reserves due to our technological limitations. That seems like the sort of thing which should have been mentioned in this post.

    Side note, 383 + 173 + 446 does not equal ~900. It equals 1002.

    • It isn’t perfectly in line with the report. The author of this blog post assumed all our foreseeable carbon reserves per the IPCC will be consumed in some 85 years – that will not happen, so the figures you cite that show congruence are really way too high.

      • marque2, what are you talking about? All the IPCC projections save RCP8.5 say we won’t reach the levels described in this post. Pointing out we wouldn’t reach the levels this post describes because we wouldn’t burn enough resources to meet its maximum case doesn’t contradict the IPCC which says we won’t reach what this post shows as the maximum case.

        Also, this post does not assume all foreseeable carbon reserves will be burned. Reserves like those used for the estimates in this image are ones which are technologically and economically recoverable. It is quite easy to foresee cases where technological and/or economic changes would make the recovery of additional resources viable.

  12. Using Lewis and Curry’s estimate of TCR as approximately 1.3K, this means that by 2100, we’ll still be struggling to achieve a 1.3C rise in global temperatures over today’s value (approximately 1C above pre-industrial), even with the high end scenarios. Which kind of makes predictions of 3-4C rise in global temperatures by 2100 (even over pre-industrial levels) look a bit silly. But there again, the IPCC multi model mean for TCR is 1.8K and they allow for it to be as high as 3K, so catastrophe is still theoretically possible, therefore we should do everything we can to avoid this hypothetical Thermageddon.

    • Doing EVERYTHING we possibly can could involve forced sterilization of 90 % of all males on the planet, or killing half the population. Other crazy ideas come to mind.

      • 10% of all males can still fertilize 100% of females, Fernando ;-)
        There would be high demand for those males.

      • Dr. Strangelove sez:
        [Strangelove’s plan for post-nuclear war survival involves living underground with a 10:1 female-to-male ratio]
        General “Buck” Turgidson: Doctor, you mentioned the ratio of ten women to each man. Now, wouldn’t that necessitate the abandonment of the so-called monogamous sexual relationship, I mean, as far as men were concerned?
        Dr. Strangelove: Regrettably, yes. But it is, you know, a sacrifice required for the future of the human race. I hasten to add that since each man will be required to do prodigious… service along these lines, the women will have to be selected for their sexual characteristics which will have to be of a highly stimulating nature.
        Ambassador de Sadesky: I must confess, you have an astonishingly good idea there, Doctor.

      • Fernando,
        Any attempt to significantly reduce the population of feral cats by sterilizing only the male cats would be doomed to failure. It might actually increase the population because the few remaining fertile male cats wouldn’t have to compete with as many other males and would therefore suffer fewer serious injuries and fatal wounds. They could apply their energy to servicing the female cats rather than fighting. While there would be some cultural changes required, I’m afraid one could expect a similar result with humans.

    • TCR is a theoretical calculation based on an assumed doubling of CO2 concentrations in 70 years, which works out to 1% concentration growth per year. Real-life growth is slower (about 0.5% per year), which means by the time you reach a given concentration, there has been a longer time for oceans to equilibrate (the famous ‘warming in the pipeline’). In other words, the real-world temperature increase should be somewhere between TCR and ECS.

      Additionally, even if concentrations didn’t change at all from now on, we would expect temperature to increase a bit due to warming in the pipeline that was already ‘committed’. So if we doubled concentrations from now till 2100, even if TCR = 1.3C, the temperature increase will be higher than that.

      There is little question that, over time, we’re going to cross the two-degree ‘target’. Of course, in the end the question is why anyone should care; the worst that can happen is that sea level rises a few meters after a few centuries. Presumably people of the year 2500 will know how to build a seawall.

      As a curiosity, under SRES A1B, i.e. ‘business as usual’, the ‘developing’ countries will by 2100 have achieved an income TWICE that of the ‘developed’ world in 2000. In the middle of this unprecedented miracle, focusing on one or two degrees seems unbelievably myopic.
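      A minimal sketch of the arithmetic behind this sub-thread, assuming the standard logarithmic relation ΔT = TCR × log2(C/C0) and the TCR values quoted above (1.3 K from Lewis & Curry, 1.8 K and 3 K from the model range); the 600 ppm endpoint is the post's upper estimate:

```python
import math

def transient_warming(c_ppm, tcr, c0_ppm=280.0):
    """Warming (K) above the c0 baseline, using the standard logarithmic
    scaling dT = TCR * log2(c / c0)."""
    return tcr * math.log2(c_ppm / c0_ppm)

# Warming at 600 ppm relative to pre-industrial ~280 ppm, for the TCR
# values mentioned in this sub-thread:
for tcr in (1.3, 1.8, 3.0):
    print("TCR", tcr, "K ->", round(transient_warming(600.0, tcr), 2), "K")
```

      At 600 ppm this gives roughly 1.4 K, 2.0 K and 3.3 K above pre-industrial for the three TCR values, consistent with the comment's point that the low-TCR case stays well under the alarming projections. As Alberto notes, the real-world response should sit somewhere between TCR and ECS, so these are lower-bound-ish figures.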

      • Alberto, ” there has been a longer time for oceans to equilibrate (the famous ‘warming in the pipeline’).”

        This appears to be conflating atmosphere/ocean carbon chemistry with atmosphere/ocean thermal transfer. In the case of the chemistry the atmosphere is the net source and the oceans the net sink. The case of thermal transfer is overwhelmingly the reverse.

      • Gymnosperm, I didn’t conflate temperatures with the carbon cycle. What I meant is that in papers estimating TCR and ECS through the energy budget, like Lewis & Curry, the difference between both figures is due to ocean heat coming to the atmosphere. Since real-life CO2 concentration rises more slowly than the ‘ideal’ calculated in these papers (1% a year vs 0.5% a year), there is more time for this warming to ‘kick in’.

    • A warm world is in general a better world. A kilometre or two of ice above your head can tend to ruin your day. Please note the planet should be greener and wetter :-)

    • Jessop


      If there were an infinite amount of resource (i.e.: money), go chase your hypothetical Thermageddon…

      However, with limited resources, you’re (literally) making a decision to kill millions of people (malaria, starvation, etc) because you chose to allocate resources to chasing hypothetical Thermageddon.

      Hopefully my comment gives you some hypothetical perspective (but I doubt it).

      ps: a version of the above logic explains why most people drive a Honda instead of a Ferrari. What kind of car do you drive?

  13. As a non-scientist I’d like to ask your more qualified contributors a question about what exactly produces a Venus-like greenhouse world of the sort that we are constantly being threatened with.

    I am well aware that for most of Earth history carbon dioxide levels have been generally much higher than today’s levels, that the planet Venus is 30 million miles nearer the Sun and therefore likely to be a wee bit warmer even on a good day and that although Mars has virtually nothing but carbon dioxide in its thin atmosphere it is about double our distance from the Sun and therefore unlikely to ever compete with Florida in the suntan stakes.

    However, surely atmospheric density – I think the atmospheric pressure column is the correct way of describing it – must play a critical part? The fact that the Earth has never gone into a runaway greenhouse despite hugely higher atmospheric carbon dioxide content in the past must mean that it has limiting factors at work preventing this. I understand the explanation that there is less and less effect as more and more carbon dioxide reaches the atmosphere, but what is the relationship with respect to the density of the atmosphere?

    I’d be really grateful for a short explanation from one of your expert contributors.

    • Atmospheric density does play a critical part.

      Venus receives about twice as much energy from the Sun compared to the earth. However, it also has an albedo of about 0.9, which means that it reflects about 90% of the energy it receives straight back to space. In spite of this, Venus is the hottest planet in the solar system due to an enhanced greenhouse effect arising from CO2 and sulphuric acid in its atmosphere. The surface temperature is more than 700 kelvin.

      The key factor is that atmospheric pressure is about 9 times higher than on Earth and this is what makes the greenhouse effect so powerful on Venus. On Earth, CO2 only absorbs radiation in a narrow band centred on a wavelength of 15 microns (it also absorbs at other wavelengths, e.g. 4.3 microns, but these are not relevant to the Earth’s greenhouse effect). On Venus, the atmosphere is 90% CO2 and, because of the high pressure, the CO2 absorption bands are extended by a process called ‘Pressure Broadening’.
      On Earth, CO2 concentrations are about one twenty-fifth of 1%, instead of 90%. A Venusian-type greenhouse effect is not going to happen here because the required atmospheric pressure is not going to occur.

      • If one measures the temperature of the Venusian atmosphere as one descends towards its surface, at one atmosphere pressure (i.e., about 14.5 psi) Venus is about the same temperature as planet Earth. Obviously, as one descends further towards the surface, and as the atmospheric pressure builds up, the temperature correspondingly increases.

        Despite Mars’ atmosphere being around 96% CO2, it is damn cold, but then again, there is all but no pressure.

      • Without its extra solar energy Venus would cool and continue to cool; yes, when you raise the energy in by 2× compared to Earth, of course the effects increase exponentially. Move our own globe into that position and it also would become an uninhabitable hot gas ball. The primary factor is its closer proximity to the sun.

        This does not equate to runaway CAGW; it is simply too much energy into a system, with all gases in the atmosphere rather than condensed as water and such. The atmosphere of Venus keeps this heat in, but it is not the source of the heat. Using a planet that receives twice the energy as a model basis for CAGW is catastrophically stupid, when that very energy input is required to create the atmosphere; even without high CO2, at 2× the energy Venus would be clouded in other gases regardless, and no doubt just as hot.

        So too much energy for an Earth-like planet with liquid water is the problem for Venus. Of course that solar energy has “nothing to do with earth’s climate”, yet it drives the atmosphere on Venus at twice Earth’s input.

      • Correction to MikeB March 28, 2016 at 3:44 am.

        The atmospheric pressure at the surface of Venus is 93 bar = approx. 92 atmospheres, not 9 times that at sea level on Earth.

        Venus’s albedo is given as 0.75 (spherical) or 0.85 (geometric). 90% is too high. I understand that geometric albedo is the albedo measured when the source of illumination is directly behind the observer, whereas the spherical albedo relates to the proportion of light radiated in all directions. The subject is confusing, and there are numerous definitions. Data from Wikipedia.

        Actually, because of the very high albedo, the surface of Venus should receive less solar energy than should the earth. See:

      • The lower atmosphere on Venus is supercritical CO2, meaning it is sort of like a gas and sort of like a liquid. I’m no expert on supercritical fluids, but to me this means 1) heated CO2 isn’t very buoyant and doesn’t rise very well, and 2) when the supercritical lower atmosphere of Venus transitions to a gas a few miles up, there is no evaporative cooling effect like we have on Earth when water transitions to a gas at the ocean’s surface. So no heat is lost the way it is when water evaporates; it all remains trapped in the lower atmosphere. I’ve wondered what would happen to the Earth’s ocean temperatures if water behaved like supercritical CO2 and didn’t evaporate. I suspect the oceans would be boiling for sure.

      • I do recommend deep study of pressure broadening before accepting it. It is another dodge.

    • Also, a day on Venus is what, 243 Earth days? You really cannot compare Earth and Venus when talking CAGW; there is no real basis for it. Earth would be unrecognizable if our day lasted 243 Earth days, probably with no life either, and could in fact have similar conditions, given that oceans would be in direct sun for hundreds of days straight!!

      • I’m not sure the rotation is that [relevant]. Since very little sunlight actually reaches the surface, I would guess that atmospheric movement plays a larger role.

      • One thing missing is this: temperature and gravity affect how long light molecules stay in the system. In Earth’s case the balance is such that free hydrogen and helium escape but free N2, O2, and water can stay in the atmosphere, held by gravity. Because Venus is so close to the sun, the extra 2.65× energy caused lighter molecules to escape, leaving CO2 as the lightest remaining. Venus couldn’t hold water, no matter the length of the day, and it wouldn’t hold free O2 either, so we couldn’t have Earth-like life on the planet no matter what speculation we might make.

    • Venus: 96.5% carbon dioxide @ 9.2 MPa = 8.87 MPa of CO2.
      Earth: 0.039% carbon dioxide @ 0.101325 MPa = 3.951675e-05 MPa of CO2; the Venusian atmosphere has ~225,000 times as much CO2 partial pressure as the Earth’s does.
      There really is no comparison between Earth and Venus.
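      The partial-pressure arithmetic above is easy to check, using the commenter's own figures (note the 9.2 MPa here is consistent with the ~92 atm given in the correction further up the thread):

```python
# Verifying the CO2 partial-pressure comparison in the comment above.
venus_total_mpa = 9.2        # Venus surface pressure (~92 atm)
venus_co2_frac = 0.965       # 96.5% CO2
earth_total_mpa = 0.101325   # 1 atm
earth_co2_frac = 0.00039     # ~390 ppmv CO2

venus_pco2 = venus_total_mpa * venus_co2_frac    # ~8.88 MPa
earth_pco2 = earth_total_mpa * earth_co2_frac    # ~3.95e-5 MPa
ratio = venus_pco2 / earth_pco2
print(round(ratio))          # ~225,000, as the comment says
```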

      • So the effective sameness of temperatures at equal pressures, once solar distances are accounted for, only further proves that gas species is irrelevant, as per the gas laws. Radiative forcing exerts no effect. Zero. Nil. Nada.

  14. Interesting Willis, it does ask questions of the IPCC claims and the disaster mongers of course.

    Can you do the same calculations for putting all reserve CO2 into the oceans, to see if they even approach neutral, let alone acidic, conditions :P

    I’d be interested to see what you find

  15. “On the other hand, the Bern model uses three different half-lifes applied to three different fractions of the CO2 emissions, plus 15% of the emitted CO2 is said to only decay over thousands of years.”

    I hope I’m misreading that. It seems to suggest that the IPCC believes that each atmospheric CO2 molecule somehow knows what decay mechanism to use and can only use its own designated mechanism. Frankly, I don’t think CO2 molecules are that smart. My impression is that Ferdinand Engelbeen’s post (above) addresses that. I’ll meditate on this and maybe even look some stuff up, but right now, my todo list overfloweth and I need to get on with the tax stuff.

    If my impressions are correct, I’d say the Bern model is pretty dubious and the simple exponential decay is probably better. Fortunately, they don’t seem to be very different for short timespans — like 84 years.

    • Whatever the Bern Model’s other shortcomings may be, its requiring each CO2 molecule to know which time constant to use is not among them. This post explains the multiple-time-constant concept.

      • Thank you Joe. Even with the Gawdawful math, that seems much less detached from reality. I need to think about it more, but for now, I think the partitioning process is not described very well by “On the other hand, the Bern model uses three different half-lifes applied to three different fractions of the CO2 emissions, plus 15% of the emitted CO2 is said to only decay over thousands of years.”.

      • Dear Joe,

        Still need to start with checking your background work, but every time I get started, articles like this one need a lot of responses from my side…

        That the Bern model is a mix of different decay rates is not a problem, real life is like that. The main problem I see is that they make a partitioning of any new increase in different compartments before they apply the different decay rates. That is only true for the ocean surface, which is rapidly saturated at 10% of the change in the atmosphere. That is very disputable for the deep oceans (the second decay rate) and non-existent for the biosphere (the third one). See my comment here and the link therein to TerryS’ comment.

      • Joe: After nowhere near as much thought as it deserves, I think that the exponential model is indeed simple. Given a starting point value, and an exponent, one can compute the CO2 concentration at any time with one simple calculation. It’ll do a great job as long as the starting value is close to correct and the phenomenon is truly changing exponentially and the exponent is very close to correct and the exponent doesn’t change.

        The Bern model, on the other hand, appears to be at heart a sort of polynomial where some — it’s not clear from what I read which — of the constants have been determined by fitting to historical data. Sure. Why not? I’d guess — and it is a guess — that it will be subject to the usual problems with polynomials. Most importantly, polynomials often do a very good job of estimating values within the data range and a poor job of predicting values much outside the range of the data they have been fitted to. I’d have to look at the Bern model a LOT more to determine whether it’s prudent to ascribe any physical meaning to the terms. I don’t plan to do that. It seems quite possible that the terms in the Bern model are just more or less arbitrary entities that have been trimmed, stretched, and jiggered to give reasonable results — at least over the range of historic data.

    • It’s typical for processes governed by diffusion equations to have multiple apparent time constants. They come about because, as concentrations decrease, the likelihood or frequency of absorption decreases.

      As a result, decay is not directly proportional to the quantity, but to a constantly evolving function of it. A piecewise constant fit to the function produces piecewise exponential decay with associated time constants.

      The problem is not the form of the Bern model, but its parameterization, which is just a WAG.
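      The "fixed fractions, separate time constants" structure being debated in this sub-thread can be made concrete. The coefficients below are one published parameterization of the Bern impulse-response function (values as quoted in IPCC AR4; exact numbers vary between Bern model versions), shown against a single-time-constant alternative with an assumed ~40-year e-folding time:

```python
import math

# One published parameterization of the Bern impulse-response function
# (coefficients as quoted in IPCC AR4; other Bern versions differ slightly).
A0 = 0.217                                   # effectively permanent fraction
TERMS = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]  # (fraction, tau/yr)

def bern_airborne(t):
    """Fraction of a CO2 pulse still airborne after t years (Bern form)."""
    return A0 + sum(a * math.exp(-t / tau) for a, tau in TERMS)

def single_exp_airborne(t, tau=40.0):
    """Single-time-constant alternative (tau = assumed ~40-yr e-folding)."""
    return math.exp(-t / tau)

for t in (0, 20, 50, 100, 500):
    print(t, "yr:", round(bern_airborne(t), 3), "vs",
          round(single_exp_airborne(t), 3))
```

      Note how the two models track each other loosely over the first few decades but diverge sharply later: the single exponential decays toward zero, while the Bern form flattens out near its permanent fraction. That long tail, not the sum-of-exponentials form itself, is where the parameterization dispute above lies.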

  16. Willis

    I think I’ve finally got a handle on this exponential growth.

    Via an ex-RAAFie’s song which starts

    “It started with three

    Now there’s millions on me

    How I’d like to get them with my crab lotion”


  17. Willis

    This year, the atmospheric CO2 level is right around four hundred ppmv. So to double, it would have to go to eight hundred ppmv.

    But what is often overlooked is that the warmists base their rise on pre-industrial levels, i.e., a doubling above about 270 to 280 ppm. Armageddon is meant to be brought about by our reaching about 550 ppm.

    As for the alleged dangerous 2 deg C, it is not 2 deg C above current temperatures, but rather 2 deg C above pre-industrial temperatures, i.e. the lows of the LIA. According to the data sets, we have already seen about 1.2 deg C of the dangerous 2 deg C warming.

    Accordingly dangerous CAGW is all about a further 150ppm of CO2 and a further 0.8degC of warming.

    Personally, I am not at all concerned about a doubling of CO2 from current levels. CO2 is still very low, and 800 to 1200 ppm would appear to be a very good thing for the biosphere. Nor am I at all concerned about a further warming of, say, an additional 2 to 4 deg C above current temps. A return to the highs of the Holocene Optimum would benefit the vast majority of life on planet Earth.

    Whilst I consider that your observations

    Next, the two upper values seem unlikely, in that they assume a continuing exponential growth over eighty-five years. This kind of long-term exponential growth is rare in real life.

    are bang on, this is not addressing the warmists’ argument, since they are concerned that we will see one complete doubling (to about 550 ppm), and that will be achieved by the end of the century on both models using the lower value for fossil fuel reserves, by around 2050 using the Bern model (higher reserves), and by around 2060 using the simple exponential model (higher reserves).
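    The timing concern above can be roughed out with simple arithmetic, assuming the current ~400 ppmv level and a constant growth rate near the recent ~2 to 2.5 ppmv/yr (both assumptions; real growth varies year to year, and neither of the post's models is linear):

```python
def year_reached(target_ppm, start_ppm=400.0, start_year=2016,
                 growth_ppm_per_yr=2.25):
    """Crude linear extrapolation of when a given CO2 level is reached."""
    return start_year + (target_ppm - start_ppm) / growth_ppm_per_yr

print("560 ppm (doubled pre-industrial):", round(year_reached(560)))
print("800 ppm (doubled current):       ", round(year_reached(800)))
```

    On these assumptions the pre-industrial doubling (560 ppm) arrives late this century, while a doubling of today's level (800 ppm) falls well past 2100, which is the gap between the two framings of "doubling" that this comment is pointing at.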

    • Correct.

      In most discussions we discuss doubling from pre-industrial.
      The 2C boundary is referenced from pre-industrial,
      hence the recent debates about the start date of pre-industrial.

      So, can we get to 560ppm?

      The question is will we get to 560?
      Next question is How much warmer will 560 be than 280?
      1C warmer? ( ha we are already there)
      2C warmer ( Lewis and Curry)
      3C warmer ( basically the “consensus”)
      The next question is
      Who will benefit? who will be harmed?
      the next question: How much?
      The next question
      What, if anything, can “we” do about it.

      With so many uncertainties to address, it seems a little distracting to ask the wrong question.
      Willis asks the wrong question.

  18. There is a PhD thesis by Willem Nel from the University of Johannesburg providing the same conclusion, with some refinements. He predicts peak coal, peak oil, and peak energy. He says the problem we face is not peak oil, but peak energy.

    His bottom line is that the CO2 concentration is not going to rise above 540 ppm no matter what is attempted.

    Separately, and just interestingly, another of the UJ PhD grads has departed from the Big Oil company he was working at and opened a fracking company that will concentrate on getting more than the current 25% of oil out of each well. If it is as successful as his other innovations, he will approximately triple the available oil supply, which has interesting implications for the oil price and such calculations as Willis and Willem are attempting.

    Impacts of primary energy constraints in the 21st century
    Nel, Willem, P.

    Extract from the abstract: (my emphasis)

    “Analysing these real-life sustainability issues in a multi-disciplinary context leads to conclusions that are controversial in terms of established philosophical worldviews and policy trends. Firstly, the thesis establishes deterministic expectations of an imminent era of declining Energy Security resulting from the exhaustion of non-renewable fossil fuel resources, despite optimistic expectations of technology improvements in alternative energy sources such as renewable and nuclear. Secondly, the exhaustion of non-renewable fossil fuel resources imposes limits to the potential sources of anthropogenic carbon emissions that render the more pessimistic emissions cases considered in the global warming debate irrelevant. The lower level of attainable carbon emissions challenges the merits of the conventional carbon feedback cycle with the result that the predicted global warming is within acceptance limits of the contemporary global warming debate. Thirdly, the consequences of declining Energy Security on socio-economic welfare is a severe divergence from historical trends and demands the reassertion of the role of energy in human development, including Economic Growth theory.”

  19. The Bern Model is artificial in that it does not deal with the bases of the Carbon Cycle: what is driving human emissions and what is driving the Natural Sinks.

    I really like the chart at the head of the post. It is actually fairly accurate and almost up-to-date. It is, however, just a point in time, because all of these numbers have changed through time. The chart at the top really represents a single point in time: 2012.

    The number at the top, 589 billion tons Carbon, is 276 ppm CO2, which is what the CO2 level was in 1750, before the industrial revolution started. This is an Equilibrium level of CO2. In non-ice age conditions, this has been the CO2 level for the past 24 million years, ever since C4 grasses evolved and increased the Carbon balance that vegetation is capable of holding.

    In the absence of our emissions, the natural world will tend to try to stay at this 276-280 ppm level which is where the Natural sinks and sources are more-or-less in balance and have been for 24 million years.

    The current concentration is 850 billion tons Carbon -> 399 ppm CO2. It has gone up by 261 billion tons Carbon, or 123 ppm CO2. (A handy formula is CO2 ppm x 2.13 = GTons Carbon in the atmosphere.)
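    The handy conversion above can be sketched in a few lines of Python (the 2.13 factor is the one quoted in this comment; the function names are mine):

```python
# Conversion between atmospheric CO2 concentration (ppm) and carbon mass (GtC),
# using the factor quoted above: 1 ppm of CO2 ~ 2.13 GtC in the atmosphere.
PPM_TO_GTC = 2.13

def ppm_to_gtc(ppm):
    """Atmospheric CO2 concentration (ppm) -> carbon mass (GtC)."""
    return ppm * PPM_TO_GTC

def gtc_to_ppm(gtc):
    """Carbon mass (GtC) -> atmospheric CO2 concentration (ppm)."""
    return gtc / PPM_TO_GTC

print(gtc_to_ppm(589))  # pre-industrial reservoir: ~276.5 ppm
print(gtc_to_ppm(850))  # roughly-current atmosphere: ~399 ppm
```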

    Humans have added 401 billion tons Carbon since 1750 and plants, oceans and soils have sunk 141 billion tons Carbon. (I have this all in a big database).

    Human emissions have grown over time at close to an exponential rate, and were about 9.8 billion tons Carbon in 2015. The rate of human emissions has slowed in the last few years, partly because the world economy has slowed and partly due to the increased efficiency of energy production, mostly the newer gas turbines versus coal electricity production.

    The natural sinks have varied more over time, and they used to dwarf our emissions. Some years, plants, oceans and soils were a net source of Carbon. Some years, they were a net sink, and the Natural Sinks actually absorbed more than 100% of our emissions in the 1850s, 1860s and 1940s. CO2 actually fell in those periods.

    Here are the human emissions and natural sinks from 1750 to 2015.

    The Natural Sinks of plants, oceans and soils have been increasing slowly over time as the concentration of CO2 in the atmosphere has increased. (Note that in some years before 1900, the Natural Sinks were actually a Natural Source, and were much larger in this regard than our emissions.) The Sinks are trying to get back to 280 ppm, which is the equilibrium level.

    Here is the net Sinking/Source rate as a percent of the CO2 above the equilibrium level of 280 ppm, back to 1750. It is now around 1.8% of the Excess CO2 above Equilibrium each year, but it is increasing. The Bern Model must use this kind of calculation, NOT a simplistic e-folding-time one.

    Based on the human emission growth rate and the natural sink rate, we have to slow our emissions growth starting in about 2030 and then start on a slight reduction path in order to allow the Natural Sinks to catch up to our (currently higher) emission rate. It will take a long time for the Natural Sinks to rise to 9.8 billion tons per year. It is only about 4.8 billion right now.

    The Natural Sink rate will stay at 1.8% per year (or increase slightly; that is not included here), but the amount sunk each year will also grow more slowly as the rise in the atmospheric concentration begins to slow.

    By 2180, we can stop CO2 at the (less-than-doubling) level of 530 ppm. By then, Humans will have emitted 1,639 billion tons Carbon. The Natural Sinks will have absorbed 1,443 billion tons Carbon.

    This is the way the math should be done.
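    As a sanity check, the bookkeeping described above can be run as a toy simulation. The 1.8% sink rate and the 280 ppm equilibrium are the commenter’s figures; the emission path (1.5%/yr growth to 2030, then a 0.3%/yr decline) is my own guess at a path of the kind described, so the exact end point should not be read as his:

```python
# Toy projection: sinks remove 1.8% of the carbon excess above the 280 ppm
# equilibrium each year; emissions grow until 2030 and then slowly decline.
# (Assumed emission path; the sink rule and equilibrium are from the comment.)
PPM_TO_GTC = 2.13
EQUILIBRIUM = 280 * PPM_TO_GTC        # ~596 GtC
SINK_RATE = 0.018                     # fraction of the excess removed per year

atmosphere = 399 * PPM_TO_GTC         # ~850 GtC, roughly the 2015 level
emissions = 9.8                       # GtC/yr in 2015

for year in range(2016, 2181):
    emissions *= 1.015 if year <= 2030 else 0.997
    sink = SINK_RATE * (atmosphere - EQUILIBRIUM)
    atmosphere += emissions - sink

print(round(atmosphere / PPM_TO_GTC))  # ~510 ppm by 2180 with these assumptions
```

    With these assumed parameters the concentration levels off near 510 ppm rather than 530 ppm, which mainly illustrates how sensitive the end point is to the assumed emission path.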

    • Again, where does the idea come from that human emissions will peak in 15 years? I’m not saying it won’t happen, but it would be nice to know how Mr. Illis arrived at that conclusion.

      • Indeed. None of these graphs seem to have a ref. to the source of the data being plotted.

        In the absence of our emissions, the natural world will tend to try to stay at this 276-280 ppm level which is where the Natural sinks and sources are more-or-less in balance and have been for 24 million years.

        Utter crap, sorry Bill. Where on Earth does that idea come from? Circa 280 ppm may have been the (dynamic) equilibrium level in about 1850 AD. It certainly was not some universally applicable “equilibrium” before man started burning coal. It was maybe 10x greater in the past for perfectly natural reasons.

        Also PLEASE learn to use capital letters. The word carbon is like soot, soil or shit. It is NOT a proper noun (like a person’s name) which requires capitalisation. Neither is “Natural Source” a proper noun. Here, “natural” is an adjective (never capitalised) and “source” is a common noun, not a name. Also “Excess CO2” (adjective); “Equilibrium”; “Natural Sink rate”; “Natural Sinks”; “the Excess CO2 above Equilibrium”. WTF? Learn English, a little. Please.

      • Sorry Greg, didn’t know people felt so strongly about that.

        I maintain many climate-related databases (including probably the largest one of CO2 estimates throughout history – over 4,000 individual estimates). CO2 fell below 280 ppm, perhaps for the very first time in Earth’s history, 24 million years ago. Of the non-ice age estimates since that time, roughly 95% have CO2 between 240 ppm and 280 ppm. So that makes it the favorite number of the current Earth arrangement, or let’s say the equilibrium number.

    • Very interesting. I don’t actually think we face a catastrophe or anything like that, but this math looks FAR more solid than the toy ‘carbon budgets’ you see around. (One of the many problems of carbon budgets, which your charts address, is that they ignore how nature will remove atmospheric CO2 – essentially making the ‘budget’ infinite).

      Do you have a link?

  20. Hello Willis.

    A very good exercise there, showing what the CO2 concentration trend could be in the future according to data coming from models that estimate CO2 fluxes under some given “extreme” scenarios.

    I was wondering if you could use a similar exercise to produce (or reproduce graphically, as in your graph above) the actual trend of the last 100 or so years of CO2 concentration, with its very steady acceleration.

    My “wild” guess is that even if you were not restricted to the numbers from the models, but were free to pick and choose from the whole universe of numbers, you would not actually be able to reproduce that trend, with its (very) steady acceleration… unless, let me say again, unless the figure you use for the CO2 residence time is in the range of a ~5-year to at most ~10-year half-life of CO2 in the atmosphere.

    I would appreciate it a lot if you would try to prove me wrong on this one, in any way you can, as I will definitely learn something new.

    Even in Ari’s case the other day, his model did not produce the actual trend of CO2 concentration with its actual (very) steady acceleration.
    In that graph the divergence seemed small, but when it comes to the steadiness of the acceleration, the difference is a huge one, representing something that was not the reality.

    Hopefully you get my point.


    • whiten,

      Here is the observed combination of temperature, total CO2 emissions by humans, and the increase in the atmosphere over the past 110 years. The emissions are derived from fossil fuel sales and burning efficiency, while the increase before 1959 is from ice cores and from 1959 on from Mauna Loa:

      One can calculate the residual fraction of the original human emissions as a function of the ~5-year residence time, and the residual total increase of CO2 in the atmosphere as mass as a function of the ~51-year decay rate of any excess CO2 in the atmosphere above equilibrium:

      Thus, while the remaining fraction of original human CO2 is as low as 9% (= FA) in the atmosphere and 4.5% in the ocean surface (= FL), human emissions of some 400 GtC in the past 160 years are responsible for almost all of the increase of about 220 GtC in the same time span: with an e-fold decay rate of ~51 years, the calculated increase in the atmosphere (= tCA) and the measured one (= tCA obs) are a near-exact fit…

      Again, the residence time shows you how long an individual CO2 molecule (human or not) remains in the atmosphere before being exchanged with a CO2 molecule from another reservoir. That says next to nothing about how long it takes to remove an excess quantity of CO2 from the atmosphere back to the steady state for the current average ocean temperature. The current sink rate is not fast enough to remove all human CO2 (as mass) in the same year it is emitted. That is what causes the increase in the atmosphere…
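      Ferdinand’s mass-balance fit can be sketched numerically. The ~51-year e-folding time and the ~400 GtC of cumulative emissions are from his comment; the assumed 2.25%/yr exponential growth of emissions is my own simplification of the historical record:

```python
import math

# Excess CO2 mass above equilibrium, fed by exponentially growing emissions
# and drained with a ~51-year e-folding time (the decay rate quoted above).
E_FOLD = 51.0     # years
GROWTH = 0.0225   # assumed exponential growth rate of emissions, per year
YEARS = 160

# Emissions ending at ~9 GtC/yr today, ~394 GtC cumulative over 160 years.
emissions = [9.0 * math.exp(GROWTH * (t - YEARS)) for t in range(1, YEARS + 1)]

excess = 0.0
for em in emissions:
    excess += em - excess / E_FOLD   # add this year's input, drain 1/51 of excess

print(round(sum(emissions)))  # ~393 GtC emitted in total
print(round(excess))          # ~217 GtC left in the air, near the observed ~220
```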

      • Ferdinand Engelbeen

        As a retired CFO, I always perk up when two numbers (your 1st graph: CO2 PPM anomaly and temp anomaly) are graphed so that one EASILY draws the conclusion that one causes the other.

        Wall Street con-men use this all the time; however, I’m sure what you’re doing is ok.

        Taking your “anomaly” numbers at face value and converting them to absolute values of CO2 ppm and Kelvin temperature: a 54% change in CO2 (260 in 1850 & 400 currently) corresponds to a 0.1% change in Kelvin temperature (about 293 in 1850 & 293.4 currently).

        We won’t even talk about error bars.
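        The percentage arithmetic here checks out, up to rounding (values as given in the comment):

```python
# Relative changes computed from the absolute values quoted above.
co2_change = (400 - 260) / 260 * 100          # % change in CO2 since 1850
temp_change = (293.4 - 293.0) / 293.0 * 100   # % change in Kelvin temperature

print(round(co2_change))      # ~54 %
print(round(temp_change, 2))  # ~0.14 %, i.e. "about 0.1 %" as stated
```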

      • Chip Javert,

        You are right, but if I had plotted temperature at full scale, there would have been only a straight line left… /sarc

        In this case temperature was indeed not important, as the main message was that the increase in the atmosphere paralleled total human emissions at about half its rate (and, as an aside, that the influence of the “huge” temperature variability is small, as it is hardly seen in the CO2 increase)…

        Still some here think that the increase in the atmosphere is fully caused by (ocean) temperatures and not by humans…

    • Ferdinand Engelbeen
      March 28, 2016 at 6:48 am

      Hello again Ferd.

        As always, you keep missing the argument and the point made.

        The ~5 to 10 year half-life for CO2 is mentioned simply as the only range of CO2 residence time that can fit the reality.
        In the context of that comment it is not presented as a fact or as evidence from measurement; it is simply offered as an assessment that explains the reality without failing.

        Let me make it simple for you.
        The basis of my argument:
        human CO2 emissions account for only 1/4 to 1/5 of the overall increase in concentration, so 1/4 to 1/5 of the overall CO2 mass added during the last 120 years is anthropogenic.
        If you can show that this is not the case, then I have no argument as per my comment above.
        You just have to tackle it there, with no innuendos.

        Then, what is important about it, the main beauty of human CO2, is that we have very well-established knowledge of the CO2 concentration trend for the period in question, which serves as a reality check for any of the assessments, guesses and assumptions raised.

        The basic requirement for not failing that reality check is simple:
        if the human share is 1/4 to 1/5 of the overall increase in CO2 concentration, then that ratio must be approximately maintained when comparing yearly human CO2 emissions to the overall yearly imbalance between emissions and sinks; otherwise the CO2 concentration trend would look different, with a very significantly different acceleration. Simple as that, no rocket science and no innuendos. So the ratio of yearly anthropogenic emissions to the yearly imbalance should be ~1/4 to 1/5, otherwise it fails the reality check, no matter what.
        The IPCC and orthodox Climatology have an official estimate that puts that yearly human emission ratio in the opposite direction, at 3/4 to 4/5, and they therefore fail by default. In your case you get to a point of about 2/3 to 3/4, a little less, but that still fails the reality check too.

        So far I have not even mentioned the residence time of CO2 in the atmosphere, you see. It is simply an unavoidable result of my argument above: it is the only residence time that keeps the ratio as required; it is the most probable “refresh rate” of CO2 in the atmosphere that keeps the numbers working without failing the reality check.

        Hope you get the point, Ferd, as in principle it is very simple, regardless of how correct it may turn out to be…

        So please keep it simple, without going into weird assumptions or “string theory” approaches, if you choose to reply. Please read it carefully before you decide to tackle the argument, and stop trying to convert emissions into concentrations by some weird magic.


      • Whiten:

        Human CO2 emissions account for only 1/4 to 1/5 of the overall increase in concentration, so 1/4 to 1/5 of the overall CO2 mass added during the last 120 years is anthropogenic.

        Whiten, what is that based on?

        The current human CO2 emissions are calculated at ~9 GtC/year (~4.5 ppmv/year).
        The current increase of CO2 in the atmosphere is ~4.5 GtC/year (~2.25 ppmv/year).
        That makes human emissions around 200% of the increase in the atmosphere.

        For all human emissions since 1850: about 400 GtC.
        Total increase in the atmosphere since 1850: about 250 GtC.
        Again, a lot more human emissions than increase in the atmosphere…

        Thus again my question: what is your 1/4 to 1/5 human contribution based on?
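        Ferdinand’s ratios reduce to a one-line “airborne fraction” calculation (the numbers are his):

```python
# Current flows (GtC/yr) and cumulative totals (GtC) as quoted above.
emissions_now, increase_now = 9.0, 4.5
cum_emissions, cum_increase = 400.0, 250.0

print(emissions_now / increase_now)   # 2.0: emissions are ~200% of the increase
print(cum_increase / cum_emissions)   # 0.625: ~60% of emitted carbon "stayed"
```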

      • Ferdinand Engelbeen
        March 28, 2016 at 12:56 pm


        Human CO2 emissions account for only 1/4 to 1/5 of the overall increase in concentration, so 1/4 to 1/5 of the overall CO2 mass added during the last 120 years is anthropogenic.

        Whiten, what is that based on?

        Hello Ferd.
        This is the hard part now.
        This is a copy-paste of a comment on WUWT, from aveollila, in another good post of Ari’s a few days ago:

        March 16, 2016 at 5:25 pm

        “The main message of Halperin about the timescales of CO2 changes in the atmosphere is correct. I just add one measurable observation. IPCC says in AR5 that 240 GtC of the anthropogenic CO2 has accumulated in the atmosphere by 2011. It simply means that according to IPCC the total increase of the atmospheric CO2, from 597 GtC in 1750 up to about 850 GtC today, is anthropogenic by nature. The total fossil fuel emissions up to 2013 have been 394 GtC. In the anthropogenic CO2, the isotope ratio 13C/12C is different in comparison to the natural CO2. The measurement unit has many names but let us use the word permille, which has a very special specification. Anyway, the permille value of the anthropogenic CO2 is -26 and that of natural CO2 is -7.0. The measured permille value in the present atmosphere is about -8.4 permille, which means that the amount of the anthropogenic CO2 in the atmosphere is only 67 GtC. If the amount were 240 GtC, it should give a measurement result of -12.9 permille. It is amazing how IPCC is still a very reliable scientific organization. Why does IPCC act like this? There must be a very good reason. The reason is that, using this approach and unreliable timescales for the total CO2 change, it looks like the anthropogenic CO2 introduced into the atmosphere will never disappear. And when the warming effects of CO2 are about three times too great, the end result is the destruction of the Earth. It will be fried.”

        Read it carefully. Yes, you have the right to claim it is meaningless because it is not coming from an “authority” on the matter, but as you can see it is a very well-constructed argument questioning the authority itself on the matter, and showing a big problem there.
        I side with it because, first of all, it seems to be a very good estimate of the ratio of human CO2 concentration to the overall increase of concentration for the period in question, contrary to the IPCC’s idiotic claim.

        You see, even in the worst idiotically-claimed scenario, the human CO2 concentration could be no more than 1/2 of that overall 240 GtC increase in CO2 mass, for a very simple reason: again, the CO2 concentration trend is very steady in its acceleration, and it starts long before human CO2 emissions had any significance, while at the same time human CO2 emissions have accelerated much faster lately; and because the imbalance as it stands lately (as per the numbers of the IPCC), at ~30 Gt of CO2 emissions yearly, is too small for the increased concentrations.

        So, taking all this into account, to make the numbers fit without failing the reality check of the CO2 concentration trend, we have to consider some weird conditions as possible, regardless of the fact that such conditions are very, very unlikely; nevertheless, that would not mean impossible or improbable.

        The list of the conditions REQUIRED:

        1- The ~30 Gt CO2 imbalance has been a constant since day one, 150 years or so ago, up to the present, and came about by some “magic” in a “single” night or moment, because otherwise you lose the beautiful steadiness that the trend shows. (SO THE IMBALANCE HAS BEEN THERE FOR ~150 YEARS, always at the same amount, constant with no change since day one, at ~30 Gt CO2.)

        2- As that imbalance of ~30 Gt CO2 could not have been a result of human emissions in the early stages, up to at least the last third of the period in question, the deceleration of natural emissions over time must be in accordance, in harmony and in full synchronicity with the acceleration of human emissions during the whole period in question, because otherwise, again, the steadiness of the stubborn concentration trend would be lost.

        3- The residence time of CO2 in the atmosphere cannot be more than a 30-35 year half-life, otherwise the whole thing goes belly up.

        So, with all these ridiculous conditions required to be met by the reality of the CO2 concentration trend, the human share of the increment of CO2 in the atmosphere will still, barely and with some shoveling, make it up to at most 1/2 of the overall CO2 increase in the atmosphere for the period in question.

        But when you consider the residence time of CO2 as probably shorter, the figure for the human CO2 contribution moves from 1/2 towards 1/4, and it reaches a ~5-10 year half-life at the 1/4 figure, where nothing weird or ridiculous is required for the reality check to be passed and everything falls into place rather nicely; definitely not nicely for the IPCC, or for you, I may say.
        That is why the residence time is so important, and it really has value when estimated through hard measurements and not modeled backwards.

        Remember, the one such estimate we have that is based on measurement is far shorter than the one required under the bizarre conditions mentioned above.

        From my point of view, aveollila has got it right versus the mess of the IPCC, and up to this point I have no problem considering it a correct estimate. Surely you will disagree.

        So who do you think is correct, aveollila or the IPCC? Remember, aveollila still refers to the IPCC’s findings, but simply questions the conclusion, as it seems to be ridiculous. Who do you think is more likely to be right and correct? The IPCC conclusion is in the realm of the very, very unlikely, as per my understanding and opinion.

        Now have your field day. I have no problem accepting error on my part if shown, but please keep to the topic of the argument and the point raised if you can. Simply using your “magic” of “translating” quantities of emissions into concentrations just like that won’t help (without taking the CO2 residence time in the atmosphere into consideration, there is no way to make such translations), and “hand waving” simply on the grounds of “authority” cannot be accepted, as you very well know that it is the claims of the authority itself that are in question. Please try to avoid the possibility of circular reasoning.

        Please, if you reply, try to keep it simple also. All this said, your replies are still appreciated :)
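        For what it is worth, the δ13C mixing arithmetic in the quoted aveollila comment can be reproduced with a simple two-component mass balance. With the inputs below (my reading of the quoted values) the result is ~63 GtC and ~-12.4 permille, close to the quoted 67 GtC and -12.9, which were presumably computed from slightly different inputs:

```python
# Two-component delta-13C mixing: atmosphere = fraction f of "anthropogenic"
# CO2 plus (1 - f) of "natural" CO2, using the permille values quoted above.
D_ANTHRO = -26.0     # permille, anthropogenic CO2
D_NATURAL = -7.0     # permille, natural CO2
D_OBSERVED = -8.4    # permille, measured in today's atmosphere
ATMOSPHERE = 850.0   # GtC, current atmospheric carbon

# Solve D_OBSERVED = f * D_ANTHRO + (1 - f) * D_NATURAL for f:
f = (D_OBSERVED - D_NATURAL) / (D_ANTHRO - D_NATURAL)
print(round(f * ATMOSPHERE))  # ~63 GtC still carrying the anthropogenic signature

# Reverse check: what would 240 GtC anthropogenic out of 850 GtC measure as?
f240 = 240.0 / ATMOSPHERE
print(round(D_NATURAL + f240 * (D_ANTHRO - D_NATURAL), 1))  # ~ -12.4 permille
```

        Note that this calculation only counts CO2 that still carries the anthropogenic isotopic signature, which is exactly the distinction Ferdinand draws below.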


      • Whiten,

        OK, I now see where your 1/4th to 1/5th is coming from. The problem is indeed that you mix up two different removal rates: the residence time of an individual CO2 molecule and the e-fold decay rate of an extra CO2 mass in the atmosphere.

        Take it that humans added 100 GtC some 160 years ago. That would give an initial peak of 100 GtC of CO2 mass in the atmosphere above the 580 GtC present at that time, 100% human-caused. That means that at that moment 15% of all CO2 in the atmosphere was of human origin.

        After some 50-60 years you can see that no “human” CO2 is left in the atmosphere, as it has been near-completely replaced by CO2 from other reservoirs, mainly the (deep) oceans. Despite that, about half the extra mass (some 50 GtC) is still in the atmosphere, 100% caused by the original human input of 100 GtC.

        You see, there is a huge difference between the residence time, which is responsible for the removal of any individual CO2 molecule (including human CO2) and the removal of an extra mass of CO2 (whatever the source) out of the atmosphere. Here in graph form:

        Where FA is the percentage (fraction) human CO2 in the atmosphere, FL the same in the ocean surface, tCA total CO2 in the atmosphere and nCA natural CO2 in the atmosphere.

        I know, this is one of the most difficult points to understand about the fate of CO2, human or not. Even university professors mix up the two (residence time and decay rate of an excess), so nobody blames you for the difficulty of understanding the difference in effects…
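        Ferdinand’s 100 GtC pulse example can be sketched with two separate exponentials, one per timescale (a simplification: a ~5-year residence time for the original molecules and a ~51-year e-fold for the excess mass, both treated as pure exponential decays):

```python
import math

# One pulse of 100 GtC, tracked two ways: how much of the ORIGINAL (still
# "human-tagged") CO2 remains, versus how much EXTRA MASS remains, any origin.
PULSE = 100.0      # GtC added at t = 0
RESIDENCE = 5.0    # years: molecular exchange with other reservoirs
E_FOLD = 51.0      # years: removal of the excess mass

for t in (0, 10, 35, 55):
    original = PULSE * math.exp(-t / RESIDENCE)  # still-original molecules
    excess = PULSE * math.exp(-t / E_FOLD)       # extra mass, whatever its origin
    print(t, round(original, 1), round(excess, 1))
```

        By year 35 essentially none of the original molecules are left (~0.1 GtC), yet about half the extra mass (~50 GtC) still is; with a 51-year e-fold the half-life is ~35 years rather than the 50-60 mentioned above, but the qualitative point is the same.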

      • Ferdinand Engelbeen
        March 29, 2016 at 9:05 am.
        Hello again Ferdinand.

        I don’t see you getting the point yet.
        Let me start from another angle.

        You say:
        “Take it that humans added 100 GtC some 160 years ago. That would give an initial peak of 100 GtC of CO2 mass in the atmosphere above the 580 GtC present at that time, 100% human-caused. That means that at that moment 15% of all CO2 in the atmosphere was of human origin.”
        I do not really get what point you are making there, but you seem to ignore that the translation of any amount of CO2 emissions into concentrations, regardless of human or natural origin, needs the residence time of CO2 in the atmosphere to be taken into account. It is simple, as I keep telling you: a different CO2 residence time gives a different translation. That is why I keep telling you to stop making such translations the way you do, because they mean nothing, unless you are just interested in spreading confusion and doubt.

        A different CO2 residence time requires a different amount of CO2 imbalance to produce the same concentration increment, Ferd; you cannot just use some “magic” there to achieve whatever result pleases you.

        Coming back to the actual matter, Ferd: the IPCC has cooked the books at certain values, and if those values are changed there are consequences for the relation of human CO2 versus natural CO2.

        If the IPCC claims that 4/5 of the yearly imbalance is human, and that 4/5 of the increment of CO2 concentrations over the last 150 years is anthropogenic, with a CO2 residence time considered and “estimated” at somewhere around a 60-100 year half-life, then the problem it faces, and has to explain away, is that the acceleration of anthropogenic CO2 emissions is not mirrored by the CO2 concentration trend and its acceleration, when it must be, since almost the whole increase is claimed to be due to anthropogenic CO2 emissions.
        It cannot be considered any other way.

        You see, there are two things to be taken seriously: the actual concentration trend of the last ~150 years, with its amazingly steady acceleration, and the trend of human CO2 emissions, with its acceleration too.

        You see, that is the beauty of anthropogenic CO2 emissions: they are well known and well estimated, in both amount and acceleration. They work well enough as a “tracer” to aid a better understanding of nature. They are there, and we cannot get rid of them, ignore them, or misuse-abuse the knowledge about them.

        The only way to explain the actual anthropogenic CO2 emissions trend in relation to the CO2 concentration trend, as the two actually stand in reality, is to consider that the residence time of CO2 in the atmosphere is far shorter than the IPCC claims it to be.
        The only thing that can stop the acceleration of human CO2 from having an “impact” or a detectable effect on the acceleration of the CO2 concentration trend is a very fast “refresh” rate (a short residence time of CO2).

        Again, in the case of a long CO2 residence time, as per the IPCC and orthodox climatology, the CO2 concentration must mirror the anthropogenic CO2 emission trend, as all or almost all of it is claimed to be a result of human CO2 emissions; that should especially be so for at least the last 40 years.
        But you see it is not… this was clearly shown by Dr. Salby too.

        The simple reason, from this viewpoint, why the IPCC has decided on such a long residence time for CO2 is simply that the imbalance that is supposed to produce the actual CO2 concentration trend, claimed to be almost entirely anthropogenic, is too small for the math to work with a faster “refresh” rate of CO2.
        Even there it barely makes it, with a lot of pushing and shoveling; and while the amount of the concentration increment is somehow “justified to a degree”, the actual acceleration and its steadiness are lost, and the acceleration of human CO2 emissions fails very badly to be mirrored by the acceleration of the concentration trend, contrary to what should happen if anthropogenic CO2 emissions were causing all of that concentration change.

        Reducing the CO2 residence time to explain the discrepancy, as one should when considering the actual acceleration of both trends mentioned, requires that the imbalance causing the CO2 concentration increment increase considerably, because otherwise there will simply be a smaller increment of CO2 concentration.

        So, in the case of reducing the IPCC’s residence time to a value of a 30-35 year half-life, as in the case shown the other day by Mr. Halperin, the imbalance (you should know by now what I mean by that) must increase, and the anthropogenic factor drops from 3/4-4/5 to ~1/2 of the imbalance and of the concentrations too, meaning that the latest imbalance should be ~50-60 Gt of emissions versus the ~30 Gt of the IPCC estimate, where anthropogenic CO2 still accounts for only ~25 Gt of emissions.
        Keep in mind that decreasing the residence time of CO2 requires increasing the imbalance (the amount by which yearly emissions exceed yearly sinks); otherwise you lose the amount of the concentration increment and its actual acceleration, and end up with a totally different concentration trend than the real one.

        Even at a 30-35 year half-life residence time for CO2, you can see how badly the IPCC’s books are cooked.

        But from where I stand it is even worse, because even the 30-35 year half-life of CO2 is too high. Even with a 1/2 anthropogenic share and that residence time, the concentration trend should still mirror, to some degree, and be affected by, the human CO2 emissions trend, especially by the acceleration of that trend over at least the latest ~20-30 years; but still no such thing is seen or observed when comparing these two trends.
        That residence time is still too long (the refresh rate not fast enough) to prevent, or actually “stop”, the acceleration of human emissions from affecting, and being “mirrored” in, the concentration trend and its acceleration.
        The whole effect of that residence time covers, at ~95%, a ~120-140 year period, so its impact over a period of 40 or 20 years would be minimal, doing little to prevent the probable effect of the acceleration of human CO2 emissions on the acceleration of the concentration.

        But since there is no mirroring at all of the human CO2 acceleration in the concentration trend, the simple conclusion and approach must be to consider that the residence time is still much less even than that.

        A ~10-year half-life residence time covers, at 95%, a period of ~40 years, very well covering the most significant period of human CO2 emissions and their acceleration, and is therefore much more effective in explaining why the acceleration of human CO2 emissions is not actually “mirrored” by the actual concentration trend and its acceleration.

        Going down even to a ~5-year half-life residence time works even better, but anywhere in the 5-10 year range will work fine to explain the given actual conditions. That will also mean, though, that the CO2 emission imbalance must increase to ~4-5 times the actual anthropogenic CO2 emissions, so approximately 4 x 25 Gt, or 90-100 Gt of CO2.

        Now, that is the explanation of how I got the figures you asked for: simple math and reasoning. The copy-paste of aveollila’s comment was simply to show that someone else had arrived at a similar figure from a different angle of approach, which seems reasonable enough to consider and not ignore. Both of us, in this case, point to some weird book-cooking by the climate orthodoxy and its “authority”, as Mr. Halperin also showed the other day.

        How wrong or right any one of us is, only time will tell; but it all seems to show a worse-than-we-thought case of deception and conning by those who are charged with authority and trust in such matters.

        You are very welcome to keep ignoring such points and to keep having full trust in “anthropogenic is all there is”, with no further questioning.

        Sorry for this getting rather long. :)

        Thank you for your time.


      • Whiten,

        Again, you are confused: the residence time doesn’t add or remove one gram of CO2 into or out of the atmosphere. Zero, nothing, nada. As long as inputs and outputs are equal. The residence time may increase a tenfold or reduced to half it was last year. Still zero CO2 is removed or added after a full year (a full seasonal cycle, which is at the base of most of the residence time).

        Humans emit ~9 GtC as CO2 per year, which is the same as adding ~4.5 ppmv CO2 per year. The residence time doesn’t change that by one gram: it would all stay in the atmosphere forever as mass (though not as the original human CO2 molecules), whatever the residence time, as long as natural sources and sinks remain equal.

        Of course, as we add CO2 to the atmosphere, the CO2 pressure in the atmosphere increases, which suppresses the influx of natural CO2 near the equator and increases the outflux of natural + human CO2 into the polar sinks. The same happens, to a lesser extent, in the ocean surface and vegetation. The difference between natural sources and total sinks is what removes CO2 from the atmosphere; that is the e-fold decay rate of any excess CO2, whatever its source, above the long-term dynamic equilibrium (“steady state”) for the current ocean temperature. That has nothing to do with the residence time. Over the past decades, the e-fold decay rate was ~55 years, or a 30-35 year half-life.

        Again, the e-fold decay rate is about the removal of CO2 mass from the atmosphere; it has nothing to do with the origin of what remains in the atmosphere. That changes much faster: 20%/year of all CO2 in the atmosphere is exchanged with CO2 from other reservoirs. Thus 20% of all original human CO2 per year is replaced by CO2 from, mainly, the deep oceans, which contain no (*) human CO2, as that water traveled some 1,000 years through the deep oceans; what comes out of the deep is CO2 from long before the industrial revolution. That means the measurable “fingerprint” of the original human emissions (low 13C, zero 14C) rapidly disappears. That disappearance is what the residence time measures. But that is the molecular exchange rate, not the mass removal rate…
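        The mass-vs-molecules distinction above can be sketched with a toy simulation. All numbers are round assumptions (~800 GtC airborne, ~90 GtC/yr gross exchange) except the 55-year e-fold mass decay taken from the comment:

```python
# Toy two-timescale simulation: the gross exchange swaps *molecules*
# quickly, while the *mass* of an excess pulse decays slowly.
ATM0 = 800.0     # background atmospheric carbon, GtC (assumed round number)
GROSS = 90.0     # gross annual exchange flux with other reservoirs, GtC/yr (assumed)
TAU = 55.0       # e-fold decay time of excess mass, years (from the comment)

excess = 100.0   # a 100 GtC pulse: total extra mass still airborne
labeled = 100.0  # how much of the *original* pulse molecules are still airborne
DT = 0.1         # time step, years

for _ in range(200):                                 # integrate 20 years
    net_sink = excess / TAU                          # net mass removal, GtC/yr
    out_frac = (GROSS + net_sink) / (ATM0 + excess)  # fraction of the air leaving per year
    labeled -= labeled * out_frac * DT               # molecules swapped out or removed
    excess -= net_sink * DT                          # mass actually removed

# roughly 70 GtC of the pulse mass is still airborne after 20 years,
# but only about an eighth of the original molecules remain
print(round(excess, 1), round(labeled, 1))
```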

        You see, it is highly confusing, but even when the IPCC may be wrong about a lot of things: the Bern model, the positive feedbacks in the models they use, the overblown predictions of temperature increase, the non-existent catastrophes caused by more CO2,… On one point they are right: human emissions are the cause of most of the CO2 increase in the atmosphere…

        (*) not fully right as the biosphere drops some CO2 out of the surface into the deep oceans, which is already affected by human emissions with low-13C…

  21. The big problem with both models is that by using these “exponential decay” fudge factors they neglect to consider changes in the natural net flux of CO2 when emissions from the tropics exceed sinks near the poles.

    • Fred,

      Even with temporary outliers in tropical emissions (mainly during an El Niño), tropical emissions exceed the polar sinks for only a few months, and that is largely compensated by the extra sink capacity from the subsequent regrowth of the tropical forests. The net result is that the huge year-by-year variability levels off to below zero within 1-3 years: vegetation is a small, but growing, net sink for CO2. The earth is greening…

  22. Willis, I know from previous comments made that you prefer to blog rather than submit to peer reviewed journals. However, this seems like a dynamite story that may merit doing so. If the story is in the mainstream scientific literature, it is a bit harder to ignore.

    Regarding the issue of whether a doubling of CO2 and an increase of 2°C is measured from pre-industrial or present levels, what matters is the final temperature reached. Using temperature estimates based on isotopes from ice cores, the central estimate for the Eemian, about 125,000 years ago, is that temperatures were about 3°C higher than the recent average. As the planet survived the Eemian, it seems there is plenty of room for a 2°C increase, whether we calculate from now or from pre-industrial levels.

  23. The true insanity of all of these calamitous scenarios is the ridiculously stupid assumption that the world will still be burning fossil fuels anywhere near the current level 40 years from now. Anyone who is familiar with power technologies realizes that Gen4 nuclear reactors (such as Transatomic Power’s new version of the molten salt reactor) will replace all other forms of power production, excepting (perhaps) peak-load power plants. This doesn’t really require fear of carbon emissions, since the new technology is not only cleaner and safer than any existing technology, but also cheaper and totally reliable, and it largely eliminates the nuclear waste issue. There are no obstacles to the commercialization of this new technology, and it is proliferation-resistant to a high degree. Nor can anyone cause a nuclear meltdown at such a plant.
    The main obstacle is the braindead greenies, who oppose nuclear power regardless. But they can easily be neutralized by facts, when presented properly. Even today we have informational videos produced by Transatomic Power, done by the female half of the company’s ownership team.

  24. James Hansen claims we’ll reach 1,400 ppm due to fossil fuel burning by about 2130, and this will lead to 20°C of warming and an uninhabitable planet.
    If we assume that fossil fuel emissions increase by 3% per year, typical of the past decade and of the entire period since 1950, cumulative fossil fuel emissions will reach 10 000 Gt C in 118 years. Are there sufficient fossil fuel reserves to yield 5000–10 000 Gt C? Recent updates of potential reserves, including unconventional fossil fuels (such as tar sands, tar shale and hydrofracking-derived shale gas) in addition to conventional oil, gas and coal, suggest that 5×CO2 (1400 ppm) is indeed feasible. Our calculated global warming in this case [1400 ppm] is 16°C, with warming at the poles approximately 30°C. Calculated warming over land areas averages approximately 20°C. Such temperatures would eliminate grain production in almost all agricultural regions in the world. Increased stratospheric water vapour would diminish the stratospheric ozone layer. More ominously, global warming of that magnitude would make most of the planet uninhabitable by humans.
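    The compounding arithmetic in the passage above is easy to check, assuming a round ~10 GtC/yr of current emissions (an illustrative figure, not a measurement) growing at 3 %/yr:

```python
# Cumulative fossil emissions after 118 years of 3 %/yr growth.
rate = 10.0        # GtC/yr starting emission rate (assumed round number)
total = 0.0        # cumulative emissions, GtC
for year in range(118):
    total += rate
    rate *= 1.03   # 3 %/yr growth

print(round(total))  # on the order of 10,000 GtC, as the passage states
```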

    • KennethRichards, thank you for the link. If Hansen et al are as far off as I think they are, it should not take long to find out. Same if they are correct and I am wrong.

      • Another Hansen commentary on fossil fuel emissions. He’s puzzled about why the airborne fraction hasn’t been correlating with the emission rate for several decades…

        “However, it is the dependence of the airborne fraction on fossil fuel emission rate that makes the post-2000 downturn of the airborne fraction particularly striking. The change of emission rate in 2000 from 1.5% yr-1 to 3.1% yr-1 (figure 1), other things being equal, would [should] have caused a sharp increase of the airborne fraction” —- Hansen et al, 2013

  25. [Note: I posted this comment yesterday at an earlier post on CO2, but missed the conversation, so please forgive re-posting here.]

    There is another carbon-14 observation that enlightens this debate. Each year, cosmic rays create roughly 8 kg of carbon-14 in the upper atmosphere, and have done so for millions of years. One in eight thousand carbon-14 atoms decays into nitrogen every year. For equilibrium, there must be 64,000 kg of carbon-14 on Earth, so that it decays at the same rate it is being created. But there is only 800 kg of carbon-14 in the atmosphere (I’m rounding to one significant figure). Where is the rest of it? And how does a net transfer of 8 kg of carbon-14 into this reservoir take place each year? You will find more details at the posts starting with the one below.

    So far as I can tell, the remaining 63,200 kg must be in the deep ocean, where the concentration of carbon-14 is 80% of that in the atmosphere. We have something like 40 Pg (petagrams) of carbon moving into the deep ocean each year, and 40 Pg coming back, so a net flow of 8 kg of carbon-14 takes place into the deep ocean. We can write down analytical equations for the resulting two-reservoir system and solve them directly or numerically.

    We also note that absorption by the ocean, and emission, are governed by Henry’s Law: absorption by the oceans increases in proportion to the concentration in the atmosphere. According to this model, the residence time of CO2 in the atmosphere is around 17 years. According to the bomb test data, it’s about 15 years. If we consider how long it will take humans to double the atmospheric concentration of CO2, the answer is roughly 6,000 years, because we have to double the concentration in the oceans too.

    Or so it seems to me, anyways.
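    The equilibrium bookkeeping above can be verified directly from the commenter’s own round numbers (production rate, decay probability, reservoir sizes and the 80 % deep-ocean ratio are all as stated, not independently measured here):

```python
# Sanity check of the two-reservoir 14C bookkeeping in the comment above.
PRODUCTION = 8.0          # kg 14C per year from cosmic rays (as stated)
DECAY = 1.0 / 8000.0      # fraction of 14C atoms decaying per year (as stated)

# Equilibrium inventory: total decay must balance production.
inventory = PRODUCTION / DECAY
print(round(inventory))   # 64000 kg, as stated

# Net 14C flux carried by the gross two-way carbon exchange:
conc_atm = 800.0 / 800.0  # kg 14C per PgC in the atmosphere (800 kg in ~800 PgC)
exchange = 40.0           # PgC/yr each way (as stated)
ocean_ratio = 0.80        # deep-ocean 14C concentration relative to atmosphere
net_flux = exchange * conc_atm * (1.0 - ocean_ratio)
print(round(net_flux, 3)) # 8.0 kg/yr net into the deep ocean
```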

    • “Each year, cosmic rays create roughly 8 kg of carbon-14 in the upper atmosphere, and have done so for millions of years.” A very crude approximation. The C14 concentration has varied historically; read about the corrections needed for radiocarbon dating.

    • Kevan Hashemi,

      The 14C bomb test decay suffers from the time lag (~1,000 years) between the sinks into the deep near the poles and the return near the equator: what was going into the deep in 1960 was at the height of the tests, while what returned was from ~1,000 years ago, at about 44% of the bomb spike. See my reaction there.

      That makes the decay rate of the 14C bomb spike several times faster than that of a 12CO2 spike…

      With an observed net sink rate of 2.15 ppmv/year for a 110 ppmv excess pressure in the atmosphere, the e-fold decay rate is slightly over 50 years, about 3 times slower than the removal of 14CO2 out of the atmosphere…

      Over the past 55 years, that slightly-over-50-years decay rate has been practically constant, which points to an absorption rate roughly linear in the extra pressure in the atmosphere above the oceans’ steady state, per Henry’s law…
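      The arithmetic behind “slightly over 50 years” is a one-liner, using the two figures quoted above:

```python
# e-fold time from the observed net sink rate and excess pressure.
import math

excess = 110.0        # ppmv above the assumed steady state (from the comment)
net_sink = 2.15       # ppmv/yr observed net removal (from the comment)

tau = excess / net_sink           # e-fold decay time, years
half_life = tau * math.log(2.0)   # matches the 30-35 year half-life quoted earlier
print(round(tau, 1), round(half_life, 1))  # ~51.2 and ~35.5 years
```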

      • George: the creation rate is constant to about ±25%.

        Ferdinand, I did admire your diagram in the comments to the previous CO2 post, and it looks good to me, and I hear what you are saying about carbon-14 and carbon-12, although I have not studied carbon-12. My point is: you don’t need anything more than the following four numbers to figure out the carbon cycle of the Earth: the carbon-14 production rate by cosmic rays (to ±25%), the mass of carbon in the atmosphere (to ±25%), the decay rate of carbon-14, and the concentration of carbon-14 in the deep ocean. These are sufficient to fix the carbon cycle’s behavior, as expressed in the classic paper by Arnold et al., “The Distribution of Carbon-14 in Nature”. The bomb test data are an independent confirmation of the model. The model shows how it will take 6,000 years to double the atmospheric CO2 concentration at 10 Pg/yr.

      • Kevan,

        You have a problem with your 14CO2 cycle: it is not enough to know the total mass of 14C in the oceans; you need to take into account the long delay between what is going into the deep oceans and what returns.

        The total amount of 14CO2 is negligible compared to the amount of 12/13CO2 circulating between the deep oceans and the atmosphere. That means a doubling of 14CO2 in the atmosphere (as happened with the bomb tests) doesn’t have any influence on the total CO2 going in and out: what goes in and out as 12/13CO2 remains the same (in ratio to the extra CO2 in the atmosphere), but what returns as 14CO2 is only half the bomb spike (minus the radioactive decay over 1,000 years), even if there is zero difference in 12/13CO2 input/output.

        The change is in the difference between the 14CO2 concentrations at the input and the output, while for a 12CO2 spike there is hardly a change in concentration, only in the mass which returns. The latter has a much slower decay rate than the 14CO2 spike, whose decay reflects the product of the total returning (12/13CO2) mass and the 14CO2 concentration…

  26. :>

    I normally don’t like to post just to applaud, but: very nice, Willis. It’s stuff like this that never occurs to me in the first place. Not enough fossil fuel reserves to double from current levels, huh?


  27. Excellent demonstration of the dimensions of the problem. An engineer’s approach. Climate science tends to keep these sorts of evaluations mainly in the dark so they can hyperbolize the fears without measuring anything. In a discussion about world population a few years ago (I’ve mentioned it a few times since in comments) I noted that the world’s population could all fit into Lake Superior with 15 m² each to tread water in. Yeah, I know we take up a lot of space, but I just wanted to see how much physical space we take up first.

    Regarding the sinks end of the formulae, the recent greening of the planet seems to have taken everyone by surprise, even (possibly) Ferdinand Engelbeen. I presented a simple thought experiment a few days ago on another CO2 thread that the biological sinks are exponential: a fringe of green in the Sahel would make the soil in that strip a bit moister, and a new ‘fringe’ would seed into the arid area, and so on, with the original and successive fringes increasing their masses going forward. This would trim the higher estimates of future atmospheric CO2 content until an equilibrium was reached between expanding emissions and sinks. I think we are going to see the effects in the very near future, with the slope of CO2 growth beginning to flatten. Any net cooling would also slow the growth further, although the lag would hide it for a while; in 85 years it may show up.

    Thanks very much for this. I can see now why there is so much hand waving and throwing up very slow sink rates and long residence times. They shocked themselves with the math first and then cooked up ways to shore up the CO2 fears. I imagine the Bern model had many iterations to make it as scary as decently possible. When I know there is an agenda behind a program, I start by cutting their projections at least in half because I know they have stuffed every supportive parameter to the limits of what people can buy into.

    • Gary Pearse,

      Why should I be surprised?

      It has been known for some time that the biosphere as a whole is a small, but growing, sink for CO2, at least since 1990, when the accuracy of the oxygen measurements became good enough to measure the small surplus of oxygen produced by the biosphere…

      Still, it is only the third decay speed (~170-year e-fold decay, if I remember well) in the Bern model, and for the 110 ppmv extra pressure in the atmosphere the extra uptake is still limited to ~1 GtC/year (0.5 ppmv/year) of the ~9 GtC/year human emissions. The fastest sink is the ocean surface (but limited to 10% of the change in the atmosphere, or ~0.5 GtC/year). The second is the deep oceans.

      • Ferdinand, I did put ‘possibly’ in deference to your widely accepted expertise on the subject. But I still believe the significant greening over a relatively short time, especially fringing arid areas which were expected to become even more arid, was a surprise to most. Bravo to you for not being surprised, but I wish you had said something a long time ago about it. The extra uptake of 1 GtC/year sounds a little ‘static’ to me, and that is the impression one gets from discussions. My point is that an exponential growth governs this sink (and the ones in the oceans). Is this not a new idea coming out of the greening?

        Dissolved iron in the ocean is “low”, but the general abundance of iron in basaltic volcanic-floored ocean basins, the issuing of iron from weathered rocks on land by rivers (where iron averages 5% of the total composition), meteoric dust, etc., means that when iron is taken up by biota, there is an abundance of sources to replenish it. Essentially, calcium carbonate too has low solubility, but from the same sources it is abundant and available to continuously replenish the ocean’s soluble burden. How else does one account for the coccolithophores making up the Cliffs of Dover, etc., and the abundant shellfish of the oceans? Shellfish can even take it out of fresh water in granitic rock basins.

        I can see most of us have been deceived on these issues. With a cap on atmospheric CO2 at ~550-650 ppm if just left alone, all this worry about the need for iron fertilization and quickly shutting down the fossil fuel business turns out to be a mask for the fact that we are already near the atmosphere’s cap for the effects of today’s emissions. Knowing that wasn’t going to push the new world order agenda as far forward as it has. Henry’s law is all very well for the world in an Erlenmeyer flask, but is much wanting in the dynamic situation of the ocean’s and atmosphere’s complexities.

      • Gary,

        I did mention the increase of uptake by the biosphere of ~1 GtC/year many times before, including the two links I have about that budget:

        I don’t have a recent update showing the further evolution of the biological sink, but what is clear is that it is heavily influenced by El Niño, when all bio-life suddenly turns into a net source, followed by a net sink when temperatures drop again…

  28. But, but, but……

    Willis, are you really saying that the IPCC is not simply predicting Peak Oil well before the end of the century, but Peak Fossil Fuel too?? Arghhh, wash your mouth out with soap and water, otherwise Richard S Courtney will have a hissy fit. ;-)


  29. Willis,

    Wonderful post. It would be even better if you did not include the incorrect single-exponential model.

    You wrote: “we still don’t have enough information to distinguish whether the Bern Model or the single exponential decay model is more accurate.”

    That is not true. I don’t know if the Bern model is right or wrong, but the single exponential decay model is certainly wrong. The short reason is that C-14 decay has a different time constant, so there must be at least two exponentials involved.

    If you look at Ari Halperin’s paper, he starts out with a detailed description of a rather complex model. He then makes a number of simplifications that alter the physical meaning of the model. He ends up with his equation (16), which I reproduce here with somewhat different notation:

    d(C-Ce)/dt = E – lambda*(C-Ce)

    where C is concentration of CO2 in the atmosphere, Ce is equilibrium concentration, E is emission rate, and lambda is a first order rate constant. C and E are functions of time, Ce and lambda are constant properties of the system. Halperin’s equation looks different since he replaces (C-Ce) with the excess concentration, for which he uses the symbol C and breaks up E into several terms, but it is mathematically identical.

    Physically, the above equation represents a linear two-box model in which one box is the atmosphere and the other is an infinite reservoir of CO2. I say infinite because Halperin assumes that no matter how much CO2 is added to the reservoir, the concentration, Ce, in equilibrium with the reservoir does not change.
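    For concreteness, the one-box equation above can be integrated numerically; the parameter values here are illustrative assumptions, not values from Halperin’s paper:

```python
# Forward-Euler integration of d(C - Ce)/dt = E - lambda*(C - Ce),
# checked against the closed-form solution for constant emissions.
import math

CE = 280.0           # equilibrium concentration, ppmv (assumed)
LAM = 1.0 / 50.0     # rate constant, 1/yr (assumed 50-year time constant)
E = 2.0              # constant emission rate, ppmv/yr (assumed)

c = CE               # start the atmosphere at equilibrium
DT = 0.1
for _ in range(1000):                 # integrate 100 years
    c += (E - LAM * (c - CE)) * DT

# Closed-form check: excess -> (E/LAM) * (1 - exp(-LAM*t))
analytic = CE + (E / LAM) * (1.0 - math.exp(-LAM * 100.0))
print(round(c, 1), round(analytic, 1))  # the two agree closely
```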

    Now one can define the lifetime of CO2 at least four different ways:
    (1) The average residence time of individual CO2 molecules in the atmosphere as indicated, for example, by the lifetime of bomb test C-14.
    (2) The apparent residence time as indicated by comparing emission history to concentration history. That is what Halperin calculates.
    (3) The pulse decay time, that is, the decay time observed following the emission of a large pulse of CO2 into an initially equilibrium atmosphere.
    (4) The decay time following a sudden stop in emissions.

    In a complex system, there is no reason that these four lifetimes have to be the same. But in Halperin’s model, all four are identical. We know for a fact that (1) and (2) are different, so Halperin’s model is wrong and cannot be used to make extrapolations into the future.

    • Agreed, the Halperin model does not even provide a particularly good match to the last 60 years of data (despite his claiming it was an “excellent” fit). A single exponential is totally unsuitable for extrapolation.

      The infinite sink is a problem. It implies that, given enough time, all of the CO2 would end up in the oceans. Silly.

      The 15% which remains in the Bern parameters is supposed to reflect the proportion that would remain in the atmosphere when the sinks reach their new equilibrium. I don’t know whether that is an accurate guess.

      • Greg,

        You wrote: “The infinite sink is a problem. It implies that, given enough time, all of the CO2 would end up in the oceans. Silly.”

        To be fair to Halperin, he effectively assumes an infinite reservoir, not an infinite sink. For the reservoir, the concentration will eventually come to a certain fixed value, independent of how much CO2 was emitted and absorbed. For a sink, the fixed value would be zero, which would be silly indeed.

    • Mike M. (period),

      Except for (1) which is a complete different item (the long lag between sinks and return of 14C makes it quite different), (2) to (4) should be equal for a linear system. As far as I know, Halperin didn’t use equation (1) at all.

      Besides the questionable partitioning into quantities in the Bern model, it doesn’t make much difference whether you use one decay rate or a mix of several, as long as there is no limit on the maximum uptake; except for the ocean surface, there is no such limit. Even if you look at the emissions up to today, that gives no more than 3 ppmv extra in the atmosphere once the steady state between the deep oceans and the atmosphere is reached again. Thus little residual increase, not even with 900 or 2000 GtC of emissions.

      The general approach of multi-decay model is:

      1/τ = 1/τ(1) + 1/τ(2) + 1/τ(3) +…

      As long as the decay rates don’t change over time (the first being rapidly saturated, thus also giving a fixed decay), it doesn’t matter if you use the total decay rate or the sum of the individual one’s. The overall decay rate is slightly faster than the fastest decay rate, except for the first, because of its limit in quantity.
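      For strictly linear (first-order) sinks, the combination rule above is easy to verify numerically; the three time constants below are placeholder examples, not the Bern values:

```python
# Parallel first-order sinks: combined rate is the sum of the rates,
# so 1/tau = 1/tau1 + 1/tau2 + 1/tau3.
import math

TAUS = [55.0, 170.0, 1000.0]   # assumed e-fold times of three linear sinks, years

rate_total = sum(1.0 / t for t in TAUS)
tau_combined = 1.0 / rate_total        # faster than the fastest sink alone

# Simulate the three sinks acting together on one excess for 50 years:
x = 100.0
DT = 0.01
for _ in range(5000):                  # 50 years at DT = 0.01
    x -= sum(x / t for t in TAUS) * DT

expected = 100.0 * math.exp(-50.0 / tau_combined)
print(round(tau_combined, 1), round(x, 1), round(expected, 1))
```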

      • Ferdinand,

        You wrote: “Except for (1) which is a complete different item (the long lag between sinks and return of 14C makes it quite different), (2) to (4) should be equal for a linear system.”

        Lifetimes (3) and (4) should be the same in a linear system, but they need not be the same in a non-linear system, which is what we have in reality. Lifetime (2) should be different from the others, even in a linear system, provided there is more than one process involved.

        “As far as I know, Halperin didn’t use equation (1) at all.”

        I don’t understand what you mean. That he ignored lifetime (1)? Why is that relevant? His model is the equation that I gave.

        “The general approach of multi-decay model is:

        1/τ = 1/τ(1) + 1/τ(2) + 1/τ(3) +…

        As long as the decay rates don’t change over time (the first being rapidly saturated, thus also giving a fixed decay), it doesn’t matter if you use the total decay rate or the sum of the individual one’s.”

        That is simply not true. You can use that formula to combine the effects of multiple sinks when you have a steady state or quasi-steady state; that is why the Halperin model can give a decent fit. It will also give the initial rate of decay. But if you combine them in that way, you will end up with a huge error when you extrapolate.

      • Mike M. (period),

        Some confusion here…

        I was responding to your points (1) … (4).

        Your point (1) is about the residence time of an individual CO2 molecule, which is not relevant for the decay rate of any excess CO2 mass in the atmosphere and thus not used by Halperin.
        Your points (2), (3) and (4) should give the same decay rates for a bunch of linear processes, no matter whether you use the individual decay rates or one overall decay rate.

        You can use that formula to combine the effects of multiple sinks when you have a steady state or quasi-steady state

        Not at all, it is true for any combination of linear decay processes, no matter how far from steady state. Here for a double decay process:

        If we may forget the first decay rate for a moment, which is quite limited in uptake, the dominant decay is the second one, into the deep oceans, but the third one, into vegetation, also helps, as the overall decay is slightly faster than the second one alone.

        The main difference between the single decay and the Bern model is not the multiple decay rates; it is the partitioning into separate compartments, each with its own maximum sink limit, which makes extrapolation of the Bern model more questionable than the single decay model… See the nice fit of the past CO2 increase in the second graph of my response to Whiten here, with a single decay rate…

      • Ferdinand,

        “Not at all, it is true for any combination of linear decay processes, no matter how far from steady state.”

        I was indeed careless in what I said. You are correct *if* there is no saturation of the sinks. I had accepted the conventional wisdom on saturation, and implicitly assumed that in my earlier response, but you have given me reason to doubt that. If you are correct and there is no saturation, then Halperin’s model may give a reasonable extrapolation.

        My default position is that when multiple groups of capable people put years into studying something, it is unlikely that they have made some dumb error. Science can not otherwise proceed. But when evidence of an error is presented, that must also be considered. So it looks like I have some reading and thinking to do.

      • Mike M. (period),

        The Bern model was discussed already in 2001 between Peter Dietze (who used a single decay model too) and Fortunat Joos and others:

        Since that time, the 55-year e-fold decay (with a slightly different formula) has remained about the same, even becoming a little faster, which points to (currently) no limit on the CO2 uptake at the ocean sink places.

        I think the main problem in the Bern model is that they calculated it from a gigantic 5,000 GtC pulse, which indeed gives a huge residual even in the deep oceans, but they applied it to even the smallest pulse in the present.

        From the above discussion, regardless of the mutual misunderstandings, it seems that the Bern model makers applied the Revelle factor to the whole ocean surface, including the sink places, which is highly questionable. Feely’s compilation of pCO2 measurements all over the oceans was published in the same year as that discussion, and thus may not have been known at the time…

        I don’t think many researchers are busy modeling the CO2 cycle; most may even be trying to figure out the present cycle and don’t care (much) about future scenarios…

  30. Extrapolating Willis’s graphs into the next century would show a significant fall in CO2 (from 700 down to less than 500 by 2135 using the higher numbers, or from 500/400 down to less than 400/340 using the lower), with consequent cooling, falling sea level, etc.

    • Building on this, if all oil, gas and coal were burnt this century, presumably the difference between TCR and ECS would disappear. In AR5, TCR is in the range 1.0 to 2.5°C and ECS 1.5 to 4.5°C. If CO2 were to get into the 700+ range by the end of the century and then fall dramatically the next, the top end of the temperature rise would fall dramatically.

  31. Willis:

    First, you can see that unfortunately, we still don’t have enough information to distinguish whether the Bern Model or the single exponential decay model is more accurate.

    It is clear that the three-exponential model will be more accurate: it has more parameters. The global carbon cycle obviously cannot be accurately represented by a single exponential. It may be “good enough” over 60 years or so of data, such that one may suggest the more parsimonious description is preferable, but it will not be more accurate.

    However, what is parsimonious for fitting a limited period of known data is NOT going to work as an argument for what is best for wild extrapolation outside the range of the data.

    Furthermore, if the single exponential is derived by fitting the extended historical data, it will not even be optimally fitted to the last 60 years of good data.

    Historical emission data show three different, roughly exponential rates of growth. Probably only the last is meaningful for ( business as usual ) extrapolation.

    Since the Bern model seems to be “validated” by comparison to other models, I’m not particularly convinced by their derived coefficients, but the idea of three time constants for three main reservoirs seems sensible.

    However, the thrust of the article is interesting: we cannot keep on doubling atmospheric CO2. One doubling from present levels is about the outside limit, and we’re not going to get beyond about 2.5× pre-industrial.

    Good article.

  32. So, yeah, fine, even using their model, the max rise in atmospheric CO2 is insignificant. But, the model is bollocks. Atmospheric CO2 is governed by temperatures, and humans have very little impact on it.

    In the near future, La Nina is going to send temperatures crashing down, and there will be a decade or two of declining or stable global temperature thereafter. We will see the rate of change of CO2 decline with it, even as human inputs continue increasing. Hopefully, that divergence will finally end the vainglorious notion that humans are in control of the planet.

    • Bart,

      Where were you so late?

      The increase is 90% caused by human emissions and 10% by temperature, as all observations point to a human cause and none to temperature as the sole cause. Temperature variability causes most of the year-to-year variability, which is no more than ±1.5 ppmv for extremes like El Niño and Pinatubo, around the 80 ppmv CO2 increase. See the real cause of the increase:

    • Bartemis:

      I just noticed something on the WoodForTrees temperature (green) versus net atmospheric CO2 emission correlation.

      Do you see that around the year 1990 the green (temperature) is above the red (net emission)? Was that a “strong” El Niño year? Was it different from the others?

  33. The amount of CO2 in the atmosphere could easily double over the next hundred years!

    Everyone assumes the increase in atmospheric CO2 is because of anthropogenic emissions. That doesn’t have to be completely true.

    The amount of CO2 in the intermediate and deep oceans dwarfs that everywhere else; releasing 1.5% of the oceans’ CO2 would double the amount of CO2 in the atmosphere. Referring to the CO2 solubility graph on this page, we find that a temperature rise of 0.6 deg. would do it. (Yes, I do realize the amount of heat it would take to raise the temperature that much.)

    Anthropogenic emissions aren’t raising the atmospheric CO2, it’s Trenberth’s heat hiding in the deep oceans that’s causing all the extra CO2. :-)

    • commieBob,

      A rise of 0.6°C of the ocean surface (or the whole oceans; it doesn’t matter) will increase the CO2 level in the atmosphere by ~10 ppmv, and then it stops, no matter whether there is 100 or 10,000 times more CO2 in the deep oceans than in the atmosphere. The solubility of CO2 in seawater is a matter of pressure and ratio, not of quantities, as long as sufficient CO2 is available.

      Take bottles of 0.5, 1.0 and 1.5 liter Coke from the same batch and shake all three. You will measure the same pressure under the cap at the same temperature, despite the three times larger quantity of CO2 in the largest bottle (allowing a small difference due to the relatively larger loss out of the liquid in the smallest bottle)…

      Currently the partial pressure of CO2 in the atmosphere is higher than that of the oceans: the net CO2 flux is from the atmosphere into the oceans, not the reverse…
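      The numbers behind the “~10 ppmv for 0.6°C” figure can be reproduced assuming the roughly 4.23 %/°C sensitivity of seawater pCO2 to temperature (the commonly used Takahashi relation, equivalent to the ~16 ppmv/°C quoted later in the thread); the starting pCO2 is an assumed round number:

```python
# Seawater pCO2 change for a small warming, assuming the ~4.23 %/degC
# Takahashi sensitivity (an assumption, not a figure from this thread).
import math

pco2 = 390.0      # surface-water pCO2, ppmv (assumed round number)
SENS = 0.0423     # fractional change in pCO2 per degC (assumed)
dT = 0.6          # warming, degC (from the comment)

dpco2 = pco2 * (math.exp(SENS * dT) - 1.0)
print(round(dpco2, 1))   # ~10 ppmv, matching the comment
```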

      • … You will measure the same pressure under the cap at the same temperature …

        … and if you change the temperature the pressure will change, which is my point.

        A close reading of my post should reveal that I was being somewhat Rabelaisian.

        On the other hand, you give an increase of about 10 ppmv. As a gas approaches saturation, Henry’s law ceases to apply. How do you justify your figure?

        Does the volume of water matter? Yes. If I boil a beaker, the atmospheric concentration of CO2 won’t be measurably affected (even if the beaker contained 100% dry ice). You have to have enough CO2 to make a difference, which you acknowledge with: “as long as sufficiently available”.

        Here’s a good pdf.

        The atmosphere controls the oceans gas contents for all gases except radon, CO2 and H2O.

        In other words, the atmosphere does not control the oceans’ gas content for CO2.

      • OOPS – I forgot to close a blockquote.

        The atmosphere controls the oceans gas contents for all gases except radon, CO2 and H2O.

        In other words, the atmosphere does not control the oceans’ gas content for CO2.

      • Ferdinand, I think there are other possible factors that could be contributing, such as biological activity, which is affected by temperature. For example, Baker et al 2013 estimates an increase in atmospheric CO2 of 100 ppmv from photosynthetic activity in certain ocean regions during different times of the year, while pointing out that simple temperature-dependent solubility calculations cannot explain the fluctuations in atmospheric CO2 in those ocean regions. Of course, no one has comprehensive global data on biological activity in the oceans over the last 100 years, so for all we know a significant portion of the increase in CO2 could be biologically driven, which in turn could also be temperature-driven.

      • commieBob,

        The link to the .pdf doesn’t work, but a few remarks:

        – For a small change in temperature (like we have seen over the past few hundred years), there is a quasi-linear change in the pCO2 of the ocean surface waters of about 16 ppmv/°C. That is all. That includes Henry’s law for the solubility of CO2 as a gas in ocean water, which covers only 1% of all carbon species; 90% is bicarbonate and 9% carbonate. It also includes all the equilibrium reactions between free CO2, bicarbonate and carbonate/hydrogen ions following the increase in temperature. See: where the formula used to compensate for the temperature at measurement time vs. the in situ temperature is:
        (pCO2)sw @ Tin-situ = (pCO2)sw @ Teq × exp[0.0423 × (Tin-situ − Teq)]

        – There is no limit to the CO2 that the atmosphere can receive, but there is a limit in the ocean surface, at about 10% of the change in the atmosphere. That is the Revelle factor. For a change in the reverse direction, starting in the ocean surface, the amounts in the surface are too small (~1000 GtC, against ~800 GtC in the atmosphere) to sustain the full 10× amplification, and the new equilibrium is reached before a 10× change in the atmosphere is reached.

        – The amounts of CO2/derivatives in the deep oceans play little role on short timescales, as the exchanges with the deep oceans are limited.

        – As long as the pCO2 (partial pressure of CO2) in the atmosphere is higher than in the ocean surface, the net CO2 flux is from the atmosphere into the oceans, not the reverse, no matter the quantities involved.
        The area-weighted average pCO2 in the atmosphere is 7 μatm (~ppmv) higher than in the ocean surface. See: and following pages and the graphs at: and next page
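        The ~16 ppmv/°C figure follows directly from the quoted compensation formula; a minimal check in Python (the 400 ppmv baseline below is an assumption for illustration):

```python
import math

def pco2_at_temp(pco2_eq, t_insitu, t_eq):
    """(pCO2)sw at the in-situ temperature, from the value at the equilibration
    temperature: (pCO2)sw@Teq * exp[0.0423 * (Tin-situ - Teq)]."""
    return pco2_eq * math.exp(0.0423 * (t_insitu - t_eq))

base = 400.0                                           # ppmv (~ μatm), assumed baseline
warmed = pco2_at_temp(base, t_insitu=16.0, t_eq=15.0)  # +1 degC of warming
print(f"+1 degC raises pCO2 by {warmed - base:.1f} ppmv")  # ~17.3 at 400 ppmv; ~16 at slightly lower baselines
```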

      • Richard,

        Indeed the biological factor is quite variable and heavily influenced by temperature.
        Fortunately that can be monitored as global change, due to the oxygen and δ13C balances. If there is a physical change in CO2 caused by an ocean temperature change, δ13C goes slightly up and CO2 goes slightly up or reverse with temperature. If the change is caused by bio-life, CO2 goes down and δ13C goes firmly up, or reverse with temperature.
        Thus if δ13C and O2 changes parallel each other, then degassing / absorbing oceans are dominant. If δ13C and O2 changes are opposite, then bio-life is dominant (both for land and ocean plants).

        The past 25 years of O2/δ13C monitoring show that bio-life (land + sea) is a small but growing sink for CO2 as temperatures and atmospheric CO2 levels rise. It is thus not the cause of the CO2 increase in the atmosphere, nor of the firm δ13C decline since ~1850, which parallels human emissions.

      • “If there is a physical change in CO2 caused by an ocean temperature change, δ13C goes slightly up and CO2 goes slightly up or reverse with temperature. If the change is caused by bio-life, CO2 goes down and δ13C goes firmly up, or reverse with temperature”

        Sorry Ferdinand, but I cannot make sense of what you are trying to communicate to me here. It all seems rather muddled. Currently it is understood that δ13C is decreasing in the atmosphere and this is squarely blamed on human emissions. However a decrease in δ13C is a logical consequence of increased biological activity, such as photosynthetic activity, as mentioned above. According to Williams et al 2005, based on paleo-climate data: “Delta 13C values were high until 17.79 ka after which there was an abrupt decrease to 17.19 ka followed by a steady decline to a minimum at 10.97 ka. Then followed a general increase, suggesting a drying trend, to 3.23 ka followed by a further general decline. The abrupt decrease in δ-values after 17.79 ka probably corresponds to an increase in atmospheric CO2 concentration, biological activity and wetness at the end of the Last Glaciation”. Hence the current decrease in δ13C could be, in part, due to changes in biological activity.

      • handbook,

        One needs to take into account both the magnitude and the direction of the changes in question.

        If the oceans are warming, CO2 is released at about 16 ppmv/°C, and at the same time the δ13C level (a measure of the 13C/12C ratio in CO2) slightly increases in the atmosphere, because the δ13C of the ocean surface is higher than that of the atmosphere, even including the δ13C shift at the ocean-air boundary.
        At the same time, higher temperatures give more plant growth (less land ice and longer growing seasons). More plant growth means more CO2 uptake (and O2 release), preferentially of 12CO2, so the residual 13CO2 in the atmosphere increases in relative terms. Thus the δ13C level increases with more plant growth, while CO2 levels decrease.

        Over the past 800,000 years, the oceans were dominant for CO2 levels, as can be seen in the parallel CO2 and temperature changes, where CO2 levels follow temperature levels with some lag. As the effect of plant growth on δ13C levels is much larger than that of the oceans, the growing vegetation gives a slight net increase with temperature of a few tenths of a per mil δ13C from the depth of a glacial period to an interglacial, which we are in now.

        During the whole current interglacial, the Holocene, there was some variability of δ13C of not more than +/- 0.2 per mil, mainly as a result of the effect of temperature on vegetation and oceans (MWP-LIA and back).

        Since ~1850, humans have emitted lots of CO2 from fossil fuels with a very low δ13C (around -24 per mil), while vegetation was slowly expanding, taking more 12CO2 out of the air, and is thus not the cause of the firm δ13C drop in the atmosphere. Neither are the oceans, as these would increase the δ13C when releasing more CO2.

        The resulting drop of over 1.4 per mil δ13C is unprecedented over the past 800,000 years in ice cores, coralline sponges or any other δ13C/CO2 proxy.
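        The δ13C dilution argument can be sketched as a simple two-pool mixing calculation. The numbers below (280 ppmv at -6.4 per mil plus ~60 ppmv of retained fossil CO2 at -24 per mil) are illustrative assumptions, and the sketch deliberately ignores the ocean/biosphere exchange that damps the real-world signal:

```python
def mix_d13c(c_atm, d_atm, c_add, d_add):
    """Mass-weighted d13C (per mil) after mixing an addition into the atmosphere."""
    return (c_atm * d_atm + c_add * d_add) / (c_atm + c_add)

# ~280 ppmv at -6.4 per mil plus ~60 ppmv of fossil CO2 at -24 per mil:
print(round(mix_d13c(280.0, -6.4, 60.0, -24.0), 2))  # a firm drop from -6.4
```

        The undamped mixture lands well below the observed atmospheric δ13C, which is consistent with isotopic exchange with the oceans and biosphere absorbing part of the fossil signature.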

    • The oceans are not a bottle of Coke. They are vast and flowing, and your conceptualization is facile.

      • Bart,

        As repeatedly shown to you, it doesn’t matter whether you take a closed sample of seawater and wait for it to reach equilibrium with the atmosphere above it, or look at the enormous amounts of CO2 flowing in and out between the atmosphere and ocean surface at steady state: for the same area-weighted ocean surface temperature, the same CO2 level in the atmosphere will be measured over the single sample as over the global oceans.
        There is no way you can get a different result, as any deviation simply changes the input and output fluxes until the steady state is reached again…

      • Nonsense. You mean, as repeatedly claimed by you, with no empirical evidence nor, indeed, any scientific rigor at all.

        It is absurd. Of course the thermohaline circulation is temperature dependent. It’s built right into the name.

        If conditions remain the same, a steady state would eventually be reached, but only on a time scale commensurate with an overturning of hundreds of years. In the meantime, the evolution of the associated processes will tend to have integral relationships. And that is what the empirical evidence shows: atmospheric CO2 evolves as the integral of the temperature anomaly. There is no doubt about it.

      • Bart,

        You use any (im)possible scapegoat to defend your theory, no matter how ridiculous it is, no matter that you have zero evidence for what you say and ignore all evidence to the contrary.

        Take an increased temperature of the ocean surface: if the increase is 1°C over all the ocean surface, everywhere, including the upwelling and sink places, that will increase the local pCO2 of the oceans everywhere by ~16 μatm.
        At the upwelling sites, that gives a (~5%) increase in CO2 emissions, as the efflux is in direct ratio to the pCO2 difference between ocean surface and atmosphere.
        At the sink sites, that gives a (~5%) decrease in CO2 uptake, as the influx is in direct ratio to the pCO2 difference between atmosphere and ocean surface.
        Both give an increase of CO2 in the atmosphere.
        The increasing pCO2 in the atmosphere then decreases the CO2 emissions at the upwelling sites and increases the uptake at the sink sites, because the pCO2 differences change in the opposite direction to the temperature effect.
        At ~16 ppmv extra in the atmosphere, the original in/out pCO2 differences, and thus the fluxes, are restored to what they were before the temperature increase, no matter whether that was a steady state or not.

        That means that 1°C warming all over the oceans has exactly the same effect on the CO2 levels in the atmosphere above it as 1°C warming of a sample of seawater in a bottle.

        That is a matter of the simplest process dynamics; maybe the problem is that it is too simple for you…
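        The restoring argument above can be sketched as a two-flux relaxation, with purely illustrative pCO2 numbers (750/250 μatm at the upwelling/sink sites, rate constant chosen only for convergence) and fluxes proportional to the local pCO2 differences:

```python
def equilibrate(atm, up_ocean, sink_ocean, k=0.01, steps=20000):
    """Relax the atmospheric pCO2 until the upwelling source flux and the
    polar sink flux (each proportional to its local pCO2 difference) balance."""
    for _ in range(steps):
        efflux = k * (up_ocean - atm)    # ocean -> air at the upwelling sites
        influx = k * (atm - sink_ocean)  # air -> ocean at the sink sites
        atm += efflux - influx
    return atm

atm0 = equilibrate(400.0, up_ocean=750.0, sink_ocean=250.0)   # baseline
atm1 = equilibrate(400.0, up_ocean=766.0, sink_ocean=266.0)   # +1 degC: +16 uatm everywhere
print(f"atmosphere settles {atm1 - atm0:.1f} ppmv higher")    # ~16 ppmv
```

        Shifting both ocean-side pCO2 values up by 16 μatm shifts the atmospheric steady state up by exactly 16, with the original flux differences restored, which is the claim in the comment.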

    • commieBob, Trenberth’s ocean warming raises the question: if the oceans are warming faster than ever (even without a rise in sea surface temperatures), then shouldn’t the oceans be outgassing faster than ever?

  34. The only problem I see with the model is that it doesn’t include any newly discovered reserves along the way. We would have to estimate the rate of growth of reserves, and then the economic activity that would burn those reserves. That might push the curve up to 800 ppm even using the Bern model.

    • So the effective sameness of temperatures at given pressures, accounting only for solar distance, further suggests that the gas species is irrelevant, as per the gas laws. Radiative forcing exerts no effect. Zero. Nil. Nada.
      Of all those working on this, my money remains on Salby. There is a real body of work.

      • Or, you’ve made several errors. Given the ones I’ve seen, such as falling for the ridiculous pseudo-mass balance argument and imagining the oceans as a great big bottle of Coke, I know where I’d invest my wager.

      • Bart,

        Did you already find your oceanic source of piling up CO2, which propagates back from sink to source?

        Or have you found any proof that the natural carbon cycle increased fourfold, to dwarf the fourfold increase in human emissions and the resulting increase in the atmosphere and thus in the net sink rate?

        For some people here, the Coke bottle is a good example that quantities matter less than pressure and temperature, no matter whether the situation is static, as in the example, or dynamic, as over the oceans…

  35. Willis, did you consider how much the fuel reserves themselves increase? Take a look at historical estimates. What if, in 2050, the announced available fuel reserves are higher than they are now, despite our having used oil and coal for another 34 years?

  36. Fossil fuel reserves are actually very much a moving target, because what counts as “recoverable” depends on the cost to recover a given deposit and the price that the resource will bring. As fossil fuel prices rise, the amount of recoverable reserves goes up, and as prices decline, reserves go down (as deposits formerly recoverable are priced out of the market).

    Further, the amount of exploration is also economically limited. If reserves are low, companies spend more on exploration. But it simply makes no sense to spend huge amounts on exploration when the current reserve level is adequate several decades into the future. Like all things, there comes a point at which it makes no economic sense.

    Thus the amount of reserves is more of an economic issue than a geological one. As current reserves are depleted, exploration will ramp up and reserves will increase. If you look at historical reserve levels for oil and gas, you see this clearly: the world had 30 years of oil reserves in 1980, and 30 years later, in 2010, it had 50 years of reserves. That number will no doubt decline this year because of the oil price drop, but we’re still in no danger of running out any time soon.

    Coal is even less limited, because coal reserves are already adequate for centuries rather than decades at current use rates; thus nobody explores for coal any more. But if we really did get anywhere near burning our existing 900 Gt of coal, you can bet exploration would begin again and reserves would rise.

  37. Dear Willis E.,

    I disagree with the usage of a time constant tau of 33 years, as mentioned around Figure 2.

    In, you showed a determination of the time constant tau as 59 years (IIRC), which, multiplied by 0.693 (the natural log of 2), means a half-life of 41 years.

    Lately, Ari Halperin has posted at WUWT arguing in favor of single exponential decay as opposed to Bern, along with a shorter half-life (which I consider pushy-short) of 30-35 years. Divide that by 0.693 and the time constant tau is 43-50.5 years.
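    The tau/half-life conversions used here are just t½ = τ × ln 2; a one-liner check:

```python
import math

def half_life(tau):
    """Half-life from the e-folding time constant: t_half = tau * ln 2."""
    return tau * math.log(2)

def tau_from_half_life(t_half):
    """Time constant from the half-life: tau = t_half / ln 2."""
    return t_half / math.log(2)

print(round(half_life(59), 1))          # 59-year tau -> ~40.9-year half-life
print(round(tau_from_half_life(30), 1), # 30- and 35-year half-lives ->
      round(tau_from_half_life(35), 1)) # tau of ~43.3 and ~50.5 years
```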

    Also, I noticed a graph in a recent post by Ari Halperin showing CO2 fitting closely with what Ari Halperin models, but with a slight difference in favor of an accelerating characteristic of CO2 growth.

    I think that, with consideration of this and of ingenuity in finding and extracting fossil fuels, we are in for about 700 ppmv CO2: not quite a doubling from slightly over 400 ppmv, but about 80% of a doubling on a log scale,
    and about 1.3-1.32 doublings on a log scale from the 280-285 ppmv CO2 we would have if not for the human impact on CO2 levels.
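    The doubling fractions can be checked with base-2 logarithms (the 405 ppmv below is an assumed “slightly over 400” value):

```python
import math

def doublings(c_final, c_start):
    """Number of CO2 doublings on a log scale between two concentrations."""
    return math.log2(c_final / c_start)

print(round(doublings(700, 405), 2))    # ~0.79 of a doubling from today
print(round(doublings(700, 282.5), 2))  # ~1.31 doublings from 280-285 ppmv
```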

  38. Of note are the large amounts of CO2 in the subsurface around mid-ocean ridges and other underwater volcanoes, which seem largely missing from Figure 6.1. The 1,750 PgC quoted for ocean-floor sediments likely underestimates this by a large margin, although how much of it varies with climate change or other factors may not be much.

  39. Something has always bothered me about CO2. Back in the 1970s I was taught to expect that most CO2 sequestration occurred in the oceans, carried out by plankton, and that it was rate-limited only by the amount of CO2 available. Plants on land are respirators, reversing photosynthesis at night; in the big scheme of things they can’t account for much. Then two things got into my head: the two 1950s-era papers proposing that dissolution in the ocean is rate-limited, and the idea that the ocean is nutrient-limited. So I let it go, there being no other apparent explanation for the build-up of CO2 but for Man’s output exceeding the rate limit. But there is always the question of the relative amount of natural variation vs. the man-made contribution.

    My question: is there a good period of record for a global upwelling index? It seems to me that upwelling should vary with ocean-current and atmospheric-circulation oscillations, and that a significant increase in upwelling would probably result in big releases of CO2 to the atmosphere as the water warms. Do we assume, like everything else the AGW crowd does, that global upwelling and ocean-atmosphere CO2 exchange are constant? Or do we assume the Earth is a dynamic system that is out of equilibrium as we continue to move away from the Pleistocene and the climate warms? Is CO2 sequestered in the ocean during a colder time only now finding its way to the atmosphere due to poor mixing? Beats the heck out of me, and I think I had an excellent education in the earth sciences in the 1970s.
