Guest Post by Willis Eschenbach [see update at the end of the head post]
I first got introduced to the idea of “half-life” in the 1950s because the topic of the day was nuclear fallout. We practiced hiding under our school desks if the bomb went off, and talked about how long you’d have to stay underground to be safe, and somehow in all of that we never pondered exactly how a cheap sheet-metal and plywood desk would ward off an atomic bomb … a simpler time indeed. But I digress. Half-life, as many people know, is how long it takes for a given starting amount of some radioactive substance to decay until only half of the starting amount remains. For example, the half-life of radioactive caesium-137 is about thirty years. This means if you start with a gram of radioactive caesium, in thirty years you’ll only have half a gram. In thirty more years you’ll have a quarter of a gram. And thirty years after that there will only be an eighth of a gram of caesium remaining, and so on ad infinitum.
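The halving arithmetic above is easy to check numerically. Here's a quick Python sketch (the constants are just the ones from the text):

```python
# Half-life arithmetic for caesium-137, which has a half-life of about 30 years.
def remaining(initial_grams, years, half_life=30.0):
    """Amount left after `years` of exponential decay with the given half-life."""
    return initial_grams * 0.5 ** (years / half_life)

print(remaining(1.0, 30))   # half a gram after one half-life
print(remaining(1.0, 60))   # a quarter gram after two
print(remaining(1.0, 90))   # an eighth of a gram after three
```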
This is a physical example of a common type of natural decay called “exponential decay”. The hallmark of exponential decay is that every time period, the decay is a certain percentage of what remains at that time. Exponential decay also describes what happens when a system which is at some kind of equilibrium is disturbed from that equilibrium. The system doesn’t return to equilibrium all at once. Instead, each year it moves a certain percentage of the remaining distance to equilibrium. Figure 1 shows the exponential decay after a single disturbance at time zero, as the disturbance is slowly decaying back to the pre-pulse value.
Figure 1. An example of a hypothetical exponential decay of a system at equilibrium from a single pulse of amplitude 1 at time zero. Each year it moves a certain percentage of the distance to the equilibrium value. The “half-life” and the time constant “tau” are two different ways of measuring the same thing, which is the decay rate. Half-life is the time to decay to half the original value. The time constant “tau” is the time to decay to 37% of the original value. Tau is also known as the “e-folding time”.
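The two decay measures in the caption are related by a simple constant: tau = half-life / ln 2, and after one tau the remaining fraction is 1/e, which is where the “37%” comes from. A short check, using the caesium-137 half-life from the text:

```python
import math

half_life = 30.0                 # years (caesium-137, from the text)
tau = half_life / math.log(2)    # e-folding time: tau = t_half / ln(2)
print(tau)                       # about 43.3 years
print(math.exp(-1))              # fraction left after one tau: ~0.368, the "37%"
```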
Note that the driving impulse in Figure 1 is a single unit pulse, and in response we see a steady decay back to equilibrium. That is to say, the shape of the driving impulse is very different from the shape of the response.
Let’s consider a slightly more complex case. This is where we have an additional pulse of 1.0 units each succeeding year. That case is shown in Figure 2.
Figure 2. An example of a hypothetical exponential decay from constant annual pulses of amplitude 1. The pulses start at time zero and continue indefinitely.
Now, this is interesting. In the beginning, the exponential decay is not all that large, because the disturbance isn’t that large. But when we add an additional identical pulse each year, the disturbance grows.
But when the disturbance grows, the size of the annual decay grows as well. As a result, eventually the disturbance levels off. After a while, although we’re adding a one-unit pulse per year, the loss due to exponential decay is also one pulse per year, so there is no further increase.
The impulse in Figure 2 is a steady addition of 1 unit per year. So once again, the shape of the driving impulse is very different from the shape of the exponentially decaying response.
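The leveling-off in Figure 2 is easy to reproduce. Here's a toy Python sketch: each year the existing disturbance decays by a constant factor and a fresh one-unit pulse is added. The plateau falls out of the algebra as 1/(1 − r), where r is the surviving fraction per year. (The tau of 10 years is an arbitrary choice for illustration, not a fitted value.)

```python
import math

# Toy version of Figure 2: a 1-unit pulse each year into a system that
# decays exponentially, with an illustrative time constant of 10 years.
tau = 10.0
r = math.exp(-1.0 / tau)   # fraction of the disturbance surviving each year

x = 0.0
levels = []
for year in range(300):
    x = x * r + 1.0        # decay the existing disturbance, then add this year's pulse
    levels.append(x)

print(levels[0], levels[-1])   # starts at 1, then levels off
print(1.0 / (1.0 - r))         # the plateau, where annual input balances annual decay
```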
With that as prologue, we can look at the relationship between fossil fuel emissions and the resulting increase in airborne CO2. It is generally accepted that the injection of a pulse of e.g. volcanic gases into the planetary atmosphere is followed by an exponential decay of the temporarily increased volcanic gas levels back to some pre-existing equilibrium. We know that this exponential decay of an injected gas pulse is a real phenomenon, because if that decay didn’t happen, we’d all be choked to death from accumulated volcanic gases.
Knowing this, we can use an exponential decay analysis of the fossil fuel emissions data to estimate the CO2 levels that would result from those same emissions. Figure 3 shows theoretical and observed increases in various CO2 levels.
Figure 3. Theoretical and observed CO2 changes, in parts per million by volume (ppmv). The theoretical total CO2 from emissions (blue line) is what we’d have if there were no exponential decay and all emissions remained airborne. The red line is the observed change in airborne CO2. The amount that is sequestered by various CO2 sinks (violet) is calculated as the total amount put into the air (blue line) minus the observed amount remaining in the air (red line). The black line is the expected change in airborne CO2, calculated as the exponential decay of the total CO2 injected into the atmosphere. The calculation used best-fit values of 59 years as the time constant (tau) and 283 ppmv as the pre-industrial equilibrium level.
The first thing to notice is that the total amount of CO2 from fossil fuel emissions is much larger than the amount that remains in the atmosphere. The clear inference of this is that various natural sequestration processes have absorbed some but not all of the fossil fuel emissions. Also, the percentage of emissions that are naturally sequestered has remained constant since 1959. About 42% of the amount that is emitted is “sequestered”, that is to say removed from the atmosphere by natural carbon sinks.
Next, as you can see, using an exponential decay analysis gives us an extremely good fit between the theoretical and the observed increase in atmospheric CO2. In fact, the fit is so good that most of the time you can’t even see the red line (observed CO2) under the black line (calculated CO2).
Before I move on, please note that the amount remaining in the atmosphere is not a function of the annual emissions. Instead, it is a function of the total emissions, i.e. it is a function of the running sum of the annual emissions starting at t=0 (blue line).
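For concreteness, here's a minimal Python sketch of the kind of calculation behind the black line in Figure 3: treat every year's emissions as a pulse that then decays exponentially toward the pre-industrial equilibrium. The tau of 59 years, the 283 ppmv equilibrium, and the 2.13 GtC-per-ppmv conversion are the values from the text; the emissions series below is a made-up placeholder, not the real data (which is in the linked zip file, in R).

```python
import math

TAU = 59.0           # best-fit time constant from Figure 3 (years)
EQ = 283.0           # pre-industrial equilibrium level (ppmv), from Figure 3
GTC_PER_PPMV = 2.13  # conversion factor used in the post

def expected_co2(annual_emissions_gtc, tau=TAU, eq=EQ):
    """Expected airborne CO2 (ppmv) year by year: every year's emissions are
    a pulse whose excess over `eq` then decays exponentially."""
    r = math.exp(-1.0 / tau)
    excess = 0.0
    out = []
    for e in annual_emissions_gtc:
        excess = excess * r + e / GTC_PER_PPMV  # decay past excess, add this year's pulse
        out.append(eq + excess)
    return out

# Placeholder emissions series (GtC/yr) purely for illustration -- NOT the real record:
demo = [5.0, 5.2, 5.4, 5.6, 5.8]
print(expected_co2(demo))
```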
Now, I got into all of this because against my better judgment I started to watch Dr. Salby’s video that was discussed on WUWT here. The very first argument that Dr. Salby makes involves the following two graphs:
Figure 4. Dr. Salby’s first figure, showing the annual global emissions of carbon in gigatonnes per year.
Figure 5. Dr. Salby’s second figure, showing the observed level of CO2 at Mauna Loa.
Note that according to his numbers the trend in emissions increased after 2002, but the CO2 trend is identical before and after 2002. Dr. Salby thinks this difference is very important.
At approximately four minutes into the video, Dr. Salby comments on this difference with heavy sarcasm, saying:
The growth of fossil fuel emission increased by a factor of 300% … the growth of CO2 didn’t blink. How could this be? Say it ain’t so!
OK, I’ll step up to the plate and say it. It ain’t so, at least it’s not the way Dr. Salby thinks it is, for a few reasons.
First, note that he is comparing the wrong things. Observed CO2 is NOT a function of annual CO2 emissions. It is a function of total emissions, as discussed above and shown in Figure 3. The total amount remaining in the atmosphere at any time is a function of the total amount emitted up to that time. It is NOT a function of the individual annual emissions. So we would not expect the two graphs to have the same shape or the same trends.
Next, we can verify that he is looking at the wrong things by comparing the units used in the two graphics. Consider Figure 4, which has units of gigatonnes of carbon per year. Gigatonnes of carbon (GtC) emitted, and changes in airborne CO2 (parts per million by volume, “ppmv”), are related by the conversion factor of:
2.13 Gigatonnes carbon emitted = 1 ppmv CO2
This means that the units in Figure 4 can be converted from gigatonnes C per year to ppmv per year by simply dividing them by 2.13. So Figure 4 shows ppmv per year. But the units in Figure 5 are NOT the ppmv per year used in Figure 4. Instead, Figure 5 uses simple ppmv. Dr. Salby is not comparing like with like. He’s comparing ppmv of CO2 per year to plain old ppmv of CO2, and that is a meaningless comparison.
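The conversion itself is a one-liner, and writing it out makes the units point obvious: dividing a flow (GtC per year) by 2.13 gives you another flow (ppmv per year), never a concentration.

```python
GTC_PER_PPMV = 2.13  # conversion factor stated in the text

def gtc_to_ppmv(gigatonnes_carbon):
    """Convert a mass of emitted carbon (GtC) to the equivalent ppmv of CO2."""
    return gigatonnes_carbon / GTC_PER_PPMV

# An annual emission of 10 GtC/yr is about 4.7 ppmv/yr -- still a rate,
# not a concentration, which is the apples-and-oranges point above.
print(gtc_to_ppmv(10.0))
```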
He is looking at apples and oranges, and he waxes sarcastic about how other scientists haven’t paid attention to the fact that the two fruits are different … they are different because there is no reason to expect that apples and oranges would be the same. In fact, as Figure 3 shows, the observed CO2 has tracked the total human emissions very, very accurately. In particular, it shows that we do not expect a large trend change in observed CO2 around the year 2000 such as Dr. Salby expects, despite the fact that such a trend change exists in the annual emission data. Instead, the change is reflected in a gradual increase in the trend of the observed (and calculated) CO2 … and the observations are extremely well matched by the calculated values.
The final thing that’s wrong with his charts is that he’s looking at different time periods in his trend comparisons. For the emissions, he’s calculated the trends 1990-2002, and compared that to 2002-2013. But regarding the CO2 levels, he’s calculated the trends over entirely different periods, 1995-2002 and 2002-2014. Bad scientist, no cookies. You can’t pick two different periods to compare like that.
In summary? Well, the summary is short … Dr. Salby appears to not understand the relationship between fossil fuel carbon emissions and CO2.
That would be bad enough, but from there it just gets worse. Starting at about 31 minutes into the video Dr. Salby makes much of the fact that the 14C (“carbon-14”) isotope produced by the atomic bomb tests decayed exponentially (agreeing with what I discussed above) with a fairly short time constant tau of about nine years or so.
Figure 6. Dr. Salby demonstrates that the airborne residence time constant tau for CO2 is around 8.6 years. “NTBT” is the Nuclear Test Ban Treaty.
Regarding this graph, Dr. Salby says that it is a result of exponential decay. He goes on to say that “Exponential decay means that the decay of CO2 is proportional to the abundance of CO2,” and I can only agree.
So far so good … but then Dr. Salby does something astounding. He plots the 14C airborne residence time data on the same graph as the “Bern Model” of CO2 pulse decay, says that they both show “Absorption of CO2”, and claims that the 14C isotope data definitively shows that the Bern model is wrong …
Figure 7. Dr. Salby’s figure showing both the “Bern Model” of the decay of a pulse of CO2 (violet line), along with the same data shown in Figure 6 for the airborne residence time of CO2 (blue line, green data points).
To reiterate, Dr. Salby says that the 14C bomb test (blue line identified as “Real World”) clearly shows that the Bern Model is wrong (violet line identified as “Model World”).
But as before, in Figure 7 Dr. Salby is again comparing apples and oranges. The 14C bomb test data (blue line) shows how long an individual CO2 molecule stays in the air. Note that this is a steady-state process, with individual CO2 molecules constantly being emitted from somewhere, staying airborne in the atmosphere with a time constant tau of around 8 years, and then being re-absorbed somewhere else in the carbon cycle. This is called the “airborne residence time” of CO2. It is the time an average CO2 molecule stays aloft before being re-absorbed.
But the airborne residence time (blue line) is very, very different from what the Bern Model (violet line) is estimating. The Bern Model is estimating how long it takes an entire pulse of additional CO2 to decay back to equilibrium concentration levels. This is NOT how long a CO2 molecule stays aloft. Instead, the Bern Model is estimating how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions. Let me summarize:
Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.
Pulse decay time (Bern Model): how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions.
So again Dr. Salby is conflating two very different measurements—airborne residence time on the one hand (blue line), and CO2 post-pulse concentration decay time on the other hand (violet line). It is meaningless to display them on the same graph. The 14C bomb test data neither supports nor falsifies the Bern Model. The 14C data says nothing about the Bern Model, because they are measuring entirely different things.
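The difference between the two quantities is stark when you put numbers on it. Below is a Python sketch comparing a simple residence-time decay at tau = 8.6 years (the value in Figure 6) with one published parameterization of the Bern pulse-response function. The Bern coefficients are the ones reported in the IPCC AR4 for its GWP calculations; I'm quoting them from memory here, so treat them as an assumption and check the original source before relying on them.

```python
import math

def label_fraction(t, tau=8.6):
    """Fraction of an isotopic label still airborne after t years
    (residence-time decay, tau from Figure 6)."""
    return math.exp(-t / tau)

def bern_pulse_fraction(t):
    """Fraction of a CO2 *concentration pulse* still airborne after t years,
    per one published Bern-model parameterization (coefficients assumed,
    as reported in IPCC AR4)."""
    return (0.217
            + 0.259 * math.exp(-t / 172.9)
            + 0.338 * math.exp(-t / 18.51)
            + 0.186 * math.exp(-t / 1.186))

for t in (0, 10, 50):
    print(t, round(label_fraction(t), 3), round(bern_pulse_fraction(t), 3))
```

After ten years the isotope label is down to roughly a third, while the Bern model still has well over half the concentration pulse airborne. Both curves can be right at the same time, because they answer different questions.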
I was going to force myself to watch more of the video of his talk. But when I got that far into Dr. Salby’s video, I simply couldn’t continue. His opening move is to compare ppmv per year to plain ppmv, and get all snarky about how he’s the only one noticing that they are different. He follows that up by not knowing the difference between airborne residence time and pulse decay time.
Sorry, but after all of that good fun I’m not much interested in his other claims. Sadly, Dr. Salby has proven to me that regarding this particular subject he doesn’t understand what he’s talking about. I do know he wrote a text on Atmospheric Physics, so he’s nobody’s fool … but in this case he’s way over his head.
Best regards to each of you on this fine spring evening,
w.
For Clarity: If you disagree with something, please quote the exact words you disagree with. That will allow everyone to understand the exact nature of your disagreement.
Math Note: The theoretical total CO2 from emissions is calculated using the relationship 1 ppmv = 2.13 gigatonnes of carbon emitted.
Also, we only have observational data on CO2 concentrations since 1959. This means that the time constant calculated in Figure 3 is by no means definitive. It also means that the data is too short to reliably distinguish between e.g. the Bern Model (a fat-tailed exponential decay) and the simple single exponential decay model I used in Figure 3.
Data and Code: I’ve put the R code and functions, the NOAA Monthly CO2 data (.CSV), and the annual fossil fuel carbon emissions data (.TXT) in a small zipped folder entitled “Salby Analysis Folder” (20 kb)
[Update]: Some commenters have said that I should have looked at an alternate measure. They said instead of looking at atmospheric CO2 versus the cumulative sum of annual emissions, I should show annual change in atmospheric CO2 versus annual emissions. We are nothing if not a full service website, so here is that Figure.
As you can see, this shows that it is a noisy system. Despite that, however, there is a reasonably good and strongly statistically significant correlation between emissions and the annual change in atmospheric CO2. I note also that this method gives about the same numbers for the airborne fraction that I got from my analysis upthread.
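For anyone who wants to try this alternate check themselves, here's a minimal Python sketch of the method: regress the annual change in CO2 (ppmv/yr) on annual emissions (also in ppmv/yr), and read the airborne fraction off the slope. The data below is synthetic, generated with a known airborne fraction near the ~58% figure from the post, purely to show that the regression recovers it; the real analysis uses the NOAA and emissions data in the linked zip file.

```python
import random

# Synthetic stand-in data -- NOT the real record. Generated with a known
# airborne fraction so we can see the regression recover it through the noise.
random.seed(0)
airborne_fraction = 0.58                                # ~58% stays airborne, per the post
emissions_ppmv = [3.0 + 0.05 * i for i in range(50)]    # made-up rising emissions series
delta_co2 = [airborne_fraction * e + random.gauss(0, 0.3) for e in emissions_ppmv]

# Ordinary least-squares slope, computed by hand (no libraries needed):
n = len(emissions_ppmv)
mx = sum(emissions_ppmv) / n
my = sum(delta_co2) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(emissions_ppmv, delta_co2))
         / sum((x - mx) ** 2 for x in emissions_ppmv))
print(round(slope, 2))   # roughly recovers the airborne fraction
```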
w.
I agree with Ferdinand that a reduction of CO2’s solubility from the warming oceans couldn’t account for the assumed 120 ppmv rise. Ferdinand says 8 ppmv/C but I think that’s a slight underestimation; assuming the entire oceans warmed by 1C down to 4000m, the increase according to Henry’s law would be closer to 20 ppmv. I think changes in ocean eutrophication as a result of temperature changes are a big unknown and may account for some of the increase, and I agree with a lot of what Bart says.
Richard,
Only the surface temperature is important for the (steady state) equilibrium between oceans and atmosphere. The deep(er) oceans are colder anyway and don’t emit much more CO2 when reaching the surface; only when those waters warm up to the temperature of the rest of the ocean surface is the maximum emission rate reached, regardless of what the temperature was in the deeper layers.
Once the steady state was reached (at about 290 ppmv for the current average ocean surface temperature), all excess CO2 pressure (~110 ppmv nowadays) in the atmosphere will push more CO2 into the oceans, which is what is measured:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml
and following pages…
“Only the surface temperature is important for the (steady state) equilibrium between oceans and atmosphere. “
Completely and utterly wrong. This is not a static system. The surface oceans today are not the surface oceans of yesterday, or of tomorrow. They are always in flux.
Bart,
It was a response to Richard about a change in temperature of the deep(er) oceans, that plays no role in the equilibrium: only the surface temperature does.
Of course, if the deep ocean upwelling increased in either total water flux or CO2 concentration or both, that would give a change in equilibrium, but that effect amounts to at most half of the extra CO2 influx, because with increased CO2 pressure in the atmosphere the influx is suppressed and the outflux is increased.